For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability
1 INTRODUCTION.

Statistical learning theory studies the learning properties of machine learning algorithms and, more fundamentally, the conditions under which learning from finite data is possible. In this context, classical learning theory focuses on the size of the hypothesis space in terms of different complexity measures, such as combinatorial dimensions, covering numbers and Rademacher/Gaussian complexities (Shalev-Shwartz & Ben-David, 2014; Boucheron et al., 2005). Another, more recent approach is based on defining suitable notions of stability with respect to perturbation of the data (Bousquet & Elisseeff, 2001; Kutin & Niyogi, 2002). In this view, the continuity of the process that maps data to estimators is crucial, rather than the complexity of the hypothesis space. Different notions of stability can be considered, depending on the data perturbation and metric considered (Kutin & Niyogi, 2002). Interestingly, the stability and complexity approaches to characterizing the learnability of problems are not at odds with each other, and can be shown to be equivalent, as shown in Poggio et al. (2004) and Shalev-Shwartz et al. (2010).

In modern machine learning, overparameterized models, with a larger number of parameters than the size of the training data, have become common. The ability of these models to generalize is well explained by classical statistical learning theory as long as some form of regularization is used in the training process (Bühlmann & Van De Geer, 2011; Steinwart & Christmann, 2008). However, it was recently shown, first for deep networks (Zhang et al., 2017) and more recently for kernel methods (Belkin et al., 2019), that learning is possible in the absence of regularization, i.e., when perfectly fitting/interpolating the data. Much recent work in statistical learning theory has tried to find theoretical grounds for this empirical finding. Since learning using models that interpolate is not exclusive to deep neural networks, we study generalization in the presence of interpolation in the case of kernel methods. We study both linear and kernel least squares problems in this paper.

Our Contributions:
• We characterize the generalization properties of interpolating solutions for linear and kernel least squares problems using a stability approach. While the (uniform) stability properties of regularized kernel methods are well known (Bousquet & Elisseeff, 2001), we study interpolating solutions of the unregularized ("ridgeless") regression problems.
• We obtain an upper bound on the stability of interpolating solutions, and show that this upper bound is minimized by the minimum norm interpolating solution. This also means that among all interpolating solutions, the minimum norm solution has the best test error. In particular, the same conclusion also holds for gradient descent, since it converges to the minimum norm solution in the setting we consider; see e.g. Rosasco & Villa (2015).
• Our stability bounds show that the average stability of the minimum norm solution is controlled by the condition number of the empirical kernel matrix. It is well known that the numerical stability of the least squares solution is governed by the condition number of the associated kernel matrix (see the discussion of why overparametrization is "good" in Poggio et al. (2019)). Our results show that the condition number also controls stability (and hence test error) in a statistical sense.
Organization: In Section 2, we introduce basic ideas in statistical learning and empirical risk minimization, as well as the notation used in the rest of the paper. In Section 3, we briefly recall some definitions of stability. In Section 4, we study the stability of interpolating solutions to kernel least squares and show that the minimum norm solutions minimize an upper bound on the stability. In Section 5 we discuss our results in the context of recent work on high dimensional regression. We conclude in Section 6.

2 STATISTICAL LEARNING AND EMPIRICAL RISK MINIMIZATION.

We begin by recalling the basic ideas in statistical learning theory. In this setting, X is the space of features, Y is the space of targets or labels, and there is an unknown probability distribution µ on the product space Z = X × Y. In the following, we consider X = R^d and Y = R. The distribution µ is fixed but unknown, and we are given a training set S consisting of n samples (thus |S| = n) drawn i.i.d. from the probability distribution on Z^n, S = (z_i)_{i=1}^n = (x_i, y_i)_{i=1}^n. Intuitively, the goal of supervised learning is to use the training set S to "learn" a function f_S that, evaluated at a new value x_new, predicts the associated value y_new, i.e., y_new ≈ f_S(x_new). The loss is a function V: F × Z → [0, ∞), where F is the space of measurable functions from X to Y, that measures how well a function performs on a data point. We define a hypothesis space H ⊆ F where algorithms search for solutions. With the above notation, the expected risk of f is defined as I[f] = E_z V(f, z), which is the expected loss on a new sample drawn according to the data distribution µ. In this setting, statistical learning can be seen as the problem of finding an approximate minimizer of the expected risk given a training set S. A classical approach to derive an approximate solution is empirical risk minimization (ERM), where we minimize the empirical risk I_S[f] = (1/n) ∑_{i=1}^n V(f, z_i). A natural error measure for our ERM solution f_S is the expected excess risk E_S[I[f_S] − min_{f∈H} I[f]]. Another common error measure is the expected generalization error/gap, given by E_S[I[f_S] − I_S[f_S]]. These two error measures are closely related, since the expected excess risk is easily bounded by the expected generalization error (see Lemma 5).

2.1 KERNEL LEAST SQUARES AND MINIMUM NORM SOLUTION.

The focus in this paper is on the kernel least squares problem. We assume the loss function V is the square loss, that is, V(f, z) = (y − f(x))^2. The hypothesis space is assumed to be a reproducing kernel Hilbert space, defined by a positive definite kernel K: X × X → R or an associated feature map Φ: X → H, such that K(x, x′) = ⟨Φ(x), Φ(x′)⟩_H for all x, x′ ∈ X, where ⟨·, ·⟩_H is the inner product in H. In this setting, functions are linearly parameterized, that is, there exists w ∈ H such that f(x) = ⟨w, Φ(x)⟩_H for all x ∈ X. The ERM problem typically has multiple solutions, one of which is the minimum norm solution:

f†_S = arg min_{f∈M} ‖f‖_H,  M = arg min_{f∈H} (1/n) ∑_{i=1}^n (f(x_i) − y_i)^2.  (1)

Here ‖·‖_H is the norm on H induced by the inner product. The minimum norm solution can be shown to be unique and to satisfy a representer theorem, that is, for all x ∈ X:

f†_S(x) = ∑_{i=1}^n K(x, x_i) c_S[i],  c_S = K† y,  (2)

where c_S = (c_S[1], ..., c_S[n]), y = (y_1, ..., y_n) ∈ R^n, K is the n × n matrix with entries K_ij = K(x_i, x_j), i, j = 1, ..., n, and K† is the Moore-Penrose pseudoinverse of K. If we assume n ≤ d and that we have n linearly independent data features, that is, the rank of X is n, then it is possible to show that for many kernels one can replace K† by K^{-1} (see Remark 2). Note that invertibility is necessary and sufficient for interpolation. That is, if K is invertible, f†_S(x_i) = y_i for all i = 1, ..., n, in which case the training error in (1) is zero.

Remark 1 (Pseudoinverse for underdetermined linear systems). A simple yet relevant example is given by linear functions f(x) = w^T x, which correspond to H = R^d and Φ the identity map. If the rank of X ∈ R^{d×n} is n, then any interpolating solution w_S satisfies w_S^T x_i = y_i for all i = 1, ..., n, and the minimum norm solution, also called the Moore-Penrose solution, is given by (w†_S)^T = y^T X†, where the pseudoinverse X† takes the form X† = X^T (XX^T)^{-1}.

Remark 2 (Invertibility of translation invariant kernels). Translation invariant kernels are a family of kernel functions given by K(x_1, x_2) = k(x_1 − x_2), where k is an even function on R^d. Translation invariant kernels are Mercer kernels (positive semidefinite) if the Fourier transform of k(·) is non-negative. For radial basis function kernels (K(x_1, x_2) = k(||x_1 − x_2||)) we have the additional property, due to Theorem 2.3 of Micchelli (1986), that for distinct points x_1, x_2, ..., x_n ∈ R^d the kernel matrix K is non-singular and thus invertible.

The above discussion is directly related to regularization approaches.

Remark 3 (Stability and Tikhonov regularization). Tikhonov regularization is used to prevent potentially unstable behaviors. In the above setting, it corresponds to replacing Problem (1) by min_{f∈H} (1/n) ∑_{i=1}^n (f(x_i) − y_i)^2 + λ‖f‖²_H, where the corresponding unique solution is given by f^λ_S(x) = ∑_{i=1}^n K(x, x_i) c[i], c = (K + λI_n)^{-1} y. In contrast to ERM solutions, the above approach prevents interpolation. The properties of the corresponding estimator are well known. In this paper, we complement these results, focusing on the case λ → 0.

Finally, we end by recalling the connection between the minimum norm solution and gradient descent.

Remark 4 (Minimum norm and gradient descent). In our setting, it is well known that both batch and stochastic gradient iterations converge exactly to the minimum norm solution when multiple solutions exist; see e.g. Rosasco & Villa (2015). Thus, a study of the properties of the minimum norm solution explains the properties of the solution to which gradient descent converges. In particular, when ERM has multiple interpolating solutions, gradient descent converges to a solution that minimizes a bound on stability, as we show in this paper.
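To make Equation (2) concrete, here is a minimal NumPy sketch (ours, not the paper's) of the minimum norm interpolant, assuming a Gaussian RBF kernel; for distinct points this kernel matrix is invertible (Remark 2), so the pseudoinverse coincides with the inverse.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Pairwise squared distances, then the Gaussian RBF kernel.
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-gamma * d2)

def min_norm_interpolant(X, y, gamma=1.0):
    # c_S = K^† y, as in Equation (2).
    K = rbf_kernel(X, X, gamma)
    c = np.linalg.pinv(K) @ y
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ c

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.normal(size=20)
f = min_norm_interpolant(X, y)
print(np.max(np.abs(f(X) - y)))  # ~0: the solution interpolates the data
```

Replacing the pseudoinverse by `np.linalg.solve(K + lam * np.eye(len(K)), y)` recovers the Tikhonov estimator of Remark 3, whose λ → 0 limit is the interpolant above.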
This paper investigates kernel ridgeless regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds has been widely adopted in machine learning. However, related studies on kernel ridgeless regression are still sparse. The present study fills this gap, which, in my opinion, is also one of the main contributions of the present study.
SP:4d08cdb2de2044bcb574a425b42963b83fbebfbc
Discriminative Representation Loss (DRL): A More Efficient Approach than Gradient Re-Projection in Continual Learning
1 INTRODUCTION.

In the real world, we are often faced with situations where data distributions change over time, and we would like to update our models with new data in time, with bounded growth in system size. These situations fall under the umbrella of "continual learning", which has many practical applications, such as recommender systems, retail supply chain optimization, and robotics (Lesort et al., 2019; Diethe et al., 2018; Tian et al., 2018). Comparisons have also been made with the way that humans are able to learn new tasks without forgetting previously learned ones, using common knowledge shared across different skills. The fundamental problem in continual learning is catastrophic forgetting (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017), i.e., (neural network) models have a tendency to forget previously learned tasks while learning new ones. There are three main categories of methods for alleviating forgetting in continual learning: i) regularization-based methods, which aim at preserving the knowledge of models of previous tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018); ii) architecture-based methods, which incrementally evolve the model by learning task-shared and task-specific components (Schwarz et al., 2018; Hung et al., 2019); iii) replay-based methods, which focus on preserving knowledge of the data distributions of previous tasks, including methods of experience replay by episodic memories or generative models (Shin et al., 2017; Rolnick et al., 2019), methods for generating compact episodic memories (Chen et al., 2018; Aljundi et al., 2019), and methods for more efficiently using episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Riemer et al., 2019; Farajtabar et al., 2020).

Gradient-based approaches using episodic memories, in particular, have been receiving increasing attention. The essential idea is to use gradients produced by samples from episodic memories to constrain the gradients produced by new samples, e.g., by ensuring the inner product of the pair of gradients is non-negative (Lopez-Paz & Ranzato, 2017), as follows:

⟨g_t, g_k⟩ = ⟨∂L(x_t, θ)/∂θ, ∂L(x_k, θ)/∂θ⟩ ≥ 0, ∀k < t, (1)

where t and k are time indices, x_t denotes a new sample from the current task, and x_k denotes a sample from the episodic memory. Thus, the updates of parameters are forced to preserve the performance on previous tasks as much as possible. In Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017), g_t is projected to a direction that is closest to it in L2-norm whilst also satisfying Eq. (1): min_{g̃} (1/2)‖g_t − g̃‖²₂, s.t. ⟨g̃, g_k⟩ ≥ 0, ∀k < t. Optimization of this objective requires solving a high-dimensional quadratic program and is thus computationally expensive. Averaged-GEM (A-GEM) (Chaudhry et al., 2019a) alleviates the computational burden of GEM by using the averaged gradient over a batch of samples instead of the individual gradients of samples in the episodic memory; this not only simplifies the computation, but also obtains performance comparable with GEM (a sketch of this projection is given below). Orthogonal Gradient Descent (OGD) (Farajtabar et al., 2020) projects g_t to a direction that is perpendicular to the surface formed by {g_k | k < t}. Moreover, Aljundi et al. (2019) propose Gradient-based Sample Selection (GSS), which selects into episodic memories the samples that produce the most diverse gradients with respect to other samples.
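To make the projection step concrete, here is a minimal NumPy sketch of an A-GEM-style correction (our illustration, not the authors' code): when the new-task gradient conflicts with the averaged memory gradient, the conflicting component is removed.

```python
import numpy as np

def agem_project(g_new, g_mem):
    """A-GEM-style correction: if the new gradient conflicts with the
    averaged memory gradient (negative inner product), remove the
    conflicting component so that <g_tilde, g_mem> = 0."""
    dot = g_new @ g_mem
    if dot >= 0:
        return g_new                      # no conflict, keep the gradient
    return g_new - (dot / (g_mem @ g_mem)) * g_mem

g_new = np.array([1.0, -2.0])
g_mem = np.array([0.5, 1.0])
g_tilde = agem_project(g_new, g_mem)
print(g_tilde @ g_mem)  # ~0 after projection
```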
In GSS, diversity is measured by the cosine similarity between gradients. Since the cosine similarity is computed using the inner product of two normalized gradients, GSS embodies the same principle as the other gradient-based approaches with episodic memories. Although GSS suggests that the samples with the most diverse gradients are important for generalization across tasks, Chaudhry et al. (2019b) show that the average gradient over a small set of random samples may be able to obtain good generalization as well. In this paper, we answer the following questions: i) Which samples tend to produce diverse gradients that strongly conflict with other samples, and why are such samples able to help with generalization? ii) Why does a small set of randomly chosen samples also help with generalization? iii) Can we reduce the diversity of gradients in a more efficient way? Our answers reveal the relation between the diversity of gradients and the discriminativeness of representations, and further show connections between Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) and continual learning. Drawing on these findings, we propose a new approach, Discriminative Representation Loss (DRL), for classification tasks in continual learning. Our methods show improved performance with relatively low computational cost, in terms of time and RAM, when compared to several state-of-the-art (SOTA) methods across multiple benchmark tasks in the setting of online continual learning.

2 A NEW PERSPECTIVE OF REDUCING DIVERSITY OF GRADIENTS.

According to Eq. (1), negative cosine similarities between gradients produced by current and previous tasks result in worse performance in continual learning. This can be interpreted from the perspective of constrained optimization, as discussed by Aljundi et al. (2019). Moreover, the diversity of gradients relates to the Gradient Signal to Noise Ratio (GSNR) (Liu et al., 2020), which plays a crucial role in the model's generalization ability. Intuitively, when more of the gradients point in diverse directions, the variance will be larger, leading to a smaller GSNR, which indicates that reducing the diversity of gradients can improve generalization. This finding leads to the conclusion that samples with the most diverse gradients contain the most critical information for generalization, which is consistent with Aljundi et al. (2019).

2.1 THE SOURCE OF GRADIENT DIVERSITY.

We first conducted a simple experiment on classification tasks of 2-D Gaussian distributions, and tried to identify the samples with the most diverse gradients in the 2-D feature space. We trained a linear model on the first task to discriminate between two classes (blue and orange dots in Fig. 1a). We then applied the algorithm Gradient-based Sample Selection with Integer Quadratic Programming (GSS-IQP) (Aljundi et al., 2019) to select the 10% of the training samples that produce gradients with the lowest similarity (black dots in Fig. 1a), and denote this set of samples as M̂ = min_M ∑_{i,j∈M} ⟨g_i, g_j⟩ / (‖g_i‖ · ‖g_j‖). It is clear from Fig. 1a that the samples in M̂ are mostly around the decision boundary between the two classes. Increasing the size of M̂ results in the inclusion of samples that trace the outer edges of the data distributions from each class. Clearly, the gradients can be strongly opposed when samples from different classes are very similar.
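For intuition, here is a greedy NumPy stand-in for this selection criterion; the paper solves an integer quadratic program (GSS-IQP), so the greedy rule below is our simplification, not the authors' algorithm.

```python
import numpy as np

def select_diverse(grads, k):
    """Greedy stand-in for GSS-IQP: pick k sample gradients whose
    summed pairwise cosine similarity is smallest."""
    G = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = G @ G.T                       # pairwise cosine similarities
    chosen = [int(np.argmin(sim.sum(1)))]
    while len(chosen) < k:
        rest = [i for i in range(len(G)) if i not in chosen]
        # add the sample least similar to the current selection
        scores = [sim[i, chosen].sum() for i in rest]
        chosen.append(rest[int(np.argmin(scores))])
    return chosen

grads = np.random.default_rng(1).normal(size=(100, 10))
print(select_diverse(grads, 10))
```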
Samples close to decision boundaries are the most likely to produce such strongly opposed gradients. Intuitively, storing the decision boundaries of previously learned classes should be an effective way to preserve classification performance on those classes. However, if the episodic memory only includes samples representing the learned boundaries, it may miss important information when the model is required to incrementally learn new classes. We show this by introducing a second task: training the model above on a third class (green dots). We display the decision boundaries (which split the feature space in a one-vs-all manner) learned by the model after task 2, with M̂ (Fig. 1b) and with a random set of samples (Fig. 1c) from task 1 as the episodic memory.

[Figure 1: (a) Samples with the most diverse gradients (M̂) after learning task 1; the green line is the decision boundary. (b) Learned decision boundaries (purple lines) after task 2, when the episodic memory includes the samples in M̂. (c) Learned decision boundaries (purple lines) after task 2, when the episodic memory consists of random samples.]

[Figure 2: Illustration of how Pr(2β > α − δ) in Theorem 1 behaves in various cases, by drawing negative pairs from different subsets of a 3-class feature space defined in Fig. 2a; the classifier is a linear model. (a) Splitting samples into several subsets in a 3-class classification task; dots in different colors are from different classes. (b) Estimated distributions of β when drawing negative pairs from different subsets of samples. (c) Estimated distributions of α − δ when drawing negative pairs from different subsets of samples. The y-axis on the right side of (b) & (c) is for the case x ∈ S1 ∪ S2. We see that α − δ behaves in a similar way to β but over a smaller range, which makes β the key quantity in studying Pr(2β > α − δ). In the case x ∈ S3, the distribution of β has more mass on larger values than in the other cases, because the predicted probabilities fall mostly on the two classes in a pair, which causes all ⟨g_n, g_m⟩ to have the opposite sign of ⟨x_n, x_m⟩, as shown in Tab. 1.]

The random episodic memory shows better performance than the one selected by GSS-IQP, since the new decision boundaries rely on samples not included in M̂. This explains why randomly selected memories may generalize better in continual learning. Ideally, with M̂ large enough, the model can remember all edges of each class, and hence learn much more accurate decision boundaries sequentially. However, memory size is often limited in practice, especially for high-dimensional data. A more efficient way could be to learn more informative representations. The experimental results indicate that: 1) more similar representations in different classes result in more diverse gradients; 2) more diverse representations within the same class help with learning new boundaries incrementally. We now formalise the connection between the diversity of gradients and the discriminativeness of representations for the linear model (proofs are in Appx. A). Notation: a negative pair consists of two samples from different classes; a positive pair consists of two samples from the same class.
Let L represent the softmax cross-entropy loss, let W ∈ R^{D×K} be the weight matrix of the linear model, and let x_n ∈ R^D denote the input data, with y_n ∈ R^K a one-hot vector denoting the label of x_n; D is the dimension of the representations and K is the number of classes. Let p_n = softmax(o_n), where o_n = W^T x_n, and let g_n = ∇_W L(x_n, y_n; W) denote the gradient. x_n, x_m are two different samples when n ≠ m.

Lemma 1. Let ε_n = p_n − y_n. Then: ⟨g_n, g_m⟩ = ⟨x_n, x_m⟩⟨ε_n, ε_m⟩.

Theorem 1. Suppose y_n ≠ y_m, and let c_n denote the class index of x_n (i.e., y_{n,c_n} = 1, y_{n,i} = 0 for all i ≠ c_n). Let α ≜ ‖p_n‖² + ‖p_m‖², β ≜ p_{n,c_m} + p_{m,c_n}, and δ ≜ ‖p_n − p_m‖²₂. Then: Pr(sign(⟨g_n, g_m⟩) = sign(−⟨x_n, x_m⟩)) = Pr(2β > α − δ).

Theorem 2. Suppose y_n = y_m. When ⟨g_n, g_m⟩ ≠ 0, we have: sign(⟨g_n, g_m⟩) = sign(⟨x_n, x_m⟩).

For a better understanding of the theorems, we conduct an empirical study by partitioning the feature space of three classes into several subsets, as shown in Fig. 2a, and examine four cases of pairwise samples defined by these subsets: 1) x ∈ S0: both samples in a pair are near the intersection of the three classes; 2) x ∈ S0 ∪ S1: one sample is close to the decision boundaries and the other is far away from them; 3) x ∈ S3: both samples are close to the decision boundary between their true classes but away from the third class; 4) x ∈ S1 ∪ S2: both samples are far away from the decision boundaries. Theorem 1 says that for samples from different classes, ⟨g_n, g_m⟩ gets the opposite sign of ⟨x_n, x_m⟩ with a probability that depends on the predictions p_n and p_m. This probability of flipping the sign depends especially on β, which reflects how likely it is that the two samples are misclassified into each other's class. We show the empirical distributions of β and (α − δ) obtained by a linear model in Figs. 2b and 2c, respectively. In general, (α − δ) shows similar behavior to β in the four cases but over a smaller range, which makes 2β > (α − δ) tend to be true except when β is around zero. Basically, a subset including more samples close to decision boundaries leads to more probability mass on large values of β, and the case x ∈ S3 results in the largest mass on large values of β, because the predicted probabilities mostly concentrate on the two classes in a pair. As shown in Tab. 1, more mass on large values of β leads to larger probabilities of flipping the sign. These results demonstrate that samples with the most diverse gradients (whose gradients have largely negative similarities with other samples) are close to decision boundaries, because they tend to have large β and ⟨x_n, x_m⟩ tends to be positive. In the case x ∈ S1 ∪ S2, the probability of flipping the sign is zero because β concentrates around zero. According to Lemma 1, ⟨g_n, g_m⟩ is very close to zero in this case because the predictions are close to the true labels; hence, such samples are not considered to be among those with the most diverse gradients. Theorem 2 says that ⟨g_n, g_m⟩ has the same sign as ⟨x_n, x_m⟩ when the two samples are from the same class. We can see that the results for positive pairs in Tab. 1 match Theorem 2. In the case of S0 ∪ S1, the two probabilities do not add up to exactly 1 because the implementation of the cross-entropy loss in TensorFlow smooths the function by a small value to prevent numerical issues, which slightly changes the gradients.
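Lemma 1 can be checked numerically in a few lines. The sketch below (ours) uses the standard softmax cross-entropy gradient for a linear model, g_n = x_n (p_n − y_n)^T, and verifies ⟨g_n, g_m⟩ = ⟨x_n, x_m⟩⟨ε_n, ε_m⟩ under the Frobenius inner product.

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

rng = np.random.default_rng(0)
D, K = 5, 3
W = rng.normal(size=(D, K))

def grad_and_eps(x, y):
    p = softmax(W.T @ x)
    eps = p - y                   # epsilon_n = p_n - y_n
    g = np.outer(x, eps)          # gradient of the CE loss w.r.t. W
    return g, eps

x1, x2 = rng.normal(size=D), rng.normal(size=D)
y1, y2 = np.eye(K)[0], np.eye(K)[1]
g1, e1 = grad_and_eps(x1, y1)
g2, e2 = grad_and_eps(x2, y2)
lhs = np.sum(g1 * g2)             # Frobenius inner product <g_n, g_m>
rhs = (x1 @ x2) * (e1 @ e2)       # <x_n, x_m><eps_n, eps_m>
print(np.isclose(lhs, rhs))       # True
```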
Since ⟨x_n, x_m⟩ is mostly positive for positive pairs, ⟨g_n, g_m⟩ is hence also mostly positive, which explains why the samples with the most diverse gradients are not sufficient to preserve information within classes in the experiments of Fig. 1. On the other hand, if ⟨x_n, x_m⟩ is negative then ⟨g_n, g_m⟩ will be negative, which indicates that representations within a class should not be too diverse. Extending this theoretical analysis based on a linear model, we also provide an empirical study of non-linear models (Multi-Layer Perceptrons (MLPs)). As demonstrated in Tab. 1, the probability of flipping the sign in MLPs is very similar to that of the linear model, since it only depends on the predictions, and all models have learned reasonable decision boundaries. The probability of getting negative ⟨g_n, g_m⟩ is also similar to the linear model, except in the case of S1 ∪ S2 for negative pairs, in which the MLP with ReLU gets much less negative ⟨g_n, g_m⟩. As the MLP with tanh activations is still consistent with the linear model in this case, we attribute the difference to the representations always being positive due to the ReLU activations. These results demonstrate that non-linear models exhibit behaviors similar to linear models that mostly align with the theorems. Since only negative ⟨g_n, g_m⟩ may cause conflicts, reducing the diversity of gradients hence relies on reducing negative ⟨g_n, g_m⟩. We consider reducing negative ⟨g_n, g_m⟩ in two ways: 1) minimize the representation inner product of negative pairs, which pushes the inner product to be negative or zero (for positive representations); 2) optimize the predictions to decrease the probability of flipping the sign. In this sense, decreasing the representation similarity of negative pairs might help in both ways. In addition, according to Fig. 2b, x ∼ S3 gets larger prediction similarity than x ∼ S0, because the predictions put most of the probability mass on the two classes of a pair, which indicates that decreasing the similarity of predictions may decrease the probability of flipping the sign. Hence, we include the logits in the representations. We verify this idea by training two binary classifiers for two groups of MNIST classes ({0, 1} and {7, 9}). The classifiers have two hidden layers, each with 100 hidden units and ReLU activations. We randomly chose 100 test samples from each group to compute the pairwise cosine similarities. Representations are obtained by concatenating the outputs of all layers (including logits) of the neural network; gradients are computed with respect to all parameters of the model. We display the similarities in Figs. 3a and 3b. The correlation coefficients between the gradient and representation similarities of negative pairs are -0.86 and -0.85, and those of positive pairs are 0.71 and 0.79. In all cases, the similarities of representations show strong correlations with the similarities of gradients. The classifier for classes 0 and 1 gets smaller representation similarities and much less negative gradient similarities for negative pairs (blue dots), and it also attains a higher accuracy than the other classifier (99.95% vs. 96.25%), which illustrates the potential of reducing the gradient diversity by decreasing the representation similarity of negative pairs.
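The correlation check described above can be sketched in a few lines; the representation and gradient matrices below are toy stand-ins (ours), not the MNIST features from the experiment.

```python
import numpy as np

def pairwise_cosine(V):
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ V.T

rng = np.random.default_rng(0)
# Hypothetical stand-ins: rows are per-sample representations (all layer
# outputs concatenated, incl. logits) and per-sample parameter gradients.
reps = rng.normal(size=(100, 300))
grads = reps @ rng.normal(size=(300, 500))   # toy coupling for illustration
r, g = pairwise_cosine(reps), pairwise_cosine(grads)
iu = np.triu_indices(100, k=1)               # unique pairs only
print(np.corrcoef(r[iu], g[iu])[0, 1])       # correlation of similarities
```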
This paper presents a novel way of making full use of a compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. The authors give an insightful analysis of the influence of gradient diversity on the performance of continual learning, and propose a regularization that connects metric learning and continual learning. However, there are still some issues to be addressed, as below.
SP:b80bc890180934092cde037b49d94d6e4e06fad9
Learning without Forgetting: Task Aware Multitask Learning for Multi-Modality Tasks
1 INTRODUCTION.

The process of Multi-Task Learning (MTL) on a set of related tasks is inspired by the patterns displayed by human learning. It involves a pretraining phase over all the tasks, followed by a finetuning phase. During pretraining, the model tries to grasp the shared knowledge of all the tasks involved, while in the finetuning phase, task-specific learning is performed to improve the performance. However, as a result of the finetuning phase, the model forgets the information about the other tasks that it learnt during pretraining. Humans, on the other hand, are less susceptible to forgetfulness and retain existing knowledge/skills while mastering a new task. For example, a polyglot who masters a new language learns to translate from this language without losing the ability to translate other languages. Moreover, the lack of task-based flexibility and having separate finetuning/pretraining phases cause gaps in the learning process, for the following reasons. Role Mismatch: Consider an MTL system being trained to perform the Speech Translation (ST), Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks. The encoder block has a very different role in the standalone ASR, MT and ST models, and hence we cannot expect a single encoder to perform well on all the tasks without any cues to identify/use task information. Moreover, there is a discrepancy between pretraining and finetuning, hampering the MTL objective. Task Awareness: At each step in MTL, the model tries to optimize for the task at hand. For tasks like ST and ASR with the same source language, it is impossible for the model to identify the task and alter its parameters accordingly, hence necessitating a finetuning phase. A few such examples are provided in Table 1. Humans, on the other hand, grasp the task they have to perform by means of context or explicit cues. Although MTL strategies help the finetuned models perform better than models directly trained on those tasks, their applicability is limited to finding a good initialization point for the finetuning phase. Moreover, having a separate model for each task increases the memory requirements, which is detrimental in low-resource settings. In order to achieve the goal of jointly learning all the tasks, similar to humans, we need to perform shared learning in synergy with task-specific learning. Previous approaches such as Raffel et al. (2019) trained a joint model for a set of related text-to-text tasks by providing the task information along with the inputs during the joint learning phase. However, providing explicit task information is not always desirable; consider, e.g., the automatic multilingual speech translation task. In order to ensure a seamless user experience, the model is expected to extract the task information implicitly. Thus, a holistic joint learning strategy requires a generic framework which learns task-specific information without any explicit supervision. In this work, we propose a generic framework which can extract task-based characteristics and can be easily integrated into existing MTL strategies. The proposed approach helps align existing MTL approaches with human learning processes by incorporating task information into the learning process and removing the issues related to forgetfulness. We design a modulation network for learning the task characteristics and modulating the parameters of the model during MTL.
As discussed above, the task information may or may not be explicitly available during training. Hence, we propose two different designs of the task modulation network to learn the task characteristics: one uses explicit task identities, while the other uses examples from the task as input. The model, coupled with the modulation network, jointly learns all the tasks and, at the same time, performs task-specific learning. The proposed approach tackles issues related to forgetfulness by keeping a single model for all the tasks, and hence avoids the expensive finetuning phase. Having a single model for all the tasks also reduces memory constraints, improving suitability for low-resource devices. To evaluate the proposed framework, we conduct two sets of experiments. First, we include the task information during MTL on text-to-text tasks to show the effect of task information. Second, we train a model on tasks with different modalities and end goals, with highly confounding tasks. Our proposed framework allows the model to learn the task characteristics without any explicit supervision, and hence to train a single model which performs well on all the tasks. The main contributions of this work are as follows:
• We propose an approach to tackle the issue of forgetfulness which occurs during the finetuning phase of existing MTL strategies.
• Our model, without any finetuning, achieves superior performance on all the tasks, which alleviates the need to keep separate task-specific models.
• Our proposed framework is generic enough to be used with any MTL strategy involving tasks with multiple modalities.

2 TASK-AWARE MULTITASK LEARNING.

An overview of our proposed approach is shown in Figure 1.

2.1 BASE MODEL.

In general, the sequence-to-sequence architecture consists of two components: (1) an encoder, which computes a set of representations X = {x_1, ..., x_m} ∈ R^{m×d} corresponding to the input x; and (2) a decoder coupled with an attention mechanism (Bahdanau et al., 2015), which dynamically reads the encoder's output and predicts the target language sequence Y = {y_1, ..., y_n} ∈ R^{n×d}. It is trained on a dataset D to maximize p(Y|X; θ), where θ are the parameters of the model. We use the Transformer (Vaswani et al., 2017) as our base model. Based on the task modalities, we choose the preprocessing layer in the Transformer, i.e., the speech or the text (text-embedding) preprocessing layer. The speech preprocessing layer consists of a stack of k CNN layers with stride 2 for both the time and frequency dimensions. This layer compresses the speech sequence and produces an output sequence such that the input sequences corresponding to all the tasks have similar dimensions, d. An overview of the base sequence-to-sequence model is shown in the rightmost part of Figure 1.

2.2 TASK MODULATION NETWORK.

The task modulation network performs two operations. In the first step, it computes the task characteristics (t_e) using the task characteristics layer. It then modulates the model parameters θ using t_e in the second step.

2.2.1 TASK CHARACTERISTICS NETWORK.

We propose two types of Task Characteristics Networks (TCN) to learn the task characteristics: one uses explicit task identities, while the other uses source-target sequences as input. Explicit Task Information: In this approach, the tasks involved are represented using distinct task identities, fed to the TCN as one-hot vectors.
This network consists of a feed-forward layer which produces the task embedding used for modulating the model parameters:

t_e = FFN(e), (1)

where e ∈ R^s is a one-hot encoding of the s tasks used during joint learning. Implicit Task Information: The implicit TCN computes the task embeddings using example sequences from the tasks, without any external supervision. It consists of four sub-layers: (1) a sequence representation layer, (2) a bi-directional attention layer, (3) a sequence summary layer, and (4) a task embedding layer. The sequence representation sub-layer consists of uni-directional Transformer Encoder (TE) blocks (Vaswani et al., 2017). It takes the source and target sequences from the tasks as input and produces self-attended source and target sequences:

X^{sa} = TE(X), Y^{sa} = TE(Y), (2)

where X^{sa} ∈ R^{M×d}, Y^{sa} ∈ R^{N×d}. This sub-layer computes the contextual representation of the sequences. The Bi-directional Attention (BiA) sub-layer takes the self-attended source and target sequences from the previous layer as input and computes the relation between them using dot-product attention (Luong et al., 2015). As a result, we get target-aware source (X^{at} ∈ R^{M×d}) and source-aware target (Y^{as} ∈ R^{N×d}) representations as outputs:

X^{at} = BiA(X^{sa}, Y^{sa}), Y^{as} = BiA(Y^{sa}, X^{sa}). (3)

The sequence summary sub-layer is similar to the sequence representation sub-layer and summarizes the sequences. The sequence summaries are given by:

X^s = TE_u(X^{at}), Y^s = TE_u(Y^{as}), (4)

where X^s ∈ R^{M×d}, Y^s ∈ R^{N×d}. Equation 4 summarizes the sequences X^{at} and Y^{as}, which contain the contextual and attention information. We take the last tokens x^s ∈ R^d and y^s ∈ R^d from both summaries, since the last token can see the whole sequence and acts as a summary of it. The task embedding layer computes t_e by taking the outputs of the sequence summary sub-layer and applying a feed-forward network:

t_e = FFN([x^s : y^s]). (5)

2.2.2 MODULATING MODEL PARAMETERS.

We modulate the parameters (θ) of the network (Section 2.1) to account for task-specific variation during MTL over a set of tasks. We achieve this by scaling (γ) and shifting (β) the outputs of each layer (e.g., each transformer block), including any preprocessing layers in the model, based on Feature-wise Linear Modulation (FiLM; Perez et al. (2018)). The γ and β parameters are obtained from the task embedding t_e, computed using either Equation 1 or 5:

γ = t_e[:d], β = t_e[d:], (6)

where t_e ∈ R^{2d} and d is the hidden dimension of the model. Once we have γ and β, we apply feature-wise linear modulation (Perez et al., 2018) to compute the modulated output (O_l) for each block of the model:

O_l = γ ∗ f_l(v_l; θ_l) + β, l = 1, ..., L, (7)

where L is the total number of blocks in the model and f_l represents the l-th block of the model, with parameters θ_l ∈ θ and inputs v_l (a small sketch combining this modulation with the training-time shuffling appears at the end of Section 2.3).

2.3 TRAINING.

MTL has been successfully applied across different applications of machine learning, such as natural language processing (Hashimoto et al., 2016; Collobert & Weston, 2008), speech recognition (Liu et al., 2019; Deng et al., 2013), computer vision (Zhang et al., 2014; Liu et al., 2015; Girshick, 2015), and drug discovery (Ramsundar et al., 2015). It comes in many forms: joint learning, learning to learn, and learning with auxiliary tasks.
We consider two MTL strategies to train on a set of S tasks, T = {τ_1, ..., τ_S}, with corresponding datasets D = {D_1, ..., D_S}: (1) joint learning and (2) learning to learn. As our first training strategy, we use Joint Learning (JL) (Caruana, 1997), which is the most commonly used training strategy for MTL. In JL, the model parameters, including the output layer, are shared across all the tasks involved in the training. As the second training strategy, under the learning-to-learn approach, we use a variant of meta-learning, Modality Agnostic Meta Learning (MAML) (Finn et al., 2017a). Even though MAML is mostly used in few-shot learning settings, we use it since it allows for task-specific learning during the meta-train step, and it has also been shown to provide improvements in the field of speech translation (Indurthi et al., 2020). We resolve the source-target vocabulary mismatch across different tasks in MTL by using a vocabulary of subwords (Sennrich et al., 2016) computed from all the tasks. We sample a batch of examples from D_s and use it as input to the TCN and the Transformer model. To ensure that each training example uses the task embedding computed from another example, we randomly shuffle this batch when using it as input to the TCN, as sketched below. This random shuffling improves the generalization performance by forcing the network to learn task-specific characteristics (t_e in Equation 1 or 5). We compute the task embedding in the meta-train step as well; however, the parameters of the TCN are updated only during the meta-test step. At inference time, we use precomputed task embeddings obtained from a batch of examples randomly sampled from the training set.
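To ground Sections 2.2.2 and 2.3, here is a minimal NumPy sketch (ours, not the authors' code) of FiLM-style modulation with a task embedding (Equations 1, 6, 7), together with the batch-shuffling trick used for the TCN inputs; the single-layer FFN and the stand-in implicit TCN are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, d = 3, 16

# t_e = FFN(e): single-layer stand-in for the explicit TCN (Eq. 1).
W_ffn = rng.normal(size=(n_tasks, 2 * d))
def explicit_task_embedding(task_id):
    e = np.eye(n_tasks)[task_id]          # one-hot task identity
    return W_ffn.T @ e                    # t_e in R^{2d}

def film_modulate(block_out, te):
    # Split t_e into scale/shift and apply O_l = gamma * f_l(v_l) + beta
    # (Eqs. 6-7) to one block's output of width d.
    gamma, beta = te[:d], te[d:]
    return gamma * block_out + beta

print(film_modulate(rng.normal(size=d), explicit_task_embedding(1)).shape)

# Batch shuffling for the implicit TCN (Section 2.3): each example is
# modulated with an embedding computed from ANOTHER example in the batch,
# so the TCN must encode task-level, not example-level, information.
batch = rng.normal(size=(4, d))           # toy block outputs
tcn = lambda x: np.tanh(np.concatenate([x, x]))  # hypothetical implicit TCN
perm = rng.permutation(len(batch))
te_shuffled = np.stack([tcn(batch[j]) for j in perm])
out = np.stack([film_modulate(batch[i], te_shuffled[i]) for i in range(4)])
print(out.shape)  # (4, 16)
```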
This paper proposes a new framework that computes task-specific representations to modulate the model parameters during multi-task learning (MTL). This framework uses a single model with shared representations for learning multiple tasks together. Also, explicit task information may not always be available, and in such cases the proposed framework is useful. The proposed framework is evaluated on various datasets spanning multiple modalities, where the MTL model even achieves state-of-the-art results on some datasets.
SP:09f2fe6a482bbd6f9bd2c62aa841f995171ba939
A Robust Fuel Optimization Strategy For Hybrid Electric Vehicles: A Deep Reinforcement Learning Based Continuous Time Design Approach
1 INTRODUCTION.

Hybrid electric vehicles powered by fuel cells and batteries have attracted great enthusiasm in recent years, as they have the potential to eliminate emissions from the transport sector. However, both fuel cells and batteries face several operational challenges which make the separate use of either of them in automotive systems quite impractical. HEVs and PHEVs powered by conventional diesel engines and batteries merely reduce emissions, but cannot eliminate them completely. The drawbacks include carbon emissions causing environmental pollution on the fuel cell side, and long charging times, limited driving distance per charge, and the non-availability of charging stations along the driving route on the battery side. Fuel Cell powered Hybrid Electric Vehicles (FCHEVs), powered by fuel cells and batteries, offer emission-free operation while overcoming the limitations of driving distance per charge and long charging times, and so FCHEVs have gained significant attention in recent years. Much of the existing research that has studied and developed Fuel and Energy Management Systems (FEMS) for transport applications includes Sulaiman et al. (2018), who present a critical review of different energy and fuel management strategies for FCHEVs, and Li et al. (2017), who present an extensive review of FMS objectives and strategies for FCHEVs. These strategies can be divided into two groups, i.e., model-based and model-free. The model-based methods mostly depend on a discretization of the state space and therefore suffer from the inherent curse of dimensionality: the computational complexity increases exponentially with the dimension of the state space. This is quite evident in methods like state-based EMS (Jan et al., 2014; Zadeh et al., 2014; 2016), rule-based fuzzy logic strategies (Motapon et al., 2014), classical PI and PID strategies (Segura et al., 2012), Pontryagin's minimum principle (PMP) (Zheng et al., 2013; 2014), model predictive control (MPC) (Kim et al., 2007; Torreglosa et al., 2014) and differential dynamic programming (DDP) (Kim et al., 2007). Out of all these methods, differential dynamic programming is considered computationally quite efficient; it relies on the linearization of the non-linear system equations about a nominal state trajectory, followed by a policy iteration to improve the policy. In this approach, the control policy for fuel optimization is used to compute the optimal trajectory, and the policy is updated until convergence is achieved. The model-free methods mostly deal with Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014) and Reinforcement Learning (RL) based strategies (Mitrovic et al., 2010; Khan et al., 2012), including DDP (Mayne et al., 1970). Here, the control policy for fuel optimization is computed by continuous engagement with the environment and measurement of the system response, enabling a solution of the DP equation to be reached recursively in an online fashion. In deep reinforcement learning, multi-layer neural networks are used to represent the learning function using a non-linear parameterized approximation form.
Although a compact parameterized form does exist for the learning function, the inability to know it a priori means the method suffers from the curse of dimensionality (O(d²), where d is the dimension of the state space), making it infeasible to apply to a high-dimensional fuel management system. The computational complexity of traditional RL methods like policy iteration (PI) and value iteration (VI) (Bellman et al., 1954; 2003; Barto et al., 1983; Bertsekas, 2007) can be overcome by a simulation-based approach (Sutton et al., 1998), where the policy or the value function is parameterized with sufficient accuracy using a small number of parameters. Thus, we are able to transform the optimal control problem into an approximation problem in the parameter space (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), sidestepping the need for model knowledge and excessive computation. However, convergence requires sufficient exploration of the state-action space, and the optimality of the obtained policy depends primarily on the accuracy of the parameterization scheme. As a result, a good approximation of the value function is of utmost importance for the stability of the closed-loop system, and it requires convergence of the unknown parameters to their optimal values. Hence, this sufficient exploration condition manifests itself as a persistence of excitation (PE) condition when RL is implemented online (Mehta et al., 2009; Bhasin et al., 2013; Vrabie, 2010), which is impossible to guarantee a priori. Most traditional approaches to fuel optimization are unable to address the robustness issue. The methods described in the literature, including PID (Segura et al., 2012), Model Predictive Control (MPC) (Kim et al., 2007; Torreglosa et al., 2014) and Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014), as well as the simulation-based RL strategies (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), suffer from the drawback of providing a suboptimal solution in the presence of external disturbances and noise. As a result, these methods are quite impractical for fuel optimization in hybrid electric vehicles, which are plagued by various disturbances in the form of sudden charge and fuel depletion, changes in the environment, and changes in the values of parameters like the remaining useful life, internal resistance, voltage and temperature of the battery. The fuel optimization problem for the hybrid electric vehicle has therefore been formulated as a fully observed stochastic Markov Decision Process (MDP). Instead of using Trajectory-optimized LQG (T-LQG) or Model Predictive Control (MPC), which provide a sub-optimal solution in the presence of disturbances and noise, we propose a deep reinforcement learning-based optimization strategy using concurrent learning (CL) that uses the state-derivative-action-reward tuples to obtain a robust optimal solution. The convergence of the weight estimates of the policy and the value function to their optimal values justifies our claim.
The two major contributions of the proposed approach can therefore be summarized as follows: 1) The popular methods in the RL literature, including policy iteration and value iteration, suffer from the curse of dimensionality owing to the use of a simulation-based technique which requires sufficient exploration of the state space (the PE condition). The proposed model-based RL scheme therefore aims to relax the PE condition by using a concurrent learning (CL)-based system identifier to reduce the computational complexity. Generally, an estimate of the true controller designed using the CL-based method introduces an approximate estimation error, which makes the stability analysis of the system quite intractable. The proposed method, however, establishes the stability of the closed-loop system by introducing the estimation error and analyzing the augmented system trajectory obtained under the influence of the control signal. 2) The proposed optimization algorithm, implemented for fuel management in hybrid electric vehicles, overcomes the limitations of the conventional fuel management approaches (PID, Model Predictive Control, ECMS, PMP) and traditional RL approaches (Adaptive Dynamic Programming, DDP, DQN), all of which suffer from sub-optimal behaviour in the presence of external disturbances, model uncertainties, frequent charging and discharging, changes of environment and other noise. The H-infinity (H∞) performance index, defined as the ratio of the disturbance to the control energy, has been established for the RL-based optimization technique and compared with the traditional strategies to address the robustness of the proposed design scheme.

The rest of the paper is organised as follows. Section 2 presents the problem formulation, including the open-loop optimization and the reinforcement learning-based optimal controller design, described in subsections 2.1 and 2.2 respectively. The parametric system identification and value function approximation are detailed in subsections 2.2.1 and 2.2.2. This is followed by the stability and robustness analysis (using the H-infinity (H∞) performance index) of the closed-loop system in subsection 2.2.4. Section 3 provides the simulation results and discussion, followed by the conclusion in Section 4.

2 PROBLEM FORMULATION.

Consider the fuel management system of a hybrid electric vehicle as a continuous-time affine non-linear dynamical system:

ẋ = f(x, w) + g(x)u,  y = h(x, v)  (1)

where x ∈ R^{n_x}, y ∈ R^{n_y}, u ∈ R^{n_u} are the state, output and control vectors respectively, f(·) denotes the drift dynamics and g(·) denotes the control effectiveness matrix. The functions f and h are assumed to be locally Lipschitz continuous, such that f(0) = 0 and ∇f(x) is continuous for every bounded x ∈ R^{n_x}. The process noise w and measurement noise v are assumed to be zero-mean, uncorrelated Gaussian white noise with covariances W and V, respectively. Assumption 1: We consider the system to be fully observed:

y = h(x, v) = x  (2)

Remark 1: This assumption is made to provide a tractable formulation of the fuel management problem, sidestepping the complex treatment required when a stochastic control problem is treated as a partially observed MDP (POMDP).
Optimal Control Problem: For a continuous-time system with unknown nonlinear dynamics f(·), we need to find an optimal control policy π_t over a finite time horizon [0, t], where π_t is the control policy at time t such that π_t = u(t), minimizing the cost function J = ∫₀ᵗ (x^T Q x + u^T R u) dt + x^T F x, where Q, F > 0 and R ≥ 0.

2.1 OPEN LOOP OPTIMIZATION.

Consider a noise-free non-linear dynamical system with unknown dynamics:

ẋ = f(x, 0) + g(x)u,  y = h(x, v) = x  (3)

where x_0 ∈ R^{n_x}, y ∈ R^{n_y}, u ∈ R^{n_u} are the initial state, output and control vectors respectively, and f(·) has its usual meaning; the corresponding cost function is given by J_d(x_0, u_t) = ∫₀ᵗ (x^T Q x + u^T R u) dt + x^T F x. Remark: We use a piecewise convex function to globally approximate the non-convex fuel function, which is used to formulate the cost function for the fuel optimization. The open-loop optimization problem is to find the control sequence u_t such that, for a given initial state x_0,

ū_t = arg min J_d(x_0, u_t), subject to ẋ = f(x, 0) + g(x)u, y = h(x, v) = x.  (4)

The problem is solved using the gradient descent approach (Bryson et al., 1962; Gosavi et al., 2003), as follows. Starting from a random initial value of the control sequence U(0) = [u_t(0)], the control policy is updated iteratively as

U(n+1) = U(n) − α ∇_U J_d(x_0, U(n)),  (5)

until convergence is achieved up to a certain degree of accuracy, where U(n) denotes the control value at the n-th iteration and α is the step size parameter. The gradient vector is given by:

∇_U J_d(x_0, U(n)) = (∂J_d/∂u_0, ∂J_d/∂u_1, ∂J_d/∂u_2, ..., ∂J_d/∂u_t) |_(x_0, u_t)  (6)

The gradient descent algorithm detailing this approach is given in Appendix A.1. Remark 2: The open-loop optimization problem is thus solved using the gradient descent approach, treating the underlying system dynamics as a black box and relying on a sequence of input-output tests, without perfect knowledge of the non-linearities in the model at design time. This method proves to be a very simple and useful strategy for complex dynamical systems with complicated cost-to-go functions, and is suitable for parallelization.
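A minimal sketch of this black-box procedure (our illustration; the scalar toy dynamics, horizon and step size are assumptions, not the paper's system): cost rollouts through unknown dynamics, finite-difference gradients, and the update in Equation (5).

```python
import numpy as np

def cost(u_seq, x0, step, Q=1.0, R=0.1, F=1.0):
    # Roll out the black-box dynamics and accumulate the quadratic
    # running cost plus the terminal cost (scalar state and control).
    x, J = x0, 0.0
    for u in u_seq:
        J += Q * x**2 + R * u**2
        x = step(x, u)            # black-box: only input-output access
    return J + F * x**2

def numeric_grad(f, u, h=1e-5):
    g = np.zeros_like(u)
    for i in range(len(u)):       # finite differences, one input at a time
        e = np.zeros_like(u); e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2 * h)
    return g

step = lambda x, u: 0.9 * x + 0.2 * u + 0.05 * np.tanh(x)  # unknown to solver
u = np.zeros(20)
for _ in range(200):              # U(n+1) = U(n) - alpha * grad_U J_d (Eq. 5)
    u -= 0.1 * numeric_grad(lambda v: cost(v, 1.0, step), u)
print(cost(u, 1.0, step))
```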
This work proposes a deep reinforcement learning-based optimization strategy for the fuel optimization problem for hybrid electric vehicles. The problem is formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A continuous-time representation of the problem is also used, in contrast to conventional techniques, which mostly use a discrete-time formulation.
SP:a1e2218e6943bf138aeb359e23628676b396ed66
Neural representation and generation for RNA secondary structures
1 INTRODUCTION.

There is an increasing interest in developing deep generative models for biochemical data, especially in the context of generating drug-like molecules. Learning generative models of biochemical molecules can facilitate the development and discovery of novel treatments for various diseases, reducing the lead time for discovering promising new therapies and potentially translating into reduced costs for drug development (Stokes et al., 2020). Indeed, the study of generative models for molecules has become a rich and active subfield within machine learning, with standard benchmarks (Sterling & Irwin, 2015), a set of well-known baseline approaches (Gómez-Bombarelli et al., 2018; Kusner et al., 2017; Liu et al., 2018; Jin et al., 2018), and high-profile cases of real-world impact (e.g., the LambdaZero project for exascale search of drug-like molecules). Prior work in this space has focused primarily on the generation of small molecules (with fewer than 100 atoms), leaving the development of generative models for larger and more complicated biologics and biosimilar drugs (e.g., RNA and protein peptides) an open area for research. Developing generative models for larger biochemicals is critical in order to expand the frontiers of automated treatment design. More generally, developing effective representation learning for such complex biochemicals will allow machine learning systems to integrate knowledge and interactions involving these biologically rich structures. In this work, we take a first step towards the development of deep generative models for complex biomolecules, focusing on the representation and generation of RNA structures. RNA plays a crucial role in protein transcription and various regulatory processes within cells, which can be influenced by its structure (Crick, 1970; Stefl et al., 2005), and RNA-based therapies are an increasingly active area of research (Pardi et al., 2018; Schlake et al., 2012), making it a natural focus for the development of deep generative models. The key challenge in generating RNA molecules, compared to the generation of small molecules, is that RNA involves a hierarchical, multi-scale structure, including a primary sequential structure based on the sequence of nucleic acids, as well as more complex secondary and tertiary structures based on the way that the RNA strand folds onto itself. An effective generative model for RNA must be able to generate sequences that give rise to these more complex emergent structures. There has been prior work on optimizing or designing RNA sequences, using reinforcement learning or black-box optimization, to generate particular RNA secondary structures (Runge et al., 2019; Churkin et al., 2017). However, these prior works generally focus on optimizing sequences to conform to a specific secondary structure. In contrast, our goal is to define a generative model which can facilitate the sampling and generation of diverse RNA molecules with meaningful secondary structures, while also providing a novel avenue for targeted RNA design via search over a tractable latent space. Key contributions: We propose a series of benchmark tasks and deep generative models for the task of RNA generation, with the goal of facilitating future work on this important and challenging problem. We propose three interrelated benchmark tasks for RNA representation and generation:
1 . Unsupervised generation : Generating stable , valid , and diverse RNAs that exhibit complex secondary structures .
2 . Semi-supervised learning : Learning latent representations of RNA structure that correlate with known RNA functional properties .
3 . Targeted generation : Generating RNAs that exhibit particular functional properties .
These three tasks build upon each other , with the first task only requiring the generation of stable and valid molecules , while the latter two tasks involve representing and generating RNAs that exhibit particular properties . In addition to proposing these novel benchmarks for the field , we introduce and evaluate three generative models for RNA . All three models build upon variational autoencoders ( VAEs ) ( Kingma & Welling , 2014 ) augmented with normalizing flows ( Rezende & Mohamed , 2015 ; Kingma et al. , 2016 ) , and they differ in how they represent the RNA structure . To help readers better understand RNA structures and properties , a self-contained explanation is provided in Appendix B . The simplest model ( termed LSTMVAE ) learns using a string-based representation of RNA structure . The second model ( termed GraphVAE ) leverages a graph-based representation and a graph neural network ( GNN ) encoder approach ( Gilmer et al. , 2017 ) . Finally , the most sophisticated model ( termed HierVAE ) introduces and leverages a novel hierarchical decomposition of the RNA structure . Extensive experiments on our newly proposed benchmarks highlight how the hierarchical approach allows more effective representation and generation of complex RNA structures , while also highlighting important challenges for future work in the area .
2 TASK DESCRIPTION . Given a dataset of RNA molecules , i.e . sequences of nucleotides and corresponding secondary structures , our goals are to : ( a ) learn to generate structurally stable , diverse , and valid RNA molecules that reflect the distribution in this training dataset ; ( b ) learn latent representations that reflect the functional properties of RNA . A key factor in both these representation and generation processes is that we seek to jointly represent and generate both the primary sequence structure as well as the secondary structure conformation . Together , these two goals lay the foundations for generating novel RNAs that satisfy certain functional properties . To meet these goals , we create two types of benchmark datasets , each one focusing on one aspect of the above-mentioned goals :
Unlabeled and variable-length RNA . The first dataset contains unlabeled RNA with moderate and highly-variable length ( 32-512 nts ) , obtained from the human transcriptome ( Aken et al. , 2016 ) ; with it , we focus on the generation aspect of structured RNA and evaluate the validity , stability and diversity of generated RNA molecules . In particular , our goal with this dataset is to jointly generate RNA sequences and secondary structures that are biochemically feasible ( i.e. , valid ) , have low free energy ( i.e. , stable ) , and are distinct from the training data ( i.e. , diverse ) . We will give an extended assessment of the generation aspect under different circumstances , e.g. , when constraining the generation procedures with explicit rules .
Labeled RNA . The second dataset is pulled and processed from a previous study on in vitro RNA-protein interaction , which features labeled RNAs with shorter and uniform length ( 40 nts ) ( Cook et al. , 2017 ) .
With this dataset , our objective is slightly expanded ( in addition to obj . ( a ) ) , so that the latent space is adequately organized and reflective of the interaction with proteins . Therefore , a key assessment for the latent space is the AUROC for the classification of protein binding , which is crucial for the design of desired novel RNA molecules . Essentially , this creates slight variations in the task formulation , with the first dataset suited to unsupervised learning of a generative model , while the second dataset involves additional supervision ( e.g. , for a semi-supervised model or targeted generation ) . Our specific modeling choices , to be introduced in Section 3 , are invariant to the different task formulations , and flexible enough to handle different representations of RNA secondary structures . We refer readers to Appendix C for a detailed explanation of the datasets and of the evaluation metrics on the generated molecules and latent embeddings .
3 METHODS . In this section , we introduce three different generative models for RNA . All three models are based upon the variational autoencoder ( VAE ) framework , involving three key components :
1 . A probabilistic encoder network qφ ( z|x ) , which generates a distribution over latent states given an input representation of an RNA . We experiment with three different types of input encodings for RNA sequence and secondary structures ( see Figure S1 ) : a dot-bracket annotated string , a graph with an adjacency matrix representing base-pairings , and a graph augmented with a hierarchical junction tree annotation for the secondary structure .
2 . A probabilistic decoder network pθ ( x|z ) , which defines a joint distribution over RNA sequences and secondary structures , conditioned on a latent input . As with the encoder network , we design architectures based on a linearized string decoding and a graph-based hierarchical junction-tree decoding approach .
3 . A parameterized prior pψ ( z ) , which defines a prior distribution over latent states and is learned based on a continuous normalizing flow ( CNF ) ( Chen et al. , 2018 ) .
For all the approaches we propose , the model is optimized via stochastic gradient descent to minimize the evidence lower bound ( ELBO ) :
$\mathcal{L} = -\,\mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x \mid z)\big] + \beta\,\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p_\psi(z)\big)$
where β is a term that allows KL-annealing over the strength of the prior regularization . In the following sections , we explain our three different instantiations of the encoder ( Section 3.1 ) and decoder ( Section 3.2 ) , as well as our procedures to structurally constrain the decoding process using domain knowledge ( Section 3.3 ) and to avoid posterior collapse ( Section 3.4 ) .
3.1 ENCODING RNA SECONDARY STRUCTURES . The input to the encoder is a structured RNA molecule , with its sequence given by an ordered array of nucleotides x1 . . . xL , with xi ∈ { A , C , G , U } , where L is the length of the sequence , and its secondary structure , either represented ( 1 ) as a dot-bracket string S = ẋ1 . . . ẋL with ẋi ∈ { . , ( , ) } ; ( 2 ) or as a graph G with two types of edges — covalent bonds along the RNA backbone , and hydrogen bonds between the base-pairs 2 ; we use xuv to denote edge features between nucleotides u and v ; ( 3 ) or as a hypergraph T — a depth-first ordered array of subgraphs Ĝ1 . . . ĜD , with L ( Ĝi ) ∈ { S , H , I , M } indicating the subgraph label , and I ( Ĝi ) ⊆ { 1 , . . . , L } indicating the assignment of nucleotides to each subgraph .
2 We do not differentiate the number of hydrogen bonds , which can be different depending on the base-pairs . For example , G-C has three hydrogen bonds whereas A-U only contains two .
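To make the dot-bracket view concrete, the following minimal sketch converts a dot-bracket string into the base-pair edges used by the graph view G. The function name and the decision to handle only plain '(' / ')' brackets (ignoring pseudoknot notations) are our own illustrative assumptions, not part of the paper's implementation.

```python
def dotbracket_to_pairs(structure):
    """Parse a dot-bracket string into base-pair index tuples,
    e.g. '((..))' -> {(0, 5), (1, 4)}.

    Assumes a balanced string; pseudoknot bracket types are out of
    scope for this sketch.
    """
    stack, pairs = [], set()
    for i, ch in enumerate(structure):
        if ch == '(':
            stack.append(i)                  # opening position awaits its partner
        elif ch == ')':
            pairs.add((stack.pop(), i))      # close the most recent opening
    return pairs
```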
Encoding RNA secondary structure as sequence . First , we obtain a joint encoding over the nucleotide and the dot-bracket annotation , using the joint sequence-structure vocabulary { A , C , G , U } × { . , ( , ) } . Then , these one-hot encodings are processed by a stacked bidirectional LSTM ( Hochreiter & Schmidhuber , 1997 ) , followed by a multi-head self-attention module ( Vaswani et al. , 2017 ) to weigh different positions along the RNA backbone . A global max-pooling is used to aggregate the information into hS ; we then obtain the mean µS and log variance log σS from hS through linear transformations , and draw the latent encoding zS from N ( µS , σS ) using the reparameterization trick ( Kingma & Welling , 2014 ) .
Learning graph representations of RNA secondary structure . To encode the graph view G of an RNA secondary structure , we pass rounds of neural messages along the RNA structure , which falls into the framework of the Message Passing Neural Network ( MPNN ) as originally discussed in Gilmer et al . ( 2017 ) and similarly motivated by Jin et al . ( 2018 ) . For much longer RNAs , it is conceptually beneficial to pass more rounds of messages so that a nucleotide may receive information on its broader structural context . However , this may introduce undesired effects such as training instability and over-smoothing . Therefore , we combine our MPNN network with a gating mechanism ; the combination is collectively referred to as the G-MPNN :
$\hat{v}^{\,t-1}_{uv} = \sigma\Big( W^g_{\mathrm{local}}\,[\,x_u \,\|\, x_{uv}\,] + W^g_{\mathrm{msg}} \sum_{w \in N(u)} v^{t-1}_{wu} \Big) \qquad (1)$
$v^{t}_{uv} = \mathrm{GRU}\big( \hat{v}^{\,t-1}_{uv} ,\, v^{t-1}_{uv} \big) \qquad (2)$
where [ · ‖ · ] denotes concatenation , σ denotes the activation function , and GRU indicates the gated recurrent unit ( Cho et al. , 2014 ) . Then , after T iterations of message passing , the final nucleotide-level embedding is given by $h_u = \sigma\big( W^g_{\mathrm{emb}}\,[\,x_u \,\|\, \sum_{v \in N(u)} v^{T}_{vu}\,] \big)$ . Before pooling the nucleotide-level embeddings into the graph level , we pass h1 . . . hL through a single bidirectional LSTM layer , obtaining ĥ1 . . . ĥL at each step , and $h_G = \max\big( \{ \hat{h}_i \mid i \in 1 \ldots L \} \big)$ . The latent encoding zG is similarly obtained from hG using the reparameterization trick .
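A minimal PyTorch-style sketch of one G-MPNN round (Eqs. 1-2) follows. The hidden sizes, the ReLU activation, and the treatment of messages as per-edge states are illustrative assumptions; the paper's exact implementation may differ.

```python
import torch
import torch.nn as nn

class GMPNNStep(nn.Module):
    """One gated message-passing round in the spirit of Eqs. (1)-(2)."""

    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        self.w_local = nn.Linear(node_dim + edge_dim, hidden_dim)
        self.w_msg = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, x_u, x_uv, summed_incoming, v_prev):
        # Eq. (1): candidate message from local node/edge features plus the
        # summed incoming messages v_{wu} over neighbors w of u.
        cand = torch.relu(self.w_local(torch.cat([x_u, x_uv], dim=-1))
                          + self.w_msg(summed_incoming))
        # Eq. (2): gated (GRU) update of the previous edge message.
        return self.gru(cand, v_prev)
```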
Hierarchical encoding of the RNA hypergraph . To encode the junction tree T of RNA , we employ a type of GRU specifically suited to tree-like structures , which has previously been applied in works such as GGNN ( Li et al. , 2016 ) and JTVAE ( Jin et al. , 2018 ) . We refer to this tree encoding network as T-GRU , and the format of its input is shown in Figure 1 . One major distinction between our RNA junction tree and the one used for chemical compounds ( Jin et al. , 2018 ) is that an RNA subgraph assumes a more variable nucleotide composition , such that it is impossible to enumerate subgraphs based on the observed data . Therefore , we need to dynamically compute the features for each node in an RNA junction tree based on its contained nucleotides , in a hierarchical manner , to leverage the nucleotide-level embeddings learnt by the G-MPNN . Considering a subgraph Ĝi in the junction tree T , we initialize its node feature with $x_{\hat{G}_i} = [\, L(\hat{G}_i) \,\|\, \max_{u \in I(\hat{G}_i)} h_u \,]$ . Notably , $\max_{u \in I(\hat{G}_i)} h_u$ is a max-pooling over all nucleotides assigned to Ĝi , and the nucleotide embedding hu comes from the G-MPNN . To compute and pass neural messages between adjacent subgraphs in the RNA junction tree T , we use the T-GRU network in Eq . ( 3 ) , and compute the embeddings for subgraphs with Eq . ( 4 ) :
$v^{t}_{\hat{G}_i, \hat{G}_j} = \text{T-GRU}\big( x_{\hat{G}_i} ,\, \{ v^{t-1}_{\hat{G}_k, \hat{G}_i} \mid \hat{G}_k \in N(\hat{G}_i) \} \big) \qquad (3)$
$h_{\hat{G}_i} = \sigma\big( W^t_{\mathrm{emb}}\,[\, x_{\hat{G}_i} \,\|\, \sum_{\hat{G} \in N(\hat{G}_i)} h_{\hat{G}} \,] \big) \qquad (4)$
with details of T-GRU provided in Appendix D . Further , we obtain a depth-first traversal of the subgraph embeddings $h_{\hat{G}_1} \ldots h_{\hat{G}_D}$ , which is also the order used for the hierarchical decoding discussed later . This ordered array of embeddings is processed by another bidirectional LSTM , and the final tree-level representation hT is again given by the max-pooling over the bi-LSTM outputs . Likewise , the latent encoding zT is obtained from hT .
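For clarity, here is a small sketch of the junction-tree node-feature initialization described above; the tensor shapes and the one-hot label encoding are assumptions made for illustration.

```python
import torch

def init_subgraph_feature(label_onehot, assigned_nucleotide_embs):
    """x_G = [ L(G) || max-pool over embeddings of the assigned nucleotides ].

    label_onehot: (num_labels,) one-hot for the subgraph type in {S, H, I, M};
    assigned_nucleotide_embs: (n_assigned, d) embeddings h_u from the G-MPNN.
    """
    pooled, _ = assigned_nucleotide_embs.max(dim=0)   # element-wise max-pool
    return torch.cat([label_onehot, pooled])
```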
This paper proposes three deep generative models based on VAEs (with different encoding schemes for RNA secondary structure) for the generation of RNA secondary structures. They test each model on three benchmark tasks: unsupervised generation, semi-supervised learning, and targeted generation. The paper has several interesting contributions: a comparison of VAE models that use different RNA secondary-structure encoding schemes, including the traditional dot-bracket notation and a more complex hierarchical encoding, as well as various decoding schemes that encourage valid secondary structures.
SP:43e525fb3fa611df7fd44bd3bc9843e57b154c66
DiP Benchmark Tests: Evaluation Benchmarks for Discourse Phenomena in MT
1 INTRODUCTION AND RELATED WORK . The advances in neural machine translation ( NMT ) systems have led to great achievements in terms of state-of-the-art performance in automatic translation tasks . There have even been claims that their translations are no worse than what an average bilingual human may produce ( Wu et al. , 2016 ) or that the translations are on par with professional translators ( Hassan et al. , 2018 ) . However , extensive studies conducting evaluations with professional translators ( Läubli et al. , 2018 ; Popel et al. , 2020 ) have shown that there is a statistically strong preference for human translations in terms of fluency and overall quality when evaluations are conducted monolingually or at the document level . Document ( or discourse ) level phenomena ( e.g. , coreference , coherence ) may not seem lexically significant , but they contribute substantially to the readability and understandability of the translated texts ( Guillou , 2012 ) . Targeted datasets for evaluating phenomena like coreference ( Guillou et al. , 2014 ; Guillou & Hardmeier , 2016 ; Lapshinova-Koltunski et al. , 2018 ; Bawden et al. , 2018 ; Voita et al. , 2018b ) , or ellipsis and lexical cohesion ( Voita et al. , 2019 ) , have been proposed . NMT frameworks such as the Transformer ( Vaswani et al. , 2017 ) provide more flexibility to incorporate larger context . This has spurred a great deal of interest in developing context-aware NMT systems that take advantage of source or target contexts , e.g. , Miculicich et al . ( 2018 ) , Maruf & Haffari ( 2018 ) , Voita et al . ( 2018b ; 2019 ) , Xiong et al . ( 2019 ) , Wong et al . ( 2020 ) , to name a few . Most studies only report performance on specific testsets , often limited to improvements in BLEU ( Papineni et al. , 2002 ) . Despite being the standard MT evaluation metric , BLEU has been criticised for its inadequacy ; the scores are not interpretable , and are not sensitive to small improvements in lexical terms that may lead to big improvements in fluency or readability ( Reiter , 2018 ) . There is no framework for a principled comparison of MT quality beyond mere lexical matching as done in BLEU : there are no standard corpora and no agreed-upon evaluation measures . To address these shortcomings , we propose the DiP benchmark tests ( for Discourse Phenomena ) , which will enable the comparison of machine translation models across discourse task strengths and source languages . We create diagnostic testsets for four diverse discourse phenomena , and also propose automatic evaluation methods for these tasks . However , discourse phenomena in translations can be tricky to identify , let alone evaluate . A fair number of datasets proposed thus far have been manually curated , and automatic evaluation methods have often failed to agree with human judgments ( Guillou & Hardmeier , 2018 ) . To mitigate these issues , we use trained neural models for identifying and evaluating complex discourse phenomena and conduct extensive user studies to ensure agreement with human judgments . Our methods for automatically extracting testsets can be applied to multiple languages , and find cases that are difficult to translate without having to resort to synthetic data . Moreover , our testsets are extracted in a way that makes them representative of current challenges . They can be easily updated to reflect future challenges , preventing the pitfall of becoming outdated , which is a common failing of many benchmarking testsets .
We also benchmark established MT models on these testsets to convey the extent of the challenges they pose . Although discourse phenomena can and do occur at the sentence level ( e.g. , between clauses ) , we would expect MT systems that model extra-sentential context ( Voita et al. , 2018b ; Zhang et al. , 2018 ; Miculicich et al. , 2018 ) to be more successful on these tasks . However , we observe significant differences in system behavior and quality across languages and phenomena , emphasizing the need for more extensive evaluation as a standard procedure . We propose to maintain a leaderboard that tracks and highlights advances in MT quality that go beyond BLEU improvement . Our main contributions in this paper are as follows :
• Benchmark testsets for four discourse phenomena : anaphora , coherence & readability , lexical consistency , and discourse connectives .
• Automatic evaluation methods and agreements with human judgments .
• Benchmark evaluation and analysis of four context-aware systems contrasted with baselines , for German/Russian/Chinese-English language pairs .
2 MACHINE TRANSLATION MODELS . Model Architectures . We first introduce the MT systems that we will be benchmarking on our testsets . We evaluate a selection of established models of various complexities ( simple sentence-level to complex context-aware models ) , taking care to include both source- and target-side context-aware models . We briefly describe the model architectures here :
• S2S : A standard 6-layer base Transformer model ( Vaswani et al. , 2017 ) which translates sentences independently .
• CONCAT : A 6-layer base Transformer whose input is two sentences ( previous and current sentence ) merged , with a special character as a separator ( Tiedemann & Scherrer , 2017 ) .
• ANAPH : Voita et al . ( 2018b ) incorporate source context by encoding it with a separate encoder , then fusing it in the last layer of a standard Transformer encoder using a gate . They claim that their model explicitly captures anaphora resolution .
• TGTCON : To model target-side context , we implement a version of ANAPH with an extra operation of multi-head attention in the decoder , computed between representations of the target sentence and target context . The architecture is described in detail in the Appendix ( A.5 ) .
• SAN : Zhang et al . ( 2018 ) use a source attention network : a separate Transformer encoder to encode source context , which is incorporated into the source encoder and target decoder using gates .
• HAN : Miculicich et al . ( 2018 ) introduce a hierarchical attention network ( HAN ) into the Transformer framework to dynamically attend to the context at two levels : word and sentence . They achieve the highest BLEU when hierarchical attention is applied separately to both the encoder and decoder .
Datasets and Training . The statistics for the datasets used to train the models are shown in Table 1 . We tokenize the data using Jieba 1 for Zh and Moses scripts 2 for the other languages , lowercase the text , and apply BPE encodings 3 from Sennrich et al . ( 2016 ) . We learn the BPE encodings with the command learn-joint-bpe-and-vocab -s 40000 . The scores reported are BLEU4 , computed either through fairseq or NLTK ( Wagner , 2010 ) . Further details about dataset composition , training settings and hyperparameters can be found in the Appendix ( A.7 ) .
1 https : //github.com/fxsjy/jieba
2 https : //www.statmt.org/moses/
3 https : //github.com/rsennrich/subword-nmt/
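Since the reported metric is BLEU-4, a short sketch of how it can be computed with NLTK (one of the two scorers mentioned above) follows; it assumes pre-tokenized text and a single reference per hypothesis, and omits smoothing.

```python
from nltk.translate.bleu_score import corpus_bleu

def bleu4(references, hypotheses):
    """Corpus-level BLEU-4.

    references: list of reference token lists (one reference per sentence);
    hypotheses: list of hypothesis token lists, aligned with references.
    """
    return corpus_bleu([[ref] for ref in references], hypotheses,
                       weights=(0.25, 0.25, 0.25, 0.25))
```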
BLEU scores . The BLEU scores on the WMT-14 ( De-En , Ru-En ) and on the WMT-17 ( Zh-En ) testsets for each of the six trained models are shown in Table 2 . We were unable to train HAN for Zh-En as the model was not optimized for training with large datasets . In contrast to increases in BLEU for selected language-pairs and datasets reported in published work , incorporating context within elaborate context-dependent models decreases BLEU scores for the Zh-En and De-En tasks . However , the simple concatenation-based model CONCAT performs better than S2S for De-En and Ru-En ; this shows that context knowledge is indeed helpful for improving BLEU .
3 BENCHMARK TESTSETS . We construct our benchmarking testsets based on four main principles :
Selectivity . The testsets need to provide hard-to-translate contexts for MT models . We ensure this by looking at translation errors made by system submissions to campaigns like WMT and IWSLT .
Authenticity . The testsets can not contain artificial or synthetic data but only natural text . Rather than generating testset samples using heuristics , we extract hard contexts from existing human-generated source text .
Multilinguality . The testset extraction method should be automatic and applicable to multiple languages . Our framework can be used to extract testsets for all source languages that are part of the considered MT campaigns .
Adaptability . The testsets should be easy to update frequently , making them adaptable to improvements in newer systems . Since we automatically extract hard contexts based on MT errors , our testsets are easy to update ; they adapt to errors in newer ( and possibly more accurate ) systems , making the tasks harder over time .
We use the system outputs released by WMT and IWSLT for the most recent years ( Nadejde et al. , 2016 ; Bojar et al. , 2017 ; 2018 ; 2019 ; Cettolo et al. , 2016 ; 2017 ) to build our testsets . For De-En , Ru-En and Zh-En , these consist of translation outputs from 68 , 41 and 47 unique systems respectively . Since the data comes from a wide variety of systems , our testsets representatively aggregate different types of errors from several ( arguably SOTA ) models . Also note that the MT models we are benchmarking are not a part of these system submissions to WMT , so there is no potential bias in the testsets .
3.1 ANAPHORA . Anaphora are references to entities that occur elsewhere in a text ; mishandling them can result in ungrammatical sentences or the reader inferring the wrong antecedent , leading to misunderstanding of the text ( Guillou , 2012 ) . We focus specifically on the aspect of incorrect pronoun translations .
Testset . To obtain hard contexts for pronoun translation , we look for source texts that lead to erroneous pronoun translations in system outputs . We align the system translations with their references , and collect the cases in which the translated pronouns do not match the reference 4 . Our anaphora testset is an updated version of the one proposed by Jwalapuram et al . ( 2019 ) . We filter the system translations based on their list of cases where the translations can be considered wrong , rather than acceptable variants . The corresponding source texts are extracted as a test suite for pronoun translation . This gives us a pronoun benchmark testset of 2564 samples for De-En , 2368 for Ru-En and 1540 for Zh-En .
Evaluation . Targeted evaluation of pronouns in MT has been challenging as it is not fair to expect an exact match with the reference .
Evaluation methods like APT ( Miculicich Werlen & Popescu-Belis , 2017 ) or AutoPRF ( Hardmeier & Federico , 2010 ) are specific to language pairs or lists of pronouns , requiring extensive manual intervention . They have also been criticised for failing to produce evaluations that are consistent with human judgments ( Guillou & Hardmeier , 2018 ) . Jwalapuram et al . ( 2019 ) propose a pairwise ranking model that scores “ good ” pronoun translations ( like in the reference ) higher than “ poor ” pronoun translations ( like in the MT output ) in context , and show that their model is good at making this distinction , along with having high agreement with human judgments . However , they do not rank multiple system translations against each other , which is our main goal ; the absolute scores produced by their model are not useful since it is trained in a pairwise fashion . We devise a way to use their model to score and rank system translations in terms of pronouns . First , we re-train their model with more up-to-date WMT data ( more details in Appendix A.1 ) . We obtain a score for each benchmarked MT system ( S2S , CONCAT , etc . ) translation using the model , plus the corresponding reference sentence . We then normalize the score for each translated sentence by calculating the difference with the reference . To get an overall score for an MT system , the assigned scores are summed across all sentences in the testset :
$\mathrm{Score}_{\mathrm{sys}} = \sum_i \big( \rho_i(\mathrm{ref} \mid \theta) - \rho_i(\mathrm{sys} \mid \theta) \big) \qquad (1)$
where $\rho_i(\cdot \mid \theta)$ denotes the score given to sentence i by the pronoun model θ . The systems are ranked based on this overall score , where a lower score indicates a better performance . We conduct a user study to confirm that the model rankings correspond with human judgments , obtaining an agreement of 0.91 between four participants who annotated 100 samples . Appendix A.1 gives details ( e.g. , interface , participants , agreement ) about the study .
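A direct rendering of Eq. (1) as code, where the per-sentence scores come from the re-trained pronoun ranking model; the function name and input format are illustrative.

```python
def system_pronoun_score(ref_scores, sys_scores):
    """Eq. (1): sum over the testset of rho_i(ref|theta) - rho_i(sys|theta).

    Lower totals indicate that the system's pronoun translations scored
    closer to (or better than) the reference, i.e. better performance.
    """
    return sum(r - s for r, s in zip(ref_scores, sys_scores))
```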
This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty lies in the relatively large scale, spanning three translation directions, four discourse phenomena, and 150-5000 data points per language and phenomenon. A relatively large number of systems from previous work is benchmarked on each test set, and agreement with human judgments is measured.
SP:0bd749fe44c37b521bd40f701e1428890aaa9c95
Private Image Reconstruction from System Side Channels Using Generative Models
1 INTRODUCTION . Side channel analysis ( SCA ) recovers program secrets based on the victim program ’ s nonfunctional characteristics ( e.g. , its execution time ) that depend on the values of program secrets . SCA constitutes a major threat in today ’ s system and hardware security landscape . System side channels , such as CPU cache accesses and operating system ( OS ) page table accesses made by the victim software , are widely used to recover program secrets under various real-world scenarios ( Gullasch et al. , 2011 ; Aciicmez & Koc , 2006 ; Wu et al. , 2012 ; Hähnel et al. , 2017 ; Xu et al. , 2015 ; Yarom et al. , 2017 ) . To conduct SCA , attackers first conduct an online phase to log a trace of side channel data points made by the victim software ( e.g. , its accessed CPU cache lines ) . Then , attackers launch an offline phase to analyze the logged trace and infer secrets ( e.g. , private inputs ) . Enabled by advances in system research , the online phase can be performed smoothly ( Xu et al. , 2015 ) . Nevertheless , the offline phase is challenging , requiring comprehension of the victim software ’ s input-relevant operations and how such operations influence side channels . The influence is program-specific and obscure ( see an example in Fig . 1 ) . Even worse , side channel data points made by real-world software are usually highly noisy . For instance , executing libjpeg ( libjpeg , 2020 ) to decompress one unknown JPEG image produces a trace of over 700K side channel data points , where only a small portion depends on the image content . Identifying such input-dependent data points from over 700K records is extremely difficult . Launching SCA to recover images processed by media software constitutes a common threat in the era of cloud computing ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) , especially when machine learning as a service ( MLaaS ) is substantially offered ( e.g. , for face recognition ) . When envisioning the high risk of violating user privacy , there is a demanding need to understand the adversarial capability of reconstructing private images with SCA . To date , the offline inference phase of existing SCA attacks requires substantial manual effort and heuristics ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) . While some preliminary studies explore using AI models to infer secrets ( Hospodar et al. , 2011 ; Kim et al. , 2019 ; Cagli et al. , 2017 ; Hettwer et al. , 2018 ) , their approaches are primarily driven by classification , i.e. , predicting whether a particular bit of a crypto key is 0 or 1 . In contrast , reconstructing user private images requires synthesizing and enhancing images from a more holistic perspective . Recent advances in generative models , such as the generative adversarial network ( GAN ) and the variational autoencoder ( VAE ) , have enabled a major thrust in image reconstruction , given subtle signals in even cross-modal settings , e.g. , voice-to-face or text-to-image ( Radford et al. , 2016 ; Reed et al. , 2016 ; Wen et al. , 2019 ; Hong et al. , 2018b ) . Inspired by this breakthrough , we propose an SCA framework using generative models . Given a trace of side channel data points made by image analysis software ( e.g. , libjpeg ) when processing a user input , we reconstruct an image visually similar to the input . Each logged side channel trace , containing around a million records , is first encoded into a matrix and pre-processed by a convolutional neural network ( CNN ) for feature extraction .
Then , a VAE network with a learned prior ( referred to as VAE-LP ) is employed to reconstruct an image with a holistic visual appearance . We further supplement VAE-LP with a GAN model to enhance the recovered image with vivid details . The GAN generator yields the final output . Our attack exploits media libraries , libjpeg ( libjpeg , 2020 ) and uPNG ( Middleditch , 2010 ) , using two popular side channels , CPU cache line accesses and OS page table accesses . Our attack is independent of the underlying computing infrastructure ( i.e. , OS , hardware , image library implementation ) . We require enough side channel logs for training , which is consistently assumed by previous works ( Heuser & Zohner , 2012 ; Maghrebi et al. , 2016 ) . While existing attacks particularly target libjpeg and leverage domain knowledge , system hacking , and manual efforts to infer pixel values ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) , we show that images with many details can be reconstructed in an end-to-end manner . We also show the surprising result that , enabled by our framework , side channel traces composed of one-bit data read/write patterns , which prima facie seem minimally informative , suffice to recover images . We conduct qualitative and quantitative evaluations on specific and general datasets representing daily images that can violate privacy if leaked . The recovered images manifest consistent visual appearances with the private inputs . The recovered images also exhibit high discriminability : each recovered image ( e.g. , a face ) can be matched to its reference input among many candidates with high accuracy . In summary , we make the following contributions : At the conceptual level , we present the first generative model-based SCA . Our novel approach learns how program inputs influence system side channels from historical side channel logs to reconstruct user private images automatically . We , for the first time , demonstrate surprisingly effective attacks toward even low-resolution side channels like one-bit data read/write access patterns . At the technical level , we design an effective framework by incorporating various design principles to facilitate image reconstruction from side channels . Our framework pipelines 2D CNN , VAE-LP , and GAN models to systematically enhance the quality of generated images . At the empirical level , our evaluations show that the proposed framework can generate images that have vivid details and are closely similar to the reference inputs . The reconstructed images show high discriminability , making privacy leakage attacks more practical . This is the first paper to conduct SCA with generative models , revealing new SCA opportunities and unknown threats . Our code is at https : //github.com/genSCA/genSCA .
2 BACKGROUND . To formulate SCA , let the attacked program be P and its input domain be I . For a deterministic and terminating program P , the program execution can be modeled as a mapping P : I → E where E represents program runtime behavior ( e.g. , memory access ) . As a common assumption ( Hähnel et al. , 2017 ) , program inputs are private and profitable for attackers . Since different inputs i , i′ ∈ I can likely induce different e , e′ ∈ E , using input-dependent e ∈ E enables inferring i . Modern computer architectures have largely eliminated the possibility for adversaries to log e ∈ E . Nevertheless , an attacker ’ s view on P can be modeled as a function view : E → O that maps E to side channel observations O .
Hence , the composition ( view ◦ P ) : I → O maps inputs to side channel data points that can be logged by attackers . The view indicates the attacker ’ s capability , and for typical system security scenarios , the view is formulated as view : Emem → Ocache ∪ Opage , where Emem denotes a trace of accessed memory locations when executing P with i , and Ocache and Opage represent CPU cache and OS page table side channels , respectively . Despite being unable to monitor Emem , attackers can log accessed cache lines Ocache or page table entries Opage derived from Emem . Attackers then infer Emem and recover i . We now concretize the procedure by introducing how SCA is used to exploit cloud platforms in a two-step approach as follows :
Online Phase to Record O . Consider a cloud environment as in Fig . 1 ( a ) , where two users , one normal and one malicious , deploy two virtual machine ( VM ) instances on the host . Private images i ∈ I uploaded by users are processed by the media library P within the left VM . Modern computer designs , e.g. , Intel SGX ( Intel , 2014 ) , guarantee that i ∈ I and the execution of P can not be viewed from outside the VM . However , when processing i , P usually imposes a large volume of CPU cache and page table accesses , which , as shown in Fig . 1 ( a ) , can be recorded by the co-located malicious VM or the malicious host OS in a fully automated manner ( Han et al. , 2017 ; Chiang et al. , 2015 ; Liu et al. , 2015a ; Xu et al. , 2015 ; Hähnel et al. , 2017 ) .
Offline Phase to Infer i . Once side channel traces o ∈ O are collected , an offline phase is conducted to infer ( view ◦ P )⁻¹ : O → I and recover i . Fig . 1 ( b ) presents a sample code , where depending on the values of input i , different memory locations ( and cache lines ) will be visited . Fig . 1 ( c ) shows the corresponding trace of logged cache side channel records . To infer i , attackers eliminate the second record ( since it is input-independent ) , and infer i as 1 according to the first record . Attackers aim to 1 ) pinpoint a subset of records o∗ ⊆ o that depends on i , and 2 ) recover the mapping from o∗ to i . However , real-world side channel traces ( e.g. , generated by uPNG ) could contain over one million records , where only a tiny portion o∗ is input-dependent . Even worse , constructing the mapping between i and o∗ requires a deep understanding of program control flows ( e.g. , how i affects program execution and induces cache accesses in Fig . 1 ( b ) ) . To date , these tasks require either manual effort ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) or formal analysis ( Doychev et al. , 2013 ; Wang et al. , 2017 ; 2019 ) , which are program-specific and error-prone with low scalability . Existing research tackles the offline phase challenge by proposing profiling-based SCA ( Maghrebi et al. , 2016 ; Hettwer et al. , 2018 ; Kim et al. , 2019 ) , where models are trained to approximate ( view ◦ P )⁻¹ : O → I . However , existing work focuses on predicting particular bits of crypto keys from succinct side channel traces , e.g. , a few hundred records ( Hettwer et al. , 2020 ) . In contrast , this is the first work to show that by incorporating generative models , SCA can be conducted to exploit real-world media libraries and holistically reconstruct high-quality and discriminable images .
3 THE PROPOSED FRAMEWORK . A common assumption shared by SCA ( Heuser & Zohner , 2012 ; Hähnel et al. , 2017 ; Xu et al. , 2015 ) is that the attackers can profile the victim software locally or remotely with training inputs and collect corresponding side channel traces . We train a model to learn how different inputs can influence side channel traces . Then , given a side channel trace logged when processing an unknown image , our framework reconstructs an image that is visually similar to the unknown input . Our framework has two pipelined modules ( see Fig . 2 ) . Given a side channel trace Ti corresponding to processing an image i , we first encode Ti into a matrix . The encoded matrix will be fed to the VAE-LP module to generate image îtrace , and we further use a GAN to denoise îtrace and yield the final output îGAN . We now elaborate on each module . More details are given in Appendix B .
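As a rough illustration of the first module, the sketch below folds a one-dimensional side-channel trace into a fixed-width matrix for the CNN feature extractor. The row width, zero padding, and float encoding are our own assumptions; the paper's exact encoding is described in its Appendix B.

```python
import numpy as np

def trace_to_matrix(trace, width=256):
    """Fold a 1-D trace of side-channel records (e.g. cache-line or page
    indices) into a 2-D float matrix, zero-padding the final row."""
    height = -(-len(trace) // width)                  # ceiling division
    mat = np.zeros(width * height, dtype=np.float32)
    mat[:len(trace)] = np.asarray(trace, dtype=np.float32)
    return mat.reshape(height, width)
```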
The authors present a framework that uses a combination of a VAE and a GAN to recover private user images via side-channel analysis of memory accesses. A VAE-LP model first reconstructs a coarse image from side-channel information, which is reshaped and processed using a convolutional network. The output of the VAE-LP model is then refined using a GAN to add fine details. Compelling results are demonstrated for the recovery of private information, and state-of-the-art metrics are reported.
SP:b2fc6ca65add04fb32bcf7622d9098de9004ca2b
DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
Deep ensembles perform better than a single network thanks to the diversity among their members . Recent approaches regularize predictions to increase diversity ; however , they also drastically decrease individual members ’ performances . In this paper , we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies . Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information , we introduce a novel training criterion called DICE : it increases diversity by reducing spurious correlations among features . The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant . Therefore , besides the classification loss with information bottleneck , we adversarially prevent features from being conditionally predictable from each other . We manage to reduce simultaneous errors while protecting class information . We obtain state-of-the-art accuracy results on CIFAR-10/100 : for example , an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently . We further analyze the consequences on calibration , uncertainty estimation , out-of-distribution detection and online co-distillation . 1 INTRODUCTION . Averaging the predictions of several models can significantly improve the generalization ability of a predictive system . Due to its effectiveness , ensembling has been a popular research topic ( Nilsson , 1965 ; Hansen & Salamon , 1990 ; Wolpert , 1992 ; Krogh & Vedelsby , 1995 ; Breiman , 1996 ; Dietterich , 2000 ; Zhou et al. , 2002 ; Rokach , 2010 ; Ovadia et al. , 2019 ) as a simple alternative to fully Bayesian methods ( Blundell et al. , 2015 ; Gal & Ghahramani , 2016 ) . It is currently the de facto solution for many machine learning applications and Kaggle competitions ( Hin , 2020 ) . Ensembling reduces the variance of estimators ( see Appendix E.1 ) thanks to the diversity in predictions . This reduction is most effective when errors are uncorrelated and members are diverse , i.e. , when they do not simultaneously fail on the same examples . Conversely , an ensemble of M identical networks is no better than a single one . In deep ensembles ( Lakshminarayanan et al. , 2017 ) , the weights are traditionally trained independently : diversity among members only relies on the randomness of the initialization and of the learning procedure . Figure 1 shows that the performance of this procedure quickly plateaus with additional members . To obtain more diverse ensembles , we could adapt the training samples through bagging ( Breiman , 1996 ) and bootstrapping ( Efron & Tibshirani , 1994 ) , but a reduction of training samples has a negative impact on members with multiple local minima ( Lee et al. , 2015 ) . Sequential boosting does not scale well for time-consuming deep learners that overfit their training dataset . Liu & Yao ( 1999a ; b ) ; Brown et al . ( 2005b ) explicitly quantified the diversity and regularized members into having negatively correlated errors . However , these ideas have not significantly improved accuracy when applied to deep learning ( Shui et al. , 2018 ; Pang et al. , 2019 ) : while members should predict the same target , they force disagreements among strong learners and therefore increase their bias . 
It highlights the main objective and challenge of our paper : finding a training strategy to reach an improved trade-off between ensemble diversity and individual accuracies ( Masegosa , 2020 ) . ( Figure 2 : features extracted from one input should not share more information than features extracted from two inputs in the same class ; i.e. , a discriminator should not be able to differentiate the two pairings . ) Our core approach is to encourage all members to predict the same thing , but for different reasons . Therefore the diversity is enforced in the feature space and not on predictions . Intuitively , to maximize the impact of a new member , extracted features should bring information about the target that is so far absent and thus unpredictable from other members ’ features . It would remove spurious correlations , e.g . information redundantly shared among features extracted by different members but useless for class prediction . This redundancy may be caused by a detail in the image background and therefore will not be found in features extracted from other images belonging to the same class . This could make members predict badly simultaneously , as shown in Figure 2 . Our new learning framework , called DICE , is driven by Information Bottleneck ( IB ) ( Tishby , 1999 ; Alemi et al. , 2017 ) principles , which force features to be concise by forgetting the task-irrelevant factors . Specifically , DICE leverages the Minimum Necessary Information criterion ( Fischer , 2020 ) for deep ensembles , and aims at reducing the mutual information ( MI ) between features and inputs , but also the information shared between features . We prevent extracted features from being redundant . As mutual information can detect arbitrary dependencies between random variables ( such as symmetry , see Figure 2 ) , we increase the distance between pairs of members : it promotes diversity by reducing the predictions ’ covariance . Most importantly , DICE protects features ’ informativeness by conditioning mutual information upon the target . We build upon recent neural approaches ( Belghazi et al. , 2018 ) based on the Donsker-Varadhan representation of the KL formulation of MI . We summarize our contributions as follows :
• We introduce DICE , a new adversarial learning framework to explicitly increase diversity in ensembles by minimizing the conditional redundancy between features .
• We rationalize our training objective by arguments from information theory .
• We propose an implementation through neural estimation of the conditional redundancy .
We consistently improve accuracy on CIFAR-10/100 as summarized in Figure 1 , with better uncertainty estimation and calibration . We analyze how the two components of our loss modify the accuracy-diversity trade-off . We improve out-of-distribution detection and online co-distillation .
2 DICE MODEL . Notations . Given an input distribution X , a network θ is trained to extract the best possible dense features Z to model the distribution pθ ( Y |X ) over the targets , which should be close to the Dirac on the true label . Our approach is designed for ensembles with M members θi , i ∈ { 1 , . . . , M } , extracting Zi . In branch-based setups , members share low-level weights to reduce computation cost . We average the M predictions at inference . We initially consider an ensemble of M = 2 members .
Quick overview . First , we train each member separately for classification with information bottleneck .
Second , we train members together to remove spurious redundant correlations while adversarially training a discriminator . In conclusion , members learn to classify with conditionally uncorrelated features for increased diversity . Our procedure is driven by the following theoretical findings .
2.A DERIVING THE TRAINING OBJECTIVE
2.A.1 BASELINE : NON-CONDITIONAL OBJECTIVE . The Minimum Necessary Information ( MNI ) criterion from Fischer ( 2020 ) aims at finding minimal statistics . In deep ensembles , Z1 and Z2 should capture only minimal information from X , while preserving the necessary information about the task Y . First , we consider separately the two Markov chains Z1 ← X ↔ Y and Z2 ← X ↔ Y . As entropy measures information , the entropy of Z1 and Z2 not related to Y should be minimized . We recover IB ( Alemi et al. , 2017 ) in deep ensembles :
$\mathrm{IB}_{\beta_{ib}}(Z_1, Z_2) = \tfrac{1}{\beta_{ib}} \big[ I(X;Z_1) + I(X;Z_2) \big] - \big[ I(Y;Z_1) + I(Y;Z_2) \big] = \mathrm{IB}_{\beta_{ib}}(Z_1) + \mathrm{IB}_{\beta_{ib}}(Z_2) .$
Second , let us consider I ( Z1 ; Z2 ) : we minimize it following the minimality constraint of the MNI :
$\mathrm{IBR}_{\beta_{ib},\delta_r}(Z_1, Z_2) = \underbrace{\tfrac{1}{\beta_{ib}} \big[ I(X;Z_1) + I(X;Z_2) \big]}_{\text{Compression}} - \underbrace{\big[ I(Y;Z_1) + I(Y;Z_2) \big]}_{\text{Relevancy}} + \delta_r \underbrace{I(Z_1;Z_2)}_{\text{Redundancy}} = \mathrm{IB}_{\beta_{ib}}(Z_1) + \mathrm{IB}_{\beta_{ib}}(Z_2) + \delta_r\, I(Z_1;Z_2) .$
( Figure 3 : the conditional redundancy region , green vertical stripes , has no overlap with the relevancy region , red stripes . )
Analysis . In this baseline criterion , relevancy encourages Z1 and Z2 to capture information about Y . Compression & redundancy ( R ) split the information from X into two compressed & independent views . The relevancy-compression-redundancy trade-off depends on the values of βib & δr .
2.A.2 DICE : CONDITIONAL OBJECTIVE . The problem is that the compression and redundancy terms in IBR also reduce necessary information related to Y : it is detrimental to have Z1 and Z2 fully disentangled while training them to predict the same Y . As shown in Figure 3 , redundancy regions ( blue horizontal stripes ) overlap with relevancy regions ( red stripes ) . Indeed , the true constraints that the MNI criterion really entails are the following conditional equalities given Y : I ( X ; Z1|Y ) = I ( X ; Z2|Y ) = I ( Z1 ; Z2|Y ) = 0 . Mutual information being non-negative , we transform them into our main DICE objective :
$\mathrm{DICE}_{\beta_{ceb},\delta_{cr}}(Z_1, Z_2) = \underbrace{\tfrac{1}{\beta_{ceb}} \big[ I(X;Z_1 \mid Y) + I(X;Z_2 \mid Y) \big]}_{\text{Conditional Compression}} - \underbrace{\big[ I(Y;Z_1) + I(Y;Z_2) \big]}_{\text{Relevancy}} + \delta_{cr} \underbrace{I(Z_1;Z_2 \mid Y)}_{\text{Conditional Redundancy}} = \mathrm{CEB}_{\beta_{ceb}}(Z_1) + \mathrm{CEB}_{\beta_{ceb}}(Z_2) + \delta_{cr}\, I(Z_1;Z_2 \mid Y) , \qquad (1)$
where we recover two conditional entropy bottleneck ( CEB ) ( Fischer , 2020 ) components , $\mathrm{CEB}_{\beta_{ceb}}(Z_i) = \tfrac{1}{\beta_{ceb}} I(X;Z_i \mid Y) - I(Y;Z_i)$ , with βceb > 0 and δcr > 0 .
Analysis . The relevancy terms force features to be informative about the task Y . But contrary to IBR , the DICE bottleneck constraints only minimize information irrelevant to Y . First , the conditional compression removes from Z1 ( or Z2 ) information from X not relevant to Y . Second , the conditional redundancy ( CR ) reduces spurious correlations between members and only forces them to have independent biases , but definitely not independent features . It encourages diversity without affecting members ’ individual precision , as it protects information related to the target class in Z1 and Z2 . Useless information from X to predict Y should certainly not be in Z1 or Z2 , but it is even worse if it is in Z1 and Z2 simultaneously , as this would cause simultaneous errors .
Even if , for i ∈ { 1 , 2 } , reducing I ( X ; Zi|Y ) indirectly controls I ( Z1 ; Z2|Y ) ( as I ( Z1 ; Z2|Y ) ≤ I ( X ; Zi|Y ) by the chain rule ) , it is more efficient to directly target this intersection region through the CR term . In a final word , DICE is to IBR for deep ensembles as CEB is to IB for a single network . We now approximate the two CEB components and the CR component in the DICE objective from equation 1 .
2.B APPROXIMATING DICE INTO A TRACTABLE LOSS
2.B.1 VARIATIONAL APPROXIMATION OF THE CONDITIONAL ENTROPY BOTTLENECK . We leverage the Markov assumptions in Zi ← X ↔ Y , i ∈ { 1 , 2 } , and estimate empirically on the classification training dataset of N i.i.d . points $D = \{x_n, y_n\}_{n=1}^{N}$ , yn ∈ { 1 , . . . , K } . Following Fischer ( 2020 ) , $\mathrm{CEB}_{\beta_{ceb}}(Z_i) = \tfrac{1}{\beta_{ceb}} I(X;Z_i \mid Y) - I(Y;Z_i)$ is variationally upper bounded by :
$\mathrm{VCEB}_{\beta_{ceb}}(\{e_i, b_i, c_i\}) = \frac{1}{N} \sum_{n=1}^{N} \Big[ \tfrac{1}{\beta_{ceb}}\, D_{\mathrm{KL}}\big( e_i(z \mid x_n) \,\|\, b_i(z \mid y_n) \big) - \mathbb{E}_{\epsilon}\big[ \log c_i\big( y_n \mid e_i(x_n, \epsilon) \big) \big] \Big] . \qquad (2)$
See the explanation in Appendix E.4 . ei ( z|x ) is the true features distribution generated by the encoder , ci ( y|z ) is a variational approximation of the true distribution p ( y|z ) by the classifier , and bi ( z|y ) is a variational approximation of the true distribution p ( z|y ) by the backward encoder . This loss is applied separately to each member θi = { ei , ci , bi } , i ∈ { 1 , 2 } . Practically , we parameterize all distributions with Gaussians . The encoder ei is a traditional neural network feature extractor ( e.g . ResNet-32 ) that learns distributions ( means and covariances ) rather than deterministic points in the feature space . That is why ei transforms an image into two tensors : a features-mean eµi ( x ) and a diagonal features-covariance eσi ( x ) , each of size d ( e.g . 64 ) . The classifier ci is a dense layer that transforms a features-sample z into logits to be aligned with the target y through conditional cross-entropy . z is obtained via the reparameterization trick : $z = e_i(x, \epsilon) = e^{\mu}_i(x) + \epsilon \odot e^{\sigma}_i(x)$ with $\epsilon \sim \mathcal{N}(0, 1)$ . Finally , the backward encoder bi is implemented as an embedding layer of size ( K , d ) that maps the K classes to class-features-means bµi ( z|y ) of size d , as we set the class-features-covariance to 1 . The Gaussian parametrization also enables the exact computation of the DKL ( see Appendix E.3 ) , which forces ( 1 ) the features-mean eµi ( x ) to converge to the class-features-mean bµi ( z|y ) and ( 2 ) the predicted features-covariance eσi ( x ) to be close to 1 . The advantage of VCEB versus VIB ( Alemi et al. , 2017 ) is the class-conditional bµi ( z|y ) versus the non-conditional bµi ( z ) , which protects class information .
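Because both distributions are diagonal Gaussians with the class-features-covariance fixed to 1, the KL term in Eq. (2) has the closed form sketched below; variable names are illustrative.

```python
import torch

def vceb_kl(mu_e, sigma_e, mu_b):
    """KL( N(mu_e, diag(sigma_e^2)) || N(mu_b, I) ), summed over the d
    feature dimensions: -log(sigma) + (sigma^2 + (mu_e - mu_b)^2 - 1) / 2."""
    var = sigma_e ** 2
    return (0.5 * (var + (mu_e - mu_b) ** 2 - 1.0)
            - torch.log(sigma_e)).sum(dim=-1)
```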
2.B.2 ADVERSARIAL ESTIMATION OF THE CONDITIONAL REDUNDANCY . Theoretical Problem . We now focus on estimating I ( Z1 ; Z2|Y ) , with no such Markov properties . Despite being a pivotal measure , mutual information estimation historically relied on nearest neighbors ( Singh et al. , 2003 ; Kraskov et al. , 2004 ; Gao et al. , 2018 ) or density kernels ( Kandasamy et al. , 2015 ) that do not scale well in high dimensions . We benefit from recent advances in neural estimation of mutual information ( Belghazi et al. , 2018 ) , built on optimizing Donsker & Varadhan ( 1975 ) dual representations of the KL divergence . Mukherjee et al . ( 2020 ) extended this formulation to conditional mutual information estimation :
$\mathrm{CR} = I(Z_1;Z_2 \mid Y) = D_{\mathrm{KL}}\big( P(Z_1,Z_2,Y) \,\|\, P(Z_1,Y)\,p(Z_2 \mid Y) \big) = \sup_f \; \mathbb{E}_{x \sim p(z_1,z_2,y)}[f(x)] - \log\big( \mathbb{E}_{x \sim p(z_1,y)p(z_2|y)}[\exp(f(x))] \big) = \mathbb{E}_{x \sim p(z_1,z_2,y)}[f^*(x)] - \log\big( \mathbb{E}_{x \sim p(z_1,y)p(z_2|y)}[\exp(f^*(x))] \big) ,$
where f∗ computes the pointwise likelihood ratio , i.e. , $f^*(z_1,z_2,y) = \frac{p(z_1,z_2,y)}{p(z_1,y)\,p(z_2 \mid y)}$ .
Empirical Neural Estimation . We estimate CR ( 1 ) using the empirical data distribution and ( 2 ) replacing $f^* = \frac{w^*}{1-w^*}$ by the output of a discriminator w , trained to imitate the optimal w∗ . Let B be a batch sampled from the observed joint distribution p ( z1 , z2 , y ) = p ( e1 ( z|x ) , e2 ( z|x ) , y ) ; we select the features extracted by the two members from one input . Let Bp be sampled from the product distribution p ( z1 , y ) p ( z2|y ) = p ( e1 ( z|x ) , y ) p ( z2|y ) ; we select the features extracted by the two members from two different inputs that share the same class . We train a multi-layer network w on the binary task of distinguishing these two distributions with the standard cross-entropy loss :
$\mathcal{L}_{ce}(w) = -\frac{1}{|B| + |B_p|} \Big[ \sum_{(z_1,z_2,y) \in B} \log w(z_1,z_2,y) + \sum_{(z_1,z_2',y) \in B_p} \log\big( 1 - w(z_1,z_2',y) \big) \Big] . \qquad (3)$
If w is calibrated ( see Appendix B.3 ) , a consistent ( Mukherjee et al. , 2020 ) estimate of CR is :
$\hat{I}^{\mathrm{DV}}_{\mathrm{CR}} = \underbrace{\frac{1}{|B|} \sum_{(z_1,z_2,y) \in B} \log f(z_1,z_2,y)}_{\text{Diversity}} - \underbrace{\log \frac{1}{|B_p|} \sum_{(z_1,z_2',y) \in B_p} f(z_1,z_2',y)}_{\text{Fake correlations}} , \quad \text{with } f = \frac{w}{1-w} .$
Intuition . By training our members to minimize $\hat{I}^{\mathrm{DV}}_{\mathrm{CR}}$ , we force triples from the joint distribution to be indistinguishable from triples from the product distribution . Let us imagine that two features are conditionally correlated : some spurious information is shared between the features only when they are from the same input and not from two inputs ( from the same class ) . This correlation can be informative about a detail in the background , or an unexpected shape in the image , that is rarely found in samples from this input ’ s class . In that case , the product and joint distributions are easily distinguishable by the discriminator . The first adversarial component will force the extracted features to reduce the correlation , and ideally one of the two features loses this information : it reduces redundancy and increases diversity . The second term would create fake correlations between features from different inputs . As we are not interested in a precise estimation of the CR , we get rid of this second term , which , empirically , did not increase diversity , as detailed in Appendix G :
$\hat{\mathcal{L}}^{\mathrm{DV}}_{\mathrm{CR}}(e_1, e_2) = \frac{1}{|B|} \sum_{(z_1,z_2,y) \in B \sim p(e_1(z|x),\, e_2(z|x),\, y)} \log f(z_1,z_2,y) . \qquad (4)$
Summary . First , we train each member for classification with VCEB from equation 2 , as shown in Step 1 of Figure 4 . Second , as shown in Step 2 of Figure 4 , the discriminator , conditioned on the class Y , learns to distinguish features sampled from one image versus features sampled from two images belonging to Y . Simultaneously , both members adversarially ( Goodfellow et al. , 2014 ) delete spurious correlations to reduce the CR estimate from equation 4 with differentiable signals : it conditionally aligns features . We provide pseudo-code in Appendix B.4 . While we derive similar losses for IBR and CEBR in Appendix E.5 , the full DICE loss is finally :
$\mathcal{L}_{\mathrm{DICE}}(\theta_1, \theta_2) = \mathrm{VCEB}_{\beta_{ceb}}(\theta_1) + \mathrm{VCEB}_{\beta_{ceb}}(\theta_2) + \delta_{cr}\, \hat{\mathcal{L}}^{\mathrm{DV}}_{\mathrm{CR}}(e_1, e_2) . \qquad (5)$
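A compact sketch of the two adversarial pieces, assuming a discriminator w that maps a (z1, z2, y) triple to a probability in (0, 1); the normalization of Eq. (3) is approximated by averaging the two terms, and the epsilon guard is our own addition for numerical safety.

```python
import torch

def discriminator_loss(w, z1, z2, z2_prime, y):
    # Eq. (3): separate joint triples (z1, z2, y) from product triples
    # (z1, z2', y), where z2' comes from another input of the same class y.
    joint = w(z1, z2, y)
    prod = w(z1, z2_prime, y)
    return -(torch.log(joint).mean() + torch.log(1.0 - prod).mean()) / 2.0

def conditional_redundancy_penalty(w, z1, z2, y, eps=1e-6):
    # Eq. (4): the retained "diversity" term, with likelihood ratio
    # f = w / (1 - w); the encoders are trained to minimize it.
    p = w(z1, z2, y)
    return torch.log(p / (1.0 - p) + eps).mean()
```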
2.C FULL PROCEDURE WITH M MEMBERS . We expand our objective to an ensemble with M > 2 members . We only consider pairwise interactions , for simplicity , to keep a quadratic rather than exponential growth in the number of components , and truncate higher-order interactions , e.g . I ( Zi ; Zj , Zk|Y ) ( see Appendix F.1 ) . Driven by the previous variational and neural estimations , we train θi = { ei , bi , ci } , i ∈ { 1 , . . . , M } , on :
$\mathcal{L}_{\mathrm{DICE}}(\theta_{1:M}) = \sum_{i=1}^{M} \mathrm{VCEB}_{\beta_{ceb}}(\theta_i) + \frac{\delta_{cr}}{M-1} \sum_{i=1}^{M} \sum_{j=i+1}^{M} \hat{\mathcal{L}}^{\mathrm{DV}}_{\mathrm{CR}}(e_i, e_j) , \qquad (6)$
while adversarially training w on Lce . Batch B is sampled from the concatenation of the joint distributions p ( zi , zj , y ) where i , j ∈ { 1 , . . . , M } , i ≠ j , while Bp is sampled from the product distribution p ( zi , y ) p ( zj |y ) . We use the same discriminator w for all $\binom{M}{2}$ estimates . This improves scalability by reducing the number of parameters to be learned . Indeed , an additional member in the ensemble only adds 256 · d trainable weights in w , where d is the feature dimension . See Appendix B.3 for additional information related to the discriminator w .
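Putting Eq. (6) together for M members, as a sketch with illustrative names; `vceb_losses` are the per-member VCEB values and `cr_fn` wraps the shared discriminator's penalty from Eq. (4).

```python
def dice_ensemble_loss(vceb_losses, cr_fn, features, delta_cr):
    """Eq. (6): sum of VCEB terms plus delta_cr / (M - 1) times the sum of
    pairwise conditional-redundancy penalties over all member pairs."""
    M = len(features)
    loss = sum(vceb_losses)
    for i in range(M):
        for j in range(i + 1, M):
            loss = loss + (delta_cr / (M - 1)) * cr_fn(features[i], features[j])
    return loss
```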
This paper proposes a method of learning ensembles that adhere to an "ensemble version" of the information bottleneck principle. Whereas the information bottleneck principle says the representation should avoid spurious correlations between the representation (Z) and the training data (X) that is not useful for predicting the labels (Y), i.e. I(X;Z) or I(X;Z|Y), this paper proposes that ensembles should additionally avoid spurious correlations between the ensemble members that aren't useful for predicting Y, i.e. I(Z_i; Z_j| Y). They show empirically that the coefficient on this term increases diversity at the expense of decreasing accuracy of individual members of the ensemble.
SP:7fb11c941e8d79248ce5ff7caa0535a466303395
Zero-shot Synthesis with Group-Supervised Learning
1 INTRODUCTION . Primates perform well at generalization tasks . If presented with a single visual instance of an object , they can often immediately generalize and envision the object in different attributes , e.g. , in different 3D pose ( Logothetis et al. , 1995 ) . Primates can readily do so , as their previous knowledge allows them to be cognizant of attributes . Machines , by contrast , are most commonly trained on sample features ( e.g. , pixels ) , not taking into consideration attributes that gave rise to those features . To aid machine cognition of visual object attributes , a class of algorithms focuses on learning disentangled representations ( Kingma & Welling , 2014 ; Higgins et al. , 2017 ; Burgess et al. , 2018 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ) , which map visual samples onto a latent space that separates the information belonging to different attributes . These methods show disentanglement by interpolating between attribute values ( e.g. , interpolating pose , etc . ) . However , these methods usually process one sample at a time , rather than contrasting or reasoning about a group of samples . We posit that semantic links across samples could lead to better learning . We are motivated by the visual generalization of primates . We seek a method that can synthesize realistic images for arbitrary queries ( e.g. , a particular car , in a given pose , on a given background ) , which we refer to as controlled synthesis . We design a method that enforces semantic consistency of attributes , facilitating controlled synthesis by leveraging semantic links between samples . Our method maps samples onto a disentangled latent representation space that ( i ) consists of subspaces , each encoding one attribute ( e.g. , identity , pose , ... ) , and ( ii ) is such that two visual samples that share an attribute value ( e.g. , both have identity “ car ” ) have identical latent values in the shared attribute subspace ( identity ) , even if other attribute values ( e.g. , pose ) differ . To achieve this , we propose a general learning framework : Group Supervised Learning ( GSL , Sec . 3 ) , which provides a learner ( e.g. , a neural network ) with groups of semantically-related training examples , represented as a multigraph . Given a query of attributes , GSL proposes groups of training examples with attribute combinations that are useful for synthesizing a test example satisfying the query ( Fig . 1 ) . This endows the network with an envisioning capability . In addition to applications in graphics , controlled synthesis can also augment training sets for better generalization on machine learning tasks ( Sec . 6.3 ) . As an instantiation of GSL , we propose an encoder-decoder network for zero-shot synthesis : the Group-Supervised Zero-Shot Synthesis Network ( GZS-Net , Sec . 4 ) . While learning ( Sec . 4.2 & 4.3 ) , we repeatedly draw a group of semantically-related examples , as informed by a multigraph created by GSL . GZS-Net encodes the group examples to obtain latent vectors , then swaps entries for one or more attributes in the latent space across examples , through multigraph edges , then decodes into an example within the group ( Sec . 4.2 ) .
Our contributions are: (i) We propose Group-Supervised Learning (GSL), explain how it casts its admissible datasets into a multigraph, and show how it can be used to express learning from semantically-related groups and to synthesize samples with controllable attributes; (ii) We show one instantiation of GSL: the Group-Supervised Zero-Shot Synthesis Network (GZS-Net), trained on groups of examples with reconstruction objectives; (iii) We demonstrate that GZS-Net trained with GSL outperforms state-of-the-art alternatives for controllable image synthesis on existing datasets; (iv) We provide a new dataset, Fonts (http://ilab.usc.edu/datasets/fonts), with its generating code. It contains 1.56 million images and their attributes. Its simplicity allows rapid idea prototyping for learning disentangled representations. 2 RELATED WORK . We review research areas that share similarities with our work, to position our contribution. Self-Supervised Learning (e.g., Gidaris et al. (2018)) admits a dataset containing features of training samples (e.g., upright images) and maps it onto an auxiliary task (e.g., rotated images): dataset examples are drawn and a random transformation (e.g., rotate 90°) is applied to each. The task could be to predict the transformation (e.g., =90°) from the transformed features (e.g., the rotated image). Our approach is similar, in that it also creates auxiliary tasks; however, the tasks we create involve semantically-related groups of examples, rather than one example at a time. Disentangled Representation Learning covers methods that infer latent factors given example visible features, under a generative assumption that each latent factor is responsible for generating one semantic attribute (e.g., color). Following Variational Autoencoders (VAEs, Kingma & Welling, 2014), a class of models (including Higgins et al., 2017; Chen et al., 2018) achieve disentanglement implicitly, by incorporating into the objective a distance measure, e.g., KL-divergence, encouraging the latent factors to be statistically independent. While these methods can disentangle the factors without knowing them beforehand, they are unfortunately unable to generate novel combinations not witnessed during training (e.g., generating images of a red car without any in training). On the other hand, our method requires knowing the semantic relationships between samples (e.g., which objects are of the same identity and/or color), but can then synthesize novel combinations (e.g., by stitching latent features of "any car" plus "any red object"). Conditional synthesis methods can synthesize a sample (e.g., an image), and some use information external to the synthesized modalities, e.g., a natural language sentence (Zhang et al., 2017; Hong et al., 2018) or a class label (Mirza & Osindero, 2014; Tran et al., 2017). Ours differ, in that our "external information" takes the form of semantic relationships between samples. There are methods based on GANs (Goodfellow et al., 2014) that also utilize semantic relationships, including Motion Re-targeting (Yang et al., 2020), which unfortunately requires domain-specific hand-engineering (detecting and tracking human body parts). We instead design and apply our method on different tasks (including people's faces, vehicles, and fonts; see Fig. 1). Further, we compare against two recent GAN methods, StarGAN (Choi et al., 2018) and ELEGANT (Xiao et al., 2018), as they are state-of-the-art GAN methods for amending visual attributes onto images.
While they are powerful at carrying out local image transformations (within a small patch, e.g., changing skin tone or hair texture), our method better maintains global information: when rotating the main object, the scene also rotates with it, in a semantically coherent manner. Importantly, our learning framework allows expressing simpler network architectures, such as feed-forward auto-encoders trained with only reconstruction objectives, as opposed to GANs, with potential difficulties such as a lack of convergence guarantees. Zero-shot learning also consumes side-information. For instance, the models of Lampert (2009) and Atzmon & Chechik (2018) learn from object attributes, like our method. However, (i) those models are supervised to accurately predict attributes, (ii) they train and infer one example at a time, and (iii) they are concerned with classifying unseen objects. We differ in that (i) no learning gradients (supervision signal) are derived from the attributes, since (ii) the attributes are used only to group the examples (based on shared attribute values), and (iii) we are concerned with generation rather than classification: we want to synthesize an object in previously-unseen attribute combinations. Graph Neural Networks (GNNs) (Scarselli et al., 2009) are a class of models defined on graph-structured data. They are applicable to our method, as we propose to create a multigraph connecting training samples. In fact, our method can be described as a GNN with message passing functions (Gilmer et al., 2017) that are aware of the latent space partitioning per attribute (explained in Sec. 4). Nonetheless, for self-containment, we introduce our method without the GNN framework. 3 GROUP-SUPERVISED LEARNING . 3.1 DATASETS ADMISSIBLE BY GSL . Formally, a dataset admissible by GSL contains n samples D = {x^(i)}_{i=1}^n, where each example is accompanied by m attributes D_a = {(a_1^(i), a_2^(i), ..., a_m^(i))}_{i=1}^n. Each attribute value is a member of a countable set: a_j ∈ A_j. For instance, pertaining to visual scenes, A_1 can denote foreground colors A_1 = {red, yellow, ...}, A_2 could denote background colors, A_3 could correspond to foreground identity, and A_4 to (quantized) orientation. Such datasets have appeared in the literature, e.g., in Borji et al. (2016); Matthey et al. (2017); Langner et al. (2010); Lai et al. (2011). 3.2 AUXILIARY TASKS VIA MULTIGRAPHS . Given a dataset of n samples and their attributes, we define a multigraph M with node set [1..n]. Two nodes i, k ∈ [1..n] with i ≠ k are connected with edge labels M(i, k) ⊆ [1..m] given by: M(i, k) = { j | a_j^(i) = a_j^(k) ; j ∈ [1..m] }. In particular, M defines a multigraph, with |M(i, k)| denoting the number of edges connecting nodes i and k, which equals the number of their shared attributes. Fig. 2 depicts a (sub-)multigraph for the Fonts dataset (Sec. 5.1). Definition 1 (COVER(S, i)): Given a node set S ⊆ [1..n] and a node i ∈ [1..n], we say S covers i if every attribute value of i appears in at least one member of S. Formally: COVER(S, i) ⟺ [1..m] = ⋃_{k∈S} M(i, k). (1) When COVER(S, i) holds, there are two mutually-exclusive cases: either i ∈ S, or i ∉ S, shaded respectively as green and blue in Fig. 2(b).
The first case trivially holds even for small S, e.g., COVER({i}, i) holds for all i. However, we are interested in non-trivial sets where |S| > 1, as sets with |S| = 1 would reduce our proposed network (Sec. 4) to a standard auto-encoder. The second case is crucial for zero-shot synthesis. Suppose the (image) features of node i (in Fig. 2(b)) are not given. We can then search for a set S_1 with i ∉ S_1 such that COVER(S_1, i) holds; under this assumption, S_1 contains sufficient information to synthesize i's features even though they are not given. Up to this point, we have made no assumptions about how the pairs (S, i) are extracted (mined) from the multigraph such that COVER(S, i) holds. In the sequel, we train with |S| = 2 and i ∈ S. We find that this particular specialization of GSL is easy to program, and we leave the analysis of mining different kinds of cover sets to future work.
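The multigraph and the COVER test above translate directly into code. Below is a hedged sketch; the data layout (attributes as tuples) and function names are illustrative choices, not the paper's implementation.

```python
from itertools import combinations

def build_multigraph(attributes):
    """attributes[i] is the tuple (a_1, ..., a_m) of node i.
    Returns M with M[(i, k)] = {j : a_j^(i) == a_j^(k)}, the edge labels of
    the multigraph; |M[(i, k)]| is the number of shared attributes."""
    n, m = len(attributes), len(attributes[0])
    M = {}
    for i, k in combinations(range(n), 2):
        shared = {j for j in range(m) if attributes[i][j] == attributes[k][j]}
        if shared:
            M[(i, k)] = M[(k, i)] = shared
    return M

def covers(M, S, i, m):
    """COVER(S, i): every attribute value of node i appears in some node of S.
    A node trivially shares all m attributes with itself, so i in S covers i."""
    covered = set()
    for k in S:
        if k == i:
            covered |= set(range(m))
        else:
            covered |= M.get((i, k), set())
    return covered == set(range(m))
```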
The paper proposes a new training framework, namely GSL, for novel content synthesis. GSL enables learning disentangled representations of tangible attributes and achieves novel image synthesis by recombining those swappable components in a zero-shot setting. The framework leverages the underlying semantic links across samples, which are instantiated as a multigraph. A cycle-consistent reconstruction loss, as well as a plain reconstruction loss, is computed on synthetic samples obtained from swapped latent representations.
Asymmetric self-play for automatic goal discovery in robotic manipulation
1 INTRODUCTION . We are motivated to train a single goal-conditioned policy (Kaelbling, 1993) that can solve any robotic manipulation task that a human may request in a given environment. In this work, we make progress towards this goal by solving a robotic manipulation problem in a table-top setting, where the robot's task is to change the initial configuration of a variable number of objects on a table to match a given goal configuration. This problem is simple in its formulation, but likely to challenge a wide variety of cognitive abilities of a robot as objects become diverse and goals become complex. Motivated by the recent success of deep reinforcement learning for robotics (Levine et al., 2016; Gu et al., 2017; Hwangbo et al., 2019; OpenAI et al., 2019a), we tackle this problem using deep reinforcement learning on a very large training distribution. An open question in this approach is how we can build a training distribution rich enough to achieve generalization to many unseen manipulation tasks. This involves defining both an environment's initial state distribution and a goal distribution. The initial state distribution determines how we sample a set of objects and their configuration at the beginning of an episode, and the goal distribution defines how we sample target states given an initial state. In this work, we focus on a scalable way to define a rich goal distribution. The research community has started to explore automated ways of defining goal distributions. For example, previous works have explored learning a generative model of goal distributions (Florensa et al., 2018; Nair et al., 2018b; Racaniere et al., 2020) and collecting teleoperated robot trajectories to identify goals (Lynch et al., 2020; Gupta et al., 2020). In this paper, we extend an alternative approach called asymmetric self-play (Sukhbaatar et al., 2018b;a) for automated goal generation. Asymmetric self-play trains two RL agents named Alice and Bob. Alice learns to propose goals that Bob is likely to fail at, and Bob, a goal-conditioned policy, learns to solve the proposed goals. Alice proposes a goal by manipulating objects, and Bob has to solve the goal starting from the same initial state as Alice's. By embodying these two agents in the same robotic hardware, this setup ensures that every proposed goal comes with at least one solution: Alice's trajectory. There are two main reasons why we consider asymmetric self-play a promising goal generation and learning method. First, any proposed goal is achievable, meaning that there exists at least one solution trajectory that Bob can follow to achieve the goal. Because of this property, we can exploit Alice's trajectory to provide an additional learning signal to Bob via behavioral cloning. This additional learning signal alleviates the overhead of heuristically designing a curriculum or shaping rewards for learning. Second, this approach does not require labor-intensive data collection. In this paper, we show that asymmetric self-play can be used to train a goal-conditioned policy for complex object manipulation tasks, and that the learned policy can zero-shot generalize to many manually designed holdout tasks, which consist of either previously unseen goals, previously unseen objects, or both.
To the best of our knowledge, this is the first work that presents zero-shot generalization to many previously unseen tasks by training purely with asymmetric self-play. (Asymmetric self-play was proposed in Sukhbaatar et al. (2018b;a), but there it supplements training while the majority of training is conducted on the target tasks; zero-shot generalization to unseen tasks was not evaluated.) 2 PROBLEM FORMULATION . Our training environment for robotic manipulation consists of a robot arm with a gripper attached and a wide range of objects placed on a table surface (Figure 1a, 1b). The goal-conditioned policy learns to control the robot to rearrange randomly placed objects (the initial state) into a specified goal configuration (Figure 1c). We aim to train a policy on a single training distribution and to evaluate its performance over a suite of holdout tasks which are independently designed and not explicitly present during training (Figure 2a). In this work, we construct the training distribution via asymmetric self-play (Figure 2b) to achieve generalization to many unseen holdout tasks (Figure 1c). Mathematical formulation. Formally, we model the interaction between an environment and a goal-conditioned policy as a goal-augmented Markov decision process M = ⟨S, A, P, R, G⟩, where S is the state space, A is the action space, P : S × A × S → ℝ denotes the transition probability, G ⊆ S specifies the goal space, and R : S × G → ℝ is a goal-specific reward function. A goal-augmented trajectory sequence is {(s_0, g, a_0, r_0), ..., (s_t, g, a_t, r_t)}, where the goal is provided to the policy as part of the observation at every step. We say a goal is achieved if s_t is sufficiently close to g (Appendix A.2). With slightly overloaded notation, we define the goal distribution G(g|s_0) as the probability of a goal state g ∈ G conditioned on an initial state s_0 ∈ S. Training goal distribution. A naive design of the goal distribution G(g|s_0) is to place objects uniformly at random on the table, but this is unlikely to generate interesting goals, such as an object picked up and held above the table surface by a robot gripper. Another possible approach, collecting tasks and goals manually, is expensive and hard to scale. We instead sidestep these issues and automatically generate goals via training based on asymmetric self-play (Sukhbaatar et al., 2018b;a). Asymmetric self-play uses a policy named Alice, π_A(a|s), to set goals and a goal-conditioned policy named Bob, π_B(a|s, g), to solve the goals proposed by Alice, as illustrated in Figure 2b. We run π_A to generate a trajectory τ_A = {(s_0, a_0, r_0), ..., (s_T, a_T, r_T)}, and the last state is labelled as a goal g for π_B to solve. The goal distribution G(s_T = g|s_0) is fully determined by π_A, and we train Bob only on this goal distribution. We therefore speak of zero-shot generalization when Bob generalizes to a holdout task which is not explicitly encoded into the training distribution. Evaluation on holdout tasks. To assess zero-shot generalization of π_B(a|s, g) from our training setup, we hand-designed a suite of holdout tasks with goals that are never directly incorporated into the training distribution. Some holdout tasks also feature previously unseen objects.
The holdout tasks are designed either to test whether a specific skill has been learned, such as the ability to pick up objects (Figure 3), or to represent a semantically interesting task, such as setting a table (Figure 1c). Appendix B.6 describes the list of holdout tasks that we use in our experiments. Note that none of the holdout tasks are used for training π_B(a|s, g). 3 ASYMMETRIC SELF-PLAY . To train the Alice policy π_A(a|s) and the Bob policy π_B(a|s, g), we run the following multi-goal game within one episode, as illustrated in Figure 2b: 1. An initial state s_0 is sampled from an initial state distribution. Alice and Bob are instantiated into their own copies of the environment, and alternate turns as follows. 2. Alice's turn. Alice interacts with its environment for a fixed number of T steps and may rearrange the objects. The state at the end of Alice's turn, s_T, will be used as a goal g for Bob. If the proposed goal is invalid (e.g., if Alice has not moved any objects, or if an object has fallen off the table), the episode terminates. 3. Bob's turn. Bob receives reward if it successfully achieves the goal g in its environment. Bob's turn ends when it succeeds at achieving the goal or reaches a timeout. If Bob's turn ends in a failure, its remaining turns are skipped and treated as failures, while we let Alice keep generating goals. 4. Alice receives reward if Bob fails to solve the goal that Alice proposed. Steps 2–3 are repeated until 5 goals are set by Alice or Alice proposes an invalid goal, and then the episode terminates. The competition created by this game encourages Alice to propose goals that are increasingly challenging to Bob, while Bob is forced to solve increasingly complex goals. The multi-goal setup was chosen to allow Bob to take advantage of environmental information discovered earlier in the episode to solve its remaining goals, which OpenAI et al. (2019a) found to be important for transfer to physical systems. Note, however, that in this work we focus on solving goals in simulation only. To improve stability and avoid forgetting, we have Alice and Bob play against past versions of their respective opponents in 20% of games. More details about the game structure and pseudocode for training with asymmetric self-play are available in Appendix A. 3.1 REWARD STRUCTURE . For Bob, we assign sparse goal-conditioned rewards. We measure the positional and rotational distance between an object and its goal state as the Euclidean distance and the Euler-angle rotational distance, respectively. Whenever both distance metrics are below a small error (the success threshold), the object is deemed to be placed close enough to the goal state and Bob immediately receives +1 reward. But if the object is moved away from a goal state that it reached in past steps, Bob obtains −1 reward, so that the sum of per-object reward is at most 1 during a given turn. When all of the objects are in their goal state, Bob receives 5 additional reward and its turn is over. For Alice, we assign a reward after Bob has attempted to solve the goal: 5 reward if Bob failed at solving the goal, and 0 if Bob succeeded. We shape Alice's reward slightly by adding 1 reward if it has set a valid goal, defined to be when no object has fallen off the table and any object has been moved more than the success threshold. An additional penalty of 3 reward is introduced when Alice sets a goal with objects outside of the placement area, defined to be a fixed 3D volume within the view of the robot's camera. More details are discussed in Appendix A.2.
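As a concrete illustration of Bob's sparse reward, here is a minimal per-step sketch; the threshold values and the array-based bookkeeping are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def bob_reward(pos_dist, rot_dist, achieved, pos_eps=0.04, rot_eps=0.2):
    """One-step sketch of Bob's sparse, per-object reward within a turn.
    pos_dist, rot_dist: per-object distances to the goal state.
    achieved: boolean array of which objects were at the goal previously
    (updated in place). Threshold values here are illustrative only."""
    at_goal = (pos_dist < pos_eps) & (rot_dist < rot_eps)
    reward = float(np.sum(at_goal & ~achieved))   # +1 per newly placed object
    reward -= float(np.sum(~at_goal & achieved))  # -1 per object moved away again
    achieved[:] = at_goal
    if at_goal.all():
        reward += 5.0                             # bonus: all objects at the goal
    return reward
```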
3.2 ALICE BEHAVIORAL CLONING (ABC) . One of the main benefits of using asymmetric self-play is that each generated goal comes with at least one solution for achieving it: Alice's trajectory. Similarly to Sukhbaatar et al. (2018a), we exploit this property by training Bob with Behavioral Cloning (BC) from Alice's trajectory, in addition to the reinforcement learning (RL) objective. We call this learning mechanism Alice Behavioral Cloning (ABC). We propose several improvements over the original formulation in Sukhbaatar et al. (2018a). Demonstration trajectory filtering. Compared to BC from expert demonstrations, using Alice's trajectory requires extra care. Alice's trajectory is likely to be suboptimal for solving the goal, as Alice might arrive at the final state merely by accident. Therefore, we only consider trajectories with goals that Bob failed to solve as demonstrations, to avoid distracting Bob with suboptimal examples. Whenever Bob fails, we relabel Alice's trajectory τ_A into a goal-augmented version τ_BC = {(s_0, s_T, a_0, r_0), ..., (s_T, s_T, a_T, r_T)} as a demonstration for BC, where s_T is the goal. PPO-style BC loss clipping. The objective for training Bob is L = L_RL + λ L_abc, where L_RL is an RL objective, L_abc is the ABC loss, and λ is a hyperparameter controlling the relative importance of the BC loss. We set λ = 0.5 throughout all experiments. A naive BC loss is to minimize the negative log-likelihood of demonstrated actions, E_{(s_t, g_t, a_t) ∈ D_BC}[ −log π_B(a_t|s_t, g_t; θ) ], where D_BC is a mini-batch of demonstration data and π_B is parameterized by θ. We found that overly aggressive policy changes triggered by BC sometimes led to learning instabilities. To prevent the policy from changing too drastically, we introduce PPO-style loss clipping (Schulman et al., 2017) on the BC loss by setting the advantage Â = 1 in the clipped surrogate objective: L_abc = −E_{(s_t, g_t, a_t) ∈ D_BC}[ clip( π_B(a_t|s_t, g_t; θ) / π_B(a_t|s_t, g_t; θ_old), 1 − ε, 1 + ε ) ], where π_B(a_t|s_t, g_t; θ) is Bob's likelihood of a demonstrated action under the parameters that we are optimizing, and π_B(a_t|s_t, g_t; θ_old) is the likelihood under Bob's behavior policy (at the time of demonstration collection) evaluated on the demonstration. This behavior policy is identical to the policy that we use to collect RL trajectories. With Â = 1, this objective optimizes the naive BC loss, but clips the loss whenever π_B(a_t|s_t, g_t; θ) / π_B(a_t|s_t, g_t; θ_old) is bigger than 1 + ε, to prevent the policy from changing too much. ε is a clipping threshold; we use ε = 0.2 in all experiments.
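The clipped BC loss above can be sketched as follows; this is a schematic PyTorch-style implementation consistent with the equation, with tensor shapes and the RL loss left abstract.

```python
import torch

def abc_loss(logp_new, logp_old, eps=0.2):
    """PPO-style clipped behavioral-cloning loss with the advantage fixed to 1.
    logp_new: log pi_B(a_t|s_t, g_t; theta) on relabeled Alice demonstrations.
    logp_old: log-likelihood under the behavior policy at collection time,
    treated as a constant. Minimizing the negative clipped ratio recovers the
    naive BC loss while capping how far a single update can move pi_B."""
    ratio = torch.exp(logp_new - logp_old.detach())
    return -torch.clamp(ratio, 1.0 - eps, 1.0 + eps).mean()

# Total objective for Bob, with lambda = 0.5 as in the text:
# loss = rl_loss + 0.5 * abc_loss(logp_new, logp_old)
```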
This paper presents an approach to learning goal-conditioned policies that relies on self-play to set goals and discover a curriculum of tasks for learning. Alice and Bob are the agents. Alice's task is to set a goal by taking a number of steps in the environment, and she is rewarded when the goal is too challenging for Bob to solve. Bob's task is to solve the goal by trying to reproduce the end state of Alice's demonstration. As a result, the learned policy performs various tasks and can work in zero-shot settings.
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
1 Introduction . Graph neural networks (GNNs) have been intensively studied recently [29, 26, 39, 68], due to their established performance on various real-world tasks [15, 69, 53], as well as close connections to spectral graph theory [12, 9, 16]. While most GNN architectures are not very complicated, the training of GNNs can still be costly regarding both memory and computation resources on real-world large-scale graphs [10, 63]. Moreover, it is intriguing to transfer learned structural information across different graphs and even domains in settings like few-shot learning [56, 44, 25]. Therefore, several very recent studies have been conducted on the transferability of GNNs [21, 23, 22, 59, 31, 3, 47]. However, it is unclear in what situations the models will excel or fail, especially when the pre-training and fine-tuning tasks are different. To provide rigorous analysis of, and guarantees on, the transferability of GNNs, we focus on the setting of direct-transfering between the source and target graphs, under a setting analogous to "domain adaptation" [7, 59]. In this work, we establish a theoretically grounded framework for the transfer learning of GNNs, and leverage it to design a practically transferable GNN model (code and processed data are available at https://github.com/GentleZhu/EGI). Figure 1 gives an overview of our framework. It is based on a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features, which allows us to define graph information and similarity, so as to analyze GNN transferability (§3). This view motivates us to design EGI, a novel GNN training objective based on ego-graph information maximization, which is effective in capturing the graph information as we define it (§3.1). We then further specify the requirement on transferable node features and analyze the transferability of EGI as dependent on the local graph Laplacians of the source and target graphs (§3.2). All of our theoretical conclusions have been directly validated through controlled synthetic experiments (Table 1), where we use structural-equivalent role identification in a direct-transfering setting to analyze the impacts of different model designs, node features and source–target structure similarities on GNN transferability. In §4, we conduct real-world experiments on multiple publicly available network datasets. On the Airport and Gene graphs (§4.1), we closely follow the settings of our synthetic experiments and observe consistent but more detailed results supporting the design of EGI and the utility of our theoretical analysis. On the YAGO graphs (§4.2), we further evaluate EGI in the more generalized and practical setting of transfer learning with task-specific fine-tuning. We find our theoretical insights still indicative in such scenarios, where EGI consistently outperforms state-of-the-art GNN representation and transfer learning frameworks with significant margins. 2 Related Work . Representation learning on graphs has been studied for decades, with earlier spectral-based methods [6, 46, 52] theoretically grounded but hardly scaling up to graphs with over a thousand nodes.
With the emergence of neural networks, unsupervised network embedding methods based on the Skip-gram objective [37] have replenished the field [51, 14, 42, 45, 66, 62, 65]. Equipped with efficient structural sampling (random walk, neighborhood, etc.) and negative sampling schemes, these methods are easily parallelizable and scalable to graphs with thousands to millions of nodes. However, these models are essentially transductive, as they compute fully parameterized embeddings only for nodes seen during training, which are impossible to transfer to unseen graphs. More recently, researchers introduced the family of graph neural networks (GNNs), which are capable of inductive learning and generalizing to unseen nodes given meaningful node features [29, 12, 15, 67]. Yet, most existing GNNs require task-specific labels for training in a semi-supervised fashion to achieve satisfactory performance [29, 15, 53, 64], and their usage is limited to single graphs where the downstream task is fixed. To this end, several unsupervised GNNs have been presented, such as the auto-encoder-based ones like VGAE [28] and GNFs [35], as well as the deep-infomax-based ones like DGI [54] and InfoGraph [50]. Their potential in the transfer learning of GNNs remains unclear when the node features and link structures vary across different graphs. Although the architectures of popular GNNs such as GCN [29] may not be very complicated compared with heavy vision and language models, training a dedicated GNN for each graph can still be cumbersome [10, 63]. Moreover, as pre-training neural networks has proven successful in other domains [13, 18], it is intriguing to transfer well-trained GNNs from relevant source graphs to improve the modeling of target graphs or enable few-shot learning [59, 31, 3] when labeled data are scarce. In light of this, pioneering works have studied both generative [22] and discriminative [21, 23] GNN pre-training schemes. Though Graph Contrastive Coding [43] shares the most similar view of graph structures as ours, it utilizes contrastive learning across all graphs instead of focusing on the transfer learning between specific pairs. On the other hand, unsupervised domain adaptive GCNs [59] study the domain adaptation problem only when the source and target tasks are homogeneous. Most previous pre-training and self-supervised GNNs lack a rigorous analysis of their transferability and thus have unpredictable effectiveness. The only existing theoretical works on GNN transferability study the performance of GNNs across different permutations of a single original graph [33, 34] and the tradeoff between discriminability and transferability of GNNs [47]. We, instead, are the first to rigorously study the more practical setting of transferring GNNs across pairs of different source and target graphs. 3 Transferable Graph Neural Networks . In this paper, we design a more transferable training objective for GNNs (EGI) based on our novel view of essential graph information (§3.1). We then analyze its transferability as the gap between its abilities to model the source and target graphs, based on their local graph Laplacians (§3.2). Based on the connection between GNNs and spectral graph theory [29], we describe the output of a GNN as a combination of its input node features X, a fixed graph Laplacian L and learnable graph filters Ψ.
The goal of training a GNN is then to improve its utility by learning graph filters that are compatible with the other two components for specific tasks. In the graph transfer learning setting, where downstream tasks are often unknown during pre-training, we argue that the general utility of a GNN should be optimized and quantified w.r.t. its ability to capture the essential graph information in terms of the joint distribution of its topology structures and node features, which motivates us to design a novel ego-graph information maximization model (EGI) (§3.1). The general transferability of a GNN is then quantified by the gap between its abilities to model the source and target graphs. Under reasonable requirements, such as using structure-respecting node features as the GNN input, we analyze this gap for EGI based on the structural difference between two graphs w.r.t. their local graph Laplacians (§3.2). 3.1 Transferable GNN via Ego-graph Information Maximization . In this work, we focus on the direct-transfering setting, where a GNN is pre-trained on a source graph G_a in an unsupervised fashion and applied on a target graph G_b without fine-tuning. (In the experiments, we show our model to be generalizable to the more practical settings with task-specific pre-training and fine-tuning, while the study of rigorous bounds in such scenarios is left as future work.) Consider a graph G = {V, E}, where the set of nodes V is associated with certain features X and the set of edges E forms the graph structure. Intuitively, transfer learning will be successful only if both the features and structures of G_a and G_b are similar in some ways, so that the graph filters of a GNN learned on G_a are compatible with the features and structures of G_b. Graph kernels [57, 8, 30, 38] are well known for their capability of measuring similarity between pairs of graphs. Motivated by k-hop subgraph kernels [4], we introduce a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features. Since a GNN essentially encodes such k-hop ego-graph samples, this view allows us to give concrete definitions of the structural information of graphs in the transfer learning setting, which facilitates measuring the similarity (difference) among graphs. Yet, none of the existing GNN training objectives is capable of recovering such distributional signals of ego-graphs. To this end, we design Ego-Graph Information maximization (EGI), which alternatively reconstructs the k-hop ego-graph of each center node via mutual information maximization [20]. Definition 3.1 (k-hop ego-graph). We call a graph g_i = {V(g_i), E(g_i)} a k-hop ego-graph centered at node v_i if it has a k-layer centroid expansion [4] such that the greatest distance between v_i and any other node in the ego-graph is k, i.e., ∀ v_j ∈ V(g_i), |d(v_i, v_j)| ≤ k, where d(v_i, v_j) is the graph distance between v_i and v_j. In this paper, we use directed k-hop ego-graphs, whose direction is decided by whether the ego-graph is composed of incoming or outgoing edges to the center node, i.e., g_i and g̃_i. The results apply trivially to undirected graphs with g_i = g̃_i. Definition 3.2 (Structural information). Let 𝒢 be a topological space of sub-graphs. We view a graph G as samples of k-hop ego-graphs {g_i}_{i=1}^n drawn i.i.d. from 𝒢 with probability µ, i.e., g_i ~ µ i.i.d., for i = 1, ..., n.
The structural information of G is then defined to be the set of k-hop ego-graphs {g_i}_{i=1}^n together with their empirical distribution. As shown in Figure 1, three graphs G_0, G_1 and G_2 are characterized by a set of 1-hop ego-graphs and their empirical distributions, which allows us to quantify the structural similarity among graphs as shown in §3.2 (i.e., G_0 is more similar to G_1 than to G_2 under such characterization). In practice, the nodes in a graph G are characterized not only by their k-hop ego-graph structures but also by their associated node features. Therefore, G should be regarded as samples {(g_i, x_i)} drawn from the joint distribution P on the product space of 𝒢 and a node feature space 𝒳. Ego-Graph Information Maximization. Given a set of ego-graphs {(g_i, x_i)}_i drawn from an empirical joint distribution (g_i, x_i) ~ P, we aim to train a GNN encoder Ψ to maximize the mutual information MI(g_i, Ψ(g_i, x_i)) between the structural information g_i defined above (i.e., the k-hop ego-graph) and the node embedding z_i = Ψ(g_i, x_i). (Later in §3.2, we discuss the equivalence between MI(g_i, z_i) and MI((g_i, x_i), z_i) when the node features are structure-respecting.) To maximize the MI, a discriminator D(g_i, z_i) : E(g_i) × z_i → ℝ^+ is introduced to compute the probability that an edge e belongs to the given ego-graph g_i. We use the Jensen-Shannon MI estimator [20] in the EGI objective: L_EGI = −MI^(JSD)(𝒢, Ψ) = (1/N) ∑_{i=1}^N [ sp(D(g_i, z′_i)) + sp(−D(g_i, z_i)) ], (1) where sp(x) = log(1 + e^x) is the softplus function and (g_i, z′_i) is randomly drawn from the product of marginal distributions, i.e., z′_i = Ψ(g_{i′}, x_{i′}), (g_{i′}, x_{i′}) ~ P, i′ ≠ i. In general, we can also randomly draw negative g′_i in the topological space, although enumerating all possible g_{i′} leads to high computation cost. In Eq. 1, the computation of D on E(g_i) depends on the node order. Following the common practice in graph generation [70], we characterize the decision process of D with a fixed graph ordering, i.e., the BFS ordering π over the edges E(g_i). D = f ◦ Φ is composed of another GNN encoder Φ and a scoring function f over an edge sequence E_π : {e_1, e_2, ..., e_n}, which makes predictions on the BFS-ordered edges. Recalling our earlier definition of the direction of a k-hop ego-graph, the center-node encoder Ψ receives pairs (g_i, x_i), while the neighbor-node encoder Φ in the discriminator D receives (g̃_i, x_i). Both encoders are parameterized as GNNs: Ψ(g_i, x_i) = GNN_Ψ(A_i, X_i), Φ(g̃_i, x_i) = GNN_Φ(A′_i, X_i), where A_i and A′_i are the adjacency matrices (with self-loops) of g_i and g̃_i, respectively, with A_i = A′_i^T. The self-loops are added following the common design of GNNs, which allows the convolutional node embeddings to always incorporate the influence of the center node. The output of Ψ, i.e., z_i ∈ ℝ^n, is the center-node embedding, while Φ outputs a representation H ∈ ℝ^{|g_i| × n} for the neighbor nodes in the ego-graph. Once the node representation H is computed, we can describe the scoring function f. For each node pair (p, q) ∈ E_π, h_p is the source-node representation from Φ and x_q is the destination-node features. The scoring function is f(h_p, x_q, z_i) = σ(U^T · τ(W^T [h_p || x_q || z_i])), (2) where σ and τ are the Sigmoid and ReLU activation functions.
Thus, the discriminator D is asked to distinguish a positive pair ((p, q), z_i) from a negative pair ((p, q), z′_i) for each edge in g_i: D(g_i, z_i) = ∑_{(p, q) ∈ E_π} log f(h_p, x_q, z_i), D(g_i, z′_i) = ∑_{(p, q) ∈ E_π} log f(h_p, x_q, z′_i). (3) There are two types of edges (p, q) in our consideration of node orders: type-a, the edges across different hops (from the center node), and type-b, the edges within the same hop (from the center node). The aforementioned BFS-based node ordering guarantees that Eq. 3 is sensitive to the ordering of type-a edges and invariant to the ordering of type-b edges, which is consistent with the requirement of our theoretical analysis on ∆D. Because the output of a k-layer GNN depends only on a k-hop ego-graph, for both encoders Ψ and Φ, EGI can be trained in parallel by sampling batches of g_i's. Besides, the training objective of EGI is transferable as long as (g_i, x_i) across the source graph G_a and target graph G_b satisfy the conditions given in §3.2. More model details are in Appendix §B, with source code in the Supplementary Materials. Connection with existing work. To provide more insight into the EGI objective, we also present it as the dual problem of ego-graph reconstruction. Recall the ego-graph mutual information MI(g_i, Ψ(g_i, x_i)). It can be related to an ego-graph reconstruction loss R(g_i | Ψ(g_i, x_i)) as max MI(g_i, Ψ(g_i, x_i)) = H(g_i) − H(g_i | Ψ(g_i, x_i)) ≤ H(g_i) − R(g_i | Ψ(g_i, x_i)). (4) When EGI maximizes the mutual information, it simultaneously minimizes the upper bound on the error of reconstructing an ego-graph g_i. In this view, the key difference between EGI and VGAE [28] is that VGAE assumes each edge in a graph is observed independently during reconstruction, while in EGI the edges in an ego-graph are observed jointly during GNN decoding. Moreover, existing mutual-information-based GNNs such as DGI [54] and GMI [41] explicitly measure the mutual information between node features x and the GNN output Ψ. In this way, they tend to capture node features instead of graph structures, which we deem more essential in graph transfer learning, as discussed in §3.2. Use cases of the EGI framework. In this paper, we focus on the classical domain adaptation (direct-transfering) setting [7], where no target-domain labels are available and transferability is measured by the performance discrepancy without fine-tuning. In this setting, the transferability of EGI is theoretically guaranteed by Theorem 3.1; in §4.1 we validate this with the airport datasets. Beyond direct transfering, EGI is also useful in the more generalized and practical setting of transfer learning with fine-tuning, which we introduce in §4.2 and validate with the YAGO datasets. In this setting, the transferability of EGI is not yet rigorously characterized, but is empirically shown to be promising. Supportive observations. In the first three columns of our synthetic experimental results (Table 1), in both cases of transfering GNNs between similar graphs (F-F) and dissimilar graphs (B-F), EGI significantly outperforms all competitors when using node-degree one-hot encodings as transferable node features. In particular, the performance gains over the untrained GIN show the effectiveness of training and transfering, and our gains are always larger than those of the two state-of-the-art unsupervised GNNs.
Such results clearly indicate the advantageous structure-preserving capability and transferability of EGI.
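For concreteness, the EGI objective of Eq. 1 can be sketched as follows; the discriminator interface and data structures are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def egi_loss(D, ego_graphs, z, z_neg):
    """Jensen-Shannon MI estimator form of the EGI objective (Eq. 1).
    D(g, z) scores how well embedding z explains ego-graph g; z holds the
    center-node embeddings from Psi, and z_neg the embeddings of randomly
    paired ego-graphs (samples from the product of marginals).
    softplus(x) = log(1 + e^x) plays the role of sp in the equation."""
    loss = 0.0
    for g, zi, zi_neg in zip(ego_graphs, z, z_neg):
        loss = loss + F.softplus(D(g, zi_neg)) + F.softplus(-D(g, zi))
    return loss / len(ego_graphs)
```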
The paper introduces a theoretical framework for analyzing GNN transferability. The main idea is to view a graph as subgraph samples carrying the information of both the connections and the features. Based on this view, the authors define the EGI score of a graph as a learnable function optimized by maximizing the mutual information between a subgraph and the GNN output embedding of its center node. The authors then give an upper bound on the difference between the EGI scores of two graphs based on the difference between the eigenvalues of the graph Laplacians of the subgraph samples from the two graphs. The implication is that if the difference between the eigenvalues is small, the EGI scores are similar, which means the GNN has a similar ability to encode the structure of the two graphs.
Information Lattice Learning
1 INTRODUCTION . With rapid progress in AI , there is an increasing desire for general AI ( Goertzel & Pennachin , 2007 ; Chollet , 2019 ) and explainable AI ( Adadi & Berrada , 2018 ; Molnar , 2019 ) , which exhibit broad , human-like cognitive capacities . One common pursuit is to move away from “ black boxes ” designed for specific tasks to achieve broad generalization through strong abstractions made from only a few examples , with neither unlimited priors nor unlimited data ( “ primitive priors ” & “ small data ” instead ) . In this pursuit , we present a new , task-nonspecific framework—Information Lattice Learning ( ILL ) — to learn representations akin to human-distilled rules , e.g. , producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students ( shown at the end ) . The term information lattice was first defined by Shannon ( 1953 ) , but remains largely conceptual and unexplored . In the context of abstraction and representation learning , we independently develop representation lattices that coincide with Shannon ’ s information lattice when restricted to his context . Instead of inventing a new name , we adopt Shannon ’ s . However , we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but we also bring learning into the lattice , yielding the name ILL. ILL explains a signal ( e.g. , a probability distribution ) by disentangled representations , called rules . A rule explains some but not all aspects of the signal , but together the collection of rules aims to capture a large part of the signal . ILL is specially designed to address the core question “ what makes X an X ” or “ what makes X different from Y ” , emphasizing the what rather than generating X or predicting labels X , Y in order to facilitate effective , rule-based explanations designed to help human learners understand . A music AI classifying concertos , or generating one that mimics the masters , does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one . Our focus represents a shift from much representation-learning work ( Bengio et al. , 2013 ) that aim to find the best representation for solving a specific task ( e.g. , classification ) rather than strong concern for explainability . Instead of optimizing a task-specific objective function ( e.g. , classification error ) , ILL balances more general objectives that favor fewer , simpler rules for interpretability , and more essential rules for effectiveness—all formalized later . One intuition behind ILL is to break the whole into simple pieces , similar to breaking a signal into a Fourier series . Yet , rather than decomposition via projection to orthonormal basis and synthesis via weighted sum , we decompose a signal in a hierarchical space called a lattice . Another intuition behind ILL is feature selection . Yet , rather than features , we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning . The goal is to restore human-like , hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice ( called projection-and-lifting , Figure 1 : left ) , resulting in more than a sum of parts . ILL comprises two phases : ( a ) lattice construction ; ( b ) learning ( i.e. , searching ) in the lattice . 
This is similar to many machine learning (ML) models comprising (a) function class specification and then (b) learning in the function class, e.g., constructing a neural network and then learning—finding optimal parameters via back-propagation—in the network. ILL's construction phase is prior-efficient: it builds in universal priors that resemble human innate cognition (cf. the Core Knowledge priors (Spelke & Kinzler, 2007)), then grows a lattice of abstractions. The priors can be customized, however, to cater to a particular human learner, or to facilitate more exotic knowledge discovery. ILL's learning phase is data-efficient: it learns from "small data" encoded by a signal, but searches for rich explanations of the signal via rule learning, wherein abstraction is key to "making small data large". Notably, the construction phase is prior-driven, not data-driven—data comes in only at the learning phase. Hence, the same construction may be reused in different learning phases for different data sets, or even data on different topics (Figure 1: right). Featuring these two phases, ILL is thus a hybrid model that threads the needle between a fully data-driven model and a fully prior-driven model, echoing the notion of "starting like a baby; learning like a child" (Hutson, 2018). ILL is related to many research areas. It draws ideas and approaches from lattice theory, information theory, group theory, and optimization. It shares algorithmic similarity with a range of techniques including MaxEnt, data compression, autoencoders, and compressed sensing, but with a much greater focus on achieving human-like explainability and generalizability. Below, we broadly compare ILL to prominent related models, leaving more detailed comparisons with the most similar ones to the Appendix. Compared to deep learning, ILL is a "white-box" model balancing human-explainability and task performance. Compared to Bayesian inference, ILL models human reasoning with widely shared, common priors and few, simple rules, rather than using probabilistic inference as the driving force. Compared to tree-like models, ILL is structurally more general: a tree (e.g., a decision tree or hierarchical clustering) is essentially a linear lattice (formally called a chain) depicting a unidirectional refinement or coarsening process. Compared to the concept lattice in FCA (Ganter & Wille, 2012), ILL is conceptually more general and may include both known and unknown concepts; ILL does not require but discovers domain knowledge (more details in Appendix A). We illustrate ILL applications by learning music theory from scores and chemical laws from compounds, and show how ILL's common priors facilitate mutual interpretation between the two subjects. To begin, imagine Tom and Jerry are playing two 12-key pianos simultaneously, one note at a time (Figure 1: right). The frequency of the played two-note chords gives a 2D signal plotted as a 12 × 12 grayscale heatmap. Inspecting this heatmap, what might be the underlying rules that govern their co-play? (Check: all grey pixels have a larger "Jerry-coordinate" and project to a black key along the "Tom-axis".) We now elaborate on ILL and use it to distill rules for complex, realistic cases. 2 INFORMATION LATTICE : ABSTRACTIONS AND RULES OF A SIGNAL . Signal. A signal is a function ξ : X → ℝ. For notational brevity and computational reasons, assume ξ is non-negative and X ⊆ ℝ^n is finite (not a limitation: see Appendix B). For example,
a signal ξ : {1, ..., 6} → ℝ may be the probability mass function (pmf) of a die roll, or a signal ξ : {0, ..., 27}^2 → ℝ a 28 × 28 grayscale image. We denote the set of all signals on X by S_X. Partition / abstraction. We use a partition P of a set X to denote an abstraction of X; we call a cell C ∈ P an (abstracted) concept. The intuition is simple: a partition of a set renders a "coarse-grained view" of the set, or more precisely, an equivalence relation on the set. In this view, we identify equivalence classes of elements (concepts) instead of individual elements. For example, the partition P = {{1, 3, 5}, {2, 4, 6}} of the six outcomes of a die roll identifies two concepts (odd, even). Rule / representation. A rule of a signal ξ : X → ℝ is a "coarsened" signal r_ξ : P → ℝ defined on a partition P of X with r_ξ(C) := ∑_{x ∈ C} ξ(x) for any C ∈ P. In this paper, a rule of a signal is what we mean by a representation of a signal. If the signal is a grayscale image, a rule can be a special type of blurring or downsampling of the image; if the signal is a probability distribution, a rule can be a pmf of the "orbits" of the distribution for lifted inference algorithms (Holtzen et al., 2019; Kersting, 2012). More generally, we define a rule (regardless of any signal) over a set X as any signal on any partition of X; accordingly, we denote the set of all rules over X by R_X := ∪_{P ∈ {all partitions of X}} S_P. Partition lattice. Abstractions are hierarchical: one coarse-grained view can be coarser than another. Let the partition lattice (P_X, ⪯) of a set X be the partially ordered set (poset) containing all partitions of X, equipped with the partial order coarser than (⪯), or finer than (⪰), defined in the standard way. Let P̲ := {{x} | x ∈ X} and P̄ := {X} denote the finest and the coarsest partition, respectively. Per general lattice theory (Davey & Priestley, 2002), P_X is a complete lattice: every subset 𝒫 ⊆ P_X has a unique supremum ∨𝒫 and a unique infimum ∧𝒫, where ∨𝒫 is called the join of 𝒫, denoting its coarsest common refinement, and ∧𝒫 is called the meet of 𝒫, denoting its finest common coarsening. Information lattice. The information lattice (R_ξ, ⇐) of a signal ξ : X → ℝ is the poset of all rules of ξ equipped with the partial order more general than: for any two rules r, r′ ∈ R_ξ, we say r is more general than r′ (or r′ is more specific), denoted r ⇐ r′, if domain(r) ⪯ domain(r′). Notably, R_ξ ⊆ R_X, and R_ξ is isomorphic to the underlying partition lattice via the projection defined below. Projection and lifting. For any signal ξ ∈ S_X, we define the projection operator ↓_ξ : P_X → R_ξ by letting ↓_ξ(P) be the rule of ξ on P. One can check that ↓_ξ : (P_X, ⪯) → (R_ξ, ⇐) is an isomorphism. Conversely, we define the general lifting operator ⇑_X : R_X → 2^{S_X} by letting ⇑_X(r) denote the set of all signals that satisfy the rule r, i.e., ⇑_X(r) := {ξ ∈ S_X | ↓_ξ(domain(r)) = r} ⊆ S_X. To make lifting unique, and per the Principle of Indifference (Eva, 2019), we introduce a special lifting ↑_X(r) to pick the most "uniform" signal in ⇑_X(r). Formally, define ‖·‖_q : S_X → ℝ by ‖ξ‖_q := (∑_{x ∈ X} ξ(x)^q)^{1/q}. For any ξ, ξ′ ∈ S_X satisfying ‖ξ‖_1 = ‖ξ′‖_1, we say that ξ is more uniform than ξ′ (or ξ′ is more deterministic) if ‖ξ‖_2 ≤ ‖ξ′‖_2. We define the (special) lifting operator ↑_X : R_X → S_X by ↑_X(r) := argmin_{ξ ∈ ⇑_X(r)} ‖ξ‖_2 (computable by simply averaging).
Notation here follows the convention for function projections to quotient spaces (Kondor & Trivedi, 2018). Lifting a single rule to the signal domain can be extended in two ways: (a) lift to a finer rule domain P instead of X, i.e., ⇑_P(r) or ↑_P(r); (b) lift more than one rule. Accordingly, we write ⇑ := ⇑_X and ↑ := ↑_X as defaults, write R = ↓_ξ(𝒫) := {↓_ξ(P) | P ∈ 𝒫} ⊆ R_ξ to denote a rule set, and write ⇑(R) := ∩_{r ∈ R} ⇑(r) = {η ∈ S_X | ↓_η(𝒫) = R} and ↑(R) := argmin_{η ∈ ⇑(R)} ‖η‖_2 to denote the signals that satisfy all rules in R (general lifting) and the most uniform one among them (special lifting), respectively. More computational details on lifting and its intimate relation to the join are in Appendix C.
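Projection and special lifting are computationally simple; the following sketch (with dict-based signals, an illustrative choice of data structure) mirrors the definitions above.

```python
from collections import defaultdict

def project(signal, partition):
    """Projection: the rule of a signal on a partition, r(C) = sum of signal(x)
    over x in C. `partition` maps each x to a hashable id of its cell C."""
    rule = defaultdict(float)
    for x, value in signal.items():
        rule[partition[x]] += value
    return dict(rule)

def lift(rule, partition, domain):
    """Special lifting: the most uniform signal satisfying the rule, obtained
    by spreading each cell's mass evenly over the cell (this minimizes the
    2-norm subject to the cell sums, i.e., simple averaging)."""
    cell_sizes = defaultdict(int)
    for x in domain:
        cell_sizes[partition[x]] += 1
    return {x: rule[partition[x]] / cell_sizes[partition[x]] for x in domain}

# Example: a die-roll pmf projected onto the odd/even partition, then lifted.
pmf = {1: 0.1, 2: 0.3, 3: 0.1, 4: 0.2, 5: 0.1, 6: 0.2}
parity = {x: x % 2 for x in pmf}
r = project(pmf, parity)            # {1: 0.3, 0: 0.7}
uniformized = lift(r, parity, pmf)  # mass spread evenly within each cell
```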
The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions of the data which admit a compact definition. Compact definitions are those formed by composing a small number of predefined (prior) mathematical operations. Projection and lifting operations are defined to relate descriptions of partition cells to one another through rules. The quality of a description is measured by the divergence between the data and the (special) lifting of the rule set, under the constraint that the rules satisfy an upper bound on their entropy.
Don't be picky, all students in the right family can learn from good teachers
1 INTRODUCTION . Recently-developed deep learning models have achieved remarkable performance in a variety of tasks. However, breakthroughs leading to state-of-the-art (SOTA) results often rely on very large models: GPipe, Big Transfer and GPT-3 use 556 million, 928 million and 175 billion parameters, respectively (Huang et al., 2019; Kolesnikov et al., 2020; Brown et al., 2020). Deploying these models on user devices (e.g., smartphones) is currently impractical, as they require large amounts of memory and computation; and even when large devices are an option (e.g., GPU clusters), the cost of large-scale deployment (e.g., continual inference) can be very high (Cheng et al., 2017). Additionally, target hardware does not always natively or efficiently support all operations used by SOTA architectures. The applicability of these architectures is therefore severely limited, and workarounds using smaller or simplified models lead to a performance gap between the technology available at the frontier of deep learning research and that usable in industry applications. In order to bridge this gap, Knowledge Distillation (KD) emerges as a potential solution, allowing small student models to learn from, and emulate the performance of, large teacher models (Hinton et al., 2015a). The student model can be constrained in its size and in the type of operations used, so that it satisfies the requirements of the target computational environment. Unfortunately, successfully achieving this in practice is extremely challenging, requiring extensive human expertise. For example, while we know that the architecture of the student is important for distillation (Liu et al., 2019b), it remains unclear how to design the optimal network given some hardware constraints. With Neural Architecture Search (NAS) it is possible to discover an optimal student architecture. NAS automates the choice of neural network architecture for a specific task and dataset, given a search space of architectures and a search strategy to navigate that space (Pham et al., 2018; Real et al., 2017; Liu et al., 2019a; Carlucci et al., 2019; Zela et al., 2018; Ru et al., 2020). One important limitation of most NAS approaches is that the search space is very restricted, with a high proportion of resources spent on evaluating very similar architectures, thus rendering the approach limited in its effectiveness (Yang et al., 2020). This is because traditional NAS approaches have no tools for distinguishing between architectures that are similar and architectures that are very different; as a consequence, computational resources are spent comparing even insignificant changes in the model. Conversely, properly exploring a large space requires huge computational resources: for example, recent work by Liu et al. (2019b) investigating how to find the optimal student requires evaluating 10,000 models. By focusing on comparisons between distributions, we ensure that computational resources are used only on meaningful differences, and thus search significantly more efficiently: we evaluate 33× fewer architectures than the most closely related work (Liu et al., 2019b). To overcome these limitations, we propose an automated approach to knowledge distillation, in which we look for a family of good students rather than a specific model.
We find that even though our method, AutoKD, does not output one specific architecture, all architectures sampled from the optimal family of students perform well when trained with KD. This reformulation of the NAS problem provides a more expressive search space containing very diverse architectures, thus increasing the effectiveness of the search procedure in finding good student networks. Our contributions are as follows: (A) a framework for combining KD with NAS that effectively emulates large models while using a fraction of the memory and of the parameters; (B) by searching for an optimal student family, rather than for specific architectures, our algorithm is up to 20× more sample-efficient than alternative NAS-based KD solutions; (C) we significantly outperform advanced KD methods on a benchmark of vision datasets, despite using the traditional KD loss, showcasing the efficacy of the students we find. 2 RELATED WORK . Model compression has been studied since the beginning of the machine learning era, with multiple solutions being proposed (Choudhary et al., 2020; Cheng et al., 2017). Pruning-based methods allow the removal of non-essential parameters from the model, with little to no drop in final performance. The primary motive of these approaches was to reduce the storage requirement, but they can also be used to speed up the model (LeCun et al., 1990; Han et al., 2015; Li et al., 2016a). The idea behind quantization methods is to reduce the number of bits used to represent the weights and the activations in a model; depending on the specific implementation, this can lead to reduced storage, reduced memory consumption and a general speed-up of the network (Fiesler et al., 1990; Soudry et al., 2014; Rastegari et al., 2016; Zhu et al., 2016). In low-rank factorization approaches, a given weight matrix is decomposed into a product of smaller ones, for example using singular value decomposition. When applied to fully connected layers this leads to reduced storage, while when applied to convolutional filters it leads to faster inference (Choudhary et al., 2020). All the above-mentioned techniques can successfully reduce the complexity of a given model, but are not designed to substitute specific operations. For example, specialized hardware devices might only support a small subset of all the operations offered by modern deep learning frameworks. In Knowledge Distillation approaches, a large model (the teacher) distills its knowledge into a smaller student architecture (Hinton et al., 2015b). This knowledge is assumed to be represented in the neural network's output distribution; hence, in the standard KD framework, the output distribution of the student network is optimized to match the teacher's output distribution on all the training data (Yun et al., 2020; Ahn et al., 2019; Yuan et al., 2020; Tian et al., 2020; Tung & Mori, 2019). The work of Liu et al. (2019b) shows that the architecture of a student network is a contributing factor in its ability to learn from a given teacher. The authors propose combining KD with a traditional NAS pipeline, based on Reinforcement Learning, to find the optimal student. While this setup leads to good results, it does so at a huge computational cost, requiring over 5 days on 200 TPUs.
Similarly , Gu & Tresp ( 2020 ) also look for the optimal student architecture , but do so by searching for a subgraph of the original teacher ; therefore , their method cannot be used to substitute unsupported operations . Orthogonal approaches , looking at how KD can improve NAS , are explored by Trofimov et al . ( 2020 ) and Li et al . ( 2020 ) . The first establishes that KD improves the correlation between different budgets in multi-fidelity methods , while the second uses the teacher supervision to search the architecture in a blockwise fashion . 3 SEARCHING FOR THE OPTIMAL STUDENT NETWORK GENERATOR . The AutoKD framework ( Fig . 1 ) combines Bayesian Optimization ( BO ) , Neural Architecture Search ( NAS ) and Knowledge Distillation ( KD ) . AutoKD defines a family of random network generators G ( θ ) parameterized by a hyperparameter θ , from which student networks are sampled . BO uses a surrogate model to propose generator hyperparameters , while students from these generators are trained with KD using a state-of-the-art teacher network . The student performances are evaluated and provided as feedback to update the BO surrogate model . The search procedure is iterated to improve the BO surrogate model , until the best family of student networks G ( θ∗ ) is selected . In this section we specify all components of AutoKD . See also Fig . 1 and Algorithm 1 for an overview . 3.1 KNOWLEDGE DISTILLATION . Knowledge Distillation ( KD ; Hinton et al. , 2015b ) is a method to transfer , or distill , knowledge from one model to another—usually from a large model to a small one—such that the small student model learns to emulate the performance of the large teacher model . KD can be formalized as minimizing the objective function : $\mathcal{L}_{KD} = \sum_{x_i \in \mathcal{X}} l ( f_T ( x_i ) , f_S ( x_i ) )$ ( 1 ) where l is the loss function that measures the difference in performance between the teacher fT and the student fS , xi is the ith input , and yi is the ith target . The conventional loss function l used in practice is a linear combination of the traditional cross entropy loss LCE and the Kullback–Leibler divergence LKL of the temperature-softened outputs of fT and fS : $l = ( 1 - \alpha ) \, \mathcal{L}_{CE} + \alpha \, \mathcal{L}_{KL} ( \sigma ( f_T ( x_i ) / \tau ) , \sigma ( f_S ( x_i ) / \tau ) )$ ( 2 ) where σ is the softmax function $\sigma ( x )_j = \exp ( x_j ) / \sum_k \exp ( x_k )$ , and τ is the softmax temperature . Hinton et al . ( 2015b ) propose “ softening ” the probabilities using temperature scaling with τ ≥ 1 . The parameter α represents the weight trade-off between the KL loss and the cross entropy loss LCE . The LKD loss is characterized by the hyper-parameters α and τ ; popular choices are τ ∈ { 3 , 4 , 5 } and α = 0.9 ( Huang & Wang , 2017 ; Zagoruyko & Komodakis , 2016 ; Zhu et al. , 2018 ) . Numerous other methods ( Polino et al. , 2018 ; Huang & Wang , 2017 ; Tung & Mori , 2019 ) can be formulated as a form of Equation ( 2 ) , but in this paper we use the conventional loss function l. Traditionally in KD , both the teacher and the student network have predefined architectures . In contrast , AutoKD defines a search space of student network architectures and finds the optimal student by leveraging neural architecture search , as detailed below .
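To make the objective concrete , the following is a minimal sketch of equations 1 and 2 , assuming PyTorch ; the τ² rescaling of the KL term is a common practice for keeping gradient magnitudes comparable across temperatures and is an assumption here rather than something stated above , as are all names .

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, alpha=0.9, tau=4.0):
    """Conventional KD loss l = (1 - alpha) * L_CE + alpha * L_KL (equation 2)."""
    ce = F.cross_entropy(student_logits, targets)        # L_CE against the ground truth
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),      # softened student distribution
        F.softmax(teacher_logits / tau, dim=1),          # softened teacher distribution
        reduction="batchmean",
    ) * tau ** 2                                         # assumed rescaling, as in Hinton et al. (2015b)
    return (1 - alpha) * ce + alpha * kl
```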
3.2 STUDENT SEARCH VIA GENERATOR OPTIMIZATION . Most NAS methods for vision tasks employ a cell-based search space , where networks are built by stacking building blocks ( cells ) and the operations inside the cell are searched ( Pham et al. , 2018 ; Real et al. , 2017 ; Liu et al. , 2019a ) . This results in a single architecture being output by the NAS procedure . In contrast , more flexible search spaces have recently been proposed that are based on neural network generators ( Xie et al. , 2019 ; Ru et al. , 2020 ) . The generator hyperparameters define the characteristics of the family of networks being generated . NAGO optimizes an architecture generator instead of a single architecture and proposes a hierarchical graph-based space which is highly expressive yet low-dimensional ( Ru et al. , 2020 ) . Specifically , the search space of NAGO comprises three levels of graphs ( where the node in the higher level is a lower-level graph ) . The top level is a graph of cells ( G_top ) and each cell is itself a graph of middle-level modules ( G_mid ) . Each module further corresponds to a graph of bottom-level operation units ( G_bottom ) such as a relu-conv3×3-bn triplet . NAGO adopts three random graph generators to define the connectivity/topology of G_top , G_mid and G_bottom respectively , and thus is able to produce a wide variety of architectures with only a few generator hyperparameters . AutoKD employs NAGO as the NAS backbone for finding the optimal student family . Our pipeline consists of two phases . In the first phase ( search ) , a multi-fidelity Bayesian optimisation technique , BOHB ( Falkner et al. , 2018 ) , is employed to optimise the low-dimensional search space . BOHB uses partial evaluations with smaller-than-full budget to exclude bad configurations early in the search process , thus saving resources to evaluate more promising configurations . Given the same time constraint , BOHB evaluates many more configurations than conventional BO , which evaluates all configurations with full budget . As Ru et al . ( 2020 ) empirically observe that good generator hyperparameters lead to a tight distribution of well-performing architectures ( small performance standard deviation ) , we similarly assess the performance of a particular generator hyperparameter value with only one architecture sample . In the second phase ( retrain ) , AutoKD uniformly samples multiple architectures from the optimal generator found during the search phase and evaluates them with longer training budgets to obtain the best architecture performance . Instead of the traditionally used cross-entropy loss , AutoKD uses the KD loss in equation 2 to allow the sampled architecture to distill knowledge from its teacher . The full search procedure is given in Algorithm 1 .

Algorithm 1 : AutoKD
1 : Input : Network generator G ; BOHB hyperparameters ( η , training budgets bmin and bmax ) ; evaluation function fKD ( θ , b ) , which assesses the validation performance of a generator hyperparameter θ by sampling an architecture from G ( θ ) and training it with the KD loss LKD ( equations 1 and 2 ) for b epochs .
2 : smax = ⌊ log_η ( bmax / bmin ) ⌋ ;
3 : for s ∈ { smax , smax − 1 , . . . , 0 } do
4 :   Sample M = ⌈ ( ( smax + 1 ) / ( s + 1 ) ) · η^s ⌉ generator hyperparameters Θ = { θj } j=1..M which maximise the ratio of kernel density estimators ; ▷ ( Falkner et al. , 2018 , Algorithm 2 )
5 :   Initialise b = η^−s · bmax ; ▷ Run Successive Halving ( Li et al. , 2016b )
6 :   while b ≤ bmax do
7 :     L = { fKD ( θ , b ) : θ ∈ Θ } ;
8 :     Θ = top_k ( Θ , L , ⌊ |Θ| / η ⌋ ) ;
9 :     b = η · b ;
10 :  end while
11 : end for
12 : Obtain the best performing configuration θ∗ for the student network generator .
13 : Sample k architectures from G ( θ∗ ) , train them to completion , and obtain test performance .
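As an illustration , the sketch below captures the outer loop of Algorithm 1 in Python , with uniform random sampling standing in for BOHB's kernel-density-estimator model in line 4 and with fKD assumed to return a validation loss ( lower is better ) ; all names are illustrative .

```python
import math

def autokd_search(sample_theta, f_kd, b_min, b_max, eta=3):
    """Sketch of Algorithm 1's successive-halving loops (random search replaces
    BOHB's model-based sampling of generator hyperparameters)."""
    s_max = int(math.log(b_max / b_min, eta))
    best = None
    for s in range(s_max, -1, -1):
        m = math.ceil((s_max + 1) / (s + 1) * eta ** s)     # line 4: number of configurations
        thetas = [sample_theta() for _ in range(m)]
        for i in range(s + 1):                              # lines 6-10: Successive Halving
            b = b_max * eta ** (i - s)                      # budget grows by eta each round
            losses = {t: f_kd(t, int(b)) for t in thetas}   # train sampled students with KD
            thetas = sorted(thetas, key=losses.get)[: max(1, len(thetas) // eta)]
        if best is None or losses[thetas[0]] < best[0]:
            best = (losses[thetas[0]], thetas[0])
    return best[1]                                          # theta*: the best generator family
```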
The KD hyperparameters temperature τ and loss weight α are included in the search space and optimized simultaneously with the architecture to ensure that the student architectures can efficiently distill knowledge both from the designated teacher and the data distribution . A full overview of the framework is shown in Fig . 1 .
This paper proposes searching for an architecture generator that outputs good student architectures for a given teacher. The authors claim that by learning the parameters of the generator instead of searching directly in the architecture space, it is possible to explore the space of architectures more effectively, increasing the diversity of the architectures explored. They show that this approach, combined with the standard knowledge distillation loss, is able to learn good student architectures while requiring substantially fewer samples and achieving competitive performance compared to other knowledge distillation algorithms.
SP:1ee00313e354c4594bbf6cf8bdbe33e3ec8df62f
Towards Counteracting Adversarial Perturbations to Resist Adversarial Examples
1 INTRODUCTION . Deep neural networks ( DNNs ) have become the dominant approach for various tasks including image understanding , natural language processing and speech recognition ( He et al. , 2016 ; Devlin et al. , 2018 ; Park et al. , 2018 ) . However , recent studies demonstrate that neural networks are vulnerable to adversarial examples ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) . That is , these network models make an incorrect prediction with high confidence for inputs that are only slightly different from correctly predicted examples . This reveals a potential threat to neural network-based artificial intelligence systems , many of which have been widely deployed in real-world applications . The adversarial vulnerability of neural networks reveals fundamental blind spots in the learning algorithms . Even with advanced learning and regularization techniques , neural networks are not learning the true underlying distribution of the training data , although they can obtain extraordinary performance on test sets . This phenomenon is now attracting much research attention . There have been increasing studies attempting to explain neural networks ’ adversarial vulnerability and develop methods to resist adversarial examples ( Madry et al. , 2018 ; Zhang et al. , 2020 ; Pang et al. , 2020 ) . While much progress has been made , most existing studies remain preliminary . Because it is difficult to construct a theoretical model to explain the adversarial perturbation generating process , defending against adversarial attacks is still a challenging task . Existing methods of resisting adversarial perturbations perform defense either at training time or inference time . Training time defense methods attempt to increase model capacity to improve adversarial robustness . One of the commonly used methods is adversarial training ( Szegedy et al. , 2014 ) , in which a mixture of adversarial and clean examples is used to train the neural network . The adversarial training method can be seen as minimizing the worst case loss when the training example is perturbed by an adversary ( Goodfellow et al. , 2015 ) . Adversarial training requires an adversary to generate adversarial examples in the training procedure . This can significantly increase the training time . Adversarial training also results in reduced performance on clean examples . Lamb et al . ( 2019 ) recently introduced interpolated adversarial training ( IAT ) that incorporates interpolation-based training into the adversarial training framework . The IAT method helps to improve performance on clean examples while maintaining adversarial robustness . As to inference time defense methods , the main idea is to transform adversarial perturbations such that the obtained inputs are no longer adversarial . Tabacof & Valle ( 2016 ) studied the use of random noise such as Gaussian noise and heavy-tail noise to resist adversarial perturbations . Xie et al . ( 2018 ) proposed applying two randomization operations , i.e. , random resizing and random zero padding , to inputs to improve adversarial robustness . Guo et al . ( 2018 ) investigated the use of random cropping and rescaling to transform adversarial perturbations . More recently , Pang et al . ( 2020 ) proposed the mixup inference method that uses the interpolation between the input and a randomly selected clean image for inference . This method can shrink adversarial perturbations somewhat through the interpolation operation .
Inference time defense methods can be directly applied to off-the-shelf network models without retraining or finetuning them . This can be much more efficient compared to training time defense methods . Though adversarial perturbations are not readily perceivable by a human observer , it is suggested that adversarial examples are outside the natural image manifold ( Hu et al. , 2019 ) . Previous studies have suggested that adversarial vulnerability is caused by the locally unstable behavior of classifiers on data manifolds ( Fawzi et al. , 2016 ; Pang et al. , 2018 ) . Pang et al . ( 2020 ) also suggested that adversarial perturbations have the locality property and could be resisted by breaking the locality . Existing inference time defense methods mainly use stochastic transformations such as mixup and random cropping and rescaling to break the locality . In this research , we observe that applying small perturbations generated for non-predicted class labels to the adversarial example helps to counteract the adversarial effect . Motivated by this observation , we propose a method that employs small perturbations to counteract adversarial perturbations . In the proposed method , we generate small perturbations using local first-order gradient information for a number of randomly selected class labels . These small perturbations are added together and projected onto a specified space before finally being applied to the adversarial example . Our method can be used as a preliminary step before applying existing inference time defense methods . To the best of our knowledge , this is the first research on using local first-order gradient information to resist adversarial perturbations . Successful attack methods such as projected gradient descent ( PGD ) ( Madry et al. , 2018 ) usually use local gradients to obtain adversarial perturbations . Compared to random transformations , it would be more effective to use local gradients to resist adversarial perturbations . We show through experiments that our method is effective and complementary to random transformation-based methods to improve defense performance . The contributions of this paper can be summarized as follows : • We propose a method that uses small first-order perturbations to defend against adversarial attacks . We show that our method is effective in counteracting adversarial perturbations and improving adversarial robustness . • We evaluate our method on CIFAR-10 and CIFAR-100 against PGD attacks in different settings . The experimental results demonstrate that our method significantly improves the defense performance of the baseline methods against both untargeted and targeted attacks and that it performs well in resisting strong adversarial examples generated using more iterations . 2 PRELIMINARY . 2.1 ADVERSARIAL EXAMPLES . We consider a neural network f ( · ) with parameters θ that outputs a vector of probabilities for L = { 1 , 2 , ... , l } categories . In supervised learning , empirical risk minimization ( ERM ) ( Vapnik , 1998 ) has been commonly used as the principle to optimize the parameters on a training set . Given an input x , the neural network makes a prediction $c ( x ) = \arg\max_{j \in L} f_j ( x )$ . The prediction is correct if c ( x ) is the same as the actual target c∗ ( x ) . Unfortunately , ERM trained neural networks are vulnerable to adversarial examples , inputs formed by applying small but intentionally crafted perturbations ( Szegedy et al. , 2014 ; Madry et al. , 2018 ) .
That is , an adversarial example x′ is close to a clean example x under a distance metric , e.g. , ℓ∞ distance , but the neural network outputs an incorrect result for the adversarial example x′ with high confidence . In most cases , the difference between the adversarial example and the clean example is not readily recognizable to humans . 2.2 ATTACK METHODS . Existing attack methods can be categorized into white-box attacks and black-box attacks . We focus on defending against white-box attacks , wherein the adversary has full access to the network model including the architecture and weights . The fast gradient sign method ( FGSM ) ( Goodfellow et al. , 2015 ) and PGD are two successful optimization-based attack methods . The FGSM method is a one-step attack method . It generates adversarial perturbations that yield the highest loss increase in the gradient sign direction . Let x be the input to a network model , y the label associated with x and L ( θ , x , y ) be the loss function for training the neural network . The FGSM method generates a max-norm constrained perturbation as follows : $\eta = \varepsilon \, \mathrm{sign} ( \nabla_x L ( \theta , x , y ) )$ , ( 1 ) where ε denotes the max-norm . This method was developed based on the view that the primary cause of neural networks ’ adversarial vulnerability is their linear nature . The required gradient can be computed efficiently using backpropagation . The PGD method is a multistep attack method that iteratively applies projected gradient descent on the negative loss function ( Kurakin et al. , 2016 ) as follows : $x^{t+1} = \Pi_{x+S} ( x^t + \alpha \, \mathrm{sign} ( \nabla_{x^t} L ( \theta , x^t , y ) ) )$ , ( 2 ) where α denotes the step size and Π denotes the projection operator that projects the perturbed input onto x + S . We consider projecting the perturbed input onto a predefined ℓ∞ ball around the original input . The PGD attack method can be seen as a multistep FGSM method . It is a much stronger adversary that reliably causes a variety of neural networks to misclassify their input . 3 METHODOLOGY . While many studies have been conducted on defending against adversarial attacks at inference time , these studies have not considered using local gradient information to resist adversarial perturbations . Previous work has suggested that the primary cause of neural networks ’ adversarial vulnerability is their linear nature ( Goodfellow et al. , 2015 ) . It would therefore be more effective to use first-order gradient information to counteract adversarial perturbations such that the resulting perturbations no longer cause the model to make an incorrect prediction . Adversarial perturbations are small crafted perturbations that slightly affect the visual quality of inputs but cause the neural network to misclassify the inputs in favor of an incorrect answer with high probability . We show that this effect can be counteracted by applying small perturbations generated using local first-order gradient information for class labels other than the predicted one . An illustration of this phenomenon is shown in Figure 1 . We see that by adding perturbations generated for non-predicted labels to the input , the prediction probability for the correct category increases and that for the incorrect label is suppressed . Algorithm 1 Counteracting adversarial perturbations using local first-order gradients . Input : Neural network f ; input x ; step size α used in PGD to generate perturbations to counteract the adversarial perturbation . Output : Prediction result for x . 1 : Randomly select N class labels { l1 , l2 , ... , lN } ;
2 : for i = 1 to N do 3 : ηi = PGD ( li , α , steps = 1 ) // generate perturbation ηi for li using the one-step PGD method 4 : end for 5 : x = x + ΠC ( $\sum_{i=1}^N \eta_i ( x )$ ) // C is a µ-bounded ℓ∞ space 6 : return f ( x ) . Based on this phenomenon , we propose a method of counteracting adversarial perturbations to improve adversarial robustness . In the proposed method , we generate small perturbations for a number of randomly selected class labels and apply these perturbations to the input to resist the adversarial perturbation . Let x be the input to a model , which can be an adversarial or clean example . We randomly select N class labels and generate small first-order perturbations for the N selected labels . These N small perturbations are added together and then projected onto an ℓ∞-bounded space before being applied to the input . This procedure can be formulated as follows : $\tilde{x} = x + \Pi_C \left( \sum_{i=1}^N \eta_i ( x ) \right)$ , ( 3 ) where ηi ( x ) denotes the small perturbation generated for the i-th selected class label and $C = \{ t \mid \| t - x \|_\infty \leq \mu \}$ is a µ-bounded ℓ∞ space . The one-step PGD method is used to generate the small perturbations . This is the same as using the FGSM method and empirically achieves better performance than using multiple steps . The perturbations can be generated in an untargeted or targeted manner . The combined perturbation is projected onto the space C . This ensures that the obtained example is visually similar to the original one . We detail the procedure for counteracting adversarial perturbations in Algorithm 1 . Discussion and Analysis . Adversarial examples expose underlying flaws in the training algorithms . While much progress has been made in defending against adversarial attacks , it is difficult to theoretically understand neural networks ’ vulnerability to adversarial examples . Previous work ( Athalye et al. , 2018 ) has suggested that the adversarial perturbation δ can be obtained by solving the following optimization problem : $\min \| \delta \|_p \ \ \text{s.t.}\ \ c ( x + \delta ) \neq c^* ( x ) , \ \| \delta \|_p \leq \xi$ , ( 4 ) where ξ is a hyperparameter constraining the size of the perturbation . This problem can be effectively solved by gradient descent-based attack methods such as PGD and FGSM that reliably cause neural networks to output an incorrect result . These attack methods typically use local first-order gradients to find the optimal solution . Because state-of-the-art neural networks usually have many parameters , perturbations obtained with these attack methods may overfit to the inputs . Therefore , perturbing and transforming these adversarial perturbations could be an effective way to resist the adversarial effect . Unlike previous random transformation-based methods , we employ local first-order gradient information to counteract the adversarial effect . We show that the proposed method is effective in improving defense performance , especially against strong adversarial examples generated using more iterations . Let x0 be a clean example and δ be the adversarial perturbation . In our method , the following input is fed to the neural network : $x_0 + \delta \cdot 1_z ( x_0 ) + \Pi_C \left( \sum_{i=1}^N \eta_i ( x_0 ) \right)$ , ( 5 ) where $1_z ( x_0 ) = 1$ if x0 is subject to adversarial attack and $1_z ( x_0 ) = 0$ otherwise . The perturbation ηi generated to counteract the adversarial perturbation should be small , otherwise it would itself be a new adversarial perturbation . This would essentially have no effect in counteracting the adversarial perturbation .
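The following is a minimal sketch of Algorithm 1 , assuming PyTorch and a targeted one-step PGD for the perturbations ηi ( an untargeted variant would flip the gradient sign ) ; the helper names and the clamp-based ℓ∞ projection are illustrative assumptions .

```python
import torch
import torch.nn.functional as F

def one_step_pgd(model, x, target_label, alpha):
    """One-step (FGSM-style) PGD perturbation toward a chosen class label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target_label)
    loss.backward()
    return -alpha * x.grad.sign()        # descend the loss toward the target label

def counteract_and_predict(model, x, n_classes, n_labels, alpha, mu):
    """Algorithm 1: sum perturbations for N randomly selected labels, project the
    sum onto C = {t : ||t - x||_inf <= mu}, then classify the perturbed input."""
    labels = torch.randperm(n_classes)[:n_labels]
    eta = sum(
        one_step_pgd(model, x, torch.full((x.size(0),), int(l), dtype=torch.long), alpha)
        for l in labels
    )
    eta = eta.clamp(-mu, mu)             # projection onto the mu-bounded l-infinity space
    return model(x + eta).argmax(dim=1)
```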
Adversarial training , which has been shown to be effective in improving adversarial robustness , usually employs a first-order adversary like PGD to provide adversarial examples for training . These adversarial examples help to regularize the model to be resistant to adversarial perturbations . We show through experiments that our method is complementary to adversarial training in improving overall defense performance against both untargeted and targeted attacks . The proposed method is applied at inference time . It can be directly applied to off-the-shelf models without retraining or finetuning them . The required gradients for generating the small perturbations can be computed efficiently in parallel using backpropagation . This adds little overhead to inference time .
The paper proposes a defense that works by adding multiple targeted adversarial perturbations (with random classes) on the input sample before classifying it. There is little theoretical reasoning for why this is a sensible defense. More importantly though, the defense is only evaluated in an oblivious threat model where the attacker is unaware of the defense mechanism. As has been argued again and again in the literature and in community guidelines such as [1, 2], the oblivious threat model is trivial and yields absolutely no insights into the effectiveness of a defense (e.g. you can just manipulate the backpropagated gradient in random ways to prevent any gradient-based attack from finding adversarial perturbations). The problem with oblivious attacks is clearly visible in the results section where more PGD iterations are less effective than fewer iterations - a clear red flag that the evaluation is ineffective. The paper also fails to point out that Pang et al. 2020, one of the methods they combine their method with, has been shown to be ineffective [2].
SP:eea3b3ec32cce61d6b6df8574cf7ce9376f2230a
Defuse: Debugging Classifiers Through Distilling Unrestricted Adversarial Examples
1 INTRODUCTION . Debugging machine learning ( ML ) models is a critical part of the ML development life cycle . Uncovering bugs helps ML developers make important decisions about both development and deployment . In practice , much of debugging uses aggregate test statistics ( like those in leader board style challenges [ Rajpurkar et al . ( 2016 ) ] ) and continuous evaluation and monitoring post deployment [ Liberty et al . ( 2020 ) , Simon ( 2019 ) ] . However , additional issues arise with over-reliance on test statistics . For instance , aggregate statistics like held out test accuracy are known to overestimate generalization performance [ Recht et al . ( 2019 ) ] . Further , statistics offer neither insight into nor remedy for specific model failures [ Ribeiro et al . ( 2020 ) ; Wu et al . ( 2019 ) ] . Last , reactive debugging of failures as they occur in production does little to mitigate harmful user experiences [ La Fors et al . ( 2019 ) ] . Several techniques exist for identifying undesirable behavior in machine learning models . These methods include explanations [ Ribeiro et al . ( 2016 ) ; Slack et al . ( 2020b ) ; Lakkaraju et al . ( 2019 ) ; Lundberg & Lee ( 2017 ) ] , fairness metrics [ Feldman et al . ( 2015 ) , Slack et al . ( 2020a ) ] , data set replication [ Recht et al . ( 2019 ) ; Engstrom et al . ( 2020 ) ] , and behavioral testing tools [ Ribeiro et al . ( 2020 ) ] . However , these techniques either do not provide methods to remedy model bugs or require a high level of human supervision . To enable model designers to discover and correct model bugs beyond aggregate test statistics , we analyze unrestricted adversarial examples : instances on the data manifold that are misclassified [ Song et al . ( 2018 ) ] . We identify model bugs through diagnosing common patterns in unrestricted adversarial examples . In this work , we propose Defuse : a technique for debugging classifiers through distilling1 unrestricted adversarial examples ( 1we mean distilling in the sense of “ to extract the most important aspects of ” and do not intend to invoke the knowledge distillation literature [ Hinton et al . ( 2014 ) ] ) . Defuse works in three steps . First , Defuse identifies unrestricted adversarial examples by making small , semantically meaningful changes to input data using a variational autoencoder ( VAE ) . If the classifier prediction deviates from the ground truth label on the altered instance , it returns the data instance as a potential model failure . This method employs similar techniques to [ Zhao et al . ( 2018 ) ] . Namely , small perturbations in the latent space of generative models can produce images that are misclassified . Second , Defuse distills the changes through clustering on the unrestricted adversarial examples ’ latent codes . In this way , Defuse diagnoses regions in the latent space that are problematic for the classifier . This method produces a set of
clusters in the latent space where it is likely to find misclassified data . We call these localities failure scenarios . An annotator reviews the failure scenarios and assigns the correct label — one label per scenario . Third , Defuse corrects the model behavior on the discovered failure scenarios through optimization . Because we use a generative clustering model to describe the failure scenarios , we sample many unrestricted adversarial examples and finetune to fix the classifier . Critically , failure scenarios are highly useful for model debugging because they reveal high level patterns in the way the model fails . By understanding these consistent trends in model failures , model designers can more effectively understand problematic deployment scenarios for their models . To illustrate the usefulness of failure scenarios , we run Defuse on a classifier trained on MNIST and provide an overview in figure 1 . In the identification step ( first pane in figure 1 ) , Defuse generates unrestricted adversarial examples for the model . The red number in the upper right hand corner of the image is the classifier ’ s prediction . Although the classifier achieves high test set performance , we find naturally occurring examples that are classified incorrectly . Next , the method performs the distillation step ( second pane in figure 1 ) . The clustering model groups together similar failures for annotator labeling . We see that similar mistakes are grouped together . For instance , Defuse groups together a similar style of incorrectly classified eights in the first row of the second pane in figure 1 . Next , Defuse receives annotator labels for each of the clusters2 ( 2we assign label 8 to the first row in the second pane of figure 1 , label 0 to the second row , and label 6 to the third row ) . Last , we run the correction step using both the annotator labeled data and the original training data . We see that the model correctly classifies the images ( third pane in figure 1 ) . Importantly , the model maintains its predictive performance , scoring 99.1 % accuracy after tuning . We see that Defuse enables model designers to both discover and correct naturally occurring model failures . We provide the necessary background for Defuse ( §2 ) . Next , we detail the three steps in Defuse : identification , distillation , and correction ( §3 ) . We then demonstrate the usefulness of Defuse on three image data sets : MNIST [ LeCun et al . ( 2010 ) ] , the German traffic signs data set [ Stallkamp et al . ( 2011 ) ] , and the Street view house numbers data set [ Netzer et al . ( 2011 ) ] , and find that Defuse discovers and resolves critical bugs in high performance classifiers trained on these datasets ( §4 ) . 2 NOTATION AND BACKGROUND . In this section , we establish notation and background on unrestricted adversarial examples . Though unrestricted adversarial examples can be found in many domains , we focus on Defuse applied to image classification . Unrestricted adversarial examples . Let $f : \mathbb{R}^N \to [ 0 , 1 ]^C$ denote a classifier that accepts a data point x ∈ X , where X is the set of legitimate images . The classifier f returns the probability that x belongs to class c ∈ { 1 , ... , C } . Next , assume f is trained on a data set D consisting of d tuples ( x , y ) containing data point x and ground truth label y using loss function L . Finally , suppose there exists an oracle $o : X \to \{ 1 , ... , C \}$ that outputs a label for x . We define unrestricted adversarial examples as the set $A_N := \{ x \in X \mid o ( x ) \neq f ( x ) \}$ [ Song et al . ( 2018 ) ] . Variational Autoencoders ( VAEs ) . In order to discover unrestricted adversarial examples , it is necessary to model the set of legitimate images . We use a VAE to create such a model . A VAE is composed of encoder and decoder neural networks . These networks are used to model the relationship between data x and latent factors $z \in \mathbb{R}^K$ .
Where x is generated by some ground truth latent factors $v \in \mathbb{R}^M$ , we wish to train a model such that the learned generative factors closely resemble the true factors : $p ( x \mid v ) \approx p ( x \mid z )$ . In order to train such a model , we employ the β-VAE [ Higgins et al . ( 2017 ) ] . This technique produces an encoder $q ( z \mid x )$ that maps from data to latent codes and a decoder $p_\theta ( x \mid z )$ that maps from codes to data . 3 METHODS . 3.1 FAILURE SCENARIOS . We begin by formalizing our notion of failure scenarios . Let $z \in \mathbb{R}^K$ be the latent codes corresponding to image x ∈ X and $q ( \cdot ) : x \to z$ be the encoder mapping the relationship between images and latent codes . Definition 3.1 . Failure scenario . Given a constant ε > 0 , vector norm $\| \cdot \|$ , and point $z_0$ , a failure scenario is a set of images $A_R = \{ x \in X \mid \varepsilon > \| q ( x ) - z_0 \| \wedge o ( x ) \neq f ( x ) \}$ . Previous works that investigate unrestricted adversarial examples look for specific instances where the oracle and the model disagree [ Song et al . ( 2018 ) ; Zhao et al . ( 2018 ) ] . We instead look for regions in the latent space where this is the case . Because the latent space of the VAE tends to take on Gaussian form due to the prior , we can use Euclidean distance to define these regions . If we were to define failure scenarios on the original data manifold , we may need a much more complex distance function . Because it is likely too strict to assume the oracle and model disagree on every instance in such a region , we also introduce a relaxation . Definition 3.2 . Relaxed failure scenario . Given a constant ε > 0 , vector norm $\| \cdot \|$ , point $z_0$ , and threshold ρ , a relaxed failure scenario is a set of images $A_f = \{ x \in X \mid \varepsilon > \| q ( x ) - z_0 \| \}$ such that $| \{ x \in A_f \mid o ( x ) \neq f ( x ) \} | / | A_f | > \rho$ . In this work , we adopt the latter definition of failure scenarios . To concretize failure scenarios and provide evidence for their existence , we continue our MNIST example from figure 1 . We plot the t-SNE embeddings of the latent codes of 10000 images from the training set and 516 unrestricted adversarial examples created during the identification step in figure 2 ( details of how we generate unrestricted adversarial examples are in section 3.2.1 ) . We see that the unrestricted adversarial examples are from similar regions in the latent space . 3.2 DEFUSE . In this section , we introduce Defuse : our procedure for identifying and correcting classifier performance on failure scenarios . First , we explain how we identify unrestricted adversarial examples using VAEs . Next , we describe our clustering approach that distills these instances into failure scenarios . Last , we introduce our approach to correct classifier predictions on the failure scenarios . 3.2.1 IDENTIFYING UNRESTRICTED ADVERSARIAL EXAMPLES . This section describes the identification step in Defuse ( first pane in figure 1 ) . The aim of the identification step is to generate many unrestricted adversarial examples . In essence , we encode all the images from the training data . We perturb the latent codes with a small amount of noise drawn from a Beta distribution . We save instances that are classified differently from ground truth by f when decoded . By perturbing the latent codes with a small amount of noise , we expect the decoded instances to have small but semantically meaningful differences from the original instances . Thus , if the classifier prediction deviates on the perturbation , the instance is likely misclassified .
This procedure yields a set of unrestricted adversarial examples for a single instance . We generate unrestricted adversarial examples for each instance x ∈ X , producing a combined set that contains the unrestricted adversarial examples produced for every instance x . Pseudo code of the algorithm for generating a single unrestricted adversarial example is given in algorithm 1 in appendix A . Our technique is related to the method for generating natural adversarial examples from [ Zhao et al . ( 2018 ) ] — a very similar but slightly different concept from unrestricted adversarial examples . The authors use a similar stochastic search method in the latent space of a GAN . They start with a small amount of noise and increase the magnitude of the noise until they find an unrestricted adversarial example . Thus , they save only the unrestricted adversarial examples which are minimally distant from a data point . They also save images that differ in prediction from the original decoded instance . Because we iterate over the entire data set , it is simpler to keep the level of noise fixed and sample a predetermined number of times . In addition , we save images that differ in ground truth label from the original decoded instance because we seek to debug a classifier . Meaning , if the original instance is misclassified , we wish to save this instance as a model failure .
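Under the description above , the identification step can be sketched as follows ; the Beta shape parameters , the zero-centering of the noise , and the use of the encoder's posterior mean as the latent code are illustrative assumptions rather than choices specified in the text .

```python
import torch

def identify(encoder, decoder, classifier, dataset, noise_scale=0.1, n_samples=16):
    """Perturb each training image's latent code with small Beta noise and keep
    decoded instances whose prediction deviates from the ground truth label."""
    beta = torch.distributions.Beta(2.0, 2.0)                # assumed shape parameters
    found = []
    for x, y in dataset:                                     # y: ground truth label
        z = encoder(x.unsqueeze(0))                          # latent code (posterior mean)
        for _ in range(n_samples):
            noise = noise_scale * (beta.sample(z.shape) - 0.5)   # small, zero-centered
            x_tilde = decoder(z + noise)
            if classifier(x_tilde).argmax(dim=1).item() != y:
                found.append((x_tilde.detach().squeeze(0), y))   # candidate model failure
    return found
```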
The technique is described in sufficient detail and the paper is easy to read. Experimental results involve three datasets: MNIST, street view house numbers, and German traffic signs. The experimental results show that the proposed technique finds significant failures in all datasets, including critical failure scenarios. After the correction step, the classifier's performance improves.
SP:8badc3f75194e9780063af5a2f26448e41e733d4
Improving Learning to Branch via Reinforcement Learning
1 INTRODUCTION . Mixed Integer Programming ( MIP ) has been applied widely in many real-world problems , such as scheduling ( Barnhart et al. , 2003 ) and transportation ( Melo & Wolsey , 2012 ) . Branch and Bound ( B & B ) is a general and widely used paradigm for solving MIP problems ( Wolsey & Nemhauser , 1999 ) . B & B recursively partitions the solution space into a search tree and computes relaxation bounds along the way to prune subtrees that provably can not contain an optimal solution . This iterative process requires sequential decision making : node selection ( selecting the next solution space to evaluate ) and variable selection ( selecting the variable by which to partition the solution space ) ( Achterberg & Berthold , 2009 ) . In this work , we focus on learning a variable selection strategy , which is the core of the B & B algorithm ( Achterberg & Wunderling , 2013 ) . Very often , instances from the same MIP problem family are solved repeatedly in industry , which gives rise to the opportunity for learning to improve the variable selection policy ( Bengio et al. , 2020 ) . Based on human-designed heuristics , Di Liberto et al . ( 2016 ) learn a classifier that dynamically selects an existing rule to perform variable selection ; Balcan et al . ( 2018 ) consider a weighted score of multiple heuristics and analyse the sample complexity of finding a good weighting . The first step towards learning a variable selection policy was taken by Khalil et al . ( 2016 ) , who learn an instance customized policy in an online fashion , as well as Alvarez et al . ( 2017 ) and Hansknecht et al . ( 2018 ) who learn a branching rule offline on a collection of similar instances . Those methods need extensive feature engineering and require strong domain knowledge in MIP . To avoid that , Gasse et al . ( 2019 ) propose a graph convolutional neural network approach that obtains competitive performance while only requiring raw features provided by the solver . In each case , the branching policy is learned by imitating the decision of strong branching as it consistently leads to the smallest B & B trees empirically ( Achterberg et al. , 2005 ) . In this work , we argue that strong branching is not a good expert to imitate . The excellent performance ( the smallest B & B tree ) of strong branching relies mostly on the information obtained in solving the branch linear programs ( LPs ) rather than the decision it makes . This factor prevents learning a good policy by imitating only the decision made by strong branching . To obtain more effective and non-myopic policies , i.e. , policies that minimize the total number of solved nodes rather than maximizing the immediate duality gap , we use reinforcement learning ( RL ) and model the variable selection process as a Markov Decision Process ( MDP ) . Though the MDP formulation for MIP has been mentioned in previous works ( Gasse et al. , 2019 ; Etheve et al. , 2020 ) , the advantage of RL has not been demonstrated clearly in the literature . The challenges of using RL are multi-fold . First , the state space is a complex search tree , which can involve hundreds or thousands of nodes ( with a linear program on each node ) and evolves over time . In the meanwhile , the objective of MIP is to solve problems faster . Hence a trade-off between decision quality and computation time is required when representing the state and designing a policy based on this state representation . Second , learning a branching policy by RL requires rolling out on a distribution of instances .
Moreover , for each instance , the solving trajectory could contain thousands of steps and actions can have long-lasting effects . These result in a large variance in gradient estimation . Third , each step of variable selection can have hundreds of candidates . The large action set makes exploration in MIP very hard . In this work , we address these challenges by designing a policy network inspired by primal-dual iteration and employing a novelty search evolutionary strategy ( NS-ES ) to improve the policy . For the efficiency-effectiveness trade-off , the primal-dual policy ignores redundant information and makes high-quality decisions on the fly . For reducing variance , the ES algorithm is an attractive choice as its gradient estimation is independent of the trajectory length ( Salimans et al. , 2017 ) . For exploration , we introduce a new representation of the B & B solving process employed by novelty search ( Conti et al. , 2018 ) to encourage visiting new states . We evaluate our RL trained agent over a range of problems ( namely , set covering , maximum independent set , capacitated facility location ) . The experiments show that our approach significantly outperforms state-of-the-art human-designed heuristics ( Achterberg & Berthold , 2009 ) as well as imitation based learning methods ( Khalil et al. , 2016 ; Gasse et al. , 2019 ) . In the ablation study , we compare our primal-dual policy net with GCN ( Gasse et al. , 2019 ) and our novelty based ES with vanilla ES ( Salimans et al. , 2017 ) . The results confirm that both our policy network and the novelty search evolutionary strategy are indispensable for the success of the RL agent . In summary , our main contributions are the following : • We point out the overestimation of the decision quality of strong branching and suggest that methods other than imitating strong branching are needed to find a better variable selection policy . • We model the variable selection process as an MDP and design a novel policy net based on primal-dual iteration over reduced LP relaxations . • We introduce a novel set representation and optimal transport distance for the branching process associated with a policy , based on which we train our RL agent using novelty search evolution strategy and obtain substantial improvements in empirical evaluation . 2 BACKGROUND . Mixed Integer Programming . MIP is an optimization problem , which is typically formulated as $\min_{x \in \mathbb{R}^n} \{ c^T x : Ax \leq b , \ \ell \leq x \leq u , \ x_j \in \mathbb{Z} , \ \forall j \in J \}$ ( 1 ) where $c \in \mathbb{R}^n$ is the objective vector , $A \in \mathbb{R}^{m \times n}$ is the constraint coefficient matrix , $b \in \mathbb{R}^m$ is the constraint vector , and $\ell , u \in \mathbb{R}^n$ are the variable bounds . The set $J \subseteq \{ 1 , \cdots , n \}$ is an index set for integer variables . We denote the feasible region of x as X . Linear Programming Relaxation . LP relaxation is an important building block for solving MIP problems , where the integer constraints are removed : $\min_{x \in \mathbb{R}^n} \{ c^T x : Ax \leq b , \ \ell \leq x \leq u \}$ . ( 2 ) Algorithm 1 : Branch and Bound . Input : A MIP P in the form of Equation 1 . Output : An optimal solution set x∗ and optimal value c∗ . 1 : Initialize the problem set S := { P_LP } , where P_LP is in the form of Equation 2 . Set x∗ = ∅ , c∗ = ∞ ; 2 : If S = ∅ , exit by returning x∗ and c∗ ; 3 : Select and pop an LP relaxation Q ∈ S ; 4 : Solve Q with optimal solution x̂ and optimal value ĉ ; 5 : If ĉ ≥ c∗ , go to 2 ; 6 : If x̂ ∈ X , set x∗ = x̂ , c∗ = ĉ , go to 2 ; 7 : Select variable j , split Q into two subproblems $Q_j^+$ and $Q_j^-$ , add them to S and go to 3 ; Branch and Bound .
LP based B & B is the most successful method for solving MIP . A typical LP based B & B algorithm for solving MIP is shown as Algorithm 1 ( Achterberg et al. , 2005 ) . It consists of two major decisions : node selection , in line 3 , and variable selection , in line 7 . In this paper , we will focus on variable selection . Given an LP relaxation and its optimal solution x̂ , variable selection means selecting an index j . Then , branching splits the current problem into two subproblems , each representing the original LP relaxation with a new constraint : $x_j \leq \lfloor \hat{x}_j \rfloor$ for $Q_j^-$ and $x_j \geq \lceil \hat{x}_j \rceil$ for $Q_j^+$ respectively . This procedure can be visualized by a binary tree , which is commonly called a search tree . We give a simple visualization in Section A.1 . Evolution Strategy . Evolution Strategies ( ES ) is a class of black box optimization algorithms ( Rechenberg , 1978 ) . In this work , we refer to the definition in Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2008 ) . NES represents the population as a distribution of parameter vectors θ characterized by parameters φ : $p_\phi ( \theta )$ . NES optimizes φ to maximize the expectation of a fitness f ( θ ) over the population , $\mathbb{E}_{\theta \sim p_\phi} [ f ( \theta ) ]$ . In recent work , Salimans et al . ( 2017 ) outline a version of NES applied to standard RL benchmark problems , where θ parameterizes the policy πθ , $\phi_t = ( \theta_t , \sigma )$ parameterizes a Gaussian distribution $p_\phi ( \theta ) = \mathcal{N} ( \theta_t , \sigma^2 I )$ and f ( θ ) is the cumulative reward R ( θ ) over a full agent interaction . At every iteration , Salimans et al . ( 2017 ) apply n additive Gaussian noise vectors $\epsilon_i$ to the current parameter and update the population as $\theta_{t+1} = \theta_t + \alpha \frac{1}{n \sigma} \sum_{i=1}^n f ( \theta_t + \sigma \epsilon_i ) \, \epsilon_i$ ( 3 ) To encourage exploration , Conti et al . ( 2018 ) propose Novelty Search Evolution Strategy ( NS-ES ) . In NS-ES , the fitness function $f ( \theta ) = \lambda N ( \theta ) + ( 1 - \lambda ) R ( \theta )$ is selected as a combination of a domain specific novelty score N and the cumulative reward R , where λ is the balancing weight . 3 WHY IMITATING STRONG BRANCHING IS NOT GOOD . Strong branching is a human-designed heuristic , which solves all possible branch LPs $Q_j^+ , Q_j^-$ ahead of branching . As strong branching usually produces the smallest B & B search trees ( Achterberg , 2009 ) , many learning-based variable selection policies are trained by mimicking strong branching ( Gasse et al. , 2019 ; Khalil et al. , 2016 ; Alvarez et al. , 2017 ; Hansknecht et al. , 2018 ) . However , we claim that strong branching is not a good expert : the reason strong branching can produce a small search tree is the reduction obtained in solving the branch LPs , rather than its decision quality . Specifically , ( i ) strong branching can check lines 5 , 6 in Algorithm 1 before branching . If the pruning condition is satisfied , strong branching does not need to add the subproblem into the problem set S . ( ii ) Strong branching can strengthen other LP relaxations in the problem set S via domain propagation ( Rodosek et al. , 1999 ) and conflict analysis ( Achterberg , 2007 ) . For example , if strong branching finds that $x_1 \geq 1$ and $x_2 \geq 1$ can be pruned during solving the branch LP , then any other LP relaxation containing $x_1 \geq 1$ can be strengthened by adding $x_2 \leq 0$ . These two reductions are the direct consequence of solving the branch LPs , and they cannot be learned by a variable selection policy . ( iii ) Strong branching activates primal heuristics ( Berthold , 2006 ) after solving LPs .
To examine the decision quality of strong branching , we employ vanilla full strong branching ( Gamrath et al. , 2020 ) , which takes the same decisions as full strong branching while the side effects of solving the branch LPs are switched off . Experiments in Section 5.2 show that vanilla full strong branching has poor decision quality . Hence , imitating strong branching is not a wise choice for learning a variable selection policy .
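Returning to the ES background above , one update of Equation 3 can be sketched as follows , assuming NumPy and a fitness callback ( the cumulative reward R ( θ ) , or λN ( θ ) + ( 1 − λ ) R ( θ ) for NS-ES ) ; names are illustrative .

```python
import numpy as np

def nes_step(theta, fitness, alpha=0.01, sigma=0.1, n=64, rng=None):
    """One NES update (Equation 3): a Monte Carlo estimate of the gradient of
    the expected fitness under a Gaussian population centered at theta."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((n, theta.size))                  # epsilon_i ~ N(0, I)
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    grad = (scores[:, None] * eps).sum(axis=0) / (n * sigma)    # (1/(n*sigma)) sum f_i * eps_i
    return theta + alpha * grad
```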
The paper proposes a model for *variable selection* in *Mixed Integer Programming (MIP)* solvers. While this problem is clearly a sequential decision making task, modeling it as an MDP is challenging. As a result, existing works use other approaches such as ranking or imitation learning. This paper overcomes these challenges by introducing a new problem representation.
SP:bbaedd5d8e7591fa3a5587260bf19f3d05779976
Frequency Decomposition in Neural Processes
Neural Processes are a powerful tool for learning representations of function spaces purely from examples , in a way that allows them to perform predictions at test time conditioned on so-called context observations . The learned representations are finite-dimensional , while function spaces are infinite-dimensional , and so far it has been unclear how these representations are learned and what kinds of functions can be represented . We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components , similar to a Fourier transform . In this context , we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce , depending on their representation size . This bound is confirmed empirically . Finally , we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others , which makes them programmable band-pass or band-stop filters . 1 INTRODUCTION . Neural Processes ( Garnelo et al. , 2018a ; b ) are a class of models that can learn a distribution over functions , or more generally a function space . In contrast to many other approaches that do the same , for example Bayesian Neural Networks , Neural Processes learn an explicit representation of such a function space , which allows them to condition their predictions on an arbitrary number of observations that are only available at test time . This representation is finite-dimensional , while function spaces are infinite-dimensional , and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so . Our work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space , and in the process describes constraints and conditions that decide what kinds of function spaces can be represented . We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective , which allows us to derive a theoretical upper bound on the frequencies , i.e . Fourier components , of functions that can be represented . We subsequently confirm this bound empirically , which suggests that the learned representations should contain a notion of frequency . To further investigate this hypothesis , we continue with a visualization of the learned representations , which reveals that Neural Processes can decompose a function space into different frequency components , essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour . As further evidence of this we train Neural Processes to represent only certain frequencies , which results in them suppressing those frequencies that were not observed in the training data . Our contributions can be summarized as follows1 : • We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct . As we show , the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval . • We investigate learned representations qualitatively , presenting evidence that Neural Processes perform a frequency decomposition of the function space , akin to a Fourier transform . This behaviour is not incentivized externally but rather emerges naturally . 
1The complete source code to reproduce our experiments is available at https://github.com/*** • We show that by choosing the training distribution appropriately , Neural Processes can be made to represent certain frequencies and suppress others , which turns them into programmable band-pass or band-stop filters . 2 BACKGROUND . Neural Processes ( Garnelo et al. , 2018a ; b ) are maps P : C , X → Y , where C is a set of tuples $\{ ( x , f ( x ) ) \}_{c=1}^N =: ( \mathbf{x}_c , f ( \mathbf{x}_c ) )$ 2 with arbitrary but positive cardinality N , and f ∈ F : X → Y ( 2we use boldface as a shorthand for sets , not vectors ) . C is often called the context , because Neural Processes perform predictions for values xt ∈ X ( t for target ) , conditioned on these points . F is the function space we would like to find a representation of . Note that some sources define function spaces as any set of functions with a shared domain and co-domain , while others require them to be vector spaces as well . We don ’ t concern ourselves with this distinction and further restrict our work to X = Y = R , because it allows us to visualize learned representations . We only look at the original Neural Processes , namely the deterministic Conditional Neural Processes ( CNP ) ( Garnelo et al. , 2018a ) and the variational Neural Processes ( NP ) ( Garnelo et al. , 2018b ) , because newer contributions in the field work in ways that preclude them from being analyzed in the same way . We discuss this further in Section 5 . In CNPs and NPs , the map P is separated into two parts , a so-called encoding E : C → Z and a decoding or generating part G : Z , X → Y . Z is referred to as the representation or latent space . To allow Neural Processes to approximate arbitrary3 function spaces F , E and G are typically chosen to be powerful approximators , specifically neural networks , as the name suggests ( 3this will depend on the implementation of E and G , and for neural networks F is practically restricted to continuous and differentiable functions ) . The defining characteristic of CNPs and NPs is that E encodes individual pairs ( x , f ( x ) ) from the context separately , and the resulting representations are averaged to form a global representation , meaning one that is independent of the target points xt at which we then evaluate the Neural Process . This is often not the case in later work , for example in Attentive Neural Processes ( Kim et al. , 2019 ) , where the individual representations are instead aggregated using an attention mechanism that depends on xt . In CNPs the representations are deterministic , while in NPs they parametrize mean and ( log- ) variance of a Gaussian distribution , so the latter are trained using variational inference . For details on implementation and training we refer to Appendix A.1 . Our work will investigate how these global representations , which are finite-dimensional , represent infinite-dimensional function spaces . As stated above , E and by extension the Neural Process P act on set-valued inputs . This is contrary to the vast majority of machine learning work where inputs are vectors of fixed dimension and ordering . Recall that sets are permutation invariant , so we must ensure that the same is true for the output of E . It is easy to see that this is given when we average individual encodings , but Zaheer et al . ( 2017 ) show that it is in fact the only way to ensure it : E is permutation-invariant if and only if it has a so-called sum-decomposition , i.e . it can be represented in the form $E ( \mathbf{x} ) = \rho \left( \sum_{i=1}^N \phi ( x_i ) \right)$ ( 1 ) where ρ , φ are appropriately chosen functions . Wagstaff et al .
( 2019 ) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N , the dimension of the image Z must be at least N . This will become relevant in the following section . 3 AN UPPER BOUND ON SIGNAL FREQUENCIES . We mentioned in the previous section that the encoder E in a Neural Process should have a sum-decomposition , so that the global representations are permutation-invariant , as shown in Zaheer et al . ( 2017 ) . Expanding on this , Wagstaff et al . ( 2019 ) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N . What these works do not consider are the implications for situations where the elements of the sets are input-output tuples of some function f , as is typically the case in Neural Processes . We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process . In order to do this , we must first define what it means to successfully learn a representation of a function space . Definition 3.1 ( Representation of Function Spaces in Neural Processes ) . We say that a Neural Process P has learned a representation of a function space F , defined on an interval [ a , b ] ⊂ R , if , for some error tolerance ε , it holds for all x ∈ [ a , b ] and for all f ∈ F , represented as a suitable set of discrete measurements $( \mathbf{x}_f , f ( \mathbf{x}_f ) )$ , that $| P ( ( \mathbf{x}_f , f ( \mathbf{x}_f ) ) , x ) - f ( x ) | < \varepsilon$ . That means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance . The choice of this tolerance is essentially arbitrary , but should reflect that for g ∉ F the reconstructions should generally not be accurate within ε . We also write that f is represented as a suitable set of discrete measurements , by which we mean that it must be possible to reconstruct f from those measurements . Switching to signal-processing terminology , we know that to represent a continuous signal as a set of discrete measurements , we need to sample it at points with a distance of at most $\tau = 1 / ( 2 \nu_{max} )$ , where νmax is the maximum frequency component of the signal . This is most commonly known as the Nyquist-Shannon sampling theorem ( Whittaker , 1915 ; Kotelnikov , 1933 ; Shannon , 1949 ) . For any finite real interval [ a , b ] , this translates to a number of sampling points $N > 2 | b - a | \nu_{max}$ . The latter allows us to make a connection to the findings by Wagstaff et al . ( 2019 ) , so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct . Theorem 3.1 ( Maximum Frequency in Neural Process Representations ) . A Neural Process P with latent dimension Dr can only learn a representation of some function space F defined on a finite interval [ a , b ] ⊂ R if for all f ∈ F with a maximum frequency content $\nu_{max , f}$ it holds that : $\nu_{max , f} < \frac{D_r}{2 | b - a |}$ ( 2 ) Note that this means we should in theory be able to represent any function space that obeys Eq . ( 2 ) to within arbitrarily small ε .
In practice , we will typically have less control over F , and we only find approximate representations . Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq . ( 2 ) . It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points . During training , we work with randomly sampled inputs , but at test time equidistant points are used , as we outline in Appendix A.2 .
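As a concrete illustration of Theorem 3.1, the small sketch below evaluates the bound of Eq. (2) in both directions: the largest frequency content a given representation size admits, and the smallest representation size needed for a target frequency. This is our own illustrative code under the stated assumptions; the function names are not taken from the paper's released implementation.

```python
import math

def max_representable_frequency(latent_dim: int, a: float, b: float) -> float:
    """Largest frequency content allowed by Eq. (2): nu_max,f < D_r / (2|b - a|)."""
    return latent_dim / (2.0 * abs(b - a))

def min_latent_dim(nu_max: float, a: float, b: float) -> int:
    """Smallest representation size D_r for which Eq. (2) still holds strictly."""
    return math.floor(2.0 * abs(b - a) * nu_max) + 1

# A Neural Process with a 128-dimensional representation on [-3, 3] can, at
# best, represent signals with frequency content below ~10.67.
print(max_representable_frequency(128, -3.0, 3.0))  # 10.666...
# Conversely, signals with frequencies up to 5 on [-3, 3] need D_r >= 31.
print(min_latent_dim(5.0, -3.0, 3.0))               # 31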
The work examines properties of Neural Processes (NPs), specifically of deterministic NPs and how they form finite-dimensional representations of infinite-dimensional function spaces. NPs learn functions f that best represent/fit discrete sets of points. Based on signal-theoretic aspects of discretisation, the authors derive a theoretical upper bound on the frequencies of functions f that can be represented. The bound depends on the latent dimension (representation size) and the finite interval spanned by the points. Simulations are run to test the validity of the upper bound. The authors find that NPs behave like a Fourier transform and decompose the spectrum of the signal. Since the representation learns to encode the specific frequencies seen during training, NPs can be used as band-pass/band-stop filters.
SP:a20769de2c7acf390c7e3bece904a17df6a991bd
Multi-agent Policy Optimization with Approximatively Synchronous Advantage Estimation
1 INTRODUCTION . Reinforcement learning ( RL ) algorithms have shown amazing performance on many singleagent ( SA ) environment tasks ( Mnih et al. , 2013 ) ( Jaderberg et al. , 2016 ) ( Oh et al. , 2018 ) . However , for many real-world problems , the environment is much more complex where RL agents often need to cooperate with other agents . For example , taxi scheduling ( Nguyen et al. , 2018 ) and network control ( Chu et al. , 2019 ) . In cooperative multi-agent tasks , each agent is treated as an independent decision-maker , but can be trained together to learn cooperation . The common goal is to maximize the global return in the perspective of a team of agents . To deal with such tasks , the architecture of centralized training and decentralized executions ( CTDE ) is proposed ( Oliehoek & Vlassis , 2007 ) ( Jorge et al. , 2016 ) . The basic idea of CTDE is to construct a centralized policy evaluator , which only works during training and is accessable to global information . At the same time , each agent is assigned with a local policy for decentralized execution . The role of the evaluator is to evaluate agents ’ local policies differentially from the global perspective . A challenge in construction of centralized evaluator is multi-agent credit assignment ( Chang et al. , 2004 ) : in cooperative settings , joint actions typically generate only global rewards , making it difficult for each agent to deduce its own contribution to the team ’ s success . Credit assignment requires differentiate evaluation for agents ’ local policies , but designing individual reward function for each agent is often complicated and lacks of generalization ( Grzes , 2017 ) ( Mannion et al. , 2018 ) . Current policy based MARL methods generally realize credit assignment by introducing differentiate value functions or advantage functions ( Foerster et al. , 2018 ) ( Lowe et al. , 2017 ) . However , these value functions or advantage functions are estimated asynchronously but decentralized policies are updated synchronously , as shown in figure 1 ( b ) , which results in natural estimation bias . In this paper , we propose a novel policy based MARL method called multi-agent policy optimization with approximatively synchronous advantage estimation ( ASAE ) . In our work , we first define the counter-factual scenes , in which MA advantage estimation can be converted to SA advantage estimation . For certain agent , each counter-factual scene is assigned with a SA advantage . Then the marginal advantage function is defined as the expectation of SA advantages on distribution of counter-factual scenes , and credit assignment is realized by constructing different scenes ’ distribution for different agents . Moreover , in order to achieve synchronous advantage estimation , an approximation of other agents ’ joint future policy is introduced . To ensure the approximation is reliable , a restriction is applied to the original multi-agent policy optimization ( MAPO ) problem . The approximate optimization problem is simplified and broken down into multiple sub-problems , which has a similar form to trust region policy optimization ( TRPO ) problem . And the sub-problems are finally solved by proximal policy optimization ( PPO ) method . We have two contributions in this work : ( 1 ) A novel advantage estimation method called marginal advantage estimation , which realizes credit assignment for MARL is proposed . 
More importantly , this method provides a channel for various SA advantage functions expanding to multi-agent system . ( 2 ) A simple yet effective method for approximatively synchronous advantage estimation is firstly proposed . 2 RELATED WORK . A common challenge in cooperative multi-agent tasks is credit assignment . RL algorithms designed for single-agent tasks , ignore credit assignment and take other agents as part of partial observable environment . Such algorithms perform poorly in complex cooperative tasks which require high coordination ( Lowe et al. , 2017 ) . To deal with the challenge , some value based MARL methods estimate a local Q value for each agent , and the shared global Q value is then constructed through these local Q values . Value decomposition network ( VDN ) constructs the global Q value by simply adding all local Q values together ( Sunehag et al. , 2018 ) . And in QMIX algorithm ( Rashid et al. , 2018 ) , the global Q value is obtained by mixing local Q values with a neural network . In mean field multi-agent methods , local Q values are defined on agent pairs . The mapping from local Q values to the global Q value is established by measuring the influence of each agent pair ’ s joint action to the global return ( Yang et al. , 2018 ) . Similarly , for policy based MARL methods , credit assignment is generally realized through differentiated evaluation with CTED structure . Some naive policy based methods estimate local Q values for individual agents with a centralized critic ( Lowe et al. , 2017 ) , resulting in large variance . Some other methods try to introduce advantage function in MARL . Counter-factual multi-agent policy gradient ( COMA ) method ( Foerster et al. , 2018 ) is inspired by the idea of difference reward ( Wolpert & Tumer , 2002 ) and provides a naive yet effective approach for differentiated advantage estimation in cooperative MARL . In COMA , a centralized critic is used to predict the joint Q value function Qπ ( s , u ) of joint action u under state s. And the advantage for agent a is defined as Aa ( s , u ) = Q ( s , u ) − ∑ u′a πa ( u′a|τa ) Q ( s , ( u−a , u′a ) ) ( 1 ) where τ and π represent trajectory and policy respectively . a and -a denote current agent and the set of other agents respectively . COMA introduces a counter-factual baseline , which assumes that other agents take fixed actions , as shown in figure 1 ( b ) . COMA performs synchronous updates with asynchronous estimation , which leads to lagging and biased advantage estimation . In contrast , asynchronous estimation & asynchronous updating is more reliable yet more complicated . An ideal approach is synchronous estimation & synchronous updating . However , it requires prediction of other agents ’ future policies . 3 BACKGROUND . We consider a most general setting of partially observable , full cooperative multi-agent tasks , which can be described as a stochastic game defined by a tuple G = < S , U , P , r , Z , O , n , γ > . The true state of environment s ∈ S is unavailable to all agents . At each time step , n agents identified by a ∈ A ( A = { 1 , 2 , · · · , n } ) receive their local observations za ∈ Z , and take actions ua ∈ U simultaneously . The joint observation Z = Zn is acquired by the observation function O ( s , a ) : S × A → Z . The next state is determined by joint action u ∈ U ( U = Un ) and the transition function P ( s′|s , u ) : S×U×S → [ 0 , 1 ] . 
The reward function r(s, u): S × U → R is shared by all agents, as is the discounted return $G_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$. γ ∈ [0, 1) is a discount factor. In policy-based MARL with CTED architecture, each agent has a local trajectory τ^a consisting of its historical observations and actions {(z^a_0, u^a_0), (z^a_1, u^a_1), · · ·}, and an independent policy π^a(u^a|τ^a) is constructed for each agent on its local trajectory. The action-state value function Q^π(s, u) and the state value function V^π(s) are used to evaluate the joint policy. The advantage function is A^π(s, u) = Q^π(s, u) − V^π(s). For clarity, symbols in bold denote joint variables of the group of agents. In single-agent policy optimization problems (Schulman et al., 2015a), the objective is to maximize the expected action-state value function $E_{\pi_\theta}[Q^{\pi_\theta}]$. Similarly, for MAPO with CTDE structure, each agent optimizes its local policy individually with Q values estimated by the centralized critic. Under this circumstance, the overall objective for each agent a = 1 to n is: $\max_{\theta^a} E_{(\pi_{\theta^a}, \pi^{-a})}[Q^a(\pi_{\theta^a}, \pi^{-a})]$ (2) where the Q values can be substituted by advantages to reduce the variance. 4 APPROXIMATIVELY SYNCHRONOUS ADVANTAGE ESTIMATION IN MULTI-AGENT SYSTEMS. In this section, we first introduce marginal advantage estimation, which extends advantage functions from SARL to MARL and realizes credit assignment. We then describe how to realize approximatively synchronous advantage estimation based on the marginal advantage function in the MAPO problem. 4.1 MARGINAL ADVANTAGE ESTIMATION. In this subsection, we address the challenge of credit assignment through the proposed marginal advantage estimation. We first consider a counter-factual setting where advantages are estimated asynchronously but policies are updated synchronously, as shown in figure 1(b). In this case, a counter-factual scene can be defined as follows: at a certain state, for agent a, the other agents always take fixed actions. In partially observable, fully cooperative multi-agent settings, the counter-factual advantage of agent a's action u^a under state s is derived from the joint action's value (or joint Q value) function Q(s, u): $A^a(s, \mathbf{u}) = A^a(s, (u^a, \mathbf{u}^{-a})) = Q(s, \mathbf{u}) - \int_{u^a} Q(s, (\mathbf{u}^{-a}, u^a))\, d\pi^a(u^a|\tau^a)$ (3) From the view of agent a, the counter-factual advantage depends on the other agents' joint action u^{−a}, which is a random variable with u^{−a} ∼ π^{−a}. In order to remove this dependency, the marginal Q value function of agent a is defined as $Q^a(s, u^a) = E_{\mathbf{u}^{-a} \sim \pi^{-a}}[Q(s, (u^a, \mathbf{u}^{-a}))]$ (4) Notice that in the CTED structure, the policies π^a(u^a|τ^a) and π^{−a}(u^{−a}|τ^{−a}) are independent. By replacing the joint Q value function with the marginal Q value function, the marginal advantage function is derived: $A^a(s, u^a) = Q^a(s, u^a) - \int_{u^a} Q^a(s, u^a)\, d\pi^a(u^a|\tau^a) = \int_{\mathbf{u}^{-a}} Q(s, u^a, \mathbf{u}^{-a})\, d\pi^{-a}(\mathbf{u}^{-a}|\tau^{-a}) - \int_{u^a} \int_{\mathbf{u}^{-a}} Q(s, u^a, \mathbf{u}^{-a})\, d\pi^{-a}(\mathbf{u}^{-a}|\tau^{-a})\, d\pi^a(u^a|\tau^a) = \int_{\mathbf{u}^{-a}} \left[ Q(s, u^a, \mathbf{u}^{-a}) - \int_{u^a} Q(s, u^a, \mathbf{u}^{-a})\, d\pi^a(u^a|\tau^a) \right] d\pi^{-a}(\mathbf{u}^{-a}|\tau^{-a}) = \int_{\mathbf{u}^{-a}} A^a(s, \mathbf{u})\, d\pi^{-a}(\mathbf{u}^{-a}|\tau^{-a})$ (5) Such a replacement does not change the result of advantage estimation, because the joint Q value is substituted by its expectation. From equation (5), the value of the marginal advantage differs across agents, which realizes credit assignment.
It can be easily proved that if counter-factual advantage Aa ( s , u ) is an unbiased estimation of joint Q value Q ( s , u ) , then marginal advantage is also an unbiased estimation of marginal Q value ( Appendix I ) . In a counter-factual scene , from the view of agent a , other agents and their fix joint actions u−a can be regarded as part of the environment . Let ( s , u−a ) = sctf and counter-factual advantage function can be written as Aa ( s , u ) =Aa ( sctf , ua ) =Q ( sctf , u a ) − ∫ ua Q ( sctf , u a ) dπa ( ua|τa ) ( 6 ) In counter-factual scenes , counter-factual advantage function is identical to advantage function in SARL , which means the counter-factual advantage in equation ( 5 ) can be replaced by any form of advantage function used in SARL . For example , considering using TD residual δat = r ( st , u a t ) + γV ( st+1 ) −V ( st ) as an estimation of joint advantage Aa ( st , ut ) , the marginal advantages could be written as Aa ( st , ut ) : = Eu−a∼π−a [ ∞∑ l=0 γlδat+l ] Aa ( st , ut ) : = Eu−a∼π−a [ δ a t ] ( 7 ) The former is unbiased estimation , but has high variance . The latter is biased estimation for any V 6= V π , but has much lower variance . These two methods can be combined for compromise between bias and variance ( Schulman et al. , 2015b ) . As agents ’ policies are independent , the expectation in equation ( 5 ) can be split into a ( n− 1 ) -layer integration , which is complicated . For simplicity and efficiency , the Monte-Carlo ( MC ) sampling can be applied as a substitution . Aa ( st , ut ) = ∫ u−a Aa ( st , ut ) dπ−a ≈ 1 m m∑ u−a Aa ( st , ut ) ( 8 ) Where m is the number of other agents ’ joint action samples . The principle of one step process to calculate marginal advantage with TD residual is shown in figure 2 . Firstly , based on the last true state st , m joint action samples are sampled . These samples are then reorganized . Take agent 1 as example , action u1 , t from Sa1 is combined with other agents ’ action samples from Sa2 to Sam respectively . As a result , m reorganized new samples are acquired . Based on these new samples , one step simulations are executed and m counter-factual rewards and states are acquired , which are used to calculate the estimation of marginal advantage . At last , the next true state is selected form counter-factual states . Both methods in equation ( 7 ) use V value predictor and require interactive simulation . Agent needs to interact with environment to get extra samples . In this work , we consider using centralized critic to predict jointQ values , and the marginal advantages can be directly calculated with theseQ values , which avoids interactive simulation .
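To make the estimator concrete, here is a minimal tabular sketch of the Monte-Carlo approximation in Eq. (8): for one agent we sample the other agents' joint actions from their policies, compute the counter-factual advantage of Eq. (3) for each sample, and average. The function names and the toy Q table are our own illustrations; the paper's implementation uses a centralized neural critic rather than a lookup table.

```python
import numpy as np

rng = np.random.default_rng(0)

def marginal_advantage(q_fn, state, agent, u_a, policies, m=1000):
    """Monte-Carlo estimate of Eq. (8): average the counter-factual advantage
    A^a(s, u) = Q(s, u) - sum_{u'^a} pi^a(u'^a) Q(s, (u^{-a}, u'^a))
    over m samples of the other agents' joint action u^{-a} ~ pi^{-a}."""
    pi_a = policies[agent]
    total = 0.0
    for _ in range(m):
        # sample a full joint action, then pin agent a's slot to u_a
        u = [rng.choice(len(p), p=p) for p in policies]
        u[agent] = u_a
        q = q_fn(state, tuple(u))
        # counter-factual baseline: expectation over agent a's own policy
        baseline = sum(
            pi_a[ua] * q_fn(state, tuple(u[:agent] + [ua] + u[agent + 1:]))
            for ua in range(len(pi_a))
        )
        total += q - baseline
    return total / m

# Toy check: 2 agents, 2 actions each, a hand-made joint Q table.
Q = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 2.0}
policies = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
adv = marginal_advantage(lambda s, u: Q[u], None, agent=0, u_a=1, policies=policies)
print(round(adv, 2))  # ~0.25: action 1 looks good for agent 0 on average
```

Because the expectation over u^{−a} is estimated by sampling rather than by the (n−1)-layer integration of Eq. (5), the cost per agent grows with the sample count m, not with the size of the other agents' joint action space.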
The paper deals with the problems of credit assignment and synchronous advantage estimation in cooperative multi-agent reinforcement learning. The authors introduce marginal advantage functions, defined as expectations of counterfactual advantages over the other agents' policies, and use them for per-agent advantage estimation. These functions make it possible to decompose the multi-agent policy optimization problem into single-agent policy optimization subproblems, which take a TRPO-like form and are solved with PPO.
SP:ba25b5b02701e01998e9dd22e4230c4e095f4542
Adaptive Stacked Graph Filter
We study Graph Convolutional Networks ( GCN ) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fullyconnected weights versus trainable polynomial coefficients . We find that by stacking graph filters with learnable polynomial parameters , we can build a highly adaptive and robust vertex classification model . Our treatment here relaxes the low-frequency ( or equivalently , high homophily ) assumptions in existing vertex classification models , resulting a more ubiquitous solution in terms of spectral properties . Empirically , by using only one hyper-parameter setting , our model achieves strong results on most benchmark datasets across the frequency spectrum . 1 INTRODUCTION . The semi-supervised vertex classification problem ( Weston et al. , 2012 ; Yang et al. , 2016 ) in attributed graphs has become one of the most fundamental machine learning problems in recent years . This problem is often associated with its most popular recent solution , namely Graph Convolutional Networks ( Kipf & Welling , 2017 ) . Since the GCN proposal , there has been a vast amount of research to improve its scalability ( Hamilton et al. , 2017 ; Chen et al. , 2018 ; Wu et al. , 2019 ) as well as performance ( Liao et al. , 2019 ; Li et al. , 2019 ; Pei et al. , 2020 ) . Existing vertex classification models often ( implicitly ) assume that the graph has large vertex homophily ( Pei et al. , 2020 ) , or equivalently , low-frequency property ( Li et al. , 2019 ; Wu et al. , 2019 ) ; see Section 2.1 for graph frequency . However , this assumption is not true in general . For instance , let us take the Wisconsin dataset ( Table 1 ) , which captures a network of students , faculty , staff , courses , and projects . These categories naturally exhibit different frequency patterns1 . Connections between people are often low-frequency , while connections between topics and projects are often midrange . This problem becomes apparent as GCN-like models show low accuracies on this dataset ; for example , see ( Pei et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ) . This paper aims at establishing a GCN model for the vertex classification problem ( Definition 1 ) that does not rely on any frequency assumption . Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure . Contributions . By observing the relation between label frequency and performance of existing GCN-like models , we propose to learn the graph filters coefficients directly rather than learning the MLP part of a GCN-like layer . We use filter stacking to implement a trainable graph filter , which is capable of learning any filter function . Our stacked filter construction with novel learnable filter parameters is easy to implement , sufficiently expressive , and less sensitive to the filters ’ degree . By using only one hyper-parameter setting , we show that our model is more adaptive than existing work on a wide range of benchmark datasets . The rest of our paper is organized as follows . Section 2 introduces notations and analytical tools . Section 3 provides insights into the vertex classification problem and motivations to our model ’ s design . Section 4 presents an implementation of our model . Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models . Section 6 compares our model and other existing methods empirically . We also provide additional experimental results in Appendix A . 
1 “ Frequency ” is an equivalent concept to “ homophily ” and will be explained in Section 2 . 2 PRELIMINARIES . We consider a simple undirected graph G = ( V , E ) , where V = { 1 , . . . , n } is a set of n vertices and E ⊆ V × V is a set of edges . A graph G is called an attributed graph , denoted by G ( X ) , when it is associated with a vertex feature mapping X : V 7→ Rd , where d is the dimension of the features . We define the following vertex classification problem , also known in the literature as the semi-supervised vertex classification problem ( Yang et al. , 2016 ) . Definition 1 ( Vertex Classification Problem ) . We are given an attributed graph G ( X ) , a set of training vertices Vtr ⊂ V , training labels Ytr : Vtr → C , and label set C. The task is to find a model h : V → C using the training data ( Vtr , Ytr ) that approximates the true labeling function Y : V → C. Let A be the adjacency matrix of the graph G , i.e. , Ai , j = 1 if ( i , j ) ∈ E and 0 otherwise . Let di = ∑ j Aij be the degree of vertex i ∈ V , and let D = diag ( d1 , . . . , dn ) be the n × n diagonal matrix of degrees . Let L = D −A be the combinatorial graph Laplacian . Let L = D−1/2LD−1/2 be the symmetric normalized graph Laplacian . We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties : ( 1 ) its eigenvalues range from 0 to 2 ; and ( 2 ) the spectral properties can be compared between different graphs ( Chung & Graham , 1997 ) . In recent literature , the normalized adjacency matrix with added self-loops , Ã = I −L+ c , is often used as the propagation matrix , where c is some diagonal matrix . 2.1 GRAPH FREQUENCY . Graph signal processing ( Shuman et al. , 2012 ) extends “ frequency ” concepts in the classical signal processing to graphs using the graph Laplacian . Let L = UΛU > be the eigendecomposition of the Laplacian , where U ∈ Rn×n is the orthogonal matrix consists of the orthonormal eigenvectors of L and Λ is the diagonal matrix of eigenvalues . Then , we can regard each eigenvector uk as a “ oscillation pattern ” and its eigenvalue λk as the “ frequency ” of the oscillation . This intuition is supported by the Rayleigh quotient as follows . r ( L , x ) , x > Lx x > x = ∑ u∼v Lu , v ( x ( u ) − x ( v ) ) 2∑ u∈V x ( u ) 2 . ( 1 ) where ∑ u∼v sums over all unordered pairs for which u and v are adjacent , x ( u ) denotes the entry of vector x corresponding to vertex u , and Lu , v is the ( u , v ) -entry of L. From the definition we see that r ( x ) is non-negative and L is positive semi-definite . r ( x ) is also known as a variational characterization of eigenvalues of L ( Horn & Johnson , 2012 , Chapter 4 ) , hence 0 ≤ r ( x ) ≤ 2 for any non-zero real vector x . We use the notation r ( x ) to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context . The Rayleigh quotient r ( x ) measures how the data x is oscillating . Hence , in this study , we use the term “ frequency ” and the “ Rayleigh quotient ” interchangeably . By the definition , the eigenvector ui has the frequency of λi . The labeling y of the vertices is low-frequency if the adjacent vertices are more likely to have the same label . This is a common assumption made by the spectral clustering algorithms ( Shi & Malik , 2000 ; Ng et al. , 2002 ; Shaham et al. , 2018 ) . Commonly used terms , homophily and heterophily , used in network science , correspond to low-frequency and high-frequency , respectively . 2.2 GRAPH FILTERING . 
In classical signal processing , a given signal is processed by filters in order to remove unwanted interference . Here , we first design a frequency response f ( λ ) of the filter , and then apply the filter to the signal in the sense that each frequency component x̂ ( λ ) of the data is modulated as f ( λ ) x̂ ( λ ) . Graph signal processing extends this concept as follows . Same as in classical signal processing , we design a filter f ( λ ) . Then , we represent a given graph signal x ∈ R|V | as a linear combination of the eigenvectors as x = ∑ i xiui . Then , we modulate each frequency component by f ( λ ) as x = ∑ i f ( λi ) xiui . An important fact is that this can be done without performing the eigendecomposition explicitly . Let f ( L ) be the matrix function induced from f ( λ ) . Then , the filter is represented by f ( L ) x . As an extension of signal processing , graph signal processing deals with signals defined on graphs . In definition 1 , each column of the feature matrix X ∈ Rn×d is a “ graph signal ” . Let L = UΛU > be the eigendecomposition where U ∈ Rn×n consists of orthonormal eigenvectors . Signal X is filtered by function f of the eigenvalues as follow . X̄ = Uf ( Λ ) U > X = f ( L ) X ( 2 ) In general , different implementations of f ( L ) lead to different graph convolution models . For instance , GCN and SGC ( Wu et al. , 2019 ) are implemented by f ( L ) = ( I−L+ ( D+ I ) −1/2L ( D+ I ) −1/2 ) k , where the constant term stems from the fact that self-loops are added to vertices and k is the filter order . Generally , the underlying principle is to learn or construct the appropriate filter function f such that it transforms X into a more expressive representation . The filter in GCN is called a low-pass filter because it amplifies low-frequency components ( Li et al. , 2018 ; NT & Maehara , 2019 ) . 3 SPECTRAL PROPERTIES OF FILTERS . Towards building a ubiquitous solution , we take an intermediate step to study the vertex classification problem . Similar to the unsupervised clustering problem , an ( implicit ) low-frequency assumption is commonly made . However , the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns . Table 1 shows three groups of datasets , each with different label frequency ranges . Notably , WebKB datasets ( Wisconsin , Cornell , Texas ) have mixed label frequencies ; some labels have low frequencies while others have midrange frequencies . Therefore , in order to relax the frequency assumptions , we need to learn the filtering function f ( λ ) in a similar way as proposed by Defferrard et al . ( 2016 ) . The filtering function f ( λ ) is often approximated using a polynomial of the graph Laplacian as f ( L ) ≈ poly ( L ) = K∑ i=0 θiLi . ( 3 ) Because polynomials can uniformly approximate any real continuous function on a compact interval ( see , e.g. , ( Brosowski & Deutsch , 1981 ) ) , such approximation scheme is well-justified . Kipf & Welling ( 2017 ) derived their GCN formulation as follows . In their equation 5 , they approximated a graph filter gθ by Chebyshev polynomials Tk as gθ ∗ x ≈ K∑ k=0 θkTk ( D −1/2AD−1/2 ) x . 
( 4 ) Then , they took the first two terms and shared the parameters as θ0 = −θ1 to obtain their equation 7 : gθ ∗ x ≈ θ ( IN +D −1/2AD−1/2 ) x ≈ θ ( 2IN − L ) ( 5 ) Finally , they extended a scalar θ to a matrix Θ to accommodate multiple feature dimensions as Z = D̃−1/2ÃD̃−1/2XΘ ( 6 ) Kipf & Welling ( 2017 ) claimed that the weight matrix Θ can learn different filters , and subsequent works ( e.g. , ( Veličković et al. , 2018 ; Spinelli et al. , 2020 ; Chen et al. , 2020b ) ) also learned filters by Θ . However , neither in theory nor practice it is the case ( Oono & Suzuki , 2020 ) . As the construction suggest , a GCN layer only represents a filter of the form f ( λ ) ≈ 2− λ . To properly learn different graph filters , we should learn the multiplying parameters θ0 , θ1 , . . . , θK in equation 3 . In the next section , we propose a learning model which directly learns these multiplying parameters .
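To make this distinction concrete, the sketch below implements a layer that learns exactly the multiplying coefficients θ_0, ..., θ_K of Eq. (3), rather than the MLP weight Θ of Eq. (6). This is our own minimal PyTorch illustration; the paper's actual model builds its filter by stacking order-one filters, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class PolyGraphFilter(nn.Module):
    """Graph filter f(L) X = sum_{i=0}^K theta_i L^i X (Eq. 3) with trainable
    scalar coefficients theta_i, instead of the MLP weight Theta of Eq. (6)."""
    def __init__(self, K: int):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(K + 1))
        self.theta.data[0] = 1.0  # start near the identity filter f(lambda) = 1

    def forward(self, L: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        out = self.theta[0] * X
        Z = X
        for i in range(1, self.theta.numel()):
            Z = L @ Z                       # accumulates L^i X without forming L^i
            out = out + self.theta[i] * Z
        return out

# Toy usage: normalized Laplacian of a 4-vertex path graph, 2-d features.
A = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0],
                  [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float32)
d_inv_sqrt = torch.diag(A.sum(1).rsqrt())
L = torch.eye(4) - d_inv_sqrt @ A @ d_inv_sqrt
print(PolyGraphFilter(K=3)(L, torch.randn(4, 2)).shape)  # torch.Size([4, 2])
```

Learning the θ_i directly lets the layer realize low-pass, high-pass, or mid-range frequency responses, whereas a fixed GCN layer is pinned to f(λ) ≈ 2 − λ.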
This paper addresses the problem of vertex classification using a new Graph Convolutional Network (GCN) architecture. The linear operator within each layer of the GNN is formed by a polynomial graph filter (i.e., a matrix polynomial of either the adjacency matrix or the Laplacian). Rather than working in the frequency domain, the paper focuses on learning the polynomial coefficients of the filter in the vertex domain. The key novelty is the consideration of a stacked architecture in which the polynomial filter is formed by the successive application (i.e., matrix multiplication) of filters of order one. Numerical experiments with real datasets showcase the merits, including superior classification performance, of the proposed architecture.
SP:37bdb147b866b9e32a94d55dae82d7a42cea8da9
Deep $k$-NN Label Smoothing Improves Reproducibility of Neural Network Predictions
1 INTRODUCTION . Deep neural networks ( DNNs ) have proved to be immensely successful at solving complex classification tasks across a range of problems . Much of the effort has been spent towards improving their predictive performance ( i.e . accuracy ) , while comparatively little has been done towards improving the stability of training these models . Modern DNN training is inherently noisy due to factors such as the random initialization of network parameters , the mini-batch ordering , and effects of various data augmentation or pre-processing tricks , all of which are exacerbated by the non-convexity of the loss surface . This results in local optima corresponding to models that have very different predictions on the same data points . This may seem counter-intuitive , but even when the different runs all produce very high accuracies for the classification task , their predictions can still differ quite drastically as we will show later in the experiments . Thus , even an optimized training procedure can lead to high prediction churn , which refers to the proportion of sample-level disagreements between classifiers caused by different runs of the same training procedure1 . In practice , reducing such predictive churn can be critical . For example , in a production system , models are often continuously improved on by being trained or retrained with new data or better model architectures and training procedures . In such scenarios , a candidate model for release must be compared to the current model serving in production . Oftentimes , this decision is conditioned on more than just overall offline test accuracy– in fact , oftentimes the offline metrics are not completely aligned with actual goal , especially if these models are used as part of a larger system ( e.g . maximizing offline click-through rate vs. maximizing revenue or user satisfaction ) . As a result , these comparisons oftentimes require extensive and costly live experiments , requiring human evaluation in situations where the candidate and the production model disagree ( i.e . in many situations , the true labels are not available without a manual labeler ) . In these cases , it can be highly desirable to lower prediction churn . Despite the practical relevance of lowering predictive churn , there has been surprisingly little work done in this area , which we highlight in the related work section . In this work , we focus on predictive churn reduction under retraining the same model architecture on an identical train and test set . Our main contributions are as follows : • We provide one of the first comprehensive analyses of baselines to lower prediction churn , showing that popular approaches designed for other goals are effective baselines for churn reduction , even compared to methods designed for this goal . 1Concretely , given two classifiers applied to the same test samples , the prediction churn between them is the fraction of test samples with different predicted labels . • We improve label smoothing , a global smoothing method popular for improving model confidence scores , by utilizing the local information leveraged by the k-NN labels thus introducing k-NN label smoothing which we show to often outperform the baselines on a wide range of benchmark datasets and model architectures . • We show new theoretical results for the k-NN labels suggesting the usefulness of the k-NN label . 
We show under mild nonparametric assumptions that for a wide range of k , the kNN labels uniformly approximates the Bayes-optimal label and when k is tuned optimally , achieves the minimax optimal rate . We also show that when k is linear in n , the distribution implied by the k-NN label approximates the original distribution smoothed with an adaptive kernel . 2 RELATED WORKS . Our work spans multiple sub-areas of machine learning . The main problem this paper tackles is reducing prediction churn . In the process , we show that label smoothing is an effective baseline and we improve upon it in a principled manner using deep k-NN label smoothing . Prediction Churn . There are only a few works which explicitly address prediction churn . Fard et al . ( 2016 ) proposed training a model so that it has small prediction instability with future versions of the model by modifying the data that the future versions are trained on . They furthermore propose turning the classification problem into a regression towards corrected predictions of an older model as well as regularizing the new model towards the older model using example weights . Cotter et al . ( 2019 ) ; Goh et al . ( 2016 ) use constrained optimization to directly lower prediction churn across model versions . Simultaneously training multiple identical models ( apart from initialization ) while tethering their predictions together via regularization has been proposed in the context of distillation ( Anil et al. , 2018 ; Zhang et al. , 2018 ; Zhu et al. , 2018 ; Song & Chai , 2018 ) and robustness to label noise ( Malach & Shalev-Shwartz , 2017 ; Han et al. , 2018 ) . This family of methods was termed “ co-distillation ” by Anil et al . ( 2018 ) , who also noted that it can be used to reduce churn in addition to improving accuracy . In this paper , we show much more extensively that co-distillation is indeed a reasonable baseline for churn reduction . Label smoothing . Label smoothing ( Szegedy et al. , 2016 ) is a simple technique that proposes to train a model the model on the soft labels obtained by a convex combination of the hard true label and the soft uniform distribution across all the labels . It has been shown that it prevents the network from being over-confident and leads to better confidence calibration ( Müller et al. , 2019 ) . Here we show that label smoothing is a reasonable baseline for reducing prediction churn , and we moreover enhance it for this task by smoothing the labels locally via k-NN rather than a the pure global approach mixing with the uniform distribution . k-NN Theory . The theory of k-NN classification has a long history ( e.g . Fix & Hodges Jr ( 1951 ) ; Cover ( 1968 ) ; Stone ( 1977 ) ; Devroye et al . ( 1994 ) ; Chaudhuri & Dasgupta ( 2014 ) ) . To our knowledge , the most relevant k-NN classification result is by Chaudhuri & Dasgupta ( 2014 ) , who show statistical risk bounds under similar assumptions as used in our work . Our analysis shows finitesample L∞ bounds on the k-NN labels , which is a stronger notion of consistency as it provides a uniform guarantee , rather than an average guarantee as is shown in previous works under standard risk measures such asL2 error . We do this by leveraging recent techniques developed in Jiang ( 2019 ) for k-NN regression , which assumes an additive noise model instead of classification . Moreover , we provide to our knowledge the first consistency guarantee for the case where k grows linearly with n. Deep k-NN . 
k-NN is a classical method in machine learning which has recently been shown to be useful when applied to the intermediate embeddings of a deep neural network ( Papernot & McDaniel , 2018 ) to obtain more calibrated and adversarially robust networks . This is because standard distance measures are often better behaved in these representations leading to better performance of k-NN on these embeddings than on the raw inputs . Jiang et al . ( 2018 ) uses nearest neighbors on the intermediate representations to obtain better uncertainty scores than softmax probabilities and Bahri et al . ( 2020 ) uses the k-NN label disagreement to filter noisy labels for better training . Like these works , we also leverage k-NN on the intermediate representations but we show that utilizing the k-NN labels leads to lower prediction churn . 3 ALGORITHM . Suppose that the task is multi-class classification with L classes and the training datapoints are ( x1 , y1 ) , ... , ( xn , yn ) , where xi ∈ X , and X is a compact subset of RD and yi ∈ RL , where represents the one-hot vector encoding of the label– that is , if the i-th example has label j , then yi has 1 in the j-th entry and 0 everywhere else . Then we give the formal definition of the smoothed labels : Definition 1 ( Label Smoothing ) . Given label smoothing parameter 0 ≤ a ≤ 1 , then the smoothed label y is ( where 1L denotes the vector of all 1 ’ s in RL ) . yLSa : = ( 1− a ) · y + a L · 1L . We next formally define the k-NN label , which is the average label of the example ’ s k-nearest neighbors in the training set . Let us use shorthand X : = { x1 , ... , xn } and yi ∈ RL . Definition 2 ( k-NN label ) . Let the k-NN radius of x ∈ X be rk ( x ) : = inf { r : |B ( x , r ) ∩X| ≥ k } whereB ( x , r ) : = { x′ ∈ X : |x−x′| ≤ r } and the k-NN set of x ∈ X beNk ( x ) : = B ( x , rk ( x ) ) ∩X . Then for all x ∈ X , the k-NN label is defined as ηk ( x ) : = 1 |Nk ( x ) | n∑ i=1 yi · 1 [ xi ∈ Nk ( x ) ] . The label smoothing method can be seen as performing a global smoothing . That is , every label is equally transformed towards the uniform distribution over all labels . While it seems almost deceptively simple , it has only recently been shown to be effective in practice , specifically for better calibrated networks . However , since this smoothing technique is applied equally to all datapoints , it fails to incorporate local information about the datapoint . To this end , we propose using the k-NN label , which smooths the label across its nearest neighbors . We show theoretically that the k-NN label can be a strong proxy for the Bayes-optimal label , that is , the best possible prediction one can make given the uncertainty . In other words , compared to the true label ( or even the label smoothing ) , the k-NN label is robust to variability in the data distribution and provides a more stable estimate of the label than the original hard label which may be noisy . Training on such noisy labels have been shown to hurt model performance ( Bahri et al. , 2020 ) and using the smoothed labels can help mitigate these effects . To this end , we define k-NN label smoothing as follows : Definition 3 ( k-NN label smoothing ) . Let 0 ≤ a , b ≤ 1 be k-NN label smoothing parameters . Then the k-NN smoothed label of datapoint ( x , y ) is defined as : ykNNa , b = ( 1− a ) · y + a · ( b · 1 L · 1L + ( 1− b ) · ηk ( x ) ) . We see that a is used to weight between using the true labels vs. using smoothing , and b is used to weight between the global vs. local smoothing . 
Algorithm 1 shows how k-NN label smoothing is applied to deep learning models . Like Bahri et al . ( 2020 ) , we perform k-NN on the network ’ s logits layer . Algorithm 1 Deep k-NN label smoothing Inputs : 0 ≤ a , b ≤ 1 , Training data ( x1 , y1 ) , ... , ( xn , yn ) , model training procedureM . Train model M0 on ( x1 , y1 ) , ... , ( xn , yn ) withM . Let z1 , ... , zn ∈ RL be the logits of x1 , ... , xn , respectively , w.r.t . M0 Let ỹi be the k-NN smoothed label of ( zi , yi ) computed w.r.t . dataset ( z1 , y1 ) , ... , ( zn , yn ) . Train model M on ( x1 , ỹ1 ) , ... , ( xn , ỹn ) withM .
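A minimal NumPy sketch of Definitions 2 and 3 and the smoothing step of Algorithm 1 is given below, using brute-force Euclidean k-NN on the logits for clarity. The toy data and names are our own; a practical implementation would use an approximate nearest-neighbor index over the logits of the trained model M0.

```python
import numpy as np

def knn_labels(Z, Y, k):
    """eta_k: average one-hot label over the k nearest neighbors in logit
    space (Definition 2; the k-NN set contains the point itself)."""
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    nn_idx = np.argsort(D, axis=1)[:, :k]
    return Y[nn_idx].mean(axis=1)

def knn_smooth(Y, eta, a, b):
    """Definition 3: blend the true label, uniform smoothing, and eta_k.
    Each smoothed label still sums to 1."""
    L = Y.shape[1]
    return (1 - a) * Y + a * (b * np.ones_like(Y) / L + (1 - b) * eta)

# Toy usage: two clusters of "logits", one label deliberately noisy.
Z = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
Y = np.eye(2)[[0, 0, 1, 1, 1, 1]]  # point 2 disagrees with its neighborhood
print(knn_smooth(Y, knn_labels(Z, Y, k=3), a=0.3, b=0.5).round(3))
```

Setting b = 1 recovers plain global label smoothing; b = 0 relies entirely on the local k-NN labels.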
The main objective of this paper is to improve model stability, in particular to reduce the prediction churn of neural networks. Prediction churn is defined as the fraction of predictions that change under model randomness, e.g., across multiple training runs of the same network. The paper proposes training on an interpolation of global label smoothing and k-NN label smoothing. Theoretically, it is shown that the k-NN label uniformly approximates the Bayes-optimal label for a wide range of k, and that when k is linear in n the implied distribution approximates an adaptively kernel-smoothed version of the original one. Experiments are conducted showing that the proposed method gives the highest test accuracy and lowest churn rate in most cases.
SP:f19be0fdce321827638f91d57607ba340b1c3e4b
Adversarial Feature Desensitization
1 Introduction . When training a classifier , it is common to assume that the training and test samples are drawn from the same underlying distribution . In adversarial machine learning , however , this assumption is intentionally violated by using the classifier itself to perturb the samples from the original ( natural ) data distribution towards a new distribution over which the classifier ’ s error rate is increased [ 52 ] . As expected , when tested on such adversarially generated input distribution , the classifier severely underperforms . To date , various methods have been proposed to defend the neural networks against adversarial attacks [ 34 , 2 ] , additive noise patterns and corruptions [ 24 , 25 , 45 ] , and transformations [ 17 ] . Among these methods , two of the most successful adversarial defense methods to date are adversarial training [ 34 ] , which trains the neural network with examples that are perturbed to maximize the loss on the target model , and TRADES [ 57 ] , which regularizes the classifier to push the decision boundary away from the data . While past adversarial defence methods have successfully improved the neural network robustness against adversarial examples , it has also been shown that these robust networks remain susceptible to even slightly larger adversarial perturbations or other forms of attacks [ 19 , 46 , 48 ] . In this paper , we propose to view the problem of adversarial robustness through the lens of domain adaptation , and to consider distributions of natural and adversarial images as distinct input domains 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . that a classifier is expected to perform well on . We then focus our attention on learning features that are invariant under such domain shifts . Building upon domain adaptation literature [ 4 ] , we use the classification-basedH∆H-divergence to quantify the distance between the natural and adversarial domains . The theory of domain adaptation allows us to formulate a bound on the adversarial classification error ( i.e . the error under the distribution of adversarial examples ) in terms of the classification error on natural images and the divergence between the natural and adversarial features . We further propose an algorithm for minimizing the adversarial error using this bound . For this , we train a classifier and a domain discriminator to respectively minimize their losses on the label classification and domain discrimination tasks . The feature extractor is trained to minimize the label classifier ’ s loss and maximise the discriminator ’ s loss . In this way , the feature extractor network is encouraged to learn features that are both predictive for the classification task and insensitive to the adversarial attacks . The proposed setup is conceptually similar to prior work in adversarial domain adaptation [ 18 , 53 ] , where domain-invariant features are learned through an adversarial game between the domain discriminator and a feature extractor network . This setup is similar to the adversarial learning paradigm widely used in image generation and transformation [ 20 , 28 , 60 ] , unsupervised and semi-supervised learning [ 39 ] , video prediction [ 35 , 31 ] , active learning [ 47 ] , and continual learning [ 16 ] . Some prior work have also considered adversarial learning to tackle the problem of adversarial examples [ 54 , 36 , 9 , 8 ] . 
These methods used generative models to learn the distribution of the adversarial images [ 54 , 36 ] , or to learn the distribution of input gradients [ 9 , 8 ] . Unlike our method which learns a discriminator function between distributions of adversarial and natural features and updates the feature extractor to reduce the discriminability of those distributions . The main contributions of this work are as follows : • We apply domain-adaptation theory to the problem of adversarial robustness ; this allows to bound the adversarial error in terms of the error on the natural inputs and the divergence between the feature ( representation ) distributions of adversarial and natural domains . • Aiming to minimize this bound , we propose a method which learns adversarially robust features that are both predictive and insensitive to adversarial attacks , i.e . can not be used to discriminate between natural and adversarial data . • We empirically demonstrate the effectiveness of the proposed method in learning robust models against a wide range of attack types and attack strengths , and show that our proposed approach often significantly outperforms most previous defense methods . 2 Related Work . There is an extensive literature on mitigating susceptibility to adversarial perturbations [ 34 , 57 , 13 , 59 , 3 , 22 , 7 ] . Adversarial training [ 34 ] is one of the earliest successful attempts to improve robustness of the learned representations to potential perturbations to the input pattern by solving a min-max optimization problem . TRADES [ 57 ] adds a regularization term to the cross-entropy loss which penalizes the network for assigning different labels to natural images and their corresponding perturbed images . [ 41 ] proposed an additional regularization term ( local linearity regularizer ) that encourages the classification loss to behave linearly around the training examples . [ 55 , 51 ] proposed to regularize the flatness of the loss to improve adversarial robustness . Our work is closely related to the domain adaptation literature in which adversarial optimization has recently gained much attention [ 18 , 32 , 53 ] . From this viewpoint one could consider the clean and perturbed inputs as two distinct domains for which a network aims to learn an invariant feature set . Although in our setting , i ) the perturbed domain continuously evolves while the parameters of the feature network are tuned ; ii ) unlike the usual setting in domain-adaptation problems , here we have access to the labels associated with some samples from the perturbed ( target ) domain . Recent work [ 49 ] regularized the network to have similar logit values in response to clean and perturbed inputs and showed that this additional term leads to better robust generalization to unseen perturbations . Related to this , Adversarial Logit Pairing [ 27 ] increases robustness by directly matching the logits for clean and adversarial inputs . JARN [ 9 ] Another line of work is on developing certified defenses which consist of methods with provable bounds over which the network is certified to operate robustly [ 58 , 56 , 10 ] . While these approaches provide a sense of guarantee about the proposed defenses , they are usually prohibitively expensive to train , drastically reduce the performance of the network on natural images , and the empirical robustness gained against standard attacks is low . 3 Our approach . 
We will now make a connection between the domain adaptation and adversarial robustness , and build upon this connection to develop an approach for improving the network ’ s robustness against adversarial attacks . 3.1 Preliminaries . Let Fθ ( x ) : X → Z , where X ⊆ Rn , Z ⊆ Rm , be a feature extractor ( e.g . a neural network with parameters θ ) mapping the input x ∈ X into the feature vector ( representation ) z ∈ Z , and let Cφ : Z → Y , where Y = { 1 , . . . , K } are the class labels , be a classifier , with parameters φ ( e.g. , the last linear layer of a neural network plus the softmax function , on top of the extracted features ) . Adversarial attack : Let π ( x , ) denote a perturbation function ( an adversarial attack ) which , for a given ( x , y ) ∈ X × Y , generates a perturbed sample x′ ∈ B ( x , ) within the -neighborhood of x , B ( x , ) = { x′ ∈ X : ‖x′ − x‖ < } , by solving the following maximization problem max t∈B ( x , ) L ( Cφ ( Fθ ( t ) ) , y ) , ( 1 ) where L is the task classification loss function . In practice , however , the perturbed sample x′ found by an attacker is typically an approximate rather than the exact solution to this maximization problem . In order to characterize the distance between the natural and adversarial data distributions , the following notion of distance between two probability distributions , defined in [ 4 , 18 ] , will be used later to make a connection with domain adaptation theory . H∆H-distance : Let H be a set of binary classifiers ( hypotheses ) , called a hypothesis space ; then the symmetric difference hypothesis space H∆H defines the set of hypotheses that capture the disagreements between two hypotheses inH , as in [ 4 ] : g ∈ H∆H ⇐⇒ g ( x ) = h ( x ) ⊕ h′ ( x ) for some h , h′ ∈ H , ( 2 ) where ⊕ denotes the XOR function . Then theH∆H-distance [ 4 , 18 ] between two data distributions ( domains ) S and T , with respect to the hypothesis spaceH , is defined as : dH∆H ( S , T ) = 2 sup h∈H∆H |Px∼S [ h ( x ) = 1 ] − Px∼T [ h ( x ) = 1 ] | . ( 3 ) This equation turns into an inequation when the supremum is taken over the hypothesis space H instead ofH∆H [ 18 ] . 3.2 A Domain Adaptation View of Adversarial Robustness . A domain is defined as a data distribution D on the set of inputs X [ 5 ] . In the adversarial robustness setting , we consider two domains – the natural and the adversarial domains , corresponding respectively to the source and target domains in domain adaptation . We denote by DX and D′X the natural and adversarial distributions of input instances respectively and by DZ and D′Z their corresponding induced distributions over the feature space Z . As in domain adaptation , we assume that f : X → Y is a labeling function common to both domains . The expected classification error Z of the classifier Cφ over DZ is defined as the probability that the classifier Cφ disagrees with the function f̃ : Z ( Cφ ) = Ez∼DZ [ y 6= Cφ ( z ) ] , ( 4 ) where f̃ : Z → Y is a mapping from the features to the class label such that f ( x ) = f̃ ( Fθ ( x ) ) . We similarly define ′Z as the expected error of Cφ over DZ′ . Using theorem 2 from [ 4 ] that relates the source and the target domain errors , we get an upper bound on the expected adversarial error ′Z as : ′Z ( h ) ≤ Z ( h ) + 1 2 dH∆H ( DZ , D′Z ) + c , ( 5 ) where c is a constant term w.r.t . h. Eq . 
5 essentially gives a bound on the adversarial error ′Z in terms of the natural error Z and a divergence dH∆H between the natural and adversarial domains with respect to their induced representation distributions DZ and D′Z . In the next section , we will describe an algorithm for improving adversarial robustness of a model by iteratively estimating and minimizing these two components of the error bound . 3.3 Adversarial Feature Desensitization . Based on Eq . 5 , the expected adversarial error could be reduced by jointly minimizing the natural error and the divergence between the distributions of natural and adversarial representations dH∆H ( DZ , D′Z ) . While minimizing the natural error X is straightforward , minimizing the crossdomain divergence requires us to estimate dH∆H ( DZ , D′Z ) . As was shown before [ 18 ] , training a domain discriminator Dψ is closely related to estimating the dH∆H ( DZ , D′Z ) . The domain discriminator is a classifier trained to assign a label of 1 to samples from DZ , and -1 to samples from D′Z . Namely , it is shown [ 18 ] that dH∆H ( DZ , D′Z ) ≤ 2 sup h∈H |αDZ , D′Z ( h ) − 1| , ( 6 ) where αDZ , D′Z ( h ) = Pz∼DZ [ h ( z ) = 1 ] +Pz∼D′Z [ h ( z ) = −1 ] combines the true positives and true negatives , and is thus maximized by the optimal domain discriminator h = Dψ . Note that , if the domain distributions DZ and D′Z are the same , then even the best choice of domain discriminator Dψ will achieve chance-level accuracy , corresponding to αDZ , D′Z ( Dψ ) = 1 . Our approach will aim at minimizing this estimated distance dH∆H ( DZ , D′Z ) by tuning the feature extractor network parameters θ in the direction that pushes the distributions DZ and D′Z closer together . In parallel , we train the domain discriminator to estimate and guide the progress of the feature extractor ’ s tuning . We now describe the proposed approach ( see Algorithm 1 ) which essentially involves simultaneous training of the feature extractor Fθ , the task classifier Cφ and the domain discriminator Dψ ( see Figure 1a ) 1 . One iteration of the training procedure consists of the following three steps . First , parameters of the feature extractor Fθ and classifier Cφ are updated aiming to minimize the natural error X using the cross-entropy loss on natural inputs : LC = − 1 m m∑ i=1 ỹi · log ( softmax ( Cφ ( Fθ ( xi ) ) ) ) , ( 7 ) where ỹi is a one-hot encoding of the true label of the i-th sample xi . Next , steps two and three essentially implement a two-player minimax game similar to that in Generative Adversarial Networks ( GAN ) [ 20 ] , carried out between the feature extractor network Fθ and the domain discriminator Dψ , with a value function V ( Fθ , Dψ ) = Ep ( y ) [ Ep ( x|y ) [ S ( −Dψ ( Fθ ( x ) , y ) ) ] ] + Eq ( y ) [ Eq ( x|y ) [ S ( Dψ ( Fθ ( x ) , y ) ) ] ] , ( 8 ) 1Note that we will somewhat abuse the notation , assuming that Cφ and Dψ below correspond to the logits ( last-layer output ) of the corresponding networks . Also , we will use class-conditional discriminators , Dψ ( Fθ ( x , y ) ) , i.e . train different domain discriminator for different label values y. Algorithm 1 : AFD training procedure Input : Adversarial perturbation function ( attack ) π , feature extractor Fθ , task classifier Cφ , domain discriminator Dψ , learning rates α , β , and γ. repeat input next mini-batch { ( xi , yi ) , ... , ( xm , ym ) } for i=1 to m : x′i ← π ( xi , ) Compute LC according to Eq . 7 Compute LD according to Eq . 9 Compute LF according to Eq . 
10 ( θ , φ ) ← ( θ , φ ) − α∇θ , φLC % update feature extractor and task classifier ψ ← ψ − β∇ψLD % update domain discriminator θ ← θ − γ∇θLF % update feature extractor until convergence ; where S is the softplus function . In particular , parameters of the domain discriminator Dψ are updated to minimize the cross-entropy loss associated with discriminating natural and adversarial inputs , maximizing α ( h ) in Eq . 6 . LD = 1 m m∑ i=1 [ S ( −Dψ ( Fθ ( xi ) , yi ) ) + S ( Dψ ( Fθ ( x′i ) , yi ) ) ] , ( 9 ) while the parameters of the feature extractor function Fθ are adversarially updated to maximize the domain discriminator ’ s loss from Eq . 9 LF = 1 m m∑ i=1 S ( −Dψ ( Fθ ( x′i ) , yi ) ) . ( 10 ) In Figure 1b , we visually compare the learning dynamics in adversarial training , TRADES and AFD . Essentially , the adversarial training solves the classification problem by pushing the representation of adversarial examples from different classes away . TRADES regularizes the normal classification loss on the natural inputs with an additional term that encourages the representation of adversarial and natural images to match . Similar to TRADES , in AFD , the regular classification loss on natural inputs is augmented but with an adversarial game which consists of training the domain discriminator that distinguishes between the adversarial and natural inputs for each class followed by updates to the feature extractor to make the representations for natural and adversarial examples to become indistinguishable from each other . Notably , because the parameter update for the feature extractor network is done to maximize the domain discriminator loss and not to decrease the loss for particular adversarial examples ( as is done in adversarial training or TRADES ) , it potentially increases the network robustness against any perturbation that could be correctly classified using the same domain discriminator . This could potentially lead to a broader form of generalization learned by the network . Discussion : Relation to Adversarial Training . Adversarial training minimizes the expected error on adversarial examples ( the perturbed versions of the natural samples ) , generated by an attacker in order to maximize the classification loss . The adversarial training procedure involves a minimax optimization problem consisting of an inner maximization to find adversarial examples that maximize the classification loss and an outer minimization to find model parameters that minimize the adversarial loss . From the domain adaptation point of view , the inner optimization of adversarial training is equal to a sampling procedure that generates samples from the target domain . Intuitively , direct training of the classifier on samples from the target domain would be the best way to improve the accuracy in that domain ( i.e . adversarial classification accuracy ) . However , it ’ s important to note that the adversarial examples found through the inner optimization only approximately maximize the classification loss , and therefore the adversarial error associated with these samples only act as a lower bound on the true adversarial error and therefore the outer loop of the adversarial training method essentially minimizes a lower bound on the adversarial classification error . 
In contrast to this setup , our proposed method minimizes a conservative upper bound on the adversarial error and therefore is more likely to generalize to a larger set of unseen attacks , and to stronger versions of previously seen attacks ( i.e . ones that generate higher-loss samples in the inner optimization loop ) .
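The three updates of Algorithm 1 translate almost line-for-line into a training step. Below is a hedged PyTorch sketch; the module names, the optimizer split, and the attack interface are our own choices, and the class-conditional discriminator is assumed to return one logit per (feature, label) pair.

```python
import torch
import torch.nn.functional as F

def afd_step(feat, clf, disc, opt_task, opt_disc, opt_feat, x, y, attack):
    """One iteration of Algorithm 1 (AFD). feat: feature extractor F_theta,
    clf: task classifier C_phi, disc: class-conditional domain discriminator
    D_psi(z, y) -> logit, attack: (x, y) -> adversarial x'."""
    x_adv = attack(x, y)

    # Step 1 -- Eq. (7): natural cross-entropy, updates theta and phi.
    loss_c = F.cross_entropy(clf(feat(x)), y)
    opt_task.zero_grad(); loss_c.backward(); opt_task.step()

    # Step 2 -- Eq. (9): discriminator separates natural (+) from adversarial (-)
    # features; detach so only psi is updated here.
    z_nat, z_adv = feat(x).detach(), feat(x_adv).detach()
    loss_d = (F.softplus(-disc(z_nat, y)) + F.softplus(disc(z_adv, y))).mean()
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # Step 3 -- Eq. (10): feature extractor makes adversarial features look
    # natural to the discriminator (stale disc grads are cleared in Step 2
    # of the next iteration).
    loss_f = F.softplus(-disc(feat(x_adv), y)).mean()
    opt_feat.zero_grad(); loss_f.backward(); opt_feat.step()
    return loss_c.item(), loss_d.item(), loss_f.item()
```

Note that, unlike adversarial training, the gradient in Step 3 flows through the discriminator's judgment of the features rather than through the classification loss on the particular adversarial examples.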
This paper proposes Adversarial Feature Desensitization (AFD) as a defense against adversarial examples. AFD employs a min-max adversarial learning framework in which the classifier learns to encode features of both clean and adversarial images as the same distribution, thereby desensitizing the features to adversarial perturbations. With the aim of fooling a separate discriminator model into categorizing the classifier's adversarial features as coming from clean images, the classifier is trained with standard cross-entropy and adversarial loss terms. The authors show through experiments on the MNIST, CIFAR10 and CIFAR100 datasets that AFD mostly outperforms previous defenses across different adversarial attacks under white- and black-box conditions.
SP:5751b2abad772e44e69e125a769f25892c2a2e30
Syntactic representations in the human brain: beyond effort-based metrics
1 INTRODUCTION.
Neuroscientists have long been interested in how the brain processes syntax. To date, there is no consensus on which brain regions are involved in processing it. Classically, only a small number of regions in the left hemisphere were thought to be involved in language processing. More recently, the language system was proposed to involve a set of brain regions spanning the left and right hemispheres (Fedorenko & Thompson-Schill, 2014). Similarly, some findings show that syntax is constrained to specific brain regions (Grodzinsky & Friederici, 2006; Friederici, 2011), while other findings show syntax is distributed throughout the language system (Blank et al., 2016; Fedorenko et al., 2012; 2020). The biological basis of syntax was first explored through studies of the impact of brain lesions on language comprehension or production (Grodzinsky, 2000) and later through non-invasive neuroimaging experiments that record brain activity while subjects perform language tasks, using methods such as functional Magnetic Resonance Imaging (fMRI) or electroencephalography (EEG). These experiments usually isolate syntactic processing by contrasting the activity between a difficult syntactic condition and an easier one, and by identifying regions whose activity increases with syntactic effort (Friederici, 2011). An example of these conditions is reading a sentence with an object-relative clause (e.g., "The rat that the cat chased was tired"), which is more taxing than reading a sentence with a subject-relative clause (e.g., "The cat that chased the rat was tired"). In the past decade, this approach was extended to study syntactic processing in naturalistic settings such as reading or listening to a story (Brennan et al., 2012; Hale et al., 2018; Willems et al., 2015). Because such complex material is not organized into conditions, neuroscientists have instead devised effort-based metrics capturing the word-by-word evolving syntactic demands required to understand the material. Brain regions whose activity correlates with those metrics are suggested to be involved in processing syntax. We use the term effort-based metrics to refer to uni-dimensional measures capturing word-by-word syntactic demands. A standard approach for constructing a syntactic effort-based metric is to assume a sentence's syntactic representation and estimate the number of syntactic operations performed at each word. Node Count is one popular such metric. It relies on constituency trees (structures that capture the hierarchical grammatical relationships between the words in a sentence). While traversing the words of the sentence in order, subtrees of the constituency tree get completed; Node Count refers to the number of such subtrees that get completed at each word, effectively capturing syntactic load or effort. Brennan et al. (2012) use Node Count to support the theory that the Anterior Temporal Lobe (ATL) is involved in syntactic processing. Another example of an effort-based metric is given by an EEG study by Hale et al. (2018). They show that parser action count (the number of possible actions a parser can take at each word) is predictive of the P600, a positive peak in the brain's electrical activity occurring around 600ms after word onset. The P600 is hypothesized to be driven by syntactic processing (to resolve incongruencies), and the results of Hale et al. (2018) align with this hypothesis.
Though effort-based metrics are a good proposal for capturing the effort involved in integrating a word into the syntactic structure of a sentence, they do not reflect the entire syntactic information in play. Hence, these metrics cannot be used to study the brain representation of syntactic constructs such as nouns, verbs, relationships and dependencies between words, and the complex hierarchical structure of phrases and sentences. Constituency trees and dependency trees are the two main structures that capture a sentence's syntactic structure. Constituency trees are derived using phrase structure grammars that encode valid phrase and clause structure (see Figure 1(A) for an example). Dependency trees encode relations between pairs of words, such as subject-verb relationships. We use representations derived from both types of trees. We derive word-level dependency (DEP) labels from dependency trees, and we focus on encoding the structural information given by constituency trees, since we want to analyze whether the brain builds hierarchical representations of phrase structure. We characterize the syntactic structure inherent in sentence constituency trees by computing an evolving vector representation of the syntactic structure processed at each word, using the subgraph embedding algorithm by Adhikari et al. (2018). We show that our syntactic structure embeddings, along with simpler syntactic structure embeddings built using conventional syntactic features such as part-of-speech (POS) tags and DEP tags, are better than effort-based metrics at predicting the fMRI data of subjects reading text. This indicates that representations of syntax, and not just syntactic effort, can be observed in fMRI. We also address the important question of whether regions that are predicted by syntactic features are selective for syntax, meaning they are only responsive to syntax and not to other language properties such as semantics. To answer this question, we model the semantic properties of words using a contextual word embedding space (Devlin et al., 2018). We find that regions that are predicted by syntactic features are also predicted by semantic features and thus are not selective for syntax.
Scientific questions. We ask three main questions:
• How can scientists construct syntactic structure embeddings that capture the syntactic structure inherent in phrases and sentences?
• Are these embeddings better at predicting brain activity than effort-based metrics when used as inputs to encoding models?
• Which brain regions are involved in processing complex syntactic structure, and are they different from regions involved in semantic processing?
Contributions. We make four main contributions:
• We propose a subgraph embeddings-based method to model the syntactic structure inherent in phrases and sentences.
• We show that effort-based metrics can be complemented by syntactic structure embeddings, which can predict brain activity to a larger extent than effort-based metrics.
• Using our syntactic structure embeddings, we find some evidence supporting the hypothesis that the brain processes and represents complex syntactic information such as phrase and clause structure.
• We find evidence supporting the existing hypothesis that syntactic processing appears to be distributed across the language network, in regions that are not selective for syntax.
2 METHODS.
We first describe the syntactic features used in this study and their generation.
All of the features we use are incremental, i.e., they are computed per word. We then describe our fMRI data analyses.
Effort-based metrics. We use four effort-based metrics in our analyses: Node Count, Syntactic Surprisal, word frequency, and word length. Node Count is an effort-based metric popular in neuroscience. To compute it, we obtain the constituency tree of each sentence using the self-attentive encoder-based constituency parser by Kitaev & Klein (2018). We compute Node Count for each word as the number of subtrees that are completed by incorporating this word into its sentence. Syntactic Surprisal is another effort-based metric, proposed by Roark et al. (2009) and computed using an incremental top-down parser (Roark, 2001). Both of these metrics aim to measure the amount of effort required to integrate a word into the syntactic structure of its sentence. The word frequency metric is computed using the wordfreq package (Speer et al., 2018) as the Zipf frequency of a word. This is the base-10 logarithm of the number of occurrences per billion of a given word in a large text corpus. Finally, word length is the number of characters in the presented word. The last two metrics approximate the amount of effort required to read a word.
Constituency tree-based Graph Embeddings (ConTreGE). Constituency trees are a rich source of syntactic information. We build three representations of these trees that encode this information:
(a) The largest subtree which is completed upon incorporating a word into a sentence (see Figure 1(B)) is representative of the implicit syntactic information given by the word. Given that Node Count reduces all of the information present in these subtrees to just one number, it is easy to see that it cannot effectively capture this information. POS tags (which categorize words into nouns, verbs, adjectives, etc.) also capture some of the information present in these trees, as they encode phrase structure to a certain extent. However, they cannot completely encode the trees' hierarchical structure or the parsing decisions made while generating them. In order to better encode this structure, we first build subgraph embeddings of these completed subtrees, called ConTreGE Comp vectors.
(b) We hypothesize that the brain not only processes structure seen thus far but also predicts future structure from structure it already knows. To test this, we construct embeddings, simply called ConTreGE vectors, using incomplete subtrees that are constructed by retaining all the phrase structure grammar productions required to derive the words seen so far, thereby allowing us to capture future sentence structure (in the form of future constituents) before the full sentence is read (see Figure 1(C)). These subtrees contain leaves that are non-terminal symbols, unlike complete subtrees, which only have terminal symbols (words and punctuation) as leaves. In this context, a non-terminal symbol is a symbol that can be derived further using some rule in the phrase structure grammar (e.g., NP, VP, etc.). If incomplete subtrees are more representative of the brain's processes, it would mean that the brain expects certain phrase structures even before the entire phrase or sentence is read. ConTreGE Comp and ConTreGE vectors need to be built using accurate constituency trees constructed from the whole sentence. Thus, we reuse the trees generated to compute Node Count to build them.
(c) Further, the brain could be computing several possible top-down partial parses that can derive the words seen thus far (see Figures 1(D) and (E)), modifying the list of possible parses as future words are read. To test this hypothesis, we design Incremental ConTreGE (InConTreGE) vectors that are representative of the most probable parses so far. For a given word, its InConTreGE vector is computed as $v = \sum_{i=1}^{5} e^{-s_i} W_i$, where $W_i$ is the subgraph embedding of a partial parse tree built by an incremental top-down parser (Roark, 2001) after reading the word, and $s_i$ is the score assigned to this partial parse, which is inversely proportional to the parser's confidence in this tree.
To effectively capture the structure of all subtrees, we encode them using the subgraph embeddings proposed by Adhikari et al. (2018), which preserve the neighbourhood properties of subgraphs. A long fixed-length random walk on a subgraph is generated to compute its embedding. Since consecutive nodes in a random walk are neighbours, a long walk can effectively inform us about the neighbourhoods of nodes in the subgraph. Each node in a walk is identified by its unique ID, so a random walk can be interpreted as a "paragraph" in which the words are the node IDs. Finally, the subgraph's embedding is computed as the Paragraph Vector (Le & Mikolov, 2014) of this paragraph, which is representative of the subgraph's structure. Note that all of the subtrees of a given type (complete, incomplete, or partial parse) are encoded together. This ensures that all ConTreGE Comp vectors, all ConTreGE vectors, and all InConTreGE vectors lie in their own spaces. Figure 2 illustrates the subtree encoding process. First, every unique non-terminal in the subtrees is mapped to a unique number (e.g., S is mapped to 1, NP is mapped to 2, etc.) and every terminal is mapped to a unique number representative of the order in which the terminals were presented (the first presented token is mapped to 10000, the second token is mapped to 10001, and so on). We did not map each unique terminal to a unique number (for instance, we did not map all instances of "Harry" to one number) because a random walk through the tree could then give us word co-occurrence information and thus lead to the inclusion of some semantic information in the vectors. Every tree node's label is then replaced by the number it was mapped to in the previous step. The edge lists of these subtrees are supplied to the subgraph embedding generation algorithm to finally obtain 15-dimensional vectors for every presented word. The length of the random walks is set to 100000, and we use an extension of the Distributed Bag of Nodes (DBON) model proposed by Le & Mikolov (2014) for generating Paragraph Vectors, called Sub2Vec-DBON, by Adhikari et al. (2018). The length of the sliding window is set to 5 and the model is trained for 20 epochs. Since ConTreGE Comp, ConTreGE, and InConTreGE encode information about the neighbourhoods of all nodes in the constituency trees, they can capture their hierarchical structure. Thus, brain regions predicted by these vectors are likely to be involved in building and encoding hierarchical sentence structure.
Punctuation. We create one-hot binary vectors indicating the type of punctuation presented along with a word (e.g., "." or ","). For example, a sentence might have ended with "Malfoy.". In this punctuation-based feature space, the column corresponding to "."
will be set to 1 for this word. While punctuation is seldom considered a syntactic feature, sentence boundaries are highly correlated with changes in working memory load. These changes are bound to be a great source of variability in the fMRI signal (as we will observe later). Failing to account for sentence boundaries and working memory might be a source of confounding that has been ignored in the literature.
Part-of-speech tags and dependency tags. We use two standard word-level syntactic features: POS and DEP tags. The POS tag of a word is read off the previously generated constituency trees. The DEP tag of a word (e.g., subject, object, etc.) corresponds to its assigned role in the dependency trees of the presented sentences, which were generated using the spaCy English dependency parser. We create one-hot binary vectors indicating the POS tag and the DEP tag of each word and concatenate them to create one feature space, which we refer to as simple syntactic structure embeddings.
Semantic features. We use the vectors obtained from layer 12 of a pretrained cased BERT-large model (Devlin et al., 2018) to identify regions that process semantics. We use layer 12 because of previous work showing that the middle layers of sentence encoders are optimal for predicting brain activity (Jain & Huth, 2018; Toneva & Wehbe, 2019). We obtain the contextual embeddings for a word by running the pretrained model only on the words seen thus far, preventing the inclusion of future semantic information. Since a presented word can be broken up into multiple subtokens, we compute its embedding as the average of the subtokens' embeddings. Using principal component analysis (PCA), we reduce their dimensionality to 15 to match the ConTreGE vectors' dimensionality.
fMRI data. We use the fMRI data of 9 subjects reading chapter 9 of Harry Potter and the Sorcerer's Stone (Rowling, 2012), collected and made available by Wehbe et al. (2014). Words are presented one at a time at a rate of 0.5s each. All the brain plots shown here are averages over the 9 subjects in the Montreal Neurological Institute (MNI) space. Preprocessing details are in Appendix B.
Predicting brain activity. The applicability of a given syntactic feature to studying syntactic processing is determined by its efficacy in predicting the brain data described above. Ridge regression is used to perform these predictions, and the coefficient of determination ($R^2$ score) measures each feature's efficacy. For each voxel of each subject, the regularization parameter is chosen independently. We use ridge regression because of its computational efficiency and because of the Wehbe et al. (2015) results showing that, with such fMRI data, as long as the regularization parameter is chosen by cross-validation for each voxel independently, different regularization techniques lead to similar results. Indeed, ridge regression is a common regularization technique used for predictive fMRI models (Mitchell et al., 2008; Nishimoto et al., 2011; Wehbe et al., 2014; Huth et al., 2016). For every voxel, a model is fit to predict the signals $Y = [y_1, y_2, \ldots, y_n]$ recorded in that voxel, where $n$ is the number of time points (TR, or repetition time). The words are first grouped by the TR in which they were presented. Then, the features of the words in every group are summed to form a sequence of features $X = [x_1, x_2, \ldots, x_n]$ aligned with the brain signals.
The response measured by fMRI is an indirect consequence of brain activity that peaks about 6 seconds after stimulus onset. A common solution to account for this delay is to express brain activity as a function of the features of the preceding time points (Nishimoto et al., 2011; Wehbe et al., 2014; Huth et al., 2016). Thus, we train our models to predict any $y_i$ using $x_{i-1}$, $x_{i-2}$, $x_{i-3}$, and $x_{i-4}$. We test the models in a cross-validation loop: the data is first split into 4 contiguous, equally sized folds. Each model uses three folds of the data for training and one fold for evaluation. We remove the data from the 5 TRs that either precede or follow the test fold from the training folds. This is done to avoid any unintentional data leaks, since consecutive $y_i$s are correlated with each other because of the lag and the continuous nature of the fMRI signal. The brain signals and the word features that comprise the training and testing data for each model are individually Z-scored. After training, we obtain the predictions for the validation fold. The predictions for all folds are concatenated (to form a prediction for the entire experiment in which each time point is predicted by a model trained without the data for that time point). Note that since all 3 ConTreGE vectors are stochastic, we construct them 5 times each and learn a different model each time. The predictions of the 5 models are averaged together into a single prediction. The $R^2$ score is computed for every voxel using the predictions and the real signals. We run a permutation test to check whether $R^2$ scores are significantly higher than chance. We permute blocks of contiguous fMRI TRs, instead of individual TRs, to account for the slowness of the underlying hemodynamic response. We choose a common value of 10 TRs (Deniz et al., 2019). The predictions are permuted within each fold 5000 times, and the resulting $R^2$ scores are used as an empirical distribution of chance performance, from which the p-value of the unpermuted performance is estimated. We also run a bootstrap test to check whether one model has a higher $R^2$ score than another. The difference is that in each iteration, we permute (using the same indices) the predictions of both models, compute the difference of their $R^2$ scores, and use the resulting distribution to estimate the p-value of the unpermuted difference. Finally, the Benjamini-Hochberg False Discovery Rate correction (Benjamini & Hochberg, 1995) is used for all tests (appropriate because fMRI data is considered to have positive dependence (Genovese, 2000)). The correction is performed by grouping together all the voxel-level p-values (i.e., across all subjects and feature groups) and choosing one threshold for all of our results. The correction is done in this way since we test multiple prediction models across multiple voxels and subjects. To compute Region of Interest (ROI) statistics, left-hemisphere ROI masks for the language system, obtained from a "sentence vs. non-word" fMRI contrast (Fedorenko et al., 2010), are used and mirrored to obtain the right-hemisphere ROIs.
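The following is a schematic numpy/scikit-learn version of this encoding-model pipeline: features are lagged by 1-4 TRs to absorb the hemodynamic delay, training and test data are z-scored, and a per-voxel ridge model with a cross-validated penalty is scored with $R^2$ on held-out TRs. The fold handling is simplified (the 5-TR trimming around the test fold is omitted), so this is a sketch rather than the authors' code.

```python
# Simplified per-voxel ridge encoding model with lagged features; assumptions noted above.
import numpy as np
from sklearn.linear_model import RidgeCV

def lagged(X, lags=(1, 2, 3, 4)):
    # Stack features from the 1-4 preceding TRs; zero-pad the start.
    cols = []
    for lag in lags:
        Xl = np.zeros_like(X)
        Xl[lag:] = X[:-lag]
        cols.append(Xl)
    return np.hstack(cols)

def zscore(A):
    return (A - A.mean(0)) / (A.std(0) + 1e-8)

def fit_voxel(X, y, train, test):
    Xtr, Xte = zscore(lagged(X)[train]), zscore(lagged(X)[test])
    ytr, yte = zscore(y[train]), zscore(y[test])
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(Xtr, ytr)  # CV-chosen penalty
    pred = model.predict(Xte)
    return 1 - ((yte - pred) ** 2).sum() / ((yte - yte.mean()) ** 2).sum()  # held-out R^2

n, d = 400, 15
X, y = np.random.randn(n, d), np.random.randn(n)      # stand-ins for features and one voxel
folds = np.array_split(np.arange(n), 4)
r2 = fit_voxel(X, y, np.concatenate(folds[:3]), folds[3])
```

Relatedly, the ConTreGE subtree encoding described earlier (fixed IDs for non-terminals, presentation-order IDs from 10000 for terminals, then a long random walk fed to a Paragraph Vector model) can be sketched as below; the tiny tree and shortened walk are hypothetical stand-ins.

```python
# Toy version of the subtree relabeling and random-walk "paragraph" generation.
import random

NONTERMINAL_IDS = {"S": 1, "NP": 2, "VP": 3, "V": 4}  # assumed mapping

def relabel(edges, terminals):
    # terminals in presentation order -> 10000, 10001, ...
    ids = {**NONTERMINAL_IDS, **{tok: 10000 + i for i, tok in enumerate(terminals)}}
    return [(ids[u], ids[v]) for u, v in edges]

def random_walk(edges, length=1000, seed=0):
    rng = random.Random(seed)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    node = rng.choice(sorted(adj))
    walk = [node]
    for _ in range(length - 1):
        node = rng.choice(adj[node])  # consecutive walk nodes are tree neighbours
        walk.append(node)
    return walk  # one "paragraph" of node IDs, suitable for a Doc2Vec/DBON-style model

edges = [("S", "NP"), ("S", "VP"), ("NP", "Harry"), ("VP", "V"), ("V", "ran")]
walk = random_walk(relabel(edges, ["Harry", "ran"]))
```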
This paper derives various types of graph embeddings to encode aspects of syntactic information that the brain may be processing during real-time sentence comprehension. These embeddings, along with indicators of punctuation, POS and dependency tags, and BERT embeddings, are used to predict brain activity recorded via fMRI. The authors argue that this is an improvement over use of effort-based metrics to predict brain activity, as these embeddings contain richer information than is captured by distilling down to a single measure of effort. They show that various brain regions are significantly better predicted by the syntactic embeddings than by the effort-based metrics and POS+dependency indicators. BERT embeddings, however, prove to be a better predictor (than syntactic and other predictors) across much more substantial areas of activity.
SP:95ba9ad102adafaabf9671737e6549728d104629
Analogical Reasoning for Visually Grounded Compositional Generalization
Children acquire language subconsciously by observing the surrounding world and listening to descriptions. They can discover the meaning of words even without explicit language knowledge, and generalize to novel compositions effortlessly. In this paper, we bring this ability to AI by studying the task of multimodal compositional generalization within the context of visually grounded language acquisition. We propose a multimodal transformer model augmented with a novel mechanism for analogical reasoning, which approximates novel compositions by learning semantic mappings and reasoning operations from previously seen compositions. Our proposed method, Analogical Reasoning Transformer Networks (ARTNET), is trained on raw multimedia data (video frames and transcripts), and after observing a set of compositions such as "washing apple" or "cutting carrot", it can generalize and recognize new compositions in new video frames, such as "washing carrot" or "cutting apple". To this end, ARTNET refers to relevant instances in the training data and uses their visual features and captions to establish analogies with the query image. Then it chooses a suitable verb and noun to create a new composition that best describes the new image. Extensive experiments on an instructional video dataset demonstrate that the proposed method achieves significantly better generalization capability and recognition accuracy compared to state-of-the-art transformer models.
1 INTRODUCTION.
Visually grounded Language Acquisition (VLA) is an innate ability of the human brain. It refers to the way children learn their native language from scratch, through exploration, observation, and listening (i.e., self-supervision), without taking language training lessons (i.e., explicit supervision). 2-year-old children are able to quickly learn the semantics of phrases and their constituent words after repeatedly hearing phrases like "washing apple" or "cutting carrot" and observing such situations. More interestingly, they will also understand new compositions such as "washing carrot" or "cutting apple", even before experiencing them. This ability of human cognition is called compositional generalization (Montague (1970); Minsky (1988); Lake et al. (2017)). It helps humans use a limited set of known components (vocabulary) to understand and produce unlimited new compositions (e.g., verb-noun, adjective-noun, or adverb-verb compositions). This is also one of the long-term goals of Artificial Intelligence (AI), e.g., in robotics, where it would enable a robot to follow new instructions that it has never heard before. Nevertheless, contemporary machine intelligence needs to overcome several major challenges of the task. On one hand, learning compositional generalization can be difficult without using data-hungry models. The power of existing language models mainly relies on large-scale language corpora (Lake & Baroni (2017); Pennington et al. (2014); Devlin et al. (2018)). They are still inadequate at compositional generalization (Marcus (1998); Lake & Baroni (2018); Surís et al. (2019)). Their goal is to recognize training examples rather than focusing on what is missing from the training data. On the other hand, the designed model should close the paradigmatic gap (Nikolaus et al. (2019)) between seen compositions and new compositions.
For instance, given seen verb-noun compositions "1A" and "2B" (the digit indicates the verb, the letter the noun), the model should be able to link seen compositions to new compositions (like "1B" or "2A") in completely new cases. Different from previous work (Johnson et al. (2017); Baradel et al. (2018); Santoro et al. (2017)), we bring the power of compositional generalization to state-of-the-art language models by incorporating Analogical Reasoning (AR) (Gentner & Smith (2012); Littlemore (2008); Vosniadou & Ortony (1989)). An analogy is a comparison between similar concepts or situations, and AR is analogical semantic reasoning that relies upon an analogy. The human brain spontaneously engages in AR to make sense of unfamiliar situations in everyday life (Vamvakoussi (2019)). Inspired by the AR process in the human brain, we design the counterpart for machine language acquisition. To this end, we create a language model that generates appropriate novel compositions from relevant seen compositions, forming analogies and applying appropriate arithmetic operations to express the new compositions (e.g., "washing carrot" = "washing apple" + "cutting carrot" − "cutting apple"). We describe this process in three steps: association, reasoning, and inference, as shown in Figure 1. Given an image (a video frame in our case) and a narrative sentence describing it, we mask the main verb-noun composition from the sentence and ask the model to guess the correct composition that completes the sentence, considering the provided image. To this end, we propose a novel self-supervised and reasoning-augmented framework, Analogical Reasoning Transformer Networks (ARTNET). ARTNET adopts a multimodal transformer (similar to ViLBERT (Lu et al. (2019))) as its backbone to represent visual-textual data in a common space. Then it builds three novel modules on top of the backbone that correspond to the aforementioned AR steps: association, reasoning, and inference. First, we design the Analogical Memory Module (AMM), which discovers analogical exemplars for a given query scenario from a reference pool of observed samples. Second, we propose Analogical Reasoning Networks (ARN), which take the retrieved samples as input, select candidate analogy pairs from the relevant reference samples, and learn proper reasoning operations over the selected analogy pairs, resulting in an analogy context vector. Third, we devise a Conditioned Composition Engine (CCE), which combines the analogy context vector with the representations of the query sample to predict the masked words and complete the target sentence with a novel composition. We show how ARTNET generalizes to new compositions and excels at visually grounded language acquisition through experiments with various evaluations: novel composition prediction, assessment of affordance, and sensitivity to data scarcity. The results on an ego-centric video dataset (EPIC-Kitchens) demonstrate the effectiveness of the proposed solution in various aspects: accuracy, capability, robustness, etc. The project code is publicly available at https://github.com/XX.
The main contributions of this paper include the following:
• We call attention to a challenging problem, compositional generalization, in the context of machine language acquisition, which has seldom been studied.
• We propose ideas supported by human analogical reasoning: approximating new verb-noun compositions by learned arithmetic operations over relevant compositions seen before.
• We propose a novel reasoning-augmented architecture for visually grounded language acquisition, which addresses the compositional generalization problem through association and analogical reasoning.
• We evaluate the proposed model in various aspects, such as composition prediction, validity testing, and robustness against data scarcity. The results show that ARTNET achieves significant improvements in new-composition accuracy on a large-scale video dataset.
2 ARTNET: ANALOGICAL REASONING TRANSFORMER NETWORKS.
Our goal is to develop a framework that can support multimodal compositional generalization through learning in a visual-textual environment. The proposed framework learns to acquire the meaning of phrases and words from image-sentence pairs and to create novel compositions via reasoning. We call the framework Analogical Reasoning Transformer Networks (ARTNET), due to its ability to establish analogies with previously seen, relevant scenarios and perform reasoning operations to generalize a composition for the new scenario. Figure 2 illustrates an overview of ARTNET, which is composed of a multimodal encoder backbone followed by three main modules: the Analogical Memory Module (AMM), Analogical Reasoning Networks (ARN), and the Conditioned Composition Engine (CCE). We elaborate on each component in the rest of this section.
2.1 MULTIMODAL ENCODER BACKBONE.
The backbone network is responsible for encoding image-sentence pairs into compositional semantic representations. To achieve this, we utilize the emerging multimodal transformers (e.g., UNITER (Chen et al. (2019)) or ViLBERT (Lu et al. (2019))), which have recently achieved great success in various vision-language tasks. These models take a set of visual and textual tokens (e.g., objects and words) and extract a multimodal embedding for each token, contextualized by all the other tokens through layers of multi-head attention. We follow the architecture of UNITER, as it performs slightly better than ViLBERT and other similar models. Note that since our goal is language acquisition, we intentionally do not use the pretrained weights of UNITER, which are trained on a large-scale corpus. Instead, we train the backbone from scratch on our limited data.
2.2 ANALOGICAL MEMORY MODULE.
AMM plays the role of analogical association. Like finding a useful puzzle piece, AMM discovers the most useful reference samples for analogical reasoning in a target scenario. Given a target image-sentence pair (the query), where some tokens in the sentence are masked, we randomly select N (N = 200 in our experiments) image-sentence pairs from the training data to create a reference pool, and find the Top-K most relevant exemplars from that pool. To this end, we measure a multimodal relevance score between the query and each reference. Here, we use the initial embedding of each token in the query and reference samples, as described in Section 2.1. Given a target and a reference sample, we define the multimodal relevance score as a combination of the visual and textual relevance between the corresponding sets of tokens. For visual tokens, we compute the mean cosine similarity of every pair of tokens from the query and reference token sets.
For the language part, the contextual background words that are not masked can provide linguistic clues about semantic relevance. Thus, we compute the Jaccard Index (Hamers et al. (1989)) between the two sentences as the textual relevance. Specifically, the multimodal relevance score is
$s_{vl} = \frac{1}{2} \cdot \left( \frac{|W_T \cap W_R|}{|W_T \cup W_R| + 1} + \frac{\sum_i \sum_j \cos(v_{T_i}, v_{R_j})}{N_v^2} \right)$, (1)
where $W_T$ and $W_R$ are the sets of target words and reference words, $N_v^2$ is the number of visual token pairs, and $v_{T_i}$ and $v_{R_j}$ represent the visual embeddings of the $i$th visual token of the query and the $j$th visual token of the reference. After computing the scores, AMM ranks the reference samples by their relevance scores and selects the Top-K most relevant samples for the given query.
2.3 ANALOGICAL REASONING NETWORKS.
Given the retrieved analogical exemplars, we devise a neural network with reasoning ability that enriches the original representation of the masked composition by making analogies with the seen compositions. The idea is to exploit the semantic relation mapping between the candidate analogy compositions and the target composition. To this end, we represent the target masked composition as a query vector q, by concatenating the multimodal transformer embeddings of the masked words of that composition (typically a verb and a noun from the target sentence) and learning the representations of the ordered constituents in a composition with a Long Short-Term Memory (LSTM) network (Zhou et al. (2015)). Next, we apply the multimodal encoder backbone (as mentioned above) to the retrieved analogy samples and parse each sample into candidate analogy compositions (pairs of tokens). Since the goal is language acquisition, we do not rely on predefined grammar rules or pretrained models to generate the candidate compositions, such as applying part-of-speech tagging and taking each verb-noun pair. Instead, we enumerate all pairs of adjacent words from each retrieved sentence and all pairs of detected image regions from each retrieved image. The resulting multimodal set of pairs is called the analogy pairs hereafter. The core of ARN consists of three neural network modules for analogical attention, analogical reasoning, and analogy transformation. Analogical attention learns the importance of each candidate analogy pair with respect to the query vector and generates an analogy aggregation for each modality independently. Analogical reasoning is designed to learn appropriate arithmetic operations over the analogy compositions. It consists of modality-wise transformations and Neural Arithmetic Logic Units (Trask et al. (2018)) with multiple layers of Neural Accumulators (NAC) (Trask et al. (2018)). NAC is a simple but effective operator that supports the ability to learn addition and subtraction. This module is applied to the analogy pairs and computes a single vector that represents the output of the learned reasoning operations, optimized for our task through gradient descent. Through the analogy transformation, ARN generates the sequential representation of the final analogy context vector.
Specifically, ARN can be written as
$c_a^m = \sum_j \alpha_{ij}^m h_j^m, \quad \alpha_{ij} = \frac{\exp a([r_i^m; r_{i+1}^m; q], [r_j^m; r_{j+1}^m; q])}{\sum_k \exp a([r_i^m; r_{i+1}^m; q], [r_k^m; r_{k+1}^m; q])}$ (Analogical Attention), (2)
$h_c = f_{NAC}([g_v(c_a^v), g_l(c_a^l)]^T)$ (Analogical Reasoning), (3)
$c = \mathrm{LSTM}(h_c)$ (Analogy Transformation), (4)
where $v$ and $l$ denote the vision and language modalities, $r_i^m$ and $r_{i+1}^m$ ($m$ is the modality indicator) are the image regions or text words of candidate analogical compositions, $g_v$ and $g_l$ are modality transformations consisting of two-layer fully connected networks with ReLU activation and dropout, and $T$ represents the matrix transpose. The output of ARN is the vector $c$, called the analogy context vector, which will be used to augment the composition representations.
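A condensed PyTorch sketch of the ARN components in Eqs. 2-3 may help: softmax-based analogical attention aggregates candidate analogy pairs, and a NAC layer (Trask et al., 2018), whose effective weights tanh(W_hat) * sigmoid(M_hat) are biased toward {-1, 0, +1}, learns addition/subtraction-like operations. The dimensions are assumptions, and the pairwise scoring function is a simplification: Eq. 2 scores two query-augmented concatenations jointly, whereas a single linear scorer stands in here.

```python
# Schematic analogical attention + NAC reasoning; a sketch, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NAC(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_hat = nn.Parameter(torch.randn(d_in, d_out) * 0.1)
        self.m_hat = nn.Parameter(torch.randn(d_in, d_out) * 0.1)

    def forward(self, x):
        W = torch.tanh(self.w_hat) * torch.sigmoid(self.m_hat)  # pushed toward -1/0/+1
        return x @ W                                            # learned add/subtract

d = 32
scorer = nn.Linear(2 * d, 1)  # simplified stand-in for the scoring function a(., .)

def analogical_attention(pairs, query):
    # pairs: (n, d) embeddings of [r_j; r_{j+1}]; query: (d,) masked-composition vector.
    logits = scorer(torch.cat([pairs, query.expand(len(pairs), -1)], dim=1)).squeeze(1)
    alpha = F.softmax(logits, dim=0)   # importance of each candidate analogy pair
    return alpha @ pairs               # analogy aggregation c_a^m for one modality

q = torch.randn(d)
c_v = analogical_attention(torch.randn(5, d), q)   # visual aggregation
c_l = analogical_attention(torch.randn(5, d), q)   # textual aggregation
h_c = NAC(2 * d, d)(torch.cat([c_v, c_l]))         # Eq. 3: reasoning over both modalities
```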
This paper explores the problem of generalizing to novel combinations of verbs and nouns in a task of captioning video stills from videos about cooking. The paper introduces a new dataset based on EPIC-Kitchens (Damen et al. 2018) which masks out verbs and nouns and splits the evaluation data into seen combinations of verb/noun pairs and unseen combinations of verb/noun pairs, challenging a model to generate captions for pairs that were not seen during training.
SP:7327dc440b5c193c1dda156276860f89594721fa
A Unified Framework for Convolution-based Graph Neural Networks
1 INTRODUCTION.
Recent years have witnessed fast progress in graph processing by generalizing the convolution operation to graph-structured data, known as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017). Due to this great success, numerous variants of GCNs have been developed and extensively adopted in the fields of social network analysis (Hamilton et al., 2017; Wu et al., 2019a; Veličković et al., 2018), biology (Zitnik et al., 2018), transportation forecasting (Li et al., 2017), and natural language processing (Wu et al., 2019b; Yao et al., 2019). Inspired by GCN, a wide variety of convolution-based graph learning approaches have been proposed to enhance the generalization performance of graph neural networks. Several works aim to achieve higher expressiveness by exploring higher-order information or introducing additional learning mechanisms such as attention modules. Although proposed from different perspectives, there exist connections between these approaches. For example, attention-based GCNs like GAT (Veličković et al., 2018) and AGNN (Thekumparampil et al., 2018) share a similar intention of adjusting the adjacency matrix with a function of edge and node features. Similarly, TAGCN (Du et al., 2017) and MixHop (Kapoor et al., 2019) can be viewed as particular instances of PPNP (Klicpera et al., 2018) under certain approximations. However, the relations among these graph learning models are rarely studied, and comparisons are still limited to analyzing generalization performance on public datasets. As a consequence, we still lack a systematic view of different GCN models and a deep understanding of the relations among them. In this paper, we resort to techniques from graph signal processing and attempt to understand GCN-based approaches from a general perspective. Specifically, we present a unified graph convolution framework by associating graph convolution operations with optimization problems in the graph Fourier domain. We consider a Laplacian regularized least squares optimization problem and show that most of the convolution-based approaches can be interpreted in this framework through carefully designed regularizers. Besides vanilla GCNs, we also extend our framework to formulate non-convolutional operations (Xu et al., 2018a; Hamilton et al., 2017), attention-based GCNs (Veličković et al., 2018; Thekumparampil et al., 2018), and topology-based GCNs (Klicpera et al., 2018; Kapoor et al., 2019), which cover a large fraction of the state-of-the-art graph learning approaches. This novel perspective provides a re-interpretation of graph convolution operations, enables a better understanding of the similarities and differences among many widely used GCNs, and may inspire new approaches for designing better models. In conclusion, we summarize our contributions as follows:
1. We introduce a unified framework for convolution-based graph neural networks and interpret various convolution filters as carefully designed regularizers in the graph Fourier domain, which provides a general methodology for evaluating and relating different graph learning modules.
2. Based on the proposed framework, we provide new insights into the limitations of GCNs and show new directions for tackling common problems and improving the generalization performance of current graph neural networks in the graph Fourier domain.
Additionally, the unified framework can serve as a once-for-all platform for expert-designed modules in convolution-based approaches, where newly designed modules can be easily implemented in other networks as plugin modules with trivial adaptations. We believe that our framework can make it convenient to design new graph learning modules and to search for better combinations.
3. As a showcase, we present a novel regularization technique under the proposed framework that alleviates the oversmoothing problem in graph representation learning. As shown in Section 4, the newly designed regularizer can be implemented in several convolution-based networks and effectively improves the generalization performance of graph learning models.
2 PRELIMINARY.
We start with an overview of the basic concepts of graph signal processing. Let $G = (V, A)$ denote a graph with node feature vectors, where $V$ represents the vertex set consisting of nodes $\{v_1, v_2, \ldots, v_N\}$ and $A = (a_{ij}) \in \mathbb{R}^{N \times N}$ is the adjacency matrix implying the connectivity between nodes in the graph. Let $D = \mathrm{diag}(d(1), \ldots, d(N)) \in \mathbb{R}^{N \times N}$ be the degree matrix of $A$, where $d(i) = \sum_{j \in V} a_{ij}$ is the degree of vertex $i$. Then $L = D - A$ is the combinatorial Laplacian and $\tilde{L} = I - D^{-1/2} A D^{-1/2}$ is the normalized Laplacian of $G$. Additionally, we let $\tilde{A} = A + I$ and $\tilde{D} = D + I$ denote the augmented adjacency and degree matrices with added self-loops. Then $\tilde{L}_{sym} = I - \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ (with $\tilde{A}_{sym} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$) and $\tilde{L}_{rw} = I - \tilde{D}^{-1} \tilde{A}$ (with $\tilde{A}_{rw} = \tilde{D}^{-1} \tilde{A}$) are the augmented symmetric normalized and random-walk normalized Laplacians (augmented adjacency matrices) of $G$, respectively. Let $x \in \mathbb{R}^N$ be a signal on the vertices of the graph. The spectral convolution is defined as a function of a filter $g_\theta$ parameterized in the Fourier domain (Kipf & Welling, 2017):
$g_\theta \star x = U g_\theta(\Lambda) U^T x$, (1)
where $U$ and $\Lambda$ are the eigenvectors and eigenvalues of the normalized Laplacian $\tilde{L}$. Also, we follow Hoang & Maehara (2019) and define the variation $\Delta$ and the $\tilde{D}$-inner product as:
$\Delta(x) = \sum_{i,j \in V} a_{ij} (x(i) - x(j))^2 = x^T L x, \quad (x, y)_{\tilde{D}} = \sum_{i \in V} (d(i) + 1)\, x(i) y(i) = x^T \tilde{D} y$, (2)
which specify the smoothness and importance of the signal, respectively.
3 UNIFIED GRAPH CONVOLUTION FRAMEWORK.
With the success of GCNs, a wide variety of convolution-based approaches have been proposed that progressively enhance the expressive power and generalization performance of graph neural networks. Despite the effectiveness of GCN and its derivatives on specific tasks, a comprehensive understanding of the relations and differences among various graph learning modules is still lacking. Graph signal processing is a powerful technique that has been adopted in several graph learning studies (Kipf & Welling, 2017; Hoang & Maehara, 2019; Zhao & Akoglu, 2019). However, existing studies mainly focus on analyzing the properties of GCNs while ignoring the connections between different graph learning modules. In this work, we instead interpret convolution-based approaches from a general perspective using graph signal processing techniques. Specifically, we associate graph convolution operations with optimization problems in the graph Fourier space, making the effect of each module explicit through specific regularizers. This novel perspective provides a systematic view of different GCN models and a deep understanding of the relations among them.
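For concreteness, a small numpy sketch of the operators defined in this section follows: the augmented adjacency and degree matrices, the symmetric and random-walk normalized filters, and the variation $x^T L x$ used as the smoothness regularizer. The toy graph is arbitrary.

```python
# Graph operators from Section 2 on a toy 3-node path graph.
import numpy as np

def graph_operators(A):
    A_tilde = A + np.eye(len(A))                 # Ã = A + I (add self-loops)
    d = A_tilde.sum(1)                           # augmented degrees
    D_inv_sqrt = np.diag(d ** -0.5)
    A_sym = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # Ã_sym = D̃^{-1/2} Ã D̃^{-1/2}
    A_rw = np.diag(1.0 / d) @ A_tilde            # Ã_rw  = D̃^{-1} Ã
    L = np.diag(A.sum(1)) - A                    # combinatorial Laplacian L = D - A
    return A_sym, A_rw, L

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
A_sym, A_rw, L = graph_operators(A)
x = np.array([1.0, 2.0, 4.0])
variation = x @ L @ x   # Δ(x) as defined in Eq. (2), following the paper's convention
```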
3.1 UNIFIED GRAPH CONVOLUTION FRAMEWORK.
Several studies have shown that, in graph signal processing, representative features are mostly preserved in the low-frequency signals, while noise is mostly contained in the high-frequency signals (Hoang & Maehara, 2019). Based on this observation, numerous graph representation learning methods are designed to suppress the high-frequency components and can thus be viewed as low-pass filters in the graph Fourier space. With similar inspiration, we consider a Laplacian regularized least squares optimization problem with graph signal regularizers and build connections with these filters.
Definition 1 (Unified Graph Convolution Framework). Graph convolution filters can be obtained by solving the following Laplacian regularized least squares optimization:
$\min_{\bar{X}} \sum_{i \in V} \|\bar{x}(i) - x(i)\|_{\tilde{D}}^2 + \lambda L_{reg}$, (3)
where $\|x\|_{\tilde{D}} = \sqrt{(x, x)_{\tilde{D}}}$ denotes the norm induced by $\tilde{D}$. In the following sections, we show that a wide range of convolution-based graph neural networks can be derived from Definition 1 with different carefully designed regularizers, and we provide new insights for understanding different graph learning modules from the graph signal perspective.
3.1.1 GRAPH CONVOLUTIONAL NETWORKS.
Graph convolutional networks (GCNs) (Kipf & Welling, 2017) are the foundation of numerous graph learning models and have received widespread attention. Several studies have demonstrated that the vanilla GCN is essentially a type of Laplacian smoothing over the whole graph, which makes the features of connected nodes similar. Therefore, to reformulate GCNs in the graph Fourier space, we use the variation $\Delta(x)$ as the regularizer.
Definition 2 (Vanilla GCNs). Let $\{\bar{x}(i)\}_{i \in V}$ be the estimation of the input observation $\{x(i)\}_{i \in V}$. The low-pass filter
$\bar{X} = \tilde{A}_{rw} X$, (4)
is the first-order approximation of the optimal solution of the following optimization:
$\min_{\bar{X}} \sum_{i \in V} \|\bar{x}(i) - x(i)\|_{\tilde{D}}^2 + \sum_{i,j \in V} a_{ij} \|\bar{x}(i) - \bar{x}(j)\|_2^2$. (5)
Derivations of the definitions are presented in Appendix A. As the eigenvalues of the approximated filter $\tilde{A}_{rw}$ are bounded by 1, it resembles a low-pass filter that removes the high-frequency signals. By exchanging $\tilde{A}_{rw}$ with $\tilde{A}_{sym}$ (which has the same eigenvalues as $\tilde{A}_{rw}$), we obtain the same formulation adopted in GCNs. As stated earlier, the second term $\Delta(x)$ in Eq. (5) measures the variation of the estimation $\bar{x}$ over the graph structure. By adding this regularizer to the objective function, the obtained filter emphasizes the low-frequency signals by minimizing the variation over the local graph structure while keeping the estimation close to the input in the graph Fourier space.
3.1.2 NON-CONVOLUTIONAL OPERATIONS.
Residual Connection. The residual connection was first proposed by He et al. (2016) and has been widely adopted in graph representation learning approaches. In vanilla GCNs, the norms of the eigenvalues of the filter $\tilde{A}_{rw}$ (or $\tilde{A}_{sym}$) are bounded by 1, which ensures numerical stability during training. On the other hand, however, signals in all frequency bands shrink as convolution layers stack, leading to consistent information loss. Adding a residual connection is therefore intended to preserve the strength of the input signal.
Definition 3 (Residual Connection).
A graph convolution filter with a residual connection,
$\bar{X} = \tilde{A}_{rw} X + \epsilon X$, (6)
where $\epsilon > 0$ controls the strength of the residual connection, is the first-order approximation of the optimal solution of the following optimization:
$\min_{\bar{X}} \sum_{i \in V} \left( \|\bar{x}(i) - x(i)\|_{\tilde{D}}^2 - \epsilon \|\bar{x}(i)\|_{\tilde{D}}^2 \right) + \sum_{i,j \in V} a_{ij} \|\bar{x}(i) - \bar{x}(j)\|_2^2$. (7)
By adding the negative regularizer, which penalizes estimations with small norms, we recover the same formulation as the vanilla graph convolution with a residual connection.
Concatenation. Concatenation is effectively a residual connection with different learning weights.
Definition 3' (Concatenation). A graph convolution filter concatenated with the input signal,
$\bar{X} = \tilde{A}_{rw} X + \epsilon X \Theta \Theta^T$, (8)
is the first-order approximation of the optimal solution of the following optimization:
$\min_{\bar{X}} \sum_{i \in V} \left( \|\bar{x}(i) - x(i)\|_{\tilde{D}}^2 - \epsilon \|\bar{x}(i) \Theta\|_{\tilde{D}}^2 \right) + \sum_{i,j \in V} a_{ij} \|\bar{x}(i) - \bar{x}(j)\|_2^2$, (9)
where $\epsilon > 0$ controls the strength of the concatenation and $\Theta$ is the learned coefficient matrix. Although the learning weights $\Theta \Theta^T$ have constrained expressive capability, this can be compensated for by the subsequent feature learning modules.
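A toy numpy illustration of Definitions 2 and 3 follows: repeated application of the vanilla filter $\tilde{A}_{rw}$ damps the high-frequency part of the signal layer by layer, while the residual variant $\tilde{A}_{rw}X + \epsilon X$ keeps the signal from shrinking. The graph, $\epsilon$, and layer count are illustrative.

```python
# Vanilla vs. residual-connected propagation (Eqs. 4 and 6) on a random graph.
import numpy as np

def propagate(A_rw, X, layers, eps=0.0):
    for _ in range(layers):
        X = A_rw @ X + eps * X   # eps = 0 recovers the vanilla GCN filter
    return X

rng = np.random.default_rng(0)
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                 # random undirected graph, no self-loops
A_t = A + np.eye(10)                           # Ã = A + I
A_rw = A_t / A_t.sum(1, keepdims=True)         # Ã_rw = D̃^{-1} Ã (row-stochastic)
X = rng.standard_normal((10, 4))
print(np.linalg.norm(propagate(A_rw, X, 8)))            # smoothing shrinks the signal
print(np.linalg.norm(propagate(A_rw, X, 8, eps=0.2)))   # residual term preserves strength
```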
This paper presents a unified framework for graph convolutional neural networks based on regularized optimization, connecting different variants of graph neural networks including vanilla, attention-based, and topology-based approaches. The authors also propose a novel regularization technique to address the oversmoothing problem in graph convolution. Experiments on the standard node classification settings on Citeseer, Cora, and Pubmed demonstrate the effectiveness of the proposed regularization technique.
SP:5be9a3c39234c10c226c42eec95e29cbddbaf8c0
Benchmarks for Deep Off-Policy Evaluation
1 INTRODUCTION.
Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction, such as in robotics (Kober et al., 2013), board games and video games (Tesauro, 1995; Mnih et al., 2013; Vinyals et al., 2019), and recommender systems (Aggarwal et al., 2016). However, this sort of active online interaction is often impractical for real-world problems, where active data collection can be costly (Li et al., 2010), dangerous (Hauskrecht & Fraser, 2000; Kendall et al., 2019), or time consuming (Gu et al., 2017). Batch (or offline) reinforcement learning has been studied extensively in domains such as healthcare (Thapa et al., 2005; Raghu et al., 2018), recommender systems (Dudík et al., 2014; Theocharous et al., 2015; Swaminathan et al., 2017), education (Mandel et al., 2014), and robotics (Kalashnikov et al., 2018). A major challenge for such methods is the off-policy evaluation (OPE) problem, where one must evaluate the expected performance of policies solely from offline data. This is critical for several reasons, including providing high-confidence guarantees prior to deployment (Thomas et al., 2015) and performing policy improvement and model selection (Bottou et al., 2013; Doroudi et al., 2017). The goal of this paper is to provide a standardized benchmark for evaluating OPE methods. Although considerable theoretical (Thomas & Brunskill, 2016; Swaminathan & Joachims, 2015; Jiang & Li, 2015; Wang et al., 2017; Yang et al., 2020) and practical progress (Gilotte et al., 2018; Nie et al., 2019; Kalashnikov et al., 2018) on OPE algorithms has been made in a range of different domains, there are few broadly accepted evaluation tasks that combine complex, high-dimensional problems commonly explored by modern deep reinforcement learning algorithms (Bellemare et al., 2013; Brockman et al., 2016) with standardized evaluation protocols and metrics. (Policies and evaluation code are available at https://github.com/google-research/deep_ope; see Section 5 for links to modelling code.) Our goal is to provide a set of tasks with a range of difficulty that exercise a variety of design properties, and to provide policies with different behavioral patterns, in order to establish a standardized framework for comparing OPE algorithms. We put particular emphasis on large datasets, long-horizon tasks, and task complexity to facilitate the development of scalable algorithms that can solve high-dimensional problems. Our primary contribution is the Deep Off-Policy Evaluation (DOPE) benchmark. DOPE is designed to measure the performance of OPE methods by 1) evaluating on challenging control tasks with properties known to be difficult for OPE methods, but which occur in real-world scenarios, 2) evaluating across a range of policies with different values, to directly measure performance on policy evaluation, ranking, and selection, and 3) evaluating in ideal and adversarial settings in terms of dataset coverage and support. These factors are independent of task difficulty, but are known to have a large impact on OPE performance. To achieve 1, we selected tasks based on a set of design principles outlined in Section 3.1. To achieve 2, for each task we include 10 to 96 policies for evaluation and devise an evaluation protocol that measures policy evaluation, ranking, and selection, as outlined in Section 3.2.
To achieve 3, we provide two domains with differing dataset coverage and support properties, described in Section 4. Finally, to enable an easy-to-use research platform, we provide the datasets, target policies, evaluation API, and recorded results of state-of-the-art algorithms (presented in Section 5) as open source.
2 BACKGROUND.
We briefly review the off-policy evaluation (OPE) problem setting. We consider Markov decision processes (MDPs), defined by a tuple $(S, A, T, R, \rho_0, \gamma)$, with state space $S$, action space $A$, transition distribution $T(s'|s, a)$, initial state distribution $\rho_0(s)$, reward function $R(s, a)$, and discount factor $\gamma \in (0, 1]$. In reinforcement learning, we are typically concerned with optimizing or estimating the performance of a policy $\pi(a|s)$. The performance of a policy is commonly measured by the policy value $V^\pi$, defined as the expected sum of discounted rewards:
$V^\pi := \mathbb{E}_{s_0 \sim \rho_0,\, s_{1:\infty},\, a_{0:\infty} \sim \pi} \left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \right]$. (1)
If we have access to state and action samples collected from a policy $\pi$, then we can use the sample mean of observed returns to estimate the value function above. However, in off-policy evaluation we are typically interested in estimating the value of a policy when the data is collected from a separate behavior policy $\pi_B(a|s)$. This setting can arise, for example, when data is being generated online from another process, or in the purely offline case when we have a historical dataset. In this work we consider the latter, purely offline setting. The typical setup for this problem formulation is that we are provided with a discount $\gamma$, a dataset of trajectories collected from a behavior policy, $D = \{(s_0, a_0, r_0, s_1, \ldots)\}$, and optionally the action probabilities of the behavior policy, $\pi_B(a_t|s_t)$. In many practical applications, logging action propensities is not possible, for example, when the behavior policy is a mix of ML and hard-coded business logic. For this reason, we focus on the setting without propensities to encourage future work on behavior-agnostic OPE methods. For the methods that require propensities, we estimate the propensities with behavior cloning. The objective can take multiple flavors, as shown in Fig. 1. A common task in OPE is to estimate the performance, or value, of a policy $\pi$ (which may not be the same as $\pi_B$) so that the estimated value is as close as possible to $V^\pi$ under a metric such as MSE or absolute error. A second task is to perform policy selection, where the goal is to select the best policy, or set of policies, out of a group of candidates. This setup corresponds to how OPE is commonly used in practice: to find the best-performing strategy out of a pool when online evaluation is too expensive to be feasible.
3 DOPE: DEEP OFF-POLICY EVALUATION.
The goal of the Deep Off-Policy Evaluation (DOPE) benchmark is to provide tasks that are challenging and effective measures of progress for OPE methods, yet easy to use in order to better facilitate research. We therefore design our benchmark around a set of properties known to be difficult for existing OPE methods, in order to gauge their shortcomings, and we keep all tasks amenable to simulation so that the benchmark is accessible and easy to evaluate.
3.1 TASK PROPERTIES.
We describe the motivating properties behind our task selection as follows:
High Dimensional Spaces (H). High-dimensionality is a key feature of many real-world domains where it is difficult to perform feature engineering, such as robotics and autonomous driving. In these problems, it becomes challenging to accurately estimate quantities such as the value function without high-capacity models such as neural networks and large datasets with wide state coverage. Our benchmark contains complex continuous-space tasks that exercise these challenges.
Long Time-Horizon (L). Long-horizon tasks are known to present difficult challenges for OPE algorithms. Some algorithms have difficulty performing credit assignment for these tasks. This can be made worse as the state or action dimension increases.
Sparse Rewards (R). Sparse-reward tasks increase the difficulty of credit assignment and add exploration challenges, which may interact with data coverage in the offline setting. We include a range of robotics and navigation tasks that are difficult to solve due to reward sparsity.
Temporally extended control (T). The ability to make decisions hierarchically is a major challenge in many reinforcement learning applications. We include two navigation tasks that require high-level planning in addition to low-level control, in order to simulate the difficulty of such problems.
3.2 EVALUATION PROTOCOL.
The goal of DOPE is to provide metrics for policy ranking, evaluation, and selection. Many existing OPE methods have only been evaluated on point estimates of value such as MSE, but policy selection is an important, practical use case of OPE. In order to explicitly measure the quality of OPE for policy selection, we provide a set of policies with varying values and devise two metrics that measure how well OPE methods can rank policies. For each task we include a dataset of logged experiences $D$ and a set of policies $\{\pi_1, \pi_2, \ldots, \pi_N\}$ with varying values. For each policy, OPE algorithms must use $D$ to produce an estimate of the policy's value. For evaluation of these estimates, we provide "ground truth" values $\{V^{\pi_1}, V^{\pi_2}, \ldots, V^{\pi_N}\}$ that are computed by running each policy for $M \geq 1000$ episodes, where the exact value of $M$ is given by the number of episodes needed to lower the error bar on the ground-truth values to 0.666. The estimated values are then compared to these ground-truth values using three different metrics encompassing both policy evaluation and selection (illustrated in Figure 2; see Appendix A.1 for mathematical definitions).
Absolute Error. This metric measures estimate accuracy rather than usefulness for ranking. Error is the most commonly used metric to assess the performance of OPE algorithms. We opted for absolute error instead of MSE to be robust to outliers.
Regret@k. This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. It is computed by identifying the top-k policies according to the estimated returns. Regret@k is the difference between the actual expected return of the best policy in the entire set and the actual value of the best policy in the top-k set.
Rank correlation. This metric directly measures how well the estimated values rank policies, by computing the correlation between ordinal rankings according to the OPE estimates and ordinal rankings according to the ground-truth values.
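For reference, these three metrics can be computed in a few lines; the sketch below uses Spearman's rank correlation for the ranking metric, and the policy values are hypothetical.

```python
# Absolute error, regret@k, and rank correlation for a set of candidate policies.
import numpy as np
from scipy.stats import spearmanr

def ope_metrics(v_true, v_est, k=1):
    v_true, v_est = np.asarray(v_true, float), np.asarray(v_est, float)
    abs_err = np.abs(v_true - v_est).mean()
    topk = np.argsort(v_est)[::-1][:k]            # best k policies by the estimates
    regret_k = v_true.max() - v_true[topk].max()  # shortfall vs. the true best policy
    rank_corr = spearmanr(v_true, v_est).correlation
    return abs_err, regret_k, rank_corr

# e.g., four candidate policies whose estimates mis-rank the top two:
print(ope_metrics(v_true=[10.0, 8.0, 5.0, 2.0], v_est=[7.0, 9.0, 4.0, 3.0], k=1))
```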
4 DOMAINS
DOPE contains two domains designed to provide a more comprehensive picture of how well OPE methods perform in different settings. These two domains are constructed using two benchmarks previously proposed for offline reinforcement learning, RL Unplugged (Gulcehre et al., 2020) and D4RL (Fu et al., 2020), and reflect the challenges found within them. The DOPE RL Unplugged domain is constrained in two important ways: 1) the data is always generated using online RL training, ensuring there is adequate coverage of the state-action space, and 2) the policies are generated by applying offline RL algorithms to the same dataset we use for evaluation, ensuring that the behavior policy and evaluation policies induce similar state-action distributions. Using it, we hope to understand how OPE methods perform as task complexity increases from simple Cartpole tasks to controlling a Humanoid body, while controlling for ideal data. On the other hand, the DOPE D4RL domain has: 1) data from various sources (including random exploration, human teleoperation, and RL-trained policies with limited exploration), which results in varying levels of coverage of the state-action space, and 2) policies that are generated using online RL algorithms, making it less likely that the behavior and evaluation policies share similar induced state-action distributions. Both of these result in distribution shift, which is known to be challenging for OPE methods even in simple tasks. Thus, using it we hope to measure how well OPE methods work in more practical data settings.
This article proposes a benchmark for off-policy evaluation which provides different metrics for policy ranking, evaluation and selection. Policy values are estimated from logged data, and the estimates are then scored with absolute error, rank correlation and regret, in order to verify the effectiveness of different offline evaluation methods. The article provides two evaluation domains, one based on RL Unplugged and the other on D4RL. In the experiments, the authors run the proposed benchmark in the MuJoCo environment to evaluate the effectiveness of different offline evaluation methods.
SP:dd2a50abff85d2b52b02dfe27cd42e443ea265cf
Triple-Search: Differentiable Joint-Search of Networks, Precision, and Accelerators
1 INTRODUCTION
The powerful performance and prohibitive complexity of deep neural networks (DNNs) have fueled a tremendous demand for efficient DNN accelerators, which could boost DNN acceleration efficiency by orders of magnitude (Chen et al., 2016). In response, extensive research efforts have been devoted to developing DNN accelerators. Early works decouple the design of efficient DNN algorithms and their accelerators. On the algorithm level, pruning, quantization, or neural architecture search (NAS) are adopted to trim down the model complexity; on the hardware level, various FPGA-/ASIC-based accelerators have been developed to customize the micro-architectures (e.g., processing element dimensions, memory sizes, and network-on-chip design) and algorithm-to-hardware mapping methods (e.g., loop tiling strategies and loop orders) in order to optimize the acceleration efficiency for a given DNN. Later, hardware-aware NAS (HA-NAS) was developed to further improve DNNs' acceleration efficiency for different applications (Tan et al., 2019). More recently, it has been recognized that (1) optimal DNN accelerators require a joint consideration/search over all of the following different yet coupled aspects, including DNNs' network structure, the adopted precision, and their accelerators' micro-architecture and mapping methods, and (2) merely exploring a subset of these aspects will lead to sub-optimal designs in terms of hardware efficiency or task accuracy. For example, the optimal accelerators for networks with different structures (e.g., width, depth, and kernel size) can be very different, while the optimal networks and their bitwidths for different accelerators can differ a lot (Wu et al., 2019). However, the direction of jointly designing or searching for all three aspects has only been slightly touched on previously. For example, (Chen et al., 2018; Gong et al., 2019; Wang et al., 2020) proposed to jointly search for the structure and precision of DNNs for a fixed target hardware; (Abdelfattah et al., 2020; Yang et al., 2020; Jiang et al., 2020a;b) made the first attempt to jointly search for the networks and their accelerators, yet either their network or accelerator choices are limited due to the prohibitive time cost required by their adopted reinforcement learning (RL) based methods; and EDD (Li et al., 2020) contributed a pioneering effort towards this direction by formulating a differentiable joint search framework, which however only considers one single accelerator parameter (i.e., the parallel factor) and, more importantly, has not yet fully solved the challenges of such joint search. Although differentiable search is one of the most promising ways, in terms of search efficiency, to explore the huge joint search space as discussed in Sec. 4.2, a plethora of challenges exist to achieve an effective generic joint search over the aforementioned three aspects. First, Challenge 1: to jointly search for a network and its precision via differentiable search, there exists a dilemma about whether to activate all the paths during search.
On one hand, the required memory consumption can easily explode and thus constrain the search's scalability to more complex tasks if all paths are activated; on the other hand, partially activating a subset of the paths can lead to sequential training of different precisions on the same weights, which might result in inaccurate accuracy ranking among different precisions as discussed in (Jin et al., 2020). Second, Challenge 2: the accelerators' parameters are not differentiable, and it is non-trivial to derive the operation-wise hardware-cost penalty needed to perform differentiable search (in consideration of search efficiency). This is because the optimal accelerator is often determined by the whole network instead of one specific operation/layer, due to the fact that some accelerator parameters (e.g., the loop order) need to be optimized for the whole network. In this paper, we aim to address the aforementioned challenges towards scalable generic joint search for the network, precision, and accelerator. Specifically, we make the following contributions:
• We propose a Triple-Search (TRIPS) framework to jointly search for the network, precision, and accelerator in a differentiable manner to efficiently explore the huge joint search space, which cannot be afforded by previous RL-based methods due to their prohibitive search cost. TRIPS identifies and tackles the aforementioned challenges towards scalable generic joint search of the three for maximizing both accuracy and acceleration efficiency.
• We develop a heterogeneous sampling strategy for simultaneously updating the weights and network structures to (1) avoid the need to sequentially train different precisions and (2) achieve unbiased search with constant memory consumption, i.e., solve Challenge 1 above. In addition, we develop a novel co-search pipeline that integrates a differentiable hardware search engine to address Challenge 2 above.
• Extensive experiments and ablation studies validate the effectiveness of our proposed TRIPS framework in terms of the resulting search time, task accuracy, and accelerator efficiency, when benchmarked against state-of-the-art (SOTA) co-search/exploration techniques, HA-NAS methods, and DNN accelerators. Furthermore, we visualize the accelerators searched by TRIPS to provide insights towards efficient DNN accelerator design in the Appendix.

2 RELATED WORKS
Hardware-aware NAS. Hardware-aware NAS has been proposed to automate the design of efficient DNNs. Early works (Tan et al., 2019; Howard et al., 2019; Tan & Le, 2019) utilize RL-based NAS that requires a massive search time/cost, while recent works (Wu et al., 2019; Wan et al., 2020; Cai et al., 2018; Stamoulis et al., 2019) explore the design space in a differentiable way (Liu et al., 2018) with much improved search efficiency. Along another direction, one-shot NAS methods (Cai et al., 2019; Guo et al., 2020; Yu et al., 2020) pretrain the supernet and directly evaluate the performances of the sub-networks in a weight-sharing manner as a proxy of their independently trained performances, at the cost of a longer pretraining time. In addition, NAS has been adopted to search for quantization strategies (Wang et al., 2019; Wu et al., 2018; Cai & Vasconcelos, 2020; Elthakeb et al., 2020) for trimming down the complexity of a given DNN.
However, these works leave the hardware design space unexplored, which is a crucial enabler for DNNs' acceleration efficiency, and thus can lead to sub-optimal solutions.
DNN accelerators. Motivated by customized accelerators' large potential gains, SOTA accelerators (Du et al., 2015; Chen et al., 2017) innovate micro-architectures and algorithm-to-hardware mapping methods to optimize the acceleration efficiency, given a DNN and the hardware specifications. However, it is non-trivial to design an optimal accelerator, as it requires cross-disciplinary knowledge in algorithms, micro-architecture, and circuit design. SOTA accelerator design relies either on experts' manual design, which is very time consuming, or on design flows (Chen et al., 2005; 2009; Rupnow et al., 2011) and DNN accelerator design automation (Wang et al., 2016; Zhang et al., 2018a; Guan et al., 2017; Venkatesan et al., 2019; Wang et al., 2018a; Gao et al., 2017). As they merely explore the accelerator design space, they can result in sub-optimal solutions as compared to SOTA co-search/exploration methods and our TRIPS framework.
Co-exploration/search techniques. Pioneering efforts have been made towards jointly searching for DNNs and their accelerators to some extent. For joint searching of DNNs and their precision, (Chen et al., 2018; Gong et al., 2019; Wang et al., 2020) adopt either differentiable or evolutionary algorithms, yet without exploring their hardware accelerators. For joint searching of DNNs and their accelerators, (Abdelfattah et al., 2020; Yang et al., 2020; Jiang et al., 2020a;b) conduct RL-based search for the networks and some accelerator parameters/templates, where they strictly constrain the search space of the network or accelerator to achieve a practical RL search time, limiting their scalability and achievable efficiency. (Lin et al.) is another pioneering work which co-designs the network and accelerator in a sequential manner, based on the fact that the accelerator's design cycle is longer than the network's. EDD (Li et al., 2020) extends differentiable NAS to search for layer-wise precision and the accelerators' parallel factor, which is most relevant to our TRIPS. EDD has not yet fully solved the joint search challenges. First, it does not discuss or address the potentially explosive memory consumption of such joint search; second, EDD's accelerator search space only includes the parallel factor, which is strictly tied to their accelerator template and cannot generalize to include common accelerator parameters such as the tiling strategies. Building upon prior art, our TRIPS targets a scalable generic joint search framework to optimally search for the network, its precision, and the adopted accelerator in a differentiable manner for improved efficiency.

3 THE PROPOSED TECHNIQUES
In this section, we describe our proposed techniques for enabling TRIPS, where Sec. 3.1 provides TRIPS's formulation, Sec. 3.2 and Sec. 3.3 introduce TRIPS's enablers that address the key challenges of scalable generic joint search for networks, precision, and accelerators, and Sec. 3.4 unifies the enablers to build a comprehensive co-search framework.

3.1 TRIPS: FORMULATION
Fig. 1 shows an overview of TRIPS, which jointly searches for the networks (e.g., kernel size, channel expansion, and group number), precision (e.g., 4-/6-/8-/12-/16-bit), and the accelerators (e.g.
, PE array type, buffer size, and tiling strategies of each memory hierarchy) in a differentiable manner. TRIPS targets a scalable yet generic joint search framework, which we formulate as a bi-level optimization problem:
$$\min_{\alpha, \beta}\ \mathcal{L}_{val}(\omega^*, net(\alpha), prec(\beta)) + \lambda\, \mathcal{L}_{cost}(hw(\gamma^*), net(\alpha), prec(\beta)) \tag{1}$$
$$\text{s.t.}\quad \omega^* = \arg\min_{\omega}\ \mathcal{L}_{train}(\omega, net(\alpha), prec(\beta)), \tag{2}$$
$$\text{s.t.}\quad \gamma^* = \arg\min_{\gamma}\ \mathcal{L}_{cost}(hw(\gamma), net(\alpha), prec(\beta)) \tag{3}$$
where α, β, and γ are the continuous variables parameterizing the probability of different choices for the network operators, precision bitwidths, and accelerator parameters as in (Liu et al., 2018), ω is the supernet weights, L_train, L_val, and L_cost are the training loss, validation loss, and hardware-cost loss, respectively, and net(α), prec(β), and hw(γ) denote the network, precision, and accelerator characterized by α, β, and γ, respectively.
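To illustrate how the search variables enter the objective, below is a minimal sketch of sampling one choice per search dimension with Gumbel Softmax and forming the hardware-cost penalty of Eq. (1); the dimensionalities and cost values are hypothetical, and this is a simplification of the pipeline detailed in Secs. 3.2-3.4.

```python
import torch
import torch.nn.functional as F

# Logits parameterizing choice probabilities; the sizes are illustrative only.
alpha = torch.zeros(5, requires_grad=True)  # network operator choices
beta = torch.zeros(5, requires_grad=True)   # precision choices, e.g. {4,6,8,12,16}-bit
gamma = torch.zeros(3, requires_grad=True)  # accelerator parameter choices

# Soft samples keep all paths differentiable; hard samples activate one path
# while still passing gradients via the straight-through estimator.
op_probs = F.gumbel_softmax(alpha, tau=1.0)             # soft
prec_mask = F.gumbel_softmax(beta, tau=1.0, hard=True)  # hard, one-hot
hw_mask = F.gumbel_softmax(gamma, tau=1.0, hard=True)   # hard, one-hot

# Hypothetical per-choice hardware costs; the expected cost plays the role of
# L_cost in Eq. (1) and is scaled by lambda in the overall objective.
hw_costs = torch.tensor([1.0, 2.5, 4.0])
l_cost = (hw_mask * hw_costs).sum()
```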
This paper proposes Triple-Search (TRIPS), a differentiable framework for jointly searching for the network architecture, quantization precision, and accelerator parameters. To address the dilemma between exploding training memory and biased search, the proposed framework leverages heterogeneous sampling, where a soft Gumbel Softmax is used for weight updates and a hard Gumbel Softmax is used for the probabilities $\beta$. To integrate accelerator search, a hard Gumbel Softmax is used on the hardware design choices, and the overall hardware cost is used as a penalty. Experiments are conducted on an FPGA platform with the CIFAR and ImageNet datasets to show the superiority of TRIPS over NAS-only methods.
SP:1037f94ce6eae4a42ea7913c76007f5f3c26aeaf
Gradient Based Memory Editing for Task-Free Continual Learning
1 INTRODUCTION
Accumulating past knowledge and adapting to evolving environments are key traits of human intelligence (McClelland et al., 1995). While contemporary deep neural networks have achieved impressive results in a range of machine learning tasks (Goodfellow et al., 2015), they have not yet manifested the ability to learn continually over evolving data streams (Ratcliff, 1990). These models suffer from catastrophic forgetting (McCloskey & Cohen, 1989; Robins, 1995) when trained in an online fashion—i.e., performance drops on previously seen examples during the sequential learning process. To this end, continual learning (CL) methods have been developed to alleviate the catastrophic forgetting issue when models are trained on non-stationary data streams (Goodfellow et al., 2013). Most existing work on continual learning assumes that, when models are trained on a stream of tasks sequentially, task specifications such as task boundaries or identities are exposed to the models. These task-aware CL methods make explicit use of task specifications to avoid catastrophic forgetting, including consolidating important parameters on previous tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018), distilling knowledge from previous tasks (Li & Hoiem, 2017; Rannen et al., 2017), or separating task-specific model parameters (Rusu et al., 2016; Serrà et al., 2018). However, in practice, it is more likely that data instances come in a sequential, non-stationary fashion without task identity or boundary—a setting that is commonly termed task-free continual learning (Aljundi et al., 2018). To tackle this setting, recent attempts at task-free CL methods have been made (Aljundi et al., 2018; Zeno et al., 2018; Lee et al., 2020). These efforts revolve around regularization- and model-expansion-based approaches, which rely on inferring task boundaries or identities (Aljundi et al., 2018; Lee et al., 2020) or perform online parameter importance estimation (Zeno et al., 2018) to consolidate or separate model parameters. In another line of effort, memory-based CL methods have achieved strong results in the task-free setting (Aljundi et al., 2019b). These methods store a small set of previously seen instances in a fixed-size memory and utilize them for replay (Robins, 1995; Rolnick et al., 2019) or regularization (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a). The core problem in memory-based CL methods is how to manage the memory instances (e.g., which to replace with new instances) and replay them given a restricted computation budget, so that the model performance can be maximally preserved or enhanced.¹ Prior work developing these methods has tried to identify: 1) what instances to include in memory from a data stream (Aljundi et al., 2019b; Rebuffi et al., 2017; Chaudhry et al., 2019b); and 2) which instances in memory need to be replayed at which training step (Aljundi et al., 2019a). In this paper, we provide a new approach to solving the memory management problem in task-free continual learning by studying how to make gradient updates on stored memory examples. We develop a novel memory editing algorithm which complements existing memory-replay methods and data-sampling strategies for memory management (updates).
¹Code has been uploaded in the supplementary materials and will be published.
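For orientation, the following is a minimal sketch of the Experience Replay (ER) step that memory-based methods build on; the batch sizes and helper names are our own simplifications.

```python
import torch

def er_step(model, optimizer, loss_fn, new_x, new_y, memory, replay_size=10):
    # One ER update: train on the incoming batch mixed with a random
    # mini-batch drawn from the fixed-size replay memory.
    x, y = new_x, new_y
    if memory:
        idx = torch.randint(len(memory), (min(replay_size, len(memory)),))
        mem_x = torch.stack([memory[i][0] for i in idx])
        mem_y = torch.stack([memory[i][1] for i in idx])
        x, y = torch.cat([new_x, mem_x]), torch.cat([new_y, mem_y])
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
```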
The challenge is to propose a plausible and sound optimization objective for editing. We employ the same intuition as previous studies (Toneva et al., 2019; Chaudhry et al., 2020; Aljundi et al., 2019a): examples that are likely to be forgotten should be prioritized. Our proposed method, named Gradient-based Memory EDiting (GMED), edits examples stored in the memory with gradient-based updates so that they are more likely to be forgotten. Specifically, we estimate the "forgetting" of a stored example by its loss increase over one upcoming online model update. Finally, we perform gradient ascent on stored examples so that they are more likely to be forgotten. Experiments show that our algorithm consistently outperforms baselines on five benchmark datasets under various memory sizes. Our ablation study shows that the proposed editing mechanism outperforms alternative editing strategies such as random editing. We demonstrate that the proposed algorithm is general enough to be used with other strong (more recent) memory-based CL methods to further enhance performance, thus allowing for improvements on many benchmark datasets.

2 RELATED WORKS
Task-aware Continual Learning. Most continual learning algorithms are studied under "task-aware" settings, where the model visits a sequence of clearly separated "tasks". A large portion of algorithms make explicit use of task boundaries (Kirkpatrick et al., 2017; Rusu et al., 2016; Lopez-Paz & Ranzato, 2017), by learning separate parameters for each task, or by discouraging changes to parameters that are important for old tasks. Existing continual learning algorithms can be summarized into three categories: regularization-based, architecture-based and data-based approaches. Regularization-based approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018; Adel et al., 2020) discourage the change of parameters that are important for previous data. Model-expansion-based approaches (Rusu et al., 2016; Serrà et al., 2018; Li et al., 2019) allow expansion of the model architecture to separate parameters for previous and current data. Data-based approaches (Robins, 1995; Shin et al., 2017; Lopez-Paz & Ranzato, 2017) replay or constrain model updates with real or synthetic examples.
Task-free Continual Learning. Recently, task-free continual learning (Aljundi et al., 2018) has drawn increasing interest; here we do not assume knowledge about task boundaries. To the best of our knowledge, only a handful of regularization-based (Zeno et al., 2018; Aljundi et al., 2018), model-expansion-based (Lee et al., 2020), generative-replay-based (Rao et al., 2019), and continual meta-learning and meta-continual learning (He et al., 2019; Caccia et al., 2020; Harrison et al., 2020) approaches are applicable in the task-free CL setting. Meanwhile, most memory-based continual learning algorithms are applicable to the task-free setting (Aljundi et al., 2019a;b). Memory-based CL algorithms such as Experience Replay (ER) (Robins, 1995) store a subset of examples in a fixed-size replay memory and utilize them later in training to alleviate forgetting. Recent research has studied online strategies to improve the performance gain when examples get replayed along two dimensions: which examples to store, and which examples to replay. For example, in terms of deciding which examples to store, Gradient based Sample Selection (GSS) (Aljundi et al.
, 2019b) proposes to store the most diverse examples. In terms of deciding which examples to replay, Maximally Interfered Retrieval (MIR) (Aljundi et al., 2019a) selects the examples with the largest estimated forgetting. In particular, a task-aware approach, Hindsight Anchor Learning (HAL) (Chaudhry et al., 2020), shares the same assumption that forgettable examples should be prioritized. However, HAL only applies to task-aware settings and requires extra memory storage to keep track of the learned anchors. Figure 1 shows a categorization of memory-based task-free continual learning.

3 PRELIMINARIES
In this section we first present the problem formulation of task-free continual learning and then introduce preliminaries on memory-based continual learning methods.

3.1 PROBLEM FORMULATION
In task-free continual learning, we consider a (potentially infinite) stream of data examples D with a non-stationary data distribution, i.e., the data distribution P(x, y) changes over time. At each time step t, the model receives a single or a mini-batch of labeled examples (x_t, y_t) from the data stream D. For simplicity, here we assume that an example (x_t, y_t) from D is generated by first sampling a latent "task" z ∼ P(z; t), followed by sampling a data example from a joint data distribution P(x, y|z; t) that is conditioned on task z, i.e., (x_t, y_t) ∼ P(x, y|z; t). Here P(z; t) is non-i.i.d. and time-dependent. Similarly, P(x, y|z; t) also changes over time. The goal of task-free online continual learning is to seek a classification model f(x; θ), parameterized by θ, over new example(s) (x, y) from the data stream D that minimizes a predefined loss ℓ(x, y; θ) while not increasing the loss on previously seen examples. This capability is evaluated by testing the model over a test set of all visited tasks.

3.2 MEMORY-BASED CL METHODS
Briefly, memory-based CL algorithms maintain a fixed-size replay memory M which is used to store (a subset of) previously seen examples (x_t, y_t) from the stream D. When the memory is full, the algorithm needs either to identify a memory example (x, y) to be replaced by the new example, or to discard the new example it just received. Following the same setup as previous memory-based CL methods, our experiments use reservoir sampling (Vitter, 1985) to determine how the memory is updated with new examples received from stream D. Every time the model receives a new example, it draws an integer j between 0 and N uniformly at random, where N is the number of examples visited so far. If j < |M| (i.e., the memory size or budget), it replaces the example at the j-th position in the memory with the new example; otherwise, the newly received example is discarded. Reservoir sampling ensures that at each time step each visited example is kept with equal probability |M|/N. At each time step t, the algorithm also needs to determine the memory examples to be used for replay. Similar to previous methods, we randomly sample one or a mini-batch of examples (x, y) from the memory M. As an alternative replay strategy, MIR (Aljundi et al., 2019a) identifies a subset of memory examples based on a predefined optimization objective (i.e., performing one step of training on (x, y)), and then replays the selected examples.

4 GRADIENT BASED MEMORY EDITING
We propose Gradient based Memory Editing (GMED), a novel algorithm for updating stored memory examples in an online fashion. We state our hypothesis about which examples should be stored in Sec. 4.1. We then formulate an online optimization objective for example editing in Sec. 4.2. In Sec. 4.3, we introduce algorithmic details of GMED and its integration with MIR.
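To make the two memory operations concrete, below is a minimal sketch of the reservoir update of Sec. 3.2 and of one GMED editing step; the lookahead construction and the learning rates are our own simplifications of the procedure detailed in Sec. 4.3.

```python
import copy
import random
import torch

def reservoir_update(memory, example, n_seen, capacity):
    # Reservoir sampling (Vitter, 1985): each visited example is kept
    # with probability |M| / N, as described in Sec. 3.2.
    if len(memory) < capacity:
        memory.append(example)
    else:
        j = random.randint(0, n_seen - 1)
        if j < capacity:
            memory[j] = example

def gmed_edit(model, loss_fn, mem_x, mem_y, new_x, new_y,
              edit_lr=0.1, inner_lr=0.01):
    # One schematic GMED step: move stored inputs in the direction that
    # increases their loss after a simulated online update on new data.
    mem_x = mem_x.clone().requires_grad_(True)
    loss_before = loss_fn(model(mem_x), mem_y)

    # Simulate the upcoming online update on a throwaway copy of the model.
    lookahead = copy.deepcopy(model)
    opt = torch.optim.SGD(lookahead.parameters(), lr=inner_lr)
    opt.zero_grad()
    loss_fn(lookahead(new_x), new_y).backward()
    opt.step()

    # "Forgetting" proxy: loss increase on the stored examples.
    forgetting = loss_fn(lookahead(mem_x), mem_y) - loss_before

    # Gradient ascent on the inputs makes them more likely to be forgotten.
    grad_x, = torch.autograd.grad(forgetting, mem_x)
    return (mem_x + edit_lr * grad_x).detach()
```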
This paper deals with continual learning. Specifically, given a stream of tasks, we want to maximise performance across all tasks. Typically, neural networks suffer from catastrophic forgetting, which results in worse performance on tasks seen earlier in training. There are many proposed solutions to this problem. One specific set of approaches is "memory based" algorithms. Here we store some training examples in memory from the tasks seen thus far. These are then mixed in with new training data so as to encourage the model not to forget past tasks.
SP:d850572819200f79545616fc92e789ce958b30d4
Improving Transformation Invariance in Contrastive Representation Learning
1 INTRODUCTION
Learning meaningful representations of data is a central endeavour in artificial intelligence. Such representations should retain important information about the original input whilst using fewer bits to store it (van der Maaten et al., 2009; Gregor et al., 2016). Semantically meaningful representations may discard a great deal of information about the input, whilst capturing what is relevant. Knowing what to discard, as well as what to keep, is key to obtaining powerful representations. By defining transformations that are believed a priori to distort the original without altering semantic features of interest, we can learn representations that are (approximately) invariant to these transformations (Hadsell et al., 2006). Such representations may be more efficient and more generalizable than lossless encodings. Whilst less effective for reconstruction, these representations are useful in many downstream tasks that relate only to the semantic features of the input. Representation invariance is also a critically important task in and of itself: it can lead to improved robustness and remove noise (Du et al., 2020), afford fairness in downstream predictions (Jaiswal et al., 2020), and enhance interpretability (Xu et al., 2018). Contrastive learning is a recent and highly successful self-supervized approach to representation learning that has achieved state-of-the-art performance in tasks that rely on semantic features, rather than exact reconstruction (van den Oord et al., 2018; Hjelm et al., 2018; Bachman et al., 2019; He et al., 2019). These methods learn to match two different transformations of the same object in representation space, distinguishing them from contrasts that are representations of other objects. The objective functions used for contrastive learning encourage representations to remain similar under transformation, whilst simultaneously requiring different inputs to be well spread out in representation space (Wang & Isola, 2020). As such, the choice of transformations is key to their success (Chen et al., 2020a). Typical choices include random cropping and colour distortion. However, representations are compared using a similarity function that can be maximized even for representations that are far apart, meaning that the invariance learned is relatively weak. Unfortunately, directly changing the similarity measure hampers the algorithm (Wu et al., 2018; Chen et al., 2020a). We therefore investigate methods to improve contrastive representations by explicitly encouraging stronger invariance to the set of transformations, without changing the core self-supervized objective; we look to extract more information about how representations are changing with respect to transformation, and use this to direct the encoder towards greater invariance. To this end, we first develop a gradient regularization term that, when included in the training loss, forces the encoder to learn a representation function that varies slowly with continuous transformations. This can be seen as constraining the encoder to be approximately transformation invariant. We demonstrate empirically that while the parameters of the transformation can be recovered from standard contrastive learning representations using just linear regression, this is no longer the case when our regularization is used.
Moreover, our representations perform better on downstream tasks and are robust to the introduction of nuisance transformations at test time. Test representations are conventionally produced using untransformed inputs (Hjelm et al., 2018; Kolesnikov et al., 2019), but this fails to combine information from different transformations and views of the object, or to emulate settings in which transformation noise cannot simply be removed at test time. Our second key proposal is to instead create test-time representations by feature averaging over multiple, differently transformed, inputs to address these concerns and to more directly impose invariance. We show theoretically that this leads to improved performance under linear evaluation protocols, further confirming this result empirically. We evaluate our approaches first on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), using transformations appropriate to natural images and evaluating on a downstream classification task. To validate that our ideas transfer to other settings, and to use our gradient regularizer within a fully differentiable generative process, we further introduce a new synthetic dataset called Spirograph. This provides a greater variety of downstream regression tasks, and allows us to explore the interplay between nuisance transformations and generative factors of interest. We confirm that using our regularizer during training and our feature averaging at test time both improve performance in terms of transformation invariance, downstream tasks, and robustness to train–test distributional shift. In summary, the contributions of this paper are as follows:
• We derive a novel contrastive learning objective that leads to more invariant representations.
• We propose test-time feature averaging to enforce further invariance (sketched below).
• We introduce the Spirograph dataset.
• We show empirically that our approaches lead to more invariant representations and achieve state-of-the-art performance for existing downstream task benchmarks.
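A minimal sketch of the test-time feature averaging proposal, assuming an encoder f and a list of transformation functions; in practice the transformations would be random draws from p(t).

```python
import torch

def feature_average(f, x, transforms):
    # Test-time representation: average encoder outputs over several
    # differently transformed copies of the same input x.
    with torch.no_grad():
        zs = torch.stack([f(t(x)) for t in transforms])
    return zs.mean(dim=0)
```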
2 PROBABILISTIC FORMULATION OF CONTRASTIVE LEARNING
The goal of unsupervized representation learning is to encode high-dimensional data, such as images, retaining information that may be pertinent to downstream tasks and discarding information that is not. To formalize this, we consider a data distribution p(x) on X and an encoder f_θ : X → Z, which is a parametrized function mapping from data space to representation space. Contrastive learning is a self-supervized approach to representation learning that learns to make representations of differently transformed versions of the same input more similar than representations of other inputs. Of central importance is the set of transformations, also called augmentations (Chen et al., 2020a) or views (Tian et al., 2019), used to distort the data input x. In the common application of computer vision, it is typical to include resized cropping; brightness, contrast, saturation and hue distortion; greyscale conversion; and horizontal flipping. We will later introduce the Spirograph dataset, which uses quite different transformations. In general, transformations are assumed to change the input only cosmetically, so all semantic features such as the class label are preserved; the set of transformations indicates changes which can be safely ignored by the encoder. Formally, we consider a transformation set T ⊆ {t : X → X} and a probability distribution p(t) on this set. A representation z of x is obtained by applying a random transformation t to x and then encoding the result using f_θ. Therefore, we do not have one representation of x, but an implicit distribution p(z|x). A sample of p(z|x) is obtained by sampling t ∼ p(t) and setting z = f_θ(t(x)). If the encoder is to discard irrelevant information, we would expect different encodings of x formed with different transformations t to be close in representation space. Altering the transformation should not lead to big changes in the representations of the same input. In other words, the distribution p(z|x) should place most probability mass in a small region. However, this does not provide a sufficient training signal for the encoder f_θ, as it fails to penalize trivial solutions in which all x are mapped to the same z. To preserve meaningful information about the input x whilst discarding purely cosmetic features, we should require p(z|x) to be focused around a single z whilst simultaneously requiring the representations of different inputs not to be close. That is, the marginal p(z) = E_{p(x)}[p(z|x)] should distribute probability mass over representation space. This intuition is directly reflected in contrastive learning. Most state-of-the-art contrastive learning methods utilize the InfoNCE objective (van den Oord et al., 2018), or close variants of it (Chen et al., 2020a). InfoNCE uses a batch x_1, ..., x_K of inputs, from which we form pairs of representations (z_1, z'_1), ..., (z_K, z'_K) by applying two random transformations to each input followed by the encoder f_θ. In probabilistic language,
$$x_i \sim p(x) \quad \text{for } i = 1, \dots, K \tag{1}$$
$$z_i, z'_i \sim p(z \mid x = x_i) \ \text{conditionally independently given } x_i, \ \text{for } i = 1, \dots, K, \tag{2}$$
such that $z_i, z'_i = f_\theta(t(x_i)), f_\theta(t'(x_i))$ for i.i.d. transformations $t, t' \sim p(t)$. Given a learnable similarity score $s_\phi : Z \times Z \to \mathbb{R}$, contrastive learning methods minimize the following loss:
$$\mathcal{L}(\theta, \phi) = -\frac{1}{K}\sum_{i=1}^{K} s_\phi(z_i, z'_i) + \frac{1}{K}\sum_{i=1}^{K} \log \sum_{j=1}^{K} \exp\left[s_\phi(z_i, z'_j)\right]. \tag{3}$$
Written in this way, we see that the loss will be minimized when $s_\phi(z_i, z'_i)$ is large, but $s_\phi(z_i, z'_j)$ is small for $i \neq j$. In other words, InfoNCE makes the two samples $z_i, z'_i$ of $p(z|x = x_i)$ similar, whilst making samples $z_i, z'_j$ of $p(z)$ dissimilar. This can also be understood through the lens of mutual information; for more details see Appendix A. In practice, the similarity measure used generally takes the form (Chen et al., 2020a)
$$s_\phi(z, z') = \frac{g_\phi(z)^\top g_\phi(z')}{\tau \|g_\phi(z)\|_2 \|g_\phi(z')\|_2} \tag{4}$$
where $g_\phi$ is a small neural network and $\tau$ is a temperature hyperparameter. If the encoder $f_\theta$ is perfectly invariant to the transformations, then $z_i = z'_i$ and $s_\phi(z_i, z'_i)$ will be maximal. However, there are many ways to maximize the InfoNCE objective without encouraging strong invariance in the encoder.¹ In this paper, we show how we can learn stronger invariances, above and beyond what is learned through the above approach, and that this benefits downstream task performance.
¹This is because the function $g_\phi$ is not an injection, so we may have $g_\phi(z) = g_\phi(z')$ but $z \neq z'$. Johnson & Lindenstrauss (1984) gives conditions under which a projection of this form will preserve approximate distances; in particular, the required projection dimension is much larger than the typical value of 128.
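For reference, a minimal sketch of the InfoNCE loss of Eq. (3) with the similarity of Eq. (4); the projection head g and the temperature value are placeholders.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z, z_prime, g, tau=0.5):
    # z, z_prime: [K, d] two representations per input; g: projection head.
    h = F.normalize(g(z), dim=1)          # g(z) / ||g(z)||_2
    h_prime = F.normalize(g(z_prime), dim=1)
    sim = h @ h_prime.t() / tau           # [K, K] matrix of s_phi(z_i, z'_j)
    # Eq. (3): -mean of positives + mean of log-sum-exp over all contrasts.
    return (-sim.diag() + torch.logsumexp(sim, dim=1)).mean()
```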
3 INVARIANCE BY GRADIENT REGULARIZATION
Contrastive learning with InfoNCE can gently encourage invariance by maximizing $s_\phi(z, z')$, but does not provide a strong signal to ensure this invariance. Our first core contribution is to show how we can use gradient methods to directly regulate how the representation changes with the transformation and thus ensure the desired invariance. The key underlying idea is to differentiate the representation with respect to the transformation, and then encourage this gradient to be small so that the representation changes slowly as the transformation is varied. To formalize this, we begin by looking more closely at the transformations T which are used to define the distribution p(z|x). Many transformations, such as brightness adjustment, are controlled by a transformation parameter. We can include these parameters in our set-up by writing the transformation t as a map from both input space X and transformation parameter space U, i.e., t : X × U → X. In this formulation, we sample a random transformation parameter u ∼ p(u), where p(u) is a distribution on U. A sample from p(z|x) is then obtained by taking z = f_θ(t(x, u)), with t now regarded as a fixed function. The advantage of this change of perspective is that it opens up additional ways to learn stronger invariance of the encoder. In particular, it may make sense to consider the gradient ∇_u z, which describes the rate of change of z with respect to the transformation. This only makes sense for some transformation parameters—we can differentiate with respect to the brightness scaling but not with respect to a horizontal flip. To separate out differentiable and non-differentiable parameters we write u = (α, β), where α are the parameters for which it makes sense to consider the derivative ∇_α z. Intuitively, this gradient should be small to ensure that representations change only slowly as the transformation parameter α is varied. For clarity of exposition, and for implementation practicalities, it is important to consider gradients of a scalar function, so we introduce an arbitrary direction vector e ∈ Z and define
$$F(\alpha, \beta, x, e) = \frac{e \cdot f_\theta(t(x, \alpha, \beta))}{\|f_\theta(t(x, \alpha, \beta))\|_2} \tag{5}$$
so that F : A × B × X × Z → R calculates the scalar projection of the normalized representation z/‖z‖₂ in the e direction. To encourage an encoder that is invariant to changes in α, we would like to minimize the expected conditional variance of F with respect to α:
$$V = \mathbb{E}_{p(x)p(\beta)p(e)}\big[\mathrm{Var}_{p(\alpha)}[F(\alpha, \beta, x, e) \mid x, \beta, e]\big], \tag{6}$$
where we have exploited independence to write p(x, β, e) = p(x)p(β)p(e). Defining V requires a distribution for e to be specified. For this, we make the components of e independent Rademacher random variables; justification for this is included in Appendix B. A naive estimator of V can be formed using a direct nested Monte Carlo estimator (Rainforth et al., 2018) of sample variances, which, including Bessel's correction, is given by
$$V \approx \frac{1}{K}\sum_{i=1}^{K}\left[\frac{1}{L-1}\sum_{j=1}^{L} F(\alpha_{ij}, \beta_i, x_i, e_i)^2 - \frac{1}{L(L-1)}\Big[\sum_{k=1}^{L} F(\alpha_{ik}, \beta_i, x_i, e_i)\Big]^2\right] \tag{7}$$
where $x_i, \beta_i, e_i \sim p(x)p(\beta)p(e)$ and $\alpha_{ij} \sim p(\alpha)$. However, this estimator requires LK forward passes through the encoder $f_\theta$ to evaluate.
As an alternative to this computationally prohibitive approach, we consider a first-order approximation² to F,
$$F(\alpha', \beta, x, e) - F(\alpha, \beta, x, e) = \nabla_\alpha F(\alpha, \beta, x, e) \cdot (\alpha' - \alpha) + o(\|\alpha' - \alpha\|) \tag{8}$$
and the following alternative form for the conditional variance (see Appendix B for a derivation):
$$\mathrm{Var}_{p(\alpha)}[F(\alpha, \beta, x, e) \mid x, \beta, e] = \tfrac{1}{2}\mathbb{E}_{p(\alpha)p(\alpha')}\big[(F(\alpha, \beta, x, e) - F(\alpha', \beta, x, e))^2 \mid x, \beta, e\big] \tag{9}$$
Combining these two ideas, we have
$$V = \mathbb{E}_{p(x)p(\beta)p(e)}\Big[\tfrac{1}{2}\mathbb{E}_{p(\alpha)p(\alpha')}\big[(F(\alpha, \beta, x, e) - F(\alpha', \beta, x, e))^2 \mid x, \beta, e\big]\Big] \tag{10}$$
$$\approx \mathbb{E}_{p(x)p(\beta)p(e)}\Big[\tfrac{1}{2}\mathbb{E}_{p(\alpha)p(\alpha')}\big[(\nabla_\alpha F(\alpha, \beta, x, e) \cdot (\alpha' - \alpha))^2 \mid x, \beta, e\big]\Big]. \tag{11}$$
Here we have an approximation of the conditional variance V that uses gradient information. Including this as a regularizer within contrastive learning will encourage the encoder to reduce the magnitude of the conditional variance V, forcing the representation to change slowly as the transformation is varied and thus inducing approximate invariance to the transformations. An unbiased estimator of equation 11 using a batch $x_1, \dots, x_K$ is
$$\hat{V}_{\text{regularizer}} = \frac{1}{K}\sum_{i=1}^{K}\frac{1}{2L}\sum_{j=1}^{L}\big[\nabla_\alpha F(\alpha_i, \beta_i, x_i, e_i) \cdot (\alpha'_{ij} - \alpha_i)\big]^2 \tag{12}$$
where $x_i, \alpha_i, \beta_i, e_i \sim p(x)p(\alpha)p(\beta)p(e)$ and $\alpha'_{ij} \sim p(\alpha)$. We can cheaply use a large number of samples for α′ without having to take any additional forward passes through the encoder: we only require K evaluations of F. Our final loss function is
$$\mathcal{L}(\theta, \phi) = -\frac{1}{K}\sum_{i=1}^{K} s_\phi(z_i, z'_i) + \frac{1}{K}\sum_{i=1}^{K}\log\sum_{j=1}^{K}\exp\big[s_\phi(z_i, z'_j)\big] + \frac{\lambda}{LK}\sum_{i=1}^{K}\sum_{j=1}^{L}\big[\nabla_\alpha F(\alpha_i, \beta_i, x_i, e_i) \cdot (\alpha'_{ij} - \alpha_i)\big]^2 \tag{13}$$
where λ is a hyperparameter controlling the regularization strength. This loss does not require us to encode a larger number of differently transformed inputs. Instead, it uses the gradient at (x, α, β, e) to control properties of the encoder in a neighbourhood of α. This can effectively reduce the representation gradient along the directions corresponding to many different transformations. This, in turn, creates an encoder that is approximately invariant to the transformations.
²We use the notation a(x) = o(b(x)) to mean a(x)/b(x) → 0 as x → ∞.
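The estimator in Eq. (12) can be computed with a single extra backward pass; below is a minimal sketch assuming a differentiable transformation t(x, alpha, beta) applied per batch element, with shapes chosen for illustration only.

```python
import torch

def invariance_regularizer(f, t, x, alpha, beta, e, alpha_prime):
    # Unbiased estimate of Eq. (12). Illustrative shapes:
    # x: [K, ...], alpha: [K, A], e: [K, d] Rademacher, alpha_prime: [K, L, A].
    alpha = alpha.clone().requires_grad_(True)
    z = f(t(x, alpha, beta))                          # [K, d]
    z = z / z.norm(dim=1, keepdim=True)               # normalized representation
    F_val = (e * z).sum()                             # sum_i e_i . z_i / ||z_i||
    grad_a, = torch.autograd.grad(F_val, alpha, create_graph=True)  # [K, A]
    # grad . (alpha' - alpha) for all L resamples, with no new forward passes.
    proj = ((alpha_prime - alpha.unsqueeze(1)) * grad_a.unsqueeze(1)).sum(-1)
    return proj.pow(2).mean() / 2.0                   # averages over K and L
```

With create_graph=True the penalty itself remains differentiable with respect to θ, so it can simply be added to the InfoNCE loss with weight λ as in Eq. (13).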
Given one image, the paper first generates different views controlled by a differentiable parameter $\alpha$, and then minimizes an additional "conditional variance" term (the expectation of the views' squared differences). The paper thereby encourages representations of the same image to remain similar under augmentation. A testing strategy is further proposed that averages features over different augmentations. Results demonstrate the effectiveness of the approach.
SP:a692e1e43991839e08a02e9122757224e1582cfd
Understanding the Effect of Bias in Deep Anomaly Detection
1 INTRODUCTION
Anomaly detection (Chandola et al., 2009; Pimentel et al., 2014) trains a formal model to identify unexpected or anomalous instances in incoming data, whose behaviors differ from normal instances. It is particularly useful for detecting problematic events such as digital fraud, structural defects, and system malfunctions. Building accurate anomaly detection models is a well-known challenge in machine learning, due to the scarcity of labeled anomaly data. The classical and most common approach is to train anomaly detection models using only normal data¹, i.e., first train a model using a corpus of normal data to capture normal behaviors, then configure the model to flag instances with large deviations as anomalies. Researchers have also developed deep learning methods to better capture the complex structure in the data (Ruff et al. (2018); Wang et al. (2019a); Zhou & Paffenroth (2017)). Following the terminology introduced by Chandola et al. (2009), we refer to these models as semi-supervised anomaly detection. Recently, a new line of anomaly detection models proposes to leverage available labeled anomalies during model training, i.e., train an anomaly detection model using both normal data and additional labeled anomaly samples as they become available (Ruff et al. (2020b); Yamanaka et al. (2019); Ruff et al. (2020a); Hendrycks et al. (2019a)). Existing works show that these new models achieve considerable performance improvements beyond the models trained using only normal data. We hereby refer to these models as deep supervised² anomaly detection (Chandola et al., 2009). When exploring these models, we found that when the labeled anomalies (used to train the model) do not align with the target distribution, they can introduce harmful bias into the trained model. Specifically, when comparing the performance of a supervised anomaly detector to its semi-supervised version, the performance difference varies significantly across test anomaly data, some better and some worse. That is, using labeled anomalies during model training does not always improve model performance; instead, it may introduce large variance (or bias) in anomaly detection outcomes. In this paper, we aim to understand the effect of a biased training set on deep anomaly detection models. We formally state the anomaly detection problem, focusing on the anomaly detector's recall at a given false positive rate as the main performance metric. We factor the contribution of the labeled anomalies through the detector's anomaly scoring function, and show that different types of labeled anomalies produce different anomaly scoring functions. Next, given any two different anomaly scoring functions, we formally define their difference in performance as the relative scoring bias of the anomaly detectors.
¹Existing literature has used different terms to describe this type of model: some use semi-supervised anomaly detection (Chandola et al., 2009) and others use unsupervised anomaly detection (Ruff et al., 2018).
²Some works term these models semi-supervised anomaly detection (Ruff et al., 2020b; Yamanaka et al., 2019; Ruff et al., 2020a; Hendrycks et al., 2019a) while others term them supervised anomaly detection (Chandola et al., 2009).
Our novel notion of scoring bias for anomaly detection aligns with the notion of bias in the classical supervised learning setting, the key difference being the performance metric—we target recall at a given false positive rate, the metric used by real-world anomaly detection tasks (Li et al., 2019; Liu et al., 2018). Along this line, we establish the first finite sample rates for estimating the relative scoring bias for deep anomaly detection. We empirically validate our assumptions and theoretical results on both synthetic and three real-world datasets (Fashion-MNIST, Statlog (Landsat Satellite), and Cellular Spectrum Misuse (Li et al., 2019)). Furthermore, we provide an empirical study of how a biased training anomaly set affects the anomaly score function and therefore the resulting detection performance. We consider the above three real-world datasets and six deep-learning based anomaly detection models. Our study demonstrates scenarios in which the biased anomaly set can be useful or problematic, and provides a solid benchmark for future research. In this paper, we introduce a formal analysis of the effect of a biased training set on deep anomaly detection. Our main contributions are the following:
• We discover the issue of large performance variance in deep anomaly detectors, caused by the use of a biased anomaly set as training data.
• We model the effect of biased training as relative scoring bias, and establish the first finite sample rates for estimating the relative scoring bias of the trained models.
• We conduct empirical experiments to verify and characterize the impact of the relative scoring bias on six popular anomaly detection models and three real-world datasets.
To the best of our knowledge, our work is the first to formally study the effect of a biased anomaly training set on deep anomaly detection. Our results show both significant positive and negative impacts of these biases, and suggest that model trainers must treat anomalies with additional care. We believe this leads to new opportunities for improving deep anomaly detectors and deserves more attention from the research community.

2 RELATED WORK
Anomaly Detection Models. While the literature on anomaly detection models is extensive, the most relevant to our work are deep learning based models. Following the terminology used by Chandola et al. (2009), we consider two types of models:
• Semi-supervised anomaly detection refers to models trained on only normal data, e.g., Ruff et al. (2018); Sakurada & Yairi (2014); Zhou & Paffenroth (2017);
• Supervised anomaly detection refers to models trained on normal data and a small set of labeled anomalies, e.g., Pang et al. (2019); Daniel et al. (2019); Yamanaka et al. (2019); Ruff et al. (2020a;b).
One can also categorize models by their architecture: hypersphere-based (Ruff et al., 2018; 2020a;b) and autoencoder (or reconstruction) based models (Zhou & Paffenroth, 2017; Yamanaka et al., 2019). Another line of recent work proposes to use synthetic or auxiliary anomalies to train anomaly detection models (Golan & El-Yaniv (2018); Hendrycks et al. (2019c); Lee et al. (2018); Hendrycks et al. (2019b)), "forcing" the model to learn a more compact representation of the normal data.
While existing work has shown empirically that the choice of abnormal data in training can help detect some unseen abnormal distributions, it does not offer any theoretical explanation for the phenomenon, nor does it consider the counter-cases in which additional abnormal data in training hurt the detection performance.
Bias in Anomaly Detection. To the best of our knowledge, we are the first to identify the presence of bias caused by an additional labeled anomaly set in deep anomaly detection models, especially when there is a mismatch between the anomalies present in training and those encountered in testing (as shown in Section 5). Existing work has explored the presence of bias in semi-supervised anomaly detection models when there is defective normal data in training, such as outliers and simple-to-reconstruct examples (Tong et al., 2019) or examples with background noise (Liu & Ma, 2019). There is also literature on the bias-variance tradeoff for ensembles of semi-supervised anomaly detection models (Aggarwal & Sathe, 2015; Rayana et al., 2016). But little or no work has been done on the bias of anomaly detection in the supervised setting (i.e., models trained on both normal data and some labeled anomalies). Finally, another line of work in transfer learning has identified the value of additional labeled data in training (Kpotufe & Martinet, 2018; Hanneke & Kpotufe, 2019) and the performance bias on target data caused by transferring knowledge from a less related source (Wang et al., 2019b; Wu et al., 2020). Yet most of this work only considers classification models.
PAC guarantees for Anomaly Detection. Despite significant progress on developing theoretical guarantees for classification tasks (Valiant (1984); Kearns et al. (1994)), little has been done for anomaly detection tasks. Siddiqui et al. (2016) first establishes a PAC framework for anomaly detection models using the notion of pattern space; however, it is hard to apply such pattern spaces to deep learning models with complex latent spaces. Liu et al. (2018) proposes a model-agnostic approach to provide a PAC guarantee for anomaly detection performance, by analyzing the convergence of the cumulative distribution of anomaly scores. We follow the basic setting from this line of work to address the convergence of the relative scoring bias. In contrast to prior work, our proof relies on a novel adaptation of the key theoretical tool from Massart (1990), which allows us to extend our theory to characterize the notion of scoring bias as defined in Section 3.2.

3 PROBLEM FORMULATION
We now formally state the anomaly detection problem. Consider a model class Θ for anomaly detection, and a (labeled) training set D sampled from a mixture distribution D over the normal and anomalous instances. In the context of anomaly detection, a model θ maps each input instance x to a continuous output, which corresponds to an anomaly score s_θ(x). The model further uses a threshold τ_θ on the score function to produce a binary label for input x. For a given threshold value τ_θ, we can define the False Positive Rate (FPR) of the model θ on the input data distribution as FPR(s_θ, τ_θ) = P[s_θ(x) > τ_θ | y = 0], and the True Positive Rate (TPR, a.k.a. Recall) as TPR(s_θ, τ_θ) = P[s_θ(x) > τ_θ | y = 1].
The FPR and TPR are competing objectives—therefore, a key challenge for anomaly detection algorithms is to identify a configuration of the (score, threshold) pair (s_θ, τ_θ) that strikes a balance between the two performance metrics. W.l.o.g.³, in this paper we focus on the following scenario, where the objective is to maximize TPR subject to achieving a target FPR. Formally, let q be the target FPR; we define the optimal anomaly detector as⁴
$$(s^*_\theta, \tau^*_\theta) \in \arg\max_{(s_\theta, \tau_\theta):\, \theta \in \Theta} \mathrm{TPR}(s_\theta, \tau_\theta) \quad \text{s.t.} \quad \mathrm{FPR}(s_\theta, \tau_\theta) \le q \tag{3.1}$$

3.1 A GENERAL ANOMALY DETECTION FRAMEWORK
Note that the performance metric (namely TPR) in Problem 3.1 is a statistic that depends on the entire predictive distribution, and cannot be easily evaluated on any single data point. Therefore, rather than directly solving Problem 3.1, practical anomaly detection algorithms (such as OCSVM (Schölkopf et al., 1999), Deep SAD (Ruff et al., 2020b), etc.) often rely on a two-stage process: (1) learning the score function s_θ from training data via a surrogate loss, and (2) given s_θ from the previous step, computing the threshold function τ_θ on the training data. Formally, given a model class Θ, a training set D, a loss function ℓ, and a target FPR q, a two-stage anomaly detection algorithm outputs
$$\hat{s}_\theta \in \arg\min_{s_\theta:\, \theta \in \Theta} \ell(s_\theta, D), \qquad \hat{\tau}_\theta \in \arg\max_{\tau_\theta:\, \theta \in \Theta} \mathrm{TPR}(\hat{s}_\theta, \tau_\theta) \ \text{s.t.}\ \mathrm{FPR}(\hat{s}_\theta, \tau_\theta) \le q \tag{3.2}$$
Note that the first part of Equation 3.2 amounts to solving a supervised learning problem. Here, the loss function ℓ could be instantiated as latent-space-based losses (e.g., Deep SAD), margin-based losses (e.g., OCSVM), or reconstruction-based losses (e.g., ABC (Yamanaka et al., 2019)); therefore, many contemporary anomaly detection models fall into this framework. To set the threshold τ̂_θ, we consider the distribution of the anomaly scores ŝ_θ(·) on a labeled validation set D^val ∼ D. Let D^val := D^val_0 ∪ D^val_a, where D^val_0 and D^val_a denote the subset of normal data and the subset of abnormal data of D^val. Denote the empirical CDFs of the anomaly scores assigned to x in D^val_0 and D^val_a as F̂_0 and F̂_a, respectively. Then, given a target FPR value q, following a similar argument as Liu et al. (2018), one can compute the threshold as τ̂_θ = max{u ∈ R : F̂_0(u) ≤ q}. The steps for solving the second part of Equation 3.2 are summarized in Algorithm 1.
Algorithm 1: Computing the anomaly detection threshold for Problem 3.2
Data: A validation dataset D^val and a scoring function s(·).
Result: A score threshold achieving a target FPR and the corresponding recall on D^val.
1. Get the anomaly score s(x) for each x in D^val.
2. Compute the empirical CDFs F̂_0(x) and F̂_a(x) of the anomaly scores of x in D^val_0 and D^val_a.
3. Output the detection threshold τ̂ = max{u ∈ R : F̂_0(u) ≤ q}.
4. Output the TPR (recall) on D^val_a as r̂ = 1 − F̂_a(τ̂).
³Our results can be easily extended to the setting where the goal is to minimize FPR subject to a given TPR.
⁴This formulation aligns with many contemporary works in deep anomaly detection. For example, Li et al. (2019) show that in real-world anomaly detection problems, it is desirable to detect anomalies with a prefixed low false alarm rate; Liu et al. (2018) formulate anomaly detection in a similar way, where the goal is to minimize FPR for a fixed TPR.
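A minimal sketch of Algorithm 1, assuming higher scores indicate anomalies and using the FPR definition of Section 3 (the fraction of normal validation scores above the threshold); the function and variable names are ours, and the quantile convention follows that FPR definition rather than Algorithm 1's CDF notation.

```python
import numpy as np

def threshold_and_recall(scores_normal, scores_abnormal, q):
    # Smallest threshold whose empirical FPR on the normal validation scores
    # is at most q, i.e. the empirical (1 - q)-quantile of the normal scores.
    tau = np.quantile(scores_normal, 1.0 - q)
    # Empirical TPR (recall) on the abnormal validation scores.
    recall = float(np.mean(np.asarray(scores_abnormal) > tau))
    return tau, recall
```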
This paper studies the potential bias in deep anomaly detection when additional labeled anomalies are used for training. The bias is evaluated in terms of the TPR at a fixed FPR. The paper uses the anomaly scores output by detectors trained only on normal data as a benchmark to examine the relative scoring bias of deep anomaly detectors trained with labeled anomalies. It further studies the finite sample rate for estimating this type of scoring bias. The bias is verified using synthetic and real-world datasets, and the empirical results also show the potential impact of this bias on several anomaly detectors.
SP:a24603a5dbc07070aeba98e1206511799111bec6
Calibration tests beyond classification
1 INTRODUCTION
We consider the general problem of modelling the relationship between a feature X and a target Y in a probabilistic setting, i.e., we focus on models that approximate the conditional probability distribution P(Y|X) of target Y for given feature X. The use of probabilistic models that output a probability distribution instead of a point estimate demands guarantees on the predictions beyond accuracy, enabling meaningful and interpretable predicted uncertainties. One such statistical guarantee is calibration, which has been studied extensively in the meteorological and statistical literature (DeGroot & Fienberg, 1983; Murphy & Winkler, 1977). A calibrated model ensures that almost every prediction matches the conditional distribution of targets given this prediction. Loosely speaking, in a classification setting a predicted distribution of the model is called calibrated (or reliable) if the empirically observed frequencies of the different classes match the predictions in the long run, were the same class probabilities to be predicted repeatedly. A classical example is a weather forecaster who predicts each day whether it is going to rain on the next day. If she predicts rain with probability 60% for a long series of days, her forecasting model is calibrated for predictions of 60% if it actually rains on 60% of these days. If this property holds for almost every probability distribution that the model outputs, then the model is considered to be calibrated. Calibration is an appealing property of a probabilistic model since it provides safety guarantees on the predicted distributions even in the common case when the model does not predict the true distributions P(Y|X).¹ Calibration, however, does not guarantee accuracy (or refinement)—a model that always predicts the marginal probabilities of each class is calibrated but probably inaccurate and of limited use. On the other hand, accuracy does not imply calibration either, since the predictions of an accurate model can be too over-confident and hence miscalibrated, as observed, e.g., for deep neural networks (Guo et al., 2017). In the field of machine learning, calibration has been studied mainly for classification problems (Bröcker, 2009; Guo et al., 2017; Kull et al., 2017; 2019; Kumar et al., 2018; Platt, 2000; Vaicenavicius et al., 2019; Widmann et al., 2019; Zadrozny, 2002) and for quantiles and confidence intervals of models for regression problems with real-valued targets (Fasiolo et al., 2020; Ho & Lee, 2005; Kuleshov et al., 2018; Rueda et al., 2006; Taillardat et al., 2016). In our work, however, we do not restrict ourselves to these problem settings but instead consider calibration for arbitrary predictive models. Thus, we generalize the common notion of calibration as:
Definition 1. Consider a model $P_X := P(Y|X)$ of a conditional probability distribution $P(Y|X)$. Then model P is said to be calibrated if and only if
$$P(Y \mid P_X) = P_X \quad \text{almost surely.} \tag{1}$$
If P is a classification model, Definition 1 coincides with the notion of (multi-class) calibration by Bröcker (2009); Kull et al. (2019); Vaicenavicius et al. (2019).
¹The source code of the experiments is available at https://github.com/devmotion/Calibration_ICLR2021.
which only requires
$$\mathbb{P}(Y = \operatorname{argmax} P_X \mid \max P_X) = \max P_X \quad \text{almost surely.} \quad (2)$$
This notion of calibration corresponds to calibration according to Definition 1 for a reduced problem with binary targets $\tilde{Y} := \mathbf{1}(Y = \operatorname{argmax} P_X)$ and Bernoulli distributions $\tilde{P}_X := \mathrm{Ber}(\max P_X)$ as probabilistic models. For real-valued targets, Definition 1 coincides with the so-called distribution-level calibration by Song et al. (2019). Distribution-level calibration implies that the predicted quantiles are calibrated, i.e., the outcomes for all real-valued predictions of the, e.g., 75% quantile are actually below the predicted quantile with 75% probability (Song et al., 2019, Theorem 1). Conversely, although quantile-based calibration is a common approach for real-valued regression problems (Fasiolo et al., 2020; Ho & Lee, 2005; Kuleshov et al., 2018; Rueda et al., 2006; Taillardat et al., 2016), it provides weaker guarantees on the predictions. For instance, the linear regression model in Fig. 1 empirically shows quantiles that appear close to being calibrated, albeit being uncalibrated according to Definition 1. Figure 1 also raises the question of how to assess calibration for general target spaces in the sense of Definition 1, without having to rely on visual inspection. In classification, measures of calibration such as the commonly used expected calibration error (ECE) (Guo et al., 2017; Kull et al., 2019; Naeini et al., 2015; Vaicenavicius et al., 2019) and the maximum calibration error (MCE) (Naeini et al., 2015) try to capture the average and maximal discrepancy between the distributions on the left-hand side and the right-hand side of Eq. (1) or Eq. (2), respectively. These measures can be generalized to other target spaces (see Definition B.1), but unfortunately estimating these calibration errors from observations of features and corresponding targets is problematic. Typically, the predictions are different for (almost) all observations, and hence estimation of the conditional probability $\mathbb{P}(Y|P_X)$, which is needed in the estimation of ECE and MCE, is challenging even for low-dimensional target spaces and usually leads to biased and inconsistent estimators (Vaicenavicius et al., 2019). Kernel-based calibration errors such as the maximum mean calibration error (MMCE) (Kumar et al., 2018) and the kernel calibration error (KCE) (Widmann et al., 2019), for confidence and multi-class calibration respectively, can be estimated without first estimating the conditional probability and hence avoid this issue. They are defined as the expected value of a weighted sum of the differences of the left- and right-hand sides of Eq. (1) for each class, where the weights are given as a function of the predictions (of all classes) and chosen such that the calibration error is maximized. A reformulation with matrix-valued kernels (Widmann et al., 2019) yields unbiased and differentiable estimators without explicit dependence on $\mathbb{P}(Y|P_X)$, which simplifies the estimation and allows one to explicitly account for calibration in the training objective (Kumar et al., 2018). Additionally, the kernel-based framework allows the derivation of reliable statistical hypothesis tests for calibration in multi-class classification (Widmann et al., 2019). However, both the construction as a weighted difference of the class-wise distributions in Eq. (1)
and the reformulation with matrix-valued kernels require finite target spaces and hence cannot be applied to regression problems. To be able to deal with general target spaces, we present a new and more general framework of calibration errors without these limitations. Our framework can be used to reason about and test for calibration of any probabilistic predictive model. As explained above, this is in stark contrast with existing methods that are restricted to simple output distributions, such as classification and scalar-valued regression problems. A key contribution of this paper is a new framework that is applicable to multivariate regression, as well as to situations where the output is of a different (e.g., discrete ordinal) or more complex (e.g., graph-structured) type, with clear practical implications. Within this framework a KCE for general target spaces is obtained. We want to highlight that for multi-class classification problems its formulation is more intuitive and simpler to use than the measure proposed by Widmann et al. (2019) based on matrix-valued kernels. To ease the application of the KCE we derive several estimators of the KCE with subquadratic sample complexity and their asymptotic properties in tests for calibrated models, which improve on existing estimators and tests in the two-sample test literature by exploiting the special structure of the calibration framework. Using the proposed framework, we numerically evaluate the calibration of neural network models and ensembles of such models.

2 CALIBRATION ERROR: A GENERAL FRAMEWORK. In classification, the distributions on the left- and right-hand sides of Eq. (1) can be interpreted as vectors in the probability simplex. Hence ultimately the distance measure for ECE and MCE (see Definition B.1) can be chosen as a distance measure of real-valued vectors. The total variation, Euclidean, and squared Euclidean distances are common choices (Guo et al., 2017; Kull et al., 2019; Vaicenavicius et al., 2019). However, in a general setting, measuring the discrepancy between $\mathbb{P}(Y|P_X)$ and $P_X$ cannot necessarily be reduced to measuring distances between vectors. The conditional distribution $\mathbb{P}(Y|P_X)$ can be arbitrarily complex, even if the predicted distributions are restricted to a simple class of distributions that can be represented as real-valued vectors. Hence in general we have to resort to dedicated distance measures of probability distributions. Additionally, the estimation of conditional distributions $\mathbb{P}(Y|P_X)$ is challenging, even more so than in the restricted case of classification, since in general these distributions can be arbitrarily complex. To circumvent this problem, we propose the following construction: we define a random variable $Z_X \sim P_X$ obtained from the predictive model and study the discrepancy between the joint distributions of the two pairs of random variables $(P_X, Y)$ and $(P_X, Z_X)$, respectively, instead of the discrepancy between the conditional distributions $\mathbb{P}(Y|P_X)$ and $P_X$. Since $(P_X, Y) \stackrel{d}{=} (P_X, Z_X)$ if and only if $\mathbb{P}(Y|P_X) = P_X$ almost surely, model $P$ is calibrated if and only if the distributions of $(P_X, Y)$ and $(P_X, Z_X)$ are equal. The random variable pairs $(P_X, Y)$ and $(P_X, Z_X)$ take values in the product space $\mathcal{P} \times \mathcal{Y}$, where $\mathcal{P}$ is the space of predicted distributions $P_X$ and $\mathcal{Y}$ is the space of targets $Y$.
For instance, in classification, $\mathcal{P}$ could be the probability simplex and $\mathcal{Y}$ the set of all class labels, whereas in the case of Gaussian predictive models for scalar targets, $\mathcal{P}$ could be the space of normal distributions and $\mathcal{Y}$ could be $\mathbb{R}$. The study of the joint distributions of $(P_X, Y)$ and $(P_X, Z_X)$ motivates the definition of a generally applicable calibration error as an integral probability metric (Müller, 1997; Sriperumbudur et al., 2009; 2012) between these distributions. In contrast to common $f$-divergences such as the Kullback-Leibler divergence, integral probability metrics do not require that one distribution is absolutely continuous with respect to the other, which cannot be guaranteed in general.

Definition 2. Let $\mathcal{Y}$ denote the space of targets $Y$, and $\mathcal{P}$ the space of predicted distributions $P_X$. We define the calibration error with respect to a space of functions $\mathcal{F}$ of the form $f : \mathcal{P} \times \mathcal{Y} \to \mathbb{R}$ as
$$\mathrm{CE}_{\mathcal{F}} := \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{P_X, Y} f(P_X, Y) - \mathbb{E}_{P_X, Z_X} f(P_X, Z_X) \right|. \quad (3)$$

By construction, if model $P$ is calibrated, then $\mathrm{CE}_{\mathcal{F}} = 0$ regardless of the choice of $\mathcal{F}$. However, the converse statement is not true for arbitrary function spaces $\mathcal{F}$. From the theory of integral probability metrics (see, e.g., Müller, 1997; Sriperumbudur et al., 2009; 2012), we know that for certain choices of $\mathcal{F}$ the calibration error in Eq. (3) is a well-known metric on the product space $\mathcal{P} \times \mathcal{Y}$, which implies that $\mathrm{CE}_{\mathcal{F}} = 0$ if and only if model $P$ is calibrated. Prominent examples include the maximum mean discrepancy² (MMD) (Gretton et al., 2007), the total variation distance, the Kantorovich distance, and the Dudley metric (Dudley, 1989, p. 310). As pointed out above, Definition 2 is a generalization of the definition for multi-class classification proposed by Widmann et al. (2019), which is based on vector-valued functions and only applicable to finite target spaces, to any probabilistic predictive model. In Appendix E we show this explicitly and discuss the special case of classification problems in more detail. Previous results (Widmann et al., 2019) imply that in classification MMCE and, for common distance measures $d(\cdot,\cdot)$ such as the total variation and squared Euclidean distance, $\mathrm{ECE}_d$ and $\mathrm{MCE}_d$ are special cases of $\mathrm{CE}_{\mathcal{F}}$. In Appendix G we show that our framework also covers natural extensions of $\mathrm{ECE}_d$ and $\mathrm{MCE}_d$ to countably infinite discrete target spaces, which to our knowledge have not been studied before and occur, e.g., in Poisson regression. The literature on integral probability metrics suggests that we can resort to estimating $\mathrm{CE}_{\mathcal{F}}$ from i.i.d. samples from the distributions of $(P_X, Y)$ and $(P_X, Z_X)$. For the MMD, the Kantorovich distance, and the Dudley metric, tractable strongly consistent empirical estimators exist (Sriperumbudur et al., 2012). Here the empirical estimator for the MMD is particularly appealing since, compared with the other estimators, "it is computationally cheaper, the empirical estimate converges at a faster rate to the population value, and the rate of convergence is independent of the dimension $d$ of the space (for $S = \mathbb{R}^d$)" (Sriperumbudur et al., 2012). Our specific design of $(P_X, Z_X)$ can be exploited to improve on these estimators. If $\mathbb{E}_{Z_x \sim P_x} f(P_x, Z_x)$ can be evaluated analytically for a fixed prediction $P_x$, then $\mathrm{CE}_{\mathcal{F}}$ can be estimated empirically with reduced variance by marginalizing out $Z_X$.
Otherwise $\mathbb{E}_{Z_x \sim P_x} f(P_x, Z_x)$ has to be estimated, but in contrast to the common estimators of the integral probability metrics discussed above, the artificial construction of $Z_X$ allows us to approximate it by numerical integration methods such as (quasi-)Monte Carlo integration or quadrature rules with arbitrarily small error and variance. Monte Carlo integration preserves statistical properties of the estimators such as unbiasedness and consistency.

²As we discuss in Section 3, the MMD is a metric if and only if the employed kernel is characteristic.
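To make the construction concrete, the following is a minimal sketch (not the authors' released implementation) of an MMD-style estimate of $\mathrm{CE}_{\mathcal{F}}$ for scalar Gaussian predictive models, using a tensor-product kernel on $\mathcal{P} \times \mathcal{Y}$ and Monte Carlo draws of $Z_X$; the kernel choices, bandwidths, and the function name `mmd2_calibration` are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of estimating the
# calibration error CE_F of Eq. (3) when F is the unit ball of an RKHS,
# i.e., as an MMD between samples of (P_X, Y) and (P_X, Z_X).
# Assumptions: scalar Gaussian predictive models P_x = N(mu(x), sigma(x)^2),
# represented as (mu, sigma) pairs; Gaussian RBF kernels on both the
# prediction space and the target space; kernel bandwidths are arbitrary.
import numpy as np

def rbf(a, b, gamma):
    """Gaussian RBF kernel matrix between rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_calibration(preds, y, n_z=5, gamma_p=1.0, gamma_y=1.0, seed=0):
    """Biased (V-statistic) MMD^2 estimate between (P_X, Y) and (P_X, Z_X).

    preds: (n, 2) array of (mu, sigma) per observation.
    y:     (n,)  array of observed targets.
    Z_X is marginalized approximately by averaging over n_z Monte Carlo
    draws Z ~ N(mu, sigma^2) for each prediction.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    kp = rbf(preds, preds, gamma_p)              # kernel on predictions
    kyy = rbf(y[:, None], y[:, None], gamma_y)   # kernel on observed targets
    # Monte Carlo draws from each predicted distribution (shape n_z x n)
    z = preds[:, 0] + preds[:, 1] * rng.standard_normal((n_z, n))
    kyz = np.mean([rbf(y[:, None], zi[:, None], gamma_y) for zi in z], 0)
    kzz = np.mean([rbf(zi[:, None], zj[:, None], gamma_y)
                   for zi in z for zj in z], 0)
    # Tensor-product kernel on P x Y: k((p,y),(p',y')) = kp * ky
    return (kp * kyy).mean() - 2 * (kp * kyz).mean() + (kp * kzz).mean()

# Example: a perfectly calibrated "model" should give MMD^2 close to 0.
rng = np.random.default_rng(1)
mu = rng.normal(size=500)
sigma = np.full(500, 0.5)
y = mu + sigma * rng.standard_normal(500)        # Y | X ~ P_X exactly
print(mmd2_calibration(np.stack([mu, sigma], 1), y))
```

For a calibrated model the statistic concentrates near zero (up to the small positive bias of the V-statistic), mirroring $\mathrm{CE}_{\mathcal{F}} = 0$.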
The authors present an approach for testing calibration in conditional probability estimation models. They build on a line of work in the kernel estimation literature assessing whether the conditional distributions are well calibrated (i.e. P(Y | f(X)) = f(X), where f is some predictive model). They develop an MMD kernel estimator and expand on practical choices of kernels that are computationally tractable. They then derive an asymptotic null distribution for calibrated models, enabling control over the error rate when labeling a model uncalibrated. A few simulation studies are done with neural networks to show the applicability of the method.
Semantic Hashing with Locality Sensitive Embeddings
1 INTRODUCTION. One of the most challenging aspects in many Information Retrieval (IR) systems is the discovery and identification of the nearest neighbors of a query element in a vector space. This is typically solved using Approximate Nearest Neighbors (ANN) methods, as exact solutions typically do not scale well with the dimension of the vector space. ANN methods typically fall into one of three categories: space-partitioning trees, such as the kd-tree (Bentley (1975); Friedman et al. (1977); Arya et al. (1998)), neighborhood graph search (Chen et al. (2018); Iwasaki & Miyazaki (2018)), or Locality Sensitive Hashing (LSH) methods (Charikar (2002); Gionis et al. (1999); Lv et al. (2007)). Despite their theoretical, intuitive, and computational appeal, LSH methods are not as prevalent in modern IR systems as are space-partitioning trees or neighborhood graph methods (Bernhardsson (2013); Chen et al. (2018); Johnson et al. (2017); Iwasaki & Miyazaki (2018)). Empirical studies demonstrate that LSH techniques frequently do not attain the same level of quality as space-partitioning trees (Muja & Lowe (2009)). Nonetheless, space-partitioning and neighborhood graph search methods are expensive, both in data structure construction and in query time, and remain a bottleneck in many modern IR pipelines. As many modern retrieval tasks revolve around solving ANN for vector representations learned from raw, structured data, one might attempt to learn representations which are more suited towards efficient retrieval. Metric learning methods (Xing et al. (2003); Weinberger et al. (2006); Chechik et al. (2010); Hoffer & Ailon (2015); Kulis et al. (2013)) have been proposed for learning linear and non-linear transformations of given representations for improved clustering and retrieval quality. A class of related methods, semantic hashing or hash learning methods (Salakhutdinov & Hinton (2009)), has also been explored for learning transformations into binary vector spaces. These learned binary representations may then be used in hashing-based retrieval methods, typically by retrieving all neighboring elements in the Hamming ball with radius 1 or 2. Exact hashing retrieval algorithms, that is, Hamming ball "search" with radius 0, have a particular computational appeal in that search data structures are not needed, nor is enumeration of all codes within a Hamming ball. In addition, binary representations that are suitable for exact hashing retrieval can also be used to identify groups of related items that can be interpreted as clusters in the traditional sense. As the number of clusters discovered by the algorithm isn't explicitly controlled (only bounded by $2^d$), algorithms generating binary embeddings suitable for exact hashing retrieval can be viewed as nonparametric clustering methods. To this end, we propose a method for learning continuous representations in which the optimized similarity is the angular similarity. The angular similarity corresponds to the collision probability of SimHash, a hyperplane-based LSH function (Charikar (2002)). Angular distance gives a sharp topology on the embedding space which encourages similar objects to have nearly identical embeddings suitable for exact hashing retrieval. Related work on similarity learning, LSH, and hash learning can be found in Section 2. The proposed models are found in Section 3.
The experimental results, and other technical details, can be found in Section 4. Finally, we conclude in Section 5.

2 PRELIMINARIES.

2.1 SIMILARITY MODELLING. Similarity learning methods are a class of techniques for learning a similarity function between objects. One successful approach to similarity learning is the use of "twin network" or "two tower architecture" models, in which two neural network architectures are joined to produce a similarity prediction (Bromley et al. (1994); Chopra et al. (2005); Huang et al. (2013)). The weights of these networks may be shared or not, depending on whether the two input domains are equivalent. Let $i \in U$ and $j \in V$ be the identities of two objects, where $U$ and $V$ are the two domains across which a similarity function is to be learned. Let $\phi_u(i)$ and $\phi_v(j)$ be the input representations for the objects (these functions $\phi$ may be identity functions if the input domains are discrete). These representations are then transformed through parameterized vector-valued functions $f_u(\cdot|\theta_u)$ and $f_v(\cdot|\theta_v)$, whose outputs are typically the learned representations $u_i = f_u(\phi_u(i)|\theta_u)$ and $v_j = f_v(\phi_v(j)|\theta_v)$. A loss is then defined using pairwise labels $y_{ij}$ and an interaction function $s(u_i, v_j)$ which denotes the similarity or relevancy of the pair. Taking $f_u$ to be a mapping from each index $i$ to an independent parameter vector $u_i$ (similarly for $f_v$ and $v_j$), and taking $s(u_i, v_j) = u_i^T v_j$ with an appropriate loss, results in a variety of matrix factorization approaches (Koren et al. (2009); Lee & Seung (2001); Mnih & Salakhutdinov (2008); Blei et al. (2003); Rendle et al. (2012); Pennington et al. (2014)). Taking $f_u$ to be a neural network mapping a context $\phi_u(i)$ to a representation $u_i$ allows for similarity models that readily make use of complex contextual information. Common choices for the similarity function include transformations of Euclidean distance (Chopra et al. (2005)) and cosine similarity, $s(u_i, v_j) = \frac{u_i^T v_j}{\|u_i\| \|v_j\|}$ (Huang et al. (2013)). In addition, the loss can be defined for pairs (Chopra et al. (2005)), triplets (one positive pair, one negative pair) (Rendle et al. (2012); Chechik et al. (2010)), or larger sets (Huang et al. (2013)).

2.2 LOCALITY SENSITIVE HASHING AND ANGULAR SIMILARITY. A Locality Sensitive Hash (LSH) family $F$ is a distribution of hashes $h$ on a collection of objects $Q$ such that for $q_i, q_j \in Q$ (Indyk & Motwani (1998); Gionis et al. (1999); Charikar (2002)),
$$\Pr[h(q_i) = h(q_j)] = s(q_i, q_j) \quad (1)$$
for some similarity function $s$ on the objects. SimHash (Charikar (2002)) is an LSH technique developed for document deduplication but may be used in other contexts. For a vector representation $q \in \mathbb{R}^d$, SimHash draws a random matrix $Z \in \mathbb{R}^{d \times M}$ with standard Normal entries. The hash $h(q_i) \in \{0, 1\}^M$ is then constructed as
$$h(q_i)_m = \mathbf{1}[q_i^T Z_{:m} > 0]. \quad (2)$$
Intuitively, SimHash draws random hyperplanes intersecting the origin to separate points. A useful property of this hash function, as stated in Charikar (2002), is that
$$\psi(q_i, q_j) := \Pr[h(q_i)_m = h(q_j)_m] = 1 - \frac{1}{\pi} \cos^{-1}\left(\frac{q_i^T q_j}{\|q_i\| \|q_j\|}\right),$$
where the above probability is measured with respect to $Z$.
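As a quick illustration of this property (a sketch under arbitrary dimension and bit-count choices, not code from the paper), one can verify numerically that the per-bit collision rate of SimHash matches the angular similarity $\psi$:

```python
# A minimal numpy sketch of SimHash as defined in Eq. (2), checking
# empirically that the per-bit collision rate matches the angular
# similarity psi. Vector values and the number of bits M are arbitrary
# illustration choices, not taken from the paper.
import numpy as np

def simhash(q, Z):
    """M-bit SimHash of vector q under hyperplane matrix Z (d x M)."""
    return (q @ Z > 0).astype(np.uint8)

def angular_similarity(qi, qj):
    cos = qi @ qj / (np.linalg.norm(qi) * np.linalg.norm(qj))
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

rng = np.random.default_rng(0)
d, M = 32, 10_000                 # many bits so the empirical rate is stable
qi, qj = rng.normal(size=d), rng.normal(size=d)
Z = rng.standard_normal((d, M))   # random hyperplanes through the origin

collisions = (simhash(qi, Z) == simhash(qj, Z)).mean()
print(f"empirical collision rate: {collisions:.3f}")
print(f"angular similarity psi:   {angular_similarity(qi, qj):.3f}")
```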
$\psi(q_i, q_j)$, the collision probability for two vectors, is also known as the angular similarity, and $\xi = 1 - \psi$ is the angular distance, which is a proper metric (unlike the cosine distance $1 - \frac{q_i^T q_j}{\|q_i\| \|q_j\|}$). As the columns of $Z$ are independent, the collision probability for a $K$-bit hash is $\psi^K$.

2.3 LEARNING TO HASH. A related approach to similarity learning is hash learning, introduced in Salakhutdinov & Hinton (2009). These methods train binary embeddings directly and then use hash collisions or Hamming ball search to retrieve approximate nearest neighbors. Binary representations lead to some technical challenges; Salakhutdinov & Hinton (2009) use contrastive divergence for training, whereas Hubara et al. (2016) implement binary threshold activation functions with stochastic neurons. Another approach (and the one followed in this work) is to avoid explicit binary representations during training, introducing a quantization loss that penalizes embeddings that are not close to binary, and subsequently thresholding these near-binary embeddings to binary ones. This type of quantization loss is distinct from those used in vector quantization methods (Ahalt et al. (1990); Kohonen (1990); Sato & Yamada (1996)), in which the data representations are fixed and the codes are learned; here the codes are fixed and the representations are learned. The quantization loss introduced in Deep Hashing Networks (DHN) (Zhu et al. (2016)) is of the form
$$b(u_i|\theta) = \sum_d \log\cosh(|u_{id}| - 1) \approx \| |u_i| - 1 \|_1. \quad (3)$$
Other quantization losses based on distances to binary codes have been used in Li et al. (2016); Liu et al. (2016). Cao et al. (2017) utilize a quantization loss whose strength increases over time. Finally, Deep Cauchy Hashing (DCH) (Cao et al. (2018)) has shown improvements by utilizing a heavy-tailed similarity function with a similarly inspired quantization loss.

3 LOCALITY SENSITIVE EMBEDDINGS. Many similarity learning methods utilize dot products or cosine similarity to relate the embeddings of a pair to each other. For example, GloVe (Pennington et al. (2014)) minimizes the weighted error between the dot product of the embeddings and a log-co-occurrence matrix, and the DSSM model (Huang et al. (2013)) utilizes cosine similarity as the "crossing" layer between the two halves of a twin network. In general, embeddings trained in this way are not suitable for SimHash retrieval, as can be seen in Figure 1. If models are trained so as to minimize the error of a prediction made by cosine similarity, extremely low tolerances are required in order to achieve embeddings with significant collision probability. Similar observations on the misspecification of cosine distance for semantic hashing were made in Cao et al. (2018). In this section, we define models in which collision probabilities of learned representations are directly optimized.

3.1 LOSS DEFINITION. In the following, we define a population loss through a data distribution $D$ of relevant and irrelevant pairs. Each sample from $D$ is a tuple $(y, i, j) \in \{0, 1\} \times U \times V$, where $U$ and $V$ are the sets across which a similarity is to be learned (for example, "users" and "items" in a recommender system). $y$ is the relevancy of the pair $(i, j)$. The population losses we consider are expectations over $D$ of a per-tuple loss $l$ with regularization terms $r$ per item:
$$L(\theta) = \mathbb{E}_{y,i,j \sim D}\, l(y, i, j|\theta) + \lambda r(i|\theta) + \lambda r(j|\theta). \quad (4)$$
In practice, we minimize the empirical loss $\hat{L}$ constructed from a finite sample from $D$, and we use $r(i|\theta) = b(u_i|\theta)$ as defined in Eq. (3). $\theta$ represents all parameters of the model, including any learned representations for the elements of the sets $U$ and $V$. An embedding $u_i$ for element $i$ may either be a vector of free parameters, as would be the case in a fixed-vocabulary embedding model, or may be the output of a model on a raw input, $u_i = f_u(\phi_u(i))$, as would be the case in a twin network model. In addition, each half of the pair $(u_i, v_j)$ may represent a different input space, as in the DSSM model.
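A hedged sketch of what such a training loss could look like is given below: a per-pair binary cross-entropy on the $K$-bit collision probability $\psi^K$ together with the DHN quantization regularizer of Eq. (3). The specific per-tuple loss, the values of $K$ and $\lambda$, and the helper names are illustrative assumptions rather than the paper's exact choices.

```python
# A hedged PyTorch sketch of the population loss in Eq. (4): a per-tuple
# binary cross-entropy on the SimHash collision probability psi(u_i, v_j)^K
# (one plausible reading of "directly optimizing collision probabilities"),
# plus the DHN quantization regularizer b(u) of Eq. (3).
import math
import torch
import torch.nn.functional as F

def angular_similarity(u, v, eps=1e-7):
    cos = F.cosine_similarity(u, v, dim=-1).clamp(-1 + eps, 1 - eps)
    return 1.0 - torch.acos(cos) / math.pi

def quantization_loss(u):
    # b(u) = sum_d log cosh(|u_d| - 1), approximately || |u| - 1 ||_1
    x = u.abs() - 1.0
    return torch.log(torch.cosh(x)).sum(-1)

def pair_loss(u, v, y, K=16, lam=0.1):
    # Collision probability of a K-bit hash is psi^K; fit it to labels y.
    psi_k = angular_similarity(u, v).pow(K).clamp(1e-6, 1 - 1e-6)
    bce = F.binary_cross_entropy(psi_k, y.float())
    return bce + lam * (quantization_loss(u) + quantization_loss(v)).mean()

# Usage with free-parameter embeddings u_i, v_j for a batch of pairs:
u = torch.randn(32, 64, requires_grad=True)
v = torch.randn(32, 64, requires_grad=True)
y = torch.randint(0, 2, (32,))
loss = pair_loss(u, v, y)
loss.backward()
```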
The authors consider the problem of learning a hash function such that semantically similar elements have high collision probability. They modify the Deep Hashing Networks approach (Zhu et al., 2016) with a new loss function. Rather than use a sigmoid-based loss function, the authors argue that a loss function based on angular similarity and SimHash would be better. Specifically, they use the probability of SimHash collisions as a loss function. They then experimentally verify their method on synthetic data from a Stochastic Block Model distribution, image data (CIFAR-10 and ImageNet), and text data (OSCAR). They show improvements over related methods.
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
1 INTRODUCTION. When models are tested on distributions that are different from the training distribution, they typically suffer large drops in performance (Blitzer and Pereira, 2007; Szegedy et al., 2014; Jia and Liang, 2017; AlBadawy et al., 2018; Hendrycks et al., 2019a). For example, in remote sensing, central tasks include predicting poverty, crop type, and land cover from satellite imagery for downstream humanitarian, policy, and environmental applications (Xie et al., 2016; Jean et al., 2016; Wang et al., 2020; Rußwurm et al., 2020). In some developing African countries, labels are scarce due to the lack of economic resources to deploy human workers to conduct expensive surveys (Jean et al., 2016). To make accurate predictions in these countries, we must extrapolate to out-of-distribution (OOD) examples across different geographic terrains and political borders. We consider a semi-supervised setting with few in-distribution labeled examples and many unlabeled examples from both in- and out-of-distribution (e.g., global satellite imagery). While labels are scarce, auxiliary information is often cheaply available for every input and may provide some signal for the missing labels. Auxiliary information can come from additional data sources (e.g., climate data from other satellites) or be derived from the original input (e.g., background or non-visible spectrum image channels). This auxiliary information is often discarded or not leveraged, and how best to use it is unclear. One way is to use it directly as input features (aux-inputs); another is to treat it as prediction outputs for an auxiliary task (aux-outputs) in pre-training. Which approach leads to better in-distribution or OOD performance? Aux-inputs provide more features to potentially improve in-distribution performance, and one may hope that this also improves OOD performance. Indeed, previous results on standard datasets show that improvements in in-distribution accuracy correlate with improvements in OOD accuracy (Recht et al., 2019; Taori et al., 2020; Xie et al., 2020; Santurkar et al., 2020). However, in this paper we find that aux-inputs can introduce more spurious correlations with the labels: as a result, while aux-inputs often improve in-distribution accuracy, they can worsen OOD accuracy. We give examples of this trend on CelebA (Liu et al., 2015) and real-world satellite datasets in Sections 5.2 and 5.3. Conversely, aux-output methods such as pre-training may improve OOD performance through auxiliary supervision (Caruana, 1997; Weiss et al., 2016; Hendrycks et al., 2019a). Hendrycks et al. (2019a) show that pre-training on ImageNet can improve adversarial robustness, and Hendrycks et al. (2019b) show that auxiliary self-supervision tasks can improve robustness to synthetic corruptions. In this paper, we find that while aux-outputs improve OOD accuracy, the in-distribution accuracy is worse than with aux-inputs. Thus, we elucidate a tradeoff between in- and out-of-distribution accuracy that occurs when using auxiliary information as inputs or outputs.

*Equal contribution.

[Figure 2: Graphical model for our theoretical setting: prediction task with input x, target y, and auxiliary information z, which is related to y through the latent variable w and latent noise u.]
To theoretically study how best to use auxiliary information, we extend the multi-task linear regression setting (Du et al., 2020; Tripuraneni et al., 2020) to allow for distribution shifts. We show that auxiliary information helps in-distribution error by providing useful features for predicting the target, but the relationship between the aux-inputs and the target can shift significantly OOD, worsening the OOD error. In contrast, the aux-outputs model first pre-trains on unlabeled data to learn a lower-dimensional representation and then solves the target task in the lower-dimensional space. We prove that the aux-outputs model improves robustness to arbitrary covariate shift compared to not using auxiliary information. Can we do better than using auxiliary information as inputs or outputs alone? We answer affirmatively by proposing the In-N-Out algorithm to combine the benefits of auxiliary inputs and outputs (Figure 1). In-N-Out first uses an aux-inputs model, which has good in-distribution accuracy, to pseudolabel in-distribution unlabeled data. It then pre-trains a model using aux-outputs and finally fine-tunes this model on the larger training set consisting of labeled and pseudolabeled data. We prove that In-N-Out, which combines self-training and pre-training, further improves both in-distribution and OOD error over the aux-outputs model. We show empirical results on CelebA and two remote sensing tasks (land cover and cropland prediction) that parallel the theory. On all datasets, In-N-Out improves OOD accuracy, has competitive or better in-distribution accuracy than aux-inputs or aux-outputs alone, and improves 1–2% in-distribution and 2–3% OOD over not using auxiliary information on remote sensing tasks. Ablations of In-N-Out show that it achieves similar improvements over pre-training or self-training alone (up to 5% in-distribution, 1–2% OOD on remote sensing tasks). We also find that using OOD (rather than in-distribution) unlabeled examples for pre-training is crucial for OOD improvements.

2 SETUP. Let $x \in \mathbb{R}^d$ be the input (e.g., a satellite image), $y \in \mathbb{R}$ be the target (e.g., crop type), and $z \in \mathbb{R}^T$ be the cheaply obtained auxiliary information, either from additional sources (e.g., climate information) or derived from the original data (e.g., background).

Training data. Let $P_{id}$ and $P_{ood}$ denote the underlying distributions of $(x, y, z)$ triples in-distribution and out-of-distribution, respectively. The training data consists of (i) in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_{id}$, (ii) in-distribution unlabeled data $\{(x_i^{id}, z_i^{id})\}_{i=1}^{m_{id}} \sim P_{id}$, and (iii) out-of-distribution unlabeled data $\{(x_i^{ood}, z_i^{ood})\}_{i=1}^{m_{ood}} \sim P_{ood}$.

Goal and risk metrics. Our goal is to learn a model from input and auxiliary information to the target, $f : \mathbb{R}^d \times \mathbb{R}^T \to \mathbb{R}$. For a loss function $\ell$, the in-distribution population risk of the model $f$ is $R_{id}(f) = \mathbb{E}_{x,y,z \sim P_{id}}[\ell(f(x, z), y)]$, and its OOD population risk is $R_{ood}(f) = \mathbb{E}_{x,y,z \sim P_{ood}}[\ell(f(x, z), y)]$.

2.1 MODELS. We consider three common ways to use the auxiliary information ($z$) to learn a model.

Baseline. The baseline minimizes the empirical risk on labeled data while ignoring the auxiliary information (accomplished by setting $z$ to 0):
$$\hat{f}_{\mathrm{bs}} = \operatorname*{argmin}_f \frac{1}{n} \sum_{i=1}^n \ell(f(x_i, 0), y_i). \quad (1)$$

Aux-inputs.
The aux-inputs model minimizes the empirical risk on labeled data while using the auxiliary information as features:
$$\hat{f}_{\mathrm{in}} = \operatorname*{argmin}_f \frac{1}{n} \sum_{i=1}^n \ell(f(x_i, z_i), y_i). \quad (2)$$

Aux-outputs. The aux-outputs model leverages the auxiliary information $z$ by using it as the prediction target of an auxiliary task, in the hope that there is a low-dimensional feature representation that is common to predicting both $z$ and $y$. Training the aux-outputs model consists of two steps. In the pre-training step, we use all the unlabeled data to learn a shared feature representation. Let $h : \mathbb{R}^d \to \mathbb{R}^k$ denote a feature map and $g_{\text{z-out}} : \mathbb{R}^k \to \mathbb{R}^T$ a mapping from the feature representation to the auxiliary outputs. Let $\ell_{\mathrm{aux}}$ denote the loss function for the auxiliary information. We define the empirical risk of $h$ and $g_{\text{z-out}}$ as:
$$\hat{R}_{\mathrm{pre}}(h, g_{\text{z-out}}) = \frac{1}{m_{id} + m_{ood}} \left( \sum_{i=1}^{m_{id}} \ell_{\mathrm{aux}}(g_{\text{z-out}}(h(x_i^{id})), z_i^{id}) + \sum_{i=1}^{m_{ood}} \ell_{\mathrm{aux}}(g_{\text{z-out}}(h(x_i^{ood})), z_i^{ood}) \right). \quad (3)$$
The estimate of the feature map is $\hat{h}_{\mathrm{out}} = \operatorname{argmin}_h \min_{g_{\text{z-out}}} \hat{R}_{\mathrm{pre}}(h, g_{\text{z-out}})$. In the transfer step, the model uses the pre-trained feature map $\hat{h}_{\mathrm{out}}$ and the labeled data to learn the mapping $g_{\text{y-out}} : \mathbb{R}^k \to \mathbb{R}$ from the feature representation to the target $y$. We define the transfer empirical risk as:
$$\hat{R}_{\mathrm{trans}}(\hat{h}_{\mathrm{out}}, g_{\text{y-out}}) = \frac{1}{n} \sum_{i=1}^n \ell(g_{\text{y-out}}(\hat{h}_{\mathrm{out}}(x_i)), y_i). \quad (4)$$
The estimate of the target mapping is $\hat{g}_{\text{y-out}} = \operatorname{argmin}_{g_{\text{y-out}}} \hat{R}_{\mathrm{trans}}(\hat{h}_{\mathrm{out}}, g_{\text{y-out}})$. The final aux-outputs model is
$$\hat{f}_{\mathrm{out}}(x, z) = \hat{g}_{\text{y-out}}(\hat{h}_{\mathrm{out}}(x)). \quad (5)$$
Like the baseline model, the aux-outputs model ignores the auxiliary information for prediction.

3 THEORETICAL ANALYSIS OF AUX-INPUTS AND AUX-OUTPUTS MODELS. We now analyze the baseline, aux-inputs, and aux-outputs models introduced in Section 2. Our setup extends a linear regression setting commonly used for analyzing multi-task problems (Du et al., 2020; Tripuraneni et al., 2020).

Setup. See Figure 2 for the graphical model. Let $w = B^\star x \in \mathbb{R}^k$ be a low-dimensional latent feature ($k \le d$) shared between the auxiliary information $z$ and the target $y$. Let $u \in \mathbb{R}^m$ denote unobserved latent variables not captured in $x$. We assume $z$ and $y$ are linear functions of $u$ and $w$:
$$y = \theta_w^\top w + \theta_u^\top u + \epsilon, \quad (6)$$
$$z = A^\star w + C^\star u, \quad (7)$$
where $\epsilon \sim P_\epsilon$ denotes noise with mean 0 and variance $\sigma^2$. As in Du et al. (2020), we assume the dimension of the auxiliary information $T$ is greater than the feature dimension $k$, that is, $T \ge k$, and that $A^\star$, $B^\star$, and $C^\star$ have full rank (rank $k$). We also assume $T \ge m$, where $m$ is the dimension of $u$.

Data. Let $P_x$ and $P_u$ denote the distributions of $x$ and $u$ in-distribution (ID), and let $P'_x$, $P'_u$ denote the distributions of $x$ and $u$ OOD. We assume $x$ and $u$ are independent, have distributions with bounded density everywhere, and have invertible covariance matrices. We assume the mean of $u$ is zero both in- and out-of-distribution.¹ We assume we have $n \ge m + d$ in-distribution labeled training examples and unlimited access to unlabeled data both ID and OOD, a common assumption in unsupervised domain adaptation theory (Sugiyama et al., 2007; Kumar et al., 2020; Raghunathan et al., 2020).

Loss metrics. We use the squared loss for the target and auxiliary losses: $\ell(\hat{y}, y) = (y - \hat{y})^2$ and $\ell_{\mathrm{aux}}(z, z') = \|z - z'\|_2^2$.

Models. We assume all model families ($f$, $h$, $g_{\text{z-out}}$, $g_{\text{y-out}}$) in Section 2 are linear. Let $S = (A^\star, B^\star, C^\star, \theta_w, \theta_u, P_x, P_u)$ denote a problem setting which satisfies all the above assumptions.

3.1 AUXILIARY INPUTS HELP IN-DISTRIBUTION, BUT CAN HURT OOD.
We first show that the aux-inputs model (2) performs better than the baseline model (1) in-distribution. Intuitively, the target $y$ depends on both the input $x$ (through $w$) and the latent variable $u$ (Figure 2). The baseline model only uses $x$ to predict $y$; thus it cannot capture the variation in $y$ due to $u$. On the other hand, the aux-inputs model uses $x$ and $z$ to predict $y$. Since $z$ is a function of $x$ (through $w$) and $u$, $u$ can be recovered from $x$ and $z$ by inverting this relation. Note that $u$ is unobserved but implicitly recovered. The aux-inputs model can then combine $u$ and $x$ to predict $y$ better. Let $\sigma_u^2 = \mathbb{E}_{u \sim P_u}[(\theta_u^\top u)^2]$ denote the (in-distribution) variance of $y$ due to the latent variables $u$. The following proposition shows that if $\sigma_u^2 > 0$, then with enough training examples the aux-inputs model has lower in-distribution population risk than the baseline model.²

Proposition 1. For all problem settings $S$, $P_\epsilon$, assuming regularity conditions (bounded $x$, $u$, sub-Gaussian noise, and $T = m$), and $\sigma_u^2 > 0$, for all $\delta > 0$, there exists $N$ such that for $n \ge N$ training points, with probability at least $1 - \delta$ over the training examples, the aux-inputs model improves over the baseline:
$$R_{id}(\hat{f}_{\mathrm{in}}) < R_{id}(\hat{f}_{\mathrm{bs}}). \quad (8)$$

Although using $z$ as an input leads to better in-distribution performance, we show that the aux-inputs model can perform worse than the baseline model OOD for any number of training examples. Intuitively, the aux-inputs model uses $z$, which can be unreliable OOD because $z$ depends on $u$ and $u$ can shift OOD. In more detail, the aux-inputs model learns to predict $\hat{y} = \hat{\theta}_{x,\mathrm{in}}^\top x + \hat{\theta}_{z,\mathrm{in}}^\top z$, where the true output is $y = \theta_x^\top x + \theta_z^\top z$, and $\hat{\theta}_{z,\mathrm{in}}$ is an approximation to the true parameter $\theta_z$ that has some error. Out-of-distribution, $u$ and hence $z$ can have very high variance, which would magnify $(\hat{\theta}_{z,\mathrm{in}} - \theta_z)^\top z$ and lead to bad predictions.

Example 1. There exists a problem setting $S$, $P_\epsilon$, such that for every $n$, there is some test distribution $P'_x$, $P'_u$ with:
$$\mathbb{E}[R_{ood}(\hat{f}_{\mathrm{in}})] > \mathbb{E}[R_{ood}(\hat{f}_{\mathrm{bs}})]. \quad (9)$$
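The following numpy simulation illustrates this failure mode under illustrative (not the paper's) parameter choices: here $\theta_u = 0$, so $z$ carries only spurious signal for $y$, yet the least-squares aux-inputs fit places noise-driven weight on $z$, and an OOD inflation of $u$ magnifies that error while leaving the baseline essentially unaffected. Note that with $\theta_u = 0$ the in-distribution benefit of Proposition 1 is absent by design; the sketch targets only the OOD effect of Example 1.

```python
# A hedged numpy simulation of the OOD failure mode of aux-inputs
# (Example 1 flavor, Eqs. 6-7). All dimensions and scales are
# illustrative choices, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
d, k, m, T, n = 5, 2, 2, 4, 50
B = rng.normal(size=(k, d))                     # w = B x
A, C = rng.normal(size=(T, k)), rng.normal(size=(T, m))
theta_w = rng.normal(size=k)

def sample(n, u_scale):
    x = rng.normal(size=(n, d))
    u = u_scale * rng.normal(size=(n, m))       # u shifts OOD
    w = x @ B.T
    y = w @ theta_w + 0.5 * rng.normal(size=n)  # theta_u = 0 here
    z = w @ A.T + u @ C.T                       # auxiliary information
    return x, z, y

x, z, y = sample(n, u_scale=1.0)
th_bs = np.linalg.lstsq(x, y, rcond=None)[0]                  # baseline
th_in = np.linalg.lstsq(np.hstack([x, z]), y, rcond=None)[0]  # aux-inputs

for name, scale in [("ID ", 1.0), ("OOD", 20.0)]:
    xt, zt, yt = sample(100_000, u_scale=scale)
    e_bs = np.mean((xt @ th_bs - yt) ** 2)
    e_in = np.mean((np.hstack([xt, zt]) @ th_in - yt) ** 2)
    print(f"{name} MSE  baseline: {e_bs:6.3f}  aux-inputs: {e_in:6.3f}")
```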
This paper introduces a new method for leveraging auxiliary information and unlabelled data to improve out-of-distribution model performance. Theoretically, in a linear model with latent variables, they demonstrate that using auxiliary data as inputs helps in-distribution test error but can hurt out-of-distribution error, while using auxiliary data to pretrain a "good" representation always improves out-of-distribution error. The proposed method uses the auxiliary data to learn an initial model, which generates pseudolabels to fine-tune the pretrained model.
Variance Reduction in Hierarchical Variational Autoencoders
1 INTRODUCTION. Variational autoencoders (VAE) [10] are a popular latent variable model for unsupervised learning that simplifies learning by the introduction of a learned approximate posterior. Given data $x$ and latent variables $z$, we specify the conditional distribution $p(x|z)$ by parameterizing the distribution parameters with a neural network. Since it is difficult to learn such a model directly, another conditional distribution $q(z|x)$ is introduced to approximate the posterior distribution. During learning the goal is to maximize the evidence lower bound (ELBO), which lower bounds the log likelihood,
$$\log p(x) \ge \mathbb{E}_{q(z|x)}[\log p(x|z) + \log p(z) - \log q(z|x)].$$
In their simplest form, the generative model $p(x|z)$ and the approximate posterior $q(z|x)$ are Gaussian distributions optimized in unison. A natural way to increase the modeling capacity of VAEs is to incorporate a hierarchy of stochastic variables. Such models, however, turn out to be difficult to train, and higher levels in the hierarchy tend to remain independent of the input data: a problem termed posterior collapse. Posterior collapse in VAEs manifests itself in the latent distribution tending to fall back to the prior. With hierarchical VAEs the effect is found to be more pronounced in the top layers farther from the output. For the purpose of this paper and for clarity of exposition, we focus on the simplest extension of hierarchical variational autoencoders, where stochastic layers are stacked serially on top of each other [2, 21]:
$$p(x, z) = p(x|z_1)\, p(z_L) \prod_{i=1}^{L-1} p(z_i|z_{i+1}) \quad \text{and} \quad q(z|x) = q(z_1|x) \prod_{i=1}^{L-1} q(z_{i+1}|z_i).$$
The intermediate distributions in this model are commonly taken to be Gaussian distributions parameterized by neural network functions, so that $p(z_i|z_{i+1}) = \mathcal{N}(z_i \mid \mu(z_{i+1}), \sigma(z_{i+1}))$, where $\mu(z)$, $\sigma(z)$ are neural networks computing the mean and variance of the Gaussian distribution. We refer to these as vanilla hierarchical variational autoencoders. For each stochastic layer in this model there is a corresponding KL divergence term in the objective, given by
$$\mathbb{E}[\mathrm{KL}(q(z_i|z_{i-1}) \,\|\, p(z_i|z_{i+1}))]. \quad (1)$$
As described later, expression (1) can be easily decomposed to show an explicit dependence on the variance of the parameterizing functions $\mu(z_i)$, $\sigma(z_i)$ of the intermediate Gaussian distribution. We further show the KL divergence term to be closely related to the harmonics of the parameterizing function. For complex parameterizing functions the KL divergence term has large high-frequency components (and thus high variance), which leads to unstable training, causing posterior collapse. Building on this, we suggest a method for training the simplest hierarchical extension of VAEs that avoids the problem of posterior collapse without introducing further architectural complexity [13, 21]. Given a hierarchical variational autoencoder, our training method incorporates a smoothing parameter (we denote this by $\rho$) in the neural network functions used to parameterize the intermediate latent distributions. The smoothing is done such that expected values are preserved, the higher frequencies are attenuated, and the variance is reduced. Next, the gradients computed with the smooth functions are used to train the original hierarchical variational autoencoder. For the construction of the smoothing transformations for VAEs with Gaussian latent spaces we make use of ideas from the analysis of Gaussian spaces.
We analyze the stochastic functions in vanilla hierarchical VAEs as Hermite expansions on Gaussian spaces [9]. The Ornstein-Uhlenbeck (OU) semigroup from Gaussian analysis is a set of operators that we show to smoothly interpolate between a random variable and its expectation. The OU semigroup provides the appropriate set of smoothing operators, which enable us to control variance and avoid posterior collapse. We further show that by smoothing the intermediate parameterizing functions $\mu(z)$, $\sigma(z)$ in the proposed manner, the KL divergence of the top layer sees a sudden sharp drop toward zero as the amount of smoothing is decreased. This behaviour is retained when we evaluate the KL divergence on the original unsmoothed variational autoencoder model. This behaviour is reminiscent of phase transitions from statistical mechanics, and we adopt the same terminology to describe the phenomenon. Our experiments suggest that the phenomenon is general across datasets and commonly used architectures. Furthermore, the critical value of the smoothing parameter $\rho$ at which the transition occurs is fixed for a given model configuration and varies with stochastic depth and width. We make the following contributions. First, we establish a connection between higher harmonics, variance, posterior collapse, and phase transitions in hierarchical VAEs. Second, we show that by using the Ornstein-Uhlenbeck semigroup of operators on the generative stochastic functions in VAEs we reduce higher frequencies, and consequently variance, to mitigate posterior collapse. We corroborate our findings experimentally and further obtain on CIFAR-10 likelihoods competitive with more complex architectural solutions, alongside a reduction in model size. We refer to the proposed family of models as Hermite variational autoencoders (HVAE).

2 HERMITE VARIATIONAL AUTOENCODERS.

2.1 ANALYSIS ON GAUSSIAN SPACES. The analysis of Gaussian spaces studies functions of Gaussian random variables. These are real-valued functions defined on $\mathbb{R}^n$ endowed with the Gaussian measure. Many functions employed in machine learning are instances of such functions: decoders for variational autoencoders, as is the case in this work, and generators for generative adversarial networks being two examples. By way of summary, the main facts we use from this field are that a function on a Gaussian space can be expanded in an orthonormal basis, where the basis functions are the Hermite polynomials. This orthonormal expansion is akin to a Fourier transform in this space. The second fact is that the coefficients of such an expansion can be modified in a way that reduces the variance of the expanded function, by applying an operator from the Ornstein-Uhlenbeck semigroup of operators. Next, we give a brief introduction. For further details on Gaussian analysis we refer to [9].

Gaussian spaces: Let $L^2(\mathbb{R}^n, \gamma)$ be the space of square integrable functions, $f : \mathbb{R}^n \to \mathbb{R}$, with the Gaussian measure $\gamma(z) = \prod_i \mathcal{N}(z_i|0,1)$. Given functions $f, g$ in this space, the inner product is given by $\langle f, g \rangle = \mathbb{E}_{\gamma(z)}[f(z) g(z)]$.

Basis functions for $L^2(\mathbb{R}, \gamma)$: Taking the space of univariate functions $L^2(\mathbb{R}, \gamma)$, it is known that the polynomial functions $\varphi_i(z) = z^i$ are a basis for this space. By a process of orthonormalization we obtain the normalized Hermite polynomial basis for this space. The first few Hermite polynomials are the following: $h_0(z) = 1$, $h_1(z) = z$, $h_2(z) = \frac{z^2 - 1}{\sqrt{2}}$, ....
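As a small worked example (illustration only, not from the paper), one can check by Monte Carlo that these first few normalized Hermite polynomials are indeed orthonormal under the Gaussian measure:

```python
# A small numpy check that the normalized Hermite polynomials h_0, h_1,
# h_2 listed above are orthonormal under the standard Gaussian measure,
# estimating <h_i, h_j> = E[h_i(z) h_j(z)] by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)

h = [np.ones_like(z), z, (z**2 - 1) / np.sqrt(2)]  # h_0, h_1, h_2
gram = np.array([[np.mean(hi * hj) for hj in h] for hi in h])
print(np.round(gram, 2))   # approximately the 3x3 identity matrix
```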
Basis functions for $L^2(\mathbb{R}^n, \gamma)$: Letting $\alpha \in \mathbb{N}^n$ be a multi-index, the basis functions for $L^2(\mathbb{R}^n, \gamma)$ are obtained by multiplying the univariate basis functions across dimensions, $h_\alpha(z) = \prod_i h_{\alpha_i}(z_i)$.

Hermite expansion: A function in $L^2(\mathbb{R}^n, \gamma)$ can be expressed as $f = \sum_{\alpha \in \mathbb{N}^n} \hat{f}(\alpha) h_\alpha$, where the $\hat{f}(\alpha)$ are the Hermite coefficients of $f$ and are computed as $\hat{f}(\alpha) = \langle f, h_\alpha \rangle = \mathbb{E}_{\gamma(z)}[f(z) h_\alpha(z)]$. Plancherel's theorem is the following relation between the norms of $f$ and $\hat{f}$, which follows from orthonormality of the basis functions:
$$\langle f, f \rangle = \sum_\alpha \hat{f}(\alpha)^2. \quad (2)$$

Ornstein-Uhlenbeck (OU) semigroup: Given a parameter $\rho \in [0, 1]$ and a Gaussian variable $z$, we construct a correlated variable $z'$ as $z' = \rho z + \sqrt{1 - \rho^2}\, z_\omega$, where $z_\omega \sim \mathcal{N}(0, 1)$ is a random standard Gaussian sample. The OU semigroup is a set of operators, denoted $U_\rho$ and parameterized by $\rho \in [0, 1]$. The action of $U_\rho$ on $f$ at $z$ is to average the function values over the correlated $z'$ around $z$:
$$U_\rho f(z) = \mathbb{E}_{z'|z}[f(z')] = \mathbb{E}_{z_\omega}[f(\rho z + \sqrt{1 - \rho^2}\, z_\omega)]. \quad (3)$$
The action of the $U_\rho$ operators on the Hermite expansion of a function $f(z)$ is to decay the Hermite coefficients according to their degree, $U_\rho f(z) = \sum_{\alpha \in \mathbb{N}^n} \rho^{|\alpha|} \hat{f}(\alpha) h_\alpha$, where $|\alpha| = \sum_i \alpha_i$. If $z$ is reparameterized as $z = \sigma \epsilon_1 + \mu$, the correlated OU sample is given by $z' = \sigma(\rho \epsilon_1 + \sqrt{1 - \rho^2}\, \epsilon_2) + \mu$, where $\epsilon_1, \epsilon_2$ are standard Gaussian variables. This can also be expressed in terms of $z$ as
$$z' = \rho z + (1 - \rho)\mu + \sigma \sqrt{1 - \rho^2}\, \epsilon_2. \quad (4)$$

2.2 HERMITE EXPANSIONS FOR VAES. Our proposed method is a new training procedure for the vanilla hierarchical variational autoencoder that builds upon Hermite expansions of Gaussian functions and properties of the OU semigroup. In the context of hierarchical variational autoencoders, the Gaussian function $f$ is the generative model $\mu_i(z_{i+1})$ and $\sigma_i(z_{i+1})$ that receives as input the latent variable $z_{i+1}$ and returns the Gaussian latent variable of the next layer, $z_i \sim \mathcal{N}(\mu_i(z_{i+1}), \sigma_i(z_{i+1}))$. We make use of the following properties of the OU semigroup to construct Gaussian functions of lower variance. The first property we employ is that the OU semigroup of operators interpolates between a random variable ($\rho = 1$) and its expectation ($\rho = 0$), where the parameter $\rho$ controls the extent of the interpolation.

Proposition 1. The operators $U_\rho$ retain the expected value of the operated function, $\mathbb{E}[f] = \mathbb{E}[U_\rho f]$.

Proposition 2. The operators $U_\rho$ interpolate between a random variable and its expectation. In particular, as $\rho \to 1$, $U_\rho f = f$, and as $\rho \to 0$, $U_\rho f = \mathbb{E}[f]$.

The second property we exploit is that the new random variable $U_\rho f(z)$ has lower variance compared with the original variable $f(z)$ and is in general a smoother function than $f(z)$. The smoothing properties of the operator $U_\rho$ can be understood by examining the Hermite expansion of $U_\rho f$. First we note that we can express the expectation and variance of a function $f$ in terms of its Hermite coefficients; specifically, $\mathbb{E}[f] = \hat{f}(0)$ and $\mathrm{Var}(f) = \mathbb{E}[(f - \mathbb{E}[f])^2] = \mathbb{E}[(f - \hat{f}(0))^2] = \sum_{\alpha : |\alpha| > 0} \hat{f}(\alpha)^2$, which follows from Plancherel's theorem (Eq. (2)). Replacing $f$ with $U_\rho f$ and using the Hermite expansion of $U_\rho f$ from Eq. (3), the mean remains the same, $\mathbb{E}[U_\rho f] = \rho^0 \hat{f}(0) = \hat{f}(0)$, and the variance reduces as
$$\mathrm{Var}[U_\rho f] = \mathbb{E}[(U_\rho f - \mathbb{E}[f])^2] = \mathbb{E}[(U_\rho f - \hat{f}(0))^2] = \sum_{\alpha : |\alpha| > 0} \rho^{2|\alpha|} \hat{f}(\alpha)^2. \quad (5)$$
The last equation indicates that the contribution to the variance by $\hat{f}(\alpha)$ decays by a factor $\rho^{2|\alpha|}$ when $\rho \in (0, 1)$. This, in turn, leads to a decrease in variance.

Algorithm. In essence, Hermite variational autoencoders are similar to variational autoencoders, save for applying the OU semigroup to the latent distributions $p(z_i|z_{i+1})$ that comprise the generator, to compute gradients during training only. Specifically, we apply these operators to the functions parameterizing the mean and variance of the latent Gaussian distributions. For each distribution $p(z_i|z_{i+1})$ we substitute $\mathcal{N}(z_i \mid \mu_i(z_{i+1}), \sigma_i(z_{i+1}))$ with $\mathcal{N}(z_i \mid U_\rho \mu_i(z_{i+1}), U_\rho \sigma_i(z_{i+1}))$. The new functions result in latent distributions with parameters that have lower variance but the same expected value relative to the conditional input latent distribution. In an alternative parameterization we apply the OU semigroup to the ratio of the mean and variance functions, $U_\rho \frac{\mu_i}{\sigma_i}(z_{i+1})$ (see the next section for a justification of this). The OU semigroup operators can also be applied to the approximate posterior functions, but we observe little benefit. In practice, we compute $U_\rho \mu_i(z_{i+1})$ and $U_\rho \sigma_i(z_{i+1})$ by Monte Carlo averaging. As for a function $f$, $U_\rho f = \mathbb{E}_{z'|z}[f(z')]$, where the $z'$ are the correlated samples, we estimate the expectation by Monte Carlo averaging over $z'$. Experiments show that 5 to 10 samples suffice. It is important to emphasize that the substitution of the lower-variance functions for parameterizing the distributions is only done when computing gradients during training. All evaluations, training or test, are still done on the original hierarchical variational autoencoder model. Thus, the new training procedure has an additional computational cost only for the intermediate distributions in the generator, proportional to the number of correlated samples during training.

Complexity. In the Hermite VAE the OU sampling operation is only applied in the intermediate stochastic layers of the generator network. In particular, it is not applied in the inference network or in the last layer of the decoder. The fact that OU sampling is not applied in the final stochastic layer computing $p(x|z_1)$ is especially important for deep VAEs for images, since feature maps are upsampled to match image dimensions in this layer. Thus, for 5 OU samples, the added computational and activation memory complexity is significantly less than 5 times the total cost of the base VAE model, and is 5 times the cost of the higher decoder layers only in the base model. An empirical comparison of maximum memory usage of various models can be found in Table 6.
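The training-time substitution can be sketched in a few lines. The following PyTorch snippet is one illustrative reading of the procedure (network shapes, $\rho$, and the number of correlated samples are assumptions), estimating $U_\rho \mu_i$ and $U_\rho \sigma_i$ by averaging over correlated samples drawn according to Eq. (4):

```python
# A hedged PyTorch sketch of the OU-smoothed generator step: estimating
# U_rho mu(z) and U_rho sigma(z) by Monte Carlo over correlated samples
# z' = rho*z + (1-rho)*mu_z + sigma_z*sqrt(1-rho^2)*eps (Eq. 4).
import torch

def ou_smooth(f, z, mu_z, sigma_z, rho=0.9, n_samples=5):
    """Monte Carlo estimate of U_rho f at z, where z ~ N(mu_z, sigma_z^2)."""
    eps = torch.randn(n_samples, *z.shape)
    z_corr = rho * z + (1 - rho) * mu_z + sigma_z * (1 - rho**2) ** 0.5 * eps
    return torch.stack([f(zc) for zc in z_corr]).mean(0)

# Illustrative intermediate generator layer p(z_i | z_{i+1}).
mu_net = torch.nn.Linear(16, 16)
sig_net = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Softplus())

mu_z, sigma_z = torch.zeros(8, 16), torch.ones(8, 16)  # dist. of z_{i+1}
z = mu_z + sigma_z * torch.randn(8, 16)                # sample of z_{i+1}

# Smoothed parameters, used only for the training-time gradient computation.
mu_i = ou_smooth(mu_net, z, mu_z, sigma_z)
sigma_i = ou_smooth(sig_net, z, mu_z, sigma_z)
z_i = mu_i + sigma_i * torch.randn_like(mu_i)          # reparameterized z_i
```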
This paper studies the training of deep hierarchical VAEs and focuses on the problem of posterior collapse. It is argued that reducing the variance of the gradient estimate may help to overcome posterior collapse. The authors focus on reducing the variance of the functions parameterizing the variational distribution of each layer using a layer-wise smoothing operator based on the Ornstein-Uhlenbeck semigroup (parameterized by a parameter $\rho$). The operator requires additional Monte Carlo samples. The authors provide an analytical analysis of bias and variance. Lastly, they train multiple VAE models, measure posterior collapse, and observe a phase-transition behaviour depending on the parameter $\rho$.
Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
1 INTRODUCTION. Many real-world optimization problems involve function evaluations that are the result of an expensive or time-consuming process. Examples occur in the design of materials (Mansouri Tehrani et al., 2018), proteins (Brookes et al., 2019; Kumar & Levine, 2019), neural network architectures (Zoph & Le, 2016), or vehicles (Hoburg & Abbeel, 2014). Rather than settling for a slow and expensive optimization process through repeated function evaluations, one may instead adopt a data-driven approach, where a large dataset of previously collected input-output pairs is given in lieu of running expensive function queries. Not only could this approach be more economical, but in some domains, such as the design of drugs or vehicles, function evaluations pose safety concerns and an online method may simply be impractical. We refer to this setting as the offline model-based optimization (MBO) problem, where a static dataset is available but function queries are not allowed. A straightforward method for solving offline MBO problems would be to estimate a proxy of the ground truth function $\hat{f}_\theta$ using supervised learning, and to optimize the input $x$ with respect to this proxy. However, this approach is brittle and prone to failure, because the model-fitting process often has little control over the values of the proxy function on inputs outside of the training set. An algorithm that directly optimizes $\hat{f}_\theta$ could easily exploit the proxy to produce adversarial inputs that nevertheless are scored highly under $\hat{f}_\theta$ (Kumar & Levine, 2019; Fannjiang & Listgarten, 2020). In order to counteract the effects of model exploitation, we propose to use the normalized maximum likelihood (NML) framework (Barron et al., 1998). The NML estimator produces the distribution closest to the MLE assuming an adversarial output label, and has been shown to be effective for resisting adversarial attacks (Bibas et al., 2019). Moreover, NML provides a principled approach to generating uncertainty estimates, which allows it to reason about out-of-distribution queries. However, because NML is typically intractable except for a handful of special cases (Roos et al., 2008), we show in this work how we can circumvent intractability issues with NML in order to construct a reliable and robust method for MBO. Because of its general formulation, the NML distribution provides a flexible approach to constructing conservative and robust estimators using high-dimensional models such as neural networks. The main contribution of this work is to develop an offline MBO algorithm that utilizes a novel approximation to the NML distribution to obtain an uncertainty-aware forward model for optimization, which we call NEMO (Normalized maximum likelihood Estimation for Model-based Optimization). The basic premise of NEMO is to construct a conditional NML distribution that maps inputs to a distribution over outputs. While constructing the NML distribution is intractable in general, we discuss novel methods to amortize the computational cost of NML, which allow us to scale our method to practical problems with high-dimensional inputs using neural networks. A separate optimization algorithm can then be used to optimize over the output to any desired confidence level. Theoretically, we provide insight into why NML is useful for the MBO setting by showing a regret bound for modeling the ground truth function.
Empirically, we evaluate our method on a selection of tasks from the Design Benchmark (Anonymous, 2021), where we show that our method performs competitively with state-of-the-art baselines. Additionally, we provide a qualitative analysis of the uncertainty estimates produced by NEMO, showing that it provides reasonable uncertainty estimates, while commonly used methods such as ensembles can produce erroneous estimates that are both confident and wrong in low-data regimes.

2 RELATED WORK. Derivative-free optimization methods are typically used in settings where only function evaluations are available. This includes methods such as REINFORCE (Williams, 1992) and reward-weighted regression (Peters & Schaal, 2007) in reinforcement learning, the cross-entropy method (Rubinstein, 1999), latent variable models (Garnelo et al., 2018; Kim et al., 2019), and Bayesian optimization (Snoek et al., 2012; Shahriari et al., 2015). Of these approaches, Bayesian optimization is the most often used when function evaluations are expensive and limited. However, all of the aforementioned methods focus on the active or online setting, whereas in this work we are concerned with the offline setting, where additional function evaluations are not available. Normalized maximum likelihood is an information-theoretic framework based on the minimum description length principle (Rissanen, 1978). While the standard NML formulation is purely generative, the conditional or predictive NML setting can be used for supervised learning and prediction problems (Rissanen & Roos, 2007; Fogel & Feder, 2018). Bibas et al. (2019) apply this framework for prediction using deep neural networks, but require an expensive finetuning process for every input. The goal of our work is to provide a scalable and tractable method to approximate the CNML distribution, and we apply this framework to offline optimization problems. Like CNML, conformal prediction (Shafer & Vovk, 2008) is concerned with predicting the value of a query point $\hat{y}_{t+1}$ given a prior dataset, and provides per-instance confidence intervals based on how consistent the new input is with the rest of the dataset. Our work instead relies on the NML framework, where the NML regret serves a similar purpose for measuring how close a new query point is to existing, known data. The offline model-based optimization problem has been applied to problems such as designing DNA (Killoran et al., 2017), drugs (Popova et al., 2018), or materials (Hautier et al., 2010). The estimation of distribution algorithm (Bengoetxea et al., 2001) alternates between searching in the input space and the model space using a maximum likelihood objective. Kumar & Levine (2019) propose to learn an inverse mapping from output values to input values, and optimize over the output values which produce consistent input values. Brookes et al. (2019) propose CbAS, which uses a trust region to limit exploitation of the model. Fannjiang & Listgarten (2020) cast the MBO problem as a minimax game based on the oracle gap, or the value between the ground truth function and the estimated function. In contrast to these works, we develop an approach to MBO which explicitly reasons about uncertainty. Approaches which utilize uncertainty, such as Bayesian optimization, are commonly used in online settings, and we expect these to work in offline settings as well.
There are several related areas that could arguably be viewed as special cases of MBO. One is contextual bandits under the batch learning from bandit feedback setting, where learning is often done on logged experience (Swaminathan & Joachims, 2015; Joachims et al., 2018); another is offline reinforcement learning (Levine et al., 2020), where model-based methods construct estimates of the MDP parameters (Kidambi et al., 2020; Yu et al., 2020). Our work focuses on a more generic function optimization setting, but could be applied in these domains as well. 3 PRELIMINARIES. We begin by reviewing the problem formulation for offline model-based optimization, as well as necessary background on the normalized maximum likelihood estimator. Problem statement. We define the offline model-based optimization (MBO) problem as follows. Assume the existence of a stochastic ground truth function f(y|x). The MBO algorithm is given a dataset D of inputs x along with outputs y sampled from f(y|x). As in standard optimization problems, the goal of MBO is to find the input value that maximizes the true function: $x^* = \arg\max_x \mathbb{E}_{y \sim f(y|x)}[y]$. (1) However, in offline MBO, the algorithm is not allowed to query the true function f(y|x), and must find the best possible point $x^*$ using only the guidance of a fixed dataset $D = \{x_{1:N}, y_{1:N}\}$. One approach to solving this problem is to introduce a separate proxy function $\hat{f}_\theta(y|x) \approx f(y|x)$, which is learned from D as an estimate of the true function. From here, standard optimization algorithms such as gradient descent can be used to find the optimum of the proxy function, $\hat{x}^* = \arg\max_x \mathbb{E}_{y \sim \hat{f}_\theta(y|x)}[y]$. Alternatively, a trivial algorithm could be to select the highest-performing point in the dataset. While adversarial ground truth functions can easily be constructed where this is the best one can do (e.g., if $f(x) = -\infty$ on any $x \notin D$), in many reasonable domains it should be possible to perform better than the best point in the dataset. Conditional normalized maximum likelihood. In order to produce a conditional distribution $p_{\mathrm{NML}}(y|x)$ that we can use for estimating the ground truth function, we leverage the conditional or predictive NML (CNML) framework (Rissanen & Roos, 2007; Fogel & Feder, 2018; Bibas et al., 2019). Intuitively, the CNML distribution is the distribution closest to the MLE assuming the test label y is chosen adversarially. This is useful for the MBO setting since we do not know the ground truth value y at points we are querying during optimization, and the CNML distribution gives us conservative estimates that help mitigate model exploitation (see Fig. 1). Formally, the CNML estimator is the minimax solution to a notion of regret, called the individual regret, defined as $\mathrm{Regret}_{\mathrm{ind}}(h, y) = \log p(y|x, \hat{\theta}_{D \cup (x,y)}) - \log h(y|x)$, with $p_{\mathrm{NML}}(y|x) = \arg\min_h \max_{y'} \mathrm{Regret}_{\mathrm{ind}}(h, y')$ (Fogel & Feder, 2018). The notation $D \cup (x,y)$ refers to the dataset obtained by appending a query point and label $(x, y)$ to the fixed offline dataset D, and $\hat{\theta}_{D \cup (x,y)}$ denotes the MLE estimate for this augmented dataset. The query point $(x, y)$ represents the test point we are interested in modeling.
The solution to the minimax problem can be expressed as (Fogel & Feder, 2018):
$p_{\mathrm{NML}}(y|x) = \frac{p(y|x, \hat{\theta}_{D \cup (x,y)})}{\int_{y'} p(y'|x, \hat{\theta}_{D \cup (x,y')})\, dy'}$, (2)
where $\hat{\theta}_{D \cup (x,y)} = \arg\max_\theta \frac{1}{N+1} \sum_{(x_i, y_i) \in D \cup (x,y)} \log p(y_i|x_i, \theta)$ is the maximum likelihood estimate for p using the dataset D augmented with (x, y).
Algorithm 1 NEMO: Normalized Maximum Likelihood for Model-Based Optimization
Input: model class $\{f_\theta : \theta \in \Theta\}$, dataset $D = (x_{1:N}, y_{1:N})$, number of bins K, evaluation function g(y), learning rates $\alpha_\theta$, $\alpha_x$.
Initialize K models $\theta^{1:K}_0$ and the optimization iterate $x_0$.
Quantize $y_{1:N}$ into K bins, denoted $\lfloor Y \rfloor = \{\lfloor y^1 \rfloor, \cdots, \lfloor y^K \rfloor\}$.
for iteration t in 1 ... T do
    for k in 1 ... K do
        Construct the augmented dataset: $D' \leftarrow D \cup (x_t, \lfloor y^k \rfloor)$.
        Update the model: $\theta^k_{t+1} \leftarrow \theta^k_t + \alpha_\theta \nabla_{\theta^k_t} \mathrm{LogLikelihood}(\theta^k_t, D')$.
    end for
    Estimate the CNML distribution: $\hat{p}_{\mathrm{NML}}(y|x_t) \propto p(y|x_t, \theta^y_t) / \sum_k p(\lfloor y^k \rfloor|x_t, \theta^k_t)$.
    Update x: $x_{t+1} \leftarrow x_t + \alpha_x \nabla_x \mathbb{E}_{y \sim \hat{p}_{\mathrm{NML}}(y|x)}[g(y)]$.
end for
The NML family of estimators has connections to Bayesian methods, and has been shown to be asymptotically equivalent to Bayesian inference under the uninformative Jeffreys prior (Rissanen, 1996). NML and Bayesian modeling both suffer from intractability, albeit for different reasons. Bayesian modeling is generally intractable outside of special choices of the prior and model class $\Theta$ where conjugacy can be exploited. On the other hand, NML is intractable because the denominator requires integrating over, and training an MLE estimate for, every possible y. One of the primary contributions of this paper is to discuss how to approximate this intractable computation with a tractable one that is sufficient for optimization on challenging problems, which we discuss in Section 4.
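A minimal Python/PyTorch sketch of the amortized inner loop of Algorithm 1 might look as follows; the `log_prob` interface of the K models and the single-gradient-step MLE update are illustrative assumptions about how the amortization could be implemented, not the authors' exact code.

import torch

def nemo_cnml_step(models, opts, xs, ys, bins, x_t):
    # One NEMO iteration: for each quantized label, take a gradient step toward
    # the MLE on the dataset augmented with (x_t, bin_k), then normalize.
    log_probs = []
    x_aug = torch.cat([xs, x_t.detach().unsqueeze(0)])
    for k, (model, opt) in enumerate(zip(models, opts)):
        y_aug = torch.cat([ys, bins[k].unsqueeze(0)])
        loss = -model.log_prob(x_aug, y_aug).mean()   # MLE objective on D'
        opt.zero_grad(); loss.backward(); opt.step()
        log_probs.append(model.log_prob(x_t.unsqueeze(0), bins[k].unsqueeze(0)))
    # Approximate CNML over the K bins by normalizing the per-model likelihoods.
    return torch.softmax(torch.stack(log_probs).squeeze(), dim=0)

A separate gradient step on x against the expected score under this distribution would then complete one outer iteration.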
The paper proposes an approximation method, called NEMO (Normalized maximum likelihood Estimation for Model-based Optimization), to compute the conditional normalized maximum likelihood of a query data point as a way to quantify the uncertainty of a forward prediction model in offline model-based optimization problems. The main idea is to construct a conditional NML (CNML) distribution that maps high-dimensional inputs to a distribution over output variables. In addition, the paper provides a theoretical motivation: estimating the true function with the CNML is close to the best possible expert even if the test label is chosen adversarially, which makes it much harder for an optimizer to exploit the model. Using this CNML on three offline optimization benchmark datasets (Superconductor, GFP, MoleculeActivity) with gradient ascent-based optimization, NEMO outperforms all four baselines on the Superconductor dataset by almost 1.4x to 1.7x, and generates results comparable to the four baseline methods on the GFP and MoleculeActivity datasets.
SP:2d25eeb93ba90f9c4064bf794f9a132a6859c8e4
Unsupervised Discovery of Interpretable Latent Manipulations in Language VAEs
1 INTRODUCTION. Transformer-based models yield state-of-the-art results on a number of tasks, including representation learning (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020) and generation (Radford et al.; Raffel et al., 2019; Lewis et al., 2020). Notably, large language models have been reported to produce outputs nearly indistinguishable from human-written texts (Brown et al., 2020). Although the predictions of autoregressive language models are fluent and coherent, it is not clear how to manipulate the model to get samples with desired properties: for example, to make them shorter, more formal, or more positive, or, alternatively, to use the same model to rewrite human-written texts in a different tone. Current approaches often rely on external labels of target attributes and require modifications to the model. This involves retraining for new attributes or changing the decoding procedure, which is usually expensive. In contrast, models with explicit latent spaces have the innate ability to manipulate text attributes by moving along latent directions. They have, however, gained limited traction. One reason is that training a VAE on text data poses a number of optimization challenges, which have been tackled with a varying degree of success (He et al., 2019; Fu et al., 2019; Zhu et al., 2020). Additionally, language VAEs are mostly small LSTM-based models, which goes against the current trend of using large pretrained Transformers. The first large-scale language VAE model is the recently introduced OPTIMUS (Li et al., 2020): it uses BERT as the encoder and GPT-2 as the decoder, and sets a new record on benchmark datasets. Differently from texts, latent space models for images, especially GANs, achieve state-of-the-art generation results. Therefore, these models have been the focus of the research community, and the properties of their latent spaces are well studied. For example, even early works on generative adversarial networks for images report that it is possible to have smooth interpolations between images in the latent space (Goodfellow et al., 2014). More recent studies show that the latent space directions corresponding to human-interpretable image transformations (from now on, "interpretable directions") can be discovered in an unsupervised way (Härkönen et al., 2020; Voynov & Babenko, 2020; Peebles et al., 2020). In this paper, we show that for the language domain, much like the well-studied visual domain, a sufficiently "good" latent space allows manipulating sample attributes with relative ease. To avoid the known difficulties associated with training language GANs, we experiment with VAEs; more specifically, with the current state-of-the-art model OPTIMUS. We show that for this model, not only is it possible to produce meaningful and "smooth" interpolations between examples and to transfer specific properties via arithmetic operations in the latent space, but it is also possible to discover interpretable latent directions in an unsupervised manner. We propose a method based on the PCA of the latent representations of the texts in the training dataset. According to human evaluation, the proportion of interpretable directions among the ones found by our method is consistently larger than the proportion of interpretable directions among canonical co-ordinates or random directions in the latent space.
The meaningful directions found by this method include, for example, subject age, subject gender, verb tense, and sentence length. Some of the directions, e.g., sentence length, are potentially useful: the ability to expand or shrink a text while preserving its content may be useful for tasks like summarization. Note that the proposed method is simple and fast. The method is simple because it requires only the forward pass of the encoder, without backpropagating through decoding steps. This is very important for the language domain, where backpropagation through samples is significantly more difficult than for images. Namely, generation is non-differentiable, and previous attempts to overcome this issue relied on noisy or biased gradient estimates, which are less reliable than standard MLE training. Instead, we do not rely on generated samples at all: we operate directly in the latent space. Additionally, since sampling directly from the prior does not yield diverse samples in the case of OPTIMUS, we use the representations of the training data without running a decoding procedure; this makes the method fast. To summarize, our contributions are as follows: 1. We propose the first method for unsupervised discovery of interpretable directions in latent spaces of language VAEs. 2. This method is simple and fast: it is based on PCA of latent representations for texts in the training dataset. 3. This method is effective: the proportion of interpretable directions among the ones found by our method is consistently larger than that of canonical co-ordinates or random directions in the latent space. 4. Our work lays foundations for two important areas: first, it allows comparing models in terms of latent space interpretability, and second, it provides a baseline for unsupervised latent controls discovery. 2 RELATED WORK. Finding interpretable directions in latent spaces of language VAEs is related to three lines of work. First, latent variable models for text and, more specifically, the properties of their latent spaces: for interpretable directions to exist, the latent space has to be smooth (i.e., allow coherent interpolations). Then, since a great part of the motivation for finding interpretable directions is manipulating generated texts, we discuss works on controllable text generation for different types of models, both VAE and standard autoregressive. Finally, we mention recent works trying to discover interpretable directions in image GANs. 2.1 LATENT VARIABLE MODELS FOR TEXT. Latent variable models encode information about text into a probability distribution. In addition to sampling new sentences from the prior distribution, they potentially allow one to explicitly encode specific properties of text, such as sentiment or style. Even early works on VAEs show that a latent space obtained with the VAE objective can result in coherent interpolations (Bowman et al., 2016). While this is encouraging, training good VAEs with smooth and expressive latent spaces is challenging. Specifically, for interpretable directions to exist, we need a model which (i) does not ignore the latent variable, to produce good samples, and (ii) has a continuous latent space, to allow controllable manipulation. Ignoring the latent variable is a known problem of VAEs. It arises because of the KL vanishing problem: over the course of training, the KL divergence part of the loss may drop to 0, which indicates that the model ignores the latent variable.
There exist many ways to alleviate this issue (Yang et al., 2017; Fang et al., 2019; Fu et al., 2019; Zhu et al., 2020); one of the simpler ones is adjusting the weight of the KL loss component according to a specific schedule. Another problem is the latent vacancy problem: differently from images, not all regions of the latent space are occupied by the posterior distribution (Xu et al., 2020). In simple words, text latent spaces tend to have "holes" where the decoding network fails to generalize. As a result, when the latent codes are manipulated, the modified codes often land in these holes or vacant regions of the posterior latent space. If this happens, the model cannot decode properly. In light of the above, discovery of interpretable directions in text latent spaces is possible only with a strong model. Therefore, we use the current state-of-the-art model OPTIMUS (Li et al., 2020). It is a recent large-scale variational autoencoder which initializes the encoder with BERT (Devlin et al., 2019) and the decoder with GPT2 (Radford et al.). In addition to the model's high capacity, we use it because of the available checkpoints and reported results on latent space manipulation. 2.2 CONTROLLABLE GENERATION FOR TEXT DATA. Latent variable models. A natural way to achieve text generation with required attributes is using latent variable text generation models. The idea is that information about the attribute value is encoded in the latent code, and to obtain samples with the desired property one has to fix the corresponding component (direction) of the code. For example, several works learn latent spaces with disentangled representations of content and style (Hu et al., 2017; Logeswaran et al., 2018; Lample et al., 2019; Yang et al., 2018; Shen et al., 2017; John et al., 2019). After that, to generate sentences in a specific style, the style vector is fixed. Depending on the approach, this style vector can either be estimated by encoding sentences with the desired attribute or be directly produced by specifying the structured latent code (e.g., a one-hot encoding of an attribute). Another line of research shows that it is possible to achieve attribute manipulation by moving in the latent space along specific vectors. These vectors, however, are found using data labelled with the attribute, i.e., with supervision. For example, Shen et al. (2020) change the tense of a sentence by adding to its latent representation a "tense vector" computed as the difference of averaged representations of sentences with different tenses; Wang et al. (2019) use gradients of an attribute classifier. One of the first successful methods that learns a disentangled latent space is the work by Xu et al. (2020): they use basis vectors in a constrained latent space; however, this involves training a model with a structured latent space, which is rather complicated. Autoregressive models. Controllable generation for standard autoregressive language models is usually achieved by either prepending an attribute to the input sequence as a prompt (Keskar et al., 2019), training an additional component of the model (Chan et al., 2020), or adjusting the decoding result with additional attribute-specific language models (Dathathri et al., 2020). A more thorough comparison of approaches to controlled text generation can be found in Prabhumoye et al. (2020).
Note that all these approaches require supervision and substantial changes to either the training or the generation procedure, whereas our approach is applicable to any variational autoencoder. 2.3 INTERPRETABLE DIRECTIONS MINING IN IMAGE GANS. To the best of our knowledge, there are only three works which discover interpretable latent directions in an unsupervised way, and all of them operate on GANs for images. Two of them are not applicable to texts directly. In Voynov & Babenko (2020), the interpretable directions are trained: these directions are the ones which can be recognized by a separate reconstructor based on two samples, from the original and a shifted latent vector. Peebles et al. (2020) propose to learn disentangled attributes by minimizing the sum of squared off-diagonal terms of the generator Hessian matrix. Both approaches require backpropagation through sampling and are therefore not applicable directly to texts: unlike images, generated texts are not differentiable with respect to their latent representations. The last approach, that of Härkönen et al. (2020), shows that interpretable controls for image synthesis can be identified by finding principal components of layer outputs of the generator network for several samples from the prior. In our more challenging language domain, instead of sampling from the generator distribution, we take advantage of the availability of the encoder in VAEs and perform PCA on training data representations.
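To make the procedure concrete, here is a minimal Python sketch of this PCA-based direction discovery; the encoder interface `encode` and the number of directions are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def latent_pca_directions(encode, texts, n_directions=10):
    # Encode the training texts into latent vectors (forward pass only,
    # no decoding and no backpropagation through samples).
    Z = np.stack([encode(t) for t in texts])
    Z = Z - Z.mean(axis=0)  # center before PCA
    # Right singular vectors of the centered matrix are the principal directions.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:n_directions]

# A candidate manipulation then decodes z + alpha * direction for a scalar alpha.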
This paper proposes a simple approach to discover interpretable latent manipulations in trained text VAEs. The method essentially involves performing PCA on the latent representations to find directions that maximize variance. The authors argue that this results in more interpretable directions. The method is applied on top of a VAE model (OPTIMUS), and the authors argue that different directions discovered by PCA correspond to interpretable concepts.
SP:ce75f565c3c17363695c9e39f28b49a66e3731b8
Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures
$\sqrt{n}$ dependence. We demonstrate the usefulness of our tail bounds by showing that they lead to estimates of the test loss achievable with several neural network architectures trained on MNIST and Fashion-MNIST that match the state-of-the-art bounds available in the literature. 1 INTRODUCTION. In recent years, there has been a surge of interest in the use of information-theoretic techniques for bounding the loss of learning algorithms. While the first results of this flavor can be traced to the probably approximately correct (PAC)-Bayesian approach (McAllester, 1998; Catoni, 2007) (see also (Guedj, 2019) for a recent review), the connection between loss bounds and classical information-theoretic measures was made explicit in the works of Russo & Zou (2016) and Xu & Raginsky (2017), where bounds on the average population loss were derived in terms of the mutual information between the training data and the output hypothesis. Since then, these average loss bounds have been tightened (Bu et al., 2019; Asadi et al., 2018; Negrea et al., 2019). Furthermore, the information-theoretic framework has also been successfully applied to derive tail probability bounds on the population loss (Bassily et al., 2018; Esposito et al., 2019; Hellström & Durisi, 2020a). Of particular relevance to the present paper is the random-subset setting, introduced by Steinke & Zakynthinou (2020) and further studied in (Hellström & Durisi, 2020b; Haghifam et al., 2020). In this setting, a random vector $S$ is used to select $n$ training samples $Z(S)$ from a larger set $\tilde{Z}$ of $2n$ samples. Then, bounds on the average population loss are derived in terms of the conditional mutual information (CMI) $I(W; S|\tilde{Z})$ between the chosen hypothesis $W$ and the random vector $S$ given the set $\tilde{Z}$. The bounds obtained by Xu & Raginsky (2017) depend on the mutual information $I(W; Z)$, a quantity that can be unbounded if $W$ reveals too much about the training set $Z$. In contrast, bounds for the random-subset setting are always finite, since $I(W; S|\tilde{Z})$ is never larger than $n$ bits. Most information-theoretic population loss bounds mentioned thus far are given by the training loss plus a term with a $\sqrt{I_M(P_{WZ})/n}$ dependence, where $I_M(P_{WZ})$ denotes an information measure, such as mutual information or maximal leakage (Issa et al., 2020). Assuming that the information measure grows at most polylogarithmically with $n$, the convergence rate of the population loss to the training loss is $\tilde{O}(1/\sqrt{n})$, where the $\tilde{O}$-notation hides logarithmic factors. This is sometimes referred to as a slow rate. In the context of bounds on the excess risk, defined as the difference between the achieved population loss for a chosen hypothesis $w$ and its infimum over the hypothesis class, it is known that slow rates are optimal for worst-case distributions and hypothesis classes (Talagrand, 1994). However, it is also known that under the assumption of realizability (i.e., the existence of a $w$ in the hypothesis class such that the population loss $L_{P_Z}(w) = 0$) and when the hypothesis class is finite, the dependence on the sample size can be improved to $\tilde{O}(1/n)$ (Vapnik, 1998, Chapter 4). This is referred to as a fast rate. Excess risk bounds with fast rates for randomized classifiers have also been derived, under certain additional conditions, for both bounded losses (Van Erven et al., 2015) and unbounded losses (Grünwald & Mehta, 2020).
Notably , Steinke & Zakynthinou ( 2020 , Thm . 2 ( 3 ) ) derive a population loss bound whose dependence on n is I ( W ; S|Z̃ ) /n . The price for this improved dependence is that the training loss that is added to the n-dependent term is multiplied by a constant larger than 1 . Furthermore , ( Steinke & Zakynthinou , 2020 , Thm . 8 ) shows that if the Vapnik-Chervonenkis ( VC ) dimension of the hypothesis class is finite , there exists an empirical risk minimizer ( ERM ) whose CMI grows at most logarithmically with n. This implies that the CMI approach leads to fast-rate bounds in certain scenarios . However , the result in ( Steinke & Zakynthinou , 2020 , Thm . 2 ( 3 ) ) pertains only to the average population loss : no tail bounds on the population loss are provided . Throughout the paper , we will , with an abuse of terminology , refer to bounds with an n-dependence of the form IM ( PWZ ) /n as fast-rate bounds . Such bounds are also known as linear bounds ( Dziugaite et al. , 2020 ) . Note that the n-dependence of the information measure IM ( PWZ ) has to be at most polylogarithmic for such bounds to actually achieve a fast rate in the usual sense . An intriguing open problem in statistical learning is to find a theoretical justification for the capability of overparameterized neural networks ( NNs ) to achieve good generalization performance despite being able to memorize randomly labeled training data sets ( Zhang et al. , 2017 ) . As a consequence of this behavior , classical population loss bounds that hold uniformly over a given hypothesis class , such as VC bounds , are vacuous when applied to overparameterized NNs . This has stimulated recent efforts aimed at obtaining tighter population loss bounds that are algorithm-dependent or data-dependent . In the past few years , several studies have shown that promising bounds are attainable by using techniques from the PAC-Bayesian literature ( Dziugaite & Roy , 2017 ; Zhou et al. , 2019 ; Dziugaite et al. , 2020 ) . The PAC-Bayesian approach entails using the Kullback-Leibler ( KL ) divergence to compare the distribution on the weights of the NN induced by training to some reference distribution . These distributions are referred to as the posterior and the prior , respectively . Recently , Dziugaite et al . ( 2020 ) used data-dependent priors to obtain state-of-the-art bounds for LeNet-5 trained on MNIST and Fashion-MNIST . In their approach , the available data is used both for training the network and for choosing the prior . This leads to a bound that is tighter than previously available bounds . Furthermore , the bound can be further improved by minimizing the KL divergence between the posterior and the chosen prior during training . One drawback of the PAC-Bayesian approach is that it applies only to stochastic NNs , whose weights are randomly chosen each time the network is used , and not to deterministic NNs with fixed weights . Information-theoretic bounds have also been derived for iterative , noisy training algorithms such as stochastic gradient Langevin dynamics ( SGLD ) ( Bu et al. , 2019 ) . These bounds lead to nonvacuous estimates of the population loss of overparameterized NNs that are trained using SGLD through the use of data-dependent priors ( Negrea et al. , 2019 ) . However , these bounds do not apply to deterministic NNs , nor to standard stochastic gradient descent ( SGD ) training . Furthermore , the bounds pertain to the average population loss , and not to its tails . 
Although the techniques yielding these estimates can be adapted to the PAC-Bayesian setting, as discussed by Negrea et al. (2019, App. I), the resulting bounds are generally loose. 1.1 CONTRIBUTIONS. In this paper, we extend the fast-rate average loss bound by Steinke & Zakynthinou (2020) to the PAC-Bayesian and the single-draw settings. We then use the resulting PAC-Bayesian and single-draw bounds to characterize the test loss of NNs used to classify images from the MNIST and Fashion-MNIST data sets. The single-draw bounds can be applied to deterministic NNs trained through SGD but with Gaussian noise added to the final weights, whereas the PAC-Bayesian bounds apply only to randomized neural networks, whose weights are drawn from a Gaussian distribution each time the network is used. For the same setup, we also evaluate the slow-rate PAC-Bayesian and single-draw bounds from (Hellström & Durisi, 2020b). Our numerical experiments reveal that both the slow-rate bounds from (Hellström & Durisi, 2020b) and the newly derived fast-rate bounds are nonvacuous. Furthermore, for some settings, the fast-rate bounds presented in this paper are quantitatively stronger than the corresponding slow-rate ones from (Hellström & Durisi, 2020b), and essentially match the best bounds available in the literature for SGD-trained NNs (Dziugaite et al., 2020). 1.2 PRELIMINARIES. We now detail some notation and describe the random-subset setting introduced in (Steinke & Zakynthinou, 2020). Let $\mathcal{Z}$ be the instance space, $\mathcal{W}$ be the hypothesis space, and $\ell : \mathcal{W} \times \mathcal{Z} \to \mathbb{R}^+$ be the loss function. Throughout the paper, we will assume that the range of $\ell(w, z)$ is restricted to $[0, 1]$ for all $w \in \mathcal{W}$ and all $z \in \mathcal{Z}$. A typical example of such a loss function is the classification error. In this setting, the sample $Z$ consists of an example $X \in \mathcal{X}$ and a corresponding label $Y \in \mathcal{Y}$. Then, the loss is given by $\ell(W, Z) = 1\{f_W(X) \neq Y\}$, where $f_W(\cdot)$ is the map from $\mathcal{X}$ to $\mathcal{Y}$ induced by the hypothesis $W$. We note that, when applying our bounds to NNs, the function $\ell(\cdot, \cdot)$ used to characterize the performance of the network does not necessarily need to coincide with the loss function used when training the NN. For instance, one could use the (unbounded) cross-entropy loss when training the NN, and apply the bounds for the scenario in which $\ell(\cdot, \cdot)$ is the classification error. In the random-subset setting, $2n$ training samples $\tilde{Z} = (\tilde{Z}_1, \ldots, \tilde{Z}_{2n})$ are available, with all entries of $\tilde{Z}$ being drawn independently from some distribution $P_Z$ on $\mathcal{Z}$. However, only a randomly selected subset of cardinality $n$ is actually used for training. Following (Steinke & Zakynthinou, 2020), we assume that the training data $Z(S)$ is selected as follows. Let $S = (S_1, \ldots, S_n)$ be an $n$-dimensional random vector, the elements of which are drawn independently from a $\mathrm{Bern}(1/2)$ distribution and are independent of $\tilde{Z}$. Then, for $i = 1, \ldots, n$, the $i$th training sample in $Z(S)$ is $Z_i(S_i) = \tilde{Z}_{i + S_i n}$. Thus, the binary variable $S_i$ determines whether the training set $Z(S)$ will contain the sample $\tilde{Z}_i$ or the sample $\tilde{Z}_{i+n}$. The selected training procedure, including the loss function used for training, will determine the conditional distribution $P_{W|Z(S)}$ on the hypothesis class given the training data. For a given $W \sim P_{W|Z(S)}$, we let $L_{Z(S)}(W) = \frac{1}{n} \sum_{i=1}^n \ell(W, Z_i(S_i))$ denote the training loss.
Furthermore, we let $\bar{S}$ denote the modulo-2 complement of $S$. Then $L_{Z(\bar{S})}(W)$ can be interpreted as a test loss, since $W$ is conditionally independent of $Z(\bar{S})$ given $Z(S)$. Finally, we note that the average over $(\tilde{Z}, S)$ of the test loss is the population loss $L_{P_Z}(W) = \mathbb{E}_{P_{\tilde{Z}S}}[L_{Z(\bar{S})}(W)] = \mathbb{E}_{P_Z}[\ell(W, Z)]$. Our bounds will depend on several different information-theoretic quantities, which we shall introduce next. The information density $\imath(W, Z)$ between $W$ and $Z$ is defined as $\imath(W, Z) = \log \frac{dP_{WZ}}{dP_W P_Z}$, where $\frac{dP_{WZ}}{dP_W P_Z}$ is the Radon-Nikodym derivative of $P_{WZ}$ with respect to $P_W P_Z$. The information density is well-defined if $P_{WZ}$ is absolutely continuous with respect to $P_W P_Z$, denoted by $P_{WZ} \ll P_W P_Z$. The conditional information density $\imath(W, S|\tilde{Z})$ between $W$ and $S$ given $\tilde{Z}$ is defined as $\imath(W, S|\tilde{Z}) = \log \frac{dP_{W\tilde{Z}S}}{dP_{W|\tilde{Z}} P_{\tilde{Z}S}}$, provided that $P_{W\tilde{Z}S} \ll P_{W|\tilde{Z}} P_{\tilde{Z}S}$. The mutual information can be obtained as $I(W; Z) = \mathbb{E}_{P_{WZ}}[\imath(W, Z)]$ and the conditional mutual information as $I(W; S|\tilde{Z}) = \mathbb{E}_{P_{W\tilde{Z}S}}[\imath(W, S|\tilde{Z})]$. We will also need the KL divergences $D(P_{W|Z} \| P_W) = \mathbb{E}_{P_{W|Z}}[\imath(W, Z)]$ and $D(P_{W|\tilde{Z}S} \| P_{W|\tilde{Z}}) = \mathbb{E}_{P_{W|\tilde{Z}S}}[\imath(W, S|\tilde{Z})]$. In practical applications, the marginal distribution $P_W$ is not available, since $P_Z$ is unknown. Furthermore, $P_{W|\tilde{Z}}$ is also difficult to compute, since marginalizing $P_S P_{W|\tilde{Z}S}$ over $S$ involves performing training $2^n$ times. Hence, bounds depending on $\imath(W, Z)$ or on $\imath(W, S|\tilde{Z})$ cannot typically be evaluated. Therefore, it will be convenient to replace the information density $\imath(W, Z)$ with the proxy $\log \frac{dP_{WZ}}{dQ_W P_Z}$, and $\imath(W, S|\tilde{Z})$ with $\log \frac{dP_{W\tilde{Z}S}}{dQ_{W|\tilde{Z}} P_{\tilde{Z}S}}$. Here, $Q_W$ and $Q_{W|\tilde{Z}}$ are suitably chosen auxiliary distributions (priors) that are used in place of the intractable, true marginals.
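As a concrete illustration, the following minimal Python sketch builds the training and test halves from the supersample exactly as described above; the data representation is an assumption made for illustration only.

import numpy as np

def random_subset_split(Z_tilde, rng=None):
    # Z_tilde holds 2n i.i.d. samples; S_i ~ Bern(1/2) picks, for each i,
    # whether sample i or sample i+n enters the training set (0-indexed).
    rng = rng or np.random.default_rng()
    n = len(Z_tilde) // 2
    S = rng.integers(0, 2, size=n)
    train = [Z_tilde[i + S[i] * n] for i in range(n)]        # Z_i(S_i)
    test = [Z_tilde[i + (1 - S[i]) * n] for i in range(n)]   # complement S-bar
    return train, test, S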
This paper extends results of prior work by Steinke and Zakynthinou, by providing generalization bounds in the PAC-Bayesian and single-draw settings that depend on the conditional mutual information. The emphasis in this work is on obtaining fast rates ($1/n$ vs. $1/\sqrt{n}$). The authors also conduct empirical experiments showing how the fast rate bounds they propose can be useful for obtaining non-vacuous generalization bounds in the context of over-parameterized neural networks.
SP:b9d78677e836fddeab78615ad35e9545d9c1d08f
Neural Time-Dependent Partial Differential Equation
1 INTRODUCTION. The research of time-dependent partial differential equations (PDEs) is regarded as one of the most important disciplines in applied mathematics. PDEs appear ubiquitously in a broad spectrum of fields including physics, biology, chemistry, and finance, to name a few. Despite their fundamental importance, most PDEs cannot be solved analytically and have to rely on numerical solving methods. Developing efficient and accurate numerical schemes for solving PDEs, therefore, has been an active research area over the past few decades (Courant et al., 1967; Osher & Sethian, 1988; LeVeque; Cockburn et al., 2012; Thomas, 2013; Johnson, 2012). Still, devising stable and accurate schemes with acceptable computational cost is a difficult task, especially when nonlinear and (or) high-dimensional PDEs are considered. Additionally, PDE models emerging from science and engineering disciplines usually require large amounts of empirical data for model calibration and validation, and determining the multidimensional parameters in such a PDE system poses another challenge (Peng et al., 2020). Deep learning is considered to be the state-of-the-art tool in classification and prediction of nonlinear inputs, such as image, text, and speech (Litjens et al., 2017; Devlin et al., 2018; LeCun et al., 1998; Krizhevsky et al., 2012; Hinton et al., 2012). Recently, considerable efforts have been made to employ deep learning tools in designing data-driven methods for solving PDEs (Han et al., 2018; Long et al., 2018; Sirignano & Spiliopoulos, 2018; Raissi et al., 2019). Most of these approaches are based on fully-connected neural networks (FCNNs), convolutional neural networks (CNNs) and multilayer perceptrons (MLPs). These neural network structures usually require an increasing number of layers to improve the predictive accuracy (Raissi et al., 2019), and subsequently lead to a more complicated model due to the additional parameters. Recurrent neural networks (RNNs) are one type of neural network architecture. RNNs predict the next time step value by using the input data from the current and previous states and share parameters across all inputs. This idea (Sherstinsky, 2020) of using current and previous step states to calculate the state at the next time step is not unique to RNNs. In fact, it is ubiquitously used in numerical PDEs. Almost all time-stepping numerical methods applied to solve time-dependent PDEs, such as Euler's, Crank-Nicolson, and high-order Taylor and its variants such as Runge-Kutta (Ascher et al., 1997), update the numerical solution by utilizing solutions from previous steps. This motivates us to ask what would happen if we replace the previous step data in the neural network with numerical solution data of a PDE supported on grids. It is possible that the neural network behaves like a time-stepping method; for example, forward Euler's method yields the numerical solution at a new time point as the current state output (Chen et al., 2018). Since the numerical solution on each grid point (for finite difference) or grid cell (for finite element), computed at a set of contiguous time points, can be treated as neural network input in the form of one time sequence of data, the deep learning framework can be trained to predict any time-dependent PDEs from time series data supported on some grids if the bidirectional structure is applied (Huang et al., 2015; Schuster & Paliwal, 1997).
In other words, the supervised training process can be regarded as a practice of the deep learning framework to learn the numerical solution from the input data, by learning the coefficients on the neural network layers. Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) is a neural network built upon RNNs. Unlike vanilla RNNs, which suffer from losing long term information and a high probability of gradient vanishing or exploding, LSTM has a specifically designed memory cell with a set of new gates such as the input gate and forget gate. Equipped with these new gates, which control when to preserve and pass the information, LSTM is capable of learning long term dependencies without the danger of gradient vanishing or exploding. In the past two decades, LSTM has been widely used in the field of natural language processing (NLP), for tasks such as machine translation, dialogue systems, and question answering systems (Lipton et al., 2015). Inspired by numerical PDE schemes and the LSTM neural network, we propose a new deep learning framework, denoted as Neural-PDE. It simulates multi-dimensional governing laws, represented by time-dependent PDEs, from time series data generated on some grids and predicts the data at the next n time steps. The Neural-PDE is capable of intelligently processing related data from all spatial grids by using the bidirectional (Schuster & Paliwal, 1997) neural network, and thus guarantees the accuracy of the numerical solution and the feasibility of learning any time-dependent PDEs. The detailed structures of the Neural-PDE and the data normalization are introduced in Section 3. The rest of the paper is organized as follows. Section 2 briefly reviews the finite difference method for solving PDEs. Section 3 contains a detailed description of designing the Neural-PDE. In Section 4 and Appendix A of the paper, we apply the Neural-PDE to solve four different PDEs, including the 1-dimensional (1D) wave equation, the 2-dimensional (2D) heat equation, and two systems of PDEs: the inviscid Burgers' equations and a coupled Navier-Stokes-Cahn-Hilliard system, which widely appear in multiscale modeling of complex fluid systems. We demonstrate the robustness of the Neural-PDE, which achieves convergence within 20 epochs with an admissible mean squared error, even when we add Gaussian noise to the input data. 2 PRELIMINARIES. 2.1 TIME DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS. A time-dependent partial differential equation is an equation of the form:
$u_t = f\big(x_1, \cdots, u, \frac{\partial u}{\partial x_1}, \cdots, \frac{\partial u}{\partial x_n}, \frac{\partial^2 u}{\partial x_1 \partial x_1}, \cdots, \frac{\partial^2 u}{\partial x_1 \partial x_n}, \cdots, \frac{\partial^n u}{\partial x_1 \cdots \partial x_n}\big)$, (2.1.1)
where $u = u(t, x_1, \ldots, x_n)$ is the unknown, $x_i \in \mathbb{R}$ are spatial variables, and the operator $f$ maps $\mathbb{R} \mapsto \mathbb{R}$. For example, consider the parabolic heat equation $u_t = \alpha^2 \Delta u$, where $u$ represents the temperature and $f$ is the Laplacian operator $\Delta$. Eq. (2.1.1) can be solved by finite difference methods, which are briefly reviewed below for the self-completeness of the paper. 2.2 FINITE DIFFERENCE METHOD. Consider using a finite difference method (FDM) to solve a two-dimensional second-order PDE of the form:
$u_t = f(x, y, u_x, u_y, u_{xx}, u_{yy})$, $(x, y) \in \Omega \subset \mathbb{R}^2$, $t \in \mathbb{R}^+ \cup \{0\}$, (2.2.1)
with some proper boundary conditions. Let $\Omega = [x_a, x_b] \times [y_a, y_b]$, and $u^n_{i,j} = u(x_i, y_j, t_n)$, (2.2.2) where $t_n = n\delta t$, $0 \le n \le N$, and $\delta t = \frac{T}{N}$ for $t \in [0, T]$ and some large integer $N$.
$x_i = i\delta x$, $0 \le i \le N_x$, $\delta x = \frac{x_b - x_a}{N_x}$ for $x \in [x_a, x_b]$; $y_j = j\delta y$, $0 \le j \le N_y$, $\delta y = \frac{y_b - y_a}{N_y}$ for $y \in [y_a, y_b]$. $N_x$ and $N_y$ are integers. The central difference method approximates the spatial derivatives as follows (Thomas, 2013):
$u_x(x_i, y_j, t) = \frac{1}{2\delta x}(u_{i+1,j} - u_{i-1,j}) + O(\delta x^2)$, (2.2.3)
$u_y(x_i, y_j, t) = \frac{1}{2\delta y}(u_{i,j+1} - u_{i,j-1}) + O(\delta y^2)$, (2.2.4)
$u_{xx}(x_i, y_j, t) = \frac{1}{\delta x^2}(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) + O(\delta x^2)$, (2.2.5)
$u_{yy}(x_i, y_j, t) = \frac{1}{\delta y^2}(u_{i,j+1} - 2u_{i,j} + u_{i,j-1}) + O(\delta y^2)$. (2.2.6)
To this end, the explicit time-stepping scheme that updates the next-step solution $u^{n+1}$ is given by:
$u^{n+1}_{i,j} \approx U^{n+1}_{i,j} = U^n_{i,j} + \delta t\, f(x_i, y_j, U^n_{i,j}, U^n_{i,j-1}, U^n_{i,j+1}, U^n_{i+1,j}, U^n_{i-1,j})$ (2.2.7)
$\equiv F(x_i, y_j, \delta x, \delta y, \delta t, U^n_{i,j}, U^n_{i,j-1}, U^n_{i,j+1}, U^n_{i+1,j}, U^n_{i-1,j})$. (2.2.8)
Apparently, the finite difference method (2.2.7) for updating $u^{n+1}$ at a grid point relies on the previous time steps' solutions, supported on the grid point and its neighbours. The scheme (2.2.7) updates $u^{n+1}_{i,j}$ using four points of $u^n$ values (see Figure 1). Similarly, the finite element method (FEM) approximates the new solution by calculating the corresponding mesh cell coefficient (see Appendix), which is updated by its related nearby coefficients on the mesh. From this perspective, one may regard the numerical schemes for solving time-dependent PDEs as methods catching the information from the neighbourhood data of interest. 3 PROPOSED METHOD. 3.1 MATHEMATICAL MOTIVATION. A recurrent neural network, including LSTM, is an artificial neural network structure of the form (Lipton et al., 2015):
$h_t = \sigma(W_{hx} x_t + W_{hh} h_{t-1} + b_h) \equiv \sigma_a(x_t, h_{t-1}) \equiv \sigma_b(x_0, x_1, x_2, \cdots, x_t)$, (3.1.1)
where $x_t \in \mathbb{R}^d$ is the input data of the $t$-th state and $h_{t-1} \in \mathbb{R}^h$ denotes the value processed in the previous state by the hidden layers. The output $y_t$ of the current state is updated from the current state value $h_t$:
$y_t = \sigma(W_{hy} h_t + b_y)$ (3.1.2) $\equiv \sigma_c(h_t) \equiv \sigma_d(x_0, x_1, x_2, \cdots, x_t)$. (3.1.3)
Here $W_{hx} \in \mathbb{R}^{h \times d}$, $W_{hh} \in \mathbb{R}^{h \times h}$, $W_{hy} \in \mathbb{R}^{h \times h}$ are the weight matrices, the vectors $b_h, b_y \in \mathbb{R}^h$ are the bias coefficients, and $\sigma, \sigma_a, \sigma_b, \sigma_c, \sigma_d$ are the corresponding activation and mapping functions. With a proper design of the input and forget gates, LSTM can effectively yield better control over the gradient flow and better preserve useful information from long-range dependencies (Graves & Schmidhuber, 2005). Now consider a temporally continuous vector function $u \in \mathbb{R}^n$ given by an ordinary differential equation of the form:
$\frac{du(t)}{dt} = g(u(t))$. (3.1.4)
Let $u^n = u(t = n\delta t)$; a forward Euler's method for solving $u$ can be easily derived from Taylor's theorem, which gives the following first-order accurate approximation of the time derivative:
$\frac{du^n}{dt} = \frac{u^{n+1} - u^n}{\delta t} + O(\delta t)$. (3.1.5)
Then we have:
$\frac{du}{dt} = g(u) \xrightarrow{(3.1.5)} u^{n+1} = u^n + \delta t\, g(u^n) + O(\delta t^2) \rightarrow \hat{u}^{n+1} = f_1(\hat{u}^n) = \underbrace{f_1 \circ f_1 \circ \cdots f_1}_{n}(\hat{u}^0)$ (3.1.6)
Here $\hat{u}^n \approx u(n\delta t)$ is the numerical approximation and $f_1 \equiv u^n + \delta t\, g(u^n) : \mathbb{R}^n \to \mathbb{R}^n$. Combining equations (3.1.1) and (3.1.6), one may notice that residual networks, recurrent neural networks and also LSTM networks can be regarded as numerical schemes for solving time-dependent differential equations if more layers are added and smaller time steps are taken (Chen et al., 2018).
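As a concrete instance of the explicit update (2.2.7), the following minimal Python sketch advances the 2D heat equation $u_t = \alpha^2 \Delta u$ one step on a uniform grid using the central differences above; the grid layout and boundary handling are illustrative assumptions.

import numpy as np

def heat_step(U, alpha, dt, dx, dy):
    # One explicit step of u_t = alpha^2 * (u_xx + u_yy) via the five-point
    # stencil: each interior value is updated from its four grid neighbours.
    V = U.copy()
    V[1:-1, 1:-1] = U[1:-1, 1:-1] + dt * alpha**2 * (
        (U[2:, 1:-1] - 2.0 * U[1:-1, 1:-1] + U[:-2, 1:-1]) / dx**2
        + (U[1:-1, 2:] - 2.0 * U[1:-1, 1:-1] + U[1:-1, :-2]) / dy**2
    )
    return V  # boundary values are kept fixed (Dirichlet-type assumption)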
Canonical structures of such recurrent neural networks usually calculate the current state value from the previous time step value $h_{t-1}$ and the current state input $x_t$. Similarly, in numerical PDEs, the next-step data at a grid point is updated from the previous (and current) values on its nearby grid points (see Eq. 2.2.7). Thus, what if we replace the temporal inputs $h_{t-1}$ and $x_t$ with spatial information? A simple sketch of the upwinding method for a 1d example of $u(x, t)$:
$u_t + \nu u_x = 0$ (3.1.7)
will be:
$u^{n+1}_i = u^n_i - \nu \frac{\delta t}{\delta x}(u^n_i - u^n_{i-1}) + O(\delta x, \delta t) \rightarrow \hat{u}^{n+1}_i = f_2(\hat{u}^n_{i-1}, \hat{u}^n_i)$ (3.1.8)
$\equiv f_\theta(f_\eta(x_i, h_{i-1}(u))) = f_{\theta,\eta}(\hat{u}^n_0, \hat{u}^n_1, \cdots, \hat{u}^n_{i-1}, \hat{u}^n_i) = v^{n+1}_i$, (3.1.9)
$x_i = \hat{u}^n_i, \quad h_{i-1}(\hat{u}) = \sigma(\hat{u}^n_{i-1}, h_{i-2}(\hat{u})) \equiv f_\eta(\hat{u}^n_0, \hat{u}^n_1, \hat{u}^n_2, \cdots, \hat{u}^n_{i-1})$. (3.1.10)
Here let $v^{n+1}_i$ be the prediction of $\hat{u}^{n+1}_i$ processed by the neural network. We replace the temporal previous state $h_{t-1}$ with the spatial grid value $h_{i-1}$ and input the numerical solution $\hat{u}^n_i \approx u(i\delta x, n\delta t)$ as the current state value, which indicates that the neural network could be seen as a forward Euler method for equation 3.1.7 (Lu et al., 2018). The function $f_2 \equiv \hat{u}^n_i - \nu \frac{\delta t}{\delta x}(\hat{u}^n_i - \hat{u}^n_{i-1}) : \mathbb{R} \to \mathbb{R}$, the function $f_\theta$ represents the dynamics of the hidden layers in the decoder with parameters $\theta$, and $f_\eta$ specifies the dynamics of the LSTM layer (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005) in the encoder with parameters $\eta$. The function $f_{\theta,\eta}$ simulates the dynamics of the Neural-PDE with parameters $\theta$ and $\eta$. By applying a bidirectional neural network, all grid data are transferred, which enables the LSTM to simulate the PDE as:
$v^{n+1}_i = f_\theta(f_\eta(h_{i+1}(\hat{u}), \hat{u}^n_i, h_{i-1}(\hat{u})))$ (3.1.11)
$h_{i+1}(\hat{u}) \equiv f_\eta(\hat{u}^n_{i+1}, \hat{u}^n_{i+2}, \hat{u}^n_{i+3}, \cdots, \hat{u}^n_k)$. (3.1.12)
For a time-dependent PDE, if we map all our grid data into an input matrix which contains the information of $\delta x$ and $\delta t$, then the neural network will regress such coefficients as constants and will learn and filter the physical rules from the data on all k mesh grids as:
$v^{n+1}_i = f_{\theta,\eta}(\hat{u}^n_0, \hat{u}^n_1, \hat{u}^n_2, \cdots, \hat{u}^n_k)$ (3.1.13)
The LSTM neural network is designed to overcome the vanishing gradient issue through its hidden layers; we therefore use such a recurrent structure to increase the stability of the numerical approach in deep learning. The highly nonlinear function $f_{\theta,\eta}$ simulates the dynamics of the updating rules for $u^{n+1}_i$, which works in a way similar to a finite difference method (Section 2.2) or a finite element method. 3.2 NEURAL-PDE. In particular, we use the bidirectional LSTM (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005) to better retain the state information from data on grid points which are neighbourhoods in the mesh but far away in the input matrix. The right frame of Figure 3 shows the overall design of the Neural-PDE. Denote the time series data at the collocation points as $a^N_1, a^N_2, \cdots, a^N_k$ with $a^N_i = [\hat{u}^0_i, \hat{u}^1_i, \cdots, \hat{u}^N_i]$ at the $i$-th point, where the superscript represents different time points. The Neural-PDE takes the past states $\{a^N_1, a^N_2, \cdots, a^N_k\}$ of all collocation points, and outputs the predicted future states $\{b^M_1, b^M_2, \cdots, b^M_k\}$, where $b^M_i = [v^{N+1}_i, v^{N+2}_i, \cdots, v^{N+M}_i]$ is the Neural-PDE prediction for the $i$-th collocation point at time points from N+1 to N+M. The data from time points 0 to N are the training data set.
The Neural-PDE is an encoder-decoder style sequence model that first maps the input data to a low-dimensional latent space:
$h_i = \overrightarrow{\mathrm{LSTM}}(a_i) \oplus \overleftarrow{\mathrm{LSTM}}(a_i)$, (3.2.1)
where $\oplus$ denotes concatenation and $h_i$ is the latent embedding of point $a_i$ under the environment. One then decodes with another bidirectional LSTM followed by a dense layer:
$v_i = (\overrightarrow{\mathrm{LSTM}}(h_i) \oplus \overleftarrow{\mathrm{LSTM}}(h_i)) \cdot W$, (3.2.2)
where W is the learnable weight matrix in the dense layer. During the training process, the mean squared error (MSE) loss L is used, as we typically do not know the specific form of the PDE:
$L = \sum_{t=N+1}^{N+M} \sum_{i=1}^{k} \| \hat{u}^t_i - v^t_i \|^2$. (3.2.3)
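A minimal PyTorch sketch of this encoder-decoder, matching (3.2.1)-(3.2.2), might look as follows; the tensor layout, hidden size, and prediction horizon are illustrative assumptions rather than the authors' exact configuration.

import torch.nn as nn

class NeuralPDE(nn.Module):
    def __init__(self, n_points, hidden=64, horizon=10):
        super().__init__()
        # Bidirectional encoder and decoder; outputs concatenate both directions.
        self.encoder = nn.LSTM(n_points, hidden, bidirectional=True, batch_first=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.dense = nn.Linear(2 * hidden, n_points)  # the learnable matrix W
        self.horizon = horizon

    def forward(self, u_past):          # u_past: (batch, N, n_points)
        h, _ = self.encoder(u_past)     # latent embeddings, forward and backward
        h, _ = self.decoder(h)
        return self.dense(h[:, -self.horizon:, :])  # predictions for M future steps

Training would then minimize the MSE in (3.2.3) between these outputs and the reference solution data.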
This work proposes a sequence-to-sequence approach for learning the time evolution of PDEs. The method employs a bi-directional LSTM to predict solutions of a PDE-based formulation for a chosen number of time steps. By itself this is an interesting and important goal, but the method does not seem to contain any novel components apart from demonstrating that LSTMs can be used to learn data from PDEs. The paper only compares to a simple form of PINNs, but not to the variety of other time forecasting algorithms available in the deep learning field (LSTMs are just one of many methods used these days, a more state-of-the-art one being, e.g., transformers). In addition, the examples only contain single cases with relatively simple model equations.
SP:29a7b851d3edc2176467adc75ba67cc973a11a37
Experimental Design for Overparameterized Learning with Application to Single Shot Deep Active Learning
1 INTRODUCTION. The impressive performance exhibited by modern machine learning models hinges on the ability to train these models on very large amounts of labeled data. In practice, in many real-world scenarios, even when raw data exists aplenty, acquiring labels might prove challenging and/or expensive. This severely limits the ability to deploy machine learning capabilities in real-world applications. This bottleneck has been recognized early on, and methods to alleviate it have been suggested. Most relevant for our work is the large body of research on active learning or optimal experimental design, which aims at selecting data points to be labeled so as to maximally inform the learning process. Disappointingly, active learning techniques seem to deliver mostly lukewarm benefits in the context of deep learning. One possible reason why experimental design has so far failed to make an impact in the context of deep learning is that such models are overparameterized, and oftentimes are trained to be interpolative (Zhang et al., 2017), i.e., they are trained so that a perfect fit of the training data is found. This raises a conundrum: the classical perspective in statistical learning theory is that overfitting should be avoided since there is a tradeoff between the fit and the complexity of the model. This conundrum is exemplified by the double descent phenomenon (Belkin et al., 2019b; Bartlett et al., 2020): when fixing the model size and increasing the amount of training data, the test error initially goes down, then starts to go up, exploding when the amount of training data approaches the model complexity, and then starts to descend again. This runs counter to statistical intuition, which says that more data implies better learning. Indeed, when using interpolative models, more data can hurt (Nakkiran et al., 2020a)! This phenomenon is exemplified in the curve labeled "Random Selection" in Figure 1. Figure 1 explores the predictive performance of various designs when learning a linear regression model and varying the amount of training data with responses. The fact that more data can hurt further motivates experimental design in the interpolative regime. Presumably, if data is carefully curated, more data should never hurt. Unfortunately, classical optimal experimental design focuses on the underparameterized (and thus, noninterpolative) case. As such, the theory reported in the literature is often not applicable in the interpolative regime. As our analysis shows (see Section 3), the prediction error of interpolative models can either be bias dominated (the first descent phase, i.e., when the training size is very small compared to the number of parameters), variance dominated (near equality of size and parameters) or of mixed nature. However, properly trained underparameterized models tend to have prediction error which is variance dominated, so classical experimental design focuses on variance reduction. As such, naively using classical optimality criteria, such as V-optimality (the one most relevant for generalization error) or others, in the context of interpolation, tends to produce poor results when the prediction error is bias dominated or of mixed nature. This is exemplified in the curve labeled "Classical OED" in Figure 1. The goal of this paper is to understand these regimes, and to propose an experimental design strategy that is well suited for overparameterized models.
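The following minimal Python sketch reproduces this sample-wise effect in the simplest setting considered here, minimum-norm ("ridgeless") least squares with the number of parameters d fixed and random designs; all sizes and the noise level are illustrative.

import numpy as np

def min_norm_risk(d=50, ns=(10, 25, 45, 50, 55, 100, 200), trials=200, sigma=0.1):
    rng = np.random.default_rng(0)
    risks = {}
    for n in ns:
        errs = []
        for _ in range(trials):
            w = rng.standard_normal(d) / np.sqrt(d)
            X = rng.standard_normal((n, d))
            y = X @ w + sigma * rng.standard_normal(n)
            w_hat = np.linalg.pinv(X) @ y          # minimum-norm interpolating solution
            errs.append(np.sum((w_hat - w) ** 2))  # excess risk for isotropic x
        risks[n] = float(np.mean(errs))
    return risks  # the risk typically peaks near n = d, then descends again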
Like much recent work that attempts to understand the double descent phenomenon by analyzing underdetermined linear regression, we too use a simple linear regression model in our analysis of experimental design in the overparameterized case (however, we also consider kernel ridge regression, not only linear interpolative models). We believe that understanding experimental design in the overparameterized linear regression case is a prelude to designing effective design algorithms for deep learning. Indeed, recent theoretical results showed a deep connection between deep learning and kernel learning via the so-called Neural Tangent Kernel (Jacot et al., 2018; Arora et al., 2019a; Lee et al., 2019). Based on this connection, and as a proof-of-concept, we propose a new algorithm for single shot deep active learning. Let us now summarize our contributions: • We analyze the prediction error of learning overparameterized linear models for a given fixed design, revealing three possible regimes that call for different design criteria: bias dominated, variance dominated, and mixed nature. We also reveal an interesting connection between overparameterized experimental design and the column subset selection problem (Boutsidis et al., 2009), transductive experimental design (Yu et al., 2006), and coresets (Sener & Savarese, 2018). We also extend our approach to kernel ridge regression. • We propose a novel greedy algorithm for finding designs for overparameterized linear models. As exemplified in the curve labeled "Overparameterized OED", our algorithm is sometimes able to mitigate the double descent phenomenon, while still performing better than classical OED (though no formal proof of this fact is provided). • We show how our algorithm can also be applied to kernel ridge regression, and report experiments which show that when the number of parameters is in a sense infinite, our algorithm is able to find designs that are better than the state of the art. • We propose a new algorithm for single shot deep active learning, a scarcely treated problem so far, and demonstrate its effectiveness on MNIST. Related Work. The phenomena of benign overfitting and double descent were first recognized in DNNs (Zhang et al., 2017), and later discussed and analyzed in the context of linear models (Zhang et al., 2017; Belkin et al., 2018; 2019a; b; Bartlett et al., 2020). Recently there has also been growing interest in the related phenomenon of "more data can hurt" (Nakkiran et al., 2020a; Nakkiran, 2019; Nakkiran et al., 2020b; Loog et al., 2019). A complementary work discussed the need to consider zero or negative regularization coefficients for large real-life linear models (Kobak et al., 2020). Experimental design is a well-established paradigm in statistics, extensively covered in the literature for the linear case (Pukelsheim, 2006) and the nonlinear case (Pronzato & Pázman, 2013). Its application to pool-based active learning with batch acquisitions was explored by Yu et al. (2006) for linear models and by Hoi et al. (2006) for logistic regression. It was also proposed in the context of deep learning (Sourati et al., 2018). Another related line of work is recent work by Haber and Horesh on experimental design for ill-posed inverse problems (Haber et al., 2008; 2012; Horesh et al., 2010).
Active learning in the context of overparameterized learning was explored by Karzand & Nowak (2020); however, their approach differs from ours significantly since it is based on artificially completing the labels using a minimax approach. In the context of Laplacian regularized Least Squares (LapRLS), which is a generalization of ridge regression, Gu et al. (2012) showed rigorously that the criterion of Yu et al. (2006) is justified as a bound on both the bias and variance components of the expected error. We further show that this bound is in some sense tight only if the parameter norm is one and the noise variance equals the l2 penalty coefficient. In addition, we postulate and show experimentally that in the overparameterized case using a bias-dominant criterion is preferable. Another case in which the bias term does not vanish is when the model is misspecified. For linear and generalized linear models this case has been tackled with reweighting of the loss function. A popular modern approach for pool-based active learning with batch acquisition is coresets (Sener & Savarese, 2018; Geifman & El-Yaniv, 2017; Ash et al., 2019; Pinsler et al., 2019). This approach has been used in the context of active learning for DNNs. 2 UNDERPARAMETERIZED V-OPTIMAL EXPERIMENTAL DESIGN. Consider a noisy linear response model $y = x^T w + \epsilon$, where $\epsilon \sim N(0, \sigma^2)$ and $w \in \mathbb{R}^d$, and assume we are given some data points $x_1, \ldots, x_n$ for which we obtained independent responses $y_i = x_i^T w + \epsilon_i$. Consider the underparameterized case, i.e., $n \ge d$, and furthermore assume that the set $\{x_1, \ldots, x_n\}$ contains at least d independent vectors. The best linear unbiased estimator $\hat{w}$ of w according to the Gauss-Markov theorem is given by $\hat{w} = \arg\min_w \|Xw - y\|_2^2 = X^+ y$, where $X \in \mathbb{R}^{n \times d}$ is a matrix whose rows are $x_1, \ldots, x_n$, $y = [y_1 \ldots y_n]^T \in \mathbb{R}^n$, and $X^+$ is the Moore-Penrose pseudoinverse of X. It is well known that $\hat{w} - w$ is a normal random vector with zero mean and covariance matrix $\sigma^2 M^{-1}$, where $M = X^T X$ is the Fisher information matrix. This implies that $\hat{y}(x) - y(x)$ is also a normal variable with zero mean and variance equal to $\sigma^2 x^T M^{-1} x$. Assume also that x comes from a distribution $\rho$. With that we can further define the excess risk $R(\hat{w}) = \mathbb{E}_{x \sim \rho}[(x^T w - x^T \hat{w})^2]$ and its expectation:
$\mathbb{E}[R(\hat{w})] = \mathbb{E}_{x \sim \rho}[\mathrm{Var}[y(x) - \hat{y}(x)]] = \mathbb{E}_{x \sim \rho}[\sigma^2 x^T M^{-1} x] = \mathrm{Tr}(\sigma^2 M^{-1} C_\rho)$ (1)
where $C_\rho$ is the uncentered second moment matrix of $\rho$: $C_\rho := \mathbb{E}_{x \sim \rho}[x x^T]$. Eq. (1) motivates the so-called V-optimal design criterion: select the dataset $x_1, \ldots, x_n$ so that $\varphi(M) := \mathrm{Tr}(M^{-1} C_\rho)$ is minimized (if we do not have access to $C_\rho$ then it is possible to estimate it by drawing samples from $\rho$). In doing so, we are trying to minimize the expected (with respect to the noise) average (with respect to the data x) prediction variance, since the risk is composed solely of it (due to the fact that the estimator is unbiased). As we shall see, this is in contrast with the overparameterized case, in which the estimator is biased. V-optimality is only one instance of the various statistical criteria used in experimental design. In general experimental design, the focus is on minimizing a preselected criterion $\varphi(M)$ (Pukelsheim, 2006). For example, in D-optimal design $\varphi(M) = \det(M^{-1})$, and in A-optimal design $\varphi(M) = \mathrm{Tr}(M^{-1})$.
However , since minimizing the V-optimality criterion corresponds to minimizing the risk , it is more appropriate when assessing the predictive performance of machine learning models .
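To make the criterion concrete , the following minimal sketch ( our illustration , not code from the paper ) evaluates $\phi(M) = \mathrm{Tr}(M^{-1} C_\rho)$ and greedily selects pool points that reduce it . The function names and the small `jitter` term are our own assumptions ; the jitter merely keeps $M$ invertible before $d$ independent points have been chosen .

```python
import numpy as np

def v_criterion(X_sel, C_rho, jitter=1e-8):
    """phi(M) = Tr(M^{-1} C_rho) for the current design X_sel (rows = points).

    The jitter is a practical device for invertibility, not part of the
    criterion itself.
    """
    d = X_sel.shape[1]
    M = X_sel.T @ X_sel + jitter * np.eye(d)
    return np.trace(np.linalg.solve(M, C_rho))

def greedy_v_design(X_pool, C_rho, n_select):
    """Greedily pick rows of X_pool that minimize the V-optimality criterion."""
    selected, remaining = [], list(range(len(X_pool)))
    for _ in range(n_select):
        scores = [v_criterion(X_pool[selected + [j]], C_rho) for j in remaining]
        best = remaining[int(np.argmin(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Example: estimate C_rho from unlabeled samples drawn from rho.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 5))
C_rho = X_pool.T @ X_pool / len(X_pool)   # uncentered second moment estimate
print(greedy_v_design(X_pool, C_rho, n_select=10))
```

A practical note on the design choice : recomputing the criterion from scratch for every candidate is quadratic in pool size ; rank-one update formulas for $M^{-1}$ would make the greedy loop much cheaper , but the naive version above is clearer as a sketch .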
In this paper, the authors develop a data selection scheme aimed at minimizing a notion of Bayes excess risk for overparametrized linear models. The excess Bayes risk is the expected squared error between the prediction and the target. The authors note that solutions such as V-optimality exist for the underparametrized case (linear regression), and offer extensions to ridge regression. After developing a greedy scheme and a tentative extension to deep learning models, the authors show that their selection scheme can outperform random selection on MNIST with a specific model.
SP:797b07cd8142a35333037bb573db0dfe5dde65ac
Offline Policy Optimization with Variance Regularization
1 INTRODUCTION . Offline batch reinforcement learning ( RL ) algorithms are key to scaling up RL for real world applications , such as robotics ( Levine et al. , 2016 ) and medical problems . This is because offline RL provides the appealing ability for agents to learn from fixed datasets , similar to supervised learning , avoiding continual interaction with the environment , which could be problematic for safety and feasibility reasons . However , significant mismatch between the fixed collected data and the policy that the agent is considering can lead to high variance of value function estimates , a problem encountered by most off-policy RL algorithms ( Precup et al. , 2000 ) . A complementary problem is that the value function can become overly optimistic in areas of state space that are outside the visited batch , leading the agent into data regions where its behavior is poor ( Fujimoto et al. , 2019 ) . Recently there has been some progress in offline RL ( Kumar et al. , 2019 ; Wu et al. , 2019b ; Fujimoto et al. , 2019 ) , trying to tackle both of these problems . In this work , we study the problem of offline policy optimization with variance minimization . To avoid overly optimistic value function estimates , we propose to learn value functions under variance constraints , leading to a pessimistic estimation , which can significantly help offline RL algorithms , especially under large distribution mismatch . We propose a framework for variance minimization in offline RL , such that the obtained estimates can be used to regularize the value function and enable more stable learning under different off-policy distributions . We develop a novel approach for variance regularized offline actor-critic algorithms , which we call Offline Variance Regularizer ( OVR ) . The key idea of OVR is to constrain the policy improvement step via variance regularized value function estimates . Our algorithmic framework avoids the double sampling issue that arises when computing gradients of variance estimates , by instead considering the variance of stationary distribution corrections with per-step rewards , and using the Fenchel transformation ( Boyd & Vandenberghe , 2004 ) to formulate a minimax optimization objective . This allows minimizing variance constraints by instead optimizing dual variables , resulting in simply an augmented reward objective for variance regularized value functions . We show that even with variance constraints , we can ensure policy improvement guarantees , where the regularized value function leads to a lower bound on the true value function , which mitigates the usual overestimation problems in batch RL . The use of Fenchel duality in computing the variance allows us to avoid double sampling , which has been a major bottleneck in scaling up variance-constrained actor-critic algorithms in prior work ( A . & Ghavamzadeh , 2016 ; A . & Fu , 2018 ) . Practically , our algorithm is easy to implement , since it simply involves augmenting the rewards with the dual variables , such that the regularized value function can be implemented on top of any existing offline policy optimization algorithm . We evaluate our algorithm on existing offline benchmark tasks based on continuous control domains . Our empirical results demonstrate that the proposed variance regularization approach is particularly useful when the batch dataset is gathered at random , or when it is very different from the data distributions encountered during training . 2 PRELIMINARIES AND BACKGROUND .
We consider an infinite horizon MDP $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \gamma)$ , where $\mathcal{S}$ is the set of states , $\mathcal{A}$ is the set of actions , $\mathcal{P}$ is the transition dynamics , and $\gamma$ is the discount factor . The goal of reinforcement learning is to maximize the expected return $J(\pi) = \mathbb{E}_{s \sim d_\beta}[V^\pi(s)]$ , where $V^\pi(s)$ is the value function $V^\pi(s) = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s\big]$ , and $\beta$ is the initial state distribution . Considering parameterized policies $\pi_\theta(a \mid s)$ , the goal is to maximize the returns by following the policy gradient ( Sutton et al. , 1999 ) , based on the performance metric defined as : $J(\pi_\theta) = \mathbb{E}_{s_0 \sim \rho, a_0 \sim \pi(s_0)}[Q^{\pi_\theta}(s_0, a_0)] = \mathbb{E}_{(s,a) \sim d_{\pi_\theta}(s,a)}[r(s,a)] \quad (1)$ where $Q^\pi(s,a)$ is the state-action value function , since $V^\pi(s) = \sum_a \pi(a \mid s) Q^\pi(s,a)$ . The policy optimization objective can be equivalently written in terms of the normalized discounted occupancy measure under the current policy $\pi_\theta$ , where $d_\pi(s,a)$ is the state-action occupancy measure , such that the normalized state-action visitation distribution under policy $\pi$ is defined as $d_\pi(s,a) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s, a_t = a \mid s_0 \sim \beta, a \sim \pi(s_0))$ . The equality in equation 1 holds and can be equivalently written based on the linear programming ( LP ) formulation in RL ( see ( Puterman , 1994 ; Nachum & Dai , 2020 ) for more details ) . In this work , we consider the off-policy learning problem under a fixed dataset $\mathcal{D}$ which contains $(s, a, r, s')$ tuples under a known behaviour policy $\mu(a \mid s)$ . Under the off-policy setting , importance sampling ( Precup et al. , 2000 ) is often used to reweight the trajectory under the behaviour data collecting policy , so as to get unbiased estimates of the expected returns . At each time step , the importance sampling correction $\frac{\pi(a_t \mid s_t)}{\mu(a_t \mid s_t)}$ is used to compute the expected return under the entire trajectory as $J(\pi) = (1-\gamma)\, \mathbb{E}_{(s,a) \sim d_\mu(s,a)}\big[\sum_{t=0}^{T} \gamma^t r(s_t, a_t) \big(\prod_{t=1}^{T} \frac{\pi(a_t \mid s_t)}{\mu(a_t \mid s_t)}\big)\big]$ . Recent works ( Fujimoto et al. , 2019 ) have demonstrated that instead of importance sampling corrections , maximizing value functions directly for deterministic or reparameterized policy gradients ( Lillicrap et al. , 2016 ; Fujimoto et al. , 2018 ) allows learning under fixed datasets , by addressing the over-estimation problem , maximizing objectives of the form $\max_\theta \mathbb{E}_{s \sim \mathcal{D}}[Q^{\pi_\theta}(s, \pi_\theta(s))]$ . 3 VARIANCE REGULARIZATION VIA DUALITY IN OFFLINE POLICY OPTIMIZATION . In this section , we first present our approach based on the variance of stationary distribution corrections , as compared to importance re-weighting of episodic returns , in section 3.1 . We then present a derivation of our approach based on Fenchel duality on the variance , to avoid the double sampling issue , leading to a variance regularized offline optimization objective in section 3.2 . Finally , we present our algorithm in Algorithm 1 , where the proposed regularizer can be used in any existing offline RL algorithm . 3.1 VARIANCE OF REWARDS WITH STATIONARY DISTRIBUTION CORRECTIONS . In this work , we consider the variance of rewards under occupancy measures in offline policy optimization . Let us denote the returns as $D^\pi = \sum_{t=0}^{T} \gamma^t r(s_t, a_t)$ , such that the value function is $V^\pi = \mathbb{E}_\pi[D^\pi]$ . The 1-step importance sampling ratio is $\rho_t = \frac{\pi(a_t \mid s_t)}{\mu(a_t \mid s_t)}$ , and the T-step ratio can be denoted $\rho_{1:T} = \prod_{t=1}^{T} \rho_t$ . Considering per-decision importance sampling ( PDIS ) ( Precup et al. , 2000 ) ,
the returns can be similarly written as $D^\pi = \sum_{t=0}^{T} \gamma^t r_t \rho_{0:t}$ . The variance of episodic returns , which we denote by $\mathcal{V}_{\mathcal{P}}(\pi)$ , with off-policy importance sampling corrections can be written as : $\mathcal{V}_{\mathcal{P}}(\pi) = \mathbb{E}_{s \sim \beta, a \sim \mu(\cdot \mid s), s' \sim \mathcal{P}(\cdot \mid s,a)}\big[(D^\pi(s,a) - J(\pi))^2\big]$ . Instead of importance sampling , several recent works have instead proposed marginalized importance sampling with stationary state-action distribution corrections ( Liu et al. , 2018 ; Nachum et al. , 2019a ; Zhang et al. , 2020 ; Uehara & Jiang , 2019 ) , which can lead to lower variance estimators at the cost of introducing bias . Denoting the stationary distribution ratios as $\omega(s,a) = \frac{d_\pi(s,a)}{d_\mu(s,a)}$ , the returns can be written as $W^\pi(s,a) = \omega(s,a)\, r(s,a)$ . The variance of marginalized IS is : $\mathcal{V}_{\mathcal{D}}(\pi) = \mathbb{E}_{(s,a) \sim d_\mu(s,a)}\big[(W^\pi(s,a) - J(\pi))^2\big] = \mathbb{E}_{(s,a) \sim d_\mu(s,a)}[W^\pi(s,a)^2] - \mathbb{E}_{(s,a) \sim d_\mu(s,a)}[W^\pi(s,a)]^2 \quad (2)$ Our key contribution is to first consider the variance of marginalized IS , $\mathcal{V}_{\mathcal{D}}(\pi)$ , itself as a risk constraint in the offline batch optimization setting . We show that constraining the offline policy optimization objective with the variance of marginalized IS , and using the Fenchel-Legendre transformation on $\mathcal{V}_{\mathcal{D}}(\pi)$ , can help avoid the well-known double sampling issue in variance risk constrained RL ( for more details on how to compute the gradient of the variance term , see appendix B ) . We emphasize that the variance here is solely based on returns with occupancy measures , and we do not consider the variance due to the inherent stochasticity of the MDP dynamics . 3.2 VARIANCE REGULARIZED OFFLINE MAX-RETURN OBJECTIVE . We consider the variance regularized off-policy max-return objective with stationary distribution corrections $\omega_{\pi/\mathcal{D}}$ ( which we denote $\omega$ for short for clarity ) in the offline fixed dataset $\mathcal{D}$ setting : $\max_{\pi_\theta} J(\pi_\theta) := \mathbb{E}_{s \sim \mathcal{D}}[Q^{\pi_\theta}(s, \pi_\theta(s))] - \lambda\, \mathcal{V}_{\mathcal{D}}(\omega, \pi_\theta) \quad (3)$ where $\lambda \geq 0$ allows for the trade-off between offline policy optimization and variance regularization ( or equivalently variance risk minimization ) . The max-return objective under $Q^{\pi_\theta}(s,a)$ has been considered in prior works in offline policy optimization ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ) . We show that this form of regularizer encourages variance minimization in offline policy optimization , especially when there is a large data distribution mismatch between the fixed dataset $\mathcal{D}$ and the induced data distribution under policy $\pi_\theta$ . 3.3 VARIANCE REGULARIZATION VIA FENCHEL DUALITY . At first glance , equation 3 seems difficult to optimize , especially when minimizing the variance regularization w.r.t . $\theta$ . This is because finding the gradient of $\mathcal{V}(\omega, \pi_\theta)$ would lead to the double sampling issue , since it contains the square of an expectation term . The key contribution of OVR is to use the Fenchel duality trick on the second term of the variance expression in equation 2 , to regularize the policy optimization objective with the variance of marginalized importance sampling . Applying Fenchel duality , $x^2 = \max_y (2xy - y^2)$ , to the second term of the variance expression , we can transform the variance minimization problem into an equivalent maximization problem , by introducing the dual variables $\nu(s,a)$ .
We have the Fenchel conjugate of the variance term as : $\mathcal{V}(\omega, \pi_\theta) = \max_\nu \big\{ -\tfrac{1}{2}\nu(s,a)^2 + \nu(s,a)\,\omega(s,a)\,r(s,a) + \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}}[\omega(s,a)\,r(s,a)^2] \big\} = \max_\nu \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}}\big[ -\tfrac{1}{2}\nu(s,a)^2 + \nu(s,a)\,\omega(s,a)\,r(s,a) + \omega(s,a)\,r(s,a)^2 \big] \quad (4)$ Regularizing the policy optimization objective with the variance under the Fenchel transformation , we therefore have the overall max-min optimization objective , explicitly written as : $\max_\theta \min_\nu J(\pi_\theta, \nu) := \mathbb{E}_{s \sim \mathcal{D}}[Q^{\pi_\theta}(s, \pi_\theta(s))] - \lambda\, \mathbb{E}_{(s,a) \sim d_{\mathcal{D}}}\big[\big( -\tfrac{1}{2}\nu^2 + \nu \cdot \omega \cdot r + \omega \cdot r^2 \big)(s,a)\big] \quad (5)$
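As a rough illustration of how the inner optimization over $\nu$ in equation ( 5 ) could be handled , the sketch below estimates the dual form of the variance penalty from a sampled batch . This is our own hedged reading of the equations above , not the paper's implementation : $\omega$ is assumed to come from a separate density-ratio estimator , and for fixed $\omega$ and $r$ the pointwise quadratic in $\nu$ is maximized in closed form by $\nu^*(s,a) = \omega(s,a)\,r(s,a)$ ; in the actual algorithm $\nu$ would be a learned function of $(s,a)$ updated alternately with the policy .

```python
import numpy as np

def variance_regularizer(omega, r, nu):
    """Monte Carlo estimate of the dual form inside Eq. (4)/(5):
    E_{(s,a)~d_D}[ -0.5*nu^2 + nu*omega*r + omega*r^2 ] over a sampled batch."""
    return np.mean(-0.5 * nu**2 + nu * omega * r + omega * r**2)

rng = np.random.default_rng(0)
omega = rng.uniform(0.5, 1.5, size=1024)  # assumed stationary-ratio estimates
r = rng.normal(size=1024)                 # batch rewards

# For fixed omega and r, the inner maximization over nu is pointwise and
# quadratic, so the optimal dual variable is simply nu*(s,a) = omega * r.
nu_star = omega * r
lam = 0.1
penalty = lam * variance_regularizer(omega, r, nu_star)
# The policy update would then maximize E[Q] - penalty, which amounts to an
# augmented-reward objective as described in the introduction above.
print(penalty)
```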
This paper proposes a novel algorithm for offline policy optimization. The main idea is to prevent overestimation bias by regularizing against the variance of the importance weighted value estimate. There are two key modifications: (1) using an importance weight from the stationary distribution and (2) using Fenchel duality to introduce a min-max problem to avoid double sampling when estimating the gradient of the variance regularization term. The theory section motivates the use of variance regularization and the experiments show improvements over BCQ when adding the proposed variance regularization algorithm.
SP:4989f7703e106a20401cec0a5058d440720b0379
Quantifying Statistical Significance of Neural Network Representation-Driven Hypotheses by Selective Inference
1 INTRODUCTION . The remarkable predictive performance of deep neural networks ( DNNs ) stems from their ability to learn appropriate representations from data . In order to understand the decision-making process of DNNs , it is thus important to be able to explain and interpret DNN representations . For example , in image classification tasks , knowing the attention region from a DNN representation allows us to understand the reason for the classification . In the past few years , several methods have been developed to explain and interpret DNN representations ( Ribeiro et al. , 2016 ; Bach et al. , 2015 ; Doshi-Velez & Kim , 2017 ; Lundberg & Lee , 2017 ; Zhou et al. , 2016 ; Selvaraju et al. , 2017 ) ; however , some of them have turned out to be unstable and not reproducible ( Kindermans et al. , 2017 ; Ghorbani et al. , 2019 ; Melis & Jaakkola , 2018 ; Zhang et al. , 2020 ; Dombrowski et al. , 2019 ; Heo et al. , 2019 ) . Therefore , it is crucially important to develop a method to quantify the reliability of DNN representations . In this paper , we interpret these representations as hypotheses that are driven by a DNN ( called DNN-driven hypotheses ) and employ the statistical hypothesis testing framework to quantify the reliability of DNN representations . For example , in an image classification task , the reliability of an attention region can be quantified based on the statistical significance of the difference between the attention region and the rest of the image . Unfortunately , however , traditional statistical tests cannot be applied to this problem because the hypothesis ( the attention region in the above example ) itself is selected by the data . A traditional statistical test is valid only when the hypothesis is non-random . Roughly speaking , if a hypothesis is selected by the data , the hypothesis will over-fit to the data and the bias needs to be corrected when assessing the reliability of the hypothesis . Our main contribution in this paper is to introduce a Selective Inference ( SI ) approach for testing the reliability of DNN representations . The basic idea of SI is to perform statistical inference under the condition that the hypothesis is selected . The SI approach has been demonstrated to be effective in the context of feature selection methods such as Lasso . In this paper , in order to introduce SI for DNN representations , we develop a novel SI algorithm based on the homotopy method , which enables us to derive the exact ( non-asymptotic ) conditional sampling distribution of the DNN-driven hypothesis . We use the p-value as a criterion to quantify the reliability of a DNN representation . In the literature , p-values are often misinterpreted , and various sources of misinterpretation have been discussed ( Wasserstein & Lazar , 2016 ) . In this paper , by using SI , we address one of the sources of misinterpreted p-values : p-values are biased when the hypothesis is selected after looking at the data ( often called double-dipping or data dredging ) . We believe our approach is a first significant step toward providing valid p-values for assessing the reliability of DNN representations . Figure 1 shows an example that illustrates the importance of our method . Related works . Several recent approaches have been developed to visualize and understand a trained DNN . Many of these post-hoc approaches ( Mahendran & Vedaldi , 2015 ; Zeiler & Fergus , 2014 ; Dosovitskiy & Brox , 2016 ; Simonyan et al. , 2013 )
have focused on developing visualization tools for the activation maps and/or the filter weights within trained networks . Others have aimed to identify the discriminative regions in an input image , given a trained network ( Selvaraju et al. , 2017 ; Fong & Vedaldi , 2017 ; Zhou et al. , 2016 ; Lundberg & Lee , 2017 ) . In parallel , some recent studies have shown that many popular methods for explanation and interpretation are not stable with respect to perturbations or adversarial attacks on the input data and the model ( Kindermans et al. , 2017 ; Ghorbani et al. , 2019 ; Melis & Jaakkola , 2018 ; Zhang et al. , 2020 ; Dombrowski et al. , 2019 ; Heo et al. , 2019 ) . However , there are no previous studies that quantitatively evaluate the stability and reproducibility of DNN representations within a rigorous statistical inference framework . In the past few years , SI has been actively studied for inference on the features of linear models selected by several feature selection methods , e.g. , Lasso ( Lee et al. , 2016 ; Liu et al. , 2018 ; Duy & Takeuchi , 2020 ) . The basic idea of SI is to make inference conditional on the selection event , which allows us to derive the exact ( non-asymptotic ) sampling distribution of the test statistic . Besides , SI has also been applied to various problems ( Bachoc et al. , 2014 ; Fithian et al. , 2015 ; Choi et al. , 2017 ; Tian et al. , 2018 ; Chen & Bien , 2019 ; Hyun et al. , 2018 ; Bachoc et al. , 2018 ; Loftus & Taylor , 2014 ; Loftus , 2015 ; Panigrahi et al. , 2016 ; Tibshirani et al. , 2016 ; Yang et al. , 2016 ; Suzumura et al. , 2017 ; Duy et al. , 2020 ) . However , to the best of our knowledge , there is no existing study that provides SI for DNNs , which is technically challenging . This study is partly motivated by Tanizaki et al . ( 2020 ) , where the authors provide a framework to compute p-values for image segmentation results provided by graph cut and threshold-based segmentation algorithms . As we demonstrate in this paper , our method can also be used to assess the reliability of DNN-based segmentation results . Contribution . To our knowledge , this is the first study that provides an exact ( non-asymptotic ) inference method for statistically quantifying the reliability of data-driven hypotheses that are discovered from a DNN representation . We propose a novel SI homotopy method , inspired by Duy & Takeuchi ( 2020 ) , for conducting powerful and efficient SI for DNN representations . We conduct experiments on both synthetic and real-world datasets , through which we offer evidence that our proposed method can successfully control the false positive rate , has decent performance in terms of computational efficiency , and provides good results in practical applications . We provide our implementation in the supplementary document and it will be released when this paper is published . 2 PROBLEM STATEMENT . To formulate the problem , we denote an image with $n$ pixels corrupted with Gaussian noise as $X = (X_1, \ldots, X_n)^\top = \mu + \varepsilon , \quad \varepsilon \sim N(0, \Sigma) , \quad (1)$ where $\mu \in \mathbb{R}^n$ is an unknown mean pixel intensity vector and $\varepsilon \in \mathbb{R}^n$ is a vector of Normally distributed noise with covariance matrix $\Sigma$ , which is known or can be estimated from external data . We note that we do not assume that the pixel intensities in an image follow a Normal distribution in Equation ( 1 ) . Instead , we only assume that the vector of noises added to the true pixel values follows a multivariate Normal distribution .
For an image $X$ and a trained DNN , the main target is to identify an attention region ( discriminative/informative region ) in the input image $X$ based on a DNN representation . A pixel is assigned to the attention region if its corresponding value in the representation layer is greater than a pre-defined threshold . We denote the sets of pixels of $X$ divided into the attention region and the non-attention region as $\mathcal{C}^+_X$ and $\mathcal{C}^-_X$ , respectively . Definition 1 . We define $\mathcal{A}(X)$ as the event that the result of dividing the pixels of image $X$ into two sets of pixels $\mathcal{C}^+_X$ and $\mathcal{C}^-_X$ is obtained by applying a DNN on $X$ , i.e. , $\mathcal{A}(X) = \{\mathcal{C}^+_X , \mathcal{C}^-_X\} . \quad (2)$ Quantifying the statistical significance of DNN-driven hypotheses . Given an observed image $x^{\mathrm{obs}} \in \mathbb{R}^n$ sampled from the model ( 1 ) , we can obtain $\mathcal{C}^+_{x^{\mathrm{obs}}}$ and $\mathcal{C}^-_{x^{\mathrm{obs}}}$ by applying the DNN on $x^{\mathrm{obs}}$ . Let us consider a score $\Delta$ that represents the degree to which the attention region differs from the non-attention region . In general , we can define any score as long as it is written in the form $\Delta = \eta^\top x^{\mathrm{obs}}$ . For example , we can define $\Delta$ as the difference in average pixel values between the attention region and the non-attention region , i.e. , $\Delta = m_{\mathcal{C}^+_{x^{\mathrm{obs}}}} - m_{\mathcal{C}^-_{x^{\mathrm{obs}}}} = \frac{1}{|\mathcal{C}^+_{x^{\mathrm{obs}}}|} \sum_{i \in \mathcal{C}^+_{x^{\mathrm{obs}}}} x^{\mathrm{obs}}_i - \frac{1}{|\mathcal{C}^-_{x^{\mathrm{obs}}}|} \sum_{i \in \mathcal{C}^-_{x^{\mathrm{obs}}}} x^{\mathrm{obs}}_i = \eta^\top x^{\mathrm{obs}}$ , where $\eta = \frac{1}{|\mathcal{C}^+_{x^{\mathrm{obs}}}|} \mathbf{1}^n_{\mathcal{C}^+_{x^{\mathrm{obs}}}} - \frac{1}{|\mathcal{C}^-_{x^{\mathrm{obs}}}|} \mathbf{1}^n_{\mathcal{C}^-_{x^{\mathrm{obs}}}}$ , and $\mathbf{1}^n_{\mathcal{C}} \in \mathbb{R}^n$ is a vector whose elements belonging to a set $\mathcal{C}$ are 1 , and 0 otherwise . If the value of $|\Delta|$ is sufficiently large , the difference between $\mathcal{C}^+_{x^{\mathrm{obs}}}$ and $\mathcal{C}^-_{x^{\mathrm{obs}}}$ is significant and the attention region is reliable . To quantify the statistical significance , we consider a statistical hypothesis test with the following null hypothesis $H_0$ and alternative hypothesis $H_1$ : $H_0 : \mu_{\mathcal{C}^+_{x^{\mathrm{obs}}}} = \mu_{\mathcal{C}^-_{x^{\mathrm{obs}}}}$ vs. $H_1 : \mu_{\mathcal{C}^+_{x^{\mathrm{obs}}}} \neq \mu_{\mathcal{C}^-_{x^{\mathrm{obs}}}} , \quad (3)$ where $\mu_{\mathcal{C}^+_{x^{\mathrm{obs}}}}$ and $\mu_{\mathcal{C}^-_{x^{\mathrm{obs}}}}$ are the true means of the pixel values in the attention region and non-attention region , respectively . Given a significance level $\alpha$ ( e.g. , 0.05 ) , we reject $H_0$ if the p-value is smaller than $\alpha$ , which indicates the attention region differs from the non-attention region . Otherwise , we cannot say that the difference is significant . In a standard ( naive ) statistical test , the hypotheses in ( 3 ) are assumed to be fixed , i.e. , non-random . Then , the naive ( two-sided ) p-value is simply given as $p_{\mathrm{naive}} = \mathbb{P}_{H_0}(|\eta^\top X| \geq |\Delta|) = \mathbb{P}_{H_0}(|\eta^\top X| \geq |\eta^\top x^{\mathrm{obs}}|) . \quad (4)$ However , since the hypotheses in ( 3 ) are actually not fixed in advance , the naive p-value is not valid in the sense that , if we reject $H_0$ with a significance level $\alpha$ , the false detection rate ( type-I error ) cannot be controlled at level $\alpha$ , which indicates that $p_{\mathrm{naive}}$ is unreliable . This is due to the fact that the hypotheses ( the attention region ) in ( 3 ) are selected by looking at the data ( the input image ) , and thus selection bias exists . This selection bias is sometimes called data dredging , data snooping or p-hacking ( Ioannidis , 2005 ; Head et al. , 2015 ) . Selective inference ( SI ) for computing valid p-values . The basic idea of SI is to make inference conditional on the selection event , which allows us to derive the exact ( non-asymptotic ) sampling distribution of the test statistic $\eta^\top X$ in an attempt to avoid the selection bias . Thus , we employ the following conditional p-value $p_{\mathrm{selective}} = \mathbb{P}_{H_0}\big(|\eta^\top X| \geq |\eta^\top x^{\mathrm{obs}}| \,\big|\, \mathcal{A}(X) = \mathcal{A}(x^{\mathrm{obs}}) , \, q(X) = q(x^{\mathrm{obs}})\big) , \quad (5)$ where $q(X) = (I_n - c\eta^\top)X$ with $c = \Sigma\eta(\eta^\top\Sigma\eta)^{-1}$ .
The first condition $\mathcal{A}(X) = \mathcal{A}(x^{\mathrm{obs}})$ indicates the event that the result of dividing pixels into an attention region and non-attention region for a random image $X$ is the same as that of the observed image $x^{\mathrm{obs}}$ , i.e. , $\mathcal{C}^+_X = \mathcal{C}^+_{x^{\mathrm{obs}}}$ and $\mathcal{C}^-_X = \mathcal{C}^-_{x^{\mathrm{obs}}}$ . The second condition $q(X) = q(x^{\mathrm{obs}})$ indicates that the component independent of the test statistic for $X$ is the same as that for $x^{\mathrm{obs}}$ . The $q(X)$ corresponds to the component $z$ in the seminal SI paper of Lee et al . ( 2016 ) ( Sec 5 , Eq 5.2 and Theorem 5.2 ) . The p-value in ( 5 ) , which is called the selective type I error or selective p-value in the SI literature ( Fithian et al. , 2014 ) , is valid in the sense that $\mathbb{P}_{H_0}(p_{\mathrm{selective}} < \alpha) = \alpha , \ \forall \alpha \in [0, 1]$ , i.e. , the false detection rate is theoretically controlled at level $\alpha$ , indicating the selective p-value is reliable . To calculate the selective p-value in ( 5 ) , we need to identify the conditional data space . Let us define the set of $x \in \mathbb{R}^n$ that satisfies the conditions in ( 5 ) as $\mathcal{X} = \{x \in \mathbb{R}^n \mid \mathcal{A}(x) = \mathcal{A}(x^{\mathrm{obs}}) , q(x) = q(x^{\mathrm{obs}})\} . \quad (6)$ According to the second condition , the data in $\mathcal{X}$ are restricted to a line ( Sec 6 in Liu et al . ( 2018 ) , and Fithian et al . ( 2014 ) ) . Therefore , the set $\mathcal{X}$ can be re-written , using a scalar parameter $z \in \mathbb{R}$ , as $\mathcal{X} = \{x(z) = a + bz \mid z \in \mathcal{Z}\} , \quad (7)$ where $a = q(x^{\mathrm{obs}})$ , $b = \Sigma\eta(\eta^\top\Sigma\eta)^{-1}$ , and $\mathcal{Z} = \{z \in \mathbb{R} \mid \mathcal{A}(x(z)) = \mathcal{A}(x^{\mathrm{obs}})\} . \quad (8)$ Now , let us consider a random variable $Z \in \mathbb{R}$ and its observation $z^{\mathrm{obs}} \in \mathbb{R}$ that satisfy $X = a + bZ$ and $x^{\mathrm{obs}} = a + bz^{\mathrm{obs}}$ . Then , the selective p-value in ( 5 ) is re-written as $p_{\mathrm{selective}} = \mathbb{P}_{H_0}(|\eta^\top X| \geq |\eta^\top x^{\mathrm{obs}}| \mid X \in \mathcal{X}) = \mathbb{P}_{H_0}(|Z| \geq |z^{\mathrm{obs}}| \mid Z \in \mathcal{Z}) . \quad (9)$ Since the variable $Z \sim N(0, \eta^\top\Sigma\eta)$ under the null hypothesis , the law of $Z \mid Z \in \mathcal{Z}$ follows a truncated Normal distribution . Once the truncation region $\mathcal{Z}$ is identified , the selective p-value ( 9 ) can be computed as $p_{\mathrm{selective}} = F^{\mathcal{Z}}_{0, \eta^\top\Sigma\eta}(-|z^{\mathrm{obs}}|) + 1 - F^{\mathcal{Z}}_{0, \eta^\top\Sigma\eta}(|z^{\mathrm{obs}}|) , \quad (10)$ where $F^{\mathcal{E}}_{m, s^2}$ is the c.d.f . of the truncated normal distribution with mean $m$ , variance $s^2$ and truncation region $\mathcal{E}$ . Therefore , the most important task is to identify $\mathcal{Z}$ . Extension of the problem setup to hypotheses driven from DNN-based image segmentation . We interpret the hypothesis driven from an image segmentation result as the one obtained from the representation at the output layer instead of an internal representation . Our problem setup is general and can be directly applied to this case . For example , we can consider the attention region as the object region and the non-attention region as the background region . Then , we can conduct SI to quantify the significance of the difference between object and background regions . We note that we consider the case where the image is segmented into two regions—object and background—to simplify the problem and notations . The extension to more than two regions is straightforward .
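The computation in equation ( 10 ) is easy to sketch once the truncation region $\mathcal{Z}$ is available . The toy code below is our own illustration : the representation of $\mathcal{Z}$ as a union of intervals and all names are assumptions , and identifying $\mathcal{Z}$ itself is exactly what the paper's homotopy method contributes . The hand-picked intervals here exist purely to exercise the formula .

```python
import numpy as np
from scipy.stats import norm

def selective_p_value(z_obs, eta, Sigma, Z_intervals):
    """Two-sided selective p-value of Eq. (10): the CDF of N(0, eta^T Sigma eta)
    truncated to Z_intervals, evaluated at -|z_obs| and +|z_obs|."""
    s = np.sqrt(eta @ Sigma @ eta)  # std of Z under H0

    def trunc_cdf(t):
        num, den = 0.0, 0.0
        for lo, hi in Z_intervals:
            den += norm.cdf(hi / s) - norm.cdf(lo / s)
            # mass of this interval that lies below t
            num += (norm.cdf(min(hi, t) / s) - norm.cdf(lo / s)) if t > lo else 0.0
        return num / den

    a = abs(z_obs)
    return trunc_cdf(-a) + 1.0 - trunc_cdf(a)

# Toy usage: a 2-pixel "image" with attention region {0}, non-attention {1},
# so eta computes the difference of the two pixel means.
eta = np.array([1.0, -1.0])
Sigma = np.eye(2)
print(selective_p_value(z_obs=2.3, eta=eta, Sigma=Sigma,
                        Z_intervals=[(-5.0, -1.0), (1.0, 5.0)]))
```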
This paper proposes a novel method to quantify the reliability of DNN-driven hypotheses in a statistical hypothesis testing framework. Naive statistical tests are not appropriate for DNN-driven hypotheses, where the hypotheses are selected by looking at the data (i.e., selection bias exists). To address this problem, the authors developed a novel homotopy method under the Selective Inference (SI) framework, which can derive the exact sampling distribution of the DNN-driven hypotheses. In this paper, the authors mainly focus on DNNs which consist of affine operations, max-operations, and piecewise-linear activations. As described by Lee et al. (2016), the main idea of SI is to make the inference conditional on the selection event. Specifically, for DNN-driven hypotheses, the authors proposed a novel method that consists of two steps: 1) adding extra conditioning to make the problem tractable, and 2) combining multiple over-conditioned cases by a homotopy method to solve the over-conditioning problem. The experimental results on both synthetic and real-world datasets illustrate that the proposed method can successfully control the FP error rate.
SP:4e77d43eb99688600f6c2115e1882e0b1e11a751
Gradient descent temporal difference-difference learning
1 INTRODUCTION . Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning . However , because off-policy methods learn a value function for a target policy given data generated by a different behavior policy , they often exhibit greater variance in parameter updates . When applied to problems involving function approximation , off-policy methods are slower to converge than on-policy methods and may even diverge ( Baird , 1995 ; Sutton & Barto , 2018 ) . Two general approaches have been investigated to address the challenge of developing stable and effective off-policy temporal-difference algorithms . One approach is to use importance sampling methods to warp the update distribution back to the on-policy distribution ( Precup et al. , 2000 ; Mahmood et al. , 2014 ) . This approach is useful for decreasing the variance of parameter updates , but it does not address stability issues . The second main approach to addressing the challenge of off-policy learning is to develop true gradient descent-based methods that are guaranteed to be stable regardless of the update distribution . Sutton et al . ( 2009a ; b ) proposed the first off-policy gradient-descent-based temporal difference ( GTD and GTD2 , respectively ) algorithms . These algorithms are guaranteed to be stable , with computational complexity scaling linearly with the size of the function approximator . Empirically , however , their convergence is much slower than conventional temporal difference ( TD ) learning , limiting their practical utility ( Ghiassian et al. , 2020 ; White & White , 2016 ) . Building on this work , extensions to the GTD family of algorithms ( see Ghiassian et al . ( 2018 ) for a review ) have allowed for incorporating eligibility traces ( Maei & Sutton , 2010 ; Geist & Scherrer , 2014 ) , non-linear function approximation such as with a neural network ( Maei , 2011 ) , and reformulation of the optimization as a saddle point problem ( Liu et al. , 2015 ; Du et al. , 2017 ) . However , due to their slow convergence , none of these stable off-policy methods are commonly used in practice . In this work , we introduce a new gradient descent algorithm for temporal difference learning with linear value function approximation . This algorithm , which we call gradient descent temporal difference-difference ( Gradient-DD ) learning , is an acceleration technique that employs second-order differences in successive parameter updates . The basic idea of Gradient-DD is to modify the error objective function by additionally considering the prediction error obtained in the last time step , and then to derive a gradient-descent algorithm based on this modified objective function . In addition to exploiting the Bellman equation to get the solution , this modified error objective function avoids drastic changes in the value function estimate by encouraging local search around the current estimate . Algorithmically , the Gradient-DD approach only adds an additional term to the update rule of the GTD2 method , and the extra computational cost is negligible . We show mathematically that applying this method significantly improves the convergence rate relative to the GTD2 method for linear function approximation . This result is supported by numerical experiments , which also show that Gradient-DD obtains better convergence in many cases than conventional TD learning . 1.1 RELATED WORK .
In approaches related to ours , some previous studies have attempted to improve Gradient-TD algorithms by adding regularization terms to the objective function . Liu et al . ( 2012 ) have used $\ell_1$ regularization on weights to learn sparse representations of value functions , and Ghiassian et al . ( 2020 ) have used $\ell_2$ regularization on weights . Unlike these references , our approach modifies the error objective function by regularizing the evaluation error obtained in the most recent time step . With this modification , our method provides a learning rule that contains second-order differences in successive parameter updates . Our approach is similar to trust region policy optimization ( Peters & Schaal , 2008 ; Schulman et al. , 2015 ) or relative entropy policy search ( Peters et al. , 2010 ) , which penalize large changes to the policy during learning . In these methods , constrained optimization is used to update the policy by considering a constraint on some measure between the new policy and the old policy . Here , however , our aim is to find the optimal value function , and the regularization term uses the previous value function estimate to avoid drastic changes in the updating process . 2 GRADIENT DESCENT METHOD FOR OFF-POLICY TEMPORAL DIFFERENCE LEARNING . 2.1 PROBLEM DEFINITION AND BACKGROUND . In this section , we formalize the problem of learning the value function for a given policy under the Markov Decision Process ( MDP ) framework . In this framework , the agent interacts with the environment over a sequence of discrete time steps , $t = 1, 2, \ldots$ At each time step the agent observes a partial summary of the state $s_t \in \mathcal{S}$ and selects an action $a_t \in \mathcal{A}$ . In response , the environment emits a reward $r_t \in \mathbb{R}$ and transitions the agent to its next state $s_{t+1} \in \mathcal{S}$ . The state and action sets are finite . State transitions are stochastic and dependent on the immediately preceding state and action . Rewards are stochastic and dependent on the preceding state and action , as well as on the next state . The process generating the agent 's actions is termed the behavior policy . In off-policy learning , this behavior policy is in general different from the target policy $\pi : \mathcal{S} \to \mathcal{A}$ . The objective is to learn an approximation to the state-value function under the target policy in a particular environment : $V(s) = \mathbb{E}_\pi\big[\sum_{t=1}^{\infty} \gamma^{t-1} r_t \mid s_1 = s\big] , \quad (1)$ where $\gamma \in [0, 1)$ is the discount rate . In problems for which the state space is large , it is practical to approximate the value function . In this paper we consider linear function approximation , where states are mapped to feature vectors with fewer components than the number of states . Specifically , for each state $s \in \mathcal{S}$ there is a corresponding feature vector $x(s) \in \mathbb{R}^p$ , with $p \leq |\mathcal{S}|$ , such that the approximate value function is given by $V_w(s) := w^\top x(s) . \quad (2)$ The goal is then to learn the parameters $w$ such that $V_w(s) \approx V(s)$ . 2.2 GRADIENT TEMPORAL DIFFERENCE LEARNING . A major breakthrough for the study of the convergence properties of MDP systems came with the introduction of the GTD and GTD2 learning algorithms ( Sutton et al. , 2009a ; b ) . We begin by briefly recapitulating the GTD algorithms , which we will then extend in the following sections .
To begin , we introduce the Bellman operator $B$ such that the true value function $V \in \mathbb{R}^{|\mathcal{S}|}$ satisfies the Bellman equation : $V = R + \gamma P V =: BV$ , where $R$ is the reward vector with components $\mathbb{E}(r_{n+1} \mid s_n = s)$ , and $P$ is a matrix of state transition probabilities . In temporal difference methods , an appropriate objective function should minimize the difference between the approximate value function and the solution to the Bellman equation . Having defined the Bellman operator , we next introduce the projection operator $\Pi$ , which takes any value function $V$ and projects it to the nearest value function within the space of approximate value functions of the form ( 2 ) . Letting $X$ be the matrix whose rows are $x(s)$ , the approximate value function can be expressed as $V_w = Xw$ . We will also assume that there exists a limiting probability distribution such that $d_s = \lim_{n \to \infty} p(s_n = s)$ ( or , in the episodic case , $d_s$ is the proportion of time steps spent in state $s$ ) . The projection operator is then given by $\Pi = X(X^\top D X)^{-1} X^\top D$ , where the matrix $D$ is diagonal , with diagonal elements $d_s$ . The natural measure of how closely the approximation $V_w$ satisfies the Bellman equation is the mean-squared Bellman error : $\mathrm{MSBE}(w) = \|V_w - BV_w\|_D^2 , \quad (3)$ where the norm is weighted by $D$ , such that $\|V\|_D^2 = V^\top D V$ . However , because the Bellman operator follows the underlying state dynamics of the Markov chain , irrespective of the structure of the linear function approximator , $BV_w$ will typically not be representable as $V_w$ for any $w$ . An alternative objective function , therefore , is the mean squared projected Bellman error ( MSPBE ) , which we define as $J(w) = \|V_w - \Pi B V_w\|_D^2 . \quad (4)$ Following ( Sutton et al. , 2009b ) , our objective is to minimize this error measure . As usual in stochastic gradient descent , the weights at each time step are then updated by $\Delta w = -\alpha \nabla_w J(w)$ , where $\alpha > 0$ , and $-\tfrac{1}{2}\nabla_w J(w) = -\mathbb{E}[(\gamma x_{n+1} - x_n)x_n^\top]\,[\mathbb{E}(x_n x_n^\top)]^{-1}\,\mathbb{E}(\delta_n x_n) \approx -\mathbb{E}[(\gamma x_{n+1} - x_n)x_n^\top]\,\eta . \quad (5)$ For notational simplicity , we have denoted the feature vector associated with $s_n$ as $x_n = x(s_n)$ . We have also introduced the temporal difference error $\delta_n = r_n + (\gamma x_{n+1} - x_n)^\top w_n$ , as well as $\eta$ , a linear predictor to approximate $[\mathbb{E}(x_n x_n^\top)]^{-1}\mathbb{E}(\delta_n x_n)$ . Because the factors in Eqn . ( 5 ) can be directly sampled , the resulting updates in each step are $\delta_n = r_n + (\gamma x_{n+1} - x_n)^\top w_n$ , $\eta_{n+1} = \eta_n + \beta_n(\delta_n - x_n^\top \eta_n)x_n$ , $w_{n+1} = w_n - \alpha_n(\gamma x_{n+1} - x_n)(x_n^\top \eta_n) . \quad (6)$ These updates define the GTD2 learning algorithm , which we will build upon in the following section . 3 GRADIENT DESCENT TEMPORAL DIFFERENCE-DIFFERENCE LEARNING . In order to improve the GTD2 algorithm described above , in this section we modify the objective function by additionally considering the approximation error $V_w - V_{w_{n-1}}$ given the previous time step $n-1$ . Specifically , we modify Eqn . ( 4 ) as follows : $J_{\mathrm{GDD}}(w \mid w_{n-1}) = J(w) + \kappa \|V_w - V_{w_{n-1}}\|_D^2 , \quad (7)$ where $\kappa \geq 0$ is a parameter of the regularization . Figure 1 : Schematic diagram of Gradient-DD learning with $w \in \mathbb{R}^2$ . Rather than updating $w$ directly along the gradient of the MSPBE ( arrow ) , the update rule selects $w_n$ that minimizes the MSPBE while satisfying the constraint $\|V_w - V_{w_{n-1}}\|_D^2 \leq \mu$ ( shaded ellipse ) . Minimizing Eqn . ( 7 ) is equivalent to the following constrained optimization :
$\arg\min_w J(w) \quad \text{s.t.} \quad \|V_w - V_{w_{n-1}}\|_D^2 \leq \mu , \quad (8)$ where $\mu > 0$ is a parameter which becomes large when $\kappa$ is small , so that the MSPBE objective is recovered as $\mu \to \infty$ , equivalent to $\kappa \to 0$ in Eqn . ( 7 ) . We show in the Appendix that for any $\mu > 0$ , there exists $\kappa \geq 0$ such that the solution of Eqn . ( 7 ) and that of Eqn . ( 8 ) are the same . Eqns . ( 7 ) and ( 8 ) represent a tradeoff between minimizing the MSPBE error and preventing the estimated value function from changing too drastically . Rather than simply minimizing the optimal prediction from the projected Bellman equation , the agent makes use of the most recent update to look for the solution . Figure 1 gives a schematic view of the effect of the regularization . Rather than directly following the direction of the MSPBE gradient , the update chooses a $w$ that minimizes the MSPBE while following the constraint that the estimated value function should not change too greatly . In effect , the regularization term encourages searching around the estimate at the previous time step , especially when the state space is large . With these considerations in mind , the negative gradient of $J_{\mathrm{GDD}}(w \mid w_{n-1})$ is $-\tfrac{1}{2}\nabla_w J_{\mathrm{GDD}}(w \mid w_{n-1}) = -\mathbb{E}[(\gamma x_{n+1} - x_n)x_n^\top]\,[\mathbb{E}(x_n x_n^\top)]^{-1}\,\mathbb{E}(\delta_n x_n) - \kappa\,\mathbb{E}[(x_n^\top w_n - x_n^\top w_{n-1})x_n] \approx -\mathbb{E}[(\gamma x_{n+1} - x_n)x_n^\top]\,\eta_n - \kappa\,\mathbb{E}[(x_n^\top w_n - x_n^\top w_{n-1})x_n] . \quad (9)$ Because the terms in Eqn . ( 9 ) can be directly sampled , the stochastic gradient descent updates are given by $\delta_n = r_n + (\gamma x_{n+1} - x_n)^\top w_n$ , $\eta_{n+1} = \eta_n + \beta_n(\delta_n - x_n^\top \eta_n)x_n$ , $w_{n+1} = w_n - \kappa_n(x_n^\top w_n - x_n^\top w_{n-1})x_n - \alpha_n(\gamma x_{n+1} - x_n)(x_n^\top \eta_n) . \quad (10)$ These update equations define the Gradient-DD method , in which the GTD2 update equations ( 6 ) are generalized by including a second-order update term in the third update equation , where this term originates from the squared bias term in the objective ( 7 ) . In the following sections , we shall analytically and numerically investigate the convergence and performance of Gradient-DD learning .
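Because the terms in equation ( 10 ) are simple outer products of sampled features , a single Gradient-DD step is only a few lines of code . The sketch below is a minimal illustration of equations ( 10 ) with arbitrary constant step sizes ( our own choices , not the paper's schedule ) ; setting $\kappa = 0$ recovers the GTD2 updates ( 6 ) .

```python
import numpy as np

def gradient_dd_step(w, w_prev, eta, x, x_next, r,
                     alpha=0.01, beta=0.01, kappa=0.01, gamma=0.99):
    """One Gradient-DD update following Eq. (10); kappa=0 reduces to GTD2."""
    delta = r + (gamma * x_next - x) @ w            # TD error
    eta_new = eta + beta * (delta - x @ eta) * x    # auxiliary-weight update
    w_new = (w
             - kappa * (x @ w - x @ w_prev) * x     # second-order difference term
             - alpha * (gamma * x_next - x) * (x @ eta))
    return w_new, w, eta_new                        # (w_{n+1}, w_n, eta_{n+1})

# Toy usage with random features; a real agent would supply MDP transitions.
rng = np.random.default_rng(0)
p = 4
w, w_prev, eta = np.zeros(p), np.zeros(p), np.zeros(p)
for _ in range(100):
    x, x_next, r = rng.normal(size=p), rng.normal(size=p), rng.normal()
    w, w_prev, eta = gradient_dd_step(w, w_prev, eta, x, x_next, r)
print(w)
```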
This paper proposes a variant of the GTD2 algorithm by adding an additional regularization term to the objective function; the new algorithm is named Gradient-DD (GDD). The regularization ensures that the value function does not change drastically between consecutive iterations. The authors show that the update rule of GDD can be written as a difference equation and aim to further show convergence via a Lyapunov-based analysis. A simulation study is provided to compare the proposed GDD algorithm with TD, ETD, and GTD.
SP:8a32dfc80f31fd3da97e15ce98193144d03836b5
FactoredRL: Leveraging Factored Graphs for Deep Reinforcement Learning
We propose a simple class of deep reinforcement learning ( RL ) methods , called FactoredRL , that can leverage factored environment structures to improve the sample efficiency of existing model-based and model-free RL algorithms . In tabular and linear approximation settings , the factored Markov decision process literature has shown exponential improvements in sample efficiency by leveraging factored environment structures . We extend this to deep RL algorithms that use neural networks . For model-based algorithms , we use the factored structure to inform the state transition network architecture and for model-free algorithms we use the factored structure to inform the Q network or the policy network architecture . We demonstrate that doing this significantly improves sample efficiency in both discrete and continuous state-action space settings . 1 INTRODUCTION . In many domains , the structure of the Markov Decision Process ( MDP ) is known at the time of problem formulation . For example , in inventory management , we know the structure of the state transition : how inventory flows from a vendor , to a warehouse , to a customer ( Giannoccaro & Pontrandolfo , 2002 ; Oroojlooyjadid et al. , 2017 ) . In portfolio management , we know that a certain asset changes only when the agent buys or sells a corresponding item ( Jiang et al. , 2017 ) . Similar structural information is available in vehicle routing , robotics , computing , and many others . Our work stems from the observation that we can exploit the known structure of a given MDP to learn a good policy . We build on the Factored MDP literature ( Boutilier et al. , 1995 ; Osband & Van Roy , 2014 ; Kearns & Singh , 2002 ; Cui & Khardon , 2016 ) , and propose a factored graph to represent known relationships between states , actions and rewards in a given problem . We use the factored graphs to inform the structure of the neural networks used in deep reinforcement learning ( RL ) algorithms to improve their sample efficiency . We give literature references and example factor graphs for real world applications in Appendix A . Consider a motivational example , where the goal of the agent is to balance multiple independent cartpoles simultaneously , with each cartpole defined as per OpenAI gym ( G. Brockman & Zaremba , 2016 ) . The agent can take a ‘ left ’ or ‘ right ’ action on each cartpole , and the state includes the position and velocity of each cart and each pole . We refer to this as the Multi-CartPole problem . Both model-based and model-free algorithms treat the state-action space as a single entity , which makes exploration combinatorially complex . As a consequence , the sample efficiency of RL algorithms degrades exponentially with the number of cartpoles , despite the problem remaining conceptually simple for a human . By allowing the agent access to the problem ’ s factored structure ( i.e . each action affects only one cartpole ) , we bypass the need to learn about each action ’ s relationship with the entire state , and instead only need to learn about each action ’ s relationship with its single , related cartpole . We show how to integrate knowledge of the factored graph into both model-based and model-free deep RL algorithms , and thereby improve sample efficiency . In all cases , we first write down a factored graph as an adjacency matrix , representing the relationships between state , action , and reward . 
From this adjacency matrix , we then define a Factored Neural Network ( Factored NN ) , which uses input and output masking to reflect the structure of the factored graph . Finally , we show how to integrate this Factored NN into existing deep RL algorithms . For model-based methods , we use the Factored NN to learn decomposed state transitions , and then integrate this state transition model with Monte Carlo Tree Search ( MCTS ) ( Kocsis & Szepesvári , 2006 ) . For model-free methods , we use the Factored NN to learn a decomposed Q-function , and then integrate with DQN ( Mnih et al. , 2015 ) . Also for model-free methods , we use the Factored NN to learn a decomposed policy function , and then integrate with PPO ( Schulman et al. , 2017 ) . In all three cases , we demonstrate empirically that these Factored RL methods ( Factored MCTS , DQN , and PPO ) are able to achieve better sample efficiency than their vanilla implementations , on a range of environments . 2 RELATED WORK . Several methods have been proposed that exploit the structural information of a problem in the Factored MDP literature . Kearns & Koller ( 1999 ) propose a method to conduct model-based RL with a Dynamic Bayesian Network ( DBN ) ( Dean & Kanazawa , 1989 ) and learn its parameters based on an extension of the Explicit Explore or Exploit ( E3 ) algorithm ( Kearns & Singh , 2002 ) . Guestrin et al . ( 2003 ) propose linear-programming and dynamic-programming based algorithms to learn linear value functions in Factored MDPs , and extend them to multi-agent settings ( Guestrin et al. , 2002 ) . They exploit the context-specific and additive structures in Factored MDPs that capture the locality of influence of specific states and actions . We use the same structures in our proposed algorithms . Cui & Khardon ( 2016 ) propose a symbolic representation of Factored MDPs . Osband & Van Roy ( 2014 ) propose posterior sampling and upper-confidence-bound based algorithms and prove that they are near-optimal . They show that the sample efficiency of the algorithm scales polynomially with the number of parameters that encode the factored MDP , which may be exponentially smaller than the full state-action space . Xu & Tewari ( 2020 ) extend the results to non-episodic settings and Lattimore et al . ( 2016 ) show similar results for contextual bandits . The algorithms proposed in these prior works assume a tabular ( Cui et al. , 2015 ; Geißer et al . ) or linear setting ( Guestrin et al. , 2003 ) , or require symbolic expressions ( Cui & Khardon , 2016 ) . We extend these ideas to deep RL algorithms by incorporating the structural information in the neural network . Li & Czarnecki ( 2019 ) propose a factored DQN algorithm for urban driving applications . Our proposed algorithms are similar , but we extend the ideas to model-based algorithms like MCTS ( Kocsis & Szepesvári , 2006 ) , and model-free on-policy algorithms like PPO ( Schulman et al. , 2017 ) . We also evaluate our algorithms on a variety of environments which encompass discrete and continuous state-action spaces . The Factored NN we propose is closely related to Graph Neural Networks ( Scarselli et al. , 2008 ; Zhou et al. , 2018 ) , which are deep learning based methods that operate on the graph domain and have been applied to domains such as network analysis ( Kipf & Welling , 2016 ) , molecule design ( Liu et al. , 2018 ) and computer vision ( Xu et al. , 2018 ) . Instead of explicitly embedding the neighbors of all the nodes with neural networks , we use a single neural network with masking .
NerveNet ( Wang et al. , 2018 ) addresses the expressiveness of structure in an MDP , similar to our work . They focus on robotics applications and demonstrate state-action factorization with PPO . In our work , we additionally demonstrate state transition and state-reward factorization in MCTS and DQN respectively . In addition , they propose imposing a structure with Graph Neural Networks . In contrast , we propose using input and output masking without modifying the neural architecture . Working Memory Graphs ( Loynd et al. , 2020 ) use Transformer networks for modeling both factored observations and dependencies across time steps . However , they only evaluate their method in a grid world with a single discrete action . In contrast , we demonstrate our methods on multiple environments and algorithms with factorization in state transition , state-action and state-reward relationships . In addition , our factored network is a simple extension to the existing network used to solve a problem , whereas they impose a complex network architecture . Action masking has been used effectively to improve RL performance in multiple works ( Williams & Zweig , 2016 ; Williams et al. , 2017 ; Vinyals et al. , 2017 ) . We use a similar trick when applying our Factored NN to policy networks in model-free RL . However , we use both an action mask as well as a state mask to incorporate factored structure in policy networks . Our state transition networks for model-based RL also impose masks on both input and output , corresponding to the current state-action and next state respectively . Wu et al . ( 2018 ) introduce an action-dependent baseline in actor-critic algorithms , where a separate advantage function is learned for each action . Their method also exploits structure available in the action space . Our method to incorporate structure is orthogonal , as we modify the policy network in actor-critic methods . There is also a relationship between our work and the emerging intersection of reinforcement learning and causal inference , as factored graphs are a super-set of causal graphs in the MDP setting . Lu et al . ( 2018 ) use the backdoor criterion in causal inference and variational autoencoders . Zhang & Bareinboim ( 2019 ) propose a near-optimal algorithm by taking advantage of causal inference in non-Markovian dynamic treatment regimes . Both works assume there exist unobserved confounders in the environment . We instead tackle a different problem where there are no unobserved confounders , and show that there are still benefits to leveraging structural information . 3 TERMINOLOGY . We briefly describe the terminology used in this paper . We use Directed Acyclic Graphs ( DAGs ) to represent relationships between the variables . DAGs consist of nodes and edges , where the nodes correspond to random variables $X = (X_1, \ldots, X_d)$ , and a directed edge from variable $X_i$ to $X_j$ represents that $X_i$ has an effect on $X_j$ ( $X_i$ is also called the parent of $X_j$ ) . Under Markov conditions , the joint distribution of the variables can be factored as $p(X_{1:d}) = \prod_{i=1}^{d} p(X_i \mid \mathrm{PA}(X_i))$ . Consider a general Markov Decision Process ( MDP ) defined by $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \rho_0, \gamma)$ , where $\mathcal{S}, \mathcal{A}$ denote the state and action space respectively , $\mathcal{P}$ denotes the transition probability , $\mathcal{R}$ represents the reward function , and $\rho_0$ and $\gamma$ represent the initial distribution of the state and the discount factor respectively .
In the classic RL setting , one typically assumes each state $S^k_{t+1}$ depends on the entire previous state and action , i.e. , $\mathrm{PA}(S^k_{t+1}) = \{\{S^k_t\}_{k=1}^{|\mathcal{S}|}, \{A^k_t\}_{k=1}^{|\mathcal{A}|}\}$ , where $|\cdot|$ denotes the cardinality of the space , and $\mathrm{PA}$ denotes the parents of a node in a Bayesian network . However , in many scenarios , one component of the action $A^k_t$ may only cause part of the state space $\{S^k_t\}_{k \in C_k}$ to change , where $C_k$ is the index set of the related states of the $k$th component of the action . In other words , the parents of each state may only be a subset of the actions and previous states , i.e. , $\mathrm{PA}(S^k_{t+1}) \subsetneq \{\{S^k_t\}_{k=1}^{|\mathcal{S}|}, \{A^k_t\}_{k=1}^{|\mathcal{A}|}\}$ . Simplifying the conditional dependencies helps to construct a more accurate model , enabling us to better decompose the dynamics and reduce the complexity of the learning tasks . We assume the factored structure of the environment does not change over time .
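As a concrete illustration of input masking , the sketch below builds a factored transition model in which the head for each next-state factor sees only that factor's parents . This is our own minimal reading of the idea , not the paper's implementation : the paper describes a single masked network , which is emulated here with one small head per factor , and the example parent masks mimic two independent cartpole-like subsystems .

```python
import torch
import torch.nn as nn

class FactoredTransition(nn.Module):
    """Per-state-factor transition heads: the k-th head sees only the parents
    PA(S^k), encoded as a boolean mask over the concatenated [state; action]."""
    def __init__(self, in_dim, parent_masks, hidden=64):
        super().__init__()
        self.register_buffer("masks", torch.as_tensor(parent_masks).float())
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(len(parent_masks)))

    def forward(self, state_action):
        # Input masking: zero out non-parent inputs before each factor's head.
        outs = [head(state_action * m) for head, m in zip(self.heads, self.masks)]
        return torch.cat(outs, dim=-1)  # predicted next-state factors

# Hypothetical factored graph: 2 subsystems, each with 2 state dims + 1 action;
# each next-state factor depends only on its own subsystem's dims and action.
parent_masks = [
    [1, 1, 0, 0, 1, 0],  # next S^1 <- S1, S2, A1
    [1, 1, 0, 0, 1, 0],  # next S^2 <- S1, S2, A1
    [0, 0, 1, 1, 0, 1],  # next S^3 <- S3, S4, A2
    [0, 0, 1, 1, 0, 1],  # next S^4 <- S3, S4, A2
]
net = FactoredTransition(in_dim=6, parent_masks=parent_masks)
print(net(torch.randn(8, 6)).shape)  # -> torch.Size([8, 4])
```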
This paper presents a methodology for incorporating factor-graphs into model-based and model-free RL methods. The work starts by assuming access to a correct factor graph showing the relationship between individual state factors, actions, and rewards. The authors propose to make use of this factor graph by using a Factored Neural Network - which is similar to the standard feed-forward MLP networks that would typically be used to parameterize a policy or Q-function - except that it masks out connections between input and output nodes that are not connected in the factor graph. Presumably this results in a sparser neural network which can lead to faster learning and better sample complexity. The authors demonstrate how these factored NNs can be incorporated with model-based MCTS as well as model-free DQN and PPO. In short, the algorithm remains unchanged and the only substitution seems to be the Factored NN rather than a fully-connected NN. Experiments are performed on Multi-Cartpole (simultaneous control over several cartpoles), Taxi, BitFlip, and PyBullet's Ant, Half-Cheetah, and Humanoid. Each of the factored algorithms is compared with the un-factored equivalent and increased sample efficiency of learning is noted for the factored variants. The authors provide the manually-defined factor-graphs used for each of these environments in the Appendix.
SP:dcb62a0cc1b03e9ea24b2ed167f14255d9386f95
Parallel Training of Deep Networks with Local Updates
1 INTRODUCTION . Backpropagation ( Rumelhart et al. , 1985 ) is by far the most common method used to train neural networks . Alternatives to backpropagation are typically used only when backpropagation is impractical due to a non-differentiable loss ( Schulman et al. , 2015 ) , non-smooth loss landscape ( Metz et al. , 2019 ) , or due to memory and/or compute requirements ( Ororbia et al. , 2020 ) . However , progress in deep learning is producing ever larger models in terms of parameter count and depth , in vision ( Hénaff et al. , 2019 ; Chen et al. , 2020 ) , language ( Radford et al. , 2019 ; Brown et al. , 2020 ) , and many other domains ( Silver et al. , 2017 ; Vinyals et al. , 2019 ; Berner et al. , 2019 ) . As model size increases , backpropagation incurs growing computational , memory , and synchronization overhead ( Ben-Nun & Hoefler , 2018 ) . This raises the question of whether there are more efficient training strategies , even for models and losses that are considered well matched to training by backpropagation . Much of the work on training large scale models focuses on designing compute infrastructure which makes backpropagation more efficient , despite growing model size ( Dean et al. , 2012b ; Chen et al. , 2015 ; Sergeev & Balso , 2018 ) . One of the most common ways to achieve efficient training of deep neural networks with backpropagation is to scale utilizing data parallelism ( Zhang et al. , 1989 ; Chen et al. , 2016 ) , training on bigger batch sizes spread across multiple devices . However , diminishing returns have been reported with this method for larger batch sizes , effectively wasting compute ( Goyal et al. , 2017 ; Masters & Luschi , 2018 ; Shallue et al. , 2018 ; McCandlish et al. , 2018 ) . Training based on pipeline parallelism has also been introduced , but still requires large batches for efficient training ( Petrowski et al. , 1993 ; Ben-Nun & Hoefler , 2018 ; Huang et al. , 2019 ) . Moreover , in addition to the limitation that in the forward pass each layer can only process the input data in sequence ( forward locking ) , the use of backpropagation implies that the network parameters of each layer can only be updated in turn after completing the full forward pass ( backward locking ) . This backward locking results in increased memory overhead , and precludes efficient parallel processing across layers ( Jaderberg et al. , 2017 ) . The challenges of scaling compute infrastructure to support deep networks trained with backpropagation motivate the need for alternative approaches to training deep neural networks . In this work , we explore how layer-wise local updates ( Belilovsky et al. , 2019a ; Löwe et al. , 2019 ; Xiong et al. , 2020 ) can help overcome these challenges and scale more efficiently with compute than backpropagation . With local updates , each layer is updated before even completing a full forward pass through the network . This remedies the forward and backward locking problems which harm memory efficiency and update latency in standard backprop . Layer-wise local updates are not proportional to gradients of the original loss , and are not even guaranteed to descend a loss function . Nevertheless , in practice they are effective at training neural networks . We refer to this approach of parallelizing compute , which is alternative and complementary to data and model parallelism , as local parallelism . Our investigation focuses on the trade-offs of using local update methods as opposed to global backpropagation . 
To summarize our contributions : ( i ) We provide the first large scale investigation into local update methods in both vision and language domains . We find training speedups ( as measured by the reduction in required sequential compute steps ) of up to 10× on simple MLPs , and 2× on Transformer architectures . These training speedups are the result of local training methods being able to leverage more parallel compute than backprop . ( ii ) We provide insight into how local parallelism methods work , and experimentally compare the similarity of their gradient and features to those from backprop . ( iii ) We demonstrate a prototype implementation of local parallelism for ResNets , and show up to a 40 % increase in sample throughput ( number of training points per second ) relative to backprop , due to higher hardware utilization . We believe that local parallelism will provide benefits whenever there are diminishing returns from data parallelism , and avoid stale weights from pipelined model parallelism . Additionally , we have released code showing an example of local parallelism , available at hiddenurl . 2 RELATED WORK . 2.1 PARALLELIZATION IN DEEP LEARNING . Scaling large models has led to the development of a number of techniques to train deep models in a parallel fashion ( Ben-Nun & Hoefler , 2018 ) , summarized in Figure 1 . Data Parallelism : Data Parallelism ( Zhang et al. , 1989 ) is an attempt to speed up training of a model by splitting the data among multiple identical models and training each model on a shard of the data independently . Data parallelism is effectively training with larger minibatches ( Kaplan et al. , 2020 ) . This creates issues around the consistency of a model which then needs to be synchronized ( Deng et al. , 2012 ; Dean et al. , 2012a ) . There are two main ways to synchronize weights across model copies : ( i ) Synchronous optimization , where data parallel training synchronizes at the end of every minibatch ( Das et al. , 2016 ; Chen et al. , 2016 ) , with a communication overhead that increases with the number of devices ; ( ii ) Asynchronous optimization that implements data parallel training with independent updates of local model parameters without global synchronization ( Niu et al. , 2011 ; Dean et al. , 2012a ) – this increases device utilization , but empirically gradients are computed on stale weights , which results in a poor sample efficiency and thus slower overall training time compared to synchronous optimization . Model Parallelism : Model Parallelism is used when a model is too large to fit in the memory of a single device and is instead spread over multiple processors ( Krizhevsky et al. , 2012 ; Shazeer et al. , 2018 ; Harlap et al. , 2018 ; Lepikhin et al. , 2020 ) . This is increasingly common as state of the art performance continues to improve with increasing model size ( Brown et al. , 2020 ) . Model parallelism unfortunately has a few downsides : ( i ) High communication costs – the total training time for larger networks can become dominated by communication costs ( Simonyan & Zisserman , 2015 ) , which in the worst case can grow quadratically with the number of devices , and can reach up to 85 % of the total training time of a large model such as VGG-16 ( Harlap et al. , 2018 ; Simonyan & Zisserman , 2015 ) ; ( ii ) Device under-utilization – forward propagation and backward propagation are both synchronous operations , which can result in processor under-utilization in model-parallel systems . 
This problem becomes worse as we increase the number of layers ( Ben-Nun & Hoefler , 2018 ; Jia et al. , 2014 ; Collobert et al. , 2011 ; Abadi et al. , 2016 ; Huang et al. , 2018 ) . Pipeline Parallelism : Due to forward and backward locking , using multiple devices to process consecutive blocks of the deep model would make inefficient use of the hardware resources . Pipelining ( Harlap et al. , 2018 ) concurrently passes multiple mini-batches to multiple layers on multiple devices . This increases device utilization but can introduce staleness and consistency issues which lead to unstable training . Harlap et al . ( 2018 ) alleviates the consistency issue by storing past versions of each layer . Huang et al . ( 2019 ) addresses the staleness issue by pipelining microbatches and synchronously updating at the end of each minibatch . Guan et al . ( 2019 ) builds on this work by introducing a weight prediction strategy , and Yang et al . ( 2020 ) investigates to what extent the tradeoff between staleness/consistency and device utilization is necessary . Local updates , on the other hand , can keep device utilization high with both small and large batches and avoid the weight staleness problem . Local Learning Rules : Local learning describes a family of methods that perform parameter updates based only on local information , where locality is defined as dependence on neighboring neurons , layers , or groups of layers . The earliest local method we are aware of is Hebbian learning ( Hebb , 1949 ) , which has been further explored in BCM theory ( Izhikevich & Desai , 2003 ; Coesmans et al. , 2004 ) , Oja ’ s rule ( Oja , 1982 ) , Generalized Hebbian Learning ( Sanger , 1989 ) , and meta-learned local learning rules ( Bengio et al. , 1990 ; 1992 ; Metz et al. , 2018 ; Gu et al. , 2019 ) . Architectures like Hopfield Networks ( Hopfield , 1982 ) and Boltzmann Machines ( Ackley et al. , 1985 ) also employ a local update , and predate backpropagation in deep learning . Modern variants of local training methods have attempted to bridge the performance gap with backpropagation . These include projection methods such as Hebbian learning rules for deep networks ( Krotov & Hopfield , 2019 ; Grinberg et al. , 2019 ; Ryali et al. , 2020 ) , and local layer-wise learning with auxiliary losses ( Belilovsky et al. , 2019a ; b ) . Most similar to our work are decoupled greedy layer-wise learning ( Belilovsky et al. , 2019b ; Löwe et al. , 2019 ) , which trained auxiliary image classifiers greedily , and local contrastive learning ( Xiong et al. , 2020 ) . These methods mainly focus on matching the performance of backpropagation with respect to training epochs , whereas our work focuses on tradeoffs . Finally , while not local in the sense that parallelized layers still optimize for the global objective , Huo et al . ( 2018b ) parallelize layers by caching gradients and using delayed gradient signals to overcome the backward locking problem and update decoupled layers in parallel . 3 LOCAL PARALLELISM . Given a deep neural network , we divide the layers into a sequence of J blocks , which may contain one or more layers . Each block is trained independently with an auxiliary objective , and receives as input the activations output by the previous block or , in the case of the first block , the data from the sampled minibatch .
We consider four variants to train this sequence of J blocks ( backpropagation , greedy local parallelism , overlapping local parallelism , and chunked local parallelism ) , as shown in Figure 2 . We also include a baseline method of just training the last , or last two , layers . In all of the local methods , training occurs by attaching objective functions to the end of each block and backpropagating the signal locally into the corresponding block or blocks . In this work , the auxiliary objective functions that we use take the same form as the global objective . For example , to train a classifier on CIFAR-10 , we attach auxiliary linear classifiers to each local block . See Belilovsky et al . ( 2019b ) for further discussion on the form of this objective . Backpropagation : In our notation , backpropagation groups all layers into one block , and thus J = 1 . The parameters are updated with one instance of global error correction . While backpropagation ensures that all weights are updated according to the final output loss , it also suffers from forward and backward locking ( Jaderberg et al. , 2017 ) , an issue that local parallelized methods aim to resolve . Greedy local parallelism : A straightforward approach to enable local training is to attach an auxiliary network to each local layer , which generates predictions from the activations of hidden layers . After generating predictions , each local gradient is backpropagated to its respective local block , as shown in Figure 2 ( b ) . The activations are then passed as input to the next layer . We refer to this approach , introduced in ( Belilovsky et al. , 2019b ) , as greedy . Greedy local parallelism is the most parallelizable of all the schemes we consider . However , a potential downside is that fully greedy updates force the layers to learn features that are only relevant to their local objective and preclude inter-layer communication , which may result in lower evaluation performance for the global objective , or worse generalization . Overlapping local parallelism : One issue with the purely greedy approach is that features learned for any individual block may not be useful for subsequent blocks , since there is no inter-block propagation of gradient . For this reason , we consider overlapping local architectures where the first layer of each block is also the last layer of the previous block , as shown in Figure 2 ( c ) , though overlapping of more layers is also possible . This redundancy enables inter-block propagation of gradient that is still local , since only neighboring blocks overlap . However , this comes at the cost of running additional backward passes . The overlapping architecture has appeared before in Xiong et al . ( 2020 ) , but was used only for contrastive losses . Ours is the first work to investigate overlapping local architectures for standard prediction objectives in computer vision and language . Overlapping updates are parallelizable , but come with the additional complexity of keeping duplicates of the overlapping components and averaging updates for these layers . Chunked local parallelism : The greedy architecture is maximally parallel in the sense that it distributes one layer per block . However , it is also possible to have fewer parallel blocks by combining multiple layers into one . We refer to this architecture , shown in Figure 2 ( d ) , as chunked local parallelism .
This method trades off parallelizability , and therefore throughput , for an error signal that propagates through more consecutive layers . It differs from overlapping local parallelism by not needing to duplicate any layer . While previous work has investigated the asymptotic performance of chunked parallelism ( Belilovsky et al. , 2019b ) , ours is the first to consider the compute efficiency and parallelizability of local parallelism . By stacking multiple layers per parallelized block , chunked parallelism sits between fully parallelized methods , such as greedy and overlapping updates , and fully sequential methods like backpropagation .
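To make the greedy scheme concrete, below is a minimal sketch of one greedy local training step for a small stack of blocks with auxiliary linear classifiers. The layer sizes, optimizer, and loss are illustrative assumptions rather than the paper's exact configuration, and the blocks are stepped sequentially here purely for readability; in local parallelism each block's update would run concurrently on its own device.

```python
import torch
import torch.nn as nn

# Toy data: a minibatch of flattened images with 10-class labels.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))

# Two local blocks, each paired with an auxiliary linear classifier.
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 256), nn.ReLU()),
])
aux_heads = nn.ModuleList([nn.Linear(256, 10), nn.Linear(256, 10)])
opts = [torch.optim.SGD(list(b.parameters()) + list(h.parameters()), lr=0.1)
        for b, h in zip(blocks, aux_heads)]
loss_fn = nn.CrossEntropyLoss()

h = x
for block, head, opt in zip(blocks, aux_heads, opts):
    h = block(h)                   # forward through this block only
    loss = loss_fn(head(h), y)     # local auxiliary objective
    opt.zero_grad()
    loss.backward()                # gradient confined to this block and its head
    opt.step()
    h = h.detach()                 # stop-gradient: no inter-block error signal
```

The `detach` call is what makes the scheme greedy: each block treats the previous block's activations as fixed inputs, so its update never waits on a global backward pass. Chunked parallelism corresponds to putting several layers inside each `nn.Sequential`, and the overlapping variant would additionally duplicate the boundary layer shared by neighboring blocks.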
It is a very poorly written paper. The basic idea of finding a way to avoid waiting for the full forward pass is not new. Multiple research papers have been published, ranging from the extreme of using stale weights to some form of sub-network backprop as a proxy for the full network. This paper proposes no new idea for local updates. Prior works have all suffered from one or both of two limitations: a) a poor experimental framework, or b) not being able to meet the accuracy bar set by backprop. This work suffers from both. The experimental basis is very poorly described, and the paper fails to come even close to the backprop accuracy target with any decent speedup claim. The former is my biggest concern. Section 6 starts with 'Here we show that performance gains of local parallelism can be realized on real hardware' with near-zero description of any 'real' hardware, except a footnote on '1000 IPUs on a chip'.
SP:ad7eb2bcb3a83153f140e5e8bfaa8b76110e62ab
Simple and Effective VAE Training with Calibrated Decoders
1 INTRODUCTION . Deep density models based on the variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) have found ubiquitous use in probabilistic modeling and representation learning as they are both conceptually simple and are able to scale to very complex distributions and large datasets . These VAE techniques are used for tasks such as future frame prediction ( Castrejon et al. , 2019 ) , image segmentation ( Kohl et al. , 2018 ) , generating speech ( Chung et al. , 2015 ) and music ( Dhariwal et al. , 2020 ) , as well as model-based reinforcement learning ( Hafner et al. , 2019a ) . However , in practice , many of these approaches require careful manual tuning of the balance between two terms that correspond to distortion and rate from information theory ( Alemi et al. , 2017 ) . This balance trades off fidelity of reconstruction and quality of samples from the model : a model with low rate would not contain enough information to reconstruct the data , while allowing the model to have high rate might lead to unrealistic samples from the prior as the KL-divergence constraint becomes weaker ( Alemi et al. , 2017 ; Higgins et al. , 2017 ) . While a proper variational lower bound does not expose any free parameters to control this tradeoff , many prior works heuristically introduce a weight on the prior KL-divergence term , often denoted β . Usually , β needs to be tuned for every dataset and model variant as a hyperparameter , which slows down development and can lead to poor performance as finding the optimal value is often prohibitively computationally expensive . Moreover , using β ≠ 1 precludes the appealing interpretation of the VAE objective as a bound on the data likelihood , and is undesirable for applications like density modeling . While many architectures for calibrating decoders have been proposed in the literature ( Kingma & Welling , 2014 ; Kingma et al. , 2016 ; Dai & Wipf , 2019 ) , more applied work typically employs VAEs with uncalibrated decoding distributions , such as Gaussian distributions without a learned variance , where the decoder only outputs the mean parameter ( Castrejon et al. , 2019 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ; Babaeizadeh et al. , 2018 ; Lee et al. , 2018 ; Hafner et al. , 2019b ; Pong et al. , 2019 ; Zhu et al. , 2017 ; Pavlakos et al. , 2019 ) , or uses other ad-hoc modifications to the objective ( Sohn et al. , 2015 ; Henaff et al. , 2019 ) . Indeed , it is well known that attempting to learn the variance in a Gaussian decoder may lead to numerical instability ( Rezende & Viola , 2018 ; Dai & Wipf , 2019 ) , and naïve approaches often lead to poor results . As a result , it remains unclear whether practical empirical performance of VAEs actually benefits from calibrated decoders or not . To rectify this , our first contribution is a comparative analysis of various calibrated decoder architectures and practical recommendations for simple and effective VAE training . We find that , while naïve calibrated decoders often lead to worse results , a careful choice of the decoder distribution can work very well , and removes the need to tune the additional parameter β . Indeed , we note that the entropy of the decoding distribution controls the mutual information I ( x ; z ) . Calibrated decoders allow the model to control I ( x ; z ) automatically , instead of relying on manual tuning .
Our second contribution is a simple but novel technique for optimizing the decoder variance analytically , without requiring the decoder network to produce it as an additional output . We call the resulting approach to learning the Gaussian variance the σ-VAE . In our experiments , the σ-VAE outperforms the alternative of learning the variance through gradient descent , while being simpler to implement and extend . We validate our results on several VAE and sequence VAE models and a range of image and video datasets . 2 RELATED WORK . Prior work on variational autoencoders has studied a number of different decoder parameterizations . Kingma & Welling ( 2014 ) ; Rezende et al . ( 2014 ) use the Bernoulli distribution for the binary MNIST data and Kingma & Welling ( 2014 ) use Gaussian distributions with a learned variance parameter for grayscale images . However , modeling images with continuous distributions is prone to instability as the variance can converge to zero ( Rezende & Viola , 2018 ; Mattei & Frellsen , 2018 ; Dai & Wipf , 2019 ) . Some work has attempted to rectify this problem by using dequantization ( Gregor et al. , 2016 ) , which is theoretically appealing as it is tightly related to the log-likelihood of the original discrete data ( Theis et al. , 2016 ) , optimizing the variance in a two-stage procedure ( Arvanitidis et al. , 2017 ) , or training a post-hoc prior ( Ghosh et al. , 2019 ) . Takahashi et al . ( 2018 ) ; Barron ( 2019 ) proposed more expressive distributions . Additionally , different choices for representing such variance exist , including a diagonal covariance ( Kingma & Welling , 2014 ; Sønderby et al. , 2016 ; Rolfe , 2016 ) , or a single shared parameter ( Kingma et al. , 2016 ; Dai & Wipf , 2019 ; Edwards & Storkey , 2016 ; Rezende & Viola , 2018 ) . We analyze these and notice that learning a single variance parameter shared across images leads to stable training and good performance , without the use of dequantization or even clipping the variance , although these techniques can be used with our decoders ; we further improve the estimation of this variance with an analytic solution . Early work on discrete VAE decoders for color images modeled them with the Bernoulli distribution , treating the color intensities as probabilities ( Gregor et al. , 2015 ) . Further work has explored various parameterizations based on discretized continuous distributions , such as the discretized logistic ( Kingma et al. , 2016 ) . More recent work has improved the expressivity of the decoder with a mixture of discretized logistics ( Chen et al. , 2016 ; Maaløe et al. , 2019 ) . However , these models also employ powerful autoregressive decoders ( Chen et al. , 2016 ; Gulrajani et al. , 2016 ; Maaløe et al. , 2019 ) , and the latent variables in these models may not represent all of the significant factors of variation in the data , as some factors can instead be modeled internally by the autoregressive decoder ( Alemi et al. , 2017 ) .1 While many calibrated decoders have been proposed , outside the core generative modeling community uncalibrated decoders are ubiquitous . They are used in work on video prediction ( Denton & Fergus , 2018 ; Castrejon et al. , 2019 ; Lee et al. , 2018 ; Babaeizadeh et al. , 2018 ) , image segmentation ( Kohl et al. , 2018 ) , image-to-image translation ( Zhu et al. , 2017 ) , 3D human pose ( Pavlakos et al. , 2019 ) , as well as model-based reinforcement learning ( Henaff et al. , 2019 ; Hafner et al.
, 2019b ; a ) , and representation learning ( Lee et al. , 2019 ; Watter et al. , 2015 ; Pong et al. , 2019 ) . Most of these works utilize the heuristic hyperparameter β instead , which is undesirable both because the resulting objective is no longer a bound on the likelihood , and because β usually requires extensive tuning . In this work , we analyze the common pitfalls of using calibrated decoders that may have prevented practitioners from using them , propose a simple and effective analytic way of learning such a calibrated distribution , and provide a comprehensive experimental evaluation of different decoding distributions . Alternative discussions of the hyperparameter β are presented by Zhao et al . ( 2017 ) ; Higgins et al . ( 2017 ) ; Alemi et al . ( 2017 ) ; Achille & Soatto ( 2018 ) , who show that it controls the amount of information in the latent variable , I ( x ; z ) . Peng et al . ( 2018 ) ; Rezende & Viola ( 2018 ) further discuss constrained optimization objectives for VAEs , which also yield a similar hyperparameter . Here , we focus on β-VAEs with Gaussian decoders with constant variance , as commonly used in recent work , and show that the hyperparameter β can be incorporated in the decoding likelihood for these models . 1BIVA ( Maaløe et al. , 2019 ) uses the Mixture of Logistics decoder proposed in ( Salimans et al. , 2017 ) that produces the channels for each pixel autoregressively , see also App D . 3 ANALYSING DECODING DISTRIBUTIONS . The generative model of a VAE ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) with parameters θ is specified with a prior distribution over the latent variable pθ ( z ) , commonly a unit Gaussian , and a decoding distribution pθ ( x|z ) , which for color images is commonly a conditional Gaussian parameterized with a neural network . We would like to fit this generative model to a given dataset by maximizing the evidence lower bound ( ELBO ( Neal & Hinton , 1998 ; Jordan et al. , 1999 ; Kingma & Welling , 2014 ; Rezende et al. , 2014 ) ) , which uses an approximate posterior distribution qφ ( z|x ) , also commonly a conditional Gaussian specified with a neural network . In this work , we focus on the form of the decoding distribution pθ ( x|z ) . To achieve the best results , we want a decoding distribution that represents the required probability p ( x|z ) accurately . In this section , we will review and analyze various choices of decoding distributions that enable better decoder calibration , including expressive decoding distributions that can represent both the prediction of the image and the uncertainty about that prediction , or even multimodal predictions . 3.1 GAUSSIAN DECODERS . We first analyse the commonly used Gaussian decoders . We note that the commonly used MSE reconstruction loss between the reconstruction x̂ and the ground truth data x is equivalent to the negative log-likelihood objective with a Gaussian decoding distribution with constant variance : $$-\ln p(x|z) = \frac{1}{2}\,\lVert \hat{x} - x \rVert^2 + D \ln \sqrt{2\pi} = \frac{1}{2}\,\lVert \hat{x} - x \rVert^2 + c = \frac{D}{2}\,\mathrm{MSE}(\hat{x}, x) + c ,$$ where $p(x|z) \sim \mathcal{N}(\hat{x}, I)$ , the prediction $\hat{x} = \mu_\theta(z)$ is produced with a neural network , and $D$ is the dimensionality of $x$ . This demonstrates a drawback of methods that rely simply on the MSE loss ( Castrejon et al. , 2019 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ; Hafner et al. , 2019b ; Pong et al. , 2019 ; Zhu et al. , 2017 ; Henaff et al. , 2019 ) , as it is equivalent to assuming a particular , constant variance of the Gaussian decoding distribution .
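As a quick numerical sanity check of this equivalence, the following self-contained snippet (tensor shapes are arbitrary) verifies that the unit-variance Gaussian negative log-likelihood equals $\frac{D}{2}\,\mathrm{MSE}(\hat{x}, x)$ plus the constant $c = \frac{D}{2}\ln 2\pi$.

```python
import math
import torch

x_hat, x = torch.randn(4, 10), torch.randn(4, 10)
D = x.shape[1]

# Per-sample negative log-likelihood under N(x_hat, I).
nll = -torch.distributions.Normal(x_hat, 1.0).log_prob(x).sum(dim=1)

# Per-sample MSE (mean over dimensions) and the additive constant.
mse = ((x_hat - x) ** 2).mean(dim=1)
c = 0.5 * D * math.log(2 * math.pi)

assert torch.allclose(nll, 0.5 * D * mse + c, atol=1e-5)
```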
By learning this variance , we can achieve much better performance due to better calibration of the decoder . There are several ways in which we can specify this variance . An expressive way is to specify a diagonal covariance matrix for the image , with one value per pixel ( Kingma & Welling , 2014 ; Sønderby et al. , 2016 ; Rolfe , 2016 ) . This can be done , for example , by letting a neural network σθ output the diagonal entries of the covariance matrix given a latent sample z : $$p_\theta(x|z) \sim \mathcal{N}\big(\mu_\theta(z),\, \sigma_\theta(z)^2\big) . \qquad (1)$$ This parameterization of the decoding distribution outputs one variance value per pixel and channel . While powerful , we observe in Section 5.3 that this approach attains suboptimal performance , and is moreover prone to numerical instability . Instead , we will find experimentally that a simpler parameterization , in which the covariance matrix is specified with a single shared ( Kingma et al. , 2016 ; Dai & Wipf , 2019 ; Edwards & Storkey , 2016 ; Rezende & Viola , 2018 ) parameter σ as Σ = σI , often works better in practice : $$p_{\theta,\sigma}(x|z) \sim \mathcal{N}\big(\mu_\theta(z),\, \sigma^2 I\big) . \qquad (2)$$ The parameter σ can be optimized together with the parameters of the neural network θ with gradient descent . Of particular interest is the interpretation of this parameter . Writing out the expression for the decoding likelihood , we obtain $$-\ln p(x|z) = \frac{1}{2\sigma^2}\lVert \hat{x} - x \rVert^2 + D \ln \sigma\sqrt{2\pi} = \frac{1}{2\sigma^2}\lVert \hat{x} - x \rVert^2 + D \ln \sigma + c = D \ln \sigma + \frac{D}{2\sigma^2}\,\mathrm{MSE}(\hat{x}, x) + c .$$ The full objective of the resulting Gaussian σ-VAE is : $$\mathcal{L}_{\theta,\phi,\sigma} = D \ln \sigma + \frac{D}{2\sigma^2}\,\mathrm{MSE}(\hat{x}, x) + D_{\mathrm{KL}}\big(q(z|x)\,\Vert\,p(z)\big) . \qquad (3)$$ Note that σ may be viewed as a weighting parameter between the MSE reconstruction term and the KL-divergence term in the objective . Moreover , this objective explicitly specifies how to select the optimal variance : the variance should be selected to minimize the ( weighted ) MSE loss while also minimizing the logarithm of the variance . Decoder Calibration It is important that the decoder distribution be calibrated in the statistical sense , that is , the predicted probabilities should correspond to the frequencies of seeing a particular value of x given that prediction ( DeGroot & Fienberg , 1983 ; Dawid , 1982 ) . The calibration of a neural network can usually be improved by estimating the uncertainty of the prediction ( Guo et al. , 2017 ) , such as the variance of a Gaussian ( Kendall & Gal , 2017 ) . Since the naive MSE loss assumes a constant variance , it does not effectively represent the uncertainty of the prediction , and is often poorly calibrated . Instead , learning the variance as in Eq . 3 leads to better uncertainty estimation and better calibration . In Sec 5.1 , we show that learning a good estimate of this uncertainty is crucial for the quality of the VAE generations . Connection to β-VAE . The β-VAE objective ( Higgins et al. , 2017 ) for a Gaussian decoder with unit variance is : $$\mathcal{L}_\beta = \frac{D}{2}\,\mathrm{MSE}(\hat{x}, x) + \beta\, D_{\mathrm{KL}}\big(q(z|x)\,\Vert\,p(z)\big) . \qquad (4)$$ We see that it can be interpreted as a particular case of the objective ( 3 ) , where the variance is constant and the term $D \ln \sigma$ can be ignored during optimization . The β-VAE objective is then equivalent to a σ-VAE with a constant variance $\sigma = \sqrt{\beta/2}$ ( for a particular learning rate setting ) . In recent work ( Zhu et al. , 2017 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ) , β-VAE models are often used in this exact regime .
By tuning the β term , practitioners are able to tune the variance of the decoder , manually producing a more calibrated decoder . However , by re-interpreting the β-VAE objective as a special case of the VAE and introducing the missing $D \ln \sigma$ term , we can both obtain a valid evidence lower bound and remove the need to manually select β . Instead , the variance σ can simply be learned end-to-end , reducing the need for hyperparameter tuning . An alternative discussion of this connection in the context of linear VAEs is also presented by Lucas et al . ( 2019 ) . While the β term is not necessary for good performance if the decoder is calibrated , it can still be employed if desired , such as when the aim is to attain better disentanglement ( Higgins et al. , 2017 ) or a particular rate-distortion tradeoff ( Alemi et al. , 2017 ) . However , we found that with calibrated decoders , the best sample quality is obtained when β = 1 . Loss implementation details . For the correct evidence lower bound computation , it is necessary to sum the values of the MSE loss and the KL divergence across the dimensions . We observe that common implementations of these losses ( Denton & Fergus , 2018 ; Abadi et al. , 2016 ; Paszke et al. , 2019 ) use averaging instead , which will lead to poor results if the number of image dimensions is significantly different from the number of latent dimensions . While this can be conveniently ignored in the β-VAE regime , where the balance term is tuned manually anyway , for the σ-VAE it is essential to compute the objective value correctly . Variance implementation details . Since the variance is non-negative , we parameterize it logarithmically as $\sigma^2 = e^{2\lambda}$ , where λ is the logarithm of the standard deviation . For some models , such as per-pixel variance decoders , we observed that it is necessary to restrict the variance range for numerical stability . We do so by using the soft clipping operations proposed by Chua et al . ( 2018 ) : $$\lambda := \lambda_{\max} - \mathrm{softplus}(\lambda_{\max} - \lambda) ; \qquad \lambda := \lambda_{\min} + \mathrm{softplus}(\lambda - \lambda_{\min}) .$$ We observe that setting $\lambda_{\min} = -6$ , which lower-bounds the standard deviation to be at least half of the distance between allowed color values , works well in practice . We also observe that this clipping is unnecessary when learning a shared σ value .
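Putting the pieces of this section together, here is a minimal sketch of the Gaussian σ-VAE loss of Eq. (3) with a single shared variance. The clipping bounds, the choice to apply soft clipping at all (the text above notes it is unnecessary for a shared σ), and the assumption of a diagonal-Gaussian encoder producing `mu` and `logvar` are illustrative; the constant c is omitted since it does not affect optimization.

```python
import torch
import torch.nn.functional as F

# Single shared log standard deviation, learned jointly with the network.
log_sigma = torch.zeros((), requires_grad=True)

def soft_clip(lam, lam_min=-6.0, lam_max=2.0):
    # Soft clipping of the log std (Chua et al., 2018); optional for a shared sigma.
    lam = lam_max - F.softplus(lam_max - lam)
    return lam_min + F.softplus(lam - lam_min)

def sigma_vae_loss(x_hat, x, mu, logvar):
    log_s = soft_clip(log_sigma)
    D = x[0].numel()                                    # data dimensionality
    mse = F.mse_loss(x_hat, x, reduction="none").flatten(1).mean(dim=1)
    # Eq. (3): D*ln(sigma) + D/(2*sigma^2) * MSE, a summed reconstruction term.
    rec = D * log_s + D / (2 * torch.exp(2 * log_s)) * mse
    # KL of a diagonal Gaussian posterior from a unit Gaussian prior, summed over dims.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return (rec + kl).mean()
```

In practice `log_sigma` would simply be appended to the optimizer's parameter list, so the shared variance is trained end-to-end exactly like any other weight.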
This paper discusses a well-known problem of VAE training: the decoder produces blurry reconstructions with constant variance. While much existing work has addressed this problem by introducing independent variance training (as in the original VAE model) or additional hyper-parameters, those approaches usually come with additional training/tuning difficulty and can even break the ELBO interpretation. This paper proposes a simple $\sigma$-VAE that addresses the above problem by optimizing a single variance variable. This also connects easily to the well-known $\beta$-VAE works. The experimental results in Tables 2 and 3 show that the proposed model obtains better FID scores than existing works on multiple datasets.
SP:a3e5acdd322677d019a4582db78dab2dc1102818
Bayesian Neural Networks with Variance Propagation for Uncertainty Evaluation
1 INTRODUCTION . Uncertainty evaluation is a core technique in practical applications of deep neural networks ( DNNs ) . As an example , let us consider Cyber-Physical Systems ( CPS ) such as the automated driving system . In the past decade , machine learning methods have been widely utilized to realize the environment perception and path-planning components in the CPS . In particular , the automated driving system has drawn huge attention as a safety-critical and real-time CPS ( NITRD CPS Senior Steering Group , 2012 ; Wing , 2009 ) . In the automated driving system , the environment perception component is built using DNN-based predictive models . In real-world applications , the CPS is required to deal with unexpected samples that have not been seen in the training process . Therefore , not only achieving high prediction accuracy under the ideal environment but also providing uncertainty evaluation for real-world data is significant for safety-critical systems ( Henne et al. , 2019 ) . The CPS should prepare some options , such as the rejection of the recommended action to promote the user ’ s intervention , when the uncertainty is high . Such an interactive system is necessary to build fail-safe systems ( Varshney & Alemzadeh , 2017 ; Varshney , 2016 ) . On the other hand , uncertainty evaluation is useful to enhance the efficiency of learning algorithms , i.e. , samples with high uncertainty are thought to convey important information for training networks . Active data selection based on uncertainty has been studied for a long time under the name of active learning ( David et al. , 1996 ; Gal et al. , 2017 ; Holub et al. , 2008 ; Li & Guo , 2013 ; Shui et al. , 2020 ) . In statistics and machine learning , Bayesian estimation has been commonly exploited for uncertainty evaluation ( Bishop , 2006 ) . In the Bayesian framework , prior knowledge is represented as the prior distribution of the statistical model . The prior distribution is updated to the posterior distribution based on observations . The epistemic model uncertainty is represented in the prior distribution , and upon observing data , those beliefs can be updated in the form of a posterior distribution , which yields model uncertainty conditioned on observed data . The entropy and the variance are representative uncertainty measures ( Cover & Thomas , 2006 ) . For complicated models such as DNNs , however , a direct application of Bayesian methods is prohibitive , as the computation involves costly high-dimensional integration . In deep learning , Bayesian methods are related to stochastic learning algorithms . This relation is utilized to approximate the posterior over complex models . The stochastic method called dropout is a powerful regularization method for DNNs ( Srivastava et al. , 2014 ) . In each layer of the DNN , some units are randomly dropped during learning with stochastic gradient descent methods . Gal & Ghahramani ( 2016a ) revealed that dropout can be interpreted as a variational Bayes method . Based on this interpretation , they proposed a simple method for sampling DNN parameters from the approximate posterior distribution . Furthermore , the uncertainty of the DNN-based prediction is evaluated using the Monte-Carlo ( MC ) method called MC dropout . While the Bayesian DNN trained using dropout is realized by a simple procedure , the computational overhead is not negligible .
In MC dropout , dropout is also used at test time , with a number of repeated feed-forward calculations to effectively sample from the approximate posterior . Hence , the naive MC dropout is not necessarily suitable for systems demanding real-time responses . In this work , we propose a sampling-free method to evaluate the uncertainty of the DNN-based prediction . Our method is computationally inexpensive compared to MC dropout and provides reliable uncertainty evaluation . In the following , we first outline related works . Section 3 is devoted to showing the detailed formulae for calculating the uncertainty . In our method , an upper bound of the variance is propagated in each layer to evaluate the uncertainty of the output . We show that our method alleviates overconfident predictions . This property is shared with scaling methods for the calibration of the class-probability on test samples . In Section 4 , we study the relation between our method and scaling methods . In Section 5 , we demonstrate the computational efficiency and statistical reliability of our method through numerical experiments using both DNNs and RNNs . 2 RELATED WORKS . The framework of Bayesian inference is often utilized to evaluate the uncertainty of DNN-based predictions . In Bayesian methods , the uncertainty is represented by the predictive distribution defined from the posterior distribution of the weight parameters . MacKay ( 1992 ) proposed a simple approximation method of the posterior distribution for neural networks , and demonstrated that the Bayesian method improves the prediction performance on classification tasks . Graves ( 2011 ) showed that the variational method works efficiently to approximate the posterior distribution of complex neural network models . There are many approaches to evaluate the uncertainty of modern DNNs ( Alex Kendall & Cipolla , 2017 ; Choi et al. , 2018 ; Lu et al. , 2017 ; Le et al. , 2018 ) . We briefly review MC-based methods and sampling-free methods . Monte-Carlo methods based on Stochastic Learning : The randomness in the learning process can be interpreted as a prior distribution . In particular , dropout is a landmark stochastic regularization method for training DNNs ( Srivastava et al. , 2014 ) . Gal & Ghahramani ( 2016a ) proposed a simple method to generate weight parameters from the posterior distribution induced from the prior corresponding to the dropout regularization . The predictive distribution is approximated by MC dropout , which computes the expected output over Monte-Carlo samples of the weight parameters . Gal & Ghahramani ( 2016b ) reported that MC dropout works efficiently not only for feed-forward DNNs but also for recurrent neural networks ( RNNs ) . Another sampling-based method uses ensembles trained with different random seeds ( Lakshminarayanan et al. , 2017 ) . However , the computation cost is high , as the bootstrap method requires repeated training of parameters using resampled data . Sampling-free methods : Though MC dropout is a simple and practical method to evaluate the uncertainty , a number of feed-forward computations are necessary to approximate the predictive distribution . Recently , some sampling-free methods have been proposed for uncertainty evaluation . Probabilistic networks are a direct way to deal with uncertainty . The parameters of the probabilistic model , say the mean and the variance of the Gaussian distribution , are propagated in probabilistic neural networks .
Then , the uncertainty evaluation is given by a single feed-forward calculation . Choi et al . ( 2018 ) used a mixture of Gaussian distributions as a probabilistic neural network , and Wang et al . ( 2016 ) proposed natural-parameter networks as a class of probabilistic neural networks based on exponential families . For a given input vector , the network outputs the parameters of the distribution . For recurrent neural networks , Hwang et al . ( 2019 ) proposed a variant of the natural-parameter networks . Instead of parameters of statistical models , Wu et al . ( 2019 ) developed a sampling-free method to propagate the first and second order moments of the posterior distribution . Sampling-free methods can evaluate the uncertainty with a one-pass computation for neural networks . However , specialized learning algorithms are required to train probabilistic networks . Our method is applicable to DNNs and RNNs trained by common learning methods with dropout . Postels et al . ( 2019 ) and Shekhovtsov & Flach ( 2019 ) proposed similar methods that propagate the uncertainty of the network to the output layer . Differently from past works , our method takes into account an upper limit on the correlations among the inputs at the affine layer when the uncertainty is evaluated . In addition , we show that our method works efficiently even for RNNs . 3 UNCERTAINTY EVALUATION WITH VARIANCE PROPAGATION . In this work , we assume that we can access the weight parameters of the DNN and the dropout probability used in the training process . As the variance is a common measure of uncertainty , we propose a variance propagation algorithm for the trained DNN . An implementation of our method , called nn2vpbnn , is presented in Section A in the appendix . In our method , we need only the DNN or RNN trained using dropout . Unlike various kinds of probabilistic NNs , we do not need any specialized training procedure to evaluate the uncertainty . This is a great advantage for implementation . Furthermore , the representative values of the predictive distribution , i.e . the mean and variance , are obtained by a one-pass feed-forward calculation . Hence , we can circumvent iterative Monte-Carlo calculations . 3.1 UNCERTAINTY IN AFFINE LAYER . Let us consider the output of the affine layer $y = Wx + b$ for a random input $x$ , where $W = (W_{ij}) \in \mathbb{R}^{\ell \times m}$ and $b = (b_i)_{i=1}^{\ell} \in \mathbb{R}^{\ell}$ . Suppose that the random vector $x$ has mean vector $\mathbb{E}[x]$ and variance-covariance matrix $(\Sigma_x)_{i,j} = \mathrm{Cov}(x_i, x_j)$ for $i , j = 1 , \dots , m$ . Then , the mean vector $\mathbb{E}[y]$ and the variance-covariance matrix $\Sigma_y$ of $y$ are given by $\mathbb{E}[y] = W\,\mathbb{E}[x] + b$ and $\Sigma_y = W \Sigma_x W^{\top}$ . As the estimation of the full variance-covariance matrix is not necessarily reliable , we use only the variances of each $x_i$ and an upper bound on the absolute correlation coefficient to evaluate the uncertainty . For $W = (W_{ij})$ , the variance $\mathrm{Var}[y_i]$ is $$\mathrm{Var}[y_i] = \sum_j W_{ij}^2\,\mathrm{Var}[x_j] + \sum_{j \neq j'} W_{ij} W_{ij'}\,\mathrm{Cov}(x_j, x_{j'}) .$$ Suppose the absolute correlation coefficient among $x_1 , \dots , x_m$ is bounded above by $\rho$ , $0 \le \rho \le 1$ . Using the relation between the correlation and the variance , we have $$\mathrm{Var}[y_i] \le \sum_j W_{ij}^2\,\mathrm{Var}[x_j] + \rho \sum_{j \neq j'} |W_{ij}|\,|W_{ij'}|\,\sqrt{\mathrm{Var}(x_j)}\sqrt{\mathrm{Var}(x_{j'})} = (1 - \rho) \sum_j W_{ij}^2\,\mathrm{Var}[x_j] + \rho \Big( \sum_j |W_{ij}|\,\sqrt{\mathrm{Var}(x_j)} \Big)^{2} , \quad i = 1 , \dots , \ell . \qquad (1)$$ Under the independence assumption , i.e. , $\rho = 0$ , the minimum upper bound is obtained .
A prediction with a small variance leads to overconfident decision making . Hence , upper bounding the variance is important for building fail-safe systems . A simple method for estimating $\rho$ is presented in Section 3.5 . Using the above formula , the mean and an upper bound of the variance of $y$ are computed from the mean and an upper bound of the variance of $x$ . In this paper , such a computation is referred to as Variance Propagation , or VP for short . Let us define the variance vector of the $m$-dimensional random vector $x = (x_1 , \dots , x_m) \in \mathbb{R}^m$ by $\mathrm{Var}[x] = (\mathrm{Var}[x_1] , \dots , \mathrm{Var}[x_m]) \in \mathbb{R}^m$ . Furthermore , we denote the concatenated vector of the mean and variance of $z$ , or its approximation , by $U(z)$ , i.e. , $U(z) = (\mathbb{E}[z] , \mathrm{Var}[z])$ . The VP at the affine layer is expressed by the function $T_{\mathrm{aff}}$ , $$U(y) = (m , v) = T_{\mathrm{aff}}(U(x)) , \qquad (2)$$ where $m = W\,\mathbb{E}[x] + b \in \mathbb{R}^{\ell}$ and each element of $v \in \mathbb{R}^{\ell}$ is defined by equation 1 . The average pooling layer , the global average pooling layer ( Lin et al. , 2013 ) , and the batch normalization layer ( Ioffe & Szegedy , 2015 ) are examples of affine layers . Hence , the VP of the affine layer also works to evaluate the uncertainty of these layers . The distribution of $y_i$ is well approximated by a univariate Gaussian distribution if the correlation among $x$ is small ( Wang & Manning , 2013 ; Wu et al. , 2019 ) . Based on this fact , the uncertainty of $y_i$ can be represented by the univariate Gaussian distribution $\mathcal{N}(\mathbb{E}[y_i] , \mathrm{Var}[y_i])$ . In our method , the variance $\mathrm{Var}[y_i]$ of the approximate Gaussian is given by the variance $v$ in equation 2 .
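The following sketch illustrates the affine-layer VP of equations 1 and 2 in NumPy; the shapes and the value of ρ are arbitrary placeholders.

```python
import numpy as np

def affine_vp(W, b, mean_x, var_x, rho):
    """Propagate the mean and a variance upper bound through y = Wx + b."""
    mean_y = W @ mean_x + b                      # E[y] = W E[x] + b
    std_x = np.sqrt(var_x)
    indep = (W ** 2) @ var_x                     # exact when rho = 0
    corr = (np.abs(W) @ std_x) ** 2              # fully correlated extreme
    var_y = (1.0 - rho) * indep + rho * corr     # upper bound of equation 1
    return mean_y, var_y

# Toy usage: a 5-dimensional input mapped to 3 outputs.
rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 5)), np.zeros(3)
mean_y, var_y = affine_vp(W, b, mean_x=np.zeros(5), var_x=np.ones(5), rho=0.2)
```

Because the bound interpolates linearly between the independent case and the fully correlated case, a larger ρ yields more conservative (larger) variances and hence less overconfident predictions.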
This paper proposes a sampling-free technique based on variance propagation to model the predictive distributions of deep learning models. Estimating the uncertainty of deep learning models is an important line of research for understanding the reliability of predictions and ensuring robustness to out-of-distribution data. Results are shown using synthetic data, perplexity analysis for a language modeling task, and out-of-distribution detection performance using a convolutional network.
SP:3a1d7f7165762299ba2d9bab4144576660b9a784
Private Post-GAN Boosting
1 INTRODUCTION . The vast collection of detailed personal data , including everything from medical history to voting records , to GPS traces , to online behavior , promises to enable researchers from many disciplines to conduct insightful data analyses . However , many of these datasets contain sensitive personal information , and there is a growing tension between data analyses and data privacy . To protect the privacy of individual citizens , many organizations , including Google ( Erlingsson et al. , 2014 ) , Microsoft ( Ding et al. , 2017 ) , Apple ( Differential Privacy Team , Apple , 2017 ) , and more recently the 2020 US Census ( Abowd , 2018 ) , have adopted differential privacy ( Dwork et al. , 2006 ) as a mathematically rigorous privacy measure . However , working with noisy statistics released under differential privacy requires training . A natural and promising approach to tackle this challenge is to release differentially private synthetic data : a privatized version of the dataset that consists of fake data records and that approximates the real dataset on important statistical properties of interest . Since they already satisfy differential privacy , synthetic data enable researchers to interact with the data freely and to perform the same analyses even without expertise in differential privacy . A recent line of work ( Beaulieu-Jones et al. , 2019 ; Xie et al. , 2018 ; Yoon et al. , 2019 ) studies how one can generate synthetic data by incorporating differential privacy into generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) . Although GANs provide a powerful framework for synthetic data , they are also notoriously hard to train , and the privacy constraint imposes even more difficulty . Due to the added noise in the private gradient updates , it is often difficult to reach convergence with private training . In this paper , we study how to improve the quality of the synthetic data produced by private GANs . Unlike much of the prior work that focuses on fine-tuning of network architectures and training techniques , we propose Private post-GAN boosting ( Private PGB ) , a differentially private method that boosts the quality of the generated samples after the training of a GAN . Our method can be viewed as a simple and practical amplification scheme that improves the distribution from any existing black-box GAN training method , private or not . We take inspiration from an empirical observation in Beaulieu-Jones et al . ( 2019 ) that even though the generator distribution at the end of the private training may be a poor approximation to the data distribution ( due to e.g . mode collapse ) , there may exist a high-quality mixture distribution that is given by several generators over different training epochs . PGB is a principled method for finding such a mixture at a moderate privacy cost and without any modification of the GAN training procedure . To derive PGB , we first formulate a two-player zero-sum game , called the post-GAN zero-sum game , between a synthetic data player , who chooses a distribution over generated samples over training epochs to emulate the real dataset , and a distinguisher player , who tries to distinguish generated samples from real samples with the set of discriminators over training epochs .
We show that under a “ support coverage ” assumption , the synthetic data player ’ s mixed strategy ( given by a distribution over the generated samples ) at an equilibrium can successfully “ fool ” the distinguisher ; that is , no mixture of discriminators can distinguish the real versus fake examples better than random guessing . While the strict assumption does not always hold in practice , we demonstrate empirically that the synthetic data player ’ s equilibrium mixture consistently improves the GAN distribution . The Private PGB method then privately computes an approximate equilibrium in the game . The algorithm can be viewed as a computationally efficient variant of MWEM ( Hardt & Rothblum , 2010 ; Hardt et al. , 2012 ) , which is an inefficient query release algorithm with near-optimal sample complexity . Since MWEM maintains a distribution over exponentially many “ experts ” ( the set of all possible records in the data domain ) , it runs in time exponential in the dimension of the data . In contrast , we rely on the private GAN to reduce the support to only contain the set of privately generated samples , which makes PGB tractable even for high-dimensional data . We also provide an extension of the PGB method by incorporating the technique of discriminator rejection sampling ( Azadi et al. , 2019 ; Turner et al. , 2019 ) . We leverage the fact that the distinguisher ’ s equilibrium strategy , which is a mixture of discriminators , can often accurately predict which samples are unlikely and thus can be used as a rejection sampler . This allows us to further improve the PGB distribution with rejection sampling without any additional privacy cost , since differential privacy is preserved under post-processing . Our Private PGB method also has a natural non-private variant , which we show improves GAN training without privacy constraints . We empirically evaluate both the Private and Non-Private PGB methods on several tasks . To visualize the effects of our methods , we first evaluate them on a two-dimensional toy dataset with samples drawn from a mixture of 25 Gaussian distributions . We define a relevant quality score function and show that both the Private and Non-Private PGB methods improve the score of the samples generated from a GAN . We then show that the Non-Private PGB method can also be used to improve the quality of images generated by GANs using the MNIST dataset . Finally , we focus on applications with high relevance for privacy protection . First , we synthesize US Census datasets and demonstrate that the PGB method can improve the generator distribution on several statistical measures , including 3-way marginal distributions and pMSE . Secondly , we evaluate the PGB methods on a dataset with a natural classification task . We train predictive models on samples from Private PGB and samples from a private GAN ( without PGB ) , and show that PGB consistently improves the model accuracy on real out-of-sample test data . Related work . Our PGB method can be viewed as a modular boosting method that can improve on a growing line of work on differentially private GANs ( Beaulieu-Jones et al. , 2019 ; Xie et al. , 2018 ; Frigerio et al. , 2019 ; Torkzadehmahani et al. , 2020 ) . To obtain formal privacy guarantees , these algorithms optimize the discriminators in the GAN under differential privacy , by using private SGD , RMSprop , or Adam methods , and track the privacy cost using moments accounting ( Abadi et al. , 2016 ; Mironov , 2017 ) . Yoon et al .
( 2019 ) give a private GAN training method by adapting ideas from the PATE framework ( Papernot et al. , 2018 ) . Our PGB method is inspired by the Private Multiplicative Weights method ( Hardt & Rothblum , 2010 ) and its more practical variant MWEM ( Hardt et al. , 2012 ) , which answer a large collection of statistical queries by releasing a synthetic dataset . Our work also draws upon two recent techniques ( Turner et al . ( 2019 ) and Azadi et al . ( 2019 ) ) that use the discriminator as a rejection sampler to improve the generator distribution . We apply their technique by using the mixture discriminator computed in PGB as the rejection sampler . There has also been work that applies the idea of boosting to ( non-private ) GANs . For example , Arora et al . ( 2017 ) and Hoang et al . ( 2018 ) propose methods that directly train a mixture of generators and discriminators , and Tolstikhin et al . ( 2017 ) proposes AdaGAN , which reweights the real examples during training similarly to what is done in AdaBoost ( Freund & Schapire , 1997 ) . Both of these methods may be hard to make differentially private : they either require substantially more privacy budget to train a collection of discriminators or increase the weights on a subset of examples , which requires adding more noise when computing private gradients . In contrast , our PGB method boosts the generated samples post training and does not make modifications to the GAN training procedure . 2 PRELIMINARIES . Let $\mathcal{X}$ denote the data domain of all possible observations in a given context . Let $p_d$ be a distribution over $\mathcal{X}$ . We say that two datasets $X , X' \in \mathcal{X}^n$ are adjacent , denoted by $X \sim X'$ , if they differ by at most one observation . We will write $p_X$ to denote the empirical distribution over $X$ . Definition 1 ( Differential Privacy ( DP ) ( Dwork et al. , 2006 ) ) . A randomized algorithm $\mathcal{A} : \mathcal{X}^n \to \mathcal{R}$ with output domain $\mathcal{R}$ ( e.g . all generative models ) is $(\varepsilon , \delta)$-differentially private ( DP ) if for all adjacent datasets $X , X' \in \mathcal{X}^n$ and for all $S \subseteq \mathcal{R}$ : $$P(\mathcal{A}(X) \in S) \le e^{\varepsilon}\, P(\mathcal{A}(X') \in S) + \delta .$$ A very nice property of differential privacy is that it is preserved under post-processing . Lemma 1 ( Post-processing ) . Let $\mathcal{M}$ be an $(\varepsilon , \delta)$-differentially private algorithm with output range $\mathcal{R}$ and let $f : \mathcal{R} \to \mathcal{R}'$ be any mapping ; then the composition $f \circ \mathcal{M}$ is $(\varepsilon , \delta)$-differentially private . As a result , any subsequent analyses conducted on DP synthetic data also satisfy DP . The exponential mechanism ( McSherry & Talwar , 2007 ) is a private mechanism for selecting among the best of a discrete set of alternatives $\mathcal{R}$ , where “ best ” is defined by a quality function $q : \mathcal{X}^n \times \mathcal{R} \to \mathbb{R}$ that measures the quality of the result $r$ for the dataset $X$ . The sensitivity of the quality score $q$ is defined as $\Delta(q) = \max_{r \in \mathcal{R}} \max_{X \sim X'} |q(X , r) - q(X' , r)|$ . Then , given a quality score $q$ and privacy parameter $\varepsilon$ , the exponential mechanism $\mathcal{M}_E(q , \varepsilon , X)$ simply samples a random alternative from the range $\mathcal{R}$ such that the probability of selecting each $r$ is proportional to $\exp(\varepsilon\, q(X , r) / (2\Delta(q)))$ . 2.1 DIFFERENTIALLY PRIVATE GAN . The framework of generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) consists of two types of neural networks : generators and discriminators . A generator $G$ is a function that maps random vectors $z \in \mathcal{Z}$ drawn from a prior distribution $p_z$ to a sample $G(z) \in \mathcal{X}$ . A discriminator $D$ takes an observation $x \in \mathcal{X}$ as input and computes a probability $D(x)$ that the observation is real .
Each observation is either drawn from the underlying distribution $p_d$ or the induced distribution $p_g$ of a generator . The training of a GAN involves solving the following joint optimization over the discriminator and generator : $$\min_G \max_D \; \mathbb{E}_{x \sim p_X}\big[f(D(x))\big] + \mathbb{E}_{z \sim p_z}\big[f(1 - D(G(z)))\big] ,$$ where $f : [0 , 1] \to \mathbb{R}$ is a monotone function . For example , in the standard GAN , $f(a) = \log a$ , and in the Wasserstein GAN ( Arjovsky et al. , 2017 ) , $f(a) = a$ . The standard ( non-private ) algorithm iterates between optimizing the parameters of the discriminator and the generator based on the loss functions : $$L_D = -\mathbb{E}_{x \sim p_X}\big[f(D(x))\big] - \mathbb{E}_{z \sim p_z}\big[f(1 - D(G(z)))\big] , \qquad L_G = \mathbb{E}_{z \sim p_z}\big[f(1 - D(G(z)))\big] .$$ The private algorithm for training a GAN also performs the same alternating optimization , but it optimizes the discriminator under differential privacy while keeping the generator optimization the same . In general , the training proceeds over epochs $\tau = 1 , \dots , N$ , and at the end of each epoch $\tau$ the algorithm obtains a discriminator $D_\tau$ and a generator $G_\tau$ by optimizing the respective loss functions . In Beaulieu-Jones et al . ( 2019 ) ; Xie et al . ( 2018 ) , the private optimization of the discriminators is done by running the private SGD method of Abadi et al . ( 2016 ) or its variants . Yoon et al . ( 2019 ) performs the private optimization by incorporating the PATE framework ( Papernot et al. , 2018 ) . For all of these private GAN methods , the entire sequence of discriminators $\{D_1 , \dots , D_N\}$ satisfies privacy , and thus the sequence of generators $\{G_1 , \dots , G_N\}$ is also private , since the generators can be viewed as post-processing of the discriminators . Our PGB method is agnostic to the exact private GAN training method .
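For reference, the exponential mechanism described in the preliminaries admits a very short implementation; the snippet below is a generic sketch in which the candidate scores and sensitivity are placeholders supplied by the caller.

```python
import numpy as np

def exponential_mechanism(scores, eps, delta_q, rng=None):
    """Sample index r with probability proportional to exp(eps * q_r / (2 * delta_q))."""
    rng = rng or np.random.default_rng()
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * delta_q)
    logits -= logits.max()                 # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Toy usage: privately select among four candidates with unit-sensitivity scores.
r = exponential_mechanism(scores=[0.1, 0.9, 0.4, 0.7], eps=1.0, delta_q=1.0)
```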
This paper studies differentially private synthetic dataset generation. Unlike previous DP-based GAN models, this paper aims to boost sample quality after the training stage. In particular, the final synthetic dataset is sampled from the sequence of generators obtained during GAN training. The distribution is obtained via a private two-player game between the privately selected discriminator and a sampler from the mixture of generators. The results are demonstrated on Gaussian data and tabular data.
SP:72d1283f3602edc22896934271fcec5b03f25d9e
A Near-Optimal Recipe for Debiasing Trained Machine Learning Models
1 INTRODUCTION . Machine learning is increasingly applied to critical decisions which can have a lasting impact on individual lives , such as credit lending ( Bruckner , 2018 ) , medical applications ( Deo , 2015 ) , and criminal justice ( Brennan et al. , 2009 ) . Consequently , it is imperative to understand and improve the degree of bias of such automated decision-making . Unfortunately , despite the fact that bias ( or “ fairness ” ) is a central concept in our society today , it is difficult to define it in precise terms . In fact , as people perceive ethical matters differently depending on a plethora of factors including geographical location or culture ( Awad et al. , 2018 ) , no universally agreed-upon definition of bias exists . Moreover , the definition of bias may depend on the application and might even be ignored in favor of accuracy when the stakes are high , such as in medical diagnosis ( Kleinberg et al. , 2017 ; Ingold and Soper , 2016 ) . As such , it is not surprising that several definitions of “ unbiased classification ” have been introduced . These include statistical parity ( Dwork et al. , 2012 ; Zafar et al. , 2017a ) , equality of opportunity ( Hardt et al. , 2016 ) , and equalized odds ( Hardt et al. , 2016 ; Kleinberg et al. , 2017 ) . Unfortunately , such definitions are not generally compatible ( Chouldechova , 2017 ) and some might even be in conflict with calibration ( Kleinberg et al. , 2017 ) . In addition , because fairness is a societal concept , it does not necessarily translate into a statistical criterion ( Chouldechova , 2017 ; Dixon et al. , 2018 ) . Statistical parity Let $\mathcal{X}$ be an instance space and let $\mathcal{Y} = \{0 , 1\}$ be the target set in a standard binary classification problem . In the fair classification setting , we may further assume the existence of a ( possibly randomized ) sensitive attribute $s : \mathcal{X} \to \{0 , 1 , \dots , K\}$ , where $s(x) = k$ if and only if $x \in X_k$ for some total partition $\mathcal{X} = \cup_k X_k$ . For example , $\mathcal{X}$ might correspond to the set of job applicants while $s$ indicates their gender . Here , the sensitive attribute can be randomized if , for instance , the gender of an applicant is not a deterministic function of the full instance $x \in \mathcal{X}$ ( e.g . number of publications , years of experience , etc . ) . Then , a commonly used criterion for fairness is to require similar mean outcomes across the sensitive attribute . This property is well captured through the notion of statistical parity ( a.k.a . demographic parity ) ( Corbett-Davies et al. , 2017 ; Dwork et al. , 2012 ; Zafar et al. , 2017a ; Mehrabi et al. , 2019 ) : Definition 1 ( Statistical Parity ) . Let $\mathcal{X}$ be an instance space and $\mathcal{X} = \cup_k X_k$ be a total partition of $\mathcal{X}$ . A classifier $f : \mathcal{X} \to \{0 , 1\}$ satisfies statistical parity across all groups $X_1 , \dots , X_K$ if : $$\max_{k \in \{1 , 2 , \dots , K\}} \mathbb{E}_x\big[f(x) \mid x \in X_k\big] \;-\; \min_{k \in \{1 , 2 , \dots , K\}} \mathbb{E}_x\big[f(x) \mid x \in X_k\big] \;\le\; \epsilon .$$ To motivate and further clarify the definition , we showcase the empirical results on the Adult benchmark dataset ( Blake and Merz , 1998 ) in Figure 1 . When tasked with predicting whether the income of individuals is above $ 50K per year , all considered classifiers exhibit gender-related bias . One way of removing such bias is to enforce statistical parity across genders . Crucially , however , without taking ethnicity into account , different demographic groups may experience different outcomes . In fact , gender bias can actually increase in some minority groups after enforcing statistical parity .
This can be fixed by redefining the sensitive attribute to be the cross product of both gender and ethnicity ( green bars ) . Our main contribution is to present a near-optimal recipe for debiasing models , including deep neural networks , according to Definition 1 . Specifically , we formulate the task of debiasing learned models as a regularized optimization problem that is solved efficiently using the projected SGD method . We show how the algorithm produces thresholding rules with randomization near the thresholds , where the width of randomization is controlled by the regularization parameter . We also show that randomization near the threshold is necessary for Bayes risk consistency . While we focus on binary sensitive attributes in our experiments in Section 5 , our algorithm and its theoretical guarantees continue to hold for non-binary sensitive attributes as well . Statement of Contribution . 1 . We derive a near-optimal post-processing algorithm for debiasing learned models ( Section 3 ) . 2 . We prove theoretical guarantees for the proposed algorithm , including a proof of correctness and an explicit bound on the Bayes excess risk ( Section 4 ) . 3 . We empirically validate the proposed algorithm on benchmark datasets across both classical algorithms and modern DNN architectures . Our experiments demonstrate that the proposed algorithm significantly outperforms previous post-processing methods ( Section 5 ) . In Appendix E , we also show how the proposed algorithm can be modified to handle other criteria of bias as well . 2 RELATED WORK . Algorithms for fair machine learning can be broadly classified into three groups : ( 1 ) pre-processing methods , ( 2 ) in-processing methods , and ( 3 ) post-processing methods ( Zafar et al. , 2019 ) . Pre-processing algorithms transform the data into a different representation such that any classifier trained on it will not exhibit bias . This includes methods for learning a fair representation ( Zemel et al. , 2013 ; Lum and Johndrow , 2016 ; Bolukbasi et al. , 2016 ; Calmon et al. , 2017 ; Madras et al. , 2018 ; Kamiran and Calders , 2012 ) , label manipulation ( Kamiran and Calders , 2009 ) , data augmentation ( Dixon et al. , 2018 ) , or disentanglement ( Locatello et al. , 2019 ) . On the other hand , in-processing methods constrain the behavior of learning algorithms in order to control bias . This includes methods based on adversarial learning ( Zhang et al. , 2018 ) and constraint-based classification , such as by incorporating constraints on the decision margin ( Zafar et al. , 2019 ) or features ( Grgić-Hlača et al. , 2018 ) . Agarwal et al . ( 2018 ) showed that the task of learning an unbiased classifier could be reduced to a sequence of cost-sensitive classification problems , which could be applied to any black-box classifier . One caveat of the latter approach is that it requires solving a linear program ( LP ) and retraining classifiers , such as neural networks , many times before convergence . The algorithm we propose in this paper is a post-processing method , which can be justified theoretically ( Corbett-Davies et al. , 2017 ; Hardt et al. , 2016 ; Menon and Williamson , 2018 ; Celis et al. , 2019 ) . Fish et al . ( 2016 ) and Woodworth et al . ( 2017 ) fall under this category . However , the former only provides generalization guarantees without consistency results , while the latter proposes a two-stage approach that requires changes to the original training algorithm . Kamiran et al .
( 2012 ) also proposes a post-processing algorithm , called Reject Option Classifier ( ROC ) , without providing any theoretical guarantees . In contrast , our algorithm is Bayes consistent and does not alter the original classification method . In Celis et al . ( 2019 ) and Menon and Williamson ( 2018 ) , instance-dependent thresholding rules are also learned . However , our algorithm also learns to randomize around the threshold ( Figure 2 ( a ) ) and this randomization is key to our algorithm both theoretically as well as experimentally ( Appendix C and Section 5 ) . Hardt et al . ( 2016 ) learns a randomized post-processing rule but our proposed algorithm outperforms it in all of our experiments ( Section 5 ) . Woodworth et al . ( 2017 ) showed that the post-processing approach can , sometimes , be highly suboptimal . Nevertheless , the latter result does not contradict the statement that our post-processing rule is near-optimal because we assume that the original classifier outputs a monotone transformation of some approximation to the posterior probability p ( y = 1 | x ) ( e.g . margin or softmax output ) whereas Woodworth et al . ( 2017 ) assumed in their construction that the post-processing rule had access to the binary predictions only . We argue that the proposed algorithm has distinct advantages , particularly for deep neural networks ( DNNs ) . First , stochastic convex optimization methods are well-understood and can scale well to massive amounts of data ( Bottou , 2010 ) , which is often the case in deep learning today . Second , the guarantees provided by our algorithm hold w.r.t . the binary predictions instead of using a proxy , such as the margin as in some previous works ( Zafar et al. , 2017b ; 2019 ) . Third , unlike previous reduction methods that would require retraining a deep neural network several times until convergence ( Agarwal et al. , 2018 ) , which can be prohibitively expensive , our algorithm operates on learned models that are trained once and does not require retraining . Besides developing algorithms for fair classification , several recent works focused on other related aspects , such as proposing new definitions for fairness ; e.g . demographic parity ( Dwork et al. , 2012 ; Mehrabi et al. , 2019 ) , equalized odds ( Hardt et al. , 2016 ) , equality of opportunity/disparate mistreatment ( Zafar et al. , 2017a ; Hardt et al. , 2016 ) , and individual fairness ( Dwork et al. , 2012 ) . Recent works have also established several impossibility results related to fair classification , such as Kleinberg et al . ( 2017 ) ; Chouldechova ( 2017 ) . In our case , we derive a new impossibility result that holds for any deterministic binary classifier and relate it to the task of controlling the covariance between the classifier ’ s predictions and the sensitive attribute ( Appendix E ) . 3 NEAR-OPTIMAL ALGORITHM FOR STATISTICAL PARITY . Notation We reserve boldface letters for random variables ( e.g . x ) , small letters for instances ( e.g . x ) , capital letters for sets ( e.g . X ) , and calligraphic typeface for universal sets ( e.g . the instance space X ) . Given a set S , 1S ( x ) ∈ { 0 , 1 } is the characteristic function indicating whether x ∈ S. We denote by [ n ] the set of integers { 1 , . . . , n } and [ x ] + = max { 0 , x } . Algorithm Given a classifier f : X → [ −1 , +1 ] our goal is to post-process the predictions made by f 1 in order to control the bias with respect to a sensitive attribute s : X → [ K ] as in Definition 1 . 
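Before turning to the post-processing rule itself, a minimal sketch of how the parity gap of Definition 1 can be estimated from held-out predictions. This is our own illustration, not the authors' code, and the helper name is hypothetical:

```python
import numpy as np

def statistical_parity_gap(preds, groups):
    """Empirical gap of Definition 1: the spread between the largest and
    smallest mean prediction across sensitive groups."""
    preds, groups = np.asarray(preds, dtype=float), np.asarray(groups)
    rates = [preds[groups == k].mean() for k in np.unique(groups)]
    return max(rates) - min(rates)

# A classifier satisfies the criterion when the gap is at most epsilon.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # f(x) on held-out data
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # s(x)
print(statistical_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```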
To this end, instead of learning a deterministic classifier, we consider randomized prediction rules of the form h̃ : X × {1, 2, ..., K} × [−1, 1] → [0, 1], where h̃(x) represents the probability of predicting the positive class given (i) the instance x ∈ X, (ii) the sensitive attribute s(x), and (iii) the classifier's output f(x). As discussed in Appendix B, for a post-processing rule h̃(x) and each group X_k ⊆ X, the fairness constraint in Definition 1 can be written as |E_x[h̃(x) | x ∈ X_k] − ρ| ≤ ε, where ρ ∈ [0, 1] is a hyperparameter tuned via a validation dataset. On the other hand, minimizing the probability of altering the predictions of the original classifier can be achieved by maximizing the inner product E_x[h̃(x) · f(x)]. Instead of optimizing this quantity directly, which would lead to a pure thresholding rule, we minimize the regularized objective (γ/2) E_x[h̃(x)²] − E_x[h̃(x) · f(x)] for some regularization parameter γ > 0. This regularization leads to randomization around the threshold, which we show to be critical, both theoretically (Section 4 and Appendix C) and experimentally (Section 5). Using Lagrange duality, we show that the solution reduces to the update rules in Equation 2 with optimization variables {λ_k, µ_k}_{k∈[K]}; the corresponding predictor, which outputs +1 for group X_k with probability h̃_γ(x), is given by

$$\tilde{h}_\gamma(x) = \begin{cases} 0, & f(x) \le \lambda_k - \mu_k \\ \big(f(x) - \lambda_k + \mu_k\big)/\gamma, & \lambda_k - \mu_k \le f(x) \le \lambda_k - \mu_k + \gamma \\ 1, & f(x) \ge \lambda_k - \mu_k + \gamma \end{cases} \qquad (1)$$

where ξ_γ is given by Eq. (3).

Update rules. To learn these parameters, one can apply the following update rules (Appendix B):

$$\lambda_{s(x)} \leftarrow \max\Big\{0,\; \lambda_{s(x)} - \eta\Big(2\epsilon + \rho + \tfrac{\partial}{\partial \lambda_{s(x)}}\, \xi_\gamma\big(f(x) - (\lambda_{s(x)} - \mu_{s(x)})\big)\Big)\Big\}$$
$$\mu_{s(x)} \leftarrow \max\Big\{0,\; \mu_{s(x)} - \eta\Big(2\epsilon - \rho + \tfrac{\partial}{\partial \mu_{s(x)}}\, \xi_\gamma\big(f(x) - (\lambda_{s(x)} - \mu_{s(x)})\big)\Big)\Big\} \qquad (2)$$

where, again, ρ ∈ [0, 1] is a hyperparameter tuned via a validation dataset, s : X → [K] is the sensitive attribute, and γ > 0 is a regularization parameter that controls the level of randomization. In addition, the function ξ_γ : R → R₊ is given by:

$$\xi_\gamma(w) = \frac{w^2}{2\gamma} \cdot \mathbb{I}\{0 \le w \le \gamma\} + \Big(w - \frac{\gamma}{2}\Big) \cdot \mathbb{I}\{w > \gamma\} \qquad (3)$$

Note that ξ_γ is convex and its derivative ξ′_γ is (1/γ)-Lipschitz continuous; it can be interpreted as a differentiable approximation to the ReLU unit (Nair and Hinton, 2010). A full pseudocode of the proposed algorithm is presented in Appendix A.
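The pieces above can be sketched in a few lines. This is our own illustrative reading of Eqs. (1)-(3), not the authors' pseudocode (which is in their Appendix A): the per-sample updates, the hyperparameter values, and the sign handling of the ξ′_γ terms (obtained via the chain rule, since ∂/∂λ ξ_γ(f − (λ − µ)) = −ξ′_γ) are all assumptions.

```python
import numpy as np

def xi_prime(w, gamma):
    # Derivative of xi_gamma in Eq. (3): a (1/gamma)-Lipschitz soft step.
    return np.clip(w / gamma, 0.0, 1.0)

def h_gamma(f_x, lam, mu, gamma):
    # Randomized rule of Eq. (1): a linear ramp of width gamma between the
    # always-negative and always-positive regions.
    return np.clip((f_x - (lam - mu)) / gamma, 0.0, 1.0)

def fit_dual(f_scores, groups, eps, rho, gamma=0.1, eta=0.01, epochs=100):
    """Projected SGD on the dual variables of Eq. (2), one (lam, mu) pair per
    group. Per-sample updates and hyperparameter values are illustrative."""
    K = int(groups.max()) + 1
    lam, mu = np.zeros(K), np.zeros(K)
    for _ in range(epochs):
        for f_x, k in zip(f_scores, groups):
            g = xi_prime(f_x - (lam[k] - mu[k]), gamma)
            lam[k] = max(0.0, lam[k] - eta * (2 * eps + rho - g))
            mu[k] = max(0.0, mu[k] - eta * (2 * eps - rho + g))
    return lam, mu

# Usage: scores in [-1, 1] from the trained classifier, one group id per point.
# lam, mu = fit_dual(f_scores, groups, eps=0.05, rho=0.5)
# probs = h_gamma(f_scores, lam[groups], mu[groups], gamma=0.1)
```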
In this paper, the authors propose a post-processing method for removing bias from a trained model. The bias is defined as conditional statistical parity — for a given partitioning of the data, the predicted label should be conditionally uncorrelated with the sensitive (bias inducing) attribute for each partition. The authors relax this strong requirement to an epsilon-constraint on the conditional covariance for each partition. As an example, race (sensitive attribute) should be conditionally uncorrelated to whether an individual will default on their loan (predicted target) for each city (data partition). The authors propose a constrained optimization problem that takes the input data, sensitive attribute, partitioning and a trained model to yield a probabilistic decision rule. Subsequently, they propose an iterative solution to the problem, proving some theoretical properties as well as showing how the method compares to different baselines.
SP:a6280b6605e621403de6ac4c3fc80fa71184ab6d
DeLighT: Deep and Light-weight Transformer
1 INTRODUCTION. Attention-based transformer networks (Vaswani et al., 2017) are widely used for sequence modeling tasks, including language modeling and machine translation. To improve performance, models are often scaled to be either wider, by increasing the dimension of hidden layers, or deeper, by stacking more transformer blocks. For example, T5 (Raffel et al., 2019) uses a dimension of 65K and GPT-3 (Brown et al., 2020) uses 96 transformer blocks. However, such scaling increases the number of network parameters significantly (e.g., T5 and GPT-3 have 11 billion and 175 billion parameters, respectively) and complicates learning, i.e., these models either require very large training corpora (Raffel et al., 2019; Devlin et al., 2019; Brown et al., 2020) or careful regularization (Hinton et al., 2012; Wan et al., 2013; Merity et al., 2018a).

In this paper, we introduce a new parameter-efficient attention-based architecture that can be easily scaled to be both wide and deep. Our Deep and Light-weight Transformer architecture, DeLighT, extends the transformer architecture of Vaswani et al. (2017) and delivers similar or better performance with significantly fewer parameters and operations. At the heart of DeLighT is the DeLighT transformation, which uses the group linear transformations (GLTs) of Mehta et al. (2018) with an expand-reduce strategy for varying the width and depth of the DeLighT block efficiently. Since GLTs are local by nature, the DeLighT transformation uses feature shuffling, which is analogous to channel shuffling in convolutional networks (Zhang et al., 2018), to share information between different groups. Such wide and deep representations facilitate replacing the multi-head attention and feed-forward layers in transformers with single-headed attention and light-weight feed-forward layers, reducing total network parameters and operations. Importantly, unlike transformers, the DeLighT transformation decouples the depth and width from the input size, allowing us to allocate parameters more efficiently across blocks by using shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output.

We demonstrate that DeLighT models achieve similar or better performance than transformer models with significantly fewer parameters and operations on two common sequence modeling tasks: (i) machine translation and (ii) language modeling. On the low-resource WMT'16 En-Ro machine translation dataset, DeLighT attains transformer performance using 2.8× fewer parameters. On the high-resource WMT'14 En-Fr dataset, DeLighT delivers better performance (+0.4 BLEU score) with 1.8× fewer parameters than baseline transformers. Similarly, on language modeling, DeLighT matches the performance of Transformer-XL (Dai et al., 2019) with 1.5× fewer parameters on the WikiText-103 dataset. Our source code is open-source and is available at: https://github.com/sacmehta/delight

2 RELATED WORK. Improving transformers: Several methods have been introduced to improve the transformer architecture. The first line of research addresses the challenge of computing self-attention on long input sequences (Child et al., 2019; Kitaev et al., 2020; Beltagy et al., 2020). These methods can be combined with our architecture. The second line of research focuses on explaining multi-head attention (Raganato and Tiedemann, 2018; Brunner et al., 2020).
They show that increasing the number of transformer heads can lead to redundant representations ( Voita et al. , 2019a ; Michel et al. , 2019 ) and using fixed attention heads with predefined patterns ( Raganato et al. , 2020 ) or synthetic attention matrices ( Tay et al. , 2020 ) improves performance . The third line of research focuses on improving transformers by learning better representations ( Wu et al. , 2019 ; 2020 ; So et al. , 2019 ) . These works aim to improve the expressiveness of transformers using different transformations – for example , using convolutions ( Wu et al. , 2019 ; Gehring et al. , 2017 ) , gated linear units ( Dauphin et al. , 2017 ) , or multi-branch feature extractors ( So et al. , 2019 ; Wu et al. , 2020 ) . Our work falls into this category . Unlike previous works , we show that it is possible to efficiently allocate parameters both at the block-level using the DeLighT transformation and across blocks using block-wise scaling . Model scaling : Model scaling is a standard method to improve the performance of sequence models ( Vaswani et al. , 2017 ; Raffel et al. , 2019 ; Lan et al. , 2020 ; Devlin et al. , 2019 ; Shoeybi et al. , 2019 ; Tan and Le , 2019 ; Brown et al. , 2020 ) . Model dimensions are increased in width-wise scaling ( Vaswani et al. , 2017 ; Devlin et al. , 2019 ) while more blocks ( e.g. , Transformer blocks ) are stacked in depth-wise scaling ( Shoeybi et al. , 2019 ; Brown et al. , 2020 ; Wang et al. , 2019 ) . In both cases ( and their combination ) , parameters inside each block of the network are the same , which may lead to a sub-optimal solution . To further improve the performance of sequence models , this paper introduces block-wise scaling that allows for variably-sized blocks and efficient allocation of parameters in the network . Our results show that ( 1 ) shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output deliver the best performance , and ( 2 ) models with block-wise scaling coupled with model scaling achieve better performance compared to model scaling alone . We note that convolutional neural networks ( CNNs ) also learn shallower and narrower representations near the input and deeper and wider representations near the output . Unlike CNNs ( e.g. , ResNet of He et al . 2016 ) that perform a fixed number of operations at each convolutional layer , the proposed block-wise scaling uses a variable number of operations in each layer and block . Improving sequence models : There is also significant recent work on other related methods for improving sequence models , including ( 1 ) improving accuracy using better token-level representations – for example , using BPE ( Sennrich et al. , 2016 ) , adaptive inputs ( Baevski and Auli , 2019 ) and outputs ( Grave et al. , 2017a ) , and DeFINE ( Mehta et al. , 2020 ) , and ( 2 ) improving efficiency – for example , using compression ( Chen et al. , 2018 ; Sun et al. , 2020 ) , pruning ( Han et al. , 2016 ; Voita et al. , 2019b ) , and distillation ( Hinton et al. , 2015 ; Sanh et al. , 2019 ) . The closest to our work is the DeFINE transformation , which also learns representations using an expand-reduce strategy . The key difference between the DeFINE transformation ( Figure 1c ) and the DeLighT transformation ( Figure 1d ) is that the DeLighT transformation more efficiently allocates parameters within expansion and reduction layers . 
Unlike DeFINE , which uses fewer groups in group linear transformations to learn wider representations , DeLighT transformation uses more groups to learn wider representations with fewer parameters . The DeLighT transformation achieves comparable performance to the DeFINE transformation but with significantly fewer parameters . 3 DELIGHT : DEEP AND LIGHT-WEIGHT TRANSFORMER . A standard transformer block ( Figure 1a ) comprises of multi-head attention that uses a query-keyvalue decomposition to model relationships between sequence tokens , and a feed forward network ( FFN ) to learn wider representations . Multi-head attention obtains query Q , key K , and value V by applying three projections to the input , each consisting of h linear layers ( or heads ) that map the dm-dimensional input into a dh-dimensional space , where dh = dm/h is the head dimension . The FFN consists of two linear layers , where the first expands the dimensions from dm to df and the learnable parameters ( Linear and DeLighT ) are shown in color . The shape of linear transformations indicate their operation ( expansion , reduction , etc. ) . ( c , d ) compares the DeFINE transformation ( Mehta et al. , 2020 ) with the DeLighT transformation . Compared to the DeFINE transformation , the DeLighT transformation uses group linear transformations ( GLTs ) with more groups to learn wider representations with fewer parameters . Different colors are used to show groups in GLTs . For simplicity , feature shuffling is not shown in ( d ) . second reduces the dimensions from df to dm . The depth of a transformer block is 4 , consisting of ( 1 ) three parallel branches for queries , keys , and values , ( 2 ) a fusion layer that combines the output of multiple heads , and ( 3 ) two sequential linear layers in the FFN . In general , transformer-based networks sequentially stacks transformer blocks to increase network capacity and depth . This paper extends the transformer architecture and introduces a deep and light-weight transformer , DeLighT . Our model uses a deep and light-weight expand-reduce transformation , DeLighT transformation ( Section 3.1 ) , that enables learning wider representations efficiently . It also enables replacing multi-head attention and feed forward network ( FFN ) layers with single-head attention and a light-weight FFN ( Section 3.2 ) . DeLighT transformation decouples attention dimensions from the depth and width , allowing us to learn representations efficiently using block-wise scaling instead of uniform stacking of transformer blocks ( Section 3.3 ) . 3.1 DELIGHT TRANSFORMATION . DeLighT transformation maps a dm dimensional input vector into a high dimensional space ( expansion ) and then reduces it down to a do dimensional output vector ( reduction ) using N layers of the group transformations of Mehta et al . ( 2018 ) , as shown in Figure 1d . During these expansion and reduction phases , DeLighT transformation uses group linear transformations ( GLTs ) because they learn local representations by deriving the output from a specific part of the input and are more efficient than linear transformations . To learn global representations , the DeLighT transformation shares information between different groups in the group linear transformation using feature shuffling , analogous to channel shuffling in convolutional networks ( Zhang et al. , 2018 ) . A standard approach to increase the expressivity and capacity of transformers is to increase the input dimensions , dm . 
However, increasing d_m linearly also increases the number of operations in multi-head attention (O(n²d_m), where n is the sequence length) in a standard transformer block (Figure 1a). In contrast, to increase the expressivity and capacity of the DeLighT block, we increase the depth and width of its intermediate DeLighT transformations using the expansion and reduction phases. This enables us to use smaller dimensions for computing attention, requiring fewer operations.

Formally, the DeLighT transformation is controlled by five configuration parameters: (1) the number of GLT layers N, (2) the width multiplier w_m, (3) the input dimension d_m, (4) the output dimension d_o, and (5) the maximum number of groups g_max in a GLT. In the expansion phase, the DeLighT transformation projects the d_m-dimensional input to a high-dimensional space, d_max = w_m d_m, linearly using ⌈N/2⌉ layers. In the reduction phase, the DeLighT transformation projects the d_max-dimensional vector to a d_o-dimensional space using the remaining N − ⌈N/2⌉ GLT layers. Mathematically, we define the output Y at each GLT layer l as:

$$\mathbf{Y}^l = \begin{cases} \mathcal{F}\big(\mathbf{X}, \mathbf{W}^l, \mathbf{b}^l, g^l\big), & l = 1 \\ \mathcal{F}\big(\mathcal{H}(\mathbf{X}, \mathbf{Y}^{l-1}), \mathbf{W}^l, \mathbf{b}^l, g^l\big), & \text{otherwise} \end{cases} \qquad (1)$$

where W^l = {W^l_1, ..., W^l_{g^l}} and b^l = {b^l_1, ..., b^l_{g^l}} are the learnable weights and biases of the group linear transformation F with g^l groups at the l-th layer. Briefly, the function F takes the input X (or H(X, Y^{l−1})) and splits it into g^l non-overlapping groups such that X = {X_1, ..., X_{g^l}}. The function F then linearly transforms each X_i with weights W^l_i and bias b^l_i to produce the output Y^l_i = X_i W^l_i + b^l_i. The outputs of each group Y^l_i are then concatenated to produce the output Y^l. The function H first shuffles the output of each group in Y^{l−1} and then combines it with the input X using the input mixer connection of Mehta et al. (2020) to avoid vanishing gradient problems. Figure 2 visualizes the expansion phase in the DeLighT transformation with the group linear transformation, feature shuffling, and the input mixer connection. The number of groups at the l-th GLT layer in the DeLighT transformation is computed as:

$$g^l = \begin{cases} \min\big(2^{l-1}, g_{max}\big), & 1 \le l \le \lceil N/2 \rceil \\ g^{N-l}, & \text{otherwise} \end{cases} \qquad (2)$$

In our experiments, we use g_max = ⌈d_m/32⌉ so that each group has at least 32 input elements.
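As a rough sketch of these building blocks (our own illustration, not the official implementation; the layer widths in the example are illustrative, and the input mixer connection of H is omitted):

```python
import math
import torch
import torch.nn as nn

def num_groups(l, N, g_max):
    # Eq. (2): groups double during expansion, then mirror during reduction.
    if l <= math.ceil(N / 2):
        return min(2 ** (l - 1), g_max)
    return num_groups(max(N - l, 1), N, g_max)

class GLT(nn.Module):
    """Group linear transformation F of Eq. (1): split the input into g
    non-overlapping groups, apply an independent linear map to each, then
    feature-shuffle so information mixes across groups at the next layer."""
    def __init__(self, d_in, d_out, g):
        super().__init__()
        assert d_in % g == 0 and d_out % g == 0
        self.g = g
        self.maps = nn.ModuleList([nn.Linear(d_in // g, d_out // g) for _ in range(g)])

    def forward(self, x):                          # x: (batch, d_in)
        y = torch.cat([m(c) for m, c in zip(self.maps, x.chunk(self.g, dim=-1))], dim=-1)
        b, d = y.shape                             # feature shuffle:
        return y.view(b, self.g, d // self.g).transpose(1, 2).reshape(b, d)

# Expansion phase with N = 4, w_m = 2, d_m = 64 (intermediate width illustrative).
N, d_m, g_max = 4, 64, math.ceil(64 / 32)
widths = [64, 96, 128]                             # d_m -> d_max = w_m * d_m
layers = [GLT(widths[i], widths[i + 1], num_groups(i + 1, N, g_max)) for i in range(2)]

x = torch.randn(10, d_m)
for gl in layers:
    x = gl(x)
print(x.shape)                                     # torch.Size([10, 128])
```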
This paper presents a variant of the Transformer where low-dimensional matrix multiplications and single-head attention are used. Stacked group linear transformations (GLTs) are applied to the input of each layer to perform dimension growth and then reduction. The paper is well-written and easy to follow. Experiments demonstrate that the proposed architecture matches or improves on the performance of baseline Transformers with fewer parameters.
SP:90ffef024018f59b3bde23aa2e2a4677602d41e8
On the mapping between Hopfield networks and Restricted Boltzmann Machines
1 INTRODUCTION . Hopfield networks ( HNs ) ( Hopfield , 1982 ; Amit , 1989 ) are a classical neural network architecture that can store prescribed patterns as fixed-point attractors of a dynamical system . In their standard formulation with binary valued units , HNs can be regarded as spin glasses with pairwise interactions Jij that are fully determined by the patterns to be encoded . HNs have been extensively studied in the statistical mechanics literature ( e.g . ( Kanter & Sompolinsky , 1987 ; Amit et al. , 1985 ) ) , where they can be seen as an interpolation between the ferromagnetic Ising model ( p = 1 pattern ) and the Sherrington-Kirkpatrick spin glass model ( many random patterns ) ( Kirkpatrick & Sherrington , 1978 ; Barra & Guerra , 2008 ) . By encoding patterns as dynamical attractors which are robust to perturbations , HNs provide an elegant solution to pattern recognition and classification tasks . They are considered the prototypical attractor neural network , and are the historical precursor to modern recurrent neural networks . Concurrently , spin glasses have been used extensively in the historical machine learning literature where they comprise a sub-class of “ Boltzmann machines ” ( BMs ) ( Ackley et al. , 1985 ) . Given a collection of data samples drawn from a data distribution , one is generally interested in “ training ” a BM by tuning its weights Jij such that its equilibrium distribution can reproduce the data distribution as closely as possible ( Hinton , 2012 ) . The resulting optimization problem is dramatically simplified when the network has a two-layer structure where each layer has no self-interactions , so that there are only inter-layer connections ( Hinton , 2012 ) ( see Fig . 1 ) . This architecture is known as a Restricted Boltzmann Machine ( RBM ) , and the two layers are sometimes called the visible layer and the hidden layer . The visible layer characteristics ( dimension , type of units ) are determined by the training data , whereas the hidden layer can have binary or continuous units and the dimension is chosen somewhat arbitrarily . In addition to generative modelling , RBMs and their multi-layer extensions have been used for a variety of learning tasks , such as classification , feature extraction , and dimension reduction ( e.g . Salakhutdinov et al . ( 2007 ) ; Hinton & Salakhutdinov ( 2006 ) ) . There has been extensive interest in the relationship between HNs and RBMs , as both are built on the Ising model formalism and fulfill similar roles , with the aim of better understanding RBM behaviour and potentially improving performance . Various results in this area have been recently reviewed ( Marullo & Agliari , 2021 ) . In particular , an exact mapping between HNs and RBMs has been previously noted for the special case of uncorrelated ( orthogonal ) patterns ( Barra et al. , 2012 ) . Several related models have since been studied ( Agliari et al. , 2013 ; Mézard , 2017 ) , which partially relax the uncorrelated pattern constraint . However , the patterns observed in most real datasets exhibit significant correlations , precluding the use of these approaches . In this paper , we demonstrate exact correspondence between HNs and RBMs in the case of correlated pattern HNs . Specifically , we show that any HN with N binary units and p < N arbitrary ( i.e . non-orthogonal ) binary patterns encoded via the projection rule ( Kanter & Sompolinsky , 1987 ; Personnaz et al. 
, 1986 ) , can be transformed into an RBM with N binary and p gaussian variables . We then characterize when the reverse map from RBMs to HNs can be made . We consider a practical example using the mapping , and discuss the potential importance of this correspondence for the training and interpretability of RBMs . 2 RESULTS . We first introduce the classical solution to the problem of encodingN -dimensional binary { −1 , +1 } vectors { ξµ } pµ=1 , termed “ patterns ” , as global minima of a pairwise spin glass H ( s ) = − 12s TJs . This is often framed as a pattern retrieval problem , where the goal is to specify or learn Jij such that an energy-decreasing update rule for H ( s ) converges to the patterns ( i.e . they are stable fixed points ) . Consider the N × p matrix ξ with the p patterns as its columns . Then the classical prescription known as the projection rule ( or pseudo-inverse rule ) ( Kanter & Sompolinsky , 1987 ; Personnaz et al. , 1986 ) , J = ξ ( ξT ξ ) −1ξT , guarantees that the p patterns will be global minima of H ( s ) . This resulting spin model is commonly called a ( projection ) Hopfield network , and has the Hamiltonian H ( s ) = −1 2 sT ξ ( ξT ξ ) −1ξTs . ( 1 ) Note that ξT ξ invertibility is guaranteed as long as the patterns are linearly independent ( we therefore require p ≤ N ) . Also note that in the special ( rare ) case of orthogonal patterns ξµ · ξν = Nδµν ( also called “ uncorrelated ” ) , studied in the previous work ( Barra et al. , 2012 ) , one has ξT ξ = NI and so the pseudo-inverse interactions reduce to the well-known Hebbian form J = 1N ξξ T ( the properties of which are studied extensively in Amit et al . ( 1985 ) ) . Additional details on the projection HN Eq . ( 1 ) are provided in Appendix A . To make progress in analyzing Eq . ( 1 ) , we first consider a transformation of ξ which eliminates the inverse factor . 2.1 MAPPING A HOPFIELD NETWORK TO A RESTRICTED BOLTZMANN MACHINE . In order to obtain a more useful representation of the quadratic form Eq . ( 1 ) ( for our purposes ) , we utilize the QR-decomposition ( Schott & Stewart , 1999 ) of ξ to “ orthogonalize ” the patterns , ξ = QR , ( 2 ) with Q ∈ RN×p , R ∈ Rp×p . The columns of Q are the orthogonalized patterns , and form an orthonormal basis ( of non-binary vectors ) for the p-dimensional subspace spanned by the binary patterns . R is upper triangular , and if its diagonals are held positive then Q and R are both unique ( Schott & Stewart , 1999 ) . Note both the order and sign of the columns of ξ are irrelevant for HN pattern recall , so there are n = 2p · p ! possibleQ , R pairs . Fixing a pattern ordering , we can use the orthogonality ofQ to re-write the interaction matrix as J = ξ ( ξT ξ ) −1ξT = QR ( RTR ) −1RTQT = QQT ( 3 ) ( the last equality follows from ( RTR ) −1 = R−1 ( RT ) −1 ) . Eq . ( 3 ) resembles the simple Hebbian rule but with non-binary orthogonal patterns . Defining q ≡ QTs in analogy to the classical pattern overlap parameterm ≡ 1N ξ Ts ( Amit et al. , 1985 ) , we have H ( s ) = −1 2 sTQQTs = −1 2 q ( s ) · q ( s ) . ( 4 ) Using a Gaussian integral as in Amit et al . ( 1985 ) ; Barra et al . ( 2012 ) ; Mézard ( 2017 ) to transform ( exactly ) the partition function Z ≡ ∑ { s } e −βH ( s ) of Eq . ( 1 ) , we get Z = ∑ { s } e 1 2 ( βq ) T ( β−1I ) ( βq ) = ∑ { s } ∫ e− β 2 ∑ µ λ 2 µ+β ∑ µ λµ ∑ iQiµsi ∏ µ dλµ√ 2π/β . 
( 5 ) The second line can be seen as the partition function of an expanded Hamiltonian for the N ( binary ) original variables { si } and the p ( continuous ) auxiliary variables { λµ } , i.e . HRBM ( { si } , { λµ } ) = 1 2 ∑ µ λ2µ − ∑ µ ∑ i Qiµsiλµ . ( 6 ) Note that this is the Hamiltonian of a binary-continuous RBM with inter-layer weights Qiµ . The original HN is therefore equivalent to an RBM described by Eq . ( 6 ) ( depicted in Fig . 1 ) . As mentioned above , there are many RBMs which correspond to the same HN due to the combinatorics of choosing Q . In fact , instead of QR factorization one can use any decomposition which satisfies J = UUT , with orthogonal U ∈ RN×p ( see Appendix B ) , in which case U acts as the RBM weights . Also note the inclusion of an applied field term − ∑ i bisi in Eq . ( 1 ) trivially carries through the procedure , i.e . H̃RBM ( { si } , { λµ } ) = 12 ∑ µ λ 2 µ − ∑ i bisi − ∑ µ ∑ iQiµsiλµ . Instead of working with the joint form Eq . ( 6 ) , one could take a different direction from Eq . ( 5 ) and sum out the original variables { si } , i.e . Z = ∫ e− β 2 ∑ µ λ 2 µ2N ∏ i cosh ( β ∑ µ Qiµλµ ) ∏ µ dλµ√ 2π/β . ( 7 ) This continuous , p-dimensional representation is useful for numerical estimation of Z ( Section 3.1 ) . We may write Eq . ( 7 ) as Z = ∫ e−F0 ( λ ) dλµ , where F0 ( { λµ } ) = 1 2 ∑ µ λ2µ − 1 β ∑ i ln cosh ( β ∑ µ Qiµλµ ) . ( 8 ) Eq . ( 8 ) is an approximate Lyapunov function for the mean dynamics of { λµ } ; ∇λF0 describes the effective behaviour of the stochastic dynamics of the N binary variables { si } at temperature β−1 . 2.2 COMMENTS ON THE REVERSE MAPPING . With the mapping from HNs ( with correlated patterns ) to RBMs established , we now consider the reverse direction . Consider a binary-continuous RBM with inter-layer weights Wiµ which couple a visible layer of N binary variables { si } to a hidden layer of p continuous variables { λµ } , H ( s , λ ) = 1 2 ∑ µ λ2µ − ∑ i bisi − ∑ µ ∑ i Wiµsiλµ . ( 9 ) Here we use W instead of Q for the RBM weights to emphasize that the RBM is not necessarily an HN . First , following Mehta et al . ( 2019 ) , we transform the RBM to a BM with binary states by integrating out the hidden variables . The corresponding Hamiltonian for the visible units alone is ( see Appendix D.1 for details ) , H̃ ( s ) = − ∑ i bisi − 1 2 ∑ i ∑ j ∑ µ WiµWjµsisj , ( 10 ) a pairwise Ising model with a particular coupling structure Jij = ∑ µWiµWjµ , which in vector form is J = ∑ µ wµw T µ =WW T , ( 11 ) where { wµ } are the p columns ofW . In general , this Ising model Eq . ( 10 ) produced by integrating out the hidden variables need not have Hopfield structure ( discussed below ) . However , it automatically does ( as noted in Barra et al . ( 2012 ) ) , in the very special case whereWiµ ∈ { −1 , +1 } . In that case , the binary patterns are simply { wµ } , so that Eq . ( 11 ) represents a Hopfield network with the Hebbian prescription . This situation is likely rare and may only arise as a by-product of constrained training ; for a generically trained RBM the weights will not be binary . It is therefore interesting to clarify when and how real-valued RBM interactionsW can be associated with HNs . Approximate binary representation of W : In Section 2.1 , we orthogonalized the binary matrix ξ via the QR decomposition ξ = QR , where Q is an orthogonal ( but non-binary ) matrix , which allowed us to map a projection HN ( defined by its patterns ξ , Eq . ( 1 ) ) to an RBM ( defined by its inter-layer weightsQ , Eq . ( 6 ) ) . 
Here we consider the reverse map . Given a trained RBM with weights W ∈ RN×p , we look for an invertible transformation X ∈ Rp×p which binarizes W . We make the mild assumption that W is rank p. If we find such an X , then B =WX will be the Hopfield pattern matrix ( analogous to ξ ) , with Biµ ∈ { −1 , +1 } . This is a non-trivial problem , and an exact solution is not guaranteed . As a first step to study the problem , we relax it to that of finding a matrix X ∈ GLp ( R ) ( i.e . invertible , p × p , real ) which minimizes the binarization error argmin X∈GLp ( R ) ||WX − sgn ( WX ) ||F . ( 12 ) We denote the approximately binary transformation ofW via a particular solutionX by Bp =WX . ( 13 ) We also define the associated error matrixE ≡ Bp− sgn ( Bp ) . We stress thatBp is non-binary and approximatesB ≡ sgn ( Bp ) , the columns of which will be HN patterns under certain conditions on E. We provide an initial characterization and example in Appendix D .
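The forward mapping of Section 2.1 is easy to check numerically. The following NumPy sketch (our own illustration) builds the projection-rule couplings for random correlated patterns, orthogonalizes them with a QR decomposition, and verifies both J = QQᵀ and that the patterns are fixed points:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 5
xi = rng.choice([-1.0, 1.0], size=(N, p))        # correlated binary patterns

J = xi @ np.linalg.inv(xi.T @ xi) @ xi.T         # projection rule, Eq. (1)

Q, R = np.linalg.qr(xi)                          # orthogonalize: xi = QR
print(np.allclose(J, Q @ Q.T))                   # True, Eq. (3)

# Q plays the role of the inter-layer weights of the equivalent
# binary-continuous RBM in Eq. (6). The patterns are fixed points of the
# zero-temperature dynamics s <- sgn(J s):
print(np.all(np.sign(J @ xi) == xi))             # True
```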
This paper shows a relationship between the projection-rule weights of a Hopfield network (HN) and the interaction weights in a corresponding restricted Boltzmann machine (RBM). The mapping from HN to RBM is facilitated by realising that the partition function of the HN can be seen as the partition function of a binary-continuous (Bernoulli-Gaussian) RBM. The authors also comment on the mapping from RBM to HN. The experiments show the advantages of training an RBM with weights initialised from HN projection weights in generation and classification.
SP:c83ecc74eb885df5f29e5a7080a8c60d1ee0a3b0
One Reflection Suffice
Orthogonal weight matrices are used in many areas of deep learning . Much previous work attempt to alleviate the additional computational resources it requires to constrain weight matrices to be orthogonal . One popular approach utilizes many Householder reflections . The only practical drawback is that many reflections cause low GPU utilization . We mitigate this final drawback by proving that one reflection is sufficient , if the reflection is computed by an auxiliary neural network . 1 INTRODUCTION . Orthogonal matrices have shown several benefits in deep learning , with successful applications in Recurrent Neural Networks , Convolutional Neural Networks and Normalizing Flows . One popular approach can represent any d × d orthogonal matrix using d Householder reflections ( Mhammedi et al. , 2017 ) . The only practical drawback is low GPU utilization , which happens because the d reflections needs to be evaluated sequentially ( Mathiasen et al. , 2020 ) . Previous work often increases GPU utilization by using k d reflections ( Tomczak & Welling , 2016 ; Mhammedi et al. , 2017 ; Zhang et al. , 2018 ; Berg et al. , 2018 ) . Using fewer reflections limits the orthogonal transformations the reflections can represent , yielding a trade-off between representational power and computation time . This raises an intriguing question : can we circumvent the trade-off and attain full representational power without sacrificing computation time ? We answer this question with a surprising “ yes. ” The key idea is to use an auxiliary neural network to compute a different reflection for each input . In theory , we prove that one such “ auxiliary reflection ” can represent any number of normal reflections . In practice , we demonstrate that one auxiliary reflection attains similar validation error to models with d normal reflections , when training Fully Connected Neural Networks ( Figure 1 left ) , Recurrent Neural Networks ( Figure 1 center ) and convolutions in Normalizing Flows ( Figure 1 right ) . Notably , auxiliary reflections train between 2 and 6 times faster for Fully Connected Neural Networks with orthogonal weight matrices ( see Section 3 ) . 1.1 OUR RESULTS . The Householder reflection of x ∈ Rd around v ∈ Rd can be represented by a matrixH ( v ) ∈ Rd×d . H ( v ) x = ( I − 2 vv T ||v||2 ) x . An auxiliary reflection uses a Householder matrix H ( v ) with v = n ( x ) for a neural network n. f ( x ) = H ( n ( x ) ) x = ( I − 2n ( x ) n ( x ) T ||n ( x ) ||2 ) x . One auxiliary reflection can represent any composition of Householder reflections . We prove this claim even when we restrict the neural network n ( x ) to have a single linear layer n ( x ) = Wx for W ∈ Rd×d such that f ( x ) = H ( Wx ) x. Theorem 1 . For any k Householder reflections U = H ( v1 ) · · ·H ( vk ) there exists a neural network n ( x ) = Wx with W ∈ Rd×d such that f ( x ) = H ( Wx ) x = Ux for all x ∈ Rd\ { 0 } . Previous work ( Mhammedi et al. , 2017 ; Zhang et al. , 2018 ) often employ k d reflections and compute Ux as k sequential Householder reflectionsH ( v1 ) · · ·H ( vk ) ·xwith weights V = ( v1 · · · vk ) . It is the evaluation of these sequential Householder reflection that cause low GPU utilization ( Mathiasen et al. , 2020 ) , so lower values of k increase GPU utilization but decrease representational power . 
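For concreteness, a batched sketch of the auxiliary reflection with the single linear layer n(x) = Wx of Theorem 1 (our own illustration, not the authors' code):

```python
import torch

def aux_reflect(x, W):
    """Auxiliary reflection f(x) = H(Wx) x with n(x) = Wx, batched over
    the rows of x; a sketch of the construction in Theorem 1."""
    v = x @ W.T                                    # n(x) = Wx for each row
    c = 2.0 * (x * v).sum(-1, keepdim=True) / (v * v).sum(-1, keepdim=True)
    return x - c * v                               # (I - 2 vv^T / ||v||^2) x

x, W = torch.randn(4, 8), torch.randn(8, 8)
y = aux_reflect(x, W)
print(torch.allclose(x.norm(dim=-1), y.norm(dim=-1)))  # True: reflections are isometries
```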
Theorem 1 states that it is sufficient to evaluate a single auxiliary reflection H ( Wx ) x instead of k reflections H ( v1 ) · · ·H ( vk ) · x , thereby gaining high GPU utilization while retaining the full representational power of any number of reflections . In practice , we demonstrate that d reflections can be substituted with a single auxiliary reflection without decreasing validation error , when training Fully Connected Neural Networks ( Section 3.1 ) , Recurrent Neural Networks ( Section 3.2 ) and Normalizing Flows ( Section 3.3 ) . While the use of auxiliary reflections is straightforward for Fully Connected Neural Networks and Recurrent Neural Networks , we needed additional ideas to support auxiliary reflections in Normalizing Flows . In particular , we developed further theory concerning the inverse and Jacobian of f ( x ) = H ( Wx ) x . Note that f is invertible if there exists a unique x given y = H ( Wx ) x and W . Theorem 2 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if W = WT and has eigenvalues which satisfy 3/2 · λmin ( W ) > λmax ( W ) . Finally , we present a matrix formula for the Jacobian of the auxiliary reflection f ( x ) = H ( Wx ) x . This matrix formula is used in our proof of Theorem 2 , but it also allows us simplify the Jacobian determinant ( Lemma 1 ) which is needed when training Normalizing Flows . Theorem 3 . The Jacobian of f ( x ) = H ( Wx ) x is : J = H ( Wx ) A− 2Wxx TW ||Wx||2 where A = I − 2x TWTx ||Wx||2 W. We prove Theorem 1 in Appendix A.1.1 while Theorems 2 and 3 are proved in Section 2 . 2 NORMALIZING FLOWS . 2.1 BACKGROUND . Let z ∼ N ( 0 , 1 ) d and f be an invertible neural network . Then f−1 ( z ) ∼ Pmodel defines a model distribution for which we can compute likelihood of x ∼ Pdata ( Dinh et al. , 2015 ) . log pmodel ( x ) = log pz ( f ( x ) ) + log ∣∣∣∣det ( ∂f ( x ) ∂x ) ∣∣∣∣ ( 1 ) This allows us to train invertible neural network as generative models by maximum likelihood . Previous work demonstrate how to construct invertible neural networks and efficiently compute the log jacobian determinant ( Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ; Ho et al. , 2019 ) . 2.2 INVERTIBILITY AND JACOBIAN DETERMINANT ( PROOF SKETCH ) . To use auxiliary reflections in Normalizing Flows we need invertibility . That is , for every y ∈ Rd there must exist a unique x ∈ Rd so f ( x ) = H ( Wx ) x = y.1 We find that f is invertible if its Jacobian determinant is non-zero for all x in Sd−1 = { x ∈ Rd | ‖x‖ = 1 } . Theorem 4 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if the Jacobian determinant of f is non-zero for all x ∈ Sd−1 and W is invertible . The Jacobian determinant of H ( Wx ) x takes the following form . Lemma 1 . The Jacobian determinant of f ( x ) = H ( Wx ) x is : −det ( A ) ( 1 + 2 vTA−1u ||u||2 ) where vT = xTW , u = Wx and A = I − 2x TWTx ||Wx||2 W. It is then sufficient that det ( A ) 6= 0 and 1 + 2vTA−1u/||u||2 6= 0 . We prove that this happens if W = WT with eigenvalues 3/2 ·λmin ( W ) > λmax ( W ) . This can be achieved with W = I+V V T if we guarantee σmax ( V V T ) < 1/2 by spectral normalization ( Miyato et al. , 2018 ) . Combining these results yields Theorem 2 . Theorem 2 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if W = WT and has eigenvalues which satisfy 3/2 · λmin ( W ) > λmax ( W ) . Computing the Inverse . In practice , we use Newtons method to compute x so H ( Wx ) x = y . 
Figure 2 show reconstructions n−1 ( n ( x ) ) = x for an invertible neural network n with auxiliary reflections using Newtons method , see Appendix A.2.1 for details . 2.3 PROOFS . The goal of this section is to prove that f ( x ) = H ( Wx ) x is invertible . Our proof strategy has two parts . Section 2.3.1 first shows f is invertible if it has non-zero Jacobian determinant . Section 2.3.2 then present an expression for the Jacobian determinant , Lemma 1 , and prove the expression is non-zero if W = WT and 3/2 · λmin ( W ) > λmin ( W ) . 2.3.1 NON-ZERO JACOBIAN DETERMINANT IMPLIES INVERTIBILITY . In this section , we prove that f ( x ) = H ( Wx ) x is invertible on Rd if f has non-zero Jacobian determinant . To simplify matters , we first prove that invertibility on Sd−1 implies invertibility on Rd . Informally , invertibility on Sd−1 is sufficient because H ( Wx ) is scale invariant , i.e. , H ( c ·Wx ) = H ( Wx ) for all c 6= 0 . This is formalized by Lemma 2 . Lemma 2 . If f ( x ) = H ( Wx ) x is invertible on Sd−1 it is also invertible on Rd\ { 0 } . Proof . Assume that f ( x ) is invertible on Sd−1 . Pick any y′ ∈ Rd such that ||y′|| = c for any c > 0 . Our goal is to compute x′ such that H ( Wx′ ) x′ = y′ . By normalizing , we see y′/‖y′‖ ∈ Sd−1 . We can then use the inverse f−1 on y′/‖y′‖ to find x such that H ( Wx ) x = y′/‖y‖ . The result is then x′ = x‖y‖ since H ( Wx′ ) x′ = H ( Wx ) x||y|| = y due to scale invariance of H ( Wx ) . 1Note that we do not know H ( Wx ) so we can not trivially compute x = H ( Wx ) −1y = H ( Wx ) y . The main theorem we use to prove invertibiliy on Sd−1 is a variant of Hadamards global function inverse theorem from ( Krantz & Parks , 2012 ) . On a high-level , Hadamard ’ s theorem says that a function is invertible if it has non-zero Jacobian determinant and satisfies a few additional conditions . It turns out that these additional conditions are meet by any continuously differentiable function f ( x ) when ( in the notation of Theorem 5 ) M1 = M2 = Sd−1 . Theorem 5 . ( Krantz & Parks , 2012 , 6.2.8 ) Let M1 and M2 be smooth , connected N -dimensional manifolds and let f : M1 → M2 be continuously differentiable . If ( 1 ) f is proper , ( 2 ) the Jacobian of f is non-zero , and ( 3 ) M2 is simple connected , then f is invertible . For M1 = M2 = Sd−1 the additional conditions are met if f is continuously differentiable . Corollary 1 . Let f : Sd−1 → Sd−1 with d ≥ 2 be continuously differentiable with non-zero Jacobian determinant , then f is invertible . Proof . Note that Sd−1 is smooth and simply connected if d ≥ 2 ( Lee , 2013 ) . Continuously functions on Sd−1 are proper . We conclude f is invertible on Sd−1 by Theorem 5 . We now show that f ( x ) = H ( Wx ) x is continuously differentiable on Sd−1 . Lemma 3 . The function f ( x ) = H ( Wx ) x is continuously differentiable on Sd−1 ifW is invertible . Proof . Compositions of continuously differentiable functions are continuously differentiable by the chain rule . All the functions used to construct H ( Wx ) x are continuously differentiable , except the division . However , the only case where division is not continously differentiable is when ||Wx|| = 0 . Since W is invertible , ||Wx|| = 0 iff x = 0 . But 0 /∈ Sd−1 and we conclude f is continuously differentiable on Sd−1 . Theorem 4 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if the Jacobian determinant of f is non-zero for all x ∈ Sd−1 and W is invertible . Proof . 
By Lemma 3 , we see f is continuously differentiable since W is invertible , which by Corollary 1 means f is invertible on Sd−1 if f has non-zero Jacobian determinant on Sd−1 . By Lemma 2 , we get that f is invertible on Rd if it has non-zero Jacobian on Sd−1 .
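A small numerical illustration of Theorem 2 (our sketch, not the paper's code): W = I + VVᵀ with σ_max(VVᵀ) < 1/2 satisfies the eigenvalue condition, and the inverse can then be computed with Newton's method. We use autograd for the Jacobian instead of the closed form of Theorem 3, and the initialization heuristic is our own; the paper's exact procedure is in its Appendix A.2.1.

```python
import torch

def f(x, W):
    # Auxiliary reflection f(x) = H(Wx) x.
    v = W @ x
    return x - 2.0 * (v @ x) / (v @ v) * v

def invert(y, W, iters=50):
    # Plain Newton iteration for the x solving f(x) = y; a line search may
    # be needed for less well-conditioned W.
    x = -y.clone()            # for W near I, f(x) ~ -x, so -y is a good start
    for _ in range(iters):
        J = torch.autograd.functional.jacobian(lambda z: f(z, W), x)
        x = x - torch.linalg.solve(J, f(x, W) - y)
    return x

torch.manual_seed(0)
d = 6
V = torch.randn(d, d)
V = V / (2.1 * torch.linalg.matrix_norm(V, ord=2))   # sigma_max(V V^T) < 1/2
W = torch.eye(d) + V @ V.T   # symmetric with 3/2 * lam_min > lam_max (Thm. 2)
x_true = torch.randn(d)
print(torch.allclose(invert(f(x_true, W), W), x_true, atol=1e-4))  # True
```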
The authors present a way to learn the action of an arbitrary orthogonal matrix on a vector via a map from $\mathbb{R}^{n\times n}$ onto $\operatorname{O}(n)$. They show that the map is surjective, and give conditions under which they can invert this action. They then compare against previously proposed schemes on one task and show the performance of their models on two others.
SP:3d705a1b70254d2b9d05277efff8ac08b0539086
PCPs: Patient Cardiac Prototypes
1 INTRODUCTION . Modern medical research is arguably anchored around the “ gold standard ” of evidence provided by randomized control trials ( RCTs ) ( Cartwright , 2007 ) . However , RCT-derived conclusions are population-based and fail to capture nuances at the individual patient level ( Akobeng , 2005 ) . This is primarily due to the complex mosaic that characterizes a patient from demographics , to physiological state , and treatment outcomes . Similarly , despite the success of deep learning algorithms in automating clinical diagnoses ( Galloway et al. , 2019 ; Attia et al. , 2019a ; b ; Ko et al. , 2020 ) , network-generated predictions remain population-based and difficult to interpret . Such properties are a consequence of a network ’ s failure to incorporate patient-specific structure during training or inference . As a result , physicians are reluctant to integrate such systems into their clinical workflow . In contrast to such reluctance , personalized medicine , the ability to deliver the right treatment to the right patient at the right time , is increasingly viewed as a critical component of medical diagnosis ( Hamburg & Collins , 2010 ) . The medical diagnosis of cardiac signals such as the electrocardiogram ( ECG ) is of utmost importance in a clinical setting ( Strouse et al. , 1939 ) . For example , such signals , which convey information about potential abnormalities in a patent ’ s heart , also known as cardiac arrhythmias , are used to guide medical treatment both within and beyond the cardiovascular department ( Carter , 1950 ) . In this paper , we conceptually borrow insight from the field of personalized medicine in order to learn patient representations which allow for a high level of network interpretability . Such representations have several potential clinical applications . First , they allow clinicians to quantify the similarity of patients . By doing so , network-generated predictions for a pair of patients can be traced back to this similarity , and in turn , their corresponding ECG recordings . Allowing for this inspection of ECG recordings aligns well with the existing clinical workflow . An additional application of patient similarity is the exploration of previously unidentified patient relationships , those which may lead to the discovery of novel patient sub-cohorts . Such discoveries can lend insight into particular diseases and appropriate medical treatments . In contrast to existing patient representation learning methods ( Zhu et al. , 2016 ; Suo et al. , 2017 ) , we concurrently optimize for a predictive task ( cardiac arrhythmia classification ) , leverage patient similarity , and design a system specifically for 12-lead ECG signals . Contributions . Our contributions are the following : 1 . Patient cardiac prototypes ( PCPs ) - we learn representations that efficiently summarize the cardiac state of a patient in an end-to-end manner via contrastive learning . 2 . Patient similarity quantification - we show that , by measuring the Euclidean distance between PCPs and representations , we can identify similar patients across different datasets . 3 . Dataset distillation - we show that PCPs can be used to train a network , in lieu of the original dataset , and maintain strong generalization performance . 2 RELATED WORK . Contrastive learning is a self-supervised method that encourages representations of instances with commonalities to be similar to one another . This is performed for each instance and its perturbed counterpart ( Oord et al. 
, 2018 ; Chen et al. , 2020a ; b ; Grill et al. , 2020 ) and for different visual modalities ( views ) of the same instance ( Tian et al. , 2019 ) . Such approaches are overly-reliant on the choice of perturbations and necessitate a large number of comparisons . Instead , Caron et al . ( 2020 ) propose to learn cluster prototypes . Most similar to our work is that of Cheng et al . ( 2020 ) and CLOCS ( Kiyasseh et al. , 2020 ) which both show the benefit of encouraging patient-specific representations to be similar to one another . Although DROPS ( Anonymous , 2021 ) leverages contrastive learning , it does so at the patient-attribute level . In contrast to existing methods , we learn patient-specific representations , PCPs , in an end-to-end manner Meta-learning designs learning paradigms that allow for the fast adaptation of networks . Prototypical Networks ( Snell et al. , 2017 ) average representations to obtain class-specific prototypes . During inference , the similarity of representations to these prototypes determines the classification . Relational Networks ( Sung et al. , 2018 ) build on this idea by learning the similarity of representations to prototypes through a parametric function . Gidaris & Komodakis ( 2018 ) and Qiao et al . ( 2018 ) exploit hypernetworks ( Ha et al. , 2016 ) and propose to generate the parameters of the final linear layer of a network for few-shot learning on visual tasks . In contrast , during inference only , we compute the cosine similarity between representations and PCPs and use the latter as the input to a hypernetwork . Patient similarity aims at discovering relationships between patient data ( Sharafoddini et al. , 2017 ) . To quantify these relationships , Pai & Bader ( 2018 ) and ( Pai et al. , 2019 ) propose Patient Similarity Networks for cancer survival classification . Exploiting electronic health record data , Zhu et al . ( 2016 ) use Word2Vec to learn patient representations , and Suo et al . ( 2017 ) propose to exploit patient similarity to guide the re-training of models , an approach which is computationally expensive . Instead , our work naturally learns PCPs as efficient descriptors of the cardiac state of a patient . 3 METHODS . 3.1 LEARNING PATIENT CARDIAC PROTOTYPES VIA CONTRASTIVE LEARNING . We assume the presence of a dataset , D = { xi , yi } Ni=1 , comprising N ECG recordings , x , and cardiac arrhythmia labels , y , for a total of Ptot patients . Typically , multiple recordings are associated with a single patient , p. This could be due to multiple recordings within the same hospital visit or multiple visits to a hospital . Therefore , each patient is associated with N/Ptot recordings . We learn a feature extractor fθ : x ∈ RD −→ h ∈ RE , parameterized by θ , that maps a D-dimensional recording , x , to an E-dimensional representation , h. In the quest to learn patient-specific representations , we associate each patient , p , out of a total of P patients in the training set with a unique and learnable embedding , v ∈ RE , in a set of embeddings , V , where |V | = P N . Such embeddings are designed to be efficient descriptors of the cardiac state of a patient , and we thus refer to them as patient cardiac prototypes or PCPs . We propose to learn PCPs in an end-to-end manner via contrastive learning . 
More specifically, given an instance, x_i, that belongs to a particular patient, k, we encourage its representation, h_i = f_θ(x_i), to be similar to the same patient's PCP, v_k, and dissimilar to the remaining PCPs, v_j, j ≠ k. We quantify this similarity, s(h_i, v_k), by using the cosine similarity with a temperature parameter, τ. The intuition is that each PCP, in being attracted to a diverse set of representations that belong to the same patient, should become invariant to insidious intra-patient differences. For a mini-batch of size B, the contrastive loss is as follows:

$$\mathcal{L}_{contrastive} = -\sum_i^B \log\left[\frac{e^{s(h_i, v_k)}}{\sum_j^P e^{s(h_i, v_j)}}\right] \qquad (1)$$

$$s(h_i, v_j) = \frac{f_\theta(x_i) \cdot v_j}{\|f_\theta(x_i)\|\,\|v_j\|} \cdot \frac{1}{\tau} \qquad (2)$$

3.2 GENERATING PATIENT-SPECIFIC PARAMETERS VIA HYPERNETWORKS. Network parameters are typically updated during training and fixed during inference. This allows the parameters to exploit population-based information in order to learn high-level features useful for solving the task at hand. Such an approach, however, means that all instances are exposed to the same set of parameters during inference, regardless of instance-specific information. Such information can be related to any meta-label including, but not limited to, patient ID, geographical location, and even temporal period. As an exemplar, and motivated by the desire to generate patient-specific diagnoses, we focus on patient-specific information. We are essentially converting a traditional classification task to one that is conditioned on patient-specific information. To perform such conditioning, we propose to exploit both PCPs and hypernetworks, as explained next.

We assume the presence of a hypernetwork, g_φ : h ∈ R^E → ω ∈ R^{E×C}, parameterized by φ, that maps an E-dimensional representation, h, to a matrix of classification parameters, ω, where C is the number of class labels. During training, we feed a representation, h_i, to the hypernetwork and generate instance-specific parameters, ω_i (see Fig. 1 left). During inference, however, we retrieve, and feed into the hypernetwork, the most similar PCP, v_k, to the current representation, h_i (based on the similarity metric, s). We chose this strategy after having experimented with several of them (see Sec. 5.2). It is worthwhile to note that although this approach bears some resemblance to clustering, it is distinct from it. In a clustering scenario, we would have assigned labels to instances based on their proximity to PCPs. In contrast, we are leveraging this proximity to determine the input of a hypernetwork (see Fig. 1 right).

$$\omega_i = \begin{cases} g_\phi(h_i) & \text{for training} \\ g_\phi(v_k) & \text{for inference}, \quad v_k = \arg\max_{v_j} s(h_i, v_j) \end{cases} \qquad (3)$$

By performing this retrieval, we exploit the similarity between patients in the training and inference set. As a result, the hypernetwork generates patient-specific parameters that parameterize the linear classifier, p_ω : h ∈ R^E → y ∈ R^C, which maps a representation, h, to a posterior class distribution, y. We train the entire network in an end-to-end manner using a combined contrastive and supervised loss:

$$\mathcal{L}_{supervised} = -\sum_i^B \log p_{\omega_i}(y_i = c \mid h_i) \qquad (4)$$

$$\mathcal{L}_{combined} = \mathcal{L}_{contrastive} + \mathcal{L}_{supervised} \qquad (5)$$

4 EXPERIMENTAL DESIGN. 4.1 DATASETS. We conduct experiments using PyTorch (Paszke et al., 2019) on three large-scale ECG datasets that contain a significant number of patients.
PhysioNet 2020 ECG consists of 12-lead ECG recordings from 6,877 patients alongside labels corresponding to 9 different classes of cardiac arrhythmia . Each recording can be associated with multiple labels . Chapman ECG ( Zheng et al. , 2020 ) consists of 12-lead ECG recordings from 10,646 patients alongside labels corresponding to 11 different classes of cardiac arrhythmia . As is suggested by Zheng et al . ( 2020 ) , we group these labels into 4 major classes . PTB-XL ECG ( Wagner et al. , 2020 ) consists of 12-lead ECG recordings from 18,885 patients alongside 71 different types of annotations provided by two cardiologists . We follow the training and evaluation protocol presented by Strodthoff et al . ( 2020 ) where we leverage the 5 diagnostic class labels . We alter the original setup to only consider ECG segments with one label assigned to them and convert the task into a binary classification problem . Further details can be found in Appendix A.1 . Unless otherwise mentioned , datasets were split into training , validation , and test sets according to patient ID using a 60 , 20 , 20 configuration . In other words , patients appeared in only one of the sets . Further details about the dataset splits can be found in Appendix A.2 .
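A compact PyTorch sketch of the training loss of Eqs. (1)-(2) and the retrieval step of Eq. (3) (our own illustration, not the authors' code; tensor shapes and the use of cross-entropy, which averages rather than sums over the batch, are implementation choices):

```python
import torch
import torch.nn.functional as F

def pcp_contrastive_loss(h, patient_ids, V, tau=0.1):
    """Eqs. (1)-(2): pull each representation toward its own patient's
    prototype and push it away from all other PCPs.

    h           : (B, E) representations f_theta(x_i)
    patient_ids : (B,) index k of the patient each instance belongs to
    V           : (P, E) learnable patient cardiac prototypes
    """
    sim = F.normalize(h, dim=-1) @ F.normalize(V, dim=-1).T / tau  # cosine / tau
    return F.cross_entropy(sim, patient_ids)   # -log softmax over all PCPs

def retrieve_prototype(h, V):
    # Inference branch of Eq. (3): the most similar PCP is fed to the
    # hypernetwork in place of the representation itself.
    sim = F.normalize(h, dim=-1) @ F.normalize(V, dim=-1).T
    return V[sim.argmax(dim=-1)]

B, E, P = 32, 128, 100
h = torch.randn(B, E)
V = torch.nn.Parameter(torch.randn(P, E))
ids = torch.randint(0, P, (B,))
loss = pcp_contrastive_loss(h, ids, V)
```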
This paper proposes to learn patient-specific representations from patient physiological signals. The authors design a PCP representation for each patient, which is learned to agree with signals from the same patient and to disagree with signals from the remaining patients. In the supervised part, the classifier parameters are generated per patient by a hypernetwork, a form of meta-learning. The model was evaluated on three large ECG datasets: PhysioNet 2020 ECG, Chapman ECG, and PTB-XL ECG.
SP:0cb862cf3806c4f04d2d30f200c25841a1cb52a8
Activation-level uncertainty in deep neural networks
1 INTRODUCTION . Deep Neural Networks ( DNNs ) have achieved state-of-the-art performance in many different tasks , such as speech recognition ( Hinton et al. , 2012 ) , natural language processing ( Mikolov et al. , 2013 ) or computer vision ( Krizhevsky et al. , 2012 ) . In spite of their predictive power , DNNs are limited in terms of uncertainty estimation . This has been a classical concern in the field ( MacKay , 1992 ; Hinton & Van Camp , 1993 ; Barber & Bishop , 1998 ) , which has attracted a lot of attention in the last years ( Lakshminarayanan et al. , 2017 ; Guo et al. , 2017 ; Sun et al. , 2019 ; Wenzel et al. , 2020 ) . Indeed , this ability to “ know what is not known ” is essential for critical applications such as medical diagnosis ( Esteva et al. , 2017 ; Mobiny et al. , 2019 ) or autonomous driving ( Kendall & Gal , 2017 ; Gal , 2016 ) . Bayesian Neural Networks ( BNNs ) address this problem through a Bayesian treatment of the network weights1 ( MacKay , 1992 ; Neal , 1995 ) . This will be refered to as weight-space stochasticity . However , dealing with uncertainty in weight space is challenging , since it contains many symmetries and is highly dimensional ( Wenzel et al. , 2020 ; Sun et al. , 2019 ; Snoek et al. , 2019 ; Fort et al. , 2019 ) . Here we focus on two specific limitations . First , it has been recently shown that BNNs with well-established inference methods such as Bayes by Backprop ( BBP ) ( Blundell et al. , 2015 ) and MC-Dropout ( Gal & Ghahramani , 2016 ) underestimate the predictive uncertainty for instances located in-between two clusters of training points ( Foong et al. , 2020 ; 2019 ; Yao et al. , 2019 ) . Second , the weight-space prior does not allow BNNs to guide extrapolation to out-of-distribution ( OOD ) data ( Sun et al. , 2019 ; Nguyen et al. , 2015 ; Ren et al. , 2019 ) . Both aspects are illustrated graphically in Figure 3 , more details in Section 3.1 . ∗Work developed mostly while visiting Cambridge University , UK . 1The bias term will be absorbed within the weights throughout the work . As an alternative to standard BNNs , Functional Bayesian Neural Nets ( fBNN ) specify the prior and perform inference directly in function space ( Sun et al. , 2019 ) . This provides a mechanism to guide the extrapolation in OOD data , e.g . predictions can be encouraged to revert to the prior in regions of no observed data . However , the posterior stochastic process is still defined by a factorized Gaussian on the network weights ( i.e . as in BBP ) , see ( Sun et al. , 2019 , Sect . 3.1 ) . We will show that this makes fBNN inherit the problem of underestimating the predictive uncertainty for in-between data . In this work , we adopt a different approach by moving stochasticity from the weights to the activation function , see Figure 1 . This will be referred to as auNN ( activation-level uncertainty for Neural Networks ) . The activation functions are modelled with ( one-dimensional ) GP priors , for which a triangular kernel inspired by the ReLu non-linearity ( Nair & Hinton , 2010 ; Glorot et al. , 2011 ) is used . Since non-linearities are typically simple functions ( e.g . ReLu , sigmoid , tanh ) , our GPs are sparsified with few inducing points . The network weights are deterministic parameters which are estimated to maximize the marginal likelihood of the model . The motivation behind auNN is to avoid inference in the complex space of weights . 
We hypothesise that it could be enough to introduce stochasticity in the activation functions that follow the linear projections in order to provide sensible uncertainty estimations. We show that auNN obtains well-calibrated estimations for in-between data, and that its prior makes it possible to guide the extrapolation to OOD data by reverting to the empirical mean. This will be visualized in a simple 1D example (Figure 3 and Table 1). Moreover, auNN obtains competitive performance on standard benchmarks, is scalable (datasets of up to ten million training points are used), and can be readily used for classification. The use of GPs for the activations establishes an interesting connection with deep GPs (DGPs) (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017). The main difference is the linear projection before the GP, recall Figure 1(c-d). This allows auNN units to model simpler mappings between layers, which are defined along one direction of the input space, similarly to neural networks. DGP units, by contrast, model more complex mappings defined on the whole input space, see also Figure 2a. We will show that auNN units require fewer inducing points and are better suited for deep architectures, achieving superior performance. A thorough discussion of additional related work will be provided in Section 4. In summary, the main contributions of this paper are: (1) a new approach to model uncertainty in DNNs, based on deterministic weights and simple stochastic non-linearities (in principle, not necessarily modelled by GPs); (2) the specific use of non-parametric GPs as a prior, including the triangular kernel inspired by the ReLU; (3) auNN addresses a well-known limitation of BNNs and fBNNs (uncertainty underestimation for in-between data), can guide the extrapolation to OOD data by reverting to the empirical mean, and is competitive in standard prediction tasks; (4) auNN units require fewer inducing points and are better suited for deep architectures than DGP ones, achieving superior performance.

2 PROBABILISTIC MODEL AND INFERENCE . Model specification. We focus on a supervised task (e.g. regression or classification) with training data2 $\{\mathbf{x}_{n,:}, \mathbf{y}_{n,:}\}_{n=1}^N$. The graphical model in Figure 2b will be useful throughout this section. 2The output is represented as a vector since all the derivations apply to the multi-output case. We assume a model of $L$ layers, each one with $D_l$ units as in Figure 1c. Each activation is modelled with a (1D) GP prior, i.e. $f^l_d(a^l_d) \sim \mathcal{GP}(\mu^l_d, k^l_d)$, with $\mu^l_d:\mathbb{R}\to\mathbb{R}$ and $k^l_d:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$. The GP hyperparameters $\theta^l_d$ will be omitted for clarity (for the kernels used here, $\theta^l_d$ includes the amplitude and the lengthscale). Assuming independence between units, each layer depends on the previous one as:

$$p(\mathbf{F}^l\,|\,\mathbf{F}^{l-1},\mathbf{W}^l) = p(\mathbf{F}^l\,|\,\mathbf{A}^l) = \prod_{d=1}^{D_l} p(\mathbf{f}^l_d\,|\,\mathbf{a}^l_d), \qquad (1)$$

where $\mathbf{F}^l$ is the $N\times D_l$ matrix of outputs of the $l$-th layer for $N$ inputs, $\mathbf{W}^l$ is the $D_{l-1}\times D_l$ matrix of weights in that layer, and $\mathbf{A}^l$ is the $N\times D_l$ matrix of pre-activations, i.e. $\mathbf{A}^l = \mathbf{F}^{l-1}\mathbf{W}^l$. As usual, the columns and rows of $\mathbf{F}^l$ are denoted $\mathbf{f}^l_d$ and $\mathbf{f}^l_{n,:}$, respectively (and analogously for the other matrices). Since the activation is defined by a GP, we have $p(\mathbf{f}^l_d\,|\,\mathbf{a}^l_d) = \mathcal{N}(\mathbf{f}^l_d\,|\,\boldsymbol{\mu}^l_d, \mathbf{K}^l_d)$, with $\boldsymbol{\mu}^l_d$ (resp. $\mathbf{K}^l_d$) the result of evaluating $\mu^l_d$ (resp. $k^l_d$) on $\mathbf{a}^l_d$ (that is, $\boldsymbol{\mu}^l_d$ is an $N$-dimensional vector and $\mathbf{K}^l_d$ is an $N\times N$ matrix).
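As a concrete illustration of the factorization in Eq. (1), the sketch below computes the pre-activations $\mathbf{A}^l = \mathbf{F}^{l-1}\mathbf{W}^l$ and one $N\times N$ prior covariance per unit with NumPy. The RBF kernel, the zero prior mean, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rbf(x, y, amp=1.0, ls=1.0):
    """Isotropic RBF kernel on scalars; x and y broadcast together."""
    return amp * np.exp(-0.5 * ((x - y) / ls) ** 2)

def layer_prior(F_prev, W, kernel=rbf):
    """Per-unit GP priors of Eq. (1): each unit d gets a 1D GP evaluated
    on its own pre-activations a_d = F_prev @ W[:, d]."""
    A = F_prev @ W                      # (N, D_l) pre-activations
    covs = []
    for d in range(A.shape[1]):
        a = A[:, d]
        covs.append(kernel(a[:, None], a[None, :]))  # N x N prior covariance
    return A, covs

# usage: two units, zero prior mean assumed
rng = np.random.default_rng(0)
F0 = rng.standard_normal((5, 3))        # N=5 inputs, D_0=3 features
W1 = rng.standard_normal((3, 2))        # D_0 x D_1 weights
A1, K1 = layer_prior(F0, W1)
```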
To fully specify the model, the output $\mathbf{Y}$ is defined from the last layer with a distribution that factorizes across data points, i.e. $p(\mathbf{Y}\,|\,\mathbf{F}^L) = \prod_{n=1}^N p(\mathbf{y}_{n,:}\,|\,\mathbf{f}^L_{n,:})$. This formulation resembles that of DGPs (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017). The main difference is that we model $\mathbf{F}^l\,|\,\mathbf{F}^{l-1}$ through $D_l$ 1D GPs evaluated on the pre-activations $\mathbf{A}^l$ (i.e. the projections of $\mathbf{F}^{l-1}$ through $\mathbf{W}^l$), whereas DGPs use $D_l$ GPs of dimension $D_{l-1}$ evaluated directly on $\mathbf{F}^{l-1}$, recall Figure 1(c-d).

Variational Inference. Inference in the proposed model is intractable. To address this, we follow standard sparse variational GP approaches (Titsias, 2009; Hensman et al., 2013; 2015), similarly to the Doubly Stochastic Variational Inference (DSVI) for DGPs (Salimbeni & Deisenroth, 2017). Specifically, in each unit of each layer we introduce $M^l$ inducing values $\mathbf{u}^l_d$, which are the result of evaluating the GP on the one-dimensional inducing points $\mathbf{z}^l_d$. We naturally write $\mathbf{U}^l$ and $\mathbf{Z}^l$ for the corresponding $M^l\times D_l$ matrices associated with the $l$-th layer. Following Eq. (1), the augmented model for one layer is

$$p(\mathbf{F}^l,\mathbf{U}^l\,|\,\mathbf{F}^{l-1},\mathbf{W}^l,\mathbf{Z}^l) = p(\mathbf{F}^l\,|\,\mathbf{U}^l,\mathbf{A}^l,\mathbf{Z}^l)\,p(\mathbf{U}^l\,|\,\mathbf{Z}^l) = \prod_{d=1}^{D_l} p(\mathbf{f}^l_d\,|\,\mathbf{u}^l_d,\mathbf{a}^l_d,\mathbf{z}^l_d)\,p(\mathbf{u}^l_d\,|\,\mathbf{z}^l_d). \qquad (2)$$

Variational inference (VI) involves the approximation of the true posterior $p(\{\mathbf{F}^l,\mathbf{U}^l\}_l\,|\,\mathbf{Y})$. Following (Hensman et al., 2013; Salimbeni & Deisenroth, 2017), we propose a posterior given by $p(\mathbf{F}\,|\,\mathbf{U})$ and a parametric Gaussian on $\mathbf{U}$:

$$q(\{\mathbf{F}^l,\mathbf{U}^l\}_l) = \prod_{l=1}^L p(\mathbf{F}^l\,|\,\mathbf{U}^l,\mathbf{A}^l,\mathbf{Z}^l)\,q(\mathbf{U}^l) = \prod_{l=1}^L \prod_{d=1}^{D_l} p(\mathbf{f}^l_d\,|\,\mathbf{u}^l_d,\mathbf{a}^l_d,\mathbf{z}^l_d)\,q(\mathbf{u}^l_d), \qquad (3)$$

where $q(\mathbf{u}^l_d) = \mathcal{N}(\mathbf{u}^l_d\,|\,\mathbf{m}^l_d,\mathbf{S}^l_d)$, with $\mathbf{m}^l_d\in\mathbb{R}^{M^l}$ and $\mathbf{S}^l_d\in\mathbb{R}^{M^l\times M^l}$ variational parameters to be estimated. Minimizing the KL divergence between $q(\{\mathbf{F}^l,\mathbf{U}^l\}_l)$ and the true posterior is equivalent to maximizing the following evidence lower bound (ELBO):

$$\log p(\mathbf{Y}\,|\,\{\mathbf{W}^l,\mathbf{Z}^l\}_l) \ge \mathrm{ELBO} = \sum_{n=1}^N \mathbb{E}_{q(\mathbf{f}^L_{n,:})}\big[\log p(\mathbf{y}_{n,:}\,|\,\mathbf{f}^L_{n,:})\big] - \sum_{l=1}^L \sum_{d=1}^{D_l} \mathrm{KL}\big(q(\mathbf{u}^l_d)\,\|\,p(\mathbf{u}^l_d)\big). \qquad (4)$$

In the ELBO, the KL term can be computed in closed form, as both $q(\mathbf{u}^l_d)$ and $p(\mathbf{u}^l_d)$ are Gaussians. The log-likelihood term can be approximated by sampling from the marginal posterior $q(\mathbf{f}^L_{n,:})$, which can be done efficiently through univariate Gaussians as in (Salimbeni & Deisenroth, 2017). Specifically, $\mathbf{U}^l$ can be analytically marginalized in Eq. (3), which yields $q(\{\mathbf{F}^l\}_l) = \prod_l q(\mathbf{F}^l\,|\,\mathbf{F}^{l-1},\mathbf{W}^l) = \prod_{l,d} \mathcal{N}(\mathbf{f}^l_d\,|\,\tilde{\boldsymbol{\mu}}^l_d, \tilde{\boldsymbol{\Sigma}}^l_d)$, with:

$$[\tilde{\boldsymbol{\mu}}^l_d]_i = \mu^l_d(a^l_{id}) + \boldsymbol{\alpha}^l_d(a^l_{id})^\top\big(\mathbf{m}^l_d - \mu^l_d(\mathbf{z}^l_d)\big), \qquad (5)$$
$$[\tilde{\boldsymbol{\Sigma}}^l_d]_{ij} = k^l_d(a^l_{id}, a^l_{jd}) - \boldsymbol{\alpha}^l_d(a^l_{id})^\top\big(k^l_d(\mathbf{z}^l_d) - \mathbf{S}^l_d\big)\boldsymbol{\alpha}^l_d(a^l_{jd}), \qquad (6)$$

where $\boldsymbol{\alpha}^l_d(x) = k^l_d(x,\mathbf{z}^l_d)[k^l_d(\mathbf{z}^l_d)]^{-1}$ and $\mathbf{a}^l_{n,:} = \mathbf{W}^l \mathbf{f}^{l-1}_{n,:}$. Importantly, the marginal posterior $q(\mathbf{f}^l_{n,:})$ is a Gaussian that depends only on $\mathbf{a}^l_{n,:}$, which in turn depends only on $q(\mathbf{f}^{l-1}_{n,:})$. Therefore, sampling from $\mathbf{f}^l_{n,:}$ is straightforward using the reparametrization trick (Kingma & Welling, 2013):

$$f^l_{nd} = [\tilde{\boldsymbol{\mu}}^l_d]_n + \varepsilon\cdot[\tilde{\boldsymbol{\Sigma}}^l_d]^{1/2}_{nn}, \quad \text{with } \varepsilon\sim\mathcal{N}(0,1), \text{ and } \mathbf{f}^0_{n,:} = \mathbf{x}_{n,:}. \qquad (7)$$

Training consists of maximizing the ELBO, Eq. (4), w.r.t. the variational parameters $\{\mathbf{m}^l_d,\mathbf{S}^l_d\}$, the inducing points $\{\mathbf{z}^l_d\}$, and the model parameters (i.e. the weights $\{\mathbf{w}^l_d\}$ and kernel parameters $\{\theta^l_d\}$). This can be done in mini-batches, allowing for scalability to very large datasets.
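The following sketch mirrors the doubly-stochastic sampling of Eqs. (5)-(7) for a single layer. It assumes a zero prior mean, an RBF kernel, and per-unit variational parameters (m_d, S_d); shapes and names are our choices, not the paper's implementation.

```python
import numpy as np

def rbf(x, y, amp=1.0, ls=1.0):
    return amp * np.exp(-0.5 * ((x - y) / ls) ** 2)

def sample_layer(f_prev, W, Z, m, S, rng, jitter=1e-6):
    """One reparametrized sample through an auNN layer.
    f_prev: (N, D_prev); W: (D_prev, D); Z, m: (D, M); S: (D, M, M)."""
    A = f_prev @ W                                  # pre-activations
    N, D = A.shape
    F = np.empty((N, D))
    for d in range(D):
        z = Z[d]
        Kzz = rbf(z[:, None], z[None, :]) + jitter * np.eye(len(z))
        Kaz = rbf(A[:, d, None], z[None, :])        # (N, M)
        alpha = np.linalg.solve(Kzz, Kaz.T).T       # alpha(a) after Eq. (6)
        mu = alpha @ m[d]                           # Eq. (5), zero prior mean
        B = Kzz - S[d]
        var = rbf(0.0, 0.0) - np.einsum('nm,mk,nk->n', alpha, B, alpha)
        var = np.clip(var, 1e-12, None)             # guard tiny negatives
        F[:, d] = mu + rng.standard_normal(N) * np.sqrt(var)   # Eq. (7)
    return F
```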
The complexity of evaluating the ELBO is $\mathcal{O}(NM^2(D_1+\dots+D_L))$, the same as for DGPs with DSVI (Salimbeni & Deisenroth, 2017). Predictions. Given a new $\mathbf{x}_{*,:}$, we want to compute $p(\mathbf{f}^L_{*,:}\,|\,\mathbf{X},\mathbf{Y}) \approx \mathbb{E}_{q(\{\mathbf{U}^l\})}\big[p(\mathbf{f}^L_{*,:}\,|\,\{\mathbf{U}^l\})\big]$. As in (Salimbeni & Deisenroth, 2017), this can be approximated by drawing $S$ samples up to the $(L-1)$-th layer with the same Eq. (7), but starting from $\mathbf{x}_{*,:}$. Then, $p(\mathbf{f}^L_{*,:}\,|\,\mathbf{X},\mathbf{Y})$ is given by the mixture of the $S$ Gaussian distributions obtained from Eqs. (5)-(6).

Triangular kernel. One of the most popular kernels in GPs is the RBF (Williams & Rasmussen, 2006), which produces very smooth functions. However, the ReLU non-linearity led to a general boost in performance in DNNs (Nair & Hinton, 2010; Glorot et al., 2011), and we aim to model similar activations. Therefore, we introduce the use of the triangular (TRI) kernel. Just like the RBF, the TRI is an isotropic kernel, i.e. it depends on the distance between the inputs, $k(x,y) = \gamma\cdot g(|x-y|/\ell)$, with $\gamma$ and $\ell$ the amplitude and lengthscale. For the RBF, $g(t) = e^{-t^2/2}$. For the TRI, $g(t) = \max(1-t, 0)$. This is a valid kernel (Williams & Rasmussen, 2006, Section 4.2.1). Similarly to the ReLU, the functions modelled by the TRI kernel are piecewise linear, see Figure 6a in the main text and Figure 8 in Appendix C.

Comparison with DGP. The difference between auNN and DGP units is graphically illustrated in Figure 2a. Whereas DGP mappings from one layer to the next are complex functions defined on $D_{l-1}$ dimensions ($D_{l-1}=2$ in the figure), auNN mappings are defined along just one direction via the weight projection. This is closer in spirit to NNs, whose mappings are also simpler and better suited for feature extraction and for learning more abstract concepts. Moreover, since the GP is defined on a 1D space, auNN requires fewer inducing points than a DGP (whose inducing points, intuitively, can be interpreted as inducing (hyper)planes in the $D_{l-1}$-dimensional space before the projection).
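For reference, the two isotropic kernels discussed above differ only in the shape function g; a minimal sketch:

```python
import numpy as np

def rbf_kernel(x, y, amp=1.0, ls=1.0):
    t = np.abs(x - y) / ls
    return amp * np.exp(-0.5 * t ** 2)      # g(t) = exp(-t^2/2): smooth paths

def tri_kernel(x, y, amp=1.0, ls=1.0):
    t = np.abs(x - y) / ls
    return amp * np.maximum(1.0 - t, 0.0)   # g(t) = max(1-t, 0): piecewise-linear, ReLU-like paths
```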
Putting the uncertainty on the weights (e.g., Bayes by BP), on the activations (e.g., fast dropout or variants of natural-parameter networks [2,3] or Bayesian dark knowledge [4]), or on both [1] has been investigated before. The idea of moving the uncertainty from the weights to the activation function is not new. One could argue that VAE-style parameterization or the local reparameterization trick is also a kind of method that puts uncertainty in the activation function. In fact, the proposed method does involve the reparameterization trick in each layer, as shown in Eq. 7.
Local SGD Meets Asynchrony
1 INTRODUCTION . In this paper, we consider the classic problem of minimizing an empirical risk, defined simply as

$$\min_{x\in\mathbb{R}^d} \sum_{i\in[I]} f_i(x), \qquad (1)$$

where $d$ is the dimension, $x\in\mathbb{R}^d$ denotes the set of model parameters, $[I]$ is the training set, and $f_i(x):\mathbb{R}^d\to\mathbb{R}$ is the loss on the training sample $i\in[I]$. Stochastic gradient descent (SGD) (Robbins & Monro, 1951) is an extremely popular iterative approach to solving this problem:

$$x_{k+1} = x_k - \alpha_k \nabla f_{B_k}(x_k), \qquad (2)$$

where $\nabla f_{B_k}(x_k) = \frac{1}{|B_k|}\sum_{i\in B_k}\nabla f_i(x_k)$ is the average of the gradients computed over samples, typically selected uniformly at random as a minibatch $B_k\subseteq[I]$, and $\alpha_k$ is the learning rate at iteration $k$.

1.1 BACKGROUND ON DECENTRALIZED DATA-PARALLEL SGD . For better or worse, SGD and its variants currently represent the computational backbone for many large-scale optimization tasks, most notably the training of deep neural networks (DNNs). Arguably the most popular SGD variant is minibatch SGD (MB-SGD) (Bottou (2012)). In a distributed setting with decentralized workers $q\in[Q]$, it follows the iteration

$$x_{k+1} = x_k - \alpha_k \frac{1}{Q}\sum_{q=1}^Q \nabla f_{B^q_k}, \qquad (3)$$

where $B^q_k\subseteq[I]$ is a local minibatch selected by worker $q\in[Q]$ at iteration $k$. This strategy is straightforward to scale in a data-parallel way, as each worker can process a subset of the samples in parallel, and the model is then updated by the average of the workers' gradient computations. For convenience, we assume the same batch size per worker. This approach has achieved tremendous popularity recently, and there has been significant interest in running training with increasingly large batch sizes aggregated over a large number of GPUs, e.g. Goyal et al. (2017). An alternative approach is parallel or local SGD (L-SGD) (Zinkevich et al. (2010); Zhang et al. (2016c); Lin et al. (2020)):

$$x^q_{j,t+1} = x^q_{j,t} - \alpha_{j,t}\nabla f_{B^q_{j,t}}, \quad 0\le t < K_j; \qquad x^q_{j+1,0} = \frac{1}{Q}\sum_q x^q_{j,K_j}, \qquad (4)$$

where $x^q_{j,t}$ denotes the local model at worker $q\in[Q]$ after $j$ synchronization rounds followed by $t$ local gradient updates, and $B^q_{j,t}$ is the local minibatch sampled at the same iteration. $K_j$ denotes the number of local gradient update steps before the $j$-th synchronization. Essentially, workers run SGD without any communication for several local steps, after which they globally average the resulting local models. This method is intuitively easy to scale, since it reduces the frequency of the communication.
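A toy serial simulation of the L-SGD iteration in Eq. (4) may help fix ideas; `grad_fn` stands in for a worker's minibatch gradient and is an assumption of this sketch.

```python
import numpy as np

def local_sgd(x0, grad_fn, Q, K, rounds, lr, rng):
    """Q workers each take K local SGD steps, then average (Eq. 4)."""
    x = np.asarray(x0, dtype=float)
    for j in range(rounds):
        local_models = []
        for q in range(Q):
            xq = x.copy()
            for t in range(K):
                xq -= lr * grad_fn(xq, q, rng)   # local gradient update
            local_models.append(xq)
        x = np.mean(local_models, axis=0)        # synchronization round
    return x
```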
Recently, a variant called post-local SGD (PL-SGD) (Lin et al. (2020)) was introduced to address the loss in generalization performance of L-SGD: the averaging frequency during the initial phase of training is high and is reduced later, when optimization stabilizes.

Method   B_loc  Train Loss  Train Acc.  Test Loss  Test Acc.  Time (sec)  Quality/Perf.
MB-SGD   128    0.016       99.75       0.234      92.95      1754        Baseline
MB-SGD   1024   0.023       99.51       0.293      91.38      1201        OK
PL-SGD   128    0.018       99.69       0.245      92.98      1603        Good
PL-SGD   1024   0.154       94.69       0.381      87.81      1159        Poor

(Table caption, partially recovered: $B_{loc} = 128$, $Q$ is the number of workers, 2 here, and $\alpha_0 = 0.1$. In PL-SGD, we average the model after each gradient update for the first 150 epochs; thereafter the averaging frequency $K$ is set to 16, as in Lin et al. (2020); other hyperparameters are identical to theirs. The listed results are averages of 3 runs with different seeds.)

Clearly, these methods cannot tolerate a larger $B_{loc}$, even though the GPUs can support it. This shortcoming of the existing methods in harnessing the growing data-parallelism has also been identified via empirical studies in the literature (Golmant et al. (2018); Shallue et al. (2019)). To our knowledge, no effective remedy (yet) exists to address this challenge. Notice that our core target here is maximally harnessing the local data-parallelism, and therefore a larger local batch size, in contrast to the existing trend in the literature wherein a large number of GPUs are deployed to obtain a large aggregated global batch size with a relatively small $B_{loc}$. For example, refer to the performance of MB-SGD and PL-SGD as listed in Table 1 of Lin et al. (2020). With 16 GPUs, each with $B_{loc} = 128$, for a total minibatch size of 2048 (identical to the setting with 2 GPUs each with $B_{loc} = 1024$ as above), and with exactly the same LR scaling and warm-up strategy, neither MB-SGD nor PL-SGD faces generalization degradation. Unfortunately, however, such an implementation setting incurs excessive wastage of the available data-parallel compute resources on each of the GPUs. Indeed, existing techniques such as LARS (You et al. (2017)), which address the issue of poor generalization for global large-batch training, are insufficient for larger local minibatch sizes; we describe this empirically in Section 3 (Table 11).

1.2 LOCALLY-ASYNCHRONOUS PARALLEL SGD . Now, consider an implementation scheme as the following: 1. In a decentralized setting of L-SGD, i.e. wherein each worker $q\in[Q]$ has a local model $x^q$ undergoing local SGD updates as described earlier, multiple local concurrent processes $u\in U^q$ share the model $x^q$. Processes $u\in U^q$ perform asynchronous concurrent gradient updates locally. 2. The workers average their models whenever any one of them has accumulated at least $K_j$ local shared updates, where $K_j$ is as in Equation 4. The averaging is performed asynchronously and in a non-blocking way by the (averaging-)processes $a^q$ on behalf of each worker $q\in[Q]$. Essentially, the decentralized workers run shared-memory-based asynchronous SGD locally and periodically synchronize in a totally non-blocking fashion. More formally, consider Algorithm 1. The model $x^q$ on a GPU $q\in[Q]$ is shared by the processes $p\in P^q = \{\{a^q\}\cup U^q\}$ locally. The processes $p\in P^q$ also maintain a shared counter $S^q$, initialized to 0. The operation read-and-inc implements an atomic (with lock) read and increment of $S^q$, whereas read provides an atomic read. $S^q$ essentially enables ordering the shared gradient updates. In turn, this order streamlines the synchronization among workers and thereby determines the averaging rounds $j$. The (updater) processes $u\in U^q$ asynchronously and lock-freely update $x^q$ with gradients computed over a non-blocking, potentially inconsistent, snapshot $v^{u,q}$ of $x^q$, essentially going Hogwild! (Recht et al. (2011)); see Algorithm 1a.

Algorithm 1a (local asynchronous gradient update by process $u\in U^q$):
1. Initialize $s = 0$;
2. while $s \le T$ do
3.   $v^{u,q}[i] := x^q[i]$, $\forall\, 1\le i\le d$;
4.   $s :=$ read-and-inc$(S)$;
5.   Compute $\nabla f_{B^q_s}(v^{u,q})$;
6.   $x^q[i]$ −= $\alpha_s \nabla f_{B^q_s}(v^{u,q})[i]$, $\forall\, 1\le i\le d$;
Algorithm 1b (asynchronous non-blocking in-place averaging by process $a^q$):
1. Initialize $s_{cur} = s_{pre} = |U^q|$, $j = 0$;
2. while $s_{cur} \le T$ do
3.   $s_{cur} :=$ read$(S)$; compute $j$ corresponding to $s_{cur}$;
4.   if $s_{cur} - s_{pre} \ge K_j$ then
5.     $v^{a,q}_j[i] := x^q[i]$, $\forall\, 1\le i\le d$;
6.     Synchronize across $a^r$, $r\in[Q]\setminus\{q\}$, to compute $v_j := \frac{1}{Q}\sum_{q\in[Q]} v^{a,q}_j$;
7.     Compute $\Delta v^q_j = v_j - v^{a,q}_j$; $s_{pre} := s_{cur}$;
8.     $x^q[i]$ += $\Delta v^q_j[i]$, $\forall\, 1\le i\le d$; $j = j + 1$;

Algorithm 1: Locally-asynchronous Parallel SGD (LAP-SGD). The process $a^q$, which performs averaging for the worker $q\in[Q]$, concurrently and atomically keeps reading $S^q$; see Algorithm 1b. As soon as it notices an increment of $K_j$ in $S^q$, i.e. $x^q$ got concurrently updated with $K_j$ gradients, it takes a non-blocking snapshot $v^{a,q}_j$ of $x^q$ and synchronizes with the $a^r$ of its peers $r\in[Q]\setminus\{q\}$ to compute the average $v_j$ of the snapshots. Thereafter, $a^q$ adds the difference between the average and the snapshot $v^{a,q}_j$ to the model $x^q$ without blocking the concurrent asynchronous local gradient updates. We call this method locally-asynchronous parallel SGD (LAP-SGD). This method closely resembles Hogwild++ (Zhang et al. (2016a)), which targets heterogeneous NUMA-based multi-core machines, though there are key differences, which we describe in Section 4. Results for the same training task as before with LAP-SGD are given in Table 2. The distinctive feature of this implementation is that it harnesses the compute power of the GPUs not by increasing the size of $B_{loc}$ but by concurrently computing many minibatch gradients. Evidently, LAP-SGD provides speed-up without losing optimization quality in comparison to the baseline.
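A minimal single-worker sketch of Algorithm 1 using Python threads is given below. The lock guards only the counter; model writes are lock-free, Hogwild-style. `grad_fn` and `peer_average` (a stand-in for the cross-worker synchronization at line 6 of Algorithm 1b) are assumptions of this sketch, not the authors' implementation.

```python
import threading
import numpy as np

class LapSgdWorker:
    """One worker q: updater threads run Algorithm 1a on the shared
    model x; an averager thread runs Algorithm 1b."""
    def __init__(self, d, grad_fn, peer_average, lr=0.01, K=16, T=1000):
        self.x = np.zeros(d)
        self.S = 0                        # shared update counter S^q
        self.lock = threading.Lock()      # guards only the counter
        self.grad_fn, self.peer_average = grad_fn, peer_average
        self.lr, self.K, self.T = lr, K, T

    def updater(self):
        while True:
            v = self.x.copy()             # potentially inconsistent snapshot
            with self.lock:               # read-and-inc(S)
                s = self.S
                self.S += 1
            if s > self.T:
                return
            self.x -= self.lr * self.grad_fn(v, s)   # lock-free write

    def averager(self):
        s_pre = 0
        while True:
            s_cur = self.S                # atomic read under the GIL
            if s_cur > self.T:
                return
            if s_cur - s_pre >= self.K:   # K updates have landed
                snap = self.x.copy()      # non-blocking snapshot v^{a,q}
                avg = self.peer_average(snap)   # v_j across workers
                self.x += avg - snap      # in-place, non-blocking fix-up
                s_pre = s_cur
```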
Recently, Kungurtsev et al. (2019) presented a shared-memory-based method in which they showed that partitioned gradient updates over a shared model, for some iterations during the course of training, can substantially save on total computation cost by means of restricted backpropagation, without necessarily losing optimization quality. Their method is limited to a centralized shared-memory setting. Moreover, aiming to establish convergence under a non-smoothness assumption, they ensure write consistency under a model-wide lock. The design of our asynchronous parallel SGD inspires us to adapt the partitioned gradient update strategy to our lock-free decentralized setting. More specifically, building on LAP-SGD, we consider locally partitioned gradient computation along with asynchronous lock-free updates. Essentially, we partition the model $x^q$ into $\{x^q_{i(u)}\}$ for $u\in U^q$, with $i(u)\cap i(w) = \emptyset$, $\forall u,w\in U^q$ (i.e., non-overlapping block components of the vector $x$). With that, a partitioned gradient computation amounts to computing $\nabla_{i(u)} f_{B^q_s}(v^{u,q})$, the minibatch gradient with respect to the partition $x^q_{i(u)}$, at line 5 of Algorithm 1a. Accordingly, the update step at line 6 of Algorithm 1a transforms to $x^q[i]$ −= $\alpha_s \nabla f_{B^q_s}(v^{u,q})[i]$, $\forall i\in i(u)$. It is to be noted that we do not use a write lock at any stage. Having devised a partitioned update scheme, we propose locally-partitioned asynchronous parallel SGD (LPP-SGD), as described below.
1. Processes $u\in U^q$ maintain a process-local variable last_iter, which can take two values, PARTITIONED and FULL. Each $u\in U^q$ initializes last_iter as FULL.
2. While $s \le T_{st}$, each process $u\in U^q$ performs LAP-SGD updates as in lines 3 to 6 of Algorithm 1a.
3. If $T_{st} < s \le T$, each process $u\in U^q$ performs (a) a partitioned gradient computation and update, $x^q[i]$ −= $\alpha_s \nabla f_{B^q_s}(v^{u,q})[i]$, $\forall i\in i(u)$, if last_iter = FULL, and sets last_iter = PARTITIONED; or (b) an LAP-SGD update if last_iter = PARTITIONED, and sets last_iter = FULL.
Essentially, after some initial stabilizing epochs, each process $u\in U^q$ alternates between a full and a partitioned lock-free asynchronous gradient update to the model $x^q$. Our experiments showed that $T_{st} = T/10$ was almost always sufficient to obtain a competitive optimization result. The results of a sample implementation of LPP-SGD are available in Table 3. It is clear that LPP-SGD handsomely speeds up the computation and provides equally competitive optimization results.
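Building on the previous sketch, the LPP-SGD inner loop can be written as below. Real LPP-SGD computes only the block gradient $\nabla_{i(u)} f$ via restricted backpropagation; slicing the full gradient here is a simplification, and all names are illustrative.

```python
def lpp_updater(worker, part_idx):
    """Alternate full and partitioned lock-free updates after the
    stabilizing phase T_st = T // 10 (reported sufficient above)."""
    last_iter = "FULL"
    T_st = worker.T // 10
    while True:
        v = worker.x.copy()
        with worker.lock:
            s = worker.S
            worker.S += 1
        if s > worker.T:
            return
        g = worker.grad_fn(v, s)
        if s <= T_st or last_iter == "PARTITIONED":
            worker.x -= worker.lr * g                       # full LAP-SGD step
            last_iter = "FULL"
        else:
            worker.x[part_idx] -= worker.lr * g[part_idx]   # block i(u) only
            last_iter = "PARTITIONED"
```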
In this paper, the authors argue that the minibatch method and the local SGD method suffer generalization performance degradation for large local minibatch sizes. An asynchronous method is proposed to improve the generalization performance. A sublinear convergence rate is provided for the non-convex objective. As there are some missing definitions and little explanation of the proposed method, the reviewer finds the paper hard to read.
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues
1 INTRODUCTION . Negotiation is ubiquitous in human interaction, from e-commerce to the multi-billion-dollar sales of companies. Learning how to negotiate effectively involves deep pragmatic understanding and planning the dialogue strategically (Thompson; Bazerman et al., 2000b; Pruitt, 2013). Modern dialogue systems for collaborative tasks such as restaurant or flight reservations have made considerable progress by modeling the dialogue history and structure explicitly using semantic content, like slot-value pairs (Larionov et al., 2018; Young, 2006), or implicitly with encoder-decoder architectures (Sordoni et al., 2015; Li et al., 2016). In such tasks, users communicate explicit intentions, enabling systems to map the utterances into specific intent slots (Li et al., 2020). However, such a mapping is less clear in complex non-collaborative tasks like negotiation (He et al., 2018) and persuasion (Wang et al., 2019), where the user intent and the most effective strategies are hidden. Hence, along with the generated dialogue, the strategic choice of framing and the sequence of chosen strategies play a vital role, as depicted in Figure 1. Indeed, prior work on negotiation dialogues has primarily focused on optimizing dialogue strategies: from high-level task-specific strategies (Lewis et al., 2017), to more specific task execution planning (He et al., 2018), to fine-grained planning of linguistic outputs given strategic choices (Zhou et al., 2019). 1Code, data, and a demo system are released at https://github.com/rishabhjoshi/DialoGraph_ICLR21. These studies have confirmed that it is crucial to control for the pragmatics of the dialogue to build effective negotiation systems. To model the explicit dialogue structure, prior work incorporated Hidden Markov Models (HMMs) (Zhai & Williams, 2014; Ritter et al., 2010), Finite State Transducers (FSTs) (Zhou et al., 2020), and RNNs (He et al., 2018; Shi et al., 2019). While RNN-based models lack interpretability, HMM- and FST-based approaches may lack expressivity. In this paper, we hypothesize that Graph Neural Networks (GNNs) (Wu et al., 2020) can combine the benefits of interpretability and expressivity because of their effectiveness in encoding graph-structured data through message propagation. While being sufficiently expressive to model graph structures, GNNs also provide a natural means for interpretation via intermediate states (Xie & Lu, 2019; Pope et al., 2019). We propose DIALOGRAPH, an end-to-end negotiation dialogue system that leverages Graph Attention Networks (GAT) (Veličković et al., 2018) to model complex negotiation strategies while providing interpretability for the model via intermediate structures. DIALOGRAPH incorporates recently proposed hierarchical graph pooling approaches (Ranjan et al., 2020) to learn the associations between negotiation strategies, including conceptual and linguistic strategies and dialogue acts, and their relative importance in predicting the best sequence. We focus on buyer-seller negotiations in which two individuals negotiate the price of an item through a chat interface, and we model the seller's behavior on the CraigslistBargain dataset (He et al., 2018).2 We demonstrate that DIALOGRAPH outperforms previous state-of-the-art methods on strategy prediction and downstream dialogue responses. This paper makes several contributions.
First, we introduce a novel approach to model negotiation strategies and their dependencies as graph structures via GNNs. Second, we incorporate these learned graphs into an end-to-end negotiation dialogue system and demonstrate that it consistently improves future-strategy prediction and downstream dialogue generation, leading to better negotiation deals (sale prices). Finally, we demonstrate how to interpret the intermediate structures and learned sequences of strategies, opening up the black box of end-to-end strategic dialogue systems.

2 DIALOGRAPH . We introduce DIALOGRAPH, a modular end-to-end dialogue system that incorporates GATs with hierarchical pooling to learn pragmatic dialogue strategies jointly with the dialogue history. DIALOGRAPH is based on a hierarchical encoder-decoder model and consists of three main components: (1) a hierarchical dialogue encoder, which learns a representation for each utterance and encodes its local context; (2) a structure encoder for encoding sequences of negotiation strategies and dialogue acts; and (3) an utterance decoder, which finally generates the output utterance. Formally, our dialogue input consists of a sequence of tuples, $D = [(u_1, da_1, ST_1), (u_2, da_2, ST_2), \dots, (u_n, da_n, ST_n)]$, where $u_i$ is the utterance, $da_i$ is the coarse dialogue act, and $ST_i = \{st_{i,1}, st_{i,2}, \dots, st_{i,k}\}$ is the set of $k$ fine-grained negotiation strategies for the utterance $u_i$.3 The dialogue context forms the input to (1), and the previous dialogue acts and negotiation strategies form the input to (2). The overall architecture is shown in Figure 2. In what follows, we describe DIALOGRAPH in detail.

2.1 HIERARCHICAL DIALOGUE ENCODER . A dialogue context typically comprises multiple dialogue utterances, which are sequential in nature. We use hierarchical encoders for modeling such sequential dialogue contexts (Jiao et al., 2019). To encode the utterance $u_t$ at time $t$, we use the pooled representations from BERT (Devlin et al., 2019) to obtain the corresponding utterance embedding $e_t$. We then pass the utterance embeddings through a GRU to obtain the dialogue context encoding up to time $t$, denoted by $h^u_t$.

2We focus on the seller's side following Zhou et al. (2019), who devised a set of strategies specific to maximizing the seller's success. Our proposed methodology, however, is general. 3For example, in the utterance "Morning! My bro destroyed my old kit and I'm looking for a new pair for $10", the coarse dialogue act is Introduction, and the finer-grained negotiation strategies include Proposing price, Being informal, and Talking about family for building rapport.
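A compact PyTorch sketch of this hierarchical encoder is shown below; the BERT pooled outputs are assumed to be precomputed, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class HierDialogueEncoder(nn.Module):
    """Utterance embeddings (e.g. BERT pooled outputs) -> GRU over turns,
    yielding the dialogue context encoding h^u_t for every turn t."""
    def __init__(self, emb_dim=768, hid_dim=300):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, utt_embs):              # (batch, turns, emb_dim)
        ctx, _ = self.gru(utt_embs)           # ctx[:, t] = h^u_t
        return ctx

# usage with random stand-in embeddings for a 2-dialogue, 5-turn batch
enc = HierDialogueEncoder()
h_u = enc(torch.randn(2, 5, 768))             # (2, 5, 300)
```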
2.2 STRUCTURE ENCODER . Our structure encoder is designed to model the graph representations of the strategies and dialogue acts using GATs and to output their structural representations. These structural representations are used to predict the next set of strategies and dialogue acts and to enrich the encoded dialogue representation. Below we describe the structure encoder for negotiation strategies. We model the sequence of negotiation strategies, $ST = [ST_1, ST_2, \dots, ST_t]$, by creating a directed graph, where $ST_i$ is the set of $k$ fine-grained negotiation strategies for the utterance $u_i$. Formally, we define a graph $G(V, E, X)$ with $|E|$ edges and $N = |V|$ nodes, where each node $v_i\in V$ represents a particular negotiation strategy for an utterance and has a $d$-dimensional feature representation denoted by $z_i$. $Z\in\mathbb{R}^{N\times d}$ denotes the feature matrix of the nodes, and $A\in\mathbb{R}^{N\times N}$ represents the adjacency matrix, where $N$ is the total number of nodes (strategies) that have occurred in the conversation up to that point. Therefore, each node represents a strategy-utterance pair. We define the set of edges as $E = \{(a, b)\}$, $a, b\in V$, where $a$ and $b$ denote strategies at utterances $u_a$ and $u_b$, present at turns $t_a$ and $t_b$, such that $t_b > t_a$. In other words, we add a directed edge from a particular node (a strategy in an utterance) to all subsequent nodes. This ensures a direct connection from all previous strategies to the more recent ones.4 In the same way, we form the graph from the sequence of dialogue acts. These direct edges and learned edge attention weights help us interpret the dependence and influence of strategies on each other. To get the structural representations from the strategy graphs, we pass them through a hierarchical graph-pooling-based encoder, which consists of $l$ layers of GAT, each followed by an Adaptive Structure Aware Pooling (ASAP) layer (Ranjan et al., 2020). As part of the ASAP layer, the model first runs GAT over the input graph representations to obtain structurally informed representations of the nodes. Then a cluster assignment step is performed, which generates a cluster assignment matrix, $S$, telling the model which nodes occur in a similar structural context. After that, the clusters are ranked, and the graph is pooled by taking the top few clusters as new nodes and forming edges between them using the existing graph. This way, the size of the graph is reduced at every step, which leads to a structurally informed graph representation. We take advantage of the cluster formulation to obtain the associations between the negotiation strategies, as identified from the cluster assignment matrix $S$. These association scores can later be used to interpret which strategies are associated with each other and tend to co-occur in similar contexts. Moreover, we also use the node attention scores from GAT to interpret the influence of different strategies on the representation of a particular strategy, which essentially gives the dependence information between strategies. 4Appendix C shows an example of the graph obtained from a sequence of strategies. In this way, the structure representation is learned and accumulated in a manner that preserves the structural information (Ying et al., 2018; Lee et al., 2019). After each pooling step, the graph representation is summarized using the concatenation of the mean and max of the node representations. The summaries are then added and passed through fully connected layers to obtain the final structural representation of the strategies, $h^{ST}_t$. We employ a similar structure encoder to encode the graph obtained from the sequence of dialogue acts, obtaining $h^{da}_t$.
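The directed strategy graph described above is easy to build explicitly; the sketch below constructs the node list and the adjacency matrix A, with names and the input format being our assumptions.

```python
import numpy as np

def build_strategy_graph(strategy_seq):
    """One node per (turn, strategy) pair; directed edge a -> b whenever
    node b belongs to a later turn than node a."""
    nodes = [(t, s) for t, ST in enumerate(strategy_seq) for s in ST]
    N = len(nodes)
    A = np.zeros((N, N), dtype=np.int8)
    for i, (ta, _) in enumerate(nodes):
        for j, (tb, _) in enumerate(nodes):
            if tb > ta:
                A[i, j] = 1
    return nodes, A

# usage on a short two-turn conversation
nodes, A = build_strategy_graph([{"propose_price"},
                                 {"informal", "build_rapport"}])
```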
2.3 UTTERANCE DECODER . The utterance decoder uses the dialogue context representation and the structural representations of dialogue acts and negotiation strategies to produce the dialogue response (the next utterance). We enrich the dialogue representation by concatenating the structural representations before passing it to a standard greedy GRU (Cho et al., 2014) decoder. This architecture follows Zhou et al. (2020), who introduced a dynamic negotiation system that incorporates negotiation strategies and dialogue acts via FSTs. We thus follow their utterance decoder architecture to enable a direct baseline comparison. For the $j$-th word of utterance $u_{t+1}$, $w^j_{t+1}$, we condition on the previous word $w^{j-1}_{t+1}$ to calculate the probability distribution over the vocabulary as $p_{w^j_{t+1}} = \mathrm{softmax}(\mathrm{GRU}(h_t, w^{j-1}_{t+1}))$, where $h_t = [h^u_t; h^{ST}_t; h^{da}_t]$ and $[\,;\,]$ represents the concatenation operator. For encoding the price, we replace all price information in the dataset with placeholders representing the percentage of the offer price. For example, we would replace $35 with <price-0.875> if the original selling price is $40. The decoder generates these placeholders, which are then replaced with the calculated price before generating the utterance.

2.4 MODEL TRAINING . We use $h^{ST}_t$ to predict the next set of strategies $ST_{t+1}$, a binary-valued vector which represents the $k$-hot representation of the negotiation strategies for the next turn. We compute the probability of the $j$-th strategy occurring in $u_{t+1}$ as $p(st_{t+1,j}\,|\,h^{ST}_t) = \sigma(h^{ST}_t)$, where $\sigma$ denotes the sigmoid operator. We threshold the probability at 0.5 to obtain the $k$-hot representation. We denote by $L_{ST}$ the weighted negative log-likelihood of the strategies, used as the loss function for the task of next-strategy prediction:

$$L_{ST} = -\sum_j \delta_j \log\big(p(st_{t+1,j})\big) - \sum_k \log\big(1 - p(st_{t+1,k})\big),$$

where the summations over $j$ and $k$ run over the strategies present ($st'_{t+1,j} = 1$) and not present ($st'_{t+1,k} = 0$) in the ground-truth strategy set $ST'$. Here $\delta_j$ is the positive weight associated with the particular strategy. We add this weight to the positive examples to trade off precision and recall, setting $\delta_j = (\#\text{ of instances not having strategy } j)/(\#\text{ of instances having strategy } j)$. Similarly, we use $h^{da}_t$ to predict the dialogue act for the next utterance, $da_{t+1}$. Given the target dialogue act $da'_{t+1}$ and the class weights $\rho_{da}$ for the dialogue acts, we use the class-weighted cross-entropy loss over the set of possible dialogue acts, $L_{DA} = -\rho_{da}\log(\mathrm{softmax}(h^{da}_t))$. We pass $h_t = [h^u_t; h^{ST}_t; h^{da}_t]$ through a linear layer to predict negotiation success, which is measured by the sale-to-list ratio $r = (\text{sale price} - \text{buyer target price})/(\text{listed price} - \text{buyer target price})$ (Zhou et al., 2019). We split the ratios into 5 negotiation classes of equal size using the training data and use those to predict the success of the negotiation. Therefore, given the predicted probabilities for the target utterance $u'_{t+1}$ from Section 2.3, the target ratio class $y'_r$, and the learnable parameters $W_r$ and $b_r$, we use the cross-entropy loss for both the generation task ($L_{NLG}$) and the negotiation outcome prediction task ($L_R$):

$$L_{NLG} = -\sum_{w^j\in u'_{t+1}} \log\big(p_{w^j_{t+1}}\big), \qquad L_R = -\sum_{r\in[1,5]} y'_r \log\big(\mathrm{softmax}(W_r h_t + b_r)\big).$$

The $L_R$ loss encourages the encoding of negotiation strategies to enable accurate prediction of the negotiation outcome. We use hyperparameters $\alpha$, $\beta$, and $\gamma$ to optimize the joint loss $L_{joint}$ of strategy prediction, dialogue act prediction, utterance generation, and outcome prediction together, using the Adam optimizer (Kingma & Ba, 2014):

$$L_{joint} = L_{NLG} + \alpha L_{ST} + \beta L_{DA} + \gamma L_R.$$
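A sketch of the joint objective using standard PyTorch losses: the positive weights $\delta_j$ enter through pos_weight and the dialogue-act class weights $\rho_{da}$ through weight. Shapes and argument names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(word_logits, tgt_words, st_logits, tgt_st, delta,
               da_logits, tgt_da, rho_da, r_logits, tgt_r,
               alpha=1.0, beta=1.0, gamma=1.0):
    """L_joint = L_NLG + alpha*L_ST + beta*L_DA + gamma*L_R."""
    l_nlg = F.cross_entropy(word_logits, tgt_words)
    l_st = F.binary_cross_entropy_with_logits(
        st_logits, tgt_st.float(), pos_weight=delta)   # weighted NLL, L_ST
    l_da = F.cross_entropy(da_logits, tgt_da, weight=rho_da)
    l_r = F.cross_entropy(r_logits, tgt_r)             # 5 outcome classes
    return l_nlg + alpha * l_st + beta * l_da + gamma * l_r
```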
This paper deals with the problem of natural language generation for a dialogue system involved in complex communication tasks such as negotiation or persuasion. The proposed architecture consists of two encoders: one for the utterances and the other for dialogue acts and negotiation strategies. The decoder is an RNN that converts the encoded vectors into the output utterance. Each utterance is first passed through BERT to get an utterance-level encoding. The sequence of utterance encodings is then passed through an RNN to generate a conversation-level encoding. The negotiation strategies and dialogue acts in a conversation are represented using a node-edge graph, where the nodes are one of the N different strategies/acts, and there exists an edge from node a to node b if an utterance with strategy a precedes any utterance with strategy b. The entire architecture is trained in a multi-task setup where the loss function accounts for both the predictions of the model and the generated language. The proposed architecture is evaluated on the CraigslistBargain dataset and compared against Zhou et al. 2020.
Learning the Step-size Policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm
1 INTRODUCTION . Consider the unconstrained optimization problem

$$\underset{x}{\text{minimize}}\; f(x) \qquad (1)$$

where $f:\mathbb{R}^n\to\mathbb{R}$ is an objective function that is differentiable for all $x\in\mathbb{R}^n$, with $n$ being the number of decision variables forming $x$. Let $\nabla_x f(x_0)$ be the gradient of $f(x)$ evaluated at some $x_0\in\mathbb{R}^n$. A general quasi-Newton algorithm for solving this problem iterates

$$x_{k+1} = x_k - t_k H_k g_k \qquad (2)$$

for an initial $x_0\in\mathbb{R}^n$ until a given stopping criterion is met. At the $k$-th iteration, $g_k = \nabla_x f(x_k)$ is the gradient, $H_k$ is a positive-definite matrix satisfying the secant equation (Nocedal and Wright, 2006, p. 137), and $t_k$ is the step size. In this paper, we develop a policy that learns to suitably determine step sizes $t_k$ when the product $H_k g_k$ is calculated by the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm (Liu and Nocedal, 1989). The main contributions of the paper are:
1. We propose a neural network architecture defining this policy, taking as input local information of the current iterate. In contrast with more standard strategies, this policy is tuning-free and avoids re-evaluations of the objective function and gradients at each step. The training procedure is formulated as a stochastic optimization problem and can be performed by easily applying truncated backpropagation through time (TBPTT).
2. Training classifiers on the MNIST database (LeCun et al., 1998), our approach is competitive against heuristically tuned optimization procedures. Our tests show that the proposed policy is not only able to outperform competitors such as ADAM and RMSprop in wall-clock time and optimal/final value, but also performs better than L-BFGS with backtracking line searches, which is the gold standard, and with constant step sizes, which is the baseline.
3. According to subsequent experiments on CIFAR-10 (Krizhevsky et al., 2009), the proposed policy can generalize to different classes of problems after a few additional training steps on examples from these classes. This indicates that learning may be transferable between distinct types of tasks, opening the door to transfer learning strategies.
This result is a step towards the development of optimization methods that free the designer from tuning control parameters, as will be motivated in Section 2. The remaining parts of this paper are organized as follows: Section 3 presents the classical L-BFGS algorithm and discusses some methodologies for determining step sizes; Section 4 contains the architecture of the proposed policy together with discussions on how it was implemented; Section 5 describes the training procedure; and, finally, Section 6 presents experiments using classifiers operating on the MNIST and CIFAR-10 databases. The notation is mainly standard. Scalars are plain lower-case letters, vectors are bold lower-case, and matrices are bold upper-case. The clip function is defined as $\mathrm{clip}_l^u(y) := \min(u, \max(l, y))$.

2 MOTIVATION . Most algorithms used in artificial intelligence and statistics are based on optimization theory, which has contributed widely to the success of machine learning applications in the last decades. However, this two-way bridge seems not to be currently leveraging its full potential in the other direction, that is, to learn how to automate optimization procedures.
Indeed, performing satisfactory optimization, or solving learning problems, still relies upon the appropriate tuning of the parameters of the chosen algorithm, which are often grouped with the other hyper-parameters of the learning task. Despite the existence of several methodologies for obtaining good values for these parameters (Bengio, 2000; Bergstra et al., 2011; Bergstra and Bengio, 2012; Snoek et al., 2015; Daniel et al., 2016; Dong et al., 2018), the search for tuning-free algorithms that perform better than heuristically designed ones is of great interest among practitioners and theoreticians. Indeed, besides the generally desirable faster convergence, the ready-to-use nature of such algorithms allows users to focus their attention on other problem-level hyper-parameters while the optimization procedure is automatically performed, resulting in better time and effort allocation. As recent advancements in machine learning have helped automate the solution of countless problems, optimization theory should equally benefit from them, balancing the flows across the bridge. From a wider viewpoint, most optimization problems require the user to select an algorithm and tune it to some extent. Although intuition and knowledge about the problem can speed up these processes, trial-and-error methodologies are often employed, which can be a time-consuming and inefficient task. With that in mind, the concept of learned optimizers has been gathering attention in the last few years and, basically, refers to optimization policies and routines that are learned by looking at instances of optimization problems, here called tasks. This idea was introduced by Li and Malik (2016) and Andrychowicz et al. (2016), building upon previous results on "learning to learn" or "meta-learning" (Thrun and Pratt, 1998; Hochreiter et al., 2001). In the former, the authors presented an optimization policy based on a neural network trained by reinforcement learning, taking as input the history of gradient vectors at previous iterations. The latter adopts a long short-term memory (LSTM) network to achieve a similar goal, but the learning is done by truncated backpropagation through time after unrolling the proposed optimizer for a certain number of steps. Subsequently, it was shown in Metz et al. (2019) how multilayer perceptrons (MLPs), adequately trained using a combined gradient estimation method, can perform faster in wall-clock time compared to current algorithms of choice. Also within this scenario, Xu et al. (2019) present a reinforcement-learning-based methodology to auto-learn an adaptive learning rate. In this paper, following this same spirit, instead of completely learning an optimizer from data, we propose to blend these ideas into a classical optimization procedure. The resulting optimizer, composed of a combination of L-BFGS and the proposed policy, is learned in a constrained domain that assures valuable mathematical properties. The idea is to leverage both frameworks, inheriting the theoretical guarantees provided by optimization theory while learning a policy that rules out the hand-design of parameters.
Algorithm 1: L-BFGS algorithm. Input: $s_i = x_{i+1} - x_i$, $y_i = g_{i+1} - g_i$, and $\rho_i = 1/(s_i^T y_i)$ for all $i\in k-m,\dots,k-1$; and the current gradient $g_k$. Result: update direction $d_k = -H_k g_k$.
1. $q \leftarrow g_k$;
2. for $i = k-1,\dots,k-m$ do
3.   $\alpha_i \leftarrow \rho_i s_i^T q$;
4.   $q \leftarrow q - \alpha_i y_i$;
5. end
6. $\gamma = |s_{k-1}^T y_{k-1}| / (y_{k-1}^T y_{k-1})$;
7. $r \leftarrow \gamma q$;
8. for $i = k-m,\dots,k-1$ do
9.   $\beta \leftarrow \rho_i y_i^T r$;
10.  $r \leftarrow r + s_i(\alpha_i - \beta)$;
11. end
12. $d_k \leftarrow -r$;

3 L-BFGS ALGORITHM . The L-BFGS algorithm was originally presented in Liu and Nocedal (1989) and is transcribed here as Algorithm 1. It is a quasi-Newton method derived from the BFGS algorithm (Nocedal and Wright, 2006), lowering the space complexity from quadratic to linear in the problem dimension at the expense of precision. This algorithm calculates a descent direction in the search space taking into account an estimate of the inverse Hessian matrix of $f(x)$, given by $H_k$. This matrix is not explicitly constructed; rather, the product $d_k := -H_k g_k$ is obtained from the past $m$ values of $x_k$ and $g_k$, which have to be stored. This property makes it often the algorithm of choice for large-scale deterministic non-linear optimization problems. If $f(x)$ is convex in $x$, this algorithm is guaranteed to provide a descent update direction, but the same does not apply to non-convex objective functions. However, a simple way to circumvent this is to drop the iterations $i$ in lines 2 and 8 of Algorithm 1 for which $\rho_i \le 0$ (Nocedal and Wright, 2006, p. 537), which is what we do in this paper. A matter of great relevance within this scope is how to choose an appropriate step size $t_k$ for the update rule in Eq. (2). To the best of our knowledge, there does not seem to exist a consensus on how to choose $t_k$ in a general way for non-convex objective functions. The scaling factor $\gamma$ in lines 6-7 of Algorithm 1 is known to assure that the step size $t_k = 1$ is accepted in most iterations in the convex optimization context, but not always. We will refer to the constant step-size policy that outputs $t_k = 1$ as the baseline L-BFGS. However, a line search (LS) procedure is often combined with L-BFGS to assure its convergence. Ideally, this would be performed by solving $t_k = \arg\min_{t>0} f(x_k + t d_k)$, but this exact approach is often too expensive to be adopted, motivating the use of inexact ones. An example is the backtracking line search (BTLS), which takes an initial step size $t_k$ and shrinks it repeatedly until the so-called sufficient-decrease Wolfe condition $f(x_k + t_k d_k) \le f(x_k) + c_1 t_k g_k^T d_k$ is fulfilled, where $c_1\in(0,1)$ is a control parameter to be tuned. Another parameter that has to be designed is the contraction factor $c_2\in(0,1)$ that shrinks the step size, i.e., $t_k \leftarrow c_2 t_k$; see Nocedal and Wright (2006, p. 37). This method assures convergence to a local minimum at the cost of re-evaluating the objective function several times per iteration. This is a price that the user is, in some cases, willing to pay, but for large-dimensional problems this procedure is likely to become the bottleneck of the optimization task. It is important to highlight that the method to be presented may also apply to other optimization algorithms that rely heavily on line searches to perform well. However, this paper focuses on L-BFGS as it is often the algorithm of choice in large-scale deterministic optimization.
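For concreteness, here is a compact NumPy version of the two-loop recursion of Algorithm 1 (skipping pairs with $\rho_i \le 0$, as done in this paper) together with a plain backtracking line search; it is a sketch, with the history lists assumed to be ordered oldest to newest.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Two-loop recursion: d_k = -H_k g_k from the last m pairs (s_i, y_i)."""
    pairs = [(s, y, 1.0 / (s @ y)) for s, y in zip(s_hist, y_hist)
             if s @ y > 0]                  # drop rho_i <= 0 (non-convex case)
    q = g.copy()
    alphas = []
    for s, y, rho in reversed(pairs):       # lines 2-5: newest to oldest
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)
    if pairs:
        s, y, _ = pairs[-1]
        gamma = abs(s @ y) / (y @ y)        # line 6
    else:
        gamma = 1.0
    r = gamma * q                           # line 7
    for (s, y, rho), a in zip(pairs, reversed(alphas)):  # lines 8-11
        b = rho * (y @ r)
        r += s * (a - b)
    return -r                               # line 12

def backtracking_ls(f, x, g, d, t0=1.0, c1=1e-4, c2=0.5, max_iter=30):
    """Shrink t until f(x + t d) <= f(x) + c1 t g^T d (sufficient decrease)."""
    fx, gd = f(x), g @ d
    t = t0
    for _ in range(max_iter):
        if f(x + t * d) <= fx + c1 * t * gd:
            break
        t *= c2
    return t
```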
In the context of stochastic optimization, many modified versions of Algorithm 1 together with methodologies for choosing $t_k$ are available (Moritz et al., 2016; Zhou et al., 2017; Bollapragada et al., 2018; Wills and Schön, 2019), but for the sake of simplicity, this work deals exclusively with deterministic non-linear optimization problems.

4 LEARNED POLICY FOR SELECTING STEP SIZES . Recalling the definition of $s_k$ and $y_k$ in Algorithm 1, our policy is defined as $t_k = \pi(d_k, g_k, s_{k-1}, y_{k-1}; \theta)$ and selects an adequate step size for L-BFGS while neither relying on any parameter tuning nor requiring additional evaluations of the objective function. Instead, its parameters, represented by $\theta$, should be learned from data. Let us, from now on, refer to this policy combined with Algorithm 1 as the L-BFGS-$\pi$ approach. The architecture of the policy $\pi(d_k, g_k, s_{k-1}, y_{k-1}; \theta)$ is shown in Fig. 1. To allow the policy to be independent of the problem size $n$, only the inner products between its inputs are used. These values define $u_0 = \mathrm{dotln}(d_k, g_k, s_{k-1}, y_{k-1})$, where $\mathrm{dotln}(\cdot)$ returns the component-wise application of $f(x) = \ln(\max(x, \epsilon))$ to the elements of $X = [d_k\; g_k\; s_{k-1}\; y_{k-1}]^T [d_k\; g_k\; s_{k-1}\; y_{k-1}]$, but with the superdiagonal entries having their signs reversed. We have chosen $\epsilon = 10^{-8}$ to avoid imaginary-valued entries. The vector $u_0$ is the input to two parallel input layers, which are fully connected linear layers that transport the information in $u_0$ to another vector space $\mathbb{R}^{n_h}$ (in our tests, we adopted $n_h = 6$). Their outputs, as usual, are defined as $u_1 = W_{01} u_0 + b_{01}$ and $u_2 = W_{02} u_0 + b_{02}$. The logarithm operation was adopted to let the linear layers evaluate products and divisions between powers of the inputs by simply summing and subtracting them. Moreover, as the output is positive, working in the logarithmic vector space allows us to use a wider range of numerical values. Subsequently, let us define the normalized vectors $\bar{u}_1 = u_1/\|u_2\|$ and $\bar{u}_2 = u_2/\|u_2\|$ to calculate the scalar projection of $\bar{u}_1$ onto $\bar{u}_2$ and clip the result to some interval $[\tau_m, \tau_M]$, yielding the log-step size

$$\tau_k = \mathrm{clip}_{\tau_m}^{\tau_M}(\bar{u}_2^T \bar{u}_1) =: p(u_1, u_2). \qquad (3)$$

Finally, the selected step size is obtained as $t_k = e^{\tau_k}$. To interpret this geometrically, we sketch three different scenarios in Fig. 2. The dashed lines represent orthogonal axes spanned by some arbitrary $\bar{u}_2$, and the gray strip represents the interval $[\tau_m, \tau_M]$ along the direction of $\bar{u}_2$ from which $\tau_k$ should be taken. When Linear Layer 1 maps $u_0$ into $u'_1$, the scalar projection of $\bar{u}'_1$ onto $\bar{u}_2$ is beyond the maximal $\tau_M$, so $\tau_k$ is clipped to it. In the same way, for $\bar{u}'''_1$ the step size will be the minimal one, $t_k = e^{\tau_m}$, whereas for the intermediate $\bar{u}''_1$ we have $\tau_k\in(\tau_m, \tau_M)$. The two layers, jointly trained, should learn how to position $\bar{u}_1$ and $\bar{u}_2$ in the lifted space so as to represent important directional information of $d_k$ and $g_k$ by looking at similar optimization tasks, thus being able to produce suitable step sizes. [Figure 2: projections of $\bar{u}'_1$, $\bar{u}''_1$, and $\bar{u}'''_1$ onto $\bar{u}_2$ and the clipping interval $[\tau_m, \tau_M]$.] Inner products of this kind appear, for example, in the sufficient-decrease Wolfe condition for backtracking line search, which makes our policy comparable to such line searches in the sense that $\pi(\cdot;\theta)$ does not require additional information to operate.
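The forward pass of the policy then reduces to a few lines; the 10-dimensional upper-triangular feature layout in dotln is our assumption about how the 4x4 Gram matrix is vectorized, and the zero-mean weight names are placeholders.

```python
import numpy as np

def dotln(d, g, s, y, eps=1e-8):
    """u0: logs of pairwise inner products of (d_k, g_k, s_{k-1}, y_{k-1}),
    with superdiagonal signs reversed (so e.g. -d^T g > 0 for a descent d)."""
    V = np.stack([d, g, s, y], axis=1)
    X = V.T @ V                             # 4 x 4 Gram matrix
    iu = np.triu_indices(4, k=1)
    X[iu] = -X[iu]                          # reverse superdiagonal signs
    feats = X[np.triu_indices(4)]           # 10 unique entries (assumed layout)
    return np.log(np.maximum(feats, eps))   # clamp avoids log of negatives

def step_size(u0, W1, b1, W2, b2, tau_m=-3.0, tau_M=0.0):
    """t_k = exp(clip of the scalar projection of u1/||u2|| onto u2/||u2||)."""
    u1 = W1 @ u0 + b1
    u2 = W2 @ u0 + b2
    tau = np.clip((u2 @ u1) / (u2 @ u2), tau_m, tau_M)   # Eq. (3)
    return np.exp(tau)
```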
However, notice that the clip function is not suitable for training, given that it is non-differentiable and gradients cannot be backpropagated through it. Fortunately, the clip operation (3) can be cast as a convex optimization problem,

$$\tau_k = \arg\min_{\tau\in\mathbb{R}} \|u_2\tau - u_1\|^2 \quad (4) \qquad \text{s.t. } \tau_m \le \tau \le \tau_M, \quad (5)$$

allowing $\tau_k$ to be calculated by a convex optimization layer, defined here as a CVX Layer (Agrawal et al., 2019). This layer outputs the solution to a parameter-dependent convex optimization problem. For the special case where the solution is not differentiable with respect to the input (e.g., in our case, when an inequality constraint is active), the automatic differentiation procedure delivers a heuristic quantity that can be employed as a gradient. The use of a CVX Layer is therefore convenient for training our policy; on the other hand, using Eq. (3) in its place when applying the already-trained policy significantly speeds up the step-size evaluation compared to solving (4). It is important to remark that this policy is independent of both the memory length $m$ of Algorithm 1 and the problem dimension $n$. Additionally, the lower and upper limits for the log-step size, $\tau_m$ and $\tau_M$, respectively, can also be learned. In this work, however, we chose $\tau_m = -3$ and $\tau_M = 0$, letting $t_k\in[0.0497, 1]$. This interval is broad enough to let our method be compared fairly with backtracking line searches. Moreover, when we allowed $\tau_M$ to be learned in our tests, it converged to values very close to $\tau_M = 0$, indicating that 1 was already an adequate upper limit for the step size.
The paper studies the problem of learning a step-size policy for the L-BFGS algorithm. It falls into the general category of meta-learning algorithms that try to derive a data-driven approach to learning one of the parameters of a learning algorithm; in this case, it is the learning rate of L-BFGS. The paper is very similar in nature to the papers of Ravi & Larochelle, MAML, and Andrychowicz.
Uncertainty Calibration Error: A New Metric for Multi-Class Classification
1 INTRODUCTION . Advances in deep learning have led to superior accuracy in classification tasks, making deep learning classifiers an attractive choice for safety-critical applications like autonomous driving (Chen et al., 2015) or computer-aided diagnosis (Esteva et al., 2017). However, the high accuracy of recent deep learning models alone is not sufficient for such applications. In cases where serious decisions are made upon a model's predictions, it is essential to also consider the uncertainty of these predictions. We need to know whether a prediction is likely to be incorrect or whether invalid input data is presented to a deep model, e.g. data that is far away from the training domain or obtained from a defective sensor. The consequences of a false decision based on an uncertain prediction can be fatal. A natural expectation is that the certainty of a prediction should be directly correlated with the quality of the prediction. In other words, predictions with high certainty are more likely to be accurate than uncertain predictions, which are more likely to be incorrect. A common misconception is the assumption that the estimated softmax likelihood can be directly used as a confidence measure for the predicted class. This expectation is dangerous in the context of critical decision-making. The estimated likelihood of models trained by minimizing the negative log-likelihood (i.e. cross-entropy) is highly overconfident; that is, the estimated likelihood is considerably higher than the observed frequency of accurate predictions with that likelihood (Guo et al., 2017).

2 UNCERTAINTY ESTIMATION . In this work, we focus on uncertainty from approximately Bayesian methods. We assume a general multi-class classification task with $C$ classes. Let the input $\mathbf{x}\in\mathcal{X}$ be a random variable with corresponding label $y\in\mathcal{Y} = \{1,\dots,C\}$. Let $f^{\mathbf{w}}(\mathbf{x})$ be the output (logits) of a neural network with weight matrices $\mathbf{w}$, and with model likelihood $p(y = c\,|\,f^{\mathbf{w}}(\mathbf{x}))$ for class $c$, taken from the probability vector $\mathbf{p} = \sigma_{\mathrm{SM}}(f^{\mathbf{w}}(\mathbf{x}))$, obtained by passing the model output through the softmax function $\sigma_{\mathrm{SM}}(\cdot)$. From a frequentist perspective, the softmax likelihood is often interpreted as the confidence of a prediction; throughout this paper, we follow this definition. The frequentist approach assumes a single best point estimate of the parameters (or weights) of a neural network. In frequentist inference, the weights of a deep model are obtained by maximum likelihood estimation (Bishop, 2006), and the normalized output likelihood for an unseen test input does not consider uncertainty in the weights (Kendall & Gal, 2017). Weight uncertainty (also referred to as model or epistemic uncertainty) is a considerable source of predictive uncertainty for models trained on data sets of limited size (Bishop, 2006; Kendall & Gal, 2017). Bayesian neural networks and recent advances in their approximation provide valuable mathematical tools for the quantification of model uncertainty (Gal & Ghahramani, 2016; Kingma & Welling, 2014). Instead of assuming the existence of a single best parameter set, we place distributions over the parameters and want to consider all possible parameter configurations, weighted by their posterior. More specifically, given a training data set $\mathcal{D}$ and an unseen test sample $\mathbf{x}$ with class label $y$, we are interested in evaluating the predictive distribution

$$p(y\,|\,\mathbf{x},\mathcal{D}) = \int p(y\,|\,\mathbf{x},\mathbf{w})\,p(\mathbf{w}\,|\,\mathcal{D})\,\mathrm{d}\mathbf{w}.$$

This integral requires evaluating the posterior $p(\mathbf{w}\,|\,\mathcal{D})$, which involves the intractable marginal likelihood. A possible solution is to approximate the posterior with a simpler, tractable distribution $q(\mathbf{w})$ by optimization. In this work, we incorporate the following approximately Bayesian methods, which we use in our experiments to obtain weight uncertainty: Monte Carlo (MC) dropout (Gal & Ghahramani, 2016), Gaussian dropout (Wang & Manning, 2013; Kingma et al., 2015), Bayes by Backprop (Blundell et al., 2015), SWA-Gaussian (Maddox et al., 2019), and (although not Bayesian) deep ensembles (Lakshminarayanan et al., 2017). A short review of each of these methods can be found in Appendix A.2.
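All of these methods approximate the predictive distribution by Monte Carlo averaging over weight samples; a generic sketch, where `logits_fn` and `sample_weights` are assumed stand-ins for a stochastic forward pass (e.g. a dropout-enabled network):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_predictive(logits_fn, sample_weights, x, S=50):
    """p(y | x, D) ~= (1/S) * sum_s softmax(f^{w_s}(x)) with w_s ~ q(w)."""
    probs = [softmax(logits_fn(x, sample_weights())) for _ in range(S)]
    return np.mean(probs, axis=0)
```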
3 RELATED CALIBRATION METRICS . Expected Calibration Error. The expected calibration error (ECE) is one of the most popular calibration error metrics; it estimates model calibration by binning the predicted confidences $\hat{p} = \max_c p(y = c\,|\,\mathbf{x})$ into $M$ bins of equidistant intervals and comparing them to the average accuracies per bin (Naeini et al., 2015; Guo et al., 2017):

$$\mathrm{ECE} = \sum_{m=1}^M \frac{|B_m|}{n}\,\big|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\big|, \qquad (1)$$

with $n$ the number of test samples and $\mathrm{acc}(B)$ and $\mathrm{conf}(B)$ denoting the accuracy and confidence of bin $B$, respectively. Several recent works have described severe pathologies of the ECE metric (Ashukha et al., 2020; Nixon et al., 2019; Kumar et al., 2019). Most notably, the ECE metric is minimized by a model constantly predicting the marginal distribution of the majority class, which makes it impossible to optimize directly (Kumar et al., 2018). Additionally, the ECE only considers the maximum class probability and ignores the remaining entries of the probability vector $\mathbf{p}(\mathbf{x})$.

Adaptive Calibration Error. Nixon et al. (2019) proposed the adaptive calibration error (ACE) to address the issue of the fixed bin widths of ECE-like metrics. For models with high accuracy or overconfidence, most of the predictions fall into the rightmost bins, whereas only very few predictions fall into the remaining bins. ACE spaces the bins such that an equal number of predictions contributes to each bin. The final ACE is computed by averaging over per-class ACE values to address the issue raised by Kull et al. (2019). However, this makes the metric more sensitive to the manually selected number of bins $M$, as the number of bins effectively becomes $C\cdot M$, with $C$ the number of classes. Using fixed bin widths, the number of samples in the sparsely populated bins is further reduced, which increases the variance of each measurement per bin. Using adaptive bins, the lower-confidence bins span a wide range of values, which increases the bias of the bin's measurement.

Negative Log-Likelihood. Deep models for classification are usually trained by minimizing the average negative log-likelihood (NLL):

$$\mathrm{NLL} = \frac{1}{N}\sum_{i=1}^N -\log p(y = y_i\,|\,\mathbf{x}_i). \qquad (2)$$

The NLL is also commonly used as a metric for measuring the calibration of uncertainty. However, the NLL is minimized by increasing the confidence $\max_c p(y = c\,|\,\mathbf{x})$, which favors overconfident models and models with higher accuracy (Ashukha et al., 2020). This metric is therefore unable to compare the calibration of models with different accuracies, and training a model by minimizing the NLL does not necessarily lead to good calibration.
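A reference implementation of the binned estimator in Eq. (1), with equal-width bins; details such as the handling of empty bins are our choice:

```python
import numpy as np

def ece(probs, labels, M=10):
    """Expected Calibration Error: probs (N, C), labels (N,)."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, M + 1)
    n = len(labels)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():                      # empty bins contribute nothing
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            total += in_bin.sum() / n * gap
    return total
```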
Brier Score. The average Brier score is another popular metric for assessing the quality of predictive uncertainty and is defined as (Brier, 1950; Lakshminarayanan et al., 2017)

$$ \mathrm{BS} = \frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} \left( \mathbb{1}(y_i = c) - p(y = c \mid x_i) \right)^2 . \qquad (3) $$

Similarly to the NLL, the Brier score favors high probabilities for correct predictions and low probabilities for incorrect predictions. Thus, models with higher accuracy tend to show a better Brier score, which makes the metric unsuitable for comparing the quality of uncertainty across models with different accuracies.

Maximum Mean Calibration Error. Common recalibration methods are applied post-hoc, e.g., temperature scaling on a separate calibration set. Kumar et al. (2018) proposed the maximum mean calibration error (MMCE), a trainable surrogate for the calibration error. It is defined as

$$ \mathrm{MMCE}^2(\mathcal{D}) = \sum_{i,j \in \mathcal{D}} \frac{\left( \mathbb{1}(\hat{y}_i = y_i) - \hat{p}_i \right)\left( \mathbb{1}(\hat{y}_j = y_j) - \hat{p}_j \right) k(\hat{p}_i, \hat{p}_j)}{m^2} \qquad (4) $$

over a batch D ⊂ D with batch size m, a matrix-valued universal kernel k, and ŷ = argmax_c p(y = c | x). Trainable calibration metrics are used in joint optimization with the negative log-likelihood:

$$ \arg\min_{w} \sum_{\mathcal{D}} \mathrm{NLL}(\mathcal{D}, w) + \lambda\, \mathrm{MMCE}(\mathcal{D}, w) . \qquad (5) $$

Kumar et al. (2018) claim to have addressed the issue that the ECE is unsuitable for direct optimization due to its high discontinuity in w. However, MMCE is also minimized by a model constantly predicting the marginal distribution of the classes. This leads to a subpar logit temperature when training with MMCE, and temperature scaling can further reduce miscalibration (Kumar et al., 2018).

4 UNCERTAINTY CALIBRATION ERROR. To give insight into our general approach to measuring the calibration of uncertainty, we first revisit the definition of perfect calibration of confidence (Guo et al., 2017) and show how this concept can be extended to our definition of calibration of uncertainty. Let ŷ = argmax p be the most likely class prediction of input x with confidence p̂ = max p and true label y. Then, following Guo et al. (2017), perfect calibration of confidence is defined as

$$ \mathbb{P}\left[ \hat{y} = y \mid \hat{p} = \alpha \right] = \alpha, \quad \forall \alpha \in [0, 1] . \qquad (6) $$

That is, the probability of a correct prediction ŷ = y given the prediction confidence p̂ should exactly correspond to that confidence. Instead of using only the probability of the predicted class, we use the entropy of p to express prediction uncertainty:

$$ \mathrm{H}(p) = -\sum_{c=1}^{C} p(c) \log p(c) . \qquad (7) $$

Let

$$ q(k) := \left( \mathbb{P}[y = 1 \mid \arg\max p(x) = k], \ldots, \mathbb{P}[y = C \mid \arg\max p(x) = k] \right) \qquad (8) $$

be the probability vector of true marginal class probabilities for all inputs x predicted as class k. Consider the following example: three i.i.d. inputs x_{1:3} in a binary classification task with ground-truth labels {1, 1, 2} have all been predicted with argmax p(x_{1:3}) = 1. Then q(1) = (2/3, 1/3). With this, we define a model to be perfectly calibrated if

$$ \mathrm{H}(q(k)) = \mathrm{H}(p \mid \arg\max p = k) \quad \forall k \in \{1, \ldots, C\} . \qquad (9) $$

From this, we derive an error metric for calibration of uncertainty:

$$ \mathbb{E}_{p}\left[ \left| \mathrm{H}(q) - \mathrm{H}(p) \right| \right] . \qquad (10) $$

However, this metric, and the use of the entropy as a measure of uncertainty, lacks interpretability, as the entropy scales with the number of classes C. This does not allow comparing the uncertainty or the calibration of models trained on different data sets.
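A minimal NumPy sketch of the average Brier score in Eq. (3), under the same `probs`/`labels` conventions as the earlier metric sketch:

```python
import numpy as np

def brier_score(probs, labels):
    """Average Brier score from Eq. (3): squared distance between the
    predicted probability vector and the one-hot encoded label."""
    n, c = probs.shape
    onehot = np.eye(c)[labels]  # (N, C) indicator matrix 1(y_i = c)
    return np.mean(np.sum((onehot - probs) ** 2, axis=1))
```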
Therefore, we propose to use the normalized entropy to scale the values to the range between 0 and 1:

$$ \tilde{\mathrm{H}}(p) := -\frac{1}{\log C} \sum_{c=1}^{C} p(c) \log p(c), \qquad \tilde{\mathrm{H}} \in [0, 1] . \qquad (11) $$

To further increase interpretability, we argue that the normalized entropy should correlate with the model error. From Eq. (6) and Eq. (11), we define perfect calibration of uncertainty as

$$ \mathbb{P}\left[ \hat{y} \neq y \mid \tilde{\mathrm{H}}(p) = \alpha \right] = \alpha, \quad \forall \alpha \in [0, 1] . \qquad (12) $$

That is, in a batch of inputs that are all predicted with an uncertainty of, e.g., 0.2, a top-1 error of 20% is expected. The confidence is interpreted as the probability of belonging to a particular class, which should naturally correlate with the model error for that class. This characteristic does not generally apply to the entropy, and thus the question arises why the entropy should correspond to the model error.

Proposition 1. The normalized entropy (uncertainty) H̃(p) approaches the top-1 error in the limit of the number of classes C if the model p is well-calibrated.

Proof.

$$ \lim_{C \to \infty} \tilde{\mathrm{H}}(p) = 1 - \hat{p} . \qquad (13) $$

The top-1 error equals (1 − p̂) if the model is perfectly calibrated in the sense of Eq. (6). For a detailed proof, see Appendix A.1.

Thus, the normalized entropy gives us an intuitive and interpretable measure of uncertainty: if a model is perfectly calibrated, H̃ corresponds to the top-1 error. We propose the following notion to quantify miscalibration of uncertainty:

$$ \mathbb{E}_{\tilde{\mathrm{H}}}\left[ \left| \mathbb{P}[\hat{y} \neq y \mid \tilde{\mathrm{H}}(p) = \alpha] - \alpha \right| \right], \quad \forall \alpha \in [0, 1] . \qquad (14) $$

We refer to this as the Expected Uncertainty Calibration Error (UCE) and approximate it with

$$ \mathrm{UCE} := \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{err}(B_m) - \mathrm{uncert}(B_m) \right| , \qquad (15) $$

using the same binning scheme as in ECE estimation. The error per bin is defined as

$$ \mathrm{err}(B_m) := \frac{1}{|B_m|} \sum_{i \in B_m} \mathbb{1}(\hat{y}_i \neq y_i) , \qquad (16) $$

where 1(ŷ_i ≠ y_i) equals 1 if ŷ_i ≠ y_i and 0 otherwise. The uncertainty per bin is defined as

$$ \mathrm{uncert}(B_m) := \frac{1}{|B_m|} \sum_{i \in B_m} \tilde{\mathrm{H}}(p_i) . \qquad (17) $$

Properties of UCE. The proposed UCE metric solves several problems of other metrics. First, the UCE is not zero for a model constantly predicting the marginal class distribution. Estimators of metrics with this pathology (e.g., ECE, MMCE) suffer from varying bias and therefore do not allow comparing the miscalibration of different models (Ashukha et al., 2020; Vaicenavicius et al., 2019). In contrast to ACE, UCE is not highly sensitive to the number of bins and provides a consistent ranking of different models for the same classification task (see Fig. 1). Additionally, UCE can be used as a trainable regularizer in a similar manner to MMCE. During training, we compute the UCE over mini-batches D ⊂ D and add it to the NLL training objective,

$$ \arg\min_{w} \sum_{\mathcal{D}} \mathrm{NLL}(\mathcal{D}, w) + \lambda\, \mathrm{UCE}(\mathcal{D}, w) , \qquad (18) $$

weighted by a factor λ. UCE is zero for an optimal model and thus does not penalize highly confident predictions for models with high accuracy, which is a major disadvantage of plain entropy regularization (Pereyra et al., 2017). Predictions with low uncertainty but high top-1 error are penalized, whereas predictions with high accuracy are encouraged to have low uncertainty.
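Putting Eqs. (11) and (15)-(17) together, the UCE estimator mirrors the ECE computation but bins normalized entropies against top-1 errors. A NumPy sketch, with the same input conventions as the earlier metric sketches (the bin count M is a free parameter):

```python
import numpy as np

def normalized_entropy(probs, eps=1e-12):
    """H-tilde from Eq. (11): entropy scaled to [0, 1] by log C."""
    c = probs.shape[1]
    return -np.sum(probs * np.log(probs + eps), axis=1) / np.log(c)

def uncertainty_calibration_error(probs, labels, n_bins=10):
    """UCE from Eq. (15): bin normalized entropies and average
    |top-1 error - mean uncertainty| weighted by bin size."""
    unc = np.maximum(normalized_entropy(probs), 1e-12)  # zero-entropy -> first bin
    err = (probs.argmax(axis=1) != labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    uce = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (unc > lo) & (unc <= hi)
        if in_bin.any():  # |B_m|/n times |err(B_m) - uncert(B_m)|
            uce += in_bin.mean() * abs(err[in_bin].mean() - unc[in_bin].mean())
    return uce
```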
This paper proposes a new calibration error measure named UCE (Uncertainty Calibration Error) for deep classification models. It measures how far a model is from "perfect calibration" (i.e., the predicted uncertainty matches the classification error at every level in [0, 1]), relying on the normalized entropy for multiclass classification. The UCE is well justified for classification problems with many classes, where the normalized entropy is shown to be asymptotically equivalent to the top-1 classification error. A strength of the UCE metric is that its value has some interpretability properties, and it is argued to be robust to the number of bins used.
SP:7a92beaba926a93a627208abebe4a455ae3e0400
Multiscale Invertible Generative Networks for High-Dimensional Bayesian Inference
1 INTRODUCTION. Bayesian inference provides a powerful framework to blend prior knowledge, the data generation process, and (possibly small) data for statistical inference. With some prior knowledge ρ (a distribution) for the quantity of interest x ∈ R^d, and some (noisy) measurement y ∈ R^{d_y}, it casts on x a posterior

$$ q(x \mid y) \propto \rho(x)\, L(y \mid x), \quad \text{where } L(y \mid x) = \mathcal{N}(y - F(x);\, 0, \Gamma), \qquad (1) $$

where L(y | x) is the likelihood that compares the data y with the system prediction F(x) from the candidate x; here F denotes the forward process. We can use different distributions to model the mismatch ε = y − F(x), and for simplicity of illustration we assume a Gaussian in Equation 1. For example, Bayesian deep learning generates model-predicted logits F(x) from model parameters x, and compares them with discrete labels y through a binomial or multinomial distribution. Sampling or inferring from q is a long-standing challenge, especially in high-dimensional (high-d) cases. An arbitrary high-d posterior can have its important regions (also called "modes") anywhere in the high-d space, and finding these modes requires a computational cost that grows exponentially with the dimension d. This intrinsic difficulty is a consequence of "the curse of dimensionality", from which all existing Bayesian inference methods suffer, e.g., MCMC-based methods (Neal et al., 2011; Welling & Teh, 2011; Cui et al., 2016), SVGD-type methods (Liu & Wang, 2016; Chen et al., 2018; 2019a), and generative modeling (Morzfeld et al., 2012; Parno et al., 2016; Hou et al., 2019). In this paper, we focus on Bayesian inference problems with multiscale structure and exploit this structure to sample from a high-d posterior. While the original problem has a high spatial resolution (fine scale), its low-resolution (coarse-scale) analogue is computationally attractive because it lies in a low-dimensional (low-d) space. A problem has the multiscale structure if such a coarse-scale low-d surrogate exists and gives a good approximation to the fine-scale high-d problem; see Section 2.1. Such multiscale structure is very common in high-d Bayesian inference problems. For example, inferring a 3-D permeability field of the subsurface at the scale of meters is a reasonable approximation of the same problem at the scale of centimeters, while the problem dimension is 10^6-times smaller. We propose a Multiscale Invertible Generative Network (MsIGN) to sample from high-d Bayesian inference problems with multiscale structure. MsIGN is a flow-based generative network that can both generate samples and evaluate densities. It consists of multiple scales that recursively lift samples up to a finer scale (higher resolution), except that the coarsest scale directly samples from a low-d (low-resolution) distribution. At each scale, a fixed prior conditioning layer combines coarse-scale samples with random noise according to the prior to enhance the resolution, and then an invertible flow modifies the samples for better accuracy; see Figure 1. This architecture makes MsIGN fully invertible between the final sample and the random noise at all scales. MsIGN undergoes multi-stage training that learns a hierarchy of distributions with dimensions growing from the lowest to the highest (the target posterior). Each stage gives a good initialization to the next stage thanks to the multiscale property.
To capture multiple modes, we choose the Jeffreys divergence D_J(p‖q) as the training objective at each stage, defined as

$$ D_J(p \| q) = D_{\mathrm{KL}}(p \| q) + D_{\mathrm{KL}}(q \| p) = \mathbb{E}_{x \sim p}\left[ \log\left( p(x)/q(x) \right) \right] + \mathbb{E}_{x \sim q}\left[ \log\left( q(x)/p(x) \right) \right] . \qquad (2) $$

The Jeffreys divergence removes bad local minima of the single-sided Kullback-Leibler (KL) divergence and so avoids mode missing. We build an unbiased estimate of it by leveraging the prior conditioning layer in importance sampling (a minimal Monte Carlo sketch of this objective is given below). A proper loss function and good initialization from multi-stage training solve the non-convex optimization stably and capture the multiple modes of the high-d distribution. In summary, we claim four contributions in this work. First, we propose a Multiscale Invertible Generative Network (MsIGN) with a novel prior conditioning layer, which can be trained in a coarse-to-fine manner. Second, the Jeffreys divergence is used as the objective function to avoid mode collapse, and is estimated by importance sampling based on the prior conditioning layer. Third, when applied to two Bayesian inverse problems, MsIGN clearly captures multiple modes in the high-d posterior and approximates the posterior accurately, demonstrating its superior performance compared with previous methods via the generative modeling approach. Fourth, we also apply MsIGN to image synthesis tasks, where it achieves superior performance in bits-per-dimension among our baseline models, like Glow (Kingma & Dhariwal, 2018), FFJORD (Grathwohl et al., 2018), Flow++ (Ho et al., 2019), i-ResNet (Behrmann et al., 2019), and Residual Flow (Chen et al., 2019b). MsIGN also yields great interpretability of its neurons in intermediate layers.

2 METHODOLOGY. We abbreviate q(x|y) in Equation 1 as q(x) for simplicity in the following, because y only plays the role of defining the target distribution q(x) in MsIGN. In Section 2.1, we discuss the multiscale structure of the posterior q(x) in detail and derive a scale decoupling that can be used to divide and conquer the high-d challenge of Bayesian inference. As a flow-based generative model as in Dinh et al. (2016), MsIGN models a bijection that maps Gaussian noise z to a sample x whose distribution is denoted p_θ(x), where θ are the network parameters. MsIGN allows fast generation of samples x and density evaluation p_θ(x), so we train our working distribution p_θ(x) to approximate the target distribution q(x). We present the architecture of MsIGN in Section 2.2 and the training algorithm in Section 2.3.

2.1 MULTISCALE STRUCTURE AND SCALE DECOUPLING. We say a Bayesian inference problem has multiscale structure if the associated coarse-scale likelihood L_c approximates the original likelihood L well:

$$ L(y \mid x) \approx L_c(y \mid x_c), \quad \text{where } L_c(y \mid x_c) := \mathcal{N}(y - F_c(x_c);\, 0, \Gamma) . \qquad (3) $$

Here x_c ∈ R^{d_c} is a coarse-scale version of the fine-scale quantity x ∈ R^d (d_c < d), given by a deterministic pooling operator A: x_c = A(x). The map F_c: R^{d_c} → R^{d_y} is a forward process that gives the system prediction based on the coarse-scale information x_c. A popular case of the multiscale structure is when A is the average pooling operator and F(x) ≈ F_c(x_c), meaning that the system prediction mainly depends on the lower-resolution information x_c.
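As noted above, here is a minimal Monte Carlo sketch of the Jeffreys objective in Eq. (2). It assumes callables for the log-densities and samplers of p and q; in MsIGN itself the expectation under the target p cannot be sampled directly and is instead estimated by importance sampling through the prior conditioning layer, so this is only an illustration of the two-sided structure of the loss.

```python
import numpy as np

def jeffreys_divergence_mc(log_p, log_q, sample_p, sample_q, n=1024):
    """Monte Carlo estimate of Eq. (2): D_J(p||q) = KL(p||q) + KL(q||p)."""
    xp, xq = sample_p(n), sample_q(n)
    kl_pq = np.mean(log_p(xp) - log_q(xp))  # E_{x~p}[log p(x) - log q(x)]
    kl_qp = np.mean(log_q(xq) - log_p(xq))  # E_{x~q}[log q(x) - log p(x)]
    return kl_pq + kl_qp
```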
Equation 3 motivates us to define a surrogate distribution q̃(x) ∝ ρ(x) L_c(y | A(x)) that approximates the target posterior q(x) well (we omit normalizing constants; equivalence and approximation are up to normalization in the following):

$$ \tilde{q}(x) = \rho(x)\, L_c(y \mid A(x)) = \rho(x)\, L_c(y \mid x_c) \approx \rho(x)\, L(y \mid x) = q(x) . \qquad (4) $$

We also notice that the prior ρ allows an exact scale decoupling. To generate a sample x from ρ, one can first sample its coarse-scale version x_c = A(x), and then replenish the missing fine-scale details without changing the coarse-scale structure by sampling from the conditional distribution ρ(x | x_c) = ρ(x | A(x) = x_c). Using ρ_c to denote the distribution of x_c = A(x), the conditional probability calculation summarizes this scale decoupling as ρ(x) = ρ(x | x_c) ρ_c(x_c). Combining the scale effect in the likelihood and the scale decoupling in the prior, we decouple the surrogate q̃(x) = ρ(x) L_c(y | A(x)) into the prior conditional distribution ρ(x | x_c) and a coarse-scale posterior, defined as q_c(x_c) := ρ_c(x_c) L_c(y | x_c). The decoupling goes as

$$ \tilde{q}(x) = \rho(x)\, L_c(y \mid x_c) = \rho(x \mid x_c)\, \rho_c(x_c)\, L_c(y \mid x_c) = \rho(x \mid x_c)\, q_c(x_c) . \qquad (5) $$

The prior conditional distribution ρ(x | x_c) bridges the coarse-scale posterior q_c(x_c) and the surrogate q̃(x), which in turn approximates the original fine-scale posterior q(x). Parno et al. (2016) proposed a similar scale decoupling relation, and we leave the discussion and comparison to Appendix A. Figure 1 shows the integrated sampling strategy. To sample an x from q, we start with an x_c from q_c. The prior conditioning layer then performs random upsampling from the prior conditional distribution ρ(· | x_c), and the output is a sample x̃ of the surrogate q̃. Due to the approximation q̃ ≈ q from Equation 4, we stack multiple invertible blocks to form an invertible flow F that modifies the sample x̃ ∼ q̃ into a sample x ∼ q: x = F(x̃). F is initialized as the identity map in training. Finally, to obtain the x_c from q_c, we apply the above procedure recursively until the dimension of the coarsest scale is small enough that q_c can be easily sampled by a standard method.

2.2 MULTISCALE INVERTIBLE GENERATIVE NETWORK: ARCHITECTURE. Our proposed MsIGN has multiple levels to apply the above strategy recursively. We denote by L the number of levels, x_l ∈ R^{d_l} the sample at level l, and A_l: R^{d_l} → R^{d_{l−1}} the pooling operator from level l to level l−1: x_{l−1} = A_l(x_l). Following the idea in Section 2.1, we can define the l-th level target q_l(x_l) and surrogate q̃_l(x̃_l), and the last-level target q_L is our original target q in Equation 1. The l-th level of MsIGN uses a prior conditioning layer PC_l and an invertible transform F_l to capture q_l.

Prior conditioning layer. The prior conditioning layer PC_l at level l lifts a coarse-scale sample x_{l−1} ∈ R^{d_{l−1}} up to a random fine-scale one x_l ∈ R^{d_l} following the conditional distribution ρ(x_l | x_{l−1}). The difference in dimension is compensated by Gaussian noise z_l ∈ R^{d_l − d_{l−1}}, which is the source of randomness: x_l = PC_l(x_{l−1}, z_l). PC_l depends only on the prior conditional distribution ρ(x_l | x_{l−1}), and thus can be pre-computed independently for different levels, regardless of the likelihood L. When the prior is Gaussian and the pooling operators are linear (e.g., average pooling), the prior conditional distribution is still Gaussian, with moments specified as follows.
Lemma 2.1. Suppose that ρ(x_l) = N(x_l; 0, Σ_l) and A_l(x_l) = A_l x_l for some A_l ∈ R^{d_{l−1}×d_l}. Then, with

$$ U_{l-1} := \Sigma_l A_l^T \left( A_l \Sigma_l A_l^T \right)^{-1} \quad \text{and} \quad \Sigma_{l|l-1} := \Sigma_l - \Sigma_l A_l^T \left( A_l \Sigma_l A_l^T \right)^{-1} A_l \Sigma_l , $$

we have ρ(x_l | x_{l−1} = A_l x_l) = N(x_l; U_{l−1} x_{l−1}, Σ_{l|l−1}).

With the Cholesky decomposition (or eigen-decomposition) Σ_{l|l−1} = B_l B_l^T, we design the prior conditioning layer PC_l as below, which is invertible between x_l and (x_{l−1}, z_l):

$$ x_l = \mathrm{PC}_l(x_{l-1}, z_l) := U_{l-1} x_{l-1} + B_l z_l , \quad z_l \sim \mathcal{N}(0, I_{d_l - d_{l-1}}) . \qquad (6) $$

We refer readers to Appendix B for the proof of Lemma 2.1 and of the invertibility in Equation 6. When the prior is non-Gaussian or the pooling operators are nonlinear, there exists a nonlinear invertible prior conditioning operator x_l = PC_l(x_{l−1}, z_l) such that x_l follows the prior conditional distribution ρ(x_l | x_{l−1}) given x_{l−1} and z_l ∼ N(0, I_{d_l − d_{l−1}}). We can pre-train an invertible network to approximate this sampling process and fix it as the prior conditioning layer.

Invertible flow. The invertible flow F_l at level l modifies the surrogate q̃_l towards the target q_l. The more accurate the multiscale structure in Equation 3 is, the better q̃_l approximates q_l, and the closer F_l is to the identity map. Therefore, we parameterize F_l by a flow-based generative model and initialize it as the identity map. In practice, we utilize the invertible block of Glow (Kingma & Dhariwal, 2018), which consists of actnorm, an invertible 1×1 convolution, and an affine coupling layer, and stack several such blocks as the invertible flow F_l in MsIGN.

Overall model. MsIGN is a bijective map between the random noise inputs at different scales {z_l}_{l=1}^{L} and the finest-scale sample x_L. The forward direction of MsIGN maps {z_l}_{l=1}^{L} to x_L as follows:

$$ x_1 = F_1(z_1), \qquad \tilde{x}_l = \mathrm{PC}_l(x_{l-1}, z_l), \quad x_l = F_l(\tilde{x}_l), \quad 2 \le l \le L . \qquad (7) $$

As a flow-based generative model, sample generation as in Equation 7 and density evaluation p_θ(x) via the change-of-variables rule are accessible and fast for MsIGN. In scenarios where a certain bound needs to be enforced on the output, we can append element-wise output activations at the end of MsIGN. For example, image synthesis can use the sigmoid function so that pixel values lie in [0, 1]. Such activations should be bijective to keep the invertible relation between the random noise and the sample.
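Lemma 2.1 and Eq. (6) translate directly into a precomputation of U_{l−1} and B_l. The NumPy sketch below uses an eigen-decomposition, since Σ_{l|l−1} is rank-deficient (rank d_l − d_{l−1}), and keeps only the columns with nonzero eigenvalues; matrix names follow the lemma, and the tolerance is an illustrative choice.

```python
import numpy as np

def make_prior_conditioning(Sigma_l, A_l, tol=1e-10):
    """Precompute U_{l-1} and B_l of Eq. (6) for a Gaussian prior
    N(0, Sigma_l) and a linear pooling operator x_{l-1} = A_l x_l."""
    S_AT = Sigma_l @ A_l.T
    inv = np.linalg.inv(A_l @ S_AT)       # (A_l Sigma_l A_l^T)^{-1}
    U = S_AT @ inv                        # conditional mean map U_{l-1}
    Sigma_cond = Sigma_l - U @ S_AT.T     # Sigma_{l|l-1}, rank d_l - d_{l-1}
    w, V = np.linalg.eigh(Sigma_cond)
    keep = w > tol                        # drop the numerically zero modes
    B = V[:, keep] * np.sqrt(w[keep])     # Sigma_{l|l-1} = B B^T
    return U, B

def prior_conditioning(U, B, x_coarse, z):
    """x_l = PC_l(x_{l-1}, z_l) = U x_{l-1} + B z_l, with z_l ~ N(0, I)."""
    return U @ x_coarse + B @ z
```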
This paper presents a model and a corresponding training approach for multi-scale invertible models. The presented model is defined on multiple scales with information on finer scales being conditioned on coarser scales. Data generation is hence done sequentially from a coarser to finer scale. The authors argue that this multi-scale sampling helps in addressing the curse of dimensionality problem by allowing to sample from high density regions more efficiently.
SP:92d112388a1eac20c2208f0596cdfcdcca685c8f
Meta Gradient Boosting Neural Networks
1 INTRODUCTION . While humans can learn quickly with a few samples with prior knowledge and experiences , artificial intelligent algorithms face challenges in dealing with such situations . Learning to learn ( or metalearning ) ( Vilalta & Drissi , 2002 ) emerges as the common practice to address the challenge by leveraging transferable knowledge learned from previous tasks to improve learning on new tasks ( Hospedales et al. , 2020 ) . An important direction in meta-learning research is meta-optimization frameworks ( Lee & Choi , 2018 ; Nichol & Schulman , 2018 ; Rusu et al. , 2019 ) , a.k.a. , model-agnostic meta-learning ( MAML ) ( Finn et al. , 2017 ) . Such frameworks learn initial model parameters from similar tasks and commit to achieving superior performance on new tasks that conform to the same distribution through fast adaptation . They offer excellent flexibility in model choices and demonstrate appealing performance in various domains , such as image classification ( Li et al. , 2017 ; Finn et al. , 2017 ) , language modeling ( Vinyals et al. , 2016 ) , and reinforcement learning ( Fernando et al. , 2018 ; Jaderberg et al. , 2019 ) . Generally , such frameworks define a target model Fθ and a meta-learnerM . The learning tasks T = { T train , T test } are divided into training and testing tasks , where T are generated from the meta-datasetD , i.e. , T ∼ P ( D ) . Each task contains a support set DS and a query set DQ for training and evaluating a local model . The initialization of the model parameter θ is learned by the meta learner , i.e. , θ ←M ( T train ) . We denote the meta-learned parameter as φ so that θ ← φ . For each task , the model obtains locally optimal parameter θ̂ by minimizing the loss L ( Fθ ( DS ) ) . The meta parameter φ will be updated across all training tasks by minimizing the loss ΣT∈T train ( L ( Fθ̂ ( D Q ) ) ) . Generally , it takes only a small number of epochs to learn locally optimal parameters across training tasks so that meta-learned parameter φ can quickly converge to an optimal parameter for new tasks . Most methods assume some transferable knowledge across all tasks and rely on a single shared meta parameter . However , the success of the meta-learners are limited within similar task families , and the single shared meta parameter can not well support fast learning on diverse tasks ( e.g. , a large meta-dataset ) or task distributions ( e.g. , T are generated from multiple meta-datasets ) due to conflicting gradients for those tasks ( Hospedales et al. , 2020 ) . Recent efforts have studied multiple initial conditions to solve the above challenges . Some employ probabilistic models ( Rusu et al. , 2019 ; Finn et al. , 2018 ; Yoon et al. , 2018 ) while others incorporate task-specific information ( Lee & Choi , 2018 ; Vuorio et al. , 2019 ; Alet et al. , 2018 ) . The former learns to obtain an approximate posterior of an unseen task yet needs sufficient samples to get reliable data distributions ; the latter conducts task-specific parameter initialization using multiple meta-learners yet requires expensive computation and can not transfer knowledge across different modes of task distributions . In this work , we aim to resolve the above challenges from a novel perspective by proposing a meta gradient boosting framework . Gradient boosting ( Friedman , 2001 ) aims to build a new learner towards the residuals of the previous prediction result for each step . 
We call the learner at each step a weak learner and make predictions by summing up the weak learners. Recent research (Badirli et al., 2020; Olson et al., 2018) has demonstrated the potential of decomposing deep neural nets into an ensemble of sub-networks, each achieving low training errors. We propose to use the first weak learner, or the first few weak learners, as the base learner, followed by a series of gradient boosting modules to cope with a diverse array of tasks: the base learner is responsible for inferring transferable knowledge by learning across all tasks, while the gradient-boosting modules make task-specific updates to the base learner. Compared with existing work that uses multiple initial conditions, our approach does not require specifying a set of initialization conditions and thus has better flexibility in dealing with multi-mode tasks. Our proposed framework is also more efficient than its counterparts, as it does not require a large number of gradient boosting modules. We evaluate the proposed framework on few-shot learning scenarios for both regression and classification tasks. The experimental results show the strong performance of the proposed framework, which demonstrates the model's ability to learn from very few cases.

2 RELATED WORK. Meta-learning has the potential of replicating the human ability to learn new concepts from one or very few instances. It has recently drawn increasing attention, given its broad applicability to different fields (Hospedales et al., 2020). Pioneers in this topic (Finn et al., 2017; Nichol & Schulman, 2018) propose optimization algorithms with learned parameters to automate the exploitation of the structure of learning problems. However, most of them initialize the same set of model parameters for all tasks, which may have different distributions, thus resulting in over-fitting. Recent studies either model the mixture of multiple initial conditions via probabilistic modeling (Finn et al., 2018; Yoon et al., 2018) or incorporate task-specific knowledge (Lee & Choi, 2018; Alet et al., 2018) to address the above issues. Yoon et al. (2018) and Finn et al. (2018) use variational approximation to enable probabilistic extensions to MAML, but it is unclear how to extend MAML to a wide range of task distributions. Rusu et al. (2019) consider multiple conditions by borrowing the idea of variational autoencoders (Kingma & Welling, 2014), encoding inputs into a low-dimensional latent embedding space and then decoding the learned latent code to generate task-specific parameters. Another line of research defines a set of initialization modules and incorporates task-specific information to select among them; this way, it can identify the mode of tasks sampled from a multimodal task distribution and adapt quickly through gradient updates (Vuorio et al., 2019). Yao et al. (2019) propose a Hierarchically Structured Meta-Learning (HSML) framework to perform soft clustering on tasks. HSML first learns representations of the inputs and then obtains clustering results from the hierarchical clustering structure. HSML tailors the globally shared parameter initialization for each cluster via a parameter gate, and uses it to initialize all tasks within that cluster.
The above approaches share common limitations: 1) they require sufficient data samples to generalize the task distribution and thus may fail in few-shot cases; 2) they are computationally expensive due to the globally stored initialization modules; 3) they face challenges in exhaustively listing every possible initial condition. Two topics closely related to meta-learning are modular approaches (Andreas et al., 2016) and multitask learning (Zhang & Yang, 2017). Modular approaches are similar to meta-learning in that the input signal gives relatively direct information about a good structural decomposition of the problem. For example, Alet et al. (2018) adopt a modular structure and parameter adaptation method for learning reusable modules. Multi-task learning aims to learn a good shared parameter or to make the parameters for each task as similar as possible (Wang et al., 2020). For example, Zhang et al. (2018) propose two task networks that share the first few layers for generic information before applying different prediction layers to different tasks. These approaches differ from meta-learning in requiring fine-tuning of the models over all training samples, and thus cannot adapt well to new tasks. Our Meta Gradient Boosting (MGB) neural network framework is based on the idea of gradient boosting (Friedman, 2001), which builds a new learner towards the residuals of the previous prediction result at each step. The learner at each step is called a weak learner, and the prediction is based on the summation of the weak learners. Weak learners may vary from traditional decision trees (Chen & Guestrin, 2016) to neural networks (Tannor & Rokach, 2019; Badirli et al., 2020).

Algorithm 1: Training of MGB
1: Randomly initialize global parameter φ
2: while not done do
3:   for T ∈ T do
4:     for (x, y) ∈ D^S do
5:       Initialize f_{θ_0} by θ_0 ← φ
6:       for k ∈ range(K) do
7:         θ ← θ − β ∇L(y, F_θ)
8:       end for
9:     end for
10:    Get updated parameter θ̂
11:    for (x, y) ∈ D^Q do
12:      Calculate predictions F_{θ̂}(x)
13:      Calculate task loss L(y, F_{θ̂})
14:    end for
15:  end for
16:  Update φ by φ ← φ − γ ∇L_meta
17: end while

Figure 1: Example of the model with only one gradient-boosting module. Green lines are for the local update and red lines are for the global update.

A recent study (Badirli et al., 2020) proposes a general framework for gradient boosting on neural networks, which works for both regression and classification tasks. It uses the deep layers of neural nets as a bagging mechanism, in a similar spirit to a random forest classifier (Veit et al., 2016). After only slight tuning, deep neural nets can perform well on a wide range of small real-world datasets (Olson et al., 2018). These findings demonstrate the potential of decomposing deep neural nets into an ensemble of sub-networks, each achieving low training errors. In our framework, we use the first weak learner or the first few weak learners as the base learner for learning the shared initialization parameters across tasks. The output of each weak learner is then aggregated with the inputs to the next step, constructing an end-to-end learning strategy up to the last gradient boosting module.
This way, the base learner serves as transferable knowledge, and the gradient boosting modules following it are trained for task-specific predictions.

3 METHOD. We explore the problem in the context of supervised learning, where input-output pairs are available in both training and validation sets. Similar to previous meta-optimization based approaches (Finn et al., 2017; Nichol & Schulman, 2018), we assume the tasks are generated from an underlying distribution T ∼ P(D), where D is the meta-dataset, which is either a uni-mode dataset or a collection of multi-mode datasets. Given a set of tasks T = {T^train, T^test}, each task T ∈ T contains a support dataset D^S and a query dataset D^Q, both consisting of input-output pairs (x, y). We aim to learn a meta-learner M to guide the initialization of a target model F_θ so that the target model can quickly adapt to, and perform well on, a given new task. We propose a Meta Gradient Boosting (MGB) framework as the target model F_θ, which consists of several weak learners and can be represented as F_θ = Σ_{k=0}^{K} f_{θ_k}. The first weak learner f_{θ_0}, or the first few weak learners, are regarded as the base learner for learning the shared information across tasks; the remaining weak learners are gradient boosting modules for capturing task-specific information. The meta-learner aims to learn transferable knowledge and provides the initialization for the base learner, so that the model can quickly adapt to task-specific predictions within a few gradient-boosting steps. Figure 1 shows an example of our MGB framework with K = 1, where we update the model locally for task-specific predictions and update the meta-learner globally across all tasks; a first-order sketch of this training loop is given below.
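As promised, here is a first-order PyTorch sketch of the training loop in Algorithm 1. The full model F_θ (base learner plus boosting modules) is abstracted as a single `model`, and the second-order meta-gradient of exact MAML-style training is replaced by a first-order approximation, so this illustrates the local/global update structure rather than the authors' implementation; `tasks`, `loss_fn`, and the learning rates are hypothetical placeholders.

```python
import copy
import torch

def mgb_train_step(model, meta_opt, tasks, loss_fn, inner_lr=0.01, K=3):
    """One meta-update: adapt a copy of the model on each task's support
    set (local update, lines 4-9), evaluate on the query set (lines 11-14),
    and apply the accumulated query gradients to the shared phi (line 16)."""
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in tasks:
        local = copy.deepcopy(model)                      # theta_0 <- phi
        inner_opt = torch.optim.SGD(local.parameters(), lr=inner_lr)
        for _ in range(K):                                # local update
            inner_opt.zero_grad()
            loss_fn(local(support_x), support_y).backward()
            inner_opt.step()
        query_loss = loss_fn(local(query_x), query_y)     # task loss
        grads = torch.autograd.grad(query_loss, local.parameters())
        for p, g in zip(model.parameters(), grads):       # accumulate on phi
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()                                       # global update
```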
This study is presented clearly, and the core idea is interesting. However, the presented novelty is limited to a globally (for all tasks) and locally (task-specific) learning paradigm using a framework inspired by (Badirli et al., 2020). The authors have presented experimental results for both regression and classification setups, which are interesting.
SP:077926a214f87b9fdcd5a5f9d818d6313437cd90
Test-Time Adaptation and Adversarial Robustness
1 INTRODUCTION. There is a surge of interest in studying test-time adaptation to help generalization to unseen domains (e.g., recent work by Sun et al. (2020); Wang et al. (2020); Nado et al. (2020)). At a high level, generic test-time adaptation can be modeled as an algorithm Γ which accepts an (optional) labeled training dataset D, an (optional) model F trained on D (usually used as a starting point), and an unlabeled test feature set U, and outputs a model F̃ = Γ(F, D, U), in order to achieve high test accuracy on U. For a large test set U, test-time adaptation can be viewed as a form of transductive learning (Joachims (1999); Vapnik (1998)) (i.e., using D and U to train a model to predict the specific instances in U), which is argued to be easier than more traditional inductive learning. This paper studies test-time adaptation in the context of adversarial robustness (i.e., there is an active agent who tries to fool the test-time adaptation by perturbing the input so that F̃ gives wrong predictions). There are several motivations for pursuing this direction. First, this question is of practical interest: many practical ML pipelines run in a batch mode, where they first collect a set of unlabelled data points and then send them to a model (e.g., Nado et al. (2020)); for example, Instagram collects a large batch of photos before sending them to a model to tag people. In such cases, data in the batch may have been adversarially perturbed, and it is natural to ask whether we can leverage the large batch size and test-time adaptation to enhance adversarial robustness. Second, from a purely theoretical point of view, since test-time adaptation is a form of transductive learning, it is intriguing to ask whether transductive adversarial learning can be easier, given that traditional adversarial robustness is formulated in the inductive learning setting (e.g., Madry et al. (2018)). To this end, a recent work by Goldwasser et al. (2020) shows that, with transductive learning, one can achieve nontrivial guarantees for classes of bounded VC dimension with arbitrary train and test distributions. The current work complements their paper in the setting of deep learning. To study these questions, we formalize a threat model, which we call the (test-time) maximin threat model, for the adversarial robustness of test-time adaptation. Recall that the classic adversarial robustness game is a minimax game min_F E_V[max_Ṽ L(F, Ṽ)], where V is randomly sampled data, Ṽ is the perturbed data generated from V by the adversary, and L(F, Ṽ) is the loss of the model F on Ṽ. By contrast, in the maximin threat model, we allow V to be sampled from a different domain, and the game is maximin: E_V[max_U min_F̃ L(F̃, Ṽ)] (where U is the perturbed features of V, subject to the attack type, and Ṽ is the labeled perturbed data; see Definition 2). By the maximin inequality, it follows that this threat model is no harder than the minimax model (to allow the source and target domains to differ, we need to generalize the classic minimax model; see Definition 3). We then move on to the focus of this work: whether the maximin threat model is "strictly weaker" than the minimax threat model.
We note that any good defender solution ( a robust model ) in the minimax game induces a good defender solution in the maximin game ( an adaptation algorithm that outputs that robust model ) , thus intuitively , the good defender solutions of the minimax model is a subset of the good defender solutions of the maximin threat model . We ask whether such a containment is proper : That is , whether there exists a defender solution that is good in the maximin threat model , but is bad in the minimax threat model . The existence of such a defender will demonstrate that the maximin threat model admits more good solutions . Besides theoretical interest , this question is also of practical importance since these “ new ” solutions may possess desirable properties that good solutions in the minimax threat model may lack . For example , one such property is that the defender solution is attack agnostic ( Goodfellow ( 2018 ) ( pp.30 ) ) : That is , the solution is not to directly optimize the performance measure for a particular type of perturbation2 . To this end , we first present a provable separation between the maximin and minimax threat models in a natural Gaussian data model . In fact , the separation holds even when U only contains a single point , indicating the power of transductive learning . We then move to deep learning . While we do not have provable guarantees , we empirically examine Domain Adverarial Neural Networks ( DANN ) ( Ganin et al . ( 2017 ) ) , an algorithm designed for unsupervised domain adaptation ( UDA ) , as a candidate for the separation . Specifically , we demonstrate that DANN provides nontrivial testtime adversarial robustness against both transfer attacks and adaptive attacks , in both homogeneous and inhomogeneous cases . This is somewhat surprising as DANN is attack agnostic as we mentioned above , and has not been considered for adversarial robustness . Not surprisingly , as we hypothesized for a separation , the accuracy becomes very low when evaluating F̃ in the minimax model . Complementing the above result , we explore the maximin robustness of the recent data-oblivious adaptation algorithms ( namely , the adaptation algorithms do not useD , but just the pretrained model F and unlabeled test set U ) . Specifically , we consider Test-Time Training ( TTT ) by Sun et al . ( 2020 ) 3 . We show that TTT can be easily attacked using simple transfer attacks . While this is not surprising as authors of Sun et al . ( 2020 ) have cautioned that TTT is not designed for adversarial robustness , the situation is in sharp contrast to our results with DANN . The rest of the paper is organized as follows : Section 2 presents the setup . In Section 3 we define threat models . In Section 4 we present theoretical results about separation , and examine DANN as a candidate separation in the deep learning . Finally , Section 5 explores the maximin robustness of oblivious test-time adaptation , and concludes the paper with future directions . 2 PRELIMINARIES . Let F be a model , for a data point ( x , y ) ∈ X ×Y , a loss function ` ( F ; x , y ) give the loss of F on x given the true label y . Let V be a set of labeled data points . We use the notation L ( F , V ) = 1 |V | ∑ ( x , y ) ∈V ` ( F ; x , y ) to denote the empirical loss of F on V . For example , if we use binary loss ` 0,1 ( F ; x , y ) = 1 [ F ( x ) 6= y ] , this gives the test error of F on V . We use the notation V |X to denote the projection of V to its features , that is { ( xi , yi ) } mi=1 7→ { x1 , . . . , xm } . 
Threat model for classic adversarial robustness. To formulate the threat model for test-time adaptation, we first present a threat model for classic adversarial robustness. Although classic adversarial robustness can be written down succinctly as a minimax objective, namely

$$ \min_F \ \mathbb{E}_{(x,y) \sim (X,Y)} \left[ \max_{x' \in N(x)} \ell(F; x', y) \right] $$

(N(x) is a neighborhood function of x, determined by the attack type), a threat model formulation will help us develop more nuanced models.

Definition 1 (Threat model for classic adversarial robustness). Attacker and defender agree on a particular attack type. The attacker is an algorithm A, and the defender is a supervised learning algorithm T.
Before the game starts:
• A (labeled) training set D is sampled i.i.d. from (X, Y).
Training time:
• (Defender) Train a model F on D as F = T(D).
Test time:
• A (labeled) natural test set V is sampled i.i.d. from (X, Y).
• (Attacker) On input F, D, and V, A perturbs each point (x, y) ∈ V to (x′, y) (subject to the agreed attack type), giving Ṽ = A(F, D, V).
Evaluation:
• Evaluate the test loss of F on Ṽ, L(F, Ṽ).
The attacker's goal is to maximize the test loss, while the defender's goal is to minimize it. We stress that the i.i.d. sampling of V is important (it is also present in the expectation in the minimax objective): otherwise an attacker could pick any single point that fools F and repeat it arbitrarily many times (we refer readers to Goodfellow (2019) for more discussion along this line).

Notations for models and attacks. In this paper we mainly use PGD attacks (Projected Gradient Descent attacks) with norm-based perturbations (Madry et al. (2018)). For example, given a model F, we use the notation PGD(F, V) to denote PGD attacks against F on data V (the attack type is specified in the context). We adopt the following notations:
T: a target model trained on the labeled target data V.
AdvT: an adversarially trained target model using the labeled target data V.
S: a source model trained on the labeled source data D.
AdvS: an adversarially trained source model using the labeled source data D.
PGD(·, ·): PGD attacks on a model and data; for example, PGD(AdvT, V) means running PGD attacks on the model AdvT and data V.

Test-time defenses and BPDA. Various previous works have investigated test-time defenses where a pretrained model is fixed and a "preprocessing procedure" sanitizes an input before sending it to the model. Several such defenses were described and attacked in Athalye et al. (2018) via the BPDA technique (Backward Pass Differentiable Approximation). While syntactically one can fit these defenses into our framework, they form only very special cases of it, since they reuse a fixed pretrained model and focus on input sanitization.

Footnote 2: Another consideration, which is beyond the scope of this paper, is the computational feasibility of finding a good solution, given the hardness of minimax optimization (Katz et al. (2017); Daskalakis et al. (2020)).
Footnote 3: While TTT does not use the training data D at test time, it has a special self-training component, and the joint architecture is a Y-structure. A more domain-agnostic approach is discussed in Wang et al. (2020).
As we will show later in the paper, for both our provable separation and our deep learning results, the adaptation algorithms train new models (beyond sanitizing inputs), and theoretically attacking these adaptations becomes a bilevel optimization. In these cases, it is unclear how to apply BPDA, and it is indeed an intriguing direction to further study attacks on unsupervised domain adaptation algorithms such as DANN.
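For concreteness, the PGD(F, V) attack used in the notation above can be sketched as follows: an ℓ∞ variant in PyTorch with illustrative step size and budget, assuming inputs are scaled to [0, 1].

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """PGD(model, (x, y)): iterated gradient ascent on the loss under an
    l-infinity budget eps, projected back onto the eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project onto eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep valid pixel range
    return x_adv.detach()
```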
The paper explores adversarial robustness in the new setting of test-time adaptation. It shows that this new problem of "test-time-adapted adversarial robustness" is strictly weaker than "traditional adversarial robustness" when the training data is assumed to be available in the test-time-adapted setting. The gap between the two problems is demonstrated by the simple DANN solution, which has good "test-time-adapted adversarial robustness" but bad "traditional adversarial robustness". The paper also explores the subcase of "test-time-adapted adversarial robustness" where the training data is assumed unavailable and provides some initial results.
SP:2969ff98eb93abe37242a962df458541311090ff
Subspace Clustering via Robust Self-Supervised Convolutional Neural Network
1 INTRODUCTION . Subspace clustering approaches have achieved encouraging performance when compared with the clustering algorithms that rely on proximity measures between data points . The main idea behind the subspace model is that the data can be drawn from low-dimensional subspaces which are embedded in a high-dimensional ambient space ( Lodhi & Bajwa , 2018 ) . Grouping such data associated with respective subspaces is known as the subspace clustering ( Vidal , 2011 ) . That is , each low-dimensional subspace corresponds to a class or category . Up to now , two main approaches for recovering lowdimensional subspaces are developed : models that are based on the self-representation property , and non-linear generalization of subspace clustering called union of subspaces ( UoS ) ( Lodhi & Bajwa , 2018 ; Lu & Do , 2008 ; Wu & Bajwa , 2014 ; 2015 ) . UoS algorithms are out of the scope of this work . Self-representation subspace clustering is achieved in two steps : ( i ) learning representation matrix C from data X and building corresponding affinity matrix A = |C|+ |CT | ; ( ii ) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix that correspond with the leading k eigenvalues . This second step is known as spectral clustering ( Ng et al. , 2002 ; Von Luxburg , 2007 ) . Owning to the presumed subspace structure , the data points obey the self-expressiveness or self-representation property ( Elhamifar & Vidal , 2013 ; Peng et al. , 2016b ; Liu et al. , 2012 ; Li & Vidal , 2016 ; Favaro et al. , 2011 ) . In other words , each data point can be represented as a linear combination of other points in a dataset : X=XC . The self-representation approach is facing serious limitations regarding real-world datasets . One limitation relates to the linearity assumption because in a wide range of applications samples lie in nonlinear subspaces , e.g . face images acquired under non-uniform illumination and different poses ( Ji et al. , 2017 ) . Standard practice for handling data from nonlinear manifolds is to use the kernel trick on samples mapped implicitly into high dimensional space . Therein , samples better conform to linear subspaces ( Patel et al. , 2013 ; Patel & Vidal , 2014 ; Xiao et al. , 2015 ; Brbić & Kopriva , 2018 ) . However , identifying an appropriate kernel function for a given data set is quite a difficult task ( Zhang et al. , 2019b ) . The second limitation of existing deep SC methods relates to their assumption that the origin of data corruption is known , in which case the proper error model can be employed . In real-word applications origin of data corruption is unknown . That can severely harm the algorithm ’ s learning process if the non-robust loss function is used . Furthermore , validation ( i.e . stopping of the learning process ) in most of the deep SC methods often requires access to the ground-truth labels . That stands for violation of the basic principle of unsupervised machine learning and yields the overly-optimistic results . Dataset size is also a limitation when it comes to memory requirements . Since the self-representation subspace clustering is based on building the affinity matrix , memory complexity increases as the square of the dataset size . However , the latter limitation is not in the main focus of this work . 
Motivated by the exceptional ability of deep neural networks to capture complex underlying structures of data and learn discriminative features for clustering ( Hinton & Salakhutdinov , 2006 ; Dilokthanakul et al. , 2016 ; Ghasedi Dizaji et al. , 2017 ; Tian et al. , 2014 ; Xie et al. , 2016 ) , deep subspace clustering approaches emerged recently ( Ji et al. , 2017 ; Abavisani & Patel , 2018 ; Peng et al. , 2016a ; Yang et al. , 2019 ; Zhou et al. , 2018 ; Ji et al. , 2019b ; Peng et al. , 2018 ; 2017 ; Zhou et al. , 2019 ; Zhang et al. , 2019a ; Kheirandishfard et al. , 2020 ) . In particular , it is shown that convolutional neural networks ( CNNs ) , when applied to images of different classes , can learn features that lie in a UoS ( Lezama et al. , 2018 ) . Mostly , the base of the recently developed deep subspace-clustering networks is convolutional autoencoder . It is an end-to-end fully convolutional network that is based on the minimization of the reconstruction error . Together , the autoencoder and an additional self-expression ( SE ) module are forming a Deep subspace clustering network ( DSCNet ) ( Ji et al. , 2017 ) . Hence , the total loss function of DSCNet is composed of reconstruction loss and SE model loss . That is , during the learning process the clustering quality is not taken into account . Self-supervised convolutional SC network ( S2ConvSCN ) ( Zhang et al. , 2019a ) addressed this issue through the addition of a fully connected layer ( FC ) module and a spectral clustering module that , respectively , generate soft- and pseudo-labels . Dual self-supervision is achieved by forcing these two modules to converge towards consensus . Related accumulated loss , therefore , participates in enhancing the self-representation matrix and the quality of features extracted in the encoder layer . The architecture of S2ConvSCN has a possibility of direct classification once the learning process is completed . A trained encoder and the FC module can make a new network that can directly classify unseen data , also known as an out-of-sample problem . However , while this network can be validated and compared with other algorithms on a separate data set , such an ablation study was not completed . Furthermore , the main disadvantage of the DSCNet architecture , and indirectly S2ConvSCN , is that the network training is stopped when the accuracy is highest ( Ji et al. , 2019a ) . First , it is a direct violation of the unsupervised learning principle as the ground-truth labels are exposed . Second , the reported performance ( Zhang et al. , 2019a ; Ji et al. , 2017 ) is overly-optimistic and can not be compared to other algorithms . Also , as mentioned in ( Haeffele et al. , 2020 ) , most self-expressive based deep subspace clustering models suffer from the need of post-processing the self-representation matrix . Compared to the baseline model , we significantly reduced the post-processing while maintaining the noise-free matrix . Mentioned research problems led to three main contributions of proposed Robust S2ConvSCN : • robustness to errors of the unknown ( arbitrary ) origin is achieved by using the correntropy induced metric ( CIM ) in the self-expression loss , • the network is trained using the early-stopping method while monitoring only the accumulated loss , • thanks to correntropy based loss function the training process is less sensitive to data corruptions which enables the network to generalize better . 
This study also has three side contributions: • the performance of the models is estimated using unseen (out-of-sample) data; • block-diagonal regularization of the self-representation matrix is integrated into the gradient descent learning process; • post-processing of the self-representation matrix is reduced to a significant extent. A complete head-to-head comparison of the baseline S2ConvSCN model and our robust approach can be seen in Figure 1.

2 BACKGROUND AND RELATED WORK.

2.1 MAIN NOTATIONS AND DEFINITIONS. Throughout this paper, matrices are represented with bold capital symbols and vectors with bold lower-case symbols. X ∈ R^{d×N} represents a data matrix comprised of N data samples with dimensionality d. {H_i^{(l)}}_{i=1}^{m(l)} represent the feature maps produced at the output of layer l−1; thus, H^{(0)} = X and H^{(L)} = X̂, where X̂ represents the output of the decoder and L represents the number of convolutional layers in the autoencoder. {w_i^{(l)}}_{i=1}^{m(l)} stands for a set of filters with associated biases {b_i^{(l)}}_{i=1}^{m(l)} that form a convolutional layer l = 1, ..., L. z_n = [h_1^{(L/2)}(:) ... h_{m(L/2)}^{(L/2)}(:)]^T ∈ R^{d̂×1} stands for the feature vector comprised of vectorized and concatenated feature maps, with d̂ extracted features, in the top layer L/2 (the encoder output), representing the input sample x_n, n = 1, ..., N. C ∈ R^{N×N} stands for the representation matrix in the self-expressive model Z = ZC. A = |C| + |C^T| is the affinity matrix, and L = D^{−1/2} A D^{−1/2} is the corresponding graph Laplacian matrix, where D is the diagonal degree matrix such that D_ii = Σ_{j=1}^{N} A_ij. ‖X‖_F = √(Σ_{i,j=1}^{N} x_{ij}^2) is the Frobenius norm of the matrix X. ℓ_p(x) = ‖x‖_p = (Σ_{i=1}^{d} |x_i|^p)^{1/p}, 0 < p ≤ 1, is the ℓ_p norm of x. ℓ_0(x) = ‖x‖_0 = #{x_i ≠ 0, i = 1, ..., d}, where # denotes the cardinality function, is the ℓ_0 quasi-norm of x. The Schatten norms S_p, 0 < p ≤ 1, of a matrix X are defined as the corresponding ℓ_p norms of the vector of singular values of X, i.e., S_p(X) = ‖σ(X)‖_p, where σ(X) stands for the vector of singular values of X. Depending on the context, 0 represents a matrix/vector of all zeros and 1 represents a matrix/vector of all ones. Grouping the data according to the linear subspaces they are drawn from is known as subspace clustering (Vidal, 2011). The problem is formally defined as:

Definition 1. Let X = [X_1, ..., X_k] be a set of sample vectors drawn from a union of k subspaces in R^d, ∪_{i=1}^{k} {S_i}, of dimensions d_i ≪ min{d, N}, for i = 1, ..., k. Let X_i be a collection of N_i samples drawn from subspace S_i, with N = Σ_{i=1}^{k} N_i. The problem of subspace clustering is to segment the samples into the subspaces they are drawn from.

Throughout this paper, as in the majority of other papers, we assume that the number of clusters k is known a priori.

2.2 APPROACHES TO SUBSPACE CLUSTERING. In real-world scenarios, data are usually generated by processes that operate in different modes. Each mode models such data as lying on a subspace, while the whole process thus generates data lying on a union of subspaces (UoS) (Lodhi & Bajwa, 2018). The alternative to the UoS model is the self-representation based subspace model. It implies that every sample from the dataset can be represented as a linear combination of other samples from the same cluster.
While shallow models directly optimize such a self-representation matrix, their deep counterparts train the whole network to better extract features from the raw data and achieve representation linearity. Many approaches to deep subspace clustering are based on introducing the self-representation in the feature space (Abavisani & Patel, 2018; Ji et al., 2017; Peng et al., 2016a; Zhou et al., 2018; 2019; Zhang et al., 2019a; Kheirandishfard et al., 2020; Zhang et al., 2020). However, one weakness of self-expressive deep subspace clustering models is that their performance mainly depends on the self-representation matrix. Thus, elimination of the noise is done by post-processing (Haeffele et al., 2020). In many cases, from the final performance point of view, the post-processing matters more than the depth of the network. By virtue of the self-representation property, improvements of shallow subspace clustering methods are of direct relevance to their deep counterparts. The subspace clustering task is accomplished through (i) learning the representation matrix C from the data X, and (ii) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix L that correspond to the k leading eigenvalues. This second step is known as spectral clustering (Ng et al., 2002; Von Luxburg, 2007). Low-rank (Liu et al., 2012; Favaro et al., 2011) and sparse models (Elhamifar & Vidal, 2013) are among the algorithms commonly used to solve the SC problem. They aim to learn a low-rank and sparse representation matrix by solving the following optimization problem (Li & Vidal, 2016):

$$ \min_{C} \ \lambda \|C\|_{S_p}^{p} + \tau \|C\|_{p}^{p} \quad \text{s.t.} \quad Z = ZC, \ \operatorname{diag}(C) = 0 , \qquad (1) $$

where λ and τ are nonnegative regularization constants. If the number of layers L = 0, problem (1) corresponds to shallow subspace clustering. The constraint diag(C) = 0 is necessary to prevent sparseness-regularized optimization algorithms from converging to the trivial solution where each data point represents itself. This constraint is not necessary for problems constrained only by low rank. When data samples are contaminated with additive white Gaussian noise (AWGN), problem (1) becomes

$$ \min_{C} \ \|E\|_{F}^{2} + \lambda \|C\|_{S_p}^{p} + \tau \|C\|_{p}^{p} \quad \text{s.t.} \quad \operatorname{diag}(C) = 0 , \qquad (2) $$

where E stands for the modeling error (noise):

$$ E = Z - ZC . \qquad (3) $$

Alternatively, the square of the Frobenius norm of C is used for regularization (Lu et al., 2012):

$$ \min_{C} \ \|E\|_{F}^{2} + \lambda \|C\|_{F}^{2} . \qquad (4) $$

Objective (4) is also used in the self-expression module of S2ConvSCN in (Zhang et al., 2019a). As seen from (2) and (4), the MSE measure of the discrepancy between Z and its self-representation ZC is justified only for contamination by AWGN. For sample-specific corruptions (outliers), the proper norm is ‖E‖_{2,1}, while for large random corruptions the proper choice is ‖E‖_1 (Liu et al., 2012). However, errors in real-world data have different origins and magnitudes and may not follow a specific probabilistic model. Sometimes it is hard to know the true origin of the corruptions present in the data. Thus, to obtain a method robust to arbitrary corruptions, we propose to introduce the CIM of the error. The rationale behind introducing any regularization on C is to reflect its structural property of block-diagonality. Even though ‖C‖_{S_p} and ‖C‖_p, 0 ≤ p ≤ 1, in principle satisfy the enforced block-diagonality condition, their approximation of the block-diagonal (BD) structure of C is indirect (Lu et al., 2018).
Hence, for comparison, this study proposes a loss function with gradient-based BD regularization of the representation matrix $\mathbf{C}$.
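For reference, objective (4) admits a closed-form solution: setting the gradient with respect to $\mathbf{C}$ to zero gives $(\mathbf{Z}^\top\mathbf{Z} + \lambda\mathbf{I})\mathbf{C} = \mathbf{Z}^\top\mathbf{Z}$. A minimal sketch (illustrative only; note that the $\mathrm{diag}(\mathbf{C}) = \mathbf{0}$ constraint of (1)-(2) is not present in (4) and is not enforced here):

```python
import numpy as np

def frobenius_self_expression(Z, lam):
    """Closed-form minimizer of ||Z - ZC||_F^2 + lam * ||C||_F^2 (objective (4)).

    Z : (d_hat, N) matrix of encoder features, one column per sample.
    """
    N = Z.shape[1]
    G = Z.T @ Z                                   # Gram matrix Z^T Z, shape (N, N)
    C = np.linalg.solve(G + lam * np.eye(N), G)   # (Z^T Z + lam I)^{-1} Z^T Z
    return C
```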
This paper presents an approach to deep subspace clustering based on minimizing the correntropy-induced metric (CIM), with the goal of establishing when training should be stopped and generalizing to unseen data. The main contribution over the existing S2ConvSCN method is a change from a squared-error loss to CIM when optimizing over the affinity matrix. A key benefit of CIM as a loss is that it does not decrease arbitrarily with training epochs, so it provides a means of estimating when training should cease without needing ground-truth labels. The authors argue that CIM "ensures a smooth decrease of the loss function that enables the use of label-free stopping criterion." However, this claim is justified only through a minimal empirical evaluation. The authors also include a means of enforcing block-diagonal structure in the learned affinity matrix.
UNSUPERVISED ANOMALY DETECTION FROM SEMANTIC SIMILARITY SCORES
1 INTRODUCTION

Anomaly detection, or novelty detection, aims at identifying patterns in data that are significantly different from what is expected. This problem is inherently a binary classification problem that classifies examples either as in-distribution or out-of-distribution (OOD), given a sufficiently large sample from the in-distribution (the training set). A natural approach to OOD detection is to learn a density model from the training data and compute the likelihood ratio of OOD examples. However, in practice this approach frequently fails for high-dimensional data (Nalisnick et al., 2019), where it has been shown that deep generative models can assign higher likelihood to OOD examples than to in-distribution examples. This surprising result is likely a consequence of how existing deep generative models generalise. For example, Variational Autoencoders (Kingma & Welling, 2014) generalise by superposition of examples, which is a consequence of the stochastic nature of the posterior, which can map different examples to the same point in latent space. As superposition is an averaging process that reduces information content, it can be expected that examples of lower complexity than the training examples map to high-likelihood regions in latent space. Note that it is possible for a datapoint to have high likelihood under a distribution yet be nearly impossible to sample, a property known as the asymptotic equipartition property in information theory (Cover & Thomas, 2001). For autoregressive generative models, such as PixelCNN (van den Oord et al., 2016), it has been shown that the pixel-by-pixel generation process is strongly determined by the local surroundings of pixels (Chen et al., 2018), where the fact that nearby pixels of training examples frequently share the same color can explain why monochromatic images are assigned a high likelihood (Nalisnick et al., 2019). Local pixel correlations also seem to be responsible for the failure of generative models based on Normalising Flows to assign correct likelihood values to OOD examples (Schirrmeister et al., 2020).

As a consequence, most current OOD detection approaches make use of a score function s(x) to classify test examples as in-distribution or OOD. If the examples of the training set are labelled, a simple score is given by s(x) = max_y p(y|x), with p(y|x) the softmax probability for predicting class labels y ∈ {1, ..., K} (Hendrycks & Gimpel, 2017). If s(x) is below a threshold, the test example is classified as OOD. Labelled data make it possible to learn representations that are associated with the semantic information shared by the examples in the training set, which can be used for OOD detection. However, the approach suffers from the problem that the scores for in-distribution examples can be widely distributed across the interval of possible score values, s(x) ∈ [1/K, 1], especially if the number of labels is small and the classification task is hard, which strongly increases the false-positive rate. Consequently, better performance was found for approaches that use labeled data to learn a higher-dimensional representation that encodes semantic information (Lee et al., 2018b). In this representation space the in-distribution occupies just a small volume, and a random feature vector would most likely be classified as OOD.
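As a minimal illustration of the label-based baseline above (a sketch, not the present paper's method; the softmax probabilities are assumed to come from a pre-trained in-distribution classifier):

```python
import numpy as np

def msp_score(probs):
    """Maximum softmax probability score s(x) = max_y p(y|x) (Hendrycks & Gimpel, 2017).

    probs : (n_examples, K) array of softmax class probabilities.
    Returns one score per example, taking values in [1/K, 1].
    """
    return probs.max(axis=1)

def flag_ood(probs, threshold=0.5):
    # examples whose score falls below the threshold are classified as OOD
    return msp_score(probs) < threshold
```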
Another simplification arises if the OOD detection problem is supervised, with some OOD examples labelled as such and contributing to the training set. In this case the OOD detection problem reduces to an unbalanced classification problem (Chalapathy & Chawla, 2019).

In general, OOD detection benefits from separating the factors of variation of the in-distribution into either relevant (e.g., object identity) or irrelevant (e.g., compression artefacts) using prior knowledge, where the relevant factors are typically those that carry salient semantic information. In line with the arguments put forward by Ahmed & Courville (2020), this separation helps an OOD model to generalise systematically, e.g., whether we are allowed to re-colour or add noise to images for data augmentation. Generalisation over the training set is necessary, as learning under insufficient inductive bias would result in misclassification of examples from an in-distribution test set as OOD. Labeled data provide this additional information, as relevant factors can be defined as those that help the classification task, with the limitation that there might be more factors involved in characterising the in-distribution than those needed to predict the labels.

In this work, we introduce a general framework for OOD detection problems that does not require label information. Our framework can be widely applied to OOD detection tasks, including visual, audio, and textual data, with the only limitation that transformations must be known a priori that conserve the semantics of training examples, such as geometric transformations for images, proximity of time intervals for audio recordings (van den Oord et al., 2018), or randomly masking a small fraction of words in a sentence or paragraph (Devlin et al., 2019). For visual data we show new state-of-the-art OOD classification accuracies on standard benchmark data sets, surpassing even the accuracies of methods that include labels as additional information. The key contributions of this work are:

• We propose a new OOD detection framework that is applicable in the absence of labeled in-distribution data or OOD examples that are labeled as such.
• We show that our approach strongly improves OOD detection for challenging tasks in the visual domain.
• We find that identifying semantically close examples in the training set is central for reliable OOD detection.

2 RELATED WORK

Unsupervised Methods using in-distribution labels. Many OOD detection methods make use of labels to generate scores that are either based on class-prediction probabilities or on intermediate representations for an in-distribution classification task. For example, Hendrycks & Gimpel (2017) used the maximum of the softmax probabilities (MSP) to discriminate between OOD and in-distribution. More recent approaches (Lee et al., 2018a; Winkens et al., 2020; Zhang et al., 2020) use labels to learn an intermediate representation on which a density distribution (e.g., a multivariate normal distribution or a deep generative network) can be fitted, which can then be used to compute the likelihood of OOD examples. As labels implicitly provide information about the semantic relation of examples in the training set, approaches using label information typically show higher accuracy than unsupervised methods. These approaches can be improved by introducing additional parameters or training strategies.
For example, MSP was improved by introducing a temperature parameter (Liang et al., 2018), alternative losses (Lee et al., 2018a; Vyas et al., 2018), auxiliary objectives (Devries & Taylor, 2018; Hendrycks et al., 2019b; Mohseni et al., 2020), or outlier exposure (Hendrycks et al., 2019a). Intermediate representations were improved using a multi-head network architecture (Shalev et al., 2018), contrastive learning (Winkens et al., 2020), or metric learning (Masana et al., 2018).

General Unsupervised Methods. If label information is absent, other means must be found to impose an inductive bias on the OOD detection model so that it generalises over the training set. Existing approaches can be separated into methods that learn generalisable features based on (i) self-supervised learning tasks (Golan & El-Yaniv, 2018), (ii) transformations that destroy semantics (Choi & Chung, 2020), (iii) matched encoder-decoder architectures (Xiao et al., 2020), or (iv) a semantically related auxiliary outlier distribution (Schirrmeister et al., 2020). The work most closely related to ours is Geometric-Transformation Classification (GEOM), proposed by Golan & El-Yaniv (2018) and improved by Bergman & Hoshen (2020), which belongs to the class of self-supervised learning approaches (Hendrycks et al., 2019b). The central idea of GEOM is to construct an auxiliary in-distribution classification task by transforming each image of the training set by one of 72 different combinations of geometric transformations of fixed strength, such as rotation, reflection, and translation. The task is to predict which of the 72 transformations has been applied, given a transformed image. GEOM assigns a high OOD score to examples that show high prediction uncertainty. The relevant features learned by this task are salient geometric features, such as the typical orientation of an object. Our approach differs from GEOM in that we define the relevant features as those that are invariant under geometric and other transformations, such as cropping and color jitter, which are chosen of moderate strength so as not to change the semantics of the images in the training set.

3 METHOD

An intuitive approach to OOD detection is to learn a representation that densely maps the in-distribution to a small region within a lower-dimensional space (latent space), with the consequence that OOD examples will be found outside this region with high probability. The representation should include the salient semantic information of the training set, to ensure that test examples from the in-distribution are not misclassified as OOD, but disregard irrelevant factors of variation that would prevent a dense mapping. As learning this mapping with Autoencoders is difficult, we split the OOD detection task into finding a semantically dense mapping of the in-distribution onto a d-dimensional unit hypersphere by contrastive learning, followed by classifying neighbouring examples on the unit hypersphere as semantically close or distant.

3.1 LEARNING SEMANTIC SIMILARITY

A contrastive objective can be used to align feature vectors h(x) ∈ R^d that are semantically similar and, at the same time, distribute the examples of the training set almost uniformly over the unit hypersphere (Wang & Isola, 2020; Chen et al., 2020). This representation makes it possible to identify, for any test example, the semantically close examples from the training set.
The mapping $h(x) = f(x)/\|f(x)\|$ can be learned by training a deep neural network $f(x)$ to minimise the contrastive loss

$$\mathcal{L}[h] = -\mathbb{E}_{(x,x')\sim T_h(x,x')}\left[\log\frac{e^{h(x)^\top h(x')/\tau}}{\mathbb{E}_{x_{\mathrm{neg}}\sim T_h(x)}\left[e^{h(x)^\top h(x_{\mathrm{neg}})/\tau}\right]}\right], \quad (1)$$

where $\tau$ denotes a temperature parameter. Here, each positive pair $(x, x')$ is the result of sampling from a distribution of transformations $T_h(x, x')$ that conserve semantics between $x$ and $x'$, with $T_h(x')$ the marginal of $T_h(x, x')$. For datasets used to benchmark object recognition tasks, samples $(x, x') \sim T_h(x, x')$ can be generated by picking a single example from the training set and independently applying random transformations, such as geometric transformations, colour distortions, or cropping (Appendix D). The negative pairs can be generated by applying random transformations to different training examples. We emphasise that the types of transformations and their strengths essentially define the semantics we want to encode, and thus determine whether, for example, the image of a black swan is classified as OOD for an in-distribution that contains only white swans. The design of transformations that capture the underlying semantics of the training dataset requires either higher-level understanding of the data or extensive sampling of different combinations of transformations with evaluation on an in-distribution validation set.
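A minimal Monte-Carlo sketch of Eq. (1) over a batch of positive pairs (our illustration; the embeddings are assumed already L2-normalised, i.e. $h = f/\|f\|$, and the expectation over negatives in the denominator is approximated by the batch mean, which also includes the positive):

```python
import numpy as np

def contrastive_loss(h1, h2, tau=0.1):
    """Batch estimate of the contrastive loss in Eq. (1).

    h1, h2 : (B, d) arrays of L2-normalised embeddings h(x), h(x') of B positive
    pairs; for each anchor in h1, the B second views in h2 serve as candidates
    (one positive, B-1 negatives).
    """
    sims = h1 @ h2.T / tau                        # pairwise similarities h(x)^T h(.)/tau
    pos = np.diag(sims)                           # positive-pair logits
    log_denom = np.log(np.mean(np.exp(sims), axis=1))  # ~ log E_neg[exp(.)]
    return float(np.mean(log_denom - pos))        # average negative log-ratio
```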
The authors present a new algorithm for performing unsupervised anomaly detection in diverse applications such as visual, audio, and text data. They propose a two-step method in which they first utilise contrastive learning to find a semantically dense map of the data onto the unit hypersphere. Then, they classify neighbouring pairs of test examples as in- or out-of-distribution based on the amount of shared semantic information. Finally, they show that on several anomaly detection problems in the field of visual data their proposed method outperforms several existing methods.
Learning to Generate 3D Shapes with Generative Cellular Automata
1 INTRODUCTION

Probabilistic 3D shape generation aims to learn and sample from the distribution of diverse 3D shapes, and has applications including 3D content generation and robot interaction. Specifically, learning the distribution of shapes or scenes can automate the process of generating diverse and realistic virtual environments or new object designs. Likewise, modeling the conditional distribution of a whole scene given partial raw 3D scans can help the decision process of a robot by informing it of the various possible contents of occluded space. The distribution of plausible shapes in 3D space is diverse and complex, and we seek a scalable formulation of the shape generation process. Pioneering works on 3D shape generation try to regress the entire shape (Dai et al. (2017)), which often fails to recover fine details. We propose a more modular approach that progressively generates a shape by a sequence of local updates.

Our work takes inspiration from prior work on autoregressive models in the image domain, such as the variants of PixelCNN (van den Oord & Kalchbrenner (2016); van den Oord et al. (2016; 2017)), which have been successful in image generation. The key idea of PixelCNN (van den Oord et al. (2016)) is to order the pixels, and then learn the conditional distribution of the next pixel given all of the previous pixels. Generating an image thus becomes the task of sampling pixel-by-pixel in the predefined order. Recently, PointGrow (Sun et al. (2020)) proposed a similar approach in the field of 3D generation, replacing the RGB values of pixels with the coordinates of points and sampling point-by-point in a sequential manner. While that work proposes a promising, interpretable generation process that sequentially grows a shape, the required number of sampling steps expands linearly with the number of points, making the model hard to scale to high-resolution data.

We believe that a more scalable solution in 3D is to employ the local update rules of cellular automata (CA). A CA, a mathematical model operating on a grid, defines a state to be a collection of cells that carry values in the grid (Wolfram (1982)). The CA repeatedly mutates its state based on predefined homogeneous update rules determined only by the spatial neighborhood of the current cell. In contrast to conventional CA, where the rules are predefined, we employ a neural network to infer the stochastic sequential transition rules of individual cells based on a Markov chain. The obtained homogeneous local rule for individual cells constitutes the 3D generative model, named Generative Cellular Automata (GCA). When the rule is distributed over the group of occupied cells of an arbitrary starting shape, the sequence of local transitions eventually evolves into an instance among the diverse shapes of the multi-modal distribution. The local update rules of CA greatly reduce the search space of voxel occupancy, exploiting the sparsity and connectivity of 3D shapes. We suggest a simple, progressive training procedure to learn the distribution of local transitions whose repeated application generates shapes from the data distribution. We represent a shape in terms of surface points stored within a 3D grid, and the transition rule is trained only on the occupied cells by employing a sparse CNN (Graham et al. (2018)).
The sparse representation can capture high-resolution context information, and yet learn an effective rule enjoying the expressive power of deep CNNs, as demonstrated in various computer vision tasks (Krizhevsky et al. (2012); He et al. (2017)). Inspired by Bordes et al. (2017), our model learns sequences that are slightly different from the sampling chain but converge to the full shapes in the training data. The network successfully learns the update rules of CA, such that a single inference samples from the distribution of diverse modes along the surface. The contributions of the paper are highlighted as follows: (1) we propose Generative Cellular Automata (GCA), a Markov chain based 3D generative model that iteratively mends the shape toward a learned distribution, generating diverse and high-fidelity shapes; (2) our work is the first to learn the local update rules of cellular automata for 3D shape generation in a voxel representation, which enables the use of an expressive sparse CNN and reduces the search space of voxel occupancy by fully exploiting the sparsity and connectivity of 3D shapes; (3) extensive experiments show that our method has competitive performance against state-of-the-art models in probabilistic shape completion and shape generation.

2 3D SHAPE GENERATION WITH GENERATIVE CELLULAR AUTOMATA

Let $\mathbb{Z}^n$ be an $n$-dimensional uniform grid space, where $n = 3$ for a 3D voxel space. A 3D shape is represented as a state $s \subset \mathbb{Z}^3$, which is an ordered set of occupied cells $c \in \mathbb{Z}^3$ in a binary occupancy grid based on the location of the surface. Note that our voxel representation is different from the conventional occupancy grid, where 1 represents that the cell is inside the surface. Instead, we only store the cells lying on the surface. This representation can better exploit the sparsity of a 3D shape than the full occupancy grid. The shape generation process is presented as a sequence of state variables $s^{0:T}$ drawn from the following Markov chain:

$$s^0 \sim p^0, \qquad s^{t+1} \sim p_\theta(s^{t+1} \mid s^t), \quad (1)$$

where $p^0$ is the initial distribution and $p_\theta$ is the homogeneous transition kernel parameterized by $\theta$. We denote the sampled sequence $s^0 \to s^1 \to \cdots \to s^T$ as a sampling chain. Given a data set $\mathcal{D}$ containing 3D shapes $x \in \mathcal{D}$, our objective is to learn the parameters $\theta$ of the transition kernel $p_\theta$ such that the marginal distribution of the final generated sample, $p(s^T) = \sum_{s^{0:T-1}} p^0(s^0) \prod_{0 \le t < T} p_\theta(s^{t+1} \mid s^t)$, is close to the data distribution. The initial state $s^0$ can be defined differently depending on the task to solve. For the task of probabilistic shape completion, $s^0$ is given as the partial input shape. For shape generation, we set the initial state $s^0$ to be the simplest state we can think of: a single cell $\{c\}$. Figure 1 presents examples of sampling chains for shape generation, where the starting shape $s^0$ is merely a single cell.

The GCA further splits the transition kernel $p_\theta(s^{t+1} \mid s^t)$ into a combination of local update rules on the individual occupied cells $c_i \in s^t$, as depicted in Figure 2. The cellular transition is implemented with a sparse convolution, which is translation invariant if implemented with a fully convolutional network, and outputs the distribution of local occupied cells. The individual predictions are then aggregated by cell-wise averaging, resulting in a binary occupancy probability for each cell that follows a Bernoulli distribution:

$$p_\theta(s^{t+1} \mid s^t) = \prod_{c \in \mathbb{Z}^n} p_\theta(c \mid s^t). \quad (2)$$
The next state $s^{t+1}$ is sampled independently for individual cells from the obtained distribution, and the sampling chain continues to the next time step. For each transition $p_\theta(s^{t+1} \mid s^t)$, we limit the search space of the occupied cells by confining our predictions to the neighborhood of the occupied cells. The underlying assumption is that the occupied cells of a valid shape are connected, and a successful generation is possible by progressive growth into the immediate neighborhood of the given state. Specifically, the output of the sparse convolution is the occupancy probability of neighborhood cells, $p_i = p_\theta(N(c_i) \mid s^t)$, where the neighborhood cells are those that fall within a radius-$r$ ball centered at the cell, $N(c_i) = \{c' \in \mathbb{Z}^n \mid d(c_i, c') \le r\}$, given a distance metric $d$. Other cells are ignored, assuming they have occupancy probability 0. If the input state has $M$ occupied cells, $s^t = \{c_1, \dots, c_M\}$, the sparse convolution predicts the occupancy probabilities of the individual cells with $N$-dimensional vectors $P = \{p_1, \dots, p_M\}$, where $N$ is the number of neighborhood cells fixed by the distance threshold $r$ within the uniform grid $\mathbb{Z}^n$. After the cell-wise averaging step, the aggregated probability is nonzero only for coordinates in $N(s^t) = \bigcup_{c \in s^t} N(c)$. The cell-wise sampling in Eq. (2) is then performed only within $N(s^t)$, instead of the full grid $\mathbb{Z}^n$, leading to an efficient sampling procedure. The stochastic local transition rule $p_\theta(N(c_i) \mid s^t)$ changes the state of a cell's immediate neighborhood $N(c_i)$, but the inference is determined from a larger perception neighborhood. In contrast, a classical cellular automaton updates the state of a cell by a fixed rule given the observation of the cell's immediate neighborhood. The large perception neighborhood of GCA is effectively handled by a deep sparse convolutional network, and results in convergence to a single consistent global shape out of diverse possible output shapes, as further discussed in Appendix F.
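A minimal sketch of one sampling step (our illustration; `predict_neighborhood_probs` is a placeholder for the sparse CNN, states are plain Python sets of integer cell coordinates, the L-infinity metric stands in for $d$, and `rng` is e.g. `np.random.default_rng()`):

```python
import numpy as np
from itertools import product

R = 1  # neighborhood radius r (L-infinity metric assumed, so N(c) is a (2r+1)^3 cube)

def neighborhood(cell):
    """N(c): all grid cells within distance r of c."""
    x, y, z = cell
    return {(x+dx, y+dy, z+dz) for dx, dy, dz in product(range(-R, R+1), repeat=3)}

def aggregate_probs(state, predict_neighborhood_probs):
    """Cell-wise averaging of the per-cell predictions over N(s^t)."""
    sums, counts = {}, {}
    for c in state:                      # each occupied cell votes on its neighborhood
        for nb, p in predict_neighborhood_probs(c, state).items():
            sums[nb] = sums.get(nb, 0.0) + p
            counts[nb] = counts.get(nb, 0) + 1
    return {nb: sums[nb] / counts[nb] for nb in sums}

def gca_step(state, predict_neighborhood_probs, rng):
    """One transition s^t -> s^{t+1}: independent Bernoulli sampling, Eq. (2)."""
    probs = aggregate_probs(state, predict_neighborhood_probs)
    return {nb for nb, p in probs.items() if rng.random() < p}
```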
3 TRAINING GENERATIVE CELLULAR AUTOMATA

We aim to learn the parameters of the local transition probability $p_\theta(N(c_i) \mid s^t)$ whose repetitive application generates shapes that follow the complex distribution of the data set. If we had sequences of sampling chains, then all current and next states could serve as training data. Because we only have the set of complete shapes $\mathcal{D}$, we emulate the sequence of the sampling chain and obtain the intermediate states. The emulated sequence of a sampling chain may start from an arbitrary state, and needs to converge to the full shape $x \in \mathcal{D}$ after local transitions to the neighborhood of the previous state. A naive method would be to sample the next state $s^t$ from the sampling chain and maximize $p_\theta(x \cap N(s^t) \mid s^t)$, the probability of the shape that is closest to $x$ among the forms reachable from the current state¹, similar to scheduled sampling (Bengio et al. (2015)). While this approach clearly emulates the inference procedure, it leads to learning a biased estimator, as pointed out in Huszár (2015). More importantly, the emulated sequence cannot consistently learn the full shape $x$, as it is not guaranteed to visit a state $s$ such that $x \subset N(s)$.

We instead employ the infusion training procedure suggested by Bordes et al. (2017). Specifically, the infusion chain, denoted $\tilde{s}^0 \to \tilde{s}^1 \to \cdots \to \tilde{s}^T$, is obtained as follows:

$$\tilde{s}^0 \sim q^0(\tilde{s}^0 \mid x), \qquad q^t(\tilde{s}^{t+1} \mid \tilde{s}^t, x) = \prod_{\tilde{c} \in N(\tilde{s}^t)} (1 - \alpha_t)\, p_\theta(\tilde{c} \mid \tilde{s}^t) + \alpha_t\, \delta_x(\tilde{c}), \quad (3)$$

where $q^0$ is the initial distribution of the infusion chain and $q^t$ is the infusion transition kernel at time step $t$. For probabilistic shape completion, $\tilde{s}^0$ is sampled as a subset of $x$, while for shape generation $\tilde{s}^0$ is a single cell $c \in x$. The transition kernel $q^t$ is defined for cells in the neighborhood $\tilde{c} \in N(\tilde{s}^t)$ as a mixture of $p_\theta(\tilde{c} \mid \tilde{s}^t)$ and $\delta_x(\tilde{c})$, which are the transition kernel of the sampling chain and the infusion of the ground-truth shape $x$, respectively. The term $\delta_x(\tilde{c})$ is crucial to guarantee that the sequence ultimately reaches the ground-truth shape; it is the Bernoulli distribution with probability 1 if $\tilde{c} \in x$, and 0 otherwise. We set the infusion rate to increase linearly with the time step, i.e., $\alpha_t = \min(wt, 1)$, where $w > 0$ is the speed of the infusion rate, as in Bordes et al. (2017).

We can prove that the training procedure converges to the shape $x$ in the training data if it is composed of weakly connected cells. We first define the connectivity of two states.

Definition 1. We call a state $\tilde{s}'$ partially connected to state $x$ if, for any cell $b \in x$, there is a cell $c_0 \in \tilde{s}' \cap x$ and a finite sequence of coordinates $c_{0:T_b}$ in $x$ that starts with $c_0$ and ends with $c_{T_b} = b$, where each subsequent element is closer than the given threshold distance $r$; i.e., for any $b \in x$, there exists $c_{0:T_b}$ such that $c_i \in x$, $d(c_i, c_{i+1}) \le r$ for $0 \le i < T_b$, with $c_0 \in \tilde{s}'$ and $c_{T_b} = b$.

The shape $x$ is partially connected to any $\tilde{s}' \ne \emptyset$ if we can create a sequence of coordinates between any pair of cells in $x$ composed of local transitions bounded by the distance $r$. This is a very weak connectivity condition, and any set that overlaps with $x$ is partially connected to $x$ for shapes with continuous surfaces, which include the shapes in conventional datasets. Now, assuming that the state $\tilde{s}^{t'}$ is partially connected to the state $x$, we recursively create a sequence by defining $\tilde{s}^{t+1} = N(\tilde{s}^t) \cap x$, which is the next element of the infusion chain with infusion rate 1. Since we use a linear scheduler for the infusion rate, we can assume that there exists a state $\tilde{s}^{t'}$ such that the infusion rate $\alpha_{t'} = 1$. The following proposition proves that the sequence $(\tilde{s}^t)_{t \ge t'}$ converges to $x$ by local transitions.

Proposition 1. Let state $\tilde{s}^{t'}$ be partially connected to state $x$, where $x$ has a finite number of occupied cells. Denote by $\tilde{s}^{t':\infty}$ the sequence of states recursively defined as $\tilde{s}^{t+1} = N(\tilde{s}^t) \cap x$. Then there exists an integer $T'$ such that $\tilde{s}^t = x$ for all $t \ge T'$.

Proof. The proof is found in Appendix A.

The proposition states that the samples of the infusion chain eventually converge to the shape $x$, and thus we can compute a nonzero likelihood $p(x \mid \tilde{s}^T)$ during training if $T$ is large enough. One can also interpret the training procedure as learning a sequence of states that converges to $x$ and is close to the sampling chain. We empirically observe that most of the training samples converge to the shape $x$ before the infusion rate becomes 1.

¹Since we defined a state as a set of occupied cells, the union (∪), intersection (∩), and subset (⊂) operations are defined as for regular sets.

[Figure 3: Qualitative comparison of probabilistic shape completion. Figure 4: Samples from shape generation.]

The training procedure can be summarized as follows:
1. Sample an infusion chain $\tilde{s}^{0:T}$ from $\tilde{s}^0 \sim q^0(\tilde{s}^0 \mid x)$, $\tilde{s}^{t+1} \sim q^t(\tilde{s}^{t+1} \mid \tilde{s}^t, x)$.
2. For each state $\tilde{s}^t$, maximize the log-likelihood of the form closest to $x$ via the gradient step $\theta \leftarrow \theta + \eta\, \partial \log p_\theta(x \cap N(\tilde{s}^t) \mid \tilde{s}^t) / \partial\theta$.

The full training algorithm utilizes a data buffer to de-correlate the gradients of subsequent time steps, and accelerates the process by controlling the time steps based on the current state. More details regarding the full training algorithm can be found in Appendix B.
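A minimal sketch of step 1 under Eq. (3) for the shape-generation task (our illustration; it reuses the hypothetical `neighborhood` and `aggregate_probs` helpers from the earlier sketch, with `predict_neighborhood_probs` again standing in for the sparse CNN):

```python
def infusion_chain(x, T, w, predict_neighborhood_probs, rng):
    """Emulate one infusion chain toward ground-truth shape x (a set of cells)
    and collect (state, target) pairs; targets are x ∩ N(s~^t) as in step 2."""
    state = {next(iter(x))}                  # s~^0: a single cell of x
    pairs = []
    for t in range(T):
        alpha = min(w * t, 1.0)              # linearly increasing infusion rate
        probs = aggregate_probs(state, predict_neighborhood_probs)
        nxt = set()
        for cell, p in probs.items():        # mixture (1-a) p_theta + a delta_x, Eq. (3)
            p_mix = (1.0 - alpha) * p + alpha * (1.0 if cell in x else 0.0)
            if rng.random() < p_mix:
                nxt.add(cell)
        target = x & {nb for c in state for nb in neighborhood(c)}  # x ∩ N(s~^t)
        pairs.append((state, target))        # step 2 maximizes log p_theta(target|state)
        state = nxt
    return pairs
```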
The paper proposes a generative method for 3D objects (voxel representation). Given an initial voxel configuration (e.g., a partial shape, or even a single voxel), the method learns a local transition kernel for a Markov chain that decides how to evolve the configuration; sampling iteratively from these probabilities leads to a final model. The paper shows results on shape completion and generation, obtaining fairly good results.
Decentralized Deterministic Multi-Agent Reinforcement Learning
[Zhang, ICML 2018] provided the first decentralized actor-critic algorithm for multi-agent reinforcement learning (MARL) that offers convergence guarantees. In that work, policies are stochastic and are defined on finite action spaces. We extend those results to offer a provably-convergent decentralized actor-critic algorithm for learning deterministic policies on continuous action spaces. Deterministic policies are important in real-world settings. To handle the lack of exploration inherent in deterministic policies, we consider both off-policy and on-policy settings. We provide the expression of a local deterministic policy gradient, decentralized deterministic actor-critic algorithms, and convergence guarantees for linearly-approximated value functions. This work will help enable decentralized MARL in high-dimensional action spaces and pave the way for more widespread use of MARL.

1 Introduction

Cooperative multi-agent reinforcement learning (MARL) has seen considerably less use than its single-agent analog, in part because often no central agent exists to coordinate the cooperative agents. As a result, decentralized architectures have been advocated for MARL. Recently, decentralized architectures have been shown to admit convergence guarantees comparable to their centralized counterparts under mild network-specific assumptions (see Zhang et al. [2018], Suttle et al. [2019]).

In this work, we develop a decentralized actor-critic algorithm with deterministic policies for multi-agent reinforcement learning. Specifically, we extend results for actor-critic with stochastic policies (Bhatnagar et al. [2009], Degris et al. [2012], Maei [2018], Suttle et al. [2019]) to handle deterministic policies. Indeed, theoretical and empirical work has shown that deterministic algorithms outperform their stochastic counterparts in high-dimensional continuous action settings (Silver et al. [January 2014b], Lillicrap et al. [2015], Fujimoto et al. [2018]). Deterministic policies further avoid estimating the complex integral over the action space. Empirically this allows for lower variance of the critic estimates and faster convergence. On the other hand, deterministic policy gradient methods suffer from reduced exploration. For this reason, we provide both off-policy and on-policy versions of our results, the off-policy version allowing for significant improvements in exploration. The contributions of this paper are three-fold: (1) we derive the expression of the gradient in terms of the long-term average reward, which is needed in the undiscounted multi-agent setting with deterministic policies; (2) we show that the deterministic policy gradient is the limiting case, as policy variance tends to zero, of the stochastic policy gradient; and (3) we provide a decentralized deterministic multi-agent actor-critic algorithm and prove its convergence under linear function approximation.

2 Background

Consider a system of $N$ agents denoted by $\mathcal{N} = [N]$ in a decentralized setting. Agents determine their decisions independently based on observations of their own rewards.
Agents may however communicate via a possibly time-varying communication network, characterized by an undirected graph $G_t = (\mathcal{N}, E_t)$, where $E_t$ is the set of communication links connecting the agents at time $t \in \mathbb{N}$. The networked multi-agent MDP is thus characterized by a tuple $(\mathcal{S}, \{\mathcal{A}^i\}_{i\in\mathcal{N}}, P, \{R^i\}_{i\in\mathcal{N}}, \{G_t\}_{t\ge0})$, where $\mathcal{S}$ is a finite global state space shared by all agents in $\mathcal{N}$, $\mathcal{A}^i$ is the action space of agent $i$, and $\{G_t\}_{t\ge0}$ is a time-varying communication network. In addition, let $\mathcal{A} = \prod_{i\in\mathcal{N}} \mathcal{A}^i$ denote the joint action space of all agents. Then $P: \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ is the state transition probability of the MDP, and $R^i: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is the local reward function of agent $i$. States and actions are assumed globally observable, whereas rewards are only locally observable. At time $t$, each agent $i$ chooses its action $a^i_t \in \mathcal{A}^i$ given state $s_t \in \mathcal{S}$, according to a local parameterized policy $\pi^i_{\theta^i}: \mathcal{S}\times\mathcal{A}^i\to[0,1]$, where $\pi^i_{\theta^i}(s, a^i)$ is the probability of agent $i$ choosing action $a^i$ at state $s$, and $\theta^i \in \Theta^i \subseteq \mathbb{R}^{m_i}$ is the policy parameter. We pack the parameters together as $\theta = [(\theta^1)^\top, \dots, (\theta^N)^\top]^\top \in \Theta$, where $\Theta = \prod_{i\in\mathcal{N}} \Theta^i$. We denote the joint policy by $\pi_\theta: \mathcal{S}\times\mathcal{A}\to[0,1]$, where $\pi_\theta(s,a) = \prod_{i\in\mathcal{N}} \pi^i_{\theta^i}(s, a^i)$. Note that decisions are decentralized in that rewards are observed locally, policies are evaluated locally, and actions are executed locally. We assume that for any $i\in\mathcal{N}$, $s\in\mathcal{S}$, $a^i\in\mathcal{A}^i$, the policy function satisfies $\pi^i_{\theta^i}(s, a^i) > 0$ for any $\theta^i\in\Theta^i$, and that $\pi^i_{\theta^i}(s, a^i)$ is continuously differentiable with respect to the parameters $\theta^i$ over $\Theta^i$. In addition, for any $\theta\in\Theta$, let $P^\theta: \mathcal{S}\times\mathcal{S}\to[0,1]$ denote the transition matrix of the Markov chain $\{s_t\}_{t\ge0}$ induced by policy $\pi_\theta$, that is, for any $s, s'\in\mathcal{S}$, $P^\theta(s'|s) = \sum_{a\in\mathcal{A}} \pi_\theta(s,a)\cdot P(s'|s,a)$. We make the standard assumption that the Markov chain $\{s_t\}_{t\ge0}$ is irreducible and aperiodic under any $\pi_\theta$ and denote its stationary distribution by $d_\theta$.

Our objective is to find a policy $\pi_\theta$ that maximizes the long-term average reward over the network. Let $r^i_{t+1}$ denote the reward received by agent $i$ as a result of taking action $a^i_t$. Then we wish to solve:

$$\max_\theta\ J(\pi_\theta) = \lim_{T\to\infty} \frac{1}{T}\,\mathbb{E}\left[\sum_{t=0}^{T-1} \frac{1}{N}\sum_{i\in\mathcal{N}} r^i_{t+1}\right] = \sum_{s\in\mathcal{S},\,a\in\mathcal{A}} d_\theta(s)\,\pi_\theta(s,a)\,\bar{R}(s,a),$$

where $\bar{R}(s,a) = (1/N)\cdot\sum_{i\in\mathcal{N}} R^i(s,a)$ is the globally averaged reward function. Let $\bar{r}_t = (1/N)\cdot\sum_{i\in\mathcal{N}} r^i_t$; then $\bar{R}(s,a) = \mathbb{E}[\bar{r}_{t+1} \mid s_t = s, a_t = a]$, and therefore the global relative action-value function is $Q_\theta(s,a) = \sum_{t\ge0}\mathbb{E}[\bar{r}_{t+1} - J(\theta) \mid s_0 = s, a_0 = a, \pi_\theta]$, and the global relative state-value function is $V_\theta(s) = \sum_{a\in\mathcal{A}} \pi_\theta(s,a)\,Q_\theta(s,a)$. For simplicity, we refer to $V_\theta$ and $Q_\theta$ as simply the state-value function and action-value function. We define the advantage function as $A_\theta(s,a) = Q_\theta(s,a) - V_\theta(s)$.

Zhang et al. [2018] provided the first provably convergent MARL algorithm in the context of the above model. The fundamental result underlying their algorithm is a local policy gradient theorem:

$$\nabla_{\theta^i} J(\pi_\theta) = \mathbb{E}_{s\sim d_\theta,\,a\sim\pi_\theta}\left[\nabla_{\theta^i}\log\pi^i_{\theta^i}(s, a^i)\cdot A^i_\theta(s,a)\right],$$

where $A^i_\theta(s,a) = Q_\theta(s,a) - \tilde{V}^i_\theta(s, a^{-i})$ is a local advantage function and $\tilde{V}^i_\theta(s, a^{-i}) = \sum_{a^i\in\mathcal{A}^i} \pi^i_{\theta^i}(s, a^i)\,Q_\theta(s, a^i, a^{-i})$.
This theorem has important practical value, as it shows that the policy gradient with respect to each local parameter $\theta^i$ can be obtained locally using the corresponding score function $\nabla_{\theta^i}\log\pi^i_{\theta^i}$, provided that agent $i$ has an unbiased estimate of the advantage function $A^i_\theta$ or $A_\theta$. With only local information, the advantage functions $A^i_\theta$ or $A_\theta$ cannot be well estimated, since the estimation requires the rewards $\{r^i_t\}_{i\in\mathcal{N}}$ of all agents. Therefore, they proposed a consensus-based actor-critic that leverages the communication network to share information between agents by placing a weight $c_t(i,j)$ on the message transmitted from agent $j$ to agent $i$ at time $t$. Their action-value function $Q_\theta$ was approximated by a parameterized function $\hat{Q}_\omega: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$, and each agent $i$ maintains its own parameter $\omega^i$, which it uses to form a local estimate $\hat{Q}_{\omega^i}$ of the global $Q_\theta$. At each time step $t$, each agent $i$ shares its local parameter $\omega^i_t$ with its neighbors on the network, and the shared parameters are used to arrive at a consensual estimate of $Q_\theta$ over time.

3 Local Gradients of Deterministic Policies

While the use of a stochastic policy facilitates the derivation of convergence proofs, most real-world control tasks require a deterministic policy to be implementable. In addition, the quantities estimated in the deterministic critic do not involve estimation of the complex integral over the action space found in the stochastic version. This offers lower variance of the critic estimates and faster convergence. To address the lack of exploration that comes with deterministic policies, we provide both off-policy and on-policy versions of our results. Our first requirement is a local deterministic policy gradient theorem.

We assume that $\mathcal{A}^i = \mathbb{R}^{n_i}$. We make standard regularity assumptions on our MDP. That is, we assume that for any $s, s'\in\mathcal{S}$, $P(s'|s,a)$ and $R^i(s,a)$ are bounded and have bounded first and second derivatives. We consider local deterministic policies $\mu^i_{\theta^i}: \mathcal{S}\to\mathcal{A}^i$ with parameter vector $\theta^i\in\Theta^i$, and denote the joint policy by $\mu_\theta: \mathcal{S}\to\mathcal{A}$, where $\mu_\theta(s) = (\mu^1_{\theta^1}(s), \dots, \mu^N_{\theta^N}(s))$ and $\theta = [(\theta^1)^\top, \dots, (\theta^N)^\top]^\top$. We assume that for any $s\in\mathcal{S}$, the deterministic policy function $\mu^i_{\theta^i}(s)$ is twice continuously differentiable with respect to the parameter $\theta^i$ over $\Theta^i$. Let $P^\theta$ denote the transition matrix of the Markov chain $\{s_t\}_{t\ge0}$ induced by policy $\mu_\theta$, that is, for any $s, s'\in\mathcal{S}$, $P^\theta(s'|s) = P(s'|s, \mu_\theta(s))$. We assume that the Markov chain $\{s_t\}_{t\ge0}$ is irreducible and aperiodic under any $\mu_\theta$ and denote its stationary distribution by $d_{\mu_\theta}$.

Our objective is to find a policy $\mu_\theta$ that maximizes the long-run average reward:

$$\max_\theta\ J(\mu_\theta) = \mathbb{E}_{s\sim d_{\mu_\theta}}[\bar{R}(s, \mu_\theta(s))] = \sum_{s\in\mathcal{S}} d_{\mu_\theta}(s)\,\bar{R}(s, \mu_\theta(s)).$$

Analogous to the stochastic policy case, we denote the action-value function by $Q_\theta(s,a) = \sum_{t\ge0}\mathbb{E}[\bar{r}_{t+1} - J(\mu_\theta) \mid s_0 = s, a_0 = a, \mu_\theta]$, and the state-value function by $V_\theta(s) = Q_\theta(s, \mu_\theta(s))$. When there is no ambiguity, we will denote $J(\mu_\theta)$ and $d_{\mu_\theta}$ by simply $J(\theta)$ and $d_\theta$, respectively.
We present three results for the long-run average reward: (1) an expression for the local deterministic policy gradient in the on-policy setting, $\nabla_{\theta^i} J(\mu_\theta)$; (2) an expression for the gradient in the off-policy setting; and (3) we show that the deterministic policy gradient can be seen as the limit of the stochastic one.

On-Policy Setting

Theorem 1 (Local Deterministic Policy Gradient Theorem - On-Policy). For any $\theta\in\Theta$, $i\in\mathcal{N}$, $\nabla_{\theta^i} J(\mu_\theta)$ exists and is given by

$$\nabla_{\theta^i} J(\mu_\theta) = \mathbb{E}_{s\sim d_{\mu_\theta}}\left[\nabla_{\theta^i}\mu^i_{\theta^i}(s)\,\nabla_{a^i} Q_\theta(s, \mu^{-i}_{\theta^{-i}}(s), a^i)\big|_{a^i=\mu^i_{\theta^i}(s)}\right].$$

The first step of the proof consists in showing that $\nabla_\theta J(\mu_\theta) = \mathbb{E}_{s\sim d_\theta}[\nabla_\theta\mu_\theta(s)\,\nabla_a Q_\theta(s,a)|_{a=\mu_\theta(s)}]$. This is an extension of the well-known stochastic case, for which we have $\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s\sim d_\theta}[\nabla_\theta\log(\pi_\theta(a|s))\,Q_\theta(s,a)]$, which holds for a long-term averaged return with a stochastic policy (e.g., Theorem 1 of Sutton et al. [2000a]). See the Appendix for the details.

Off-Policy Setting

In the off-policy setting, we are given a behavior policy $\pi: \mathcal{S}\to\mathcal{P}(\mathcal{A})$, and our goal is to maximize the long-run average reward under the state distribution $d_\pi$:

$$J_\pi(\mu_\theta) = \mathbb{E}_{s\sim d_\pi}[\bar{R}(s, \mu_\theta(s))] = \sum_{s\in\mathcal{S}} d_\pi(s)\,\bar{R}(s, \mu_\theta(s)). \quad (1)$$

Note that we consider here an excursion objective (Sutton et al. [2009], Silver et al. [January 2014a], Sutton et al. [2016]), since we take the average, over the state distribution of the behaviour policy $\pi$, of the state-action reward when selecting the action given by the target policy $\mu_\theta$. We thus have:

Theorem 2 (Local Deterministic Policy Gradient Theorem - Off-Policy). For any $\theta\in\Theta$, $i\in\mathcal{N}$, and $\pi: \mathcal{S}\to\mathcal{P}(\mathcal{A})$ a fixed stochastic policy, $\nabla_{\theta^i} J_\pi(\mu_\theta)$ exists and is given by

$$\nabla_{\theta^i} J_\pi(\mu_\theta) = \mathbb{E}_{s\sim d_\pi}\left[\nabla_{\theta^i}\mu^i_{\theta^i}(s)\,\nabla_{a^i}\bar{R}(s, \mu^{-i}_{\theta^{-i}}(s), a^i)\big|_{a^i=\mu^i_{\theta^i}(s)}\right].$$

Proof. Since $d_\pi$ is independent of $\theta$, we can take the gradient on both sides of (1):

$$\nabla_\theta J_\pi(\mu_\theta) = \mathbb{E}_{s\sim d_\pi}\left[\nabla_\theta\mu_\theta(s)\,\nabla_a\bar{R}(s,a)\big|_{a=\mu_\theta(s)}\right].$$

Given that $\nabla_{\theta^i}\mu^j_{\theta^j}(s) = 0$ if $i\ne j$, we have $\nabla_\theta\mu_\theta(s) = \mathrm{Diag}(\nabla_{\theta^1}\mu^1_{\theta^1}(s), \dots, \nabla_{\theta^N}\mu^N_{\theta^N}(s))$, and the result follows.

This result implies that, off-policy, each agent needs access to $\mu^{-i}_{\theta^{-i}_t}(s_t)$ at every step $t$.

Limit Theorem

As noted by Silver et al. [January 2014b], the fact that the deterministic gradient is a limit case of the stochastic gradient enables the standard machinery of policy gradients, such as compatible-function approximation (Sutton et al. [2000b]), natural gradients (Kakade [2001]), on-line feature adaptation (Prabuchandran et al. [2016]), and actor-critic (Konda [2002]), to be used with deterministic policies. We show that this also holds in our setting. The proof can be found in the Appendix.

Theorem 3 (Limit of the Stochastic Policy Gradient for MARL). Let $\pi_{\theta,\sigma}$ be a stochastic policy such that $\pi_{\theta,\sigma}(a|s) = \nu_\sigma(\mu_\theta(s), a)$, where $\sigma$ is a parameter controlling the variance and $\nu_\sigma$ satisfies Condition 1 in the Appendix. Then

$$\lim_{\sigma\downarrow0} \nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma}) = \nabla_\theta J_{\mu_\theta}(\mu_\theta),$$

where on the left-hand side the gradient is the standard stochastic policy gradient and on the right-hand side the gradient is the deterministic policy gradient.
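As an illustration of Theorem 3 (our own toy check, not from the paper): for a single-state bandit with reward $R(a) = -(a - a^*)^2$ and Gaussian policy $\pi_{\theta,\sigma} = \mathcal{N}(\theta, \sigma^2)$, the score-function (stochastic policy gradient) estimate matches the deterministic gradient $\nabla_\theta R(\mu_\theta) = -2(\theta - a^*)$; indeed, for this quadratic reward the two agree in expectation for every $\sigma$, so shrinking $\sigma$ only reduces estimator variance:

```python
import numpy as np

rng = np.random.default_rng(0)
a_star, theta = 4.0, 1.0
reward = lambda a: -(a - a_star) ** 2

det_grad = -2.0 * (theta - a_star)                 # deterministic policy gradient
for sigma in [1.0, 0.3, 0.1, 0.03]:
    a = rng.normal(theta, sigma, size=2_000_000)   # actions from the stochastic policy
    score = (a - theta) / sigma ** 2               # grad of log N(a; theta, sigma^2)
    sto_grad = np.mean(score * (reward(a) - reward(theta)))  # baseline R(theta)
    print(f"sigma={sigma:5.2f}  stochastic={sto_grad:6.3f}  deterministic={det_grad:6.3f}")
```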
4 Algorithms

We provide two decentralized deterministic actor-critic algorithms, one on-policy and the other off-policy, and demonstrate their convergence in the next section; assumptions and proofs are provided in the Appendix.

On-Policy Deterministic Actor-Critic

Consider the following on-policy algorithm. The actor step is based on an expression for $\nabla_{\theta^i} J(\mu_\theta)$ in terms of $\nabla_{a^i} Q_\theta$ (see Equation (15) in the Appendix). We approximate the action-value function $Q_\theta$ using a family of functions $\hat{Q}_\omega: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ parameterized by $\omega$, a column vector in $\mathbb{R}^K$. Each agent $i$ maintains its own parameter $\omega^i$ and uses $\hat{Q}_{\omega^i}$ as its local estimate of $Q_\theta$. The parameters $\omega^i$ are updated in the critic step using consensus updates through a weight matrix $C_t = (c^{ij}_t)_{ij} \in \mathbb{R}^{N\times N}$, where $c^{ij}_t$ is the weight on the message transmitted from $i$ to $j$ at time $t$, namely:

$$\hat{J}^i_{t+1} = (1-\beta_{\omega,t})\cdot\hat{J}^i_t + \beta_{\omega,t}\cdot r^i_{t+1}, \quad (2)$$
$$\tilde{\omega}^i_t = \omega^i_t + \beta_{\omega,t}\cdot\delta^i_t\cdot\nabla_\omega\hat{Q}_\omega(s_t, a_t)\big|_{\omega=\omega^i_t}, \quad (3)$$
$$\omega^i_{t+1} = \sum_{j\in\mathcal{N}} c^{ij}_t\cdot\tilde{\omega}^j_t, \quad (4)$$

with $\delta^i_t = r^i_{t+1} - \hat{J}^i_t + \hat{Q}_{\omega^i_t}(s_{t+1}, a_{t+1}) - \hat{Q}_{\omega^i_t}(s_t, a_t)$. For the actor step, each agent $i$ improves its policy via:

$$\theta^i_{t+1} = \theta^i_t + \beta_{\theta,t}\cdot\nabla_{\theta^i}\mu^i_{\theta^i_t}(s_t)\cdot\nabla_{a^i}\hat{Q}_{\omega^i_t}(s_t, a^{-i}_t, a^i)\big|_{a^i=a^i_t}. \quad (5)$$

Algorithm 1: Networked deterministic on-policy actor-critic
  Initialize: step $t=0$; parameters $\hat{J}^i_0, \omega^i_0, \tilde{\omega}^i_0, \theta^i_0$ for all $i\in\mathcal{N}$; state $s_0$; stepsizes $\{\beta_{\omega,t}\}_{t\ge0}, \{\beta_{\theta,t}\}_{t\ge0}$.
  Draw $a^i_0 = \mu^i_{\theta^i_0}(s_0)$ and compute $\tilde{a}^i_0 = \nabla_{\theta^i}\mu^i_{\theta^i_0}(s_0)$; observe the joint $a_0 = (a^1_0, \dots, a^N_0)$ and $\tilde{a}_0 = (\tilde{a}^1_0, \dots, \tilde{a}^N_0)$.
  repeat
    For each $i\in\mathcal{N}$: observe $s_{t+1}$ and reward $r^i_{t+1} = r^i(s_t, a_t)$; update $\hat{J}^i_{t+1}$ by (2); draw $a^i_{t+1} = \mu^i_{\theta^i_t}(s_{t+1})$ and compute $\tilde{a}^i_{t+1} = \nabla_{\theta^i}\mu^i_{\theta^i_t}(s_{t+1})$.
    Observe the joint $a_{t+1}$ and $\tilde{a}_{t+1}$.
    For each $i\in\mathcal{N}$: compute $\delta^i_t$; critic step (3); actor step (5); send $\tilde{\omega}^i_t$ to the neighbors $\{j\in\mathcal{N}: (i,j)\in E_t\}$ over $G_t$; consensus step (4).
    Update $t \leftarrow t+1$.
  until end

Since Algorithm 1 is an on-policy algorithm, each agent updates the critic using only $(s_t, a_t, s_{t+1})$ at time $t$, knowing that $a_{t+1} = \mu_{\theta_t}(s_{t+1})$. The quantities $\tilde{a}_t$ in Algorithm 1 are additional terms that need to be shared when using compatible features (this is explained further in the next section).
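A minimal sketch of the critic updates (2)-(4) for a linear critic $\hat{Q}_\omega(s,a) = \phi(s,a)^\top\omega$ (our illustration, not the paper's code; the feature vectors and a row-stochastic weight matrix `Ct` are assumed given):

```python
import numpy as np

def critic_step(J, W, phi_t, phi_tp1, r, Ct, beta):
    """One consensus critic update for all N agents.

    J     : (N,) running average-reward estimates J^i_t
    W     : (N, K) critic parameters, one row omega^i per agent
    phi_t, phi_tp1 : (K,) features phi(s_t, a_t) and phi(s_{t+1}, a_{t+1})
    r     : (N,) local rewards r^i_{t+1};  Ct : (N, N) row-stochastic weights
    """
    q_t, q_tp1 = W @ phi_t, W @ phi_tp1           # local Q-hat estimates per agent
    delta = r - J + q_tp1 - q_t                   # local TD errors
    J_new = (1.0 - beta) * J + beta * r           # Eq. (2)
    W_tilde = W + beta * delta[:, None] * phi_t   # Eq. (3): grad of linear Q is phi
    W_new = Ct @ W_tilde                          # Eq. (4): consensus averaging
    return J_new, W_new
```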
Off-Policy Deterministic Actor-Critic

We further propose an off-policy actor-critic algorithm, defined in Algorithm 2, to enable better exploration. Here the goal is to maximize $J_\pi(\mu_\theta)$, where $\pi$ is the behavior policy. To do so, the globally averaged reward function $\bar{R}(s,a)$ is approximated using a family of functions $\hat{\bar{R}}_\lambda: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ parameterized by $\lambda$, a column vector in $\mathbb{R}^K$. Each agent $i$ maintains its own parameter $\lambda^i$ and uses $\hat{\bar{R}}_{\lambda^i}$ as its local estimate of $\bar{R}$. Based on (1), the actor update is

$$\theta^i_{t+1} = \theta^i_t + \beta_{\theta,t}\cdot\nabla_{\theta^i}\mu^i_{\theta^i_t}(s_t)\cdot\nabla_{a^i}\hat{\bar{R}}_{\lambda^i_t}(s_t, \mu^{-i}_{\theta^{-i}_t}(s_t), a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)}, \quad (6)$$

which requires each agent $i$ to have access to $\mu^j_{\theta^j_t}(s_t)$ for $j\in\mathcal{N}$. The critic update is

$$\tilde{\lambda}^i_t = \lambda^i_t + \beta_{\lambda,t}\cdot\delta^i_t\cdot\nabla_\lambda\hat{\bar{R}}_\lambda(s_t, a_t)\big|_{\lambda=\lambda^i_t}, \quad (7)$$
$$\lambda^i_{t+1} = \sum_{j\in\mathcal{N}} c^{ij}_t\,\tilde{\lambda}^j_t, \quad (8)$$

with

$$\delta^i_t = r^i(s_t, a_t) - \hat{\bar{R}}_{\lambda^i_t}(s_t, a_t). \quad (9)$$

In this case, $\delta^i_t$ was motivated by distributed optimization results and is not related to the local TD-error (as there is no "temporal" relationship for $\bar{R}$). Rather, it is simply the difference between the sampled reward and the bootstrap estimate.

Algorithm 2: Networked deterministic off-policy actor-critic
  Initialize: step $t=0$; parameters $\lambda^i_0, \tilde{\lambda}^i_0, \theta^i_0$ for all $i\in\mathcal{N}$; state $s_0$; stepsizes $\{\beta_{\lambda,t}\}_{t\ge0}, \{\beta_{\theta,t}\}_{t\ge0}$.
  Draw $a^i_0 \sim \pi^i(s_0)$, compute $\dot{a}^i_0 = \mu^i_{\theta^i_0}(s_0)$ and $\tilde{a}^i_0 = \nabla_{\theta^i}\mu^i_{\theta^i_0}(s_0)$; observe the joint $a_0$, $\dot{a}_0$, and $\tilde{a}_0$.
  repeat
    For each $i\in\mathcal{N}$: observe $s_{t+1}$ and reward $r^i_{t+1} = r^i(s_t, a_t)$.
    For each $i\in\mathcal{N}$: compute $\delta^i_t$ by (9); critic step (7); actor step (6); send $\tilde{\lambda}^i_t$ to the neighbors $\{j\in\mathcal{N}: (i,j)\in E_t\}$ over $G_t$.
    For each $i\in\mathcal{N}$: consensus step (8); draw $a^i_{t+1} \sim \pi(s_{t+1})$, compute $\dot{a}^i_{t+1} = \mu^i_{\theta^i_{t+1}}(s_{t+1})$ and $\tilde{a}^i_{t+1} = \nabla_{\theta^i}\mu^i_{\theta^i_{t+1}}(s_{t+1})$.
    Observe the joint $a_{t+1}$, $\dot{a}_{t+1}$, and $\tilde{a}_{t+1}$.
    Update $t \leftarrow t+1$.
  until end

The quantities $\dot{a}_t$ and $\tilde{a}_t$ in Algorithm 2 are additional terms that need to be shared when using compatible features (this is explained further in the next section).
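Both algorithms rely on the consensus weight matrix $C_t$, whose required properties are formalized in Assumption 4 below. One standard construction that is row stochastic and respects the communication graph (our illustration, consistent with but not prescribed by the paper) uses Metropolis weights:

```python
import numpy as np

def metropolis_weights(adj):
    """Consensus weight matrix from an undirected 0/1 adjacency matrix (zero diagonal).

    Rows sum to one and c_ij > 0 only where agents i and j share a link; the
    resulting matrix is also symmetric, hence doubly stochastic.
    """
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                C[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        C[i, i] = 1.0 - C[i].sum()   # put the remaining mass on the self-loop
    return C
```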
5 Convergence

To show convergence, we use a two-timescale technique in which the actor update of the deterministic policy parameter $\theta^i$ occurs more slowly than those of $\omega^i$ and $\hat{J}^i$ in the critic. We study the asymptotic behaviour of the critic by freezing the joint policy $\mu_\theta$, then study the behaviour of $\theta_t$ under convergence of the critic. To ensure stability, projection is often assumed, since it is not clear how boundedness of $\{\theta^i_t\}$ can otherwise be ensured (see Bhatnagar et al. [2009]). However, in practice, convergence is typically observed even without the projection step (see Bhatnagar et al. [2009], Degris et al. [2012], Prabuchandran et al. [2016], Zhang et al. [2018], Suttle et al. [2019]). We also introduce the following technical assumptions, which will be needed in the statement of the convergence results.

Assumption 1 (Linear approximation, average reward). For each agent $i$, the average-reward function $\bar{R}$ is parameterized by the class of linear functions, i.e., $\hat{\bar{R}}_{\lambda^i,\theta}(s,a) = w_\theta(s,a)\cdot\lambda^i$, where $w_\theta(s,a) = [w_{\theta,1}(s,a), \dots, w_{\theta,K}(s,a)] \in \mathbb{R}^K$ is the feature associated with the state-action pair $(s,a)$. The feature vectors $w_\theta(s,a)$, as well as $\nabla_a w_{\theta,k}(s,a)$, are uniformly bounded for any $s\in\mathcal{S}$, $a\in\mathcal{A}$, $k\in\{1,\dots,K\}$. Furthermore, we assume that the feature matrix $W_{\pi,\theta}\in\mathbb{R}^{|\mathcal{S}|\times K}$ has full column rank, where the $k$-th column of $W_{\pi,\theta}$ is $[\int_{\mathcal{A}}\pi(a|s)\,w_{\theta,k}(s,a)\,da,\ s\in\mathcal{S}]$ for any $k\in\{1,\dots,K\}$.

Assumption 2 (Linear approximation, action value). For each agent $i$, the action-value function is parameterized by the class of linear functions, i.e., $\hat{Q}_{\omega^i}(s,a) = \phi(s,a)\cdot\omega^i$, where $\phi(s,a) = [\phi_1(s,a), \dots, \phi_K(s,a)]\in\mathbb{R}^K$ is the feature associated with the state-action pair $(s,a)$. The feature vectors $\phi(s,a)$, as well as $\nabla_a\phi_k(s,a)$, are uniformly bounded for any $s\in\mathcal{S}$, $a\in\mathcal{A}$, $k\in\{1,\dots,K\}$. Furthermore, we assume that for any $\theta\in\Theta$, the feature matrix $\Phi_\theta\in\mathbb{R}^{|\mathcal{S}|\times K}$ has full column rank, where the $k$-th column of $\Phi_\theta$ is $[\phi_k(s,\mu_\theta(s)),\ s\in\mathcal{S}]$ for any $k\in\{1,\dots,K\}$. Also, for any $u\in\mathbb{R}^K$, $\Phi_\theta u \ne \mathbf{1}$.

Assumption 3 (Bounding $\theta$). The update of the policy parameter $\theta^i$ includes a local projection $\Gamma^i: \mathbb{R}^{m_i}\to\Theta^i$ that projects any $\theta^i_t$ onto a compact set $\Theta^i$ that can be expressed as $\{\theta^i \mid q^i_j(\theta^i)\le0,\ j=1,\dots,s_i\}\subset\mathbb{R}^{m_i}$, for some real-valued, continuously differentiable functions $\{q^i_j\}_{1\le j\le s_i}$ defined on $\mathbb{R}^{m_i}$. We also assume that $\Theta = \prod_{i=1}^{N}\Theta^i$ is large enough to include at least one local minimum of $J(\theta)$.

We use $\{\mathcal{F}_t\}$ to denote the filtration with $\mathcal{F}_t = \sigma(s_\tau, C_{\tau-1}, a_{\tau-1}, r_{\tau-1},\ \tau\le t)$.

Assumption 4 (Random matrices). The sequence of non-negative random matrices $\{C_t = (c^{ij}_t)_{ij}\}$ satisfies:
1. $C_t$ is row stochastic and $\mathbb{E}(C_t|\mathcal{F}_t)$ is a.s. column stochastic for each $t$, i.e., $C_t\mathbf{1} = \mathbf{1}$ and $\mathbf{1}^\top\mathbb{E}(C_t|\mathcal{F}_t) = \mathbf{1}^\top$ a.s. Furthermore, there exists a constant $\eta\in(0,1)$ such that, for any $c^{ij}_t > 0$, we have $c^{ij}_t \ge \eta$.
2. $C_t$ respects the communication graph $G_t$, i.e., $c^{ij}_t = 0$ if $(i,j)\notin E_t$.
3. The spectral norm of $\mathbb{E}[C_t^\top\cdot(I - \mathbf{1}\mathbf{1}^\top/N)\cdot C_t]$ is smaller than one.
4. Given the $\sigma$-algebra generated by the random variables before time $t$, $C_t$ is conditionally independent of $s_t$, $a_t$ and $r^i_{t+1}$ for any $i\in\mathcal{N}$.

Assumption 5 (Step-size rules, on-policy). The stepsizes $\beta_{\omega,t}, \beta_{\theta,t}$ satisfy

$$\sum_t \beta_{\omega,t} = \sum_t \beta_{\theta,t} = \infty, \qquad \sum_t (\beta_{\omega,t}^2 + \beta_{\theta,t}^2) < \infty, \qquad \sum_t |\beta_{\theta,t+1} - \beta_{\theta,t}| < \infty.$$

In addition, $\beta_{\theta,t} = o(\beta_{\omega,t})$ and $\lim_{t\to\infty}\beta_{\omega,t+1}/\beta_{\omega,t} = 1$.

Assumption 6 (Step-size rules, off-policy). The step-sizes $\beta_{\lambda,t}, \beta_{\theta,t}$ satisfy

$$\sum_t \beta_{\lambda,t} = \sum_t \beta_{\theta,t} = \infty, \qquad \sum_t \beta_{\lambda,t}^2 + \beta_{\theta,t}^2 < \infty, \qquad \beta_{\theta,t} = o(\beta_{\lambda,t}), \qquad \lim_{t\to\infty}\beta_{\lambda,t+1}/\beta_{\lambda,t} = 1.$$

On-Policy Convergence

To state convergence of the critic step, we define $D^s_\theta = \mathrm{Diag}[d_\theta(s),\ s\in\mathcal{S}]$, $\bar{R}_\theta = [\bar{R}(s,\mu_\theta(s)),\ s\in\mathcal{S}]^\top\in\mathbb{R}^{|\mathcal{S}|}$, and the operator $T^Q_\theta: \mathbb{R}^{|\mathcal{S}|}\to\mathbb{R}^{|\mathcal{S}|}$, for any action-value vector $Q'\in\mathbb{R}^{|\mathcal{S}|}$ (and not $\mathbb{R}^{|\mathcal{S}|\cdot|\mathcal{A}|}$, since there is a mapping associating an action to each state), as $T^Q_\theta(Q') = \bar{R}_\theta - J(\mu_\theta)\cdot\mathbf{1} + P^\theta Q'$.

Theorem 4. Under Assumptions 3, 4, and 5, for any given deterministic policy $\mu_\theta$, with $\{\hat{J}_t\}$ and $\{\omega_t\}$ generated from (2), we have $\lim_{t\to\infty}\frac{1}{N}\sum_{i\in\mathcal{N}}\hat{J}^i_t = J(\mu_\theta)$ and $\lim_{t\to\infty}\omega^i_t = \omega_\theta$ a.s. for any $i\in\mathcal{N}$, where $J(\mu_\theta) = \sum_{s\in\mathcal{S}} d_\theta(s)\,\bar{R}(s,\mu_\theta(s))$ is the long-term average return under $\mu_\theta$, and $\omega_\theta$ is the unique solution to

$$\Phi_\theta^\top D^s_\theta\left[T^Q_\theta(\Phi_\theta\omega_\theta) - \Phi_\theta\omega_\theta\right] = 0. \quad (10)$$
Moreover, $\omega_\theta$ is the minimizer of the Mean Square Projected Bellman Error (MSPBE), i.e., the solution to

$$\min_\omega\ \|\Phi_\theta\omega - \Pi T^Q_\theta(\Phi_\theta\omega)\|^2_{D^s_\theta},$$

where $\Pi$ is the operator that projects a vector onto the space spanned by the columns of $\Phi_\theta$, and $\|\cdot\|^2_{D^s_\theta}$ denotes the Euclidean norm weighted by the matrix $D^s_\theta$.

To state convergence of the actor step, we define the quantities $\psi^i_{t,\theta}$, $\xi^i_t$ and $\xi^i_{t,\theta}$ as

$$\psi^i_{t,\theta} = \nabla_{\theta^i}\mu^i_{\theta^i}(s_t), \qquad \psi^i_t = \psi^i_{t,\theta_t} = \nabla_{\theta^i}\mu^i_{\theta^i_t}(s_t),$$
$$\xi^i_{t,\theta} = \nabla_{a^i}\hat{Q}_{\omega_\theta}(s_t, a^{-i}_t, a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)} = \nabla_{a^i}\phi(s_t, a^{-i}_t, a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)}\,\omega_\theta,$$
$$\xi^i_t = \nabla_{a^i}\hat{Q}_{\omega^i_t}(s_t, a^{-i}_t, a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)} = \nabla_{a^i}\phi(s_t, a^{-i}_t, a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)}\,\omega^i_t.$$

Additionally, we introduce the operator $\hat{\Gamma}(\cdot)$ as

$$\hat{\Gamma}^i[g(\theta)] = \lim_{0<\eta\to0}\frac{\Gamma^i[\theta^i + \eta\cdot g(\theta)] - \theta^i}{\eta} \quad (11)$$

for any $\theta\in\Theta$ and any continuous function $g: \Theta\to\mathbb{R}^{m_i}$. In case the limit above is not unique, we take $\hat{\Gamma}^i[g(\theta)]$ to be the set of all possible limit points of (11).

Theorem 5. Under Assumptions 2, 3, 4, and 5, the policy parameter $\theta^i_t$ obtained from (5) converges a.s. to a point in the set of asymptotically stable equilibria of

$$\dot{\theta}^i = \hat{\Gamma}^i\left[\mathbb{E}_{s_t\sim d_\theta,\,\mu_\theta}[\psi^i_{t,\theta}\cdot\xi^i_{t,\theta}]\right], \quad \text{for any } i\in\mathcal{N}. \quad (12)$$

In the case of multiple limit points, the above is treated as a differential inclusion rather than an ODE.

The convergence of the critic step can be proved by taking steps similar to those in Zhang et al. [2018]. For the convergence of the actor step, difficulties arise from the projection (which is handled using the Kushner-Clark lemma, Kushner and Clark [1978]) and the state-dependent noise (which is handled by "natural" timescale averaging, Crowder [2009]). Details are provided in the Appendix.

Remark. Note that with a linear function approximator for $Q_\theta$, $\psi_{t,\theta}\cdot\xi_{t,\theta} = \nabla_\theta\mu_\theta(s_t)\,\nabla_a\hat{Q}_{\omega_\theta}(s_t,a)\big|_{a=\mu_\theta(s_t)}$ may not be an unbiased estimate of $\nabla_\theta J(\theta)$:

$$\mathbb{E}_{s\sim d_\theta}[\psi_{t,\theta}\cdot\xi_{t,\theta}] = \nabla_\theta J(\theta) + \mathbb{E}_{s\sim d_\theta}\left[\nabla_\theta\mu_\theta(s)\cdot\left(\nabla_a\hat{Q}_{\omega_\theta}(s,a)\big|_{a=\mu_\theta(s)} - \nabla_a Q_\theta(s,a)\big|_{a=\mu_\theta(s)}\right)\right].$$

A standard approach to overcome this approximation issue is via compatible features (see, for example, Silver et al. [January 2014a] and Zhang and Zavlanos [2019]), i.e., $\phi(s,a) = a\cdot\nabla_\theta\mu_\theta(s)^\top$, giving, for $\omega\in\mathbb{R}^m$,

$$\hat{Q}_\omega(s,a) = a\cdot\nabla_\theta\mu_\theta(s)^\top\omega = (a-\mu_\theta(s))\cdot\nabla_\theta\mu_\theta(s)^\top\omega + \hat{V}_\omega(s),$$

with $\hat{V}_\omega(s) = \hat{Q}_\omega(s,\mu_\theta(s))$ and $\nabla_a\hat{Q}_\omega(s,a)\big|_{a=\mu_\theta(s)} = \nabla_\theta\mu_\theta(s)^\top\omega$.

We thus expect that the convergent point of (5) corresponds to a small neighborhood of a local optimum of $J(\mu_\theta)$, i.e., $\nabla_{\theta^i}J(\mu_\theta) = 0$, provided that the error in the gradient of the action-value function, $\nabla_a\hat{Q}_\omega(s,a)\big|_{a=\mu_\theta(s)} - \nabla_a Q_\theta(s,a)\big|_{a=\mu_\theta(s)}$, is small. However, note that using compatible features requires computing, at each step $t$, $\phi(s_t,a_t) = a_t\cdot\nabla_\theta\mu_\theta(s_t)^\top$. Thus, in Algorithm 1, each agent observes not only the joint action $a_{t+1} = (a^1_{t+1},\dots,a^N_{t+1})$ but also $\tilde{a}_{t+1} = (\nabla_{\theta^1}\mu^1_{\theta^1_t}(s_{t+1}),\dots,\nabla_{\theta^N}\mu^N_{\theta^N_t}(s_{t+1}))$.

Off-Policy Convergence

Theorem 6. Under Assumptions 1, 4, and 6, for any given behavior policy $\pi$ and any $\theta\in\Theta$, with $\{\lambda^i_t\}$ generated from (7), we have $\lim_{t\to\infty}\lambda^i_t = \lambda_\theta$ a.s.
for any $i\in\mathcal{N}$, where $\lambda_\theta$ is the unique solution to

$$B_{\pi,\theta}\cdot\lambda_\theta = A_{\pi,\theta}\cdot d^s_\pi, \quad (13)$$

where $d^s_\pi = [d_\pi(s),\ s\in\mathcal{S}]^\top$, $A_{\pi,\theta} = [\int_{\mathcal{A}}\pi(a|s)\,\bar{R}(s,a)\,w(s,a)^\top da,\ s\in\mathcal{S}]\in\mathbb{R}^{K\times|\mathcal{S}|}$ and $B_{\pi,\theta} = [\sum_{s\in\mathcal{S}} d_\pi(s)\int_{\mathcal{A}}\pi(a|s)\,w_i(s,a)\cdot w(s,a)^\top da,\ 1\le i\le K]\in\mathbb{R}^{K\times K}$.

From here on we let

$$\xi^i_{t,\theta} = \nabla_{a^i}\hat{\bar{R}}_{\lambda_\theta}(s_t, \mu^{-i}_{\theta^{-i}_t}(s_t), a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)} = \nabla_{a^i}w(s_t, \mu^{-i}_{\theta^{-i}_t}(s_t), a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)}\,\lambda_\theta,$$
$$\xi^i_t = \nabla_{a^i}\hat{\bar{R}}_{\lambda^i_t}(s_t, \mu^{-i}_{\theta^{-i}_t}(s_t), a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)} = \nabla_{a^i}w(s_t, \mu^{-i}_{\theta^{-i}_t}(s_t), a^i)\big|_{a^i=\mu^i_{\theta^i_t}(s_t)}\,\lambda^i_t,$$

and we keep $\psi^i_{t,\theta} = \nabla_{\theta^i}\mu^i_{\theta^i}(s_t)$ and $\psi^i_t = \psi^i_{t,\theta_t} = \nabla_{\theta^i}\mu^i_{\theta^i_t}(s_t)$.

Theorem 7. Under Assumptions 1, 3, 4, and 6, the policy parameter $\theta^i_t$ obtained from (6) converges a.s. to a point in the set of asymptotically stable equilibria of

$$\dot{\theta}^i = \Gamma^i\left[\mathbb{E}_{s\sim d_\pi}[\psi^i_{t,\theta}\cdot\xi^i_{t,\theta}]\right]. \quad (14)$$

We define compatible features for the action-value and the average-reward functions in an analogous manner: $w_\theta(s,a) = (a-\mu_\theta(s))\cdot\nabla_\theta\mu_\theta(s)^\top$. For $\lambda\in\mathbb{R}^m$,

$$\hat{\bar{R}}_{\lambda,\theta}(s,a) = (a-\mu_\theta(s))\cdot\nabla_\theta\mu_\theta(s)^\top\cdot\lambda, \qquad \nabla_a\hat{\bar{R}}_{\lambda,\theta}(s,a) = \nabla_\theta\mu_\theta(s)^\top\cdot\lambda,$$

and we have that, for $\lambda^* = \arg\min_\lambda\ \mathbb{E}_{s\sim d_\pi}\left[\|\nabla_a\hat{\bar{R}}_{\lambda,\theta}(s,\mu_\theta(s)) - \nabla_a\bar{R}(s,\mu_\theta(s))\|^2\right]$:

$$\nabla_\theta J_\pi(\mu_\theta) = \mathbb{E}_{s\sim d_\pi}\left[\nabla_\theta\mu_\theta(s)\cdot\nabla_a\bar{R}(s,a)\big|_{a=\mu_\theta(s)}\right] = \mathbb{E}_{s\sim d_\pi}\left[\nabla_\theta\mu_\theta(s)\cdot\nabla_a\hat{\bar{R}}_{\lambda^*,\theta}(s,a)\big|_{a=\mu_\theta(s)}\right].$$

The use of compatible features requires each agent to observe not only the joint action taken, $a_{t+1} = (a^1_{t+1},\dots,a^N_{t+1})$, and the "on-policy action" $\dot{a}_{t+1} = (\dot{a}^1_{t+1},\dots,\dot{a}^N_{t+1})$, but also $\tilde{a}_{t+1} = (\nabla_{\theta^1}\mu^1_{\theta^1_t}(s_{t+1}),\dots,\nabla_{\theta^N}\mu^N_{\theta^N_t}(s_{t+1}))$ (the corresponding quantities in Algorithm 2).

We illustrate algorithm convergence on a multi-agent extension of a continuous bandit problem from Sec. 5.1 of Silver et al. [January 2014b]. Details are in the Appendix. Figure 2 shows the convergence of Algorithms 1 and 2 averaged over 5 runs. In all cases, the system converges and the agents are able to coordinate their actions to minimize the system cost.

6 Conclusion

We have provided the tools needed to implement decentralized, deterministic actor-critic algorithms for cooperative multi-agent reinforcement learning. We provide the expressions for the policy gradients, the algorithms themselves, and proofs of their convergence in the on-policy and off-policy settings. We also provide numerical results for a continuous multi-agent bandit problem that demonstrate the convergence of our algorithms. Our work differs from Zhang and Zavlanos [2019]: the latter is based on policy consensus, whereas ours is based on critic consensus. Our approach represents agreement between agents on every participant's contribution to the global reward and, as such, provides a consensus scoring function with which to evaluate agents. Our approach may be used in compensation schemes to incentivize participation. An interesting extension of this work would be to prove convergence of our actor-critic algorithm for continuous state spaces, as it may hold under assumptions on the geometric ergodicity of the stationary state distribution induced by the deterministic policies (see Crowder [2009]). The expected policy gradient (EPG) of Ciosek and Whiteson [2018], a hybrid between stochastic and deterministic policy gradients, would also be interesting to leverage.
The Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG) of Lowe et al. [2017] assumes partial observability for each agent and would be a useful extension, but it is likely difficult to extend our convergence guarantees to the partially observed setting.

References

Albert Benveniste, Pierre Priouret, and Michel Métivier. Adaptive Algorithms and Stochastic Approximations. Springer-Verlag, Berlin, Heidelberg, 1990. ISBN 0-387-52894-6.

Shalabh Bhatnagar, Richard S. Sutton, Mohammad Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms. Automatica, 45(11):2471–2482, November 2009. ISSN 0005-1098. doi: 10.1016/j.automatica.2009.07.008. URL http://dx.doi.org/10.1016/j.automatica.2009.07.008.

Kamil Ciosek and Shimon Whiteson. Expected Policy Gradients for Reinforcement Learning. arXiv e-prints, art. arXiv:1801.03326, Jan 2018.

Martin Crowder. Stochastic approximation: A dynamical systems viewpoint by Vivek S. Borkar. International Statistical Review, 77(2):306–306, 2009.

Thomas Degris, Martha White, and Richard S. Sutton. Off-policy actor-critic. CoRR, abs/1205.4839, 2012. URL http://arxiv.org/abs/1205.4839.

Scott Fujimoto, Herke van Hoof, and Dave Meger. Addressing function approximation error in actor-critic methods. CoRR, abs/1802.09477, 2018. URL http://arxiv.org/abs/1802.09477.

Sham Kakade. A natural policy gradient. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, NIPS'01, pages 1531–1538, Cambridge, MA, USA, 2001. MIT Press. URL http://dl.acm.org/citation.cfm?id=2980539.2980738.

Vijaymohan Konda. Actor-critic Algorithms. PhD thesis, Cambridge, MA, USA, 2002. AAI0804543.

Harold J. Kushner and Dean S. Clark. Stochastic approximation methods for constrained and unconstrained systems. New York: Springer-Verlag, 1978. ISBN 0387903410.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.

Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Neural Information Processing Systems (NIPS), 2017.

Hamid Reza Maei. Convergent actor-critic algorithms under off-policy training and function approximation. CoRR, abs/1802.07842, 2018. URL http://arxiv.org/abs/1802.07842.

P. Marbach and J. N. Tsitsiklis. Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control, 46(2):191–209, Feb 2001. ISSN 0018-9286. doi: 10.1109/9.905687.

K. J. Prabuchandran, Shalabh Bhatnagar, and Vivek S. Borkar. Actor-critic algorithms with online feature adaptation. ACM Trans. Model. Comput. Simul., 26(4):24:1–24:26, February 2016. ISSN 1049-3301. doi: 10.1145/2868723. URL http://doi.acm.org/10.1145/2868723.

Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. ISBN 0471619779.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic Policy Gradient Algorithms. International Conference on Machine Learning, pages 387–395, January 2014a.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic Policy Gradient Algorithms. International Conference on Machine Learning, pages 387–395, January 2014b.

Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Zhaoran Wang, Tamer Basar, and Ji Liu. A multi-agent off-policy actor-critic algorithm for distributed reinforcement learning. CoRR, abs/1903.06372, 2019. URL http://arxiv.org/abs/1903.06372.

Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In S. A. Solla, T. K. Leen, and K. Müller, editors, Advances in Neural Information Processing Systems 12, pages 1057–1063. MIT Press, 2000a.

Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In S. A. Solla, T. K. Leen, and K. Müller, editors, Advances in Neural Information Processing Systems 12, pages 1057–1063. MIT Press, 2000b.

Richard S. Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML'09, pages 993–1000, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-516-1.

Richard S. Sutton, A. Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. J. Mach. Learn. Res., 17(1):2603–2631, January 2016. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=2946645.3007026.

Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Basar. Fully decentralized multi-agent reinforcement learning with networked agents. In Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5872–5881, 10–15 Jul 2018.

Yan Zhang and Michael M. Zavlanos. Distributed off-policy actor-critic reinforcement learning with policy consensus. CoRR, abs/1903.09255, 2019.

Numerical experiment details

We demonstrate the convergence of our algorithms in a continuous bandit problem that is a multi-agent extension of the experiment in Section 5.1 of Silver et al. [January 2014b]. Each agent chooses an action a^i ∈ R^m. We assume all agents have the same reward function, given by R^i(a) = −(Σ_i a^i − a*)^⊤ C (Σ_i a^i − a*). The matrix C is positive definite with eigenvalues chosen from {0.1, 1}, and a* = [4, ..., 4]^⊤. We consider 10 agents and action dimensions m = 10, 20, 50. Note that there are multiple possible solutions to this problem, requiring the agents to coordinate their actions so that they sum to a*. We assume a target policy of the form μ_{θ^i} = θ^i for each agent i and a Gaussian behaviour policy β(·) ∼ N(θ^i, σ²_β), where σ_β = 0.1. We use the Gaussian behaviour policy for both Algorithms 1 and 2. Strictly speaking, Algorithm 1 is on-policy, but in this simplified setting, where the target policy is constant, the on-policy version would be degenerate in that the Q estimate would not affect the TD-error. Therefore, we add a Gaussian behaviour policy to Algorithm 1. Each agent maintains an estimate Q^{ω^i}(a) of the critic using a linear function of the compatible features a − θ and a bias feature. The critic is recomputed from each successive batch of 2m steps and the actor is updated once per batch. The critic step size is 0.1 and the actor step size is 0.01. Performance is evaluated by measuring the cost of the target policy (without exploration). Figure 2 shows the convergence of Algorithms 1 and 2 averaged over 5 runs. In all cases, the system converges and the agents are able to coordinate their actions to minimize system cost. The Jupyter notebook will be made available for others to use. In fact, in this simple experiment, we also observe convergence under discounted rewards.
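The bandit setup above is simple enough to sketch directly. The following minimal NumPy snippet (ours, not the released notebook) implements the shared reward and the Gaussian behaviour policy for m = 10; the eigenvector basis of C and the random seed are free choices not fixed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N_agents, m = 10, 10                      # number of agents and action dimension

# Positive definite C with eigenvalues drawn from {0.1, 1}
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
C = Q @ np.diag(rng.choice([0.1, 1.0], size=m)) @ Q.T
a_star = 4.0 * np.ones(m)

def reward(actions):
    """Shared reward R(a) = -(sum_i a^i - a*)^T C (sum_i a^i - a*)."""
    d = actions.sum(axis=0) - a_star
    return -d @ C @ d

# Target policy mu_{theta^i} = theta^i; Gaussian behaviour policy around it.
theta = [np.zeros(m) for _ in range(N_agents)]
sigma_beta = 0.1

def behaviour_actions():
    return np.stack([t + sigma_beta * rng.standard_normal(m) for t in theta])

print(reward(behaviour_actions()))        # cost of one joint exploratory action
```

With all θ^i at zero, the system cost is roughly −a*^⊤ C a*; any parameter configuration whose components sum to a* attains cost near zero, which is the coordination the agents must learn.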
Proof of Theorem 1

The proof follows the same scheme as Sutton et al. [2000a], naturally extending their results to a deterministic policy μ_θ and a continuous action space A.

Note that our regularity assumptions ensure that, for any s ∈ S, V_θ(s), ∇_θ V_θ(s), J(θ), ∇_θ J(θ) and d_θ(s) are Lipschitz-continuous functions of θ (since μ_θ is twice continuously differentiable and Θ is compact), and that Q_θ(s,a) and ∇_a Q_θ(s,a) are Lipschitz-continuous functions of a (Marbach and Tsitsiklis [2001]).

We first show that ∇_θ J(θ) = E_{s∼d_θ}[∇_θ μ_θ(s) ∇_a Q_θ(s,a)|_{a=μ_θ(s)}].

The Poisson equation under policy μ_θ is given by Puterman [1994]:

Q_θ(s,a) = R̄(s,a) − J(θ) + Σ_{s'∈S} P(s'|s,a) V_θ(s').

So,

∇_θ V_θ(s) = ∇_θ Q_θ(s, μ_θ(s))
= ∇_θ [ R̄(s, μ_θ(s)) − J(θ) + Σ_{s'∈S} P(s'|s, μ_θ(s)) V_θ(s') ]
= ∇_θ μ_θ(s) ∇_a R̄(s,a)|_{a=μ_θ(s)} − ∇_θ J(θ) + ∇_θ Σ_{s'∈S} P(s'|s, μ_θ(s)) V_θ(s')
= ∇_θ μ_θ(s) ∇_a R̄(s,a)|_{a=μ_θ(s)} − ∇_θ J(θ) + Σ_{s'∈S} ∇_θ μ_θ(s) ∇_a P(s'|s,a)|_{a=μ_θ(s)} V_θ(s') + Σ_{s'∈S} P(s'|s, μ_θ(s)) ∇_θ V_θ(s')
= ∇_θ μ_θ(s) ∇_a [ R̄(s,a) + Σ_{s'∈S} P(s'|s,a) V_θ(s') ]|_{a=μ_θ(s)} − ∇_θ J(θ) + Σ_{s'∈S} P(s'|s, μ_θ(s)) ∇_θ V_θ(s')
= ∇_θ μ_θ(s) ∇_a Q_θ(s,a)|_{a=μ_θ(s)} + Σ_{s'∈S} P(s'|s, μ_θ(s)) ∇_θ V_θ(s') − ∇_θ J(θ).

Hence,

∇_θ J(θ) = ∇_θ μ_θ(s) ∇_a Q_θ(s,a)|_{a=μ_θ(s)} + Σ_{s'∈S} P(s'|s, μ_θ(s)) ∇_θ V_θ(s') − ∇_θ V_θ(s),

and, summing against the stationary distribution,

Σ_{s∈S} d_θ(s) ∇_θ J(θ) = Σ_{s∈S} d_θ(s) ∇_θ μ_θ(s) ∇_a Q_θ(s,a)|_{a=μ_θ(s)} + Σ_{s∈S} d_θ(s) Σ_{s'∈S} P(s'|s, μ_θ(s)) ∇_θ V_θ(s') − Σ_{s∈S} d_θ(s) ∇_θ V_θ(s).

Using the stationarity property of d_θ, we get

Σ_{s∈S} Σ_{s'∈S} d_θ(s) P(s'|s, μ_θ(s)) ∇_θ V_θ(s') = Σ_{s'∈S} d_θ(s') ∇_θ V_θ(s'),

so the last two sums cancel. Therefore,

∇_θ J(θ) = Σ_{s∈S} d_θ(s) ∇_θ μ_θ(s) ∇_a Q_θ(s,a)|_{a=μ_θ(s)} = E_{s∼d_θ}[ ∇_θ μ_θ(s) ∇_a Q_θ(s,a)|_{a=μ_θ(s)} ].

Given that ∇_{θ^i} μ^j_θ(s) = 0 for i ≠ j, we have ∇_θ μ_θ(s) = Diag(∇_{θ^1} μ^1_{θ^1}(s), ..., ∇_{θ^N} μ^N_{θ^N}(s)), which implies

∇_{θ^i} J(θ) = E_{s∼d_θ}[ ∇_{θ^i} μ^i_{θ^i}(s) ∇_{a^i} Q_θ(s, μ^{−i}_{θ^{−i}}(s), a^i)|_{a^i=μ^i_{θ^i}(s)} ].    (15)

Proof of Theorem 3

We extend the notation for the off-policy reward function to stochastic policies as follows. Let β be a behavior policy under which {s_t}_{t≥0} is irreducible and aperiodic, with stationary distribution d_β. For a stochastic policy π: S → P(A), we define

J_β(π) = Σ_{s∈S} d_β(s) ∫_A π(a|s) R̄(s,a) da.

Recall that for a deterministic policy μ: S → A, we have

J_β(μ) = Σ_{s∈S} d_β(s) R̄(s, μ(s)).

We introduce the following conditions, which are identical to Conditions B1 from Silver et al. [January 2014a].
Conditions 1. Functions ν_σ parametrized by σ are said to be a regular delta-approximation on R ⊆ A if they satisfy the following conditions:

1. The distributions ν_σ converge to a delta distribution: lim_{σ↓0} ∫_A ν_σ(a',a) f(a) da = f(a') for a' ∈ R and suitably smooth f. Specifically, we require that this convergence is uniform in a' and over any class F of L-Lipschitz and bounded functions, ‖∇_a f(a)‖ < L < ∞, sup_a f(a) < b < ∞, i.e.:

lim_{σ↓0} sup_{f∈F, a'∈R} | ∫_A ν_σ(a',a) f(a) da − f(a') | = 0.

2. For each a' ∈ R, ν_σ(a',·) is supported on some compact C_{a'} ⊆ A with Lipschitz boundary bd(C_{a'}), vanishes on the boundary, and is continuously differentiable on C_{a'}.

3. For each a' ∈ R and each a ∈ A, the gradient ∇_{a'} ν_σ(a',a) exists.

4. Translation invariance: for all a ∈ A, a' ∈ R, and any δ ∈ R^n such that a + δ ∈ A and a' + δ ∈ A, ν_σ(a',a) = ν_σ(a'+δ, a+δ).

The following lemma is an immediate corollary of Lemma 1 from Silver et al. [January 2014a].

Lemma 1. Let ν_σ be a regular delta-approximation on R ⊆ A. Then, wherever the gradients exist,

∇_{a'} ν(a',a) = −∇_a ν(a',a).

Theorem 3 is a less technical restatement of the following result.

Theorem 8. Let μ_θ: S → A. Denote the range of μ_θ by R_θ ⊆ A, and let R = ∪_θ R_θ. For each θ, consider a stochastic policy π_{θ,σ} such that π_{θ,σ}(a|s) = ν_σ(μ_θ(s), a), where the ν_σ satisfy Conditions 1 on R. Then there exists r > 0 such that, for each θ ∈ Θ, the maps σ ↦ J_{π_{θ,σ}}(π_{θ,σ}), σ ↦ J_{π_{θ,σ}}(μ_θ), σ ↦ ∇_θ J_{π_{θ,σ}}(π_{θ,σ}), and σ ↦ ∇_θ J_{π_{θ,σ}}(μ_θ) are properly defined on [0,r] (with J_{π_{θ,0}}(π_{θ,0}) = J_{π_{θ,0}}(μ_θ) = J_{μ_θ}(μ_θ) and ∇_θ J_{π_{θ,0}}(π_{θ,0}) = ∇_θ J_{π_{θ,0}}(μ_θ) = ∇_θ J_{μ_θ}(μ_θ)), and we have:

lim_{σ↓0} ∇_θ J_{π_{θ,σ}}(π_{θ,σ}) = lim_{σ↓0} ∇_θ J_{π_{θ,σ}}(μ_θ) = ∇_θ J_{μ_θ}(μ_θ).

To prove this result, we first state and prove the following lemma.

Lemma 2. There exists r > 0 such that, for all θ ∈ Θ and σ ∈ [0,r], the stationary distribution d_{π_{θ,σ}} exists and is unique. Moreover, for each θ ∈ Θ, σ ↦ d_{π_{θ,σ}} and σ ↦ ∇_θ d_{π_{θ,σ}} are properly defined on [0,r] and both are continuous at 0.

Proof of Lemma 2. For any policy β, we let (P^β_{s,s'})_{s,s'∈S} be the transition matrix associated with the Markov chain {s_t}_{t≥0} induced by β. In particular, for each θ ∈ Θ, σ > 0, s, s' ∈ S, we have

P^{μ_θ}_{s,s'} = P(s'|s, μ_θ(s)),  P^{π_{θ,σ}}_{s,s'} = ∫_A π_{θ,σ}(a|s) P(s'|s,a) da = ∫_A ν_σ(μ_θ(s), a) P(s'|s,a) da.

Let θ ∈ Θ, s, s' ∈ S, (θ_n) ∈ Θ^N with θ_n → θ, and (σ_n)_{n∈N} ∈ R_+^N with σ_n ↓ 0:

|P^{π_{θ_n,σ_n}}_{s,s'} − P^{μ_θ}_{s,s'}| ≤ |P^{π_{θ_n,σ_n}}_{s,s'} − P^{μ_{θ_n}}_{s,s'}| + |P^{μ_{θ_n}}_{s,s'} − P^{μ_θ}_{s,s'}|.

Applying the first condition of Conditions 1 with f: a ↦ P(s'|s,a) belonging to F:

|P^{π_{θ_n,σ_n}}_{s,s'} − P^{μ_{θ_n}}_{s,s'}| = | ∫_A ν_{σ_n}(μ_{θ_n}(s), a) P(s'|s,a) da − P(s'|s, μ_{θ_n}(s)) | ≤ sup_{f∈F, a'∈R} | ∫_A ν_{σ_n}(a',a) f(a) da − f(a') | → 0 as n → ∞.

By the regularity assumptions on θ ↦ μ_θ(s) and on P(s'|s,·), we have

|P^{μ_{θ_n}}_{s,s'} − P^{μ_θ}_{s,s'}| = |P(s'|s, μ_{θ_n}(s)) − P(s'|s, μ_θ(s))| → 0 as n → ∞.

Hence,

|P^{π_{θ_n,σ_n}}_{s,s'} − P^{μ_θ}_{s,s'}| → 0 as n → ∞.

Therefore, for each s, s' ∈ S, the map (θ,σ) ↦ P^{π_{θ,σ}}_{s,s'}, with P^{π_{θ,0}}_{s,s'} = P^{μ_θ}_{s,s'}, is continuous on Θ × {0}. Note that, for each n ∈ N, P ↦ ∏_{s,s'} (P^n)_{s,s'} is a polynomial function of the entries of P.
Thus, for each n ∈ N, f_n: (θ,σ) ↦ ∏_{s,s'} ((P^{π_{θ,σ}})^n)_{s,s'}, with f_n(θ,0) = ∏_{s,s'} ((P^{μ_θ})^n)_{s,s'}, is continuous on Θ × {0}. Moreover, for each θ ∈ Θ, σ ≥ 0, from the structure of P^{π_{θ,σ}}, if there is some n* ∈ N such that f_{n*}(θ,σ) > 0, then f_n(θ,σ) > 0 for all n ≥ n*.

Now suppose that there exists (θ_n) ∈ Θ^{N*} such that, for each n > 0, there is a σ_n ≤ n^{−1} with f_n(θ_n, σ_n) = 0. By compactness of Θ, we can take (θ_n) converging to some θ ∈ Θ. For each n* ∈ N, by continuity we have f_{n*}(θ,0) = lim_{n→∞} f_{n*}(θ_n, σ_n) = 0. Since P^{μ_θ} is irreducible and aperiodic, there is some n ∈ N such that for all s, s' ∈ S and all n* ≥ n, ((P^{μ_θ})^{n*})_{s,s'} > 0, i.e., f_{n*}(θ,0) > 0. This is a contradiction.

Hence, there exists n* > 0 such that for all θ ∈ Θ and σ ≤ n*^{−1}, f_{n*}(θ,σ) > 0. We let r = n*^{−1}. It follows that, for all θ ∈ Θ and σ ∈ [0,r], P^{π_{θ,σ}} is the transition matrix of an irreducible and aperiodic Markov chain, so d_{π_{θ,σ}} is well defined as the unique stationary probability distribution associated with P^{π_{θ,σ}}. We fix θ ∈ Θ in the remainder of the proof.

Let β be a policy for which the Markov chain corresponding to P^β is irreducible and aperiodic. Let s* ∈ S; as asserted in Marbach and Tsitsiklis [2001], considering the stationary distribution d_β as a vector (d^β_s)_{s∈S} ∈ R^{|S|}, d^β is the unique solution of the balance equations:

Σ_{s∈S} d^β_s P^β_{s,s'} = d^β_{s'} for s' ∈ S∖{s*},  Σ_{s∈S} d^β_s = 1.

Hence, there exist an |S| × |S| matrix A^β and a constant vector a ≠ 0 of R^{|S|} such that the balance equations take the form

A^β d^β = a    (16)

with A^β_{s,s'} depending on P^β_{s',s} in an affine way for each s, s' ∈ S. Moreover, A^β is invertible, so d^β is given by

d^β = (1/det(A^β)) adj(A^β)^⊤ a.

The entries of adj(A^β) and det(A^β) are polynomial functions of the entries of P^β. Thus, σ ↦ d_{π_{θ,σ}} = (1/det(A^{π_{θ,σ}})) adj(A^{π_{θ,σ}})^⊤ a is defined on [0,r] and is continuous at 0.

Lemma 1 and integration by parts imply that, for s, s' ∈ S and σ ∈ [0,r]:

∫_A ∇_{a'} ν_σ(a',a)|_{a'=μ_θ(s)} P(s'|s,a) da = −∫_A ∇_a ν_σ(μ_θ(s), a) P(s'|s,a) da = ∫_{C_{μ_θ(s)}} ν_σ(μ_θ(s), a) ∇_a P(s'|s,a) da + boundary terms = ∫_{C_{μ_θ(s)}} ν_σ(μ_θ(s), a) ∇_a P(s'|s,a) da,

where the boundary terms are zero since ν_σ vanishes on the boundary by Conditions 1. Thus, for s, s' ∈ S, σ ∈ [0,r]:

∇_θ P^{π_{θ,σ}}_{s,s'} = ∇_θ ∫_A π_{θ,σ}(a|s) P(s'|s,a) da = ∫_A ∇_θ π_{θ,σ}(a|s) P(s'|s,a) da    (17)
= ∫_A ∇_θ μ_θ(s) ∇_{a'} ν_σ(a',a)|_{a'=μ_θ(s)} P(s'|s,a) da = ∇_θ μ_θ(s) ∫_{C_{μ_θ(s)}} ν_σ(μ_θ(s), a) ∇_a P(s'|s,a) da,

where the exchange of derivative and integral in (17) follows from the Leibniz rule with:

• For all a ∈ A, θ ↦ π_{θ,σ}(a|s) P(s'|s,a) is differentiable, and ∇_θ π_{θ,σ}(a|s) P(s'|s,a) = ∇_θ μ_θ(s) ∇_{a'} ν_σ(a',a)|_{a'=μ_θ(s)} P(s'|s,a).

• Let a* ∈ R. For all θ ∈ Θ, since 0 ≤ P(s'|s,a) ≤ 1,

‖∇_θ π_{θ,σ}(a|s) P(s'|s,a)‖ ≤ ‖∇_θ μ_θ(s) ∇_{a'} ν_σ(a',a)|_{a'=μ_θ(s)}‖
≤ ‖∇_θ μ_θ(s)‖_op ‖∇_{a'} ν_σ(a',a)|_{a'=μ_θ(s)}‖
≤ sup_{θ∈Θ} ‖∇_θ μ_θ(s)‖_op ‖∇_a ν_σ(μ_θ(s), a)‖
= sup_{θ∈Θ} ‖∇_θ μ_θ(s)‖_op ‖∇_a ν_σ(a*, a − μ_θ(s) + a*)‖    (18)
≤ sup_{θ∈Θ} ‖∇_θ μ_θ(s)‖_op sup_{a∈C_{a*}} ‖∇_a ν_σ(a*, a)‖ 1_{a∈C_{a*}},

where ‖·‖_op denotes the operator norm, and (18) comes from translation invariance (we take ∇_a ν_σ(a*, a) = 0 for a ∈ R^n∖C_{a*}).
The dominating function a ↦ sup_{θ∈Θ} ‖∇_θ μ_θ(s)‖_op sup_{a∈C_{a*}} ‖∇_a ν_σ(a*, a)‖ 1_{a∈C_{a*}} is measurable, bounded, and supported on C_{a*}, so it is integrable on A.

• Dominated convergence ensures that, for each k ∈ {1, ..., m}, the partial derivative g_k(θ) = ∂_{θ_k} ∫_A π_{θ,σ}(a|s) P(s'|s,a) da is continuous: let θ_n → θ; then

g_k(θ_n) = ∂_{θ_k} μ_{θ_n}(s) ∫_{C_{a*}} ν_σ(a*, a − μ_{θ_n}(s) + a*) ∇_a P(s'|s,a) da → ∂_{θ_k} μ_θ(s) ∫_{C_{a*}} ν_σ(a*, a − μ_θ(s) + a*) ∇_a P(s'|s,a) da = g_k(θ) as n → ∞,

with dominating function a ↦ sup_{a∈C_{a*}} |ν_σ(a*, a)| sup_{a∈A} ‖∇_a P(s'|s,a)‖ 1_{a∈C_{a*}}.

Thus σ ↦ ∇_θ P^{π_{θ,σ}}_{s,s'} is defined for σ ∈ [0,r] and is continuous at 0, with ∇_θ P^{π_{θ,0}}_{s,s'} = ∇_θ μ_θ(s) ∇_a P(s'|s,a)|_{a=μ_θ(s)}. Indeed, let (σ_n)_{n∈N} ∈ [0,r]^N with σ_n ↓ 0; then, applying the first condition of Conditions 1 with f: a ↦ ∇_a P(s'|s,a) belonging to F, we get

‖∇_θ P^{π_{θ,σ_n}}_{s,s'} − ∇_θ P^{μ_θ}_{s,s'}‖ ≤ ‖∇_θ μ_θ(s)‖_op ‖ ∫_{C_{μ_θ(s)}} ν_{σ_n}(μ_θ(s), a) ∇_a P(s'|s,a) da − ∇_a P(s'|s,a)|_{a=μ_θ(s)} ‖ → 0 as n → ∞.

Since d_{π_{θ,σ}} = (1/det(A^{π_{θ,σ}})) adj(A^{π_{θ,σ}})^⊤ a with |det(A^{π_{θ,σ}})| > 0 for all σ ∈ [0,r], and since the entries of adj(A^{π_{θ,σ}}) and det(A^{π_{θ,σ}}) are polynomial functions of the entries of P^{π_{θ,σ}}, it follows that σ ↦ ∇_θ d_{π_{θ,σ}} is properly defined on [0,r] and is continuous at 0, which concludes the proof of Lemma 2.

We now proceed to prove Theorem 8. Let θ ∈ Θ, let π_θ be as in Theorem 3, and let r > 0 be such that σ ↦ d_{π_{θ,σ}} and σ ↦ ∇_θ d_{π_{θ,σ}} are well defined on [0,r] and continuous at 0. Then the following two functions,

σ ↦ J_{π_{θ,σ}}(π_{θ,σ}) = Σ_{s∈S} d_{π_{θ,σ}}(s) ∫_A π_{θ,σ}(a|s) R̄(s,a) da,  σ ↦ J_{π_{θ,σ}}(μ_θ) = Σ_{s∈S} d_{π_{θ,σ}}(s) R̄(s, μ_θ(s)),

are properly defined on [0,r] (with J_{π_{θ,0}}(π_{θ,0}) = J_{π_{θ,0}}(μ_θ) = J_{μ_θ}(μ_θ)). Let s ∈ S; by arguments similar to those in the proof of Lemma 2, we have

∇_θ ∫_A π_{θ,σ}(a|s) R̄(s,a) da = ∫_A ∇_θ π_{θ,σ}(a|s) R̄(s,a) da = ∇_θ μ_θ(s) ∫_{C_{μ_θ(s)}} ν_σ(μ_θ(s), a) ∇_a R̄(s,a) da.

Thus, σ ↦ ∇_θ J_{π_{θ,σ}}(π_{θ,σ}) is properly defined on [0,r] and

∇_θ J_{π_{θ,σ}}(π_{θ,σ}) = Σ_{s∈S} ∇_θ d_{π_{θ,σ}}(s) ∫_A ν_σ(μ_θ(s), a) R̄(s,a) da + Σ_{s∈S} d_{π_{θ,σ}}(s) ∇_θ μ_θ(s) ∫_{C_{μ_θ(s)}} ν_σ(μ_θ(s), a) ∇_a R̄(s,a) da.

Similarly, σ ↦ ∇_θ J_{π_{θ,σ}}(μ_θ) is properly defined on [0,r] and

∇_θ J_{π_{θ,σ}}(μ_θ) = Σ_{s∈S} ∇_θ d_{π_{θ,σ}}(s) R̄(s, μ_θ(s)) + Σ_{s∈S} d_{π_{θ,σ}}(s) ∇_θ μ_θ(s) ∇_a R̄(s,a)|_{a=μ_θ(s)}.

To prove continuity at 0 of both σ ↦ ∇_θ J_{π_{θ,σ}}(π_{θ,σ}) and σ ↦ ∇_θ J_{π_{θ,σ}}(μ_θ) (with ∇_θ J_{π_{θ,0}}(π_{θ,0}) = ∇_θ J_{π_{θ,0}}(μ_θ) = ∇_θ J_{μ_θ}(μ_θ)), let (σ_n)_{n≥0} ↓ 0:

‖∇_θ J_{π_{θ,σ_n}}(π_{θ,σ_n}) − ∇_θ J_{π_{θ,0}}(π_{θ,0})‖ ≤ ‖∇_θ J_{π_{θ,σ_n}}(π_{θ,σ_n}) − ∇_θ J_{π_{θ,σ_n}}(μ_θ)‖ + ‖∇_θ J_{π_{θ,σ_n}}(μ_θ) − ∇_θ J_{μ_θ}(μ_θ)‖.    (19)

For the first term on the r.h.s. we have

‖∇_θ J_{π_{θ,σ_n}}(π_{θ,σ_n}) − ∇_θ J_{π_{θ,σ_n}}(μ_θ)‖ ≤ Σ_{s∈S} ‖∇_θ d_{π_{θ,σ_n}}(s)‖ | ∫_A ν_{σ_n}(μ_θ(s), a) R̄(s,a) da − R̄(s, μ_θ(s)) | + Σ_{s∈S} d_{π_{θ,σ_n}}(s) ‖∇_θ μ_θ(s)‖_op ‖ ∫_A ν_{σ_n}(μ_θ(s), a) ∇_a R̄(s,a) da − ∇_a R̄(s,a)|_{a=μ_θ(s)} ‖.
Applying the first condition of Conditions 1 with f: a ↦ R̄(s,a) and f: a ↦ ∇_a R̄(s,a) belonging to F, we have, for each s ∈ S:

| ∫_A ν_{σ_n}(μ_θ(s), a) R̄(s,a) da − R̄(s, μ_θ(s)) | → 0  and  ‖ ∫_A ν_{σ_n}(μ_θ(s), a) ∇_a R̄(s,a) da − ∇_a R̄(s,a)|_{a=μ_θ(s)} ‖ → 0 as n → ∞.

Moreover, for each s ∈ S, d_{π_{θ,σ_n}}(s) → d_{μ_θ}(s) and ∇_θ d_{π_{θ,σ_n}}(s) → ∇_θ d_{μ_θ}(s) as n → ∞ (by Lemma 2), and ‖∇_θ μ_θ(s)‖_op < ∞, so

‖∇_θ J_{π_{θ,σ_n}}(π_{θ,σ_n}) − ∇_θ J_{π_{θ,σ_n}}(μ_θ)‖ → 0 as n → ∞.

For the second term on the r.h.s. of (19), we have

‖∇_θ J_{π_{θ,σ_n}}(μ_θ) − ∇_θ J_{μ_θ}(μ_θ)‖ ≤ Σ_{s∈S} ‖∇_θ d_{π_{θ,σ_n}}(s) − ∇_θ d_{μ_θ}(s)‖ |R̄(s, μ_θ(s))| + Σ_{s∈S} |d_{π_{θ,σ_n}}(s) − d_{μ_θ}(s)| ‖∇_θ μ_θ(s)‖_op ‖∇_a R̄(s,a)|_{a=μ_θ(s)}‖.

Continuity at 0 of σ ↦ d_{π_{θ,σ}}(s) and σ ↦ ∇_θ d_{π_{θ,σ}}(s) for each s ∈ S, together with the boundedness of R̄(s,·), ∇_a R̄(s,·) and ∇_θ μ_θ(s), implies that

‖∇_θ J_{π_{θ,σ_n}}(μ_θ) − ∇_θ J_{μ_θ}(μ_θ)‖ → 0 as n → ∞.

Hence,

‖∇_θ J_{π_{θ,σ_n}}(π_{θ,σ_n}) − ∇_θ J_{π_{θ,0}}(π_{θ,0})‖ → 0 as n → ∞,

so σ ↦ ∇_θ J_{π_{θ,σ}}(π_{θ,σ}) and σ ↦ ∇_θ J_{π_{θ,σ}}(μ_θ) are continuous at 0:

lim_{σ↓0} ∇_θ J_{π_{θ,σ}}(π_{θ,σ}) = lim_{σ↓0} ∇_θ J_{π_{θ,σ}}(μ_θ) = ∇_θ J_{μ_θ}(μ_θ).

Proof of Theorem 4

We use a two-time-scale stochastic approximation analysis. We keep the policy parameter fixed at θ_t ≡ θ when analysing the convergence of the critic step. We can thus show the convergence of ω_t towards an ω_θ depending on θ, which is then used to prove convergence on the slow time-scale.

Lemma 3. Under Assumptions 3–5, the sequence ω^i_t generated from (2) is bounded a.s., i.e., sup_t ‖ω^i_t‖ < ∞ a.s., for any i ∈ N.

The proof follows the same steps as that of Lemma B.1 in the PMLR version of Zhang et al. [2018].

Lemma 4. Under Assumption 5, the sequence {Ĵ^i_t} generated as in (2) is bounded a.s., i.e., sup_t |Ĵ^i_t| < ∞ a.s., for any i ∈ N.

The proof follows the same steps as that of Lemma B.2 in the PMLR version of Zhang et al. [2018]. The desired result holds since Step 1 and Step 2 of the proof of Theorem 4.6 in Zhang et al. [2018] can both be repeated in the setting of deterministic policies.

Proof of Theorem 5

Let F_{t,2} = σ(θ_τ, s_τ, τ ≤ t) be a filtration. In addition, we define

H(θ, s, ω) = ∇_θ μ_θ(s) · ∇_a Q_ω(s,a)|_{a=μ_θ(s)},  H(θ, s) = H(θ, s, ω_θ),  h(θ) = E_{s∼d_θ}[H(θ, s)].

Then, for each θ ∈ Θ, we can introduce ν_θ: S → R^n, the solution to the Poisson equation

(I − P^θ) ν_θ(·) = H(θ, ·) − h(θ),

given by ν_θ(s) = Σ_{k≥0} E_{s_{k+1}∼P^θ(·|s_k)}[H(θ, s_k) − h(θ) | s_0 = s], which is properly defined (similarly to the differential value function V).

With projection, the actor update (5) becomes

θ_{t+1} = Γ[θ_t + β_{θ,t} H(θ_t, s_t, ω_t)]    (20)
= Γ[θ_t + β_{θ,t} h(θ_t) − β_{θ,t}(h(θ_t) − H(θ_t, s_t)) − β_{θ,t}(H(θ_t, s_t) − H(θ_t, s_t, ω_t))]
= Γ[θ_t + β_{θ,t} h(θ_t) + β_{θ,t}((I − P^{θ_t}) ν_{θ_t}(s_t)) + β_{θ,t} A^1_t]
= Γ[θ_t + β_{θ,t} h(θ_t) + β_{θ,t}(ν_{θ_t}(s_t) − ν_{θ_t}(s_{t+1})) + β_{θ,t}(ν_{θ_t}(s_{t+1}) − P^{θ_t} ν_{θ_t}(s_t)) + β_{θ,t} A^1_t]
= Γ[θ_t + β_{θ,t}(h(θ_t) + A^1_t + A^2_t + A^3_t)],

where

A^1_t = H(θ_t, s_t, ω_t) − H(θ_t, s_t),  A^2_t = ν_{θ_t}(s_t) − ν_{θ_t}(s_{t+1}),  A^3_t = ν_{θ_t}(s_{t+1}) − P^{θ_t} ν_{θ_t}(s_t).
For r < t we have

Σ_{k=r}^{t−1} β_{θ,k} A^2_k = Σ_{k=r}^{t−1} β_{θ,k}(ν_{θ_k}(s_k) − ν_{θ_k}(s_{k+1}))
= Σ_{k=r}^{t−1} β_{θ,k}(ν_{θ_k}(s_k) − ν_{θ_{k+1}}(s_{k+1})) + Σ_{k=r}^{t−1} β_{θ,k}(ν_{θ_{k+1}}(s_{k+1}) − ν_{θ_k}(s_{k+1}))
= Σ_{k=r}^{t−1} (β_{θ,k+1} − β_{θ,k}) ν_{θ_{k+1}}(s_{k+1}) + β_{θ,r} ν_{θ_r}(s_r) − β_{θ,t} ν_{θ_t}(s_t) + Σ_{k=r}^{t−1} ε^{(2)}_k
= Σ_{k=r}^{t−1} ε^{(1)}_k + Σ_{k=r}^{t−1} ε^{(2)}_k + η_{r,t},

where

ε^{(1)}_k = (β_{θ,k+1} − β_{θ,k}) ν_{θ_{k+1}}(s_{k+1}),  ε^{(2)}_k = β_{θ,k}(ν_{θ_{k+1}}(s_{k+1}) − ν_{θ_k}(s_{k+1})),  η_{r,t} = β_{θ,r} ν_{θ_r}(s_r) − β_{θ,t} ν_{θ_t}(s_t).

Lemma 5. Σ_{k=0}^{t−1} β_{θ,k} A^2_k converges a.s. as t → ∞.

Proof of Lemma 5. Since ν_θ(s) is uniformly bounded for θ ∈ Θ, s ∈ S, we have, for some K > 0,

Σ_{k=0}^{t−1} ‖ε^{(1)}_k‖ ≤ K Σ_{k=0}^{t−1} |β_{θ,k+1} − β_{θ,k}|,

which converges given Assumption 5. Moreover, since μ_θ(s) is twice continuously differentiable, θ ↦ ν_θ(s) is Lipschitz for each s, and so we have

Σ_{k=0}^{t−1} ‖ε^{(2)}_k‖ ≤ Σ_{k=0}^{t−1} β_{θ,k} ‖ν_{θ_k}(s_{k+1}) − ν_{θ_{k+1}}(s_{k+1})‖ ≤ K_2 Σ_{k=0}^{t−1} β_{θ,k} ‖θ_k − θ_{k+1}‖ ≤ K_3 Σ_{k=0}^{t−1} β²_{θ,k}.

Finally, lim_{t→∞} ‖η_{0,t}‖ = β_{θ,0} ‖ν_{θ_0}(s_0)‖ < ∞ a.s. Thus Σ_{k=0}^{t−1} β_{θ,k} A^2_k = Σ_{k=0}^{t−1} ε^{(1)}_k + Σ_{k=0}^{t−1} ε^{(2)}_k + η_{0,t}, where each term on the right converges a.s., which proves the lemma.

Lemma 6. Σ_{k=0}^{t−1} β_{θ,k} A^3_k converges a.s. as t → ∞.

Proof of Lemma 6. We set

Z_t = Σ_{k=0}^{t−1} β_{θ,k} A^3_k = Σ_{k=0}^{t−1} β_{θ,k}(ν_{θ_k}(s_{k+1}) − P^{θ_k} ν_{θ_k}(s_k)).

Since Z_t is F_t-adapted and E[ν_{θ_t}(s_{t+1}) | F_t] = P^{θ_t} ν_{θ_t}(s_t), Z_t is a martingale. The remainder of the proof is similar to the proof of Lemma 2 on page 224 of Benveniste et al. [1990].

Let g^i(θ_t) = E_{s_t∼d_{θ_t}}[ψ^i_t · ξ^i_{t,θ_t} | F_{t,2}] and g(θ) = [g^1(θ), ..., g^N(θ)]. We have

g^i(θ_t) = Σ_{s_t∈S} d_{θ_t}(s_t) · ψ^i_t · ξ^i_{t,θ_t}.

Given (10), θ ↦ ω_θ is continuously differentiable and θ ↦ ∇_θ ω_θ is bounded, so θ ↦ ω_θ is Lipschitz-continuous. Thus θ ↦ ξ^i_{t,θ} is Lipschitz-continuous for each s_t ∈ S. By our regularity assumptions, θ ↦ ψ^i_{t,θ} is also continuous for each i ∈ N, s_t ∈ S. Moreover, θ ↦ d_θ(s) is also Lipschitz-continuous for each s ∈ S. Hence, θ ↦ g(θ) is Lipschitz-continuous in θ and the ODE (12) is well-posed. This holds even when using compatible features.

By the critic's faster convergence, we have lim_{t→∞} ‖ξ^i_t − ξ^i_{t,θ_t}‖ = 0, so lim_{t→∞} A^1_t = 0. Hence, by the Kushner-Clark lemma (Kushner and Clark [1978], pp. 191–196), the update in (20) converges a.s. to the set of asymptotically stable equilibria of the ODE (12).

Proof of Theorem 6

We use the two-time-scale technique: since the critic updates at a faster rate than the actor, we let the policy parameter θ_t be fixed at θ when analysing the convergence of the critic update.

Lemma 7. Under Assumptions 4, 1, and 6, for any i ∈ N, the sequence {λ^i_t} generated from (7) is bounded almost surely.

To prove this lemma we verify the conditions for Theorem A.2 of Zhang et al. [2018] to hold. We use {F_{t,1}} to denote the filtration with F_{t,1} = σ(s_τ, C_{τ−1}, a_{τ−1}, r_τ, λ_τ, τ ≤ t). With λ_t = [(λ^1_t)^⊤, ..., (λ^N_t)^⊤]^⊤, the critic step (7) has the form

λ_{t+1} = (C_t ⊗ I)(λ_t + β_{λ,t} · y_{t+1})    (21)

with y_{t+1} = (δ^1_t w(s_t, a_t)^⊤, ..., δ^N_t w(s_t, a_t)^⊤)^⊤ ∈ R^{KN}, where ⊗ denotes the Kronecker product and I is the identity matrix. Using the same notation as in Assumption A.1 from Zhang et al. [2018], we have:
h^i(λ^i_t, s_t) = E_{a∼π}[δ^i_t w(s_t, a)^⊤ | F_{t,1}] = ∫_A π(a|s_t)(R^i(s_t, a) − w(s_t, a) · λ^i_t) w(s_t, a)^⊤ da,
M^i_{t+1} = δ^i_t w(s_t, a_t)^⊤ − E_{a∼π}[δ^i_t w(s_t, a)^⊤ | F_{t,1}],
h̄^i(λ_t) = A^i_{π,θ} · d^s_π − B_{π,θ} · λ^i_t,

where A^i_{π,θ} = [∫_A π(a|s) R^i(s,a) w(s,a)^⊤ da, s ∈ S].

Since the feature vectors are uniformly bounded for any s ∈ S and a ∈ A, h^i is Lipschitz-continuous in its first argument. Since, for i ∈ N, the r^i are also uniformly bounded, E[‖M_{t+1}‖² | F_{t,1}] ≤ K·(1 + ‖λ_t‖²) for some K > 0. Furthermore, the finiteness of |S| ensures that, a.s., ‖h̄(λ_t) − h(λ_t, s_t)‖² ≤ K'·(1 + ‖λ_t‖²). Finally, h_∞(y) exists and has the form

h_∞(y) = −B_{π,θ} · y.

From Assumption 1, −B_{π,θ} is a Hurwitz matrix, so the origin is a globally asymptotically stable attractor of the ODE ẏ = h_∞(y). Hence Theorem A.2 of Zhang et al. [2018] applies, which concludes the proof of Lemma 7.

We introduce the following operators as in Zhang et al. [2018]:

• ⟨·⟩: R^{KN} → R^K, ⟨λ⟩ = (1/N)(1^⊤ ⊗ I) λ = (1/N) Σ_{i∈N} λ^i.

• J = ((1/N) 1 1^⊤ ⊗ I): R^{KN} → R^{KN}, so that J λ = 1 ⊗ ⟨λ⟩.

• J_⊥ = I − J: R^{KN} → R^{KN}, and we write λ_⊥ = J_⊥ λ = λ − 1 ⊗ ⟨λ⟩.

We then proceed in two steps as in Zhang et al. [2018]: first we show the a.s. convergence of the disagreement vector sequence {λ_{⊥,t}} to zero; then we show that the consensus vector sequence {⟨λ_t⟩} converges to the equilibrium, so that ⟨λ_t⟩ solves (13).

Lemma 8. Under Assumptions 4, 1, and 6, for any M > 0, we have

sup_t E[ ‖β^{−1}_{λ,t} λ_{⊥,t}‖² 1_{{sup_t ‖λ_t‖ ≤ M}} ] < ∞.

Since the dynamics of {λ_t} described by (21) are similar to (5.2) in Zhang et al. [2018], we have

E[ ‖β^{−1}_{λ,t+1} λ_{⊥,t+1}‖² | F_{t,1} ] ≤ (β²_{λ,t} / β²_{λ,t+1}) ρ ( ‖β^{−1}_{λ,t} λ_{⊥,t}‖² + 2·‖β^{−1}_{λ,t} λ_{⊥,t}‖·E(‖y_{t+1}‖² | F_{t,1})^{1/2} + E(‖y_{t+1}‖² | F_{t,1}) )    (22)

where ρ denotes the spectral norm of E[C^⊤_t · (I − 1 1^⊤/N) · C_t], with ρ ∈ [0,1) by Assumption 4. Since y^i_{t+1} = δ^i_t · w(s_t, a_t)^⊤, we have

E[‖y_{t+1}‖² | F_{t,1}] = E[ Σ_{i∈N} ‖(r^i(s_t, a_t) − w(s_t, a_t) λ^i_t) · w(s_t, a_t)^⊤‖² | F_{t,1} ] ≤ 2·E[ Σ_{i∈N} ‖r^i(s_t, a_t) w(s_t, a_t)^⊤‖² + ‖w(s_t, a_t)^⊤‖⁴·‖λ^i_t‖² | F_{t,1} ].

By the uniform boundedness of r(s,·) and w(s,·) (Assumption 1) and the finiteness of S, there exists K_1 > 0 such that

E[‖y_{t+1}‖² | F_{t,1}] ≤ K_1 (1 + ‖λ_t‖²).

Thus, for any M > 0 there exists K_2 > 0 such that, on the set {sup_{τ≤t} ‖λ_τ‖ < M},

E[ ‖y_{t+1}‖² 1_{{sup_{τ≤t} ‖λ_τ‖ < M}} | F_{t,1} ] ≤ K_2.    (23)

We let v_t = ‖β^{−1}_{λ,t} λ_{⊥,t}‖² 1_{{sup_{τ≤t} ‖λ_τ‖ < M}}. Taking expectations over (22) and noting that 1_{{sup_{τ≤t+1} ‖λ_τ‖ < M}} ≤ 1_{{sup_{τ≤t} ‖λ_τ‖ < M}}, we get

E(v_{t+1}) ≤ (β²_{λ,t} / β²_{λ,t+1}) ρ ( E(v_t) + 2 √(E(v_t)) √(K_2) + K_2 ),

which is the same expression as (5.10) in Zhang et al. [2018]. So conclusions similar to those of Step 1 of Zhang et al. [2018] hold:

sup_t E[ ‖β^{−1}_{λ,t} λ_{⊥,t}‖² 1_{{sup_t ‖λ_t‖ ≤ M}} ] < ∞    (24)

and

lim_t λ_{⊥,t} = 0 a.s.    (25)
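As a numerical aside, the operators ⟨·⟩, J and J_⊥ and the effect of the consensus step in (21) can be checked with a few lines of NumPy (our sketch, not from the paper); the mixing matrix C_t below is the extreme one-step-averaging choice 1 1^⊤/N, used only to make the contraction of the disagreement component immediately visible.

```python
import numpy as np

N, K = 4, 3                                 # agents and feature dimension
rng = np.random.default_rng(1)

Ct = np.full((N, N), 1.0 / N)               # a doubly stochastic mixing matrix

def avg(lam):                               # <lambda> = (1/N)(1^T kron I) lambda
    return lam.reshape(N, K).mean(axis=0)

def J(lam):                                 # J lambda = 1 kron <lambda>
    return np.tile(avg(lam), N)

def J_perp(lam):                            # disagreement component
    return lam - J(lam)

lam = rng.standard_normal(N * K)
mixed = np.kron(Ct, np.eye(K)) @ lam        # one consensus step (C_t kron I) lambda

# Averaging is preserved, disagreement is contracted:
assert np.allclose(avg(mixed), avg(lam))
print(np.linalg.norm(J_perp(mixed)), "<=", np.linalg.norm(J_perp(lam)))
```

For a general doubly stochastic C_t the disagreement norm shrinks by at most the factor ρ of (22) per step rather than vanishing in one step as in this extreme example.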
We now show convergence of the consensus vector 1 ⊗ ⟨λ_t⟩. Based on (21) we have

⟨λ_{t+1}⟩ = ⟨(C_t ⊗ I)(1 ⊗ ⟨λ_t⟩ + λ_{⊥,t} + β_{λ,t} y_{t+1})⟩ = ⟨λ_t⟩ + ⟨λ_{⊥,t}⟩ + β_{λ,t} ⟨(C_t ⊗ I)(y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t})⟩ = ⟨λ_t⟩ + β_{λ,t}(h(λ_t, s_t) + M_{t+1}),

where h(λ_t, s_t) = E_{a_t∼π}[⟨y_{t+1}⟩ | F_t] and M_{t+1} = ⟨(C_t ⊗ I)(y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t})⟩ − E_{a_t∼π}[⟨y_{t+1}⟩ | F_t].

Since ⟨δ_t⟩ = r̄(s_t, a_t) − w(s_t, a_t)⟨λ_t⟩, we have

h(λ_t, s_t) = E_{a_t∼π}( r̄(s_t, a_t) w(s_t, a_t)^⊤ | F_t ) − E_{a_t∼π}( w(s_t, a_t)⟨λ_t⟩ · w(s_t, a_t)^⊤ | F_{t,1} ),

so h is Lipschitz-continuous in its first argument. Moreover, since ⟨λ_{⊥,t}⟩ = 0 and 1^⊤ E(C_t | F_{t,1}) = 1^⊤ a.s.:

E_{a_t∼π}[⟨(C_t ⊗ I)(y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t})⟩ | F_{t,1}]
= E_{a_t∼π}[ (1/N)(1^⊤ ⊗ I)(C_t ⊗ I)(y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t}) | F_{t,1} ]
= (1/N)(1^⊤ ⊗ I)(E(C_t | F_{t,1}) ⊗ I) E_{a_t∼π}[y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t} | F_{t,1}]
= (1/N)(1^⊤ E(C_t | F_{t,1}) ⊗ I) E_{a_t∼π}[y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t} | F_{t,1}]
= E_{a_t∼π}[⟨y_{t+1}⟩ | F_{t,1}] a.s.

So {M_t} is a martingale difference sequence. Additionally, we have

E[‖M_{t+1}‖² | F_{t,1}] ≤ 2·E[ ‖y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t}‖²_{G_t} | F_{t,1} ] + 2·‖E[⟨y_{t+1}⟩ | F_{t,1}]‖²,

with G_t = N^{−2}·C^⊤_t 1 1^⊤ C_t ⊗ I, whose spectral norm is bounded since C_t is stochastic. From (23) and (24) we have that, for any M > 0, on the set {sup_t ‖λ_t‖ ≤ M}, there exist K_3, K_4 < ∞ such that

E[ ‖y_{t+1} + β^{−1}_{λ,t} λ_{⊥,t}‖²_{G_t} | F_{t,1} ] 1_{{sup_t ‖λ_t‖ ≤ M}} ≤ K_3·E[ ‖y_{t+1}‖² + ‖β^{−1}_{λ,t} λ_{⊥,t}‖² | F_{t,1} ] 1_{{sup_t ‖λ_t‖ ≤ M}} ≤ K_4.

Besides, since r^i_{t+1} and w are uniformly bounded, there exists K_5 < ∞ such that ‖E[⟨y_{t+1}⟩ | F_{t,1}]‖² ≤ K_5·(1 + ‖⟨λ_t⟩‖²). Thus, for any M > 0, there exists some K_6 < ∞ such that, on the set {sup_t ‖λ_t‖ ≤ M},

E[‖M_{t+1}‖² | F_{t,1}] ≤ K_6·(1 + ‖⟨λ_t⟩‖²).

Hence, for any M > 0, assumptions (a.1)–(a.5) of B.1. from Zhang et al. [2018] are verified on the set {sup_t ‖λ_t‖ ≤ M}. Finally, we consider the ODE asymptotically followed by ⟨λ_t⟩:

⟨λ̇_t⟩ = −B_{π,θ}·⟨λ_t⟩ + A_{π,θ}·d^s_π,

which has a single globally asymptotically stable equilibrium λ* ∈ R^K, since B_{π,θ} is positive definite: λ* = B^{−1}_{π,θ}·A_{π,θ}·d^s_π. By Lemma 7, sup_t ‖⟨λ_t⟩‖ < ∞ a.s., so all conditions to apply Theorem B.2. of Zhang et al. [2018] hold a.s., which means that ⟨λ_t⟩ → λ* a.s. as t → ∞. As λ_t = 1 ⊗ ⟨λ_t⟩ + λ_{⊥,t} and λ_{⊥,t} → 0 a.s., we have, for each i ∈ N, a.s.,

λ^i_t → B^{−1}_{π,θ}·A_{π,θ}·d^s_π as t → ∞.

Proof of Theorem 7

Let F_{t,2} = σ(θ_τ, τ ≤ t) be the σ-field generated by {θ_τ, τ ≤ t}, and let

ζ^i_{t,1} = ψ^i_t · ξ^i_t − E_{s_t∼d_π}[ψ^i_t · ξ^i_t | F_{t,2}],  ζ^i_{t,2} = E_{s_t∼d_π}[ψ^i_t · (ξ^i_t − ξ^i_{t,θ_t}) | F_{t,2}].

With local projection, the actor update (6) becomes

θ^i_{t+1} = Γ^i[ θ^i_t + β_{θ,t} E_{s_t∼d_π}[ψ^i_t · ξ^i_{t,θ_t} | F_{t,2}] + β_{θ,t} ζ^i_{t,1} + β_{θ,t} ζ^i_{t,2} ].    (26)

So, with h^i(θ_t) = E_{s_t∼d_π}[ψ^i_t · ξ^i_{t,θ_t} | F_{t,2}] and h(θ) = [h^1(θ), ..., h^N(θ)], we have

h^i(θ_t) = Σ_{s_t∈S} d_π(s_t) · ψ^i_t · ξ^i_{t,θ_t}.

Given (10), θ ↦ λ_θ is continuously differentiable and θ ↦ ∇_θ λ_θ is bounded, so θ ↦ λ_θ is Lipschitz-continuous. Thus θ ↦ ξ^i_{t,θ} is Lipschitz-continuous for each s_t ∈ S. Our regularity assumptions ensure that θ ↦ ψ^i_{t,θ} is continuous for each i ∈ N, s_t ∈ S. Hence, θ ↦ h(θ) is Lipschitz-continuous in θ and the ODE (14) is well-posed. This holds even when using compatible features.

By the critic's faster convergence, we have lim_{t→∞} ‖ξ^i_t − ξ^i_{t,θ_t}‖ = 0.

Let M^i_t = Σ_{τ=0}^{t−1} β_{θ,τ} ζ^i_{τ,1}. M^i_t is a martingale sequence with respect to F_{t,2}.
Since {ω_t}_t, {∇_a φ_k(s,a)}_{s,k}, and {∇_θ μ_θ(s)}_s are bounded (Lemma 3, Assumption 2), it follows that the sequence {ζ^i_{t,1}} is bounded. Thus, by Assumption 5,

Σ_t E[ ‖M^i_{t+1} − M^i_t‖² | F_{t,2} ] = Σ_t ‖β_{θ,t} ζ^i_{t,1}‖² < ∞ a.s.

The martingale convergence theorem ensures that {M^i_t} converges a.s. Thus, for any ε > 0,

lim_t P( sup_{n≥t} ‖ Σ_{τ=t}^{n} β_{θ,τ} ζ^i_{τ,1} ‖ ≥ ε ) = 0.

Hence, by the Kushner-Clark lemma (Kushner and Clark [1978], pp. 191–196), the update in (26) converges a.s. to the set of asymptotically stable equilibria of the ODE (14).
This paper extends the results for actor-critic with stochastic policies of [Zhang, ICML 2018] to deterministic policies and offers a proof of convergence under some specific assumptions. The authors consider both the on-policy setting and the off-policy setting and offer a convincing derivation. The paper provides a valuable idea and a promising direction in MARL, but the current version has several problems that need to be fixed. Specifically, some of the equations, algorithms, and expressions are ambiguous and hard to follow. Besides, problems with the formatting of formulas and citations also exist, which degrade the paper's quality and clarity.
SP:9326f169cc5e8d2f4268dcf39af31590ee004d98
Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
1 INTRODUCTION. State-of-the-art for most existing natural language processing (NLP) classification tasks is achieved by models that are first pre-trained on auxiliary language modeling tasks and then fine-tuned on the task of interest with cross-entropy loss (Radford et al., 2019; Howard & Ruder, 2018; Liu et al., 2019; Devlin et al., 2019). Although ubiquitous, the cross-entropy loss – the KL-divergence between one-hot vectors of labels and the distribution of the model's output logits – has several shortcomings. Cross-entropy loss leads to poor generalization performance (Liu et al., 2016; Cao et al., 2019), and it lacks robustness to noisy labels (Zhang & Sabuncu, 2018; Sukhbaatar et al., 2015) or adversarial examples (Elsayed et al., 2018; Nar et al., 2019). Effective alternatives have been proposed that modify the reference label distributions through label smoothing (Szegedy et al., 2016; Müller et al., 2019), Mixup (Zhang et al., 2018), CutMix (Yun et al., 2019), knowledge distillation (Hinton et al., 2015) or self-training (Yalniz et al., 2019; Xie et al., 2020). Fine-tuning with cross-entropy loss in NLP also tends to be unstable across different runs (Zhang et al., 2020; Dodge et al., 2020), especially when supervised data is limited, a scenario in which pre-training is particularly helpful. To tackle the issue of unstable fine-tuning and poor generalization, recent works propose local smoothness-inducing regularizers (Jiang et al., 2020) and regularization methods inspired by trust-region theory (Aghajanyan et al., 2020) to prevent representation collapse. Empirical evidence suggests that fine-tuning for more iterations, reinitializing the top few layers (Zhang et al., 2020), and using a debiased Adam optimizer during fine-tuning (Mosbach et al., 2020) can make the fine-tuning stage more stable. Inspired by the learning strategy that humans utilize when given a few examples, we seek to find the commonalities between the examples of each class and contrast them with examples from other classes. We hypothesize that a similarity-based loss will be able to hone in on the important dimensions of the multidimensional hidden representations, hence lead to better few-shot learning results, and be more stable while fine-tuning pre-trained language models. We propose a novel objective for fine-tuning that includes a supervised contrastive learning (SCL) term that pushes the examples from the same class close and the examples from different classes further apart. (*Work done during a Facebook AI research internship; correspondence to bgunel@stanford.edu.) The SCL term is similar to the contrastive objectives used in self-supervised representation learning across image, speech, and video domains (Sohn, 2016; Oord et al., 2018; Wu et al., 2018; Bachman et al., 2019; Hénaff et al., 2019; Baevski et al., 2020; Conneau et al., 2020; Tian et al., 2020; Hjelm et al., 2019; Han et al., 2019; He et al., 2020; Misra & Maaten, 2020; Chen et al., 2020a;b). Unlike these methods, however, we use a contrastive objective for supervised learning of the final task, instead of contrasting different augmented views of examples.
In few-shot learning settings (20, 100, 1000 labeled examples), the addition of the SCL term to the fine-tuning objective significantly improves the performance on several natural language understanding classification tasks from the popular GLUE benchmark (Wang et al., 2019) over the very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss only. Furthermore, pre-trained language models fine-tuned with our proposed objective are not only robust to noise in the fine-tuning training data, but can also exhibit improved generalization to related tasks with limited labeled task data. Our approach does not require any specialized network architectures (Bachman et al., 2019; Hénaff et al., 2019), memory banks (Wu et al., 2018; Tian et al., 2020; Misra & Maaten, 2020), data augmentation of any kind, or additional unsupervised data. To the best of our knowledge, our work is the first to successfully integrate a supervised contrastive learning objective for fine-tuning pre-trained language models. We empirically demonstrate that the new objective has desirable properties across several different settings. Our contributions in this work are the following:

• We propose a novel objective for fine-tuning pre-trained language models that includes a supervised contrastive learning term, as described in Section 2.

• We obtain strong improvements in the few-shot learning settings (20, 100, 1000 labeled examples) as shown in Table 2, leading up to a 10.7-point improvement on a subset of GLUE benchmark tasks (SST-2, QNLI, MNLI) for the 20-labeled-example few-shot setting, over a very strong baseline – RoBERTa-Large fine-tuned with cross-entropy loss.

• We demonstrate that our proposed fine-tuning objective is more robust, in comparison to RoBERTa-Large fine-tuned with cross-entropy loss, across augmented noisy training datasets (used to fine-tune the models for the task of interest) with varying noise levels, as shown in Table 3 – leading up to a 7-point improvement on a subset of GLUE benchmark tasks (SST-2, QNLI, MNLI) across augmented noisy training datasets. We use a back-translation model to construct the augmented noisy training datasets of varying noise levels (controlled by the temperature parameter), as described in detail in Section 4.2.

• We show that the task models fine-tuned with our proposed objective have improved generalizability to related tasks despite limited availability of labeled task data (Table 7). This led to a 2.9-point improvement on Amazon-2 over the task model fine-tuned with cross-entropy loss only. Moreover, it considerably reduced the variance across few-shot training samples when transferred from the source SST-2 sentiment analysis task model.

2 APPROACH. We propose a novel objective that includes a supervised contrastive learning term for fine-tuning pre-trained language models. The loss is meant to capture the similarities between examples of the same class and contrast them with the examples from other classes. For a multi-class classification problem with C classes, we work with a batch of training examples of size N, {x_i, y_i}_{i=1,...,N}.
Φ(·) ∈ R^d denotes an encoder that outputs the ℓ2-normalized final encoder hidden layer before the softmax projection; N_{y_i} is the total number of examples in the batch that have the same label as y_i; τ > 0 is an adjustable scalar temperature parameter that controls the separation of classes; y_{i,c} denotes the label and ŷ_{i,c} denotes the model output for the probability of the i-th example belonging to class c; λ is a scalar weighting hyperparameter that we tune for each downstream task and setting. The overall loss is then given by:

L = (1 − λ) L_CE + λ L_SCL    (1)

L_CE = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} y_{i,c} · log ŷ_{i,c}    (2)

L_SCL = Σ_{i=1}^{N} −(1/(N_{y_i} − 1)) Σ_{j=1}^{N} 1_{i≠j} 1_{y_i=y_j} log [ exp(Φ(x_i)·Φ(x_j)/τ) / Σ_{k=1}^{N} 1_{i≠k} exp(Φ(x_i)·Φ(x_k)/τ) ]    (3)

The overall loss is a weighted average of the CE loss and the proposed SCL loss, as given in equation (1). The canonical definition of the multi-class CE loss that we use is given in equation (2). The novel SCL loss is given in equation (3). This loss can be applied using a variety of encoders Φ(·) ∈ R^d – for example a ResNet for a computer vision application or a pre-trained language model such as BERT for an NLP application. In this work, we focus on fine-tuning pre-trained language models for single-sentence and sentence-pair classification settings. For single-sentence classification, each example x_i consists of a sequence of tokens prepended with the special [CLS] token, x_i = [[CLS], t_1, t_2, ..., t_L, [EOS]]. The length L of the sequence is constrained such that L < L_max. Similarly, for sentence-pair classification tasks, each example x_i is a concatenation of two sequences of tokens [t_1, t_2, ..., t_L] and [s_1, s_2, ..., s_M] corresponding to the sentences, with special tokens delimiting them: x_i = [[CLS], t_1, t_2, ..., t_L, [SEP], s_1, s_2, ..., s_M, [EOS]]. The length of the concatenated sequences is constrained such that L + M < L_max. In both cases, Φ(x_i) ∈ R^d uses the embedding of the [CLS] token as the representation for example x_i. These choices follow standard practices for fine-tuning pre-trained language models for classification (Devlin et al., 2019; Liu et al., 2019). Empirical observations show that both ℓ2 normalization of the encoded embedding representations and an adjustable scalar temperature parameter τ improve performance. A lower temperature increases the influence of examples that are harder to separate, effectively creating harder negatives. Using hard negatives has previously been shown to improve performance in the context of margin-based loss formulations such as the triplet loss (Schroff et al., 2015). The empirical behavior of the adjustable temperature parameter is consistent with the observations of previous work related to supervised contrastive learning (Chen et al., 2020a; Khosla et al., 2020).
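For concreteness, the combined objective in equations (1)–(3) can be written in a few lines of NumPy. The sketch below is ours, not the authors' released code; it assumes the ℓ2-normalized [CLS] embeddings and the output probabilities are precomputed, whereas a practical implementation would backpropagate through the encoder.

```python
import numpy as np

def scl_ce_loss(emb, probs, labels, tau=0.3, lam=0.9):
    """Sketch of the overall objective (1): (1 - lam) * L_CE + lam * L_SCL.

    emb:    (N, d) l2-normalized [CLS] embeddings Phi(x_i)
    probs:  (N, C) model output probabilities y_hat
    labels: (N,)   integer class labels
    """
    N = emb.shape[0]
    # Cross-entropy term, equation (2)
    l_ce = -np.mean(np.log(probs[np.arange(N), labels] + 1e-12))

    # Supervised contrastive term, equation (3)
    sim = emb @ emb.T / tau                        # Phi(x_i) . Phi(x_j) / tau
    np.fill_diagonal(sim, -np.inf)                 # enforce the 1_{i != k} mask
    m = sim.max(axis=1, keepdims=True)             # stabilized log-sum-exp
    log_den = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)                   # enforce the 1_{i != j} mask
    n_pos = pos.sum(axis=1)                        # this is N_{y_i} - 1
    log_prob = np.where(pos, sim - log_den[:, None], 0.0)
    valid = n_pos > 0                              # anchors with at least one positive
    l_scl = -np.sum(log_prob[valid].sum(axis=1) / n_pos[valid])

    return (1.0 - lam) * l_ce + lam * l_scl

rng = np.random.default_rng(0)
emb = rng.standard_normal((8, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = rng.integers(0, 2, size=8)
probs = np.full((8, 2), 0.5)
print(scl_ce_loss(emb, probs, labels))
```

Anchors without any positive in the batch are skipped here, one reasonable convention for the degenerate N_{y_i} = 1 case that equation (3) leaves implicit.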
Relationship to Self-Supervised Contrastive Learning. Self-supervised contrastive learning has shown success in learning powerful representations, particularly in the computer vision domain (Chen et al., 2020a; He et al., 2020; Tian et al., 2020; Mnih & Kavukcuoglu, 2013; Gutmann & Hyvärinen, 2012; Kolesnikov et al., 2019). Self-supervised learning methods do not require any labeled data; instead, they sample a mini-batch from unsupervised data and create positive and negative examples from these samples using strong data augmentation techniques such as AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) for computer vision. Positive examples are constructed by applying data augmentation to the same example (cropping, flipping, etc. for an image), and negative examples are simply all the other examples in the sampled mini-batch. Intuitively, self-supervised contrastive objectives learn representations that are invariant to different views of positive pairs while maximizing the distance between negative pairs. The distance metric used is often the inner product or the Euclidean distance between vector representations of the examples. For a batch of size N, the self-supervised contrastive loss is defined as:

L_self = Σ_{i=1}^{2N} −log [ exp(Φ(x'_{2i−1})·Φ(x'_{2i})/τ) / Σ_{k=1}^{2N} 1_{i≠k} exp(Φ(x'_i)·Φ(x'_k)/τ) ]    (4)

where Φ(·) ∈ R^d denotes an encoder that outputs the ℓ2-normalized final encoder hidden layer before the softmax projection, and τ > 0 is a scalar temperature parameter. A is defined as a data augmentation block that generates two randomly augmented examples, x'_{2i} and x'_{2i−1}, from the original example x_i: A({x_i, y_i}_{i=1,...,N}) = {x'_i, y'_i}_{i=1,...,2N}. As an example, A can be RandAugment for a computer vision application, or it could be a back-translation model for an NLP application.
The paper proposes a new training objective for fine-tuning pre-trained models: a weighted sum of the classical cross-entropy (CE) loss and a new supervised contrastive learning term (SCL). The latter uses the (negated) softmax over the embedding similarities (i.e., dot products) between a training instance and all other instances in the batch with the same label. In contrast to the more traditional self-supervised contrastive learning (where positive pairs are obtained by applying transformations to the original data instance), there is no data augmentation; two examples with the same label constitute a positive pair.
SP:cc282126b689c7311c3a28f0d173a004ed24382f
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Reinforcement learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small, and RUDDER's LSTM, as a deep learning model, does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER's safe exploration and lessons replay buffer. Second, we substitute RUDDER's LSTM model with a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently.

1 INTRODUCTION. Reinforcement learning algorithms struggle with learning complex tasks that have sparse and delayed rewards (Sutton & Barto, 2018; Rahmandad et al., 2009; Luoma et al., 2017). For delayed rewards, temporal difference (TD) learning suffers from vanishing information (Arjona-Medina et al., 2019). On the other hand, Monte Carlo (MC) has high variance, since it must average over all possible futures (Arjona-Medina et al., 2019). Monte-Carlo Tree Search (MCTS), used for Go and chess, can handle delayed and rare rewards since it has a perfect environment model (Silver et al., 2016; 2017). RUDDER (Arjona-Medina et al., 2019; 2018) has been shown to excel in model-free learning of policies when only sparse and delayed rewards are given. RUDDER requires episodes with high rewards to store in its lessons replay buffer for learning a reward redistribution model such as an LSTM network. However, for complex tasks, current exploration strategies find episodes with high rewards only after an incommensurately long time. Humans and animals obtain high-reward episodes from teachers, role models, or prototypes. Along this line, we assume that episodes with high rewards are given as demonstrations. Since generating demonstrations is often tedious for humans and time-consuming for exploration strategies, typically only a few demonstrations are available. However, RUDDER's LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997a), as a deep learning method, requires many examples for learning. Therefore, we introduce Align-RUDDER, which replaces RUDDER's LSTM with a profile model obtained from multiple sequence alignment (MSA) of the demonstrations. Profile models are well known in bioinformatics. They are used to score new sequences according to their sequence similarity to the aligned sequences.
Like RUDDER, Align-RUDDER performs reward redistribution, using an alignment model, which considerably speeds up learning even if only a few demonstrations are available. Our main contributions are:

• We suggest a reinforcement learning algorithm that works well for sparse and delayed rewards, where standard exploration fails but a few demonstrations with high rewards are available.

• We adopt multiple sequence alignment from bioinformatics to construct a reward redistribution technique that works with few demonstrations.

• We propose a method that uses alignment techniques and reward redistribution for identifying sub-goals and sub-tasks, which in turn allow for hierarchical reinforcement learning.

2 REVIEW OF RUDDER. Basic insight: Q-functions for complex tasks are step functions. Complex tasks are typically composed of sub-tasks. Therefore, the Q-function of an optimal policy resembles a step function. The Q-function is the expected future return, and it increases (i.e., makes a step) when a sub-task is completed. Identifying large steps in the Q-function speeds up learning since it allows (i) increasing the return by performing actions that cause the step and (ii) sampling episodes with a larger return for learning. An approximation to the Q-function must predict the expected future return for every state-action pair. However, a Q-function that resembles a step function is mostly constant. Therefore, predictions are only necessary at the steps. We have to identify the relevant state-actions that cause the steps and then predict the size of the steps. An LSTM network (Hochreiter, 1991; Hochreiter & Schmidhuber, 1995; 1997a;b) can identify relevant state-actions that open the input gate to store the size of the steps in the memory cells. Consequently, LSTM only updates its states and changes its return prediction when a new relevant state-action pair is observed. Therefore, both the change of the prediction and the opening of input gates indicate Q-function steps through an LSTM network that predicts the return of an episode.

Reward Redistribution. We consider episodic Markov decision processes (MDPs), i.e., the reward is only given once at the end of the sequence. The Q-function is assumed to be a step function, that is, the task can be decomposed into sub-tasks (see previous paragraph). Reward redistribution aims at giving the differences in the Q-function of an optimal policy as a new immediate reward. Since the Q-function of an optimal policy is not known, we approximate it by predicting the expected return with an LSTM network or, in this work, with an alignment model. The differences in predictions determine the reward redistribution. The prediction model will first identify the largest steps in the Q-function, as they decrease the prediction error most. Fortunately, just identifying the largest steps, even with poor predictions, speeds up learning considerably. See Figure 1 for a description of the reward redistribution.

Learning methods based on reward redistribution. The redistributed reward serves as the reward for a subsequent learning method: (A) The Q-values can be directly estimated (Arjona-Medina et al., 2019), which is used in the experiments for the artificial tasks and for BC pre-training for MineCraft. (B) Redistributed rewards can serve for learning with policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018), which is used in the MineCraft experiments.
(C) Redistributed rewards can serve for temporal difference learning like Q-learning (Watkins, 1989).

LSTM models for reward redistribution. RUDDER uses an LSTM model for predicting the future return. The reward redistribution is the difference between two subsequent predictions. If a state-action pair increases the prediction of the return, then it is immediately rewarded. Using state-action sub-sequences (s,a)_{0:t} = (s_0, a_0, ..., s_t, a_t), the redistributed reward is R_{t+1} = g((s,a)_{0:t}) − g((s,a)_{0:t−1}), where g is an LSTM model that predicts the return of the episode. The LSTM model first learns to approximate the largest steps of the Q-function, since they reduce the prediction error the most.

3 ALIGN-RUDDER: RUDDER WITH FEW DEMONSTRATIONS. In bioinformatics, sequence alignment identifies similarities between biological sequences to determine their evolutionary relationship (Needleman & Wunsch, 1970; Smith & Waterman, 1981). The result of the alignment of multiple sequences is a profile model. The profile model is a consensus sequence, a frequency matrix, or a Position-Specific Scoring Matrix (PSSM) (Stormo et al., 1982). New sequences can be aligned to a profile model and receive an alignment score that indicates how well the new sequences agree with the profile model. Align-RUDDER uses such alignment techniques to align two or more high-return demonstrations. For the alignment, we assume that the demonstrations follow the same underlying strategy; therefore they are similar to each other, analogous to being evolutionarily related. If the agent generates a state-action sequence (s,a)_{0:t−1}, then this sequence is aligned to the profile model g, giving a score g((s,a)_{0:t−1}). The next action of the agent extends the state-action sequence by one state-action pair (s_t, a_t). The extended sequence (s,a)_{0:t} is also aligned to the profile model g, giving another score g((s,a)_{0:t}). The redistributed reward R_{t+1} is the difference of these scores: R_{t+1} = g((s,a)_{0:t}) − g((s,a)_{0:t−1}) (see Eq. (1)). This difference indicates how much of the return is gained or lost by adding another sequence element. Align-RUDDER scores how closely an agent follows an underlying strategy, which has been extracted by the profile model. Similarly to the LSTM model, we identify the largest steps in the Q-function via relevant events determined by the profile model. Therefore, redistributing the reward by sequence alignment fits into the RUDDER framework with all its theoretical guarantees. RUDDER's theory for reward redistribution is valid for LSTM, other recurrent networks, attention mechanisms, or sequence and profile models.

Advantages of alignment compared to LSTM. Learning an LSTM model is severely limited when very few demonstrations are available. First, LSTM is known to require a large number of samples to generalize to new sequences. In contrast, sequence alignment requires only two examples to generalize well, as known from bioinformatics. Second, expert demonstrations have high rewards; therefore, random demonstrations with very low rewards have to be generated. LSTM does not generalize well when only these extreme reward cases can be observed in the training set. In contrast, sequence alignment only uses examples that are closely related, that is, examples belonging to the same category (expert demonstrations).

Reward Redistribution by Sequence Alignment.
The new reward redistribution approach consists of five steps, see Fig. 3: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model, like a PSSM. (V) Redistribute the reward: Each sub-sequence τ_t of a new episode τ is aligned to the profile. The redistributed reward R_{t+1} is proportional to the difference of scores S based on the PSSM given in step (IV), i.e., R_{t+1} ∝ S(τ_t) − S(τ_{t−1}). In the following, the five steps of Align-RUDDER's reward redistribution are outlined. For the interested reader, each step is detailed in Sec. A.3 in the appendix. Finally, in Sec. A.7.3 in the appendix, we illustrate these five steps on the example of Minecraft. (I) Defining Events. Instead of states, we consider differences of consecutive states to detect a change caused by an important event, like achieving a sub-goal. An event is defined as a cluster of state differences. We use similarity-based clustering like affinity propagation (AP) (Frey & Dueck, 2007). If states are only enumerated, we suggest using the "successor representation" (Dayan, 1993) or "successor features" (Barreto et al., 2017). We use the demonstrations combined with state-action sequences generated by a random policy to construct the successor representation. A sequence of events is obtained from a state-action sequence by mapping each state s to its cluster identifier e (the event) and ignoring the actions. Alignment techniques from bioinformatics assume sequences composed of a few events, e.g., 20 events. If there are too many events, well-fitting alignments cannot be distinguished from random alignments. This effect is known in bioinformatics as the "Inconsistency of Maximum Parsimony" (Felsenstein, 1978). (II) Determining the Alignment Scoring System. A scoring matrix S with entries s_{i,j} determines the score for aligning event i with event j. A priori, we only know that a relevant event should be aligned to itself but not to other events. Therefore, we set s_{i,j} = 1/p_i for i = j and s_{i,j} = α for i ≠ j. Here, p_i is the relative frequency of event i in the demonstrations, and α is a hyper-parameter, typically a small negative number. This scoring scheme encourages the alignment of rare events, for which p_i is small. For more details see Appendix Sec. A.3. (III) Multiple sequence alignment (MSA). An MSA algorithm maximizes the sum of all pairwise scores S_MSA = Σ_{i,j: i<j} Σ_{t=0}^{L} s_{i,j,t_{i,t},t_{j,t}} in an alignment, where s_{i,j,t_{i,t},t_{j,t}} is the score at alignment column t for aligning the event at position t_{i,t} in sequence i to the event at position t_{j,t} in sequence j. L ≥ T is the alignment length, since gaps make the alignment longer than the length of each sequence. We use ClustalW (Thompson et al., 1994) for MSA. MSA constructs a guiding tree by agglomerative hierarchical clustering of pairwise alignments between all demonstrations. This guiding tree makes it possible to identify multiple strategies. For more details see Appendix Sec. A.3. (IV) Position-Specific Scoring Matrix (PSSM) and MSA profile model. From the alignment, we construct a profile model as a) column-wise event probabilities and b) a PSSM (Stormo et al., 1982).
The PSSM is a column-wise scoring matrix to align new sequences to the profile model. More details are given in Appendix Sec. A.3. (V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e_{0:T} (e_t is the event at position t) is aligned to the profile, which gives the score S(τ) = Σ_{l=0}^{L} s_{l,t_l}. Here, s_{l,t_l} is the alignment score for event e_{t_l} at position l in the alignment. Alignment gaps are columns to which no event was aligned; they have t_l = T + 1 with gap penalty s_{l,T+1}. If τ_t = e_{0:t} is the prefix sequence of τ of length t + 1, then the reward redistribution R_{t+1} for 0 ≤ t ≤ T is R_{t+1} = (S(τ_t) − S(τ_{t−1})) C = g((s, a)_{0:t}) − g((s, a)_{0:t−1}), R_{T+2} = G̃_0 − Σ_{t=0}^{T} R_{t+1}, (1) where C = E_demo[G̃_0] / E_demo[Σ_{t=0}^{T} (S(τ_t) − S(τ_{t−1}))] with S(τ_{−1}) = 0. The original return of the sequence τ is G̃_0 = Σ_{t=0}^{T} R̃_{t+1}, and E_demo denotes the expectation of the return over demonstrations. The constant C scales R_{t+1} to the range of G̃_0. R_{T+2} is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: E_demo[R_{T+2}] = 0. Since τ_t = e_{0:t} and e_t = f(s_t, a_t), we can set g((s, a)_{0:t}) = S(τ_t) C. We ensure strict return equivalence (Arjona-Medina et al., 2019) via G_0 = Σ_{t=0}^{T+1} R_{t+1} = G̃_0. The redistributed reward depends only on the past: R_{t+1} = h((s, a)_{0:t}). Sub-tasks. The reward redistribution identifies sub-tasks as alignment positions with high redistributed rewards. These sub-tasks are indicated by high scores s in the PSSM. Reward redistribution also determines the terminal states of sub-tasks, since it assigns rewards for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the redistributed reward is Markov. For redistributed Markov reward, options (Sutton et al., 1999), MAXQ (Dietterich, 2000), or recursive option composition (Silver & Ciosek, 2012) can be used. Higher-Order Markov Reward Redistributions. Align-RUDDER may lead to higher-order Markov redistributions. Corollary 1 in the appendix states that the optimality criterion from Theorem 2 in Arjona-Medina et al. (2019) also holds for higher-order Markov reward redistributions: if the expected redistributed higher-order Markov reward is the difference of Q-values, then the redistribution is optimal and there is no delayed reward. Furthermore, the optimal policies are the same as for the original problem. This corollary is the motivation for redistributing the reward to the steps in the Q-function. In the appendix, Corollary 2 states that, under a condition, an optimal higher-order reward redistribution can be expressed as the difference of Q-values.
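To make steps (II) and (V) more concrete, the following is a minimal Python/NumPy sketch of how the scoring matrix and the redistributed rewards of Eq. (1) could be computed. The function and variable names are our own illustration rather than the authors' implementation, and the alignment itself, which produces the prefix scores S(τ_t), is assumed to be done by an external tool such as ClustalW.

import numpy as np

def build_scoring_matrix(event_sequences, alpha=-0.5):
    # Step (II): s_{i,i} = 1/p_i rewards aligning an event with itself,
    # while every mismatch receives the small negative penalty alpha.
    all_events = np.concatenate([np.asarray(s) for s in event_sequences])
    counts = np.bincount(all_events)
    p = counts / counts.sum()                       # relative event frequencies
    S = np.full((len(p), len(p)), alpha)
    np.fill_diagonal(S, 1.0 / np.maximum(p, 1e-8))  # rare events score highest
    return S

def redistribute_reward(prefix_scores, G0, C):
    # Step (V), Eq. (1): prefix_scores = [S(tau_0), ..., S(tau_T)] from
    # aligning each prefix of the new episode to the profile model.
    S = np.concatenate([[0.0], np.asarray(prefix_scores)])  # S(tau_{-1}) = 0
    R = (S[1:] - S[:-1]) * C       # R_{t+1} = (S(tau_t) - S(tau_{t-1})) C
    correction = G0 - R.sum()      # R_{T+2}: strict return equivalence
    return np.concatenate([R, [correction]])

Since the score differences telescope to S(τ_T), the constant C can be estimated from demonstrations as the mean return divided by the mean final prefix score.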
The paper attacks the challenging problem of RL with sparse feedback by leveraging a few demonstrations and learnable reward redistribution. The redistributed reward is computed by aligning key events (a set of clustered symbols) to the demonstrations via PSSM-based sequence matching. Experiments on two artificial tasks and a Minecraft task demonstrate that the presented method outperforms two baselines (DQfD and BC+Q-learning).
SP:7eb0d8278168465270570233e4af64ebb3f2f154
ChePAN: Constrained Black-Box Uncertainty Modelling with Quantile Regression
1 INTRODUCTION. The present paper proposes a novel method for adding aleatoric uncertainty estimation to any pointwise predictive system currently in use. Considering the system as a black box, i.e., avoiding any hypothesis about the internal structure of the system, the method offers a solution to the technical debt debate. The concept of technical debt was introduced in 1992 to initiate a debate on the long-term costs incurred when moving quickly in software engineering (Sculley et al. (2015); Cunningham (1992)). Specifically, most of the predictive systems currently in use have previously required much effort in terms of code development, documentation writing, unit test implementation, preparing dependencies, or even their compliance with the appropriate regulations (e.g., medical (Ustun & Rudin (2016)) or financial models (Rudin (2019)) may have to satisfy interpretability constraints). However, once the system is being used with real-world problems, a new requirement can arise regarding the confidence of its predictions when the cost of an erroneous prediction is high. That being said, replacing the currently-in-use system may not be advisable in the short term. To address this issue, the aim of this work is to report any information that is useful for auditing the system's associated uncertainty without modifying its predictions. In general terms, sources of uncertainty can be understood by analysing the conditional members of the joint distribution: p(y, x) = ∫_M p(y | x, M) p(M | x) p(x) dM, where M ∈ M is the family (assumed non-finite) of models being considered. Not all methods developed to model uncertainty can be applied in the black-box scenario, since the main hypothesis is that the black box is a fixed single model and unknown internally. Here, we refer specifically to those solutions that model epistemic uncertainty, which requires modelling p(M | x). By epistemic, we mean that uncertainty which can derive from ignorance about the model, including, for example, ensemble models (Lakshminarayanan et al. (2017)), Bayesian neural networks (Rasmussen (1996); Blundell et al. (2015); Hernández-Lobato & Adams (2015b); Teye et al. (2018)), or MC-Dropout (Gal & Ghahramani (2016)). However, the black box could be a non-parametric predictive system or even a handcrafted rule-based system, as shown in Figure 1. Hence the reason for studying aleatoric uncertainty (Der Kiureghian & Ditlevsen (2009); Kendall & Gal (2017); Brando et al. (2019)), which originates from the variability of possible correct answers given the same input data, p(y | x). This type of uncertainty can be tackled by modelling the response variable distribution. For instance, one can impose a conditional normal distribution whose location parameter is the black-box function while the corresponding scale parameter is learnt. However, the more restricted the assumptions made about this distribution, the more difficult it will be to model heterogeneous distributions. One solution to this limitation is the type of regression analysis used in statistics and econometrics known as Quantile Regression (QR), which provides a more comprehensive estimation.
Unlike classic regression methods, which only estimate a selected statistic such as the mean or the median, QR allows us to approximate any desired quantile. The main advantage of this method is that it allows confidence intervals to be captured without making strong assumptions about the distribution function to be approximated. Recently, several works (Dabney et al. (2018a); Tagasovska & Lopez-Paz (2018); Brando et al. (2019)) have proposed a single deep learning model that implicitly learns all the quantiles at the same time, i.e., the model can be evaluated for any real value τ ∈ [0, 1] to give a pointwise estimation of any quantile value of the response variable. Nevertheless, these QR solutions are not directly applicable to the uncertainty modelling of a black box, because the predicted quantiles need to be linked to the black-box prediction in some way. In the present paper, we propose a novel method for QR based on estimating the derivative of the final function using a Chebyshev polynomial approximation to model the uncertainty of a black-box system. Specifically, this method disentangles the estimation of a selected statistic β of the distribution p(y | x) from the estimation of the quantiles of p(y | x) (shown in Figure 2). Hence, our method is not restricted to scenarios where we can jointly train both estimators, but can also be applied to pre-existing regression systems as a wrapper that produces the necessary information to evaluate aleatoric uncertainty. Additionally, the proposed method scales to several real-world data sets. This paper is organised as follows. Section 2 states the real-world motivation of the current research as well as the contribution to be presented. Section 3 introduces the problem of QR, reviews the classic approach used with neural networks, and shows why it cannot be applied directly to constrained black-box uncertainty modelling. Section 4 explores an approach for modelling the derivative of a function using neural networks. These two sections provide the baseline for developing our proposed model and its properties, which is presented in Section 5. Finally, in Section 6, we show how our model can be applied to large data sets and how it defines a new way of modelling the aleatoric uncertainty of a black box. The results are then summarised in the conclusion. 2 RESEARCH GOAL AND CONTRIBUTION. The present article was motivated by a real-world need that appears in a pointwise regression forecasting system of a large company. Due to the risky nature of the internal problem where it is applied, uncertainty modelling is important. However, similarly to the medical or financial cases presented in the introduction, interpretability requirements were essential in defining the model currently used by the company, which does not report confidence for any prediction made. The need for this research arises in cases where the replacement of the aforementioned system is not advisable in the short term, despite the ongoing need for the uncertainty estimation of that system. Definition of constrained black-box uncertainty modelling. From the probabilistic perspective, solving a regression problem involves determining a conditional density model, q(y | x). This model fits an observed set of samples D = (X, Y) = {(x_i, y_i) | x_i ∈ R^D, y_i ∈ R}_{i=1}^n, which we assume to be sampled from an unknown distribution p(y | x), i.e., the real data.
Given this context, the pointwise forecasting system mentioned above is a function, β: R^D → R, which tries to approximate a certain conditional summary statistic (a percentile or moment) of p(y | x). Regarding the notation, we will call the "constraint" the known or assumed summary statistic that is approximated by β(x) (e.g., if β minimises the mean squared error, it corresponds to the conditional mean; if it minimises the mean absolute error, it corresponds to the median). Importantly, in the constrained black-box uncertainty modelling context, the mismatch between the real conditional statistic and the black box, β, becomes a new source of aleatoric uncertainty that is different from the one derived from the data. However, the way to model it continues to be by estimating p(y | x). Therefore, a poorly estimated β will impact the modelling of p(y | x), given that we always force the constraint to be satisfied (as shown in Figure 3 of the Experiment section). So far, we have attempted to highlight the fact that we do not have a strong hypothesis about the internals of this β function; we have only assumed that it approximates a certain statistic of p(y | x). Accordingly, we call this function the "constrained black box". This flexible assumption will enable us to consider several pointwise models as β, as shown in Figure 1. The overall goal of the present article is, taking a pre-defined black box β(x) that estimates a certain conditional summary statistic of p(y | x), to model q(y | x) under the constraint that if we calculate the summary statistic of this predicted conditional distribution, it will correspond to β(x). As mentioned in the Introduction, since we have a fixed black box, we are unable to apply Bayesian techniques such as those that infer the distribution of parameters within the model, p(M | x). In general, even though they are very common techniques in generic uncertainty modelling, no such epistemic uncertainty techniques can be applied in this context due to the limitation of only having a single fixed model. In addition, it should be noted that not all models that estimate p(y | x) can be used in the constrained black-box uncertainty modelling context. To solve this problem, we require models that predict q(y | x) but also force the chosen conditional summary statistic of q(y | x) to have the same value as β(x). The main contribution of this work is to present a new approach that allows us not only to outperform other baseline models when tackling this problem, but also to decide which kind of constraint we wish to impose between β(x) and q(y | x). The distribution q(y | x) will be approximated using Quantile Regression (explained in Section 3), and the constraint will be created considering the integration constant of the derivative of q(y | x) (shown in Section 5.1). 3 CONDITIONAL QUANTILE REGRESSION. In Quantile Regression (QR), we estimate q in a discrete manner by means of quantiles, which does not impose any typical parametric family on the predicted distribution, i.e., it goes beyond central-tendency or unimodality assumptions. For each quantile value τ ∈ [0, 1] and each input value x ∈ R^D, the conditional quantile function is f: [0, 1] × R^D → R. In our case, we use deep learning as a generic function approximator (Hornik et al. (1989)) to build the model f, as we shall see later.
Consequently, f is a parametric function that will be optimised by minimising the following loss function with respect to its weights w, L(x, y, τ) = (y − f_w(τ, x)) · (τ − 1[y < f_w(τ, x)]), (1) where 1[c] denotes the indicator function that verifies the condition c. Equation 1 is an asymmetric convex loss function that penalises underestimation errors with weight τ and overestimation errors with weight 1 − τ. Recently, different works (Dabney et al. (2018b;a); Wen et al. (2017)) have proposed deep learning models that minimise a QR loss function similar to Equation 1. For instance, in the field of reinforcement learning, the Implicit Quantile Network (IQN) model was proposed (Dabney et al. (2018a)) and subsequently applied to solve regression problems, as in the Simultaneous Quantile Regression (SQR) model (Tagasovska & Lopez-Paz (2019)) or the IQN in (Brando et al. (2019)). These models consist of a neural network ψ: [0, 1] × R^D → R that directly learns the function f minimising Equation 1, i.e., f = ψ. In order to optimise ψ for all possible τ values, these models pair up each input x with a τ ∼ U(0, 1) sampled from a uniform distribution in each iteration of the stochastic gradient descent method. Thus, the final loss function is an expectation over τ of Equation 1. However, these QR models cannot be applied to the constrained black-box scenario, given that they do not link their predicted quantiles with a pointwise forecasting system in a constrained way (Section 5.1). Other models, such as quantile forests, have a similar limitation. In the next section, we introduce the other main part required to define our proposed method.
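As a concrete illustration of Equation 1 and the τ-sampling scheme used by implicit quantile models, consider the following minimal PyTorch sketch; the model interface (a network taking (τ, x)) is an assumption matching the description above, not the authors' code.

import torch

def pinball_loss(y, f_pred, tau):
    # Eq. (1): underestimation is weighted by tau, overestimation by 1 - tau.
    diff = y - f_pred
    return (diff * (tau - (diff < 0).float())).mean()

def training_step(model, x, y):
    # Pair each input with a fresh tau ~ U(0, 1), so SGD minimizes the
    # expectation of Eq. (1) over all quantile levels simultaneously.
    tau = torch.rand_like(y)              # y assumed of shape (batch, 1)
    return pinball_loss(y, model(tau, x), tau)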
This paper proposes a novel approach to modeling uncertainty as a layer added on to an otherwise black-box system. ChePAN uses a neural network to estimate per-quantile coefficients of a Chebyshev polynomial, then uses a quantile regression loss to fit these coefficients via backpropagation. Importantly, the Chebyshev polynomial formulation gives the practitioner some flexibility in deciding which statistic of the conditional CDF should be matched by the black-box system. Examples are given for matching the 0 and 1 quantiles, for cases of min and max estimation, as well as for matching either the median or the mean.
SP:233335a3dc327cf153bd2e8d35a9e4594cf5bc67
Towards Robust and Efficient Contrastive Textual Representation Learning
1 INTRODUCTION. Representation learning is one of the pivotal topics in natural language processing (NLP), in both supervised and unsupervised settings. It has been widely recognized that some forms of "general representation" exist beyond specific applications (Oord et al., 2018). To extract such generalizable features, unsupervised representation models are generally pretrained on large-scale text corpora (e.g., BERT (Devlin et al., 2018; Liu et al., 2019; Clark et al., 2020; Lagler et al., 2013)) to avoid data bias. In supervised learning, models are typically built on top of these pre-trained representations and further fine-tuned on downstream tasks. Representation learning greatly expedites model deployment while also yielding performance gains. There has been growing interest in exploiting contrastive learning (CL) techniques to refine context representations in NLP (Mikolov et al., 2013a;b). These techniques aim to avoid representation collapse for downstream tasks, i.e., getting similar output sentences for different inputs in conditional generation tasks (Dai & Lin, 2017). Intuitively, these methods carefully engineer features from crafted ("negative") examples to contrast against the features from real ("positive") examples. A feature encoder can then enhance its representation power by characterizing input texts at a finer granularity. Efforts have been made to empirically investigate and theoretically understand the effectiveness of CL in NLP, including noise contrastive estimation (NCE) of word embeddings (Mikolov et al., 2013b) and probabilistic machine translation (Vaswani et al., 2013), with theoretical developments in (Gutmann & Hyvärinen, 2010). More recently, InfoNCE (Oord et al., 2018) further links CL to the optimization of mutual information, which inspired a series of practical follow-up works (Tian et al., 2020; Hjelm et al., 2019a; He et al., 2020; Chen et al., 2020). Despite the significant empirical success of CL, there are still many open challenges in its application to NLP, including (i) the propagation of stable contrastive signals. An unregularized critic function in CL can suffer from unstable training and gradient vanishing issues, especially in NLP tasks, due to the discrete nature of text. The inherent differences between positive and negative textual features make those examples easy to distinguish, resulting in a weak learning signal in contrastive schemes (Arora et al., 2019). (ii) Empirical evidence (Wu et al., 2017) shows that it is crucial to compare each positive example with adequate negative examples. However, recent works suggest using abundant negative examples; many of these are not akin to the positive examples, which can result in sub-optimal performance and unstable training with additional computational overhead (Ozair et al., 2019; McAllester & Stratos, 2020). In this paper, we propose two methods to mitigate the above issues. In order to stabilize the training and enhance the model's generalization ability, we propose to use the Wasserstein dependency measure (Ozair et al., 2019) as a substitute for the Kullback-Leibler (KL) measure in the vanilla CL objective. We further actively select K high-quality negative samples to contrast with each positive sample under the currently learned representations.
These supply the training procedure with necessarily large and non-trivial contrastive samples, encouraging the representation network to generate more distinguishable features. Notably, our approach also significantly alleviates the computational burden of massive features compared with previous works (Tian et al., 2020; Hjelm et al., 2019b). Contributions: (i) We propose a Wasserstein-regularized critic to stabilize training in a generic CL framework for learning better textual representations. (ii) We further employ an active negative-sample selection method to find high-quality contrastive samples, thus reducing the gradient noise and mitigating the computation concerns. (iii) We empirically verify the effectiveness of our approach on various NLP tasks, including variational text generation (Bowman et al., 2016), natural language understanding tasks on GLUE with supervised and semi-supervised setups (Wang et al., 2018), and image-text retrieval (Lee et al., 2018). 2 BACKGROUND. 2.1 NOISE CONTRASTIVE ESTIMATION. Our formulation is inspired by Noise Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010), which was originally introduced for unnormalized density estimation, where the partition function is intractable. To estimate a parametric distribution p, which we refer to as our target distribution, NCE leverages not only the observed samples A = (a_1, a_2, ..., a_{n_1}) (positive samples), but also samples drawn from a reference distribution q, denoted as B = (b_1, b_2, ..., b_{n_2}) (negative samples). Instead of estimating p directly, the density ratio p/q is estimated by training a critic between samples from A and B. Specifically, let Z = (z_1, ..., z_{n_1+n_2}) denote the union of A and B. A binary class label C_t is assigned to each z_t, where C_t = 1 if z_t ∈ A and C_t = 0 otherwise. The label probability is therefore P(C = 1 | z) = p(z) / (p(z) + γ q(z)), P(C = 0 | z) = γ q(z) / (p(z) + γ q(z)), (1) where γ = n_2 / n_1 is a balancing hyperparameter accounting for the difference in the number of samples between A and B. In practice, we do not know the analytic form of p; therefore, a classifier g: z ↦ [0, 1] is trained to estimate P(C = 1 | z). To get an estimation of the critic function g, NCE maximizes the log-likelihood of the data for a binary classification task: L(A, B) = Σ_{t=1}^{n_1} log[g(a_t)] + Σ_{t=1}^{n_2} log[1 − g(b_t)]. (2) 2.2 CONTRASTIVE TEXTUAL REPRESENTATION LEARNING AND ITS CHALLENGES. Let {w_i}_{i=1}^n be the observed text instances. We are interested in finding a vector representation u of the text w, i.e., via an encoder u = Enc(w), that can be repurposed for downstream tasks. A positive pair refers to paired instances a_i = (u_i, v_i) associated with w_i, where we are mostly interested in learning u; v is a feature at a different representation level. In unsupervised scenarios, v_i can be the feature representation at the layer next to the input text w_i. In supervised scenarios, v_i can be either the feature representation layer immediately after w_i or the one immediately before the label y_i that corresponds to the input w_i. We will use π(u, v) to denote the joint distribution of the positive pairs, with π_u(u) and π_v(v) for the respective marginals. Contrastive learning follows the principle of "learning by comparison."
Specifically, one designs a negative-sample distribution τ(u′, v′) and attempts to distinguish samples from π(u, v) and τ(u′, v′) with a critic function g(u, v). The heuristic is that, using samples from τ as references (i.e., to contrast against), the learner is advantaged in capturing important properties that could otherwise have been missed (Hjelm et al., 2019a; Oord et al., 2018). A popular choice of τ is the product of marginals, i.e., τ ← π_0(u′, v′) = π_u(u′) π_v(v′), where u′ and v′ are independent of each other, so that b_i = (u′_i, v′_i) ∼ π_0. Inputting the new a_i and b_i to (2), we obtain the new CL loss: L_NCE = E_{u,v∼π}[log g(u, v)] + γ E_{u′,v′∼π_0}[log(1 − g(u′, v′))]. (3) Note that when g is trained to optimality, g*(u, v) = P(C = 1 | u, v) under π_0, it establishes a lower bound on the mutual information (MI) between u and v for the positive distribution (Tian et al., 2020; Neyshabur et al., 2018): MI(π_u, π_v) = KL(π(u, v) || π_u(u) π_v(v)) ≥ E_{u,v∼π}[log g*(u, v)] + log γ. (4) However, there are three concerns regarding why directly applying Equation 3 might not work well in practice for learning contrastive representations of the input text w. • Robustness. The first issue concerns the MI's strong sensitivity to small differences in data samples (Ozair et al., 2019; Tschannen et al., 2020). By definition in Equation 4, mutual information is a KL divergence. It is well known that the KL divergence is not a metric-aware divergence measure, which implies that a minor difference in representation can induce drastic changes in the mutual information, as a special case of the KL. Consequently, the learned g can be numerically unstable (Ozair et al., 2019), which makes the learned representations less robust and prevents them from generalizing well to downstream tasks, especially when features come from text (Chen et al., 2018). • Weak/vanishing contrastive signal. With a poor initialization or a poor choice of negative samples, the MI will vanish as π(u, v) and π_u(u) π_v(v) grow far apart, delivering a faint and non-smooth gradient for training. In an extreme case, the supports of π(u, v) and π_u(u) π_v(v) do not overlap, and the MI and the gradient vanish to zero (Arjovsky et al., 2017). • Negative-sample selection strategy. Learning MI is generally considered sample-inefficient. This point can be corroborated from several perspectives, ranging from theoretical arguments to practical considerations. To confidently estimate a lower bound on the MI, one would need a sample size exponential in the mutual information (i.e., N ≥ exp(I_π(u, v))) (Ozair et al., 2019; McAllester & Stratos, 2018). Also, both theoretical prediction and empirical evidence suggest that a large ratio γ is needed for good performance (Tian et al., 2020; Hjelm et al., 2019a), imposing potential computational concerns for large training datasets. On the other hand, some studies report that a large γ can instead deteriorate model performance (Tschannen et al., 2020; Arora et al., 2019). Such a large γ is also believed to be problematic especially when a strong association is expected between u and v. In that case, the majority of negative samples are so different from the positive samples that the comparisons do not lend effective learning signals, but instead randomly drift the training (Gutmann & Hyvärinen, 2010).
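To make Eqs. (2)-(3) concrete, here is a minimal PyTorch sketch of the NCE-style contrastive loss with negatives drawn from the product of marginals via in-batch shuffling; the critic interface is an assumption for illustration only.

import torch
import torch.nn.functional as F

def contrastive_nce_loss(critic, u, v, gamma=1.0):
    # Positive pairs (u_i, v_i) come from the joint pi(u, v); shuffling v
    # within the batch approximates the product of marginals pi_u * pi_v.
    pos_logits = critic(u, v)
    neg_logits = critic(u, v[torch.randperm(v.size(0))])
    pos_term = F.logsigmoid(pos_logits).mean()    # E_pi[log g(u, v)]
    neg_term = F.logsigmoid(-neg_logits).mean()   # E_pi0[log(1 - g(u', v'))]
    return -(pos_term + gamma * neg_term)         # maximizing (3) = minimizing this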
This paper proposes an approach to improve (supervised and unsupervised) representation learning for text using contrastive learning. The proposed approach augments standard contrastive learning with: (1) spectral-norm regularization of the critic to estimate the Wasserstein distance instead of the KL (as in the Wasserstein GAN-style approach), (2) active negative sampling to select hard negative examples, and (3) momentum-based updates of intermediate features. The resulting contrastive learning objective can be combined with standard supervised and unsupervised objectives to improve downstream tasks.
SP:eff774eddcc60e943c0a41207c21a1c9d6d5d950
Progressive Skeletonization: Trimming more fat from a network at initialization
1 INTRODUCTION. The majority of pruning algorithms for Deep Neural Networks require training dense models and often fine-tuning sparse sub-networks in order to obtain their pruned counterparts. In Frankle & Carbin (2019), the authors provide empirical evidence to support the hypothesis that there exist sparse sub-networks that can be trained from scratch to achieve similar performance as the dense ones. However, their method for finding such sub-networks requires training the full-sized model and intermediate sub-networks, making the process much more expensive. Recently, Lee et al. (2019) presented SNIP. Building upon an almost three-decades-old saliency criterion for pruning trained models (Mozer & Smolensky, 1989), they are able to predict, at initialization, the importance each weight will have later in training. Pruning-at-initialization methods are much cheaper than conventional pruning methods. Moreover, while traditional pruning methods can help accelerate inference tasks, pruning at initialization may go one step further and provide the same benefits at train time (Elsen et al., 2020). Wang et al. (2020) (GRASP) noted that after applying the pruning mask, gradients are modified due to non-trivial interactions between weights. Thus, maximizing the SNIP criterion before pruning might be sub-optimal. They present an approximation to maximize the gradient norm after pruning, where they treat pruning as a perturbation on the weight matrix and use the first-order Taylor approximation. While they show improved performance, their approximation involves computing a Hessian-vector product, which is expensive both in terms of memory and computation. We argue that both the SNIP and GRASP approximations of the gradients after pruning do not hold for high pruning levels, where a large portion of the weights is removed at once. In this work, while we rely on the saliency criterion introduced by Mozer & Smolensky (1989), we optimize what this saliency would be after pruning, rather than before. Hence, we name our criterion Foresight Connection sEnsitivity (FORCE). We introduce two approximate procedures to progressively optimize our objective. The first, which turns out to be equivalent to applying SNIP iteratively, removes a small fraction of weights at each step and re-computes the gradients after each pruning round. This allows taking into account the intricate interactions between weights, re-adjusting the importance of connections at each step. The second procedure, which we name FORCE, is also iterative in nature but, contrary to the first, allows pruned parameters to resurrect. Hence, it supports exploration, which otherwise is not possible in the case of iterative SNIP. Moreover, one-shot SNIP can be viewed as a particular case of using only one iteration. Empirically, we find that both SNIP and GRASP have a sharp drop in performance when targeting higher pruning levels. Surprisingly, they perform even worse than random pruning, as can be seen in Fig. 1. In contrast, our proposed pruning procedures prove to be significantly more robust on a wide range of pruning levels. 2 RELATED WORK. Pruning trained models. Most of the pruning works follow the train – prune – fine-tune cycle (Mozer & Smolensky, 1989; LeCun et al., 1990; Hassibi et al., 1993; Han et al., 2015; Molchanov et al., 2017; Guo et al.
, 2016), which requires training the dense network until convergence, followed by multiple iterations of pruning and fine-tuning until a target sparsity is reached. In particular, Molchanov et al. (2017) present a criterion very similar to Mozer & Smolensky (1989), and therefore similar to Lee et al. (2019) and our FORCE, but they focus on pruning whole neurons and involve training rounds while pruning. Frankle & Carbin (2019) and Frankle et al. (2020) showed that it is possible to find sparse sub-networks that, when trained from scratch or from an early training iteration, are able to match or even surpass the performance of their dense counterparts. Nevertheless, to find them they use a costly procedure based on Han et al. (2015). All these methods rely on having a trained network; thus, they are not applicable before training. In contrast, our algorithm is able to find a trainable sub-network with randomly initialized weights, making the overall pruning cost much cheaper and presenting an opportunity to leverage the sparsity during training as well. Induce sparsity during training. Another popular approach has been to induce sparsity during training. This can be achieved by modifying the loss function to consider sparsity as part of the optimization (Chauvin, 1989; Carreira-Perpiñán & Idelbayev, 2018; Louizos et al., 2018) or by dynamically pruning during training (Bellec et al., 2018; Mocanu et al., 2018; Mostafa & Wang, 2019; Dai et al., 2019; Dettmers & Zettlemoyer, 2020; Lin et al., 2020; Kusupati et al., 2020; Evci et al., 2019). These methods are usually cheaper than pruning after training, but they still need to train the network to select the final sparse sub-network. We focus on finding sparse sub-networks before any weight update, which is not directly comparable. Pruning at initialization. These methods present a significant leap with respect to other pruning methods. While traditional pruning mechanisms focused on bringing speed-ups and memory reduction at inference time, pruning-at-initialization methods bring the same gains at both training and inference time. Moreover, they can be seen as a form of Neural Architecture Search (Zoph & Le, 2016) to find more efficient network topologies. Thus, they have both theoretical and practical interest. Lee et al. (2019) presented SNIP, a method to estimate, at initialization, the importance that each weight could have later during training. SNIP analyses the effect of each weight on the loss function when perturbed at initialization. In Lee et al. (2020), the authors studied pruning at initialization from a signal propagation perspective, focusing on the initialization scheme. Recently, Wang et al. (2020) proposed GRASP, a different method based on the gradient norm after pruning, and showed a significant improvement for higher levels of sparsity. However, neither SNIP nor GRASP performs sufficiently well when larger compressions and speed-ups are required and a larger fraction of the weights needs to be pruned. In this paper, we analyse the approximations made by SNIP and GRASP and present a more suitable solution to maximize the saliency after pruning. 3 PROBLEM FORMULATION: PRUNING AT INITIALIZATION. Given a dataset D = {(x_i, y_i)}_{i=1}^n, the training of a neural network f parameterized by θ ∈ R^m can be written as minimizing the following empirical risk: argmin_θ (1/n) Σ_i L(f(x_i; θ), y_i) s.t. θ ∈ C, (1)
where L and C denote the loss function and the constraint set, respectively. Unconstrained (standard) training corresponds to C = R^m. Assuming we have access to the (batch-wise) gradients of the empirical risk, an optimization algorithm (e.g., SGD) is generally used to optimize the above objective; during the optimization process, it produces a sequence of iterates {θ_i}_{i=0}^T, where θ_0 and θ_T denote the initial and the final (optimal) parameters, respectively. Given a target sparsity level of k < m, the general parameter pruning problem involves C with a constraint ‖θ_T‖_0 ≤ k, i.e., the final optimal iterate must have a maximum of k non-zero elements. Note that there is no such constraint on the intermediate iterates. Pruning at initialization, the main focus of this work, adds further restrictions to the above formulation by constraining all the iterates to lie in a fixed subspace of C. Precisely, the constraints are to find an initialization θ_0 such that ‖θ_0‖_0 ≤ k (see Footnote 1), and the intermediate iterates satisfy θ_i ∈ C̄ ⊂ C for all i ∈ {1, ..., T}, where C̄ is the subspace of R^m spanned by the natural basis vectors {e_j}_{j∈supp(θ_0)}. Here, supp(θ_0) denotes the support of θ_0, i.e., the set of indices with non-zero entries. The first condition defines the sub-network at initialization with k parameters, and the second fixes its topology throughout the training process. Since there are (m choose k) such possible sub-spaces, an exhaustive search for the optimal sub-space to optimize (1) is impractical, as it would require training (m choose k) neural networks. Below we discuss two recent approaches that circumvent this problem by maximizing a hand-designed, data-dependent objective function. These objectives are tailored to preserve some relationships between the parameters, the loss, and the dataset that might be sufficient to obtain a reliable θ_0. For the ease of notation, we will use θ to denote the dense initialization. SNIP. Lee et al. (2019) present a method based on the saliency criterion from Mozer & Smolensky (1989). They add a key insight and show that this criterion works surprisingly well to predict, at initialization, the importance each connection will have during training. The idea is to preserve the parameters that will have maximum impact on the loss when perturbed. Let c ∈ {0, 1}^m be a binary vector and ⊙ the Hadamard product. Then, the connection sensitivity in SNIP is computed as: g(θ) := ∂L(θ ⊙ c)/∂c |_{c=1} = ∂L(θ)/∂θ ⊙ θ. (2) Once g(θ) is obtained, the parameters corresponding to the top-k values of |g(θ)_i| are kept. Intuitively, SNIP favors those weights that are far from the origin and provide high gradients (irrespective of the direction). We note that the SNIP objective can be written as the following problem: max_c S(θ, c) := Σ_{i∈supp(c)} |θ_i · ∇L(θ)_i| s.t. c ∈ {0, 1}^m, ‖c‖_0 = k. (3) It is trivial to note that the optimal solution to the above problem is obtained by selecting the indices corresponding to the top-k values of |θ_i · ∇L(θ)_i|. (Footnote 1: In practice, as will be done in this work as well, a subset of a given dense initialization is found using some saliency criterion, discussed shortly; however, note that our problem statement is more general than that.) GRASP. Wang et al. (2020) note that the SNIP saliency measures the connection sensitivity of the weights before pruning; however, it is likely to change after pruning.
Moreover, they argue that, at initialization, it is more important to preserve the gradient signal than the loss itself. They propose to use as saliency the gradient norm of the loss, ΔL(θ) = ∇L(θ)^⊤ ∇L(θ), but measured after pruning. To maximize it, Wang et al. (2020) adopt the same approximation introduced in LeCun et al. (1990) and treat pruning as a perturbation on the initial weights. Their method is equivalent to solving: max_c G(θ, c) := Σ_{i: c_i=0} −θ_i [Hg]_i s.t. c ∈ {0, 1}^m, ‖c‖_0 = k, (4) where H and g denote the Hessian and the gradient of the loss, respectively.
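For concreteness, the following is a minimal PyTorch sketch of the SNIP saliency of Eqs. (2)-(3); the iterative variant discussed in the introduction would simply recompute this saliency on the masked network and remove a fraction of the remaining weights per round. Function names are ours, not the authors'.

import torch

def snip_saliency(loss_fn, weights, batch):
    # Connection sensitivity of Eq. (2): |theta * dL/dtheta| per parameter.
    loss = loss_fn(batch)
    grads = torch.autograd.grad(loss, weights)
    return [(w.detach() * g).abs() for w, g in zip(weights, grads)]

def topk_mask(saliencies, k):
    # Solve problem (3): keep the k globally largest saliencies.
    flat = torch.cat([s.reshape(-1) for s in saliencies])
    threshold = torch.topk(flat, k).values.min()
    return [(s >= threshold).float() for s in saliencies]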
The paper finds that at extreme sparsities (>95%), existing approaches to pruning neural networks at initialization devolve to worse than random pruning. The paper posits that this degenerate behavior is due to the fact that weights are pruned in groups, though the saliency metrics only capture pointwise changes. The paper presents a modified saliency metric based on SNIP, allowing for calculating salience of partially pruned networks; this in turn allows for applying an iterative version of SNIP, as well as a variant of iterative SNIP that allows for rejuvenation. These pruning techniques are evaluated, showing that they maintain accuracy at high sparsities.
SP:a8bb14b514e474691be63b51582544a9befa7125
Communication in Multi-Agent Reinforcement Learning: Intention Sharing
1 INTRODUCTION. Reinforcement learning (RL) has achieved remarkable success in various complex control problems such as robotics and games (Gu et al. (2017); Mnih et al. (2013); Silver et al. (2017)). Multi-agent reinforcement learning (MARL) extends RL to multi-agent systems, which model many practical real-world problems such as connected cars and smart cities (Roscia et al. (2013)). There exist several distinct problems in MARL inherent to the nature of multi-agent learning (Gupta et al. (2017); Lowe et al. (2017)). One such problem is how to learn coordinated behavior among multiple agents, and various approaches to tackling this problem have been proposed (Jaques et al. (2018); Pesce & Montana (2019); Kim et al. (2020)). One promising approach to learning coordinated behavior is learning a communication protocol among multiple agents (Foerster et al. (2016); Sukhbaatar et al. (2016); Jiang & Lu (2018); Das et al. (2019)). The line of recent research on communication for MARL adopts end-to-end training based on a differentiable communication channel (Foerster et al. (2016); Jiang & Lu (2018); Das et al. (2019)). That is, a message-generation network is defined at each agent and connected to other agents' policy or critic networks through communication channels. Then, the message-generation network is trained by using the gradient of other agents' policy or critic losses. Typically, the message-generation network is conditioned on the current observation or the hidden state of a recurrent network with observations as input. Thus, the trained message encodes the past and current observation information to minimize other agents' policy or critic loss. It has been shown that, due to the capability of sharing observation information, this kind of communication scheme performs well in partially observable environments compared to communication-free MARL algorithms such as independent learning, which is widely used in MARL. In this paper, we consider the following further question for communication in MARL: "How can we harness the benefit of communication beyond sharing partial observations?" We propose the intention of each agent as the content of the message to address the above question. Sharing intention using communication has been used in natural multi-agent systems like human society. For example, drivers use signal lights to inform other drivers of their intentions. A car driver may slow down if a driver in his or her left lane turns the right signal light on. In this case, the signal light encodes the driver's intention, which indicates the driver's future behavior, not a current or past observation such as the field of view. By sharing intention using signal lights, drivers coordinate their driving with each other. In this paper, we formalize and propose a new communication scheme for MARL named Intention Sharing (IS) in order to go beyond existing observation-sharing schemes for communication in MARL. The proposed IS scheme allows each agent to share its intention with other agents in the form of an encoded imagined trajectory. That is, each agent generates an imagined trajectory by modeling the environment dynamics and other agents' actions. Then, each agent learns the relative importance of the components in the imagined trajectory based on the received messages from other agents by using an attention model.
The output of the attention model is an encoded imagined trajectory capturing the intention of the agent and is used as the communication message. We evaluate the proposed IS scheme in several multi-agent environments requiring coordination among agents. Numerical results show that the proposed IS scheme significantly outperforms other existing communication schemes for MARL, including state-of-the-art algorithms such as ATOC and TarMAC. 2 RELATED WORKS. Under the asymmetry in learning resources between the training and execution phases, the framework of centralized training and decentralized execution (CTDE), which assumes the availability of all system information in the training phase and a distributed policy in the execution phase, has been adopted in most recent MARL research (Lowe et al. (2017); Foerster et al. (2018); Iqbal & Sha (2018); Kim et al. (2020)). Under the framework of CTDE, learning communication protocols has been considered to enhance performance in the decentralized execution phase for various multi-agent tasks (Foerster et al. (2016); Jiang & Lu (2018); Das et al. (2019)). For this purpose, Foerster et al. (2016) proposed Differentiable Inter-Agent Learning (DIAL). DIAL trains a message-generation network by connecting it to other agents' Q-networks and allowing gradient flow through communication channels in the training phase. Then, in the execution phase, the messages are generated and passed to other agents through communication channels. Jiang & Lu (2018) proposed an attentional communication model named ATOC to learn when to communicate and how to combine information received from other agents through communication, based on an attention mechanism. Das et al. (2019) proposed Targeted Multi-Agent Communication (TarMAC) to learn the message-generation network in order to produce different messages for different agents, based on a signature-based attention model. The message-generation networks in the aforementioned algorithms are conditioned on the current observation or a hidden state of an LSTM. Under partially observable environments, such messages, which encode past and current observations, are useful but do not capture any future information. In our approach, we use not only the current information but also future information to generate messages, and the weight between the current and future information is adaptively learned according to the environment. This yields further performance enhancement, as we will see in Section 5. In our approach, the encoded imagined trajectory capturing the intention of each agent is used as the communication message in MARL. Imagined trajectories have been used in other problems too. Racanière et al. (2017) used imagined trajectories, augmenting them into the policy and critic, for combining model-based and model-free approaches in single-agent RL. It is shown that arbitrary imagined trajectories (rolled out by using a random policy or the agent's own policy) are useful for single-agent RL in terms of performance and data efficiency. Strouse et al. (2018) introduced an information regularizer to share or hide an agent's intention in a multi-goal MARL setting in which some agents know the goal and other agents do not. By maximizing (or minimizing) the mutual information between the goal and action, an agent knowing the goal learns to share its intention with (or hide it from) other agents not knowing the goal in cooperative (or competitive) tasks.
They showed that sharing intention is effective in the cooperative case. In addition to our approach, Theory of Mind (ToM) and Opponent Modeling (OM) use the notion of intention. Rabinowitz et al. (2018) proposed the Theory of Mind network (ToM-net) to predict other agents' behaviors by using meta-learning. Raileanu et al. (2018) proposed Self Other-Modeling (SOM) to infer other agents' goals in an online manner. Both ToM and OM take advantage of predicting other agents' behaviors, capturing the intention. One difference between our approach and the aforementioned two methods is that we use communication to share the intention instead of inference. That is, the agents in our approach allow other agents to know their intention directly through communication, whereas the agents in ToM and OM should figure out other agents' intentions by themselves. Furthermore, the messages in our approach include future information obtained by rolling out the policy, whereas ToM and OM predict only the current or just the next time-step information. 3 SYSTEM MODEL. We consider a partially observable N-agent Markov game (Littman (1994)) and assume that communication among agents is available. At time step t, Agent i observes its own observation o^i_t, which is a part of the global environment state s_t, and selects action a^i_t ∈ A_i and message m^i_t ∈ M_i based on its own observation o^i_t and its own previous-time-step message m^i_{t−1} plus the received messages from other agents, i.e., m_{t−1} = (m^1_{t−1}, ..., m^N_{t−1}). We assume that the message m^i_t of Agent i is sent to all other agents and is available at other agents at the next time step, i.e., time step t + 1. The joint actions a_t = (a^1_t, ..., a^N_t) yield the next environment state s_{t+1} and rewards {r^i_t}_{i=1}^N according to the transition probability T: S × A × S → [0, 1] and the reward functions R^i: S × A → R, respectively, where S and A = ∏_{i=1}^N A_i are the environment state space and the joint action space, respectively. The goal of Agent i is to find the policy π^i that maximizes its discounted return R^i_t = Σ_{t′=t}^∞ γ^{t′−t} r^i_{t′}. Hence, the objective function of Agent i is defined as J_i(π^i) = E_π[R^i_0], where π = (π^1, ..., π^N) and γ ∈ [0, 1] are the joint policy and the discounting factor, respectively. 4 THE PROPOSED INTENTION SHARING SCHEME. The key idea behind the IS scheme is that multiple agents communicate with other agents by sending their implicit future plans, which carry their intention. The received messages capturing the intention of other agents enable an agent to coordinate its action with those of other agents. We now describe the architecture of the proposed IS scheme. At time step t, Agent i selects an action a^i_t ∼ π^i(· | o^i_t, m_{t−1}) and a message m^i_t = MGN^i(o^i_t, m_{t−1}, π^i) based on its own observation o^i_t and the received messages m_{t−1}, where MGN^i is the message-generation network (MGN) of Agent i. The MGN consists of two components: an imagined trajectory generation module (ITGM) and an attention module (AM). Each agent generates an imagined trajectory by using the ITGM and learns the importance of each imagined step in the imagined trajectory by using the AM. The output of the AM is an encoded imagined trajectory reflecting the importance of the imagined steps and is used as the communication message. The overall architecture of the proposed IS scheme is shown in Fig. 1. In the following, we describe the details of each module. 4.1 IMAGINED TRAJECTORY GENERATION MODULE (ITGM).
The role of the ITGM is to produce the next imagined step. The ITGM takes the received messages, observation, and action as input and yields the predicted next observation and predicted action as output. By stacking ITGMs, we generate an imagined trajectory, as shown in Fig. 1. For Agent i at time step t, we define an H-length imagined trajectory as τ^i = (τ^i_t, τ̂^i_{t+1}, ..., τ̂^i_{t+H−1}), (1) where τ̂^i_{t+k} = (ô^i_{t+k}, â^i_{t+k}) is the imagined step at time step t + k. Note that τ^i_t = (o^i_t, a^i_t) contains the true values of observation and action, but the imagined steps except τ^i_t are predicted values. The ITGM consists of a roll-out policy and two predictors: the other agents' action predictor f^i_a(o^i_t) (we will call this predictor simply the action predictor) and the observation predictor f^i_o(o^i_t, a^i_t, a^{−i}_t). First, we model the action predictor, which takes the observation as input and produces other agents' predicted actions. The output of the action predictor is given by f^i_a(o^i_t) = (â^1_t, ..., â^{i−1}_t, â^{i+1}_t, ..., â^N_t) =: â^{−i}_t. (2) Note that the action predictor can be trained by previously proposed opponent modeling methods (Rabinowitz et al. (2018); Raileanu et al. (2018)) and can take the received messages as input. Next, we model the observation predictor f^i_o(o^i_t, a^i_t, â^{−i}_t), which is conditioned on the observation o^i_t, the agent's own action a^i_t, and the output of the action predictor â^{−i}_t. Here, we adopt a dynamics function that predicts the difference between the next observation and the current observation, i.e., o^i_{t+1} − o^i_t, instead of the next observation o^i_{t+1}, as proposed in (Nagabandi et al. (2018)), in order to reduce model bias in the early stage of learning. Hence, the next observation can be written as ô^i_{t+1} = o^i_t + f^i_o(o^i_t, a^i_t, â^{−i}_t). (3) By injecting the predicted next observation and the received messages into the roll-out policy in the ITGM, we obtain the predicted next action â^i_{t+1} = π^i(ô^i_{t+1}, m_{t−1}). Here, we use the current policy as the roll-out policy. Combining ô^i_{t+1} and â^i_{t+1}, we obtain the next imagined step at time step t + 1, τ̂^i_{t+1} = (ô^i_{t+1}, â^i_{t+1}). In order to produce an H-length imagined trajectory, we inject the output of the ITGM and the received messages m_{t−1} into the input of the ITGM recursively. Note that we use the received messages at time step t, m_{t−1}, in every recursion of the ITGM.
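A minimal sketch of stacking ITGMs into an H-length rollout is shown below; the predictor and policy interfaces are assumptions mirroring Eqs. (1)-(3), not the authors' implementation.

def rollout_itgm(o_t, a_t, m_prev, policy, f_a, f_o, horizon):
    # tau^i_t holds the true observation and action; the remaining steps
    # are imagined by recursively feeding the ITGM its own predictions.
    trajectory = [(o_t, a_t)]
    o, a = o_t, a_t
    for _ in range(horizon - 1):
        a_others = f_a(o)            # Eq. (2): predict other agents' actions
        o = o + f_o(o, a, a_others)  # Eq. (3): add the predicted state difference
        a = policy(o, m_prev)        # roll out with the current policy
        trajectory.append((o, a))    # imagined step (o_hat, a_hat)
    return trajectory

The resulting list of H (observation, action) pairs would then be fed to the attention module, which weights the imagined steps and encodes them into the message.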
The paper proposes to generate the communication message in MARL from the predicted trajectories of all the agents (including the agent itself). An extra self-attention model is also stacked over the trajectories to trade off the length of prediction and the possible explaining-away issue. The whole model is trained via a canonical MARL objective, while the trajectory prediction model utilizes direct supervision collected from the environments. Experiments on several toy MARL benchmarks demonstrate the effectiveness of the proposed method.
SP:ee89d3273df8b3b082c0e72a8768dff7cd3b7f56
Disentangling style and content for low resource video domain adaptation: a case study on keystroke inference attacks
1 INTRODUCTION. We are exceedingly reliant on our mobile devices in our everyday lives. Numerous activities, such as banking, communications, and information retrieval, have gone from having separate channels to collapsing into one: through our mobile phones. While this has made many of our lives more convenient, this phenomenon further incentivizes attackers seeking to steal information from users. Therefore, studying different attack vectors and understanding the realistic threats that arise from attackers' abilities to recover user information is imperative to formulating defenses. The argument for studying these attacks is not a new one. A rich literature of prior works studying both attacks and defenses has assessed a wide array of potential attack vectors. The majority of these attacks utilize various machine learning algorithms to predict the user's keystrokes (Raguram et al., 2011; Cai & Chen, 2012; Xu et al., 2013; Sun et al., 2016; Chen et al., 2018; Lim et al., 2020), but the ability to assess attackers leveraging deep learning methods has lagged due to the high costs of curating real-life datasets for this domain and the lack of publicly available datasets. Despite all the recent attention to keystroke inference attacks, numerous questions have gone unanswered. Which defenses work against adversaries who leverage deep learning systems? Which defenses are easily undermined? Are there weaknesses in deep learning systems that we can use to develop better defenses to thwart state-of-the-art attacks? These questions capture the essence of the underlying principles for research into defenses for keystroke inference attacks. Given the back-and-forth nature of researching attacks and defenses, these questions cannot be addressed because of the current inability to assess attacks with deep learning methods. This paper aims to overcome the challenge of having a limited amount of labeled real-life data by introducing a video domain adaptation technique that is able to leverage abundantly labeled synthetic data. We show that by disentangling our data into separate style and content representations, we can subsequently create style-content pairs across both domains and combine them into representations that contain the content in the style of their inputs, i.e., style transfer in the feature space. This is especially attractive in the case of pairs of real-life style and synthetic content, as this is an effective data augmentation scheme. Style representations need to be well separated between domains, whereas content needs to be indistinguishable. To do this, we introduce auxiliary losses on the latent spaces to enforce disentanglement. Through a series of ablations, we show that doing so improves performance. In our context, content answers the question: What was typed? For example, the sentence that a user types. Style answers the question: How was it typed? For example, the texting pattern. The majority of visual domain adaptation methods do not work well in our problem setting because they mainly focus on tasks in which the domain shift is limited to a shift in texture, e.g., image classification, semantic segmentation, etc. (Ganin & Lempitsky, 2014; Shrivastava et al., 2016; Tzeng et al., 2017; Hoffman et al., 2017; Motiian et al., 2017). When predicting keystroke sequences, addressing the domain shift with respect to texture is not sufficient.
While there is a clear difference in texture , we have to also address the temporal domain shift , e.g. , different finger motions , speeds , etc . Notice the difference between the trajectories of thumbs in the two example videos displayed in Figure 1 . The synthetic thumb is linearly interpolated whereas the real one moves in a more complex fashion . Our pairing mechanism is inspired by the one introduced by Motiian et al . ( 2017 ) . They devise a training regime that pairs the scarce data in the target domain with the data from the source domain . This strategy aims to augment the data in the target domain on the order of the source domain . In our work , we loosen the restriction of needing pairs with the same label to adapt to our setting of not having paired sentences . This makes our pairing mechanism more general and applicable to other settings . To summarize , our main contributions are : 1 ) A framework for low-resource video domain adaptation using supervised disentangled learning . 2 ) A novel method to assess the threat of keystroke inference attacks by an attacker using a deep learning system while having limited real-life data . 2 BACKGROUND . Keystroke Inference Attacks Some of the early works in ( vision-based ) keystroke inference attacks have focused on direct line of sight and reflective surfaces ( i.e. , teapots , sunglasses , eyes ) ( Backes et al. , 2008 ; 2009 ; Raguram et al. , 2011 ; Xu et al. , 2013 ; Yue et al. , 2014 ; Ye et al. , 2017 ; Lim et al. , 2020 ) to infer sensitive data . The attackers train models that account for various capture angles by aligning the user ’ s mobile phone to a template keyboard . Collectively , these works showed that attackers are able to successfully recover pins and full sentences . In this work , we advance the stateof-the-art under the direct line of sight model wherein the attacker uses a mobile camera to record a victim ’ s mobile phone usage . None of these works adequately explore the capabilities of an attacker that leverages deep learning systems because of the costs to collect large scale datasets . Lim et al . ( 2020 ) created a simulator that generates synthetic data for keystroke inference attacks and showed that training with both synthetic and real data , in a supervised domain adaptation framework , yielded a CNN that generalized to a real-life test set , despite having limited labels in the real domain . This work is limited due to the restricted threat scenario of inferring single keypresses . In our work , we assess the ability of an attacker to recover complete sequences . Predicting entire sequences from an input video is not only a more challenging task , but also it is a more realistic threat scenario . Style and Content Disentanglement in Videos Tenenbaum & Freeman ( 1997 ) ; Tenenbaum & Freeman ( 2000 ) observe that by learning to factor observations of data into two independent factors of variation , style and content , models learn separate representations that can extrapolate style into novel content , classify content in different styles , and translate new content into new styles . This framework has been extended to videos and has been explored in a variety of settings . Prior works have disentangled videos into a time-dependent style representation and time-independent content with adversarial training ( Denton & Birodkar , 2017 ; Villegas et al. , 2017 ) or with variational autoencoders ( Li & Mandt , 2018 ; Hsieh et al. , 2018 ) . In our setting , style and content are both timedependent . 
Style encapsulates the trajectory of the finger in between keys or speed of the user typing . The difference in texture on a per-frame basis is also encapsulated by style . Content represents the entire trajectory as that determines the sentence that was typed . These methods are all unsupervised methods to disentangle style and content . Since we have labels , we are able to leverage the observation made by Locatello et al . ( 2018 ; 2019 ) , arguing that learning disentangled representations is impossible without supervision , and that the unsupervised methods leveraging temporal inductive biases do not lead to improved disentangled representations . Low Resource Domain Adaptation We are operating in a low resource setting in which we have abundant labels in the source domain and have very few , albeit labeled , data points in the target domain . Hosseini-Asl et al . ( 2019 ) extend the CyCada ( Hoffman et al. , 2017 ) and CycleGAN ( Zhu et al. , 2017 ) frameworks to the low resource domain adaptation setting by adding a semantic consistency loss . Motiian et al . ( 2017 ) addresses this problem by learning a feature space that is domain invariant , but is semantically aligned across both domains by introducing a pairing process that pairs feature samples in the training set into four groups : 1 ) both samples from domain A with same labels ; 2 ) a sample from each domain with same labels ; 3 ) both samples from domain A with different labels ; 4 ) a sample from each domain with different labels . They use adversarial training to learn a feature representation such that a discriminator can ’ t distinguish samples from groups 1 and 2 , and also from groups 3 and 4 . We extend this pairing mechanism by relaxing the constraint of needing the same labels in both domains , i.e. , pairs of synthetic and real sentences . Since we are effectively transferring different styles onto the content latent space , we do not need labels in the target domain so long as they are effectively disentangled . 3 METHODS . We first give a brief introduction to keystroke inference attacks and define the problem setup . Then , we describe our proposed framework to disentangle the style and content latent spaces to train on all style-content pairs An overview of our method is in Figure 2 and in Algorithm 1 . 3.1 KEYSTROKE INFERENCE ATTACKS . We model the keystroke inference attack as a Seq2Seq ( Sutskever et al. , 2014 ) problem where the input X = { x1 , x2 , ... , xk } is a video with k frames and Y = { y1 , y2 , ... , yj } is a sequence of j characters . The videos are of users typing on their mobile phones that are cropped and aligned to a template image . The tokens are a sequence of characters of the sentence the user typed . We do not use any paired data ( i.e . the synthetic and real-life datasets do not contain the same sentences ) , and do not have access to any auxiliary labels such as the exact frame in which a key was pressed . Our goal is to learn the parameters of a model that maximizes the conditional probability of Y given X . We use a Transformer ( Vaswani et al. , 2017 ) encoder-decoder as our model . In our setting we have a dataset of synthetic videos , Ds = { ( Xsi , Y si ) } , and a dataset of real-life videos Dt = { ( Xti , Y ti ) } , where the number of real-life videos is significantly less than the synthetic ( Figure 3 ) . While a large synthetic dataset can be easily generated , there exists a distribution shift between the two domains ( Figure 1 ) . 
When the amount of labeled data is scarce , it becomes difficult to train neural networks that generalize to samples outside the training set .

3.2 DISENTANGLING STYLE AND CONTENT . Our method to address the lack of real-life data is to train on combinations of style and content representation pairs from the synthetic and real domains . We introduce auxiliary losses to enforce disentanglement of style and content , ensuring that the style latent space does not contain any information about the content , and vice versa . Our training framework consists of a Content Encoder $E_C$ , a Style Encoder $E_S$ , a Decoder $G$ , a Feature Aggregation Module $M$ , a Style Discriminator $D_S$ , a Content Discriminator $D_C$ , and a Domain-Class Discriminator $D_M$ .

Pretraining Synthetic Model . We first pretrain an Encoder-Decoder Transformer on synthetic data only . We train this network with a multi-class cross-entropy loss where the goal is to predict the correct sentence for a given video . Then $E_C$ , $E_S$ , and $D_C$ are initialized with the weights of the pretrained Encoder , and $G$ is initialized with the weights of the pretrained Decoder .

Style Disentanglement . Style disentanglement ensures that style information is removed from the content latent space . The content latent space is defined as $z^f_{content} = E_C ( X^f_i ; \theta_{E_C} )$ where $f \in \{ s , t \}$ , with $s$ and $t$ denoting the synthetic and real domains , respectively . Similar to the setup of GANs ( Goodfellow et al. , 2014 ) , the Style Discriminator $D_S$ is trained to classify whether $z^f_{content}$ is real or synthetic . Next , $E_C$ is trained to spoof $D_S$ and generate a content feature representation that is domain invariant . $D_S$ is trained using Equation 1 . $E_C$ is trained using the same equation , but the labels are flipped and $D_S$ is not updated .

$\mathcal{L}_{Adv_{D_S}} = -\mathbb{E}\big[ \log ( D_S ( E_C ( X^s_i ) ) ) + \log ( 1 - D_S ( E_C ( X^t_i ) ) ) \big]$ ( 1 )

Content Disentanglement . Content disentanglement ensures that content information is removed from the style latent space . The style latent space is defined as $z^f_{style} = E_S ( X^f_i ; \theta_{E_S} )$ where $f \in \{ s , t \}$ . The Content Discriminator $D_C$ is a Transformer Decoder , and is trained to predict the correct sentence given the input style representation . $E_S$ is trained to spoof $D_C$ and generate a style feature representation $z^f_{style}$ such that $D_C$ cannot predict the correct sentence . This is done by maximizing the entropy $H$ of the predictions of $D_C$ . $D_C$ is trained by minimizing Equation 2 . $E_S$ is trained by maximizing Equation 3 with the weights of $D_C$ kept frozen .

$\mathcal{L}_{Adv_{D_C}} = -\log p ( Y^z_i \mid D_C ( E_S ( X^f_i ) ) )$ ( 2 )

$\mathcal{L}_{Adv_{E_S}} = H ( Y^z_i \mid D_C ( E_S ( X^s_i ) ) )$ ( 3 )

Feature Aggregation . A Feature Aggregation Module $M$ combines the disentangled representations from the previous two steps . For any given pair of style and content representations we have :

$M ( z^f_{style} , z^{f'}_{content} ) = m ( z^f_{style} + z^{f'}_{content} )$ ( 4 )

In Equation 4 , $m$ is the LayerNorm operation ( Ba et al. , 2016 ) , $f \in \{ s , t \}$ and $f' \in \{ s , t \}$ . There are four different possible pairs that can be the input to our model , since there are two factors of variation ( style and content ) and two domains ( synthetic and real-life ) . For any given input pair , the output feature representation of $M$ can be thought of as the content in the style of the specified domain . We denote this as $h_{ff'}$ where $f$ is the style and $f'$ is the content .
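To make the two adversarial objectives concrete, the following is a minimal PyTorch sketch of the style-discriminator loss of Equation 1 and the prediction-entropy term behind Equations 2-3. It is our illustrative reconstruction, not the authors' code: the interfaces (a D_S returning one logit per code, a D_C returning per-character logits) are assumptions.

import torch
import torch.nn.functional as F

def style_discriminator_loss(D_S, z_content_syn, z_content_real):
    # Eq. 1: D_S learns to label synthetic content codes 1 and real ones 0.
    # E_C is trained on the same loss with flipped labels while D_S is frozen.
    logits = torch.cat([D_S(z_content_syn), D_S(z_content_real)]).squeeze(-1)
    labels = torch.cat([torch.ones(len(z_content_syn)),
                        torch.zeros(len(z_content_real))])
    return F.binary_cross_entropy_with_logits(logits, labels)

def prediction_entropy(char_logits):
    # Eqs. 2-3: D_C predicts characters from style codes; E_S is updated to
    # maximize the entropy of those predictions, so that the style code
    # carries no usable content information.
    p = F.softmax(char_logits, dim=-1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=-1).mean()

In a training loop, D_C itself would be fit with the usual cross-entropy of Equation 2, while E_S takes a gradient step that increases prediction_entropy (equivalently, descends its negation).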
Prediction . The Decoder $G$ takes in the output of $M$ , $h_{ff'}$ , and outputs the predicted sentence $\hat{Y}^{f'}$ ; it is trained with the cross-entropy loss using the labels $Y^{f'}$ . The objective is :

$\mathcal{L}_{cls} = -\log p ( Y^{f'} \mid G ( M ( E_S ( X^f ) , E_C ( X^{f'} ) ) ) )$ ( 5 )

At test time , the model outputs the most likely sentence given a real-life video :

$\arg\max_{Y} \, p ( Y^t \mid G ( h_{tt} ) )$ ( 6 )

Semantic Alignment . We extend the framework of Motiian et al . ( 2017 ) to create training pairs that compensate for limited data in one domain . Rather than train on pairs of feature samples , we train on the outputs of a feature aggregation module that takes style-content pairs as input . Furthermore , we do not need the same labels in both domains , i.e. , we do not need the same synthetic and real sentences . We create four groups $G_k$ , $k \in \{ 1 , 2 , 3 , 4 \}$ . $G_1$ and $G_2$ are outputs of $M$ that share synthetic content : ( Synthetic Style , Synthetic Content ) and ( Real Style , Synthetic Content ) . $G_3$ and $G_4$ share real content : ( Synthetic Style , Real Content ) and ( Real Style , Real Content ) . A multi-class discriminator $D_M$ is trained using Equation 7 to correctly identify the group to which every output of $M$ belongs ; $l_k$ is the label for group $G_k$ . $E_C$ , $E_S$ , and $M$ are updated with Equation 8 such that $D_M$ cannot distinguish outputs of $M$ in $G_1$ from those in $G_2$ , nor outputs in $G_3$ from those in $G_4$ .

$\mathcal{L}_{Adv_{D_M}} = -\mathbb{E}\Big[ \sum_{k=1}^{4} l_k \log ( D_M ( M ( G_k ) ) ) \Big]$ ( 7 )

$\mathcal{L}_{Adv_{M}} = -\mathbb{E}\big[ \, l_1 \log ( D_M ( M ( G_2 ) ) ) - l_3 \log ( D_M ( M ( G_4 ) ) ) \, \big]$ ( 8 )

The final loss function used to train our model is shown in Equation 9 , where the weighting of each term is tuned using a validation set . An overview of the training procedure is shown in Algorithm A.9 .

$\mathcal{L} = \lambda_1 \mathcal{L}_{cls} + \lambda_2 \mathcal{L}_{Adv_M} + \lambda_3 \mathcal{L}_{Adv_{D_M}} + \lambda_4 \mathcal{L}_{Adv_{E_S}} + \lambda_5 \mathcal{L}_{Adv_{D_C}} + \lambda_6 \mathcal{L}_{Adv_{D_S}}$ ( 9 )
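A compact PyTorch sketch of the aggregation module of Equation 4 and the four-group pairing losses of Equations 7-8 follows. This is a hedged reconstruction: the module names, the choice of a plain linear layer for D_M, and the exact label bookkeeping are our own assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAggregator(nn.Module):
    # Eq. 4: M(z_style, z_content) = LayerNorm(z_style + z_content).
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)

    def forward(self, z_style, z_content):
        return self.norm(z_style + z_content)

def pairing_losses(D_M, M, zs_style, zt_style, zs_content, zt_content):
    # The four style-content groups of the semantic-alignment step:
    # G1 = (syn style, syn content),  G2 = (real style, syn content),
    # G3 = (syn style, real content), G4 = (real style, real content).
    groups = [M(zs_style, zs_content), M(zt_style, zs_content),
              M(zs_style, zt_content), M(zt_style, zt_content)]
    b = zs_style.size(0)
    labels = torch.arange(4).repeat_interleave(b)          # group ids, Eq. 7
    loss_D = F.cross_entropy(D_M(torch.cat(groups)), labels)
    # Eq. 8: the encoders and M try to make G2 pass for G1 and G4 for G3.
    fooled = torch.tensor([0] * b + [2] * b)
    loss_M = F.cross_entropy(D_M(torch.cat([groups[1], groups[3]])), fooled)
    return loss_D, loss_M

Here D_M could be as simple as nn.Linear(dim, 4); loss_D would update only D_M, while loss_M would update E_C, E_S, and M with D_M frozen.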
In this paper, the authors focus on keystroke inference attacks in which an attacker leverages machine learning approaches. In particular, a new framework is proposed for low-resource video domain adaptation using supervised disentangled learning, together with a method for assessing the threat of keystroke inference attacks mounted by an attacker who uses a deep learning system but has limited real-life data. The novelty of the approach and its theoretical foundation are appreciated. The authors decompose the data into real-life style, synthetic style, real-life content, and synthetic content, and then combine all style-content pairings across the two domains into feature representations used to train a model. This allows the model to classify the content of a sample rendered in the style of another domain. Results indicate that training on these pairs to disentangle style and content keeps the model from overfitting to a small real-world training set, effectively acting as a form of data augmentation.
SP:b24e79d30d19c99f1093779bdba8bd8b2aed9ec0
Warpspeed Computation of Optimal Transport, Graph Distances, and Embedding Alignment
1 INTRODUCTION . Measuring the distance between two distributions or sets of objects is a central problem in machine learning . One common method of solving this is optimal transport ( OT ) . OT is concerned with the problem of finding the transport plan for moving a source distribution ( e.g . a pile of earth ) to a sink distribution ( e.g . a construction pit ) with the cheapest cost w.r.t . some pointwise cost function ( e.g . the Euclidean distance ) . The advantages of this method have been shown numerous times , e.g . in generative modelling ( Arjovsky et al. , 2017 ; Bousquet et al. , 2017 ; Genevay et al. , 2018 ) , loss functions ( Frogner et al. , 2015 ) , set matching ( Wang et al. , 2019 ) , or domain adaptation ( Courty et al. , 2017 ) . Motivated by this , many different methods for accelerating OT have been proposed in recent years ( Indyk & Thaper , 2003 ; Papadakis et al. , 2014 ; Backurs et al. , 2020 ) . However , most of these approaches are specialized methods that do not generalize to modern deep learning models , which rely on dynamically changing high-dimensional embeddings . In this work we aim to make OT computation for point sets more scalable by proposing two fast and accurate approximations of entropy-regularized optimal transport : Sparse Sinkhorn and LCNSinkhorn , the latter relying on our newly proposed locally corrected Nyström ( LCN ) method . Sparse Sinkhorn uses a sparse cost matrix to leverage the fact that in entropy-regularized OT ( also known as the Sinkhorn distance ) ( Cuturi , 2013 ) often only each point ’ s nearest neighbors influence the result . LCN-Sinkhorn extends this approach by leveraging LCN , a general similarity matrix approximation that fuses local ( sparse ) and global ( low-rank ) approximations , allowing us to simultaneously capture both kinds of behavior . LCN-Sinkhorn thus fuses sparse Sinkhorn and Nyström-Sinkhorn ( Altschuler et al. , 2019 ) . Both sparse Sinkhorn and LCN-Sinkhorn run in log-linear time . We theoretically analyze these approximations and show that sparse corrections can lead to significant improvements over the Nyström approximation . We furthermore validate these approximations by showing that they are able to reproduce both the Sinkhorn distance and transport plan significantly better than previous methods across a wide range of regularization parameters and computational budgets ( as e.g . demonstrated in Fig . 1 ) . We then show the impact of these improvements by employing Sinkhorn approximations end-to-end in two high-impact machine learning tasks . First , we incorporate them into Wasserstein Procrustes for word embedding alignment ( Grave et al. , 2019 ) . LCN-Sinkhorn improves upon the original method ’ s accuracy by 3.1 percentage points using a third of the training time without any further model changes . Second , we develop the graph transport network ( GTN ) , which combines graph neural networks ( GNNs ) with optimal transport , and further improve it via learnable unbalanced OT and multi-head OT . GTN with LCN-Sinkhorn is the first model that both overcomes the bottleneck of using a single embedding per graph and scales log-linearly in the number of nodes . In summary , our paper ’ s main contributions are : • Locally Corrected Nyström ( LCN ) , a flexible , log-linear time approximation for similarity matrices , leveraging both local ( sparse ) and global ( low-rank ) approximations . • Entropy-regularized optimal transport ( a.k.a . 
Sinkhorn distance ) with log-linear runtime via sparse Sinkhorn and LCN-Sinkhorn . These are the first log-linear approximations that are stable enough to substitute full entropy-regularized OT in models that leverage high-dimensional spaces . • The graph transport network ( GTN ) , which combines a graph neural network ( GNN ) with multi-head unbalanced LCN-Sinkhorn . GTN both sets the state of the art on graph distance regression and still scales log-linearly in the number of nodes .

2 SPARSE SINKHORN . Entropy-regularized optimal transport . In this work we focus on optimal transport between two discrete sets of points . We furthermore add entropy regularization , which enables fast computation and often performs better than regular OT ( Cuturi , 2013 ) . Formally , given two categorical distributions modelled via the vectors $p \in \mathbb{R}^n$ and $q \in \mathbb{R}^m$ supported on two sets of points $X_p = \{ x_{p1} , \ldots , x_{pn} \}$ and $X_q = \{ x_{q1} , \ldots , x_{qm} \}$ in $\mathbb{R}^d$ , and the cost function $c : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ ( e.g . the squared L2 distance ) giving rise to the cost matrix $C_{ij} = c ( x_{pi} , x_{qj} )$ , we aim to find the Sinkhorn distance $d^\lambda_c$ and the associated optimal transport plan $\bar{P}$ ( Cuturi , 2013 )

$d^\lambda_c = \min_{P} \langle P , C \rangle_{\mathrm{F}} - \lambda H ( P ) , \quad \text{s.t.} \ P \mathbf{1}_m = p , \ P^T \mathbf{1}_n = q ,$ ( 1 )

with the Frobenius inner product $\langle \cdot , \cdot \rangle_{\mathrm{F}}$ and the entropy $H ( P ) = -\sum_{i=1}^{n} \sum_{j=1}^{m} P_{ij} \log P_{ij}$ . Note that $d^\lambda_c$ includes the entropy and can thus be negative , while Cuturi ( 2013 ) originally used $d^{1/\lambda}_{\text{Cuturi} , c} = \langle \bar{P} , C \rangle_{\mathrm{F}}$ . This optimization problem can be solved by finding the vectors $\bar{s}$ and $\bar{t}$ that normalize the columns and rows of the matrix $\bar{P} = \operatorname{diag} ( \bar{s} ) \, K \operatorname{diag} ( \bar{t} )$ with the similarity matrix $K_{ij} = e^{-C_{ij}/\lambda}$ , so that $\bar{P} \mathbf{1}_m = p$ and $\bar{P}^T \mathbf{1}_n = q$ . This is usually achieved via the Sinkhorn algorithm , which initializes the normalization vectors as $s^{(1)} = \mathbf{1}_n$ and $t^{(1)} = \mathbf{1}_m$ and then updates them alternatingly via

$s^{(i)} = p \oslash ( K t^{(i-1)} ) , \qquad t^{(i)} = q \oslash ( K^T s^{(i)} )$ ( 2 )

until convergence , where $\oslash$ denotes elementwise division .

Sparse Sinkhorn . The Sinkhorn algorithm is faster than non-regularized EMD algorithms , which run in $O ( n^2 m \log n \log ( n \max ( C ) ) )$ ( Tarjan , 1997 ) . However , its computational cost is still quadratic in time , i.e . $O ( nm )$ , which is prohibitively expensive for large $n$ and $m$ . We propose to overcome this by observing that the matrix $K$ , and hence also $\bar{P}$ , is negligibly small everywhere except at each point ’ s closest neighbors because of the exponential used in $K$ ’ s computation . We propose to leverage this by approximating $C$ via the sparse matrix $C^{sp}$ , where

$C^{sp}_{ij} = \begin{cases} C_{ij} & \text{if } x_{pi} \text{ and } x_{qj} \text{ are “ near ” } , \\ \infty & \text{otherwise} . \end{cases}$ ( 3 )

$K^{sp}$ and $\bar{P}^{sp}$ follow according to the definitions of $K$ and $\bar{P}$ . In this work we primarily consider neighbors with distance lower than $r_1$ as “ near ” . Finding such neighbors can be efficiently solved via locality sensitive hashing ( LSH ) on $X_p \cup X_q$ .

Locality sensitive hashing . LSH tries to filter “ near ” from “ far ” data points by putting them into different hash buckets . Points closer than a certain distance $r_1$ are put into the same bucket with probability at least $p_1$ , while those beyond some distance $r_2 = c \cdot r_1$ with $c > 1$ are put into the same bucket with probability at most $p_2 \ll p_1$ . There is a plethora of LSH methods for different cost functions ( Wang et al. , 2014 ; Shrivastava & Li , 2014 ) , so we do not have to restrict our approach to a limited set of functions . In this work we focus on cross-polytope LSH ( Andoni et al. , 2015 ) and k-means LSH ( Paulevé et al. , 2010 ) , depending on the cost function ( see App . H ) . Sparse Sinkhorn with LSH scales log-linearly with the number of points , i.e . $O ( n \log n )$ for $n \approx m$ ( see App . A and App . K for details ) . Unfortunately , LSH can fail when e.g . the cost between pairs is very similar ( see App . B ) . However , we can alleviate these limitations by fusing $K^{sp}$ with the Nyström approximation .

3 LOCALLY CORRECTED NYSTRÖM AND LCN-SINKHORN . Nyström method . The Nyström method is a popular way of approximating similarity matrices that provides performance guarantees for many important tasks ( Williams & Seeger , 2001 ; Musco & Musco , 2017 ) . It approximates a positive semi-definite ( PSD ) similarity matrix $K$ via the low-rank decomposition $K_{\text{Nys}} = U A^{-1} V$ . Since the optimal decomposition via SVD is too expensive to compute , Nyström instead chooses a set of $l$ landmarks $L = \{ x_{l1} , \ldots , x_{ll} \}$ and obtains the matrices via $U_{ij} = k ( x_{pi} , x_{lj} )$ , $A_{ij} = k ( x_{li} , x_{lj} )$ , and $V_{ij} = k ( x_{li} , x_{qj} )$ , where $k ( x_1 , x_2 )$ is an arbitrary PSD kernel , e.g . $k ( x_1 , x_2 ) = e^{-c ( x_1 , x_2 ) / \lambda}$ for Sinkhorn . Common methods of choosing landmarks from $X_p \cup X_q$ are uniform and ridge leverage score ( RLS ) sampling . We instead focus on k-means Nyström and sampling via k-means++ , which we found to be significantly faster than recursive RLS sampling ( Zhang et al. , 2008 ) and to perform better than both uniform and RLS sampling ( see App . H ) .

Sparse vs. Nyström . Exponential kernels like the one used for $K$ ( e.g . the Gaussian kernel ) typically have an infinite-dimensional reproducing kernel Hilbert space . The resulting Gram matrix $K$ thus always has full rank . A low-rank approximation like the Nyström method can therefore only account for its global structure and not the local structure around each point $x$ . As such , it is ill-suited for any moderately low entropy regularization parameter , where the transport matrix $\bar{P}$ resembles a permutation matrix . Sparse Sinkhorn , on the other hand , cannot account for global structure and instead approximates all non-selected distances as infinity . It will hence fail if more than a handful of neighbors are required per point . These approximations are thus opposites of each other , and as such not competing but rather complementary approaches .

Locally corrected Nyström . Since we know that the entries in our sparse approximation are exact , fusing this matrix with the Nyström method is rather straightforward . For all non-zero values in the sparse approximation $K^{sp}$ we first calculate the corresponding Nyström approximations , obtaining the sparse matrix $K^{sp}_{\text{Nys}}$ . To obtain the locally corrected Nyström ( LCN ) approximation we remove these entries from $K_{\text{Nys}}$ and replace them with their exact values , i.e .

$K_{\text{LCN}} = K_{\text{Nys}} + K^{sp}_{\Delta} = K_{\text{Nys}} - K^{sp}_{\text{Nys}} + K^{sp} .$ ( 4 )

LCN-Sinkhorn . To obtain the approximate transport plan $\bar{P}_{\text{LCN}}$ we run the Sinkhorn algorithm with $K_{\text{LCN}}$ instead of $K$ . However , we never fully instantiate $K_{\text{LCN}}$ . Instead , we only save the decomposition and directly use these parts in Eq . ( 2 ) via $K_{\text{LCN}} t = U ( A^{-1} V t ) + K^{sp}_{\Delta} t$ , similarly to Altschuler et al . ( 2019 ) . As a result we obtain the decomposition $\bar{P}_{\text{LCN}} = \bar{P}_{\text{Nys}} + \bar{P}^{sp}_{\Delta} = \bar{P}_U \bar{P}_W + \bar{P}^{sp} - \bar{P}^{sp}_{\text{Nys}}$ and the approximate distance ( using Lemma A from Altschuler et al . ( 2019 ) )

$d^{\lambda}_{\text{LCN} , c} = \lambda \, ( s^T \bar{P}_U \bar{P}_W \mathbf{1}_m + \mathbf{1}_n^T \bar{P}_U \bar{P}_W t + s^T \bar{P}^{sp}_{\Delta} \mathbf{1}_m + \mathbf{1}_n^T \bar{P}^{sp}_{\Delta} t ) .$ ( 5 )

This approximation scales log-linearly with dataset size ( see App . A and App . K for details ) .
It allows us to smoothly move from Nyström-Sinkhorn to sparse Sinkhorn by varying the number of neighbors and landmarks . We can thus freely choose the optimal “ operating point ” based on the underlying problem and regularization parameter . We discuss the limitations of LCN-Sinkhorn in App . B .
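The implicit kernel-vector products above translate directly into code. Below is a minimal NumPy/SciPy sketch of LCN-Sinkhorn; it is our reconstruction, with numerical safeguards such as log-domain stabilization and convergence checks omitted. K_sp_delta is the sparse correction matrix K^sp - K^sp_Nys, stored e.g. as a scipy.sparse.csr_matrix.

import numpy as np

def lcn_sinkhorn(p, q, U, A, V, K_sp_delta, n_iter=100):
    # Sinkhorn updates (Eq. 2) with the implicit LCN kernel (Eq. 4):
    #   K_LCN t = U (A^{-1} (V t)) + K_sp_delta t
    # Only the factors are stored, never the full n x m kernel, so each
    # iteration costs roughly O((n + m) l) plus the sparse non-zeros.
    A_inv = np.linalg.inv(A)
    Kt = lambda t: U @ (A_inv @ (V @ t)) + K_sp_delta @ t
    KTs = lambda s: V.T @ (A_inv.T @ (U.T @ s)) + K_sp_delta.T @ s
    s, t = np.ones_like(p), np.ones_like(q)
    for _ in range(n_iter):
        s = p / Kt(t)
        t = q / KTs(s)
    return s, t   # the plan P = diag(s) K_LCN diag(t) stays implicit

Passing an all-zero sparse matrix for K_sp_delta recovers Nyström-Sinkhorn, while passing empty factors U, A, V (zero landmarks) recovers sparse Sinkhorn.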
The paper considers the problem of approximating the Sinkhorn divergence and the corresponding transport plan by combining low-rank (Nyström) and sparse approximations of the Sinkhorn kernel and using the resulting decomposition inside Sinkhorn's iterations. The approach is amenable to differentiation and can be used as a building block in different architectures. Numerical experiments in several settings compare the proposed approach with existing ones and demonstrate its scalability.
SP:181ce6eaacf4be8ede3fbdd82c63200278f63cc4
Adversarial score matching and improved sampling for image generation
1 INTRODUCTION . Song and Ermon ( 2019 ) recently proposed a novel method of generating samples from a target distribution through a combination of Denoising Score Matching ( DSM ) ( Hyvärinen , 2005 ; Vincent , 2011 ; Raphan and Simoncelli , 2011 ) and Annealed Langevin Sampling ( ALS ) ( Welling and Teh , 2011 ; Roberts et al. , 1996 ) . Since convergence to the distribution is guaranteed by the ALS , their approach ( DSM-ALS ) produces high-quality samples and guarantees high diversity . Though , this comes at the cost of requiring an iterative process during sampling , contrary to other generative methods . These generative methods can notably be used to diverse tasks like colorization , image restoration and image inpainting ( Song and Ermon , 2019 ; Kadkhodaie and Simoncelli , 2020 ) . Song and Ermon ( 2020 ) further improved their approach by increasing the stability of score matching training and proposing theoretically sound choices of hyperparameters . They also scaled their approach to higher-resolution images and showed that DSM-ALS is competitive with other generative models . Song and Ermon ( 2020 ) observed that the images produced by their improved model were more visually appealing than the ones from their original work ; however , the reported Fréchet Inception Distance ( FID ) ( Heusel et al. , 2017 ) did not correlate with this improvement . Although DSM-ALS is gaining traction , Generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) remain the leading approach to generative modeling . GANs are a very popular class of generative models ; they have been successfully applied to image generation ( Brock et al. , 2018 ; Karras et al. , 2017 ; 2019 ; 2020 ) and have subsequently spawned a wealth of variants ( Radford et al. , 2015a ; Miyato et al. , 2018 ; Jolicoeur-Martineau , 2018 ; Zhang et al. , 2019 ) . The idea behind this method is to train a Discriminator ( D ) to correctly distinguish real samples from fake samples generated by a second agent , known as the Generator ( G ) . GANs excel at generating high-quality samples as the discriminator captures features that make an image plausible , while the generator learns to emulate them . Still , GANs often have trouble producing data from all possible modes , which limits the diversity of the generated samples . A wide variety of tricks have been developed to address this issue in GANs ( Kodali et al. , 2017 ; Gulrajani et al. , 2017 ; Arjovsky et al. , 2017 ; Miyato et al. , 2018 ; JolicoeurMartineau and Mitliagkas , 2019 ) , though it remains an issue to this day . DSM-ALS , on the other hand , does not suffer from that problem since ALS allows for sampling from the full distribution captured by the score network . Nevertheless , the perceptual quality of DSM-ALS higher-resolution images has so far been inferior to that of GAN-generated images . Generative modeling has since seen some incredible work from Ho et al . ( 2020 ) , who achieved exceptionally low ( better ) FID on image generation tasks . Their approach showcased a diffusion-based method ( Sohl-Dickstein et al. , 2015 ; Goyal et al. , 2017 ) that shares close ties with DSM-ALS , and additionally proposed a convincing network architecture derived from Salimans et al . ( 2017 ) . In this paper , after introducing the necessary technical background in the next section , we build upon the work of Song and Ermon ( 2020 ) and propose improvements based on theoretical analyses both at training and sampling time . 
Our contributions are as follows :
• We propose Consistent Annealed Sampling ( CAS ) as a more stable alternative to ALS , correcting inconsistencies relating to the scaling of the added noise ;
• We show how to recover the expected denoised sample ( EDS ) and demonstrate its unequivocal benefits w.r.t the FID . Notably , we show how to resolve the mismatch observed in DSM-ALS between the visual quality of generated images and its high ( worse ) FID ;
• We propose to further exploit the EDS through a hybrid objective function , combining GAN and Denoising Score Matching objectives , thereby encouraging the EDS of the score network to be as realistic as possible .
In addition , we show that the network architecture used by Ho et al . ( 2020 ) significantly improves sample quality over the RefineNet ( Lin et al. , 2017a ) architecture used by Song and Ermon ( 2020 ) . In an ablation study performed on CIFAR-10 and LSUN-church , we demonstrate how these contributions bring DSM-ALS in range of the state-of-the-art for image generation tasks w.r.t . the FID . The code to replicate our experiments is publicly available at [ Available in supplementary material ] .

2 BACKGROUND . 2.1 DENOISING SCORE MATCHING . Denoising Score Matching ( DSM ) ( Hyvärinen , 2005 ) consists of training a score network to approximate the gradient of the log density of a certain distribution ( $\nabla_x \log p ( x )$ ) , referred to as the score function . This is achieved by training the network to approximate a noisy surrogate of $p$ at multiple levels of Gaussian noise corruption ( Vincent , 2011 ) . The score network $s$ , parametrized by $\theta$ and conditioned on the noise level $\sigma$ , is tasked to minimize the following loss :

$\frac{1}{2} \, \mathbb{E}_{p ( \tilde{x} , x , \sigma )} \Big[ \big\| \sigma s_\theta ( \tilde{x} , \sigma ) + \frac{\tilde{x} - x}{\sigma} \big\|_2^2 \Big] ,$ ( 1 )

where $p ( \tilde{x} , x , \sigma ) = q_\sigma ( \tilde{x} \mid x ) \, p ( x ) \, p ( \sigma )$ . We define further $q_\sigma ( \tilde{x} \mid x ) = \mathcal{N} ( \tilde{x} \mid x , \sigma^2 I )$ the corrupted data distribution , $p ( x )$ the training data distribution , and $p ( \sigma )$ the uniform distribution over a set $\{ \sigma_i \}$ corresponding to different levels of noise . In practice , this set is defined as a geometric progression between $\sigma_1$ and $\sigma_L$ ( with $L$ chosen according to some computational budget ) :

$\{ \sigma_i \}_{i=1}^{L} = \Big\{ \gamma^i \sigma_1 \ \Big|\ i \in \{ 0 , \ldots , L-1 \} \Big\} , \quad \gamma \triangleq \frac{\sigma_2}{\sigma_1} = \cdots = \Big( \frac{\sigma_L}{\sigma_1} \Big)^{\frac{1}{L-1}} < 1 .$ ( 2 )

Rather than having to learn a different score function for every $\sigma_i$ , one can train an unconditional score network by defining $s_\theta ( \tilde{x} , \sigma_i ) = s_\theta ( \tilde{x} ) / \sigma_i$ , and then minimizing Eq . 1 . While unconditional networks are less heavy computationally , it remains an open question whether conditioning helps performance . Li et al . ( 2019 ) and Song and Ermon ( 2020 ) found that the unconditional network produced better samples , while Ho et al . ( 2020 ) obtained better results than both of them using a conditional network . Additionally , the denoising autoencoder described in Lim et al . ( 2020 ) gives evidence supporting the benefits of conditioning when the noise becomes small ( also see App . D and E for a theoretical discussion of the difference ) . While our experiments are conducted with unconditional networks , we believe our techniques can be straightforwardly applied to conditional networks ; we leave that extension for future work .

2.2 ANNEALED LANGEVIN SAMPLING . Given a score function , one can use Langevin dynamics ( or Langevin sampling ) ( Welling and Teh , 2011 ) to sample from the corresponding probability distribution .
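As a concrete reference, here is a minimal PyTorch sketch of the noise schedule of Equation 2 and the conditional DSM objective of Equation 1. It is a hedged reconstruction: we assume a score_net(x_noisy, sigmas) signature that takes per-sample noise levels, and no training details from the paper are implied.

import torch

def make_sigmas(sigma1, sigmaL, L):
    # Eq. 2: geometric progression sigma_1 > ... > sigma_L (ratio gamma < 1).
    return sigma1 * (sigmaL / sigma1) ** (torch.arange(L) / (L - 1))

def dsm_loss(score_net, x, sigmas):
    # Eq. 1: (1/2) E || sigma * s_theta(x_tilde, sigma) + (x_tilde - x)/sigma ||^2
    # with sigma drawn uniformly from {sigma_i} and x_tilde ~ N(x, sigma^2 I).
    idx = torch.randint(len(sigmas), (x.shape[0],))
    sigma = sigmas[idx].view(-1, *([1] * (x.dim() - 1)))
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise            # so (x_tilde - x)/sigma = noise
    residual = sigma * score_net(x_tilde, sigma.flatten()) + noise
    return 0.5 * residual.flatten(1).pow(2).sum(dim=1).mean()

For the unconditional variant described above, one would instead evaluate score_net(x_tilde) and divide by sigma before forming the residual.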
In practice , the score function is generally unknown and estimated through a score network trained to minimize Eq . 1 . Song and Ermon ( 2019 ) showed that Langevin sampling has trouble exploring the full support of the distribution when the modes are too far apart and proposed Annealed Langevin Sampling ( ALS ) as a solution . ALS starts sampling with a large noise level and progressively anneals it down to a value close to 0 , ensuring both proper mode coverage and convergence to the data distribution . Its precise description is shown in Algorithm 1 .

Algorithm 1 Annealed Langevin Sampling . Require : $s_\theta$ , $\{ \sigma_i \}_{i=1}^{L}$ , $\epsilon$ , $n_\sigma$ .
1 : Initialize $x$
2 : for $i \leftarrow 1$ to $L$ do
3 : $\quad \alpha_i \leftarrow \epsilon \, \sigma_i^2 / \sigma_L^2$
4 : $\quad$ for $n_\sigma$ steps do
5 : $\qquad$ Draw $z \sim \mathcal{N} ( 0 , I )$
6 : $\qquad x \leftarrow x + \alpha_i s_\theta ( x , \sigma_i ) + \sqrt{2 \alpha_i} \, z$
7 : return $x$

Algorithm 2 Consistent Annealed Sampling . Require : $s_\theta$ , $\{ \sigma_i \}_{i=1}^{L}$ , $\gamma$ , $\epsilon$ , $\sigma_{L+1} = 0$ .
1 : Initialize $x$
2 : $\beta \leftarrow \sqrt{1 - ( 1 - \epsilon / \sigma_L^2 )^2 / \gamma^2}$
3 : for $i \leftarrow 1$ to $L$ do
4 : $\quad \alpha_i \leftarrow \epsilon \, \sigma_i^2 / \sigma_L^2$
5 : $\quad$ Draw $z \sim \mathcal{N} ( 0 , I )$
6 : $\quad x \leftarrow x + \alpha_i s_\theta ( x , \sigma_i ) + \beta \sigma_{i+1} z$
7 : return $x$

2.3 EXPECTED DENOISED SAMPLE ( EDS ) . A little-known fact from the Bayesian literature is that one can recover a denoised sample from the score function using the Empirical Bayes mean ( Robbins , 1955 ; Miyasawa , 1961 ; Raphan and Simoncelli , 2011 ) :

$s^* ( \tilde{x} , \sigma ) = \frac{H^* ( \tilde{x} , \sigma ) - \tilde{x}}{\sigma^2} ,$ ( 3 )

where $H^* ( \tilde{x} , \sigma ) \triangleq \mathbb{E}_{x \sim q_\sigma ( x \mid \tilde{x} )} [ x ]$ is the expected denoised sample given a noisy sample ( or Empirical Bayes mean ) , conditioned on the noise level . A different way of reaching the same result is through the closed form of the optimal score function , as presented in Appendix D. The corresponding result for the unconditional score function is presented in Appendix E for completeness . The EDS corresponds to the expected real image given a corrupted image ; it can be thought of as what the score network believes to be the true image concealed within the noisy input . It has also been suggested that denoising the samples ( i.e. , taking the EDS ) at the end of the Langevin sampling improves their quality ( Saremi and Hyvarinen , 2019 ; Li et al. , 2019 ; Kadkhodaie and Simoncelli , 2020 ) . In Section 4 , we provide further evidence that denoising the final Langevin sample brings it closer to the assumed data manifold . In particular , we show that the Fréchet Inception Distance ( FID ) consistently decreases ( improves ) after denoising . Finally , in Section 5 , we build a hybrid training objective using the properties of the EDS discussed above . There are interesting links to be made between ALS and the RED algorithm ( Romano et al. , 2017 ; Reehorst and Schniter , 2018 ) . The RED algorithm attempts to find the maximum a posteriori probability ( MAP ) denoised sample ( i.e. , the most plausible real data ) given a noisy sample . It does so by solving an optimization problem to obtain a sample close to the noisy sample for which the EDS is a fixed point ( denoising the sample does not change it because it is a real sample ) . Thus , just like ALS , the RED algorithm generates plausible real data given a score network . However , this algorithm does not ensure that we sample from the distribution and obtain full mode coverage . Thus , ALS ’ s key benefit is ensuring that we sample from the full support of the distribution .
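Restated in code, the two samplers differ only in where the injected noise scale comes from. The following PyTorch sketch is our reconstruction of Algorithms 1 and 2; the uniform initialization and the score_net(x, sigma) interface are assumptions rather than details fixed by the paper.

import torch

@torch.no_grad()
def annealed_langevin(score_net, sigmas, eps, n_sigma, shape):
    # Algorithm 1 (ALS): n_sigma Langevin steps at each noise level sigma_i.
    x = torch.rand(shape)
    for sigma in sigmas:
        alpha = eps * sigma ** 2 / sigmas[-1] ** 2
        for _ in range(n_sigma):
            z = torch.randn(shape)
            x = x + alpha * score_net(x, sigma) + (2 * alpha) ** 0.5 * z
    return x

@torch.no_grad()
def consistent_annealed(score_net, sigmas, gamma, eps, shape):
    # Algorithm 2 (CAS): a single step per level, with the injected noise
    # scaled by sigma_{i+1} so x stays consistent with the next noise level.
    x = torch.rand(shape)
    beta = (1 - (1 - eps / sigmas[-1] ** 2) ** 2 / gamma ** 2) ** 0.5
    sigmas_next = torch.cat([sigmas[1:], sigmas.new_zeros(1)])  # sigma_{L+1}=0
    for sigma, sigma_next in zip(sigmas, sigmas_next):
        alpha = eps * sigma ** 2 / sigmas[-1] ** 2
        z = torch.randn(shape)
        x = x + alpha * score_net(x, sigma) + beta * sigma_next * z
    return x

With sigmas produced by the make_sigmas sketch above, gamma is simply sigmas[1] / sigmas[0].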
The submission presents three contributions. First, the authors point out inconsistencies in the annealed Langevin sampling used in existing score-matching generative models and propose to correct them with the newly proposed Consistent Annealed Sampling (CAS) algorithm. The second claimed contribution is evidence of the benefits of the Expected Denoised Sample (EDS). Furthermore, the submission introduces a hybrid adversarial score-matching model that demonstrates improvements in terms of FID on simpler architectures.
SP:06414ad3c4b2438227a6d0749755106ee30f1564
Collaborative Filtering with Smooth Reconstruction of the Preference Function
The problem of predicting the rating of a set of users to a set of items in a recommender system based on partial knowledge of the ratings is widely known as collaborative filtering . In this paper , we consider a mapping of the items into a vector space and study the prediction problem by assuming an underlying smooth preference function for each user , the quantization at each given vector yields the associated rating . To estimate the preference functions , we implicitly cluster the users with similar ratings to form dominant types . Next , we associate each dominant type with a smooth preference function ; i.e. , the function values for items with nearby vectors shall be close to each other . The latter is accomplished by a rich representation learning in a so called frequency domain . In this framework , we propose two approaches for learning user and item representations . First , we use an alternating optimization method in the spirit of k-means to cluster users and map items . We further make this approach less prone to overfitting by a boosting technique . Second , we present a feedforward neural network architecture consisting of interpretable layers which implicitely clusters the users . The performance of the method is evaluated on two benchmark datasets ( ML-100k and ML-1M ) . Albeit the method benefits from simplicity , it shows a remarkable performance and opens a venue for future research . All codes are publicly available on the GitLab . 1 INTRODUCTION . Nowadays , recommender systems ( RS ) are among the most effective ways for large companies to attract more customers . A few statistics are sufficient to attract attention towards the importance of RS : 80 percent of watched movies on Netflix and 60 percent of video clicks on Youtube are linked with recommendations ( Gomez-Uribe & Hunt , 2015 ; Davidson et al. , 2010 ) . However , the world of RS is not limited to video industry . In general , recommender systems can be categorized into three groups ( Zhang et al. , 2019 ) : collaborative filtering ( CF ) , content-based RS , and hybrid RS depending on the used data type . In this paper , we focus on CF , which uses historical interactions to make recommendations . There might be some auxiliary information available to the CF algorithm ( like the user personal information ) ; however , a general CF method does not take such side information into account ( Zhang & Chen , 2019 ) . This includes our approach in this paper . Recently , deep learning has found its way to RS and specifically CF methods . Deep networks are able to learn non-linear representations with powerful optimization tools , and their efficient implementations have made then promising CF approaches . However , a quick look at some pervasive deep networks in RS ( e.g. , He et al . ( 2017 ) and Wu et al . ( 2016 ) ) shows that the utilization of deep architectures is limited to shallow networks . Still , it is unclear why networks have not gone deeper in RS in contrast to other fields like computer vision ( Zhang et al. , 2019 ) . We suppose that the fundamental reason that limits the application of a deeper structure is the absence of interpretability ( look at Seo et al . ( 2017 ) , for example ) . Here , interpretability can be defined in two ways ( Zhang et al. , 2019 ) ; first , users be aware of the purpose behind a recommendation , and second , the system operator should know how manipulation of the system will affect the predictions ( Zhang et al. , 2018 ) . 
This paper addresses both issues by formulating the recommendation as a smooth reconstruction of user preferences . Particularly , our contributions are : • The CF problem is formulated as the reconstruction of user preference functions by minimal assumptions . • An alternating optimization method is proposed that effectively optimizes a non-convex loss function and extracts user and item representations . In this regard , effective clustering methods are proposed and tested . • A feed-forward shallow architecture is introduced , which has interpretable layers and performs well in practice . • Despite the simplicity and interpretability of the methods , their performance on benchmark datasets is remarkable . 1.1 RELATED WORKS . The applied methods in CF are versatile and difficult to name . Below , we explain a number of methods which are well-known and are more related to our work . Multilayer perceptron based models . A natural extension of matrix factorization ( MF ) methods ( Mnih & Salakhutdinov , 2008 ) are Neural Collaborative Filtering ( NCF ) ( He et al. , 2017 ) and Neural Network Matrix Factorization ( NNMF ) ( Dziugaite & Roy , 2015 ) . Both methods extend the idea behind MF and use the outputs of two networks as the user and the item representations . The innerproduct makes the prediction of two representations . Although our work has some similarity to this method , we model users by functions and represent these functions in a so-called frequency domain . Thus , user and item representations are not in the same space . AutoEncoder based models . AutoRec ( Sedhain et al. , 2015 ) and CFN ( Strub et al. , 2016 ) are wellknown autoencoder ( AE ) structures that transform partial observations ( user-based or item-based ) into full row or column data . Our method differs from AE structures as our network use item ( user ) representations and predicts user ( item ) ratings . 2 SMOOTH RECONSTRUCTION FROM NON-UNIFORM SAMPLES . Rating as the output of the preference function . Most of the time , a finite set of features can characterize users and items that constitute the recommendation problem . Although no two users or items are exactly the same , the number of characterization features can be considerably small without losing much information . Assume that item i is characterized by the vector xi ∈ X ⊂ Rd . We further assume that all users observe similar features of an item and user u ’ s ratings are determined by a preference function fu : X → [ cmin , cmax ] . The recovery of a general preference function might need an indefinite number of samples , i.e. , observed ratings . However , we do not expect user attitudes to change too much with small changes in an item ’ s feature . E.g. , if the price is the determinative factor in someone ’ s preference , small changes in the price must not change the preference over this item significantly ( look at figure 1 ) . Reconstruction of bandlimited 1D signals . Let us start with the simplest case . Consider s [ n ] , n = 0 , 1 , . . . , N − 1 , a 1D signal with length N . We call s to have bandwidth M < N2 if there is a representation ŝ [ m ] , m = −M , −M + 1 , . . . , M − 1 , M that can represent s as : s [ n ] = M∑ m=−M ŝ [ m ] ej2π ( mn/N ) ( 1 ) So 2M +1 distinct samples from s would be enough to calculate ŝ . For an analytical approach , it is useful to interpret equation 1 as a discretization of a trigonometric continuous equation : h ( x ) = M∑ ( m=−M ) ame j2π ( mx ) , a ∈ C2M+1 , x ∈ [ 0 , 1 ) ( 2 ) Mirroring . 
Smoothness is usually used to refer to signals that are bandlimited around the zero frequency , which can be represented by equation 1 . However , we use the term smooth finite-length signal to refer to a real-valued finite-length signal that has intuitively smooth behavior in its non-zero domain . figure 2 shows an example . The trigonometric functions in equation 2 cannot approximate such signals well even if we shift and scale the domain to [ 0 , 1 ] , because the original signal is not periodic . One possible solution to make the trigonometric functions still a good representative for the finite-length signal would be mirroring . figure 2 shows the shifted , scaled , and mirrored signals in 1D space .

Extension to multi-dimensional real-valued mirrored signals . Equation 2 simplifies for a real-valued s to include only cosine terms . One can obtain the extension of equation 2 for real-valued mirrored signals as :

$h ( x ) = h ( x_1 , x_2 , \ldots , x_d ) = \sum_{m_1=0}^{M} \cdots \sum_{m_d=0}^{M} A_{m_1 , m_2 , \ldots , m_d} \cos ( \pi ( m_1 x_1 + \cdots + m_d x_d ) ) ,$ ( 3 )

where $x \in [ 0 , 1 ]^d$ and $A$ is a d-dimensional real tensor . To simplify the notation , we use $m^T x$ to refer to $m_1 x_1 + \cdots + m_d x_d$ and $a$ to refer to the vectorized $A$ . Starting from $m_1 , m_2 , \ldots , m_d$ all equal to 0 , one can put all the possible values of $m$ as the rows of a matrix $C \in \mathbb{R}^{(M+1)^d \times d}$ , in the same order as they appear in the vectorization of $A$ . Now we can rewrite equation 3 with matrix operations :

$h ( x ) = \sum_{k=1}^{(M+1)^d} a_k \cos ( \pi C_{k , :} \, x )$ ( 4 )

Vandermonde matrix . Given r non-uniform samples in $[ 0 , 1 ]^d$ , the Fourier coefficients $a$ are the solution to the linear system of equations $h ( x_i ) = s_i$ , $i = 1 , 2 , \ldots , r$ . The Vandermonde matrix for this system is defined as :

$V = \cos ( \pi C [ x_1 , x_2 , \ldots , x_r ] ) ,$ ( 5 )

where $\cos ( \cdot )$ is the element-wise cosine and $[ \ldots ]$ is the stacking operator of column vectors . So , the linear system of equations can be shortened to $V^T a = s$ . Here $s$ is the column vector of the r observed $s_i$ put together . In contrast to the 1D case , there is no simple theorem on the conditions to estimate $a$ correctly . Roughly speaking , this needs the rank of $V$ to be larger than the number of unknowns , i.e. , $(M+1)^d$ , or in other words , the number of samples ( r ) should be sufficiently larger than $(M+1)^d$ .

Reconstruction of the preference function from the observed ratings . We can state the problem of rating prediction as the reconstruction of the preference function of each user ( $f_u$ ) given the observed ratings of that user ( $I_u$ ) . If we assign a d-dimensional characterization vector ( $x_i$ ) to each item i , assumed to lie in $X = [ 0 , 1 ]^d$ , we can estimate user u ’ s Fourier coefficients as $a_u = ( V_u^T )^\dagger s_u$ . At the starting point we do not know how the items are distributed in X , which means $V_u$ will be inaccurate . So , optimizing the reconstruction loss gives fair characteristics for the items in X :

$\min_{x_i , i \in I} \sum_{u \in U} \| V_u^T ( V_u^T )^\dagger s_u - s_u \|^2 .$ ( 6 )

3 LEARNING REPRESENTATIONS BY MINIMIZING RECONSTRUCTION LOSS . Minimizing equation 6 , aside from the non-convexity of the cost function , implicitly involves solving $V_u^T a_u = s_u$ , which can in general be an ill-conditioned system of linear equations , especially when the user u has few recorded ratings . To reliably estimate the Fourier coefficients $a_u$ ( user representations ) , we group similar users into a number of clusters and use a single representative for each cluster ( virtually increasing the number of available ratings ) .
In addition , we further consider a Tikhonov ( L2 ) regularizer to improve the condition number . With this approach , we need to solve

$\min_{\{ x_i , i \in I \} , \, c} \mathcal{L} = \min \sum_{u \in U} \| V_{c(u)}^T a_{c(u)} - s_u \|^2 + \lambda \sum_{k \in C} \| a_k \|^2 , \quad \text{s.t.} \ 0 \le x_i < 1 ,$ ( 7 )

where $c : U \to C$ is the mapping of the users into clusters , C is the set of clusters , and $V_k$ is the Vandermonde matrix associated with the cluster k ( considering all the users in a cluster as a super-user ) . Hence , $V_k$ is a function of $\{ x_i , \, i \in \cup_{\{ u : c(u) = k \}} I_u \}$ . The penalty parameter $\lambda$ shall be tuned via cross-validation . Moreover , the Fourier coefficients $a_k$ for the cluster k are obtained by :

$a_k = ( V_k V_k^T + \lambda I )^{-1} V_k s_k ,$ ( 8 )

where $s_k$ is the vector of all observed ratings in the cluster k. In the sequel , we propose two approaches for minimizing equation 7 . In the first approach ( Section 3.1 ) , we alternately minimize $\min_{\{ x_i , i \in I \}} \mathcal{L}$ and $\min_c \mathcal{L}$ ; as $\min_c \mathcal{L}$ requires a combinatorial search , we introduce an approximate algorithm named k-representation ( Section 3.1.1 ) , inspired by the k-means technique . Each iteration of k-representation consists of assigning each user to the cluster with the lowest reconstruction loss , and updating the cluster representatives . In the second approach ( Section 3.2 ) , we train a neural network to jointly characterize the items and cluster the users . For this , the loss function of equation 7 is modified to accommodate soft clustering .
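The building blocks of Equations 5, 7, and 8 are a few lines of NumPy. The sketch below is our illustration under assumed shapes (C of shape ((M+1)^d, d) holding the frequency vectors as rows, X of shape (d, r) holding item vectors as columns); it is not the authors' released implementation.

import numpy as np

def vandermonde(C, X):
    # Eq. 5: V = cos(pi * C X); column i of V evaluates all (M+1)^d cosine
    # atoms of Eq. 4 at item vector x_i.
    return np.cos(np.pi * C @ X)

def cluster_coefficients(V_k, s_k, lam):
    # Eq. 8: ridge-regularized coefficients of cluster k, treating all users
    # in the cluster as one super-user with stacked ratings s_k.
    F = V_k.shape[0]
    return np.linalg.solve(V_k @ V_k.T + lam * np.eye(F), V_k @ s_k)

def cluster_term(V_k, s_k, a_k, lam):
    # One summand of the objective in Eq. 7: reconstruction error + L2 penalty.
    return np.sum((V_k.T @ a_k - s_k) ** 2) + lam * np.sum(a_k ** 2)

An alternating scheme in the spirit of the paper's k-representation would then reassign each user to the cluster whose coefficients give the smallest cluster_term on that user's ratings, and recompute Eq. 8 per cluster.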
This paper proposes an approach based on Fourier transforms to predict ratings in collaborative filtering problems. The paper's scope (“smooth reconstruction functions”) is immediately narrowed down to Fourier transforms; it would be nice to provide some motivation for this choice over alternative smooth functions. The paper then clusters the users as a way to reduce the number of parameters in the model, given that the Fourier transform itself does not reduce it. As a further step, the clustering is replaced by a soft clustering learned by a neural network. In the experiments, the RMSE on the rating prediction task is worse than some baselines and better than others.
SP:f61e427d087e7f8b176a518af6088bde2ab75167
Robust Pruning at Initialization
1 INTRODUCTION . Overparameterized deep NNs have achieved state of the art ( SOTA ) performance in many tasks ( Nguyen and Hein , 2018 ; Du et al. , 2019 ; Zhang et al. , 2016 ; Neyshabur et al. , 2019 ) . However , it is impractical to implement such models on small devices such as mobile phones . To address this problem , network pruning is widely used to reduce the time and space requirements both at training and test time . The main idea is to identify weights that do not contribute significantly to the model performance based on some criterion , and remove them from the NN . However , most pruning procedures currently available can only be applied after having trained the full NN ( LeCun et al. , 1990 ; Hassibi et al. , 1993 ; Mozer and Smolensky , 1989 ; Dong et al. , 2017 ) although methods that consider pruning the NN during training have become available . For example , Louizos et al . ( 2018 ) propose an algorithm which adds a L0 regularization on the weights to enforce sparsity while Carreira-Perpiñán and Idelbayev ( 2018 ) ; Alvarez and Salzmann ( 2017 ) ; Li et al . ( 2020 ) propose the inclusion of compression inside training steps . Other pruning variants consider training a secondary network that learns a pruning mask for a given architecture ( Li et al . ( 2020 ) ; Liu et al . ( 2019 ) ) . Recently , Frankle and Carbin ( 2019 ) have introduced and validated experimentally the Lottery Ticket Hypothesis which conjectures the existence of a sparse subnetwork that achieves similar performance to the original NN . These empirical findings have motivated the development of pruning at initialization such as SNIP ( Lee et al . ( 2018 ) ) which demonstrated similar performance to classical pruning methods of pruning-after-training . Importantly , pruning at initialization never requires training the complete NN and is thus more memory efficient , allowing to train deep NN using limited computational resources . However , such techniques may suffer from different problems . In particular , nothing prevents such methods from pruning one whole layer of the NN , making it untrainable . More generally , it is typically difficult to train the resulting pruned NN ( Li et al. , 2018 ) . To solve this situation , Lee et al . ( 2020 ) try to tackle this issue by enforcing dynamical isometry using orthogonal weights , while Wang et al . ( 2020 ) ( GraSP ) uses Hessian based pruning to preserve gradient flow . Other work by Tanaka et al . ( 2020 ) considers a data-agnostic iterative approach using the concept of synaptic flow in order to avoid the layer-collapse phenomenon ( pruning a whole layer ) . In our work , we use principled scaling and re-parameterization to solve this issue , and show numerically that our algorithm achieves SOTA performance on CIFAR10 , CIFAR100 , TinyImageNet and ImageNet in some scenarios and remains competitive in others . In this paper , we provide novel algorithms for Sensitivity-Based Pruning ( SBP ) , i.e . pruning schemes that prune a weight W based on the magnitude of |W ∂L∂W | at initialization where L is the loss . Experimentally , compared to other available one-shot pruning schemes , these algorithms provide state-of the-art results ( this might not be true in some regimes ) . Our work is motivated by a new theoretical analysis of gradient back-propagation relying on the mean-field approximation of deep NN ( Hayou et al. , 2019 ; Schoenholz et al. , 2017 ; Poole et al. , 2016 ; Yang and Schoenholz , 2017 ; Xiao et al. , 2018 ; Lee et al. 
, 2018 ; Matthews et al. , 2018 ) . Our contribution is threefold : • For deep fully connected FeedForward NN ( FFNN ) and Convolutional NN ( CNN ) , it has been previously shown that only an initialization on the so-called Edge of Chaos ( EOC ) make models trainable ; see e.g . ( Schoenholz et al. , 2017 ; Hayou et al. , 2019 ) . For such models , we show that an EOC initialization is also necessary for SBP to be efficient . Outside this regime , one layer can be fully pruned . • For these models , pruning pushes the NN out of the EOC making the resulting pruned model difficult to train . We introduce a simple rescaling trick to bring the pruned model back in the EOC regime , making the pruned NN easily trainable . • Unlike FFNN and CNN , we show that Resnets are better suited for pruning at initialization since they ‘ live ’ on the EOC by default ( Yang and Schoenholz , 2017 ) . However , they can suffer from exploding gradients , which we resolve by introducing a re-parameterization , called ‘ Stable Resnet ’ ( SR ) . The performance of the resulting SBP-SR pruning algorithm is illustrated in Table 1 : SBP-SR allows for pruning up to 99.5 % of ResNet104 on CIFAR10 while still retaining around 87 % test accuracy . The precise statements and proofs of the theoretical results are given in the Supplementary . Appendix H also includes the proof of a weak version of the Lottery Ticket Hypothesis ( Frankle and Carbin , 2019 ) showing that , starting from a randomly initialized NN , there exists a subnetwork initialized on the EOC . 2 SENSITIVITY PRUNING FOR FFNN/CNN AND THE RESCALING TRICK . 2.1 SETUP AND NOTATIONS . Let x be an input in Rd . A NN of depth L is defined by yl ( x ) = Fl ( W l , yl−1 ( x ) ) +Bl , 1 ≤ l ≤ L , ( 1 ) where yl ( x ) is the vector of pre-activations , W l and Bl are respectively the weights and bias of the lth layer and Fl is a mapping that defines the nature of the layer . The weights and bias are initialized with W l iid∼ N ( 0 , σ2w/vl ) , where vl is a scaling factor used to control the variance of yl , and Bl iid∼ N ( 0 , σ2b ) . Hereafter , Ml denotes the number of weights in the lth layer , φ the activation function and [ m : n ] : = { m , m+ 1 , ... , n } for m ≤ n. Two examples of such architectures are : • Fully connected FFNN . For a FFNN of depth L and widths ( Nl ) 0≤l≤L , we have vl = Nl−1 , Ml = Nl−1Nl and y1i ( x ) = d∑ j=1 W 1ijxj +B 1 i , y l i ( x ) = Nl−1∑ j=1 W lijφ ( y l−1 j ( x ) ) +B l i for l ≥ 2 . ( 2 ) • CNN . For a 1D CNN of depth L , number of channels ( nl ) l≤L , and number of neurons per channel ( Nl ) l≤L , we have y1i , α ( x ) = nl−1∑ j=1 ∑ β∈kerl W 1i , j , βxj , α+β+b 1 i , y l i , α ( x ) = nl−1∑ j=1 ∑ β∈kerl W li , j , βφ ( y l−1 j , α+β ( x ) ) +b l i , for l ≥ 2 , ( 3 ) where i ∈ [ 1 : nl ] is the channel index , α ∈ [ 0 : Nl−1 ] is the neuron location , kerl = [ −kl : kl ] is the filter range , and 2kl + 1 is the filter size . To simplify the analysis , we assume hereafter that Nl = N and kl = k for all l. Here , we have vl = nl−1 ( 2k + 1 ) and Ml = nl−1nl ( 2k + 1 ) . We assume periodic boundary conditions ; so yli , α = y l i , α+N = y l i , α−N . Generalization to multidimensional convolutions is straightforward . When no specific architecture is mentioned , ( W li ) 1≤i≤Ml denotes the weights of the l th layer . In practice , a pruning algorithm creates a binary mask δ over the weights to force the pruned weights to be zero . 
The neural network after pruning is given by yl ( x ) = Fl ( δl ◦W l , yl−1 ( x ) ) +Bl , ( 4 ) where ◦ is the Hadamard ( i.e . element-wise ) product . In this paper , we focus on pruning at initialization . The mask is typically created by using a vector gl of the same dimension as W l using a mapping of choice ( see below ) , we then prune the network by keeping the weights that correspond to the top k values in the sequence ( gli ) i , l where k is fixed by the sparsity that we want to achieve . There are three popular types of criteria in the literature : •Magnitude based pruning ( MBP ) : We prune weights based on the magnitude |W | . • Sensitivity based pruning ( SBP ) : We prune the weights based on the values of |W ∂L∂W | where L is the loss . This is motivated by LW ≈ LW=0 +W ∂L∂W used in SNIP ( Lee et al . ( 2018 ) ) . •Hessian based pruning ( HBP ) : We prune the weights based on some function that uses the Hessian of the loss function as in GraSP ( Wang et al. , 2020 ) . In the remainder of the paper , we focus exclusively on SBP while our analysis of MBP is given in Appendix E. We leave HBP for future work . However , we include empirical results with GraSP ( Wang et al. , 2020 ) in Section 4 . Hereafter , we denote by s the sparsity , i.e . the fraction of weights we want to prune . Let Al be the set of indices of the weights in the lth layer that are pruned , i.e . Al = { i ∈ [ 1 : Ml ] , s.t . δli = 0 } . We define the critical sparsity scr by scr = min { s ∈ ( 0 , 1 ) , s.t . ∃l , |Al| = Ml } , where |Al| is the cardinality of Al . Intuitively , scr represents the maximal sparsity we are allowed to choose without fully pruning at least one layer . scr is random as the weights are initialized randomly . Thus , we study the behaviour of the expected value E [ scr ] where , hereafter , all expectations are taken w.r.t . to the random initial weights . This provides theoretical guidelines for pruning at initialization . For all l ∈ [ 1 : L ] , we define αl by vl = αlN where N > 0 , and ζl > 0 such that Ml = ζlN2 , where we recall that vl is a scaling factor controlling the variance of yl and Ml is the number of weights in the lth layer . This notation assumes that , in each layer , the number of weights is quadratic in the number of neurons , which is satisfied by classical FFNN and CNN architectures . 2.2 SENSITIVITY-BASED PRUNING ( SBP ) . SBP is a data-dependent pruning method that uses the data to compute the gradient with backpropagation at initialization ( one-shot pruning ) .We randomly sample a batch and compute the gradients of the loss with respect to each weight . The mask is then defined by δli = I ( |W li ∂L∂W li | ≥ ts ) , where ts = |W ∂L∂W | ( ks ) and ks = ( 1− s ) ∑ lMl and |W ∂L ∂W | ( ks ) is the kths order statistics of the sequence ( |W li ∂L∂W li | ) 1≤l≤L,1≤i≤Ml . However , this simple approach suffers from the well-known exploding/vanishing gradients problem which renders the first/last few layers respectively susceptible to be completely pruned . We give a formal definition to this problem . Definition 1 ( Well-conditioned & ill-conditioned NN ) . Let ml = E [ |W l1 ∂L∂W l1 | 2 ] for l ∈ [ 1 : L ] . We say that the NN is well-conditioned if there exist A , B > 0 such that for all L ≥ 1 and l ∈ [ 1 : L ] we have A ≤ ml/mL ≤ B , and it is ill-conditioned otherwise . Understanding the behaviour of gradients at initialization is thus crucial for SBP to be efficient . Using a mean-field approach , such analysis has been carried out in ( Schoenholz et al. 
Understanding the behaviour of gradients at initialization is thus crucial for SBP to be efficient. Using a mean-field approach, such an analysis has been carried out in (Schoenholz et al., 2017; Hayou et al., 2019; Xiao et al., 2018; Poole et al., 2016; Yang, 2019), where it has been shown that an initialization known as the EOC is beneficial for DNN training. The mean-field analysis of DNNs relies on two standard approximations that we will also use here.

Approximation 1 (Mean-Field Approximation). When $N_l \gg 1$ for FFNN or $n_l \gg 1$ for CNN, we use the approximation of an infinitely wide NN. This means an infinite number of neurons per layer for fully connected layers and an infinite number of channels per layer for convolutional layers.

Approximation 2 (Gradient Independence). The weights used for forward propagation are independent from those used for back-propagation.

These two approximations are ubiquitous in the literature on the mean-field analysis of neural networks. They have been used to derive theoretical results on signal propagation (Schoenholz et al., 2017; Hayou et al., 2019; Poole et al., 2016; Yang, 2019; Yang and Schoenholz, 2017; Yang et al., 2019) and are also key tools in the derivation of the Neural Tangent Kernel (Jacot et al., 2018; Arora et al., 2019; Hayou et al., 2020). Approximation 1 simplifies the analysis of the forward propagation as it allows the derivation of closed-form formulas for covariance propagation; Approximation 2 does the same for back-propagation. See Appendix A for a detailed discussion of these approximations. Throughout the paper, we provide numerical results that substantiate the theoretical results derived using these two approximations, and we show that they lead to an excellent match between theory and experiment.

Edge of Chaos (EOC): For inputs $x, x'$, let $c^l(x, x')$ be the correlation between $y^l(x)$ and $y^l(x')$. From (Schoenholz et al., 2017; Hayou et al., 2019), there exists a so-called correlation function $f$, depending on $(\sigma_w, \sigma_b)$, such that $c^{l+1}(x, x') = f(c^l(x, x'))$. Let $\chi(\sigma_b, \sigma_w) = f'(1)$. The EOC is the set of hyperparameters $(\sigma_w, \sigma_b)$ satisfying $\chi(\sigma_b, \sigma_w) = 1$. When $\chi(\sigma_b, \sigma_w) > 1$, we are in the chaotic phase: the gradient explodes, $c^l(x, x')$ converges exponentially to some $c < 1$ for $x \neq x'$, and the resulting output function is discontinuous everywhere. When $\chi(\sigma_b, \sigma_w) < 1$, we are in the ordered phase, where $c^l(x, x')$ converges exponentially fast to 1 and the NN outputs constant functions. Initialization on the EOC allows for better information propagation (see Supplementary for more details). Hence, by leveraging the above results, we show that an initialization outside the EOC leads to an ill-conditioned NN.

Theorem 1 (EOC initialization is crucial for SBP). Consider a NN of type (2) or (3) (FFNN or CNN). Assume $(\sigma_w, \sigma_b)$ is chosen in the ordered phase, i.e., $\chi(\sigma_b, \sigma_w) < 1$; then the NN is ill-conditioned. Moreover, we have $E[s_{cr}] \le \frac{1}{L}\big(1 + \frac{\log(\kappa L N^2)}{\kappa}\big) + O\big(\frac{1}{\kappa^2 \sqrt{L N^2}}\big)$, where $\kappa = |\log \chi(\sigma_b, \sigma_w)|/8$. If $(\sigma_w, \sigma_b)$ is on the EOC, i.e., $\chi(\sigma_b, \sigma_w) = 1$, then the NN is well-conditioned; in this case $\kappa = 0$ and the above upper bound no longer holds.

The proof of Theorem 1 relies on the behaviour of the gradient norm at initialization. In the ordered phase, the gradient norm vanishes exponentially quickly as it back-propagates, resulting in an ill-conditioned network.
We use another approximation to simplify the proof (Approximation 3 in the Supplementary), but the result holds without this approximation, although the resulting constants would be slightly different. Theorem 1 shows that the upper bound decreases the farther $\chi(\sigma_b, \sigma_w)$ is from 1, i.e., the farther the initialization is from the EOC. For a constant-width FFNN with L = 100, N = 100 and $\kappa = 0.2$, the theoretical upper bound is $E[s_{cr}] \lesssim 27\%$, while we obtain $E[s_{cr}] \approx 22\%$ based on 10 simulations. A similar result can be obtained when the NN is initialized in the chaotic phase; in this case too, the NN is ill-conditioned. To illustrate these results, Figure 1 shows the impact of the initialization with sparsity s = 70%. The dark area in Figure 1(b) corresponds to layers that are fully pruned in the chaotic phase due to exploding gradients. Using an EOC initialization, Figure 1(a) shows that pruned weights are well distributed across the NN, ensuring that no layer is fully pruned.
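The phase of a given initialization, and hence whether Theorem 1 applies, can be checked numerically. Below is a small sketch assuming a tanh FFNN and the standard mean-field recursions: it estimates the variance fixed point $q^*$ of $q^{l+1} = \sigma_w^2\, E[\phi(\sqrt{q^l} Z)^2] + \sigma_b^2$ and then $\chi(\sigma_b, \sigma_w) = \sigma_w^2\, E_{Z \sim N(0,1)}[\phi'(\sqrt{q^*} Z)^2]$ by Monte Carlo; the sample size and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def eoc_chi(sigma_w, sigma_b, n_samples=200_000, n_iter=200, seed=0):
    """Estimate chi(sigma_b, sigma_w) for phi = tanh via the mean-field recursions."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_samples)
    q = 1.0                                            # starting variance guess
    for _ in range(n_iter):                            # iterate the variance map to its fixed point
        q = sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2
    dphi2 = (1.0 - np.tanh(np.sqrt(q) * z) ** 2) ** 2  # phi'(x)^2 with phi'(x) = 1 - tanh(x)^2
    return sigma_w**2 * np.mean(dphi2)

for sw in (0.5, 1.0, 2.0):                             # ordered / near-EOC / chaotic at sigma_b = 0
    print(f"sigma_w = {sw}: chi ~= {eoc_chi(sw, 0.0):.3f}")
```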
The theoretical analysis is clearly stated in a well-organized way, and the derived sparsity bound is reasonable. For FFNN and CNN, a theorem is given to show that the model is trainable only when the initialization is on the Edge of Chaos (EOC), and a rescaling method is also provided to bring the pruned NN into the EOC regime. For ResNet, the paper proves that pruning satisfies the EOC condition by default and further provides a re-parameterization method to tackle exploding gradients. The experiments support the theoretical results well for both FFNN/CNN and ResNet.
Does Adversarial Transferability Indicate Knowledge Transferability?
Despite the immense success that deep neural networks (DNNs) have achieved, adversarial examples, which are perturbed inputs that aim to mislead DNNs into making mistakes, have recently led to great concern. On the other hand, adversarial examples exhibit interesting phenomena, such as adversarial transferability. DNNs also exhibit knowledge transfer, which is critical to improving learning efficiency and to learning in domains that lack high-quality training data. To uncover the fundamental connections between these phenomena, we investigate, and give an affirmative answer to, the question: does adversarial transferability indicate knowledge transferability? We theoretically analyze the relationship between adversarial transferability and knowledge transferability, and outline easily checkable sufficient conditions that identify when adversarial transferability indicates knowledge transferability. In particular, we show that composition with an affine function is sufficient to reduce the difference between two models when they possess high adversarial transferability. Furthermore, we provide empirical evaluations for different transfer learning scenarios on diverse datasets, showing a strong positive correlation between adversarial transferability and knowledge transferability, thus illustrating that our theoretical insights are predictive of practice.

1 INTRODUCTION. Knowledge transferability and adversarial transferability are two fundamental properties of a learned model transferring to other domains. Knowledge transferability, also known as learning transferability, has attracted extensive study in machine learning. Long before it was formally defined, the computer vision community exploited it to perform important visual manipulations (Johnson et al., 2016), such as style transfer and super-resolution, where pretrained VGG networks (Simonyan & Zisserman, 2014) are utilized to encode images into semantically meaningful features. After the release of ImageNet (Russakovsky et al., 2015), pretrained ImageNet models (e.g., on TensorFlow Hub or PyTorch Hub) have quickly become the default option for the transfer source, because of their broad coverage of visual concepts and compatibility with various visual tasks (Huh et al., 2016). Adversarial transferability, on the other hand, is the phenomenon that adversarial examples can not only attack the model they are generated against, but also affect other models (Goodfellow et al., 2014; Papernot et al., 2016). Thus, adversarial transferability is extensively exploited to inspire black-box attacks (Ilyas et al., 2018; Liu et al., 2016). Many theoretical analyses have been conducted to establish sufficient conditions for adversarial transferability (Demontis et al., 2019; Ma et al., 2018). Knowledge transferability and adversarial transferability both reveal some of the nature of machine learning models and the corresponding data distributions. The relation between these two phenomena interests us the most. We begin by showing that adversarial transferability can indicate knowledge transferability. This tie can potentially provide a similarity measure between data distributions, an identifier of important features focused on by a complex model, and an affinity map between complicated tasks. Thus, we believe our results have further implications for model interpretability and verification, fairness, and robust and efficient transfer learning.
To the best of our knowledge, this is the first work studying the fundamental relationship between adversarial transferability and knowledge transferability both theoretically and empirically. Our main contributions are as follows. • We formally define two quantities, $\tau_1$ and $\tau_2$, to measure adversarial transferability from different aspects, which enables an in-depth understanding of adversarial transferability from a geometric point of view in the feature representation space. • We derive an upper bound for knowledge transferability with respect to adversarial transferability. We rigorously depict their underlying relation and show that adversarial transferability can indicate knowledge transferability. • We conduct thorough controlled experiments for diverse knowledge transfer scenarios (e.g., knowledge transfer among data distributions, attributes, and tasks) on benchmark datasets including STL-10, CIFAR-10, CelebA, Taskonomy-data, and four language datasets. Our empirical results show a strong positive correlation between adversarial and knowledge transferability, which validates our theoretical prediction.

2 RELATED WORK. Knowledge transferability has been widely applied in scenarios where the available data for a certain domain is limited, and has achieved great success (Van Opbroek et al., 2014; Wurm et al., 2019; Wang et al., 2017; Kim & Park, 2017; Maqueda et al., 2018; Devlin et al., 2018). Several studies have been conducted to understand the factors that affect knowledge transferability (Yosinski et al., 2014; Long et al., 2015b; Wang et al., 2019; Xu et al., 2019; Shinya et al., 2019). Empirical observations show that the correlation between learning tasks (Achille et al., 2019; Zamir et al., 2018), the similarity of model architectures, and the data distributions are all correlated with different knowledge transfer effects. Adversarial transferability has been observed by several works (Papernot et al., 2016; Goodfellow et al., 2014; Joon Oh et al., 2017). Since this early work, many studies have been conducted aiming to further understand the phenomenon and design more transferable adversarial attacks. Regardless of the threat model, many attack methods have been proposed to boost adversarial transferability (Zhou et al., 2018; Demontis et al., 2019; Dong et al., 2019; Xie et al., 2019). Naseer et al. (2019) propose to produce adversarial examples that transfer cross-domain via a generative adversarial network. In addition to efficacy, efficiency (Ilyas et al., 2018) and practicality (Papernot et al., 2017) have also been optimized. Beyond the above empirical studies, some work is dedicated to analyzing this phenomenon, showing different conditions that may enhance adversarial transferability (Athalye et al., 2018; Tramèr et al., 2017; Ma et al., 2018; Demontis et al., 2019). Building upon these observations, it is clear that there exist certain connections between adversarial transferability and other knowledge transfer scenarios; here we aim to provide the first theoretical justification of this connection and to design systematic empirical studies to measure the correlation.

3 ADVERSARIAL TRANSFERABILITY VS. KNOWLEDGE TRANSFERABILITY. In this section, we rigorously establish connections between adversarial examples and knowledge transferability. We first formally state the problem studied in this section.
Then, we move on to subsection 3.1, where we introduce two metrics that encode information about adversarial attacks. Finally, we present our theoretical results about the relationship between adversarial and knowledge transferability in subsection 3.2.

Notations. We use blackboard bold to denote sets, e.g., $\mathbb{R}$. We use calligraphy to denote distributions, e.g., $\mathcal{D}$. The support of a distribution $\mathcal{D}$ is denoted as $\mathrm{supp}(\mathcal{D})$. We use bold lower-case letters to denote vectors, e.g., $x \in \mathbb{R}^n$, and bold upper-case letters to denote matrices, e.g., $A$. We use $A^\dagger$ to denote the Moore-Penrose inverse of matrix $A$. We use $\circ$ to denote the composition of functions, i.e., $g \circ f(x) = g(f(x))$. We use $\|\cdot\|_2$ to denote the Euclidean norm induced by the standard inner product $\langle \cdot, \cdot \rangle$. Given a function $f$, we use $f(x)$ to denote its value at $x$, and we use $f$ to represent this function in function space. We use $\langle \cdot, \cdot \rangle_{\mathcal{D}}$ to denote the inner product induced by distribution $\mathcal{D}$, i.e., $\langle f_1, f_2 \rangle_{\mathcal{D}} = E_{x \sim \mathcal{D}} \langle f_1(x), f_2(x) \rangle$. Accordingly, we use $\|\cdot\|_{\mathcal{D}}$ to denote the norm induced by this inner product, i.e., $\|f\|_{\mathcal{D}} = \sqrt{\langle f, f \rangle_{\mathcal{D}}}$. For a matrix function $F : \mathrm{supp}(\mathcal{D}) \to \mathbb{R}^{d \times m}$, we define its $L^2(\mathcal{D})$-norm in accordance with the matrix 2-norm as $\|F\|_{\mathcal{D},2} = \sqrt{E_{x \sim \mathcal{D}} \|F(x)\|_2^2}$. We define the projection operator $\mathrm{proj}(\cdot, r)$ to project a matrix onto a hyperball of spectral-norm radius $r$, i.e., $\mathrm{proj}(A, r) = A$ if $\|A\|_2 \le r$, and $\mathrm{proj}(A, r) = rA/\|A\|_2$ if $\|A\|_2 > r$.

Setting. Assume we are given a target problem defined by a data distribution $x \sim \mathcal{D}$, where $x \in \mathbb{R}^n$, and $y : \mathbb{R}^n \to \mathbb{R}^d$ represents the ground-truth labeling function. As a first try, a reference model $f_T : \mathbb{R}^n \to \mathbb{R}^d$ trained on the target dataset is obtained through optimizing over a function class $f_T \in \mathcal{F}_T$. Now suppose we have a source model $f_S : \mathbb{R}^n \to \mathbb{R}^m$ pretrained on source data; we are curious how $f_S$ would transfer to the target data $\mathcal{D}$.

Knowledge transferability. Given a trainable function $g : \mathbb{R}^m \to \mathbb{R}^d$, where $g \in \mathcal{G}$ is from a small function class for efficiency purposes, we care about whether $f_S$ can achieve low loss $L(\cdot\,; y, \mathcal{D})$, e.g., mean squared error, after stacking with a trainable function $g$, compared with $f_T$; i.e., we compare $\min_{g \in \mathcal{G}} L(g \circ f_S; y, \mathcal{D})$ with $L(f_T; y, \mathcal{D})$. Clearly, the solution to this optimization problem depends on the choice of $\mathcal{G}$. Observing that in practice it is common to stack and fine-tune a linear layer on a pretrained feature extractor, we consider the class of affine functions. Formally, the problem studied in our theory is stated as follows.

Problem 1. Given a reference model $f_T$ trained on target distribution $\mathcal{D}$, and a source model $f_S$ pre-trained on source data, can we predict the best possible performance of the composite function $g \circ f_S$ on $\mathcal{D}$, where $g$ is from a bounded affine function class, given the adversarial transferability between $f_S$ and $f_T$?

3.1 ADVERSARIAL TRANSFERABILITY. We use the $\ell_2$-norm to characterize the effectiveness of an attack.

Definition 1 (Virtual Adversarial Attack (Miyato et al., 2018)). Given a model $f : \mathbb{R}^n \to \mathbb{R}^d$, the attack on point $x$ within an $\epsilon$-ball is defined as $\arg\max_{\|\delta\| \le \epsilon} \|f(x) - f(x + \delta)\|_2$. As this is intractable in practice, we consider the use of the tangent function to approximate the difference: $\delta_{f,\epsilon}(x) = \arg\max_{\|\delta\| \le \epsilon} \|\nabla f(x)^\top \delta\|_2$, where $\nabla f(x) \in \mathbb{R}^{n \times d}$ is the Jacobian matrix. The $\epsilon$ will be dropped when the context is clear or when it is irrelevant.
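Since the linearized objective in Definition 1 is a matrix-norm problem, the attack has a closed form: the maximizer is $\epsilon$ times the top right singular vector of $\nabla f(x)^\top$. The sketch below computes it with a finite-difference Jacobian on a toy model; both the model and the numerical Jacobian are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def jacobian(f, x, h=1e-5):
    """Finite-difference Jacobian of f: R^n -> R^d, returned with shape (n, d)."""
    n, d = x.size, f(x).size
    J = np.zeros((n, d))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        J[i] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def virtual_attack(f, x, eps=0.1):
    """delta_{f,eps}(x): eps times the top right singular vector of grad f(x)^T."""
    J = jacobian(f, x)                            # J[i] = df/dx_i, so J.T = grad f(x)^T
    _, _, vt = np.linalg.svd(J.T)                 # rows of vt are right singular vectors
    return eps * vt[0]

# Toy model f(x) = tanh(W x).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
f = lambda x: np.tanh(W @ x)
delta = virtual_attack(f, rng.normal(size=5), eps=0.1)
print(np.linalg.norm(delta))                      # = eps by construction
```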
To provide a quantitative view of adversarial transferability, we define two metrics, $\tau_1$ and $\tau_2$. Both metrics take values in $[0, 1]$, where higher values indicate more adversarial transferability.

Definition 2 (Adversarial Transferability (Angle)). Given two functions $f_1, f_2$, we assume they have the same input dimension and may have different output dimensions. The Adversarial Transferability (Angle) of $f_1$ and $f_2$ at point $x$ is defined as the squared cosine of the angle between the two attacks, i.e.,
$$\tau_1(x) = \frac{\langle \delta_{f_1}(x), \delta_{f_2}(x) \rangle^2}{\|\delta_{f_1}(x)\|_2^2 \cdot \|\delta_{f_2}(x)\|_2^2}.$$
We denote its expected value as $\tau_1 = E_{x \sim \mathcal{D}}[\tau_1(x)]$.

Intuitively, $\tau_1$ characterizes the similarity of the two attacks: the higher the cosine similarity, the better the two models can be attacked together. Note that we use the square of the cosine value, which means that a cosine value of either 1 or -1 gives the same indication of high knowledge transferability. This is because fine-tuning the last layer can rectify such a difference by changing the sign of the last linear layer. However, knowing only the angle between the two attack directions is not sufficient to fully characterize how well $f_S$ will perform. For example, it is not difficult to construct two functions with the highest $\tau_1 = 1$ that are nevertheless not transferable with affine functions. Moreover, it is also observed in our experiments that $\tau_1$ alone is not sufficient. Therefore, in addition to the information about the attacks $\delta_f$ captured by $\tau_1$, we also need information about the deviation of a function under attack. We denote the deviation of a function $f$, given attack $\delta(x)$, as $f(x + \delta(x)) - f(x)$, and we define its approximation as
$$\Delta_{f,\delta}(x) = \nabla f(x)^\top \delta(x). \qquad (1)$$
Accordingly, we define another metric to answer the following question: applying $f_1$'s adversarial attacks to both models, how much can the deviations of their function values be aligned by affine transformations?

Definition 3 (Adversarial Transferability (Deviation)). Given two functions $f_1, f_2$ with the same input dimension and potentially different output dimensions, the Adversarial Transferability (Deviation) of adversarial attacks from $f_1$ to $f_2$ given data distribution $\mathcal{D}$ is defined as
$$\tau_2^{f_1 \to f_2} = \frac{\langle 2\Delta_{f_2,\delta_{f_1}} - A\Delta_{f_1,\delta_{f_1}},\ A\Delta_{f_1,\delta_{f_1}} \rangle_{\mathcal{D}}}{\|\Delta_{f_2,\delta_{f_1}}\|_{\mathcal{D}}^2},$$
where $A$ is a constant matrix defined as
$$A = \mathrm{proj}\Big( E_{x \sim \mathcal{D}}[\Delta_{f_2,\delta_{f_1}}(x)\, \Delta_{f_1,\delta_{f_1}}(x)^\top] \big( E_{x \sim \mathcal{D}}[\Delta_{f_1,\delta_{f_1}}(x)\, \Delta_{f_1,\delta_{f_1}}(x)^\top] \big)^\dagger,\ \frac{\|\Delta_{f_2,\delta_{f_1}}\|_{\mathcal{D}}}{\|\Delta_{f_1,\delta_{f_1}}\|_{\mathcal{D}}} \Big).$$
We note that $A$ is the best linear map trying to align the two deviations ($\Delta_{f_2,\delta_{f_1}}$ and $\Delta_{f_1,\delta_{f_1}}$) in the function space. It serves as a guess on the best linear map to align $f_1$ and $f_2$, using only the information from adversarial attacks. To give a better sense of $\tau_2$ and its relationship with other quantities, we present an example for visual illustration in Figure 1. Note that high $\tau_2$ does not necessarily require $\Delta_{f_1,\delta_{f_1}}$ and $\Delta_{f_2,\delta_{f_1}}$ to be similar, but they should be well aligned by the constant linear transformation $A$. We refer to the proof of Proposition 1 in Section B of the appendix for a detailed explanation of $\tau_2$.

Proposition 1. Both $\tau_1$ and $\tau_2$ are in $[0, 1]$.
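As a concrete illustration, the following sketch (continuing the previous one) forms plain Monte Carlo estimates of the expectations over $\mathcal{D}$ and computes empirical versions of $\tau_1$, the aligner $A$, and $\tau_2$ for two toy models; all shapes, models, and sample sizes are illustrative.

```python
# Continues the previous sketch (numpy as np, jacobian, virtual_attack).

def deviation(f, x, delta):
    """Delta_{f,delta}(x) = grad f(x)^T delta (Eq. 1)."""
    return jacobian(f, x).T @ delta

def empirical_tau(f1, f2, xs, eps=0.1):
    d1 = [virtual_attack(f1, x, eps) for x in xs]
    d2 = [virtual_attack(f2, x, eps) for x in xs]
    # tau_1: expected squared cosine between the two attack directions
    tau1 = np.mean([(a @ b) ** 2 / ((a @ a) * (b @ b)) for a, b in zip(d1, d2)])
    # deviations of both models under f1's attacks
    D1 = np.stack([deviation(f1, x, d) for x, d in zip(xs, d1)])   # (N, m)
    D2 = np.stack([deviation(f2, x, d) for x, d in zip(xs, d1)])   # (N, d)
    A = (D2.T @ D1) @ np.linalg.pinv(D1.T @ D1)                    # least-squares aligner
    r = np.linalg.norm(D2) / np.linalg.norm(D1)                    # radius for proj(., r)
    spec = np.linalg.norm(A, 2)
    if spec > r:                                                   # proj(A, r)
        A *= r / spec
    AD1 = D1 @ A.T
    tau2 = np.sum((2 * D2 - AD1) * AD1) / np.sum(D2 * D2)
    return tau1, tau2

rng2 = np.random.default_rng(1)
W1, W2 = rng2.normal(size=(3, 5)), rng2.normal(size=(4, 5))
f1 = lambda x: np.tanh(W1 @ x)
f2 = lambda x: np.tanh(W2 @ x)          # a different output dimension is allowed
print(empirical_tau(f1, f2, rng2.normal(size=(20, 5))))
```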
This paper studies the fundamental relationship between adversarial transferability and knowledge transferability. A theoretical analysis is conducted, revealing that adversarial transferability can indicate knowledge transferability. In this procedure, two quantities are formally defined to measure adversarial transferability from different aspects. Furthermore, empirical evaluations in three different transfer learning scenarios on diverse datasets are carried out, showing a strong positive correlation between adversarial transferability and knowledge transferability.
Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation
1 INTRODUCTION. Text generation has been one of the most important research problems in natural language processing (NLP) (Reiter & Dale, 2000). Thanks to advances in neural architectures, models are now capable of generating texts of better quality than before (Brown et al., 2020). However, despite the countless efforts that have been made to improve neural architectures, models trained with the standard Maximum Likelihood Estimation (MLE) objective are known to prefer generating dull and highly repetitive texts. For instance, in open-ended generation tasks, such as story continuation or open dialogue generation, it has been observed that even with large pre-trained models, e.g., GPT-2 (Radford et al., 2019), high-frequency tokens largely dominate the generation (Welleck et al., 2020; Holtzman et al., 2020). The same observation has been reported in directed generation tasks such as text summarization (Nallapati et al., 2016; See et al., 2017), image captioning (Melas-Kyriazi et al., 2018; Wang & Chan, 2019) and machine translation (Tu et al., 2016; Stahlberg & Byrne, 2019). The methods introduced to solve the aforementioned issues with neural text generation can be primarily categorized into two groups: (i) training-based methods, which include incorporating auxiliary losses (See et al., 2017; Welleck et al., 2020) and coverage vectors (See et al., 2017; Tu et al., 2016); (ii) decoding-based methods, such as stochastic beam search (Kool et al., 2019), top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020). Though decoding-based methods, in particular nucleus and top-k sampling, perform well in practice in open-ended generation tasks, significantly reducing the degeneration problem, they do not address the fundamental issue that the token-level probabilities produced by the neural model are problematic (Welleck et al., 2020). In addition, our experiments demonstrate that sampling methods also fail to generate high-quality texts in directed generation tasks such as abstractive text summarization. In this work, based on the known observation that a model trained with the MLE objective tends to generate repetitive tokens or phrases, we introduce a novel method called ScaleGrad for training neural text generation models, directly maneuvering the gradients to make the model learn to use novel tokens during training. Our method lies in the training-based group, which aims to address the fundamental modeling problem, that is, the token-level distribution predicted by the model. We conduct extensive experiments with different neural architectures including LSTM (Hochreiter & Schmidhuber, 1997) and Transformer (Vaswani et al., 2017) across different tasks in open-ended and directed text generation. Through extensive analysis we demonstrate that ScaleGrad consistently improves the generation quality according to both human evaluation and automatic metrics. Compared to other training-based methods, ScaleGrad is architecturally simpler and easier to fit into current neural models (§3.2), while possessing wider applicability to different tasks compared to decoding-based methods (§4.2 and §5.2).

2 BACKGROUND. 2.1 NEURAL TEXT GENERATION. The NLP tasks involving text generation can be broadly categorized into two types: directed generation and open-ended generation (Holtzman et al., 2020).
In the former case, the output text can be seen as a constrained transformation of the input; examples include text summarization, machine translation, and image captioning. In the latter case, the input context only provides a certain degree of constraint, and the model is allowed to generate the following texts with a considerable degree of freedom; story/text continuation and dialogue generation fall into this category. Neural models frame text generation tasks as some form of conditional language modeling, which is typically trained to maximize the log-likelihood (equivalently, minimize the negative log-likelihood) of the training data. The Maximum Likelihood Estimation or MLE objective for an input-output pair (x, y) can be expressed as
$$\mathcal{L}_{\mathrm{MLE}} = -\sum_{t=1}^{T} \log p_\theta(y_t \mid y_{<t}, x) \qquad (1)$$
where $\theta$ denotes the model parameters, $T$ is the length of the output sequence $y$, and $x$ is the task-specific input condition, e.g., the source document in summarization, the image in image captioning, the conversation history in dialogue generation, and $\emptyset$ in text continuation. Teacher forcing (Williams & Zipser, 1989), where the current step's target token rather than the predicted token is passed as the next input to the decoder, is usually used to train neural text generation models for faster convergence.

Degeneration. Degeneration has been a key problem in neural text generation models for open-ended tasks, where the model generates texts that are repetitive, overly generic (dull), incoherent and gibberish. It can happen at different levels of granularity: token, phrase, sentence and paragraph. The problem has not been mitigated even with large-scale pre-trained models like GPT-2 Large (Radford et al., 2019; Holtzman et al., 2020). Degeneration has also been observed in directed generation tasks even though the output in these tasks is confined by the input. For instance, in text summarization, most of the advanced models such as BertSum (Liu & Lapata, 2019), BART (Lewis et al., 2019) and ProphetNet (Yan et al., 2020) make use of tri-gram blocking (Paulus et al., 2018) within beam search to remove duplicate trigrams during decoding, which improves the generation quality in terms of automatic metrics. This implies that even with the involvement of large-scale pretrained models, degeneration still exists. Similar issues have been reported in machine translation (Koehn & Knowles, 2017; Stahlberg & Byrne, 2019) and image-description generation (Melas-Kyriazi et al., 2018; Wang & Chan, 2019).

2.2 COMBATING NEURAL TEXT DEGENERATION. Among the methods proposed to tackle neural text degeneration, top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020) stand out as representatives of decoding-based methods, and unlikelihood training (Welleck et al., 2020) as a representative training-based method. During each decoding step, nucleus and top-k sampling use different functions to filter the candidate tokens, thus reshaping the probability distribution, and sample the next token from the new distribution instead of maximizing the actual likelihood. The randomness brought by these sampling methods reduces duplicate tokens in the output. However, the decoding strategy alone does not solve the underlying modeling problem with MLE, as pointed out by Welleck et al. (2020). Our analysis in §5.2 also reveals that sampling methods fail to generate high-quality texts in directed generation tasks.
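For reference, here is a minimal NumPy sketch of the teacher-forced MLE objective in Eq. (1); the toy logits, vocabulary size, and tensor shapes are illustrative assumptions.

```python
import numpy as np

def mle_loss(logits, targets):
    """Teacher-forced MLE (Eq. 1): mean negative log-likelihood of the gold tokens.

    logits:  (T, V) scores produced with the gold prefix y_{<t} fed at each step t.
    targets: (T,) indices of the gold tokens y_t.
    """
    logits = logits - logits.max(axis=-1, keepdims=True)              # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy usage: T = 4 decoding steps over a vocabulary of size V = 10.
rng = np.random.default_rng(0)
logits, targets = rng.normal(size=(4, 10)), np.array([1, 3, 3, 7])
print(mle_loss(logits, targets))
```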
To address the issue with MLE, neural unlikelihood (UL) training has been proposed. During training, at each decoding step t, UL adds an auxiliary loss to the original cross-entropy loss as follows:
$$\mathcal{L}^t = \mathcal{L}^t_{\mathrm{MLE}} + \mathcal{L}^t_{\mathrm{UL}} = -\log p_\theta(y_t \mid y_{<t}) - \alpha \cdot \sum_{c \in \mathcal{C}^t} \log(1 - p_\theta(c \mid y_{<t})) \qquad (2)$$
where $\alpha$ is a hyper-parameter and $\mathcal{C}^t$ is the set of negative tokens at step t, constructed from the previous context tokens that are not the current token: $\mathcal{C}^t = \{y_1, \ldots, y_{t-1}\} \setminus \{y_t\}$. The auxiliary UL loss decreases the total loss based on the "unlikely" probabilities of negative tokens, thus implicitly reducing the probability assigned to repetitive tokens. UL training targets the underlying modeling problem, which accords with our goal; therefore, we mainly compare our method with UL training1. In addition, we discuss how our method differs from UL training from a gradient perspective in §5.4.

3 METHODOLOGY: LEARNING TO USE NOVEL TOKENS. Training a text generation model with the MLE objective treats each token in the gold (ground truth) sequence equally. With this approach, the model exhibits the tendency to generate repetitive tokens/phrases during inference. To mitigate this degeneration problem, we argue that the model should focus on learning to use novel tokens, rather than treating all tokens equally. Formally, let $y = (y_1, \ldots, y_t, \ldots, y_T)$ be the ground-truth token sequence that the model is learning to generate in an auto-regressive manner, one token at a time. At time step t, we define the token $\tilde{y}^t_i$ in the vocabulary V as a novel token if $\tilde{y}^t_i$ has not been generated before, i.e., $\tilde{y}^t_i \notin \{y_1, \ldots, y_{t-1}\}$. By this definition, we have a set of novel tokens $S^t_{\mathrm{novel}} \subseteq V$ at each decoding step t in training, which shrinks over time as new tokens are generated (or observed) in the ground-truth sequence (see Appendix B for an illustration). Note that the shrinking set of novel tokens is equivalent to the negative tokens in UL, except that it may contain the current target token $y_t$ if it was observed before. To encourage the model to focus on learning to use novel tokens, we propose an architecturally simple method that can fit into most auto-regressive generation models. Our method, requiring no carefully designed components, goes straight to the gradient analysis of the loss function.
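And a matching sketch of the token-level UL objective in Eq. (2), continuing the NumPy setup above; $\alpha$ and the toy inputs are again illustrative.

```python
# Continues the previous sketch (numpy as np, and the toy logits/targets).

def unlikelihood_loss(logits, targets, alpha=1.0):
    """Token-level UL (Eq. 2): MLE plus a penalty on previously seen context tokens."""
    logits = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    total = 0.0
    for t, y_t in enumerate(targets):
        total += -np.log(probs[t, y_t])                       # MLE term
        negatives = set(targets[:t].tolist()) - {int(y_t)}    # C^t = {y_1..y_{t-1}} \ {y_t}
        total += -alpha * sum(np.log(1.0 - probs[t, c]) for c in negatives)
    return total / len(targets)

print(unlikelihood_loss(logits, targets))
```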
3.1 GRADIENT INFORMATION IN MLE TRAINING. Let us first consider the gradient analysis of the model trained with MLE. Let $o^t$ denote the pre-softmax scores (i.e., logits) over the vocabulary at time step t, where $o^t_i$ is the score for the token with index i. Similarly, let $p^t_k = [\mathrm{softmax}(o^t)]_k$ represent the probability of the ground-truth token, whose index in the vocabulary is k. The partial derivative of the MLE objective (Eq. 1) at time step t with respect to the logit $o^t_i$ is (omitting t and the 'MLE' subscript for simplicity):
$$\nabla_{o_i} \mathcal{L} = \frac{\partial \mathcal{L}}{\partial p_k} \cdot \frac{\partial p_k}{\partial o_i} = p_i - \mathbb{1}(i = k) \qquad (3)$$
where $p_i = [\mathrm{softmax}(o)]_i$ (the derivation is given in Appendix A). Specifically, the gradient of the loss w.r.t. the ground-truth token logit $o_k$ is $(p_k - 1)$, and for any other token logit $o_i$ it is $p_i$. As the gradient-based optimization proceeds, the gradient converges to $\epsilon$, a number that is close enough to 0. Another interpretation is that the gradient of the loss is supposed to be close to 0 around a (local) minimum. Therefore, to reach the minimum point, or to make the gradient close to 0, the model tries to reduce the probability $p_i$ of each non-ground-truth token and increase the probability $p_k$ of the ground-truth token during MLE training. From Eq. 3, it is clear that the gradient that every token logit $o_i$ receives is directly related to its generation probability $p_i$. Therefore, we hypothesize that directly manipulating the generation probabilities of tokens, thereby controlling their gradients, can help us achieve our goal, which is to train the model so that it is encouraged to use novel tokens.
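Eq. (3) is easy to verify numerically: the sketch below, reusing the NumPy setup above, compares the analytic softmax cross-entropy gradient $p_i - \mathbb{1}(i = k)$ against a finite-difference estimate on random logits.

```python
# Continues the NumPy setup above; verifies Eq. (3) by finite differences.

def nll(o, k):
    o = o - o.max()
    return -(o[k] - np.log(np.exp(o).sum()))

o, k, h = rng.normal(size=10), 3, 1e-6
p = np.exp(o - o.max()); p /= p.sum()
analytic = p - np.eye(10)[k]                                  # p_i - 1(i = k), Eq. (3)
numeric = np.array([(nll(o + h * e, k) - nll(o - h * e, k)) / (2 * h) for e in np.eye(10)])
print(np.abs(analytic - numeric).max())                       # agreement up to ~1e-9
```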
This work proposes an effective modification of the language model's token-level distribution during training, which prevents some forms of degeneration such as repetition and dullness. The approach is based on the idea of encouraging the model to use tokens that have not been observed in the previous context so far. In other words, this method changes the softmax distribution such that the probabilities of unseen/novel tokens are rescaled with a given hyper-parameter $\gamma$ (Eq. 4). The authors conduct several experiments on different tasks such as open-ended generation, image captioning and abstractive text summarization. As a result, they confirm a substantial improvement over standard MLE training and **token-level** unlikelihood training. In addition to the analysis of their method, the authors discuss a potential issue of the unlikelihood training criterion and how their approach avoids this issue.
Learning Representations by Contrasting Clusters While Bootstrapping Instances
1 INTRODUCTION. Learning to extract generalized representations from high-dimensional images is essential for solving various down-stream tasks in computer vision. Though the supervised learning framework has been shown to be useful for learning discriminative representations for pre-training a model, the expensive labeling cost makes it practically infeasible for large-scale datasets. Moreover, relying on human-annotated labels tends to cause several issues such as class imbalance (Cui et al., 2019), noisy labels (Lee et al., 2019), and biased datasets (Bahng et al., 2019). To address these issues, self-supervised visual representation learning, which does not require any given labels, has emerged as an alternative training framework and is being actively studied to find a proper training objective. Recently, self-supervised approaches with contrastive learning (Wu et al., 2018; Chen et al., 2020a; He et al., 2020) have rapidly narrowed the performance gap with supervised pre-training in various vision tasks. The contrastive method aims to learn invariant mappings (Hadsell et al., 2006) and instance discrimination: intuitively, two augmented views of the same instance are mapped to the same point in latent space while different instances are pushed away. However, the aforementioned instance discrimination does not consider the semantic similarities of the representations (e.g., belonging to the same class), even pushing away relevant instances. This causes the learned representations to exhibit uniformly distributed characteristics, as proven by previous works (Wang & Isola, 2020; Chen & Li, 2020). We point out that this uniformly distributed characteristic over instances can be a fundamental limitation against improving the quality of the learned representations. For instance, consider the representations illustrated in Fig. 1: it shows a simple case where linearly separable representations do not guarantee that they can be properly clustered, which is not appropriate for non-discriminative downstream tasks such as information retrieval, density estimation, and cluster analysis (Wu et al., 2013). In response, we start this work by asking: how can we learn representations that can be properly clustered even without class labels? In this work, we propose a self-supervised training framework that makes the learned representations not only linearly separable but also properly clustered, as illustrated in Fig. 2. To mitigate the uniformly distributed constraint while preserving the invariant mapping, we replace instance discrimination with an instance alignment problem, pulling together the augmented views from the same instance without pushing away the views from different images. However, learning the invariant mapping without discrimination can easily fall into a trivial solution that maps all individual instances to a single point. To alleviate this shortcoming, we adopt a bootstrapping strategy from Grill et al. (2020), utilizing a Siamese network, together with a momentum update strategy (He et al., 2020). In parallel, to properly cluster the semantically related instances, we design an additional cluster branch. This branch aims to group the relevant representations by softly assigning the instances to each cluster. Since each cluster assignment needs to be discriminative, we employ a contrastive loss on the assigned probability distribution over the clusters, with a simple entropy-based regularization.
In the meantime, we construct the cluster branch with a multi-scale clustering strategy, where each head deals with a different number of clusters (Lin et al., 2017). Since there exist various granularities of semantic information in images, this helps the model effectively capture the diverse levels of semantics, as analyzed in Section 4.5. In summary, our contributions are threefold, as follows: • We propose a novel self-supervised framework which contrasts the clusters while bootstrapping the instances, attaining representations that are both linearly separable and clusterable. • We present a novel cluster branch with a multi-scale strategy which effectively captures the different levels of semantics in images. • Our method empirically achieves state-of-the-art results on CIFAR-10, CIFAR-100, and STL-10 on representation learning benchmarks, for both classification and clustering tasks.

2 RELATED WORK. Our work is closely related to the unsupervised visual representation learning and unsupervised image clustering literature. Although the two have slightly different viewpoints on the problem, they are essentially similar in terms of the goal of finding good representations from unlabelled datasets. Instance-level discrimination utilizes the image index as supervision because it is a unique signal in the unsupervised environment. NPID (Wu et al., 2018) first attempts to convert class-wise classification into the extreme of instance-wise discrimination by using an external memory bank. MoCo (He et al., 2020) replaces the memory bank by introducing a momentum encoder that memorizes knowledge learned from the previous mini-batches. SimCLR (Chen et al., 2020a) shows that combining data augmentations with a pretext head after the encoder is crucial for representation quality. Although recent studies show promising results on benchmark datasets, the instance-wise contrastive learning approach has a critical limitation: it pushes away representations from different images even if the images have similar semantics, e.g., belong to the same class. Cluster-level bootstrapping is an alternative paradigm in which enhancing the initial bias of the networks can be useful for obtaining discriminative power in visual representations, since convolutional neural networks work well at capturing local patterns (Caron et al., 2018). In the case of using pseudo-labels, K-means (Caron et al., 2018) or optimal transport (Asano et al., 2019; Caron et al., 2020) are commonly adopted for clustering. On the other hand, soft clustering methods have also been actively studied to allow flexible cluster boundaries (Ji et al., 2019; Huang et al., 2020). Recently, a 2-stage training paradigm has been proposed to construct the cluster structure initialized from the representations learned by instance discrimination (Gansbeke et al., 2020).

3 METHOD. Our work is motivated by an observation from SupCLR (Khosla et al., 2020), which additionally pulls together the representations from different instances by using ground-truth labels. However, directly applying this idea in an unsupervised environment with pseudo-labels is challenging, because small false-positive errors at the initial step can gradually spread, degrading the quality of the final representations. Instead, the main idea of our approach is to avoid pushing away those instances that are close enough to each other.
To validate this idea, we conducted a toy experiment in which a pulling force is applied only to the two augmented views of the same image, while images within the same class (identified using the ground-truth labels) are not pushed apart. We found that classification accuracy increases by over 5% on the STL-10 dataset compared to that of SimCLR (Chen et al., 2020a). Inspired by this experiment, we design our model (i) not to push away relevant instances, using our instance-alignment loss (Section 3.2), while (ii) discriminating the representations in a cluster-wise manner (Sections 3.3-3.4).

3.1 PRELIMINARIES. As shown in Fig. 3, we adopt stochastic data augmentation algorithms (Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; Caron et al., 2020) to generate two different augmented views $x'_i$ and $x''_i$ of the same image $x_i \sim X = \{x_1, x_2, \ldots, x_N\}$, where N is the number of unlabelled images. Inspired by Luo et al. (2018); Grill et al. (2020), C2BIN consists of an instance predictor $P^a(\cdot)$, cluster predictors $P^{c,k}(\cdot)$, and two Siamese networks called the runner $E_\theta(\cdot)$ and the follower $E_\phi(\cdot)$, respectively. The runner $E_\theta$ is rapidly updated to find the optimal parameters $\theta^*$ over the search space, while the follower $E_\phi$ generates the target representations for $E_\theta$. $E_\theta$ is composed of two neural functions, an encoder $F_\theta(\cdot)$ and an instance projector $G^a_\theta(\cdot)$, and likewise for the follower $E_\phi$. To bootstrap the instance-level alignment, $E_\theta$, $E_\phi$, and $P^a$ are used. Afterwards, $F_\theta$ and $P^{c,k}$ are utilized to contrast the cluster-wise features.

3.2 BOOTSTRAPPING LOSS OF INSTANCE REPRESENTATIONS. Given an image $x \sim X$, we can obtain two augmented views $x' = t'(x)$ and $x'' = t''(x)$, where $t'$ and $t''$ are sampled from a set of stochastic data augmentations $T$ as mentioned above. Even though the augmented views are distorted, they should contain similar semantics, and the learned representations should be closely aligned in the latent space. For training, we forward $x''$ through the follower $E_\phi$ to obtain target representations at an instance level; the runner $E_\theta$ aims to make the embedding vector of $x'$ closer to them. That is, we first extract image representations $r = F_\theta(x') \in \mathbb{R}^{d_r}$, where $d_r$ is the number of dimensions of our representations. Afterwards, we introduce a pretext-specific instance-wise projector $G^a_\theta(\cdot)$ and obtain pretext embedding vectors $z^a = G^a_\theta(r) \in \mathbb{R}^{1 \times d_a}$; the target pretext vectors $\hat{z}^a$ can be obtained using the same procedure with $E_\phi$. Motivated by Grill et al. (2020), we calculate our alignment loss as the cosine distance
$$\mathcal{L}_{\mathrm{align}} = 1 - \frac{P^a(z^a) \cdot \hat{z}^a}{\|P^a(z^a)\|_2 \|\hat{z}^a\|_2}, \qquad (1)$$
where $P^a(z^a), \hat{z}^a \in \mathbb{R}^{1 \times d_a}$, and we adopt the number of dimensions $d_a$ of the projected features as in Chen et al. (2020a;c).
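A minimal sketch of the bootstrapped alignment loss in Eq. (1), together with the momentum (EMA) update of the follower; the stand-in vectors and the momentum value are illustrative assumptions, and in a real implementation no gradient would flow through the follower's target.

```python
import numpy as np

def align_loss(pred, target):
    """Eq. (1): cosine distance between P^a(z^a) from the runner and z_hat^a from the follower."""
    return 1.0 - (pred @ target) / (np.linalg.norm(pred) * np.linalg.norm(target))

def ema_update(follower_params, runner_params, m=0.99):
    """Momentum update of the follower E_phi toward the runner E_theta (He et al., 2020)."""
    return [m * fp + (1 - m) * rp for fp, rp in zip(follower_params, runner_params)]

# Toy usage with random d_a-dimensional embeddings.
rng = np.random.default_rng(0)
pred, target = rng.normal(size=128), rng.normal(size=128)
print(align_loss(pred, target))
```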
3.3 CONTRASTIVE LOSS OF BATCH-WISE CLUSTER ASSIGNMENTS. The high-level motivation of this branch is that an image feature r can be represented as a combination of cluster features capturing local patterns. However, grouping similar images conflicts with the instance-level invariant mapping; therefore, we introduce an additional branch containing a cluster predictor $P^{c,k}(\cdot)$ after the encoder $F_\theta(\cdot)$. The cluster predictor $P^{c,k}$ is a linear function that takes $r_i$ as input and transforms it into a K-dimensional output vector. Therefore, $z^c_i = P^{c,k}(r_i)$ represents the degree of confidence for the i-th image representation $r_i$ to belong to each of the K cluster features, i.e.,
$$z^c_i = [z^c_{i,1}, z^c_{i,2}, \ldots, z^c_{i,K}] \in \mathbb{R}^{1 \times K}, \qquad (2)$$
where $z^c_i$ indicates the cluster membership distribution of the given image $x_i$. Since we sample n items for training, $Z^c \in \mathbb{R}^{n \times K}$ is the set of membership distributions of the given mini-batch. We now define the batch-wise cluster assignment vectors (BCAs) $c_k$ as
$$c_k = Z^c_{:,k} = [z^c_{1,k}, \ldots, z^c_{n,k}]^\top \in \mathbb{R}^{n \times 1}, \qquad (3)$$
which indicates how much the k-th cluster is mapped onto by the images in the mini-batch. Although $c_k$ changes dynamically as a new mini-batch is given, the same cluster features between differently augmented views of the same images should be similar, while the others should be pushed away in order to capture diverse patterns. To this end, we simply apply a contrastive loss between the BCAs:
$$\mathcal{L}^{\mathrm{bca}}_{\mathrm{clust}} = \frac{1}{K} \sum_{i=1}^{K} -\log\left( \frac{\exp(c'_i \cdot c''_i / \tau)}{\sum_{j=1}^{K} \mathbb{1}[j \neq i] \exp(c'_i \cdot c''_j / \tau)} \right), \qquad (4)$$
where $\tau$ indicates a temperature value. The vectors $c'$ and $c''$ are outputs of $P^{c,k}$ following the encoder $F_\theta$, taking $x'$ and $x''$ as inputs, respectively. Unfortunately, most clustering-based methods suffer from falling into a degenerate solution where the majority of items are allocated to a few clusters, especially in an unsupervised environment. To mitigate this issue, we first compute the mass of assignment to the k-th cluster as $s_k = \sum_{i} c_k(i)$, where $c_k(i)$ indicates each element of $c_k$. Afterwards, we encourage $r_i$ to be stochastically activated for as diverse a set of cluster features as possible by maximizing the entropy of s. To this end, we formulate the cluster loss function as
$$\mathcal{L}_{\mathrm{clust}} = \mathcal{L}^{\mathrm{bca}}_{\mathrm{clust}} - \lambda_{\mathrm{ent}} H(s), \qquad (5)$$
where $H$ indicates the entropy function $H(s) = -\sum_i s_i \log s_i$ and $\lambda_{\mathrm{ent}}$ is the weight of the regularization term.
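A minimal NumPy sketch of the cluster-branch objective in Eqs. (2)-(5); the batch size, the number of clusters, the softmax normalization of the assignments, and $\lambda_{\mathrm{ent}}$ are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cluster_loss(Zc1, Zc2, tau=0.5, lam_ent=1.0):
    """Eqs. (2)-(5): contrastive loss over BCAs of two views plus entropy regularization.

    Zc1, Zc2: (n, K) cluster-assignment matrices Z^c for views x' and x''.
    """
    C1, C2 = Zc1.T, Zc2.T                         # rows are BCAs c_k in R^n (Eq. 3)
    sim = (C1 @ C2.T) / tau                       # pairwise dot products c'_i . c''_j
    K = sim.shape[0]
    off_diag = ~np.eye(K, dtype=bool)
    denom = (np.exp(sim) * off_diag).sum(axis=1)  # sum_{j != i} exp(c'_i . c''_j / tau)
    l_bca = np.mean(-np.diag(sim) + np.log(denom))            # Eq. (4)
    s = Zc1.sum(axis=0) / Zc1.sum()               # normalized assignment mass per cluster
    entropy = -(s * np.log(s + 1e-12)).sum()      # H(s)
    return l_bca - lam_ent * entropy              # Eq. (5)

# Toy usage: batch of n = 32 with K = 8 clusters, softly assigned via softmax.
rng = np.random.default_rng(0)
Zc1, Zc2 = softmax(rng.normal(size=(32, 8))), softmax(rng.normal(size=(32, 8)))
print(cluster_loss(Zc1, Zc2))
```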
In this paper, the authors augment instance-level self-supervised learning with a cluster-aware learning mechanism during the training procedure. Specifically, for each training batch, the authors project the instances into a clustering space and then utilize a cluster-aware contrastive loss that pushes augmented samples from the same instance to belong to the same cluster, and samples from different instances to different clusters. To ensure the clustering does not collapse into a single cluster or a few clusters (the trivial solutions), the authors further add a penalization term to keep the entropy of the clustering assignment uniform to some extent. The experimental results demonstrate that the proposed method improves representation learning performance over SoTA methods on several datasets, while also outperforming previous methods on the clustering task. Further ablation studies show that the loss is effective in making the learned representations more discriminative and clusterable.
Understanding the failure modes of out-of-distribution generalization
1 INTRODUCTION. A machine learning model in the wild (e.g., a self-driving car) must be prepared to make sense of its surroundings under rare conditions that may not have been well-represented in its training set. These could range from mild glitches in the camera to strange weather conditions. This out-of-distribution (OoD) generalization problem has been extensively studied within the framework of the domain generalization setting (Blanchard et al., 2011; Muandet et al., 2013). Here, the classifier has access to training data sourced from multiple "domains" or distributions, but no data from the test domains. By observing the various kinds of shifts exhibited by the training domains, we want the classifier to learn to be robust to such shifts. The simplest approach to domain generalization is based on the Empirical Risk Minimization (ERM) principle (Vapnik, 1998): pool the data from all the training domains (ignoring the "domain label" on each point) and train a classifier by gradient descent to minimize the average loss on this pooled dataset. Alternatively, many recent studies (Ganin et al., 2016; Arjovsky et al., 2019; Sagawa et al., 2020a) have focused on designing more sophisticated algorithms that do utilize the domain labels on the datapoints, e.g., by enforcing certain representational invariances across domains. A basic premise behind pursuing such sophisticated techniques, as emphasized by Arjovsky et al. (2019), is the empirical observation that ERM-based gradient-descent training (or, for convenience, just ERM) fails in a characteristic way. As a standard illustration, consider a cow-camel classification task (Beery et al., 2018) where the background happens to be spuriously correlated with the label in a particular manner only during training: say, most cows are found against a grassy background and most camels against a sandy one. Then, during test time, if the correlation is completely flipped (i.e., all cows in deserts, and all camels in meadows), one would observe that the accuracy of ERM drops drastically. (∗Work performed in part while Vaishnavh Nagarajan was interning at Blueshift, Alphabet. 1Code is available at https://github.com/google-research/OOD-failures.) Evidently, ERM, in its unrestrained attempt at fitting the data, indiscriminately relies on all kinds of informative features, including unreliable spurious features like the background. However, an algorithm that carefully uses domain label information can hope to identify and rely purely on invariant features (or "core" features (Sagawa et al., 2020b)). While the above narrative is an oft-stated motivation behind developing sophisticated OoD generalization algorithms, there is little formal explanation as to why ERM fails in this characteristic way. Existing works (Sagawa et al., 2020b; Tsipras et al., 2019; Arjovsky et al., 2019; Shah et al., 2020) provide valuable answers to this question through concrete theoretical examples; however, their examples critically rely on certain factors to make the task difficult enough for ERM to rely on the spurious features. For instance, many of these examples have invariant features that are only partially predictive of the label (see Fig 1a). Surprisingly though, ERM relies on spurious features even in much easier-to-learn tasks where these complicating factors are absent, such as in tasks with fully predictive invariant features, e.g.,
Fig 1c, the Waterbirds/CelebA examples in Sagawa et al. (2020a), or, for that matter, any real-world situation where the object shape perfectly determines the label. This failure in easy-to-learn tasks, as we argue later, is not straightforward to explain (see Fig 1b for the brief idea). This evidently implies that there must exist factors more general and fundamental than those known so far that cause ERM to fail. Our goal in this work is to uncover these fundamental factors behind the failure of ERM. The hope is that this will provide a vital foundation for future work to reason about OoD generalization. Indeed, recent empirical work (Gulrajani & Lopez-Paz, 2020) has questioned whether existing alternatives necessarily outperform ERM on OoD tasks; however, due to a lack of theory, it is not clear how to hypothesize about when or why one algorithm would outperform another here. Through our theoretical study, future work can hope to be better positioned to precisely identify the key missing components in these algorithms, and to bridge these gaps to better solve the OoD generalization problem.

Our contributions. To identify the most fundamental factors causing OoD failure, our strategy is to (a) study tasks that are "easy" to succeed at, and (b) demonstrate that ERM relies on spurious features despite how easy the tasks are. More concretely: 1. We formulate a set of constraints on how our tasks must be designed so that they are easy to succeed at (e.g., the invariant feature must be fully predictive of the label). Notably, this class of easy-to-learn tasks provides both a theoretical test-bed for reasoning about OoD generalization and a simplified empirical test-bed. In particular, this class encompasses simplified MNIST- and CIFAR10-based classification tasks on which we establish the empirical failure of ERM. 2. We identify two complementary mechanisms of failure of ERM that arise from how spurious correlations induce two kinds of skews in the data: one geometric and the other statistical. In particular, we theoretically isolate these failure modes by studying linear classifiers trained by gradient descent (on the logistic/exponential loss) and their infinite-time-trained equivalent, the max-margin classifier (Soudry et al., 2018; Ji & Telgarsky, 2018), on the easy-to-learn tasks. 3. We also show that in any easy-to-learn task that does not have these geometric or statistical skews, these models do not rely on the spurious features. This suggests that these skews are not only a sufficient but also a necessary factor for the failure of these models in easy-to-learn tasks. 4. To empirically demonstrate the generality of our theoretical insights, we (a) experimentally validate these skews in a range of MNIST- and CIFAR10-based tasks and (b) demonstrate their effects on fully-connected networks (FNNs) and ResNets. We also identify and explain failure in scenarios where standard notions of spurious correlations do not apply (see Fig 1d). We perform similar experiments on a non-image classification task in App E.

2 RELATED WORK. Spurious correlations. Empirical work has shown that deep networks find superficial ways to predict the label, such as by relying on the background (Beery et al., 2018; Ribeiro et al., 2016) or other kinds of shortcuts (McCoy et al., 2019; Geirhos et al., 2020). Such behavior is of practical concern because accuracy can deteriorate under shifts in those features
(Rosenfeld et al., 2018; Hendrycks & Dietterich, 2019). It can also lead to unfair biases and poor performance on minority groups (Dixon et al., 2018; Zhao et al., 2017; Sagawa et al., 2020b).

Understanding failure of ERM. While the fact that ERM relies on spurious correlations has become empirical folk wisdom, only a few studies have made efforts to carefully model this phenomenon. Broadly, there are two kinds of existing models that explain it. One existing model imagines that both the invariant and the spurious features are only partially predictive of the label (Tsipras et al., 2019; Sagawa et al., 2020b; Arjovsky et al., 2019; Ilyas et al., 2019; Khani & Liang, 2020), as a result of which the classifier that maximizes accuracy cannot ignore the spurious feature (see Fig 1a). The other existing model is based on the "simplicity bias" of gradient-descent-based deep network training (Rahaman et al., 2018; Neyshabur et al., 2015; Kalimeris et al., 2019; Arpit et al., 2017; Xu et al., 2019; des Combes et al., 2018). In particular, this model typically assumes that both the invariant and spurious features are fully predictive of the label, but crucially posits that the spurious features are simpler to learn (e.g., more linear) than the invariant features, and that gradient descent therefore prefers to use them (Shah et al., 2020; Nar et al., 2019; Hermann & Lampinen, 2020). While both these models offer simple-to-understand and useful explanations for why classifiers may use spurious correlations, we provide a more fundamental explanation. In particular, we empirically and theoretically demonstrate how ERM can rely on the spurious feature even in much easier tasks where these explanations would fall apart: these are tasks where, unlike in the first model, (a) the invariant feature is fully predictive, and, unlike in the second model, (b) the invariant feature corresponds to a simple linear boundary and (c) the spurious feature is not fully predictive of the label. Further, we go beyond the max-margin settings analyzed in these works to analyze the dynamics of finite-time gradient-descent-trained classifiers on the logistic loss. We would also like to point the reader to the concurrent work of Khani & Liang (2021), which proposes a different model addressing the above points. While their model sheds insight into the role of overparameterization in the context of spurious features (and our results are agnostic to that), their model also requires the spurious feature to be "dependent" on the invariant feature, an assumption we do not require (see Sec 3).

Algorithms for OoD generalization. Due to the empirical shortcomings of ERM, a wide range of sophisticated algorithms has been developed for domain generalization. The most popular strategy is to learn useful features while constraining them to have similar distributions across domains (Ganin et al., 2016; Li et al., 2018b; Albuquerque et al., 2020). Other works constrain these features in a way that one can learn a classifier that is simultaneously optimal across all domains (Peters et al., 2016; Arjovsky et al., 2019; Krueger et al., 2020). As discussed in Gulrajani & Lopez-Paz (2020), there are also many other existing non-ERM-based methods, including meta-learning (Li et al., 2018a), parameter-sharing (Sagawa et al., 2020a), and data augmentation (Zhang et al., 2018).
Through their extensive empirical survey of many of the above algorithms, Gulrajani & Lopez-Paz (2020) suggest that ERM may be just as competitive as the state of the art. But we must emphasize that this does not vindicate ERM of its failures; rather, it indicates that we may be yet to develop a substantial improvement over ERM.

3 EASY-TO-LEARN DOMAIN GENERALIZATION TASKS. Below, we first set up the basic domain generalization setting and the idea of ERM. Then, in Section 3.1, we formulate a class of domain-generalization tasks that are in many respects "easy" for the learner (such as having fully informative invariant features); what exactly makes a task "easy" will be discussed in Section 3.1. This discussion sets the ground for the later sections to show how ERM can fail even in these easy tasks, which will help uncover the fundamental factors behind its failure.

Notations. Consider an input (vector) space $\mathcal{X}$ and label space $\mathcal{Y} \in \{-1, 1\}$. For any distribution $D$ over $\mathcal{X} \times \mathcal{Y}$, let $p_D(\cdot)$ denote its probability density function (PDF). Let $\mathcal{H}$ denote a class of classifiers $h : \mathcal{X} \to \mathbb{R}$. Let the error of $h$ on $D$ be denoted as $L_D(h) := E_{(x,y) \sim D}[\mathbb{1}(h(x) \cdot y < 0)]$.

The domain generalization setting and ERM. In the domain generalization setting, one considers an underlying class $\mathcal{D}$ of data distributions over $\mathcal{X} \times \mathcal{Y}$ corresponding to the different possible domains. The learner is given training data collected from multiple distributions from $\mathcal{D}$. For an ERM-based learner in particular, the training data will be pooled together, so we can model the data as coming from a single (pooled) distribution $D_{train}$, which for simplicity can be assumed to belong to $\mathcal{D}$. Given this data, the learner outputs a hypothesis $\hat{h} \in \mathcal{H}$ that is tested on a new distribution $D_{test}$ picked from $\mathcal{D}$. This can potentially be modeled by assuming that all test and training distributions are drawn from a common hyper-distribution over $\mathcal{D}$. However, this assumption becomes pointless in most practical settings, where the training domains are no more than three or four in number (e.g., PACS (Asadi et al., 2019), VLCS (Fang et al., 2013)) and therefore hardly representative of any hyper-distribution. Here, the problem becomes as hard as ensuring good performance on a worst-case test distribution without any hyper-distribution assumption; this boils down to minimizing $\max_{D \in \mathcal{D}} L_D(\hat{h})$. Indeed, most works have studied the worst-case setting, both theoretically (Sagawa et al., 2020b) and empirically (Arjovsky et al., 2019; Sagawa et al., 2020a). Similarly, in this work we focus on the worst-case setting and define the optimal target function $h^\star$ to be $h^\star = \arg\min_{h \in \mathcal{H}} \max_{D \in \mathcal{D}} L_D(h)$. We then define the features that this "robust" classifier uses as the invariant features $\mathcal{X}_{inv}$ (e.g., the shape of the object), and the rest as the spurious features $\mathcal{X}_{sp}$ (e.g., the background). To formalize this, we assume that there exists a mapping $\Phi : \mathcal{X}_{inv} \times \mathcal{X}_{sp} \to \mathcal{X}$ such that each $D \in \mathcal{D}$ is induced by a distribution over $\mathcal{X}_{inv} \times \mathcal{X}_{sp}$ (so we can denote any $x$ as $\Phi(x_{inv}, x_{sp})$). With an abuse of notation, we will use $p_D(\cdot)$ to also denote the PDF of the distribution over $\mathcal{X}_{inv} \times \mathcal{X}_{sp}$. Then, the fact that $\mathcal{X}_{sp}$ consists of features that $h^\star$ does not rely on is stated mathematically as: $\forall x_{inv}$ and $\forall x_{sp} \neq x'_{sp}$, $h^\star(\Phi(x_{inv}, x_{sp})) = h^\star(\Phi(x_{inv}, x'_{sp}))$.
Finally, we note that, to make this learning problem tractable, one has to impose further restrictions; we will provide more details on those when we discuss the class of easy-to-learn domain generalization tasks in Sec 3.1. Empirical failure of ERM. To guide us in constructing the easy-to-learn tasks, let us ground our study in a concrete empirical setup where an ERM-based linear classifier shows OoD failure. Specifically, consider the following Binary-MNIST-based task, where the first five digits and the remaining five digits form the two classes. First, we let Φ be the identity mapping, so that x = (x_inv, x_sp). Then, we let x_inv be a random ReLU feature representation of the MNIST digit, i.e., if x_raw represents the MNIST image, then x_inv = ReLU(W x_raw), where W is a matrix with Gaussian entries. We make this representation sufficiently high-dimensional so that the data becomes linearly separable. Next, we let the spurious feature take values in {+B, −B} for some B > 0, imitating the two possible background colors in the camel-cow dataset. Finally, on D_train, for any y, we pick the image x_inv from the corresponding class and independently set the "background color" x_sp so that there is some spurious correlation, i.e., Pr_{D_train}[x_sp · y > 0] > 0.5. During test time, however, we flip this correlation around so that Pr_{D_test}[x_sp · y > 0] = 0.0. In this task, we observe in Fig 2a (shown later under Sec 4) that as we vary the train-time spurious correlation from none (Pr_{D_train}[x_sp · y > 0] = 0.5) to its maximum (Pr_{D_train}[x_sp · y > 0] = 1.0), the OoD accuracy of a max-margin classifier progressively deteriorates. (We present similar results for a CIFAR10 setting, and all experiment details, in App C.1.) Our goal is now to theoretically demonstrate why ERM fails this way (or equivalently, why it relies on the spurious feature) even in tasks as "easy-to-learn" as these.
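To make the setup above concrete, the following is a minimal sketch of the data construction, assuming only numpy and a generic array of flattened MNIST images; the function name and default values (e.g., d_inv, B, p_corr) are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def make_spurious_dataset(x_raw, y, d_inv=2000, B=1.0, p_corr=0.9, seed=0):
    """Sketch of the Binary-MNIST task: x = (x_inv, x_sp).

    x_raw : (n, d_raw) flattened images; y : (n,) labels in {-1, +1}.
    x_inv : random ReLU features ReLU(W x_raw), high-dimensional enough
            that the data is linearly separable (fully predictive).
    x_sp  : scalar "background color" in {+B, -B}, set independently of
            x_inv so that Pr[x_sp * y > 0] = p_corr.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d_inv, x_raw.shape[1])) / np.sqrt(x_raw.shape[1])
    x_inv = np.maximum(x_raw @ W.T, 0.0)              # ReLU(W x_raw)
    agree = rng.random(len(y)) < p_corr               # spurious agreement events
    x_sp = (np.where(agree, y, -y) * B).reshape(-1, 1)
    return np.hstack([x_inv, x_sp])

# Train split: p_corr > 0.5 (spurious correlation present).
# Test split:  p_corr = 0.0 (correlation fully flipped).
```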
The paper studies generalization under distribution shift, and tries to answer the question: why do ERM-based classifiers learn to rely on "spurious" features? They present a class of distributions called "easy-to-learn" that rules out several explanations given in recent work and isolates the spurious correlation phenomenon in the simplest possible setting. Even on "easy-to-learn" distributions, linear models obtained from ERM use spurious features owing to either the dynamics of gradient descent trained on separable data (very slow convergence to the max-margin classifier) or a certain geometric skew in the data.
SP:b47032cd0c8bf0189504e1c6562b058ba8f0e8ae
TaskSet: A Dataset of Optimization Tasks
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume-preserving flows, on a variety of datasets. As an example application of such a dataset, we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet, we achieve large speedups in sample efficiency over random search. Next, we use the diversity of TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings, including ImageNet classification with ResNet-50 and LM1B language modeling with transformers. As part of this work we have open-sourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.¹ 1 INTRODUCTION. As machine learning moves to new domains, collecting diverse, rich, and application-relevant datasets is critical for its continued success. Historically, research on learning optimization algorithms has only leveraged single tasks (Andrychowicz et al., 2016; Metz et al., 2019a) or parametric synthetic tasks (Wichrowska et al., 2017), due to the difficulty of obtaining large sets of tasks. 1.1 TASKSET: A SET OF TASKS. We present a set of tasks significantly larger than any optimizer dataset previously studied. We aim to better enable standardized research on optimizers, be that analysis of existing optimizers or development of new learned learning algorithms. We call this suite of tasks TaskSet. Much in the same way that learned features in computer vision outpaced hand-designed features (Krizhevsky et al., 2012; LeCun et al., 2015), we believe that data-driven approaches to discovering optimization algorithms will replace their hand-designed counterparts, resulting in increased performance and usability. To this end, standardizing a large suite of optimization tasks is an important first step towards more rigorous learned-optimizer research. In this setting, a single "example" is an entire training procedure for a task defined by data, loss function, and architecture. Thus, TaskSet consists of over a thousand optimization tasks, largely focused on deep learning (neural networks). They include image classification using fully connected and convolutional models, generative models with variational autoencoders (Kingma & Welling, 2013) or flows (Dinh et al., 2016; Papamakarios et al., 2017), natural language processing tasks including both language modeling and classification, as well as synthetic tasks such as quadratics and optimization test functions. The problems themselves are diverse in size, spanning 7 orders of magnitude in parameter count, but remain reasonably fast to compute, as almost all tasks can be trained for 10k iterations on a CPU in under one hour. To demonstrate the breadth of this dataset we show an embedding of all the tasks in Appendix A.1 in Figure S1. (¹ redacted url) 1.2 AMORTIZING HYPERPARAMETER SEARCH. Machine learning methods are growing ever more complex, and their computational demands are increasing at a frightening pace (Amodei & Hernandez, 2018). Unfortunately, most modern machine learning models also require extensive hyperparameter tuning.
Often, hyperparameter search is many times more costly than the final algorithm, which ultimately has large economic and environmental costs (Strubell et al., 2019). The most common approach to hyperparameter tuning involves some form of quasi-random search over a pre-specified grid of hyperparameters. Building on past work (Wistuba et al., 2015b; Pfisterer et al., 2018), and serving as a typical example problem illustrative of the sort of research enabled by TaskSet, we explore a hyperparameter search strategy consisting of a simple ordered list of hyperparameters to try. The idea is that the first few elements in this list will cover most of the variation in good hyperparameters found in typical machine learning workloads. We choose the elements in this list by leveraging the diversity of tasks in TaskSet: we meta-learn a hyperparameter list that performs the best on the set of tasks in TaskSet. We then test this list of hyperparameters on new, larger machine learning tasks. Although learning the list of hyperparameters is costly (in total we train ∼29 million models consisting of over 4,000 distinct hyperparameter configurations), our final published list is now available as a good starting guess for new tasks. Furthermore, we believe the raw training curves generated by this search will be useful for future hyperparameter analysis and meta-learning research, and we release them as part of this work. We additionally release code in TensorFlow (Abadi et al., 2016), Jax (Bradbury et al., 2018), and PyTorch (Paszke et al., 2019) for a reference optimizer which uses our learned hyperparameter list, and which can be easily applied to any model. 2 TASKSET: A SET OF TASKS. How should one choose what problems to include in a set of optimization tasks? In our case, we strive to include optimization tasks that have been influential in deep learning research over the last several decades, and that will be representative of many common machine learning problems. Designing this dataset requires striking a balance between including realistic large-scale workloads and ensuring that tasks are fast to train so that using it for meta-learning is tractable. We construct our dataset largely out of neural-network-based tasks. Our chosen tasks have between ten thousand and one million parameters (much smaller than the billions commonly used today); as a result, most problems can train in under an hour on a cloud CPU with 5 cores. We additionally focus on increased "task diversity" by including many different kinds of training algorithms, architectures, and datasets, inspired by past work in reinforcement learning which has demonstrated that large numbers of problems and increased diversity around some domain of interest are useful for both training and generalization (Heess et al., 2017; Tobin et al., 2017; Cobbe et al., 2018; OpenAI et al., 2019). Again though, a balance must be struck, as in the limit of too much diversity no learning can occur due to the no free lunch theorem (Wolpert & Macready, 1997). Our dataset, TaskSet, is made up of 1162 tasks in total. We define a task as the combination of a loss function, a dataset, and initialization. Specifically, we define a task as a set of 4 functions, listed below:
• Initialization: () → parameter initial values
• Data generator: data split (e.g., train/valid/test) → batch of data
• Forward pass: (batch of data, params) → loss
• Gradient function: (input data, params) → gradients (dloss/dparams)
(A concrete sketch of this interface appears at the end of this section.) A task has no tunable hyperparameters and, coupled with an optimizer, provides all the necessary information to train using first-order optimization. This makes experimentation easier, as each task definition specifies hyperparameters such as batch size (Shallue et al., 2018; McCandlish et al., 2018) or initialization (Schoenholz et al., 2016; Yang & Schoenholz, 2017; Xiao et al., 2018; Li & Nguyen, 2019; Pretorius et al., 2018; Hayou et al., 2018; Karakida et al., 2018; Blumenfeld et al., 2019; Hayou et al., 2019) that no longer need to be tuned. We augment a set of "fixed" tasks which have been designed by hand with "sampled" tasks that are randomly generated task instances. 2.1 SAMPLED FAMILIES OF TASKS. Sampled tasks are created by sampling neural network architectures (e.g., MLPs, convnets), activation functions, datasets (e.g., images, text, quadratic functions, and synthetic tasks), and other properties. We organize these sampled tasks into similar families of tasks. See Appendix H for a complete description of these sampled tasks. Broadly, these are separated into tasks sampling image models (mlp, mlp_ae (Hinton & Salakhutdinov, 2006), mlp_vae (Kingma & Welling, 2013), conv_pooling, conv_fc, nvp (Dinh et al., 2016), maf (Papamakarios et al., 2017)), tasks sampling language models (char_rnn_language_model (Graves, 2013), word_rnn_language_model, rnn_text_classification), quadratics (quadratic), and other synthetic tasks (losg_tasks (Wichrowska et al., 2017)). Defining a sampling distribution that generates tasks that are always valid, and that run within a time constraint, is difficult. Instead, we define a broad distribution and make use of rejection sampling to remove tasks that are either too slow or that we are unable to optimize at all. By starting with a distribution that is too broad, and pruning it, we hope to achieve better coverage of tasks. 2.2 HAND DESIGNED TASKS. In addition to the sampled tasks, we also include 107 hand-designed tasks. These consist of more common tasks that both improve the coverage beyond the sampled tasks and provide for better interpretability through a closer match to existing tasks in the literature. These tasks span image classification, text classification, language modeling, and generative modeling, as well as some synthetic tasks such as associative retrieval (Ba et al., 2016). We leave the description of each one of these tasks to Appendix H.3. 2.3 AGGREGATE STATISTICS OF TASKSET. In Figure 1a we show histograms of compute times for all problems and find that almost all problems train in under an hour (see Appendix C for per-task-family histograms). In Figure 1c we plot a histogram of the number of parameters per task. Finally, in Figure 1b we show a distribution of task difficulty by plotting the fraction of optimizer configurations that achieve a certain loss value. We find that for some tasks as many as 50% of optimizers perform well, while for others < 1% achieve a loss close to the smallest observed loss. For a qualitative visualization of TaskSet, see Appendix A. 3 AMORTIZED HYPERPARAMETER SEARCH. As a simple demonstration of using TaskSet for meta-learning research, we consider learning hyperparameter lists.
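As a concrete illustration of the four-function task interface above, here is a minimal sketch, assuming numpy; the quadratic task, class name, and method names are illustrative, not TaskSet's actual API.

```python
import numpy as np

class QuadraticTask:
    """Illustrative task exposing the four-function interface:
    initialization, data generator, forward pass, gradient function."""

    def __init__(self, dim=10, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.dim = dim

    def init_params(self):                 # () -> parameter initial values
        return np.zeros(self.dim)

    def batch(self, split="train", n=32):  # data split -> batch of data
        rng = np.random.default_rng(abs(hash(split)) % 2**32)
        return rng.standard_normal((n, self.dim))

    def loss(self, data, params):          # (batch, params) -> loss
        r = data @ (self.A @ params) - 1.0
        return 0.5 * np.mean(r ** 2)

    def gradient(self, data, params):      # (batch, params) -> dloss/dparams
        r = data @ (self.A @ params) - 1.0
        return (data @ self.A).T @ r / len(data)

# Coupled with an optimizer, this is all that is needed for first-order training:
# task = QuadraticTask(); p = task.init_params(); g = task.gradient(task.batch(), p)
```

We now return to learning an ordered list of hyperparameters.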
This idea of learning lists of hyperparameters has been explored in (Wistuba et al., 2015b; Pfisterer et al., 2018). We define an optimizer as the pairing of an optimization algorithm and all its corresponding hyperparameters (e.g., learning rate). While practitioners sometimes use a single optimizer (e.g., Adam (Kingma & Ba, 2014) with default hyperparameters), most will often run multiple optimizers and use a validation set to select the best performer. 3.1 OPTIMIZER FAMILIES. We define different parameterizations of hand-designed optimizers as an optimizer family. The optimizer families we consider consist of:
• Adam1p: one hyperparameter, the fixed learning rate α
• Adam4p: four Adam hyperparameters, α, β1, β2, and ε
• Adam6p: the Adam4p hyperparameters, and two additional hyperparameters controlling linear and exponential learning rate decays
• Adam8p: the hyperparameters in Adam6p plus two additional hyperparameters for ℓ1 and ℓ2 regularization terms
• NAdamW: a 10-hyperparameter search space based on NAdam (Dozat, 2016) with cosine learning rate decay and weight decay.
For the full update equations, see Appendix D.1 for Adam and D.2 for NAdamW. We chose Adam based on its use in existing work, and NAdam based on performance shown in (Choi et al., 2019).
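Given TaskSet-style results, an ordered hyperparameter list can be built greedily: repeatedly add the configuration that most reduces the best-so-far loss averaged over tasks. The sketch below is one plausible reading of such a procedure rather than necessarily the paper's exact objective; it assumes a precomputed matrix of per-task, per-configuration (normalized) final losses.

```python
import numpy as np

def greedy_hparam_list(losses, list_len=10):
    """losses: (n_tasks, n_configs) matrix of final normalized losses.
    Returns the indices of configurations to try, in order."""
    n_tasks, _ = losses.shape
    best_so_far = np.full(n_tasks, np.inf)
    chosen = []
    for _ in range(list_len):
        # Score of adding config j: mean over tasks of min(best_so_far, loss_j).
        scores = np.minimum(best_so_far[:, None], losses).mean(axis=0)
        j = int(np.argmin(scores))
        chosen.append(j)
        best_so_far = np.minimum(best_so_far, losses[:, j])
    return chosen
```

A new task then tries the list entries sequentially, keeping the configuration with the best validation loss, which is how such a list amortizes the cost of hyperparameter search.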
This paper proposes a dataset of tasks to help evaluate learned optimizers. The learned optimizers are evaluated by the loss that they achieve on held-out tasks after 10k steps. Using this dataset, the main strategy considered is to use search spaces that parametrize optimizers and learn a list of hyperparameter configurations for the optimizer that are tried sequentially. The authors show that the learned hyperparameter configuration list learned achieves better performance than (constrained) random search on multiple optimizer search spaces. Finally, they show that the learned hyperparameter list transfer well to realistic problems such as training a ResNet-50 model on ImageNet and training a transformer architecture on LM1B, outperforming reasonable baselines.
SP:698104525f6955ba58aee1331a9487f77a542f13
On InstaHide, Phase Retrieval, and Sparse Matrix Factorization
In this work, we examine the security of InstaHide, a scheme recently proposed by Huang et al. (2020b) for preserving the security of private datasets in the context of distributed learning. To generate a synthetic training example to be shared among the distributed learners, InstaHide takes a convex combination of private feature vectors and randomly flips the sign of each entry of the resulting vector with probability 1/2. A salient question is whether this scheme is secure in any provable sense, perhaps under a plausible complexity-theoretic assumption. The answer to this turns out to be quite subtle and closely related to the average-case complexity of a multi-task, missing-data version of the classic problem of phase retrieval that is interesting in its own right. Motivated by this connection, under the standard distributional assumption that the public/private feature vectors are isotropic Gaussian, we design an algorithm that can actually recover a private vector using only the public vectors and a sequence of synthetic vectors generated by InstaHide. 1 INTRODUCTION. In distributed learning, where decentralized parties each possess some private local data and work together to train a global model, a central challenge is to ensure that the security of any individual party's local data is not compromised. Huang et al. (2020b) recently proposed an interesting approach called InstaHide for this problem. At a high level, InstaHide is a method for aggregating local data into synthetic data that can hopefully preserve the privacy of the local datasets and be used to train good models. Informally, given a collection of public feature vectors (e.g., a publicly available dataset like ImageNet (Deng et al., 2009)) and a collection of private feature vectors (e.g., the union of all of the private datasets among learners), InstaHide produces a synthetic feature vector as follows. Let integers k_pub, k_priv be sparsity parameters. 1. Form a random convex combination of k_pub public and k_priv private vectors. 2. Multiply every coordinate of the resulting vector by an independent random sign in {±1}, and define this to be the synthetic feature vector. The hope is that by removing any sign information from the vector obtained in Step 1, Step 2 makes it difficult to discern which public and private vectors were selected in Step 1. Strikingly, Huang et al. (2020b) demonstrated on real-world datasets that if one trains a ResNet-18 or a NASNet on a dataset consisting of synthetic vectors generated in this fashion, one can still get good test accuracy on the underlying private dataset for modest sparsity parameters (e.g., k_pub = k_priv = 2).¹ The two outstanding theoretical challenges that InstaHide poses are understanding: • Utility: What property, either of neural networks or of real-world distributions, lets one tolerate this kind of covariate shift between the synthetic and original datasets? • Security: Can one rigorously formulate a refutable security claim for InstaHide, under a plausible average-case complexity-theoretic assumption? In this paper we consider the latter question.
One informal security claim implicit in Huang et al. (2020b) is that given a synthetic dataset of a certain size, no efficient algorithm can recover a private image to within a certain level of accuracy (see Problem 1 for a formal statement of this recovery question). On the one hand, it is a worthwhile topic of debate whether this is a satisfactory guarantee from a security standpoint. On the other, even this kind of claim is quite delicate to pin down formally, in part because it seems impossible for such a claim to hold for arbitrary private datasets. Known Attacks and the Importance of Distributional Assumptions. If the private and public datasets consisted of natural images, for example, then attacks are known (Jagielski, 2020; Carlini et al., 2020). At a high level, the attack of Jagielski (2020) crucially leverages local Lipschitzness properties of natural images and shows that when k_priv + k_pub = 2, even a single synthetic image can reveal significant information. The very recent attack of Carlini et al. (2020), which was independent of the present work and appeared a month after this submission appeared online, is more sophisticated and bears interesting similarities to the algorithms we consider. We defer a detailed discussion of these similarities to Appendix A in the supplement. While the original InstaHide paper (Huang et al., 2020b) focused on image data, their general approach has the potential to be applicable to other forms of real-valued data, and it is an interesting mathematical question whether the above attacks remain viable. For instance, for distributions over private vectors where individual features are nearly independent, one cannot hope to leverage the kinds of local Lipschitzness properties that the attack of Jagielski (2020) exploits. Additionally, if the individual features are identically distributed, then it is information-theoretically impossible to discern anything from just a single synthetic vector. For instance, if a synthetic vector ṽ is given by the entrywise absolute value of (1/2)v_1 + (1/2)v_2 for private vectors v_1, v_2, then an equally plausible pair of private vectors generating ṽ would be v′_1, v′_2 given by swapping the i-th entry of v_1 with that of v_2 for any collection of indices i ∈ [d]. In other words, there are 2^d pairs of private vectors which are equally likely under the Gaussian measure and give rise to the exact same synthetic vector. Gaussian Images, and Our Results. A natural candidate for probing whether such properties can make the problem of recovering private vectors more challenging is the case where the public and private vectors are sampled from the standard Gaussian distribution over R^d. While this distribution does not capture datasets in the real world, it avoids some properties of distributions over natural images that might make InstaHide more vulnerable to attack, and is thus a clean testbed for stress-testing candidate security claims for InstaHide. Furthermore, in light of known hardness results for certain learning problems over Gaussian space (Diakonikolas et al., 2017; Bruna et al., 2020; Diakonikolas et al., 2020b; Goel et al., 2020a; Diakonikolas et al., 2020a; Klivans & Kothari, 2014; Goel et al., 2020b; Bubeck et al., 2019; Regev & Vijayaraghavan, 2017), one might hope that when the vectors are Gaussian, one could rigorously establish some lower bounds, e.g.,
on the size of the synthetic dataset (information-theoretic) and/or the runtime of the attacker (computational), perhaps under an average-case assumption, or in some restricted computational model like SQ. Orthogonally, we note that the recovery task the attacker must solve appears to be an interesting inverse problem in its own right, namely a multi-task, missing-entry version of phase retrieval with an intriguing connection to sparse matrix factorization (see Section 2.2 and Section 3). The assumption of Gaussianity is a natural starting point for understanding the average-case complexity of this problem, and in this learning-theoretic context it is desirable to give algorithms with provable guarantees. (¹ We did not describe how the labels for the synthetic vectors are assigned, but this part of InstaHide will not be important for our theoretical results, and we defer discussion of labels to Section 4.) Gaussianity is often a standard starting point for developing guarantees for such inverse problems (Moitra & Valiant, 2010; Netrapalli et al., 2013; Candes et al., 2015; Hardt & Price, 2015; Zhong et al., 2017b;a; Li & Yuan, 2017; Ge et al., 2018; Li & Liang, 2018; Zhong et al., 2019; Chen et al., 2020; Kong et al., 2020; Diakonikolas et al., 2020b). Our main result is to show that when the private and public data is Gaussian, we can use the synthetic and public vectors to recover a subset of the private vectors. Theorem 1.1 (Informal, see Theorem B.1). If there are n_priv private vectors and n_pub public vectors, each of which is an i.i.d. draw from N(0, Id_d), then as long as d = Ω(poly(k_pub, k_priv) log(n_pub + n_priv)), there is some m = o(n_priv^{k_priv}) such that, given a sample of m random synthetic vectors independently generated as above, one can exactly recover k_priv + 2 private vectors in time O(d(m^2 + n_pub^2)) + poly(n_pub) with probability 9/10 over the randomness of the private and public vectors and the randomness of the selection vectors.² We emphasize that we can take m = o(n_priv^{k_priv}), meaning we can achieve recovery even with access to a vanishing fraction of all possible combinations of private vectors among the synthetic vectors generated. For instance, when k_priv = 2, we show that m = O(n_priv^{4/3}) suffices (see Theorem B.1). See Remark B.2 for additional discussion. Additionally, to ensure we are not working in an uninteresting setting where InstaHide has zero utility, we empirically verify that in the setting of Theorem 1.1 one can train on the synthetic vectors and get reasonable test accuracy on the original Gaussian dataset (see Section 4). Qualitatively, the main takeaway of Theorem 1.1 is that to prove meaningful security guarantees for InstaHide, we must be careful about the properties we posit about the underlying distribution generating the public and private data, even in challenging settings where this data does not possess the nice properties of natural images that have made other attacks possible. 1.1 CONNECTIONS AND EXTENSIONS TO PHASE RETRIEVAL. Our algorithm is based on connections and extensions to the classic problem of phase retrieval. At a high level, this can be thought of as the problem of linear regression where the signs of the linear responses are hidden.
More formally, this is a setting where we get pairs (x_1, y_1), ..., (x_N, y_N) ∈ C^n × R for which there exists a vector w ∈ C^n satisfying |⟨w, x_i⟩| = y_i for all i = 1, ..., N, and the goal is to recover w. Without distributional assumptions on how x_1, ..., x_N are generated, this problem is NP-hard (Yi et al., 2014), and in the last decade there has been a huge body of work, much of it coming from the machine learning community, on giving algorithms for recovering w under the assumption that x_1, ..., x_N are i.i.d. Gaussian; see, e.g., Candes et al. (2013; 2015); Conca et al. (2015); Netrapalli et al. (2013). To see the connection between InstaHide and phase retrieval, first imagine that InstaHide only works with public vectors (in the notation of Theorem 1.1, n_priv = k_priv = 0). Now, consider a synthetic vector y ∈ R^d generated by InstaHide, and let the vector w ∈ R^{n_pub} be the one specifying the convex combination of public vectors that generated y. The basic observation is that for any feature i ∈ [d], if p_i ∈ R^{n_pub} is the vector consisting of the i-th coordinates of all the public vectors, then |⟨w, p_i⟩| = y_i. In other words, if InstaHide only works with public vectors, then the problem of recovering which public vectors generated a given synthetic vector is formally equivalent to phase retrieval. In particular, if the public dataset is Gaussian, then we can leverage the existing algorithms for Gaussian phase retrieval. Huang et al. (2020b) already noted this connection but argued that if InstaHide also uses private vectors, the existing algorithms for phase retrieval fail. Indeed, consider the extreme case where InstaHide only works with private vectors (i.e., n_pub = 0), so that the only information we have access to is the synthetic vector (y_1, ..., y_d) generated by InstaHide. As noted above in the discussion about private distributions where the features are identically distributed, it is clearly information-theoretically impossible to recover anything about w or the private dataset. As we will see, the key workaround is to exploit the fact that InstaHide ultimately generates multiple synthetic vectors, each of which is defined by a random sparse convex combination of public/private vectors.² And as we will make formal in Section 2.2, the right algorithmic question to study in this context can be thought of as a multi-task, missing-data version of phase retrieval (see Problem 2) that we believe to be of independent interest. Lastly, we remark that in spite of this conceptual connection to phase retrieval, and apart from one component of our algorithm (see Section B.1) which draws upon existing techniques for phase retrieval, the most involved parts of our algorithm and its analysis utilize techniques that are quite different from the existing ones in the phase retrieval literature. We elaborate upon these techniques in Section 3. (² See Problem 1 and Remark 2.7 for what exact recovery precisely means in this context.)
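To ground the recovery problem, here is a minimal sketch of the InstaHide encoding step described in the introduction (random sparse convex combination followed by independent sign flips), assuming numpy; the uniform sampling of mixing weights is an illustrative simplification of how the convex combination is drawn.

```python
import numpy as np

def instahide_encode(public, private, k_pub=2, k_priv=2, rng=None):
    """Generate one synthetic vector from (n_pub, d) public and
    (n_priv, d) private matrices, following Steps 1 and 2 above."""
    rng = rng or np.random.default_rng()
    chosen = np.vstack([
        public[rng.choice(len(public), k_pub, replace=False)],
        private[rng.choice(len(private), k_priv, replace=False)],
    ])
    w = rng.random(k_pub + k_priv)
    w /= w.sum()                                    # Step 1: convex combination
    mixed = w @ chosen
    signs = rng.choice([-1.0, 1.0], size=mixed.shape)
    return signs * mixed                            # Step 2: random sign flips
```

With n_priv = k_priv = 0, each coordinate reveals exactly |⟨w, p_i⟩| (the sign is destroyed), which is the phase retrieval view described above.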
The purpose of the paper seems clear: it proposes an attack to the recently proposed algorithm called Instahide (ICML 2020) which is a probabilistic algorithm for generating synthetic private data in the distributed setting. The attack proposed in this paper is considered for the case where the private data is i.i.d. Gaussian distributed, and Thm 1.1 says that one can recover k original feature vectors with O(k^2) + O(M^2) computational complexity, where M is the total number of original data elements.
SP:4bda50ce81c790cf9b19a24d81db4c07ec3729c1
Towards Noise-resistant Object Detection with Noisy Annotations
1 INTRODUCTION. The remarkable success of modern object detectors largely relies on large-scale datasets with extensive bounding box annotations. However, it is extremely expensive and time-consuming to acquire high-quality human annotations. For example, annotating each bounding box in ILSVRC requires 42s on Mechanical Turk (Su et al., 2012), whereas the more recent OpenImagesV4 (Kuznetsova et al., 2018) reports 7.4 seconds with extreme clicking (Papadopoulos et al., 2017b). On the other hand, there are ways to acquire annotations at lower cost, such as limiting the annotation time, reducing the number of annotators, or using machine-generated annotations. However, these methods would yield annotations with both label noise (i.e., wrong classes) and bounding box noise (i.e., inaccurate locations), which could be detrimental for learning. Learning with label noise has been an active area of research. Some methods perform label correction using the predictions from the model and modify the loss accordingly (Reed et al., 2015; Tanaka et al., 2018). Other methods treat samples with small loss as those with clean labels, and only allow clean samples to contribute to the loss (Jiang et al., 2018b; Han et al., 2018). However, most of those methods focus on the image classification task, where the existence of an object is guaranteed. Several recent works have studied object detection with noisy annotations. Zhang et al. (2019) focus on the weakly-supervised (WS) setting where only image-level labels are available, and find reliable bounding box instances as those with low classification loss. Gao et al. (2019) study a semi-supervised (SS) setting where the training data contains a small amount of fully-labeled bounding boxes and a large amount of image-level labels, and propose to distill knowledge from a detector pretrained on clean annotations. However, these methods require access to some clean annotations. In this work, we address a more challenging and practical problem, where the annotation contains an unknown mixture of label noise and bounding box noise. Furthermore, we do not assume access to any clean annotations. The entanglement of label noise and bounding box noise makes noise correction more difficult. A commonly used noise indicator, namely the classification loss, is incapable of distinguishing label noise from bounding box noise. Furthermore, it is problematic to correct noise directly using the model predictions, because label correction requires accurate bounding box coordinates to crop the object, whereas bounding box correction requires accurate class labels to produce the regression offset. To overcome these difficulties, we propose a two-step noise correction procedure (code will be released). In the first step, we perform class-agnostic bounding box correction (CA-BBC), which seeks to decouple bounding box noise from label noise, and optimizes the noisy ground-truth (GT) bounding box regardless of its class label. An illustration of CA-BBC is shown in Figure 1. It is based on the following intuition: if a bounding box tightly covers an object, then two diverged classifiers will agree with each other and produce the same prediction. Furthermore, both classifiers will have low scores for the background class, i.e., high objectness scores. Therefore, we directly regress the noisy GT bounding box to minimize both the classifier discrepancy and the background scores.
CA-BBC also has the option to reject a bounding box as a false positive if the objectness score is too low. In the second step, we leverage the model's output for label noise correction and class-specific bounding box refinement. It has been shown that co-training two models can filter different types of noise and let the models help each other learn (Blum & Mitchell, 1998; Han et al., 2018; Yu et al., 2019; Chadwick & Newman, 2019). Therefore, we distill knowledge from the ensemble of dual detection heads for noise correction, by generating soft labels and bounding box offsets. We show that soft labels with a well-adjusted temperature lead to better performance even for a clean dataset. To summarize, this paper proposes a noise-resistant learning framework to train object detectors with noisy annotations. The proposed framework jointly optimizes object labels, bounding box coordinates, and model parameters by performing alternating noise correction and model training. We conduct experiments on two benchmarks, PASCAL VOC and MS-COCO, which contain different levels of synthetic noise as well as machine-generated noise. The proposed method outperforms previous methods by a large margin. We also provide qualitative results to demonstrate the efficacy of the two-step noise correction, and ablation studies to examine the effect of each component. 2 RELATED WORK. 2.1 CROWDSOURCING FOR OBJECT DETECTION. Crowdsourcing platforms such as Amazon Mechanical Turk (AMT) have enabled the collection of large-scale datasets. Due to the formidable cost of human annotation, many efforts have been devoted to reducing the annotation cost. However, even an efficient protocol still reports 42.4s to annotate one object in an image (Su et al., 2012). Other methods have been proposed which trade off annotation quality for lower cost, by using click supervision (Papadopoulos et al., 2017a), human-in-the-loop labeling (Russakovsky et al., 2015; Papadopoulos et al., 2016; Konyushkova et al., 2018), or exploiting eye-tracking data (Papadopoulos et al., 2014). These methods focus on reducing human effort, rather than combating the annotation noise as our method does. 2.2 LEARNING WITH LABEL NOISE. Deep Neural Networks (DNNs) can easily overfit to noisy labels in the training data, leading to poor generalization performance (Zhang et al., 2017). Many works have addressed learning with label noise. Some approaches correct noise by relabeling the noisy samples (Vahdat, 2017; Veit et al., 2017; Lee et al., 2018), but they rely on a small set of clean samples for noise correction. Iterative relabeling methods (Tanaka et al., 2018; Yi & Wu, 2019) have been proposed which produce hard or soft labels using the model predictions. Other approaches filter noise by reweighting or selecting training samples (Jiang et al., 2018b; Ren et al., 2018; Chen et al., 2019b; Arazo et al., 2019; Li et al., 2020). Since DNNs learn clean samples faster than noisy ones, samples with smaller classification loss are usually considered to be clean (Arpit et al., 2017). To avoid error accumulation during the noise correction process, co-teaching (Han et al., 2018) trains two networks simultaneously, where each network selects small-loss samples to train the other. Co-teaching+ (Yu et al., 2019) further keeps the two networks diverged by training on disagreement data. 2.3 WEAKLY-SUPERVISED AND SEMI-SUPERVISED OBJECT DETECTION.
Weakly-supervised object detection aims to learn object detectors with only image-level labels. Most existing works formulate it as a multiple instance learning (MIL) task (Dietterich et al., 1997), where each label is assigned to a bag of object proposals. A common pipeline is to iteratively alternate between mining object instances using a detector and training the detector using the mined instances (Deselaers et al., 2010; Cinbis et al., 2017). To address the localization noise in the object proposals, Zhang et al. (2019) propose an adaptive sampling method which finds reliable instances as those with high classification scores, and uses the reliable instances to impose a similarity loss on noisy images. Different from weakly-supervised object detection, which assumes that the correct object label is given, our method deals with label noise and bounding box noise at the same time. Semi-supervised methods train object detectors using training data with bounding box annotations for some images and only image-level labels for other images (Hoffman et al., 2014; Tang et al., 2016; Uijlings et al., 2018; Gao et al., 2019). Gao et al. (2019) propose an iterative training-mining framework consisting of detector initialization, box mining, and detector retraining. To address the annotation noise of the mined boxes, they use a detector pretrained on clean annotations for knowledge distillation. Different from all semi-supervised learning methods, our method does not need access to any clean annotations. 3 METHOD. 3.1 OVERVIEW. Given a training dataset with images X, noisy object labels Y, and noisy bounding boxes B, our method aims to train an object detector parameterized by Θ, by jointly optimizing Y, B and Θ. We first warm up Θ by training the detector in a standard manner using the original noisy annotations. After the warm-up, we perform alternating optimization on the annotations and the model. Specifically, for each mini-batch of data X = {x_i}, Y = {y_i}, B = {b_i}, we first keep Θ fixed and perform noise correction to update Y and B, and then we use the corrected annotations to update Θ. An overview of the algorithm is shown in Algorithm 1. We use a popular two-stage object detector (i.e., Faster-RCNN (Ren et al., 2015)), which consists of a backbone feature extractor parameterized by θ_cnn, a Region Proposal Network (RPN) θ_rpn, a classification head θ_c, and a bounding box (bbox) regression head θ_b. Note that θ_c and θ_b have shared layers. Let the detection head with parameters θ_d denote the union of the classification head and the bbox regression head. During training, we simultaneously train two detection heads θ_d^1 = {θ_c^1, θ_b^1} and θ_d^2 = {θ_c^2, θ_b^2}, which are kept diverged from each other by different (random) parameter initializations and different (random) training instance (i.e., RoI) sampling. Due to the entanglement of an unknown mixture of label noise and bbox noise, it is difficult to correct both types of noise in a single step. Therefore, we propose a two-step noise correction method. In the first step, we perform class-agnostic bounding box correction (CA-BBC), which disentangles bbox noise from label noise. In the second step, we utilize the outputs from the dual detection heads for label noise correction and class-specific bbox refinement. Figure 2 shows an illustration of our framework. Next we delineate the details.
Algorithm 1: Alternating two-step noise correction and model training.
1: Input: model Θ = {θ_cnn, θ_rpn, θ_d^1, θ_d^2}, noisy training dataset (X, Y, B).
2: while not MaxIters do
3:   Sample mini-batch X = {x_i}, Y = {y_i}, B = {b_i}.
4:   for b in B do
5:     Update b → b* with CA-BBC (Eq. 2 & 3).
6:   end
7:   for (y, b*) in (Y, B*) do
8:     Update y → y* with dual-head soft label correction (Eq. 4 & 5).
9:     Update b* → b** with class-specific bbox refinement (Eq. 6).
10:  end
11:  Update Θ by SGD on L_rpn(B**), L_cls^{1+2}(Y*), L_loc^{1+2}(B**, Y*).
12: end
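As a concrete illustration of the CA-BBC step (lines 4-6 of Algorithm 1), here is a minimal PyTorch-style sketch: the box coordinates are treated as the optimization variable and updated by gradient descent to reduce the classifier discrepancy plus the background scores. Since Eq. 2 & 3 are not reproduced in this excerpt, the objective below is a hedged reading rather than the exact losses, and `roi_predict` is a hypothetical helper standing in for RoI pooling followed by each head's softmax classifier (background assumed to be the last class).

```python
import torch

def ca_bbc(box, image_feats, head1, head2, roi_predict, steps=10, lr=1e-3,
           lambda_bg=1.0):
    """Class-agnostic bounding box correction (sketch).

    box: (4,) tensor of box coordinates, optimized directly.
    roi_predict(feats, box, head) -> class probabilities incl. background.
    """
    box = box.clone().requires_grad_(True)
    opt = torch.optim.SGD([box], lr=lr)
    for _ in range(steps):
        p1 = roi_predict(image_feats, box, head1)   # probs from head 1
        p2 = roi_predict(image_feats, box, head2)   # probs from head 2
        discrepancy = ((p1 - p2) ** 2).sum()        # diverged heads should agree
        background = p1[-1] + p2[-1]                # low background = tight box
        loss = discrepancy + lambda_bg * background
        opt.zero_grad()
        loss.backward()
        opt.step()
    return box.detach()
```

The rejection option mentioned in the text would correspond to discarding the box when `1 - p[-1]` (objectness) stays below a threshold after optimization.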
In this work the authors propose a framework to perform object detection when there is noise present in class labels as well as bounding box annotations. The authors propose a two-step process, where in the first step the bounding boxes are corrected in class-agnostic way, and in the second step knowledge distillation has been used to correct the class labels. The propose method has been evaluated on two different datasets with synthetic noise.
SP:a1c54d5c42097b8ba971ac20470de864ae87dd4e
Self-Supervised Learning of Compressed Video Representations
Self-supervised learning of video representations has received great attention. Existing methods typically require frames to be decoded before being processed, which increases compute and storage requirements and ultimately hinders large-scale training. In this work, we propose an efficient self-supervised approach to learning video representations that eliminates the expensive decoding step. We use a three-stream video architecture that encodes the I-frames and P-frames of a compressed video. Unlike existing approaches that encode I-frames and P-frames individually, we propose to jointly encode them by establishing bidirectional dynamic connections across streams. To enable self-supervised learning, we propose two pretext tasks that leverage the multimodal nature (RGB, motion vectors, residuals) and the internal GOP structure of compressed videos. The first task asks our network to predict zeroth-order motion statistics in a spatio-temporal pyramid; the second asks for the correspondence types between I-frames and P-frames after applying temporal transformations. We show that our approach achieves competitive performance on compressed video recognition in both supervised and self-supervised regimes. 1 INTRODUCTION. There has been significant progress on self-supervised learning of video representations. It learns from unlabeled videos by exploiting their underlying structures and statistics as free supervision signals, which allows us to leverage the large amounts of videos available online. Unfortunately, training video models is notoriously difficult to scale. Typically, practitioners have to trade off compute against storage: decode frames once and store them as JPEG images for faster data loading, at the cost of large storage; or decode frames on-the-fly, at the cost of high computational requirements. Therefore, large-batch training of video models is difficult without high-end compute clusters. Although these issues are generally applicable to any video-based scenario, they are particularly problematic for self-supervised learning, because large-scale training is one key ingredient (Brock et al., 2019; Clark et al., 2019; Devlin et al., 2019) but that is exactly where these issues are aggravated. Recently, several approaches demonstrated the benefits of compressed video recognition (Zhang et al., 2016; Wu et al., 2018; Shou et al., 2019; Wang et al., 2019b). Without ever needing to decode frames, these approaches can alleviate compute and storage requirements, e.g., resulting in 3 to 10 times faster solutions than traditional video CNNs at a minimal loss in accuracy (Wu et al., 2018; Wang et al., 2019b). Also, the motion vectors embedded in compressed videos provide a free alternative to optical flow, which is compute-intensive; leveraging this has been shown to be two orders of magnitude faster than optical flow-based approaches (Shou et al., 2019). However, all the previous work on compressed video has focused on supervised learning, and there has been no study that shows the potential of compressed videos in self-supervised learning; this is the focus of our work. In this work, we propose a self-supervised approach to learning video representations directly in the compressed video format. We exploit two inherent characteristics of compressed videos. First, video compression packs a sequence of images into several Groups of Pictures (GOPs).
Intuitively, the GOP structure provides an atomic representation of motion; each GOP contains images with just enough scene change that a video codec can compress them with minimal information loss. Because of this atomic property, we enjoy less spurious, more consistent motion information at the GOP level than at the frame level. Second, compressed videos naturally provide a multimodal representation (i.e., RGB frames, motion vectors, and residuals) that we can leverage for multimodal correspondence learning. Based on these, we propose two novel pretext tasks (see Fig. 1). The first task asks our model to predict zeroth-order motion statistics (e.g., where is the most dynamic region) in a pyramidal spatio-temporal grid structure. The second involves predicting correspondence types between I-frames and P-frames after temporal transformation. Solving our tasks requires implicitly locating the most salient moving objects and matching their appearance-motion correspondences between I-frames and P-frames; this encourages our model to learn a discriminative representation of compressed videos. A compressed video contains three streams of multimodal information – i.e., RGB images, motion vectors, and residuals – with a dependency structure between an I-frame stream and the two P-frame streams punctuated by GOP boundaries. We design our architecture to encode this dependency structure; it contains one CNN encoding I-frames and two other CNNs encoding the motion vectors and residuals in P-frames, respectively. Unlike existing approaches that encode I-frames and P-frames individually, we propose to jointly encode them to fully exploit the underlying structure of compressed videos. To this end, we use a three-stream CNN architecture and establish bidirectional dynamic connections going from each of the two P-frame streams into the I-frame stream, and vice versa, and place these connections layer-wise to learn the correlations between the streams at multiple spatial/temporal scales (see Fig. 1). These connections allow our model to fully leverage the internal GOP structure of compressed videos and effectively capture an atomic representation of motion. In summary, our main contributions are two-fold: (1) We propose a three-stream architecture for compressed videos with bidirectional dynamic connections to fully exploit the internal structure of compressed videos. (2) We propose novel pretext tasks to learn from compressed videos in a self-supervised manner. We demonstrate our approach by pretraining the model on Kinetics-400 (Kay et al., 2017) and finetuning it on UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011). Our model achieves new state-of-the-art performance on compressed video classification tasks in both supervised and self-supervised regimes, while maintaining a similar computational efficiency as existing compressed video recognition approaches (Wu et al., 2018; Shou et al., 2019). 2 APPROACH. We use videos compressed according to the MPEG-4 Part 2 specifications (Le Gall, 1991) as our input, following previous work (Wu et al., 2018; Shou et al., 2019; Wang et al., 2019b). This compression format encodes an RGB image sequence as a series of GOPs (Groups of Pictures), where each GOP starts with one I-frame followed by a variable number of P-frames. An I-frame stores the RGB values of a complete image and can be decoded on its own.
A P-frame holds only the changes from the previous reference frame, encoded using motion vectors and residuals. The motion vectors store 2D displacements of the most similar patches between the reference and the target frames, and the residuals store pixel-wise differences to correct motion compensation errors. We use all three modalities contained in compressed videos as our input. Formally, our input is T GOPs, G_0, ..., G_{T−1}, where each G_t contains one I-frame I_t ∈ R^{H×W×3} followed by K − 1 pairs of motion vectors M_{t,k} ∈ R^{H×W×2} and residuals R_{t,k} ∈ R^{H×W×3}, k ∈ [1, K). For efficiency and simplicity, we assume an identical GOP size K for all t ∈ [0, T). 2.1 IMR NETWORK FOR COMPRESSED VIDEOS. Our model consists of three CNNs, each with 3D convolutional kernels modeling spatio-temporal dynamics within each input stream {I_t}, {M_{t,k}}, {R_{t,k}}, t ∈ [0, T), k ∈ [0, K); we denote these sub-networks by I-network f_I, M-network f_M, and R-network f_R, respectively, and call our model the IMR Network (IMRNet). We account for the difference in the amount of information between I-frames and P-frames by adjusting the capacity of the networks accordingly. Specifically, following (Wu et al., 2018), we make the capacity of f_I larger than f_M and f_R by setting the number of channels in each layer of f_I to be γ times higher than those of f_M and f_R (we set γ = 64). Existing models for compressed videos typically perform late fusion (Wu et al., 2018; Shou et al., 2019), i.e., they combine embeddings of I-frames and P-frames only after encoding each stream. However, we find that it is critical to allow our sub-networks to share information as they encode their respective input streams. To this end, we establish layer-wise lateral connections between f_I & f_M and between f_I & f_R. Bidirectional dynamic connections. Lateral connections have been used to combine information from different streams, e.g., RGB images and optical flow images (Feichtenhofer et al., 2016), and RGB images sampled at different frame rates (Feichtenhofer et al., 2019). In this work, we use them to combine information from I-frames and P-frames. Our approach is different from previous work in two key aspects: (1) We establish bidirectional connections between streams, instead of the unidirectional connections typically used in the past (Feichtenhofer et al., 2016; 2019), so that information sharing is symmetric between streams. (2) We incorporate multimodal gated attention to dynamically adjust the connections based on multimodal (I-frame and P-frame) information. We call our approach bidirectional dynamic connections to highlight these two aspects and differentiate ours from previous work; e.g., SlowFast networks (Feichtenhofer et al., 2019) establish unidirectional lateral connections, and the connections are static regardless of the content from the other stream. We combine embeddings from different sub-networks via channel-wise concatenation, which requires the embeddings to match in their spatio-temporal dimensions. However, f_I processes κ times fewer frames than f_M and f_R, producing embeddings that are κ times smaller in the temporal dimension. Therefore, we transform the embeddings with time-strided 3D (de-)convolution with (κ × 1 × 1) kernels, C/8 channels, and (κ, 1, 1) temporal stride: we use convolution for f_I → f_M/f_R to decrease the time dimension and deconvolution for f_M/f_R → f_I to increase it.
Note that simply using the (de-)conv layers would perform a static transformation regardless of what is provided from the other sub-network, similar to (Feichtenhofer et al., 2019). However, we find it critical to make the transformations aware of information from both sub-networks, so that the networks can dynamically adjust the connections and selectively share only the most relevant information from each sub-network. To achieve this, we dynamically modulate the (de-)conv layer outputs using multimodal-gated attention weights. Let $x_I \in \mathbb{R}^{T_I \times W \times H \times C_I}$ and $x_M \in \mathbb{R}^{T_M \times W \times H \times C_M}$ be the embeddings from $f_I$ and $f_M$, respectively. We max-pool $x_I$ and $x_M$ and concatenate them to obtain a multimodal embedding $z \in \mathbb{R}^{C_Z}$ with $C_Z = C_I + C_M$. We define multimodal gate functions that take as input $z$ and generate attention weights $a_I \in \mathbb{R}^{C_I/8}$ and $a_M \in \mathbb{R}^{C_M/8}$ as

$$a_I = \sigma(W_3 h + b_3), \quad a_M = \sigma(W_4 h + b_4), \quad h = \zeta(W_2\,\zeta(W_1 z + b_1) + b_2) \tag{1}$$

where $\sigma$ is a sigmoid function, $\zeta$ is a Leaky ReLU function, and $W_1, W_2 \in \mathbb{R}^{C_Z \times C_Z}$, $b_1, b_2 \in \mathbb{R}^{C_Z}$, $W_3 \in \mathbb{R}^{C_I/8 \times C_Z}$, $b_3 \in \mathbb{R}^{C_I/8}$, $W_4 \in \mathbb{R}^{C_M/8 \times C_Z}$, $b_4 \in \mathbb{R}^{C_M/8}$ are weight parameters. Next, we use these attention weights to modulate the (de-)conv output embeddings,

$$v_{I \to M} = a_M \otimes \mathrm{3d\_conv}(x_I), \quad v_{M \to I} = a_I \otimes \mathrm{3d\_deconv}(x_M) \tag{2}$$

where $\otimes$ is channel-wise multiplication. We repeat the same process for $f_I$ & $f_R$ to obtain $v_{I \to R}$ and $v_{R \to I}$, and combine them with the feature embeddings via channel-wise concatenation,

$$\hat{x}_I = [x_I;\, v_{M \to I};\, v_{R \to I}], \quad \hat{x}_M = [x_M;\, v_{I \to M}], \quad \hat{x}_R = [x_R;\, v_{I \to R}] \tag{3}$$

Each of these is fed into the next layer in the corresponding sub-network. We establish these lateral connections across multiple layers of our network. To obtain the final embedding, we apply average pooling on the output from the final layer of each sub-network and concatenate the results channel-wise. Note that the design of IMRNet is orthogonal to the design of video CNNs; while we adapt 3D-ResNet (He et al., 2016) as the backbone in our experiments, we can use any existing CNN architecture as the backbone, e.g., C3D (Tran et al., 2015), I3D (Carreira & Zisserman, 2017), R(2+1)D (Tran et al., 2018). What is essential, however, is that (i) there are three sub-networks, each modeling one of the three input streams, and (ii) information from the different networks is combined via bidirectional dynamic connections as they are encoded.
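To make the gating mechanism concrete, here is a minimal PyTorch sketch of a single I↔M connection implementing Eqs. (1)-(3); it is an illustrative reading, not the authors' code. In particular, the temporal resampling directions are chosen here so that the concatenated tensors match in shape (upsampling on the I→M path, downsampling on the M→I path, since x_I has κ times fewer time steps), and the channel counts follow the C/8 convention stated above.

```python
import torch
import torch.nn as nn

class DynamicConnection(nn.Module):
    """Sketch of one bidirectional dynamic connection between the I-stream
    (C_I channels, T_I time steps) and the M-stream (C_M channels, kappa*T_I steps)."""

    def __init__(self, c_i, c_m, kappa):
        super().__init__()
        c_z = c_i + c_m
        # Temporal resampling so the concatenated tensors match in shape.
        self.i2m = nn.ConvTranspose3d(c_i, c_m // 8, (kappa, 1, 1), stride=(kappa, 1, 1))
        self.m2i = nn.Conv3d(c_m, c_i // 8, (kappa, 1, 1), stride=(kappa, 1, 1))
        # Multimodal gate (Eq. 1): two Linear + LeakyReLU layers, then sigmoids.
        self.gate = nn.Sequential(nn.Linear(c_z, c_z), nn.LeakyReLU(),
                                  nn.Linear(c_z, c_z), nn.LeakyReLU())
        self.to_a_i = nn.Linear(c_z, c_i // 8)
        self.to_a_m = nn.Linear(c_z, c_m // 8)

    def forward(self, x_i, x_m):
        # x_i: (B, C_I, T_I, H, W); x_m: (B, C_M, kappa*T_I, H, W)
        z = torch.cat([x_i.amax(dim=(2, 3, 4)), x_m.amax(dim=(2, 3, 4))], dim=1)
        h = self.gate(z)
        a_i = torch.sigmoid(self.to_a_i(h))[:, :, None, None, None]
        a_m = torch.sigmoid(self.to_a_m(h))[:, :, None, None, None]
        v_i2m = a_m * self.i2m(x_i)          # Eq. 2: I -> M, gated by a_M
        v_m2i = a_i * self.m2i(x_m)          # Eq. 2: M -> I, gated by a_I
        # Eq. 3 (I & M part): channel-wise concatenation for the next layers.
        return torch.cat([x_i, v_m2i], dim=1), torch.cat([x_m, v_i2m], dim=1)
```

The same module, instantiated a second time for the R-stream, would produce v_{I→R} and v_{R→I}, with the I-stream concatenating both incoming messages as in Eq. (3).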
This paper proposes an approach to self-supervised learning from videos. The approach takes advantage of compressed videos, using the encoded residuals and motion vectors within the video codec. Using encoded videos has been shown to reduce computation time required by decoding videos. Previous works have explored compressed videos for supervised recognition, showing the potential, while this paper introduces a way to leverage compressed videos for self-supervised learning.
SP:4fde35c9931ca15ab6cd53b171323e1abf0224db
Provably Faster Algorithms for Bilevel Optimization and Applications to Meta-Learning
1 INTRODUCTION. Bilevel optimization has received significant attention recently and has become an influential framework in various machine learning applications including meta-learning (Franceschi et al., 2018; Bertinetto et al., 2018; Rajeswaran et al., 2019; Ji et al., 2020a), hyperparameter optimization (Franceschi et al., 2018; Shaban et al., 2019; Feurer & Hutter, 2019), reinforcement learning (Konda & Tsitsiklis, 2000; Hong et al., 2020), and signal processing (Kunapuli et al., 2008; Flamary et al., 2014). A general bilevel optimization problem takes the following formulation:

$$\min_{x \in \mathbb{R}^p} \Phi(x) := f(x, y^*(x)) \quad \text{s.t.} \quad y^*(x) = \arg\min_{y \in \mathbb{R}^q} g(x, y), \tag{1}$$

where the upper- and inner-level functions f and g are both jointly continuously differentiable. The goal of eq. (1) is to minimize the objective function Φ(x) w.r.t. x, where y*(x) is obtained by solving the lower-level minimization problem. In this paper, we focus on the setting where the lower-level function g is strongly convex with respect to (w.r.t.) y, and the upper-level objective function Φ(x) is nonconvex w.r.t. x. Such loss geometries commonly exist in many applications including meta-learning and hyperparameter optimization, where g corresponds to an empirical loss with a strongly-convex regularizer and x are the parameters of neural networks. A broad collection of algorithms have been proposed to solve such bilevel optimization problems. For example, Hansen et al. (1992); Shi et al. (2005); Moore (2010) reformulated the bilevel problem in eq. (1) into a single-level constrained problem based on the optimality conditions of the lower-level problem. However, such methods often involve a large number of constraints, and are hard to implement in machine learning applications. Recently, more efficient gradient-based bilevel optimization algorithms have been proposed, which can be generally categorized into the approximate implicit differentiation (AID) based approach (Domke, 2012; Pedregosa, 2016; Gould et al., 2016; Liao et al., 2018; Ghadimi & Wang, 2018; Grazzi et al., 2020; Lorraine et al., 2020) and the iterative differentiation (ITD) based approach (Domke, 2012; Maclaurin et al., 2015; Franceschi et al., 2017; 2018; Shaban et al., 2019; Grazzi et al., 2020). However, most of these studies have focused on asymptotic convergence analysis, and the finite-time analysis (which characterizes how fast an algorithm converges) has not been well explored, except for a few recent attempts. Ghadimi & Wang (2018) provided a finite-time analysis for the ITD-based approach. Grazzi et al. (2020) provided the iteration complexity for the hypergradient computation via ITD and AID, but did not characterize the finite-time convergence for the entire execution of the algorithms. • Thus, the first focus of this paper is to develop a comprehensive and enhanced theory, which covers a broader class of bilevel optimizers via ITD- and AID-based techniques, and more importantly, to improve the existing analysis with a more practical parameter selection and order-level lower computational complexity. Stochastic bilevel optimization often occurs in applications where fresh data need to be sampled as the algorithm runs (e.g., reinforcement learning (Hong et al., 2020)) or where the sample size of the training data is large (e.g., hyperparameter optimization
Typically, the corresponding objective function is given by
$$\min_{x\in\mathbb{R}^p} \Phi(x) = f(x, y^*(x)) := \begin{cases} \mathbb{E}_{\xi}\left[F(x, y^*(x); \xi)\right] \\ \frac{1}{n}\sum_{i=1}^{n} F(x, y^*(x); \xi_i) \end{cases} \quad \text{s.t.} \quad y^*(x) = \arg\min_{y\in\mathbb{R}^q} g(x, y) := \begin{cases} \mathbb{E}_{\zeta}\left[G(x, y; \zeta)\right] \\ \frac{1}{m}\sum_{j=1}^{m} G(x, y; \zeta_j), \end{cases} \qquad (2)$$
where f(x, y) and g(x, y) take either the expectation form w.r.t. the random variables ξ and ζ or the finite-sum form over given data $D_{n,m} = \{\xi_i, \zeta_j,\ i = 1, \ldots, n;\ j = 1, \ldots, m\}$, often with large sizes n and m. During the optimization process, the algorithms sample data batches via the distributions of ξ and ζ or from the set $D_{n,m}$. For such a stochastic setting, Ghadimi & Wang (2018) proposed a bilevel stochastic approximation (BSA) method via single-sample gradient and Hessian estimates. Based on such a method, Hong et al. (2020) further proposed a two-timescale stochastic approximation (TTSA), and showed that TTSA achieves a better trade-off between the complexities of the inner- and outer-loop optimization stages than BSA. • The second focus of this paper is to design a more sample-efficient algorithm for stochastic bilevel optimization, which achieves order-level lower computational complexity than BSA and TTSA. 1.1 MAIN CONTRIBUTIONS . Our main contributions lie in developing enhanced theory and provably faster algorithms for nonconvex-strongly-convex bilevel deterministic and stochastic optimization problems, respectively. Our analysis involves several new developments, which can be of independent interest. We first provide a unified finite-time convergence and complexity analysis for both ITD- and AID-based bilevel optimizers, which we call ITD-BiO and AID-BiO. Compared to the existing analysis in Ghadimi & Wang (2018) for AID-BiO, which requires a continuously increasing number of inner-loop steps to achieve the guarantee, our analysis allows a constant number of inner-loop steps as often used in practice. In addition, we introduce a warm start initialization for the inner-loop updates and the outer-loop hypergradient estimation, which allows us to backpropagate the tracking errors to previous loops, and results in an improved computational complexity. As shown in Table 1, the gradient complexities Gc(f, ε) and Gc(g, ε), and the Jacobian- and Hessian-vector product complexities JV(g, ε) and HV(g, ε), of AID-BiO to attain an ε-accurate stationary point improve those of Ghadimi & Wang (2018) by the order of κ, κε^{-1/4}, κ, and κ, respectively, where κ is the condition number. In addition, our analysis shows that AID-BiO requires fewer computations of Jacobian- and Hessian-vector products than ITD-BiO, by an order of κ and κ^{1/2} respectively, which provides a justification for the observation in Grazzi et al. (2020) that ITD often has a larger memory cost than AID. We then propose a stochastic bilevel optimizer (stocBiO) to solve the stochastic bilevel optimization problem in eq. (2). Our algorithm features mini-batch hypergradient estimation via implicit differentiation, where the core design is a sample-efficient hypergradient estimator based on the Neumann series. As shown in Table 2, the gradient complexities of our proposed algorithm w.r.t. F and G improve upon those of BSA (Ghadimi & Wang, 2018) by an order of κ and ε^{-1}, respectively. In addition, the Jacobian-vector product complexity JV(G, ε) of our algorithm improves that of BSA by an order of κ.
In terms of the target accuracy ε, our computational complexities improve upon those of TTSA (Hong et al., 2020) by an order of ε^{-1/2}. We further provide theoretical complexity guarantees for ITD-BiO, AID-BiO and stocBiO in meta-learning and hyperparameter optimization. The experiments validate our theoretical results for deterministic bilevel optimization, and demonstrate the superior efficiency of stocBiO for stochastic bilevel optimization. Due to space limitations, we present all theoretical and empirical results on hyperparameter optimization in the supplementary materials. 1.2 RELATED WORK . Bilevel optimization approaches : Bilevel optimization was first introduced by Bracken & McGill (1973). Since then, a number of bilevel optimization algorithms have been proposed, which include but are not limited to constraint-based methods (Shi et al., 2005; Moore, 2010) and gradient-based methods (Domke, 2012; Pedregosa, 2016; Gould et al., 2016; Maclaurin et al., 2015; Franceschi et al., 2018; Ghadimi & Wang, 2018; Liao et al., 2018; Shaban et al., 2019; Hong et al., 2020; Liu et al., 2020; Li et al., 2020; Grazzi et al., 2020; Lorraine et al., 2020). Among them, Ghadimi & Wang (2018); Hong et al. (2020) provided finite-time complexity analyses of their proposed methods for the nonconvex-strongly-convex bilevel optimization problem. For such a problem, this paper develops a general and enhanced finite-time analysis for gradient-based bilevel optimizers in the deterministic setting, and proposes a novel algorithm for the stochastic setting with order-level lower computational complexity than the existing results. Some works have studied other types of loss geometries. For example, Liu et al. (2020); Li et al. (2020) assumed that the lower- and upper-level functions g(x, ·) and f(x, ·) are convex and strongly convex, respectively, and provided an asymptotic analysis for their methods. Ghadimi & Wang (2018); Hong et al. (2020) studied the setting where Φ(·) is strongly convex or convex, and g(x, ·) is strongly convex. Bilevel optimization in meta-learning : The bilevel optimization framework has been successfully employed in meta-learning recently (Snell et al., 2017; Franceschi et al., 2018; Rajeswaran et al., 2019; Zügner & Günnemann, 2019; Ji et al., 2020a;b). For example, Snell et al. (2017) proposed a bilevel optimization procedure for meta-learning to learn a common embedding model for all tasks. Rajeswaran et al. (2019) reformulated model-agnostic meta-learning (MAML) (Finn et al., 2017) as a bilevel optimization problem, and proposed iMAML via implicit gradients. This paper provides a theoretical guarantee for two popular types of bilevel optimization algorithms, i.e., AID-BiO and ITD-BiO, for meta-learning. Bilevel optimization in hyperparameter optimization : Hyperparameter optimization has become increasingly important as a powerful tool in automatic machine learning (autoML) (Okuno et al., 2018; Yu & Zhu, 2020). Recently, various bilevel optimization algorithms have been proposed in the context of hyperparameter optimization, including implicit differentiation based methods (Pedregosa, 2016), dynamical system based methods via reverse or forward gradient computation (Franceschi et al., 2017; 2018; Shaban et al., 2019), etc. This paper demonstrates the superior efficiency of the proposed stocBiO algorithm in hyperparameter optimization.

Algorithm 1 Deterministic bilevel optimization via AID or ITD
1: Input: stepsizes α, β > 0, initializations x_0, y_0, v_0.
2: for k = 0, 1, 2, ..., K do
3:   Set y_k^0 = y_{k-1}^T if k > 0, and y_0 otherwise
4:   for t = 1, ..., T do
5:     Update y_k^t = y_k^{t-1} − α ∇_y g(x_k, y_k^{t-1})
6:   end for
7:   Hypergradient estimation via
     • AID: 1) set v_k^0 = v_{k-1}^N if k > 0, and v_0 otherwise; 2) solve v_k^N from ∇_y^2 g(x_k, y_k^T) v = ∇_y f(x_k, y_k^T) via N steps of CG starting from v_k^0; 3) compute the Jacobian-vector product ∇_x ∇_y g(x_k, y_k^T) v_k^N via automatic differentiation; 4) compute ∇̂Φ(x_k) = ∇_x f(x_k, y_k^T) − ∇_x ∇_y g(x_k, y_k^T) v_k^N
     • ITD: compute ∇̂Φ(x_k) = ∂f(x_k, y_k^T)/∂x_k via backpropagation w.r.t. x_k
8:   Update x_{k+1} = x_k − β ∇̂Φ(x_k)
9: end for
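To make Algorithm 1 concrete, the following is a minimal NumPy sketch of one outer iteration of AID-BiO. The derivative oracles (`grad_f_x`, `grad_f_y`, `grad_g_y`, and the Hessian- and Jacobian-vector products) are hypothetical stand-ins for quantities that automatic differentiation would supply; this is an illustrative sketch under the setting above, not the authors' implementation.

```python
import numpy as np

def aid_bio_step(x, y, v, grad_f_x, grad_f_y, grad_g_y,
                 hess_g_yy_vec, jac_g_xy_vec, alpha, beta, T, N):
    """One outer iteration of AID-BiO (sketch; no convergence guards).

    hess_g_yy_vec(x, y, v) returns (grad_y^2 g(x, y)) v.
    jac_g_xy_vec(x, y, v) returns (grad_x grad_y g(x, y)) v.
    """
    # Steps 3-6: T inner gradient steps on g(x, .), warm-started at the
    # previous inner-loop solution y.
    for _ in range(T):
        y = y - alpha * grad_g_y(x, y)

    # AID steps 1-2: solve (grad_y^2 g) v = grad_y f by N conjugate-gradient
    # iterations, warm-started at the previous v.
    b = grad_f_y(x, y)
    r = b - hess_g_yy_vec(x, y, v)
    p = r.copy()
    for _ in range(N):
        Ap = hess_g_yy_vec(x, y, p)
        step = (r @ r) / (p @ Ap)
        v = v + step * p
        r_new = r - step * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new

    # AID steps 3-4 and step 8: hypergradient estimate and outer update.
    hypergrad = grad_f_x(x, y) - jac_g_xy_vec(x, y, v)
    x = x - beta * hypergrad
    return x, y, v
```

The warm starts for y and v mirror the warm-start initialization credited above for the improved complexity, and the constant inner-loop count T matches the practical parameter selection that the analysis allows.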
The paper presents two algorithms, one for deterministic and one for stochastic bilevel optimization. The paper claims the methods achieve order-level lower computational complexity across several complexity measures and are easy to implement. A finite-time convergence proof is provided for the algorithms. Empirical results are presented for meta-learning and (in the appendix) hyperparameter optimization.
SP:2d804ce6cd9917277ac5c4d6c72cceeb14bf0641
Invertible Manifold Learning for Dimension Reduction
1 INTRODUCTION . In real-world scenarios, it is widely believed that the loss of data information is inevitable after dimension reduction (DR), though the goal of DR is to preserve as much information as possible in the low-dimensional space. In the case of linear DR, compressed sensing (Donoho, 2006) breaks this common sense under practical sparsity conditions on the given data. In the case of nonlinear dimension reduction (NLDR), however, this has not been clearly discussed, e.g., what is the structure within data, and how can these structures be maintained after NLDR? From the perspective of manifold learning, the manifold assumption is widely adopted, but classical manifold-based DR methods usually fail to yield good results in many practical cases. Therefore, what is the gap between theoretical and real-world applications of manifold-based DR? Here, we give the first detailed discussion of these two problems in the context of manifold learning. We argue that a good low-dimensional representation should preserve the topology and geometry of the input data, which requires the NLDR transformation to be homeomorphic. Thus, we propose an invertible NLDR process, called inv-ML, combining a sparse coordinate transformation and a local isometry constraint, which preserve topology and geometry respectively, to explain information-lossless NLDR in manifold learning theoretically. We instantiate inv-ML as a neural network called i-ML-Enc via a cascade of equidimensional layers and a linear transform layer. Extensive experiments are conducted to validate the invertible NLDR abilities of i-ML-Enc and to analyze the learned representations so as to reveal inherent difficulties of classical manifold learning. Topology preserving dimension reduction . To start, we first set out the theoretical definition of information-lossless DR on a manifold. A topological property is one that is invariant under a homeomorphism, and thus what we want to achieve is to construct a homeomorphism for dimension reduction, removing the redundant dimensions while preserving the invariant topology. To be more specific, $f: \mathcal{M}_0^d \to \mathbb{R}^m$ is a smooth mapping of a differential manifold into another, and if $f$ is a homeomorphism of $\mathcal{M}_0^d$ onto $\mathcal{M}_1^d = f(\mathcal{M}_0^d) \subset \mathbb{R}^m$, we call $f$ an embedding of $\mathcal{M}_0^d$ into $\mathbb{R}^m$. Assume that the data set $X = \{x_j \mid 1 \le j \le n\}$ is sampled from the compact manifold $\mathcal{M}_1^d \subset \mathbb{R}^m$, which we call the data manifold and which is homeomorphic to $\mathcal{M}_0^d$. Because the sample points are represented in the coordinates given by the inclusion mapping $i_1$, we can only regard them as points in the Euclidean space $\mathbb{R}^m$ without any prior knowledge, and learn to approximate the data manifold in the latent space $Z$. According to the Whitney Embedding Theorem (Seshadri & Verma, 2016), $\mathcal{M}_0^d$ can be embedded smoothly into $\mathbb{R}^{2d}$ by a homeomorphism $g$. Rather than finding $f^{-1}: \mathcal{M}_1^d \to \mathcal{M}_0^d$, our goal is to seek a smooth map $h: \mathcal{M}_1^d \to \mathbb{R}^s \subset \mathbb{R}^{2d}$, where $h = g \circ f^{-1}$ is a homeomorphism of $\mathcal{M}_1^d$ onto $\mathcal{M}_2^d = h(\mathcal{M}_1^d)$ and $d \le s \le 2d \ll m$, so that $\dim(h(X)) = s$, which achieves DR while preserving the topology. Owing to the homeomorphism $h$ that we seek as a DR mapping, the data manifold $\mathcal{M}_1^d$ is reconstructible via $\mathcal{M}_1^d = h^{-1} \circ h(\mathcal{M}_1^d)$, by which we mean that $h$ is a topology-preserving DR as well as an information-lossless DR. Geometry preserving dimension reduction . While the topology of the data manifold $\mathcal{M}_1^d$ can be preserved by the homeomorphism $h$ discussed above, it may distort the geometry.
To preserve the local geometry of the data manifold, the map should be isometric on the tangent space $T_p\mathcal{M}_1^d$ for every $p \in \mathcal{M}_1^d$, indicating that $d_{\mathcal{M}_1^d}(u, v) = d_{\mathcal{M}_2^d}(h(u), h(v)),\ \forall u, v \in T_p\mathcal{M}_1^d$. By Nash's Embedding Theorem (Nash, 1956), any smooth manifold of class $C^k$ with $k \ge 3$ and dimension $d$ can be embedded isometrically in the Euclidean space $\mathbb{R}^s$ with $s$ polynomial in $d$. Noise perturbation . In real-world scenarios, sample points do not lie strictly on the ideal manifold due to the limitations of sampling, e.g., non-uniform sampling noise. When the DR method is very robust to the noise, it is reasonable to ignore the effects of the noise and learn the representation $Z$ from the given data. Therefore, the intrinsic dimension of $X$ is only approximately $d$, so that the lowest isometric embedding dimension can be larger than $s$. 2 RELATED WORK . Manifold learning . Most classical linear or nonlinear DR methods aim to preserve the geometric properties of manifolds. Isomap-based methods (Tenenbaum et al., 2000) aim to preserve the global metric between every pair of sample points. For example, McQueen et al. (2016) can be regarded as such a method based on the push-forward Riemannian metric. From another aspect, LLE-based methods (Roweis & Saul, 2000) try to preserve local geometry after DR, whose derivatives like LTSA (Zhang & Zha, 2004), MLLE (Zhang & Wang, 2007), etc., have been widely used but usually fail in the high-dimensional case. Recently, based on local properties of manifolds, MLDL (Li et al., 2020) was proposed as a robust NLDR method implemented by a neural network, preserving the local geometry but abandoning the retention of topology. In contrast, our method takes the preservation of both geometry and topology into consideration, trying to maintain these properties of manifolds even in cases of excessive dimension reduction, when the target dimension $s'$ is smaller than $s$. Invertible model . Starting from the AutoEncoder (AE) (Hinton & Salakhutdinov, 2006), the fundamental neural network based model, which achieves DR and cuts information loss by minimizing a reconstruction loss, AE-based generative models like the VAE (Kingma & Welling, 2014) and manifold-based NLDR models like TopoAE (Moor et al., 2020) have emerged. These methods cannot avoid information loss after NLDR, and thus some invertible models consisting of a series of equidimensional layers have been proposed, some of which aim to generate samples by density estimation through layers (Dinh et al., 2015; Dinh et al., 2017; Behrmann et al., 2019), while others are established for other targets, e.g., validating the mutual information bottleneck (Jacobsen et al., 2018). Different from the methods mentioned above, our proposed i-ML-Enc is a neural network based encoder that performs NLDR while maintaining the structure of the raw data points under the manifold assumption via a series of equidimensional layers. Compressed sensing . The Johnson-Lindenstrauss Theorem (Johnson & Lindenstrauss, 1984) provides a lower bound on the target dimension for linear DR with pairwise distance loss. Given a small constant $\epsilon \in (0, 1)$ and $n$ samples $\{x_i\}_{i=1}^n$ in $\mathbb{R}^m$, a linear projection $W: \mathbb{R}^m \to \mathbb{R}^s$ with $s > O(\log m / \epsilon^2)$ can be found which embeds the samples into an $s$-dimensional space with $(1 + \epsilon)$ distortion of any sample pair $(x_i, x_j)$.
It adopts the prior assumption that the given samples in high-dimensional space have a relevant low-dimensional structure constraint which can be maintained by keeping the pairwise distances. Further, compressed sensing (CS) provides strict sparsity conditions under which linear DR recovers the compressed signal with high probability, and it usually cooperates with sparse dictionary learning (Hawe et al., 2013). The core of CS is the Restricted Isometry Property (RIP) condition, which reads
$$(1 - \epsilon)\,\|x_1 - x_2\|_2 \le \|W(x_1 - x_2)\|_2 \le (1 + \epsilon)\,\|x_1 - x_2\|_2, \qquad (1)$$
where $\epsilon \in (0, 1)$ is a rather small constant and $W$ is a linear measurement of the signals $x_1$ and $x_2$. Given a signal $x \in \mathbb{R}^m$ with an $s$-sparse representation $\alpha = \Phi x$ on an $m$-dimensional orthogonal basis $\Phi$, $\alpha$ can be recovered from the linear measurement $y = W\alpha$ with high probability by the sparse optimization $\arg\min_{\tilde{\alpha}} \|\tilde{\alpha}\|_0\ \text{s.t.}\ y = W\tilde{\alpha}$, if $W_{m\times s}$ satisfies the RIP condition. The linear measurement can be rewritten as $y = \Psi\Phi\alpha = \Psi x$, where $\Psi$ is a low-dimensional orthogonal basis and $\Phi$ can be found by nonlinear dictionary learning. Some reconstructible CS-based NLDR methods (Wei et al., 2015; Wei et al., 2019) have been proposed, which preserve local geometry on AE-based networks, but usually with unsatisfying embedding quality. 3 PROPOSED METHOD . We will specifically discuss the proposed two-stage invertible NLDR process inv-ML: the first stage in Sec 3.1, in which an $s$-dimensional representation is learned by a homeomorphic transformation while keeping all topological and geometric structure of the data manifold; then we give the applicable conditions in real-world scenarios as the second stage in Sec 3.2, in which the dimension is further compressed to $s'$. We instantiate the proposed inv-ML as a neural network, i-ML-Enc, in Sec 3.3. 3.1 TOPOLOGY AND GEOMETRY PRESERVATION . Canonical embedding for homeomorphism . To seek the smooth homeomorphism $h$, we turn to the theorem of the local canonical form of an immersion (Mei, 2013). Let $f: \mathcal{M} \to \mathcal{N}$ be an immersion; for any $p \in \mathcal{M}$, there exist local coordinate systems $(U, \varphi)$ around $p$ and $(V, \psi)$ around $f(p)$ such that $\psi \circ f \circ \varphi^{-1}: \varphi(U) \to \psi(V)$ is a canonical embedding, which reads
$$\psi \circ f \circ \varphi^{-1}(x_1, x_2, \cdots, x_d) = (x_1, x_2, \cdots, x_d, 0, 0, \cdots, 0). \qquad (2)$$
In our case, let $\mathcal{M} = \mathcal{M}_2^d$ and $\mathcal{N} = \mathcal{M}_1^d$; then any point $z = (z_1, z_2, \cdots, z_s) \in \mathcal{M}_2^d \subset \mathbb{R}^s$ can be mapped to a point in $\mathbb{R}^m$ by the canonical embedding
$$\psi \circ h^{-1}(z_1, z_2, \cdots, z_s) = (z_1, z_2, \cdots, z_s, 0, 0, \cdots, 0). \qquad (3)$$
Since the point $z$ is regarded as a point in $\mathbb{R}^s$, $\varphi = I$ is an identity mapping, and since $h = g \circ f^{-1}$ is a homeomorphism, $h^{-1}$ is continuous. Eq. (3) can be written as
$$(z_1, z_2, \cdots, z_s) = h \circ \psi^{-1}(z_1, z_2, \cdots, z_s, 0, 0, \cdots, 0) = h(x_1, x_2, \cdots, x_m). \qquad (4)$$
Therefore, to reduce $\dim(X) = m$ to $s$, we can decompose $h$ into $\psi$ and $h \circ \psi^{-1}$: first find a homeomorphic coordinate transformation $\psi$ to map $x = (x_1, x_2, \cdots, x_m)$ into $\psi(x) = (z_1, z_2, \cdots, z_s, 0, 0, \cdots, 0)$, which is called a sparse coordinate transformation; then $h \circ \psi^{-1}$ can be easily obtained by Eq. (3). We denote $h \circ \psi^{-1}$ by $h_0$ and call it a sparse compression. The theorem holds for any manifold, while in our case we aim to find the mapping of $X \subset \mathbb{R}^m$ into $\mathbb{R}^s$, so the local coordinate systems can be extended to the whole space $\mathbb{R}^m$. Local isometry constraint .
The prior local isometry constraint is applied under the manifold assumption, and aims to preserve distances (or some other metrics) locally, so that $d_{\mathcal{M}_1^d}(u, v) = d_{\mathcal{M}_2^d}(h(u), h(v)),\ \forall u, v \in T_p\mathcal{M}_1^d$.
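As an illustration, here is a minimal NumPy sketch of how a local isometry constraint of this form could be imposed as a training penalty. The k-nearest-neighbor neighborhood construction and the squared-error form are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def local_isometry_loss(X, Z, k=10):
    """Penalty encouraging the encoder to be a local isometry: for each
    point, pairwise distances to its k nearest input-space neighbors
    should be preserved in the latent space (a proxy for isometry on
    the tangent space)."""
    n = X.shape[0]
    loss = 0.0
    for i in range(n):
        d_x = np.linalg.norm(X - X[i], axis=1)        # input-space distances
        nbrs = np.argsort(d_x)[1:k + 1]               # k nearest neighbors (skip self)
        d_z = np.linalg.norm(Z[nbrs] - Z[i], axis=1)  # latent-space distances
        loss += np.mean((d_x[nbrs] - d_z) ** 2)
    return loss / n
```

Minimizing such a term alongside a reconstruction objective would push the learned map toward distance preservation on each local neighborhood, which is the practical surrogate for the tangent-space condition above.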
In this paper, the authors propose a novel manifold learning method that adds a locally isometric smoothness constraint, which preserves topological and geometric properties of the data manifold. Empirical results demonstrate the efficacy of their approach. The authors also show that the reliability of the tangent space approximated by a point's local neighborhood is essential to the success of manifold learning approaches.
SP:2c5537aa2c173582e193c903eb85dd63aabc7366
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
1 INTRODUCTION . Self-attention-based architectures, in particular Transformers (Vaswani et al., 2017), have become the model of choice in natural language processing (NLP). The dominant approach is to pre-train on a large text corpus and then fine-tune on a smaller task-specific dataset (Devlin et al., 2019). Thanks to Transformers' computational efficiency and scalability, it has become possible to train models of unprecedented size, with over 100B parameters (Brown et al., 2020; Lepikhin et al., 2020). With the models and datasets growing, there is still no sign of saturating performance. In computer vision, however, convolutional architectures remain dominant (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016). Inspired by NLP successes, multiple works try combining CNN-like architectures with self-attention (Wang et al., 2018; Carion et al., 2020), some replacing the convolutions entirely (Ramachandran et al., 2019; Wang et al., 2020a). The latter models, while theoretically efficient, have not yet been scaled effectively on modern hardware accelerators due to the use of specialized attention patterns. Therefore, in large-scale image recognition, classic ResNet-like architectures are still state of the art (Mahajan et al., 2018; Xie et al., 2020; Kolesnikov et al., 2020). Inspired by the Transformer scaling successes in NLP, we experiment with applying a standard Transformer directly to images, with the fewest possible modifications. To do so, we split an image into patches and provide the sequence of linear embeddings of these patches as input to a Transformer. Image patches are treated the same way as tokens (words) in an NLP application. We train the model on image classification in supervised fashion. (Fine-tuning code and pre-trained models are available at https://github.com/google-research/vision_transformer.) When trained on mid-sized datasets such as ImageNet without strong regularization, these models yield modest accuracies, a few percentage points below ResNets of comparable size. This seemingly discouraging outcome may be expected: Transformers lack some of the inductive biases inherent to CNNs, such as translation equivariance and locality, and therefore do not generalize well when trained on insufficient amounts of data. However, the picture changes if the models are trained on larger datasets (14M-300M images). We find that large scale training trumps inductive bias. Our Vision Transformer (ViT) attains excellent results when pre-trained at sufficient scale and transferred to tasks with fewer datapoints. When pre-trained on the public ImageNet-21k dataset or the in-house JFT-300M dataset, ViT approaches or beats state of the art on multiple image recognition benchmarks. In particular, the best model reaches an accuracy of 88.55% on ImageNet, 90.72% on ImageNet-ReaL, 94.55% on CIFAR-100, and 77.63% on the VTAB suite of 19 tasks. 2 RELATED WORK . Transformers were proposed by Vaswani et al. (2017) for machine translation, and have since become the state of the art method in many NLP tasks. Large Transformer-based models are often pre-trained on large corpora and then fine-tuned for the task at hand: BERT (Devlin et al., 2019) uses a denoising self-supervised pre-training task, while the GPT line of work uses language modeling as its pre-training task (Radford et al., 2018; 2019; Brown et al., 2020).
Naive application of self-attention to images would require that each pixel attends to every other pixel. With quadratic cost in the number of pixels, this does not scale to realistic input sizes. Thus, to apply Transformers in the context of image processing, several approximations have been tried in the past. Parmar et al. (2018) applied self-attention only in local neighborhoods for each query pixel instead of globally. Such local multi-head dot-product self-attention blocks can completely replace convolutions (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020). In a different line of work, Sparse Transformers (Child et al., 2019) employ scalable approximations to global self-attention in order to be applicable to images. An alternative way to scale attention is to apply it in blocks of varying sizes (Weissenborn et al., 2019), in the extreme case only along individual axes (Ho et al., 2019; Wang et al., 2020a). Many of these specialized attention architectures demonstrate promising results on computer vision tasks, but require complex engineering to be implemented efficiently on hardware accelerators. Most related to ours is the model of Cordonnier et al. (2020), which extracts patches of size 2 × 2 from the input image and applies full self-attention on top. This model is very similar to ViT, but our work goes further to demonstrate that large scale pre-training makes vanilla transformers competitive with (or even better than) state-of-the-art CNNs. Moreover, Cordonnier et al. (2020) use a small patch size of 2 × 2 pixels, which makes the model applicable only to small-resolution images, while we handle medium-resolution images as well. There has also been a lot of interest in combining convolutional neural networks (CNNs) with forms of self-attention, e.g., by augmenting feature maps for image classification (Bello et al., 2019) or by further processing the output of a CNN using self-attention, e.g., for object detection (Hu et al., 2018; Carion et al., 2020), video processing (Wang et al., 2018; Sun et al., 2019), image classification (Wu et al., 2020), unsupervised object discovery (Locatello et al., 2020), or unified text-vision tasks (Chen et al., 2020c; Lu et al., 2019; Li et al., 2019). Another recent related model is image GPT (iGPT) (Chen et al., 2020a), which applies Transformers to image pixels after reducing image resolution and color space. The model is trained in an unsupervised fashion as a generative model, and the resulting representation can then be fine-tuned or probed linearly for classification performance, achieving a maximal accuracy of 72% on ImageNet. Our work adds to the increasing collection of papers that explore image recognition at larger scales than the standard ImageNet dataset. The use of additional data sources makes it possible to achieve state-of-the-art results on standard benchmarks (Mahajan et al., 2018; Touvron et al., 2019; Xie et al., 2020). Moreover, Sun et al. (2017) study how CNN performance scales with dataset size, and Kolesnikov et al. (2020); Djolonga et al. (2020) perform an empirical exploration of CNN transfer learning from large scale datasets such as ImageNet-21k and JFT-300M. We focus on these two latter datasets as well, but train Transformers instead of the ResNet-based models used in prior works. 3 METHOD . In model design we follow the original Transformer (Vaswani et al., 2017) as closely as possible.
An advantage of this intentionally simple setup is that scalable NLP Transformer architectures – and their efficient implementations – can be used almost out of the box. 3.1 VISION TRANSFORMER (VIT) . An overview of the model is depicted in Figure 1. The standard Transformer receives as input a 1D sequence of token embeddings. To handle 2D images, we reshape the image $x \in \mathbb{R}^{H \times W \times C}$ into a sequence of flattened 2D patches $x_p \in \mathbb{R}^{N \times (P^2 \cdot C)}$, where $(H, W)$ is the resolution of the original image, $C$ is the number of channels, $(P, P)$ is the resolution of each image patch, and $N = HW/P^2$ is the resulting number of patches, which also serves as the effective input sequence length for the Transformer. The Transformer uses a constant latent vector size $D$ through all of its layers, so we flatten the patches and map them to $D$ dimensions with a trainable linear projection (Eq. 1). We refer to the output of this projection as the patch embeddings. Similar to BERT's [class] token, we prepend a learnable embedding to the sequence of embedded patches ($z_0^0 = x_{\text{class}}$), whose state at the output of the Transformer encoder ($z_L^0$) serves as the image representation $y$ (Eq. 4). Both during pre-training and fine-tuning, a classification head is attached to $z_L^0$. The classification head is implemented by an MLP with one hidden layer at pre-training time and by a single linear layer at fine-tuning time. Position embeddings are added to the patch embeddings to retain positional information. We use standard learnable 1D position embeddings, since we have not observed significant performance gains from using more advanced 2D-aware position embeddings (Appendix D.3). The resulting sequence of embedding vectors serves as input to the encoder. The Transformer encoder (Vaswani et al., 2017) consists of alternating layers of multi-headed self-attention (MSA, see Appendix A) and MLP blocks (Eqs. 2, 3). Layernorm (LN) is applied before every block, and residual connections after every block (Wang et al., 2019; Baevski & Auli, 2019). The MLP contains two layers with a GELU non-linearity.
$$z_0 = [x_{\text{class}};\, x_p^1 E;\, x_p^2 E;\, \cdots;\, x_p^N E] + E_{pos}, \quad E \in \mathbb{R}^{(P^2 \cdot C) \times D},\ E_{pos} \in \mathbb{R}^{(N+1) \times D} \qquad (1)$$
$$z'_\ell = \text{MSA}(\text{LN}(z_{\ell-1})) + z_{\ell-1}, \quad \ell = 1 \ldots L \qquad (2)$$
$$z_\ell = \text{MLP}(\text{LN}(z'_\ell)) + z'_\ell, \quad \ell = 1 \ldots L \qquad (3)$$
$$y = \text{LN}(z_L^0) \qquad (4)$$
Inductive bias . We note that the Vision Transformer has much less image-specific inductive bias than CNNs. In CNNs, locality, two-dimensional neighborhood structure, and translation equivariance are baked into each layer throughout the whole model. In ViT, only the MLP layers are local and translationally equivariant, while the self-attention layers are global. The two-dimensional neighborhood structure is used very sparingly: at the beginning of the model by cutting the image into patches, and at fine-tuning time for adjusting the position embeddings for images of different resolution (as described below). Other than that, the position embeddings at initialization time carry no information about the 2D positions of the patches, and all spatial relations between the patches have to be learned from scratch. Hybrid Architecture . As an alternative to raw image patches, the input sequence can be formed from feature maps of a CNN (LeCun et al., 1989). In this hybrid model, the patch embedding projection $E$ (Eq. 1) is applied to patches extracted from a CNN feature map.
As a special case , the patches can have spatial size 1x1 , which means that the input sequence is obtained by simply flattening the spatial dimensions of the feature map and projecting to the Transformer dimension . The classification input embedding and position embeddings are added as described above .
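To make Eq. (1) concrete, here is a minimal PyTorch sketch of the ViT input pipeline: patchify, linearly project, prepend the class token, and add position embeddings. The default sizes and the unfold-based patch extraction are illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn

class ViTEmbed(nn.Module):
    """Patchify -> linear projection E -> prepend [class] token -> add E_pos."""

    def __init__(self, image_size=224, patch_size=16, channels=3, dim=768):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2                     # N = HW / P^2
        self.p = patch_size
        self.proj = nn.Linear(patch_size * patch_size * channels, dim)  # E
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))                 # x_class
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))     # E_pos

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        # cut the image into non-overlapping P x P patches and flatten each
        x = x.unfold(2, self.p, self.p).unfold(3, self.p, self.p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * self.p * self.p)
        x = self.proj(x)                       # patch embeddings, (B, N, D)
        cls = self.cls.expand(B, -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos   # z_0 of Eq. (1)
```

The resulting sequence $z_0$ would then pass through $L$ pre-norm Transformer blocks as in Eqs. (2)-(3), with the final class-token state $z_L^0$ fed to the classification head.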
This paper introduces a Transformer-based image recognition model that is fully built on Transformer layers (multi-head self-attention + point-wise MLP) without any standard convolution layers. Basically, it splits an image into patches and takes as input the set of linear embeddings of the patches and their positions. For classification, a learnable class token is added to the input and a classification head (MLP) is attached to the class token output of the final Transformer layer. Extensive experiments on transfer learning show that when pretrained on a sufficiently large dataset (100~300M), the proposed Vision Transformer can outperform state-of-the-art convolutional networks with lower training cost as well as fewer parameters.
SP:26c214e61671b012baa8824a39772738a861e44b
Learning representations from temporally smooth data
1 INTRODUCTION . Events in the world are correlated in time: the information that we receive at one moment is usually similar to the information that we receive at the next. For example, when having a conversation with someone, we see multiple samples of the same face from different angles over the course of several seconds. However, when we train neural networks for categorization or reconstruction tasks, we commonly ignore the temporal ordering of samples and use randomly ordered data. Given that humans can learn robustly and efficiently when learning incrementally from sequentially correlated data, it is important to examine what kinds of architectures and inductive biases may support such learning (Hadsell et al., 2020). Therefore, we asked: how does the sequential correlation structure in the data affect learning in neural networks that are performing categorization or reconstruction of one input at a time? Moreover, we asked: which mechanisms can a network employ to exploit the temporal autocorrelation ("smoothness") of data, without needing to perform backpropagation through time (BPTT) (Sutskever, 2013)? We investigated this question in three stages. In the first stage, we examined the effects of temporally smooth training data on feedforward neural networks performing category learning. Here we confirmed that autocorrelation in training data slows learning in feedforward nets. In the second stage, we investigated conditions under which these classifier networks might take advantage of smooth data. We hypothesized that human brains may possess mechanisms (or inductive biases) that maximize the benefits of learning from temporally smooth data. We therefore tested two network mechanisms inspired by properties of cortical circuits: leaky memory (associated with autocorrelated brain dynamics), and memory gating (associated with rapid changes of brain states at event boundaries). We compared the performance of these mechanisms relative to memoryless networks and also against a long short-term memory (LSTM) architecture trained using BPTT. Finally, having demonstrated that leaky memory can speed learning from temporally smooth data, we studied the internal representations learned by these neural networks. In particular, we showed that networks with multi-scale leaky memory and resetting could learn internal representations that separate fast-changing and slow-changing data sources. 2 RELATED WORK . Effects of sampling strategies on incremental learning . The ordering of training examples affects the speed and quality of learning. For example, learning can be sped up by presenting "easier" examples earlier, and then gradually increasing difficulty (Elman, 1993; Bengio et al., 2009; Kumar et al., 2010; Lee & Grauman, 2011). Similarly, learning can be more efficient if training data is organized so that the magnitude of weight updates increases over training samples (Gao & Jojic, 2016). Here, we do not manipulate the order based on item difficulty or proximity to category boundaries; we only explore the effects of ordering similar items nearby in time. We aim to identify mechanisms that can aid efficient learning across many levels of temporal autocorrelation, adapting to what is present in the data. This ability to adapt to the properties of the data is important in real-world settings, where a learner may lack control over the training order, or where prior knowledge of item difficulty is unavailable.
Potential costs and benefits of training with smooth data . In machine learning research, it is often assumed that the training samples are independent and identically distributed (iid) (Dundar et al., 2007). When training with random sampling, one can approximately satisfy iid assumptions because shuffling samples eliminates any sequential correlations. However, in many real-world situations, the iid assumption is violated and consecutive training samples are strongly correlated. Temporally correlated data may slow learning in feedforward neural networks. If consecutive items are similar, then the gradients induced by them will be related, especially early in training. If we consider the average of the gradients induced by the whole training set as the "ideal" gradient, then subsets of similar samples provide a higher-variance (i.e., noisier) estimate of this ideal. Moreover, smoothness in data may slow learning due to catastrophic forgetting (French, 1999). Suppose that, for smoother training, we sample multiple times from a category before moving to another category. This means that the next presentation of each category will be, on average, farther from its previous appearance. This increased distance could lead to greater forgetting for that category, thus slowing learning overall. On the other hand, smoother training data might also benefit learning. For example, there may be some category-diagnostic features that will not reliably be extracted by a learning algorithm unless multiple weight updates occur for that feature nearby in time; smoother training data would be more liable to present such features nearby in time. 3 RESEARCH QUESTIONS AND HYPOTHESES . 1. How does training with temporally smooth data affect learning in feedforward networks? In light of the work reviewed above, we hypothesized that temporally smooth data would slow learning in feedforward nets. 2. How can neural networks benefit from temporally smooth data, in terms of either learning efficiency or learning more meaningfully structured representations? We hypothesized that a combination of two brain-inspired mechanisms (leaky memory and memory-resetting) could enable networks to learn more efficiently from temporally smooth data, even without BPTT. 4 EFFECTS OF TEMPORAL SMOOTHNESS IN TRAINING DATA ON LEARNING IN FEEDFORWARD NEURAL NETWORKS . We first explored how smoothness of data affects the speed and accuracy of category learning (classification) in feedforward networks. See Appendix A.1 for similar results with unsupervised learning. 4.1 METHODS . 4.1.1 MANIPULATING SMOOTHNESS IN TRAINING DATA . We manipulated smoothness in training data by varying the number of consecutive samples drawn from the same category. We began each training session by generating a random "category order", which was a permutation of the numbers from 1 to N (e.g., the ordering in Figure 1.B is 2-1-3). The same category order was used for all smoothness conditions in that training session. To sample with minimum smoothness, we sampled exactly one exemplar from each category before sampling from the next category in the category order (1 repeat) (Figure 1.B). This condition is called "minimum smoothness" because all consecutive items were from different categories, and a category was not sampled again until all other categories had been sampled.
We increased smoothness by increasing the number of consecutive samples drawn from each category (3 repeats and 5 repeats in Figure 1.B). Finally, we also used the standard random sampling method, in which items were sampled at random, without replacement, from the training set (Figure 1.B). The training set was identical across all conditions, as was the order in which samples were drawn from within a category (Figure 1.B). 4.1.2 FEEDFORWARD NEURAL NETWORK . Dataset . We tested MNIST, Fashion-MNIST, and synthetic datasets containing low category overlap (LeCun et al., 2010; Xiao et al., 2017). An example synthetic dataset is shown in Appendix A.2. For creating the synthetic datasets, we used NumPy (Harris et al., 2020). For creating and testing the models, we used PyTorch (Paszke et al., 2019). Learning rule and objective function . We used backpropagation with both mean squared error (MSE) and cross-entropy (CE) loss functions. The results reported here use MSE, primarily for ease of comparison with later reconstruction error measures in this manuscript. However, the same pattern was observed using CE loss, as shown in Appendix A.3. Also, it has been shown that MSE loss provides performance comparable to commonly utilized classification models with a CE loss function (Illing et al., 2019). To test incremental learning, we employed stochastic gradient descent (SGD), updating weights for each training sample. Optimization, initialization, and activation function . We tested the model both with and without RMSprop optimization, along with Xavier initialization (Tieleman & Hinton, 2012; Glorot & Bengio, 2010). We applied ReLU to hidden units and Softmax or Sigmoid to the output units. Hyperparameters . For MNIST and Fashion-MNIST, we used a 3-layer fully connected network with (784, 392, 10) dimensions and a learning rate of 0.01. The learning rate was not tuned for a specific condition. We used the same learning rate across all conditions; only smoothness varied across conditions. To compensate for the potential advantage of a specific set of hyperparameters for a specific condition, we ran 5 runs, each with a different random weight initialization, and reported the averaged results. For hyperparameters of the synthetic dataset, see Appendix A.2. When RMSprop was implemented, β1 and β2 were set to 0.9 and 0.99, respectively (Ruder, 2016). (Photos in this section are taken from the FRIENDS TV series, Warner Brothers (Kauffman et al., 1994).) 4.2 RESULTS . Smooth training data slowed incremental learning (Figure 2.A). Moreover, minimum smoothness yielded more efficient learning than random sampling (Figure 2.A). These observations generalized across all tested datasets and across MSE and CE loss, with and without RMSprop optimization. 4.3 DISCUSSION . The superiority of minimum smoothness over other conditions suggests that any level of smoothness slows incremental learning, even the smoothness that can occur by chance in random sampling (Figure 2.A). Therefore, given a fixed time budget for training, a sampling strategy that minimizes smoothness can reach a higher performance than random sampling. Sampling with minimum smoothness may be advantageous because it reduces the representation overlap across consecutive training items. Catastrophic forgetting can be reduced by decreasing the overlap between learned representations, for example, via orthogonalization (French, 1999).
Though we did not explicitly seek to reduce interference by sampling with minimum smoothness, this method likely does reduce the representational overlap of nearby items. In addition, training with minimum smoothness may improve learning by maintaining a near-uniform distribution of sampled categories. Training with "low-discrepancy" sequences, such as those with uniformly distributed data, avoids classification bias and enhances learning (Iwata & Ishii, 2002; Mishra & Rusch, 2020). 5 EXPLOITING TEMPORAL SMOOTHNESS IN TRAINING DATA FOR LEARNING IN NEURAL NETWORKS . Although temporally-correlated data slows learning in feedforward nets, it appears that humans are able to rapidly extract meaningful representations in such settings, even while learning incrementally. How might our brains maximize the benefits of temporally smooth training sets? Two properties of cortical population dynamics appear especially relevant to incremental learning: (i) all cortical dynamics exhibit autocorrelation on the scale of milliseconds to seconds, so that correlation in consecutive internal states is unavoidable (Murray et al., 2014; Honey et al., 2012; Bright et al., 2020); (ii) neural circuits appear to shift state suddenly at event boundaries, and this appears to be associated with "resetting" of context representations (DuBrow et al., 2017; Chien & Honey, 2020; Baldassano et al., 2018). We hypothesized that these two neural properties represent an inductive bias in cortical learning. In particular, we hypothesized that (i) data sampled from a slowly-changing environment may contain important features that are stable over time, which can be better extracted by mixing the current input with a memory of recent input; and (ii) the interference of irrelevant prior information can be reduced by "resetting" memory at boundaries between events. Therefore, we examined how neural network learning was affected by two brain-inspired mechanisms: (i) leaky memory in internal representations; and (ii) a memory gating mechanism that resets internal representations at transitions between categories.
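A minimal NumPy sketch of these two mechanisms is given below: a leaky-memory state that mixes the current input with a decaying trace of recent inputs, and a gate that resets that trace at category (event) boundaries. The mixing coefficient and the availability of exact boundary signals are illustrative assumptions.

```python
import numpy as np

def leaky_memory_states(inputs, boundaries, lam=0.5):
    """inputs:     (T, d) sequence of input vectors.
    boundaries: (T,) booleans, True where a new event/category begins.

    Returns the leaky-memory representation
        h_t = lam * h_{t-1} + (1 - lam) * x_t,
    with the memory reset at event boundaries. Note that h_t is just a
    feature computed from the recent past, so no BPTT is required.
    """
    T, d = inputs.shape
    h = np.zeros((T, d))
    trace = np.zeros(d)
    for t in range(T):
        if boundaries[t]:
            trace = np.zeros(d)          # memory gate: reset at event boundary
        trace = lam * trace + (1.0 - lam) * inputs[t]
        h[t] = trace
    return h
```

Feeding such leaky-memory states, rather than raw inputs, to an otherwise memoryless feedforward network is one simple way to let the network exploit temporal smoothness without backpropagation through time.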
Temporal smoothness is a recurring feature of real-world data that typically goes unaccounted for when training neural networks. Much of the random sampling in training neural networks is done to remove the temporal correlations originally present when the data is collected. This work aims to propose a method to train on this 'less processed' form of data.
SP:87507439ef121d5d243502d2cb45eafec175f2bc
AdaDGS: An adaptive black-box optimization method with a nonlocal directional Gaussian smoothing gradient
1 INTRODUCTION . We consider the problem of black-box optimization, where we search for the optima of a loss function $F: \mathbb{R}^d \to \mathbb{R}$ given access only to its function queries. This type of optimization finds applications in many machine learning areas where the loss function's gradient is inaccessible or not useful, for example, in optimizing neural network architectures (Real et al., 2017), reinforcement learning (Salimans et al., 2017), design of adversarial attacks (Chen et al., 2017), and searching the latent space of a generative model (Sinay et al., 2020). The local gradient, i.e., $\nabla F(x)$, is the most commonly used quantity to guide optimization. When $\nabla F(x)$ is inaccessible, we usually reformulate $\nabla F(x)$ as a functional of $F(x)$. One class of methods for this reformulation is Gaussian smoothing (GS) (Salimans et al., 2017; Liu et al., 2017; Mania et al., 2018). GS first smooths the loss landscape with a d-dimensional Gaussian convolution and represents $\nabla F(x)$ by the gradient of the smoothed function. Monte Carlo (MC) sampling is used to estimate the Gaussian convolution. It is known that the local gradient $\nabla F(x)$ points in the direction of the steepest slope in an infinitesimal neighborhood around the current state $x$. An optimizer guided by the local gradient is often trapped in local optima when the loss landscape is non-convex or multimodal. Despite various improvements (Maggiar et al., 2018; Choromanski et al., 2018; 2019; Sener & Koltun, 2020; Maheswaranathan et al., 2019; Meier et al., 2019), GS did not address the challenge of applying the local gradient to global optimization, especially in high-dimensional spaces. The nonlocal Directional Gaussian Smoothing (DGS) gradient, originally developed in (Zhang et al., 2020), shows strong potential to alleviate this challenge. The key idea of the DGS gradient is to conduct 1D nonlocal explorations along d orthogonal directions in $\mathbb{R}^d$, each of which defines a nonlocal directional derivative as a 1D integral. Then, the d directional derivatives are assembled to form the DGS gradient. Compared with the traditional GS approach, the DGS gradient can use a large smoothing radius to achieve long-range exploration along the orthogonal directions. This enables the DGS gradient to provide better search directions than the local gradient, making it particularly suitable for optimizing multi-modal functions. However, the optimal performance of the DGS gradient may rely on fine-tuning of two important hyper-parameters, i.e., the smoothing radius and the learning rate, which limits its applicability in practice. In this work, we propose AdaDGS, an adaptive optimization method based on the DGS gradient. Instead of designing a schedule for updating the learning rate and the smoothing radius as in (Zhang et al., 2020), we learn their update rules automatically from a backtracking line search (Nocedal & Wright, 2006). Our algorithm is based on a simple observation: while the DGS gradient generally points in a good search direction, the best candidate solution along that direction may not lie in a nearby neighborhood. More importantly, relying on a single candidate in the search direction based on a prescribed learning rate is simply too susceptible to highly fluctuating landscapes. Therefore, we allow the optimizer to perform a more thorough search along the DGS gradient and let the line search determine the step size for the best improvement possible, as sketched below.
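A minimal sketch of such a full backtracking line search follows, under the assumption of a geometric grid of trial steps starting from an initial step t0; the specific defaults are illustrative, not the paper's tuned values.

```python
import numpy as np

def full_line_search(F, x, direction, t0=1.0, shrink=0.5, n_trials=10):
    """Evaluate F at x - t * direction over a geometric grid of step sizes
    and keep the best candidate; no Armijo/Wolfe early-termination test."""
    best_t, best_val = 0.0, F(x)
    t = t0
    for _ in range(n_trials):
        val = F(x - t * direction)
        if val < best_val:
            best_t, best_val = t, val
        t *= shrink
    return best_t, best_val, x - best_t * direction

# Example: quadratic loss in 2D, searching along its gradient direction.
f = lambda z: float(z @ z)
t, val, z_new = full_line_search(f, np.array([3.0, -2.0]), np.array([6.0, -4.0]))
```

The n_trials function queries per iteration are the "small extra cost" relative to the d-direction gradient estimation, and the selected step best_t is what the adaptive smoothing-radius update described next is keyed to.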
Our experiments show that the introduction of the line search into the DGS setting requires a small but worthwhile extra number of function queries per iteration. After each line search, we update the smoothing radius according to the learned step size, because this quantity now represents an estimate of the distance to an important mode of the loss function, which we retain in the smoothing process. The performance of AdaDGS and its comparison to other methods are demonstrated herein through three medium- and high-dimensional test problems: a high-dimensional benchmark test suite, an airfoil design problem, and a level generation problem for Super Mario Bros. Related works . The literature on black-box optimization is extensive. We only review methods closely related to this work (see (Rios & Sahinidis, 2009; Larson et al., 2019) for overviews). Random search . These methods randomly generate the search direction and either estimate the directional derivative using the GS formula or perform a direct search for the next candidates. Examples are two-point approaches (Flaxman et al., 2005; Nesterov & Spokoiny, 2017; Duchi et al., 2015; Bubeck & Cesa-Bianchi, 2012), three-point approaches (Bergou et al., 2019), coordinate-descent algorithms (Jamieson et al., 2012), and binary search with adaptive radius (Golovin et al., 2020). Zeroth order methods based on local gradient surrogates . This family mimics first-order methods but approximates the gradient via function queries (Liu et al., 2017; Chen et al., 2019; Balasubramanian & Ghadimi, 2018). An exemplary type of these methods is the particular class of Evolution Strategies (ES) based on the traditional GS, first developed by (Salimans et al., 2017). MC is overwhelmingly used for gradient approximation, and strategies for enhancing MC estimators are an active area of research, see, e.g., (Maggiar et al., 2018; Rowland et al., 2018; Maheswaranathan et al., 2019; Meier et al., 2019; Sener & Koltun, 2020). Nevertheless, these efforts focus only on the local regime, rather than the nonlocal regime considered in this work. Orthogonal exploration . This has been investigated in black-box optimization; e.g., finite difference explores orthogonal directions. (Choromanski et al., 2018) introduced orthogonal MC sampling into GS for approximating the local gradient; (Zhang et al., 2020) introduced orthogonal exploration and the Gauss-Hermite quadrature to define and approximate a nonlocal gradient. Adaptive methods . Another adaptive method based on the DGS gradient can be found in (Dereventsov et al., 2020). Our work is dramatically different in that our update rule for the learning rate and smoothing radius is drawn from a line search instead of from Lipschitz constant estimation. The long-range line search can better exploit the DGS direction and thus significantly reduce the number of function evaluations and iterations. Line search is a classical method for selecting the learning rate (Nocedal & Wright, 2006) and has also been used in the adaptation of some nonlocal search techniques, see, e.g., (Hansen, 2008). In this work, we apply a backtracking line search along the DGS direction. We do not employ popular termination conditions such as the Armijo (Armijo, 1966) and Wolfe (Wolfe, 1969) conditions and always conduct the full line search, as this requires only a small extra cost compared to high-dimensional searching. 2 THE DIRECTIONAL GAUSSIAN SMOOTHING (DGS) GRADIENT .
We are concerned with solving the optimization problem $\min_{x\in\mathbb{R}^d} F(x)$, where $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$ consists of $d$ parameters, and $F: \mathbb{R}^d \to \mathbb{R}$ is a $d$-dimensional loss function. The traditional GS method defines the smoothed loss function as $F_\sigma(x) = \mathbb{E}_{u\sim\mathcal{N}(0, I_d)}[F(x + \sigma u)]$, where $\mathcal{N}(0, I_d)$ is the $d$-dimensional standard Gaussian distribution, and $\sigma > 0$ is the smoothing radius. When the local gradient $\nabla F(x)$ is unavailable, the traditional GS uses $\nabla F_\sigma(x) = \frac{1}{\sigma}\mathbb{E}_{u\sim\mathcal{N}(0, I_d)}[F(x + \sigma u)\, u]$ (Flaxman et al., 2005) to approximate $\nabla F$ by exploiting $\lim_{\sigma\to 0}\nabla F_\sigma(x) = \nabla F(x)$ (i.e., setting $\sigma$ small). Hence, the traditional GS is unsuitable for defining a nonlocal gradient, where a large smoothing radius $\sigma$ is needed. In (Zhang et al., 2020), the DGS gradient was proposed to circumvent this hurdle. The key idea was to apply the 1D Gaussian smoothing along $d$ orthogonal directions, so that only 1D numerical integration is needed. In particular, define a 1D cross section of $F(x)$ as $G(y\,|\,x, \xi) = F(x + y\,\xi)$, $y \in \mathbb{R}$, where $x$ is the current state of $F$ and $\xi$ is a unit vector in $\mathbb{R}^d$. Then, the Gaussian smoothing of $F(x)$ along $\xi$ is represented as $G_\sigma(y\,|\,x,\xi) := \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} G(y + \sigma v\,|\,x,\xi)\, e^{-v^2/2}\, dv$. The derivative of the smoothed $F(x)$ along $\xi$ is a 1D expectation
$$\mathcal{D}[G_\sigma(0\,|\,x,\xi)] = \frac{1}{\sigma}\,\mathbb{E}_{v\sim\mathcal{N}(0,1)}\left[G(\sigma v\,|\,x,\xi)\, v\right],$$
where $\mathcal{D}[\cdot]$ denotes the differential operator. Intuitively, the DGS gradient is formed by assembling these directional derivatives along $d$ orthogonal directions. Let $\Xi := (\xi_1, \ldots, \xi_d)$ be an orthonormal system; the DGS gradient is defined as
$$\nabla_{\sigma,\Xi}[F](x) := \left[\mathcal{D}[G_\sigma(0\,|\,x,\xi_1)], \cdots, \mathcal{D}[G_\sigma(0\,|\,x,\xi_d)]\right]\Xi,$$
where $\Xi$ and $\sigma$ can be adjusted during an optimization process. Since each component of $\nabla_{\sigma,\Xi}[F](x)$ only involves a 1D integral, (Zhang et al., 2020) proposed to use the Gauss-Hermite (GH) quadrature rule (Abramowitz & Stegun, 1972), where each component $\mathcal{D}[G_\sigma(0\,|\,x,\xi)]$ is approximated as
$$\mathcal{D}^M[G_\sigma(0\,|\,x,\xi)] = \frac{1}{\sqrt{\pi}\,\sigma}\sum_{m=1}^{M} w_m\, F(x + \sqrt{2}\,\sigma v_m \xi)\,\sqrt{2}\, v_m. \qquad (1)$$
Here $\{v_m\}_{m=1}^M$ are the roots of the $M$-th order Hermite polynomial and $\{w_m\}_{m=1}^M$ are quadrature weights, the values of which can be found in (Abramowitz & Stegun, 1972). It was theoretically proved in (Abramowitz & Stegun, 1972) that the error of the GH estimator is $\sim M!/(2^M (2M)!)$, which is much smaller than the MC error $\sim 1/\sqrt{M}$. Applying the GH quadrature to each component of $\nabla_{\sigma,\Xi}[F](x)$, the following estimator is defined for the DGS gradient:
$$\nabla^M_{\sigma,\Xi}[F](x) = \left[\mathcal{D}^M[G_\sigma(0\,|\,x,\xi_1)], \cdots, \mathcal{D}^M[G_\sigma(0\,|\,x,\xi_d)]\right]\Xi. \qquad (2)$$
Then, the DGS gradient is readily integrated into first-order schemes to replace the local gradient.
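For illustration, a minimal NumPy sketch of the estimator in Eqs. (1)-(2) is given below, with the identity matrix as the orthonormal system Ξ and a fixed radius σ; AdaDGS would adapt σ (and possibly Ξ) during optimization.

```python
import numpy as np

def dgs_gradient(F, x, sigma, Xi=None, M=5):
    """Gauss-Hermite estimator of the nonlocal DGS gradient, Eqs. (1)-(2).

    F:  black-box loss, maps a length-d vector to a float.
    Xi: (d, d) orthonormal matrix whose rows are the directions xi_i.
    M:  number of Gauss-Hermite quadrature points per direction.
    """
    d = x.shape[0]
    if Xi is None:
        Xi = np.eye(d)                              # illustrative default basis
    v, w = np.polynomial.hermite.hermgauss(M)       # GH roots and weights
    D = np.zeros(d)
    for i in range(d):
        xi = Xi[i]
        # D^M[G_sigma(0 | x, xi)]: smoothed directional derivative along xi
        fvals = np.array([F(x + np.sqrt(2.0) * sigma * vm * xi) for vm in v])
        D[i] = np.sum(w * fvals * np.sqrt(2.0) * v) / (np.sqrt(np.pi) * sigma)
    return D @ Xi                                   # assemble the DGS gradient
```

Each call costs M function queries per direction (dM in total), which is the budget against which the extra line-search queries discussed earlier are small.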
The authors study the problem of global non-convex optimization with access only to function evaluations. Specifically, they propose an approach to automatically control the hyper-parameters of Directional Gaussian Smoothing (DGS), a recently proposed solution for the problem. Their proposed solution trades off some additional function evaluations per parameter update to perform a line search for the optimal learning rate. Then the tuned learning rate informs the update of the Gaussian smoothing parameter. The proposed automated tuning approach is supported by a large set of experiments that include standard global optimization benchmarks as well as practical applications.
SP:d460957c05007cafe286b0590ffed111c806dd48
End-to-end Quantized Training via Log-Barrier Extensions
1 INTRODUCTION . As state-of-the-art deep learning models for vision, language understanding and speech grow increasingly large and computationally burdensome (He et al., 2017; Devlin et al., 2018; Karita et al., 2019), there is a growing countervailing demand, motivated by latency, security and privacy concerns, to perform training and inference with these models on smaller devices at the edge rather than in server farms in the cloud. Model quantization has emerged as a promising approach to enable deployment of deep learning models on edge devices; it reduces energy, latency and storage requirements by performing floating-point computation in low precision (less than 32 bits). There are two primary strategies for quantization: post-training approaches quantize the parameters of a model trained in full precision post-hoc, and tend to suffer a heavy penalty on accuracy since their inference graph differs substantially from training (Jacob et al., 2018). Quantization-aware training (QAT) (Bhuwalka et al., 2020) combats this discrepancy by simulating quantization during training, so that model parameters are learned that will work well when inference is performed in low precision. In this work, we focus on the latter setting, suitable for fully quantized training on low-precision (e.g., 8-bit) devices. Though QAT results in quantized models that perform largely on par with their non-quantized counterparts, current state-of-the-art QAT methods (Wu et al., 2018; Wang et al., 2018; Bhuwalka et al., 2020) are not suitable for training on fully low-precision hardware because they employ fake quantization, meaning each operation is executed using 32- or 16-bit floating point arithmetic, and its output is then quantized to lower precision, e.g., int8. This results in two key incompatibilities with fully low-precision training, and consequently with deployment on real low-precision hardware. First, existing QAT approaches assume perfect sums in inner product operations, which means that the accumulators used to compute matrix multiplies (the acc row in Table 1) must be higher precision than the values being multiplied (other bit-precision rows in Table 1). This is to avoid losing resolution in low-precision additions, also known as swamping (Wang et al., 2018) [1]. Second, QAT commonly leverages dynamic quantization ranges per layer, meaning the mapping between high- and low-precision values varies by layer, carefully tuned as a function of the network architecture, optimization dynamics and data during training. While this practice results in higher quantized inference accuracy, it is also a challenge to low-precision training, since it is unclear how to tune those ranges when training on new data in the absence of high-precision arithmetic. These incompatibilities present a substantial hurdle to quantized training in practice. For example, an automotive electronics manufacturer may want to deploy a machine learning model on its 8-bit door lock or power window controller to adaptively fit the users' habits. In this scenario, existing approaches for quantized training would fail (Sakr et al., 2019). In response, we propose a new approach for fully quantized training of neural network models, inspired by the barrier method from convex optimization (Boyd & Vandenberghe, 2004). Log Barrier Tail-bounded Quantization (LogBTQ) utilizes a log barrier extension loss
, 2019 ) to constrain the output of the network , encouraging all model parameters and activations to stay within the same predefined range . The log barrier function itself is a smooth approximation of the indicator function , which is ideal for selecting the weights that are within the range of quantization ( see Figure 1 , left ) . By fixing a single quantization range throughout the network at the beginning of training , our approach obviates the need for dynamic ranges , and the limits of the range are set so as to alleviate overflow2 in matrix multiply accumulations . We combine the log barrier extension loss with an L1 regularization term ( Hoffer et al. , 2018 ) to further reduce the total magnitude of parameters and activations in the model . To allow for gradients , which tend to form a peaky distribution near extremely small values ( Zhou et al. , 2016 ; Jain et al. , 2020 ) , to be quantized using the same range as the rest of the network , we also adopt the nonlinear µ-law algorithm from audio applications ( Deng & Doroslovacki , 2006 ) to construct a new MU8 codebook that better deals with “ swamping ” issues compared to the IEEE floating-point standard . Experiments show that our approach achieves competitive results compared to state-of-the-art full-precision models on the MNIST , CIFAR-10 and ImageNet classification benchmarks , despite our models being trained end-to-end using only 8 bits of precision . 1Swamping : accumulation of floating-point numbers , where a small magnitude value is ignored ( or truncated ) when it is added to a large magnitude sum . 2Overflow : in fixed-point accumulation , the accumulated value wraps around to a small value when it exceeds the largest value representable at the given accumulation precision . 2 BACKGROUND AND RELATED WORK . 2.1 POST-TRAINING QUANTIZATION . There has been a recent surge of interest in quantization research . In 2020 alone , there were a number of important developments in post-training quantization . Rusci et al . ( 2020 ) ; Jain et al . ( 2020 ) ; Esser et al . ( 2020 ) ; Uhlich et al . ( 2020 ) proposed learning-based approaches for determining the quantization ranges of activations and weights at low precision . Stock et al . ( 2020 ) advocates preserving the quality of the reconstruction of the network outputs rather than its weights . They all show excellent performance compared to full-precision models after quantization . Sakr & Shanbhag ( 2019 ) presented a detailed analysis of reduced precision training for a feedforward network that accounts for both the forward and backward passes , demonstrating that precision can be greatly reduced throughout the network computations while largely preserving training quality . Our work shares the same intuition of preferring a small predetermined dynamic range ( PDR ) and a small clipping rate3 . However , the approach of Sakr & Shanbhag ( 2019 ) requires the network first be trained to convergence at full 32-bit precision , which is a significant limitation . In this paper , we focus on training rather than inference on low-precision hardware ; therefore , we do not assume access to a full-precision high-performing model as a starting point . 2.2 QUANTIZATION-AWARE TRAINING . Pioneering works in this domain ( Zhou et al. , 2016 ; Courbariaux et al. , 2015 ) looked at quantizing model weights , activations and gradients to lower precision to accelerate neural network training . The terminology quantization-aware training ( QAT ) was first introduced by Jacob et al .
( 2018 ) . QAT incorporates quantization error as noise during training and as part of the overall loss , which the optimization algorithm tries to minimize . Hence , the model learns parameters that are more robust to quantization . QAT is not meant to be performed entirely in low precision , however ; it aims to learn parameters that will work well for low-precision inference . More recently , several works further pursued the goal of enabling fully low-precision training ( Wu et al. , 2018 ; Wang et al. , 2018 ; Das et al. , 2018 ; Sun et al. , 2019 ) . As shown in Table 1 , most existing work employs fake quantization , resorting to higher precision values to compensate for the swamping issue , especially during gradient accumulation . Mixed-precision quantization ( Das et al. , 2018 ; Wang et al. , 2018 ; Zhang et al. , 2020a ) , which quantizes a neural network using multiple bit precisions across layers , still relies on higher-precision gradients to preserve model accuracy . This means it is difficult , if not impossible , to implement these approaches on low-bit ( e.g . 8-bit ) hardware . Most similar to our work , Sun et al . ( 2019 ) claim it is possible to do every step in low precision , but the quantization range for the layers in their work is very carefully chosen empirically , which presents great difficulty if we were to train models from scratch on low-precision hardware . Their method also requires a copy of the quantization error ( residual ) in FP16 ( 1-6-9 ) ( hence the 8/16 in Table 1 ) . In addition to the 9-bit mantissa , the exponent bits in their floating-point format would need to be manually modified to store the residual due to its small value . In this paper , we propose a new quantization scheme , log-barrier tail-bounded quantization ( LogBTQ ) , that can perform fully end-to-end low-precision training , suitable for deployment on low-precision hardware . Our major contributions are the following : 1 . We apply a log barrier extension loss to soft-threshold the values of network weights and activations to constrain all the values to be small . Our quantization scheme also enables global fixed-range quantization ; together , these significantly alleviate the overflow issue caused by large numbers and dynamic ranges . 2 . We add an L1 loss term to encourage sparsity and further reduce overflow . 3 . We propose µ-law quantization ( MU8 ) instead of INT8 , FP8 ( 1-4-3 ) or FP8 ( 1-5-2 ) to construct a more accurate codebook that better compensates for the peaky concentration of network parameters around small values . 3See Appendix B of Sakr & Shanbhag ( 2019 ) for an explanation of PDR ; refer to their Criterion 2 regarding the clipping rate . 3 LOG BARRIER TAIL-BOUNDED QUANTIZATION ( LOGBTQ ) . The overall diagram of our quantization scheme is shown in Figure 2 ( left ) . Figure 2 ( right ) shows the backward pass , where we quantize all values and operations at each layer , including input x , weights w , activations a , errors e , and gradients g ( including the gradient accumulation step ) . We denote all these values as the set Z = { x , w , a , e , g } . In this work , different from previous works ( Sakr & Shanbhag , 2019 ; Zhang et al. , 2020b ) that used adaptive quantization ranges , we adopt a globally fixed quantization range for every element z ∈ Z , and set z ∈ [ −2 , 2 ] . We do not need to adjust the range and precision during training as in other quantization work that relies on layer-wise dynamic ranges . This greatly reduces the overhead for implementation on hardware .
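To make the µ-law codebook idea concrete , here is a minimal sketch of how a nonuniform 8-bit codebook could be built by placing levels uniformly in the companded domain and expanding them back through the inverse µ-law map , so that levels are dense near zero where parameters and gradients concentrate . The level count of 256 , the telephony value µ = 255 , and the [ −2 , 2 ] range ( matching the fixed range above ) are illustrative assumptions ; the exact MU8 format is not reproduced here .

```python
import numpy as np

def mu_law_codebook(num_levels=256, mu=255.0, v_max=2.0):
    """Nonuniform codebook: a uniform grid in the mu-law companded domain,
    mapped back through the inverse mu-law (expansion). Levels are dense
    near 0 and sparse near +/- v_max."""
    companded = np.linspace(-1.0, 1.0, num_levels)
    levels = np.sign(companded) * (1.0 / mu) * ((1.0 + mu) ** np.abs(companded) - 1.0)
    return v_max * levels

def quantize(x, codebook):
    """Round every entry of x to its nearest codebook level."""
    idx = np.abs(x[..., None] - codebook).argmin(axis=-1)
    return codebook[idx]

codebook = mu_law_codebook()
grads = np.random.default_rng(0).normal(scale=1e-2, size=5)  # peaky, small-magnitude values
print(quantize(grads, codebook))
```

Because the spacing between levels shrinks near zero , small gradient values survive quantization that would be swamped by the uniform spacing of an INT8 grid .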
3.1 CONSTRAINED FORMULATION . Let D = { I1 , ... , IN } denote the labeled set of N training images , let f denote the neural network model , and let θ denote all the parameters of the neural network , including the weights w. For a task such as image classification , we are usually solving an optimization problem of the form $\min_{\theta} \mathcal{L} ( f_{\theta} ( I ) )$ , where $\mathcal{L}$ is the loss function of our neural network training objective . In this work , we use the typical cross-entropy loss , and since we are interested in constraining the quantization threshold , we are effectively performing constrained optimization of the form $$\operatorname*{minimize}_{\theta} \ \mathcal{L} ( f_{\theta} ( I ) ) \quad \text{subject to} \quad | \theta_n | \le u , \ n = 1 , \ldots , N , \tag{1}$$ where u is our desired barrier ( perturbation ) threshold . In practice , we set u = 0.1 to ensure we can represent as much information as possible within our quantization range ( Figure 1 , left ) . This setting is also explained further in Section 3.3 .
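As a rough illustration of how the constrained problem in equation 1 can be relaxed into a single trainable loss , the sketch below combines cross-entropy with the log barrier extension of Kervadec et al . ( 2019 ) applied to the constraint $| \theta_n | - u \le 0$ , plus the L1 term mentioned above . The barrier parameter t , the normalization over parameter count , and the L1 weight are our assumptions for illustration ; the paper's actual schedule for t and its weighting may differ .

```python
import math
import torch
import torch.nn.functional as F

def log_barrier_extension(z, t=5.0):
    """Smooth penalty for a constraint z <= 0 (after Kervadec et al., 2019):
    a log barrier on the strictly feasible side, extended linearly with a
    matching slope on the infeasible side, so gradients exist everywhere."""
    barrier = -(1.0 / t) * torch.log((-z).clamp_min(1e-12))  # clamp avoids NaN in the unused branch
    linear = t * z - (1.0 / t) * math.log(1.0 / t ** 2) + 1.0 / t
    return torch.where(z <= -1.0 / t ** 2, barrier, linear)

def logbtq_loss(logits, labels, params, u=0.1, t=5.0, l1_weight=1e-5):
    """Cross-entropy plus a barrier on |theta_n| - u <= 0 for every
    parameter and an L1 sparsity term (weights are illustrative)."""
    loss = F.cross_entropy(logits, labels)
    n = sum(p.numel() for p in params)
    for p in params:
        loss = loss + log_barrier_extension(p.abs() - u, t).sum() / n
        loss = loss + l1_weight * p.abs().sum()
    return loss
```

One can check that the two branches meet at $z = -1/t^2$ with value $( 2 / t ) \log t$ and slope t on both sides , which is what makes the extension a smooth , everywhere-differentiable surrogate for the hard constraint .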
The paper is addressing an important and challenging problem: end-to-end training of deep nets in fixed-point, in this case with 8-bit precision. A good solution to this problem can have a major impact on the deployability of deep nets on embedded hardware. The basic idea is to introduce an additional term (the log-barrier constraint) in the loss function to constrain the allowable range over which model parameters are allowed to take values. The authors use mu-encoding to assign non-uniform quantization levels to minimize the quantization error. The main results are in Table 2, showing that the method eliminates overflow in practice and allows quantized networks to approach the accuracy of full-precision networks on the MNIST, CIFAR-10 and ImageNet benchmarks.
SP:253566b5271d22d4d6492ef9def2e67fb99c5d57
A Simple and Effective Baseline for Out-of-Distribution Detection using Abstention
1 INTRODUCTION AND RELATED WORK . Most supervised machine learning has been developed with the assumption that the distribution of classes seen at train and test time is the same . However , the real world is unpredictable and open-ended , and making machine learning systems robust to the presence of unknown categories and out-of-distribution samples has become increasingly essential for their safe deployment . While refraining from predicting when uncertain should be intuitively obvious to humans , the peculiarities of DNNs make them overconfident on unknown inputs ( Nguyen et al. , 2015 ) and make this a challenging problem to solve in deep learning . A very active sub-field of deep learning , known as out-of-distribution ( OoD ) detection , has emerged in recent years that attempts to impart to deep neural networks the quality of `` knowing when it doesn ’ t know '' . The most straightforward approach in this regard is based on using the DNN ’ s output as a proxy for predictive confidence . For example , a simple baseline for detecting OoD samples using thresholded softmax scores was presented in Hendrycks & Gimpel ( 2016 ) , where the authors provided empirical evidence that for DNN classifiers , in-distribution predictions do tend to have higher winning scores than OoD samples , thus empirically justifying the use of softmax thresholding as a useful baseline . However , this approach is vulnerable to the pathologies discussed in Nguyen et al . ( 2015 ) . Subsequently , increasingly sophisticated methods have been developed to attack the OoD problem . Liang et al . ( 2018 ) introduced a detection technique that involves perturbing the inputs in the direction of increasing the confidence of the network ’ s predictions on a given input , based on the observation that the magnitude of gradients on in-distribution data tends to be larger than for OoD data . The method proposed in Lee et al . ( 2018 ) also involves input perturbation , but confidence in this case was measured by the Mahalanobis distance score using the computed mean and covariance of the pre-softmax scores . A drawback of such methods , however , is that they introduce a number of hyperparameters that need to be tuned on the OoD dataset , which is infeasible in many real-world scenarios as one does not often know in advance the properties of unknown classes . A modified version of the perturbation approach was recently proposed in Hsu et al . ( 2020 ) that circumvents some of these issues , though one still needs to ascertain an ideal perturbation magnitude , which might not generalize from one OoD set to the other . Given that one might expect a classifier to be more uncertain when faced with OoD data , many methods developed for estimating uncertainty for DNN predictions have also been used for OoD detection . A useful baseline in this regard is the temperature scaling method of Guo et al . ( 2017 ) , which was proposed for calibrating DNN predictions on in-distribution data and has been observed to also serve as a useful OoD detector in some scenarios . Further , label smoothing techniques like mixup ( Zhang et al. , 2017 ) have also been shown to be able to improve OoD detection performance in DNNs ( Thulasidasan et al. , 2019 ) . An ensemble-of-deep-models approach , augmented with adversarial examples during training , described in Lakshminarayanan et al . ( 2017 ) , was also shown to improve predictive uncertainty and was successfully applied to OoD detection .
In the Bayesian realm , methods such as Maddox et al . ( 2019 ) and Osawa et al . ( 2019 ) have also been used for OoD detection , though at increased computational cost . However , it has been argued that for OoD detection , Bayesian priors on the data are not completely justified since one does not have access to the prior of the open set ( Boult et al. , 2019 ) . Nevertheless , simple approaches like dropout – which have been shown to be equivalent to deep Gaussian processes ( Gal & Ghahramani , 2016 ) – have been used as baselines for OoD detection . Training the model to recognize unknown classes by using data from categories that do not overlap with classes of interest has been shown to be quite effective for out-of-distribution detection , and a slew of methods that use additional data for discriminating between in-distribution and OoD data have been proposed . DeVries & Taylor ( 2018 ) describe a method that uses a separate confidence branch and misclassified training data samples that serve as a proxy for OoD samples . In the outlier exposure technique described in Hendrycks et al . ( 2018 ) , the predictions on natural outlier images used in training are regularized against the uniform distribution to encourage high-entropy posteriors on outlier samples . An approach that uses an extra class for outlier samples is described in Neal et al . ( 2018 ) , where instead of natural outliers , counterfactual images that lie just outside the class boundaries of known classes are generated using a GAN and assigned the extra class label . A similar approach using generative samples for the extra class , but using a conditional Variational Auto-Encoder ( Kingma & Welling , 2013 ) for generation , is described in Vernekar et al . ( 2019 ) . A method to force a DNN to produce high-entropy ( i.e. , low confidence ) predictions and suppress the magnitude of feature activations for OoD samples was discussed in Dhamija et al . ( 2018 ) ; arguing that methods that use an extra background class for OoD samples force all such samples to lie in one region of the feature space , that work also forces separation by suppressing the activation magnitudes of samples from unknown classes . The above works have shown that the use of known OoD samples ( or known unknowns ) often generalizes well to unknown unknown samples . Indeed , even though the space of unknown classes is potentially infinite , and one can never know in advance the myriad of inputs that can occur during test time , empirically this approach has been shown to work . The abstention method that we describe in the next section borrows ideas from many of the above methods : as in Hendrycks et al . ( 2018 ) , we use additional samples of real images and text from non-overlapping categories to train the model to abstain , but instead of entropy regularization over OoD samples , our method uses an extra abstention class . While it has sometimes been argued in the literature that using an additional abstention ( or rejection ) class is not an effective approach for OoD detection ( Dhamija et al. , 2018 ; Lee et al. , 2017 ) , comprehensive experiments we conduct in this work demonstrate that this is not the case . Indeed , we find that such an approach is not only simple but also highly effective for OoD detection , often outperforming existing methods that are more complicated and involve tuning of multiple hyperparameters .
The main contributions of this work are as follows : • To the best of our knowledge , this is the first work to comprehensively demonstrate the efficacy of using an extra abstention ( or rejection ) class in combination with outlier training data for effective OoD detection . • In addition to being effective , our method is also simple : we introduce no additional hyperparameters in the loss function , and train with regular cross entropy . From a practical standpoint , this is especially useful for deep learning practitioners who might not wish to make modifications to the loss function while training deep models . In addition , since outlier data is simply an additional training class , no architectural modifications to existing networks are needed . • Due to the simplicity and effectiveness of this method , we argue that this approach be considered a strong baseline for comparing new methods in the field of OoD detection . 2 OUT-OF-DISTRIBUTION DETECTION WITH AN ABSTAINING CLASSIFIER ( DAC ) . Our approach uses a DNN trained with an extra abstention class for detecting out-of-distribution and novel samples ; from here on , we will refer to this as the deep abstaining classifier ( DAC ) . We augment our training set of in-distribution samples ( $D_{in}$ ) with an auxiliary dataset of known out-of-distribution samples ( $\tilde{D}_{out}$ ) , that are known to be mostly disjoint from the main training set ( we will use $D_{out}$ to denote unknown out-of-distribution samples that we use for testing ) . We assign the training label of K + 1 to all the outlier samples in $\tilde{D}_{out}$ ( where K is the number of known classes ) and train with cross-entropy ; the minimization problem then becomes : $$\min_{\theta} \ \mathbb{E}_{ ( x , y ) \sim D_{in} } \left[ - \log P_{\theta} ( y = \hat{y} \mid x ) \right] + \mathbb{E}_{ x \sim \tilde{D}_{out} } \left[ - \log P_{\theta} ( y = K + 1 \mid x ) \right] \tag{1}$$ where θ are the weights of the neural network . This is somewhat similar to the approaches described in Hendrycks et al . ( 2018 ) as well as in Lee et al . ( 2017 ) , with the main difference being that in those methods , an extra class is not used ; instead , predictions on outliers are regularized against the uniform distribution . Further , the loss on the outlier samples is weighted by a hyperparameter λ which has to be tuned ; in contrast , our approach does not introduce any additional hyperparameters . In our experiments , we find that the presence of an abstention class that is used to capture the mass in $\tilde{D}_{out}$ significantly increases the ability to detect $D_{out}$ during testing . For example , in Figure 1 , we show the distribution of the winning logits ( pre-softmax activations ) in a regular DNN ( left ) . For the same experimental setup , the abstention logit of the DAC produces near-perfect separation of the in- and out-of-distribution logits , indicating that using an abstention class for mapping outliers can be a very effective approach to OoD detection . Theoretically , it might be argued that the abstention class might only capture data that is aligned with the weight vector of that class , and thus this approach might fail to detect the myriad of OoD inputs that might span the entire input region . Comprehensive experiments over a wide variety of benchmarks described in the subsequent section , however , empirically demonstrate that while the detection is not perfect , it performs very well , and indeed , much better than more complicated approaches . Once the model is trained , we use a simple thresholding mechanism for detection .
Concretely , the detector $g ( x ) : \mathcal{X} \to \{ 0 , 1 \}$ assigns label 1 ( OoD ) if the softmax score of the abstention class , i.e. , $p_{K+1} ( x )$ , is above some threshold δ , and label 0 otherwise : $$g ( x ) = \begin{cases} 1 & \text{if } p_{K+1} ( x ) \ge \delta \\ 0 & \text{otherwise} \end{cases} \tag{2}$$ Like in other methods , the threshold δ has to be determined based on acceptable risk that might be specific to the application . However , using performance metrics like the area under the ROC curve ( AUROC ) , we can determine the threshold-independent performance of various methods , and we use this as one of our evaluation metrics in all our experiments . 3 EXPERIMENTS . The experiments we describe here can be divided into two sets : in the first set , we compare against methods that are explicitly designed for OoD detection , while in the second category , we compare against methods that are known to improve predictive uncertainty in deep learning . In both cases , we report results over a variety of architectures to demonstrate the efficacy of our method .
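The recipe above fits in a few lines . The sketch below is a minimal PyTorch rendering of equations 1 and 2 : train with standard ( K + 1 )-way cross-entropy where every outlier-exposure sample receives the abstention label , then score test inputs by the abstention-class softmax mass . The function and variable names are ours , and the model is assumed to already expose K + 1 output logits .

```python
import torch
import torch.nn.functional as F

def dac_training_loss(model, x_in, y_in, x_out, num_known_classes):
    """Equation 1: plain cross-entropy over K+1 classes, with all
    outlier-exposure samples labeled with the abstention class."""
    abstain = num_known_classes  # 0-based index of the (K+1)-th class
    y_out = torch.full((x_out.size(0),), abstain,
                       dtype=torch.long, device=x_out.device)
    logits = model(torch.cat([x_in, x_out]))
    return F.cross_entropy(logits, torch.cat([y_in, y_out]))

@torch.no_grad()
def detect_ood(model, x, num_known_classes, delta=0.5):
    """Equation 2: g(x) = 1 iff the abstention-class softmax mass
    p_{K+1}(x) is at least delta. Scores can also feed an AUROC metric."""
    p_abstain = F.softmax(model(x), dim=1)[:, num_known_classes]
    return (p_abstain >= delta).long(), p_abstain
```

Note that no loss-weighting hyperparameter appears anywhere : the only knob is the decision threshold δ , which is exactly the threshold-independence point the AUROC evaluation addresses .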
I read this paper with great interest. The authors propose an easy-to-understand, easy-to-implement baseline method for detecting when inputs to an ML model are out of distribution. The method involves augmenting the training dataset with an out-of-distribution dataset and adding an additional class in the classification layer for out of distribution. The paper describes several experiments (some in computer vision, some in NLP) and then compares them to other OOD techniques. The results are comparable to other techniques, although the proposed technique is definitely simpler.
SP:d9155553fae947cc53d87a221fdd1d57b44f5ec6
Outlier Robust Optimal Transport
1 INTRODUCTION . Optimal transport is a fundamental problem in applied mathematics . In its original form ( Monge , 1781 ) , the problem entails finding the minimum cost way to transport mass from a prescribed probability distribution µ on X to another prescribed distribution ν on X . Kantorovich ( 1942 ) relaxed Monge ’ s formulation of the optimal transport problem to obtain the Kantorovich formulation : $$\mathrm{OT} ( \mu , \nu ) \triangleq \min_{\Pi \in \mathcal{F} ( \mu , \nu ) } \mathbb{E}_{ ( X_1 , X_2 ) \sim \Pi } \left[ c ( X_1 , X_2 ) \right] , \tag{1.1}$$ where $\mathcal{F} ( \mu , \nu )$ is the set of couplings between µ and ν ( probability distributions on X × X whose marginals are µ and ν ) and c is a cost function , where we typically assume c ( x , y ) ≥ 0 and c ( x , x ) = 0 . Compared to other notions of distance between probability distributions , optimal transport uniquely depends on the geometry of the sample space . Recent advancements in optimization for optimal transport ( Cuturi , 2013 ; Solomon et al. , 2015 ; Genevay et al. , 2016 ; Seguy et al. , 2018 ) enabled its broad adoption in machine learning applications where the geometry of the data is important . See ( Peyré & Cuturi , 2018 ) for a survey . Optimal transport has found applications in natural language processing ( Kusner et al. , 2015 ; Huang et al. , 2016 ; Alvarez-Melis & Jaakkola , 2018 ; Yurochkin et al. , 2019 ) , generative modeling ( Arjovsky et al. , 2017 ) , clustering ( Ho et al. , 2017 ) , domain adaptation ( Courty et al. , 2014 ; 2017 ) , large-scale Bayesian modeling ( Srivastava et al. , 2018 ) , and many other domains . Many applications use OT as a loss in an optimization problem of the form : $$\hat{\theta} \in \operatorname*{arg\,min}_{\theta \in \Theta} \ \mathrm{OT} ( \mu_n , \nu_{\theta} ) , \tag{1.2}$$ where $\{ \nu_{\theta} \}_{\theta \in \Theta}$ is a collection of parametric models and $\mu_n$ is the empirical distribution of the samples . Such estimators are called minimum Kantorovich estimators ( MKE ) ( Bassetti et al. , 2006 ) . They are popular alternatives to likelihood-based estimators , especially in generative modeling . For example , when OT ( · , · ) is the Wasserstein-1 distance and $\nu_{\theta}$ is a generator parameterized by a neural network with weights θ , equation 1.2 corresponds to the Wasserstein GAN ( Arjovsky et al. , 2017 ) . One drawback of optimal transport is its sensitivity to outliers . Because all the mass in µ must be transported to ν , a small fraction of outliers can have an outsized impact on the optimal transport problem . For statistics and machine learning applications in which the data is corrupted or noisy , this is a major issue . For example , the poor performance of Wasserstein GANs in the presence of outliers was noted in the recent works on outlier-robust generative learning with f -divergence GANs ( Chao et al. , 2018 ; Wu et al. , 2020 ) . The problem of outlier-robustness in MKE has not been studied , with the exception of two concurrent works ( Staerman et al. , 2020 ; Balaji et al. , 2020 ) . In this paper , we propose a modification of OT to address its sensitivity to outliers . Our formulation can be used as a loss in equation 1.2 so that it is robust to a small fraction of outliers in the data . To keep things simple , we consider the ε-contamination model ( Huber & Ronchetti , 2009 ) . Let $\nu_{\theta_0}$ be a member of a parametric model $\{ \nu_{\theta} : \theta \in \Theta \}$ and let $\mu = ( 1 - \varepsilon ) \nu_{\theta_0} + \varepsilon \tilde{\nu}$ , where µ is the data-generating distribution , ε > 0 is the fraction of outliers , and ν̃ is the distribution of the outliers .
Although the fraction of outliers is capped at ε , the value of the outliers is arbitrary , so the outliers may have an arbitrarily large impact on the optimal transport problem . Our goal is to modify the optimal transport problem so that it is more robust to outliers . We have in mind the downstream application of learning $\theta_0$ from ( samples from ) µ in the ε-contamination model . Our main contributions are as follows : 1 . We propose a robust OT formulation that is suitable for statistical estimation in the ε-contamination model using MKE . 2 . We show that our formulation is equivalent to the original OT problem with a clipped transport cost . This connection enables us to leverage the voluminous literature on computational optimal transport to develop an efficient algorithm to perform MKE robust to outliers . 3 . Our formulation enables a new application of optimal transport : outlier detection in data . 2 PROBLEM FORMULATION . 2.1 ROBUST OT FOR MKE . To promote outlier-robustness in MKE , we need to allow the corresponding OT problem to ignore the outliers in the data distribution µ . The ε-contamination model imposes a cap on the fraction of outliers , so it is not hard to see that $\| \mu - \nu_{\theta_0} \|_{TV} \le \varepsilon$ , where $\| \cdot \|_{TV}$ is the total-variation norm defined as $\| \mu \|_{TV} = \frac{1}{2} \int | \mu ( dx ) |$ . This suggests we solve a TV-constrained/regularized version of equation 1.2 . The constrained version $$\min_{\theta \in \Theta , \ \tilde{\mu}} \ \mathrm{OT} ( \tilde{\mu} , \nu_{\theta} ) \quad \text{subject to} \quad \| \mu - \tilde{\mu} \|_{TV} \le \varepsilon$$ suffers from identification issues . In particular , it can not distinguish between “ clean ” distributions within TV distance ε of $\nu_{\theta_0}$ . This makes it unsuitable as a loss function for statistical estimation , because it can not lead to a consistent estimator . However , its regularized counterpart $$\min_{\theta \in \Theta , \ s} \ \mathrm{OT} ( \mu + s , \nu_{\theta} ) + \lambda \| s \|_{TV} , \tag{2.1}$$ where λ > 0 is a regularization parameter , does not suffer from this issue . In the rest of this paper , we work with the TV-regularized formulation equation 2.1 . The main idea of our formulation is to allow for modifications of µ , while penalizing their magnitude and ensuring that the modified µ is still a probability measure . Below we formulate this intuition in an optimization problem titled ROBOT ( ROBust Optimal Transport ) : Formulation 1 : $$\mathrm{ROBOT} ( \mu , \nu ) = \min_{\Pi \in \mathcal{F}_+ ( \mathbb{R}^d \times \mathbb{R}^d ) , \ s \in \mathcal{F} ( \mathbb{R}^d ) } \int C ( x , y ) \, \Pi ( dx , dy ) + \lambda \| s \|_{TV}$$ $$\text{subject to} \quad \int_{B \times \mathbb{R}^d} \Pi ( dx , dy ) = \int_B ( \mu ( dx ) + s ( dx ) ) \ge 0 \quad \forall B \in \mathcal{B} ( \mathbb{R}^d ) \ \text{( Borel σ-algebra )} ,$$ $$\int_{\mathbb{R}^d \times C} \Pi ( dx , dy ) = \int_C \nu ( dy ) \quad \forall C \in \mathcal{B} ( \mathbb{R}^d ) , \qquad \int s ( dx ) = 0 . \tag{2.2}$$ Here $\mathcal{F} ( \mathbb{R}^d )$ denotes the set of all signed measures with finite total variation on $\mathbb{R}^d$ , and $\mathcal{F}_+ ( \mathbb{R}^d \times \mathbb{R}^d )$ is the set of all measures with finite total variation on $\mathbb{R}^d \times \mathbb{R}^d$ . The first and the last constraints ensure that µ + s is a valid probability measure , while $\lambda \| s \|_{TV}$ penalizes the amount of modification in µ . It is worth noting that we can identify the exact locations of outliers in µ by inspecting µ + s , i.e . if µ ( x ) + s ( x ) = 0 , then x was eliminated and is an outlier . ROBOT , unlike classical OT , guarantees that adversarially picked outliers can not increase the distance arbitrarily . Let $\tilde{\mu} = ( 1 - \varepsilon ) \mu + \varepsilon \mu_c$ , i.e . µ̃ is µ contaminated with outliers from $\mu_c$ , and let ν be an arbitrary measure ( in MKE , µ̃ is the contaminated data and ν is the model we learn ) . An adversary can arbitrarily increase OT ( µ̃ , ν ) by manipulating the outlier distribution $\mu_c$ . For ROBOT we have the following bound : Theorem 2.1 .
Let $\tilde{\mu} = ( 1 - \varepsilon ) \mu + \varepsilon \mu_c$ for some ε ∈ [ 0 , 1 ) ; then $$\mathrm{ROBOT} ( \tilde{\mu} , \nu ) \le \left( \mathrm{OT} ( \mu , \nu ) + \lambda \varepsilon \| \mu - \mu_c \|_{TV} \right) \wedge \lambda \| \tilde{\mu} - \nu \|_{TV} \wedge \mathrm{OT} ( \tilde{\mu} , \nu ) . \tag{2.3}$$ This bound has two key takeaways : since the TV norm of the difference of any two distributions is bounded by 1 , an adversary can not increase ROBOT ( µ̃ , ν ) arbitrarily ; and in the absence of outliers , ROBOT is bounded by classical OT . See Appendix C for the proof . Related work We note a connection between equation 2.2 and unbalanced OT ( UOT ) ( Chizat , 2017 ; Chizat et al. , 2018 ) . UOT is typically formulated by replacing the TV norm with KL ( µ + s | µ ) and adding an analogous term for ν . Chizat et al . ( 2018 ) studied entropy-regularized UOT with various divergences penalizing marginal violations . Optimization problems similar to equation 2.2 have also been considered outside of the ML literature ( Piccoli & Rossi , 2014 ; Liero et al. , 2018 ) . We are unaware of prior applications of UOT to outlier-robustness , but it was studied in the concurrent work of Balaji et al . ( 2020 ) . Another relevant variation of OT is partial OT ( Figalli , 2010 ; Caffarelli & McCann , 2010 ) . It may also be considered for outlier-robustness , but it has the drawback of forcing mass destruction rather than adjusting marginals to ignore outliers when they are present . A concurrent work by Staerman et al . ( 2020 ) took a different path : they replaced the expectation in the Wasserstein-1 dual with a median-of-means to promote robustness . It is unclear what the corresponding primal is , making it hard to interpret as an optimal transport problem . A major challenge with the aforementioned methods , including our Formulation 1 , is the difficulty of the optimization problem . This is especially the case for MKEs , where a transport problem has to be solved in every iteration to obtain the gradient of the model parameters . Chizat et al . ( 2018 ) proposed a Sinkhorn-like algorithm for entropy-regularized UOT , but it is not amenable to stochastic optimization . Balaji et al . ( 2020 ) proposed a stochastic optimization algorithm based on the UOT dual , but it requires two additional neural networks ( a total of four including dual potentials ) to parameterize modified marginal distributions ( i.e. , µ + s and the analogous one for ν ) . Optimizing with a median-of-means in the objective function as in ( Staerman et al. , 2020 ) is also challenging . The key contribution of our work is a formulation equivalent to equation 2.2 , which is easily compatible with the large body of classical OT optimization techniques ( Cuturi , 2013 ; Solomon et al. , 2015 ; Genevay et al. , 2016 ; Seguy et al. , 2018 ) . More efficient equivalent formulation At first glance , there are two issues with equation 2.2 : it appears asymmetric , and it is unclear if it can be optimized efficiently . Below we present an equivalent formulation that is free of these issues : Formulation 2 : $$\mathrm{ROBOT} ( \mu , \nu ) = \min_{\Pi \in \mathcal{F}_+ ( \mathbb{R}^d \times \mathbb{R}^d ) } \int C_{\lambda} ( x , y ) \, \Pi ( dx , dy )$$ $$\text{subject to} \quad \int_{B \times \mathbb{R}^d} \Pi ( dx , dy ) = \int_B \mu ( dx ) \quad \forall B \in \mathcal{B} ( \mathbb{R}^d ) , \qquad \int_{\mathbb{R}^d \times C} \Pi ( dx , dy ) = \int_C \nu ( dy ) \quad \forall C \in \mathcal{B} ( \mathbb{R}^d ) , \tag{2.4}$$ where $C_{\lambda}$ is the truncated cost function defined as $C_{\lambda} ( x , y ) = C ( x , y ) \wedge 2\lambda$ . Looking at equation 2.4 , it is not apparent that it adds robustness to MKE , but it is symmetric , easy to combine with entropic regularization by simply truncating the cost , and benefits from stochastic optimization algorithms ( Genevay et al. , 2016 ; Seguy et al. , 2018 ) .
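To illustrate how Formulation 2 plugs into standard machinery , the sketch below runs vanilla Sinkhorn iterations on the truncated cost $C_{\lambda} = C \wedge 2\lambda$ ; clipping the cost matrix is the only change relative to classical entropy-regularized OT . The regularization strength eps and the iteration count are illustrative assumptions , not values from the paper .

```python
import numpy as np

def robot_sinkhorn(a, b, C, lam, eps=0.05, n_iters=500):
    """Entropy-regularized discrete ROBOT: classical Sinkhorn iterations run
    on the truncated cost C_lam = min(C, 2*lam).
    a, b: source/target histograms (each summing to 1); C: cost matrix."""
    C_lam = np.minimum(C, 2.0 * lam)      # the single ROBOT-specific change
    K = np.exp(-C_lam / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                 # alternate marginal projections
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # approximate optimal coupling
    return float((P * C_lam).sum()), P
```

Because the clipped entries of $C_{\lambda}$ cap the price of moving any unit of mass at 2λ , an outlier can contribute at most a bounded amount to the objective , which is the discrete reflection of the bound in equation 2.3 .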
This formulation also has a distant relation to the idea of loss truncation for achieving robustness ( Shen & Sanghavi , 2019 ) . Pele & Werman ( 2009 ) considered the Earth Mover Distance ( discrete OT ) with truncated cost to achieve computational improvements ; they also mentioned its potential to promote robustness against outlier noise but did not explore this direction . In Section 3 , we establish equivalence between the two ROBOT formulations , equation 2.2 and equation 2.4 . This equivalence allows us to obtain an efficient algorithm based on equation 2.4 for robust MKE . We also provide a simple procedure for computing optimal s in equation 2.2 from the solution of equation 2.4 , enabling a new OT application : outlier detection . We verify the effectiveness of robust MKE and outlier detection in our experiments in Section 4 . Before presenting the equivalence proof , we formulate the discrete analogs of the two ROBOT formulations for their practical value .
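For the discrete case , one plausible way to read outliers off the solution of the truncated problem , consistent with ( but not necessarily identical to ) the recovery procedure referenced above : a source point whose transported mass travels only along clipped edges ( raw cost ≥ 2λ ) is a point whose mass the optimal s would have removed . This heuristic is our assumption for illustration , not the paper's exact algorithm .

```python
import numpy as np

def flag_outliers(P, C, lam, mass_tol=1e-9):
    """Flag source point i as an outlier if every edge it actually uses in
    the coupling P is one where the raw cost was clipped (C >= 2*lam)."""
    clipped = C >= 2.0 * lam
    used = P > mass_tol
    return np.array([used[i].any() and clipped[i, used[i]].all()
                     for i in range(P.shape[0])])
```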
The authors propose to address the sensitivity of optimal transport (OT) to outliers. They propose a new formulation based on penalizing modifications of the contaminated probability measure by a signed measure (which shares a close relation with unbalanced OT). The authors further derive an equivalent formulation by adjusting the cost matrix of the corresponding standard OT problem. Empirically, the authors evaluate their proposed approach on a toy example of robust mean estimation and on outlier detection in data.
SP:70b8c75426f18a3dc4a359c8a8cd7dd2076953a0
Dependency Structure Discovery from Interventions
1 INTRODUCTION . Structure learning concerns itself with the recovery of the graph structure of Bayesian networks ( BNs ) from data samples . A natural application of Bayesian networks is to describe cause-effect relationships between variables . In that context , one may speak of causal structure learning . Causal structure learning is challenging because purely observational data may be satisfactorily explained by multiple Bayesian networks ( a Markov equivalence class ) , but only one is the most robust to distributional shifts : The one with the correct graph . A more powerful tool than BNs is thus needed to model causal relationships . Structural Causal Models ( SCMs ) are that tool . An SCM over a set of random variables is a collection of assignments to these variables and a directed acyclic graph of dependencies between them ( Peters et al. , 2017 , §6.2 ) . Each assignment is a function of only the direct causes of a variable , plus an independent noise source . An SCM entails precisely one ( observational ) data distribution . Interventions on an SCM ’ s assignments , such as setting a random variable to a fixed value ( a hard intervention ) , entail new interventional data distributions ( Peters et al. , 2017 , §6.3 ) . SCMs can be used to answer higher-order questions of cause-and-effect , up the ladder of causation ( Pearl & Mackenzie , 2018 ) . Causal structure learning using SCMs has been attempted in several disciplines including biology ( Sachs et al. , 2005 ; Hill et al. , 2016 ) , weather forecasting ( Abramson et al. , 1996 ) and medicine ( Lauritzen & Spiegelhalter , 1988 ; Korb & Nicholson , 2010 ) . Causal structure is most frequently learned from data drawn from observational distributions . Structure learning methods generally can not do more than identify the causal graph up to a Markov equivalence class ( Spirtes et al. , 2000 ) . In order to fully identify the true causal graph , a method must either make restrictive assumptions about the underlying data-generating process , such as linear but non-Gaussian data ( Shimizu et al. , 2006 ) , or must access enough data from outside the observational distribution ( i.e. , from interventions ) . Under certain assumptions about the number , diversity , and nature of the interventions , the true underlying causal graph is always identifiable , given that the method knows the intervention performed ( Heckerman et al. , 1995 ) . In much of the prior work on causal model induction it is assumed that there is an experimenter and this experimenter performs interventions . However , in the real world , interventions can also be performed by other agents , which could lead to unknown interventions ( interventions with unknown target variables ) . A few works have attempted to learn structures from unknown-intervention data ( Eaton & Murphy , 2007a ; Squires et al. , 2020 ; Huang et al. , 2020 ) . A notable such work , ( Mooij et al. , 2016 ) , has been extended in ( Kocaoglu et al. , 2019 ; Jaber et al. , 2020 ) . Although there is no theoretical guarantee that the true causal graph can be identified in that setting , evidence so far points to that still being the case . Another common setting is when the graph structure is partially provided , but must be completed . An example is protein structure learning in biology , where we may have definitive knowledge of some causal edges in the protein-protein interactome , but the remaining causal edges must be discovered . We will call this setting “ partial graph completion ” . 
This is an easier task compared to learning the entire graph , since it limits the number of edges that have to be learned . Recently , a flurry of work on structure learning using continuous optimization methods has appeared ( Zheng et al. , 2018 ; Yu et al. , 2019 ) . These methods operate on observational data and are competitive with other methods . Because of the theoretical limitations on identification from purely observational data cited above , it would be interesting to extend these methods to interventional data . However , it is not straightforward to apply continuous optimization methods to structure learning from interventional data . Our key contributions are to answer the following questions experimentally : 1 . Can the proposed model recover the true causal structure ? Yes ; see Figure 4 . 2 . How does the proposed model compare against state-of-the-art causal methods on real-world datasets ? Favourably ; see §5.4 and Table 1 . 3 . Does the proposed model generalize well to unseen interventions ? Yes ; see §5.5 . 4 . How does the proposed model perform on partial graph recovery ? It scales to ∼50 variables , while the other baselines can ’ t ; see §5.7 . 2 PRELIMINARIES . Causal modeling . A Structural Causal Model ( SCM ) ( Peters et al. , 2017 ) over a finite number M of random variables $X_i$ is a set of structural assignments $$X_i := f_i ( X_{\mathrm{pa} ( i , C ) } , N_i ) , \quad \forall i \in \{ 0 , \ldots , M - 1 \} \tag{1}$$ Identifiability . In a purely observational setting , it is known that causal graphs can be distinguished only up to a Markov equivalence class . In order to identify the true causal graph structure , interventional data is needed ( Eberhardt et al. , 2012 ) . Interventions . There are several types of common interventions which may be available ( Eaton & Murphy , 2007b ) . These are : No intervention : only observational data is obtained from the ground truth model . Hard/perfect : the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables . Soft/imperfect : the conditional distribution of the variable on which the intervention is performed is changed . Uncertain : the learner is not sure of which variable exactly the intervention affected directly . Here we make use of soft interventions because they include hard interventions as a limiting case and hence are more general . Structure discovery using continuous optimization . Structure discovery is a super-exponential search problem that searches through all possible directed acyclic graphs ( DAGs ) . Previous continuous-optimization structure learning works ( Zheng et al. , 2018 ; Yu et al. , 2019 ; Lachapelle et al. , 2019 ) mitigate the problem of searching in the super-exponential set of graph structures by considering the degree to which a hypothesis graph violates “ DAG-ness ” as an additional penalty to be optimized . If there are M such variables , the strategy of considering all the possible structural graphs as separate hypotheses is not feasible because it would require maintaining $O ( 2^{M^2} )$ models of the data . 3 RELATED WORK . The recovery of the underlying structural causal graph from observational and interventional data is a fundamental problem ( Pearl , 1995 ; 2009 ; Spirtes et al. , 2000 ) . Different approaches have been studied : score-based , constraint-based , asymmetry-based and continuous optimization methods .
Score-based methods search through the space of all possible directed acyclic graphs ( DAGs ) representing the causal structure based on some form of scoring function for network structures ( Heckerman et al. , 1995 ; Chickering , 2002 ; Tsamardinos et al. , 2006 ; Hauser & Bühlmann , 2012 ; Goudet et al. , 2017 ; Cooper & Yoo , 1999 ; Zhu & Chen , 2019 ) . Constraint-based methods ( Spirtes et al. , 2000 ; Sun et al. , 2007 ; Zhang et al. , 2012 ; Monti et al. , 2019 ; Zhu & Chen , 2019 ) infer the DAG by analyzing conditional independences in the data . Eaton & Murphy ( 2007c ) use dynamic programming techniques to accelerate Markov Chain Monte Carlo ( MCMC ) sampling in a Bayesian approach to structure learning for discrete variable DAGs . Peters et al . ( 2016 ) ; Ghassami et al . ( 2017 ) ; Rojas-Carulla et al . ( 2018 ) exploit invariance across environments to infer causal structure , which faces difficulty scaling due to the iteration over the super-exponential set of possible graphs . Recently , ( Zheng et al. , 2018 ; Yu et al. , 2019 ; Lachapelle et al. , 2019 ) framed the structure search as a continuous optimization problem ; however , these methods only use observational data and are non-trivial to extend to interventional data . In our paper , we present a method that uses continuous optimization and works on both observational and interventional data . For interventional data , it is often assumed that the models have access to full intervention information , which is rare in the real world . Rothenhäusler et al . ( 2015 ) have investigated the case of additive shift interventions , while Eaton & Murphy ( 2007b ) have examined the situation where the targets of experimental interventions are imperfect or uncertain . This is different from our setting , where the intervention is unknown to start with and is assumed to arise from other agents and the environment . Learning-based methods have been proposed ( Guyon , 2013 ; 2014 ; Lopez-Paz et al. , 2015 ) , and there also exist recent approaches using the generalization ability of neural networks to learn causal signals from purely observational data ( Kalainathan et al. , 2018 ; Goudet et al. , 2018 ) . Neural network methods equipped with learned masks , such as ( Ivanov et al. , 2018 ; Li et al. , 2019 ; Yoon et al. , 2018 ; Douglas et al. , 2017 ) , exist in the literature , but only a few ( Kalainathan et al. , 2018 ) have been adapted to causal inference . This last work is , however , tailored for causal inference on continuous variables and from observations only . Adapting it to a discrete-variable setting is made difficult by its use of a Generative Adversarial Network ( GAN ) ( Goodfellow et al. , 2014 ) framework . 4 STRUCTURE DISCOVERY FROM INTERVENTIONS METHOD . Scope of Applicability and Objective . The proposed method , like any structure learning algorithm , assumes the availability of a data-generating process based on ancestral sampling of a ground-truth SCM of M variables , which can be queried for samples . The SCM supports applying and retracting known or unknown interventions . The method can support infinite- or finite-data as well as infinite- or finite-intervention regimes . The objective is , then , to learn the SCM ’ s structure from the insights that each intervention gives about cause-effect relationships between variables in the SCM . 4.1 PROBLEM SETTING AND ASSUMPTIONS . In this paper , we restrict the problem setting to specific , but still broad , classes of SCMs and interventions .
In particular , we assume that : • Data is discrete-valued : the SCM ’ s random variables are all categorical . • Causal sufficiency : for every data sample , the values of all variables are available ; there are no latent confounders . • Interventions are localized : they affect only a single variable ( but which one may not be known ) . • Interventions are soft : an intervention does not necessarily pin its target random variable to a fixed value ( though it may , as a special case ) ; it changes the relationship of a variable with its parents . • Interventions do not stack : before a new intervention is made , the previous one is fully retracted . This stops the SCM from wandering away from its initial , observational configuration after a long series of interventions . • No control over interventions : the structure learning algorithm has control neither of the target , nor of the nature , of the next intervention on the SCM . A minimal sketch of a data-generating process satisfying these assumptions is given below . For a detailed description of the interventions , refer to §A.2 .
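The sketch below is one way such a ground-truth simulator could look : a categorical SCM queried by ancestral sampling , where a soft intervention temporarily swaps out one variable's conditional while keeping its parent set , and is fully retracted after the call . All names are ours ; the paper's actual simulator may differ .

```python
import numpy as np

rng = np.random.default_rng(0)

def topological_order(parents):
    """Order variable indices so parents come first (parents: dict i -> list)."""
    order, seen = [], set()
    def visit(i):
        if i not in seen:
            seen.add(i)
            for j in parents[i]:
                visit(j)
            order.append(i)
    for i in parents:
        visit(i)
    return order

class CategoricalSCM:
    """Ground-truth SCM over categorical variables with K categories each.
    cpds[i] maps a tuple of parent values to a length-K probability vector."""
    def __init__(self, parents, cpds, K):
        self.parents, self.cpds, self.K = parents, cpds, K
        self.order = topological_order(parents)  # assumes the graph is a DAG

    def sample(self, intervention=None):
        """Ancestral sampling. intervention = (target, new_cpd) replaces one
        conditional (a soft intervention); nothing persists across calls."""
        x = {}
        for i in self.order:
            pa = tuple(x[j] for j in self.parents[i])
            cpd = (intervention[1] if intervention is not None
                   and intervention[0] == i else self.cpds[i])
            x[i] = int(rng.choice(self.K, p=cpd[pa]))
        return x
```

Note how the simulator enforces the assumptions above by construction : interventions target a single variable , change only a conditional ( soft ) , and are retracted simply because `sample` never mutates the stored CPDs .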
This paper aims to extend the continuous optimization approach to causal discovery to handle interventional data as well as observational data. It describes a method for learning the causal structure over a set of categorical variables and reports strong empirical performance. However, no theoretical guarantee or analysis is provided, which is a significant weakness in my view. It also makes no comment on or comparison to a paper that has essentially the same goal, https://arxiv.org/pdf/2007.01754.pdf. The latter paper seems to me more principled and convincing.
SP:198d7f650c930a1423f7f30688cd2f73d2719920
Improved Autoregressive Modeling with Distribution Smoothing
1 INTRODUCTION . Autoregressive models have exhibited promising results in a variety of downstream tasks . For instance , they have shown success in compressing images ( Minnen et al. , 2018 ) , synthesizing speech ( Oord et al. , 2016a ) and modeling complex decision rules in games ( Vinyals et al. , 2019 ) . However , the sample quality of autoregressive models on real-world image datasets is still lacking . Poor sample quality might be explained by the manifold hypothesis : many real world data distributions ( e.g . natural images ) lie in the vicinity of a low-dimensional manifold ( Belkin & Niyogi , 2003 ) , leading to complicated densities with sharp transitions ( i.e . high Lipschitz constants ) , which are known to be difficult to model for density models such as normalizing flows ( Cornish et al. , 2019 ) . Since each conditional of an autoregressive model is a 1-dimensional normalizing flow ( given a fixed context of previous pixels ) , a high Lipschitz constant will likely hinder learning of autoregressive models . Another reason for poor sample quality is the “ compounding error ” issue in autoregressive modeling . To see this , we note that an autoregressive model relies on the previously generated context to make a prediction ; once a mistake is made , the model is likely to make another mistake which compounds ( Kääriäinen , 2006 ) , eventually resulting in questionable and unrealistic samples . Intuitively , one would expect the model to assign low-likelihoods to such unrealistic images , however , this is not always the case . In fact , the generated samples , although appearing unrealistic , often are assigned high-likelihoods by the autoregressive model , resembling an “ adversarial example ” ( Szegedy et al. , 2013 ; Biggio et al. , 2013 ) , an input that causes the model to output an incorrect answer with high confidence . Inspired by the recent success of randomized smoothing techniques in adversarial defense ( Cohen et al. , 2019 ) , we propose to apply randomized smoothing to autoregressive generative modeling . More specifically , we propose to address a density estimation problem via a two-stage process . Unlike Cohen et al . ( 2019 ) which applies smoothing to the model to make it more robust , we apply smoothing to the data distribution . Specifically , we convolve a symmetric and stationary noise distribution with the data distribution to obtain a new “ smoother ” distribution . In the first stage , we model the smoothed version of the data distribution using an autoregressive model . In the second stage , we reverse the smoothing process—a procedure which can also be understood as “ denoising ” —by either applying a gradient-based denoising approach ( Alain & Bengio , 2014 ) or introducing another conditional autoregressive model to recover the original data distribution from the smoothed one . By choosing an appropriate smoothing distribution , we aim to make each step easier than the original learning problem : smoothing facilitates learning in the first stage by making the input distribution fully supported without sharp transitions in the density function ; generating a sample given a noisy one is easier than generating a sample from scratch . We show with extensive experimental results that our approach is able to drastically improve the sample quality of current autoregressive models on several synthetic datasets and real-world image datasets , while obtaining competitive likelihoods on synthetic datasets . 
We empirically demonstrate that our method can also be applied to density estimation , image inpainting , and image denoising . 2 BACKGROUND . We consider a density estimation problem . Given D-dimensional i.i.d samples { x1 , x2 , ... , xN } from a continuous data distribution pdata ( x ) , the goal is to approximate pdata ( x ) with a model pθ ( x ) parameterized by θ . A commonly used approach for density estimation is maximum likelihood estimation ( MLE ) , where the objective is to maximize $\mathcal{L} ( \theta ) \triangleq \frac{1}{N} \sum_{i=1}^{N} \log p_{\theta} ( x_i )$ . 2.1 AUTOREGRESSIVE MODELS . An autoregressive model ( Larochelle & Murray , 2011 ; Salimans et al. , 2017 ) decomposes a joint distribution pθ ( x ) into the product of univariate conditionals : $$p_{\theta} ( x ) = \prod_{i=1}^{D} p_{\theta} ( x_i \mid x_{<i} ) , \tag{1}$$ where $x_i$ stands for the i-th component of x , and $x_{<i}$ refers to the components with indices smaller than i . In general , an autoregressive model parameterizes each conditional $p_{\theta} ( x_i \mid x_{<i} )$ using a prespecified density function ( e.g . mixture of logistics ) . This bounds the capacity of the model by limiting the number of modes for each conditional . Although autoregressive models have achieved top likelihoods amongst all types of density based models , their sample quality is still lacking compared to energy-based models ( Du & Mordatch , 2019 ) and score-based models ( Song & Ermon , 2019 ) . We believe this can be caused by the following two reasons . 2.2 MANIFOLD HYPOTHESIS . Several existing methods ( Roweis & Saul , 2000 ; Tenenbaum et al. , 2000 ) rely on the manifold hypothesis , i.e . that real-world high-dimensional data tends to lie on a low-dimensional manifold ( Narayanan & Mitter , 2010 ) . If the manifold hypothesis is true , then the density of the data distribution is not well defined in the ambient space ; if the manifold hypothesis holds only approximately and the data lies in the vicinity of a manifold , then only points that are very close to the manifold would have high density , while all other points would have close to zero density . Thus we may expect the data density around the manifold to have large first-order derivatives , i.e . the density function has a high Lipschitz constant ( if not infinity ) . To see this , let us consider a 2-d example where the data distribution is a thin ring distribution ( almost a unit circle ) formed by rotating the 1-d Gaussian distribution $\mathcal{N} ( 1 , 0.01^2 )$ around the origin . The density function of the ring has a high Lipschitz constant near the “ boundary ” . Let us focus on a data point travelling along the diagonal as shown in the leftmost panel in figure 2 . We plot the first-order directional derivatives of the density for the point as it approaches the boundary from the inside , then lands on the ring , and finally moves outside the ring ( see figure 2 ) . As we can see , when the point is far from the boundary , the derivative has a small magnitude . When the point moves closer to the boundary , the magnitude increases and changes significantly near the boundary even with small displacements in the trajectory . However , once the point has landed on the ring , the magnitude starts to decrease . As it gradually moves off the ring , the magnitude first increases and then decreases just like when the point approached the boundary from the inside . It has been observed that certain likelihood models , such as normalizing flows , exhibit pathological behaviors on data distributions whose densities have high Lipschitz constants ( Cornish et al. , 2019 ) .
Since each conditional of an autoregressive model is a 1-d normalizing flow given a fixed context , a high Lipschitz constant on the data density could also hinder learning of autoregressive models . 2.3 COMPOUNDING ERRORS IN AUTOREGRESSIVE MODELING . Autoregressive models can also be susceptible to compounding errors from the conditional distributions ( Lamb et al. , 2016 ) during sampling time . We notice that an autoregressive model pθ ( x ) learns the joint density pdata ( x ) by matching each of the conditionals $p_{\theta} ( x_i \mid x_{<i} )$ with $p_{data} ( x_i \mid x_{<i} )$ . In practice , we typically have access to a limited amount of training data , which makes it hard for an autoregressive model to capture all the conditional distributions correctly due to the curse of dimensionality . During sampling , since a prediction is made based on the previously generated context , once a mistake is made at a previous step , the model is likely to make more mistakes in the later steps , eventually generating a sample x̂ that is far from being an actual image , but is mistakenly assigned a high likelihood by the model . The generated image x̂ , being unrealistic but assigned a high likelihood , resembles an adversarial example , i.e. , an input that causes the model to make mistakes . Recent works ( Cohen et al. , 2019 ) in adversarial defense have shown that random noise can be used to improve the model ’ s robustness to adversarial perturbations , a process during which adversarial examples that are close to actual data are generated to fool the model . We hypothesize that such an approach can also be applied to improve an autoregressive modeling process by making the model less vulnerable to compounding errors occurring during density estimation . Inspired by the success of randomized smoothing in adversarial defense ( Cohen et al. , 2019 ) , we propose to apply smoothing to autoregressive modeling to address the problems mentioned above . 3 GENERATIVE MODELS WITH DISTRIBUTION SMOOTHING . In the following , we propose to decompose a density estimation task into a smoothed data modeling problem followed by an inverse smoothing problem where we recover the true data density from the smoothed one . 3.1 RANDOMIZED SMOOTHING PROCESS . Unlike Cohen et al . ( 2019 ) , where randomized smoothing is applied to a model , we apply smoothing directly to the data distribution pdata ( x ) . To do this , we introduce a smoothing distribution q ( x̃|x ) , a distribution that is symmetric and stationary ( e.g . a Gaussian or Laplacian kernel ) , and convolve it with pdata ( x ) to obtain a new distribution $q ( \tilde{x} ) \triangleq \int q ( \tilde{x} \mid x ) \, p_{data} ( x ) \, dx$ . When q ( x̃|x ) is a normal distribution , this convolution process is equivalent to perturbing the data distribution with Gaussian noise , which , intuitively , will make the data distribution smoother . In the following , we formally prove that convolving a 1-d distribution pdata ( x ) with a suitable noise can indeed “ smooth ” pdata ( x ) . Theorem 1 . Given a continuous and bounded 1-d distribution pdata ( x ) that is supported on R , for any 1-d distribution q ( x̃|x ) that is symmetric ( i.e . q ( x̃|x ) = q ( x|x̃ ) ) , stationary ( i.e . translation invariant ) and satisfies $\lim_{x \to \infty} p_{data} ( x ) \, q ( x \mid \tilde{x} ) = 0$ for any given x̃ , we have $\mathrm{Lip} ( q ( \tilde{x} ) ) \le \mathrm{Lip} ( p_{data} ( x ) )$ , where $q ( \tilde{x} ) \triangleq \int q ( \tilde{x} \mid x ) \, p_{data} ( x ) \, dx$ and Lip ( · ) denotes the Lipschitz constant of the given 1-d function .
Theorem 1 shows that convolving a 1-d data distribution pdata ( x ) with a suitable noise distribution q ( x̃|x ) ( e.g . $\mathcal{N} ( \tilde{x} \mid x , \sigma^2 )$ ) can reduce the Lipschitzness ( i.e . increase the smoothness ) of pdata ( x ) . We provide the proof of Theorem 1 in Appendix A . Given pdata ( x ) with a high Lipschitz constant , we empirically verify that density estimation becomes an easier task on the smoothed distribution q ( x̃ ) than directly on pdata ( x ) . To see this , we visualize a 1-d example in figure 3a , where we want to model a ten-mode data distribution with a mixture of logistics model . If our model has three logistic components , there is almost no way for the model , which only has three modes , to perfectly fit this data distribution , which has ten separate modes with sharp transitions . The model , after training ( see figure 3a ) , mistakenly assigns a much higher density to the low density regions between nearby modes . If we convolve the data distribution with $q ( \tilde{x} \mid x ) = \mathcal{N} ( \tilde{x} \mid x , 0.5^2 )$ , the new distribution becomes smoother ( see figure 3b ) and can be captured reasonably well by the same mixture of logistics model with only three modes ( see figure 3b ) . Comparing the same model ’ s performance on the two density estimation tasks , we can see that the model is doing a better job at modeling the smoothed version of the data distribution than the original data distribution , which has a high Lipschitz constant . This smoothing process can also be understood as a regularization term for the original maximum likelihood objective ( on the un-smoothed data distribution ) , encouraging the learned model to be smooth , as formalized by the following statement : Proposition 1 ( Informal ) . Assume that the symmetric and stationary smoothing distribution q ( x̃|x ) has small variance and negligible higher order moments ; then $$\mathbb{E}_{p_{data} ( x ) } \mathbb{E}_{q ( \tilde{x} \mid x ) } \left[ \log p_{\theta} ( \tilde{x} ) \right] \approx \mathbb{E}_{p_{data} ( x ) } \Big[ \log p_{\theta} ( x ) + \frac{\eta}{2} \sum_i \frac{\partial^2 \log p_{\theta}}{\partial x_i^2} \Big] ,$$ for some constant η . Proposition 1 shows that our smoothing process provides a regularization effect on the original objective $\mathbb{E}_{p_{data} ( x ) } [ \log p_{\theta} ( x ) ]$ when no noise is added , where the regularization aims to maximize $\frac{\eta}{2} \sum_i \frac{\partial^2 \log p_{\theta}}{\partial x_i^2}$ . Since samples from pdata should be close to a local maximum of the model , this encourages the second-order gradients computed at a data point x to become closer to zero ( if one were positive , then x would not be a local maximum ) , creating a smoothing effect . This extra term is also the trace of the score function ( up to a multiplicative constant ) that can be found in the score matching objective ( Hyvärinen , 2005 ) , which is closely related to many denoising methods ( Vincent , 2011 ; Hyvärinen , 2008 ) . This regularization effect can , intuitively , increase the generalization capability of the model . In fact , it has been demonstrated empirically that training with noise can lead to improvements in network generalization ( Sietsma & Dow , 1991 ; Bishop , 1995 ) . Our argument is also similar to that used in ( Bishop , 1995 ) except that we consider a more general generative modeling case as opposed to supervised learning with squared error . We provide the formal statement and proof of Proposition 1 in Appendix A .
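A small numerical check of the effect Theorem 1 describes can be done on a grid : build a peaky ten-mode 1-d density , convolve it with $\mathcal{N} ( 0 , 0.5^2 )$ noise , and compare finite-difference estimates of the two Lipschitz constants . The grid resolution and the mode widths below are illustrative choices that merely echo the example above .

```python
import numpy as np

xs = np.linspace(-4.0, 4.0, 8001)
dx = xs[1] - xs[0]

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Peaky ten-mode data density: narrow Gaussians in place of the logistics.
p = sum(gauss(xs, m, 0.02) for m in np.linspace(-2.0, 2.0, 10)) / 10.0

# Smoothing step: convolve with N(0, 0.5^2) noise.
kernel = gauss(xs, 0.0, 0.5)
q = np.convolve(p, kernel, mode="same") * dx

lip = lambda f: float(np.max(np.abs(np.diff(f)) / dx))  # finite-difference estimate
print(f"Lip(p_data) ~ {lip(p):.1f}   Lip(q) ~ {lip(q):.3f}")  # q is orders of magnitude smoother
```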
Autoregressive models have demonstrated their potential for modeling images and other types of complex data with high flexibility (particularly in density estimation). However, their sampling ability is weaker, as explained in the paper. The authors show that one of the main weaknesses of autoregressive models comes from the propagation of mistakes due to the mismatch of conditionals. Inspired by the promising results of randomized smoothing in adversarial defense (Cohen et al. 2019), the authors propose a similar strategy. Adding Gaussian noise and then modeling the smoothed data makes it easier for the autoregressive density to capture the true data distribution. The benefits of this strategy are empirically demonstrated in the experiments.
SP:d8c4980cf2187b549f2f2a4fbb2fba4101337459
CorrAttack: Black-box Adversarial Attack with Structured Search
We present a new method for score-based adversarial attack, where the attacker queries the loss oracle of the target model. Our method employs a parameterized search space with a structure that captures the relationship of the gradient of the loss function. We show that searching over the structured space can be approximated by a time-varying contextual bandits problem, where the attacker uses features of the associated arm to make modifications to the input and receives an immediate reward equal to the reduction of the loss function. The time-varying contextual bandits problem can then be solved by a Bayesian optimization procedure, which can take advantage of the features of the structured action space. Experiments on ImageNet and the Google Cloud Vision API demonstrate that the proposed method achieves state-of-the-art success rates and query efficiency for both undefended and defended models.

1 INTRODUCTION.

Although deep learning has many applications, it is known that neural networks are vulnerable to adversarial examples, which are small perturbations of inputs that can fool neural networks into making wrong predictions (Szegedy et al., 2014). Adversarial noise can easily be found when the neural model is known, a setting referred to as white-box attack (Kurakin et al., 2016). However, in real-world scenarios models are often unknown; this setting is referred to as black-box attack. Some methods (Liu et al., 2016; Papernot et al., 2016) use transfer-based attack, which generates adversarial examples on a substitute model and transfers the adversarial noise to the target model. However, transferability is limited and its effectiveness relies heavily on the similarity between the networks (Huang & Zhang, 2020). If two networks are very different, transfer-based methods have low success rates. In practice, most computer vision APIs, such as the Google Cloud Vision API, allow users to access the scores or probabilities of the classification results. Therefore, the attacker may query the black-box model and perform zeroth-order optimization to find an adversarial example without knowledge of the target model. Due to the availability of scores, this scenario is called score-based attack. There has been a line of studies on black-box attack which directly estimate the gradient direction of the underlying model and apply (stochastic) gradient descent to the input image (Ilyas et al., 2018; 2019; Chen et al., 2017; Huang & Zhang, 2020; Tu et al., 2018; Li et al., 2019). In this paper, we take another approach and formulate score-based attack as a time-varying contextual bandits problem. At each state, the attacker may change the adversarial perturbation and receives as reward the reduction of the loss, and the attacker receives features about the arms before making a decision. By limiting the action space to image blocks, the associated bandits problem exhibits local correlation structure and a slowly varying property suitable for learning. Therefore, we may use the location and other features of the blocks to estimate the reward for the future selection of actions. Using the above insights, we propose a new method called CorrAttack, which utilizes the local correlation structure and the slowly varying property of the underlying bandits problem.
CorrAttack uses Bayesian optimization with Gaussian process regression (Rasmussen, 2003) to model the correlation and select optimal actions. A forgetting strategy is added to the algorithm so that the Gaussian process regression can handle time-varying changes. CorrAttack can effectively find blocks with the largest rewards. The resulting method achieves much lower average query counts and higher success rates than prior methods with a similar action space (Moon et al., 2019). It is worth noting that BayesOpt (Ru et al., 2020) and Bayes-Attack (Shukla et al., 2019) also employ Bayesian optimization for score-based attack. However, their Gaussian process regression directly models the loss as a function of the image, whose dimension can be more than a thousand. Therefore, they are slow, especially BayesOpt, which uses a slow additive kernel. CorrAttack, on the other hand, searches over a much more limited action space and models the reward as a function of a low-dimensional feature. Therefore, the optimization in CorrAttack is more efficient, and the method is significantly faster than BayesOpt. We summarize the contributions of this work as follows:

1. We formulate score-based adversarial attack as a time-varying contextual bandits problem, and show that the reward function has a slowly varying property. In our formulation, the attacker can take advantage of features to model the reward of the arms with learning techniques. Compared to the traditional approach, the use of learning in the proposed framework greatly improves the efficiency of searching over optimal actions.

2. We propose a new method, CorrAttack, which uses Bayesian optimization with Gaussian process regression to learn the reward of each action, using the features of the arms.

3. Experiments show that CorrAttack achieves state-of-the-art performance on ImageNet and the Google Cloud Vision API for both defended and undefended models.

2 RELATED WORK.

There has been a line of work focusing on black-box adversarial attack. Here, we give a brief review of existing methods.

Transfer-Based Attack. Transfer-based attack assumes the transferability of adversarial examples across different neural networks. It starts with a substitute model in the same domain as the target model. Adversarial examples can easily be generated on the white-box substitute model and transferred to attack the target model (Papernot et al., 2016). The approach, however, depends heavily on the similarity of the networks: if the two networks are distinct, the success rate of the transferred attack decreases rapidly (Huang & Zhang, 2020). Besides, in practice we may not have access to the data for training the substitute model.

Score-based Attack. Many approaches estimate the gradient from the output scores of the target network. However, the high dimensionality of input images makes naive coordinate-wise search impossible, as it would require millions of queries. ZOO (Chen et al., 2017) is an early gradient-estimation method, which estimates the gradient of an image block and performs block-wise gradient descent. NES (Wierstra et al., 2008) and CMA-ES (Hansen, 2016) are two evolution strategies that can perform query-efficient score-based attack (Ilyas et al., 2018; Meunier et al., 2019). Instead of the gradient itself, SignHunter (Al-Dujaili & O'Reilly, 2020a) estimates only the sign of the gradient to reduce complexity. AutoZOOM (Tu et al.
, 2018) uses a bilinear transformation or an autoencoder to reduce the sampling space and accelerate the optimization process. In the same spirit, data priors can be used to improve query efficiency (Ilyas et al., 2019). Besides, MetaAttack (Du et al., 2020) takes a meta-learning approach to learn gradient patterns from prior information, which reduces the queries needed to attack the target model. Many zeroth-order optimization methods for black-box attack rely on gradient estimation. However, some works use gradient-free methods to perform black-box attack. BayesOpt and Bayes-Attack (Ru et al., 2020; Shukla et al., 2019) employ Bayesian optimization to find adversarial examples. They use Gaussian process regression on an embedding and apply a bilinear transformation to resize the embedding to the size of the image. Although the bilinear transformation alleviates the high dimensionality of images, the dimension of their embeddings is still in the thousands, which makes Bayesian optimization ineffective and computationally expensive. A different method, PARSI, poses the attack under the $\ell_\infty$ norm as a discrete optimization problem over $\{-\varepsilon, \varepsilon\}^d$ (Moon et al., 2019). It uses a Lazy-Greedy algorithm to search over the space $\{-\varepsilon, \varepsilon\}^d$ to find an adversarial example. SimBA (Guo et al., 2018) also employs a discrete search space, targeting the $\ell_2$ norm.

Decision-based Attack. Decision-based attack assumes the attacker can only obtain the output label of the model. Boundary Attack and its variants (Brendel et al., 2017; Chen et al., 2020; Li et al., 2020) are designed for this setting. However, the information received by the attacker is much less than in score-based attack, and it takes many more queries to successfully attack an image.

3 PRELIMINARIES.

A Gaussian process (Rasmussen, 2003) is a prior distribution defined on some bounded set $\mathcal{Z}$, determined by a mean function $\mu : \mathcal{Z} \to \mathbb{R}$ and a covariance kernel $\kappa : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$. Given $n$ observations $\mathcal{D}_n = \{(z_i, f(z_i))\}_{i=1}^n$, the prior distribution on $f(z_{1:n})$ is

$$f(z_{1:n}) \sim \mathcal{N}(\mu_0(z_{1:n}), \kappa_0(z_{1:n}, z_{1:n})), \quad (1)$$

where we use compact notation for functions applied to collections of input points: $z_{1:n}$ denotes the sequence $z_1, \dots, z_n$, $f(z_{1:n}) = [f(z_1), \dots, f(z_n)]$, $\mu_0(z_{1:n}) = [\mu_0(z_1), \dots, \mu_0(z_n)]$, and $\kappa_0(z_{1:n}, z_{1:n})$ is the $n \times n$ matrix with entries $\kappa_0(z_i, z_j)$. If we now wish to infer the value of $f$ at some new point $z$, the posterior process $f(z) | \mathcal{D}_n$ is also a Gaussian process (GP) with mean $\mu_n$ and variance $\sigma_n^2$:

$$f(z) | \mathcal{D}_n \sim \mathcal{N}(\mu_n(z), \sigma_n^2(z)), \quad (2)$$
$$\mu_n(z) = \kappa_0(z, z_{1:n}) \kappa_0(z_{1:n}, z_{1:n})^{-1} (f(z_{1:n}) - \mu_0(z_{1:n})) + \mu_0(z),$$
$$\sigma_n^2(z) = \kappa_0(z, z) - \kappa_0(z, z_{1:n}) \kappa_0(z_{1:n}, z_{1:n})^{-1} \kappa_0(z_{1:n}, z).$$

As an optimization method for maximizing a function $f$, Bayesian optimization models the function to decide where to evaluate the next point $z$. Assuming we have already obtained observations $\mathcal{D}_{t-1} = \{(z_i, f(z_i))\}_{i=1}^{t-1}$, to determine the next point $z_t$ for evaluation, we first use the posterior GP to define an acquisition function $\varphi_t : \mathcal{Z} \to \mathbb{R}$, which models the utility of evaluating $f(z)$ for any $z \in \mathcal{Z}$. We then evaluate $f(z_t)$ with

$$z_t = \arg\max_{z \in \mathcal{Z}} \varphi_t(z). \quad (3)$$
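A minimal NumPy sketch of the GP posterior in Eq. (2), assuming a zero prior mean and an RBF kernel (an illustration, not the paper's implementation):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel kappa_0 on 1-d inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(z_train, f_train, z_query, ls=1.0, jitter=1e-8):
    K = rbf(z_train, z_train, ls) + jitter * np.eye(len(z_train))
    k_star = rbf(z_query, z_train, ls)            # kappa_0(z, z_{1:n})
    K_inv = np.linalg.inv(K)
    mu = k_star @ K_inv @ f_train                 # posterior mean mu_n(z)
    var = rbf(z_query, z_query, ls).diagonal() - np.einsum(
        "ij,jk,ik->i", k_star, K_inv, k_star)     # posterior variance sigma_n^2(z)
    return mu, var

z = np.array([-2.0, 0.0, 1.5])
f = np.sin(z)
mu, var = gp_posterior(z, f, np.linspace(-3, 3, 7))
```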
In this work, we use the expected improvement (EI) acquisition function (Mockus et al., 1978),

$$\varphi_t(z) = \sqrt{\sigma_n^2(z)} \big( \gamma(z) \Phi(\gamma(z)) + \phi(\gamma(z)) \big) \quad \text{with} \quad \gamma(z) = \frac{\mu_n(z) - f(z_{best})}{\sqrt{\sigma_n^2(z)}}, \quad (4)$$

which measures the expected improvement over the current best value $z_{best} = \arg\max_{z_i} f(z_i)$ according to the posterior GP. Here $\Phi(\cdot)$ and $\phi(\cdot)$ are the cdf and pdf of $\mathcal{N}(0, I)$, respectively.
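A minimal NumPy sketch of the EI acquisition in Eq. (4), reusing the hypothetical `gp_posterior` helper above (illustrative, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, var, f_best, eps=1e-12):
    sigma = np.sqrt(np.maximum(var, eps))     # guard against tiny negative variance
    gamma = (mu - f_best) / sigma
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

# Next query point: z_t = argmax_z EI(z) over a candidate grid, as in Eq. (3).
# zs = np.linspace(-3, 3, 200); mu, var = gp_posterior(z, f, zs)
# z_t = zs[np.argmax(expected_improvement(mu, var, f.max()))]
```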
This work considers the important problem of generating adversarial examples to attack a black-box model. The paper proposes a new approach that views an adversarial example as the result of a sequence of pixel changes applied to a benign instance. The adversarial generation problem can therefore be cast as a bandits problem, so Bayesian optimization can be leveraged to search for an instance that maximizes the change in the loss function through a sequence of pixel changes. The evaluation is comprehensive and demonstrates that fewer black-box queries are needed to achieve a higher attack success rate.
SP:5918a2c105a901f8de4bba248dc283a476d9beac
A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima
1 INTRODUCTION.

In recent years, deep learning (LeCun et al., 2015) has achieved great empirical success in various application areas. Due to the over-parametrization and highly complex loss landscape of deep networks, optimizing deep networks is a difficult task. Stochastic Gradient Descent (SGD) and its variants are the mainstream methods for training deep networks. Empirically, SGD can usually find flat minima among a large number of sharp minima and local minima (Hochreiter & Schmidhuber, 1995; 1997). Many papers have reported that learning flat minima is closely related to generalization (Hardt et al., 2016; Zhang et al., 2017a; Arpit et al., 2017; Hoffer et al., 2017; Dinh et al., 2017; Neyshabur et al., 2017; Wu et al., 2017; Dziugaite & Roy, 2017; Kleinberg et al., 2018). Some researchers study flatness itself: they try to measure flatness (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Sagun et al., 2017; Yao et al., 2018), rescale flatness (Tsuzuku et al., 2019; Xie et al., 2020b), and find flatter minima (Hoffer et al., 2017; Chaudhari et al., 2017; He et al., 2019b; Xie et al., 2020a). However, we still lack a quantitative theory that answers why deep learning dynamics selects a flat minimum. The diffusion theory is an important theoretical tool for understanding how deep learning dynamics works. It models the diffusion process of the probability density of the parameters instead of the model parameters themselves. The density diffusion process of Stochastic Gradient Langevin Dynamics (SGLD) under injected isotropic noise has been discussed by Sato & Nakagawa (2014); Raginsky et al. (2017); Zhang et al. (2017b); Xu et al. (2018). Zhu et al. (2019) revealed that the anisotropic diffusion of SGD often leads to flatter minima than isotropic diffusion. A few papers have quantitatively studied the diffusion process of SGD under the isotropic gradient noise assumption. Jastrzębski et al. (2017) first studied the minima selection probability of SGD. Smith & Le (2018) presented a Bayesian perspective on the generalization of SGD. Wu et al. (2018) studied the escape problem of SGD from a dynamical perspective and obtained qualitative conclusions on the effects of batch size, learning rate, and sharpness. Hu et al. (2019) quantitatively showed that the mean escape time of SGD depends exponentially on the inverse learning rate. Achille & Soatto (2019) obtained a related proposition that describes the mean escape time in terms of a free energy that depends on the Fisher information. Li et al. (2017) analyzed the Stochastic Differential Equation (SDE) of adaptive gradient methods. Nguyen et al. (2019) mainly contributed to closing the theoretical gap between continuous-time and discrete-time dynamics under isotropic heavy-tailed noise. However, these papers mainly analyzed the diffusion process under parameter-independent and isotropic gradient noise, while stochastic gradient noise (SGN) is highly parameter-dependent and anisotropic in deep learning dynamics. Thus, they failed to quantitatively formulate how SGD selects flat minima, which depends closely on the Hessian-dependent covariance structure of SGN. We try to bridge the gap between qualitative knowledge and quantitative theory for SGD in the presence of parameter-dependent and anisotropic SGN.
Mainly based on Theorem 3.2, we make four contributions:

• The proposed theory formulates the fundamental roles of gradient noise, batch size, learning rate, and the Hessian in minima selection.
• The SGN covariance is approximately proportional to the Hessian and inversely proportional to the batch size.
• Either a small learning rate or large-batch training requires exponentially many iterations to escape minima, in terms of the ratio of batch size to learning rate.
• To the best of our knowledge, we are the first to theoretically and empirically reveal that SGD favors flat minima exponentially more than sharp minima.

2 STOCHASTIC GRADIENT NOISE AND SGD DYNAMICS.

In this section we introduce the foundations needed for the proposed diffusion theory. We denote the data samples as $\{x_j\}_{j=1}^m$, the model parameters as $\theta$, and the loss function over data sample $x$ as $L(\theta, x)$. For simplicity, we denote the training loss as $L(\theta)$. Following Mandt et al. (2017), we may write the SGD dynamics as

$$\theta_{t+1} = \theta_t - \eta \frac{\partial \hat{L}(\theta_t)}{\partial \theta_t} = \theta_t - \eta \frac{\partial L(\theta_t)}{\partial \theta_t} + \eta C(\theta_t)^{\frac{1}{2}} \zeta_t, \quad (1)$$

where $\hat{L}(\theta)$ is the loss of one minibatch, $\zeta_t \sim \mathcal{N}(0, I)$, and $C(\theta)$ is the gradient noise covariance matrix. The classic approach is to model SGN by Gaussian noise $\mathcal{N}(0, C(\theta))$ (Mandt et al., 2017; Smith & Le, 2018; Chaudhari & Soatto, 2018).

Stochastic Gradient Noise Analysis. We first note that the SGN we study is introduced by minibatch training, $C(\theta_t)^{\frac{1}{2}} \zeta_t = \frac{\partial L(\theta_t)}{\partial \theta_t} - \frac{\partial \hat{L}(\theta_t)}{\partial \theta_t}$, i.e., the difference between gradient descent and stochastic gradient descent. According to the Generalized Central Limit Theorem (Gnedenko et al., 1954), the mean of many infinite-variance random variables converges to a stable distribution, while the mean of many finite-variance random variables converges to a Gaussian distribution. As SGN is finite in practice, we believe the Gaussian approximation of SGN is reasonable. Simsekli et al. (2019) argued that SGN is Lévy noise (stable variables) rather than Gaussian noise. They presented empirical evidence that SGN appears heavy-tailed and that the heavy-tailed distribution looks closer to a stable distribution than to a Gaussian distribution. However, this research line (Simsekli et al., 2019; Nguyen et al., 2019) relies on a hidden, strict assumption: SGN must be isotropic and obey the same distribution across dimensions. Simsekli et al. (2019) computed "SGN" across the $n$ model parameters and regarded it as $n$ samples drawn from a single univariate distribution, which is why a single tail-index for all parameters was studied in Simsekli et al. (2019). The arguments in Simsekli et al. (2019) do not necessarily hold for parameter-dependent and anisotropic Gaussian noise. In our paper, SGN computed over different minibatches obeys an $n$-variate Gaussian distribution, which can be parameter-dependent and anisotropic. In Figure 1, we empirically verify that SGN closely resembles Gaussian noise rather than heavy-tailed Lévy noise. We reproduce the experiment of Simsekli et al. (2019) to show that gradient noise is approximately Lévy noise only if it is computed across parameters. Figure 1 thus suggests that the contradictory observations come from different formulations of gradient noise: Simsekli et al. (2019) studied the distribution of SGN as a single univariate distribution, while we relax it to an $n$-variate distribution.
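A minimal NumPy sketch (not the paper's code) of estimating the SGN covariance $C(\theta)$ from per-sample gradients of a toy least-squares loss $L(\theta, x_j) = \frac{1}{2}(a_j^\top \theta - y_j)^2$, following the noise term in Eq. (1); all sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, B = 2000, 5, 32                          # samples, parameters, batch size
A = rng.normal(size=(m, n)); y = rng.normal(size=m)
theta = rng.normal(size=n)

per_sample_grads = A * (A @ theta - y)[:, None]   # shape (m, n)
g_full = per_sample_grads.mean(axis=0)            # full-batch gradient

# Empirical covariance of the minibatch gradient (the SGN covariance):
centered = per_sample_grads - g_full
C = centered.T @ centered / (m * B)
print(np.linalg.eigvalsh(C))                      # anisotropic in general
```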
Our empirical analysis in Figure 1 holds well at least when the batch size $B$ is larger than 16, which is common in practice. Similar empirical evidence is observed when training ResNet18 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009); see Appendix C. Panigrahi et al. (2019) also observed that for batch sizes 256 and above, the distribution of SGN is best described as Gaussian, at least in the early phases of training. Comparing our results with Panigrahi et al. (2019), we note that the Gaussianity of SGN may depend on more, still unknown, factors. First, SGN on random models is more Gaussian than on well-trained models. Second, the layer/network matters, because SGN on some layers/networks is more Gaussian than on others. The isotropic gradient noise assumption is too rough to capture the Hessian-dependent covariance structure of SGN, which we study in Figure 2. Our theory, which focuses on parameter-dependent and anisotropic SGN, brings a large improvement over existing parameter-independent and isotropic noise models, much as Simsekli et al. (2019) improved over the more conventional parameter-independent and isotropic Gaussian noise. A more sophisticated theory under parameter-independent anisotropic heavy-tailed noise would be interesting when the batch size is too small ($B \sim 1$) to apply the Central Limit Theorem; we leave this as future work.

SGD Dynamics. Let us replace $\eta$ by $dt$ as the unit time. Then the continuous-time dynamics of SGD (Coffey & Kalmykov, 2012) is written as

$$d\theta = -\frac{\partial L(\theta)}{\partial \theta} dt + [2D(\theta)]^{\frac{1}{2}} dW_t, \quad (2)$$

where $dW_t \sim \mathcal{N}(0, I dt)$ and $D(\theta) = \frac{\eta}{2} C(\theta)$. Note that the dynamical time $t$ in the continuous-time dynamics equals the product of the number of iterations $T$ and the learning rate $\eta$: $t = \eta T$. The associated Fokker-Planck equation is

$$\frac{\partial P(\theta, t)}{\partial t} = \nabla \cdot [P(\theta, t) \nabla L(\theta)] + \nabla \cdot \nabla D(\theta) P(\theta, t) \quad (3)$$
$$= \sum_i \frac{\partial}{\partial \theta_i} \Big[ P(\theta, t) \frac{\partial L(\theta)}{\partial \theta_i} \Big] + \sum_i \sum_j \frac{\partial^2}{\partial \theta_i \partial \theta_j} D_{ij}(\theta) P(\theta, t), \quad (4)$$

where $\nabla$ is the nabla operator and $D_{ij}$ is the element in the $i$-th row and $j$-th column of $D$. In standard SGLD, the injected gradient noise is fixed and isotropic Gaussian, $D = I$. The next question is how to formulate the SGN covariance $C(\theta)$ for SGD. Based on Smith & Le (2018), we can express the SGN covariance as

$$C(\theta) = \frac{1}{B} \Big[ \frac{1}{m} \sum_{j=1}^m \nabla L(\theta, x_j) \nabla L(\theta, x_j)^\top - \nabla L(\theta) \nabla L(\theta)^\top \Big] \approx \frac{1}{Bm} \sum_{j=1}^m \nabla L(\theta, x_j) \nabla L(\theta, x_j)^\top. \quad (5)$$

The approximation holds near critical points, because the gradient noise variance dominates the gradient mean there. We know the observed Fisher information matrix satisfies $\mathrm{FIM}(\theta) \approx H(\theta)$ near minima, referring to Chapter 8 of Pawitan (2001). Following Jastrzębski et al. (2017); Zhu et al. (2019), we obtain

$$C(\theta) \approx \frac{1}{Bm} \sum_{j=1}^m \nabla L(\theta, x_j) \nabla L(\theta, x_j)^\top = \frac{1}{B} \mathrm{FIM}(\theta) \approx \frac{1}{B} H(\theta), \quad (6)$$

which approximately gives

$$D(\theta) = \frac{\eta}{2} C(\theta) = \frac{\eta}{2B} H(\theta) \quad (7)$$

near minima. This indicates that the SGN covariance $C(\theta)$ is approximately proportional to the Hessian $H(\theta)$ and inversely proportional to the batch size $B$. We can generalize Equation 7 to $D(\theta) = \frac{\eta}{2} C(\theta) = \frac{\eta}{2B} [H(\theta)]_+$ near critical points, where $H$ may have negative eigenvalues along some directions.
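Continuing the toy least-squares example above: a quick numerical check of Eq. (6), $C(\theta) \approx H(\theta)/B$ near the minimum. This is a sketch under the assumption of unit-variance residuals, so that the observed Fisher information matches the Hessian:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, B = 20000, 4, 32
A = rng.normal(size=(m, n))
theta_star = rng.normal(size=n)
y = A @ theta_star + rng.normal(size=m)            # unit-variance observation noise

theta_hat = np.linalg.lstsq(A, y, rcond=None)[0]   # near a critical point
g = A * (A @ theta_hat - y)[:, None]               # per-sample gradients
C = (g.T @ g) / (m * B)                            # Eq. (5) approximation
H = (A.T @ A) / m                                  # Hessian of the quadratic loss
print(np.abs(C - H / B).max() / np.abs(H / B).max())  # small ratio, typically < 0.05
```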
We use $[\cdot]_+$ to denote the positive semidefinite transformation of a symmetric matrix: if we have the eigendecomposition $H = U \, \mathrm{diag}(H_1, \dots, H_{n-1}, H_n) \, U^\top$, then $[H]_+ = U \, \mathrm{diag}(|H_1|, \dots, |H_{n-1}|, |H_n|) \, U^\top$. We empirically verify this relation in Figure 2 for pretrained fully-connected networks, and a follow-up paper, Xie et al. (2020c), first verified this relation for randomly initialized fully-connected networks on real-world datasets. The Pearson correlation is up to 0.999 for pretrained networks. We note that the relation still approximately holds even for a randomly initialized network, which is far from critical points. The correlation is especially high along the flat directions, which have small-magnitude Hessian eigenvalues (Xie et al., 2020c). We emphasize that previous papers using the isotropic Lévy or Gaussian noise approximation all failed to capture this core relation in deep learning dynamics.
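A minimal NumPy sketch of the $[\cdot]_+$ transformation defined above: flip the sign of negative eigenvalues of a symmetric matrix via its eigendecomposition.

```python
import numpy as np

def psd_plus(H):
    eigvals, U = np.linalg.eigh(H)          # H = U diag(H_1..H_n) U^T
    return U @ np.diag(np.abs(eigvals)) @ U.T

H = np.array([[2.0, 0.0], [0.0, -0.5]])     # a saddle-like Hessian
print(psd_plus(H))                          # diag(2.0, 0.5)
```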
The paper develops a density diffusion theory to reveal how minima selection quantitatively depends on minima sharpness and the hyperparameters. It shows theoretically and empirically that SGD favors flat minima exponentially more than sharp minima. In particular, the paper analyzes how the mean escape time from a loss valley depends on the Hessians at local minima and saddle points, for both SGD and SGLD, and reveals the exponential dependence of the mean escape time on sharpness. Experiments on real-world data verify the theoretical results on the mean escape time.
SP:9403fa2679f18af78aed2e81b75eb39abeb722eb
Differentiable Combinatorial Losses through Generalized Gradients of Linear Programs
1 INTRODUCTION.

Combinatorial optimization problems, such as shortest path in a weighted directed graph, minimum spanning tree in a weighted undirected graph, or optimal assignment of tasks to workers, play a central role in many computer science applications. We have highly refined, efficient algorithms for solving these fundamental problems (Cormen et al., 2009; Schrijver, 2003). However, while we can easily find, for example, the minimum spanning tree in a graph, the total weight of the tree as a function of the graph's edge weights is not differentiable. This hinders using solutions to combinatorial problems as criteria in training models that rely on differentiability of the objective function with respect to the model parameters. Losses defined by the objective value of some feasible solution to a combinatorial problem, not the optimal one, have recently been proposed for image segmentation using deep models (Zheng et al., 2015; Lin et al., 2016). These focus on a problem where some pixels in the image have segmentation labels, and the goal is to train a convolutional network that predicts segmentation labels for all pixels. For pixels with labels, a classification loss can be used. For the remaining pixels, a criterion based on a combinatorial problem (for example, the maximum flow / minimum cut problem in a regular lattice graph connecting all pixels (Boykov et al., 2001), or derived, higher-level super-pixels (Lin et al., 2016)) is often used as a loss, in an iterative process of improving discrete segmentation labels (Zheng et al., 2015; Marin et al., 2019). In this approach, the instance of the combinatorial problem is either fixed or depends only on the input to the network; for example, the similarity of neighboring pixel colors defines edge weights. The output of the neural network gives rise to a feasible, but rarely optimal, solution to that fixed instance of the combinatorial problem, and its quality is used as a loss. For example, a pixel labeling proposed by the network is interpreted as a cut in a pre-defined graph connecting the pixels. Training the network should result in improved cuts, but no attempt is made to use a solver to find an optimal cut. Here, we consider a different setup, in which each new output of the neural network gives rise to a new instance of a combinatorial problem. A combinatorial algorithm is then used to find the optimal solution to the problem defined by the output, and the objective value of the optimal solution is used as a loss. After each gradient update, the network will produce a new combinatorial problem instance, even for the same input sample. Iteratively, the network is expected to learn to produce combinatorial problem instances that have low optimal objective values. For example, in sequence-to-sequence modeling, the network outputs a new sentence that is supposed to closely match the desired sentence, leading to a new optimal sequence alignment problem to be solved. Initially, the optimal alignment will be poor, but as the network improves and the quality of the output sentences increases, the optimal alignment scores will decrease. Recently, progress in integrating combinatorial problems into differentiable models has been made by modifying combinatorial algorithms to use only differentiable elements (Tschiatschek et al., 2018; Mensch & Blondel, 2018; Chang et al.
, 2019), for example using a smoothed max instead of max in dynamic programming. Another approach executes two runs of a non-differentiable, black-box combinatorial algorithm and uses the two solutions to define a differentiable interpolation (Vlastelica Pogančić et al., 2020; Rolínek et al., 2020). Finally, differentiable linear programming and quadratic programming layers, which can model many combinatorial problems, have been proposed recently (Amos & Kolter, 2017; Agrawal et al., 2019; Wilder et al., 2019; Ferber et al., 2019). The approaches above allow differentiating through optimal solution vectors. In many cases, we are interested only in the optimal objective value, not the solution vector, and the approaches above introduce unnecessary overhead. We propose an approach for gradient-descent-based training of a network $F(x; \beta)$ for supervised learning problems involving samples $(x, y)$, with the objective criterion involving a loss term of the form

$$L(\beta) = h(\mathrm{OptSolutionObjectiveValue}(\Pi(F(x; \beta), y))),$$

where $h : \mathbb{R} \to \mathbb{R}$ is some differentiable function and $\Pi$ is a combinatorial solver for a problem instance defined by the output of the $\beta$-parameterized network $F$ for feature vector $x$ and by the true label $y$. We show that a broad class of combinatorial problems can be integrated into models trained using variants of gradient descent. Specifically, we show that for an efficiently solvable combinatorial problem that can be efficiently expressed as an integer linear program, generalized gradients of the problem's objective value with respect to the real-valued parameters defining the problem exist and can be efficiently computed from a single run of a black-box combinatorial algorithm. Using the above result, we show how generalized gradients of combinatorial problems can provide a sentence-level loss for text summarization using differentiable encoder-decoder models that involve softmax or Gumbel softmax (Jang et al., 2016), and a multi-element loss for training classification models when only weakly supervised, bagged training data is available.

2 DIFFERENTIABLE COMBINATORIAL LOSSES.

2.1 BACKGROUND ON GENERALIZED GRADIENTS.

A function $f : X \to \mathbb{R}$ defined over a convex, bounded open set $X \subseteq \mathbb{R}^p$ is Lipschitz-continuous on an open set $B \subseteq X$ if there is a finite $K \in \mathbb{R}$ such that $|f(x) - f(y)| \le K \|x - y\|$ for all $x, y \in B$. A function is locally Lipschitz-continuous if for every point $x_0$ in its domain there is a neighborhood $B_0$, an open ball centered at $x_0$, on which the function is Lipschitz-continuous. For such functions, a generalized gradient can be defined.

Definition 1 (Clarke, 1975). Let $f : X \to \mathbb{R}$ be Lipschitz-continuous in a neighborhood of $x \in X$. Then the Clarke subdifferential $\partial f(x)$ of $f$ at $x$ is defined as

$$\partial f(x) = \mathrm{conv} \Big\{ \lim_{x_k \to x} \nabla f(x_k) \Big\},$$

where the limit is over all convergent sequences involving those $x_k$ for which the gradient exists, and $\mathrm{conv}$ denotes the convex hull, that is, the smallest convex polyhedron containing all vectors from the given set. Each element of the set $\partial f(x)$ is called a generalized gradient of $f$ at $x$. The Rademacher theorem (see e.g. Evans (1992)) states that for any locally Lipschitz-continuous function the gradient exists almost everywhere, so convergent sequences can be found.
In optimization algorithms, generalized gradients can be used in the same way as subgradients (Redding & Downs, 1992); that is, non-differentiability may affect convergence in certain cases.

2.2 GRADIENT DESCENT OVER COMBINATORIAL OPTIMIZATION.

Many combinatorial problems have a linear objective function and can be intuitively expressed as integer linear programs (ILP), that is, linear programs with the additional constraint that the solution vector involves only integers. Any ILP can be reduced to a linear program. Consider an ILP

$$z^* = \mathrm{ILP}(c, A', b') := \min_u c^\top u \quad \text{s.t.} \quad A'u = b', \; u \ge 0, \; u \in \mathbb{Z}^p,$$

with optimal solution vector $u^*$ and optimal objective value $z^*$. Then there exists a corresponding linear program $\mathrm{LP}(c, A, b)$,

$$z^* = \mathrm{LP}(c, A, b) := \min_u c^\top u \quad \text{s.t.} \quad Au = b, \; u \ge 0,$$

called the ideal formulation (Wolsey, 1989), for which $u^*$ is also an optimal solution vector, with the same objective value $z^*$. For a feasible, bounded $p$-dimensional integer program, we can view the pair $(A', b')$ as a convex polyhedron $\mathcal{A}'$, the set of all feasible solutions. The pair $(A, b)$ in the ideal formulation is then defined as the set of constraints specifying the feasible set $\mathcal{A} = \mathrm{conv}\{\mathcal{A}' \cap \mathbb{Z}^p\}$. The convex hull of a subset of a convex set $\mathcal{A}'$ cannot extend beyond $\mathcal{A}'$; thus, $\mathcal{A}$ is convex, contains all integer solutions from $\mathcal{A}'$, and no other integer solutions. The number of linear constraints in the ideal formulation may be exponential in $p$ and/or in $m$, the number of original constraints in $\mathcal{A}'$. Thus, the existence of the ideal formulation of an ILP may have no practical utility for solving the ILP. For a combinatorial problem and its corresponding ILP, we use the ideal formulation of the ILP as a conceptual tool to define generalized gradients of the objective value of the optimal solution to the combinatorial problem with respect to the parameters defining the problem. Specifically, our approach first uses a single run of an efficient, black-box combinatorial algorithm to produce the optimal solution vector and the associated objective value. Then, the combinatorial problem is conceptually viewed as an instance of an ILP. A possibly exponentially large linear program (LP) equivalent to the ILP is then used, without actually being spelled out or solved, to derive generalized gradients based on the solution vector returned by the combinatorial algorithm. First, we introduce several notions of efficiency of transforming a combinatorial problem into an integer linear program, which will be convenient for defining the generalized gradients of combinatorial problems.

Definition 2. Let $P(w)$ be a combinatorial problem parameterized by a continuous vector $w \in W \subseteq \mathbb{R}^n$, where $W$ is simply connected and $n$ is the problem size, and let $k \in \mathbb{Z}$ be a constant that may depend on the problem type but not on its size. Then a combinatorial problem is

• primal-dual ∂-efficient if it can be phrased as an integer linear program involving $n$ variables, with $kn$ constraints in an LP formulation equivalent to the ILP, and the parameters $(A, b, c)$ of the LP formulation depend on $w$ through (sub)differentiable functions $c = c(w)$, $A = A(w)$, $b = b(w)$;

• primal ∂-efficient if it can be phrased as an integer linear program involving $n$ variables, the parameters $w$ of the problem influence the cost vector $c$ through a (sub)differentiable function $c = c(w)$, and do not influence the constraints $A$, $b$;
• dual ∂-efficient if it can be phrased as an integer linear program in which the number of constraints in the equivalent LP formulation is $kn$, the parameters $w$ of the problem influence $b$ through a (sub)differentiable function $b = b(w)$, and do not influence the constraint matrix $A$ nor the cost vector $c$.

The class of ∂-efficient problems includes polynomially solvable combinatorial problems whose objective function is linear in the problem parameters. Typically, the functions $c = c(w)$, $b = b(w)$ and $A = A(w)$ are either identity mappings or constants; for example, in the LP for maximum network flow, the cost vector $c$ is composed directly of edge capacities, while $A$ and $b$ are constant for a given flow network topology and do not depend on the capacities. For any polynomially solvable combinatorial problem, we can construct a poly(n)-sized Boolean circuit for the algorithm solving it. For each poly(n)-sized circuit, there is a linear program with poly(n) variables and constraints that gives the same solution (see Dasgupta et al. (2008), Chap. 7). For example, for MST in a graph with $V$ vertices and $E$ edges, Martin's ILP formulation (Martin, 1991) has only poly($V + E$) constraints, but it is an extended formulation that involves $VE$ additional variables on top of the typical $E$ variables used in the standard ILP formulations for MST. Thus, we cannot use it to construct an ILP formulation that would make MST primal-dual ∂-efficient. Alternatively, there is an ILP for MST with one binary variable per edge, in which the weight of an edge only influences the cost vector $c$; but prohibiting cycles in the solution requires a constraint for each cycle in the graph, so the number of constraints is not poly(n) for arbitrary graphs. These constraints are specified entirely by the topology of the graph, not by the edge weights, so $w$ influences neither $A$ nor $b$, meeting the conditions for primal ∂-efficiency. The MST example shows that there are problems that are primal ∂-efficient but not primal-dual ∂-efficient. Some polynomially solvable combinatorial problems are not ∂-efficient in any of the above senses. For example, fixed-rank combinatorial problems with interaction costs (Lendl et al., 2019) can be phrased succinctly as a bilinear program, but lead to prohibitively large linear programs both in the number of variables and in the number of constraints. For ∂-efficient problems, we can efficiently obtain generalized gradients of the objective value.

Theorem 1. Consider a combinatorial problem $P(w)$ of size $n$, a parameter vector $w$ from the interior of the parameter domain $W$, and an algorithm $\Pi(w)$ that solves it in time poly(n). Let $z^*$ be the optimal objective value returned by $\Pi$. Then:

• if $P$ is primal ∂-efficient, the generalized gradients $\partial z^*(w)$ exist and can be efficiently computed from $U^*$, the set of primal solutions of the ideal formulation of the integer program corresponding to $P$;

• if $P$ is dual ∂-efficient, the generalized gradients $\partial z^*(w)$ exist and can be efficiently computed from $V^*$, the set of all dual solutions of the ideal formulation of the integer program corresponding to $P$;

• if $P$ is primal-dual ∂-efficient, the generalized gradients of $z^*$ over $w$ exist and can be efficiently computed from $U^*$ and $V^*$, as defined above.

Proof.
A series of results (Gal, 1975; Freund, 1985; De Wolf & Smeers, 2000) shows that if the optimal objective value $z^* = \mathrm{LP}(c, A, b)$ of a linear program is finite at $(c, A, b)$ and in some neighborhood of $(c, A, b)$, then the generalized gradients of $z^*$ with respect to $c$, $b$, and $A$ exist and are

$$\partial z^*(c) = U^*, \quad \partial z^*(b) = V^*, \quad \partial z^*(A) = \{ -vu^\top : u \in U^*, v \in V^* \}.$$

We build on these results to obtain generalized gradients of the linear program corresponding to the combinatorial problem. For the first case in the theorem, Definition 2 states that in the linear program corresponding to $P$, only the cost vector $c$ depends on $w$, through a (sub)differentiable function $c = c(w)$. Since $w$ is in the interior of the parameter domain $W$, the objective value is finite over some neighborhood of $w$. Then

$$\partial z^*(w) = \frac{\partial c}{\partial w} \, \partial z^*(c) = \frac{\partial c}{\partial w} U^*,$$

where the generalized gradient $\partial z^*(c)$ exists and equals $U^*$. For the second case, the ideal formulation LP exists; then, from Definition 2, we have

$$\partial z^*(w) = \frac{\partial b}{\partial w} \, \partial z^*(b) = \frac{\partial b}{\partial w} V^*.$$

The third case is a direct extension of the first two.

Theorem 1 indicates that black-box combinatorial algorithms can be used to expand the range of transformations that can be efficiently utilized in neural networks. One immediate area of application is using them to specify a loss function. Consider a network $F(x; \beta)$ parameterized by a vector of tunable parameters $\beta$. The network transforms a batch of input samples $x$ into a batch of outputs $\chi = F(x; \beta)$. Then, in the broadest, primal-dual ∂-efficient case, $\chi$ is used, possibly with the true classes $y$, to formulate the parameters $(c, A, b) = g(\chi, y)$ of a linear program corresponding to the combinatorial problem, through some (sub)differentiable function $g$. For a given $\beta$ and batch $(x, y)$, we can then define the loss as a function of the optimal objective value of the linear program corresponding to the combinatorial problem resulting from $g(F(x; \beta), y)$: $L(\beta) = h(z^*(c, A, b))$. This approach, summarized in Algorithm 1, allows us to obtain the generalized gradient of the loss with respect to $\beta$ as long as the functions $g$ and $h$ are differentiable.

Algorithm 1: Minimization of a combinatorial loss
Input: batch $x \subset X$, $y \subset Y$, network $F(x; \beta)$, functions $g$, $h$, combinatorial algorithm $\Pi$
Output: loss and its generalized gradient, $L(\beta)$, $\partial L(\beta)$
1: procedure COMBLOSSMIN($x$, $y$, $\beta$, $F$, $g$, $h$, $\Pi$)
2:   forward pass $\chi = F(x; \beta)$
3:   forward pass $(c, A, b) = g(\chi, y)$
4:   run the combinatorial solver to find the optimal objective value $z^* = \Pi(c, A, b)$ and optimal primal and/or dual solution vectors $u^*$, $v^*$
5:   forward pass $L(\beta) = h(z^*)$
6:   backward pass through $h$: $\partial L / \partial z^*$
7:   backward pass through $\Pi$: $\partial z^*(c) = u^*$, $\partial z^*(b) = v^*$, $\partial z^*(A) = -v^* u^{*\top}$
8:   backward pass through $g$ and $F$
9:   $\partial L(\beta) = \frac{\partial L}{\partial z^*} \big( u^* \frac{\partial c}{\partial \beta} - v^* u^{*\top} \frac{\partial A}{\partial \beta} + v^* \frac{\partial b}{\partial \beta} \big)$
10:  return $L(\beta)$, $\partial L(\beta)$
11: end procedure

For clarity, in Algorithm 1 we did not consider functions $h$ that depend not only on $z^*$ but also on $x$ or $y$; the extension is straightforward.
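A hedged PyTorch sketch of Algorithm 1 for a primal ∂-efficient problem (optimal assignment): the solver is a black box, and the backward pass uses Theorem 1, i.e., the generalized gradient of $z^*$ with respect to the cost matrix $c$ is the optimal solution $u^*$ itself. This is our illustration, not the paper's code.

```python
import torch
from scipy.optimize import linear_sum_assignment

class AssignmentObjective(torch.autograd.Function):
    @staticmethod
    def forward(ctx, cost):
        # Black-box combinatorial solver Pi (Hungarian-style assignment).
        rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
        u_star = torch.zeros_like(cost)
        u_star[rows, cols] = 1.0            # optimal ILP solution vector u*
        ctx.save_for_backward(u_star)
        return (cost * u_star).sum()        # z* = c^T u*

    @staticmethod
    def backward(ctx, grad_out):
        (u_star,) = ctx.saved_tensors
        return grad_out * u_star            # dz*/dc = u* (Theorem 1)

cost = torch.rand(4, 4, requires_grad=True)  # e.g., c = g(F(x; beta), y)
loss = AssignmentObjective.apply(cost)       # h is the identity here
loss.backward()                              # cost.grad equals the optimal assignment
```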
The value of the optimal objective as a function of the cost vector $c$ can be written as $z^*(c) = c^T u^*(c)$ where the optimal solution $u^*$ also depends on $c$. The function $u^*(c)$ is piecewise constant -- there are finitely (resp. countably) many feasible solutions; candidates for $u^*$ -- and so the function $z^*(c)$ is a piecewise linear function of $c$, with gradient $u^*(c)$, wherever it exists (otherwise there is analogous subgradient). Obviously, all it takes for computing $u^*(c)$ is solving -- anyhow -- the combinatorial problem. This is all trivial and well-known, yet the authors do precisely that.
SP:d92fe94e29672783f906710a2ecb7a02aa4bd67d
Efficient Differentiable Neural Architecture Search with Model Parallelism
1 INTRODUCTION.

Neural architecture search (NAS) has turned architecture design in deep learning from a manual into an automatic process in various applications, such as image classification (Zoph & Le, 2016) and semantic segmentation (Liu et al., 2019a). Reinforcement learning (Zoph & Le, 2016; Zoph et al., 2018; Pham et al., 2018), evolutionary algorithms (Real et al., 2017; 2019), and differentiable algorithms (Liu et al., 2019b; Cai et al., 2019) have been applied to discover the optimal architecture from a large search space of candidate network structures. Supernets (Zoph et al., 2018; Pham et al., 2018) comprising all possible networks reduce the search space from complete network architectures to cell structures. Recent acceleration techniques for differentiable NAS (Xie et al., 2019; Yao et al., 2020; Chen et al., 2019; Xu et al., 2020) further reduce search costs to affordable computation overheads (e.g., half a GPU day). Prior work (Xu et al., 2020) randomly samples partial channels of the intermediate feature maps in the mixed operations. However, the supernets of differentiable NAS consume gigantic GPU memory, which prevents NAS from using large batch sizes and imposes restrictions on the complexity of supernet architectures. For example, NAS determines networks in shallow supernets (e.g., 8 layers) on behalf of deep compact networks (e.g., 20 layers), and the cell structures are required to remain identical for the same type of cell. Data parallelism can increase the search efficiency of NAS by using large batch sizes, as in SNAS (Xie et al., 2019), but it requires the supernet to be simple enough to fit in a single GPU. In contrast, model parallelism can parallelize complex supernets by distributing partial models to multiple devices. Nevertheless, model parallelism suffers from low hardware utilization: only one device executes its model partition while the other devices stay idle. How to take advantage of multiple GPUs for large supernets efficiently is an open problem.

¹ Search and evaluation code are released at link.

In this paper, we propose a simple and efficient solution, binary neural architecture search (NASB) using consecutive model parallel (CMP), to tackle the above limitations. Specifically, supernets have two forward and two backward phases to learn architecture parameters and network weights. CMP distributes sub-tasks split from the four phases across multiple GPUs and executes the sub-tasks of all forward/backward phases together. Figure 1 illustrates that the sub-tasks of the forward/backward phases are overlapped to reduce waiting cycles. However, CMP consumes large GPU memory because two computation graphs exist at the same time. We therefore introduce NASB to reduce GPU memory occupation. NASB utilizes binary and sparse architecture parameters (1 or 0) for the mixed operations. It excludes inactive operations from the computation graph and computes the feature maps of inactive operations only for the architecture gradients during back-propagation. In this way, NASB-CMP can increase the hardware utilization of model parallelism with efficient GPU memory use in differentiable NAS. In our experiments on CIFAR-10, NASB-CMP runs 1.2× faster than model parallelism and pipeline parallelism (TorchGPipe; Kim et al., 2020) on a server with 4 GPUs². It achieves a test error of 2.53 ± 0.06% by searching for only 1.48 hours.
Our contributions can be summarized as follows:

• NASB-CMP is the first NAS algorithm that can parallelize large supernets with large batch sizes. We analyze the acceleration ratio between CMP and traditional model parallelism. Even though complex supernets (e.g., many layers and different cell structures) do not boost NAS performance, NASB-CMP paves the way for exploring supernet architecture design in the future.
• NASB utilizes binary architecture parameters and extra architecture-gradient computation to reduce GPU usage. It can save memory, accepting batch sizes twice as large as the other memory-saving algorithm, PC-DARTS (Xu et al., 2020).
• We fairly compare NASB-CMP with state-of-the-art differentiable NAS on the same hardware and search space. Extensive experiments show that NASB-CMP achieves competitive test error in a short search time.

² NVIDIA GTX 1080 Ti.

2 METHODOLOGY.

We first describe the fundamental concepts of one-shot neural architecture search (NAS) in Section 2.1. We then describe consecutive model parallel, which enhances NAS search efficiency on multiple devices, in Section 2.2. Finally, we explain how we binarize the architecture weights and compute their gradients to cut down GPU memory consumption in Section 2.3.

2.1 ONE-SHOT NEURAL ARCHITECTURE SEARCH.

One-shot NAS (Zoph et al., 2018) is built on a supernet (a.k.a. meta graph) in which normal cells and reduce cells are stacked sequentially, as in Figure 2(a). Normal cells are analogous to convolutional layers that extract image features. Reduce cells are analogous to pooling layers that reduce the spatial dimension of feature maps. All normal cells share the same structure, but each cell still has its own network weights; the same holds for reduce cells. One-shot approaches only need to design two cell structures instead of complete neural networks. Figure 2(b) illustrates one popular cell structure (Pham et al., 2018), an $N$-node directed acyclic graph (DAG) with $E$ edges in total, not counting the "concat" node. In the $h$-th cell, the first two nodes are the $(h-2)$-th and $(h-1)$-th cells and have no inbound edges. The other nodes accept previous nodes whose index is lower than their own. The total number of edges $E$ (red lines in Figure 2(b)) is $(N+1)(N-2)/2$. We denote the $h$-th cell's output as $y_h = \mathrm{concat}(n_j)$, where $2 \le j \le N-1$ and $n_j$ is a DAG node defined in Eq. 1:

$$n_j = \begin{cases} y_{h-2}, & j = 0, \\ y_{h-1}, & j = 1, \\ \sum_{i<j} m_O(n_i), & 2 \le j \le N-1. \end{cases} \quad (1)$$

A mixed operation $m_O$ is the edge between nodes $i$ and $j$ in the DAG. Let $O$ be a set of candidate operations (e.g., convolution, pooling, identity, zero) and $A \in \mathbb{R}^{E \times |O|}$ a matrix of architecture parameters. Eq. 2 formulates the mixed operation $m_O$ from node $i$ to $j$ as the weighted sum of all operations $o_k$ (Liu et al., 2019b):

$$m_O^j(n_i) = \sum_{k=1}^{|O|} A_{e,k} \, o_k(n_i), \quad j \ge 2, \; i < j, \quad (2)$$

where $e = (j+1)(j-2)/2 + i$ is the edge index. The mixed operations transform the cell structure search into the problem of learning two matrices, $A_N$ and $A_R$, for the normal and reduce cell. Let $L_{val}$ and $L_{train}$ be the loss function $L$ on a validation and training dataset, respectively, and let $A$ comprise $A_N$ and $A_R$. Mathematically, one-shot NAS can be formulated as the optimization problem

$$\min_A L_{val}(w^*, A) \quad \text{s.t.} \quad w^* = \arg\min_w L_{train}(w, A). \quad (3)$$

NAS leverages the validation performance to choose well-trained networks that outperform others.
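A minimal PyTorch sketch of the mixed operation in Eq. (2) (DARTS-style; an illustration, not the paper's code): the edge output is a weighted sum of candidate operations, with weights taken from one row of the architecture matrix $A$.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # conv candidate
            nn.MaxPool2d(3, stride=1, padding=1),         # pooling candidate
            nn.Identity(),                                # identity candidate
        ])

    def forward(self, x, a_row):                          # a_row = A[e, :]
        return sum(w * op(x) for w, op in zip(a_row, self.ops))

A = nn.Parameter(torch.randn(1, 3))                       # E = 1 edge, |O| = 3 ops
edge = MixedOp(channels=8)
out = edge(torch.randn(2, 8, 16, 16), torch.softmax(A[0], dim=-1))
```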
After training $A$, we derive the compact network by pruning unused operations in the supernet. Since the whole paper follows the image classification setting (Liu et al., 2019b; Cai et al., 2019), we assume each node is assigned two inputs and two operations, and we prune the node inputs of the cells of the supernet according to the two largest values of $A$ associated with that node. For simplicity, we use $A$ for both matrices in the following discussion.

2.2 CONSECUTIVE MODEL PARALLEL.

Data parallelism can scale up supernets with large batch sizes, but it cannot handle large supernets (e.g., deep supernets with different cell structures). Model parallelism (MP) can amortize such large supernets across multiple GPUs, but its hardware utilization is low: MP generates unwanted waiting cycles across devices. Figure 1 shows that the first device stays idle until the second device finishes its forward and backward phases. The parallelization gets worse as more GPUs become available. Motivated by pipeline parallelism (Huang et al., 2019), we propose consecutive model parallel (CMP) to decrease GPU idle time. Let $F_A$ and $B_A$ denote the forward and backward phase that update $A$, and $F_w$ and $B_w$ the two phases that update $w$. CMP divides the four phases into several sub-tasks and performs the sub-tasks of $F_A$ and $F_w$ consecutively, followed by the sub-tasks of $B_w$ and $B_A$. Figure 1 illustrates that this change of execution order overlaps sub-tasks without waiting for others to finish. Given $M$ available GPUs, Eq. 4 gives the theoretical ratio of execution time between CMP and MP:

$$\frac{\text{Time of CMP}}{\text{Time of MP}} = \frac{\frac{1}{M}[4M - 2(M-1)]}{4} = 1 - \frac{M-1}{2M}. \quad (4)$$

We assume $F_A$, $B_A$, $F_w$, and $B_w$ each take the same unit of time. MP completes an iteration in 4 units. For CMP, there are $4M$ sub-tasks in total, of which $2(M-1)$ can be overlapped. If a sub-task ideally takes $1/M$ units, CMP finishes an iteration in $\frac{1}{M}(4M - 2(M-1))$ units. According to Eq. 4, CMP with two devices reduces the time of MP by $(2-1)/(2 \cdot 2) = 25\%$. In practice, Experiment 3.1 demonstrates that NASB-CMP runs 1.2× faster than model parallelism without sacrificing test error. The theoretical speedup for 4 GPUs is 1.6× (a 37.5% time reduction); we believe communication overhead and uneven model balance cause the deviation. Communication overhead comes from transferring intermediate tensors from one GPU to another when the model is split across GPUs. Moreover, the main thread is responsible for loading data and backward propagation, so the GPU hosting the main thread always consumes the most GPU memory, which causes uneven model balance. CMP is a general model parallel approach applicable to any existing differentiable NAS algorithm. However, running $B_A$ and $B_w$ consecutively requires two computation graphs, which doubles GPU memory utilization and deteriorates CMP efficiency. To address the problem of large GPU consumption, we introduce a memory-efficient NAS for CMP, called binary neural architecture search (NASB).

2.3 BINARY NEURAL ARCHITECTURE SEARCH.

Binary neural architecture search (NASB) harnesses binary mixed operations $m_O^B$ (Yao et al., 2020) that convert the real-valued $A$ into a sparse binary matrix $G$, as illustrated in Figure 2. Among the rows $A_{e,:}$ associated with node $j$, $m_O^B$ sets the two largest elements to 1 (active) and the remaining elements to 0 (inactive). The row indexes of active elements indicate the selected edges into node $j$, while the column indexes indicate the chosen operations.
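A quick numerical check of the theoretical CMP/MP time ratio in Eq. (4) (our sketch):

```python
def cmp_over_mp(M: int) -> float:
    # Eq. (4): ((4M - 2(M-1)) / M) / 4 = 1 - (M - 1) / (2M)
    return ((4 * M - 2 * (M - 1)) / M) / 4

for M in (1, 2, 4, 8):
    print(M, cmp_over_mp(M))  # 1.0, 0.75, 0.625, 0.5625 -> ideal speedup up to ~1.78x
```

For $M = 4$ this gives 0.625, matching the 1.6× theoretical speedup quoted above.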
Notice that NASB does not directly multiply $G$ with the candidate operations in Eq. 5. Instead, NASB constructs a set of active operations $O^{(active)}$ based on the active elements of $G$. Only the active operations $o_a \in O^{(active)}$ are included in the forward phase. This technique keeps inactive operations out of the computation graph and decreases GPU memory roughly $|O|$ times compared to using the multiplication by $G$:

$$m_O^B(n_i) = \sum_{k=1}^{|O|} G_{e,k} \, o_k(n_i) = o_a(n_i). \quad (5)$$

Algorithm 1: NASB - Consecutive Model Parallel
1: Initialize architecture weights $A$ and network weights $w$
2: while not stopped do
3:   $G_t = \mathrm{binarize}(A_t)$
4:   Create $m_O^B$ using $G_t$ and Eq. 5
5:   Compute $L_{valid}(w_t, G_t)$ and $L_{train}(w_t, G_t)$ consecutively // model parallel
6:   Compute $\nabla_w L_{train}(w_t, G_t)$ and $\nabla_A L_{valid}(w_t, G_t)$ consecutively // model parallel
7:   Update $w_{t+1}$ by descending $\nabla_w L_{train}(w_t, G_t)$
8:   Update $A_{t+1}$ by descending $\nabla_A L_{valid}(w_t, G_t)$
9: end while

NASB computes the gradients of the network weights $w$ using standard back-propagation in the supernet. For the gradients of $A$, NASB approximates $\partial L / \partial A$ by $\partial L / \partial G$:

$$\frac{\partial L}{\partial A_{e,k}} = \frac{\partial L}{\partial m_O} \frac{\partial m_O}{\partial A_{e,k}} \approx \frac{\partial L}{\partial m_O^B} \frac{\partial m_O^B}{\partial G_{e,k}} = \frac{\partial L}{\partial m_O^B} \cdot o_k(n) = \frac{\partial L}{\partial G_{e,k}}. \quad (6)$$

Eq. 6 states that the gradient of each element of $A$ comes from $\frac{\partial L}{\partial m_O^B} \cdot o_k(n)$. However, inactive operations are not in the computation graph. NASB therefore saves the inputs $n$ of inactive operations in the PyTorch Context used for backward computation. During the backward phase, NASB computes the inactive operations $o_{k'}(n)$ on the fly and multiplies the results by $\partial L / \partial m_O^B$. Apart from saving unneeded GPU FLOPS and memory, $m_O^B$ can avoid the performance bias between supernets and compact networks. Supernets using $m_O$ assume that the performance of the supernet represents the derived compact networks, but non-linear operations (e.g., ReLU-Conv-BN) break this correspondence, causing performance bias (Xie et al., 2019). In contrast, the sparse matrix of $m_O^B$ activates one operation, so the performance of the supernet during the search corresponds to exactly one compact network. Thus, NASB can mitigate the bias caused by non-linear operations. Algorithm 1 describes how CMP works with NASB. Note that NASB-CMP does not update any parameter (including $A$ and $w$) until $F_A$, $B_A$, $F_w$, and $B_w$ are complete. $L_{train}$ uses the current binary architecture matrix $G_t$ rather than the updated $G_{t+1}$, which is the major difference from the alternating algorithm (see Appendix A). Experiment 3.2 demonstrates that NASB saves substantial GPU memory compared to PC-DARTS (Xu et al., 2020), which reduces GPU memory by using partial channels of the feature maps in the mixed operations.

Comparison with other methods. NASP (Yao et al., 2020) binarizes $A$ based on $A$ itself, while ProxylessNAS (Cai et al., 2019) binarizes $A$ based on the softmax of $A$. The two binarization approaches are equivalent, but they handle the binary mixed operations (Eq. 5) differently. NASP multiplies $G$ with all operations (i.e., it keeps both active and inactive operations in the computation graph). ProxylessNAS selects two sampled operations (paths) in the computation graph according to a multinomial distribution. NASB uses the same binarization as NASP but keeps only one active operation in the computation graph, according to $G$.
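A minimal PyTorch sketch of the binarization step in Algorithm 1 and the gradient estimate of Eq. (6), simplified to one active operation per edge and elementwise "ops" (an illustration under these assumptions, not the paper's code):

```python
import torch

def binarize(A, k=2):
    G = torch.zeros_like(A)
    _, idx = A.topk(k, dim=1)               # k largest entries per row
    G.scatter_(1, idx, 1.0)                 # active elements -> 1, rest -> 0
    return G

class BinaryMixedOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, g_row, n, ops):        # g_row = G[e, :], one-hot in this sketch
        ctx.save_for_backward(g_row, n)
        ctx.ops = ops
        k = int(g_row.argmax())             # only the active op enters the forward pass
        return ops[k](n)

    @staticmethod
    def backward(ctx, grad_m):
        g_row, n = ctx.saved_tensors
        # Eq. (6): dL/dG_{e,k} = <dL/dm_B, o_k(n)>, recomputing inactive ops on the fly.
        grads = torch.stack([(grad_m * op(n)).sum() for op in ctx.ops])
        return grads, None, None

ops = [torch.tanh, torch.relu, torch.sigmoid]
A = torch.randn(1, 3)
G = binarize(A, k=1).requires_grad_()
n = torch.randn(8)
m = BinaryMixedOp.apply(G[0], n, ops)
m.sum().backward()
print(G.grad)  # dL/dG for every op, including inactive ones, per Eq. (6)
```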
This paper provides an interesting method that leverages GPU memory resources more efficiently for the supernet (meta-graph) of differentiable NAS. To this end, the paper proposes binary neural architecture search and consecutive model parallel (CMP). CMP parallelizes one supernet over multiple GPUs, which allows the NAS model to use a larger batch size and search space. Additionally, the paper improves neural architecture search speed and hardware utilization by reducing waiting cycles: the forward/backward phases are divided into several sub-tasks, and sub-tasks of the same type are executed together. The proposed method shows 1.2x faster search time compared with other model parallel methods and the highest performance among differentiable NAS methods in the experiment section.
SP:16d9ab54eb8e4f24314ceca6e0f86f4ca586d7f1
Video Prediction with Variational Temporal Hierarchies
1 INTRODUCTION. Deep learning has enabled predicting video sequences from large datasets (Chiappa et al., 2017; Oh et al., 2015; Vondrick et al., 2016). For high-dimensional inputs such as video, there likely exists a more compact representation of the scene that facilitates long-term prediction. Instead of learning dynamics in pixel space, latent dynamics models predict ahead in a more compact feature space (Doerr et al., 2018; Buesing et al., 2018; Karl et al., 2016; Hafner et al., 2019). This has the added benefits of increased computational efficiency and a lower memory footprint, allowing thousands of sequences to be predicted in parallel using a large batch size. A lot of work in deep learning has focused on spatial abstraction, following the advent of convolutional networks (LeCun et al., 1989), such as the Variational Ladder Autoencoder (Zhao et al., 2017), which learns a hierarchy of image features using networks of different capacities and has also played an important role in video prediction models (Castrejón et al., 2019). Recent sequential models have incorporated temporal abstraction for learning dependencies between temporally distant observations (Koutník et al., 2014; Chung et al., 2016). Kim et al. (2019) proposed Variational Temporal Abstraction (VTA), which explored one level of temporal abstraction above the latent states, whose transitions were modeled using a Bernoulli random variable. In this paper, we work in a more controlled setup than VTA to enable a qualitative and quantitative analysis of temporally abstract latent variable models.

We study the benefits of temporal abstraction using a hierarchical latent dynamics model trained with a variational objective. Each level in the hierarchy temporally abstracts the level below by an adjustable factor. This model can perform long-horizon video prediction of 200 frames while predicting accurate low-level information for a 6 times longer duration than the baseline model. We study the information stored at different levels of the hierarchy via KL divergence, predictive entropy, datasets of varying speeds, and generative distributions. In our experiments we show that this amounts to object locations and identities for the Moving MNIST dataset, and the wall or floor patterns for the GQN mazes dataset (Eslami et al., 2018), stored at different levels. Our key contributions are summarized as follows:

• Temporally Abstract Latent Dynamics (TALD). We introduce a simple model with a different clock speed at every level to study the properties of variational hierarchical dynamics.
• Accurate long-term predictions. Our form of temporal abstraction substantially extends how far into the future the model can accurately predict video frames.
• Adaptation to sequence speed. We demonstrate that our model automatically adapts the amount of information processed at each level to the speed of the video sequence.
• Separation of information. We visualize the content represented at each level of the hierarchy and find location information in lower levels and object identity in higher levels.

2 RELATED WORK. Generative video models. A variety of methods have successfully approached video prediction using large datasets (Chiappa et al., 2017; Oh et al., 2015; Vondrick et al., 2016; Babaeizadeh et al., 2017; Gemici et al., 2017; Ha & Schmidhuber, 2018).
Denton & Fergus (2018) proposed a stochastic video generation model with a learned prior that transitions through time and is conditioned on past observations. Lee et al. (2018) proposed to use an adversarial loss with a variational latent variable model to produce naturalistic images, while Kumar et al. (2019) used flow-based generative modeling to directly optimize the likelihood of a video generation model. Recently, Weissenborn et al. (2020) scaled autoregressive models for video prediction using a three-dimensional self-attention mechanism and showed competitive results on real-world video datasets. Along similar lines, Xu et al. (2018) proposed an entirely CNN-based architecture for modeling dependencies between sequential inputs.

Latent dynamics models. Latent dynamics models have evolved from latent space models with access to low-dimensional features (Deisenroth & Rasmussen, 2011; Higuera et al., 2018) to models that build a compact representation of visual scenes and facilitate video prediction purely in latent space (Doerr et al., 2018; Buesing et al., 2018; Karl et al., 2016; Franceschi et al., 2020). The Variational RNN (Chung et al., 2015) uses an auto-regressive state transition that takes inputs from observations, making it computationally expensive to use as an imagination module. Hafner et al. (2019) proposed a latent dynamics model with a combination of deterministic and stochastic states, which enables the model to deterministically remember all previous states and filter that information to obtain a distribution over the current state.

Hierarchical latent variables. Learning per-frame hierarchical structure has proven helpful for generating videos over long horizons (Wichers et al., 2018). Zhao et al. (2017) proposed the Variational Ladder Autoencoder (VLAE), which uses networks of different capacities at different levels of the hierarchy, encouraging the model to store high-level image features at the top level and simple features at the bottom. Other recently proposed hierarchical models use a purely bottom-up inference approach with no interaction between the inference and generative models (Kingma & Welling, 2014; Rezende & Mohamed, 2015; Rezende et al., 2014). In contrast, Sønderby et al. (2016, LVAE) and Vahdat & Kautz (2020, NVAE) proposed to use a combination of bottom-up and top-down inference, sharing parameters between the inference and generative distributions during the top-down pass. We incorporate this conditional structure in our model design as well.

Temporal abstraction. Identifying complex dependencies between temporally distant observations is a challenging task and has inspired a variety of fundamental work on recurrent models (Koutník et al., 2014; Chung et al., 2016). However, relatively few works have demonstrated modeling long-term dependencies using temporally abstract latent dynamics models (Wichers et al., 2018; Jaderberg et al., 2018). Recently, Kim et al. (2019) introduced Variational Temporal Abstraction (VTA) to learn temporally abstract latent spaces. They explored one level of temporal abstraction above the latent states, whose transitions were modeled using a Bernoulli random variable that chose between 'copy' and 'update' steps. Inspired by this work, we aim to gain a deeper understanding of such temporally abstract latent dynamics models.
We perform our analysis on a model that is simplified to fixed time scales at every level. Moreover, the lowest level is a continuing chain in our model, whereas VTA resets transitions at the lower level when transitioning at a higher level.

3 TEMPORALLY ABSTRACT LATENT DYNAMICS. Long video sequences contain both information that is local to a few frames and global information that is shared among many frames. Traditional video prediction models that predict ahead at the frame rate of the video can struggle to retain information long enough to learn such long-term dependencies. We introduce Temporally Abstract Latent Dynamics (TALD) to learn long-term correlations of videos. Our model predicts ahead on multiple time scales to learn dependencies at different temporal levels, as visualized in Figure 3. We build our work upon the recurrent state-space model (RSSM; Hafner et al., 2019), the details of which can be found in Appendix A. TALD consists of a hierarchy of recurrent latent variables, where each level transitions at a different clock speed. We slow down the transitions exponentially as we go up in the hierarchy, i.e., every level is slower than the level below by a factor of k. We denote the set of active timesteps of every level l ∈ [1, L] as those steps in time where the state transition generates a new latent state,

$$\text{Active timesteps:}\quad \mathcal{T}_l \doteq \{\, t \in [1, T] \mid t \bmod k^{l-1} = 1 \,\}. \qquad (1)$$

At each level, we condition every window of k latent states on a single latent variable in the level above. This can also be thought of as a hierarchy of latent variables where each level has the same clock speed but performs a state transition only every k^{l-1} timesteps and copies the same state variable otherwise, so that for all t ∉ T_l,

$$\text{Inactive states:}\quad s^l_t \doteq s^l_{\max\{\tau \in \mathcal{T}_l \,\mid\, \tau \le t\}}. \qquad (2)$$

Joint distribution. We can factorize the joint distribution of a sequence of observations and (active) latents at every level into two terms: (1) a decoder term conditioned on the latent states at the lowest level, and (2) state transitions at all levels conditioned on the latent state of the last active timestep at the current level and the level above,

$$p(x_{1:T}, s^{1:L}_{1:T}) \doteq \Big( \prod_{t=1}^{T} p(x_t \mid s^1_t) \Big) \Big( \prod_{l=1}^{L} \prod_{t \in \mathcal{T}_l} p(s^l_t \mid s^l_{t-1}, s^{l+1}_t) \Big). \qquad (3)$$

Inference. For inference, TALD embeds observed frames using a CNN. A hierarchical recurrent network then summarizes the input embeddings: each (active) latent state at level l receives embeddings from k^{l-1} observation frames (dashed lines in Figure 3). The latent state at the previous timestep of the current level and the state belief at the level above also condition the posterior belief (solid lines in Figure 3). The input embeddings combined with this top-down and temporal context together condition the posterior belief q^l_t over the latent state.

Generation. The prior transition p^l_t is computed by conditioning on the latent state at the previous timestep of the current level and the state belief at the level above (solid lines in Figure 3).

Decoding. Finally, the state beliefs at the bottom-most level are decoded using a transposed CNN to provide a training signal. To summarize, we utilize the following components in our model, for all l ∈ [1, L] and t ∈ T_l:

$$\begin{aligned} &\text{Encoder:} && e^l_t = e(x_{t:t+k^{l-1}-1}) \\ &\text{Posterior transition } q^l_t\text{:} && q(s^l_t \mid s^l_{t-1}, s^{l+1}_t, e^l_t) \\ &\text{Prior transition } p^l_t\text{:} && p(s^l_t \mid s^l_{t-1}, s^{l+1}_t) \\ &\text{Decoder:} && p(x_t \mid s^1_t). \end{aligned} \qquad (4)$$
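As a small illustration of the clock-speed scheme in Eq. 1, the following Python sketch (our own, not from the paper) enumerates the active timesteps at every level; we write the condition as (t - 1) mod k^{l-1} = 0, which matches Eq. 1 for l > 1 while keeping the lowest level active at every step.

```python
def active_timesteps(T, k, L):
    """Active timesteps per level (cf. Eq. 1): level l transitions every
    k**(l-1) steps, so each level runs k times slower than the one below."""
    return {l: [t for t in range(1, T + 1) if (t - 1) % k ** (l - 1) == 0]
            for l in range(1, L + 1)}

# Example with T=8 frames, abstraction factor k=2, and L=3 levels:
# {1: [1, 2, 3, 4, 5, 6, 7, 8], 2: [1, 3, 5, 7], 3: [1, 5]}
print(active_timesteps(8, 2, 3))
```

At every timestep not in T_l, the level simply copies its previous state, as in Eq. 2.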
Training objective. Since we cannot compute the likelihood of the training data under the model in closed form, we use the ELBO as our training objective. This objective optimizes a reconstruction loss at the lowest level and a KL regularizer at every level of the hierarchy, summed across active timesteps,

$$\max_{e,\, h,\, q,\, p} \;\; \sum_{t=1}^{T} \mathbb{E}_{q^1_t}\big[ \ln p(x_t \mid s^1_t) \big] \;-\; \sum_{l=1}^{L} \sum_{t \in \mathcal{T}_l} \mathrm{KL}\big[\, q^l_t \,\|\, p^l_t \,\big]. \qquad (5)$$

The KL regularizer at each level limits the amount of information that filters through the encoder and stays in the posterior at that level. This encourages the model to utilize the state transitions and the context from the level above as much as possible. Since the number of active timesteps decreases as we go higher in the hierarchy, the number of KL terms per level decreases as well. Hence it is easier for the model to push global information high up in the hierarchy and pay a smaller KL penalty than to carry those bits through an identity transformation at a lower level.

Stochastic and deterministic path. As illustrated in Figure 3 (right), we split the state s^l_t into a stochastic part (z^l_t) and a deterministic part (h^l_t) (Hafner et al., 2019). The deterministic state is computed from the top-down and temporal context, and then conditions the stochastic state at that level. The stochastic states follow a diagonal Gaussian, with mean and variance predicted by a neural network. We use a GRU (Cho et al., 2014) per level to update the deterministic state at every active timestep. All components in Equation 4 are trained jointly by optimizing Equation 5 using stochastic backpropagation with reparameterized sampling. Please refer to Appendix B for architecture details.

4 EXPERIMENTS. We aim to evaluate temporally abstract latent dynamics models at modeling long-term dependencies in video, and to understand how they separate information into different levels of the hierarchy. To investigate these questions, we train the TALD model described in Section 3, the temporally abstract VTA model (Kim et al., 2019), the RSSM model without temporal abstraction (Hafner et al., 2019), and the image-space video prediction model SVG-LP (Denton & Fergus, 2018) on four datasets of varying complexity. We consider the well-established Moving MNIST dataset (Srivastava et al., 2015), the KTH Action dataset (Schuldt et al., 2004), the GQN mazes dataset (Eslami et al., 2018), and the MineRL Navigate dataset (Guss et al., 2019). We evaluate open-loop video predictions on these datasets using four quantitative metrics: Structural Similarity index (SSIM; higher is better), Peak Signal-to-Noise Ratio (PSNR; higher is better), Learned Perceptual Image Patch Similarity (LPIPS; lower is better) (Zhang et al., 2018), and Frechet Video Distance (FVD; lower is better) (Unterthiner et al., 2018). In Section 4.5, we investigate how the amount of information stored at different levels of a temporal hierarchy adapts to changes in sequence speed. In Section 4.6, we visualize the information stored at different levels by resetting individual levels of the hierarchy. We trained all our models using sequences of length 100. We used convolutional frame encoders and decoders, with architectures very similar to the DCGAN (Radford et al., 2016) discriminator and generator, respectively. Our implementations made use of TensorFlow Probability (Dillon et al., 2017) and cuDNN, and used the Adam optimizer (Kingma & Ba, 2014) for training.
The training time for a 3-level TALD model with a temporal abstraction factor of 6 amounted to around 24 hours for 100 epochs on a single NVIDIA TITAN Xp GPU. Refer to Appendix C for hyperparameters and experimental setup.
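Before turning to the results, the training objective in Eq. 5 is simple enough to sketch in code. The container names and shapes below are our own assumptions, and we write PyTorch-style Python for consistency with the other sketches in this document, even though the paper's implementation uses TensorFlow Probability.

```python
import torch

def tald_objective(recon_log_probs, kl_terms, active):
    """Sketch of the objective in Eq. 5: sum the frame log-likelihoods at
    the lowest level and subtract each level's KL terms over that level's
    active timesteps only.

    recon_log_probs: tensor [T] with E_q[ln p(x_t | s^1_t)] per frame.
    kl_terms: dict level -> tensor [T] with KL[q^l_t || p^l_t] per step.
    active: dict level -> list of active timesteps (1-indexed, as in Eq. 1).
    """
    recon = recon_log_probs.sum()
    kl = sum(kl_terms[l][torch.tensor(active[l]) - 1].sum() for l in kl_terms)
    return recon - kl  # maximize this ELBO; negate it for a minimization loss
```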
This paper proposes a method called Temporally Abstract Latent Dynamics (TALD). TALD is built upon RSSM (Hafner et al., 2019) but with hierarchical dynamics. The experiments are conducted on Moving MNIST, GQN 3D mazes, and KTH. Results are qualitatively better than other methods in terms of maintaining long-term consistent predictions. Quantitative comparison is reported only on the KTH dataset (Figure 5). The written presentation is clear and easy to understand.
SP:d10957cc11891e1aad6ecac21a73d589bfac341d
Disentangled Recurrent Wasserstein Autoencoder
1 INTRODUCTION. Unsupervised representation learning is an important research topic in machine learning. It embeds high-dimensional sensory data such as images and videos into a low-dimensional latent space in an unsupervised learning framework, aiming to extract essential factors of data variation to help downstream tasks such as classification and prediction (Bengio et al., 2013). In the last several years, disentangled representation learning, which further separates the latent embedding space into exclusive explainable factors such that each factor interprets only one semantic attribute of the sensory data, has received a lot of interest and achieved many empirical successes on static data such as images (Chen et al., 2016; Higgins et al., 2017; Dupont, 2018; Chen et al., 2018; Rubenstein et al., 2018b;a; Kim & Mnih, 2018). For example, the latent representation of handwritten digits can be disentangled into a content factor encoding digit identity and a style factor encoding handwriting style. In spite of these successes on static data, only a few works have explored unsupervised representation disentanglement of sequential data, due to the challenges of developing generative models of sequential data. Learning disentangled representations of sequential data is important and has many applications. For example, the latent representation of a smiling-face video can be disentangled into a static part encoding the identity of the person (content factor) and a dynamic part encoding the smiling motion of the face (motion factor). The disentangled representation of the video can potentially be used for many downstream tasks such as classification, retrieval, and synthetic video generation with style transfer. Most previous unsupervised representation disentanglement models for static data rely heavily on the KL-divergence regularization in a VAE framework (Higgins et al., 2017; Dupont, 2018; Chen et al., 2018; Kim & Mnih, 2018), which has been shown to be problematic because it matches the individual rather than the aggregated posterior distribution of the latent code to the same prior (Tolstikhin et al., 2018; Rubenstein et al., 2018b;a). Therefore, extending VAE or recurrent VAE (Chung et al., 2015) to disentangle sequential data in a generative model framework (Hsu et al., 2017; Yingzhen & Mandt, 2018) is not ideal. In addition, recent research (Locatello et al., 2019) has theoretically shown that it is impossible to perform unsupervised disentangled representation learning without inductive biases on both models and data, especially for static data. Fortunately, sequential data such as videos often have clear inductive biases for the disentanglement of content and motion factors, as mentioned in (Locatello et al., 2019). Unlike for static data, the learned static and dynamic factors of sequential data are not exchangeable. In this paper, we propose a recurrent Wasserstein Autoencoder (R-WAE) to learn disentangled representations of sequential data. We employ a Wasserstein metric (Arjovsky et al., 2018; Gulrajani et al., 2017; Bellemare et al., 2017) induced from the optimal transport between the model distribution and the underlying data distribution, which has some nicer properties
(e.g., sum invariance, scale sensitivity, applicability to distributions with non-overlapping supports, and better out-of-sample performance in the worst-case expectation (Esfahani & Kuhn, 2018)) than the KL divergence used in VAE (Kingma & Welling, 2014) and β-VAE (Higgins et al., 2017). Leveraging explicit inductive biases in both sequential data and the model, we encode an input sequence into two parts: a shared static latent code and a dynamic latent code, and sequentially decode each element of the sequence by combining both codes. We enforce a fixed prior distribution for the static code and learn a prior for the dynamic code to ensure the consistency of the sequence. The disentangled representations are learned by separately regularizing the posteriors of the latent codes with their corresponding priors. Our main contributions are summarized as follows: (1) We draw the first connection between minimizing a Wasserstein distance and maximizing mutual information for unsupervised representation disentanglement of sequential data, from an information-theoretic perspective; (2) We propose two sets of effective regularizers to learn the disentangled representation in a completely unsupervised manner with explicit inductive biases in both sequential data and models; (3) We incorporate a relaxed discrete latent variable to improve the disentangled learning of actions on real data. Experiments show that our models achieve state-of-the-art performance in both the disentanglement of static and dynamic latent representations and unconditional video generation, under the same settings as the baselines (Yingzhen & Mandt, 2018; Tulyakov et al., 2018).

2 BACKGROUND AND RELATED WORK. Notation. Let calligraphic letters (i.e., X) denote sets, capital letters (i.e., X) denote random variables, and lowercase letters denote their values. Let D(P_X, P_G) be the divergence between the true (but unknown) data distribution P_X (density p(x)) and the latent-variable generative model distribution P_G specified by a prior distribution P_Z (density p(z)) of the latent variable Z. Let D_KL be the KL divergence, D_JS the Jensen-Shannon divergence, and MMD the Maximum Mean Discrepancy (Gretton et al., 2007).

Optimal transport between distributions. The optimal transport cost, inducing a rich class of divergences between the distributions P_X and P_G, is defined as

$$W(P_X, P_G) := \inf_{\Gamma \in \mathcal{P}(X \sim P_X,\, Y \sim P_G)} \mathbb{E}_{(X,Y) \sim \Gamma}\big[\, c(X, Y) \,\big], \qquad (1)$$

where c(X, Y) is any measurable cost function and P(X ∼ P_X, Y ∼ P_G) is the set of joint distributions of (X, Y) with respective marginals P_X and P_G.

Comparison between WAE (Tolstikhin et al., 2018) and VAE (Kingma & Welling, 2014). Instead of optimizing over all couplings Γ between two random variables in X, Bousquet et al. (2017) and Tolstikhin et al. (2018) show that it is sufficient to find Q(Z|X) such that the marginal Q(Z) := E_{X∼P_X}[Q(Z|X)] is identical to the prior P(Z), as stated in the following definition.

Definition 1. For any deterministic P_G(X|Z) and any function G : Z → X,

$$W(P_X, P_G) = \inf_{Q \,:\, Q_Z = P_Z} \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}\big[\, c(X, G(Z)) \,\big]. \qquad (2)$$
Definition 1 leads to the following WAE loss D_WAE based on a Wasserstein distance,

$$\inf_{Q(Z|X)} \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}\big[\, c(X, G(Z)) \,\big] + \beta\, D(Q_Z, P_Z), \qquad (3)$$

where the first term is the data reconstruction loss and the second is a regularizer that forces the aggregate posterior Q_Z = ∫ Q(Z|X) dP_X to match the prior P_Z (the adversarial autoencoder (AAE) (Makhzani et al., 2015) shares a similar idea with WAE). In contrast, VAE has a different regularizer, E_X[D_KL(Q(Z|X), P_Z)], which forces the latent posterior distribution of each input to match P_Z. In (Rubenstein et al., 2018a;b), it is shown that WAE achieves better disentanglement than β-VAE (Higgins et al., 2017) on images, which inspires us to design a new representation disentanglement framework for sequential data with several innovations.

Unsupervised disentangled representation learning. Several generative models have been proposed to learn disentangled representations of sequential data (Denton et al., 2017; Hsu et al., 2017; Yingzhen & Mandt, 2018; Hsieh et al., 2018; Sun et al., 2018; Tulyakov et al., 2018). FHVAE (Hsu et al., 2017) is a VAE-based hierarchical graphical model with factorized Gaussian priors that focuses only on speech and audio data. Our R-WAE, employing a more powerful recurrent prior, can be applied to both speech and video data. The models in (Sun et al., 2018; Denton et al., 2017; Hsieh et al., 2018) rely on the first several elements of a sequence to design disentanglement architectures for future sequence prediction. In terms of representation learning by mutual information maximization, our work empirically demonstrates that explicit inductive biases in the data and the model architecture are necessary for learning meaningful disentangled representations of sequential data, while the works in (Locatello et al., 2019; Poole et al., 2019; Tschannen et al., 2020; Ozair et al., 2019) concern general representation learning, especially on static data. The works most related to ours are MoCoGAN (Tulyakov et al., 2018) and DS-VAE (Yingzhen & Mandt, 2018), which can disentangle the variant and invariant parts of sequential data and perform unconditional sequence generation. MoCoGAN (Tulyakov et al., 2018) is a GAN-based model that can only be applied to settings in which the number of motions is finite, and it cannot encode the latent representation of sequences. DS-VAE (Yingzhen & Mandt, 2018) is a disentangled sequential autoencoder based on VAE (Kingma & Welling, 2014). Training a VAE is equivalent to minimizing an upper bound of the KL divergence between the empirical data distribution and the generated data distribution, which has been shown to produce inferior disentangled representations of static data compared to generative models employing the Wasserstein metric (Rubenstein et al., 2018a;b).

3 PROPOSED APPROACH: DISENTANGLED RECURRENT WASSERSTEIN AUTOENCODER (R-WAE). Given a high-dimensional sequence x_{1:T}, our goal is to learn a disentangled representation consisting of a time-invariant latent code z^c and time-variant latent codes z^m_t along the sequence. Let z_t = (z^c, z^m_t) be the latent code of x_t. Let X_t, Z_t, Z^c, and Z^m_t be random variables with realizations x_t, z_t, z^c, and z^m_t, respectively, and denote D = X_{1:T}.
To achieve this goal, we define the following probabilistic generative model, assuming Z^m_t and Z^c are independent:

$$P(X_{1:T}, Z_{1:T}) = P(Z^c) \prod_{t=1}^{T} P_\psi(Z^m_t \mid Z^m_{<t})\, P_\theta(X_t \mid Z_t), \qquad (4)$$

where P(Z_{1:T}) = P(Z^c) ∏_{t=1}^{T} P_ψ(Z^m_t | Z^m_{<t}) is the prior, in which Z_t = (Z^c, Z^m_t), and the decoder model P_θ(X_t | Z_t) is a Dirac delta distribution. In practice, P(Z^c) is chosen as N(0, I) and P_ψ(Z^m_t | Z^m_{<t}) = N(µ_ψ(Z^m_{<t}), σ²_ψ(Z^m_{<t})), where µ_ψ and σ_ψ are parameterized by recurrent neural networks (RNNs). The inference model Q is defined as

$$Q_\phi(Z^c, Z^m_{1:T} \mid X_{1:T}) = Q_\phi(Z^c \mid X_{1:T}) \prod_{t=1}^{T} Q_\phi(Z^m_t \mid Z^m_{<t}, X_t), \qquad (5)$$

where Q_φ(Z^c | X_{1:T}) and Q_φ(Z^m_t | Z^m_{<t}, X_t) are also Gaussian distributions parameterized by RNNs. The structures of the generative model (4) and the inference model (5) are provided in Fig. 1.
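As an illustration of the learned dynamic prior P_ψ(Z^m_t | Z^m_{<t}) in Eq. 4, here is a minimal PyTorch sketch; the class name, hidden size, and choice of a GRU cell are our own assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class MotionPrior(nn.Module):
    """Sketch of the learned prior over dynamic codes: a recurrent cell
    summarizes the past motion codes z^m_{<t} and emits the mean and
    log-variance of a diagonal Gaussian over the next code z^m_t."""

    def __init__(self, z_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRUCell(z_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 2 * z_dim)

    def step(self, z_prev, h):
        h = self.rnn(z_prev, h)                     # summarize z^m_{<t}
        mu, logvar = self.head(h).chunk(2, dim=-1)  # Gaussian parameters
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sample z^m_t
        return z, h
```

The static code z^c would be drawn once per sequence from the fixed prior N(0, I) and concatenated with each z^m_t before decoding.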
This paper extends the Wasserstein autoencoder to learning disentangled representations from sequential data. The latent variable model considered contains separate latent variables capturing global and local information respectively, each of which is regularized by a divergence between the marginal posterior $Q_z$ and the prior $P_z$. An optional auxiliary discrete latent is introduced to incorporate an inductive bias for discrete local features (e.g., the type of action). To estimate the divergence terms, the authors propose to use MMD for the recurrent local latents, since their prior distribution evolves over time; for the global latent, the authors present two options: a discriminator-based Jensen-Shannon divergence estimate (the same as the adversarial autoencoder proposed in Makhzani et al., 2016) and scaled MMD (Arbel et al., 2018). The connection between the proposed objective and mutual information maximization is outlined in Section 4. Experimental results show that the proposed R-WAE model outperforms the DS-VAE/FHVAE/MoCoGAN baselines.
SP:6082a5b51b24315dfdbfe147de1aef2c53cd113d
Predictive Coding Approximates Backprop along Arbitrary Computation Graphs
1 INTRODUCTION. Deep learning has seen stunning successes in the last decade in computer vision (Krizhevsky et al., 2012; Szegedy et al., 2015), natural language processing and translation (Vaswani et al., 2017; Radford et al., 2019; Kaplan et al., 2020), and computer game playing (Mnih et al., 2015; Silver et al., 2017; Schrittwieser et al., 2019; Vinyals et al., 2019). While there is a great variety of architectures and models, they are all trained by gradient descent using gradients computed by automatic differentiation (AD). The key insight of AD is that it suffices to define a forward model which maps inputs to predictions according to some parameters. Then, using the chain rule of calculus, it is possible, as long as every operation of the forward model is differentiable, to differentiate back through the computation graph of the model so as to compute the sensitivity of every parameter to the error at the output, and thus adjust every single parameter to best minimize the total loss. Early models were typically simple artificial neural networks where the computation graph is simply a composition of matrix multiplications and elementwise nonlinearities, and for which the implementation of automatic differentiation has become known as 'backpropagation' (or 'backprop'). However, automatic differentiation allows substantially more complicated graphs to be differentiated through, up to, and including, arbitrary programs (Griewank et al., 1989; Baydin et al., 2017; Paszke et al., 2017; Revels et al., 2016; Innes et al., 2019; Werbos, 1982; Rumelhart and Zipser, 1985; Linnainmaa, 1970). In recent years this has enabled differentiation through differential equation solvers (Chen et al., 2018; Tzen and Raginsky, 2019; Rackauckas et al., 2019), physics engines (Degrave et al., 2019; Heiden et al., 2019), raytracers (Pal, 2019), and planning algorithms (Amos and Yarats, 2019; Okada et al., 2017). These advances allow the straightforward training of models which intrinsically embody complex processes and which can encode significantly more prior knowledge and structure about a given problem domain than previously possible. Modern deep learning has also been closely intertwined with neuroscience (Hassabis et al., 2017; Hawkins and Blakeslee, 2007; Richards et al., 2019). The backpropagation algorithm itself arose as a technique for training multi-layer perceptrons – simple hierarchical models of neurons inspired by the brain (Werbos, 1982). Despite this origin, and its empirical successes, a consensus has emerged that the brain cannot directly implement backprop, since doing so would require biologically implausible connection rules (Crick, 1989). There are two principal problems. First, backprop in the brain appears to require non-local information (since the activity of any specific neuron affects all subsequent neurons down to the final output neuron), and it is difficult to see how this information could be transmitted 'backwards' throughout the brain with the required fidelity without precise connectivity constraints. The second problem – the 'weight transport problem' – is that backprop through MLP-style networks requires identical forward and backward weights.
In recent years, however, a succession of models have been introduced which claim to implement backprop in MLP-style models using only biologically plausible connectivity schemes and Hebbian learning rules (Liao et al., 2016; Guerguiev et al., 2017; Sacramento et al., 2018; Bengio and Fischer, 2015; Bengio et al., 2017; Ororbia et al., 2020; Whittington and Bogacz, 2019). Of particular significance is Whittington and Bogacz (2017), who show that predictive coding networks – a type of biologically plausible network which learns through a hierarchical process of prediction error minimization – are mathematically equivalent to backprop in MLP models. In this paper we extend this work, showing that predictive coding can not only approximate backprop in MLPs, but can approximate automatic differentiation along arbitrary computation graphs. This means that in theory there exist potentially biologically plausible algorithms for differentiating through arbitrary programs, utilizing only local connectivity. Moreover, in a class of models which we call parameter-linear, which includes many current machine learning models, the required update rules are Hebbian, raising the possibility that a wide range of current machine learning architectures may be faithfully implemented in the brain, or in neuromorphic hardware. In this paper we provide two main contributions. (i) We show that predictive coding converges to automatic differentiation across arbitrary computation graphs. (ii) We showcase this result by implementing three core machine learning architectures (CNNs, RNNs, and LSTMs) in a predictive coding framework which utilizes only local learning rules and mostly Hebbian plasticity.

2 PREDICTIVE CODING ON ARBITRARY COMPUTATION GRAPHS. Predictive coding is an influential theory of cortical function in theoretical and computational neuroscience. Central to the theory is the idea that the core function of the brain is to minimize prediction errors between what is expected to happen and what actually happens. Predictive coding views the brain as composed of multiple hierarchical layers which predict the activities of the layers below. Unpredicted activity is registered as prediction error, which is then transmitted upwards for a higher layer to process. Over time, synaptic connections are adjusted so that the system improves at minimizing prediction error. Predictive coding possesses a wealth of empirical support (Friston, 2003; 2005; Bogacz, 2017; Whittington and Bogacz, 2019) and offers a single mechanism that accounts for diverse perceptual phenomena such as repetition suppression (Auksztulewicz and Friston, 2016), endstopping (Rao and Ballard, 1999), bistable perception (Hohwy et al., 2008; Weilnhammer et al., 2017), illusory motions (Lotter et al., 2016; Watanabe et al., 2018), and even attentional modulation of neural activity (Feldman and Friston, 2010; Kanai et al., 2015). Moreover, the central role of top-down predictions is consistent with the ubiquity and importance of top-down diffuse connections between cortical areas. Predictive coding is consistent with many known aspects of neurophysiology and has been translated into biologically plausible process theories which define candidate cortical microcircuits that can implement the algorithm (Spratling, 2008; Bastos et al., 2012; Kanai et al., 2015; Shipp, 2016).
In previous work, predictive coding has always been conceptualized as operating on hierarchies of layers (Bogacz, 2017; Whittington and Bogacz, 2017). Here we present a generalized form of predictive coding applied to arbitrary computation graphs. A computation graph G = {E, V} is a directed acyclic graph (DAG) which can represent the computational flow of essentially any program or computable function as a composition of elementary functions. Each edge e_i ∈ E of the graph corresponds to an intermediate step – the application of an elementary function – while each vertex v_i ∈ V is an intermediate variable computed by applying the functions of the edges to the values of their originating vertices. In this paper, v_i denotes the vector of activations within a layer, and we denote the set of all vertices as {v_i}. Effectively, computation flows 'forward' from parent nodes to all their children through the edge functions until the leaf nodes give the final output of the program as a whole (see Figures 1 and 2 for an example). Given a target T and a loss function L = g(T, v_out), the graph's output can be evaluated, and if every edge function is differentiable, automatic differentiation can be performed on the computation graph. Predictive coding can be derived elegantly as a variational inference algorithm under a hierarchical Gaussian generative model (Friston, 2005; Buckley et al., 2017). We extend this approach to arbitrary computation graphs in a supervised setting by defining the inference problem to be solved as that of inferring the vertex value v_i of each node in the graph given fixed start nodes v_0 (the data) and end nodes v_N (the targets). We define a generative model which parameterizes the value of each vertex given the feedforward prediction of its parents, p({v_i}) = p(v_0 ... v_N) = ∏_i^N p(v_i | P(v_i)) (this includes the prior p(v_0), which simply has no parents), and a factorized variational posterior Q({v_i} | v_0, v_N) = Q(v_1 ... v_{N-1} | v_0, v_N) = ∏_i^N Q(v_i | P(v_i), C(v_i)), where P(v_i) denotes the set of parents and C(v_i) the set of children of a given node v_i. From this, we can define a suitable objective functional, the variational free energy F (VFE), which acts as an upper bound on the divergence between the true and variational posteriors:

$$\mathcal{F} = \mathrm{KL}\big[ Q(v_1 \dots v_{N-1} \mid v_0, v_N) \,\|\, p(v_0 \dots v_N) \big] \geq \mathrm{KL}\big[ Q(v_1 \dots v_{N-1} \mid v_0, v_N) \,\|\, p(v_1 \dots v_{N-1} \mid v_0, v_N) \big] \approx \sum_{i=0}^{N} \epsilon_i^T \epsilon_i \qquad (1)$$

Under Gaussian assumptions for the generative model, p({v_i}) = ∏_i^N N(v_i; v̂_i, Σ_i), and the variational posterior, Q({v_i}) = ∏_i^N N(v_i), where the 'predictions' v̂_i = f(P(v_i); θ_i) are defined as the feedforward value of the vertex produced by running the graph forward, and all the precisions, or inverse variances, Σ_i^{-1} are fixed at the identity, we can write F simply as a sum of prediction errors (see Appendix D or (Friston, 2003; Bogacz, 2017; Buckley et al., 2017) for full derivations), with the prediction errors defined as ε_i = v_i − v̂_i. These prediction errors play a core role in the framework and, in the biological process theories (Friston, 2005; Bastos et al., 2012), are generally considered to be represented by a distinct population of 'error units'. Since F is an upper bound on the divergence between the true and approximate posteriors, minimizing F reduces this divergence, thus improving the quality of the variational posterior and approximating exact Bayesian inference.
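Under the unit-precision Gaussian assumptions above, Eq. 1 reduces to a sum of squared prediction errors, which is a one-liner to compute. This sketch is our own plain-PyTorch illustration and assumes the vertex values and feedforward predictions are given as parallel lists of tensors.

```python
import torch

def free_energy(values, predictions):
    """With unit precisions, the free energy of Eq. 1 reduces to the total
    squared prediction error eps_i = v_i - v_hat_i summed over all
    vertices of the augmented computation graph."""
    return sum(((v - v_hat) ** 2).sum() for v, v_hat in zip(values, predictions))
```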
Predictive coding minimizes F by employing the Cauchy method of steepest descent, setting the dynamics of the vertex variables v_i as a gradient descent directly on F (Bogacz, 2017):

$$\frac{dv_i}{dt} = \frac{\partial \mathcal{F}}{\partial v_i} = \epsilon_i - \sum_{j \in C(v_i)} \epsilon_j \frac{\partial \hat{v}_j}{\partial v_i} \qquad (2)$$

The dynamics of the parameters θ of the edge functions, with v̂_i = f(P(v_i); θ), can also be derived as a gradient descent on F. Importantly, these dynamics require only information (the current vertex value, its prediction error, and the prediction errors of child vertices) locally available at the vertex:

$$\frac{d\theta_i}{dt} = \frac{\partial \mathcal{F}}{\partial \theta_i} = \epsilon_i \frac{\partial \hat{v}_i}{\partial \theta_i} \qquad (3)$$

To run generalized predictive coding in practice on a given computation graph G = {E, V}, we augment the graph with error units {ε_i} to obtain an augmented computation graph G̃ = {E, V, {ε_i}}. The predictive coding algorithm then operates in two phases – a feedforward sweep and a backwards iteration phase. In the feedforward sweep, the augmented computation graph is run forward to obtain the set of predictions {v̂_i} and prediction errors {ε_i} = {v_i − v̂_i} for every vertex. Following Whittington and Bogacz (2017), to achieve exact equivalence with the backprop gradients computed on the original computation graph, we initialize v_i = v̂_i in the initial feedforward sweep so that the output error computed by the predictive coding network and the original graph are identical. In the backwards iteration phase, the vertex activities {v_i} and prediction errors {ε_i} are updated with Equation 2 for all vertices in parallel until the vertex values converge to a minimum of F. After convergence, the parameters are updated according to Equation 3. Note that we also assume, following Whittington and Bogacz (2017), that the predictions at each layer are fixed at the values assigned during the feedforward pass throughout the optimization of the v's. We call this the fixed-prediction assumption. In effect, by removing the coupling between the vertex activities of the parents and the prediction at the child, this assumption separates the global optimization problem into a local one for each vertex. We implement these dynamics with a simple forward Euler integration scheme, so that the update rule for the vertices becomes

$$v_i^{t+1} \leftarrow v_i^t - \eta \frac{d\mathcal{F}}{dv_i^t},$$

where η is the step-size parameter. Importantly, if the edge function linearly combines the activities and the parameters followed by an elementwise nonlinearity – a condition we call 'parameter-linear' – then both the update rule for the vertices (Equation 2) and for the parameters (Equation 3) become Hebbian. Specifically, the update rules for the vertices and weights become

$$\frac{dv_i}{dt} = \epsilon_i - \sum_j \epsilon_j f'(\theta_j \hat{v}_j)\, \theta_j^T \qquad \text{and} \qquad \frac{d\theta_i}{dt} = \epsilon_i f'(\theta_i \hat{v}_i)\, \hat{v}_i^T,$$

respectively.
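The backwards iteration phase is straightforward to sketch. The following Python pseudocode is ours, not the authors' implementation: dictionaries keyed by vertex index stand in for the graph, precomputed Jacobians stand in for ∂v̂_j/∂v_i at the feedforward values, and errors are treated as row vectors.

```python
def relax_vertices(values, predictions, children, jac, eta=0.1, n_iters=100):
    """Euler integration of the vertex dynamics (Eq. 2) under the
    fixed-prediction assumption: each vertex descends the free energy
    using only its own prediction error and those of its children.

    values, predictions: dicts vertex -> 1-D tensor (predictions fixed).
    children: dict vertex -> list of child vertices.
    jac: dict (child, parent) -> Jacobian d v_hat_child / d v_parent."""
    for _ in range(n_iters):
        eps = {i: values[i] - predictions[i] for i in values}
        for i in values:
            grad = eps[i] - sum(eps[j] @ jac[(j, i)] for j in children[i])
            values[i] = values[i] - eta * grad  # v <- v - eta * dF/dv
    return values
```

In a full implementation, the clamped data and target nodes would be excluded from the update, and after convergence each parameter would receive the local update of Equation 3, which the paper shows matches the backprop gradient of the original graph.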
The paper extends prior work on equivalence between predictive coding and backprop in layered neural networks to arbitrary computation graphs. This is empirically tested first on a simple nonlinear scalar function, and then on a few commonly used architectures (CNNs, RNNs, LSTMs), confirming the theoretical results. The importance of this advance is highlighted by noting that the demonstrated equivalence shows how in principle modern architectures could be implemented in biological neural systems, and that the highly parallel nature of predictive coding could lead to efficient implementations in neuromorphic hardware.
SP:60894f74f40addd7a2a35a003dcdce6cf70ffef4
Reweighting Augmented Samples by Minimizing the Maximal Expected Loss
1 INTRODUCTION. Deep neural networks have achieved state-of-the-art results in various natural language processing (NLP) tasks (Sutskever et al., 2014; Vaswani et al., 2017; Devlin et al., 2019) and computer vision (CV) tasks (He et al., 2016; Goodfellow et al., 2016). One approach to improving the generalization performance of deep neural networks is data augmentation (Xie et al., 2019; Jiao et al., 2019; Cheng et al., 2019; 2020). However, there are some problems if we directly incorporate these augmented samples into the training set. Minimizing the average loss on all these samples means treating them equally, without considering their different implicit impacts on the loss. To address this, we propose to minimize a reweighted loss on these augmented samples so that the model utilizes them in a cleverer way. Example reweighting has previously been explored extensively in curriculum learning (Bengio et al., 2009; Jiang et al., 2014), boosting algorithms (Freund & Schapire, 1999), focal loss (Lin et al., 2017), and importance sampling (Csiba & Richtárik, 2018). However, none of these focus on the reweighting of augmented samples rather than the original training samples. A recent work (Jiang et al., 2020a) also assigns different weights to augmented samples, but the weights in their model are predicted by a mentor network, while we obtain the weights from the closed-form solution of minimizing the maximal expected loss (MMEL). In addition, they focus on image samples with noisy labels, while our method applies generally to both textual and image data. Tran et al. (2017) propose to minimize the loss on augmented samples under the framework of the Expectation-Maximization algorithm, but they mainly focus on the generation of augmented samples. Unfortunately, in practice there is no way to directly access the optimal reweighting strategy. Thus, inspired by adversarial training (Madry et al., 2018), we propose to minimize the maximal expected loss (MMEL) on augmented samples derived from the same training example. Since the maximal expected loss is the supremum over any possible reweighting strategy on the augmented samples' losses, minimizing this supremum makes the model perform well under any reweighting strategy. More importantly, we derive a closed-form solution for the weights, in which augmented samples with larger training losses have larger weights. Intuitively, MMEL allows the model to keep focusing on augmented samples that are harder to train. The procedure of our method is summarized as follows. We first generate the augmented samples with a commonly-used data augmentation technique, e.g., lexical substitution for textual input (Jiao et al., 2019), or random crop and horizontal flip for image data (Krizhevsky et al., 2012). Then we explicitly derive the closed-form solution of the weights on each of the augmented samples. After that, we update the model parameters with respect to the reweighted loss. The proposed method can generally be applied on top of any data augmentation method in various domains such as natural language processing and computer vision.
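The three-step procedure just described maps to a short training loop. The following PyTorch sketch is our own: the function names, the augmentation hook, and the per-sample loss handling are illustrative assumptions, and the weight formula itself is derived later in Eq. 6.

```python
import torch
import torch.nn.functional as F

def mmel_step(model, optimizer, x, y, augment, lam_p, n_aug=4):
    """One MMEL update: build n_aug augmented views per example, weight
    each augmented loss by a softmax over that example's losses (larger
    loss -> larger weight), and descend the reweighted loss."""
    aug_x, aug_y = augment(x, y, n_aug)            # [B, n_aug, ...], [B, n_aug]
    logits = model(aug_x.flatten(0, 1))
    losses = F.cross_entropy(logits, aug_y.flatten(0, 1), reduction="none")
    losses = losses.view(x.size(0), n_aug)         # per example, per view
    weights = F.softmax(losses.detach() / lam_p, dim=1)
    loss = (weights * losses).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Whether to stop gradients through the weights is an implementation choice; we detach them here so gradients flow only through the reweighted loss itself.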
Empirical results on both natural language understanding tasks and image classification tasks show that the proposed reweighting strategy consistently outperforms the counterpart without it, as well as other reweighting strategies such as uniform reweighting.

2 RELATED WORK. Data augmentation. Data augmentation has proven to be an effective technique for improving generalization on various tasks, e.g., natural language processing (Xie et al., 2019; Zhu et al., 2020; Jiao et al., 2019), computer vision (Krizhevsky et al., 2014), and speech recognition (Park et al., 2019). For image data, baseline augmentation methods such as random crop, flip, scaling, and color augmentation (Krizhevsky et al., 2012) have been widely used. Other heuristic data augmentation techniques, such as Cutout (DeVries & Taylor, 2017), which masks image patches, and Mixup (Zhang et al., 2018), which combines pairs of examples and their labels, were proposed later. Automatically searching for augmentation policies (Cubuk et al., 2018; Lim et al., 2019) has recently been proposed to further improve performance. For textual data, Zhang et al. (2015), Wei & Zou (2019), and Wang (2015) use lexical substitution based on the embedding space. Jiao et al. (2019), Cheng et al. (2019), and Kumar et al. (2020) generate augmented samples with a pre-trained language model. Some other techniques, such as back translation (Xie et al., 2019), random noise injection (Xie et al., 2017), and data mixup (Guo et al., 2019; Cheng et al., 2020), have also proven useful.

Adversarial training. Adversarial learning is used to enhance the robustness of models (Madry et al., 2018) by dynamically constructing augmented adversarial samples via projected gradient descent during training. Although adversarial training hurts generalization on image classification (Raghunathan et al., 2019), it has been shown that adversarial training can be used as data augmentation to help generalization in neural machine translation (Cheng et al., 2019; 2020) and natural language understanding (Zhu et al., 2020; Jiang et al., 2020b). Our proposed method differs from adversarial training in that we adversarially decide the weight on each augmented sample, while traditional adversarial training adversarially generates the augmented input samples. In (Behpour et al., 2019), adversarial learning is used as data augmentation in object detection: the adversarial samples (i.e., bounding boxes that are maximally different from the ground truth) are reweighted to form the underlying annotation distribution. However, besides the difference in model and task, their training objective and the resulting solution also differ from ours.

Sample reweighting. Minimizing a reweighted loss on training samples has been widely explored in the literature. Curriculum learning (Bengio et al., 2009; Jiang et al., 2014) feeds first easier and then harder data into the model to accelerate training. Zhao & Zhang (2014), Needell et al. (2014), Csiba & Richtárik (2018), and Katharopoulos & Fleuret (2018) use importance sampling to reduce the variance of stochastic gradients and achieve a faster convergence rate. Boosting algorithms (Freund & Schapire, 1999) choose harder examples to train subsequent classifiers. Similarly, hard example mining (Malisiewicz et al., 2011) downsamples the majority class and exploits the most difficult examples.
Focal loss (Lin et al., 2017; Goyal & He, 2018) focuses on harder examples by reshaping the standard cross-entropy loss in object detection. Ren et al. (2018), Jiang et al. (2018), and Shu et al. (2019) use meta-learning methods to reweight examples to handle the noisy-label problem. Unlike all these existing methods, in this work we reweight the augmented samples' losses instead of the training samples'.

3 MINIMIZE THE MAXIMAL EXPECTED LOSS. In this section, we derive our reweighting strategy on augmented samples from the perspective of the maximal expected loss. We first derive the closed-form solution of the weights on augmented samples. Then we describe two kinds of loss under this formulation. Finally, we give the implementation details, using the natural language understanding task as an example.

3.1 WHY MAXIMAL EXPECTED LOSS. Consider a classification task with N training samples. For the i-th training sample x_i, its label is denoted y_{x_i}. Let f_θ(·) be the model with parameters θ, which outputs classification probabilities, and let ℓ(·,·) denote the loss function, e.g., the cross-entropy loss between the outputs f_θ(x_i) and the ground-truth label y_{x_i}. Given an original training sample x_i, the set of augmented samples generated by some method is B(x_i). Without loss of generality, we assume x_i ∈ B(x_i). The conventional training objective is to minimize the loss on every augmented sample z in B(x_i):

$$\min_\theta \frac{1}{N} \sum_{i=1}^{N} \frac{1}{|B(x_i)|} \sum_{(z,\, y_z) \in B(x_i)} \ell(f_\theta(z), y_z), \qquad (1)$$

where y_z is the label of z ∈ B(x_i), which can differ from y_{x_i}, and |B(x_i)| is the number of augmented samples in B(x_i), assumed to be finite. In equation (1), for each given x_i, the weights on its augmented samples are the same (i.e., 1/|B(x_i)|). However, different samples have different implicit impacts on the loss, and we can assign different weights to them to facilitate training. Note that computing the weighted sum of the losses of the augmented samples in B(x_i) can be viewed as taking an expectation of the loss on augmented samples z ∈ B(x_i) under a certain distribution. When the augmented samples generated from the same training sample are drawn from a uniform distribution, the loss in equation (1) can be rewritten as

$$\min_\theta R_\theta(P_U) = \min_\theta \frac{1}{N} \sum_{i=1}^{N} \Big[ \mathbb{E}_{z \sim P_U(\cdot \mid x_i)}\big[\ell(f_\theta(z), y_z)\big] - \lambda_P\, \mathrm{KL}\big(P_U(\cdot \mid x_i) \,\|\, P_U(\cdot \mid x_i)\big) \Big], \qquad (2)$$

where the Kullback-Leibler (KL) divergence KL(P_U(· | x_i) ‖ P_U(· | x_i)) equals zero. Here P_U(· | x_i) denotes the uniform distribution on B(x_i). When the augmented samples are drawn from a more general distribution P_B(· | ·) (hereafter abbreviated P_B when there is no ambiguity) instead of the uniform distribution, we can generalize P_U(· | ·) to this conditional distribution P_B:

$$\min_\theta R_\theta(P_B) = \min_\theta \frac{1}{N} \sum_{i=1}^{N} \Big[ \mathbb{E}_{z \sim P_B(\cdot \mid x_i)}\big[\ell(f_\theta(z), y_z)\big] - \lambda_P\, \mathrm{KL}\big(P_B(\cdot \mid x_i) \,\|\, P_U(\cdot \mid x_i)\big) \Big]. \qquad (3)$$

Remark 1. When P_B(· | x_i) reduces to the uniform distribution P_U(· | x_i) for every x_i, since KL(P_U(· | x_i) ‖ P_U(· | x_i)) = 0, the objective in equation (3) reduces to the one in equation (1). The KL-divergence term in equation (3) is used as a regularizer to encourage P_B to stay close to P_U (see Remark 2).

From equation (3), the conditional distribution P_B determines the weights of the augmented samples in B(x_i). There may exist an optimal formulation of P_B in some regime, e.g.,
corresponding to the optimal generalization ability of the model. Unfortunately, we cannot explicitly characterize such an unknown optimal P_B. To address this, we borrow the idea from adversarial training (Madry et al., 2018) and minimize the maximal reweighted loss on augmented samples. The model is then guaranteed to perform well under any reweighting strategy, including the underlying optimal one. Specifically, let the conditional distribution P_B be P*_θ = arg sup_{P_B} R_θ(P_B). Our objective is to minimize the following reweighted loss:

$$\min_\theta R_\theta(P^*_\theta) = \min_\theta \sup_{P_B} R_\theta(P_B). \qquad (4)$$

The following Remark 2 discusses the KL-divergence term in equation (3).

Remark 2. Since we take a supremum over P_B in equation (4), the regularizer KL(P_B ‖ P_U) encourages P_B to be close to P_U, because it attains its minimal value of zero when P_B = P_U. The regularizer thus controls the diversity among the augmented samples by constraining the discrepancy between P_B and the uniform distribution P_U; e.g., a larger λ_P promotes a larger diversity among the augmented samples.

The following Theorem 1 gives the explicit formulation of R_θ(P*_θ).

Theorem 1. Let R_θ(P_B) and R_θ(P*_θ) be defined as in equations (3) and (4). Then

$$R_\theta(P^*_\theta) = \frac{1}{N} \sum_{i=1}^{N} \sum_{z \in B(x_i)} \Big[ P^*_\theta(z \mid x_i)\, \ell(f_\theta(z), y_z) - \lambda_P\, P^*_\theta(z \mid x_i) \log\big( |B(x_i)|\, P^*_\theta(z \mid x_i) \big) \Big], \qquad (5)$$

where

$$P^*_\theta(z \mid x_i) = \frac{\exp\big( \tfrac{1}{\lambda_P}\, \ell(f_\theta(z), y_z) \big)}{\sum_{z' \in B(x_i)} \exp\big( \tfrac{1}{\lambda_P}\, \ell(f_\theta(z'), y_{z'}) \big)} = \mathrm{Softmax}_z\Big( \tfrac{1}{\lambda_P}\, \ell\big(f_\theta(B(x_i)), y_{B(x_i)}\big) \Big), \qquad (6)$$

and Softmax_z(ℓ(f_θ(B(x_i)), y_{B(x_i)})/λ_P) denotes the output probability at position z of the softmax applied to the vector (ℓ(f_θ(z_1), y_{z_1})/λ_P, ..., ℓ(f_θ(z_{|B(x_i)|}), y_{z_{|B(x_i)|}})/λ_P).

Remark 3. If we ignore the KL-divergence term in equation (3), then, due to the equivalence of minimizing the cross-entropy loss and the MLE loss (Martens, 2019), the proposed MMEL also falls into the generalized Expectation-Maximization (GEM) framework (Dempster et al., 1977). Specifically, given a training example, its augmented samples can be viewed as latent variables, and any reweighting of these augmented samples corresponds to a specific conditional distribution of the augmented samples given the training sample. In the expectation step (E-step), we explicitly derive the closed-form solution of the weights on each augmented sample according to (6). In the maximization step (M-step), since there is no analytical solution for deep neural networks, following (Tran et al., 2017), we update the model parameters with respect to the reweighted loss by one step of gradient descent.

The proof of this theorem can be found in Appendix A. From Theorem 1, the loss of each augmented sample z ∈ B(x_i) decides its weight, and the weight is normalized by a softmax over all augmented samples in B(x_i). This reweighting strategy pays more attention to augmented samples with higher loss values. The strategy is similar to those in (Lin et al., 2017; Zhao & Zhang, 2014), but they apply it to the training samples.
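Eq. 6 is straightforward to implement. This small PyTorch sketch (ours) computes the closed-form weights for one example's augmented views and the resulting reweighted loss.

```python
import torch
import torch.nn.functional as F

def mmel_weights(aug_losses, lam_p):
    """Closed-form solution of Eq. 6: the weight of each augmented sample
    is the softmax of its loss scaled by 1/lambda_P, so harder samples
    receive larger weights."""
    return F.softmax(aug_losses / lam_p, dim=-1)

# Three augmented views of one training example:
losses = torch.tensor([0.2, 1.5, 0.7])
w = mmel_weights(losses, lam_p=1.0)   # ~[0.16, 0.58, 0.26]
reweighted_loss = (w * losses).sum()
```

As the softmax temperature λ_P grows, the weights approach the uniform 1/|B(x_i)| of Eq. 1; as it shrinks, the weight concentrates on the hardest augmented sample.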
This paper proposes a simple scheme for training with multiple augmentations of the training data in each iteration and reweighting the instances by their relative loss. As the authors note in their related work, the idea of reweighting examples based on their relative loss has been widely studied across a variety of machine learning problems. In contrast, this work proposes reweighting only within the augmentations of a single sample. They derive their particular reweighting scheme by proposing an alternative risk (Eq. 3). The new objective is a function of both the model parameters and the distribution of augmentations. They propose to find the model parameters that minimize the alternative risk for the hardest distribution of augmentations, i.e., the one that maximizes the alternative risk (Eq. 4). They then consider distributions of augmentations that are a function of the model parameters and the input, and show that for fixed model parameters, the optimal distribution over a fixed finite set of augmentations is given by the softmax of the model's loss on each augmented input. In Section 3.2, they propose two variations of their loss: using the ground-truth label to evaluate the loss (hard loss) versus using the model's prediction for the original raw input (soft loss). In Section 3.3, they propose specific considerations for augmenting text data. They provide experiments on image and text data with ablation studies.
SP:9e6b5b7d9e7459c015130f4b80f7bc75424de050