Dataset Preview
paper_name (string)text (string)summary (string)paper_id (string)
For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability
1 INTRODUCTION . Statistical learning theory studies the learning properties of machine learning algorithms , and more fundamentally , the conditions under which learning from finite data is possible . In this context , classical learning theory focuses on the size of the hypothesis space in terms of different complexity measures , such as combinatorial dimensions , covering numbers and Rademacher/Gaussian complexities ( Shalev-Shwartz & Ben-David , 2014 ; Boucheron et al. , 2005 ) . Another more recent approach is based on defining suitable notions of stability with respect to perturbation of the data ( Bousquet & Elisseeff , 2001 ; Kutin & Niyogi , 2002 ) . In this view , the continuity of the process that maps data to estimators is crucial , rather than the complexity of the hypothesis space . Different notions of stability can be considered , depending on the data perturbation and metric considered ( Kutin & Niyogi , 2002 ) . Interestingly , the stability and complexity approaches to characterizing the learnability of problems are not at odds with each other , and can be shown to be equivalent as shown in Poggio et al . ( 2004 ) and Shalev-Shwartz et al . ( 2010 ) . In modern machine learning overparameterized models , with a larger number of parameters than the size of the training data , have become common . The ability of these models to generalize is well explained by classical statistical learning theory as long as some form of regularization is used in the training process ( Bühlmann & Van De Geer , 2011 ; Steinwart & Christmann , 2008 ) . However , it was recently shown - first for deep networks ( Zhang et al. , 2017 ) , and more recently for kernel methods ( Belkin et al. , 2019 ) - that learning is possible in the absence of regularization , i.e. , when perfectly fitting/interpolating the data . Much recent work in statistical learning theory has tried to find theoretical ground for this empirical finding . Since learning using models that interpolate is not exclusive to deep neural networks , we study generalization in the presence of interpolation in the case of kernel methods . We study both linear and kernel least squares problems in this paper . Our Contributions : • We characterize the generalization properties of interpolating solutions for linear and kernel least squares problems using a stability approach . While the ( uniform ) stability properties of regularized kernel methods are well known ( Bousquet & Elisseeff , 2001 ) , we study interpolating solutions of the unregularized ( `` ridgeless '' ) regression problems . • We obtain an upper bound on the stability of interpolating solutions , and show that this upper bound is minimized by the minimum norm interpolating solution . This also means that among all interpolating solutions , the minimum norm solution has the best test error . In particular , the same conclusion is also true for gradient descent , since it converges to the minimum norm solution in the setting we consider , see e.g . Rosasco & Villa ( 2015 ) . • Our stability bounds show that the average stability of the minimum norm solution is controlled by the condition number of the empirical kernel matrix . It is well known that the numerical stability of the least squares solution is governed by the condition number of the associated kernel matrix ( see the discussion of why overparametrization is “ good ” in Poggio et al . ( 2019 ) ) . Our results show that the condition number also controls stability ( and hence , test error ) in a statistical sense . 
Organization : In section 2 , we introduce basic ideas in statistical learning and empirical risk minimization , as well as the notation used in the rest of the paper . In section 3 , we briefly recall some definitions of stability . In section 4 , we study the stability of interpolating solutions to kernel least squares and show that the minimum norm solutions minimize an upper bound on the stability . In section 5 we discuss our results in the context of recent work on high dimensional regression . We conclude in section 6 . 2 STATISTICAL LEARNING AND EMPIRICAL RISK MINIMIZATION . We begin by recalling the basic ideas in statistical learning theory . In this setting , X is the space of features , Y is the space of targets or labels , and there is an unknown probability distribution µ on the product space Z = X × Y . In the following , we consider X = Rd and Y = R. The distribution µ is fixed but unknown , and we are given a training set S consisting of n samples ( thus |S| = n ) drawn i.i.d . from the probability distribution on Zn , S = ( zi ) ni=1 = ( xi , yi ) n i=1 . Intuitively , the goal of supervised learning is to use the training set S to “ learn ” a function fS that evaluated at a new value xnew should predict the associated value of ynew , i.e . ynew ≈ fS ( xnew ) . The loss is a function V : F × Z → [ 0 , ∞ ) , where F is the space of measurable functions from X to Y , that measures how well a function performs on a data point . We define a hypothesis space H ⊆ F where algorithms search for solutions . With the above notation , the expected risk of f is defined as I [ f ] = EzV ( f , z ) which is the expected loss on a new sample drawn according to the data distribution µ . In this setting , statistical learning can be seen as the problem of finding an approximate minimizer of the expected risk given a training set S. A classical approach to derive an approximate solution is empirical risk minimization ( ERM ) where we minimize the empirical risk IS [ f ] = 1 n ∑n i=1 V ( f , zi ) . A natural error measure for our ERM solution fS is the expected excess risk ES [ I [ fS ] −minf∈H I [ f ] ] . Another common error measure is the expected generalization error/gap given by ES [ I [ fS ] − IS [ fS ] ] . These two error measures are closely related since , the expected excess risk is easily bounded by the expected generalization error ( see Lemma 5 ) . 2.1 KERNEL LEAST SQUARES AND MINIMUM NORM SOLUTION . The focus in this paper is on the kernel least squares problem . We assume the loss function V is the square loss , that is , V ( f , z ) = ( y − f ( x ) ) 2 . The hypothesis space is assumed to be a reproducing kernel Hilbert space , defined by a positive definite kernel K : X ×X → R or an associated feature map Φ : X → H , such that K ( x , x′ ) = 〈Φ ( x ) , Φ ( x′ ) 〉H for all x , x′ ∈ X , where 〈· , ·〉H is the inner product in H. In this setting , functions are linearly parameterized , that is there exists w ∈ H such that f ( x ) = 〈w , Φ ( x ) 〉H for all x ∈ X . The ERM problem typically has multiple solutions , one of which is the minimum norm solution : f†S = arg min f∈M ‖f‖H , M = arg min f∈H 1 n n∑ i=1 ( f ( xi ) − yi ) 2 . ( 1 ) Here ‖·‖H is the norm onH induced by the inner product . The minimum norm solution can be shown to be unique and satisfy a representer theorem , that is for all x ∈ X : f†S ( x ) = n∑ i=1 K ( x , xi ) cS [ i ] , cS = K †y ( 2 ) where cS = ( cS [ 1 ] , . . . , cS [ n ] ) , y = ( y1 . . . 
yn) ∈ ℝⁿ, K is the n×n matrix with entries Kij = K(xi, xj), i, j = 1, ..., n, and K† is the Moore-Penrose pseudoinverse of K. If we assume n ≤ d and that the n data features are linearly independent, that is, the rank of X is n, then it is possible to show that for many kernels one can replace K† by K⁻¹ (see Remark 2). Note that invertibility is necessary and sufficient for interpolation: if K is invertible, f†S(xi) = yi for all i = 1, ..., n, in which case the training error in (1) is zero. Remark 1 (Pseudoinverse for underdetermined linear systems) A simple yet relevant example is that of linear functions f(x) = wᵀx, which correspond to H = ℝᵈ and Φ the identity map. If the rank of X ∈ ℝ^{d×n} is n, then any interpolating solution wS satisfies wSᵀxi = yi for all i = 1, ..., n, and the minimum norm solution, also called the Moore-Penrose solution, is given by (w†S)ᵀ = yᵀX†, where the pseudoinverse takes the form X† = Xᵀ(XXᵀ)⁻¹. Remark 2 (Invertibility of translation invariant kernels) Translation invariant kernels are a family of kernel functions given by K(x1, x2) = k(x1 − x2), where k is an even function on ℝᵈ. Translation invariant kernels are Mercer kernels (positive semidefinite) if the Fourier transform of k(·) is non-negative. For Radial Basis Function kernels (K(x1, x2) = k(||x1 − x2||)) we have the additional property, due to Theorem 2.3 of Micchelli (1986), that for distinct points x1, x2, ..., xn ∈ ℝᵈ the kernel matrix K is non-singular and thus invertible. The above discussion is directly related to regularization approaches. Remark 3 (Stability and Tikhonov regularization) Tikhonov regularization is used to prevent potentially unstable behaviors. In the above setting, it corresponds to replacing Problem (1) by min_{f∈H} (1/n) Σ_{i=1}^n (f(xi) − yi)² + λ‖f‖²_H, whose unique solution is given by fλS(x) = Σ_{i=1}^n K(x, xi) c[i], with c = (K + λIn)⁻¹y. In contrast to ERM solutions, this approach prevents interpolation. The properties of the corresponding estimator are well known. In this paper, we complement these results by focusing on the case λ → 0. Finally, we end by recalling the connection between the minimum norm solution and gradient descent. Remark 4 (Minimum norm and gradient descent) In our setting, it is well known that both batch and stochastic gradient iterations converge exactly to the minimum norm solution when multiple solutions exist, see e.g. Rosasco & Villa (2015). Thus, a study of the properties of the minimum norm solution explains the properties of the solution to which gradient descent converges. In particular, when ERM has multiple interpolating solutions, gradient descent converges to a solution that minimizes a bound on stability, as we show in this paper.
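The minimum norm interpolant in Eq. (2) is easy to compute numerically. Below is a minimal NumPy sketch (illustrative, not the authors' code) that builds an RBF kernel matrix on synthetic data, solves cS = K†y with the pseudoinverse, verifies interpolation, and reports the condition number of K, which the paper argues governs both numerical and statistical stability; the kernel bandwidth, data dimensions, and random labels are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
n, d = 50, 100                          # n <= d, the underdetermined setting
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

K = rbf_kernel(X, X)                    # empirical kernel matrix
c = np.linalg.pinv(K) @ y               # c_S = K^dagger y, Eq. (2)

def f_min_norm(x_new):
    """Evaluate the minimum norm interpolant at new points."""
    return rbf_kernel(np.atleast_2d(x_new), X) @ c

print("max train residual :", np.max(np.abs(f_min_norm(X) - y)))   # ~0: interpolation
print("condition number K :", np.linalg.cond(K))                   # governs stability
```

Perturbing a single label yi and recomputing c gives a direct empirical feel for the data-perturbation notion of stability studied in the paper.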
This paper investigates kernel ridgeless regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds has been widely adopted in machine learning. However, related studies on kernel ridgeless regression are still sparse. The present study fills this gap, which, in my opinion, is also one of its main contributions.
SP:4d08cdb2de2044bcb574a425b42963b83fbebfbc
Discriminative Representation Loss (DRL): A More Efficient Approach than Gradient Re-Projection in Continual Learning
1 INTRODUCTION . In the real world , we are often faced with situations where data distributions are changing over time , and we would like to update our models by new data in time , with bounded growth in system size . These situations fall under the umbrella of “ continual learning ” , which has many practical applications , such as recommender systems , retail supply chain optimization , and robotics ( Lesort et al. , 2019 ; Diethe et al. , 2018 ; Tian et al. , 2018 ) . Comparisons have also been made with the way that humans are able to learn new tasks without forgetting previously learned ones , using common knowledge shared across different skills . The fundamental problem in continual learning is catastrophic forgetting ( McCloskey & Cohen , 1989 ; Kirkpatrick et al. , 2017 ) , i.e . ( neural network ) models have a tendency to forget previously learned tasks while learning new ones . There are three main categories of methods for alleviating forgetting in continual learning : i ) regularization-based methods which aim in preserving knowledge of models of previous tasks ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Nguyen et al. , 2018 ) ii ) architecture-based methods for incrementally evolving the model by learning task-shared and task-specific components ( Schwarz et al. , 2018 ; Hung et al. , 2019 ) ; iii ) replay-based methods which focus in preserving knowledge of data distributions of previous tasks , including methods of experience replay by episodic memories or generative models ( Shin et al. , 2017 ; Rolnick et al. , 2019 ) , methods for generating compact episodic memories ( Chen et al. , 2018 ; Aljundi et al. , 2019 ) , and methods for more efficiently using episodic memories ( Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2019a ; Riemer et al. , 2019 ; Farajtabar et al. , 2020 ) . Gradient-based approaches using episodic memories , in particular , have been receiving increasing attention . The essential idea is to use gradients produced by samples from episodic memories to constrain the gradients produced by new samples , e.g . by ensuring the inner product of the pair of gradients is non-negative ( Lopez-Paz & Ranzato , 2017 ) as follows : 〈gt , gk〉 = 〈 ∂L ( xt , θ ) ∂θ , ∂L ( xk , θ ) ∂θ 〉 ≥ 0 , ∀k < t ( 1 ) where t and k are time indices , xt denotes a new sample from the current task , and xk denotes a sample from the episodic memory . Thus , the updates of parameters are forced to preserve the performance on previous tasks as much as possible . In Gradient Episodic Memory ( GEM ) ( Lopez-Paz & Ranzato , 2017 ) , gt is projected to a direction that is closest to it in L2-norm whilst also satisfying Eq . ( 1 ) : ming̃ 12 ||gt − g̃|| 2 2 , s.t.〈g̃ , gk〉 ≥ 0 , ∀k < t. Optimization of this objective requires a high-dimensional quadratic program and thus is computationally expensive . Averaged-GEM ( A-GEM ) ( Chaudhry et al. , 2019a ) alleviates the computational burden of GEM by using the averaged gradient over a batch of samples instead of individual gradients of samples in the episodic memory . This not only simplifies the computation , but also obtains comparable performance with GEM . Orthogonal Gradient Descent ( OGD ) ( Farajtabar et al. , 2020 ) projects gt to the direction that is perpendicular to the surface formed by { gk|k < t } . Moreover , Aljundi et al . ( 2019 ) propose Gradient-based Sample Selection ( GSS ) , which selects samples that produce most diverse gradients with other samples into episodic memories . 
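As a concrete illustration of the gradient-constraint idea above, here is a small sketch of an A-GEM-style projection (a simplified rendering of Chaudhry et al. (2019a), not their released code), where the reference gradient is the averaged gradient over a batch drawn from episodic memory and all parameter gradients are flattened into vectors.

```python
import numpy as np

def agem_project(g_new, g_ref):
    """A-GEM-style projection: if the current-task gradient conflicts with the
    averaged memory gradient (negative inner product), remove the conflicting
    component so that <g_tilde, g_ref> >= 0; otherwise keep it unchanged."""
    dot = g_new @ g_ref
    if dot >= 0.0:
        return g_new
    return g_new - (dot / (g_ref @ g_ref)) * g_ref

rng = np.random.default_rng(0)
g_t = rng.normal(size=1000)     # flattened gradient from the current batch
g_mem = rng.normal(size=1000)   # averaged gradient over an episodic-memory batch
g_proj = agem_project(g_t, g_mem)
print(g_t @ g_mem, "->", g_proj @ g_mem)   # the projected inner product is >= 0
```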
Here diversity is measured by the cosine similarity between gradients . Since the cosine similarity is computed using the inner product of two normalized gradients , GSS embodies the same principle as other gradient-based approaches with episodic memories . Although GSS suggests the samples with most diverse gradients are important for generalization across tasks , Chaudhry et al . ( 2019b ) show that the average gradient over a small set of random samples may be able to obtain good generalization as well . In this paper , we answer the following questions : i ) Which samples tend to produce diverse gradients that strongly conflict with other samples and why are such samples able to help with generalization ? ii ) Why does a small set of randomly chosen samples also help with generalization ? iii ) Can we reduce the diversity of gradients in a more efficient way ? Our answers reveal the relation between diversity of gradients and discriminativeness of representations , and further show connections between Deep Metric Learning ( DML ) ( Kaya & Bilge , 2019 ; Roth et al. , 2020 ) and continual learning . Drawing on these findings we propose a new approach , Discriminative Representation Loss ( DRL ) , for classification tasks in continual learning . Our methods show improved performance with relatively low computational cost in terms of time and RAM cost when compared to several state-of-theart ( SOTA ) methods across multiple benchmark tasks in the setting of online continual learning . 2 A NEW PERSPECTIVE OF REDUCING DIVERSITY OF GRADIENTS . According to Eq . ( 1 ) , negative cosine similarities between gradients produced by current and previous tasks result in worse performance in continual learning . This can be interpreted from the perspective of constrained optimization as discussed by Aljundi et al . ( 2019 ) . Moreover , the diversity of gradients relates to the Gradient Signal to Noise Ratio ( GSNR ) ( Liu et al. , 2020 ) , which plays a crucial role in the model ’ s generalization ability . Intuitively , when more of the gradients point in diverse directions , the variance will be larger , leading to a smaller GSNR , which indicates that reducing the diversity of gradients can improve generalization . This finding leads to the conclusion that samples with the most diverse gradients contain the most critical information for generalization which is consistent with in Aljundi et al . ( 2019 ) . 2.1 THE SOURCE OF GRADIENT DIVERSITY . We first conducted a simple experiment on classification tasks of 2-D Gaussian distributions , and tried to identify samples with most diverse gradients in the 2-D feature space . We trained a linear model on the first task to discriminate between two classes ( blue and orange dots in Fig . 1a ) . We then applied the algorithm Gradient-based Sample Selection with Interger Quadratic Programming ( GSS-IQP ) ( Aljundi et al. , 2019 ) to select 10 % of the samples of training data that produce gradients with the lowest similarity ( black dots in Fig . 1a ) , and denote this set of samples as M̂ = minM ∑ i , j∈M 〈gi , gj〉 ||gi||·||gj || . It is clear from Fig . 1a that the samples in M̂ are mostly around the decision boundary between the two classes . Increasing the size of M̂ results in the inclusion of samples that trace the outer edges of the data distributions from each class . Clearly the gradients can be strongly opposed when samples from different classes are very similar . 
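The GSS-IQP selection of M̂ described above requires solving an integer quadratic program; the sketch below uses a much simpler greedy surrogate (an assumption for illustration, not the algorithm of Aljundi et al. (2019)) that repeatedly adds the sample whose gradient has the lowest average cosine similarity to the already selected set.

```python
import numpy as np

def select_diverse(grads, m):
    """Greedy surrogate for GSS-style selection: start from the sample whose
    gradient is most 'in conflict' overall, then repeatedly add the sample with
    the lowest average cosine similarity to the chosen set.
    grads: (n, p) array of per-sample gradients; returns m selected indices."""
    G = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    sims = G @ G.T                                   # pairwise cosine similarities
    chosen = [int(np.argmin(sims.sum(axis=1)))]
    while len(chosen) < m:
        avg_sim = sims[:, chosen].mean(axis=1)
        avg_sim[chosen] = np.inf                     # never re-select
        chosen.append(int(np.argmin(avg_sim)))
    return chosen

# In the experiment above, `grads` would be per-sample loss gradients of the
# linear model on the 2-D Gaussian data; random vectors here just show the call.
rng = np.random.default_rng(0)
memory_idx = select_diverse(rng.normal(size=(200, 50)), m=20)
print(memory_idx)
```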
Samples close to decision boundaries are most likely to exhibit this characteristic. Intuitively, storing the decision boundaries of previously learned classes should be an effective way to preserve classification performance on those classes. However, if the episodic memory only includes samples representing the learned boundaries, it may miss important information when the model is required to incrementally learn new classes. We show this by introducing a second task - training the model above on a third class (green dots). We display the decision boundaries (which split the feature space in a one vs. all manner) learned by the model after task 2, with M̂ (Fig. 1b) and with a random set of samples (Fig. 1c) from task 1 as the episodic memory. [Figure 1: (a) samples with most diverse gradients (M̂) after learning task 1, with the green line showing the decision boundary; (b) learned decision boundaries (purple lines) after task 2 when the episodic memory includes the samples in M̂; (c) learned decision boundaries (purple lines) after task 2 when the episodic memory consists of random samples.] [Figure 2: illustration of how Pr(2β > α − δ) in Theorem 1 behaves when negative pairs are drawn from different subsets of a 3-class feature space defined in Fig. 2a; the classifier is a linear model. (a) Splitting samples into several subsets in a 3-class classification task; dots in different colors are from different classes. (b) Estimated distributions of β for negative pairs drawn from different subsets. (c) Estimated distributions of α − δ for negative pairs drawn from different subsets. The right-hand y-axis in (b) & (c) is for the case x ∈ S1 ∪ S2. α − δ behaves similarly to β but over a smaller range, which makes β the key quantity in studying Pr(2β > α − δ). In the case x ∈ S3 the distribution of β has more mass on larger values than in the other cases because the predicted probabilities concentrate mostly on the two classes in a pair, which causes all 〈gn, gm〉 to have the opposite sign of 〈xn, xm〉, as shown in Tab. 1.] The random episodic memory shows better performance than the one selected by GSS-IQP, since the new decision boundaries rely on samples not included in M̂. This explains why randomly selected memories may generalize better in continual learning. Ideally, with M̂ large enough, the model can remember all edges of each class, and hence learn much more accurate decision boundaries sequentially. However, memory size is often limited in practice, especially for high-dimensional data. A more efficient way could be to learn more informative representations. The experimental results indicate that: 1) more similar representations in different classes result in more diverse gradients; 2) more diverse representations within the same class help with learning new boundaries incrementally. We now formalise the connection between the diversity of gradients and the discriminativeness of representations for the linear model (proofs are in Appx. A). Notation: a negative pair is two samples from different classes; a positive pair is two samples from the same class.
Let L represent the softmax cross-entropy loss, W ∈ ℝ^{D×K} the weight matrix of the linear model, xn ∈ ℝ^D the input data, and yn ∈ ℝ^K a one-hot vector denoting the label of xn, where D is the dimension of representations and K is the number of classes. Let pn = softmax(on), where on = Wᵀxn, and let the gradient be gn = ∇W L(xn, yn; W). xn, xm are two different samples when n ≠ m. Lemma 1. Let εn = pn − yn. Then 〈gn, gm〉 = 〈xn, xm〉〈εn, εm〉. Theorem 1. Suppose yn ≠ ym, and let cn denote the class index of xn (i.e., yn,cn = 1, yn,i = 0 ∀i ≠ cn). Let α ≜ ‖pn‖² + ‖pm‖², β ≜ pn,cm + pm,cn and δ ≜ ‖pn − pm‖²₂. Then Pr(sign(〈gn, gm〉) = sign(−〈xn, xm〉)) = Pr(2β > α − δ). Theorem 2. Suppose yn = ym. When 〈gn, gm〉 ≠ 0, we have sign(〈gn, gm〉) = sign(〈xn, xm〉). For a better understanding of the theorems, we conduct an empirical study by partitioning the feature space of three classes into several subsets, as shown in Fig. 2a, and examine four cases of sample pairs defined by these subsets: 1) x ∈ S0: both samples in a pair are near the intersection of the three classes; 2) x ∈ S0 ∪ S1: one sample is close to the decision boundaries and the other is far away from them; 3) x ∈ S3: both samples are close to the decision boundary between their true classes but away from the third class; 4) x ∈ S1 ∪ S2: both samples are far away from the decision boundaries. Theorem 1 says that for samples from different classes, 〈gn, gm〉 gets the opposite sign of 〈xn, xm〉 with a probability that depends on the predictions pn and pm. This probability of flipping the sign depends especially on β, which reflects how much probability each sample's prediction puts on the other sample's class. We show the empirical distributions of β and (α − δ) obtained by a linear model in Figs. 2b and 2c, respectively. In general, (α − δ) behaves similarly to β in the four cases but over a smaller range, so 2β > (α − δ) tends to hold except when β is around zero. Basically, a subset including more samples close to decision boundaries leads to more probability mass on large values of β, and the case x ∈ S3 puts the largest mass on large values of β because the predicted probabilities mostly concentrate on the two classes in a pair. As shown in Tab. 1, more mass on large values of β leads to larger probabilities of flipping the sign. These results demonstrate that the samples with most diverse gradients (i.e., whose gradients have strongly negative similarities with other samples) are close to decision boundaries, because they tend to have large β while 〈xn, xm〉 tends to be positive. In the case x ∈ S1 ∪ S2 the probability of flipping the sign is zero because β concentrates around zero. According to Lemma 1, 〈gn, gm〉 is very close to zero in this case because the predictions are close to the true labels; hence such samples are not considered to have the most diverse gradients. Theorem 2 says 〈gn, gm〉 has the same sign as 〈xn, xm〉 when the two samples are from the same class. The results for positive pairs in Tab. 1 match Theorem 2. In the case of S0 ∪ S1 the two probabilities do not add up to exactly 1 because the TensorFlow implementation of the cross-entropy loss smooths the function by a small value to prevent numerical issues, which slightly changes the gradients.
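Lemma 1 can be checked numerically in a few lines for the linear softmax model: the per-sample gradient is the outer product xn εnᵀ, so its Frobenius inner product factorizes. The sketch below uses random weights and inputs purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 5, 3
W = rng.normal(size=(D, K))

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def grad_and_eps(x, y):
    """Per-sample gradient of the softmax cross-entropy w.r.t. W, and eps = p - y."""
    p = softmax(W.T @ x)
    eps = p - y
    return np.outer(x, eps), eps        # dL/dW = x (p - y)^T

x_n, x_m = rng.normal(size=D), rng.normal(size=D)
y_n, y_m = np.eye(K)[0], np.eye(K)[1]   # a negative pair (different classes)
g_n, eps_n = grad_and_eps(x_n, y_n)
g_m, eps_m = grad_and_eps(x_m, y_m)

lhs = np.sum(g_n * g_m)                 # <g_n, g_m>  (Frobenius inner product)
rhs = (x_n @ x_m) * (eps_n @ eps_m)     # <x_n, x_m> <eps_n, eps_m>
print(lhs, rhs)                         # equal up to floating-point error
```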
As 〈xn , xm〉 is mostly positive for positive pairs , 〈gn , gm〉 hence is also mostly positive , which explains why samples with most diverse gradients are not sufficient to preserve information within classes in experiments of Fig . 1 . On the other hand , if 〈xn , xm〉 is negative then 〈gn , gm〉 will be negative , which indicates representations within a class should not be too diverse . Extending this theoretical analysis based on a linear model , we also provide empirical study of non-linear models ( Multi-layer Perceptrons ( MLPs ) ) . As demonstrated in Tab . 1 , the probability of flipping the sign in MLPs are very similar with the linear model since it only depends on the predictions and all models have learned reasonable decision boundaries . The probability of getting negative 〈gn , gm〉 is also similar with the linear model except in the case of S1 ∪ S2 for negative pairs , in which the MLP with ReLU gets much less negative 〈gn , gm〉 . As MLP with tanh activations is still consistent with the linear model in this case , we consider the difference is caused by the representations always being positive due to ReLU activations . These results demonstrate that non-linear models exhibit similar behaviors with linear models that mostly align with the theorems . Since only negative 〈gn , gm〉 may cause conflicts , reducing the diversity of gradients hence relies on reducing negative 〈gn , gm〉 . We consider to reduce negative 〈gn , gm〉 by two ways : 1 ) .minimize the representation inner product of negative pairs , which pushes the inner product to be negative or zero ( for positive representations ) ; 2 ) .optimize the predictions to decrease the probability of flipping the sign . In this sense , decreasing the representation similarity of negative pairs might help with both ways . In addition , according to Fig . 2b x ∼ S3 gets larger prediction similarity than x ∼ S0 due to the predictions put most probability mass on both classes of a pair , which indicates decreasing the similarity of predictions may decrease the probability of flipping the sign . Hence , we include logits in the representations . We verify this idea by training two binary classifiers for two groups of MNIST classes ( { 0 , 1 } and { 7 , 9 } ) . The classifiers have two hidden layers each with 100 hidden units and ReLU activations . We randomly chose 100 test samples from each group to compute the pairwise cosine similarities . Representations are obtained by concatenating the output of all layers ( including logits ) of the neural network , gradients are computed by all parameters of the model . We display the similarities in Figs . 3a and 3b . The correlation coefficients between the gradient and representation similarities of negative pairs are -0.86 and -0.85 , which of positive pairs are 0.71 and 0.79 . In all cases , the similarities of representations show strong correlations with the similarities of gradients . The classifier for class 0 and 1 gets smaller representation similarities and much less negative gradient similarities for negative pairs ( blue dots ) and it also gains a higher accuracy than the other classifier ( 99.95 % vs. 96.25 % ) , which illustrates the potential of reducing the gradient diversity by decreasing the representation similarity of negative pairs .
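A sketch of the kind of comparison reported above: concatenate hidden activations and logits as the representation, flatten the per-parameter gradients, and correlate the pairwise cosine similarities over negative pairs. The tiny untrained two-layer ReLU MLP and the random data below are stand-in assumptions; in the paper the classifiers are trained on MNIST digit pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, K, N = 20, 32, 2, 60
W1 = rng.normal(scale=0.5, size=(H, D))
W2 = rng.normal(scale=0.5, size=(K, H))
X = rng.normal(size=(N, D))
labels = rng.integers(0, K, size=N)

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def rep_and_grad(x, label):
    """Representation = concatenated hidden activations and logits;
    gradient = flattened cross-entropy gradient w.r.t. (W1, W2)."""
    a = W1 @ x
    h = np.maximum(a, 0.0)                        # ReLU hidden layer
    o = W2 @ h
    eps = softmax(o) - np.eye(K)[label]
    dW2 = np.outer(eps, h)
    dW1 = np.outer((W2.T @ eps) * (a > 0), x)
    return np.concatenate([h, o]), np.concatenate([dW1.ravel(), dW2.ravel()])

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

reps, grads = zip(*(rep_and_grad(x, c) for x, c in zip(X, labels)))
rep_sims, grad_sims = [], []
for i in range(N):
    for j in range(i + 1, N):
        if labels[i] != labels[j]:                # negative pairs only
            rep_sims.append(cos(reps[i], reps[j]))
            grad_sims.append(cos(grads[i], grads[j]))
print("corr(rep sim, grad sim):", np.corrcoef(rep_sims, grad_sims)[0, 1])
```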
This paper presents a novel way of making full use of compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. The authors give an insightful analysis of the influence of gradient diversity on the performance of continual learning, and propose a regularization that connects metric learning and continual learning. However, there are still some issues to be addressed, as noted below.
SP:b80bc890180934092cde037b49d94d6e4e06fad9
Learning without Forgetting: Task Aware Multitask Learning for Multi-Modality Tasks
1 INTRODUCTION . The process of Multi-Task Learning ( MTL ) on a set of related tasks is inspired by the patterns displayed by human learning . It involves a pretraining phase over all the tasks , followed by a finetuning phase . During pretraining , the model tries to grasp the shared knowledge of all the tasks involved , while in the finetuning phase , task-specific learning is performed to improve the performance . However , as a result of the finetuning phase , the model forgets the information about the other tasks that it learnt during pretraining . Humans , on the other hand , are less susceptible to forgetfulness and retain existing knowledge/skills while mastering a new task . For example , a polyglot who masters a new language learns to translate from this language without losing the ability to translate other languages . Moreover , the lack of task-based flexibility and having different finetuning/pretraining phases cause gaps in the learning process due to the following reasons : Role Mismatch : Consider the MTL system being trained to perform the Speech Translation ( ST ) , Automatic Speech Recognition ( ASR ) and Machine Translation ( MT ) tasks . The Encoder block has a very different role in the standalone ASR , MT and ST models and hence we can not expect a single encoder to perform well on all the tasks without any cues to identify/use task information . Moreover , there is a discrepancy between pretraining and finetuning hampering the MTL objective . Task Awareness : At each step in the MTL , the model tries to optimize over the task at hand . For tasks like ST and ASR with the same source language , it is impossible for the model to identify the task and alter its parameters accordingly , hence necessitating a finetuning phase . A few such examples have been provided in Table 1 . Humans , on the other hand , grasp the task they have to perform by means of context or explicit cues . Although MTL strategies help the finetuned models to perform better than the models directly trained on those tasks , their applicability is limited to finding a good initialization point for the finetuning phase . Moreover , having a separate model for each task increases the memory requirements , which is detrimental in low resource settings . In order to achieve the goal of jointly learning all the tasks , similar to humans , we need to perform shared learning in synergy with task-specific learning . Previous approaches such as Raffel et al . ( 2019 ) trained a joint model for a set of related text-to-text tasks by providing the task information along with the inputs during the joint learning phase . However , providing explicit task information is not always desirable , e.g. , consider the automatic multilingual speech translation task . In order to ensure seamless user experience , it is expected that the model extracts the task information implicitly . Thus , a holistic joint learning strategy requires a generic framework which learns task-specific information without any explicit supervision . In this work , we propose a generic framework which can be easily integrated into the MTL strategies which can extract task-based characteristics . The proposed approach helps align existing MTL approaches with human learning processes by incorporating task information into the learning process and getting rid of the issues related to forgetfulness . We design a modulation network for learning the task characteristics and modulating the parameters of the model during MTL . 
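For concreteness, the speech preprocessing layer of the base model described above (a stack of k stride-2 convolutions that compresses time and frequency before the Transformer) can be sketched as follows. The kernel size, channel counts, and the final linear projection to d are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def conv2d_stride2(x, w):
    """Naive 3x3 convolution with stride 2 in both time and frequency, followed
    by ReLU. x: (T, F, C_in), w: (3, 3, C_in, C_out), 'valid' padding."""
    T, F, _ = x.shape
    out_t, out_f = (T - 3) // 2 + 1, (F - 3) // 2 + 1
    out = np.zeros((out_t, out_f, w.shape[-1]))
    for i in range(out_t):
        for j in range(out_f):
            out[i, j] = np.tensordot(x[2*i:2*i+3, 2*j:2*j+3, :], w, axes=3)
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
d, k = 64, 2                                 # model dimension and number of conv layers
x = rng.normal(size=(200, 80, 1))            # 200 speech frames x 80 mel bins
for _ in range(k):                           # each layer halves time and frequency
    w = rng.normal(scale=0.1, size=(3, 3, x.shape[-1], 32))
    x = conv2d_stride2(x, w)
# Flatten the frequency axis and project to the model dimension d for the Transformer.
proj = rng.normal(scale=0.1, size=(x.shape[1] * x.shape[2], d))
seq = x.reshape(x.shape[0], -1) @ proj
print(seq.shape)                             # (compressed length, d)
```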
As discussed above , the task information may or may not be explicitly available during the training . Hence , we propose two different designs of task modulation network to learn the task characteristics ; one uses explicit task identities while the other uses the examples from the task as input . The model , coupled with the modulation network , jointly learns on all the tasks and at the same time , performs the task-specific learning . The proposed approach tackles issues related to forgetfulness by keeping a single model for all the tasks , and hence avoiding the expensive finetuning phase . Having a single model for all the tasks also reduces memory constraints , improving suitability for low resource devices . To evaluate the proposed framework , we conduct two sets of experiments . First , we include the task information during MTL on text-to-text tasks to show the effect of task information . Secondly , we train a model on tasks with different modalities and end goals , with highly confounding tasks . Our proposed framework allows the model to learn the task characteristics without any explicit supervision , and hence train a single model which performs well on all the tasks . The main contributions of this work are as follows : • We propose an approach to tackle the issue of forgetfulness which occurs during the finetuning phase of existing MTL strategies . • Our model , without any finetuning , achieves superior performance on all the tasks which alleviates the need to keep separate task-specific models . • Our proposed framework is generic enough to be used with any MTL strategy involving tasks with multiple modalities . 2 TASK-AWARE MULTITASK LEARNING . An overview of our proposed approach is shown in Figure 1 . 2.1 BASE MODEL . In general , the sequence-to-sequence architecture consists of two components : ( 1 ) an encoder which computes a set of representationsX = { x1 , · · · , xm } ∈ Rm×d corresponding to x , and a decoder coupled with attention mechanism ( Bahdanau et al. , 2015 ) dynamically reads encoder ’ s output and predicts target language sequence Y = { y1 , · · · , yn } ∈ Rn×d . It is trained on a dataset D to maximize the p ( Y |X ; θ ) , where θ are parameters of the model . We use the Transformer Vaswani et al . ( 2017 ) as our base model . Based on the task modalities , we choose the preprocessing layer in the Transformer , i.e. , speech or the text ( text-embedding ) preprocessing layer . The speech preprocessing layer consists of a stack of k CNN layers with stride 2 for both time and frequency dimensions . This layer compresses the speech sequence and produces the output sequence such that input sequences corresponding to all the tasks have similar dimensions , d. The overview of the base sequence-to-sequence model is shown in the rightmost part of Figure 1 . 2.2 TASK MODULATION NETWORK . The task modulation network performs two operations . In the first step , it computes the task characteristics ( te ) using the task characteristics layer . It then modulates the model parameters θ using te in the second step . 2.2.1 TASK CHARACTERISTICS NETWORK : . We propose two types of Task Characteristics Networks ( TCN ) to learn the task characteristics , where one uses explicit task identities while the other uses source-target sequences as input . Explicit Task Information : In this approach , the tasks involved are represented using different task identities and fed as input to this TCN as one hot vectors . 
This network consists of a feed-forward layer which produces the task embedding used for modulating the model parameters: te = FFN(e), (1) where e ∈ ℝˢ is a one-hot encoding of the s tasks used during joint learning. Implicit Task Information: The implicit TCN computes the task embeddings using example sequences from the tasks without any external supervision. It consists of four sub-layers: (1) Sequence Representation Layer, (2) Bi-directional Attention Layer, (3) Sequence Summary Layer, and (4) Task Embedding Layer. The sequence representation sub-layer consists of uni-directional Transformer Encoder (TE) blocks (Vaswani et al., 2017). It takes the source and target sequences from the tasks as input and produces self-attended source and target sequences: X^sa = TE(X), Y^sa = TE(Y), (2) where X^sa ∈ ℝ^{M×d}, Y^sa ∈ ℝ^{N×d}. This sub-layer computes the contextual representation of the sequences. The Bi-directional Attention (BiA) sub-layer takes the self-attended source and target sequences from the previous layer as input and computes the relation between them using Dot-Product Attention (Luong et al., 2015). As a result, we get target-aware source (X^at ∈ ℝ^{M×d}) and source-aware target (Y^as ∈ ℝ^{N×d}) representations as outputs: X^at = BiA(X^sa, Y^sa), Y^as = BiA(Y^sa, X^sa). (3) The sequence summary sub-layer is similar to the sequence representation sub-layer and summarizes the sequences. The sequence summaries are given by: X^s = TE_u(X^at), Y^s = TE_u(Y^as), (4) where X^s ∈ ℝ^{M×d}, Y^s ∈ ℝ^{N×d}. Equation 4 summarizes the sequences X^at and Y^as, which contain the contextual and attention information. We take the last tokens x^s ∈ ℝ^d and y^s ∈ ℝ^d from each, since the last token can see the whole sequence and acts as a summary of it. The task embedding layer computes te by taking the outputs of the sequence summary sub-layer and applying a feed-forward network: te = FFN([x^s : y^s]). (5) 2.2.2 MODULATING MODEL PARAMETERS. We modulate the parameters (θ) of the network (Section 2.1) to account for task-specific variation during MTL over a set of tasks. We achieve this by scaling (γ) and shifting (β) the outputs of each layer (e.g., transformer block), including any preprocessing layers in the model, based on Feature-wise Linear Modulation (FiLM; Perez et al. (2018)). The γ and β parameters are obtained from the task embedding te computed either by Equation 1 or Equation 5: γ = te[:d], β = te[d:], (6) where te ∈ ℝ^{2d} and d is the hidden dimension of the model. Once we have γ and β, we apply feature-wise linear modulation (Perez et al., 2018) to compute the modulated output (O_l) of each block of the model: O_l = γ ∗ f_l(v_l; θ_l) + β, l = 1, ..., L, (7) where L is the total number of blocks in the model and f_l represents the l-th block of the model with parameters θ_l ∈ θ and inputs v_l. 2.3 TRAINING. MTL has been successfully applied across different applications of machine learning such as natural language processing (Hashimoto et al., 2016; Collobert & Weston, 2008), speech recognition (Liu et al., 2019; Deng et al., 2013), computer vision (Zhang et al., 2014; Liu et al., 2015; Girshick, 2015), and drug discovery (Ramsundar et al., 2015). It comes in many forms: joint learning, learning to learn, and learning with auxiliary tasks.
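Before turning to training, Eqs. (6)-(7) can be summarized in a short sketch: the task embedding is split into a scale γ and a shift β that modulate each block's output feature-wise. The stand-in block below is a plain linear map and the dimensions are arbitrary assumptions; in the paper each f_l is a Transformer or preprocessing block.

```python
import numpy as np

d = 8                                     # model hidden dimension
rng = np.random.default_rng(0)

def film_params(t_e):
    """Split a task embedding t_e in R^{2d} into gamma and beta (Eq. 6)."""
    return t_e[:d], t_e[d:]

def modulated_block(v, block_fn, gamma, beta):
    """O_l = gamma * f_l(v_l; theta_l) + beta (Eq. 7), applied feature-wise."""
    return gamma * block_fn(v) + beta

W_l = rng.normal(size=(d, d))             # stand-in for one block f_l
block_fn = lambda v: v @ W_l.T

t_e = rng.normal(size=2 * d)              # task embedding from the explicit or implicit TCN
gamma, beta = film_params(t_e)
v = rng.normal(size=(5, d))               # a length-5 sequence of d-dim features
print(modulated_block(v, block_fn, gamma, beta).shape)   # (5, d), task-modulated
```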
We consider two MTL strategies : ( 1 ) joint learning and ( 2 ) learning to learn to train on set of S tasks , T = { τ1 , · · · , τS } with corresponding datasets D = { D1 , · · · , DS } . As our first training strategy , we use Joint Learning ( JL ) ( Caruana , 1997 ) , which is the most commonly used training strategy for MTL . In JL , the model parameters , including the output layer , are shared across all the tasks involved in the training . For the second training strategy under the learning-tolearn approach , we use a variant of meta-learning , Modality Agnostic Meta Learning ( MAML ) ( Finn et al. , 2017a ) . Even though MAML is mostly used in few-shot learning settings , we use it since it allows for task-specific learning during the meta-train step and it has also been shown to provide improvements in the field of speech translation ( Indurthi et al. , 2020 ) . We resolve the source-target vocabulary mismatch across different tasks in MTL by using a vocabulary of subwords ( Sennrich et al. , 2016 ) computed from all the tasks . We sample a batch of examples from Ds and use this as input to the TCN and the Transformer model . To ensure that each training example uses the task embedding computed using another example , we randomly shuffle this batch while using them as input to the TCN . This random shuffling improves the generalization performance by forcing the network to learn task-specific characteristics ( te ) in Equation 1 or 5 . We compute the task embedding in the meta-train step as well ; however , the parameters of the TCN are updated only during the meta-test step . During inference time , we use the precomputed task embeddings using a batch of examples randomly sampled from the training set .
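The batch-shuffling trick used for the TCN input can be sketched as below. All components are trivial stand-ins (a mean-pooling "TCN", a single FiLM-modulated linear "model", a squared-error loss) chosen only to make the step runnable; they are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

def tcn(src, tgt):                       # stand-in implicit TCN: mean-pooled embedding
    return np.concatenate([src.mean(axis=(0, 1)), tgt.mean(axis=(0, 1))])

def film_params(t_e):                    # Eq. (6)
    return t_e[:d], t_e[d:]

def model_forward(src, gamma, beta):     # FiLM-modulated "model": one block, Eq. (7)
    return gamma * src + beta

def joint_learning_step(batch_src, batch_tgt):
    """One MTL step: task embeddings are computed from a *shuffled* copy of the
    batch, so each example is modulated by an embedding derived from another
    example of the same task; the modulated model then runs on the original batch."""
    perm = rng.permutation(len(batch_src))
    t_e = tcn(batch_src[perm], batch_tgt[perm])
    gamma, beta = film_params(t_e)
    preds = model_forward(batch_src, gamma, beta)
    return np.mean((preds - batch_tgt) ** 2)        # placeholder loss

src = rng.normal(size=(16, 10, d))       # batch of 16 source sequences, length 10
tgt = rng.normal(size=(16, 10, d))
print(joint_learning_step(src, tgt))
```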
This paper proposes a new framework that computes task-specific representations to modulate the model parameters during multi-task learning (MTL). This framework uses a single model with shared representations for learning multiple tasks together. Also, explicit task information may not always be available; in such cases the proposed framework is useful. The proposed framework is evaluated on various datasets spanning multiple modalities, where the MTL model even achieves state-of-the-art results on some datasets.
SP:09f2fe6a482bbd6f9bd2c62aa841f995171ba939
A Robust Fuel Optimization Strategy For Hybrid Electric Vehicles: A Deep Reinforcement Learning Based Continuous Time Design Approach
1 INTRODUCTION . Hybrid electric vehicles powered by fuel cells and batteries have attracted great enthusiasm in modern days as they have the potential to eliminate emissions from the transport sector . Now , both the fuel cells and batteries have got several operational challenges which make the separate use of each of them in automotive systems quite impractical . HEVs and PHEVs powered by conventional diesel engines and batteries merely reduce the emissions , but can not eliminate completely . Some of the drawbacks include carbon emission causing environmental pollution from fuel cells and long charging times , limited driving distance per charge , non-availability of charging stations along the driving distance for the batteries . Fuel Cell powered Hybrid Electric Vehicles ( FCHEVs ) powered by fuel cells and batteries offer emission-free operation while overcoming the limitations of driving distance per charge and long charging times . So , FCHEVs have gained significant attention in recent years . As we find , most of the existing research which studied and developed several types of Fuel and Energy Management Systems ( FEMS ) for transport applications include Sulaiman et al . ( 2018 ) who has presented a critical review of different energy and fuel management strategies for FCHEVs . Li et al . ( 2017 ) has presented an extensive review of FMS objectives and strategies for FCHEVs . These strategies , however can be divided into two groups , i.e. , model-based and modelfree . The model-based methods mostly depend on the discretization of the state space and therefore suffers from the inherent curse of dimensionality . The coumputational complexity increases in an exponential fashion with the increase in the dimension of the state space . This is quite evident in the methods like state-based EMS ( Jan et al. , 2014 ; Zadeh et al. , 2014 ; 2016 ) , rule-based fuzzy logic strategy ( Motapon et al. , 2014 ) , classical PI and PID strategies ( Segura et al. , 2012 ) , Potryagin ’ s minimum principle ( PMP ) ( Zheng et al. , 2013 ; 2014 ) , model predictive control ( MPC ) ( Kim et al. , 2007 ; Torreglosa et al. , 2014 ) and differential dynamic programming ( DDP ) ( Kim et al. , 2007 ) . Out of all these methods , differential dynamic programming is considered to be computationally quite efficient which rely on the linearization of the non-linear system equations about a nominal state trajectory followed by a policy iteration to improve the policy . In this approach , the control policy for fuel optimization is used to compute the optimal trajectory and the policy is updated until the convergence is achieved . The model-free methods mostly deal with the Adaptive Dynamic Programming ( Bithmead et al. , 1991 ; Zhong et al. , 2014 ) and Reinforcement Learning ( RL ) based strategies ( Mitrovic et al. , 2010 ; Khan et al. , 2012 ) icluding DDP ( Mayne et al. , 1970 ) . Here , they tend to compute the control policy for fuel optimization by continous engagement with the environment and measuring the system response thus enabling it to achieve at a solution of the DP equation recursively in an online fashion . In deep reinforcement learning , multi-layer neural networks are used to represent the learning function using a non-linear parameterized approximation form . 
Although a compact paremeterized form do exist for the learning function , the inability to know it apriori renders the method suffer from the curse of dimensionality ( O ( d2 ) where , d is the dimension of the state space ) , thus making it infeasible to apply to a high-dimemsional fuel managememt system . The problem of computational complexity of the traditional RL methods like policy iteration ( PI ) and value iteration ( VI ) ( Bellman et al. , 1954 ; 2003 ; Barto et al. , 1983 ; Bartsekas , 2007 ) can be overcome by a simulation based approach ( Sutton et al. , 1998 ) where the policy or the value function can be parameterized with sufficient accuracy using a small number of parameters . Thus , we will be able to transform the optimal control problem to an approximation problem in the parameter space ( Bartesekas et al. , 1996 ; Tsitsiklis et al. , 2003 ; Konda et al. , 2004 ) side stepping the need for model knowledge and excessive computations . However , the convergence requires sufficient exploration of the state-action space and the optimality of the obtained policy depends primarily on the accuracy of the parameterization scheme . As a result , a good approximation of the value function is of utmost importance to the stability of the closed-loop system and it requires convergence of the unknown parameters to their optimal values . Hence , this sufficient exploration condition manifests itself as a persistence of excitation ( PE ) condition when RL is implemented online ( Mehta et al. , 2009 ; Bhasin et al. , 2013 ; Vrabie , 2010 ) which is impossible to be guaranteed a priori . Most of the traditional approaches for fuel optimization are unable to adrress the robustness issue . The methods described in the literature including those of PID ( Segura et al.,2012 ) , Model Predictive Control ( MPC ) ( Kim et al.,2007 ; Torreglosa et al. , 2014 ) and Adaptive Dynamic Programming ( Bithmead et al.,1991 ; Zhong et al. , 2014 ) as well as the simulation based RL strategies ( Bartesekas et al. , 1996 ; Tsitsiklis et al. , 2003 ; Konda et al. , 2004 ) suffer from the drawback of providing a suboptimal solution in the presence of external disturbances and noise . As a result , application of these methods for fuel optimization for hybrid electric vehicles that are plagued by various disturbances in the form of sudden charge and fuel depletion , change in the environment and in the values of the parameters like remaining useful life , internal resistance , voltage and temperature of the battery , are quite impractical . The fuel optimization problem for the hybrid electric vehicle therefore have been formulated as a fully observed stochastic Markov Decision Process ( MDP ) . Instead of using Trajectory-optimized LQG ( T-LQG ) or Model Predictive Control ( MPC ) to provide a sub-optimal solution in the presence of disturbances and noice , we propose a deep reinforcement learning-based optimization strategy using concurrent learning ( CL ) that uses the state-derivative-action-reward tuples to present a robust optimal solution . The convergence of the weight estimates of the policy and the value function to their optimal values justifies our claim . 
The two major contributions of the proposed approch can be therefore be summarized as follows : 1 ) The popular methods in RL literature including policy iteration and value iteration suffers from the curse of dimensionality owing to the use of a simulation based technique which requires sufficient exploration of the state space ( PE condition ) . Therefore , the proposed model-based RL scheme aims to relax the PE condition by using a concurrent learning ( CL ) -based system identifier to reduce the computational complexity . Generally , an estimate of the true controller designed using the CLbased method introduces an approximate estimation error which makes the stability analysis of the system quite intractable . The proposed method , however , has been able to establish the stability of the closed-loop system by introducing the estimation error and analyzing the augmented system trajectory obtained under the influnece of the control signal . 2 ) The proposed optimization algorithm implemented for fuel management in hybrid electric vehicles will nullify the limitations of the conventional fuel management approaches ( PID , Model Predictive Control , ECMS , PMP ) and traditional RL approaches ( Adaptive Dynamic Proagramming , DDP , DQN ) , all of which suffers from the problem of sub-optimal behaviour in the presence of external disturbances , model-uncertainties , frequent charging and discharging , change of enviroment and other noises . The H-infinity ( H∞ ) performance index defined as the ratio of the disturbance to the control energy has been established for the RL based optimization technique and compared with the traditional strategies to address the robustness issue of the proposed design scheme . The rest of the paper is organised as follows : Section 2 presents the problem formulation including the open-loop optimization and reinforcement learning-based optimal controller design which have been described in subsections 2.1 and 2.2 respectively . The parametric system identification and value function approximation have been detailed in subsections 2.2.1 and 2.2.2 . This is followed by the stability and robustness analysis ( using the H-infinity ( H∞ ) performance index ) of the closed loop system in subsection 2.2.4 . Section 3 provides the simulation results and discussion followed by the conclusion in Section 4 . 2 PROBLEM FORMULATION . Considering the fuel management system of a hybrid electric vehicle as a continous time affine non-linear dynamical system : ẋ = f ( x , w ) + g ( x ) u , y = h ( x , v ) ( 1 ) where , x ∈ Rnx , y ∈ Rny , u ∈ Rnu are the state , output and the control vectors respectively , f ( . ) denotes the drift dynamics and g ( . ) denotes the control effectivenss matrix . The functions f and h are assumed to be locally Lipschitz continuous functions such that f ( 0 ) = 0 and∇f ( x ) is continous for every bounded x ∈Rnx . The process noise w and measurement noise v are assumed to be zero-mean , uncorrelated Gausssian white noise with covariances W and V , respectively . Assumption 1 : We consider the system to be fully observed : y = h ( x , v ) = x ( 2 ) Remark 1 : This assumption is considered to provide a tractable formulation of the fuel management problem to side step the need for a complex treatment which is required when a stochastic control problem is treated as partially observed MDP ( POMDP ) . Optimal Control Problem : For a continous time system with unknown nonlinear dynamics f ( . 
), we need to find an optimal control policy πt over a finite time horizon [0, t], where πt is the control policy at time t such that πt = u(t), that minimizes the cost function J = ∫₀ᵗ (xᵀQx + uᵀRu) dτ + xᵀFx, where Q, F > 0 and R ≥ 0. 2.1 OPEN LOOP OPTIMIZATION. Consider a noise-free non-linear dynamical system with unknown dynamics: ẋ = f(x, 0) + g(x)u, y = h(x, v) = x, (3) where x0 ∈ ℝ^{nx}, y ∈ ℝ^{ny}, u ∈ ℝ^{nu} are the initial state, output and control vectors respectively, f(·) has its usual meaning, and the corresponding cost function is given by Jd(x0, ut) = ∫₀ᵗ (xᵀQx + uᵀRu) dτ + xᵀFx. Remark: We use a piecewise convex function to globally approximate the non-convex fuel function, and this approximation is used to formulate the cost function for the fuel optimization. The open loop optimization problem is to find the control sequence ut such that, for a given initial state x0, ūt = arg min Jd(x0, ut), subject to ẋ = f(x, 0) + g(x)u, y = h(x, v) = x. (4) The problem is solved using the gradient descent approach (Bryson et al., 1962; Gosavi et al., 2003), and the procedure is as follows: starting from a random initial value of the control sequence U^(0) = [ut^(0)], the control policy is updated iteratively as U^(n+1) = U^(n) − α∇U Jd(x0, U^(n)), (5) until convergence is achieved up to a certain degree of accuracy, where U^(n) denotes the control value at the n-th iteration and α is the step size parameter. The gradient vector is given by: ∇U Jd(x0, U^(n)) = (∂Jd/∂u0, ∂Jd/∂u1, ∂Jd/∂u2, ..., ∂Jd/∂ut)|_(x0, ut). (6) The gradient descent algorithm is detailed in Appendix A.1. Remark 2: The open loop optimization problem is thus solved with gradient descent on a black-box model of the underlying system dynamics, using a sequence of input-output tests, without perfect knowledge of the non-linearities in the model at design time. This is a simple and useful strategy for complex dynamical systems with complicated cost-to-go functions, and it is well suited to parallelization.
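A small numerical sketch of the open-loop iteration in Eq. (5): since the dynamics are treated as a black box, the gradient in Eq. (6) is approximated here by finite differences along a rollout. The discretized double-integrator dynamics, horizon, weights, and step size are illustrative assumptions and are not the paper's fuel model.

```python
import numpy as np

# Toy discretized dynamics x_{k+1} = f(x_k) + g(x_k) u_k, standing in for the
# black-box vehicle/fuel dynamics.
dt, T = 0.1, 30
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.0, dt])
Q, R, F = np.diag([1.0, 0.1]), 0.01, np.diag([5.0, 1.0])
x0 = np.array([1.0, 0.0])

def cost(U):
    """Discretized J = sum(x'Qx + R u^2) + x_T' F x_T along the rollout."""
    x, J = x0.copy(), 0.0
    for u in U:
        J += x @ Q @ x + R * u * u
        x = A @ x + B * u
    return J + x @ F @ x

def grad_fd(U, h=1e-5):
    """Black-box finite-difference gradient of the cost w.r.t. the control sequence."""
    g = np.zeros_like(U)
    for k in range(len(U)):
        Up, Um = U.copy(), U.copy()
        Up[k] += h; Um[k] -= h
        g[k] = (cost(Up) - cost(Um)) / (2 * h)
    return g

U = np.zeros(T)                       # U^(0): initial control sequence
alpha = 0.01                          # step size
for n in range(500):                  # U^(n+1) = U^(n) - alpha * grad J
    U -= alpha * grad_fd(U)
print("final cost:", cost(U))         # decreases from the initial cost
```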
This work proposes a deep reinforcement learning-based optimization strategy for the fuel optimization problem of hybrid electric vehicles. The problem is formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A continuous-time representation of the problem is also used, in contrast to conventional techniques, which mostly rely on a discrete-time formulation.
SP:a1e2218e6943bf138aeb359e23628676b396ed66
Neural representation and generation for RNA secondary structures
1 INTRODUCTION . There is an increasing interest in developing deep generative models for biochemical data , especially in the context of generating drug-like molecules . Learning generative models of biochemical molecules can facilitate the development and discovery of novel treatments for various diseases , reducing the lead time for discovering promising new therapies and potentially translating in reduced costs for drug development ( Stokes et al. , 2020 ) . Indeed , the study of generative models for molecules has become a rich and active subfield within machine learning , with standard benchmarks ( Sterling & Irwin , 2015 ) , a set of well-known baseline approaches ( Gómez-Bombarelli et al. , 2018 ; Kusner et al. , 2017 ; Liu et al. , 2018 ; Jin et al. , 2018 ) , and high-profile cases of real-world impact 1 . Prior work in this space has focused primarily on the generation of small molecules ( with less than 100 atoms ) , leaving the development of generative models for larger and more complicated biologics and biosimilar drugs ( e.g. , RNA and protein peptides ) an open area for research . Developing generative models for larger biochemicals is critical in order to expand the frontiers of automated treatment design . More generally , developing effective representation learning for such complex biochemicals will allow machine learning systems to integrate knowledge and interactions involving these biologically-rich structures . In this work , we take a first step towards the development of deep generative models for complex biomolecules , focusing on the representation and generation of RNA structures . RNA plays a crucial 1e.g . LambdaZero project for exascale search of drug-like molecules . role in protein transcription and various regulatory processes within cells which can be influenced by its structure ( Crick , 1970 ; Stefl et al. , 2005 ) , and RNA-based therapies are an increasingly active area of research ( Pardi et al. , 2018 ; Schlake et al. , 2012 ) , making it a natural focus for the development of deep generative models . The key challenge in generating RNA molecules—compared to the generation of small molecules—is that RNA involves a hierarchical , multi-scale structure , including a primary sequential structure based on the sequence of nucleic acids as well as more complex secondary and tertiary structures based on the way that the RNA strand folds onto itself . An effective generative model for RNA must be able to generate sequences that give rise to these more complex emergent structures . There have been prior works on optimizing or designing RNA sequences—using reinforcement learning or blackbox optimization—to generate particular RNA secondary structures ( Runge et al. , 2019 ; Churkin et al. , 2017 ) . However , these prior works generally focus on optimizing sequences to conform to a specific secondary structure . In contrast , our goal is to define a generative model , which can facilitate the sampling and generation of diverse RNA molecules with meaningful secondary structures , while also providing a novel avenue for targeted RNA design via search over a tractable latent space . Key contributions . We propose a series of benchmark tasks and deep generative models for the task of RNA generation , with the goal of facilitating future work on this important and challenging problem . We propose three interrelated benchmark tasks for RNA representation and generation : 1 . 
Unsupervised generation : Generating stable , valid , and diverse RNAs that exhibit complex secondary structures . 2 . Semi-supervised learning : Learning latent representations of RNA structure that correlate with known RNA functional properties . 3 . Targeted generation : Generating RNAs that exhibit particular functional properties . These three tasks build upon each other , with the first task only requiring the generation of stable and valid molecules , while the latter two tasks involve representing and generating RNAs that exhibit particular properties . In addition to proposing these novel benchmarks for the field , we introduce and evaluate three generative models for RNA . All three models build upon variational autoencoders ( VAEs ) ( Kingma & Welling , 2014 ) augmented with normalizing flows ( Rezende & Mohamed , 2015 ; Kingma et al. , 2016 ) , and they differ in how they represent the RNA structure . To help readers better understand RNA structures and properties , a self-contained explanation is provided in appendix B . The simplest model ( termed LSTMVAE ) learns using a string-based representation of RNA structure . The second model ( termed GraphVAE ) leverages a graph-based representation and graph neural network ( GNN ) encoder approach ( Gilmer et al. , 2017 ) . Finally , the most sophisticated model ( termed HierVAE ) introduces and leverages a novel hierarchical decomposition of the RNA structure . Extensive experiments on our newly proposed benchmarks highlight how the hierarchical approach allows more effective representation and generation of complex RNA structures , while also highlighting important challenges for future work in the area . 2 TASK DESCRIPTION . Given a dataset of RNA molecules , i.e . sequences of nucleotides and corresponding secondary structures , our goals are to : ( a ) learn to generate structurally stable , diverse , and valid RNA molecules that reflect the distribution in this training dataset ; ( b ) learn latent representations that reflect the functional properties of RNA . A key factor in both these representation and generation processes is that we seek to jointly represent and generate both the primary sequence structure as well as the secondary structure conformation . Together , these two goals lay the foundations for generating novel RNAs that satisfy certain functional properties . To meet these goals , we create two types of benchmark datasets , each one focusing on one aspect of the above mentioned goals : Unlabeled and variable-length RNA . The first dataset contains unlabeled RNA with moderate and highly-variable length ( 32-512 nts ) , obtained from the human transcriptome ( Aken et al. , 2016 ) and through which we focus on the generation aspect of structured RNA and evaluate the validity , stability and diversity of generated RNA molecules . In particular , our goal with this dataset is to jointly generate RNA sequences and secondary structures that are biochemically feasible ( i.e. , valid ) , have low free energy ( i.e. , stable ) , and are distinct from the training data ( i.e. , diverse ) . We will give an extended assessment of the generation aspect under different circumstances , e.g. , when constraining the generation procedures with explicit rules . Labeled RNA . The second dataset is pulled and processed from a previous study on in vitro RNAprotein interaction , which features labeled RNAs with shorter and uniform length ( 40 nts ) ( Cook et al. , 2017 ) . 
With this dataset , our objective is slightly expanded ( in addition to obj . a ) , so that the latent space is adequately organized and reflective of the interaction with proteins . Therefore , a key assessment for the latent space is the AUROC for the classification of protein binding , which is crucial for the design of desired novel RNA molecules . Essentially , this creates slight variations in the task formulation , with the first dataset suited to unsupervised learning of a generative model , while the second dataset involves additional supervision ( e.g. , for a semi-supervised model or targeted generation ) . Our specific modeling choices , to be introduced in section 3 , are invariant to different task formulations , and flexible enough to handle different representations of RNA secondary structures . We refer readers to appendix C for a detailed explanation of the datasets and the evaluation metrics on the generated molecules and latent embeddings . 3 METHODS . In this section , we introduce three different generative models for RNA . All three models are based upon the variational autoencoder ( VAE ) framework , involving three key components : 1 . A probabilistic encoder network $q_\phi(z|x)$ , which generates a distribution over latent states given an input representation of an RNA . We experiment with three different types of input encodings for RNA sequence and secondary structures ( see Figure S1 ) : a dot-bracket annotated string , a graph with an adjacency matrix representing base-pairings , and a graph augmented with a hierarchical junction tree annotation for the secondary structure . 2 . A probabilistic decoder network $p_\theta(x|z)$ , which defines a joint distribution over RNA sequences and secondary structures , conditioned on a latent input . As with the encoder network , we design architectures based on a linearized string decoding and a graph-based hierarchical junction-tree decoding approach . 3 . A parameterized prior $p_\psi(z)$ , which defines a prior distribution over latent states and is learned based on a continuous normalizing flow ( CNF ) ( Chen et al. , 2018 ) . For all the approaches we propose , the model is optimized via stochastic gradient descent to minimize the evidence lower bound ( ELBO ) : $\mathcal{L} = -\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] + \beta\,\mathrm{KL}\big(q_\phi(z|x)\,\|\,p_\psi(z)\big)$ , where $\beta$ is a coefficient that allows KL-annealing over the strength of the prior regularization . In the following sections , we explain our three different instantiations of the encoder ( section 3.1 ) and decoder ( section 3.2 ) , as well as our procedures to structurally constrain the decoding process using domain knowledge ( section 3.3 ) and to avoid posterior collapse ( section 3.4 ) . 3.1 ENCODING RNA SECONDARY STRUCTURES . The input to the encoder is a structured RNA molecule , with its sequence given by an ordered array of nucleotides $x_1 \ldots x_L$ , with $x_i \in \{A, C, G, U\}$ , where $L$ is the length of the sequence , and its secondary structure , represented either ( 1 ) as a dot-bracket string $S = \dot{x}_1 \ldots \dot{x}_L$ with $\dot{x}_i \in \{\,.\,, (\,, )\,\}$ ; ( 2 ) or as a graph $G$ with two types of edges — covalent bonds along the RNA backbone , and hydrogen bonds between the base-pairs 2 . We use $x_{uv}$ to denote edge features between nucleotides $u$ and $v$ ; ( 3 ) or as a hypergraph $T$ — a depth-first ordered array of subgraphs $\hat{G}_1 \ldots \hat{G}_D$ with $L(\hat{G}_i) \in \{S, H, I, M\}$ indicating the subgraph label , and $I(\hat{G}_i) \subseteq \{1, \ldots, L\}$ indicating the assignment of nucleotides to each subgraph .
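To make the training objective concrete , here is a minimal sketch of the $\beta$-weighted ELBO described above , written in PyTorch . The `encoder` , `decoder` and `prior_log_prob` interfaces are placeholders standing in for the paper 's LSTM/GNN encoders , structured decoders and CNF prior ; they are assumptions for illustration , not the authors ' implementation .

```python
import torch

def elbo_loss(x, encoder, decoder, prior_log_prob, beta=1.0):
    """One-sample Monte Carlo estimate of the beta-weighted negative ELBO.

    encoder(x)        -> (mu, log_var) of q_phi(z|x)            [assumed interface]
    decoder(z)        -> per-position logits over joint tokens   [assumed interface]
    prior_log_prob(z) -> log p_psi(z), e.g. from a learned CNF prior
    """
    mu, log_var = encoder(x)
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    z = mu + eps * std                       # reparameterization trick

    # Reconstruction term: -E_q[log p_theta(x|z)], summed over sequence positions
    logits = decoder(z)                      # (batch, length, vocab)
    recon = torch.nn.functional.cross_entropy(
        logits.transpose(1, 2), x, reduction="none").sum(dim=1)

    # KL(q_phi(z|x) || p_psi(z)) by Monte Carlo, since a learned CNF prior
    # has no closed-form KL with the Gaussian posterior.
    log_q = (-0.5 * ((z - mu) / std) ** 2 - torch.log(std)
             - 0.5 * torch.log(torch.tensor(2 * torch.pi))).sum(dim=1)
    kl = log_q - prior_log_prob(z)

    return (recon + beta * kl).mean()
```

In practice the $\beta$ coefficient would be annealed over training , which is what the KL-annealing remark above refers to .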
Encoding RNA secondary structure as sequence . First , we obtain a joint encoding over the nucleotide and the dot-bracket annotation , using the joint sequence-structure vocabulary $\{A, C, G, U\} \times \{., (, )\}$ . Then , these one-hot encodings are processed by a stacked bidirectional LSTM ( Hochreiter & Schmidhuber , 1997 ) , followed by a multi-head self-attention module ( Vaswani et al. , 2017 ) to weight different positions along the RNA backbone . A global max-pooling is used to aggregate the information into $h_S$ , and then we obtain the mean $\mu_S$ and log variance $\log \sigma_S$ from $h_S$ through linear transformations , and draw the latent encoding $z_S$ from $\mathcal{N}(\mu_S, \sigma_S)$ using the reparameterization trick ( Kingma & Welling , 2014 ) . Learning graph representation of RNA secondary structure . To encode the graph view $G$ of an RNA secondary structure , we pass rounds of neural messages along the RNA structure , which falls into the framework of the Message Passing Neural Network ( MPNN ) as originally discussed in Gilmer et al . ( 2017 ) and similarly motivated by Jin et al . ( 2018 ) . For much longer RNAs , it is conceptually beneficial to pass more rounds of messages so that a nucleotide may receive information on its broader structural context . However , this may introduce undesired effects such as training instability and over-smoothing issues . Therefore , we combine our MPNN network with a gating mechanism , which is collectively referred to as the G-MPNN : $\hat{v}^{\,t-1}_{uv} = \sigma\big( W^{g}_{\mathrm{local}} [\, x_u \,\|\, x_{uv} \,] + W^{g}_{\mathrm{msg}} \sum_{w \in N(u)} v^{t-1}_{wu} \big)$ ( 1 ) , $v^{t}_{uv} = \mathrm{GRU}\big( \hat{v}^{\,t-1}_{uv} , v^{t-1}_{uv} \big)$ ( 2 ) , where $[\, \cdot \,\|\, \cdot \,]$ denotes concatenation , $\sigma$ denotes the activation function and GRU indicates the gated recurrent unit ( Cho et al. , 2014 ) . Then , after $T$ iterations of message passing , the final nucleotide-level embedding is given by : $h_u = \sigma\big( W^{g}_{\mathrm{emb}} [\, x_u \,\|\, \sum_{v \in N(u)} v^{T}_{vu} \,] \big)$ . Before pooling the nucleotide-level embeddings into the graph level , we pass $h_1 \ldots h_L$ through a single bidirectional LSTM layer , obtaining $\hat{h}_1 \ldots \hat{h}_L$ at each step , and $h_G = \max(\{\hat{h}_i \mid i \in 1 \ldots L\})$ . The latent encoding $z_G$ is similarly obtained from $h_G$ using the reparameterization trick .
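The gated message-passing update of Eqs . 1-2 can be sketched as follows . This is a deliberately simple PyTorch reading of the equations ( directed edge messages kept in a dictionary , a shared GRUCell as the gate , ReLU as the activation ) and makes no claim to match the authors ' released code ; tensor shapes , batching and efficiency are ignored .

```python
import torch
import torch.nn as nn

class GMPNN(nn.Module):
    """Sketch of the gated message passing of Eqs. (1)-(2) over edge messages v_uv."""
    def __init__(self, node_dim, edge_dim, hid_dim, T=5):
        super().__init__()
        self.w_local = nn.Linear(node_dim + edge_dim, hid_dim)
        self.w_msg = nn.Linear(hid_dim, hid_dim, bias=False)
        self.gate = nn.GRUCell(hid_dim, hid_dim)  # v^t_uv = GRU(v_hat^{t-1}_uv, v^{t-1}_uv)
        self.w_emb = nn.Linear(node_dim + hid_dim, hid_dim)
        self.hid_dim = hid_dim
        self.T = T

    def forward(self, x, edge_feat, neighbors):
        # x: (L, node_dim) nucleotide features; edge_feat[(u, v)]: (edge_dim,) bond features
        # neighbors[u]: nucleotides adjacent to u (backbone edges and base-pair edges)
        v = {e: torch.zeros(self.hid_dim) for e in edge_feat}
        for _ in range(self.T):
            new_v = {}
            for (u, w_) in edge_feat:
                agg = sum((v[(n, u)] for n in neighbors[u] if (n, u) in v),
                          torch.zeros(self.hid_dim))
                v_hat = torch.relu(self.w_local(torch.cat([x[u], edge_feat[(u, w_)]]))
                                   + self.w_msg(agg))
                new_v[(u, w_)] = self.gate(v_hat.unsqueeze(0),
                                           v[(u, w_)].unsqueeze(0)).squeeze(0)
            v = new_v
        # Final nucleotide-level embeddings h_u from the incoming messages after T rounds
        h = [torch.relu(self.w_emb(torch.cat(
                [x[u], sum((v[(n, u)] for n in neighbors[u] if (n, u) in v),
                           torch.zeros(self.hid_dim))])))
             for u in range(len(x))]
        return torch.stack(h)
```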
To compute and pass neural messages between adjacent subgraphs in the RNA junction tree $T$ , we use the T-GRU network in Eq . 3 : $v^{t}_{\hat{G}_i, \hat{G}_j} = \text{T-GRU}\big( x_{\hat{G}_i} , \{\, v^{t-1}_{\hat{G}_k, \hat{G}_i} \mid \hat{G}_k \in N(\hat{G}_i) \,\} \big)$ ( 3 ) , $h_{\hat{G}_i} = \sigma\big( W^{t}_{\mathrm{emb}} [\, x_{\hat{G}_i} \,\|\, \sum_{\hat{G} \in N(\hat{G}_i)} h_{\hat{G}} \,] \big)$ ( 4 ) , with details of T-GRU provided in appendix D , and compute the embeddings for subgraphs with Eq . 4 . Further , we obtain a depth-first traversal of the subgraph embeddings $h_{\hat{G}_1} \ldots h_{\hat{G}_{D'}}$ , which is also the order for the hierarchical decoding to be discussed later . This ordered array of embeddings is processed by another bi-directional LSTM , and the final tree-level representation $h_T$ is again given by the max-pooling over the bi-LSTM outputs . Likewise , the latent encoding $z_T$ is obtained from $h_T$ . 2 We do not differentiate the number of hydrogen bonds , which can differ depending on the base-pairs . For example , G-C has three hydrogen bonds whereas A-U only contains two .
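As an illustration of the hierarchical encoding step , the sketch below builds the junction-tree node features exactly as described above : the subgraph label one-hot concatenated with a max-pool over the G-MPNN embeddings of the assigned nucleotides . The expansion of the labels S , H , I , M ( stem , hairpin , internal loop , multi-loop ) is an assumption not stated in the excerpt , and the T-GRU message function itself is left abstract since its details live in the paper 's appendix D .

```python
import torch

SUBGRAPH_LABELS = {"S": 0, "H": 1, "I": 2, "M": 3}  # assumed: stem, hairpin, internal, multi-loop

def junction_tree_node_features(h_nucleotides, subgraphs):
    """h_nucleotides: (L, d) nucleotide embeddings from the G-MPNN.
    subgraphs: list of (label, nucleotide_indices) for G_hat_1 ... G_hat_D,
               given in depth-first order.
    Returns (D, 4 + d) features x_{G_hat_i} = [ L(G_hat_i) || max-pool over assigned h_u ].
    """
    feats = []
    for label, idx in subgraphs:
        one_hot = torch.zeros(len(SUBGRAPH_LABELS))
        one_hot[SUBGRAPH_LABELS[label]] = 1.0
        pooled, _ = h_nucleotides[list(idx)].max(dim=0)   # max over assigned nucleotides
        feats.append(torch.cat([one_hot, pooled]))
    return torch.stack(feats)
```

These node features are then consumed by the T-GRU message passing of Eqs . 3-4 and the bi-LSTM pooling that yields $h_T$ .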
This paper proposes 3 deep generative models based on VAEs (with different encoding schemes for RNA secondary structure) for the generation of RNA secondary structures. They test each model on 3 benchmark tasks: unsupervised generation, semi-supervised learning and targeted generation. This paper has many interesting contributions — a comparison of VAE models that use different RNA secondary structure encoding schemes, including traditional dot-bracket notation and a more complex hierarchical encoding, and they also introduce various decoding schemes to encourage valid secondary structures.
SP:43e525fb3fa611df7fd44bd3bc9843e57b154c66
DiP Benchmark Tests: Evaluation Benchmarks for Discourse Phenomena in MT
1 INTRODUCTION AND RELATED WORK . The advances in neural machine translation ( NMT ) systems have led to great achievements in terms of state-of-the-art performance in automatic translation tasks . There have even been claims that their translations are no worse than what an average bilingual human may produce ( Wu et al. , 2016 ) or that the translations are on par with professional translators ( Hassan et al. , 2018 ) . However , extensive studies conducting evaluations with professional translators ( Läubli et al. , 2018 ; Popel et al. , 2020 ) have shown that there is a statistically strong preference for human translations in terms of fluency and overall quality when evaluations are conducted monolingually or at the document level . Document ( or discourse ) level phenomena ( e.g. , coreference , coherence ) may not seem lexically significant , but contribute significantly to readability and understandability of the translated texts ( Guillou , 2012 ) . Targeted datasets for evaluating phenomena like coreference ( Guillou et al. , 2014 ; Guillou & Hardmeier , 2016 ; Lapshinova-Koltunski et al. , 2018 ; Bawden et al. , 2018 ; Voita et al. , 2018b ) , or ellipsis and lexical cohesion ( Voita et al. , 2019 ) , have been proposed . The NMT framework such as the Transformer ( Vaswani et al. , 2017 ) provides more flexibility to incorporate larger context . This has spurred a great deal of interest in developing context-aware NMT systems that take advantage of source or target contexts , e.g. , Miculicich et al . ( 2018 ) , Maruf & Haffari ( 2018 ) , Voita et al . ( 2018b ; 2019 ) , Xiong et al . ( 2019 ) , Wong et al . ( 2020 ) , to name a few . Most studies only report performance on specific testsets , often limited to improvements in BLEU ( Papineni et al. , 2002 ) . Despite being the standard MT evaluation metric , BLEU has been criticised for its inadequacy ; the scores are not interpretable , and are not sensitive to small improvements in lexical terms that may lead to big improvements in fluency or readability ( Reiter , 2018 ) . There is no framework for a principled comparison of MT quality beyond mere lexical matching as done in BLEU : there are no standard corpora and no agreed-upon evaluation measures . To address these shortcomings , we propose the DiP benchmark tests ( for Discourse Phenomena ) , that will enable the comparison of machine translation models across discourse task strengths and source languages . We create diagnostic testsets for four diverse discourse phenomena , and also propose automatic evaluation methods for these tasks . However , discourse phenomena in translations can be tricky to identify , let alone evaluate . A fair number of datasets proposed thus far have been manually curated , and automatic evaluation methods have often failed to agree with human judgments ( Guillou & Hardmeier , 2018 ) . To mitigate these issues , we use trained neural models for identifying and evaluating complex discourse phenomena and conduct extensive user studies to ensure agreements with human judgments . Our methods for automatically extracting testsets can be applied to multiple languages , and find cases that are difficult to translate without having to resort to synthetic data . Moreover , our testsets are extracted in a way that makes them representative of current challenges . They can be easily updated to reflect future challenges , preventing the pitfall of becoming outdated , which is a common failing of many benchmarking testsets . 
We also benchmark established MT models on these testsets to convey the extent of the challenges they pose . Although discourse phenomena can and do occur at the sentence-level ( e.g. , between clauses ) , we would expect MT systems that model extra-sentential context ( Voita et al. , 2018b ; Zhang et al. , 2018 ; Miculicich et al. , 2018 ) to be more successful on these tasks . However , we observe significant differences in system behavior and quality across languages and phenomena , emphasizing the need for more extensive evaluation as a standard procedure . We propose to maintain a leaderboard that tracks and highlights advances in MT quality that go beyond BLEU improvement . Our main contributions in this paper are as follows : • Benchmark testsets for four discourse phenomena : anaphora , coherence & readability , lexical consistency , and discourse connectives . • Automatic evaluation methods and agreements with human judgments . • Benchmark evaluation and analysis of four context-aware systems contrasted with baselines , for German/Russian/Chinese-English language pairs . 2 MACHINE TRANSLATION MODELS . Model Architectures . We first introduce the MT systems that we will be benchmarking on our testsets . We evaluate a selection of established models of various complexities ( simple sentencelevel to complex context-aware models ) , taking care to include both source- and target-side contextaware models . We briefly describe the model architectures here : • S2S : A standard 6-layer base Transformer model ( Vaswani et al. , 2017 ) which translates sentences independently . • CONCAT : A 6-layer base Transformer whose input is two sentences ( previous and current sentence ) merged , with a special character as a separator ( Tiedemann & Scherrer , 2017 ) . • ANAPH : Voita et al . ( 2018b ) incorporate source context by encoding it with a separate encoder , then fusing it in the last layer of a standard Transformer encoder using a gate . They claim that their model explicitly captures anaphora resolution . • TGTCON : To model target-context , we implement a version of ANAPH with an extra operation of multi-head attention in the decoder , computed between representations of the target sentence and target context . The architecture is described in detail in the Appendix ( A.5 ) . • SAN : Zhang et al . ( 2018 ) use source attention network : a separate Transformer encoder to encode source context , which is incorporated into the source encoder and target decoder using gates . • HAN : Miculicich et al . ( 2018 ) introduce a hierarchical attention network ( HAN ) into the Transformer framework to dynamically attend to the context at two levels : word and sentence . They achieve the highest BLEU when hierarchical attention is applied separately to both the encoder and decoder . Datasets and Training . The statistics for the datasets used to train the models are shown in Table 1 . We tokenize the data using Jieba1 for Zh and Moses scripts2 for the other languages , lowercase the text , and apply BPE encodings3 from Sennrich et al . ( 2016 ) . We learn the BPE encodings with the command learn-joint-bpe-and-vocab -s 40000 . The scores reported are BLEU4 , computed either through fairseq or NLTK ( Wagner , 2010 ) . Further details about dataset composition , training settings and hyperparameters can be found in the Appendix ( A.7 ) . 1https : //github.com/fxsjy/jieba 2https : //www.statmt.org/moses/ 3https : //github.com/rsennrich/subword-nmt/ BLEU scores . 
The BLEU scores on the WMT-14 ( De-En , Ru-En ) and on the WMT-17 ( Zh-En ) testsets for each of the six trained models are shown in Table 2 . We were unable to train HAN for Zh-En as the model was not optimized for training with large datasets . In contrast to increases in BLEU for selected language-pairs and datasets reported in published work , incorporating context within elaborate context-dependent models decreases BLEU scores for the Zh-En and De-En tasks . However , the simple concatenation-based model CONCAT performs better than S2S for De-En and Ru-En ; this shows that context knowledge is indeed helpful for improving BLEU . 3 BENCHMARK TESTSETS . We construct our benchmarking testsets based on four main principles : Selectivity . The testsets need to provide hard to translate contexts for MT models . We ensure this by looking at translation errors made by system submissions to campaigns like WMT and IWSLT . Authenticity . The testsets can not contain artificial or synthetic data but only natural text . Rather than generating testset samples using heuristics , we extract hard contexts from existing humangenerated source text . Multilinguality . The testset extraction method should be automatic and applicable to multiple languages . Our framework can be used to extract testsets for all source languages that are part of the considered MT campaigns . Adaptability . The testsets should be easy to update frequently , making them adaptable to improvements in newer systems . Since we automatically extract hard contexts based on MT errors , our testsets are easy to update ; they adapt to errors in newer ( and possibly more accurate ) systems , making the tasks harder over time . We use the system outputs released by WMT and IWSLT for the most recent years ( Nadejde et al. , 2016 ; Bojar et al. , 2017 ; 2018 ; 2019 ; Cettolo et al. , 2016 ; 2017 ) to build our testsets . For De-En , Ru-En and Zh-En , these consist of translation outputs from 68 , 41 and 47 unique systems respectively . Since the data comes from a wide variety of systems , our testsets representatively aggregate different types of errors from several ( arguably SOTA ) models . Also note that the MT models we are benchmarking are not a part of these system submissions to WMT , so there is no potential bias in the testsets . 3.1 ANAPHORA . Anaphora are references to entities that occur elsewhere in a text ; mishandling them can result in ungrammatical sentences or the reader inferring the wrong antecedent , leading to misunderstanding of the text ( Guillou , 2012 ) . We focus specifically on the aspect of incorrect pronoun translations . Testset . To obtain hard contexts for pronoun translation , we look for source texts that lead to erroneous pronoun translations in system outputs . We align the system translations with their references , and collect the cases in which the translated pronouns do not match the reference.4 Our anaphora testset is an updated version of the one proposed by Jwalapuram et al . ( 2019 ) . We filter the system translations based on their list of cases where the translations can be considered wrong , rather than acceptable variants . The corresponding source texts are extracted as a test suite for pronoun translation . This gives us a pronoun benchmark testset of 2564 samples for De-En , 2368 for Ru-En and 1540 for Zh-En . Evaluation . Targeted evaluation of pronouns in MT has been challenging as it is not fair to expect an exact match with the reference . 
Evaluation methods like APT ( Miculicich Werlen & Popescu-Belis , 2017 ) or AutoPRF ( Hardmeier & Federico , 2010 ) are specific to language pairs or lists of pronouns , requiring extensive manual intervention . They have also been criticised for failing to produce evaluations that are consistent with human judgments ( Guillou & Hardmeier , 2018 ) . Jwalapuram et al . ( 2019 ) propose a pairwise ranking model that scores “ good ” pronoun translations ( like in the reference ) higher than “ poor ” pronoun translations ( like in the MT output ) in context , and show that their model is good at making this distinction , along with having high agreement with human judgments . However , they do not rank multiple system translations against each other , which is our main goal ; the absolute scores produced by their model are not useful since it is trained in a pairwise fashion . We devise a way to use their model to score and rank system translations in terms of pronouns . First , we re-train their model with more up-to-date WMT data ( more details in Appendix A.1 ) . We obtain a score from the model for each benchmarked MT system 's ( S2S , CONCAT , etc . ) translation , along with the corresponding reference sentence . We then normalize the score for each translated sentence by calculating the difference with the reference . To get an overall score for an MT system , the assigned scores are summed across all sentences in the testset : $\mathrm{Score}_{\mathrm{sys}} = \sum_i \rho_i(\mathrm{ref} \mid \theta) - \rho_i(\mathrm{sys} \mid \theta)$ ( 1 ) , where $\rho_i(\cdot \mid \theta)$ denotes the score given to sentence $i$ by the pronoun model $\theta$ . The systems are ranked based on this overall score , where a lower score indicates a better performance . We conduct a user study to confirm that the model rankings correspond with human judgments , obtaining an agreement of 0.91 between four participants who annotated 100 samples . Appendix A.1 gives details ( e.g. , interface , participants , agreement ) about the study .
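Concretely , the system-level ranking of Eq . 1 amounts to summing reference-normalized sentence scores . The sketch below assumes a `pronoun_model` callable returning $\rho(\text{sentence}, \text{context} \mid \theta)$ ; this is a stand-in for the retrained Jwalapuram et al . ( 2019 ) scorer , not its actual interface .

```python
def rank_systems(testset, system_outputs, pronoun_model):
    """testset: list of (source, context, reference) tuples.
    system_outputs: dict system name -> list of translations aligned with testset.
    pronoun_model: callable (translation, context) -> float score rho(.|theta)  [assumed interface]
    Returns systems sorted best-first: lower Score_sys = better, per Eq. (1).
    """
    totals = {}
    for name, translations in system_outputs.items():
        total = 0.0
        for (src, ctx, ref), hyp in zip(testset, translations):
            total += pronoun_model(ref, ctx) - pronoun_model(hyp, ctx)
        totals[name] = total
    return sorted(totals.items(), key=lambda kv: kv[1])
```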
This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty lies in the relatively large scale, spanning three translation directions, four discourse phenomena, and 150-5000 data points per language and phenomenon. A relatively large number of systems from previous work is benchmarked on each test set, and agreement with human judgments is measured.
SP:0bd749fe44c37b521bd40f701e1428890aaa9c95
Private Image Reconstruction from System Side Channels Using Generative Models
1 INTRODUCTION . Side channel analysis ( SCA ) recovers program secrets based on the victim program ’ s nonfunctional characteristics ( e.g. , its execution time ) that depend on the values of program secrets . SCA constitutes a major threat in today ’ s system and hardware security landscape . System side channels , such as CPU cache accesses and operating system ( OS ) page table accesses made by the victim software , are widely used to recover program secrets under various real-world scenarios ( Gullasch et al. , 2011 ; Aciicmez & Koc , 2006 ; Wu et al. , 2012 ; Hähnel et al. , 2017 ; Xu et al. , 2015 ; Yarom et al. , 2017 ) . To conduct SCA , attackers first conduct an online phase to log a trace of side channel data points made by the victim software ( e.g. , its accessed CPU cache lines ) . Then , attackers launch an offline phase to analyze the logged trace and infer secrets ( e.g. , private inputs ) . Enabled by advances in system research , the online phase can be performed smoothly ( Xu et al. , 2015 ) . Nevertheless , the offline phase is challenging , requiring comprehension of victim software ’ s input-relevant operations and how such operations influence side channels . The influence is program-specific and obscure ( see an example in Fig . 1 ) . Even worse , side channel data points made by real-world software are usually highly noisy . For instance , executing libjpeg ( libjpeg , 2020 ) to decompress one unknown JPEG image produces a trace of over 700K side channel data points , where only a small portion depends on the image content . Identifying such input-dependent data points from over 700K records is extremely difficult . Launching SCA to recover images processed by media software constitutes a common threat in the era of cloud computing ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) , especially when machine learning as a service ( MLaaS ) is substantially offered ( e.g. , for face recognition ) . When envisioning the high ∗Corresponding Author risk of violating user privacy , there is a demanding need to understand the adversarial capability of reconstructing private images with SCA . To date , the offline inference phase of existing SCA attacks requires lots of manual efforts with heuristics ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) . While some preliminary studies explore to use AI models to infer secrets ( Hospodar et al. , 2011 ; Kim et al. , 2019 ; Cagli et al. , 2017 ; Hettwer et al. , 2018 ) , their approaches are primarily driven by classification , i.e. , predicting whether a particular bit of crypto key is 0 or 1 . In contrast , reconstructing user private images requires to synthesize and enhance images from a more holistic perspective . Recent advances in generative models , such as generative adversarial network ( GAN ) and variational autoencoder ( VAE ) , have enabled a major thrust in image reconstruction , given subtle signals in even cross-modal settings , e.g. , voice-to-face or text-to-image ( Radford et al. , 2016 ; Reed et al. , 2016 ; Wen et al. , 2019 ; Hong et al. , 2018b ) . Inspired by this breakthrough , we propose an SCA framework using generative models . Given a trace of side channel data points made by image analysis software ( e.g. , libjpeg ) when processing a user input , we reconstruct an image visually similar to the input . Each logged side channel trace , containing around a million records , is first encoded into a matrix and pre-processed by a convolutional neural network ( CNN ) for feature extraction . 
Then , a VAE network with a learned prior ( referred to as VAE-LP ) is employed to reconstruct an image with a holistic visual appearance . We further supplement VAE-LP with a GAN model to enhance the recovered image with vivid details . The GAN generator yields the final output . Our attack exploits media libraries , libjpeg ( libjpeg , 2020 ) and uPNG ( Middleditch , 2010 ) , using two popular side channels , CPU cache line accesses and OS page table accesses . Our attack is independent of the underlying computing infrastructure ( i.e. , OS , hardware , image library implementation ) . We require enough side channel logs for training , which is consistently assumed by previous works ( Heuser & Zohner , 2012 ; Maghrebi et al. , 2016 ) . While existing attacks particularly target libjpeg and leverage domain knowledge , system hacking , and manual efforts to infer pixel values ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) , we show that images with many details can be reconstructed in an end-to-end manner . We also show surprising results that enabled by our framework , side channel traces composing one-bit data read/write patterns , which prima facie seems minimally informative , suffice recovering images . We conduct qualitative and quantitative evaluations on specific and general datasets representing daily images that can violate privacy if leaked . The recovered images manifest consistent visual appearances with private inputs . The recovered images also exhibit high discriminability : each recovered image ( e.g. , a face ) can be matched to its reference input among many candidates with high accuracy . In summary , we make the following contributions : At the conceptual level , we present the first generative model-based SCA . Our novel approach learns how program inputs influence system side channels from historical side channel logs to reconstruct user private images automatically . We , for the first time , demonstrate surprisingly effective attacks toward even low-resolution side channels like one-bit data read/write access patterns . At the technical level , we design an effective framework by incorporating various design principles to facilitate image reconstruction from side channels . Our framework pipelines 2D CNN , VAE-LP , and GAN models to systematically enhance the quality of generated images . At the empirical level , our evaluations show that the proposed framework can generate images with vivid details and are closely similar to reference inputs . The reconstructed images show high discriminability , making privacy leakage attacks more practical . This is the first paper to conduct SCA with generative models , revealing new SCA opportunities and unknown threats . Our code is at https : //github.com/genSCA/genSCA . 2 BACKGROUND . To formulate SCA , let the attacked program be P and its input domain be I . For a deterministic and terminating program P , the program execution can be modeled as a mapping P : I → E where E represents program runtime behavior ( e.g. , memory access ) . As a common assumption ( Hähnel et al. , 2017 ) , program inputs are private and profitable for attackers . Since different inputs i , i′ ∈ I can likely induce different e , e′ ∈ E , using input-dependent e ∈ E enables to infer i . Modern computer architectures have primarily zeroed the possibility for adversaries to log e ∈ E. Nevertheless , an attacker ’ s view on P can be modeled as a function view : E → O that maps E to side channel observations O . 
Hence , the composition ( view ◦ P ) : I → O maps inputs to side channel data points that can be logged by attackers . The view indicates the attacker ’ s capability , and for typical system security scenarios , the view is formulated as view : Emem → Ocache ∪ Opage , where Emem denotes a trace of accessed memory locations when executing P with i , and Ocache andOpage represent CPU cache and OS page table side channels , respectively . Despite being unable to monitor Emem , attackers can log accessed cache lines Ocache or page table entries Opage derived from Emem . Attackers then infer Emem and recover i . We now concretize the procedure by introducing how SCA is used to exploit cloud platforms in a two-step approach as follows : Online Phase to Record O . Considering a cloud environment in Fig . 1 ( a ) , where two users , one normal and one malicious , deploy two virtual machine ( VM ) instances on the host . Private images i ∈ I uploaded by users are processed by media library P within the left VM . Modern computer design , e.g. , Intel SGX ( Intel , 2014 ) , guarantees that i ∈ I and the execution of P can not be viewed from outside the VM . However , when processing i , P usually imposes a large volume of CPU cache and page table accesses , which , as shown in Fig . 1 ( a ) , can be recorded by the co-located malicious VM or the malicious host OS in a fully automated manner ( Han et al. , 2017 ; Chiang et al. , 2015 ; Liu et al. , 2015a ; Xu et al. , 2015 ; Hähnel et al. , 2017 ) . Offline Phase to Infer i . Once side channel traces o ∈ O are collected , an offline phase is conducted to infer ( view ◦P ) −1 : O → I and recover i . Fig . 1 ( b ) presents a sample code , where depending on values of input i , different memory locations ( and cache lines ) will be visited . Fig . 1 ( c ) shows the corresponding trace of logged cache side channel records . To infer i , attackers eliminate the second record ( since it is input-independent ) , and infer i as 1 according to the first record . Attackers anticipate to 1 ) pinpointing a subset of records o∗ ⊆ o that depend on i , and to 2 ) recovering the mapping from o∗ to i . However , real-world side channel traces ( e.g. , generated by uPNG ) could contain over one million records , where only a tiny portion o∗ is input-dependent . Even worse , constructing the mapping between i and o∗ requires a deep understanding of program control flows ( e.g. , how i affects program execution and induces cache accesses in Fig . 1 ( b ) ) . To date , these tasks require either manual effort ( Xu et al. , 2015 ; Hähnel et al. , 2017 ) or formal analysis ( Doychev et al. , 2013 ; Wang et al. , 2017 ; 2019 ) , which are program-specific and error-prone with low scalability . Existing research tackles the offline phase challenge by proposing profiling-based SCA ( Maghrebi et al. , 2016 ; Hettwer et al. , 2018 ; Kim et al. , 2019 ) , where models are trained to approximate ( view◦ P ) −1 : O → I . However , existing work focuses on predicting particular bits of crypto keys from succinct side channel traces , e.g. , a few hundred records ( Hettwer et al. , 2020 ) . In contrast , this is the first work shows that by incorporating generative models , SCA can be conducted to exploit real-world media libraries and holistically reconstruct high-quality and discriminable images . 3 THE PROPOSED FRAMEWORK . A common assumption shared by SCA ( Heuser & Zohner , 2012 ; Hähnel et al. , 2017 ; Xu et al. 
, 2015 ) is that the attackers can profile the victim software locally or remotely with training inputs and collect corresponding side channel traces . We train a model to learn how different inputs can influence side channel traces . Then , given a side channel trace logged when processing an unknown image , our framework reconstructs an image that is visually similar to the unknown input . Our framework has two pipelined modules ( see Fig . 2 ) . Given a side channel trace Ti corresponding to processing an image i , we first encode Ti into a matrix . The encoded matrix will be fed to the VAE-LP module to generate image îtrace , and we further use GAN to denoise îtrace and yield the final output îGAN . We now elaborate on each module . More details are given in Appendix B .
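To ground the notation above , here is a toy sketch of the two pieces the pipeline consumes : the view functions map a trace of accessed memory addresses to the cache-line / page-granular observations an attacker can actually log ( the $view : E_{mem} \rightarrow O_{cache} \cup O_{page}$ of Section 2 ) , and `trace_to_matrix` packs such a trace into a fixed-size 2D array for the CNN feature extractor . The 64-byte line , 4 KiB page , matrix side length and zero padding are common defaults assumed for illustration , not values taken from the paper .

```python
import numpy as np

CACHE_LINE_BYTES = 64     # typical x86 cache line size (assumption)
PAGE_BYTES = 4096         # typical OS page size (assumption)

def cache_view(mem_trace):
    """E_mem -> O_cache: the attacker observes cache-line indices, not full addresses."""
    return [addr // CACHE_LINE_BYTES for addr in mem_trace]

def page_view(mem_trace):
    """E_mem -> O_page: page-table accesses reveal only page-granular locations."""
    return [addr // PAGE_BYTES for addr in mem_trace]

def trace_to_matrix(trace, side=1024, pad_value=0.0):
    """Encode a 1D side-channel trace T_i as a (side, side) matrix for the 2D CNN.

    Real traces (on the order of a million records) are truncated or zero-padded
    to side*side entries before being reshaped.
    """
    flat = np.asarray(trace, dtype=np.float32)[: side * side]
    if flat.size < side * side:
        flat = np.pad(flat, (0, side * side - flat.size), constant_values=pad_value)
    return flat.reshape(side, side)
```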
The authors present a framework that uses a combination of VAE and GAN to recover private user images using Side channel analysis of memory access . A VAE-LP model first reconstructs a coarse image from side channel information which is reshaped and processed using a convolutional network. The output of the VAE-LP model is refined using a GAN to add fine details. Compelling results are demonstrated for recovery of private information and state of art metrics are reported.
SP:b2fc6ca65add04fb32bcf7622d9098de9004ca2b
DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
Deep ensembles perform better than a single network thanks to the diversity among their members . Recent approaches regularize predictions to increase diversity ; however , they also drastically decrease individual members ’ performances . In this paper , we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies . Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information , we introduce a novel training criterion called DICE : it increases diversity by reducing spurious correlations among features . The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant . Therefore , besides the classification loss with information bottleneck , we adversarially prevent features from being conditionally predictable from each other . We manage to reduce simultaneous errors while protecting class information . We obtain state-of-the-art accuracy results on CIFAR-10/100 : for example , an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently . We further analyze the consequences on calibration , uncertainty estimation , out-of-distribution detection and online co-distillation . 1 INTRODUCTION . Averaging the predictions of several models can significantly improve the generalization ability of a predictive system . Due to its effectiveness , ensembling has been a popular research topic ( Nilsson , 1965 ; Hansen & Salamon , 1990 ; Wolpert , 1992 ; Krogh & Vedelsby , 1995 ; Breiman , 1996 ; Dietterich , 2000 ; Zhou et al. , 2002 ; Rokach , 2010 ; Ovadia et al. , 2019 ) as a simple alternative to fully Bayesian methods ( Blundell et al. , 2015 ; Gal & Ghahramani , 2016 ) . It is currently the de facto solution for many machine learning applications and Kaggle competitions ( Hin , 2020 ) . Ensembling reduces the variance of estimators ( see Appendix E.1 ) thanks to the diversity in predictions . This reduction is most effective when errors are uncorrelated and members are diverse , i.e. , when they do not simultaneously fail on the same examples . Conversely , an ensemble of M identical networks is no better than a single one . In deep ensembles ( Lakshminarayanan et al. , 2017 ) , the weights are traditionally trained independently : diversity among members only relies on the randomness of the initialization and of the learning procedure . Figure 1 shows that the performance of this procedure quickly plateaus with additional members . To obtain more diverse ensembles , we could adapt the training samples through bagging ( Breiman , 1996 ) and bootstrapping ( Efron & Tibshirani , 1994 ) , but a reduction of training samples has a negative impact on members with multiple local minima ( Lee et al. , 2015 ) . Sequential boosting does not scale well for time-consuming deep learners that overfit their training dataset . Liu & Yao ( 1999a ; b ) ; Brown et al . ( 2005b ) explicitly quantified the diversity and regularized members into having negatively correlated errors . However , these ideas have not significantly improved accuracy when applied to deep learning ( Shui et al. , 2018 ; Pang et al. , 2019 ) : while members should predict the same target , they force disagreements among strong learners and therefore increase their bias . 
It highlights the main objective and challenge of our paper : finding a training strategy to reach an improved trade-off between ensemble diversity and individual accuracies ( Masegosa , 2020 ) . [ Figure 2 caption : features extracted by the two members from one input should not share more information than features extracted from two inputs in the same class , i.e. , a discriminator should not be able to differentiate the two pairings ; the original symbols were lost in extraction . ] Our core approach is to encourage all members to predict the same thing , but for different reasons . Therefore the diversity is enforced in the feature space and not on predictions . Intuitively , to maximize the impact of a new member , extracted features should bring information about the target that is absent at this time and thus unpredictable from other members ' features . It would remove spurious correlations , e.g . information redundantly shared among features extracted by different members but useless for class prediction . This redundancy may be caused by a detail in the image background and therefore will not be found in features extracted from other images belonging to the same class . This could make members predict badly simultaneously , as shown in Figure 2 . Our new learning framework , called DICE , is driven by Information Bottleneck ( IB ) ( Tishby , 1999 ; Alemi et al. , 2017 ) principles , which force features to be concise by forgetting the task-irrelevant factors . Specifically , DICE leverages the Minimum Necessary Information criterion ( Fischer , 2020 ) for deep ensembles , and aims at reducing the mutual information ( MI ) between features and inputs , but also the information shared between features . We prevent extracted features from being redundant . As mutual information can detect arbitrary dependencies between random variables ( such as symmetry , see Figure 2 ) , we increase the distance between pairs of members : it promotes diversity by reducing predictions ' covariance . Most importantly , DICE protects features ' informativeness by conditioning mutual information upon the target . We build upon recent neural approaches ( Belghazi et al. , 2018 ) based on the Donsker-Varadhan representation of the KL formulation of MI . We summarize our contributions as follows : • We introduce DICE , a new adversarial learning framework to explicitly increase diversity in ensembles by minimizing the conditional redundancy between features . • We rationalize our training objective by arguments from information theory . • We propose an implementation through neural estimation of conditional redundancy . We consistently improve accuracy on CIFAR-10/100 as summarized in Figure 1 , with better uncertainty estimation and calibration . We analyze how the two components of our loss modify the accuracy-diversity trade-off . We improve out-of-distribution detection and online co-distillation . 2 DICE MODEL . Notations Given an input distribution $X$ , a network $\theta$ is trained to extract the best possible dense features $Z$ to model the distribution $p_\theta(Y|X)$ over the targets , which should be close to the Dirac on the true label . Our approach is designed for ensembles with $M$ members $\theta_i , i \in \{1, \ldots, M\}$ , extracting $Z_i$ . In a branch-based setup , members share low-level weights to reduce computation cost . We average the $M$ predictions at inference . We initially consider an ensemble of $M = 2$ members . Quick overview First , we train each member separately for classification with information bottleneck .
Second , we train members together to remove spurious redundant correlations while adversarially training a discriminator . In conclusion , members learn to classify with conditionally uncorrelated features for increased diversity . Our procedure is driven by the following theoretical findings . 2.A DERIVING TRAINING OBJECTIVE 2.A.1 BASELINE : NON-CONDITIONAL OBJECTIVE The Minimum Necessary Information ( MNI ) criterion from Fischer ( 2020 ) aims at finding minimal statistics . In deep ensembles , $Z_1$ and $Z_2$ should capture only minimal information from $X$ , while preserving the necessary information about the task $Y$ . First , we consider separately the two Markov chains $Z_1 \leftarrow X \leftrightarrow Y$ and $Z_2 \leftarrow X \leftrightarrow Y$ . As entropy measures information , the entropy of $Z_1$ and $Z_2$ not related to $Y$ should be minimized . We recover IB ( Alemi et al. , 2017 ) in deep ensembles : $\mathrm{IB}_{\beta_{ib}}(Z_1, Z_2) = \frac{1}{\beta_{ib}}\big[I(X;Z_1) + I(X;Z_2)\big] - \big[I(Y;Z_1) + I(Y;Z_2)\big] = \mathrm{IB}_{\beta_{ib}}(Z_1) + \mathrm{IB}_{\beta_{ib}}(Z_2)$ . Second , let 's consider $I(Z_1;Z_2)$ : we minimize it following the minimality constraint of the MNI . $\mathrm{IBR}_{\beta_{ib}, \delta_r}(Z_1, Z_2) = \frac{1}{\beta_{ib}}\overbrace{\big[I(X;Z_1) + I(X;Z_2)\big]}^{\text{Compression}} - \overbrace{\big[I(Y;Z_1) + I(Y;Z_2)\big]}^{\text{Relevancy}} + \delta_r \overbrace{I(Z_1;Z_2)}^{\text{Redundancy}} = \mathrm{IB}_{\beta_{ib}}(Z_1) + \mathrm{IB}_{\beta_{ib}}(Z_2) + \delta_r I(Z_1;Z_2)$ . ( Figure 3 caption fragment : green vertical stripes , with no overlap with relevancy , red stripes . ) Analysis In this baseline criterion , relevancy encourages $Z_1$ and $Z_2$ to capture information about $Y$ . Compression & redundancy ( R ) split the information from $X$ into two compressed & independent views . The relevancy-compression-redundancy trade-off depends on the values of $\beta_{ib}$ & $\delta_r$ . 2.A.2 DICE : CONDITIONAL OBJECTIVE The problem is that the compression and redundancy terms in IBR also reduce necessary information related to $Y$ : it is detrimental to have $Z_1$ and $Z_2$ fully disentangled while training them to predict the same $Y$ . As shown in Figure 3 , redundancy regions ( blue horizontal stripes ) overlap with relevancy regions ( red stripes ) . Indeed , the true constraints that the MNI criterion really entails are the following conditional equalities given $Y$ : $I(X;Z_1|Y) = I(X;Z_2|Y) = I(Z_1;Z_2|Y) = 0$ . Mutual information being non-negative , we transform them into our main DICE objective : $\mathrm{DICE}_{\beta_{ceb}, \delta_{cr}}(Z_1, Z_2) = \frac{1}{\beta_{ceb}}\underbrace{\big[I(X;Z_1|Y) + I(X;Z_2|Y)\big]}_{\text{Conditional Compression}} - \underbrace{\big[I(Y;Z_1) + I(Y;Z_2)\big]}_{\text{Relevancy}} + \delta_{cr}\underbrace{I(Z_1;Z_2|Y)}_{\text{Conditional Redundancy}} = \mathrm{CEB}_{\beta_{ceb}}(Z_1) + \mathrm{CEB}_{\beta_{ceb}}(Z_2) + \delta_{cr} I(Z_1;Z_2|Y)$ , ( 1 ) where we recover two conditional entropy bottleneck ( CEB ) ( Fischer , 2020 ) components , $\mathrm{CEB}_{\beta_{ceb}}(Z_i) = \frac{1}{\beta_{ceb}} I(X;Z_i|Y) - I(Y;Z_i)$ , with $\beta_{ceb} > 0$ and $\delta_{cr} > 0$ . Analysis The relevancy terms force features to be informative about the task $Y$ . But contrary to IBR , the DICE bottleneck constraints only minimize information irrelevant to $Y$ . First , the conditional compression removes from $Z_1$ ( or $Z_2$ ) information from $X$ not relevant to $Y$ . Second , the conditional redundancy ( CR ) reduces spurious correlations between members and only forces them to have independent bias , but definitely not independent features . It encourages diversity without affecting members ' individual precision as it protects information related to the target class in $Z_1$ and $Z_2$ . Useless information from $X$ to predict $Y$ should certainly not be in $Z_1$ or $Z_2$ , but it is even worse if it is in $Z_1$ and $Z_2$ simultaneously , as that would cause simultaneous errors .
Even if , for $i \in \{1, 2\}$ , reducing $I(Z_i;X|Y)$ indirectly controls $I(Z_1;Z_2|Y)$ ( as $I(Z_1;Z_2|Y) \leq I(X;Z_i|Y)$ by the chain rule ) , it is more efficient to directly target this intersection region through the CR term . In a final word , DICE is to IBR for deep ensembles as CEB is to IB for a single network . We now approximate the two CEB components and the CR component in the DICE objective from equation 1 . 2.B APPROXIMATING DICE INTO A TRACTABLE LOSS 2.B.1 VARIATIONAL APPROXIMATION OF CONDITIONAL ENTROPY BOTTLENECK We leverage the Markov assumptions in $Z_i \leftarrow X \leftrightarrow Y$ , $i \in \{1, 2\}$ , and estimate empirically on the classification training dataset of $N$ i.i.d . points $D = \{x_n, y_n\}_{n=1}^{N}$ , $y_n \in \{1, \ldots, K\}$ . Following Fischer ( 2020 ) , $\mathrm{CEB}_{\beta_{ceb}}(Z_i) = \frac{1}{\beta_{ceb}} I(X;Z_i|Y) - I(Y;Z_i)$ is variationally upper bounded by : $\mathrm{VCEB}_{\beta_{ceb}}(\{e_i, b_i, c_i\}) = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{\beta_{ceb}} D_{\mathrm{KL}}\big(e_i(z|x_n)\,\|\,b_i(z|y_n)\big) - \mathbb{E}_{\epsilon}\big[\log c_i(y_n \mid e_i(x_n, \epsilon))\big]$ . ( 2 ) See the explanation in Appendix E.4 . $e_i(z|x)$ is the true features distribution generated by the encoder , $c_i(y|z)$ is a variational approximation of the true distribution $p(y|z)$ by the classifier , and $b_i(z|y)$ is a variational approximation of the true distribution $p(z|y)$ by the backward encoder . This loss is applied separately on each member $\theta_i = \{e_i, c_i, b_i\}$ , $i \in \{1, 2\}$ . Practically , we parameterize all distributions with Gaussians . The encoder $e_i$ is a traditional neural network feature extractor ( e.g . ResNet-32 ) that learns distributions ( means and covariances ) rather than deterministic points in the feature space . That 's why $e_i$ transforms an image into two tensors : a features-mean $e^{\mu}_i(x)$ and a diagonal features-covariance $e^{\sigma}_i(x)$ , each of size $d$ ( e.g . 64 ) . The classifier $c_i$ is a dense layer that transforms a features-sample $z$ into logits to be aligned with the target $y$ through conditional cross entropy . $z$ is obtained via the reparameterization trick : $z = e_i(x, \epsilon) = e^{\mu}_i(x) + \epsilon \odot e^{\sigma}_i(x)$ with $\epsilon \sim \mathcal{N}(0, 1)$ . Finally , the backward encoder $b_i$ is implemented as an embedding layer of size $(K, d)$ that maps the $K$ classes to class-features-means $b^{\mu}_i(z|y)$ of size $d$ , as we set the class-features-covariance to 1 . The Gaussian parametrization also enables the exact computation of the $D_{\mathrm{KL}}$ ( see Appendix E.3 ) , which forces ( 1 ) the features-mean $e^{\mu}_i(x)$ to converge to the class-features-mean $b^{\mu}_i(z|y)$ and ( 2 ) the predicted features-covariance $e^{\sigma}_i(x)$ to be close to 1 . The advantage of VCEB versus VIB ( Alemi et al. , 2017 ) is the class-conditional $b^{\mu}_i(z|y)$ versus the non-conditional $b^{\mu}_i(z)$ , which protects class information . 2.B.2 ADVERSARIAL ESTIMATION OF CONDITIONAL REDUNDANCY Theoretical Problem We now focus on estimating $I(Z_1;Z_2|Y)$ , with no such Markov properties . Despite being a pivotal measure , mutual information estimation historically relied on nearest neighbors ( Singh et al. , 2003 ; Kraskov et al. , 2004 ; Gao et al. , 2018 ) or density kernels ( Kandasamy et al. , 2015 ) that do not scale well in high dimensions . We benefit from recent advances in neural estimation of mutual information ( Belghazi et al. , 2018 ) , built on optimizing the Donsker & Varadhan ( 1975 ) dual representation of the KL divergence . Mukherjee et al . ( 2020 ) extended this formulation to conditional mutual information estimation .
$\mathrm{CR} = I(Z_1;Z_2|Y) = D_{\mathrm{KL}}\big(p(Z_1, Z_2, Y)\,\|\,p(Z_1, Y)\,p(Z_2|Y)\big) = \sup_f \mathbb{E}_{x \sim p(z_1, z_2, y)}[f(x)] - \log\big(\mathbb{E}_{x \sim p(z_1, y)p(z_2|y)}[\exp(f(x))]\big) = \mathbb{E}_{x \sim p(z_1, z_2, y)}[f^*(x)] - \log\big(\mathbb{E}_{x \sim p(z_1, y)p(z_2|y)}[\exp(f^*(x))]\big)$ , where $f^*$ computes the pointwise likelihood ratio , i.e. , $f^*(z_1, z_2, y) = \frac{p(z_1, z_2, y)}{p(z_1, y)\,p(z_2|y)}$ . Empirical Neural Estimation We estimate CR ( 1 ) using the empirical data distribution and ( 2 ) replacing $f^* = \frac{w^*}{1 - w^*}$ by the output of a discriminator $w$ , trained to imitate the optimal $w^*$ . Let $B$ be a batch sampled from the observed joint distribution $p(z_1, z_2, y) = p(e_1(z|x), e_2(z|x), y)$ ; we select the features extracted by the two members from one input . Let $B_p$ be sampled from the product distribution $p(z_1, y)\,p(z_2|y) = p(e_1(z|x), y)\,p(z_2|y)$ ; we select the features extracted by the two members from two different inputs that share the same class . We train a multi-layer network $w$ on the binary task of distinguishing these two distributions with the standard cross-entropy loss : $\mathcal{L}_{ce}(w) = -\frac{1}{|B| + |B_p|}\Big[\sum_{(z_1, z_2, y) \in B} \log w(z_1, z_2, y) + \sum_{(z_1, z'_2, y) \in B_p} \log\big(1 - w(z_1, z'_2, y)\big)\Big]$ . ( 3 ) If $w$ is calibrated ( see Appendix B.3 ) , a consistent ( Mukherjee et al. , 2020 ) estimate of CR is : $\hat{I}^{\mathrm{DV}}_{\mathrm{CR}} = \underbrace{\frac{1}{|B|}\sum_{(z_1, z_2, y) \in B} \log f(z_1, z_2, y)}_{\text{Diversity}} - \underbrace{\log \frac{1}{|B_p|}\sum_{(z_1, z'_2, y) \in B_p} f(z_1, z'_2, y)}_{\text{Fake correlations}}$ , with $f = \frac{w}{1 - w}$ . Intuition By training our members to minimize $\hat{I}^{\mathrm{DV}}_{\mathrm{CR}}$ , we force triples from the joint distribution to be indistinguishable from triples from the product distribution . Let 's imagine that two features are conditionally correlated : some spurious information is shared between features only when they are from the same input and not from two inputs ( from the same class ) . This correlation can be informative about a detail in the background , or an unexpected shape in the image , that is rarely found in samples from this input 's class . In that case , the product and joint distributions are easily distinguishable by the discriminator . The first adversarial component will force the extracted features to reduce the correlation , and ideally one of the two features loses this information : it reduces redundancy and increases diversity . The second term would create fake correlations between features from different inputs . As we are not interested in a precise estimation of the CR , we get rid of this second term , which , empirically , did not increase diversity , as detailed in Appendix G : $\hat{\mathcal{L}}^{\mathrm{DV}}_{\mathrm{CR}}(e_1, e_2) = \frac{1}{|B|}\sum_{(z_1, z_2, y) \in B \sim p(e_1(z|x), e_2(z|x), y)} \log f(z_1, z_2, y)$ . ( 4 ) Summary First , we train each member for classification with VCEB from equation 2 , as shown in Step 1 of Figure 4 . Second , as shown in Step 2 of Figure 4 , the discriminator , conditioned on the class $Y$ , learns to distinguish features sampled from one image versus features sampled from two images belonging to $Y$ . Simultaneously , both members adversarially ( Goodfellow et al. , 2014 ) delete spurious correlations to reduce the CR estimate from equation 4 with differentiable signals : it conditionally aligns features . We provide pseudo-code in Appendix B.4 . While we derive similar losses for IBR and CEBR in Appendix E.5 , the full DICE loss is finally : $\mathcal{L}_{\mathrm{DICE}}(\theta_1, \theta_2) = \mathrm{VCEB}_{\beta_{ceb}}(\theta_1) + \mathrm{VCEB}_{\beta_{ceb}}(\theta_2) + \delta_{cr}\,\hat{\mathcal{L}}^{\mathrm{DV}}_{\mathrm{CR}}(e_1, e_2)$ . ( 5 )
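Putting Eqs . 2-5 together , the sketch below shows one member 's VCEB loss under the Gaussian parameterization of 2.B.1 and the adversarial conditional-redundancy step of 2.B.2 . The backbone interface , layer sizes , and the within-class shuffling used to form $B_p$ are assumptions made for illustration ; the authors ' own pseudo-code is in their Appendix B.4 .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VCEBMember(nn.Module):
    """One member theta_i = {e_i, b_i, c_i} trained with the VCEB loss of Eq. (2)."""
    def __init__(self, backbone, feat_dim, d=64, num_classes=10, beta_ceb=1.0):
        super().__init__()
        self.encoder = backbone                           # e.g. a ResNet-32 trunk (assumed)
        self.to_mu = nn.Linear(feat_dim, d)                # e_i^mu(x)
        self.to_log_sigma = nn.Linear(feat_dim, d)         # log e_i^sigma(x)
        self.classifier = nn.Linear(d, num_classes)        # c_i(y|z)
        self.backward_enc = nn.Embedding(num_classes, d)   # b_i^mu(z|y), unit covariance
        self.beta_ceb = beta_ceb

    def forward(self, x, y):
        h = self.encoder(x)
        mu, log_sigma = self.to_mu(h), self.to_log_sigma(h)
        z = mu + torch.randn_like(mu) * log_sigma.exp()    # reparameterization trick
        # Closed-form KL( N(mu, sigma^2) || N(b_mu(y), 1) ) for diagonal Gaussians
        b_mu = self.backward_enc(y)
        kl = 0.5 * ((mu - b_mu) ** 2 + log_sigma.exp() ** 2 - 2 * log_sigma - 1).sum(1)
        ce = F.cross_entropy(self.classifier(z), y, reduction="none")
        return (kl / self.beta_ceb + ce).mean(), z          # VCEB value, features z_i

def shuffle_within_class(z2, y):
    """Form B_p: permute member-2 features among inputs that share the same class."""
    idx = torch.arange(len(y))
    for c in y.unique():
        mask = (y == c).nonzero(as_tuple=True)[0]
        idx[mask] = mask[torch.randperm(len(mask))]
    return z2[idx]

def dice_step(member1, member2, discriminator, x, y, delta_cr):
    """Returns (encoder loss of Eq. 5, discriminator loss of Eq. 3).
    In practice the two losses update disjoint parameters (separate optimizers)."""
    vceb1, z1 = member1(x, y)
    vceb2, z2 = member2(x, y)
    z2_prod = shuffle_within_class(z2, y)
    w_joint = discriminator(z1, z2, y).clamp(1e-6, 1 - 1e-6)   # w(z1, z2, y) in (0, 1)
    w_prod = discriminator(z1, z2_prod, y).clamp(1e-6, 1 - 1e-6)
    # Eq. (3): joint pairs labeled 1, class-matched product pairs labeled 0
    d_loss = -(torch.log(w_joint).mean() + torch.log(1 - w_prod).mean()) / 2
    # Eq. (4): encoders minimize the mean log-likelihood ratio log(w / (1 - w))
    cr_loss = torch.log(w_joint / (1 - w_joint)).mean()
    return vceb1 + vceb2 + delta_cr * cr_loss, d_loss           # Eq. (5)
```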
2.C FULL PROCEDURE WITH M MEMBERS We expand our objective for an ensemble with $M > 2$ members . We only consider pairwise interactions for simplicity , to keep a quadratic rather than exponential growth in the number of components , and truncate higher-order interactions , e.g . $I(Z_i;Z_j, Z_k|Y)$ ( see Appendix F.1 ) . Driven by the previous variational and neural estimations , we train $\theta_i = \{e_i, b_i, c_i\}$ , $i \in \{1, \ldots, M\}$ , on : $\mathcal{L}_{\mathrm{DICE}}(\theta_{1:M}) = \sum_{i=1}^{M} \mathrm{VCEB}_{\beta_{ceb}}(\theta_i) + \frac{\delta_{cr}}{M - 1} \sum_{i=1}^{M}\sum_{j=i+1}^{M} \hat{\mathcal{L}}^{\mathrm{DV}}_{\mathrm{CR}}(e_i, e_j)$ , ( 6 ) while adversarially training $w$ on $\mathcal{L}_{ce}$ . Batch $B$ is sampled from the concatenation of the joint distributions $p(z_i, z_j, y)$ where $i , j \in \{1, \ldots, M\}$ , $i \neq j$ , while $B_p$ is sampled from the product distribution $p(z_i, y)\,p(z_j|y)$ . We use the same discriminator $w$ for the $\binom{M}{2}$ estimates . This improves scalability by reducing the number of parameters to be learned . Indeed , an additional member in the ensemble only adds $256 \cdot d$ trainable weights in $w$ , where $d$ is the feature dimension . See Appendix B.3 for additional information related to the discriminator $w$ .
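For $M > 2$ members , Eq . 6 simply sums the pairwise CR terms with a single shared discriminator . A schematic loop , reusing the two-member pieces sketched above and following the $1/(M-1)$ scaling in the reading of Eq . 6 , could look like this :

```python
import torch
from itertools import combinations

def dice_loss_m_members(members, discriminator, x, y, delta_cr):
    """Sketch of Eq. (6): per-member VCEB terms plus scaled pairwise CR terms."""
    vceb_losses, feats = [], []
    for member in members:
        loss_i, z_i = member(x, y)
        vceb_losses.append(loss_i)
        feats.append(z_i)
    cr_total = 0.0
    for i, j in combinations(range(len(members)), 2):       # the C(M, 2) pairs
        w = discriminator(feats[i], feats[j], y).clamp(1e-6, 1 - 1e-6)
        cr_total = cr_total + torch.log(w / (1 - w)).mean()  # L_CR^DV(e_i, e_j)
    m = len(members)
    return sum(vceb_losses) + delta_cr / (m - 1) * cr_total
```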
This paper proposes a method of learning ensembles that adhere to an "ensemble version" of the information bottleneck principle. Whereas the information bottleneck principle says the representation should avoid spurious correlations between the representation (Z) and the training data (X) that is not useful for predicting the labels (Y), i.e. I(X;Z) or I(X;Z|Y), this paper proposes that ensembles should additionally avoid spurious correlations between the ensemble members that aren't useful for predicting Y, i.e. I(Z_i; Z_j| Y). They show empirically that the coefficient on this term increases diversity at the expense of decreasing accuracy of individual members of the ensemble.
SP:7fb11c941e8d79248ce5ff7caa0535a466303395
Zero-shot Synthesis with Group-Supervised Learning
1 INTRODUCTION . Primates perform well at generalization tasks . If presented with a single visual instance of an object , they often immediately can generalize and envision the object in different attributes , e.g. , in different 3D pose ( Logothetis et al. , 1995 ) . Primates can readily do so , as their previous knowledge allows them to be cognizant of attributes . Machines , by contrast , are most-commonly trained on sample features ( e.g. , pixels ) , not taking into consideration attributes that gave rise to those features . To aid machine cognition of visual object attributes , a class of algorithms focuses on learning disentangled representations ( Kingma & Welling , 2014 ; Higgins et al. , 2017 ; Burgess et al. , 2018 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ) , which map visual samples onto a latent space that separates the information belonging to different attributes . These methods show disentanglement by interpolating between attribute values ( e.g. , interpolate pose , etc ) . However , these methods usually process one sample at a time , rather than contrasting or reasoning about a group of samples . We posit that semantic links across samples could lead to better learning . We are motivated by the visual generalization of primates . We seek a method that can synthesize realistic images for arbitrary queries ( e.g. , a particular car , in a given pose , on a given background ) , which we refer to as controlled synthesis . We design a method that enforces semantic consistency of attributes , facilitating controlled synthesis by leveraging semantic links between samples . Our method maps samples onto a disentangled latent representation space that ( i ) consists of subspaces , each encoding one attribute ( e.g. , identity , pose , ... ) , and , ( ii ) is such that two visual samples that share an attribute value ( e.g. , both have identity “ car ” ) have identical latent values in the shared attribute subspace ( identity ) , even if other attribute values ( e.g. , pose ) differ . To achieve this , we propose a general learning framework : Group Supervised Learning ( GSL , Sec . 3 ) , which provides a learner ( e.g. , neural network ) with groups of semantically-related training examples , represented as multigraph . Given a query of attributes , GSL proposes groups of training examples with attribute combinations that are useful for synthesizing a test example satisfying the query ( Fig . 1 ) . This endows the network with an envisioning capability . In addition to applications in graphics , controlled synthesis can also augment training sets for better generalization on machine learning tasks ( Sec . 6.3 ) . As an instantiation of GSL , we propose an encoder-decoder network for zero-shot synthesis : GroupSupervised Zero-Shot Synthesis Network ( GZS-Net , Sec . 4 ) . While learning ( Sec . 4.2 & 4.3 ) , we repeatedly draw a group of semantically-related examples , as informed by a multigraph created by GSL . GZS-Net encodes group examples , to obtain latent vectors , then swaps entries for one or more attributes in the latent space across examples , through multigraph edges , then decodes into an example within the group ( Sec . 4.2 ) . 
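The latent-swap step described above ( swapping entries for shared attributes across examples connected by multigraph edges , then decoding ) can be illustrated as below . The equal-size per-attribute partition of the latent vector and the swap-reconstruction objective in the trailing comment are illustrative assumptions about how such a swap could be realized ; section 4 defines the actual GZS-Net objectives .

```python
import torch

def swap_attribute_subspaces(z_a, z_b, shared_attrs, dims_per_attr):
    """Swap the latent sub-vectors of the shared attributes between two examples.

    z_a, z_b: (m * dims_per_attr,) latent codes partitioned per attribute (assumed layout).
    shared_attrs: attribute indices j on the multigraph edge M(a, b).
    Because the two examples share those attribute values, decoding the swapped
    codes should still reproduce the original examples.
    """
    z_a2, z_b2 = z_a.clone(), z_b.clone()
    for j in shared_attrs:
        sl = slice(j * dims_per_attr, (j + 1) * dims_per_attr)
        z_a2[sl], z_b2[sl] = z_b[sl], z_a[sl]
    return z_a2, z_b2

# Swap-reconstruction objective (sketch): decode the swapped codes and ask them to
# reproduce the original inputs, so each sub-space must isolate its own attribute:
# loss = ||decoder(z_a2) - x_a||^2 + ||decoder(z_b2) - x_b||^2
```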
Our contributions are : ( i ) We propose Group-Supervised Learning ( GSL ) , explain how it casts its admissible datasets into a multigraph , and show how it can be used to express learning from semantically-related groups and to synthesize samples with controllable attributes ; ( ii ) We show one instantiation of GSL : Group-supervised Zero-shot Synthesis Network ( GZS-Net ) , trained on groups of examples and reconstruction objectives ; ( iii ) We demonstrate that GZS-Net trained with GSL outperforms state-of-the-art alternatives for controllable image synthesis on existing datasets ; ( iv ) We provide a new dataset , Fonts ( http : //ilab.usc.edu/datasets/fonts ) , with its generating code . It contains 1.56 million images and their attributes . Its simplicity allows rapid idea prototyping for learning disentangled representations . 2 RELATED WORK . We review research areas that share similarities with our work , to position our contribution . Self-Supervised Learning ( e.g. , Gidaris et al . ( 2018 ) ) admits a dataset containing features of training samples ( e.g. , upright images ) and maps it onto an auxiliary task ( e.g. , rotated images ) : dataset examples are drawn and a random transformation ( e.g. , rotate 90◦ ) is applied to each . The task could be to predict the transformation ( e.g. , =90◦ ) from the transformed features ( e.g. , rotated image ) . Our approach is similar , in that it also creates auxiliary tasks ; however , the tasks we create involve semantically-related groups of examples , rather than one example at a time . Disentangled Representation Learning refers to methods that infer latent factors given example visible features , under a generative assumption that each latent factor is responsible for generating one semantic attribute ( e.g . color ) . Following Variational Autoencoders ( VAEs , Kingma & Welling , 2014 ) , a class of models ( including Higgins et al. , 2017 ; Chen et al. , 2018 ) achieve disentanglement implicitly , by incorporating into the objective a distance measure , e.g . the KL-divergence , encouraging the latent factors to be statistically independent . While these methods can disentangle the factors without knowing them beforehand , unfortunately , they are unable to generate novel combinations not witnessed during training ( e.g. , generating images of a red car , without any in training ) . On the other hand , our method requires knowing the semantic relationships between samples ( e.g. , which objects are of the same identity and/or color ) , but can then synthesize novel combinations ( e.g. , by stitching latent features of “ any car ” plus “ any red object ” ) . Conditional synthesis methods can synthesize a sample ( e.g. , image ) , and some use information external to the synthesized modalities , e.g. , a natural language sentence ( Zhang et al. , 2017 ; Hong et al. , 2018 ) or a class label ( Mirza & Osindero , 2014 ; Tran et al. , 2017 ) . Ours differs , in that our “ external information ” takes the form of semantic relationships between samples . There are methods based on GANs ( Goodfellow et al. , 2014 ) that also utilize semantic relationships , including Motion Re-targeting ( Yang et al. , 2020 ) , which unfortunately requires domain-specific hand-engineering ( detecting and tracking human body parts ) . On the other hand , we design and apply our method on different tasks ( including human faces , vehicles , fonts ; see Fig . 1 ) . Further , we compare against two recent GAN methods , starGAN ( Choi et al. , 2018 ) and ELEGANT ( Xiao et al.
, 2018 ) , as they are state-of-the-art GAN methods for amending visual attributes onto images . While they are powerful at carrying out local image transformations ( within a small patch , e.g. , changing skin tone or hair texture ) , our method better maintains global information : when rotating the main object , the scene also rotates with it , in a semantically coherent manner . Importantly , our learning framework allows using simpler network architectures , such as feed-forward auto-encoders trained with only reconstruction objectives , as opposed to GANs , with potential difficulties such as lack of convergence guarantees . Zero-shot learning also consumes side-information . For instance , the models of Lampert ( 2009 ) ; Atzmon & Chechik ( 2018 ) learn from object attributes , like our method . However , ( i ) these models are supervised to accurately predict attributes , ( ii ) they train and infer one example at a time , and ( iii ) they are concerned with classifying unseen objects . We differ in that ( i ) no learning gradients ( supervision signal ) are derived from the attributes , as ( ii ) these attributes are used to group the examples ( based on shared attribute values ) , and ( iii ) we are concerned with generation rather than classification : we want to synthesize an object in previously-unseen attribute combinations . Graph Neural Networks ( GNNs ) ( Scarselli et al. , 2009 ) are a class of models defined on graph-structured data . This is applicable to our method , as we propose to create a multigraph connecting training samples . In fact , our method can be described as a GNN with message passing functions ( Gilmer et al. , 2017 ) that are aware of the latent space partitioning per attribute ( explained in Sec . 4 ) . Nonetheless , for self-containment , we introduce our method in the absence of the GNN framework . 3 GROUP-SUPERVISED LEARNING . 3.1 DATASETS ADMISSIBLE BY GSL . Formally , a dataset admissible by GSL contains n samples D = { x^( i ) }_{ i=1 }^n , where each example is accompanied by m attributes D_a = { ( a_1^( i ) , a_2^( i ) , . . . , a_m^( i ) ) }_{ i=1 }^n . Each attribute value is a member of a countable set : a_j ∈ A_j . For instance , pertaining to visual scenes , A_1 can denote foreground colors A_1 = { red , yellow , . . . } , A_2 could denote background colors , A_3 could correspond to foreground identity , and A_4 to ( quantized ) orientation . Such datasets have appeared in the literature , e.g . in Borji et al . ( 2016 ) ; Matthey et al . ( 2017 ) ; Langner et al . ( 2010 ) ; Lai et al . ( 2011 ) . 3.2 AUXILIARY TASKS VIA MULTIGRAPHS . Given a dataset of n samples and their attributes , we define a multigraph M with node set [ 1 .. n ] . Two nodes i , k ∈ [ 1 .. n ] with i ≠ k are connected with edge labels M ( i , k ) ⊆ [ 1 .. m ] as : M ( i , k ) = { j | a_j^( i ) = a_j^( k ) ; j ∈ [ 1 .. m ] } . In particular , M defines a multigraph , with |M ( i , k ) | denoting the number of edges connecting nodes i and k , which equals the number of their shared attributes . Fig . 2 depicts a ( sub- ) multigraph for the Fonts dataset ( Sec . 5.1 ) . Definition 1 COVER ( S , i ) : Given a node set S ⊆ [ 1 .. |Dg| ] and a node i ∈ [ 1 .. |Dg| ] , we say set S covers node i if every attribute value of i is in at least one member of S. Formally : COVER ( S , i ) ⇐⇒ [ 1 .. m ] = ⋃_{ k∈S } M ( i , k ) . ( 1 ) When COVER ( S , i ) holds , there are two mutually-exclusive cases : either i ∈ S , or i /∈ S , respectively shaded as green and blue in Fig . 2 ( b ) .
The first case trivially holds even for small S , e.g . COVER ( { i } , i ) holds for all i . However , we are interested in non-trivial sets where |S| > 1 , as sets with |S| = 1 would reduce our proposed network ( Sec . 4 ) to a standard auto-encoder . The second case is crucial for zero-shot synthesis . Suppose the ( image ) features of node i ( in Fig . 2 ( b ) ) are not given ; we can then search for an S1 such that COVER ( S1 , i ) holds , under the assumption that S1 then contains sufficient information to synthesize i ’ s features even though they are not given ( i /∈ S1 ) . Until this point , we have made no assumptions about how the pairs ( S , i ) are extracted ( mined ) from the multigraph s.t . COVER ( S , i ) holds . In the sequel , we train with |S| = 2 and i ∈ S. We find that this particular specialization of GSL is easy to program , and we leave analyzing the impact of mining different kinds of cover sets to future work .
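To make the multigraph and COVER definitions above concrete, here is a minimal Python sketch; the attribute tuples, helper names, and the toy three-sample dataset are hypothetical illustrations, not the paper's implementation.

from itertools import combinations

def build_multigraph(attributes):
    # attributes[i] is the tuple (a_1^(i), ..., a_m^(i)) of sample i.
    # Returns M with M[(i, k)] = set of attribute indices j where a_j^(i) == a_j^(k).
    n = len(attributes)
    M = {}
    for i, k in combinations(range(n), 2):
        shared = {j for j, (ai, ak) in enumerate(zip(attributes[i], attributes[k])) if ai == ak}
        M[(i, k)] = M[(k, i)] = shared
    return M

def cover(M, S, i, m):
    # COVER(S, i): every one of the m attribute values of node i is shared
    # with at least one node in S (Eq. 1); a node trivially covers itself.
    covered = set()
    for k in S:
        covered |= set(range(m)) if k == i else M[(i, k)]
    return covered == set(range(m))

# Hypothetical toy dataset with attributes (identity, color, pose).
attrs = [("car", "red", 0), ("car", "blue", 0), ("cup", "red", 90)]
M = build_multigraph(attrs)
print(cover(M, S={1, 2}, i=0, m=3))  # True: node 1 shares identity and pose, node 2 shares color.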
The paper proposes a new training framework, GSL, for novel content synthesis. GSL enables learning disentangled representations of tangible attributes and achieves novel image synthesis by recombining those swappable components in a zero-shot setting. The framework leverages the underlying semantic links across samples, which can be instantiated as a multigraph. A cycle-consistent reconstruction loss, as well as a standard reconstruction loss, is computed on synthetic samples obtained from swapped latent representations.
SP:5561773ab024b083be4e362db079e371abf79653
Asymmetric self-play for automatic goal discovery in robotic manipulation
1 INTRODUCTION . We are motivated to train a single goal-conditioned policy ( Kaelbling , 1993 ) that can solve any robotic manipulation task that a human may request in a given environment . In this work , we make progress towards this goal by solving a robotic manipulation problem in a table-top setting where the robot ’ s task is to change the initial configuration of a variable number of objects on a table to match a given goal configuration . This problem is simple in its formulation but likely to challenge a wide variety of cognitive abilities of a robot as objects become diverse and goals become complex . Motivated by the recent success of deep reinforcement learning for robotics ( Levine et al. , 2016 ; Gu et al. , 2017 ; Hwangbo et al. , 2019 ; OpenAI et al. , 2019a ) , we tackle this problem using deep reinforcement learning on a very large training distribution . An open question in this approach is how we can build a training distribution rich enough to achieve generalization to many unseen manipulation tasks . This involves defining both an environment ’ s initial state distribution and a goal distribution . The initial state distribution determines how we sample a set of objects and their configuration at the beginning of an episode , and the goal distribution defines how we sample target states given an initial state . In this work , we focus on a scalable way to define a rich goal distribution . The research community has started to explore automated ways of defining goal distributions . For example , previous works have explored learning a generative model of goal distributions ( Florensa et al. , 2018 ; Nair et al. , 2018b ; Racaniere et al. , 2020 ) and collecting teleoperated robot trajectories to identify goals ( Lynch et al. , 2020 ; Gupta et al. , 2020 ) . In this paper , we extend an alternative approach called asymmetric self-play ( Sukhbaatar et al. , 2018b ; a ) for automated goal generation . Asymmetric self-play trains two RL agents named Alice and Bob . Alice learns to propose goals that Bob is likely to fail at , and Bob , a goal-conditioned policy , learns to solve the proposed goals . Alice proposes a goal by manipulating objects and Bob has to solve the goal starting from the same initial state as Alice ’ s . By embodying these two agents into the same robotic hardware , this setup ensures that all proposed goals are provided with at least one solution : Alice ’ s trajectory . There are two main reasons why we consider asymmetric self-play to be a promising goal generation and learning method . First , any proposed goal is achievable , meaning that there exists at least one solution trajectory that Bob can follow to achieve the goal . Because of this property , we can exploit Alice ’ s trajectory to provide additional learning signal to Bob via behavioral cloning . This additional learning signal alleviates the overhead of heuristically designing a curriculum or reward shaping for learning . Second , this approach does not require labor intensive data collection . In this paper , we show that asymmetric self-play can be used to train a goal-conditioned policy for complex object manipulation tasks , and the learned policy can zero-shot generalize to many manually designed holdout tasks , which consist of either previously unseen goals , previously unseen objects , or both . 
To the best of our knowledge , this is the first work that presents zero-shot generalization to many previously unseen tasks by training purely with asymmetric self-play ( asymmetric self-play was proposed in Sukhbaatar et al . ( 2018b ; a ) , but to supplement training while the majority of training is conducted on target tasks ; zero-shot generalization to unseen tasks was not evaluated ) . 2 PROBLEM FORMULATION . Our training environment for robotic manipulation consists of a robot arm with a gripper attached and a wide range of objects placed on a table surface ( Figure 1a , 1b ) . The goal-conditioned policy learns to control the robot to rearrange randomly placed objects ( the initial state ) into a specified goal configuration ( Figure 1c ) . We aim to train a policy on a single training distribution and to evaluate its performance over a suite of holdout tasks which are independently designed and not explicitly present during training ( Figure 2a ) . In this work , we construct the training distribution via asymmetric self-play ( Figure 2b ) to achieve generalization to many unseen holdout tasks ( Figure 1c ) . Mathematical formulation Formally , we model the interaction between an environment and a goal-conditioned policy as a goal-augmented Markov decision process M = ⟨ S , A , P , R , G ⟩ , where S is the state space , A is the action space , P : S × A × S → R denotes the transition probability , G ⊆ S specifies the goal space and R : S × G → R is a goal-specific reward function . A goal-augmented trajectory sequence is { ( s0 , g , a0 , r0 ) , . . . , ( st , g , at , rt ) } , where the goal is provided to the policy as part of the observation at every step . We say a goal is achieved if st is sufficiently close to g ( Appendix A.2 ) . With a slightly overloaded notation , we define the goal distribution G ( g|s0 ) as the probability of a goal state g ∈ G conditioned on an initial state s0 ∈ S . Training goal distribution A naive design of the goal distribution G ( g|s0 ) is to randomly place objects uniformly on the table , but it is unlikely to generate interesting goals , such as an object picked up and held above the table surface by a robot gripper . Another possible approach , collecting tasks and goals manually , is expensive and hard to scale . We instead sidestep these issues and automatically generate goals via training based on asymmetric self-play ( Sukhbaatar et al. , 2018b ; a ) . Asymmetric self-play involves using a policy named Alice , πA ( a|s ) , to set goals and a goal-conditioned policy Bob , πB ( a|s , g ) , to solve goals proposed by Alice , as illustrated in Figure 2b . We run πA to generate a trajectory τA = { ( s0 , a0 , r0 ) , . . . , ( sT , aT , rT ) } and the last state is labelled as a goal g for πB to solve . The goal distribution G ( sT = g|s0 ) is fully determined by πA and we train Bob only on this goal distribution . We therefore say zero-shot generalization when Bob generalizes to a holdout task which is not explicitly encoded into the training distribution . Evaluation on holdout tasks To assess zero-shot generalization of πB ( a|s , g ) from our training setup , we hand-designed a suite of holdout tasks with goals that are never directly incorporated into the training distribution . Some holdout tasks also feature previously unseen objects .
The holdout tasks are designed to either test whether a specific skill has been learned , such as the ability to pick up objects ( Figure 3 ) , or represent a semantically interesting task , such as setting a table ( Figure 1c ) . Appendix B.6 describes the list of holdout tasks that we use in our experiments . Note that none of the holdout tasks are used for training πB ( a|s , g ) . 3 ASYMMETRIC SELF-PLAY . To train the Alice policy πA ( a|s ) and the Bob policy πB ( a|s , g ) , we run the following multi-goal game within one episode , as illustrated in Figure 2b : 1 . An initial state s0 is sampled from an initial state distribution . Alice and Bob are instantiated into their own copies of the environment . Alice and Bob alternate turns as follows . 2 . Alice ’ s turn . Alice interacts with its environment for a fixed number of T steps and may rearrange the objects . The state at the end of Alice ’ s turn , sT , will be used as a goal g for Bob . If the proposed goal is invalid ( e.g . if Alice has not moved any objects , or if an object has fallen off the table ) , the episode terminates . 3 . Bob ’ s turn . Bob receives reward if it successfully achieves the goal g in its environment . Bob ’ s turn ends when it succeeds at achieving the goal or reaches a timeout . If Bob ’ s turn ends in a failure , its remaining turns are skipped and treated as failures , while we let Alice keep generating goals . 4 . Alice receives reward if Bob fails to solve the goal that Alice proposed . Steps 2–3 are repeated until 5 goals are set by Alice or Alice proposes an invalid goal , and then the episode terminates . The competition created by this game encourages Alice to propose goals that are increasingly challenging to Bob , while Bob is forced to solve increasingly complex goals . The multi-goal setup was chosen to allow Bob to take advantage of environmental information discovered earlier in the episode to solve its remaining goals , which OpenAI et al . ( 2019a ) found to be important for transfer to physical systems . Note however that in this work we focus on solving goals in simulation only . To improve stability and avoid forgetting , we have Alice and Bob play against past versions of their respective opponent in 20 % of games . More details about the game structure and pseudocode for training with asymmetric self-play are available in Appendix A . 3.1 REWARD STRUCTURE . For Bob , we assign sparse goal-conditioned rewards . We measure the positional and rotational distance between an object and its goal state as the Euclidean distance and the Euler angle rotational distance , respectively . Whenever both distance metrics are below a small error ( the success threshold ) , this object is deemed to be placed close enough to the goal state and Bob receives 1 reward immediately . But if this object is moved away from the goal state that it has arrived at in past steps , Bob obtains -1 reward , such that the sum of per-object reward is at most 1 during a given turn . When all of the objects are in their goal state , Bob receives 5 additional reward and its turn is over . For Alice , we assign a reward after Bob has attempted to solve the goal : 5 reward if Bob failed at solving the goal , and 0 if Bob succeeded . We shape Alice ’ s reward slightly by adding 1 reward if it has set a valid goal , defined to be when no object has fallen off the table and any object has been moved more than the success threshold .
An additional penalty of 3 reward is introduced when Alice sets a goal with objects outside of the placement area , defined to be a fixed 3D volume within the view of the robot ’ s camera . More details are discussed in Appendix A.2 . 3.2 ALICE BEHAVIORAL CLONING ( ABC ) . One of the main benefits of using asymmetric self-play is that the generated goals come with at least one solution to achieve them : Alice ’ s trajectory . Similarly to Sukhbaatar et al . ( 2018a ) , we exploit this property by training Bob with Behavioral Cloning ( BC ) from Alice ’ s trajectory , in addition to the reinforcement learning ( RL ) objective . We call this learning mechanism Alice Behavioral Cloning ( ABC ) . We propose several improvements over the original formulation in Sukhbaatar et al . ( 2018a ) . Demonstration trajectory filtering Compared to BC from expert demonstrations , using Alice ’ s trajectory needs extra care . Alice ’ s trajectory is likely to be suboptimal for solving the goal , as Alice might arrive at the final state merely by accident . Therefore , we only consider trajectories with goals that Bob failed to solve as demonstrations , to avoid distracting Bob with suboptimal examples . Whenever Bob fails , we relabel Alice ’ s trajectory τA to be a goal-augmented version τBC = { ( s0 , sT , a0 , r0 ) , . . . , ( sT , sT , aT , rT ) } as a demonstration for BC , where sT is the goal . PPO-style BC loss clipping The objective for training Bob is L = LRL + λ Labc , where LRL is an RL objective and Labc is the ABC loss . λ is a hyperparameter controlling the relative importance of the BC loss . We set λ = 0.5 throughout the whole experiment . A naive BC loss is to minimize the negative log-likelihood of demonstrated actions , E_( st , gt , at ) ∈ DBC [ − log πB ( at|st , gt ; θ ) ] , where DBC is a mini-batch of demonstration data and πB is parameterized by θ . We found that overly-aggressive policy changes triggered by BC sometimes led to learning instabilities . To prevent the policy from changing too drastically , we introduce PPO-style loss clipping ( Schulman et al. , 2017 ) on the BC loss by setting the advantage Â = 1 in the clipped surrogate objective : Labc = − E_( st , gt , at ) ∈ DBC [ clip ( πB ( at|st , gt ; θ ) / πB ( at|st , gt ; θ_old ) , 1 − ε , 1 + ε ) ] , where πB ( at|st , gt ; θ ) is Bob ’ s likelihood on a demonstration based on the parameters that we are optimizing , and πB ( at|st , gt ; θ_old ) is the likelihood based on Bob ’ s behavior policy ( at the time of demonstration collection ) evaluated on a demonstration . This behavior policy is identical to the policy that we use to collect RL trajectories . By setting Â = 1 , this objective optimizes the naive BC loss , but clips the loss whenever πB ( at|st , gt ; θ ) / πB ( at|st , gt ; θ_old ) is bigger than 1 + ε , to prevent the policy from changing too much . ε is a clipping threshold and we use ε = 0.2 in all the experiments .
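A short sketch of the PPO-style clipped behavioral-cloning loss described above, written in PyTorch-style Python under stated assumptions: the log-likelihood tensors are assumed to come from Bob's policy on relabelled demonstrations, and the function names are hypothetical. Note that a full PPO objective would also take a min with the unclipped ratio; this sketch follows the clipped form as written in the text.

import torch

def abc_loss(logp_new, logp_old, eps=0.2):
    # Clipped BC loss with advantage A_hat = 1:
    # L_abc = -E[ clip( pi_B(a|s,g; theta) / pi_B(a|s,g; theta_old), 1 - eps, 1 + eps ) ].
    ratio = torch.exp(logp_new - logp_old.detach())  # likelihood ratio on demonstrations
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -clipped.mean()

def bob_loss(rl_loss, logp_new, logp_old, lam=0.5):
    # Total objective L = L_RL + lambda * L_abc, with lambda = 0.5 as in the text.
    return rl_loss + lam * abc_loss(logp_new, logp_old)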
This paper presents an approach to learn goal conditioned policies by relying on self-play which sets the goals and discovers a curriculum of tasks for learning. Alice and Bob are the agents. Alice's task is to set a goal by following a number of steps in the environment and she is rewarded when the goal is too challenging for Bob to solve. Bob's task is to solve the task by trying to reproduce the end state of Alice's demonstration. As a result, the learned policy performs various tasks and can work in zero-shot settings.
SP:9f70871f0111b58783f731748d8750c635998f32
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
1 Introduction . Graph neural networks ( GNNs ) have been intensively studied recently [ 29 , 26 , 39 , 68 ] , due to their established performance on various real-world tasks [ 15 , 69 , 53 ] , as well as close connections to spectral graph theory [ 12 , 9 , 16 ] . While most GNN architectures are not very complicated , the training of GNNs can still be costly regarding both memory and computation resources on real-world large-scale graphs [ 10 , 63 ] . Moreover , it is intriguing to transfer learned structural information across different graphs and even domains in settings like few-shot learning [ 56 , 44 , 25 ] . Therefore , several very recent studies have been conducted on the transferability of GNNs [ 21 , 23 , 22 , 59 , 31 , 3 , 47 ] . However , it is unclear in what situations the models will excel or fail , especially when the pre-training and fine-tuning tasks are different . To provide rigorous analysis and guarantees on the transferability of GNNs , we focus on the setting of direct-transferring between the source and target graphs , under an analogous setting of “ domain adaptation ” [ 7 , 59 ] . In this work , we establish a theoretically grounded framework for the transfer learning of GNNs , and leverage it to design a practically transferable GNN model ( code and processed data are available at https : //github.com/GentleZhu/EGI ) . Figure 1 gives an overview of our framework . It is based on a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features , which allows us to define graph information and similarity , so as to analyze GNN transferability ( §3 ) . This view motivates us to design EGI , a novel GNN training objective based on ego-graph information maximization , which is effective in capturing the graph information as we define it ( §3.1 ) . Then we further specify the requirement on transferable node features and analyze the transferability of EGI , which depends on the local graph Laplacians of the source and target graphs ( §3.2 ) . All of our theoretical conclusions have been directly validated through controlled synthetic experiments ( Table 1 ) , where we use structural-equivalent role identification in a direct-transferring setting to analyze the impacts of different model designs , node features and source-target structure similarities on GNN transferability . In §4 , we conduct real-world experiments on multiple publicly available network datasets . On the Airport and Gene graphs ( §4.1 ) , we closely follow the settings of our synthetic experiments and observe consistent but more detailed results supporting the design of EGI and the utility of our theoretical analysis . On the YAGO graphs ( §4.2 ) , we further evaluate EGI in the more generalized and practical setting of transfer learning with task-specific fine-tuning . We find our theoretical insights still indicative in such scenarios , where EGI consistently outperforms state-of-the-art GNN representation and transfer learning frameworks with significant margins . 2 Related Work . Representation learning on graphs has been studied for decades , with earlier spectral-based methods [ 6 , 46 , 52 ] theoretically grounded but hardly scaling up to graphs with over a thousand nodes .
With the emergence of neural networks , unsupervised network embedding methods based on the Skip-gram objective [ 37 ] have replenished the field [ 51 , 14 , 42 , 45 , 66 , 62 , 65 ] . Equipped with efficient structural sampling ( random walk , neighborhood , etc . ) and negative sampling schemes , these methods are easily parallelizable and scalable to graphs with thousands to millions of nodes . However , these models are essentially transductive as they compute fully parameterized embeddings only for nodes seen during training , which are impossible to be transfered to unseen graphs . More recently , researchers introduce the family of graph neural networks ( GNNs ) that are capable of inductive learning and generalizing to unseen nodes given meaningful node features [ 29 , 12 , 15 , 67 ] . Yet , most existing GNNs require task-specific labels for training in a semi-supervised fashion to achieve satisfactory performance [ 29 , 15 , 53 , 64 ] , and their usage is limited to single graphs where the downstream task is fixed . To this end , several unsupervised GNNs are presented , such as the auto-encoder-based ones like VGAE [ 28 ] and GNFs [ 35 ] , as well as the deep-infomax-based ones like DGI [ 54 ] and InfoGraph [ 50 ] . Their potential in the transfer learning of GNN remains unclear when the node features and link structures vary across different graphs . Although the architectures of popular GNNs such as GCN [ 29 ] may not be very complicated compared with heavy vision and language models , training a dedicated GNN for each graph can still be cumbersome [ 10 , 63 ] . Moreover , as pre-training neural networks are proven to be successful in other domains [ 13 , 18 ] , the idea is intriguing to transfer well-trained GNNs from relevant source graphs to improve the modeling of target graphs or enable few-shot learning [ 59 , 31 , 3 ] when labeled data are scarce . In light of this , pioneering works have studied both generative [ 22 ] and discriminative [ 21 , 23 ] GNN pre-training schemes . Though Graph Contrastive Coding [ 43 ] shares the most similar view towards graph structures as us , it utilizes contrastive learning across all graphs instead of focusing on the transfer learning between any specific pairs . On the other hand , unsupervised domain adaptive GCNs [ 59 ] study the domain adaption problem only when the source and target tasks are homogeneous . Most previous pre-training and self-supervised GNNs lack a rigorous analysis towards their transferability and thus have unpredictable effectiveness . The only existing theoretical work on GNN transferability studies the performance of GNNs across different permutations of a single original graph [ 33 , 34 ] and the tradeoff between discriminability and transferability of GNNs [ 47 ] . We , instead , are the first to rigorously study the more practical setting of transferring GNNs across pairs of different source and target graphs . 3 Transferable Graph Neural Networks . In this paper , we design a more transferable training objective for GNN ( EGI ) based on our novel view of essential graph information ( §3.1 ) . We then analyze its transferability as the gap between its abilities to model the source and target graphs , based on their local graph Laplacians ( §3.2 ) . Based on the connection between GNN and spectral graph theory [ 29 ] , we describe the output of a GNN as a combination of its input node features X , fixed graph Laplacian L and learnable graph filters Ψ . 
The goal of training a GNN is then to improve its utility by learning the graph filters that are compatible with the other two components towards specific tasks . In the graph transfer learning setting where downstream tasks are often unknown during pre-training , we argue that the general utility of a GNN should be optimized and quantified w.r.t . its ability to capture the essential graph information in terms of the joint distribution of its topology structures and node features , which motivates us to design a novel ego-graph information maximization model ( EGI ) ( §3.1 ) . The general transferability of a GNN is then quantified by the gap between its abilities to model the source and target graphs . Under reasonable requirements such as using structure-respecting node features as the GNN input , we analyze this gap for EGI based on the structural difference between two graphs w.r.t . their local graph Laplacians ( §3.2 ) . 3.1 Transferable GNN via Ego-graph Information Maximization . In this work , we focus on the direct-transferring setting where a GNN is pre-trained on a source graph Ga in an unsupervised fashion and applied on a target graph Gb without fine-tuning ( in the experiments , we show our model to be generalizable to the more practical settings with task-specific pre-training and fine-tuning , while the study of rigorous bounds in such scenarios is left as future work ) . Consider a graph G = { V , E } , where the set of nodes V are associated with certain features X and the set of edges E form graph structures . Intuitively , the transfer learning will be successful only if both the features and structures of Ga and Gb are similar in some ways , so that the graph filters of a GNN learned on Ga are compatible with the features and structures of Gb . Graph kernels [ 57 , 8 , 30 , 38 ] are well-known for their capability of measuring similarity between pairs of graphs . Motivated by k-hop subgraph kernels [ 4 ] , we introduce a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features . Since a GNN essentially encodes such k-hop ego-graph samples , this view allows us to give concrete definitions towards structural information of graphs in the transfer learning setting , which facilitates the measuring of similarity ( difference ) among graphs . Yet , none of the existing GNN training objectives are capable of recovering such distributional signals of ego-graphs . To this end , we design Ego-Graph Information maximization ( EGI ) , which alternatively reconstructs the k-hop ego-graph of each center node via mutual information maximization [ 20 ] . Definition 3.1 ( K-hop ego-graph ) . We call a graph gi = { V ( gi ) , E ( gi ) } a k-hop ego-graph centered at node vi if it has a k-layer centroid expansion [ 4 ] such that the greatest distance between vi and any other node in the ego-graph is k , i.e . ∀vj ∈ V ( gi ) , |d ( vi , vj ) | ≤ k , where d ( vi , vj ) is the graph distance between vi and vj . In this paper , we use directed k-hop ego-graphs , whose direction is decided by whether they are composed of incoming or outgoing edges to the center node , i.e. , gi and g̃i . The results apply trivially to undirected graphs with gi = g̃i . Definition 3.2 ( Structural information ) . Let 𝒢 be a topological space of sub-graphs ; we view a graph G as samples of k-hop ego-graphs { gi }_{ i=1 }^n drawn i.i.d . from 𝒢 with probability µ , i.e. , gi ∼ µ i.i.d . , ∀ i = 1 , · · · , n.
The structural information of G is then defined to be the set of k-hop ego-graphs of { gi }_{ i=1 }^n and their empirical distribution . As shown in Figure 1 , three graphs G0 , G1 and G2 are characterized by a set of 1-hop ego-graphs and their empirical distributions , which allows us to quantify the structural similarity among graphs as shown in §3.2 ( i.e. , G0 is more similar to G1 than to G2 under such characterization ) . In practice , the nodes in a graph G are characterized not only by their k-hop ego-graph structures but also by their associated node features . Therefore , G should be regarded as samples { ( gi , xi ) } drawn from the joint distribution P on the product space of 𝒢 and a node feature space X . Ego-Graph Information Maximization . Given a set of ego-graphs { ( gi , xi ) }_i drawn from an empirical joint distribution ( gi , xi ) ∼ P , we aim to train a GNN encoder Ψ to maximize the mutual information MI ( gi , Ψ ( gi , xi ) ) between the defined structural information gi ( i.e . the k-hop ego-graph ) and the node embedding zi = Ψ ( gi , xi ) ( later in Section 3.2 , we discuss the equivalence between MI ( gi , zi ) and MI ( ( gi , xi ) , zi ) when the node features are structure-respecting ) . To maximize the MI , another discriminator D ( gi , zi ) : E ( gi ) × zi → R+ is introduced to compute the probability that an edge e belongs to the given ego-graph gi . We use the Jensen-Shannon MI estimator [ 20 ] in the EGI objective , L_EGI = −MI^( JSD ) ( G , Ψ ) = ( 1/N ) ∑_{ i=1 }^N [ sp ( D ( gi , z′i ) ) + sp ( −D ( gi , zi ) ) ] , ( 1 ) where sp ( x ) = log ( 1+e^x ) is the softplus function and ( gi , z′i ) is randomly drawn from the product of marginal distributions , i.e . z′i = Ψ ( gi′ , xi′ ) , ( gi′ , xi′ ) ∼ P , i′ ≠ i . In general , we can also randomly draw negative g′i in the topological space , but enumerating all possible graphs gi′ leads to high computation cost . In Eq . 1 , the computation of D on E ( gi ) depends on the node orders . Following the common practice in graph generation [ 70 ] , we characterize the decision process of D with a fixed graph ordering , i.e. , the BFS-ordering π over edges E ( gi ) . D = f ◦ Φ is composed of another GNN encoder Φ and a scoring function f over an edge sequence Eπ : { e1 , e2 , ... , en } , which makes predictions on the BFS-ordered edges . Recalling our previous definition of the direction of a k-hop ego-graph , the center node encoder Ψ receives pairs ( gi , xi ) while the neighbor node encoder Φ in the discriminator D receives ( g̃i , xi ) . Both encoders are parameterized as GNNs , Ψ ( gi , xi ) = GNNΨ ( Ai , Xi ) , Φ ( g̃i , xi ) = GNNΦ ( A′i , Xi ) , where Ai and A′i are the adjacency matrices with self-loops of gi and g̃i , respectively . The self-loops are added following the common design of GNNs , which allows the convolutional node embeddings to always incorporate the influence of the center node . Ai = A′i^⊤ . The output of Ψ , i.e. , zi ∈ R^n , is the center node embedding , while Φ outputs a representation H ∈ R^{ |gi|×n } for the neighbor nodes in the ego-graph . Once the node representation H is computed , we now describe the scoring function f . For each node pair ( p , q ) ∈ Eπ , hp is the source node representation from Φ and xq is the destination node ’ s features . The scoring function is f ( hp , xq , zi ) = σ ( U^⊤ · τ ( W^⊤ [ hp || xq || zi ] ) ) , ( 2 ) where σ and τ are the Sigmoid and ReLU activation functions .
Thus , the discriminator D is asked to distinguish a positive pair ( ( p , q ) , zi ) from a negative pair ( ( p , q ) , z′i ) for each edge in gi : D ( gi , zi ) = ∑_{ ( p , q ) ∈ Eπ } log f ( hp , xq , zi ) , D ( gi , z′i ) = ∑_{ ( p , q ) ∈ Eπ } log f ( hp , xq , z′i ) . ( 3 ) There are two types of edges ( p , q ) in our consideration of node orders : type-a , the edges across different hops ( from the center node ) , and type-b , the edges within the same hop ( from the center node ) . The aforementioned BFS-based node ordering guarantees that Eq . 3 is sensitive to the ordering of type-a edges , and invariant to the ordering of type-b edges , which is consistent with the requirement of our theoretical analysis on ∆D . Due to the fact that the output of a k-layer GNN only depends on a k-hop ego-graph for both encoders Ψ and Φ , EGI can be trained in parallel by sampling batches of gi ’ s . Besides , the training objective of EGI is transferable as long as ( gi , xi ) across the source graph Ga and the target graph Gb satisfy the conditions given in §3.2 . More model details are in Appendix §B and source code is in the Supplementary Materials . Connection with existing work . To provide more insights into the EGI objective , we also present it as a dual problem of ego-graph reconstruction . Recall our definition of ego-graph mutual information MI ( gi , Ψ ( gi , xi ) ) . It can be related to an ego-graph reconstruction loss R ( gi|Ψ ( gi , xi ) ) as max MI ( gi , Ψ ( gi , xi ) ) = H ( gi ) − H ( gi|Ψ ( gi , xi ) ) ≤ H ( gi ) − R ( gi|Ψ ( gi , xi ) ) . ( 4 ) When EGI is maximizing the mutual information , it simultaneously minimizes the upper error bound of reconstructing an ego-graph gi . In this view , the key difference between EGI and VGAE [ 28 ] is that VGAE assumes each edge in a graph to be observed independently during the reconstruction , while in EGI , edges in an ego-graph are observed jointly during the GNN decoding . Moreover , existing mutual information based GNNs such as DGI [ 54 ] and GMI [ 41 ] explicitly measure the mutual information between node features x and the GNN output Ψ . In this way , they tend to capture node features instead of graph structures , which we deem more essential in graph transfer learning as discussed in §3.2 . Use cases of EGI framework . In this paper , we focus on the classical domain adaptation ( direct-transferring ) setting [ 7 ] , where no target domain labels are available and transferability is measured by the performance discrepancy without fine-tuning . In this setting , the transferability of EGI is theoretically guaranteed by Theorem 3.1 . In §4.1 , we validated this with the airport datasets . Beyond direct-transferring , EGI is also useful in the more generalized and practical setting of transfer learning with fine-tuning , which we introduce in §4.2 and validate with the YAGO datasets . In this setting , the transferability of EGI is not rigorously studied yet , but is empirically shown to be promising . Supportive observations . In the first three columns of our synthetic experimental results ( Table 1 ) , in both cases of transferring GNNs between similar graphs ( F-F ) and dissimilar graphs ( B-F ) , EGI significantly outperforms all competitors when using node degree one-hot encoding as transferable node features . In particular , the performance gains over the untrained GIN show the effectiveness of training and transferring , and our gains are always larger than those of the two state-of-the-art unsupervised GNNs .
Such results clearly indicate the advantageous structure-preserving capability and transferability of EGI .
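As an illustration of the EGI objective in Eq. 1 and the edge scoring of Eq. 2, here is a minimal PyTorch-style sketch; the GNN encoders Ψ and Φ are abstracted away behind precomputed score and embedding tensors, and all shapes and names are assumptions for illustration rather than the released implementation.

import torch
import torch.nn.functional as F

def egi_loss(pos_scores, neg_scores):
    # Eq. 1: L_EGI = mean_i [ softplus(D(g_i, z'_i)) + softplus(-D(g_i, z_i)) ],
    # where pos_scores[i] = D(g_i, z_i) uses the center embedding of g_i itself and
    # neg_scores[i] = D(g_i, z'_i) uses an embedding drawn from the product of marginals.
    return (F.softplus(neg_scores) + F.softplus(-pos_scores)).mean()

def edge_score(h_p, x_q, z_i, W, U):
    # Eq. 2 sketch: f(h_p, x_q, z_i) = sigmoid(U^T relu(W^T [h_p || x_q || z_i])).
    feat = torch.cat([h_p, x_q, z_i], dim=-1)   # concatenation [h_p || x_q || z_i]
    return torch.sigmoid(F.relu(feat @ W) @ U)  # assumed shapes: W (d_in, d_hidden), U (d_hidden, 1)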
The paper introduces a theoretical framework for analyzing GNN transferability. The main idea is to view a graph as subgraph samples with the information of both the connections and the features. Based on this view, the authors define EGI score of a graph as a learnable function that needs to be optimized by maximizing the mutual information between the subgraph and the GNN output embedding of the center node. Then, the authors give an upper bound for the difference of EGI scores of two graphs based on the difference of eigenvalues of the graph Laplacian of the subgraph samples from the two graphs. The implication is that if the difference of the eigenvalues is small, then the EGI scores are similar, which means the GNN has a similar ability to encode the structure of the two graphs.
SP:038a1d3066f8273977337262e975d7a7aab5002f
Information Lattice Learning
1 INTRODUCTION . With rapid progress in AI , there is an increasing desire for general AI ( Goertzel & Pennachin , 2007 ; Chollet , 2019 ) and explainable AI ( Adadi & Berrada , 2018 ; Molnar , 2019 ) , which exhibit broad , human-like cognitive capacities . One common pursuit is to move away from “ black boxes ” designed for specific tasks to achieve broad generalization through strong abstractions made from only a few examples , with neither unlimited priors nor unlimited data ( “ primitive priors ” & “ small data ” instead ) . In this pursuit , we present a new , task-nonspecific framework—Information Lattice Learning ( ILL ) — to learn representations akin to human-distilled rules , e.g. , producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students ( shown at the end ) . The term information lattice was first defined by Shannon ( 1953 ) , but remains largely conceptual and unexplored . In the context of abstraction and representation learning , we independently develop representation lattices that coincide with Shannon ’ s information lattice when restricted to his context . Instead of inventing a new name , we adopt Shannon ’ s . However , we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but we also bring learning into the lattice , yielding the name ILL. ILL explains a signal ( e.g. , a probability distribution ) by disentangled representations , called rules . A rule explains some but not all aspects of the signal , but together the collection of rules aims to capture a large part of the signal . ILL is specially designed to address the core question “ what makes X an X ” or “ what makes X different from Y ” , emphasizing the what rather than generating X or predicting labels X , Y in order to facilitate effective , rule-based explanations designed to help human learners understand . A music AI classifying concertos , or generating one that mimics the masters , does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one . Our focus represents a shift from much representation-learning work ( Bengio et al. , 2013 ) that aim to find the best representation for solving a specific task ( e.g. , classification ) rather than strong concern for explainability . Instead of optimizing a task-specific objective function ( e.g. , classification error ) , ILL balances more general objectives that favor fewer , simpler rules for interpretability , and more essential rules for effectiveness—all formalized later . One intuition behind ILL is to break the whole into simple pieces , similar to breaking a signal into a Fourier series . Yet , rather than decomposition via projection to orthonormal basis and synthesis via weighted sum , we decompose a signal in a hierarchical space called a lattice . Another intuition behind ILL is feature selection . Yet , rather than features , we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning . The goal is to restore human-like , hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice ( called projection-and-lifting , Figure 1 : left ) , resulting in more than a sum of parts . ILL comprises two phases : ( a ) lattice construction ; ( b ) learning ( i.e. , searching ) in the lattice . 
This is similar to many machine learning ( ML ) models comprising ( a ) function class specification then ( b ) learning in the function class , e.g. , constructing a neural network then learning—finding optimal parameters via back-propagation—in the network . ILL ’ s construction phase is prior-efficient : it builds in universal priors that resemble human innate cognition ( cf . the Core Knowledge priors ( Spelke & Kinzler , 2007 ) ) , then grows a lattice of abstractions . The priors can be customized , however , to cater to a particular human learner , or to facilitate more exotic knowledge discovery . ILL ’ s learning phase is data-efficient : it learns from “ small data ” encoded by a signal , but searches for rich explanations of the signal via rule learning , wherein abstraction is key to “ making small data large ” . Notably , the construction phase is prior-driven , not data-driven—data comes in only at the learning phase . Hence , the same construction may be reused in different learning phases for different data sets or even data on different topics ( Figure 1 : right ) . Featuring these two phases , ILL is thus a hybrid model that threads the needle between a fully data-driven model and a fully prior-driven model , echoing the notion of “ starting like a baby ; learning like a child ” ( Hutson , 2018 ) . ILL is related to many research areas . It draws ideas and approaches from lattice theory , information theory , group theory , and optimization . It shares algorithmic similarity with a range of techniques including MaxEnt , data compression , autoencoders , and compressed sensing , but with a much greater focus on achieving human-like explainability and generalizability . Below , we broadly compare ILL to prominent related models , leaving more detailed comparisons with the most similar ones to the Appendix . Compared to deep learning , ILL is a “ white-box ” model balancing human-explainability and task performance . Compared to Bayesian inference , ILL models human reasoning with widely shared , common priors and few , simple rules rather than using probabilistic inference as the driving force . Compared to tree-like models , ILL is structurally more general : a tree ( e.g. , a decision tree or hierarchical clustering ) is essentially a linear lattice ( formally called a chain ) depicting a unidirectional refinement or coarsening process . Compared to the concept lattice in FCA ( Ganter & Wille , 2012 ) , ILL is conceptually more general and may include both known and unknown concepts ; ILL does not require but discovers domain knowledge ( more details in Appendix A ) . We illustrate ILL applications by learning music theory from scores and chemical laws from compounds , and show how ILL ’ s common priors facilitate mutual interpretation between the two subjects . To begin , imagine Tom and Jerry are playing two 12-key pianos simultaneously , one note at a time ( Figure 1 : right ) . The frequency of the played two-note chords gives a 2D signal plotted as a 12 × 12 grayscale heatmap . Inspecting this heatmap , what might be the underlying rules that govern their co-play ? ( Check : all grey pixels have a larger “ Jerry-coordinate ” and project to a black key along the “ Tom-axis ” . ) We now elaborate on ILL and use it to distill rules for complex , realistic cases . 2 INFORMATION LATTICE : ABSTRACTIONS AND RULES OF A SIGNAL . Signal . A signal is a function ξ : X → R . For notational brevity and computational reasons , assume ξ is non-negative and X ⊆ R^n is finite ( not a limitation : see Appendix B ) . For example , a signal ξ : { 1 , . . .
, 6 } → R being a probability mass function ( pmf ) of a dice roll , or a signal ξ : { 0 , . . . , 27 }^2 → R being a 28 × 28 grayscale image . We denote the set of all signals on X by S_X . Partition / abstraction . We use a partition P of a set X to denote an abstraction of X ; we call a cell C ∈ P an ( abstracted ) concept . The intuition is simple : a partition of a set renders a “ coarse-grained view ” of the set , or more precisely , an equivalence relation on the set . In this view , we identify equivalence classes of elements ( concepts ) instead of individual elements . For example , the partition P = { { 1 , 3 , 5 } , { 2 , 4 , 6 } } of the six outcomes of the roll of a die identifies two concepts ( odd , even ) . Rule / representation . A rule of a signal ξ : X → R is a “ coarsened ” signal r_ξ : P → R defined on a partition P of X with r_ξ ( C ) := ∑_{ x∈C } ξ ( x ) for any C ∈ P . In this paper , a rule of a signal is what we mean by a representation of a signal . If the signal is a grayscale image , a rule can be a special type of blurring or downsampling of the image ; if the signal is a probability distribution , a rule can be a pmf of the “ orbits ” of the distribution for lifted inference algorithms ( Holtzen et al. , 2019 ; Kersting , 2012 ) . More generally , we define a rule ( regardless of any signal ) over a set X by any signal on any partition of X ; accordingly , we denote the set of all rules over X by R_X := ∪_{ P ∈ { all partitions of X } } S_P . Partition lattice . Abstractions are hierarchical : one coarse-grained view can be coarser than another . Let the partition lattice ( P_X , ⪯ ) of a set X be the partially ordered set ( poset ) containing all partitions of X equipped with the partial order coarser than ( ⪯ ) , or finer than ( ⪰ ) , defined in the standard way . Let the finest partition be { { x } | x ∈ X } and the coarsest partition be { X } . Per general lattice theory ( Davey & Priestley , 2002 ) , P_X is a complete lattice : every subset 𝒫 ⊆ P_X has a unique supremum ∨𝒫 and a unique infimum ∧𝒫 , where ∨𝒫 is called the join of 𝒫 , denoting its coarsest common refinement , and ∧𝒫 is called the meet of 𝒫 , denoting its finest common coarsening . Information lattice . The information lattice ( R_ξ , ⇐ ) of a signal ξ : X → R is the poset of all rules of ξ equipped with the partial order more general than : for any two rules r , r′ ∈ R_ξ , we say r is more general than r′ ( or r′ is more specific ) , denoted r ⇐ r′ , if domain ( r ) ⪯ domain ( r′ ) . Notably , R_ξ ⊆ R_X and R_ξ is isomorphic to the underlying partition lattice via the projection defined below . Projection and lifting . For any signal ξ ∈ S_X , we define the projection operator ↓_ξ : P_X → R_ξ by letting ↓_ξ ( P ) be the rule of ξ on P . One can check that ↓_ξ : ( P_X , ⪯ ) → ( R_ξ , ⇐ ) is an isomorphism . Conversely , we define the general lifting operator ⇑_X : R_X → 2^{ S_X } by letting ⇑_X ( r ) denote the set of all signals that satisfy the rule r , i.e. , ⇑_X ( r ) := { ξ ∈ S_X | ↓_ξ ( domain ( r ) ) = r } ⊆ S_X . To make lifting unique and per the Principle of Indifference ( Eva , 2019 ) , we introduce a special lifting ↑_X ( r ) to pick the most “ uniform ” signal in ⇑_X ( r ) . Formally , define ‖ · ‖_q : S_X → R by ‖ξ‖_q := ( ∑_{ x∈X } ξ ( x )^q )^{ 1/q } . For any ξ , ξ′ ∈ S_X satisfying ‖ξ‖_1 = ‖ξ′‖_1 , we say that ξ is more uniform than ξ′ ( or ξ′ is more deterministic ) if ‖ξ‖_2 ≤ ‖ξ′‖_2 . We define the ( special ) lifting operator ↑_X : R_X → S_X by ↑_X ( r ) := argmin_{ ξ ∈ ⇑_X ( r ) } ‖ξ‖_2 ( this can be computed by simply averaging ) .
Notation here follows the convention for function projections to quotient spaces ( Kondor & Trivedi , 2018 ) . Lifting a single rule to the signal domain can be extended in two ways : ( a ) lift to a finer rule domain P instead of X , i.e. , ⇑_P ( r ) or ↑_P ( r ) ; ( b ) lift more than one rule . Accordingly , we write ⇑ := ⇑_X and ↑ := ↑_X as defaults , write R = ↓_ξ ( 𝒫 ) := { ↓_ξ ( P ) | P ∈ 𝒫 } ⊆ R_ξ to denote a rule set , and write ⇑ ( R ) := ∩_{ r∈R } ⇑ ( r ) = { η ∈ S_X | ↓_η ( 𝒫 ) = R } and ↑ ( R ) := argmin_{ η ∈ ⇑ ( R ) } ‖η‖_2 to denote the signals that satisfy all rules in R ( general lifting ) and the most uniform one ( special lifting ) , respectively . More computational details on lifting and its intimate relation to join are in Appendix C .
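A small self-contained sketch of projection and the special lifting for a finite signal, following the definitions above; the dictionary-based encoding of signals and partitions is an assumption made purely for illustration.

def project(signal, partition):
    # Projection: the rule r_xi(C) = sum_{x in C} xi(x) for each cell C of the partition.
    return {cell: sum(signal[x] for x in cell) for cell in partition}

def special_lift(rule):
    # Special lifting: the most uniform signal satisfying the rule, obtained by
    # spreading each cell's total mass evenly over the cell (simple averaging).
    lifted = {}
    for cell, mass in rule.items():
        for x in cell:
            lifted[x] = mass / len(cell)
    return lifted

# Toy example: a die-roll pmf and the odd/even partition.
xi = {1: 0.1, 2: 0.2, 3: 0.1, 4: 0.2, 5: 0.1, 6: 0.3}
P = [frozenset({1, 3, 5}), frozenset({2, 4, 6})]
rule = project(xi, P)        # mass 0.3 on the odd cell, 0.7 on the even cell
eta = special_lift(rule)     # 0.1 on each odd face, 0.7 / 3 on each even face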
The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions on the data which admit a compact definition. Compact definitions are those that are formed by composition of a small number of predefined (prior) set of mathematical operations. Projection and lifting operations are defined to relate descriptions of partition cells to one another through rules. The quality of a description is measured by the divergence between the data and the (special) lifting of the rule set, under the constraint that rules satisfy an upper bound on their entropy.
SP:40cba7b6c04d7e44709baed351382c27fa89a129
Don't be picky, all students in the right family can learn from good teachers
1 INTRODUCTION . Recently-developed deep learning models have achieved remarkable performance in a variety of tasks . However , breakthroughs leading to state-of-the-art ( SOTA ) results often rely on very large models : GPipe , Big Transfer and GPT-3 use 556 million , 928 million and 175 billion parameters , respectively ( Huang et al. , 2019 ; Kolesnikov et al. , 2020 ; Brown et al. , 2020 ) . Deploying these models on user devices ( e.g . smartphones ) is currently impractical as they require large amounts of memory and computation ; and even when large devices are an option ( e.g . GPU clusters ) , the cost of large-scale deployment ( e.g . continual inference ) can be very high ( Cheng et al. , 2017 ) . Additionally , target hardware does not always natively or efficiently support all operations used by SOTA architectures . The applicability of these architectures is , therefore , severely limited , and workarounds using smaller or simplified models lead to a performance gap between the technology available at the frontier of deep learning research and that usable in industry applications . In order to bridge this gap , Knowledge Distillation ( KD ) emerges as a potential solution , allowing small student models to learn from , and emulate the performance of , large teacher models ( Hinton et al. , 2015a ) . The student model can be constrained in its size and type of operations used , so that it will satisfy the requirements of the target computational environment . Unfortunately , successfully achieving this in practice is extremely challenging , requiring extensive human expertise . For example , while we know that the architecture of the student is important for distillation ( Liu et al. , 2019b ) , it remains unclear how to design the optimal network given some hardware constraints . With Neural Architecture Search ( NAS ) it is possible to discover an optimal student architecture . NAS automates the choice of neural network architecture for a specific task and dataset , given a search space of architectures and a search strategy to navigate that space ( Pham et al. , 2018 ; Real et al. , 2017 ; Liu et al. , 2019a ; Carlucci et al. , 2019 ; Zela et al. , 2018 ; Ru et al. , 2020 ) . One im- portant limitation of most NAS approaches is that the search space is very restricted , with a high proportion of resources spent on evaluating very similar architectures , thus rendering the approach limited in its effectiveness ( Yang et al. , 2020 ) . This is because traditional NAS approaches have no tools for distinguishing between architectures that are similar and architectures that are very different ; as a consequence , computational resources are needed to compare even insignificant changes in the model . Conversely , properly exploring a large space requires huge computational resources : for example , recent work by Liu et al . ( 2019b ) investigating how to find the optimal student requires evaluating 10 , 000 models . By focusing on the comparison between distributions we ensure to use computational resources only on meaningful differences , thus performing significantly more efficiently : we evaluate 33× less architectures than the most related work to ours ( Liu et al. , 2019b ) . To overcome these limitations , we propose an automated approach to knowledge distillation , in which we look for a family of good students rather than a specific model . 
We find that even though our method , AutoKD , does not output one specific architecture , all architectures sampled from the optimal family of students perform well when trained with KD . This reformulation of the NAS problem provides a more expressive search space containing very diverse architectures , thus increasing the effectiveness of the search procedure in finding good student networks . Our contributions are as follows : ( A ) a framework for combining KD with NAS and effectively emulate large models while using a fraction of the memory and of the parameters ; ( B ) By searching for an optimal student family , rather than for specific architectures , our algorithm is up to 20x more sample efficient than alternative NAS-based KD solutions ; ( C ) We significantly outperform advanced KD methods on a benchmark of vision datasets , despite using the traditional KD loss , showcasing the efficacy of our found students . 2 RELATED WORK . Model compression has been studied since the beginning of the machine learning era , with multiple solutions being proposed ( Choudhary et al. , 2020 ; Cheng et al. , 2017 ) . Pruning based methods allow the removal of non-essential parameters from the model , with little-to-none drop in final performance . The primary motive of these approaches was to reduce the storage requirement , but they can also be used to speed up the model ( LeCun et al. , 1990 ; Han et al. , 2015 ; Li et al. , 2016a ) . The idea behind quantization methods is to reduce the number of bits used to represent the weights and the activations in a model ; depending on the specific implementation this can lead to reduced storage , reduced memory consumption and a general speed-up of the network ( Fiesler et al. , 1990 ; Soudry et al. , 2014 ; Rastegari et al. , 2016 ; Zhu et al. , 2016 ) . In low rank factorization approaches , a given weight matrix is decomposed into the product of smaller ones , for example using singular value decomposition . When applied to fully connected layers this leads to reduced storage , while when applied to convolutional filters , it leads to faster inference ( Choudhary et al. , 2020 ) . All the above mentioned techniques can successfully reduce the complexity of a given model , but are not designed to substitute specific operations . For example , specialized hardware devices might only support a small subset of all the operations offered by modern deep learning frameworks . In Knowledge Distillation approaches , a large model ( the teacher ) distills its knowledge into a smaller student architecture ( Hinton et al. , 2015b ) . This knowledge is assumed to be represented in the neural network ’ s output distribution , hence in the standard KD framework , the output distribution of a student ’ s network is optimized to match the teacher ’ s output distribution for all the training data ( Yun et al. , 2020 ; Ahn et al. , 2019 ; Yuan et al. , 2020 ; Tian et al. , 2020 ; Tung & Mori , 2019 ) . The work of Liu et al . ( 2019b ) shows that the architecture of a student network is a contributing factor in its ability to learn from a given teacher . The authors propose combining KD with a traditional NAS pipeline , based on Reinforcement Learning , to find the optimal student . While this setup leads to good results , it does so at a huge computational cost , requiring over 5 days on 200 TPUs . 
Similarly , Gu & Tresp ( 2020 ) also look for the optimal student architecture , but do so by searching for a subgraph of the original teacher ; therefore , it can not be used to substitute unsupported operations . Orthogonal approaches , looking at how KD can improve NAS , are explored by Trofimov et al . ( 2020 ) and Li et al . ( 2020 ) . The first establishes that KD improves the correlation between different budgets in multi-fidelity methods , while the second uses the teacher supervision to search the architecture in a blockwise fashion . 3 SEARCHING FOR THE OPTIMAL STUDENT NETWORK GENERATOR . The AutoKD framework ( Fig . 1 ) combines Bayesian Optimization ( BO ) , Neural Architecture Search ( NAS ) and Knowledge Distillation ( KD ) . AutoKD defines a family of random network generators G ( θ ) parameterized by a hyperparameter θ , from where student networks are sampled . BO uses a surrogate model to propose generator hyperparameters , while students from these generators are trained with KD using a state-of-the-art teacher network . The student performances are evaluated and provided as feedback to update the BO surrogate model . To improve our BO surrogate model , the search procedure is iterated , until the best family of student networks G ( θ∗ ) is selected . In this section we specify all components of AutoKD . See also Fig . 1 and Algorithm 1 for an overview . 3.1 KNOWLEDGE DISTILLATION . Knowledge Distillation ( KD ; Hinton et al. , 2015b ) is a method to transfer , or distill , knowledge from one model to another—usually from a large model to small one—such that the small student model learns to emulate the performance of the large teacher model . KD can be formalized as minimizing the objective function : LKD = ∑ xi∈X l ( fT ( xi ) , fS ( xi ) ) ( 1 ) where l is the loss function that measures the difference in performance between the teacher fT and the student fS , xi is the ith input , yi is the ith target . The conventional loss function l used in practice is a linear combination of the traditional cross entropy loss LCE and the Kullback–Leibler divergence LKL of the pre-softmax outputs for fT and fS : l = ( 1− α ) LCE + αLKL ( σ ( fT ( xi ) /τ ) , σ ( fS ( xi ) /τ ) ) ( 2 ) where σ is the softmax function σ ( x ) = 1/ ( 1 + exp ( −x ) ) , and τ is the softmax temperature . Hinton et al . ( 2015b ) propose “ softening ” the probabilities using temperature scaling with τ ≥ 1 . The parameter α represents the weight trade-off between the KL loss and the cross entropy loss LCE . The LKD loss is characterized by the hyper-parameters : α and τ ; popular choices are τ ∈ { 3 , 4 , 5 } and α = 0.9 ( Huang & Wang , 2017 ; Zagoruyko & Komodakis , 2016 ; Zhu et al. , 2018 ) . Numerous other methods ( Polino et al. , 2018 ; Huang & Wang , 2017 ; Tung & Mori , 2019 ) can be formulated as a form of Equation ( 2 ) , but in this paper we use the conventional loss function l. Traditionally in KD , both the teacher and the student network have predefined architectures . In contrast , AutoKD defines a search space of student network architectures and finds the optimal student by leveraging neural architecture search , as detailed below . 3.2 STUDENT SEARCH VIA GENERATOR OPTIMIZATION . Most NAS method for vision tasks employ a cell-based search space , where networks are built by stacking building blocks ( cells ) and the operations inside the cell are searched ( Pham et al. , 2018 ; Real et al. , 2017 ; Liu et al. , 2019a ) . 
This results in a single architecture being output by the NAS procedure . In contrast , more flexible search spaces have recently been proposed that are based on Algorithm 1 : AutoKD 1 : Input : Network generator G , BOHB hyperparameters ( η , training budget bmin and bmax ) , Evaluation function fKD ( θ , b ) which assesses the validation performance of a generator hyperparameterθ by sampling an architecture from G ( θ ) and training it with the KD loss LKD ( equations 1 and 2 ) for b epochs . 2 : smax = blogη bmaxbmin c ; 3 : for s ∈ { smax , smax − 1 , . . . , 0 } do 4 : Sample M = d smax+1s+1 · η se generator hyperparameters Θ = { θj } Mj=1 which maximises the raito of kernel density estimators ; . ( Falkner et al. , 2018 , Algorithm 2 ) 5 : Initialise b = ηs · bmax ; . Run Successive Halving ( Li et al. , 2016b ) 6 : while b ≤ bmax do 7 : L = { fKD ( θ , b ) : θ ∈ Θ } ; 8 : Θ = top k ( Θ , L , b|Θ|/ηc ) ; 9 : b = η · b ; 10 : end while 11 : end for 12 : Obtain the best performing configuration θ∗ for the student network generator . 13 : Sample k architectures from G ( θ∗ ) , train them to completion , and obtain test performance . neural network generators ( Xie et al. , 2019 ; Ru et al. , 2020 ) . The generator hyperparameters define the characteristics of the family of networks being generated . NAGO optimizes an architecture generator instead of a single architecture and proposes a hierarchical graph-based space which is highly expressive yet low-dimensional ( Ru et al. , 2020 ) . Specifically , the search space of NAGO comprises three levels of graphs ( where the node in the higher level is a lower-level graph ) . The top level is a graph of cells ( Gtop ) and each cell is itself a graph of middlelevel modules ( Gmid ) . Each module further corresponds to a graph of bottom-level operation units ( Gbottom ) such as a relu-conv3×3-bn triplet . NAGO adopts three random graph generators to define the connectivity/topology of Gtop , Gmid and Gbottom respectively , and thus is able to produce a wide variety of architectures with only a few generator hyperparameters . AutoKD employs NAGO as the NAS backbone for finding the optimal student family . Our pipeline consists of two phases . In the first phase ( search ) , a multi-fidelity Bayesian optimisation technique , BOHB ( Falkner et al. , 2018 ) , is employed to optimise the low-dimensional search space . BOHB uses partial evaluations with smaller-than-full budget to exclude bad configurations early in the search process , thus saving resources to evaluate more promising configurations . Given the same time constraint , BOHB evaluates many more configurations than conventional BO which evaluates all configurations with full budget . As Ru et al . ( 2020 ) empirically observe that good generator hyperparameters lead to a tight distribution of well-performing architectures ( small performance standard deviation ) , we similarly assess the performance of a particular generator hyperparameter value with only one architecture sample . In the second phase ( retrainA ) , AutoKD uniformly samples multiple architectures from the optimal generator found during the search phase and evaluates them with longer training budgets to obtain the best architecture performance . Instead of the traditionally used cross-entropy loss , AutoKD uses the KD loss in equation 2 to allow the sampled architecture to distill knowledge from its teacher . 
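As an illustration of the conventional KD objective in Equations ( 1 ) and ( 2 ) , a minimal NumPy sketch is given below . It is not the authors ' implementation : σ is taken to be the standard softmax over class logits , and the default values of α and τ are placeholders drawn from the popular choices cited above ( τ ∈ { 3 , 4 , 5 } , α = 0.9 ) .

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, alpha=0.9, tau=4.0):
    """Conventional KD loss of Eq. (2): (1 - alpha) * L_CE + alpha * L_KL,
    where L_KL compares temperature-softened teacher and student outputs.
    Shapes: logits are (batch, classes); labels is (batch,) of integer classes."""
    eps = 1e-12
    # Cross entropy between the student's predictions and the hard labels.
    p_student = softmax(student_logits)
    ce = -np.log(p_student[np.arange(len(labels)), labels] + eps).mean()
    # KL divergence KL(teacher || student) on temperature-scaled distributions.
    p_teacher_t = softmax(teacher_logits / tau)
    p_student_t = softmax(student_logits / tau)
    kl = (p_teacher_t * (np.log(p_teacher_t + eps) - np.log(p_student_t + eps))).sum(axis=1).mean()
    return (1.0 - alpha) * ce + alpha * kl
```

A common variant additionally scales the KL term by τ² to keep its gradient magnitude comparable across temperatures ; whether AutoKD applies this scaling is not stated in the excerpt above .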
The KD hyperparameters temperature τ and loss weight α are included in the search space and optimized simultaneously with the architecture to ensure that the student architectures can efficiently distill knowledge both from the designated teacher and the data distribution . A full overview of the framework is shown in Fig . 1 .
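The budget schedule of Algorithm 1 can be made concrete with a short sketch . Note that line 5 of the pseudo-code reads b = η^s · bmax , which with s counting down from smax would exceed bmax ; the standard Hyperband/BOHB schedule initialises b = η^(−s) · bmax so that budgets grow toward bmax , and the sketch below assumes that form . The numeric values of bmin , bmax and η are illustrative only .

```python
import math

def bohb_schedule(b_min, b_max, eta=3):
    """Successive-halving brackets as used by BOHB (Falkner et al., 2018) and
    Algorithm 1: per bracket s, the number of sampled generator hyperparameters
    and the sequence of (configurations, training budget) rounds."""
    # Small epsilon guards against floating-point error in the logarithm.
    s_max = int(math.floor(math.log(b_max / b_min, eta) + 1e-9))
    brackets = []
    for s in range(s_max, -1, -1):
        # Number of generator hyperparameters sampled for this bracket (line 4).
        n = math.ceil((s_max + 1) / (s + 1) * eta ** s)
        rounds = []
        for i in range(s + 1):                  # successive halving (lines 6-10)
            b = b_max / eta ** (s - i)          # budget grows by a factor eta per round
            rounds.append((n, round(b, 2)))     # n configurations evaluated with budget b
            n = max(1, n // eta)                # keep the top floor(n / eta) configurations
        brackets.append({"s": s, "rounds": rounds})
    return brackets

# Example with illustrative budgets of 10 to 90 epochs and eta = 3.
for bracket in bohb_schedule(b_min=10, b_max=90, eta=3):
    print(bracket)
# The first bracket samples 9 generator settings, trains each for 10 epochs,
# keeps the top third for 30 epochs, and the best one for the full 90 epochs.
```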
This paper proposes searching for an architecture generator that outputs good student architectures for a given teacher. The authors claim that by learning the parameters of the generator, instead of searching the architecture space directly, it is possible to explore the space of architectures more effectively, increasing the diversity of the architectures explored. They show that this approach, combined with the standard knowledge distillation loss, is able to learn good student architectures while requiring substantially fewer samples and achieving competitive performance compared to other knowledge distillation algorithms.
SP:1ee00313e354c4594bbf6cf8bdbe33e3ec8df62f
Towards Counteracting Adversarial Perturbations to Resist Adversarial Examples
1 INTRODUCTION . Deep neural networks ( DNNs ) have become the dominant approach for various tasks including image understanding , natural language processing and speech recognition ( He et al. , 2016 ; Devlin et al. , 2018 ; Park et al. , 2018 ) . However , recent studies demonstrate that neural networks are vulnerable to adversarial examples ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) . That is , these network models make an incorrect prediction with high confidence for inputs that are only slightly different from correctly predicted examples . This reveals a potential threat to neural network-based artificial intelligence systems , many of which have been widely deployed in real-world applications . The adversarial vulnerability of neural networks reveals fundamental blind spots in the learning algorithms . Even with advanced learning and regularization techniques , neural networks are not learning the true underlying distribution of the training data , although they can obtain extraordinary performance on test sets . This phenomenon is now attracting much research attention . There have been increasing studies attempting to explain neural networks ’ adversarial vulnerability and develop methods to resist adversarial examples ( Madry et al. , 2018 ; Zhang et al. , 2020 ; Pang et al. , 2020 ) . While much progress has been made , most existing studies remain preliminary . Because it is difficult to construct a theoretical model to explain the adversarial perturbation generating process , defending against adversarial attacks is still a challenging task . Existing methods of resisting adversarial perturbations perform defense either at training time or inference time . Training time defense methods attempt to increase model capacity to improve adversarial robustness . One of the commonly used methods is adversarial training ( Szegedy et al. , 2014 ) , in which a mixture of adversarial and clean examples are used to train the neural network . The adversarial training method can be seen as minimizing the worst case loss when the training example is perturbed by an adversary ( Goodfellow et al. , 2015 ) . Adversarial training requires an adversary to generate adversarial examples in the training procedure . This can significantly increase the training time . Adversarial training also results in reduced performance on clean examples . Lamb et al . ( 2019 ) recently introduced interpolated adversarial training ( IAT ) that incorporates interpolation-based training into the adversarial training framework . The IAT method helps to improve performance on clean examples while maintaining adversarial robustness . As to inference time defense methods , the main idea is to transfer adversarial perturbations such that the obtained inputs are no longer adversarial . Tabacof & Valle ( 2016 ) studied the use of random noise such as Gaussian noise and heavy-tail noise to resist adversarial perturbations . Xie et al . ( 2018 ) introduced to apply two randomization operations , i.e. , random resizing and random zero padding , to inputs to improve adversarial robustness . Guo et al . ( 2018 ) investigated the use of random cropping and rescaling to transfer adversarial perturbations . More recently , Pang et al . ( 2020 ) proposed the mixup inference method that uses the interpolation between the input and a randomly selected clean image for inference . This method can shrink adversarial perturbations somewhat by the interpolation operation . 
Inference time defense methods can be directly applied to off-the-shelf network models without retraining or finetuning them . This can be much efficient as compared to training time defense methods . Though adversarial perturbations are not readily perceivable by a human observer , it is suggested that adversarial examples are outside the natural image manifold ( Hu et al. , 2019 ) . Previous studies have suggested that adversarial vulnerability is caused by the locally unstable behavior of classifiers on data manifolds ( Fawzi et al. , 2016 ; Pang et al. , 2018 ) . Pang et al . ( 2020 ) also suggested that adversarial perturbations have the locality property and could be resisted by breaking the locality . Existing inference time defense methods mainly use stochastic transformations such as mixup and random cropping and rescaling to break the locality . In this research , we observe that applying small perturbations generated for non-predicted class labels to the adversarial example helps to counteract the adversarial effect . Motivated by this observation , we propose a method that employs the use of small perturbations to counteract adversarial perturbations . In the proposed method , we generate small perturbation using local first-order gradient information for a number of randomly selected class lables . These small perturbations are added together and projected onto a specified space before finally applying to the adversarial example . Our method can be used as a preliminary step before applying existing inference time defense methods . To the best of our knowledge , this is the first research on using local first-order gradient information to resist adversarial perturbations . Successful attack methods such as projected gradient descent ( PGD ) ( Madry et al. , 2018 ) usually use local gradient to obtain adversarial perturbations . Compared to random transformations , it would be more effective to use local gradient to resist adversarial perturbations . We show through experiments that our method is effective and complementary to random transformation-based methods to improve defense performance . The contributions of this paper can be summarized as follows : • We propose a method that uses small first-order perturbations to defend against adversarial attacks . We show that our method is effective in counteracting adversarial perturbations and improving adversarial robustness . • We evaluate our method on CIFAR-10 and CIFAR-100 against PGD attacks in different settings . The experimental results demonstrate that our method significantly improves the defense performance of the baseline methods against both untargeted and targeted attacks and that it performs well in resisting strong adversarial examples generated using more iterations . 2 PRELIMINARY . 2.1 ADVERSARIAL EXAMPLES . We consider a neural network f ( · ) with parameters θ that outputs a vector of probabilities for L = { 1 , 2 , ... , l } categories . In supervised learning , empirical risk minimization ( ERM ) ( Vapnik , 1998 ) has been commonly used as the principle to optimize the parameters on a training set . Given an input x , the neural network makes a prediction c ( x ) = argmaxj∈L fj ( x ) . The prediction is correct if c ( x ) is the same as the actual target c∗ ( x ) . Unfortunately , ERM trained neural networks are vulnerable to adversarial examples , inputs formed by applying small but intentionally crafted perturbations ( Szegedy et al. , 2014 ; Madry et al. , 2018 ) . 
That is , an adversarial example x′ is close to a clean example x under a distance metric , e.g. , ℓ∞ distance , but the neural network outputs an incorrect result for the adversarial example x′ with high confidence . In most cases , the difference between the adversarial example and clean example is not readily recognizable to humans . 2.2 ATTACK METHODS . Existing attack methods can be categorized into white-box attacks and black-box attacks . We focus on defending against white-box attacks , wherein the adversary has full access to the network model including the architecture and weights . The fast gradient sign ( FGSM ) method ( Goodfellow et al. , 2015 ) and PGD are two successful optimization-based attack methods . The FGSM method is a one-step attack method . It generates adversarial perturbations that yield the highest loss increase in the gradient sign direction . Let x be the input to a network model , y the label associate with x and L ( θ , x , y ) be the loss function for training the neural network . The FGSM method generates a max-norm constrained perturbation as follows : η = εsign ( ∇xL ( θ , x , y ) ) , ( 1 ) where ε denotes the max-norm . This method was developed based on the view that the primary cause of neural networks ’ adversarial vulnerability is their linear nature . The required gradient can be computed efficiently using backpropagation . The PGD method is a multistep attack method that iteratively applies projected gradient descent on the negative loss function ( Kurakin et al. , 2016 ) as follows : xt+1 = Πx+S ( x t + αsign ( ∇xtL ( θ , xt , y ) ) ) , ( 2 ) where α denotes the step size and Π denotes the projection operator that projects the perturbed input onto x+ S. We consider projecting the perturbed input onto a predefined ℓ∞ ball from the original input . The PGD attack method can be seen as a multistep FGSM method . It is a much strong adversary that reliably causes a variety of neural networks to misclassify their input . 3 METHODOLOGY While many studies have been conducted on defending against adversarial attacks at inference time , these studies have not considered using local gradient information to resist adversarial perturbations . Previous work has suggested that the primary cause of neural networks ’ adversarial vulnerability is their linear nature ( Goodfellow et al. , 2015 ) . It would be more effective to use first-order gradient information to counteract adversarial perturbations such that the resulted perturbations no longer result in the model making an incorrect prediction . Adversarial perturbations are small crafted perturbations that slightly affect the visual quality of inputs but cause the neural network to misclassify the inputs in favor of an incorrect answer with high probability . We show that this effect can be counteracted by applying small perturbations generated using local first-order gradient information for class labels other than the predicted one . An illustration of this phenomenon is shown in Figure 1 . We see that by adding perturbations generated for non-predicted labels to the input , the prediction probability for the correct category increases and that for the incorrect label is suppressed . Algorithm 1 Counteracting adversarial perturbations using local first-order gradient . Input : Neural network f ; input x ; step size α used in PGD to generate perturbations to counteract the adver- sarial perturbation . Output : Prediction result for x . 1 : Randomly select N class labels { l1 , l2 , ... 
, lN } ; 2 : for i = 1 to N do 3 : ηi = PGD ( li , α , step=1 ) // generate perturbation ηi for li using the one-step PGD method . 4 : end for 5 : x = x+ΠC ( ∑N i=1 ηi ( x ) ) // C is a ℓ∞ bounded space . 6 : return f ( x ) . Based on this phenomenon , we propose a method of counteracting adversarial perturbations to improve adversarial robustness . In the proposed method , we generate small perturbations for a number of randomly selected class labels and apply these perturbations to the input to resist the adversarial perturbation . Let x be the input to a model , which can be an adversarial or clean example . We randomly select N class labels and generate small first-order perturbations for the N selected labels . These N small perturbations are added together and then projected onto a ℓ∞-bounded space before applying to the input . This procedure can be formulated as follows : x̃ = x+ΠC ( N∑ i=1 ηi ( x ) ) , ( 3 ) where ηi ( x ) denotes the small perturbation generated for the i-th selected class label , C = { t| ∥t− x∥∞ ≤ µ } is a µ bounded ℓ∞ space . The one-step PGD method is used to generate small perturbations . This is the same as using the FGSM method and empirically achieves better performance than using multiple steps . The perturbations can be generated in an untargeted or targeted manner . The combined perturbation is projected onto the space C. This ensures that the obtained example is visually similar to the original one . We detail the procedure for counteracting adversarial perturbations in Algorithm 1 . Discussion and Analysis Adversarial examples exposes underlying flaws in the training algorithms . While much progress has been made in defending against adversarial attacks , it is difficult to theoretically understand neural networks ’ vulnerability to adversarial examples . Previous work ( Athalye et al. , 2018 ) has suggested that the adversarial perturbation δ can be obtained by solving the following optimization problem : min ∥δ∥p , s.t . c ( x+ δ ) ̸= c∗ ( x ) , ∥δ∥p ≤ ξ , ( 4 ) where ξ is a hyperparameter constraining the size of the perturbation . This problem can be effectively solved by gradient descent-based attack methods such as PGD and FGSM that reliably cause neural networks to output an incorrect result . These attack methods typically use local first-order gradient to find the optimal solution . Because state-of-the-art neural networks usually have many parameters , perturbations obtained with these attack methods may overfit to the inputs . Therefore , perturbing and transferring these adversarial perturbations could be an effective way to resist the adversarial effect . Unlike previous random transformation-based methods , we employ the use of local first-order gradient information to counteract the adversarial effect . We show that the proposed method is effective in improving defense performance , especially against strong adversarial examples generated using more iterations . Let x0 be a clean example and δ be the adversarial perturbation . In our method , the following input is fed to the neural network : x0 + δ · 1z ( x0 ) +ΠC ( N∑ i=1 ηi ( x0 ) ) , where 1z ( x0 ) = { 0 , x0 is not subject to adversarial attack , 1 , x0 is subject to adversarial attack . ( 5 ) The perturbation ηi generated to counteract the adversarial perturbation should be small , otherwise it would be a new adversarial perturbation . This would essentially have no effect in counteracting the adversarial perturbation . 
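A minimal sketch of Algorithm 1 and Equation ( 3 ) is given below . It is not the authors ' code : loss_grad is a hypothetical helper returning ∇x L ( θ , x , label ) , the step size and projection radius are illustrative , and the sign convention shown is the untargeted form ( for targeted generation the gradient step would instead descend the loss for the selected label ) .

```python
import numpy as np

def counteract(x, model, num_classes, loss_grad, N=5, alpha=2/255, mu=8/255, rng=None):
    """Counteracting adversarial perturbations (Algorithm 1 / Eq. 3):
    generate a one-step PGD perturbation for each of N randomly selected class
    labels, sum them, project the sum onto the l_inf ball of radius mu, and add
    the result to the input before classification.  `loss_grad(x, label)` is
    assumed to return dL(theta, x, label)/dx; all names are illustrative."""
    rng = rng or np.random.default_rng()
    labels = rng.choice(num_classes, size=N, replace=False)  # randomly selected class labels
    eta_sum = np.zeros_like(x)
    for l in labels:
        # One-step PGD (equivalent to an FGSM step) perturbation for label l.
        eta_sum += alpha * np.sign(loss_grad(x, l))
    # Projection Pi_C onto C = {t : ||t - x||_inf <= mu}, i.e. clip the combined
    # perturbation to [-mu, mu] before adding it to the input.
    x_tilde = x + np.clip(eta_sum, -mu, mu)
    return model(x_tilde)  # prediction for the counteracted input
```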
Adversarial training , which has been shown to be effective in improving adversarial robustness , usually employs a first-order adversary such as PGD to provide adversarial examples for training . These adversarial examples help to regularize the model to be resistant to adversarial perturbations . We show through experiments that our method is complementary to adversarial training in improving overall defense performance against both untargeted and targeted attacks . The proposed method is applied at inference time . It can be directly applied to off-the-shelf models without retraining or finetuning them . The required gradients for generating the small perturbations can be computed efficiently in parallel using backpropagation , so the method adds little extra inference time .
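For reference , the PGD attack of Equation ( 2 ) , against which the defense is evaluated , can be sketched as follows . The random start inside the ℓ∞ ball is a common variant not written in Equation ( 2 ) , loss_grad is a hypothetical helper returning ∇x L ( θ , x , y ) , and the radius , step size and iteration count are illustrative .

```python
import numpy as np

def pgd_attack(x, y, loss_grad, eps=8/255, alpha=2/255, steps=10, rng=None):
    """Projected gradient descent attack (Eq. 2): iterated signed gradient-ascent
    steps, each followed by projection onto the l_inf ball of radius eps around
    the clean input x."""
    rng = rng or np.random.default_rng()
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)    # common random start
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv, y))
        x_adv = x + np.clip(x_adv - x, -eps, eps)       # projection onto x + S
        x_adv = np.clip(x_adv, 0.0, 1.0)                # keep a valid image range
    return x_adv
```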
The paper proposes a defense that works by adding multiple targeted adversarial perturbations (with random classes) on the input sample before classifying it. There is little theoretical reasoning for why this is a sensible defense. More importantly though, the defense is only evaluated in an oblivious threat model where the attacker is unaware of the defense mechanism. As has been argued again and again in the literature and in community guidelines such as [1, 2], the oblivious threat model is trivial and yields absolutely no insights into the effectiveness of a defense (e.g. you can just manipulate the backpropagated gradient in random ways to prevent any gradient-based attack from finding adversarial perturbations). The problem with oblivious attacks is clearly visible in the results section where more PGD iterations are less effective than fewer iterations - a clear red flag that the evaluation is ineffective. The paper also fails to point out that Pang et al. 2020, one of the methods they combine their method with, has been shown to be ineffective [2].
SP:eea3b3ec32cce61d6b6df8574cf7ce9376f2230a
Defuse: Debugging Classifiers Through Distilling Unrestricted Adversarial Examples
1 INTRODUCTION . Debugging machine learning ( ML ) models is a critical part of the ML development life cycle . Uncovering bugs helps ML developers make important decisions about both development and deployment . In practice , much of debugging uses aggregate test statistics ( like those in leader board style challenges [ Rajpurkar et al . ( 2016 ) ] ) and continuous evaluation and monitoring post deployment [ Liberty et al . ( 2020 ) , Simon ( 2019 ) ] . However , additional issues arise with over-reliance on test statistics . For instance , aggregate statistics like held out test accuracy are known to overestimate generalization performance [ Recht et al . ( 2019 ) ] . Further , statistics offer little insight nor remedy for specific model failures [ Ribeiro et al . ( 2020 ) ; Wu et al . ( 2019 ) ] . Last , reactive debugging of failures as they occur in production does little to mitigate harmful user experiences [ La Fors et al . ( 2019 ) ] . Several techniques exist for identifying undesirable behavior in machine learning models . These methods include explanations [ Ribeiro et al . ( 2016 ) ; Slack et al . ( 2020b ) ; Lakkaraju et al . ( 2019 ) ; Lundberg & Lee ( 2017 ) ] , fairness metrics [ Feldman et al . ( 2015 ) , Slack et al . ( 2020a ) ] , data set replication [ Recht et al . ( 2019 ) ; Engstrom et al . ( 2020 ) ] , and behavioral testing tools [ Ribeiro et al . ( 2020 ) ] . However , these techniques do not provide methods to remedy model bugs or require a high level of human supervision . To enable model designers to discover and correct model bugs beyond aggregate test statistics , we analyze unrestricted adversarial examples : instances on the data manifold that are misclassified [ Song et al . ( 2018 ) ] . We identify model bugs through diagnosing common patterns in unrestricted adversarial examples . In this work , we propose Defuse : a technique for debugging classifiers through distilling1 unrestricted adversarial examples . Defuse works in three steps . First , Defuse identifies unrestricted adversarial examples by making small , semantically meaningful changes to input data using a variational autoencoder ( VAE ) . If the classifier prediction deviates from the ground truth label on the altered instance , it returns the data instance as a potential model failure . This method employs similar techniques from [ Zhao et al . ( 2018 ) ] . Namely , small perturbations in the latent space of generative models can produce images that are misclassified . Second , Defuse distills the changes through clustering on the unrestricted adversarial example ’ s latent codes . In this way , Defuse diagnoses regions in the latent space that are problematic for the classifier . This method produces a set of 1We mean distilling in the sense of “ to extract the most important aspects of ” and do not intend to invoke the knowledge distillation literature [ Hinton et al . ( 2014 ) ] . clusters in the latent space where it is likely to find misclassified data . We call these localities failure scenarios . An annotator reviews the failure scenarios and assigns the correct label— one label per scenario . Third , Defuse corrects the model behavior on the discovered failure scenarios through optimization . Because we use a generative clustering model to describe the failure scenarios , we sample many unrestricted adversarial examples and finetune to fix the classifier . 
Critically , failure scenarios are highly useful for model debugging because they reveal high level patterns in the way the model fails . By understanding these consistent trends in model failures , model designers can more effectively understand problematic deployment scenarios for their models . To illustrate the usefulness of failure scenarios , we run Defuse on a classifier trained on MNIST and provide an overview in figure 1 . In the identification step ( first pane in figure 1 ) , Defuse generates unrestricted adversarial examples for the model . The red number in the upper right hand corner of the image is the classifier ’ s prediction . Although the classifier achieves high test set performance , we find naturally occurring examples that are classified incorrectly . Next , the method performs the distillation step ( second pane in figure 1 ) . The clustering model groups together similar failures for annotator labeling . We see that similar mistakes are grouped together . For instance , Defuse groups together a similar style of incorrectly classified eights in the first row of the second pane in figure 1 . Next , Defuse receives annotator labels for each of the clusters.2 Last , we run the correction step using both the annotator labeled data and the original training data . We see that the model correctly classifies the images ( third pane in figure 1 ) . Importantly , the model maintains its predictive performance , scoring 99.1 % accuracy after tuning . We see that Defuse enables model designers to both discover and correct naturally occurring model failures . We provide the necessary background in Defuse ( §2 ) . Next , we detail the three steps in Defuse : identification , distillation , and correction ( §3 ) . We then demonstrate the usefulness of Defuse on three image data sets : MNIST [ LeCun et al . ( 2010 ) ] , the German traffic signs data set [ Stallkamp et al . ( 2011 ) ] , and the Street view house numbers data set [ Netzer et al . ( 2011 ) ] , and find that Defuse discovers and resolves critical bugs in high performance classifiers trained on these datasets ( §4 ) . 2 NOTATION AND BACKGROUND . In this section , we establish notation and background on unrestricted adversarial examples . Though unrestricted adversarial examples can be found in many domains , we focus on Defuse applied to image classification . 2We assign label 8 to the first row in the second pane of figure 1 , label 0 to the second row , and label 6 to the third row . Unrestricted adversarial examples Let f : RN ! [ 0 , 1 ] C denote a classifier that accepts a data point x 2 X , where X is the set of legitimate images . The classifier f returns the probability that x belongs to class c 2 { 1 , ... , C } . Next , assume f is trained on a data set D consisting of d tuples ( x , y ) containing data point x and ground truth label y using loss function L. Finally , suppose there exists an oracle o : x 2 X ! { 1 , ... , C } that outputs a label for x . We define unrestricted adversarial examples as the set AN : = { x 2 X | o ( x ) 6= f ( x ) } [ Song et al . ( 2018 ) ] . Variational Autoencoders ( VAEs ) In order to discover unrestricted adversarial examples , it is necessary to model the set of legitimate images . We use a VAE to create such a model . A VAE is composed of an encoder and a decoder neural networks . These networks are used to model the relationship between data x and latent factors z 2 RK . 
Assuming x is generated by some ground-truth latent factors v ∈ RM , we wish to train a model such that the learned generative factors closely resemble the true factors : p ( x|v ) ≈ p ( x|z ) . In order to train such a model , we employ the β-VAE [ Higgins et al . ( 2017 ) ] . This technique produces an encoder q ( z|x ) that maps from the data to latent codes and a decoder pθ ( x|z ) that maps from codes to data . 3 METHODS . 3.1 FAILURE SCENARIOS . We begin by formalizing our notion of failure scenarios . Let z ∈ RK be the latent codes corresponding to image x ∈ X and q ( · ) : x → z be the encoder mapping between images and latent codes . Definition 3.1 . Failure scenario . Given a constant ε > 0 , vector norm || · || , and point z0 , a failure scenario is a set of images AR = { x ∈ X | ε > ||q ( x ) − z0|| ∧ o ( x ) ≠ f ( x ) } . Previous works that investigate unrestricted adversarial examples look for specific instances where the oracle and the model disagree [ Song et al . ( 2018 ) ; Zhao et al . ( 2018 ) ] . We instead look for regions in the latent space where this is the case . Because the latent space of the VAE tends to take on Gaussian form due to the prior , we can use Euclidean distance to define these regions . If we were to define failure scenarios on the original data manifold , we might need a much more complex distance function . Because it is likely too strict to assume that the oracle and model disagree on every instance in such a region , we also introduce a relaxation . Definition 3.2 . Relaxed failure scenario . Given a constant ε > 0 , vector norm || · || , point z0 , and threshold ρ , a relaxed failure scenario is a set of images Af = { x ∈ X | ε > ||q ( x ) − z0|| } such that | { x ∈ Af | o ( x ) ≠ f ( x ) } | / |Af | > ρ . In this work , we adopt the latter definition of failure scenarios . To concretize failure scenarios and provide evidence for their existence , we continue our MNIST example from figure 1 . We plot the t-SNE embeddings of the latent codes of 10000 images from the training set and 516 unrestricted adversarial examples created during the identification step in figure 2 ( details of how we generate unrestricted adversarial examples are in section 3.2.1 ) . We see that the unrestricted adversarial examples come from similar regions in the latent space . 3.2 DEFUSE . In this section , we introduce Defuse : our procedure for identifying and correcting classifier performance on failure scenarios . First , we explain how we identify unrestricted adversarial examples using VAEs . Next , we describe our clustering approach that distills these instances into failure scenarios . Last , we introduce our approach to correct classifier predictions on the failure scenarios . 3.2.1 IDENTIFYING UNRESTRICTED ADVERSARIAL EXAMPLES . This section describes the identification step in Defuse ( first pane in figure 1 ) . The aim of the identification step is to generate many unrestricted adversarial examples . In essence , we encode all the images from the training data . We perturb the latent codes with a small amount of noise drawn from a Beta distribution . We save instances that are classified differently from ground truth by f when decoded . By perturbing the latent codes with a small amount of noise , we expect the decoded instances to have small but semantically meaningful differences from the original instances . Thus , if the classifier prediction deviates on the perturbation , the instance is likely misclassified .
We collect the unrestricted adversarial examples generated for a single instance into a set . We generate unrestricted adversarial examples for each instance x ∈ X , producing a combined set containing the examples produced for every instance x . Pseudo-code of the algorithm for generating a single unrestricted adversarial example is given in algorithm 1 in appendix A . Our technique is related to the method for generating natural adversarial examples from [ Zhao et al . ( 2018 ) ] — a very similar but slightly different concept from unrestricted adversarial examples . The authors use a similar stochastic search method in the latent space of a GAN . They start with a small amount of noise and increase the magnitude of the noise until they find an unrestricted adversarial example . Thus , they save only the unrestricted adversarial examples that are minimally distant from a data point . They also save images that differ in prediction from the original decoded instance . Because we iterate over the entire data set , it is simpler to keep the level of noise fixed and sample a predetermined number of times . In addition , we save images that differ in ground-truth label from the original decoded instance because we seek to debug a classifier . That is , if the original instance is misclassified , we wish to save it as a model failure .
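The identification step just described can be summarised in a short sketch . The encoder , decoder and classifier callables , the Beta parameters , the noise scale and the centering of the noise are all placeholders , since the excerpt does not specify the authors ' settings .

```python
import numpy as np

def identify_failures(images, labels, encoder, decoder, classifier,
                      noise_scale=0.05, samples_per_image=10, a=2.0, b=2.0, rng=None):
    """Identification step of Defuse (Section 3.2.1), as a sketch: perturb each
    image's latent code with small Beta-distributed noise, decode, and keep the
    decoded instance whenever the classifier's prediction deviates from the
    ground-truth label."""
    rng = rng or np.random.default_rng()
    candidates = []
    for x, y in zip(images, labels):
        z = encoder(x)                                    # latent code of the clean image
        for _ in range(samples_per_image):
            # Beta noise, shifted by its mean so that perturbations are roughly zero-centred.
            noise = noise_scale * (rng.beta(a, b, size=z.shape) - a / (a + b))
            x_perturbed = decoder(z + noise)
            if classifier(x_perturbed) != y:              # prediction deviates from ground truth
                candidates.append((x_perturbed, y))       # save as a potential model failure
    return candidates
```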
The technique is described in sufficient detail and the paper is easy to read. Experiments involve three datasets: MNIST, Street View House Numbers, and German traffic signs. The experimental results show that the proposed technique finds significant failures on all datasets, including critical failure scenarios, and that after the correction step the classifiers' behavior on these failures improves.
SP:8badc3f75194e9780063af5a2f26448e41e733d4
Improving Learning to Branch via Reinforcement Learning
1 INTRODUCTION . Mixed Integer Programming ( MIP ) has been applied widely in many real-world problems , such as scheduling ( Barnhart et al. , 2003 ) and transportation ( Melo & Wolsey , 2012 ) . Branch and Bound ( B & B ) is a general and widely used paradigm for solving MIP problems ( Wolsey & Nemhauser , 1999 ) . B & B recursively partitions the solution space into a search tree and compute relaxation bounds along the way to prune subtrees that provably can not contain an optimal solution . This iterative process requires sequential decision makings : node selection : selecting the next solution space to evaluate , variable selection : selecting the variable by which to partition the solution space ( Achterberg & Berthold , 2009 ) . In this work , we focus on learning a variable selection strategy , which is the core of the B & B algorithm ( Achterberg & Wunderling , 2013 ) . Very often , instances from the same MIP problem family are solved repeatedly in industry , which gives rise to the opportunity for learning to improve the variable selection policy ( Bengio et al. , 2020 ) . Based on the human-designed heuristics , Di Liberto et al . ( 2016 ) learn a classifier that dynamically selects an existing rule to perform variable selection ; Balcan et al . ( 2018 ) consider a weighted score of multiple heuristics and analyse the sample complexity of finding such a good weight . The first step towards learning a variable selection policy was taken by Khalil et al . ( 2016 ) , who learn an instance customized policy in an online fashion , as well as Alvarez et al . ( 2017 ) and Hansknecht et al . ( 2018 ) who learn a branching rule offline on a collection of similar instances . Those methods need extensively feature engineering and require strong domain knowledge in MIP . To avoid that , Gasse et al . ( 2019 ) propose a graph convolutional neural network approach to obtain competitive performance , only requiring raw features provided by the solver . In each case , the branching policy is learned by imitating the decision of strong branching as it consistently leads to the smallest B & B trees empirically ( Achterberg et al. , 2005 ) . In this work , we argue that strong branching is not a good expert to imitate . The excellent performance ( the smallest B & B tree ) of strong branching relies mostly on the information obtained in solving branch linear programming ( LP ) rather than the decision it makes . This factor prevents learning a good policy by imitating only the decision made by strong branching . To obtain more effective and non-myopic policies , i.e . minimizing the total solving nodes rather than maximizing the immediate duality gap gap , we use reinforcement learning ( RL ) and model the variable selection process as a Markov Decision Process ( MDP ) . Though the MDP formulation for MIP has been mentioned in the previous works ( Gasse et al. , 2019 ; Etheve et al. , 2020 ) , the advantage of RL has not been demonstrated clearly in literature . The challenges of using RL are multi-fold . First , the state space is a complex search tree , which can involve hundreds or thousands of nodes ( with a linear program on each node ) and evolve over time . In the meanwhile , the objective of MIP is to solve problems faster . Hence a trade-off between decision quality and computation time is required when representing the state and designing a policy based on this state representation . Second , learning a branching policy by RL requires rolling out on a distribution of instances . 
Moreover , for each instance , the solving trajectory could contain thousands of steps and actions can have long-lasting effects . These result in a large variance in gradient estimation . Third , each step of variable selection can have hundreds of candidates . The large action set makes the exploration in MIP very hard . In this work , we address these challenges by designing a policy network inspired by primal-dual iteration and employing a novelty search evolutionary strategy ( NS-ES ) to improve the policy . For efficiency-effectiveness trade-off , the primal-dual policy ignores the redundant information and makes high-quality decisions on the fly . For reducing variance , the ES algorithm is an attractive choice as its gradient estimation is independent of the trajectory length ( Salimans et al. , 2017 ) . For exploration , we introduce a new representation of the B & B solving process employed by novelty search ( Conti et al. , 2018 ) to encourage visiting new states . We evaluate our RL trained agent over a range of problems ( namely , set covering , maximum independent set , capacitated facility location ) . The experiments show that our approach significantly outperforms stateof-the-art human-designed heuristics ( Achterberg & Berthold , 2009 ) as well as imitation based learning methods ( Khalil et al. , 2016 ; Gasse et al. , 2019 ) . In the ablation study , we compare our primal-dual policy net with GCN ( Gasse et al. , 2019 ) , our novelty based ES with vanilla ES ( Salimans et al. , 2017 ) . The results confirm that both our policy network and the novelty search evolutionary strategy are indispensable for the success of the RL agent . In summary , our main contributions are the followings : • We point out the overestimation of the decision quality of strong branching and suggest that methods other than imitating strong branching are needed to find better variable selection policy . • We model the variable selection process as MDP and design a novel policy net based on primal-dual iteration over reduced LP relaxation . • We introduce a novel set representation and optimal transport distance for the branching process associated with a policy , based on which we train our RL agent using novelty search evolution strategy and obtain substantial improvements in empirical evaluation . 2 BACKGROUND . Mixed Integer Programming . MIP is an optimization problem , which is typically formulated as minx∈Rn { cTx : Ax ≤ b , ` ≤ x ≤ u , xj ∈ Z , ∀j ∈ J } ( 1 ) where c ∈ Rn is the objective vector , A ∈ Rm×n is the constraint coefficient matrix , b ∈ Rm is the constraint vector , ` , u ∈ Rn are the variable bounds . The set J ⊆ { 1 , · · · , n } is an index set for integer variables . We denote the feasible region of x as X . Linear Programming Relaxation . LP relaxation is an important building block for solving MIP problems , where the integer constraints are removed : minx∈Rn { cTx : Ax ≤ b , ` ≤ x ≤ u } . ( 2 ) Algorithm 1 : Branch and Bound Input : A MIP P in form Equation 1 Output : An optimal solution set x∗ and optimal value c∗ 1 Initialize the problem set S : = { PLP } . where PLP is in form Equation 2 . Set x∗ = φ , c∗ =∞ ; 2 If S = φ , exit by returning x∗ and c∗ ; 3 Select and pop a LP relaxation Q ∈ S ; 4 Solve Q with optimal solution x̂ and optimal value ĉ ; 5 If ĉ ≥ c∗ , go to 2 ; 6 If x̂ ∈ X , set x∗ = x̂ , c∗ = ĉ , go to 2 ; 7 Select variable j , split Q into two subproblems Q+j and Q − j , add them to S and go to 3 ; Branch and Bound . 
LP based B & B is the most successful method in solving MIP . A typical LP based B & B algorithm for solving MIP looks as Algorithm 1 ( Achterberg et al. , 2005 ) . It consists of two major decisions : node selection , in line 3 , and variable selection , in line 7 . In this paper , we will focus on the variable selection . Given a LP relaxation and its optimal solution x̂ , the variable selection means selecting an index j . Then , branching splits the current problem into two subproblems , each representing the original LP relaxation with a new constraint xj ≤ bx̂jc for Q−j and xj ≥ dx̂je for Q + j respectively . This procedure can be visualized by a binary tree , which is commonly called search tree . We give a simple visualization in Section A.1 . Evolution Strategy . Evolution Strategies ( ES ) is a class of black box optimization algorithm ( Rechenberg , 1978 ) . In this work , we refer to the definition in Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2008 ) . NES represents the population as a distribution of parameter vectors θ characterized by parameters φ : pφ ( θ ) . NES optimizes φ to maximize the expectation of a fitness f ( θ ) over the population Eθ∼pφ [ f ( θ ) ] . In recent work , Salimans et al . ( 2017 ) outlines a version of NES applied to standard RL benchmark problems , where θ parameterizes the policy πθ , φt = ( θt , σ ) parameterizes a Gaussian distribution pφ ( θ ) = N ( θt , σ2I ) and f ( θ ) is the cumulative reward R ( θ ) over a full agent interaction . At every iteration , Salimans et al . ( 2017 ) apply n additive Gaussian noises to the current parameter and update the population as θt+1 = θt + α 1 nσ n∑ i=1 f ( θt + σ i ) i ( 3 ) To encourage exploration , Conti et al . ( 2018 ) propose Novelty Search Evolution Strategy ( NS-ES ) . In NSES , the fitness function f ( θ ) = λN ( θ ) + ( 1−λ ) R ( θ ) is selected as a combination of domain specific novelty score N and cumulative reward R , where λ is the balancing weight . 3 WHY IMITATING STRONG BRANCHING IS NOT GOOD . Strong branching is a human-designed heuristic , which solves all possible branch LPs Q+j , Q − j ahead of branching . As strong branching usually produces the smallest B & B search trees ( Achterberg , 2009 ) , many learning-based variable selection policy are trained by mimicking strong branching ( Gasse et al. , 2019 ; Khalil et al. , 2016 ; Alvarez et al. , 2017 ; Hansknecht et al. , 2018 ) . However , we claim that strong branching is not a good expert : the reason strong branching can produce a small search tree is the reduction obtained in solving branch LP , rather than its decision quality . Specifically , ( i ) Strong branching can check lines 5 , 6 in Algorithm 1 before branching . If the pruning condition is satisfied , strong branching does not need to add the subproblem into the problem set S. ( ii ) Strong branching can strengthen other LP relaxations in the problem set S via domain propagation ( Rodosek et al. , 1999 ) and conflict analysis ( Achterberg , 2007 ) . For example , if strong branching finds x1 ≥ 1 and x2 ≥ 1 can be pruned during solving branch LP , then any other LP relaxations containing x1 ≥ 1 can be strengthened by adding x2 ≤ 0 . These two reductions are the direct consequence of solving branch LP , and they can not be learned by a variable selection policy . ( iii ) Strong branching activates primal heuristics ( Berthold , 2006 ) after solving LPs . 
To examine the decision quality of strong branching , we employ vanilla full strong branching ( Gamrath et al. , 2020 ) , which takes the same decisions as full strong branching while the side-effects of solving the branch LPs are switched off . Experiments in Section 5.2 show that vanilla full strong branching has poor decision quality . Hence , imitating strong branching is not a wise choice for learning a variable selection policy .
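For reference , the NES update of Equation ( 3 ) in Section 2 , namely θt+1 = θt + α ( 1 / ( nσ ) ) Σni=1 f ( θt + σεi ) εi with Gaussian perturbations εi ∼ N ( 0 , I ) , can be sketched as follows . For NS-ES the fitness is the mixture f = λN ( θ ) + ( 1 − λ ) R ( θ ) ; all hyperparameter values are illustrative and the fitness and novelty callables are placeholders rather than the paper's implementation .

```python
import numpy as np

def es_update(theta, fitness, alpha=0.01, sigma=0.1, n=64, lam=0.0,
              novelty=None, rng=None):
    """One natural-evolution-strategies step (Eq. 3 of Section 2).
    `fitness(theta)` returns the cumulative reward R(theta); if `novelty` is
    given, the NS-ES mixture lam * N(theta) + (1 - lam) * R(theta) is used."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(n):
        eps = rng.standard_normal(theta.shape)   # eps_i ~ N(0, I)
        perturbed = theta + sigma * eps
        f = fitness(perturbed)
        if novelty is not None:
            f = lam * novelty(perturbed) + (1.0 - lam) * f
        grad += f * eps
    return theta + alpha * grad / (n * sigma)
```

In practice , antithetic sampling and fitness shaping are common additions to this basic update ; whether they are used here is not stated in the excerpt .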
The paper proposes a model for *variable selection* in *Mixed Integer Programming (MIP)* solvers. While this problem is clearly a sequential decision making task, modeling it as an MDP is challenging. As a result, existing works use other approaches such as ranking or imitation learning. This paper overcomes these challenges by introducing a new problem representation.
SP:bbaedd5d8e7591fa3a5587260bf19f3d05779976
Frequency Decomposition in Neural Processes
Neural Processes are a powerful tool for learning representations of function spaces purely from examples , in a way that allows them to perform predictions at test time conditioned on so-called context observations . The learned representations are finite-dimensional , while function spaces are infinite-dimensional , and so far it has been unclear how these representations are learned and what kinds of functions can be represented . We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components , similar to a Fourier transform . In this context , we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce , depending on their representation size . This bound is confirmed empirically . Finally , we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others , which makes them programmable band-pass or band-stop filters . 1 INTRODUCTION . Neural Processes ( Garnelo et al. , 2018a ; b ) are a class of models that can learn a distribution over functions , or more generally a function space . In contrast to many other approaches that do the same , for example Bayesian Neural Networks , Neural Processes learn an explicit representation of such a function space , which allows them to condition their predictions on an arbitrary number of observations that are only available at test time . This representation is finite-dimensional , while function spaces are infinite-dimensional , and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so . Our work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space , and in the process describes constraints and conditions that decide what kinds of function spaces can be represented . We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective , which allows us to derive a theoretical upper bound on the frequencies , i.e . Fourier components , of functions that can be represented . We subsequently confirm this bound empirically , which suggests that the learned representations should contain a notion of frequency . To further investigate this hypothesis , we continue with a visualization of the learned representations , which reveals that Neural Processes can decompose a function space into different frequency components , essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour . As further evidence of this we train Neural Processes to represent only certain frequencies , which results in them suppressing those frequencies that were not observed in the training data . Our contributions can be summarized as follows1 : • We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct . As we show , the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval . • We investigate learned representations qualitatively , presenting evidence that Neural Processes perform a frequency decomposition of the function space , akin to a Fourier transform . This behaviour is not incentivized externally but rather emerges naturally . 
1The complete source code to reproduce our experiments is available at https : //github.com/ * * * • We show that by choosing the training distribution appropriately , Neural Processes can be made to represent certain frequencies and suppress others , which turns them into programmable band-pass or band-stop filters . 2 BACKGROUND . Neural Processes ( Garnelo et al. , 2018a ; b ) are maps P : C , X → Y , where C is a set of tuples { ( x , f ( x ) ) } Nc=1 = : ( xc , f ( xc ) ) 2 with arbitrary but positive cardinality N , and f ∈ F : X → Y . C is often called the context , because Neural Processes perform predictions for values xt ∈ X ( t for target ) , conditioned on these points . F is the function space we would like to find a representation of . Note that some sources define function spaces as any set of functions with a shared domain and co-domain , while others require them to be vector spaces as well . We don ’ t concern ourselves with this distinction and further restrict our work to X = Y = R , because it allows us to visualize learned representations . We only look at the original Neural Processes , namely the deterministic Conditional Neural Processes ( CNP ) ( Garnelo et al. , 2018a ) and the variational Neural Processes ( NP ) ( Garnelo et al. , 2018b ) , because newer contributions in the field work in ways that preclude them from being analyzed in the same way . We discuss this further in Section 5 . In CNPs and NPs , the map P is separated into two parts , a so called encoding E : C → Z and a decoding or generating part G : Z , X → Y . Z is referred to as the representation or latent space . To allow Neural Processes to approximate arbitrary3 function spaces F , E and G are typically chosen to be powerful approximators , specifically neural networks , as the name suggests . The defining characteristic of CNPs and NPs is that E encodes individual pairs ( x , f ( x ) ) from the context separately , and the resulting representations are averaged to form a global representation , meaning one that is independent of the target points xt at which we then evaluate the Neural Process . This is often not the case in later work , for example in Attentive Neural Processes ( Kim et al. , 2019 ) , where the individual representations are instead aggregated using an attention mechanism that depends on xt . In CNPs the representations are deterministic , while in NPs they parametrize mean and ( log- ) variance of a Gaussian distribution , so the latter are trained using variational inference . For details on implementation and training we refer to Appendix A.1 . Our work will investigate how these global representations , which are finite-dimensional , represent infinite-dimensional function spaces . As stated above , E and by extension the Neural Process P acts on set-valued inputs . This is contrary to the vast majority of machine learning work where inputs are vectors of fixed dimension and ordering . Recall that sets are permutation invariant , so we must ensure that the same is true for the output of E. It is easy to see that this is given when we average individual encodings , but Zaheer et al . ( 2017 ) show that it is in fact the only way to ensure it : E is permutation-invariant if and only if it has a so-called sum-decomposition , i.e . it can be represented in the form E ( x ) = ρ ( N∑ i=1 φ ( xi ) ) ( 1 ) where ρ , φ are appropriately chosen functions . Wagstaff et al . 
( 2019 ) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N , the dimension of the image Z must at least be N . This will become relevant in the following section . 3 AN UPPER BOUND ON SIGNAL FREQUENCIES . We mentioned in the previous section that the encoder E in a Neural Process should have a sumdecomposition , so that the global representations are permutation-invariant , as shown in Zaheer et al . ( 2017 ) . Expanding on this , Wagstaff et al . ( 2019 ) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N . What these works do not consider are the implications for situations where the elements of 2We use boldface as a shorthand for sets , not vectors . 3This will depend on the implementation of E and G , and for neural networks F is practically restricted to continuous and differentiable functions . the sets are input-output tuples of some function f , as it is typically the case in Neural Processes . We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process . In order to do this , we must first define what it means to successfully learn a representation of a function space . Definition 3.1 ( Representation of Function Spaces in Neural Processes ) . We say that a Neural Processes P has learned a representation of a function space F , defined on an interval [ a , b ] ⊂ R , if , for some error tolerance , it holds for all x ∈ [ a , b ] and for all f ∈ F , represented as a suitable set of discrete measurements ( xf , f ( xf ) ) , that |P ( ( xf , f ( xf ) ) , x ) − f ( x ) | < . That means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance . The choice of this tolerance is essentially arbitrary , but should reflect that for g /∈ F the reconstructions should generally not be accurate within . We also write that f is represented as a suitable set of discrete measurements , by which we mean that it must be possible to reconstruct f from those measurements . Switching to signal-processing terminology , we know that to represent a continuous signal as a set of discrete measurements , we need to sample it at points with a distance of at most τ = 1/ ( 2νmax ) , where νmax is the maximum frequency component of the signal . This is most commonly known as the Nyquist-Shannon sampling theorem ( Whittaker , 1915 ; Kotelnikov , 1933 ; Shannon , 1949 ) . For any finite real interval [ a , b ] , this translates to a number of sampling points N > 2|b − a|νmax . The latter allows us to make a connection to the findings by Wagstaff et al . ( 2019 ) , so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct . Theorem 3.1 ( Maximum Frequency in Neural Process Representations ) . A Neural Process P with latent dimension Dr can only learn a representation of some function space F defined on a finite interval [ a , b ] ⊂ R if for all f ∈ F with a maximum frequency content νmax , f it holds that : νmax , f < Dr 2|b− a| ( 2 ) Note that this means we should in theory be able to represent any function space that obeys Eq . ( 2 ) to within arbitrarily small . 
In practice , we will typically have less control over F , and we only find approximate representations . Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq . ( 2 ) . It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points . During training , we work with randomly sampled inputs , but at test time equidistant points are used , as we outline in Appendix A.2 .
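To make the sum-decomposition of Eq. (1) concrete, the following is a minimal, hypothetical encoder sketch in the spirit of CNPs/NPs; the layer sizes, the mean aggregation, and all module names are our assumptions rather than the architecture used in the experiments. Per-pair encodings φ(x, f(x)) are aggregated by an order-independent mean and mapped by ρ to a global representation.

```python
import torch
import torch.nn as nn

# Sketch of a sum-decomposed set encoder E(C) = rho(mean_i phi(x_i, f(x_i))).
# Averaging per-pair encodings makes the output permutation-invariant.
class SetEncoder(nn.Module):
    def __init__(self, repr_dim=128, hidden=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, repr_dim))

    def forward(self, xc, yc):                  # xc, yc: (N,) context inputs and values
        pairs = torch.stack([xc, yc], dim=-1)   # (N, 2) input-output tuples
        enc = self.phi(pairs).mean(dim=0)       # aggregate over the set -> order-independent
        return self.rho(enc)                    # global representation z

z = SetEncoder()(torch.randn(10), torch.randn(10))  # same z for any permutation of the 10 pairs
```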
The work examines properties of Neural Processes (NPs), more precisely of deterministic NPs and how they form finite-dimensional representations of infinite-dimensional function spaces. NPs learn functions f that best represent/fit discrete sets of points in space. Based on signal-theoretic aspects of discretisation, the authors derive a theoretical upper bound on the frequencies of the functions f that can be represented from the points. The bound depends on the latent dimension/representation size and on the finite interval spanned by the points. Simulations are run to test the validity of the upper bound. The authors find that NPs behave like a Fourier transform and decompose the spectrum of the signal. Since the representation learns to represent the specific frequencies seen during training, NPs can be used as band-pass/band-stop filters.
SP:a20769de2c7acf390c7e3bece904a17df6a991bd
Multi-agent Policy Optimization with Approximatively Synchronous Advantage Estimation
1 INTRODUCTION . Reinforcement learning ( RL ) algorithms have shown amazing performance on many singleagent ( SA ) environment tasks ( Mnih et al. , 2013 ) ( Jaderberg et al. , 2016 ) ( Oh et al. , 2018 ) . However , for many real-world problems , the environment is much more complex where RL agents often need to cooperate with other agents . For example , taxi scheduling ( Nguyen et al. , 2018 ) and network control ( Chu et al. , 2019 ) . In cooperative multi-agent tasks , each agent is treated as an independent decision-maker , but can be trained together to learn cooperation . The common goal is to maximize the global return in the perspective of a team of agents . To deal with such tasks , the architecture of centralized training and decentralized executions ( CTDE ) is proposed ( Oliehoek & Vlassis , 2007 ) ( Jorge et al. , 2016 ) . The basic idea of CTDE is to construct a centralized policy evaluator , which only works during training and is accessable to global information . At the same time , each agent is assigned with a local policy for decentralized execution . The role of the evaluator is to evaluate agents ’ local policies differentially from the global perspective . A challenge in construction of centralized evaluator is multi-agent credit assignment ( Chang et al. , 2004 ) : in cooperative settings , joint actions typically generate only global rewards , making it difficult for each agent to deduce its own contribution to the team ’ s success . Credit assignment requires differentiate evaluation for agents ’ local policies , but designing individual reward function for each agent is often complicated and lacks of generalization ( Grzes , 2017 ) ( Mannion et al. , 2018 ) . Current policy based MARL methods generally realize credit assignment by introducing differentiate value functions or advantage functions ( Foerster et al. , 2018 ) ( Lowe et al. , 2017 ) . However , these value functions or advantage functions are estimated asynchronously but decentralized policies are updated synchronously , as shown in figure 1 ( b ) , which results in natural estimation bias . In this paper , we propose a novel policy based MARL method called multi-agent policy optimization with approximatively synchronous advantage estimation ( ASAE ) . In our work , we first define the counter-factual scenes , in which MA advantage estimation can be converted to SA advantage estimation . For certain agent , each counter-factual scene is assigned with a SA advantage . Then the marginal advantage function is defined as the expectation of SA advantages on distribution of counter-factual scenes , and credit assignment is realized by constructing different scenes ’ distribution for different agents . Moreover , in order to achieve synchronous advantage estimation , an approximation of other agents ’ joint future policy is introduced . To ensure the approximation is reliable , a restriction is applied to the original multi-agent policy optimization ( MAPO ) problem . The approximate optimization problem is simplified and broken down into multiple sub-problems , which has a similar form to trust region policy optimization ( TRPO ) problem . And the sub-problems are finally solved by proximal policy optimization ( PPO ) method . We have two contributions in this work : ( 1 ) A novel advantage estimation method called marginal advantage estimation , which realizes credit assignment for MARL is proposed . 
More importantly , this method provides a channel for various SA advantage functions expanding to multi-agent system . ( 2 ) A simple yet effective method for approximatively synchronous advantage estimation is firstly proposed . 2 RELATED WORK . A common challenge in cooperative multi-agent tasks is credit assignment . RL algorithms designed for single-agent tasks , ignore credit assignment and take other agents as part of partial observable environment . Such algorithms perform poorly in complex cooperative tasks which require high coordination ( Lowe et al. , 2017 ) . To deal with the challenge , some value based MARL methods estimate a local Q value for each agent , and the shared global Q value is then constructed through these local Q values . Value decomposition network ( VDN ) constructs the global Q value by simply adding all local Q values together ( Sunehag et al. , 2018 ) . And in QMIX algorithm ( Rashid et al. , 2018 ) , the global Q value is obtained by mixing local Q values with a neural network . In mean field multi-agent methods , local Q values are defined on agent pairs . The mapping from local Q values to the global Q value is established by measuring the influence of each agent pair ’ s joint action to the global return ( Yang et al. , 2018 ) . Similarly , for policy based MARL methods , credit assignment is generally realized through differentiated evaluation with CTED structure . Some naive policy based methods estimate local Q values for individual agents with a centralized critic ( Lowe et al. , 2017 ) , resulting in large variance . Some other methods try to introduce advantage function in MARL . Counter-factual multi-agent policy gradient ( COMA ) method ( Foerster et al. , 2018 ) is inspired by the idea of difference reward ( Wolpert & Tumer , 2002 ) and provides a naive yet effective approach for differentiated advantage estimation in cooperative MARL . In COMA , a centralized critic is used to predict the joint Q value function Qπ ( s , u ) of joint action u under state s. And the advantage for agent a is defined as Aa ( s , u ) = Q ( s , u ) − ∑ u′a πa ( u′a|τa ) Q ( s , ( u−a , u′a ) ) ( 1 ) where τ and π represent trajectory and policy respectively . a and -a denote current agent and the set of other agents respectively . COMA introduces a counter-factual baseline , which assumes that other agents take fixed actions , as shown in figure 1 ( b ) . COMA performs synchronous updates with asynchronous estimation , which leads to lagging and biased advantage estimation . In contrast , asynchronous estimation & asynchronous updating is more reliable yet more complicated . An ideal approach is synchronous estimation & synchronous updating . However , it requires prediction of other agents ’ future policies . 3 BACKGROUND . We consider a most general setting of partially observable , full cooperative multi-agent tasks , which can be described as a stochastic game defined by a tuple G = < S , U , P , r , Z , O , n , γ > . The true state of environment s ∈ S is unavailable to all agents . At each time step , n agents identified by a ∈ A ( A = { 1 , 2 , · · · , n } ) receive their local observations za ∈ Z , and take actions ua ∈ U simultaneously . The joint observation Z = Zn is acquired by the observation function O ( s , a ) : S × A → Z . The next state is determined by joint action u ∈ U ( U = Un ) and the transition function P ( s′|s , u ) : S×U×S → [ 0 , 1 ] . 
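For reference, COMA's counter-factual baseline in Eq. (1) reduces, for a discrete action set with the other agents' actions held fixed, to subtracting an expectation under the agent's own policy. A toy sketch (array shapes and names are our own, not the paper's code):

```python
import numpy as np

# Sketch of COMA's counter-factual advantage (Eq. 1) for one agent:
# q_values[u_a] = Q(s, (u_a, u_-a)) with other agents' actions fixed,
# pi_a = agent a's policy over its own discrete actions.
def coma_advantage(q_values, pi_a, taken_action):
    baseline = np.dot(pi_a, q_values)            # E_{u'_a ~ pi_a} Q(s, (u'_a, u_-a))
    return q_values[taken_action] - baseline

q = np.array([1.0, 0.2, -0.5])                   # Q for each of agent a's 3 actions
pi = np.array([0.5, 0.3, 0.2])
print(coma_advantage(q, pi, taken_action=0))     # 1.0 - 0.46 = 0.54
```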
The reward function r ( s , u ) : S×U→ R is shared by all agents , so as the discounted return Gt = ∑∞ t+i γ trt+i . γ ∈ [ 0 , 1 ) is a discount factor . In policy based MARL with CTED architecture , each agent has a local trajectory τa consists of historical observation and action { ( za0 , ua0 ) , ( ua1 , za1 ) , · · · } . And an independent policy πa ( ua|τa ) is constructed for each agent on their local trajectory . Action-state value function Qπ ( s , u ) and state value function V π ( s ) are used to evaluate joint policy . The advantage function is Aπ ( s , u ) = Qπ ( s , u ) −V π ( s ) . For clarity , symbols in bold are used to denote the joint variable of group agents . In single-agent policy optimization problems ( Schulman et al. , 2015a ) , the objective is to maximize the expected action state value functionEπθ [ Qπθ ] . Similarly , for MAPO with CTDE structure , each agent optimize its local policy individually with estimated Q values from centralized critic . Under this circumstance , the overall objective is for agent a = 1 to n : max θa E ( πθa , π−a ) [ Qa ( πθa , π−a ) ] ( 2 ) Where Q values can be substituted by advantages to reduce the variance . 4 APPROXIMATIVELY SYNCHRONOUS ADVANTAGE ESTIMATION IN MULTI-AGENT SYSTEM . In this section , we first introduce marginal advantage estimation which expands advantage functions of SARL to MARL as well to realize credit assignment . And then , we describe how to realize approximatively synchronous advantage estimation based on the marginal advantage function in MAPO problem . 4.1 MARGINAL ADVANTAGE ESTIMATION . In this subsection , we are going to solve the challenge of credit assignment through the proposed marginal advantage estimation . We first consider an counter-factual way where advantages are estimated asynchronously but policies are updated synchronously , as shown in figure 1 ( b ) . In this case , a counter-factual scene can be defined as : at certain state , for agent a , other agent always take fixed actions . In partially observable , full cooperative multi-agent settings , the counter-factual advantage of agent a ’ s action ua under state s is derived based on the joint action ’ s value ( or joint Q value ) function Q ( s , u ) Aa ( s , u ) = Aa ( s , ( ua , u−a ) ) = Q ( s , u ) − ∫ ua Q ( s , u−a , ua ) dπa ( ua|τa ) ( 3 ) From the view of agent a , the counter-factual advantage depends on other agents ’ joint action u−a , which is a random variable and u−a ∼ π−a . In order to remove the dependency , the marginal Q value function of agent a is defined as Qa ( s , ua ) = Eu−a∼π−a [ Q ( s , ( ua , u−a ) ) ] ( 4 ) Notice that in CTED structure , policy πa ( ua|τa ) and π−a ( u−a|τ−a ) are independent . By replacing joint Q value function with marginal Q value function , the marginal advantage function is derived Aa ( s , ua ) = Qa ( s , ua ) − ∫ ua Qa ( s , ua ) dπa ( ua|τa ) = ∫ u−a Q ( s , ua , u−a ) dπ−a ( u−a|τ−a ) − ∫ ua ∫ u−a Q ( s , ua , u−a ) dπ−a ( u−a|τ−a ) dπa ( ua|τa ) = ∫ u−a [ Q ( s , ua , u−a ) − ∫ ua Q ( s , ua , u−a ) dπa ( ua|τa ) ] dπ−a ( u−a|τ−a ) = ∫ u−a Aa ( s , u ) dπ−a ( u−a|τ−a ) ( 5 ) Such replacement will not change the result of advantage estimation because the substitution of joint Q value is its expectation . Form equation ( 5 ) , for different agent , the value of marginal advantage is different , which realizes credit assignment . 
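For small discrete action spaces, the marginal advantage of Eq. (5) can be computed exactly by weighting the counter-factual advantages by the other agents' (independent) joint policy. The sketch below is only illustrative; the shapes and names are assumptions.

```python
import numpy as np

# Exact marginal advantage (Eq. 5) for a toy discrete setting.
# Q: (n_a, n_minus) joint Q values Q(s, u_a, u_-a)
# pi_a: (n_a,) agent a's policy; pi_minus_a: (n_minus,) other agents' joint policy
def marginal_advantage(Q, pi_a, pi_minus_a, u_a):
    ctf_adv = Q - pi_a @ Q            # counter-factual advantage per fixed u_-a (Eq. 3)
    return ctf_adv[u_a] @ pi_minus_a  # expectation over u_-a ~ pi_-a (Eq. 5)

Q = np.array([[1.0, 0.0], [0.5, 2.0]])
print(marginal_advantage(Q, np.array([0.6, 0.4]), np.array([0.7, 0.3]), u_a=0))
```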
It can be easily proved that if counter-factual advantage Aa ( s , u ) is an unbiased estimation of joint Q value Q ( s , u ) , then marginal advantage is also an unbiased estimation of marginal Q value ( Appendix I ) . In a counter-factual scene , from the view of agent a , other agents and their fix joint actions u−a can be regarded as part of the environment . Let ( s , u−a ) = sctf and counter-factual advantage function can be written as Aa ( s , u ) =Aa ( sctf , ua ) =Q ( sctf , u a ) − ∫ ua Q ( sctf , u a ) dπa ( ua|τa ) ( 6 ) In counter-factual scenes , counter-factual advantage function is identical to advantage function in SARL , which means the counter-factual advantage in equation ( 5 ) can be replaced by any form of advantage function used in SARL . For example , considering using TD residual δat = r ( st , u a t ) + γV ( st+1 ) −V ( st ) as an estimation of joint advantage Aa ( st , ut ) , the marginal advantages could be written as Aa ( st , ut ) : = Eu−a∼π−a [ ∞∑ l=0 γlδat+l ] Aa ( st , ut ) : = Eu−a∼π−a [ δ a t ] ( 7 ) The former is unbiased estimation , but has high variance . The latter is biased estimation for any V 6= V π , but has much lower variance . These two methods can be combined for compromise between bias and variance ( Schulman et al. , 2015b ) . As agents ’ policies are independent , the expectation in equation ( 5 ) can be split into a ( n− 1 ) -layer integration , which is complicated . For simplicity and efficiency , the Monte-Carlo ( MC ) sampling can be applied as a substitution . Aa ( st , ut ) = ∫ u−a Aa ( st , ut ) dπ−a ≈ 1 m m∑ u−a Aa ( st , ut ) ( 8 ) Where m is the number of other agents ’ joint action samples . The principle of one step process to calculate marginal advantage with TD residual is shown in figure 2 . Firstly , based on the last true state st , m joint action samples are sampled . These samples are then reorganized . Take agent 1 as example , action u1 , t from Sa1 is combined with other agents ’ action samples from Sa2 to Sam respectively . As a result , m reorganized new samples are acquired . Based on these new samples , one step simulations are executed and m counter-factual rewards and states are acquired , which are used to calculate the estimation of marginal advantage . At last , the next true state is selected form counter-factual states . Both methods in equation ( 7 ) use V value predictor and require interactive simulation . Agent needs to interact with environment to get extra samples . In this work , we consider using centralized critic to predict jointQ values , and the marginal advantages can be directly calculated with theseQ values , which avoids interactive simulation .
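When the expectation over the other agents' joint actions is intractable, Eq. (8) replaces it with m Monte-Carlo samples. A schematic version follows; the two callables stand in for the critic-based counter-factual advantage and for sampling u^-a from π^-a, and are placeholders rather than the paper's implementation.

```python
import numpy as np

# Monte-Carlo approximation of the marginal advantage (Eq. 8): draw m joint
# actions of the other agents and average the counter-factual advantages.
def mc_marginal_advantage(ctf_advantage_fn, sample_others_fn, s, u_a, m=8):
    samples = [sample_others_fn(s) for _ in range(m)]                 # m draws of u_-a ~ pi_-a
    return np.mean([ctf_advantage_fn(s, u_a, u_minus) for u_minus in samples])
```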
The paper deals with the problem of credit assignment and synchronous advantage estimation in cooperative multi-agent reinforcement learning. The authors introduce marginal advantage functions, built from counterfactual advantage estimates, and use them to decompose the multi-agent policy optimization problem into single-agent policy optimization subproblems, which take a TRPO-like form and are solved with PPO.
SP:ba25b5b02701e01998e9dd22e4230c4e095f4542
Adaptive Stacked Graph Filter
We study Graph Convolutional Networks ( GCN ) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fullyconnected weights versus trainable polynomial coefficients . We find that by stacking graph filters with learnable polynomial parameters , we can build a highly adaptive and robust vertex classification model . Our treatment here relaxes the low-frequency ( or equivalently , high homophily ) assumptions in existing vertex classification models , resulting a more ubiquitous solution in terms of spectral properties . Empirically , by using only one hyper-parameter setting , our model achieves strong results on most benchmark datasets across the frequency spectrum . 1 INTRODUCTION . The semi-supervised vertex classification problem ( Weston et al. , 2012 ; Yang et al. , 2016 ) in attributed graphs has become one of the most fundamental machine learning problems in recent years . This problem is often associated with its most popular recent solution , namely Graph Convolutional Networks ( Kipf & Welling , 2017 ) . Since the GCN proposal , there has been a vast amount of research to improve its scalability ( Hamilton et al. , 2017 ; Chen et al. , 2018 ; Wu et al. , 2019 ) as well as performance ( Liao et al. , 2019 ; Li et al. , 2019 ; Pei et al. , 2020 ) . Existing vertex classification models often ( implicitly ) assume that the graph has large vertex homophily ( Pei et al. , 2020 ) , or equivalently , low-frequency property ( Li et al. , 2019 ; Wu et al. , 2019 ) ; see Section 2.1 for graph frequency . However , this assumption is not true in general . For instance , let us take the Wisconsin dataset ( Table 1 ) , which captures a network of students , faculty , staff , courses , and projects . These categories naturally exhibit different frequency patterns1 . Connections between people are often low-frequency , while connections between topics and projects are often midrange . This problem becomes apparent as GCN-like models show low accuracies on this dataset ; for example , see ( Pei et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ) . This paper aims at establishing a GCN model for the vertex classification problem ( Definition 1 ) that does not rely on any frequency assumption . Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure . Contributions . By observing the relation between label frequency and performance of existing GCN-like models , we propose to learn the graph filters coefficients directly rather than learning the MLP part of a GCN-like layer . We use filter stacking to implement a trainable graph filter , which is capable of learning any filter function . Our stacked filter construction with novel learnable filter parameters is easy to implement , sufficiently expressive , and less sensitive to the filters ’ degree . By using only one hyper-parameter setting , we show that our model is more adaptive than existing work on a wide range of benchmark datasets . The rest of our paper is organized as follows . Section 2 introduces notations and analytical tools . Section 3 provides insights into the vertex classification problem and motivations to our model ’ s design . Section 4 presents an implementation of our model . Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models . Section 6 compares our model and other existing methods empirically . We also provide additional experimental results in Appendix A . 
1 “ Frequency ” is an equivalent concept to “ homophily ” and will be explained in Section 2 . 2 PRELIMINARIES . We consider a simple undirected graph G = ( V , E ) , where V = { 1 , . . . , n } is a set of n vertices and E ⊆ V × V is a set of edges . A graph G is called an attributed graph , denoted by G ( X ) , when it is associated with a vertex feature mapping X : V 7→ Rd , where d is the dimension of the features . We define the following vertex classification problem , also known in the literature as the semi-supervised vertex classification problem ( Yang et al. , 2016 ) . Definition 1 ( Vertex Classification Problem ) . We are given an attributed graph G ( X ) , a set of training vertices Vtr ⊂ V , training labels Ytr : Vtr → C , and label set C. The task is to find a model h : V → C using the training data ( Vtr , Ytr ) that approximates the true labeling function Y : V → C. Let A be the adjacency matrix of the graph G , i.e. , Ai , j = 1 if ( i , j ) ∈ E and 0 otherwise . Let di = ∑ j Aij be the degree of vertex i ∈ V , and let D = diag ( d1 , . . . , dn ) be the n × n diagonal matrix of degrees . Let L = D −A be the combinatorial graph Laplacian . Let L = D−1/2LD−1/2 be the symmetric normalized graph Laplacian . We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties : ( 1 ) its eigenvalues range from 0 to 2 ; and ( 2 ) the spectral properties can be compared between different graphs ( Chung & Graham , 1997 ) . In recent literature , the normalized adjacency matrix with added self-loops , Ã = I −L+ c , is often used as the propagation matrix , where c is some diagonal matrix . 2.1 GRAPH FREQUENCY . Graph signal processing ( Shuman et al. , 2012 ) extends “ frequency ” concepts in the classical signal processing to graphs using the graph Laplacian . Let L = UΛU > be the eigendecomposition of the Laplacian , where U ∈ Rn×n is the orthogonal matrix consists of the orthonormal eigenvectors of L and Λ is the diagonal matrix of eigenvalues . Then , we can regard each eigenvector uk as a “ oscillation pattern ” and its eigenvalue λk as the “ frequency ” of the oscillation . This intuition is supported by the Rayleigh quotient as follows . r ( L , x ) , x > Lx x > x = ∑ u∼v Lu , v ( x ( u ) − x ( v ) ) 2∑ u∈V x ( u ) 2 . ( 1 ) where ∑ u∼v sums over all unordered pairs for which u and v are adjacent , x ( u ) denotes the entry of vector x corresponding to vertex u , and Lu , v is the ( u , v ) -entry of L. From the definition we see that r ( x ) is non-negative and L is positive semi-definite . r ( x ) is also known as a variational characterization of eigenvalues of L ( Horn & Johnson , 2012 , Chapter 4 ) , hence 0 ≤ r ( x ) ≤ 2 for any non-zero real vector x . We use the notation r ( x ) to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context . The Rayleigh quotient r ( x ) measures how the data x is oscillating . Hence , in this study , we use the term “ frequency ” and the “ Rayleigh quotient ” interchangeably . By the definition , the eigenvector ui has the frequency of λi . The labeling y of the vertices is low-frequency if the adjacent vertices are more likely to have the same label . This is a common assumption made by the spectral clustering algorithms ( Shi & Malik , 2000 ; Ng et al. , 2002 ; Shaham et al. , 2018 ) . Commonly used terms , homophily and heterophily , used in network science , correspond to low-frequency and high-frequency , respectively . 2.2 GRAPH FILTERING . 
In classical signal processing , a given signal is processed by filters in order to remove unwanted interference . Here , we first design a frequency response f ( λ ) of the filter , and then apply the filter to the signal in the sense that each frequency component x̂ ( λ ) of the data is modulated as f ( λ ) x̂ ( λ ) . Graph signal processing extends this concept as follows . Same as in classical signal processing , we design a filter f ( λ ) . Then , we represent a given graph signal x ∈ R|V | as a linear combination of the eigenvectors as x = ∑ i xiui . Then , we modulate each frequency component by f ( λ ) as x = ∑ i f ( λi ) xiui . An important fact is that this can be done without performing the eigendecomposition explicitly . Let f ( L ) be the matrix function induced from f ( λ ) . Then , the filter is represented by f ( L ) x . As an extension of signal processing , graph signal processing deals with signals defined on graphs . In definition 1 , each column of the feature matrix X ∈ Rn×d is a “ graph signal ” . Let L = UΛU > be the eigendecomposition where U ∈ Rn×n consists of orthonormal eigenvectors . Signal X is filtered by function f of the eigenvalues as follow . X̄ = Uf ( Λ ) U > X = f ( L ) X ( 2 ) In general , different implementations of f ( L ) lead to different graph convolution models . For instance , GCN and SGC ( Wu et al. , 2019 ) are implemented by f ( L ) = ( I−L+ ( D+ I ) −1/2L ( D+ I ) −1/2 ) k , where the constant term stems from the fact that self-loops are added to vertices and k is the filter order . Generally , the underlying principle is to learn or construct the appropriate filter function f such that it transforms X into a more expressive representation . The filter in GCN is called a low-pass filter because it amplifies low-frequency components ( Li et al. , 2018 ; NT & Maehara , 2019 ) . 3 SPECTRAL PROPERTIES OF FILTERS . Towards building a ubiquitous solution , we take an intermediate step to study the vertex classification problem . Similar to the unsupervised clustering problem , an ( implicit ) low-frequency assumption is commonly made . However , the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns . Table 1 shows three groups of datasets , each with different label frequency ranges . Notably , WebKB datasets ( Wisconsin , Cornell , Texas ) have mixed label frequencies ; some labels have low frequencies while others have midrange frequencies . Therefore , in order to relax the frequency assumptions , we need to learn the filtering function f ( λ ) in a similar way as proposed by Defferrard et al . ( 2016 ) . The filtering function f ( λ ) is often approximated using a polynomial of the graph Laplacian as f ( L ) ≈ poly ( L ) = K∑ i=0 θiLi . ( 3 ) Because polynomials can uniformly approximate any real continuous function on a compact interval ( see , e.g. , ( Brosowski & Deutsch , 1981 ) ) , such approximation scheme is well-justified . Kipf & Welling ( 2017 ) derived their GCN formulation as follows . In their equation 5 , they approximated a graph filter gθ by Chebyshev polynomials Tk as gθ ∗ x ≈ K∑ k=0 θkTk ( D −1/2AD−1/2 ) x . 
( 4 ) Then , they took the first two terms and shared the parameters as θ0 = −θ1 to obtain their equation 7 : gθ ∗ x ≈ θ ( IN +D −1/2AD−1/2 ) x ≈ θ ( 2IN − L ) ( 5 ) Finally , they extended a scalar θ to a matrix Θ to accommodate multiple feature dimensions as Z = D̃−1/2ÃD̃−1/2XΘ ( 6 ) Kipf & Welling ( 2017 ) claimed that the weight matrix Θ can learn different filters , and subsequent works ( e.g. , ( Veličković et al. , 2018 ; Spinelli et al. , 2020 ; Chen et al. , 2020b ) ) also learned filters by Θ . However , neither in theory nor practice it is the case ( Oono & Suzuki , 2020 ) . As the construction suggest , a GCN layer only represents a filter of the form f ( λ ) ≈ 2− λ . To properly learn different graph filters , we should learn the multiplying parameters θ0 , θ1 , . . . , θK in equation 3 . In the next section , we propose a learning model which directly learns these multiplying parameters .
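To make these quantities concrete, the toy sketch below computes the symmetric normalized Laplacian, the Rayleigh quotient of Eq. (1) as a frequency measure, and a polynomial filter as in Eq. (3) applied without any eigendecomposition. In the paper the coefficients θ_i are trainable; here they are fixed numbers, and all names and sizes are our own.

```python
import numpy as np

def normalized_laplacian(A):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

def rayleigh_quotient(L, x):
    # r(L, x) in [0, 2]; small values mean a smooth (homophilous) signal
    return float(x @ L @ x) / float(x @ x)

def poly_filter(L, X, theta):
    # f(L) X ~ sum_k theta_k L^k X, built iteratively (Eq. 3)
    out, LkX = theta[0] * X, X
    for t in theta[1:]:
        LkX = L @ LkX
        out = out + t * LkX
    return out

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)
print(rayleigh_quotient(L, np.array([1.0, 1.0, 1.0, -1.0])))      # mixed labels -> higher frequency
X_filtered = poly_filter(L, np.random.randn(4, 3), theta=[1.0, -0.5, 0.25])
```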
This paper addresses the problem of vertex classification using a new Graph Convolutional Network (GCN) architecture. The linear operator within each layer of the GCN is formed by a polynomial graph filter (i.e., a matrix polynomial of either the adjacency or the Laplacian matrix). Rather than working in the frequency domain, the paper focuses on learning the polynomial coefficients of the filter in the vertex domain. The key novelty is the use of a stacked architecture in which the polynomial filter is formed by the successive application (i.e., matrix multiplication) of filters of order one. Numerical experiments with real datasets showcase the merits of the proposed architecture, including superior classification performance.
SP:37bdb147b866b9e32a94d55dae82d7a42cea8da9
Deep $k$-NN Label Smoothing Improves Reproducibility of Neural Network Predictions
1 INTRODUCTION . Deep neural networks ( DNNs ) have proved to be immensely successful at solving complex classification tasks across a range of problems . Much of the effort has been spent towards improving their predictive performance ( i.e . accuracy ) , while comparatively little has been done towards improving the stability of training these models . Modern DNN training is inherently noisy due to factors such as the random initialization of network parameters , the mini-batch ordering , and effects of various data augmentation or pre-processing tricks , all of which are exacerbated by the non-convexity of the loss surface . This results in local optima corresponding to models that have very different predictions on the same data points . This may seem counter-intuitive , but even when the different runs all produce very high accuracies for the classification task , their predictions can still differ quite drastically as we will show later in the experiments . Thus , even an optimized training procedure can lead to high prediction churn , which refers to the proportion of sample-level disagreements between classifiers caused by different runs of the same training procedure1 . In practice , reducing such predictive churn can be critical . For example , in a production system , models are often continuously improved on by being trained or retrained with new data or better model architectures and training procedures . In such scenarios , a candidate model for release must be compared to the current model serving in production . Oftentimes , this decision is conditioned on more than just overall offline test accuracy– in fact , oftentimes the offline metrics are not completely aligned with actual goal , especially if these models are used as part of a larger system ( e.g . maximizing offline click-through rate vs. maximizing revenue or user satisfaction ) . As a result , these comparisons oftentimes require extensive and costly live experiments , requiring human evaluation in situations where the candidate and the production model disagree ( i.e . in many situations , the true labels are not available without a manual labeler ) . In these cases , it can be highly desirable to lower prediction churn . Despite the practical relevance of lowering predictive churn , there has been surprisingly little work done in this area , which we highlight in the related work section . In this work , we focus on predictive churn reduction under retraining the same model architecture on an identical train and test set . Our main contributions are as follows : • We provide one of the first comprehensive analyses of baselines to lower prediction churn , showing that popular approaches designed for other goals are effective baselines for churn reduction , even compared to methods designed for this goal . 1Concretely , given two classifiers applied to the same test samples , the prediction churn between them is the fraction of test samples with different predicted labels . • We improve label smoothing , a global smoothing method popular for improving model confidence scores , by utilizing the local information leveraged by the k-NN labels thus introducing k-NN label smoothing which we show to often outperform the baselines on a wide range of benchmark datasets and model architectures . • We show new theoretical results for the k-NN labels suggesting the usefulness of the k-NN label . 
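Concretely, prediction churn as defined in footnote 1 is simply the disagreement rate between two sets of predicted labels on the same test points; a minimal utility (ours, not the paper's) is:

```python
import numpy as np

# Fraction of test samples on which two training runs assign different labels.
def prediction_churn(preds_a, preds_b):
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

print(prediction_churn([0, 1, 2, 1], [0, 2, 2, 1]))   # 0.25
```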
We show under mild nonparametric assumptions that for a wide range of k , the kNN labels uniformly approximates the Bayes-optimal label and when k is tuned optimally , achieves the minimax optimal rate . We also show that when k is linear in n , the distribution implied by the k-NN label approximates the original distribution smoothed with an adaptive kernel . 2 RELATED WORKS . Our work spans multiple sub-areas of machine learning . The main problem this paper tackles is reducing prediction churn . In the process , we show that label smoothing is an effective baseline and we improve upon it in a principled manner using deep k-NN label smoothing . Prediction Churn . There are only a few works which explicitly address prediction churn . Fard et al . ( 2016 ) proposed training a model so that it has small prediction instability with future versions of the model by modifying the data that the future versions are trained on . They furthermore propose turning the classification problem into a regression towards corrected predictions of an older model as well as regularizing the new model towards the older model using example weights . Cotter et al . ( 2019 ) ; Goh et al . ( 2016 ) use constrained optimization to directly lower prediction churn across model versions . Simultaneously training multiple identical models ( apart from initialization ) while tethering their predictions together via regularization has been proposed in the context of distillation ( Anil et al. , 2018 ; Zhang et al. , 2018 ; Zhu et al. , 2018 ; Song & Chai , 2018 ) and robustness to label noise ( Malach & Shalev-Shwartz , 2017 ; Han et al. , 2018 ) . This family of methods was termed “ co-distillation ” by Anil et al . ( 2018 ) , who also noted that it can be used to reduce churn in addition to improving accuracy . In this paper , we show much more extensively that co-distillation is indeed a reasonable baseline for churn reduction . Label smoothing . Label smoothing ( Szegedy et al. , 2016 ) is a simple technique that proposes to train a model the model on the soft labels obtained by a convex combination of the hard true label and the soft uniform distribution across all the labels . It has been shown that it prevents the network from being over-confident and leads to better confidence calibration ( Müller et al. , 2019 ) . Here we show that label smoothing is a reasonable baseline for reducing prediction churn , and we moreover enhance it for this task by smoothing the labels locally via k-NN rather than a the pure global approach mixing with the uniform distribution . k-NN Theory . The theory of k-NN classification has a long history ( e.g . Fix & Hodges Jr ( 1951 ) ; Cover ( 1968 ) ; Stone ( 1977 ) ; Devroye et al . ( 1994 ) ; Chaudhuri & Dasgupta ( 2014 ) ) . To our knowledge , the most relevant k-NN classification result is by Chaudhuri & Dasgupta ( 2014 ) , who show statistical risk bounds under similar assumptions as used in our work . Our analysis shows finitesample L∞ bounds on the k-NN labels , which is a stronger notion of consistency as it provides a uniform guarantee , rather than an average guarantee as is shown in previous works under standard risk measures such asL2 error . We do this by leveraging recent techniques developed in Jiang ( 2019 ) for k-NN regression , which assumes an additive noise model instead of classification . Moreover , we provide to our knowledge the first consistency guarantee for the case where k grows linearly with n. Deep k-NN . 
k-NN is a classical method in machine learning which has recently been shown to be useful when applied to the intermediate embeddings of a deep neural network ( Papernot & McDaniel , 2018 ) to obtain more calibrated and adversarially robust networks . This is because standard distance measures are often better behaved in these representations leading to better performance of k-NN on these embeddings than on the raw inputs . Jiang et al . ( 2018 ) uses nearest neighbors on the intermediate representations to obtain better uncertainty scores than softmax probabilities and Bahri et al . ( 2020 ) uses the k-NN label disagreement to filter noisy labels for better training . Like these works , we also leverage k-NN on the intermediate representations but we show that utilizing the k-NN labels leads to lower prediction churn . 3 ALGORITHM . Suppose that the task is multi-class classification with L classes and the training datapoints are ( x1 , y1 ) , ... , ( xn , yn ) , where xi ∈ X , and X is a compact subset of RD and yi ∈ RL , where represents the one-hot vector encoding of the label– that is , if the i-th example has label j , then yi has 1 in the j-th entry and 0 everywhere else . Then we give the formal definition of the smoothed labels : Definition 1 ( Label Smoothing ) . Given label smoothing parameter 0 ≤ a ≤ 1 , then the smoothed label y is ( where 1L denotes the vector of all 1 ’ s in RL ) . yLSa : = ( 1− a ) · y + a L · 1L . We next formally define the k-NN label , which is the average label of the example ’ s k-nearest neighbors in the training set . Let us use shorthand X : = { x1 , ... , xn } and yi ∈ RL . Definition 2 ( k-NN label ) . Let the k-NN radius of x ∈ X be rk ( x ) : = inf { r : |B ( x , r ) ∩X| ≥ k } whereB ( x , r ) : = { x′ ∈ X : |x−x′| ≤ r } and the k-NN set of x ∈ X beNk ( x ) : = B ( x , rk ( x ) ) ∩X . Then for all x ∈ X , the k-NN label is defined as ηk ( x ) : = 1 |Nk ( x ) | n∑ i=1 yi · 1 [ xi ∈ Nk ( x ) ] . The label smoothing method can be seen as performing a global smoothing . That is , every label is equally transformed towards the uniform distribution over all labels . While it seems almost deceptively simple , it has only recently been shown to be effective in practice , specifically for better calibrated networks . However , since this smoothing technique is applied equally to all datapoints , it fails to incorporate local information about the datapoint . To this end , we propose using the k-NN label , which smooths the label across its nearest neighbors . We show theoretically that the k-NN label can be a strong proxy for the Bayes-optimal label , that is , the best possible prediction one can make given the uncertainty . In other words , compared to the true label ( or even the label smoothing ) , the k-NN label is robust to variability in the data distribution and provides a more stable estimate of the label than the original hard label which may be noisy . Training on such noisy labels have been shown to hurt model performance ( Bahri et al. , 2020 ) and using the smoothed labels can help mitigate these effects . To this end , we define k-NN label smoothing as follows : Definition 3 ( k-NN label smoothing ) . Let 0 ≤ a , b ≤ 1 be k-NN label smoothing parameters . Then the k-NN smoothed label of datapoint ( x , y ) is defined as : ykNNa , b = ( 1− a ) · y + a · ( b · 1 L · 1L + ( 1− b ) · ηk ( x ) ) . We see that a is used to weight between using the true labels vs. using smoothing , and b is used to weight between the global vs. local smoothing . 
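A minimal sketch of Definitions 1-3 follows; the embedding space, the Euclidean distance and the variable names are our assumptions. The k-NN label η_k(x) averages the one-hot labels of the k nearest training points, and the k-NN smoothed label mixes the true label, the uniform distribution and η_k(x) via the parameters a and b.

```python
import numpy as np

def knn_label(z, Z_train, Y_onehot, k):
    # eta_k(z): average one-hot label over the k nearest training embeddings
    dists = np.linalg.norm(Z_train - z, axis=1)
    nn_idx = np.argsort(dists)[:k]
    return Y_onehot[nn_idx].mean(axis=0)

def knn_smoothed_label(y_onehot, z, Z_train, Y_onehot, k, a, b):
    # (1 - a) * y + a * (b * uniform + (1 - b) * eta_k(z))   (Definition 3)
    L = y_onehot.shape[0]
    uniform = np.full(L, 1.0 / L)
    local = knn_label(z, Z_train, Y_onehot, k)
    return (1 - a) * y_onehot + a * (b * uniform + (1 - b) * local)
```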
Algorithm 1 shows how k-NN label smoothing is applied to deep learning models . Like Bahri et al . ( 2020 ) , we perform k-NN on the network ’ s logits layer . Algorithm 1 Deep k-NN label smoothing Inputs : 0 ≤ a , b ≤ 1 , Training data ( x1 , y1 ) , ... , ( xn , yn ) , model training procedureM . Train model M0 on ( x1 , y1 ) , ... , ( xn , yn ) withM . Let z1 , ... , zn ∈ RL be the logits of x1 , ... , xn , respectively , w.r.t . M0 Let ỹi be the k-NN smoothed label of ( zi , yi ) computed w.r.t . dataset ( z1 , y1 ) , ... , ( zn , yn ) . Train model M on ( x1 , ỹ1 ) , ... , ( xn , ỹn ) withM .
The main objective of this paper is to reduce model instability, in particular the prediction churn of neural networks. Prediction churn is defined as the change in predictions with respect to model randomness, e.g., across multiple training runs of the network. The paper proposes an interpolation of global label smoothing and k-NN label smoothing. Theoretically, it is shown that the k-NN rule converges to the Bayes rule when k is small, and converges to a kernel-smoothed version of the Bayes rule when k is linear in n. Experiments are conducted showing that the proposed method gives the highest test accuracy and the lowest churn rate in most cases.
SP:f19be0fdce321827638f91d57607ba340b1c3e4b
Adversarial Feature Desensitization
1 Introduction . When training a classifier , it is common to assume that the training and test samples are drawn from the same underlying distribution . In adversarial machine learning , however , this assumption is intentionally violated by using the classifier itself to perturb the samples from the original ( natural ) data distribution towards a new distribution over which the classifier ’ s error rate is increased [ 52 ] . As expected , when tested on such adversarially generated input distribution , the classifier severely underperforms . To date , various methods have been proposed to defend the neural networks against adversarial attacks [ 34 , 2 ] , additive noise patterns and corruptions [ 24 , 25 , 45 ] , and transformations [ 17 ] . Among these methods , two of the most successful adversarial defense methods to date are adversarial training [ 34 ] , which trains the neural network with examples that are perturbed to maximize the loss on the target model , and TRADES [ 57 ] , which regularizes the classifier to push the decision boundary away from the data . While past adversarial defence methods have successfully improved the neural network robustness against adversarial examples , it has also been shown that these robust networks remain susceptible to even slightly larger adversarial perturbations or other forms of attacks [ 19 , 46 , 48 ] . In this paper , we propose to view the problem of adversarial robustness through the lens of domain adaptation , and to consider distributions of natural and adversarial images as distinct input domains 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . that a classifier is expected to perform well on . We then focus our attention on learning features that are invariant under such domain shifts . Building upon domain adaptation literature [ 4 ] , we use the classification-basedH∆H-divergence to quantify the distance between the natural and adversarial domains . The theory of domain adaptation allows us to formulate a bound on the adversarial classification error ( i.e . the error under the distribution of adversarial examples ) in terms of the classification error on natural images and the divergence between the natural and adversarial features . We further propose an algorithm for minimizing the adversarial error using this bound . For this , we train a classifier and a domain discriminator to respectively minimize their losses on the label classification and domain discrimination tasks . The feature extractor is trained to minimize the label classifier ’ s loss and maximise the discriminator ’ s loss . In this way , the feature extractor network is encouraged to learn features that are both predictive for the classification task and insensitive to the adversarial attacks . The proposed setup is conceptually similar to prior work in adversarial domain adaptation [ 18 , 53 ] , where domain-invariant features are learned through an adversarial game between the domain discriminator and a feature extractor network . This setup is similar to the adversarial learning paradigm widely used in image generation and transformation [ 20 , 28 , 60 ] , unsupervised and semi-supervised learning [ 39 ] , video prediction [ 35 , 31 ] , active learning [ 47 ] , and continual learning [ 16 ] . Some prior work have also considered adversarial learning to tackle the problem of adversarial examples [ 54 , 36 , 9 , 8 ] . 
These methods used generative models to learn the distribution of the adversarial images [ 54 , 36 ] , or to learn the distribution of input gradients [ 9 , 8 ] . Unlike our method which learns a discriminator function between distributions of adversarial and natural features and updates the feature extractor to reduce the discriminability of those distributions . The main contributions of this work are as follows : • We apply domain-adaptation theory to the problem of adversarial robustness ; this allows to bound the adversarial error in terms of the error on the natural inputs and the divergence between the feature ( representation ) distributions of adversarial and natural domains . • Aiming to minimize this bound , we propose a method which learns adversarially robust features that are both predictive and insensitive to adversarial attacks , i.e . can not be used to discriminate between natural and adversarial data . • We empirically demonstrate the effectiveness of the proposed method in learning robust models against a wide range of attack types and attack strengths , and show that our proposed approach often significantly outperforms most previous defense methods . 2 Related Work . There is an extensive literature on mitigating susceptibility to adversarial perturbations [ 34 , 57 , 13 , 59 , 3 , 22 , 7 ] . Adversarial training [ 34 ] is one of the earliest successful attempts to improve robustness of the learned representations to potential perturbations to the input pattern by solving a min-max optimization problem . TRADES [ 57 ] adds a regularization term to the cross-entropy loss which penalizes the network for assigning different labels to natural images and their corresponding perturbed images . [ 41 ] proposed an additional regularization term ( local linearity regularizer ) that encourages the classification loss to behave linearly around the training examples . [ 55 , 51 ] proposed to regularize the flatness of the loss to improve adversarial robustness . Our work is closely related to the domain adaptation literature in which adversarial optimization has recently gained much attention [ 18 , 32 , 53 ] . From this viewpoint one could consider the clean and perturbed inputs as two distinct domains for which a network aims to learn an invariant feature set . Although in our setting , i ) the perturbed domain continuously evolves while the parameters of the feature network are tuned ; ii ) unlike the usual setting in domain-adaptation problems , here we have access to the labels associated with some samples from the perturbed ( target ) domain . Recent work [ 49 ] regularized the network to have similar logit values in response to clean and perturbed inputs and showed that this additional term leads to better robust generalization to unseen perturbations . Related to this , Adversarial Logit Pairing [ 27 ] increases robustness by directly matching the logits for clean and adversarial inputs . JARN [ 9 ] Another line of work is on developing certified defenses which consist of methods with provable bounds over which the network is certified to operate robustly [ 58 , 56 , 10 ] . While these approaches provide a sense of guarantee about the proposed defenses , they are usually prohibitively expensive to train , drastically reduce the performance of the network on natural images , and the empirical robustness gained against standard attacks is low . 3 Our approach . 
We will now make a connection between the domain adaptation and adversarial robustness , and build upon this connection to develop an approach for improving the network ’ s robustness against adversarial attacks . 3.1 Preliminaries . Let Fθ ( x ) : X → Z , where X ⊆ Rn , Z ⊆ Rm , be a feature extractor ( e.g . a neural network with parameters θ ) mapping the input x ∈ X into the feature vector ( representation ) z ∈ Z , and let Cφ : Z → Y , where Y = { 1 , . . . , K } are the class labels , be a classifier , with parameters φ ( e.g. , the last linear layer of a neural network plus the softmax function , on top of the extracted features ) . Adversarial attack : Let π ( x , ) denote a perturbation function ( an adversarial attack ) which , for a given ( x , y ) ∈ X × Y , generates a perturbed sample x′ ∈ B ( x , ) within the -neighborhood of x , B ( x , ) = { x′ ∈ X : ‖x′ − x‖ < } , by solving the following maximization problem max t∈B ( x , ) L ( Cφ ( Fθ ( t ) ) , y ) , ( 1 ) where L is the task classification loss function . In practice , however , the perturbed sample x′ found by an attacker is typically an approximate rather than the exact solution to this maximization problem . In order to characterize the distance between the natural and adversarial data distributions , the following notion of distance between two probability distributions , defined in [ 4 , 18 ] , will be used later to make a connection with domain adaptation theory . H∆H-distance : Let H be a set of binary classifiers ( hypotheses ) , called a hypothesis space ; then the symmetric difference hypothesis space H∆H defines the set of hypotheses that capture the disagreements between two hypotheses inH , as in [ 4 ] : g ∈ H∆H ⇐⇒ g ( x ) = h ( x ) ⊕ h′ ( x ) for some h , h′ ∈ H , ( 2 ) where ⊕ denotes the XOR function . Then theH∆H-distance [ 4 , 18 ] between two data distributions ( domains ) S and T , with respect to the hypothesis spaceH , is defined as : dH∆H ( S , T ) = 2 sup h∈H∆H |Px∼S [ h ( x ) = 1 ] − Px∼T [ h ( x ) = 1 ] | . ( 3 ) This equation turns into an inequation when the supremum is taken over the hypothesis space H instead ofH∆H [ 18 ] . 3.2 A Domain Adaptation View of Adversarial Robustness . A domain is defined as a data distribution D on the set of inputs X [ 5 ] . In the adversarial robustness setting , we consider two domains – the natural and the adversarial domains , corresponding respectively to the source and target domains in domain adaptation . We denote by DX and D′X the natural and adversarial distributions of input instances respectively and by DZ and D′Z their corresponding induced distributions over the feature space Z . As in domain adaptation , we assume that f : X → Y is a labeling function common to both domains . The expected classification error Z of the classifier Cφ over DZ is defined as the probability that the classifier Cφ disagrees with the function f̃ : Z ( Cφ ) = Ez∼DZ [ y 6= Cφ ( z ) ] , ( 4 ) where f̃ : Z → Y is a mapping from the features to the class label such that f ( x ) = f̃ ( Fθ ( x ) ) . We similarly define ′Z as the expected error of Cφ over DZ′ . Using theorem 2 from [ 4 ] that relates the source and the target domain errors , we get an upper bound on the expected adversarial error ′Z as : ′Z ( h ) ≤ Z ( h ) + 1 2 dH∆H ( DZ , D′Z ) + c , ( 5 ) where c is a constant term w.r.t . h. Eq . 
5 essentially gives a bound on the adversarial error ′Z in terms of the natural error Z and a divergence dH∆H between the natural and adversarial domains with respect to their induced representation distributions DZ and D′Z . In the next section , we will describe an algorithm for improving adversarial robustness of a model by iteratively estimating and minimizing these two components of the error bound . 3.3 Adversarial Feature Desensitization . Based on Eq . 5 , the expected adversarial error could be reduced by jointly minimizing the natural error and the divergence between the distributions of natural and adversarial representations dH∆H ( DZ , D′Z ) . While minimizing the natural error X is straightforward , minimizing the crossdomain divergence requires us to estimate dH∆H ( DZ , D′Z ) . As was shown before [ 18 ] , training a domain discriminator Dψ is closely related to estimating the dH∆H ( DZ , D′Z ) . The domain discriminator is a classifier trained to assign a label of 1 to samples from DZ , and -1 to samples from D′Z . Namely , it is shown [ 18 ] that dH∆H ( DZ , D′Z ) ≤ 2 sup h∈H |αDZ , D′Z ( h ) − 1| , ( 6 ) where αDZ , D′Z ( h ) = Pz∼DZ [ h ( z ) = 1 ] +Pz∼D′Z [ h ( z ) = −1 ] combines the true positives and true negatives , and is thus maximized by the optimal domain discriminator h = Dψ . Note that , if the domain distributions DZ and D′Z are the same , then even the best choice of domain discriminator Dψ will achieve chance-level accuracy , corresponding to αDZ , D′Z ( Dψ ) = 1 . Our approach will aim at minimizing this estimated distance dH∆H ( DZ , D′Z ) by tuning the feature extractor network parameters θ in the direction that pushes the distributions DZ and D′Z closer together . In parallel , we train the domain discriminator to estimate and guide the progress of the feature extractor ’ s tuning . We now describe the proposed approach ( see Algorithm 1 ) which essentially involves simultaneous training of the feature extractor Fθ , the task classifier Cφ and the domain discriminator Dψ ( see Figure 1a ) 1 . One iteration of the training procedure consists of the following three steps . First , parameters of the feature extractor Fθ and classifier Cφ are updated aiming to minimize the natural error X using the cross-entropy loss on natural inputs : LC = − 1 m m∑ i=1 ỹi · log ( softmax ( Cφ ( Fθ ( xi ) ) ) ) , ( 7 ) where ỹi is a one-hot encoding of the true label of the i-th sample xi . Next , steps two and three essentially implement a two-player minimax game similar to that in Generative Adversarial Networks ( GAN ) [ 20 ] , carried out between the feature extractor network Fθ and the domain discriminator Dψ , with a value function V ( Fθ , Dψ ) = Ep ( y ) [ Ep ( x|y ) [ S ( −Dψ ( Fθ ( x ) , y ) ) ] ] + Eq ( y ) [ Eq ( x|y ) [ S ( Dψ ( Fθ ( x ) , y ) ) ] ] , ( 8 ) 1Note that we will somewhat abuse the notation , assuming that Cφ and Dψ below correspond to the logits ( last-layer output ) of the corresponding networks . Also , we will use class-conditional discriminators , Dψ ( Fθ ( x , y ) ) , i.e . train different domain discriminator for different label values y. Algorithm 1 : AFD training procedure Input : Adversarial perturbation function ( attack ) π , feature extractor Fθ , task classifier Cφ , domain discriminator Dψ , learning rates α , β , and γ. repeat input next mini-batch { ( xi , yi ) , ... , ( xm , ym ) } for i=1 to m : x′i ← π ( xi , ) Compute LC according to Eq . 7 Compute LD according to Eq . 9 Compute LF according to Eq . 
10 ( θ , φ ) ← ( θ , φ ) − α∇θ , φLC % update feature extractor and task classifier ψ ← ψ − β∇ψLD % update domain discriminator θ ← θ − γ∇θLF % update feature extractor until convergence ; where S is the softplus function . In particular , parameters of the domain discriminator Dψ are updated to minimize the cross-entropy loss associated with discriminating natural and adversarial inputs , maximizing α ( h ) in Eq . 6 . LD = 1 m m∑ i=1 [ S ( −Dψ ( Fθ ( xi ) , yi ) ) + S ( Dψ ( Fθ ( x′i ) , yi ) ) ] , ( 9 ) while the parameters of the feature extractor function Fθ are adversarially updated to maximize the domain discriminator ’ s loss from Eq . 9 LF = 1 m m∑ i=1 S ( −Dψ ( Fθ ( x′i ) , yi ) ) . ( 10 ) In Figure 1b , we visually compare the learning dynamics in adversarial training , TRADES and AFD . Essentially , the adversarial training solves the classification problem by pushing the representation of adversarial examples from different classes away . TRADES regularizes the normal classification loss on the natural inputs with an additional term that encourages the representation of adversarial and natural images to match . Similar to TRADES , in AFD , the regular classification loss on natural inputs is augmented but with an adversarial game which consists of training the domain discriminator that distinguishes between the adversarial and natural inputs for each class followed by updates to the feature extractor to make the representations for natural and adversarial examples to become indistinguishable from each other . Notably , because the parameter update for the feature extractor network is done to maximize the domain discriminator loss and not to decrease the loss for particular adversarial examples ( as is done in adversarial training or TRADES ) , it potentially increases the network robustness against any perturbation that could be correctly classified using the same domain discriminator . This could potentially lead to a broader form of generalization learned by the network . Discussion : Relation to Adversarial Training . Adversarial training minimizes the expected error on adversarial examples ( the perturbed versions of the natural samples ) , generated by an attacker in order to maximize the classification loss . The adversarial training procedure involves a minimax optimization problem consisting of an inner maximization to find adversarial examples that maximize the classification loss and an outer minimization to find model parameters that minimize the adversarial loss . From the domain adaptation point of view , the inner optimization of adversarial training is equal to a sampling procedure that generates samples from the target domain . Intuitively , direct training of the classifier on samples from the target domain would be the best way to improve the accuracy in that domain ( i.e . adversarial classification accuracy ) . However , it ’ s important to note that the adversarial examples found through the inner optimization only approximately maximize the classification loss , and therefore the adversarial error associated with these samples only act as a lower bound on the true adversarial error and therefore the outer loop of the adversarial training method essentially minimizes a lower bound on the adversarial classification error . 
In contrast to this setup , our proposed method minimizes a conservative upper bound on the adversarial error and therefore is more likely to generalize to a larger set of unseen attacks , and to stronger versions of previously seen attacks ( i.e . ones that generate higher-loss samples in the inner optimization loop ) .
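For concreteness, the three losses of Algorithm 1 (Eqs. 7, 9 and 10) can be written as follows. F_theta, C_phi and D_psi are placeholder modules for the feature extractor, task classifier and class-conditional domain discriminator, and x_adv is produced by some attack π(x, ε); this is a sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def afd_losses(F_theta, C_phi, D_psi, x, x_adv, y):
    z_nat, z_adv = F_theta(x), F_theta(x_adv)
    L_C = F.cross_entropy(C_phi(z_nat), y)                 # Eq. 7: task loss on natural inputs
    L_D = (F.softplus(-D_psi(z_nat, y)) +
           F.softplus(D_psi(z_adv, y))).mean()             # Eq. 9: train the domain discriminator
    L_F = F.softplus(-D_psi(z_adv, y)).mean()              # Eq. 10: feature extractor fools D_psi
    return L_C, L_D, L_F

# Per Algorithm 1: (theta, phi) step on L_C, psi step on L_D, theta step on L_F.
```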
This paper proposes Adversarial Feature Desensitization (AFD) as a defense against adversarial examples. AFD employs a min-max adversarial learning framework in which the classifier learns to encode features of both clean and adversarial images as the same distribution, thereby desensitizing adversarial features. With the aim of fooling a separate discriminator model into categorizing the classifier's adversarial features as coming from clean images, the classifier is trained with the standard cross-entropy loss and adversarial loss terms. The authors show through experiments on the MNIST, CIFAR10 and CIFAR100 datasets that AFD mostly outperforms previous defenses across different adversarial attacks under white- and black-box conditions.
SP:5751b2abad772e44e69e125a769f25892c2a2e30
Syntactic representations in the human brain: beyond effort-based metrics
1 INTRODUCTION . Neuroscientists have long been interested in how the brain processes syntax . To date , there is no consensus on which brain regions are involved in processing it . Classically , only a small number of regions in the left hemisphere were thought to be involved in language processing . More recently , the language system was proposed to involve a set of brain regions spanning the left and right hemisphere ( Fedorenko & Thompson-Schill , 2014 ) . Similarly , some findings show that syntax is constrained to specific brain regions ( Grodzinsky & Friederici , 2006 ; Friederici , 2011 ) , while other findings show syntax is distributed throughout the language system ( Blank et al. , 2016 ; Fedorenko et al. , 2012 ; 2020 ) . The biological basis of syntax was first explored through studies of the impact of brain lesions on language comprehension or production ( Grodzinsky , 2000 ) and later through non-invasive neuroimaging experiments that record brain activity while subjects perform language tasks , using methods such as functional Magnetic Resonance Imaging ( fMRI ) or electroencephalography ( EEG ) . These experiments usually isolate syntactic processing by contrasting the activity between a difficult syntactic condition and an easier one and by identifying regions that increase in activity with syntactic effort ( Friederici , 2011 ) . An example of these conditions is reading a sentence with an object-relative clause ( e.g . “ The rat that the cat chased was tired '' ) , which is more taxing than reading a sentence with a subject-relative clause ( e.g . “ The cat that chased the rat was tired '' ) . In the past decade , this approach was extended to study syntactic processing in naturalistic settings such as when reading or listening to a story ( Brennan et al. , 2012 ; Hale et al. , 2018 ; Willems et al. , 2015 ) . Because such complex material is not organized into conditions , neuroscientists have instead devised effort-based metrics capturing the word-by-word evolving syntactic demands required to understand the material . Brain regions with activity correlated with those metrics are suggested to be involved in processing syntax . We use the term effort-based metrics to refer to uni-dimensional measures capturing word-by-word syntactic demands . A standard approach for constructing a syntactic effort-based metric is to assume a sentence ’ s syntactic representation and estimate the number of syntactic operations performed at each word . Node Count is popular such metric . It relies on constituency trees ( structures that capture the hierarchical grammatical relationship between the words in a sentence ) . While traversing the words of the sentence in order , subtrees of the constituency tree get completed ; Node Count refers to the number of such subtrees that get completed at each word , effectively capturing syntactic load or effort . Brennan et al . ( 2012 ) use Node Count to support the theory that the Anterior Temporal Lobe ( ATL ) is involved in syntactic processing . Another example of an effort-based metric is given by an EEG study by Hale et al . ( 2018 ) . They show that parser action count ( the number of possible actions a parser can take at each word ) is predictive of the P600 , a positive peak in the brain ’ s electrical activity occurring around 600ms after word onset . The P600 is hypothesized to be driven by syntactic processing ( to resolve incongruencies ) , and the results of Hale et al . ( 2018 ) align with this hypothesis . 
Though effort-based metrics are a good proposal for capturing the effort involved in integrating a word into the syntactic structure of a sentence , they are not reflective of the entire syntactic information in play . Hence , these metrics can not be used to study the brain representation of syntactic constructs such as nouns , verbs , relationships and dependencies between words , and the complex hierarchical structure of phrases and sentences . Constituency trees and dependency trees are the two main structures that capture a sentence ’ s syntactic structure . Constituency trees are derived using phrase structure grammars that encode valid phrase and clause structure ( see Figure 1 ( A ) for an example ) . Dependency trees encode relations between pairs of words such as subject-verb relationships . We use representations derived from both types of trees . We derive word level dependency ( DEP ) labels from dependency trees , and we focus on encoding the structural information given by constituency trees since we want to analyze if the brain builds hierarchical representations of phrase structure . We characterize the syntactic structure inherent in sentence constituency trees by computing an evolving vector representation of the syntactic structure processed at each word using the subgraph embedding algorithm by Adhikari et al . ( 2018 ) . We show that our syntactic structure embeddings – along with other simpler syntactic structure embeddings built using conventional syntactic features such as part-of-speech ( POS ) tags and DEP tags – are better than effort-based metrics at predicting the fMRI data of subjects reading text . This indicates that representations of syntax , and not just syntactic effort , can be observed in fMRI . We also address the important question of whether regions that are predicted by syntactic features are selective for syntax , meaning they are only responsive to syntax and not to other language properties such as semantics . To answer this question , we model the semantic properties of words using a contextual word embedding space ( Devlin et al. , 2018 ) . We find that regions that are predicted by syntactic features are also predicted by semantic features and thus are not selective for syntax . Scientific questions We ask three main questions : • How can scientists construct syntactic structure embeddings that capture the syntactic structure inherent in phrases and sentences ? • Are these embeddings better at predicting brain activity compared to effort-based metrics when used as inputs to encoding models ? • Which brain regions are involved in processing complex syntactic structure and are they different from regions involved in semantic processing ? Contributions We make four main contributions : • We propose a subgraph embeddings-based method to model the syntactic structure inherent in phrases and sentences . • We show that effort-based metrics can be complemented by syntactic structure embeddings which can predict brain activity to a larger extent than effort-based metrics . • Using our syntactic structure embeddings , we find some evidence supporting the hypothesis that the brain processes and represents complex syntactic information such as phrase and clause structure . • We find evidence supporting the existing hypothesis that syntactic processing appears to be distributed in the language network in regions that are not selective for syntax . 2 METHODS . We first describe the syntactic features used in this study and their generation . 
All of the features we use are incremental i.e . they are computed per word . We then describe our fMRI data analyses . Effort-based metrics We use four effort-based metrics in our analyses - Node Count , Syntactic Surprisal , word frequency and word length . Node Count is an effort-based metric popular in neuroscience . To compute it , we obtain the constituency tree of each sentence using the self-attentive encoder-based constituency parser by Kitaev & Klein ( 2018 ) . We compute Node Count for each word as the number of subtrees that are completed by incorporating this word into its sentence . Syntactic Surprisal is another effort-based metric proposed by Roark et al . ( 2009 ) and is computed using an incremental top down parser ( Roark , 2001 ) . Both of these metrics aim to measure the amount of effort that is required to integrate a word into the syntactic structure of its sentence . The word frequency metric is computed using the wordfreq package ( Speer et al. , 2018 ) as the Zipf frequency of a word . This is the base-10 logarithm of the number of occurrences per billion of a given word in a large text corpus . Finally , word length is the number of characters in the presented word . The last two metrics approximate the amount of effort that is required to read a word . Constituency tree-based Graph Embeddings ( ConTreGE ) Constituency trees are a rich source of syntactic information . We build three representations of these trees that encode this information : ( a ) The largest subtree which is completed upon incorporating a word into a sentence ( see figure 1 ( B ) ) is representative of the implicit syntactic information given by the word . Given that Node Count reduces all of the information present in these subtrees to just one number , it is easy to see that it can not effectively capture this information . POS tags ( categorize words into nouns , verbs , adjectives , etc . ) also capture some of the information present in these trees as they encode phrase structure to a certain extent . But , they are incapable of completely encoding their hierarchical structure and the parsing decisions which are made while generating them . In order to better encode their structure , we first build subgraph embeddings of these completed subtrees called ConTreGE Comp vectors . ( b ) We hypothesize that the brain not only processes structure seen thus far but also predicts future structure from structure it already knows . To test this , we construct embeddings , simply called ConTreGE vectors , using incomplete subtrees that are constructed by retaining all the phrase structure grammar productions that are required to derive the words seen till now , thereby allowing us to capture future sentence structure ( in the form of future constituents ) before the full sentence is read ( see figure 1 ( C ) ) . These subtrees contain leaves that are non-terminal symbols unlike complete subtrees that only have terminal symbols ( words and punctuation ) as leaves . In this context , a non-terminal symbol is a symbol that can be derived further using some rule in the phrase structure grammar ( ex . NP , VP , etc. ) . If incomplete subtrees are more representative of the brain ’ s processes , it would mean that the brain expects certain phrase structures even before the entire phrase or sentence is read . ConTreGE Comp and ConTreGE vectors need to be built using accurate constituency trees constructed using the whole sentence . Thus , we reuse the trees generated to compute Node Count to build them . 
( c ) Further , the brain could be computing several possible top down partial parses that can derive the words seen thus far ( see figures 1 ( D ) and ( E ) ) and modifying the list of possible parses as future words are read . To test this hypothesis , we designed Incremental ConTreGE ( InConTreGE ) vectors that are representative of the most probable parses so far . For a given word , its InConTreGE vector is computed as : v = ∑5 i=1 e −siWi where Wi is the subgraph embedding of a partial parse tree built by an incremental top-down parser ( Roark 2001 CoLing ) after reading the word and si is the score assigned to this partial parse that is inversely proportional to the parser ’ s confidence in this tree . To effectively capture the structure of all subtrees , we encode them using the subgraph embeddings proposed by Adhikari et al . ( 2018 ) which preserve the neighbourhood properties of subgraphs . A long fixed length random walk on a subgraph is generated to compute its embedding . Since consecutive nodes in a random walk are neighbours , a long walk can effectively inform us about the neighbourhoods of nodes in the subgraph . Each node in a walk is identified using its unique ID . So , a random walk can be interpreted as a “ paragraph '' where the words are the node IDs . Finally , the subgraph ’ s embedding is computed as the Paragraph Vector ( Le & Mikolov , 2014 ) of this paragraph that is representative of the subgraph ’ s structure . Note that all of the subtrees of a given type ( complete , incomplete or partial parse ) are encoded together . This ensures that all ConTreGE Comp vectors , all ConTreGE vectors and all InConTreGE vectors are in our own spaces . Figure 2 illustrates the subtree encoding process . First , every unique non-terminal in the subtrees is mapped to a unique number ( ex . S is mapped to 1 , NP is mapped to 2 , etc . ) and every terminal is mapped to a unique number that is representative of the order in which they were presented ( the first presented token is mapped to 10000 , the second token is mapped to 10001 and so on ) . We did not map each unique terminal to a unique number ( for instance , we did not map all instances of `` Harry '' to one number ) because a random walk through the tree could give us word co-occurrence information and thus lead to the inclusion of some semantic information in the vectors . Every tree node ’ s label is then replaced by the number it was mapped to in the previous step . The edge lists of these subtrees are supplied to the subgraph embedding generation algorithm to finally obtain 15-dimensional vectors for every presented word . The length of the random walks is set to 100000 and we use an extension of the Distributed Bag of Nodes ( DBON ) model proposed by Le & Mikolov ( 2014 ) for generating Paragraph Vectors called Sub2Vec-DBON by Adhikari et al . ( 2018 ) . The length of the sliding window is set to 5 and the model is trained for 20 epochs . Since ConTreGE Comp , ConTreGE and InConTreGE encode information about the neighbourhoods of all nodes in the constituency trees , they can capture their hierarchical structure . Thus , brain regions predicted by these vectors are likely to be involved in building and encoding hierarchical sentence structure . Punctuation We create one-hot binary vectors indicating the type of punctuation that was presented along with a word ( e.g. . or , ) . For example , a sentence might have ended with `` Malfoy. '' . In this punctuation-based feature space , the column corresponding to . 
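As a small illustration of the InConTreGE construction just described, the sketch below forms the weighted combination v = Σ_{i=1..5} e^{−s_i} W_i over the subgraph embeddings of the top-5 partial parses, and shows the node-relabeling scheme (non-terminals mapped to small integers, terminals mapped to position-based IDs starting at 10000). The helper names are hypothetical; in the paper the embeddings W_i come from Sub2Vec-DBON.

```python
import numpy as np

def incontrege_vector(partial_parse_embeddings, parser_scores):
    """v = sum_{i=1..5} exp(-s_i) * W_i, where W_i is the subgraph embedding of
    the i-th most probable partial parse and s_i is its (inverse-confidence) score."""
    W = np.asarray(partial_parse_embeddings)   # shape (5, d)
    s = np.asarray(parser_scores)              # shape (5,)
    return np.exp(-s) @ W                      # shape (d,)

def relabel_nodes(nonterminal_labels, presented_tokens):
    """Map non-terminal labels to small unique IDs and terminals to IDs that only
    encode presentation order (10000, 10001, ...), so no lexical identity leaks
    into the random walks used for the subgraph embeddings."""
    nt_map = {label: i + 1 for i, label in enumerate(sorted(set(nonterminal_labels)))}
    term_map = {pos: 10000 + pos for pos in range(len(presented_tokens))}
    return nt_map, term_map
```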
will be set to 1 for this word . While punctuation is seldom considered a syntactic feature , sentence boundaries are highly correlated with changes in working memory load . These changes are bound to be a great source of variability in the fMRI signal ( as we will observe later ) . Failing to account for sentence boundaries and working memory might be a source of confounding that has been ignored in the literature . Part-of-speech tags and dependency tags We use two standard word-level syntactic features - POS and DEP tags . The POS tag of a word is read off previously generated constituency trees . The DEP tag of a word ( ex . subject , object , etc . ) correspond to its assigned role in the dependency trees of the presented sentences which were generated using the spaCy English dependency parser ( 2 ) . We create one-hot binary vectors indicating the POS tag and the DEP tag of each word and concatenate them to create one feature space which we refer to as simple syntactic structure embeddings . Semantic features We adapt the vectors obtained from layer 12 of a pretrained ( 1 ) cased BERTlarge model ( Devlin et al. , 2018 ) to identify regions that process semantics . We use layer 12 because of previous work showing that middle layers of sentence encoders are optimal for predicting brain activity ( Jain & Huth , 2018 ; Toneva & Wehbe , 2019 ) . We obtain the contextual embeddings for a word by running the pretrained model only on the words seen thus far , preventing the inclusion of future semantic information . Since a presented word can be broken up into multiple subtokens , we compute its embedding as the average of the subtokens ’ embeddings . Using principal component analysis ( PCA ) , we reduce their dimensionality to 15 to match the ConTreGE vectors ’ dimensionality . fMRI data We use the fMRI data of 9 subjects reading chapter 9 of Harry Potter and the Sorcerer ’ s Stone ( Rowling , 2012 ) , collected and made available by Wehbe et al . ( 2014 ) . Words are presented one at a time at a rate of 0.5s each . All the brain plots shown here are averages over the 9 subjects in the Montreal Neurological Institute ( MNI ) space . Preprocessing details are in Appendix B . Predicting brain activity The applicability of a given syntactic feature in studying syntactic processing is determined by its efficacy in predicting the brain data described above . Ridge regression is used to perform these predictions and their coefficient of determination ( R2 score ) measures the feature ’ s efficacy . For each voxel of each subject , the regularization parameter is chosen independently . We use Ridge regression because of its computational efficiency and because of the Wehbe et al . ( 2015 ) results showing that with such fMRI data , as long as the regularization parameter is chosen by cross-validation for each voxel independently , different regularization techniques lead to similar results . Indeed , Ridge regression is a common regularization technique used for predictive fMRI models ( Mitchell et al. , 2008 ; Nishimoto et al. , 2011 ; Wehbe et al. , 2014 ; Huth et al. , 2016 ) . For every voxel , a model is fit to predict the signals Y = [ y1 , y2 , . . . , yn ] recorded in that voxel where n is the number of time points ( TR , or time to repetition ) . The words are first grouped by the TR in which they were presented . Then , the features of words in every group are summed to form a sequence of features X = [ x1 , x2 , . . . , xn ] aligned with the brain signals . 
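A minimal sketch of the feature-alignment step described above: words are grouped by the TR in which they were presented and their feature vectors are summed, producing one feature row per fMRI time point. The repetition time `tr=2.0` and the variable names are assumptions for illustration.

```python
import numpy as np

def features_per_tr(word_features, word_onsets, tr=2.0, n_trs=None):
    """Sum word-level feature vectors within each TR.
    word_features: (n_words, d); word_onsets: onset time of each word in seconds."""
    word_features = np.asarray(word_features, dtype=float)
    tr_index = (np.asarray(word_onsets) // tr).astype(int)
    if n_trs is None:
        n_trs = tr_index.max() + 1
    X = np.zeros((n_trs, word_features.shape[1]))
    np.add.at(X, tr_index, word_features)  # scatter-add each word into its TR
    return X
```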
The response measured by fMRI is an indirect consequence of brain activity that peaks about 6 seconds after stimulus onset . A common solution to account for this delay is to express brain activity as a function of the features of the preceding time points ( Nishimoto et al. , 2011 ; Wehbe et al. , 2014 ; Huth et al. , 2016 ) . Thus , we train our models to predict any yi using xi−1 , xi−2 , xi−3 and xi−4 . We test the models in a cross-validation loop : the data is first split into 4 contiguous and equal sized folds . Each model uses three folds of the data for training and one fold for evaluation . We remove the data from the 5 TRs which either precede or follow the test fold from the training set of folds . This is done to avoid any unintentional data leaks since consecutive yis are correlated with each other because of the lag and continuous nature of the fMRI signal . The brain signals and the word features which comprise the training and testing data for each model are individually Z-scored . After training we obtain the predictions for the validation fold . The predictions for all folds are concatenated ( to form a prediction for the entire experiment in which each time point is predicted from a model trained without the data for that time point ) . Note that since all 3 ConTreGe vectors are stochastic , we construct them 5 times each , and learn a different model each time . The predictions of the 5 models are averaged together into a single prediction . The R2 score is computed for every voxel using the predictions and the real signals . We run a permutation test to test if R2 scores are significantly higher than chance . We permute blocks of contiguous fMRI TRs , instead of individual TRs , to account for the slowness of the underlying hemodynamic response . We choose a common value of 10 TRs ( Deniz et al. , 2019 ) . The predictions are permuted within fold 5000 times , and the resulting R2 scores are used as an empirical distribution of chance performance , from which the p-value of the unpermuted performance is estimated . We also run a bootstrap test to test if a model has a higher R2 score than another . The difference is that in each iteration , we permute ( using the same indices ) the predictions of both models and compute the difference of their R2 and use the resulting distribution to estimate the p-value of the unpermuted difference . Finally , the Benjamni-Hochberg False Discovery Rate correction ( Benjamini & Hochberg , 1995 ) is used for all tests ( appropriate because fMRI data is considered to have positive dependence ( Genovese , 2000 ) ) . The correction is performed by grouping together all the voxel-level p-values ( i.e . across all subjects and feature groups ) and choosing one threshold for all of our results . The correction is done in this way since we test multiple prediction models across multiple voxels and subjects . To compute Region of Interest ( ROI ) statistics , left-hemisphere ROI masks for the language system obtained from a “ sentence vs. non-word '' fMRI contrast ( Fedorenko et al. , 2010 ) are obtained from ( 3 ) and mirrored to obtain the right-hemisphere ROIs .
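The encoding-model pipeline described above can be sketched as follows: each y_i is predicted from the features of the four preceding TRs, a ridge model is fit per voxel with its own cross-validated regularization, and significance is assessed by permuting predictions in blocks of TRs. This is a simplified single-voxel sketch using scikit-learn with illustrative parameters, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def lagged_design(X, lags=(1, 2, 3, 4)):
    """Predict y_i from x_{i-1}, ..., x_{i-4} by concatenating delayed copies of X."""
    n, d = X.shape
    out = np.zeros((n, d * len(lags)))
    for k, lag in enumerate(lags):
        out[lag:, k * d:(k + 1) * d] = X[:-lag]
    return out

def voxelwise_ridge(X_train, Y_train, X_test, alphas=np.logspace(-1, 4, 10)):
    """Fit one ridge model per voxel, choosing the regularization per voxel by CV."""
    preds = np.zeros((X_test.shape[0], Y_train.shape[1]))
    for v in range(Y_train.shape[1]):
        model = RidgeCV(alphas=alphas).fit(X_train, Y_train[:, v])
        preds[:, v] = model.predict(X_test)
    return preds

def block_permutation_pvalue(y, y_hat, block=10, n_perm=5000, seed=0):
    """Permute 1-D predictions in blocks of `block` TRs to build a null R^2 distribution."""
    def r2(a, b):
        return 1.0 - np.sum((a - b) ** 2) / np.sum((a - a.mean()) ** 2)
    rng = np.random.default_rng(seed)
    n = (len(y) // block) * block
    y, y_hat = np.asarray(y)[:n], np.asarray(y_hat)[:n]
    obs = r2(y, y_hat)
    blocks = y_hat.reshape(-1, block)
    null = np.array([r2(y, blocks[rng.permutation(len(blocks))].ravel())
                     for _ in range(n_perm)])
    return (np.sum(null >= obs) + 1) / (n_perm + 1)
```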
This paper derives various types of graph embeddings to encode aspects of syntactic information that the brain may be processing during real-time sentence comprehension. These embeddings, along with indicators of punctuation, POS and dependency tags, and BERT embeddings, are used to predict brain activity recorded via fMRI. The authors argue that this is an improvement over use of effort-based metrics to predict brain activity, as these embeddings contain richer information than is captured by distilling down to a single measure of effort. They show that various brain regions are significantly better predicted by the syntactic embeddings than by the effort-based metrics and POS+dependency indicators. BERT embeddings, however, prove to be a better predictor (than syntactic and other predictors) across much more substantial areas of activity.
SP:95ba9ad102adafaabf9671737e6549728d104629
Analogical Reasoning for Visually Grounded Compositional Generalization
Children acquire language subconsciously by observing the surrounding world and listening to descriptions . They can discover the meaning of words even without explicit language knowledge , and generalize to novel compositions effortlessly . In this paper , we bring this ability to AI , by studying the task of multimodal compositional generalization within the context of visually grounded language acquisition . We propose a multimodal transformer model augmented with a novel mechanism for analogical reasoning , which approximates novel compositions by learning semantic mapping and reasoning operations from previously seen compositions . Our proposed method , Analogical Reasoning Transformer Networks ( ARTNET ) , is trained on raw multimedia data ( video frames and transcripts ) , and after observing a set of compositions such as “ washing apple ” or “ cutting carrot ” , it can generalize and recognize new compositions in new video frames , such as “ washing carrot ” or “ cutting apple ” . To this end , ARTNET refers to relevant instances in the training data and uses their visual features and captions to establish analogies with the query image . Then it chooses a suitable verb and noun to create a new composition that describes the new image best . Extensive experiments on an instructional video dataset demonstrate that the proposed method achieves significantly better generalization capability and recognition accuracy compared to state-of-the-art transformer models . 1 INTRODUCTION . Visually grounded Language Acquisition ( VLA ) is an innate ability of the human brain . It refers to the way children learn their native language from scratch , through exploration , observation , and listening ( i.e. , self-supervision ) , and without taking language training lessons ( i.e. , explicit supervision ) . 2-year-old children are able to quickly learn semantics of phrases and their constituent words after repeatedly hearing phrases like “ washing apple ” , or “ cutting carrot ” and observing such situations . More interestingly , they will also understand new compositions such as “ washing carrot ” or “ cutting apple ” , even before experiencing them . This ability of human cognition is called compositional generalization ( Montague ( 1970 ) ; Minsky ( 1988 ) ; Lake et al . ( 2017 ) ) . It helps humans use a limited set of known components ( vocabulary ) to understand and produce unlimited new compositions ( e.g . verb-noun , adjective-noun , or adverb-verb compositions ) . This is also one of the long-term goals of Artificial Intelligence ( AI ) , e.g . in robotics , where it enables the robot to learn new instructions that they have never heard before . Nevertheless , contemporary machine intelligence needs to overcome several major challenges of the task . On one hand , learning compositional generalization can be difficult without using datahungry models . The power of existing language models mainly rely on large-scale language corpora ( Lake & Baroni ( 2017 ) ; Pennington et al . ( 2014 ) ; Devlin et al . ( 2018 ) ) . They are still inadequate at compositional generalization ( Marcus ( 1998 ) ; Lake & Baroni ( 2018 ) ; Surı́s et al . ( 2019 ) ) . Their goal is to recognize training examples rather than focusing on what is missing from training data . On the other hand , the designed model should close the paradigmatic gap ( Nikolaus et al . ( 2019 ) ) between seen compositions and new compositions . 
For instance , given seen verb-noun compositions “ 1A ” and “ 2B ” ( the digit indicates verb , the letter indicates noun ) , the model should be able to link seen compositions to new compositions ( like “ 1B ” or “ 2A ” ) in completely new cases . Different from previous work ( Johnson et al . ( 2017 ) ; Baradel et al . ( 2018 ) ; Santoro et al . ( 2017 ) ) , we bring the power of compositional generalization to state-of-the-art language models by incorporating Analogical Reasoning ( AR ) ( Gentner & Smith ( 2012 ) ; Littlemore ( 2008 ) ; Vosniadou & Ortony ( 1989 ) ) . An analogy is a comparison between similar concepts or situations , and AR is analogical semantic reasoning that relies upon an analogy . The human brain spontaneously engages in AR to make sense of unfamiliar situations in every day life ( Vamvakoussi ( 2019 ) ) . Inspired by the AR process in the human brain , we design the counterpart for machine language acquisition . To this end , we create a language model that generate appropriate novel compositions by relevant seen compositions , and forming analogies and appropriate arithmetic operations to express the new compositions ( e.g . “ washing carrot ” = “ washing apple ’ + “ cutting carrot ” - “ cutting apple ” ) . We describe this process in three steps : association , reasoning , and inference , as shown in Figure 1 . Given an image ( a video frame in our case ) and a narrative sentence describing it , we mask the main verb-noun composition from the sentence , and ask the model to guess the correct composition that completes the sentence , considering the provided image . To this end , we propose a novel self-supervised and reasoning-augmented framework , Analogical Reasoning Transformer Networks ( ARTNET ) . ARTNET adopts a multimodal transformer ( similar to ViLBERT ( Lu et al . ( 2019 ) ) ) as its backbone to represent visual-textual data in a common space . Then it builds three novel modules on top of the backbone that corresponds to the aforementioned AR steps : association , reasoning , and inference . First , we design Analogical Memory Module ( AMM ) , which discovers analogical exemplars for a given query scenario , from a reference pool of observed samples . Second , we propose Analogical Reasoning Networks ( ARN ) , which takes the retrieved samples as input , selects candidate analogy pairs from the relevant reference samples , and learns proper reasoning operations over the selected analogy pairs , resulting in an analogy context vector . Third , we devise a Conditioned Composition Engine ( CCE ) , which combines the analogy context vector with the representations of the query sample to predict the masked words and complete the target sentence with a novel composition . We show how ARTNET generalizes to new compositions and excels in visually grounded language acquisition by designing experiments in various evaluations : novel composition prediction , assessment of affordance , and sensitivity to data scarcity . The results on the ego-centric video dataset ( EPIC-Kitchens ) demonstrate the effectiveness of the proposed solution in various aspects : accuracy , capability , robustness , etc . The project code is publicly available at https : //github.com/XX . The main contributions of this paper include the following : • We call attention to a challenging problem , compositional generalization , in the context of machine language acquisition , which has seldom been studied . 
• We propose ideas supported by human analogical reasoning : approximating new verb-noun compositions by learned arithmetic operations over relevant compositions seen before . • We propose a novel reasoning-augmented architecture for visually grounded language acquisition , which addresses the compositional generalization problem through association and analogical reasoning . • We evaluate the proposed model in various aspects , such as composition prediction , validity test , and robustness against data scarcity . The results show that ARTNET achieves significant performance improvements in terms of new composition accuracy , over a large-scale video dataset . 2 ARTNET : ANALOGICAL REASONING TRANSFORMER NETWORKS . Our goal is to develop a framework that can support multimodal compositional generalization through learning in a visual-textual environment . The proposed framework learns to acquire the meaning of phrases and words from image-sentence pairs and to create novel compositions via reasoning . We call the framework Analogical Reasoning Transformer Networks ( ARTNET ) , due to its ability to establish analogies with the previously seen , relevant scenarios , and perform reasoning operations to generalize a composition for the new scenario . Figure 2 illustrates an overview of ARTNET , which is composed of a multimodal encoder backbone , followed by three main modules : Analogical Memory Module ( AMM ) , Analogical Reasoning Networks ( ARN ) , and Conditioned Composition Engine ( CCE ) . We elaborate each component in the rest of this section . 2.1 MULTIMODAL ENCODER BACKBONE . The backbone network is responsible for encoding image-sentence pairs into compositional semantic representations . To achieve this , we utilize the emerging multimodal transformers ( e.g . UNITER ( Chen et al . ( 2019 ) ) or ViLBERT ( Lu et al . ( 2019 ) ) ) , which have recently achieved great success in various vision-language tasks . These models take a set of visual and textual tokens ( e.g . objects and words ) , and extract a multimodal embedding for each token , which is contextualized by all the other tokens through layers of multi-head attention . We follow the architecture of UNITER , as it performs slightly better than ViLBERT and other similar models . Note that since our goal is language acquisition , we intentionally do not use the pretrained weights of UNITER , which are trained on a large-scale corpus . Instead , we train the backbone from scratch on our limited data . 2.2 ANALOGICAL MEMORY MODULE . AMM plays the role of analogical association . Like finding a useful puzzle piece , we propose AMM to discover the most useful reference samples for analogical reasoning in a target scenario . Given a target image-sentence pair ( query ) , where some tokens in the sentence are masked , we randomly select N ( N = 200 in our experiments ) sample image-sentence pairs from the training data to create a reference pool , and find the Top-K most relevant exemplars from that pool . To this end , we measure a multimodal relevance score between the query and each reference . Here , we use the initial embedding of each token on the query and reference samples as described in the Section 2.1 . Given a target and a reference sample , we define the multimodal relevance score as a combination of visual and text relevance between the corresponding sets of tokens . For visual tokens , we compute the mean cosine similarity of every pair of tokens from the query and reference token sets . 
For the language part, the contextual background words that are not masked can provide linguistic clues for semantic relevance. Thus, we compute the Jaccard Index (Hamers et al. (1989)) between the two sentences as textual relevance. Specifically, the multimodal relevance score is s_vl = (1/2) · ( |W_T ∩ W_R| / (|W_T ∪ W_R| + 1) + (Σ_i Σ_j cos(v_Ti, v_Rj)) / N_v ), (1) where W_T and W_R are the sets of target words and reference words, N_v is the number of visual token pairs, and v_Ti and v_Rj represent the visual embeddings of the i-th visual token of the query and the j-th visual token of the reference. After computing the scores, AMM ranks the reference samples by their relevance scores and selects the Top-K most relevant samples for the given query. 2.3 ANALOGICAL REASONING NETWORKS . Given the retrieved analogical exemplars, we devise a neural network with reasoning ability that enriches the original representation of the masked composition by making analogies with previously seen compositions. The idea is to exploit the semantic relation mapping between the candidate analogy compositions and the target composition. To this end, we represent the target masked composition as a query vector q by concatenating the multimodal transformer embeddings of the masked words of that composition (typically a verb and a noun from the target sentence) and learning the representations of the ordered constituents of the composition with a Long Short-Term Memory (LSTM) network (Zhou et al. (2015)). Next, we apply the multimodal encoder backbone (described above) to the retrieved analogy samples and parse each sample into candidate analogy compositions (pairs of tokens). Since the goal is language acquisition, we do not rely on predefined grammar rules or pretrained models to generate the candidate compositions, such as applying part-of-speech tagging and taking each verb-noun pair. Instead, we enumerate all pairs of adjacent words from each retrieved sentence, and all pairs of detected image regions from each retrieved image. The resulting multimodal set of pairs is called analogy pairs hereafter. The core of ARN consists of three neural network modules for analogical attention, analogical reasoning, and analogy transformation. Analogical attention learns the importance of each candidate analogy pair with respect to the query vector and generates an analogy aggregation for each modality independently. Analogical reasoning is designed to learn appropriate arithmetic operations over analogy compositions. It consists of modality-wise transformations and Neural Arithmetic Logic Units (Trask et al. (2018)) with multiple layers of Neural Accumulators (NAC) (Trask et al. (2018)). The NAC is a simple but effective operator that supports learning addition and subtraction. This module is applied to the analogy pairs and computes a single vector representing the output of the learned reasoning operations, optimized for our task through gradient descent. Through the analogy transformation, ARN generates the sequential representation of the final analogy context vector.
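The relevance score in Eq. (1) can be computed directly from the token sets and visual embeddings. The sketch below is a straightforward reading of the equation (a Jaccard term with +1 smoothing in the denominator plus the mean pairwise cosine similarity over visual tokens), with illustrative argument names; the Top-K selection follows the AMM ranking step.

```python
import numpy as np

def multimodal_relevance(target_words, ref_words, target_vis, ref_vis):
    """s_vl from Eq. (1): average of a textual Jaccard term and a visual term."""
    wt, wr = set(target_words), set(ref_words)
    jaccard = len(wt & wr) / (len(wt | wr) + 1)
    t = target_vis / np.linalg.norm(target_vis, axis=1, keepdims=True)
    r = ref_vis / np.linalg.norm(ref_vis, axis=1, keepdims=True)
    cosines = t @ r.T                        # one entry per query/reference token pair
    return 0.5 * (jaccard + cosines.mean())  # mean over the N_v pairs

def top_k_references(query, reference_pool, k=10):
    """AMM: rank the reference pool by relevance and keep the Top-K exemplars."""
    scored = [(multimodal_relevance(query["words"], ref["words"],
                                    query["vis"], ref["vis"]), ref)
              for ref in reference_pool]
    return [ref for _, ref in sorted(scored, key=lambda p: p[0], reverse=True)[:k]]
```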
Specifically, ARN can be written as: c_a^m = Σ_j α_ij^m h_j^m, with α_ij = exp a([r_i^m; r_{i+1}^m; q], [r_j^m; r_{j+1}^m; q]) / Σ_k exp a([r_i^m; r_{i+1}^m; q], [r_k^m; r_{k+1}^m; q]) (Analogical Attention, 2); h_c = f_NAC([g_v(c_a^v), g_l(c_a^l)]^T) (Analogical Reasoning, 3); c = LSTM(h_c) (Analogy Transformation, 4), where v and l denote the vision and language modalities, r_i^m and r_{i+1}^m (m is the modality indicator) are the image regions or text words of candidate analogical compositions, g_v and g_l are modality transformations consisting of two-layer fully connected networks with ReLU activation and dropout, and T denotes matrix transpose. The output of ARN is the vector c, called the analogical context vector, which is used to augment the composition representations.
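To illustrate Eqs. (3)-(4), the following PyTorch sketch implements a NAC layer (Trask et al., 2018) and the modality transformations g_v, g_l followed by stacked NAC layers and an LSTM. The hidden dimension, stacking depth, and dropout rate are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class NAC(nn.Module):
    """Neural Accumulator: weights constrained toward {-1, 0, 1} so the layer
    tends to learn addition/subtraction-like operations."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W_hat = nn.Parameter(torch.randn(d_in, d_out) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(d_in, d_out) * 0.1)

    def forward(self, x):
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        return x @ W

class AnalogicalReasoning(nn.Module):
    """h_c = f_NAC([g_v(c_a^v); g_l(c_a^l)]) (Eq. 3), then c = LSTM(h_c) (Eq. 4)."""
    def __init__(self, d):
        super().__init__()
        mlp = lambda: nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                    nn.Dropout(0.1), nn.Linear(d, d))
        self.g_v, self.g_l = mlp(), mlp()
        self.nac = nn.Sequential(NAC(2 * d, d), NAC(d, d))  # stacked NAC layers
        self.lstm = nn.LSTM(d, d, batch_first=True)

    def forward(self, c_v, c_l):
        h_c = self.nac(torch.cat([self.g_v(c_v), self.g_l(c_l)], dim=-1))
        c, _ = self.lstm(h_c.unsqueeze(1))  # analogy context vector
        return c.squeeze(1)
```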
This paper explores the problem of generalizing to novel combinations of verbs and nouns in a task for captioning video stills from videos about cooking. The paper introduces a new dataset based off of EPIC-Kitchens (Damen et al. 2018) which masks out verbs and nouns and splits the evaluation data into seen combinations of verb/noun pairs and unseen combinations of verb/noun pairs, challenging a model to generate captions for pairs which were not seen during training.
SP:7327dc440b5c193c1dda156276860f89594721fa
A Unified Framework for Convolution-based Graph Neural Networks
1 INTRODUCTION . Recent years have witnessed fast development in graph processing through generalizing the convolution operation to graph-structured data, known as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017). Due to this great success, numerous variants of GCNs have been developed and extensively adopted in the fields of social network analysis (Hamilton et al., 2017; Wu et al., 2019a; Veličković et al., 2018), biology (Zitnik et al., 2018), transportation forecasting (Li et al., 2017) and natural language processing (Wu et al., 2019b; Yao et al., 2019). Inspired by GCN, a wide variety of convolution-based graph learning approaches have been proposed to enhance the generalization performance of graph neural networks. Several works aim to achieve higher expressiveness by exploring higher-order information or introducing additional learning mechanisms such as attention modules. Although proposed from different perspectives, there exist connections between these approaches. For example, attention-based GCNs like GAT (Veličković et al., 2018) and AGNN (Thekumparampil et al., 2018) share a similar intention of adjusting the adjacency matrix with a function of edge and node features. Similarly, TAGCN (Du et al., 2017) and MixHop (Kapoor et al., 2019) can be viewed as particular instances of PPNP (Klicpera et al., 2018) under certain approximations. However, the relations among these graph learning models are rarely studied, and comparisons are still limited to analyzing generalization performance on public datasets. As a consequence, we still lack a systematic view of different GCN models and a deep understanding of the relations among them. In this paper, we resort to techniques from graph signal processing and attempt to understand GCN-based approaches from a general perspective. Specifically, we present a unified graph convolution framework by relating graph convolution operations to optimization problems in the graph Fourier domain. We consider a Laplacian regularized least squares optimization problem and show that most convolution-based approaches can be interpreted in this framework by adding carefully designed regularizers. Besides vanilla GCNs, we also extend our framework to formulate non-convolutional operations (Xu et al., 2018a; Hamilton et al., 2017), attention-based GCNs (Veličković et al., 2018; Thekumparampil et al., 2018) and topology-based GCNs (Klicpera et al., 2018; Kapoor et al., 2019), which cover a large fraction of state-of-the-art graph learning approaches. This novel perspective provides a re-interpretation of graph convolution operations, enables a better understanding of the similarities and differences among many widely used GCNs, and may inspire new approaches for designing better models. In summary, our contributions are as follows: 1. We introduce a unified framework for convolution-based graph neural networks and interpret various convolution filters as carefully designed regularizers in the graph Fourier domain, which provides a general methodology for evaluating and relating different graph learning modules. 2. Based on the proposed framework, we provide new insights for understanding the limitations of GCNs and show new directions for tackling common problems and improving the generalization performance of current graph neural networks in the graph Fourier domain.
Additionally, the unified framework can serve as a once-for-all platform for expert-designed modules on convolution-based approaches, where a newly designed module can easily be implemented on other networks as a plug-in with trivial adaptations. We believe that our framework can ease the design of new graph learning modules and the search for better combinations. 3. As a showcase, we present a novel regularization technique under the proposed framework to alleviate the oversmoothing problem in graph representation learning. As shown in Section 4, the newly designed regularizer can be implemented on several convolution-based networks and effectively improves the generalization performance of graph learning models. 2 PRELIMINARY . We start with an overview of the basic concepts of graph signal processing. Let G = (V, A) denote a graph with node feature vectors, where V represents the vertex set consisting of nodes {v_1, v_2, ..., v_N} and A = (a_ij) ∈ R^{N×N} is the adjacency matrix encoding the connectivity between nodes in the graph. Let D = diag(d(1), ..., d(N)) ∈ R^{N×N} be the degree matrix of A, where d(i) = Σ_{j∈V} a_ij is the degree of vertex i. Then L = D − A is the combinatorial Laplacian and L̃ = I − D^{−1/2} A D^{−1/2} is the normalized Laplacian of G. Additionally, we let Ã = A + I and D̃ = D + I denote the augmented adjacency and degree matrices with added self-loops. Then L̃_sym = I − D̃^{−1/2} Ã D̃^{−1/2} (Ã_sym = D̃^{−1/2} Ã D̃^{−1/2}) and L̃_rw = I − D̃^{−1} Ã (Ã_rw = D̃^{−1} Ã) are the augmented symmetric normalized and random-walk normalized Laplacians (augmented adjacency matrices) of G, respectively. Let x ∈ R^N be a signal on the vertices of the graph. The spectral convolution is defined as a function of a filter g_θ parameterized in the Fourier domain (Kipf & Welling, 2017): g_θ ⋆ x = U g_θ(Λ) U^T x, (1) where U and Λ are the eigenvectors and eigenvalues of the normalized Laplacian L̃. Also, we follow Hoang & Maehara (2019) and define the variation Δ and the D̃-inner product as: Δ(x) = Σ_{i,j∈V} a_ij (x(i) − x(j))^2 = x^T L x, (x, y)_D̃ = Σ_{i∈V} (d(i) + 1) x(i) y(i) = x^T D̃ y, (2) which quantify the smoothness and the importance of the signal, respectively. 3 UNIFIED GRAPH CONVOLUTION FRAMEWORK . With the success of GCNs, a wide variety of convolution-based approaches have been proposed that progressively enhance the expressive power and generalization performance of graph neural networks. Despite the effectiveness of GCN and its derivatives on specific tasks, there is still no comprehensive understanding of the relations and differences among various graph learning modules. Graph signal processing is a powerful technique that has been adopted in several graph learning studies (Kipf & Welling, 2017; Hoang & Maehara, 2019; Zhao & Akoglu, 2019). However, existing work mainly focuses on analyzing the properties of GCNs while ignoring the connections between different graph learning modules. In this work, we instead interpret convolution-based approaches from a general perspective using graph signal processing techniques. Specifically, we establish connections between graph convolution operations and optimization problems in graph Fourier space, showing the effect of each module explicitly through specific regularizers. This novel perspective provides a systematic view of different GCN models and a deep understanding of the relations among them.
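For concreteness, the quantities defined in the preliminaries can be computed directly from a dense adjacency matrix. The NumPy sketch below builds the combinatorial and normalized Laplacians, the augmented matrices Ã_rw and Ã_sym, and the variation Δ(x) = x^T L x of Eq. (2). It is an illustrative dense implementation, not an efficient sparse one.

```python
import numpy as np

def graph_operators(A):
    """Return L, the normalized Laplacian, and the augmented A_rw / A_sym matrices."""
    N = A.shape[0]
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                                   # combinatorial Laplacian
    d_is = 1.0 / np.sqrt(np.clip(deg, 1e-12, None))
    L_norm = np.eye(N) - (A * d_is[:, None]) * d_is[None, :]
    A_t = A + np.eye(N)                                    # add self-loops
    d_t = A_t.sum(axis=1)
    A_rw = A_t / d_t[:, None]                              # D_tilde^{-1} A_tilde
    dt_is = 1.0 / np.sqrt(d_t)
    A_sym = (A_t * dt_is[:, None]) * dt_is[None, :]        # D_tilde^{-1/2} A_tilde D_tilde^{-1/2}
    return L, L_norm, A_rw, A_sym

def variation(A, x):
    """The variation Delta(x) = x^T L x from Eq. (2)."""
    L = np.diag(A.sum(axis=1)) - A
    return float(x @ L @ x)
```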
3.1 UNIFIED GRAPH CONVOLUTION FRAMEWORK . Several studies have shown that, in graph signal processing, the representative features are mostly preserved in the low-frequency signals, while noise is mostly contained in the high-frequency signals (Hoang & Maehara, 2019). Based on this observation, numerous graph representation learning methods are designed to attenuate the high-frequency components and can therefore be viewed as low-pass filters in graph Fourier space. With a similar inspiration, we consider a Laplacian regularized least squares optimization problem with graph signal regularizers and connect it with these filters. Definition 1 Unified Graph Convolution Framework. Graph convolution filters can be obtained by solving the following Laplacian regularized least squares optimization: min_X̄ Σ_{i∈V} ‖x̄(i) − x(i)‖²_D̃ + λ L_reg, (3) where ‖x‖_D̃ = √((x, x)_D̃) denotes the norm induced by D̃. In the following sections, we show that a wide range of convolution-based graph neural networks can be derived from Definition 1 with different carefully designed regularizers, and we provide new insights for understanding different graph learning modules from the graph signal perspective. 3.1.1 GRAPH CONVOLUTIONAL NETWORKS . Graph convolutional networks (GCNs) (Kipf & Welling, 2017) are the foundation of numerous graph learning models and have received widespread attention. Several studies have demonstrated that the vanilla GCN is essentially a type of Laplacian smoothing over the whole graph, which makes the features of connected nodes similar. Therefore, to reformulate GCNs in graph Fourier space, we use the variation Δ(x) as the regularizer. Definition 2 Vanilla GCNs. Let {x̄(i)}_{i∈V} be the estimation of the input observation {x(i)}_{i∈V}. The low-pass filter X̄ = Ã_rw X (4) is the first-order approximation of the optimal solution of the following optimization: min_X̄ Σ_{i∈V} ‖x̄(i) − x(i)‖²_D̃ + Σ_{i,j∈V} a_ij ‖x̄(i) − x̄(j)‖²_2. (5) Derivations of the definitions are presented in Appendix A. As the eigenvalues of the approximated filter Ã_rw are bounded by 1, it resembles a low-pass filter that removes the high-frequency signals. By exchanging Ã_rw with Ã_sym (which has the same eigenvalues as Ã_rw), we obtain the same formulation adopted in GCNs. The second term (the variation Δ) in Eq. (5) measures the variation of the estimation x̄ over the graph structure. By adding this regularizer to the objective function, the obtained filter emphasizes the low-frequency signals by minimizing the variation over the local graph structure, while keeping the estimation close to the input in graph Fourier space. 3.1.2 NON-CONVOLUTIONAL OPERATIONS . Residual Connection. The residual connection was first proposed by He et al. (2016) and has been widely adopted in graph representation learning approaches. In the vanilla GCN, the norms of the eigenvalues of the filter Ã_rw (or Ã_sym) are bounded by 1, which ensures numerical stability during training. On the other hand, signals in all frequency bands shrink as convolution layers stack, leading to a consistent information loss. Adding a residual connection is therefore meant to preserve the strength of the input signal. Definition 3 Residual Connection.
A graph convolution filter with a residual connection, X̄ = Ã_rw X + εX, (6) where ε > 0 controls the strength of the residual connection, is the first-order approximation of the optimal solution of the following optimization: min_X̄ Σ_{i∈V} ( ‖x̄(i) − x(i)‖²_D̃ − ε‖x̄(i)‖²_D̃ ) + Σ_{i,j∈V} a_ij ‖x̄(i) − x̄(j)‖²_2. (7) By adding the negative regularizer, which penalizes estimations with small norms, we recover the same formulation as the vanilla graph convolution with a residual connection. Concatenation. Concatenation is in practice a residual connection with different learning weights. Definition 3′ Concatenation. A graph convolution filter concatenated with the input signal, X̄ = Ã_rw X + εXΘΘ^T, (8) is the first-order approximation of the optimal solution of the following optimization: min_X̄ Σ_{i∈V} ( ‖x̄(i) − x(i)‖²_D̃ − ε‖x̄(i)Θ‖²_D̃ ) + Σ_{i,j∈V} a_ij ‖x̄(i) − x̄(j)‖²_2, (9) where ε > 0 controls the strength of the concatenation and Θ is the learning coefficient. Although the learning weights ΘΘ^T have constrained expressive capability, this can be compensated for by the subsequent feature learning modules.
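The filters in Definitions 2, 3 and 3′ then have one-line forms. The sketch below applies the vanilla low-pass filter, the residual variant, and the concatenation variant to a feature matrix X; `eps` and `Theta` stand for the strength scalar and learnable projection in Eqs. (6)-(8), with placeholder values.

```python
import numpy as np

def vanilla_gcn_filter(A_rw, X):
    """Eq. (4): X_bar = A_rw_tilde @ X (first-order low-pass filter)."""
    return A_rw @ X

def residual_filter(A_rw, X, eps=0.1):
    """Eq. (6): X_bar = A_rw_tilde @ X + eps * X."""
    return A_rw @ X + eps * X

def concat_filter(A_rw, X, Theta, eps=0.1):
    """Eq. (8): X_bar = A_rw_tilde @ X + eps * X @ Theta @ Theta^T,
    i.e. a residual connection through a learnable projection."""
    return A_rw @ X + eps * (X @ Theta @ Theta.T)
```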
This paper presents a unified framework for graph convolutional neural networks based on regularized optimization, connecting different variants of graph neural networks including vanilla, attention-based, and topology-based approaches. The authors also propose a novel regularization technique to approach the oversmoothing problem in graph convolution. Experiments on the standard settings of node classification on Citeseer, Cora, and Pubmed prove the effectiveness of the proposed regularization techniques.
SP:5be9a3c39234c10c226c42eec95e29cbddbaf8c0
Benchmarks for Deep Off-Policy Evaluation
1 INTRODUCTION . Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction, such as in robotics (Kober et al., 2013), board games and video games (Tesauro, 1995; Mnih et al., 2013; Vinyals et al., 2019), and recommender systems (Aggarwal et al., 2016). However, this sort of active online interaction is often impractical for real-world problems, where active data collection can be costly (Li et al., 2010), dangerous (Hauskrecht & Fraser, 2000; Kendall et al., 2019), or time consuming (Gu et al., 2017). Batch (or offline) reinforcement learning has been studied extensively in domains such as healthcare (Thapa et al., 2005; Raghu et al., 2018), recommender systems (Dudík et al., 2014; Theocharous et al., 2015; Swaminathan et al., 2017), education (Mandel et al., 2014), and robotics (Kalashnikov et al., 2018). A major challenge with such methods is the off-policy evaluation (OPE) problem, where one must evaluate the expected performance of policies solely from offline data. This is critical for several reasons, including providing high-confidence guarantees prior to deployment (Thomas et al., 2015), and performing policy improvement and model selection (Bottou et al., 2013; Doroudi et al., 2017). The goal of this paper is to provide a standardized benchmark for evaluating OPE methods. Although considerable theoretical (Thomas & Brunskill, 2016; Swaminathan & Joachims, 2015; Jiang & Li, 2015; Wang et al., 2017; Yang et al., 2020) and practical progress (Gilotte et al., 2018; Nie et al., 2019; Kalashnikov et al., 2018) on OPE algorithms has been made in a range of different domains, there are few broadly accepted evaluation tasks that combine complex, high-dimensional problems commonly explored by modern deep reinforcement learning algorithms (Bellemare et al., 2013; Brockman et al., 2016) with standardized evaluation protocols and metrics. (∗Equally major contributors. †Policies and evaluation code are available at https://github.com/google-research/deep_ope; see Section 5 for links to modelling code.) Our goal is to provide a set of tasks with a range of difficulty that exercise a variety of design properties, and to provide policies with different behavioral patterns, in order to establish a standardized framework for comparing OPE algorithms. We put particular emphasis on large datasets, long-horizon tasks, and task complexity to facilitate the development of scalable algorithms that can solve high-dimensional problems. Our primary contribution is the Deep Off-Policy Evaluation (DOPE) benchmark. DOPE is designed to measure the performance of OPE methods by 1) evaluating on challenging control tasks with properties known to be difficult for OPE methods, but which occur in real-world scenarios, 2) evaluating across a range of policies with different values, to directly measure performance on policy evaluation, ranking and selection, and 3) evaluating in ideal and adversarial settings in terms of dataset coverage and support. These factors are independent of task difficulty, but are known to have a large impact on OPE performance. To achieve 1, we selected tasks according to a set of design principles outlined in Section 3.1. To achieve 2, for each task we include 10 to 96 policies for evaluation and devise an evaluation protocol that measures policy evaluation, ranking, and selection as outlined in Section 3.2.
To achieve 3 , we provide two domains with differing dataset coverage and support properties described in Section 4 . Finally , to enable an easy-to-use research platform , we provide the datasets , target policies , evaluation API , as well as the recorded results of state-of-the-art algorithms ( presented in Section 5 ) as open-source . 2 BACKGROUND We briefly review the off-policy evaluation ( OPE ) problem setting . We consider Markov decision processes ( MDPs ) , defined by a tuple ( S , A , T , R , ρ0 , γ ) , with state space S , action space A , transition distribution T ( s′|s , a ) , initial state distribution ρ0 ( s ) , reward function R ( s , a ) and discount factor γ ∈ ( 0 , 1 ] . In reinforcement learning , we are typically concerned with optimizing or estimating the performance of a policy π ( a|s ) . The performance of a policy is commonly measured by the policy value V π , defined as the expected sum of discounted rewards : V π : = Es0∼ρ0 , s1 : ∞ , a0 : ∞∼π [ ∞∑ t=0 γtR ( st , at ) ] . ( 1 ) If we have access to state and action samples collected from a policy π , then we can use the sample mean of observed returns to estimate the value function above . However , in off-policy evaluation we are typically interested in estimating the value of a policy when the data is collected from a separate behavior policy πB ( a|s ) . This setting can arise , for example , when data is being generated online from another process , or in the purely offline case when we have a historical dataset . In this work we consider the latter , purely offline setting . The typical setup for this problem formulation is that we are provided with a discount γ , a dataset of trajectories collected from a behavior policy D = { ( s0 , a0 , r0 , s1 , . . . ) } , and optionally the action probabilities for the behavior policy πB ( at|st ) . In many practical applications , logging action propensities is not possible , for example , when the behavior policy is a mix of ML and hard-coded business logic . For this reason , we focus on the setting without propensities to encourage future work on behavior-agnostic OPE methods . For the methods that require propensities , we estimate the propensities with behavior cloning . The objective can take multiple flavors , as shown in Fig . 1 . A common task in OPE is to estimate the performance , or value , of a policy π ( which may not be the same as πB ) so that the estimated value is as close as possible to V π under a metric such as MSE or absolute error . A second task is to perform policy selection , where the goal is to select the best policy or set of policies out of a group of candidates . This setup corresponds to how OPE is commonly used in practice , which is to find the best performing strategy out of a pool when online evaluation is too expensive to be feasible . 3 DOPE : DEEP OFF-POLICY EVALUATION . The goal of the Deep Off-Policy Evaluation ( DOPE ) benchmark is to provide tasks that are challenging and effective measures of progress for OPE methods , yet is easy to use in order to better facilitate research . Therefore , we design our benchmark around a set of properties which are known to be difficult for existing OPE methods in order to gauge their shortcomings , and keep all tasks amenable to simulation in order for the benchmark to be accessible and easy to evaluate . 3.1 TASK PROPERTIES . 
We describe our motivating properties for selecting tasks for the benchmark as follows: High Dimensional Spaces (H) High dimensionality is a key feature of many real-world domains where it is difficult to perform feature engineering, such as robotics, autonomous driving, and more. In these problems, it becomes challenging to accurately estimate quantities such as the value function without high-capacity models such as neural networks and large datasets with wide state coverage. Our benchmark contains complex continuous-space tasks which exercise these challenges. Long Time-Horizon (L) Long time-horizon tasks are known to present difficult challenges for OPE algorithms. Some algorithms have difficulty doing credit assignment for these tasks. This can be made worse as the state dimension or action dimension increases. Sparse Rewards (R) Sparse reward tasks increase the difficulty of credit assignment and add exploration challenges, which may interact with data coverage in the offline setting. We include a range of robotics and navigation tasks which are difficult to solve due to reward sparsity. Temporally extended control (T) The ability to make decisions hierarchically is a major challenge in many reinforcement learning applications. We include two navigation tasks which require high-level planning in addition to low-level control in order to simulate the difficulty in such problems. 3.2 EVALUATION PROTOCOL The goal of DOPE is to provide metrics for policy ranking, evaluation and selection. Many existing OPE methods have only been evaluated on point estimates of value such as MSE, but policy selection is an important, practical use case of OPE. In order to explicitly measure the quality of using OPE for policy selection, we provide a set of policies with varying values, and devise two metrics that measure how well OPE methods can rank policies. For each task we include a dataset of logged experiences D, and a set of policies {π_1, π_2, ..., π_N} with varying values. For each policy, OPE algorithms must use D to produce an estimate of the policy's value. For evaluation of these estimates, we provide "ground truth values" {V^{π_1}, V^{π_2}, ..., V^{π_N}} that are computed by running each policy for M ≥ 1000 episodes, where the exact value of M is given by the number of episodes needed to lower the error bar on the ground truth values to 0.666. The estimated values are then compared to these ground truth values using three different metrics encompassing both policy evaluation and selection (illustrated in Figure 2; see Appendix A.1 for mathematical definitions). Absolute Error This metric measures estimate accuracy rather than usefulness for ranking. Error is the most commonly used metric to assess the performance of OPE algorithms. We opted to use absolute error instead of MSE to be robust to outliers. Regret@k This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. It is computed by identifying the top-k policies according to the estimated returns. Regret@k is the difference between the actual expected return of the best policy in the entire set and the actual value of the best policy in the top-k set. Rank correlation This metric directly measures how well the estimated values rank policies, by computing the correlation between ordinal rankings according to the OPE estimates and ordinal rankings according to the ground truth values.
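A minimal sketch of this evaluation protocol: ground-truth values are estimated by Monte Carlo rollouts of each policy (assuming a Gymnasium-style env.reset/env.step interface and a policy callable), and the three metrics are then computed from arrays of true and estimated values. Function and argument names are illustrative, not the benchmark's API.

```python
import numpy as np
from scipy.stats import spearmanr

def monte_carlo_value(env, policy, gamma=0.995, n_episodes=1000):
    """Estimate V^pi = E[sum_t gamma^t r_t] by rolling out the policy."""
    returns = []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done, ret, disc = False, 0.0, 1.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            ret += disc * reward
            disc *= gamma
            done = terminated or truncated
        returns.append(ret)
    return float(np.mean(returns))

def absolute_error(v_true, v_est):
    return np.abs(np.asarray(v_true) - np.asarray(v_est))

def regret_at_k(v_true, v_est, k=1):
    """Gap between the best true value overall and the best true value among
    the top-k policies ranked by the OPE estimates."""
    v_true, v_est = np.asarray(v_true), np.asarray(v_est)
    top_k = np.argsort(v_est)[::-1][:k]
    return float(v_true.max() - v_true[top_k].max())

def rank_correlation(v_true, v_est):
    """Spearman correlation between estimated and ground-truth rankings."""
    return spearmanr(v_true, v_est).correlation
```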
DOPE contains two domains designed to provide a more comprehensive picture of how well OPE methods perform in different settings . These two domains are constructed using two benchmarks previously proposed for offline reinforcement learning : RL Unplugged ( Gulcehre et al. , 2020 ) and D4RL ( Fu et al. , 2020 ) , and reflect the challenges found within them . The DOPE RL Unplugged domain is constrained in two important ways : 1 ) the data is always generated using online RL training , ensuring there is adequate coverage of the state-action space , and 2 ) the policies are generated by applying offline RL algorithms to the same dataset we use for evaluation , ensuring that the behavior policy and evaluation policies induce similar state-action distributions . Using it , we hope to understand how OPE methods work as task complexity increases from simple Cartpole tasks to controlling a Humanoid body while controlling for ideal data . On the other hand , the DOPE D4RL domain has : 1 ) data from various sources ( including random exploration , human teleoperation , and RL-trained policies with limited exploration ) , which results in varying levels of coverage of the state-action space , and 2 ) policies that are generated using online RL algorithms , making it less likely that the behavior and evaluation policies share similar induced state-action distributions . Both of these result in distribution shift which is known to be challenging for OPE methods , even in simple tasks . So , using it we hope to measure how well OPE methods work in more practical data settings .
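To make the evaluation protocol of Section 3.2 concrete, here is a minimal sketch of how the three metrics could be computed from a list of OPE estimates and the corresponding ground-truth policy values. The function names and the toy numbers are our own illustration and are not part of the released DOPE evaluation API.

```python
import numpy as np
from scipy.stats import spearmanr

def absolute_error(estimates, true_values):
    """Mean absolute difference between estimated and ground-truth policy values."""
    estimates, true_values = np.asarray(estimates), np.asarray(true_values)
    return np.mean(np.abs(estimates - true_values))

def regret_at_k(estimates, true_values, k):
    """Gap between the best true value in the whole candidate set and the best true
    value among the top-k policies ranked by the OPE estimates."""
    estimates, true_values = np.asarray(estimates), np.asarray(true_values)
    top_k = np.argsort(-estimates)[:k]              # indices of the top-k estimated policies
    return np.max(true_values) - np.max(true_values[top_k])

def rank_correlation(estimates, true_values):
    """Spearman correlation between the rankings induced by estimates and ground truth."""
    corr, _ = spearmanr(estimates, true_values)
    return corr

# Toy usage with 5 candidate policies (values invented for illustration).
true_vals     = [10.0, 7.5, 3.0, 8.2, 1.0]
ope_estimates = [ 8.7, 9.1, 2.5, 6.0, 1.2]
print(absolute_error(ope_estimates, true_vals))    # 1.16
print(regret_at_k(ope_estimates, true_vals, k=1))  # 2.5  (top-1 estimate picks the 2nd-best policy)
print(regret_at_k(ope_estimates, true_vals, k=2))  # 0.0  (the true best policy is in the top-2)
print(rank_correlation(ope_estimates, true_vals))  # 0.7
```

Note that Regret @ k depends only on the ranking induced by the estimates, which is why an estimator with large absolute error can still achieve zero regret.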
This article proposes a benchmark for off-policy evaluation that provides metrics for policy ranking, evaluation and selection. OPE methods estimate policy values from logged data, and the estimates are then scored with absolute error, rank correlation and regret in order to verify the effectiveness of different offline evaluation methods. The benchmark provides two evaluation domains, DOPE RL Unplugged and DOPE D4RL. In the experiments, the authors run state-of-the-art OPE methods on these tasks (largely MuJoCo-based environments) to compare their effectiveness.
SP:dd2a50abff85d2b52b02dfe27cd42e443ea265cf
Triple-Search: Differentiable Joint-Search of Networks, Precision, and Accelerators
1 INTRODUCTION . The powerful performance and prohibitive complexity of deep neural networks ( DNNs ) have fueled a tremendous demand for efficient DNN accelerators which could boost DNN acceleration efficiency by orders-of-magnitude ( Chen et al. , 2016 ) . In response , extensive research efforts have been devoted to developing DNN accelerators . Early works decouple the design of efficient DNN algorithms and their accelerators . On the algorithms level , pruning , quantization , or neural architecture search ( NAS ) are adopted to trim down the model complexity ; On the hardware level , various FPGA-/ASIC-based accelerators have been developed to customize the micro-architectures ( e.g. , processing elements dimension , memory sizes , and network-on-chip design ) and algorithm-to-hardware mapping methods ( e.g. , loop tiling strategies and loop orders ) in order to optimize the acceleration efficiency for a given DNN . Later , hardware-aware NAS ( HA-NAS ) has been developed to further improve DNNs ’ acceleration efficiency for different applications ( Tan et al. , 2019 ) . More recently , it has been recognized that ( 1 ) optimal DNN accelerators require a joint consideration/search for all the following different yet coupled aspects , including DNNs ’ network structure , the adopted precision , and their accelerators ’ micro-architecture and mapping methods , and ( 2 ) merely exploring a subset of these aspects will lead to sub-optimal designs in terms of hardware efficiency or task accuracy . For example , the optimal accelerators for networks with different structures ( e.g. , width , depth , and kernel size ) can be very different ; while the optimal networks and their bitwidths for different accelerators can differ a lot ( Wu et al. , 2019 ) . However , the direction of jointly designing or searching for all the three aspects has only been slightly touched on previously . For example , ( Chen et al. , 2018 ; Gong et al. , 2019 ; Wang et al. , 2020 ) proposed to jointly search for the structure and precision of DNNs for a fixed target hardware ; ( Abdelfattah et al. , 2020 ; Yang et al. , 2020 ; Jiang et al. , 2020a ; b ) made the first attempt to jointly search for the networks and their accelerators , yet either their network or accelerator choices are limited due to the prohibitive time cost required by their adopted reinforcement learning ( RL ) based methods ; and EDD ( Li et al. , 2020 ) contributed a pioneering effort towards this direction by formulating a differentiable joint search framework , which however only consider one single accelerator parameter ( i.e. , parallel factor ) and more importantly , has not yet fully solved the challenges of such joint search . Although differentiable search is one of the most promising ways in terms of search efficiency to explore the huge joint search space as discussed in Sec . 4.2 , plethora of challenges exist to achieve an effective generic joint search for the aforementioned three aspects . First , Challenge 1 : to jointly search for a network and its precision via differentiable search , there exists a dilemma whether to activate all the paths during search . 
On one hand , the required memory consumption can easily explode and thus constrain the search ’ s scalability to more complex tasks if all paths are activated ; on the other hand , partially activating a subset of the paths can lead to a sequential training of different precision on the same weights , which might result in inaccurate accuracy ranking among different precision as discussed in ( Jin et al. , 2020 ) . Second , Challenge 2 : the accelerators ’ parameters are not differentiable , and it is non-trivial to derive the operation-wise hardware-cost penalty in order to perform differentiable search ( in considering search efficiency ) . This is because the optimal accelerator is often determined by the whole network instead of one specific operation/layer due to the fact that some accelerator parameters ( e.g. , the loop order ) need to be optimized for the whole network . In this paper , we aim to address the aforementioned challenges towards scalable generic joint search for the network , precision , and accelerator . Specifically , we make the following contributions : • We propose a Triple-Search ( TRIPS ) framework to jointly search for the network , precision , and accelerator in a differentiable manner to efficiently explore the huge joint search space which can not be afforded by previous RL-based methods due to their required prohibitive search cost . TRIPS identifies and tackles the aforementioned challenges towards scalable generic joint search of the three for maximizing both the accuracy and acceleration efficiency . • We develop a heterogeneous sampling strategy for simultaneous updating the weights and network structures to ( 1 ) avoid the need to sequentially train different precision and ( 2 ) achieve unbiased search with constant memory consumption , i.e. , solve the above Challenge 1 . In addition , we develop a novel co-search pipeline that integrates a differentiable hardware search engine to address the above Challenge 2 . • Extensive experiments and ablation studies validate the effectiveness of our proposed TRIPS framework in terms of the resulting search time , task accuracy , and accelerator efficiency , when benchmarked over state-of-the-art ( SOTA ) co-search/exploration techniques , HA-NAS methods , and DNN accelerators . Furthermore , we visualize the searched accelerators by TRIPS to provide insights towards efficient DNN accelerator design in Appendix . 2 RELATED WORKS . Hardware-aware NAS . Hardware-aware NAS has been proposed to automate the design of efficient DNNs . Early works ( Tan et al. , 2019 ; Howard et al. , 2019 ; Tan & Le , 2019 ) utilize RL-based NAS that requires a massive search time/cost , while recent works ( Wu et al. , 2019 ; Wan et al. , 2020 ; Cai et al. , 2018 ; Stamoulis et al. , 2019 ) explore the design space in a differentiable way ( Liu et al. , 2018 ) with much improved searching efficiency . Along another direction , one-shot NAS methods ( Cai et al. , 2019 ; Guo et al. , 2020 ; Yu et al. , 2020 ) pretrain the supernet and directly evaluate the performances of the sub-networks in a weight sharing manner as a proxy of their independently trained performances at the cost of a longer pretrain time . In addition , NAS has been adopted to search for quantization strategies ( Wang et al. , 2019 ; Wu et al. , 2018 ; Cai & Vasconcelos , 2020 ; Elthakeb et al. , 2020 ) for trimming down the complexity of a given DNN . 
However , these works leave unexplored the hardware design space , which is a crucial enabler for DNN ’ s acceleration efficiency , thus can lead to sub-optimal solutions . DNN accelerators . Motivated by customized accelerators ’ large potential gains , SOTA accelerators ( Du et al. , 2015 ; Chen et al. , 2017 ) innovate micro-architectures and algorithm-to-hardware mapping methods to optimize the acceleration efficiency , given a DNN and the hardware specifications . However , it is non-trivial to design an optimal accelerator as it requires cross-disciplinary knowledge in algorithm , micro-architecture , and circuit design . SOTA accelerator design relies on either experts ’ manual design , which is very time consuming or design flow ( Chen et al. , 2005 ; 2009 ; Rupnow et al. , 2011 ) and DNN accelerator design automation ( Wang et al. , 2016 ; Zhang et al. , 2018a ; Guan et al. , 2017 ; Venkatesan et al. , 2019 ; Wang et al. , 2018a ; Gao et al. , 2017 ) . As they merely explore the accelerator design space , they can result in sub-optimal solutions as compared to SOTA co-search/exploration methods and our TRIPS framework . Co-exploration/search techniques . Pioneering efforts have been made towards jointly searching of DNNs and their accelerators to some extent . For joint searching of DNNs and their precision , ( Chen et al. , 2018 ; Gong et al. , 2019 ; Wang et al. , 2020 ) adopt either differentiable or evolutionary algorithms yet without exploring their hardware accelerators . For joint searching of DNNs and their accelerators , ( Abdelfattah et al. , 2020 ; Yang et al. , 2020 ; Jiang et al. , 2020a ; b ) conduct RL-based search for the networks and some accelerator parameters/templates , where they strictly constrain the search space of the network or accelerator to achieve a practical RL search time , limiting their scalability and achievable efficiency . ( Lin et al . ) is another pioneering work which co-designs the newtork and accelerator in a sequential manner based on the fact that the accelerator ’ s design cycle is longer than the networks . EDD ( Li et al. , 2020 ) extends differentiable NAS to search for layer-wise precision and the accelerators ’ parallel factor , which is most relevant to our TRIPS . EDD has not yet fully solved the joint search challenges . First , it does not discuss or address the potentially explosive memory consumption issue of such joint search ; second , EDD ’ s accelerator search space only includes the parallel factor , which can be strictly limited to their accelerator template and can not generalize to include common accelerator parameters such as the tiling strategies . Built upon prior art , our TRIPS targets a scalable generic joint search framework to optimally search for the network , its precision , and adopted accelerator in a differentiable manner for improving efficiency . 3 THE PROPOSED TECHNIQUES . In this section , we describe our proposed techniques for enabling TRIPS , where Sec . 3.1 provides TRIPS ’ s formulation , Sec . 3.2 and Sec . 3.3 introduce TRIPS ’ s enablers that address the key challenges of scalable generic joint search for networks , precision , and accelerators , and Sec . 3.4 unifies the enablers to build a comprehensive co-search framework . 3.1 TRIPS : FORMULATION . Fig . 1 shows an overview of TRIPS , which jointly searches for the networks ( e.g. , kernel size , channel expansion , and group number ) , precision ( e.g. , 4-/6-/8-/12-/16-bit ) , and the accelerators ( e.g. 
, PE array type , buffer size , and tiling strategies of each memory hierarchy ) in a differentiable manner . TRIPS targets a scalable yet generic joint search framework , which we formulate as a bi-level optimization problem :

$$\min_{\alpha,\beta}\;\mathcal{L}_{\mathrm{val}}\big(\omega^{*},\mathrm{net}(\alpha),\mathrm{prec}(\beta)\big)+\lambda\,\mathcal{L}_{\mathrm{cost}}\big(\mathrm{hw}(\gamma^{*}),\mathrm{net}(\alpha),\mathrm{prec}(\beta)\big) \quad (1)$$
$$\text{s.t.}\quad \omega^{*}=\arg\min_{\omega}\mathcal{L}_{\mathrm{train}}\big(\omega,\mathrm{net}(\alpha),\mathrm{prec}(\beta)\big), \quad (2)$$
$$\text{s.t.}\quad \gamma^{*}=\arg\min_{\gamma}\mathcal{L}_{\mathrm{cost}}\big(\mathrm{hw}(\gamma),\mathrm{net}(\alpha),\mathrm{prec}(\beta)\big), \quad (3)$$

where α , β , and γ are the continuous variables parameterizing the probability of different choices for the network operators , precision bitwidths , and accelerator parameters as in ( Liu et al. , 2018 ) , ω is the supernet weights , Ltrain , Lval , and Lcost are the training loss , the validation loss , and the hardware-cost loss , and net ( α ) , prec ( β ) , and hw ( γ ) denote the network , precision , and accelerator characterized by α , β , γ , respectively .
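As a rough illustration of how one slice of this bi-level problem can be made differentiable, the sketch below relaxes the layer-wise precision choice with Gumbel-Softmax sampling and returns an expected hardware-cost term that can be added to the loss as in Eq. (1). It is not the TRIPS implementation: the straight-through quantizer, the per-bitwidth cost table, and the fact that all precision branches are evaluated (the very memory issue TRIPS's heterogeneous sampling avoids) are simplifications made for this example.

```python
import torch
import torch.nn.functional as F

class PrecisionSearchLayer(torch.nn.Module):
    """Linear layer whose weight bit-width is chosen by differentiable (Gumbel-Softmax) sampling.

    `beta` holds the logits over candidate bit-widths; the expected hardware cost of the
    sampled choice is returned alongside the output so it can be added to the training loss."""
    def __init__(self, in_features, out_features, bitwidths=(4, 6, 8, 16)):
        super().__init__()
        self.linear = torch.nn.Linear(in_features, out_features)
        self.bitwidths = bitwidths
        self.beta = torch.nn.Parameter(torch.zeros(len(bitwidths)))      # precision logits
        # Toy per-bitwidth cost proxy; a real flow would query a hardware model hw(gamma).
        self.register_buffer("cost", torch.tensor([b / 16.0 for b in bitwidths]))

    @staticmethod
    def fake_quant(w, bits):
        """Straight-through uniform quantization of weights to `bits` bits."""
        scale = w.detach().abs().max() / (2 ** (bits - 1) - 1) + 1e-8
        w_q = torch.round(w / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
        return w + (w_q - w).detach()          # gradient flows through w unchanged

    def forward(self, x, tau=1.0, hard=True):
        probs = F.gumbel_softmax(self.beta, tau=tau, hard=hard)          # one-hot in the forward pass
        out = sum(p * F.linear(x, self.fake_quant(self.linear.weight, b), self.linear.bias)
                  for p, b in zip(probs, self.bitwidths))
        hw_cost = (probs * self.cost).sum()    # differentiable hardware-cost penalty, cf. Eq. (1)
        return out, hw_cost
```

A full joint search would add analogous relaxations for the operator choices α and the accelerator parameters γ, with the cost term supplied by a differentiable hardware model rather than a fixed table.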
This paper proposes Triple-Search (TRIPS), a differentiable framework for jointly searching over the network architecture, quantization precision, and accelerator parameters. To address the dilemma between exploding training memory and biased search, the framework leverages heterogeneous sampling, where a soft Gumbel Softmax is used for the weight updates and a hard Gumbel Softmax is used for the precision probabilities \beta. To integrate accelerator search, a hard Gumbel Softmax is also applied to the hardware design choices and the overall hardware cost is used as a penalty. Experiments are conducted on an FPGA platform with the CIFAR and ImageNet datasets to show the superiority of TRIPS over NAS-only methods.
SP:1037f94ce6eae4a42ea7913c76007f5f3c26aeaf
Gradient Based Memory Editing for Task-Free Continual Learning
1 INTRODUCTION . Accumulating past knowledge and adapting to evolving environments are one of the key traits in human intelligence ( McClelland et al. , 1995 ) . While contemporary deep neural networks have achieved impressive results in a range of machine learning tasks Goodfellow et al . ( 2015 ) , they haven ’ t yet manifested the ability of continually learning over evolving data streams ( Ratcliff , 1990 ) . These models suffer from catastrophic forgetting ( McCloskey & Cohen , 1989 ; Robins , 1995 ) when trained in an online fashion—i.e. , performance drops over previously seen examples during the sequential learning process . To this end , continual learning ( CL ) methods are developed to alleviate catastrophic forgetting issue when models are trained on non-stationary data streams ( Goodfellow et al. , 2013 ) . Most existing work on continual learning assume that , when models are trained on a stream of tasks sequentially , the task specifications such as task boundaries or identities are exposed to the models . These task-aware CL methods make explicit use of task specifications to avoid catastrophic forgetting issue , including consolidating important parameters on previous tasks ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Nguyen et al. , 2018 ) , distilling knowledge from previous tasks ( Li & Hoiem , 2017 ; Rannen et al. , 2017 ) , or separating task-specific model parameters ( Rusu et al. , 2016 ; Serrà et al. , 2018 ) . However , in practice , it is more likely that the data instances comes in a sequential , non-stationary fashion without task identity or boundary—a setting that is commonly termed as task-free continual learning ( Aljundi et al. , 2018 ) . To tackle this setting , recent attempts on task-free CL methods have been made ( Aljundi et al. , 2018 ; Zeno et al. , 2018 ; Lee et al. , 2020 ) . These efforts revolve around regularization and model expansion based approaches , which rely on inferring task boundaries or identities ( Aljundi et al. , 2018 ; Lee et al. , 2020 ) and perform online paramater importance estimation ( Zeno et al. , 2018 ) , to consolidate or separate model parameters . In another line of efforts , memory-based CL methods have achieved strong results in task-free setting Aljundi et al . ( 2019b ) . These methods store a small set of previously seen instances in a fix-sized memory , and utilize them for replay ( Robins , 1995 ; Rolnick et al. , 2019 ) or regularization ( Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2019a ) . The core problem in memory-based CL methods is how to manage the memory instances ( e.g. , which to replace with new instances ) and replay them given a restricted computation budget , so that the model performance can be maximally preserved or 1Code has been uploaded in the supplementary materials and will be published . enhanced . Prior work developing these methods have tried to either identify : 1 ) what instances to include in memory from a data stream ( Aljundi et al. , 2019b ; Rebuffi et al. , 2017 ; Chaudhry et al. , 2019b ) ; and 2 ) which instances in memory need to be replayed at what training step ( Aljundi et al. , 2019a ) . In this paper , we provide a new approach to solving the memory management problem in task-free continual learning by studying how to make gradient updates on stored memory examples . We develop a novel memory editing algorithm which complements existing memory-replay methods and data-sampling strategies for memory management ( updates ) . 
The challenge is to propose a plausible and sound optimization objective of editing . We employ the same intuition as previous study ( Toneva et al. , 2019 ; Chaudhry et al. , 2020 ; Aljundi et al. , 2019a ) : examples that are likely to be forgotten should be prioritized . Our proposed method , named Gradient-based Memory EDiting ( GMED ) , edits examples stored in the memory with gradient-based updates so that they are more likely to be forgotten . Specifically , we estimate the “ forgetting ” of a stored example by its loss increase in the upcoming one online model update . Finally , we perform gradient ascent on stored examples so that they are more likely to be forgotten . Experiments show that our algorithm consistently outperforms baselines on five benchmark datasets under various memory sizes . Our ablation study shows the proposed editing mechanism outperforms alternative editing strategies such as random editing . We demonstrate that the proposed algorithm is general enough to be used with other strong ( more recent ) memory-based CL methods to further enhance performance , thus allowing for improvements in many benchmark datasets . 2 RELATED WORKS . Task-aware Continual Learning . Most of continual learning algorithms are studied under “ taskaware ” settings , where the model visits a sequence of clearly separated “ tasks ” . A great portion of algorithms make explicit use of task boundaries ( Kirkpatrick et al. , 2017 ; Rusu et al. , 2016 ; Lopez-Paz & Ranzato , 2017 ) , by learning separate parameters for each task , or discourage changes of parameters that are important to old tasks . Existing continual learning algorithms can be summarized into three categories : regularization-based , architecture-based and data-based approaches . Regularization based approaches ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Nguyen et al. , 2018 ; Adel et al. , 2020 ) discourage the change of parameters that are important to previous data . Model expansionbased approaches ( Rusu et al. , 2016 ; Serrà et al. , 2018 ; Li et al. , 2019 ) allows expansion of model architecture to separate parameters for previous and current data . Data-based approaches ( Robins , 1995 ; Shin et al. , 2017 ; Lopez-Paz & Ranzato , 2017 ) replay or constrain model updates with real or synthetic examples . Task-free Continual Learning . Recently , task-free continual learning ( Aljundi et al. , 2018 ) have drawn increasing interest , where we do not assume knowledge about task boundaries . To the best of our knowledge , only a handful number of regularization based ( Zeno et al. , 2018 ; Aljundi et al. , 2018 ) , model-expansion based ( Lee et al. , 2020 ) , generative replay based ( Rao et al. , 2019 ) , continual meta-learning and meta-continual learning ( He et al. , 2019 ; Caccia et al. , 2020 ; Harrison et al. , 2020 ) approaches are applicable in the task-free CL setting . Meanwhile , most memory based continual learning algorithms are applicable to the task-free setting ( Aljundi et al. , 2019a ; b ) . Memory-based CL algorithms such as Experience Replay ( ER ) ( Robins , 1995 ) store a subset of examples in a fix-sized replay memory and utilize them later at training to alleviate forgetting . Recent research has studied online strategies to improve the performance gain when examples get replayed from two dimensions : in terms of which examples to store , and which examples to replay . For example , in terms of deciding which examples to store , Gradient based Sample Selection ( GSS ) ( Aljundi et al. 
, 2019b ) proposes to store most diverse examples . In terms of deciding which examples to replay , maximally Interfering Retrieval ( MIR ) ( Aljundi et al. , 2019a ) select examples with the largest estimated forgetting . In particular , a task-aware approach , Hindsight Anchor Learning ( HAL ) ( Chaudhry et al. , 2020 ) , shares the same assumption that forgettable examples should be prioritized more . However , HAL only applies to task-aware settings and requires extra memory storage to keep track of the learned anchors . Figure 1 shows a categorization of memory-based task-free continual learning . 3 PRELIMINARIES . In this section we first present the problem formulation of task-free continual learning and then introduce preliminaries on memory-based continual learning methods . 3.1 PROBLEM FORMULATION . In task-free continual learning , we consider a ( potentially infinite ) stream of data examples D , which have a non-stationary data distribution , i.e. , the data distribution P ( x , y ) over time.At each time step t , the model receives a single or a mini batch of labeled examples ( xt , yt ) from the data stream D. For simplicity , here we assume that example ( xt , yt ) from D is generated by : first sampling a latent “ task ” z ∼ P ( z ; t ) , followed by sampling a data example from a joint data distribution P ( x , y|z ; t ) that is conditioned on task z , i.e. , ( xt , yt ) ∼ P ( x , y|z ; t ) . Here P ( z ; t ) is non-i.i.d and time-dependent . Similarly , P ( x , y|z ; t ) also changes over time . The goal of task-free online continual learning is to seek a classification model f ( x ; θ ) , parameterized by θ , over new example ( s ) ( x , y ) from the data stream D that minimizes a predefined loss ` ( x , y ; θ ) while not increasing the loss on previously seen examples . This capability is evaluated by testing the model over a test set of all visited tasks . 3.2 MEMORY-BASED CL METHODS . Briefly , memory-based CL algorithms maintain a fix-sized replay memory M which is used to store ( subset of ) previously seen examples ( xt , yt ) from the stream D. When the memory is full , the algorithm needs to either identify a memory example ( x , y ) to be replaced by new example , or to discard the new example it just received . Following the same setup in previous memory-based CL methods , our experiments use reservoir sampling ( Vitter , 1985 ) to determine how the memory will be updated with new examples received from stream D. Every time the model receives a new example , it draws an integer j between 0 and N randomly , where N is the number of examples visited so far . If j < |M | ( i.e. , the memory size or budget ) , it replace the example at the j-th position in the memory with the new example ; otherwise , this newly received example will be discarded . Reservoir sampling ensures at each time step each visited example is kept with an equal probability |M |/N . At each time step t , the algorithm also needs to determine the memory examples to be used for replay . Similar to previous methods , we randomly sample one or a mini-batch of examples ( x , y ) from the memory M . As an alternative replay strategy , MIR ( Aljundi et al. , 2019a ) identifies a subset of memory examples based on a predefined optimization objective ( i.e , perform one step of training on ( x , y ) ) , and then replays the selected examples . 4 GRADIENT BASED MEMORY EDITING . 
We propose Gradient based Memory Editing ( GMED ) , a novel algorithm for updating stored memory examples in an online fashion . We state our hypothesis about which examples should be stored in Sec . 4.1 . We then formulate an online optimization objective for example editing in Sec . 4.2 . In Sec . 4.3 , we introduce algorithmic details of GMED and its integration with MIR .
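A minimal sketch of the editing idea, under our reading of Sections 3.2 and 4, is given below: reservoir sampling maintains the fixed-size memory, and before replay the drawn memory examples are moved one gradient-ascent step along their estimated "forgetting" (the loss increase they would suffer after the upcoming update on the new batch). The helper names, the single edit step, and the fixed edit learning rate are our simplifications, not the authors' exact procedure.

```python
import copy
import random
import torch
import torch.nn.functional as F

def reservoir_update(memory, example, n_seen, capacity):
    """Reservoir sampling (Vitter, 1985): every example seen so far is kept with
    probability capacity / n_seen."""
    if len(memory) < capacity:
        memory.append(example)
    else:
        j = random.randint(0, n_seen - 1)
        if j < capacity:
            memory[j] = example

def gmed_edit(model, lr, x_mem, y_mem, x_new, y_new, edit_lr=0.1):
    """One editing step: estimate each stored example's 'forgetting' (loss increase after
    the upcoming SGD step on the new batch) and move the stored inputs by gradient
    ascent on that estimate, so that more forgettable examples are emphasized."""
    # Look-ahead model: one plain SGD step on the incoming batch.
    lookahead = copy.deepcopy(model)
    opt = torch.optim.SGD(lookahead.parameters(), lr=lr)
    opt.zero_grad()
    F.cross_entropy(lookahead(x_new), y_new).backward()
    opt.step()

    # Estimated forgetting, viewed as a function of the stored inputs.
    x_edit = x_mem.clone().detach().requires_grad_(True)
    forgetting = F.cross_entropy(lookahead(x_edit), y_mem) - F.cross_entropy(model(x_edit), y_mem)
    (grad_x,) = torch.autograd.grad(forgetting, x_edit)
    return (x_edit + edit_lr * grad_x).detach()      # ascend: make the examples more forgettable
```

The edited inputs would then overwrite their slots in the memory and be replayed together with the incoming batch, as in standard Experience Replay.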
This paper deals with continual learning. Specifically, given a stream of tasks, we want to maximise performance across all tasks. Typically, neural networks suffer from catastrophic forgetting, which results in worse performance on tasks seen earlier in training. There are many proposed solutions to this problem. One specific family of approaches is "memory-based" algorithms. Here we store some training examples from the tasks seen thus far in a memory. These are then mixed in with new training data so as to encourage the model not to forget past tasks.
SP:d850572819200f79545616fc92e789ce958b30d4
Improving Transformation Invariance in Contrastive Representation Learning
1 INTRODUCTION . Learning meaningful representations of data is a central endeavour in artificial intelligence . Such representations should retain important information about the original input whilst using fewer bits to store it ( van der Maaten et al. , 2009 ; Gregor et al. , 2016 ) . Semantically meaningful representations may discard a great deal of information about the input , whilst capturing what is relevant . Knowing what to discard , as well as what to keep , is key to obtaining powerful representations . By defining transformations that are believed a priori to distort the original without altering semantic features of interest , we can learn representations that are ( approximately ) invariant to these transformations ( Hadsell et al. , 2006 ) . Such representations may be more efficient and more generalizable than lossless encodings . Whilst less effective for reconstruction , these representations are useful in many downstream tasks that relate only to the semantic features of the input . Representation invariance is also a critically important task in of itself : it can lead to improved robustness and remove noise ( Du et al. , 2020 ) , afford fairness in downstream predictions ( Jaiswal et al. , 2020 ) , and enhance interpretability ( Xu et al. , 2018 ) . Contrastive learning is a recent and highly successful self-supervized approach to representation learning that has achieved state-of-the-art performance in tasks that rely on semantic features , rather than exact reconstruction ( van den Oord et al. , 2018 ; Hjelm et al. , 2018 ; Bachman et al. , 2019 ; He et al. , 2019 ) . These methods learn to match two different transformations of the same object in representation space , distinguishing them from contrasts that are representations of other objects . The objective functions used for contrastive learning encourage representations to remain similar under transformation , whilst simultaneously requiring different inputs to be well spread out in representation space ( Wang & Isola , 2020 ) . As such , the choice of transformations is key to their success ( Chen et al. , 2020a ) . Typical choices include random cropping and colour distortion . However , representations are compared using a similarity function that can be maximized even for representations that are far apart , meaning that the invariance learned is relatively weak . Unfor∗Equal contribution tunately , directly changing the similarity measure hampers the algorithm ( Wu et al. , 2018 ; Chen et al. , 2020a ) . We therefore investigate methods to improve contrastive representations by explicitly encouraging stronger invariance to the set of transformations , without changing the core selfsupervized objective ; we look to extract more information about how representations are changing with respect to transformation , and use this to direct the encoder towards greater invariance . To this end , we first develop a gradient regularization term that , when included in the training loss , forces the encoder to learn a representation function that varies slowly with continuous transformations . This can be seen as constraining the encoder to be approximately transformation invariant . We demonstrate empirically that while the parameters of the transformation can be recovered from standard contrastive learning representations using just linear regression , this is no longer the case when our regularization is used . 
Moreover , our representations perform better on downstream tasks and are robust to the introduction of nuisance transformations at test time . Test representations are conventionally produced using untransformed inputs ( Hjelm et al. , 2018 ; Kolesnikov et al. , 2019 ) , but this fails to combine information from different transformations and views of the object , or to emulate settings in which transformation noise can not simply be removed at test time . Our second key proposal is to instead create test time representations by feature averaging over multiple , differently transformed , inputs to address these concerns and to more directly impose invariance . We show theoretically that this leads to improved performance under linear evaluation protocols , further confirming this result empirically . We evaluate our approaches first on CIFAR-10 and CIFAR-100 ( Krizhevsky et al. , 2009 ) , using transformations appropriate to natural images and evaluating on a downstream classification task . To validate that our ideas transfer to other settings , and to use our gradient regularizer within a fully differentiable generative process , we further introduce a new synthetic dataset called Spirograph . This provides a greater variety of downstream regression tasks , and allows us to explore the interplay between nuisance transformations and generative factors of interest . We confirm that using our regularizer during training and our feature averaging at test time both improve performance in terms of transformation invariance , downstream tasks , and robustness to train–test distributional shift . In summary , the contributions of this paper are as follows : • We derive a novel contrastive learning objective that leads to more invariant representations . • We propose test time feature averaging to enforce further invariance . • We introduce the Spirograph dataset . • We show empirically that our approaches lead to more invariant representations and achieve state-of-the-art performance for existing downstream task benchmarks . 2 PROBABILISTIC FORMULATION OF CONTRASTIVE LEARNING . The goal of unsupervized representation learning is to encode high-dimensional data , such as images , retaining information that may be pertinent to downstream tasks and discarding information that is not . To formalize this , we consider a data distribution p ( x ) on X and an encoder fθ : X → Z which is a parametrized function mapping from data space to representation space . Contrastive learning is a self-supervized approach to representation learning that learns to make representations of differently transformed versions of the same input more similar than representations of other inputs . Of central importance is the set of transformations , also called augmentations ( Chen et al. , 2020a ) or views ( Tian et al. , 2019 ) , used to distort the data input x . In the common application of computer vision , it is typical to include resized cropping ; brightness , contrast , saturation and hue distortion ; greyscale conversion ; and horizontal flipping . We will later introduce the Spirograph dataset which uses quite different transformations . In general , transformations are assumed to change the input only cosmetically , so all semantic features such as the class label are preserved ; the set of transformations indicates changes which can be safely ignored by the encoder . Formally , we consider a transformation set T ⊆ { t : X → X } and a probability distribution p ( t ) on this set . 
A representation z of x is obtained by applying a random transformation t to x and then encoding the result using fθ . Therefore , we do not have one representation of x , but an implicit distribution p ( z|x ) . A sample of p ( z|x ) is obtained by sampling t ∼ p ( t ) and setting z = fθ ( t ( x ) ) . If the encoder is to discard irrelevant information , we would expect different encodings of x formed with different transformations t to be close in representation space . Altering the transformation should not lead to big changes in the representations of the same input . In other words , the distribution p ( z|x ) should place most probability mass in a small region . However , this does not provide a sufficient training signal for the encoder fθ as it fails to penalize trivial solutions in which all x are mapped to the same z . To preserve meaningful information about the input x whilst discarding purely cosmetic features , we should require p ( z|x ) to be focused around a single z whilst simultaneously requiring the representations of different inputs not to be close . That is , the marginal p ( z ) = Ep ( x ) [ p ( z|x ) ] should distribute probability mass over representation space . This intuition is directly reflected in contrastive learning . Most state-of-the-art contrastive learning methods utilize the InfoNCE objective ( van den Oord et al. , 2018 ) , or close variants of it ( Chen et al. , 2020a ) . InfoNCE uses a batch x1 , ... , xK of inputs , from which we form pairs of representations ( z1 , z′1 ) , ... , ( zK , z ′ K ) by applying two random transformations to each input followed by the encoder fθ . In probabilistic language xi ∼ p ( x ) for i = 1 , ... , K ( 1 ) zi , z ′ i ∼ p ( z|x = xi ) conditionally independently given xi , for i = 1 , ... , K , ( 2 ) such that zi , z′i = fθ ( t ( x ) ) , fθ ( t ′ ( x ) ) for i.i.d . transformations t , t′ ∼ p ( t ) . Given a learnable similarity score sφ : Z × Z → R , contrastive learning methods minimize the following loss L ( θ , φ ) = − 1 K K∑ i=1 sφ ( zi , z ′ i ) + 1 K K∑ i=1 log K∑ j=1 exp [ sφ ( zi , z ′ j ) ] . ( 3 ) Written in this way , we see that the loss will be minimized when sφ ( zi , z′i ) is large , but sφ ( zi , z ′ j ) is small for i 6= j . In other words , InfoNCE makes the two samples zi , z′i of p ( z|x = xi ) similar , whilst making samples zi , z′j of p ( z ) dissimilar . This can also be understood through the lens of mutual information , for more details see Appendix A . In practice , the similarity measure used generally takes the form ( Chen et al. , 2020a ) sφ ( z , z ′ ) = gφ ( z ) > gφ ( z ′ ) τ‖gφ ( z ) ‖2‖gφ ( z′ ) ‖2 ( 4 ) where gφ is a small neural network and τ is a temperature hyperparameter . If the encoder fθ is perfectly invariant to the transformations , then zi = z′i and sφ ( zi , z ′ i ) will be maximal . However , there are many ways to maximize the InfoNCE objective without encouraging strong invariance in the encoder.1 In this paper , we show how we can learn stronger invariances , above and beyond what is learned through the above approach , and that this benefits downstream task performance . 3 INVARIANCE BY GRADIENT REGULARIZATION . Contrastive learning with InfoNCE can gently encourage invariance by maximizing sφ ( z , z′ ) , but does not provide a strong signal to ensure this invariance . Our first core contribution is to show how we can use gradient methods to directly regulate how the representation changes with the transformation and thus ensure the desired invariance . 
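Before turning to the regularizer, the InfoNCE loss of Eq. (3) with the cosine-similarity score of Eq. (4) can be written in a few lines. The sketch below assumes the two batches z and z_prime were produced by encoding two random transformations of the same K inputs and, for brevity, uses an identity projection head in place of g_phi.

```python
import torch
import torch.nn.functional as F

def info_nce(z, z_prime, temperature=0.5):
    """InfoNCE loss of Eq. (3) with the cosine-similarity score of Eq. (4).

    z, z_prime: (K, d) representations of two transformations of the same K inputs
    (the learnable projection head g_phi is omitted here for brevity)."""
    z = F.normalize(z, dim=1)
    z_prime = F.normalize(z_prime, dim=1)
    sim = z @ z_prime.t() / temperature           # sim[i, j] = s_phi(z_i, z'_j)
    positives = sim.diag()                        # s_phi(z_i, z'_i)
    # -1/K sum_i s(z_i, z'_i) + 1/K sum_i log sum_j exp s(z_i, z'_j)
    return (-positives + torch.logsumexp(sim, dim=1)).mean()
```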
The key underlying idea is to differentiate the representation with respect to the transformation , and then encourage this gradient to be small so that the representation changes slowly as the transformation is varied . To formalize this , we begin by looking more closely at the transformations T which are used to define the distribution p ( z|x ) . Many transformations , such as brightness adjustment , are controlled by a transformation parameter . We can include these parameters in our set-up by writing the transformation t as a map from both input space X and transformation parameter space U , i.e . t : X ×U → X . In this formulation , we sample a random transformation parameter from u ∼ p ( u ) which is a distribution on U . A sample from p ( z|x ) is then obtained by taking z = fθ ( t ( x , u ) ) , with t now regarded as a fixed function . 1This is because the function gφ is not an injection , so we may have gφ ( z ) = gφ ( z′ ) but z 6= z′ . Johnson & Lindenstrauss ( 1984 ) gives conditions under which a projection of this form will preserve approximate distances , in particular , the required projection dimension is much larger than the typical value 128 . The advantage of this change of perspective is that it opens up additional ways to learn stronger invariance of the encoder . In particular , it may make sense to consider the gradient ∇uz , which describes the rate of change of z with respect to the transformation . This only makes sense for some transformation parameters—we can differentiate with respect to the brightness scaling but not with respect to a horizontal flip . To separate out differentiable and non-differentiable parameters we write u = α , β where α are the parameters for which it makes sense to consider the derivative ∇αz . Intuitively , this gradient should be small to ensure that representations change only slowly as the transformation parameterα is varied . For clarity of exposition , and for implementation practicalities , it is important to consider gradients of a scalar function , so we introduce an arbitrary direction vector e ∈ Z and define F ( α , β , x , e ) = e · fθ ( t ( x , α , β ) ) ‖fθ ( t ( x , α , β ) ) ‖2 ( 5 ) so that F : A× B × X × Z → R calculates the scalar projection of the normalized representation z/‖z‖2 in the e direction . To encourage an encoder that is invariant to changes in α , we would like to minimize the expected conditional variance of F with respect to α : V = Ep ( x ) p ( β ) p ( e ) [ Varp ( α ) [ F ( α , β , x , e ) | x , β , e ] ] , ( 6 ) where we have exploited independence to write p ( x , β , e ) = p ( x ) p ( β ) p ( e ) . Defining V requires a distribution for e to be specified . For this , we make components of e independent Rademacher random variables , justification for which is included in Appendix B . A naive estimator of V can be formed using a direct nested Monte Carlo estimator ( Rainforth et al. , 2018 ) of sample variances , which , including Bessel ’ s correction , is given by V ≈ 1 K K∑ i=1 1 L− 1 L∑ j=1 F ( αij , βi , xi , ei ) 2 − 1 L ( L− 1 ) [ L∑ k=1 F ( αik , βi , xi , ei ) ] 2 ( 7 ) where xi , βi , ei ∼ p ( x ) p ( β ) p ( e ) and αij ∼ p ( α ) . However , this estimator requires LK forward passes through the encoder fθ to evaluate . 
As an alternative to this computationally prohibitive approach , we consider a first-order approximation2 to F F ( α′ , β , x , e ) − F ( α , β , x , e ) = ∇αF ( α , β , x , e ) · ( α′ −α ) + o ( ‖α′ −α‖ ) ( 8 ) and the following alternative form for the conditional variance ( see Appendix B for a derivation ) Varp ( α ) [ F ( α , β , x , e ) | x , β , e ] = 12Ep ( α ) p ( α′ ) [ ( F ( α , β , x , e ) − F ( α′ , β , x , e ) ) 2 | x , β , e ] ( 9 ) Combining these two ideas , we have V = Ep ( x ) p ( β ) p ( e ) [ 1 2Ep ( α ) p ( α′ ) [ ( F ( α , β , x , e ) − F ( α′ , β , x , e ) ) 2 | x , β , e ] ] ( 10 ) ≈ Ep ( x ) p ( β ) p ( e ) [ 1 2Ep ( α ) p ( α′ ) [ ( ∇αF ( α , β , x , e ) · ( α′ −α ) ) 2 | x , β , e ] ] . ( 11 ) Here we have an approximation of the conditional variance V that uses gradient information . Including this as a regularizer within contrastive learning will encourage the encoder to reduce the magnitude of the conditional variance V , forcing the representation to change slowly as the transformation is varied and thus inducing approximate invariance to the transformations . An unbiased estimator of equation 11 using a batch x1 , ... , xK is V̂regularizer = 1 K K∑ i=1 1 2L L∑ j=1 [ ∇αF ( αi , βi , xi , ei ) · ( α′ij −αi ) ] 2 ( 12 ) where xi , αi , βi , ei , ∼ p ( x ) p ( α ) p ( β ) p ( e ) , α′ij ∼ p ( α ) . We can cheaply use a large number of samples for α′ without having to take any additional forward passes through the encoder : we only require K evaluations of F . Our final loss function is L ( θ , φ ) =− 1 K K∑ i=1 sφ ( zi , z ′ i ) + 1 K K∑ i=1 log K∑ j=1 exp [ sφ ( zi , z ′ j ) ] + λ LK K∑ i=1 L∑ j=1 [ ∇αF ( αi , βi , xi , ei ) · ( α′ij −αi ) ] 2 ( 13 ) 2We use the notation a ( x ) = o ( b ( x ) ) to mean a ( x ) /b ( x ) → 0 as x → ∞ . where λ is a hyperparameter controlling the regularization strength . This loss does not require us to encode a larger number of differently transformed inputs . Instead , it uses the gradient at ( x , α , β , e ) to control properties of the encoder in a neighbourhood ofα . This can effectively reduce the representation gradient along the directions corresponding to many different transformations . This , in turn , creates an encoder that is approximately invariant to the transformations .
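The estimator in Eq. (12) needs only one encoder pass per input plus one backward pass through the transformation parameters. A sketch of how it could be computed is given below; it assumes a `transform(x, alpha)` that is differentiable in alpha and, purely for illustration, resamples alpha' from a Uniform(0, 1) prior. This is our reading of the estimator, not the authors' code.

```python
import torch

def gradient_regularizer(encoder, x, alpha, transform, num_resamples=8):
    """Single-pass Monte Carlo estimator of Eq. (12).

    `transform(x, alpha)` must be differentiable in alpha (e.g. brightness scaling); any
    non-differentiable parameters beta are assumed to be already applied to x. For
    illustration, alpha' is resampled from Uniform(0, 1)."""
    alpha = alpha.clone().requires_grad_(True)                     # (K, p) transformation parameters
    z = encoder(transform(x, alpha))
    z = z / z.norm(dim=1, keepdim=True)                            # normalized representations
    e = (torch.rand_like(z) < 0.5).to(z.dtype) * 2 - 1             # Rademacher directions e_i
    F_val = (e * z).sum()                                          # sum_i F(alpha_i, beta_i, x_i, e_i)
    (grad_alpha,) = torch.autograd.grad(F_val, alpha, create_graph=True)

    alpha_resampled = torch.rand(num_resamples, *alpha.shape, device=alpha.device)  # alpha'_ij ~ p(alpha)
    diff = alpha_resampled - alpha.unsqueeze(0).detach()           # alpha' - alpha
    proj = (grad_alpha.unsqueeze(0) * diff).sum(dim=-1)            # grad_alpha F . (alpha' - alpha)
    return 0.5 * proj.pow(2).mean()                                # V-hat of Eq. (12)
```

Scaling the returned value by λ and adding it to the InfoNCE term of Eq. (3) gives the full training loss of Eq. (13).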
Given one image, the paper first generates different views controlled by a differentiable parameter \alpha, and then minimizes an additional "conditional variance" term (the expectation of these views' squared differences). The paper thereby encourages representations of the same image to remain similar under the augmentations. A test-time strategy is further proposed that averages features across differently augmented inputs. Experimental results demonstrate the effectiveness of both proposals.
SP:a692e1e43991839e08a02e9122757224e1582cfd
Understanding the Effect of Bias in Deep Anomaly Detection
1 INTRODUCTION . Anomaly detection ( Chandola et al. , 2009 ; Pimentel et al. , 2014 ) trains a formal model to identify unexpected or anomalous instances in incoming data , whose behaviors differ from normal instances . It is particularly useful for detecting problematic events such as digital fraud , structural defects , and system malfunctions . Building accurate anomaly detection models is a well-known challenge in machine learning , due to the scarcity of labeled anomaly data . The classical and most common approach is to train anomaly detection models using only normal data1 , i.e. , first train a model using a corpus of normal data to capture normal behaviors , then configure the model to flag instances with large deviations as anomalies . Researchers have also developed deep learning methods to better capture the complex structure in the data ( Ruff et al . ( 2018 ) ; Wang et al . ( 2019a ) ; Zhou & Paffenroth ( 2017 ) ) . Following the terminology introduced by Chandola et al . ( 2009 ) , we refer to these models as semi-supervised anomaly detection . Recently , a new line of anomaly detection models proposes to leverage available labeled anomalies during model training , i.e. , train an anomaly detection model using both normal data and additional labeled anomaly samples as they become available ( Ruff et al . ( 2020b ) ; Yamanaka et al . ( 2019 ) ; Ruff et al . ( 2020a ) ; Hendrycks et al . ( 2019a ) ) . Existing works show that these new models achieve considerable performance improvements beyond the models trained using only normal data . We hereby refer to these models as deep supervised2 anomaly detection ( Chandola et al. , 2009 ) . When exploring these models , we found that when the labeled anomalies ( used to train the model ) do not align with the target distribution , they could introduce harmful bias to the trained model . Specifically , when comparing the performance of a supervised anomaly detector to its semi-supervised 1Existing literature has used different terms to describe this type of models : some using semi-supervised anomaly detection ( Chandola et al. , 2009 ) and others using unsupervised anomaly detection ( Ruff et al. , 2018 ) . 2Some works termed these models as semi-supervised anomaly detection ( Ruff et al. , 2020b ; Yamanaka et al. , 2019 ; Ruff et al. , 2020a ; Hendrycks et al. , 2019a ) while others termed them as supervised anomaly detection ( Chandola et al. , 2009 ) . version , the performance difference varies significantly across test anomaly data , some better and some worse . That is , using labeled anomalies during model training does not always improve model performance ; instead , it may introduce large variance ( or bias ) in anomaly detection outcomes . In this paper , we aim to understand the effect of a biased training set on deep anomaly detection models . We formally state the anomaly detection problem , focusing on the anomaly detector ’ s recall at a given false positive rate as the main performance metric . We factor the contribution of the labeled anomalies by the detector ’ s anomaly scoring function , and show that different types of labeled anomalies produce different anomaly scoring functions . Next , given any two different anomaly scoring functions , we formally define their difference in performance as the relative scoring bias of the anomaly detectors . 
Our novel notion of scoring bias for anomaly detection aligns with the notion of bias in the classical supervised learning setting , with the key difference being the different performance metric—we target recall at a given false positive rate , the metric used by real-world anomaly detection tasks ( Li et al. , 2019 ; Liu et al. , 2018 ) . Along this line , we establish the first finite sample rates for estimating the relative scoring bias for deep anomaly detection . We empirically validate our assumptions and theoretical results on both synthetic and three real-world datasets ( Fashion-MNIST , Statlog ( Landsat Satellite ) , and Cellular Spectrum Misuse ( Li et al. , 2019 ) ) . Furthermore , we provide an empirical study on how a biased training anomaly set affects the anomaly score function and therefore the resulting detection performance . We consider the above three real-world datasets and six deep-learning based anomaly detection models . Our study demonstrates scenarios in which the biased anomaly set can be useful or problematic , and provides a solid benchmark for future research . In this paper , we introduce a formal analysis on the effect of a biased training set on deep anomaly detection . Our main contributions are the following : • We discover the issue of large performance variance in deep anomaly detectors , caused by the use of the biased anomaly set as training data . • We model the effect of biased training as relative scoring bias , and establish the first finite sample rates for estimating the relative scoring bias of the trained models . • We conduct empirical experiments to verify and characterize the impact of the relative scoring bias on six popular anomaly detection models , and three real-world datasets . To the best of our knowledge , our work is the first to formally study the effect of a biased anomaly training set on deep anomaly detection . Our results show both significant positive and negative impacts of these biases , and suggest that model trainers must treat anomalies with additional care . We believe this leads to new opportunities for improving deep anomaly detectors and deserves more attention from the research community . 2 RELATED WORK . Anomaly Detection Models . While the literature on anomaly detection models is extensive , the most relevant to our work are deep learning based models . Following the terminology used by Chandola et al . ( 2009 ) , we consider two types of models : • Semi-supervised anomaly detection refers to models trained on only normal data , e.g. , Ruff et al . ( 2018 ) ; Sakurada & Yairi ( 2014 ) ; Zhou & Paffenroth ( 2017 ) ; • Supervised anomaly detection refers to models trained on normal data and a small set of labeled anomalies , e.g. , Pang et al . ( 2019 ) ; Daniel et al . ( 2019 ) ; Yamanaka et al . ( 2019 ) ; Ruff et al . ( 2020a ; b ) . One can also categorize models by their architecture : hypersphere ( Ruff et al. , 2018 ; 2020a ; b ) and autoencoder ( or reconstruction ) based models ( Zhou & Paffenroth , 2017 ; Yamanaka et al. , 2019 ) . Another line of recent work proposes to use synthetic or auxiliary anomalies to train anomaly detection models ( Golan & El-Yaniv ( 2018 ) ; Hendrycks et al . ( 2019c ) ; Lee et al . ( 2018 ) ; Hendrycks et al . ( 2019b ) ) , “ forcing ” the model to learn a more compact representation of the normal data . 
While the existing work has shown empirically that the choice of abnormal data in training can help detect some unseen abnormal distributions , it does not offer any theoretical explanation for the phe- nomenon , nor does it consider the counter-cases when additional abnormal data in training hurt the detection performance . Bias in Anomaly Detection . To the best of our knowledge , we are the first to identify the presence of bias caused by an additional labeled anomaly set in deep anomaly detection models , especially when there exists a mismatch between the anomalies present in training and those encountered in testing ( as shown in Section 5 ) . Existing work has explored the presence of bias in semi-supervised anomaly detection models when there exists defective normal data in training , like outliers and simple-to-reconstruct examples ( Tong et al. , 2019 ) , or examples with background noise ( Liu & Ma , 2019 ) . There is also literature on the bias-variance tradeoff for ensembles of semi-supervised anomaly detection models ( Aggarwal & Sathe , 2015 ; Rayana et al. , 2016 ) . But little or no work has been done on the bias of anomaly detection in the supervised setting ( i.e. , models trained on both normal data and some labeled anomalies ) . Finally , another line of work in transfer learning has identified the value of additional labeled data in training ( Kpotufe & Martinet , 2018 ; Hanneke & Kpotufe , 2019 ) and the performance bias on target data by transferring knowledge from a less related source ( Wang et al. , 2019b ; Wu et al. , 2020 ) . Yet most work only considered the cases of classification models . PAC guarantees for Anomaly Detection . Despite significant progress on developing theoretical guarantees for classification tasks ( Valiant ( 1984 ) ; Kearns et al . ( 1994 ) ) , little has been done for anomaly detection tasks . Siddiqui et al . ( 2016 ) first establishes a PAC framework for anomaly detection models using the notion of pattern space ; however , it is hard to apply such pattern spaces to deep learning models with complex latent spaces . Liu et al . ( 2018 ) proposes a model-agnostic approach to provide the PAC guarantee for anomaly detection performance , by analyzing the convergence for the cumulative distribution of anomaly scores . We follow the basic setting from this line of work to address the convergence of the relative scoring bias . In contrast to prior work , our proof relies on a novel adaption of the key theoretical tool from Massart ( 1990 ) , which allows us to extend our theory to characterize the notion of scoring bias as defined in Section 3.2 . 3 PROBLEM FORMULATION . We now formally state the anomaly detection problem . Consider a model class Θ for anomaly detection , and a ( labeled ) training set D sampled from a mixture distribution D over the normal and anomalous instances . In the context of anomaly detection , a model θ maps each input instance x to a continuous output , which corresponds to anomaly score sθ ( x ) . The model further uses a threshold τθ on the score function to produce a binary label for input x . For a given threshold value τθ , we can define the False Positive Rate ( FPR ) of the model θ on the input data distribution as FPR ( sθ , τθ ) = P [ sθ ( x ) > τθ | y = 0 ] , and the True Positive Rate ( TPR , a.k.a . Recall ) as TPR ( sθ , τθ ) = P [ sθ ( x ) > τθ | y = 1 ] . 
The FPR and TPR are competing objectives—therefore , a key challenge for anomaly detection algorithms is to identify a configuration of the score , threshold pair ( sθ , τθ ) that strikes a balance between the two performance metrics . W.l.o.g.3 , in this paper we focus on the following scenario , where the objective is to maximize TPR subject to achieving a target FPR . Formally , let q be the target FPR ; we define the optimal anomaly detector as4 ( s∗θ , τ∗θ ) ∈ arg max ( sθ , τθ ) : θ∈Θ TPR ( sθ , τθ ) s.t . FPR ( sθ , τθ ) ≤ q ( 3.1 ) 3.1 A GENERAL ANOMALY DETECTION FRAMEWORK . Note that the performance metric ( namely TPR ) in Problem 3.1 is statistics that depends on the entire predictive distribution , and can not be easily evaluated on any single data point . Therefore , rather than directly solving Problem 3.1 , practical anomaly detection algorithms ( such as OCSVM ( Schölkopf et al. , 1999 ) , Deep SAD ( Ruff et al. , 2020b ) , etc ) often rely on a two-stage process : ( 1 ) 3Our results can be easily extended to the setting where the goal is to minimize FPR subject to a given TPR . 4This formulation aligns with many contemporary works in deep anomaly detection . For example , Li et al . ( 2019 ) show that in real-world anomaly detection problems , it is desirable to detect anomalies with a prefixed low false alarm rate ; Liu et al . ( 2018 ) formulate the anomaly detection in a similar way , where the goal is to minimize FPR for a fixed TPR . learning the score function sθ from training data via a surrogate loss , and ( 2 ) given sθ from the previous step , computing the threshold function τθ on the training data . Formally , given a model class Θ , a training set D , a loss function ` , and a target FPR q , a two-staged anomaly detection algorithm outputs { ŝθ ∈ arg minsθ : θ∈Θ ` ( sθ , D ) τ̂θ ∈ arg maxτθ : θ∈Θ TPR ( ŝθ , τθ ) s.t . FPR ( ŝθ , τθ ) ≤ q ( 3.2 ) Note that the first part of Equation 3.2 amounts to solving a supervised learning problem . Here , the loss function ` could be instantiated into latent-space-based losses ( e.g. , Deep SAD ) , marginbased losses ( e.g. , OCSVM ) , or reconstruction-based losses ( e.g. , ABC ( Yamanaka et al. , 2019 ) ) ; therefore , many contemporary anomaly detection models fall into this framework . To set the threshold τ̂θ , we consider using the distribution of the anomaly scores ŝθ ( · ) from a labeled validation set Dval ∼ D. Let Dval : = Dval0 ∪ Dvala where Dval0 and Dvala denote the subset of normal data and the subset of abnormal data of Dval . Denote the empirical CDFs for anomaly scores assigned to x in Dval0 and D val a as F̂0 and F̂a , respectively . Then , given a target FPR value q , following a similar argument as Liu et al . ( 2018 ) , one can compute the threshold as τ̂θ = max { u ∈ R : F̂0 ( u ) ≤ q } . The steps for solving the second part of Equation 3.2 is summarized in Algorithm 1 . Algorithm 1 : Computing the anomaly detection threshold for Problem 3.2 Data : A validation dataset Dval and a scoring function s ( · ) . Result : A score threshold achieving a target FPR and the corresponding recall on Dval . 1 Get anomaly score s ( x ) for each x in Dval . 2 Compute empirical CDF F̂0 ( x ) and F̂a ( x ) for anomaly scores of x in Dval0 and Dvala . 3 Output detection threshold τ̂ = max { u ∈ R : F̂0 ( u ) ≤ q } . 4 Output TPR ( recall ) on Dvala as r̂ = 1− F̂a ( τ̂ ) .
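Algorithm 1 reduces to a quantile computation on the validation scores. Below is a small NumPy transcription under our reading of the algorithm: we pick the smallest threshold whose empirical false positive rate on the normal validation scores does not exceed the target q, which maximizes recall subject to the FPR constraint of Problem 3.2. Variable names and the synthetic scores are ours.

```python
import numpy as np

def threshold_and_recall(scores_normal, scores_anomalous, target_fpr=0.05):
    """Line 3-4 of Algorithm 1: choose the threshold from the normal validation scores
    so that the empirical FPR is at most `target_fpr`, then report TPR (recall) on the
    anomalous validation scores."""
    scores_normal = np.sort(np.asarray(scores_normal))
    # FPR(tau) = fraction of normal scores strictly above tau, so take the (1 - q) quantile.
    k = int(np.ceil((1.0 - target_fpr) * len(scores_normal))) - 1
    k = min(max(k, 0), len(scores_normal) - 1)
    tau = scores_normal[k]
    recall = np.mean(np.asarray(scores_anomalous) > tau)   # 1 - F_a(tau)
    return tau, recall

# Toy usage with synthetic anomaly scores.
rng = np.random.default_rng(0)
normal_scores = rng.normal(0.0, 1.0, size=2000)       # scores of normal validation data
anomaly_scores = rng.normal(2.0, 1.0, size=500)        # scores of labeled validation anomalies
tau, recall = threshold_and_recall(normal_scores, anomaly_scores, target_fpr=0.05)
```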
This paper studies the potential bias in deep semi-supervised anomaly detection. The bias is evaluated in terms of the TPR achieved at a fixed FPR. It uses the anomaly scores output by unsupervised anomaly detectors as a baseline to examine the relative scoring bias of deep semi-supervised anomaly detectors. It further establishes finite-sample rates for estimating this type of scoring bias. The bias is verified on synthetic and real-world datasets, and the empirical results also show its potential impact on several anomaly detectors.
SP:a24603a5dbc07070aeba98e1206511799111bec6
Calibration tests beyond classification
1 INTRODUCTION . We consider the general problem of modelling the relationship between a featureX and a target Y in a probabilistic setting , i.e. , we focus on models that approximate the conditional probability distribution P ( Y |X ) of target Y for given feature X . The use of probabilistic models that output a probability distribution instead of a point estimate demands guarantees on the predictions beyond accuracy , enabling meaningful and interpretable predicted uncertainties . One such statistical guarantee is calibration , which has been studied extensively in metereological and statistical literature ( DeGroot & Fienberg , 1983 ; Murphy & Winkler , 1977 ) . A calibrated model ensures that almost every prediction matches the conditional distribution of targets given this prediction . Loosely speaking , in a classification setting a predicted distribution of the model is called calibrated ( or reliable ) , if the empirically observed frequencies of the different classes match the predictions in the long run , if the same class probabilities would be predicted repeatedly . A classical example is a weather forecaster who predicts each day if it is going to rain on the next day . If she predicts rain with probability 60 % for a long series of days , her forecasting model is calibrated for predictions of 60 % if it actually rains on 60 % of these days . If this property holds for almost every probability distribution that the model outputs , then the model is considered to be calibrated . Calibration is an appealing property of a probabilistic model since it 1The source code of the experiments is available at https : //github.com/devmotion/ Calibration_ICLR2021 . provides safety guarantees on the predicted distributions even in the common case when the model does not predict the true distributions P ( Y |X ) . Calibration , however , does not guarantee accuracy ( or refinement ) —a model that always predicts the marginal probabilities of each class is calibrated but probably inaccurate and of limited use . On the other hand , accuracy does not imply calibration either since the predictions of an accurate model can be too over-confident and hence miscalibrated , as observed , e.g. , for deep neural networks ( Guo et al. , 2017 ) . In the field of machine learning , calibration has been studied mainly for classification problems ( Bröcker , 2009 ; Guo et al. , 2017 ; Kull et al. , 2017 ; 2019 ; Kumar et al. , 2018 ; Platt , 2000 ; Vaicenavicius et al. , 2019 ; Widmann et al. , 2019 ; Zadrozny , 2002 ) and for quantiles and confidence intervals of models for regression problems with real-valued targets ( Fasiolo et al. , 2020 ; Ho & Lee , 2005 ; Kuleshov et al. , 2018 ; Rueda et al. , 2006 ; Taillardat et al. , 2016 ) . In our work , however , we do not restrict ourselves to these problem settings but instead consider calibration for arbitrary predictive models . Thus , we generalize the common notion of calibration as : Definition 1 . Consider a model PX : = P ( Y |X ) of a conditional probability distribution P ( Y |X ) . Then model P is said to be calibrated if and only if P ( Y |PX ) = PX almost surely . ( 1 ) If P is a classification model , Definition 1 coincides with the notion of ( multi-class ) calibration by Bröcker ( 2009 ) ; Kull et al . ( 2019 ) ; Vaicenavicius et al . ( 2019 ) . Alternatively , in classification some authors ( Guo et al. , 2017 ; Kumar et al. , 2018 ; Naeini et al. , 2015 ) study the strictly weaker property of confidence calibration ( Kull et al. 
, 2019 ) , which only requires P ( Y = arg maxPX |maxPX ) = maxPX almost surely . ( 2 ) This notion of calibration corresponds to calibration according to Definition 1 for a reduced problem with binary targets Ỹ : = 1 ( Y = arg maxPX ) and Bernoulli distributions P̃X : = Ber ( maxPX ) as probabilistic models . For real-valued targets , Definition 1 coincides with the so-called distribution-level calibration by Song et al . ( 2019 ) . Distribution-level calibration implies that the predicted quantiles are calibrated , i.e. , the outcomes for all real-valued predictions of the , e.g. , 75 % quantile are actually below the predicted quantile with 75 % probability ( Song et al. , 2019 , Theorem 1 ) . Conversely , although quantile-based calibration is a common approach for real-valued regression problems ( Fasiolo et al. , 2020 ; Ho & Lee , 2005 ; Kuleshov et al. , 2018 ; Rueda et al. , 2006 ; Taillardat et al. , 2016 ) , it provides weaker guarantees on the predictions . For instance , the linear regression model in Fig . 1 empirically shows quantiles that appear close to being calibrated albeit being uncalibrated according to Definition 1 . Figure 1 also raises the question of how to assess calibration for general target spaces in the sense of Definition 1 , without having to rely on visual inspection . In classification , measures of calibration such as the commonly used expected calibration error ( ECE ) ( Guo et al. , 2017 ; Kull et al. , 2019 ; Naeini et al. , 2015 ; Vaicenavicius et al. , 2019 ) and the maximum calibration error ( MCE ) ( Naeini et al. , 2015 ) try to capture the average and maximal discrepancy between the distributions on the left hand side and the right hand side of Eq . ( 1 ) or Eq . ( 2 ) , respectively . These measures can be generalized to other target spaces ( see Definition B.1 ) , but unfortunately estimating these calibration errors from observations of features and corresponding targets is problematic . Typically , the predictions are different for ( almost ) all observations , and hence estimation of the conditional probability P ( Y |PX ) , which is needed in the estimation of ECE and MCE , is challenging even for low-dimensional target spaces and usually leads to biased and inconsistent estimators ( Vaicenavicius et al. , 2019 ) . Kernel-based calibration errors such as the maximum mean calibration error ( MMCE ) ( Kumar et al. , 2018 ) and the kernel calibration error ( KCE ) ( Widmann et al. , 2019 ) for confidence and multi-class calibration , respectively , can be estimated without first estimating the conditional probability and hence avoid this issue . They are defined as the expected value of a weighted sum of the differences of the left and right hand side of Eq . ( 1 ) for each class , where the weights are given as a function of the predictions ( of all classes ) and chosen such that the calibration error is maximized . A reformulation with matrix-valued kernels ( Widmann et al. , 2019 ) yields unbiased and differentiable estimators without explicit dependence on P ( Y |PX ) , which simplifies the estimation and allows to explicitly account for calibration in the training objective ( Kumar et al. , 2018 ) . Additionally , the kernel-based framework allows the derivation of reliable statistical hypothesis tests for calibration in multi-class classification ( Widmann et al. , 2019 ) . However , both the construction as a weighted difference of the class-wise distributions in Eq . 
( 1 ) and the reformulation with matrix-valued kernels require finite target spaces and hence can not be applied to regression problems . To be able to deal with general target spaces , we present a new and more general framework of calibration errors without these limitations . Our framework can be used to reason about and test for calibration of any probabilistic predictive model . As explained above , this is in stark contrast with existing methods that are restricted to simple output distributions , such as classification and scalar-valued regression problems . A key contribution of this paper is a new framework that is applicable to multivariate regression , as well as situations when the output is of a different ( e.g. , discrete ordinal ) or more complex ( e.g. , graph-structured ) type , with clear practical implications . Within this framework a KCE for general target spaces is obtained . We want to highlight that for multi-class classification problems its formulation is more intuitive and simpler to use than the measure proposed by Widmann et al . ( 2019 ) based on matrix-valued kernels . To ease the application of the KCE we derive several estimators of the KCE with subquadratic sample complexity and their asymptotic properties in tests for calibrated models , which improve on existing estimators and tests in the two-sample test literature by exploiting the special structure of the calibration framework . Using the proposed framework , we numerically evaluate the calibration of neural network models and ensembles of such models . 2 CALIBRATION ERROR : A GENERAL FRAMEWORK . In classification , the distributions on the left and right hand side of Eq . ( 1 ) can be interpreted as vectors in the probability simplex . Hence ultimately the distance measure for ECE and MCE ( see Definition B.1 ) can be chosen as a distance measure of real-valued vectors . The total variation , Euclidean , and squared Euclidean distances are common choices ( Guo et al. , 2017 ; Kull et al. , 2019 ; Vaicenavicius et al. , 2019 ) . However , in a general setting measuring the discrepancy between P ( Y |PX ) and PX can not necessarily be reduced to measuring distances between vectors . The conditional distribution P ( Y |PX ) can be arbitrarily complex , even if the predicted distributions are restricted to a simple class of distributions that can be represented as real-valued vectors . Hence in general we have to resort to dedicated distance measures of probability distributions . Additionally , the estimation of conditional distributions P ( Y |PX ) is challenging , even more so than in the restricted case of classification , since in general these distributions can be arbitrarily complex . To circumvent this problem , we propose to use the following construction : We define a random variable ZX ∼ PX obtained from the predictive model and study the discrepancy between the joint distributions of the two pairs of random variables ( PX , Y ) and ( PX , ZX ) , respectively , instead of the discrepancy between the conditional distributions P ( Y |PX ) and PX . Since ( PX , Y ) d = ( PX , ZX ) if and only if P ( Y |PX ) = PX almost surely , model P is calibrated if and only if the distributions of ( PX , Y ) and ( PX , ZX ) are equal . The random variable pairs ( PX , Y ) and ( PX , ZX ) take values in the product space P×Y , where P is the space of predicted distributions PX and Y is the space of targets Y . 
For instance , in classification , P could be the probability simplex and Y the set of all class labels , whereas in the case of Gaussian predictive models for scalar targets P could be the space of normal distributions and Y be R. The study of the joint distributions of ( PX , Y ) and ( PX , ZX ) motivates the definition of a generally applicable calibration error as an integral probability metric ( Müller , 1997 ; Sriperumbudur et al. , 2009 ; 2012 ) between these distributions . In contrast to common f -divergences such as the Kullback-Leibler divergence , integral probability metrics do not require that one distribution is absolutely continuous with respect to the other , which can not be guaranteed in general . Definition 2 . Let Y denote the space of targets Y , and P the space of predicted distributions PX . We define the calibration error with respect to a space of functions F of the form f : P × Y → R as CEF : = sup f∈F ∣∣EPX , Y f ( PX , Y ) − EPX , ZX f ( PX , ZX ) ∣∣ . ( 3 ) By construction , if model P is calibrated , then CEF = 0 regardless of the choice of F . However , the converse statement is not true for arbitrary function spaces F . From the theory of integral probability metrics ( see , e.g. , Müller , 1997 ; Sriperumbudur et al. , 2009 ; 2012 ) , we know that for certain choices of F the calibration error in Eq . ( 3 ) is a well-known metric on the product space P×Y , which implies that CEF = 0 if and only if model P is calibrated . Prominent examples include the maximum mean discrepancy2 ( MMD ) ( Gretton et al. , 2007 ) , the total variation distance , the Kantorovich distance , and the Dudley metric ( Dudley , 1989 , p. 310 ) . As pointed out above , Definition 2 is a generalization of the definition for multi-class classification proposed by Widmann et al . ( 2019 ) —which is based on vector-valued functions and only applicable to finite target spaces—to any probabilistic predictive model . In Appendix E we show this explicitly and discuss the special case of classification problems in more detail . Previous results ( Widmann et al. , 2019 ) imply that in classification MMCE and , for common distance measures d ( · , · ) such as the total variation and squared Euclidean distance , ECEd and MCEd are special cases of CEF . In Appendix G we show that our framework also covers natural extensions of ECEd and MCEd to countably infinite discrete target spaces , which to our knowledge have not been studied before and occur , e.g. , in Poisson regression . The literature of integral probability metrics suggests that we can resort to estimating CEF from i.i.d . samples from the distributions of ( PX , Y ) and ( PX , ZX ) . For the MMD , the Kantorovich distance , and the Dudley metric tractable strongly consistent empirical estimators exist ( Sriperumbudur et al. , 2012 ) . Here the empirical estimator for the MMD is particularly appealing since compared with the other estimators “ it is computationally cheaper , the empirical estimate converges at a faster rate to the population value , and the rate of convergence is independent of the dimension d of the space ( for S = Rd ) ” ( Sriperumbudur et al . ( 2012 ) ) . Our specific design of ( PX , ZX ) can be exploited to improve on these estimators . If EZx∼Pxf ( Px , Zx ) can be evaluated analytically for a fixed prediction Px , then CEF can be estimated empirically with reduced variance by marginalizing out ZX . 
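A minimal sketch of how the calibration error in Eq. ( 3 ) with an MMD-type function class could be estimated from samples of ( PX , Y ) and the artificial pairs ( PX , ZX ) with ZX drawn from PX. The sketch assumes Gaussian predictive models for scalar targets and a tensor-product RBF kernel on P × Y acting on the prediction parameters and the targets; the kernel choice, bandwidths, and names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def rbf(a, b, gamma):
    """Gaussian RBF kernel matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd_calibration_error(pred_params, y, gamma_p=1.0, gamma_y=1.0, rng=None):
    """Sketch of an MMD-type estimate of CE_F for Gaussian predictive models.

    pred_params: (n, 2) array of predicted (mean, std) for each input.
    y: (n,) observed targets.
    Assumption: the kernel on P x Y is a tensor product of RBF kernels on the
    prediction parameters and on the targets.
    """
    rng = np.random.default_rng(rng)
    mu, sigma = pred_params[:, 0], pred_params[:, 1]
    z = rng.normal(mu, sigma)                     # Z_X ~ P_X, one sample per prediction
    kp = rbf(pred_params, pred_params, gamma_p)   # kernel on the predictions (shared factor)
    kyy = rbf(y[:, None], y[:, None], gamma_y)
    kzz = rbf(z[:, None], z[:, None], gamma_y)
    kyz = rbf(y[:, None], z[:, None], gamma_y)
    h = kp * (kyy + kzz - kyz - kyz.T)            # kernel comparison of (P_X, Y) vs (P_X, Z_X)
    n = len(y)
    return (h.sum() - np.trace(h)) / (n * (n - 1))  # U-statistic estimate of the squared MMD
```

For a calibrated model this statistic fluctuates around zero, which is what the calibration tests discussed later build on.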
Otherwise EZx∼Pxf ( Px , Zx ) has to be estimated , but in contrast to the common estimators of the integral probability metrics discussed above the artificial construction of ZX allows us to approximate it by numerical integration methods such as ( quasi ) Monte Carlo integration or quadrature rules with arbitrarily small error and variance . Monte Carlo integration preserves statistical properties of the estimators such as unbiasedness and consistency . 2As we discuss in Section 3 , the MMD is a metric if and only if the employed kernel is characteristic .
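A short sketch of the variance-reduction idea just described, for a finite target space where the inner expectation over ZX can be summed out exactly instead of sampled; the tensor-product kernel and the array layout are assumptions made for illustration, not the paper's code.

```python
import numpy as np

def marginalized_mmd_term(kp, ky, p, y):
    """Variance-reduced kernel statistic for finite targets (classification sketch).

    kp: (n, n) kernel matrix on the predicted distributions.
    ky: (c, c) symmetric kernel matrix on the c class labels.
    p:  (n, c) predicted class probabilities (the model outputs P_X).
    y:  (n,) observed class indices.
    Replaces the sampled Z_X by its exact expectation under P_X.
    """
    e_ky = p @ ky            # e_ky[i, c] = E_{Z ~ P_i} ky(Z, c)
    kyy = ky[y][:, y]        # kyy[i, j] = ky(y_i, y_j)
    kyz = e_ky[:, y].T       # kyz[i, j] = E_{Z_j ~ P_j} ky(y_i, Z_j)
    kzz = p @ ky @ p.T       # kzz[i, j] = E_{Z_i, Z_j} ky(Z_i, Z_j)
    h = kp * (kyy + kzz - kyz - kyz.T)
    n = len(y)
    return (h.sum() - np.trace(h)) / (n * (n - 1))
```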
The authors present an approach for testing calibration in conditional probability estimation models. They build on a line of work in the kernel-based calibration literature assessing whether the conditional distributions are well calibrated (i.e. P(Y | f(X)) = f(X), where f is some predictive model). They develop an MMD-based estimator and expand on practical choices of kernels that are computationally tractable. They then derive an asymptotic null distribution for calibrated models, enabling control over the error rate when declaring a model uncalibrated. A few simulation studies with neural networks demonstrate the applicability of the method.
SP:cf6c9061542bf9c43a968faa574ce03ad71a859a
Semantic Hashing with Locality Sensitive Embeddings
1 INTRODUCTION . One of most challenging aspects in many Information Retrieval ( IR ) systems is the discovery and identification of the nearest neighbors of a query element in an vector space . This is typically solved using Approximate Nearest Neighbors ( ANN ) methods as exact solutions typically do not scale well with the dimension of the vector space . ANN methods typically fall into one of three categories : space partitioning trees , such as the kd-tree ( Bentley ( 1975 ) ; Friedman et al . ( 1977 ) ; Arya et al . ( 1998 ) ) , neighborhood graph search ( Chen et al . ( 2018 ) ; Iwasaki & Miyazaki ( 2018 ) ) or Locality Sensitive Hashing ( LSH ) methods ( Charikar ( 2002 ) ; Gionis et al . ( 1999 ) ; Lv et al . ( 2007 ) ) . Despite their theoretical , intuitive , and computational appeal , LSH methods are not as prevalent in modern IR systems as are space-partitioning trees or neighborhood graph methods ( Bernhardsson ( 2013 ) ; Chen et al . ( 2018 ) ; Johnson et al . ( 2017 ) ; Iwasaki & Miyazaki ( 2018 ) ) . Empirical studies demonstrate that LSH techniques frequently do not attain the same level of quality as spacepartitioning trees ( Muja & Lowe ( 2009 ) ) . Nonetheless , space-partitioning and neighborhood graph search methods are expensive , both in data structure construction and in query time , and remain a bottleneck in many modern IR pipelines . As many modern retrieval tasks revolve around solving ANN for vector representations learned from raw , structured data , one might attempt to learn representations which are more suited towards efficient retrieval . Metric learning methods ( Xing et al . ( 2003 ) ; Weinberger et al . ( 2006 ) ; Chechik et al . ( 2010 ) ; Hoffer & Ailon ( 2015 ) ; Kulis et al . ( 2013 ) ) have been proposed for learning linear and non-linear transformations of given representations for improved clustering and retrieval quality . A class of related methods , semantic hashing or hash learning methods ( Salakhutdinov & Hinton ( 2009 ) ) , have also been explored for learning transformations into binary vector spaces . These learned binary representations may then be used in hashing based retrieval methods , typically by retrieving all neighboring elements in the Hamming ball with radius 1 or 2 . Exact hashing retrieval algorithms , that is , Hamming ball “ search ” with radius 0 , have a particular computational appeal in that search data structures are not needed nor is enumeration of all codes within a Hamming ball . In addition , binary representations that are suitable for exact hashing retrieval can also be used to identify groups of related items that can be interpreted as clusters in the traditional sense . As the number of clusters discovered by the algorithm isn ’ t explicitly controlled ( only bounded by 2d , ) algorithms generating binary embeddings suitable for exact hashing retrieval can be viewed as nonparametric clustering methods . To this end , we propose a method for learning continuous representations in which the optimized similarity is the angular similarity . The angular similarity corresponds to the collision probability of SimHash , a hyperplane based LSH function ( Charikar ( 2002 ) ) . Angular distance gives a sharp topology on the embedding space which encourages similar objects have nearly identical embeddings suitable for exact hashing retrieval . Related work on similarity learning , LSH , and hash learning can be found in Section 2 . The proposed models are found in Section 3 . 
The experimental results , and other technical details , can be found in Sections 4 . Finally , we conclude in Section 5 . 2 PRELIMINARIES . 2.1 SIMILARITY MODELLING . Similarity learning methods are a class of techniques for learning a similarity function between objects . One successful approach for similarity learning are “ twin network ” or “ two tower architecture ” models , in which two neural network architectures are joined to produce a similarity prediction ( Bromley et al . ( 1994 ) ; Chopra et al . ( 2005 ) ; Huang et al . ( 2013 ) ) . The weights of these networks may be shared or not , depending on whether the two input domains are equivalent or not . Let i ∈ U and j ∈ V be the identities of two objects , where U and V are the two domains across which a similarity function is to be learned . Let φu ( i ) and φv ( j ) be the input representations for the objects ( these functions φ may be identity functions if the input domains are discrete . ) These representations are then transformed through parameterized vector-valued functions fu ( ·|θu ) and fv ( ·|θv ) , whose output are typically the learned representations ui = fu ( φu ( i ) |θu ) and vj = fv ( φv ( j ) |θv ) . A loss is then defined using pairwise labels yij and an interaction function s ( ui , vj ) which denotes the similarity or relevancy of the pair . Taking fu to be a mapping for each index i to an independent parameter vector ui ( similarly for fv and vi ) , and taking s ( ui , vj ) = uTi vj with an appropriate loss results in a variety of matrix factorization approaches ( Koren et al . ( 2009 ) ; Lee & Seung ( 2001 ) ; Mnih & Salakhutdinov ( 2008 ) ; Blei et al . ( 2003 ) ; Rendle et al . ( 2012 ) ; Pennington et al . ( 2014 ) ) . Taking fu to be a neural network mapping a context φu ( i ) to a representation ui allows for similarity models that readily make use of complex contextual information . Common choices for the similarity function include transformations of Euclidean distance ( Chopra et al . ( 2005 ) ) , and cosine similarity : s ( ui , vj ) = uTi vj ||ui||||vj || ( Huang et al . ( 2013 ) ) . In addition , the loss can be defined for pairs ( Chopra et al . ( 2005 ) ) , triplets ( one positive pair , one negative pair ) ( Rendle et al . ( 2012 ) ; Chechik et al . ( 2010 ) ) , or on larger sets ( Huang et al . ( 2013 ) ) . 2.2 LOCALITY SENSITIVE HASHING AND ANGULAR SIMILARITY . A Locality Sensitive Hash ( LSH ) family F is a distribution of hashes h on a collection of objects Q such that for qi , qj ∈ Q , ( Indyk & Motwani ( 1998 ) ; Gionis et al . ( 1999 ) ; Charikar ( 2002 ) ) Pr [ h ( qi ) = h ( qj ) ] = s ( qi , qj ) ( 1 ) for some similarity function s on the objects . SimHash ( Charikar ( 2002 ) ) is a LSH technique developed for document deduplication but may be used in other contexts . For a vector representations q ∈ Rd , SimHash draws a random matrix Z ∈ Rd×M with standard Normal entries . The hash h ( qi ) ∈ { 0 , 1 } M is then constructed as h ( qi ) m = 1 [ q T i Z : m > 0 ] . ( 2 ) Intuitively , SimHash draws random hyperplanes intersecting the origin to separate points . A useful property of this hash function , as stated in Charikar ( 2002 ) , is that ψ ( qi , qj ) : = Pr [ h ( qi ) m = h ( qj ) m ] = 1− 1 π cos−1 ( qTi qj ||qi||||qj || ) , where the above probability is measured with respect to Z. 
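A brief sketch of the SimHash construction just described and its collision probability; the empirical check against the analytic angular similarity is an illustration, with the dimension and number of hash bits chosen arbitrarily.

```python
import numpy as np

def simhash(q, Z):
    """SimHash: sign pattern of projections onto random hyperplanes (Eq. 2)."""
    return (q @ Z > 0).astype(np.uint8)

def angular_similarity(qi, qj):
    """Collision probability of a single SimHash bit: 1 - angle / pi."""
    cos = qi @ qj / (np.linalg.norm(qi) * np.linalg.norm(qj))
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

rng = np.random.default_rng(0)
d, M = 16, 100_000                       # dimension and number of hash bits (placeholders)
Z = rng.standard_normal((d, M))          # random hyperplanes with standard normal entries
qi, qj = rng.standard_normal(d), rng.standard_normal(d)
empirical = (simhash(qi, Z) == simhash(qj, Z)).mean()
print(empirical, angular_similarity(qi, qj))   # the two values should roughly agree
```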
ψ ( qi , qj ) , the collision probability for two vectors , is also known as the angular similarity , and ξ = 1 − ψ is the angular distance , which is a proper metric ( unlike the cosine distance 1− q T i qj ||qi||||qj || ) . As the columns of Z are independent , the collision probability for a K bit hash is ψK . 2.3 LEARNING TO HASH . A related approach to similarity learning is hash learning methods , introduced in Salakhutdinov & Hinton ( 2009 ) . These methods train binary embeddings directly and then use hash collisions or Hamming Ball search to retrieve approximate nearest neighbors . Binary representations lead to some technical challenges ; Salakhutdinov & Hinton ( 2009 ) uses contrastive divergence for training , whereas Hubara et al . ( 2016 ) implement binary threshold activation functions with stochastic neurons . Another approach ( and the one followed in this work ) is to avoid explicit binary representations in training and to introduce an quantization loss to penalize embeddings that are not close to binary , and to subsequently threshold these near-binary embeddings to binary ones . This type of quantization loss is distinct from those used in vector quantization methods ( Ahalt et al . ( 1990 ) ; Kohonen ( 1990 ) ; Sato & Yamada ( 1996 ) ) in which the data representations are fixed and the codes are learned ; here the codes are fixed and the representations are learned . The quantization loss introduced in Deep Hashing Networks ( DHN ) Zhu et al . ( 2016 ) is of the form b ( ui|θ ) = ∑ d log cosh ( |uid| − 1 ) ≈ ‖|ui| − 1‖1 . ( 3 ) Other quantization losses based on distances to binary codes have been used in Li et al . ( 2016 ) ; Liu et al . ( 2016 ) . Cao et al . ( 2017 ) utilizes a quantization loss whose strength increases over time . Finally , Deep Cauchy Hashing ( DCH ) ( Cao et al . ( 2018 ) ) has shown improvements by utilizing a heavy-tailed similarity function with a similarly inspired quantization loss . 3 LOCALITY SENSITIVE EMBEDDINGS . Many similarity learning methods utilize dot products or cosine similarity to relate the embeddings of a pair to each other . For example GloVe ( Pennington et al . ( 2014 ) ) minimizes the weighted error between the dot product of the embeddings and a log-coocurrence matrix , and the DSSM model ( Huang et al . ( 2013 ) ) utilizes cosine similarity as the “ crossing ” layer between the two halves of a twin network . In general , embeddings trained in this way are not suitable for SimHash retrieval , as can be seen in Figure 1 . If models are trained so as to minimize the error of a prediction made by cosine similarity , extremely low tolerances are required in order to achieve embeddings with significant collision probability . Similar observations on the misspecifiation of cosine distance for Semantic Hashing were made in Cao et al . ( 2018 ) . In this section , we define models in which collision probabilities of learned representations are directly optimized . 3.1 LOSS DEFINITION . In the following , we define a population loss through a data distribution D of relevant and irrelevant pairs . Each sample from D is a tuple ( y , i , j ) ∈ { 0 , 1 } × U × V , where U and V are the sets across which a similarity is to be learned – for example , “ users ” and “ items ” in a recommender system . y is the relevancy of the pair ( i , j ) . The population losses we consider are expectations over D of a per-tuple loss l with regularization terms r per item : L ( θ ) = E y , i , j∼D l ( y , i , j|θ ) + λr ( i|θ ) + λr ( j|θ ) . 
( 4 ) In practice , we minimize the empirical loss L̂ constructed from a finite sample from D , and we use r ( i|θ ) = b ( ui|θ ) defined in equation 3. θ represents all parameters of the model , including any learned representations for the elements of the sets U and V . An embedding ui for element i may either be a vector of free parameters , as would be in a fixed vocabulary embedding model , or may be the output of a model on a raw input : ui = fu ( φu ( i ) ) , as would be in a twin network model . In addition , each half of the pair ( ui , vj ) may represent a different input space , as in the DSSM model .
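A sketch of how the per-tuple loss in equation 4 could look with the SimHash collision probability of Section 2.2 as the interaction function and the log-cosh quantization penalty of equation 3 as the per-item regularizer. The binary cross-entropy link between the relevancy label and the collision probability, the number of hash bits, and the PyTorch phrasing are assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def angular_similarity(u, v, eps=1e-7):
    """Collision probability of one SimHash bit for embedding batches u, v of shape (batch, d)."""
    cos = F.cosine_similarity(u, v, dim=-1).clamp(-1 + eps, 1 - eps)
    return 1.0 - torch.acos(cos) / torch.pi

def quantization_loss(u):
    """log-cosh penalty pushing coordinates towards +/- 1 (equation 3)."""
    return torch.log(torch.cosh(u.abs() - 1.0)).sum(dim=-1)

def pair_loss(u, v, y, lam=0.1, bits=1):
    """Per-tuple loss: relevancy label y against the K-bit collision probability."""
    p = angular_similarity(u, v) ** bits        # collision probability of a K-bit hash
    bce = F.binary_cross_entropy(p, y.float())  # Bernoulli link for the relevancy label (our choice)
    return bce + lam * (quantization_loss(u) + quantization_loss(v)).mean()
```

Minimizing this loss increases the collision probability of relevant pairs while the quantization term keeps the learned embeddings close to binary codes, so they remain usable for exact hashing retrieval after thresholding.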
The authors consider the problem of learning a hash function such that semantically similar elements have a high collision probability. They modify the Deep Hashing Networks approach (Zhu et al., 2016) with a new loss function. Rather than using a sigmoid-based loss function, the authors argue that a loss function based on angular similarity and SimHash is better suited. Specifically, they use the probability of SimHash collisions as a loss function. They then experimentally verify their method on synthetic data from a Stochastic Block Model distribution, image data (CIFAR-10 and ImageNet), and text data (OSCAR), and show improvements over related methods.
SP:becb496310e88c1e2e7d03131093b9ebcf075c1d
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
1 INTRODUCTION . When models are tested on distributions that are different from the training distribution , they typically suffer large drops in performance ( Blitzer and Pereira , 2007 ; Szegedy et al. , 2014 ; Jia and Liang , 2017 ; AlBadawy et al. , 2018 ; Hendrycks et al. , 2019a ) . For example , in remote sensing , central tasks include predicting poverty , crop type , and land cover from satellite imagery for downstream humanitarian , policy , and environmental applications ( Xie et al. , 2016 ; Jean et al. , 2016 ; Wang et al. , 2020 ; Rußwurm et al. , 2020 ) . In some developing African countries , labels are scarce due to the lack of economic resources to deploy human workers to conduct expensive surveys ( Jean et al. , 2016 ) . To make accurate predictions in these countries , we must extrapolate to out-of-distribution ( OOD ) examples across different geographic terrains and political borders . We consider a semi-supervised setting with few in-distribution labeled examples and many unlabeled examples from both in- and out-of-distribution ( e.g. , global satellite imagery ) . While labels are scarce , auxiliary information is often cheaply available for every input and may provide some signal for the missing labels . Auxiliary information can come from additional data sources ( e.g. , climate data from other satellites ) or derived from the original input ( e.g. , background or non-visible spectrum image channels ) . This auxiliary information is often discarded or not leveraged , and how to best use them is unclear . One way is to use them directly as input features ( aux-inputs ) ; another is to treat them as prediction outputs for an auxiliary task ( aux-outputs ) in pre-training . Which approach leads to better in-distribution or OOD performance ? Aux-inputs provide more features to potentially improve in-distribution performance , and one may hope that this also improves OOD performance . Indeed , previous results on standard datasets show that improvements in in-distribution accuracy correlate with improvements in OOD accuracy ( Recht et al. , 2019 ; Taori et al. , 2020 ; Xie et al. , 2020 ; Santurkar et al. , 2020 ) . However , in this paper we find that aux-inputs can introduce more spurious correlations with the labels : as a result , while aux-inputs often improve in-distribution accuracy , they can worsen OOD accuracy . We give examples of this trend on CelebA ( Liu et al. , 2015 ) and real-world satellite datasets in Sections 5.2 and 5.3 . Conversely , aux-output methods such as pre-training may improve OOD performance through auxiliary supervision ( Caruana , 1997 ; Weiss et al. , 2016 ; Hendrycks et al. , 2019a ) . Hendrycks et al . ∗Equal contribution . 𝑥 𝑧 𝑤 𝑦 𝑢 𝐵∗ 𝐴∗ 𝐶∗ 𝜃 '' 𝜃 # Figure 2 : Graphical model for our theoretical setting : prediction task with input x , target y , and auxiliary information z , which is related to y through the latent variable w and latent noise u . ( 2019a ) show that pre-training on ImageNet can improve adversarial robustness , and Hendrycks et al . ( 2019b ) show that auxiliary self-supervision tasks can improve robustness to synthetic corruptions . In this paper , we find that while aux-outputs improve OOD accuracy , the in-distribution accuracy is worse than with aux-inputs . Thus , we elucidate a tradeoff between in- and out-of-distribution accuracy that occurs when using auxiliary information as inputs or outputs . 
To theoretically study how to best use auxiliary information , we extend the multi-task linear regression setting ( Du et al. , 2020 ; Tripuraneni et al. , 2020 ) to allow for distribution shifts . We show that auxiliary information helps in-distribution error by providing useful features for predicting the target , but the relationship between the aux-inputs and the target can shift significantly OOD , worsening the OOD error . In contrast , the aux-outputs model first pre-trains on unlabeled data to learn a lower-dimensional representation and then solves the target task in the lower-dimensional space . We prove that the aux-outputs model improves robustness to arbitrary covariate shift compared to not using auxiliary information . Can we do better than using auxiliary information as inputs or outputs alone ? We answer affirmatively by proposing the In-N-Out algorithm to combine the benefits of auxiliary inputs and outputs ( Figure 1 ) . In-N-Out first uses an aux-inputs model , which has good in-distribution accuracy , to pseudolabel in-distribution unlabeled data . It then pre-trains a model using aux-outputs and finally fine-tunes this model on the larger training set consisting of labeled and pseudolabeled data . We prove that In-N-Out , which combines self-training and pre-training , further improves both in-distribution and OOD error over the aux-outputs model . We show empirical results on CelebA and two remote sensing tasks ( land cover and cropland prediction ) that parallel the theory . On all datasets , In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over aux-inputs or aux-outputs alone and improves 1–2 % in-distribution , 2–3 % OOD over not using auxiliary information on remote sensing tasks . Ablations of In-N-Out show that In-N-Out achieves similar improvements over pre-training or self-training alone ( up to 5 % in-distribution , 1–2 % OOD on remote sensing tasks ) . We also find that using OOD ( rather than in-distribution ) unlabeled examples for pre-training is crucial for OOD improvements . 2 SETUP . Let x∈Rd be the input ( e.g. , a satellite image ) , y ∈R be the target ( e.g. , crop type ) , and z ∈RT be the cheaply obtained auxiliary information either from additional sources ( e.g. , climate information ) or derived from the original data ( e.g. , background ) . Training data . Let Pid and Pood denote the underlying distribution of ( x , y , z ) triples in-distribution and out-of-distribution , respectively . The training data consists of ( i ) in-distribution labeled data { ( xi , yi , zi ) } ni=1 ∼ Pid , ( ii ) in-distribution unlabeled data { ( xidi , zidi ) } mid i=1 ∼ Pid , and ( iii ) out-of-distribution unlabeled data { ( xoodi , zoodi ) } mood i=1 ∼Pood . Goal and risk metrics . Our goal is to learn a model from input and auxiliary information to the target , f : Rd×RT →R . For a loss function ` , the in-distribution population risk of the model f is Rid ( f ) =Ex , y , z∼Pid [ ` ( f ( x , z ) , y ) ] , and its OOD population risk isRood ( f ) =Ex , y , z∼Pood [ ` ( f ( x , z ) , y ) ] . 2.1 MODELS . We consider three common ways to use the auxiliary information ( z ) to learn a model . Baseline . The baseline minimizes the empirical risk on labeled data while ignoring the auxiliary information ( accomplished by setting z to 0 ) : f̂bs =argmin f 1 n n∑ i=1 ` ( f ( xi,0 ) , yi ) . ( 1 ) Aux-inputs . 
The aux-inputs model minimizes the empirical risk on labeled data while using the auxiliary information as features : f̂in =argmin f 1 n n∑ i=1 ` ( f ( xi , zi ) , yi ) . ( 2 ) Aux-outputs . The aux-outputs model leverages the auxiliary information z by using it as the prediction target of an auxiliary task , in hopes that there is a low-dimensional feature representation that is common to predicting both z and y . Training the aux-outputs model consists of two steps : In the pre-training step , we use all the unlabeled data to learn a shared feature representation . Let h : Rd→Rk denote a feature map and gz-out : Rk→RT denote a mapping from feature representation to the auxiliary outputs . Let ` aux denote the loss function for the auxiliary information . We define the empirical risk of h and gz-out as : R̂pre ( h , gz-out ) = 1 mid+mood ( mid∑ i=1 ` aux ( gz-out ( h ( x id i ) ) , z id i ) + mood∑ i=1 ` aux ( gz-out ( h ( x ood i ) ) , z ood i ) ) . ( 3 ) The estimate of the feature map is ĥout =argminhmingz-outR̂pre ( h , gz-out ) . In the transfer step , the model uses the pre-trained feature map ĥout and the labeled data to learn the mapping gy-out : Rk→R from feature representation to target y . We define the transfer empirical risk as : R̂trans ( ĥout , gy-out ) = 1 n n∑ i=1 ` ( gy-out ( ĥout ( xi ) ) , yi ) ( 4 ) The estimate of the target mapping is ĝy-out = argmingy-out R̂trans ( ĥout , gy-out ) . The final aux-outputs model is f̂out ( x , z ) = ĝy-out ( ĥout ( x ) ) . ( 5 ) Like the baseline model , the aux-outputs model ignores the auxiliary information for prediction . 3 THEORETICAL ANALYSIS OF AUX-INPUTS AND AUX-OUTPUTS MODELS . We now analyze the baseline , aux-inputs , and aux-outputs models introduced in Section 2 . Our setup extends a linear regression setting commonly used for analyzing multi-task problems ( Du et al. , 2020 ; Tripuraneni et al. , 2020 ) . Setup . See Figure 2 for the graphical model . Letw=B ? x∈Rk be a low-dimensional latent feature ( k≤d ) shared between auxiliary information z and the target y . Let u∈Rm denote unobserved latent variables not captured in x . We assume z and y are linear functions of u andw : y=θ > ww+θ > u u+ , ( 6 ) z=A ? w+C ? u , ( 7 ) where ∼ P denotes noise with mean 0 and variance σ2 . As in Du et al . ( 2020 ) , we assume the dimension of the auxiliary information T is greater than the feature dimension k , that is T ≥k , and thatA ? , B ? andC ? have full rank ( rank k ) . We also assume T ≥m , wherem is the dimension of u . Data . Let Px and Pu denote the distribution of x and u in-distribution ( ID ) , and let P ′x , P ′u denote the distribution x and uOOD . We assume x and u are independent , have distributions with bounded density everywhere , and have invertible covariance matrices . We assume the mean of u is zero in- and out-of-distribution1 . We assume we have n≥m+d in-distribution labeled training examples and unlimited access to unlabeled data both ID and OOD , a common assumption in unsupervised domain adaptation theory ( Sugiyama et al. , 2007 ; Kumar et al. , 2020 ; Raghunathan et al. , 2020 ) . Loss metrics . We use the squared loss for the target and auxiliary losses : ` ( ŷ , y ) = ( y− ŷ ) 2 and ` aux ( z , z ′ ) =‖z−z′‖22 . Models . We assume all model families ( f , h , gz-out , gy-out ) in Section 2 are linear . Let S= ( A ? , B ? , C ? , θw , θu , Px , Pu ) denote a problem setting which satisfies all the above assumptions . 3.1 AUXILIARY INPUTS HELP IN-DISTRIBUTION , BUT CAN HURT OOD . 
We first show that the aux-inputs model ( 2 ) performs better than the baseline model ( 1 ) in-distribution . Intuitively , the target y depends on both the inputs x ( throughw ) and latent variable u ( Figure 2 ) . The baseline model only uses x to predict y ; thus it can not capture the variation in y due to u . On the other hand , the aux-inputs model uses x and z to predict y . Since z is a function of x ( through w ) and u , u can be recovered from x and z by inverting this relation . Note that u is unobserved but implicitly recovered . The aux-inputs model can then combine u and x to predict y better . Let σ2u=Eu∼Pu [ ( θ > u u ) 2 ] denote the ( in-distribution ) variance of y due to the latent variables u . The following proposition shows that if σ2u > 0 then with enough training examples the aux-inputs model has lower in-distribution population risk than the baseline model.2 Proposition 1 . For all problem settings S , P , assuming regularity conditions ( bounded x , u , sub-Gaussian noise , and T =m ) , and σ2u > 0 , for all δ > 0 , there existsN such that for n≥N number of training points , with probability at least 1−δ over the training examples , the aux-inputs model improves over the baseline : Rid ( f̂in ) < Rid ( f̂bs ) . ( 8 ) Although using z as input leads to better in-distribution performance , we show that the aux-inputs model can perform worse than the baseline model OOD for any number of training examples . Intuitively , the aux-inputs model uses z , which can be unreliable OOD because z depends on u and u can shift OOD . In more detail , the aux-inputs model learns to predict ŷ= θ̂ > x , inx+θ̂ > z , inz , where the true output y=θ > x x+θ > z z , and θ̂z , in is an approximation to the true parameter θz , that has some error . Out-of-distribution u and hence z can have very high variance , which would magnify ( θ̂z , in−θz ) > z and lead to bad predictions . Example 1 . There exists a problem setting S , P , such that for every n , there is some test distribution P ′x , P ′ u with : E [ Rood ( f̂in ) ] > E [ Rood ( f̂bs ) ] ( 9 )
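A small simulation sketch of the linear setting in Eqs. ( 6 )-( 7 ), comparing the baseline, the aux-inputs model, and a crude stand-in for the aux-outputs model. The rank-k factorization of the least-squares x-to-z map, the reuse of the labeled inputs for pre-training, and all dimensions and shift magnitudes are simplifications for illustration; which model wins out-of-distribution depends on the sample size and the shift in u, as formalized by Proposition 1 and Example 1.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, T, n = 20, 3, 3, 6, 200          # dimensions chosen for illustration only

B = rng.standard_normal((k, d))           # w = B x  (low-dimensional latent feature)
A = rng.standard_normal((T, k))           # z = A w + C u
C = rng.standard_normal((T, m))
tw, tu = rng.standard_normal(k), rng.standard_normal(m)

def sample(n, u_scale):
    x = rng.standard_normal((n, d))
    u = u_scale * rng.standard_normal((n, m))     # u shifts OOD (larger variance here)
    w = x @ B.T
    z = w @ A.T + u @ C.T
    y = w @ tw + u @ tu + 0.1 * rng.standard_normal(n)
    return x, z, y

def fit(X, y):                             # ordinary least squares
    return np.linalg.lstsq(X, y, rcond=None)[0]

x, z, y = sample(n, u_scale=1.0)           # in-distribution labeled data
xi, zi, yi = sample(5000, u_scale=1.0)     # held-out in-distribution test data
xo, zo, yo = sample(5000, u_scale=5.0)     # out-of-distribution test data

w_bs = fit(x, y)                           # baseline: ignore z
w_in = fit(np.hstack([x, z]), y)           # aux-inputs: use z as extra features

# aux-outputs (crude sketch of pre-training): rank-k factor of the x -> z map,
# then regress y on the learned k-dimensional feature only.
W = np.linalg.lstsq(x, z, rcond=None)[0]
U, _, _ = np.linalg.svd(W)
h = U[:, :k]
w_out = fit(x @ h, y)

def mse(pred, y): return np.mean((pred - y) ** 2)
print("ID  baseline:", mse(xi @ w_bs, yi), "aux-in:", mse(np.hstack([xi, zi]) @ w_in, yi),
      "aux-out:", mse(xi @ h @ w_out, yi))
print("OOD baseline:", mse(xo @ w_bs, yo), "aux-in:", mse(np.hstack([xo, zo]) @ w_in, yo),
      "aux-out:", mse(xo @ h @ w_out, yo))
```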
This paper introduces a new method for leveraging auxiliary information and unlabeled data to improve out-of-distribution model performance. Theoretically, in a linear model with latent variables, they demonstrate that using auxiliary data as inputs helps in-distribution test error but can hurt out-of-distribution error, while using auxiliary data to pretrain a "good" representation always improves out-of-distribution error. The proposed method uses the auxiliary data to learn an initial model, which generates pseudolabels used to fine-tune the pretrained model.
SP:7611ee6b9dfabf7ec6a65da58cb6e3892705e1c9
Variance Reduction in Hierarchical Variational Autoencoders
1 INTRODUCTION . Variational autoencoders ( VAE ) [ 10 ] are a popular latent variable model for unsupervised learning that simplifies learning by the introduction of a learned approximate posterior . Given data x and latent variables z , we specify the conditional distribution p ( x|z ) by parameterizing the distribution parameters by a neural network . Since it is difficult to learn such a model directly , another conditional distribution q ( z|x ) is introduced to approximate the posterior distribution . During learning the goal is to maximize the evidence lower bound ( ELBO ) , which lower bounds the log likelihood , log p ( x ) ≥ Eq ( z|x ) [ log p ( x|z ) +log p ( z ) − log q ( z|x ) ] . In their simplest form , the generative model p ( x|z ) and the approximate posterior q ( z|x ) are Gaussian distributions optimized in unison . A natural way to increase the modeling capacity of VAE is to incorporate a hierarchy of stochastic variables . Such models , however , turn out to be difficult to train and higher levels in the hierarchy tend to remain independent of input data – a problem termed posterior collapse . Posterior collapse in VAEs manifests itself by the latent distribution tending to fall back to the prior . With hierarchical VAEs the effect is found to be more pronounced in the top layers farther from the output . For the purpose of the paper and for clarity of exposition , we focus on the simplest extension of hierarchical variational autoencoders where stochastic layers are stacked serially on top of each other [ 2 , 21 ] , p ( x , z ) = p ( x|z1 ) p ( zL ) ∏L−1 i=1 p ( zi|zi+1 ) and q ( z|x ) = q ( z1|x ) ∏L−1 i=1 q ( zi+1|zi ) . The intermediate distributions in this model are commonly taken to be Gaussian distributions parameterized by neural network functions , so that p ( zi|zi+1 ) = N ( zi|µ ( zi+1 ) , σ ( zi+1 ) ) , where µ ( z ) , σ ( z ) are neural networks computing the mean and variance of the Gaussian distribution . We refer to them as vanilla hierarchical variational autoencoders . For each stochastic layer in this model there is a corresponding KL divergence term in the objective given by E [ KL ( q ( zi|zi−1 ) ||p ( zi|zi+1 ) ] . ( 1 ) As described later , expression 1 can be easily decomposed to show an explicit dependence on the variance of the parameterizing functions µ ( zi ) , σ ( zi ) of the intermediate Gaussian distribution . We further show the KL divergence term to be closely related to the harmonics of the parameterizing function . For complex parameterizing functions the KL divergence term has large high frequency components ( and thus high variance ) which leads to unstable training causing posterior collapse . Building on this , we suggest a method for training the simplest hierarchical extension of VAE that avoids the problem of posterior collapse without introducing further architectural complexity [ 13 , 21 ] . Given a hierarchical variational autoencoder , our training method incorporates a smoothing parameter ( we denote this by ρ ) in the neural network functions used to parameterize the intermediate latent distributions . The smoothing is done such that expected values are preserved , the higher frequencies are attenuated and the variance is reduced . Next , the gradients computed with the smooth functions are used to train the original hierarchical variational autoencoder . For the construction of the smoothing transformations for VAEs with Gaussian latent spaces we make use of ideas from the analysis of Gaussian spaces . 
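A sketch of the per-layer KL divergence term in expression 1 for the Gaussian parameterization, written as a one-sample Monte Carlo estimate; the network interfaces and names are placeholders rather than the paper's implementation.

```python
import torch

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), elementwise then summed over the latent dim."""
    var_ratio = (sigma_q / sigma_p) ** 2
    return 0.5 * (var_ratio + ((mu_q - mu_p) / sigma_p) ** 2 - 1.0
                  - torch.log(var_ratio)).sum(dim=-1)

def layer_kl(z_prev, z_next, q_net, p_net):
    """One-sample estimate of E[ KL( q(z_i | z_{i-1}) || p(z_i | z_{i+1}) ) ].

    q_net / p_net are assumed to map a latent sample to (mean, log-std) of the next
    Gaussian layer; z_prev feeds the inference path, z_next the generative one.
    """
    mu_q, log_sig_q = q_net(z_prev).chunk(2, dim=-1)
    mu_p, log_sig_p = p_net(z_next).chunk(2, dim=-1)
    return gaussian_kl(mu_q, log_sig_q.exp(), mu_p, log_sig_p.exp())
```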
We analyze the stochastic functions in vanilla hierarchical VAEs as Hermite expansions on Gaussian spaces [ 9 ] . The Ornstein-Uhlenbeck ( OU ) semigroup from Gaussian analysis is a set of operators that we show to smoothly interpolate between a random variable and its expectation . The OU semigroup provides the appropriate set of smoothing operators which enable us to control variance and avoid posterior collapse . We further show that by smoothing the intermediate parameterizing functions µ ( z ) , σ ( z ) in the proposed manner , the KL divergence of the top layer sees a sudden sharp drop toward zero as the amount of smoothing is decreased . This behaviour is retained when we evaluate the KL divergence on the original unsmoothed variational autoencoder model . This behaviour is reminiscent of phase transitions from statistical mechanics and we adopt the same terminology to describe the phenomenon . Our experiments suggest that the phenomenon is general across datasets and commonly used architectures . Furthermore , the critical value of the smoothing parameter ρ at which the transition occurs is fixed for a given model configuration and varies with stochastic depth and width . We make the following contributions . First , we establish a connection between higher harmonics , variance , posterior collapse and phase transitions in hierarchical VAEs . Second , we show that by using the Ornstein-Uhlenbeck semigroup of operators on the generative stochastic functions in VAEs we reduce higher frequencies and consequently variance to mitigate posterior collpase . We corroborate our findings experimentally and further obtain in CIFAR-10 likelihoods competitive with more complex architectural solutions alongside a reduction in model size . We refer to the proposed family of models as Hermite variational autoencoders ( HVAE ) . 2 HERMITE VARIATIONAL AUTOENCODERS . 2.1 ANALYSIS ON GAUSSIAN SPACES . The analysis of Gaussian spaces studies functions of Gaussian random variables . These are realvalued functions defined on Rn endowed with the Gaussian measure . Many functions employed in machine learning are instances of such functions : decoders for variational autoencoders , as is the case in this work , and generators for generative adversarial networks being two examples . By way of summary , the main facts we use from this field are that a function on a Gaussian space can be expanded in an orthonormal basis , where the basis functions are the Hermite polynomials . This orthonormal expansion is akin to a Fourier transform in this space . The second fact is that the coefficients of such an expansion can be modified in a way to reduce the variance of the expanded function by applying an operator from the Ornstein-Uhlenbeck semigroup of operators . Next , we give a brief introduction . For further details on Gaussian analysis we refer to [ 9 ] . Gaussian Spaces : Let L2 ( Rn , γ ) be the space of square integrable functions , f : Rn → R , with the Gaussian measure γ ( z ) = ∏ iN ( zi|0 , 1 ) . Given functions f , g in this space , the inner product is given by 〈f , g〉 = Eγ ( z ) [ f ( z ) g ( z ) ] . Basis functions for L2 ( R , γ ) : Taking the space of univariate functions L2 ( R , γ ) , it is known that the polynomial functions φi ( z ) = zi are a basis for this space . By a process of orthonormalization we obtain the normalized Hermite polynomial basis for this space . The first few Hermite polynomials are the following : h0 ( z ) = 1 , h1 ( z ) = z , h2 = z 2−1√ 2 , . . .. 
Basis functions for L2 ( Rn , γ ) : Letting α ∈ Nn be a multi-index , the basis functions for L2 ( Rn , γ ) are obtained by multiplying the univariate basis functions across dimension , hα ( z ) = ∏ i hαi ( zi ) . Hermite expansion : A function in L2 ( Rn , γ ) can be expressed as f = ∑ α∈Nn f̂ ( α ) hα , where f̂ ( α ) are the Hermite coefficients of f and are computed as f̂ ( α ) = 〈f , hα〉 = Eγ ( z ) [ f ( z ) hα ( z ) ] . Plancherel ’ s theorem is the following relation between the norm of f and f̂ which follows from orthnormality of the basis functions . 〈f , f〉 = ∑ α f̂ ( α ) 2 , ( 2 ) Ornstein-Uhlenbeck ( OU ) Semigroup : Given a parameter ρ ∈ [ 0 , 1 ] and a Gaussian variable z , we construct a correlated variable z′ as z′ = ρz + √ 1− ρ2zω , where zω ∼ N ( 0 , 1 ) is a random standard Gaussian sample . The OU semigroup is a set of operators , denoted Uρ and parameterized by ρ ∈ [ 0 , 1 ] . The action of Uρ on f at z is to average the function values on correlated z′s around z , Uρf ( z ) = Ez′|z [ f ( z′ ) ] = Ezω [ f ( ρz + √ 1− ρ2zω ) ] ( 3 ) The action of the Uρ operators on the Hermite expansion of function f ( z ) is to decay Hermite coefficients according to their degree , Uρf ( z ) = ∑ α∈Nn ρ |α|f̂ ( α ) hα . where |α| = ∑ i αi . If z is reparameterized as z = σ 1 + µ , the correlated OU sample is given by z′ = σ ( ρ 1 +√ 1− ρ2 2 ) + µ , where 1 , 2 are standard Gaussian variables . This can also be expressed in terms of z as z′ = ρz + ( 1− ρ ) µ+ σ √ 1− ρ2 2 , ( 4 ) 2.2 HERMITE EXPANSIONS FOR VAES . Our proposed method is a new training procedure for the vanilla hierarchical variational autoencoder that builds upon Hermite expansions of Gaussian functions and properties of the OU semigroup . In the context of hierarchical variational autoencoders , the Gaussian function f is the generative model µi ( zi+1 ) and σi ( zi+1 ) that receives as inputs the latent variable zi+1 to return the Gaussian latent variable of the next layer , zi ∼ N ( µi ( zi+1 ) , σi ( zi+1 ) ) . We make use of the following properties of the OU semigroup to construct Gaussian functions of lower variance . The first property we employ is that the OU semigroup of operators interpolates between a random variable ( ρ = 1 ) and its expectation ( ρ = 0 ) , where the parameter ρ controls the extent of the interpolation . Proposition 1 The operators Uρ retain the expected value of the operated function , E [ f ] = E [ Uρf ] . Proposition 2 The operators Uρ interpolate between a random variable and its expectation . In particular , as ρ→ 1 , Uρf = f . and as ρ→ 0 , Uρf = E [ f ] The second property we exploit is that the new random variable Uρf ( z ) has lower variance compared with original variable f ( z ) and is in general a smoother function than f ( z ) . The smoothing properties of the operator Uρ can be understood by examining the Hermite expansion of Uρf . First we note that we can express the expectation and variance of a function f in terms of its Hermite coefficients , specifically E [ f ] = f̂ ( 0 ) and Var ( f ) = E [ ( f −E [ f ] ) 2 ] = E [ ( f − f̂ ( 0 ) ) 2 ] =∑α : |α| > 0 f̂ ( α ) 2 , which follows from Plancherel ’ s theorem ( equation 2 ) . Replacing f with Uρf and using the Hermite expansion of Uρf from equation 3 , the mean remains the same , E [ Uρf ] = ρ0f̂ ( 0 ) = f̂ ( 0 ) , and variance reduces like Var [ Uρf ] = E [ ( Uρf − E [ f ] ) 2 ] = E [ ( f − f̂ ( 0 ) ) 2 ] = ∑ α : |α| > 0 ρ2|α|f̂ ( α ) 2 . 
( 5 ) The last equation indicates that the contribution to the variance by f̂ ( α ) decays by an amount ρ2|α| when ρ ∈ ( 0 , 1 ) . This , in turn , leads to a decrease in variance . Algorithm . In essence , Hermite variational autoencoders are similar to variational autoencoders , save for applying the OU semigroup to the latent distributions p ( zi|zi+1 ) that comprise the generator to compute gradients during training only . Specifically , we apply these operators to the functions parameterizing the mean and variance of the latent Gaussian distributions . For each distribution p ( zi|zi+1 ) we substitute N ( zi|µi ( zi+1 ) , σi ( zi+1 ) ) with N ( zi|Uρµi ( zi+1 ) , Uρσi ( zi+1 ) ) . The new functions result in latent distributions with parameters that have lower variance but the same expected value relative to the conditional input latent distribution . In an alternative parameterization we apply the OU semigroup to the ratio of the mean and variance functions : Uρ µiσi ( zi+1 ) ( see next section for a justification of this ) . The OU semigroup operators can also be applied on approximate posterior functions , but we observe little benefit . In practice , we compute Uρµi ( zi+1 ) and Uρσi ( zi+1 ) by Monte Carlo averaging . As for a function f , Uρf = Ez′|z [ f ( z′ ) ] , where z′ are the correlated samples , we estimate the expectation by Monte Carlo averaging over z′ . Experiments show that 5 to 10 samples suffice . It is important to emphasize that the substitution of the lower variance functions for parameterizing the distributions is only done when computing gradients during training . All evaluations , training or test , are still done on the original hierarchical variational autoencoder model . Thus , the new training procedure has an additional computational cost only for the intermediate distributions in the generator , proportional to the number of correlated samples during training . Complexity . In Hermite VAE the OU sampling operation is only applied in the intermediate stochastic layers in the generator network . In particular , it is not applied in the inference network or in the last layer of the decoder . The fact that OU sampling is not applied in the final stochastic layer computing p ( x|z1 ) is especially important for deep VAEs for images since feature maps are upsampled to match image dimensions in this layer . Thus , for 5 OU samples , the added computational and activation memory complexity is significantly less than 5 times the total cost of the base VAE model , and is 5 times the cost in the higher decoder layers only in the base model . An empirical comparison of maximum memory usage of various models can be found in table 6 .
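A minimal sketch of the OU smoothing used during training, approximating ( Uρf ) ( z ) by averaging f over correlated resamples z′ = ρz + ( 1 − ρ ) µ + σ √ ( 1 − ρ² ) ε as in equations 3 and 4; the number of correlated samples and the interface are placeholders.

```python
import torch

def ou_smooth(f, z, mu, sigma, rho=0.9, n_samples=5):
    """Monte Carlo estimate of (U_rho f)(z) for a reparameterized latent z = mu + sigma * eps.

    f: network mapping a latent sample to distribution parameters (e.g. the next layer's mean).
    Averaging over correlated samples keeps the expectation of f while damping
    high-degree Hermite components, reducing the variance of the parameterization.
    """
    outs = []
    for _ in range(n_samples):
        eps2 = torch.randn_like(z)
        z_corr = rho * z + (1.0 - rho) * mu + sigma * (1.0 - rho ** 2) ** 0.5 * eps2
        outs.append(f(z_corr))
    return torch.stack(outs).mean(dim=0)
```

As described above, such a smoothed call would replace the intermediate generator parameterizations only when computing gradients during training; all evaluations still use the unsmoothed model.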
This paper studies the training of deep hierarchical VAEs and focuses on the problem of posterior collapse. It is argued that reducing the variance of the gradient estimate may help to overcome posterior collapse. The authors focus on reducing the variance of the functions parameterizing the latent distributions of each layer in the generative model, using a layer-wise smoothing operator based on the Ornstein-Uhlenbeck semigroup (parameterized by a parameter $\rho$). The operator requires additional Monte Carlo samples. The authors provide an analysis of the bias and variance. Lastly, they train multiple VAE models, measure posterior collapse, and observe a phase-transition behaviour depending on the parameter $\rho$.
SP:b6dd62914f7464efb601c6d9f8a4d35e047447d5
Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
1 INTRODUCTION . Many real-world optimization problems involve function evaluations that are the result of expensive or time-consuming process . Examples occur in the design of materials ( Mansouri Tehrani et al. , 2018 ) , proteins ( Brookes et al. , 2019 ; Kumar & Levine , 2019 ) , neural network architectures ( Zoph & Le , 2016 ) , or vehicles ( Hoburg & Abbeel , 2014 ) . Rather than settling for a slow and expensive optimization process through repeated function evaluations , one may instead adopt a data-driven approach , where a large dataset of previously collected input-output pairs is given in lieu of running expensive function queries . Not only could this approach be more economical , but in some domains , such as in the design of drugs or vehicles , function evaluations pose safety concerns and an online method may simply be impractical . We refer to this setting as the offline model-based optimization ( MBO ) problem , where a static dataset is available but function queries are not allowed . A straightforward method to solving offline MBO problems would be to estimate a proxy of the ground truth function f̂θ using supervised learning , and to optimize the input x with respect to this proxy . However , this approach is brittle and prone to failure , because the model-fitting process often has little control over the values of the proxy function on inputs outside of the training set . An algorithm that directly optimizes f̂θ could easily exploit the proxy to produce adversarial inputs that nevertheless are scored highly under f̂θ ( Kumar & Levine , 2019 ; Fannjiang & Listgarten , 2020 ) . In order to counteract the effects of model exploitation , we propose to use the normalized maximum likelihood framework ( NML ) ( Barron et al. , 1998 ) . The NML estimator produces the distribution closest to the MLE assuming an adversarial output label , and has been shown to be effective for resisting adversarial attacks ( Bibas et al. , 2019 ) . Moreover , NML provides a principled approach to generating uncertainty estimates which allows it to reason about out-of-distribution queries . However , because NML is typically intractable except for a handful of special cases ( Roos et al. , 2008 ) , we show in this work how we can circumvent intractability issues with NML in order to construct a reliable and robust method for MBO . Because of its general formulation , the NML distribution pro- vides a flexible approach to constructing conservative and robust estimators using high-dimensional models such as neural networks . The main contribution of this work is to develop an offline MBO algorithm that utilizes a novel approximation to the NML distribution to obtain an uncertainty-aware forward model for optimization , which we call NEMO ( Normalized maximum likelihood Estimation for Model-based Optimization ) . The basic premise of NEMO is to construct a conditional NML distribution that maps inputs to a distribution over outputs . While constructing the NML distribution is intractable in general , we discuss novel methods to amortize the computational cost of NML , which allows us the scale our method to practical problems with high dimensional inputs using neural networks . A separate optimization algorithm can then be used to optimize over the output to any desired confidence level . Theoretically , we provide insight into why NML is useful for the MBO setting by showing a regret bound for modeling the ground truth function . 
Empirically , we evaluate our method on a selection of tasks from the Design Benchmark ( Anonymous , 2021 ) , where we show that our method performs competitively with state-of-the-art baselines . Additionally , we provide a qualitative analysis of the uncertainty estimates produced by NEMO , showing that it provides reasonable uncertainty estimates , while commonly used methods such as ensembles can produce erroneous estimates that are both confident and wrong in low-data regimes . 2 RELATED WORK . Derivative-free optimization methods are typically used in settings where only function evaluations are available . This includes methods such as REINFORCE ( Williams , 1992 ) and reward-weighted regression ( Peters & Schaal , 2007 ) in reinforcement learning , the cross-entropy method ( Rubinstein , 1999 ) , latent variable models ( Garnelo et al. , 2018 ; Kim et al. , 2019 ) , and Bayesian optimization ( Snoek et al. , 2012 ; Shahriari et al. , 2015 ) . Of these approaches , Bayesian optimization is the most often used when function evaluations are expensive and limited . However , all of the aforementioned methods focus on the active or online setting , whereas in this work , we are concerned with the offline setting where additional function evaluations are not available . Normalized maximum likelihood is an information-theoretic framework based on the minimum description length principle ( Rissanen , 1978 ) . While the standard NML formulation is purely generative , the conditional or predictive NML setting can be used Rissanen & Roos ( 2007 ) ; Fogel & Feder ( 2018 ) for supervised learning and prediction problems . Bibas et al . ( 2019 ) apply this framework for prediction using deep neural networks , but require an expensive finetuning process for every input . The goal of our work is to provide a scalable and tractable method to approximate the CNML distribution , and we apply this framework to offline optimization problems . Like CNML , conformal prediction ( Shafer & Vovk , 2008 ) is concerned with predicting the value of a query point ŷt+1 given a prior dataset , and provides per-instance confidence intervals , based on how consistent the new input is with the rest of the dataset . Our work instead relies on the NML framework , where the NML regret serves a similar purpose for measuring how close a new query point is to existing , known data . The offline model-based optimization problem has been applied to problems such as designing DNA ( Killoran et al. , 2017 ) , drugs ( Popova et al. , 2018 ) , or materials ( Hautier et al. , 2010 ) . The estimation of distribution algorithm ( Bengoetxea et al. , 2001 ) alternates between searching in the input space and model space using a maximum likelihood objective . Kumar & Levine ( 2019 ) propose to learn an inverse mapping from output values to input values , and optimize over the output values which produce consistent input values . Brookes et al . ( 2019 ) propose CbAS , which uses a trust-region to limit exploitation of the model . Fannjiang & Listgarten ( 2020 ) casts the MBO problem as a minimax game based on the oracle gap , or the value between the ground truth function and the estimated function . In contrast to these works , we develop an approach to MBO which explicitly reasons about uncertainty . Approaches which utilize uncertainty , such as Bayesian optimization , are commonly used in online settings , and we expect these to work in offline settings as well . 
There are several related areas that could arguably be viewed as special cases of MBO. One is contextual bandits under the batch learning from bandit feedback setting, where learning is often done on logged experience (Swaminathan & Joachims, 2015; Joachims et al., 2018); another is offline reinforcement learning (Levine et al., 2020), where model-based methods construct estimates of the MDP parameters (Kidambi et al., 2020; Yu et al., 2020). Our work focuses on a more generic function optimization setting, but could be applied in these domains as well.
3 PRELIMINARIES . We begin by reviewing the problem formulation for offline model-based optimization, as well as necessary background on the normalized maximum likelihood estimator.
Problem statement. We define the offline model-based optimization (MBO) problem as follows. Assume the existence of a stochastic ground truth function f(y|x). The MBO algorithm is given a dataset D of inputs x along with outputs y sampled from f(y|x). As in standard optimization problems, the goal of MBO is to find the input value that maximizes the true function:
$x^* = \arg\max_x \mathbb{E}_{y \sim f(y|x)}[y]. \quad (1)$
However, in offline MBO, the algorithm is not allowed to query the true function f(y|x), and must find the best possible point $x^*$ using only the guidance of a fixed dataset $D = \{x_{1:N}, y_{1:N}\}$. One approach to solving this problem is to introduce a separate proxy function $\hat{f}_\theta(y|x) \approx f(y|x)$, which is learned from D as an estimate of the true function. From here, standard optimization algorithms such as gradient descent can be used to find the optimum of the proxy function, $\hat{x}^* = \arg\max_x \mathbb{E}_{y \sim \hat{f}_\theta(y|x)}[y]$. Alternatively, a trivial algorithm could be to select the highest-performing point in the dataset. While adversarial ground truth functions can easily be constructed where this is the best one can do (e.g., if $f(x) = -\infty$ on any $x \notin D$), in many reasonable domains it should be possible to perform better than the best point in the dataset.
Conditional normalized maximum likelihood. In order to produce a conditional distribution $p_{\mathrm{NML}}(y|x)$ that we can use for estimating the ground truth function, we leverage the conditional or predictive NML (CNML) framework (Rissanen & Roos, 2007; Fogel & Feder, 2018; Bibas et al., 2019). Intuitively, the CNML distribution is the distribution closest to the MLE assuming the test label y is chosen adversarially. This is useful for the MBO setting since we do not know the ground truth value y at points we are querying during optimization, and the CNML distribution gives us conservative estimates that help mitigate model exploitation (see Fig. 1). Formally, the CNML estimator is the minimax solution to a notion of regret, called the individual regret, defined as $\mathrm{Regret}_{\mathrm{ind}}(h, y) = \log p(y|x, \hat{\theta}_{D \cup (x,y)}) - \log h(y|x)$, with $p_{\mathrm{NML}}(y|x) = \arg\min_h \max_{y'} \mathrm{Regret}_{\mathrm{ind}}(h, y')$ (Fogel & Feder, 2018). The notation $D \cup (x, y)$ refers to the augmented dataset formed by appending a query point and label (x, y) to the fixed offline dataset D, and $\hat{\theta}_{D \cup (x,y)}$ denotes the MLE estimate for this augmented dataset. The query point (x, y) serves to represent the test point we are interested in modeling.
The solution to the minimax problem can be expressed as (Fogel & Feder, 2018):
$p_{\mathrm{NML}}(y|x) = \frac{p(y|x, \hat{\theta}_{D \cup (x,y)})}{\int_{y'} p(y'|x, \hat{\theta}_{D \cup (x,y')})\, dy'}, \quad (2)$
where $\hat{\theta}_{D \cup (x,y)} = \arg\max_\theta \frac{1}{N+1} \sum_{(\bar{x},\bar{y}) \in D \cup (x,y)} \log p(\bar{y}|\bar{x}, \theta)$ is the maximum likelihood estimate for p using the dataset D augmented with (x, y).
Algorithm 1 NEMO: Normalized Maximum Likelihood for Model-Based Optimization
Input: model class $\{f_\theta : \theta \in \Theta\}$, dataset $D = (x_{1:N}, y_{1:N})$, number of bins K, evaluation function g(y), learning rates $\alpha_\theta$, $\alpha_x$.
Initialize K models $\theta^{1:K}_0$ and the optimization iterate $x_0$.
Quantize $y_{1:N}$ into K bins, denoted $\lfloor Y \rfloor = \{\lfloor y_1 \rfloor, \cdots, \lfloor y_K \rfloor\}$.
for iteration t in 1 ... T do
  for k in 1 ... K do
    Construct augmented dataset: $D' \leftarrow D \cup (x_t, \lfloor y_k \rfloor)$.
    Update model: $\theta^k_{t+1} \leftarrow \theta^k_t + \alpha_\theta \nabla_{\theta^k_t} \mathrm{LogLikelihood}(\theta^k_t, D')$.
  end for
  Estimate CNML distribution: $\hat{p}_{\mathrm{NML}}(y|x_t) \propto p(y|x_t, \theta^y_t) / \sum_k p(\lfloor y_k \rfloor | x_t, \theta^k_t)$.
  Update x: $x_{t+1} \leftarrow x_t + \alpha_x \nabla_x \mathbb{E}_{y \sim \hat{p}_{\mathrm{NML}}(y|x)}[g(y)]$.
end for
The NML family of estimators has connections to Bayesian methods, and has been shown to be asymptotically equivalent to Bayesian inference under the uninformative Jeffreys prior (Rissanen, 1996). NML and Bayesian modeling both suffer from intractability, albeit for different reasons. Bayesian modeling is generally intractable outside of special choices of the prior and model class $\Theta$ where conjugacy can be exploited. On the other hand, NML is intractable because the denominator requires integrating over, and training an MLE estimator for, every possible y. One of the primary contributions of this paper is to discuss how to approximate this intractable computation with a tractable one that is sufficient for optimization on challenging problems, which we discuss in Section 4.
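The following is a schematic sketch of the inner loop of Algorithm 1 under simplifying assumptions we make here (one model per quantized label bin, a Gaussian likelihood so the log-likelihood reduces to a negative squared error, and vector-valued inputs); the function and variable names are illustrative and are not the authors' code.

# Schematic sketch of one NEMO iteration: update the K per-bin models on the
# augmented datasets, form the approximate CNML distribution at x, then take a
# gradient step on x to maximize E_{y ~ p_NML}[g(y)].
import torch

def nemo_step(models, opts, D_x, D_y, y_bins, x, g, alpha_x=1e-2):
    """models/opts: K models (mapping (batch, d) -> (batch,)) and their optimizers;
    D_x, D_y: offline dataset; y_bins: the K quantized label values; x: current
    iterate (requires_grad); g: evaluation function over labels."""
    logps = []
    for k, (model, opt) in enumerate(zip(models, opts)):
        xs = torch.cat([D_x, x.detach().unsqueeze(0)])     # D' = D u {(x, y_bin_k)}
        ys = torch.cat([D_y, y_bins[k].unsqueeze(0)])
        opt.zero_grad()
        nll = ((model(xs) - ys) ** 2).mean()   # squared error = Gaussian NLL up to constants
        nll.backward()
        opt.step()
        logps.append(-(model(x.unsqueeze(0)) - y_bins[k]) ** 2)  # unnormalized log p(y_k|x, theta_k)
    logps = torch.cat(logps)
    p_nml = torch.softmax(logps, dim=0)        # approximate CNML over the K bins
    objective = (p_nml * g(y_bins)).sum()      # E_{y ~ p_NML(y|x)}[g(y)]
    grad, = torch.autograd.grad(objective, x)
    return (x + alpha_x * grad).detach().requires_grad_(True)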
The paper proposes an approximation method, called NEMO (Normalized maximum likelihood Estimation for Model-based Optimization), to compute the conditional normalized maximum likelihood of a query data point as a way to quantify the uncertainty of a forward prediction model in offline model-based optimization problems. The main idea is to construct a conditional NML (CNML) distribution that maps the high-dimensional inputs to a distribution over output variables. In addition, the paper provides a theoretical motivation: estimating the true function with the CNML is close to the best possible expert even if the test label is chosen adversarially, which makes it hard for an optimizer to exploit the model. Using this CNML with gradient-ascent-based optimization on three offline optimization benchmark datasets (Superconductor, GFP, MoleculeActivity), NEMO outperforms all four other baselines on the Superconductor dataset by almost 1.4x to 1.7x, and generates results comparable to the other four baseline methods on the GFP and MoleculeActivity datasets.
SP:2d25eeb93ba90f9c4064bf794f9a132a6859c8e4
Unsupervised Discovery of Interpretable Latent Manipulations in Language VAEs
1 INTRODUCTION . Transformer-based models yield state-of-the-art results on a number of tasks , including representation learning ( Devlin et al. , 2019 ; Liu et al. , 2019 ; Clark et al. , 2020 ) and generation ( Radford et al . ; Raffel et al. , 2019 ; Lewis et al. , 2020 ) . Notably , large language models have been reported to produce outputs nearly indistinguishable from human-written texts ( Brown et al. , 2020 ) . Although the predictions of autoregressive language models are fluent and coherent , it is not clear how to manipulate the model to get samples with desired properties . For example , make them shorter , more formal or more positive , or , alternatively , use the same model to rewrite human-written texts in a different tone . Current approaches often rely on external labels of target attributes and require modifications to the model . This involves retraining for new attributes or changing the decoding procedure , which is usually expensive . In contrast , models with explicit latent spaces have the innate ability to manipulate text attributes by moving along latent directions . They , however , gained limited traction . One reason is that training a VAE on text data poses a number of optimization challenges , which have been tackled with a varying degree of success ( He et al. , 2019 ; Fu et al. , 2019 ; Zhu et al. , 2020 ) . Additionally , language VAEs are mostly small LSTM-based models which goes against the current trend of using large pretrained Transformers . The first large-scale language VAE model is the recently introduced OPTIMUS ( Li et al. , 2020 ) : it uses BERT as the encoder and GPT-2 as the decoder , and sets a new record on benchmark datasets . Differently from texts , latent space models for images , especially GANs , achieve state-of-the-art generation results . Therefore , these models have been the focus of the research community , and the properties of latent spaces are well-learned . For example , even early works on generative adversarial networks for images report that it is possible to have smooth interpolations between images in the latent space ( Goodfellow et al. , 2014 ) . More recent studies show that the latent space directions corresponding to human-interpretable image transformations ( from now on , ” interpretable directions ” ) can be discovered in an unsupervised way ( Härkönen et al. , 2020 ; Voynov & Babenko , 2020 ; Peebles et al. , 2020 ) . In this paper , we show that for the language domain , much alike the well-studied visual domain , a sufficiently “ good ” latent space allows to manipulate sample attributes with relative ease . To avoid the known difficulties associated with training language GANs , we experiment with VAEs ; more specifically , with the current state-of-the-art model OPTIMUS . We show that for this model , not only it is possible to produce meaningful and “ smooth ” interpolations between examples and to transfer specific properties via arithmetic operations in the latent space , but it is also possible to discover the interpretable latent directions in an unsupervised manner . We propose a method based on the PCA of latent representations of the texts in the training dataset . According to human evaluation , the proportion of interpretable directions among the ones found by our method is consistently larger than the proportion of interpretable directions among canonical co-ordinates or random directions in the latent space . 
The meaningful directions found by this method include, for example, subject age, subject gender, verb tense, and sentence length. Some of the directions, e.g. sentence length, are potentially useful: the ability to expand or shrink a text while preserving its content may be useful for tasks like summarization. Note that the proposed method is simple and fast. The method is simple because it requires only the forward pass of the encoder, without backpropagating through decoding steps. This is very important for the language domain, where backpropagation through samples is significantly more difficult than for images. Namely, generation is non-differentiable, and previous attempts to overcome this issue relied on noisy or biased gradient estimates, which is less reliable than standard MLE training. Instead, we do not rely on generated samples at all: we operate directly in the latent space. Additionally, since sampling directly from the prior does not yield diverse samples in the case of OPTIMUS, we use the representations of the training data without running a decoding procedure; this makes the method fast. To summarize, our contributions are as follows:
1. We propose the first method for unsupervised discovery of interpretable directions in latent spaces of language VAEs.
2. This method is simple and fast: it is based on PCA of latent representations for texts in the training dataset.
3. This method is effective: the proportion of interpretable directions among the ones found by our method is consistently larger than that of canonical co-ordinates or random directions in the latent space.
4. Our work lays foundations for two important areas: first, it makes it possible to compare models in terms of latent space interpretability, and second, it provides a baseline for unsupervised latent controls discovery.
2 RELATED WORK . Finding interpretable directions in latent spaces of language VAEs is related to three lines of work. First, latent variable models for text and, more specifically, properties of latent spaces: for interpretable directions to exist, the latent space has to be smooth (i.e. allow coherent interpolations). Then, since a great part of the motivation for finding interpretable directions is manipulating generated texts, we discuss works on controllable text generation for different types of models, both VAE and standard autoregressive. Finally, we mention recent works trying to discover interpretable directions in image GANs.
2.1 LATENT VARIABLE MODELS FOR TEXT . Latent variable models encode information about text into a probability distribution. In addition to sampling new sentences from the prior distribution, they potentially allow one to explicitly encode specific properties of text, such as sentiment or style. Even early works on VAEs show that a latent space obtained with the VAE objective can result in coherent interpolations (Bowman et al., 2016). While this is encouraging, training good VAEs with smooth and expressive latent spaces is challenging. Specifically, for interpretable directions to exist, we need a model which (i) does not ignore the latent variable, to produce good samples, and (ii) has a continuous latent space, to allow controllable manipulation. Ignoring the latent variable is a known problem of VAEs. It arises because of the KL vanishing problem: over the course of training, the KL divergence part of the loss may drop to 0, which indicates that the model ignores the latent variable.
There exist many ways to alleviate this issue ( Yang et al. , 2017 ; Fang et al. , 2019 ; Fu et al. , 2019 ; Zhu et al. , 2020 ) ; one of the simpler ways is adjusting the weight of the KL loss component according to a specific schedule . Another problem is the latent vacancy problem : differently from images , not all regions of the latent space are occupied by the posterior distribution ( Xu et al. , 2020 ) . In simple words , text latent spaces tend to have “ holes ” where the decoding network fails to generalize . As a result , when the latent codes are manipulated , the modified codes often land in these holes or vacant regions in the posterior latent space . If this happens , a model can not decode properly . In light of the above , discovery of interpretable directions in text latent spaces is possible only with a strong model . Therefore , we use the current state-of-the-art model OPTIMUS ( Li et al. , 2020 ) . It is a recent large-scale variational autoencoder which initializes the encoder with BERT ( Devlin et al. , 2019 ) and the decoder with GPT2 ( Radford et al. ) . In addition to the model ’ s high capacity , we use it because of the available checkpoints and reported results on latent space manipulation . 2.2 CONTROLLABLE GENERATION FOR TEXT DATA . Latent variable models . A natural way to achieve text generation with required attributes is using latent variable text generation models . The idea is that information about the attribute value is encoded in the latent code , and to obtain samples with the desired property one has to fix the corresponding component ( direction ) of the code . For example , several works learn latent spaces with disentangled representations of content and style ( Hu et al. , 2017 ; Logeswaran et al. , 2018 ; Lample et al. , 2019 ; Yang et al. , 2018 ; Shen et al. , 2017 ; John et al. , 2019 ) . After that , to generate sentences in a specific style , the style vector is fixed . Depending on the approach , this style vector can either be estimated by encoding sentences with the desired attribute or be directly produced by specifying the structured latent code ( e.g . one-hot encoding of an attribute ) . Another line of research shows that it is possible to achieve attribute manipulation by moving in the latent space along specific vectors . These vectors , however , are found using data labelled with the attribute , i.e . with supervision . For example , Shen et al . ( 2020 ) change tense of a sentence by adding to its latent representation a “ tense vector ” computed as a difference of averaged representations of sentences with different tenses ; Wang et al . ( 2019 ) use gradients of the attribute classifier . One of the first successful methods that learns a disentangled latent space is the work by Xu et al . ( 2020 ) : they use basis vectors in the constrained latent space ; however , this involves training a model with a structured latent space , which is rather complicated . Autoregressive models . Controllable generation for standard autoregressive language models is usually achieved by either prepending an attribute to the input sequence as a prompt ( Keskar et al. , 2019 ) , training an additional component of the model ( Chan et al. , 2020 ) or adjusting the decoding result with additional attribute-specific language models ( Dathathri et al. , 2020 ) . A more thorough comparison of approaches to controlled text generation can be read in Prabhumoye et al . ( 2020 ) . 
Note that all these approaches require supervision and substantial changes to either training or generation procedures, whereas our approach is applicable to any variational autoencoder.
2.3 INTERPRETABLE DIRECTIONS MINING IN IMAGE GANS . To the best of our knowledge, there are only three works which discover interpretable latent directions in an unsupervised way, and all of them operate on GANs for images. Two of them are not directly applicable to texts. In Voynov & Babenko (2020), the interpretable directions are trained: these directions are the ones which can be recognized by a separate reconstructor based on two samples, from the original and a shifted latent vector. Peebles et al. (2020) propose to learn disentangled attributes by minimizing the sum of squared off-diagonal terms of the generator Hessian matrix. Both approaches require backpropagation through sampling and therefore are not directly applicable to texts: unlike images, generated texts are not differentiable with respect to their latent representations. The last approach, by Härkönen et al. (2020), shows that interpretable controls for image synthesis can be identified by finding principal components of layer outputs of the generator network for several samples from the prior. In our more challenging language domain, instead of sampling from the generator distribution, we take advantage of the availability of the encoder in VAEs and perform PCA on training data representations.
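For concreteness, here is a minimal sketch of the kind of procedure described here, assuming access to a trained VAE through placeholder encode/decode functions (these are not the OPTIMUS API): collect latent codes of training sentences, take their principal directions, and move a sentence's code along one of them.

# Hypothetical sketch: find candidate latent directions by PCA over training-set
# latent codes, then manipulate a sentence by moving along one principal direction.
import numpy as np

def find_directions(encode, sentences, n_directions=10):
    Z = np.stack([encode(s) for s in sentences])      # (num_sentences, latent_dim)
    Z_centered = Z - Z.mean(axis=0, keepdims=True)
    # principal directions = right singular vectors of the centered latent matrix
    _, _, Vt = np.linalg.svd(Z_centered, full_matrices=False)
    return Z.mean(axis=0), Vt[:n_directions]          # mean code and top directions

def manipulate(encode, decode, sentence, direction, scale=2.0):
    z = encode(sentence)
    return decode(z + scale * direction)              # shift along one PCA direction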
This paper proposes a simple approach to discover interpretable latent manipulations in trained text VAEs. The method essentially involves performing PCA on the latent representations to find directions that maximize variance. The authors argue that this results in more interpretable directions. The method is applied on top of a VAE model (OPTIMUS), and the authors argue that different directions discovered by PCA correspond to interpretable concepts.
SP:ce75f565c3c17363695c9e39f28b49a66e3731b8
Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures
√ n dependence . We demonstrate the usefulness of our tail bounds by showing that they lead to estimates of the test loss achievable with several neural network architectures trained on MNIST and Fashion-MNIST that match the state-of-the-art bounds available in the literature . 1 INTRODUCTION . In recent years , there has been a surge of interest in the use of information-theoretic techniques for bounding the loss of learning algorithms . While the first results of this flavor can be traced to the probably approximately correct ( PAC ) -Bayesian approach ( McAllester , 1998 ; Catoni , 2007 ) ( see also ( Guedj , 2019 ) for a recent review ) , the connection between loss bounds and classical information-theoretic measures was made explicit in the works of Russo & Zou ( 2016 ) and Xu & Raginsky ( 2017 ) , where bounds on the average population loss were derived in terms of the mutual information between the training data and the output hypothesis . Since then , these average loss bounds have been tightened ( Bu et al. , 2019 ; Asadi et al. , 2018 ; Negrea et al. , 2019 ) . Furthermore , the information-theoretic framework has also been successfully applied to derive tail probability bounds on the population loss ( Bassily et al. , 2018 ; Esposito et al. , 2019 ; Hellström & Durisi , 2020a ) . Of particular relevance to the present paper is the random-subset setting , introduced by Steinke & Zakynthinou ( 2020 ) and further studied in ( Hellström & Durisi , 2020b ; Haghifam et al. , 2020 ) . In this setting , a random vector S is used to select n training samples Z ( S ) from a larger set Z̃ of 2n samples . Then , bounds on the average population loss are derived in terms of the conditional mutual information ( CMI ) I ( W ; S|Z̃ ) between the chosen hypothesis W and the random vector S given the set Z̃ . The bounds obtained by Xu & Raginsky ( 2017 ) depend on the mutual information I ( W ; Z ) , a quantity that can be unbounded if W reveals too much about the training set Z . In contrast , bounds for the random-subset setting are always finite , since I ( W ; S|Z̃ ) is never larger than n bits . Most information-theoretic population loss bounds mentioned thus far are given by the training loss plus a term with a √ IM ( PWZ ) /n-dependence , where IM ( PWZ ) denotes an information measure , such as mutual information or maximal leakage ( Issa et al. , 2020 ) . Assuming that the information measure grows at most polylogarithmically with n , the convergence rate of the population loss to the training loss is Õ ( 1/ √ n ) , where the Õ-notation hides logarithmic factors . This is sometimes referred to as a slow rate . In the context of bounds on the excess risk , defined as the difference between the achieved population loss for a chosen hypothesis w and its infimum over the hypothesis class , it is known that slow rates are optimal for worst-case distributions and hypothesis classes ( Talagrand , 1994 ) . However , it is also known that under the assumption of realizability ( i.e. , the existence of a w in the hypothesis class such that the population loss LPZ ( w ) = 0 ) and when the hypothesis class is finite , the dependence on the sample size can be improved to Õ ( 1/n ) ( Vapnik , 1998 , Chapter 4 ) . This is referred to as a fast rate . Excess risk bounds with fast rates for randomized classifiers have also been derived , under certain additional conditions , for both bounded losses ( Van Erven et al. , 2015 ) and unbounded losses ( Grünwald & Mehta , 2020 ) . 
Notably , Steinke & Zakynthinou ( 2020 , Thm . 2 ( 3 ) ) derive a population loss bound whose dependence on n is I ( W ; S|Z̃ ) /n . The price for this improved dependence is that the training loss that is added to the n-dependent term is multiplied by a constant larger than 1 . Furthermore , ( Steinke & Zakynthinou , 2020 , Thm . 8 ) shows that if the Vapnik-Chervonenkis ( VC ) dimension of the hypothesis class is finite , there exists an empirical risk minimizer ( ERM ) whose CMI grows at most logarithmically with n. This implies that the CMI approach leads to fast-rate bounds in certain scenarios . However , the result in ( Steinke & Zakynthinou , 2020 , Thm . 2 ( 3 ) ) pertains only to the average population loss : no tail bounds on the population loss are provided . Throughout the paper , we will , with an abuse of terminology , refer to bounds with an n-dependence of the form IM ( PWZ ) /n as fast-rate bounds . Such bounds are also known as linear bounds ( Dziugaite et al. , 2020 ) . Note that the n-dependence of the information measure IM ( PWZ ) has to be at most polylogarithmic for such bounds to actually achieve a fast rate in the usual sense . An intriguing open problem in statistical learning is to find a theoretical justification for the capability of overparameterized neural networks ( NNs ) to achieve good generalization performance despite being able to memorize randomly labeled training data sets ( Zhang et al. , 2017 ) . As a consequence of this behavior , classical population loss bounds that hold uniformly over a given hypothesis class , such as VC bounds , are vacuous when applied to overparameterized NNs . This has stimulated recent efforts aimed at obtaining tighter population loss bounds that are algorithm-dependent or data-dependent . In the past few years , several studies have shown that promising bounds are attainable by using techniques from the PAC-Bayesian literature ( Dziugaite & Roy , 2017 ; Zhou et al. , 2019 ; Dziugaite et al. , 2020 ) . The PAC-Bayesian approach entails using the Kullback-Leibler ( KL ) divergence to compare the distribution on the weights of the NN induced by training to some reference distribution . These distributions are referred to as the posterior and the prior , respectively . Recently , Dziugaite et al . ( 2020 ) used data-dependent priors to obtain state-of-the-art bounds for LeNet-5 trained on MNIST and Fashion-MNIST . In their approach , the available data is used both for training the network and for choosing the prior . This leads to a bound that is tighter than previously available bounds . Furthermore , the bound can be further improved by minimizing the KL divergence between the posterior and the chosen prior during training . One drawback of the PAC-Bayesian approach is that it applies only to stochastic NNs , whose weights are randomly chosen each time the network is used , and not to deterministic NNs with fixed weights . Information-theoretic bounds have also been derived for iterative , noisy training algorithms such as stochastic gradient Langevin dynamics ( SGLD ) ( Bu et al. , 2019 ) . These bounds lead to nonvacuous estimates of the population loss of overparameterized NNs that are trained using SGLD through the use of data-dependent priors ( Negrea et al. , 2019 ) . However , these bounds do not apply to deterministic NNs , nor to standard stochastic gradient descent ( SGD ) training . Furthermore , the bounds pertain to the average population loss , and not to its tails . 
Although the techniques yielding these estimates can be adapted to the PAC-Bayesian setting, as discussed by Negrea et al. (2019, App. I), the resulting bounds are generally loose.
1.1 CONTRIBUTIONS . In this paper, we extend the fast-rate average loss bound by Steinke & Zakynthinou (2020) to the PAC-Bayesian and the single-draw settings. We then use the resulting PAC-Bayesian and single-draw bounds to characterize the test loss of NNs used to classify images from the MNIST and Fashion-MNIST data sets. The single-draw bounds can be applied to deterministic NNs trained through SGD but with Gaussian noise added to the final weights, whereas the PAC-Bayesian bounds apply only to randomized neural networks, whose weights are drawn from a Gaussian distribution each time the network is used. For the same setup, we also evaluate the slow-rate PAC-Bayesian and single-draw bounds from (Hellström & Durisi, 2020b). Our numerical experiments reveal that both the slow-rate bounds from (Hellström & Durisi, 2020b) and the newly derived fast-rate bounds are nonvacuous. Furthermore, for some settings, the fast-rate bounds presented in this paper are quantitatively stronger than the corresponding slow-rate ones from (Hellström & Durisi, 2020b), and essentially match the best bounds available in the literature for SGD-trained NNs (Dziugaite et al., 2020).
1.2 PRELIMINARIES . We now detail some notation and describe the random-subset setting introduced in (Steinke & Zakynthinou, 2020). Let $\mathcal{Z}$ be the instance space, $\mathcal{W}$ be the hypothesis space, and $\ell : \mathcal{W} \times \mathcal{Z} \to \mathbb{R}_+$ be the loss function. Throughout the paper, we will assume that the range of $\ell(w, z)$ is restricted to $[0, 1]$ for all $w \in \mathcal{W}$ and all $z \in \mathcal{Z}$. A typical example of such a loss function is the classification error. In this setting, the sample $Z$ consists of an example $X \in \mathcal{X}$ and a corresponding label $Y \in \mathcal{Y}$. Then, the loss is given by $\ell(W, Z) = 1\{f_W(X) \neq Y\}$, where $f_W(\cdot)$ is the map from $\mathcal{X}$ to $\mathcal{Y}$ induced by the hypothesis $W$. We note that, when applying our bounds to NNs, the function $\ell(\cdot, \cdot)$ used to characterize the performance of the network does not necessarily need to coincide with the loss function used when training the NN. For instance, one could use the (unbounded) cross-entropy loss when training the NN, and apply the bounds for the scenario in which $\ell(\cdot, \cdot)$ is the classification error. In the random-subset setting, $2n$ training samples $\tilde{Z} = (\tilde{Z}_1, \ldots, \tilde{Z}_{2n})$ are available, with all entries of $\tilde{Z}$ being drawn independently from some distribution $P_Z$ on $\mathcal{Z}$. However, only a randomly selected subset of cardinality n is actually used for training. Following (Steinke & Zakynthinou, 2020), we assume that the training data $Z(S)$ is selected as follows. Let $S = (S_1, \ldots, S_n)$ be an n-dimensional random vector, the elements of which are drawn independently from a $\mathrm{Bern}(1/2)$ distribution and are independent of $\tilde{Z}$. Then, for $i = 1, \ldots, n$, the ith training sample in $Z(S)$ is $Z_i(S_i) = \tilde{Z}_{i + S_i n}$. Thus, the binary variable $S_i$ determines whether the training set $Z(S)$ will contain the sample $\tilde{Z}_i$ or the sample $\tilde{Z}_{i+n}$. The selected training procedure, including the loss function used for training, will determine the conditional distribution $P_{W|Z(S)}$ on the hypothesis class given the training data. For a given $W \sim P_{W|Z(S)}$, we let $L_{Z(S)}(W) = \frac{1}{n} \sum_{i=1}^{n} \ell(W, Z_i(S_i))$ denote the training loss.
Furthermore, we let $\bar{S}$ denote the modulo-2 complement of $S$. Then $L_{Z(\bar{S})}(W)$ can be interpreted as a test loss, since $W$ is conditionally independent of $Z(\bar{S})$ given $Z(S)$. Finally, we note that the average over $(\tilde{Z}, S)$ of the test loss is the population loss $L_{P_Z}(W) = \mathbb{E}_{P_{\tilde{Z}S}}[L_{Z(\bar{S})}(W)] = \mathbb{E}_{P_Z}[\ell(W, Z)]$. Our bounds will depend on several different information-theoretic quantities, which we shall introduce next. The information density $\imath(W, Z)$ between $W$ and $Z$ is defined as $\imath(W, Z) = \log \frac{dP_{WZ}}{dP_W P_Z}$, where $\frac{dP_{WZ}}{dP_W P_Z}$ is the Radon-Nikodym derivative of $P_{WZ}$ with respect to $P_W P_Z$. The information density is well-defined if $P_{WZ}$ is absolutely continuous with respect to $P_W P_Z$, denoted by $P_{WZ} \ll P_W P_Z$. The conditional information density $\imath(W, S|\tilde{Z})$ between $W$ and $S$ given $\tilde{Z}$ is defined as $\imath(W, S|\tilde{Z}) = \log \frac{dP_{W\tilde{Z}S}}{dP_{W|\tilde{Z}} P_{\tilde{Z}S}}$, provided that $P_{W\tilde{Z}S} \ll P_{W|\tilde{Z}} P_{\tilde{Z}S}$. The mutual information can be obtained as $I(W; Z) = \mathbb{E}_{P_{WZ}}[\imath(W, Z)]$ and the conditional mutual information as $I(W; S|\tilde{Z}) = \mathbb{E}_{P_{W\tilde{Z}S}}[\imath(W, S|\tilde{Z})]$. We will also need the KL divergences $D(P_{W|Z} \,\|\, P_W) = \mathbb{E}_{P_{W|Z}}[\imath(W, Z)]$ and $D(P_{W|\tilde{Z}S} \,\|\, P_{W|\tilde{Z}}) = \mathbb{E}_{P_{W|\tilde{Z}S}}[\imath(W, S|\tilde{Z})]$. In practical applications, the marginal distribution $P_W$ is not available, since $P_Z$ is unknown. Furthermore, $P_{W|\tilde{Z}}$ is also difficult to compute, since marginalizing $P_S P_{W|\tilde{Z}S}$ over $S$ involves performing training $2^n$ times. Hence, bounds depending on $\imath(W, Z)$ or on $\imath(W, S|\tilde{Z})$ cannot typically be evaluated. Therefore, it will be convenient to replace the information density $\imath(W, Z)$ with the proxy $\log \frac{dP_{WZ}}{dQ_W P_Z}$ and $\imath(W, S|\tilde{Z})$ with $\log \frac{dP_{W\tilde{Z}S}}{dQ_{W|\tilde{Z}} P_{\tilde{Z}S}}$. Here, $Q_W$ and $Q_{W|\tilde{Z}}$ are suitably chosen auxiliary distributions (priors) that are used in place of the intractable true marginals.
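As a small aside, the following is an illustrative sketch (ours, not the paper's) of the random-subset construction described above, with sample, train, and loss left as caller-supplied placeholders; it simply draws the supersample, selects the training half with Bernoulli bits, and reports the loss on the selected half and on its complement.

# Illustrative sketch of the random-subset (CMI) setting: draw a supersample of
# 2n points, pick the training half with Bernoulli(1/2) bits S, and compare the
# loss on the selected half Z(S) with the loss on the complement Z(S-bar).
import numpy as np

def random_subset_losses(sample, train, loss, n, rng=np.random.default_rng(0)):
    z_tilde = [sample() for _ in range(2 * n)]                # supersample of size 2n
    s = rng.integers(0, 2, size=n)                            # S_i ~ Bern(1/2)
    z_train = [z_tilde[i + s[i] * n] for i in range(n)]       # Z(S)
    z_test  = [z_tilde[i + (1 - s[i]) * n] for i in range(n)] # Z(S-bar)
    w = train(z_train)                                        # hypothesis W ~ P_{W|Z(S)}
    train_loss = np.mean([loss(w, z) for z in z_train])       # L_{Z(S)}(W)
    test_loss  = np.mean([loss(w, z) for z in z_test])        # L_{Z(S-bar)}(W)
    return train_loss, test_loss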
This paper extends results of prior work by Steinke and Zakynthinou, by providing generalization bounds in the PAC-Bayesian and single-draw settings that depend on the conditional mutual information. The emphasis in this work is on obtaining fast rates ($1/n$ vs. $1/\sqrt{n}$). The authors also conduct empirical experiments showing how the fast rate bounds they propose can be useful for obtaining non-vacuous generalization bounds in the context of over-parameterized neural networks.
SP:b9d78677e836fddeab78615ad35e9545d9c1d08f
Neural Time-Dependent Partial Differential Equation
1 INTRODUCTION . The research of time-dependent partial differential equations ( PDEs ) is regarded as one of the most important disciplines in applied mathematics . PDEs appear ubiquitously in a broad spectrum of fields including physics , biology , chemistry , and finance , to name a few . Despite their fundamental importance , most PDEs can not be solved analytically and have to rely on numerical solving methods . Developing efficient and accurate numerical schemes for solving PDEs , therefore , has been an active research area over the past few decades ( Courant et al. , 1967 ; Osher & Sethian , 1988 ; LeVeque ; Cockburn et al. , 2012 ; Thomas , 2013 ; Johnson , 2012 ) . Still , devising stable and accurate schemes with acceptable computational cost is a difficult task , especially when nonlinear and ( or ) high-dimensional PDEs are considered . Additionally , PDE models emerged from science and engineering disciplines usually require huge empirical data for model calibration and validation , and determining the multidimensional parameters in such a PDE system poses another challenge ( Peng et al. , 2020 ) . Deep learning is considered to be the state-of-the-art tool in classification and prediction of nonlinear inputs , such as image , text , and speech ( Litjens et al. , 2017 ; Devlin et al. , 2018 ; LeCun et al. , 1998 ; Krizhevsky et al. , 2012 ; Hinton et al. , 2012 ) . Recently , considerable efforts have been made to employ deep learning tools in designing data-driven methods for solving PDEs ( Han et al. , 2018 ; Long et al. , 2018 ; Sirignano & Spiliopoulos , 2018 ; Raissi et al. , 2019 ) . Most of these approaches are based on fully-connected neural networks ( FCNNs ) , convolutional neural networks ( CNNs ) and multilayer perceptron ( MLP ) . These neural network structures usually require an increment of the layers to improve the predictive accuracy ( Raissi et al. , 2019 ) , and subsequently lead to a more complicated model due to the additional parameters . Recurrent neural networks ( RNNs ) are one type of neural network architectures . RNNs predict the next time step value by using the input data from the current and previous states and share parameters across all inputs . This idea ( Sherstinsky , 2020 ) of using current and previous step states to calculate the state at the next time step is not unique to RNNs . In fact , it is ubiquitously used in numerical PDEs . Almost all time-stepping numerical methods applied to solve time-dependent PDEs , such as Euler ’ s , Crank-Nicolson , high-order Taylor and its variance Runge-Kutta ( Ascher et al. , 1997 ) time-stepping methods , update numerical solution by utilizing solution from previous steps . This motivates us to think what would happen if we replace the previous step data in the neural network with numerical solution data to PDE supported on grids . It is possible that the neural network behaves like a time-stepping method , for example , forward Euler ’ s method yields the numerical solution at a new time point as the current state output ( Chen et al. , 2018 ) . Since the numerical solution on each of the grid point ( for finite difference ) or grid cell ( for finite element ) computed at a set of contiguous time points can be treated as neural network input in the form of one time sequence of data , the deep learning framework can be trained to predict any time-dependent PDEs from the time series data supported on some grids if the bidirectional structure is applied ( Huang et al. 
, 2015; Schuster & Paliwal, 1997). In other words, the supervised training process can be regarded as a practice of the deep learning framework learning the numerical solution from the input data, by learning the coefficients of the neural network layers. Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) is a neural network built upon RNNs. Unlike vanilla RNNs, which suffer from losing long-term information and a high probability of gradient vanishing or exploding, LSTM has a specifically designed memory cell with a set of new gates such as the input gate and forget gate. Equipped with these new gates, which control when to preserve and pass the information, LSTM is capable of learning long-term dependencies without the danger of gradient vanishing or exploding. In the past two decades, LSTM has been widely used in the field of natural language processing (NLP), for tasks such as machine translation, dialogue systems, and question answering systems (Lipton et al., 2015). Inspired by numerical PDE schemes and the LSTM neural network, we propose a new deep learning framework, denoted as Neural-PDE. It simulates multi-dimensional governing laws, represented by time-dependent PDEs, from time series data generated on some grids, and predicts the data at the next n time steps. The Neural-PDE is capable of intelligently processing related data from all spatial grids by using a bidirectional (Schuster & Paliwal, 1997) neural network, and thus guarantees the accuracy of the numerical solution and the feasibility of learning any time-dependent PDEs. The detailed structure of the Neural-PDE and the data normalization are introduced in Section 3. The rest of the paper is organized as follows. Section 2 briefly reviews the finite difference method for solving PDEs. Section 3 contains a detailed description of the design of the Neural-PDE. In Section 4 and Appendix A of the paper, we apply the Neural-PDE to solve four different PDEs, including the 1-dimensional (1D) wave equation, the 2-dimensional (2D) heat equation, and two systems of PDEs: the inviscid Burgers' equations and a coupled Navier-Stokes-Cahn-Hilliard system, which widely appear in multiscale modeling of complex fluid systems. We demonstrate the robustness of the Neural-PDE, which achieves convergence within 20 epochs with an admissible mean squared error, even when we add Gaussian noise to the input data.
2 PRELIMINARIES . 2.1 TIME DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS . A time-dependent partial differential equation is an equation of the form:
$u_t = f\left(x_1, \cdots, u, \frac{\partial u}{\partial x_1}, \cdots, \frac{\partial u}{\partial x_n}, \frac{\partial^2 u}{\partial x_1 \partial x_1}, \cdots, \frac{\partial^2 u}{\partial x_1 \partial x_n}, \cdots, \frac{\partial^n u}{\partial x_1 \cdots \partial x_n}\right), \quad (2.1.1)$
where $u = u(t, x_1, \ldots, x_n)$ is the unknown, $x_i \in \mathbb{R}$ are spatial variables, and the operator $f$ maps $\mathbb{R} \mapsto \mathbb{R}$. For example, consider the parabolic heat equation $u_t = \alpha^2 \Delta u$, where $u$ represents the temperature and $f$ is the Laplacian operator $\Delta$. Eq. (2.1.1) can be solved by finite difference methods, which are briefly reviewed below for the self-completeness of the paper.
2.2 FINITE DIFFERENCE METHOD . Consider using a finite difference method (FDM) to solve a two-dimensional second-order PDE of the form:
$u_t = f(x, y, u_x, u_y, u_{xx}, u_{yy}), \quad (x, y) \in \Omega \subset \mathbb{R}^2, \; t \in \mathbb{R}^+ \cup \{0\}, \quad (2.2.1)$
with some proper boundary conditions. Let $\Omega = [x_a, x_b] \times [y_a, y_b]$, and
$u^n_{i,j} = u(x_i, y_j, t_n) \quad (2.2.2)$
where $t_n = n\delta t$, $0 \le n \le N$, and $\delta t = T/N$ for $t \in [0, T]$ and some large integer $N$.
$x_i = i\delta x$, $0 \le i \le N_x$, $\delta x = \frac{x_b - x_a}{N_x}$ for $x \in [x_a, x_b]$; $y_j = j\delta y$, $0 \le j \le N_y$, $\delta y = \frac{y_b - y_a}{N_y}$ for $y \in [y_a, y_b]$. $N_x$ and $N_y$ are integers. The central difference method approximates the spatial derivatives as follows (Thomas, 2013):
$u_x(x_i, y_j, t) = \frac{1}{2\delta x}(u_{i+1,j} - u_{i-1,j}) + O(\delta x^2), \quad (2.2.3)$
$u_y(x_i, y_j, t) = \frac{1}{2\delta y}(u_{i,j+1} - u_{i,j-1}) + O(\delta y^2), \quad (2.2.4)$
$u_{xx}(x_i, y_j, t) = \frac{1}{\delta x^2}(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) + O(\delta x^2), \quad (2.2.5)$
$u_{yy}(x_i, y_j, t) = \frac{1}{\delta y^2}(u_{i,j+1} - 2u_{i,j} + u_{i,j-1}) + O(\delta y^2). \quad (2.2.6)$
To this end, the explicit time-stepping scheme to update the next-step solution $u^{n+1}$ is given by:
$u^{n+1}_{i,j} \approx U^{n+1}_{i,j} = U^n_{i,j} + \delta t\, f(x_i, y_j, U^n_{i,j}, U^n_{i,j-1}, U^n_{i,j+1}, U^n_{i+1,j}, U^n_{i-1,j}) \quad (2.2.7)$
$\equiv F(x_i, y_j, \delta x, \delta y, \delta t, U^n_{i,j}, U^n_{i,j-1}, U^n_{i,j+1}, U^n_{i+1,j}, U^n_{i-1,j}). \quad (2.2.8)$
Evidently, the finite difference method (2.2.7) for updating $u^{n+1}$ on a grid point relies on the previous time steps' solutions, supported on the grid point and its neighbours. The scheme (2.2.7) updates $u^{n+1}_{i,j}$ using four points of $u^n$ values (see Figure 1). Similarly, the finite element method (FEM) approximates the new solution by calculating the corresponding mesh cell coefficient (see Appendix), which is updated from its related nearby coefficients on the mesh. From this perspective, one may regard numerical schemes for solving time-dependent PDEs as methods that gather information from the neighbourhood data of interest.
3 PROPOSED METHOD . 3.1 MATHEMATICAL MOTIVATION . Recurrent neural networks, including LSTM, are artificial neural network structures of the form (Lipton et al., 2015):
$h_t = \sigma(W_{hx} x_t + W_{hh} h_{t-1} + b_h) \equiv \sigma_a(x_t, h_{t-1}) \equiv \sigma_b(x_0, x_1, x_2, \cdots, x_t), \quad (3.1.1)$
where $x_t \in \mathbb{R}^d$ is the input data of the $t$th state and $h_{t-1} \in \mathbb{R}^h$ denotes the value processed by the hidden layers in the previous state. The output $y_t$ of the current state is updated from the current state value $h_t$:
$y_t = \sigma(W_{hy} h_t + b_y) \quad (3.1.2)$
$\equiv \sigma_c(h_t) \equiv \sigma_d(x_0, x_1, x_2, \cdots, x_t). \quad (3.1.3)$
Here $W_{hx} \in \mathbb{R}^{h \times d}$, $W_{hh} \in \mathbb{R}^{h \times h}$, $W_{hy} \in \mathbb{R}^{h \times h}$ are the weight matrices, the vectors $b_h, b_y \in \mathbb{R}^h$ are the bias terms, and $\sigma, \sigma_a, \sigma_b, \sigma_c, \sigma_d$ are the corresponding activation and mapping functions. With a proper design of the input and forget gates, LSTM can effectively yield better control over the gradient flow and better preserve useful information from long-range dependencies (Graves & Schmidhuber, 2005). Now consider a temporally continuous vector function $u \in \mathbb{R}^n$ given by an ordinary differential equation of the form:
$\frac{du(t)}{dt} = g(u(t)). \quad (3.1.4)$
Let $u^n = u(t = n\delta t)$; a forward Euler's method for solving $u$ can easily be derived from Taylor's theorem, which gives the following first-order accurate approximation of the time derivative:
$\frac{du^n}{dt} = \frac{u^{n+1} - u^n}{\delta t} + O(\delta t). \quad (3.1.5)$
Then we have:
$\frac{du}{dt} = g(u) \xrightarrow{(3.1.5)} u^{n+1} = u^n + \delta t\, g(u^n) + O(\delta t^2) \;\rightarrow\; \hat{u}^{n+1} = f_1(\hat{u}^n) = \underbrace{f_1 \circ f_1 \circ \cdots \circ f_1}_{n}(\hat{u}^0) \quad (3.1.6)$
Here $\hat{u}^n \approx u(n\delta t)$ is the numerical approximation and $f_1 \equiv u^n + \delta t\, g(u^n) : \mathbb{R}^n \to \mathbb{R}^n$. Combining equations (3.1.1) and (3.1.6), one may notice that residual networks, recurrent neural networks, and also LSTM networks can be regarded as a numerical scheme for solving time-dependent differential equations if more layers are added and smaller time steps are taken (Chen et al., 2018).
The canonical structure of such recurrent neural networks usually calculates the current state value from its previous time step value $h_{t-1}$ and the current state input $x_t$. Similarly, in numerical PDEs, the next-step data at a grid point is updated from the previous (and current) values on its nearby grid points (see Eq. 2.2.7). Thus, what if we replace the temporal inputs $h_{t-1}$ and $x_t$ with spatial information? A simple sketch of the upwinding method for a 1d example of $u(x, t)$:
$u_t + \nu u_x = 0 \quad (3.1.7)$
will be:
$u^{n+1}_i = u^n_i - \nu \frac{\delta t}{\delta x}(u^n_i - u^n_{i-1}) + O(\delta x, \delta t) \;\rightarrow\; \hat{u}^{n+1}_i = f_2(\hat{u}^n_{i-1}, \hat{u}^n_i) \quad (3.1.8)$
$\equiv f_\theta(f_\eta(x_i, h_{i-1}(\hat{u}))) = f_{\theta,\eta}(\hat{u}^n_0, \hat{u}^n_1, \cdots, \hat{u}^n_{i-1}, \hat{u}^n_i) = v^{n+1}_i \quad (3.1.9)$
$x_i = \hat{u}^n_i, \quad h_{i-1}(\hat{u}) = \sigma(\hat{u}^n_{i-1}, h_{i-2}(\hat{u})) \equiv f_\eta(\hat{u}^n_0, \hat{u}^n_1, \hat{u}^n_2, \cdots, \hat{u}^n_{i-1}). \quad (3.1.10)$
Here let $v^{n+1}_i$ be the prediction of $\hat{u}^{n+1}_i$ produced by the neural network. We replace the temporal previous state $h_{t-1}$ with the spatial grid value $h_{i-1}$ and input the numerical solution $\hat{u}^n_i \approx u(i\delta x, n\delta t)$ as the current state value, which indicates that the neural network can be seen as a forward Euler method for equation 3.1.7 (Lu et al., 2018). The function $f_2 \equiv \hat{u}^n_i - \nu \frac{\delta t}{\delta x}(\hat{u}^n_i - \hat{u}^n_{i-1}) : \mathbb{R} \to \mathbb{R}$, the function $f_\theta$ represents the dynamics of the hidden layers in the decoder with parameters $\theta$, and $f_\eta$ specifies the dynamics of the LSTM layer (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005) in the encoder with parameters $\eta$. The function $f_{\theta,\eta}$ simulates the dynamics of the Neural-PDE with parameters $\theta$ and $\eta$. By applying a bidirectional neural network, all grid data are transferred, and this enables the LSTM to simulate the PDE as:
$v^{n+1}_i = f_\theta(f_\eta(h_{i+1}(\hat{u}), \hat{u}^n_i, h_{i-1}(\hat{u}))) \quad (3.1.11)$
$h_{i+1}(\hat{u}) \equiv f_\eta(\hat{u}^n_{i+1}, \hat{u}^n_{i+2}, \hat{u}^n_{i+3}, \cdots, \hat{u}^n_k). \quad (3.1.12)$
For a time-dependent PDE, if we map all our grid data into an input matrix which contains the information of $\delta x$, $\delta t$, then the neural network will regress such coefficients as constants and will learn and filter the physical rules from the data on all $k$ mesh grids as:
$v^{n+1}_i = f_{\theta,\eta}(\hat{u}^n_0, \hat{u}^n_1, \hat{u}^n_2, \cdots, \hat{u}^n_k) \quad (3.1.13)$
The LSTM neural network is designed to overcome the vanishing gradient issue through its hidden layers; therefore we use such a recurrent structure to increase the stability of the numerical approach in deep learning. The highly nonlinear function $f_{\theta,\eta}$ simulates the dynamics of the updating rule for $u^{n+1}_i$, which works in a way similar to a finite difference method (Section 2.2) or a finite element method.
3.2 NEURAL-PDE . In particular, we use the bidirectional LSTM (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005) to better retain the state information from data on grid points which are neighbours in the mesh but far apart in the input matrix. The right frame of Figure 3 shows the overall design of the Neural-PDE. Denote the time series data at the collocation points as $a^N_1, a^N_2, \cdots, a^N_k$, with $a^N_i = [\hat{u}^0_i, \hat{u}^1_i, \cdots, \hat{u}^N_i]$ at the $i$th point. The superscript represents different time points. The Neural-PDE takes the past states $\{a^N_1, a^N_2, \cdots, a^N_k\}$ of all collocation points, and outputs the predicted future states $\{b^M_1, b^M_2, \cdots, b^M_k\}$, where $b^M_i = [v^{N+1}_i, v^{N+2}_i, \cdots, v^{N+M}_i]$ is the Neural-PDE prediction for the $i$th collocation point at time points from $N+1$ to $N+M$. The data from time point 0 to N are the training data set.
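To make the finite-difference analogy concrete, here is a small illustrative sketch (ours, not the paper's code) of the explicit update (2.2.7) specialized to the 2D heat equation $u_t = \alpha^2 \Delta u$; the grid, initial condition, and step sizes are arbitrary placeholders.

# Illustrative sketch of the explicit scheme (2.2.7) for the 2D heat equation
# u_t = alpha^2 * (u_xx + u_yy) on a uniform grid with a fixed (Dirichlet) boundary.
import numpy as np

def heat_step(U, alpha, dx, dy, dt):
    """One forward-Euler time step; interior points only, boundary held fixed."""
    U_new = U.copy()
    lap = ((U[2:, 1:-1] - 2 * U[1:-1, 1:-1] + U[:-2, 1:-1]) / dx**2 +
           (U[1:-1, 2:] - 2 * U[1:-1, 1:-1] + U[1:-1, :-2]) / dy**2)
    U_new[1:-1, 1:-1] = U[1:-1, 1:-1] + dt * alpha**2 * lap
    return U_new

# toy run: the explicit-scheme stability restriction dt <= dx^2 dy^2 / (2 alpha^2 (dx^2 + dy^2))
# is respected here so the update does not blow up
U = np.zeros((51, 51)); U[20:30, 20:30] = 1.0
dx = dy = 0.02; alpha = 1.0; dt = 0.25 * dx**2 / (2 * alpha**2)
for _ in range(200):
    U = heat_step(U, alpha, dx, dy, dt)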
The Neural-PDE is an encoder-decoder style sequence model that first maps the input data to a low-dimensional latent space via
$h_i = \overrightarrow{\mathrm{LSTM}}(a_i) \oplus \overleftarrow{\mathrm{LSTM}}(a_i), \quad (3.2.1)$
where $\oplus$ denotes concatenation and $h_i$ is the latent embedding of point $a_i$ under the environment. A decoder, another bidirectional LSTM followed by a dense layer, then produces the prediction:
$v_i = (\overrightarrow{\mathrm{LSTM}}(h_i) \oplus \overleftarrow{\mathrm{LSTM}}(h_i)) \cdot W, \quad (3.2.2)$
where $W$ is the learnable weight matrix of the dense layer. During the training process, the mean squared error (MSE) loss $L$ is used, as we typically do not know the specific form of the PDE:
$L = \sum_{t=N+1}^{N+M} \sum_{i=1}^{k} \| \hat{u}^t_i - v^t_i \|^2. \quad (3.2.3)$
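A minimal sketch of a Neural-PDE-style model along the lines of Eqs. (3.2.1)-(3.2.3), written here in PyTorch under our own assumptions; the layer sizes, shapes, and names are placeholders, and the authors' actual architecture and hyperparameters may differ.

# Sketch: a bidirectional-LSTM encoder over the past N+1 states of the k grid
# points, followed by a bidirectional-LSTM decoder with a dense head emitting
# the next M states. Input shape: (batch, k, N+1); output shape: (batch, k, M).
import torch
import torch.nn as nn

class NeuralPDESketch(nn.Module):
    def __init__(self, n_in, n_out, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_in, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_out)    # dense layer W as in Eq. (3.2.2)

    def forward(self, a):                 # a: time series of each of the k grid points
        h, _ = self.encoder(a)            # latent embeddings, Eq. (3.2.1)
        z, _ = self.decoder(h)
        return self.head(z)               # predicted future states per grid point

model = NeuralPDESketch(n_in=101, n_out=20)
a = torch.randn(8, 64, 101)               # 8 samples, 64 grid points, 101 past steps
target = torch.randn(8, 64, 20)
loss = ((model(a) - target) ** 2).sum(dim=(1, 2)).mean()   # MSE loss as in Eq. (3.2.3)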
This work proposes a sequence-to-sequence approach for learning the time evolution of PDEs. The method employs a bi-directional LSTM to predict solutions of a PDE-based formulation for a chosen number of time steps. By itself this is an interesting, and important goal, but the method does not seem to contain any novel components apart from demonstrating that LSTMs can be used to learn data from PDEs. The paper only compares to a simple form of PINNs, but not to a variety of other time forecasting algorithms available in the deep learning field (LSTM are just one of many methods used these days, a more state of the art one being e.g. transformers). In addition, the examples only contain single cases with relatively simple model equations.
SP:29a7b851d3edc2176467adc75ba67cc973a11a37
Experimental Design for Overparameterized Learning with Application to Single Shot Deep Active Learning
1 INTRODUCTION . The impressive performance exhibited by modern machine learning models hinges on the ability to train the aforementioned models on very large amounts of labeled data. In practice, in many real-world scenarios, even when raw data exists aplenty, acquiring labels might prove challenging and/or expensive. This severely limits the ability to deploy machine learning capabilities in real-world applications. This bottleneck has been recognized early on, and methods to alleviate it have been suggested. Most relevant for our work is the large body of research on active learning or optimal experimental design, which aims at selecting data points to be labeled so as to maximally inform the learning process. Disappointingly, active learning techniques seem to deliver mostly lukewarm benefits in the context of deep learning. One possible reason why experimental design has so far failed to make an impact in the context of deep learning is that such models are overparameterized, and oftentimes are trained to be interpolative (Zhang et al., 2017), i.e., they are trained so that a perfect fit of the training data is found. This raises a conundrum: the classical perspective on statistical learning theory is that overfitting should be avoided since there is a tradeoff between the fit and the complexity of the model. This conundrum is exemplified by the double descent phenomenon (Belkin et al., 2019b; Bartlett et al., 2020): namely, when fixing the model size and increasing the amount of training data, the predictive error initially goes down, then starts to go up, exploding when the amount of training data approaches the model complexity, and then starts to descend again. This runs counter to statistical intuition, which says that more data implies better learning. Indeed, when using interpolative models, more data can hurt (Nakkiran et al., 2020a)! This phenomenon is exemplified in the curve labeled "Random Selection" in Figure 1. Figure 1 explores the predictive performance of various designs when learning a linear regression model and varying the amount of training data with responses. The fact that more data can hurt further motivates experimental design in the interpolative regime. Presumably, if data is carefully curated, more data should never hurt. Unfortunately, classical optimal experimental design focuses on the underparameterized (and thus, noninterpolative) case. As such, the theory reported in the literature is often not applicable in the interpolative regime. As our analysis shows (see Section 3), the prediction error of interpolative models can either be bias dominated (the first descent phase, i.e., when the training size is very small compared to the number of parameters), variance dominated (near equality of size and parameters), or of mixed nature. However, properly trained underparameterized models tend to have prediction error which is variance dominated, so classical experimental design focuses on variance reduction. As such, naively using classical optimality criteria, such as V-optimality (the one most relevant for generalization error) or others, in the context of interpolation tends to produce poor results when the prediction error is bias dominated or of mixed nature. This is exemplified in the curve labeled "Classical OED" in Figure 1. The goal of this paper is to understand these regimes, and to propose an experimental design strategy that is well suited for overparameterized models.
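As a quick, self-contained illustration of the "more data can hurt" behaviour discussed above (this is our own synthetic sketch, not the paper's Figure 1), the snippet below fits minimum-norm interpolating linear regression with a fixed number of parameters d while growing the number of randomly chosen training points n; the test error typically spikes as n approaches d.

# Synthetic illustration of double descent for minimum-norm ("ridgeless") linear
# regression: fixed dimension d, growing training-set size n, random Gaussian design.
import numpy as np

rng = np.random.default_rng(0)
d, n_test, noise = 50, 2000, 0.5
w_true = rng.normal(size=d) / np.sqrt(d)
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_true + noise * rng.normal(size=n_test)

for n in [10, 25, 45, 50, 55, 75, 150, 400]:
    X = rng.normal(size=(n, d))
    y = X @ w_true + noise * rng.normal(size=n)
    w_hat = np.linalg.pinv(X) @ y        # minimum-norm interpolating solution when n <= d
    err = np.mean((X_test @ w_hat - y_test) ** 2)
    print(f"n = {n:4d}   test MSE = {err:.3f}")   # error typically peaks near n = d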
Like many recent works that attempt to understand the double descent phenomenon by analyzing underdetermined linear regression, we too use a simple linear regression model in our analysis of experimental design in the overparameterized case (however, we also consider kernel ridge regression, not only linear interpolative models). We believe that understanding experimental design in the overparameterized linear regression case is a prelude to designing effective design algorithms for deep learning. Indeed, recent theoretical results showed a deep connection between deep learning and kernel learning via the so-called Neural Tangent Kernel (Jacot et al., 2018; Arora et al., 2019a; Lee et al., 2019). Based on this connection, and as a proof-of-concept, we propose a new algorithm for single shot deep active learning. Let us now summarize our contributions:
• We analyze the prediction error of learning overparameterized linear models for a given fixed design, revealing three possible regimes that call for different design criteria: bias dominated, variance dominated, and mixed nature. We also reveal an interesting connection between overparameterized experimental design and the column subset selection problem (Boutsidis et al., 2009), transductive experimental design (Yu et al., 2006), and coresets (Sener & Savarese, 2018). We also extend our approach to kernel ridge regression.
• We propose a novel greedy algorithm for finding designs for overparameterized linear models. As exemplified in the curve labeled "Overparameterized OED", our algorithm is sometimes able to mitigate the double descent phenomenon, while still performing better than classical OED (though no formal proof of this fact is provided).
• We show how our algorithm can also be applied to kernel ridge regression, and report experiments which show that when the number of parameters is in a sense infinite, our algorithm is able to find designs that are better than the state of the art.
• We propose a new algorithm for single shot deep active learning, a scarcely treated problem so far, and demonstrate its effectiveness on MNIST.
Related Work. The phenomena of benign overfitting and double descent were first recognized in DNNs (Zhang et al., 2017), and later discussed and analyzed in the context of linear models (Zhang et al., 2017; Belkin et al., 2018; 2019a; b; Bartlett et al., 2020). Recently there has also been growing interest in the related phenomenon that "more data can hurt" (Nakkiran et al., 2020a; Nakkiran, 2019; Nakkiran et al., 2020b; Loog et al., 2019). A complementary work discussed the need to consider a zero or negative regularization coefficient for large real-life linear models (Kobak et al., 2020). Experimental design is a well-established paradigm in statistics, extensively covered in the literature for the linear case (Pukelsheim, 2006) and the nonlinear case (Pronzato & Pázman, 2013). Its application to pool-based active learning with batch acquisitions was explored by Yu et al. (2006) for linear models and by Hoi et al. (2006) for logistic regression. It was also proposed in the context of deep learning (Sourati et al., 2018). Another related line of work is recent work by Haber and Horesh on experimental design for ill-posed inverse problems (Haber et al., 2008; 2012; Horesh et al., 2010).
Active learning in the context of overparameterized learning was explored by Karzand & Nowak (2020); however, their approach differs from ours significantly since it is based on artificially completing the labels using a minimax approach. In the context of Laplacian regularized least squares (LapRLS), which is a generalization of ridge regression, Gu et al. (2012) showed rigorously that the criterion of Yu et al. (2006) is justified as a bound for both the bias and variance components of the expected error. We further show that this bound is in some sense tight only if the parameter norm is one and the noise variance equals the $\ell_2$ penalty coefficient. In addition, we postulate and show experimentally that in the overparameterized case using a bias-dominant criterion is preferable. Another case in which the bias term does not vanish is when the model is misspecified. For linear and generalized linear models this case has been tackled with reweighting of the loss function. A popular modern approach for pool-based active learning with batch acquisition is coresets (Sener & Savarese, 2018; Geifman & El-Yaniv, 2017; Ash et al., 2019; Pinsler et al., 2019). This approach has been used in the context of active learning for DNNs.
2 UNDERPARAMETERIZED V-OPTIMAL EXPERIMENTAL DESIGN . Consider a noisy linear response model $y = x^T w + \epsilon$, where $\epsilon \sim N(0, \sigma^2)$ and $w \in \mathbb{R}^d$, and assume we are given some data points $x_1, \ldots, x_n$ for which we obtained independent responses $y_i = x_i^T w + \epsilon_i$. Consider the underparameterized case, i.e., $n \ge d$, and furthermore assume that the set $\{x_1, \ldots, x_n\}$ contains at least $d$ independent vectors. The best linear unbiased estimator $\hat{w}$ of $w$ according to the Gauss-Markov theorem is given by $\hat{w} = \arg\min_w \|Xw - y\|_2^2 = X^+ y$, where $X \in \mathbb{R}^{n \times d}$ is a matrix whose rows are $x_1, \ldots, x_n$, $y = [y_1 \ldots y_n]^T \in \mathbb{R}^n$, and $X^+$ is the Moore-Penrose pseudoinverse of $X$. It is well known that $\hat{w} - w$ is a normal random vector with zero mean and covariance matrix $\sigma^2 M^{-1}$, where $M = X^T X$ is the Fisher information matrix. This implies that $\hat{y}(x) - y(x)$ is also a normal variable with zero mean and variance equal to $\sigma^2 x^T M^{-1} x$. Assume also that $x$ comes from a distribution $\rho$. With that we can further define the excess risk $R(\hat{w}) = \mathbb{E}_{x \sim \rho}[(x^T w - x^T \hat{w})^2]$ and its expectation:
$\mathbb{E}[R(\hat{w})] = \mathbb{E}_{x \sim \rho}[\mathrm{Var}[y(x) - \hat{y}(x)]] = \mathbb{E}_{x \sim \rho}[\sigma^2 x^T M^{-1} x] = \mathrm{Tr}(\sigma^2 M^{-1} C_\rho) \quad (1)$
where $C_\rho$ is the uncentered second moment matrix of $\rho$: $C_\rho := \mathbb{E}_{x \sim \rho}[x x^T]$. Eq. (1) motivates the so-called V-optimal design criterion: select the dataset $x_1, \ldots, x_n$ so that $\varphi(M) := \mathrm{Tr}(M^{-1} C_\rho)$ is minimized (if we do not have access to $C_\rho$ then it is possible to estimate it by drawing samples from $\rho$). In doing so, we are trying to minimize the expected (with respect to the noise) average (with respect to the data $x$) prediction variance, since the risk is composed solely of it (due to the fact that the estimator is unbiased). As we shall see, this is in contrast with the overparameterized case, in which the estimator is biased. V-optimality is only one instance of the various statistical criteria used in experimental design. In general experimental design, the focus is on minimizing a preselected criterion $\varphi(M)$ (Pukelsheim, 2006). For example, in D-optimal design, $\varphi(M) = \det(M^{-1})$, and in A-optimal design, $\varphi(M) = \mathrm{Tr}(M^{-1})$.
However , since minimizing the V-optimality criterion corresponds to minimizing the risk , it is more appropriate when assessing the predictive performance of machine learning models .
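To make the V-optimality criterion above concrete, the following is a minimal numpy sketch of computing ϕ ( M ) = Tr ( M−1Cρ ) and greedily growing a design from a candidate pool. The synthetic pool, the sample-based estimate of Cρ, the use of a pseudo-inverse, and the greedy forward-selection strategy are all illustrative assumptions for the underparameterized setting; this is not the authors' algorithm for the overparameterized case.

```python
# Minimal sketch of V-optimal design in the underparameterized regime (n >= d).
# Pool, sizes, and greedy strategy are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, pool_size, budget = 5, 200, 30

pool = rng.normal(size=(pool_size, d))   # candidate design points
ref = rng.normal(size=(5000, d))         # samples from rho, used to estimate C_rho
C_rho = ref.T @ ref / len(ref)           # uncentered second-moment matrix E[x x^T]

def v_criterion(idx):
    """phi(M) = Tr(M^{-1} C_rho) for the design indexed by idx (pseudo-inverse if singular)."""
    M = pool[idx].T @ pool[idx]          # Fisher information matrix X^T X
    return np.trace(np.linalg.pinv(M) @ C_rho)

# Greedy forward selection: repeatedly add the point that most reduces the criterion.
selected = list(range(d))                # seed with d points so M is (generically) invertible
remaining = [i for i in range(pool_size) if i not in selected]
while len(selected) < budget:
    best = min(remaining, key=lambda i: v_criterion(selected + [i]))
    selected.append(best)
    remaining.remove(best)

print("V-criterion of greedy design:", v_criterion(selected))
```

Greedy forward selection is used here only because it mirrors the flavor of greedy procedure the paper proposes; any other combinatorial search over the pool could be plugged into `v_criterion`.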
In this paper, the authors develop a data selection scheme aimed at minimizing a notion of Bayes excess risk for overparametrized linear models. The excess Bayes risk is the expected squared error between the prediction and the target. The authors note that solutions such as V-optimality exist for the underparametrized case (linear regression), and offer extensions to ridge regression. After the development of a greedy scheme and a tentative extension to deep learning models, the authors show that their selection scheme can outperform random selection on MNIST with a specific model.
SP:797b07cd8142a35333037bb573db0dfe5dde65ac
Offline Policy Optimization with Variance Regularization
1 INTRODUCTION . Offline batch reinforcement learning ( RL ) algorithms are key to scaling up RL for real world applications , such as robotics ( Levine et al. , 2016 ) and medical problems . This is because offline RL provides the appealing ability for agents to learn from fixed datasets , similar to supervised learning , avoiding continual interaction with the environment , which could be problematic for safety and feasibility reasons . However , significant mismatch between the fixed collected data and the policy that the agent is considering can lead to high variance of value function estimates , a problem encountered by most off-policy RL algorithms ( Precup et al. , 2000 ) . A complementary problem is that the value function can become overly optimistic in areas of state space that are outside the visited batch , leading the agent into data regions where its behavior is poor ( Fujimoto et al. , 2019 ) . Recently there has been some progress in offline RL ( Kumar et al. , 2019 ; Wu et al. , 2019b ; Fujimoto et al. , 2019 ) , trying to tackle both of these problems . In this work , we study the problem of offline policy optimization with variance minimization . To avoid overly optimistic value function estimates , we propose to learn value functions under variance constraints , leading to a pessimistic estimation , which can significantly help offline RL algorithms , especially under large distribution mismatch . We propose a framework for variance minimization in offline RL , such that the obtained estimates can be used to regularize the value function and enable more stable learning under different off-policy distributions . We develop a novel approach for variance regularized offline actor-critic algorithms , which we call Offline Variance Regularizer ( OVR ) . The key idea of OVR is to constrain the policy improvement step via variance regularized value function estimates . Our algorithmic framework avoids the double sampling issue that arises when computing gradients of variance estimates , by instead considering the variance of stationary distribution corrections with per-step rewards , and using the Fenchel transformation ( Boyd & Vandenberghe , 2004 ) to formulate a minimax optimization objective . This allows minimizing variance constraints by instead optimizing dual variables , resulting in simply an augmented reward objective for variance regularized value functions . We show that even with variance constraints , we can ensure policy improvement guarantees , where the regularized value function leads to a lower bound on the true value function , which mitigates the usual overestimation problems in batch RL . The use of Fenchel duality in computing the variance allows us to avoid double sampling , which has been a major bottleneck in scaling up variance-constrained actor-critic algorithms in prior work ( A . & Ghavamzadeh , 2016 ; A . & Fu , 2018 ) . Practically , our algorithm is easy to implement , since it simply involves augmenting the rewards with the dual variables , such that the regularized value function can be implemented on top of any existing offline policy optimization algorithms . We evaluate our algorithm on existing offline benchmark tasks based on continuous control domains . Our empirical results demonstrate that the proposed variance regularization approach is particularly useful when the batch dataset is gathered at random , or when it is very different from the data distributions encountered during training . 2 PRELIMINARIES AND BACKGROUND . 
We consider an infinite horizon MDP as ( S , A , P , γ ) where S is the set of states , A is the set of actions , P is the transition dynamics and γ is the discount factor . The goal of reinforcement learning is to maximize the expected return J ( π ) = Es∼dβ [ V π ( s ) ] , where V π ( s ) is the value function V π ( s ) = E [ ∑∞ t=0 γ tr ( st , at ) | s0 = s ] , and β is the initial state distribution . Considering parameterized policies πθ ( a|s ) , the goal is to maximize the returns by following the policy gradient ( Sutton et al. , 1999 ) , based on the performance metric defined as : J ( πθ ) = Es0∼ρ , a0∼π ( s0 ) [ Qπθ ( s0 , a0 ) ] = E ( s , a ) ∼dπθ ( s , a ) [ r ( s , a ) ] ( 1 ) where Qπ ( s , a ) is the state-action value function , since V π ( s ) = ∑ a π ( a|s ) Qπ ( s , a ) . The policy optimization objective can be equivalently written in terms of the normalized discounted occupancy measure under the current policy πθ , where dπ ( s , a ) is the state-action occupancy measure , such that the normalized state-action visitation distribution under policy π is defined as : dπ ( s , a ) = ( 1 − γ ) ∑∞ t=0 γ tP ( st = s , at = a|s0 ∼ β , a ∼ π ( s0 ) ) . The equality in equation 1 holds and can be equivalently written based on the linear programming ( LP ) formulation in RL ( see ( Puterman , 1994 ; Nachum & Dai , 2020 ) for more details ) . In this work , we consider the off-policy learning problem under a fixed dataset D which contains ( s , a , r , s′ ) tuples under a known behaviour policy µ ( a|s ) . Under the off-policy setting , importance sampling ( Precup et al. , 2000 ) is often used to reweight the trajectory under the behaviour data collecting policy , so as to get unbiased estimates of the expected returns . At each time step , the importance sampling correction π ( at|st ) /µ ( at|st ) is used to compute the expected return under the entire trajectory as J ( π ) = ( 1 − γ ) E ( s , a ) ∼dµ ( s , a ) [ ∑T t=0 γ tr ( st , at ) ( ∏T t=1 π ( at|st ) /µ ( at|st ) ) ] . Recent works ( Fujimoto et al. , 2019 ) have demonstrated that instead of importance sampling corrections , maximizing value functions directly for deterministic or reparameterized policy gradients ( Lillicrap et al. , 2016 ; Fujimoto et al. , 2018 ) allows learning under fixed datasets , by addressing the over-estimation problem through maximizing objectives of the form maxθ Es∼D [ Qπθ ( s , πθ ( s ) ) ] . 3 VARIANCE REGULARIZATION VIA DUALITY IN OFFLINE POLICY OPTIMIZATION . In this section , we first present our approach based on variance of stationary distribution corrections , compared to importance re-weighting of episodic returns in section 3.1 . We then present a derivation of our approach based on Fenchel duality on the variance , to avoid the double sampling issue , leading to a variance regularized offline optimization objective in section 3.2 . Finally , we present our algorithm in Algorithm 1 , where the proposed regularizer can be used in any existing offline RL algorithm . 3.1 VARIANCE OF REWARDS WITH STATIONARY DISTRIBUTION CORRECTIONS . In this work , we consider the variance of rewards under occupancy measures in offline policy optimization . Let us denote the returns as Dπ = ∑T t=0 γ tr ( st , at ) , such that the value function is V π = Eπ [ Dπ ] . The 1-step importance sampling ratio is ρt = π ( at|st ) /µ ( at|st ) , and the T-steps ratio can be denoted ρ1 : T = ∏T t=1 ρt . Considering per-decision importance sampling ( PDIS ) ( Precup et al. 
, 2000 ) , the returns can be similarly written as Dπ = ∑T t=0 γ trtρ0 : t. The variance of episodic returns , which we denote by VP ( π ) , with off-policy importance sampling corrections can be written as : VP ( π ) = Es∼β , a∼µ ( ·|s ) , s′∼P ( ·|s , a ) [ ( Dπ ( s , a ) − J ( π ) ) 2 ] . Instead of importance sampling , several recent works have instead proposed marginalized importance sampling with stationary state-action distribution corrections ( Liu et al. , 2018 ; Nachum et al. , 2019a ; Zhang et al. , 2020 ; Uehara & Jiang , 2019 ) , which can lead to lower variance estimators at the cost of introducing bias . Denoting the stationary distribution ratios as ω ( s , a ) = dπ ( s , a ) /dµ ( s , a ) , the returns can be written as Wπ ( s , a ) = ω ( s , a ) r ( s , a ) . The variance of marginalized IS is : VD ( π ) = E ( s , a ) ∼dµ ( s , a ) [ ( Wπ ( s , a ) − J ( π ) ) 2 ] = E ( s , a ) ∼dµ ( s , a ) [ Wπ ( s , a ) 2 ] − E ( s , a ) ∼dµ ( s , a ) [ Wπ ( s , a ) ] 2 ( 2 ) Our key contribution is to first consider the variance of marginalized IS VD ( π ) itself as a risk constraint , in the offline batch optimization setting . We show that constraining the offline policy optimization objective with the variance of marginalized IS , and using the Fenchel-Legendre transformation on VD ( π ) , can help avoid the well-known double sampling issue in variance risk constrained RL ( for more details on how to compute the gradient of the variance term , see appendix B ) . We emphasize that the variance here is solely based on returns with occupancy measures , and we do not consider the variance due to the inherent stochasticity of the MDP dynamics . 3.2 VARIANCE REGULARIZED OFFLINE MAX-RETURN OBJECTIVE . We consider the variance regularized off-policy max return objective with stationary distribution corrections ωπ/D ( which we denote ω for short for clarity ) in the offline fixed dataset D setting : max πθ J ( πθ ) : = Es∼D [ Qπθ ( s , πθ ( s ) ) ] − λVD ( ω , πθ ) ( 3 ) where λ ≥ 0 allows for the trade-off between offline policy optimization and variance regularization ( or equivalently variance risk minimization ) . The max-return objective under Qπθ ( s , a ) has been considered in prior works in offline policy optimization ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ) . We show that this form of regularizer encourages variance minimization in offline policy optimization , especially when there is a large data distribution mismatch between the fixed dataset D and the induced data distribution under policy πθ . 3.3 VARIANCE REGULARIZATION VIA FENCHEL DUALITY . At first , equation 3 seems to be difficult to optimize , especially for minimizing the variance regularization w.r.t θ . This is because finding the gradient of V ( ω , πθ ) would lead to the double sampling issue since it contains the square of the expectation term . The key contribution of OVR is to use the Fenchel duality trick on the second term of the variance expression in equation 2 , for regularizing the policy optimization objective with the variance of marginalized importance sampling . Applying Fenchel duality , x2 = maxy ( 2xy − y2 ) , to the second term of the variance expression , we can transform the variance minimization problem into an equivalent maximization problem , by introducing the dual variables ν ( s , a ) . 
We have the Fenchel conjugate of the variance term as : V ( ω , πθ ) = max ν { − 1 2 ν ( s , a ) 2 + ν ( s , a ) ω ( s , a ) r ( s , a ) + E ( s , a ) ∼dD [ ω ( s , a ) r ( s , a ) 2 ] } = max ν E ( s , a ) ∼dD [ − 1 2 ν ( s , a ) 2 + ν ( s , a ) ω ( s , a ) r ( s , a ) + ω ( s , a ) r ( s , a ) 2 ] ( 4 ) Regularizing the policy optimization objective with variance under the Fenchel transformation , we therefore have the overall max-min optimization objective , explicitly written as : max θ min ν J ( πθ , ν ) : = Es∼D [ Qπθ ( s , πθ ( s ) ) ] −λE ( s , a ) ∼dD [ ( − 1 2 ν2+ν ·ω ·r+ω ·r2 ) ( s , a ) ] ( 5 )
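As a sanity check on the dual trick behind equations ( 4 ) – ( 5 ) , the short numpy sketch below illustrates how the Fenchel identity x2 = maxy ( 2xy − y2 ) lets the squared expectation in the variance be replaced by an inner optimization over a dual variable, so each stochastic update touches only a single sample and the double sampling issue disappears. Here W is a fixed scalar random variable standing in for ω ( s , a ) r ( s , a ) , the dual variable is a single scalar, and plain SGD is an illustrative assumption; the paper optimizes the policy and a state-action-dependent ν ( s , a ) jointly.

```python
# Toy illustration of replacing E[W]^2 with an inner dual optimization.
# Rearranging the Fenchel identity gives  Var(W) = min_nu E[(W - nu)^2],
# whose per-sample gradient needs only one sample of W (no double sampling).
import numpy as np

rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=1.5, size=50_000)  # stand-in for W = omega * r

nu, lr = 0.0, 1e-3
for w in samples:
    nu -= lr * 2.0 * (nu - w)   # SGD on E[(W - nu)^2]; the optimum is nu* = E[W]

dual_var = np.mean((samples - nu) ** 2)
print(f"dual estimate {dual_var:.4f}  vs  np.var {samples.var():.4f}  "
      f"(nu -> {nu:.3f} ~ E[W] = {samples.mean():.3f})")
```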
This paper proposes a novel algorithm for offline policy optimization. The main idea is to prevent overestimation bias by regularizing against the variance of the importance weighted value estimate. There are two key modifications: (1) using an importance weight from the stationary distribution and (2) using Fenchel duality to introduce a min-max problem to avoid double sampling when estimating the gradient of the variance regularization term. The theory section motivates the use of variance regularization and the experiments show improvements over BCQ when adding the proposed variance regularization algorithm.
SP:4989f7703e106a20401cec0a5058d440720b0379
Quantifying Statistical Significance of Neural Network Representation-Driven Hypotheses by Selective Inference
1 INTRODUCTION . The remarkable predictive performance of deep neural networks ( DNNs ) stems from their ability to learn appropriate representations from data . In order to understand the decision-making process of DNNs , it is thus important to be able to explain and interpret DNN representations . For example , in image classification tasks , knowing the attention region from the DNN representation allows us to understand the reason for the classification . In the past few years , several methods have been developed to explain and interpret DNN representations ( Ribeiro et al. , 2016 ; Bach et al. , 2015 ; Doshi-Velez & Kim , 2017 ; Lundberg & Lee , 2017 ; Zhou et al. , 2016 ; Selvaraju et al. , 2017 ) ; however , some of them have turned out to be unstable and not reproducible ( Kindermans et al. , 2017 ; Ghorbani et al. , 2019 ; Melis & Jaakkola , 2018 ; Zhang et al. , 2020 ; Dombrowski et al. , 2019 ; Heo et al. , 2019 ) . Therefore , it is crucially important to develop a method to quantify the reliability of DNN representations . In this paper , we interpret these representations as hypotheses that are driven by the DNN ( called DNN-driven hypotheses ) and employ a statistical hypothesis testing framework to quantify the reliability of DNN representations . For example , in an image classification task , the reliability of an attention region can be quantified based on the statistical significance of the difference between the attention region and the rest of the image . Unfortunately , however , traditional statistical tests cannot be applied to this problem because the hypothesis ( the attention region in the above example ) itself is selected by the data . Traditional statistical tests are valid only when the hypothesis is non-random . Roughly speaking , if a hypothesis is selected by the data , the hypothesis will over-fit to the data and the bias needs to be corrected when assessing the reliability of the hypothesis . Our main contribution in this paper is to introduce a Selective Inference ( SI ) approach for testing the reliability of DNN representations . The basic idea of SI is to perform statistical inference under the condition that the hypothesis is selected . The SI approach has been demonstrated to be effective in the context of feature selection methods such as the Lasso . In this paper , in order to introduce SI for DNN representations , we develop a novel SI algorithm based on the homotopy method , which enables us to derive the exact ( non-asymptotic ) conditional sampling distribution of the DNN-driven hypothesis . We use the p-value as a criterion to quantify the reliability of a DNN representation . In the literature , p-values are often misinterpreted , and various sources of misinterpretation have been discussed ( Wasserstein & Lazar , 2016 ) . In this paper , by using SI , we address one of the sources of misinterpreted p-values : the p-values are biased when the hypothesis is selected after looking at the data ( often called double-dipping or data dredging ) . We believe our approach is a first significant step toward providing valid p-values for assessing the reliability of DNN representations . Figure 1 shows an example that illustrates the importance of our method . Related works . Several recent approaches have been developed to visualize and understand a trained DNN . Many of these post-hoc approaches ( Mahendran & Vedaldi , 2015 ; Zeiler & Fergus , 2014 ; Dosovitskiy & Brox , 2016 ; Simonyan et al. 
, 2013 ) have focused on developing visualization tools for the activation maps and/or the filter weights within trained networks . Others have aimed to identify the discriminative regions in an input image , given a trained network ( Selvaraju et al. , 2017 ; Fong & Vedaldi , 2017 ; Zhou et al. , 2016 ; Lundberg & Lee , 2017 ) . In parallel , some recent studies have showed that many popular methods for explanation and interpretation are not stable with respect to the perturbation or the adversarial attack on the input data and the model ( Kindermans et al. , 2017 ; Ghorbani et al. , 2019 ; Melis & Jaakkola , 2018 ; Zhang et al. , 2020 ; Dombrowski et al. , 2019 ; Heo et al. , 2019 ) . However , there are no previous studies that quantitatively evaluate the stability and reproducibility of DNN representations with a rigorous statistical inference framework . In the past few years , SI has been actively studied for inference on the features of linear models selected by several feature selection methods , e.g. , Lasso ( Lee et al. , 2016 ; Liu et al. , 2018 ; Duy & Takeuchi , 2020 ) . The basic idea of SI is to make inference conditional on the selection event , which allows us to derive the exact ( non-asymptotic ) sampling distribution of the test statistic . Besides , SI has also been applied to various problems ( Bachoc et al. , 2014 ; Fithian et al. , 2015 ; Choi et al. , 2017 ; Tian et al. , 2018 ; Chen & Bien , 2019 ; Hyun et al. , 2018 ; Bachoc et al. , 2018 ; Loftus & Taylor , 2014 ; Loftus , 2015 ; Panigrahi et al. , 2016 ; Tibshirani et al. , 2016 ; Yang et al. , 2016 ; Suzumura et al. , 2017 ; Duy et al. , 2020 ) . However , to the best of our knowledge , there is no existing study that provides SI for DNNs , which is technically challenging . This study is partly motivated by Tanizaki et al . ( 2020 ) where the authors provide a framework to compute p-values for image segmentation results provided by graph cut and threshold-based segmentation algorithms . As we demonstrate in this paper , our method can be also used to assess the reliability of DNN-based segmentation results . Contribution . To our knowledge , this is the first study that provides an exact ( non-asymptotic ) inference method for statistically quantifying the reliability of data-driven hypotheses that are discovered from DNN representation . We propose a novel SI homotopy method , inspired by Duy & Takeuchi ( 2020 ) , for conducting powerful and efficient SI for DNN representations . We conduct experiments on both synthetic and real-world datasets , through which we offer evidence that our proposed method can successfully control the false positive rate , has decent performance in terms of computational efficiency , and provides good results in practical applications . We provide our implementation in the supplementary document and it will be released when this paper is published . 2 PROBLEM STATEMENT . To formulate the problem , we denote an image with n pixels corrupted with Gaussian noise as X = ( X1 , ... , Xn ) > = µ+ ε , ε ∼ N ( 0 , Σ ) , ( 1 ) where µ ∈ Rn is an unknown mean pixel intensity vector and ε ∈ Rn is a vector of Normally distributed noise with the covariance matrix Σ that is known or able to be estimated from external data . We note that we do not assume that the pixel intensities in an image follow Normal distribution in Equation ( 1 ) . Instead , we only assume that the vector of noises added to the true pixel values follows a multivariate Normal distribution . 
For an image X and a trained DNN , the main target is to identify an attention region ( discriminative/informative region ) in the input image X based on a DNN representation . A pixel is assigned to the attention region if its corresponding value in the representation layer is greater than a pre-defined threshold . We denote the set of pixels ofX divided into attention region and non-attention region as C+X and C − X , respectively . Definition 1 . We define A ( X ) as the event that the result of dividing pixels of image X into two sets of pixels C+X and C − X is obtained by applying a DNN onX , i.e. , A ( X ) = { C+X , C − X } . ( 2 ) Quantifying the statistical significance of DNN-driven hypotheses . Given an observed image xobs ∈ Rn sampled from the model ( 1 ) , we can obtain C+ xobs and C− xobs by applying DNN on xobs . Let us consider a score ∆ that represents the degree to which the attention region differs from the non-attention region . In general , we can define any score as long as it is written in the form ∆ = η > xobs . For example , we can define ∆ as the difference in average pixel values between the attention region and the non-attention region , i.e. , ∆ = mC+ xobs −mC− xobs = 1 |C+ xobs | ∑ i∈C+ xobs xobsi − 1 |C− xobs | ∑ i∈C− xobs xobsi = η > xobs , where η = 1|C+ xobs |1 n C+ xobs − 1|C− xobs |1 n C− xobs , and 1nC ∈ Rn is a vector whose elements belonging to a set C are 1 , and 0 otherwise . If the value of |∆| is sufficiently large , the difference between C+ xobs and C− xobs is significant and the attention region is reliable . To quantify the statistical significance , we consider a statistical hypothesis testing with the following null hypothesis H0 and alternative hypothesis H1 : H0 : µC+ xobs = µC− xobs vs. H1 : µC+ xobs 6= µC− xobs , ( 3 ) where µC+ xobs and µC− xobs are the true means of the pixel values in the attention region and nonattention region , respectively . Given a significance level α ( e.g. , 0.05 ) , we reject H0 if the p-value is smaller than α , which indicates the attention region differs from the non-attention region . Otherwise , we can not say that the difference is significant . In a standard ( naive ) statistical test , the hypotheses in ( 3 ) are assumed to be fixed , i.e. , non-random . Then , the naive ( two-sided ) p-value is simply given as pnaive = PH0 ( |η > X| ≥ |∆| ) = PH0 ( |η > X| ≥ |η > xobs| ) . ( 4 ) However , since the hypotheses in ( 3 ) are actually not fixed in advance , the naive p-value is not valid in the sense that , if we reject H0 with a significance level α , the false detection rate ( type-I error ) can not be controlled at level α , which indicates that pnaive is unreliable . This is due to the fact that the hypotheses ( the attention region ) in ( 3 ) are selected by looking at the data ( the input image ) , and thus selection bias exists . This selection bias is sometimes called data dredging , data snooping or p-hacking ( Ioannidis , 2005 ; Head et al. , 2015 ) . Selective inference ( SI ) for computing valid p-values . The basic idea of SI is to make inference conditional on the selection event , which allows us to derive the exact ( non-asymptotic ) sampling distribution of the test statistic η > X in an attempt to avoid the selection bias . Thus , we employ the following conditional p-value pselective = PH0 ( |η > X| ≥ |η > xobs| | A ( X ) = A ( xobs ) , q ( X ) = q ( xobs ) ) , ( 5 ) where q ( X ) = ( In − cη > ) X with c = Ση ( η > Ση ) −1 . 
The first condition A ( X ) = A ( xobs ) indicates the event that the result of dividing pixels into an attention region and non-attention region for a random image X is the same as that of the observed image xobs , i.e. , C+X = C + xobs and C−X = C − xobs . The second condition q ( X ) = q ( xobs ) indicates the component that is independent of the test statistic forX is the same as the one for xobs . The q ( X ) corresponds to the component z in the seminal SI paper of Lee et al . ( 2016 ) ( Sec 5 , Eq 5.2 and Theorem 5.2 ) . The p-value in ( 5 ) , which is called selective type I error or selective p-values in the SI literature ( Fithian et al. , 2014 ) , is valid in the sense that PH0 ( pselective < α ) = α , ∀α ∈ [ 0 , 1 ] , i.e. , the false detection rate is theoretically controlled at level α indicating the selective p-value is reliable . To calculate the selective p-value in ( 5 ) , we need to identify the conditional data space . Let us define the set of x ∈ Rn that satisfies the conditions in ( 5 ) as X = { x ∈ Rn | A ( x ) = A ( xobs ) , q ( x ) = q ( xobs ) } . ( 6 ) According to the second condition , the data in X are restricted to a line ( Sec 6 in Liu et al . ( 2018 ) , and Fithian et al . ( 2014 ) ) . Therefore , the set X can be re-written , using a scalar parameter z ∈ R , as X = { x ( z ) = a+ bz | z ∈ Z } , ( 7 ) where a = q ( xobs ) , b = Ση ` ( η > ` Ση ` ) −1 , and Z = { z ∈ R | A ( x ( z ) ) = A ( xobs ) } . ( 8 ) Now , let us consider a random variable Z ∈ R and its observation zobs ∈ R that satisfyX = a+bZ and xobs = a+ bzobs . Then , the selective p-value in ( 5 ) is re-written as pselective = PH0 ( |η > X| ≥ |η > xobs| |X ∈ X ) = PH0 ( |Z| ≥ |zobs| | Z ∈ Z ) . ( 9 ) Since the variable Z ∼ N ( 0 , η > Ση ) under the null hypothesis , the law of Z | Z ∈ Z follows a truncated Normal distribution . Once the truncation region Z is identified , the selective p-value ( 9 ) can be computed as pselective = F Z 0 , η > Ση ( −|z obs| ) + 1− FZ0 , η > Ση ( |z obs| ) , ( 10 ) where F Em , s2 is the c.d.f . of the truncated normal distribution with mean m , variance s 2 and truncation region E . Therefore , the most important task is to identify Z . Extension of the problem setup to hypothesis driven from DNN-based image segmentation . We interpret the hypothesis driven from image segmentation result as the one obtained from the representation at output layer instead of internal representation . Our problem setup is general and can be directly applied to this case . For example , we can consider the attention region as the object region and the non-attention region as the background region . Then , we can conduct SI to quantify the significance of the difference between object and background regions . We note that we consider the case where the image is segmented into two regions—object and background—to simplify the problem and notations . The extension to more than two regions is straightforward .
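To make the quantities in equations ( 3 ) – ( 10 ) concrete, here is a small numpy/scipy sketch that builds η from an attention mask, computes the naive two-sided p-value, and computes a selective p-value from a truncated-normal CDF. The thresholded "attention region", the identity covariance, and especially the single-interval truncation region [ L , U ] are illustrative assumptions: identifying the true region Z for a trained DNN is exactly what the paper's homotopy method provides.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 64
Sigma = np.eye(n)                                    # known noise covariance
x_obs = rng.normal(size=n)                           # observed image (vectorized)

attn = x_obs > 0.0                                   # stand-in for a DNN attention region C+
eta = attn / attn.sum() - (~attn) / (~attn).sum()    # eta = 1_{C+}/|C+| - 1_{C-}/|C-|

z_obs = eta @ x_obs                                  # test statistic Delta = eta^T x_obs
s = np.sqrt(eta @ Sigma @ eta)                       # std of eta^T X under H0

# Naive two-sided p-value (invalid here: the region was chosen by looking at the data).
p_naive = 2 * norm.sf(abs(z_obs), scale=s)

# Selective p-value, pretending the selection event corresponds to one interval [L, U];
# in the paper the truncation region Z is a union of intervals found by homotopy.
L, U = 0.5 * z_obs, 3.0 * abs(z_obs) * np.sign(z_obs)
L, U = min(L, U), max(L, U)

def tn_cdf(t, lo, hi, scale):
    """CDF of N(0, scale^2) truncated to [lo, hi]."""
    t = np.clip(t, lo, hi)
    num = norm.cdf(t / scale) - norm.cdf(lo / scale)
    den = norm.cdf(hi / scale) - norm.cdf(lo / scale)
    return num / den

p_selective = tn_cdf(-abs(z_obs), L, U, s) + 1.0 - tn_cdf(abs(z_obs), L, U, s)
print(f"naive p = {p_naive:.4f}, selective p = {p_selective:.4f}")
```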
This paper proposes a novel method to quantify the reliability of DNN-driven hypotheses in a statistical hypothesis testing framework. Naive statistical tests are not appropriate for DNN-driven hypotheses, where the hypotheses are selected by looking at the data (i.e., selection bias exists). To address this problem, the authors developed a novel homotopy method under the Selective Inference (SI) framework, which can derive the exact sampling distribution of the DNN-driven hypotheses. In this paper, the authors mainly focus on DNNs which consist of affine operations, max-operations, and piecewise-linear activations. As described by Lee et al. (2016), the main idea of SI is to make the inference conditional on the selection event. Specifically, for DNN-driven hypotheses, the authors propose a method that consists of two steps: 1) adding extra conditioning to make the problem tractable, and 2) combining multiple over-conditioned cases via the homotopy method to resolve the over-conditioning problem. The experimental results on both synthetic and real-world datasets illustrate that the proposed method can successfully control the FP error rate.
SP:4e77d43eb99688600f6c2115e1882e0b1e11a751
Gradient descent temporal difference-difference learning
1 INTRODUCTION . Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning . However , because off-policy methods learn a value function for a target policy given data due to a different behavior policy , they often exhibit greater variance in parameter updates . When applied to problems involving function approximation , off-policy methods are slower to converge than on-policy methods and may even diverge ( Baird , 1995 ; Sutton & Barto , 2018 ) . Two general approaches have been investigated to address the challenge of developing stable and effective off-policy temporal-difference algorithms . One approach is to use importance sampling methods to warp the update distribution back to the on-policy distribution ( Precup et al. , 2000 ; Mahmood et al. , 2014 ) . This approach is useful for decreasing the variance of parameter updates , but it does not address stability issues . The second main approach to addressing the challenge of off-policy learning is to develop true gradient descent-based methods that are guaranteed to be stable regardless of the update distribution . Sutton et al . ( 2009a ; b ) proposed the first off-policy gradientdescent-based temporal difference ( GTD and GTD2 , respectively ) algorithms . These algorithms are guaranteed to be stable , with computational complexity scaling linearly with the size of the function approximator . Empirically , however , their convergence is much slower than conventional temporal difference ( TD ) learning , limiting their practical utility ( Ghiassian et al. , 2020 ; White & White , 2016 ) . Building on this work , extensions to the GTD family of algorithms ( see Ghiassian et al . ( 2018 ) for a review ) have allowed for incorporating eligibility traces ( Maei & Sutton , 2010 ; Geist & Scherrer , 2014 ) , non-linear function approximation such as with a neural network ( Maei , 2011 ) , and reformulation of the optimization as a saddle point problem ( Liu et al. , 2015 ; Du et al. , 2017 ) . However , due to their slow convergence , none of these stable off-policy methods are commonly used in practice . In this work , we introduce a new gradient descent algorithm for temporal difference learning with linear value function approximation . This algorithm , which we call gradient descent temporal difference-difference ( Gradient-DD ) learning , is an acceleration technique that employs second- order differences in successive parameter updates . The basic idea of Gradient-DD is to modify the error objective function by additionally considering the prediction error obtained in last time step , then to derive a gradient-descent algorithm based on this modified objective function . In addition to exploiting the Bellman equation to get the solution , this modified error objective function avoids drastic changes in the value function estimate by encouraging local search around the current estimate . Algorithmically , the Gradient-DD approach only adds an additional term to the update rule of the GTD2 method , and the extra computational cost is negligible . We show mathematically that applying this method significantly improves the convergence rate relative to the GTD2 method for linear function approximation . This result is supported by numerical experiments , which also show that Gradient-DD obtains better convergence in many cases than conventional TD learning . 1.1 RELATED WORK . 
In related approaches to ours , some previous studies have attempted to improve Gradient-TD algorithms by adding regularization terms to the objective function . Liu et al . ( 2012 ) have used l1 regularization on weights to learn sparse representations of value functions , and Ghiassian et al . ( 2020 ) has used l2 regularization on weights . Unlike these references , our approach modifies the error objective function by regularizing the evaluation error obtained in the most recent time step . With this modification , our method provides a learning rule that contains second-order differences in successive parameter updates . Our approach is similar to trust region policy optimization ( Peters & Schaal , 2008 ; Schulman et al. , 2015 ) or relative entropy policy search ( Peters et al. , 2010 ) , which penalize large changes being learned in policy learning . In these methods , constrained optimization is used to update the policy by considering the constraint on some measure between the new policy and the old policy . Here , however , our aim here is to look for the optimal value function , and the regularization term uses the previous value function estimate to avoid drastic changes in the updating process . 2 GRADIENT DESCENT METHOD FOR OFF-POLICY TEMPORAL DIFFERENCE LEARNING . 2.1 PROBLEM DEFINITION AND BACKGROUND . In this section , we formalize the problem of learning the value function for a given policy under the Markov Decision Process ( MDP ) framework . In this framework , the agent interacts with the environment over a sequence of discrete time steps , t = 1 , 2 , . . .. At each time step the agent observes a partial summary of the state st ∈ S and selects an action at ∈ A . In response , the environment emits a reward rt ∈ R and transitions the agent to its next state st+1 ∈ S. The state and action sets are finite . State transitions are stochastic and dependent on the immediately preceding state and action . Rewards are stochastic and dependent on the preceding state and action , as well as on the next state . The process generating the agent ’ s actions is termed the behavior policy . In off-policy learning , this behavior policy is in general different from the target policy π : S → A . The objective is to learn an approximation to the state-value function under the target policy in a particular environment : V ( s ) = Eπ [ ∞∑ t=1 γt−1rt|s1 = s ] , ( 1 ) where γ ∈ [ 0 , 1 ) is the discount rate . In problems for which the state space is large , it is practical to approximate the value function . In this paper we consider linear function approximation , where states are mapped to feature vectors with fewer components than the number of states . Specifically , for each state s ∈ S there is a corresponding feature vector x ( s ) ∈ Rp , with p ≤ |S| , such that the approximate value function is given by Vw ( s ) : = w > x ( s ) . ( 2 ) The goal is then to learn the parameters w such that Vw ( s ) ≈ V ( s ) . 2.2 GRADIENT TEMPORAL DIFFERENCE LEARNING . A major breakthrough for the study of the convergence properties of MDP systems came with the introduction of the GTD and GTD2 learning algorithms ( Sutton et al. , 2009a ; b ) . We begin by briefly recapitulating the GTD algorithms , which we will then extend in the following sections . 
To begin , we introduce the Bellman operator B such that the true value function V ∈ R|S| satisfies the Bellman equation : V = R + γPV = : BV , where R is the reward vector with components E ( rn+1|sn = s ) , and P is a matrix of state transition probabilities . In temporal difference methods , an appropriate objective function should minimize the difference between the approximate value function and the solution to the Bellman equation . Having defined the Bellman operator , we next introduce the projection operator Π , which takes any value function V and projects it to the nearest value function within the space of approximate value functions of the form ( 2 ) . Letting X be the matrix whose rows are x ( s ) , the approximate value function can be expressed as Vw = Xw . We will also assume that there exists a limiting probability distribution such that ds = limn→∞ p ( sn = s ) ( or , in the episodic case , ds is the proportion of time steps spent in state s ) . The projection operator is then given by Π = X ( X > DX ) −1X > D , where the matrix D is diagonal , with diagonal elements ds . The natural measure of how closely the approximation Vw satisfies the Bellman equation is the mean-squared Bellman error : MSBE ( w ) = ‖Vw −BVw‖2D , ( 3 ) where the norm is weighted by D , such that ‖V‖2D = V > DV . However , because the Bellman operator follows the underlying state dynamics of the Markov chain , irrespective of the structure of the linear function approximator , BVw will typically not be representable as Vw for any w. An alternative objective function , therefore , is the mean squared projected Bellman error ( MSPBE ) , which we define as J ( w ) = ‖Vw −ΠBVw‖2D . ( 4 ) Following ( Sutton et al. , 2009b ) , our objective is to minimize this error measure . As usual in stochastic gradient descent , the weights at each time step are then updated by ∆w = −α∇wJ ( w ) , where α > 0 , and −1 2 ∇wJ ( w ) =− E [ ( γxn+1 − xn ) x > n ] [ E ( xnx > n ) ] −1E ( δnxn ) ≈− E [ ( γxn+1 − xn ) x > n ] η . ( 5 ) For notational simplicity , we have denoted the feature vector associated with sn as xn = x ( sn ) . We have also introduced the temporal difference error δn = rn + ( γxn+1 − xn ) > wn , as well as η , a linear predictor to approximate [ E ( xnx > n ) ] −1E ( δnxn ) . Because the factors in Eqn . ( 5 ) can be directly sampled , the resulting updates in each step are δn =rn + ( γxn+1 − xn ) > wn ηn+1 =ηn + βn ( δn − x > n ηn ) xn wn+1 =wn − αn ( γxn+1 − xn ) ( x > n ηn ) . ( 6 ) These updates define the GTD2 learning algorithm , which we will build upon in the following section . 3 GRADIENT DESCENT TEMPORAL DIFFERENCE-DIFFERENCE LEARNING . In order to improve the GTD2 algorithm described above , in this section we modify the objective function via additionally considering the approximation error Vw−Vwn−1 given the previous time step n− 1 . Specifically , we modify Eqn . ( 4 ) as follows : JGDD ( w|wn−1 ) = J ( w ) + κ‖Vw −Vwn−1‖2D , ( 7 ) Figure 1 : Schematic diagram of Gradient-DD learning with w ∈ R2 . Rather than updating w directly along the gradient of the MSPBE ( arrow ) , the update rule selects wn that minimizes the MSPBE while satisfying the constraint ‖Vw −Vwn−1‖2D ≤ µ ( shaded ellipse ) . where κ ≥ 0 is a parameter of the regularization . Minimizing Eqn . ( 7 ) is equivalent to the following optimization arg min w J ( w ) s.t . 
‖Vw −Vwn−1‖2D ≤ µ ( 8 ) where µ > 0 is a parameter which becomes large when κ is small , so that the MSPBE objective is recovered as µ→∞ , equivalent to κ→ 0 in Eqn . ( 7 ) . We show in the Appendix that for any µ > 0 , there exist κ ≥ 0 such that the solution of Eqn . ( 7 ) and that of Eqn . ( 8 ) are the same . Eqns . ( 7 ) and ( 8 ) represent a tradeoff between minimizing the MSPBE error and preventing the estimated value function from changing too drastically . Rather than simply minimizing the optimal prediction from the projected Bellman equation , the agent makes use of the most recent update to look for the solution . Figure 1 gives a schematic view of the effect of the regularization . Rather than directly following the direction of the MSPBE gradient , the update chooses a w that minimizes the MSPBE while following the constraint that the estimated value function should not change too greatly . In effect , the regularization term encourages searching around the estimate at previous time step , especially when the state space is large . With these considerations in mind , the negative gradient of JGDD ( w|wn−1 ) is − 1 2 ∇wJGDD ( w|wn−1 ) =− E [ ( γxn+1 − xn ) x > n ] [ E ( xnx > n ) ] −1E ( δnxn ) − κE [ ( x > nwn − x > nwn−1 ) xn ] ≈− E [ ( γxn+1 − xn ) x > n ] ηn − κE [ ( x > nwn − x > nwn−1 ) xn ] . ( 9 ) Because the terms in Eqn . ( 9 ) can be directly sampled , the stochastic gradient descent updates are given by δn =rn + ( γxn+1 − xn ) > wn ηn+1 =ηn + βn ( δn − x > n ηn ) xn wn+1 =wn − κn ( x > nwn − x > nwn−1 ) xn − αn ( γxn+1 − xn ) ( x > n ηn ) . ( 10 ) These update equations define the Gradient-DD method , in which the GTD2 update equations ( 6 ) are generalized by including a second-order update term in the third update equation , where this term originates from the squared bias term in the objective ( 7 ) . In the following sections , we shall analytically and numerically investigate the convergence and performance of Gradient-DD learning .
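A compact numpy sketch of the Gradient-DD updates in equation ( 10 ) on a toy Markov reward process with linear features is given below; setting κ = 0 recovers the GTD2 updates in equation ( 6 ) . The random chain, the random feature matrix, and the constant step sizes are illustrative assumptions, not the paper's experimental setup.

```python
# Gradient-DD updates (eq. 10) on a toy MRP with linear features; kappa = 0 gives GTD2.
import numpy as np

rng = np.random.default_rng(0)
n_states, p, gamma = 10, 4, 0.95
X = rng.normal(size=(n_states, p))           # feature vectors x(s)

alpha, beta, kappa = 0.02, 0.02, 0.05        # step sizes and regularization strength
w = np.zeros(p); w_prev = np.zeros(p); eta = np.zeros(p)

s = 0
for t in range(50_000):
    s_next = rng.integers(n_states)          # uniformly random transitions (toy off-policy data)
    r = 1.0 if s_next == n_states - 1 else 0.0
    x, x_next = X[s], X[s_next]

    delta = r + (gamma * x_next - x) @ w                 # TD error
    eta = eta + beta * (delta - x @ eta) * x             # linear predictor eta update
    w_new = (w
             - kappa * (x @ w - x @ w_prev) * x          # second-order "difference" term
             - alpha * (gamma * x_next - x) * (x @ eta)) # GTD2 correction term
    w_prev, w = w, w_new
    s = s_next

print("learned weights:", np.round(w, 3))
```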
This paper proposes a variant of the GTD2 algorithm by adding an additional regularization term to the objective function; the new algorithm is named Gradient-DD (GDD). The regularization ensures that the value function does not change drastically between consecutive iterations. The authors show that the update rule of GDD can be written as a difference equation and aim to further show convergence via a Lyapunov-based analysis. A simulation study is provided to compare the proposed GDD algorithm with TD, ETD, and GTD.
SP:8a32dfc80f31fd3da97e15ce98193144d03836b5
FactoredRL: Leveraging Factored Graphs for Deep Reinforcement Learning
We propose a simple class of deep reinforcement learning ( RL ) methods , called FactoredRL , that can leverage factored environment structures to improve the sample efficiency of existing model-based and model-free RL algorithms . In tabular and linear approximation settings , the factored Markov decision process literature has shown exponential improvements in sample efficiency by leveraging factored environment structures . We extend this to deep RL algorithms that use neural networks . For model-based algorithms , we use the factored structure to inform the state transition network architecture and for model-free algorithms we use the factored structure to inform the Q network or the policy network architecture . We demonstrate that doing this significantly improves sample efficiency in both discrete and continuous state-action space settings . 1 INTRODUCTION . In many domains , the structure of the Markov Decision Process ( MDP ) is known at the time of problem formulation . For example , in inventory management , we know the structure of the state transition : how inventory flows from a vendor , to a warehouse , to a customer ( Giannoccaro & Pontrandolfo , 2002 ; Oroojlooyjadid et al. , 2017 ) . In portfolio management , we know that a certain asset changes only when the agent buys or sells a corresponding item ( Jiang et al. , 2017 ) . Similar structural information is available in vehicle routing , robotics , computing , and many others . Our work stems from the observation that we can exploit the known structure of a given MDP to learn a good policy . We build on the Factored MDP literature ( Boutilier et al. , 1995 ; Osband & Van Roy , 2014 ; Kearns & Singh , 2002 ; Cui & Khardon , 2016 ) , and propose a factored graph to represent known relationships between states , actions and rewards in a given problem . We use the factored graphs to inform the structure of the neural networks used in deep reinforcement learning ( RL ) algorithms to improve their sample efficiency . We give literature references and example factor graphs for real world applications in Appendix A . Consider a motivational example , where the goal of the agent is to balance multiple independent cartpoles simultaneously , with each cartpole defined as per OpenAI gym ( G. Brockman & Zaremba , 2016 ) . The agent can take a ‘ left ’ or ‘ right ’ action on each cartpole , and the state includes the position and velocity of each cart and each pole . We refer to this as the Multi-CartPole problem . Both model-based and model-free algorithms treat the state-action space as a single entity , which makes exploration combinatorially complex . As a consequence , the sample efficiency of RL algorithms degrades exponentially with the number of cartpoles , despite the problem remaining conceptually simple for a human . By allowing the agent access to the problem ’ s factored structure ( i.e . each action affects only one cartpole ) , we bypass the need to learn about each action ’ s relationship with the entire state , and instead only need to learn about each action ’ s relationship with its single , related cartpole . We show how to integrate knowledge of the factored graph into both model-based and model-free deep RL algorithms , and thereby improve sample efficiency . In all cases , we first write down a factored graph as an adjacency matrix , representing the relationships between state , action , and reward . 
From this adjacency matrix , we then define a Factored Neural Network ( Factored NN ) , which uses input and output masking to reflect the structure of the factored graph . Finally , we show how to integrate this Factored NN into existing deep RL algorithms . For modelbased , we use the Factored NN to learn decomposed state transitions , and then integrate this state transition model with Monte Carlo Tree Search ( MCTS ) ( Kocsis & Szepesvári , 2006 ) . For model-free , we use the Factored NN to learn a decomposed Q-function , and then integrate with DQN ( Mnih et al. , 2015 ) . Also for model-free , we use the Factored NN to learn a decomposed policy function , and then integrate with PPO ( Schulman et al. , 2017 ) . In all three cases , we demonstrate empirically that these Factored RL methods ( Factored MCTS , DQN , and PPO ) are able to achieve better sample efficiency than their vanilla implementations , on a range of environments . 2 RELATED WORK . Several methods have been proposed that exploit the structural information of a problem in the Factored MDP literature . Kearns & Koller ( 1999 ) propose a method to conduct model-based RL with a Dynamic Bayesian Network ( DBN ) ( Dean & Kanazawa , 1989 ) and learn its parameters based on an extension of the Explicit Explore or Exploit ( E3 ) algorithm ( Kearns & Singh , 2002 ) . Guestrin et al . ( 2003 ) propose a linear program and a dynamic program based algorithm to learn linear value functions in Factored MDPs , and extend it to multi-agent settings ( Guestrin et al. , 2002 ) . They exploit the context specific and additive structure in Factored MDP that capture the locality of influence of specific states and actions . We use the same structures in our proposed algorithms . Cui & Khardon ( 2016 ) propose a symbolic representation of Factored MDPs . Osband & Van Roy ( 2014 ) propose posterior sampling and upper confidence bounds based algorithms and prove that they are near-optimal . They show that the sample efficiency of the algorithm scales polynomially with the number of parameters that encode the factored MDP , which may be exponentially smaller than the full state-action space . Xu & Tewari ( 2020 ) extend the results to non-episodic settings and Lattimore et al . ( 2016 ) show similar results for contextual bandits . The algorithms proposed in these prior works assume a tabular ( Cui et al. , 2015 ; Geißer et al . ) or linear setting ( Guestrin et al. , 2003 ) , or require symbolic expressions ( Cui & Khardon , 2016 ) . We extend these ideas to deep RL algorithms by incorporating the structural information in the neural network . Li & Czarnecki ( 2019 ) propose a factored DQN algorithm for urban driving applications . Our proposed algorithms are similar , but we extend the ideas to model-based algorithms like MCTS ( Kocsis & Szepesvári , 2006 ) , and model-free on-policy algorithms like PPO ( Schulman et al. , 2017 ) . We also evaluate our algorithms on a variety of environments which encompass discrete and continuous stateaction spaces . The Factored NN we propose is closely related to Graph Neural Networks ( Scarselli et al. , 2008 ; Zhou et al. , 2018 ) , which are deep learning based methods that operate on graph domain and have been applied to domains such as network analysis ( Kipf & Welling , 2016 ) , molecule design ( Liu et al. , 2018 ) and computer vision ( Xu et al. , 2018 ) . Instead of explicitly embedding the neighbors of all the nodes with neural networks , we use a single neural network with masking . 
NerveNet Wang et al . ( 2018 ) addresses the expressiveness of structure in an MDP , similar to our work . They focus on robotics applications and demonstrate state-action factorization with PPO . In our work , we additionally demonstrate state transition and state-reward factorization in MCTS and DQN respectively . In addition , they propose imposing a structure with Graph Neural Networks . In contrast , we propose using input and output masking without modifying the neural architecture . Working Memory Graphs Loynd et al . ( 2020 ) uses Transformer networks for modeling both factored observations and dependencies across time steps . However , they only evaluate their method in a grid world with a single discrete action . In contrast , we demonstrate our methods on multiple environments and algorithms with factorization in state transition , state-action and state-reward relationships . In addition , our factored network is a simple extension to the existing network used to solve a problem , whereas they impose a complex network architecture . Action masking has been used effectively to improve RL performance in multiple works ( Williams & Zweig , 2016 ; Williams et al. , 2017 ; Vinyals et al. , 2017 ) . We use a similar trick when applying our Factored NN to policy networks in model-free RL . However , we use both an action mask as well as a state mask to incorporate factored structure in policy networks . Our state transition networks for model-based RL also imposes masks on both input and output corresponding to current state-action and next state respectively . Wu et al . ( 2018 ) introduce an action dependent baseline in actor-critic algorithms , where a separate advantage function is learned for each action . Their method also exploits structure available in the action space . Our method to incorporate structure is orthogonal , as we modify the policy network in actor-critic methods . There is also a relationship between our work and the emerging intersection of reinforcement learning and causal inference , as factored graphs are are a super-set of causal graphs in the MDP setting . Lu et al . ( 2018 ) use the backdoor criterion in causal inference and variational autoencoders . Zhang & Bareinboim ( 2019 ) propose a near-optimal algorithm by taking advantage of causal inference in non-Markovian dynamic treatment regimes . Both works assume there exist unobserved confounders in the environment . We instead tackle a different problem where there are no unobserved confounders and show that there are still benefits to leverage structural information . 3 TERMINOLOGY . We briefly describe terminology used in this paper . We use Directed Acyclic Graphs ( DAG ) to represent relationships between the variables . DAGs consist of nodes and edges where the nodes correspond to random variables X = ( X1 , ... , Xd ) , and a directed edge from variable Xi to Xj represents that Xi has an effect on Xj ( Xi is also called the parent of Xj ) . Under Markov conditions , the joint distribution of the variables can be factored as p ( X1 : d ) = ∏d i=1 p ( Xi|PA ( Xi ) ) . Consider a general Markov Decision Process ( MDP ) defined by ( S , A , P , R , ρ0 , γ ) , where S , A denote the state and action space respectively , P denotes the transition probability , R represents the reward function , ρ0 and γ represent the initial distribution of the state and discount factor respectively . 
In the classic RL setting , one typically assumes each state Skt+1 depends on all of the previous states and actions , i.e. , PA ( Skt+1 ) = { { Skt } |S| k=1 , { Akt } |A| k=1 } , where | · | denotes the cardinality of the space , and PA denotes the parents of a node in a Bayesian network . However , in many scenarios , one component of the action Akt may only cause part of the state-space { Skt } k∈Ck to change , where Ck is the index set of the related states of the kth component of the action . In other words , the parents of each state may only be a subset of the actions and previous states , i.e. , PA ( Skt+1 ) ⊊ { { Skt } |S| k=1 , { Akt } |A| k=1 } . Simplifying the conditional dependencies helps to construct a more accurate model , enabling us to better decompose the dynamics and reduce the complexity of the learning tasks . We assume the factored structure of the environment does not change over time .
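The following is a minimal numpy sketch of the masking idea behind the Factored NN: a single dense transition model whose input-to-output connections are zeroed according to a factored adjacency matrix, so that each next-state component depends only on its parents PA ( Skt+1 ) . The two-cartpole-style adjacency and the single linear layer are illustrative assumptions; the paper applies the same input/output masking to deeper networks inside MCTS, DQN, and PPO.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent "cartpoles": 4 state dims and 1 action per cart.
# adjacency[i, j] = 1 iff input j (a state or action component) is a parent
# of next-state component i.
n_carts, state_dim, action_dim = 2, 4, 1
in_dim = n_carts * (state_dim + action_dim)
out_dim = n_carts * state_dim

adjacency = np.zeros((out_dim, in_dim))
for k in range(n_carts):
    rows = slice(k * state_dim, (k + 1) * state_dim)
    cols = list(range(k * state_dim, (k + 1) * state_dim)) \
         + [n_carts * state_dim + k]          # own state dims + own action
    adjacency[rows, cols] = 1.0

# A "factored" linear transition model: masking the weights with the adjacency
# guarantees that d s'_k / d x_j = 0 whenever j is not a parent of s'_k.
W = rng.normal(size=(out_dim, in_dim)) * 0.1
b = np.zeros(out_dim)

def factored_step(state, action):
    x = np.concatenate([state, action])       # inputs ordered as [all states, all actions]
    return (W * adjacency) @ x + b

s = rng.normal(size=n_carts * state_dim)
a = rng.normal(size=n_carts)
print(factored_step(s, a))
```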
This paper presents a methodology for incorporating factor graphs into model-based and model-free RL methods. The work starts by assuming access to a correct factor graph showing the relationships between individual state factors, actions, and rewards. The authors propose to make use of this factor graph by using a Factored Neural Network - which is similar to the standard feed-forward MLP networks that would typically be used to parameterize a policy or Q-function - except that it masks out connections between input and output nodes that are not connected in the factor graph. Presumably this results in a sparser neural network which can lead to faster learning and better sample complexity. The authors demonstrate how these factored NNs can be incorporated with model-based MCTS as well as model-free DQN and PPO. In short - the algorithm remains unchanged and the only substitution seems to be the Factored NN rather than a fully-connected NN. Experiments are performed on Multi-Cartpole (simultaneous control over several cartpoles), Taxi, BitFlip, and PyBullet's Ant, Half-Cheetah, and Humanoid. Each of the factored algorithms is compared with the un-factored equivalent and increased sample efficiency of learning is noted for the factored variants. The authors provide the manually-defined factor graphs used for each of these environments in the Appendix.
SP:dcb62a0cc1b03e9ea24b2ed167f14255d9386f95
Parallel Training of Deep Networks with Local Updates
1 INTRODUCTION . Backpropagation ( Rumelhart et al. , 1985 ) is by far the most common method used to train neural networks . Alternatives to backpropagation are typically used only when backpropagation is impractical due to a non-differentiable loss ( Schulman et al. , 2015 ) , non-smooth loss landscape ( Metz et al. , 2019 ) , or due to memory and/or compute requirements ( Ororbia et al. , 2020 ) . However , progress in deep learning is producing ever larger models in terms of parameter count and depth , in vision ( Hénaff et al. , 2019 ; Chen et al. , 2020 ) , language ( Radford et al. , 2019 ; Brown et al. , 2020 ) , and many other domains ( Silver et al. , 2017 ; Vinyals et al. , 2019 ; Berner et al. , 2019 ) . As model size increases , backpropagation incurs growing computational , memory , and synchronization overhead ( Ben-Nun & Hoefler , 2018 ) . This raises the question of whether there are more efficient training strategies , even for models and losses that are considered well matched to training by backpropagation . Much of the work on training large scale models focuses on designing compute infrastructure which makes backpropagation more efficient , despite growing model size ( Dean et al. , 2012b ; Chen et al. , 2015 ; Sergeev & Balso , 2018 ) . One of the most common ways to achieve efficient training of deep neural networks with backpropagation is to scale utilizing data parallelism ( Zhang et al. , 1989 ; Chen et al. , 2016 ) , training on bigger batch sizes spread across multiple devices . However , diminishing returns have been reported with this method for larger batch sizes , effectively wasting compute ( Goyal et al. , 2017 ; Masters & Luschi , 2018 ; Shallue et al. , 2018 ; McCandlish et al. , 2018 ) . Training based on pipeline parallelism has also been introduced , but still requires large batches for efficient training ( Petrowski et al. , 1993 ; Ben-Nun & Hoefler , 2018 ; Huang et al. , 2019 ) . Moreover , in addition to the limitation that in the forward pass each layer can only process the input data in sequence ( forward locking ) , the use of backpropagation implies that the network parameters of each layer can only be updated in turn after completing the full forward pass ( backward locking ) . This backward locking results in increased memory overhead , and precludes efficient parallel processing across layers ( Jaderberg et al. , 2017 ) . The challenges of scaling compute infrastructure to support deep networks trained with backpropagation motivate the need for alternative approaches to training deep neural networks . In this work , we explore how layer-wise local updates ( Belilovsky et al. , 2019a ; Löwe et al. , 2019 ; Xiong et al. , 2020 ) can help overcome these challenges and scale more efficiently with compute than backpropagation . With local updates , each layer is updated before even completing a full forward pass through the network . This remedies the forward and backward locking problems which harm memory efficiency and update latency in standard backprop . Layer-wise local updates are not proportional to gradients of the original loss , and are not even guaranteed to descend a loss function . Nevertheless , in practice they are effective at training neural networks . We refer to this approach of parallelizing compute , which is alternative and complementary to data and model parallelism , as local parallelism . Our investigation focuses on the trade-offs of using local update methods as opposed to global backpropagation . 
To summarize our contributions : ( i ) We provide the first large scale investigation into local update methods in both vision and language domains . We find training speedups ( as measured by the reduction in required sequential compute steps ) of up to 10× on simple MLPs , and 2× on Transformer architectures . These training speedups are the result of local training methods being able to leverage more parallel compute than backprop . ( ii ) We provide insight into how local parallelism methods work , and experimentally compare the similarity of their gradient and features to those from backprop . ( iii ) We demonstrate a prototype implementation of local parallelism for ResNets , and show up to a 40 % increase in sample throughput ( number of training points per second ) relative to backprop , due to higher hardware utilization . We believe that local parallelism will provide benefits whenever there are diminishing returns from data parallelism , and avoid stale weights from pipelined model parallelism . Additionally , we have released code showing an example of local parallelism , available at hiddenurl . 2 RELATED WORK . 2.1 PARALLELIZATION IN DEEP LEARNING . Scaling large models has led to the development of a number of techniques to train deep models in a parallel fashion ( Ben-Nun & Hoefler , 2018 ) , summarized in Figure 1 . Data Parallelism : Data Parallelism ( Zhang et al. , 1989 ) is an attempt to speed up training of a model by splitting the data among multiple identical models and training each model on a shard of the data independently . Data parallelism is effectively training with larger minibatches ( Kaplan et al. , 2020 ) . This creates issues around the consistency of a model which then needs to be synchronized ( Deng et al. , 2012 ; Dean et al. , 2012a ) . There are two main ways to synchronize weights across model copies : ( i ) Synchronous optimization , where data parallel training synchronizes at the end of every minibatch ( Das et al. , 2016 ; Chen et al. , 2016 ) , with a communication overhead that increases with the number of devices ; ( ii ) Asynchronous optimization that implements data parallel training with independent updates of local model parameters without global synchronization ( Niu et al. , 2011 ; Dean et al. , 2012a ) – this increases device utilization , but empirically gradients are computed on stale weights , which results in a poor sample efficiency and thus slower overall training time compared to synchronous optimization . Model Parallelism : Model Parallelism is used when a model is too large to fit in the memory of a single device and is instead spread over multiple processors ( Krizhevsky et al. , 2012 ; Shazeer et al. , 2018 ; Harlap et al. , 2018 ; Lepikhin et al. , 2020 ) . This is increasingly common as state of the art performance continues to improve with increasing model size ( Brown et al. , 2020 ) . Model parallelism unfortunately has a few downsides : ( i ) High communication costs – the total training time for larger networks can become dominated by communication costs ( Simonyan & Zisserman , 2015 ) , which in the worst case can grow quadratically with the number of devices , and can reach up to 85 % of the total training time of a large model such as VGG-16 ( Harlap et al. , 2018 ; Simonyan & Zisserman , 2015 ) ; ( ii ) Device under-utilization – forward propagation and backward propagation are both synchronous operations , which can result in processor under-utilization in model-parallel systems . 
This problem becomes worse as we increase the number of layers ( Ben-Nun & Hoefler , 2018 ; Jia et al. , 2014 ; Collobert et al. , 2011 ; Abadi et al. , 2016 ; Huang et al. , 2018 ) . Pipeline Parallelism : Due to the forward and backward locking , using multiple devices to process consecutive blocks of the deep model would make an inefficient use of the hardware resources . Pipelining ( Harlap et al. , 2018 ) concurrently passes multiple mini-batches to multiple layers on multiple devices . This increases device utilization but can introduce staleness and consistency issues which lead to unstable training . Harlap et al . ( 2018 ) alleviates the consistency issue by storing past versions of each layer . Huang et al . ( 2019 ) addresses the staleness issue by pipelining microbatches and synchronously updating at the end of each minibatch . Guan et al . ( 2019 ) builds on this work by introducing a weight prediction strategy and Yang et al . ( 2020 ) investigates to what extent the tradeoff between staleness/consistency and device utilization is necessary . Local updates on the other hand can keep device utilization high with both small and large batches and avoid the weight staleness problem . Local Learning Rules : Local learning describes a family of methods that perform parameter updates based only on local information , where locality is defined as dependence of neighboring neurons , layers , or groups of layers . The earliest local method we are aware of is Hebbian Learning ( Hebb , 1949 ) which has further been explored in BCM theory ( Izhikevich & Desai , 2003 ; Coesmans et al. , 2004 ) , Oja ’ s rule ( Oja , 1982 ) , Generalized Hebbian Learning ( Sanger , 1989 ) , and meta-learned local learning rules ( Bengio et al. , 1990 ; 1992 ; Metz et al. , 2018 ; Gu et al. , 2019 ) . Architectures like Hopfield Networks ( Hopfield , 1982 ) and Boltzmann Machines ( Ackley et al. , 1985 ) also employ a local update , and predate backprogation in deep learning . Modern variants of local training methods have attempted to bridge the performance gap with backpropagation . These include projection methods such as Hebbian learning rules for deep networks ( Krotov & Hopfield , 2019 ; Grinberg et al. , 2019 ; Ryali et al. , 2020 ) , and local layer-wise learning with auxiliary losses ( Belilovsky et al. , 2019a ; b ) . Most similar to our work is decoupled greedy layer-wise learning ( Belilovsky et al. , 2019b ; Löwe et al. , 2019 ) , which trained auxiliary image classifiers greedily , and local contrastive learning ( Xiong et al. , 2020 ) . These methods mainly focus on matching the performance of backpropagation with respect to training epochs , whereas our work focuses on tradeoffs . Finally , while not local in the sense that parallelized layers still optimize for the global objective , Huo et al . ( 2018b ) parallelize layers by caching gradients and using delayed gradient signals to overcome the backward locking problem and update decoupled layers in parallel . 3 LOCAL PARALLELISM . Given a deep neural network , we divide the layers into a sequence of J blocks , which may contain one or more layers . Each block is trained independently with an auxiliary objective , and receives the activations output by the previous block as input or , in the case of the first block , the data from the sampled minibatch . 
We consider five variants to train this sequence of J blocks : backpropagation , greedy local parallelism , overlapping local parallelism , and chunked local parallelism , as shown in Figure 2 . We also include a baseline method of just training the last , or last two , layers . In all of the local methods , training occurs by attaching objective functions to the end of each block and back propagating the signal locally into the corresponding block or blocks . In this work the auxiliary objective functions that we use take the same form as the global objective . For example , to train a classifier on CIFAR-10 , we attach auxiliary linear classifiers to each local block . See Belilovsky et al . ( 2019b ) for further discussion on the form of this objective . Backpropagation : In our notation , backpropagation groups all layers into one block and thus J = 1 . The parameters are updated with one instance of global error correction . While backpropagation ensures that all weights are updated according to the final output loss , it also suffers from forward and backward locking ( Jaderberg et al. , 2017 ) , an issue that local parallelized methods aim to resolve . Greedy local parallelism : A straightforward approach to enable local training is to attach an auxiliary network to each local layer , which generates predictions from the activations of hidden layers . After generating predictions , each local gradient is backpropagated to its respective local block , shown in Figure 2 ( b ) . The activations are then passed as input to the next layer . We refer to this approach , introduced in ( Belilovsky et al. , 2019b ) , as greedy . Greedy local parallelism is the most parallelizable of all the schemes we consider . However , a potential downside is that fully greedy updates force the layers to learn features that are only relevant to their local objective and preclude inter-layer communication , which may result in lower evaluation performance for the global objective , or worse generalization . Overlapping local parallelism : One issue with the purely greedy approach is that features learned for any individual block may not be useful for subsequent blocks , since there is no inter-block propagation of gradient . For this reason , we consider overlapping local architectures where the first layer of each block is also the last layer of the previous block , as shown in Figure 2 ( c ) , though overlapping of more layers is also possible . This redundancy enables inter-block propagation of gradient that is still local , since only neighboring blocks overlap . However , this comes at the cost of running additional backward passes . The overlapping architecture has appeared before in Xiong et al . ( 2020 ) , but was used only for contrastive losses . Ours is the first work to investigate overlapping local architectures for standard prediction objectives in computer vision and language . Overlapping updates are parallelizable , but come with the additional complexity of keeping duplicates of the overlapping components and averaging updates for these layers . Chunked local parallelism : The greedy architecture is maximally parallel in the sense that it distributes one layer per block . However , it is also possible to have fewer parallel blocks by combining multiple layers into one . We refer to this architecture , shown in Figure 2 ( d ) , as chunked local parallelism . 
This method trades off parallelizability and therefore throughput for an error signal that propagates through more consecutive layers . It differs from overlapping local parallelism by not needing to duplicate any layer . While previous work has investigated the asymptotic performance of chunked parallelism ( Belilovsky et al. , 2019b ) , ours is the first to consider the compute efficiency and parallelizability of local parallelism . By stacking multiple layers per each parallelized block , chunked parallelism sits between fully parallelized methods , such as greedy and overlapping updates , and fully sequential methods like backpropagation .
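To make the greedy and chunked schemes concrete, the sketch below trains a small MLP split into J blocks, each with its own auxiliary linear classifier, and detaches activations between blocks so that no gradient crosses block boundaries. This is a minimal single-process sketch: the layer sizes, number of blocks, and optimizer settings are illustrative assumptions rather than configurations from the paper, and realizing an actual speedup would require placing the blocks on separate devices.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical toy setup: an MLP split into J = 3 blocks (a block may hold
    # several layers, which corresponds to the chunked variant), each with a
    # local auxiliary classifier head.
    blocks = nn.ModuleList([
        nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
        nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256), nn.ReLU()),
        nn.Sequential(nn.Linear(256, 256), nn.ReLU()),
    ])
    heads = nn.ModuleList([nn.Linear(256, 10) for _ in blocks])
    opts = [torch.optim.SGD(list(b.parameters()) + list(h.parameters()), lr=0.1)
            for b, h in zip(blocks, heads)]

    def greedy_local_step(x, y):
        """One step of greedy local training: each block updates on its own loss."""
        h = x
        losses = []
        for block, head, opt in zip(blocks, heads, opts):
            opt.zero_grad()
            h = block(h)                          # forward through this block only
            loss = F.cross_entropy(head(h), y)    # local auxiliary objective
            loss.backward()                       # gradient stays inside the block
            opt.step()
            h = h.detach()                        # block the backward path to earlier blocks
            losses.append(loss.item())
        return losses

    # Toy minibatch standing in for real data.
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    print(greedy_local_step(x, y))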
This is a very poorly written paper. The basic idea of finding a way to avoid waiting for the full forward pass is not new. Multiple research papers have been published, ranging from the extreme of using stale weights to some form of sub-network backprop as a proxy for the full network. This paper proposes no new idea for local updates. Prior work has suffered from one or both of two limitations: a) a poor experimental framework, or b) not being able to meet the accuracy bar set by backprop. This work suffers from both. The experimental basis is very poorly described - and the paper fails to come even close to the backprop accuracy target with any decent speedup claim. The former is my biggest concern. Section 6 starts with 'Here we show that performance gains of local parallelism can be realized on real hardware' - with near-zero description of any 'real' hardware, except a footnote on '1000 IPUs on a chip'.
SP:ad7eb2bcb3a83153f140e5e8bfaa8b76110e62ab
Simple and Effective VAE Training with Calibrated Decoders
1 INTRODUCTION . Deep density models based on the variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) have found ubiquitous use in probabilistic modeling and representation learning as they are both conceptually simple and are able to scale to very complex distributions and large datasets . These VAE techniques are used for tasks such as future frame prediction ( Castrejon et al. , 2019 ) , image segmentation ( Kohl et al. , 2018 ) , generating speech ( Chung et al. , 2015 ) and music ( Dhariwal et al. , 2020 ) , as well as model-based reinforcement learning ( Hafner et al. , 2019a ) . However , in practice , many of these approaches require careful manual tuning of the balance between two terms that correspond to distortion and rate from information theory ( Alemi et al. , 2017 ) . This balance trades off fidelity of reconstruction and quality of samples from the model : a model with low rate would not contain enough information to reconstruct the data , while allowing the model to have high rate might lead to unrealistic samples from the prior as the KL-divergence constraint becomes weaker ( Alemi et al. , 2017 ; Higgins et al. , 2017 ) . While a proper variational lower bound does not expose any free parameters to control this tradeoff , many prior works heuristically introduce a weight on the prior KL-divergence term , often denoted β . Usually , β needs to be tuned for every dataset and model variant as a hyperparameter , which slows down development and can lead to poor performance as finding the optimal value is often prohibitively computationally expensive . Moreover , using β 6= 1 precludes the appealing interpretation of the VAE objective as a bound on the data likelihood , and is undesirable for applications like density modeling . While many architectures for calibrating decoders have been proposed in the literature ( Kingma & Welling , 2014 ; Kingma et al. , 2016 ; Dai & Wipf , 2019 ) , more applied work typically employs VAEs with uncalibrated decoding distributions , such as Gaussian distributions without a learned variance , where the decoder only outputs the mean parameter ( Castrejon et al. , 2019 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ; Babaeizadeh et al. , 2018 ; Lee et al. , 2018 ; Hafner et al. , 2019b ; Pong et al. , 2019 ; Zhu et al. , 2017 ; Pavlakos et al. , 2019 ) , or uses other ad-hoc modifications to the objective ( Sohn et al. , 2015 ; Henaff et al. , 2019 ) . Indeed , it is well known that attempting to learn the variance in a Gaussian decoder may lead to numerical instability ( Rezende & Viola , 2018 ; Dai & Wipf , 2019 ) , and naı̈ve approaches often lead to poor results . As a result , it remains unclear whether practical empirical performance of VAEs actually benefits from calibrated decoders or not . To rectify this , our first contribution is a comparative analysis of various calibrated decoder architectures and practical recommendations for simple and effective VAE training . We find that , while naı̈ve calibrated decoders often lead to worse results , a careful choice of the decoder distribution can work very well , and removes the need to tune the additional parameter β . Indeed , we note that the entropy of the decoding distribution controls the mutual information I ( x ; z ) . Calibrated decoders allow the model to control I ( x ; z ) automatically , instead of relying on manual tuning . 
Our second contribution is a simple but novel technique for optimizing the decoder variance analytically , without requiring the decoder network to produce it as an additional output . We call the resulting approach to learning the Gaussian variance the σ-VAE . In our experiments , the σ-VAE outperforms the alternative of learning the variance through gradient descent , while being simpler to implement and extend . We validate our results on several VAE and sequence VAE models and a range of image and video datasets . 2 RELATED WORK . Prior work on variational autoencoders has studied a number of different decoder parameterizations . Kingma & Welling ( 2014 ) ; Rezende et al . ( 2014 ) use the Bernoulli distribution for the binary MNIST data and Kingma & Welling ( 2014 ) use Gaussian distributions with learned variance parameter for grayscale images . However , modeling images with continuous distributions is prone to instability as the variance can converge to zero ( Rezende & Viola , 2018 ; Mattei & Frellsen , 2018 ; Dai & Wipf , 2019 ) . Some work has attempted to rectify this problem by using dequantization ( Gregor et al. , 2016 ) , which is theoretically appealing as it is tightly related to the log-likelihood of the original discrete data ( Theis et al. , 2016 ) , optimizing the variance in a two-stage procedure ( Arvanitidis et al. , 2017 ) , or training a post-hoc prior ( Ghosh et al. , 2019 ) . Takahashi et al . ( 2018 ) ; Barron ( 2019 ) proposed more expressive distributions . Additionally , different choices for representing such variance exist , including diagonal covariance ( Kingma & Welling , 2014 ; Sønderby et al. , 2016 ; Rolfe , 2016 ) , or a single shared parameter ( Kingma et al. , 2016 ; Dai & Wipf , 2019 ; Edwards & Storkey , 2016 ; Rezende & Viola , 2018 ) . We analyze these and notice that learning a single variance parameter shared across images leads to stable training and good performance , without the use of dequantization or even clipping the variance , although these techniques can be used with our decoders ; and further improve the estimation of this variance with an analytic solution . Early work on discrete VAE decoders for color images modeled them with the Bernoulli distribution , treating the color intensities as probabilities ( Gregor et al. , 2015 ) . Further work has explored various parameterizations based on discretized continuous distributions , such as discretized logistic ( Kingma et al. , 2016 ) . More recent work has improved expressivity of the decoder with a mixture of discretized logistics ( Chen et al. , 2016 ; Maaløe et al. , 2019 ) . However , these models also employ powerful autoregressive decoders ( Chen et al. , 2016 ; Gulrajani et al. , 2016 ; Maaløe et al. , 2019 ) , and the latent variables in these models may not represent all of the significant factors of variation in the data , as some factors can instead be modeled internally by the autoregressive decoder ( Alemi et al. , 2017 ) .1 While many calibrated decoders have been proposed , outside the core generative modeling community uncalibrated decoders are ubiquitous . They are used in work on video prediction ( Denton & Fergus , 2018 ; Castrejon et al. , 2019 ; Lee et al. , 2018 ; Babaeizadeh et al. , 2018 ) , image segmentation ( Kohl et al. , 2018 ) , image-to-image translation ( Zhu et al. , 2017 ) , 3D human pose ( Pavlakos et al. , 2019 ) , as well as model-based reinforcement learning ( Henaff et al. , 2019 ; Hafner et al. 
, 2019b ; a ) , and representation learning ( Lee et al. , 2019 ; Watter et al. , 2015 ; Pong et al. , 2019 ) . Most of these works utilize the heuristic hyperparameter β instead , which is undesirable both as the resulting objective is no longer a bound on the likelihood , and as β usually requires extensive tuning . In this work , we analyze the common pitfalls of using calibrated decoders that may have prevented practitioners from using them , propose a simple and effective analytic way of learning such a calibrated distribution , and provide a comprehensive experimental evaluation of different decoding distributions . Alternative discussions of the hyperparameter β are presented by Zhao et al . ( 2017 ) ; Higgins et al . ( 2017 ) ; Alemi et al . ( 2017 ) ; Achille & Soatto ( 2018 ) , who show that it controls the amount of information in the latent variable , I ( x ; z ) . Peng et al . ( 2018 ) ; Rezende & Viola ( 2018 ) further discuss constrained optimization objectives for VAEs , which also yield a similar hyperparameter . Here , we focus on β-VAEs with Gaussian decoders with constant variance , as commonly used in recent work , and show that the hyperparameter β can be incorporated in the decoding likelihood for these models . Footnote 1 : BIVA ( Maaløe et al. , 2019 ) uses the Mixture of Logistics decoder proposed in ( Salimans et al. , 2017 ) that produces the channels for each pixel autoregressively , see also App D . 3 ANALYSING DECODING DISTRIBUTIONS . The generative model of a VAE ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) with parameters θ is specified with a prior distribution over the latent variable pθ ( z ) , commonly unit Gaussian , and a decoding distribution pθ ( x|z ) , which for color images is commonly a conditional Gaussian parameterized with a neural network . We would like to fit this generative model to a given dataset by maximizing the evidence lower bound ( ELBO ( Neal & Hinton , 1998 ; Jordan et al. , 1999 ; Kingma & Welling , 2014 ; Rezende et al. , 2014 ) ) , which uses an approximate posterior distribution qφ ( z|x ) , also commonly a conditional Gaussian specified with a neural network . In this work , we focus on the form of the decoding distribution pθ ( x|z ) . To achieve the best results , we want a decoding distribution that represents the required probability p ( x|z ) accurately . In this section , we will review and analyze various choices of decoding distributions that enable better decoder calibration , including expressive decoding distributions that can represent both the prediction of the image and the uncertainty about such prediction , or even multimodal predictions . 3.1 GAUSSIAN DECODERS . We first analyse the commonly used Gaussian decoders . We note that the commonly used MSE reconstruction loss between the reconstruction $\hat{x}$ and ground truth data $x$ is equivalent to the negative log-likelihood objective with a Gaussian decoding distribution with constant variance : $-\ln p(x|z) = \frac{1}{2}\|\hat{x} - x\|^2 + D \ln \sqrt{2\pi} = \frac{1}{2}\|\hat{x} - x\|^2 + c = \frac{D}{2}\,\mathrm{MSE}(\hat{x}, x) + c$ , where $p(x|z) \sim \mathcal{N}(\hat{x}, I)$ , the prediction $\hat{x}$ is produced with a neural network $\hat{x} = \mu_\theta(z)$ , and $D$ is the dimensionality of $x$ . This demonstrates a drawback of methods that rely simply on the MSE loss ( Castrejon et al. , 2019 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ; Hafner et al. , 2019b ; Pong et al. , 2019 ; Zhu et al. , 2017 ; Henaff et al. , 2019 ) , as it is equivalent to assuming a particular , constant variance of the Gaussian decoding distribution .
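The stated equivalence is easy to check numerically. The snippet below is a small sanity check, with arbitrary tensor shapes, that the unit-variance Gaussian negative log-likelihood equals D/2 times the MSE plus the constant D ln sqrt(2 pi).

    import math
    import torch

    def gaussian_nll_unit_variance(x_hat, x):
        """-log N(x; x_hat, I), summed over data dimensions (one value per sample)."""
        d = x[0].numel()
        sq = 0.5 * (x_hat - x).pow(2).flatten(1).sum(dim=1)
        return sq + d * math.log(math.sqrt(2 * math.pi))

    x, x_hat = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
    d = x[0].numel()
    mse = (x_hat - x).pow(2).flatten(1).mean(dim=1)             # per-sample MSE
    nll = gaussian_nll_unit_variance(x_hat, x)
    const = d * math.log(math.sqrt(2 * math.pi))
    print(torch.allclose(nll, d / 2 * mse + const, rtol=1e-4))  # the two only differ by c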
By learning this variance , we can achieve much better performance due to better calibration of the decoder . There are several ways in which we can specify this variance . An expressive way to specify the variance is to specify a diagonal covariance matrix for the image , with one value per pixel ( Kingma & Welling , 2014 ; Sønderby et al. , 2016 ; Rolfe , 2016 ) . This can be done , for example , by letting a neural network $\sigma_\theta$ output the diagonal entries of the covariance matrix given a latent sample $z$ : $p_\theta(x|z) \sim \mathcal{N}(\mu_\theta(z), \sigma_\theta(z)^2)$ . ( 1 ) This parameterization of the decoding distribution outputs one variance value per pixel and channel . While powerful , we observe in Section 5.3 that this approach attains suboptimal performance , and is moreover prone to numerical instability . Instead , we find experimentally that a simpler parameterization , in which the covariance matrix is specified with a single shared ( Kingma et al. , 2016 ; Dai & Wipf , 2019 ; Edwards & Storkey , 2016 ; Rezende & Viola , 2018 ) parameter $\sigma$ as $\Sigma = \sigma^2 I$ , often works better in practice : $p_{\theta,\sigma}(x|z) \sim \mathcal{N}(\mu_\theta(z), \sigma^2 I)$ . ( 2 ) The parameter $\sigma$ can be optimized together with the parameters of the neural network $\theta$ with gradient descent . Of particular interest is the interpretation of this parameter . Writing out the expression for the decoding likelihood , we obtain $-\ln p(x|z) = \frac{1}{2\sigma^2}\|\hat{x} - x\|^2 + D \ln \sigma\sqrt{2\pi} = \frac{1}{2\sigma^2}\|\hat{x} - x\|^2 + D \ln \sigma + c = D \ln \sigma + \frac{D}{2\sigma^2}\,\mathrm{MSE}(\hat{x}, x) + c$ . The full objective of the resulting Gaussian σ-VAE is : $\mathcal{L}_{\theta,\phi,\sigma} = D \ln \sigma + \frac{D}{2\sigma^2}\,\mathrm{MSE}(\hat{x}, x) + D_{KL}(q(z|x)\,\|\,p(z))$ . ( 3 ) Note that $\sigma$ may be viewed as a weighting parameter between the MSE reconstruction term and the KL-divergence term in the objective . Moreover , this objective explicitly specifies how to select the optimal variance : the variance should be selected to minimize the ( weighted ) MSE loss while also minimizing the logarithm of the variance . Decoder Calibration It is important that the decoder distribution be calibrated in the statistical sense , that is , the predicted probabilities should correspond to the frequencies of seeing a particular value of $x$ given that prediction ( DeGroot & Fienberg , 1983 ; Dawid , 1982 ) . The calibration of a neural network can usually be improved by estimating the uncertainty of that prediction ( Guo et al. , 2017 ) , such as the variance of a Gaussian ( Kendall & Gal , 2017 ) . Since the naive MSE loss assumes a constant variance , it does not effectively represent the uncertainty of the prediction , and is often poorly calibrated . Instead , learning the variance as in Eq . 3 leads to better uncertainty estimation and better calibration . In Sec 5.1 , we show that learning a good estimate of this uncertainty is crucial for the quality of the VAE generations . Connection to β-VAE . The β-VAE objective ( Higgins et al. , 2017 ) for a Gaussian decoder with unit variance is : $\mathcal{L}_\beta = \frac{D}{2}\,\mathrm{MSE}(\hat{x}, x) + \beta D_{KL}(q(z|x)\,\|\,p(z))$ . ( 4 ) We see that it can be interpreted as a particular case of the objective ( 3 ) , where the variance is constant and the term $D \ln \sigma$ can be ignored during optimization . The β-VAE objective is then equivalent to a σ-VAE with a constant variance $\sigma = \sqrt{\beta/2}$ ( for a particular learning rate setting ) . In recent work ( Zhu et al. , 2017 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ) , β-VAE models are often used in this exact regime .
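A minimal sketch of the objective in Eq. (3), assuming a single shared log-standard-deviation learned jointly by gradient descent; the encoder outputs (mu_q, logvar_q) and all shapes are placeholder assumptions. The analytic alternative noted in the comment follows from setting the derivative of Eq. (3) with respect to sigma to zero.

    import torch
    import torch.nn as nn

    class SigmaVAELoss(nn.Module):
        """Gaussian sigma-VAE objective (Eq. 3) with a single shared log-sigma."""
        def __init__(self):
            super().__init__()
            self.log_sigma = nn.Parameter(torch.zeros(()))   # shared across all pixels

        def forward(self, x_hat, x, mu_q, logvar_q):
            d = x[0].numel()
            mse = (x_hat - x).pow(2).flatten(1).mean(dim=1)  # per-sample MSE
            rec = d * self.log_sigma + d * mse / (2 * torch.exp(2 * self.log_sigma))
            # KL(q(z|x) || N(0, I)), summed over latent dimensions.
            kl = 0.5 * (mu_q.pow(2) + logvar_q.exp() - 1.0 - logvar_q).sum(dim=1)
            return (rec + kl).mean()

    # Analytic alternative: setting dL/dsigma = 0 in Eq. (3) gives sigma^2 = MSE(x_hat, x),
    # so that batch estimate can be plugged in directly instead of learning log_sigma.
    loss_fn = SigmaVAELoss()
    x, x_hat = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
    mu_q, logvar_q = torch.randn(4, 16), torch.randn(4, 16)
    print(loss_fn(x_hat, x, mu_q, logvar_q).item())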
By tuning the β term , practitioners are able to tune the variance of the decoder , manually producing a more calibrated decoder . However , by re-interpreting the β-VAE objective as a special case of the VAE and introducing the missing $D \ln \sigma$ term , we can both obtain a valid evidence lower bound and remove the need to manually select β . Instead , the variance σ can simply be learned end-to-end , reducing the need for hyperparameter tuning . An alternative discussion of this connection in the context of linear VAEs is also presented by Lucas et al . ( 2019 ) . While the β term is not necessary for good performance if the decoder is calibrated , it can still be employed if desired , such as when the aim is to attain better disentanglement ( Higgins et al. , 2017 ) or a particular rate-distortion tradeoff ( Alemi et al. , 2017 ) . However , we found that with calibrated decoders , the best sample quality is obtained when β = 1 . Loss implementation details . For the correct evidence lower bound computation , it is necessary to sum the values of the MSE loss and the KL divergence across their dimensions . We observe that common implementations of these losses ( Denton & Fergus , 2018 ; Abadi et al. , 2016 ; Paszke et al. , 2019 ) use averaging instead , which will lead to poor results if the number of image dimensions is significantly different from the number of latent dimensions . While this can be conveniently ignored in the β-VAE regime , where the balance term is tuned manually anyway , for the σ-VAE it is essential to compute the objective value correctly . Variance implementation details . Since the variance is non-negative , we parameterize it logarithmically as $\sigma^2 = e^{2\lambda}$ , where $\lambda$ is the logarithm of the standard deviation . For some models , such as per-pixel variance decoders , we observed that it is necessary to restrict the variance range for numerical stability . We do so by using the soft clipping operations proposed by Chua et al . ( 2018 ) : $\lambda := \lambda_{\max} - \mathrm{softplus}(\lambda_{\max} - \lambda)$ ; $\lambda := \lambda_{\min} + \mathrm{softplus}(\lambda - \lambda_{\min})$ . We observe that setting $\lambda_{\min} = -6$ , which lower-bounds the standard deviation to be at least half of the distance between allowed color values , works well in practice . We also observe that this clipping is unnecessary when learning a shared σ value .
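The soft clipping just described can be written as a small helper. The lower bound follows the lambda_min = -6 value mentioned in the text, while the upper bound of 2 is an arbitrary assumption for illustration.

    import torch
    import torch.nn.functional as F

    def soft_clip(log_std, lo=-6.0, hi=2.0):
        """Smoothly constrain a log-standard-deviation to (lo, hi) (Chua et al., 2018)."""
        log_std = hi - F.softplus(hi - log_std)   # soft upper bound
        log_std = lo + F.softplus(log_std - lo)   # soft lower bound
        return log_std

    print(soft_clip(torch.tensor([-20.0, 0.0, 20.0])))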
This paper discusses a well-known problem of VAE training: the decoder produces blurry reconstructions when a constant variance is assumed. While much existing work has addressed this problem by introducing independent variance training (as in the original VAE model) or additional hyper-parameters, those approaches usually come with additional training/tuning difficulty and can even break the ELBO interpretation. This paper proposes a simple $\sigma$-VAE that addresses the above problem by optimizing a single shared variance parameter. This can also be easily connected to the well-known $\beta$-VAE works. The experimental results in Tables 2 and 3 show that the proposed model obtains better FID scores than existing works on multiple datasets.
SP:a3e5acdd322677d019a4582db78dab2dc1102818
Bayesian Neural Networks with Variance Propagation for Uncertainty Evaluation
1 INTRODUCTION . Uncertainty evaluation is a core technique in practical applications of deep neural networks ( DNNs ) . As an example , let us consider Cyber-Physical Systems ( CPS ) such as the automated driving system . In the past decade , machine learning methods have been widely utilized to realize the environment perception and path-planning components in the CPS . In particular , the automated driving system has drawn huge attention as a safety-critical and real-time CPS ( NITRD CPS Senior Steering Group , 2012 ; Wing , 2009 ) . In the automated driving system , the environment perception component is built using DNN-based predictive models . In real-world applications , the CPS is required to deal with unexpected samples that have not been seen in the training process . Therefore , not only achieving high prediction accuracy under the ideal environment but also providing uncertainty evaluation for real-world data is significant for safety-critical systems ( Henne et al. , 2019 ) . The CPS should prepare options such as rejecting the recommended action to prompt the user ’ s intervention when the uncertainty is high . Such an interactive system is necessary to build fail-safe systems ( Varshney & Alemzadeh , 2017 ; Varshney , 2016 ) . On the other hand , uncertainty evaluation is useful to enhance the efficiency of learning algorithms , i.e. , samples with high uncertainty are thought to convey important information for training networks . Active data selection based on uncertainty has been studied for a long time under the name of active learning ( David et al. , 1996 ; Gal et al. , 2017 ; Holub et al. , 2008 ; Li & Guo , 2013 ; Shui et al. , 2020 ) . In statistics and machine learning , Bayesian estimation has been commonly exploited for uncertainty evaluation ( Bishop , 2006 ) . In the Bayesian framework , prior knowledge is represented as the prior distribution of the statistical model . The prior distribution is updated to the posterior distribution based on observations . The epistemic model uncertainty is represented in the prior distribution , and upon observing data , those beliefs can be updated in the form of a posterior distribution , which yields model uncertainty conditioned on observed data . The entropy and the variance are representative uncertainty measures ( Cover & Thomas , 2006 ) . For complicated models such as DNNs , however , a direct application of Bayesian methods is prohibitive , as the required high-dimensional integration is computationally costly . In deep learning , Bayesian methods are related to stochastic learning algorithms . This relation is utilized to approximate the posterior over complex models . The stochastic method called dropout is a powerful regularization method for DNNs ( Srivastava et al. , 2014 ) . In each layer of the DNN , some units are randomly dropped during learning with stochastic gradient descent methods . Gal & Ghahramani ( 2016a ) revealed that dropout can be interpreted as a variational Bayes method . Based on this interpretation , they proposed a simple method for sampling DNN parameters from the approximate posterior distribution . Furthermore , the uncertainty of the DNN-based prediction is evaluated using the Monte-Carlo ( MC ) method called MC dropout . While the Bayesian DNN trained using dropout is realized by a simple procedure , the computational overhead is not negligible .
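For contrast with the sampling-free approach developed in this paper, the following is a minimal sketch of the MC dropout baseline just mentioned: dropout is kept active at test time and the predictive mean and variance are estimated from repeated stochastic forward passes. The small network and the number of samples are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Placeholder regression network with dropout; sizes are arbitrary.
    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
                          nn.Linear(64, 1))

    def mc_dropout_predict(model, x, num_samples=50):
        """Estimate the predictive mean and variance with dropout active at test time."""
        model.train()                      # keeps dropout sampling enabled
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(num_samples)])
        return samples.mean(dim=0), samples.var(dim=0)

    x = torch.randn(5, 10)
    mean, var = mc_dropout_predict(model, x)   # costs num_samples full forward passes
    print(mean.shape, var.shape)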
In the MC dropout , dropout is used also at the test time with a number of repeated feed-forward calculations to effectively sample from the approximate posterior . Hence , the naive MC dropout is not necessarily relevant to the system demanding the real-time response . In this work , we propose a sampling-free method to evaluate the uncertainty of the DNN-based prediction . Our method is computationally inexpensive comparing to the MC dropout and provides reliable uncertainty evaluation . In the following , we will first outline related works . Section 3 is devoted to show the detailed formulae of calculating the uncertainty . In our method , an upper bound of the variance is propagated in each layer to evaluate the uncertainty of the output . We show that the our method alleviates the overconfident prediction . This property is shared with scaling methods for the calibration of the class-probability on test samples . In Section 4 , we study the relation between our method and scaling methods . In Section 5 , we demonstrate the computational efficiency and statistical reliability of our method through some numerical experiments using both DNNs and RNNs . 2 RELATED WORKS . The framework of Bayesian inference is often utilized to evaluate the uncertainty of DNN-based predictions . In Bayesian methods , the uncertainty is represented by the predictive distribution defined from the posterior distribution of the weight parameters . MacKay ( 1992 ) proposed a simple approximation method of the posterior distribution for neural networks , and demonstrated that the Bayesian method improves the prediction performance on classification tasks . Graves ( 2011 ) showed that the variational method efficiently works to approximate the posterior distribution of complex neural network models . There are many approaches to evaluate the uncertainty of modern DNNs ( Alex Kendall & Cipolla , 2017 ; Choi et al. , 2018 ; Lu et al. , 2017 ; Le et al. , 2018 ) . We briefly review MC-based methods and sampling-free methods . Monte-Carlo methods based on Stochastic Learning : The randomness in the learning process can be interpreted as a prior distribution . In particular , the dropout is a landmark of stochastic regularization method to train DNNs ( Srivastava et al. , 2014 ) . Gal & Ghahramani ( 2016a ) proposed a simple method to generate weight parameters from the posterior distribution induced from the prior corresponding to the dropout regularization . The predictive distribution is approximated by the MC dropout , which compute the expected output over the Monte-Carlo sampling of the weight parameters . Gal & Ghahramani ( 2016b ) reported that the MC dropout efficiently works not only for feed-forward DNNs but for recurrent neural networks ( RNNs ) . Another sampling based method is the ensemble-based posteriors with different random seeds ( Lakshminarayanan et al. , 2017 ) . However , the computation cost is high as the bootstrap method requires repeated training of parameters using resampling data . Sampling-free methods : Though the MC dropout is a simple and practical method to evaluate the uncertainty , a number of feed-forward computations are necessary to approximate the predictive distribution . Recently , some sampling-free methods have been proposed for the uncertainty evaluation . Probabilistic network is a direct way to deal with uncertainty . The parameters of the probabilistic model , say the mean and the variance of the Gaussian distribution , are propagated in probabilistic neural networks . 
Then , the uncertainty evaluation is given by a single feed-forward calculation . Choi et al . ( 2018 ) used the mixture of Gaussian distributions as a probabilistic neural network , and Wang et al . ( 2016 ) proposed natural-parameter networks as a class of probabilistic neural networks based on exponential families . For a given input vector , the network outputs the parameters of the distribution . For recurrent neural networks , Hwang et al . ( 2019 ) proposed a variant of the natural-parameter networks . Instead of parameters of statistical models , Wu et al . ( 2019 ) developed a sampling-free method to propagate the first and second order moments of the posterior distribution . Sampling-free methods can evaluate the uncertainty with a one-pass computation for neural networks . However , specialized learning algorithms are required to train the probabilistic networks . Our method is applicable to DNNs and RNNs trained by common learning methods with dropout . Postels et al . ( 2019 ) and Shekhovtsov & Flach ( 2019 ) proposed similar methods that propagate the uncertainty of the network to the output layer . Differently from these past works , our method takes the upper limit of the correlations among the inputs at the affine layer into account when the uncertainty is evaluated . In addition , we show that our method works efficiently even for RNNs . 3 UNCERTAINTY EVALUATION WITH VARIANCE PROPAGATION . In this work , we assume that we can access the weight parameters of the DNN and the dropout probability used in the training process . As the variance is a common measure of uncertainty , we propose a variance propagation algorithm for the trained DNN . An implementation of our method , called nn2vpbnn , is presented in Section A in the appendix . In our method , we need only the DNN or RNN trained using dropout . Unlike various kinds of probabilistic NNs , we do not need any specialized training procedure to evaluate the uncertainty . This is a great advantage for our implementation . Furthermore , the representative values of the predictive distribution , i.e . the mean and variance , are obtained by a one-pass feed-forward calculation . Hence , we can circumvent iterative Monte-Carlo calculations . 3.1 UNCERTAINTY IN AFFINE LAYER . Let us consider the output of the affine layer $y = Wx + b$ for the random input $x$ , where $W = (W_{ij}) \in \mathbb{R}^{\ell \times m}$ and $b = (b_i)_{i=1}^{\ell} \in \mathbb{R}^{\ell}$ . Suppose that the random vector $x$ has the mean vector $E[x]$ and the variance-covariance matrix $(\Sigma_x)_{i,j} = \mathrm{Cov}(x_i, x_j)$ for $i , j = 1 , \ldots , m$ . Then , the mean vector $E[y]$ and the variance-covariance matrix $\Sigma_y$ of $y$ are given by $E[y] = W E[x] + b$ and $\Sigma_y = W \Sigma_x W^{T}$ . As the estimation of the full variance-covariance matrix is not necessarily reliable , we use only the variances of each $x_i$ and an upper bound on the absolute correlation coefficient to evaluate the uncertainty . For $W = (W_{ij})$ , the variance $\mathrm{Var}[y_i]$ is $\mathrm{Var}[y_i] = \sum_j W_{ij}^2 \mathrm{Var}[x_j] + \sum_{j \neq j'} W_{ij} W_{ij'} \mathrm{Cov}(x_j, x_{j'})$ . Suppose the absolute correlation coefficient among $x_1 , \ldots , x_m$ is bounded above by $\rho$ , $0 \le \rho \le 1$ . Using the relation between the correlation and the variance , we have $\mathrm{Var}[y_i] \le \sum_j W_{ij}^2 \mathrm{Var}[x_j] + \rho \sum_{j \neq j'} |W_{ij}| |W_{ij'}| \sqrt{\mathrm{Var}(x_j)} \sqrt{\mathrm{Var}(x_{j'})} = (1 - \rho) \sum_j |W_{ij}|^2 \mathrm{Var}[x_j] + \rho \bigl( \sum_j |W_{ij}| \sqrt{\mathrm{Var}(x_j)} \bigr)^2$ , $i = 1 , \ldots , \ell$ . ( 1 ) Under the independence assumption , i.e. , $\rho = 0$ , the minimum upper bound is obtained .
A prediction with an underestimated variance leads to overconfident decision making . Hence , upper bounding the variance is important for building fail-safe systems . A simple method of estimating $\rho$ is presented in Section 3.5 . Using the above formula , the mean and an upper bound on the variance of $y$ are computed from the mean and an upper bound on the variance of $x$ . In this paper , such a computation is referred to as Variance Propagation , or VP for short . Let us define the variance vector of the $m$-dimensional random vector $x = (x_1 , \ldots , x_m) \in \mathbb{R}^m$ by $\mathrm{Var}[x] = (\mathrm{Var}[x_1] , \ldots , \mathrm{Var}[x_m]) \in \mathbb{R}^m$ . Furthermore , we denote the concatenated vector of the mean and variance of $z$ , or its approximation , as $U(z)$ , i.e. , $U(z) = (E[z] , \mathrm{Var}[z])$ . The VP at the affine layer is expressed by the function $T_{\mathrm{aff}}$ , $U(y) = (m , v) = T_{\mathrm{aff}}(U(x))$ , ( 2 ) where $m = W E[x] + b \in \mathbb{R}^{\ell}$ and each element of $v \in \mathbb{R}^{\ell}$ is defined by equation 1 . The average pooling layer , the global average pooling layer ( Lin et al. , 2013 ) , and the batch normalization layer ( Ioffe & Szegedy , 2015 ) are examples of affine layers . Hence , the VP of the affine layer also works to evaluate the uncertainty of these layers . The distribution of $y_i$ is well approximated by a univariate Gaussian distribution if the correlation among $x$ is small ( Wang & Manning , 2013 ; Wu et al. , 2019 ) . Based on this fact , the uncertainty of $y_i$ can be represented by the univariate Gaussian distribution $\mathcal{N}(E[y_i] , \mathrm{Var}[y_i])$ . In our method , the variance $\mathrm{Var}[y_i]$ of the approximate Gaussian is given by the variance $v$ in equation 2 .
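A direct transcription of the variance propagation step for a single affine layer, following Eqs. (1)-(2), is sketched below with toy dimensions; a full implementation would chain such steps through every layer of the network, including the nonlinear layers, which are not handled here.

    import numpy as np

    def affine_vp(W, b, mean_x, var_x, rho=0.0):
        """Propagate the mean and a variance upper bound through y = W x + b (Eqs. 1-2)."""
        mean_y = W @ mean_x + b
        term_independent = (W ** 2) @ var_x                     # rho = 0 part
        term_correlated = (np.abs(W) @ np.sqrt(var_x)) ** 2     # fully correlated part
        var_y_bound = (1.0 - rho) * term_independent + rho * term_correlated
        return mean_y, var_y_bound

    # Toy dimensions: 5 inputs, 3 outputs.
    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(3, 5)), np.zeros(3)
    mean_x, var_x = rng.normal(size=5), rng.uniform(0.1, 1.0, size=5)
    print(affine_vp(W, b, mean_x, var_x, rho=0.1))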
This paper proposes a sampling free technique based on variance propagation to model predictive distributions of deep learning models. Estimating uncertainty of deep learning models is an important line of research for understanding the reliability of predictions and ensuring robustness to out-of-distribution data. Results are shown using synthetic data, perplexity analysis for a language modeling task and out-of-distribution detection performance using a convolutional network.
SP:3a1d7f7165762299ba2d9bab4144576660b9a784
Private Post-GAN Boosting
1 INTRODUCTION . The vast collection of detailed personal data , including everything from medical history to voting records , to GPS traces , to online behavior , promises to enable researchers from many disciplines to conduct insightful data analyses . However , many of these datasets contain sensitive personal information , and there is a growing tension between data analyses and data privacy . To protect the privacy of individual citizens , many organizations , including Google ( Erlingsson et al. , 2014 ) , Microsoft ( Ding et al. , 2017 ) , Apple ( Differential Privacy Team , Apple , 2017 ) , and more recently the 2020 US Census ( Abowd , 2018 ) , have adopted differential privacy ( Dwork et al. , 2006 ) as a mathematically rigorous privacy measure . However , working with noisy statistics released under differential privacy requires training . A natural and promising approach to tackle this challenge is to release differentially private synthetic data—a privatized version of the dataset that consists of fake data records and that approximates the real dataset on important statistical properties of interest . Since they already satisfy differential privacy , synthetic data enable researchers to interact with the data freely and to perform the same analyses even without expertise in differential privacy . A recent line of work ( Beaulieu-Jones et al. , 2019 ; Xie et al. , 2018 ; Yoon et al. , 2019 ) studies how one can generate synthetic data by incorporating differential privacy into generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) . Although GANs provide a powerful framework for synthetic data , they are also notoriously hard to train and privacy constraint imposes even more difficulty . Due to the added noise in the private gradient updates , it is often difficult to reach convergence with private training . In this paper , we study how to improve the quality of the synthetic data produced by private GANs . Unlike much of the prior work that focuses on fine-tuning of network architectures and training techniques , we propose Private post-GAN boosting ( Private PGB ) —a differentially private method that boosts the quality of the generated samples after the training of a GAN . Our method can be viewed as a simple and practical amplification scheme that improves the distribution from any ex- isting black-box GAN training method – private or not . We take inspiration from an empirical observation in Beaulieu-Jones et al . ( 2019 ) that even though the generator distribution at the end of the private training may be a poor approximation to the data distribution ( due to e.g . mode collapse ) , there may exist a high-quality mixture distribution that is given by several generators over different training epochs . PGB is a principled method for finding such a mixture at a moderate privacy cost and without any modification of the GAN training procedure . To derive PGB , we first formulate a two-player zero-sum game , called post-GAN zero-sum game , between a synthetic data player , who chooses a distribution over generated samples over training epochs to emulate the real dataset , and a distinguisher player , who tries to distinguish generated samples from real samples with the set of discriminators over training epochs . 
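As a rough, non-private sketch of this post-GAN game, the loop below maintains a distribution over previously generated samples and reweights it multiplicatively against whichever placeholder "discriminator" currently scores the mixture as most fake; the real-data term of the distinguisher's payoff, the exponential-mechanism selection, and the noise needed for privacy are all omitted, so this is only meant to convey the reweighting idea.

    import numpy as np

    def post_gan_boosting(samples, discriminators, num_rounds=20, lr=0.5):
        """Non-private sketch: reweight generated samples against the toughest discriminator."""
        n = len(samples)
        weights = np.ones(n) / n                       # synthetic data player's mixture
        for _ in range(num_rounds):
            # Distinguisher's (approximate) best response: the discriminator whose
            # average score on the current fake mixture is lowest, i.e. the one that
            # most confidently calls the mixture fake (real-data term omitted here).
            scores = np.array([np.dot(weights, d(samples)) for d in discriminators])
            d_best = discriminators[int(np.argmin(scores))]
            # Multiplicative-weights step: up-weight samples that d_best scores as real.
            weights *= np.exp(lr * d_best(samples))
            weights /= weights.sum()
        return weights

    # Toy usage: 1-D generated samples and two hand-made "discriminators" returning P(real).
    samples = np.random.randn(200)
    discriminators = [lambda s: 1.0 / (1.0 + np.exp(-s)), lambda s: 1.0 / (1.0 + np.exp(s))]
    weights = post_gan_boosting(samples, discriminators)
    print(weights.sum(), weights.max())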
We show that under a “ support coverage ” assumption the synthetic data player ’ s mixed strategy ( given by a distribution over the generated samples ) at an equilibrium can successfully “ fool ” the distinguisher–that is , no mixture of discriminators can distinguish the real versus fake examples better than random guessing . While the strict assumption does not always hold in practice , we demonstrate empirically that the synthetic data player ’ s equilibrium mixture consistently improves the GAN distribution . The Private PGB method then privately computes an approximate equilibrium in the game . The algorithm can be viewed as a computationally efficient variant of MWEM ( Hardt & Rothblum , 2010 ; Hardt et al. , 2012 ) , which is an inefficient query release algorithm with near-optimal sample complexity . Since MWEM maintains a distribution over exponentially many “ experts ” ( the set of all possible records in the data domain ) , it runs in time exponential in the dimension of the data . In contrast , we rely on private GAN to reduce the support to only contain the set of privately generated samples , which makes PGB tractable even for high-dimensional data . We also provide an extension of the PGB method by incorporating the technique of discriminator rejection sampling ( Azadi et al. , 2019 ; Turner et al. , 2019 ) . We leverage the fact that the distinguisher ’ s equilibrium strategy , which is a mixture of discriminators , can often accurately predict which samples are unlikely and thus can be used as a rejection sampler . This allows us to further improve the PGB distribution with rejection sampling without any additional privacy cost since differential privacy is preserved under post-processing . Our Private PGB method also has a natural non-private variant , which we show improves the GAN training without privacy constraints . We empirically evaluate both the Private and Non-Private PGB methods on several tasks . To visualize the effects of our methods , we first evaluate our methods on a two-dimensional toy dataset with samples drawn from a mixture of 25 Gaussian distributions . We define a relevant quality score function and show that the both Private and Non-Private PGB methods improve the score of the samples generated from GAN . We then show that the Non-Private PGB method can also be used to improve the quality of images generated by GANs using the MNIST dataset . Finally , we focus on applications with high relevance for privacy-protection . First we synthesize US Census datasets and demonstrate that the PGB method can improve the generator distribution on several statistical measures , including 3-way marginal distributions and pMSE . Secondly , we evaluate the PGB methods on a dataset with a natural classification task . We train predictive models on samples from Private PGB and samples from a private GAN ( without PGB ) , and show that PGB consistently improves the model accuracy on real out-of-sample test data . Related work . Our PGB method can be viewed as a modular boosting method that can improve on a growing line of work on differentially private GANs ( Beaulieu-Jones et al. , 2019 ; Xie et al. , 2018 ; Frigerio et al. , 2019 ; Torkzadehmahani et al. , 2020 ) . To obtain formal privacy guarantees , these algorithms optimize the discriminators in GAN under differential privacy , by using private SGD , RMSprop , or Adam methods , and track the privacy cost using moments accounting Abadi et al . ( 2016 ) ; Mironov ( 2017 ) . Yoon et al . 
( 2019 ) give a private GAN training method by adapting ideas from the PATE framework ( Papernot et al. , 2018 ) . Our PGB method is inspired by the Private Multiplicative Weights method ( Hardt & Rothblum , 2010 ) and its more practical variant MWEM ( Hardt et al. , 2012 ) , which answer a large collection of statistical queries by releasing a synthetic dataset . Our work also draws upon two recent techniques ( Turner et al . ( 2019 ) and Azadi et al . ( 2019 ) ) that use the discriminator as a rejection sampler to improve the generator distribution . We apply their technique by using the mixture discriminator computed in PGB as the rejection sampler . There has also been work that applies the idea of boosting to ( non-private ) GANs . For example , Arora et al . ( 2017 ) and Hoang et al . ( 2018 ) propose methods that directly train a mixture of generators and discriminators , and Tolstikhin et al . ( 2017 ) proposes AdaGAN , which reweights the real examples during training similarly to what is done in AdaBoost ( Freund & Schapire , 1997 ) . Both of these approaches may be hard to make differentially private : they either require substantially more privacy budget to train a collection of discriminators or increase the weights on a subset of examples , which requires adding more noise when computing private gradients . In contrast , our PGB method boosts the generated samples post training and does not make modifications to the GAN training procedure . 2 PRELIMINARIES . Let $\mathcal{X}$ denote the data domain of all possible observations in a given context . Let $p_d$ be a distribution over $\mathcal{X}$ . We say that two datasets $X , X' \in \mathcal{X}^n$ are adjacent , denoted by $X \sim X'$ , if they differ by at most one observation . We will write $p_X$ to denote the empirical distribution over $X$ . Definition 1 ( Differential Privacy ( DP ) ( Dwork et al. , 2006 ) ) . A randomized algorithm $A : \mathcal{X}^n \to \mathcal{R}$ with output domain $\mathcal{R}$ ( e.g . all generative models ) is $(\varepsilon , \delta)$-differentially private ( DP ) if for all adjacent datasets $X , X' \in \mathcal{X}^n$ and for all $S \subseteq \mathcal{R}$ : $P(A(X) \in S) \le e^{\varepsilon} P(A(X') \in S) + \delta$ . A very nice property of differential privacy is that it is preserved under post-processing . Lemma 1 ( Post-processing ) . Let $M$ be an $(\varepsilon , \delta)$-differentially private algorithm with output range $\mathcal{R}$ and let $f : \mathcal{R} \to \mathcal{R}'$ be any mapping ; then the composition $f \circ M$ is $(\varepsilon , \delta)$-differentially private . As a result , any subsequent analyses conducted on DP synthetic data also satisfy DP . The exponential mechanism ( McSherry & Talwar , 2007 ) is a private mechanism for selecting among the best of a discrete set of alternatives $\mathcal{R}$ , where “ best ” is defined by a quality function $q : \mathcal{X}^n \times \mathcal{R} \to \mathbb{R}$ that measures the quality of the result $r$ for the dataset $X$ . The sensitivity of the quality score $q$ is defined as $\Delta(q) = \max_{r \in \mathcal{R}} \max_{X \sim X'} |q(X , r) - q(X' , r)|$ . Then given a quality score $q$ and privacy parameter $\varepsilon$ , the exponential mechanism $M_E(q , \varepsilon , X)$ simply samples a random alternative from the range $\mathcal{R}$ such that the probability of selecting each $r$ is proportional to $\exp(\varepsilon q(X , r) / (2 \Delta(q)))$ . 2.1 DIFFERENTIALLY PRIVATE GAN . The framework of generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) consists of two types of neural networks : generators and discriminators . A generator $G$ is a function that maps random vectors $z \in \mathcal{Z}$ drawn from a prior distribution $p_z$ to a sample $G(z) \in \mathcal{X}$ . A discriminator $D$ takes an observation $x \in \mathcal{X}$ as input and computes a probability $D(x)$ that the observation is real .
Each observation is either drawn from the underlying distribution $p_d$ or the induced distribution $p_g$ from a generator . The training of GAN involves solving the following joint optimization over the discriminator and generator : $\min_G \max_D \; \mathbb{E}_{x \sim p_X}[f(D(x))] + \mathbb{E}_{z \sim p_z}[f(1 - D(G(z)))]$ , where $f : [0 , 1] \to \mathbb{R}$ is a monotone function . For example , in standard GAN , $f(a) = \log a$ , and in Wasserstein GAN ( Arjovsky et al. , 2017 ) , $f(a) = a$ . The standard ( non-private ) algorithm iterates between optimizing the parameters of the discriminator and the generator based on the loss functions : $L_D = -\mathbb{E}_{x \sim p_X}[f(D(x))] - \mathbb{E}_{z \sim p_z}[f(1 - D(G(z)))]$ , $L_G = \mathbb{E}_{z \sim p_z}[f(1 - D(G(z)))]$ . The private algorithm for training GAN also performs the same alternating optimization , but it optimizes the discriminator under differential privacy while keeping the generator optimization the same . In general , the training proceeds over epochs $\tau = 1 , \ldots , N$ , and at the end of each epoch $\tau$ the algorithm obtains a discriminator $D_\tau$ and a generator $G_\tau$ by optimizing the loss functions respectively . In Beaulieu-Jones et al . ( 2019 ) ; Xie et al . ( 2018 ) , the private optimization on the discriminators is done by running the private SGD method Abadi et al . ( 2016 ) or its variants . Yoon et al . ( 2019 ) performs the private optimization by incorporating the PATE framework Papernot et al . ( 2018 ) . For all of these private GAN methods , the entire sequence of discriminators $\{D_1 , \ldots , D_N\}$ satisfies privacy , and thus the sequence of generators $\{G_1 , \ldots , G_N\}$ is also private since they can be viewed as post-processing of the discriminators . Our PGB method is agnostic to the exact private GAN training methods .
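The exponential mechanism defined in the preliminaries above is short enough to sketch directly; the candidate set, quality function, and claimed sensitivity below are toy assumptions for illustration.

    import numpy as np

    def exponential_mechanism(dataset, candidates, quality, sensitivity, epsilon, rng):
        """Sample r with probability proportional to exp(epsilon * q(X, r) / (2 * Delta(q)))."""
        scores = np.array([quality(dataset, r) for r in candidates])
        logits = epsilon * scores / (2.0 * sensitivity)
        probs = np.exp(logits - logits.max())          # subtract max for numerical stability
        probs /= probs.sum()
        return candidates[rng.choice(len(candidates), p=probs)]

    # Toy usage: privately select the histogram bin whose count is closest to a target.
    rng = np.random.default_rng(0)
    data = rng.integers(0, 4, size=100)
    candidates = [0, 1, 2, 3]
    quality = lambda X, r: -abs(int((X == r).sum()) - 25)   # changes by at most 1 per record
    print(exponential_mechanism(data, candidates, quality, sensitivity=1.0,
                                epsilon=1.0, rng=rng))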
This paper studies differentially private synthetic dataset generation. Unlike previous DP-based GAN models, this paper aims to boost the sample quality after the training stage. In particular, the final synthetic dataset is sampled from the sequence of generators obtained during GAN training. The distribution is obtained by a private two-player game between the privately selected discriminator and a sampler from the mixture of generators. The results are demonstrated on Gaussian data and tabular data.
SP:72d1283f3602edc22896934271fcec5b03f25d9e
A Near-Optimal Recipe for Debiasing Trained Machine Learning Models
1 INTRODUCTION . Machine learning is increasingly applied to critical decisions which can have a lasting impact on individual lives , such as for credit lending ( Bruckner , 2018 ) , medical applications ( Deo , 2015 ) , and criminal justice ( Brennan et al. , 2009 ) . Consequently , it is imperative to understand and improve the degree of bias of such automated decision-making . Unfortunately , despite the fact that bias ( or “ fairness ” ) is a central concept in our society today , it is difficult to define it in precise terms . In fact , as people perceive ethical matters differently depending on a plethora of factors including geographical location or culture ( Awad et al. , 2018 ) , no universally agreed-upon definition of bias exists . Moreover , the definition of bias may depend on the application and might even be ignored in favor of accuracy when the stakes are high , such as in medical diagnosis ( Kleinberg et al. , 2017 ; Ingold and Soper , 2016 ) . As such , it is not surprising that several definitions of “ unbiased classification ” have been introduced . These include statistical parity ( Dwork et al. , 2012 ; Zafar et al. , 2017a ) , equality of opportunity ( Hardt et al. , 2016 ) , and equalized odds ( Hardt et al. , 2016 ; Kleinberg et al. , 2017 ) . Unfortunately , such definitions are not generally compatible ( Chouldechova , 2017 ) and some might even be in conflict with calibration ( Kleinberg et al. , 2017 ) . In addition , because fairness is a societal concept , it does not necessarily translate into a statistical criterion ( Chouldechova , 2017 ; Dixon et al. , 2018 ) . Statistical parity Let $\mathcal{X}$ be an instance space and let $\mathcal{Y} = \{0 , 1\}$ be the target set in a standard binary classification problem . In the fair classification setting , we may further assume the existence of a ( possibly randomized ) sensitive attribute $s : \mathcal{X} \to \{0 , 1 , \ldots , K\}$ , where $s(x) = k$ if and only if $x \in \mathcal{X}_k$ for some total partition $\mathcal{X} = \cup_k \mathcal{X}_k$ . For example , $\mathcal{X}$ might correspond to the set of job applicants while $s$ indicates their gender . Here , the sensitive attribute can be randomized if , for instance , the gender of an applicant is not a deterministic function of the full instance $x \in \mathcal{X}$ ( e.g . number of publications , years of experience , etc . ) . Then , a commonly used criterion for fairness is to require similar mean outcomes across the sensitive attribute . This property is well captured through the notion of statistical parity ( a.k.a . demographic parity ) ( Corbett-Davies et al. , 2017 ; Dwork et al. , 2012 ; Zafar et al. , 2017a ; Mehrabi et al. , 2019 ) : Definition 1 ( Statistical Parity ) . Let $\mathcal{X}$ be an instance space and $\mathcal{X} = \cup_k \mathcal{X}_k$ be a total partition of $\mathcal{X}$ . A classifier $f : \mathcal{X} \to \{0 , 1\}$ satisfies statistical parity across all groups $\mathcal{X}_1 , \ldots , \mathcal{X}_K$ if : $\max_{k \in \{1 , \ldots , K\}} \mathbb{E}_x[f(x) \mid x \in \mathcal{X}_k] - \min_{k \in \{1 , \ldots , K\}} \mathbb{E}_x[f(x) \mid x \in \mathcal{X}_k] \le \epsilon$ . To motivate and further clarify the definition , we showcase the empirical results on the Adult benchmark dataset ( Blake and Merz , 1998 ) in Figure 1 . When tasked with predicting whether the income of individuals is above $ 50K per year , all considered classifiers exhibit gender-related bias . One way of removing such bias is to enforce statistical parity across genders . Crucially , however , without taking ethnicity into account , different demographic groups may experience different outcomes . In fact , gender bias can actually increase in some minority groups after enforcing statistical parity .
This can be fixed by redefining the sensitive attribute to be the cross product of both gender and ethnicity ( green bars ) . Our main contribution is to present a near-optimal recipe for debiasing models , including deep neural networks , according to Definition 1 . Specifically , we formulate the task of debiasing learned models as a regularized optimization problem that is solved efficiently using the projected SGD method . We show how the algorithm produces thresholding rules with randomization near the thresholds , where the width of randomization is controlled by the regularization parameter . We also show that randomization near the threshold is necessary for Bayes risk consistency . While we focus on binary sensitive attributes in our experiments in Section 5 , our algorithm and its theoretical guarantees continue to hold for non-binary sensitive attributes as well . Statement of Contribution . 1 . We derive a near-optimal post-processing algorithm for debiasing learned models ( Section 3 ) . 2 . We prove theoretical guarantees for the proposed algorithm , including a proof of correctness and an explicit bound on the Bayes excess risk ( Section 4 ) . 3 . We empirically validate the proposed algorithm on benchmark datasets across both classical algorithms and modern DNN architectures . Our experiments demonstrate that the proposed algorithm significantly outperforms previous post-processing methods ( Section 5 ) . In Appendix E , we also show how the proposed algorithm can be modified to handle other criteria of bias as well . 2 RELATED WORK . Algorithms for fair machine learning can be broadly classified into three groups : ( 1 ) pre-processing methods , ( 2 ) in-processing methods , and ( 3 ) post-processing methods ( Zafar et al. , 2019 ) . Preprocessing algorithms transform the data into a different representation such that any classifier trained on it will not exhibit bias . This includes methods for learning a fair representation ( Zemel et al. , 2013 ; Lum and Johndrow , 2016 ; Bolukbasi et al. , 2016 ; Calmon et al. , 2017 ; Madras et al. , 2018 ; Kamiran and Calders , 2012 ) , label manipulation ( Kamiran and Calders , 2009 ) , data augmentation ( Dixon et al. , 2018 ) , or disentanglement ( Locatello et al. , 2019 ) . On the other hand , in-processing methods constrain the behavior of learning algorithms in order to control bias . This includes methods based on adversarial learning ( Zhang et al. , 2018 ) and constraint-based classification , such as by incorporating constrains on the decision margin ( Zafar et al. , 2019 ) or features ( Grgić-Hlača et al. , 2018 ) . Agarwal et al . ( 2018 ) showed that the task of learning an unbiased classifier could be reduced to a sequence of cost-sensitive classification problems , which could be applied to any black-box classifier . One caveat of the latter approach is that it requires solving a linear program ( LP ) and retraining classifiers , such as neural networks , many times before convergence . The algorithm we propose in this paper is a post-processing method , which can be justified theoretically ( Corbett-Davies et al. , 2017 ; Hardt et al. , 2016 ; Menon and Williamson , 2018 ; Celis et al. , 2019 ) . Fish et al . ( 2016 ) and Woodworth et al . ( 2017 ) fall under this category . However , the former only provides generalization guarantees without consistency results while the latter proposes a twostage approach that requires changes to the original training algorithm . Kamiran et al . 
( 2012 ) also proposes a post-processing algorithm , called Reject Option Classifier ( ROC ) , without providing any theoretical guarantees . In contrast , our algorithm is Bayes consistent and does not alter the original classification method . In Celis et al . ( 2019 ) and Menon and Williamson ( 2018 ) , instance-dependent thresholding rules are also learned . However , our algorithm also learns to randomize around the threshold ( Figure 2 ( a ) ) and this randomization is key to our algorithm both theoretically as well as experimentally ( Appendix C and Section 5 ) . Hardt et al . ( 2016 ) learns a randomized post-processing rule but our proposed algorithm outperforms it in all of our experiments ( Section 5 ) . Woodworth et al . ( 2017 ) showed that the post-processing approach can , sometimes , be highly suboptimal . Nevertheless , the latter result does not contradict the statement that our post-processing rule is near-optimal because we assume that the original classifier outputs a monotone transformation of some approximation to the posterior probability p ( y = 1 | x ) ( e.g . margin or softmax output ) whereas Woodworth et al . ( 2017 ) assumed in their construction that the post-processing rule had access to the binary predictions only . We argue that the proposed algorithm has distinct advantages , particularly for deep neural networks ( DNNs ) . First , stochastic convex optimization methods are well-understood and can scale well to massive amounts of data ( Bottou , 2010 ) , which is often the case in deep learning today . Second , the guarantees provided by our algorithm hold w.r.t . the binary predictions instead of using a proxy , such as the margin as in some previous works ( Zafar et al. , 2017b ; 2019 ) . Third , unlike previous reduction methods that would require retraining a deep neural network several times until convergence ( Agarwal et al. , 2018 ) , which can be prohibitively expensive , our algorithm operates on learned models that are trained once and does not require retraining . Besides developing algorithms for fair classification , several recent works focused on other related aspects , such as proposing new definitions for fairness ; e.g . demographic parity ( Dwork et al. , 2012 ; Mehrabi et al. , 2019 ) , equalized odds ( Hardt et al. , 2016 ) , equality of opportunity/disparate mistreatment ( Zafar et al. , 2017a ; Hardt et al. , 2016 ) , and individual fairness ( Dwork et al. , 2012 ) . Recent works have also established several impossibility results related to fair classification , such as Kleinberg et al . ( 2017 ) ; Chouldechova ( 2017 ) . In our case , we derive a new impossibility result that holds for any deterministic binary classifier and relate it to the task of controlling the covariance between the classifier ’ s predictions and the sensitive attribute ( Appendix E ) . 3 NEAR-OPTIMAL ALGORITHM FOR STATISTICAL PARITY . Notation We reserve boldface letters for random variables ( e.g . x ) , small letters for instances ( e.g . x ) , capital letters for sets ( e.g . X ) , and calligraphic typeface for universal sets ( e.g . the instance space X ) . Given a set S , 1S ( x ) ∈ { 0 , 1 } is the characteristic function indicating whether x ∈ S. We denote by [ n ] the set of integers { 1 , . . . , n } and [ x ] + = max { 0 , x } . Algorithm Given a classifier f : X → [ −1 , +1 ] our goal is to post-process the predictions made by f 1 in order to control the bias with respect to a sensitive attribute s : X → [ K ] as in Definition 1 . 
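As a quick illustration of Definition 1, the sketch below estimates the empirical statistical parity gap of a fixed classifier from its binary predictions and group memberships; the function name and the toy data are purely illustrative and are not taken from the paper.

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    # Empirical version of Definition 1:
    # max_k E[f(x) | x in X_k]  -  min_k E[f(x) | x in X_k]
    rates = [y_pred[group == k].mean() for k in np.unique(group)]
    return max(rates) - min(rates)

# toy usage: predictions f(x) in {0, 1} and group indices s(x)
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0])
group  = np.array([1, 1, 1, 1, 2, 2, 2, 2])
print(statistical_parity_gap(y_pred, group))  # 0.75 - 0.25 = 0.5
```

A classifier satisfies the epsilon-relaxed version of Definition 1 when this gap is at most epsilon.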
To this end , instead of learning a deterministic classifier , we consider randomized prediction rules of the form h̃ : X × { 1 , 2 , . . . , K } × [ −1 , 1 ] → [ 0 , 1 ] , where h̃ ( x ) represents the probability of predicting the positive class given ( i ) instance x ∈ X , ( ii ) sensitive attribute s ( x ) , and ( iii ) classifier ’ s output f ( x ) . As discussed in Appendix B , for post-processing rule h̃ ( x ) , and for each group Xk ⊆ X , the fairness constraint in Definition 1 can be written as |Ex [ h̃ ( x ) | x ∈ Xk ] − ρ| ≤ , where ρ ∈ [ 0 , 1 ] is a hyper-parameter tuned via a validation dataset . On the other hand , minimizing the probability of altering the predictions of the original classifier can be achieved by maximizing the inner product Ex [ h̃ ( x ) ·f ( x ) ] . Instead of optimizing this quantity directly , which would lead to a pure thresholding rule , we minimize the regularized objective : ( γ/2 ) Ex [ h̃ ( x ) 2 ] −Ex [ h̃ ( x ) · f ( x ) ] for some regularization parameter γ > 0 . This regularization leads to randomization around the threshold , which we show to be critical , both theoretically ( Section 4 and Appendix C ) and experimentally ( Section 5 ) . Using Lagrange duality we show that the solution reduces to the update rules in Equation 2 with optimization variables { λk , µk } k∈ [ K ] and the corresponding predictor which outputs +1 for group Xk with probability h̃γ ( x ) is given by h̃γ ( x ) = 0 , f ( x ) ≤ λk − µk ( f ( x ) − λk + µk ) /γ , λk − µk ≤ f ( x ) ≤ λk − µk + γ 1 , f ( x ) ≥ λk − µk + γ ( 1 ) where ξγ is given by Eq . ( 3 ) . Update rules To learn these parameters , one can apply the following update rules ( Appendix B ) : λs ( x ) ← max { 0 , λs ( x ) − η ( 2 + ρ+ ∂ ∂λs ( x ) ξγ ( f ( x ) − ( λs ( x ) − µs ( x ) ) ) ) } µs ( x ) ← max { 0 , µs ( x ) − η ( 2 − ρ+ ∂ ∂µs ( x ) ξγ ( f ( x ) − ( λs ( x ) − µs ( x ) ) ) ) } , ( 2 ) where , again , ρ ∈ [ 0 , 1 ] is a hyperparameter tuned via a validation dataset , s : X → [ K ] is the sensitive attribute , and γ > 0 is a regularization parameter that controls the level of randomization . In addition , the function ξγ : R→ R+ is given by : ξγ ( w ) = w2 2γ · I { 0 ≤ w ≤ γ } + ( w − γ 2 ) · I { w > γ } ( 3 ) Note that ξγ is convex and its derivative ξ′γ is ( 1/γ ) -Lipschitz continuous ; it can be interpreted as differentiable approximation to the ReLU unit ( Nair and Hinton , 2010 ) . A full pseudocode of the proposed algorithm is presented in Appendix A .
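The following is a minimal sketch of the group-wise randomized rule in Eq. (1) and of the smoothed-ReLU function ξγ in Eq. (3); the dual variables λk, µk are treated as given here, whereas in the paper they are learned with the projected SGD updates of Eq. (2). Function and variable names are illustrative.

```python
import numpy as np

def xi_gamma(w, gamma):
    # Eq. (3): w^2 / (2*gamma) on [0, gamma], w - gamma/2 for w > gamma, 0 otherwise
    return np.where(w <= 0, 0.0,
                    np.where(w <= gamma, w ** 2 / (2 * gamma), w - gamma / 2))

def h_tilde(f_x, k, lam, mu, gamma):
    # Eq. (1): probability of predicting +1 for an instance of group k,
    # given the original classifier score f(x) in [-1, +1].
    t = lam[k] - mu[k]                   # group-specific threshold
    p = (f_x - t) / gamma                # linear ramp of width gamma above the threshold
    return float(np.clip(p, 0.0, 1.0))   # 0 below t, randomized on [t, t + gamma], 1 above

# toy usage: gamma controls the width of the randomization band
lam, mu = np.array([0.2, 0.5]), np.array([0.0, 0.1])
print(h_tilde(0.3, k=0, lam=lam, mu=mu, gamma=0.2))      # 0.5, i.e. inside the ramp
print(xi_gamma(np.array([-1.0, 0.1, 1.0]), gamma=0.2))   # [0.    0.025 0.9  ]
```

As gamma approaches 0 the rule collapses to a hard, group-specific thresholding of f(x), which is precisely the deterministic behaviour that the regularization is designed to avoid.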
In this paper, the authors propose a post-processing method for removing bias from a trained model. The bias is defined as conditional statistical parity — for a given partitioning of the data, the predicted label should be conditionally uncorrelated with the sensitive (bias-inducing) attribute within each partition. The authors relax this strong requirement to an epsilon-constraint on the conditional covariance for each partition. As an example, race (sensitive attribute) should be conditionally uncorrelated with whether an individual will default on their loan (predicted target) within each city (data partition). The authors propose a constrained optimization problem that takes the input data, sensitive attribute, partitioning, and a trained model, and yields a probabilistic decision rule. Subsequently, they propose an iterative solution to the problem, prove some of its theoretical properties, and show how the method compares to different baselines.
SP:a6280b6605e621403de6ac4c3fc80fa71184ab6d
DeLighT: Deep and Light-weight Transformer
1 INTRODUCTION . Attention-based transformer networks ( Vaswani et al. , 2017 ) are widely used for sequence modeling tasks , including language modeling and machine translation . To improve performance , models are often scaled to be either wider , by increasing the dimension of hidden layers , or deeper , by stacking more transformer blocks . For example , T5 ( Raffel et al. , 2019 ) uses a dimension of 65K and GPT-3 ( Brown et al. , 2020 ) uses 96 transformer blocks . However , such scaling increases the number of network parameters significantly ( e.g. , T5 and GPT-3 have 11 billion and 175 billion parameters , respectively ) , and complicates learning , i.e. , these models either require very large training corpora ( Raffel et al. , 2019 ; Devlin et al. , 2019 ; Brown et al. , 2020 ) or careful regularization ( Hinton et al. , 2012 ; Wan et al. , 2013 ; Merity et al. , 2018a ) . In this paper , we introduce a new parameter-efficient attention-based architecture that can be easily scaled to be both wide and deep . Our Deep and Light-weight Transformer architecture , DeLighT , extends the transformer architecture of Vaswani et al . ( 2017 ) and delivers similar or better performance with significantly fewer parameters and operations . At the heart of DeLighT is the DeLighT transformation that uses the group linear transformations ( GLTs ) of Mehta et al . ( 2018 ) with an expand-reduce strategy for varying the width and depth of the DeLighT block efficiently . Since GLTs are local by nature , the DeLighT transformation uses feature shuffling , which is analogous to channel shuffling in convolutional networks ( Zhang et al. , 2018 ) , to share information between different groups . Such wide and deep representations facilitate replacing the multi-head attention and feed-forward layers in transformers with single headed attention and light-weight feed-forward layers , reducing total network parameters and operations . Importantly , unlike transformers , the DeLighT transformation decouples the depth and width from the input size , allowing us to allocate parameters more efficiently across blocks by using shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output . We demonstrate that DeLighT models achieve similar or better performance than transformer models with significantly fewer parameters and operations , on two common sequence modeling tasks , ( i ) machine translation and ( ii ) language modeling . On the low resource WMT ’ 16 En-Ro machine translation dataset , DeLighT attains transformer performance using 2.8× fewer parameters . On the high resource WMT ’ 14 En-Fr dataset , DeLighT delivers better performance ( +0.4 BLEU score ) with 1.8× fewer parameters than baseline transformers . Similarly , on language modeling , DeLighTmatches the performance of Transformer-XL ( Dai et al. , 2019 ) with 1.5× fewer parameters on the WikiText-103 dataset . Our source code is open-source and is available at : https : //github.com/ sacmehta/delight 2 RELATED WORK . Improving transformers : Several methods have been introduced to improve the transformer architecture . The first line of research addresses the challenge of computing self attention on long input sequences ( Child et al. , 2019 ; Kitaev et al. , 2020 ; Beltagy et al. , 2020 ) . These methods can be combined with our architecture . The second line of research focuses on explaining multi-head attention ( Raganato and Tiedemann , 2018 ; Brunner et al. , 2020 ) . 
They show that increasing the number of transformer heads can lead to redundant representations ( Voita et al. , 2019a ; Michel et al. , 2019 ) and using fixed attention heads with predefined patterns ( Raganato et al. , 2020 ) or synthetic attention matrices ( Tay et al. , 2020 ) improves performance . The third line of research focuses on improving transformers by learning better representations ( Wu et al. , 2019 ; 2020 ; So et al. , 2019 ) . These works aim to improve the expressiveness of transformers using different transformations – for example , using convolutions ( Wu et al. , 2019 ; Gehring et al. , 2017 ) , gated linear units ( Dauphin et al. , 2017 ) , or multi-branch feature extractors ( So et al. , 2019 ; Wu et al. , 2020 ) . Our work falls into this category . Unlike previous works , we show that it is possible to efficiently allocate parameters both at the block-level using the DeLighT transformation and across blocks using block-wise scaling . Model scaling : Model scaling is a standard method to improve the performance of sequence models ( Vaswani et al. , 2017 ; Raffel et al. , 2019 ; Lan et al. , 2020 ; Devlin et al. , 2019 ; Shoeybi et al. , 2019 ; Tan and Le , 2019 ; Brown et al. , 2020 ) . Model dimensions are increased in width-wise scaling ( Vaswani et al. , 2017 ; Devlin et al. , 2019 ) while more blocks ( e.g. , Transformer blocks ) are stacked in depth-wise scaling ( Shoeybi et al. , 2019 ; Brown et al. , 2020 ; Wang et al. , 2019 ) . In both cases ( and their combination ) , parameters inside each block of the network are the same , which may lead to a sub-optimal solution . To further improve the performance of sequence models , this paper introduces block-wise scaling that allows for variably-sized blocks and efficient allocation of parameters in the network . Our results show that ( 1 ) shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output deliver the best performance , and ( 2 ) models with block-wise scaling coupled with model scaling achieve better performance compared to model scaling alone . We note that convolutional neural networks ( CNNs ) also learn shallower and narrower representations near the input and deeper and wider representations near the output . Unlike CNNs ( e.g. , ResNet of He et al . 2016 ) that perform a fixed number of operations at each convolutional layer , the proposed block-wise scaling uses a variable number of operations in each layer and block . Improving sequence models : There is also significant recent work on other related methods for improving sequence models , including ( 1 ) improving accuracy using better token-level representations – for example , using BPE ( Sennrich et al. , 2016 ) , adaptive inputs ( Baevski and Auli , 2019 ) and outputs ( Grave et al. , 2017a ) , and DeFINE ( Mehta et al. , 2020 ) , and ( 2 ) improving efficiency – for example , using compression ( Chen et al. , 2018 ; Sun et al. , 2020 ) , pruning ( Han et al. , 2016 ; Voita et al. , 2019b ) , and distillation ( Hinton et al. , 2015 ; Sanh et al. , 2019 ) . The closest to our work is the DeFINE transformation , which also learns representations using an expand-reduce strategy . The key difference between the DeFINE transformation ( Figure 1c ) and the DeLighT transformation ( Figure 1d ) is that the DeLighT transformation more efficiently allocates parameters within expansion and reduction layers . 
Unlike DeFINE , which uses fewer groups in group linear transformations to learn wider representations , DeLighT transformation uses more groups to learn wider representations with fewer parameters . The DeLighT transformation achieves comparable performance to the DeFINE transformation but with significantly fewer parameters . 3 DELIGHT : DEEP AND LIGHT-WEIGHT TRANSFORMER . A standard transformer block ( Figure 1a ) comprises of multi-head attention that uses a query-keyvalue decomposition to model relationships between sequence tokens , and a feed forward network ( FFN ) to learn wider representations . Multi-head attention obtains query Q , key K , and value V by applying three projections to the input , each consisting of h linear layers ( or heads ) that map the dm-dimensional input into a dh-dimensional space , where dh = dm/h is the head dimension . The FFN consists of two linear layers , where the first expands the dimensions from dm to df and the learnable parameters ( Linear and DeLighT ) are shown in color . The shape of linear transformations indicate their operation ( expansion , reduction , etc. ) . ( c , d ) compares the DeFINE transformation ( Mehta et al. , 2020 ) with the DeLighT transformation . Compared to the DeFINE transformation , the DeLighT transformation uses group linear transformations ( GLTs ) with more groups to learn wider representations with fewer parameters . Different colors are used to show groups in GLTs . For simplicity , feature shuffling is not shown in ( d ) . second reduces the dimensions from df to dm . The depth of a transformer block is 4 , consisting of ( 1 ) three parallel branches for queries , keys , and values , ( 2 ) a fusion layer that combines the output of multiple heads , and ( 3 ) two sequential linear layers in the FFN . In general , transformer-based networks sequentially stacks transformer blocks to increase network capacity and depth . This paper extends the transformer architecture and introduces a deep and light-weight transformer , DeLighT . Our model uses a deep and light-weight expand-reduce transformation , DeLighT transformation ( Section 3.1 ) , that enables learning wider representations efficiently . It also enables replacing multi-head attention and feed forward network ( FFN ) layers with single-head attention and a light-weight FFN ( Section 3.2 ) . DeLighT transformation decouples attention dimensions from the depth and width , allowing us to learn representations efficiently using block-wise scaling instead of uniform stacking of transformer blocks ( Section 3.3 ) . 3.1 DELIGHT TRANSFORMATION . DeLighT transformation maps a dm dimensional input vector into a high dimensional space ( expansion ) and then reduces it down to a do dimensional output vector ( reduction ) using N layers of the group transformations of Mehta et al . ( 2018 ) , as shown in Figure 1d . During these expansion and reduction phases , DeLighT transformation uses group linear transformations ( GLTs ) because they learn local representations by deriving the output from a specific part of the input and are more efficient than linear transformations . To learn global representations , the DeLighT transformation shares information between different groups in the group linear transformation using feature shuffling , analogous to channel shuffling in convolutional networks ( Zhang et al. , 2018 ) . A standard approach to increase the expressivity and capacity of transformers is to increase the input dimensions , dm . 
However , increasing dm linearly also increases the number of operations in multihead attention ( O ( n2dm ) , where n is the sequence length ) in a standard transformer block ( Figure 1a ) . In contrast , to increase the expressivity and capacity of the DeLighT block , we increase the depth and width of its intermediate DeLighT transformations using expansion and reduction phases . This enables us to use smaller dimensions for computing attention , requiring fewer operations . Formally , the DeLighT transformation is controlled by five configuration parameters : ( 1 ) number of GLT layers N , ( 2 ) width multiplier wm , ( 3 ) input dimension dm , ( 4 ) output dimension do , and ( 5 ) maximum groups gmax in a GLT . In the expansion phase , the DeLighT transformation projects the dm-dimensional input to a high-dimensional space , dmax = wmdm , linearly using dN2 e layers . In the reduction phase , the DeLighT transformation projects the dmax-dimensional vector to a do-dimensional space using the remaining N − dN2 e GLT layers . Mathematically , we define the output Y at each GLT layer l as : Yl = { F ( X , Wl , bl , gl ) , l = 1 F ( H ( X , Yl−1 ) , Wl , bl , gl ) , Otherwise ( 1 ) where Wl = { Wl1 , · · · , Wlgl } and bl = { bl1 , · · · , blgl } are the learnable weights and biases of group linear transformation F with gl groups at the l-th layer . Briefly , the F function takes the input X ( orH ( X , Yl−1 ) ) and splits into gl non-overlapping groups such that X = { X1 , · · · , Xgl } . The function F then linearly transforms each Xi with weights Wli and bias bli to produce output Yli = XiW l i + b l i . The outputs of each group Y l i are then concatenated to produce the output Y l. The functionH first shuffles the output of each group in Yl−1 and then combines it with the input X using the input mixer connection of Mehta et al . ( 2020 ) to avoid vanishing gradient problems . Figure 2 visualizes the expansion phase in the DeLighT transformation with group linear transformation , feature shuffling , and the input mixer connection . The number of groups at the l-th GLT in DeLighT transformation are computed as : gl = { min ( 2l−1 , gmax ) , 1 ≤ l ≤ dN/2e gN−l , Otherwise ( 2 ) In our experiments , we use gmax = ddm32 e so that each group has at least 32 input elements .
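To make the expand-reduce structure concrete, below is a small PyTorch sketch of a single group linear transformation followed by feature shuffling, in the spirit of the function F and the shuffling step H described above; the module name, initialization, and shapes are illustrative rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class GroupLinear(nn.Module):
    """Split the input into g non-overlapping groups and apply an independent
    linear map to each group (a sketch of the GLT F used in Eq. (1))."""
    def __init__(self, d_in, d_out, groups):
        super().__init__()
        assert d_in % groups == 0 and d_out % groups == 0
        self.groups = groups
        self.weight = nn.Parameter(0.02 * torch.randn(groups, d_in // groups, d_out // groups))
        self.bias = nn.Parameter(torch.zeros(groups, d_out // groups))

    def forward(self, x):                              # x: (batch, d_in)
        x = x.view(x.shape[0], self.groups, -1)        # (batch, g, d_in / g)
        return torch.einsum('bgi,gio->bgo', x, self.weight) + self.bias

def feature_shuffle(y):
    """Interleave features across groups so information is shared between them,
    analogous to channel shuffling in convolutional networks."""
    b, g, c = y.shape
    return y.transpose(1, 2).reshape(b, g * c)

x = torch.randn(4, 128)
glt = GroupLinear(128, 256, groups=4)                  # g_l = min(2^(l-1), g_max) as in Eq. (2)
print(feature_shuffle(glt(x)).shape)                   # torch.Size([4, 256])
```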
This paper presents a variant of the Transformer in which low-dimensional matrix multiplications and single-head attention are used. Stacked group linear transformations (GLTs) are applied to the input of each layer to first expand and then reduce the dimensionality. The paper is well-written and easy to follow. Experiments demonstrate that the proposed architecture matches or improves on the performance of baseline Transformers with fewer parameters.
SP:90ffef024018f59b3bde23aa2e2a4677602d41e8
On the mapping between Hopfield networks and Restricted Boltzmann Machines
1 INTRODUCTION . Hopfield networks ( HNs ) ( Hopfield , 1982 ; Amit , 1989 ) are a classical neural network architecture that can store prescribed patterns as fixed-point attractors of a dynamical system . In their standard formulation with binary valued units , HNs can be regarded as spin glasses with pairwise interactions Jij that are fully determined by the patterns to be encoded . HNs have been extensively studied in the statistical mechanics literature ( e.g . ( Kanter & Sompolinsky , 1987 ; Amit et al. , 1985 ) ) , where they can be seen as an interpolation between the ferromagnetic Ising model ( p = 1 pattern ) and the Sherrington-Kirkpatrick spin glass model ( many random patterns ) ( Kirkpatrick & Sherrington , 1978 ; Barra & Guerra , 2008 ) . By encoding patterns as dynamical attractors which are robust to perturbations , HNs provide an elegant solution to pattern recognition and classification tasks . They are considered the prototypical attractor neural network , and are the historical precursor to modern recurrent neural networks . Concurrently , spin glasses have been used extensively in the historical machine learning literature where they comprise a sub-class of “ Boltzmann machines ” ( BMs ) ( Ackley et al. , 1985 ) . Given a collection of data samples drawn from a data distribution , one is generally interested in “ training ” a BM by tuning its weights Jij such that its equilibrium distribution can reproduce the data distribution as closely as possible ( Hinton , 2012 ) . The resulting optimization problem is dramatically simplified when the network has a two-layer structure where each layer has no self-interactions , so that there are only inter-layer connections ( Hinton , 2012 ) ( see Fig . 1 ) . This architecture is known as a Restricted Boltzmann Machine ( RBM ) , and the two layers are sometimes called the visible layer and the hidden layer . The visible layer characteristics ( dimension , type of units ) are determined by the training data , whereas the hidden layer can have binary or continuous units and the dimension is chosen somewhat arbitrarily . In addition to generative modelling , RBMs and their multi-layer extensions have been used for a variety of learning tasks , such as classification , feature extraction , and dimension reduction ( e.g . Salakhutdinov et al . ( 2007 ) ; Hinton & Salakhutdinov ( 2006 ) ) . There has been extensive interest in the relationship between HNs and RBMs , as both are built on the Ising model formalism and fulfill similar roles , with the aim of better understanding RBM behaviour and potentially improving performance . Various results in this area have been recently reviewed ( Marullo & Agliari , 2021 ) . In particular , an exact mapping between HNs and RBMs has been previously noted for the special case of uncorrelated ( orthogonal ) patterns ( Barra et al. , 2012 ) . Several related models have since been studied ( Agliari et al. , 2013 ; Mézard , 2017 ) , which partially relax the uncorrelated pattern constraint . However , the patterns observed in most real datasets exhibit significant correlations , precluding the use of these approaches . In this paper , we demonstrate exact correspondence between HNs and RBMs in the case of correlated pattern HNs . Specifically , we show that any HN with N binary units and p < N arbitrary ( i.e . non-orthogonal ) binary patterns encoded via the projection rule ( Kanter & Sompolinsky , 1987 ; Personnaz et al. 
, 1986 ) , can be transformed into an RBM with N binary and p gaussian variables . We then characterize when the reverse map from RBMs to HNs can be made . We consider a practical example using the mapping , and discuss the potential importance of this correspondence for the training and interpretability of RBMs . 2 RESULTS . We first introduce the classical solution to the problem of encodingN -dimensional binary { −1 , +1 } vectors { ξµ } pµ=1 , termed “ patterns ” , as global minima of a pairwise spin glass H ( s ) = − 12s TJs . This is often framed as a pattern retrieval problem , where the goal is to specify or learn Jij such that an energy-decreasing update rule for H ( s ) converges to the patterns ( i.e . they are stable fixed points ) . Consider the N × p matrix ξ with the p patterns as its columns . Then the classical prescription known as the projection rule ( or pseudo-inverse rule ) ( Kanter & Sompolinsky , 1987 ; Personnaz et al. , 1986 ) , J = ξ ( ξT ξ ) −1ξT , guarantees that the p patterns will be global minima of H ( s ) . This resulting spin model is commonly called a ( projection ) Hopfield network , and has the Hamiltonian H ( s ) = −1 2 sT ξ ( ξT ξ ) −1ξTs . ( 1 ) Note that ξT ξ invertibility is guaranteed as long as the patterns are linearly independent ( we therefore require p ≤ N ) . Also note that in the special ( rare ) case of orthogonal patterns ξµ · ξν = Nδµν ( also called “ uncorrelated ” ) , studied in the previous work ( Barra et al. , 2012 ) , one has ξT ξ = NI and so the pseudo-inverse interactions reduce to the well-known Hebbian form J = 1N ξξ T ( the properties of which are studied extensively in Amit et al . ( 1985 ) ) . Additional details on the projection HN Eq . ( 1 ) are provided in Appendix A . To make progress in analyzing Eq . ( 1 ) , we first consider a transformation of ξ which eliminates the inverse factor . 2.1 MAPPING A HOPFIELD NETWORK TO A RESTRICTED BOLTZMANN MACHINE . In order to obtain a more useful representation of the quadratic form Eq . ( 1 ) ( for our purposes ) , we utilize the QR-decomposition ( Schott & Stewart , 1999 ) of ξ to “ orthogonalize ” the patterns , ξ = QR , ( 2 ) with Q ∈ RN×p , R ∈ Rp×p . The columns of Q are the orthogonalized patterns , and form an orthonormal basis ( of non-binary vectors ) for the p-dimensional subspace spanned by the binary patterns . R is upper triangular , and if its diagonals are held positive then Q and R are both unique ( Schott & Stewart , 1999 ) . Note both the order and sign of the columns of ξ are irrelevant for HN pattern recall , so there are n = 2p · p ! possibleQ , R pairs . Fixing a pattern ordering , we can use the orthogonality ofQ to re-write the interaction matrix as J = ξ ( ξT ξ ) −1ξT = QR ( RTR ) −1RTQT = QQT ( 3 ) ( the last equality follows from ( RTR ) −1 = R−1 ( RT ) −1 ) . Eq . ( 3 ) resembles the simple Hebbian rule but with non-binary orthogonal patterns . Defining q ≡ QTs in analogy to the classical pattern overlap parameterm ≡ 1N ξ Ts ( Amit et al. , 1985 ) , we have H ( s ) = −1 2 sTQQTs = −1 2 q ( s ) · q ( s ) . ( 4 ) Using a Gaussian integral as in Amit et al . ( 1985 ) ; Barra et al . ( 2012 ) ; Mézard ( 2017 ) to transform ( exactly ) the partition function Z ≡ ∑ { s } e −βH ( s ) of Eq . ( 1 ) , we get Z = ∑ { s } e 1 2 ( βq ) T ( β−1I ) ( βq ) = ∑ { s } ∫ e− β 2 ∑ µ λ 2 µ+β ∑ µ λµ ∑ iQiµsi ∏ µ dλµ√ 2π/β . 
( 5 ) The second line can be seen as the partition function of an expanded Hamiltonian for the N ( binary ) original variables { si } and the p ( continuous ) auxiliary variables { λµ } , i.e . HRBM ( { si } , { λµ } ) = 1 2 ∑ µ λ2µ − ∑ µ ∑ i Qiµsiλµ . ( 6 ) Note that this is the Hamiltonian of a binary-continuous RBM with inter-layer weights Qiµ . The original HN is therefore equivalent to an RBM described by Eq . ( 6 ) ( depicted in Fig . 1 ) . As mentioned above , there are many RBMs which correspond to the same HN due to the combinatorics of choosing Q . In fact , instead of QR factorization one can use any decomposition which satisfies J = UUT , with orthogonal U ∈ RN×p ( see Appendix B ) , in which case U acts as the RBM weights . Also note the inclusion of an applied field term − ∑ i bisi in Eq . ( 1 ) trivially carries through the procedure , i.e . H̃RBM ( { si } , { λµ } ) = 12 ∑ µ λ 2 µ − ∑ i bisi − ∑ µ ∑ iQiµsiλµ . Instead of working with the joint form Eq . ( 6 ) , one could take a different direction from Eq . ( 5 ) and sum out the original variables { si } , i.e . Z = ∫ e− β 2 ∑ µ λ 2 µ2N ∏ i cosh ( β ∑ µ Qiµλµ ) ∏ µ dλµ√ 2π/β . ( 7 ) This continuous , p-dimensional representation is useful for numerical estimation of Z ( Section 3.1 ) . We may write Eq . ( 7 ) as Z = ∫ e−F0 ( λ ) dλµ , where F0 ( { λµ } ) = 1 2 ∑ µ λ2µ − 1 β ∑ i ln cosh ( β ∑ µ Qiµλµ ) . ( 8 ) Eq . ( 8 ) is an approximate Lyapunov function for the mean dynamics of { λµ } ; ∇λF0 describes the effective behaviour of the stochastic dynamics of the N binary variables { si } at temperature β−1 . 2.2 COMMENTS ON THE REVERSE MAPPING . With the mapping from HNs ( with correlated patterns ) to RBMs established , we now consider the reverse direction . Consider a binary-continuous RBM with inter-layer weights Wiµ which couple a visible layer of N binary variables { si } to a hidden layer of p continuous variables { λµ } , H ( s , λ ) = 1 2 ∑ µ λ2µ − ∑ i bisi − ∑ µ ∑ i Wiµsiλµ . ( 9 ) Here we use W instead of Q for the RBM weights to emphasize that the RBM is not necessarily an HN . First , following Mehta et al . ( 2019 ) , we transform the RBM to a BM with binary states by integrating out the hidden variables . The corresponding Hamiltonian for the visible units alone is ( see Appendix D.1 for details ) , H̃ ( s ) = − ∑ i bisi − 1 2 ∑ i ∑ j ∑ µ WiµWjµsisj , ( 10 ) a pairwise Ising model with a particular coupling structure Jij = ∑ µWiµWjµ , which in vector form is J = ∑ µ wµw T µ =WW T , ( 11 ) where { wµ } are the p columns ofW . In general , this Ising model Eq . ( 10 ) produced by integrating out the hidden variables need not have Hopfield structure ( discussed below ) . However , it automatically does ( as noted in Barra et al . ( 2012 ) ) , in the very special case whereWiµ ∈ { −1 , +1 } . In that case , the binary patterns are simply { wµ } , so that Eq . ( 11 ) represents a Hopfield network with the Hebbian prescription . This situation is likely rare and may only arise as a by-product of constrained training ; for a generically trained RBM the weights will not be binary . It is therefore interesting to clarify when and how real-valued RBM interactionsW can be associated with HNs . Approximate binary representation of W : In Section 2.1 , we orthogonalized the binary matrix ξ via the QR decomposition ξ = QR , where Q is an orthogonal ( but non-binary ) matrix , which allowed us to map a projection HN ( defined by its patterns ξ , Eq . ( 1 ) ) to an RBM ( defined by its inter-layer weightsQ , Eq . ( 6 ) ) . 
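The identity J = ξ(ξᵀξ)⁻¹ξᵀ = QQᵀ at the heart of the forward mapping is easy to check numerically; the sketch below uses random (hence generically correlated and linearly independent) binary patterns, an assumption made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 20, 5
xi = rng.choice([-1.0, 1.0], size=(N, p))     # p binary patterns as columns

# projection (pseudo-inverse) rule of Eq. (1)
J = xi @ np.linalg.inv(xi.T @ xi) @ xi.T

# orthogonalize the patterns via QR, Eq. (2): xi = Q R with Q of shape (N, p)
Q, _ = np.linalg.qr(xi)

# Eq. (3): the same couplings expressed through the orthogonalized patterns
print(np.allclose(J, Q @ Q.T))                # True

# Q then plays the role of the inter-layer weights of the binary-continuous RBM in Eq. (6):
# H_RBM(s, lambda) = 0.5 * sum_mu lambda_mu^2 - sum_{i,mu} Q[i, mu] * s[i] * lambda[mu]
```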
Here we consider the reverse map . Given a trained RBM with weights W ∈ RN×p , we look for an invertible transformation X ∈ Rp×p which binarizes W . We make the mild assumption that W is rank p. If we find such an X , then B =WX will be the Hopfield pattern matrix ( analogous to ξ ) , with Biµ ∈ { −1 , +1 } . This is a non-trivial problem , and an exact solution is not guaranteed . As a first step to study the problem , we relax it to that of finding a matrix X ∈ GLp ( R ) ( i.e . invertible , p × p , real ) which minimizes the binarization error argmin X∈GLp ( R ) ||WX − sgn ( WX ) ||F . ( 12 ) We denote the approximately binary transformation ofW via a particular solutionX by Bp =WX . ( 13 ) We also define the associated error matrixE ≡ Bp− sgn ( Bp ) . We stress thatBp is non-binary and approximatesB ≡ sgn ( Bp ) , the columns of which will be HN patterns under certain conditions on E. We provide an initial characterization and example in Appendix D .
This paper shows a relationship between the projection-rule weights of a Hopfield network (HN) and the interaction weights of a corresponding restricted Boltzmann machine (RBM). The mapping from HN to RBM follows from the observation that the partition function of the HN can be seen as the partition function of a binary-continuous (Bernoulli-Gaussian) RBM. The authors also comment on the reverse mapping from RBM to HN. The experiments show the advantages, in generation and classification, of training an RBM with weights initialised from the HN projection weights.
SP:c83ecc74eb885df5f29e5a7080a8c60d1ee0a3b0
One Reflection Suffice
Orthogonal weight matrices are used in many areas of deep learning . Much previous work attempt to alleviate the additional computational resources it requires to constrain weight matrices to be orthogonal . One popular approach utilizes many Householder reflections . The only practical drawback is that many reflections cause low GPU utilization . We mitigate this final drawback by proving that one reflection is sufficient , if the reflection is computed by an auxiliary neural network . 1 INTRODUCTION . Orthogonal matrices have shown several benefits in deep learning , with successful applications in Recurrent Neural Networks , Convolutional Neural Networks and Normalizing Flows . One popular approach can represent any d × d orthogonal matrix using d Householder reflections ( Mhammedi et al. , 2017 ) . The only practical drawback is low GPU utilization , which happens because the d reflections needs to be evaluated sequentially ( Mathiasen et al. , 2020 ) . Previous work often increases GPU utilization by using k d reflections ( Tomczak & Welling , 2016 ; Mhammedi et al. , 2017 ; Zhang et al. , 2018 ; Berg et al. , 2018 ) . Using fewer reflections limits the orthogonal transformations the reflections can represent , yielding a trade-off between representational power and computation time . This raises an intriguing question : can we circumvent the trade-off and attain full representational power without sacrificing computation time ? We answer this question with a surprising “ yes. ” The key idea is to use an auxiliary neural network to compute a different reflection for each input . In theory , we prove that one such “ auxiliary reflection ” can represent any number of normal reflections . In practice , we demonstrate that one auxiliary reflection attains similar validation error to models with d normal reflections , when training Fully Connected Neural Networks ( Figure 1 left ) , Recurrent Neural Networks ( Figure 1 center ) and convolutions in Normalizing Flows ( Figure 1 right ) . Notably , auxiliary reflections train between 2 and 6 times faster for Fully Connected Neural Networks with orthogonal weight matrices ( see Section 3 ) . 1.1 OUR RESULTS . The Householder reflection of x ∈ Rd around v ∈ Rd can be represented by a matrixH ( v ) ∈ Rd×d . H ( v ) x = ( I − 2 vv T ||v||2 ) x . An auxiliary reflection uses a Householder matrix H ( v ) with v = n ( x ) for a neural network n. f ( x ) = H ( n ( x ) ) x = ( I − 2n ( x ) n ( x ) T ||n ( x ) ||2 ) x . One auxiliary reflection can represent any composition of Householder reflections . We prove this claim even when we restrict the neural network n ( x ) to have a single linear layer n ( x ) = Wx for W ∈ Rd×d such that f ( x ) = H ( Wx ) x. Theorem 1 . For any k Householder reflections U = H ( v1 ) · · ·H ( vk ) there exists a neural network n ( x ) = Wx with W ∈ Rd×d such that f ( x ) = H ( Wx ) x = Ux for all x ∈ Rd\ { 0 } . Previous work ( Mhammedi et al. , 2017 ; Zhang et al. , 2018 ) often employ k d reflections and compute Ux as k sequential Householder reflectionsH ( v1 ) · · ·H ( vk ) ·xwith weights V = ( v1 · · · vk ) . It is the evaluation of these sequential Householder reflection that cause low GPU utilization ( Mathiasen et al. , 2020 ) , so lower values of k increase GPU utilization but decrease representational power . 
Theorem 1 states that it is sufficient to evaluate a single auxiliary reflection H ( Wx ) x instead of k reflections H ( v1 ) · · ·H ( vk ) · x , thereby gaining high GPU utilization while retaining the full representational power of any number of reflections . In practice , we demonstrate that d reflections can be substituted with a single auxiliary reflection without decreasing validation error , when training Fully Connected Neural Networks ( Section 3.1 ) , Recurrent Neural Networks ( Section 3.2 ) and Normalizing Flows ( Section 3.3 ) . While the use of auxiliary reflections is straightforward for Fully Connected Neural Networks and Recurrent Neural Networks , we needed additional ideas to support auxiliary reflections in Normalizing Flows . In particular , we developed further theory concerning the inverse and Jacobian of f ( x ) = H ( Wx ) x . Note that f is invertible if there exists a unique x given y = H ( Wx ) x and W . Theorem 2 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if W = WT and has eigenvalues which satisfy 3/2 · λmin ( W ) > λmax ( W ) . Finally , we present a matrix formula for the Jacobian of the auxiliary reflection f ( x ) = H ( Wx ) x . This matrix formula is used in our proof of Theorem 2 , but it also allows us simplify the Jacobian determinant ( Lemma 1 ) which is needed when training Normalizing Flows . Theorem 3 . The Jacobian of f ( x ) = H ( Wx ) x is : J = H ( Wx ) A− 2Wxx TW ||Wx||2 where A = I − 2x TWTx ||Wx||2 W. We prove Theorem 1 in Appendix A.1.1 while Theorems 2 and 3 are proved in Section 2 . 2 NORMALIZING FLOWS . 2.1 BACKGROUND . Let z ∼ N ( 0 , 1 ) d and f be an invertible neural network . Then f−1 ( z ) ∼ Pmodel defines a model distribution for which we can compute likelihood of x ∼ Pdata ( Dinh et al. , 2015 ) . log pmodel ( x ) = log pz ( f ( x ) ) + log ∣∣∣∣det ( ∂f ( x ) ∂x ) ∣∣∣∣ ( 1 ) This allows us to train invertible neural network as generative models by maximum likelihood . Previous work demonstrate how to construct invertible neural networks and efficiently compute the log jacobian determinant ( Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ; Ho et al. , 2019 ) . 2.2 INVERTIBILITY AND JACOBIAN DETERMINANT ( PROOF SKETCH ) . To use auxiliary reflections in Normalizing Flows we need invertibility . That is , for every y ∈ Rd there must exist a unique x ∈ Rd so f ( x ) = H ( Wx ) x = y.1 We find that f is invertible if its Jacobian determinant is non-zero for all x in Sd−1 = { x ∈ Rd | ‖x‖ = 1 } . Theorem 4 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if the Jacobian determinant of f is non-zero for all x ∈ Sd−1 and W is invertible . The Jacobian determinant of H ( Wx ) x takes the following form . Lemma 1 . The Jacobian determinant of f ( x ) = H ( Wx ) x is : −det ( A ) ( 1 + 2 vTA−1u ||u||2 ) where vT = xTW , u = Wx and A = I − 2x TWTx ||Wx||2 W. It is then sufficient that det ( A ) 6= 0 and 1 + 2vTA−1u/||u||2 6= 0 . We prove that this happens if W = WT with eigenvalues 3/2 ·λmin ( W ) > λmax ( W ) . This can be achieved with W = I+V V T if we guarantee σmax ( V V T ) < 1/2 by spectral normalization ( Miyato et al. , 2018 ) . Combining these results yields Theorem 2 . Theorem 2 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if W = WT and has eigenvalues which satisfy 3/2 · λmin ( W ) > λmax ( W ) . Computing the Inverse . In practice , we use Newtons method to compute x so H ( Wx ) x = y . 
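A minimal numpy sketch of the forward auxiliary reflection f(x) = H(Wx)x is given below, with W = I + VVᵀ chosen symmetric in the spirit of Theorem 2; the Newton-based inverse is omitted, and the construction of V (random, small scale) is an assumption made only for illustration.

```python
import numpy as np

def householder_apply(v, x):
    # H(v) x = x - 2 v (v^T x) / ||v||^2, applied without forming the d x d matrix
    return x - 2.0 * v * (v @ x) / (v @ v)

def auxiliary_reflection(W, x):
    # f(x) = H(Wx) x: the reflection direction is computed from the input itself
    return householder_apply(W @ x, x)

rng = np.random.default_rng(0)
d = 8
V = 0.1 * rng.standard_normal((d, d))
W = np.eye(d) + V @ V.T                     # symmetric, eigenvalues close to 1 (cf. Theorem 2)
x = rng.standard_normal(d)
y = auxiliary_reflection(W, x)

# H(Wx) is an orthogonal matrix for every x, so the reflection preserves norms
print(np.isclose(np.linalg.norm(y), np.linalg.norm(x)))   # True
```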
Figure 2 show reconstructions n−1 ( n ( x ) ) = x for an invertible neural network n with auxiliary reflections using Newtons method , see Appendix A.2.1 for details . 2.3 PROOFS . The goal of this section is to prove that f ( x ) = H ( Wx ) x is invertible . Our proof strategy has two parts . Section 2.3.1 first shows f is invertible if it has non-zero Jacobian determinant . Section 2.3.2 then present an expression for the Jacobian determinant , Lemma 1 , and prove the expression is non-zero if W = WT and 3/2 · λmin ( W ) > λmin ( W ) . 2.3.1 NON-ZERO JACOBIAN DETERMINANT IMPLIES INVERTIBILITY . In this section , we prove that f ( x ) = H ( Wx ) x is invertible on Rd if f has non-zero Jacobian determinant . To simplify matters , we first prove that invertibility on Sd−1 implies invertibility on Rd . Informally , invertibility on Sd−1 is sufficient because H ( Wx ) is scale invariant , i.e. , H ( c ·Wx ) = H ( Wx ) for all c 6= 0 . This is formalized by Lemma 2 . Lemma 2 . If f ( x ) = H ( Wx ) x is invertible on Sd−1 it is also invertible on Rd\ { 0 } . Proof . Assume that f ( x ) is invertible on Sd−1 . Pick any y′ ∈ Rd such that ||y′|| = c for any c > 0 . Our goal is to compute x′ such that H ( Wx′ ) x′ = y′ . By normalizing , we see y′/‖y′‖ ∈ Sd−1 . We can then use the inverse f−1 on y′/‖y′‖ to find x such that H ( Wx ) x = y′/‖y‖ . The result is then x′ = x‖y‖ since H ( Wx′ ) x′ = H ( Wx ) x||y|| = y due to scale invariance of H ( Wx ) . 1Note that we do not know H ( Wx ) so we can not trivially compute x = H ( Wx ) −1y = H ( Wx ) y . The main theorem we use to prove invertibiliy on Sd−1 is a variant of Hadamards global function inverse theorem from ( Krantz & Parks , 2012 ) . On a high-level , Hadamard ’ s theorem says that a function is invertible if it has non-zero Jacobian determinant and satisfies a few additional conditions . It turns out that these additional conditions are meet by any continuously differentiable function f ( x ) when ( in the notation of Theorem 5 ) M1 = M2 = Sd−1 . Theorem 5 . ( Krantz & Parks , 2012 , 6.2.8 ) Let M1 and M2 be smooth , connected N -dimensional manifolds and let f : M1 → M2 be continuously differentiable . If ( 1 ) f is proper , ( 2 ) the Jacobian of f is non-zero , and ( 3 ) M2 is simple connected , then f is invertible . For M1 = M2 = Sd−1 the additional conditions are met if f is continuously differentiable . Corollary 1 . Let f : Sd−1 → Sd−1 with d ≥ 2 be continuously differentiable with non-zero Jacobian determinant , then f is invertible . Proof . Note that Sd−1 is smooth and simply connected if d ≥ 2 ( Lee , 2013 ) . Continuously functions on Sd−1 are proper . We conclude f is invertible on Sd−1 by Theorem 5 . We now show that f ( x ) = H ( Wx ) x is continuously differentiable on Sd−1 . Lemma 3 . The function f ( x ) = H ( Wx ) x is continuously differentiable on Sd−1 ifW is invertible . Proof . Compositions of continuously differentiable functions are continuously differentiable by the chain rule . All the functions used to construct H ( Wx ) x are continuously differentiable , except the division . However , the only case where division is not continously differentiable is when ||Wx|| = 0 . Since W is invertible , ||Wx|| = 0 iff x = 0 . But 0 /∈ Sd−1 and we conclude f is continuously differentiable on Sd−1 . Theorem 4 . Let f ( x ) = H ( Wx ) x with f ( 0 ) : = 0 , then f is invertible on Rd with d ≥ 2 if the Jacobian determinant of f is non-zero for all x ∈ Sd−1 and W is invertible . Proof . 
By Lemma 3 , we see f is continuously differentiable since W is invertible , which by Corollary 1 means f is invertible on Sd−1 if f has non-zero Jacobian determinant on Sd−1 . By Lemma 2 , we get that f is invertible on Rd if it has non-zero Jacobian on Sd−1 .
The authors present a way to learn the action of an arbitrary orthogonal matrix on a vector via a map from $\mathbb{R}^{n\times n}$ onto $\operatorname{O}(n)$. They show that the map is surjective, and give conditions under which they can invert this action. They then compare against previously proposed schemes on one task and show the performance of their models on two others.
SP:3d705a1b70254d2b9d05277efff8ac08b0539086
PCPs: Patient Cardiac Prototypes
1 INTRODUCTION . Modern medical research is arguably anchored around the “ gold standard ” of evidence provided by randomized control trials ( RCTs ) ( Cartwright , 2007 ) . However , RCT-derived conclusions are population-based and fail to capture nuances at the individual patient level ( Akobeng , 2005 ) . This is primarily due to the complex mosaic that characterizes a patient from demographics , to physiological state , and treatment outcomes . Similarly , despite the success of deep learning algorithms in automating clinical diagnoses ( Galloway et al. , 2019 ; Attia et al. , 2019a ; b ; Ko et al. , 2020 ) , network-generated predictions remain population-based and difficult to interpret . Such properties are a consequence of a network ’ s failure to incorporate patient-specific structure during training or inference . As a result , physicians are reluctant to integrate such systems into their clinical workflow . In contrast to such reluctance , personalized medicine , the ability to deliver the right treatment to the right patient at the right time , is increasingly viewed as a critical component of medical diagnosis ( Hamburg & Collins , 2010 ) . The medical diagnosis of cardiac signals such as the electrocardiogram ( ECG ) is of utmost importance in a clinical setting ( Strouse et al. , 1939 ) . For example , such signals , which convey information about potential abnormalities in a patent ’ s heart , also known as cardiac arrhythmias , are used to guide medical treatment both within and beyond the cardiovascular department ( Carter , 1950 ) . In this paper , we conceptually borrow insight from the field of personalized medicine in order to learn patient representations which allow for a high level of network interpretability . Such representations have several potential clinical applications . First , they allow clinicians to quantify the similarity of patients . By doing so , network-generated predictions for a pair of patients can be traced back to this similarity , and in turn , their corresponding ECG recordings . Allowing for this inspection of ECG recordings aligns well with the existing clinical workflow . An additional application of patient similarity is the exploration of previously unidentified patient relationships , those which may lead to the discovery of novel patient sub-cohorts . Such discoveries can lend insight into particular diseases and appropriate medical treatments . In contrast to existing patient representation learning methods ( Zhu et al. , 2016 ; Suo et al. , 2017 ) , we concurrently optimize for a predictive task ( cardiac arrhythmia classification ) , leverage patient similarity , and design a system specifically for 12-lead ECG signals . Contributions . Our contributions are the following : 1 . Patient cardiac prototypes ( PCPs ) - we learn representations that efficiently summarize the cardiac state of a patient in an end-to-end manner via contrastive learning . 2 . Patient similarity quantification - we show that , by measuring the Euclidean distance between PCPs and representations , we can identify similar patients across different datasets . 3 . Dataset distillation - we show that PCPs can be used to train a network , in lieu of the original dataset , and maintain strong generalization performance . 2 RELATED WORK . Contrastive learning is a self-supervised method that encourages representations of instances with commonalities to be similar to one another . This is performed for each instance and its perturbed counterpart ( Oord et al. 
, 2018 ; Chen et al. , 2020a ; b ; Grill et al. , 2020 ) and for different visual modalities ( views ) of the same instance ( Tian et al. , 2019 ) . Such approaches are overly-reliant on the choice of perturbations and necessitate a large number of comparisons . Instead , Caron et al . ( 2020 ) propose to learn cluster prototypes . Most similar to our work is that of Cheng et al . ( 2020 ) and CLOCS ( Kiyasseh et al. , 2020 ) which both show the benefit of encouraging patient-specific representations to be similar to one another . Although DROPS ( Anonymous , 2021 ) leverages contrastive learning , it does so at the patient-attribute level . In contrast to existing methods , we learn patient-specific representations , PCPs , in an end-to-end manner Meta-learning designs learning paradigms that allow for the fast adaptation of networks . Prototypical Networks ( Snell et al. , 2017 ) average representations to obtain class-specific prototypes . During inference , the similarity of representations to these prototypes determines the classification . Relational Networks ( Sung et al. , 2018 ) build on this idea by learning the similarity of representations to prototypes through a parametric function . Gidaris & Komodakis ( 2018 ) and Qiao et al . ( 2018 ) exploit hypernetworks ( Ha et al. , 2016 ) and propose to generate the parameters of the final linear layer of a network for few-shot learning on visual tasks . In contrast , during inference only , we compute the cosine similarity between representations and PCPs and use the latter as the input to a hypernetwork . Patient similarity aims at discovering relationships between patient data ( Sharafoddini et al. , 2017 ) . To quantify these relationships , Pai & Bader ( 2018 ) and ( Pai et al. , 2019 ) propose Patient Similarity Networks for cancer survival classification . Exploiting electronic health record data , Zhu et al . ( 2016 ) use Word2Vec to learn patient representations , and Suo et al . ( 2017 ) propose to exploit patient similarity to guide the re-training of models , an approach which is computationally expensive . Instead , our work naturally learns PCPs as efficient descriptors of the cardiac state of a patient . 3 METHODS . 3.1 LEARNING PATIENT CARDIAC PROTOTYPES VIA CONTRASTIVE LEARNING . We assume the presence of a dataset , D = { xi , yi } Ni=1 , comprising N ECG recordings , x , and cardiac arrhythmia labels , y , for a total of Ptot patients . Typically , multiple recordings are associated with a single patient , p. This could be due to multiple recordings within the same hospital visit or multiple visits to a hospital . Therefore , each patient is associated with N/Ptot recordings . We learn a feature extractor fθ : x ∈ RD −→ h ∈ RE , parameterized by θ , that maps a D-dimensional recording , x , to an E-dimensional representation , h. In the quest to learn patient-specific representations , we associate each patient , p , out of a total of P patients in the training set with a unique and learnable embedding , v ∈ RE , in a set of embeddings , V , where |V | = P N . Such embeddings are designed to be efficient descriptors of the cardiac state of a patient , and we thus refer to them as patient cardiac prototypes or PCPs . We propose to learn PCPs in an end-to-end manner via contrastive learning . 
More specifically , given an instance , xi , that belongs to a particular patient , k , we encourage its representation , hi = fθ ( xi ) , to be similar to the same patient ’ s PCP , vk , and dissimilar to the remaining PCPs , vj , j 6= k. We quantify this similarity , s ( hi , vk ) , by using the cosine similarity with a temperature parameter , τ . The intuition is that each PCP , in being attracted to a diverse set of representations that belong to the same patient , should become invariant to insidious intra-patient differences . For a mini-batch of size , B , the contrastive loss is as follows . Lcontrastive = − B∑ i log [ es ( hi , vk ) ∑P j e s ( hi , vj ) ] ( 1 ) s ( hi , vj ) = fθ ( xi ) · vj ‖fθ ( xi ) ‖‖vj‖ · 1 τ ( 2 ) 3.2 GENERATING PATIENT-SPECIFIC PARAMETERS VIA HYPERNETWORKS . Network parameters are typically updated during training and fixed during inference . This allows the parameters to exploit population-based information in order to learn high-level features useful for solving the task at hand . Such an approach , however , means that all instances are exposed to the same set of parameters during inference , regardless of instance-specific information . Such information can be related to any meta-label including , but not limited to , patient ID , geographical location , and even temporal period . As an exemplar , and motivated by the desire to generate patient-specific diagnoses , we focus on patient-specific information . We are essentially converting a traditional classification task to one that is conditioned on patient-specific information . To perform such conditioning , we propose to exploit both PCPs and hypernetworks , as explained next . We assume the presence of a hypernetwork , gφ : h ∈ RE −→ ω ∈ RE×C , parameterized by φ , that maps an E-dimensional representation , h , to a matrix of classification parameters , ω , where C is the number of class labels . During training , we feed a representation , hi , to the hypernetwork and generate instance-specific parameters , ωi ( see Fig . 1 left ) . During inference , however , we retrieve , and feed into the hypernetwork , the most similar PCP , vk , to the current representation , hi , ( based on similarity metric , s ) . We chose this strategy after having experimented with several of them ( see Sec . 5.2 ) . It is worthwhile to note that although this approach bears some resemblance to clustering , it is distinct from it . In a clustering scenario , we would have assigned labels to instances based on their proximity to PCPs . In contrast , we are leveraging this proximity to determine the input of a hypernetwork ( see Fig . 1 right ) . ωi = gφ ( hi ) for traininggφ ( vk ) for inference , vk = arg max vj s ( hi , vj ) ( 3 ) By performing this retrieval , we exploit the similarity between patients in the training and inference set . As a result , the hypernetwork generates patient-specific parameters that parameterize the linear classifier , pω : h ∈ RE −→ y ∈ RC , which maps a representation , h , to a posterior class distribution , y . We train the entire network in an end-to-end manner using a combined contrastive and supervised loss . Lsupervised = − B∑ i log pωi ( yi = c|hi ) ( 4 ) Lcombined = Lcontrastive + Lsupervised ( 5 ) 4 EXPERIMENTAL DESIGN . 4.1 DATASETS . We conduct experiments using PyTorch ( Paszke et al. , 2019 ) on three large-scale ECG datasets that contain a significant number of patients . 
PhysioNet 2020 ECG consists of 12-lead ECG recordings from 6,877 patients alongside labels corresponding to 9 different classes of cardiac arrhythmia . Each recording can be associated with multiple labels . Chapman ECG ( Zheng et al. , 2020 ) consists of 12-lead ECG recordings from 10,646 patients alongside labels corresponding to 11 different classes of cardiac arrhythmia . As is suggested by Zheng et al . ( 2020 ) , we group these labels into 4 major classes . PTB-XL ECG ( Wagner et al. , 2020 ) consists of 12-lead ECG recordings from 18,885 patients alongside 71 different types of annotations provided by two cardiologists . We follow the training and evaluation protocol presented by Strodthoff et al . ( 2020 ) where we leverage the 5 diagnostic class labels . We alter the original setup to only consider ECG segments with one label assigned to them and convert the task into a binary classification problem . Further details can be found in Appendix A.1 . Unless otherwise mentioned , datasets were split into training , validation , and test sets according to patient ID using a 60 , 20 , 20 configuration . In other words , patients appeared in only one of the sets . Further details about the dataset splits can be found in Appendix A.2 .
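Returning to the training objective of Section 3.1, the following is a minimal PyTorch sketch of the patient-level contrastive term in Eqs. (1)-(2); it averages over the mini-batch rather than summing, and all sizes and names are illustrative.

```python
import torch
import torch.nn.functional as F

def pcp_contrastive_loss(h, patient_ids, prototypes, tau=0.1):
    # h           : (B, E) representations f_theta(x_i)
    # patient_ids : (B,)   index k of the patient each recording belongs to
    # prototypes  : (P, E) learnable PCPs, one per training patient
    sims = F.normalize(h, dim=1) @ F.normalize(prototypes, dim=1).T   # cosine similarities
    logits = sims / tau                                               # Eq. (2)
    return F.cross_entropy(logits, patient_ids)                       # -log softmax at own PCP, Eq. (1)

B, E, P = 16, 32, 100
h = torch.randn(B, E)
prototypes = torch.randn(P, E, requires_grad=True)
ids = torch.randint(0, P, (B,))
loss = pcp_contrastive_loss(h, ids, prototypes)
loss.backward()        # gradients flow into the PCPs (and, in the full model, the encoder)
print(loss.item())
```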
This paper proposes to learn patient-specific representations from patient physiological signals. The authors design a PCP representation for each patient, which is learned to agree with representations of signals from the same patient and disagree with those of the remaining patients. In the supervised part, the classifier is generated from patient-specific parameters by meta-learning. The model was evaluated on three large ECG datasets: PhysioNet 2020 ECG, Chapman ECG, and PTB-XL ECG.
SP:0cb862cf3806c4f04d2d30f200c25841a1cb52a8
Activation-level uncertainty in deep neural networks
1 INTRODUCTION . Deep Neural Networks ( DNNs ) have achieved state-of-the-art performance in many different tasks , such as speech recognition ( Hinton et al. , 2012 ) , natural language processing ( Mikolov et al. , 2013 ) or computer vision ( Krizhevsky et al. , 2012 ) . In spite of their predictive power , DNNs are limited in terms of uncertainty estimation . This has been a classical concern in the field ( MacKay , 1992 ; Hinton & Van Camp , 1993 ; Barber & Bishop , 1998 ) , which has attracted a lot of attention in the last years ( Lakshminarayanan et al. , 2017 ; Guo et al. , 2017 ; Sun et al. , 2019 ; Wenzel et al. , 2020 ) . Indeed , this ability to “ know what is not known ” is essential for critical applications such as medical diagnosis ( Esteva et al. , 2017 ; Mobiny et al. , 2019 ) or autonomous driving ( Kendall & Gal , 2017 ; Gal , 2016 ) . Bayesian Neural Networks ( BNNs ) address this problem through a Bayesian treatment of the network weights1 ( MacKay , 1992 ; Neal , 1995 ) . This will be refered to as weight-space stochasticity . However , dealing with uncertainty in weight space is challenging , since it contains many symmetries and is highly dimensional ( Wenzel et al. , 2020 ; Sun et al. , 2019 ; Snoek et al. , 2019 ; Fort et al. , 2019 ) . Here we focus on two specific limitations . First , it has been recently shown that BNNs with well-established inference methods such as Bayes by Backprop ( BBP ) ( Blundell et al. , 2015 ) and MC-Dropout ( Gal & Ghahramani , 2016 ) underestimate the predictive uncertainty for instances located in-between two clusters of training points ( Foong et al. , 2020 ; 2019 ; Yao et al. , 2019 ) . Second , the weight-space prior does not allow BNNs to guide extrapolation to out-of-distribution ( OOD ) data ( Sun et al. , 2019 ; Nguyen et al. , 2015 ; Ren et al. , 2019 ) . Both aspects are illustrated graphically in Figure 3 , more details in Section 3.1 . ∗Work developed mostly while visiting Cambridge University , UK . 1The bias term will be absorbed within the weights throughout the work . As an alternative to standard BNNs , Functional Bayesian Neural Nets ( fBNN ) specify the prior and perform inference directly in function space ( Sun et al. , 2019 ) . This provides a mechanism to guide the extrapolation in OOD data , e.g . predictions can be encouraged to revert to the prior in regions of no observed data . However , the posterior stochastic process is still defined by a factorized Gaussian on the network weights ( i.e . as in BBP ) , see ( Sun et al. , 2019 , Sect . 3.1 ) . We will show that this makes fBNN inherit the problem of underestimating the predictive uncertainty for in-between data . In this work , we adopt a different approach by moving stochasticity from the weights to the activation function , see Figure 1 . This will be referred to as auNN ( activation-level uncertainty for Neural Networks ) . The activation functions are modelled with ( one-dimensional ) GP priors , for which a triangular kernel inspired by the ReLu non-linearity ( Nair & Hinton , 2010 ; Glorot et al. , 2011 ) is used . Since non-linearities are typically simple functions ( e.g . ReLu , sigmoid , tanh ) , our GPs are sparsified with few inducing points . The network weights are deterministic parameters which are estimated to maximize the marginal likelihood of the model . The motivation behind auNN is to avoid inference in the complex space of weights . 
We hypothesise that it could be enough to introduce stochasticity in the activation functions that follow the linear projections to provide sensible uncertainty estimations . We show that auNN obtains well-calibrated estimations for in-between data , and its prior allows to guide the extrapolation to OOD data by reverting to the empirical mean . This will be visualized in a simple 1D example ( Figure 3 and Table 1 ) . Moreover , auNN obtains competitive performance in standard benchmarks , is scalable ( datasets of up to ten millions training points are used ) , and can be readily used for classification . The use of GPs for the activations establishes an interesting connection with deep GPs ( DGPs ) ( Damianou & Lawrence , 2013 ; Salimbeni & Deisenroth , 2017 ) . The main difference is the linear projection before the GP , recall Figure 1 ( c-d ) . This allows auNN units to model simpler mappings between layers , which are defined along one direction of the input space , similarly to neural networks . However , DGP units model more complex mappings defined on the whole input space , see also Figure 2a . We will show that auNN units require fewer inducing points and are better suited for deep architectures , achieving superior performance . Also , a thorough discussion on additional related work will be provided in Section 4 . In summary , the main contributions of this paper are : ( 1 ) a new approach to model uncertainty in DNNs , based on deterministic weights and simple stochastic non-linearities ( in principle , not necessarily modelled by GPs ) ; ( 2 ) the specific use of non-parametric GPs as a prior , including the triangular kernel inspired by the ReLu ; ( 3 ) auNN addresses a well-known limitation of BNNs and fBNNs ( uncertainty underestimation for in-between data ) , can guide the extrapolation to OOD data by reverting to the empirical mean , and is competitive in standard prediction tasks ; ( 4 ) auNN units require fewer inducing points and are better suited for deep architectures than DGP ones , achieving superior performance . 2 PROBABILISTIC MODEL AND INFERENCE . Model specification . We focus on a supervised task ( e.g . regression or classification ) with training data2 { xn , : , yn , : } Nn=1 . The graphical model in Figure 2b will be useful throughout this section . We 2The output is represented as a vector since all the derivations apply for the multi-output case . assume a model of L layers , each one with Dl units as in Figure 1c . Each activation is modelled with a ( 1D ) GP prior , i.e . f ld ( a l d ) ∼ GP ( µld , kld ) , with µld : R → R and kld : R × R → R. The GP hyperparameters θld will be omitted for clarity ( for the kernels used here , θ l d includes the amplitude and the lengthscale ) . Assuming independence between units , each layer depends on the previous one as : p ( Fl|Fl−1 , Wl ) = p ( Fl|Al ) = ∏Dl d=1 p ( f l d|ald ) , ( 1 ) where Fl is the N ×Dl matrix of outputs of the l-th layer for N inputs , Wl is the Dl−1 ×Dl matrix of weights in that layer , and Al is the N ×Dl matrix of pre-activations , i.e . Al = Fl−1 ·Wl . As usual , the columns and rows of Fl are denoted as f ld and f l n , : , respectively ( and analogously for the other matrices ) . Since the activation is defined by a GP , we have p ( f ld|ald ) = N ( f ld|µld , Kld ) , with µld ( resp . Kld ) the result of evaluating µ l d ( resp . k l d ) on a l d ( that is , µ l d is a N -dimensional vector and K l d is a N ×N matrix ) . 
To fully specify the model , the output Y is defined from the last layer with a distribution that factorizes across data points , i.e . p ( Y|FL ) = ∏N n=1 p ( yn , :|fLn , : ) . This formulation resembles that of DGPs ( Damianou & Lawrence , 2013 ; Salimbeni & Deisenroth , 2017 ) . The main difference is that we model Fl|Fl−1 through Dl 1D GPs evaluated on the pre-activations Al ( i.e . the projections of Fl−1 through Wl ) , whereas DGPs use Dl GPs of dimension Dl−1 evaluated directly on Fl−1 , recall Figure 1 ( c-d ) . Variational Inference . Inference in the proposed model is intractable . To address this , we follow standard sparse variational GP approaches ( Titsias , 2009 ; Hensman et al. , 2013 ; 2015 ) , similarly to the Doubly Stochastic Variational Inference ( DSVI ) for DGPs ( Salimbeni & Deisenroth , 2017 ) . Specifically , in each unit of each layer we introduce M l inducing values uld , which are the result of evaluating the GP on the one-dimensional inducing points zld . We naturally write U l and Zl for the corresponding M l × Dl matrices associated to the l-th layer , respectively . Following eq . ( 1 ) , the augmented model for one layer is p ( Fl , Ul|Fl−1 , Wl , Zl ) = p ( Fl|Ul , Al , Zl ) p ( Ul|Zl ) = ∏Dl d=1 p ( f l d|uld , ald , zld ) p ( uld|zld ) . ( 2 ) Variational inference ( VI ) involves the approximation of the true posterior p ( { Fl , Ul } l|Y ) . Following ( Hensman et al. , 2013 ; Salimbeni & Deisenroth , 2017 ) , we propose a posterior given by p ( F|U ) and a parametric Gaussian on U : q ( { Fl , Ul } l ) = ∏L l=1 p ( F l|Ul , Al , Zl ) q ( Ul ) = ∏L l=1 ∏Dl d=1 p ( f l d|uld , ald , zld ) q ( uld ) , ( 3 ) where q ( uld ) = N ( uld|mld , Sld ) , with mld ∈ RM l and Sld ∈ RM l×M l variational parameters to be estimated . Minimizing the KL divergence between q ( { Fl , Ul } l ) and the true posterior is equivalent to maximizing the following evidence lower bound ( ELBO ) : log p ( Y| { Wl , Zl } l ) ≥ ELBO = N∑ n=1 Eq ( fLn , : ) [ log p ( yn , :|fLn , : ) ] − L∑ l=1 Dl∑ d=1 KL ( q ( uld ) ||p ( uld ) ) . ( 4 ) In the ELBO , the KL term can be computed in closed-form , as both q ( uld ) and p ( u l d ) are Gaussians . The log likelihood term can be approximated by sampling from the marginal posterior q ( fLn , : ) , which can be done efficiently through univariate Gaussians as in ( Salimbeni & Deisenroth , 2017 ) . Specifically , Ul can be analytically marginalized in eq . ( 3 ) , which yields q ( { Fl } l ) = ∏ l q ( F l|Fl−1 , Wl ) = ∏ l , dN ( f ld|µ̃ l d , Σ̃ l d ) , with : [ µ̃ld ] i = µ l d ( a l id ) +α l d ( a l id ) ᵀ ( mld − µld ( zld ) ) , ( 5 ) [ Σ̃ l d ] ij = k l d ( a l id , a l jd ) −αld ( alid ) ᵀ ( kld ( zld ) − Sld ) αld ( aljd ) , ( 6 ) where αld ( x ) = k l d ( x , z l d ) [ k l d ( z l d ) ] −1 and aln , : = W lf l−1n , : . Importantly , the marginal posterior q ( f l n , : ) is a Gaussian that depends only on aln , : , which in turn only depends on q ( f l−1 n , : ) . Therefore , sampling from f ln , : is straightforward using the reparametrization trick ( Kingma & Welling , 2013 ) : f lnd = [ µ̃ l d ] n + ε · [ Σ̃ l d ] 1/2 nn , with ε ∼ N ( 0 , 1 ) , and f0n , : = xn , : . ( 7 ) Training consists in maximizing the ELBO , eq . ( 4 ) , w.r.t . variational parameters { mld , Sld } , inducing points { zld } , and model parameters ( i.e . weights { wld } and kernel parameters { θ l d } ) . This can be done in batches , allowing for scalability to very large datasets . 
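To make the sampling scheme concrete, the following sketch (not the authors' code; it assumes a zero GP mean function, an RBF kernel as a stand-in, and illustrative array names) draws one reparameterized sample of a single auNN unit's activation following eqs. (5)-(7):

```python
import numpy as np

def rbf_kernel(a, b, amplitude=1.0, lengthscale=1.0):
    # Isotropic 1D RBF kernel; a: (n,), b: (m,) -> (n, m) Gram matrix.
    d = a[:, None] - b[None, :]
    return amplitude * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_unit_activation(a, z, m, S, kernel=rbf_kernel, rng=None):
    """Draw one sample of f_d^l at pre-activations a (eqs. 5-7), assuming a
    zero GP mean function. a: (N,) pre-activations, z: (M,) inducing points,
    m: (M,) variational mean, S: (M, M) variational covariance."""
    rng = np.random.default_rng() if rng is None else rng
    Kzz = kernel(z, z) + 1e-6 * np.eye(len(z))          # jitter for stability
    Kaz = kernel(a, z)
    alpha = Kaz @ np.linalg.inv(Kzz)                    # rows are alpha(a_i)^T
    mu = alpha @ m                                      # eq. (5) with zero prior mean
    # Diagonal of eq. (6): k(a_i, a_i) - alpha(a_i)^T (Kzz - S) alpha(a_i)
    var = kernel(a, a).diagonal() - np.einsum('nm,mk,nk->n', alpha, Kzz - S, alpha)
    eps = rng.standard_normal(len(a))                   # reparametrization, eq. (7)
    return mu + eps * np.sqrt(np.clip(var, 1e-12, None))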
The complexity to evaluate the ELBO is $\mathcal{O}(NM^2(D_1 + \cdots + D_L))$, the same as DGPs with DSVI (Salimbeni & Deisenroth, 2017).

Predictions. Given a new $\mathbf{x}_{*,:}$, we want to compute $p(\mathbf{f}^L_{*,:}\,|\,\mathbf{X},\mathbf{Y}) \approx \mathbb{E}_{q(\{\mathbf{U}^l\})}\big[p(\mathbf{f}^L_{*,:}\,|\,\{\mathbf{U}^l\})\big]$. As in (Salimbeni & Deisenroth, 2017), this can be approximated by sampling $S$ values up to the $(L-1)$-th layer with the same eq. (7), but starting with $\mathbf{x}_{*,:}$. Then, $p(\mathbf{f}^L_{*,:}\,|\,\mathbf{X},\mathbf{Y})$ is given by the mixture of the $S$ Gaussian distributions obtained from eqs. (5)-(6).

Triangular kernel. One of the most popular kernels in GPs is the RBF (Williams & Rasmussen, 2006), which produces very smooth functions. However, the ReLU non-linearity led to a general boost in performance in DNNs (Nair & Hinton, 2010; Glorot et al., 2011), and we aim to model similar activations. Therefore, we introduce the use of the triangular (TRI) kernel. Just like the RBF, TRI is an isotropic kernel, i.e. it depends on the distance between the inputs, $k(x, y) = \gamma \cdot g(|x - y|/\ell)$, with $\gamma$ and $\ell$ the amplitude and lengthscale. For RBF, $g(t) = e^{-t^2/2}$. For TRI, $g(t) = \max(1 - t, 0)$. This is a valid kernel (Williams & Rasmussen, 2006, Section 4.2.1). Similarly to the ReLU, the functions modelled by TRI are piecewise linear, see Figure 6a in the main text and Figure 8 in Appendix C.

Comparison with DGP. The difference between auNN and DGP units is graphically illustrated in Figure 2a. Whereas DGP mappings from one layer to the next are complex functions defined on $D_{l-1}$ dimensions ($D_{l-1} = 2$ in the figure), auNN mappings are defined just along one direction via the weight projection. This is closer in spirit to NNs, whose mappings are also simpler and better suited for feature extraction and learning more abstract concepts. Moreover, since the GP is defined on a 1D space, auNN requires fewer inducing points than DGP (which, intuitively, can be interpreted as inducing (hyper)planes in the $D_{l-1}$-dimensional space before the projection).
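As a small illustration, the triangular kernel above can be written in a few lines (the function and argument names are ours, not taken from the paper's code):

```python
import numpy as np

def tri_kernel(x, y, amplitude=1.0, lengthscale=1.0):
    # TRI kernel: k(x, y) = gamma * max(1 - |x - y| / l, 0), evaluated pairwise.
    t = np.abs(x[:, None] - y[None, :]) / lengthscale
    return amplitude * np.maximum(1.0 - t, 0.0)

# Example Gram matrix on a 1D grid (a small jitter keeps Cholesky stable).
grid = np.linspace(-3.0, 3.0, 200)
K = tri_kernel(grid, grid) + 1e-8 * np.eye(len(grid))
```

Because $k(\cdot, z)$ is itself piecewise linear in its first argument, posterior mean functions built from this kernel are piecewise linear, matching the ReLU-like behaviour described above.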
Putting the uncertainty on the weights (e.g., Bayes by BP), on the activations (e.g., fast dropout, variants of natural-parameter networks [2,3], or Bayesian dark knowledge [4]), or on both [1] has been investigated before. The idea of moving the uncertainty from the weights to the activation function is not new. One could argue that VAE-style parameterization or the local reparameterization trick is also a kind of method that puts uncertainty in the activation function. In fact, the proposed method does involve the reparameterization trick in each layer, as shown in Eq. 7.
SP:b7a45906d972644e9d0e757a83ff50fd3ad7cde3
Local SGD Meets Asynchrony
1 INTRODUCTION. In this paper, we consider the classic problem of minimizing an empirical risk, defined simply as
$$\min_{x \in \mathbb{R}^d} \sum_{i \in [I]} f_i(x), \qquad (1)$$
where $d$ is the dimension, $x \in \mathbb{R}^d$ denotes the set of model parameters, $[I]$ is the training set, and $f_i(x): \mathbb{R}^d \to \mathbb{R}$ is the loss on the training sample $i \in [I]$. Stochastic gradient descent (SGD) (Robbins & Monro, 1951) is an extremely popular iterative approach to solving this problem:
$$x_{k+1} = x_k - \alpha_k \nabla f_{B_k}(x_k), \qquad (2)$$
where $\nabla f_{B_k}(x_k) = \frac{1}{|B_k|} \sum_{i \in B_k} \nabla f_i(x_k)$ is the sum of gradients computed over samples, typically selected uniformly and randomly as a minibatch $B_k \subseteq [I]$, and $\alpha_k$ is the learning rate at iteration $k$.

1.1 BACKGROUND ON DECENTRALIZED DATA-PARALLEL SGD. For better or worse, SGD and its variants currently represent the computational backbone for many large-scale optimization tasks, most notably the training of deep neural networks (DNNs). Arguably the most popular SGD variant is minibatch SGD (MB-SGD) (Bottou, 2012). In a distributed setting with decentralized workers $q \in [Q]$, it follows the iteration
$$x_{k+1} = x_k - \alpha_k \frac{1}{Q} \sum_{q=1}^{Q} \nabla f_{B^q_k}, \qquad (3)$$
where $B^q_k \subseteq [I]$ is a local minibatch selected by worker $q \in [Q]$ at iteration $k$. This strategy is straightforward to scale in a data-parallel way, as each worker can process a subset of the samples in parallel, and the model is then updated by the average of the workers' gradient computations. For convenience, we assume the same batch size per worker. This approach has achieved tremendous popularity recently, and there has been significant interest in running training with increasingly large batch sizes aggregated over a large number of GPUs, e.g. Goyal et al. (2017). An alternative approach is parallel or local SGD (L-SGD) (Zinkevich et al., 2010; Zhang et al., 2016c; Lin et al., 2020):
$$x^q_{j,t+1} = x^q_{j,t} - \alpha_{j,t} \nabla f_{B^q_{j,t}}, \quad 0 \le t < K_j; \qquad x^q_{j+1,0} = \frac{1}{Q} \sum_q x^q_{j,K_j}, \qquad (4)$$
where $x^q_{j,t}$ denotes the local model at worker $q \in [Q]$ after $j$ synchronization rounds followed by $t$ local gradient updates, and $B^q_{j,t}$ is the local minibatch sampled at the same iteration. $K_j$ denotes the number of local gradient update steps before the $j$th synchronization. Essentially, workers run SGD without any communication for several local steps, after which they globally average the resulting local models. This method is intuitively easy to scale, since it reduces the frequency of the communication. Recently, a variant called post-local SGD (PL-SGD) (Lin et al., 2020) was introduced to address the issue of loss in generalization performance of L-SGD, wherein the averaging frequency during the initial phase of training is high and is reduced later when optimization stabilizes.

Method  | Bloc | Train Loss | Train Acc. | Test Loss | Test Acc. | Time (sec) | Quality/Perf.
MB-SGD  | 128  | 0.016      | 99.75      | 0.234     | 92.95     | 1754       | Baseline
MB-SGD  | 1024 | 0.023      | 99.51      | 0.293     | 91.38     | 1201       | OK
PL-SGD  | 128  | 0.018      | 99.69      | 0.245     | 92.98     | 1603       | Good
PL-SGD  | 1024 | 0.154      | 94.69      | 0.381     | 87.81     | 1159       | Poor

... Bloc, where Bloc = 128, Q is the number of workers (2 here), and α0 = 0.1. In PL-SGD, we average the model after each gradient update for the first 150 epochs and thereafter the averaging frequency K is set to 16 as in Lin et al. (2020); other HPs are identical to theirs. The listed results are the average of 3 runs with different seeds. ... with PL-SGD. Clearly, these methods cannot tolerate a larger Bloc, though the GPUs can support them.
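As a reading aid for eq. (4), here is a minimal sequential simulation of L-SGD (the stochastic-gradient oracle grad_fn, the hyperparameter values, and the fixed number of local steps are assumptions; a real implementation runs the inner loop in parallel across workers):

```python
import numpy as np

def local_sgd(grad_fn, x0, n_workers=2, rounds=10, local_steps=16, lr=0.1, rng=None):
    """Minimal L-SGD sketch (eq. 4): each worker takes `local_steps` SGD steps on
    its own minibatches, then all local models are averaged. `grad_fn(x, rng)` is
    assumed to return a stochastic gradient of the empirical risk at x."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(rounds):
        local_models = []
        for _ in range(n_workers):
            xq = x.copy()
            for _ in range(local_steps):
                xq -= lr * grad_fn(xq, rng)
            local_models.append(xq)
        x = np.mean(local_models, axis=0)   # synchronization: average local models
    return x
```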
This shortcoming of the existing methods in harnessing the growing data-parallelism is also identified via empirical studies ( Golmant et al . ( 2018 ) ; Shallue et al . ( 2019 ) ) existing in literature . To our knowledge no effective remedy ( yet ) exists to address this challenge . Notice that , here our core target is maximally harnessing the local data-parallelism and therefore the larger local batch size , as against the existing trend in the literature wherein large number of GPUs are deployed to have a large aggregated global batch size with a relatively small Bloc . For example , refer to the performance of MB-SGD and PL-SGD as listed in Table 1 of Lin et al . ( 2020 ) . Notice that with 16 GPUs , each with Bloc = 128 , thus totaling the minibatch size as 2048 , identical to the one with 2 GPUs each with Bloc = 1024 as above , with exactly the same LR scaling and warmup strategy , both MB-SGD and PL-SGD do not face generalization degradation . However , unfortunately , such an implementation setting would incur excessive wastage of available data-parallel compute resources on each of the GPUs . Indeed , the existing specific techniques such as LARS ( You et al . ( 2017 ) ) to address the issue of poor generalization for global large batch training are insufficient for the larger local minibatch size ; we empirically describe it in Section 3 ( Table 11 ) . 1.2 LOCALLY-ASYNCHRONOUS PARALLEL SGD . Now , consider an implementation scheme as the following : 1 . In a decentralized setting of L-SGD , i.e . wherein each worker q ∈ [ Q ] has a local model xq undergoing local SGD updates as described earlier , multiple local concurrent processes u ∈ Uq share the model xq . Processes u ∈ Uq perform asynchronous concurrent gradient updates locally . 2 . The workers average their models whenever any one of them would have had at least Kj local shared updates , where Kj is as that in Equation 4 . The averaging is performed asynchronously and in a non-blocking way by the ( averaging- ) processes aq on behalf of each worker q ∈ [ Q ] . Essentially , the decentralized workers run shared-memory-based asynchronous SGD locally and periodically synchronize in a totally non-blocking fashion . More formally , consider Algorithm 1 . The model xq on a GPU q ∈ [ Q ] is shared by the processes p ∈ P q = { { aq } ∪Uq } locally . The processes p ∈ P q also maintain a shared counter Sq , initialized to 0 . The operation read-and-inc implements an atomic ( with lock ) read and increment of Sq , whereas , read provides an atomic read . Sq essentially enables ordering the shared gradient updates . In turn , this order streamlines the synchronization among workers , thereby determines the averaging rounds j . The ( updater ) processes u ∈ Uq asynchronously and lock-freely update xq with gradients computed over a non-blocking , potentially inconsistent , snapshot va , q of xq , essentially going Hogwild ! ( Recht et al . ( 2011 ) ) , see Algorithm 1a . 1 Initialize s = 0 ; 2 while s ≤ T do 3 vu , q [ i ] : = xq [ i ] , ∀ 1 ≤ i ≤ d ; 4 s : = read-and-inc ( S ) ; 5 Compute∇fBqs ( v u , q ) ; 6 xq [ i ] −= αs∇fBqs ( v u , q ) [ i ] , ∀ 1 ≤ i ≤ d ; ( a ) Local asynchronous gradient update by process u ∈ Uq . 
1 Initialize scur = spre = |Uq| , j = 0 ; 2 while scur ≤ T do 3 scur : = read ( S ) ; Compute j corresponding to scur ; 4 if scur − spre ≥ Kj then 5 va , qj [ i ] : = x q [ i ] , ∀ 1 ≤ i ≤ d ; 6 Synchronize across ar , r ∈ [ Q ] \ { q } to compute vj : = 1 Q ∑ q∈ [ Q ] v a , q j ; 7 Compute ∆vqj = vj − v a , q j ; spre : = scur ; 8 xq [ i ] += ∆vqj [ i ] , ∀ 1 ≤ i ≤ d ; j = j + 1 ; ( b ) Asynchronous non-blocking in-place averaging . Algorithm 1 : Locally-asynchronous Parallel SGD ( LAP-SGD ) The process aq , which performs averaging for the worker q ∈ [ Q ] , concurrently keeps on atomically reading Sq , see Algorithm 1b . As soon as it notices an increment Kj in Sq , i.e . xq got concurrently updated with Kj number of gradients , it takes a non-blocking snapshot v a , q j of x q and synchronizes with ar of peers r ∈ [ Q ] /q to compute the average vj of the snapshots . Thereafter , aq adds the difference of the average with the snapshot va , qj to the model x q without blocking the concurrent asynchronous local gradient updates . We call this method locally-asynchronous parallel SGD ( LAP-SGD ) . This method closely resembles Hogwild++ ( Zhang et al . ( 2016a ) ) , which targets the heterogeneous NUMA based multi-core machines , though there are key differences which we describe in Section 4 . Results of the same training task as before by LAP-SGD is given in Table 2 . The distinction of this implementation is that it harnesses the compute power of the GPUs not by increasing the size of Bloc but by concurrently computing many minibatch gradients . Evidently , LAP-SGD provides speed-up without losing the quality of optimization in comparison to the baseline . Recently , Kungurtsev et al . ( 2019 ) presented a shared-memory based method wherein they showed that partitioned gradient updates for some iterations during the course of training over a shared model can reasonably save on total computation cost by means of restricted backpropagation without necessarily losing on optimization quality . Their method is limited to a centralized sharedmemory setting . Moreover , aiming to establish convergence under non-smoothness assumption , they ensure write consistency under a model-wide lock . Having designed our asynchronous parallel SGD , it inspires us to adapt the partitioned gradient update strategy to our lock-free decentralized setting . More specifically , building on LAP-SGD , we consider locally partitioned gradient computation along with asynchronous lock-free updates . Essentially , we partition the model xq to { xqi ( u ) } for u ∈ Uq , i ( u ) ∩ i ( w ) = ∅ , ∀u , w ∈ Uq ( i.e. , non-overlapping block components of the vector x ) . With that , a partitioned gradient computation will amount to computing ∇i ( u ) fBqs ( vq , u ) , the minibatch gradient with respect to the partition xqi ( u ) at line 5 in Figure 1a . Accordingly , the update step at line 6 in Algorithm 1a transforms to xq [ i ] −= αs∇fBqs ( vq , u ) [ i ] , ∀ i ∈ i ( u ) . It is to be noted that we do not use write lock for iterations at any stage . Having devised a partitioned update scheme , we propose locally-partitionedasynchronous parallel SGD ( LPP-SGD ) as described below . 1 . Processes u ∈ Uq maintain a process-local variable last iter which can take two values PARTITIONED and FULL . Each u ∈ Uq initializes last iter as FULL . 2 . While s ≤ Tst , each process u ∈ Uq performs LAP-SGD updates as lines 3 to 6 of Algorithm 1a . 3 . 
If $T_{st} < s \le T$, each process $u \in U^q$ performs (a) a partitioned gradient computation and update, $x^q[i] \mathrel{-}= \alpha_s \nabla f_{B^q_s}(v^{u,q})[i], \ \forall i \in i(u)$, if last_iter = FULL, and sets last_iter = PARTITIONED; (b) an LAP-SGD update if last_iter = PARTITIONED, and sets last_iter = FULL. Essentially, after some initial stabilizing epochs, each process $u \in U^q$ alternates between full and partitioned lock-free asynchronous gradient updates to the model $x^q$. Our experiments showed that $T_{st} = T/10$ was almost always sufficient to obtain a competitive optimization result. The results of a sample implementation of LPP-SGD are available in Table 3. It is clear that LPP-SGD handsomely speeds up the computation and provides equally competitive optimization results.
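The alternation rule above can be sketched for a single updater process as follows (a sequential simplification that ignores the shared counter, lock-free concurrency, and inter-worker averaging; grad_fn and all names are assumptions):

```python
import numpy as np

def lpp_sgd_updater(grad_fn, x, block, total_steps, t_st, lr=0.05, rng=None):
    """One LPP-SGD updater process u: full (LAP-SGD) updates for the first t_st
    steps, then alternating partitioned updates restricted to `block` (the index
    set i(u)) and full updates. `grad_fn(x, rng)` returns a stochastic gradient."""
    rng = np.random.default_rng() if rng is None else rng
    last_iter = "FULL"
    for s in range(total_steps):
        snapshot = x.copy()                    # non-blocking snapshot v^{u,q}
        g = grad_fn(snapshot, rng)
        if s < t_st:                           # initial stabilizing phase: full updates
            x -= lr * g
        elif last_iter == "FULL":              # partitioned update on block i(u)
            x[block] -= lr * g[block]
            last_iter = "PARTITIONED"
        else:                                  # full LAP-SGD update
            x -= lr * g
            last_iter = "FULL"
    return x
```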
In this paper, the authors argue that the mini-batch and local SGD methods suffer generalization performance degradation for large local mini-batch sizes. An asynchronous method is proposed to improve the generalization performance. A sublinear convergence rate is provided for the non-convex objective. As there are some missing definitions and little explanation of the proposed method, the reviewer finds the paper hard to read.
SP:4d94ef57fdaf5f1100b6b09331d5cff5264fcdf6
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues
1 INTRODUCTION . Negotiation is ubiquitous in human interaction , from e-commerce to the multi-billion dollar sales of companies . Learning how to negotiate effectively involves deep pragmatic understanding and planning the dialogue strategically ( Thompson ; Bazerman et al. , 2000b ; Pruitt , 2013 ) . Modern dialogue systems for collaborative tasks such as restaurant or flight reservations have made considerable progress by modeling the dialogue history and structure explicitly using the semantic content , like slot-value pairs ( Larionov et al. , 2018 ; Young , 2006 ) , or implicitly with encoder-decoder architectures ( Sordoni et al. , 2015 ; Li et al. , 2016 ) . In such tasks , users communicate explicit intentions , enabling systems to map the utterances into specific intent slots ( Li et al. , 2020 ) . However , such mapping is less clear in complex non-collaborative tasks like negotiation ( He et al. , 2018 ) and persuasion ( Wang et al. , 2019 ) , where user intent and most effective strategies are hidden . Hence , along with the generated dialogue , the strategic choice of framing and the sequence of chosen strategies play a vital role , as depicted in Figure 1 . Indeed , prior work on negotiation dialogues has primarily focused on optimizing dialogue strategies—from highlevel task-specific strategies ( Lewis et al. , 2017 ) , to more specific task execution planning ( He et al. , 2018 ) , to fine-grained planning of linguistic outputs given 1Code , data and a demo system is released at https : //github.com/rishabhjoshi/ DialoGraph_ICLR21 strategic choices ( Zhou et al. , 2019 ) . These studies have confirmed that it is crucial to control for pragmatics of the dialogue to build effective negotiation systems . To model the explicit dialogue structure , prior work incorporated Hidden Markov Models ( HMMs ) ( Zhai & Williams , 2014 ; Ritter et al. , 2010 ) , Finite State Transducers ( FSTs ) ( Zhou et al. , 2020 ) and RNNs ( He et al. , 2018 ; Shi et al. , 2019 ) . While RNN-based models lack interpretability , HMMand FST-based approaches may lack expressivity . In this paper , we hypothesize that Graph Neural Networks ( GNNs ) ( Wu et al. , 2020 ) can combine the benefits of interpretability and expressivity because of their effectiveness in encoding graph-structured data through message propagation . While being sufficiently expressive to model graph structures , GNNs also provide a natural means for interpretation via intermediate states ( Xie & Lu , 2019 ; Pope et al. , 2019 ) . We propose DIALOGRAPH , an end-to-end negotiation dialogue system that leverages Graph Attention Networks ( GAT ) ( Veličković et al. , 2018 ) to model complex negotiation strategies while providing interpretability for the model via intermediate structures . DIALOGRAPH incorporates the recently proposed hierarchical graph pooling based approaches ( Ranjan et al. , 2020 ) to learn the associations between negotiation strategies , including conceptual and linguistic strategies and dialogue acts , and their relative importance in predicting the best sequence . We focus on buyer–seller negotiations in which two individuals negotiate on the price of an item through a chat interface , and we model the seller ’ s behavior on the CraigslistBargain dataset ( He et al. , 2018 ) .2 We demonstrate that DIALOGRAPH outperforms previous state-of-art methods on strategy prediction and downstream dialogue responses . This paper makes several contributions . 
First , we introduce a novel approach to model negotiation strategies and their dependencies as graph structures , via GNNs . Second , we incorporate these learned graphs into an end-to-end negotiation dialogue system and demonstrate that it consistently improves future-strategy prediction and downstream dialogue generation , leading to better negotiation deals ( sale prices ) . Finally , we demonstrate how to interpret intermediate structures and learned sequences of strategies , opening-up the black-box of end-to-end strategic dialogue systems . 2 DIALOGRAPH . We introduce DIALOGRAPH , a modular end-to-end dialogue system , that incorporates GATs with hierarchical pooling to learn pragmatic dialogue strategies jointly with the dialogue history . DIALOGRAPH is based on a hierarchical encoder-decoder model and consists of three main components : ( 1 ) hierarchical dialogue encoder , which learns a representation for each utterance and encodes its local context ; ( 2 ) structure encoder for encoding sequences of negotiation strategies and dialogue acts ; and ( 3 ) utterance decoder , which finally generates the output utterance . Formally , our dialogue input consists of a sequence of tuples , D = [ ( u1 , da1 , ST1 ) , ( u2 , da2 , ST2 ) , ... , ( un , dan , STn ) ] where ui is the utterance , dai is the coarse dialogue act and STi = { sti,1 , sti,2 , . . . , sti , k } is the set of k fine-grained negotiation strategies for the utterance ui.3 The dialogue context forms the input to ( 1 ) and the previous dialogue acts and negotiation strategies form the input to ( 2 ) . The overall architecture is shown in Figure 2 . In what follows , we describe DIALOGRAPH in detail . 2.1 HIERARCHICAL DIALOGUE ENCODER . A dialogue context typically comprises of multiple dialogue utterances which are sequential in nature . We use hierarchical encoders for modeling such sequential dialogue contexts ( Jiao et al. , 2019 ) . To encode the utterance ut at time t , we use the pooled representations from BERT ( Devlin et al. , 2019 ) to obtain the corresponding utterance embedding et . We then pass the utterance embeddings through a GRU to obtain the dialogue context encoding till time t , denoted by hUt . 2We focus on the seller ’ s side following Zhou et al . ( 2019 ) who devised a set of strategies specific to maximizing the seller ’ s success . Our proposed methodology , however , is general . 3For example , in an utterance Morning ! My bro destroyed my old kit and I ’ m looking for a new pair for $ 10 , the coarse dialogue act is Introduction , and the finer grained negotiation strategies include Proposing price , Being informal and Talking about family for building rapport . 2.2 STRUCTURE ENCODER . Our structure encoder is designed to model the graph representations of the strategies and dialogue acts using GATs and output their structural representations . These structural representations are used to predict the next set of strategies and dialogue acts and enrich the encoded dialogue representation . Below we describe the structure encoder for negotiation strategies . We model the sequence of negotiation strategies , ST = [ ST1 , ST2 , . . . , STt ] by creating a directed graph , where STi is the set of k fine-grained negotiation strategies for the utterance ui . Formally , we define a graph G ( V , E , X ) with |E| edges andN = |V| nodes where each node vi ∈ V represents a particular negotiation strategy for an utterance and has a d-dimensional feature representation denoted by zi . 
Z ∈ RN×d denotes the feature matrix of the nodes and A ∈ RN×N represents the adjacency matrix , where N is the total number of nodes ( strategies ) that have occurred in the conversation till that point . Therefore , each node represents a strategy-utterance pair . We define the set of edges as E = { ( a , b ) } ; a , b ∈ V where a and b denote strategies at utterances ua and ub , present at turns ta and tb , such that tb > ta . In other words , we make a directed edge from a particular node ( strategy in an utterance ) to all the consecutive nodes . This ensures a direct connection from all the previous strategies to the more recent ones.4 In the same way , we form the graph out of the sequence of dialogue acts . These direct edges and learned edge attention weights help us interpret the dependence and influence of strategies on each other . To get the structural representations from the strategy graphs , we pass them through a hierarchical graph pooling based encoder , which consists of l layers of GAT , each followed by the Adaptive Structure Aware Pooling ( ASAP ) layer ( Ranjan et al. , 2020 ) . As part of the ASAP layer , the model first runs GAT over the input graph representations to obtain structurally informed representations of the nodes . Then a cluster assignment step is performed which generates a cluster assignment matrix , S , which tells the model which nodes come in a similar structural context . After that , the clusters are ranked and then the graph is pooled by taking the top few clusters as new nodes and forming edges between them using the existing graph . This way the size of the graph is reduced at every step which leads to a structurally informed graph representation . We take advantage of the cluster formulation to obtain the associations between the negotiation strategies , as identified from the cluster assignment matrix , S. These association scores can later be used to interpret which strategies are associated with each other and tend to co-occur in similar contexts . Moreover , we also use the node attention scores from GAT to interpret the influence of different strategies on the 4Appendix C shows an example of the graph obtained from a sequence of strategies . representation of a particular strategy , which essentially gives the dependence information between strategies . In this way , the structure representation is learned and accumulated in a manner that preserves the structural information ( Ying et al. , 2018 ; Lee et al. , 2019 ) . After each pooling step , the graph representation is summarized using the concatenation of mean and max of the node representations . The summaries are then added and passed through fully connected layers to obtain the final structural representation of the strategies hSTt . We employ a similar structure encoder to encode the graph obtained from the sequence of dialogue acts , to obtain hdat . 2.3 UTTERANCE DECODER . The utterance decoder uses the dialogue context representation and structural representations of dialogue acts and negotiation strategies to produce the dialogue response ( next utterance ) . We enrich the dialogue representation by concatenating the structural representations before passing it to a standard greedy GRU ( Cho et al. , 2014 ) decoder . This architecture follows Zhou et al . ( 2020 ) , who introduced a dynamic negotiation system that incorporates negotiation strategies and dialogue acts via FSTs . We thus follow their utterance decoder architecture to enable direct baseline comparison . 
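Returning to the strategy graph defined above (each node is a strategy-utterance pair, with a directed edge from every node to every node occurring in a later turn), a minimal construction could look like this (the strategy labels in the example are purely illustrative):

```python
import numpy as np

def build_strategy_graph(strategy_turns):
    """strategy_turns: list over utterances, each a list of the fine-grained
    strategies used in that utterance. Returns the node list (turn, strategy)
    and the adjacency matrix with an edge from node a to node b iff b occurs
    in a strictly later turn than a."""
    nodes = [(t, s) for t, strategies in enumerate(strategy_turns) for s in strategies]
    n = len(nodes)
    adj = np.zeros((n, n), dtype=int)
    for i, (ti, _) in enumerate(nodes):
        for j, (tj, _) in enumerate(nodes):
            if tj > ti:
                adj[i, j] = 1
    return nodes, adj

# Example: three utterances with their fine-grained strategies.
nodes, A = build_strategy_graph([["greet", "propose_price"], ["informal"], ["propose_price"]])
```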
For the jth word of utterance ut+1 , w j t+1 , we condition on the previous word w j−1 t+1 to calculate the probability distribution over the vocabulary as pwjt+1 = softmax ( GRU ( ht , w j−1 t+1 ) ) where ht = [ hut ; h ST t ; h da t ] and [ ; ] represents the concatenation operator . For encoding the price , we replace all price information in the dataset with placeholders representing the percentage of the offer price . For example , we would replace $ 35 with < price− 0.875 > if the original selling price is $ 40 . The decoder generates these placeholders which are then replaced with the calculated price before generating the utterance . 2.4 MODEL TRAINING . We use hSTt to predict the next set of strategies STt+1 , a binary value vector which represents the k-hot representation of negotiation strategies for the next turn . We compute the probability of the jth strategy occurring in ut+1 as p ( stt+1 , j |hSTt ) = σ ( hSTt ) . where σ denotes the sigmoid operator . We threshold the probability by 0.5 to obtain the k-hot representation . We denote the weighted negative log likelihood of strategies LST as the loss function of the task of next strategy prediction LST = − ∑ j δj log ( p ( stt+1 , j ) ) − ∑ k log ( 1 − p ( stt+1 , k ) ) where the summation of j are over the strategies present ( st ′ t+1 , j = 1 ) and not present ( st ′ t+1 , k = 0 ) in the ground truth strategies set , ST ′ . Here δj is the positive weight associated with the particular strategy . We add this weight to the positive examples to trade off precision and recall . We put δj = # of instances not having strategy j/ # of instances having strategy j . Similarly , we use hdat to predict the dialogue act for the next utterance dat+1 . Given the target dialogue act da ′ t+1 and the class weights ρda for the dialogue acts , we denote the class-weighted cross entropy loss over the set of possible dialogue acts , LDA = −ρda log ( softmax ( hdat ) ) . We pass ht = [ hut ; h ST t ; h da t ] through a linear layer to predict the negotiation success , which is denoted by the sale-to-list ratio r = ( sale price− buyer target price ) / ( listed price− buyer target price ) ( Zhou et al. , 2019 ) . We split the ratios into 5 negotiation classes of equal sizes using the training data and use those to predict the success of negotiation . Therefore , given the predicted probabilities for target utterance u ′ t+1 from §2.3 , target ratio class y ′ r and the learnable parameters Wr and br , we use the cross entropy loss as the loss for the generation task ( LNLG ) as well as the negotiation outcome prediction task ( LR ) , thus LNLG = − ∑ wj∈u ′ t+1 log ( p wj t+1 ) and LR = − ∑ r∈ [ 1,5 ] y ′ r log ( softmax ( Wrht + br ) ) . The LR loss optimizes for encoding negotiation strategies to enable accurate prediction of negotiation outcome . We use hyperparameters α , β and γ to optimize the joint loss Ljoint , of strategy prediction , dialogue act prediction , utterance generation and outcome prediction together , using the Adam optimizer ( Kingma & Ba , 2014 ) , to get Ljoint = LNLG + αLST + βLDA + γLR .
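As one concrete reading of the weighted strategy-prediction loss L_ST above, the following sketch computes it from per-strategy logits (the shapes, names, and the use of raw logits in place of the paper's exact prediction head are assumptions):

```python
import numpy as np

def strategy_prediction_loss(logits, targets, delta):
    """Weighted negative log-likelihood L_ST. `targets` is the k-hot ground-truth
    strategy vector ST', `logits` are per-strategy scores, and
    delta[j] = (#instances without strategy j) / (#instances with strategy j)."""
    p = 1.0 / (1.0 + np.exp(-logits))                   # sigmoid probabilities
    pos = delta * targets * np.log(p + 1e-12)           # strategies present in ST'
    neg = (1.0 - targets) * np.log(1.0 - p + 1e-12)     # strategies absent from ST'
    return -(pos + neg).sum()
```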
This paper deals with the problem of natural language generation for a dialogue system involved in complex communication tasks such as negotiation or persuasion. The proposed architecture consists of two encoders: one for the utterance and the other for dialogue acts and negotiation strategies. The decoder is an RNN that converts the encoded vectors to the output utterance. Each utterance is first passed through BERT to get an utterance-level encoding. The sequence of utterance encodings is then passed through an RNN to generate conversation-level encodings. The negotiation strategies and dialogue acts in a conversation are represented using a node-edge graph, where each node is one of the N different strategies/acts and there exists an edge from node a to node b if an utterance with strategy a precedes any utterance with strategy b. The entire architecture is trained in a multi-task setup where the loss function accounts for both the predictions of the model and the generated language. The proposed architecture is evaluated on the CraigslistBargain dataset and compared against Zhou et al. 2020.
SP:3dffd0add054e13be141cfe939e367f6f6785eb8
Learning the Step-size Policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm
1 INTRODUCTION . Consider the unconstrained optimization problem minimize x f ( x ) ( 1 ) where f : Rn → R is an objective function that is differentiable for all x ∈ Rn , with n being the number of decision variables forming x . Let ∇xf ( x0 ) be the gradient of f ( x ) evaluated at some x0 ∈ Rn . A general quasi-Newton algorithm for solving this problem iterates xk+1 = xk − tkHkgk ( 2 ) for an initial x0 ∈ Rn until a given stop criterion is met . At the k-th iteration , gk = ∇xf ( xk ) is the gradient , Hk is a positive-definite matrix satisfying the secant equation ( Nocedal and Wright , 2006 , p. 137 ) and tk is the step size . In this paper , we develop a policy that learns to suitably determine step sizes tk when the product Hkgk is calculated by the Limited-Memory Broyden–Fletcher–Goldfarb–Shanno ( L-BFGS ) algorithm ( Liu and Nocedal , 1989 ) . The main contributions of the paper are : 1 . We propose a neural network architecture defining this policy taking as input local information of the current iterate . In contrast with more standard strategies , this policy is tuning-free and avoids re-evaluations of the objective function and gradients at each step . The training procedure is formulated as a stochastic optimization problem and can be performed by easily applying backpropagation through time ( TBPTT ) . 2 . Training classifiers in the MNIST database ( LeCun et al. , 1998 ) , our approach is competitive against heuristically tuned optimization procedures . Our tests show that the proposed policy is not only able to outperform competitors such as ADAM and RMSprop in wall-clock time and optimal/final value , but also performs better than L-BFGS with backtracking line searches , which is the gold standard , and with constant step sizes , which is the baseline . 3 . According to subsequent experiments on CIFAR-10 ( Krizhevsky et al. , 2009 ) , the proposed policy can generalize to different classes of problems after a few additional training steps on examples from these classes . This indicates that learning may be transferable between distinct types of tasks , allowing to explore transfer learning strategies . This result is a step towards the development of optimization methods that frees the designer from tuning control parameters as it will be motivated in Section 2 . The remaining parts of this paper are organized as follows : Section 3 presents the classical L-BFGS algorithm and discuss some methodologies to determine step sizes ; Section 4 contains the architecture for the proposed policy and also discussions on how it was implemented ; Section 5 describes the training procedure ; and , finally , Section 6 presents experiments using classifiers to operate on MNIST and CIFAR-10 databases . The notation is mainly standard . Scalars are plain lower-case letters , vectors are bold lower-case , and matrices are bold upper-case . The clip function is defined as clipul ( y ) : = min ( u , max ( l , y ) ) . 2 MOTIVATION . Most algorithms used in artificial intelligence and statistics are based on optimization theory , which has widely collaborated for the success of machine learning applications in the last decades . However , this two-way bridge seems not to be currently leveraging its full potential in the other sense , that is , to learn how to automate optimization procedures . 
Indeed , performing satisfactory optimization , or solving learning problems , still relies upon the appropriate tuning of parameters of the chosen algorithm , which are often grouped with other hyper-parameters of the learning task . Despite the existence of several methodologies to obtain good values for these parameters ( Bengio , 2000 ; Bergstra et al. , 2011 ; Bergstra and Bengio , 2012 ; Snoek et al. , 2015 ; Daniel et al. , 2016 ; Dong et al. , 2018 ) , the search for tuning-free algorithms that perform better than heuristically designed ones is of great interest among practitioner and theoreticians . Indeed , besides the generally-desirable faster convergence , the ready-to-use nature of such algorithms allows the user to focus his attention on other problem-level hyper-parameters while the optimization procedure is automatically performed , resulting in better time and effort allocation . As recent advancements of machine learning have helped automatize the solution of numberless problems , optimization theory should equally benefit from these , balancing the bridge flows . From a wider viewpoint , most optimization problem requires the user to select an algorithm and tune it to some extent . Although intuition and knowledge about the problem can speed-up these processes , trial-and-error methodologies are often employed which can be a time-consuming and inefficient task . With that in mind , the concept of Learned optimizers has been gathering attention in the last few years and , basically , refers to optimization policies and routines that were learned by looking at instances of optimization problems , here called tasks . This idea was introduced by Li and Malik ( 2016 ) and Andrychowicz et al . ( 2016 ) building upon previous results of “ learning to learn ” or “ meta-learning ” ( Thrun and Pratt , 1998 ; Hochreiter et al. , 2001 ) . In the former , the authors presented an optimization policy based on a neural network trained by reinforcement learning and taking as input the history of gradient vectors at previous iterations . The latter adopts a long short-term memory ( LSTM ) to achieve a similar task , but the learning is done by truncated backpropagation through time after unrolling the proposed optimizer for a certain number of steps . Subsequently , it was shown in Metz et al . ( 2019 ) how multilayer perceptrons ( MLP ) , adequately trained using a combined gradient estimation method , can perform faster in wall-clock time compared to current algorithms of choice . Also within this scenario , in Xu et al . ( 2019 ) a reinforcement learning-based methodology to auto-learn an adaptive learning rate is presented . Following this same fashion , in this present paper , instead of completely learning an optimizer from data , we propose a mixture of these ideas into a classical optimization procedure . Thus , the resulting optimizer , composed by a combination of L-BFGS and the proposed policy , will be learned in a constrained domain that assures valuable mathematical properties . The idea is to leverage both frameworks , inheriting the theoretical aspects assured by optimization theory while learning a policy to rule out the hand-design of parameters . Algorithm 1 : L-BFGS algorithm Input : si = xi+1 − xi , yi = gi+1 − gi and ρi = 1/ ( sTi yi ) for all i ∈ k −m , . . . , k − 1 ; and current gradient gk , Result : update direction dk = −Hkgk 1 q ← gk ; 2 for i = k − 1 , . . . 
, k −m do 3 αi ← ρisTi q ; 4 q ← q − αiyi ; 5 end 6 γ = |sTk−1yk−1|/ ( yTk−1yk−1 ) ; 7 r ← γq ; 8 for i = k −m , . . . , k − 1 do 9 β ← ρiyTi r ; 10 r ← r + si ( αi − β ) ; 11 end 12 dk ← −r ; 3 L-BFGS ALGORITHM . The L-BFGS algorithm was originally presented in Liu and Nocedal ( 1989 ) and is here transcribed into Algorithm 1 . It is a quasi-Newton method derived from the BFGS algorithm ( Nocedal and Wright , 2006 ) lowering space complexity from quadratic to linear in the problem dimension at the expense of precision . This algorithm calculates a descending direction in the search space taking into account an estimate of the inverse hessian matrix of f ( x ) , given by Hk . This matrix is not explicitly constructed but rather the product dk : = −Hkgk is obtained from the past m values of xk and gk , which have to be stored . This property makes it often the algorithm of choice for large-scale deterministic non-linear optimization problems . If f ( x ) is convex in x , this algorithm is guaranteed to provide a descending update direction , but the same does not apply for non-convex objective functions . However , a simple way to circumvent this is by removing iterations i in lines 2 and 8 of Algorithm 1 such that ρi ≤ 0 ( Nocedal and Wright , 2006 , p. 537 ) , which is used in this paper . A matter of great relevance within this scope is how to choose an appropriate step size tk to apply the update rule in Eq . ( 2 ) . To the best of our knowledge , there does not seem to exist a consensus on how to choose tk in a general way for non-convex objective functions . The scaling factor γ in lines 6-7 of Algorithm 1 is known to assure that the step size tk = 1 is accepted in most iterations in the convex optimization context , but not always . We will refer to a constant step-size policy that outputs tk = 1 as the baseline L-BFGS . However , a line search ( LS ) procedure is often combined with L-BFGS to assure its convergence . Ideally , this should be performed by solving tk = arg mint > 0 f ( xk + tdk ) but this exact approach is often too expensive to be adopted , motivating the use of inexact ones . An example is the backtracking line search ( BTLS ) , which takes an initial length tk for the step size and shrinks it repeatedly until the so-called sufficient decrease Wolfe Condition f ( xk + tkdk ) ≤ f ( xk ) + c1tkg T k dk is fulfilled , where c1 ∈ ( 0 , 1 ) is a control parameter to be tuned . Another parameter that has to be designed is the contraction factor c2 ∈ ( 0 , 1 ) that shrinks the step size , i.e. , tk ← c2tk , see Nocedal and Wright ( 2006 , p. 37 ) . This method assures convergence to a localminima at the cost of re-evaluating the objective function several times per iteration . This is a price that the user is , in some cases , willing to pay , but for large-dimensional problems this procedure is likely to become the bottle-neck of the optimization task . It is important to highlight that the method to be presented may also apply to other optimization algorithms that deeply rely on line searches to perform well . However , this paper focus on L-BFGS as it is often the algorithm of choice in large-scale deterministic optimization . In the context of stochastic optimization , many modified versions of Algorithm 1 together with methodologies for choosing tk are available ( Moritz et al. , 2016 ; Zhou et al. , 2017 ; Bollapragada et al. 
, 2018 ; Wills and Schön , 2019 ) , but for sake of simplicity , our work will deal exclusively with deterministic non-linear optimization problems . 4 LEARNED POLICY FOR SELECTING STEP SIZES . Recalling the definition of sk and yk in Algorithm 1 , our policy is defined as tk = π ( dk , gk , sk−1 , yk−1 ; θ ) and selects an adequate step size for L-BFGS but neither relying on any parameter tuning nor requiring additional evaluations of the objective function . Instead , its parameters that are represented by θ should be learned from data . Let us , from now on , de- fine this policy combined with Algorithm 1 as the L-BFGS-π approach . The architecture of the policy π ( dk , gk , sk−1 , yk−1 ; θ ) is shown in Fig . 1 . To allow the policy to be independent from the problem size n , only the inner products between its inputs are used . These values define u0 = dotln ( dk , gk , sk−1 , yk−1 ) where dotln ( · ) returns the component-wise application of f ( x ) = ln ( min ( x , ) ) to the elements of X = [ dk gk sk−1 yk−1 ] T [ dk gk sk−1 yk−1 ] but with the superdiagonal entries having their signs reversed . We have chosen = 10−8 to avoid imaginaryvalued entries . The vector u0 is the input to two parallel input layers , which are fully connected linear layers that transport information in u0 to another vector space Rnh ( in our tests , we adopted nh = 6 ) . Their outputs , as usual , are defined as u1 = W01u0 + b01 and u2 = W02u0 + b02 . The logarithm operation was adopted to let the linear layers evaluate products and divisions between powers of the inputs by simply summing and subtracting them . Moreover , as the output is positive , working in the logarithmic vector space allows us to use a wider range of numerical values . Subsequently , let us define the normalized vectors ū1 = u1/‖u2‖ and ū2 = u2/‖u2‖ to calculate the scalar projection of ū1 onto ū2 and clip the result to some interval [ τm , τM ] , yielding the log-step size τk = clip τM τm ( ūT2 ū1 ) = : p ( u1 , u2 ) ( 3 ) Finally , the selected step size is obtained as tk = eτk . To geometrically interpret this , we sketch three different scenarios in Fig . 2 . The dashed lines represent orthogonal axes spanned by some arbitrary ū2 and the gray strip represents the interval [ τm , τM ] along the direction of ū2 whence τk should be taken . When the Linear Layer 1 maps u0 into u′1 , the scalar projection of ū ′ 1 onto ū2 is beyond the maximal τM , so τk is clipped to it . In the same way , for ū′′′1 the step size will be the minimal one tk = eτm whereas for the intermediate ū′′1 we have τk ∈ ( τm , τM ) . The two layers , jointly trained , should learn how to position ū1 and ū2 in the lifted space to represent important directional information of dk and gk by looking at similar optimization tasks , being thus able to produced suitable step sizes . ū2 ū′1 ū′′′1 ū′′1 [ τm , τM ] for example , in the sufficient decrease Wolfe condition for backtracking line search , which makes our policy comparable to them in the sense that π ( · ; θ ) does not require additional information to operate . However , notice that the clip function is not suitable for training given that it is non-differentiable and gradients can not be backpropagated through it . Fortunately , the clip operation ( 3 ) can be cast as a convex optimization problem τk = arg min τ∈R ‖u2τ − u1‖2 ( 4 ) s.t . τm ≤ τ ≤ τM ( 5 ) allowing τk to be calculated by a convex optimization layer , defined here as a CVX Layer , ( Agrawal et al. , 2019 ) . 
This last layer can output the solution to a parameter-dependent convex optimization problem . For the special case where a solution is not differentiable with respect to the input ( e.g. , in our case when an inequality constraint is active ) , the automatic differentiation procedure delivers an heuristic quantity that can be employed as a gradient . The use of a CVX Layer is therefore convenient for training our policy but , on the other hand , using Eq . ( 3 ) in its place when applying the already-trained policy significantly speeds up the step-size evaluation , compared to solving ( 4 ) . It is important to remark that this policy is defined as independent from both the memory length m of Algorithm 1 and the problem dimension n. Additionally , the lower and upper limits for the log-step size are τm and τM , respectively , and can also be learned . In this work , however , we chose τm = −3 and τM = 0 , letting tk ∈ [ 0.0497 , 1 ] . This interval is comprehensive enough to let our method be compared in a fair way to backtracking line searches . Moreover , when we allowed τM to be learned in our tests it converged to values that were very close to τM = 0 , indicating that 1 was already an adequate upper limit for the step size .
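Putting Sections 3 and 4 together, one L-BFGS-π iteration at inference time could be sketched as follows: the two-loop recursion of Algorithm 1 (skipping pairs with ρ_i ≤ 0, as suggested for the non-convex case) produces d_k, and the trained policy maps the feature vector u0 to a step size via eq. (3). The weight matrices, the already-built feature vector u0, and all names here are assumptions, not the authors' implementation:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list, eps=1e-10):
    """Two-loop recursion (Algorithm 1): returns d_k = -H_k g_k from the last m
    curvature pairs (s_i, y_i); pairs with s_i^T y_i <= 0 are skipped."""
    pairs = [(s, y, 1.0 / float(s @ y)) for s, y in zip(s_list, y_list) if s @ y > eps]
    q, alphas = g.copy(), []
    for s, y, rho in reversed(pairs):                     # i = k-1, ..., k-m
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)
    if pairs:
        s_last, y_last, _ = pairs[-1]
        gamma = abs(s_last @ y_last) / (y_last @ y_last)  # initial scaling, lines 6-7
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y, rho), a in zip(pairs, reversed(alphas)):   # i = k-m, ..., k-1
        b = rho * (y @ r)
        r += s * (a - b)
    return -r

def policy_step_size(u0, W1, b1, W2, b2, tau_m=-3.0, tau_M=0.0):
    """Inference-time version of eq. (3): two linear layers give u1 and u2; the
    scalar projection of u1/||u2|| onto u2/||u2|| is clipped to [tau_m, tau_M]
    and exponentiated to obtain t_k. u0 is the log-inner-product feature vector
    described in the text (its construction is omitted here)."""
    u1, u2 = W1 @ u0 + b1, W2 @ u0 + b2
    tau = np.clip(float(u2 @ u1) / (float(u2 @ u2) + 1e-12), tau_m, tau_M)
    return np.exp(tau)

# One sketched iteration: d = lbfgs_direction(g, s_hist, y_hist);
# t = policy_step_size(u0, W1, b1, W2, b2); x_next = x + t * d
```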
The paper studies the problem of learning a step-size policy for the L-BFGS algorithm. It falls into the general category of meta-learning algorithms that try to derive a data-driven approach to learning one of the parameters of the learning algorithm; in this case, the step size (learning rate) of L-BFGS. The paper is very similar in nature to the papers of Ravi & Larochelle, MAML, and Andrychowicz.
SP:3b3e7833784c53527eb32d5f6ac8d720f9d764bd
Uncertainty Calibration Error: A New Metric for Multi-Class Classification
1 INTRODUCTION . Advances in deep learning have led to superior accuracy in classification tasks , making deep learning classifiers an attractive choice for safety-critical applications like autonomous driving ( Chen et al. , 2015 ) or computer-aided diagnosis ( Esteva et al. , 2017 ) . However , the high accuracy of recent deep learning models alone is not sufficient for such applications . In cases where serious decisions are made upon model ’ s predictions , it is essential to also consider the uncertainty of these predictions . We need to know if a prediction is likely to be incorrect or if invalid input data is presented to a deep model , e.g . data that is far away from the training domain or obtained from a defective sensor . The consequences of a false decision based on an uncertain prediction can be fatal . A natural expectation is that the certainty of a prediction should be directly correlated with the quality of the prediction . In other words , predictions with high certainty are more likely to be accurate than uncertain predictions , which are more likely to be incorrect . A common misconception is the assumption that the estimated softmax likelihood can be directly used as a confidence measure for the predicted class . This expectation is dangerous in the context of critical decision-making . The estimated likelihood of models trained by minimizing the negative log-likelihood ( i.e . cross entropy ) is highly overconfident ; that is , the estimated likelihood is considerably higher than the observed frequency of accurate predictions with that likelihood ( Guo et al. , 2017 ) . 2 UNCERTAINTY ESTIMATION . In this work , we focus on uncertainty from approximately Bayesian methods . We assume a general multi-class classification task with C classes . Let input x ∈ X be a random variable with corresponding label y ∈ Y = { 1 , . . . , C } . Let fw ( x ) be the output ( logits ) of a neural network with weight matrices w , and with model likelihood p ( y=c |fw ( x ) ) for class c , which is sampled from a probability vector p = σSM ( fw ( x ) ) , obtained by passing the model output through the softmax function σSM ( · ) . From a frequentist perspective , the softmax likelihood is often interpreted as confidence of prediction . Throughout this paper , we follow this definition . The frequentist approach assumes a single best point estimate of the parameters ( or weights ) of a neural network . In frequentist inference , the weights of a deep model are obtained by maximum likelihood estimation ( Bishop , 2006 ) , and the normalized output likelihood for an unseen test input does not consider uncertainty in the weights ( Kendall & Gal , 2017 ) . Weight uncertainty ( also referred to as model or epistemic uncertainty ) is a considerable source of predictive uncertainty for models trained on data sets of limited size ( Bishop , 2006 ; Kendall & Gal , 2017 ) . Bayesian neural networks and recent advances in their approximation provide valuable mathematical tools for quantification of model uncertainty ( Gal & Ghahramani , 2016 ; Kingma & Welling , 2014 ) . Instead of assuming the existence of a single best parameter set , we place distributions over the parameters and want to consider all possible parameter configurations , weighted by their posterior . More specifically , given a training data set D and an unseen test sample x with class label y , we are interested in evaluating the predictive distribution p ( y|x , D ) = ∫ p ( y|x , w ) p ( w|D ) dw . 
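For reference, the Monte Carlo approximation of the predictive distribution discussed above can be written in a couple of lines; here stochastic_forward is an assumed callable that performs one forward pass under a fresh draw of the approximate posterior over weights (e.g., one MC-dropout pass):

```python
import numpy as np

def mc_predictive(stochastic_forward, x, n_samples=50):
    """Approximate p(y | x, D) = ∫ p(y | x, w) p(w | D) dw by averaging the
    softmax outputs of `n_samples` stochastic forward passes at input x."""
    probs = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return probs.mean(axis=0)
```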
This integral requires to evaluate the posterior p ( w|D ) , which involves the intractable marginal likelihood . A possible solution to this is to approximate the posterior with a more simple , tractable distribution q ( w ) by optimization . In this work , we incorporate the following approximately Bayesian methods which we use in our experiments to obtain weight uncertainty : Monte Carlo ( MC ) dropout ( Gal & Ghahramani , 2016 ) , Gaussian dropout ( Wang & Manning , 2013 ; Kingma et al. , 2015 ) , Bayes by Backprop ( Blundell et al. , 2015 ) , SWA-Gaussian ( Maddox et al. , 2019 ) , and ( although not Bayesian ) deep ensembles ( Lakshminarayanan et al. , 2017 ) . A short review of each of the methods can be found in Appendix A.2 . 3 RELATED CALIBRATION METRICS . Expected Calibration Error The expected calibration error ( ECE ) is one of the most popular calibration error metrics and estimates model calibration by binning the predicted confidences p̂ = maxc p ( y = c |x ) into M bins from equidistant intervals and comparing them to average accuracies per bin ( Naeini et al. , 2015 ; Guo et al. , 2017 ) : ECE = M∑ m=1 |Bm| n ∣∣acc ( Bm ) − conf ( Bm ) ∣∣ , ( 1 ) with number of test samples n and acc ( B ) and conf ( B ) denoting the accuracy and confidence of bin B , respectively . Several recent works have described severe pathologies of the ECE metric ( Ashukha et al. , 2020 ; Nixon et al. , 2019 ; Kumar et al. , 2019 ) . Most notably , the ECE metric is minimized by a model constantly predicting the marginal distribution of the majority class which makes it impossible to directly optimize it ( Kumar et al. , 2018 ) . Additionally , the ECE only considers the maximum class probability and ignores the remaining entries of the probability vector p ( x ) . Adaptive Calibration Error Nixon et al . ( 2019 ) proposed the adaptive calibration error ( ACE ) to address the issue of fixed bin widths of ECE-like metrics . For models with high accuracy or overconfidence , most of the predictions fall into the rightmost bins , whereas only very few predictions fall into the rest of the bins . ACE spaces the bins such that an equal number of predictions contribute to each bin . The final ACE is computed by averaging over per-class ACE values to address the issue raised by Kull et al . ( 2019 ) . However , this makes the metric more sensitive to the manually selected number of bins M as the number of bins effectively becomes C ·M , with number of classes C. Using fixed bin widths , the numbers of samples in the sparsely populated bins is further reduced , which increases the variance of each measurement per bin . Using adaptive bins , this results in the lower confidence bins spanning a wide range of values , which increases the bias of the bin ’ s measurement . Negative Log-Likelihood Deep models for classification are usually trained by minimizing the average negative log-likelihood ( NLL ) : NLL = 1 N N∑ i=1 − log p ( y = yi |xi ) . ( 2 ) The NLL is also commonly used as a metric for measuring the calibration of uncertainty . However , the NLL is minimized by increasing the confidence maxc p ( y = c |x ) , which favors over-confident models and models with higher accuracy ( Ashukha et al. , 2020 ) . This metric is therefore unable to compare the calibration of models with different accuracies and training a model by minimizing NLL does not necessarily lead to good calibration . 
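A straightforward estimator of the ECE in eq. (1) is sketched below (the equidistant binning and bin count follow common practice; the array shapes, probs of shape (N, C) and integer labels of shape (N,), are assumptions):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE (eq. 1): bin max-class confidences into equidistant bins and average
    |accuracy - confidence| per bin, weighted by the fraction of samples in it."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        mask = (conf > lo) & (conf <= hi) if i > 0 else (conf >= lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```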
Brier Score The average Brier score is another popular metric for assessing the quality of predictive uncertainty and is defined as ( Brier , 1950 ; Lakshminarayanan et al. , 2017 ) BS = ( 1 / N ) Σ_{i=1}^{N} Σ_{c=1}^{C} ( 1 ( y_i = c ) − p ( y = c | x_i ) )^2 . ( 3 ) Similarly to the NLL , the Brier score favors high probabilities for correct predictions and low probabilities for incorrect predictions . Thus , models with higher accuracy tend to show a better Brier score , which makes the metric unsuitable for comparing the quality of uncertainty for models with different accuracies . Maximum Mean Calibration Error Common recalibration methods are applied post-hoc , e.g . temperature scaling on a separate calibration set . Kumar et al . ( 2018 ) proposed the maximum mean calibration error ( MMCE ) , a trainable calibration surrogate for the calibration error . It is defined as MMCE^2 ( B ) = Σ_{i , j ∈ B} ( 1 ( ŷ_i = y_i ) − p̂_i ) ( 1 ( ŷ_j = y_j ) − p̂_j ) k ( p̂_i , p̂_j ) / m^2 ( 4 ) over a mini-batch B ⊂ D with batch size m , matrix-valued universal kernel k and ŷ = argmax_c p ( y = c | x ) . Trainable calibration metrics are used in joint optimization with the negative log-likelihood argmin_w Σ_B NLL ( B , w ) + λ MMCE ( B , w ) . ( 5 ) Kumar et al . ( 2018 ) claim to have addressed the issue that the ECE is unsuitable for direct optimization due to its high discontinuity in w. However , MMCE is also minimized by a model constantly predicting the marginal distribution of the classes . This leads to a subpar logit temperature when training with MMCE , and temperature scaling can further reduce miscalibration ( Kumar et al. , 2018 ) . 4 UNCERTAINTY CALIBRATION ERROR . To give an insight into our general approach to measuring the calibration of uncertainty , we will first revisit the definition of perfect calibration of confidence ( Guo et al. , 2017 ) and show how this concept can be extended to our definition of calibration of uncertainty . Let ŷ = argmax p be the most likely class prediction of input x with confidence p̂ = max p and true label y . Then , following Guo et al . ( 2017 ) , perfect calibration of confidence is defined as P [ ŷ = y | p̂ = α ] = α , ∀ α ∈ [ 0 , 1 ] . ( 6 ) That is , the probability of a correct prediction ŷ = y given the prediction confidence p̂ should exactly correspond to the prediction confidence . Instead of using only the probability of the predicted class , we use the entropy of p to express prediction uncertainty : H ( p ) = − Σ_{c=1}^{C} p ( c ) log p ( c ) . ( 7 ) Let q ( k ) : = ( P [ y = 1 | argmax p ( x ) = k ] , . . . , P [ y = C | argmax p ( x ) = k ] ) ( 8 ) be a probability vector of true marginal class probabilities for all inputs x predicted with class k. Consider the following example : Three i.i.d . inputs x_{1:3} in a binary classification task with ground truth labels { 1 , 1 , 2 } have all been predicted with argmax p ( x_{1:3} ) = 1 . Then , q ( 1 ) = ( 2/3 , 1/3 ) . With this , we define a model to be perfectly calibrated if H ( q ( k ) ) = H ( p | argmax p = k ) ∀ k ∈ { 1 , . . . , C } . ( 9 ) From this , we derive an error metric for calibration of uncertainty : E_p [ | H ( q ) − H ( p ) | ] . ( 10 ) However , this metric and the use of the entropy as a measure of uncertainty lack interpretability , as the entropy scales with the number of classes C. This does not allow comparing the uncertainty or the calibration of models trained on different data sets .
Therefore , we propose to use the normalized entropy to scale the values to a range between 0 and 1 : H̃ ( p ) : = − ( 1 / log C ) Σ_{c=1}^{C} p ( c ) log p ( c ) , H̃ ∈ [ 0 , 1 ] . ( 11 ) We further increase interpretability and argue that the normalized entropy should correlate with the model error . From Eq . ( 6 ) and Eq . ( 11 ) , we define perfect calibration of uncertainty as P [ ŷ ≠ y | H̃ ( p ) = α ] = α , ∀ α ∈ [ 0 , 1 ] . ( 12 ) That is , in a batch of inputs that are all predicted with an uncertainty of e.g . 0.2 , a top-1 error of 20 % is expected . The confidence is interpreted as the probability of belonging to a particular class , which should naturally correlate with the model error of that class . This characteristic does not generally apply to entropy , and thus the question arises why entropy should correspond with the model error . Proposition 1 . The normalized entropy ( uncertainty ) H̃ ( p ) approaches the top-1 error in the limit of the number of classes C if the model p is well-calibrated . Proof . lim_{C→∞} H̃ ( p ) = ( 1 − p̂ ) . ( 13 ) The top-1 error equals ( 1 − p̂ ) if the model is perfectly calibrated in the sense of Eq . ( 6 ) . For a detailed proof , see Appendix A.1 . Thus , the normalized entropy gives us an intuitive and interpretable measure of uncertainty . If a model is perfectly calibrated , H̃ corresponds to the top-1 error . We propose the following notion to quantify miscalibration of uncertainty : E_{H̃} [ | P [ ŷ ≠ y | H̃ ( p ) = α ] − α | ] , ∀ α ∈ [ 0 , 1 ] . ( 14 ) We refer to this as the Expected Uncertainty Calibration Error ( UCE ) and approximate it with UCE : = Σ_{m=1}^{M} ( |B_m| / n ) | err ( B_m ) − uncert ( B_m ) | , ( 15 ) using the same binning scheme as in ECE estimation . The error per bin is defined as err ( B_m ) : = ( 1 / |B_m| ) Σ_{i ∈ B_m} 1 ( ŷ_i ≠ y_i ) , ( 16 ) where 1 ( ŷ_i ≠ y_i ) = 1 and 1 ( ŷ_i = y_i ) = 0 . The uncertainty per bin is defined as uncert ( B_m ) : = ( 1 / |B_m| ) Σ_{i ∈ B_m} H̃ ( p_i ) . ( 17 ) Properties of UCE The proposed UCE metric solves several problems of other metrics . First , the UCE is not zero for a model constantly predicting the marginal class distribution . Estimators of metrics with this pathology ( e.g . ECE , MMCE ) suffer from varying bias and therefore do not allow comparing the miscalibration of different models ( Ashukha et al. , 2020 ; Vaicenavicius et al. , 2019 ) . In contrast to ACE , UCE is not highly sensitive to the number of bins and provides a consistent ranking of different models for the same classification task ( see Fig . 1 ) . Additionally , UCE can be used as a trainable regularizer in a similar manner to MMCE . During training , we compute the UCE over mini-batches B ⊂ D and add it to the NLL training objective argmin_w Σ_B NLL ( B , w ) + λ UCE ( B , w ) , ( 18 ) weighted by a factor λ. UCE is zero for an optimal model and thus does not penalize highly confident predictions for models with high accuracy , which is a major disadvantage of plain entropy regularization ( Pereyra et al. , 2017 ) . Predictions with low uncertainty but high top-1 error are penalized , whereas predictions with high accuracy are encouraged to have low uncertainty .
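To make the UCE estimator in Eq. (15) concrete, the following minimal NumPy sketch bins predictions by their normalized entropy (Eq. 11) and compares the per-bin top-1 error (Eq. 16) with the per-bin uncertainty (Eq. 17); the bin count and variable names are illustrative assumptions.

```python
import numpy as np

def normalized_entropy(probs):
    """H_tilde(p) in Eq. (11): entropy divided by log C, so values lie in [0, 1]."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1) / np.log(p.shape[-1])

def uncertainty_calibration_error(probs, labels, n_bins=15):
    """Binned UCE of Eq. (15): sum_m |B_m|/n * |err(B_m) - uncert(B_m)|."""
    probs = np.asarray(probs)                  # shape (n, C), softmax outputs
    labels = np.asarray(labels)                # shape (n,)
    uncert = normalized_entropy(probs)         # H_tilde(p_i) per sample
    errors = (probs.argmax(axis=-1) != labels).astype(float)
    n = len(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    uce = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (uncert > lo) & (uncert <= hi)
        if not np.any(in_bin):
            continue
        uce += (in_bin.sum() / n) * abs(errors[in_bin].mean() - uncert[in_bin].mean())
    return uce
```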
This paper proposes a new calibration error measurement named UCE (Uncertainty Calibration Error) for deep classification models. It consists in doing a calibration in order to achieve "perfect calibration" (i.e., the uncertainty provided is equivalent to the classification error at all levels in [0, 1]), relying on normalized entropy for multiclass classification. This UCE is well justified for classification problems with several classes to process, where the entropy is demonstrated to be asymptotically equivalent to the classification (top-1) error. A point with this UCE metric is that is has some interpretability properties in terms of its value, and is said to be robust to the number of bins used.
SP:7a92beaba926a93a627208abebe4a455ae3e0400
Multiscale Invertible Generative Networks for High-Dimensional Bayesian Inference
1 INTRODUCTION . Bayesian inference provides a powerful framework to blend prior knowledge , the data generation process and ( possibly small ) data for statistical inference . With some prior knowledge ρ ( a distribution ) for the quantity of interest x ∈ R^d , and some ( noisy ) measurement y ∈ R^{d_y} , it casts on x a posterior q ( x|y ) ∝ ρ ( x ) L ( y|x ) , where L ( y|x ) = N ( y − F ( x ) ; 0 , Γ ) , ( 1 ) where L ( y|x ) is the likelihood that compares the data y with the system prediction F ( x ) from the candidate x ; here F denotes the forward process and Γ the noise covariance . We can use different distributions to model the mismatch y − F ( x ) , and for illustration simplicity , we assume a Gaussian in Equation 1 . For example , Bayesian deep learning generates model-predicted logits F ( x ) from model parameters x , and compares them with discrete labels y through a binomial or multinomial distribution . Sampling or inferring from q is a long-standing challenge , especially for high-dimensional ( high-d ) cases . An arbitrary high-d posterior can have its importance regions ( also called “ modes ” ) anywhere in the high-d space , and finding these modes requires computational cost that grows exponentially with the dimension d. This intrinsic difficulty is the consequence of “ the curse of dimensionality ” , which all existing Bayesian inference methods suffer from , e.g. , MCMC-based methods ( Neal et al. , 2011 ; Welling & Teh , 2011 ; Cui et al. , 2016 ) , SVGD-type methods ( Liu & Wang , 2016 ; Chen et al. , 2018 ; 2019a ) , and generative modeling ( Morzfeld et al. , 2012 ; Parno et al. , 2016 ; Hou et al. , 2019 ) . In this paper , we focus on Bayesian inference problems with multiscale structure and exploit this structure to sample from a high-d posterior . While the original problem has a high spatial resolution ( fine-scale ) , its low-resolution ( coarse-scale ) analogy is computationally attractive because it lies in a low-dimensional ( low-d ) space . A problem has the multiscale structure if such a coarse-scale low-d surrogate exists and gives a good approximation to the fine-scale high-d problem , see Section 2.1 . Such a multiscale property is very common in high-d Bayesian inference problems . For example , inferring the 3-D permeability field of the subsurface at the scale of meters is a reasonable approximation of itself at the scale of centimeters , while the problem dimension is 10^6-times smaller . We propose a Multiscale Invertible Generative Network ( MsIGN ) to sample from high-d Bayesian inference problems with multiscale structure . MsIGN is a flow-based generative network that can both generate samples and give density evaluations . It consists of multiple scales that recursively lift samples up to a finer scale ( higher resolution ) , except that the coarsest scale directly samples from a low-d ( low-resolution ) distribution . At each scale , a fixed prior conditioning layer combines coarse-scale samples with some random noise according to the prior to enhance the resolution , and then an invertible flow modifies the samples for better accuracy , see Figure 1 . The architecture of MsIGN makes it fully invertible between the final sample and the random noise at all scales . MsIGN undergoes a multi-stage training that learns a hierarchy of distributions with dimensions growing from the lowest to the highest ( the target posterior ) . Each stage gives a good initialization to the next stage thanks to the multiscale property .
To capture multiple modes , we choose the Jeffreys divergence D_J ( p ‖ q ) as the training objective at each stage , which is defined as D_J ( p ‖ q ) = D_KL ( p ‖ q ) + D_KL ( q ‖ p ) = E_{x ∼ p} [ log ( p ( x ) / q ( x ) ) ] + E_{x ∼ q} [ log ( q ( x ) / p ( x ) ) ] . ( 2 ) Jeffreys divergence removes bad local minima of the single-sided Kullback-Leibler ( KL ) divergence to avoid mode missing . We build an unbiased estimation of it by leveraging the prior conditioning layer in importance sampling . A proper loss function and good initialization from multi-stage training solve the non-convex optimization stably and capture multiple modes of the high-d distribution . In summary , we claim four contributions in this work . First , we propose a Multiscale Invertible deep Generative Network ( MsIGN ) with a novel prior conditioning layer , which can be trained in a coarse-to-fine scale manner . Second , Jeffreys divergence is used as the objective function to avoid mode collapse , and is estimated by importance sampling based on the prior conditioning layer . Third , when applied to two Bayesian inverse problems , MsIGN clearly captures multiple modes in the high-d posterior and approximates the posterior accurately , demonstrating its superior performance compared with previous methods via the generative modeling approach . Fourth , we also apply MsIGN to image synthesis tasks , where it achieves superior performance in bits-per-dimension among our baseline models , like Glow ( Kingma & Dhariwal , 2018 ) , FFJORD ( Grathwohl et al. , 2018 ) , Flow++ ( Ho et al. , 2019 ) , i-ResNet ( Behrmann et al. , 2019 ) , and Residual Flow ( Chen et al. , 2019b ) . MsIGN also yields great interpretability of its neurons in intermediate layers . 2 METHODOLOGY . We will abbreviate q ( x|y ) in Equation 1 as q ( x ) for simplicity in the following context , because y only plays the role of defining the target distribution q ( x ) in MsIGN . In Section 2.1 , we discuss the multiscale structure of the posterior q ( x ) in detail and derive a scale decoupling that can be utilized to divide and conquer the high-d challenge of Bayesian inference . As a flow-based generative model like in Dinh et al . ( 2016 ) , MsIGN models a bijective map from Gaussian noise z to a sample x whose distribution is denoted as p_θ ( x ) , where θ denotes the network parameters . MsIGN allows fast generation of samples x and density evaluation p_θ ( x ) , so we train our working distribution p_θ ( x ) to approximate the target distribution q ( x ) . We present the architecture of MsIGN in Section 2.2 and the training algorithm in Section 2.3 . 2.1 MULTISCALE STRUCTURE AND SCALE DECOUPLING . We say a Bayesian inference problem has multiscale structure if the associated coarse-scale likelihood L_c approximates the original likelihood L well : L ( y|x ) ≈ L_c ( y|x_c ) , where L_c ( y|x_c ) : = N ( y − F_c ( x_c ) ; 0 , Γ ) . ( 3 ) Here x_c ∈ R^{d_c} is a coarse-scale version of the fine-scale quantity x ∈ R^d ( d_c < d ) , given by a deterministic pooling operator A : x_c = A ( x ) . The map F_c : R^{d_c} → R^{d_y} is a forward process that gives the system prediction based on the coarse-scale information x_c . A popular case of the multiscale structure is when A is the average pooling operator , and F ( x ) ≈ F_c ( x_c ) , meaning that the system prediction mainly depends on the lower-resolution information x_c .
Equation 3 motivates us to define a surrogate distribution q̃ ( x ) ∝ ρ ( x ) L_c ( y|A ( x ) ) that approximates the target posterior q ( x ) well ( see footnote 1 ) : q̃ ( x ) = ρ ( x ) L_c ( y|A ( x ) ) = ρ ( x ) L_c ( y|x_c ) ≈ ρ ( x ) L ( y|x ) = q ( x ) . ( 4 ) We also notice that the prior ρ allows an exact scale decoupling . To generate a sample x from ρ , one can first sample its coarse-scale version x_c = A ( x ) , and then replenish missing fine-scale details without changing the coarse-scale structure by sampling from the conditional distribution ρ ( x|x_c ) = ρ ( x|A ( x ) = x_c ) . Using ρ_c to denote the distribution of x_c = A ( x ) , the conditional probability calculation summarizes this scale decoupling process as ρ ( x ) = ρ ( x|x_c ) ρ_c ( x_c ) . Combining the scale effect in the likelihood and the scale decoupling in the prior , we decouple the surrogate q̃ ( x ) = ρ ( x ) L_c ( y|A ( x ) ) into the prior conditional distribution ρ ( x|x_c ) and a coarse-scale posterior , defined as q_c ( x_c ) : = ρ_c ( x_c ) L_c ( y|x_c ) . The decoupling goes as q̃ ( x ) = ρ ( x ) L_c ( y|x_c ) = ρ ( x|x_c ) ρ_c ( x_c ) L_c ( y|x_c ) = ρ ( x|x_c ) q_c ( x_c ) . ( 5 ) The prior conditional distribution ρ ( x|x_c ) bridges the coarse-scale posterior q_c ( x_c ) and the surrogate q̃ ( x ) , which in turn approximates the original fine-scale posterior q ( x ) . Parno et al . ( 2016 ) proposed a similar scale decoupling relation , and we leave the discussion and comparison to Appendix A . Figure 1 shows the integrated sampling strategy . To sample an x from q , we start with an x_c from q_c . The prior conditioning layer then performs random upsampling from the prior conditional distribution ρ ( ·|x_c ) , and the output will be a sample x̃ of the surrogate q̃ . Due to the approximation q̃ ≈ q from Equation 4 , we stack multiple invertible blocks for the invertible flow F to modify the sample x̃ ∼ q̃ to a sample x ∼ q : x = F ( x̃ ) . F is initialized as an identity map in training . Finally , to obtain the x_c from q_c , we apply the above procedure recursively until the dimension of the coarsest scale is small enough so that q_c can be easily sampled by a standard method . 2.2 MULTISCALE INVERTIBLE GENERATIVE NETWORK : ARCHITECTURE . Our proposed MsIGN has multiple levels to recursively apply the above strategy . We denote L the number of levels , x_l ∈ R^{d_l} the sample at level l , and A_l : R^{d_l} → R^{d_{l−1}} the pooling operator from level l to l − 1 : x_{l−1} = A_l ( x_l ) . Following the idea in Section 2.1 , we can define the l-th level target q_l ( x_l ) and surrogate q̃_l ( x̃_l ) , and the last-level target q_L is our original target q in Equation 1 . The l-th level of MsIGN uses a prior conditioning layer PC_l and an inverse transform F_l to capture q_l . Prior conditioning layer . The prior conditioning layer PC_l at level l lifts a coarse-scale sample x_{l−1} ∈ R^{d_{l−1}} up to a random fine-scale one x_l ∈ R^{d_l} following the conditional distribution ρ ( x_l|x_{l−1} ) . The difference in dimension is compensated by a Gaussian noise z_l ∈ R^{d_l − d_{l−1}} , which is the source of randomness : x_l = PC_l ( x_{l−1} , z_l ) . PC_l depends only on the prior conditional distribution ρ ( x_l|x_{l−1} ) , and thus can be pre-computed independently for different levels regardless of the likelihood L. When the prior is Gaussian and the pooling operators are linear ( e.g. , average pooling ) , the prior conditional distribution is still Gaussian with moments specified as follows .
Lemma 2.1 . Suppose that ρ ( x_l ) = N ( x_l ; 0 , Σ_l ) , and A_l ( x_l ) = A_l x_l for some A_l ∈ R^{d_{l−1} × d_l} ; then with U_{l−1} : = Σ_l A_l^T ( A_l Σ_l A_l^T )^{−1} and Σ_{l|l−1} : = Σ_l − Σ_l A_l^T ( A_l Σ_l A_l^T )^{−1} A_l Σ_l , we have ρ ( x_l | x_{l−1} = A_l x_l ) = N ( x_l ; U_{l−1} x_{l−1} , Σ_{l|l−1} ) . ( Footnote 1 : We omit normalizing constants . Equivalence and approximation are up to normalization in the following . ) With the Cholesky decomposition ( or eigen-decomposition ) Σ_{l|l−1} = B_l B_l^T , we design the prior conditioning layer PC_l as below , which is invertible between x_l and ( x_{l−1} , z_l ) : x_l = PC_l ( x_{l−1} , z_l ) : = U_{l−1} x_{l−1} + B_l z_l , z_l ∼ N ( 0 , I_{d_l − d_{l−1}} ) . ( 6 ) We refer readers to Appendix B for the proof of Lemma 2.1 and the invertibility in Equation 6 . When the prior is non-Gaussian or the pooling operators are nonlinear , there exists a nonlinear invertible prior conditioning operator x_l = PC_l ( x_{l−1} , z_l ) such that x_l follows the prior conditional distribution ρ ( x_l|x_{l−1} ) given x_{l−1} and z_l ∼ N ( 0 , I_{d_l − d_{l−1}} ) . We can pre-train an invertible network to approximate this sampling process , and fix it as the prior conditioning layer . Invertible flow . The invertible flow F_l at level l modifies the surrogate q̃_l towards the target q_l . The more accurate the multiscale structure in Equation 3 is , the better q̃_l approximates q_l , and the closer F_l is to the identity map . Therefore , we parameterize F_l by some flow-based generative model and initialize it as an identity map . In practice , we utilize the invertible block of Glow ( Kingma & Dhariwal , 2018 ) , which consists of actnorm , invertible 1 × 1 convolution , and affine coupling layer , and stack several blocks as the invertible flow F_l in MsIGN . Overall model . MsIGN is a bijective map between the random noise inputs at different scales { z_l }_{l=1}^{L} and the finest-scale sample x_L . The forward direction of MsIGN maps { z_l }_{l=1}^{L} to x_L as below : x_1 = F_1 ( z_1 ) , x̃_l = PC_l ( x_{l−1} , z_l ) , x_l = F_l ( x̃_l ) , 2 ≤ l ≤ L . ( 7 ) As a flow-based generative model , sample generation as in Equation 7 and density evaluation p_θ ( x ) by the change-of-variable rule are accessible and fast for MsIGN . In scenarios where a certain bound needs enforcing on the output , we can append element-wise output activations at the end of MsIGN . For example , image synthesis can use the sigmoid function so that pixel values lie in [ 0 , 1 ] . Such activations should be bijective to keep the invertible relation between the random noise and the sample .
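As a concrete illustration of Lemma 2.1 and Equation 6, here is a minimal NumPy sketch of the Gaussian prior conditioning layer. It assumes the prior covariance Σ_l and the linear pooling matrix A_l are given as dense arrays, and it keeps a full square-root factor instead of the rank-(d_l − d_{l−1}) factor B_l for simplicity; all names are illustrative.

```python
import numpy as np

def build_prior_conditioning(Sigma_l, A_l):
    """Precompute U_{l-1} and a square root of Sigma_{l|l-1} from Lemma 2.1."""
    S_At = Sigma_l @ A_l.T                         # Sigma_l A_l^T
    M = A_l @ S_At                                 # A_l Sigma_l A_l^T
    U = S_At @ np.linalg.inv(M)                    # U_{l-1} = Sigma_l A_l^T M^{-1}
    Sigma_cond = Sigma_l - U @ S_At.T              # Sigma_{l|l-1}
    Sigma_cond = 0.5 * (Sigma_cond + Sigma_cond.T) # symmetrize against round-off
    # Sigma_{l|l-1} is rank d_l - d_{l-1}; we keep a full d_l x d_l factor for simplicity.
    w, V = np.linalg.eigh(Sigma_cond)
    B = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
    return U, B

def prior_conditioning_layer(x_coarse, U, B, rng):
    """Eq. (6): x_l = U_{l-1} x_{l-1} + B_l z_l with z_l ~ N(0, I)."""
    z = rng.standard_normal(B.shape[1])
    return U @ x_coarse + B @ z
```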
This paper presents a model and a corresponding training approach for multi-scale invertible models. The presented model is defined on multiple scales with information on finer scales being conditioned on coarser scales. Data generation is hence done sequentially from a coarser to finer scale. The authors argue that this multi-scale sampling helps in addressing the curse of dimensionality problem by allowing to sample from high density regions more efficiently.
SP:92d112388a1eac20c2208f0596cdfcdcca685c8f
Meta Gradient Boosting Neural Networks
1 INTRODUCTION . While humans can learn quickly with a few samples with prior knowledge and experiences , artificial intelligent algorithms face challenges in dealing with such situations . Learning to learn ( or metalearning ) ( Vilalta & Drissi , 2002 ) emerges as the common practice to address the challenge by leveraging transferable knowledge learned from previous tasks to improve learning on new tasks ( Hospedales et al. , 2020 ) . An important direction in meta-learning research is meta-optimization frameworks ( Lee & Choi , 2018 ; Nichol & Schulman , 2018 ; Rusu et al. , 2019 ) , a.k.a. , model-agnostic meta-learning ( MAML ) ( Finn et al. , 2017 ) . Such frameworks learn initial model parameters from similar tasks and commit to achieving superior performance on new tasks that conform to the same distribution through fast adaptation . They offer excellent flexibility in model choices and demonstrate appealing performance in various domains , such as image classification ( Li et al. , 2017 ; Finn et al. , 2017 ) , language modeling ( Vinyals et al. , 2016 ) , and reinforcement learning ( Fernando et al. , 2018 ; Jaderberg et al. , 2019 ) . Generally , such frameworks define a target model Fθ and a meta-learnerM . The learning tasks T = { T train , T test } are divided into training and testing tasks , where T are generated from the meta-datasetD , i.e. , T ∼ P ( D ) . Each task contains a support set DS and a query set DQ for training and evaluating a local model . The initialization of the model parameter θ is learned by the meta learner , i.e. , θ ←M ( T train ) . We denote the meta-learned parameter as φ so that θ ← φ . For each task , the model obtains locally optimal parameter θ̂ by minimizing the loss L ( Fθ ( DS ) ) . The meta parameter φ will be updated across all training tasks by minimizing the loss ΣT∈T train ( L ( Fθ̂ ( D Q ) ) ) . Generally , it takes only a small number of epochs to learn locally optimal parameters across training tasks so that meta-learned parameter φ can quickly converge to an optimal parameter for new tasks . Most methods assume some transferable knowledge across all tasks and rely on a single shared meta parameter . However , the success of the meta-learners are limited within similar task families , and the single shared meta parameter can not well support fast learning on diverse tasks ( e.g. , a large meta-dataset ) or task distributions ( e.g. , T are generated from multiple meta-datasets ) due to conflicting gradients for those tasks ( Hospedales et al. , 2020 ) . Recent efforts have studied multiple initial conditions to solve the above challenges . Some employ probabilistic models ( Rusu et al. , 2019 ; Finn et al. , 2018 ; Yoon et al. , 2018 ) while others incorporate task-specific information ( Lee & Choi , 2018 ; Vuorio et al. , 2019 ; Alet et al. , 2018 ) . The former learns to obtain an approximate posterior of an unseen task yet needs sufficient samples to get reliable data distributions ; the latter conducts task-specific parameter initialization using multiple meta-learners yet requires expensive computation and can not transfer knowledge across different modes of task distributions . In this work , we aim to resolve the above challenges from a novel perspective by proposing a meta gradient boosting framework . Gradient boosting ( Friedman , 2001 ) aims to build a new learner towards the residuals of the previous prediction result for each step . 
We call the learner for each step as weak learner and make predictions based on summing up the weak learners . Recent research ( Badirli et al. , 2020 ; Olson et al. , 2018 ) has demonstrated the potential of decomposing deep neural nets into an ensemble of sub-networks with each achieving low training errors . We propose to use the first or first few weak learners as the base learner , followed by a series of gradient boosting modules to cope with a diverse array of tasks—the base learner is responsible for inferring transferable knowledge by learning across all tasks ; the gradient-boosting modules are designed to make task-specific updates to the base learner . Compare with existing work , which uses multiple initial conditions , our approach does not require specifying a set of initialization conditions and thus has better flexibility in dealing with multi-mode tasks . Our proposed framework is also more efficient than its counterparts as it does not require a large number of gradient boosting modules . We evaluate the proposed framework on few-shot learning scenarios for both regression and classification tasks . The experimental results show the well performance of the proposed framework , which demonstrates the model ’ s ability in learning with very few cases . 2 RELATED WORK . Meta-learning has the potential of replicating human ability to learn new concepts from one or very few instances . It has recently drawn increasing attention , given its broad applicability to different fields ( Hospedales et al. , 2020 ) . Pioneers ( Finn et al. , 2017 ; Nichol & Schulman , 2018 ) in this topic propose optimization algorithms with learned parameters to automate the exploitation to the structures of learning problems . However , most of them initialize the same set of model parameters for all tasks , which may have different distributions , thus resulting in over-fitting . Recent studies either model the mixture of multiple initial conditions via probabilistic modeling ( Finn et al. , 2018 ; Yoon et al. , 2018 ) or incorporate task-specific knowledge ( Lee & Choi , 2018 ; Alet et al. , 2018 ) , to address the above issues . Yoon et al . ( 2018 ) and Finn et al . ( 2018 ) use variational approximation to enable probabilistic extensions to MAML . But it is unclear how to extend MAML for a wide range of task distributions . Rusu et al . ( 2019 ) consider multiple conditions by borrowing the idea of variational autoencoders ( Kingma & Welling , 2014 ) , which encodes inputs to a low-dimensional latent embedding space and then decodes the learned latent code to generalize taskspecific parameters . Another line of research defines a set of initialization modules and incorporate task-specific information to select task-specific modules ; this way , it can identify the mode of tasks sampled from a multimodal task distribution and adapt quickly through gradient updates ( Vuorio et al. , 2019 ) . Yao et al . ( 2019 ) propose a Hierarchically Structured Meta-Learning ( HSML ) framework to perform soft clustering on tasks . HSML first learns the inputs and then obtains clustering results by the hierarchical clustering structure . HSML tailors the globally shared parameter initialization for each cluster via a parameter gate to initialize all tasks within the clusters . 
The above approaches have common limitations in 1 ) requiring sufficient data samples to generalize the task distribution , and thus may fail in few-shot cases ; 2 ) being computationally expensive , due to the globally stored initialization modules ; 3 ) facing challenges in exhaustively listing every possible initial condition . Two closely related topics to meta-learning are modular approaches ( Andreas et al. , 2016 ) and multitask learning ( Zhang & Yang , 2017 ) . Modular approaches are similar to meta-learning in that the input signal gives relatively direct information about a good structural decomposition of the problem . For example , Alet et al . ( 2018 ) adopt the modular structure and parameter adaptation method for learning reusable modules . Multi-task learning aims to learn a good shared parameter or make the parameters for each task as similar as possible ( Wang et al. , 2020 ) . For example , Zhang et al . ( 2018 ) propose two task networks that share the first few layers for the generic information before applying different prediction layers to different tasks . These approaches differ from meta-learning in requiring fine-tuning of the models over all training samples and thus can not adapt well to new tasks . Our framework , the Meta Gradient Boosting ( MGB ) neural network , is based on the idea of gradient boosting ( Friedman , 2001 ) , which aims to build a new learner towards the residuals of the previous prediction result at each step . The learner for each step is called a weak learner , and the prediction is based on the summation of the weak learners . Weak learners may vary from traditional decision trees ( Chen & Guestrin , 2016 ) to neural networks ( Tannor & Rokach , 2019 ; Badirli et al. , 2020 ) .
Algorithm 1 Training of MGB
1 : Randomly initialize global parameter φ
2 : while not done do
3 : for T ∈ T do
4 : for ( x , y ) ∈ D^S do
5 : Initialize f_θ0 by θ0 ← φ
6 : for k ∈ range ( K ) do
7 : θ ← θ − β ∇_θ L ( y , F_θ )
8 : end for
9 : end for
10 : Get updated parameter θ̂
11 : for ( x , y ) ∈ D^Q do
12 : Calculate predictions F_θ̂ ( x )
13 : Calculate task loss L ( y , F_θ̂ )
14 : end for
15 : end for
16 : Update φ by φ ← φ − γ ∇_φ L_meta
17 : end while
Figure 1 : Example of the model with only one gradient-boosting module . Green lines are for local update and red lines are for global update . A recent study ( Badirli et al. , 2020 ) proposes a general framework for gradient boosting on neural networks , which works for both regression and classification tasks . It uses the deep layers of neural nets as a bagging mechanism in a similar spirit to a random forest classifier ( Veit et al. , 2016 ) . After only slight tuning , deep neural nets can perform well on a wide range of small real-world datasets ( Olson et al. , 2018 ) . These findings demonstrate the potential of decomposing deep neural nets into an ensemble of sub-networks , each achieving low training errors . In our framework , we use the first weak learner or the first few weak learners as the base learner for learning the shared initialization parameter across tasks . The output of each weak learner is then aggregated into the inputs for the next step , constructing an end-to-end learning strategy until the last gradient boosting module .
This way , the base learner serves as transferable knowledge , and the gradient boosting modules following it are trained for task-specific predictions . 3 METHOD . We explore the problem in the context of supervised learning , where input-output pairs are available in both training and validation sets . Similar to previous meta-optimization based approaches ( Finn et al. , 2017 ; Nichol & Schulman , 2018 ) , we assume the tasks are generated from an underlying distribution T ∼ P ( D ) , where D is the meta-dataset , which is either a uni-mode dataset or multi-mode datasets . Given a set of tasks T = { T train , T test } , each task T ∈ T contains a support dataset DS and a query dataset DQ , both representing input-output pairs ( x , y ) . We aim to learn a meta-learnerM to guide the initialization for a target model Fθ so that the target model can quickly adapt and perform well on a given new task . We propose a Meta Gradient Boosting ( MGB ) framework as the target modelFθ , which consists of several weak learners and can be represented asFθ ∼ ΣKk=0fθk . The first weak learner fθ0 or the first few weak learners are regarded as the base learner for learning the shared information across tasks ; the weak learners are gradient boosting modules for capturing task-specific information . The meta learner aims to learn transferable knowledge and provides initialization details for the base learner so that the model can quickly adapt to task-specific predictions with a few gradient-boosting steps . Figure 1 shows an example of our MGB framework under K = 1 , where we update the model locally for task-specific predictions and update the meta-learner globally for all tasks .
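To make the local/global update loop of Algorithm 1 concrete, below is a minimal PyTorch-style sketch of one possible implementation. It uses a first-order (FOMAML-style) meta update, an MSE loss for the regression case, and treats K as both the number of boosting modules and the number of local steps; these simplifications and all names are illustrative assumptions, not the paper's exact procedure.

```python
import copy
import torch

def mgb_meta_train(base, make_boosters, sample_tasks, K=1, beta=0.01, gamma=0.001,
                   meta_steps=1000, loss_fn=torch.nn.functional.mse_loss):
    """First-order sketch of Algorithm 1 (MGB): `base` holds the meta parameter phi."""
    for _ in range(meta_steps):
        meta_grads = [torch.zeros_like(p) for p in base.parameters()]
        tasks = sample_tasks()                                   # list of (support, query) pairs
        for support, query in tasks:
            local = copy.deepcopy(base)                          # theta_0 <- phi (fresh copy per task)
            boosters = make_boosters(K)                          # task-specific weak learners f_1..f_K
            params = list(local.parameters()) + [p for b in boosters for p in b.parameters()]
            opt = torch.optim.SGD(params, lr=beta)
            xs, ys = support
            for _ in range(K):                                   # local updates on the support set D^S
                pred = local(xs) + sum(b(xs) for b in boosters)  # F_theta = sum of weak learners
                loss = loss_fn(pred, ys)
                opt.zero_grad()
                loss.backward()
                opt.step()
            xq, yq = query                                       # task loss on the query set D^Q
            task_loss = loss_fn(local(xq) + sum(b(xq) for b in boosters), yq)
            grads = torch.autograd.grad(task_loss, list(local.parameters()))
            for acc, g in zip(meta_grads, grads):
                acc.add_(g, alpha=1.0 / len(tasks))              # average meta gradient over tasks
        with torch.no_grad():                                    # global update: phi <- phi - gamma * grad
            for p, g in zip(base.parameters(), meta_grads):
                p.add_(g, alpha=-gamma)
    return base
```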
This study is presented clearly, and the core idea is interesting. However, the presented novelty is limited to a globally (for all tasks) and locally (task-specific) learning paradigm using a framework inspired by (Badirli et al., 2020). The authors have presented experimental results for both regression and classification setups, which are interesting.
SP:077926a214f87b9fdcd5a5f9d818d6313437cd90
Test-Time Adaptation and Adversarial Robustness
1 INTRODUCTION . There is a surge of interest to study test-time adaptation to help generalization to unseen domains ( e.g. , recent work by Sun et al . ( 2020 ) ; Wang et al . ( 2020 ) ; Nado et al . ( 2020 ) ) . At the high level , a generic test-time adaptation can be modeled as an algorithm Γ which accepts an ( optional ) labeled training dataset D , an ( optional ) model F trained on D ( usually used as a starting point ) , and an unlabeled test feature set U , outputs a model F̃ = Γ ( F , D , U ) , in order to achieve high test accuracy on U . For large test set U , test-time adaptation can be viewed as a form of transductive learning ( Joachims ( 1999 ) ; Vapnik ( 1998 ) ) ( i.e. , using D , U to train a model to predict specific instances in U ) , which is argued to be easier than more traditional inductive learning . This paper studies test-time adaptation in the context of adversarial robustness ( i.e. , there is an active agent who tries to fool the test-time adaptation by perturbing the input so that F̃ gives wrong predictions ) . There are several motivations in pursuing this direction . First , this question is of practical interest : Many practical ML pipelines run in a batch mode ( footnote 1 : for example , Instagram collects a large batch of photos before sending them to a model to tag people ) , where they first collect a set of unlabelled data points , and then send them to a model ( e.g . Nado et al . ( 2020 ) ) . In such cases , data in the batch may have been adversarially perturbed , and it is a natural question whether we can leverage the large batch size and test-time adaptation to enhance adversarial robustness . Second , from a purely theoretical point of view , since test-time adaptation is a form of transductive learning , it is intriguing to ask whether transductive adversarial learning can be easier , given that traditional adversarial robustness is formulated in the inductive learning setting ( e.g . Madry et al . ( 2018 ) ) . To this end , a recent work by Goldwasser et al . ( 2020 ) shows that , with transductive learning , one can achieve nontrivial guarantees for classes of bounded VC dimension with arbitrary train and test distributions . The current work complements their paper in the setting of deep learning . To study these questions , we formalize a threat model , which we call ( test-time ) maximin threat model , for the adversarial robustness of test-time adaptation . Recall that the classic adversarial robustness game is a minimax game min_F E_V [ max_Ṽ L ( F , Ṽ ) ] , where V is random sampled data , Ṽ is the perturbed data generated from V by the adversary , and L ( F , Ṽ ) is the loss of the model F on Ṽ . By contrast , in the maximin threat model , we allow V to be sampled from a different domain , and the game is maximin : E_V [ max_U min_F̃ L ( F̃ , Ṽ ) ] ( where U is the perturbed features of V , subject to the attack type , and Ṽ is the labeled perturbed data , see Definition 2 ) . By the maximin inequality , it follows that this threat model is no harder than the minimax model ( to allow source and target domains to be different , we need to generalize the classic minimax model , see Definition 3 ) . We then move on to the focus of this work : Whether the maximin threat model is “ strictly weaker ” than the minimax threat model .
We note that any good defender solution ( a robust model ) in the minimax game induces a good defender solution in the maximin game ( an adaptation algorithm that outputs that robust model ) , thus intuitively , the good defender solutions of the minimax model is a subset of the good defender solutions of the maximin threat model . We ask whether such a containment is proper : That is , whether there exists a defender solution that is good in the maximin threat model , but is bad in the minimax threat model . The existence of such a defender will demonstrate that the maximin threat model admits more good solutions . Besides theoretical interest , this question is also of practical importance since these “ new ” solutions may possess desirable properties that good solutions in the minimax threat model may lack . For example , one such property is that the defender solution is attack agnostic ( Goodfellow ( 2018 ) ( pp.30 ) ) : That is , the solution is not to directly optimize the performance measure for a particular type of perturbation2 . To this end , we first present a provable separation between the maximin and minimax threat models in a natural Gaussian data model . In fact , the separation holds even when U only contains a single point , indicating the power of transductive learning . We then move to deep learning . While we do not have provable guarantees , we empirically examine Domain Adverarial Neural Networks ( DANN ) ( Ganin et al . ( 2017 ) ) , an algorithm designed for unsupervised domain adaptation ( UDA ) , as a candidate for the separation . Specifically , we demonstrate that DANN provides nontrivial testtime adversarial robustness against both transfer attacks and adaptive attacks , in both homogeneous and inhomogeneous cases . This is somewhat surprising as DANN is attack agnostic as we mentioned above , and has not been considered for adversarial robustness . Not surprisingly , as we hypothesized for a separation , the accuracy becomes very low when evaluating F̃ in the minimax model . Complementing the above result , we explore the maximin robustness of the recent data-oblivious adaptation algorithms ( namely , the adaptation algorithms do not useD , but just the pretrained model F and unlabeled test set U ) . Specifically , we consider Test-Time Training ( TTT ) by Sun et al . ( 2020 ) 3 . We show that TTT can be easily attacked using simple transfer attacks . While this is not surprising as authors of Sun et al . ( 2020 ) have cautioned that TTT is not designed for adversarial robustness , the situation is in sharp contrast to our results with DANN . The rest of the paper is organized as follows : Section 2 presents the setup . In Section 3 we define threat models . In Section 4 we present theoretical results about separation , and examine DANN as a candidate separation in the deep learning . Finally , Section 5 explores the maximin robustness of oblivious test-time adaptation , and concludes the paper with future directions . 2 PRELIMINARIES . Let F be a model , for a data point ( x , y ) ∈ X ×Y , a loss function ` ( F ; x , y ) give the loss of F on x given the true label y . Let V be a set of labeled data points . We use the notation L ( F , V ) = 1 |V | ∑ ( x , y ) ∈V ` ( F ; x , y ) to denote the empirical loss of F on V . For example , if we use binary loss ` 0,1 ( F ; x , y ) = 1 [ F ( x ) 6= y ] , this gives the test error of F on V . We use the notation V |X to denote the projection of V to its features , that is { ( xi , yi ) } mi=1 7→ { x1 , . . . , xm } . 
Threat model for classic adversarial robustness . To formulate the threat model for test-time adaptation , we first present a threat model for the classic adversarial robustness . Although the classic adversarial robustness can be written down succinctly as a minimax objective , namely min_F E_{( x , y ) ∼ ( X , Y )} [ max_{x′ ∈ N ( x )} ℓ ( F ; x′ , y ) ] ( N ( x ) is a neighborhood function of x , determined by the attack type ) , a threat model formulation will help us develop more nuanced models . ( Footnote 2 : Another consideration , which is beyond the scope of this paper , is the computational feasibility of finding a good solution , given the hardness of minimax optimization ; Katz et al . ( 2017 ) ; Daskalakis et al . ( 2020 ) . ) ( Footnote 3 : While TTT does not use training data D at the test time , it has a special self-training component , and the joint architecture is a Y-structure . A more domain agnostic approach is discussed in Wang et al . ( 2020 ) . ) Definition 1 ( Threat model for classic adversarial robustness ) . Attacker and defender agree on a particular attack type . Attacker is an algorithm A , and defender is a supervised learning algorithm T .
Before game starts . • A ( labeled ) training set D is sampled i.i.d . from ( X , Y ) .
Training time . • ( Defender ) Train a model F on D as F = T ( D ) .
Test time . • A ( labeled ) natural test set V is sampled i.i.d . from ( X , Y ) . • ( Attacker ) On input F , D , and V , A perturbs each point ( x , y ) ∈ V to ( x′ , y ) ( subject to the agreed attack type ) , giving Ṽ = A ( F , D , V ) .
Evaluation . Evaluate the test loss of F on Ṽ , L ( F , Ṽ ) . Attacker ' s goal is to maximize the test loss , while the defender ' s goal is to minimize the test loss .
We stress that the i.i.d . sampling of V is important ( which is also present in the expectation in the minimax objective ) : otherwise an attacker can pick any point that fools F and repeat it arbitrarily many times ( we refer readers to Goodfellow ( 2019 ) for more discussions along this line ) . Notations for models and attacks . In this paper we mainly use PGD attacks ( Projected Gradient Descent attacks ) with norm-based perturbations ( Madry et al . ( 2018 ) ) . For example , given a model F , we use the notation PGD ( F , V ) to denote PGD attacks against F , on data V ( the attack type is specified in the context ) . We adopt the following notations :
T : a target model trained on the labeled target data V .
AdvT : an adversarially trained target model using the labeled target data V .
S : a source model trained on the labeled source data D .
AdvS : an adversarially trained source model using the labeled source data D .
PGD ( · , · ) : PGD attacks on a model and data . For example , PGD ( AdvT , V ) means to use PGD attacks on the model AdvT and data V .
Test-time defenses and BPDA . Various previous works have investigated test-time defenses where a pretrained model is fixed and there is a “ preprocessing procedure ” to preprocess an input before sending it to the model . Several such defenses were described and attacked in Athalye et al . ( 2018 ) , by the BPDA technique ( Backward Pass Differentiable Approximation ) . While syntactically one can fit these defenses into our framework , they only form some very special cases of our framework , which reuse a fixed pretrained model and focus on input sanitization .
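For reference, the PGD ( · , · ) notation above can be made concrete with the following minimal PyTorch sketch of an l_inf-bounded attack in the spirit of Madry et al. (2018); the step size, number of steps, and the assumption that inputs live in [0, 1] are illustrative choices, not specifics from this paper.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD(model, (x, y)): maximize cross-entropy within an l_inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)      # project back into the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)               # keep a valid pixel range
    return x_adv.detach()
```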
As we will show later in the paper , for both of our provable separation and deep learning results , the adaptation algorithms train new models ( beyond sanitizing inputs ) ; and theoretically attacking these adaptations becomes a bilevel optimization . In these cases , it is unclear how to apply BPDA , and indeed it is an intriguing direction to further study attacking unsupervised domain adaptation algorithms , such as DANN .
The paper explores adversarial robustness in a new setting of test-time adaptation. It shows this new problem of “test-time-adapted adversarial robustness” is strictly weaker than the “traditional adversarial robustness” when assuming the training data is available for the “test-time-adapted adversarial robustness”. The gap between the two problems is demonstrated by the simple DANN solution which has good “test-time-adapted adversarial robustness” but bad “traditional adversarial robustness”. The paper also explores the subcase of “test-time-adapted adversarial robustness” when assuming the training data is not available and provide some initial result.
SP:2969ff98eb93abe37242a962df458541311090ff
Subspace Clustering via Robust Self-Supervised Convolutional Neural Network
1 INTRODUCTION . Subspace clustering approaches have achieved encouraging performance when compared with the clustering algorithms that rely on proximity measures between data points . The main idea behind the subspace model is that the data can be drawn from low-dimensional subspaces which are embedded in a high-dimensional ambient space ( Lodhi & Bajwa , 2018 ) . Grouping such data associated with respective subspaces is known as the subspace clustering ( Vidal , 2011 ) . That is , each low-dimensional subspace corresponds to a class or category . Up to now , two main approaches for recovering lowdimensional subspaces are developed : models that are based on the self-representation property , and non-linear generalization of subspace clustering called union of subspaces ( UoS ) ( Lodhi & Bajwa , 2018 ; Lu & Do , 2008 ; Wu & Bajwa , 2014 ; 2015 ) . UoS algorithms are out of the scope of this work . Self-representation subspace clustering is achieved in two steps : ( i ) learning representation matrix C from data X and building corresponding affinity matrix A = |C|+ |CT | ; ( ii ) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix that correspond with the leading k eigenvalues . This second step is known as spectral clustering ( Ng et al. , 2002 ; Von Luxburg , 2007 ) . Owning to the presumed subspace structure , the data points obey the self-expressiveness or self-representation property ( Elhamifar & Vidal , 2013 ; Peng et al. , 2016b ; Liu et al. , 2012 ; Li & Vidal , 2016 ; Favaro et al. , 2011 ) . In other words , each data point can be represented as a linear combination of other points in a dataset : X=XC . The self-representation approach is facing serious limitations regarding real-world datasets . One limitation relates to the linearity assumption because in a wide range of applications samples lie in nonlinear subspaces , e.g . face images acquired under non-uniform illumination and different poses ( Ji et al. , 2017 ) . Standard practice for handling data from nonlinear manifolds is to use the kernel trick on samples mapped implicitly into high dimensional space . Therein , samples better conform to linear subspaces ( Patel et al. , 2013 ; Patel & Vidal , 2014 ; Xiao et al. , 2015 ; Brbić & Kopriva , 2018 ) . However , identifying an appropriate kernel function for a given data set is quite a difficult task ( Zhang et al. , 2019b ) . The second limitation of existing deep SC methods relates to their assumption that the origin of data corruption is known , in which case the proper error model can be employed . In real-word applications origin of data corruption is unknown . That can severely harm the algorithm ’ s learning process if the non-robust loss function is used . Furthermore , validation ( i.e . stopping of the learning process ) in most of the deep SC methods often requires access to the ground-truth labels . That stands for violation of the basic principle of unsupervised machine learning and yields the overly-optimistic results . Dataset size is also a limitation when it comes to memory requirements . Since the self-representation subspace clustering is based on building the affinity matrix , memory complexity increases as the square of the dataset size . However , the latter limitation is not in the main focus of this work . 
Motivated by the exceptional ability of deep neural networks to capture complex underlying structures of data and learn discriminative features for clustering ( Hinton & Salakhutdinov , 2006 ; Dilokthanakul et al. , 2016 ; Ghasedi Dizaji et al. , 2017 ; Tian et al. , 2014 ; Xie et al. , 2016 ) , deep subspace clustering approaches emerged recently ( Ji et al. , 2017 ; Abavisani & Patel , 2018 ; Peng et al. , 2016a ; Yang et al. , 2019 ; Zhou et al. , 2018 ; Ji et al. , 2019b ; Peng et al. , 2018 ; 2017 ; Zhou et al. , 2019 ; Zhang et al. , 2019a ; Kheirandishfard et al. , 2020 ) . In particular , it is shown that convolutional neural networks ( CNNs ) , when applied to images of different classes , can learn features that lie in a UoS ( Lezama et al. , 2018 ) . Mostly , the base of the recently developed deep subspace-clustering networks is convolutional autoencoder . It is an end-to-end fully convolutional network that is based on the minimization of the reconstruction error . Together , the autoencoder and an additional self-expression ( SE ) module are forming a Deep subspace clustering network ( DSCNet ) ( Ji et al. , 2017 ) . Hence , the total loss function of DSCNet is composed of reconstruction loss and SE model loss . That is , during the learning process the clustering quality is not taken into account . Self-supervised convolutional SC network ( S2ConvSCN ) ( Zhang et al. , 2019a ) addressed this issue through the addition of a fully connected layer ( FC ) module and a spectral clustering module that , respectively , generate soft- and pseudo-labels . Dual self-supervision is achieved by forcing these two modules to converge towards consensus . Related accumulated loss , therefore , participates in enhancing the self-representation matrix and the quality of features extracted in the encoder layer . The architecture of S2ConvSCN has a possibility of direct classification once the learning process is completed . A trained encoder and the FC module can make a new network that can directly classify unseen data , also known as an out-of-sample problem . However , while this network can be validated and compared with other algorithms on a separate data set , such an ablation study was not completed . Furthermore , the main disadvantage of the DSCNet architecture , and indirectly S2ConvSCN , is that the network training is stopped when the accuracy is highest ( Ji et al. , 2019a ) . First , it is a direct violation of the unsupervised learning principle as the ground-truth labels are exposed . Second , the reported performance ( Zhang et al. , 2019a ; Ji et al. , 2017 ) is overly-optimistic and can not be compared to other algorithms . Also , as mentioned in ( Haeffele et al. , 2020 ) , most self-expressive based deep subspace clustering models suffer from the need of post-processing the self-representation matrix . Compared to the baseline model , we significantly reduced the post-processing while maintaining the noise-free matrix . Mentioned research problems led to three main contributions of proposed Robust S2ConvSCN : • robustness to errors of the unknown ( arbitrary ) origin is achieved by using the correntropy induced metric ( CIM ) in the self-expression loss , • the network is trained using the early-stopping method while monitoring only the accumulated loss , • thanks to correntropy based loss function the training process is less sensitive to data corruptions which enables the network to generalize better . 
This study also has three side-contributions : • the performance of models is estimated using unseen ( out-of-sample ) data , • block-diagonal regularization of the self-representation matrix is integrated into the gradient descent learning process , • post-processing of the self-representation matrix is reduced to a significant extent . A complete head-to-head comparison of the baseline S2ConvSCN model and our robust approach can be seen in Figure 1 . 2 BACKGROUND AND RELATED WORK . 2.1 MAIN NOTATIONS AND DEFINITIONS . Throughout this paper , matrices are represented with bold capital symbols and vectors with bold lower-case symbols . X ∈ R^{d×N} represents the data matrix comprised of N data samples with dimensionality d. { H_i^{( l )} }_{i=1}^{m ( l )} represent feature maps produced at the output of layer l − 1 . Thus , H^{( 0 )} = X and H^{( L )} = X̂ . X̂ represents the output of the decoder and L represents the number of convolutional layers in the autoencoder . { w_i^{( l )} }_{i=1}^{m ( l )} stand for a set of filters with associated biases { b_i^{( l )} }_{i=1}^{m ( l )} that form a convolutional layer l = 1 , . . . , L. z_n = [ h_1^{( L/2 )} ( : ) . . . h_{m ( L/2 )}^{( L/2 )} ( : ) ]^T ∈ R^{d̂×1} stands for the feature vector comprised of vectorized and concatenated feature maps , with d̂ extracted features , in the top layer L/2 ( encoder output ) representing input sample x_n , n = 1 , . . . , N . C ∈ R^{N×N} stands for the representation matrix in the self-expressive model Z = ZC . A = |C| + |C^T| is the affinity matrix and L = D^{−1/2} A D^{−1/2} is the corresponding graph Laplacian matrix . D is the diagonal degree matrix such that D_ii = Σ_{j=1}^{N} A_ij . ‖X‖_F = ( Σ_{i , j} x_ij^2 )^{1/2} is the Frobenius norm of matrix X . ℓ_p ( x ) = ‖x‖_p = ( Σ_{i=1}^{d} |x_i|^p )^{1/p} , 0 < p ≤ 1 , is the ℓ_p norm of x . ℓ_0 ( x ) = ‖x‖_0 = # { x_i ≠ 0 , i = 1 , . . . , d } , where # denotes the cardinality function , is the ℓ_0 quasi-norm of x . The S_p , 0 < p ≤ 1 , Schatten norms of matrix X are defined as the corresponding ℓ_p norms of the vector of singular values of X , i.e . S_p ( X ) = ‖σ ( X ) ‖_p , where σ ( X ) stands for the vector of singular values of X . Depending on the context , 0 represents a matrix/vector of all zeros and 1 represents a matrix/vector of all ones . Grouping the data according to the linear subspaces they are drawn from is known as subspace clustering ( Vidal , 2011 ) . The problem is formally defined in : Definition 1 . Let X = [ X_1 , . . . , X_k ] be a set of sample vectors drawn from a union of k subspaces in R^d , ∪_{i=1}^{k} { S_i } , of dimensions d_i ≪ min { d , N } , for i = 1 , . . . , k. Let X_i be a collection of N_i samples drawn from subspace S_i , N = Σ_{i=1}^{k} N_i . The problem of subspace clustering is to segment samples into the subspaces they are drawn from . Throughout this paper , as is the case in the majority of other papers , we have assumed that the number of clusters k is known a priori . 2.2 APPROACHES TO SUBSPACE CLUSTERING . Usually , processes that operate in different modes generate data in real-world scenarios . Each mode models such data as lying on a subspace , while the whole process , thus , generates data lying on a union of subspaces ( UoS ) ( Lodhi & Bajwa , 2018 ) . The alternative to the UoS model is the self-representation based subspace model . It implies that every sample from the dataset can be represented as a linear combination of other samples from the same cluster .
While shallow models directly optimize such a self-representation matrix , their deep counterparts train the whole network to better extract features from the raw data and achieve representation linearity . Many approaches to deep subspace clustering are based on the introduction of the self-representation in the feature space ( Abavisani & Patel , 2018 ; Ji et al. , 2017 ; Peng et al. , 2016a ; Zhou et al. , 2018 ; 2019 ; Zhang et al. , 2019a ; Kheirandishfard et al. , 2020 ; Zhang et al. , 2020 ) . However , one weakness of self-expressive deep subspace clustering models is that their performance mainly depends on the self-representation matrix . Thus , elimination of the noise is done by post-processing ( Haeffele et al. , 2020 ) . It appears in many cases that , from the final performance point of view , the post-processing matters more than the depth of the network . By virtue of the self-representation property , improvements of the shallow subspace clustering methods are of direct relevance to their deep counterparts . The subspace clustering task is accomplished through ( i ) learning the representation matrix C from data X , and ( ii ) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix L that correspond with the k leading eigenvalues . This second step is known as spectral clustering ( Ng et al. , 2002 ; Von Luxburg , 2007 ) . Low-rank ( Liu et al. , 2012 ; Favaro et al. , 2011 ) and sparse models ( Elhamifar & Vidal , 2013 ) are among the commonly used algorithms to solve the SC problem . They aim to learn the low-rank and sparse representation matrix by solving the following optimization problem ( Li & Vidal , 2016 ) : min_C λ ‖C‖_{S_p}^p + τ ‖C‖_p^p s.t . Z = ZC , diag ( C ) = 0 , ( 1 ) where λ and τ are nonnegative regularization constants . If the number of layers L = 0 , problem ( 1 ) is related to shallow subspace clustering . The constraint diag ( C ) = 0 is necessary to prevent sparseness-regularized optimization algorithms from converging towards the trivial solution where each data point represents itself . This constraint is not necessary for the problem constrained only by low rank . When data samples are contaminated with additive white Gaussian noise ( AWGN ) , problem ( 1 ) becomes : min_C ‖E‖_F^2 + λ ‖C‖_{S_p}^p + τ ‖C‖_p^p s.t . diag ( C ) = 0 , ( 2 ) where E stands for the modelling error ( noise ) : E = Z − ZC . ( 3 ) Alternatively , the square of the Frobenius norm of C is used for regularization ( Lu et al. , 2012 ) : min_C ‖E‖_F^2 + λ ‖C‖_F^2 . ( 4 ) Objective ( 4 ) is also used in the self-expression module of the S2ConvSCN in ( Zhang et al. , 2019a ) . As seen from ( 2 ) and ( 4 ) , the MSE measure for the discrepancy between Z and its self-representation ZC is justified only for contamination by AWGN . For sample-specific corruptions ( outliers ) the proper norm is ‖E‖_{2,1} , while for large random corruptions the proper choice is ‖E‖_1 ( Liu et al. , 2012 ) . However , errors in real-world data have different origins and magnitudes and may not follow a specific probabilistic model . Sometimes , it is hard to know the true origin of corruptions present in data . Thus , to obtain a method robust to arbitrary corruption we propose to introduce the CIM of the error . The rationale behind the introduction of any regularization on C is to reflect its structural property of block-diagonality . Even though ‖C‖_{S_p} and ‖C‖_p , 0 ≤ p ≤ 1 , in principle satisfy the enforced block-diagonality condition , their approximation of the BD structure of C is indirect ( Lu et al. , 2018 ) .
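For illustration, Objective (4) admits a closed-form minimizer, C = ( Z^T Z + λ I )^{−1} Z^T Z (Lu et al., 2012), which the following minimal NumPy sketch computes together with the affinity matrix A = |C| + |C^T|. The variable names are illustrative, and this is the plain Frobenius-regularized variant, not the robust CIM-based loss proposed in this work.

```python
import numpy as np

def self_expression_lsr(Z, lam=0.1):
    """Closed-form minimizer of ||Z - ZC||_F^2 + lam * ||C||_F^2 (Eq. 4):
    C = (Z^T Z + lam I)^{-1} Z^T Z, where the columns of Z are the samples."""
    G = Z.T @ Z                                               # N x N Gram matrix
    C = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    return C

def affinity_from_representation(C):
    """Affinity matrix A = |C| + |C^T| that feeds spectral clustering."""
    return np.abs(C) + np.abs(C.T)
```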
Hence , for comparison , this study proposes a loss function with gradient-based BD regularization imposed directly on the representation matrix C .
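For reference, the Frobenius-regularized baseline in objective ( 4 ) admits a closed-form minimizer, C = (Z^T Z + λ I)^{-1} Z^T Z, which makes it a convenient sanity check next to the learned, BD-regularized alternatives. A minimal sketch (hypothetical names; Z holds one encoded sample per column):

```python
import numpy as np

def frobenius_self_expression(Z, lam):
    """Closed-form minimizer of ||Z - Z C||_F^2 + lam * ||C||_F^2,
    i.e. C = (Z^T Z + lam I)^{-1} Z^T Z.  Z is (d_hat x N), C is (N x N)."""
    N = Z.shape[1]
    G = Z.T @ Z                                   # Gram matrix of the encoded samples
    return np.linalg.solve(G + lam * np.eye(N), G)

# Example: 50 samples with 30 features each
Z = np.random.randn(30, 50)
C = frobenius_self_expression(Z, lam=1e-1)
```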
This paper presents an approach to deep subspace clustering based on minimizing the correntropy-induced metric (CIM), with the goals of establishing when training should be stopped and of generalizing to unseen data. The main contribution over the existing S2ConvSCN method is replacing the squared-error loss with CIM when optimizing over the affinity matrix. A key benefit of CIM as a loss is that it does not decrease arbitrarily with training epochs, so it provides a means of estimating when training should cease without needing ground-truth labels. The authors argue that CIM "ensures a smooth decrease of the loss function that enables the use of label-free stopping criterion," but this claim is supported only by a minimal empirical evaluation. The authors also include a mechanism for enforcing block-diagonal structure in the learned affinity matrix.
SP:b7532fd6e281d88fff5a0a89c73ae3e6651f8827
UNSUPERVISED ANOMALY DETECTION FROM SEMANTIC SIMILARITY SCORES
1 INTRODUCTION . Anomaly detection or novelty detection aims at identifying patterns in data that are significantly different to what is expected . This problem is inherently a binary classification problem that classifies examples either as in-distribution or out-of-distribution , given a sufficiently large sample from the in-distribution ( training set ) . A natural approach to OOD detection is to learn a density model from the training data and compute the likelihood ratio of OOD examples . However , in practice this approach frequently fails for high-dimensional data ( Nalisnick et al . ( 2019 ) ) , where it has been shown that deep generative models can assign higher likelihood to OOD examples than to in-distribution examples . This surprising result is likely the consequence of how existing deep generative models generalise . For example , Variational Autoencoders ( Kingma & Welling ( 2014 ) ) generalise by superposition of examples , which is a consequence of the stochastic nature of the posterior that can map different examples to the same point in latent space . As superposition is an averaging process that reduces the information content it can be expected that examples of lower complexity than the training examples can map to high likelihood regions in latent space . Note that it is possible for a datapoint to have high likelihood under a distribution yet be nearly impossible to be sampled , a property known as asymptotic equipartition property in information theory Cover & Thomas ( 2001 ) . For autoregressive generative models , such as PixelCNN ( van den Oord et al . ( 2016 ) ) , it has been shown that the pixel-by-pixel generation process is strongly determined by the local surrounding of pixels ( Chen et al . ( 2018 ) ) , where the fact that nearby pixels of training examples frequently share the same color can explain why mono-chromatic images are assigned a high likelihood ( Nalisnick et al . ( 2019 ) ) . Local pixel correlations also seem to be responsible for the failure of generative models based on Normalising Flows to assign correct likelihood values to OOD examples Schirrmeister et al . ( 2020 ) . As a consequence , most of the current OOD detection approaches make use of a score function s ( x ) to classify test examples as in-distribution or OOD . In case that the examples of the training set are labelled , a simple score can be given by s ( x ) = maxy p ( y|x ) , with p ( y|x ) the softmax probability for predicting class labels , y ∈ { 1 , .. , K } ( Hendrycks & Gimpel ( 2017 ) ) . If s ( x ) is below a threshold the test example is classified as OOD . Labelled data allows to learn representations that are associated with the semantic information shared by the examples in the training set , which can be used for OOD detection . However , the approach suffers from the problem that the scores for in-distribution examples can be widely distributed across the interval of possible score values , s ( x ) ∈ [ 1/K , 1 ] , especially if the number of labels are low and the classification task is hard , which strongly increases the false-positive rate . Consequently , better performance was found for approaches that use labeled data for learning a higher dimensional representation that encodes for semantic information ( Lee et al . ( 2018b ) ) . In this representation space the in-distribution occupies just a small volume and a random feature vector would be most likely classified as OOD . 
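As a concrete illustration of the maximum-softmax-probability score s ( x ) = maxy p ( y|x ) discussed above, the following sketch (hypothetical names, assuming a classifier that returns logits) thresholds the score to flag OOD inputs; it illustrates the labelled baseline, not the method proposed in this paper.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability score s(x) = max_y p(y|x)."""
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.5):
    """Classify an example as OOD when its MSP score falls below a threshold."""
    return msp_score(logits) < threshold

# Example: scores for a batch of 4 examples over K = 10 classes
logits = np.random.randn(4, 10)
print(msp_score(logits), flag_ood(logits, threshold=0.7))
```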
Another simplification arises if the OOD detection problem is supervised , with some OOD examples labelled as such and contribute to the training set . In this case the OOD detection problem boils down to an unbalanced classification problem ( Chalapathy & Chawla ( 2019 ) ) . In general OOD detection benefits from separating the factors of variation for the in-distribution in either relevant ( e.g . object identity ) or irrelevant ( e.g . compression artefacts ) using prior knowledge , where the relevant factors are typically those that carry salient semantic information . In line with the arguments put forward by Ahmed & Courville ( 2020 ) , this separation helps an OOD model to systematically generalise , e.g . whether we are allowed to re-colour or add noise to images for data augmentation . Generalisation over the training set is necessary , as learning under insufficient inductive bias would result in misclassification of examples from an in-distribution test set as OOD . Labeled data provide this additional information , as relevant factors can be defined as those that help the classification task , with the limitation that there might be more factors involved in characterising the in-distribution than those needed to predict the labels . In this work , we introduce a general framework for OOD detection problems that does not require label information . Our framework can be widely applied to OOD detection tasks , including visual , audio , and textual data with the only limitation that transformations must be a priori known that conserve the semantics of training examples , such as geometric transformations for images , proximity of time intervals for audio recordings ( van den Oord et al . ( 2018 ) ) , or randomly masking a small fraction of words in a sentence or paragraph ( Devlin et al . ( 2019 ) ) . For visual data we show new state-of-the-art OOD classification accuracies for standard benchmark data sets , surpassing even the accuracies that include labels as additional information . The key contributions of this work are • We propose a new OOD detection framework that is applicable in absence of labeled indistribution data or OOD examples that are labeled as such . • We show that our approach strongly improves OOD detection for challenging tasks in the visual domain • We find that identifying semantically close examples in the training set is central for reliable OOD detection 2 RELATED WORK . Unsupervised Methods using in-distribution labels . Many OOD detection methods make use of labels to generate scores that are either based on class prediction probabilities or on intermediate representations for an in-distribution classification task . For example , Hendrycks & Gimpel ( Hendrycks & Gimpel ( 2017 ) ) used the maximum of the softmax probabilities ( MSP ) to discriminate between OOD and in-distribution . More recent approaches Lee et al . ( 2018a ) ; Winkens et al . ( 2020 ) ; Zhang et al . ( 2020 ) use labels to learn an intermediate representation on which a density distribution ( e.g . multivariate normal distribution or deep generative network ) can be fitted , which then can be used to compute the likelihood of OOD examples . As labels implicitly provide information about the semantic relation of examples in the training set , approaches using label information typically show higher accuracy than unsupervised methods . These approaches can be improved by introducing additional parameters or training strategies . 
For example , MSP was improved by introducing a temperature parameter ( Liang et al . ( 2018 ) ) , alternative losses ( Lee et al . ( 2018a ) ; Vyas et al . ( 2018 ) ) , auxiliary objectives ( Devries & Taylor ( 2018 ) ; Hendrycks et al . ( 2019b ) ; Mohseni et al . ( 2020 ) ) , or outlier exposure ( Hendrycks et al . ( 2019a ) ) . Intermediate representations were improved using a multi-head network architecture ( Shalev et al . ( 2018 ) , contrastive learning Winkens et al . ( 2020 ) , metric learning Masana et al . ( 2018 ) ) . General Unsupervised Methods . If label information is absent , other means must be found to impose an inductive bias on the OOD detection model to generalise over the training set . Existing approaches can be separated in methods that learn generalisable features based on ( i ) self-supervised learning tasks Golan & El-Yaniv ( 2018 ) , transformations that destroy semantics Choi & Chung ( 2020 ) , match of encoder-decoder architectures Xiao et al . ( 2020 ) , or make use of a semantically related auxiliary outlier distribution Schirrmeister et al . ( 2020 ) . The work that is most related to ours is Geometric-Transformation Classification ( GEOM ) , proposed by Golan & El-Yaniv ( 2018 ) and improved by Bergman & Hoshen ( 2020 ) , which belongs to the class of self-supervised learning approaches ( Hendrycks et al . ( 2019b ) ) . The central idea of GEOM is to construct an auxiliary in-distribution classification task by transforming each image of the training set by one of 72 different combinations of geometric transformations with fixed strength , such as rotation , reflection , and translation . The task is to predict which of the 72 transformations has been applied , given a transformed image . GEOM gives examples that show high prediction uncertainty a high OOD score . The relevant features learned by this task are salient geometrical features , such as the typical orientation of an object . Our approach differs from GEOM by the fact that we define the relevant features as those that are invariant under geometric and other transformations , such as cropping and color jitter , which are chosen of moderate strength to not change the semantics of the images in the training set . 3 METHOD . An intuitive approach for OOD detection is to learn a representation that densely maps the indistribution to a small region within a lower dimensional space ( latent space ) , with the consequence that OOD examples will be found outside this region with high probability . The representation should include the salient semantic information of the training set , to ensure that test examples from the in-distribution are not misclassified as OOD , but disregard irrelevant factors of variation that would prevent dense mapping . As learning this mapping by Autoencoders is difficult , we split the OOD detection task into finding a semantically dense mapping of in-distribution onto a d-dimensional unit-hypersphere by contrastive learning , followed by classifying neighbouring examples on the unit-hypersphere as semantically close or distant . 3.1 LEARNING SEMANTIC SIMILARITY . A contrastive objective can be used to align feature vectors h ( x ) ∈ Rd that are semantically similar and at the same time distributes examples of the training set almost uniformly over the unit-hypersphere ( Wang & Isola ( 2020 ) ; Chen et al . ( 2020 ) ) . This representation allows to identify for any test example the semantically close example from the training set . 
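The two-step idea above (map examples to the unit hypersphere, then compare a test example against its semantically closest training examples) can be sketched as follows; the cosine-similarity score and the variable names are illustrative assumptions, not the exact score defined by the authors.

```python
import numpy as np

def normalize(F):
    """Project feature vectors onto the unit hypersphere: h = f / ||f||."""
    return F / np.linalg.norm(F, axis=-1, keepdims=True)

def similarity_score(h_test, H_train, k=5):
    """Average cosine similarity to the k nearest training embeddings.
    Low scores indicate the test example is semantically distant (OOD)."""
    sims = H_train @ h_test                 # cosine similarities on the hypersphere
    return np.sort(sims)[-k:].mean()

# Example with random stand-in embeddings
H_train = normalize(np.random.randn(1000, 128))
h_test = normalize(np.random.randn(128))
print(similarity_score(h_test, H_train))
```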
The mapping h ( x ) = f ( x ) / ‖f ( x ) ‖ can be learned by training a deep neural network f ( x ) to minimise the contrastive loss $\mathcal{L}[h] = -\,\mathbb{E}_{(x,x')\sim T_h(x,x')}\Big[\log \frac{e^{h(x)^\top h(x')/\tau}}{\mathbb{E}_{x_{\mathrm{neg}}\sim T_h}\big[e^{h(x)^\top h(x_{\mathrm{neg}})/\tau}\big]}\Big]$ , ( 1 ) where τ denotes a temperature parameter . Here , each positive pair ( x , x′ ) is the result of sampling from a distribution of transformations Th ( x , x′ ) that conserve semantics between x and x′ , with Th ( x′ ) the marginal of Th ( x , x′ ) . For datasets used to benchmark object recognition tasks , samples ( x , x′ ) ∼ Th ( x , x′ ) can be generated by picking a single example from the training set and independently applying random transformations , such as geometric transformations , colour distortions , or cropping ( Appendix D ) . The negative pairs can be generated by applying random transformations to different training examples . We emphasise that the types of transformations and their strengths essentially define the semantics we want to encode and thus determine whether , for example , the image of a black swan is classified as OOD for an in-distribution that contains only white swans . The design of transformations that capture the underlying semantics of the training dataset requires either a higher-level understanding of the data or extensive sampling of different combinations of transformations with evaluation on an in-distribution validation set .
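A minimal sketch of the contrastive loss ( 1 ) with in-batch negatives is given below (NumPy, hypothetical names); it assumes H1 and H2 hold unit-norm embeddings of the two augmented views of a batch, and the batch itself stands in for the expectation over negatives.

```python
import numpy as np

def normalize(F):
    return F / np.linalg.norm(F, axis=1, keepdims=True)

def contrastive_loss(H1, H2, tau=0.1):
    """InfoNCE-style contrastive loss with in-batch negatives.
    H1, H2: (B, d) arrays of unit-norm embeddings h(x), h(x') of positive pairs."""
    sims = H1 @ H2.T / tau                                  # h(x)^T h(x') / tau
    # denominator: batch average approximates E_{x_neg}[exp(h(x)^T h(x_neg)/tau)]
    log_prob = sims - np.log(np.exp(sims).mean(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                      # positives on the diagonal

# Example with random unit-norm embeddings for a batch of 8 pairs
H1 = normalize(np.random.randn(8, 32))
H2 = normalize(np.random.randn(8, 32))
print(contrastive_loss(H1, H2))
```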
The authors present a new algorithm for unsupervised anomaly detection in diverse applications such as visual, audio, and text data. They propose a two-step method: first, they use contrastive learning to find a semantically dense mapping of the data onto the unit hypersphere; then, they classify test examples as in- or out-of-distribution based on the amount of semantic information they share with their neighbours. Finally, they show that their method outperforms several existing methods on a range of anomaly detection problems in the visual domain.
SP:f0e0d909df518f25eb9243837939225d7db1196e
Learning to Generate 3D Shapes with Generative Cellular Automata
1 INTRODUCTION . Probabilistic 3D shape generation aims to learn and sample from the distribution of diverse 3D shapes and has applications including 3D contents generation or robot interaction . Specifically , learning the distribution of shapes or scenes can automate the process of generating diverse and realistic virtual environments or new object designs . Likewise , modeling the conditional distribution of the whole scene given partial raw 3D scans can help the decision process of a robot , by informing various possible outputs of occluded space . The distribution of plausible shapes in 3D space is diverse and complex , and we seek a scalable formulation of the shape generation process . Pioneering works on 3D shape generation try to regress the entire shape ( Dai et al . ( 2017 ) ) which often fail to recover fine details . We propose a more modular approach that progressively generates shape by a sequence of local updates . Our work takes inspiration from prior works on autoregressive models in the image domains , such as the variants of pixelCNN ( van den Oord & Kalchbrenner ( 2016 ) ; van den Oord et al . ( 2016 ; 2017 ) ) , which have been successful in image generation . The key idea of pixelCNN ( van den Oord et al . ( 2016 ) ) is to order the pixels , and then learn the conditional distribution of the next pixel given all of the previous pixels . Thus generating an image becomes the task of sampling pixel-by-pixel in the predefined order . Recently , PointGrow ( Sun et al . ( 2020 ) ) proposes a similar approach in the field of 3D generation , replacing the RGB values of pixels with the coordinates of points and sampling point-by-point in a sequential manner . While the work proposes a promising interpretable generation process by sequentially growing a shape , the required number of sampling procedures expands linearly with the number of points , making the model hard to scale to high-resolution data . We believe that a more scalable solution in 3D is to employ the local update rules of cellular automata ( CA ) . CA , a mathematical model operating on a grid , defines a state to be a collection of cells that carries values in the grid ( Wolfram ( 1982 ) ) . The CA repeatedly mutates its states based on the predefined homogeneous update rules only determined by the spatial neighborhood of the current cell . In contrast to the conventional CA where the rules are predefined , we employ a neural network to infer the stochastic sequential transition rule of individual cells based on Markov chain . The obtained homogeneous local rule for the individual cells constitutes the 3D generative model , named Generative Cellular Automata ( GCA ) . When the rule is distributed into the group of occupied cells of an arbitrary starting shape , the sequence of local transitions eventually evolves into an instance among the diverse shapes from the multi-modal distribution . The local update rules of CA greatly reduce the search space of voxel occupancy , exploiting the sparsity and connectivity of 3D shapes . We suggest a simple , progressive training procedure to learn the distribution of local transitions of which repeated application generates the shape of the data distribution . We represent the shape in terms of surface points and store it within a 3D grid , and the transition rule is trained only on the occupied cells by employing a sparse CNN ( Graham et al . ( 2018 ) ) . 
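The sequential generation process described here can be sketched as a loop over local transitions: starting from a seed state (a set of occupied cells), a learned model assigns occupancy probabilities to the neighbourhood of every occupied cell, and the next state is sampled cell-wise, as formalized in the next section. The sketch below uses a stand-in probability model rather than the authors' sparse CNN.

```python
import numpy as np
from itertools import product
from collections import defaultdict

def neighborhood(cell, r=1):
    """Cells within an L_inf ball of radius r around a cell on the 3D grid."""
    return [tuple(c + d for c, d in zip(cell, off))
            for off in product(range(-r, r + 1), repeat=3)]

def gca_step(state, local_prob, r=1, rng=np.random):
    """One GCA transition: each occupied cell proposes occupancy probabilities for
    its neighbours; proposals are aggregated by cell-wise averaging and the next
    state is sampled independently per cell (Bernoulli)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for c in state:
        for n in neighborhood(c, r):
            sums[n] += local_prob(c, n, state)   # stand-in for the sparse-CNN output
            counts[n] += 1
    return {n for n in sums if rng.random() < sums[n] / counts[n]}

# Example: a toy local rule that occupies neighbour cells with probability 0.5
state = {(0, 0, 0)}                              # single-cell seed for generation
for _ in range(5):
    state = gca_step(state, local_prob=lambda c, n, s: 0.5)
```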
The sparse representation can capture the high-resolution context information , and yet learn the effective rule enjoying the expressive power of deep CNN as demonstrated in various computer vision tasks ( Krizhevsky et al . ( 2012 ) ; He et al . ( 2017 ) ) . Inspired by Bordes et al . ( 2017 ) , our model learns sequences that are slightly different from the sampling chain but converge to the full shapes in the training data . The network successfully learns the update rules of CA , such that a single inference samples from the distribution of diverse modes along the surface . The contributions of the paper are highlighted as follows : ( 1 ) We propose Generative Cellular Automata ( GCA ) , a Markov chain based 3D generative model that iteratively mends the shape to a learned distribution , generating diverse and high-fidelity shapes . ( 2 ) Our work is the first to learn the local update rules of cellular automata for 3D shape generation in voxel representation . This enables the use of an expressive sparse CNN and reduces the search space of voxel occupancy by fully exploiting sparsity and connectivity of 3D shapes . ( 3 ) Extensive experiments show that our method has competitive performance against the state-of-the-art models in probabilistic shape completion and shape generation . 2 3D SHAPE GENERATION WITH GENERATIVE CELLULAR AUTOMATA . Let Zn be an n-dimensional uniform grid space , where n = 3 for a 3D voxel space . A 3D shape is represented as a state s ⊂ Z3 , which is an ordered set of occupied cells c ∈ Z3 in a binary occupancy grid based on the location of the surface . Note that our voxel representation is different from the conventional occupancy grid , where 1 represents that the cell is inside the surface . Instead , we only store the cells lying on the surface . This representation can better exploit the sparsity of 3D shape than the full occupancy grid . The shape generation process is presented as a sequence of state variables s0 : T that is drawn from the following Markov Chain : s0 ∼ p0 st+1 ∼ pθ ( st+1|st ) ( 1 ) where p0 is the initial distribution and pθ is the homogeneous transition kernel parameterized by θ . We denote the sampled sequence s0 → s1 → · · · → sT as a sampling chain . Given the data set D containing 3D shapes x ∈ D , our objective is to learn the parameters θ of transition kernel pθ , such that the marginal distribution of final generated sample p ( sT ) = ∑ s0 : T−1 p 0 ( s0 ) ∏ 0≤t < T pθ ( s t+1|st ) is close to the data distribution . The initial state s0 can be defined differently depending on the task to solve . For the task of probabilistic shape completion , s0 is given as the partial input shape . For shape generation , we set the initial state s0 to be the most simple state we can think of , a single cell { c } . Figure 1 presents examples of sampling chains of shape generation , where the starting shape s0 is merely a single cell . The GCA further splits the transition kernel pθ ( st+1|st ) to be the combination of local update rules on individual occupied cells ci ∈ st , as depicted in Figure 2 . The cellular transition is implemented with the sparse convolution , which is translation invariant if implemented with a fully convolutional network , and outputs the distribution of local occupied cells . Then individual predictions are aggregated by cell-wise averaging , resulting in the binary probability distribution for occupancy of each cell that follows the Bernoulli distribution : pθ ( s t+1|st ) = ∏ c∈Zn pθ ( c|st ) . 
( 2 ) The next state st+1 is sampled independently for individual cells from the obtained distribution and the sampling chain continues to the next time step . For each transition pθ ( st+1|st ) , we limit the search space of the occupied cells by confining our predictions within the neighborhood of the occupied cells . The underlying assumption is that the occupied cells of a valid shape are connected and a successful generation is possible by progressive growing into the immediate neighborhood of the given state . Specifically , the output of the sparse convolution is the occupancy probability of neighborhood cells pi = pθ ( N ( ci ) |st ) , where the neighborhood cells are those that fall within a radius r ball centered at the cell , N ( ci ) = { c′ ∈ Zn| d ( ci , c′ ) ≤ r } given a distance metric d. Other cells are ignored assuming they have probability 0 of occupancy . If the input state has M occupied cells st = { c1 , · · · , cM } , the sparse convolution predicts the occupancy probability of individual cells with N -dimension vectors P = { p1 , · · · , pM } , where N is the number of neighborhood cells fixed by the distance threshold r within the uniform grid Zn . After the cell-wise averaging step , the aggregated probability is nonzero for coordinates in N ( st ) = ⋃ c∈st N ( c ) . Then the cell-wise sampling in Eq . ( 2 ) is performed only within N ( st ) , instead of the full grid Zn , leading to an efficient sampling procedure . The stochastic local transition rule pθ ( N ( ci ) |st ) changes the state of a cell ’ s immediate neighborhood N ( ci ) , but the inference is determined from a larger perception neighborhood . In contrast , classical cellular automata updates a state of a cell determined by a fixed rule given the observation of the cell ’ s immediate neighborhood . The large perception neighborhood of GCA is effectively handled by deep sparse convolutional network , and results in convergence to a single consistent global shape out of diverse possible output shapes as further discussed in the appendix F . 3 TRAINING GENERATIVE CELLULAR AUTOMATA . We aim to learn the parameters for the local transition probability pθ ( N ( ci ) |st ) , whose repetitive application generates shape that follows the complex distribution of the data set . If we have sequences of sampling chains , then all current and next states can serve as the training data . Because we only have the set of complete shapes D , we emulate the sequence of sampling chain and obtain the intermediate states . The emulated sequence of a sampling chain may start from an arbitrary state , and needs to converge to the full shape x ∈ D after local transitions to the neighbor of the previous state . A naive method would be to sample the next state st from the sampling chain and maximize pθ ( x ∩N ( st ) |st ) , the probability of the shape that is closest to x among reachable forms from the current state1 , similar to scheduled sampling ( Bengio et al . ( 2015 ) ) . While this approach clearly emulates the inference procedure , it leads to learning a biased estimator as pointed out in Huszár ( 2015 ) . More importantly , the emulated sequence can not consistently learn the full shape x as it is not guaranteed to visit the state s such that x ⊂ N ( s ) . We instead employ the use of infusion training procedure suggested by Bordes et al . ( 2017 ) . Specifically , the infusion chain , denoted as s̃0 → s̃1 → ... 
→ s̃T , is obtained as following : s̃0 ∼ q0 ( s̃0|x ) qt ( s̃t+1|s̃t , x ) = ∏ c̃∈N ( s̃t ) ( 1− αt ) pθ ( c̃|s̃t ) + αtδx ( c̃ ) ( 3 ) where q0 indicates the initial distribution of infusion chain , and qt is the infusion transition kernel at time step t. For probabilistic shape completion s̃0 is sampled as a subset of x , while for shape generation s̃0 is a single cell c ∈ x . The transition kernel qt is defined for cells in the neighborhood c̃ ∈ N ( s̃t ) as the mixture of pθ ( c̃|s̃t ) and δx ( c̃ ) , which are the transition kernel of the sampling chain and the infusion of the ground shape x , respectively . δx ( c̃ ) is crucial to guarantee the sequence to ultimately reach the ground truth shape , and is formulated as the Bernoulli distribution with probability 1 , if c̃ ∈ x , else 0 . We set the infusion rate to increase linearly with respect to time step , i.e. , αt = max ( wt , 1 ) , where w > 0 is the speed of infusion rate as in Bordes et al . ( 2017 ) . We can prove that the training procedure converges to the shape x in the training data if it is composed of weakly connected cells . We first define the connectivity of two states . Definition 1 . We call a state s̃′ to be partially connected to state x , if for any cell b ∈ x , there is a cell c0 ∈ s̃′ ∩ x , and a finite sequence of coordinates c0 : Tb in x that starts with c0 and ends with cTb = b , where each subsequent element is closer than the given threshold distance r , i.e. , for any b ∈ x , ∃c0 : Tb , such that ci ∈ x , d ( ci , ci+1 ) ≤ r , 0 ≤ i ≤ Tb and c0 ∈ s̃′ , cTb = b . The shape x is partially connected to any s̃′ 6= ∅ if we can create a sequence of coordinates between any pair of cells in x that is composed of local transitions bounded by the distance r. This is a very weak connectivity condition , and any set that overlaps with x is partially connected to x for shapes with continuous surfaces , which include shapes in the conventional dataset . Now assuming that the state s̃t ′ is partially connected to the state x , we recursively create a sequence by defining s̃t+1 = N ( s̃t ) ∩ x , which is the next sequence of infusion chain with infusion rate 1 . Since we use a linear scheduler for infusion rate , we can assume that there exists a state s̃t ′ such that infusion rate αt ′ = 1 . The following proposition proves that the sequence ( s̃t ) t≥t′ converges to x with local transitions . Proposition 1 . Let state s̃t ′ be partially connected to state x , where x has a finite number of occupied cells . We denote a sequence of states s̃t ′ : ∞ , recursively defined as s̃t+1 = N ( s̃t ) ∩ x . Then , there exists an integer T ′ such that s̃t = x , t ≥ T ′ . Proof . The proof is found in the appendix A . The proposition states that the samples of infusion chain eventually converge to the shape x , and thus we can compute the nonzero likelihood p ( x|s̃T ) during training if T is large enough . One can also interpret the training procedure as learning the sequence of states that converges to x and is close to 1Since we defined a state as a set of occupied cells , union ( ∪ ) , intersection ( ∩ ) , and subset ( ⊂ ) operations can be defined as regular sets . Figure 3 : Qualitative comparison of probabilistic shape completion . Figure 4 : Samples from shape generation . the sampling chain . We empirically observe that most of the training samples converge to the shape x before the infusion rate becomes 1 . The training procedure can be summarized as the following : 1 . 
Sample infusion chain s̃0 : T from s̃0 ∼ q0 ( s̃0|x ) , s̃t+1 ∼ qt ( s̃t+1|s̃t , x ) . 2 . For each state s̃t , maximize the log-likelihood that has the closest form to x via gradient descent , i.e. , θ ← θ + η ∂ log pθ ( x∩N ( s̃ t ) |s̃t ) ∂θ . The full training algorithm utilizes a data buffer to de-correlate the gradients of subsequent time steps , and accelerates the process by controlling the time steps based on the current state . More details regarding the full training algorithm can be found in the appendix B .
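To illustrate the training procedure summarised above, the sketch below draws one infusion-chain transition ( 3 ) by mixing the model's per-cell probabilities with the ground-truth occupancy, and identifies the reachable part of the target shape whose log-likelihood a gradient step would maximise. The probability model, its update, and all names are stand-ins: this is a minimal sketch, assuming an infusion rate that increases linearly with t and is capped at 1, not the full algorithm with the data buffer.

```python
import numpy as np
from itertools import product
from collections import defaultdict

def neighborhood(cell, r=1):
    """Cells within an L_inf ball of radius r around a cell on the 3D grid."""
    return [tuple(c + d for c, d in zip(cell, off))
            for off in product(range(-r, r + 1), repeat=3)]

def infusion_step(state, target, local_prob, t, w=0.05, r=1, rng=np.random):
    """One infusion-chain transition q^t: each neighbour cell is sampled from the
    mixture (1 - alpha_t) * p_theta(c | s) + alpha_t * delta_x(c), where delta_x
    infuses the ground-truth shape x (a set of cells)."""
    alpha = min(w * t, 1.0)                       # linearly increasing infusion rate
    sums, counts = defaultdict(float), defaultdict(int)
    for c in state:
        for n in neighborhood(c, r):
            sums[n] += local_prob(c, n, state)    # stand-in for the sparse-CNN output
            counts[n] += 1
    return {n for n in sums
            if rng.random() < (1 - alpha) * sums[n] / counts[n] + alpha * float(n in target)}

def training_target(state, target, r=1):
    """x intersected with N(s): the closest shape to x reachable in one transition.
    A gradient step would maximise log p_theta(training_target | state)."""
    return {n for c in state for n in neighborhood(c, r)} & target
```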
The paper proposes a generative method for 3D objects in a voxel representation. Given an initial voxel configuration (e.g., a partial shape, or even a single voxel), the method learns a local transition kernel for a Markov chain that decides how to evolve the configuration; sampling iteratively from these probabilities yields a final shape. The paper reports fairly good results on shape completion and shape generation.
SP:7c44bf5a4a8d5e5ee1e86ee4582c42186e2df72c
Decentralized Deterministic Multi-Agent Reinforcement Learning
[ Zhang , ICML 2018 ] provided the first decentralized actor-critic algorithm for multi-agent reinforcement learning ( MARL ) that offers convergence guarantees . In that work , policies are stochastic and are defined on finite action spaces . We extend those results to offer a provably-convergent decentralized actor-critic algorithm for learning deterministic policies on continuous action spaces . Deterministic policies are important in real-world settings . To handle the lack of exploration inherent in deterministic policies , we consider both off-policy and on-policy settings . We provide the expression of a local deterministic policy gradient , decentralized deterministic actor-critic algorithms and convergence guarantees for linearly-approximated value functions . This work will help enable decentralized MARL in high-dimensional action spaces and pave the way for more widespread use of MARL . 1 Introduction . Cooperative multi-agent reinforcement learning ( MARL ) has seen considerably less use than its single-agent analog , in part because often no central agent exists to coordinate the cooperative agents . As a result , decentralized architectures have been advocated for MARL . Recently , decentralized architectures have been shown to admit convergence guarantees comparable to their centralized counterparts under mild network-specific assumptions ( see Zhang et al . [ 2018 ] , Suttle et al . [ 2019 ] ) . In this work , we develop a decentralized actor-critic algorithm with deterministic policies for multi-agent reinforcement learning . Specifically , we extend results for actor-critic with stochastic policies ( Bhatnagar et al . [ 2009 ] , Degris et al . [ 2012 ] , Maei [ 2018 ] , Suttle et al . [ 2019 ] ) to handle deterministic policies . Indeed , theoretical and empirical work has shown that deterministic algorithms outperform their stochastic counterparts in high-dimensional continuous action settings ( Silver et al . [ January 2014b ] , Lillicrap et al . [ 2015 ] , Fujimoto et al . [ 2018 ] ) . Deterministic policies further avoid estimating the complex integral over the action space . Empirically this allows for lower variance of the critic estimates and faster convergence . On the other hand , deterministic policy gradient methods suffer from reduced exploration . For this reason , we provide both off-policy and on-policy versions of our results , the off-policy version allowing for significant improvements in exploration . The contributions of this paper are three-fold : ( 1 ) we derive the expression of the gradient in terms of the long-term average reward , which is needed in the undiscounted multi-agent setting with deterministic policies ; ( 2 ) we show that the deterministic policy gradient is the limiting case , as policy variance tends to zero , of the stochastic policy gradient ; and ( 3 ) we provide a decentralized deterministic multi-agent actor critic algorithm and prove its convergence under linear function approximation . 2 Background . Consider a system of N agents denoted by N = [ N ] in a decentralized setting . Agents determine their decisions independently based on observations of their own rewards . 
Agents may however com-35 municate via a possibly time-varying communication network , characterized by an undirected graph36 Gt = ( N , Et ) , where Et is the set of communication links connecting the agents at time t ∈ N. The37 networked multi-agent MDP is thus characterized by a tuple ( S , { Ai } i∈N , P , { Ri } i∈N , { Gt } t≥0 ) 38 where S is a finite global state space shared by all agents in N , Ai is the action space of agent i , and39 { Gt } t≥0 is a time-varying communication network . In addition , let A = ∏ i∈N Ai denote the joint40 action space of all agents . Then , P : S × A × S → [ 0 , 1 ] is the state transition probability of the41 MDP , and Ri : S ×A → R is the local reward function of agent i . States and actions are assumed42 globally observable whereas rewards are only locally observable . At time t , each agent i chooses its43 action ait ∈ Ai given state st ∈ S , according to a local parameterized policy πiθi : S ×A i → [ 0 , 1 ] ,44 where πiθi ( s , a i ) is the probability of agent i choosing action ai at state s , and θi ∈ Θi ⊆ Rmi is45 the policy parameter . We pack the parameters together as θ = [ ( θ1 ) > , · · · , ( θN ) > ] > ∈ Θ where46 Θ = ∏ i∈N Θ i . We denote the joint policy by πθ : S×A → [ 0 , 1 ] where πθ ( s , a ) = ∏ i∈N π i θi ( s , a i ) .47 Note that decisions are decentralized in that rewards are observed locally , policies are evaluated48 locally , and actions are executed locally . We assume that for any i ∈ N , s ∈ S , ai ∈ Ai , the49 policy function πiθi ( s , a i ) > 0 for any θi ∈ Θi and that πiθi ( s , a i ) is continuously differentiable with50 respect to the parameters θi over Θi . In addition , for any θ ∈ Θ , let P θ : S × S → [ 0 , 1 ] denote51 the transition matrix of the Markov chain { st } t≥0 induced by policy πθ , that is , for any s , s′ ∈ S,52 P θ ( s′|s ) = ∑ a∈A πθ ( s , a ) · P ( s′|s , a ) . We make the standard assumption that the Markov chain53 { st } t≥0 is irreducible and aperiodic under any πθ and denote its stationary distribution by dθ.54 Our objective is to find a policy πθ that maximizes the long-term average reward over the network.55 Let rit+1 denote the reward received by agent i as a result of taking action a i t. Then , we wish to solve:56 max θ J ( πθ ) = lim T→∞ 1 T E [ T−1∑ t=0 1 N ∑ i∈N rit+1 ] = ∑ s∈S , a∈A dθ ( s ) πθ ( s , a ) R̄ ( s , a ) , where R̄ ( s , a ) = ( 1/N ) · ∑ i∈N R i ( s , a ) is the globally averaged reward function . Let r̄t =57 ( 1/N ) · ∑ i∈N r i t , then R̄ ( s , a ) = E [ r̄t+1|st = s , at = a ] , and therefore , the global relative action-58 value function is : Qθ ( s , a ) = ∑ t≥0 E [ r̄t+1 − J ( θ ) |s0 = s , a0 = a , πθ ] , and the global relative59 state-value function is : Vθ ( s ) = ∑ a∈A πθ ( s , a ) Qθ ( s , a ) . For simplicity , we refer to Vθ and Qθ60 as simply the state-value function and action-value function . We define the advantage function as61 Aθ ( s , a ) = Qθ ( s , a ) − Vθ ( s ) .62 Zhang et al . [ 2018 ] provided the first provably convergent MARL algorithm in the context of the63 above model . The fundamental result underlying their algorithm is a local policy gradient theorem:64 ∇θiJ ( µθ ) = Es∼dθ , a∼πθ [ ∇θi log πiθi ( s , a i ) ·Aiθ ( s , a ) ] , where Aiθ ( s , a ) = Qθ ( s , a ) − Ṽ iθ ( s , a−i ) is a local advantage function and Ṽ iθ ( s , a−i ) =65 ∑ ai∈Ai π i θi ( s , a i ) Qθ ( s , a i , a−i ) . 
This theorem has important practical value as it shows that the66 policy gradient with respect to each local parameter θi can be obtained locally using the corresponding67 score function ∇θi log πiθi provided that agent i has an unbiased estimate of the advantage functions68 Aiθ or Aθ . With only local information , the advantage functions A i θ or Aθ can not be well estimated69 since the estimation requires the rewards { rit } i∈N of all agents . Therefore , they proposed a consensus70 based actor-critic that leverages the communication network to share information between agents71 by placing a weight ct ( i , j ) on the message transmitted from agent j to agent i at time t. Their72 action-value function Qθ was approximated by a parameterized function Q̂ω : S ×A → R , and each73 agent i maintains its own parameter ωi , which it uses to form a local estimate Q̂ωi of the global Qθ.74 At each time step t , each agent i shares its local parameter ωit with its neighbors on the network , and75 the shared parameters are used to arrive at a consensual estimate of Qθ over time.76 3 Local Gradients of Deterministic Policies77 While the use of a stochastic policy facilitates the derivations of convergence proofs , most real-world78 control tasks require a deterministic policy to be implementable . In addition , the quantities estimated79 in the deterministic critic do not involve estimation of the complex integral over the action space found80 in the stochastic version . This offers lower variance of the critic estimates and faster convergence . To81 address the lack of exploration that comes with deterministic policies , we provide both off-policy82 and on-policy versions of our results . Our first requirement is a local deterministic policy gradient83 theorem.84 We assume that Ai = Rni . We make standard regularity assumptions on our MDP . That is , we85 assume that for any s , s′ ∈ S , P ( s′|s , a ) and Ri ( s , a ) are bounded and have bounded first and86 second derivatives . We consider local deterministic policies µiθi : S → A i with parameter vector87 θi ∈ Θi , and denote the joint policy by µθ : S → A , where µθ ( s ) = ( µ1θ1 ( s ) , . . . , µNθN ( s ) ) and88 θ = [ ( θ1 ) > , . . . , ( θN ) > ] > . We assume that for any s ∈ S , the deterministic policy function µiθi ( s ) 89 is twice continuously differentiable with respect to the parameter θi over Θi . Let P θ denote the90 transition matrix of the Markov chain { st } t≥0 induced by policy µθ , that is , for any s , s′ ∈ S,91 P θ ( s′|s ) = P ( s′|s , µθ ( s ) ) . We assume that the Markov chain { st } t≥0 is irreducible and aperiodic92 under any µθ and denote its stationary distribution by dµθ .93 Our objective is to find a policy µθ that maximizes the long-run average reward:94 max θ J ( µθ ) = Es∼dµθ [ R̄ ( s , µθ ( s ) ) ] = ∑ s∈S dµθ ( s ) R̄ ( s , µθ ( s ) ) . Analogous to the stochastic policy case , we denote the action-value function by Qθ ( s , a ) =95 ∑ t≥0 E [ r̄t+1 − J ( µθ ) |s0 = s , a0 = a , µθ ] , and the state-value function by Vθ ( s ) = Qθ ( s , µθ ( s ) ) .96 When there is no ambiguity , we will denote J ( µθ ) and dµθ by simply J ( θ ) and dθ , respectively . 
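The long-run average-reward objective above, J ( µθ ) = Σs dµθ ( s ) R̄ ( s , µθ ( s ) ) , can be evaluated exactly in a small tabular example, which is a useful sanity check when implementing the gradients that follow. A minimal sketch (hypothetical names, finite states, one deterministic joint action per state) that computes the stationary distribution of the induced chain and then the objective:

```python
import numpy as np

def average_reward(P, R_bar, mu):
    """J(mu_theta) = sum_s d_mu(s) * R_bar(s, mu(s)) for a finite MDP.

    P:     (S, A, S) transition probabilities P(s' | s, a)
    R_bar: (S, A)    globally averaged reward
    mu:    (S,)      deterministic joint-action index chosen in each state
    """
    S = P.shape[0]
    P_mu = P[np.arange(S), mu]                    # induced chain P^theta(s'|s) = P(s'|s, mu(s))
    # stationary distribution: left eigenvector of P_mu with eigenvalue 1
    evals, evecs = np.linalg.eig(P_mu.T)
    d = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    d = d / d.sum()
    return float(d @ R_bar[np.arange(S), mu])

# Example: 3 states, 2 joint actions, random (valid) transition kernel and rewards
rng = np.random.default_rng(0)
P = rng.random((3, 2, 3)); P /= P.sum(axis=-1, keepdims=True)
R_bar = rng.random((3, 2))
print(average_reward(P, R_bar, mu=np.array([0, 1, 1])))
```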
We97 present three results for the long-run average reward : ( 1 ) an expression for the local deterministic98 policy gradient in the on-policy setting ∇θiJ ( µθ ) , ( 2 ) an expression for the gradient in the off-policy99 setting , and ( 3 ) we show that the deterministic policy gradient can be seen as the limit of the stochastic100 one.101 On-Policy Setting102 Theorem 1 ( Local Deterministic Policy Gradient Theorem - On Policy ) . For any θ ∈ Θ , i ∈ N ,103 ∇θiJ ( µθ ) exists and is given by104 ∇θiJ ( µθ ) = Es∼dµθ [ ∇θiµiθi ( s ) ∇ai Qθ ( s , µ −i θ−i ( s ) , a i ) ∣∣ ai=µi θi ( s ) ] . The first step of the proof consists in showing that ∇θJ ( µθ ) =105 Es∼dθ [ ∇θµθ ( s ) ∇a Qθ ( s , a ) |a=µθ ( s ) ] . This is an extension of the well-known stochastic106 case , for which we have∇θJ ( πθ ) = Es∼dθ [ ∇θ log ( πθ ( a|s ) ) Qθ ( s , a ) ] , which holds for a long-term107 averaged return with stochastic policy ( e.g Theorem 1 of Sutton et al . [ 2000a ] ) . See the Appendix for108 the details.109 Off-Policy Setting In the off-policy setting , we are given a behavior policy π : S → P ( A ) , and110 our goal is to maximize the long-run average reward under state distribution dπ:111 Jπ ( µθ ) = Es∼dπ [ R̄ ( s , µθ ( s ) ) ] = ∑ s∈S dπ ( s ) R̄ ( s , µθ ( s ) ) . ( 1 ) Note that we consider here an excursion objective ( Sutton et al . [ 2009 ] , Silver et al . [ January 2014a ] ,112 Sutton et al . [ 2016 ] ) since we take the average over the state distribution of the behaviour policy π of113 the state-action reward when selecting action given by the target policy µθ . We thus have:114 Theorem 2 ( Local Deterministic Policy Gradient Theorem - Off Policy ) . For any θ ∈ Θ , i ∈ N ,115 π : S → P ( A ) a fixed stochastic policy , ∇θiJπ ( µθ ) exists and is given by116 ∇θiJπ ( µθ ) = Es∼dπ [ ∇θiµiθi ( s ) ∇ai R̄ ( s , µ −i θ−i ( s ) , a i ) ∣∣ ai=µi θi ( s ) ] . Proof . Since dπ is independent of θ we can take the gradient on both sides of ( 1 ) 117 ∇θJπ ( µθ ) = Es∼dπ [ ∇θµθ ( s ) ∇aR̄ ( s , µθ ( s ) ) ∣∣ a=µθ ( s ) ] . Given that∇θiµjθ ( s ) = 0 if i 6= j , we have∇θµθ ( s ) = Diag ( ∇θ1µ1θ1 ( s ) , . . . , ∇θNµ N θN ( s ) ) and the118 result follows.119 This result implies that , off-policy , each agent needs access to µ−i θ−it ( st ) for every t .120 Limit Theorem As noted by Silver et al . [ January 2014b ] , the fact that the deterministic gradient121 is a limit case of the stochastic gradient enables the standard machinery of policy gradient , such as122 compatible-function approximation ( Sutton et al . [ 2000b ] ) , natural gradients ( Kakade [ 2001 ] ) , on-line123 feature adaptation ( Prabuchandran et al . [ 2016 ] , ) and actor-critic ( Konda [ 2002 ] ) to be used with124 deterministic policies . We show that it holds in our setting . The proof can be found in the Appendix.125 Theorem 3 ( Limit of the Stochastic Policy Gradient for MARL ) . Let πθ , σ be a stochastic policy126 such that πθ , σ ( a|s ) = νσ ( µθ ( s ) , a ) , where σ is a parameter controlling the variance , and νσ satisfy127 Condition 1 in the Appendix . Then,128 lim σ↓0 ∇θJπθ , σ ( πθ , σ ) = ∇θJµθ ( µθ ) where on the l.h.s the gradient is the standard stochastic policy gradient and on the r.h.s . 
the gradient129 is the deterministic policy gradient.130 4 Algorithms131 We provide two decentralized deterministic actor-critic algorithms , one on-policy and the other132 off-policy and demonstrate their convergence in the next section ; assumptions and proofs are provided133 in the Appendix.134 On-Policy Deterministic Actor-Critic135 Algorithm 1 Networked deterministic on-policy actor-critic Initialize : step t = 0 ; parameters Ĵ i0 , ω i 0 , ω̃ i 0 , θ i 0 , ∀i ∈ N ; state s0 ; stepsizes { βω , t } t≥0 , { βθ , t } t≥0 Draw ai0 = µ i θi0 ( s0 ) and compute ãi0 = ∇θiµiθi0 ( s0 ) Observe joint action a0 = ( a10 , . . . , a N 0 ) and ã0 = ( ã10 , . . . , ã N 0 ) repeat for i ∈ N do Observe st+1 and reward rit+1 = r i ( st , at ) Update Ĵ it+1 ← ( 1− βω , t ) · Ĵ it + βω , t · rit+1 Draw action at+1 = µiθit ( st+1 ) and compute ã i t+1 = ∇θiµiθit ( st+1 ) end for Observe joint action at+1 = ( a1t+1 , . . . , a N t+1 ) and ãt+1 = ( ã1t+1 , . . . , ã N t+1 ) for i ∈ N do Update : δit ← rit+1 − Ĵ it + Q̂ωit ( st+1 , at+1 ) − Q̂ωit ( st , at ) Critic step : ω̃it ← ωit + βω , t · δit · ∇ωQ̂ωi ( st , at ) ∣∣∣ ω=ωit Actor step : θit+1 = θit + βθ , t · ∇θiµiθit ( st ) ∇aiQ̂ωit ( st , a −i t , a i ) ∣∣∣ ai=ait Send ω̃it to the neighbors { j ∈ N : ( i , j ) ∈ Et } over Gt Consensus step : ωit+1 ← ∑ j∈N c ij t · ω̃ j t end for Update t← t+ 1 until end Consider the following on-policy algorithm . The actor step is based on an expression for∇θiJ ( µθ ) 136 in terms of∇aiQθ ( see Equation ( 15 ) in the Appendix ) . We approximate the action-value function Qθ137 using a family of functions Q̂ω : S×A → R parameterized by ω , a column vector in RK . Each agent138 i maintains its own parameter ωi and uses Q̂ωi as its local estimate of Qθ . The parameters ωi are139 updated in the critic step using consensus updates through a weight matrix Ct = ( cijt ) i , j ∈ RN×N140 where cijt is the weight on the message transmitted from i to j at time t , namely:141 Ĵ it+1 = ( 1− βω , t ) · Ĵ it + βω , t · rit+1 ( 2 ) ω̃it = ω i t + βω , t · δit · ∇ωQ̂ωi ( st , at ) ∣∣∣ ω=ωit ( 3 ) ωit+1 = ∑ j∈N cijt · ω̃ j t ( 4 ) with142 δit = r i t+1 − Ĵ it + Q̂ωit ( st+1 , at+1 ) − Q̂ωit ( st , at ) . For the actor step , each agent i improves its policy via:143 θit+1 = θ i t + βθ , t · ∇θiµiθit ( st ) · ∇aiQ̂ωit ( st , a −i t , a i ) ∣∣∣ ai=ait . ( 5 ) Since Algorithm 1 is an on-policy algorithm , each agent updates the critic using only ( st , at , st+1 ) , at144 time t knowing that at+1 = µθt ( st+1 ) . The terms in blue are additional terms that need to be shared145 when using compatible features ( this is explained further in the next section ) .146 Off-Policy Deterministic Actor-Critic We further propose an off-policy actor-critic algorithm,147 defined in Algorithm 2 to enable better exploration capability . Here , the goal is to maximize148 Jπ ( µθ ) where π is the behavior policy . To do so , the globally averaged reward function R̄ ( s , a ) is149 approximated using a family of functions ˆ̄Rλ : S ×A → R that are parameterized by λ , a column150 vector in RK . 
Each agent i maintains its own parameter λi and uses ˆ̄Rλi as its local estimate of R̄.151 Based on ( 1 ) , the actor update is152 θit+1 = θ i t + βθ , t · ∇θiµiθit ( st ) · ∇ai ˆ̄Rλit ( st , µ −i θ−it ( st ) , a i ) ∣∣∣ ai=µ θit ( st ) , ( 6 ) which requires each agent i to have access to µj θjt ( st ) for j ∈ N .153 The critic update is154 λ̃it = λ i t + βλ , t · δit · ∇λ ˆ̄Rλi ( st , at ) ∣∣∣ λ=λit ( 7 ) λit+1 = ∑ j∈N cijt λ̃ j t , ( 8 ) with155 δit = r i ( st , at ) − ˆ̄Rλit ( st , at ) . ( 9 ) In this case , δit was motivated by distributed optimization results , and is not related to the local156 TD-error ( as there is no `` temporal '' relationship for R ) . Rather , it is simply the difference between157 the sample reward and the bootstrap estimate . The terms in blue are additional terms that need to be158 shared when using compatible features ( this is explained further in the next section ) .159 5 Convergence160 To show convergence , we use a two-timescale technique where in the actor , updating deterministic161 policy parameter θi occurs more slowly than that of ωi and Ĵ i in the critic . We study the asymptotic162 behaviour of the critic by freezing the joint policy µθ , then study the behaviour of θt under convergence163 of the critic . To ensure stability , projection is often assumed since it is not clear how boundedness of164 Algorithm 2 Networked deterministic off-policy actor-critic Initialize : step t = 0 ; parameters λi0 , λ̃ i 0 , θ i 0 , ∀i ∈ N ; state s0 ; stepsizes { βλ , t } t≥0 , { βθ , t } t≥0 Draw ai0 ∼ πi ( s0 ) , compute ȧi0 = µiθi0 ( s0 ) and ã i 0 = ∇θiµiθi0 ( s0 ) Observe joint action a0 = ( a10 , . . . , a N 0 ) , ȧ0 = ( ȧ 1 0 , . . . , ȧ N 0 ) and ã0 = ( ã10 , . . . , ã N 0 ) repeat for i ∈ N do Observe st+1 and reward rit+1 = r i ( st , at ) end for for i ∈ N do Update : δit ← rit+1 − ˆ̄Rλit ( st , at ) Critic step : λ̃it ← λit + βλ , t · δit · ∇λ ˆ̄Rλi ( st , at ) ∣∣∣ λ=λit Actor step : θit+1 = θit + βθ , t · ∇θiµiθit ( st ) · ∇ai ˆ̄Rλit ( st , µ −i θ−it ( st ) , a i ) ∣∣∣ ai=µ θit ( st ) Send λ̃it to the neighbors { j ∈ N : ( i , j ) ∈ Et } over Gt end for for i ∈ N do Consensus step : λit+1 ← ∑ j∈N c ij t · λ̃ j t Draw action at+1 ∼ π ( st+1 ) , compute ȧit+1 = µiθit+1 ( st+1 ) and compute ã i t+1 = ∇θiµiθit+1 ( st+1 ) end for Observe joint action at+1 = ( a1t+1 , . . . , a N t+1 ) , ȧt+1 = ( ȧ 1 t+1 , . . . , ȧ N t+1 ) and ãt+1 = ( ã1t+1 , . . . , ã N t+1 ) Update t← t+ 1 until end { θit } can otherwise be ensured ( see Bhatnagar et al . [ 2009 ] ) . However , in practice , convergence is165 typically observed even without the projection step ( see Bhatnagar et al . [ 2009 ] , Degris et al . [ 2012 ] ,166 Prabuchandran et al . [ 2016 ] , Zhang et al . [ 2018 ] , Suttle et al . [ 2019 ] ) . We also introduce the following167 technical assumptions which will be needed in the statement of the convergence results.168 Assumption 1 ( Linear approximation , average-reward ) . For each agent i , the average-reward function169 R̄ is parameterized by the class of linear functions , i.e. , ˆ̄Rλi , θ ( s , a ) = wθ ( s , a ) ·λi where wθ ( s , a ) =170 [ wθ,1 ( s , a ) , . . . , wθ , K ( s , a ) ] ∈ RK is the feature associated with the state-action pair ( s , a ) . The171 feature vectors wθ ( s , a ) , as well as ∇awθ , k ( s , a ) are uniformly bounded for any s ∈ S , a ∈ A , k ∈172 J1 , KK . 
Furthermore , we assume that the feature matrix Wπ ∈ R|S|×K has full column rank , where173 the k-th column of Wπ , θ is [ ∫ A π ( a|s ) wθ , k ( s , a ) da , s ∈ S ] for any k ∈ J1 , KK.174 Assumption 2 ( Linear approximation , action-value ) . For each agent i , the action-value function175 is parameterized by the class of linear functions , i.e. , Q̂ωi ( s , a ) = φ ( s , a ) · ωi where φ ( s , a ) =176 [ φ1 ( s , a ) , . . . , φK ( s , a ) ] ∈ RK is the feature associated with the state-action pair ( s , a ) . The feature177 vectors φ ( s , a ) , as well as∇aφk ( s , a ) are uniformly bounded for any s ∈ S , a ∈ A , k ∈ { 1 , . . . , K } .178 Furthermore , we assume that for any θ ∈ Θ , the feature matrix Φθ ∈ R|S|×K has full column rank,179 where the k-th column of Φθ is [ φk ( s , µθ ( s ) ) , s ∈ S ] for any k ∈ J1 , KK . Also , for any u ∈ RK ,180 Φθu 6= 1.181 Assumption 3 ( Bounding θ ) . The update of the policy parameter θi includes a local projection by182 Γi : Rmi → Θi that projects any θit onto a compact set Θi that can be expressed as { θi|qij ( θi ) ≤183 0 , j = 1 , . . . , si } ⊂ Rmi , for some real-valued , continuously differentiable functions { qij } 1≤j≤si184 defined on Rmi . We also assume that Θ = ∏N i=1 Θ i is large enough to include at least one local185 minimum of J ( θ ) .186 We use { Ft } to denote the filtration with Ft = σ ( sτ , Cτ−1 , aτ−1 , rτ−1 , τ ≤ t ) .187 Assumption 4 ( Random matrices ) . The sequence of non-negative random matrices { Ct = ( cijt ) ij } 188 satisfies:189 1 . Ct is row stochastic and E ( Ct|Ft ) is a.s. column stochastic for each t , i.e. , Ct1 = 1 and190 1 > E ( Ct|Ft ) = 1 > a.s . Furthermore , there exists a constant η ∈ ( 0 , 1 ) such that , for any191 cijt > 0 , we have c ij t ≥ η.192 2 . Ct respects the communication graph Gt , i.e. , cijt = 0 if ( i , j ) /∈ Et.193 3 . The spectral norm of E [ C > t · ( I − 11 > /N ) · Ct ] is smaller than one.194 4 . Given the σ-algebra generated by the random variables before time t , Ct , is conditionally195 independent of st , at and rit+1 for any i ∈ N .196 Assumption 5 ( Step size rules , on-policy ) . The stepsizes βω , t , βθ , t satisfy:197 ∑ t βω , t = ∑ t βθ , t =∞∑ t ( β2ω , t + β 2 θ , t ) < ∞∑ t |βθ , t+1 − βθ , t| < ∞ . In addition , βθ , t = o ( βω , t ) and limt→∞βω , t+1/βω , t = 1.198 Assumption 6 ( Step size rules , off-policy ) . The step-sizes βλ , t , βθ , t satisfy:199 ∑ t βλ , t = ∑ t βθ , t =∞ , ∑ t β2λ , t + β 2 θ , t < ∞ βθ , t = o ( βλ , t ) , lim t→∞ βλ , t+1/βλ , t = 1 . On-Policy Convergence To state convergence of the critic step , we define Dsθ = Diag [ dθ ( s ) , s ∈ S ] , R̄θ = [ R̄ ( s , µθ ( s ) ) , s ∈ S ] > ∈ R|S| and the operator TQθ : R|S| → R|S| for any action-value vector Q ∈ R|S| ( and not R|S|·|A| since there is a mapping associating an action to each state ) as : TQθ ( Q ′ ) = R̄θ − J ( µθ ) · 1 + P θQ′ . Theorem 4 . Under Assumptions 3 , 4 , and 5 , for any given deterministic policy µθ , with { Ĵt } and { ωt } generated from ( 2 ) , we have limt→∞ 1N ∑ i∈N Ĵ i t = J ( µθ ) and limt→∞ω i t = ωθ a.s. for any i ∈ N , where J ( µθ ) = ∑ s∈S dθ ( s ) R̄ ( s , µθ ( s ) ) is the long-term average return under µθ , and ωθ is the unique solution to200 Φθ > Dsθ [ TQθ ( Φθωθ ) − Φθωθ ] = 0 . ( 10 ) Moreover , ωθ is the minimizer of the Mean Square Projected Bellman Error ( MSPBE ) , i.e. 
, the solution to minimize ω ‖Φθω −ΠTQθ ( Φθω ) ‖ 2 Dsθ , where Π is the operator that projects a vector to the space spanned by the columns of Φθ , and ‖·‖2Dsθ201 denotes the euclidean norm weighted by the matrix Dsθ.202 To state convergence of the actor step , we define quantities ψit , θ , ξ i t and ξ i t , θ as203 ψit , θ = ∇θiµiθi ( st ) and ψ i t = ψ i t , θt = ∇θiµ i θit ( st ) , ξit , θ = ∇aiQ̂ωθ ( st , a −i t , ai ) ∣∣∣ ai=ai=µi θit ( st ) = ∇aiφ ( st , a−it , ai ) ∣∣ ai=ai=µi θit ( st ) ωθ , ξit = ∇aiQ̂ωit ( st , a −i t , ai ) ∣∣∣ ai=µi θi ( st ) = ∇aiφ ( st , a−it , ai ) ∣∣ ai=µi θi ( st ) ωit . Additionally , we introduce the operator Γ̂ ( · ) as204 Γ̂i [ g ( θ ) ] = lim 0 < η→0 Γi [ θi + η · g ( θ ) ] − θi η ( 11 ) for any θ ∈ Θ and g : Θ→ Rmi a continuous function . In case the limit above is not unique we take205 Γ̂i [ g ( θ ) ] to be the set of all possible limit points of ( 11 ) .206 Theorem 5 . Under Assumptions 2 , 3 , 4 , and 5 , the policy parameter θit obtained from ( 5 ) converges207 a.s. to a point in the set of asymptotically stable equilibria of208 θ̇i = Γ̂i [ Est∼dθ , µθ [ ψit , θ · ξit , θ ] ] , for any i ∈ N . ( 12 ) In the case of multiple limit points , the above is treated as a differential inclusion rather than an209 ODE.210 The convergence of the critic step can be proved by taking similar steps as that in Zhang et al . [ 2018 ] .211 For the convergence of the actor step , difficulties arise from the projection ( which is handled using212 Kushner-Clark Lemma Kushner and Clark [ 1978 ] ) and the state-dependent noise ( that is handled by213 “ natural ” timescale averaging Crowder [ 2009 ] ) . Details are provided in the Appendix.214 Remark . Note that that with a linear function approximator Qθ , ψt , θ · ξt , θ =215 ∇θµθ ( st ) ∇aQ̂ωθ ( st , a ) ∣∣∣ a=µθ ( st ) may not be an unbiased estimate of∇θJ ( θ ) :216 Es∼dθ [ ψt , θ·ξt , θ ] = ∇θJ ( θ ) +Es∼dθ [ ∇θµθ ( s ) · ( ∇aQ̂ωθ ( s , a ) ∣∣∣ a=µθ ( s ) − ∇aQωθ ( s , a ) |a=µθ ( s ) ) ] . A standard approach to overcome this approximation issue is via compatible features ( see , for217 example , Silver et al . [ January 2014a ] and Zhang and Zavlanos [ 2019 ] ) , i.e . φ ( s , a ) = a · ∇θµθ ( s ) > ,218 giving , for ω ∈ Rm,219 Q̂ω ( s , a ) = a · ∇θµθ ( s ) > ω = ( a− µθ ( s ) ) · ∇θµθ ( s ) > ω + V̂ω ( s ) , with V̂ω ( s ) = Q̂ω ( s , µθ ( s ) ) and ∇aQ̂ω ( s , a ) ∣∣∣ a=µθ ( s ) = ∇θµθ ( s ) > ω . We thus expect that the convergent point of ( 5 ) corresponds to a small neighborhood of a local220 optimum of J ( µθ ) , i.e. , ∇θiJ ( µθ ) = 0 , provided that the error for the gradient of the action-221 value function ∇aQ̂ω ( s , a ) ∣∣∣ a=µθ ( s ) − ∇aQθ ( s , a ) |a=µθ ( s ) is small . However , note that using222 compatible features requires computing , at each step t , φ ( st , at ) = at · ∇θµθ ( st ) > . Thus , in223 Algorithm 1 , each agent observes not only the joint action at+1 = ( a1t+1 , . . . , a N t+1 ) but also224 ( ∇θ1µ1θ1t ( st+1 ) , . . . , ∇θNµ N θNt ( st+1 ) ) ( see the parts in blue in Algorithm 1 ) .225 Off-Policy Convergence226 Theorem 6 . Under Assumptions 1 , 4 , and 6 , for any given behavior policy π and any θ ∈ Θ , with227 { λit } generated from ( 7 ) , we have limt→∞λit = λθ a.s. 
for any i ∈ N , where λθ is the unique228 solution to229 Bπ , θ · λθ = Aπ , θ · dsπ ( 13 ) where dsπ = [ dπ ( s ) , s ∈ S ] > , Aπ , θ = [ ∫ A π ( a|s ) R̄ ( s , a ) w ( s , a ) > da , s ∈ S ] ∈ RK×|S| and230 Bπ , θ = [ ∑ s∈S d π ( s ) ∫ A π ( a|s ) wi ( s , a ) · w ( s , a ) > da , 1 ≤ i ≤ K ] ∈ RK×K .231 From here on we let232 ξit , θ = ∇ai ˆ̄Rλθ ( st , µ −i θ−it ( st ) , ai ) ∣∣∣ ai=µi θit ( st ) = ∇aiw ( st , µ−iθ−it ( st ) , ai ) ∣∣∣ ai=µi θit ( st ) λθ ξit = ∇ai ˆ̄Rλit ( st , µ −i θ−it ( st ) , ai ) ∣∣∣ ai=µi θit ( st ) = ∇aiw ( st , µ−iθ−i ( st ) , ai ) ∣∣ ai=µi θi ( st ) λit and we keep233 ψit , θ = ∇θiµiθi ( st ) , and ψ i t = ψ i t , θt = ∇θiµ i θit ( st ) . Theorem 7 . Under Assumptions 1 , 3 , 4 , and 6 , the policy parameter θit obtained from ( 6 ) converges234 a.s. to a point in the asymptotically stable equilibria of235 θ̇i = Γi [ Es∼dπ [ ψit , θ · ξit , θ ] ] . ( 14 ) We define compatible features for the action-value and the average-reward function in an analogous236 manner : wθ ( s , a ) = ( a− µθ ( s ) ) · ∇θµθ ( s ) > . For λ ∈ Rm,237 ˆ̄Rλ , θ ( s , a ) = ( a− µθ ( s ) ) · ∇θµθ ( s ) > · λ ∇a ˆ̄Rλ , θ ( s , a ) = ∇θµθ ( s ) > · λ and we have that , for λ∗ = argmin λ Es∼dπ [ ‖∇a ˆ̄Rλ , θ ( s , µθ ( s ) ) −∇aR̄ ( s , µθ ( s ) ) ‖2 ] : ∇θJπ ( µθ ) = Es∼dπ [ ∇θµθ ( s ) · ∇aR̄ ( s , a ) ∣∣ a=µθ ( s ) ] = Es∼dπ [ ∇θµθ ( s ) · ∇a ˆ̄Rλ∗ , θ ( s , a ) ∣∣∣ a=µθ ( s ) ] . The use of compatible features requires each agent to observe not only the joint action taken238 at+1 = ( a 1 t+1 , . . . , a N t+1 ) and the “ on-policy action ” ȧt+1 = ( ȧ 1 t+1 , . . . , ȧ N t+1 ) , but also ãt+1 =239 ( ∇θ1µ1θ1t ( st+1 ) , . . . , ∇θNµ N θNt ( st+1 ) ) ( see the parts in blue in Algorithm 2 ) .240 We illustrate algorithm convergence on multi-agent extension of a continuous bandit problem from241 Sec . 5.1 of Silver et al . [ January 2014b ] . Details are in the Appendix . Figure 2 shows the convergence242 of Algorithms 1 and 2 averaged over 5 runs . In all cases , the system converges and the agents are243 able to coordinate their actions to minimize system cost . 244 6 Conclusion245 We have provided the tools needed to implement decentralized , deterministic actor-critic algorithms246 for cooperative multi-agent reinforcement learning . We provide the expressions for the policy247 gradients , the algorithms themselves , and prove their convergence in on-policy and off-policy settings.248 We also provide numerical results for a continuous multi-agent bandit problem that demonstrates249 the convergence of our algorithms . Our work differs from Zhang and Zavlanos [ 2019 ] as the latter250 was based on policy consensus whereas ours is based on critic consensus . Our approach represents251 agreement between agents on every participants ’ contributions to the global reward , and as such,252 provides a consensus scoring function with which to evaluate agents . Our approach may be used253 in compensation schemes to incentivize participation . An interesting extension of this work would254 be to prove convergence of our actor-critic algorithm for continuous state spaces , as it may hold255 with assumptions on the geometric ergodicity of the stationary state distribution induced by the256 deterministic policies ( see Crowder [ 2009 ] ) . The expected policy gradient ( EPG ) of Ciosek and257 Whiteson [ 2018 ] , a hybrid between stochastic and deterministic policy gradient , would also be258 interesting to leverage . 
The Multi-Agent Deep Deterministic Policy Gradient algorithm ( MADDPG ) 259 of Lowe et al . [ 2017 ] assumes partial observability for each agent and would be a useful extension,260 but it is likely difficult to extend our convergence guarantees to the partially observed setting.261 References262 Albert Benveniste , Pierre Priouret , and Michel Métivier . Adaptive Algorithms and Stochastic263 Approximations . Springer-Verlag , Berlin , Heidelberg , 1990 . ISBN 0-387-52894-6.264 Shalabh Bhatnagar , Richard S. Sutton , Mohammad Ghavamzadeh , and Mark Lee . Natural actor-critic265 algorithms . Automatica , 45 ( 11 ) :2471–2482 , November 2009 . ISSN 0005-1098. doi : 10.1016/j.266 automatica.2009.07.008 . URL http : //dx.doi.org/10.1016/j.automatica.2009.07.008.267 Kamil Ciosek and Shimon Whiteson . Expected Policy Gradients for Reinforcement Learning . arXiv268 e-prints , art . arXiv:1801.03326 , Jan 2018.269 Martin Crowder . Stochastic approximation : A dynamical systems viewpoint by vivek s. borkar.270 International Statistical Review , 77 ( 2 ) :306–306 , 2009.271 Thomas Degris , Martha White , and Richard S. Sutton . Off-policy actor-critic . CoRR , abs/1205.4839,272 2012 . URL http : //arxiv.org/abs/1205.4839.273 Scott Fujimoto , Herke van Hoof , and Dave Meger . Addressing function approximation error in actor-274 critic methods . CoRR , abs/1802.09477 , 2018 . URL http : //arxiv.org/abs/1802.09477.275 Sham Kakade . A natural policy gradient . In Proceedings of the 14th International Conference on276 Neural Information Processing Systems : Natural and Synthetic , NIPS ’ 01 , pages 1531–1538 , Cam-277 bridge , MA , USA , 2001 . MIT Press . URL http : //dl.acm.org/citation.cfm ? id=2980539.278 2980738.279 Vijaymohan Konda . Actor-critic Algorithms . PhD thesis , Cambridge , MA , USA , 2002 . AAI0804543.280 Harold J . ( Harold Joseph ) Kushner and ( joint author . ) Clark , Dean S. Stochastic approximation281 methods for constrained and unconstrained systems . New York : Springer-Verlag , 1978 . ISBN282 0387903410.283 Timothy P. Lillicrap , Jonathan J . Hunt , Alexander Pritzel , Nicolas Manfred Otto Heess , Tom Erez,284 Yuval Tassa , David Silver , and Daan Wierstra . Continuous control with deep reinforcement285 learning . CoRR , abs/1509.02971 , 2015.286 Ryan Lowe , Yi Wu , Aviv Tamar , Jean Harb , Pieter Abbeel , and Igor Mordatch . Multi-agent actor-287 critic for mixed cooperative-competitive environments . Neural Information Processing Systems288 ( NIPS ) , 2017.289 Hamid Reza Maei . Convergent actor-critic algorithms under off-policy training and function approxi-290 mation . CoRR , abs/1802.07842 , 2018 . URL http : //arxiv.org/abs/1802.07842.291 P. Marbach and J. N. Tsitsiklis . Simulation-based optimization of markov reward processes . IEEE292 Transactions on Automatic Control , 46 ( 2 ) :191–209 , Feb 2001 . ISSN 0018-9286. doi : 10.1109/9.293 905687.294 K. J. Prabuchandran , Shalabh Bhatnagar , and Vivek S. Borkar . Actor-critic algorithms with online295 feature adaptation . ACM Trans . Model . Comput . Simul. , 26 ( 4 ) :24:1–24:26 , February 2016 . ISSN296 1049-3301. doi : 10.1145/2868723 . URL http : //doi.acm.org/10.1145/2868723.297 Martin L. Puterman . Markov Decision Processes : Discrete Stochastic Dynamic Programming . John298 Wiley & Sons , Inc. , New York , NY , USA , 1st edition , 1994 . ISBN 0471619779.299 David Silver , Guy Lever , Nicolas Heess , Thomas Degris , Daan Wierstra , and Martin Riedmiller.300 Deterministic Policy Gradient Algorithms . 
International Conference on Machine Learning , pages301 387–395 , January 2014a.302 David Silver , Guy Lever , Nicolas Heess , Thomas Degris , Daan Wierstra , and Martin Riedmiller.303 Deterministic Policy Gradient Algorithms . International Conference on Machine Learning , pages304 387–395 , January 2014b.305 Wesley Suttle , Zhuoran Yang , Kaiqing Zhang , Zhaoran Wang , Tamer Basar , and Ji Liu . A multi-agent306 off-policy actor-critic algorithm for distributed reinforcement learning . CoRR , abs/1903.06372,307 2019 . URL http : //arxiv.org/abs/1903.06372.308 Richard S Sutton , David A. McAllester , Satinder P. Singh , and Yishay Mansour . Policy gradient309 methods for reinforcement learning with function approximation . In S. A. Solla , T. K. Leen , and310 K. Müller , editors , Advances in Neural Information Processing Systems 12 , pages 1057–1063 . MIT311 Press , 2000a.312 Richard S Sutton , David A. McAllester , Satinder P. Singh , and Yishay Mansour . Policy gradient313 methods for reinforcement learning with function approximation . In S. A. Solla , T. K. Leen , and314 K. Müller , editors , Advances in Neural Information Processing Systems 12 , pages 1057–1063 . MIT315 Press , 2000b.316 Richard S. Sutton , Hamid Reza Maei , Doina Precup , Shalabh Bhatnagar , David Silver , Csaba317 Szepesvári , and Eric Wiewiora . Fast gradient-descent methods for temporal-difference learning318 with linear function approximation . In Proceedings of the 26th Annual International Conference319 on Machine Learning , ICML ’ 09 , pages 993–1000 , New York , NY , USA , 2009 . ACM . ISBN320 978-1-60558-516-1.321 Richard S. Sutton , A. Rupam Mahmood , and Martha White . An emphatic approach to the problem322 of off-policy temporal-difference learning . J. Mach . Learn . Res. , 17 ( 1 ) :2603–2631 , January 2016.323 ISSN 1532-4435 . URL http : //dl.acm.org/citation.cfm ? id=2946645.3007026.324 Kaiqing Zhang , Zhuoran Yang , Han Liu , Tong Zhang , and Tamer Basar . Fully decentralized multi-325 agent reinforcement learning with networked agents . 80:5872–5881 , 10–15 Jul 2018.326 Yan Zhang and Michael M. Zavlanos . Distributed off-policy actor-critic reinforcement learning with327 policy consensus . CoRR , abs/1903.09255 , 2019.328 Numerical experiment details329 We demonstrate the convergence of our algorithm in a continuous bandit problem that is a multi-330 agent extension of the experiment in Section 5.1 of Silver et al . ( 2014 ) . Each agent chooses331 an action ai ∈ Rm . We assume all agents have the same reward function given by Ri ( a ) =332 − ( ∑ i a i − a∗ ) T C ( ∑ i a i − a∗ ) . The matrix C is positive definite with eigenvalues chosen from333 { 0.1 , 1 } , and a∗ = [ 4 , . . . , 4 ] T. We consider 10 agents and action dimensions m = 10 , 20 , 50 . Note334 that there are multiple possible solutions for this problem , requiring the agents to coordinate their335 actions to sum to a∗ . We assume a target policy of the form µθi = θi for each agent i and a Gaussian336 behaviour policy β ( · ) ∼ N ( θi , σ2β ) where σβ = 0.1 . We use the Gaussian behaviour policy for both337 Algorithms 1 and 2 . Strictly speaking , Algorithm 1 is on-policy , but in this simplified setting where338 the target policy is constant , the on-policy version would be degenerate such that the Q estimate does339 not affect the TD-error . Therefore , we add a Gaussian behaviour policy to Algorithm 1 . 
Each agent340 maintains an estimate Qω i ( a ) of the critic using a linear function of the compatible features a− θ341 and a bias feature . The critic is recomputed from each successive batch of 2m steps and the actor342 is updated once per batch . The critic step size is 0.1 and the actor step size is 0.01 . Performance343 is evaluated by measuring the cost of the target policy ( without exploration ) . Figure 2 shows the344 convergence of Algorithms 1 and 2 averaged over 5 runs . In all cases , the system converges and the345 agents are able to coordinate their actions to minimize system cost . The jupyter notebook will be346 made available for others to use . In fact , in this simple experiment , we also observe convergence347 under discounted rewards.348 Proof of Theorem 1349 The proof follows the same scheme as Sutton et al . [ 2000a ] , naturally extending their results for a350 deterministic policy µθ and a continuous action space A.351 Note that our regularity assumptions ensure that , for any s ∈ S , Vθ ( s ) , ∇θVθ ( s ) , J ( θ ) , ∇θJ ( θ ) ,352 dθ ( s ) are Lipschitz-continuous functions of θ ( since µθ is twice continuously differentiable and Θ is353 compact ) , and that Qθ ( s , a ) and ∇aQθ ( s , a ) are Lipschitz-continuous functions of a ( Marbach and354 Tsitsiklis [ 2001 ] ) .355 We first show that∇θJ ( θ ) = Es∼dθ [ ∇θµθ ( s ) ∇a Qθ ( s , a ) |a=µθ ( s ) ] .356 The Poisson equation under policy µθ is given by Puterman [ 1994 ] 357 Qθ ( s , a ) = R̄ ( s , a ) − J ( θ ) + ∑ s′∈S P ( s′|s , a ) Vθ ( s′ ) . So,358 ∇θVθ ( s ) = ∇θQθ ( s , µθ ( s ) ) = ∇θ [ R̄ ( s , µθ ( s ) ) − J ( θ ) + ∑ s′∈S P ( s′|s , µθ ( s ) ) Vθ ( s′ ) ] = ∇θµθ ( s ) ∇aR̄ ( s , a ) ∣∣ a=µθ ( s ) −∇θJ ( θ ) +∇θ ∑ s′∈S P ( s′|s , µθ ( s ) ) Vθ ( s′ ) = ∇θµθ ( s ) ∇aR̄ ( s , a ) ∣∣ a=µθ ( s ) −∇θJ ( θ ) + ∑ s′∈S ∇θµθ ( s ) ∇aP ( s′|s , a ) |a=µθ ( s ) Vθ ( s ′ ) + ∑ s′∈S P ( s′|s , µθ ( s ) ) ∇θVθ ( s′ ) = ∇θµθ ( s ) ∇a [ R̄ ( s , a ) + ∑ s′∈S P ( s|s′ , a ) Vθ ( s′ ) ] ∣∣∣∣∣ a=µθ ( s ) −∇θJ ( θ ) + ∑ s′∈S P ( s′|s , µθ ( s ) ) ∇θVθ ( s′ ) = ∇θµθ ( s ) ∇a Qθ ( s , a ) |a=µθ ( s ) + ∑ s′∈S P ( s′|s , µθ ( s ) ) ∇θVθ ( s′ ) −∇θJ ( θ ) Hence,359 ∇θJ ( θ ) = ∇θµθ ( s ) ∇a Qθ ( s , a ) |a=µθ ( s ) + ∑ s′∈S P ( s′|s , µθ ( s ) ) ∇θVθ ( s′ ) −∇θVθ ( s ) ∑ s∈S dθ ( s ) ∇θJ ( θ ) = ∑ s∈S dθ ( s ) ∇θµθ ( s ) ∇a Qθ ( s , a ) |a=µθ ( s ) + ∑ s∈S dθ ( s ) ∑ s′∈S P ( s′|s , µθ ( s ) ) ∇θVθ ( s′ ) − ∑ s∈S dθ ( s ) ∇θVθ ( s ) . Using stationarity property of dθ , we get∑ s∈S ∑ s′∈S dθ ( s ) P ( s′|s , µθ ( s ) ) ∇θVθ ( s′ ) = ∑ s′∈S dθ ( s′ ) ∇θVθ ( s′ ) . Therefore , we get ∇θJ ( θ ) = ∑ s∈S dθ ( s ) ∇θµθ ( s ) ∇aQθ ( s , a ) |a=µθ ( s ) = Es∼dθ [ ∇θµθ ( s ) ∇aQθ ( s , a ) |a=µθ ( s ) ] . Given that ∇θiµjθ ( s ) = 0 if i 6= j , we have ∇θµθ ( s ) = Diag ( ∇θ1µ1θ1 ( s ) , . . . , ∇θNµ N θN ( s ) ) , which360 implies361 ∇θiJ ( θ ) = Es∼dθ [ ∇θiµiθi ( s ) ∇ai Qθ ( s , µ −i θ−i ( s ) , a i ) ∣∣ ai=µi θi ( s ) ] . ( 15 ) Proof of Theorem 3362 We extend the notation for off-policy reward function to stochastic policies as follows . Let β be a363 behavior policy under which { st } t≥0 is irreducible and aperiodic , with stationary distribution dβ . For364 a stochastic policy π : S → P ( A ) , we define365 Jβ ( π ) = ∑ s∈S dβ ( s ) ∫ A π ( a|s ) R̄ ( s , a ) da . Recall that for a deterministic policy µ : S → A , we have366 Jβ ( µ ) = ∑ s∈S dβ ( s ) R̄ ( s , µ ( s ) ) . We introduce the following conditions which are identical to Conditions B1 from Silver et al.367 [ January 2014a ] .368 Conditions 1 . 
Functions νσ parametrized by σ are said to be regular delta-approximation onR ⊂ A369 if they satisfy the following conditions:370 1 . The distributions νσ converge to a delta distribution : limσ↓0 ∫ A νσ ( a ′ , a ) f ( a ) da = f ( a′ ) 371 for a′ ∈ R and suitably smooth f . Specifically we require that this convergence is uniform372 in a′ and over any class F of L-Lipschitz and bounded functions , ‖∇af ( a ) ‖ < L < ∞,373 supaf ( a ) < b < ∞ , i.e . :374 lim σ↓0 sup f∈F , a′∈R ∣∣∣∣∫ A νσ ( a ′ , a ) f ( a ) da− f ( a′ ) ∣∣∣∣ = 0 . 2 . For each a′ ∈ R , νσ ( a′ , · ) is supported on some compact Ca′ ⊆ A with Lipschitz boundary375 bd ( Ca′ ) , vanishes on the boundary and is continuously differentiable on Ca′ .376 3 . For each a′ ∈ R , for each a ∈ A , the gradient∇a′νσ ( a′ , a ) exists.377 4 . Translation invariance : for all a ∈ A , a′ ∈ R , and any δ ∈ Rn such that a + δ ∈ A,378 a′ + δ ∈ A , νσ ( a′ , a ) = νσ ( a′ + δ , a+ δ ) .379 The following lemma is an immediate corollary of Lemma 1 from Silver et al . [ January 2014a ] .380 Lemma 1 . Let νσ be a regular delta-approximation onR ⊆ A . Then , wherever the gradients exist ∇a′ν ( a′ , a ) = −∇aν ( a′ , a ) . Theorem 3 is a less technical restatement of the following result.381 Theorem 8 . Let µθ : S → A. Denote the range of µθ by Rθ ⊆ A , and R = ∪θRθ . For382 each θ , consider πθ , σ a stochastic policy such that πθ , σ ( a|s ) = νσ ( µθ ( s ) , a ) , where νσ satisfy383 Conditions 1 on R. Then , there exists r > 0 such that , for each θ ∈ Θ , σ 7→ Jπθ , σ ( πθ , σ ) ,384 σ 7→ Jπθ , σ ( µθ ) , σ 7→ ∇θJπθ , σ ( πθ , σ ) , and σ 7→ ∇θJπθ , σ ( µθ ) are properly defined on [ 0 , r ] ( with385 Jπθ,0 ( πθ,0 ) = Jπθ,0 ( µθ ) = Jµθ ( µθ ) and ∇θJπθ,0 ( πθ,0 ) = ∇θJπθ,0 ( µθ ) = ∇θJµθ ( µθ ) ) , and we386 have:387 lim σ↓0 ∇θJπθ , σ ( πθ , σ ) = lim σ↓0 ∇θJπθ , σ ( µθ ) = ∇θJµθ ( µθ ) . To prove this result , we first state and prove the following Lemma.388 Lemma 2 . There exists r > 0 such that , for all θ ∈ Θ and σ ∈ [ 0 , r ] , stationary distribution dπθ , σ389 exists and is unique . Moreover , for each θ ∈ Θ , σ 7→ dπθ , σ and σ 7→ ∇θdπθ , σ are properly defined390 on [ 0 , r ] and both are continuous at 0.391 Proof of Lemma 2 . For any policy β , we let ( P βs , s′ ) s , s′∈S be the transition matrix associated to the392 Markov Chain { st } t≥0 induced by β . In particular , for each θ ∈ Θ , σ > 0 , s , s′ ∈ S , we have393 Pµθs , s′ = P ( s ′|s , µθ ( s ) ) , P πθ , σ s , s′ = ∫ A πθ , σ ( a|s ) P ( s′|s , a ) da = ∫ A νσ ( µθ ( s ) , a ) P ( s ′|s , a ) da . Let θ ∈ Θ , s , s′ ∈ S , ( θn ) ∈ ΘN such that θn → θ and ( σn ) n∈N ∈ R+ N , σn ↓ 0:394 ∣∣∣Pπθn , σns , s′ − Pµθs , s′ ∣∣∣ ≤ ∣∣∣Pπθn , σns , s′ − Pµθns , s′ ∣∣∣+ ∣∣∣Pµθns , s′ − Pµθs , s′ ∣∣∣ . Applying the first condition of Conditions 1 with f : a 7→ P ( s′|s , a ) belonging to F :395 ∣∣∣Pπθn , σns , s′ − Pµθns , s′ ∣∣∣ = ∣∣∣∣∫ A νσn ( µθn ( s ) , a ) P ( s ′|s , a ) da− P ( s′|s , µθn ( s ) ) ∣∣∣∣ ≤ sup f∈F , a′∈R ∣∣∣∣∫ A νσn ( a ′ , a ) f ( a ) da− f ( a′ ) ∣∣∣∣ −→n→∞ 0 . By regularity assumptions on θ 7→ µθ ( s ) and P ( s′|s , · ) , we have396 ∣∣∣Pµθns , s′ − Pµθs , s′ ∣∣∣ = |P ( s′|s , µθn ( s ) ) − P ( s′|s , µθ ( s ) ) | −→n→∞ 0 . Hence,397 ∣∣∣Pπθn , σns , s′ − Pµθs , s′ ∣∣∣ −→n→∞ 0 . Therefore , for each s , s′ ∈ S , ( θ , σ ) 7→ Pπθ , σs , s′ , with P πθ,0 s , s′ = P µθ s , s′ , is continuous on Θ× { 0 } . Note398 that , for each n ∈ N , P 7→ ∏ s , s′ ( P n ) s , s′ is a polynomial function of the entries of P . 
Thus , for399 each n ∈ N , fn : ( θ , σ ) 7→ ∏ s , s′ ( P πθ , σn ) s , s′ , with fn ( θ , 0 ) = ∏ s , s′ ( P µθn ) s , s′ is continuous on400 Θ × { 0 } . Moreover , for each θ ∈ Θ , σ ≥ 0 , from the structure of Pπθ , σ , if there is some n∗ ∈ N401 such that fn∗ ( θ , σ ) > 0 then , for all n ≥ n∗ , fn ( θ , σ ) > 0.402 Now let us suppose that there exists ( θn ) ∈ ΘN ∗ such that , for each n > 0 there is a σn ≤ n−1 such403 that fn ( θn , σn ) = 0 . By compacity of Θ , we can take ( θn ) converging to some θ ∈ Θ . For each404 n∗ ∈ N , by continuity we have fn∗ ( θ , 0 ) = lim n→∞ fn∗ ( θn , σn ) = 0 . Since Pµθ is irreducible and405 aperiodic , there is some n ∈ N such that for all s , s′ ∈ S and for all n∗ ≥ n , ( Pµθn ∗ ) s , s′ > 0 , i.e.406 fn∗ ( θ , 0 ) > 0 . This leads to a contradiction.407 Hence , there exists n∗ > 0 such that for all θ ∈ Θ and σ ≤ n∗−1 , fn ( θ , σ ) > 0 . We let r = n∗−1 . It408 follows that , for all θ ∈ Θ and σ ∈ [ 0 , r ] , Pπθ , σ is a transition matrix associated to an irreducible and409 aperiodic Markov Chain , thus dπθ , σ is well defined as the unique stationary probability distribution410 associated to Pπθ , σ . We fix θ ∈ Θ in the remaining of the proof.411 Let β a policy for which the Markov Chain corresponding to P β is irreducible and aperiodic . Let412 s∗ ∈ S , as asserted in Marbach and Tsitsiklis [ 2001 ] , considering stationary distribution dβ as a413 vector ( dβs ) s∈S ∈ R |S| , dβ is the unique solution of the balance equations:414 ∑ s∈S dβsP β s , s′ = d β s′ s ′ ∈ S\ { s∗ } , ∑ s∈S dβs = 1 . Hence , we have Aβ an |S| × |S| matrix and a 6= 0 a constant vector of R|S| such that the balance415 equations is of the form416 Aβdβ = a ( 16 ) with Aβs , s′ depending on P β s′ , s in an affine way , for each s , s ′ ∈ S. Moreover , Aβ is invertible , thus417 dβ is given by418 dβ = 1 det ( Aβ ) adj ( Aβ ) > a . Entries of adj ( Aβ ) and det ( Aβ ) are polynomial functions of the entries of P β .419 Thus , σ 7→ dπθ , σ = 1 det ( Aπθ , σ ) adj ( Aπθ , σ ) > a is defined on [ 0 , r ] and is continuous at 0.420 Lemma 1 and integration by parts imply that , for s , s′ ∈ S , σ ∈ [ 0 , r ] :421 ∫ A ∇a′νσ ( a′ , a ) |a′=µθ ( s ) P ( s ′|s , a ) da = − ∫ A ∇aνσ ( µθ ( s ) , a ) P ( s′|s , a ) da = ∫ Cµθ ( s ) νσ ( µθ ( s ) , a ) ∇aP ( s′|s , a ) da+ boundary terms = ∫ Cµθ ( s ) νσ ( µθ ( s ) , a ) ∇aP ( s′|s , a ) da where the boundary terms are zero since νσ vanishes on the boundary due to Conditions 1.422 Thus , for s , s′ ∈ S , σ ∈ [ 0 , r ] :423 ∇θP πθ , σ s , s′ = ∇θ ∫ A πθ , σ ( a|s ) P ( s′|s , a ) da = ∫ A ∇θπθ , σ ( a|s ) P ( s′|s , a ) da ( 17 ) = ∫ A ∇θµθ ( s ) ∇a′νσ ( a′ , a ) |a′=µθ ( s ) P ( s ′|s , a ) da = ∇θµθ ( s ) ∫ Cµθ ( s ) νσ ( µθ ( s ) , a ) ∇aP ( s′|s , a ) da where exchange of derivation and integral in ( 17 ) follows by application of Leibniz rule with:424 • ∀a ∈ A , θ 7→ πθ , σ ( a|s ) P ( s′|s , a ) is differentiable , and ∇θπθ , σ ( a|s ) P ( s′|s , a ) =425 ∇θµθ ( s ) ∇a′νσ ( a′ , a ) |a′=µθ ( s ) .426 427 • Let a∗ ∈ R , ∀θ ∈ Θ,428 ‖∇θπθ , σ ( a|s ) P ( s′|s , a ) ‖ = ∥∥∥∇θµθ ( s ) ∇a′νσ ( a′ , a ) |a′=µθ ( s ) ∥∥∥ ≤ ‖∇θµθ ( s ) ‖op ∥∥∥∇a′νσ ( a′ , a ) |a′=µθ ( s ) ∥∥∥ ≤ sup θ∈Θ ‖∇θµθ ( s ) ‖op ‖∇aνσ ( µθ ( s ) , a ) ‖ = sup θ∈Θ ‖∇θµθ ( s ) ‖op ‖∇aνσ ( a ∗ , a− µθ ( s ) + a∗ ) ‖ ( 18 ) ≤ sup θ∈Θ ‖∇θµθ ( s ) ‖op sup a∈Ca∗ ‖∇aνσ ( a∗ , a ) ‖ 1a∈Ca∗ where ‖·‖op denotes the operator norm , and ( 18 ) comes from translation invariance ( we take429 ∇aνσ ( a∗ , a ) = 0 for a ∈ Rn\Ca∗ ) . 
a 7→ sup θ∈Θ ‖∇θµθ ( s ) ‖op sup a∈Ca∗ ‖∇aνσ ( a∗ , a ) ‖ 1a∈Ca∗ is430 measurable , bounded and supported on Ca∗ , so it is integrable on A.431 • Dominated convergence ensures that , for each k ∈ J1 , mK , partial derivative gk ( θ ) =432 ∂θk ∫ A∇θπθ , σ ( a|s ) P ( s ′|s , a ) da is continuous : let θn ↓ θ , then433 gk ( θn ) = ∂θk ∫ A ∇θπθn , σ ( a|s ) P ( s′|s , a ) da = ∂θkµθn ( s ) ∫ Ca∗ νσ ( a ∗ , a− µθn ( s ) + a∗ ) ∇aP ( s′|s , a ) da −→ n→∞ ∂θkµθ ( s ) ∫ Ca∗ νσ ( a ∗ , a− µθ ( s ) + a∗ ) ∇aP ( s′|s , a ) da = gk ( θ ) with the dominating function a 7→ sup a∈Ca∗ |νσ ( a∗ , a ) |sup a∈A ‖∇aP ( s′|s , a ) ‖ 1a∈Ca∗ .434 Thus σ 7→ ∇θP πθ , σ s , s′ is defined for σ ∈ [ 0 , r ] and is continuous at 0 , with ∇θP πθ,0 s , s′ =435 ∇θµθ ( s ) ∇aP ( s′|s , a ) |a=µθ ( s ) . Indeed , let ( σn ) n∈N ∈ [ 0 , r ] +N , σn ↓ 0 , then , applying the first436 condition of Conditions 1 with f : a 7→ ∇aP ( s′|s , a ) belonging to F , we get437 ∥∥∥∇θPπθ , σns , s′ −∇θPµθs , s′∥∥∥ = ‖∇θµθ ( s ) ‖op ∥∥∥∥∥ ∫ Cµθ ( s ) νσn ( µθ ( s ) , a ) ∇aP ( s′|s , a ) da− ∇aP ( s′|s , a ) |a=µθ ( s ) ∥∥∥∥∥ −→n→∞ 0 . Since dπθ , σ = 1 det ( Aπθ , σ ) adj ( Aπθ , σ ) > a with |det ( Aπθ , σ ) | > 0 for all σ ∈ [ 0 , r ] and since entries438 of adj ( Aπθ , σ ) and det ( Aπθ , σ ) are polynomial functions of the entries of Pπθ , σ , it follows that439 σ 7→ ∇θdπθ , σ is properly defined on [ 0 , r ] and is continuous at 0 , which concludes the proof of440 Lemma 2.441 We now proceed to prove Theorem 8.442 Let θ ∈ Θ , πθ as in Theorem 3 , and r > 0 such that σ 7→ dπθ , σ , σ 7→ ∇θdπθ , σ are well defined on443 [ 0 , r ] and are continuous at 0 . Then , the following two functions444 σ 7→ Jπθ , σ ( πθ , σ ) = ∑ s∈S dπθ , σ ( s ) ∫ A πθ , σ ( a|s ) R̄ ( s , a ) da , σ 7→ Jπθ , σ ( µθ ) = ∑ s∈S dπθ , σ ( s ) R̄ ( s , µθ ( s ) ) , are properly defined on [ 0 , r ] ( with Jπθ,0 ( πθ,0 ) = Jπθ,0 ( µθ ) = Jµθ ( µθ ) ) . Let s ∈ S , by taking445 similar arguments as in the proof of Lemma 2 , we have446 ∇θ ∫ A πθ , σ ( a|s ) R̄ ( s , a ) da = ∫ A ∇θπθ , σ ( a , s ) R̄ ( s , a ) da , = ∇θµθ ( s ) ∫ Cµθ ( s ) νσ ( µθ ( s ) , a ) ∇aR̄ ( s , a ) da . Thus , σ 7→ ∇θJπθ , σ ( πθ , σ ) is properly defined on [ 0 , r ] and447 ∇θJπθ , σ ( πθ , σ ) = ∑ s∈S ∇θdπθ , σ ( s ) ∫ A πθ , σ ( a|s ) R̄ ( s , a ) da + ∑ s∈S dπθ , σ ( s ) ∇θ ∫ A πθ , σ ( a|s ) R̄ ( s , a ) da = ∑ s∈S ∇θdπθ , σ ( s ) ∫ A νσ ( µθ ( s ) , a ) R̄ ( s , a ) da + ∑ s∈S dπθ , σ ( s ) ∇θµθ ( s ) ∫ Cµθ ( s ) νσ ( µθ ( s ) , a ) ∇aR̄ ( s , a ) da . Similarly , σ 7→ ∇θJπθ , σ ( µθ ) is properly defined on [ 0 , r ] and448 ∇θJπθ , σ ( µθ ) = ∑ s∈S ∇θdπθ , σ ( s ) R̄ ( s , µθ ( s ) ) + ∑ s∈S dπθ , σ ( s ) ∇θµθ ( s ) ∇aR̄ ( s , a ) ∣∣ a=µθ ( s ) To prove continuity at 0 of both σ 7→ ∇θJπθ , σ ( πθ , σ ) and σ 7→ ∇θJπθ , σ ( µθ ) ( with ∇θJπθ,0 ( πθ,0 ) =449 ∇θJπθ,0 ( µθ ) = ∇θJµθ ( µθ ) ) , let ( σn ) n≥0 ↓ 0:450 ∥∥∇θJπθ , σn ( πθ , σn ) −∇θJπθ,0 ( πθ,0 ) ∥∥ ≤ ∥∥∇θJπθ , σn ( πθ , σn ) −∇θJπθ , σn ( µθ ) ∥∥+ ∥∥∇θJπθ , σn ( µθ ) −∇θJµθ ( µθ ) ∥∥ . ( 19 ) For the first term of the r.h.s we have451 ∥∥∇θJπθ , σn ( πθ , σn ) −∇θJπθ , σn ( µθ ) ∥∥ ≤ ∑ s∈S ‖∇θdπθ , σn ( s ) ‖ ∣∣∣∣∫ A νσn ( µθ ( s ) , a ) R̄ ( s , a ) da− R̄ ( s , µθ ( s ) ) ∣∣∣∣ + ∑ s∈S dπθ , σn ( s ) ‖∇θµθ ( s ) ‖op ∥∥∥∥∫ A νσn ( µθ ( s ) , a ) ∇aR̄ ( s , a ) da− ∇aR̄ ( s , a ) ∣∣ a=µθ ( s ) ∥∥∥∥ . 
Applying the first assumption in Condition 1 with f : a 7→ R̄ ( s , a ) and f : a 7→ ∇aR̄ ( s , a ) belonging452 to F we have , for each s ∈ S:453 ∣∣∣∣∫ A νσn ( µθ ( s ) , a ) R̄ ( s , a ) da− R̄ ( s , µθ ( s ) ) ∣∣∣∣ −→n→∞ 0 and∥∥∥∥∫ A νσn ( µθ ( s ) , a ) ∇aR̄ ( s , a ) da− ∇aR̄ ( s , a ) ∣∣ a=µθ ( s ) ∥∥∥∥ −→n→∞ 0 . Moreover , for each s ∈ S , dπθ , σn ( s ) −→ n→∞ dµθ ( s ) and∇θdπθ , σn ( s ) −→ n→∞ ∇θdµθ ( s ) ( by Lemma 2 ) ,454 and ‖∇θµθ ( s ) ‖op < ∞ , so455 ∥∥∇θJπθ , σn ( πθ , σn ) −∇θJπθ , σn ( µθ ) ∥∥ −→n→∞ 0 . For the second term of the r.h.s of ( 19 ) , we have456 ∥∥∇θJπθ , σn ( µθ ) −∇θJµθ ( µθ ) ∥∥ ≤∑ s∈S ‖∇θdπθ , σn ( s ) −∇θdµθ ( s ) ‖ ∣∣R̄ ( s , µθ ( s ) ) ∣∣ + ∑ s∈S |dπθ , σn ( s ) − dµθ ( s ) | ‖∇θµθ ( s ) ‖op ∥∥∥∇aR̄ ( s , a ) ∣∣a=µθ ( s ) ∥∥∥ . Continuity at 0 of σ 7→ dπθ , σ ( s ) and σ 7→ ∇θdπθ , σ ( s ) for each s ∈ S , boundedness of R̄ ( s , · ) ,457 ∇aR̄ ( s , · ) and ∇θ ( s ) µθ ( s ) implies that458 ∥∥∇θJπθ , σn ( µθ ) −∇θJµθ ( µθ ) ∥∥ −→n→∞ 0 . Hence,459 ∥∥∇θJπθ , σn ( πθ , σn ) −∇θJπθ,0 ( πθ,0 ) ∥∥ −→n→∞ 0 . So , σ 7→ ∇θJπθ , σ ( πθ , σ ) and ∇θJπθ , σ ( µθ ) are continuous at 0:460 lim σ↓0 ∇θJπθ , σ ( πθ , σ ) = lim σ↓0 ∇θJπθ , σ ( µθ ) = ∇θJµθ ( µθ ) . Proof of Theorem 4461 We will use the two-time-scale stochastic approximation analysis . We let the policy parameter θt462 fixed as θt ≡ θ when analysing the convergence of the critic step . Thus we can show the convergence463 of ωt towards an ωθ depending on θ , which will then be used to prove the convergence for the slow464 time-scale.465 Lemma 3 . Under Assumptions 3 – 5 , the sequence ωit generated from ( 2 ) is bounded a.s. , i.e.,466 supt‖ωit‖ < ∞ a.s. , for any i ∈ N .467 The proof follows the same steps as that of Lemma B.1 in the PMLR version of Zhang et al . [ 2018 ] .468 Lemma 4 . Under Assumption 5 , the sequence { Ĵ it } generated as in 2 is bounded a.s , i.e. , supt|Ĵ it | < 469 ∞ a.s. , for any i ∈ N .470 The proof follows the same steps as that of Lemma B.2 in the PMLR version of Zhang et al . [ 2018 ] .471 The desired result holds since Step 1 and Step 2 of the proof of Theorem 4.6 in Zhang et al . [ 2018 ] 472 can both be repeated in the setting of deterministic policies.473 Proof of Theorem 5474 Let Ft,2 = σ ( θτ , sτ , τ ≤ t ) a filtration . In addition , we define475 H ( θ , s , ω ) = ∇θµθ ( s ) · ∇aQω ( s , a ) |a=µθ ( s ) , H ( θ , s ) = H ( θ , s , ωθ ) , h ( θ ) = Es∼dθ [ H ( θ , s ) ] . Then , for each θ ∈ Θ , we can introduce νθ : S → Rn the solution to the Poisson equation:476 ( I − P θ ) νθ ( · ) = H ( θ , · ) − h ( θ ) that is given by νθ ( s ) = ∑ k≥0 Esk+1∼P θ ( ·|sk ) [ H ( θ , sk ) − h ( θ ) |s0 = s ] which is properly defined477 ( similar to the differential value function V ) .478 With projection , actor update ( 5 ) becomes479 θt+1 = Γ [ θt + βθ , tH ( θt , st , ωt ) ] ( 20 ) = Γ [ θt + βθ , th ( θt ) − βθ , t ( h ( θt ) −H ( θt , st ) ) − βθ , t ( H ( θt , st ) −H ( θt , st , ωt ) ) ] = Γ [ θt + βθ , th ( θt ) + βθ , t ( ( I − P θt ) νθt ( st ) ) + βθ , tA 1 t ] = Γ [ θt + βθ , th ( θt ) + βθ , t ( νθt ( st ) − νθt ( st+1 ) ) + βθ , t ( νθt ( st+1 ) − P θtνθt ( st ) ) + βθ , tA 1 t ] = Γ [ θt + βθ , t ( h ( θt ) +A 1 t +A 2 t +A 3 t ) ] where480 A1t = H ( θt , st , ωt ) −H ( θt , st ) , A2t = νθt ( st ) − νθt ( st+1 ) , A3t = νθt ( st+1 ) − P θtνθt ( st ) . 
For r < t we have481 t−1∑ k=r βθ , kA 2 k = t−1∑ k=r βθ , k ( νθk ( sk ) − νθk ( sk+1 ) ) = t−1∑ k=r βθ , k ( νθk ( sk ) − νθk+1 ( sk+1 ) ) + t−1∑ k=r βθ , k ( νθk+1 ( sk+1 ) − νθk ( sk+1 ) ) = t−1∑ k=r ( βθ , k+1 − βθ , k ) νθk+1 ( sk+1 ) + βθrνθr ( sr ) − βθtνθt ( st ) + t−1∑ k=r ( 2 ) k = t−1∑ k=r ( 1 ) k + t−1∑ k=r ( 2 ) k + ηr , t where482 ( 1 ) k = ( βθ , k+1 − βθ , k ) νθk+1 ( sk+1 ) , ( 2 ) k = βθ , k ( νθk+1 ( sk+1 ) − νθk ( sk+1 ) ) , ηr , t = βθrνθr ( sr ) − βθtνθt ( st ) . Lemma 5 . ∑t−1 k=0 βθ , kA 2 k converges a.s. for t→∞483 Proof of Lemma 5 . Since νθ ( s ) is uniformly bounded for θ ∈ Θ , s ∈ S , we have for some K > 0484 t−1∑ k=0 ∥∥∥ ( 1 ) k ∥∥∥ ≤ K t−1∑ k=0 |βθ , k+1 − βθ , k| which converges given Assumption 5.485 Moreover , since µθ ( s ) is twice continuously differentiable , θ 7→ νθ ( s ) is Lipschitz for each s , and so486 we have487 t−1∑ k=0 ∥∥∥ ( 2 ) k ∥∥∥ ≤ t−1∑ k=0 βθ , k ∥∥νθk ( sk+1 ) − νθk+1 ( sk+1 ) ∥∥ ≤ K2 t−1∑ k=0 βθ , k ‖θk − θk+1‖ ≤ K3 t−1∑ k=0 β2θ , k . Finally , lim t→∞ ‖η0 , t‖ = βθ,0 ‖νθ0 ( s0 ) ‖ < ∞ a.s.488 Thus , ∑t−1 k=0 ∥∥βθ , kA2k∥∥ ≤∑t−1k=0 ∥∥∥ ( 1 ) k ∥∥∥+∑t−1k=0 ∥∥∥ ( 2 ) k ∥∥∥+ ‖η0 , t‖ converges a.s.489 Lemma 6 . ∑t−1 k=0 βθ , kA 3 k converges a.s. for t→∞.490 Proof of Lemma 6 . We set491 Zt = t−1∑ k=0 βθ , kA 3 k = t−1∑ k=0 βθ , k ( νθk ( sk+1 ) − P θkνθk ( sk ) ) . Since Zt is Ft-adapted and E [ νθt ( st+1 ) |Ft ] = P θtνθt ( st ) , Zt is a martingale . The remaining of the492 proof is now similar to the proof of Lemma 2 on page 224 of Benveniste et al . [ 1990 ] .493 Let gi ( θt ) = Est∼dθt [ ψit · ξit , θt |Ft,2 ] and g ( θ ) = [ g1 ( θ ) , . . . , gN ( θ ) ] . We have gi ( θt ) = ∑ st∈S dθt ( st ) · ψit · ξit , θt . Given ( 10 ) , θ 7→ ωθ is continuously differentiable and θ 7→ ∇θωθ is bounded so θ 7→ ωθ is494 Lipschitz-continuous . Thus θ 7→ ξit , θ is Lipschitz-continuous for each st ∈ S . Due to our regularity495 assumptions , θ 7→ ψit , θt is also continuous for each i ∈ N , st ∈ S. Moreover , θ 7→ d θ ( s ) is also496 Lipschitz continuous for each s ∈ S. Hence , θ 7→ g ( θ ) is Lipschitz-continuous in θ and the ODE497 ( 12 ) is well-posed . This holds even when using compatible features.498 By critic faster convergence , we have limt→∞‖ξit − ξit , θt‖= 0 so limt→∞A 1 t = 0.499 Hence , by Kushner-Clark lemma Kushner and Clark [ 1978 ] ( pp 191-196 ) we have that the update in500 ( 20 ) converges a.s. to the set of asymptotically stable equilibria of the ODE ( 12 ) .501 Proof of Theorem 6502 We use the two-time scale technique : since critic updates at a faster rate than the actor , we let the503 policy parameter θt to be fixed as θ when analysing the convergence of the critic update.504 Lemma 7 . Under Assumptions 4 , 1 and 6 , for any i ∈ N , sequence { λit } generated from ( 7 ) is505 bounded almost surely.506 To prove this lemma we verify the conditions for Theorem A.2 of Zhang et al . [ 2018 ] to hold.507 We use { Ft,1 } to denote the filtration with Ft,1 = σ ( sτ , Cτ−1 , aτ−1 , rτ , λτ , τ ≤ t ) . With λt =508 [ ( λ1t ) > , . . . , ( λNt ) > ] > , critic step ( 7 ) has the form:509 λt+1 = ( Ct ⊗ I ) ( λt + βλ , t · yt+1 ) ( 21 ) with yt+1 = ( δ1tw ( st , at ) > , . . . , δNt w ( st , at ) > ) > ∈ RKN , ⊗ denotes Kronecker product and I is510 the identity matrix . Using the same notation as in Assumption A.1 from Zhang et al . 
[ 2018 ] , we511 have:512 hi ( λit , st ) = Ea∼π [ δitw ( st , a ) > |Ft,1 ] = ∫ A π ( a|st ) ( Ri ( st , a ) − w ( st , a ) · λit ) w ( st , a ) > da , M it+1 = δ i tw ( st , at ) > − Ea∼π [ δitw ( st , a ) > |Ft,1 ] , h̄i ( λt ) = A i π , θ · dsπ −Bπ , θ · λt , where Aiπ , θ = [ ∫ A π ( a|s ) Ri ( s , a ) w ( s , a ) > da , s ∈ S ] . Since feature vectors are uniformly bounded for any s ∈ S and a ∈ A , hi is Lipschitz continuous in513 its first argument . Since , for i ∈ N , the ri are also uniformly bounded , E [ ‖Mt+1‖2|Ft,1 ] ≤ K · ( 1 +514 ‖λt‖2 ) for some K > 0 . Furthermore , finiteness of |S| ensures that , a.s. , ‖h̄ ( λt ) − h ( λt , st ) ‖2≤515 K ′ · ( 1 + ‖λt‖2 ) . Finally , h∞ ( y ) exists and has the form516 h∞ ( y ) = −Bπ , θ · y . From Assumption 1 , we have that −Bπ , θ is a Hurwitcz matrix , thus the origin is a globally asymptot-517 ically stable attractor of the ODE ẏ = h∞ ( y ) . Hence Theorem A.2 of Zhang et al . [ 2018 ] applies,518 which concludes the proof of Lemma 7.519 We introduce the following operators as in Zhang et al . [ 2018 ] :520 • 〈·〉 : RKN → RK521 〈λ〉 = 1 N ( 1 > ⊗ I ) λ = 1 N ∑ i∈N λi . • J = ( 1 N 11 > ⊗ I ) : RKN → RKN such that J λ = 1⊗ 〈λ〉.522 • J⊥ = I − J : RKN → RKN and we note λ⊥ = J⊥λ = λ− 1⊗ 〈λ〉.523 We then proceed in two steps as in Zhang et al . [ 2018 ] , firstly by showing the convergence a.s. of the524 disagreement vector sequence { λ⊥ , t } to zero , secondly showing that the consensus vector sequence525 { 〈λt〉 } converges to the equilibrium such that 〈λt〉 is solution to ( 13 ) .526 Lemma 8 . Under Assumptions 4 , 1 and 6 , for any M > 0 , we have527 sup t E [ ‖β−1λ , tλ⊥ , t‖ 2 1 { supt‖λt‖≤M } ] < ∞ . Since dynamic of { λt } described by ( 21 ) is similar to ( 5.2 ) in Zhang et al . [ 2018 ] we have528 E [ ‖β−1λ , t+1λ⊥ , t+1‖ 2|Ft,1 ] = β2λ , t β2λ , t+1 ρ ( ‖β−1λ , tλ⊥ , t‖ 2+2 · ‖β−1λ , tλ⊥ , t‖·E ( ‖yt+1‖ 2|Ft,1 ) 1 2 + E ( ‖yt+1‖2|Ft,1 ) ) ( 22 ) where ρ represents the spectral norm of E [ C > t · ( I − 11 > /N ) · Ct ] , with ρ ∈ [ 0 , 1 ) by Assumption529 4 . Since yit+1 = δ i t · w ( st , at ) > we have530 E [ ‖yt+1‖2|Ft,1 ] = E [ ∑ i∈N ‖ ( ri ( st , at ) − w ( st , at ) λit ) · w ( st , at ) > ‖2|Ft,1 ] ≤ 2 · E [ ∑ i∈N ‖ri ( st , at ) w ( st , at ) > ‖2+‖w ( st , at ) > ‖4·‖λit‖2|Ft,1 ] . By uniform boundedness of r ( s , · ) and w ( s , · ) ( Assumptions 1 ) and finiteness of S , there exists531 K1 > 0 such that532 E [ ‖yt+1‖2|Ft,1 ] ≤ K1 ( 1 + ‖λt‖2 ) . Thus , for any M > 0 there exists K2 > 0 such that , on the set { supτ≤t‖λτ‖ < M } ,533 E [ ‖yt+1‖21 { supτ≤t‖λτ‖ < M } |Ft,1 ] ≤ K2 . ( 23 ) We let vt = ‖β−1λ , tλ⊥ , t‖21 { supτ≤t‖λτ‖ < M } . Taking expectation over ( 22 ) , noting that534 1 { supτ≤t+1‖λτ‖ < M } ≤ 1 { supτ≤t‖λτ‖ < M } we get535 E ( vt+1 ) ≤ β2λ , t β2λ , t+1 ρ ( E ( vt ) + 2 √ E ( vt ) · √ K2 +K2 ) which is the same expression as ( 5.10 ) in Zhang et al . [ 2018 ] . So similar conclusions to the ones of536 Step 1 of Zhang et al . [ 2018 ] holds:537 sup t E [ ‖β−1λ , tλ⊥ , t‖ 2 1 { supt‖λt‖≤M } ] < ∞ ( 24 ) and lim t λ⊥ , t = 0 a.s. ( 25 ) We now show convergence of the consensus vector 1⊗ 〈λt〉 . 
Based on ( 21 ) we have538 〈λt+1〉 = 〈 ( Ct ⊗ I ) ( 1⊗ 〈λt〉+ λ⊥ , t + βλ , tyt+1 ) 〉 = 〈λt〉+ 〈λ⊥ , t〉+ βλ , t〈 ( Ct ⊗ I ) ( yt+1 + β−1λ , tλ⊥ , t ) 〉 = 〈λt〉+ βλ , t ( h ( λt , st ) +Mt+1 ) where h ( λt , st ) = Eat∼π [ 〈yt+1〉|Ft ] andMt+1 = 〈 ( Ct⊗I ) ( yt+1+β−1λ , tλ⊥ , t ) 〉−Eat∼π [ 〈yt+1〉|Ft ] .539 Since 〈δt〉 = r̄ ( st , at ) − w ( st , at ) 〈λt〉 , we have540 h ( λt , st ) = Eat∼π ( r̄ ( st , at ) w ( st , at ) > |Ft ) + Eat∼π ( w ( st , at ) 〈λt〉 · w ( st , at ) > |Ft,1 ) so h is Lipschitz-continuous in its first argument . Moreover , since 〈λ⊥ , t〉 = 0 and 1 > E ( Ct|Ft,1 ) =541 1 > a.s.:542 Eat∼π [ 〈 ( Ct ⊗ I ) ( yt+1 + β−1λ , tλ⊥ , t ) 〉|Ft,1 ] = Eat∼π [ 1 N ( 1 > ⊗ I ) ( Ct ⊗ I ) ( yt+1 + β−1λ , tλ⊥ , t ) |Ft,1 ] = 1 N ( 1 > ⊗ I ) ( E ( Ct|Ft,1 ) ⊗ I ) Eat∼π [ yt+1 + β −1 λ , tλ⊥ , t|Ft,1 ] = 1 N ( 1 > E ( Ct|Ft,1 ) ⊗ I ) Eat∼π [ yt+1 + β −1 λ , tλ⊥ , t|Ft,1 ] = Eat∼π [ 〈yt+1〉|Ft,1 ] a.s . So { Mt } is a martingale difference sequence . Additionally we have543 E [ ‖Mt+1‖2|Ft,1 ] ≤ 2 · E [ ‖yt+1 + β−1λ , tλ⊥ , t‖ 2 Gt |Ft,1 ] + 2 · ‖E [ 〈yt+1〉|Ft,1 ] ‖2 with Gt = N−2 ·C > t 11 > Ct ⊗ I whose spectral norm is bounded for Ct is stochastic . From ( 23 ) and544 ( 24 ) we have that , for any M > 0 , over the set { supt‖λt‖≤M } , there exists K3 , K4 < ∞ such that545 E [ ‖yt+1+β−1λ , tλ⊥ , t‖ 2 Gt |Ft,1 ] 1 { supt‖λt‖≤M } ≤ K3·E [ ‖yt+1‖2+‖β−1λ , tλ⊥ , t‖ 2|Ft,1 ] 1 { supt‖λt‖≤M } ≤ K4 . Besides , since rit+1 and w are uniformly bounded , there exists K5 < ∞ such that546 ‖E [ 〈yt+1〉|Ft,1 ] ‖2≤ K5 · ( 1 + ‖〈λt〉‖2 ) . Thus , for any M > 0 , there exists some K6 < ∞547 such that over the set { supt‖λt‖≤M } 548 E [ ‖Mt+1‖2|Ft,1 ] ≤ K6 · ( 1 + ‖〈λt〉‖2 ) . Hence , for any M > 0 , assumptions ( a.1 ) - ( a.5 ) of B.1 . from Zhang et al . [ 2018 ] are verified on the549 set { supt‖λt‖≤M } . Finally , we consider the ODE asymptotically followed by 〈λt〉:550 ˙〈λt〉 = −Bπ , θ · 〈λt〉+Aπ , θ · dπ which has a single globally asymptotically stable equilibrium λ∗ ∈ RK , since Bπ , θ is positive551 definite : λ∗ = B−1π , θ ·Aπ , θ · dπ . By Lemma 7 , supt‖〈λt〉‖ < ∞ a.s. , all conditions to apply Theorem552 B.2 . of Zhang et al . [ 2018 ] hold a.s. , which means that 〈λt〉 −→ t→∞ λ∗ a.s. As λt = 1⊗ 〈λt〉+ λ⊥ , t553 and λ⊥ , t −→ t→∞ 0 a.s. , we have for each i ∈ N , a.s.,554 λit −→ t→∞ B−1π , θ ·Aπ , θ · d π . Proof of Theorem 7555 Let Ft,2 = σ ( θτ , τ ≤ t ) be the σ-field generated by { θτ , τ ≤ t } , and let556 ζit,1 = ψ i t · ξit − Est∼dπ [ ψit · ξit|Ft,2 ] , ζit,2 = Est∼dπ [ ψit · ( ξit − ξit , θt ) |Ft,2 ] . With local projection , actor update ( 6 ) becomes557 θit+1 = Γ i [ θit + βθ , tEst∼dπ [ ψit · ξit , θt |Ft,2 ] + βθ , tζ i t,1 + βθ , tζ i t,2 ] . ( 26 ) So with hi ( θt ) = Est∼dπ [ ψit · ξit , θt |Ft,2 ] and h ( θ ) = [ h1 ( θ ) , . . . , hN ( θ ) ] , we have hi ( θt ) = ∑ st∈S dπ ( st ) · ψit · ξit , θt . Given ( 10 ) , θ 7→ ωθ is continuously differentiable and θ 7→ ∇θωθ is bounded so θ 7→ ωθ is Lipschitz-558 continuous . Thus θ 7→ ξit , θ is Lipschitz-continuous for each st ∈ S. Our regularity assumptions559 ensure that θ 7→ ψit , θt is continuous for each i ∈ N , st ∈ S. Moreover , θ 7→ d θ ( s ) is also Lipschitz560 continuous for each s ∈ S. Hence , θ 7→ g ( θ ) is Lipschitz-continuous in θ and the ODE ( 12 ) is561 well-posed . This holds even when using compatible features.562 By critic faster convergence , we have limt→∞‖ξit − ξit , θt‖= 0.563 Let M it = ∑t−1 τ=0 βθ , τζ i τ,1 . M i t is a martingale sequence with respect to Ft,2 . 
Since564 { ωt } t , { ∇aφk ( s , a ) } s , k , and { ∇θµθ ( s ) } s are bounded ( Lemma 3 , Assumption 2 ) , it follows565 that the sequence { ζit,1 } is bounded . Thus , by Assumption 5 , ∑ t E [ ∥∥M it+1 −M it∥∥2 |Ft,2 ] =566 ∑ t ∥∥βθ , tζit,1∥∥2 < ∞ a.s . The martingale convergence theorem ensures that { M it } converges a.s.567 Thus , for any > 0,568 lim t P ( sup n≥t ∥∥∥∥∥ n∑ τ=t βθ , τζ i τ,1 ∥∥∥∥∥ ≥ ) = 0 . Hence , by Kushner-Clark lemma Kushner and Clark [ 1978 ] ( pp 191-196 ) we have that the update in569 ( 26 ) converges a.s. to the set of asymptotically stable equilibria of the ODE ( 12 ) .570
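As a rough illustration of the continuous multi-agent bandit experiment described in the numerical-experiment section above, the sketch below implements the stated setup in NumPy: the shared quadratic reward −(Σ_i a^i − a*)^T C (Σ_i a^i − a*), linear target policies µ_θ^i = θ^i, a Gaussian behaviour policy with σ_β = 0.1, a linear critic on the compatible features a − θ plus a bias feature, batches of 2m steps, and the stated step sizes. It is not the authors' implementation: the consensus step between the agents' critics is omitted (all agents share one reward here, so a single critic is fitted by plain SGD), and the loop length and random seed are arbitrary.

```python
# Minimal sketch (assumptions stated above) of the multi-agent continuous bandit experiment.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, M = 10, 10                      # 10 agents, action dimension m
A_STAR = np.full(M, 4.0)                  # target sum of actions a*
eigs = rng.choice([0.1, 1.0], size=M)     # eigenvalues of C chosen from {0.1, 1}
Q_mat, _ = np.linalg.qr(rng.normal(size=(M, M)))
C = Q_mat @ np.diag(eigs) @ Q_mat.T       # positive definite C

def reward(actions):
    """Common reward R(a) = -(sum_i a^i - a*)^T C (sum_i a^i - a*)."""
    d = actions.sum(axis=0) - A_STAR
    return -d @ C @ d

theta = rng.normal(scale=0.1, size=(N_AGENTS, M))   # target policy mu_theta^i = theta^i
omega = np.zeros(N_AGENTS * M + 1)                   # shared linear critic on [a - theta, 1]
SIGMA_B, LR_CRITIC, LR_ACTOR = 0.1, 0.1, 0.01
BATCH = 2 * M

def features(actions, theta):
    # compatible features (joint action minus target action) plus a bias feature
    return np.append((actions - theta).ravel(), 1.0)

for _ in range(2000):
    # collect a batch of 2m steps with the Gaussian behaviour policy beta ~ N(theta, sigma_b^2)
    for _ in range(BATCH):
        a = theta + SIGMA_B * rng.normal(size=theta.shape)
        phi = features(a, theta)
        err = reward(a) - omega @ phi                 # bandit setting: Q is the immediate reward
        omega = omega + LR_CRITIC * err * phi         # critic step (step size 0.1)

    # actor step: grad_{a^i} Qhat at a = mu_theta is the block of omega belonging to agent i
    grad_blocks = omega[:-1].reshape(N_AGENTS, M)
    theta = theta + LR_ACTOR * grad_blocks            # actor step (step size 0.01)

print("cost of the target policy:", -reward(theta))
```

Under this simplified reading, the cost of the target policy (evaluated without exploration) should decrease as the agents coordinate so that Σ_i θ^i approaches a*, mirroring the behaviour reported for Algorithms 1 and 2.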
This paper extends the results for actor-critic with stochastic policies of [Zhang, ICML 2018] to deterministic policies and provides proofs of convergence under specific assumptions. The authors consider both the on-policy and the off-policy setting and offer convincing derivations. The paper provides a valuable idea and a promising direction in MARL, but the current version has several problems that need to be fixed. Specifically, some of the equations, algorithms, and expressions are ambiguous or unintelligible. In addition, formatting problems in the formulas and citations degrade the paper’s quality and clarity.
SP:9326f169cc5e8d2f4268dcf39af31590ee004d98
Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
1 INTRODUCTION . State-of-the-art for most existing natural language processing ( NLP ) classification tasks is achieved by models that are first pre-trained on auxiliary language modeling tasks and then fine-tuned on the task of interest with cross-entropy loss ( Radford et al. , 2019 ; Howard & Ruder , 2018 ; Liu et al. , 2019 ; Devlin et al. , 2019 ) . Although ubiquitous , the cross-entropy loss – the KL-divergence between one-hot vectors of labels and the distribution of model ’ s output logits – has several shortcomings . Cross entropy loss leads to poor generalization performance ( Liu et al. , 2016 ; Cao et al. , 2019 ) , and it lacks robustness to noisy labels ( Zhang & Sabuncu , 2018 ; Sukhbaatar et al. , 2015 ) or adversarial examples ( Elsayed et al. , 2018 ; Nar et al. , 2019 ) . Effective alternatives have been proposed to modify the reference label distributions through label smoothing ( Szegedy et al. , 2016 ; Müller et al. , 2019 ) , Mixup ( Zhang et al. , 2018 ) , CutMix ( Yun et al. , 2019 ) , knowledge distillation ( Hinton et al. , 2015 ) or self-training ( Yalniz et al. , 2019 ; Xie et al. , 2020 ) . Fine-tuning using cross entropy loss in NLP also tends to be unstable across different runs ( Zhang et al. , 2020 ; Dodge et al. , 2020 ) , especially when supervised data is limited , a scenario in which pre-training is particularly helpful . To tackle the issue of unstable fine-tuning and poor generalization , recent works propose local smoothness-inducing regularizers ( Jiang et al. , 2020 ) and regularization methods inspired by the trust region theory ( Aghajanyan et al. , 2020 ) to prevent representation collapse . Empirical evidence suggests that fine-tuning for more iterations , reinitializing top few layers ( Zhang et al. , 2020 ) , and using debiased Adam optimizer during fine-tuning ( Mosbach et al. , 2020 ) can make the fine-tuning stage more stable . Inspired by the learning strategy that humans utilize when given a few examples , we seek to find the commonalities between the examples of each class and contrast them with examples from other classes . We hypothesize that a similarity-based loss will be able to hone in on the important dimensions of the multidimensional hidden representations hence lead to better few-shot learning results and be more stable while fine-tuning pre-trained language models . We propose a novel objective for fine-tuning that includes a supervised contrastive learning ( SCL ) term that pushes the examples from the same class close and the examples from different classes further apart . The SCL ∗Work done during Facebook AI research internship , correspondence to bgunel @ stanford.edu . term is similar to the contrastive objectives used in self-supervised representation learning across image , speech , and video domains . ( Sohn , 2016 ; Oord et al. , 2018 ; Wu et al. , 2018 ; Bachman et al. , 2019 ; Hénaff et al. , 2019 ; Baevski et al. , 2020 ; Conneau et al. , 2020 ; Tian et al. , 2020 ; Hjelm et al. , 2019 ; Han et al. , 2019 ; He et al. , 2020 ; Misra & Maaten , 2020 ; Chen et al. , 2020a ; b ) . Unlike these methods , however , we use a contrastive objective for supervised learning of the final task , instead of contrasting different augmented views of examples . 
In few-shot learning settings ( 20 , 100 , 1000 labeled examples ) , the addition of the SCL term to the finetuning objective significantly improves the performance on several natural language understanding classification tasks from the popular GLUE benchmark ( Wang et al. , 2019 ) over the very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss only . Furthermore , pre-trained language models fine-tuned with our proposed objective are not only robust to noise in the fine-tuning training data , but can also exhibit improved generalization to related tasks with limited labeled task data . Our approach does not require any specialized network architectures ( Bachman et al. , 2019 ; Hénaff et al. , 2019 ) , memory banks ( Wu et al. , 2018 ; Tian et al. , 2020 ; Misra & Maaten , 2020 ) , data augmentation of any kind , or additional unsupervised data . To the best of our knowledge , our work is the first to successfully integrate a supervised contrastive learning objective for fine-tuning pre-trained language models . We empirically demonstrate that the new objective has desirable properties across several different settings . Our contributions in this work are listed in the following : • We propose a novel objective for fine-tuning pre-trained language models that includes a supervised contrastive learning term , as described in Section 2 . • We obtain strong improvements in the few-shot learning settings ( 20 , 100 , 1000 labeled examples ) as shown in Table 2 , leading up to 10.7 points improvement on a subset of GLUE benchmark tasks ( SST-2 , QNLI , MNLI ) for the 20 labeled example few-shot setting , over a very strong baseline – RoBERTa-Large fine-tuned with cross-entropy loss . • We demonstrate that our proposed fine-tuning objective is more robust , in comparison to RoBERTa-Large fine-tuned with cross-entropy loss , across augmented noisy training datasets ( used to fine-tune the models for the task of interest ) with varying noise levels as shown in Table 3 – leading up to 7 points improvement on a subset of GLUE benchmark tasks ( SST-2 , QNLI , MNLI ) across augmented noisy training datasets . We use a backtranslation model to construct the augmented noisy training datasets of varying noise levels ( controlled by the temperature parameter ) , as described in detail in Section 4.2 . • We show that the task-models fine-tuned with our proposed objective have improved generalizability to related tasks despite having limited availability of labeled task data ( Table 7 ) . This led to a 2.9 point improvement on Amazon-2 over the task model fine-tuned with cross-entropy loss only . Moreover , it considerably reduced the variance across few-shot training samples , when transferred from the source SST-2 sentiment analysis task model . 2 APPROACH . We propose a novel objective that includes a supervised contrastive learning term for fine-tuning pre-trained language models . The loss is meant to capture the similarities between examples of the same class and contrast them with the examples from other classes . For a multi-class classification problem with C classes , we work with a batch of training examples of size N , { xi , yi } i=1 , ... N . 
Φ(·) ∈ R^d denotes an encoder that outputs the l2-normalized final encoder hidden layer before the softmax projection ; N_{y_i} is the total number of examples in the batch that have the same label as y_i ; τ > 0 is an adjustable scalar temperature parameter that controls the separation of classes ; y_{i,c} denotes the label and ŷ_{i,c} denotes the model output for the probability of the i-th example belonging to the class c ; λ is a scalar weighting hyperparameter that we tune for each downstream task and setting . The overall loss is then given in the following :
L = (1 − λ) L_CE + λ L_SCL (1)
L_CE = − (1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} y_{i,c} · log ŷ_{i,c} (2)
L_SCL = Σ_{i=1}^{N} − (1/(N_{y_i} − 1)) Σ_{j=1}^{N} 1_{i≠j} 1_{y_i=y_j} log [ exp(Φ(x_i) · Φ(x_j)/τ) / Σ_{k=1}^{N} 1_{i≠k} exp(Φ(x_i) · Φ(x_k)/τ) ] (3)
The overall loss is a weighted average of CE and the proposed SCL loss , as given in equation (1) . The canonical definition of the multi-class CE loss that we use is given in equation (2) . The novel SCL loss is given in equation (3) . This loss can be applied using a variety of encoders Φ(·) ∈ R^d – for example a ResNet for a computer vision application or a pre-trained language model such as BERT for an NLP application . In this work , we focus on fine-tuning pre-trained language models for single sentence and sentence-pair classification settings . For single sentence classification , each example x_i consists of a sequence of tokens prepended with the special [CLS] token : x_i = [ [CLS] , t_1 , t_2 , . . . , t_L , [EOS] ] . The length of the sequence L is constrained such that L < L_max . Similarly , for sentence-pair classification tasks , each example x_i is a concatenation of two sequences of tokens [ t_1 , t_2 , . . . , t_L ] and [ s_1 , s_2 , . . . , s_M ] corresponding to the sentences , with special tokens delimiting them : x_i = [ [CLS] , t_1 , t_2 , . . . , t_L , [SEP] , s_1 , s_2 , . . . , s_M , [EOS] ] . The length of the concatenated sequences is constrained such that L + M < L_max . In both cases , Φ(x_i) ∈ R^d uses the embedding of the [CLS] token as the representation for example x_i . These choices follow standard practices for fine-tuning pre-trained language models for classification ( Devlin et al. , 2019 ; Liu et al. , 2019 ) . Empirical observations show that both l2 normalization of the encoded embedding representations and an adjustable scalar temperature parameter τ improve performance . Lower temperature increases the influence of examples that are harder to separate , effectively creating harder negatives . Using hard negatives has been previously shown to improve performance in the context of margin-based loss formulations such as triplet loss ( Schroff et al. , 2015 ) . The empirical behavior of the adjustable temperature parameter is consistent with the observations of previous work related to supervised contrastive learning ( Chen et al. , 2020a ; Khosla et al. , 2020 ) . Relationship to Self-Supervised Contrastive Learning . Self-supervised contrastive learning has shown success in learning powerful representations , particularly in the computer vision domain ( Chen et al. , 2020a ; He et al. , 2020 ; Tian et al. , 2020 ; Mnih & Kavukcuoglu , 2013 ; Gutmann & Hyvärinen , 2012 ; Kolesnikov et al. , 2019 ) . Self-supervised learning methods do not require any labeled data ; instead they sample a mini batch from unsupervised data and create positive and negative examples from these samples using strong data augmentation techniques such as AutoAugment ( Cubuk et al. , 2019 ) or RandAugment ( Cubuk et al. , 2020 ) for computer vision . Positive examples are constructed by applying data augmentation to the same example ( cropping , flipping , etc . for an image ) , and negative examples are simply all the other examples in the sampled mini batch . Intuitively , self-supervised contrastive objectives learn representations that are invariant to different views of positive pairs , while maximizing the distance between negative pairs . The distance metric used is often the inner product or the Euclidean distance between vector representations of the examples . For a batch of size N , the self-supervised contrastive loss is defined as :
L_self = Σ_{i=1}^{2N} − log [ exp(Φ(x′_{2i−1}) · Φ(x′_{2i})/τ) / Σ_{k=1}^{2N} 1_{i≠k} exp(Φ(x′_i) · Φ(x′_k)/τ) ] (4)
where Φ(·) ∈ R^d denotes an encoder that outputs the l2-normalized final encoder hidden layer before the softmax projection ; τ > 0 is a scalar temperature parameter . A is defined as a data augmentation block that generates two augmented examples , x′_{2i} and x′_{2i−1} , from the original example x_i : A( { x_i , y_i }_{i=1,...,N} ) = { x′_i , y′_i }_{i=1,...,2N} . As an example , A can be RandAugment for a computer vision application , or it could be a back-translation model for an NLP application .
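To make equations (1)–(3) concrete, here is a small NumPy sketch of the combined fine-tuning objective. It is an illustrative reading rather than the authors' code: it assumes the l2-normalized [CLS] embeddings Φ(x_i) and the softmax probabilities ŷ have already been computed by the encoder, and the function name, toy batch, and the λ and τ values are placeholders.

```python
# Sketch of L = (1 - lambda) * L_CE + lambda * L_SCL, equations (1)-(3); assumptions noted above.
import numpy as np

def scl_ce_loss(phi, yhat, labels, num_classes, lam=0.9, tau=0.3):
    """phi: (N, d) l2-normalized embeddings, yhat: (N, C) predicted class probabilities,
    labels: (N,) integer class labels."""
    n = phi.shape[0]
    onehot = np.eye(num_classes)[labels]                              # y_{i,c}
    l_ce = -np.mean(np.sum(onehot * np.log(yhat + 1e-12), axis=1))    # eq. (2)

    sim = phi @ phi.T / tau                                           # Phi(x_i) . Phi(x_j) / tau
    np.fill_diagonal(sim, -np.inf)                                    # enforces the k != i indicator
    log_denom = np.log(np.sum(np.exp(sim), axis=1))                   # log sum_{k != i} exp(...)
    same = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)  # 1_{y_i = y_j}, j != i

    l_scl = 0.0
    for i in range(n):
        n_pos = same[i].sum()                                         # N_{y_i} - 1
        if n_pos > 0:
            l_scl += -np.sum(sim[i, same[i]] - log_denom[i]) / n_pos  # row i of eq. (3)
    return (1.0 - lam) * l_ce + lam * l_scl                           # eq. (1)

# toy usage with random embeddings and predictions
rng = np.random.default_rng(0)
phi = rng.normal(size=(8, 16)); phi /= np.linalg.norm(phi, axis=1, keepdims=True)
labels = rng.integers(0, 2, size=8)
logits = rng.normal(size=(8, 2)); yhat = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
print(scl_ce_loss(phi, yhat, labels, num_classes=2))
```

One consequence visible in the sketch: an example whose class has no other member in the batch contributes nothing to the SCL term, which may be one reason batch composition matters for this loss.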
The paper proposes a new training objective for fine-tuning pre-trained models: a weighted sum of the classical cross-entropy (CE) loss and a new supervised contrastive learning (SCL) term. The latter uses the (negated) log-softmax of the embedding similarities (i.e. dot products) between a training instance and all other instances in the batch that share its label. In contrast to the more traditional self-supervised contrastive learning (where positive pairs are obtained by applying transformations to the original data instance), there is no data augmentation; two examples with the same label constitute a positive pair.
SP:cc282126b689c7311c3a28f0d173a004ed24382f
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards . Complex tasks are often hierarchically composed of sub-tasks . Solving a sub-task increases the return expectation and leads to a step in the Q-function . RUDDER identifies these steps and then redistributes reward to them , thus immediately giving reward if sub-tasks are solved . Since the delay of rewards is reduced , learning is considerably sped up . However , for complex tasks , current exploration strategies struggle with discovering episodes with high rewards . Therefore , we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration . Unfortunately , the number of demonstrations is typically small and RUDDER ’ s LSTM as a deep learning model does not learn well on these few training samples . Hence , we introduce Align-RUDDER , which is RUDDER with two major modifications . First , Align-RUDDER assumes that episodes with high rewards are given as demonstrations , replacing RUDDER ’ s safe exploration and lessons replay buffer . Second , we substitute RUDDER ’ s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations . Profile models can be constructed from as few as two demonstrations . Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards . Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations . On the MineCraft ObtainDiamond task , Align-RUDDER is able to mine a diamond , though not frequently . 1 INTRODUCTION . Reinforcement learning algorithms struggle with learning complex tasks that have sparse and delayed rewards ( Sutton & Barto , 2018 ; Rahmandad et al. , 2009 ; Luoma et al. , 2017 ) . For delayed rewards , temporal difference ( TD ) suffers from vanishing information ( Arjona-Medina et al. , 2019 ) . On the other hand Monte Carlo ( MC ) has high variance since it must average over all possible futures ( ArjonaMedina et al. , 2019 ) . Monte-Carlo Tree Search ( MCTS ) , used for Go and chess , can handle delayed and rare rewards since it has a perfect environment model ( Silver et al. , 2016 ; 2017 ) . RUDDER ( Arjona-Medina et al. , 2019 ; 2018 ) has been shown to excel in model-free learning of policies when only sparse and delayed rewards are given . RUDDER requires episodes with high rewards to store them in its lessons replay buffer for learning a reward redistribution model like an LSTM network . However , for complex tasks , current exploration strategies find episodes with high rewards only after an incommensurate long time . Humans and animals obtain high reward episodes by teachers , role models , or prototypes . Along this line , we assume that episodes with high rewards are given as demonstrations . Since generating demonstrations is often tedious for humans and time-consuming for exploration strategies , typically , only a few demonstrations are available . However , RUDDER ’ s LSTM ( Hochreiter , 1991 ; Hochreiter & Schmidhuber , 1997a ) as a deep learning method requires many examples for learning . Therefore , we introduce Align-RUDDER , which replaces RUDDER ’ s LSTM with a profile model obtained from multiple sequence alignment ( MSA ) of the demonstrations . Profile models are well known in bioinformatics . They are used to score new sequences according to their sequence similarity to the aligned sequences . 
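The mechanism sketched in the abstract — scoring how well a prefix of an episode matches the demonstrations' profile and handing out the score differences as immediate rewards — is spelled out in the sections below, but a toy example already conveys the idea. In the sketch, the profile model is replaced by a hand-written consensus-ordering scorer; the event names, the rescaling step, and the function names are illustrative assumptions, not the paper's construction.

```python
# Toy sketch of reward redistribution by prefix scoring (assumptions stated above).
import numpy as np

def redistribute(events, episode_return, prefix_score):
    """R_{t+1} = g(prefix up to t) - g(prefix up to t-1), rescaled so the
    redistributed rewards sum to the observed episode return (when possible)."""
    scores = np.array([prefix_score(events[: t + 1]) for t in range(len(events))])
    prev = np.concatenate(([0.0], scores[:-1]))
    diffs = scores - prev
    if diffs.sum() != 0:
        diffs = diffs * (episode_return / diffs.sum())
    return diffs

# hand-written stand-in for a profile/PSSM model: +1 for each event of a
# "consensus" strategy (e.g. extracted from aligned demonstrations) completed in order
CONSENSUS = ["get_wood", "make_pickaxe", "mine_stone", "mine_diamond"]

def profile_score(prefix):
    matched, it = 0, iter(prefix)
    for target in CONSENSUS:
        if target in it:          # consensus events must appear in order in the prefix
            matched += 1
    return float(matched)

episode = ["walk", "get_wood", "walk", "make_pickaxe", "walk", "mine_stone", "mine_diamond"]
print(redistribute(episode, episode_return=1.0, prefix_score=profile_score))
```

Run on the toy episode, the delayed return is moved entirely onto the steps that complete a sub-task of the consensus strategy, which is what shortens the reward delay for the subsequent RL method.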
Like RUDDER also Align-RUDDER performs reward redistribution —using an alignment model— , which considerably speeds up learning even if only a few demonstrations are available . Our main contributions are : • We suggest a reinforcement algorithm that works well for sparse and delayed rewards , where standard exploration fails but few demonstrations with high rewards are available . • We adopt multiple sequence alignment from bioinformatics to construct a reward redistribution technique that works with few demonstrations . • We propose a method that uses alignment techniques and reward redistribution for identifying sub-goals and sub-tasks which in turn allow for hierarchical reinforcement learning . 2 REVIEW OF RUDDER . Basic insight : Q-functions for complex tasks are step functions . Complex tasks are typically composed of sub-tasks . Therefore the Q-function of an optimal policy resembles a step function . The Q-function is the expected future return and it increases ( i.e , makes a step ) when a sub-task is completed . Identifying large steps in the Q-function speeds up learning since it allows ( i ) to increase the return by performing actions that cause the step and ( ii ) to sample episodes with a larger return for learning . An approximation to the Q-function must predict the expected future return for every state-action pair . However , a Q-function that resembles a step-function is mostly constant . Therefore predictions are only necessary at the steps . We have to identify the relevant state-actions that cause the steps and then predict the size of the steps . An LSTM network ( Hochreiter , 1991 ; Hochreiter & Schmidhuber , 1995 ; 1997a ; b ) can identify relevant state-actions that open the input gate to store the size of the steps in the memory cells . Consequently , LSTM only updates its states and changes its return prediction when a new relevant state-action pair is observed . Therefore , both the change of the prediction and opening input gates indicate Q-function steps through an LSTM network that predicts the return of an episode . Reward Redistribution . We consider episodic Markov decision processes ( MDPs ) , i.e. , the reward is only given once at the end of the sequence . The Q-function is assumed to be a step function , that is , the task can be decomposed into sub-tasks ( see previous paragraph ) . Reward redistribution aims at giving the differences in the Q-function of an optimal policy as a new immediate reward . Since the Q-function of an optimal policy is not known , we approximate it by predicting the expected return by an LSTM network or by an alignment model in this work . The differences in predictions determine the reward redistribution . The prediction model will first identify the largest steps in the Q-function as they decrease the prediction error most . Fortunately , just identifying the largest steps even with poor predictions speeds up learning considerably . See Figure 1 for a description of the reward redistribution . Learning methods based on reward redistribution . The redistributed reward serves as reward for a subsequent learning method : ( A ) The Q-values can be directly estimated ( Arjona-Medina et al. , 2019 ) , which is used in the experiments for the artificial tasks and BC pre-training for MineCraft . ( B ) Redistributed rewards can serve for learning with policy gradients like Proximal Policy Optimization ( PPO ) ( Schulman et al. , 2018 ) , which is used in the MineCraft experiments . 
( C ) Redistributed rewards can serve for temporal difference learning like Q-learning ( Watkins , 1989 ) . LSTM models for reward redistribution . RUDDER uses an LSTM model for predicting the future return . The reward redistribution is the difference between two subsequent predictions . If a stateaction pair increases the prediction of the return , then it is immediately rewarded . Using state-action sub-sequences ( s , a ) 0 : t = ( s0 , a0 , . . . , st , at ) , the redistributed reward is Rt+1 = g ( ( s , a ) 0 : t ) − g ( ( s , a ) 0 : t−1 ) , where g is an LSTM model that predicts the return of the episode . The LSTM model learns at first to approximate the largest steps of the Q-function since they reduce the prediction error the most . 3 ALIGN-RUDDER : RUDDER WITH FEW DEMONSTRATIONS . In bioinformatics , sequence alignment identifies similarities between biological sequences to determine their evolutionary relationship ( Needleman & Wunsch , 1970 ; Smith & Waterman , 1981 ) . The result of the alignment of multiple sequences is a profile model . The profile model is a consensus sequence , a frequency matrix , or a Position-Specific Scoring Matrix ( PSSM ) ( Stormo et al. , 1982 ) . New sequences can be aligned to a profile model and receive an alignment score that indicates how well the new sequences agree to the profile model . Align-RUDDER uses such alignment techniques to align two or more high return demonstrations . For the alignment , we assume that the demonstrations follow the same underlying strategy , therefore they are similar to each other analog to being evolutionary related . If the agent generates a state-action sequence ( s , a ) 0 : t−1 , then this sequence is aligned to the profile model g giving a score g ( ( s , a ) 0 : t−1 ) . The next action of the agent extends the state-action sequence by one state-action pair ( st , at ) . The extended sequence ( s , a ) 0 : t is also aligned to the profile model g giving another score g ( ( s , a ) 0 : t ) . The redistributed reward Rt+1 is the difference of these scores : Rt+1 = g ( ( s , a ) 0 : t ) − g ( ( s , a ) 0 : t−1 ) ( see Eq . ( 1 ) ) . This difference indicates how much of the return is gained or lost by a adding another sequence element . Align-RUDDER scores how close an agent follows an underlying strategy , which has been extracted by the profile model . Similar to the LSTM model , we identify the largest steps in the Q-function via relevant events determined by the profile model . Therefore , redistributing the reward by sequence alignment fits into the RUDDER framework with all its theoretical guarantees . RUDDER ’ s theory for reward redistribution is valid for LSTM , other recurrent networks , attention mechanisms , or sequence and profile models . Advantages of alignment compared to LSTM . Learning an LSTM model is severely limited when very few demonstrations are available . First , LSTM is known to require a large number of samples to generalize to new sequences . In contrast , sequence alignment requires only two examples to generalize well as known from bioinformatics . Second , expert demonstrations have high rewards . Therefore random demonstrations with very low rewards have to be generated . LSTM does not generalize well when only these extreme reward cases can be observed in the training set . In contrast , sequence alignment only uses examples that are closely related ; that is , they belong to the same category ( expert demonstrations ) . Reward Redistribution by Sequence Alignment . 
Reward Redistribution by Sequence Alignment. The new reward redistribution approach consists of five steps, see Fig. 3: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model, e.g., a PSSM. (V) Redistribute the reward: each sub-sequence τ_t of a new episode τ is aligned to the profile. The redistributed reward R_{t+1} is proportional to the difference of scores S based on the PSSM from step (IV), i.e., R_{t+1} ∝ S(τ_t) − S(τ_{t−1}). In the following, the five steps of Align-RUDDER's reward redistribution are outlined. For the interested reader, each step is detailed in Sec. A.3 in the appendix. Finally, in Sec. A.7.3 in the appendix, we illustrate these five steps on the example of Minecraft.

(I) Defining Events. Instead of states, we consider differences of consecutive states to detect a change caused by an important event, like achieving a sub-goal. An event is defined as a cluster of state differences. We use similarity-based clustering like affinity propagation (AP) (Frey & Dueck, 2007). If states are only enumerated, we suggest using the "successor representation" (Dayan, 1993) or "successor features" (Barreto et al., 2017). We use the demonstrations, combined with state-action sequences generated by a random policy, to construct the successor representation. A sequence of events is obtained from a state-action sequence by mapping each state s to its cluster identifier e (the event) and ignoring the actions. Alignment techniques from bioinformatics assume sequences composed of a small alphabet of events, e.g., 20 events. If there are too many events, well-fitting alignments cannot be distinguished from random alignments. This effect is known in bioinformatics as the "Inconsistency of Maximum Parsimony" (Felsenstein, 1978).

(II) Determining the Alignment Scoring System. A scoring matrix S with entries s_{i,j} determines the score for aligning event i with event j. A priori, we only know that a relevant event should be aligned to itself but not to other events. Therefore, we set s_{i,j} = 1/p_i for i = j and s_{i,j} = α for i ≠ j. Here, p_i is the relative frequency of event i in the demonstrations, and α is a hyper-parameter, which is typically a small negative number. This scoring scheme encourages the alignment of rare events, for which p_i is small (see the sketch after step (III)). For more details see Appendix Sec. A.3.

(III) Multiple Sequence Alignment (MSA). An MSA algorithm maximizes the sum of all pairwise scores in an alignment, S_MSA = Σ_{i<j} Σ_{t=0}^{L} s_{i,j,t_i,t_j}, where s_{i,j,t_i,t_j} is the score at alignment column t for aligning the event at position t_i in sequence i to the event at position t_j in sequence j. L ≥ T is the alignment length, since gaps make the alignment longer than each individual sequence. We use ClustalW (Thompson et al., 1994) for MSA. MSA constructs a guiding tree by agglomerative hierarchical clustering of pairwise alignments between all demonstrations. This guiding tree allows identifying multiple strategies. For more details see Appendix Sec. A.3.
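To make steps (II) and (III) concrete, the following toy sketch builds the scoring scheme from event frequencies and uses it in a pairwise Needleman-Wunsch alignment, the pairwise building block that MSA tools such as ClustalW combine via a guiding tree. The event encoding, the gap penalty, and the value of α are illustrative assumptions; the choices actually used are given in Appendix Sec. A.3.

```python
import numpy as np
from collections import Counter

def make_score_fn(demos, alpha=-0.5):
    """Step (II): s_{i,j} = 1/p_i for i = j and alpha for i != j, where p_i is
    the relative frequency of event i in the demonstrations."""
    counts = Counter(e for demo in demos for e in demo)
    total = sum(counts.values())
    p = {e: c / total for e, c in counts.items()}
    return lambda a, b: (1.0 / p[a]) if a == b else alpha

def pairwise_alignment_score(x, y, score, gap=-1.0):
    """Global alignment score of two event sequences (Needleman & Wunsch, 1970)."""
    F = np.zeros((len(x) + 1, len(y) + 1))
    F[:, 0] = gap * np.arange(len(x) + 1)
    F[0, :] = gap * np.arange(len(y) + 1)
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            F[i, j] = max(F[i - 1, j - 1] + score(x[i - 1], y[j - 1]),  # (mis)match
                          F[i - 1, j] + gap,                            # gap in y
                          F[i, j - 1] + gap)                            # gap in x
    return F[-1, -1]

# Toy usage: two demonstrations mapped to event identifiers by step (I).
demos = [[0, 3, 1, 2, 4], [0, 1, 2, 2, 4]]
score = make_score_fn(demos)
print(pairwise_alignment_score(demos[0], demos[1], score))
```

Rare events receive a large self-alignment score 1/p_i, so the alignment is driven toward matching exactly those events that mark sub-goals shared across demonstrations.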
(IV) Position-Specific Scoring Matrix (PSSM) and MSA profile model. From the alignment, we construct a profile model as a) column-wise event probabilities and b) a PSSM (Stormo et al., 1982). The PSSM is a column-wise scoring matrix used to align new sequences to the profile model. More details are given in Appendix Sec. A.3.

(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e_{0:T} (e_t is the event at posit