EMTL: A Generative Domain Adaptation Approach
We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, a one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to bridge the gap between the source and target domains. The breakthrough is based on modern generative models (autoregressive mixture density nets) that are competitive with discriminative models on moderate-dimensional classification problems. By decoupling the source density estimation from the adaptation steps, we can design a domain adaptation approach where the source data is locked away after being processed only once, opening the door to transfer when data security or privacy concerns impede the use of traditional domain adaptation. We demonstrate that our approach can achieve state-of-the-art performance on synthetic and real data sets, without accessing the source data at the adaptation phase.

1 INTRODUCTION

In the classical supervised learning paradigm, we assume that the training and test data come from the same distribution. In practice, this assumption often does not hold. When the pipeline includes massive data labeling, models are routinely retrained after each data collection campaign. However, data labeling costs often make retraining impractical. Without labeled data, it is still possible to train the model by using a training set which is relevant but not identically distributed to the test set. Due to the distribution shift between the training and test sets, performance usually cannot be guaranteed. Domain adaptation (DA) is a machine learning subdomain that aims at learning a model from biased training data. It explores the relationship between the source (labeled training data) and target (test data) domains to find a mapping function that fixes the bias, so that the model learned on the source data can be applied in the target domain. Usually some target data is needed during the training phase to calibrate the model. In unsupervised domain adaptation (UDA), only unlabeled target data is needed during the training phase. UDA is an appealing learning paradigm, since obtaining unlabeled data is usually easy in many applications. UDA allows the model to be deployed in various target domains with different shifts using a single labeled source data set. Due to these appealing operational features, UDA has become a prominent research field with various approaches. Kouw & Loog (2019) and Zhuang et al. (2020) surveyed the latest progress on UDA and found that most approaches are based on discriminative models, either reweighting the source instances to approximate the target distribution or learning a feature mapping function to reduce the statistical distance between the source and target domains. After calibration, a discriminative model is trained on the adjusted source data and used in the target domain. In this workflow, the adaptation algorithm usually has to access the source and target data simultaneously. However, accessing the source data during the adaptation phase is not possible when the source data is sensitive (for example, because of security or privacy issues). In particular, in our application workflow an industrial company sells devices to various service companies which cannot share their customer data with each other.
The industrial company may contract with one of the service companies to access their data during an R&D phase, but this data will not be available when the industrial company sells the device (and the predictive model) to other service companies. In this paper we propose EMTL, a generative UDA algorithm for binary classification that does not have to access the source data during the adaptation phase. We use density estimation to estimate the joint source probability function ps(x, y) and the marginal target probability function pt(x) and use them for domain adaptation. To solve the data security issue, EMTL decouples source density estimation from the adaptation steps. In this way, after the source preprocessing we can put away or delete the source data. Our approach is motivated by the theory on domain adaptation (Ben-David et al., 2010), which states that the error of a hypothesis h on the target domain can be bounded by three terms: the error on the source domain, the distance between the source and target distributions, and the expected difference in labeling functions. This theorem motivated us to define a mediator density function pm(x, y) (i) whose conditional probability of y given x is equal to the conditional probability of the source and (ii) whose marginal density on x is equal to the marginal density of the target. We can then construct a Bayes optimal classifier on the target domain under the assumption of covariate shift (the distribution of y given x is the same in the source and target domains). Our approach became practical with the recent advances in (autoregressive) neural density estimation (Uria et al., 2013). We learn pm(x, y) from ps(x, y) and pt(x) to bridge the gap between the source and target domains. We regard the label on the target data as a latent variable and show that if ps(x|y = i) can be learned perfectly for i ∈ {0, 1}, then a one-step Expectation–Maximization iteration (this is why our algorithm is named EMTL) will produce a density function pm(x, y) with the following properties on the target data: (i) it minimizes the Kullback–Leibler divergence between pm(yi|xi) and ps(yi|xi); (ii) it maximizes the log-likelihood ∑i log pm(xi). Then, by adding an additional marginal constraint on pm(xi) to make it explicitly close to pt(xi) on the target data, we obtain the final objective function for EMTL. Although this analysis assumes simple covariate shift, we will experimentally show that EMTL can go beyond this assumption and work well under other distribution shifts. We conduct experiments on synthetic and real data to demonstrate the effectiveness of EMTL. First, we construct a simple two-dimensional data set to visualize the performance of EMTL. Second, we use UCI benchmark data sets and the Amazon reviews data set to show that EMTL is competitive with state-of-the-art UDA algorithms, without accessing the source data at the adaptation phase. To the best of our knowledge, EMTL is the first work using density estimation for unsupervised domain adaptation. Unlike other existing generative approaches (Kingma et al., 2014; Karbalayghareh et al., 2018; Sankaranarayanan et al., 2018), EMTL can decouple the source density estimation process from the adaptation phase and thus can be used in situations where the source data is not available at the adaptation phase due to security or privacy reasons.

2 RELATED WORK
Zhuang et al. (2020), Kouw & Loog (2019) and Pan & Yang (2009) categorize DA approaches into instance-based and feature-based techniques. Instance-based approaches reweight labeled source samples according to the ratio between the source and the target densities. Importance weighting methods reweight source samples to reduce the divergence between the source and target densities (Huang et al., 2007; Gretton et al., 2007; Sugiyama et al., 2007). In contrast, class importance weighting methods reweight source samples to make the source and target label distributions the same (Azizzadenesheli et al., 2019; Lipton et al., 2018; Zhang et al., 2013). Feature-based approaches learn a new representation for the source and the target by minimizing the divergence between the source and target distributions. Subspace mapping methods assume that there is a common subspace between the source and target (Fernando et al., 2013; Gong et al., 2012). Courty et al. (2017) proposed to use optimal transport to constrain the learning of the transformation function. Other methods aim at learning a representation which is domain-invariant across domains (Gong et al., 2016; Pan et al., 2010). Besides these shallow models, deep learning has also been widely applied in domain adaptation (Tzeng et al., 2017; Ganin et al., 2016; Long et al., 2015). DANN (Ganin et al., 2016) learns a representation using a neural network that is discriminative for the source task while being unable to distinguish the source and target domains from each other. Kingma et al. (2014) and Belhaj et al. (2018) proposed variational-inference-based semi-supervised learning approaches that regard the missing label as a latent variable and then perform posterior inference.

3 NOTATION AND PROBLEM DEFINITION

We consider the unsupervised domain adaptation problem in a binary classification setting (the setup is trivial to extend to multi-class classification). Let p(x, y) be a joint density function defined on X × Y, where x ∈ R^p is the feature vector and y ∈ {0, 1} is the label. We denote the conditional probability p(y = 1|x) by q(x). A hypothesis or model is a function h : X → [0, 1]. We define the error of h as the expected disagreement between h(x) and q(x), i.e.,

ε(h) = E_{x∼p} |h(x) − q(x)|. (1)

We use superscripts s and t to distinguish the source and target domains; that is, ps(x, y) and pt(x, y) are the joint density functions in the source and target domains, respectively. In general, we assume that ps(x, y) ≠ pt(x, y). Let Ds = {(x_i^s, y_i^s)}_{i=1}^{n_s} and Ut = {x_i^t}_{i=1}^{n_t} be i.i.d. data sets generated from the source distribution ps(x, y) and the marginal target distribution pt(x), respectively, where n_s and n_t are the source and target sample sizes. The objective of unsupervised domain adaptation is to learn a model ĥ by using the labeled Ds and the unlabeled Ut that achieves the lowest error in the target domain.

4 GENERATIVE APPROACH

Ben-David et al. (2010) proved that the error of a hypothesis h in the target domain εt(h) can be bounded by the sum of the error in the source domain εs(h), the distribution distance between the two domains, and the expected L1 distance between the two conditional probabilities.
Theorem 1 (Ben-David et al., 2010, Theorem 1). For a hypothesis h,

εt(h) ≤ εs(h) + d1(ps(x), pt(x)) + min{ E_{x∼ps} |qs(x) − qt(x)|, E_{x∼pt} |qs(x) − qt(x)| }, (2)

where d1(ps(x), pt(x)) = 2 sup_{B∈B} |Pr_s(B) − Pr_t(B)| is twice the total variation distance between the two domain distributions, and qs(x) and qt(x) are the source and target probabilities of y = 1 given x, respectively.

In the covariate shift setting, we assume that the conditional probability p(y|x) is invariant between the source and the target domains. Thus the third term on the right-hand side of Eq. (2) is zero, which means that the target error is bounded by the source error plus the distance between the two domains. Many current unsupervised domain adaptation solutions work on reducing the distance between the two domain densities. Importance-sampling-based approaches resample the source data to mimic the target data distribution, and feature-mapping-based approaches do so by learning a transformation function φ(x) for the source data. However, both approaches need to access the source and target data simultaneously. In this paper, we propose a domain adaptation approach based on generative models. First, we learn all multivariate densities using RNADE (Uria et al., 2013), an autoregressive version of Bishop's (1994) mixture density nets. We found RNADE excellent at learning medium-dimensional densities, and in a certain sense it is RNADE that made our approach feasible. Second, we introduce a mediator joint density function pm(x, y) that bridges the gap between ps(x, y) and pt(x, y). Since the source distribution information is stored in the learned generative model after training, we do not need to access the source data in the adaptation phase.
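To make the decoupled two-phase workflow concrete, here is a minimal Python sketch. It substitutes scikit-learn Gaussian mixtures for RNADE and a responsibility-weighted refit for the full one-step-EM-plus-marginal-constraint objective; all function names and the resampling M-step are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the EMTL idea. Assumptions: Gaussian mixtures stand in
# for RNADE; the M-step is approximated by responsibility-weighted resampling.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_source(Xs, ys, n_components=5):
    """Source phase: learn p_s(x|y) and the class prior once; afterwards the
    source data can be locked away or deleted."""
    models, priors = {}, {}
    for c in (0, 1):
        models[c] = GaussianMixture(n_components=n_components).fit(Xs[ys == c])
        priors[c] = np.mean(ys == c)
    return models, priors

def adapt_one_step_em(models, priors, Xt, n_components=5, seed=0):
    """Adaptation phase: one E-step on the unlabeled target data (labels are
    latent), then an M-step that refits the mediator p_m(x|y) on the target.
    Fitting on target data pulls the marginal p_m(x) toward p_t(x), mimicking
    the explicit marginal constraint."""
    rng = np.random.default_rng(seed)
    # E-step: responsibilities r_i = p(y=1|x_i) under the learned source model.
    log_p0 = models[0].score_samples(Xt) + np.log(priors[0])
    log_p1 = models[1].score_samples(Xt) + np.log(priors[1])
    r = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
    # M-step (sketch): refit class-conditionals on responsibility-weighted
    # resamples of the target data (GaussianMixture has no sample weights).
    mediator = {}
    for c, w in ((0, 1.0 - r), (1, r)):
        idx = rng.choice(len(Xt), size=len(Xt), p=w / w.sum())
        mediator[c] = GaussianMixture(n_components=n_components).fit(Xt[idx])
    return mediator, {0: 1.0 - r.mean(), 1: r.mean()}

def predict(mediator, prior, X):
    """Bayes classifier under the mediator density."""
    log_p0 = mediator[0].score_samples(X) + np.log(prior[0])
    log_p1 = mediator[1].score_samples(X) + np.log(prior[1])
    return (log_p1 > log_p0).astype(int)
```

Note that, as in the paper's workflow, `fit_source` is the only function that touches (Xs, ys); `adapt_one_step_em` and `predict` operate purely on the learned source model and the target sample.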
This paper proposes a novel method for Unsupervised Domain Adaptation (UDA) when the source domain's privacy should be preserved. The authors propose EMTL, a generative method that learns multivariate densities with RNADE (Uria et al., 2013) and uses a mediator joint density function to bridge the source and target domains. EMTL achieves performance comparable to that of DANN (Ganin et al., 2016) on a single dataset.
SP:feed1c549e9d8bc680bfb92dbd0979b3fb103904
CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks
1 INTRODUCTION

Graphs, as flexible data representations that store rich relational information, have been commonly used in data science tasks. Machine learning methods on graphs (Chami et al., 2020), especially Graph Neural Networks (GNNs), have attracted increasing interest in the research community. They are widely applied to real-world problems such as recommender systems (Ying et al., 2018), social network analysis (Li et al., 2017), and transportation forecasting (Yu et al., 2017). Among the heterogeneous types of graph-structured data, it is worth noting that graphs usually play diverse roles in different contexts, datasets, and tasks. Some of the roles are relational, as a graph may indicate certain statistical relationships between connected observations; some are representational, as the topological structure of a graph may encode important features or patterns of the data; some are even causal, as a graph may reflect causal relationships specified by domain experts. It is crucial to recognize the distinct roles of a graph in order to correctly utilize the signals in the graph-structured data. In this paper, we distinguish the representational role and the correlational role of graphs in the context of node-level (semi-)supervised learning, and we investigate how to design better GNNs that take advantage of both roles. (The code is available at https://github.com/jiaqima/CopulaGNN.) In a node-level prediction task, the observed graph in the data may relate to the outcomes of interest (e.g., node labels) in multiple ways. Conceptually, we say that the graph plays a representational role if one can leverage it to construct better feature representations. For example, in social network analysis, aggregating user features from one's friends is usually helpful (thanks to the well-known homophily phenomenon (McPherson et al., 2001)). In addition, the structural properties of a user's local network, e.g., structural diversity (Ugander et al., 2012) and structural holes (Burt, 2009; Lou & Tang, 2013), often provide useful information for making predictions about certain outcomes of that user. On the other hand, sometimes a graph directly encodes correlations between the outcomes of connected nodes, and we say that it plays a correlational role. For example, hyper-linked Web pages are likely to be visited together even if they have dissimilar content. In spatiotemporal prediction, the outcomes of nearby locations, conditional on all the features, may still be correlated. We note that the graph structure may provide useful predictive information through both roles but in distinct ways. While both the representational and the correlational roles are common in graph-structured data, we find, through a simulation study, that many existing GNN models are incapable of utilizing the correlational information encoded in a graph. Specifically, we design a synthetic dataset for node-level regression. The node-level outcomes are drawn from a multivariate normal distribution, with the mean and the covariance as functions of the graph to reflect the representational and correlational roles, respectively. We find that when the graph only provides correlational information about the node outcomes, many popular GNN models underperform a multi-layer perceptron that does not consider the graph at all.
To mitigate this deficiency of GNNs, we propose a principled solution, the Copula Graph Neural Network (CopulaGNN), which can take a wide range of GNNs as the base model and improve their ability to model correlational graph information. The key insight of the proposed method is that, by decomposing the joint distribution of node outcomes into the product of marginal densities and a copula density, the representational information and the correlational information can be modeled separately. The former is modeled by the marginal densities through a base GNN, while the latter is modeled by a Gaussian copula. The proposed method also has the benefit of extending easily to various types of node outcome variables, including continuous variables, discrete count variables, and even mixed-type variables. We instantiate CopulaGNN with normal and Poisson marginal distributions for continuous and count regression tasks, respectively. We also implement two types of copula parameterization combined with two types of base GNNs. We evaluate the proposed method on both synthetic and real-world data with both continuous and count regression tasks. The experimental results show that CopulaGNNs significantly outperform their base GNN counterparts when the graph in the data exhibits both correlational and representational roles. We summarize our main contributions as follows: 1. We raise the question of distinguishing the two roles played by the graph and demonstrate that many existing GNNs are incapable of utilizing the graph information when it plays a purely correlational role. 2. We propose a principled solution, the CopulaGNN, to integrate the representational and correlational roles of the graph. 3. We empirically demonstrate the effectiveness of CopulaGNN compared to base GNNs on semi-supervised regression tasks.

2 RELATED WORK

There is extensive prior work that models either the representational role or the correlational role of the graph in node-level (semi-)supervised learning tasks. However, fewer methods try to model both simultaneously, especially with a GNN. Methods focusing on the representational role. As mentioned in Section 1, the graph can help construct better node feature representations both by providing extra topological information and by guiding node feature aggregation. There are vast existing studies in both directions, of which we can only list a few examples. Various methods have been proposed to leverage the topological information of graph-structured data in machine learning tasks, such as graph kernels (Vishwanathan et al., 2010), node embeddings (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016), and GNNs (Xu et al., 2018). Aggregating node features on an attributed graph has also been widely studied, e.g., through feature smoothing (Mei et al., 2008) or GNNs (Kipf & Welling, 2016; Hamilton et al., 2017). In this work, we restrict our focus to GNN models, which have been the state-of-the-art graph representation learning method on various tasks. Methods focusing on the correlational role. On the other hand, there is also extensive literature on modeling the dependence of variables on connected nodes in a graph. One group of methods is graph-based regularization (Zhu et al., 2003; Li et al., 2019), where it is assumed that the variables associated with linked objects change smoothly, and an explicit similarity regularization is imposed among them.
The correlational role of the graph is also closely related to undirected graphical models (Lauritzen, 1996; Jordan et al., 2004; Wainwright & Jordan, 2008). In graphical models, the edges of a graph represent the conditional (in)dependence structure among a set of random variables, which are represented by the node set of the graph. Finally, there is a line of research that combines graphical models with copulas, leading to more flexible model families (Elidan, 2010; Dobra et al., 2011; Liu et al., 2012; Bauer et al., 2012). Our proposed method integrates the benefits of copulas and GNNs to capture both the representational and correlational roles. Methods improving GNNs by leveraging correlational graph information. A few methods explicitly leverage correlational graph information to improve GNN training, but most of them focus on a classification setting (Qu et al., 2019; Ma et al., 2019). A recent study (Jia & Benson, 2020), which we became aware of only recently, shares a similar motivation to ours, yet our methodology differs significantly. In particular, Jia & Benson (2020) apply a multivariate normal distribution to model the correlation of node outcomes, which can be viewed as a special case of our proposed CopulaGNN when a Gaussian copula with normal marginals is used. Our method not only generalizes to other marginals (we show the effectiveness of some of them), but also has a more flexible parameterization of the correlation matrix of the copula distribution. In addition, we differ from these previous works by explicitly distinguishing the two roles of the graph in the data.

3 SIMULATING THE TWO ROLES OF THE GRAPH

In this section, we investigate, through a simulation study, the representational and correlational roles of the graph in the context of node-level semi-supervised learning.

3.1 NODE-LEVEL SEMI-SUPERVISED LEARNING

We start by formally introducing the problem of node-level semi-supervised learning. A graph is a tuple G = (V, E), where V = {1, 2, . . . , n} is the set of n nodes and E ⊆ V × V is the set of edges; let s = |E| be the number of edges. The graph is also associated with X ∈ R^{n×d} and y ∈ R^n, which are the node features and outcome labels. In the semi-supervised learning setting, we only observe the labels of 0 < m < n nodes. Without loss of generality, we assume the labels of nodes {1, 2, . . . , m} are observed and those of {m+1, . . . , n} are missing. Therefore, the label vector y can be partitioned as y = (y_obs^T, y_miss^T)^T. The goal of a node-level semi-supervised learning task is to infer y_miss based on (y_obs, X, G).

3.2 SYNTHETIC DATA

To simulate the representational and correlational roles of the graph, we first design a synthetic dataset by specifying the joint distribution of y conditional on X and G. In particular, we let the joint distribution of the node outcomes take the form y | X, G ∼ N(μ(X, G), Σ(G)), for some μ, Σ to be specified. In this way, the graph G plays a representational role through μ(X, G) and a correlational role through Σ(G). Specifically, we generate synthetic node-level regression data on a graph with n nodes and s edges (see Appendix A.1 for the whole procedure).
We first randomly generate a feature matrix X ∈ R^{n×d_0}. Let A be the adjacency matrix of the graph, D the degree matrix, and L = D − A the graph Laplacian. Let Ã = A + I and D̃ = D + I. Given parameters w_y ∈ R^{d_0}, we generate the node label vector y ∼ N(μ, Σ), where, for some γ > 0, τ > 0, and σ^2 > 0,

(a) μ = D̃^{-1}ÃXw_y, Σ = σ^2 I;
(b) μ = Xw_y, Σ = τ(L + γI)^{-1};
(c) μ = D̃^{-1}ÃXw_y, Σ = τ(L + γI)^{-1}.

Depending on how (μ, Σ) are configured, we get three types of synthetic data settings: (a), (b), and (c). Intuitively, the graph plays a purely representational role in setting (a), since the label of a node depends on the aggregated features of its local neighborhood and the node labels are independent conditional on the node features. In setting (b), the graph plays a purely correlational role; while the means of the node labels depend only on their own node features, the node labels are still correlated conditional on the features, and the correlation is determined by the graph structure. Finally, setting (c) is a combination of (a) and (b), where the graph plays both representational and correlational roles. In the rest of this section, we test the performance of a few widely used GNNs under settings (a) and (b) to examine their ability to utilize representational and correlational information. We defer the experimental results under setting (c) to Section 5.2 for ease of reading.
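To make the three settings concrete, the following sketch generates data according to (a)-(c). Only the formulas for μ and Σ come from the text above; the function name, default hyperparameters, and the standard-normal feature distribution are placeholder assumptions (the paper's full procedure is in its Appendix A.1).

```python
# Sketch of the synthetic generator for settings (a)-(c).
import numpy as np

def make_synthetic(A, d0=10, setting="c", gamma=1.0, tau=1.0, sigma2=1.0, seed=0):
    """A: (n, n) adjacency matrix of the graph; returns features X and labels y."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                              # graph Laplacian L = D - A
    A_t = A + np.eye(n)                               # A tilde = A + I
    Dt_inv = np.diag(1.0 / (deg + 1.0))               # (D tilde)^{-1}
    X = rng.normal(size=(n, d0))                      # random node features
    w_y = rng.normal(size=d0)
    mu_repr = Dt_inv @ A_t @ X @ w_y                  # representational role: aggregated features
    Sigma_corr = tau * np.linalg.inv(L + gamma * np.eye(n))  # correlational role
    if setting == "a":
        mu, Sigma = mu_repr, sigma2 * np.eye(n)
    elif setting == "b":
        mu, Sigma = X @ w_y, Sigma_corr
    else:                                             # setting "c": both roles
        mu, Sigma = mu_repr, Sigma_corr
    y = rng.multivariate_normal(mu, Sigma)
    return X, y
```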
The paper presents a new model based on Graph Neural Networks (GNNs). The proposed model adopts probability distributions called copulas and is called the Copula Graph Neural Network (CopulaGNN). Two parameterizations of CopulaGNN are given, and the learning of the proposed model is discussed. Experiments suggest that CopulaGNN outperforms existing GNNs and an MLP in almost all setups.
SP:4ebb53f9acc9e99dc57bb71b548aabde7dccbef7
Active Tuning
1 INTRODUCTION

Recurrent neural networks (RNNs) are inherently robust against noise only to a limited extent, and they often generate unsuitable predictions when confronted with corrupted or missing data (cf., e.g., Otte et al., 2015). To tackle noise, an explicit noise-aware training procedure can be employed, yielding denoising networks, which are targeted at handling particular noise types and levels. Recurrent oscillators, such as echo state networks (ESNs) (Jaeger, 2001; Koryakin et al., 2012; Otte et al., 2016), when initialized with teacher forcing, however, are highly dependent on a clean and accurate target signal. Given an overly noisy signal, the system is often not able to tune its neural activities into the desired target dynamics at all. Here, we present a method that can be seen as an alternative to regular teacher forcing and, moreover, as a general tool for more robustly tuning and thus synchronizing the dynamics of a generative differentiable temporal forward model, such as a standard RNN, ESN, or LSTM-like RNN (Hochreiter & Schmidhuber, 1997; Otte et al., 2014; Chung et al., 2014; Otte et al., 2016), with the observed data stream. The proposed method, which we call Active Tuning, uses gradient back-propagation through time (BPTT) (Werbos, 1990), where the back-propagated gradient signal is used to tune the hidden activities of a neural network instead of adapting its weights. The way we utilize the temporal gradient signal is related to learning parametric biases (Sugita et al., 2011) and applying dynamic context inference (Butz et al., 2019). With Active Tuning, two essential aspects apply: First, during signal inference, the model is not driven by the observations directly, but indirectly via prediction-error-induced temporal gradient information, which is used to infer the hidden state activation sequence that best explains the observed signal. Second, the general stabilization ability of propagating signal hypotheses through the network is exploited, effectively washing out activity components (such as noise) that cannot be modeled with the learned temporal structures within the network. As a result, the vulnerable internal dynamics are kept within a system-consistent activity milieu, effectively decoupling them from noise or other unknown distortions that are present in the defective actual signal. In this work we show that Active Tuning elicits enhanced signal filtering abilities, without the need to explicitly train distinct models for exactly such purposes. Excitingly, this method allows, for instance, the successful application of an entirely noise-unaware RNN (trained on clean, ideal data) under highly noisy and unknown conditions. In the following, we first detail the Active Tuning algorithm. We then evaluate the RNN on three time series benchmarks: multiple superimposed sine waves, a chaotic pendulum, and spatiotemporal wave dynamics. The results confirm that Active Tuning enhances noise robustness in all cases. The mechanism mostly even beats the performance of networks that were explicitly trained to handle a particular noise level. It can also cope with missing data when tuning the predictor's state into the observations.
In conclusion, we recommend employing Active Tuning in all time series prediction cases where the data is known to be noisy, corrupted, or to contain missing values and the generative differentiable temporal forward model (typically a particular RNN architecture) knows about the potential underlying system dynamics. The resulting data processing system will be able to handle a larger range of noise and corrupted data, filtering the signal, generating more accurate predictions, and thus identifying the underlying data patterns more accurately and reliably.

2 ACTIVE TUNING

The starting point for the application of Active Tuning is a trained temporal forward model. This may be, as mentioned earlier, an RNN, but it could also be another type of temporal model. The prerequisite is, however, a differentiable model that implements dependencies over time, such that BPTT can be used to reversely route gradient information through the computational forward chain. Without loss of generality, we assume that the model of interest, whose forward function may be referred to as fM, fulfills the following structure:

fM : (σ_t, x_t) ↦ (σ_{t+1}, x̃_{t+1}), (1)

where σ_t is the current latent hidden state of the model (e.g., the hidden outputs of LSTM units, their cell states, or any other latent variable of interest) and x_t is the current signal observation. Based on this information, fM generates a prediction x̃_{t+1} for the next input and updates its latent state σ_{t+1} accordingly. Following the conventional inference scheme, we feed a given sequence time step by time step into the network and receive a one-step-ahead prediction after each particular step. Over time, this effectively synchronizes the network with the observed signal. Once the network dynamics are initialized, which is typically realized by teacher forcing, the network can generate predictions and its dynamics can be driven further into the future in a closed-loop manner, whereby the network feeds itself with its own predictions. To realize next-time-step and closed-loop predictions, direct contact with the signal is inevitable to drive the teacher forcing process. In contrast, Active Tuning decouples the network from the direct influence of the signal. Instead, the model is permanently kept in closed-loop mode, which initially prevents the network from generating meaningful predictions. Over a certain time frame, Active Tuning keeps track of the recent signal history, the recent hidden states of the model, as well as its recent predictions. We call this time frame the (retrospective) tuning horizon or tuning length (denoted by R). The principle of Active Tuning can best be explained with the help of Figure 1 and Algorithm 1. The latter gives a more formal perspective on the principle. Note that every invocation of the procedure assumes a previously unrolled forward chain (from the previous invocation or an initial unrolling). L refers to the prediction error between the entire unrolled prediction sequence and the respective observations, whereas L_{t′} refers to the local prediction error at time step t′. With every newly perceived and potentially noise-affected signal observation x_t, one or multiple tuning cycles are performed.
Every tuning cycle consists of the following stages: First, from the currently believed sequence of signal predictions (which is in turn based on a sequence of hidden states) and the actual observed recent inputs, a prediction error is calculated and propagated back into the past, reversely along the unfolded forward computation sequence. The temporal gradient travels to the very left of the tuning horizon and is finally projected onto the seed hidden state σ_{t−R}, which is then adapted by applying the gradient signal in order to minimize the encountered prediction error. This adaptation can be done using any gradient-based optimizer. Note that in this paper we exclusively use Adam (Kingma & Ba, 2015), but other optimizers are possible as well. Second, after the adaptation of this seed state (and possibly the seed input as well), the prediction sequence is rolled out from the past into the present again, effectively refining the output sequence towards a better explanation of the recently observed signal.

Algorithm 1: Active Tuning procedure
  Input: current observation x_t
  Output: prediction x̃_t (filtered output), predictive hidden state σ_t
  x̃_t, σ_t ← fM(x̃_{t−1}, σ_{t−1})    / * generate current prediction based on previous forward chain * /
  for c ← 1 to C do    / * perform multiple tuning cycles * /
    for t′ ← t down to t−R do    / * back-propagate the prediction error * /
      g_{t′} ← ∂L/∂σ_{t′} = ∂L_{t′}/∂σ_{t′} + { g_{t′+1} · ∂σ_{t′+1}/∂σ_{t′} if t′ < t; 0 otherwise }
    end for
    σ_{t−R} ← update(σ_{t−R}, g_{t−R})    / * perform tuning step (e.g., with the Adam update rule) * /
    for t′ ← t−R+1 to t do    / * roll out the forward chain again from the adapted hidden state * /
      x̃_{t′}, σ_{t′} ← fM(x̃_{t′−1}, σ_{t′−1})
    end for
  end for
  return x̃_t, σ_t

Each tuning cycle thus updates the current prediction x̃_t and the current hidden state σ_t, from which a closed-loop future prediction can be rolled out if desired. To transition into the next world time step, one forward step has to be computed. The formerly leftmost seed state can be discarded and the recorded history is shifted by one time step, making σ_{t−R+1} the new seed state that will be tuned within the next world time step. From then on, the procedure is repeated, yielding a continuous adaptive tuning process. As a result, the model is predominantly driven by its own imagination, that is, its own top-down predictions. Meanwhile, the predictions themselves are adapted by means of the temporal gradients based on the accumulated prediction error, but not by the signal directly. In a nutshell, Active Tuning realizes a gradient-based mini-optimization procedure over any of the model's latent variables within one world time step. While it needs to be acknowledged that this process draws on additional computational resources, in this paper we investigate the resulting gain in signal processing robustness. Intuitively speaking, Active Tuning tries to fit known temporal patterns, as memorized within the forward model, to the concurrently observed data. Due to the strong pressure towards consistency maintenance, which is naturally enforced by means of the temporal gradient information in combination with the repeatedly performed forward passes of the hidden state activities, the network will generate adaptations and potential recombinations of the patterns that it has learned during training. Occurrences that cannot be generated from the repertoire of neural dynamics will therefore not appear (or appear only in significantly suppressed form) in the model's output.
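For illustration, a minimal PyTorch sketch of one world time step of this procedure is given below, tuning only the seed hidden state with Adam. The model interface fM(x, σ) → (x̃, σ′) follows Eq. (1); the function name, the mean-squared loss, and the hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal sketch of one Active Tuning world step (cf. Algorithm 1).
import torch

def active_tuning_step(f_M, seed_sigma, seed_x, observations, cycles=10, lr=0.01):
    """f_M(x, sigma) -> (x_next, sigma_next) is the trained forward model;
    observations holds the recent noisy inputs over the tuning horizon R."""
    # The seed hidden state is the only optimized variable (not the weights).
    sigma0 = seed_sigma.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([sigma0], lr=lr)
    for _ in range(cycles):                   # multiple tuning cycles per step
        opt.zero_grad()
        x, sigma = seed_x, sigma0
        loss = torch.zeros(())
        for obs in observations:              # closed-loop roll-out over horizon
            x, sigma = f_M(x, sigma)
            loss = loss + torch.mean((x - obs) ** 2)
        loss.backward()                       # BPTT onto the seed state only
        opt.step()
    # Final roll-out with the tuned seed yields the filtered prediction/state.
    with torch.no_grad():
        x, sigma = seed_x, sigma0
        for _ in range(len(observations)):
            x, sigma = f_M(x, sigma)
    return x, sigma
```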
As a consequence, there is a much smaller need to strive for noise robustness during training. Our results below indeed confirm that the model may be trained on clean, idealized target signals. However, imprinting a slight denoising tendency during training proves to be useful when facing noisier data. Enhanced with our Active Tuning scheme, the model will be able to robustly produce high-quality outputs even under extremely adverse conditions, as long as (some of) the assumed target signals are actually present. Our scheme is thus a tool that can be highly useful in various application scenarios for signal reconstruction and flexible denoising. Nevertheless, it should be mentioned that with Active Tuning the computational overhead for inference scales with the number of tuning cycles and the tuning length.

3 EXPERIMENTS

In order to investigate the abilities of Active Tuning, we studied its behavior on three different types of time series data, namely, one-dimensional linear dynamics, two-dimensional nonlinear dynamics, and distributed spatiotemporal dynamics. For all three problem domains we used a comparable setup, except for the particular recurrent neural network architectures applied. We trained the networks as one-step-ahead predictors whose task is to predict the next input given both the current input and the history of inputs aggregated in the latent hidden state of the models. The target sequences were generated directly from the clean input sequences by realizing a shift of one time step. Moreover, we trained networks under six different denoising conditions (normally distributed noise) per experiment, where we fed a potentially noisy signal into the network and provided the true signal (one time step ahead) as the target value (Lu et al., 2013; Otte et al., 2015; Goodfellow et al., 2016). These conditions are determined by their relative noise ratios: 0.0 (no noise), 0.05, 0.1, 0.2, 0.5, and 1.0, where the ratios depend on the respective base signal statistics. For instance, a noise ratio of 0.1 means that the noise added to the input has a standard deviation of 0.1 times the standard deviation of the base signal. As a result, we obtained predictive denoising experts for each of these conditions. All models were trained with Adam (Kingma & Ba, 2015) using its default parameters (learning rate η = 0.001, β1 = 0.9 and β2 = 0.999) over 100 (first two experiments) or 200 (third experiment) epochs, respectively.
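For concreteness, the relative-noise corruption described above can be sketched as follows; the function name is illustrative.

```python
# Sketch of the relative-noise corruption used for the denoising conditions:
# the noise standard deviation is a fixed ratio of the clean signal's std.
import numpy as np

def add_relative_noise(signal, ratio, seed=0):
    """ratio in {0.0, 0.05, 0.1, 0.2, 0.5, 1.0}; noise is normally distributed."""
    rng = np.random.default_rng(seed)
    return signal + rng.normal(scale=ratio * signal.std(), size=signal.shape)
```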
The paper proposes a way to adapt an autoregressive model (an RNN in the examples) to an incoming noisy signal in order to generate noise-free output. The approach is interesting because it applies updates to the hidden state of a past observation. The proposed approach is named Active Tuning and is evaluated on 3 toy tasks. The idea sounds interesting; however, the lack of comparisons with other approaches and of theoretical justification for why this approach is superior makes it hard to convince the reader.
SP:78d44eef96138ddcb2b86cd1de3d9c6a63e33e32
Accelerating DNN Training through Selective Localized Learning
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply (GEMM) operations to compute the error and weight gradients for each layer. We alleviate this by selectively updating some layers' weights using localized learning rules that require only 1 GEMM operation per layer. Further, since the weight update is performed during the forward pass itself, the layer activations for the mini-batch do not need to be stored until the backward pass, resulting in a reduced memory footprint. Localized updates can substantially boost training speed, but need to be used selectively and judiciously in order to preserve accuracy and convergence. We address this challenge through the design of a Learning Mode Selection Algorithm, where all layers start with SGD, and as epochs progress, layers gradually transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while the transition layer and later layers use gradient-based updates. The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs. We also propose a low-cost weak supervision mechanism that controls the learning rate of localized updates based on the overall training loss. We applied LoCal+SGD to 8 image recognition CNNs (including ResNet50 and MobileNetV2) across 3 datasets (Cifar10, Cifar100 and ImageNet). Our measurements on an Nvidia GTX 1080Ti GPU demonstrate up to 1.5× improvement in end-to-end training time with ∼0.5% loss in Top-1 classification accuracy.

1 INTRODUCTION

Deep Neural Networks (DNNs) have achieved continued success in many application domains involving images (Krizhevsky et al., 2017), videos (Ng et al., 2015), text (Zhou et al., 2015) and natural language (Goldberg & Hirst, 2017). However, training state-of-the-art DNN models is computationally quite challenging, often requiring exa-FLOPs of compute, as the models are quite complex and need to be trained using large datasets. Despite rapid improvements in the capabilities of GPUs and the advent of specialized accelerators, training large models using current platforms is still quite expensive and often takes days to even weeks. In this work, we aim to reduce the computational complexity of DNN training through a new algorithmic approach called LoCal+SGD (see footnote 1), which alleviates the key performance bottlenecks in Stochastic Gradient Descent (SGD) through the selective use of localized or Hebbian learning. Computational Bottlenecks in DNN Training. DNNs are trained in a supervised manner using gradient-descent based cost minimization techniques such as SGD (Bottou, 2010) or Adam (Kingma & Ba, 2015).
The training inputs (typically grouped into minibatches) are iteratively forward propagated (FP) and back propagated (BP) through the DNN layers to compute weight updates that push the network parameters in the direction that decreases the overall classification loss. (Footnote 1: In addition to combining localized and SGD-based learning, LoCal+SGD is Low-Calorie SGD, i.e., SGD with reduced computational requirements.) Back-propagation is computationally expensive, accounting for 65-75% of the total training time on GPUs. This is attributed to two key factors: (i) BP involves 2 Generalized Matrix Multiply (GEMM) operations, one to propagate the error across layers and the other to compute the weight gradients, and (ii) when training on distributed systems using data/model parallelism (Dean et al., 2012b; Krizhevsky et al., 2012), aggregation of weight gradients/errors across devices incurs significant communication overhead. Further, BP through auxiliary ops such as batch normalization is also more expensive than FP. Prior Efforts on Efficient DNN Training. Prior research efforts to improve DNN training time can be grouped into a few directions. One group of efforts enables larger scales of parallelism in DNN training through learning rate tuning (You et al., 2017a; Goyal et al., 2017; You et al., 2017b) and asynchronous weight updates (Dean et al., 2012a). Another class of efforts employs importance-based sample selection during training, wherein 'easier' training samples are selectively discarded to improve runtime (Jiang et al., 2019; Zhang et al., 2019). Finally, model quantization (Sun et al., 2019) and pruning (Lym et al., 2019) can lead to significant runtime benefits during training by enabling the use of reduced-bitwidth processing elements. LoCal+SGD: Combining SGD with Localized Learning. Complementary to the aforementioned efforts, we propose a new approach, LoCal+SGD, to alleviate the performance bottlenecks in DNN training while preserving model accuracy. Our hybrid approach combines Hebbian or localized learning (Hebb) with SGD by selectively applying it in specific layers and epochs. Localized learning rules (Hebb; Oja, 1982; Zhong, 2005) utilize a single feed-forward weight update to learn the feature representations, eschewing BP. Careful formulation of the localized learning rule can result in ∼2× computation savings compared to SGD and also significantly reduces the memory footprint, as activations from FP need not be retained until BP. The reduction in memory footprint can in turn allow increasing the batch size during training, which leads to further runtime savings due to better compute utilization and reduced communication costs. It is worth noting that localized learning has been actively explored in the context of unsupervised learning (Chen et al., 2020; van den Oord et al., 2018; Hénaff et al., 2019). Further, there have been active research efforts on neuro-scientific learning rules (Lee et al., 2015; Nøkland, 2016). Our work is orthogonal to such efforts and represents a new application of localized learning in a fully supervised context, wherein we selectively combine it within an SGD framework to achieve computational savings. Preserving model accuracy and convergence with LoCal+SGD requires localized updates to be applied judiciously, i.e., only to selected layers in certain epochs. We address this challenge through the design of a learning mode selection algorithm.
At the start of training, the selection algorithm initializes the learning mode of all layers to SGD, and as training progresses it determines the layers that transition to localized learning. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer, which delineates the network into two regions. Layers before the transition layer use localized updates, while subsequent layers use gradient-based updates. This allows BP to stop at the transition layer, as layers before it have no use for the back-propagated errors. The algorithm takes advantage of the magnitude of the weight updates of the Localized→SGD transition layer in deciding the new position of the boundary every epoch. Further, we provide weak supervision by tweaking the learning rate of locally updated layers based on the overall training loss. Contributions: To the best of our knowledge, LoCal+SGD is the first effort that combines localized learning (an unsupervised learning technique) within a supervised SGD context to reduce computational costs while maintaining classification accuracy. This favorable tradeoff is achieved by LoCal+SGD through a Learning Mode Selection Algorithm that applies localized learning to selected layers and epochs. Further improvement is achieved through the use of weak supervision, by modulating the learning rate of locally updated layers based on the overall training loss. Across 8 image recognition CNNs (including ResNet50 and MobileNet) and 3 datasets (Cifar10, Cifar100 and ImageNet), we demonstrate that LoCal+SGD achieves up to 1.5× improvement in training time with ∼0.5% Top-1 accuracy loss on an Nvidia GTX 1080Ti GPU.

2 LoCal+SGD: COMBINING SGD WITH SELECTIVE LOCALIZED LEARNING

The key idea in LoCal+SGD is to apply localized learning to selected layers and epochs during DNN training to improve the overall execution time without incurring loss in accuracy. The following components are critical to the effectiveness of LoCal+SGD:
• Localized Learning Rule Formulation. We formulate a computationally efficient localized learning rule and highlight its clear runtime benefits compared to SGD.
• Learning Mode Selection Algorithm. We propose a learning mode selection algorithm that chooses between localized learning and SGD-based learning for each layer in every epoch, based on the potential impact on accuracy and computational benefits.
• Weak Supervision. We propose a weak supervision technique, which comprises a low-cost supervision signal communicated to the localized learning layers in each epoch. The signal modulates the learning rates of these layers based on the rate of change of the overall classification loss.
In the following subsections, we describe the salient aspects of these components in greater detail.

2.1 EFFICIENT LOCALIZED LEARNING

Localized learning has been extensively explored in the context of unsupervised learning, demonstrating success on small (≤ 3 layer) networks using relatively simple datasets (e.g., MNIST, Cifar-10) (LeCun & Cortes, 2010; Krizhevsky et al., a), with an accuracy gap that is yet to be bridged on larger datasets (e.g., ResNet50 or MobileNetV2 on ImageNet (Deng et al., 2009)). First proposed in (Hebb), the key intuition behind localized learning rules is to encourage correlations between neurons that have similar activation patterns.
Equation 1 depicts the Hebbian weight update proposed in (Hebb) for a synapse with weight W connecting a pair of input and output neurons whose activation values are represented by x and y respectively, with η as the learning rate:

ΔW = η · x · y (1)

Considerable research has gone into evolving this equation over the years to improve the performance of localized learning (Oja, 1982; Zhong, 2005). However, many of the proposed rules are computationally complex, or are difficult to parallelize on modern hardware platforms such as GPUs and TPUs. Since our primary goal is improving DNN training time, we adopt the computationally simple localized learning rule presented in Equation 1. The learning rule in Equation 1 assumes a distinct synapse between each input and output neuron pair. While its application to fully-connected (fc) layers is straightforward, we need to account for the sharing of weights between neuron pairs in convolutional (conv) layers. For updating a shared weight of a conv layer, we calculate the individual updates due to each pair of pre- and post-synaptic neurons sharing the weight and sum all such updates. This essentially reduces to a convolution operation between the input and output activations of the layer and can be expressed by Equation 3 in Figure 1. For further computational efficiency, unlike Equation 1, we consider the pre-activation-function values of the outputs, i.e., z_l instead of their post-activation values a_l. Further, we normalize the localized update values as shown in Equation 4 of Figure 1, as this was observed to achieve better convergence in practice. Overall, we utilize Equations 3 and 4 from Figure 1 to perform the weight updates in all layers that are earlier than the Localized→SGD transition layer during a given epoch. All other layers continue to be updated using SGD-based BP, expressed by Equations 5-7 in Figure 1. SGD updates are applied to batch-normalization layers present after the Localized→SGD transition layer, and are otherwise skipped. Clearly, Equation 3 has the same computational complexity as Equation 6 of SGD-based BP for conv and fc layers. Thus, from Figure 1, we can directly infer that our localized learning rule will be considerably faster than SGD-based BP. In practice, we measured this improvement to be more than 2× on an NVIDIA GTX 1080Ti GPU for the ImageNet-ResNet50 benchmark, across all conv and fc layers. In addition to the computational complexity, the memory footprint of SGD-based BP is also higher. This is because DNN software frameworks commonly store all activation values computed during FP to avoid recomputing a_{l−1}, the input activations to the layers, used in Equation 6 of SGD-based BP. In contrast, the localized update for a layer can be performed as soon as FP through the layer is complete. The activation tensor a_l of layer l can be discarded or overwritten as soon as FP proceeds to the next layer in the network, thereby freeing up a significant portion of on-device memory during training. In turn, this can allow larger minibatch sizes to be accommodated on a given hardware platform when localized updates are applied to a sufficient number of layers.
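The conv-layer update described above can be sketched in PyTorch; torch.nn.grad.conv2d_weight computes exactly the input-output correlation summed over all neuron pairs sharing a weight. Since Equations 3 and 4 of Figure 1 are not reproduced in this excerpt, the L2 normalization and all hyperparameters below are assumptions, not the paper's exact formulation.

```python
# Sketch of the localized (Hebbian) update for a conv layer: the update is
# the correlation of the layer input with the pre-activation output, i.e.,
# the same quantity as the weight gradient in BP, but needing no backward pass.
import torch

def localized_update(weight, a_prev, z, eta=0.01, stride=1, padding=1):
    """weight: conv kernel (Cout, Cin, k, k); a_prev: layer input (N, Cin, H, W);
    z: pre-activation output (N, Cout, H', W'). stride/padding must match the
    forward convolution that produced z."""
    # Sum of x*y over all neuron pairs sharing a weight == conv of input
    # with output; conv2d_weight computes exactly this correlation.
    delta = torch.nn.grad.conv2d_weight(
        a_prev, weight.shape, z, stride=stride, padding=padding)
    delta = delta / (delta.norm() + 1e-8)   # normalization (assumed L2 form)
    with torch.no_grad():
        weight.add_(eta * delta)            # in-place update during FP
```

Because the update uses only a_prev and z, it can be applied as soon as the forward pass leaves the layer, which is what allows the activations to be discarded early, as described above.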
This paper tries to leverage the benefits of Hebbian learning to reduce CNN training time. To achieve this, a learning mode selection algorithm is proposed to progressively increase the number of layers using Hebbian learning. The writing of this paper is good and the idea is also interesting; however, the experimental part should be improved:
SP:2de60266ac8f4832460bd1da6451a74f63fd8f28
Accelerating DNN Training through Selective Localized Learning
Training Deep Neural Networks ( DNNs ) places immense compute requirements on the underlying hardware platforms , expending large amounts of time and energy . We propose LoCal+SGD , a new algorithmic approach to accelerate DNN training by selectively combining localized or Hebbian learning within a Stochastic Gradient Descent ( SGD ) based training framework . Back-propagation is a computationally expensive process that requires 2 Generalized Matrix Multiply ( GEMM ) operations to compute the error and weight gradients for each layer . We alleviate this by selectively updating some layers ’ weights using localized learning rules that require only 1 GEMM operation per layer . Further , since the weight update is performed during the forward pass itself , the layer activations for the mini-batch do not need to be stored until the backward pass , resulting in a reduced memory footprint . Localized updates can substantially boost training speed , but need to be used selectively and judiciously in order to preserve accuracy and convergence . We address this challenge through the design of a Learning Mode Selection Algorithm , where all layers start with SGD , and as epochs progress , layers gradually transition to localized learning . Specifically , for each epoch , the algorithm identifies a Localized→SGD transition layer , which delineates the network into two regions . Layers before the transition layer use localized updates , while the transition layer and later layers use gradient-based updates . The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs . We also propose a low-cost weak supervision mechanism by controlling the learning rate of localized updates based on the overall training loss . We applied LoCal+SGD to 8 image recognition CNNs ( including ResNet50 and MobileNetV2 ) across 3 datasets ( Cifar10 , Cifar100 and ImageNet ) . Our measurements on a Nvidia GTX 1080Ti GPU demonstrate upto 1.5× improvement in end-to-end training time with ∼0.5 % loss in Top-1 classification accuracy . 1 INTRODUCTION . Deep Neural Networks ( DNNs ) have achieved continued success in many application domains involving images ( Krizhevsky et al. , 2017 ) , videos ( Ng et al. , 2015 ) , text ( Zhou et al. , 2015 ) and natural language ( Goldberg & Hirst , 2017 ) . However training state-of-the-art DNN models is computationally quite challenging , often requiring exa-FLOPs of compute as the models are quite complex and need to be trained using large datasets . Despite rapid improvements in the capabilities of GPUs and the advent of specialized accelerators , training large models using current platforms is still quite expensive and often takes days to even weeks . In this work , we aim to reduce the computational complexity of DNN training through a new algorithmic approach called LoCal+SGD1 , which alleviates the key performance bottlenecks in Stochastic Gradient Descent ( SGD ) through selective use of localized or Hebbian learning . Computational Bottlenecks in DNN Training . DNNs are trained in a supervised manner using gradient-descent based cost minimization techniques such as SGD ( Bottou , 2010 ) or Adam ( Kingma & Ba , 2015 ) . 
The training inputs ( typically grouped into minibatches ) are iteratively forward propagated ( FP ) and back propagated ( BP ) through the DNN layers to compute weight updates that push the network parameters in the direction that decreases the overall classification loss . 1In addition to combining localized and SGD based learning , LoCal+SGD is Low-Calorie SGD or SGD with reduced computational requirements Back-propagation is computationally expensive , accounting for 65-75 % of the total training time on GPUs . This is attributed to two key factors : ( i ) BP involves 2 Generalized Matrix Multiply ( GEMM ) operations , one to propagate the error across layers and the other to compute the weight gradients , and ( ii ) when training on distributed systems using data/model parallelism ( Dean et al. , 2012b ; Krizhevsky et al. , 2012 ) , aggregation of weight gradients/errors across devices incurs significant communication overhead . Further , BP through auxiliary ops such as batch normalization are also more expensive than FP . Prior Efforts on Efficient DNN Training . Prior research efforts to improve DNN training time can be grouped into a few directions . One group of efforts enable larger scales of parallelism in DNN training through learning rate tuning ( You et al. , 2017a ; Goyal et al. , 2017 ; You et al. , 2017b ) and asynchronous weight updates ( Dean et al. , 2012a ) . Another class of efforts employ importancebased sample selection during training , wherein ‘ easier ’ training samples are selectively discarded to improve runtime ( Jiang et al. , 2019 ; Zhang et al. , 2019 ) . Finally , model quantization ( Sun et al. , 2019 ) and pruning ( Lym et al. , 2019 ) can lead to significant runtime benefits during training by enabling the use of reduced-bitwidth processing elements . LoCal+SGD : Combining SGD with Localized Learning . Complementary to the aforementioned efforts , we propose a new approach , LoCal+SGD , to alleviate the performance bottlenecks in DNN training , while preserving model accuracy . Our hybrid approach combines Hebbian or localized learning ( Hebb ) with SGD by selectively applying it in specific layers and epochs . Localized learning rules ( Hebb ; Oja , 1982 ; Zhong , 2005 ) utilize a single feed-forward weight update to learn the feature representations , eschewing BP . Careful formulation of the localized learning rule can result in ∼2× computation savings compared to SGD and also significantly reduces memory footprint as activations from FP needed not be retained until BP . The reduction in memory footprint can in turn allow increasing the batch size during training , which leads to further runtime savings due to better compute utilization and reduced communication costs . It is worth noting that localized learning has been actively explored in the context of unsupervised learning ( Chen et al. , 2020 ; van den Oord et al. , 2018 ; Hénaff et al. , 2019 ) . Further , there has been active research efforts on neuro-scientific learning rules ( Lee et al. , 2015 ; Nøkland , 2016 ) . Our work is orthogonal to such efforts and represents a new application of localized learning in a fully supervised context , wherein we selectively combine it within an SGD framework to achieve computational savings . Preserving model accuracy and convergence with LoCal+SGD requires localized updates to be applied judiciously i.e. , only to selected layers in certain epochs . We address this challenge through the design of a learning mode selection algorithm . 
At the start of training , the selection algorithm initializes the learning mode of all layers to SGD , and as training progresses determines the layers that transition to localized learning . Specifically , for each epoch , the algorithm identifies a Localized→SGD transition layer , which delineates the network into two regions . Layers before the transition layer use localized updates , while subsequent layers use gradient-based updates . This allows BP to stop at the transition layer , as layers before it have no use for the back-propagated errors . The algorithm takes advantage of the magnitude of the weight updates of the Localized→SGD transition layer in deciding the new position of the boundary every epoch . Further , we provide weak supervision by tweaking the learning rate of locally updated layers based on the overall training loss . Contributions : To the best of our knowledge , LoCal+SGD is the first effort that combines localized learning ( an unsupervised learning technique ) within a supervised SGD context to reduce computational costs while maintaining classification accuracy . This favorable tradeoff is achieved by LoCal+SGD through a Learning Mode Selection Algorithm that applies localized learning to selected layers and epochs . Further improvement is achieved through the use of weak supervision by modulating the learning rate of locally updated layers based on the overall training loss . Across 8 image recognition CNNs ( including ResNet50 and MobileNetV2 ) and 3 datasets ( Cifar10 , Cifar100 and ImageNet ) , we demonstrate that LoCal+SGD achieves up to 1.5× improvement in training time with ∼0.5 % Top-1 accuracy loss on an Nvidia GTX 1080Ti GPU . 2 LoCal+SGD : COMBINING SGD WITH SELECTIVE LOCALIZED LEARNING The key idea in LoCal+SGD is to apply localized learning to selected layers and epochs during DNN training to improve the overall execution time , without incurring loss in accuracy . The following components are critical to the effectiveness of LoCal+SGD : • Localized Learning Rule Formulation . We formulate a computationally efficient localized learning rule and highlight the clear runtime benefits when compared to SGD . • Learning Mode Selection Algorithm . We propose a learning mode selection algorithm that chooses between localized learning and SGD-based learning for each layer in every epoch , based on the potential impact on accuracy and computational benefits . • Weak Supervision . We propose a weak supervision technique , which comprises a low-cost supervision signal communicated to the localized learning layers in each epoch . The signal modulates the learning rates of these layers based on the rate of change of the overall classification loss . In the following sub-sections , we describe the salient aspects of these components in greater detail . 2.1 EFFICIENT LOCALIZED LEARNING . Localized learning has been extensively explored in the context of unsupervised learning , demonstrating success on small ( ≤ 3 layer ) networks using relatively simple datasets ( e.g. , MNIST , Cifar-10 ) ( LeCun & Cortes , 2010 ; Krizhevsky et al. ) , with an accuracy gap that is yet to be bridged on larger networks and datasets ( e.g. , ResNet50 or MobileNetV2 on ImageNet ( Deng et al. , 2009 ) ) . First proposed in ( Hebb ) , localized learning rules encourage correlations between neurons that have similar activation patterns .
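Before the learning rule is detailed, it may help to make the mode selection sketched above concrete. The excerpt specifies the algorithm's inputs (the trend of the transition layer's weight-update magnitudes) but not its exact decision rule, so the following Python sketch assumes a simple plateau heuristic; all function and variable names are hypothetical:

```python
def select_transition_layer(prev_boundary, update_norms, history, patience=2, tol=0.01):
    """Hypothetical sketch of the Learning Mode Selection Algorithm.

    prev_boundary : index of the current Localized->SGD transition layer
    update_norms  : per-layer L2 norms of this epoch's SGD weight updates
    history       : list of past update norms observed at the transition layer
    The shift criterion is not given in this excerpt; we assume the boundary
    advances once updates to the transition layer have plateaued.
    """
    history.append(update_norms[prev_boundary])
    if len(history) > patience:
        recent = history[-patience:]
        # If the transition layer's updates have stopped changing appreciably,
        # assume its features have stabilized and push the boundary one layer deeper.
        if max(recent) - min(recent) < tol * max(recent):
            return prev_boundary + 1
    return prev_boundary

# Per-epoch usage inside a training loop (illustrative):
# boundary = select_transition_layer(boundary, update_norms, history)
# for i, layer in enumerate(network):
#     layer.mode = 'localized' if i < boundary else 'sgd'
```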
Equation 1 depicts the Hebbian weight update proposed in ( Hebb ) , for a synapse with weight W , connecting a pair of input and output neurons whose activation values are represented by x and y respectively , with η as the learning rate : ∆W = η · x · y ( 1 ) . Considerable research has gone into evolving this equation over the years to improve the performance of localized learning ( Oja , 1982 ; Zhong , 2005 ) . However , many of the proposed rules are computationally complex , or are difficult to parallelize on modern hardware platforms such as GPUs and TPUs . Since our primary goal is improving DNN training time , we adopt the computationally simple localized learning rule presented in Equation 1 . The learning rule in Equation 1 assumes a distinct synapse between each input and output neuron pair . While its application to fully-connected ( fc ) layers is straightforward , we need to consider the sharing of weights between neuron pairs in convolutional ( conv ) layers . For updating a shared weight of a conv layer , we calculate the individual updates due to each pair of pre- and post-synaptic neurons sharing the weight and sum all such updates . This essentially reduces to a convolution operation between the input and output activations of the layer and can be expressed by Equation 3 in Figure 1 . For further computational efficiency improvement , unlike Equation 1 , we consider the pre-activation-function values of the outputs , i.e. , z_l , instead of their post-activation values a_l . Further , we normalize the localized update values as shown in Equation 4 of Figure 1 , as this was observed to achieve better convergence in practice . Overall , we utilize Equations 3 and 4 from Figure 1 to perform the weight updates in all layers that are earlier than the Localized→SGD transition layer during a certain epoch . All other layers continue to be updated using SGD-based BP , expressed by Equations 5-7 in Figure 1 . SGD updates are applied to batch-normalization layers present after the Localized→SGD transition layer , and are otherwise skipped . Clearly , Equation 3 has the same computational complexity as Equation 6 of SGD-based BP for conv and fc layers . Thus , from Figure 1 , we can directly infer that our localized learning rule will be considerably faster than SGD-based BP . In practice , we measured this improvement to be more than 2× on an NVIDIA GTX 1080Ti GPU for the ImageNet-ResNet50 benchmark , across all conv and fc layers . In addition to the computational complexity , the memory footprint of SGD-based BP is also higher . This is because DNN software frameworks commonly store all activation values computed during FP to avoid recomputing a_{l−1} , the input activations to the layers , used in Equation 6 of SGD-based BP . In contrast , the localized update for a layer can be performed as soon as the FP through the layer is complete . The activation tensor a_l of layer l can be discarded or over-written as soon as FP proceeds to the next layer in the network , thereby freeing up a significant portion of on-device memory during training . In turn , this can allow larger minibatch sizes to be accommodated on a given hardware platform , when the localized updates are applied on a sufficient number of layers .
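A hedged sketch of the shared-weight conv update described above, written as an explicit loop over output positions for clarity. The exact normalization of Equation 4 lives in Figure 1 (not shown in this excerpt), so a global-norm variant is assumed here:

```python
import numpy as np

def hebbian_conv_update(x, z, k, eta=1e-3, eps=1e-8):
    """Sketch of the localized conv update (in the spirit of Equations 3-4).

    x : input activations, shape (C_in, H, W)  (single example, stride 1, no padding)
    z : pre-activation outputs, shape (C_out, H-k+1, W-k+1)
    Each shared weight accumulates eta * x_patch * z over every position that
    uses it -- exactly a correlation between input and output activations.
    Returns an update of shape (C_out, C_in, k, k).
    """
    C_in, H, W = x.shape
    C_out, Ho, Wo = z.shape
    dW = np.zeros((C_out, C_in, k, k))
    for i in range(Ho):
        for j in range(Wo):
            patch = x[:, i:i + k, j:j + k]                 # (C_in, k, k)
            dW += z[:, i, j][:, None, None, None] * patch  # outer product, summed
    dW *= eta
    # Assumed normalization; the paper's exact Equation 4 may differ.
    return dW / (np.linalg.norm(dW) + eps)

# x and z are available right after the layer's forward pass, so x can be
# freed immediately once the update is applied -- no need to keep it until BP.
```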
This paper proposes a combination of SGD with selective application of a non-backprop learning rule (Hebbian). The two learning rules are not applied together; rather, a boundary is determined such that layers before it use the Hebbian approach, and the boundary layer and the ones after it use SGD. A selection algorithm dynamically adjusts the boundary over training. For accuracy reasons, they include weak supervision by using the overall classification loss to modulate the learning rate of the Hebbian updates.
SP:2de60266ac8f4832460bd1da6451a74f63fd8f28
Action and Perception as Divergence Minimization
1 INTRODUCTION . To achieve goals in complex environments , intelligent agents need to perceive their environments and choose effective actions . These two processes , perception and action , are often studied in isolation . Despite the many objectives that have been proposed in the fields of representation learning and reinforcement learning , it remains unclear how the objectives relate to each other and which fundamentally new objectives remain to be discovered . Based on the KL divergence ( Kullback and Leibler , 1951 ) , we propose a unified framework for action and perception that connects a wide range of objectives to facilitate our understanding of them while providing a recipe for designing novel agent objectives . Our findings are conceptual in nature and this paper includes no empirical study . Instead , we offer a unified picture of a wide range of methods that have been shown to be successful in practice in prior work . The contributions of this paper are as follows . Unified objective function for perception and action We propose joint KL minimization as a principled framework for designing and comparing agent objectives . KL minimization was proposed separately for perception as variational inference ( Jordan et al. , 1999 ; Alemi and Fischer , 2018 ) and for actions as KL control ( Todorov , 2008 ; Kappen et al. , 2009 ) . Based on this insight , we formulate action and perception as jointly minimizing the KL from the world to a unified target distribution . The target serves both as the model to infer representations and as the reward for actions . This extends variational inference to controllable inputs , while extending KL control to latent representations . We show a novel decomposition of joint KL divergence that explains several representation learning and exploration objectives . Divergence minimization additionally connects deep reinforcement learning to the free energy principle ( Friston , 2010 ; 2019 ) , while simplifying and overcoming limitations of its active inference implementations ( Friston et al. , 2017 ) that we discuss in Appendix B . Understanding latent variables for decision making Divergence minimization with an expressive target maximizes the mutual information between inputs and latents . Agents thus infer representations that are informative of past inputs and explore future inputs that are informative of the representations . For the past , this yields reconstruction ( Hinton et al. , 2006 ; Kingma and Welling , 2013 ) or contrastive learning ( Gutmann and Hyvärinen , 2010 ; Oord et al. , 2018 ) . For the future , it yields information gain exploration ( Lindley et al. , 1956 ) . Stochastic skills and actions are realized over time , so their past terms are constant . For the future , they lead to empowerment ( Klyubin et al. , 2005 ) and skill discovery ( Gregor et al. , 2016 ) . RL as inference ( Rawlik et al. , 2010 ) does not maximize mutual information because its target is factorized . To optimize a consistent objective across past and future , latent representations should be accompanied by information gain exploration . Expressive world models for large ecological niches The more flexible an agent ' s target or model , the better the agent can adapt to its environment .
By minimizing the divergence between the world and the model , the agent converges to a natural equilibrium or niche where it can accurately predict its inputs and that it can inhabit despite external perturbations ( Schrödinger , 1944 ; Wiener , 1948 ; Haken , 1981 ; Friston , 2013 ; Berseth et al. , 2019 ) . While surprise minimization can lead to trivial solutions , divergence minimization encourages the niche to match the agent ' s model class , thus visiting all inputs proportionally to how well they can be understood . This suggests designing expressive world models of sensory inputs ( Ebert et al. , 2017 ; Hafner et al. , 2018 ; Gregor et al. , 2019 ) as a path toward building highly adaptive agents , while rendering task rewards optional . 2 FRAMEWORK . This section introduces the framework of action and perception as divergence minimization ( APD ) . To unify action and perception , we formulate the two processes as joint KL minimization with a shared target distribution . The target distribution expresses the agent ' s preferences over system configurations and is also the probabilistic model under which the agent infers its representations . Using an expressive model as the target maximizes the mutual information between the latent variables and the sequence of sensory inputs , thus inferring latent representations that are informative of past inputs and exploring future inputs that are informative of the representations . We assume knowledge of basic concepts from probability and information theory that are reviewed in Appendix D . 2.1 JOINT KL MINIMIZATION . Consider a stochastic system described by a joint probability distribution over random variables . For example , the random variables for supervised learning are the inputs and labels , and for an agent they are the sequence of sensory inputs , internal representations , and actions . More generally , we combine all input variables into x and the remaining variables , which we term latents , into z . We will see that different latents correspond to different representation learning and exploration objectives . The random variables are distributed according to their generative process or actual distribution pφ . Parts of the actual distribution can be unknown , such as the data distribution , and parts can be influenced by varying the parameter vector φ , such as the distribution of stochastic representations or actions . As a counterpart to the actual distribution , we define the desired target distribution τ over the same support . It describes our preferences over system configurations and can be unnormalized : Actual distribution : x , z ∼ pφ ( x , z ) ; Target distribution : τ ( x , z ) . ( 1 ) We formulate the problem of joint KL minimization as changing the parameters φ to bring the actual distribution of all random variables as close as possible to the target distribution , as measured by the KL divergence ( Kullback and Leibler , 1951 ; Li et al. , 2017 ; Alemi and Fischer , 2018 ) : min_φ KL [ pφ ( x , z ) ‖ τ ( x , z ) ] . ( 2 ) All expectations and KLs throughout the paper are integrals under the actual distribution , so they can be estimated from samples of the system and depend on φ . Equation 2 is the reverse KL or information projection used in variational inference ( Csiszár and Matus , 2003 ) . Examples For representation learning , pφ is the joint of data and belief distributions and τ is a latent variable model .
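As a toy illustration of Equation 2, the following sketch (our own construction, not from the paper) estimates the joint KL from samples of a one-dimensional system in which both log-densities can be evaluated; using an unnormalized target only shifts the estimate by a constant:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x, z, mu):            # actual distribution: z ~ N(mu, 1), x ~ N(z, 1)
    return -0.5 * ((z - mu)**2 + (x - z)**2) - np.log(2 * np.pi)

def log_tau(x, z):              # target: prefers z near 0 and x near z
    return -0.5 * (z**2 + (x - z)**2)   # unnormalized is fine (adds a constant)

def kl_estimate(mu, n=10_000):
    z = mu + rng.standard_normal(n)     # sample the system under phi = mu
    x = z + rng.standard_normal(n)
    return np.mean(log_p(x, z, mu) - log_tau(x, z))  # E_p[log p - log tau] + const

# The estimate shrinks as phi moves the actual distribution onto the target:
print(kl_estimate(3.0), kl_estimate(0.0))
```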
Note that we use pφ to denote not the model under which we infer beliefs but the generative process of inputs and their representations . For control , pφ is the trajectory distribution under the current policy and τ corresponds to the utility of the trajectory . The parameters φ include everything the optimizer can change directly , such as sufficient statistics of representations , model parameters , and policy parameters . Target parameters There are two ways to denote deterministic values within our framework , also known as MAP estimates in the probabilistic modeling literature ( Bishop , 2006 ) . We can either use a fixed target distribution with a latent variable that follows a point mass distribution ( Dirac , 1958 ) , or explicitly parameterize the target using a deterministic parameter as τφ . In either case , τ refers to the fixed model class . The two approaches are equivalent because in both cases the target receives a deterministic value that has no entropy regularizer . For more details , see Appendix A.1 . Assumptions Divergence minimization uses only two inductive biases , namely that the agent optimizes an objective and that it uses random variables to represent uncertainty . Choosing the well-established KL as the divergence measure is an additional assumption . It corresponds to maximizing the expected log probability under the target while encouraging high entropy for all variables in the system to avoid overconfidence , as detailed in Appendix C. Common objectives with different degrees of entropy regularization are summarized in Table 1 . Generality Alternative divergence measures would lead to different optimization dynamics , different solutions if the target can not be reached , and potentially novel objectives for representation learning and exploration . Nonetheless , the KL can describe any converged system , trivially by choosing its actual distribution as the target , and thus offers a simple and complete mathematical perspective for comparing a wide range of specific objectives that correspond to different latent variables and target distributions . 2.2 INFORMATION BOUNDS . We show that for expressive targets that capture dependencies between the variables in the system , minimizing the joint KL increases both the preferences and the mutual information between inputs x and latents z . This property allows divergence minimization to explain a wide range of existing representation learning and exploration objectives . We use the term representation learning for inferring deterministic or stochastic variables from inputs , which includes local representations of individual inputs and global representations such as model parameters . Latent preferences The joint KL can be decomposed in multiple ways , for example into a marginal KL plus a conditional KL or by grouping marginal with conditional terms . To reveal the mutual information maximization , we decompose the joint KL into a preference seeking term and an information seeking term . The decomposition can be done either with the information term expressed over inputs and the preferences expressed over latents , or the other way around : KL [ pφ ( x , z ) ‖ τ ( x , z ) ] ( joint divergence ) = E KL [ pφ ( z | x ) ‖ τ ( z ) ] ( realizing latent preferences ) − E [ ln τ ( x | z ) − ln pφ ( x ) ] ( information bound ) . ( 3 ) All expectations throughout the paper are over all variables , under the actual distribution , and thus depend on the parameters φ .
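For readers who want the intermediate step, Equation 3 follows by factorizing pφ(x, z) = pφ(x) pφ(z | x) and τ(x, z) = τ(z) τ(x | z) and regrouping terms; a short sketch of the derivation (our reconstruction, consistent with the stated decomposition):

```latex
\begin{aligned}
\operatorname{KL}\!\big[p_\phi(x,z)\,\big\|\,\tau(x,z)\big]
  &= \operatorname{E}\big[\ln p_\phi(x) + \ln p_\phi(z\mid x)
     - \ln \tau(z) - \ln \tau(x\mid z)\big] \\
  &= \operatorname{E}\operatorname{KL}\!\big[p_\phi(z\mid x)\,\big\|\,\tau(z)\big]
     \;-\; \operatorname{E}\big[\ln \tau(x\mid z) - \ln p_\phi(x)\big].
\end{aligned}
```

The second expectation lower-bounds I[x; z] because E[ln τ(x | z)] ≤ E[ln pφ(x | z)] by Gibbs' inequality, which is exactly the Barber-Agakov bound the text cites.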
The first term on the right side of Equation 3 is a KL regularizer that keeps the belief pφ ( z | x ) over latent variables close to the marginal latent preferences τ ( z ) . The second term is a variational bound on the mutual information I [ x ; z ] ( Barber and Agakov , 2003 ) . The bound is expressed in input space . Maximizing the conditional ln τ ( x | z ) seeks latent variables that accurately predict inputs , while minimizing the marginal ln pφ ( x ) seeks diverse inputs . Variational free energy When the agent can not influence its inputs , such as when learning from a fixed dataset , the input entropy E [ − ln pφ ( x ) ] is not parameterized and can be dropped from Equation 3 . This yields the free energy or ELBO objective used by variational inference to infer approximate posterior beliefs in latent variable models ( Hinton and Van Camp , 1993 ; Jordan et al. , 1999 ) . The free energy regularizes the belief pφ ( z | x ) to stay close to the prior τ ( z ) while reconstructing inputs via τ ( x | z ) . However , in reinforcement and active learning , inputs can be influenced and thus the input entropy should be kept , which makes the information bound explicit . Input preferences Analogously , we decompose the joint KL the other way around . The first term on the right side of Equation 4 is a KL regularizer that keeps the conditional input distribution pφ ( x | z ) close to the marginal input preferences τ ( x ) . This term is analogous to the objective in KL control ( Todorov , 2008 ; Kappen et al. , 2009 ) , except that the inputs now depend upon latent variables via the policy . The second term is again a variational bound on the mutual information I [ x ; z ] , this time expressed in latent space . Intuitively , the bound compares the belief τ ( z | x ) after observing the inputs and the belief pφ ( z ) before observing any inputs to measure the gained information : KL [ pφ ( x , z ) ‖ τ ( x , z ) ] ( joint divergence ) = E KL [ pφ ( x | z ) ‖ τ ( x ) ] ( realizing input preferences ) − E [ ln τ ( z | x ) − ln pφ ( z ) ] ( information bound ) . ( 4 ) The information bounds are tighter the better the target conditional approximates the actual conditional , meaning that the agent becomes better at maximizing mutual information as it learns more about the relation between the two variables . This requires an expressive target that captures correlations between inputs and latents , such as a latent variable model or deep neural network . Maximizing the mutual information accounts both for learning latent representations that are informative of inputs and for exploring inputs that are informative of the latent representations .
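A minimal sketch of the fixed-input special case discussed above (the variational free energy, i.e., the negative ELBO), assuming a one-dimensional Gaussian belief, a standard normal prior τ(z), and a unit-variance Gaussian decoder τ(x | z) — all modeling choices ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def free_energy(x, enc_mu, enc_std, n_samples=256):
    """KL[p_phi(z|x) || tau(z)] - E_{z ~ p_phi(z|x)}[ln tau(x|z)] for one input x."""
    # Closed-form KL between the Gaussian belief N(enc_mu, enc_std^2) and N(0, 1).
    kl = 0.5 * (enc_mu**2 + enc_std**2 - 2.0 * np.log(enc_std) - 1.0)
    # Monte Carlo reconstruction term under the assumed decoder tau(x|z) = N(x; z, 1).
    z = enc_mu + enc_std * rng.standard_normal(n_samples)
    recon = np.mean(-0.5 * (x - z)**2 - 0.5 * np.log(2 * np.pi))
    return kl - recon

print(free_energy(x=1.0, enc_mu=0.5, enc_std=0.8))
```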
The authors propose to use the joint KL divergence between the generative joint distribution and the target distribution (which contains latent variables that could correspond to the latent parts we want to model, e.g., beliefs). It is illustrative to discuss decomposing the joint KL in different ways and thus forming information bounds in different scenarios. The decomposition of past and future in Eq. 6 also provides a unified perspective for looking at the most commonly used objectives.
SP:3533f4976f70e2fdac0934dbb782d7b8af64c9fd
Action and Perception as Divergence Minimization
The authors formulate a general framework that unifies inference, action/perception, control, and several other tasks. The framework is based on minimizing the KL divergence between a parameterized "actual" distribution and a "target" distribution. The authors argue that this formulation unifies a wide range of previously proposed objectives. They also argue that it has some advantages when compared to Friston's "free energy principle" framework, with which it shares many similarities, in particular that probability matching is preferred to surprise minimization.
SP:3533f4976f70e2fdac0934dbb782d7b8af64c9fd
EigenGame: PCA as a Nash Equilibrium
1 INTRODUCTION . The principal components of data are the vectors that align with the directions of maximum variance . These have two main purposes : a ) as interpretable features and b ) for data compression . Recent methods for principal component analysis ( PCA ) focus on the latter , explicitly stating objectives to find the k-dimensional subspace that captures maximum variance ( e.g. , ( Tang , 2019 ) ) , and leaving the problem of rotating within this subspace to , for example , a more efficient downstream singular value decomposition ( SVD ) step ( after learning the top-k subspace V ∈ R^{d×k} , the rotation can be recovered via an SVD of XV ) . This point is subtle , yet critical . For example , any pair of two-dimensional , orthogonal vectors spans all of R^2 and , therefore , captures maximum variance of any two-dimensional dataset . However , for these vectors to be principal components , they must , in addition , align with the directions of maximum variance , which depend on the covariance of the data . By learning the optimal subspace , rather than the principal components themselves , objectives focused on subspace error ignore the first purpose of PCA . In contrast , modern nonlinear representation learning techniques focus on learning features that are both disentangled ( uncorrelated ) and low dimensional ( Chen et al. , 2016 ; Mathieu et al. , 2018 ; Locatello et al. , 2019 ; Sarhan et al. , 2019 ) . It is well known that the PCA solution of the d-dimensional dataset X ∈ R^{n×d} is given by the eigenvectors of X^T X or , equivalently , the right singular vectors of X . Impractically , the cost of computing the full SVD scales with O ( min { nd^2 , n^2 d } ) time and O ( nd ) space ( Shamir , 2015 ; Tang , 2019 ) . For moderately sized data , randomized methods can be used ( Halko et al. , 2011 ) . Beyond this , stochastic—or online—methods based on Oja ' s rule ( Oja , 1982 ) or power iterations ( Rutishauser , 1971 ) are common . Another option is to use streaming k-PCA algorithms such as Frequent Directions ( FD ) ( Ghashami et al. , 2016 ) or Oja ' s algorithm ( Allen-Zhu and Li , 2017 ) with storage complexity O ( kd ) ; FD approximates the top-k subspace , whereas Oja ' s algorithm approximates the top-k eigenvectors . Sampling or sketching methods also scale well but , again , focus on the top-k subspace ( Sarlos , 2006 ; Cohen et al. , 2017 ; Feldman et al. , 2020 ) . In contrast to these approaches , we view each principal component ( equivalently , eigenvector ) as a player in a game whose objective is to maximize its own local utility function in controlled competition with other vectors . The proposed utility gradients are interpretable as a combination of Oja ' s rule and a generalized Gram-Schmidt process . We make the following contributions : • A novel formulation of PCA as finding the Nash equilibrium of a suitable game , • A sequential , globally convergent algorithm for approximating the Nash on full-batch data , • A decentralized algorithm with experiments demonstrating the approach as competitive with modern streaming k-PCA algorithms on synthetic and real data , • In demonstration of the scaling of the approach , we compute the top-32 principal components of the matrix of RESNET-200 activations on the IMAGENET dataset ( n ≈ 10^6 , d ≈ 20 · 10^6 ) . Each of these contributions is important . Novel formulations often lead to deeper understanding of problems , thereby opening doors to improved techniques .
In particular , k-player games are in general complex and hard to analyze . In contrast , PCA has been well studied . By combining the two fields we hope to develop useful analytical tools . Our specific formulation is important because it obviates the need for any centralized orthonormalization step and lends itself naturally to decentralization . And lastly , theory and experiments support the viability of this approach for continued research . 2 PCA AS AN EIGEN-GAME . We adhere to the following notation . Vectors and matrices meant to approximate principal components ( equivalently , eigenvectors ) are designated with hats , v̂ and V̂ respectively , whereas true principal components are v and V . Subscripts indicate which eigenvalue a vector is associated with . For example , v_i is the eigenvector associated with the ith largest eigenvalue . In this work , we will assume each eigenvalue is distinct . By an abuse of notation , v_{j<i} refers to the set of vectors { v_j | j ∈ { 1 , . . . , i−1 } } , which are also referred to as the parents of v_i ( v_i is their child ) . Sums over indices should be clear from context , e.g. , ∑_{j<i} = ∑_{j=1}^{i−1} . The Euclidean inner product is written 〈u , v〉 = u^T v . We denote the unit sphere by S^{d−1} and the simplex by ∆^{d−1} in d-dimensional ambient space . Outline of derivation As argued in the introduction , the PCA problem is often misinterpreted as learning a projection of the data into a subspace that captures maximum variance ( equiv . maximizing the trace of a suitable matrix R introduced below ) . This is in contrast to the original goal of learning the principal components . We first develop the intuition for deriving our utility functions by ( i ) showing that only maximizing the trace of R is not sufficient for recovering all principal components ( equiv . eigenvectors ) , and ( ii ) showing that minimizing off-diagonal terms in R is a complementary objective to maximizing the trace and can recover all components . We then consider learning only the top-k and construct utilities that are consistent with findings in ( i ) and ( ii ) , equal the true eigenvalues at the Nash of the game we construct , and result in a game that is amenable to analysis . Derivation of player utilities . The eigenvalue problem for a symmetric matrix X^T X = M ∈ R^{d×d} is to find a matrix of d orthonormal column vectors V ( which implies V is full-rank ) such that MV = V Λ with Λ diagonal . Given a solution to this problem , the columns of V are known as eigenvectors and corresponding entries in Λ are eigenvalues . By left-multiplying by V^T and recalling V^T V = V V^T = I by orthonormality ( i.e. , V is unitary ) , we can rewrite the equality as V^T M V = V^T V Λ = Λ . ( 1 ) Let V̂ denote a guess or estimate of the true eigenvectors V and define R ( V̂ ) := V̂^T M V̂ . The PCA problem is often posed as maximizing the trace of R ( equiv . minimizing reconstruction error ) : max_{V̂^T V̂ = I} { ∑_i R_ii = Tr ( R ) = Tr ( V̂^T M V̂ ) = Tr ( V̂ V̂^T M ) = Tr ( M ) } . ( 2 ) Surprisingly , the objective in ( 2 ) is independent of V̂ , so it can not be used to recover all ( i.e. , k = d ) the eigenvectors of M — ( i ) . Alternatively , Equation ( 1 ) implies the eigenvalue problem can be phrased as ensuring all off-diagonal terms of R are zero , thereby ensuring R is diagonal — ( ii ) : min_{V̂^T V̂ = I} ∑_{i≠j} R_ij^2 . ( 3 ) It is worth further examining the entries of R in detail . Diagonal entries R_ii = 〈v̂_i , Mv̂_i〉 are recognized as Rayleigh quotients because ||v̂_i|| = 1 by the constraints .
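A small NumPy sketch (ours, for illustration) that checks observations (i) and (ii) numerically: the trace objective of Equation 2 is identical for any orthonormal basis, while the off-diagonal penalty of Equation 3 vanishes only at the true eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200
X = rng.standard_normal((n, d)) @ np.diag([3.0, 2.0, 1.5, 1.0, 0.5])
M = X.T @ X

def R(V):            # R(V^) = V^T M V^
    return V.T @ M @ V

def off_diag_sq(V):  # objective of Equation 3
    A = R(V)
    return np.sum(A**2) - np.sum(np.diag(A)**2)

eigvals, V_true = np.linalg.eigh(M)               # exact eigenvectors, for reference
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # an arbitrary orthonormal basis

# (i) The trace equals Tr(M) for ANY orthonormal V^, so Equation 2 alone
#     cannot identify the eigenvectors ...
print(np.trace(R(V_true)), np.trace(R(Q)), np.trace(M))

# (ii) ... but the off-diagonal penalty of Equation 3 does distinguish them:
print(off_diag_sq(V_true), off_diag_sq(Q))        # ~0 vs. large
```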
Off-diagonal entries R_ij = 〈v̂_i , Mv̂_j〉 measure alignment between v̂_i and v̂_j under a generalized inner product 〈· , ·〉_M . So far , we have considered learning all the eigenvectors . If we repeat the logic for the top-k eigenvectors with k < d , then by Equation ( 1 ) , R must still be diagonal . V is not square , so V V^T ≠ I , but assuming V is orthonormal as before , we have that V V^T = P is a projection matrix . Left-multiplying Equation ( 1 ) by V now reads ( PM ) V = V Λ , so we are solving an eigenvalue problem for a subspace of M . If we only desire the top-k eigenvectors , maximizing the trace encourages learning a subspace spanned by the top-k eigenvectors , but does not recover the eigenvectors themselves . On the other hand , Equation ( 3 ) places no preference on recovering large over small eigenvectors , but does enforce the columns of V̂ to actually be eigenvectors . The preceding exercise is intended to introduce minimizing the off-diagonal terms of R as a possible complementary objective for solving top-k PCA . Next , we will use these two objectives to construct utility functions for each eigenvector v̂_i . We want to combine the objectives to take advantage of both their strengths . A valid proposal is max_{V̂^T V̂ = I} ∑_i R_ii − ∑_{i≠j} R_ij^2 . ( 4 ) However , this objective ignores the natural hierarchy of the top-k eigenvectors . For example , v̂_1 is penalized for aligning with v̂_k and vice versa , but v̂_1 , being the estimate of the largest eigenvector , should be free to search for the direction that captures the most variance independent of the locations of the other vectors . Instead , first consider solving for the top-1 eigenvector , v_1 , in which case R = [ 〈v̂_1 , Mv̂_1〉 ] is a 1×1 matrix . In this setting , Equation ( 3 ) is not applicable because there are no off-diagonal elements , so max_{v̂_1^T v̂_1 = 1} 〈v̂_1 , Mv̂_1〉 is a sensible utility function for v̂_1 . If considering the top-2 eigenvectors , v̂_1 ' s utility remains as before , and we introduce a new utility for v̂_2 . Equation ( 3 ) is now applicable , so v̂_2 ' s utility is max_{v̂_2^T v̂_2 = 1 , v̂_1^T v̂_2 = 0} 〈v̂_2 , Mv̂_2〉 − 〈v̂_2 , Mv̂_1〉^2 / 〈v̂_1 , Mv̂_1〉 ( 5 ) where we have divided the off-diagonal penalty by 〈v̂_1 , Mv̂_1〉 so that a ) the two terms in Equation ( 5 ) are on a similar scale and b ) the subsequent analysis is eased . Additionally , note that the constraint v̂_1^T v̂_2 = 0 may be redundant at the optimum ( v̂_1^* = v_1 , v̂_2^* = v_2 ) because the second term , 〈v̂_2^* , Mv̂_1^*〉^2 = 〈v_2 , Mv_1〉^2 = Λ_11^2 〈v_2 , v_1〉^2 , already penalizes such deviations ( Λ_ii is the ith largest eigenvalue ) . These reasons motivate the following set of objectives ( utilities ) , one for each vector i ∈ { 1 , . . . , k } : max_{v̂_i^T v̂_i = 1} { u_i ( v̂_i | v̂_{j<i} ) = v̂_i^T M v̂_i − ∑_{j<i} ( v̂_i^T M v̂_j )^2 / ( v̂_j^T M v̂_j ) = ||Xv̂_i||^2 − ∑_{j<i} 〈Xv̂_i , Xv̂_j〉^2 / 〈Xv̂_j , Xv̂_j〉 } ( 6 ) where the notation u_i ( a_i | b ) emphasizes that player i adjusts a_i to maximize a utility conditioned on b . It is interesting to note that by incorporating knowledge of the natural hierarchy ( see Figure 1 ) , we are immediately led to constructing asymmetric utilities , and thereby inspired to formulate the PCA problem as a game , rather than as a direct optimization problem as in Equation ( 4 ) . A key concept in games is a Nash equilibrium . A Nash equilibrium specifies a variable for each player from which no player can unilaterally deviate and improve their outcome .
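Differentiating the utility in Equation 6 with respect to v̂_i (holding the parents fixed) gives the gradient that the introduction describes as Oja's rule combined with a generalized Gram-Schmidt step; a sketch of that computation (our derivation from Equation 6):

```latex
\nabla_{\hat v_i}\, u_i(\hat v_i \mid \hat v_{j<i})
  = 2 M \hat v_i \;-\; \sum_{j<i} \frac{2\,\hat v_i^\top M \hat v_j}{\hat v_j^\top M \hat v_j}\, M \hat v_j
  = 2 M \Big[ \hat v_i - \sum_{j<i}
      \frac{\langle \hat v_i, M \hat v_j\rangle}{\langle \hat v_j, M \hat v_j\rangle}\, \hat v_j \Big].
```

The first term is Oja-style ascent on the Rayleigh quotient, while the bracketed subtraction deflates v̂_i against its parents under the M-inner product, i.e., a generalized Gram-Schmidt step; the paper's exact update, step sizes, and any extra projections may differ.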
In this case , V̂ is a ( strict ) Nash equilibrium if and only if , for all i , u_i ( v̂_i | v̂_{j<i} ) > u_i ( z_i | v̂_{j<i} ) for all z_i ∈ S^{d−1} with z_i ≠ v̂_i . Theorem 2.1 ( PCA Solution is the Unique strict-Nash Equilibrium ) . Assume that the top-k eigenvalues of X^T X are positive and distinct . Then the top-k eigenvectors form the unique strict-Nash equilibrium of the proposed game in Equation ( 6 ) . The proof is deferred to Appendix L. Solving for the Nash of a game is difficult in general . Specifically , it belongs to the class of PPAD-complete problems ( Gilboa and Zemel , 1989 ; Daskalakis et al. , 2009 ) . However , because the game is hierarchical and each player ' s utility only depends on its parents , it is possible to construct a sequential algorithm that is convergent by solving each player ' s optimization problem in sequence .
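A hedged Python sketch of such a sequential scheme, using projected (Riemannian-style) ascent on the sphere with the Equation 6 gradient; the learning rate, iteration count, and data scaling are illustrative choices of ours, not the paper's algorithm verbatim:

```python
import numpy as np

def sequential_eigengame(X, k, iters=3000, lr=0.1, seed=0):
    """Solve each player's problem in order, holding converged parents fixed."""
    M = X.T @ X / X.shape[0]   # scaling M does not change its eigenvectors
    d = M.shape[0]
    rng = np.random.default_rng(seed)
    V_hat = np.zeros((d, k))
    for i in range(k):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(iters):
            # Gradient of u_i (Equation 6): Oja term minus generalized Gram-Schmidt.
            direction = v.copy()
            for j in range(i):
                u = V_hat[:, j]
                direction -= (v @ M @ u) / (u @ M @ u) * u
            grad = 2.0 * M @ direction
            grad -= (grad @ v) * v          # keep only the tangent component
            v = v + lr * grad
            v /= np.linalg.norm(v)          # retract back onto the unit sphere
        V_hat[:, i] = v
    return V_hat

# Sanity check against the exact eigenvectors (matches up to sign):
X = np.random.default_rng(1).standard_normal((400, 6))
V = sequential_eigengame(X, k=3)
```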
Principal component analysis (PCA) is a well-known dimensionality reduction and feature learning technique in the literature that leads to uncorrelated features. While there is a plethora of algorithms for PCA, along with accompanying analysis, a majority of these works have been developed from an optimization perspective. This paper differs from existing works in that it motivates the $k$-PCA problem, which involves learning the $k$ dominant eigenvectors of the sample covariance matrix, as a competitive game between $k$ players in which each player is supposed to estimate one of the eigenvectors and the PCA solution is the unique strict-Nash equilibrium. The main contributions of the paper in this regard are the following:
SP:9c77f92d9933964d7066aec0e5d3e33bb2ee1745
EigenGame: PCA as a Nash Equilibrium
The authors present new insights on PCA by reconceiving it as a game among players, one per principal component, whose solution is a Nash equilibrium. The importance of an objective function minimizing the off-diagonal elements of R is emphasized. The insights lead to parallel algorithms and are demonstrated on large-scale problems, which is nice. Overall, the new insights can be very valuable and inspiring for future work and new developments from a broader perspective.
SP:9c77f92d9933964d7066aec0e5d3e33bb2ee1745
Robust Temporal Ensembling
1 INTRODUCTION. Deep neural networks have enjoyed considerable success across a variety of domains, in particular computer vision, where the common theme is that more labeled training data yields improved model performance (Hestness et al., 2017; Mahajan et al., 2018; Xie et al., 2019b; Kolesnikov et al., 2019). However, performance depends on the quality of the training data, which is expensive to collect and inevitably imperfect. For example, ImageNet (Deng et al., 2009) is one of the most widely used datasets in the field of deep learning, and despite over 2 years of labor from more than 49,000 human annotators across 167 countries, it still contains erroneous and ambiguous labels (Fei-Fei & Deng, 2017; Karpathy, 2014). It is therefore essential that learning algorithms in production workflows leverage noise-robust methods. Noise-robust learning has a long history and takes many forms (Natarajan et al., 2013; Frenay & Verleysen, 2014; Song et al., 2020). Common strategies include loss correction and reweighting (Patrini et al., 2016; Zhang & Sabuncu, 2018; Menon et al., 2020), label refurbishment (Reed et al., 2014; Song et al., 2019), abstention (Thulasidasan et al., 2019), and relying on carefully constructed trusted subsets of human-verified labeled data (Li et al., 2017; Hendrycks et al., 2018; Zhang et al., 2020). Additionally, recent methods such as SELF (Nguyen et al., 2020) and DivideMix (Li et al., 2020) convert the problem of learning with noise into a semi-supervised learning problem by splitting the corrupted training set into clean labeled data and noisy unlabeled data, at which point semi-supervised learning methods such as Mean Teacher (Tarvainen & Valpola, 2017) and MixMatch (Berthelot et al., 2019) can be applied directly. In essence, these methods effectively discard a majority of the label information so as to side-step learning with noise at all. The problem here is that noisy-label filtering tactics are imperfect, resulting in corrupted data in the small labeled partition and valuable clean samples lost to the large pool of unlabeled data. Moreover, caution is needed when applying semi-supervised methods where the labeled data is not sampled i.i.d. from the pool of unlabeled data (Oliver et al., 2018). Indeed, filtering tactics can be biased and irregular, driven by specification error and the underlying noise process of the label corruption. Recognizing the success of semi-supervised approaches, we ask: can we leverage the underlying mechanisms of semi-supervised learning, such as entropy regularization, for learning with noise without discarding our most valuable asset, the labels?

2 ROBUST TEMPORAL ENSEMBLING. 2.1 PRELIMINARIES. Adopting the notation of Zhang & Sabuncu (2018), we consider the problem of classification where $\mathcal{X} \subset \mathbb{R}^d$ is the feature space and $\mathcal{Y} = \{1, \dots, c\}$ is the label space, and where the classifier is a deep neural network with a softmax output layer that maps input features to distributions over labels, $f : \mathcal{X} \to \mathbb{R}^c$. The dataset of training examples containing in-sample noise is defined as $D = \{(x_i, \tilde{y}_i)\}_{i=1}^n$, where $(x_i, \tilde{y}_i) \in (\mathcal{X} \times \mathcal{Y})$ and $\tilde{y}_i$ is the noisy version of the true label $y_i$ such that $p(\tilde{y}_i = k \mid y_i = j, x_i) \equiv \eta_{ijk}$. We do not consider open-set noise (Wang et al., 2018), a particular type of noise that occurs on inputs, $\tilde{x}$, rather than labels.
Following most prior work, we make the simplifying assumption that the noise is conditionally independent of the input $x_i$ given the true labels. In this setting, we can write $\eta_{ijk} = p(\tilde{y}_i = k \mid y_i = j) \equiv \eta_{jk}$, which is, in general, considered to be class-dependent noise¹². To aid in a simple and precise corruption procedure, we now depart from traditional notation and further decompose $\eta_{jk}$ as $p_j \cdot c_{jk}$, where $p_j \in [0,1]$ is the probability of corruption of the $j$-th class and $c_{jk} \in [0,1]$ is the relative probability that corrupted samples of class $j$ are labeled as class $k$, with $c_{jk} \ge 0$ for $k \neq j$, $c_{jj} = 0$, and $\sum_k c_{jk} = 1$. A noisy dataset with $m$ classes can then be described as transition probabilities specified by

$$F = \mathrm{diag}(P) \cdot C + \mathrm{diag}(1 - P) \cdot I \tag{1}$$

where $C \in \mathbb{R}^{m \times m}$ defines the system confusion or noise structure, $P \in \mathbb{R}^m$ defines the noise intensity or ratio for each class, and $I$ is the identity matrix. When $c_{jk} = c_{kj}$ the noise is said to be symmetric, and it is considered asymmetric otherwise. If the noise ratio is the same for all classes, then $p_j = p$ and the dataset is said to exhibit uniform noise. For the case of uniform noise, Equation (1) interestingly takes the familiar form of the Google matrix equation,

$$F_p = p \cdot C + (1 - p) \cdot I. \tag{2}$$

Note that, by this definition, $\eta_{jj} = p \cdot c_{jj} = 0$, which prohibits $\tilde{y}_i = y_i$. This ensures a true effective noise ratio of $p$. For example, suppose there are $m = 10$ classes and we wish to corrupt labels with 80% probability. Then, if corrupted labels are sampled from $\mathcal{Y}$ rather than $\mathcal{Y} \setminus \{y\}$, $\frac{1}{10} \cdot 0.8 = 8\%$ of the corrupted samples will not actually be corrupted, leading to a true corruption rate of 72%. Therefore, despite prescribing $p = 0.8$, the true effective noise ratio would be 0.72, which in turn yields a $\frac{0.08}{1 - 0.8} = 40\%$ increase in clean labels, and this is indeed the case in many studies (Zhang & Sabuncu, 2018; Nguyen et al., 2020; Li et al., 2020; Zhang et al., 2020).

2.2 METHODS. At a very high level, RTE is the combination of a noise-robust task loss, augmentation, and pseudo-labeling for consistency regularization. We unify generalized cross entropy (Zhang & Sabuncu, 2018), the AugMix stochastic augmentation strategy (Hendrycks et al., 2020), an exponential moving average of model weights for generating pseudo-labels (Tarvainen & Valpola, 2017), and an augmentation anchoring-like approach (Berthelot et al., 2020) to form a robust approach for learning with noisy labels.

2.2.1 NOISE-ROBUST TASK LOSS. Generalized cross entropy (GCE) (Zhang & Sabuncu, 2018) is a theoretically grounded noise-robust loss function that can be seen as a generalization of mean absolute error (MAE) and categorical cross entropy (CCE). The main idea is that CCE learns quickly but puts more emphasis on difficult samples, which makes it prone to overfitting noisy labels, while MAE treats all samples equally, providing noise-robustness but learning slowly. To exploit the benefits of both MAE and CCE, a negative Box-Cox transformation (Box & Cox, 1964) is used as the loss function

$$L_q(f(x_i), y_i = j) = \frac{1 - f_j(x_i)^q}{q} \tag{3}$$

where $q \in (0, 1]$ and $f_j$ denotes the $j$-th element of $f$. Note that GCE becomes CCE in the limit $q \to 0$ and becomes the MAE/unhinged loss when $q = 1$. ¹See Lee et al. (2019) for treatment of conditionally dependent semantic noise such that $\eta_{ijk} \neq \eta_{jk}$. ²Note that Patrini et al. (2016) define the noise transition matrix $T$ such that $T_{jk} \equiv \eta_{jk}$.
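As a concrete illustration, here is a small NumPy sketch of the uniform corruption procedure of Equation (2) (sampling corrupted labels from $\mathcal{Y} \setminus \{y\}$ so the effective noise ratio is exactly $p$) and of the GCE loss of Equation (3). Function names and the value of $q$ are illustrative assumptions.

```python
import numpy as np

def corrupt_labels_uniform(y, num_classes, p, seed=0):
    # Eq. (2) with symmetric C: with probability p, relabel each sample to a
    # class drawn uniformly from Y \ {y}, so no "corrupted" label stays clean.
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    flip = rng.random(y.shape[0]) < p
    offsets = rng.integers(1, num_classes, size=y.shape[0])  # never 0
    return np.where(flip, (y + offsets) % num_classes, y)

def gce_loss(probs, labels, q=0.7):
    # Eq. (3): L_q = (1 - f_y(x)^q) / q, averaged over the batch.
    # probs: (N, C) softmax outputs; labels: (N,) integer class indices.
    f_y = probs[np.arange(labels.shape[0]), labels]
    return np.mean((1.0 - f_y ** q) / q)
```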
2.2.2 ENSEMBLE CONSISTENCY REGULARIZATION. Consistency regularization works under the assumption that a model should output similar predictions given augmented versions of the same input. This regularization strategy is a common component of semi-supervised learning algorithms, with the general form $\| p_\theta(y \mid x_{\mathrm{aug1}}) - p_\theta(y \mid x_{\mathrm{aug2}}) \|$, where $p_\theta(y \mid x)$ is the predicted class distribution produced by the model with parameters $\theta$ for input $x$ (Zheng et al., 2016; Sajjadi et al., 2016). We build upon numerous variations from semi-supervised learning (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Berthelot et al., 2019; 2020) and leverage an ensemble consistency regularization (ECR) strategy,

$$\mathrm{ECR} = \frac{1}{|\mathcal{Y}| N^*} \sum_{i=1}^{N^*} \| p_{\theta'}(y \mid x) - p_\theta(y \mid A(x)) \| \tag{4}$$

where $x$ is the training example, $A$ is a stochastic augmentation function re-evaluated for each term in the summation, $\theta'_t = \alpha \theta'_{t-1} + (1 - \alpha)\theta_t$ is a temporal moving average of model weights used to generate pseudo-label targets, and inputs are pre-processed with a standard random horizontal flip and crop. In practice, this consists of initializing a copy of the initial model and maintaining an exponential moving average as training progresses (a sketch of this term is given after Section 2.2.3 below). Some methods directly average multiple label predictions together at each optimization step to form a single pseudo-label target (Berthelot et al., 2019; Li et al., 2020), but we find pseudo-label target distributions generated by $\theta'$ to be better suited to the learning-with-noise problem due to the intrinsic ensemble nature of the weight averaging process over many optimization steps (Tarvainen & Valpola, 2017). In semi-supervised learning techniques, it is common to leverage a large batch size of unlabeled data for consistency regularization. However, we found that modulating $N^*$, rather than the batch size of the consistency term, yields a monotonic increase in model performance, consistent with related works (Berthelot et al., 2020). Moreover, in semi-supervised learning, different batches are used for the supervised and unsupervised loss terms, but we find (see Section 4.3) that for the case of learning with noise, batches synchronized with the GCE task loss term yield superior performance.

2.2.3 AUGMENTATION. AugMix (Hendrycks et al., 2020) is a data augmentation technique which utilizes stochasticity, diverse augmentations, a Jensen-Shannon divergence consistency loss, and a formulation to mix multiple augmented inputs. In other augmentation strategies, such as RandAugment (Cubuk et al., 2020), augmentations are applied sequentially with fixed intensity, which can degrade the input quickly. In AugMix, to mitigate input degradation but retain augmentation diversity, several stochastically sampled augmentation chains are layered together in a convex combination to generate highly diverse transformations. The mixing coefficients are randomly sampled from a Dirichlet distribution with shared concentration parameters, and the resulting augmented version of the input is combined with the original input through a second random convex combination sampled from a beta distribution, again with shared parameters.
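The following is a minimal NumPy sketch of the teacher update and the ECR term of Equation (4), with the stochastic augmentation $A$ passed in as a callable (an AugMix-style procedure in the paper). The names, the value of $\alpha$, and the default $N^*$ are illustrative assumptions.

```python
import numpy as np

def ema_update(theta_teacher, theta_student, alpha=0.999):
    # Teacher weights: theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t
    return {k: alpha * theta_teacher[k] + (1.0 - alpha) * theta_student[k]
            for k in theta_teacher}

def ecr_term(predict_teacher, predict_student, augment, x, n_star=4):
    # Eq. (4): average over N* stochastic augmentations of the distance
    # between teacher pseudo-labels on x and student predictions on A(x).
    p_target = predict_teacher(x)              # (N, C) pseudo-label targets
    n_classes = p_target.shape[1]
    total = 0.0
    for _ in range(n_star):                    # A is re-sampled for each term
        p_aug = predict_student(augment(x))    # (N, C)
        total += np.linalg.norm(p_target - p_aug, axis=1).mean()
    return total / (n_classes * n_star)
```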
2.2.4 JENSEN-SHANNON DIVERGENCE. The Jensen-Shannon consistency loss is used to enforce a flat response of the classifier by incentivizing the model to be stable, consistent, and insensitive across a diverse range of inputs (Zheng et al., 2016). The Jensen-Shannon divergence (JSD) is minimized across the distributions $p_{\mathrm{orig}}$, $p_{\mathrm{aug1}}$, and $p_{\mathrm{aug2}}$ of the original sample $x_{\mathrm{orig}}$ and its augmented variants $x_{\mathrm{aug1}}$ and $x_{\mathrm{aug2}}$, and can be understood to measure the average information that the sample reveals about the identity of its originating distribution (Hendrycks et al., 2020). The JSD term is computed with $M = (p_{\mathrm{orig}} + p_{\mathrm{aug1}} + p_{\mathrm{aug2}})/3$ as

$$\mathrm{JSD} = \frac{1}{3}\big( \mathrm{KL}(p_{\mathrm{orig}} \,\|\, M) + \mathrm{KL}(p_{\mathrm{aug1}} \,\|\, M) + \mathrm{KL}(p_{\mathrm{aug2}} \,\|\, M) \big) \tag{5}$$

where $\mathrm{KL}(p \,\|\, q)$ is the Kullback–Leibler divergence from $q$ to $p$. In this way, the JSD term improves the stability of training in the presence of noisy labels and heavy data augmentation, with a modest contribution to final classifier test accuracy, as shown in Table 5.
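A minimal NumPy sketch of Equation (5) follows, assuming the three (N, C) probability arrays have already been computed; the epsilon guard is an illustrative numerical-stability assumption.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # KL(p || q), computed row-wise over class distributions.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def jsd_loss(p_orig, p_aug1, p_aug2):
    # Eq. (5): JSD of the three predicted distributions via their mixture M.
    m = (p_orig + p_aug1 + p_aug2) / 3.0
    return np.mean((kl(p_orig, m) + kl(p_aug1, m) + kl(p_aug2, m)) / 3.0)
```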
Real-world data contains noise in the annotated labels. To mitigate this, the authors propose a supervised learning approach, Robust Temporal Ensembling (RTE). RTE combines 1) a noise-robust task loss in the form of generalized cross entropy, 2) diverse augmentations produced by the AugMix technique together with the Jensen-Shannon divergence (JSD), and 3) ensemble consistency regularization with pseudo-labeling.
SP:9feb34bfbe8bfbf1a99d90a74f36b2b0c7dc9985
This submission deals with robust supervised learning in the presence of noisy labels. The label noise is modeled using a probabilistic (and conditionally independent) transition matrix that changes the label of one class to another. In order to classify with noise, the network is trained with a mixture of three known losses: 1) generalized cross entropy (GCE), which rejects outlier labels, 2) a JSD term to ensure the softmax distribution matches the augmented data distributions, and 3) an ensemble consistency regularization (ECR) that penalizes inconsistencies on the augmented data relative to a mean-teacher model. Experiments with CIFAR-10, CIFAR-100, and ImageNet classification indicate substantial gains compared with state-of-the-art alternatives.
SP:9feb34bfbe8bfbf1a99d90a74f36b2b0c7dc9985
Lipschitz-Bounded Equilibrium Networks
1 INTRODUCTION. Deep neural network models have revolutionized the field of machine learning: their accuracy on practical tasks such as image classification and their scalability have led to an enormous volume of research on different model structures and their properties (LeCun et al., 2015). In particular, deep residual networks with skip connections (He et al., 2016) have had a major impact, and neural ODEs have been proposed as an analog with "implicit depth" (Chen et al., 2018). Recently, a new structure has gained interest: equilibrium networks (Bai et al., 2019; Winston & Kolter, 2020), a.k.a. implicit deep learning models (El Ghaoui et al., 2019), in which model outputs are defined by implicit equations incorporating neural networks. This model class is very flexible: it is easy to show that it includes many previous structures as special cases, including standard multi-layer networks, residual networks, and (in a certain sense) neural ODEs. However, model flexibility in machine learning is always in tension with model regularity or robustness. While deep learning models have exhibited impressive generalization performance in many contexts, it has also been observed that they can be very brittle, especially when targeted with adversarial attacks (Szegedy et al., 2014). In response to this, there has been a major research effort to understand and certify robustness properties of deep neural networks, e.g., Raghunathan et al. (2018a); Tjeng et al. (2018); Liu et al. (2019); Cohen et al. (2019) and many others. Global Lipschitz bounds (a.k.a. incremental gain bounds) provide a somewhat crude but nevertheless highly useful proxy for robustness (Tsuzuku et al., 2018; Fazlyab et al., 2019), and appear in several analyses of generalization (e.g., Bartlett et al., 2017; Zhou & Schoellig, 2019). Inspired by both of these lines of research, in this paper we propose new parameterizations of equilibrium networks with guaranteed Lipschitz bounds. We build directly on the monotone operator framework of Winston & Kolter (2020) and the work of Fazlyab et al. (2019) on Lipschitz bounds. The main contribution of our paper is the ability to enforce tight bounds on the Lipschitz constant of an equilibrium network during training with essentially no extra computational effort. In addition, we prove existence of solutions under less restrictive conditions on the weight matrix and more natural assumptions on the activation functions, via novel connections to convex optimization and contracting dynamical systems. Finally, we show via small-scale image classification experiments that the proposed parameterizations can provide significant improvement in robustness to adversarial attacks with little degradation in nominal accuracy. Furthermore, we observe small gaps between the certified Lipschitz upper bounds and the observed lower bounds computed via adversarial attack.

2 RELATED WORK. Equilibrium networks, implicit deep models, and well-posedness. As mentioned above, it has recently been shown that many existing network architectures can be incorporated into a flexible model set called an equilibrium network (Bai et al., 2019; Winston & Kolter, 2020) or implicit deep model (El Ghaoui et al., 2019). In this unified model set, the network predictions are made not by forward computation of sequential hidden layers, but by finding a solution to an implicit equation involving a single layer of all hidden units.
One major question for this type of network is well-posedness, i.e., the existence and uniqueness of a solution to the implicit equation for all possible inputs. El Ghaoui et al. (2019) proposed a computationally verifiable but conservative condition on the spectral norm of the hidden unit weight matrix. In Winston & Kolter (2020), a less conservative condition was developed based on monotone operator theory. Similar monotonicity constraints were previously used to ensure well-posedness of a different class of implicit models in the context of nonlinear system identification (Tobenkin et al., 2017, Theorem 1). On the question of well-posedness, our contribution is a more flexible model set and more natural assumptions on the activation functions: that they are monotone and slope-restricted.

Neural network robustness and Lipschitz bounds. The Lipschitz constant of a function measures the worst-case sensitivity of the function, i.e., the maximum "amplification" from differences in inputs to differences in outputs. The key features of a good Lipschitz-bounded learning approach are a tight estimate of the Lipschitz constant and a computationally tractable training method with the bounds enforced. For deep networks, Tsuzuku et al. (2018) proposed a computationally efficient but conservative approach, since its Lipschitz constant estimate is based on composing estimates for different layers, while Anil et al. (2019) proposed a combination of a novel activation function and weight constraints. For equilibrium networks, El Ghaoui et al. (2019) proposed an estimate of Lipschitz bounds via input-to-state stability (ISS) analysis. The estimates of Fazlyab et al. (2019) for deep networks, based on incremental quadratic constraints and semidefinite programming (SDP), were shown to give state-of-the-art results; however, they were limited to the analysis of an already-trained network. The SDP test was incorporated into training via the alternating direction method of multipliers (ADMM) in Pauli et al. (2020); however, due to the complexity of the SDP, the recorded training times were almost 50 times longer than for unconstrained networks. Our approach uses a condition similar to that of Fazlyab et al. (2019) applied to equilibrium networks; however, we introduce a novel direct parameterization method that enables learning robust models via unconstrained optimization, removing the need for computationally expensive projections or barrier terms.

3 PROBLEM FORMULATION AND PRELIMINARIES. 3.1 PROBLEM STATEMENT. We consider the weight-tied network, in which $x \in \mathbb{R}^d$ denotes the input, $z \in \mathbb{R}^n$ denotes the hidden units, and $y \in \mathbb{R}^p$ denotes the output, given by the implicit equation

$$z = \sigma(Wz + Ux + b_z), \qquad y = W_o z + b_y \tag{1}$$

where $W \in \mathbb{R}^{n \times n}$, $U \in \mathbb{R}^{n \times d}$, and $W_o \in \mathbb{R}^{p \times n}$ are the hidden unit, input, and output weights, respectively, and $b_z \in \mathbb{R}^n$ and $b_y \in \mathbb{R}^p$ are bias terms. The implicit framework includes most current neural network architectures (e.g., deep and residual networks) as special cases. To streamline the presentation, we assume that $\sigma : \mathbb{R} \to \mathbb{R}$ is a single nonlinearity applied elementwise, although our results also apply in the case that each channel has a different activation function, nonlinear or linear. Equation (1) is called an equilibrium network since its solutions are equilibrium points of the difference equation $z_{k+1} = \sigma(Wz_k + Ux + b_z)$ or of the ODE $\dot{z}(t) = -z(t) + \sigma(Wz(t) + Ux + b_z)$.
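To illustrate Equation (1), the following is a minimal NumPy sketch that computes the network output by naive fixed-point iteration on the difference equation $z_{k+1} = \sigma(Wz_k + Ux + b_z)$. The ReLU activation, the tolerance, and the assumption that the iteration converges (e.g., under conditions such as $\|W\|_2 < 1$, the conservative condition mentioned above) are illustrative; this is not the paper's solver.

```python
import numpy as np

def relu(v):
    # A monotone, slope-restricted activation, applied elementwise.
    return np.maximum(v, 0.0)

def equilibrium_forward(W, U, Wo, bz, by, x, tol=1e-8, max_iter=1000):
    # Fixed-point iteration for z = relu(W z + U x + bz), then y = Wo z + by.
    # Returns the output at the last iterate if tolerance is not reached.
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = relu(W @ z + U @ x + bz)
        if np.linalg.norm(z_next - z) < tol:
            z = z_next
            break
        z = z_next
    return Wo @ z + by
```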
Our goal is to learn equilibrium networks (1) possessing the following two properties:
• Well-posedness: for every input $x$ and bias $b_z$, Equation (1) admits a unique solution $z$.
• $\gamma$-Lipschitz: the network has a finite Lipschitz bound of $\gamma$, i.e., for any input-output pairs $(x_1, y_1)$, $(x_2, y_2)$ we have $\|y_1 - y_2\|_2 \le \gamma \|x_1 - x_2\|_2$.

3.2 PRELIMINARIES. Monotone operator theory. The theory of monotone operators on Euclidean space (see the survey of Ryu & Boyd (2016)) has been extensively applied in the development of equilibrium networks (Winston & Kolter, 2020). In this paper, we use monotone operator theory on non-Euclidean spaces (Bauschke et al., 2011); in particular, we are interested in a finite-dimensional Hilbert space $\mathcal{H}$, which we identify with $\mathbb{R}^n$ equipped with a weighted inner product $\langle x, y \rangle_Q := y^\top Q x$, where $Q \succ 0$. The main benefit is that we can construct a more expressive equilibrium network set. A brief summary of relevant theory can be found in Appendix C.1; here we give some definitions that are used frequently throughout the paper. An operator is a set-valued or single-valued function defined by a subset of the space, $A \subseteq \mathcal{H} \times \mathcal{H}$. A function $f : \mathcal{H} \to \mathbb{R} \cup \{\infty\}$ is proper if $f(x) < \infty$ for at least one $x$. The subdifferential and proximal operators of a proper function $f$ are defined as

$$\partial f(x) := \{ g \in \mathcal{H} \mid f(y) \ge f(x) + \langle y - x, g \rangle_Q, \; \forall y \in \mathcal{H} \}, \qquad \mathrm{prox}_{\alpha f}(x) := \Big\{ z \in \mathcal{H} \;\Big|\; z = \arg\min_u \tfrac{1}{2} \| u - x \|_Q^2 + \alpha f(u) \Big\}$$

respectively, where $\|x\|_Q := \sqrt{\langle x, x \rangle_Q}$ is the induced norm. For $n = 1$, we only consider the case of $Q = 1$. An operator $A$ is monotone if $\langle u - v, x - y \rangle_Q \ge 0$, and strongly monotone with parameter $m$ if $\langle u - v, x - y \rangle_Q \ge m \| x - y \|_Q^2$, for all $(x, u), (y, v) \in A$. The operator splitting problem is that of finding a zero of a sum of two operators $A$ and $B$, i.e., finding an $x$ such that $0 \in (A + B)(x)$.

Dynamical systems theory. In this paper, we also treat the solutions of (1) as equilibrium points of certain dynamical systems $\dot{z}(t) = f(z(t))$. The well-posedness and robustness properties of (1) can then be guaranteed by corresponding properties of the dynamical system's solution set. A central focus in robust and nonlinear control theory for more than 50 years – largely unified by the modern theory of integral quadratic constraints (Megretski & Rantzer, 1997) – has been on systems which are interconnections of linear mappings and "simple" nonlinearities, i.e., those easily bounded in some sense by quadratic functions. Fortuitously, this characteristic is shared with deep, recurrent, and equilibrium neural networks, a connection that we use heavily in this paper and that has previously been exploited by Fazlyab et al. (2019); El Ghaoui et al. (2019); Revay et al. (2020) and others. A particular property we are interested in is called contraction (Lohmiller & Slotine, 1998), i.e., any pair of solutions $z_1(t)$ and $z_2(t)$ converge exponentially to each other: $\| z_1(t) - z_2(t) \| \le \alpha \| z_1(0) - z_2(0) \| e^{-\beta t}$ for all $t > 0$ and some $\alpha, \beta > 0$. Contraction can be established by finding a Riemannian metric with respect to which nearby trajectories converge, which is a differential analog of a Lyapunov function. A nice property of a contracting dynamical system is that if it is time-invariant, a unique equilibrium exists and possesses a certain level of robustness. Moreover, contraction can also be linked to monotone operators, i.e., a system is contracting w.r.t.
a constant (state-independent) metric $Q$ if and only if the operator $-f$ is strongly monotone w.r.t. the $Q$-weighted inner product. We collect some directly relevant results from systems theory in Appendix C.2.
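While the certified upper bounds on $\gamma$ come from the parameterization, a cheap empirical lower bound is a useful sanity check against them. The paper computes lower bounds via adversarial attack; the sketch below uses simpler random probing (so it only ever underestimates the true Lipschitz constant), where `forward` is any input-output map such as the equilibrium network sketch above. The probe scale and sample count are illustrative assumptions.

```python
import numpy as np

def empirical_lipschitz_lower_bound(forward, d, n_pairs=2000, seed=0):
    # Max over sampled pairs of ||y1 - y2|| / ||x1 - x2||: a lower bound
    # on the true Lipschitz constant gamma of the map x -> y.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        x1 = rng.standard_normal(d)
        x2 = x1 + 1e-2 * rng.standard_normal(d)   # local random probe
        ratio = (np.linalg.norm(forward(x1) - forward(x2))
                 / np.linalg.norm(x1 - x2))
        best = max(best, ratio)
    return best
```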
The paper introduces a new condition for showing the existence of the solution of a deep equilibrium model (which defines an implicit mapping via the fixed point). The new formulation also comes with a convenient and accurate Lipschitz bound. The proposed condition can be satisfied via reparameterizing an unconstrained set of trainable parameters.
SP:16392bc9174dde6ad7b569f3f40fa14a4ed48831
> Summary: This paper studies a new and more general way of parameterizing the simplest equilibrium network of the form $\sigma(Wz+Ux+b)$, a form that has been tackled by works like (Winston & Kolter, 2020) and (El Ghaoui et al., 2019). The authors provide a computationally (relatively) efficient way of computing Lipschitz-bounded equilibrium networks and a detailed analysis of how the network should be constructed, along with a proof of the existence and uniqueness of the fixed point (under less restrictive conditions when compared to MON). The empirical results on adversarial robustness show that the proposed approach is a bit more robust than prior layer-based networks and other implicit networks, and validate most of the theoretical claims made by the authors.
SP:16392bc9174dde6ad7b569f3f40fa14a4ed48831
Learning Safe Policies with Cost-sensitive Advantage Estimation
1 INTRODUCTION. In recent years, Reinforcement Learning (RL) has achieved remarkable success in learning skillful AI agents in various applications, ranging from robot locomotion (Schulman et al., 2015a; Duan et al., 2016; Schulman et al., 2015c) and video games (Mnih et al., 2015) to the game of Go (Silver et al., 2016; 2017). These agents are trained either in simulation or in risk-free environments, so the deployed RL algorithms can focus on maximizing the cumulative return by exploring the environment arbitrarily. However, this is barely workable for real-world RL problems where the safety of the agent is important. For example, a navigating robot cannot take the action of crashing into an obstacle in front of it, even if the potential return from reaching the target faster is higher. In reality, some states or actions might be unsafe and harmful to the system, and the agent should learn to avoid them in deployment when performing certain tasks. Conventional RL algorithms do not particularly consider such safety-constrained environments, which limits their practical application. Recently, Safe Reinforcement Learning (Garcıa & Fernández, 2015; Mihatsch & Neuneier, 2002; Altman, 1999) has been proposed and has drawn increasing attention. Existing safe RL algorithms generally fall into two categories, based on whether or not the agents are required to always stay safe during learning and exploration. The algorithms with exploration safety (Dalal et al., 2018; Pecka & Svoboda, 2014) insist that safety constraints never be violated, even during learning, and thus they usually require certain prior knowledge of the environment to be available, e.g., in the form of human demonstrations. Comparatively, deployment safety (Achiam et al., 2017; Chow et al., 2018) RL algorithms train the agents from interaction with the environment and allow safety constraint violations during learning to some extent. This is reasonable, since whether a state is safe will not be clear until the agent visits that state. Since human demonstrations are too difficult or expensive to collect in some cases and may not cover the whole state space, we focus on deployment safety in this work. RL problems with deployment safety are typically formulated as a Constrained Markov Decision Process (CMDP) (Altman, 1999), which extends the MDP by requiring the agent to satisfy cumulative cost constraints in expectation while maximizing the expected return. Leveraging the success of recent deep learning powered policy optimization methods (Schulman et al., 2015b), Constrained Policy Optimization (CPO) (Achiam et al., 2017) makes the first attempt on high-dimensional control tasks in continuous CMDPs. However, CPO only considers the total cost of a trajectory of state-action pairs during policy optimization. It does not differentiate the safe state-action pairs from the unsafe ones in the trajectories. Because it cannot exploit this intrinsic structure of environments and trajectories, CPO sacrifices too much expected return in order to learn a safe policy. In this work, we propose Cost-sensitive Advantage Estimation (CSAE), which generalizes conventional advantage estimation to safe RL problems by differentiating safe and unsafe states based on the cost information returned by the environment during training.
CSAE depresses the advantage value of unsafe state-action pairs while controlling its effect on the adjacent safe state-action pairs in the trajectories. Thus, the learned policy can maximally gain rewards from the safe states. Based on CSAE, we develop a new safe RL algorithm with proven monotonic policy performance improvement in terms of both safety and return from safe states, showing superiority over other safe RL algorithms. Moreover, to further enhance the agent's ability to enforce safety constraints, we propose the Worst-case Constrained Markov Decision Process (WCMDP), an extension of the CMDP that constrains the cumulative cost in worst cases through the Conditional Value-at-Risk (Tamar et al., 2015), instead of in expectation. This augmentation makes the learned policy not only safer but also better, both experimentally and theoretically. With CSAE and WCMDP, we develop a new safe RL algorithm by relating them to trust region methods. We conduct extensive experiments to evaluate our algorithm on several constrained robot locomotion tasks based on Mujoco (Todorov et al., 2012) and compare it with well-established baselines. The results demonstrate that the agent trained by our algorithm can collect a higher reward while satisfying the safety constraints with less cost.

2 RELATED WORK. Safe Reinforcement Learning has drawn growing attention. There are various definitions of 'safety' in RL (Garcıa & Fernández, 2015; Pecka & Svoboda, 2014), e.g., the variance of return (Heger, 1994; Gaskett, 2003), fatal transitions (Hans et al., 2008), and unknown states (Garcıa et al., 2013). In this paper, we focus on RL problems with trajectory-based safety cost, under the constrained MDP (CMDP) framework. Through the Lagrangian method, Geibel & Wysotzki (2005) propose to convert the CMDP into an unconstrained problem that maximizes the expected return with a cost penalty. Though such a problem can be easily solved with well-designed RL algorithms, e.g., (Schulman et al., 2015b; 2017), the trade-off between return and cost is manually balanced with a fixed Lagrange multiplier, which cannot guarantee safety through learning. To address this, inspired by trust region methods (Schulman et al., 2015b), Constrained Policy Optimization (CPO) (Achiam et al., 2017) establishes a linear approximation to the safety constraint and solves the corresponding optimization problem in the dual form. Compared with previous CMDP algorithms, CPO scales well to high-dimensional continuous state-action spaces. However, CPO does not distinguish the safe states from the unsafe ones in the training process, limiting its performance in terms of return. Besides developing various optimization algorithms, some recent works also explore other approaches to enhance the safety constraints, e.g., adopting the Conditional Value-at-Risk (CVaR) of the cumulative cost as the safety constraint (Tamar et al., 2015). Along this direction, Tamar et al. (2015) develop a sampling-based gradient estimator to optimize CVaR with gradient descent. Prashanth (2014) further applies this estimator to CVaR-constrained MDPs to solve the stochastic shortest path (SSP) problem. Our work considers a framework similar to CPO (Achiam et al., 2017), but it treats states differently by extending Generalized Advantage Estimation (Schulman et al., 2015c) to be safety-sensitive.
Our proposed CSAE can boost the policy performance in terms of return while ensuring the safety property. Moreover, our algorithm with WCMDP is safer than CPO in terms of the constraint violation ratio during learning. There are also some non-CMDP based algorithms for safe RL that are not in the scope of this work. In (Dalal et al., 2018), a linear safety-signal model is built to estimate the per-step cost from state-action pairs and rectify the action into a safe one. However, this method requires a pre-collected dataset to fit the linear cost estimation model, which limits its application. Similarly, Cheng et al. (2019) augment the model-free controller to enforce safety per step by designing a model-based controller with control barrier functions (CBFs). Some works introduce Lyapunov functions to build safe RL algorithms. For example, Berkenkamp et al. (2017) apply Lyapunov functions for safely recovering from exploratory actions, while Chow et al. (2018) construct Lyapunov functions that explicitly model constraints.

3 PRELIMINARIES. A standard Markov Decision Process (MDP) (Sutton et al., 1998) is defined by a tuple $(S, A, P, R, \gamma, \mu)$, where $S$ and $A$ denote the sets of states and actions respectively, $P : S \times A \times S \to [0,1]$ is the transition dynamics modeling the probability of transferring from state $s$ to $s'$ after taking action $a$, $R(s, a, s')$ represents the reward function during this transition, $\gamma \in [0,1]$ is the discount factor, and $\mu : S \to [0,1]$ denotes the starting state distribution. An MDP agent is usually equipped with a policy $\pi(a|s)$, which denotes the probability distribution over actions $a$ given a state $s$. The performance of a policy $\pi$ is measured with the expected discounted total reward $J(\pi) = \mathbb{E}_{\tau \sim \pi, s_0 \sim \mu}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1})\right]$, where $\tau = (s_0, a_0, s_1, \dots)$ is a trajectory generated by following policy $\pi$. RL algorithms for MDPs try to find the policy $\pi^*$ that achieves the highest reward, i.e., $\pi^* = \arg\max_\pi J(\pi)$. They commonly use the value function $V_\pi(s) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \mid s_0 = s\right]$, the action value function $Q_\pi(s, a) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \mid s_0 = s, a_0 = a\right]$, and the advantage function $A_\pi(s, a) = Q_\pi(s, a) - V_\pi(s)$. The discounted future state distribution will also be useful, defined as $d_\pi(s) = (1 - \gamma)\sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi)$. The Constrained Markov Decision Process (CMDP) (Altman, 1999) extends the MDP to environments with safety costs that could harm the agent when undesired actions are taken. As various safety costs may exist in a single CMDP, we relate them with $m$ cost functions $\{C_1(s, a, s'), \dots, C_m(s, a, s')\}$, each of which denotes the cost an agent receives for each transition $(s, a, s')$ (similar to reward functions). Let $C_i(\tau) = \sum_{t=0}^{\infty} \gamma^t C_i(s_t, a_t, s_{t+1})$ denote the cumulative cost along a trajectory $\tau$ generated from policy $\pi$. We consider a trajectory-based cost constraint in the CMDP, which limits the cumulative cost in expectation, $J_{C_i} = \mathbb{E}_{\tau \sim \pi, s_0 \sim \mu}[C_i(\tau)]$, by a value $d_i$. Then safe RL aims to learn the policy $\pi$ under the CMDP by solving the following problem:

$$\pi^* = \arg\max_\pi J(\pi), \quad \text{s.t.} \quad J_{C_i} = \mathbb{E}_{\tau \sim \pi, s_0 \sim \mu}[C_i(\tau)] \le d_i, \quad i = 1, \dots, m. \tag{1}$$

Safe RL algorithms search for the policy $\pi^*$ that achieves the maximal cumulative reward and meanwhile does not violate the imposed safety constraints on the costs.
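As a small worked example of the objects in Equation (1), the sketch below Monte-Carlo-estimates $J_{C_i}$ from sampled cost trajectories and checks it against the threshold $d_i$; the discount value and function names are illustrative assumptions.

```python
import numpy as np

def discounted_return(per_step, gamma=0.99):
    # sum_t gamma^t * per_step[t], for rewards or costs along one trajectory.
    per_step = np.asarray(per_step, dtype=float)
    return float(np.sum(gamma ** np.arange(per_step.shape[0]) * per_step))

def cost_constraint_estimate(cost_trajectories, d_i, gamma=0.99):
    # Estimate J_{C_i} = E_tau[C_i(tau)] from sampled trajectories and
    # test the constraint J_{C_i} <= d_i of Eq. (1).
    j_ci = float(np.mean([discounted_return(c, gamma)
                          for c in cost_trajectories]))
    return j_ci, j_ci <= d_i
```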
In the following, analogous to the definitions of the value functions (i.e., $V_\pi$, $Q_\pi$ and $A_\pi$), we use $V^{C_i}_\pi$, $Q^{C_i}_\pi$ and $A^{C_i}_\pi$ to denote the cost-value functions w.r.t. cost function $C_i$.
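The excerpt does not give the exact CSAE estimator, but the core idea of depressing the advantage at unsafe state-action pairs can be sketched as follows. The dampening rule, the cost-based "unsafe" test, and the `damp` factor are illustrative assumptions, not the paper's formula:

```python
import numpy as np

def dampened_advantages(advantages, step_costs, damp=0.1):
    # Illustrative only: mark a state-action pair "unsafe" when it incurs
    # nonzero cost, and shrink its advantage so the policy update prefers
    # gaining reward from safe states. The paper's CSAE extends GAE and
    # additionally controls the effect on adjacent safe state-actions,
    # which this sketch does not model.
    adv = np.asarray(advantages, dtype=float).copy()
    unsafe = np.asarray(step_costs) > 0
    adv[unsafe] *= damp
    return adv
```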
In this paper, the authors propose a new constrained policy optimization algorithm and a worst-case version of the constrained MDP framework. The proposed constrained policy optimization algorithm builds on CPO and uses a novel advantage function (CSAE) based on the concept of a "safe" state. Experiments on simulated control tasks are provided.
SP:d7c00cd82b5d4cd035635e74b8525cf5603d305b
Learning Safe Policies with Cost-sensitive Advantage Estimation
1 INTRODUCTION. In recent years, Reinforcement Learning (RL) has achieved remarkable success in training skillful AI agents in various applications, ranging from robot locomotion (Schulman et al., 2015a; Duan et al., 2016; Schulman et al., 2015c) and video games (Mnih et al., 2015) to the game of Go (Silver et al., 2016; 2017). These agents are trained either in simulation or in risk-free environments, where the deployed RL algorithms can focus on maximizing the cumulative return by exploring the environment arbitrarily. However, this is barely workable for real-world RL problems where the safety of the agent is important. For example, a navigating robot cannot take the action of crashing into an obstacle in front of it even if the potential return for reaching the target faster is higher. In reality, some states or actions may be unsafe and harmful to the system, and the agent should learn to avoid them in deployment when performing certain tasks. Conventional RL algorithms do not particularly consider such safety-constrained environments, which limits their practical application. Recently, Safe Reinforcement Learning (García & Fernández, 2015; Mihatsch & Neuneier, 2002; Altman, 1999) has been proposed and has drawn increasing attention. Existing safe RL algorithms generally fall into two categories based on whether or not the agents are required to always stay safe during learning and exploration. Algorithms with exploration safety (Dalal et al., 2018; Pecka & Svoboda, 2014) insist that safety constraints never be violated even during learning, and thus they usually require certain prior knowledge of the environment to be available, e.g., in the form of human demonstrations. Comparatively, deployment safety (Achiam et al., 2017; Chow et al., 2018) RL algorithms train the agents from interaction with the environment and allow safety constraint violations during learning to some extent. This is reasonable since whether a state is safe is not clear until the agent visits that state. Since human demonstrations are too difficult or expensive to collect in some cases and may not cover the whole state space, we focus on deployment safety in this work. RL problems with deployment safety are typically formulated as a Constrained Markov Decision Process (CMDP) (Altman, 1999), which extends MDP by requiring the agent to satisfy cumulative cost constraints in expectation while maximizing the expected return. Leveraging the success of recent deep-learning-powered policy optimization methods (Schulman et al., 2015b), Constrained Policy Optimization (CPO) (Achiam et al., 2017) makes the first attempt at high-dimensional control tasks in continuous CMDPs. However, CPO only considers the total cost of a trajectory, i.e., of a sequence of state-action pairs, during policy optimization. It does not differentiate the safe state-action pairs from the unsafe ones in the trajectories. Due to this inability to exploit the intrinsic structure of environments and trajectories, CPO sacrifices too much of the expected return when learning the safe policy. In this work, we propose Cost-sensitive Advantage Estimation (CSAE), which generalizes conventional advantage estimation for safe RL problems by differentiating safe and unsafe states, based on the cost information returned by the environment during training.
CSAE depresses the advantage value of unsafe state-action pairs while controlling the effect on their adjacent safe state-action pairs in the trajectories. Thus, the learned policy can maximally gain reward from the safe states. Based on CSAE, we develop a new safe RL algorithm with proven monotonic policy performance improvement in terms of both safety and return from safe states, showing superiority over other safe RL algorithms. Moreover, to further enhance the agent's ability to enforce safety constraints, we propose the Worst-case Constrained Markov Decision Process (WCMDP), an extension of CMDP that constrains the cumulative cost in the worst cases through the Conditional Value-at-Risk (Tamar et al., 2015), instead of in expectation. This augmentation makes the learned policy not only safer but also better, both experimentally and theoretically. With CSAE and WCMDP, we develop a new safe RL algorithm by relating them to trust region methods. We conduct extensive experiments to evaluate our algorithm on several constrained robot locomotion tasks based on Mujoco (Todorov et al., 2012) and compare it with well-established baselines. The results demonstrate that the agent trained by our algorithm collects a higher reward while satisfying the safety constraints with less cost.

2 RELATED WORK. Safe Reinforcement Learning has drawn growing attention. There are various definitions of 'safety' in RL (García & Fernández, 2015; Pecka & Svoboda, 2014), e.g., the variance of return (Heger, 1994; Gaskett, 2003), fatal transitions (Hans et al., 2008), and unknown states (García et al., 2013). In this paper, we focus on RL problems with trajectory-based safety cost, under the constrained MDP (CMDP) framework. Through the Lagrangian method, Geibel & Wysotzki (2005) propose to convert a CMDP into an unconstrained problem that maximizes the expected return with a cost penalty. Though such a problem can be easily solved with well-designed RL algorithms, e.g., (Schulman et al., 2015b; 2017), the trade-off between return and cost is manually balanced with a fixed Lagrange multiplier, which cannot guarantee safety throughout learning. To address this, inspired by trust region methods (Schulman et al., 2015b), Constrained Policy Optimization (CPO) (Achiam et al., 2017) establishes a linear approximation to the safety constraint and solves the corresponding optimization problem in the dual form. Compared with previous CMDP algorithms, CPO scales well to high-dimensional continuous state-action spaces. However, CPO does not distinguish the safe states from the unsafe ones in the training process, limiting its performance in terms of return. Besides developing various optimization algorithms, some recent works also explore other approaches to strengthen the safety constraints, e.g., adopting the Conditional Value-at-Risk (CVaR) of the cumulative cost as the safety constraint (Tamar et al., 2015). Along this direction, Tamar et al. (2015) develop a gradient estimator through sampling to optimize CVaR with gradient descent. Prashanth (2014) further applies this estimator to CVaR-constrained MDPs to solve the stochastic shortest path (SSP) problem. Our work considers a framework similar to CPO (Achiam et al., 2017), but it treats states differently by extending Generalized Advantage Estimation (Schulman et al., 2015c) to be safety-sensitive.
Our proposed CSAE can boost the policy performance in terms of return while ensuring the safety property. Moreover, our algorithm with WCMDP is safer than CPO in terms of constraint violation ratio during learning. There are also some non-CMDP based algorithms for safe RL that are outside the scope of this work. In (Dalal et al., 2018), a linear safety-signal model is built to estimate the per-step cost from state-action pairs and rectify the action into a safe one. However, this method requires a pre-collected dataset to fit the linear cost estimation model, which limits its application. Similarly, Cheng et al. (2019) augment the model-free controller to enforce safety per step by designing a model-based controller with control barrier functions (CBFs). Some works introduce Lyapunov functions to build safe RL algorithms. For example, Berkenkamp et al. (2017) apply Lyapunov functions for safely recovering from exploratory actions, while Chow et al. (2018) construct Lyapunov functions that explicitly model constraints.

3 PRELIMINARIES. A standard Markov Decision Process (MDP) (Sutton et al., 1998) is defined by a tuple $(S, A, P, R, \gamma, \mu)$, where $S$ and $A$ denote the sets of states and actions respectively, $P : S \times A \times S \to [0, 1]$ is the transition dynamics modeling the probability of transferring from state $s$ to $s'$ after taking action $a$, $R(s, a, s')$ is the reward received during this transition, $\gamma \in [0, 1]$ is the discount factor, and $\mu : S \to [0, 1]$ denotes the starting state distribution. An MDP agent is usually equipped with a policy $\pi(a \mid s)$, which denotes the probability distribution over actions $a$ given a state $s$. The performance of a policy $\pi$ is measured by the expected discounted total reward $J(\pi) = \mathbb{E}_{\tau \sim \pi, s_0 \sim \mu}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1})\right]$, where $\tau = (s_0, a_0, s_1, \ldots)$ is a trajectory generated by following policy $\pi$. RL algorithms for MDPs seek the policy $\pi^*$ that achieves the highest reward, i.e., $\pi^* = \arg\max_\pi J(\pi)$. They commonly use the value function $V_\pi(s) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \mid s_0 = s\right]$, the action-value function $Q_\pi(s, a) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \mid s_0 = s, a_0 = a\right]$, and the advantage function $A_\pi(s, a) = Q_\pi(s, a) - V_\pi(s)$. The discounted future state distribution will also be useful; it is defined as $d_\pi(s) = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi)$. Constrained Markov Decision Process (CMDP) (Altman, 1999) extends MDP to environments with safety costs that could harm the agent when undesired actions are taken. As various safety costs may exist in a single CMDP, we model them with $m$ cost functions $\{C_1(s, a, s'), \ldots, C_m(s, a, s')\}$, each of which gives the cost an agent receives for a transition $(s, a, s')$ (analogous to the reward function). Let $C_i(\tau) = \sum_{t=0}^{\infty} \gamma^t C_i(s_t, a_t, s_{t+1})$ denote the cumulative cost along a trajectory $\tau$ generated by policy $\pi$. We consider a trajectory-based cost constraint in CMDP, which bounds the expected cumulative cost $J_{C_i} = \mathbb{E}_{\tau \sim \pi, s_0 \sim \mu}[C_i(\tau)]$ by a value $d_i$. Safe RL then aims to learn the policy $\pi$ under CMDP by solving the following problem:

$$\pi^* = \arg\max_\pi J(\pi), \quad \text{s.t.} \quad J_{C_i} = \mathbb{E}_{\tau \sim \pi, s_0 \sim \mu}[C_i(\tau)] \le d_i, \quad i = 1, \ldots, m. \quad (1)$$

Safe RL algorithms search for the policy $\pi^*$ that achieves the maximal cumulative reward while not violating the imposed safety constraints on the costs.
In the following, analogous to the definitions of the value functions (i.e., $V_\pi$, $Q_\pi$ and $A_\pi$), we use $V^{C_i}_\pi$, $Q^{C_i}_\pi$ and $A^{C_i}_\pi$ to denote the cost-value functions w.r.t. cost function $C_i$.
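As a concrete reference for the worst-case constraint used by WCMDP, the following sketch (our illustration, not the paper's estimator) computes the empirical CVaR of sampled trajectory costs, i.e., the mean over the worst $(1-\alpha)$ fraction of trajectories:

```python
import numpy as np

def empirical_cvar(trajectory_costs, alpha=0.9):
    """Mean cost of the worst (1 - alpha) fraction of sampled trajectories.

    Constraining this tail statistic, rather than the plain expectation
    E[C_i(tau)], yields a more conservative (safer) feasible set.
    """
    costs = np.sort(np.asarray(trajectory_costs, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * len(costs))))
    return float(costs[-k:].mean())
```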
The authors propose to improve a safe RL algorithm, constrained policy optimization, so that it can learn the optimal safe policy while exploring unsafe states less often during the training process. In particular, they dampen the estimated advantage associated with unsafe states, which encourages the RL algorithm to explore safe states more often during learning. In addition, the authors aim to find a policy that satisfies the constraints with high probability, rather than only in expectation, by considering worst-case constraints. The empirical results show that a safe RL algorithm that dampens the advantage and respects worst-case constraints is able to learn policies with large returns while avoiding unsafe states.
SP:d7c00cd82b5d4cd035635e74b8525cf5603d305b
Combining Label Propagation and Simple Models out-performs Graph Neural Networks
1 INTRODUCTION. Following the success of neural networks in computer vision and natural language processing, there is now a wide range of graph neural networks (GNNs) for making predictions involving relational data (Battaglia et al., 2018; Wu et al., 2020). These models have had much success and sit atop leaderboards such as the Open Graph Benchmark (Hu et al., 2020). Often, the methodological developments for GNNs revolve around creating strictly more expressive architectures than basic variants such as the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) or GraphSAGE (Hamilton et al., 2017a); examples include Graph Attention Networks (Veličković et al., 2018), Graph Isomorphism Networks (Xu et al., 2018), and various deep models (Li et al., 2019; Rong et al., 2019; Chen et al., 2020). Many ideas for new GNN architectures are adapted from new architectures in models for language (e.g., attention) or vision (e.g., deep CNNs) with the hope that the success will translate to graphs. However, as these models become more complex, understanding their performance gains is a major challenge, and scaling them to large datasets is difficult. Here, we see how far we can get by combining much simpler models, with an emphasis on understanding where there are easy opportunities for performance improvements in graph learning, particularly transductive node classification. We propose a simple pipeline with three main parts (Figure 1): (i) a base prediction made with node features that ignores the graph structure (e.g., a shallow multi-layer perceptron or just a linear model); (ii) a correction step, which propagates uncertainties from the training data across the graph to correct the base prediction; and (iii) a smoothing of the predictions over the graph. Steps (ii) and (iii) are post-processing steps and are implemented with classical methods for graph-based semi-supervised learning, namely label propagation techniques (Zhu, 2005). With a few modifications and a new deployment of these classic ideas, we achieve state-of-the-art performance on several node classification tasks, outperforming big GNN models. In our framework, the graph structure is not used to learn parameters (which is done in step (i)) but instead as a post-processing mechanism. This simplicity leads to models with orders of magnitude fewer parameters that take orders of magnitude less time to train and can easily scale to large graphs. We can also combine our ideas with state-of-the-art GNNs, although the performance gains are modest. A major source of our performance improvements is directly using labels for predictions. This idea is not new: early diffusion-based semi-supervised learning algorithms on graphs such as the spectral graph transducer (Joachims, 2003), Gaussian random field models (Zhu et al., 2003), and label spreading (Zhou et al., 2004) all use this idea. However, the motivation for these methods was semi-supervised learning on point cloud data, so the "node features" were used to construct the graph itself. Since then, these techniques have been used for learning on relational data consisting of a graph and some labels but no node features (Koutra et al., 2011; Gleich & Mahoney, 2015; Peel, 2017; Chin et al., 2019); however, they have largely been ignored in the context of GNNs.
( That being said , we still find that even simple label propagation , which ignores features , does surprisingly well on a number of benchmarks . ) This provides motivation for combining two orthogonal sources of prediction power — one coming from the node features ( ignoring graph structure ) and one coming from using the known labels directly in predictions . Recent research connects GNNs to label propagation ( Wang & Leskovec , 2020 ; Jia & Benson , 2020 ; 2021 ) as well as Markov Random fields ( Qu et al. , 2019 ; Gao et al. , 2019 ) , and some techniques use ad hoc incorporation of label information in the features ( Shi et al. , 2020 ) . However , these approaches are usually still expensive to train , while we use label propagation in two understandable and low-cost ways . We start with a cheap “ base prediction ” from a model that uses only node features and ignores the graph structure . After , we use label propagation for error correction and then to smooth final predictions . These post-processing steps are based on the fact that errors and labels on connected nodes tend to be positively correlated . Assuming similarity between connected nodes is at the center of much network analysis and corresponds to homophily or assortative mixing ( McPherson et al. , 2001 ; Newman , 2003 ; Easley & Kleinberg , 2010 ) . In the semi-supervised learning literature , the analog is the smoothness or cluster assumption ( Chapelle et al. , 2003 ; Zhu , 2005 ) . The good performance of label propagation that we see across a wide variety of datasets suggests that these correlations hold on common benchmarks . 1One of the main methods that we use ( Zhou et al. , 2004 ) is often called label spreading . The term “ label propagation ” is used in a variety of contexts ( Zhu , 2005 ; Wang & Zhang , 2007 ; Raghavan et al. , 2007 ; Gleich & Mahoney , 2015 ) . The salient point for this paper is that we assume positive correlations on neighboring nodes and that the algorithms work by “ propagating ” information from one node to another . Overall , our methodology demonstrates that combining several simple ideas yields excellent performance in transductive node classification at a fraction of the cost , in terms of both model size ( i.e. , number of parameters ) and training time . For example , on the OGB-Products benchmark , we out-perform the current best-known GNN with more than two orders of magnitude fewer parameters and more than two orders of magnitude less training time . However , our goal is not to say that current graph learning methods are poor or inappropriate . Instead , we aim to highlight easier ways in which to improve prediction performance in graph learning and to better understand the source of performance gains . Our main finding is that more direct incorporation of labels into the learning algorithms is key . We hope that our approach spurs new ideas that can help in other graph learning tasks , such as inductive node classification , link prediction , and graph prediction . 1.1 ADDITIONAL RELATED WORK . The Approximate Personalized Propagation of Neural Predictions ( APPNP ) framework is most relevant to our work , as they also smooth base predictions ( Klicpera et al. , 2018 ) . However , they focus on integrating this smoothing into the training process so that their model can be trained end to end . Not only is this significantly more computationally expensive , it also prevents APPNP from incorporating label information at inference . 
Compared to APPNP, our framework produces more accurate predictions, is faster to train, and more easily scales to large datasets. That being said, APPNP can also be used without end-to-end training, which can make it faster but less accurate. Our framework also complements the Simplified Graph Convolution (Wu et al., 2019) and other algorithms designed to increase scalability (Bojchevski et al., 2020; Zeng et al., 2019; Frasca et al., 2020). The primary focus of our approach, however, is using labels directly, and scalability is a byproduct. There is also prior work connecting GCNs and label propagation. Wang & Leskovec (2020) use label propagation as a pre-processing step to weight edges for GNNs, whereas we use label propagation as a post-processing step and avoid GNNs. Jia & Benson (2020; 2021) use label propagation with GNNs for regression tasks, and our error correction step adapts some of their ideas to the case of classification. Finally, there are several recent approaches that incorporate nonlinearity into label propagation methods to compete with GNNs and achieve scalability (Eliav & Cohen, 2018; Ibrahim & Gleich, 2019; Tudisco et al., 2021), but these methods focus on settings with low label rates and do not use feature-based learning.

2 CORRECT AND SMOOTH (C&S) MODEL. We start with some notation. We assume that we have an undirected graph $G = (V, E)$ with $n = |V|$ nodes, whose features are represented by a matrix $X \in \mathbb{R}^{n \times p}$. Let $A$ be the adjacency matrix of the graph, $D$ the diagonal degree matrix, and $S = D^{-1/2} A D^{-1/2}$ the normalized adjacency matrix. For the prediction problem, the node set $V$ is split into a disjoint set of unlabeled nodes $U$ and labeled nodes $L$, which are subsets of the indices $\{1, \ldots, n\}$. We further split the labeled nodes into a training set $L_t$ and a validation set $L_v$. We represent the labels by a one-hot encoding matrix $Y \in \mathbb{R}^{n \times c}$, where $c$ is the number of classes (i.e., $Y_{ij} = 1$ if $i \in L$ is known to be in class $j$, and 0 otherwise; the $i$th row of $Y$ is all zeros if $i \in U$). Our problem is transductive node classification: assign each node $j \in U$ a label in $\{1, \ldots, c\}$, given $G$, $X$, and $Y$. Our approach starts with a simple base predictor on node features that does not rely on any learning over the graph. After that, we perform two types of label propagation (LP): one that corrects the base predictions by modeling correlated error and one that smooths the final prediction. We call the combination of these two methods Correct and Smooth (C&S; Figure 1). The LPs are only post-processing steps, and our pipeline is not trained end-to-end. Furthermore, the graph is used only in the post-processing steps (and in a pre-processing step to augment the features $X$), but not for the base predictions. This makes training fast and scalable compared to standard GNN models. Moreover, we take advantage of both LP (which performs fairly well on its own without features) and the node features. We find that combining these complementary signals yields excellent predictions.

2.1 SIMPLE BASE PREDICTOR. To start, we use a simple base predictor that does not rely on the graph structure.
More specifically, we train a model $f$ to minimize $\sum_{i \in L_t} \ell(f(x_i), y_i)$, where $x_i$ is the $i$th row of $X$, $y_i$ is the $i$th row of $Y$, and $\ell$ is a loss function. For this paper, $f$ is either a linear model or a shallow multi-layer perceptron (MLP) followed by a softmax, and $\ell$ is the cross-entropy loss. The validation set $L_v$ is used to tune hyperparameters such as learning rates and the hidden layer dimensions of the MLP. From $f$, we get a base prediction $Z \in \mathbb{R}^{n \times c}$, where each row of $Z$ is a probability distribution resulting from the softmax. Omitting the graph structure for these base predictions avoids most of the scalability issues of GNNs. In principle, though, we can use any base predictor for $Z$, including those based on GNNs, and we explore this in Section 3. However, to keep our pipeline simple and scalable, we just use linear classifiers or MLPs, with subsequent post-processing, which we describe next.
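As a concrete reference for the propagation primitive behind the two post-processing steps, here is a minimal sketch (ours) of the normalized adjacency $S = D^{-1/2} A D^{-1/2}$ and the classic label-spreading iteration of Zhou et al. (2004). The exact "correct" and "smooth" operators of C&S are defined later in the paper and differ in what they propagate; `alpha` and the iteration count are placeholder choices:

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    """S = D^{-1/2} A D^{-1/2} for a sparse symmetric adjacency matrix A."""
    deg = np.asarray(A.sum(axis=1)).ravel()
    with np.errstate(divide="ignore"):
        d_inv_sqrt = 1.0 / np.sqrt(deg)
    d_inv_sqrt[~np.isfinite(d_inv_sqrt)] = 0.0  # guard isolated nodes
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    return D_inv_sqrt @ A @ D_inv_sqrt

def label_spread(S, Z0, alpha=0.8, num_iters=50):
    """Iterate Z <- alpha * S Z + (1 - alpha) * Z0 (Zhou et al., 2004).

    Converges to (1 - alpha)(I - alpha S)^{-1} Z0; Z0 can hold one-hot
    training labels or base predictions, one row per node.
    """
    Z = Z0.copy()
    for _ in range(num_iters):
        Z = alpha * (S @ Z) + (1 - alpha) * Z0
    return Z
```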
This paper presents the C&S method, which predicts node labels in the transductive semi-supervised node classification setting. C&S uses a three-stage pipeline. First, label probabilities are predicted with simple and scalable classifiers such as an MLP. Then, the prediction errors are diffused over the graph. Finally, the labels are further smoothed to give the final node label prediction. The authors demonstrate that their simple C&S approach beats many existing GNN approaches.
SP:87fb323fc2a1b385c9a695c7669f509c835ef0aa
Combining Label Propagation and Simple Models out-performs Graph Neural Networks
1 INTRODUCTION. Following the success of neural networks in computer vision and natural language processing, there is now a wide range of graph neural networks (GNNs) for making predictions involving relational data (Battaglia et al., 2018; Wu et al., 2020). These models have had much success and sit atop leaderboards such as the Open Graph Benchmark (Hu et al., 2020). Often, the methodological developments for GNNs revolve around creating strictly more expressive architectures than basic variants such as the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) or GraphSAGE (Hamilton et al., 2017a); examples include Graph Attention Networks (Veličković et al., 2018), Graph Isomorphism Networks (Xu et al., 2018), and various deep models (Li et al., 2019; Rong et al., 2019; Chen et al., 2020). Many ideas for new GNN architectures are adapted from new architectures in models for language (e.g., attention) or vision (e.g., deep CNNs) with the hope that the success will translate to graphs. However, as these models become more complex, understanding their performance gains is a major challenge, and scaling them to large datasets is difficult. Here, we see how far we can get by combining much simpler models, with an emphasis on understanding where there are easy opportunities for performance improvements in graph learning, particularly transductive node classification. We propose a simple pipeline with three main parts (Figure 1): (i) a base prediction made with node features that ignores the graph structure (e.g., a shallow multi-layer perceptron or just a linear model); (ii) a correction step, which propagates uncertainties from the training data across the graph to correct the base prediction; and (iii) a smoothing of the predictions over the graph. Steps (ii) and (iii) are post-processing steps and are implemented with classical methods for graph-based semi-supervised learning, namely label propagation techniques (Zhu, 2005). With a few modifications and a new deployment of these classic ideas, we achieve state-of-the-art performance on several node classification tasks, outperforming big GNN models. In our framework, the graph structure is not used to learn parameters (which is done in step (i)) but instead as a post-processing mechanism. This simplicity leads to models with orders of magnitude fewer parameters that take orders of magnitude less time to train and can easily scale to large graphs. We can also combine our ideas with state-of-the-art GNNs, although the performance gains are modest. A major source of our performance improvements is directly using labels for predictions. This idea is not new: early diffusion-based semi-supervised learning algorithms on graphs such as the spectral graph transducer (Joachims, 2003), Gaussian random field models (Zhu et al., 2003), and label spreading (Zhou et al., 2004) all use this idea. However, the motivation for these methods was semi-supervised learning on point cloud data, so the "node features" were used to construct the graph itself. Since then, these techniques have been used for learning on relational data consisting of a graph and some labels but no node features (Koutra et al., 2011; Gleich & Mahoney, 2015; Peel, 2017; Chin et al., 2019); however, they have largely been ignored in the context of GNNs.
( That being said , we still find that even simple label propagation , which ignores features , does surprisingly well on a number of benchmarks . ) This provides motivation for combining two orthogonal sources of prediction power — one coming from the node features ( ignoring graph structure ) and one coming from using the known labels directly in predictions . Recent research connects GNNs to label propagation ( Wang & Leskovec , 2020 ; Jia & Benson , 2020 ; 2021 ) as well as Markov Random fields ( Qu et al. , 2019 ; Gao et al. , 2019 ) , and some techniques use ad hoc incorporation of label information in the features ( Shi et al. , 2020 ) . However , these approaches are usually still expensive to train , while we use label propagation in two understandable and low-cost ways . We start with a cheap “ base prediction ” from a model that uses only node features and ignores the graph structure . After , we use label propagation for error correction and then to smooth final predictions . These post-processing steps are based on the fact that errors and labels on connected nodes tend to be positively correlated . Assuming similarity between connected nodes is at the center of much network analysis and corresponds to homophily or assortative mixing ( McPherson et al. , 2001 ; Newman , 2003 ; Easley & Kleinberg , 2010 ) . In the semi-supervised learning literature , the analog is the smoothness or cluster assumption ( Chapelle et al. , 2003 ; Zhu , 2005 ) . The good performance of label propagation that we see across a wide variety of datasets suggests that these correlations hold on common benchmarks . 1One of the main methods that we use ( Zhou et al. , 2004 ) is often called label spreading . The term “ label propagation ” is used in a variety of contexts ( Zhu , 2005 ; Wang & Zhang , 2007 ; Raghavan et al. , 2007 ; Gleich & Mahoney , 2015 ) . The salient point for this paper is that we assume positive correlations on neighboring nodes and that the algorithms work by “ propagating ” information from one node to another . Overall , our methodology demonstrates that combining several simple ideas yields excellent performance in transductive node classification at a fraction of the cost , in terms of both model size ( i.e. , number of parameters ) and training time . For example , on the OGB-Products benchmark , we out-perform the current best-known GNN with more than two orders of magnitude fewer parameters and more than two orders of magnitude less training time . However , our goal is not to say that current graph learning methods are poor or inappropriate . Instead , we aim to highlight easier ways in which to improve prediction performance in graph learning and to better understand the source of performance gains . Our main finding is that more direct incorporation of labels into the learning algorithms is key . We hope that our approach spurs new ideas that can help in other graph learning tasks , such as inductive node classification , link prediction , and graph prediction . 1.1 ADDITIONAL RELATED WORK . The Approximate Personalized Propagation of Neural Predictions ( APPNP ) framework is most relevant to our work , as they also smooth base predictions ( Klicpera et al. , 2018 ) . However , they focus on integrating this smoothing into the training process so that their model can be trained end to end . Not only is this significantly more computationally expensive , it also prevents APPNP from incorporating label information at inference . 
Compared to APPNP, our framework produces more accurate predictions, is faster to train, and more easily scales to large datasets. That being said, APPNP can also be used without end-to-end training, which can make it faster but less accurate. Our framework also complements the Simplified Graph Convolution (Wu et al., 2019) and other algorithms designed to increase scalability (Bojchevski et al., 2020; Zeng et al., 2019; Frasca et al., 2020). The primary focus of our approach, however, is using labels directly, and scalability is a byproduct. There is also prior work connecting GCNs and label propagation. Wang & Leskovec (2020) use label propagation as a pre-processing step to weight edges for GNNs, whereas we use label propagation as a post-processing step and avoid GNNs. Jia & Benson (2020; 2021) use label propagation with GNNs for regression tasks, and our error correction step adapts some of their ideas to the case of classification. Finally, there are several recent approaches that incorporate nonlinearity into label propagation methods to compete with GNNs and achieve scalability (Eliav & Cohen, 2018; Ibrahim & Gleich, 2019; Tudisco et al., 2021), but these methods focus on settings with low label rates and do not use feature-based learning.

2 CORRECT AND SMOOTH (C&S) MODEL. We start with some notation. We assume that we have an undirected graph $G = (V, E)$ with $n = |V|$ nodes, whose features are represented by a matrix $X \in \mathbb{R}^{n \times p}$. Let $A$ be the adjacency matrix of the graph, $D$ the diagonal degree matrix, and $S = D^{-1/2} A D^{-1/2}$ the normalized adjacency matrix. For the prediction problem, the node set $V$ is split into a disjoint set of unlabeled nodes $U$ and labeled nodes $L$, which are subsets of the indices $\{1, \ldots, n\}$. We further split the labeled nodes into a training set $L_t$ and a validation set $L_v$. We represent the labels by a one-hot encoding matrix $Y \in \mathbb{R}^{n \times c}$, where $c$ is the number of classes (i.e., $Y_{ij} = 1$ if $i \in L$ is known to be in class $j$, and 0 otherwise; the $i$th row of $Y$ is all zeros if $i \in U$). Our problem is transductive node classification: assign each node $j \in U$ a label in $\{1, \ldots, c\}$, given $G$, $X$, and $Y$. Our approach starts with a simple base predictor on node features that does not rely on any learning over the graph. After that, we perform two types of label propagation (LP): one that corrects the base predictions by modeling correlated error and one that smooths the final prediction. We call the combination of these two methods Correct and Smooth (C&S; Figure 1). The LPs are only post-processing steps, and our pipeline is not trained end-to-end. Furthermore, the graph is used only in the post-processing steps (and in a pre-processing step to augment the features $X$), but not for the base predictions. This makes training fast and scalable compared to standard GNN models. Moreover, we take advantage of both LP (which performs fairly well on its own without features) and the node features. We find that combining these complementary signals yields excellent predictions.

2.1 SIMPLE BASE PREDICTOR. To start, we use a simple base predictor that does not rely on the graph structure.
More specifically, we train a model $f$ to minimize $\sum_{i \in L_t} \ell(f(x_i), y_i)$, where $x_i$ is the $i$th row of $X$, $y_i$ is the $i$th row of $Y$, and $\ell$ is a loss function. For this paper, $f$ is either a linear model or a shallow multi-layer perceptron (MLP) followed by a softmax, and $\ell$ is the cross-entropy loss. The validation set $L_v$ is used to tune hyperparameters such as learning rates and the hidden layer dimensions of the MLP. From $f$, we get a base prediction $Z \in \mathbb{R}^{n \times c}$, where each row of $Z$ is a probability distribution resulting from the softmax. Omitting the graph structure for these base predictions avoids most of the scalability issues of GNNs. In principle, though, we can use any base predictor for $Z$, including those based on GNNs, and we explore this in Section 3. However, to keep our pipeline simple and scalable, we just use linear classifiers or MLPs, with subsequent post-processing, which we describe next.
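To illustrate the "correct" step described above, here is one plausible sketch (ours): the residual $E = Z - Y$ is known on training nodes and, by the homophily assumption, positively correlated along edges, so it can be propagated over the graph and subtracted from the base predictions. The propagation coefficient, iteration count, and `scale` factor are illustrative assumptions; the paper's exact scaling of the propagated error differs:

```python
import numpy as np

def correct_step(S, Z, Y, train_idx, alpha=0.8, num_iters=50, scale=1.0):
    # Seed the residual with the known errors on training nodes only,
    # spread it over the graph, then subtract the smoothed estimate
    # from the base predictions Z.
    E0 = np.zeros_like(Z)
    E0[train_idx] = Z[train_idx] - Y[train_idx]
    E = E0.copy()
    for _ in range(num_iters):
        E = alpha * (S @ E) + (1 - alpha) * E0
    return Z - scale * E
```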
This paper shows that modified label propagation can perform better than GCNs. The idea is as follows: the method first uses an MLP on node features to get initial label predictions, and then applies two post-processing steps, correction and smoothing, based on the traditional label propagation algorithm. It shows that this simple method matches GCN performance on various datasets.
SP:87fb323fc2a1b385c9a695c7669f509c835ef0aa
Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences
1 INTRODUCTION . Graph neural networks ( GNNs ) have shown effectiveness in many fields with rich relational structures , such as citation networks ( Kipf & Welling , 2016 ; Veličković et al. , 2018 ) , social networks ( Hamilton et al. , 2017 ) , drug discovery ( Gilmer et al. , 2017 ; Stokes et al. , 2020 ) , physical systems ( Battaglia et al. , 2016 ) , and point clouds ( Wang et al. , 2019 ) . Most current GNNs follow a message passing scheme ( Gilmer et al. , 2017 ; Battaglia et al. , 2018 ) , in which the representation of each node is recursively updated by aggregating the representations of its neighbors . Various GNNs ( Li et al. , 2016 ; Kipf & Welling , 2016 ; Veličković et al. , 2018 ; Xu et al. , 2019 ) mainly differ in the forms of aggregation functions . Real-world applications usually generate massive graphs , such as social networks . However , message passing methods have difficulties in handling such large graphs as the recursive message passing mechanism leads to prohibitive computation and memory requirements . To date , sampling methods ( Hamilton et al. , 2017 ; Ying et al. , 2018 ; Chen et al. , 2018a ; b ; Huang et al. , 2018 ; Zou et al. , 2019 ; Zeng et al. , 2020 ; Gao et al. , 2018 ; Chiang et al. , 2019 ; Zeng et al. , 2020 ) and precomputing methods ( Wu et al. , 2019 ; Rossi et al. , 2020 ; Bojchevski et al. , 2020 ) have been proposed to scale GNNs on large graphs . While the sampling methods can speed up training , they might result in redundancy , still incur high computational complexity , lead to loss of performance , or introduce bias ( see Section 2.2 ) . Generally , precomputing methods can scale to larger graphs as compared to sampling methods as recursive message passing is still required in sampling methods . In this work , we propose the Neighbor2Seq that transforms the hierarchical neighborhood of each node to a sequence in a precomputing step . After the Neighbor2Seq transformation , each node and its associated neighborhood tree are converted to an ordered sequence . Therefore , each node can be viewed as an independent sample and is no longer constrained by the topological structure . This novel transformation from graphs to grid-like data enables the use of mini-batch training for subsequent models . As a result , our models can be used on extremely large graphs , as long as the Neighbor2Seq step can be precomputed . As a radical departure from existing precomputing methods , we consider the hierarchical neighborhood of each node as an ordered sequence . The order information corresponds to hops between nodes . As a result of our Neighbor2Seq transformation , generic deep learning operations for gridlike data , such as convolution and attention , can be applied in subsequent models . In addition , our Neighbor2Seq can alleviate the over-squashing issue ( Alon & Yahav , 2020 ) suffered by current GNNs . Experimental results indicate that our proposed method can be used on a massive graph , where most current methods can not be applied . Furthermore , our method achieves superior performance as compared with previous sampling and precomputing methods . 2 ANALYSIS OF CURRENT GRAPH NEURAL NETWORK METHODS . We start by defining necessary notations . A graph is formally defined as G = ( V , E ) , where V is the set of nodes and E ⊆ V × V is the set of edges . We use n = |V | and m = |E| to denote the numbers of nodes and edges , respectively . The nodes are indexed from 1 to n. 
We consider a node feature matrix X ∈ Rn×d , where each row xi ∈ Rd is the d-dimensional feature vector associated with node i . The topology information of the graph is encoded in the adjacency matrix A ∈ Rn×n , whereA ( i , j ) = 1 if an edge exists between node i and node j , andA ( i , j ) = 0 otherwise . 2.1 GRAPH NEURAL NETWORKS VIA MESSAGE PASSING . There are two primary deep learning methods on graphs ( Bronstein et al . ) ; those are , spectral methods and spatial methods . The spectral method in Bruna et al . ( 2014 ) extends convolutional neural networks ( LeCun et al. , 1989 ) to the graph domain based on the spectrum of the graph Laplacian . The main limitation of spectral methods is the high complexity . ChebNet ( Defferrard et al. , 2016 ) and GCN ( Kipf & Welling , 2016 ) simplify the spectral methods and can be understood from the spatial perspective . In this work , we focus on the analysis of the current mainstream spatial methods . Generally , most existing spatial methods , such as ChebNet ( Defferrard et al. , 2016 ) , GCN ( Kipf & Welling , 2016 ) , GG-NN ( Li et al. , 2016 ) , GAT ( Veličković et al. , 2018 ) , and GIN ( Xu et al. , 2019 ) , can be understood from the message passing perspective ( Gilmer et al. , 2017 ; Battaglia et al. , 2018 ) . Specifically , we iteratively update node representations by aggregating representations from its immediate neighbors . These message passing methods have been shown to be effective in many fields . However , they have inherent difficulties when applied on large graphs due to their excessive computation and memory requirements , as described in Section 2.2 . 2.2 GRAPH NEURAL NETWORKS ON LARGE GRAPHS . The above message passing methods are often trained in full batch . This requires the whole graph , i.e. , all the node representations and edge connections , to be in memory to allow recursive message passing on the whole graph . Usually , the number of neighbors grows very rapidly with the increase of receptive field . Hence , these methods can not be applied directly on large-scale graphs due to the prohibitive requirements on computation and memory . To enable deep learning on large graphs , two families of methods have been proposed ; those are methods based on sampling and precomputing . To circumvent the recursive expansion of neighbors across layers , sampling methods apply GNNs on a sampled subset of nodes with mini-batch training . Sampling methods can be further divided into three categories . First , node-wise sampling methods perform message passing for each node in its sampled neighborhood . This strategy is first proposed in GraphSAGE ( Hamilton et al. , 2017 ) , where neighbors are randomly sampled . This is extended by PinSAGE ( Ying et al. , 2018 ) , which selects neighbors based on random walks . VR-GCN ( Chen et al. , 2018a ) further proposes to use variance reduction techniques to obtain a convergence guarantee . Although these node-wise sampling methods can reduce computation , the remaining computation is still very expensive and some redundancy might have been introduced , as described in Huang et al . ( 2018 ) . Second , layer-wise sampling methods sample a fixed number of nodes for each layer . In particular , FastGCN ( Chen et al. , 2018b ) samples a fixed number of nodes for each layer independently based on the degree of each node . AS-GCN ( Huang et al. , 2018 ) and LADIES ( Zou et al. , 2019 ) introduce between-layer dependencies during sampling , thus alleviating the loss of information . 
Layer-wise sampling methods can avoid the redundancy introduced by node-wise sampling methods. However, the expensive sampling algorithms that aim to ensure performance may themselves incur high computational cost, as pointed out in Zeng et al. (2020). Third, graph-wise sampling methods build mini-batches on sampled subgraphs. Specifically, LGCN (Gao et al., 2018) proposes to leverage mini-batch training on subgraphs selected by breadth-first-search algorithms. ClusterGCN (Chiang et al., 2019) conducts mini-batch training on sampled subgraphs that are obtained by a graph clustering algorithm. GraphSAINT (Zeng et al., 2020) proposes to derive subgraphs by importance sampling and introduces normalization techniques to eliminate biases. These graph-wise sampling methods usually have high efficiency. The main limitation is that the nodes in a sampled subgraph are usually clustered together. This implies that two distant nodes in the original graph usually cannot be fed into the GNN in the same mini-batch during training, potentially leading to bias in the trained models. The second family of methods for enabling GNN training on large graphs is based on precomputing. Specifically, SGC (Wu et al., 2019) removes the non-linearity between GCN layers, resulting in the simplification $Y = \mathrm{softmax}(\hat{A}^L X W)$. In this formulation, $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ is the symmetrically normalized adjacency matrix, $\tilde{A} = A + I$ is the adjacency matrix with self-loops, $\tilde{D}$ is the corresponding diagonal node degree matrix with $\tilde{D}(i, i) = \sum_j \tilde{A}(i, j)$, $L$ is the size of the receptive field (i.e., the number of considered neighboring hops), which is the same as in an $L$-layer GCN, and $Y$ is the output of the softmax classifier. Since there are no learnable parameters in $\hat{A}^L X$, this term can be precomputed as a feature pre-processing step. Similarly, SIGN (Rossi et al., 2020) applies an inception-like model to the precomputed features $\hat{A}^{\ell} X$ for $\ell \in \{1, \cdots, L\}$, where $L$ is the predefined size of the receptive field. Instead of precomputing the smoothed features as in SGC and SIGN, PPRGo (Bojchevski et al., 2020) extends the idea of PPNP (Klicpera et al., 2018) by approximately precomputing the personalized PageRank (Page et al., 1999) matrix, thereby enabling model training on large graphs using mini-batches. Generally, the precomputing methods can scale to larger graphs than sampling methods because the latter still need to perform recursive message passing during training. Differing from these precomputing methods, we consider the hierarchical neighborhood of each node as an ordered sequence, thus retaining useful information about hops between nodes and enabling subsequent powerful and efficient operations.

3 THE PROPOSED NEIGHBOR2SEQ METHOD AND ANALYSIS. In this section, we describe our proposed method, known as Neighbor2Seq, which transforms the hierarchical neighborhood of each node into an ordered sequence, thus enabling the subsequent use of general deep learning operations. We analyze the scalability of our method (see Section 3.5) and describe how our method can alleviate the over-squashing issue suffered by current message passing methods (see Section 3.6).

3.1 OVERVIEW. As described in Section 2.1, existing message passing methods recursively update each node's representation by aggregating information from its immediate neighbors.
Hence , what these methods aim at capturing for each node is essentially its corresponding hierarchical neighborhood , i.e. , the neighborhood tree rooted at current node , as illustrated in Figure 1 ( b ) . In this work , we attempt to go beyond the message passing scheme to overcome the limitations mentioned in Section 2 . We propose to capture the information of this hierarchical neighborhood by transforming it into an ordered sequence , instead of recursively squashing it into a fixed-length vector . Our proposed method is composed of three steps . First , we transform a neighborhood to a sequence for each node . Second , we apply a normalization technique to the derived sequence features . Third , we use general deep learning operations , i.e. , convolution and attention , to learn from these sequence features and then make predictions for nodes . In the following , we describe these three steps in detail .
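The first step admits a simple precomputation sketch (ours; the paper's exact per-hop aggregation may differ): stack the hop-wise aggregated features $[X, \hat{A}X, \hat{A}^2X, \ldots, \hat{A}^LX]$ so that each node owns an ordered sequence of length $L+1$, after which nodes can be shuffled into mini-batches like independent samples:

```python
import numpy as np

def neighbor2seq_features(A_hat, X, num_hops):
    """Precompute per-node hop sequences [X, A_hat X, ..., A_hat^L X].

    Returns an array of shape (n, num_hops + 1, d): row i is the ordered
    sequence for node i, with position l holding the aggregated features
    of its l-hop neighborhood. After this offline step, each node is an
    independent sample for mini-batch training.
    """
    seq = [np.asarray(X, dtype=float)]
    H = seq[0]
    for _ in range(num_hops):
        H = A_hat @ H  # one more hop of (normalized) neighbor aggregation
        seq.append(H)
    return np.stack(seq, axis=1)
```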
The paper proposes a method called Neighbor2Seq that converts the hierarchical neighborhood of each node into a sequence, replacing recursive message passing in graph neural networks with a precomputation. The proposed method aims to mitigate the excessive computation and memory requirements of training graph neural networks. The proposed models, Neighbor2Seq+Conv and Neighbor2Seq+Attn, are tested on several datasets, including a large-scale benchmark dataset (ogbn-papers100M). The results show some improvement, especially on ogbn-papers100M, while the improvement is less pronounced on the other datasets.
SP:6dbb656031537976500fc17775a52c782ef46729
Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences
1 INTRODUCTION . Graph neural networks ( GNNs ) have shown effectiveness in many fields with rich relational structures , such as citation networks ( Kipf & Welling , 2016 ; Veličković et al. , 2018 ) , social networks ( Hamilton et al. , 2017 ) , drug discovery ( Gilmer et al. , 2017 ; Stokes et al. , 2020 ) , physical systems ( Battaglia et al. , 2016 ) , and point clouds ( Wang et al. , 2019 ) . Most current GNNs follow a message passing scheme ( Gilmer et al. , 2017 ; Battaglia et al. , 2018 ) , in which the representation of each node is recursively updated by aggregating the representations of its neighbors . Various GNNs ( Li et al. , 2016 ; Kipf & Welling , 2016 ; Veličković et al. , 2018 ; Xu et al. , 2019 ) mainly differ in the forms of aggregation functions . Real-world applications usually generate massive graphs , such as social networks . However , message passing methods have difficulties in handling such large graphs as the recursive message passing mechanism leads to prohibitive computation and memory requirements . To date , sampling methods ( Hamilton et al. , 2017 ; Ying et al. , 2018 ; Chen et al. , 2018a ; b ; Huang et al. , 2018 ; Zou et al. , 2019 ; Zeng et al. , 2020 ; Gao et al. , 2018 ; Chiang et al. , 2019 ; Zeng et al. , 2020 ) and precomputing methods ( Wu et al. , 2019 ; Rossi et al. , 2020 ; Bojchevski et al. , 2020 ) have been proposed to scale GNNs on large graphs . While the sampling methods can speed up training , they might result in redundancy , still incur high computational complexity , lead to loss of performance , or introduce bias ( see Section 2.2 ) . Generally , precomputing methods can scale to larger graphs as compared to sampling methods as recursive message passing is still required in sampling methods . In this work , we propose the Neighbor2Seq that transforms the hierarchical neighborhood of each node to a sequence in a precomputing step . After the Neighbor2Seq transformation , each node and its associated neighborhood tree are converted to an ordered sequence . Therefore , each node can be viewed as an independent sample and is no longer constrained by the topological structure . This novel transformation from graphs to grid-like data enables the use of mini-batch training for subsequent models . As a result , our models can be used on extremely large graphs , as long as the Neighbor2Seq step can be precomputed . As a radical departure from existing precomputing methods , we consider the hierarchical neighborhood of each node as an ordered sequence . The order information corresponds to hops between nodes . As a result of our Neighbor2Seq transformation , generic deep learning operations for gridlike data , such as convolution and attention , can be applied in subsequent models . In addition , our Neighbor2Seq can alleviate the over-squashing issue ( Alon & Yahav , 2020 ) suffered by current GNNs . Experimental results indicate that our proposed method can be used on a massive graph , where most current methods can not be applied . Furthermore , our method achieves superior performance as compared with previous sampling and precomputing methods . 2 ANALYSIS OF CURRENT GRAPH NEURAL NETWORK METHODS . We start by defining necessary notations . A graph is formally defined as G = ( V , E ) , where V is the set of nodes and E ⊆ V × V is the set of edges . We use n = |V | and m = |E| to denote the numbers of nodes and edges , respectively . The nodes are indexed from 1 to n. 
We consider a node feature matrix X ∈ Rn×d , where each row xi ∈ Rd is the d-dimensional feature vector associated with node i . The topology information of the graph is encoded in the adjacency matrix A ∈ Rn×n , whereA ( i , j ) = 1 if an edge exists between node i and node j , andA ( i , j ) = 0 otherwise . 2.1 GRAPH NEURAL NETWORKS VIA MESSAGE PASSING . There are two primary deep learning methods on graphs ( Bronstein et al . ) ; those are , spectral methods and spatial methods . The spectral method in Bruna et al . ( 2014 ) extends convolutional neural networks ( LeCun et al. , 1989 ) to the graph domain based on the spectrum of the graph Laplacian . The main limitation of spectral methods is the high complexity . ChebNet ( Defferrard et al. , 2016 ) and GCN ( Kipf & Welling , 2016 ) simplify the spectral methods and can be understood from the spatial perspective . In this work , we focus on the analysis of the current mainstream spatial methods . Generally , most existing spatial methods , such as ChebNet ( Defferrard et al. , 2016 ) , GCN ( Kipf & Welling , 2016 ) , GG-NN ( Li et al. , 2016 ) , GAT ( Veličković et al. , 2018 ) , and GIN ( Xu et al. , 2019 ) , can be understood from the message passing perspective ( Gilmer et al. , 2017 ; Battaglia et al. , 2018 ) . Specifically , we iteratively update node representations by aggregating representations from its immediate neighbors . These message passing methods have been shown to be effective in many fields . However , they have inherent difficulties when applied on large graphs due to their excessive computation and memory requirements , as described in Section 2.2 . 2.2 GRAPH NEURAL NETWORKS ON LARGE GRAPHS . The above message passing methods are often trained in full batch . This requires the whole graph , i.e. , all the node representations and edge connections , to be in memory to allow recursive message passing on the whole graph . Usually , the number of neighbors grows very rapidly with the increase of receptive field . Hence , these methods can not be applied directly on large-scale graphs due to the prohibitive requirements on computation and memory . To enable deep learning on large graphs , two families of methods have been proposed ; those are methods based on sampling and precomputing . To circumvent the recursive expansion of neighbors across layers , sampling methods apply GNNs on a sampled subset of nodes with mini-batch training . Sampling methods can be further divided into three categories . First , node-wise sampling methods perform message passing for each node in its sampled neighborhood . This strategy is first proposed in GraphSAGE ( Hamilton et al. , 2017 ) , where neighbors are randomly sampled . This is extended by PinSAGE ( Ying et al. , 2018 ) , which selects neighbors based on random walks . VR-GCN ( Chen et al. , 2018a ) further proposes to use variance reduction techniques to obtain a convergence guarantee . Although these node-wise sampling methods can reduce computation , the remaining computation is still very expensive and some redundancy might have been introduced , as described in Huang et al . ( 2018 ) . Second , layer-wise sampling methods sample a fixed number of nodes for each layer . In particular , FastGCN ( Chen et al. , 2018b ) samples a fixed number of nodes for each layer independently based on the degree of each node . AS-GCN ( Huang et al. , 2018 ) and LADIES ( Zou et al. , 2019 ) introduce between-layer dependencies during sampling , thus alleviating the loss of information . 
Layer-wise sampling methods can avoid the redundancy introduced by node-wise sampling methods. However, the expensive sampling algorithms that aim to ensure performance may themselves incur high computational cost, as pointed out in Zeng et al. (2020). Third, graph-wise sampling methods build mini-batches on sampled subgraphs. Specifically, LGCN (Gao et al., 2018) proposes to leverage mini-batch training on subgraphs selected by breadth-first-search algorithms. ClusterGCN (Chiang et al., 2019) conducts mini-batch training on sampled subgraphs that are obtained by a graph clustering algorithm. GraphSAINT (Zeng et al., 2020) proposes to derive subgraphs by importance sampling and introduces normalization techniques to eliminate biases. These graph-wise sampling methods usually have high efficiency. The main limitation is that the nodes in a sampled subgraph are usually clustered together. This implies that two distant nodes in the original graph usually cannot be fed into the GNN in the same mini-batch during training, potentially leading to bias in the trained models. The second family of methods for enabling GNN training on large graphs is based on precomputing. Specifically, SGC (Wu et al., 2019) removes the non-linearity between GCN layers, resulting in the simplification $Y = \mathrm{softmax}(\hat{A}^L X W)$. In this formulation, $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ is the symmetrically normalized adjacency matrix, $\tilde{A} = A + I$ is the adjacency matrix with self-loops, $\tilde{D}$ is the corresponding diagonal node degree matrix with $\tilde{D}(i, i) = \sum_j \tilde{A}(i, j)$, $L$ is the size of the receptive field (i.e., the number of considered neighboring hops), which is the same as in an $L$-layer GCN, and $Y$ is the output of the softmax classifier. Since there are no learnable parameters in $\hat{A}^L X$, this term can be precomputed as a feature pre-processing step. Similarly, SIGN (Rossi et al., 2020) applies an inception-like model to the precomputed features $\hat{A}^{\ell} X$ for $\ell \in \{1, \cdots, L\}$, where $L$ is the predefined size of the receptive field. Instead of precomputing the smoothed features as in SGC and SIGN, PPRGo (Bojchevski et al., 2020) extends the idea of PPNP (Klicpera et al., 2018) by approximately precomputing the personalized PageRank (Page et al., 1999) matrix, thereby enabling model training on large graphs using mini-batches. Generally, the precomputing methods can scale to larger graphs than sampling methods because the latter still need to perform recursive message passing during training. Differing from these precomputing methods, we consider the hierarchical neighborhood of each node as an ordered sequence, thus retaining useful information about hops between nodes and enabling subsequent powerful and efficient operations.

3 THE PROPOSED NEIGHBOR2SEQ METHOD AND ANALYSIS. In this section, we describe our proposed method, known as Neighbor2Seq, which transforms the hierarchical neighborhood of each node into an ordered sequence, thus enabling the subsequent use of general deep learning operations. We analyze the scalability of our method (see Section 3.5) and describe how our method can alleviate the over-squashing issue suffered by current message passing methods (see Section 3.6).

3.1 OVERVIEW. As described in Section 2.1, existing message passing methods recursively update each node's representation by aggregating information from its immediate neighbors.
Hence, what these methods aim to capture for each node is essentially its corresponding hierarchical neighborhood, i.e., the neighborhood tree rooted at the current node, as illustrated in Figure 1(b). In this work, we attempt to go beyond the message passing scheme to overcome the limitations mentioned in Section 2. We propose to capture the information in this hierarchical neighborhood by transforming it into an ordered sequence, instead of recursively squashing it into a fixed-length vector. Our proposed method is composed of three steps. First, we transform the neighborhood of each node into a sequence. Second, we apply a normalization technique to the derived sequence features. Third, we use general deep learning operations, i.e., convolution and attention, to learn from these sequence features and then make predictions for nodes. In the following, we describe these three steps in detail.
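As a rough illustration of the first step, the sketch below builds, for each node, an ordered sequence of hop-wise aggregated features, with hop 0 being the node's own features. This is our own simplification for intuition; the exact aggregation weights used by Neighbor2Seq may differ.

```python
import numpy as np

def neighbor_to_seq(adj, features, num_hops):
    """Sketch of a Neighbor2Seq-style transform.

    adj:      adjacency matrix of shape (n, n), sparse or dense.
    features: node feature matrix X of shape (n, d).
    Returns an array of shape (n, num_hops + 1, d): one ordered sequence
    per node, with one aggregated feature vector per hop.
    """
    seq = [features]                      # hop 0: the node itself
    hop_feat = features
    for _ in range(num_hops):
        hop_feat = adj @ hop_feat         # aggregate features one hop further out
        seq.append(np.asarray(hop_feat))
    return np.stack(seq, axis=1)          # parameter-free, so computable offline
```

Because each node's sequence is an independent fixed-shape tensor, mini-batches can be drawn uniformly over nodes, avoiding both recursive message passing during training and the clustering bias of graph-wise sampling.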
This paper proposes a simple graph neural network architecture that is easy to scale up and supports stochastic training. Instead of performing message passing as in commonly used GNNs, it first forms weighted combinations of node features for each hop of the neighborhood of a center node, and then applies either a CNN or an attention mechanism to aggregate these features into the center node's embedding. Since the feature aggregation can be performed offline and the computation decomposes easily, stochastic training is straightforward and the method scales to graphs with 10M nodes. Experiments on medium- and large-scale graphs show comparable or better performance than alternatives.
Iterated learning for emergent systematicity in VQA
1 INTRODUCTION

Although great progress has been made in visual question-answering (VQA), recent methods still struggle to generalize systematically to inputs coming from a distribution different from that seen during training (Bahdanau et al., 2019b; a). Neural module networks (NMNs) present a natural solution for improving generalization in VQA, using a symbolic layout or program to arrange neural computational modules into computation graphs. If these modules are learned to be specialized, they can be composed in arbitrary legal layouts to produce different processing flows. However, for modules to learn specialized roles, programs must support this type of compositionality; if programs reuse modules in non-compositional ways, modules are unlikely to become layout-invariant. This poses a substantial challenge for the training of NMNs. Although Bahdanau et al. (2019b) and Bahdanau et al. (2019a) both observe that NMNs can systematically generalize if given human-designed ground-truth programs, creating these programs imposes substantial practical costs. It becomes natural to jointly learn a program generator alongside the modules (Johnson et al., 2017b; Hu et al., 2017; Vedantam et al., 2019), but the generated programs often fail to generalize systematically and lead to worse performance (Bahdanau et al., 2019b).

Iterated learning (IL) offers one way to address this problem. Originating in cognitive science, IL explains how language evolves to become more compositional and easier to acquire through a repeated transmission process, where each new generation acquires the previous generation's language from a limited number of samples (Kirby et al., 2014). Early work with human participants (Kirby et al., 2008) as well as agent-based simulations (Zuidema, 2003) supports this hypothesis. The machine learning community has also recently shown an increasing interest in applying IL to emergent communication (Guo et al., 2019; Li & Bowling, 2019; Cogswell et al., 2019; Dagan et al., 2020; Ren et al., 2020). Differing from previous works, we believe that IL is an algorithmic principle that is equally applicable to recovering compositional structure in more general tasks. We thus propose treating NMN programs as samples from a "layout language" and applying IL to the challenging problem of VQA. Our efforts highlight the potential of IL for broader machine learning applications beyond the previously explored scope of language emergence and preservation (Lu et al., 2020). To demonstrate our method, we introduce a lightweight benchmark for systematic generalization research based on the popular SHAPES dataset (Andreas et al., 2016), called SHAPES-SyGeT (SHAPES Systematic Generalization Test). Our experiments on SHAPES-SyGeT, CLEVR (Johnson et al., 2017a), and CLOSURE (Bahdanau et al., 2019a) show that our IL algorithm accelerates the learning of compositional program structure, leading to better generalization both to unseen questions from the training question templates and to unseen question templates. Using only 100 ground-truth programs for supervision, our method achieves CLEVR performance comparable to Johnson et al. (2017b) and Vedantam et al. (2019), which use 18000 and 1000 programs for supervision, respectively.

2 RELATED WORK

Systematic generalization.
Systematicity was first proposed as a topic of research in neural networks by Fodor & Pylyshyn (1988), who argue that cognitive capabilities exhibit certain symmetries, and that representations of mental states have combinatorial syntactic and semantic structure. Whether or not neural networks can exhibit systematic compositionality has been a subject of much debate in the research community (Fodor & Pylyshyn, 1988; Christiansen & Chater, 1994; Marcus, 1998; Phillips, 1998; Chang, 2002; van der Velde et al., 2004; Botvinick & Plaut, 2009; Bowers et al., 2009; Brakel & Frank, 2009; Fodor & Lepore, 2002; Calvo & Symons, 2014; Marcus, 2018). Bahdanau et al. (2019b) investigate various VQA architectures, such as neural module networks (NMNs) (Andreas et al., 2016), MAC (Hudson & Manning, 2018), FiLM (Perez et al., 2018), and relation networks (Santoro et al., 2017), on their ability to systematically generalize on a new synthetic dataset called SQOOP. They show that only NMNs are able to robustly solve the test problems, and even they succeed only when a fixed tree-structured layout is provided. When learning to infer the module network layout, robust tree-structured layouts only emerged given a strong prior to do so. The authors conclude that explicit regularization and stronger priors are required for the development of the right layout structure. CLEVR (Johnson et al., 2017a) is a popular VQA dataset, and various benchmarks achieve almost-perfect CLEVR validation scores (Hu et al., 2017; Hudson & Manning, 2018; Perez et al., 2018; Santoro et al., 2017). Bahdanau et al. (2019a) proposed an extension of CLEVR with a new evaluation dataset called CLOSURE, containing novel combinations of linguistic concepts found in CLEVR. The authors found that many of the existing models in the literature fail to systematically generalize to CLOSURE. Moreover, there is a significant gap between the performance achieved with ground-truth layouts and with learned layouts on CLOSURE.

Language emergence and compositionality. Agents interacting in a cooperative environment can learn a language to communicate in order to solve a particular task. The emergence of such a communication protocol has been studied extensively in multi-agent referential games. In these games, one agent must describe what it saw to another agent, which is tasked with figuring out what the first agent saw (Lewis, 2008; Skyrms, 2010; Steels & Loetzsch, 2012). To encourage a dialogue between agents, several multi-stage variants of such games have also been proposed (Kottur et al., 2017; Evtimova et al., 2018). Most approaches to learning a discrete communication protocol between agents use reinforcement learning (Foerster et al., 2016; Lazaridou et al., 2017; Kottur et al., 2017; Jorge et al., 2016; Havrylov & Titov, 2017). However, the Gumbel straight-through estimator (Jang et al., 2017) can also be used (Havrylov & Titov, 2017), as can backpropagation when the language in question is continuous (Foerster et al., 2016; Sukhbaatar & Fergus, 2016; Singh et al., 2019). Several works in the literature have found that compositionality only arises in emergent languages if appropriate environmental pressures are present (Kottur et al., 2017; Choi et al., 2018; Lazaridou et al., 2018; Chaabouni et al., 2020).
While generalization pressure is not sufficient to guarantee compositionality, compositional languages tend to exhibit better systematic generalization (Bahdanau et al., 2019b; Chaabouni et al., 2020). The community still lacks strong research indicating what general conditions are necessary or sufficient for compositional language emergence.

Iterated learning. The origins of the compositionality of human language, which yields an astounding open-ended expressive power, have attracted much interest over the years. Kirby (2001) suggests that this phenomenon results from a learning bottleneck arising from the need to learn a highly expressive language with only a limited set of supervised learning data. The iterated application of this bottleneck, as instantiated by IL, has been demonstrated to cause artificial languages to develop structure in experiments with human participants (Kirby et al., 2008; Silvey et al., 2015). Ren et al. (2020) present neural IL following the principles of Kirby (2001), where neural agents play a referential game and evolve a communication protocol through IL. They use topographic similarity (Brighton & Kirby, 2006) to quantify compositionality, and find that high topographic similarity improves the learning speed of neural agents, allows the listener to recognize more objects using less data, and increases validation performance. However, these experiments are limited to domains with extremely simple object and message structure. Several ablation studies (Li & Bowling, 2019; Ren et al., 2020) have found that re-initializing the speaker and the listener between generations is necessary to reap the benefits of compositionality from IL. However, seeded IL (Lu et al., 2020) proposes to seed a new agent with the parameters of the previous generation at the end of that generation's learning phase. Since self-play has not yet fine-tuned this initialization, it has not had the opportunity to develop a non-compositional language that fits the training data. The authors find that seeded IL helps counter language drift in a translation game, and hypothesize that IL maintains the compositionality of natural language.

3 METHOD

We are interested in solving the task of visual question-answering (VQA). Let X be the space of images about which our model will be required to answer questions. Next, let Q be the space of natural-language questions and Y the space of all possible answers to the questions. Additionally, we consider a space Z of programs, which represent computation graphs of operations that can be performed on an image in X to produce an output in Y. We consider a question template T to be a set of tuples (q, z), where q ∈ Q and z ∈ Z. Each question template contains questions with the same structure but varying primitive values. For example, the questions "Is a triangle blue" and "Is a square red" belong to the template "Is a SHAPE COLOR." The program z corresponding to the question q in a template defines a computation graph of operations that produces the correct answer in Y to q for any input image x ∈ X. Finally, let T be a finite set of question templates. The dataset for training our model and evaluating VQA performance consists of tuples of the form (q, z, x, y). First, a template T ∈ T is sampled and a tuple (q, z) is sampled from T. Then, an image x ∈ X is sampled and the answer y is produced by passing x through the program z.
These collected variables (q, z, x, y) form a single example in the task dataset. To evaluate our model's performance on unseen question templates, we define Ttrain ⊂ T to be a subset of training templates and Ttest = T − Ttrain to be the subset of test templates. The training dataset D is prepared from templates in Ttrain, and the out-of-distribution test dataset Dtest from templates in Ttest. We allow a program z to be absent in D, in which case it is not used for auxiliary supervision during training. Our goal of systematic generalization is to learn a model p(Y | X, Q) that performs well on the dataset Dtest created using unseen question templates, where Y, X, and Q are random variables taking values in Y, X, and Q. We define our model to be a composition of a program generator PGθ(Z | Q) and an execution engine EEφ(Y | X, Z), parameterized by θ and φ respectively.
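For concreteness, here is a minimal sketch of the sampling process just described; names such as run_program are our own placeholders, not the paper's API.

```python
import random

def sample_example(templates, images, run_program):
    """Sample one (q, z, x, y) training tuple.

    templates:   list of question templates, each a list of (q, z) pairs.
    images:      list of images x.
    run_program: callable executing a program z on an image x -> answer y.
    """
    template = random.choice(templates)  # sample a template T from T_train
    q, z = random.choice(template)       # sample a (question, program) pair from T
    x = random.choice(images)            # sample an image
    y = run_program(z, x)                # ground-truth answer produced by z
    return q, z, x, y

# The model then factorizes as p(y | x, q) = sum over z of PG_theta(z | q) * EE_phi(y | x, z);
# in practice a program is decoded from PG_theta(. | q) and executed by EE_phi.
```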
The authors address methods to encourage the emergence of layout expression structures in the neural module network (NMN) framework for visual QA problems. The methods are motivated by work on language emergence for communication between multiple agents and on newborns' acquisition of language from their parents, which is achieved with limited data. The proposed method, 'iterated learning' (IL), is designed around two agents (a program generator and an execution engine) that play VQA games. The basic architectures and learning methods appear very similar to the semi-supervised learning approach introduced in [ICCV17].
The authors apply iterated learning - a procedure originating in CogSci analyses of how human languages might develop - to the training of neural module networks. The goal is for iterated learning to encourage these networks to develop compositional structures that support systematic generalization without requiring explicit pressures for compositional structures (in past work, such explicit pressures have generally been necessary). The proposed approach brings substantial improvements in systematic generalization across two datasets, SHAPES and CLEVR.
MetaNorm: Learning to Normalize Few-Shot Batches Across Domains
1 INTRODUCTION

Batch normalization (Ioffe & Szegedy, 2015) is crucial for training neural networks, and with its variants, e.g., layer normalization (Ba et al., 2016), group normalization (Wu & He, 2018) and instance normalization (Ulyanov et al., 2016), has become an essential part of the deep learning toolkit (Bjorck et al., 2018; Luo et al., 2018a; Yang et al., 2019; Jia et al., 2019; Luo et al., 2018b; Summers & Dinneen, 2020). Batch normalization helps stabilize the distribution of internal activations while a model is being trained. Given a mini-batch B, the normalization is conducted along each individual feature channel for 2D convolutional neural networks. During training, the batch normalization moments are calculated as follows:

µB = (1/M) Σ_{i=1}^{M} ai,  σB² = (1/M) Σ_{i=1}^{M} (ai − µB)²,  (1)

where ai denotes the i-th element of the M activations in the batch, M = |B| × H × W, in which H and W are the height and width of the feature map in each channel. We can then apply the normalization statistics to each activation:

a′i ← BN(ai) ≡ γ âi + β,  where  âi = (ai − µB) / √(σB² + ε),  (2)

where γ and β are parameters learned during training, ε is a small scalar to prevent division by zero, and operations between vectors are element-wise. At test time, the standard practice is to normalize activations using the moving averages of the mini-batch means µB and variances σB².

Batch normalization rests on an implicit assumption that the samples in the dataset are independent and identically distributed. However, this assumption does not hold in challenging settings like few-shot learning and domain generalization. In this paper, we strive for batch normalization that works when batches are small and suffer from distribution shifts between source and target domains. Batch normalization for few-shot learning and for domain generalization have so far been considered separately, predominantly in a meta-learning setting. For few-shot meta-learning (Finn et al., 2017; Gordon et al., 2019), most existing methods rely critically on transductive batch normalization, except those based on prototypes (Snell et al., 2017; Allen et al., 2019; Zhen et al., 2020a). However, the nature of transductive learning restricts its application due to the requirement to sample from the test set. To address this issue, Bronskill et al. (2020) propose TaskNorm, which leverages statistics from both layer and instance normalization. As a non-transductive normalization approach, it achieves impressive performance and outperforms conventional batch normalization (Ioffe & Szegedy, 2015). However, it does not always perform better than transductive batch normalization. Meanwhile, domain generalization (Muandet et al., 2013; Balaji et al., 2018; Li et al., 2017a; b) suffers from distribution shifts from training to test, which makes it problematic to directly apply statistics calculated on a seen domain to test data from unseen domains (Wang et al., 2019; Seo et al., 2019). Recent works deal with this problem by learning a domain-specific normalization (Chang et al., 2019; Seo et al., 2019) or a transferable normalization in place of existing normalization techniques (Wang et al., 2019). We address the batch normalization challenges of few-shot classification and domain generalization in a unified way by learning a new batch normalization under the meta-learning setting.
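For reference, a direct NumPy transcription of Eqs. (1) and (2) for a batch of 2D convolutional activations (training mode only; our own minimal version):

```python
import numpy as np

def batch_norm_2d(a, gamma, beta, eps=1e-5):
    """Training-mode batch normalization for activations a of shape (B, C, H, W).

    Per channel, moments are taken over the M = B * H * W activations (Eq. 1);
    each activation is then normalized, scaled by gamma, and shifted by beta (Eq. 2).
    """
    mu = a.mean(axis=(0, 2, 3), keepdims=True)     # mu_B, one value per channel
    var = a.var(axis=(0, 2, 3), keepdims=True)     # sigma_B^2, one value per channel
    a_hat = (a - mu) / np.sqrt(var + eps)          # normalize
    g = gamma.reshape(1, -1, 1, 1)                 # broadcast learned scale
    b = beta.reshape(1, -1, 1, 1)                  # broadcast learned shift
    return g * a_hat + b
```

At test time, mu and var would be replaced by the moving averages accumulated during training, which is exactly the assumption that breaks down in the few-shot and domain-shift settings considered here.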
We propose MetaNorm, a simple but effective meta-learning normalization. We leverage the meta-learning setting and learn to infer normalization statistics from data, instead of applying direct calculations or blending various normalization statistics. MetaNorm is a general batch normalization approach, which is model-agnostic and serves as a plug-and-play module that can be seamlessly embedded into existing meta-learning approaches. We demonstrate its effectiveness for few-shot classification and domain generalization: it learns task-specific statistics from the limited data samples in the support set of each few-shot task, and it can also learn to generate domain-specific statistics from the seen source domains for unseen target domains. We verify the effectiveness of MetaNorm by extensive evaluation on few-shot classification and domain generalization tasks. For few-shot classification, we experiment with representative gradient-, metric- and model-based meta-learning approaches on fourteen benchmark datasets. For domain generalization, we evaluate the model on three widely used benchmarks for cross-domain visual object classification. Last but not least, we introduce the challenging new task of few-shot domain generalization, which combines the challenges of both few-shot learning and domain generalization. The experimental results demonstrate the benefit of MetaNorm compared to existing batch normalizations.

2 RELATED WORKS

Transductive Batch Normalization. For conventional batch normalization under supervised settings, i.i.d. assumptions about the data distribution imply that moments estimated from the training set will provide appropriate normalization statistics for test data. In the meta-learning scenario, however, data points are only assumed to be i.i.d. within a specific task. It is therefore critical which moments are used when batch normalization is applied to support and query set data points during meta-training and meta-testing. Hence, in the recent meta-learning literature the running moments are no longer used for normalization at meta-test time, but are instead replaced with support/query set statistics. These statistics are used for normalization both at meta-train and at meta-test time. This approach is referred to as transductive batch normalization (TBN) (Bronskill et al., 2020). Competitive meta-learning methods (e.g., Gordon et al., 2019; Finn et al., 2017; Zhen et al., 2020b) rely on TBN to achieve state-of-the-art performance. However, there are two critical problems with TBN. First, TBN is sensitive to the distribution over the query set used during meta-training, and as such is less generally applicable than non-transductive learning. Second, at prediction time TBN uses extra information from multiple test samples, compared to non-transductive batch normalization, which can be problematic since a set of test samples is not guaranteed to be available in practical applications. In contrast, MetaNorm is a non-transductive normalization. It generates statistics from the support set only, without relying on query samples, making it more practical.

Meta Batch Normalization. To address the problems of transductive batch normalization and improve conventional batch normalization, meta-batch normalization (MetaBN) was introduced (Triantafillou et al., 2020; Bronskill et al., 2020).
In MetaBN, the support set alone is used to compute the normalization statistics for both the support and query sets, at both meta-training and meta-test time. MetaBN is non-transductive since the normalization of a test input does not depend on other test inputs in the query set. However, Bronskill et al. (2020) observe that MetaBN performs less well for small support sets, which lead to high variance in the moment estimates, similar to the difficulty of using batch normalization with small-batch training (Wu & He, 2018). To address this issue, Bronskill et al. (2020) proposed TaskNorm, which learns to combine statistics from both layer normalization and instance normalization, with a blending parameter learned at meta-train time. As a non-transductive normalization, TaskNorm achieves impressive performance, outperforming conventional batch normalization. However, it cannot always perform better than transductive batch normalization. TaskNorm indicates that non-transductive batch normalization can estimate proper normalization statistics by involving learning in the normalization process. We also propose to learn batch normalization within the meta-learning framework, but instead of employing a learnable combination of existing normalization statistics, we directly learn to infer statistics from data. At meta-train time, the model learns to acquire the ability to generate statistics only from the support set, and at meta-test time we directly apply the model to infer statistics for new tasks.

Batch Normalization for Domain Adaptation and Domain Generalization. Domain adaptation suffers from a distribution shift between source and target domains, which makes it sub-optimal to directly apply batch normalization (Bilen & Vedaldi, 2017). Li et al. (2016) proposed adaptive batch normalization to increase the generalization ability of a deep neural network. By modulating the statistical information of all batch normalization layers in the network, it achieves deep adaptation effects for domain-adaptive tasks. Nado et al. (2020) noted the possibility of accessing small unlabeled batches of the shifted data just before prediction time. To improve model accuracy and calibration under covariate shift, they proposed prediction-time batch normalization. Since the activation statistics obtained during training do not reflect the statistics of the test distribution when testing in an out-of-distribution environment, Schneider et al. (2020) proposed estimating the batch statistics on the corrupted images. Kaku et al. (2020) demonstrated that standard non-adaptive feature normalization fails to correctly normalize the features of convolutional neural networks on held-out data where extraneous variables take values not seen during training. Learning domain-specific batch normalization has been explored (Chang et al., 2019; Wang et al., 2019). Wang et al. (2019) introduced transferable normalization, TransNorm, which normalizes the feature representations from the source and target domains separately using domain-specific statistics. In a similar vein, Chang et al. (2019) proposed a domain-specific batch normalization layer that consists of two branches, each in charge of a single domain exclusively. The hope is that, through the normalization, the feature representation becomes domain-invariant.
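The distinction between transductive and support-set-based normalization comes down to which activations the moments are computed from. A minimal sketch of the MetaBN-style rule, simplified to a single feature dimension (our own illustration, not the papers' code):

```python
import numpy as np

def metabn_normalize(support_acts, query_acts, gamma, beta, eps=1e-5):
    """MetaBN-style non-transductive normalization (sketch).

    Moments are computed from the support set only and applied to both the
    support and the query activations, so normalizing a query example never
    depends on other query examples (unlike transductive batch normalization).
    """
    mu = support_acts.mean(axis=0)
    var = support_acts.var(axis=0)
    normalize = lambda a: gamma * (a - mu) / np.sqrt(var + eps) + beta
    return normalize(support_acts), normalize(query_acts)
```

MetaNorm keeps this non-transductive structure but replaces the direct moment computation with an inference network that learns, at meta-train time, to map the support set to suitable statistics.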
Nevertheless, these normalization methods are specifically designed for domain adaptation tasks, where data from the target domains are available, though often unlabelled. This makes them inapplicable to domain generalization tasks, where data from the target domains are inaccessible at training time. Seo et al. (2019) proposed learning to optimize domain-specific normalization for domain generalization tasks. Under the meta-learning setting, a mixture of different normalization techniques is optimized for each domain, where the mixture weights are learned specifically for different domains. Instead of combining different normalization statistics, MetaNorm learns from data to generate adaptive statistics specific to each domain. Moreover, we introduce an even more challenging setting, i.e., few-shot domain generalization, which combines the challenges of few-shot classification and domain generalization.

Conditional Batch Normalization. de Vries et al. (2017) proposed conditional batch normalization to modulate visual processing by predicting the scalars γ and β of the batch normalization conditioned on language from an early processing stage. Conditional batch normalization has also been applied to align different data distributions for domain adaptation (Li et al., 2016). Oreshkin et al. (2018) apply conditional batch normalization to metric-based models for the few-shot classification task. Tseng et al. (2020) proposed a learning-to-learn method to optimize the hyper-parameters of feature-wise transformation layers by conditional batch normalization for cross-domain classification. Unlike conditional batch normalization, we use extra data (the query set) under the meta-learning setting to generate the normalization statistics themselves, rather than the scalars γ and β.
This paper describes a new method for normalizing few-shot learning episodes. The authors point out that the statistics of an episode are unreliable when the size of the episode is small or when the data distribution changes from episode to episode. To remedy this, the authors propose a method called ‘MetaNorm’ which uses a meta-learning approach to infer the means and variances to be used in the batch normalization layers that are employed in the feature extractor component. In particular, they meta-learn the parameters for a set of hypernetworks in an amortized fashion that learn to generate the means and variances of the batch normalization layers conditioned on the contents of the episode. The paper focuses entirely on the few-shot image classification scenario where MetaNorm is evaluated in various settings including standard few-shot classification and domain generalization (including a novel few-shot domain generalization setting).
This paper proposes to replace batch normalization statistics, which are typically computed as the batch moments during training or a fixed training average during testing, with the outputs of learned neural networks. These networks are trained to minimize the KL divergence between their output and the expected or desired batch statistics. In this way, the statistics computation is amortized and can hopefully generalize in the face of small batches and distribution shift.
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
1 INTRODUCTION

Deep neural networks have become one of the most important and prevalent machine learning models due to their remarkable power in many real-world applications. However, the success of deep learning has not been well explained in theory. It remains mysterious why standard optimization algorithms tend to find a globally optimal solution, despite the highly non-convex landscape of the training loss function. Moreover, despite their extremely large number of parameters, deep neural networks rarely over-fit, and can often generalize well to unseen data and achieve good test accuracy. Understanding these mysterious phenomena concerning the optimization and generalization of deep neural networks is one of the most fundamental problems in deep learning theory.

Recent breakthroughs have shed light on the optimization and generalization of deep neural networks (DNNs) in the over-parameterized setting, where the hidden layer width is extremely large (much larger than the number of training examples). It has been shown that with standard random initialization, the training of over-parameterized deep neural networks can be characterized by a kernel function called the neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019b). In the neural tangent kernel regime (or lazy training regime (Chizat et al., 2019)), the neural network function behaves similarly to its first-order Taylor expansion at initialization (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019b; Cao and Gu, 2019), which enables feasible optimization and generalization analysis. In terms of optimization, a line of work (Du et al., 2019b; Allen-Zhu et al., 2019b; Zou et al., 2019; Zou and Gu, 2019) proved that for sufficiently wide neural networks, (stochastic) gradient descent (GD/SGD) can successfully find a global optimum of the training loss function. For generalization, Allen-Zhu et al. (2019a), Arora et al. (2019a), and Cao and Gu (2019) established generalization bounds for neural networks trained with (stochastic) gradient descent, and showed that the neural networks can learn target functions in certain reproducing kernel Hilbert spaces (RKHS) or the corresponding random feature function classes.

Although existing results in the neural tangent kernel regime have provided important insights into the learning of deep neural networks, they require the neural network to be extremely wide. The typical requirement on the network width is a high-degree polynomial of the training sample size n and the inverse target error ε⁻¹. As there still remains a huge gap between such width requirements and practice, many attempts have been made to improve the over-parameterization condition under various conditions on the training data and model initialization (Oymak and Soltanolkotabi, 2019; Zou and Gu, 2019; Kawaguchi and Huang, 2019; Bai and Lee, 2019). For two-layer ReLU networks, a recent work (Ji and Telgarsky, 2020) showed that when the training data are well separated, polylogarithmic width is sufficient to guarantee good optimization and generalization performance. However, their results cannot be extended to deep ReLU networks, since their proof technique relies heavily on the fact that the network model is 1-homogeneous, which is not satisfied by DNNs. Therefore, whether deep neural networks can be learned with such mild over-parameterization is still an open problem.
In this paper, we resolve this open problem by showing that polylogarithmic network width is sufficient to learn DNNs. In particular, unlike existing works that require the DNN to behave very close to a linear model (up to some small approximation error), we show that a constant linear approximation error is sufficient to establish nice optimization and generalization guarantees for DNNs. Thanks to the relaxed requirement on the linear approximation error, a milder condition on the network width and tighter bounds on the convergence rate and generalization error can be proved. We summarize our contributions as follows:

• We establish a global convergence guarantee of GD for training deep ReLU networks based on the so-called NTRF function class (Cao and Gu, 2019), a set of linear functions over random features. Specifically, we prove that GD can learn deep ReLU networks with width m = poly(R) to compete with the best function in the NTRF function class, where R is the radius of the NTRF function class.

• We also establish generalization guarantees for both GD and SGD in the same setting. Specifically, we prove a diminishing statistical error for a wide range of network widths m ∈ (Ω̃(1), ∞), while most previous generalization bounds in the NTK regime only work in the setting where the network width m is much greater than the sample size n. Moreover, we establish Õ(ε⁻²) and Õ(ε⁻¹) sample complexities for GD and SGD respectively, which are tighter than existing bounds for learning deep ReLU networks (Cao and Gu, 2019), and match the best known results when reduced to the two-layer case (Arora et al., 2019b; Ji and Telgarsky, 2020).

• We further generalize our theoretical analysis to scenarios with different data separability assumptions from the literature. We show that if a large fraction of the training data are well separated, the best function in the NTRF function class with radius R = Õ(1) can learn the training data with error up to ε. Together with our optimization and generalization guarantees, this immediately suggests that deep ReLU networks can be learned with network width m = Ω̃(1), which has a logarithmic dependence on the target error ε and the sample size n. Compared with existing results (Cao and Gu, 2020; Ji and Telgarsky, 2020), which require all training data points to be separated in the NTK regime, our result is stronger since it allows the NTRF function class to misclassify a small proportion of the training data.

For ease of comparison, we summarize our results along with the most related previous results in Table 1, in terms of the data assumption, the over-parameterization condition, and the sample complexity. It can be seen that under the data separation assumptions (see Sections 4.1, 4.2), our result improves existing results for learning deep neural networks by only requiring a polylog(n, ε⁻¹) network width.

Notation. For two scalars a and b, we denote a ∧ b = min{a, b}. For a vector x ∈ R^d, we use ‖x‖₂ to denote its Euclidean norm. For a matrix X, we use ‖X‖₂ and ‖X‖_F to denote its spectral norm and Frobenius norm respectively, and denote by X_{ij} the entry of X in the i-th row and j-th column. Given two matrices X and Y of the same dimension, we denote ⟨X, Y⟩ = Σ_{i,j} X_{ij} Y_{ij}. Given a collection of matrices W = {W₁, · · ·, W_L} ∈ ⊗_{l=1}^{L} R^{m_l × m′_l} and a function f(W) over ⊗_{l=1}^{L} R^{m_l × m′_l}, we denote by ∇_{W_l} f(W) the partial gradient of f(W) with respect to W_l, and denote ∇_W f(W) = {∇_{W_l} f(W)}_{l=1}^{L}.
We also denote $\mathcal{B}(\mathbf{W}, \tau) = \{\mathbf{W}' : \max_{l \in [L]} \|\mathbf{W}'_l - \mathbf{W}_l\|_F \le \tau\}$ for $\tau \ge 0$. For two collections of matrices $\mathbf{A} = \{\mathbf{A}_1, \cdots, \mathbf{A}_n\}$ and $\mathbf{B} = \{\mathbf{B}_1, \cdots, \mathbf{B}_n\}$, we denote $\langle \mathbf{A}, \mathbf{B} \rangle = \sum_{i=1}^n \langle \mathbf{A}_i, \mathbf{B}_i \rangle$ and $\|\mathbf{A}\|_F^2 = \sum_{i=1}^n \|\mathbf{A}_i\|_F^2$. Given two sequences $\{x_n\}$ and $\{y_n\}$, we denote $x_n = O(y_n)$ if $|x_n| \le C_1 |y_n|$ for some absolute positive constant $C_1$, $x_n = \Omega(y_n)$ if $|x_n| \ge C_2 |y_n|$ for some absolute positive constant $C_2$, and $x_n = \Theta(y_n)$ if $C_3 |y_n| \le |x_n| \le C_4 |y_n|$ for some absolute constants $C_3, C_4 > 0$. We also use $\tilde{O}(\cdot)$ and $\tilde{\Omega}(\cdot)$ to hide logarithmic factors in $O(\cdot)$ and $\Omega(\cdot)$ respectively. Additionally, we denote $x_n = \mathrm{poly}(y_n)$ if $x_n = O(y_n^D)$ for some positive constant $D$, and $x_n = \mathrm{polylog}(y_n)$ if $x_n = \mathrm{poly}(\log(y_n))$.
Algorithm 1: Gradient descent (GD) with random initialization.
Input: number of iterations $T$, step size $\eta$, training set $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^n$, initialization $\mathbf{W}^{(0)}$.
For $t = 1, 2, \ldots, T$: update $\mathbf{W}^{(t)} = \mathbf{W}^{(t-1)} - \eta \cdot \nabla_{\mathbf{W}} L_S(\mathbf{W}^{(t-1)})$.
Output: $\mathbf{W}^{(0)}, \ldots, \mathbf{W}^{(T)}$.
2 PRELIMINARIES ON LEARNING NEURAL NETWORKS. In this section, we introduce the problem setting of this paper, including the definitions of the neural network and loss functions and the training algorithms, i.e., GD and SGD with random initialization.
Neural network function. Given an input $\mathbf{x} \in \mathbb{R}^d$, the output of a deep fully-connected ReLU network is defined as
$$f_{\mathbf{W}}(\mathbf{x}) = m^{1/2} \mathbf{W}_L \sigma(\mathbf{W}_{L-1} \cdots \sigma(\mathbf{W}_1 \mathbf{x}) \cdots),$$
where $\mathbf{W}_1 \in \mathbb{R}^{m \times d}$, $\mathbf{W}_2, \cdots, \mathbf{W}_{L-1} \in \mathbb{R}^{m \times m}$, $\mathbf{W}_L \in \mathbb{R}^{1 \times m}$, and $\sigma(x) = \max\{0, x\}$ is the ReLU activation function. Here, without loss of generality, we assume the width of each layer equals $m$; our theoretical results can easily be generalized to the setting of unequal layer widths, as long as the smallest width satisfies our over-parameterization condition. We denote the collection of all weight matrices as $\mathbf{W} = \{\mathbf{W}_1, \ldots, \mathbf{W}_L\}$.
Loss function. Given a training dataset $\{(\mathbf{x}_i, y_i)\}_{i=1,\ldots,n}$ with inputs $\mathbf{x}_i \in \mathbb{R}^d$ and outputs $y_i \in \{-1, +1\}$, we define the training loss function as
$$L_S(\mathbf{W}) = \frac{1}{n} \sum_{i=1}^n L_i(\mathbf{W}), \quad \text{where } L_i(\mathbf{W}) = \ell(y_i f_{\mathbf{W}}(\mathbf{x}_i)) = \log\big(1 + \exp(-y_i f_{\mathbf{W}}(\mathbf{x}_i))\big)$$
is the cross-entropy loss.
Algorithms. We consider both GD and SGD with Gaussian random initialization. These two algorithms are displayed in Algorithms 1 and 2 respectively. Specifically, the entries of $\mathbf{W}_1^{(0)}, \cdots, \mathbf{W}_{L-1}^{(0)}$ are generated independently from the univariate Gaussian distribution $N(0, 2/m)$, and the entries of $\mathbf{W}_L^{(0)}$ are generated independently from $N(0, 1/m)$. For GD, we use the full gradient to update the model parameters. For SGD, we use a new training data point in each iteration. Note that our initialization method in Algorithms 1 and 2 is the same as the widely used He initialization (He et al., 2015). Our neural network parameterization is also consistent with the parameterization used in prior work on the NTK (Jacot et al., 2018; Allen-Zhu et al., 2019b; Du et al., 2019a; Arora et al., 2019b; Cao and Gu, 2019).
Algorithm 2: Stochastic gradient descent (SGD) with random initialization.
Input: number of iterations $n$, step size $\eta$, initialization $\mathbf{W}^{(0)}$.
For $i = 1, 2, \ldots, n$: draw $(\mathbf{x}_i, y_i)$ from $\mathcal{D}$, compute the corresponding gradient $\nabla_{\mathbf{W}} L_i(\mathbf{W}^{(i-1)})$, and update $\mathbf{W}^{(i)} = \mathbf{W}^{(i-1)} - \eta \cdot \nabla_{\mathbf{W}} L_i(\mathbf{W}^{(i-1)})$.
Output: $\widehat{\mathbf{W}}$ chosen uniformly at random from $\{\mathbf{W}^{(0)}, \ldots, \mathbf{W}^{(n-1)}\}$.
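To make the preceding definitions concrete, here is a minimal PyTorch sketch of the pipeline: the $m^{1/2}$-scaled deep ReLU network, the $N(0, 2/m)$ / $N(0, 1/m)$ initialization described above, the logistic loss $L_S$, and full-batch GD as in Algorithm 1. The widths, step size, and toy data are illustrative assumptions rather than values from the paper.
```python
# A minimal sketch of the setup above, assuming toy hyperparameters and data.
import torch

def init_network(d, m, L):
    """Weights W_1 in R^{m x d}, W_2..W_{L-1} in R^{m x m}, W_L in R^{1 x m}."""
    Ws = [torch.randn(m, d) * (2.0 / m) ** 0.5]
    Ws += [torch.randn(m, m) * (2.0 / m) ** 0.5 for _ in range(L - 2)]
    Ws += [torch.randn(1, m) * (1.0 / m) ** 0.5]
    for W in Ws:
        W.requires_grad_(True)
    return Ws

def f_W(Ws, X):
    """f_W(x) = m^{1/2} W_L sigma(W_{L-1} ... sigma(W_1 x) ...)."""
    h = X.T                                   # shape (d, n)
    for W in Ws[:-1]:
        h = torch.relu(W @ h)
    m = Ws[-1].shape[1]
    return (m ** 0.5) * (Ws[-1] @ h).squeeze(0)   # shape (n,)

def loss_S(Ws, X, y):
    """L_S(W) = (1/n) sum_i log(1 + exp(-y_i f_W(x_i)))."""
    margins = y * f_W(Ws, X)
    return torch.nn.functional.softplus(-margins).mean()

# Full-batch gradient descent (Algorithm 1) on toy data.
torch.manual_seed(0)
n, d, m, L, eta, T = 32, 5, 256, 3, 0.1, 200
X = torch.randn(n, d)
y = torch.sign(X[:, 0]).detach()              # toy labels in {-1, +1}
Ws = init_network(d, m, L)
for t in range(T):
    loss = loss_S(Ws, X, y)
    grads = torch.autograd.grad(loss, Ws)
    with torch.no_grad():
        for W, g in zip(Ws, grads):
            W -= eta * g                      # W^(t) = W^(t-1) - eta * grad
print(float(loss_S(Ws, X, y)))
```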
The paper extends an existing proof of the sufficiency of polylogarithmic width for sharp learning guarantees of ReLU networks trained by (stochastic) gradient descent from shallow networks to deep networks. The theoretical analysis links the convergence of GD and SGD to the width of the network. The paper shows that polylogarithmic width is enough to give reasonable guarantees also for deep neural networks. It furthermore provides a generalization bound in terms of network width.
SP:a81ee1b76201649dc0d0653db304c7297befee33
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
The paper studies optimization and generalization properties of deep relu networks trained with (stochastic) gradient descent on the logistic loss in the neural tangent kernel (NTK) regime. By using a new analysis that makes the "linearized" approximation as well as the L2 norm of the model in the approximate "random feature" kernel more explicit, the authors obtain results where the width only depends poly-logarithmically on the number of samples and 1/epsilon, for a test 0-1 loss of epsilon. This improves on previous analysis for deep networks, although it is similar to the two-layer result of Ji & Telgarsky.
SP:a81ee1b76201649dc0d0653db304c7297befee33
Balancing training time vs. performance with Bayesian Early Pruning
1 INTRODUCTION. Deep neural networks (DNNs) are known to be overparameterized (Allen-Zhu et al., 2019), as they usually have more learnable parameters than needed for a given learning task. A trained DNN therefore contains many ineffectual parameters that can be safely pruned, or zeroed out, with little or no effect on its predictive accuracy. Pruning (LeCun et al., 1989) is an approach to alleviating the overparameterization of a DNN by identifying and removing its ineffectual parameters while preserving its predictive accuracy on the validation/test dataset. Pruning is typically applied to the DNN after training to speed up test-time evaluation. For standard image classification tasks on the MNIST, CIFAR-10, and ImageNet datasets, it can reduce the number of learnable parameters by 50% or more while maintaining test accuracy (Han et al., 2015; Li et al., 2017; Molchanov et al., 2017). In particular, the overparameterization of a DNN also means that considerable training time is wasted on those DNN elements (e.g., connection weights, neurons, or convolutional filters) which turn out to be ineffectual after training and could thus be safely pruned. Our work in this paper considers early pruning of such DNN elements, identifying and removing them throughout the training process instead of after training. (In contrast, foresight pruning (Wang et al., 2020) removes DNN elements prior to the training process.) As a result, this can significantly reduce the time incurred by the training process without much compromising the final test accuracy (upon convergence). Recent work (Section 5) on foresight pruning (Lee et al., 2019; Wang et al., 2020) shows that pruning heuristics applied at initialization work well for pruning connection weights without significantly degrading performance. In contrast to these works, we prune throughout the training procedure, which improves the performance of DNNs after convergence, albeit with somewhat longer training times. In this work, we pose early pruning as a constrained optimization problem (Section 3.1). A key challenge in this optimization is accurately modeling the future efficacy of DNN elements. We achieve this through a multi-output Gaussian process which models the belief over future efficacy conditioned on efficacy measurements collected during training (Section 3.2). Although the posed optimization problem is NP-hard, we derive an efficient Bayesian early pruning (BEP) approximation algorithm which appropriately balances the inherent training time vs. performance tradeoff in pruning prior to convergence (Section 3.3). Our algorithm relies on a measure of network-element efficacy termed saliency (LeCun et al., 1989). The development of saliency functions is an active area of research with no clear optimal choice; to accommodate this, our algorithm is agnostic, and therefore flexible, to changes in the saliency function. We use BEP to prune neurons and convolutional filters to achieve practical speedup during training (Section 4). (Popular deep learning libraries do not accelerate sparse matrix operations over dense matrix operations, so pruning individual network connections cannot easily be turned into performance improvements. It is also unclear whether moderately sparse matrix operations, i.e., operations on matrices generated by connection pruning, can be significantly accelerated on massively parallel architectures such as GPUs (see Yang et al. (2018), Fig. 7); see Section 5 of Buluç & Gilbert (2008) for challenges in parallel sparse matrix multiplication.) Our approach also compares favorably to state-of-the-art works such as SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), and momentum-based dynamic sparse reparameterization (Dettmers & Zettlemoyer, 2019).
2 PRUNING. Consider a dataset of $D$ training examples $X = \{x_1, \ldots, x_D\}$, $Y = \{y_1, \ldots, y_D\}$ and a neural network $\mathcal{N}_{\mathbf{v}_t}$ parameterized by a vector of $M$ pruneable network elements (e.g., weight parameters, neurons, or convolutional filters) $\mathbf{v}_t \triangleq [v_t^a]_{a=1,\ldots,M}$, where $\mathbf{v}_t$ represents the network elements after $t$ iterations of stochastic gradient descent (SGD), for $t = 1, \ldots, T$.
Let $L(X, Y; \mathcal{N}_{\mathbf{v}_t})$ be the loss function of the neural network $\mathcal{N}_{\mathbf{v}_t}$. Pruning aims at refining the network elements $\mathbf{v}_t$, given some sparsity budget $B$, while preserving the accuracy of the neural network after convergence (i.e., of $\mathcal{N}_{\mathbf{v}_T}$). This can be stated as a constrained optimization problem (Molchanov et al., 2017):
$$\min_{\mathbf{m} \in \{0,1\}^M} \big| L(X, Y; \mathcal{N}_{\mathbf{m} \odot \mathbf{v}_T}) - L(X, Y; \mathcal{N}_{\mathbf{v}_T}) \big| \quad \text{s.t.} \quad \|\mathbf{m}\|_0 \le B, \tag{1}$$
where $\odot$ is the Hadamard product and $\mathbf{m}$ is a pruning mask. Note that we abuse the Hadamard product for notational simplicity: for $a = 1, \ldots, M$, $m^a \times v_T^a$ corresponds to pruning $v_T^a$ if $m^a = 0$ and keeping $v_T^a$ otherwise. Pruning a network element refers to zeroing the element or the weight parameters which compute it; any weight parameters which reference the output of the pruned element are also zeroed, since the element outputs a constant 0. The above optimization problem is difficult due to the NP-hardness of combinatorial optimization. This leads to the approach of using a saliency function $s$, which measures the efficacy of network elements at minimizing the loss function. A network element with small saliency can be pruned, since it is not salient in minimizing the loss. Consequently, pruning can be done by maximizing the saliency of the retained network elements given the sparsity budget $B$:
$$\max_{\mathbf{m} \in \{0,1\}^M} \sum_{a=1}^M m^a \, s(a; X, Y, \mathcal{N}_{\mathbf{v}_T}, L) \quad \text{s.t.} \quad \|\mathbf{m}\|_0 \le B, \tag{2}$$
where $s(a; X, Y, \mathcal{N}_{\mathbf{v}_T}, L)$ measures the saliency of $v_T^a$ at minimizing $L$ after convergence through $T$ iterations of SGD. This optimization problem can be efficiently solved by selecting the $B$ most salient network elements of $\mathbf{v}_T$. The construction of the saliency function has been discussed in many existing works: some approaches derive the saliency function from first-order (LeCun et al., 1989; Molchanov et al., 2017) or second-order (Hassibi & Stork, 1992; Wang et al., 2020) Taylor series approximations of $L$. Other common saliency functions include the L1 (Li et al., 2017) or L2 (Wen et al., 2016) norm of the network element weights, as well as the mean activation (Polyak & Wolf, 2015). In this work, we use a first-order Taylor series approximation saliency function defined for neurons and convolutional filters (Molchanov et al., 2017); implementation details of this saliency function can be found in Appendix A.1. Our approach, however, remains flexible to an arbitrary choice of saliency function on a plug-and-play basis.
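As a concrete illustration, the following NumPy sketch solves (2) greedily by keeping the $B$ most salient elements. The Taylor-style saliency used here (the mean of |activation x gradient| over examples) is only a simplified stand-in for the Molchanov et al. (2017) criterion, whose exact implementation the paper defers to Appendix A.1.
```python
# A minimal sketch of the greedy solution to Eq. (2), with a toy saliency.
import numpy as np

def top_b_mask(saliency, B):
    """Return m in {0,1}^M maximizing sum_a m^a s^a subject to ||m||_0 <= B."""
    m = np.zeros_like(saliency, dtype=int)
    m[np.argsort(saliency)[-B:]] = 1       # indices of the B largest saliencies
    return m

def taylor_saliency(activations, grads):
    """Taylor-style stand-in saliency per element: mean_n |a_n * g_n|.

    activations, grads: arrays of shape (n_examples, M)."""
    return np.abs(activations * grads).mean(axis=0)

# Toy usage with M = 6 pruneable elements and a budget of B = 3.
rng = np.random.default_rng(0)
acts, grads = rng.normal(size=(10, 6)), rng.normal(size=(10, 6))
s = taylor_saliency(acts, grads)
print(s.round(3), top_b_mask(s, B=3))
```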
3 BAYESIAN EARLY PRUNING. 3.1 PROBLEM STATEMENT. As mentioned before, existing pruning works based on the saliency function typically prune after training convergence (i.e., via (2)) to speed up test-time evaluation, which wastes considerable time on training network elements that will eventually be pruned. To resolve this issue, we extend the pruning problem definition (2) along the temporal dimension, allowing network elements to be pruned during the training process consisting of $T$ iterations of SGD. Let $s_t^a \triangleq s(a; X, Y, \mathcal{N}_{\mathbf{v}_t}, L)$ be a random variable denoting the saliency of network element $v_t^a$ after $t$ iterations of SGD, let $\mathbf{s}_t \triangleq [s_t^a]_{a=1,\ldots,M}$ for $t = 1, \ldots, T$, and let $\mathbf{s}_{\tau_1:\tau_2} \triangleq [\mathbf{s}_t]_{t=\tau_1,\ldots,\tau_2}$ be the vector of saliencies of all network elements between iterations $\tau_1$ and $\tau_2$. Our early pruning algorithm is designed with the goal of maximizing the saliency of the unpruned elements after iteration $T$, while allowing pruning at each iteration $t$ given some computational budget $B_{t,c}$, for $t = 1, \ldots, T$:
$$\rho_T(\mathbf{m}_{T-1}, B_{T,c}, B_s) \triangleq \max_{\mathbf{m}_T} \ \mathbf{m}_T \cdot \mathbf{s}_T \tag{3a}$$
$$\text{s.t.} \quad \|\mathbf{m}_T\|_0 \le B_s, \tag{3b} \qquad \mathbf{m}_T \,\dot{\le}\, \mathbf{m}_{T-1}, \tag{3c} \qquad B_{T,c} \ge 0, \tag{3d}$$
$$\rho_t(\mathbf{m}_{t-1}, B_{t,c}, B_s) \triangleq \max_{\mathbf{m}_t} \ \mathbb{E}_{p(\mathbf{s}_{t+1} \mid \tilde{\mathbf{s}}_{1:t})} \big[ \rho_{t+1}(\mathbf{m}_t, B_{t,c} - \|\mathbf{m}_t\|_0, B_s) \big] \tag{4a}$$
$$\text{s.t.} \quad \mathbf{m}_t \,\dot{\le}\, \mathbf{m}_{t-1}, \tag{4b}$$
where $B_s$ is the trained network sparsity budget, $\tilde{\mathbf{s}}_{1:t}$ is the vector of observed values of $\mathbf{s}_{1:t}$, $\mathbf{m}_0$ is an $M$-dimensional vector of ones, and $\mathbf{m}_t \,\dot{\le}\, \mathbf{m}_{t-1}$ denotes the element-wise comparison $m_t^a \le m_{t-1}^a$ for $a = 1, \ldots, M$. At each iteration $t$, the saliency $\mathbf{s}_t$ is observed, and $\mathbf{m}_t \in \{0,1\}^M$ in $\rho_t$ represents a pruning decision performed to maximize the expectation of $\rho_{t+1}$ conditioned on the saliency measurements $\mathbf{s}_{1:t}$ collected up to and including iteration $t$. This recursive structure terminates with the base case $\rho_T$, where the saliency of the unpruned elements is maximized after $T$ iterations of training. In the above early pruning formulation (in contrast to PruneTrain (Lym et al., 2019), it balances training time vs. performance under an additional constraint on the trained network size (3b); we discuss this further in Section 5), constraints (3c) and (4b) ensure that pruning is performed in a practical manner whereby, once a network element is pruned, it can no longer be recovered in a later training iteration. We define a trained network sparsity budget $B_s$ (3b), which may differ significantly from the initial network size $\|\mathbf{m}_0\|_0$ (e.g., when the network is trained on GPUs but deployed on resource-constrained edge or mobile devices). We also constrain a total computational effort budget $B_{t,c}$, which is reduced in each training iteration $t$ by the number of unpruned network elements $\|\mathbf{m}_t\|_0$. We constrain $B_{T,c} \ge 0$ (3d) to ensure training completion within the specified computational budget. Here we assume that a sparser pruning mask $\mathbf{m}_t$ corresponds to lower computational effort during training iteration $t$, due to updating fewer network elements. Finally, (3a) maximizes the saliency with a pruning mask $\mathbf{m}_T$ constrained by the sparsity budget $B_s$ (3b). Our early pruning formulation balances the saliency of network elements after convergence against the total computational effort to train the network (i.e., $\mathbf{m}_T \cdot \mathbf{s}_T$ vs. $\sum_{t=1}^T \|\mathbf{m}_t\|_0$). This captures the balancing act of training-time early pruning, whereby computational effort is saved by early pruning network elements while preserving the saliency of the remaining network elements after convergence.
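To make the constraint structure tangible, here is a small sketch, with invented masks and budgets, that checks whether a candidate pruning schedule satisfies the monotonicity constraints (3c)/(4b), the sparsity budget (3b), and the computational budget (3d).
```python
# A feasibility check for a pruning schedule, using toy masks and budgets.
import numpy as np

def schedule_is_feasible(masks, B_s, B_c):
    """masks: list of T binary arrays m_1..m_T over M network elements."""
    monotone = all(np.all(m_next <= m_prev)          # once pruned, stays pruned
                   for m_prev, m_next in zip(masks, masks[1:]))
    sparsity_ok = masks[-1].sum() <= B_s             # ||m_T||_0 <= B_s
    budget_left = B_c - sum(int(m.sum()) for m in masks)
    return monotone and sparsity_ok and budget_left >= 0

# T = 4 iterations over M = 5 elements, pruning one element per iteration.
masks = [np.array(m) for m in ([1,1,1,1,1], [1,1,1,1,0], [1,1,1,0,0], [1,1,0,0,0])]
print(schedule_is_feasible(masks, B_s=3, B_c=15))    # True: 5+4+3+2 = 14 <= 15
```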
3.2 MODELING THE SALIENCY WITH A MULTI-OUTPUT GAUSSIAN PROCESS. To solve the above early pruning problem, we need to model the belief $p(\mathbf{s}_{1:T})$ over the saliency in order to compute the predictive belief $p(\mathbf{s}_{t+1:T} \mid \tilde{\mathbf{s}}_{1:t})$ over the future saliency in (4a). At first glance, one might consider decomposing the belief as $p(\mathbf{s}_{1:T}) \triangleq \prod_{a=1}^M p(s_{1:T}^a)$ and modeling the saliency $s_{1:T}^a \triangleq [s_t^a]_{t=1,\ldots,T}$ of each network element independently. Such independent models, however, ignore the co-adaptation and co-evolution of network elements, which have been shown to be a common occurrence in DNNs (Hinton et al., 2012; Srivastava et al., 2014; Wang et al., 2020). Also, explicitly modeling the correlations between the saliencies of different network elements is non-trivial, since considerable feature engineering would be needed to represent diverse network elements such as neurons, connections, or convolutional filters. To resolve these issues, we use a multi-output Gaussian process (MOGP) to jointly model the belief $p(\mathbf{s}_{1:T})$ over all saliency measurements. Specifically, we assume that the saliency $s_t^a$ of the $a$-th network element at iteration $t$ is a linear mixture of $Q$ independent latent functions $\{u_q(t)\}_{q=1}^Q$: $s_t^a \triangleq \sum_{q=1}^Q \gamma_q^a u_q(t)$. (Among the various types of MOGPs, reviewed in detail by Álvarez & Lawrence (2011), we choose this linear model so that the correlations between $s_t^a$ and $s_{t'}^{a'}$ can be computed analytically.) As shown in (Álvarez & Lawrence, 2011), if each $u_q(t)$ is an independent GP with prior zero mean and covariance $k_q(t, t')$, then the resulting distribution $p(\mathbf{s}_{1:T})$ is a multivariate Gaussian with prior zero mean and covariance determined by the mixing weights: $\mathrm{cov}[s_t^a, s_{t'}^{a'}] = \sum_{q=1}^Q \gamma_q^a \gamma_q^{a'} k_q(t, t')$. This explicit covariance between $s_t^a$ and $s_{t'}^{a'}$ helps to exploit the co-evolution and co-adaptation of network elements within the neural network. To capture the horizontal-asymptote trend of $s_1^a, \ldots, s_T^a$ visualized in Appendix A.2, we turn to a kernel used for modeling decaying exponential curves, known as the "exponential kernel" (Swersky et al., 2014), and set $k_q(t, t') \triangleq \beta_q^{\alpha_q} / (t + t' + \beta_q)^{\alpha_q}$, where $\alpha_q$ and $\beta_q$ are hyperparameters of the MOGP that can be learned via maximum likelihood estimation (Álvarez & Lawrence, 2011). Then, given a vector of observed saliencies $\tilde{\mathbf{s}}_{1:t}$, the MOGP regression model provides a Gaussian predictive distribution for any future saliency $\mathbf{s}_{t'}$. Thus, the predictive mean $\mu_{t'|1:t}^a \triangleq \mathbb{E}[s_{t'}^a \mid \tilde{\mathbf{s}}_{1:t}]$ of the saliency $s_{t'}^a$ and the predictive (co)variance $\sigma_{t'|1:t}^{aa'} \triangleq \mathrm{cov}[s_{t'}^a, s_{t'}^{a'} \mid \tilde{\mathbf{s}}_{1:t}]$ between the saliencies $s_{t'}^a$ and $s_{t'}^{a'}$ can be computed analytically, as detailed in Appendix A.3.
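For intuition, here is a sketch of how such a kernel extrapolates a saliency curve. It models a single latent function $u_q(t)$ with the exponential kernel and the standard GP posterior formulas; the paper's full MOGP additionally mixes $Q$ such latent GPs with weights $\gamma_q^a$ and learns $\alpha_q, \beta_q$ by maximum likelihood, whereas the hyperparameters, noise level, and toy data below are illustrative assumptions.
```python
# A single-output GP sketch with the exponential kernel of Swersky et al. (2014).
import numpy as np

def exp_kernel(t1, t2, alpha=1.0, beta=1.0):
    """k(t, t') = beta^alpha / (t + t' + beta)^alpha, for decaying curves."""
    return beta**alpha / (t1[:, None] + t2[None, :] + beta) ** alpha

def gp_posterior(t_obs, s_obs, t_new, alpha=1.0, beta=1.0, noise=1e-3):
    """Predictive mean and covariance of the saliency at future iterations."""
    K = exp_kernel(t_obs, t_obs, alpha, beta) + noise * np.eye(len(t_obs))
    K_star = exp_kernel(t_new, t_obs, alpha, beta)
    K_ss = exp_kernel(t_new, t_new, alpha, beta)
    K_inv = np.linalg.inv(K)
    mean = K_star @ K_inv @ s_obs
    cov = K_ss - K_star @ K_inv @ K_star.T
    return mean, cov

# Extrapolate a decaying saliency trace from iterations 1..10 to 11..20.
t_obs = np.arange(1.0, 11.0)
s_obs = 2.0 / (t_obs + 1.0)          # toy decaying saliency measurements
mean, cov = gp_posterior(t_obs, s_obs, np.arange(11.0, 21.0))
print(mean.round(3))
```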
This paper introduces a method for pruning during the training process in order to filter out unimportant/redundant components of the network continuously to speed up training and perform gradual pruning over the training process. The proposed approach is novel in the sense that the vast amount of prior work on pruning has focused on either (i) pruning on network initialization (e.g., SNIP, etc.) or (ii) pruning after the network has been fully trained (e.g., Magnitude Pruning, among many others). The introduced method uses the Taylor-series based saliency criterion (of Molchanov et al., 2017) and uses a multi-output Gaussian process to predict future saliencies and to determine whether a parameter can be safely removed early on during training.
SP:e7caebe84a63ae1f2e8eda175eec514684a7a2ee
Balancing training time vs. performance with Bayesian Early Pruning
This paper introduces a new method to accelerate training by saliency-based pruning. The method predicts future saliency for neurons based on observed saliency with a multi-output Gaussian process (MOGP), then greedily prunes neurons with least saliency at fixed intervals during training. The authors provide extensive mathematical analysis to show that the algorithm produces pruning mask solutions that are close to the optimum of the formulated optimization (the reviewer is unable to verify). The experimental results showed improvements in task accuracies of trained models but with longer training times.
SP:e7caebe84a63ae1f2e8eda175eec514684a7a2ee
Preventing Value Function Collapse in Ensemble Q-Learning by Maximizing Representation Diversity
1 INTRODUCTION. Q-learning (Watkins, 1989) and its deep-learning-based successors, inaugurated by DQN (Mnih et al., 2015), are model-free, value-function-based reinforcement learning algorithms. Their popularity stems from their intuitive, easy-to-implement update rule derived from the Bellman equation. At each time step, the agent updates its Q-value towards the expectation of the current reward plus the value corresponding to the maximal action in the next state. This state-action value represents the maximum sum of rewards the agent believes it could obtain from the current state by taking the current action. Unfortunately, Thrun & Schwartz (1993) and van Hasselt (2010) have shown that this simple rule suffers from overestimation bias: due to the maximization operator in the update rule, positive and negative errors do not cancel each other out; instead, positive errors accumulate. The overestimation bias is particularly problematic under function approximation and has contributed to learning sub-optimal policies (Thrun & Schwartz, 1993; Szita & Lőrincz, 2008; Strehl et al., 2009). A possible solution is to introduce underestimation bias into the estimation of the Q-value. Double Q-learning (van Hasselt, 2010) maintains two independent state-action value estimators (Q-functions); the state-action value of one estimator is calculated by adding the observed reward and the maximal state-action value from the other estimator. Double DQN (Hado van Hasselt et al., 2016) applied this idea using neural networks and was shown to provide better performance than DQN. More recent actor-critic-type deep RL algorithms, such as TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018), also use two Q-function estimators (in combination with other techniques). Other approaches, such as EnsembleDQN (Anschel et al., 2017) and MaxminDQN (Lan et al., 2020), maintain ensembles of Q-functions to estimate an unbiased Q-function. EnsembleDQN estimates the state-action values by adding the current observed reward and the maximal state-action value from the average of the Q-functions in the ensemble. MaxminDQN creates a proxy Q-function by selecting the minimum Q-value for each action across all Q-functions, and uses the maximal state-action value from this proxy to estimate an unbiased Q-function. Both EnsembleDQN and MaxminDQN have been shown to perform better than Double DQN. The primary insight of this paper is that the performance of ensemble-based methods is contingent on maintaining sufficient diversity in representation space between the Q-functions in the ensemble. If the Q-functions in the ensemble converge to a common representation (we will show that this is the case in many scenarios), the performance of these approaches significantly degrades. In this paper we propose cross-learner regularizers to prevent the collapse of the representation space in ensemble-based Q-learning methods. Intuitively, these regularizers capture an inductive bias towards more diverse representations. We investigate five different regularizers. The mathematical formulation of four of them corresponds to inequality measures borrowed from economic theory; while in economics high inequality is seen as a negative, here we use these metrics to encourage inequality between the representations. The fifth regularizer is inspired by consensus optimization.
There is a separate line of reinforcement learning literature where ensembles are used to address several other issues (Chen et al., 2017; Chua et al., 2018; Kurutach et al., 2018; Lee et al., 2020; Osband et al., 2016), such as exploration and error propagation, but we limit our solution to algorithms addressing the overestimation bias problem only. To summarize, our contributions are the following: 1. We show that high representation similarity between neural-network-based Q-functions leads to a decline in performance in ensemble-based Q-learning methods. 2. To mitigate this, we propose five regularizers, based on inequality measures from economic theory and on consensus optimization, that maximize representation diversity between Q-functions in ensemble-based Q-learning methods. 3. We show that applying the proposed regularizers to the MaxminDQN and EnsembleDQN methods can lead to significant improvements in performance over a variety of benchmarks.
2 BACKGROUND. Reinforcement learning models an agent interacting with a Markov Decision Process (MDP), defined as a five-element tuple $(S, A, P, r, \gamma)$, where $S$ is the state space, $A$ is the action space, $P: S \times A \times S \to [0, 1]$ are the state-action transition probabilities, $r: S \times A \times S \to \mathbb{R}$ is the reward mapping, and $\gamma \in [0, 1]$ is the discount factor. At each time step $t$ the agent observes the state of the environment $s_t \in S$ and selects an action $a_t \in A$. The action triggers a transition to a new state $s_{t+1} \in S$ according to the transition probabilities $P$, while the agent receives a scalar reward $R_t = r(s_t, a_t, s_{t+1})$. The goal of the agent is to learn a policy $\pi$ that maximizes the expectation of the discounted sum of future rewards. One way to implicitly learn the policy $\pi$ is the Q-learning algorithm, which estimates the expected sum of rewards from state $s_t$ when taking action $a_t$ by solving the Bellman equation
$$Q^*(s_t, a_t) = \mathbb{E}\big[ R_t + \gamma \max_{a' \in A} Q^*(s_{t+1}, a') \big].$$
The implicit policy $\pi$ can be extracted by acting greedily with respect to the optimal Q-function: $\arg\max_{a \in A} Q^*(s, a)$. One possible way to estimate the optimal Q-value is to iteratively update it for sampled states $s_t$ and actions $a_t$ using
$$Q^*(s_t, a_t) \leftarrow Q^*(s_t, a_t) + \alpha \big( Y_t - Q^*(s_t, a_t) \big), \quad \text{where } Y_t = R_t + \gamma \max_{a' \in A} Q^*(s_{t+1}, a'),$$
$\alpha$ is the step size, and $Y_t$ is called the target value. While this algorithm was initially studied in the context of a tabular representation of $Q$ for discrete states and actions, in many practical applications the Q-value is approximated by a learned function. Since the emergence of deep learning, the preferred approximation technique has been based on a deep neural network. DQN (Mnih et al., 2015) demonstrated super-human performance on Atari games, but required a very large number of training iterations. From this baseline, subsequent algorithms improved both the learning speed and the achievable performance, with one of the main means for this being techniques to reduce the overestimation bias of the Q-function. EnsembleDQN (Anschel et al., 2017) uses an ensemble of $N$ neural networks to estimate state-action values and uses their average to reduce both overestimation bias and estimation variance. Formally, the target value for EnsembleDQN is calculated using
$$Q^E(\cdot) = \frac{1}{N} \sum_{i=1}^N Q_i(\cdot), \qquad Y_t^E = R_t + \gamma \max_{a' \in A} Q^E(s_{t+1}, a'). \tag{1}$$
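For concreteness, the following NumPy sketch computes the EnsembleDQN target of Eq. (1), alongside the Maxmin target of Eq. (2) introduced in the next paragraph, from per-network Q-values of the next state. The ensemble size, discount factor, and Q-values are illustrative stand-ins, not values from the cited papers.
```python
# A minimal sketch of the two ensemble targets, assuming toy Q-values.
import numpy as np

def ensemble_dqn_target(reward, next_q, gamma=0.99):
    """Eq. (1): average the N Q-functions, then max over actions.

    next_q: array of shape (N, num_actions) holding Q_i(s_{t+1}, a)."""
    q_avg = next_q.mean(axis=0)                 # Q^E(s_{t+1}, .)
    return reward + gamma * q_avg.max()

def maxmin_dqn_target(reward, next_q, gamma=0.99):
    """Eq. (2): per-action minimum over the N Q-functions, then max over actions."""
    q_min = next_q.min(axis=0)                  # Q^M(s_{t+1}, .)
    return reward + gamma * q_min.max()

next_q = np.array([[1.0, 2.5, 0.5],             # N = 2 networks, 3 actions
                   [1.5, 1.0, 2.0]])
print(ensemble_dqn_target(1.0, next_q), maxmin_dqn_target(1.0, next_q))
```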
More recently, MaxminDQN (Lan et al., 2020) addresses the overestimation bias using order statistics, with the ensemble size $N$ serving as a hyperparameter to tune between underestimation and overestimation bias. The target value for MaxminDQN is calculated using
$$Q^M(\cdot, \cdot) = \min_{i=1,\ldots,N} Q_i(\cdot, \cdot), \qquad Y_t^M = R_t + \gamma \max_{a' \in A} Q^M(s_{t+1}, a'). \tag{2}$$
3 RELATED WORK. Techniques to address overestimation bias in RL: Addressing overestimation bias is a long-standing research topic, not only in reinforcement learning but also in other fields such as economics and statistics. It is commonly known as max-operator bias in statistics (D'Eramo et al., 2017) and as the winner's curse in economics (Thaler, 2012; Smith & Winkler, 2006). To address it, van Hasselt (2010) proposed Double Q-learning, subsequently adapted to neural-network-based function approximators as Double DQN (Hado van Hasselt et al., 2016). Alternatively, Zhang et al. (2017) and Lv et al. (2019) proposed weighted estimators of Double Q-learning, and Lee et al. (2013) introduced a bias correction term. Other approaches to addressing overestimation are based on averaging and ensembling. Techniques include averaging the Q-values from the previous $N$ versions of the Q-network (Anschel et al., 2017), taking linear combinations of the min and max over a pool of Q-values (Kumar et al., 2019), or using a random mixture from the pool (Agarwal et al., 2019). Regularization in reinforcement learning: Regularization in reinforcement learning has been used to perform effective exploration and to learn generalized policies. For instance, Grau-Moya et al. (2019) use mutual-information regularization to optimize a prior action distribution for better performance and exploration, Cheng et al. (2019) regularize the policy $\pi(a|s)$ using a control prior, and Galashov et al. (2019) use temporal-difference-error regularization to reduce variance in Generalized Advantage Estimation (Schulman et al., 2016). Generalization in reinforcement learning refers to the performance of the policy on environments different from the training environment. For example, Farebrother et al. (2018) studied the effect of the L2 norm on the generalization of DQN, Tobin et al. (2017) studied generalization from simulation to the real world, Pattanaik et al. (2018) studied parameter variations, and Zhang et al. (2018) studied the effect of different random seeds in environment generation. Representation similarity: Measuring the similarity between the representations learned by different neural networks is an active area of research. For instance, Raghu et al. (2017) used Canonical Correlation Analysis (CCA) to measure representation similarity; CCA finds two basis matrices such that, when the original matrices are projected onto these bases, the correlation is maximized. Raghu et al. (2017) and Mroueh et al. (2015) applied truncated singular value decomposition to the activations to make the measure robust to perturbations. Other works, such as Li et al. (2015) and Wang et al. (2018), studied the correlation between the neurons of different neural networks.
4 MAXIMIZING REPRESENTATION DIVERSITY IN ENSEMBLE-BASED DEEP Q-LEARNING. The work described in this paper is based on the conjecture that while ensemble-based deep Q-learning approaches aim to reduce the overestimation bias, this only works to the degree that the neural networks in the ensemble use diverse representations.
If, during training, these networks collapse to closely related representations, learning performance decreases. Based on this idea, we propose regularization techniques that maximize representation diversity between the networks of the ensemble.
4.1 REPRESENTATION SIMILARITY MEASURE. Let $X \in \mathbb{R}^{n \times p_1}$ denote a matrix of activations of $p_1$ neurons for $n$ examples, and let $Y \in \mathbb{R}^{n \times p_2}$ denote a matrix of activations of $p_2$ neurons for the same $n$ examples. Furthermore, consider $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$ and $L_{ij} = l(\mathbf{y}_i, \mathbf{y}_j)$, where $k$ and $l$ are two kernels. Centered Kernel Alignment (CKA) (Kornblith et al., 2019; Cortes et al., 2012; Cristianini et al., 2002) is a method for comparing representations of neural networks and identifying correspondences between layers, not only within the same network but also across different network architectures. CKA is a normalized form of the Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2005). Formally, CKA is defined as
$$\mathrm{CKA}(K, L) = \frac{\mathrm{HSIC}(K, L)}{\sqrt{\mathrm{HSIC}(K, K) \cdot \mathrm{HSIC}(L, L)}}.$$
HSIC is a test statistic for determining whether two sets of variables are independent. Its empirical estimator is defined as
$$\mathrm{HSIC}(K, L) = \frac{1}{(n-1)^2} \, \mathrm{tr}(KHLH),$$
where $H$ is the centering matrix $H_n = I_n - \frac{1}{n} \mathbf{1}\mathbf{1}^T$.
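As an illustration, here is a minimal NumPy sketch of CKA computed exactly as defined above. The linear kernel choice ($K = XX^T$, $L = YY^T$), common in Kornblith et al. (2019), is an assumption here, since the text leaves the kernels $k$ and $l$ generic.
```python
# Linear CKA: the normalized empirical HSIC of two Gram matrices.
import numpy as np

def hsic(K, L):
    """Empirical HSIC: tr(K H L H) / (n - 1)^2 with centering matrix H."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def cka(X, Y):
    """CKA(K, L) = HSIC(K, L) / sqrt(HSIC(K, K) * HSIC(L, L)), linear kernels."""
    K, L = X @ X.T, Y @ Y.T
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

# Activations of p1 = 8 and p2 = 16 neurons on the same n = 50 examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
Y = np.concatenate([X, rng.normal(size=(50, 8))], axis=1)  # partially shared
print(round(cka(X, X), 3), round(cka(X, Y), 3))            # 1.0, then < 1.0
```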
This paper proposes methods to induce diversity in the networks of ensemble-based Q-learning methods. This is achieved by maximizing a variety of measures of inequality based on the L2 parameter norms of the individual networks in an ensemble. The approach is motivated by the benefit of having diversity in the learned features, which itself is motivated by observations on the CKA of some EnsembleDQN networks.
SP:eb16e608d4bb9be2c7f2e358a5166c6c202272cc
Preventing Value Function Collapse in Ensemble Q-Learning by Maximizing Representation Diversity
1 INTRODUCTION . Q-learning ( Watkins , 1989 ) and its deep learning based successors inaugurated by DQN ( Mnih et al. , 2015 ) are model-free , value function based reinforcement learning algorithms . Their popularity stems from their intuitive , easy-to-implement update rule derived from the Bellman equation . At each time step , the agent updates its Q-value towards the expectation of the current reward plus the value corresponding to the maximal action in the next state . This state-action value represents the maximum sum of reward the agent believes it could obtain from the current state by taking the current action . Unfortunately ( Thrun & Schwartz , 1993 ; van Hasselt , 2010 ) have shown that this simple rule suffers from overestimation bias : due to the maximization operator in the update rule , positive and negative errors do not cancel each other out , but positive errors accumulate . The overestimation bias is particularly problematic under function approximation and have contributed towards learning sub-optimal policies ( Thrun & Schwartz , 1993 ; Szita & Lőrincz , 2008 ; Strehl et al. , 2009 ) . A possible solution is to introduce underestimation bias in the estimation of the Q-value . Double Q-learning ( van Hasselt , 2010 ) maintains two independent state-action value estimators ( Q-functions ) . The state-action value of estimator one is calculated by adding observed reward and maximal stateaction value from the other estimator . Double DQN ( Hado van Hasselt et al. , 2016 ) applied this idea using neural networks , and was shown to provide better performance than DQN . More recent actor-critic type deep RL algorithms such as TD3 ( Fujimoto et al. , 2018 ) and SAC ( Haarnoja et al. , 2018 ) also use two Q function estimators ( in combination with other techniques ) . Other approaches such as EnsembleDQN ( Anschel et al. , 2017 ) and MaxminDQN ( Lan et al. , 2020 ) maintain ensembles of Q-functions to estimate an unbiased Q-function . EnsembleDQN estimates the state-action values by adding the current observed reward and the maximal state-action value from the average of Q-functions from the ensemble . MaxminDQN creates a proxy Q-function by selecting the minimum Q-value for each action from all the Q-functions and using the maximal state-action value from the proxy Q-function to estimate an unbiased Q-function . Both EnsembleDQN and MaxminDQN have been shown to perform better than Double DQN . The primary insight of this paper is that the performance of ensemble based methods is contingent on maintaining sufficient diversity in the representation space between the Q-functions in the ensembles . If the Q-functions in the ensembles converge to a common representation ( we will show that this is the case in many scenarios ) , the performance of these approaches significantly degrades . In this paper we propose to use cross-learner regularizers to prevent the collapse of the representation space in ensemble-based Q-learning methods . Intuitively , these representations capture an inductive bias towards more diverse representations . We have investigated five different regularizers . The mathematical formulation of four of the regularizers correspond to inequality measures borrowed from economics theory . While in economics , high inequality is seen as a negative , in this case we use the metrics to encourage inequality between the representations . The fifth regularizer is inspired from consensus optimization . 
There is a separate line of reinforcement learning literature where ensembles are used to address several different issues (Chen et al., 2017; Chua et al., 2018; Kurutach et al., 2018; Lee et al., 2020; Osband et al., 2016), such as exploration and error propagation, but we limit our solution to algorithms addressing the overestimation bias problem only. To summarize, our contributions are the following: 1. We show that high representation similarity between neural network based Q-functions leads to a decline in performance in ensemble based Q-learning methods. 2. To mitigate this, we propose five regularizers, based on inequality measures from economic theory and on consensus optimization, that maximize representation diversity between Q-functions in ensemble based Q-learning methods. 3. We show that applying the proposed regularizers to the MaxminDQN and EnsembleDQN methods can lead to significant improvements in performance over a variety of benchmarks. 2 BACKGROUND. Reinforcement learning models the interaction between an agent and its environment as a Markov Decision Process (MDP), defined as a five element tuple (S, A, P, r, γ), where S is the state space, A is the action space, P : S × A × S → [0, 1] are the state-action transition probabilities, r : S × A × S → R is the reward mapping, and γ ∈ [0, 1] is the discount factor. At each time step t the agent observes the state of the environment s_t ∈ S and selects an action a_t ∈ A. The action triggers a transition to a new state s_{t+1} ∈ S according to the transition probabilities P, while the agent receives a scalar reward R_t = r(s_t, a_t, s_{t+1}). The goal of the agent is to learn a policy π that maximizes the expectation of the discounted sum of future rewards. One way to implicitly learn the policy π is the Q-learning algorithm, which estimates the expected sum of rewards of state s_t if we take action a_t by solving the Bellman equation

Q^*(s_t, a_t) = \mathbb{E}\left[ R_t + \gamma \max_{a' \in A} Q^*(s_{t+1}, a') \right].

The implicit policy π can be extracted by acting greedily with respect to the optimal Q-function: \arg\max_{a \in A} Q^*(s, a). One possible way to estimate the optimal Q-value is to iteratively update it for sampled states s_t and actions a_t using

Q^*(s_t, a_t) \leftarrow Q^*(s_t, a_t) + \alpha \left( Y_t - Q^*(s_t, a_t) \right), \quad \text{where} \quad Y_t = R_t + \gamma \max_{a' \in A} Q^*(s_{t+1}, a'),

where α is the step size and Y_t is called the target value. While this algorithm was initially studied in the context of a tabular representation of Q for discrete states and actions, in many practical applications the Q-value is approximated by a learned function. Since the emergence of deep learning, the preferred approximation technique is based on a deep neural network. DQN (Mnih et al., 2015) demonstrated super-human performance in Atari games, but required a very large number of training iterations. From this baseline, subsequent algorithms improved both the learning speed and the achievable performance, with one of the main means for this being techniques to reduce the overestimation bias of the Q-function. EnsembleDQN (Anschel et al., 2017) uses an ensemble of N neural networks to estimate state-action values and uses their average to reduce both overestimation bias and estimation variance. Formally, the target value for EnsembleDQN is calculated using

Q^E(\cdot) = \frac{1}{N} \sum_{i=1}^{N} Q_i(\cdot), \qquad Y^E_t = R_t + \gamma \max_{a' \in A} Q^E(s_{t+1}, a'). \quad (1)
More recently, MaxminDQN (Lan et al., 2020) addresses the overestimation bias using order statistics, using the ensemble size N as a hyperparameter to tune between underestimation and overestimation bias. The target value for MaxminDQN is calculated using

Q^M(\cdot, \cdot) = \min_{i=1,\dots,N} Q_i(\cdot, \cdot), \qquad Y^M_t = R_t + \gamma \max_{a' \in A} Q^M(s_{t+1}, a'). \quad (2)
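As a rough sketch of how the two targets in Equations (1) and (2) could be computed (our illustration; the function and variable names are hypothetical, and we assume each element of q_nets is a PyTorch module mapping a batch of states to per-action Q-values):

```python
import torch

def ensemble_dqn_target(q_nets, reward, next_state, gamma=0.99):
    # Eq. (1): average the ensemble's Q-values, then take the max action.
    with torch.no_grad():
        q_avg = torch.stack([q(next_state) for q in q_nets]).mean(dim=0)
        return reward + gamma * q_avg.max(dim=-1).values

def maxmin_dqn_target(q_nets, reward, next_state, gamma=0.99):
    # Eq. (2): elementwise min over the ensemble, then the max action.
    with torch.no_grad():
        q_min = torch.stack([q(next_state) for q in q_nets]).min(dim=0).values
        return reward + gamma * q_min.max(dim=-1).values
```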
3 RELATED WORK. Techniques to Address Overestimation Bias in RL: Addressing overestimation bias is a long-standing research topic not only in reinforcement learning but also in other fields such as economics and statistics. It is commonly known as max-operator bias in statistics (D'Eramo et al., 2017) and as the winner's curse in economics (Thaler, 2012; Smith & Winkler, 2006). To address this, van Hasselt (2010) proposed Double Q-learning, subsequently adapted to neural network based function approximators as Double DQN (Hado van Hasselt et al., 2016). Alternatively, Zhang et al. (2017) and Lv et al. (2019) proposed weighted estimators of Double Q-learning, and Lee et al. (2013) introduced a bias correction term. Other approaches to address the overestimation are based on averaging and ensembling. Techniques include averaging Q-values from the previous N versions of the Q-network (Anschel et al., 2017), taking linear combinations of min and max over the pool of Q-values (Kumar et al., 2019), or using a random mixture from the pool (Agarwal et al., 2019). Regularization in Reinforcement Learning: Regularization in reinforcement learning has been used to perform effective exploration and to learn generalized policies. For instance, Grau-Moya et al. (2019) use mutual-information regularization to optimize a prior action distribution for better performance and exploration, Cheng et al. (2019) regularize the policy π(a|s) using a control prior, and Galashov et al. (2019) use temporal difference error regularization to reduce variance in Generalized Advantage Estimation (Schulman et al., 2016). Generalization in reinforcement learning refers to the performance of the policy on environments different from the training environment. For example, Farebrother et al. (2018) studied the effect of the L2 norm on DQN generalization, Tobin et al. (2017) studied generalization between simulations and the real world, Pattanaik et al. (2018) studied parameter variations, and Zhang et al. (2018) studied the effect of different random seeds in environment generation. Representation Similarity: Measuring the similarity between the representations learned by different neural networks is an active area of research. For instance, Raghu et al. (2017) used Canonical Correlation Analysis (CCA) to measure representation similarity. CCA finds two basis matrices such that, when the original matrices are projected onto these bases, the correlation is maximized. Raghu et al. (2017) and Mroueh et al. (2015) used truncated singular value decomposition on the activations to make the measure robust to perturbations. Other work, such as Li et al. (2015) and Wang et al. (2018), studied the correlation between the neurons of neural networks. 4 MAXIMIZING REPRESENTATION DIVERSITY IN ENSEMBLE-BASED DEEP Q-LEARNING. The work described in this paper is based on the conjecture that while ensemble-based deep Q-learning approaches aim to reduce the overestimation bias, this only works to the degree that the neural networks in the ensemble use diverse representations. If, during training, these networks collapse to closely related representations, the learning performance decreases. Building on this idea, we propose to use regularization techniques to maximize representation diversity between the networks of the ensemble. 4.1 REPRESENTATION SIMILARITY MEASURE. Let X ∈ R^{n×p_1} denote a matrix of activations of p_1 neurons for n examples and Y ∈ R^{n×p_2} denote a matrix of activations of p_2 neurons for the same n examples. Furthermore, we consider K_{ij} = k(x_i, x_j) and L_{ij} = l(y_i, y_j), where k and l are two kernels. Centered Kernel Alignment (CKA) (Kornblith et al., 2019; Cortes et al., 2012; Cristianini et al., 2002) is a method for comparing representations of neural networks and identifying correspondences between layers, not only within the same network but also across different neural network architectures. CKA is a normalized form of the Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2005). Formally, CKA is defined as

CKA(K, L) = \frac{HSIC(K, L)}{\sqrt{HSIC(K, K) \cdot HSIC(L, L)}}.

HSIC is a test statistic for determining whether two sets of variables are independent. The empirical estimator of HSIC is defined as

HSIC(K, L) = \frac{1}{(n-1)^2} \operatorname{tr}(KHLH),

where H is the centering matrix H_n = I_n - \frac{1}{n} \mathbf{1}\mathbf{1}^T.
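A minimal NumPy sketch of CKA with a linear kernel, following the formulas above (our illustration, not the authors' code):

```python
import numpy as np

def linear_cka(x, y):
    """Linear-kernel CKA between activation matrices x (n, p1) and y (n, p2)."""
    n = x.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n          # centering matrix H_n
    k, l = x @ x.T, y @ y.T                      # linear kernels K and L
    hsic = lambda a, b: np.trace(a @ h @ b @ h) / (n - 1) ** 2
    return hsic(k, l) / np.sqrt(hsic(k, k) * hsic(l, l))
```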
Q-learning is known to suffer from overestimation bias. Approaches like EnsembleDQN and MaxminDQN combine different estimates from ensembles of learners to reduce this bias. The authors observe that these methods degrade when the learners' representations become too similar, and tackle this with regularization techniques that maximise the diversity of the representation space. Five different regularization functions are evaluated in the paper, and experiments show that the proposed regularization improves diversity and outperforms MaxminDQN and EnsembleDQN. Note that the reviewer is not very familiar with methods for introducing diversity in representations, but based on an educated guess, the proposed method looks interesting.
Brain-like approaches to unsupervised learning of hidden representations - a comparative study
1 INTRODUCTION. Artificial neural networks have made remarkable progress in supervised pattern recognition in recent years. In particular, deep neural networks have dominated the field, largely due to their capability to discover hierarchies of salient data representations. However, most recent deep learning methods rely extensively on supervised learning from labelled samples for extracting and tuning data representations. Given the abundance of unlabeled data, there is an urgent demand for unsupervised or semi-supervised approaches to learning of hidden representations (Bengio et al., 2013). Although early concepts of greedy layer-wise pretraining allow for exploiting unlabeled data, ultimately the application of deep pre-trained networks to pattern recognition problems rests on label-dependent end-to-end weight fine-tuning (Erhan et al., 2009). At the same time, we observe a surge of interest in more brain-plausible networks for unsupervised and semi-supervised learning problems that build on some fundamental principles of neural information processing in the brain (Pehlevan & Chklovskii, 2019; Illing et al., 2019). Most importantly, these brain-like computing approaches rely on local learning rules and label-independent, biologically compatible mechanisms to build data representations, whereas deep learning methods predominantly make use of error back-propagation (backprop) for learning the weights. Although efficient, backprop has several issues that make it an unlikely candidate model for synaptic plasticity in the brain. The most apparent issue is that the synaptic connection strength between two biological neurons is expected to comply with Hebb's postulate, i.e. to depend only on the locally available information provided by the activities of the pre- and postsynaptic neurons. This is violated in backprop, since synaptic weight updates need gradient signals to be communicated from distant output layers. Please refer to (Whittington & Bogacz, 2019; Lillicrap et al., 2020) for a detailed review of possible biologically plausible implementations of, and alternatives to, backprop. In this work we use the MNIST dataset to compare two classical learning systems, the autoencoder (AE) and the restricted Boltzmann machine (RBM), with two brain-like approaches to unsupervised learning of hidden representations, i.e. the recently proposed model by Krotov and Hopfield (referred to as the KH model) (Krotov & Hopfield, 2019) and the BCPNN model (Ravichandran et al., 2020), both of which rely on biologically plausible learning strategies. In particular, we qualitatively examine the extracted hidden representations and quantify their label-dependent separability using a simple linear classifier on top of all the networks under investigation. This classification step is not part of the learning strategy, and we use it merely to evaluate the resulting representations. Special emphasis is placed on the feedforward BCPNN model with a single hidden layer, which frames the update and learning steps of the neural network as probabilistic computations. Probabilistic approaches are widely used in both deep learning models (Goodfellow et al., 2016) and computational models of brain function (Doya et al., 2007). One disadvantage of probabilistic models is that exact inference and learning on distributed representations is often intractable and forces approximate approaches like sampling-based or variational methods (Rezende et al., 2014).
In this work, we adopt a modular BCPNN architecture, previously used in abstract models of associative memory (Sandberg et al., 2002; Lansner et al., 2009) and action selection (Berthet et al., 2012), and in applications to brain imaging (Benjaminsson et al., 2010; Schain et al., 2013) and data mining (Orre et al., 2000). Spiking versions of BCPNN have also been used in biologically detailed models of different forms of cortical associative memory (Lundqvist et al., 2011; Fiebig & Lansner, 2017; Tully et al., 2014). The modules in BCPNN, referred to as hypercolumns (HCs), comprise a set of functional minicolumns (MCs) that compete in a soft winner-take-all manner. The abstract view of a HC in this cortical-like network is that it represents some attribute, e.g. edge orientation, in a discretely coded manner. A minicolumn comprises a unit that conceptually represents one discrete value (a realization of the given attribute) and, as a biological parallel, it accounts for a local subnetwork of around a hundred recurrently connected neurons with similar receptive field properties (Mountcastle, 1997). Such an architecture was initially generalized from the primary visual cortex, but today it also has support from later experimental work and has been featured in spiking computational models of cortex (Rockland, 2010; Lansner, 2009). Finally, in this work we highlight additional mechanisms of bias regulation and structural plasticity, introduced recently to the BCPNN framework (Ravichandran et al., 2020), which enable unsupervised learning of hidden representations. The bias regulation mechanism ensures that the activities of all units in the hidden layer are maintained near their target activity by regulating their bias parameter. Structural plasticity learns a set of sparse connections from the input layer to the hidden layer by maximizing a local greedy information-theoretic score. 2 RELATED WORKS. A popular unsupervised learning approach is to train a hidden layer to reproduce the input data, as, for example, in the AE and RBM. The AE and RBM networks trained with a single hidden layer are relevant here since learning the weights of the input-to-hidden-layer connections relies on local gradients, and the representations can be stacked on top of each other to extract hierarchical features. However, stacked autoencoders and deep belief nets (stacked RBMs) have typically been used for pre-training procedures followed by end-to-end supervised fine-tuning (using backprop) (Erhan et al., 2009). The recently proposed KH model (Krotov & Hopfield, 2019) addresses the problem of learning solely with local gradients by learning hidden representations using only an unsupervised method. In this network the input-to-hidden connections are trained, and additional (non-plastic) lateral inhibition provides competition within the hidden layer. For evaluating the representation, the weights are frozen and a linear classifier trained with labels is used for the final classification. Our approach shares some common features with the KH model, e.g. learning hidden representations solely by unsupervised methods and evaluating the representations by a separate classifier (Illing et al. (2019) provide an extensive review of methods with similar goals). All the aforementioned models employ either competition within the hidden layer (KH) or feedback connections from hidden to input (RBM and AE).
The BCPNN uses only feedforward connections, along with an implicit competition via a local softmax operation, the neural implementation of which would be lateral inhibition. It has also been observed that, for unsupervised learning, sparse connectivity in the feedforward connections performs better than full connectivity (Illing et al., 2019). In addition to the unsupervised methods, networks employing supervised learning, like convolutional neural networks (CNNs), force a fixed spatial filter to obtain this sparse connectivity (Lindsay, 2020). The BCPNN model takes an alternative approach where, along with learning the weights of the feedforward connections, which is regarded as biological synaptic plasticity, a sparse connectivity between the input and hidden layer is learnt simultaneously, in analogy with structural plasticity in the brain (Butz et al., 2009). 3 BAYESIAN CONFIDENCE PROPAGATION NEURAL NETWORK. Here we describe the BCPNN network architecture and update rules (Sandberg et al., 2002; Lansner et al., 2009). The feedforward BCPNN architecture contains two layers, referred to as the input layer and the hidden layer. A layer consists of a set of HCs, each of which represents a discrete random variable X_i (upper case). Each HC, in turn, is composed of a set of MCs representing the particular values x_i (lower case) of X_i. The probability of X_i is then a multinomial distribution, defined as p(X_i = x_i), such that \sum_{x_i} p(X_i = x_i) = 1. In the neural network, the activity of an MC is interpreted as p(X_i = x_i), and the activities of all the MCs inside a HC sum to one. Since the network is a probabilistic graphical model (see Fig. 1), we can compute the posterior of a target HC in the hidden layer conditioned on all the source HCs in the input layer. We will use x's and y's to refer to the HCs in the input and hidden layer, respectively. Computing the exact posterior p(Y_j | X_1, \dots, X_N) over the target HC is intractable, since it scales exponentially with the number of units. The naive Bayes assumption p(X_1, \dots, X_N | Y_j) = \prod_{i=1}^{N} p(X_i | Y_j) allows us to write the posterior as follows:

p(Y_j | X_1, \dots, X_N) = \frac{p(Y_j) \prod_{i=1}^{N} p(X_i | Y_j)}{p(X_1, \dots, X_N)} \propto p(Y_j) \prod_{i=1}^{N} p(X_i | Y_j) \quad (1)

When the network is driven by input data \{X_1, \dots, X_N\} = \{x^D_1, \dots, x^D_N\}, we can write the posterior probability of a target MC in terms of the source MCs as:

p(y_j | x^D_1, \dots, x^D_N) \propto p(y_j) \prod_{i=1}^{N} p(x^D_i | y_j) = p(y_j) \prod_{i=1}^{N} \prod_{x_i} p(x_i | y_j)^{I(x_i = x^D_i)} \quad (2)

where I(\cdot) is the indicator function that equals 1 if its argument is true and zero otherwise. We have written the posterior of the target MC as a function of all the source MCs (all x_i's). The log posterior can be written as:

\log p(y_j | x^D_1, \dots, x^D_N) \propto \log p(y_j) + \sum_{i=1}^{N} \sum_{x_i} I(x_i = x^D_i) \log p(x_i | y_j) \quad (3)

Since the posterior is linear in the indicator function of the data sample, I(x_i = x^D_i) can be approximated by its expected value, defined as \pi(x_i) = p(x_i = x^D_i). Except for \pi(x_i), all the terms in the posterior are functions of the marginals p(y_j) and p(x_i, y_j). We define the bias \beta(y_j) = \log p(y_j) and the weight w(x_i, y_j) = \log p(x_i | y_j) in analogy with artificial neural networks.
The inference step to calculate the posterior probabilities of the target MCs conditioned on the input sample is given by the activity update equations:

h(y_j) = \beta(y_j) + \sum_{i=1}^{N} \sum_{x_i} \pi(x_i) \, w(x_i, y_j) \quad (4)

\pi(y_j) = \frac{\exp(h(y_j))}{\sum_k \exp(h(y_k))} \quad (5)

where h(y_j) is the total input received by each target MC, from which the posterior probability \pi(y_j) = p(y_j | x^D_1, \dots, x^D_N) is recovered by softmax normalization over all MCs within the HC. As each data sample is presented, the learning step updates the marginal probabilities, weights, and biases as follows:

\tau_p \frac{dp(y_j)}{dt} = \pi(y_j) - p(y_j) \quad (6)

\tau_p \frac{dp(x_i, y_j)}{dt} = \pi(x_i)\pi(y_j) - p(x_i, y_j) \quad (7)

\beta(y_j) = k_\beta \log p(y_j) \quad (8)

w(x_i, y_j) = \log \frac{p(x_i, y_j)}{p(y_j)} \quad (9)

The term \tau_p is a learning time constant and k_\beta is the bias gain. The set of Equations 4-9 defines the update and learning equations of the BCPNN architecture. In this work, we use the abstract non-spiking model of BCPNN for the purpose of representation learning. The network for unsupervised representation learning requires, in addition to the update and learning equations, the following two mechanisms to enable learning representations (Ravichandran et al., 2020).
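The following sketch (ours, not the authors' code) illustrates one inference-plus-learning step for a single hidden hypercolumn, assuming a simple Euler discretization of Equations 6-7 with unit time step; all names are hypothetical:

```python
import numpy as np

def bcpnn_step(pi_x, w, beta, p_y, p_xy, tau_p=100.0, k_beta=1.0):
    """One BCPNN inference + learning step for one hidden hypercolumn.

    pi_x : (n_inputs,) input unit activations pi(x_i)
    w    : (n_inputs, n_hidden) weights log p(x_i | y_j)
    beta : (n_hidden,) biases k_beta * log p(y_j)
    p_y, p_xy must be initialized to strictly positive values.
    """
    # Inference (Eqs. 4-5): total support, then softmax within the HC.
    h = beta + pi_x @ w
    pi_y = np.exp(h - h.max())
    pi_y /= pi_y.sum()

    # Learning (Eqs. 6-7): Euler step of the moving-average marginals.
    p_y = p_y + (pi_y - p_y) / tau_p
    p_xy = p_xy + (np.outer(pi_x, pi_y) - p_xy) / tau_p

    # Recompute bias and weights from the marginals (Eqs. 8-9).
    beta = k_beta * np.log(p_y)
    w = np.log(p_xy / p_y[None, :])
    return pi_y, w, beta, p_y, p_xy
```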
This paper evaluated four unsupervised learning approaches (BCPNN, KH, RBM, AE) by training a supervised classification layer on top of the hidden representation. Specifically, the authors qualitatively compared the receptive fields and quantitatively compared the classification performance across four models. The authors emphasized the advantages of BCPNN since it applies biologically plausible local learning rules and requires fewer epochs for convergence.
The Bayesian Confidence Propagation Neural Network (BCPNN) has recently been extended to the case of unsupervised learning (Ravichandran et al., IJCNN, 2020). This paper compares this extension to restricted Boltzmann machines, autoencoders, and a biologically plausible model proposed by Krotov & Hopfield (PNAS, 2019) on the MNIST dataset. For evaluation, the authors consider the learned receptive fields and the classification performance of a linear classifier. The paper is very similar to (Ravichandran et al., IJCNN, 2020), but with an extended experimental section.
Compute- and Memory-Efficient Reinforcement Learning with Latent Experience Replay
1 INTRODUCTION. Success stories of deep reinforcement learning (RL) from high-dimensional inputs such as pixels or large spatial layouts include achieving superhuman performance on Atari games (Mnih et al., 2015; Schrittwieser et al., 2019; Badia et al., 2020), reaching grandmaster level in Starcraft II (Vinyals et al., 2019), and grasping a diverse set of objects with impressive success rates and generalization with robots in the real world (Kalashnikov et al., 2018). Modern off-policy RL algorithms (Mnih et al., 2015; Hessel et al., 2018; Hafner et al., 2019; 2020; Srinivas et al., 2020; Kostrikov et al., 2020; Laskin et al., 2020) have improved the sample-efficiency of agents that process high-dimensional pixel inputs with convolutional neural networks (CNNs; LeCun et al. 1998) using past experiential data that is typically stored in raw observation form in a replay buffer (Lin, 1992). However, these methods demand high memory and computational bandwidth, which makes deep RL inaccessible in several scenarios, such as learning with much lighter on-device computation (e.g. mobile phones or other light-weight edge devices). For compute- and memory-efficient deep learning, several strategies, such as network pruning (Han et al., 2015; Frankle & Carbin, 2019), quantization (Han et al., 2015; Iandola et al., 2016), and freezing (Yosinski et al., 2014; Raghu et al., 2017), have been proposed in supervised and unsupervised learning for various purposes (see Section 2 for more details). In computer vision, Raghu et al. (2017) showed that the computational cost of updating CNNs can be reduced by freezing lower layers early in training, and Han et al. (2015) introduced deep compression, which reduces the memory requirement of neural networks by producing a sparse network. In natural language processing, several approaches (Tay et al., 2019; Sun et al., 2020) have studied improving the computational efficiency of Transformers (Vaswani et al., 2017). In deep RL, however, developing compute- and memory-efficient techniques has received relatively little attention despite its serious impact on the practicality of RL algorithms. In this paper, we propose Latent Vector Experience Replay (LeVER), a simple technique to reduce computational overhead and memory requirements that is compatible with various off-policy RL algorithms (Haarnoja et al., 2018; Hessel et al., 2018; Srinivas et al., 2020). Our main idea is to freeze the lower layers of the CNN encoders of RL agents early in training, which enables two key capabilities: (a) compute-efficiency: reducing the computational overhead of gradient updates in CNNs; (b) memory-efficiency: saving memory by storing low-dimensional latent vectors in the experience replay buffer instead of high-dimensional images. Additionally, we leverage the memory-efficiency of LeVER to adaptively increase the replay capacity, resulting in improved sample-efficiency of off-policy RL algorithms in constrained-memory settings. LeVER achieves these improvements without sacrificing the performance of RL agents, owing to the early convergence of CNN encoders. To summarize, the main contributions of this paper are as follows: • We present LeVER, a compute- and memory-efficient technique that can be used in conjunction with most modern off-policy RL algorithms (Haarnoja et al., 2018; Hessel et al., 2018).
• We show that LeVER significantly reduces computation while matching the original performance of existing RL algorithms on both continuous control tasks from the DeepMind Control Suite (Tassa et al., 2018) and discrete control tasks from Atari games (Bellemare et al., 2013). • We show that LeVER improves the sample-efficiency of RL agents in constrained-memory settings by enabling an increased replay buffer capacity. • Finally, we show that LeVER is useful for computation-efficient transfer learning, highlighting the generality and transferability of encoder features. 2 RELATED WORK. Off-policy deep reinforcement learning. The most sample-efficient RL agents often use off-policy RL algorithms, a recipe for improving the agent's policy from experiences that may have been recorded with a different policy (Sutton & Barto, 2018). Off-policy RL algorithms are typically based on Q-Learning (Watkins & Dayan, 1992), which estimates the optimal value functions for the task at hand, while actor-critic based off-policy methods (Lillicrap et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018) are also commonly used. In this paper we consider Deep Q-Networks (DQN; Mnih et al. 2015), which combine the function approximation capability of deep convolutional neural networks (CNNs; LeCun et al. 1998) with Q-Learning and the use of an experience replay buffer (Lin, 1992), as well as off-policy actor-critic methods (Lillicrap et al., 2016; Haarnoja et al., 2018), which have been proposed for continuous control tasks. Taking into account the learning ability of humans and the practical limitations of wall-clock time for deploying RL algorithms in the real world, particularly those that learn from raw high-dimensional inputs such as pixels (Kalashnikov et al., 2018), the sample-inefficiency of off-policy RL algorithms has been a research topic of wide interest and importance (Lake et al., 2017; Kaiser et al., 2020). To address this, several improvements in pixel-based off-policy RL have been proposed recently: algorithmic improvements such as Rainbow (Hessel et al., 2018) and its data-efficient versions (van Hasselt et al., 2019); ensemble approaches based on bootstrapping (Osband et al., 2016; Lee et al., 2020); combining RL algorithms with auxiliary predictive, reconstruction, and contrastive losses (Jaderberg et al., 2017; Higgins et al., 2017; Oord et al., 2018; Yarats et al., 2019; Srinivas et al., 2020; Stooke et al., 2020); using world models for auxiliary losses and/or synthetic rollouts (Sutton, 1991; Ha & Schmidhuber, 2018; Kaiser et al., 2020; Hafner et al., 2020); and using data augmentations on images to improve sample-efficiency (Laskin et al., 2020; Kostrikov et al., 2020). Compute-efficient techniques in machine learning. Most recent progress in deep learning and RL has relied heavily on increased access to more powerful computational resources. To address this, Mattson et al. (2020) presented MLPerf, a fair and precise ML benchmark that evaluates model training time on standard datasets, driving scalability alongside performance, following a recent focus on mitigating the computational cost of training ML models. Several techniques, such as pruning and quantization (Han et al., 2015; Frankle & Carbin, 2019; Blalock et al., 2020; Iandola et al., 2016; Tay et al., 2019), have been developed to address compute and memory requirements.
Raghu et al. (2017) proposed freezing earlier layers to remove computationally expensive backward passes in supervised learning tasks, motivated by the bottom-up convergence of neural networks. This intuition was further extended to recurrent neural networks (Morcos et al., 2018) and continual learning (Pellegrini et al., 2019), and Yosinski et al. (2014) studied the transferability of frozen and fine-tuned CNN parameters. Fang et al. (2019) store low-dimensional embeddings of input observations in a scene memory for long-horizon tasks. We focus on the feasibility of freezing neural network layers in deep RL and show that this idea can improve the compute- and memory-efficiency of many off-policy algorithms on standard RL benchmarks. 3 BACKGROUND. We formulate the visual control task as a partially observable Markov decision process (POMDP; Sutton & Barto 2018; Kaelbling et al. 1998). Formally, at each timestep t, the agent receives a high-dimensional observation o_t, which is an indirect representation of the state s_t, and chooses an action a_t based on its policy \pi. The environment returns a reward r_t and the agent transitions to the next observation o_{t+1}. The return R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k} is the total accumulated reward from timestep t with a discount factor \gamma \in [0, 1). The goal of RL is to learn a policy \pi that maximizes the expected return over trajectories. Following the common practice in DQN (Mnih et al., 2015), we handle the partial observability of the environment using stacked input observations, which are processed through the convolutional layers of an encoder f_\psi. Soft Actor-Critic. SAC (Haarnoja et al., 2018) is an off-policy actor-critic method based on the maximum entropy RL framework (Ziebart, 2010), which encourages robustness to noise and exploration by maximizing a weighted objective of the reward and the policy entropy. To update the parameters, SAC alternates between a soft policy evaluation and a soft policy improvement. At the soft policy evaluation step, a soft Q-function, which is modeled as a neural network with parameters \theta, is updated by minimizing the following soft Bellman residual:

L^{SAC}_Q(\theta, \psi) = \mathbb{E}_{\tau_t \sim \mathcal{B}} \Big[ \big( Q_\theta(f_\psi(o_t), a_t) - r_t - \gamma \, \mathbb{E}_{a_{t+1} \sim \pi_\phi} \big[ Q_{\bar{\theta}}(f_{\bar{\psi}}(o_{t+1}), a_{t+1}) - \alpha \log \pi_\phi(a_{t+1} | f_\psi(o_{t+1})) \big] \big)^2 \Big],

where \tau_t = (o_t, a_t, r_t, o_{t+1}) is a transition, \mathcal{B} is a replay buffer, \bar{\theta}, \bar{\psi} are the delayed parameters, and \alpha is a temperature parameter. At the soft policy improvement step, the policy \pi with parameters \phi is updated by minimizing the following objective:

L^{SAC}_\pi(\phi) = \mathbb{E}_{o_t \sim \mathcal{B}, \, a_t \sim \pi_\phi} \big[ \alpha \log \pi_\phi(a_t | f_\psi(o_t)) - Q_\theta(f_\psi(o_t), a_t) \big]. \quad (1)

Here, the policy is modeled as a Gaussian with mean and covariance given by neural networks to handle continuous action spaces. Deep Q-learning. The DQN algorithm (Mnih et al., 2015) learns a Q-function, which is modeled as a neural network with parameters \theta, by minimizing the following Bellman residual:

L^{DQN}(\theta, \psi) = \mathbb{E}_{\tau_t \sim \mathcal{B}} \Big[ \big( Q_\theta(f_\psi(o_t), a_t) - r_t - \gamma \max_{a} Q_{\bar{\theta}}(f_{\bar{\psi}}(o_{t+1}), a) \big)^2 \Big], \quad (2)

where \tau_t = (o_t, a_t, r_t, o_{t+1}) is a transition, \mathcal{B} is a replay buffer, and \bar{\theta}, \bar{\psi} are the delayed parameters. Rainbow DQN integrates several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017). For exposition, we refer the reader to Hessel et al. (2018) for more detailed explanations of Rainbow DQN.
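As a concrete illustration of the DQN objective in Equation (2) with a separate encoder f_\psi (our sketch; the names are hypothetical and all arguments are assumed to be PyTorch modules and batched tensors):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, q_target, encoder, encoder_target,
             obs, act, rew, next_obs, gamma=0.99):
    # Eq. (2): TD error against frozen target copies of Q and the encoder.
    with torch.no_grad():
        target = rew + gamma * q_target(encoder_target(next_obs)).max(dim=-1).values
    # Q-value of the action actually taken (act: LongTensor of shape (batch,)).
    q = q_net(encoder(obs)).gather(1, act.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q, target)
```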
4 LEVER: LATENT VECTOR EXPERIENCE REPLAY. In this section we present LeVER: Latent Vector Experience Replay, which can be used in conjunction with most modern off-policy RL algorithms, such as SAC (Haarnoja et al., 2018) and Rainbow DQN (Hessel et al., 2018). Our main idea is to freeze the lower layers during training and only update the higher layers, which eliminates the computational overhead of computing gradients for, and updating, the lower layers. We additionally improve the memory-efficiency of off-policy RL algorithms by storing low-dimensional latent vectors in the replay buffer instead of high-dimensional pixel observations. See Figure 1 and Appendix A for more details of our method.
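A rough sketch of the two ingredients (ours, not the reference implementation): freezing the encoder, and a replay buffer that stores latent vectors z = f_\psi(o) rather than raw frames. All class and variable names are hypothetical.

```python
import torch

def freeze(encoder):
    """Freeze the CNN encoder: no more gradients or updates for it."""
    for p in encoder.parameters():
        p.requires_grad_(False)
    encoder.eval()

class LatentReplayBuffer:
    """Fixed-size buffer holding low-dimensional latents instead of pixels."""
    def __init__(self, capacity, latent_dim, action_dim):
        self.z = torch.empty(capacity, latent_dim)        # f_psi(o_t)
        self.z_next = torch.empty(capacity, latent_dim)   # f_psi(o_{t+1})
        self.a = torch.empty(capacity, action_dim)
        self.r = torch.empty(capacity)
        self.idx, self.full, self.capacity = 0, False, capacity

    def add(self, z, a, r, z_next):
        self.z[self.idx], self.a[self.idx] = z, a
        self.r[self.idx], self.z_next[self.idx] = r, z_next
        self.idx = (self.idx + 1) % self.capacity
        self.full = self.full or self.idx == 0

    def sample(self, batch_size):
        n = self.capacity if self.full else self.idx
        i = torch.randint(n, (batch_size,))
        return self.z[i], self.a[i], self.r[i], self.z_next[i]
```

Because a latent vector is typically orders of magnitude smaller than a stack of frames, the same memory budget admits a much larger capacity, which is what enables the adaptive replay-capacity increase described above.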
This work proposes LeVER, a method that modifies general off-policy RL algorithms by freezing the early embedding layers (in this particular case, the first few layers of a CNN) at a fixed point in training. As a direct consequence, the method makes it possible to store embeddings in the experience replay buffer rather than raw observations, with a potential decrease in required memory, as well as a wall-clock speedup due to the fewer gradient computations needed for every update. The method is benchmarked with a couple of off-policy RL algorithms on a few different environments.
This manuscript proposes to reduce the intensive computation and memory requirements of reinforcement learning training by freezing the parameters of the lower layers early. In addition, the authors propose to store low-dimensional latent vectors rather than high-dimensional images in the replay buffer for experience replay. The effectiveness of the proposed techniques is evaluated on DeepMind Control environments and Atari. The motivation for this work is strong, and the results are impressive. However, the proposed technique is described in a very general way, without clearly defined applicable conditions and specific design choices. Below are detailed comments and questions.
Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization
1 INTRODUCTION.

Model-based Reinforcement Learning (RL) aims to learn an approximate model of the environment's dynamics from existing logged interactions to facilitate efficient policy evaluation and optimization. Early work on model-based RL uses simple tabular (Sutton, 1990; Moore and Atkeson, 1993; Peng and Williams, 1993) and locally linear (Atkeson et al., 1997) dynamics models, which often result in a large degree of model bias (Deisenroth and Rasmussen, 2011). Recent work adopts feedforward neural networks to model complex transition dynamics and improve generalization to unseen states and actions, achieving a high level of performance on standard RL benchmarks (Chua et al., 2018; Wang et al., 2019). However, standard feedforward dynamics models assume that different dimensions of the next state and reward are conditionally independent given the current state and action, which may lead to poor uncertainty estimates, with unclear effects on RL applications.

In this work, we propose a new family of autoregressive dynamics models and study their effectiveness for off-policy evaluation (OPE) and offline policy optimization on continuous control. Autoregressive dynamics models generate each dimension of the next state conditioned on previous dimensions of the next state, in addition to the current state and action (see Figure 1). This means that to sample the next state from an autoregressive dynamics model, one needs n sequential steps, where n is the number of state dimensions, and one more step to generate the reward. By contrast, standard feedforward dynamics models take the current state and action as input and predict the distribution of the next state and reward as a multivariate Gaussian with a diagonal covariance structure (e.g., Chua et al. (2018); Janner et al. (2019)). This modeling choice assumes that different state dimensions are conditionally independent.

Autoregressive generative models have seen success in generating natural images (Parmar et al., 2018), text (Brown et al., 2020), and speech (Oord et al., 2016), but they have not seen use in model-based RL for continuous control. We find that autoregressive dynamics models achieve higher log-likelihood than their feedforward counterparts on held-out validation transitions of all DM continuous control tasks (Tassa et al., 2018) from the RL Unplugged dataset (Gulcehre et al., 2020). To determine the impact of improved transition dynamics models, we primarily focus on OPE because it allows us to isolate the contribution of the dynamics model to value estimation from the many other factors of variation in policy optimization and data collection. We find that autoregressive dynamics models consistently outperform existing model-based and model-free OPE baselines on continuous control in both ranking and value estimation metrics. We expect that our advances in model-based OPE will improve offline policy selection for offline RL (Paine et al., 2020). Finally, we show that our autoregressive dynamics models can help improve offline policy optimization by model predictive control, achieving a new state of the art on cheetah-run and fish-swim from RL Unplugged (Gulcehre et al., 2020). Key contributions of this paper include:

• We propose autoregressive dynamics models to capture dependencies between state dimensions in forward prediction.
We show that autoregressive models improve log-likelihood over non-autoregressive models for continuous control tasks from the DM Control Suite (Tassa et al., 2018).

• We apply autoregressive dynamics models to off-policy evaluation (OPE), surpassing the performance of state-of-the-art baselines in median absolute error, rank correlation, and normalized top-5 regret across 9 control tasks.

• We show that autoregressive dynamics models are more useful than feedforward models for offline policy optimization, serving as a way to enrich experience replay by data augmentation and improving performance via model-based planning.

2 PRELIMINARIES.

Here we introduce relevant notation and discuss off-policy (offline) policy evaluation (OPE). We refer the reader to Lange et al. (2012) and Levine et al. (2020) for background on offline RL, which is also known as batch RL in the literature. A finite-horizon Markov Decision Process (MDP) is defined by a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, T, d_0, r, \gamma)$, where $\mathcal{S}$ is a set of states $s \in \mathcal{S}$, $\mathcal{A}$ is a set of actions $a \in \mathcal{A}$, $T$ defines transition probability distributions $p(s_{t+1} \mid s_t, a_t)$, $d_0$ defines the initial state distribution $d_0 \equiv p(s_0)$, $r$ defines a reward function $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, and $\gamma$ is a scalar discount factor. A policy $\pi(a \mid s)$ defines a conditional distribution over actions conditioned on states. A trajectory consists of a sequence of states and actions $\tau = (s_0, a_0, s_1, a_1, \ldots, s_H)$ of horizon length $H$. We use $s_{t,i}$ to denote the $i$-th dimension of the state at time step $t$ (and similarly for actions). In reinforcement learning, the objective is to maximize the expected sum of discounted rewards over the trajectory distribution induced by the policy:

$V_\gamma(\pi) = \mathbb{E}_{\tau \sim p_\pi(\tau)} \Big[ \sum_{t=0}^{H} \gamma^t r(s_t, a_t) \Big]. \quad (1)$

The trajectory distribution is characterized by the initial state distribution, policy, and transition probability distribution:

$p_\pi(\tau) = d_0(s_0) \prod_{t=0}^{H-1} \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t). \quad (2)$

In offline RL, we are given access to a dataset of transitions $\mathcal{D} = \{(s^i_t, a^i_t, r^i_{t+1}, s^i_{t+1})\}_{i=1}^{N}$ and a set of initial states $S_0$. Offline RL is inherently a data-driven approach, since the agent needs to optimize the same objective as in Eq. (1) but is not allowed additional interactions with the environment. Even though offline RL offers the promise of leveraging existing logged datasets, current offline RL algorithms (Fujimoto et al., 2019; Agarwal et al., 2020; Kumar et al., 2019) are typically evaluated using online interaction, which limits their applicability in the real world.

The problem of off-policy (offline) policy evaluation (OPE) entails estimating $V_\gamma(\pi)$, the value of a target policy $\pi$, based on a fixed dataset of transitions $\mathcal{D}$, without access to the environment's dynamics. Some OPE methods assume that $\mathcal{D}$ is generated from a known behavior (logging) policy $\mu$ and assume access to $\mu$ in addition to $\mathcal{D}$. In practice, the logged dataset $\mathcal{D}$ may be the result of following some existing system that does not have a probabilistic form. Hence, in our work, we assume no access to the original behavior policy $\mu$ for OPE. That said, for methods that require access to $\mu$, we train a behavior cloning policy on $\mathcal{D}$.

3 PROBABILISTIC DYNAMICS MODELS.

Feedforward dynamics model. In the context of our paper, we use the term "model" to jointly refer to the forward dynamics model $p_s(s_{t+1} \mid s_t, a_t)$ and the reward model $p_r(r_{t+1} \mid s_t, a_t)$.
We use neural nets to parameterize both distributions, since they are powerful function approximators that have been effective for model-based RL (Chua et al., 2018; Nagabandi et al., 2018; Janner et al., 2019). Let $\theta$ denote the parameters of a fully connected network used to model $p_\theta(s_{t+1}, r_{t+1} \mid s_t, a_t)$. We expect joint modeling of the next state and reward to benefit from sharing intermediate network features. Similar to prior work (Janner et al., 2019), our baseline feedforward model outputs the mean and log-variance of all state dimensions and the reward simultaneously, as follows:

$p_\theta(s_{t+1}, r_{t+1} \mid s_t, a_t) = \mathcal{N}\big(\mu(s_t, a_t), \mathrm{Diag}(\exp\{l(s_t, a_t)\})\big), \quad (3)$

where $\mu(s_t, a_t) \in \mathbb{R}^{n+1}$ denotes the mean for the concatenation of the next state and reward, $l(s_t, a_t) \in \mathbb{R}^{n+1}$ denotes the log-variance, and $\mathrm{Diag}(v)$ is an operator that creates a diagonal matrix with the main diagonal specified by the vector $v$. During training, we seek to minimize the negative log-likelihood of the parameters given the observed transitions in the dataset $\mathcal{D}$:

$\ell(\theta \mid \mathcal{D}) = -\sum_{(s, a, r', s') \in \mathcal{D}} \log p_\theta(s', r' \mid s, a). \quad (4)$

While it is possible to place different weights on the losses for next-state and reward prediction, we did not apply any special weighting and treated the reward as an additional state dimension in all of our experiments. This is straightforward to implement and does not require tuning an additional hyperparameter, which is challenging for OPE. Note that the input has $|s| + |a|$ dimensions.

Autoregressive dynamics model. We now describe our autoregressive model. We seek to demonstrate the utility of predicting state dimensions in an autoregressive way. Therefore, rather than using a complex neural network architecture, where improvements in log-likelihood and policy evaluation would be confounded by architectural differences, we opt to make simple modifications to the feedforward model described above. This allows us to isolate the source of performance improvements. The autoregressive model we use is a fully connected model that predicts the mean and log-variance of a single state dimension. We augment the input space of the baseline with the previously predicted state dimensions and a one-hot encoding indicating which dimension to predict. This is illustrated in Figure 1. The autoregressive model therefore has $3|s| + |a|$ input dimensions. Hence, the autoregressive model has a small number of additional weights in the first fully connected layer, but as will be shown in our experiments, these extra parameters are not the reason for the performance gain. At training time, the autoregressive model has a computational cost similar to the fully connected model, as we can mask ground-truth states and use data parallelism to compute all state dimensions simultaneously. At inference time, the autoregressive model requires additional forward passes, on the order of the number of state dimensions in a given environment. We use the default ordering of the state dimensions in a given environment, though it would be interesting to explore different orderings in future work. The negative log-likelihood for an autoregressive model takes the form:

$\ell(\theta \mid \mathcal{D}) = -\sum_{(s, a, r', s') \in \mathcal{D}} \Big[ \log p_\theta(r' \mid s, a, s') + \sum_{i=1}^{n} \log p_\theta(s'_i \mid s, a, s'_1, \ldots, s'_{i-1}) \Big], \quad (5)$

where we use the chain rule to factorize the joint probability $p(s', r' \mid s, a)$.
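For concreteness, the sketch below computes the autoregressive negative log-likelihood of Eq. (5) with teacher forcing (ground-truth previous dimensions), one dimension per forward pass; net is a hypothetical fully connected network returning a per-dimension mean and log-variance, and the reward term is omitted for brevity.

import torch
import torch.nn.functional as F

def autoregressive_nll(net, s, a, s_next):
    # s: (B, n) states, a: (B, m) actions, s_next: (B, n) next states.
    B, n = s.shape
    prev = torch.zeros_like(s_next)  # masked previously generated dimensions
    nll = 0.0
    for i in range(n):
        idx = torch.full((B,), i, dtype=torch.long)
        one_hot = F.one_hot(idx, num_classes=n).float()
        # Input layout giving 3|s| + |a| dimensions: state, action,
        # previous next-state dimensions (masked), and the one-hot index.
        x = torch.cat([s, a, prev, one_hot], dim=-1)
        mu, log_var = net(x)  # each of shape (B,), parameters for dimension i
        dist = torch.distributions.Normal(mu, torch.exp(0.5 * log_var))
        nll = nll - dist.log_prob(s_next[:, i]).sum()
        prev = prev.clone()
        prev[:, i] = s_next[:, i]  # teacher forcing: reveal the ground truth
    return nll

At inference time the same loop is run with prev[:, i] filled by a sample from dist instead of the ground truth, which is why sampling cost scales with the number of state dimensions.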
The main advantage of the autoregressive model is that it makes no conditional independence assumption between next-state dimensions. This class of models can therefore capture non-unimodal dependencies, e.g., between different joint angles of a robot. Paduraru (2007) demonstrates this increased expressivity in the tabular setting, constructing an example on which a model assuming conditional independence fails. While the expressive power of autoregressive models has been shown in various generative models (Parmar et al., 2018; Oord et al., 2016), autoregressive dynamics models have not seen much use in model-based RL for continuous control before this work.

Algorithm 1 Model-based OPE
Require: number of rollouts n, discount factor γ, horizon length H, policy π, dynamics model p, set of initial states S_0
for i = 1, 2, ..., n do
    R_i ← 0
    sample initial state s_0 ∼ S_0
    for t = 0, 1, 2, ..., H − 1 do
        sample from the policy: a_t ∼ π(· | s_t)
        sample from the dynamics model: s_{t+1}, r_{t+1} ∼ p(·, · | s_t, a_t)
        R_i ← R_i + γ^t r_{t+1}
    end for
end for
return $\frac{1}{n} \sum_{i=1}^{n} R_i$

Model-based OPE. Once a dynamics model is trained from offline data, OPE can be performed in a direct and primitive way. We let the policy and the model interact: the policy generates the next action, and the model plays the role of the environment, generating the next state and reward. Due to the stochasticity in the model and the policy, we estimate the return of a policy with Monte-Carlo sampling and monitor the standard error. See Algorithm 1 for pseudocode.
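A direct Python transcription of Algorithm 1, under the assumption of hypothetical policy.sample(s) and model.sample(s, a) interfaces, reads as follows.

import numpy as np

def model_based_ope(policy, model, initial_states, n_rollouts, horizon, gamma):
    returns = []
    for _ in range(n_rollouts):
        s = initial_states[np.random.randint(len(initial_states))]
        R, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy.sample(s)        # a_t ~ pi(. | s_t)
            s, r = model.sample(s, a)   # s_{t+1}, r_{t+1} ~ p(., . | s_t, a_t)
            R += discount * r           # accumulate gamma^t * r_{t+1}
            discount *= gamma
        returns.append(R)
    returns = np.asarray(returns)
    # Monte-Carlo estimate of V_gamma(pi) together with its standard error
    return returns.mean(), returns.std(ddof=1) / np.sqrt(len(returns))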
The authors consider the usage of autoregressive dynamics models for batch model-based RL, where state-variable/reward predictions are performed sequentially conditioned on previously-predicted variables. Extensive numerical results are provided in several continuous domains for both policy evaluation and optimization problems. The results showcase the effectiveness of autoregressive models and, in particular, their superiority over standard feed-forward models.
The paper studies offline policy evaluation (OPE) and optimization in the model-based setting. The main methodological contribution of the paper is the use of autoregressive models for next-state and reward prediction. The authors demonstrate that autoregressive models achieve higher likelihood than feedforward models on 9 environments from the RL Unplugged [1] offline dataset. Given that model likelihood is only a proxy quality metric in OPE and control, they further demonstrate a positive correlation between likelihood and OPE estimates. The paper shows quantitatively that using autoregressive models results in more accurate OPE estimates than feedforward models and model-free benchmarks. Finally, the authors apply autoregressive models to offline control and achieve higher returns than with feedforward models.
Differential-Critic GAN: Generating What You Want by a Cue of Preferences
1 INTRODUCTION.

Learning a good generative model for high-dimensional natural signals, such as images (Zhu et al., 2017), video (Vondrick et al., 2016), and audio (Fedus et al., 2018), has long been one of the key milestones of machine learning. Powered by the learning capabilities of deep neural networks, generative adversarial networks (GANs) (Goodfellow et al., 2014) have brought the field closer to attaining this goal. Currently, GANs are applied in a setting where the whole training dataset is of user interest. Regular GANs therefore no longer meet our requirements when only part of the training dataset, instead of all of it, possesses the desired properties (Killoran et al., 2017). The setting is even more challenging when the given dataset contains only a small number of desired samples.

Adapting the vanilla GAN to this setting, a naive approach is to first select the samples possessing the desired properties and then perform regular GAN training only on the selected samples to derive the desired distribution. However, the vanilla GAN fails when the desired samples are limited. FBGAN overcomes the limited-data problem by iteratively introducing desired samples from the generation into the training data. Specifically, FBGAN is pretrained on all training data using the vanilla GAN. In each training epoch, the generator first generates a certain number of samples. The generated samples possessing the desired properties are selected by an expert selector and used to replace the old training data. Then, a regular WGAN is trained on the updated training data. Since the ratio of desired samples gradually increases in the training data, all training data will eventually be replaced with desired samples. Finally, FBGAN derives the desired distribution upon convergence.

However, bluntly eliminating undesired samples may lead to a biased representation of the real desired data distribution, because the undesired samples can also reveal useful clues about what is not desired. Suppose we want to generate old face images, but the training data contains only a few old face images and many young face images. In this case, the young face images can be used as negative samples (Mikolov et al., 2013) to learn the subtle aging features (e.g., wrinkles, pigmented skin, etc.), which guides the generation of the desired old face images. The conditional variants of GAN, such as CGAN (Mirza and Osindero, 2014) and ACGAN (Odena et al., 2017), can also be applied in this setting by introducing condition variables to model the conditional desired data distribution. However, the generation performance of condition-based GANs is governed by the respective conditions having sufficient training observations. When the desired data is limited, the conditional modeling is dominated by the major classes, i.e., the undesired data, resulting in a failure to capture the desired distribution.

All the methods in the literature require a user-defined criterion to select the desired data in order to learn its distribution, which may not exist in real applications. Instead of soliciting a ready-to-use criterion, we consider a more general setting where GAN can be guided towards the distribution of user-desired data by the user preference. In particular, pairwise preferences are the most popular form of user preference due to their simplicity and easy accessibility (Lu and Boutilier, 2011).
Our target is therefore to incorporate pairwise preferences into the learning process of GAN, so as to guide the generation of the desired data. Relativistic GAN (RGAN) (Jolicoeur-Martineau, 2019) is a variant of the regular GAN, proposed to learn the whole data distribution. It considers the critic value as an indicator of sample quality and defines the discriminator using the difference in critic values. The critic value in RGAN is similar to a ranking score, but it is used to describe sample quality. Motivated by this, we take the critic value as the ranking score and define the ranking loss for pairwise preferences directly on the critic value. In particular, the difference in critic values for each pair of samples reflects the user's preference over the samples. This is why we call our critic the differential critic, and we propose Differential-Critic GAN (DiCGAN) for learning the user-desired data distribution. As shown in Fig. 1, the differential critic incorporates the user preference direction, which pushes the original critic direction towards the real desired data region instead of the entire real data region.

The main contributions are summarized as follows:

• We propose DiCGAN to learn the distribution of the desired data from the entire data using pairwise preferences. To the best of our knowledge, this is the first work to promote the ratio of the desired data by incorporating user preferences directly into the data generation.

• We introduce the differential critic by defining an additional pairwise ranking loss on the WGAN critic. It endows the difference in critic values between each pair of samples with the user's preferences.

• The empirical study shows that DiCGAN learns the distribution of user-desired data and that the differential critic can derive the preference direction even from a limited number of preferences.

2 GENERATIVE ADVERSARIAL NETWORKS.

Generative Adversarial Network (GAN) (Goodfellow et al., 2014) performs generative modeling by learning a map from a low-dimensional input space $Z$ to the data space $X$, i.e., $G_\theta : Z \to X$, given samples from the training data distribution, namely $x \sim p_r(x)$. The goal is to find $\theta$ which achieves $p_\theta(x) = p_r(x)$, where $p_\theta(x)$ is the fake data distribution with $x = G_\theta(z)$. Let $p(z)$ be the input noise distribution and let $G$ denote $G_\theta$. GAN defines a discriminator $D$ that is trained to discriminate real data from fake data to guide the learning of $G$. Wasserstein GAN (WGAN) (Arjovsky et al., 2017) proposes to use the Wasserstein metric as a critic, which measures the quality of fake data in terms of the distance between the real data distribution and the fake data distribution. The Wasserstein distance (W-distance) is approximated by the difference in the average critic values between the real data and the fake data. Empirical experiments show that the W-distance between two distributions corresponds well to the quality of the generated data. WGAN's objective function is defined as follows:

$\min_G \max_D \; \mathbb{E}_{p_r(x)}[D(x)] - \mathbb{E}_{p_\theta(x)}[D(x)], \quad (1)$

where $D$ is the critic and satisfies the 1-Lipschitz constraint.

3 DICGAN FOR USER-DESIRED DISTRIBUTION.

Rather than learning the distribution of the whole dataset, GAN is applied here in a new scenario, where the distribution of a subset of the data is what we desire.
User-desired data may refer to a certain class of data in a multi-class dataset, or to observations with (or without) some particular attributes. Such data can be induced from the user preference, which can be represented as an ordering relation between two or more samples in terms of the desired properties. We propose the differential-critic GAN to learn the desired data distribution from the user preferences along with the whole dataset.

3.1 LEARNING THE DISTRIBUTION OF USER-DESIRED DATA.

Following the score-based ranking literature, we suppose that there exists a numeric score associated with each sample, reflecting the user's preference for the sample. A higher score indicates that the corresponding sample is preferred by the user. In detail, let $f$ denote a score function that maps a sample $x$ to a score $f(x)$. Then, if sample $x$ is desired by the user, its score $f(x)$ exceeds a predefined threshold $\epsilon$, namely $\mathbb{I}(f(x) > \epsilon) = 1$, where $\mathbb{I}$ is an indicator function that equals 1 if its condition is true and 0 otherwise. For the sake of explanation, we use $p_r(x)$, $p_d(x)$, $p_u(x)$ to denote the distribution of the whole data, the user-desired data, and the undesired data, respectively.

FBGAN (Gupta and Zou, 2019) was proposed to learn the distribution of the desired data $p_d(x)$. FBGAN alternates between two steps: (1) construct the desired dataset $X_d = \{x \mid \mathbb{I}(f(x) > \epsilon) = 1, x \sim p_r(x)\}$; (2) train GAN on $X_d$ to derive $p_d(x)$. However, the assumption that the score function $f$ is predefined in FBGAN may be too restrictive for real applications, where no universal and explicit criterion exists. Further, the definitions of the desired/undesired samples depend heavily on the choice of the threshold. The removal of the so-called undesired samples may result in a biased representation of the real desired data distribution.

Instead of relying on a predefined score function, we propose to learn the desired data distribution directly from the user preference. Here, we consider a general form of auxiliary information, i.e., pairwise preferences, to represent the user preference, due to its simplicity and easy accessibility. For any two samples $x_1, x_2 \sim p_r(x)$, let $x_1 \succ x_2$ denote that $x_1$ is preferred over $x_2$ according to the user-defined criteria. Let $X$ be the training samples, i.e., $X = \{x_i \sim p_r(x)\}$. A collection of pairwise preferences $S$ is obtained by:

$S = \{s = (x_1, x_2) \mid x_1 \succ x_2,\; x_1, x_2 \in X\}. \quad (2)$

Definition 1 (Problem Setting). Given the training samples $X$ and the pairwise preferences $S$, the target is to learn a generative model $p_\theta(x)$ that is identical to the distribution of the desired data $p_d(x)$, i.e., $p_\theta(x) = p_d(x)$.

3.2 DIFFERENTIAL CRITIC GAN.

Instead of WGAN's critic for quality assessment, we present the differential critic for modelling pairwise preferences. The differential critic can guide the generation of the user-desired data.

3.2.1 PAIRWISE PREFERENCE.

In this section, we consider incorporating the pairwise preference into the training of GAN. The score-based ranking model (Zhou et al., 2008) is used to model the pairwise preferences. It learns the score function $f$, whose value, called the ranking score in the model, is the indicator of the user preference. Further, the difference of ranking scores indicates the pairwise preference relation. That is, for any pair of samples $x_1, x_2$, if $x_1 \succ x_2$ then $f(x_1) - f(x_2) > 0$, and vice versa.
For any pairwise preference $s : x_1 \succ x_2$, the ranking loss we consider is as follows:

$h(s) = \max\big(0, -(f(x_1) - f(x_2)) + m\big), \quad (3)$

where $m$ is the ranking margin. For other forms of ranking losses, the reader can refer to (Zhou et al., 2008). Instead of learning the score function independently of GAN's training, we incorporate it into GAN's training, guiding GAN towards the generation of the desired data. The critic in RGAN (Jolicoeur-Martineau, 2019) is similar to the score function, in that the critic values are used to describe sample quality. We are thus motivated to take the critic value as the ranking score and define the ranking loss on the critic value directly. In particular, the difference in the critic values for each pair of samples reflects the user's preference over the samples.

Remark 1 (Pairwise regularization of the generator). It is possible to consider a pairwise regularization of the generator. As the target is to learn the desired distribution, the regularization of the generator can be used to make the critic values of the generated samples larger than those of the undesired samples. We construct the regularization following a principle similar to FBGAN's. Specifically, a selector is first applied to produce a full ranking of the training data, and the bottom K samples are then picked as the undesired samples. The pairwise preferences are then defined over the generated samples and the undesired samples.
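As an illustration, here is a minimal sketch of a combined critic objective: the WGAN term of Eq. (1) plus the hinge ranking loss of Eq. (3) applied directly to critic values. The weighting coefficient lam is a hypothetical hyperparameter, and the exact combination is an assumption for illustration rather than the paper's verbatim formulation.

import torch

def differential_critic_loss(D, x_real, x_fake, x_pref, x_nonpref, m=1.0, lam=1.0):
    # WGAN critic term of Eq. (1): E_{p_r}[D(x)] - E_{p_theta}[D(x)]
    wgan = D(x_real).mean() - D(x_fake).mean()
    # Ranking term of Eq. (3) on critic values: for each preference pair
    # x1 > x2, penalize max(0, -(D(x1) - D(x2)) + m)
    rank = torch.clamp(-(D(x_pref) - D(x_nonpref)) + m, min=0.0).mean()
    # The critic maximizes the WGAN term while minimizing the ranking loss,
    # so the quantity to minimize is:
    return -wgan + lam * rank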
The motivation of this study is to estimate the distribution of desired data from the entire data distribution. The proposed solution extends existing GAN solutions by introducing an additional pairwise loss on the discriminator, e.g., its scores on the desired instances should be higher than on the undesired ones. The idea is natural and neat, and it is also shown to be effective in the reported experiments.
The authors introduce DiCGAN, an algorithm to learn a generative model that comes up with samples whose likelihood is based on a real dataset but adjusted given user preferences. They train the critic to assign high values to samples with higher preference values and thus the generator tends to move its samples towards these points. The idea is nice and reasonably novel in my opinion, but the paper has quite a few problems.
Understanding and Improving Lexical Choice in Non-Autoregressive Translation
1 INTRODUCTION.

When translating a word, translation models need to spend a substantial amount of their capacity on disambiguating its sense in the source language and choosing a lexeme in the target language which adequately expresses its meaning (Choi et al., 2017; Tamchyna, 2017). However, neural machine translation (NMT) has a severe problem with lexical choice, since it usually makes mistranslation errors on low-frequency words (Koehn & Knowles, 2017; Nguyen & Chiang, 2018; Gu et al., 2020). In recent years, there has been a growing interest in non-autoregressive translation (NAT, Gu et al., 2018), which improves decoding efficiency by predicting all tokens independently and simultaneously. Well-performing NAT models are generally trained on synthetic data distilled by autoregressive translation (AT) teachers instead of the raw training data (Figure 1(a)) (Stern et al., 2019; Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Hao et al., 2021).

Recent studies have revealed that knowledge distillation (KD) reduces the modes (i.e., multiple lexical choices for a source word) in the raw data by re-weighting the training examples (Furlanello et al., 2018; Tang et al., 2020), which lowers the intrinsic uncertainty (Ott et al., 2018) and the learning difficulty for NAT (Zhou et al., 2020; Ren et al., 2020). However, the side effects of KD have not been fully studied. In this work, we investigate this problem from the perspective of lexical choice, which is at the core of machine translation. We argue that the lexical choice errors of the AT teacher can be propagated to the NAT model via the distilled training data. To verify this hypothesis, we qualitatively compare the raw and distilled training corpora. Table 1 lists all samples whose source sentences contain the place name "纽马基特". In the raw corpus ("RAW-TGT"), this low-frequency word occurs only three times and always corresponds to the correct translation "Newmarket". However, in the KD corpus ("KD-TGT"), the word is incorrectly translated into a person name "Newmargot" (Margot Robbie is an Australian actress), an organization name "Newmarquette" (Marquette is a university in Wisconsin), or even the invalid "Newmarquite".

Motivated by this finding, we explore NAT from the lexical choice perspective. We first validate our hypothesis by analyzing the lexical choice behaviors of NAT models (§3). Concretely, we propose a new metric, AoLC (accuracy of lexical choice), to evaluate the lexical translation accuracy of a given NAT model. Experimental results across different language pairs show that NAT models trained on distilled data have higher accuracy of global lexical translation (AoLC↑), which results in better sequence generation. However, fine-grained analyses reveal that although KD improves the accuracy on high-frequency tokens, it harms performance on low-frequency ones (low-freq. AoLC↓), and this issue becomes more severe as the teacher model improves. We conclude that the lexical choice of low-frequency tokens is a typical kind of information lost when using knowledge distillation from the AT model. In order to rejuvenate this lost information in the raw data, we propose to expose the raw data to the training of NAT models, which gives NAT models the ability to learn the lost knowledge by themselves.
Specifically, we propose two bi-lingual, lexical-level, data-dependent priors (Word Alignment Distribution and Self-Distilled Distribution) extracted from the raw data, which are integrated into NAT training via the Kullback-Leibler divergence. Both approaches expose the lexical knowledge in the raw data to NAT, which makes it learn to restore the useful information about low-frequency words needed to accomplish the translation. We validated our approach on several datasets widely used in previous studies (i.e., WMT14 En-De, WMT16 Ro-En, WMT17 Zh-En, and WAT17 Ja-En) and model architectures (i.e., MaskPredict (Ghazvininejad et al., 2019) and Levenshtein Transformer (Gu et al., 2019)). Experimental results show that the proposed method consistently improves translation performance over standard NAT models across languages and advanced NAT architectures. The improvements come from the better lexical translation accuracy (of low-frequency tokens in particular) of NAT models (AoLC↑), which leads to fewer mistranslations and fewer low-frequency word prediction errors.

The main contributions of this work are:

• Our study reveals the side effect of NAT models' knowledge distillation on low-frequency lexicons, which makes standard NAT training on the distilled data sub-optimal.

• We demonstrate the necessity of letting NAT models learn to distill lexical choices from the raw data by themselves.

• We propose a simple yet effective approach to accomplish this goal (code is available at https://github.com/alphadl/LCNAT), which is robustly applicable to several model architectures and language pairs.

2 PRELIMINARIES.

2.1 NON-AUTOREGRESSIVE TRANSLATION.

The idea of NAT was pioneered by Gu et al. (2018), and it enables the inference process to proceed in parallel. Different from AT models that generate each target word conditioned on previously generated ones, NAT models break the autoregressive factorization and produce target words in parallel. Given a source sentence $x$, the probability of generating its target sentence $y$ with length $T$ is calculated as:

$p(y \mid x) = p_L(T \mid x; \theta) \prod_{t=1}^{T} p(y_t \mid x; \theta), \quad (1)$

where $p_L(\cdot)$ is a separate conditional distribution that predicts the length of the target sequence. During training, the negative log-likelihood loss function of NAT is accordingly $\mathcal{L}_{\mathrm{NAT}}(\theta) = -\log p(y \mid x)$. To bridge the performance gap between NAT and AT models, a variety of approaches have been proposed, such as multi-turn refinement mechanisms (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Kasai et al., 2020), rescoring with AT models (Wei et al., 2019; Ma et al., 2019; Sun et al., 2019), adding auxiliary signals to improve model capacity (Wang et al., 2019; Ran et al., 2019; Guo et al., 2019; Ding et al., 2020), and advanced training objectives (Wei et al., 2019; Shao et al., 2019; Ma et al., 2020). Our work is complementary to theirs: while they focus on improving NAT models trained on the distilled data, we refine NAT models by exploiting the knowledge in the raw data.
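To illustrate how such a prior can enter training, the sketch below adds a token-level KL term between a precomputed data-dependent prior distribution over the target vocabulary (e.g., a word-alignment distribution extracted from the raw data) and the NAT model's token predictions; the interpolation weight lam and the direction of the KL are assumptions for illustration, not necessarily the paper's exact formulation.

import torch.nn.functional as F

def nat_loss_with_prior(logits, targets, prior, lam=0.5, pad_id=0):
    # logits: (B, T, V) NAT token predictions; targets: (B, T) references;
    # prior: (B, T, V) bi-lingual lexical prior extracted from the raw data.
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p.transpose(1, 2), targets, ignore_index=pad_id)
    # KL(prior || model): keep probability mass on lexical choices that the
    # raw data supports, e.g., correct translations of low-frequency words.
    kl = F.kl_div(log_p, prior, reduction="batchmean")
    return nll + lam * kl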
To alleviate this multimodality problem , Gu et al . ( 2018 ) applied sequence-level KD ( Kim & Rush , 2016 ) to construct a synthetic corpus whose target sentences are generated by an AT model trained on the raw data , as shown in Figure 1 ( a ) . The NAT model is then trained only on the distilled data , which has fewer modes and therefore lets the model acquire more deterministic knowledge ( e.g. , one lexical choice for each source word ) . While separating KD and model training makes the pipeline simple and efficient , it carries one potential threat : the re-weighted samples distilled with the AT model may have lost some important information . Lee et al . ( 2020 ) show that distillation benefits sequence generation but harms density estimation . In this study , we aim to bridge this gap by exposing the raw data to the training of NAT models , as shown in Figure 1 ( b ) . 2.2 EXPERIMENTAL SETUP . Datasets . Experiments were conducted on four widely-used translation datasets : WMT14 English-German ( En-De , Vaswani et al . 2017 ) , WMT16 Romanian-English ( Ro-En , Gu et al . 2018 ) , WMT17 Chinese-English ( Zh-En , Hassan et al . 2018 ) , and WAT17 Japanese-English ( Ja-En , Morishita et al . 2017 ) , which consist of 4.5M , 0.6M , 20M , and 2M sentence pairs , respectively . We use the same validation and test datasets as previous works for a fair comparison . To avoid unknown words , we preprocessed the data via BPE ( Sennrich et al. , 2016 ) with 32K merge operations . GIZA++ ( Och & Ney , 2003 ) was employed to build word alignments for the training datasets . We evaluated translation quality with BLEU ( Papineni et al. , 2002 ) . NAT Models . We validated our research hypotheses on two SOTA NAT models : • MaskPredict ( MaskT , Ghazvininejad et al . 2019 ) , which uses a conditional masked LM ( Devlin et al. , 2019 ) to iteratively generate the target sequence from the masked input . We followed its optimal settings and set the number of iterations to 10 and the length beam to 5 . • Levenshtein Transformer ( LevT , Gu et al . 2019 ) , which introduces three steps : deletion , placeholder prediction , and token prediction . The number of decoding iterations in LevT adapts to certain conditions . For regularization , we tune the dropout rate from [ 0.1 , 0.2 , 0.3 ] based on validation performance in each direction , and apply weight decay of 0.01 and label smoothing of ε = 0.1 . We train batches of approximately 128K tokens using Adam ( Kingma & Ba , 2015 ) . The learning rate warms up to 5 × 10−4 in the first 10K steps , and then decays with the inverse square-root schedule . We followed common practice ( Ghazvininejad et al. , 2019 ; Kasai et al. , 2020 ) and evaluated translation performance on an ensemble of the top 5 checkpoints to avoid stochasticity . AT Teachers . We closely followed previous works on NAT in applying sequence-level knowledge distillation ( Kim & Rush , 2016 ) to reduce the modes of the training data . More precisely , to assess the effectiveness of our method under different AT teachers , we trained three kinds of Transformer ( Vaswani et al. , 2017 ) models : Transformer-BASE , Transformer-BIG , and Transformer-STRONG . The main results employ Transformer-BIG for all directions except Ro-En , which is distilled by Transformer-BASE . The architectures of Transformer-BIG and Transformer-STRONG are identical , but STRONG utilizes a large-batch ( 458K tokens ) training strategy . 3 UNDERSTANDING LEXICAL CHOICE IN NAT MODELS .
3.1 EVALUATING LEXICAL CHOICE OF NAT MODELS . Recently , Zhou et al . ( 2020 ) argued that knowledge distillation is necessary because of the uncertain nature of the machine translation task . Accordingly , they proposed a metric to estimate the complexity of the data ( CoD ) , which is derived from an external word alignment model . They reveal that the distilled data is indeed less complex , which facilitates easier training for the NAT model . Inspired by this , we propose a metric to measure the lexical-level accuracy of model predictions . Accuracy of Lexical Choice ( AoLC ) evaluates the accuracy of the target lexicons chosen by a trained NAT model M for each source word . Specifically , the model M takes a source word f as input and produces a hypothesis candidate list with the corresponding word confidences :

P^M_f = { P^M(e_1|f) , . . . , P^M(e_{|V_trg|}|f) } ,    (2)

where V_trg is the target-side vocabulary over the whole corpus . The AoLC score is calculated by averaging the probability of the gold target word e_f of each source word f :

AoLC = ( ∑_{f ∈ V^test_src} P^M(e_f|f) ) / |V^test_src| ,    (3)

where V^test_src is the set of source-side tokens in the test set . Each gold word e_f is chosen with the help of the word alignment model P^A_f . The selection procedure is as follows : Step 1 ) Collect the references of the source sentences that contain the source word f , and generate the target-side word bag B_f from these references . Step 2 ) Traverse P^A_f in descending order of alignment probability and take the first word that appears in B_f as the gold word . Step 3 ) If the gold word is still not found once B_f is exhausted , take the word with the highest alignment probability in P^A_f as the gold word . Generally , higher lexical translation accuracy indicates more confident predictions . We discuss the reliability of the word alignment-based AoLC in Appendix A.1 .
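To make the AoLC computation (Eqs. 2–3 and Steps 1–3) concrete, below is a minimal Python sketch. The dictionary-based stand-ins for the model's lexical distribution P^M_f, the alignment table P^A_f, and the word bags B_f, as well as the function names, are our own illustrative assumptions rather than the authors' released code.

```python
# Minimal sketch of the AoLC metric (Eqs. 2-3), with dict-based stand-ins
# for the NAT lexical distributions and the word alignment table.

def select_gold_word(align_probs, target_bag):
    """Pick the gold target word e_f for a source word f.

    align_probs: dict target word -> alignment probability (P^A_f).
    target_bag:  set B_f of target words collected from references whose
                 source sentences contain f.
    """
    # Step 2: traverse alignment candidates in descending probability and
    # return the first one that also appears in the reference word bag.
    for word, _ in sorted(align_probs.items(), key=lambda kv: -kv[1]):
        if word in target_bag:
            return word
    # Step 3: fall back to the highest-probability alignment candidate.
    return max(align_probs, key=align_probs.get)

def compute_aolc(model_probs, align_tables, target_bags):
    """Average the model's probability of the gold word over test tokens.

    model_probs:  dict source word f -> dict target word -> P^M(e|f).
    align_tables: dict source word f -> dict target word -> P^A_f(e).
    target_bags:  dict source word f -> set of reference target words B_f.
    """
    total = 0.0
    for f, probs in model_probs.items():
        gold = select_gold_word(align_tables[f], target_bags[f])
        total += probs.get(gold, 0.0)  # P^M(e_f | f), zero if never predicted
    return total / len(model_probs)

# Toy usage: one source word with two candidate translations.
model_probs = {"纽马基特": {"Newmarket": 0.6, "Newmargot": 0.4}}
align_tables = {"纽马基特": {"Newmarket": 0.9, "Newmargot": 0.1}}
target_bags = {"纽马基特": {"Newmarket"}}
print(compute_aolc(model_probs, align_tables, target_bags))  # -> 0.6
```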
This paper follows up on the work of Zhou et al. (2020) on establishing the importance of knowledge distillation (KD) from a pretrained autoregressive translation (AT) model for training effective non-autoregressive translation (NAT) models. Specifically, KD is helpful because it reduces the data complexity, which allows successful training of NAT models. This paper shows that KD has an undesirable effect on the training of NAT models in terms of poor performance on translating infrequent tokens, and further suggests a remedy of regularizing NAT training with an additional lexical translation loss based upon a prior translation table obtained via word alignment.
SP:18ce50996a98836e07d8cb448adbff5cb039b285
Understanding and Improving Lexical Choice in Non-Autoregressive Translation
This paper analyzes a side effect of knowledge distillation in NAT, whereby lexical choice errors on low-frequency words are propagated from the teacher to the student model. To tackle this, the paper then proposes to expose raw data in order to restore such information. In my view, the submission is well motivated, and the designed experiments and results are meaningful and convincing, which deserves an accept. However, as the paper focuses on analyzing a specific point (lexical choice) in a very constrained setting (NAT), the overall contribution might be incremental compared to other works in general at a venue like ICLR.
SP:18ce50996a98836e07d8cb448adbff5cb039b285
Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy
Regularization plays a crucial role in machine learning models , especially for deep neural networks . Existing regularization techniques mainly rely on the i.i.d . assumption and only consider the knowledge from the current sample , without leveraging the neighboring relationships between samples . In this work , we propose a general regularizer called Patch-level Neighborhood Interpolation ( Pani ) that constructs a non-local representation in the computation of the network . Our proposal explicitly constructs patch-level graphs in different network layers and then linearly interpolates neighborhood patch features , serving as a general and effective regularization strategy . Further , we customize our approach into two kinds of popular regularization methods , namely Virtual Adversarial Training ( VAT ) and MixUp as well as its variants . The first derived method , Pani VAT , presents a novel way to construct non-local adversarial smoothness by employing patch-level interpolated perturbations . The second derived method , Pani MixUp , extends the original MixUp regularization and its variant to the Pani version , achieving a significant improvement in performance . Finally , extensive experiments are conducted to verify the effectiveness of our Patch-level Neighborhood Interpolation approach in both supervised and semi-supervised settings . 1 INTRODUCTION . In statistical learning theory , regularization techniques are typically leveraged to achieve a trade-off between empirical error minimization and the control of model complexity ( Vapnik & Chervonenkis , 2015 ) . In contrast to classical convex empirical risk minimization , where regularization can rule out trivial solutions , regularization plays a rather different role in deep learning due to its highly non-convex optimization nature ( Zhang et al. , 2016 ) . Among all explicit and implicit regularizers , regularization with stochastic transformations , perturbations , and randomness , such as adversarial training ( Goodfellow et al. , 2014 ) , dropout , and MixUp ( Zhang et al. , 2017 ) , plays a key role in deep learning models due to its superior performance ( Berthelot et al. , 2019b ; Zhang et al. , 2017 ; Miyato et al. , 2018 ; Berthelot et al. , 2019a ) . In this section , we first review two kinds of effective and well-known regularization branches for deep neural networks , which elegantly generalize from the supervised to the semi-supervised setting . Adversarial Training ( Goodfellow et al. , 2014 ; Madry et al. , 2017 ) can provide additional regularization beyond that provided by other generic regularization strategies , such as dropout , pretraining , and model averaging . However , recent works ( Zhang et al. , 2019 ; Tsipras et al. , 2018 ) demonstrated that this kind of training method entails a trade-off between robustness and accuracy , limiting the efficacy of adversarial regularization . Besides , Virtual Adversarial Training ( VAT ) ( Miyato et al. , 2018 ) can be regarded as a natural extension of adversarial training to the semi-supervised setting through adversarially smoothing the posterior output distribution with the leverage of unlabeled data . This strategy has achieved great success in image classification ( Miyato et al. , 2018 ) , text classification ( Miyato et al. , 2016 ) , and node classification ( Sun et al. , 2019 ) . Tangent-Normal Adversarial Regularization ( TNAR ) ( Yu et al.
, 2019 ) extended VAT by taking the data manifold into consideration and applied VAT along the tangent space and the orthogonal normal space of the data manifold , outperforming previous semi-supervised approaches . MixUp ( Zhang et al. , 2017 ) augments the training data by incorporating the prior knowledge that linear interpolation of input vectors should lead to linear interpolation of the associated targets , accomplishing consistent improvements in generalization on image , speech , and tabular data . MixMatch ( Berthelot et al. , 2019b ) extended MixUp to semi-supervised tasks by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp . In contrast with VAT , MixMatch ( Berthelot et al. , 2019b ) utilizes one specific form of consistency regularization , i.e. , standard data augmentation for images , such as random horizontal flips , rather than computing adversarial perturbations to smooth the posterior distribution of the classifier . Nevertheless , the vast majority of regularization methods , including the aforementioned approaches , assume that the training samples are drawn independently and identically from an unknown data-generating distribution . For instance , Support Vector Machines ( SVM ) , Back-Propagation ( BP ) for neural networks , and many other common algorithms implicitly make this assumption as part of their derivation . However , this i.i.d . assumption is commonly violated in realistic scenarios , where batches or sub-groups of training samples are likely to have internal correlations . In particular , Dundar et al . ( 2007 ) demonstrated that accounting for the correlations in real-world training data leads to statistically significant improvements in accuracy . Similarly , Peer-Regularized Networks ( PeerNet ) ( Svoboda et al. , 2018 ) applied graph convolutions ( Velickovic et al. , 2017 ; Kipf & Welling , 2016 ) to harness information from peer samples , and verified its effectiveness in defending against adversarial attacks . Motivated by these facts , we aim to design a general regularization strategy that can fully utilize the internal relationships between samples by explicitly constructing a graph within a minibatch , in order to consistently improve the generalization of deep neural networks in both semi-supervised and supervised settings . In this paper , we propose Patch-level Neighborhood Interpolation ( Pani ) for deep neural networks , serving as a simple yet effective non-local regularizer . We first construct a patch-level graph in each mini-batch during stochastic gradient descent training . Then we apply linear interpolation to the neighboring patch features ; the resulting non-local representation additionally captures the relationship of neighboring patch features in different layers , serving as a general and effective regularization . Furthermore , to prove the generality and superiority of our Pani method , we explicitly customize our approach into two kinds of popular and general regularization strategies , i.e. , Virtual Adversarial Regularization and MixUp , resulting in Pani VAT and Pani MixUp . For Pani VAT , we reformulate the construction of adversarial perturbations , transforming it from depending solely on the current sample to a linear interpolation of neighboring patch features .
These non-local adversarial perturbations can leverage the information of neighboring correlations from all samples within a batch , providing more informative adversarial smoothness in the semi-supervised setting . Besides , in Pani MixUp , we extend MixUp and its variant MixMatch from the image level to the patch level by mixing fine-grained patch features and the corresponding supervised signals . Finally , we conduct extensive experiments to demonstrate that both derived regularization strategies can outperform other state-of-the-art approaches in both supervised and semi-supervised tasks . More importantly , these successful cases verify the generality and superiority of our Patch-level Neighborhood Interpolation method . Our contributions can be summarized as follows : • We propose a general interpolation strategy in either the input or feature space , i.e. , Patch-level Neighborhood Interpolation , which helps improve the generalization of deep neural networks in both semi-supervised and supervised scenarios . This strategy can serve as an effective graph-based representation method and has much potential to be leveraged in a wider range of tasks . • Based on our method , the customized approaches Pani VAT and Pani MixUp as well as Pani MixMatch can boost generalization performance significantly , and thus provide guidance for the deployment of our Pani strategy in more regularization methods . 2 OUR METHOD : PATCH-LEVEL NEIGHBORHOOD INTERPOLATION . Before introducing our approach , we highly recommend readers go through some preliminary knowledge about VAT ( Miyato et al. , 2017 ) , MixUp ( Zhang et al. , 2017 ) , and PeerNet ( Svoboda et al. , 2018 ) in Appendix A . For our method , one related work is PeerNet ( Svoboda et al. , 2018 ) , which designed graph-based layers to defend against adversarial attacks ; unfortunately , the construction of pixel-level K-NN graphs in PeerNet is computationally costly . By contrast , our motivation is to develop a general regularizer that can consistently boost the performance of deep neural networks in both semi-supervised and supervised settings rather than the adversarial scenario . Besides , the way a non-local layer is constructed in our method is more flexible and can be determined by the specific objective function , as elaborated in Sections 2.1 and 2.2 . Moreover , our patch-level method achieves a computational advantage over pixel-level regularization , and incorporates more meaningful semantic correlations in different layers . In particular , a flexible patch size can be chosen according to the size of the receptive field in different layers , yielding a more informative graph-based representation and better regularization performance . Concretely , as shown in Figure 1 , in the first step of our Patch-level Neighborhood Interpolation ( Pani ) we determine the candidate peer image set Si for each image i . This can be achieved by random matching or by computing the semantically nearest image neighbors using , e.g. , the cosine distance . Next , we construct the whole patch set Pi from the candidate peer image set Si for each image i by clipping the corresponding patches at different locations of an input or a feature map . Following the establishment of the patch set Pi , we construct K-nearest-neighbor patch graphs based on the distance between patch features in order to find the neighbors of each patch in the patch set Pi for all i = 1 , ... , N . A sketch of this graph construction is given below .
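As a rough illustration of these first two steps, the following PyTorch sketch selects peer images by cosine similarity of pooled features and builds the K2-nearest-neighbor patch graph over non-overlapping patches. All function names, the non-overlapping patching, and the inclusion of the image itself among its peers are our own simplifying assumptions.

```python
# Sketch of the Pani graph construction, assuming feature maps of shape
# (B, C, H, W) with H and W divisible by the patch size. Peer selection
# uses cosine similarity; patch k-NN uses L2 distance.
import torch
import torch.nn.functional as F

def peer_image_sets(features, k1):
    """Return, for each image, the indices of its k1 most similar images
    in the batch (cosine similarity of flattened features, self included)."""
    pooled = F.normalize(features.flatten(1), dim=1)   # (B, C*H*W)
    sim = pooled @ pooled.t()                          # (B, B)
    return sim.topk(k1, dim=1).indices                 # (B, k1)

def extract_patches(fmap, patch=3):
    """Split a (B, C, H, W) map into flattened non-overlapping patches,
    returning (B, P, C*patch*patch)."""
    u = F.unfold(fmap, kernel_size=patch, stride=patch)  # (B, C*p*p, P)
    return u.transpose(1, 2)

def knn_patch_graph(patches, peers, k2):
    """For each patch of image i, find its k2 nearest patches among the
    patches of i's peer images. Returns peer image ids and patch ids,
    each of shape (B, P, k2)."""
    B, P, D = patches.shape
    nbr_img, nbr_patch = [], []
    for i in range(B):
        cand = patches[peers[i]].reshape(-1, D)        # (k1*P, D)
        d = torch.cdist(patches[i], cand)              # (P, k1*P)
        idx = d.topk(k2, largest=False).indices        # (P, k2) smallest dists
        nbr_img.append(peers[i][idx // P])             # which peer image
        nbr_patch.append(idx % P)                      # which patch in it
    return torch.stack(nbr_img), torch.stack(nbr_patch)
```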
Mathematically , following the definition in PeerNet , let z^i_p be the p-th patch of the input or feature map Z^i for the i-th image within one batch . Then denote the k-th nearest patch neighbor of z^i_p as z^{j_k}_{q_k} , taken from patch q_k of the peer image j_k in the candidate set Si . Next , in order to leverage the knowledge from neighbors , and different from the graph attention mechanism in PeerNet , we apply a more straightforward linear interpolation to the neighboring patches of the current patch z^i_p . The general formulation of our Patch-level Neighborhood Interpolation can then be presented as follows :

z̃^i_p = z^i_p + ∑_{k=1}^{K} η^i_{pk} ( z^{j_k}_{q_k} − z^i_p ) ,    (1)

where η^i_{pk} is the combination coefficient for the p-th patch of the i-th image w.r.t . its k-th patch neighbor , which can be computed through power iteration in the manner of VAT , or through random sampling from a specific distribution in randomness-based regularization , e.g. , MixUp and its variants . Moreover , the choice of linear interpolation in Eq . 1 enjoys a great computational advantage over the nonlinear GAT form of PeerNet in the computation of the network . Finally , after the patch-level linear interpolation of patch features , we obtain the refined graph-based representation Z̃^i for the i-th image , for all i = 1 , ... , N . Note that our proposed method can explicitly combine the advantages of manifold regularization and non-local filtering in a flexible way , which we discuss in more detail in Appendix B . Besides , to further demonstrate the generality and effectiveness of our Pani method , we provide Pani versions of two typical regularization strategies , i.e. , Virtual Adversarial Training and MixUp as well as its variant MixMatch , and verify the superiority of our Pani strategy by the significant boost in accuracy . 2.1 PANI VAT . Based on our Patch-level Neighborhood Interpolation framework , we can construct a novel Pani VAT that utilizes the combination , or interpolation , of patch neighbors of each sample to manipulate the non-local perturbations , thus providing more informative adversarial smoothness in the semi-supervised setting . Consider a general composite-function form of the classifier f , i.e. , f(x) = g(z) and z = h(x) , where z denotes the hidden feature of input x , or the input itself in the reduced case . Combining the VAT formulation , i.e. , Eq . 7 in Appendix A , and the Pani formulation , i.e. , Eq . 1 , we reformulate our Pani VAT with perturbations on L layers of a deep neural network as follows :

max_η D[ g(z) , g(z̃(η)) ]   s.t .   ∑_{l=1}^{L} w_l^2 ‖η^{(l)}‖^2 ≤ ε^2 ,    (2)

where D measures the divergence between two distributions , η = {η^i_{pk}} denotes the generic perturbations from our Pani method , and η^{(l)} indicates the perturbations in the l-th layer of the network . z̃(η) = {z̃^i_p} represents the smoothed feature map under perturbation η , with all patches treated as in Eq . 1 . In particular , when L = 1 , adversarial perturbations are only imposed on the input feature , which is similar to traditional virtual adversarial perturbations . Additionally , w_l is a hyper-parameter adjusting the weight of the perturbation η^{(l)} in different layers , with the overall perturbation constrained to an ε-ball . Next , we utilize power iteration and the finite-difference approximation proposed for VAT ( Miyato et al. , 2017 ) to compute the desired perturbation η∗ .
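Building on the neighbor indices from the sketch above, the interpolation of Eq. 1 itself might look as follows; again, the names and the non-overlapping unfold/fold patching are illustrative assumptions, not the reference implementation.

```python
# Sketch of Eq. 1: linearly interpolate each patch toward its k2 patch
# neighbors with coefficients eta, then fold the patches back into a map.
import torch
import torch.nn.functional as F

def pani_interpolate(fmap, nbr_img, nbr_patch, eta, patch=3):
    """fmap: (B, C, H, W); nbr_img/nbr_patch: (B, P, k2) neighbor indices;
    eta: (B, P, k2) combination coefficients eta^i_{pk}."""
    B, C, H, W = fmap.shape
    z = F.unfold(fmap, patch, stride=patch).transpose(1, 2)    # (B, P, D)
    neighbors = z[nbr_img, nbr_patch]                          # (B, P, k2, D)
    delta = neighbors - z.unsqueeze(2)                         # z^{j_k}_{q_k} - z^i_p
    z_tilde = z + (eta.unsqueeze(-1) * delta).sum(dim=2)       # Eq. 1
    # Non-overlapping fold reconstructs the (B, C, H, W) map exactly.
    return F.fold(z_tilde.transpose(1, 2), (H, W),
                  kernel_size=patch, stride=patch)
```

Because eta enters only through a sum of linear terms, the interpolation stays differentiable in eta, which is what the power-iteration step in Pani VAT relies on.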
The resulting full loss function is then defined as :

min_θ L_0 + β E_{x∼D} R_vadv(x , η∗ ; θ) ,    (3)

where L_0 is the original supervised loss and β controls the degree of adversarial smoothness . R_vadv(x , η∗ ; θ) = D[ g(z) , g(z̃(η∗)) ] is obtained after solving the optimization problem in Eq . 2 . Implementation details are given in Algorithm 1 .

Algorithm 1 : Pani VAT within a Batch
1 : Input : neighbor counts K1 and K2 , classifier f , batch size B , number of perturbed layers L
2 : Initialization : combination coefficients η
3 : Compute the K1 nearest image neighbors based on the distance between the second-to-last-layer outputs of f , obtaining a peer image set Si of K1 ( K1 ≤ B ) images for each image i .
4 : for l = 1 to L do :
5 :   Compute the patch set Pi over all K1 peer images at layer l for each image i .
6 :   Construct a K2-nearest-neighbor patch graph for each patch of each image i .
7 :   Conduct Patch-level Neighborhood Interpolation via Eq . 1 for each patch .
8 : end for
9 : Conduct power iteration and finite differences as in VAT to compute η∗ under the constraint of Eq . 2 .
10 : Return R_vadv(x , η∗ ; θ)

Remark . As shown in the adversarial part of Figure 1 , the rationale of our Pani VAT method lies in the fact that the constructed perturbations entail more non-local information coming from the neighbors of the current sample . Through the delicate patch-level interpolation among the neighbors of each patch , the resulting non-local virtual adversarial perturbations are expected to provide more informative smoothness , thus enhancing the performance of the classifier in the semi-supervised setting .
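For intuition, here is a rough sketch of the resulting Pani VAT regularizer for L = 1, using a single power-iteration step with finite differences as in standard VAT and the KL divergence for D. The helper `interp` is assumed to apply Eq. 1 for a given η (e.g., a closure over `pani_interpolate` above with fixed neighbor indices); all hyper-parameters and names are illustrative, not the authors' code.

```python
# Sketch of the Pani VAT regularizer (Eqs. 2-3) for L = 1, with one
# power-iteration step and finite differences as in standard VAT.
# `g` maps the (perturbed) feature map to logits; `interp(z, eta)` applies
# Eq. 1 for a given eta over fixed neighbor indices.
import torch
import torch.nn.functional as F

def kl_logits(p_logits, q_logits):
    """KL( softmax(p) || softmax(q) ), averaged over the batch."""
    p = F.log_softmax(p_logits, dim=-1)
    q = F.log_softmax(q_logits, dim=-1)
    return F.kl_div(q, p, log_target=True, reduction="batchmean")

def pani_vat_loss(g, z, interp, eta_shape, eps=1.0, xi=1e-6):
    with torch.no_grad():
        clean_logits = g(z)                       # g(z), unperturbed
    # Power iteration: start from a tiny random eta, take the gradient
    # direction of the divergence (finite-difference approximation).
    eta = torch.randn(eta_shape, device=z.device)
    eta = xi * eta / eta.norm()
    eta.requires_grad_(True)
    loss = kl_logits(clean_logits, g(interp(z, eta)))
    grad, = torch.autograd.grad(loss, eta)
    # Scale the adversarial direction onto the epsilon-ball of Eq. 2.
    eta_star = eps * grad / (grad.norm() + 1e-12)
    # R_vadv(x, eta*; theta) = D[ g(z), g(z~(eta*)) ]
    return kl_logits(clean_logits, g(interp(z, eta_star.detach())))
```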
This paper proposes a new regularization method via patch-level interpolation. During training, images within a batch are used to construct an image graph; for a given image, its nearest neighbors in feature space are selected, and patches from those neighbors are interpolated into each patch of the given image. A straightforward application of such regularization is thus semi-supervised training. Moreover, the paper demonstrates that this regularization can be combined with virtual adversarial training and MixUp training.
SP:21d29b68bb3e7cf18e699a98f7be35f9e12bdaaf
Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy
The paper proposes a general regularizer called Patch-level Neighborhood Interpolation (Pani) that constructs patch-level graphs at different levels of neural networks. Specifically, it is based on the k-nearest patch neighbors at each layer and linear interpolation for each patch. Applying this regularizer framework to two special cases yields Pani VAT and Pani MixUp. Numerical experiments are comprehensive and convincing.
SP:21d29b68bb3e7cf18e699a98f7be35f9e12bdaaf
Differentiable Trust Region Layers for Deep Reinforcement Learning
1 INTRODUCTION . Deep reinforcement learning has shown considerable advances in recent years , with prominent application areas such as games ( Mnih et al. , 2015 ; Silver et al. , 2017 ) , robotics ( Levine et al. , 2015 ) , and control ( Duan et al. , 2016 ) . In policy search , policy gradient ( PG ) methods have been highly successful and have gained great popularity ( Peters & Schaal , 2008 ) . However , it is often difficult to tune learning rates for vanilla PG methods , because they tend to reduce the entropy of the policy too quickly . This results in a lack of exploration and , as a consequence , in premature or slow convergence . A common practice to mitigate these limitations is to impose a constraint on the allowed change between two successive policies . Kakade & Langford ( 2002 ) provided a theoretical justification for this in the approximate policy iteration setting . Two of the arguably most favored policy search algorithms , Trust Region Policy Optimization ( TRPO ) ( Schulman et al. , 2015a ) and Proximal Policy Optimization ( PPO ) ( Schulman et al. , 2017 ) , follow this idea using the Kullback-Leibler ( KL ) divergence between successive policies as a constraint . We propose closed-form projections for Gaussian policies , realized as differentiable neural network layers . These layers constrain the change in successive policies by projecting the updated policy onto trust regions . First , this approach is more stable with respect to what Engstrom et al . ( 2020 ) refer to as code-level optimizations than other approaches . Second , it comes with the benefit of imposing constraints for individual states , allowing for the possibility of state-dependent trust regions . This allows us to constrain the state-wise maximum change of successive policies . In this we differ from previous works , which constrain only the expected change and thus cannot rely on exact guarantees of monotonic improvement . Furthermore , we propose three different similarity measures on which to base our trust region approach : the KL divergence , the Wasserstein L2 distance , and the Frobenius norm . The last layer of the projected policy is now the trust region layer , which relies on the old policy as input . This would result in an ever-growing stack of policies , rendering the approach clearly infeasible . To circumvent this issue , we introduce a penalty term into the reinforcement learning objective to ensure that the input and output of the projection stay close together . While this still results in an approximation of the trust region update , we show that the trust regions are properly enforced . We also extend our approach to allow for a controlled evolution of the entropy of the policy , which has been shown to increase performance in difficult exploration problems ( Pajarinen et al. , 2019 ; Akrour et al. , 2019 ) . We compare and discuss the effect of the different similarity measures as well as of the entropy control on the optimization process . Additionally , we benchmark our algorithm against existing methods and demonstrate that we achieve similar or better performance . ( ∗Correspondence to fabian.otto @ bosch.com ) 2 RELATED WORK . Approximate Trust Regions . Bounding the size of the policy update in policy search is a common approach . While Kakade & Langford ( 2002 ) originally focused on a method based on mixing policies , nowadays most approaches use KL trust regions to bound the updates . Peters et al .
( 2010 ) proposed a first approach to such trust regions by formulating the problem as a constrained optimization and provided a solution based on the dual of that optimization problem . Still , this approach does not straightforwardly extend to highly non-linear policies , such as neural networks . In an attempt to transfer those ideas to deep learning , TRPO ( Schulman et al. , 2015a ) approximates the KL constraint using the Fisher information matrix and natural policy gradient updates ( Peters & Schaal , 2008 ; Kakade , 2001 ) , along with a backtracking line search to enforce a hard KL constraint . Yet , the resulting algorithm scales poorly . Thus , Schulman et al . ( 2017 ) introduced PPO , which does not directly enforce the KL trust region but clips the probability ratio in the importance sampling objective . This allows using efficient first-order optimization methods while maintaining robust training . However , Engstrom et al . ( 2020 ) and Andrychowicz et al . ( 2020 ) recently showed that implementation choices are essential for achieving state-of-the-art results with PPO . Code-level optimizations , such as reward scaling as well as value function , observation , reward , and gradient clipping , can even compensate for removing core parts of the algorithm , e.g. , the clipping of the probability ratio . Additionally , PPO heavily relies on its exploration behavior and might get stuck in local optima ( Wang et al. , 2019 ) . Tangkaratt et al . ( 2018 ) use a closed-form solution for the constrained optimization based on the method of Lagrangian multipliers . They , however , require a quadratic parametrization of the Q-function , which can limit performance . Pajarinen et al . ( 2019 ) introduced an approach based on compatible value function approximations to realize KL trust regions . Based on the reinforcement-learning-as-inference paradigm ( Levine , 2018 ) , Abdolmaleki et al . ( 2018 ) introduced an actor-critic approach using an Expectation-Maximization-based optimization with KL trust regions in both the E-step and the M-step . Song et al . ( 2020 ) proposed an on-policy version of this approach using a similar optimization scheme and constraints . Projections for Trust Regions . Akrour et al . ( 2019 ) proposed Projected Approximate Policy Iteration ( PAPI ) , a projection-based solution to implement KL trust regions . Their method projects an intermediate policy , which already satisfies the trust region constraint , onto the constraint bounds . This maximizes the size of the update step . However , PAPI relies on other trust region methods to generate this intermediate policy and cannot operate in a stand-alone setting . Additionally , the projection is not directly part of the policy optimization but applied afterwards , which can result in sub-optimal policies . In the context of computational complexity , both TRPO and PAPI simplify the constraint by leveraging the expected KL divergence . In contrast , we implement the projections as fully differentiable network layers and directly include them in the optimization process . Additionally , our projections enforce the constraints per state . This allows for better control of the change between subsequent policies and for state-dependent trust regions . For the KL-based projection layer we need to resort to numerical optimization and implicit gradients for convex optimization ( Amos & Kolter , 2017 ; Agrawal et al. , 2019 ) .
Thus, we investigate two alternative projections based on the Wasserstein L2 distance and the Frobenius norm, which allow for closed-form solutions. Both the Wasserstein distance and the Frobenius norm have found only limited applications in reinforcement learning. Pacchiano et al. (2020) use the Wasserstein distance to score behaviors of agents. Richemond & Maginnis (2017) proposed an alternative algorithm for bandits with Wasserstein-based trust regions. Song & Zhao (2020) focus on solving the trust region problem for distributional policies using both KL- and Wasserstein-based trust regions for discrete action spaces. Our projections are applicable independently of the underlying algorithm and only assume a Gaussian policy, a common assumption for continuous action spaces. Several authors (Dalal et al., 2018; Chow et al., 2019; Yang et al., 2020) used projections as network layers to enforce limitations in the action or state space given environmental restrictions, such as robotic joint limits. Entropy Control. Abdolmaleki et al. (2015) introduced the idea of explicitly controlling the decrease in entropy during the optimization process, which was later extended to deep reinforcement learning by Pajarinen et al. (2019) and Akrour et al. (2019). They use either an exponential or linear decay of the entropy during policy optimization to control the exploration process and escape local optima. To leverage those benefits, we embed this entropy control mechanism in our differentiable trust region layers. 3 PRELIMINARIES AND PROBLEM STATEMENT. We consider the general problem of policy search in a Markov Decision Process (MDP) defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, P_0, \gamma)$. We assume the state space $\mathcal{S}$ and action space $\mathcal{A}$ are continuous, and the transition probabilities $\mathcal{T}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ describe the probability of transitioning to state $s_{t+1} \in \mathcal{S}$ given the current state $s_t \in \mathcal{S}$ and action $a_t \in \mathcal{A}$. We denote the initial state distribution as $P_0: \mathcal{S} \to [0, 1]$. The reward returned by the environment is given by a function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, and $\gamma \in [0, 1)$ is the discount factor. Our goal is to maximize the expected accumulated discounted reward $R^\gamma = \mathbb{E}_{\mathcal{T}, P_0, \pi}\left[\sum_{t=0}^{\infty} \gamma^t \mathcal{R}(s_t, a_t)\right]$. To find the optimal policy, traditional PG methods often make use of the likelihood ratio gradient and an importance sampling estimator. Moreover, instead of directly optimizing the returns, it has been shown to be more effective to optimize the advantage function, as this results in an unbiased estimator of the gradient with less variance: $\max_\theta \hat{J}(\pi_\theta, \pi_{\theta_{\text{old}}}) = \max_\theta \mathbb{E}_{(s,a) \sim \pi_{\theta_{\text{old}}}}\left[\frac{\pi_\theta(a|s)}{\pi_{\theta_{\text{old}}}(a|s)} A^{\pi_{\theta_{\text{old}}}}(s, a)\right]$ (1), where $A^\pi(s, a) = \mathbb{E}\left[R^\gamma \mid s_0 = s, a_0 = a; \pi\right] - \mathbb{E}\left[R^\gamma \mid s_0 = s; \pi\right]$ is the advantage function, and the expectation is w.r.t. $\pi_{\theta_{\text{old}}}$, i.e., $s' \sim \mathcal{T}(\cdot|s, a)$, $a \sim \pi_{\theta_{\text{old}}}(\cdot|s)$, $s_0 \sim P_0(s_0)$, $s \sim \rho_{\pi_{\theta_{\text{old}}}}$, where $\rho_{\pi_{\theta_{\text{old}}}}$ is a stationary distribution of policy $\pi_{\theta_{\text{old}}}$. The advantage function is commonly estimated by generalized advantage estimation (GAE) (Schulman et al., 2015b). Trust region methods add constraints to this objective. Using a constraint on the maximum KL over the states has been shown to guarantee monotonic improvement of the policy (Schulman et al., 2015a). However, since all current approaches use an expected KL constraint rather than a maximum KL constraint, the guarantee of monotonic improvement does not hold exactly for them either.
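To make the surrogate objective in Equation 1 concrete, the following is a minimal sketch of how such an importance-sampling objective is typically estimated from a sampled batch; the use of PyTorch and all tensor names are illustrative assumptions, not part of the paper.

```python
# Minimal sketch (assumption: PyTorch; names are illustrative) of the
# importance-sampling surrogate objective from Equation 1.
import torch

def surrogate_objective(new_log_probs: torch.Tensor,
                        old_log_probs: torch.Tensor,
                        advantages: torch.Tensor) -> torch.Tensor:
    """Monte Carlo estimate of E[(pi_theta / pi_theta_old) * A]."""
    # Probability ratio pi_theta(a|s) / pi_theta_old(a|s), computed in log
    # space for numerical stability. old_log_probs comes from the
    # data-collecting policy and must not receive gradients.
    ratio = torch.exp(new_log_probs - old_log_probs.detach())
    return (ratio * advantages).mean()

# Usage: maximize the objective, i.e., minimize its negation:
# loss = -surrogate_objective(new_lp, old_lp, adv); loss.backward()
```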
Analogous monotonic-improvement results are, to our knowledge, not available for the W2 distance or the Frobenius norm. For our projections we assume Gaussian policies, where $\pi_{\theta_{\text{old}}}(a_t|s_t) = \mathcal{N}(a_t \mid \mu_{\text{old}}(s_t), \Sigma_{\text{old}}(s_t))$ and $\pi_\theta(a_t|s_t) = \mathcal{N}(a_t \mid \mu(s_t), \Sigma(s_t))$ denote the old and the current policy, respectively. We explore three trust regions on top of Equation 1 that employ different similarity measures between old and new distributions: the frequently used reverse KL divergence, the Wasserstein L2 distance, and the Frobenius norm. Reverse KL Divergence. The KL divergence between two Gaussian distributions with means $\mu_1$ and $\mu_2$ and covariances $\Sigma_1$ and $\Sigma_2$ can generally be written as $\mathrm{KL}(\{\mu_1, \Sigma_1\} \,\|\, \{\mu_2, \Sigma_2\}) = \frac{1}{2}\left[(\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) + \log\frac{|\Sigma_2|}{|\Sigma_1|} + \mathrm{tr}\{\Sigma_2^{-1} \Sigma_1\} - d\right]$, where $d$ is the dimensionality of $\mu_1, \mu_2$. The KL uses the Mahalanobis distance to measure the similarity between the two mean vectors. The difference of the covariances is measured by the difference in shape, i.e., the difference in scale, given by the log ratio of the determinants, plus the difference in rotation, given by the trace term. Since the KL is non-symmetric, it is clearly not a distance, yet it is still a frequently used divergence between distributions. We will use the more common reverse KL for our trust region, where the first argument is the new policy and the second is the old policy. Wasserstein Distance. The Wasserstein distance is a distance measure based on an optimal transport formulation; for more details see Villani (2008). The Wasserstein-2 distance between two Gaussian distributions can generally be written as $W_2(\{\mu_1, \Sigma_1\}, \{\mu_2, \Sigma_2\}) = \|\mu_1 - \mu_2\|^2 + \mathrm{tr}\left(\Sigma_1 + \Sigma_2 - 2\left(\Sigma_2^{1/2} \Sigma_1 \Sigma_2^{1/2}\right)^{1/2}\right)$. A key difference to the KL divergence is that the Wasserstein distance is a symmetric distance measure, i.e., $W_2(q, p) = W_2(p, q)$. Our experiments also revealed that it is beneficial to measure the W2 distance in a metric space defined by the covariance of the old policy distribution, denoted here as $\Sigma_2$, as the distance measure is then more sensitive to the data-generating distribution. The W2 distance in this metric space reads $W_{2, \Sigma_2}(\{\mu_1, \Sigma_1\}, \{\mu_2, \Sigma_2\}) = (\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) + \mathrm{tr}\left(\Sigma_2^{-1} \Sigma_1 + I - 2\Sigma_2^{-1}\left(\Sigma_2^{1/2} \Sigma_1 \Sigma_2^{1/2}\right)^{1/2}\right)$. Frobenius Norm. The Frobenius norm is a matrix norm and can directly be applied to the difference of the covariance matrices of the Gaussian distributions. To measure the distance between the mean vectors we will, similar to the KL divergence, employ the Mahalanobis distance, as this empirically leads to improved performance in comparison to just taking the squared distance. Hence, we denote the following metric as the Frobenius norm between two Gaussian distributions: $F(\{\mu_1, \Sigma_1\}, \{\mu_2, \Sigma_2\}) = (\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) + \mathrm{tr}\left((\Sigma_2 - \Sigma_1)^T (\Sigma_2 - \Sigma_1)\right)$. The Frobenius norm also constitutes a symmetric distance measure.
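As a concrete reference, here is a small NumPy sketch of the three Gaussian similarity measures defined above; treating the (mu2, sig2) argument as the old policy follows the reverse-KL convention stated in the text, and all function names are illustrative assumptions.

```python
# Sketch (assumptions: NumPy/SciPy; names illustrative) of the three
# similarity measures between Gaussians {mu1, Sigma1} and {mu2, Sigma2}.
import numpy as np
from scipy.linalg import sqrtm

def gaussian_kl(mu1, sig1, mu2, sig2):
    """Reverse KL(new || old) when (mu1, sig1) is the new policy."""
    d = mu1.shape[0]
    diff = mu2 - mu1
    sig2_inv = np.linalg.inv(sig2)
    maha = diff @ sig2_inv @ diff
    _, logdet1 = np.linalg.slogdet(sig1)
    _, logdet2 = np.linalg.slogdet(sig2)
    return 0.5 * (maha + logdet2 - logdet1 + np.trace(sig2_inv @ sig1) - d)

def gaussian_w2_metric(mu1, sig1, mu2, sig2):
    """W2 distance measured in the metric space of the old covariance sig2."""
    d = mu1.shape[0]
    diff = mu2 - mu1
    sig2_inv = np.linalg.inv(sig2)
    root2 = sqrtm(sig2).real
    cross = sqrtm(root2 @ sig1 @ root2).real  # (Sigma2^1/2 Sigma1 Sigma2^1/2)^1/2
    return diff @ sig2_inv @ diff + np.trace(
        sig2_inv @ sig1 + np.eye(d) - 2.0 * sig2_inv @ cross)

def gaussian_frobenius(mu1, sig1, mu2, sig2):
    """Mahalanobis mean term plus squared Frobenius norm of the covariance gap."""
    diff = mu2 - mu1
    return diff @ np.linalg.inv(sig2) @ diff + np.trace(
        (sig2 - sig1).T @ (sig2 - sig1))
```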
Trust-region-based policy optimization methods such as TRPO and PPO are difficult to tune and require many approximations. The authors address this issue by deriving closed-form trust regions for Gaussian policies under three different types of divergence (or distance). Based on the theoretical derivation, a differentiable projection layer is proposed, where the layer is built on top of the "old" policy during trust-region-based policy updates. The differences arising from the various divergences (or distances) are examined both theoretically and empirically.
SP:7a6904083c223c746197e75e6b24d84107b50ab3
Differentiable Trust Region Layers for Deep Reinforcement Learning
The paper proposes a way to impose trust region restrictions via projections when doing policy optimisation in reinforcement learning. The projections have a closed form and enforce a trust region for each state individually. The authors propose three types of projections, based on the Frobenius norm, the Wasserstein distance, and the KL divergence. They compare them to existing methods (PPO, PAPI) and provide some insights about their behaviour.
SP:7a6904083c223c746197e75e6b24d84107b50ab3
Learning to communicate through imagination with model-based deep multi-agent reinforcement learning
1 INTRODUCTION. “We use imagination in our ordinary perception of the world. This perception cannot be separated from interpretation.” (Warnock, 1976). The human brain, and the mind that emerges from its working, is currently our best example of a general-purpose intelligent learning system, and our ability to imagine is an integral part of it (Abraham, 2020). The imagination is furthermore intimately connected to other parts of our cognition, such as our use of language (Shulman, 2012). In fact, Dor (2015) argues that: “The functional specificity of language lies in the very particular functional strategy it employs. It is dedicated to the systematic instruction of imagination: we use it to communicate directly with our interlocutors’ imaginations.” However, the origin of language resides not only in individual cognition, but in society (Von Humboldt, 1999), grounded in part through interpersonal experience (Bisk et al., 2020). The complexity of the world necessitates our use of individual mental models (Forrester, 1971) to store abstract representations of the information we perceive through the direct experiences of our senses (Chang and Tsao, 2017). As society expanded, the sharing of direct experiences within groups reached its limit. Growing societies could only continue to function through the invention of language, a unique and effective communication protocol in which a sender’s coded message of abstract mental representations, delivered through speech, could serve as a direct instruction to the receiver’s imagination (Dor, 2015). Therefore, the combination of language and imagination gave us the ability to solve complex tasks by performing abstract reasoning (Perkins, 1985) and joint spatiotemporal planning (Reuland, 2010). In this work, we explore a plausible learning system architecture for the development of an artificial multi-agent communication protocol of the imagination. Based on the above discussion, the minimum set of required features of such a system is: (1) it is constructed from multiple individual agents, where (2) each agent possesses an abstract model of the world that can serve as an imagination, (3) each agent has access to a communication medium, or channel, and (4) the agents jointly learn and interact in a collective society. Consequently, these features map most directly onto the learning framework of model-based deep multi-agent reinforcement learning. Reinforcement learning (RL) has demonstrated close connections with neuroscientific models of learning (Barto, 1995; Schultz et al., 1997). Beyond this connection, RL has proven to be an extremely useful computational framework for building effective artificial learning systems (Sutton and Barto, 2018). This is true not only in simulated environments and games (Mnih et al., 2015; Silver et al., 2017), but also in real-world applications (Gregurić et al., 2020). Furthermore, RL approaches are being considered for some of humanity's most pressing problems, such as the need to build sustainable food supply (Binas et al., 2019) and energy forecasting systems (Jeong and Kim, 2020), brought about through global climate change (Manabe and Wetherald, 1967; Hays et al., 1976; Hansen et al., 2012; Rolnick et al., 2019). Our system.
We develop our system specifically in the context of cooperative multi-agent RL (OroojlooyJadid and Hajinezhad, 2019), where multiple agents jointly attempt to learn how to act in a partially observable environment by maximising a shared global reward. Our agents make use of model-based reinforcement learning (Langlois et al., 2019; Moerland et al., 2020). To learn an artificial language of the imagination, each individual agent in our system is given access to a recurrent world model capable of learning rich abstract representations of real and imagined future states. We combine this world model with an encoder function to encode world model rollouts as messages, and use a recurrent differentiable message passing channel for communication. To show the benefits of our system, we develop a set of ablation tests and specialised experiments using novel as well as well-known multi-agent environments, and compare the performance of our system to a set of strong model-free deep MARL baselines. Our findings and contributions. We find that joint planning using learned communication through imagination can significantly improve MARL system performance when compared to a set of state-of-the-art baselines. We demonstrate this advantage of planning in a set of specialised environments specifically designed to test for the use of communication combined with imagined future prediction. Our present work is not at scale and we only consider situations containing two agents. However, to the best of our knowledge, this is the first demonstration of a model-based deep MARL system that combines world models with differentiable communication for joint planning, and that is able to solve tasks successfully where state-of-the-art model-free deep MARL methods fail. We see this work as a preliminary step towards building larger-scale joint planning systems using model-based deep multi-agent RL. 2 BACKGROUND AND RELATED WORK. Reinforcement learning is concerned with optimal sequential decision making within a particular environment. In single-agent RL, the problem is modeled as a Markov decision process (MDP) defined by the tuple $(S, A, r, p, \rho_0, \gamma)$ (Andreae, 1969; Watkins, 1989). At time step $t$, in a state $s_t$, which is a member of the state space $S$, the agent can select an action $a_t$ from a set of actions $A$. The environment state transition function $p(s_{t+1}|s_t, a_t)$ provides a distribution over next states $s_{t+1}$, and a reward function $r(s_t, a_t, s_{t+1})$ returns a scalar reward given the current state, action, and next state. The initial state distribution is given by $\rho_0$, with $s_0 \sim \rho_0$, and $\gamma \in (0, 1]$ is a discount factor controlling the influence of future reward. The goal of RL is to find an optimal policy $\pi^*$, where the policy is a mapping from states to a distribution over actions, that maximises long-term discounted future reward such that $\pi^* = \arg\max_\pi \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t, s_{t+1})\right]$. If the environment state is partially observed by the agent, an observation function $o(s_t)$ is assumed and the agent has access only to the observation $o_t = o(s_t)$ at each time step, with the full observation space defined as $O = \{o(s) \mid s \in S\}$. In this work, we focus only on the case of partial observability. Deep RL. Popular algorithms for solving the RL problem include value-based methods such as Q-learning (Watkins and Dayan, 1992) and policy gradient methods such as the REINFORCE algorithm (Williams, 1992).
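Before these algorithm families are described in more detail, here is a small sketch of the discounted-return quantity defined above, which both families aim to maximise in expectation; this is a generic utility whose names are chosen for illustration only.

```python
# Sketch (names illustrative): discounted return for a recorded episode,
# i.e., the quantity the optimal policy maximises in expectation.
from typing import List

def discounted_return(rewards: List[float], gamma: float = 0.99) -> float:
    """Compute sum_t gamma^t * r_t for one episode, iterating backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g  # Horner-style accumulation of the discounted sum
    return g

# Example: three steps of reward 1.0 with gamma = 0.9
# gives 1.0 + 0.9 * 1.0 + 0.81 * 1.0 = 2.71.
assert abs(discounted_return([1.0, 1.0, 1.0], gamma=0.9) - 2.71) < 1e-9
```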
Q-learning learns a value function $Q(s, a)$ for state-action pairs and obtains a policy by selecting actions according to these learned values using a specific action selector, e.g., $\epsilon$-greedy (Watkins, 1989) or UCB (Auer et al., 2002). In contrast, policy gradient methods learn a parameterised policy $\pi_\theta$, with parameters $\theta$, directly by following a performance gradient signal with respect to $\theta$. The above approaches are combined in actor-critic methods (Sutton et al., 2000), where the actor refers to the policy being learned and the critic to the value function. In deep RL, the policy and value functions use deep neural networks as high-capacity function approximators capable of learning distributed abstract representations from raw input signals that are useful for downstream decision making. Recent state-of-the-art deep RL methods include Deep Q-Networks (DQN) (Mnih et al., 2013) and related variants (Hessel et al., 2017), as well as advanced actor-critic methods such as PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018). See (Arulkumaran et al., 2017; Li, 2017) for an in-depth review of deep RL. Model-based RL. In RL, the environment transition function $p$ is typically unknown. As a result, so-called model-free RL methods, such as DQN and PPO, rely solely on data gathered from the environment, i.e., real experience, to learn an optimal policy. However, if given access to a transition function, an agent can generate useful simulated, or imagined, experience and use it to plan. Therefore, in model-based RL, a model $\hat{p}_\phi(o_{t+1}|o_t, a_t)$ with parameters $\phi$ is learned using stored transitions, gathered from either a random, heuristic, or learned policy, to simulate transitions from the true (unknown) transition function $p$. The model can then be used for model-based planning, which can happen either in the background or at decision time. We briefly highlight the differences between these two types of planning, discuss work related to each, and describe how this relates to our own work. – Background planning. In background planning, the model is primarily used to generate additional experience and assist learning, i.e., for updating the parameters of the policy and/or value functions. An early version of this approach is DYNA-Q (Sutton, 1990), which uses the additional experience to help learn a value function. However, the usefulness of a model degrades over long time horizons as model rollout error starts to compound (Gu et al., 2016). This has led to different approaches that either use fixed-depth rollouts based on model uncertainty (Feinberg et al., 2018), dynamic rollout schedules (Buckman et al., 2018), or short rollouts starting from intermediate states sampled from a buffer (Janner et al., 2019). A promising alternative approach is to update gradients directly via imagined rollouts in a lower-dimensional latent space (Hafner et al., 2019; 2020; Byravan et al., 2020). – Decision-time planning. In decision-time planning, the model is used to generate imagined rollouts from a given state for the purpose of selecting the optimal action or sequence of actions. Decision-time planning methods for discrete action spaces often rely on search methods such as Monte Carlo tree search (MCTS) (Coulom, 2006) and have been used successfully in several works (Silver et al., 2017; Anthony et al., 2017; Schrittwieser et al., 2019).
In continuous action spaces, methods include trajectory optimisation approaches using trajectory sampling (Todorov and Li, 2005; Theodorou et al., 2010; Nagabandi et al., 2018; Chua et al., 2018) or collocation (Posa et al., 2014) (optimising reward while forcing the model's predictions to be close to already visited states). The model in our system is utilised for decision-time planning and follows the approach of Ha and Schmidhuber (2018), who used recurrent neural world models as a way to give agents the ability to learn how to think (Schmidhuber, 2015). Specifically, we make use of a recurrent world model that takes the form of a mixture density network LSTM (MDN-LSTM), as used in (Ha and Eck, 2017). The model is therefore a form of recurrent Gaussian mixture model and allows us to sample probabilistic predictions of imagined next states. An illustration of the core features of model-based RL and the different types of planning is given in Figure 1. Also see (Janner, 2019) and (Mordatch and Hamrick, 2020) for useful overviews. Multi-agent RL (MARL). In the multi-agent case with $N$ agents, we use the formalism of partially observable Markov games (Littman, 1994), defined as the tuple given above for the single-agent case, but with observation and action spaces given by the following Cartesian products: $O = \prod_{i=1}^{N} O_i \subseteq S$ and $A = \prod_{i=1}^{N} A_i$, for agents $i = 1, \ldots, N$. The goal in this setting is to find an optimal joint policy $\pi^*(a_1, \ldots, a_N \mid o_1, \ldots, o_N)$ that maximises a shared long-term discounted future reward for all agents as $\pi^* = \arg\max_\pi \mathbb{E}\left[\sum_{i=1}^{N} \sum_{t=0}^{\infty} \gamma^t r(o_t^i, a_t^i, o_{t+1}^i)\right]$. Early work in MARL simply trained multiple independent Q-learning algorithms (Tan, 1993), which has since been extended to include deep neural networks, or more specifically independent DQNs (Tampuu et al., 2017). However, from the perspective of an individual agent, these approaches treat all other learning agents as part of the environment, causing the optimal policy distribution to become non-stationary. Furthermore, if the environment is only partially observable, the learning task can become even more difficult, and agents may struggle with credit assignment due to spurious rewards received from unobserved actions of other agents (Claus and Boutilier, 1998). To mitigate the issue of non-stationarity, MARL systems are often designed within the paradigm of centralised training with decentralised execution (CTDE) (Oliehoek et al., 2008; Lowe et al., 2017; Foerster et al., 2017). In CTDE, a centralised value function, or critic, is used during training, which conditions on the global state and the joint actions of all agents to make the learning problem stationary; it is then removed once the individual agents' policies have been learned, making it possible to use each policy independently during system execution. However, individual agent policies extracted in this way may still perform poorly because training is not specifically aligned with the goal of performing well under decentralised execution. Therefore, state-of-the-art value-based MARL approaches such as Q-mix (Rashid et al., 2018) and QTRAN (Son et al., 2019) make use of value function decomposition strategies (Sunehag et al., 2017) to more closely resemble decentralised training, where each agent is a recurrent DQN (Hausknecht and Stone, 2015) that has memory to also deal with partial observability.
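Returning to the MDN-LSTM world model described earlier in this section, here is a minimal sketch of its core sampling step: drawing an imagined next latent state from a mixture of diagonal Gaussians. The array shapes and names are illustrative assumptions, and a real implementation would condition the mixture parameters on an LSTM hidden state.

```python
# Sketch (assumption: NumPy; shapes/names illustrative) of the MDN sampling
# step of a world model: given mixture parameters predicted for one time
# step, draw one imagined next latent state.
import numpy as np

def sample_mdn(log_pi: np.ndarray, mu: np.ndarray, sigma: np.ndarray,
               rng: np.random.Generator) -> np.ndarray:
    """log_pi: (K,) mixture log-weights; mu, sigma: (K, D) per-component
    means and standard deviations. Returns one sample of shape (D,)."""
    # Normalise the mixture weights and pick a component.
    pi = np.exp(log_pi - log_pi.max())
    pi /= pi.sum()
    k = rng.choice(len(pi), p=pi)
    # Sample from the chosen diagonal Gaussian component.
    return mu[k] + sigma[k] * rng.standard_normal(mu.shape[1])

# Usage: roll the model forward by feeding each sample back in, together
# with an action, to produce an imagined trajectory for planning.
rng = np.random.default_rng(0)
z_next = sample_mdn(np.zeros(5), np.zeros((5, 8)), np.ones((5, 8)), rng)
```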
Another clear way to help with the issue of partial observability is for agents to be able to communicate. Learned multi-agent communication has been a key innovation in helping MARL systems scale to more complex environments and solve more challenging tasks (Foerster et al., 2016; Sukhbaatar et al., 2016; Singh et al., 2018; Chu et al., 2020). To facilitate communication in our work, we formally extend the Markov game $\mathcal{M}$ by having agents connected to each other via communication channels according to a pre-defined neighbourhood graph $G(V, E)$. The graph $G$ is defined by a set of nodes (vertices) $V$ along with a set of edge connections $E = \{(i, j) \mid i, j \in V, i \neq j\}$, where each agent is a node in the graph, locally connected to other agent nodes. We define the connected neighbourhood surrounding agent $i$ as $N_i = \{j \in V \mid (i, j) \in E\}$, as illustrated in the sketch below. This networked Markov game $\mathcal{M}_G$ is then defined by the tuple $(G, S, A, r, p, \rho_0, \gamma)$. Our communication channels are recurrent and end-to-end differentiable, allowing agent-to-agent communication protocols to be learned during training. Unlike work studying the emergence of language through communication in MARL, e.g., (Lazaridou et al., 2016; Mordatch and Abbeel, 2017; Kajić et al., 2020), our work focuses on communication through imagination as a useful system design for task solving, as opposed to uncovering new insights into emergent phenomena related to the human imagination. Model-based MARL. To the best of our knowledge, the literature on model-based deep MARL is quite sparse and very little work has been done in this area. A notable exception is the recent work by Krupnik et al. (2020) on multi-agent model-based latent space trajectory optimisation. Here a multi-step generative model, specifically a temporal segment model (Mishra et al., 2017), is used to generate rollouts in a disentangled latent space, and optimisation is performed directly over agent latent variables. Our work is the first we are aware of in the area of model-based deep MARL that combines communication with decision-time planning using recurrent neural world models.
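As an illustration of the neighbourhood structure defined above, the following sketch builds the neighbourhood sets $N_i$ from an edge list and performs one round of message aggregation. The mean-pooling choice, the bidirectional treatment of edges, and all names are assumptions made for illustration; the paper's actual channel is recurrent and learned end-to-end.

```python
# Sketch (assumptions: NumPy, mean-pooled bidirectional messages; the
# paper's channel is recurrent and learned, this only illustrates the
# graph bookkeeping of the networked Markov game).
import numpy as np

def neighbourhoods(num_agents: int, edges) -> dict:
    """N_i = { j | (i, j) in E } for each agent i, with edges treated
    as bidirectional communication channels."""
    n = {i: set() for i in range(num_agents)}
    for i, j in edges:
        n[i].add(j)
        n[j].add(i)
    return n

def communicate(messages: np.ndarray, n: dict) -> np.ndarray:
    """One communication round: each agent receives the mean of its
    neighbours' outgoing messages. messages: (num_agents, msg_dim)."""
    received = np.zeros_like(messages)
    for i, neigh in n.items():
        if neigh:
            received[i] = messages[list(neigh)].mean(axis=0)
    return received

# Usage: a fully connected two-agent graph with 4-dimensional messages,
# matching the two-agent setting studied in the paper.
n = neighbourhoods(2, [(0, 1)])
received = communicate(np.arange(8, dtype=float).reshape(2, 4), n)
```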
This paper proposes combining model-based and multi-agent reinforcement learning. The authors follow the typical recurrent neural world model setting to generate imagined rollouts for decision-time planning. To tackle the non-stationarity of a multi-agent environment, they build end-to-end differentiable communication channels between agents within a pre-defined neighborhood. The communication message is defined as abstract information encoded from the imagined rollout. Agents then make decisions based on the messages they receive and the output of the recurrent neural world models. Empirical studies are performed to show the superiority of the proposed method over state-of-the-art model-free MARL approaches. Results are shown in two simple environments, which are designed to require communication between agents to solve the task.
SP:e4eac7e23932f7b1c1ac0c281cbeb076a4525a86
Learning to communicate through imagination with model-based deep multi-agent reinforcement learning
The paper develops a model-based method for cooperative multi-agent reinforcement learning. The proposed approach uses communication as a tool for mitigating partial observability and the non-stationarity of the task, while also helping agents reason about other agents' behaviors. The authors motivate the use of language as a medium in model-based RL with early literature from psychology and linguistics.
SP:e4eac7e23932f7b1c1ac0c281cbeb076a4525a86
Generating Plannable Lifted Action Models for Visually Generated Logical Predicates
1 INTRODUCTION. Learning a high-level symbolic transition model of an environment from raw input (e.g., images) is a major challenge in the integration of connectionism and symbolism. Doing so without manually defined symbols is particularly difficult, as it requires solving both the Symbol Grounding problem (Harnad, 1990; Taddeo & Floridi, 2005; Steels, 2008) and the Action Model Learning/Acquisition problem. Recently, seminal work by Asai & Fukunaga (2018, Latplan) that learns discrete planning models from images has opened the door to applying symbolic Classical Planning systems to a wide variety of raw, noisy data. Latplan uses discrete variational autoencoders to generate propositional latent states and their dynamics (action model) directly from images. Unlike existing work, which requires several machine learning pipelines (SVM/decision trees) and labeled inputs (e.g., a sequence of high-level options) (Konidaris et al., 2014), Latplan is an end-to-end unsupervised neural network that requires no manually labeled inputs. Numerous extensions and enhancements have been proposed: Causal InfoGAN (Kurutach et al., 2018) instead uses the GAN framework to obtain propositional representations. Latplan's representation was shown to be compatible with symbolic Goal Recognition (Amado et al., 2018). First-Order State AutoEncoder (Asai, 2019, FOSAE) extends Latplan to generate predicate symbols. Cube-Space AutoEncoder (Asai & Muise, 2020, CSAE) regularizes the latent space to a particular form which directly exports to a learned propositional PDDL model (Fikes et al., 1972). Discrete Sequential Application of Words (DSAW) learns a plannable propositional word embedding from a natural language corpus (Asai & Tang, 2020). In this paper, we obtain a lifted action model expressed in First-Order Logic (FOL), which is a superset of the object-centric (property-based) representations that the Machine Learning community recently began to pay attention to (e.g., the ICML workshop on Object-Oriented Learning, https://oolworkshop.github.io/), but which have long been the central focus of the broader AI community. In propositional action models, the environment representation is a fixed-size binary array and does not transfer to a different or dynamically changing environment with a varying number of objects. In contrast, lifted FOL representations generalize over objects and environments, as we demonstrate in Blocksworld with different numbers of blocks and in Sokoban with different map sizes. We propose the Lifted First-Order Space AutoEncoder (FOSAE++) neuro-symbolic architecture, which learns a lifted PDDL action model by integrating and extending the FOSAE, CSAE, and Neural Logic Machine (Dong et al., 2019, NLM) architectures. The overall task of our system is illustrated in Fig. 1. The system takes a transition dataset containing a set of pairs of raw observations which are a single time step apart. Each observation consists of multiple visual segmentations of the objects. The system learns a lifted action model of the environment by generating the symbols, and emits a PDDL (Haslum et al., 2019) encoding for state-of-the-art planning systems. Contribution. Table 1 contains a taxonomy of existing model acquisition systems in chronological order. FOSAE++ is the first system that satisfies all features readily available in symbolic action model acquisition systems, while not relying on human-derived symbols.
FOSAE++ generates unnamed symbols by itself, effectively addressing the long-standing Knowledge Acquisition bottleneck (Cullen & Bryman, 1988) and the Symbol Grounding problem, and showing a future direction for high-level symbolic autonomy. (Footnotes to Table 1. Footnote 1: Yang et al., 2007; Cresswell et al., 2013; Aineto et al., 2018; Zhuo et al., 2019; Cresswell & Gregory, 2011; Mourão et al., 2012; Zhuo & Kambhampati, 2013. Footnote 2: Konidaris et al. (2014) requires sequences of high-level options to learn from, such as [move, move, interact, ...] in the Playroom domain. Causal InfoGAN cannot deterministically enumerate all successors (a requirement for search completeness) due to the lack of action symbols and must sample the successors. James et al. (2020b)'s PDDL output is limited to unary predicates / properties of objects and thus cannot model the interactions between objects; it also requires sequences of high-level options such as [WalkToItem, AttachBlock, WalkToItem, ...] in the Minecraft domain.)

2 PRELIMINARIES AND BACKGROUND. We denote a multi-dimensional array (tensor) in bold and its elements with a subscript (e.g., x ∈ R^{N×M}, x_2 ∈ R^M), an integer range n ≤ i ≤ m by n..m, the concatenation of tensors a and b in the last axis by a;b, and the i-th data point of a dataset by a superscript i, which we may omit for clarity. We use the same symbol for a set and its size (e.g., S, not |S|) to avoid clutter. Finally, B = [0, 1] ⊂ R. We assume background knowledge of discrete VAEs with continuous relaxations (included in the appendix, Sec. A.1), such as Gumbel-Softmax (GS) and Binary-Concrete (BC) (Jang et al., 2017; Maddison et al., 2017). Their activations are denoted as GS and BC, respectively.

2.1 LIFTED STRIPS/PDDL PLANNING. The Planning Domain Description Language (PDDL) is a modeling language for the Lifted STRIPS planning formalism (Fikes et al., 1972) and its extensions (Haslum et al., 2019). Let F(T) be a formula consisting of logical operations {∧, ¬} over a set of terms T. For example, when T = {have(I, food), full(I)}, then have(I, food) ∧ ¬full(I) ∈ F(T). We denote a lifted STRIPS planning problem as a 5-tuple ⟨O, P, A, I, G⟩. O is a set of objects (e.g., O ∋ food), P is a set of predicates (P ∋ full(x)), and A is a set of lifted actions (A ∋ eat). Each predicate p ∈ P has an arity #p ≥ 0. Predicates are instantiated/grounded into propositions P(O) = ⋃_{p∈P} ({p} × O × ... × O), with #p copies of O, such as have(I, food). A state s ⊆ P(O) represents truth assignments to the propositions; e.g., s = {have(I, food)} represents have(I, food) = ⊤. We can also represent it as a bitvector of size Σ_p O^{#p}. Each lifted action a(X) ∈ A has an arity #a and parameters X = (x_1, ..., x_{#a}), such as eat(x1, x2). Lifted actions are instantiated into ground actions A(O) = ⋃_{a∈A} ({a} × O × ... × O), with #a copies of O, such as eat(I, food). a(X) is a 3-tuple ⟨PRE(a), ADD(a), DEL(a)⟩, where PRE(a), ADD(a), DEL(a) ∈ F(P(X)) are the preconditions, add-effects, and delete-effects: e.g., eat(x1, x2) = ⟨{have(x1, x2)}, {full(x1)}, {have(x1, x2)}⟩. The semantics of these three elements are as follows: a ground action a† ∈ A(O) is applicable when a state s satisfies PRE(a†), i.e., PRE(a†) ⊆ s, and applying an action a† to s yields a new successor state a†(s) = (s \ DEL(a†)) ∪ ADD(a†), e.g.
, eat(I, food) = "I can eat a food when I have one, and if I eat one I am full but the food is gone." Finally, I, G ⊆ P(O) are the initial state and a goal condition, respectively. The task of classical planning is to find a plan (a†_1, ..., a†_n) which satisfies a†_n ∘ ... ∘ a†_1(I) ⊆ G.

2.2 NEURAL PROPOSITIONAL/ACTION SYMBOL GENERATION WITH LATPLAN. Latplan is a framework for domain-independent image-based classical planning (Asai & Fukunaga, 2018). It learns a propositional state representation and transition rules entirely from image-based observations of the environment with discrete VAEs and solves the problem using a classical planner. Latplan is trained on a transition input Tr: a set of pairs of raw data randomly sampled from the environment. The i-th transition (o^{i,0}, o^{i,1}) ∈ Tr is a pair of observations made before and after an unknown high-level action is performed. Once trained, Latplan can process a planning input (o^I, o^G), a pair of raw images corresponding to an initial and a goal state of the environment. The output of Latplan is a data sequence representing the plan execution (o^I, ..., o^G) that reaches o^G from o^I. While the original paper used an image-based implementation, conceptually any form of temporal data is viable for this methodology, e.g., an NLP corpus (Asai & Tang, 2020). The latest Latplan (Asai & Muise, 2020) has a training phase and a planning phase. In the training phase, it trains an end-to-end neural network called the Cube-Space AutoEncoder (CSAE) on Tr (Fig. 2, top left). CSAE is a variational autoencoder modeled by binary and categorical random variables, representing the propositional states and the actions of classical planning, respectively. The dynamics modeled by these actions compiles directly into a PDDL model. The network combines a Binary-Concrete VAE to produce the binary state representation and a Gumbel-Softmax VAE to produce a categorical bottleneck layer which assigns a categorical label to each input. Let o^0 and o^1 be a pair of observed states in a transition, z^0 and z^1 be the corresponding binary latent states, and a be the one-hot vector that represents a discrete action label assigned to the transition. CSAE is a variational autoencoder network that can be formalized as follows:

(encoder) z^0, z^1 = ENCODE(o^0), ENCODE(o^1) ∈ B^F
(action assignment/clustering) a = ACTION(z^0, z^1) ∈ B^A
(learning the dynamics) z̃^1 = APPLY(z^0, a) ∈ B^F
(decoder) õ^0, õ^1, õ^{∼1} = DECODE(z^0), DECODE(z^1), DECODE(z̃^1)

where ENCODE, DECODE, ACTION, APPLY are arbitrary multilayer perceptrons. The outputs of ENCODE and APPLY are activated by Binary-Concrete, and the output of ACTION is activated by a Gumbel-Softmax of A categories (a hyperparameter). Assuming a certain set of prior distributions, a lower bound (ELBO) of the log likelihood of observing a pair of states (o^0, o^1) can be derived as follows (Appendix Sec.
A.3.3):

log p(o^0, o^1) ≥ −D_KL(q(a | o^1, z^0, z^1) ‖ p(a | z^0, z^1)) − D_KL(q(z̃^1 | o^1, a, z^0, z^1) ‖ q(z^1 | o^1)) − D_KL(q(z^0 | o^0) ‖ p(z^0)) − D_KL(q(z^1 | o^1) ‖ p(z^1)) + log p(o^0 | z^0) + log p(o^1 | z̃^1, a, z^0, z^1)

After training, Latplan generates a propositional classical planning problem (z^I, z^G) from a planning input (o^I, o^G) and exports it into PDDL files together with the action model obtained during training, which are then solved by Fast Downward (Helmert, 2006), an optimized C++-based solver independent from the neural network. Finally, it obtains a step-by-step, human-comprehensible visualization of the plan execution by decoding the intermediate states of the plan into images. The A-dimensional one-hot categorical variable a^i in the network performs a clustering of the state transitions, with the maximum number of clusters A specified as a hyperparameter. The cluster ID is used as the action symbol. The clustering is performed by its encoder, ACTION(z^{i,0}, z^{i,1}) = a^i, which takes a propositional state pair (z^{i,0}, z^{i,1}) and returns a one-hot vector a^i of A categories using Gumbel-Softmax. The AAE's decoder APPLY takes the current state z^{i,0} and the action a^i and reconstructs the successor state z̃^{i,1} ≈ z^{i,1}, acting as a progression function APPLY(a^i, z^{i,0}) = z̃^{i,1}. APPLY is typically just called a "model" in the model-based RL literature. While APPLY can be any network from the training standpoint, such a neural black-box function does not directly translate to a STRIPS action model, preventing efficient search with state-of-the-art classical planners. The Cube-Space AE (Asai & Muise, 2020) addresses this issue with the Back-To-Logit technique (BTL; Fig. 2, bottom left), which modifies APPLY. Latent state transitions learned by BTL guarantee that the actions and the transitions satisfy the STRIPS state transition rule s′ = (s \ DEL(a)) ∪ ADD(a), thus enabling a direct translation from neural network weights to the PDDL modeling language. Details of the network, the translation method, and the proof can be found in the appendix, Sec. A.3.
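To make the STRIPS semantics above concrete, here is a minimal Python sketch of ground-action applicability and the successor rule s′ = (s \ DEL(a)) ∪ ADD(a). The tuple-based proposition encoding and all names are illustrative assumptions, not part of Latplan or FOSAE++.

```python
# Minimal sketch of ground STRIPS semantics (illustrative encoding).
# A proposition is a tuple like ("have", "I", "food"); a state is a
# frozenset of propositions.
from typing import FrozenSet, Tuple

Prop = Tuple[str, ...]
State = FrozenSet[Prop]

def applicable(state: State, pre: State) -> bool:
    # A ground action is applicable iff PRE(a) ⊆ s.
    return pre <= state

def apply_action(state: State, add: State, delete: State) -> State:
    # Successor state: a(s) = (s \ DEL(a)) ∪ ADD(a).
    return (state - delete) | add

# Grounding eat(x1, x2) to eat(I, food):
s = frozenset({("have", "I", "food")})
pre = frozenset({("have", "I", "food")})
add = frozenset({("full", "I")})
delete = frozenset({("have", "I", "food")})

if applicable(s, pre):
    s = apply_action(s, add, delete)
# s == {("full", "I")}: "I am full but the food is gone."
```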
This work presents FOSAE++, an end-to-end system capable of producing "lifted" action models given only bounding-box annotations of image pairs taken before and after an unknown action is executed. Building on recent work in the space, the primary contribution of this work is the generation of lifted PDDL action rules. To accomplish this, the authors introduce a novel 'params' function that uses the Gumbel-Softmax function to implement a differentiable mechanism for selecting which entities are relevant to the current action, and feeds them into new 'bind' and 'unbind' functions that select the corresponding elements of the tensor predicting their relevance. Overall, this work is a meaningful contribution toward generating lifted action models without directly labeled data.
This paper addresses the problem of learning a dynamics model directly from raw sensory inputs. The authors propose an unsupervised end-to-end model that can perform high-level task planning on raw observations. This work extends Asai et al. (2019; 2020) with improved symbol generation and lifted PDDL output. The authors follow the experimental setup of prior work, where three artificial environments (Blocksworld, MNIST 8-puzzle, and Sokoban) are used for planning.
SP:78f30ff42b38782a096376e39364151da28d1812
Linear Convergent Decentralized Optimization with Compression
1 INTRODUCTION. Distributed optimization solves the following optimization problem

x* := argmin_{x∈R^d} [ f(x) := (1/n) Σ_{i=1}^n f_i(x) ]   (1)

with n computing agents and a communication network. Each f_i(x): R^d → R is a local objective function of agent i, typically defined on data D_i stored at that agent. The data distributions {D_i} can be heterogeneous depending on the application, such as in federated learning. The variable x ∈ R^d often represents model parameters in machine learning. A distributed optimization algorithm seeks an optimal solution that minimizes the overall objective function f(x) collectively. According to the communication topology, existing algorithms can be conceptually categorized into centralized and decentralized ones. Specifically, centralized algorithms require global communication between agents (through central agents or parameter servers), while decentralized algorithms only require local communication between connected agents and are therefore more widely applicable. In both paradigms, the computation can be relatively fast with powerful computing devices; efficient communication is the key to improving algorithm efficiency and system scalability, especially when the network bandwidth is limited. In recent years, various communication compression techniques, such as quantization and sparsification, have been developed to reduce communication costs. Notably, extensive studies (Seide et al., 2014; Alistarh et al., 2017; Bernstein et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Mishchenko et al., 2019; Tang et al., 2019b; Liu et al., 2020) have utilized gradient compression to significantly boost communication efficiency for centralized optimization. They enable efficient large-scale optimization while maintaining convergence rates and practical performance comparable to their non-compressed counterparts. This great success suggests the potential and significance of communication compression in decentralized algorithms. While extensive attention has been paid to centralized optimization, communication compression is relatively less studied for decentralized algorithms, because the algorithm design and analysis are more challenging in order to cover general communication topologies. There are recent efforts pushing this research direction. For instance, DCD-SGD and ECD-SGD (Tang et al., 2018a) introduce difference compression and extrapolation compression to reduce model compression error. Reisizadeh et al. (2019a;b) introduce QDGD and QuanTimed-DSGD to achieve exact convergence with small stepsizes. DeepSqueeze (Tang et al., 2019a) directly compresses the local model and compensates the compression error in the next iteration. CHOCO-SGD (Koloskova et al., 2019; 2020) presents a novel quantized gossip algorithm that reduces compression error by difference compression and preserves the model average. Nevertheless, most existing works focus on the compression of primal-only algorithms, i.e., they reduce to DGD (Nedic & Ozdaglar, 2009; Yuan et al., 2016) or P-DSGD (Lian et al., 2017). They are unsatisfying in terms of convergence rate, stability, and the capability to handle heterogeneous data. Part of the reason is that they inherit the drawback of DGD-type algorithms, whose convergence is slow in heterogeneous data scenarios where the data distributions differ significantly from agent to agent.
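As an illustration of the quantization-style compression operators mentioned above, the following is a hedged sketch of an unbiased stochastic quantizer in the spirit of QSGD (Alistarh et al., 2017). The number of levels and the normalization are illustrative choices, not the specific operator used by any of the cited methods.

```python
# Sketch of an unbiased stochastic quantizer: E[Q(x)] = x.
import numpy as np

def stochastic_quantize(x: np.ndarray, levels: int = 4) -> np.ndarray:
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return np.zeros_like(x)
    scaled = np.abs(x) / norm * levels   # each entry now lies in [0, levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower             # round up with this probability
    rounded = lower + (np.random.rand(*x.shape) < prob_up)
    return np.sign(x) * rounded * norm / levels
```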
In the decentralized optimization literature, it has been proved that primal-dual algorithms can achieve faster convergence rates and better support heterogeneous data (Ling et al., 2015; Shi et al., 2015; Li et al., 2019; Yuan et al., 2020). However, it is unknown whether communication compression is feasible for primal-dual algorithms and how fast the convergence can be with compression. In this paper, we attempt to bridge this gap by investigating communication compression for primal-dual decentralized algorithms. Our major contributions can be summarized as:
• We delineate two key challenges in the algorithm design for communication compression in decentralized optimization, i.e., data heterogeneity and compression error, and, motivated by primal-dual algorithms, we propose a novel decentralized algorithm with compression, LEAD.
• We prove that for LEAD, a constant stepsize in the range (0, 2/(µ + L)] is sufficient to ensure linear convergence for strongly convex and smooth objective functions. To the best of our knowledge, LEAD is the first linearly convergent decentralized algorithm with compression. Moreover, LEAD provably works with unbiased compression of arbitrary precision.
• We further prove that if stochastic gradients are used, LEAD converges linearly to an O(σ²) neighborhood of the optimum with a constant stepsize. LEAD is also able to achieve exact convergence to the optimum with diminishing stepsizes.
• Extensive experiments on convex problems validate our theoretical analyses, and an empirical study on training deep neural nets shows that LEAD is applicable to nonconvex problems. LEAD achieves state-of-the-art computation and communication efficiency in all experiments and significantly outperforms the baselines on heterogeneous data. Moreover, LEAD is robust to parameter settings and requires little effort for parameter tuning.

2 RELATED WORKS. Decentralized optimization can be traced back to the work by Tsitsiklis et al. (1986). DGD (Nedic & Ozdaglar, 2009) is the most classical decentralized algorithm. It is intuitive and simple but converges slowly due to the diminishing stepsize needed to obtain the optimal solution (Yuan et al., 2016). Its stochastic version, D-PSGD (Lian et al., 2017), has been shown to be effective for training nonconvex deep learning models. Algorithms based on primal-dual formulations or gradient tracking have been proposed to eliminate the convergence bias of DGD-type algorithms and improve the convergence rate, such as D-ADMM (Mota et al., 2013), DLM (Ling et al., 2015), EXTRA (Shi et al., 2015), NIDS (Li et al., 2019), D2 (Tang et al., 2018b), Exact Diffusion (Yuan et al., 2018), OPTRA (Xu et al., 2020), DIGing (Nedic et al., 2017), GSGT (Pu & Nedić, 2020), etc. Recently, communication compression was applied to decentralized settings by Tang et al. (2018a), who propose two algorithms, DCD-SGD and ECD-SGD, which require compression of high accuracy and are not stable with aggressive compression. Reisizadeh et al. (2019a;b) introduce QDGD and QuanTimed-DSGD, which achieve exact convergence with small stepsizes, but the convergence is slow. DeepSqueeze (Tang et al., 2019a) compensates the compression error into the compression of the next iteration. Motivated by quantized average consensus algorithms, such as (Carli et al., 2010), the quantized gossip algorithm CHOCO-Gossip
(Koloskova et al., 2019) converges linearly to the consensual solution. Combining CHOCO-Gossip and D-PSGD leads to a decentralized algorithm with compression, CHOCO-SGD, which converges sublinearly under strong convexity and gradient boundedness assumptions. Its nonconvex variant is further analyzed in (Koloskova et al., 2020). A new compression scheme using the modulo operation is introduced in (Lu & De Sa, 2020) for decentralized optimization. A general algorithmic framework aiming to maintain the linear convergence of distributed optimization under compressed communication is considered in (Magnússon et al., 2020); it requires a contractive property that is not satisfied by many decentralized algorithms, including the algorithm in this paper.

3 ALGORITHM. We first introduce the notations and definitions used in this work. We use bold upper-case letters such as X to denote matrices and bold lower-case letters such as x to denote vectors. Let 1 and 0 be the vectors of all ones and all zeros, respectively; their dimensions will be provided when necessary. Given two matrices X, Y ∈ R^{n×d}, we define their inner product as ⟨X, Y⟩ = tr(X^⊤Y) and the norm as ‖X‖ = √⟨X, X⟩. We further define ⟨X, Y⟩_P = tr(X^⊤PY) and ‖X‖_P = √⟨X, X⟩_P for any symmetric positive semidefinite matrix P ∈ R^{n×n}. For simplicity, we mostly use matrix notation in this work. For instance, each agent i holds an individual estimate x_i ∈ R^d of the global variable x ∈ R^d. Let X^k and ∇F(X^k) be the collections of {x_i^k}_{i=1}^n and {∇f_i(x_i^k)}_{i=1}^n, defined as

X^k = [x_1^k, ..., x_n^k]^⊤ ∈ R^{n×d},  ∇F(X^k) = [∇f_1(x_1^k), ..., ∇f_n(x_n^k)]^⊤ ∈ R^{n×d}.  (2)

We use ∇F(X^k; ξ^k) to denote the stochastic approximation of ∇F(X^k). With these notations, the update X^{k+1} = X^k − η∇F(X^k; ξ^k) means that x_i^{k+1} = x_i^k − η∇f_i(x_i^k; ξ_i^k) for all i. In this paper, we need the average of all rows of X^k and ∇F(X^k), so we define X̄^k = (1^⊤X^k)/n and ∇F̄(X^k) = (1^⊤∇F(X^k))/n. They are row vectors, and we take a transpose when we need a column vector. The pseudoinverse of a matrix M is denoted M†. The largest, i-th largest, and smallest nonzero eigenvalues of a symmetric matrix M are λ_max(M), λ_i(M), and λ_min(M).

Assumption 1 (Mixing matrix). The connected network G = {V, E} consists of a node set V = {1, 2, ..., n} and an undirected edge set E. The primitive, symmetric, doubly stochastic matrix W = [w_ij] ∈ R^{n×n} encodes the network structure such that w_ij = 0 if nodes i and j are not connected and cannot exchange information.

Assumption 1 implies that −1 < λ_n(W) ≤ λ_{n−1}(W) ≤ ... ≤ λ_2(W) < λ_1(W) = 1 and W1 = 1 (Xiao & Boyd, 2004; Shi et al., 2015). The matrix multiplication X^{k+1} = WX^k describes that agent i takes a weighted sum from its neighbors and itself, i.e., x_i^{k+1} = Σ_{j∈N_i∪{i}} w_ij x_j^k, where N_i denotes the neighbors of agent i.

3.1 THE PROPOSED ALGORITHM. The proposed algorithm, LEAD, for solving problem (1) is shown in Alg. 1 in matrix notation for conciseness. We will refer to its line numbers in the analysis. A complete algorithm description from the agent's perspective can be found in Appendix A. The motivation behind Alg. 1 is to achieve two goals: (a) consensus (x_i^k − (X̄^k)^⊤ → 0) and (b) convergence ((X̄^k)^⊤ → x*). We first discuss how goal (a) leads to goal (b) and then explain how LEAD fulfills goal (a).
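For intuition, the following minimal sketch shows one mixing step X^{k+1} = WX^k on a 4-agent ring; the particular W is an illustrative choice satisfying Assumption 1.

```python
# One gossip/mixing round: each agent averages its own and neighbors' rows.
import numpy as np

# Symmetric doubly stochastic mixing matrix for a 4-agent ring (illustrative).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
X = np.random.randn(4, 3)   # row i is agent i's local estimate x_i

X_next = W @ X              # local communication only
# 1^T W = 1^T, so the row average (the global mean) is preserved.
assert np.allclose(X_next.mean(axis=0), X.mean(axis=0))
```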
In essence, LEAD runs an approximate SGD globally and reduces to exact SGD under consensus. One key property of LEAD is 1_{n×1}^⊤ D^k = 0, regardless of the compression error in Ŷ^k. It holds because, at initialization, we require D^1 = (I − W)Z for some Z ∈ R^{n×d} (e.g., D^1 = 0_{n×d}), and the update of D^k ensures D^k ∈ Range(I − W) for all k, while 1_{n×1}^⊤(I − W) = 0, as we will explain later. Therefore, multiplying (1/n)1_{n×1}^⊤ on both sides of Line 7 leads to a global average view of Alg. 1:

X̄^{k+1} = X̄^k − η∇F̄(X^k; ξ^k),  (3)

which does not contain the compression error. Note that this is an approximate SGD step because, as shown in (2), the gradient ∇F(X^k; ξ^k) is not evaluated on a globally synchronized model X̄^k. However, if the solution converges to the consensus solution, i.e., x_i^k − (X̄^k)^⊤ → 0, then E_{ξ^k}[∇F̄(X^k; ξ^k) − ∇f(X̄^k; ξ^k)] → 0 and (3) gradually reduces to exact SGD.

Algorithm 1 LEAD
Input: stepsize η, parameters (α, γ), X^0, H^1, D^1 = (I − W)Z for any Z
Output: X^K or (1/n) Σ_{i=1}^n X_i^K
1: H_w^1 = WH^1
2: X^1 = X^0 − η∇F(X^0; ξ^0)
3: for k = 1, 2, ..., K − 1 do
4:   Y^k = X^k − η∇F(X^k; ξ^k) − ηD^k
5:   Ŷ^k, Ŷ_w^k, H^{k+1}, H_w^{k+1} = COMM(Y^k, H^k, H_w^k)
6:   D^{k+1} = D^k + (γ/2η)(Ŷ^k − Ŷ_w^k)
7:   X^{k+1} = X^k − η∇F(X^k; ξ^k) − ηD^{k+1}
8: end for
9: procedure COMM(Y, H, H_w)
10:   Q = COMPRESS(Y − H)
11:   Ŷ = H + Q
12:   Ŷ_w = H_w + WQ
13:   H = (1 − α)H + αŶ
14:   H_w = (1 − α)H_w + αŶ_w
15:   return Ŷ, Ŷ_w, H, H_w
16: end procedure

With the establishment of how consensus leads to convergence, the obstacle becomes how to achieve consensus under local communication and compression. This requires addressing two issues, namely data heterogeneity and compression error. To deal with these issues, existing algorithms, such as DCD-SGD, ECD-SGD, QDGD, DeepSqueeze, Moniqua, and CHOCO-SGD, need a diminishing stepsize, or a constant but small stepsize that depends on the total number of iterations. However, these choices unavoidably cause slower convergence and bring in the difficulty of parameter tuning. In contrast, LEAD takes a different way to solve these issues, as explained below.

Data heterogeneity. It is common in distributed settings that there exists data heterogeneity among agents, especially in real-world applications where different agents collect data from different scenarios. In other words, we generally have f_i(x) ≠ f_j(x) for i ≠ j. The optimality condition of problem (1) gives 1_{n×1}^⊤∇F(X*) = 0, where X* = [x*, ..., x*]^⊤ is a consensual and optimal solution. Data heterogeneity and the optimality condition imply that there exist at least two agents i and j such that ∇f_i(x*) ≠ 0 and ∇f_j(x*) ≠ 0. As a result, the simple D-PSGD algorithm cannot converge to the consensual and optimal solution, since X* ≠ WX* − ηE_ξ∇F(X*; ξ), even when the stochastic gradient variance is zero.

Gradient correction. Primal-dual algorithms and gradient tracking algorithms are able to converge much faster than DGD-type algorithms by handling the data heterogeneity issue, as introduced in Section 2. Specifically, LEAD is motivated by the design of the primal-dual algorithm NIDS (Li et al., 2019), and the relation becomes clear if we consider the two-step reformulation of NIDS adopted in (Li & Yan, 2019):

D^{k+1} = D^k + ((I − W)/2η)(X^k − η∇F(X^k) − ηD^k),  (4)
X^{k+1} = X^k − η∇F(X^k) − ηD^{k+1},  (5)

where X^k and D^k represent the primal and dual variables, respectively.
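For reference, here is a minimal NumPy sketch of the uncompressed NIDS reformulation in (4)-(5); grad is assumed to be a callable stacking the local gradients row-wise, and all names are illustrative.

```python
# One NIDS iteration, eqs. (4)-(5), without compression (illustrative sketch).
import numpy as np

def nids_step(X, D, W, grad, eta):
    n = W.shape[0]
    Y = X - eta * grad(X) - eta * D                 # auxiliary quantity
    D_next = D + (np.eye(n) - W) @ Y / (2 * eta)    # dual update, eq. (4)
    X_next = X - eta * grad(X) - eta * D_next       # primal update, eq. (5)
    return X_next, D_next
```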
The dual variable D^k plays the role of gradient correction. As k → ∞, we expect D^k → −∇F(X*), and X^k will converge to X* via the update in (5), since D^{k+1} corrects the nonzero gradient ∇F(X^k) asymptotically. The key design of Alg. 1 is to apply compression to the auxiliary variable defined as Y^k = X^k − η∇F(X^k) − ηD^k. This design ensures that the dual variable D^k lies in Range(I − W), which is essential for convergence. Moreover, it achieves implicit error compensation, as we explain later. To stabilize the algorithm under the inexact dual update, we introduce a parameter γ to control the stepsize of the dual update. Therefore, if we ignore the details of the compression, Alg. 1 can be concisely written as

Y^k = X^k − η∇F(X^k; ξ^k) − ηD^k  (6)
D^{k+1} = D^k + (γ/2η)(I − W)Ŷ^k  (7)
X^{k+1} = X^k − η∇F(X^k; ξ^k) − ηD^{k+1}  (8)

where Ŷ^k represents the compression of Y^k and ∇F(X^k; ξ^k) denotes the stochastic gradients. Nevertheless, how to compress the communication and how fast the convergence can be under compression error are unknown. In the following, we propose to carefully control the compression error by difference compression and error compensation, such that the inexact dual update (Line 6) and primal update (Line 7) can still guarantee convergence, as proved in Section 4.

Compression error. Different from existing works, which typically compress the primal variable X^k or its difference, LEAD first constructs an intermediate variable Y^k and applies compression to obtain its coarse representation Ŷ^k, as shown in the procedure COMM(Y, H, H_w):
• Compress the difference between Y and the state variable H into Q;
• Q is encoded into a low-bit representation, which enables the efficient local communication step Ŷ_w = H_w + WQ. This is the only communication step in each iteration;
• Each agent recovers its estimate Ŷ by Ŷ = H + Q, and we have Ŷ_w = WŶ;
• States H and H_w are updated based on Ŷ and Ŷ_w, respectively. We have H_w = WH.
By this procedure, we expect that when both Y^k and H^k converge to X*, the compression error vanishes asymptotically, due to the assumption we make on the compression operator in Assumption 2.

Remark 1. Note that difference compression is also applied in DCD-PSGD (Tang et al., 2018a) and CHOCO-SGD (Koloskova et al., 2019), but their state update is a simple integration of the compressed difference. We find this update is usually too aggressive and causes instability, as shown in our experiments. Therefore, we adopt a momentum update H = (1 − α)H + αŶ motivated by DIANA (Mishchenko et al., 2019), which reduces the compression error for gradient compression in centralized optimization.

Implicit error compensation. On the other hand, even if the compression error exists, LEAD essentially compensates for the error in the inexact dual update (Line 6), making the algorithm more stable and robust. To illustrate how it works, let E^k = Ŷ^k − Y^k denote the compression error and e_i^k be its i-th row. The update of D^k gives

D^{k+1} = D^k + (γ/2η)(Ŷ^k − Ŷ_w^k) = D^k + (γ/2η)(I − W)Y^k + (γ/2η)(E^k − WE^k),

where −WE^k indicates that agent i spreads the total compression error −Σ_{j∈N_i∪{i}} w_ji e_i^k = −e_i^k to all agents, and E^k indicates that each agent compensates this error locally by adding e_i^k back. This error compensation also explains why the global view in (3) does not involve compression error. Remark 2.
Note that in LEAD, the compression error is compensated into the model X^{k+1} through Lines 6 and 7, such that the gradient computation in the next iteration is aware of the compression error. This is subtly but importantly different from the error compensation or error feedback in (Seide et al., 2014; Wu et al., 2018; Stich et al., 2018; Karimireddy et al., 2019; Tang et al., 2019b; Liu et al., 2020; Tang et al., 2019a), where the error is stored in memory and only compensated after the gradient computation and before the compression. Remark 3. The proposed algorithm, LEAD in Alg. 1, recovers NIDS (Li et al., 2019), D2 (Tang et al., 2018b), and Exact Diffusion (Yuan et al., 2018). These connections are established in Appendix B.
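As a sketch of the COMM procedure in Alg. 1 (lines 10-15), the following mirrors the difference compression and the momentum state updates; compress stands for any operator satisfying Assumption 2, such as the quantizer sketched earlier. This is an illustration of the listed pseudocode, not the authors' released implementation.

```python
# Sketch of COMM(Y, H, Hw) from Alg. 1 (lines 10-15).
import numpy as np

def comm(Y, H, Hw, W, compress, alpha):
    Q = compress(Y - H)                          # line 10: compress the difference
    Y_hat = H + Q                                # line 11: local estimate of Y
    Yw_hat = Hw + W @ Q                          # line 12: only communication step
    H_next = (1 - alpha) * H + alpha * Y_hat     # line 13: momentum state update
    Hw_next = (1 - alpha) * Hw + alpha * Yw_hat  # line 14
    return Y_hat, Yw_hat, H_next, Hw_next        # line 15
```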
The paper introduces a novel decentralized algorithm with compression (LEAD) that achieves a linear convergence rate in the strongly convex setting. The main idea is to apply and communicate the compression of an auxiliary variable instead of the primal or dual iterates. Convergence analysis is provided for both deterministic and stochastic variants. Experiments show state-of-the-art performance.
SP:940f5374980f33ee94784370eccd403e49c99ac3
Linear Convergent Decentralized Optimization with Compression
1 INTRODUCTION . Distributed optimization solves the following optimization problem x∗ : = argmin x∈Rd [ f ( x ) : = 1 n n∑ i=1 fi ( x ) ] ( 1 ) with n computing agents and a communication network . Each fi ( x ) : Rd → R is a local objective function of agent i and typically defined on the data Di settled at that agent . The data distributions { Di } can be heterogeneous depending on the applications such as in federated learning . The variable x ∈ Rd often represents model parameters in machine learning . A distributed optimization algorithm seeks an optimal solution that minimizes the overall objective function f ( x ) collectively . According to the communication topology , existing algorithms can be conceptually categorized into centralized and decentralized ones . Specifically , centralized algorithms require global communication between agents ( through central agents or parameter servers ) . While decentralized algorithms only require local communication between connected agents and are more widely applicable than centralized ones . In both paradigms , the computation can be relatively fast with powerful computing devices ; efficient communication is the key to improve algorithm efficiency and system scalability , especially when the network bandwidth is limited . In recent years , various communication compression techniques , such as quantization and sparsification , have been developed to reduce communication costs . Notably , extensive studies ( Seide et al. , 2014 ; Alistarh et al. , 2017 ; Bernstein et al. , 2018 ; Stich et al. , 2018 ; Karimireddy et al. , 2019 ; Mishchenko et al. , 2019 ; Tang et al. , 2019b ; Liu et al. , 2020 ) have utilized gradient compression to significantly boost communication efficiency for centralized optimization . They enable efficient large-scale optimization while maintaining comparable convergence rates and practical performance with their non-compressed counterparts . This great success has suggested the potential and significance of communication compression in decentralized algorithms . While extensive attention has been paid to centralized optimization , communication compression is relatively less studied in decentralized algorithms because the algorithm design and analysis are more challenging in order to cover general communication topologies . There are recent efforts trying to push this research direction . For instance , DCD-SGD and ECD-SGD ( Tang et al. , 2018a ) introduce difference compression and extrapolation compression to reduce model compression error . ( Reisizadeh et al. , 2019a ; b ) introduce QDGD and QuanTimed-DSGD to achieve exact convergence with small stepsize . DeepSqueeze ( Tang et al. , 2019a ) directly compresses the local model and compensates the compression error in the next iteration . CHOCO-SGD ( Koloskova et al. , 2019 ; 2020 ) presents a novel quantized gossip algorithm that reduces compression error by difference compression and preserves the model average . Nevertheless , most existing works focus on the compression of primal-only algorithms , i.e. , reduce to DGD ( Nedic & Ozdaglar , 2009 ; Yuan et al. , 2016 ) or P-DSGD ( Lian et al. , 2017 ) . They are unsatisfying in terms of convergence rate , stability , and the capability to handle heterogeneous data . Part of the reason is that they inherit the drawback of DGD-type algorithms , whose convergence rate is slow in heterogeneous data scenarios where the data distributions are significantly different from agent to agent . 
In the literature of decentralized optimization , it has been proved that primal-dual algorithms can achieve faster converge rates and better support heterogeneous data ( Ling et al. , 2015 ; Shi et al. , 2015 ; Li et al. , 2019 ; Yuan et al. , 2020 ) . However , it is unknown whether communication compression is feasible for primal-dual algorithms and how fast the convergence can be with compression . In this paper , we attempt to bridge this gap by investigating the communication compression for primal-dual decentralized algorithms . Our major contributions can be summarized as : • We delineate two key challenges in the algorithm design for communication compression in decentralized optimization , i.e. , data heterogeneity and compression error , and motivated by primal-dual algorithms , we propose a novel decentralized algorithm with compression , LEAD . • We prove that for LEAD , a constant stepsize in the range ( 0 , 2/ ( µ + L ) ] is sufficient to ensure linear convergence for strongly convex and smooth objective functions . To the best of our knowledge , LEAD is the first linear convergent decentralized algorithm with compression . Moreover , LEAD provably works with unbiased compression of arbitrary precision . • We further prove that if the stochastic gradient is used , LEAD converges linearly to the O ( σ2 ) neighborhood of the optimum with constant stepsize . LEAD is also able to achieve exact convergence to the optimum with diminishing stepsize . • Extensive experiments on convex problems validate our theoretical analyses , and the empirical study on training deep neural nets shows that LEAD is applicable for nonconvex problems . LEAD achieves state-of-art computation and communication efficiency in all experiments and significantly outperforms the baselines on heterogeneous data . Moreover , LEAD is robust to parameter settings and needs minor effort for parameter tuning . 2 RELATED WORKS . Decentralized optimization can be traced back to the work by Tsitsiklis et al . ( 1986 ) . DGD ( Nedic & Ozdaglar , 2009 ) is the most classical decentralized algorithm . It is intuitive and simple but converges slowly due to the diminishing stepsize that is needed to obtain the optimal solution ( Yuan et al. , 2016 ) . Its stochastic version D-PSGD ( Lian et al. , 2017 ) has been shown effective for training nonconvex deep learning models . Algorithms based on primal-dual formulations or gradient tracking are proposed to eliminate the convergence bias in DGD-type algorithms and improve the convergence rate , such as D-ADMM ( Mota et al. , 2013 ) , DLM ( Ling et al. , 2015 ) , EXTRA ( Shi et al. , 2015 ) , NIDS ( Li et al. , 2019 ) , D2 ( Tang et al. , 2018b ) , Exact Diffusion ( Yuan et al. , 2018 ) , OPTRA ( Xu et al. , 2020 ) , DIGing ( Nedic et al. , 2017 ) , GSGT ( Pu & Nedić , 2020 ) , etc . Recently , communication compression is applied to decentralized settings by Tang et al . ( 2018a ) . It proposes two algorithms , i.e. , DCD-SGD and ECD-SGD , which require compression of high accuracy and are not stable with aggressive compression . Reisizadeh et al . ( 2019a ; b ) introduce QDGD and QuanTimed-DSGD to achieve exact convergence with small stepsize and the convergence is slow . DeepSqueeze ( Tang et al. , 2019a ) compensates the compression error to the compression in the next iteration . Motivated by the quantized average consensus algorithms , such as ( Carli et al. , 2010 ) , the quantized gossip algorithm CHOCO-Gossip ( Koloskova et al. 
, 2019 ) converges linearly to the consensual solution . Combining CHOCO-Gossip and D-PSGD leads to a decentralized algorithm with compression , CHOCO-SGD , which converges sublinearly under the strong convexity and gradient boundedness assumptions . Its nonconvex variant is further analyzed in ( Koloskova et al. , 2020 ) . A new compression scheme using the modulo operation is introduced in ( Lu & De Sa , 2020 ) for decentralized optimization . A general algorithmic framework aiming to maintain the linear convergence of distributed optimization under compressed communication is considered in ( Magnússon et al. , 2020 ) . It requires a contractive property that is not satisfied by many decentralized algorithms including the algorithm in this paper . 3 ALGORITHM . We first introduce notations and definitions used in this work . We use bold upper-case letters such as X to define matrices and bold lower-case letters such as x to define vectors . Let 1 and 0 be vectors with all ones and zeros , respectively . Their dimensions will be provided when necessary . Given two matrices X , Y ∈ Rn×d , we define their inner product as 〈X , Y〉 = tr ( X > Y ) and the norm as ‖X‖ = √ 〈X , X〉 . We further define 〈X , Y〉P = tr ( X > PY ) and ‖X‖P = √ 〈X , X〉 P for any given symmetric positive semidefinite matrix P ∈ Rn×n . For simplicity , we will majorly use the matrix notation in this work . For instance , each agent i holds an individual estimate xi ∈ Rd of the global variable x ∈ Rd . Let Xk and∇F ( Xk ) be the collections of { xki } ni=1 and { ∇fi ( xki ) } ni=1 which are defined below : Xk = [ xk1 , . . . , x k n ] > ∈ Rn×d , ∇F ( Xk ) = [ ∇f1 ( xk1 ) , . . . , ∇fn ( xkn ) ] > ∈ Rn×d . ( 2 ) We use ∇F ( Xk ; ξk ) to denote the stochastic approximation of ∇F ( Xk ) . With these notations , the update Xk+1 = Xk − η∇F ( Xk ; ξk ) means that xk+1i = xki − η∇fi ( xki ; ξki ) for all i . In this paper , we need the average of all rows in Xk and ∇F ( Xk ) , so we define Xk = ( 1 > Xk ) /n and ∇F ( Xk ) = ( 1 > ∇F ( Xk ) ) /n . They are row vectors , and we will take a transpose if we need a column vector . The pseudoinverse of a matrix M is denoted as M† . The largest , ith-largest , and smallest nonzero eigenvalues of a symmetric matrix M are λmax ( M ) , λi ( M ) , and λmin ( M ) . Assumption 1 ( Mixing matrix ) . The connected network G = { V , E } consists of a node set V = { 1 , 2 , . . . , n } and an undirected edge set E . The primitive symmetric doubly-stochastic matrix W = [ wij ] ∈ Rn×n encodes the network structure such that wij = 0 if nodes i and j are not connected and can not exchange information . Assumption 1 implies that −1 < λn ( W ) ≤ λn−1 ( W ) ≤ · · ·λ2 ( W ) < λ1 ( W ) = 1 and W1 = 1 ( Xiao & Boyd , 2004 ; Shi et al. , 2015 ) . The matrix multiplication Xk+1 = WXk describes that agent i takes a weighted sum from its neighbors and itself , i.e. , xk+1i = ∑ j∈Ni∪ { i } wijx k j , where Ni denotes the neighbors of agent i . 3.1 THE PROPOSED ALGORITHM . The proposed algorithm LEAD to solve problem ( 1 ) is showed in Alg . 1 with matrix notations for conciseness . We will refer to the line number in the analysis . A complete algorithm description from the agent ’ s perspective can be found in Appendix A . The motivation behind Alg . 1 is to achieve two goals : ( a ) consensus ( xki − ( Xk ) > → 0 ) and ( b ) convergence ( ( Xk ) > → x∗ ) . We first discuss how goal ( a ) leads to goal ( b ) and then explain how LEAD fulfills goal ( a ) . 
In essence , LEAD runs the approximate SGD globally and reduces to the exact SGD under consensus . One key property for LEAD is 1 > n×1D k = 0 , regardless of the compression error in Ŷk . It holds because that for the initialization , we require D1 = ( I −W ) Z for some Z ∈ Rn×d , e.g. , D1 = 0n×d , and that the update of Dk ensures Dk ∈ Range ( I − W ) for all k and 1 > n×1 ( I −W ) = 0 as we will explain later . Therefore , multiplying ( 1/n ) 1 > n×1 on both sides of Line 7 leads to a global average view of Alg . 1 : Xk+1 = Xk − η∇F ( Xk ; ξk ) , ( 3 ) which doesn ’ t contain the compression error . Note that this is an approximate SGD step because , as shown in ( 2 ) , the gradient ∇F ( Xk ; ξk ) is not evaluated on a global synchronized model Xk . However , if the solution converges to the consensus solution , i.e. , xki − ( Xk ) > → 0 , then Eξk [ ∇F ( Xk ; ξk ) −∇f ( Xk ; ξk ) ] → 0 and ( 3 ) gradually reduces to exact SGD . Algorithm 1 LEAD Input : Stepsize η , parameter ( α , γ ) , X0 , H1 , D1 = ( I−W ) Z for any Z Output : XK or 1/n ∑n i=1 X K i 1 : H1w = WH 1 2 : X1 = X0 − η∇F ( X0 ; ξ0 ) 3 : for k = 1 , 2 , · · · , K − 1 do 4 : Yk = Xk − η∇F ( Xk ; ξk ) − ηDk 5 : Ŷk , Ŷkw , H k+1 , Hk+1w = COMM ( Y k , Hk , Hkw ) 6 : Dk+1 = Dk + γ2η ( Ŷ k − Ŷkw ) 7 : Xk+1 = Xk − η∇F ( Xk ; ξk ) − ηDk+1 8 : end for 9 : procedure COMM ( Y , H , Hw ) 10 : Q = COMPRESS ( Y −H ) 11 : Ŷ = H+Q 12 : Ŷw = Hw +WQ 13 : H = ( 1− α ) H+ αŶ 14 : Hw = ( 1− α ) Hw + αŶw 15 : Return : Ŷ , Ŷw , H , Hw 16 : end procedure With the establishment of how consensus leads to convergence , the obstacle becomes how to achieve consensus under local communication and compression challenges . It requires addressing two issues , i.e. , data heterogeneity and compression error . To deal with these issues , existing algorithms , such as DCD-SGD , ECD-SGD , QDGD , DeepSqueeze , Moniqua , and CHOCO-SGD , need a diminishing or constant but small stepsize depending on the total number of iterations . However , these choices unavoidably cause slower convergence and bring in the difficulty of parameter tuning . In contrast , LEAD takes a different way to solve these issues , as explained below . Data heterogeneity . It is common in distributed settings that there exists data heterogeneity among agents , especially in real-world applications where different agents collect data from different scenarios . In other words , we generally have fi ( x ) 6= fj ( x ) for i 6= j . The optimality condition of problem ( 1 ) gives 1 > n×1∇F ( X∗ ) = 0 , where X∗ = [ x∗ , · · · , x∗ ] is a consensual and optimal solution . The data heterogeneity and optimality condition imply that there exist at least two agents i and j such that ∇fi ( x∗ ) 6= 0 and ∇fj ( x∗ ) 6= 0 . As a result , a simple D-PSGD algorithm can not converge to the consensual and optimal solution as X∗ 6= WX∗ − ηEξ∇F ( X∗ ; ξ ) even when the stochastic gradient variance is zero . Gradient correction . Primal-dual algorithms or gradient tracking algorithms are able to convergence much faster than DGD-type algorithms by handling the data heterogeneity issue , as introduced in Section 2 . Specifically , LEAD is motivated by the design of primal-dual algorithm NIDS ( Li et al. , 2019 ) and the relation becomes clear if we consider the two-step reformulation of NIDS adopted in ( Li & Yan , 2019 ) : Dk+1 = Dk + I−W 2η ( Xk − η∇F ( Xk ) − ηDk ) , ( 4 ) Xk+1 = Xk − η∇F ( Xk ) − ηDk+1 , ( 5 ) where Xk and Dk represent the primal and dual variables respectively . 
The dual variable Dk plays the role of gradient correction . As k → ∞ , we expect Dk → −∇F ( X∗ ) and Xk will converge to X∗ via the update in ( 5 ) since Dk+1 corrects the nonzero gradient ∇F ( Xk ) asymptotically . The key design of Alg . 1 is to provide compression for the auxiliary variable defined as Yk = Xk − η∇F ( Xk ) − ηDk . Such design ensures that the dual variable Dk lies in Range ( I −W ) , which is essential for convergence . Moreover , it achieves the implicit error compression as we will explain later . To stabilize the algorithm with inexact dual update , we introduce a parameter γ to control the stepsize in the dual update . Therefore , if we ignore the details of the compression , Alg . 1 can be concisely written as Yk = Xk − η∇F ( Xk ; ξk ) − ηDk ( 6 ) Dk+1 = Dk + γ 2η ( I−W ) Ŷk ( 7 ) Xk+1 = Xk − η∇F ( Xk ; ξk ) − ηDk+1 ( 8 ) where Ŷk represents the compression of Yk and F ( Xk ; ξk ) denote the stochastic gradients . Nevertheless , how to compress the communication and how fast the convergence we can attain with compression error are unknown . In the following , we propose to carefully control the compression error by difference compression and error compensation such that the inexact dual update ( Line 6 ) and primal update ( Line 7 ) can still guarantee the convergence as proved in Section 4 . Compression error . Different from existing works , which typically compress the primal variable Xk or its difference , LEAD first construct an intermediate variable Yk and apply compression to obtain its coarse representation Ŷk as shown in the procedure COMM ( Y , H , Hw ) : • Compress the difference between Y and the state variable H as Q ; • Q is encoded into the low-bit representation , which enables the efficient local communication step Ŷw = Hw +WQ . It is the only communication step in each iteration . • Each agent recovers its estimate Ŷ by Ŷ = H+Q and we have Ŷw = WŶ . • States H and Hw are updated based on Ŷ and Ŷw , respectively . We have Hw = WH . By this procedure , we expect when both Yk and Hk converge to X∗ , the compression error vanishes asymptotically due to the assumption we make for the compression operator in Assumption 2 . Remark 1 . Note that difference compression is also applied in DCD-PSGD ( Tang et al. , 2018a ) and CHOCO-SGD ( Koloskova et al. , 2019 ) , but their state update is the simple integration of the compressed difference . We find this update is usually too aggressive and cause instability as showed in our experiments . Therefore , we adopt a momentum update H = ( 1−α ) H+αŶ motivated from DIANA ( Mishchenko et al. , 2019 ) , which reduces the compression error for gradient compression in centralized optimization . Implicit error compensation . On the other hand , even if the compression error exists , LEAD essentially compensates for the error in the inexact dual update ( Line 6 ) , making the algorithm more stable and robust . To illustrate how it works , let Ek = Ŷk −Yk denote the compression error and eki be its i-th row . The update of D k gives Dk+1 = Dk + γ 2η ( Ŷk − Ŷkw ) = Dk + γ 2η ( I−W ) Yk + γ 2η ( Ek −WEk ) where −WEk indicates that agent i spreads total compression error − ∑ j∈Ni∪ { i } wjie k i = −eki to all agents and Ek indicates that each agent compensates this error locally by adding eki back . This error compensation also explains why the global view in ( 3 ) doesn ’ t involve compression error . Remark 2 . 
Note that in LEAD , the compression error is compensated into the model X^{k+1} through Lines 6 and 7 such that the gradient computation in the next iteration is aware of the compression error . This is a subtle but important difference from the error compensation or error feedback in ( Seide et al. , 2014 ; Wu et al. , 2018 ; Stich et al. , 2018 ; Karimireddy et al. , 2019 ; Tang et al. , 2019b ; Liu et al. , 2020 ; Tang et al. , 2019a ) , where the error is stored in memory and only compensated after the gradient computation and before the compression . Remark 3 . The proposed algorithm , LEAD in Alg . 1 , recovers NIDS ( Li et al. , 2019 ) , D2 ( Tang et al. , 2018b ) , and Exact Diffusion ( Yuan et al. , 2018 ) . These connections are established in Appendix B .
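To make Alg . 1 concrete , the following minimal NumPy sketch simulates one LEAD iteration ( Lines 4–7 ) together with the COMM procedure . This is an illustrative sketch , not the authors' implementation : the toy top-k compressor , the function signatures , and all variable names are our own assumptions , and any compressor used in practice should satisfy Assumption 2 of the paper .

```python
import numpy as np

def topk_compress(m, k=2):
    """Toy top-k compressor: keep the k largest-magnitude entries in each row.
    (Assumption for illustration; a practical compressor must satisfy Assumption 2.)"""
    out = np.zeros_like(m)
    idx = np.argsort(-np.abs(m), axis=1)[:, :k]
    rows = np.arange(m.shape[0])[:, None]
    out[rows, idx] = m[rows, idx]
    return out

def comm(y, h, hw, W, alpha):
    """COMM procedure (Lines 9-16): difference compression + momentum state update."""
    q = topk_compress(y - h)                  # Line 10: compress the difference
    y_hat = h + q                             # Line 11: local estimate
    y_hat_w = hw + W @ q                      # Line 12: only q is communicated
    h = (1 - alpha) * h + alpha * y_hat       # Lines 13-14: momentum state updates
    hw = (1 - alpha) * hw + alpha * y_hat_w
    return y_hat, y_hat_w, h, hw

def lead_step(x, d, h, hw, grad, W, eta, gamma, alpha):
    """One LEAD iteration (Lines 4-7); grad(x) returns the stacked (stochastic)
    local gradients, one row per agent."""
    g = grad(x)
    y = x - eta * g - eta * d                         # Line 4: auxiliary variable
    y_hat, y_hat_w, h, hw = comm(y, h, hw, W, alpha)  # Line 5
    d = d + gamma / (2 * eta) * (y_hat - y_hat_w)     # Line 6: inexact dual update
    x = x - eta * g - eta * d                         # Line 7: primal update
    return x, d, h, hw
```

Note that with D initialized to zeros , the dual iterates remain in Range(I − W) in this sketch , since Ŷ^k − Ŷ^k_w = (I − W)Ŷ^k whenever Ŷ_w = WŶ .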
This paper introduces a novel algorithm for decentralized optimization when nodes can only communicate a compressed signal with their neighbors. Unlike most decentralized methods with compression that are inspired by primal methods (DGD type methods), this paper introduces a new primal-dual algorithm with compression. The proposed method's main idea is borrowed from the NIDS algorithm, which converges linearly when the local loss functions are smooth and strongly convex. As the proposed LEAD method is based on primal-dual methods, it succeeds in improving the sublinear rate of primal-based methods. To the best of my knowledge, this is the first decentralized method that achieves a linear convergence rate in the setting that nodes use compressed signals.
SP:940f5374980f33ee94784370eccd403e49c99ac3
Action Guidance: Getting the Best of Sparse Rewards and Shaped Rewards for Real-time Strategy Games
Training agents using Reinforcement Learning with sparse rewards is often difficult ( Pathak et al. , 2017 ) . First , due to the sparsity of the reward , the agent often spends the majority of the training time doing inefficient exploration , sometimes not even reaching the first sparse reward during the entirety of its training . Second , even if the agent has successfully retrieved some sparse rewards , performing proper credit assignment is challenging among the complex sequences of actions that have led to these sparse rewards . Reward shaping ( Ng et al. , 1999 ) is a widely-used technique designed to mitigate this problem . It works by providing intermediate rewards that lead the agent towards the sparse rewards , which are the true objective . For example , the sparse reward for a game of Chess is naturally +1 for winning , -1 for losing , and 0 for drawing , while a possible shaped reward might be +1 for every enemy piece the agent takes . One of the critical drawbacks of reward shaping is that the agent sometimes learns to optimize for the shaped reward instead of the real objective . Using the Chess example , the agent might learn to take as many enemy pieces as possible while still losing the game . A good shaped reward achieves a balance between letting the agent find the sparse reward and being too shaped ( so that the agent learns to just maximize the shaped reward ) , but this balance can be difficult to find . In this paper , we present a novel technique called action guidance that successfully trains the agent to eventually optimize over sparse rewards while maintaining most of the sample efficiency that comes with reward shaping . It works by constructing a main policy that only learns from the sparse reward function R_M and some auxiliary policies that learn from the shaped reward functions R_{A1} , R_{A2} , . . . , R_{An} . During training , we use the same rollouts to train the main and auxiliary policies , and we initially set a high probability for the main policy to take action guidance from the auxiliary policies , that is , the main policy will execute actions sampled from the auxiliary policies . Then the main policy and auxiliary policies are updated via off-policy policy gradient . As the training goes on , the main policy gets more independent and executes more actions sampled from its own policy . Auxiliary policies learn from shaped rewards and therefore make the training sample-efficient , while the main policy learns from the original sparse reward and therefore makes sure that the agent will eventually optimize over the true objective . We can see action guidance as combining reward shaping to train auxiliary policies , interleaved with a sort of imitation learning to guide the main policy from these auxiliary policies . We examine action guidance in the context of a real-time strategy ( RTS ) game simulator called µRTS for three sparse-reward tasks of varying difficulty . For each task , we compare the performance of training agents with the sparse reward function R_M , a shaped reward function R_{A1} , and action guidance with a single auxiliary policy learning from R_{A1} . The main highlights are : Action guidance is sample-efficient . Since the auxiliary policy learns from R_{A1} and the main policy takes action guidance from the auxiliary policy during the initial stage of training , the main policy is more likely to discover the first sparse reward quickly and learn more efficiently .
Empirically , action guidance reaches almost the same level of sample efficiency as reward shaping in all three tasks tested . The true objective is being optimized . During the course of training , the main policy has never seen the shaped rewards . This ensures that the main policy , which is the agent we are really interested in , is always optimizing against the true objective and is less biased by the shaped rewards . As an example , Figure 1 shows that the main policy trained with action guidance eventually learns to win the game as fast as possible , even though it has only learned from the match outcome reward ( +1 for winning , -1 for losing , and 0 for drawing ) . In contrast , the agents trained with reward shaping learn more diverse sets of behaviors which result in high shaped reward . To support further research in this field , we make our source code available at GitHub ( https://github.com/anonymous-research-code/action-guidance ) , as well as all the metrics , logs , and recorded videos ( link blinded for peer review ) . 1 RELATED WORK . In this section , we briefly summarize the popular techniques proposed to address the challenge of sparse rewards . Reward Shaping . Reward shaping is a common technique where the human designer uses domain knowledge to define additional intermediate rewards for the agents . Ng et al . ( 1999 ) show that a slightly more restricted form of state-based reward shaping has better theoretical properties for preserving the optimal policy . Transfer and Curriculum Learning . Sometimes learning the target tasks with sparse rewards is too challenging , and it is preferable to learn some easier tasks first . Transfer learning leverages this idea and trains agents with some easier source tasks and then transfers the knowledge through a value function ( Taylor et al. , 2007 ) or reward shaping ( Svetlik et al. , 2017 ) . Curriculum learning further extends transfer learning by automatically designing and choosing a full sequence of source tasks ( i.e . a curriculum ) ( Narvekar & Stone , 2018 ) . Imitation Learning . Alternatively , it is possible to directly provide examples of human demonstrations or expert replays for the agents to mimic via Behavior Cloning ( BC ) ( Bain & Sammut , 1995 ) , which uses supervised learning to learn a policy given the state-action pairs from expert replays . In addition , Inverse Reinforcement Learning ( IRL ) ( Abbeel & Ng , 2004 ) recovers a reward function from expert demonstrations that can then be used to train agents . Curiosity-driven Learning . Curiosity-driven learning seeks to design intrinsic reward functions ( Burda et al. , 2019 ) using metrics such as prediction errors ( Houthooft et al. , 2016 ) and “ visit counts ” ( Bellemare et al. , 2016 ; Lopes et al. , 2012 ) . These intrinsic rewards encourage the agents to explore unseen states . Goal-oriented Learning . In certain tasks , it is possible to describe a goal state and use it in conjunction with the current state as input ( Schaul et al. , 2015 ) . Hindsight experience replay ( HER ) ( Andrychowicz et al. , 2017 ) develops better utilization of existing data in experience replay by replaying each episode with different goals . HER is shown to be an effective technique in sparse-reward tasks . Hierarchical Reinforcement Learning ( HRL ) .
If the target task is difficult to learn directly , it is also possible to hierarchically structure the task using experts ' knowledge and train hierarchical agents , which generally involves a main policy that learns abstract goals , time , and actions , as well as auxiliary policies that learn primitive actions and specific goals ( Dietterich , 2000 ) . HRL is especially popular in RTS games with combinatorial action spaces ( Pang et al. , 2019 ; Ye et al. , 2020 ) . The most closely related work is perhaps Scheduled Auxiliary Control ( SAC-X ) ( Riedmiller et al. , 2018 ) , which is an HRL algorithm that trains auxiliary policies to perform primitive actions with shaped rewards and a main policy to schedule the use of auxiliary policies with sparse rewards . However , our approach differs in the treatment of the main policy . Instead of learning to schedule auxiliary policies , our main policy learns to act in the entire action space by taking action guidance from the auxiliary policies . There are two intuitive benefits to our approach since our main policy learns in the full action space . First , during policy evaluation our main policy does not have to commit to a particular auxiliary policy to perform actions for a fixed number of time steps , as is usually done in SAC-X . Second , learning in the full action space means the main policy is less likely to suffer from the definition of hand-crafted sub-tasks , which could be incomplete or biased . 2 BACKGROUND . We consider the Reinforcement Learning problem in a Markov Decision Process ( MDP ) denoted as ( S , A , P , ρ_0 , r , γ , T ) , where S is the state space , A is the discrete action space , P : S × A × S → [0 , 1] is the state transition probability , ρ_0 : S → [0 , 1] is the initial state distribution , r : S × A → R is the reward function , γ is the discount factor , and T is the maximum episode length . A stochastic policy π_θ : S × A → [0 , 1] , parameterized by a parameter vector θ , assigns a probability value to an action given a state . The goal is to maximize the expected discounted return of the policy :

E_τ [ ∑_{t=0}^{T−1} γ^t r_t ] ,

where τ is the trajectory ( s_0 , a_0 , r_0 , s_1 , . . . , s_{T−1} , a_{T−1} , r_{T−1} ) and s_0 ∼ ρ_0 , s_t ∼ P(· | s_{t−1} , a_{t−1}) , a_t ∼ π_θ(· | s_t) , r_t = r(s_t , a_t) . Policy Gradient Algorithms . The core idea behind policy gradient algorithms is to obtain the policy gradient ∇_θ J of the expected discounted return with respect to the policy parameters θ . Performing gradient ascent θ ← θ + ∇_θ J therefore maximizes the expected discounted reward . Earlier work proposes the following policy gradient estimate of the objective J ( Sutton & Barto , 2018 ) :

g_{policy , θ} = E_{τ ∼ π_θ} [ ∑_{t=0}^{T−1} ∇_θ log π_θ(a_t | s_t) G_t ] ,

where G_t = ∑_{k=0}^{∞} γ^k r_{t+k} denotes the discounted return following time t . This gradient estimate , however , suffers from large variance ( Sutton & Barto , 2018 ) and the following gradient estimate is suggested instead :

g_{policy , θ} = E_τ [ ∇_θ ∑_{t=0}^{T−1} log π_θ(a_t | s_t) A(τ , V , t) ] ,

where A(τ , V , t) is the Generalized Advantage Estimation ( GAE ) ( Schulman et al. , 2015 ) , which measures “ how good a_t is compared to the usual actions ” , and V : S → R is the state-value function . 3 ACTION GUIDANCE . The key idea behind action guidance is to create a main policy that trains on the sparse rewards , and to create some auxiliary policies that are trained on shaped rewards .
During the initial stages of training , the main policy has a high probability to take action guidance from the auxiliary policies , that is , the main policy can execute actions sampled from the auxiliary policies rather than from its own policy . As the training goes on , this probability decreases , and the main policy executes more actions sampled from its own policy . During training , the main and auxiliary policies are updated via off-policy policy gradient . Our use of auxiliary policies makes the training sample-efficient , and our use of the main policy , which only sees its own sparse reward , makes sure that the agent will eventually optimize over the true objective of sparse rewards . In a way , action guidance can be seen as training agents using shaped rewards , while having the main policy learn by imitating them . Specifically , let us define M as the MDP that the main policy learns from and A = { A_1 , A_2 , ... , A_k } be a set of auxiliary MDPs that the auxiliary policies learn from . In our constructions , M and A share the same state , observation , and action space . However , the reward function for M is R_M , which is the sparse reward function , and the reward functions for A are R_{A1} , ... , R_{Ak} , which are the shaped reward functions . For each of these MDPs E ∈ S = { M } ∪ A above , let us initialize a policy π_{θ_E} parameterized by parameters θ_E , respectively . Furthermore , let us use π_S = { π_{θ_E} | E ∈ S } to denote the set of these initialized policies . At each timestep t , let us use some exploration strategy S that selects a policy π_b ∈ π_S to sample an action a_t given s_t . At the end of the episode , each policy π_θ ∈ π_S can be updated via its off-policy policy gradient ( Degris et al. , 2012 ; Levine et al. , 2020 ) :

E_{τ ∼ π_{θ_b}} [ ( ∏_{t=0}^{T−1} π_θ(a_t | s_t) / π_{θ_b}(a_t | s_t) ) ∑_{t=0}^{T−1} ∇_θ log π_θ(a_t | s_t) A(τ , V , t) ] (1)

When π_θ = π_{θ_b} , the gradient in Equation 1 amounts to an on-policy policy gradient update for π_θ . Otherwise , it amounts to an off-policy policy gradient update for π_θ .
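The following short Python sketch illustrates the two ingredients just described : an annealed exploration strategy that decides whether the behavior policy is the main policy or an auxiliary one , and the importance weight from Equation 1 . It is only a sketch under our own assumptions : the linear annealing schedule and the policy objects with a prob(action , state) method are hypothetical , not the authors' implementation .

```python
import numpy as np

def select_behavior_policy(main_policy, aux_policies, step, anneal_steps, rng):
    """Exploration strategy S: early in training, the behavior policy is an
    auxiliary policy with high probability; the probability decays so the main
    policy gradually takes over. The linear schedule is our own assumption."""
    p_aux = max(0.0, 1.0 - step / anneal_steps)
    if rng.random() < p_aux:
        return aux_policies[rng.integers(len(aux_policies))]
    return main_policy

def importance_weight(pi_theta, pi_b, trajectory):
    """Product of per-step ratios pi_theta(a_t|s_t) / pi_b(a_t|s_t) from Eq. (1).
    `trajectory` is a list of (state, action) pairs; policies are assumed to
    expose a hypothetical prob(action, state) method."""
    w = 1.0
    for s, a in trajectory:
        w *= pi_theta.prob(a, s) / pi_b.prob(a, s)
    return w
```

When the behavior policy happens to be the policy being updated , every ratio equals one and the weight reduces to 1 , recovering the on-policy case noted above .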
This paper introduces an approach called action guidance, designed to address issues in more standard applications of reward shaping. The main idea of the approach is that there are two different kinds of agents: auxiliary agents that learn from shaped reward functions alone, and main agent(s) that learn only from the actual sparse rewards. The authors made use of a simplified RTS domain and demonstrated that their approach outperformed a more naive shaped-reward approach. In addition, they demonstrated an ablation study on positive learning optimization.
SP:c0924c1c4d4132e6d80e24103c243780438f8a89
The paper introduces an approach for learning policies across multiple MDPs and using those policies to improve learning performance on the task that the agent designer cares about. The approach assumes that a set of MDPs is provided to the learning agent, and that all of the MDPs have the same underlying task but with different reward densities (i.e., some of these MDPs have shaped rewards, and thus are faster to learn from). The approach operates by training the main agent to imitate the actions chosen by the other agents that are trained on the MDPs with shaped reward functions.
SP:c0924c1c4d4132e6d80e24103c243780438f8a89
Learning Hyperbolic Representations for Unsupervised 3D Segmentation
There exists a need for unsupervised 3D segmentation on complex volumetric data , particularly when annotation ability is limited or discovery of new categories is desired . Using the observation that much of 3D volumetric data is innately hierarchical , we propose learning effective representations of 3D patches for unsupervised segmentation through a variational autoencoder ( VAE ) with a hyperbolic latent space and a proposed gyroplane convolutional layer , which better models the underlying hierarchical structure within a 3D image . We also introduce a hierarchical triplet loss and a multi-scale patch sampling scheme to embed relationships across varying levels of granularity . We demonstrate the effectiveness of our hyperbolic representations for unsupervised 3D segmentation on a hierarchical toy dataset , the BraTS whole tumor dataset , and cryogenic electron microscopy data . 1 INTRODUCTION . Recent advances in technology have greatly increased both the availability of 3D data and the need to process and learn from it . In particular , technologies such as magnetic resonance imaging and cryogenic electron microscopy ( cryo-EM ) have led to greater availability of 3D voxel data . Deep learning is a promising technique for processing such data , but producing annotations for 3D data can be extremely expensive , especially for richer tasks such as segmentation in dense voxel grids . In some cases , labels may also be impossible to produce due to the limitations of current knowledge , or may introduce bias if we want to conduct scientific discovery . Unsupervised learning , which does not require annotations , is a promising approach for overcoming these limitations . In this work , we tackle the challenging problem of unsupervised segmentation on complex 3D voxel data by addressing the essential challenge of representation learning . We expand from prior literature in the hyperbolic domain , which conducts classification on simple data , to the task of segmentation in 3D images , which requires significantly more representation discriminability . In order to learn effective representations , we need to capture the structure of our input data . We observe that 3D images often have inherent hierarchical structure : as a biomedical example , a cryo-EM tomogram of a cell has a hierarchy that at the highest level comprises the entire cell ; at a finer level comprises organelles such as the mitochondria and nucleus ; and at an even finer level comprises sub-structures such as the nucleolus of a nucleus or proteins within organelles . For downstream analysis , we are typically interested in the unsupervised discovery and segmentation of structures spanning multiple levels of hierarchy . However , prior work on representation learning for unsupervised 3D segmentation does not explicitly model hierarchical structure between different regions of a 3D image . We argue that this hampers the ability to leverage hierarchical relationships to improve segmentation in complex 3D images . Our key insight is that we can utilize a hyperbolic embedding space to learn effective hierarchical representations of voxel regions in 3D images . Hyperbolic representations have been proposed as a continuous way to represent hierarchical data , as trees can be embedded in hyperbolic space with arbitrarily low error ( Sarkar , 2011 ) . These methods have shown promise for modeling data types such as natural language word taxonomies ( Nickel & Kiela , 2017 ; 2018 ) , graphs ( Nickel & Kiela , 2017 ; Mathieu et al.
, 2019 ; Ovinnikov , 2019 ; Chami et al. , 2019 ) , as well as simple MNIST ( LeCun et al. , 2010 ) image data for classification ( Mathieu et al. , 2019 ) . To the best of our knowledge , our work is the first to introduce learning hyperbolic representations to capture hierarchical structure among subregions of complex 3D images , and to utilize the learned hyperbolic representations to perform a complex computer vision task such as segmentation . Our approach for learning hyperbolic representations of 3D voxel grid data is based on several key innovations . First , to handle larger and more complex 3D data such as biomedical images , we propose a hyperbolic 3D convolutional VAE along with a new gyroplane convolutional layer that respects hyperbolic geometry . Second , we enhance our VAE training objective with a novel self-supervised hierarchical triplet loss that helps our model learn hierarchical structure within the VAE ’ s hyperbolic latent space . Finally , since our goal in segmentation is to learn hierarchy within voxel regions of 3D input , we present a multi-scale sampling scheme such that our 3D VAE can simultaneously embed hierarchical relationships across varying levels of granularity . In summary , our key contributions are as follows : • We introduce a hyperbolic 3D convolutional VAE with a novel gyroplane convolutional layer that scales the learning of hyperbolic representations to complex 3D data . • We propose a multi-scale sampling scheme and hierarchical triplet loss in order to encode hierarchical structure in the latent space and perform 3D unsupervised segmentation . • We demonstrate the effectiveness of our approach through experiments on a synthetic 3D toy dataset , the Brain Tumor Segmentation ( BraTS ) dataset ( Menze et al. , 2014 ; Bakas et al. , 2017 ; 2018 ) , and cryo-EM data . 2 RELATED WORK . Segmentation on 3D voxel data Since 3D voxel grids are dense , computer vision tasks such as supervised segmentation are commonly performed using deep learning architectures with 3D convolutional layers ( Chen et al. , 2016 ; Dou et al. , 2017 ; Hesamian et al. , 2019 ; Zheng et al. , 2019 ) . However , due to the challenges of obtaining voxel-level segmentations in 3D , there has been significant effort in finding semi-supervised approaches , including using labels only from several fully annotated 2D slices of an input volume ( Çiçek et al. , 2016 ) , using a smaller set of segmentations with joint segmentation and registration ( Xu & Niethammer , 2019 ) , and using one segmented input in conjunction with other unlabelled data ( Zhao et al. , 2019 ) . Unsupervised approaches for 3D segmentation are useful not only for further reducing the manual annotation effort required , but also for scientific discovery tasks where we lack the sufficient knowledge to provide representative training examples for structures of interest . Moriya et al . ( 2018 ) extends to 3D data an iterative approach of feature learning followed by clustering ( Yang et al. , 2016 ) . Nalepa et al . ( 2020 ) uses a 3D convolutional autoencoder architecture and performs clustering of the latent representations . Another approach , ( Dalca et al. , 2018 ) , uses a network pre-trained on manual segmentations from a separate dataset to perform unsupervised segmentation of 3D biomedical images . However , this limits applicability to areas where we already have a dataset with manual annotations and makes it unsuitable for unbiased unsupervised discovery . Gur et al . 
( 2019 ) and Kitrungrotsakul et al . ( 2019 ) developed unsupervised methods for 3D segmentation of vessel structures , but these are specialized and do not generalize to the segmentation of other structures . Beyond unsupervised 3D segmentation , there has been work such as Ji et al . ( 2019 ) that performs unsupervised 2D segmentation based on a mutual information objective , and Caron et al . ( 2018 ) , which proposes using the clustered output of an encoder as pseudo-labels . While these methods can be applied to 2D slices of a 3D volume to perform 3D segmentation , they generally suffer limitations due to insufficient modeling of the 3D spatial information . None of the aforementioned approaches explicitly model hierarchical structure , which is the main focus of our work . Hyperbolic representations A recent line of work has employed hyperbolic space to model hierarchical structure , with the intuition that tree structures can be naturally embedded into continuous hyperbolic space ( Nickel & Kiela , 2017 ) . Several works have proposed hyperbolic variational autoencoders ( VAEs ) as an unsupervised method to learn hyperbolic representations . Ovinnikov ( 2019 ) proposes a Wasserstein autoencoder on the Poincaré ball model of hyperbolic geometry . Nagano et al . ( 2019 ) proposes a VAE on the hyperboloid model of hyperbolic geometry where the last layer of the encoder is an exponential map , and derives a reparametrisable sampling scheme for the wrapped normal distribution , which they use for the prior and posterior . Mathieu et al . ( 2019 ) proposes a VAE on the Poincaré ball model of hyperbolic geometry . In addition to having the last layer of the encoder be an exponential map , Mathieu et al . ( 2019 ) also proposes to have the first layer of the decoder be the gyroplane layer proposed by Ganea et al . ( 2018 ) in order to better handle the geometry of the hyperbolic latent space , and applies their model to MNIST image classification . Our work differs by introducing an approach for learning hyperbolic representations that models the hierarchy between sub-volumes of complex 3D images , and uses a novel hierarchical triplet loss and sampling scheme to capture relationships among multiple levels of granularity in a given input . In addition , a related field of study has sought to generalize traditional Euclidean neural networks or their components to non-Euclidean spaces . Ganea et al . ( 2018 ) proposes hyperbolic feed-forward and recurrent architectures based on the theory of gyrovector spaces . Building on this work , Chami et al . ( 2019 ) propose a hyperbolic graph convolutional network . Other works such as Bachmann et al . ( 2019 ) ; Becigneul & Ganea ( 2019 ) ; Gu et al . ( 2019 ) have also proposed learning with a product space of manifolds . Our work generalizes a layer of Ganea et al . ( 2018 ) in order to create and use a new hyperbolic convolutional layer , which we call the gyroplane convolutional layer . 3 PRELIMINARIES . Hyperbolic Space Hyperbolic space is a non-Euclidean space with constant negative curvature . Curvature is a measure of the deviation of the geometry from a flat plane ( Chami et al. , 2019 ) . There are five equivalent models of hyperbolic geometry . Following previous work ( Mathieu et al. , 2019 ; Ganea et al. , 2018 ; Lou et al. , 2020 ) , we use the Poincaré ball model . Hyperbolic space can be considered the continuous version of trees ( Nickel & Kiela , 2017 ) , making it a natural choice for embedding hierarchical data . 
Trees can be embedded in the Poincaré ball with arbitrarily low error ( Sarkar , 2011 ) , and like the leaves of a tree , the area of a disc in the Poincaré ball increases exponentially with the radius . Unlike trees , hyperbolic space is smooth , permitting deep learning . Poincaré ball model of hyperbolic geometry . The Poincaré ball ( of curvature c = −1 ) is the open ball of radius 1 centered at the origin , equipped with the metric tensor g_p = (λ_x)^2 g_e , where the conformal factor λ_x = 2 / (1 − ||x||^2) and g_e is the Euclidean metric tensor ( i.e. , the usual dot product ) . Formally , this makes the Poincaré ball a Riemannian manifold . The distance d_p between points on the Poincaré ball is given by :

d_p(x , y) = cosh^{−1} ( 1 + 2 ||x − y||^2 / ( (1 − ||x||^2)(1 − ||y||^2) ) ) (1)

The exponential and logarithm maps are a useful way to map from Euclidean space to the Poincaré ball and vice versa ( in general , to map from a tangent space to a Riemannian manifold and vice versa ) . On the Poincaré ball , the exponential and logarithm maps have the closed forms

exp_z(v) = z ⊕ ( tanh( λ_z ||v|| / 2 ) v / ||v|| ) , log_z(y) = (2 / λ_z) tanh^{−1}( ||−z ⊕ y|| ) (−z ⊕ y) / ||−z ⊕ y|| (2)

where ⊕ denotes Möbius addition , which was first introduced by Ungar ( 2001 ) as a way to define vector operations on hyperbolic space ( see Appendix ) .
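As a concrete reference for these operations , here is a minimal PyTorch sketch of Möbius addition , the Poincaré distance of Eq . (1) , and the exponential and logarithm maps of Eq . (2) for curvature c = −1 . The clamping constants are our own numerical-stability assumptions ; Möbius addition uses the standard closed form from gyrovector-space theory rather than anything specific to this paper .

```python
import torch

def mobius_add(x, y):
    """Mobius addition on the Poincare ball (c = -1), applied along the last dim."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    den = 1 + 2 * xy + x2 * y2
    return num / den

def poincare_dist(x, y):
    """Geodesic distance on the Poincare ball, Eq. (1)."""
    x2 = (x * x).sum(-1)
    y2 = (y * y).sum(-1)
    d2 = ((x - y) ** 2).sum(-1)
    return torch.acosh(1 + 2 * d2 / ((1 - x2) * (1 - y2)))

def exp_map(z, v):
    """Exponential map at base point z, Eq. (2); lambda_z = 2 / (1 - ||z||^2)."""
    lam = 2 / (1 - (z * z).sum(-1, keepdim=True))
    vnorm = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return mobius_add(z, torch.tanh(lam * vnorm / 2) * v / vnorm)

def log_map(z, y):
    """Logarithm map at base point z, Eq. (2)."""
    lam = 2 / (1 - (z * z).sum(-1, keepdim=True))
    w = mobius_add(-z, y)
    wnorm = w.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return (2 / lam) * torch.atanh(wnorm.clamp(max=1 - 1e-7)) * w / wnorm
```

In practice , exp_map at the encoder output and log_map before Euclidean layers are the two bridges between Euclidean activations and the hyperbolic latent space described above .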
The authors of this manuscript propose an unsupervised learning framework for 3D segmentation of biomedical images. Specifically, the proposed method learns effective representations for 3D patches using a variational autoencoder (VAE) with a hyperbolic latent space. Its main contribution lies in introducing a new unsupervised learning framework comprising a hyperbolic convolutional VAE and a hierarchical triplet loss. The work conducts experiments on a toy dataset, the Brain Tumor Segmentation dataset, and cryo-EM data. The experiments demonstrate competitive performance of the proposed method.
SP:d197f9ea345b135b417400d791002f18baad39e7
The paper considers learning hyperbolic representations for unsupervised 3D segmentation. Since the general task of producing annotations for 3D data can be expensive (e.g. for segmentation in dense voxel grids), this is an important problem. The paper proposes to learn hierarchical data structures (e.g. 3D biomedical images) with a hyperbolic variational autoencoder. The paper adapts different metric learning approaches, such as triplet loss and computing a Frechet mean on Riemannian manifolds for clustering.
SP:d197f9ea345b135b417400d791002f18baad39e7
One-class Classification Robust to Geometric Transformation
1 INTRODUCTION . One-class classification refers to the problem of identifying whether an input example belongs to a single target class ( in-class ) or to any novel class ( out-of-class ) . The main challenge of this task is that only in-class examples are available at training time . Thus , by using only positive examples , a model has to learn the decision boundary that distinguishes in-class examples from out-of-class examples , whose distribution is assumed to be unknown in practice . Early work on one-class classification mainly utilized kernel-based methods ( Schölkopf et al. , 2000 ; Tax & Duin , 2004 ) to find a hypersphere ( or hyperplane ) enclosing all training in-class examples , or density estimation techniques ( Parzen , 1962 ) to measure the likelihood of an input example . In the era of deep learning , numerous studies have tried to employ deep neural networks to effectively learn from high-dimensional data ( e.g. , images ) . Most of them aim to detect out-of-class examples based on density estimation , by adopting the architecture of autoencoders ( Ruff et al. , 2018 ; Zong et al. , 2018 ) or generative adversarial networks ( GANs ) ( Schlegl et al. , 2017 ; Zenati et al. , 2018 ) . Nevertheless , their supervision is not useful enough to capture the semantics of high-dimensional data for a target class , which eventually leads to limited performance . Recently , there have been several attempts to make use of self-supervised learning ( Golan & El-Yaniv , 2018 ; Hendrycks et al. , 2019 ; Bergman & Hoshen , 2020 ) for more informative supervision on the target class , and they have made a major breakthrough on this problem . They build a self-labeled image set by applying a bunch of geometric transformations to training images , then train a classifier to accurately predict the transformation applied to original input images . This approach achieved state-of-the-art performance for one-class classification even without modeling the latent distribution of in-class examples for density estimation . However , all the aforementioned methods are quite vulnerable to spatial variances within the images , because they were developed based on the assumption that in-class ( and out-of-class ) images have a fixed viewpoint . In particular , the existing self-supervised methods do not work at all for inputs with various viewpoints , in that their capability of predicting the geometric transformation relies on the fixed viewpoint . Note that humans usually recognize that the images of a target object with different viewpoints belong to the same class ; in this sense , one-class classifiers should also be robust to the viewpoint of input images . In other words , we need to ensure that geometrically-transformed in-class images are not identified as out-of-class , from the perspective that a geometric transformation ( e.g. , rotation & x , y-translation ) does not change the semantics ( i.e. , object class ) but only the viewpoint . The goal of our work is to propose an effective strategy that can circumvent the limitation of viewpoint sensitivity , without compromising the performance for images with a fixed viewpoint . We first present several evaluation settings for validating the robustness to flexible viewpoints , artificially introduced by geometric transformations . Then , we describe our proposed solution , termed GROC , which measures a conformity score indicating how confidently an input image matches one of the predefined ( anchor ) in-class transformations .
In this work , we offer two measures for the conformity score , namely the inner product similarity and the conditional likelihood , and show how they can be optimized using the training in-class images . Empirical experiments on the proposed evaluation scenarios show that GROC considerably outperforms all the other competing methods in terms of the robustness to geometric transformation . 2 PRELIMINARIES . 2.1 PROBLEM FORMULATION . Let X be the set of all kinds of images , and let X_in ⊆ X and X_out = X \ X_in be the sets of all in-class and out-of-class images , respectively . Given training in-class data X_in^tr ⊆ X_in , we consider the one-class classification problem which differentiates in-class and out-of-class data . The problem aims to build a classifier by using only the known in-class data for training . The classifier learns an in-class score function S_in(x) : X → R , where a higher score indicates that the input x is more likely to be in X_in . Based on the score , the classifier determines whether the input belongs to the in-class or not . 2.2 SELF-SUPERVISED LEARNING METHODS FOR ONE-CLASS CLASSIFICATION . Recently , self-supervised learning methods ( Golan & El-Yaniv , 2018 ; Hendrycks et al. , 2019 ; Bergman & Hoshen , 2020 ) have achieved state-of-the-art performance in one-class classification . For self-supervised learning , they first create a self-labeled dataset and use it to train a multi-class classifier . Concretely , let T = { T_0 , · · · , T_i , · · · , T_{K−1} } be a set of predefined ( anchor ) geometric transformations , where T_0(x) = x is the identity mapping and each transformation T_i is a composition of multiple unit transformations ( i.e. , rotation & x , y-translation ) . The self-labeled dataset consists of transformed images and their corresponding labels :

D_self = { ( T_i(x) , i ) | x ∈ X_in^tr , 0 ≤ i < K } , (1)

where T_i(·) is the i-th transformation operator and its label i is the transformation id of T_i(·) . Using the self-labeled dataset , these methods train a softmax classifier based on a multi-class classification loss ( i.e. , cross-entropy ) for discrimination among the transformations . For one-class classification , they define an in-class score under the assumption that a well-trained classifier would better predict the transformation for in-class images than for out-of-class images . In the end , the in-class score for an unseen image x is defined as the sum of softmax probabilities that its transformed images are correctly classified as their labels ( Golan & El-Yaniv , 2018 ; Bergman & Hoshen , 2020 ) :

S_in(x) = ∑_{i=0}^{K−1} p( y = i | T_i(x) ) , (2)

where p( y = i | T_i(x) ) is the softmax probability that T_i(x) is classified as the i-th transformation . The state-of-the-art method based on this self-supervised approach ( Hendrycks et al. , 2019 ) significantly improves the performance by formulating the classification task in a multi-label manner . Since each transformation is determined by a combination of unit transformations from three categories , namely rotation ∈ { 0° , 90° , 180° , 270° } , ( horizontal ) x-translation ∈ { −8 , 0 , +8 } , and ( vertical ) y-translation ∈ { −8 , 0 , +8 } , the unit transformations applied to an input image can be independently predicted for each category . Thus , they adopt a
The state-of-the-art method based on this self-supervised approach (Hendrycks et al., 2019) significantly improves performance by formulating the classification task in a multi-label manner. Since each transformation is determined by a combination of unit transformations from three categories¹ (i.e., rotation, (horizontal) x-translation, and (vertical) y-translation), the unit transformations applied to an input image can be predicted independently for each category. Thus, they adopt a softmax head for each transformation category and train the classifier to predict the degree of transformation within each category. The final in-class score is then the aggregate over all the softmax heads, each of which predicts the unit transformation applied to the input.

¹They build the set of transformations $\mathcal{T}$ by combining the following unit transformations: rotation ∈ {0°, 90°, 180°, 270°}, x-translation ∈ {−8, 0, +8}, and y-translation ∈ {−8, 0, +8}.

3 METHOD .

3.1 MOTIVATION .

The underlying idea of the self-supervised methods based on transformation classification is to learn discriminative features of in-class images in order to classify the various viewpoints produced by the geometric transformations. The precondition for this approach is that the viewpoint of the training images is always the same; otherwise the classifier cannot be trained, due to inconsistent supervision. At test time, however, input images can have viewpoints different from those appearing in the training images. We remark that images of the same object with different viewpoints belong to the same class, as humans usually recognize them; in this sense, it is desirable that in-class images with varying viewpoints be identified as in-class, not out-of-class. That is, robustness to geometric transformations should be considered in one-class classification. (Figure 1 illustrates this with images of a sea lion, ImageNet synset n02077923.) In this respect, the existing self-supervised methods fail to compute effective in-class scores for inputs with varying viewpoints. We observe that they produce undesirable in-class scores especially when the input image has the same (or a similar) viewpoint as one represented by the anchor transformations $\mathcal{T} \setminus \{T_0\}$. For example, suppose a classifier is trained on $D_{self}$ with the transformations $\mathcal{T}$ being the clockwise rotations {0°, 90°, 180°, 270°}. Given two images of sea lions $x'$ and $x''$, let $x'$ have the same viewpoint as the training images and $x''$ have the 90°-rotated viewpoint, which is equivalent to $T_1(x')$. As illustrated in Figure 1, the softmax probability of each correctly-labeled transformed image is high for the input $x'$ but low for $x''$. Consequently, the classifier cannot correctly identify $x''$ as in-class, even though it also comes from the target class. We point out that setting the target label of each transformed image to the applied transformation is no longer valid once the input viewpoint changes. A straightforward remedy is to augment the training dataset so that it covers various viewpoints of in-class images. Unfortunately, this data augmentation is not applicable, because it results in inconsistent supervision for the task of discriminating viewpoints, which is precisely the learning objective of the self-supervised methods. On the other hand, there exist several one-class classification methods (Ruff et al., 2018; Zong et al., 2018) that can adopt data augmentation; however, they cannot match the performance of the self-supervised methods even when all input images have a fixed viewpoint, as will be further discussed in Section 4. To sum up, we need a different strategy to develop a robust one-class classifier that works well even for input images with varying viewpoints.

3.2 PROPOSED SETUPS .

We first propose three evaluation setups for testing robustness to varying viewpoints: 1) fixed viewpoint, 2) anchor viewpoint, and 3) random viewpoint. We artificially introduce spatial variance (i.e., changes of viewpoint) into test images using geometric transformations.
Note that $X^{te}$ denotes the test data, which contains both in-class and out-of-class images.

Fixed viewpoint setup. In this setup, we consider only the fixed viewpoint used for training, as done in previous work. We do not change the viewpoint of the original test images: $X^{te}_{fv} = X^{te}$.

Anchor viewpoint setup. This setup is designed to verify robustness to the viewpoints induced by the anchor transformations. We build a test dataset as $X^{te}_{av} = \{T(x) \mid T \sim \mathcal{T}, x \in X^{te}\}$, where $T$ is randomly sampled from the set of anchor transformations $\mathcal{T}$ for each image $x$.

Random viewpoint setup. The random viewpoint setup further considers geometric transformations that are not included in the set of anchor transformations. We first define a superset of $\mathcal{T}$, denoted $\mathcal{T}^*$, including a large number of transformations with continuous degrees. A test dataset for this setup is built as $X^{te}_{rv} = \{T(x) \mid T \sim \mathcal{T}^*, x \in X^{te}\}$, where $T$ is sampled for each image $x$.

As a preliminary result, we plot the in-class score distributions for in-class and out-of-class test images, computed by the state-of-the-art self-supervised method (Hendrycks et al., 2019). In Figure 2, we observe that the score distributions of in-class and out-of-class images in $X^{te}_{fv}$ are clearly separated, which explains the strong performance on one-class classification. On the contrary, the two score distributions almost completely overlap for $X^{te}_{av}$ and $X^{te}_{rv}$, strongly indicating that the method fails to identify in-class images because of their varying viewpoints. We additionally investigate the performance drop for geometrically and non-geometrically transformed inputs. In Figure 2(d), it is clear that geometric transformations cause the self-supervised method to fail completely, while non-geometric transformations (e.g., brightness, contrast, sharpness, and color temperature) hardly degrade the final one-class classification performance.
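The three evaluation sets are easy to reproduce. The sketch below constructs them with torchvision; since the text does not fully pin down the continuous superset $\mathcal{T}^*$, the sampling ranges in `random_view` (continuous angles, integer pixel shifts up to ±8) are illustrative assumptions, and `test_images` is assumed to be an iterable of PIL images or image tensors.

```python
import random
import torchvision.transforms.functional as TF

ANGLES = [0, 90, 180, 270]   # anchor rotations in T
SHIFTS = [-8, 0, 8]          # anchor x/y translations in T (pixels)

def anchor_view(img):
    # Sample one anchor transformation T ~ T per image (anchor viewpoint).
    angle = random.choice(ANGLES)
    dx, dy = random.choice(SHIFTS), random.choice(SHIFTS)
    return TF.affine(img, angle=angle, translate=[dx, dy], scale=1.0, shear=0.0)

def random_view(img):
    # Sample T ~ T*: continuous rotation, translations beyond the anchors.
    angle = random.uniform(0.0, 360.0)
    dx, dy = random.randint(-8, 8), random.randint(-8, 8)
    return TF.affine(img, angle=angle, translate=[dx, dy], scale=1.0, shear=0.0)

def build_test_sets(test_images):
    x_fv = list(test_images)                       # fixed viewpoint: unchanged
    x_av = [anchor_view(x) for x in test_images]   # anchor viewpoint
    x_rv = [random_view(x) for x in test_images]   # random viewpoint
    return x_fv, x_av, x_rv
```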
This paper considers the deep one-class classification problem. Some recent state of the art in this area is built upon self-supervised learning methods that are trained to predict the rotation applied to a training image, and then use the success of rotation prediction on test images as an outlier score. The paper observes that, while successful on standard benchmarks, this strategy is not robust to unexpected image rotations at test time. Since humans are (presumably) able to exhibit rotation invariance at test time in one-class classification, this is considered a flaw in existing methods. To rectify this flaw, the paper proposes an anomaly score that is the maximum over all possible rotation predictions. The results show that the proposed method outperforms prior approaches when exposed to novel rotations at test time.
This paper presents a one-class classifier robust to geometrically-transformed inputs (GROC). A conformity score is proposed that measures how strongly an input image agrees with one of the predefined in-class transformations. Experiments show that the proposed method works well on 3 datasets for out-of-class detection and produces similar scores for in-class images under different transformations.
SP:70bb2ad8b8a46670e6ee60a6800656c4f2220ad0
Weights Having Stable Signs Are Important: Finding Primary Subnetworks and Kernels to Compress Binary Weight Networks
1 INTRODUCTION . Convolutional Neural Networks (CNNs) have achieved great success in many computer vision tasks such as image classification (Krizhevsky et al., 2012), object detection (Girshick et al., 2014), and semantic segmentation (Long et al., 2015). However, modern CNNs usually have a large number of parameters, imposing heavy memory and computation costs. To ease their deployment in resource-constrained environments, different types of neural network compression and acceleration techniques have been proposed in recent years, such as network pruning (Han et al., 2015; Li et al., 2017), network quantization (Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016), knowledge distillation (Ba & Caruana, 2014; Hinton et al., 2015), and efficient CNN architecture engineering and searching (Howard et al., 2017; Zhang et al., 2018b; Zoph & Le, 2017). Comparatively, network quantization is more commercially attractive, as it can both benefit specialized hardware accelerator designs (Sze et al., 2017) and be readily combined with other techniques for further compression and acceleration gains (Mishra & Marr, 2018; Han et al., 2016; Zhou et al., 2017). Quantization methods aim to approximate full-precision (32-bit floating-point) neural networks with low-precision (low-bit) ones. In particular, the extremely quantized models called Binarized Neural Networks (BNNs) (Courbariaux et al., 2015; 2016; Rastegari et al., 2016) force the weights, or even both weights and activations, to take 1-bit values (+1 and −1), bringing a 32× reduction in model size and allowing costly 32-bit floating-point multiplications to be replaced by much cheaper binary bit-wise operations. Because of this, how to train accurate BNNs, either post-training or from scratch, has attracted increasing attention. However, training BNNs poses a non-differentiability issue, as converting full-precision weights into binary values yields zero gradients. To combat this issue, most existing methods use the Straight-Through Estimator (STE). Although there are a few attempts (Achterhold et al., 2018; Chen et al., 2019; Bai et al., 2019; Hou et al., 2017) to learn BNNs without STE by using proximal gradient or meta-learning methods, they suffer from worse accuracy and heavier parameter tuning than STE-based methods. In STE-based methods, full-precision weights are retained during training, and the gradients w.r.t. them and their binarized counterparts are assumed to be the same. In the forward pass of training, the full-precision weights of the currently learnt model are quantized to binary values for the prediction loss calculation. In the backward pass, the gradients w.r.t. the full-precision weights, instead of the binary ones, are used for the model update. To compensate for the drastic information loss and train more accurate BNNs, most state-of-the-art STE-based methods follow the formulation of Rastegari et al. (2016), in which the binary weights are represented as a combination of scaling factors and weight signs to approximate the 32-bit floating-point weight values layer by layer, while also introducing many modifications. These modifications include, but are not limited to, expanding binary weights to have multiple binary bases (Lin et al., 2017; Guo et al., 2017), replacing hand-crafted scaling factors with learnable ones (Zhang et al., 2018a),
making an ensemble of multiple binary models (Zhu et al., 2019), searching for high-performance binary network architectures (Kim et al., 2020), and designing improved regularization objectives, optimizers, and activation functions (Cai et al., 2017; Liu et al., 2018; Helwegen et al., 2019; Martinez et al., 2020). A few works also try to better understand the training of BNNs with STE. In (Alizadeh et al., 2019), the authors evaluate some widely used tricks, showing that adapting the learning rate with a second-moment optimizer is crucial for training BNNs with STE-based methods, while other tricks such as weight and gradient clipping are less important. Bethge et al. (2019) show that commonly used techniques such as hand-crafted scaling factors and custom gradients are also not crucial. Sajad et al. (2019) demonstrate that learnable scaling factors combined with a modified sign function can enhance the accuracy of BNNs. Anderson & Berg (2018) offer an interpretation, in terms of high-dimensional geometry, of why binary models can approximate their full-precision references. Galloway et al. (2018) validate that BNNs have surprisingly improved robustness against some adversarial attacks compared to their full-precision counterparts. In this paper, we revisit the training of BNNs, particularly Binary Weight Networks (BWNs) with STE, from a new perspective, exploring structural weight behaviors during BWN training. Our main contributions are summarized as follows:

• We use two popular methods (Rastegari et al., 2016; Zhang et al., 2018a) for an empirical study, showing that both hand-crafted and learnable scaling factors are not that important, while the change of weight signs plays the key role in the training of BWNs, under settings using common techniques and tricks.

• More importantly, we observe two striking training phenomena: (1) the training of BWNs exhibits a process of seeking primary binary sub-networks whose weight signs are determined and fixed at an early training stage, akin to recent findings of the lottery ticket hypothesis (Frankle & Carbin, 2019) for training sparse neural networks; (2) binary kernels in the convolutional layers (Conv layers) of final BWNs tend to concentrate on a limited number of binary kernels, showing that binary weight networks may have the potential to be further compressed. This breaks the common understanding that representing each weight with a single bit pushes quantization to its compression limit.

• We propose a binary kernel quantization method to compress BWNs, yielding a new type of BWN called Quantized Binary-Kernel Networks (QBNs).

2 AN EMPIRICAL STUDY ON UNDERSTANDING BWNS' TRAINING .

In this section we briefly describe the BWNs used in our experiments, implementation details, scaling factors in BWNs, full-precision weight norms, weight signs, and sub-networks in BWNs.

2.1 DIFFERENT BINARY WEIGHT NETWORKS .

BWNs generally denote networks with binary weights, and several variants exist. Overall, they use $\alpha B$ to replace the full-precision weight $W$, where $B = \mathrm{sign}(W)$ and $\alpha$ is chosen to minimize $\|\alpha B - W\|$ in either a learnable or a calculated way. In the following experiments, we use the variant implemented in XNor-Net (Rastegari et al., 2016), denoted XNor-BWN, and the variant implemented in LQ-Net (Zhang et al., 2018a),
denoted LQ-BWN, which is the 1-bit weight, 32-bit activation version of LQ-Net. Other popular BWN methods, such as DoReFa-Net and BinaryConnect, are similar to these two. Both XNor-BWN and LQ-BWN use the STE framework; XNor-BWN uses hand-crafted, calculated scaling factors, while LQ-BWN uses learnable scaling factors.

2.2 IMPLEMENTATION DETAILS AND NOTATION .

Quantization: We directly use the open-source BWN code released by the authors, including XNor-BWN¹ and LQ-BWN².

Dataset and Network Structure: CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015) are used in our experiments. We use VGG-7 (Simonyan & Zisserman, 2015) and ResNet-20 (He et al., 2016) on CIFAR-10, and ResNet-18 on ImageNet. The structures are the same as the original ones.

Hyper-parameters: We use the same training parameters for each network. On CIFAR-10, each network is trained for 200 epochs; the learning rate is initially 0.02 and divided by 10 at epochs 80 and 160. For random cropping, we first zero-pad the image to 40×40 and then randomly crop to 32×32. On ImageNet, each BWN is trained for 100 epochs; the initial learning rate is 0.1 and decays by a factor of 0.1 at epochs 30, 60, and 90. Each image is rescaled to 256×256 and then randomly cropped to 224×224. No additional data augmentations are used. For all networks, a weight decay of 4×10⁻⁵ is applied to all Conv layers.

Notations: In figures and tables, we use the following abbreviations. BN: Batch Normalization. LR: Learning Rate. WD: Weight Decay. SF: Scaling Factors. FP: Full-precision. VGG-7 XNor-BWN: a VGG-7 network using the binarization algorithm of XNor-BWN. ResNet-20 Baseline: a full-precision ResNet-20 using only data augmentation and weight decay, without any additional tricks. Other network structures with certain methods follow the same convention. Large weights, large-magnitude weights, and weights with larger norm all refer to weights having relatively large absolute values.

2.3 SCALING FACTORS .

According to previous methods, scaling factors are an essential element in obtaining BWNs. However, according to our experiments and analysis, we find that scaling factors are not so important in training BWNs and can be removed without a drop in performance. Here we list four reasons why scaling factors are unimportant.

A simple proof: BN is common practice in training BWNs. It contains two operations, Normalization and Affine, as shown in Equation 1, where $\gamma$ and $\beta$ are the affine parameters used in BN, and $\epsilon = 5\mathrm{e}{-4}$ is used in PyTorch to avoid division by zero. A simple derivation shows that BN can absorb scaling factors, as in Equation 2. This holds during training when one scaling factor is applied to each output channel under the Conv-BN-ReLU structure:

$$x' = \mathrm{Normalize}(x) = \frac{x - \bar{x}}{\sqrt{\sigma^2 + \epsilon}}, \qquad y = \mathrm{Affine}(x') = \gamma x' + \beta \quad (1)$$

$$y_\alpha = \gamma \frac{\alpha x - \alpha \bar{x}}{\sqrt{\alpha^2 \sigma^2 + \epsilon}} + \beta \approx \gamma \frac{x - \bar{x}}{\sqrt{\sigma^2 + \epsilon}} + \beta = y \quad (2)$$

¹We use the code of DoReFa-Net to implement XNor-BWN, which matches the original implementation. https://github.com/tensorpack/tensorpack/tree/master/examples/DoReFa-Net

²LQ-BWN is the 1-bit weight, 32-bit activation version of LQ-Nets. https://github.com/microsoft/LQ-Nets
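The absorption argument in Equation 2 is easy to verify numerically. Below is a minimal sketch (ours, not the authors' code) that applies a per-channel scaling factor α before a batch-norm layer and checks that the output is unchanged up to the ε term; the tensor shapes and the range of α are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
B, C, H, W = 8, 16, 32, 32
x = torch.randn(B, C, H, W)              # output of a binary Conv layer
gamma, beta = torch.randn(C), torch.randn(C)
alpha = torch.rand(C) * 1.5 + 0.5        # per-output-channel scaling factors
eps = 5e-4                               # the epsilon value quoted above

def batch_norm(z):
    mean = z.mean(dim=(0, 2, 3), keepdim=True)
    var = z.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    z_hat = (z - mean) / torch.sqrt(var + eps)                     # Normalize
    return gamma.view(1, C, 1, 1) * z_hat + beta.view(1, C, 1, 1)  # Affine

y = batch_norm(x)
y_alpha = batch_norm(alpha.view(1, C, 1, 1) * x)  # scale each channel by alpha
# Eq. (2): the scaling is absorbed, up to the epsilon term.
print((y - y_alpha).abs().max())  # small residual, vanishing as eps -> 0
```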
Experimental Results: Second, we turn directly to experimental results. As shown in Table 2 in Appendix B, we train different networks with and without scaling factors. The test accuracy on CIFAR-10 and the validation accuracy on ImageNet do not show a large difference between the two settings. We then fix the scaling factors of all layers to a certain value and magnify the learning rate according to the magnitude of the fixed scaling factors; the performance does not change when the scaling factors are fixed. Thus, we conclude that with a proper learning rate, scaling factors are not essential for training BWNs.

Compare learnable SF and γ in BN: LQ-BWN uses channel-wise scaling factors. From the experiments in Appendix C, we find that these channel-wise scaling factors have a high correlation with $\gamma$ in the BN layer following the corresponding binary Conv. This finding indicates that BN's $\gamma$ can replace channel-wise SF to some extent.

Quantization Error Curve: Another purpose of scaling factors is to reduce the quantization error between full-precision and binary weights, according to a BNN survey (Qin et al., 2020). Through the experiments in Appendix D, we show that the quantization error is not actually reduced by the scaling factors; rather, weight decay helps with this reduction.
The authors show that scaling factors, whether hand-crafted or learnable, are not so important when training Binary Weight Networks (BWNs), while the change of weight signs is crucial. They make two observations: the weight signs of the primary binary sub-networks are determined and fixed at an early training stage, and binary kernels in the convolutional layers of the final models tend to concentrate on a limited number of fixed structural patterns. Based on these observations, they propose a new method called binary kernel quantization to further compress BWNs.
This paper presents some interesting observations on training BWNs. 1: The scaling factors can be removed when batch normalization is used. 2: The signs of the weights with large norms are determined and fixed at an early training stage. 3: Binary weight networks can be further compressed. Moreover, the authors provide some empirical visualizations and results to support their analysis. However, the paper seems to be incomplete and needs to be further improved.
SP:fdf6eccb626f29ace14ead921e976448e2dd8bb8
Class Balancing GAN with a Classifier in the Loop
1 INTRODUCTION . Image generation has witnessed unprecedented success in recent years following the invention of Generative Adversarial Networks (GANs) by Goodfellow et al. (2014). GANs have improved significantly over time with the introduction of better architectures (Gulrajani et al., 2017; Radford et al., 2015), the formulation of superior objective functions (Jolicoeur-Martineau, 2018; Arjovsky et al., 2017), and regularization techniques (Miyato et al., 2018). An important breakthrough for GANs has been the ability to effectively use class-conditioning information for synthesizing images (Mirza & Osindero, 2014; Miyato & Koyama, 2018). Conditional GANs have been shown to scale to large datasets such as ImageNet (Deng et al., 2009) with 1000 classes (Miyato & Koyama, 2018). One of the major issues with unconditional GANs has been their inability to produce balanced distributions over all the classes present in the dataset. This manifests as a problem of missing modes in the generated distribution. A version of the missing-modes problem, known as the 'covariate shift' problem, was studied by Santurkar et al. (2018). One possible reason is the absence of knowledge about the class distribution P(Y|X)¹ of the generated samples during training. Conditional GANs, on the other hand, do not suffer from this issue, since the class label Y is supplied to the GAN during training. However, Ravuri & Vinyals (2019) recently found that despite doing well on metrics such as the Inception Score (IS) (Salimans et al., 2016) and the Fréchet Inception Distance (FID) (Heusel et al., 2017), the samples generated by state-of-the-art conditional GANs lack diversity compared to the underlying training datasets. Further, we observe that although conditional GANs work well in the balanced case, they suffer performance degradation in the imbalanced case. To address these shortcomings, we propose an orthogonal method (with respect to label conditioning) that injects information about the class distribution P(Y|X) of generated samples into the GAN framework using a pre-trained classifier. We achieve this by tracking the class distribution of samples produced by the GAN with the pre-trained classifier. The regularizer uses this class distribution to penalize excessive generation of samples from the majority classes, thus enforcing the GAN to generate samples from the minority classes. Our regularizer involves a novel method of modelling the forgetting of samples by GANs, based on the exponential forgetting observed in neural networks (Kirkpatrick et al., 2017). We characterize the implications of our regularizer through a theoretical bound and verify it empirically.

¹Here Y represents labels and X represents data.

We conduct an empirical analysis of the proposed class-balancing regularizer in two diverse and challenging scenarios: (i) Training GANs for image generation on long-tailed datasets: Generally, even in long-tailed distribution tasks, the test set is balanced despite the imbalance in the training set. This is because it is important to develop machine learning systems that generalize well across all support regions of the data distribution, avoiding undesired over-fitting to the majority (or head) classes. Hence, it is pertinent to train GANs that can faithfully represent all classes.
(ii) Transferring the knowledge of a learnt classifier ($P(Y|X_t)$) to a GAN trained on an arbitrary prior distribution $P(X_p)$: This is a specific situation where samples from the target distribution $X_t$ are unavailable; instead, discriminative feature knowledge is indirectly available in the form of a trained classifier ($P(Y|X_t)$). This is a perfect fit for crafting input-agnostic (universal) adversarial perturbations in the data-free scenario. We show that the proposed regularizer enables the generated samples not only to extract information about the target data via a trained classifier in the loop, but also to represent its support to a greater extent. In summary, our contributions can be listed as follows:

• We propose a 'class-balancing' regularizer that makes use of the statistics ($P(Y|X)$) of generated samples to promote uniformity while sampling from an unconditional GAN. The effect of our regularizer is shown both theoretically (Section 3) and empirically (Section 4).

• We show that our regularizer enables GANs to learn uniformly across classes even when the training distribution is long-tailed. We observe gains in FID and in the accuracy of a classifier trained on the generated samples.

• We also show that by combining a pre-trained classifier (i.e., $P(Y|X_t)$) trained on a target dataset $X_t$ with an arbitrary distribution $P(X_p)$, our framework is capable of synthesizing novel samples related to the target dataset. We show that UAPs crafted on such novel samples generalize to the real target data, leading to an effective data-free attack. This application is unique to our framework and cannot be realized by conditional GANs.

2 BACKGROUND .

2.1 GENERATIVE ADVERSARIAL NETWORKS (GANS) .

Generative Adversarial Networks (GANs) are formulated as a two-player game in which the discriminator D tries to classify images into two classes, real and fake, while the generator G tries to generate images (by transforming a noise vector $z \sim P_z$) that fool the discriminator D into classifying them as real. The game can be formulated by the following objective:

$$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log(D(x))] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))] \quad (1)$$

The exact optimization for training D is computationally prohibitive in large networks, so the GAN is trained by alternating minimization of loss functions. Multiple loss functions have been proposed to stabilize GAN training. In our work we use the relativistic loss function (Jolicoeur-Martineau, 2018), formulated as:

$$L^{rel}_D = -\mathbb{E}_{(x,z) \sim (P_r, P_z)}[\log(\sigma(D(x) - D(G(z))))] \quad (2)$$

$$L^{rel}_G = -\mathbb{E}_{(x,z) \sim (P_r, P_z)}[\log(\sigma(D(G(z)) - D(x)))] \quad (3)$$

This unconditional GAN formulation has no class conditioning and produces different numbers of samples from different classes (Santurkar et al., 2018). In other words, the generated distribution is not balanced (uniform) across classes.
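For reference, Equations 2 and 3 translate directly into a few lines of PyTorch; this sketch uses the numerically stable identity −log σ(t) = softplus(−t) and is our illustration, not the authors' implementation. The usage comments assume paired real/fake batches.

```python
import torch.nn.functional as F

def relativistic_d_loss(d_real, d_fake):
    # Eq. (2): L_D = -E[log sigmoid(D(x) - D(G(z)))]
    return F.softplus(-(d_real - d_fake)).mean()

def relativistic_g_loss(d_real, d_fake):
    # Eq. (3): L_G = -E[log sigmoid(D(G(z)) - D(x))]
    return F.softplus(-(d_fake - d_real)).mean()

# Inside a training step, with discriminator D and generator G:
#   d_real, d_fake = D(x), D(G(z))
#   loss_d = relativistic_d_loss(d_real, d_fake.detach())   # update D
#   loss_g = relativistic_g_loss(d_real.detach(), D(G(z)))  # update G
```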
2.2 CONDITIONAL GAN .

The conditional GAN (Mirza & Osindero, 2014) generates images associated with an input label y using the following objective:

$$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log(D(x|y))] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z|y)))] \quad (4)$$

The Auxiliary Classifier GAN (ACGAN) (Odena et al., 2017) uses an auxiliary classifier alongside the normal discriminator to enforce high-confidence samples from the conditioned class y, whereas cGAN with projection (Miyato & Koyama, 2018) uses Conditional Batch Norm (De Vries et al., 2017) in the generator and a projection step in the discriminator to provide class information to the GAN. We refer to this method as cGAN in the subsequent sections.

Possible issue with conditional GANs in the long-tailed setting: The objective in Eq. (4) can be seen as learning a different G(z|y) and D(x|y) for each of the K classes. In this case, the tail classes, having fewer samples, can suffer from poor generalization. In practice there is parameter sharing among the different class generators, but class-specific parameters are still present in the form of Conditional BatchNorm. We find that the performance of conditional GANs degrades more than that of unconditional GANs in the long-tailed scenario (Section 4).

3 METHOD .

In our method we introduce a pretrained classifier (C) to provide feedback to the generator about the label distribution P(Y) of the generated images. The proposed regularizer is added to the generator loss and trained via backpropagation. We first describe the modelling of class statistics in Section 3.1; the exact formulation of the regularizer and its theoretical properties are described in Section 3.2. An overview of our method is presented in Figure 1a.

3.1 CLASS STATISTICS FOR GAN .

A GAN is a dynamic system in which the generator G has to continuously adapt itself to fool the discriminator D. During training, the discriminator D updates itself, causing the objective for the generator G to change as well. This changing objective can be seen as the generator G learning a sequence of different tasks. In this context, we draw motivation from the seminal work on catastrophic forgetting in neural networks (Kirkpatrick et al., 2017), which shows that a neural network trained with SGD suffers exponential forgetting of earlier tasks when trained on a new task. Based on this, we define the effective class frequency $\hat{N}^t_k$ of class k at cycle t as:

$$\hat{N}^t_k = (1 - \alpha)\,\hat{N}^{t-1}_k + c^{t-1}_k \quad (5)$$

Here $c^{t-1}_k$ is the number of samples of class k produced by the GAN in cycle (t−1), where the class of a sample is determined by the pretrained classifier C. Although D is updated continuously, the update is slow and requires several iterations to change the form of D; hence we update the statistics only after a certain number of iterations, which constitute a cycle. $\alpha$ is the exponential forgetting factor, set to 0.5 in all our experiments. We normalize the class frequencies $\hat{N}^t_k$ to obtain the discrete effective class distribution:

$$N^t_k = \frac{\hat{N}^t_k}{\sum_k \hat{N}^t_k} \quad (6)$$

3.2 REGULARIZER FORMULATION .

The regularizer objective is defined as the maximization of the term ($L_{reg}$) below:

$$\max_{\hat{p}} \sum_k \frac{\hat{p}_k \log(\hat{p}_k)}{N^t_k} \quad (7)$$

where $\hat{p} = \frac{1}{n}\sum_{i=1}^{n} C(G(z_i))$. In other words, $\hat{p}$ is the average softmax vector (obtained from the classifier C) over a batch of n samples, and $\hat{p}_k$ is its k-th component, corresponding to class k; $z_i$ is a random noise vector sampled from $P_z$. If the classifier C recognizes the samples confidently, with probability ≈ 1, then $\hat{p}_k$ can be seen as an approximation of the fraction of the n samples in the batch that belong to class k. The $N^t_k$ in the regularizer is obtained through the update rule in Section 3.1 and is held constant during backpropagation.
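The bookkeeping in Equations 5-7 reduces to a short class, sketched below under stated assumptions: the initialization of $\hat{N}$, the weighting λ of the regularizer in the generator loss, and the use of hard argmax counts for $c^{t-1}_k$ are our choices for illustration, not details pinned down in the text above.

```python
import torch
import torch.nn.functional as F

class ClassBalancer:
    """Tracks effective class statistics (Eqs. 5-6) and computes Eq. (7)."""

    def __init__(self, num_classes, alpha=0.5):
        self.alpha = alpha                      # exponential forgetting factor
        self.n_hat = torch.ones(num_classes)    # effective class frequencies

    def update(self, counts):
        # Eq. (5): blend last cycle's per-class sample counts c^{t-1}_k.
        self.n_hat = (1.0 - self.alpha) * self.n_hat + counts

    def effective_distribution(self):
        # Eq. (6): normalized effective class distribution N^t_k.
        return self.n_hat / self.n_hat.sum()

    def regularizer_loss(self, clf_logits):
        # p_hat: average classifier softmax over the generated batch.
        p_hat = F.softmax(clf_logits, dim=1).mean(dim=0)
        n = self.effective_distribution().to(p_hat.device)  # constant in backprop
        # Eq. (7) is maximized, so the generator minimizes its negation.
        return -(p_hat * torch.log(p_hat + 1e-8) / n).sum()

# Per generator step (classifier C is frozen; lambda_reg is a hyper-parameter):
#   logits = C(G(z))
#   loss_g = gan_loss + lambda_reg * balancer.regularizer_loss(logits)
# At the end of each cycle, update the statistics with hard class counts:
#   counts = torch.bincount(logits.argmax(1), minlength=num_classes).float()
#   balancer.update(counts)
```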
We want to emphasize that the classifier C is not required to be trained on the same data as the GAN; it can instead be trained in other ways, such as semi-supervised or few-shot learning. For instance, in Section 4.2 we show that a classifier trained in a semi-supervised scenario also enables the GAN to produce a balanced distribution. Hence our approach does not specifically need labelled data, in contrast to conditional GANs, which require a label for each image during training.

Proposition: The maximization of the proposed objective in (7) leads to the following bound on $\hat{p}_k$:

$$\hat{p}_k \le e^{-K(\log(K)-1)\frac{N^t_k}{\sum_k N^t_k} - 1} \quad (8)$$

where K is the number of distinct class labels produced by classifier C. Please refer to Appendix A.1 for the proof.

Implications of the proposition: The bound on $\hat{p}_k$ decreases exponentially with the fraction of the effective class frequency $N^t_k / \sum_k N^t_k$ of a given class k. When the generated distribution is balanced, $\hat{p}_k = 1/K$, which leads to the exponential average $N^t_k = 1/K$; hence, given sufficient iterations, $\hat{p}_k$ attains the upper bound, which signifies its tightness. To demonstrate the effect of the regularizer empirically, we construct two extreme-case examples based on the nature of the bound:

• If $N^t_k \gg N^t_i$ for all $i \ne k$, the bound on $\hat{p}_k$ approaches $e^{-K(\log(K)-1)-1}$; hence the network is expected to decrease the proportion of class-k samples.

• If $N^t_k \ll N^t_i$ for all $i \ne k$, the bound on $\hat{p}_k$ approaches $e^{-1}$; hence the network is expected to increase the proportion of class-k samples.

We verified the two extreme cases above by training an SNDCGAN (Miyato et al., 2018) (a DCGAN with spectral normalization) on CIFAR-10 while fixing $\hat{N}^t_k$ (the unnormalized version of $N^t_k$) across time steps; we denote this fixed value $N_k$. We then initialize $N_k$ to a very large value and to a very small value. The results in Figure 1b show that the GAN increases the proportion of class-k samples when $N_k$ is small and decreases it when $N_k$ is large. This demonstrates the balancing behaviour of the proposed regularizer.
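As a quick numerical reading of the two extreme cases above, the following snippet evaluates the bound of Eq. (8) at the endpoints of the frequency fraction; taking K = 10 (as in CIFAR-10) is our illustrative choice.

```python
import math

K = 10  # number of classes, e.g., CIFAR-10

def p_hat_bound(frac):
    # Eq. (8) with frac = N^t_k / sum_k N^t_k
    return math.exp(-K * (math.log(K) - 1.0) * frac - 1.0)

print(p_hat_bound(1.0))  # dominant class k: ~8.1e-07 -> generation suppressed
print(p_hat_bound(0.0))  # rare class k: e^{-1} ~ 0.368 -> generation encouraged
```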
**Overview**: The paper presents a simple regularizer term that aims to force a GAN to generate samples following a uniform distribution over different classes. The regularizer depends on a classifier that works well on an imbalanced or long-tailed dataset. The paper presents experiments on CIFAR-10 and LSUN that were synthetically long-tailed or imbalanced. The results show that the proposed term generates samples that follow a more uniform distribution over classes.
SP:c343c46cd2f33ae06be87cf9b44fbdbd59f335cd
Class Balancing GAN with a Classifier in the Loop
1 INTRODUCTION.
Image generation has witnessed unprecedented success in recent years following the invention of Generative Adversarial Networks (GANs) by Goodfellow et al. (2014). GANs have improved significantly over time with the introduction of better architectures (Gulrajani et al., 2017; Radford et al., 2015), superior objective functions (Jolicoeur-Martineau, 2018; Arjovsky et al., 2017), and regularization techniques (Miyato et al., 2018). An important breakthrough for GANs has been the ability to effectively use class-conditioning information for synthesizing images (Mirza & Osindero, 2014; Miyato & Koyama, 2018). Conditional GANs have been shown to scale to large datasets such as ImageNet (Deng et al., 2009) with 1000 classes (Miyato & Koyama, 2018).

One of the major issues with unconditional GANs has been their inability to produce balanced distributions over all the classes present in the dataset. This is seen as a problem of missing modes in the generated distribution. A version of the missing-modes problem, known as the 'covariate shift' problem, was studied by Santurkar et al. (2018). One possible reason is the absence of knowledge about the class distribution P(Y|X) of the generated samples during training (here Y represents labels and X represents data). Conditional GANs, on the other hand, do not suffer from this issue, since the class label Y is supplied to the GAN during training. However, Ravuri & Vinyals (2019) recently found that despite doing well on metrics such as Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017), the samples generated by state-of-the-art conditional GANs lack diversity in comparison to the underlying training datasets. Further, we observed that although conditional GANs work well in the balanced case, they suffer performance degradation in the imbalanced case.

In order to address these shortcomings, we propose an orthogonal method (with respect to label conditioning) to induce information about the class distribution P(Y|X) of generated samples into the GAN framework using a pre-trained classifier. We achieve this by tracking the class distribution of samples produced by the GAN with the pre-trained classifier. The regularizer utilizes this class distribution to penalize excessive generation of samples from the majority classes, thus pushing the GAN to generate samples from the minority classes. Our regularizer involves a novel method of modelling the forgetting of samples by GANs, based on the exponential forgetting observed in neural networks (Kirkpatrick et al., 2017). We infer the implications of our regularizer through a theoretical bound and verify them empirically.

We conduct an empirical analysis of the proposed class-balancing regularizer in two diverse and challenging scenarios: (i) Training GANs for image generation on long-tailed datasets: Generally, even in long-tailed tasks, the test set is balanced despite the imbalance in the training set. This is because it is important to develop machine learning systems that generalize well across all support regions of the data distribution, avoiding undesired over-fitting to the majority (or head) classes. Hence, it is pertinent to train GANs that can faithfully represent all classes.
(ii) Transferring the knowledge of a learnt classifier (P(Y|X_t)) to a GAN trained on an arbitrary prior distribution P(X_p): This is a specific situation where samples from the target distribution X_t are unavailable; instead, discriminative feature knowledge is available indirectly in the form of a trained classifier (P(Y|X_t)). This is a perfect fit for crafting input-agnostic (universal) adversarial perturbations in the data-free scenario. We show that the proposed regularizer enables the generated samples not only to extract information about the target data with a trained classifier in the loop, but also to represent its support to a greater extent.

In summary, our contributions are as follows:
• We propose a 'class-balancing' regularizer that makes use of the statistics (P(Y|X)) of generated samples to promote uniformity while sampling from an unconditional GAN. The effect of our regularizer is demonstrated both theoretically (Section 3) and empirically (Section 4).
• We show that our regularizer enables GANs to learn uniformly across classes even when the training distribution is long-tailed. We observe gains in FID and in the accuracy of a classifier trained on generated samples.
• We also show that by combining a pre-trained classifier (i.e., P(Y|X_t)) trained on a target dataset X_t with an arbitrary distribution P(X_p), our framework is capable of synthesizing novel samples related to the target dataset. We show that UAPs created on such novel samples generalize to real target data and hence lead to an effective data-free attack. This application is novel to our framework and cannot be realized by conditional GANs.

2 BACKGROUND.
2.1 GENERATIVE ADVERSARIAL NETWORKS (GANS).
Generative Adversarial Networks (GANs) are formulated as a two-player game in which the discriminator D tries to classify images into two classes, real and fake, while the generator G tries to generate images (transforming a noise vector $z \sim P_z$) which fool the discriminator D into classifying them as real. The game can be formulated by the following objective:

$$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log(D(x))] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))] \quad (1)$$

Exact optimization of D is computationally prohibitive in large networks, so the GAN is trained by alternating minimization of loss functions. Multiple loss functions have been proposed for stabilizing GAN training. In our work we use the relativistic loss function (Jolicoeur-Martineau, 2018), which is formulated as:

$$L^{rel}_D = -\mathbb{E}_{(x,z) \sim (P_r, P_z)}[\log(\sigma(D(x) - D(G(z))))] \quad (2)$$

$$L^{rel}_G = -\mathbb{E}_{(x,z) \sim (P_r, P_z)}[\log(\sigma(D(G(z)) - D(x)))] \quad (3)$$

This unconditional GAN formulation does not have any class conditioning and produces different numbers of samples from different classes (Santurkar et al., 2018). In other words, the generated distribution is not balanced (uniform) across classes.

2.2 CONDITIONAL GAN.
The conditional GAN (Mirza & Osindero, 2014) generates images associated with an input label y using the following objective:

$$\min_G \max_D \; \mathbb{E}_{x \sim P_r}[\log(D(x|y))] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z|y)))] \quad (4)$$

The Auxiliary Classifier GAN (ACGAN) (Odena et al., 2017) uses an auxiliary classifier alongside the normal discriminator to enforce high-confidence samples from the conditioned class y, whereas cGAN with projection (Miyato & Koyama, 2018) uses Conditional BatchNorm (De Vries et al.
, 2017) in the generator and a projection step in the discriminator to provide class information to the GAN; we refer to this method as cGAN in the subsequent sections.
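For reference, the relativistic objectives in eqs. (2)-(3) above translate directly into code. The sketch below is a hedged PyTorch-style illustration, not the authors' released implementation; `D`, `real`, and `fake` are assumed stand-ins for the discriminator and for batches of real and generated images.

```python
import torch.nn.functional as F

def relativistic_d_loss(D, real, fake):
    # Eq. (2): L_D = -E[log sigma(D(x) - D(G(z)))]
    return -F.logsigmoid(D(real) - D(fake)).mean()

def relativistic_g_loss(D, real, fake):
    # Eq. (3): L_G = -E[log sigma(D(G(z)) - D(x))]
    return -F.logsigmoid(D(fake) - D(real)).mean()
```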
The paper proposes a regularizer to force an unconditional GAN generator to produce samples that follow a uniform class distribution. To provide feedback to the generator about the class distribution over the generated images, the proposed method utilizes a classifier pretrained on the same (imbalanced) training dataset. Motivated by the exponential forgetting of earlier tasks in neural networks [1], the regularization term encourages the generator to increase the proportion of samples of an infrequent class after a certain number of iterations, and vice versa. Empirical studies are performed to show the effectiveness of the regularization: (1) the paper shows that the proposed method enables a GAN trained on a dataset with a long-tailed class distribution to generate samples with a uniform class distribution, and (2) the method is beneficial for generating universal adversarial perturbations (UAPs) in the data-free scenario.
SP:c343c46cd2f33ae06be87cf9b44fbdbd59f335cd
GraphSAD: Learning Graph Representations with Structure-Attribute Disentanglement
1 INTRODUCTION.
Representing nodes or entire graphs with informative low-dimensional feature vectors plays a crucial role in many real-world applications and domains, e.g., user analysis in social networks (Tan et al., 2011; Yan et al., 2013), relational inference in knowledge graphs (Bordes et al., 2013; Trouillon et al., 2016; Sun et al., 2019), molecular property prediction in drug/material discovery (Gilmer et al., 2017; Wu et al., 2018), and circuit response prediction in circuit design (Zhang et al., 2019). Recently, Graph Neural Networks (GNNs) (Kipf & Welling, 2017; Velickovic et al., 2018; Xu et al., 2019) have shown their superiority in many different tasks. In general, the essential idea of these methods is to learn effective node representations (or graph representations, with an additional graph pooling) by aggregating the attributes of each node and its neighbors in an iterative and nonlinear way.

For an attributed graph, GNNs commonly encode the information of its graph structure and node attributes into a single representation. This can be problematic, since the semantic spaces of graph structure and node attributes might not be well aligned, and these two types of information can be useful for different tasks. For example, predicting the health condition of a user depends mainly on his/her profile information, and the social network does not provide much meaningful information; conversely, the prediction of a user's social class relies mainly on his/her social network structure. Therefore, a more reasonable solution is to disentangle these two types of information into two distinct sets of representations, whose importance can then be determined by downstream tasks. Such disentangled representations have been shown to benefit a model's generalization ability and interpretability (Chen et al., 2016; Higgins et al., 2017; Alemi et al., 2017). Recently, DisenGNN (Ma et al., 2019) studied disentangled node representation learning by grouping the neighbors of each node into different channels, each channel corresponding to a different latent factor. In other words, DisenGNN focuses on disentangling the various latent factors of graph structure. By contrast, our work intends to disentangle the representations of graph structure and node attributes, which is orthogonal to their work and also more general.

In this paper, we aim to learn node/graph representations with Structure-Attribute Disentanglement (GraphSAD). As a naive first trial, we conduct disentanglement in the input space, named Input-SAD, which separates a graph into a structure and an attribute component and then encodes the two components respectively. However, since graph structure and node attributes are not completely independent, it is better to suppress the dependency of these two factors in the embedding space instead of directly separating the input graph. Inspired by this fact, we propose to distill a graph's structure and attribute information into distinct channels of the embedding vectors, named Embed-SAD. Concretely, for each node embedding, half of its elements capture the graph structure through edge reconstruction, and the other half extracts the attribute information by minimizing the mutual information with the structure counterpart while, at the same time, preserving semantic discriminability.
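To make the channel split concrete, here is an illustrative sketch (our reading of the Embed-SAD idea, not the paper's exact losses) of dividing a node embedding into structure and attribute halves and scoring candidate edges from the structure half alone.

```python
import torch

def split_embedding(z):
    """z: (|V|, 2*delta) -> structure half, attribute half."""
    delta = z.shape[1] // 2
    return z[:, :delta], z[:, delta:]

def edge_reconstruction_logits(z_struct, edge_index):
    """Score each candidate edge (u, v) by the inner product of the
    structure halves; training these against the observed adjacency
    pushes structural information into the first delta channels."""
    u, v = edge_index
    return (z_struct[u] * z_struct[v]).sum(dim=-1)
```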
In addition, we devise a metric, denoted SAD-Metric, to quantitatively evaluate a graph representation's structure-attribute disentanglement; it measures the sensitivity of a model when varying either the graph structure or the node attributes of an input graph. We summarize our contributions as follows:
• We study structure-attribute disentangled node/graph representation learning by separating graph structure and node attributes in either the input or the embedding space.
• We design a quantitative metric to measure the extent of structure-attribute disentanglement, which is novel in its graph-specific data processing scheme.
• By combining the proposed disentangling techniques with various GNNs, we empirically verify our method's superior performance on both node and graph classification benchmark datasets. We also analyze the disentangled graph representations via the proposed metric and qualitative visualization.

2 PROBLEM DEFINITION AND PRELIMINARIES.
2.1 PROBLEM DEFINITION.
We study learning node representations (e.g., in social networks) or whole-graph representations (e.g., of molecular graphs) of attributed graphs. Formally, we denote an attributed graph as $G = (V, E, A)$. $V$ denotes the set of nodes. $E = \{(u, v, t_{uv})\}$ is the set of edges, with $t_{uv}$ the type of the edge connecting nodes $u$ and $v$ (e.g., different types of bonds in molecular graphs). $A = \{A_v \mid v \in V\}$ represents the set of node attributes. Our goal is to learn meaningful representations for each node or for the whole graph.

Existing GNNs typically mix both the graph structure and node attributes into a unified representation through neural message passing. However, in practice, these two types of information may encode different semantics and be useful for different tasks. Take prediction on social networks as an example: when predicting the social class of users, the graph structure plays a more important role than user attributes, while user attributes are definitely more informative than graph structure when forecasting users' health conditions. It is therefore desirable to disentangle the information of graph structure and node attributes into different sets of representations and let the downstream task determine their importance. Specifically, we define our problem as follows:

Node/Graph Representation Learning with Structure-Attribute Disentanglement. Given an attributed graph $G = (V, E, A)$, we aim to learn node (or whole-graph) representations by disentangling the semantics of the graph structure $S = \{V, E\}$ and the node attributes $A$ into two distinct sets of representations, i.e., $z_v = [z_{v,S}, z_{v,A}]$ (or $z_G = [z_{G,S}, z_{G,A}]$). The importance of the two kinds of representations is further determined by the downstream task, such as node or graph classification.

2.2 PRELIMINARIES.
Graph Neural Networks (GNNs). A GNN maps each node $v \in V$ to an embedding vector $z_v$ and also encodes the entire graph $G$ as a vector $z_G$. For an $L$-layer GNN, the $L$-hop information surrounding each node is captured via a neighborhood aggregation mechanism. Formally, the $l$-th GNN layer can be defined as:

$$z_v^{(l)} = \text{COMBINE}^{(l)}\big(z_v^{(l-1)}, \text{AGGREGATE}^{(l)}(\{(z_v^{(l-1)}, z_u^{(l-1)}, t_{uv}) : u \in N(v)\})\big) \quad (1)$$

where $N(v)$ is the set of node $v$'s neighbors, $t_{uv}$ denotes the edge attribute, $z_v^{(l)}$ denotes the representation of $v$ at the $l$-th layer, and $z_v^{(0)}$ is initialized by the node attribute $A_v$.
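A minimal sketch of one such message-passing layer follows, with sum aggregation and an MLP combine; the specific shapes and modules are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.combine = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, z, edge_index, edge_attr):
        # z: (|V|, dim) node embeddings; edge_index: (2, |E|) pairs (u -> v);
        # edge_attr: (|E|, dim) embeddings of the edge types t_uv.
        src, dst = edge_index
        messages = z[src] + edge_attr                       # per-edge messages
        agg = torch.zeros_like(z).index_add_(0, dst, messages)  # AGGREGATE
        return self.combine(torch.cat([z, agg], dim=-1))        # COMBINE
```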
Using all the node embeddings in a graph, the entire graph's embedding can be derived by a permutation-invariant readout function:

$$z_G = \text{READOUT}(\{z_v \mid v \in V\}) \quad (2)$$

Mutual Information Estimator. Mutual information (MI) quantifies the mutual dependency between two random variables. Some recent works (Belghazi et al., 2018; Hjelm et al., 2019) studied neural-network-based MI estimators. Among these, Noise-Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010; 2012) was first employed as a lower bound of MI by van den Oord et al. (2018), and we adopt this estimator in our method for its effectiveness and conciseness. In practice, for two random variables $x_1$ and $x_2$, given one positive pair $(x_1^+, x_2^+) \sim p(x_1, x_2)$ and $K$ distractors $(x_1^+, x_{2,j}) \sim p(x_1)p(x_2)$ ($j = 1, 2, \cdots, K$), the NCE estimate of MI is defined as:

$$I_{NCE}(x_1^+, x_2^+, \{x_{2,j}\}_{j=1}^{K}) = \log(K+1) + \log \frac{\exp(T(x_1^+, x_2^+))}{\exp(T(x_1^+, x_2^+)) + \sum_{j=1}^{K} \exp(T(x_1^+, x_{2,j}))} \quad (3)$$

where $T(\cdot, \cdot)$ is a parameterized discriminator function which outputs a scalar value for a pair of input samples; its architecture is detailed in Sec. 5.1.

3 LEARNING GRAPH REPRESENTATIONS WITH STRUCTURE-ATTRIBUTE DISENTANGLEMENT.
3.1 INPUT-SAD: STRUCTURE-ATTRIBUTE DISENTANGLEMENT FOR INPUTS.
As an initial attempt, we seek to learn structure-attribute disentangled node/graph representations by separating a graph into a structure and an attribute component and then encoding them respectively, as shown in Fig. 1(a). Concretely, given an attributed graph $G = (V, E, A)$, these two components are constructed and encoded as follows.

The structure component extracts the graph structure and forms another graph $G_S = (V_S, E_S, A_S)$, in which the node and edge sets remain unchanged, i.e., $V_S = V$ and $E_S = E$, and the out-degree of each node serves as its attribute, i.e., $A_S = \{d(v) \mid v \in V_S\}$ ($d(\cdot)$ denotes the out-degree function). A GNN maps this component to a $\delta$-dimensional embedding space:

$$(z_{V,S}, z_{G,S}) = \text{GNN}(V_S, E_S, A_S) \quad (4)$$

where $z_{V,S} = \{z_{v,S} \mid v \in V\} \in \mathbb{R}^{|V| \times \delta}$ denotes the node embeddings derived only from the graph structure, and $z_{G,S} \in \mathbb{R}^{\delta}$ is the embedding of the entire structure component.

The attribute component is formed as a feature matrix $U \in \mathbb{R}^{|V| \times D}$, where the feature vector $U_v \in \mathbb{R}^D$ is a $D$-dimensional embedding of node attribute $A_v$. For this component, a fully-connected network and a readout function (e.g., mean pooling in our implementation) are used for encoding:

$$z_{V,A} = \text{FCN}(U), \quad z_{G,A} = \text{READOUT}(z_{V,A}) \quad (5)$$

where $z_{V,A} = \{z_{v,A} \mid v \in V\} \in \mathbb{R}^{|V| \times \delta}$ denotes the attribute embeddings of the nodes in graph $G$, and $z_{G,A} \in \mathbb{R}^{\delta}$ embeds the whole attribute component.

The complete information of graph $G$ is restored by concatenating the structure and attribute embeddings for each node and for the entire graph:

$$z_V = \{[z_{v,S}, z_{v,A}] \mid v \in V\} \in \mathbb{R}^{|V| \times 2\delta}, \quad z_G = [z_{G,S}, z_{G,A}] \in \mathbb{R}^{2\delta} \quad (6)$$

where $[\cdot, \cdot]$ denotes concatenation. Upon these concatenated node/graph embeddings, the prediction task (e.g., node/graph classification) is performed by a task-specific network $C$, which defines the supervised loss $L_{sup}$ for model optimization:

$$\min_{\text{GNN}, \text{FCN}, C} L_{sup} \quad (7)$$
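As a concrete reference for the NCE lower bound in eq. (3), the following sketch (our illustration; `T` is assumed to be a given callable returning a scalar tensor for a pair of vectors) computes the estimate for one positive pair and K distractors.

```python
import math
import torch

def info_nce(T, x1_pos, x2_pos, x2_neg):
    # x1_pos, x2_pos: (d,) a positive pair; x2_neg: (K, d) distractors.
    pos = T(x1_pos, x2_pos)                                # scalar score
    neg = torch.stack([T(x1_pos, x2j) for x2j in x2_neg])  # (K,) scores
    logits = torch.cat([pos.view(1), neg])
    K = x2_neg.shape[0]
    # log(K+1) + log softmax of the positive score over all K+1 scores
    return math.log(K + 1) + (pos - torch.logsumexp(logits, dim=0))
```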
This paper presents a novel method called Embed-SAD (as well as Input-SAD) for learning graph/node representations that disentangle structure and attribute information. Input-SAD is a simple baseline that obtains structure-attribute disentanglement by processing graph structures and node attributes individually: for structure, the original node attributes are replaced by out-degrees only and passed to GNNs, while the node attributes themselves are passed to fully-connected networks. Embed-SAD is a more elaborate method that disentangles the GNN embeddings by imposing two types of additional losses on top of the original supervised loss: an edge-reconstruction loss for structure, and a Noise-Contrastive Estimation (NCE) based loss to suppress the mutual information with the structure-encoding vectors. The paper also develops an interesting evaluation metric called SAD-Metric, in which either the node attributes or the graph structure of each graph is exclusively perturbed, and a prediction of whether that perturbation targeted structure or attributes is made from the element-wise absolute differences between the embedded vectors before and after the perturbation. SAD-Metric thus quantifies the extent to which the obtained representation can detect which kind of perturbation, structural or attribute-related, was made for each sample graph. The experimental results also demonstrate that the structure-attribute disentanglement induced by the Embed-SAD learning strategy actually improves the prediction performance of many off-the-shelf GNNs over many different graph- or node-level tasks.
SP:d6ecb075f238cc67a6cc4f6b924e1b7b3eb69dfa
This paper focuses on disentangling embeddings of the structure and the attributes of a graph. The authors' key idea is that structure and attribute information should be split within the GNN. Based on this, the authors disentangle the structure embedding from the attribute embedding. With two separate components, the two kinds of embeddings can be captured at the input stage. Alternatively, the two kinds of embeddings can be obtained in the embedding space by reconstructing edges and minimizing mutual information. Finally, the authors propose a metric to evaluate the disentanglement. The models in this paper outperform baselines on node classification and graph classification tasks.
SP:d6ecb075f238cc67a6cc4f6b924e1b7b3eb69dfa
NBDT: Neural-Backed Decision Tree
1 INTRODUCTION.
Many computer vision applications (e.g., medical imaging and autonomous driving) require insight into the model's decision process, complicating applications of deep learning, which are traditionally black box. Recent efforts in explainable computer vision attempt to address this need and can be grouped into one of two categories: (1) saliency maps and (2) sequential decision processes. Saliency maps retroactively explain model predictions by identifying which pixels most affected the prediction. However, by focusing on the input, saliency maps fail to capture the model's decision-making process. For example, saliency offers no insight into a misclassification when the model is "looking" at the right object for the wrong reasons. Alternatively, we can gain insight into the model's decision process by breaking up predictions into a sequence of smaller, semantically meaningful decisions, as in rule-based models like decision trees. However, existing efforts to fuse deep learning and decision trees suffer from (1) significant accuracy loss relative to contemporary models (e.g., residual networks), (2) reduced interpretability due to accuracy optimizations (e.g., impure leaves and ensembles), and (3) tree structures that offer limited insight into the model's credibility.

To address these, we propose Neural-Backed Decision Trees (NBDTs) to jointly improve both (1) accuracy and (2) interpretability of modern neural networks, utilizing decision rules that preserve (3) properties like sequential, discrete decisions; pure leaves; and non-ensembled predictions. These properties in unison enable unique insights, as we show. We acknowledge that there is no universally-accepted definition of interpretability (Lundberg et al., 2020; Doshi-Velez & Kim, 2017; Lipton, 2016), so to show interpretability, we adopt a definition offered by Poursabzi-Sangdeh et al. (2018): a model is interpretable if a human can validate its prediction, determining when the model has made a sizable mistake. We picked this definition for its importance to downstream benefits we can evaluate, specifically (1) model or dataset debugging and (2) improving human trust. To accomplish this, NBDTs replace the final linear layer of a neural network with a differentiable oblique decision tree and, unlike their predecessors (i.e., decision trees, hierarchical classifiers), use a hierarchy derived from model parameters, do not employ a hierarchical softmax, and can be created from any existing classification neural network without architectural modifications. These improvements tailor the hierarchy to the network rather than overfitting to the feature space, lessen the decision tree's reliance on highly uncertain decisions, and encourage accurate recognition of high-level concepts. These benefits culminate in a joint improvement of accuracy and interpretability.

Our contributions:
1. We propose a tree supervision loss, yielding NBDTs that match/outperform and out-generalize modern neural networks (WideResNet, EfficientNet) on ImageNet, TinyImageNet200, and CIFAR100. Our loss also improves the original model by up to 2%.
2. We propose alternative hierarchies for oblique decision trees (induced hierarchies built using pre-trained neural network weights) that outperform both data-based hierarchies (e.g., built with information gain) and existing hierarchies (e.g., WordNet) in accuracy.
3. We show that NBDT explanations are more helpful to the user when identifying model mistakes, preferred when using the model to assist in challenging classification tasks, and usable for identifying ambiguous ImageNet labels.

2 RELATED WORKS.
Saliency Maps. Numerous efforts (Springenberg et al., 2014; Zeiler & Fergus, 2014; Simonyan et al., 2013; Zhang et al., 2016; Selvaraju et al., 2017; Ribeiro et al., 2016; Petsiuk et al., 2018; Sundararajan et al., 2017) have explored the design of saliency maps identifying the pixels that most influenced the model's prediction. White-box techniques (Springenberg et al., 2014; Zeiler & Fergus, 2014; Simonyan et al., 2013; Selvaraju et al., 2017; Sundararajan et al., 2017) use the network's parameters to determine salient image regions, and black-box techniques (Ribeiro et al., 2016; Petsiuk et al., 2018) determine pixel importance by measuring the prediction's response to perturbed inputs. However, saliency does not explain the model's decision process (e.g., was the model confused early on, distinguishing between Animal and Vehicle? Or is it only confused between dog breeds?).

Transfer to Explainable Models. Prior to the recent success of deep learning, decision trees were state-of-the-art on a wide variety of learning tasks and the gold standard for interpretability. Although this success is recent, study at the intersection of neural networks and decision trees dates back three decades, where neural networks were seeded with decision tree weights (Banerjee, 1990; 1994; Ivanova & Kubat, 1995a; b), and decision trees were created from neural network queries (Krishnan et al., 1999; Boz, 2000; Dancey et al., 2004; Craven & Shavlik, 1996; 1994), as in distillation (Hinton et al., 2015). The modern analogs of both lines of work (Humbird et al., 2018; Siu, 2019; Frosst & Hinton, 2017) evaluate on feature-sparse, sample-sparse regimes such as the UCI datasets (Dua & Graff, 2017) or MNIST (LeCun et al., 2010) and perform poorly on standard image classification tasks.

Hybrid Models. Recent work produces hybrid decision-tree-and-neural-network models that scale up to datasets like CIFAR10 (Krizhevsky, 2009), CIFAR100 (Krizhevsky, 2009), TinyImageNet (Le & Yang, 2015), and ImageNet (Deng et al., 2009). One category of models organizes the neural network into a hierarchy, dynamically selecting branches to run inference (Veit & Belongie, 2018; McGill & Perona, 2017; Teja Mullapudi et al., 2018; Redmon & Farhadi, 2017; Murdock et al., 2016). However, these models use impure leaves, resulting in uninterpretable, stochastic paths. Other approaches fuse deep learning into each decision tree node: an entire neural network (Murthy et al., 2016), several layers (Murdock et al., 2016; Roy & Todorovic, 2016), a linear layer (Ahmed et al., 2016), or some other parameterization of the neural network output (Kontschieder et al., 2015). These models see reduced interpretability by using k-way decisions with large k (via depth-2 trees) (Ahmed et al., 2016; Guo et al., 2018) or by employing an ensemble (Kontschieder et al., 2015; Ahmed et al., 2016), which is often referred to as a "black box" (Carvalho et al., 2019; Rudin, 2018).

Hierarchical Classification (Silla & Freitas, 2011).
One set of approaches directly uses a preexisting hierarchy over classes, such as WordNet (Redmon & Farhadi, 2017; Brust & Denzler, 2019; Deng et al.). However, conceptual similarity is not indicative of visual similarity. Other models build a hierarchy from the training set directly, via a classic data-dependent metric like Gini impurity (Alaniz & Akata, 2019) or information gain (Rota Bulo & Kontschieder, 2014; Biçici et al., 2018). These models are instead prone to overfitting, per Tanno et al. (2019). Finally, several works introduce hierarchical surrogate losses (Wu et al., 2017; Deng et al., 2012), such as hierarchical softmax (Mohammed & Umaashankar, 2018), but as the authors note, these methods quickly suffer from major accuracy loss with more classes or higher-resolution images (e.g., beyond CIFAR10). We demonstrate that hierarchical classifiers attain higher accuracy without a hierarchical softmax.

3 METHOD.
Neural-Backed Decision Trees (NBDTs) replace a network's final linear layer with a decision tree. Unlike classical decision trees or many hierarchical classifiers, NBDTs use path probabilities for inference (Sec 3.1) to tolerate highly-uncertain intermediate decisions, build a hierarchy from pretrained model weights (Sec 3.2 & 3.3) to lessen overfitting, and train with a hierarchical loss (Sec 3.4) to learn high-level decisions (e.g., Animal vs. Vehicle) significantly better.

3.1 INFERENCE.
Our NBDT first featurizes each sample using the neural network backbone; the backbone consists of all neural network layers before the final linear layer. Second, we run the final fully-connected layer as an oblique decision tree. However, (a) a classic decision tree cannot recover from a mistake made early in the hierarchy, and (b) simply running a classic decision tree on neural features drops accuracy significantly, by up to 11% (Table 2). Thus, we present modified decision rules (Figure 1, B):

1. Seed oblique decision rule weights with neural network weights. An oblique decision tree supports only binary decisions, using a hyperplane for each decision. Instead, we associate a weight vector $n_i$ with each node. For leaf nodes, where $i = k \in [1, K]$, each $n_i = w_k$ is a row vector of the fully-connected layer's weights $W \in \mathbb{R}^{D \times K}$. For all inner nodes, where $i \in [K+1, N]$, we find all leaves $k \in L(i)$ in node $i$'s subtree and average their weights: $n_i = \sum_{k \in L(i)} w_k / |L(i)|$.

2. Compute node probabilities. Child probabilities are given by softmaxed inner products. For each sample $x$ and node $i$, compute the probability of each child $j \in C(i)$ using $p(j|i) = \text{SOFTMAX}(\langle \vec{n}_i, x \rangle)[j]$, where $\langle \vec{n}_i, x \rangle = (\langle n_j, x \rangle)_{j \in C(i)}$.

3. Pick a leaf using path probabilities. Inspired by Deng et al. (2012), consider a leaf, its class $k$, and its path from the root $P_k$. The probability of each node $i \in P_k$ traversing the next node on the path, $C_k(i) \in P_k \cap C(i)$, is denoted $p(C_k(i)|i)$. The probability of the leaf and its class $k$ is then

$$p(k) = \prod_{i \in P_k} p(C_k(i)|i) \quad (1)$$

In soft inference, the final class prediction $\hat{k}$ is defined over these class probabilities:

$$\hat{k} = \arg\max_k p(k) = \arg\max_k \prod_{i \in P_k} p(C_k(i)|i) \quad (2)$$

Our inference strategy has two benefits: (a) since the architecture is unchanged, the fully-connected layer can be run regularly (Table 5) or as decision rules (Table 1), and (b) unlike decision trees and other conditionally-executed models (Tanno et al.
, 2019; Veit & Belongie, 2018), our method can recover from a mistake made early in the hierarchy given sufficient uncertainty along the incorrect path (Figure 1C, Appendix Table 7). This inference mode bests classic tree inference (Appendix C.2).
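The following sketch is our reading of the inference procedure above (not released code): it seeds node weights from the fully-connected layer and multiplies softmaxed child scores along every root-to-leaf path. The dictionary-based tree encoding is an assumption for illustration.

```python
import numpy as np

def node_weights(W, subtree_leaves):
    """Seed rule weights: leaf i uses row w_k of the FC layer's W (K x D);
    an inner node averages the leaf weights in its subtree."""
    return {i: W[leaves].mean(axis=0) for i, leaves in subtree_leaves.items()}

def soft_inference(x, root, children, n):
    """Eqs. (1)-(2): accumulate path probabilities, then take the argmax leaf.
    children: node id -> list of child ids (empty/missing for leaves);
    n: node id -> weight vector (from node_weights)."""
    probs, stack = {}, [(root, 1.0)]
    while stack:
        i, p = stack.pop()
        if not children.get(i):              # leaf: record path probability p(k)
            probs[i] = p
            continue
        scores = np.array([n[j] @ x for j in children[i]])
        scores = np.exp(scores - scores.max())
        scores /= scores.sum()               # p(j | i) via softmax of inner products
        stack.extend(zip(children[i], p * scores))
    return max(probs, key=probs.get)         # eq. (2): argmax_k p(k)
```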
The paper proposes a method to make neural networks more accurate and interpretable by replacing their final layers with a probabilistic decision tree. As a result, given an input image, the network can produce the sequence of decisions that leads to the final classification result. The method is trained with soft decisions, assigning a probability to each leaf, each of which is associated with a single class. The tree's decision hyperplanes are constructed automatically from the backbone network's final dense layer and fine-tuned. The fact that the decisions are soft solves the differentiability problem of discrete decisions, as in various other similar papers, cited or uncited (more below).
SP:142a01056d20ddab91353b9d2ec07925f82d10ea
NBDT: Neural-Backed Decision Tree
1 INTRODUCTION . Many computer vision applications ( e.g . medical imaging and autonomous driving ) require insight into the model ’ s decision process , complicating applications of deep learning which are traditionally black box . Recent efforts in explainable computer vision attempt to address this need and can be grouped into one of two categories : ( 1 ) saliency maps and ( 2 ) sequential decision processes . Saliency maps retroactively explain model predictions by identifying which pixels most affected the prediction . However , by focusing on the input , saliency maps fail to capture the model ’ s decision making process . For example , saliency offers no insight for a misclassification when the model is “ looking ” at the right object for the wrong reasons . Alternatively , we can gain insight into the model ’ s decision process by breaking up predictions into a sequence of smaller semantically meaningful decisions as in rule-based models like decision trees . However , existing efforts to fuse deep learning and decision trees suffer from ( 1 ) significant accuracy loss , relative to contemporary models ( e.g. , residual networks ) , ( 2 ) reduced interpretability due to accuracy optimizations ( e.g. , impure leaves and ensembles ) , and ( 3 ) tree structures that offer limited insight into the model ’ s credibility . To address these , we propose Neural-Backed Decision Trees ( NBDTs ) to jointly improve both ( 1 ) accuracy and ( 2 ) interpretability of modern neural networks , utilizing decision rules that preserve ( 3 ) properties like sequential , discrete decisions ; pure leaves ; and non-ensembled predictions . These properties in unison enable unique insights , as we show . We acknowledge that there is no universally-accepted definition of interpretability ( Lundberg et al. , 2020 ; Doshi-Velez & Kim , 2017 ; Lipton , 2016 ) , so to show interpretability , we adopt a definition offered by Poursabzi-Sangdeh et al . ( 2018 ) : A model is interpretable if a human can validate its prediction , determining when the model has made a sizable mistake . We picked this definition for its importance to downstream benefits we can evaluate , specifically ( 1 ) model or dataset debugging and ( 2 ) improving human trust . To accomplish this , NBDTs replace the final linear layer of a neural network with a differentiable oblique decision tree and , unlike its predecessors ( i.e . decision trees , hierarchical classifiers ) , uses a hierarchy derived from model parameters , does not employ a hierarchical softmax , and can be created from any existing classification neural network without architectural modifications . These improvements ⇤denotes equal contribution tailor the hierarchy to the network rather than overfit to the feature space , lessens the decision tree ’ s reliance on highly uncertain decisions , and encourages accurate recognition of high-level concepts . These benefits culminate in joint improvement of accuracy and interpretability . Our contributions : 1 . We propose a tree supervision loss , yielding NBDTs that match/outperform and outgeneralize modern neural networks ( WideResNet , EfficientNet ) on ImageNet , TinyImageNet200 , and CIFAR100 . Our loss also improves the original model by up to 2 % . 2 . We propose alternative hierarchies for oblique decision trees – induced hierarchies built using pre-trained neural network weights – that outperform both data-based hierarchies ( e.g . built with information gain ) and existing hierarchies ( e.g . WordNet ) , in accuracy . 
3 . We show NBDT explanations are more helpful to the user when identifying model mistakes , preferred when using the model to assist in challenging classification tasks , and can be used to identify ambiguous ImageNet labels . 2 RELATED WORKS . Saliency Maps . Numerous efforts ( Springenberg et al. , 2014 ; Zeiler & Fergus , 2014 ; Simonyan et al. , 2013 ; Zhang et al. , 2016 ; Selvaraju et al. , 2017 ; Ribeiro et al. , 2016 ; Petsiuk et al. , 2018 ; Sundararajan et al. , 2017 ) have explored the design of saliency maps identifying pixels that most influenced the model ’ s prediction . White-box techniques ( Springenberg et al. , 2014 ; Zeiler & Fergus , 2014 ; Simonyan et al. , 2013 ; Selvaraju et al. , 2017 ; Sundararajan et al. , 2017 ) use the network ’ s parameters to determine salient image regions , and black-box techniques ( Ribeiro et al. , 2016 ; Petsiuk et al. , 2018 ) determine pixel importance by measuring the prediction ’ s response to perturbed inputs . However , saliency does not explain the model ’ s decision process ( e.g . Was the model confused early on , distinguishing between Animal and Vehicle ? Or is it only confused between dog breeds ? ) . Transfer to Explainable Models . Prior to the recent success of deep learning , decision trees were state-of-the-art on a wide variety of learning tasks and the gold standard for interpretability . Despite this recency , study at the intersection of neural network and decision tree dates back three decades , where neural networks were seeded with decision tree weights ( Banerjee , 1990 ; 1994 ; Ivanova & Kubat , 1995a ; b ) , and decision trees were created from neural network queries ( Krishnan et al. , 1999 ; Boz , 2000 ; Dancey et al. , 2004 ; Craven & Shavlik , 1996 ; 1994 ) , like distillation ( Hinton et al. , 2015 ) . The modern analog of both sets of work ( Humbird et al. , 2018 ; Siu , 2019 ; Frosst & Hinton , 2017 ) evaluate on feature-sparse , sample-sparse regimes such as the UCI datasets ( Dua & Graff , 2017 ) or MNIST ( LeCun et al. , 2010 ) and perform poorly on standard image classification tasks . Hybrid Models . Recent work produces hybrid decision tree and neural network models to scale up to datasets like CIFAR10 ( Krizhevsky , 2009 ) , CIFAR100 ( Krizhevsky , 2009 ) , TinyImageNet ( Le & Yang , 2015 ) , and ImageNet ( Deng et al. , 2009 ) . One category of models organizes the neural network into a hierarchy , dynamically selecting branches to run inference ( Veit & Belongie , 2018 ; McGill & Perona , 2017 ; Teja Mullapudi et al. , 2018 ; Redmon & Farhadi , 2017 ; Murdock et al. , 2016 ) . However , these models use impure leaves resulting in uninterpretatble , stochastic paths . Other approaches fuse deep learning into each decision tree node : an entire neural network ( Murthy et al. , 2016 ) , several layers ( Murdock et al. , 2016 ; Roy & Todorovic , 2016 ) , a linear layer ( Ahmed et al. , 2016 ) , or some other parameterization of neural network output ( Kontschieder et al. , 2015 ) . These models see reduced interpretability by using k-way decisions with large k ( via depth-2 trees ) ( Ahmed et al. , 2016 ; Guo et al. , 2018 ) or employing an ensemble ( Kontschieder et al. , 2015 ; Ahmed et al. , 2016 ) , which is often referred to as a “ black box ” ( Carvalho et al. , 2019 ; Rudin , 2018 ) . Hierarchical Classification ( Silla & Freitas , 2011 ) . 
One set of approaches directly uses a preexisting hierarchy over classes, such as WordNet (Redmon & Farhadi, 2017; Brust & Denzler, 2019; Deng et al.). However, conceptual similarity is not indicative of visual similarity. Other models build a hierarchy using the training set directly, via a classic data-dependent metric like Gini impurity (Alaniz & Akata, 2019) or information gain (Rota Bulo & Kontschieder, 2014; Biçici et al., 2018). These models are instead prone to overfitting, per Tanno et al. (2019). Finally, several works introduce hierarchical surrogate losses (Wu et al., 2017; Deng et al., 2012), such as hierarchical softmax (Mohammed & Umaashankar, 2018), but as the authors note, these methods quickly suffer from major accuracy loss with more classes or higher-resolution images (e.g., beyond CIFAR10). We demonstrate that hierarchical classifiers attain higher accuracy without a hierarchical softmax. 3 METHOD . Neural-Backed Decision Trees (NBDTs) replace a network's final linear layer with a decision tree. Unlike classical decision trees or many hierarchical classifiers, NBDTs use path probabilities for inference (Sec 3.1) to tolerate highly-uncertain intermediate decisions, build a hierarchy from pretrained model weights (Sec 3.2 & 3.3) to lessen overfitting, and train with a hierarchical loss (Sec 3.4) to learn high-level decisions (e.g., Animal vs. Vehicle) significantly better. 3.1 INFERENCE . Our NBDT first featurizes each sample using the neural network backbone; the backbone consists of all neural network layers before the final linear layer. Second, we run the final fully-connected layer as an oblique decision tree. However, (a) a classic decision tree cannot recover from a mistake early in the hierarchy, and (b) just running a classic decision tree on neural features drops accuracy significantly, by up to 11% (Table 2). Thus, we present modified decision rules (Figure 1, B): 1. Seed oblique decision rule weights with neural network weights. An oblique decision tree supports only binary decisions, using a hyperplane for each decision. Instead, we associate a weight vector $n_i$ with each node. For leaf nodes, where $i = k \in [1, K]$, each $n_i = w_k$ is a row vector from the fully-connected layer's weights $W \in \mathbb{R}^{D \times K}$. For all inner nodes, where $i \in [K+1, N]$, find all leaves $k \in L(i)$ in node $i$'s subtree and average their weights: $n_i = \sum_{k \in L(i)} w_k / |L(i)|$. 2. Compute node probabilities. Child probabilities are given by softmax inner products. For each sample $x$ and node $i$, compute the probability of each child $j \in C(i)$ using $p(j|i) = \mathrm{softmax}(\vec{n}_i)[j]$, where $\vec{n}_i = (\langle n_j, x \rangle)_{j \in C(i)}$. 3. Pick a leaf using path probabilities. Inspired by Deng et al. (2012), consider a leaf, its class $k$, and its path from the root $P_k$. The probability of each node $i \in P_k$ traversing the next node in the path $C_k(i) \in P_k \cap C(i)$ is denoted $p(C_k(i)|i)$. Then, the probability of the leaf and its class $k$ is

$p(k) = \prod_{i \in P_k} p(C_k(i) \mid i)$  (1)

In soft inference, the final class prediction $\hat{k}$ is defined over these class probabilities,

$\hat{k} = \arg\max_k p(k) = \arg\max_k \prod_{i \in P_k} p(C_k(i) \mid i)$  (2)

Our inference strategy has two benefits: (a) since the architecture is unchanged, the fully-connected layer can be run regularly (Table 5) or as decision rules (Table 1), and (b) unlike decision trees and other conditionally-executed models (Tanno et al., 2019; Veit & Belongie, 2018), our method can recover from a mistake early in the hierarchy given sufficient uncertainty along the incorrect path (Figure 1 C, Appendix Table 7). This inference mode outperforms classic tree inference (Appendix C.2).
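To make the three decision rules concrete, the following is a minimal NumPy sketch of NBDT-style soft inference on a toy two-level hierarchy. The tree layout, class count, and feature dimension are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
K, D = 4, 8                          # 4 leaf classes, feature dimension 8
W = rng.normal(size=(K, D))          # final fully-connected layer weights

# Hierarchy: leaves {0,1} under inner node 4, leaves {2,3} under inner
# node 5, and inner nodes {4,5} under the root 6.
children = {6: [4, 5], 4: [0, 1], 5: [2, 3]}
leaves_below = {0: [0], 1: [1], 2: [2], 3: [3],
                4: [0, 1], 5: [2, 3], 6: [0, 1, 2, 3]}

# Rule 1: seed each node's weight vector with the mean of its leaves' rows.
n = {i: W[leaves_below[i]].mean(axis=0) for i in range(7)}

def soft_inference(x):
    """Return p(k) for every leaf class k via path probabilities (Eq. 1)."""
    p_node = {6: 1.0}                       # the root has probability 1
    for i in [6, 4, 5]:                     # parents before children
        # Rule 2: child probabilities via a softmax over inner products.
        probs = softmax(np.array([n[j] @ x for j in children[i]]))
        for j, pj in zip(children[i], probs):
            p_node[j] = p_node[i] * pj      # multiply along the path
    return np.array([p_node[k] for k in range(K)])

x = rng.normal(size=D)                      # a featurized sample
p = soft_inference(x)
print(p, "predicted class:", p.argmax())    # Rule 3: argmax of path products
```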
Aiming to improve the interpretability and the accuracy of neural networks, this paper takes a step further on integrating NNs with decision trees. It replaces the final linear layer of the NN with a decision tree induced from pre-trained model weights. It takes advantage of both hard and soft decision trees and designs a suitable tree supervision loss thereon. Extensive experiments verify the design choices of the proposed components. On both small-scale and large-scale datasets, it beats its decision tree counterparts. It also shows strength in generalization and interpretability compared to NNs.
SP:142a01056d20ddab91353b9d2ec07925f82d10ea
Combining Physics and Machine Learning for Network Flow Estimation
1 INTRODUCTION . In many applications, ranging from road traffic to supply chains to power networks, the dynamics of flows on the edges of a graph are governed by physical laws/models (Bressan et al., 2014; Garavello & Piccoli, 2006). For instance, the LWR model describes equilibrium equations for road traffic (Lighthill & Whitham, 1955; Richards, 1956). However, it is often difficult to fully observe flows in these applications and, as a result, they rely on off-the-shelf machine learning models to make predictions about missing flows (Li et al., 2017; Yu et al., 2018). A key limitation of these machine learning models is that they disregard the physics governing the flows. So the question arises: can we combine physics and machine learning to make better flow predictions? This paper investigates the problem of predicting missing edge flows based on partial observations and the underlying domain-specific physics defined by flow conservation and edge features (Jia et al., 2019). Edge flows depend on the graph topology due to a flow conservation law—i.e., the total in-flow at every vertex is approximately its total out-flow. Moreover, the flow at an edge also depends on its features, which might regularize the space of possible flow distributions in the graph. Here, we propose a model that learns how to predict missing flows from data using bilevel optimization (Franceschi et al., 2017) and neural networks. More specifically, features are given as inputs to a neural network that produces edge flow regularizers. Weights of the network are then optimized via reverse-mode differentiation based on a flow estimation loss over multiple train-validation pairs. Our work falls under a broader effort towards incorporating physics knowledge into machine learning, which is relevant for natural sciences and engineering applications where data availability is limited (Rackauckas et al., 2020). Conservation laws (of energy, mass, momentum, charge, etc.) are essential to our understanding of the physical world. The classical Noether's theorem shows that such laws arise from symmetries in nature (Hanc et al., 2004). However, flow estimation, which is an inverse problem (Tarantola, 2005; Arridge et al., 2019), is ill-posed under conservation alone. Regularization enables us to apply domain knowledge in the solution of inverse problems. We motivate our problem and evaluate its solutions using two application scenarios. The first is road traffic networks (Coclite et al., 2005), where vertices represent locations, edges are road segments, flows are counts of vehicles that traverse a segment, and features include numbers of lanes and speed limits. The second scenario is electric power networks (Dörfler et al., 2018), where vertices represent power buses, edges are power lines, flows are amounts of power transmitted, and edge features include resistances and lengths of lines. Irrigation channels, gas pipelines, blood circulation, supply chains, air traffic, and telecommunication networks are other examples of flow graphs.
Our contributions can be summarized as follows: (1) We introduce a missing flow estimation problem with applications in a broad class of flow graphs; (2) we propose a model for flow estimation that is able to learn the physics of flows by combining reverse-mode differentiation and neural networks; (3) we show that our model outperforms the best baseline by up to 18%; and (4) we provide evidence that our model learns interpretable physical properties, such as the role played by resistance in a power transmission network and by the number of lanes in a road traffic network. 2 FLOW ESTIMATION PROBLEM . We introduce the flow estimation problem, which consists of inferring missing flows in a network based on a flow conservation law and edge features. We provide a list of symbols in the Appendix. Flow Graph . Let $G(V, E, X)$ be a flow graph with vertices $V$ ($n = |V|$), edges $E$ ($m = |E|$), and edge feature matrix $X \in \mathbb{R}^{m \times d}$, where $X[e]$ are the features of edge $e$. A flow vector $f \in \mathbb{R}^m$ contains the (possibly noisy) flow $f_e$ for each edge $e \in E$. In case $G$ is directed, $f \in \mathbb{R}^m_+$; otherwise, a flow is negative if it goes against the arbitrary orientation of its edge. We assume that flows are induced by the graph, and thus the total flow—in plus out—at each vertex is approximately conserved:

$\sum_{(v_i, u) \in E} f_{(v_i, u)} \approx \sum_{(u, v_o) \in E} f_{(u, v_o)}, \quad \forall u \in V$

In the case of a road network, flow conservation implies that vehicles mostly remain on the road. Flow Estimation Problem . Given a graph $G(V, E, X)$ with partial flow observations $\hat{f} \in \mathbb{R}^{m'}$ for a subset $E' \subseteq E$ of edges ($\hat{f}_e$ is the flow for $e \in E'$, $m' = |E'| < m$), predict flows for edges in $E \setminus E'$. In our road network example, partial vehicle counts $\hat{f}$ might be measured by sensors placed at a few segments, and the goal is to estimate counts at the remaining segments. One would expect flows not to be fully conserved in most applications due to the existence of inputs and outputs, such as parking lots and power generators/consumers. In case these input and output values are known exactly, they can be easily incorporated into our problem as flow observations. Moreover, if they are known approximately, we can apply them as priors (as will be detailed in the next section). For the remainder of this paper, we assume that inputs and outputs are unknown and employ flow conservation as an approximation of the system. Thus, different from classical flow optimization problems, such as min-cost flow (Ahuja et al., 1988), we assume that flows are conserved only approximately. Notice that our problem is similar to the one studied in Jia et al. (2019). However, while their definition also assumes flow conservation, it does not take into account edge features. We claim that these features play an important role in capturing the physics of flows. Our main contribution is a new model that is able to learn how to regularize flows based on edge features using neural networks. 3 OUR APPROACH : PHYSICS+LEARNING . In this section, we introduce our approach for the flow estimation problem, which is summarized in Figure 1. We formulate flow estimation as an optimization problem (Section 3.1), where the interplay between the flow network topology and edge features is defined by the physics of flow graphs. Flow estimation is shown to be equivalent to a regularized least-squares problem (Section 3.2).
Moreover, we describe how the effect of edge features and the graph topology can be learned from data using bilevel optimization and neural networks in Section 3.3. Finally, we propose a reverse-mode differentiation algorithm for flow estimation in Section 3.4. 3.1 FLOW ESTIMATION VIA OPTIMIZATION . The extent to which flow conservation holds for flows in a graph is known as divergence and can be measured using the oriented incidence matrix $B \in \mathbb{R}^{n \times m}$ of $G$. The matrix is defined as follows: $B_{ij} = 1$ if $\exists u$ such that $e_j = (v_i, u) \in E$, $B_{ij} = -1$ if $\exists u$ such that $e_j = (u, v_i) \in E$, and $B_{ij} = 0$ otherwise. Given $B$ and $f$, the divergence at a vertex $u$ can be computed as:

$(Bf)_u = \sum_{(v_i, u) \in E} f_{(v_i, u)} - \sum_{(u, v_o) \in E} f_{(u, v_o)}$  (1)

Thus, we can compute the total (squared) divergence in the graph as $\|Bf\|_2^2 = f^\top B^\top B f = \sum_{u \in V} ((Bf)_u)^2$. One could try to solve the flow estimation problem by minimizing $\|Bf\|_2^2$ while keeping the observed flows fixed; however, this problem is ill-posed—there might be multiple solutions to the optimization. The standard approach in such a scenario is to resort to regularization. In particular, we apply a generic regularization function $\Phi$ with parameters $\Theta$ as follows:

$f^* = \arg\min_{f \in \Omega} \|Bf\|_2^2 + \Phi(f, X; f^{(0)}; \Theta) \quad \text{s.t.} \quad f_e = \hat{f}_e, \ \forall e \in E'$  (2)

where $\Omega$ is the domain of $f$, $f^{(0)} \in \mathbb{R}^m$ is a prior for flows, $f_e$ ($\hat{f}_e$) are the entries of $f$ ($\hat{f}$) for edge $e$, and the constraint guarantees that observed flows are not changed. Priors $f^{(0)}$, not to be confused with observed flows $\hat{f}$, should be set according to the application (e.g., as zero, based on a black-box model, or from historical data). Regarding the domain $\Omega$, we consider $\Omega = \mathbb{R}^m$ and $\Omega = \mathbb{R}^m_+$. The second case is relevant for directed graphs—when flows must follow edge orientations (e.g., traffic). In Jia et al. (2019), the authors set $\Phi(f, X, f^{(0)}; \Theta)$ as $\lambda^2 \|f\|_2^2$ for a regularization parameter $\lambda$, which implies a uniform zero prior with an $L_2$ penalty over edges. We claim that the regularization function plays an important role in capturing the physics of flow graphs. As an example, for a power network, $\Phi$ should account for the resistance of the lines. Thus, we propose learning the regularization from data. Our approach is based on a least-squares formulation, which is described next. 3.2 REGULARIZED LEAST-SQUARES FORMULATION . The flow estimation problem can be viewed as an inverse problem (Tarantola, 2005). Let $x \in \mathbb{R}^{m - m'}$ be the vector of missing flows and $H \in \mathbb{R}^{m \times (m - m')}$ be a matrix such that $H_{ij} = 1$ if $f_i$ maps to $x_j$ (i.e., they are associated with the same edge), and $H_{ij} = 0$ otherwise. Moreover, let $\tilde{f} \in \mathbb{R}^m$ be such that $\tilde{f}_e = \hat{f}_e$ if $e \in E'$ and $\tilde{f}_e = 0$ otherwise. Using this notation, we define flow estimation as $BHx = -B\tilde{f} + \epsilon$, where $BH$ is a forward operator, projecting $x$ to a vector of vertex divergences, and $-B\tilde{f} + \epsilon$ is the observed data, capturing (negative) vertex divergences for observed flows. The error $\epsilon$ can be interpreted as noise in observations or some level of model misspecification. We can also define a regularized least-squares problem with the goal of recovering the missing flows $x$:

$x^* = \arg\min_{x \in \Omega'} \|BHx + B\tilde{f}\|_2^2 + \|x - x^{(0)}\|^2_{Q(X; \Theta)}$  (3)

where $\Omega'$ is a projection of the domain of $f$ to the space of $x$, $\|x\|^2_M = x^\top M x$ is the matrix-scaled norm of $x$, and $x^{(0)} \in \mathbb{R}^{m - m'}$ are priors for missing flows.
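Before turning to the regularization term, here is a small NumPy sketch of the divergence machinery in Eq. 1; the toy graph and flow values are invented for illustration. The sign convention below makes $(Bf)_u$ equal inflow minus outflow; since the divergence is squared, the overall sign is immaterial.

```python
import numpy as np

edges = [(0, 1), (1, 2), (1, 3), (2, 3)]    # directed edges (source, target)
n, m = 4, len(edges)

B = np.zeros((n, m))                        # oriented incidence matrix
for j, (u, v) in enumerate(edges):
    B[v, j] = 1.0                           # edge j enters vertex v
    B[u, j] = -1.0                          # edge j leaves vertex u

f = np.array([2.0, 1.0, 1.0, 1.0])          # conserved at interior vertices;
                                            # vertices 0 and 3 act as the
                                            # source and sink
print(B @ f)                                # per-vertex divergence (Bf)_u
print(np.sum((B @ f) ** 2))                 # total squared divergence ||Bf||^2
```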
The regularization function $\Phi(f, X; f^{(0)}, \Theta)$ has the form $\|x - x^{(0)}\|^2_{Q(X; \Theta)}$, where the matrix $Q(X; \Theta)$ is a function of the parameters $\Theta$ and the edge features $X$. We focus on the case where $Q(X; \Theta)$ is non-negative and diagonal. Equation 3 has a Bayesian interpretation, with $x$ being a maximum likelihood estimate under a Gaussian assumption—i.e., $x \sim \mathcal{N}(x^{(0)}, Q(X; \Theta)^{-1})$ and $B\tilde{f} \sim \mathcal{N}(0, I)$ (Tarantola, 2005). Thus, $Q(X; \Theta)$ captures the variance of the prior estimates $x^{(0)}$ relative to that of the flow observations. This allows the regularization function to adapt to different edges based on their features. For instance, in our road network example, $Q(X; \Theta)$ might place a lower weight on flow conservation for flows at road segments with a small number of lanes, which are possible traffic bottlenecks. Given the least-squares formulation described in this section, how do we model the regularization function $Q$ and learn its parameters $\Theta$? We would like $Q$ to be expressive enough to capture complex physical properties of flows, while $\Theta$ should be computable accurately and efficiently. We address these challenges in the remainder of this paper.
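For a fixed diagonal $Q$, Eq. 3 is an unconstrained quadratic in $x$ and can be solved via its normal equations. The toy graph, hand-picked diagonal of $Q$, and zero prior in the sketch below are illustrative assumptions; in the paper, $Q(X; \Theta)$ is produced by a neural network from edge features, and the domain constraint $\Omega'$ is ignored here for simplicity.

```python
import numpy as np

edges = [(0, 1), (1, 2), (1, 3), (2, 3)]
n, m = 4, len(edges)
B = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    B[v, j], B[u, j] = 1.0, -1.0

observed = {0: 2.0, 3: 1.0}                  # edge index -> observed flow
missing = [j for j in range(m) if j not in observed]

f_tilde = np.zeros(m)                        # observed flows, 0 elsewhere
for j, val in observed.items():
    f_tilde[j] = val

H = np.zeros((m, len(missing)))              # H[i, j] = 1 iff f_i maps to x_j
for col, j in enumerate(missing):
    H[j, col] = 1.0

Q = np.diag([0.1, 0.1])                      # stand-in for diag(Q(X; Θ))
x0 = np.zeros(len(missing))                  # zero prior x^(0)

A = B @ H
# Normal equations of Eq. 3: (A^T A + Q) x = -A^T B f_tilde + Q x0
x_star = np.linalg.solve(A.T @ A + Q, -A.T @ (B @ f_tilde) + Q @ x0)
print(x_star)                                # estimated missing flows
```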
The authors propose a parametric regularizer for estimating unobserved flows in networks, incorporating edge features and other side information. The parameters of the regularizer are learned by minimizing the empirical cross-validated MSE. Regularization is necessary because the basic problem, while convex, is typically under-constrained, resulting in an infinite space of solutions that match the observed data.
In this paper, the authors introduce a method for missing flow estimation. This method has the potential to address some important applications in transportation, power systems, and water management. One major difference compared with previous work is that edge features are incorporated into the optimization process, so the model has a better chance of learning edge-specific patterns. The experimental results show some success of the proposed method on traffic and power datasets.
SP:21296aeb09e1d3d7ca0a729f1ab614f15b12960d
Learning Efficient Planning-based Rewards for Imitation Learning
1 INTRODUCTION . Imitation learning (IL) offers an alternative to reinforcement learning (RL) for training an agent: it mimics the demonstrations of an expert and avoids manually designed reward functions. Behavioral cloning (BC) (Pomerleau, 1991) is the simplest form of imitation learning, which learns a policy using supervised learning. A more advanced family of methods, inverse reinforcement learning (IRL) (Ng & Russell, 2000; Abbeel & Ng, 2004), seeks to recover a reward function from the demonstrations and train an RL agent on the recovered reward function. In the maximum entropy variant of IRL, the aim is to find a reward function that makes the demonstrations appear near-optimal under the principle of maximum entropy (Ziebart et al., 2008; 2010; Boularias et al., 2011; Finn et al., 2016). However, most state-of-the-art IRL methods fail to meet the performance of the demonstrations in high-dimensional environments with limited demonstration data, e.g., a one-life demonstration in the Atari domain (Yu et al., 2020). This is because the main goal of these IRL approaches is to recover a reward function that justifies the demonstrations only. The rewards recovered from limited demonstration data are vulnerable to overfitting. Optimizing these rewards from an arbitrary initial policy results in inferior performance. Recently, Yu et al. (2020) proposed generative intrinsic reward learning (GIRIL) for imitation learning with limited demonstration data. This method outperforms the expert and IRL methods in several Atari games. Although GIRIL uses the prediction error as curiosity to design a surrogate reward that encourages (pushes) states away from the demonstration and avoids overfitting, the curiosity also results in rewards of ambiguous quality in the environment. In this paper, we focus on addressing two key issues of previous methods when learning with limited demonstration data: 1) the overfitting problem, and 2) the ambiguous quality of the reward function. To address these issues, we propose to learn a straightforward surrogate reward function by learning to plan from the demonstration data, which is more reasonable than the previous intrinsic reward function (i.e., the prediction error between states). Differentiable planning modules (DPMs) are potentially useful for achieving this goal, since they learn to map an observation to a planning computation for a task and generate action predictions based on the resulting plan (Tamar et al., 2016; Nardelli et al., 2019; Zhang et al., 2020). Value iteration networks (VIN) (Tamar et al., 2016) are the representative example, representing value iteration as a convolutional neural network (CNN). Meaningful reward and value maps have been learned along with the useful planning computation, which leads to policies that generalize well to new tasks. However, due to the inefficiency of summarizing complicated transition dynamics, VIN fails to scale up to the Atari domain. To address this challenge, we propose a novel method called variational planning-embedded reward learning (vPERL), which is composed of two submodules: a planning-embedded action back-tracing module and a transition dynamics module. We leverage a variational objective based on the conditional variational autoencoder (VAE) (Sohn et al., 2015) to jointly optimize the two submodules, which greatly improves the generalization ability.
This is critical for achieving a straightforward and smooth reward function and value function with limited demonstration data. As shown in Figure 1, vPERL learns meaningful reward and value maps that attend to the resulting region of the agent executing an action, which indicates a meaningful planning computation. In contrast, directly applying VIN in the Atari domain by way of supervised learning (Tamar et al., 2016) only learns reward and value maps that attend to no specific region, which is usually of no avail. Empirical results show that our method outperforms state-of-the-art IRL methods on multiple Atari games and continuous control tasks. Remarkably, our method achieves performance that is up to 58 times that of the demonstration. Moreover, the average performance improvement of our method is 1,139.1% of the demonstration over eight Atari games. 2 BACKGROUND AND RELATED LITERATURE . A Markov Decision Process (MDP) (Bellman, 1966) is a standard model for sequential decision making and planning. An MDP $M$ is defined by a tuple $(S, A, T, R, \gamma)$, where $S$ is the set of states, $A$ is the set of actions, $T : S \times A \times S \to \mathbb{R}_+$ is the environment transition distribution, $R : S \to \mathbb{R}$ is the reward function, and $\gamma \in (0, 1)$ is the discount factor (Puterman, 2014). The expected discounted return or value of the policy $\pi$ is given by $V^\pi(s) = \mathbb{E}_\tau\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \mid s_0 = s\right]$, where $\tau = (s_0, a_0, s_1, a_1, \cdots)$ denotes the trajectory, in which the actions are selected according to $\pi$: $s_0 \sim T_0(s_0)$, $a_t \sim \pi(a_t|s_t)$, and $s_{t+1} \sim T(s_{t+1}|s_t, a_t)$. The goal in an MDP is to find the optimal policy $\pi^*$ that enables the agent to obtain high long-term rewards. Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) extends IRL by integrating an adversarial training technique for distribution matching (Goodfellow et al., 2014). GAIL performs well in low-dimensional applications, e.g., MuJoCo. However, it does not scale well to high-dimensional scenarios, such as Atari games (Brown et al., 2019a). Variational adversarial imitation learning (VAIL) (Peng et al., 2019) improves on GAIL by compressing the information via a variational information bottleneck. GAIL and VAIL inherit the problems of adversarial training, such as instability of the training process, and are vulnerable to overfitting when learning with limited demonstration data. We have included both methods as comparisons to vPERL in our experiments. Generative Intrinsic Reward driven Imitation Learning (GIRIL) (Yu et al., 2020) leverages a generative model to learn generative intrinsic rewards for better exploration. Though GIRIL outperforms previous IRL methods on several Atari games, the reward map of GIRIL is ambiguous and less informative, which results in inconsistent performance improvements across environments. In contrast, our vPERL learns an efficient planning-based reward that is more straightforward and informative. We have included GIRIL as a competitive baseline in our experiments. Differentiable planning modules perform end-to-end learning of a planning computation, which leads to policies that generalize to new tasks. Value iteration (VI) (Bellman, 1957) is a well-known method for calculating the optimal value $V^*$ and optimal policy $\pi^*$: $V_{n+1}(s) = \max_a Q_n(s, a)$, where $Q_n(s, a) = R(s, a) + \gamma \sum_{s'} T(s'|s, a) V_n(s')$ denotes the Q value in the $n$-th iteration.
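For reference, the VI update above is a few lines of NumPy in the tabular case; the random MDP here is synthetic, and only the update rule itself reflects the text.

```python
import numpy as np

def value_iteration(T, R, gamma=0.9, n_iters=100):
    """T: (S, A, S) transition tensor; R: (S, A) reward table."""
    S, A, _ = T.shape
    V = np.zeros(S)
    for _ in range(n_iters):
        # Q_n(s, a) = R(s, a) + gamma * sum_{s'} T(s'|s, a) V_n(s')
        Q = R + gamma * (T @ V)       # shape (S, A)
        V = Q.max(axis=1)             # V_{n+1}(s) = max_a Q_n(s, a)
    return V, Q.argmax(axis=1)        # V* and pi*(s) = argmax_a Q(s, a)

rng = np.random.default_rng(0)
S, A = 5, 2
T = rng.random((S, A, S))
T /= T.sum(axis=2, keepdims=True)     # normalize into distributions
V, pi = value_iteration(T, rng.random((S, A)))
print(V, pi)
```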
The value function $V_n$ in VI converges as $n \to \infty$ to $V^*$, from which the optimal policy may be derived as $\pi^*(s) = \arg\max_a Q_\infty(s, a)$. Value iteration networks (VIN) (Tamar et al., 2016) embed the value iteration (VI) (Bellman, 1957) process in a recurrent convolutional network and generalize well in conventional navigation domains. VIN assumes there is some unknown embedded MDP $\bar{M}$ whose optimal plan contains useful information about the optimal plan in the original MDP $M$. VIN connects the two MDPs with a parametric reward function $\bar{R} = f_R(s)$. Nardelli et al. (2019) propose value propagation networks (VPN), which generalize VIN for better sample complexity by employing value propagation (VProp). Recently, universal value iteration networks (UVIN) extended VIN to spatially variant MDPs (Zhang et al., 2020). Although VIN can be extended to irregular spatial graphs by applying a graph convolutional operator (Niu et al., 2018), most of the VIN variants still focus on solving conventional navigation problems (Zhang et al., 2020). In this paper, we extend differentiable planning modules to learn an efficient reward function for imitation learning on limited demonstration data. We focus on leveraging the learned reward function for imitation learning, while previous work on VIN focuses more on the value function. Therefore, our work is complementary to the research on VIN and its variants. Note that any differentiable planning module can be embedded in our method. As a simple example, we utilize the basic VIN as a backbone to build our reward learning module. 3 VARIATIONAL PLANNING-EMBEDDED REWARD LEARNING . In this section, we introduce our solution, variational planning-embedded reward learning (vPERL). As illustrated in Figure 2, our reward learning module is composed of two submodules that accomplish planning-embedded action back-tracing and explicit forward transition dynamics modeling; a sketch of the first follows below. 3.1 ACTION BACK-TRACING AND FORWARD DYNAMICS MODELLING IN VPERL . Planning-embedded action back-tracing . Instead of directly applying VIN for policy learning (Tamar et al., 2016), we build our first submodule $q_\phi(a_t|s_t, s_{t+1})$ for action back-tracing. As illustrated in the top section of Figure 2, we first obtain the reward map $R = f_R(s_t, s_{t+1})$ on an embedded MDP $M$, where $f_R$ is a convolutional layer. A VI module $f_{VI}$ takes in the reward map $R$ and effectively performs $K$ iterations of VI by recurrently applying a convolutional layer $Q$ for $K$ times (Tamar et al., 2016). The $Q$ layer is then max-pooled to obtain the next-iteration value $V$. The right-directed circular arrow in light blue denotes the direction of the convolutions. Then, we simply obtain the action from the intermediate optimal value $V^*$ by an action mapping function: $a_t = f_a(V^*)$. On these terms, we build our planning-embedded action back-tracing submodule, which is formally represented as $q_\phi(a_t|s_t, s_{t+1}) = f_a(f_{VI}(f_R(s_t, s_{t+1})))$. Since the convolutional kernel is incapable of summarizing the transition dynamics in a complex environment, directly training this submodule is still insufficient for learning an efficient reward function and planning computation in an environment like the Atari domain.
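As a toy PyTorch sketch of the back-tracing submodule $q_\phi(a_t|s_t, s_{t+1}) = f_a(f_{VI}(f_R(s_t, s_{t+1})))$, consider the module below. The channel counts, kernel sizes, number of unrolled iterations $K$, and the `ActionBackTracing` name are all illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ActionBackTracing(nn.Module):
    def __init__(self, in_ch=2, hidden=16, n_actions=6, K=10):
        super().__init__()
        self.f_R = nn.Conv2d(2 * in_ch, 1, 3, padding=1)   # reward map f_R
        self.f_Q = nn.Conv2d(2, hidden, 3, padding=1)      # VI "Q" layer
        self.K = K
        self.f_a = nn.LazyLinear(n_actions)                # action mapping f_a

    def forward(self, s_t, s_t1):
        # f_R takes the stacked state pair and produces a reward map.
        R = self.f_R(torch.cat([s_t, s_t1], dim=1))        # (N, 1, H, W)
        V = torch.zeros_like(R)
        for _ in range(self.K):                            # K steps of VI
            Q = self.f_Q(torch.cat([R, V], dim=1))         # (N, hidden, H, W)
            V = Q.max(dim=1, keepdim=True).values          # max-pool channels
        return self.f_a(V.flatten(1))                      # action logits
```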
Explicit transition dynamics modeling via inverse VI . To address this limitation, we build another submodule, $p_\theta(s_{t+1}|a_t, s_t)$, for explicit transition dynamics modeling. We build this submodule on an inverse VI module, a neural network architecture that mimics the inverse of the VI process. The implementation of the inverse VI module is straightforward. We first map the action to the intermediate optimal value in another embedded MDP $M'$ by a value mapping function: $V'^* = f_{V'}(s_t, a_t)$. Then, we apply the inverse VI module to obtain the reward map $R'$. The inverse VI module $f'_{VI}$ takes in the intermediate value $V'$ and recurrently applies a deconvolutional layer $Q'$ for $K$ times to the value to obtain the reward map $R'$. The left-directed circular arrow in purple denotes the direction of the deconvolutions. To accomplish the transition, we map the obtained $R'$ to the future state by $s_{t+1} = f_{s'}(R')$. The transition modeling is therefore represented as $p_\theta(s_{t+1}|a_t, s_t) = f_{s'}(f'_{VI}(f_{V'}(s_t, a_t)))$, which is a differentiable submodule and can be trained simultaneously with the action back-tracing submodule. Variational solution to vPERL . A variational autoencoder (VAE) (Kingma & Welling, 2013) can be defined as an autoencoder whose training is regularized to avoid overfitting and to ensure that the latent space has good properties that enable a generative process. To keep the learned planning-based reward from overfitting to the demonstration, we optimize both submodules in a unified variational solution, which follows the formulation of the conditional VAE (Sohn et al., 2015). The conditional VAE is a conditional generative model for structured output prediction using Gaussian latent variables, composed of a conditional encoder, decoder, and prior. Accordingly, we regard the action back-tracing module $q_\phi(z|s_t, s_{t+1})$ as the encoder, $p_\theta(s_{t+1}|z, s_t)$ as the decoder, and $p_\theta(z|s_t)$ as the prior. Our vPERL module is trained by maximizing the following objective:

$\mathcal{L}(s_t, s_{t+1}; \theta, \phi) = \mathbb{E}_{q_\phi(z|s_t, s_{t+1})}[\log p_\theta(s_{t+1}|z, s_t)] - \mathrm{KL}(q_\phi(z|s_t, s_{t+1}) \,\|\, p_\theta(z|s_t)) - \alpha\, \mathrm{KL}(q_\phi(\hat{a}_t|s_t, s_{t+1}) \,\|\, \pi_E(a_t|s_t))$  (1)

where $z$ is the latent variable, $\pi_E(a_t|s_t)$ is the expert policy distribution, $\hat{a}_t = \mathrm{Softmax}(z)$ is the transformed latent variable, and $\alpha$ is a positive scaling weight. The first two terms on the RHS of Eq. (1) denote the evidence lower bound (ELBO) of the conditional VAE (Sohn et al., 2015). These two terms are critical for our reward learning module to perform planning-based action back-tracing and transition modeling. Additionally, we integrate the third term on the RHS of Eq. (1) to further boost the action back-tracing. The third term minimizes the KL divergence between the expert policy distribution $\pi_E(a_t|s_t)$ and the action distribution $q_\phi(\hat{a}_t|s_t, s_{t+1})$, where $\hat{a}_t = \mathrm{Softmax}(z)$ is transformed from the latent variable $z$. In this way, we train the forward state transition and the action back-tracing simultaneously.

Algorithm 1 Imitation learning via variational planning-embedded reward learning (vPERL).
1: Input: Expert demonstration data $D = \{(s_i, a_i)\}_{i=1}^N$.
2: Initialize policy $\pi$ and the dual planning networks.
3: for $e = 1, \cdots, E$ do
4:   Sample a batch of demonstrations $\tilde{D} \sim D$.
5:   Train the vPERL module on $\tilde{D}$ to convergence.
6: end for
7: for $i = 1, \cdots, \mathrm{MAXITER}$ do
8:   Update the policy via any policy gradient method, e.g., PPO, on the learned surrogate reward $r_t$.
9: end for
10: Output: Policy $\pi$.
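A schematic PyTorch sketch of the objective in Eq. (1) is given below. The Gaussian parameterization of the encoder and prior, the MSE reconstruction term, and the reduction of the third KL term to a cross-entropy against one-hot expert actions are simplifying assumptions of ours; the paper's exact parameterization may differ.

```python
import torch
import torch.nn.functional as F

def vperl_loss(enc, dec, prior, s_t, s_t1, expert_action, alpha=1.0):
    # Encoder q_phi(z | s_t, s_{t+1}): a diagonal Gaussian over z, whose
    # dimensionality equals the action-space size (so a_hat = softmax(z)).
    mu_q, logvar_q = enc(s_t, s_t1)
    z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()

    # Reconstruction term: log p_theta(s_{t+1} | z, s_t), Gaussian decoder.
    recon = -F.mse_loss(dec(z, s_t), s_t1, reduction="sum")

    # KL(q_phi(z | s_t, s_{t+1}) || p_theta(z | s_t)), both diagonal Gaussians.
    mu_p, logvar_p = prior(s_t)
    kl_z = 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0)

    # Third term: pull a_hat = softmax(z) toward the expert action. With a
    # one-hot expert policy, the KL reduces to cross-entropy up to a constant.
    kl_a = F.nll_loss(F.log_softmax(z, dim=-1), expert_action, reduction="sum")

    return -(recon - kl_z - alpha * kl_a)  # negate: minimize the -ELBO
```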
Note that the full objective in Eq. (1) is still a variational lower bound of the marginal likelihood $\log p_\theta(s_{t+1}|s_t)$. Accordingly, it is reasonable to maximize it as the objective of our reward learning module. By optimizing this objective, we improve both the forward state transition and the action back-tracing. As a result, our reward learning module efficiently models the transition dynamics of the environment. During training, we use the latent variable $z$ as the intermediate action. After training, we calculate the surrogate rewards from the learned reward map. As shown in Figure 1, our method learns a meaningful reward map, which highlights the resulting region of an agent executing an action. To leverage this information, we calculate two types of rewards that both correspond to the highlighted informative region: $r_t = R_{\max} = \max R$ and $r_t = R_{\mathrm{mean}} = \mathrm{mean}(R)$, which use the maximum and mean value of the reward map $R$, respectively. Algorithm 1 summarizes the full training procedure of imitation learning via vPERL. The process begins by training a vPERL module for $E$ epochs (steps 3-6). In each training epoch, we sample a mini-batch of demonstration data $\tilde{D}$ with a mini-batch size of $B$ and maximize the objective in Eq. (1). Then, in steps 7-9, we update the policy $\pi$ via any policy gradient method, e.g., PPO (Schulman et al., 2017), so as to optimize the policy $\pi$ with the learned surrogate reward function $r_t$.
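Concretely, the procedure in Algorithm 1 might be organized as in the sketch below. All objects and method names (`demos.sample`, `vperl.train_step`, `vperl.reward_map`, `policy.act`, `policy.ppo_update`) are hypothetical placeholders for the components described in the text, and a gym-style environment interface is assumed.

```python
def imitation_via_vperl(demos, vperl, policy, env,
                        epochs, max_iters, batch_size, use_max=True):
    # Steps 3-6: fit the vPERL module on demonstration mini-batches.
    for _ in range(epochs):
        batch = demos.sample(batch_size)   # (s_t, a_t, s_{t+1}) tuples
        vperl.train_step(batch)            # maximize the objective in Eq. (1)

    # Steps 7-9: policy optimization on the learned surrogate reward.
    for _ in range(max_iters):
        s, done, rollout = env.reset(), False, []
        while not done:
            a = policy.act(s)
            s_next, _, done, _ = env.step(a)      # environment reward ignored
            R = vperl.reward_map(s, s_next)       # learned reward map
            r = R.max() if use_max else R.mean()  # r_t = R_max or R_mean
            rollout.append((s, a, r, s_next))
            s = s_next
        policy.ppo_update(rollout)  # any policy-gradient method works here
    return policy
```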
This paper proposes a method for inverse reinforcement learning that incorporates a differentiable planning module. Explicit transition dynamics modeling with inverse value iteration is added to promote meaningful reward learning. Empirical evaluations on several high-dimensional Atari environments and two continuous control environments are provided, which show improvements over existing inverse reinforcement learning baselines when given only one-life demonstrations. Some visuals are also presented to show that the proposed method is able to learn more meaningful reward maps than previous methods.
This paper assumes no access to reward values and attempts to learn a policy starting with just one demonstration, which is used to define the reward. To obtain the reward, the authors rely on ideas from the Value Iteration Networks (VIN) method and add modules that help handle cases with complex transition dynamics. The resulting method is tested on the Atari domain and on continuous control tasks.
SP:0370e68af5e82fcbde2ca16e57721e455620a1fe
Semi-supervised Keypoint Localization
1 INTRODUCTION . Detecting keypoints helps with fine-grained classification (Guo & Farrell, 2019) and re-identification (Zhu et al., 2020; Sarfraz et al., 2018). In the domain of wild animals (Mathis et al., 2018; Moskvyak et al., 2020; Liu et al., 2019a; b), annotating data is especially challenging due to large pose variations and the need for domain experts to annotate. Moreover, there is less commercial interest in keypoint estimation for animals compared to humans, and little effort is invested in collecting and annotating public datasets. Unsupervised detection of landmarks (Jakab et al., 2018; Thewlis et al., 2017; 2019) can extract useful features, but is not able to detect perceptible landmarks without supervision. (We use the terms keypoints and landmarks interchangeably in our work; these terms are more generic than body joints, used in human pose estimation, because our method is applicable to a variety of categories.) On the other hand, supervised learning runs the risk of overfitting if trained on only a limited number of labeled examples. Semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data during training. It is mostly studied for the classification task (van Engelen & Hoos, 2019), but it is also important for the keypoint localization problem, because annotating multiple keypoints per image is time-consuming manual work for which precision is the most important factor. Pseudo-labeling (Lee, 2013) is a common semi-supervised approach where unlabeled examples are assigned labels (called pseudo-labels) predicted by a model trained on a labeled subset. A heuristic unsupervised criterion is adopted to select the pseudo-labeled data for a retraining procedure. More recently, the works of Dong & Yang (2019) and Radosavovic et al. (2018) apply variations of selection criteria in pseudo-labeling for semi-supervised facial landmark detection. However, there is less variation in facial landmark positions than in human or animal body joints, where there is a high risk of transferring inaccurate pseudo-labeled examples to the retraining stage, which is harmful to the model. Previous work by Honari et al. (2018) in semi-supervised landmark detection utilizes additional class attributes and tests only on datasets that provide these attribute annotations. Our work focuses on the keypoint localization task in a common real-world scenario where annotations are provided for only a small subset of a large unlabeled dataset. More specifically, we propose a method for semi-supervised keypoint localization that learns a list of heatmaps and a list of semantic keypoint representations for each image (Figure 1). A semantic keypoint representation is a vector of real numbers in a space of low dimension relative to the image size, and the same keypoints in different images have similar representations. We leverage properties that are specific to the landmark localization problem to design constraints for jointly optimizing both representations. We extend the transformation consistency constraint of Honari et al. (2018) so that it can be applied to each representation differently (i.e., a transformation equivariance constraint for heatmaps and a transformation invariance constraint for semantic representations). Moreover, we formulate a semantic consistency constraint that encourages detecting similar features across images for the same landmark, independent of the pose of the object (e.g.,
an eye in all images should look similar ) . Learning both representations simultaneously allows us to use the power of both supervised and unsupervised learning . Our work is motivated by data scarcity in the domain of wild animals , but it is not limited to animals and is applicable to human body landmark detection as well . The contribution of our work is three-fold :

• We propose a technique for semi-supervised keypoint localization that jointly learns keypoint heatmaps and semantic representations optimised with supervised and unsupervised constraints ;

• Our method can easily be added to any existing keypoint localization network with no structural changes and minimal computational overhead ;

• We evaluate the proposed method on annotated image datasets for both humans and animals . As demonstrated by our results , our method significantly outperforms previously proposed supervised and unsupervised methods on several benchmarks , using only limited labeled data .

The paper is organised as follows . Related work on semi-supervised learning and keypoint localization is reviewed in Section 2 . Our proposed method is described in Section 3 . Experimental settings , datasets and results are discussed in Section 4 .

2 RELATED WORK .

Keypoint localization . Supervised keypoint localization research is driven by a few large datasets with labeled keypoints that span several common research domains , including human pose estimation ( Andriluka et al. , 2014 ) and facial keypoints ( Sagonas et al. , 2016 ) . Challenges in obtaining keypoint annotations have led to the rise of unsupervised landmark localization research . Several unsupervised methods leverage the concept of equivariance , which means that landmark coordinates stay consistent after synthetic transformations or in subsequent video frames . Thewlis et al . ( 2017 ) propose to learn viewpoint-independent representations that are equivariant to different transformations , and Dong et al . ( 2018 ) exploit the coherence of optical flow as a source of supervision . Zhang et al . ( 2018 ) learn landmark encodings by enforcing constraints that reflect the necessary properties of landmarks , such as separability and concentration . Jakab et al . ( 2018 ) propose a generative approach where the predicted heatmaps are used to reconstruct the input image from a transformed copy . Recent work ( Thewlis et al. , 2019 ) enforces consistency between instances of the same object by exchanging descriptor vectors . These methods are mostly evaluated on faces of people , which have fewer degrees of freedom during movements and transformations than human or animal body joints . We compare our method to the combination of supervised and the aforementioned unsupervised methods in Section 4 .

Semi-supervised learning is most studied for the classification task . Pseudo-labeling ( Lee , 2013 ) is a method that uses the model ’ s class predictions as artificial labels for unlabeled examples and then trains the model to predict these labels . Another technique is consistency regularization , which states that realistic perturbations of input examples from the unlabeled dataset should not significantly change the output of a neural network .
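As a minimal sketch of this principle ( our illustration , not taken from any of the cited papers ) , a consistency term can penalize the disagreement between predictions on two randomly perturbed copies of the same unlabeled input ; the MSE penalty between softmax outputs follows the Π-model pairing , while `model` and `augment` are assumed placeholders :

```python
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    # Two independent stochastic perturbations of the same unlabeled batch.
    p1 = model(augment(x_unlabeled)).softmax(dim=1)
    p2 = model(augment(x_unlabeled)).softmax(dim=1)
    # Realistic perturbations should not significantly change the output.
    return F.mse_loss(p1, p2)
```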
Consistency regularization is used in the Π-model ( Laine & Aila , 2017 ) and further improved by Temporal Ensembling ( Laine & Aila , 2017 ) , which maintains an exponential moving average prediction for each training example , and Mean Teacher ( Tarvainen & Valpola , 2017 ) , which averages model weights instead of model predictions . Recent methods UDA ( Xie et al. , 2019 ) , ReMixMatch ( Berthelot et al. , 2020 ) and FixMatch ( Sohn et al. , 2020 ) use a combination of consistency loss , pseudo-labeling and advanced augmentation techniques in addition to color perturbations and spatial transformations . In this work , we investigate the adjustments required to apply a consistency loss to keypoint localization , which we discuss in Section 3.2 .

Semi-supervised learning for keypoint localization . To the best of our knowledge , there are only a few works on semi-supervised keypoint localization . Dong & Yang ( 2019 ) build on the pseudo-labeling technique and propose a teacher model and two students to generate more reliable pseudo-labels for unlabeled images . However , the method is evaluated on face landmarks , and in cases with high pose variation there is a high possibility of inaccurate pseudo-labels that can not be filtered out and are harmful during the retraining stage . Honari et al . ( 2018 ) and Ukita & Uematsu ( 2018 ) learn keypoints in a semi-supervised manner but utilise extra annotations to guide landmark learning , such as action labels ( running , jumping ) for human joints or emotion labels ( smiling , yawning ) for facial keypoint localization . Different from previous work , our approach does not use any class labels and learns directly from unlabeled data with high pose variations .

3 SEMI-SUPERVISED LEARNING FOR KEYPOINT LOCALIZATION .

In this work , we propose a semi-supervised technique for keypoint localization that learns from an image set where ground truth annotations are provided only for a small subset of the dataset . The overall architecture consists of two components : a keypoint localization network ( KLN ) that outputs keypoint heatmaps for the image , and a keypoint classification network ( KCN ) that classifies keypoints given a semantic keypoint representation as input . Our method does not impose any constraints on the architecture of the KLN , and it can be added to any existing keypoint localization network with minimal modifications . We optimize heatmaps with the supervised loss and the transformation equivariance constraint . Simultaneously , keypoint representations are optimized with the transformation invariance and semantic consistency constraints ( Figure 1 ) . We discuss each constraint and the related components of the architecture in the next sections .

3.1 SEMANTIC KEYPOINT REPRESENTATIONS .

Keypoint heatmaps are optimized to estimate the locations of keypoints in the image . However , heatmaps do not carry any information about the semantic type of a keypoint ( e.g . a beak or an eye for a bird ) . In the semi-supervised regime , the feedback provided by unlabeled examples is not as effective as that coming from labeled examples . To extract useful information from unlabeled images , we propose learning a semantic keypoint representation . In particular , the keypoint localization network is encouraged to detect similar features for the same semantic keypoint across the dataset by incorporating the feedback from a keypoint representation classifier into the objective function . The motivation for our approach is that the same keypoints should activate the same feature maps .
Let us consider the KLN as a function f ( x ; θ ) with an input image x and trainable parameters θ that outputs heatmaps h = f ( x ; θ ) . We collect intermediate feature maps from the KLN , upscale them to the spatial dimension of the output heatmaps , concatenate them by channels , and pass them through a convolutional layer with C filters of size one ( Figure 2 ) . The resulting feature map F has shape ( C , H , W ) . Then , the feature maps F are element-wise multiplied with each keypoint heatmap hi , i ∈ { 1 , ... , K } separately to mask out activations corresponding to the detected keypoint . The output of this operation is K feature maps of size ( C , H , W ) . Global Max Pooling ( GMP ) is applied over the feature maps to keep the highest value for each channel . We call the produced vector zi = GMP ( F ⊙ hi ) for each keypoint i ∈ { 1 , ... , K } a semantic keypoint representation . Finally , we pass the keypoint representations to a simple KCN ( φ ) , which is a fully connected network with an input and an output layer for classification with a cross-entropy loss . The feedback from the cross-entropy loss makes up the semantic consistency ( SC ) loss :

$$ \mathcal{L}_{sc}(x) = - \frac{1}{K} \sum_{i=1}^{K} \hat{y}_i \log\big( \phi( z_i ) \big) \quad (1) $$

where ŷ is the vector of ground truth semantic labels for the keypoints , which is known because the order of keypoints in a heatmap is fixed . One advantage of our method is its efficiency , as it only adds a small number of parameters to the network to address the task of keypoint representation classification . Specifically , the KCN is a small fully connected network shared between keypoints , and it has fewer than a thousand parameters , depending on the number of keypoints . Our approach is related to attention modules ( Vaswani et al. , 2017 ; Hu et al. , 2020 ) , as our network has the ability to focus on a subset of features using element-wise multiplication with heatmaps . However , our model uses this attention-based mechanism to learn additional keypoint representations from unlabeled data by optimizing a set of unsupervised losses .
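To make the construction above concrete , here is a minimal PyTorch sketch of the semantic keypoint representations and the SC loss of Equation 1 . The bilinear upsampling mode , the tensor shapes , and the exact KCN interface are our assumptions ; the text specifies only the 1×1 convolution , the element-wise masking , GMP , and a small fully connected classifier .

```python
import torch
import torch.nn.functional as F

def sc_loss(intermediate_maps, heatmaps, conv1x1, kcn):
    """Semantic consistency loss of Eq. (1), averaged over a batch.

    intermediate_maps: list of KLN activations, each of shape (B, C_l, H_l, W_l)
    heatmaps:          predicted heatmaps h, shape (B, K, H, W)
    conv1x1:           nn.Conv2d producing C channels with kernel size one
    kcn:               small fully connected classifier, R^C -> K logits
    """
    B, K, H, W = heatmaps.shape
    # Upscale every intermediate map to (H, W) and concatenate by channels.
    up = [F.interpolate(m, size=(H, W), mode="bilinear", align_corners=False)
          for m in intermediate_maps]
    feats = conv1x1(torch.cat(up, dim=1))                 # F, shape (B, C, H, W)
    # Element-wise masking with each heatmap, then Global Max Pooling per
    # channel: z_i = GMP(F * h_i), one semantic representation per keypoint.
    masked = feats.unsqueeze(1) * heatmaps.unsqueeze(2)   # (B, K, C, H, W)
    z = masked.amax(dim=(-2, -1))                         # (B, K, C)
    # The target for z_i is simply i, since keypoint order in h is fixed.
    logits = kcn(z.reshape(B * K, -1))                    # (B*K, K)
    targets = torch.arange(K, device=z.device).repeat(B)  # 0..K-1 per image
    return F.cross_entropy(logits, targets)
```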
The paper presents an approach to keypoint localization (to recover human/animal pose) combining labeled and unlabeled data. Features are extracted and concatenated into a single descriptor per keypoint, by multiplying feature maps with heatmaps and max-pooling over the spatial domain, and are then used for semantic classification. Images are transformed with simple perspective augmentations. The unsupervised part comes from enforcing that keypoint representations for unlabeled images remain close.
SP:40701460d7ed2175ff193b228f93af7d50911267
This paper presents semi-supervised keypoint localization networks and loss functions to reduce the need for labeled keypoint data for that task. It simultaneously generates keypoint heatmaps and pose-invariant keypoint representations, where the heatmaps and the representations are used to enforce transformation equivariance, transformation invariance, and semantic consistency, respectively. The proposed method attains improvements on several benchmarks for human and animal body landmark localization.
SP:40701460d7ed2175ff193b228f93af7d50911267
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
1 INTRODUCTION .

Many modern machine learning algorithms are susceptible to adversarial examples : carefully crafted inputs designed to fool models into giving incorrect outputs ( Biggio et al. , 2013 ; Szegedy et al. , 2014 ; Kurakin et al. , 2016a ; Xie et al. , 2017 ) . Much research has focused on increasing classifiers ’ robustness against adversarial attacks ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ; Zhang et al. , 2019a ) . However , existing adversarial defenses for image classifiers generally consider simple threat models . An adversarial threat model defines a set of perturbations that may be made to an image in order to produce an adversarial example . Common threat models include the L2 and L∞ threat models , which constrain adversarial examples to be close to the original image in L2 or L∞ distance . Some work has proposed additional threat models which allow spatial perturbations ( Engstrom et al. , 2017 ; Wong et al. , 2019 ; Xiao et al. , 2018 ) , recoloring ( Hosseini and Poovendran , 2018 ; Laidlaw and Feizi , 2019 ; Bhattad et al. , 2019 ) , and other modifications of an image ( Song et al. , 2018 ; Zeng et al. , 2019 ) . There are multiple issues with these unrealistically constrained adversarial threat models . First , hardening against one threat model assumes that an adversary will only attempt attacks within that threat model . Although a classifier may be trained to be robust against L∞ attacks , for instance , an attacker could easily generate a spatial attack to fool the classifier . One possible solution is to train against multiple threat models simultaneously ( Jordan et al. , 2019 ; Laidlaw and Feizi , 2019 ; Maini et al. , 2019 ; Tramer and Boneh , 2019 ) . However , this generally results in lower robustness against each individual threat model compared to hardening against that threat model alone . Furthermore , not all possible threat models may be known at training time , and adversarial defenses do not usually generalize well to unforeseen threat models ( Kang et al. , 2019 ) . The ideal solution to these drawbacks would be a defense that is robust against a wide , unconstrained threat model . We differentiate between two such threat models . The unrestricted adversarial threat model ( Brown et al. , 2018 ) encompasses any adversarial example that is labeled as one class by a classifier but a different class by humans . On the other hand , we define the perceptual adversarial threat model as including all perturbations of natural images that are imperceptible to a human . Most existing narrow threat models , such as L2 , L∞ , etc . , are near subsets of the perceptual threat model ( Figure 1 ) . Some other threat models , such as adversarial patch attacks ( Brown et al. , 2018 ) , may perceptibly alter an image without changing its true class and as such are contained in the unrestricted adversarial threat model . In this work , we focus on the perceptual threat model . The perceptual threat model can be formalized given the true perceptual distance d∗ ( x1 , x2 ) between images x1 and x2 , defined as how different two images appear to humans . For some threshold ε∗ , which we call the perceptibility threshold , images x and x′ are indistinguishable from one another as long as d∗ ( x , x′ ) ≤ ε∗ . Note that in general ε∗ may depend on the specific input . Then , the perceptual threat model for a natural input x includes all adversarial examples x̃ which cause misclassification but are imperceptibly different from x , i.e . d∗ ( x , x̃ ) ≤ ε∗ .
The true perceptual distance d∗ ( · , · ) , however , can not be easily computed or optimized against . To solve this issue , we propose to use a neural perceptual distance , an approximation of the true perceptual distance between images using neural networks . Fortunately , there have been many surrogate perceptual distances proposed in the computer vision literature , such as SSIM ( Wang et al. , 2004 ) . Recently , Zhang et al . ( 2018 ) discovered that comparing the internal activations of a convolutional neural network when two different images are passed through provides a measure , Learned Perceptual Image Patch Similarity ( LPIPS ) , that correlates well with human perception . We propose to use the LPIPS distance d ( · , · ) in place of the true perceptual distance d∗ ( · , · ) to formalize the neural perceptual threat model ( NPTM ) . We present adversarial attacks and defenses for the proposed NPTM . Generating adversarial examples bounded by the neural perceptual distance is difficult compared to generating Lp adversarial examples because of the complexity and non-convexity of the constraint . However , we develop two attacks for the NPTM , Perceptual Projected Gradient Descent ( PPGD ) and Lagrangian Perceptual Attack ( LPA ) ( see Section 4 for details ) . We find that LPA is by far the strongest adversarial attack at a given level of perceptibility ( see Figure 4 ) , reducing the most robust classifier studied to only 2.4 % accuracy on ImageNet-100 ( a subset of ImageNet ) while remaining imperceptible . LPA also finds adversarial examples outside of any of the other threat models studied ( see Figure 2 ) . Thus , even if a model is robust to many narrow threat models ( Lp , spatial , etc . ) , LPA can still cause serious errors . In addition to these attacks , which are suitable for evaluating a classifier against the NPTM , we also develop Fast-LPA , a more efficient version of LPA that we use in Perceptual Adversarial Training ( PAT ) . Remarkably , using PAT to train a neural network classifier produces a single model with high robustness against a variety of imperceptible perturbations , including L∞ , L2 , spatial , recoloring , and JPEG attacks , on CIFAR-10 and ImageNet-100 ( Tables 2 and 3 ) . For example , PAT on ImageNet-100 gives 32.5 % accuracy against the union of these five attacks , whereas L∞ and L2 adversarial training give 0.5 % and 12.3 % accuracy , respectively ( Table 1 ) . PAT achieves more than double the accuracy against this union of five threat models despite not explicitly training against any of them . Thus , it generalizes well to unseen threat models . Does the LPIPS distance accurately reflect human perception when it is used to evaluate adversarial examples ? We performed a study on Amazon Mechanical Turk ( AMT ) to determine how perceptible 7 different types of adversarial perturbations ( such as L∞ , L2 , spatial , and recoloring attacks ) are at multiple threat-specific bounds . We find that LPIPS correlates well with human judgements across all the different adversarial perturbation types we examine . This indicates that the NPTM closely matches the true perceptual threat model and reinforces the utility of our perceptual attacks to measure adversarial robustness against an expansive threat model . Furthermore , this study allows calibration of a variety of attack bounds to a single perceptibility metric . We have released our dataset of adversarial examples , along with the annotations made by participants , for further study ( code and data can be downloaded at https://github.com/cassidylaidlaw/perceptual-advex ) .
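The concrete attack constructions are deferred to Section 4 . Purely to illustrate how a perceptual constraint can be folded into an optimization objective , the following sketch runs gradient ascent on a Lagrangian-style relaxation . It is not the paper's PPGD or LPA : the step rule , the fixed penalty weight , and the lpips_distance helper ( e.g . a partial application of the LPIPS sketch given after Equation 1 below ) are all our assumptions .

```python
import torch
import torch.nn.functional as F

def penalty_attack(f, lpips_distance, x, y, eps, lam=10.0, steps=40, lr=0.01):
    """Maximize classification loss while penalizing perceptual distance > eps."""
    x_adv = x.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(f(x_adv), y)
        # Penalize only the amount by which the perceptual bound is exceeded.
        excess = torch.clamp(lpips_distance(x, x_adv) - eps, min=0.0).mean()
        grad, = torch.autograd.grad(loss - lam * excess, x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    return x_adv.detach()
```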
2 RELATED WORK .

Adversarial robustness . Adversarial robustness has been studied extensively for L2 or L∞ threat models ( Goodfellow et al. , 2015 ; Carlini and Wagner , 2017 ; Madry et al. , 2018 ) and for non-Lp threat models such as spatial perturbations ( Engstrom et al. , 2017 ; Xiao et al. , 2018 ; Wong et al. , 2019 ) , recoloring of an image ( Hosseini and Poovendran , 2018 ; Laidlaw and Feizi , 2019 ; Bhattad et al. , 2019 ) , and perturbations in the frequency domain ( Kang et al. , 2019 ) . The most popular known adversarial defense for these threat models is adversarial training ( Kurakin et al. , 2016b ; Madry et al. , 2018 ; Zhang et al. , 2019a ) , where a neural network is trained to minimize the worst-case loss in a region around the input . Recent evaluation methodologies such as Unforeseen Attack Robustness ( UAR ) ( Kang et al. , 2019 ) and the Unrestricted Adversarial Examples challenge ( Brown et al. , 2018 ) have raised the problem of finding an adversarial defense which gives good robustness under more general threat models . Sharif et al . ( 2018 ) conduct a perceptual study showing that Lp threat models are a poor approximation of the perceptual threat model . Dunn et al . ( 2020 ) and Xu et al . ( 2020 ) have developed adversarial attacks that manipulate higher-level , semantic features . Jin and Rinard ( 2020 ) train with a manifold regularization term , which gives some robustness to unseen perturbation types . Stutz et al . ( 2020 ) also propose a method which gives robustness against unseen perturbation types , but it requires rejecting ( abstaining on ) some inputs .

Perceptual similarity . Two basic similarity measures for images are the L2 distance and the Peak Signal-to-Noise Ratio ( PSNR ) . However , these similarity measures disagree with human vision on perturbations such as blurring and spatial transformations , which has motivated other measures , including SSIM ( Wang et al. , 2004 ) , MS-SSIM ( Wang et al. , 2003 ) , CW-SSIM ( Sampat et al. , 2009 ) , HDR-VDP-2 ( Mantiuk et al. , 2011 ) and LPIPS ( Zhang et al. , 2018 ) . MAD competition ( Wang and Simoncelli , 2008 ) uses a constrained optimization technique related to our attacks to evaluate perceptual measures .

Perceptual adversarial robustness . Although LPIPS was previously proposed , it has mostly been used for the development and evaluation of generative models ( Huang et al. , 2018 ; Karras et al. , 2019 ) . Jordan et al . ( 2019 ) first explored quantifying adversarial distortions with the LPIPS distance . However , to the best of our knowledge , we are the first to apply a more accurate perceptual distance to the problem of improving adversarial robustness . As we show , adversarial defenses based on L2 or L∞ attacks are unable to generalize to a more diverse threat model . Our method , PAT , is the first adversarial training method we know of that can generalize to unforeseen threat models without rejecting inputs .

3 NEURAL PERCEPTUAL THREAT MODEL ( NPTM ) .

Since the true perceptual distance between images can not be efficiently computed , we use approximations of it based on neural networks , i.e . neural perceptual distances . In this paper , we focus on the LPIPS distance ( Zhang et al. , 2018 ) , while we note that other neural perceptual distances can also be used in our attacks and defenses . Let g : X → Y be a convolutional image classifier network defined on images x ∈ X .
Let g have L layers , and let the internal activations ( outputs ) of the l-th layer of g ( x ) for an input x be denoted as gl ( x ) . Zhang et al . ( 2018 ) have found that normalizing and then comparing the internal activations of convolutional neural networks correlates well with human similarity judgements . Thus , the first step in calculating the LPIPS distance using the network g ( · ) is to normalize the internal activations across the channel dimension such that the L2 norm over channels at each pixel is one . Let ĝl ( x ) denote these channel-normalized activations at the l-th layer of the network . Next , the activations are normalized again by layer size and flattened into a single vector

$$ \phi(x) \triangleq \left( \frac{\hat{g}_1(x)}{\sqrt{w_1 h_1}} , \ldots , \frac{\hat{g}_L(x)}{\sqrt{w_L h_L}} \right) $$

where wl and hl are the width and height of the activations in layer l , respectively . The function φ : X → A thus maps the inputs x ∈ X of the classifier g ( · ) to the resulting normalized , flattened internal activations φ ( x ) ∈ A , where A ⊆ Rm refers to the space of all possible resulting activations . The LPIPS distance d ( x1 , x2 ) between images x1 and x2 is then defined as :

$$ d(x_1 , x_2) \triangleq \| \phi(x_1) - \phi(x_2) \|_2 . \quad (1) $$

In the original LPIPS implementation , Zhang et al . ( 2018 ) learn weights to apply to the normalized activations based on a dataset of human perceptual judgements . However , they find that LPIPS is a good surrogate for human vision even without the additional learned weights ; this is the version we use , since it avoids the need to collect such a dataset . Now let f : X → Y be a classifier which maps inputs x ∈ X to labels f ( x ) ∈ Y . f ( · ) could be the same as g ( · ) , or it could be a different network ; we experiment with both . For a given natural input x with the true label y , a neural perceptual adversarial example with a perceptibility bound ε is an input x̃ ∈ X such that x̃ is perceptually similar to x but causes f to misclassify :

$$ f(\tilde{x}) \neq y \quad \text{and} \quad d(x , \tilde{x}) = \| \phi(x) - \phi(\tilde{x}) \|_2 \leq \epsilon . \quad (2) $$
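A direct transcription of this construction into code might look as follows . The interface for extracting per-layer activations ( a g_layers callable returning a list of tensors ) is our assumption ; everything else follows Equation 1 .

```python
import torch

def phi(x, g_layers):
    """Normalized, flattened internal activations of Eq. (1); returns (B, m)."""
    feats = []
    for gl in g_layers(x):                                 # gl: (B, C_l, h_l, w_l)
        # Channel normalization: unit L2 norm over channels at every pixel.
        gl_hat = gl / gl.norm(dim=1, keepdim=True).clamp_min(1e-10)
        h, w = gl_hat.shape[-2:]
        # Rescale by layer size and flatten.
        feats.append(gl_hat.flatten(start_dim=1) / (w * h) ** 0.5)
    return torch.cat(feats, dim=1)

def lpips_distance(x1, x2, g_layers):
    """d(x1, x2) = ||phi(x1) - phi(x2)||_2, computed per example."""
    return (phi(x1, g_layers) - phi(x2, g_layers)).norm(dim=1)
```

Combined with a classifier f , the per-example membership test of Equation 2 then reads `(f(x_adv).argmax(1) != y) & (lpips_distance(x, x_adv, g_layers) <= eps)` .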
This work proposes a new form of adversarial training, supported by two proposed adversarial attacks based on a perceptual distance. The chosen perceptual distance (LPIPS) is computed by comparing the internal activations of a neural network (possibly different from the classifier under attack) on a pair of inputs, and is distinct from the common choice of L_2 or L_inf. The authors propose two new attacks based on this perceptual distance: PPGD and LPA. This work claims that performing adversarial training against adversarial examples crafted by the proposed attacks induces robustness to a wide range of "narrow" threat models, e.g. L_2, JPEG, L_inf.
SP:4815005f4ab4a69abde3b5456b811e4e98ba86c7
This paper studies the adversarial robustness of deep neural networks against multiple and unforeseen threat models. Since a precise formalization of human perception is lacking, this paper adopts LPIPS, a metric based on neural network activations that correlates well with human perception. Two adversarial attack methods are then proposed to generate adversarial examples under this metric, and an adversarial training method is also proposed. Experiments on various threat models validate the effectiveness of the proposed method.
SP:4815005f4ab4a69abde3b5456b811e4e98ba86c7
A Simple Approach To Define Curricula For Training Neural Networks
1 INTRODUCTION .

Stochastic Gradient Descent ( SGD ) ( Robbins & Monro , 1951 ) is a simple yet widely used algorithm for machine learning optimization . There have been many efforts to improve its performance . A number of such directions , such as AdaGrad ( Duchi et al. , 2011 ) , RMSProp ( Tieleman & Hinton , 2012 ) , and Adam ( Kingma & Ba , 2014 ) , improve upon SGD by fine-tuning its learning rate , often adaptively . However , Wilson et al . ( 2017 ) have shown that the solutions found by adaptive methods generalize worse even for simple overparameterized problems . Reddi et al . ( 2019 ) introduced AMSGrad hoping to solve this issue . Yet there is a performance gap between AMSGrad and SGD in terms of the ability to generalize ( Keskar & Socher , 2017 ) . Further , Choi et al . ( 2019 ) show that more general optimizers such as Adam and RMSProp can never underperform SGD when all their hyperparameters are carefully tuned . Hence , SGD still remains one of the main workhorses of the ML optimization toolkit . SGD proceeds by stochastically making unbiased estimates of the gradient on the full data ( Zhao & Zhang , 2015 ) . However , this approach does not match the way humans typically learn various tasks . We learn a concept faster if we are presented with easy examples first and then gradually exposed to examples of greater complexity , based on a curriculum . An orthogonal extension to SGD ( Weinshall & Cohen , 2018 ) that shows some promise in improving its performance is to choose examples according to a specific strategy driven by cognitive science : this is curriculum learning ( CL ) ( Bengio et al. , 2009 ) , wherein the examples are shown to the learner based on a curriculum .

1.1 RELATED WORKS .

Bengio et al . ( 2009 ) formalize the idea of CL in a machine learning framework where the examples are fed to the learner in an order based on their difficulty . The notion of difficulty of examples has not really been formalized , and various heuristics have been tried out : Bengio et al . ( 2009 ) use manually crafted scores , self-paced learning ( SPL ) ( Kumar et al. , 2010 ) uses the loss values with respect to the learner ’ s current parameters , and CL by transfer learning uses the loss values with respect to a pre-trained learner to rate the difficulty of examples in the data . Among these works , what makes SPL distinctive is that it uses a dynamic CL strategy , i.e . the preferred ordering is determined dynamically over the course of the optimization . However , SPL does not really improve the performance of deep learning models , as noted by Fan et al . ( 2018 ) . Similarly , Loshchilov & Hutter ( 2015 ) use a function of rank based on the latest loss values for online batch selection for faster training of neural networks . Katharopoulos & Fleuret ( 2018 ) and Chang et al . ( 2017 ) perform importance sampling to reduce the variance of stochastic gradients during training . Graves et al . ( 2017 ) and Matiisen et al . ( 2020 ) propose teacher-guided automatic CL algorithms that employ various supervised measures to define dynamic curricula . The most recent works in CL show its advantages in reinforcement learning ( Portelas et al. , 2020 ; Zhang et al. , 2020 ) . The recent work by Weinshall & Cohen ( 2018 ) introduces the notion of an ideal difficulty score to rate the difficulty of examples based on the loss values with respect to the set of optimal hypotheses .
They theoretically show that , for linear regression , the expected rate of convergence at a training step t for an example monotonically decreases with its ideal difficulty score . This is practically validated by Hacohen & Weinshall ( 2019 ) by sorting the training examples based on the performance of a network trained through transfer learning . However , there is a lack of theory to show that CL improves the performance of a completely trained network . Thus , while CL indicates that it is possible to improve the performance of SGD by a judicious ordering , both the theoretical insights and the concrete empirical guidelines needed to create this ordering remain unclear . While previous CL works employ tedious methods to score the difficulty level of the examples , Hu et al . ( 2020 ) use the number of audio sources to determine the difficulty for audiovisual learning , and Liu et al . ( 2020 ) use the norm of word embeddings as a difficulty measure for CL for neural machine translation . In light of these recent works , we discuss the idea of using task-specific statistical ( unsupervised ) measures to score examples , making it easy to perform CL on real image datasets without the aid of any pre-trained network .

1.2 OUR CONTRIBUTIONS .

Our work proposes two novel algorithms for CL . We do a thorough empirical study of our algorithms and provide some more insights into why CL works . Our contributions are as follows :

• We propose a novel dynamic curriculum learning ( DCL ) algorithm to study the behaviour of CL . DCL is not a practical CL algorithm , since it requires knowledge of a reasonable local optimum and needs to compute the gradients of the full data after every training epoch . DCL uses the gradient information to define a curriculum that minimizes the distance between the current weight and a desired local minimum . However , this simplicity in the definition of DCL makes it easier to analyze its performance formally .

• Our DCL algorithm generates a natural ordering for training the examples . Previous CL works have demonstrated that exposing a part of the data initially and then gradually exposing the rest is a standard way to set up a curriculum . We use two variants of our DCL framework to show that it is not just the subset of data which is exposed to the model that matters , but also the ordering within the exposed data partition . We also analyze how DCL is able to serve as a regularizer and improve the generalization of networks .

• We contribute a simple , novel and practical CL approach for image classification tasks that orders the examples in a completely unsupervised manner using statistical measures . Our insight is that statistical measures could have an association with the difficulty of examples in real data . We empirically analyze our argument for using statistical scoring measures ( especially standard deviation ) over permutations of multiple datasets and networks . Additionally , we study why CL based on standard deviation scoring works , using our DCL framework .

Algorithm 1 : Approximate greedy dynamic curriculum learning ( DCL+ ) .
Input : data X , local minimum w̃ , weight wt , batch size b , and pacing function pace .
Output : sequence of mini-batches Bt for the next training epoch .
1 : ãt ← w̃ − wt
2 : ρt ← [ ]
3 : Bt ← [ ]
4 : for ( i = 0 ; N ; 1 ) do
5 :   append − ãt⊤ · ∇fi ( wt ) / ‖ãt‖2 to ρt
6 : end for
7 : X̃ ← X sorted according to ρt , in ascending order
8 : size ← pace ( t )
9 : for ( i = 0 ; size ; b ) do
10 :   append X̃ [ i , ... , i + b − 1 ] to Bt
11 : end for
12 : return Bt
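A line-for-line transcription of Algorithm 1 into Python might look as follows ; the per-example gradient oracle grad_f and the index-based data interface are our assumptions :

```python
import numpy as np

def dcl_plus_epoch(N, grad_f, w_t, w_tilde, b, pace_t):
    """One DCL+ scheduling step: ordered mini-batches, as index arrays.

    N:       number of training examples
    grad_f:  grad_f(i, w) -> gradient of the loss of example i at weights w
    w_t:     current weight vector; w_tilde: target local minimum
    b:       batch size; pace_t: pace(t), number of examples to expose
    """
    a_t = w_tilde - w_t                                    # step 1
    a_norm = np.linalg.norm(a_t)
    # Steps 4-6: score each example, rho_{t,i} = -a_t^T grad_i / ||a_t||_2.
    rho = np.array([-(a_t @ grad_f(i, w_t)) / a_norm for i in range(N)])
    order = np.argsort(rho)                                # step 7: ascending
    exposed = order[:pace_t]                               # step 8: pace(t) easiest
    # Steps 9-11: slice the exposed prefix into consecutive mini-batches.
    return [exposed[i:i + b] for i in range(0, pace_t, b)]
```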
2 PRELIMINARIES .

At any training step t , SGD updates the weight wt using ∇fi ( wt ) , the gradient of the loss of example xi with respect to the current weight . The learning rate and the data are denoted by η and $X = \{ ( x_i , y_i ) \}_{i=0}^{N-1}$ respectively , where xi ∈ Rd denotes a data point and yi ∈ [ K ] its corresponding label for a dataset with K classes . We denote the learner as hϑ : Rd → [ K ] . Generally , SGD is used to train hϑ by giving the model a sequence of mini-batches { B0 , B1 , ... , BT−1 } , where Bi ⊆ X ∀i ∈ [ T ] . Each Bi is generated by uniformly sampling examples from the data . We denote this approach as vanilla . In CL , the curriculum is defined by two functions , namely the scoring function and the pacing function . The scoring function , scoreϑ ( xi , yi ) : Rd × [ K ] → R , scores each example in the dataset . The scoring function is used to sort X in ascending order of difficulty . A data point ( xi , yi ) is said to be easier than ( xj , yj ) if scoreϑ ( xi , yi ) < scoreϑ ( xj , yj ) , where both examples belong to X . Unsupervised scoring measures do not use the data labels to determine the difficulty of data points . The pacing function , paceϑ ( t ) : [ T ] → [ N ] , determines how much of the data is to be exposed at a training step t ∈ [ T ] . We define the speedup of a CL model as its improvement over the vanilla model ( in terms of the number of training steps ) in achieving a given test accuracy . For example , CL has a 2× speedup if the vanilla model achieves 90 % test accuracy in 100 training steps while CL achieves the same 90 % test accuracy in 50 training steps .

3 DYNAMIC CURRICULUM LEARNING .

For DCL algorithms ( Kumar et al. , 2010 ; Graves et al. , 2017 ; Matiisen et al. , 2020 ) , examples are scored and sorted after every few training steps , since the parameters of the scoring function change dynamically with the learner as training proceeds . Hacohen & Weinshall ( 2019 ) and Bengio et al . ( 2009 ) use a fixed scoring function and pacing function for the entire training process . They empirically show that a curriculum helps to learn fast in the initial phase of the training process . In this section , we propose and analyze our novel DCL algorithm that updates the difficulty scores of all the examples in the training data at every epoch using their gradient information . We hypothesize the following : given a weight initialization and a local minimum obtained by full training of vanilla SGD , the curriculum ordering determined by our DCL variant leads to a speedup in training . We first describe the algorithm , then the underlying intuition , and finally validate the hypothesis using experiments . Our DCL algorithm iteratively works on reducing the L2 distance , Rt , between the weight parameter wt and a given optimal weight w̄ at any training step t . Suppose that , for any t̃ < t , S_{t̃ , t} is the ordered set containing the ( t − t̃ + 1 ) indices of the training examples that are to be shown to the learner from training steps t̃ through t . Let us define at = ( w̄ − wt ) , Rt = ‖at‖2 , and θ_{t̃ i} as the angle between ∇fi ( wt ) and at̃ .
Then, using a geometrical argument (see Figure 1),

$$R_t^2 = \Bigg(R_{\tilde{t}} - \eta \sum_{j=\tilde{t},\, i \in S_{\tilde{t},t-1}}^{t-1} \|\nabla f_i(w_j)\|_2 \cos\theta_{\tilde{t}i}\Bigg)^2 + \eta^2 \Bigg(\sum_{j=\tilde{t},\, i \in S_{\tilde{t},t-1}}^{t-1} \|\nabla f_i(w_j)\|_2 \sin\theta_{\tilde{t}i}\Bigg)^2$$
$$= R_{\tilde{t}}^2 - 2\eta R_{\tilde{t}} \sum_{j=\tilde{t},\, i \in S_{\tilde{t},t-1}}^{t-1} \|\nabla f_i(w_j)\|_2 \cos\theta_{\tilde{t}i} + \eta^2 \Bigg(\sum_{j=\tilde{t},\, i \in S_{\tilde{t},t-1}}^{t-1} \|\nabla f_i(w_j)\|_2 \cos\theta_{\tilde{t}i}\Bigg)^2 + \eta^2 \Bigg(\sum_{j=\tilde{t},\, i \in S_{\tilde{t},t-1}}^{t-1} \|\nabla f_i(w_j)\|_2 \sin\theta_{\tilde{t}i}\Bigg)^2 \qquad (1)$$

For a vanilla model, $S_{0,T}$ is generated by uniformly sampling indices from $[N]$ with replacement. Since finding a set $S_{0,T}$ that minimizes $R_T^2$ and an optimal $\bar{w}$ are intractable for nonconvex optimization problems, we approximate the DCL algorithm (DCL+, see Algorithm 1). We approximate $\bar{w}$ with $\tilde{w}$, which is a local minimum obtained from training the vanilla SGD model. Also, to reduce computational expense while sampling examples, we neglect the terms with coefficient $\eta^2$ in equation 1 while designing our algorithm. Algorithm 1 uses a greedy approach to minimize $R_t^2$ by sampling examples at every epoch using the scoring function

$$\mathrm{score}_t(x_i) = -\|\nabla f_i(w_t)\|_2 \cos\theta_{ti} = -\frac{a_t^T \cdot \nabla f_i(w_t)}{\|a_t\|_2} = \rho_{t,i}. \qquad (2)$$

Let us denote the models that use the natural ordering of mini-batches greedily generated by Algorithm 1 for training networks as DCL+. DCL- uses the same sequence of mini-batches that DCL+ exposes to the network at any given epoch, but the order is reversed. We empirically show that DCL+ achieves faster and better convergence with various initializations of $w_0$. We use learning rates with an exponential step-decay rate for the optimizers in all our experiments, as traditionally done (Simonyan & Zisserman, 2014; Szegedy et al., 2016). For a fair comparison, we tune the learning rates and decay rates of the models.

[Figure 3: Learning curves (top-1 test accuracy vs. training step) for Experiment 2 with varying pace(t) = ⌊kN⌋ for DCL+, for k ∈ {0.2, 0.4, 0.6, 0.8, 1.0}.] The parameter k needs to be finely tuned to improve the generalization of the network. A low k value exposes only examples with little noise to the network at every epoch, whereas a high k value exposes most of the dataset, including highly noisy examples, to the network. A moderate k value shows examples with little noise along with some examples with a moderate level of noise to the learner. Here, a moderate k = 0.6 generalizes best.

Experimental setup: In our experiments, we set $\mathrm{pace}(t) = \lfloor kN \rfloor$ for all $t$, where $k \in [b/N, 1]$ is a tunable hyper-parameter. We use a 2-layer fully-connected network (FCN) with 10 hidden neurons and Exponential Linear Unit (ELU) nonlinearities to empirically validate our algorithms (k = 0.9) on a subset of the MNIST dataset with class labels 0 and 1 (Experiment 1). Since this is a very easy task (the vanilla model accuracy is as high as ∼99.9%), we compare the test loss values across training steps in Figure 2a to see the behaviour of DCL on an easy task. DCL+ shows the fastest convergence, although all the networks achieve the same test accuracy. DCL+ achieves vanilla's final test loss score at training step 682 (∼30% speedup). In Experiment 2, we use a 2-layer FCN with 128 hidden neurons and ELU nonlinearities to evaluate our DCL algorithms (k = 0.6) on the relatively difficult small mammals dataset (Krizhevsky et al., 2009), a super-class of CIFAR-100.
Figure 2b shows that DCL+ achieves faster and better convergence than vanilla with respect to the test-set accuracy in Experiment 2. DCL+ achieves vanilla's convergence test accuracy at training step 1896 (∼60% speedup). Further experimental details are deferred to Appendix B.1. Since DCL is computationally expensive, we perform DCL experiments only on small datasets. Fine-tuning of k is crucial for improving the performance of DCL+ on the test set (see Figure 3). We fine-tune k by trial and error over the test accuracy score.
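As a sanity check on the geometric decomposition behind equation (1), the following snippet verifies the single-step case numerically on toy vectors. Note that the identity holds exactly when θ is measured between the descent step −η∇f_i(w_t) and a_t; the sign convention here follows that geometry, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1
w_bar = rng.normal(size=5)        # stand-in for the optimal weights
w_t = rng.normal(size=5)          # current weights
g = rng.normal(size=5)            # gradient of one example's loss at w_t

a_t = w_bar - w_t
R_t = np.linalg.norm(a_t)
# Angle between the descent step direction (-g) and a_t
cos_th = -(a_t @ g) / (R_t * np.linalg.norm(g))
sin_sq = 1.0 - cos_th ** 2

w_next = w_t - eta * g            # one SGD step on this example
lhs = np.linalg.norm(w_bar - w_next) ** 2                    # R_{t+1}^2 directly
rhs = (R_t - eta * np.linalg.norm(g) * cos_th) ** 2 \
      + eta ** 2 * (np.linalg.norm(g) ** 2) * sin_sq         # equation (1), one step
assert np.isclose(lhs, rhs)
```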
The paper contains two curriculum learning algorithms, of which one assumes knowledge of the parameters found by the baseline, uniform-sampling model in order to push updates in that direction, and the second orders images according to an increasing stddev/entropy of pixels. While the first approach is impractical because of the strong assumption, the second approach demonstrates small gains that lie within random variance (Fig. 5, Fig. 6) and would not be straightforward to apply to non-image data, e.g., text. These reasons make the paper hard to accept.
SP:71d2c08c45a1f4635bb51699e5833c74699731f2
A Simple Approach To Define Curricula For Training Neural Networks
1 INTRODUCTION . Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) is a simple yet widely used algorithm for machine learning optimization. There have been many efforts to improve its performance. A number of such directions, such as AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014), improve upon SGD by fine-tuning its learning rate, often adaptively. However, Wilson et al. (2017) has shown that the solutions found by adaptive methods generalize worse even for simple overparameterized problems. Reddi et al. (2019) introduced AMSGrad hoping to solve this issue. Yet there is a performance gap between AMSGrad and SGD in terms of the ability to generalize (Keskar & Socher, 2017). Further, Choi et al. (2019) shows that more general optimizers such as Adam and RMSProp can never underperform SGD when all their hyperparameters are carefully tuned. Hence, SGD still remains one of the main workhorses of the ML optimization toolkit. SGD proceeds by stochastically making unbiased estimates of the gradient on the full data (Zhao & Zhang, 2015). However, this approach does not match the way humans typically learn various tasks. We learn a concept faster if we are presented with the easy examples first and are then gradually exposed to examples of increasing complexity, based on a curriculum. An orthogonal extension to SGD (Weinshall & Cohen, 2018) that has some promise in improving its performance is to choose examples according to a specific strategy driven by cognitive science – this is curriculum learning (CL) (Bengio et al., 2009), wherein the examples are shown to the learner based on a curriculum.

1.1 RELATED WORKS . Bengio et al. (2009) formalizes the idea of CL in a machine learning framework where the examples are fed to the learner in an order based on their difficulty. The notion of the difficulty of examples has not really been formalized, and various heuristics have been tried out: Bengio et al. (2009) uses manually crafted scores, self-paced learning (SPL) (Kumar et al., 2010) uses the loss values with respect to the learner's current parameters, and CL by transfer learning uses the loss values with respect to a pre-trained learner to rate the difficulty of examples in the data. Among these works, what makes SPL particular is that it uses a dynamic CL strategy, i.e., the preferred ordering is determined dynamically over the course of the optimization. However, SPL does not really improve the performance of deep learning models, as noted in (Fan et al., 2018). Similarly, Loshchilov & Hutter (2015) uses a rank-based function of the latest loss values for online batch selection for faster training of neural networks. Katharopoulos & Fleuret (2018) and Chang et al. (2017) perform importance sampling to reduce the variance of stochastic gradients during training. Graves et al. (2017) and Matiisen et al. (2020) propose teacher-guided automatic CL algorithms that employ various supervised measures to define dynamic curricula. The most recent works in CL show its advantages in reinforcement learning (Portelas et al., 2020; Zhang et al., 2020). The recent work by Weinshall & Cohen (2018) introduces the notion of an ideal difficulty score to rate the difficulty of examples based on the loss values with respect to the set of optimal hypotheses.
This work studies a number of curricula for faster training of neural networks. The authors first propose a curriculum named DCL+ that is designed to order data points based on the alignment of their gradients with the direction of optimization. This curriculum depends on the evaluation of individual gradients of data points as well as on an approximation to a local optimum. Next, they study a number of easy-to-compute statistical measures for ordering data points.
SP:71d2c08c45a1f4635bb51699e5833c74699731f2
CURI: A Benchmark for Productive Concept Learning Under Uncertainty
1 INTRODUCTION . Human concept learning is more flexible than today's AI systems. Human conceptual knowledge is productive: people can understand and generate novel concepts via compositions of existing concepts ("an apartment dog") (Murphy, 2002), unlike standard machine classifiers that are limited to a fixed set of classes ("dog", "cat", etc.). Further, humans can induce goal-based, "ad hoc" categories such as "things to take from one's apartment in a fire" (children, dogs, keepsakes, etc.) (Barsalou, 1983). Thus, unlike AI systems, humans reason seamlessly in large, essentially "unbounded" concept spaces. Beyond unboundedness, a natural challenge in such concept spaces is uncertainty – the right concept to be inferred is uncertain, as a plethora of candidate concepts could explain the observations. For example, in Figure 1 (top, image panel), the "right" concept could be that "All objects are blue and have the same size", but it could also be "There are fewer than four objects in the scene", or "All objects have the same color". Humans gracefully handle such uncertainty and underdetermination (Tenenbaum & Griffiths, 2001; Xu & Tenenbaum, 2007; Goodman et al., 2008; Piantadosi et al., 2016). Popular compositional reasoning benchmarks such as CLEVR (Johnson et al., 2016) for visual question answering and Raven's Progressive Matrices (Santoro et al., 2017) for deductive, analogical reasoning are compositionally rich and challenging in nature, but do not tackle ambiguity and underdetermination. We address this gap in the literature, and propose the Compositional Reasoning Under Uncertainty (CURI) benchmark to study how modern machine learning systems can learn concepts spanning a large, productively defined space (Figure 1). In pursuit of this goal, we instantiate a meta-learning task where a model must acquire a compositional concept from finite samples. A signature of productivity in human thought is our ability to handle novel combinations of known, atomic components. Thus, in CURI we instantiate different systematic train-test splits to analyze different forms of generalization in concept learning, involving novel combinations of intrinsic properties (e.g., color, shape) with boolean operators, counting, extrinsic object properties (e.g., object location), and a novel test of variable binding in the context of compositional learning. While related systematic splits have been proposed in prior work in the context of other tasks such as question answering and analogical reasoning (Barrett et al., 2018; Hill et al., 2019; Agrawal et al., 2017; Johnson et al., 2016; Vedantam et al., 2017; Higgins et al., 2017; Bakhtin et al., 2019; Lake & Baroni, 2018; Ruis et al., 2020), ours is the first benchmark that tests different qualitative aspects of reasoning about productive concepts under uncertainty. Compositional Reasoning Under Uncertainty (CURI) Task. Concretely, the CURI task tests few-shot learning of relational concepts in a large compositional conceptual space, with design inspiration from studies in cognitive modeling using a language of thought (LOT) approach (Fodor, 1975; Piantadosi, 2011; Kemp et al., 2005). CURI includes scene-based concepts such as "All objects have the same color" and "There exists a blue object while the rest are triangles" (Figure 1) but, unlike CLEVR (Johnson et al., 2016), there are too few examples to deduce answers with certainty.
Our benchmark is defined through a series of meta-learning episodes (see example in Figure 2): given positive and negative examples of a new concept, Dsupp (known as the "support set"), the goal of an episode is to classify new examples Dquery (the "query set"). As in few-shot classification (Fei-Fei et al., 2006), meta-learning (Vinyals et al., 2016), and other open-set tasks (Lampert et al., 2014), models are evaluated on novel classes outside the (meta-)training set. Unlike previous work (Triantafillou et al., 2019; Lake et al., 2019) that focuses on atomic concepts, our benchmark concerns more structured, relational concepts built compositionally from a set of atomic concepts, and involves reasoning under uncertainty – an ideal learner must marginalize over many hypotheses when making predictions (Gelman et al., 2004; Xu & Tenenbaum, 2007; Piantadosi et al., 2016). We also vary the modality in which scenes are presented – rendering them as images, symbolic schemas, and sounds – enabling future research on modality-specific representational choices for compositional reasoning under uncertainty. Finally, we vary the concepts learned by the model during meta-training and meta-testing to test different aspects of systematic generalization. Compositionality Gap. In addition to defining systematic splits, we also characterize (for the first time, to our knowledge) the difficulty of generalization entailed by each split by introducing the notion of a model-independent "compositionality gap". Concretely, the compositionality gap is the difference in test performance between an ideal Bayesian learner with access to the full hypothesis space and a Bayesian learner with access to only a (potentially large) list of the hypotheses examined during meta-training. A large gap indicates that any learner must extrapolate compositionally from the training hypotheses to solve the task; additionally, models can be compared to ideal learners that either do or do not engage in such extrapolation. We anticipate that this tool will be more broadly useful for analyzing other benchmarks with compositional splits. Models. We evaluate models along various dimensions that concern the difficulty of learning productive concepts under uncertainty, including: 1) the modality in which the input is rendered (images, schemas, sounds), 2) the method used for reasoning across objects in a scene (transformer, relation network, global average pooling, concatenation), 3) whether or not training provides ground-truth symbolic descriptions of concepts, and 4) how negative examples are sampled. Overall, our evaluations suggest that there is substantial room for improvement in compositional reasoning under uncertainty with respect to the compositionality gap, representing a novel challenge for compositional learning. Summary of contributions: 1) We introduce the Compositional Reasoning Under Uncertainty (CURI) benchmark for evaluating compositional, relational learning under uncertainty from observational data; 2) We introduce a 'compositionality gap' metric for measuring the difficulty of systematic generalization from train to test; 3) We provide various baseline models for benchmarking progress. 2 RELATED WORK . Compositional Learning .
Related work has examined systematic generalization in pattern completion using Raven's Progressive Matrices (PGM) (Santoro et al., 2017; Hill et al., 2019) and visual question answering with CLEVR (Johnson et al., 2016; Bahdanau et al., 2019). CURI's use of the CLEVR renderer further invites particular comparison with that benchmark. Compared to these more deductive reasoning tests, CURI examines few-shot concept learning under substantial inherent uncertainty. Unlike puzzle solving or question answering, an ideal inductive learner on CURI cannot know the right rule with certainty. In essence, unlike CLEVR, the "question" to be answered is not given to the model as input, but must be inferred – making the task more challenging. While PGMs do involve such an inference, once the constraints of a puzzle are identified, the task does not: 1) have any uncertainty in the reasoning (which is crucial), and 2) involve any "concept" learning – where a concept applies to multiple images – as much as it involves "instance" matching to complete a sequence. In contrast, a successful CURI model behaves as if marginalizing over many hypotheses consistent with the observations, e.g., (Tenenbaum & Griffiths, 2001; Xu & Tenenbaum, 2007; Piantadosi et al., 2016), an ability which is rarely studied directly in deep learning models (although see Grant et al. (2019)). Recently, Keysers et al. (2019) proposed a method to create "difficult" systematic splits based on the principle that they should share atoms but have maximally different compositions. This is complementary to our splits, which provide interpretable notions of what each split tests, such as disentangling, complexity, and variable binding. Moreover, our variable-binding split is predicated on having different atoms between train and test, and thus cannot be recovered by their methodology. Language of Thought (LOT). Our choice of compositional concepts was most closely inspired by Piantadosi et al. (2016), along with other studies of human concept learning in the Language of Thought (LOT) framework (Fodor, 1975; Goodman et al., 2008; Kemp & Jern, 2009; Piantadosi et al., 2012; Goodman et al., 2015; Overlan et al., 2017; Lake & Piantadosi, 2019). In typical LOT studies of human learning, the conceptual space H is defined through a probabilistic context-free grammar G, which specifies a set of conceptual primitives and their rules of combination. Here, we use a LOT-inspired grammar G to generate an unbounded set of concepts H, while evaluating machine learning models trained without access to the underlying LOT. 3 COMPOSITIONAL REASONING UNDER UNCERTAINTY (CURI) DATASET . Concept space. The compositional concepts in CURI were inspired by the empirical and cognitive modeling work of Piantadosi et al. (2016). The space of concepts (LOT) is defined by a context-free grammar (G). Figure 3 shows the LOT and specifies how primitives and functions compose to produce a large, unbounded concept space. The LOT has three variables: x, representing an object in a scene; $S = \{x_i\}_{i=1}^{N}$, representing the set of all objects in the scene; and $S_{-x} = S \setminus \{x\}$, representing the set of all objects in the scene except x. Each concept describes a rule composed of object and scene properties, logical operators, and/or comparison operators, and can be evaluated on a given scene S to determine whether the scene satisfies the rule.
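As a concrete illustration of how a context-free grammar can generate an unbounded concept space, here is a minimal sketch that samples concept strings from a toy probabilistic CFG; these production rules are invented for illustration and are far smaller than CURI's actual LOT grammar.

```python
import random

# Toy PCFG: each non-terminal maps to a list of (production, probability) pairs.
# These rules are illustrative only; CURI's real grammar is far richer.
GRAMMAR = {
    "CONCEPT": [(["exists x in S (", "PRED", ")"], 0.5),
                (["all x in S (", "PRED", ")"], 0.5)],
    "PRED":    [(["color?(x) = ", "COLOR"], 0.4),
                (["shape?(x) = ", "SHAPE"], 0.4),
                (["(", "PRED", ") and (", "PRED", ")"], 0.2)],
    "COLOR":   [(['"blue"'], 0.5), (['"red"'], 0.5)],
    "SHAPE":   [(['"square"'], 0.5), (['"triangle"'], 0.5)],
}

def sample(symbol="CONCEPT"):
    """Recursively expand a symbol into a concept string."""
    if symbol not in GRAMMAR:          # terminal token: emit as-is
        return symbol
    productions, weights = zip(*GRAMMAR[symbol])
    production = random.choices(productions, weights=weights)[0]
    return "".join(sample(s) for s in production)

print(sample())  # e.g. 'exists x in S (color?(x) = "blue")'
```

Because the PRED rule can recurse through "and", the set of derivable concept strings is unbounded, which mirrors the productivity of the LOT.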
Object and scene properties are defined by functions which can be applied to objects or scenes: for example, size?(x) yields the size of an object x, while size?(S) returns a set with the sizes of all the objects ({size?(x) : x ∈ S}). Comparison and logical operators can be used to compare and relate various properties of objects in scenes. In contrast to Piantadosi et al. (2016), we include a count operator, which determines how many times a condition is satisfied by a set and allows us to check how well deep learning models are able to count (Chattopadhyay et al., 2016; Johnson et al., 2016; Agrawal et al., 2017). Finally, quantifiers such as exists and for-all enrich the LOT by specifying the number of objects which must satisfy a given condition. Consider the following example concept (Figure 1, bottom): "There exists a blue object in the scene and the rest of the objects are squares." To access the color of a given object, we use color?(x), and to access the shape of a given object, we use shape?(x). To determine whether an object matches a specific property, we can combine this with equality: shape?(x) = "square". Finally, we can use exists to specify that at least one object must be blue, S−x to specify all the objects except for that blue object, and all to specify that all the objects in S−x must be squares. Putting it all together: exists x ∈ S (color?(x) = "blue") and all (shape?(S−x) = "square").
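As an illustration of how such a concept could be checked against a scene, here is a minimal Python sketch; the scene encoding and helper names are our own simplifying assumptions, not CURI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    color: str
    shape: str

def concept(scene):
    """exists x in S (color?(x) = "blue") and all(shape?(S_minus_x) = "square")."""
    for i, x in enumerate(scene):                 # try each binding for x
        rest = scene[:i] + scene[i + 1:]          # S_{-x}: everything except x
        if x.color == "blue" and all(o.shape == "square" for o in rest):
            return True
    return False

scene = [Obj("blue", "triangle"), Obj("red", "square"), Obj("green", "square")]
print(concept(scene))  # True: one blue object, all remaining objects are squares
```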
Structured Generalization Splits. A signature of productivity is the ability to handle novel combinations of known components (Fodor, 1975; Fodor & Pylyshyn, 1988). Thus, in CURI, we consider splits that require generalizing to novel combinations of known elements from our LOT (Figure 3), including combinations of constants, variables, and functions. We achieve this by creating disjoint splits of concepts Htrain and Htest for training and evaluating models. By varying the held-out elements and their combinations, we obtain splits that evaluate different axes of generalization. In practice, we use our grammar G to sample and filter a large set of concepts (see Appendix B.2 for more details), which yields a set of 14,929 concepts H for training and evaluation. We next describe how each split divides H into Htrain and Htest, to test productive, out-of-distribution generalization:

• Instance IID: Evaluates generalization to novel episodes from the same concept set. This is the standard setup in machine learning (Murphy, 2013), in which Htrain = Htest. This is the only split where train and test concepts overlap.
• Concept IID: Evaluates generalization to novel concepts based on an arbitrary random split of the concepts into Htrain and Htest [1].
• Counting: Evaluates the ability to learn a new concept h with novel property-count combinations; e.g., the training concepts never filter for exactly '3 squares'.
• Extrinsic properties: Evaluates the ability to learn a new concept h with novel combinations of extrinsic (e.g., location) and intrinsic (e.g., color) object properties.
• Intrinsic properties: Evaluates the ability to learn a new concept h with novel combinations of intrinsic properties; e.g., the training concepts never reference both 'red' and 'rubber'.
• Boolean operations: Evaluates the ability to learn concepts which require applying a familiar boolean operation to a property to which the operation has never been applied previously.
• Complexity split: Evaluates generalization from simple concepts (those with at most 10 symbols) to more complex concepts (longer than 10 symbols). This is indicative of the productivity (Fodor, 1975) exhibited by models in generalizing from simpler concepts to more complex ones.
• Variable binding: Evaluates learning of entirely novel intrinsic properties; e.g., the training concepts involve only "red", "blue", and "green", but test concepts involve "yellow" (although 'yellow' objects can still appear in training scenes). This is indicative of inferential coherence (Fodor, 1975) in models, in generalizing rules of inference to novel atoms.

A model that infers the underlying LOT during meta-training would be expected to perform well on any such systematic split. By comparing the performance of current models to such ideal learners, this benchmark will allow us to evaluate progress on the systematic out-of-distribution generalization capabilities of our current models. [Footnote 1: While some strings h might differ in surface form, they may yield the same results when applied to images. In this split we account for such synonymy and ensure that no two concepts which are synonyms are in different splits. See Appendix B.6 for more details.] Appendix C provides more details on the structured splits. From Concepts to Meta-learning Episodes. A single episode comprises a support set (Dsupp) and a query set (Dquery), each of which is generated from a given concept h. Formally, a support or query set D has input data u and corresponding labels y, i.e., $D = \{\{y_i\}_{i=1}^{N}, \{u_i\}_{i=1}^{N}\}$. Each support and query set contains 5 positive and 20 negative examples – negative examples are oversampled since the space of negatives is generally much larger than that of positives. The set of positive examples is sampled uniformly from a categorical distribution over all positives. However, we consider two types of negatives: 1) easy negatives, in which the negatives are also sampled at random, and 2) hard negatives, in which negatives are generated from a closely related concept which also evaluates true on the positive examples in Dsupp, such that these negatives are maximally confusing. Altogether, for each split, our train, validation, and test sets contain 500,000, 5,000, and 20,000 episodes, respectively. Compositionality Gap. A key aspect of our benchmark is to define the difficulty in learning that arises from the compositional structure of the concept space. Most of the splits above are structured such that $H_{test} \cap H_{train} = \emptyset$ – forcing a learner to use the compositional structure of the concept space to generalize to Htest. We conceptualize the difficulty of this task through the notion of its compositionality gap. Intuitively, the compositionality gap captures the difference between the generalization performance of an ideal compositional learner (strong oracle) and an ideal non-compositional learner that is unable to extrapolate outside the training concepts (weak oracle). Formally, let Ω ∈ {strong, weak} denote an oracle over a concept space $H_\Omega$. The posterior predictive distribution of an oracle for a query scene u and query label y ∈ {0, 1} is then given as: $p_\Omega(y \mid u, D_{supp}) = \sum_{h \in H_\Omega} p_\Omega(y \mid h, u)\, p_\Omega(h \mid D_{supp})$, where $p_\Omega(h \mid D_{supp}) \propto p_\Omega(h)\, p(\{y_i\}_{i=1}^{N} \mid h; \{u_i\}_{i=1}^{N})$ and $p_\Omega(h)$ denote the posterior and prior, respectively.
Given a metric of interest M (e.g., mean average precision or accuracy), the compositionality gap of a learning task is then simply defined as the difference in performance of the strong and weak oracles when evaluating on concepts from Htest, i.e., $M(p_{strong}) - M(p_{weak})$. Using this notion of the compositionality gap, we can then define the ideal learners, i.e., the strong and weak oracles, simply via their priors. In particular, let w(h) denote a weight on the importance of each hypothesis [2] and let I denote the indicator function. We then define the prior of an oracle as $p_\Omega(h) = \sum_{h' \in H_\Omega} w(h')\, \mathbb{I}[h' = h]$. The difference between the strong and weak oracles lies in which concepts can be accessed in these priors. In this formalism, the strong oracle has access to the union of train and test concepts, that is, $H_{strong} = H_{train} \cup H_{test}$. The weak oracle, on the other hand, only assumes access to $H_{weak} = H_{train}$, which means it is unable to consider any hypothesis outside what has been seen in training and assigns such hypotheses zero probability mass. Given a support set Dsupp, this difference in priors then leads to different inferences on the posteriors and allows us to quantify how compositionally novel a learning task is relative to these ideal learners.
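To make the oracle construction concrete, here is a minimal sketch of the weak/strong oracle posterior predictive and the resulting gap on a toy hypothesis space; the 1-D "scene" encoding, uniform weights w(h), and noiseless likelihood are our own simplifying assumptions, not CURI's implementation.

```python
import numpy as np

def posterior_predictive(hyps, support, query_u):
    """p(y=1 | query_u, support) for a Bayesian oracle over `hyps`.

    hyps:    list of boolean functions h(u) -> {0, 1}
    support: list of (u, y) pairs; query_u: a single input
    Assumes a uniform prior w(h) and noiseless likelihood p(y|h,u) = 1[h(u)=y].
    """
    post = np.array([all(h(u) == y for u, y in support) for h in hyps], float)
    if post.sum() == 0:              # no hypothesis fits: fall back to chance
        return 0.5
    post /= post.sum()
    return sum(p * h(query_u) for p, h in zip(post, hyps))

# Toy 1-D "scenes": each concept is a threshold rule u > c.
H_train = [lambda u, c=c: int(u > c) for c in (0.0, 1.0)]
H_test = [lambda u: int(u > 2.0)]            # held-out concept
support = [(2.5, 1), (1.5, 0)]               # labeled by the held-out concept

weak = posterior_predictive(H_train, support, query_u=1.8)
strong = posterior_predictive(H_train + H_test, support, query_u=1.8)
print(weak, strong)  # weak oracle is at chance (0.5); strong oracle predicts 0
```

Here no training hypothesis explains the support set, so the weak oracle collapses to chance, while the strong oracle recovers the held-out rule; the difference in a metric computed from these predictions is precisely the compositionality gap.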
The following work presents a CLEVR-based compositionality benchmark. The task of the model is to verify logical statements about an image, and in order to achieve this, it must learn how to map individual statements to a composition of functions over the image that check for color, placement, shape, etc. Specific to this dataset is that it is explicitly few-shot, which forces the models to generalize very quickly and to infer under uncertainty.
SP:3f2384e43d16f4b06bf238e4ce097d4e34f25ee7
CURI: A Benchmark for Productive Concept Learning Under Uncertainty
This work proposes the CURI dataset to measure productive concept learning under uncertainty. The dataset is built on a concept space defined by a language of thought and is formulated as a few-shot meta-learning problem in which a model must tell apart in-concept samples from out-of-concept samples. The authors also design several generalization splits that test models' out-of-distribution performance. Using the compositionality-gap oracles together with prototypical networks, the authors show that the compositional concept learning and reasoning problem in CURI is challenging.
SP:3f2384e43d16f4b06bf238e4ce097d4e34f25ee7
Weakly Supervised Neuro-Symbolic Module Networks for Numerical Reasoning
1 INTRODUCTION. End-to-end neural models have proven to be powerful tools for an expansive set of language and vision problems by effectively emulating the input-output behavior. However, many real problems like Question Answering (QA) or Dialog need more interpretable models that can incorporate explicit reasoning in the inference. In this work, we focus on the most generic form of numerical reasoning over text, encompassed by the reasoning-based MRC framework. A particularly challenging setting for this task is where the answers are numerical in nature, as in the popular MRC dataset DROP (Dua et al., 2019). Figure 1 shows the intricacies involved in the task: (i) passage and query language understanding, (ii) contextual understanding of the passage dates and numbers, and (iii) application of quantitative reasoning (e.g., max, not) over dates and numbers to reach the final numerical answer. Three broad genres of models have proven successful on the DROP numerical reasoning task. First, large-scale pretrained language models like GenBERT (Geva et al., 2020) use a monolithic Transformer architecture and decode numerical answers digit by digit. Though they deliver mediocre performance when trained only on the target data, their competency is derived from pretraining on massive synthetic data augmented with explicit supervision of the gold numerical reasoning. The second kind comprises reasoning-free hybrid models like NumNet (Ran et al., 2019), NAQANet (Dua et al., 2019), NABERT+ (Kinley & Lin, 2019), MTMSN (Hu et al., 2019), and NeRd (Chen et al., 2020). They explicitly incorporate numerical computations in the standard extractive QA pipeline by learning a multi-type answer predictor over different reasoning types (e.g., max/min, diff/sum, count, negate) and directly predicting the corresponding numerical expression, instead of learning to reason. This is facilitated by exhaustively precomputing all possible outcomes of discrete operations and augmenting the training data with reasoning-type supervision and the numerical expressions that lead to the correct answer. Lastly, the most relevant class of models for this work are the modular networks for reasoning. Neural Module Networks (NMN) (Gupta et al., 2020) is the first explicit-reasoning-based QA model; it parses the query into a specialized program and executes it step-wise over learnable reasoning modules. However, to do so, apart from the exhaustive precomputation of all discrete operations, it also needs more fine-grained supervision of the gold program and the gold program execution, obtained heuristically by leveraging the abundance of templatized queries in DROP. While more pragmatic and richer in interpretability, both modular and hybrid networks are tightly coupled with this additional supervision. For instance, the hybrid models cannot learn without it, and while NMN is the first to enable learning from the QA pair alone, it still needs finer-grained supervision for at least a part of the training data. With this, it manages to supersede the SoTA models NABERT and MTMSN on a carefully chosen subset of DROP using the supervision. However, NMN generalizes poorly to more open-ended settings where such supervision is not easy to handcraft. Need for symbolic reasoning.
One striking characteristic of the modular methods is that they avoid discrete reasoning by employing only learnable modules with an exhaustively precomputed space of outputs. While they perform well on DROP, their modeling complexity grows arbitrarily with more complex non-linear numerical operations (e.g., exp, log, cos). In contrast, symbolic modular networks that execute the discrete operations are possibly more robust and pragmatic in this respect, as they remain unaffected by the operation complexity. Such discrete reasoning has indeed been incorporated for simpler, well-structured tasks like math word problems (Koncel-Kedziorski et al., 2016) or KB/Table QA (Zhong et al., 2017; Liang et al., 2018; Saha et al., 2019), with deep reinforcement learning (RL) for end-to-end training. MRC, however, needs a more generalized framework of modular neural networks involving fuzzier reasoning over noisy entities extracted from open-ended passages. In view of this, we propose a Weakly-Supervised Neuro-Symbolic Module Network (WNSMN):
• A first attempt at numerical-reasoning-based MRC trained with answers as the sole supervision;
• Based on a generalized framework of dependency parsing of queries into noisy heuristic programs;
• End-to-end training of neuro-symbolic reasoning modules in an RL framework with discrete rewards.
To concretely compare WNSMN with the contemporary NMN, consider the example in Figure 1. In comparison to our generalized query parsing, NMN parses the query into a program form (MAX(FILTER(FIND('Carpenter'), 'goal'))), which is executed step-wise by different learnable modules with an exhaustively precomputed output set. To train the network, it employs various forms of strong supervision, such as gold program operations and gold query-span attention at each step of the program, and gold execution, i.e., supervision of the passage numbers (23, 26, 42) to execute the MAX operation on. While NMN can only handle the 6 reasoning categories that the supervision was tailored to, WNSMN targets the full DROP with numerical answers (called DROP-num), which involves more diverse reasoning on more open-ended questions. We empirically compare WNSMN on DROP-num with the SoTA NMN and GenBERT, which allow learning with partial or no strong supervision. Our results showcase that the proposed WNSMN achieves 32% better accuracy than NMN in the absence of one or more types of supervision, and performs 8% better than GenBERT when the latter is fine-tuned only on DROP in a comparable setup, without additional synthetic data having explicit supervision.
2 MODEL: WEAKLY SUPERVISED NEURO-SYMBOLIC MODULE NETWORK. We now describe our proposed WNSMN, which learns to infer the answer based on weak supervision of the QA pair by generating the program form of the query and executing it through explicit reasoning.
Parsing Query into Programs. To keep the framework generic, we use a simplified representation of the Stanford dependency parse tree (Chen & Manning, 2014) of the query to get a generalized program (Appendix A.5). First, a node is constructed for the subtree rooted at each child of the root by merging its descendants in the original word order. Next, an edge is added from the left-most node (which we call the root clause) to every other node. Then, traversing left to right, each node is organized into a step of a program having a linear flow (a minimal sketch of this heuristic is given below).
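The parsing heuristic just described can be sketched in a few lines; the parse-tree interface (`root.children`, `subtree_tokens()`) and the dictionary-based program representation are illustrative assumptions rather than the authors' actual implementation.

```python
def parse_to_program(root):
    """Heuristically convert a simplified dependency parse of the query
    into a linear program. `root` is the parse root; each child exposes
    `subtree_tokens()` returning its descendants in original word order.
    """
    # One node per subtree rooted at each child of the parse root.
    nodes = [" ".join(c.subtree_tokens()) for c in root.children]
    root_clause, rest = nodes[0], nodes[1:]

    # X1 is the left-most node (the root clause); every other node gets an
    # incoming edge from it, so each later step references X1.
    steps = [{"name": "X1", "span": root_clause, "refs": []}]
    for i, span in enumerate(rest, start=2):
        steps.append({"name": f"X{i}", "span": span, "refs": ["X1"]})

    # Final step: the root clause as the query span argument (it usually
    # hints at the discrete operation, e.g. max) and the leaf steps
    # (capped at 2, as in the paper) as its reference arguments.
    leaves = [s["name"] for s in steps[1:]][-2:] or ["X1"]
    steps.append({"name": "Answer", "span": root_clause,
                  "op": "Discrete-Reasoning", "refs": leaves})
    return steps
```

Applied to the Figure 1 query, this yields exactly the program shown in the example that follows.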
For example, the program obtained in Figure 1 is X1 = ('which is the longest'); X2 = ('goal by Carpenter', X1); Answer = Discrete-Reasoning('which is the longest', X2). Each program step consists of two types of arguments: (i) a Query Span Argument, obtained from the corresponding node, which indicates the query segment referred to in that program step, e.g., 'goal by Carpenter' in Step 2; (ii) Reference Argument(s), obtained from the incoming edges to that node, which refer to the previous steps of the program that the current one depends on, e.g., X1 in Step 2. Next, a final step of the program is added, which has the leaf node(s) obtained in the above manner as its reference argument(s) and the root-clause as its query span argument. This step is specifically responsible for handling the discrete operation, enabled by the root-clause, which is often indicative of the kind of discrete reasoning involved (e.g., max). However, this being a noisy heuristic, the QA model needs to be robust to such noise and additionally rely on the full query representation in order to predict the discrete operation. For simplicity we limit the number of reference arguments to 2.
2.1 PROGRAM EXECUTION. Our proposed WNSMN learns to execute the program over the passage in three steps. In the preprocessing step, it identifies numbers and dates from the passage and maintains them as separate canonicalized entity lists along with their mention locations. Next, it learns an entity-specific cross-attention model to rank the entities w.r.t. their query relevance (§2.1.1), and then samples relevant entities as discrete arguments (§2.1.2) and executes appropriate discrete operations on them to reach the answer. An RL framework (§2.1.3) trains it end-to-end with the answer as the sole supervision.
2.1.1 ENTITY-SPECIFIC CROSS ATTENTION FOR INFORMATION EXTRACTION. To rank the query-relevant passage entities, we model the passage, program and entities jointly. Modeling interaction between program and passage. This module (Figure 2, left) learns to associate query span arguments of the program with the passage. For this, similar to NMN, we use a BERT-base pretrained encoder (Devlin et al., 2018) to get contextualized token embeddings of the passage and the query span argument of each program step, respectively denoted by $P_k$ and $Q_k$ for the $k$'th program step. Based on these, we learn a similarity matrix $S \in \mathbb{R}^{l \times n \times m}$ between the program and passage, where $l$, $n$, and $m$ are respectively the program length, the query span argument length, and the passage length (in tokens). Each $S_k \in \mathbb{R}^{n \times m}$ represents the affinity over the passage tokens for the $k$'th program argument and is defined as $S_k(i,j) = w^T[Q_{ki}; P_{kj}; Q_{ki} \odot P_{kj}]$, where $w$ is a learnable parameter and $\odot$ is element-wise multiplication. From this, an attention map $A_k$ is computed over the passage tokens for the $k$'th program argument as $A_k(i,j) = \mathrm{softmax}_j(S_k(i,j)) = \exp(S_k(i,j)) / \sum_j \exp(S_k(i,j))$. Similarly, for the $i$'th token of the $k$'th program argument, the cumulative attention $a_{ki}$ w.r.t. the passage is $a_{ki} = \mathrm{softmax}_i(\sum_j S_k(i,j))$. A linear combination of the attention map $A_k(i,\cdot)$ weighted by $a_{ki}$ gives the expected passage attention for the $k$'th step, $\bar{\alpha}_k = \sum_i a_{ki} A_k(i,\cdot) \in \mathbb{R}^m$. Span-level smoothed attention.
To facilitate information spotting and extraction over contiguous spans of text, we regularize the passage attention so that the attention on a passage token is high if the attention over its neighbors is high. We achieve this by adopting a heuristic smoothing technique (Huang et al., 2020), taking sliding windows of different lengths $\omega \in \{1, 2, \ldots, 10\}$ over the passage and replacing the token-level attention with the attention averaged over the window. This results in 10 different attention maps over the passage for the $k$'th step of the program: $\{\bar{\alpha}_k^\omega \mid \omega \in \{1, 2, \ldots, 10\}\}$. Soft span prediction. This network takes a multi-scaled (Gupta et al., 2020) version of $\bar{\alpha}_k^\omega$, obtained by multiplying the attention map with $|s|$ different scaling factors ($s = \{1, 2, 5, 10\}$), yielding an $|s|$-dimensional representation for each passage token, i.e., $\bar{\alpha}_k^\omega \in \mathbb{R}^{m \times |s|}$. This is then passed through an $L$-layer stacked self-attention transformer block (Vaswani et al., 2017), which encodes it to $m \times d$ dimensions, followed by a linear layer of dimension $d \times 1$, to obtain the span prediction logits: $\alpha_k^\omega = \mathrm{Linear}(\mathrm{Transformer}(\mathrm{MultiScaling}(\bar{\alpha}_k^\omega))) \in \mathbb{R}^m$. Further, the span prediction logits at each program step $k$ are additively combined with those from the previous steps referenced in the current one through the reference argument $\mathrm{ref}(k)$ at step $k$, i.e., $\alpha_k^\omega = \alpha_k^\omega + \sum_{k' \in \mathrm{ref}(k)} \alpha_{k'}^\omega$. Modeling interaction between program and number/date entities. This module (Figure 2, right) facilitates an entity-based information spotting capability: given a passage mention of a number/date entity relevant to the query, the model should be able to attend to the neighborhood around it. To do this, for each program step, we first compute a passage-tokens-to-number-tokens attention map $A^{num} \in \mathbb{R}^{l \times m \times N}$, where $N$ is the number of unique number entities. Note that this attention map is different for each program step, as the contextual BERT encoding of the passage tokens ($P_k$) is coupled with the program's span argument of that step. At the $k$-th step, the row $A_k^{num}(i,\cdot)$ denotes the probability distribution over the $N$ unique number tokens w.r.t. the $i$-th passage token. The attention maps are obtained by a softmax normalization of each row of the corresponding passage-tokens-to-number-tokens similarity matrix $S_k^{num} \in \mathbb{R}^{m \times N}$ for $k = \{1, \ldots, l\}$, where the elements of $S_k^{num}$ are computed as $S_k^{num}(i,j) = P_{ki}^T W_n P_{k n_j}$, with $W_n \in \mathbb{R}^{d \times d}$ a learnable projection matrix and $n_j$ the passage location of the $j$-th number token. These similarity scores are additively aggregated over all mentions of the same number entity in the passage. The relation between program and entities is then modeled as $\tau_k^\omega = \mathrm{softmax}(\sum_i \alpha_{ki}^\omega A_k^{num}(i,\cdot)) \in \mathbb{R}^N$, which gives the expected distribution over the $N$ number tokens for the $k$-th program step using $\omega$ as the smoothing window size. The final stacked attention map obtained for the different windows is $T_k^{num} = \{\tau_k^\omega \mid \omega \in \{1, 2, \ldots, 10\}\}$. Similarly, for each program step $k$, we also compute a separate stacked attention map $T_k^{date}$ over the unique date tokens, parameterized by a different $W_d$. A critical requirement for meaningful attention over entities is to incorporate an information extraction capability in the number and date attention maps $A^{num}$ and $A^{date}$, by enabling the model to attend over the neighborhood of the relevant entity mentions.
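As a concrete illustration of the smoothing and multi-scaling steps above, the following is a minimal PyTorch sketch for a single program step; the function names and the use of `conv1d` for sliding-window averaging are our choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def smoothed_attention_maps(alpha, windows=range(1, 11)):
    """Span-level smoothing (Huang et al., 2020): replace each token's
    attention with its average over a sliding window of length w,
    producing one smoothed map per window size.
    alpha: (m,) passage attention for one program step."""
    m = alpha.numel()
    maps = []
    for w in windows:
        kernel = torch.ones(1, 1, w) / w                       # averaging filter
        smoothed = F.conv1d(alpha.view(1, 1, -1), kernel, padding=w // 2)
        maps.append(smoothed.view(-1)[:m])                     # trim even-w overhang
    return maps

def multi_scale(alpha, scales=(1, 2, 5, 10)):
    """Multi-scaling (Gupta et al., 2020): stack the map multiplied by
    each scaling factor, giving an (m, |s|) representation per token."""
    return torch.stack([alpha * s for s in scales], dim=-1)
```

Each of the 10 smoothed maps would then be multi-scaled and fed to the transformer-plus-linear span predictor described above.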
This is achieved by minimizing the unsupervised auxiliary losses $\mathcal{L}^{num}_{aux}$ and $\mathcal{L}^{date}_{aux}$ in the training objective, which impose an inductive bias over the number and date entities, similar to Gupta et al. (2020). Its purpose is to ensure that the passage attention is densely distributed inside the neighborhood of $\pm\Omega$ (a hyperparameter, e.g., 10) of the passage location of the entity mention, without imposing any bias on the attention distribution outside the neighborhood. Consequently, it maximizes the log-form of the cumulative likelihood of the attention distribution inside the window and the entropy of the attention distribution outside of it:
$$\mathcal{L}^{num}_{aux} = -\frac{1}{l}\sum_{k=1}^{l}\sum_{i=1}^{m}\left[\log\left(\sum_{j=1}^{N} \mathbb{1}[n_j \in [i \pm \Omega]]\, a^{num}_{kij}\right) - \sum_{j=1}^{N} \mathbb{1}[n_j \notin [i \pm \Omega]]\, a^{num}_{kij} \log\left(a^{num}_{kij}\right)\right] \quad (1)$$
where $\mathbb{1}$ is the indicator function and $a^{num}_{kij} = A^{num}_k(i,j)$. $\mathcal{L}^{date}_{aux}$ for date entities is similarly defined.
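Eq. (1) can be sketched directly in PyTorch over the attention tensor; the tensor layout, the function name, and the numerical clamping for stability are our assumptions.

```python
import torch

def aux_window_loss(A, mention_pos, omega=10):
    """Auxiliary loss of Eq. (1): concentrate attention mass inside a
    +/- omega window around entity mentions, maximize entropy outside.
    A: (l, m, N) attention over N entities per program step and passage
    token (each A[k, i] sums to 1); mention_pos: (N,) locations n_j.
    """
    l, m, N = A.shape
    i_idx = torch.arange(m, device=A.device).view(1, m, 1)
    inside = ((mention_pos.view(1, 1, N) - i_idx).abs() <= omega).float()

    in_mass = (A * inside).sum(-1).clamp_min(1e-12)        # sum of a_kij inside window
    outside = A * (1.0 - inside)
    out_nll = (outside * outside.clamp_min(1e-12).log()).sum(-1)  # sum a log a outside

    # L_aux = -(1/l) * sum_k sum_i [ log(in_mass) - out_nll ]
    return -((in_mass.log() - out_nll).sum(dim=1)).mean()
```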
This paper proposes a neuro-symbolic module network that predicts a program structure following a dependency parse, populates that program's arguments, and executes it to answer numerical reasoning questions over text. The authors claim that, compared to Gupta et al. (2020), this approach doesn't require as many domain-specific heuristics to find gold programs or as much precomputation -- it is learned with weak supervision only (just the answers). The model has a number of components that let it reference entities, numbers, and dates in a cross-attentive fashion. Results show that on numerical questions from the DROP dataset, the model outperforms that of Gupta et al. and is competitive with other approaches when appropriate assumptions are made.
SP:0a4cf8c20a5ac64540faf909d0e6d3af34e4036c
Weakly Supervised Neuro-Symbolic Module Networks for Numerical Reasoning
The paper proposes a new model for numerical reasoning in machine comprehension. Given a passage and a query, the model outputs an arithmetic expression over numbers/dates in the passage (e.g. max(23, 26, 42)). The model is trained with weak supervision in the form of numerical answers only. This weak supervision is used to define the reward for reinforcement learning training. A key claimed advantage of the model compared to the prior art is that it trains end-to-end from the rewards as the only form of supervision. This is contrasted to neural module networks, which require program supervision for good performance, as well as GenBERT, which requires additional synthetic training data for pretraining. Two key quantitative results include a 32% accuracy improvement over NMN when one or more types of strong supervision are withheld, and an 8% improvement over GenBERT when the latter is fine-tuned only on DROP without additional synthetic pretraining data.
SP:0a4cf8c20a5ac64540faf909d0e6d3af34e4036c
LambdaNetworks: Modeling long-range Interactions without Attention
1 INTRODUCTION. Modeling long-range dependencies in data is a central problem in machine learning. Self-attention (Bahdanau et al., 2015; Vaswani et al., 2017) has emerged as a popular approach to do so, but the costly memory requirement of self-attention hinders its application to long sequences and multidimensional data such as images.² Linear (or efficient) attention mechanisms (Katharopoulos et al., 2020; Choromanski et al., 2020) offer a scalable remedy for high memory usage but fail to model internal data structure, such as relative distances between pixels or edge relations between nodes in a graph. This work addresses both issues. We propose lambda layers, which model long-range interactions between a query and a structured set of context elements at a reduced memory cost. Lambda layers transform each available context into a linear function, termed a lambda, which is then directly applied to the corresponding query. Whereas self-attention defines a similarity kernel between the query and the context elements, a lambda layer instead summarizes contextual information into a fixed-size linear function (i.e., a matrix), thus bypassing the need for memory-intensive attention maps. This difference is illustrated in Figure 1. Lambda layers are versatile and can be implemented to model both content-based and position-based interactions in global, local or masked contexts. The resulting neural networks, LambdaNetworks, are computationally efficient, model long-range dependencies at a small memory cost, and can therefore be applied to large structured inputs such as high-resolution images. (¹ An updated version of this paper can be found on arXiv. ² For example, applying a single multi-head attention layer to a batch of 128 64x64 input images with 8 heads requires 64GB of memory, which is prohibitive in practice.) We evaluate LambdaNetworks on computer vision tasks where works using self-attention are hindered by large memory costs (Wang et al., 2018; Bello et al., 2019), suffer from impractical implementations (Ramachandran et al., 2019), or require vast amounts of data (Dosovitskiy et al., 2020). In our experiments spanning ImageNet classification, COCO object detection and instance segmentation, LambdaNetworks significantly outperform their convolutional and attentional counterparts while being more computationally efficient and faster than the latter. We summarize our contributions:
• Lambda layers: a class of layers that model content-based and position-based interactions without materializing attention maps. Lambda layers offer a unifying view of channel, spatial and linear attention (Appendix D.4). Some of our observations, such as the computational benefits of a multi-query formulation, extend to linear attention. Lambda layers are easily implemented with einsum operations and convolution kernels, operations with efficient implementations on modern machine learning accelerators.
• Lambda layers significantly outperform their convolution and attention counterparts on the ImageNet classification task while being more computationally efficient. For example, simply replacing the 3x3 convolutions in the bottleneck blocks of the ResNet-50 architecture (He et al., 2016) with lambda layers yields a +1.5% top-1 ImageNet accuracy improvement while reducing parameters by 40% (Section 5.1).
• Lambda layers achieve considerable computational benefits, both in latency and memory requirements, over multiple self-attention alternatives, including local and axial attention (Ramachandran et al., 2019; Wang et al., 2020a). When used in a ResNet-50 architecture at image resolution 224, lambda layers reduce memory consumption by ∼200x compared to global attention (∼7x compared to axial attention) while being ∼3.7x faster than local attention (Section 5.2).
• A study of hybrid convolution-lambda models as a means to maximize the speed-accuracy tradeoff (Section 5.3). Hybrid designs that first employ convolutions at the highest resolution and lambda layers in intermediate to low resolutions achieve the best speed-accuracy tradeoff.
• LambdaResNets: a family of hybrids based on the training and scaling strategies recommended in Bello et al. (2021). LambdaResNets achieve up to a 4.4x speed-up over EfficientNets on ImageNet, while being more memory-efficient. LambdaResNets can also be designed for parameter or flops efficiency. For example, a LambdaResNet with 42M parameters achieves 84.3% top-1 ImageNet accuracy at image resolution 320 (Section E.4).
• In large-scale semi-supervised training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to 86.7% top-1 ImageNet accuracy while being 9.5x faster than EfficientNet NoisyStudent (Xie et al., 2020) and 9x faster than a Vision Transformer (Dosovitskiy et al., 2020) with comparable accuracies (Section 5.3).
• An evaluation of LambdaResNets on COCO object detection and instance segmentation using Mask-RCNN (He et al., 2017). LambdaResNet backbones yield consistent gains across all metrics on both tasks (e.g., +1.8% mAP improvement for detecting small objects).
2 MODELING LONG-RANGE INTERACTIONS. In this section, we formally define queries, contexts and interactions. Starting from first principles, we motivate keys and relative position embeddings as a requirement for capturing structured interactions between queries and their contexts. We then show that lambda layers arise as an alternative to attention mechanisms for capturing long-range interactions. Notation. We denote scalars, vectors and tensors using lower-case, bold lower-case and bold upper-case letters, e.g., n, x and X. We denote |n| the cardinality of a set whose elements are indexed by n. We denote x_n the n-th row of X and x_ij the (i,j) element of X. When possible, we adopt the terminology of self-attention to ease readability and highlight differences.
2.1 MOTIVATING QUERIES, KEYS, POSITION EMBEDDINGS AND VALUES. Defining queries and contexts. Let Q = {(q_n, n)} and C = {(c_m, m)} denote structured collections of vectors, respectively referred to as the queries and the context. Each query (q_n, n) is characterized by its content q_n ∈ R^{|k|} and position n. Similarly, each context element (c_m, m) is characterized by its content c_m and its position m in the context. The (n, m) pair may refer to any pairwise relation between structured elements, e.g., relative distances between pixels or edges between nodes in a graph. Defining interactions. We consider the general problem of mapping a query (q_n, n) to an output vector y_n ∈ R^{|v|} given the context C with a function F : ((q_n, n), C) ↦ y_n. Such a function may act as a layer in a neural network when processing structured inputs.
We refer to (q_n, c_m) interactions as content-based and (q_n, (n, m)) interactions as position-based. We note that while absolute positional information is sometimes directly added to the query (or context element) content³, we consider this type of interaction to be content-based as it ignores the relation (n, m) between the query and context element positions. Introducing keys and relative position embeddings to capture long-range interactions. In the context of deep learning, we prioritize fast batched linear operations and use dot-product operations as our interactions. This motivates introducing vectors that can interact with the queries via a dot-product operation and therefore have the same dimension as the queries. In particular, content-based interactions (q_n, c_m) require a |k|-dimensional vector that depends on c_m, commonly referred to as the key k_m. Conversely, position-based interactions (q_n, (n, m)) require a relative position embedding e_nm ∈ R^{|k|} (Shaw et al., 2018). As the query/key depth |k| and context spatial dimension |m| are not in the output y_n ∈ R^{|v|}, these dimensions need to be contracted as part of the layer computations. Therefore, every layer capturing long-range interactions can be characterized based on whether it contracts (1) the query depth or (2) the context positions first.
2.2 ATTENTION VS LAMBDA LAYERS. (1) Attention layers. Contracting the query depth first creates a similarity kernel (the attention map) between the query and context elements and is known as the attention operation. As the number of context positions |m| grows larger while the input and output dimensions |k| and |v| remain fixed, one may hypothesize that computing attention maps becomes wasteful, given that the layer output is a vector of comparatively small dimension |v| ≪ |m|. (2) Lambda layers. Instead, it may be more efficient to simply map each query to its output as y_n = F((q_n, n), C) = λ(C, n)(q_n) for some linear function λ(C, n) : R^{|k|} → R^{|v|}. In this scenario, the context is aggregated into a fixed-size linear function λ_n = λ(C, n). Each λ_n acts as a small linear function⁴ that exists independently of the context (once computed) and is discarded after being applied to its associated query q_n. (³ This approach is often used in natural language processing tasks (Vaswani et al., 2017) but has had limited success in the visual domain, where relative position information between pixels is crucial (Bello et al., 2019).)
3 LAMBDA LAYERS. 3.1 LAMBDA LAYER: TRANSFORMING CONTEXTS INTO LINEAR FUNCTIONS. A lambda layer takes the inputs X ∈ R^{|n|×d_in} and the context C ∈ R^{|m|×d_c} as input and generates linear functions (lambdas) that are then applied to the queries, yielding outputs Y ∈ R^{|n|×d_out}. Without loss of generality, we assume d_in = d_c = d_out = d. As is the case with self-attention, we may have C = X. In the rest of this paper, we focus on a specific instance of a lambda layer and show that it captures long-range content-based and position-based interactions without materializing attention maps. Figure 2 presents the computational graph of the lambda layer. We first describe the lambda layer when applied to a single query (q_n, n). Generating the contextual lambda function. We wish to generate a linear function R^{|k|} → R^{|v|}, i.e., a matrix λ_n ∈ R^{|k|×|v|}.
The lambda layer first computes keys K and values V by linearly projecting the context, and the keys are normalized across context positions via a softmax operation, yielding normalized keys K̄. The λ_n matrix is obtained by using the normalized keys K̄ and position embeddings E_n to aggregate the values V as
$$\lambda_n = \sum_m (\bar{k}_m + e_{nm})\, v_m^T = \underbrace{\bar{K}^T V}_{\text{content lambda}} + \underbrace{E_n^T V}_{\text{position lambda}} \in \mathbb{R}^{|k| \times |v|} \quad (1)$$
where we also define the content lambda λ^c and the position lambda λ^p_n.
• The content lambda λ^c is shared across all query positions n and is invariant to permutation of the context elements. It encodes how to transform the query q_n solely based on the context content.
• The position lambda λ^p_n depends on the query position n via the position embedding E_n. It encodes how to transform the query q_n based on the context elements c_m and their relative positions to the query (n, m).
Applying the lambda to its query. The query q_n ∈ R^{|k|} is obtained from the input x_n via a learned linear projection, and the output of the lambda layer is obtained as
$$y_n = \lambda_n^T q_n = (\lambda^c + \lambda^p_n)^T q_n \in \mathbb{R}^{|v|}. \quad (2)$$
(⁴ This mechanism is reminiscent of functional programming and the λ-calculus, which motivates the lambda terminology.)
Interpretation of lambda layers. The columns of the λ_n ∈ R^{|k|×|v|} matrix can be viewed as a fixed-size set of |k| contextual features. These contextual features are aggregated based on the context's content (content-based interactions) and structure (position-based interactions). Applying the lambda then dynamically distributes these contextual features based on the query to produce the output as $y_n = \sum_k q_{nk} \lambda_{nk}$. This process captures content-based and position-based interactions without producing attention maps and can be viewed as an efficient relative attention mechanism. Normalization. One may modify Equations 1 and 2 to include non-linearities or normalization operations. Our experiments indicate that applying batch normalization (Ioffe & Szegedy, 2015) after computing the queries and the values is helpful.
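Since the paper notes that lambda layers are easily implemented with einsum operations, the following is a minimal single-head PyTorch sketch of Equations 1 and 2; the module name, the learned per-pair position embedding E, and the assumption that query and context lengths coincide are simplifications on our part, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LambdaLayer(nn.Module):
    """Single-head lambda layer sketch (Eqs. 1-2): no |n| x |m| attention
    map is formed; the context is summarized into |k| x |v| lambdas."""
    def __init__(self, d, k, v, n_ctx):
        super().__init__()
        self.to_q = nn.Linear(d, k, bias=False)
        self.to_k = nn.Linear(d, k, bias=False)
        self.to_v = nn.Linear(d, v, bias=False)
        # E[n, m] in R^k: one relative-position embedding per
        # (query position, context position) pair; assumes |n| == |m|.
        self.E = nn.Parameter(torch.randn(n_ctx, n_ctx, k) * 0.01)

    def forward(self, x, c):
        q = self.to_q(x)                         # (b, n, k)
        K = F.softmax(self.to_k(c), dim=1)       # normalize keys over m
        V = self.to_v(c)                         # (b, m, v)
        content_lambda = torch.einsum('bmk,bmv->bkv', K, V)        # K̄ᵀV
        position_lambda = torch.einsum('nmk,bmv->bnkv', self.E, V)  # EₙᵀV
        y_content = torch.einsum('bnk,bkv->bnv', q, content_lambda)
        y_position = torch.einsum('bnk,bnkv->bnv', q, position_lambda)
        return y_content + y_position            # y_n = (λc + λp_n)ᵀ q_n
```

Note how the content lambda is a single |k| x |v| matrix shared by all queries, and only the position lambdas vary with n; this is what keeps the memory cost small relative to attention.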
This paper proposes a novel lambda layer that captures long-range interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each query separately. The proposed LambdaNetwork achieves good performance on ImageNet classification, COCO object detection and instance segmentation tasks. The proposed lambda layer is much more compact than attention-based layers, reducing parameters and complexity. However, there are still several weaknesses in this paper. 1) Generalization of the proposed lambda layer: for example, how does the lambda layer perform when combined with lighter convolutional networks, e.g., MobileNet? And how does it perform in much deeper networks aimed at the highest accuracy? 2) The source code should be released for more details. 3) The typos in the paper should be checked.
SP:28475d91bb10fb0a3a8add77cca7505a839e145d
LambdaNetworks: Modeling long-range Interactions without Attention
This paper presents an efficient method to model long-range interactions. The proposed lambda layer removes the nonlinearity of the original attention operation and makes the matrix multiplication independent of the context, hence skipping the expensive computation and storage of large attention maps. The two kinds of lambda functions in the lambda layer, i.e., the content lambda and the position lambda, allow the model to capture both dense content and long-range interactions. In addition, the lambda layer can be extended to work with local contexts and made more efficient by decomposing a query into multiple shorter ones. Its effectiveness has been demonstrated in extensive experiments on different backbone network architectures and tasks. Its speed-accuracy tradeoff compares very favorably against SOTA methods.
SP:28475d91bb10fb0a3a8add77cca7505a839e145d
VECoDeR - Variational Embeddings for Community Detection and Node Representation
1 INTRODUCTION. Graphs are flexible data structures that model complex relationships among entities, i.e., data points as nodes and the relations between nodes via edges. One important task in graph analysis is community detection, where the objective is to cluster nodes into multiple groups (communities). Each community is a set of densely connected nodes. The communities can be overlapping or non-overlapping, depending on whether they share some nodes or not. Several algorithmic (Ahn et al., 2010; Derényi et al., 2005) and probabilistic approaches (Gopalan & Blei, 2013; Leskovec & Mcauley, 2012; Wang et al., 2017; Yang et al., 2013) to community detection have been proposed. Another fundamental task in graph analysis is learning the node embeddings. These embeddings can then be used for downstream tasks like graph visualization (Tang et al., 2016; Wang et al., 2016; Gao et al., 2011; Wang et al., 2017) and classification (Cao et al., 2015; Tang et al., 2015). In the literature, these tasks are usually treated separately. Although the standard graph embedding methods capture the basic connectivity, the learning of the node embeddings is independent of community detection. For instance, a simple approach can be to get the node embeddings via DeepWalk (Perozzi et al., 2014) and get community assignments for each node by using k-means or a Gaussian mixture model. Looking from the other perspective, methods like Bigclam (Yang & Leskovec, 2013), which focus on finding the community structure in the dataset, perform poorly on node-representation tasks, e.g., node classification. This motivates us to study approaches that jointly learn community-aware node embeddings. Recently, several approaches, like CNRL (Tu et al., 2018), ComE (Cavallari et al., 2017) and vGraph (Sun et al., 2019), have been proposed to learn the node embeddings and detect communities simultaneously in a unified framework. Several studies have shown that community detection is improved by incorporating the node representation in the learning process (Cao et al., 2015; Kozdoba & Mannor, 2015). The intuition is that the global structure of graphs learned during community detection can provide useful context for node embeddings and vice versa. The joint learning methods (CNRL, ComE and vGraph) learn two embeddings for each node. One node embedding is used for the node representation task. The second node embedding is the "context" embedding of the node, which aids in community detection. As CNRL and ComE are based on Skip-Gram (Mikolov et al., 2013) and DeepWalk (Perozzi et al., 2014), they inherit the "context" embedding from these models for learning the neighbourhood information of the node. vGraph also requires two node embeddings for parameterizing two different distributions. In contrast, we propose learning a single community-aware node representation which is directly used for both tasks. In this way, we not only get rid of an extraneous node embedding but also reduce the computational cost. In this paper, we propose an efficient generative model called VECODER for jointly learning both community detection and node representation. The underlying intuition behind VECODER is that every node can be a member of one or more communities. However, the node embeddings should be learned in such a way that connected nodes are "closer" to each other than unconnected nodes.
Moreover, connected nodes should have similar community assignments. Formally, we assume that for the i-th node, the node embeddings z_i are generated from a prior distribution p(z). Given z_i, the community assignments c_i are sampled from p(c_i|z_i), which is parameterized by node and community embeddings. In order to generate an edge (i, j), we sample another node embedding z_j from p(z) and the respective community assignment c_j from p(c_j|z_j). Afterwards, the node embeddings and the respective community assignments of node pairs are fed to a decoder. The decoder ensures that the embeddings of both the nodes and the communities of connected nodes share high similarity. This enables learning node embeddings that are useful for both community detection and node representation tasks. We validate the effectiveness of our approach on several real-world graph datasets. In Sec. 4, we show empirically that VECODER is able to outperform the baseline methods, including the direct competitors, on all three tasks, i.e., node classification, overlapping community detection and non-overlapping community detection. Furthermore, we compare the computational cost of training the different algorithms. VECODER is up to 40x more time-efficient than its competitors. We also conduct a hyperparameter sensitivity analysis which demonstrates the robustness of our approach. Our main contributions are summarized below: • We propose an efficient generative model called VECODER for joint community detection and node representation learning. • We adopt a novel approach and argue that a single node embedding is sufficient for learning both the representation of the node itself and its context. • Training VECODER is extremely time-efficient in comparison to its competitors. 2 RELATED WORK. Community Detection. Early community detection algorithms are inspired by clustering algorithms (Xie et al., 2013). For instance, spectral clustering (Tang & Liu, 2011) is applied to the graph Laplacian matrix for extracting the communities. Similarly, several matrix factorization based methods have been proposed to tackle the community detection problem. For example, Bigclam (Yang & Leskovec, 2013) treats the problem as a non-negative matrix factorization (NMF) task. It aims to recover the node-community affiliation matrix and learns the latent factors which represent the community affiliations of nodes. Another method, CESNA (Yang et al., 2013), extends Bigclam by modelling the interaction between the network structure and the node attributes. The performance of matrix factorization methods is limited by the capacity of the bi-linear models. Some generative models, like vGraph (Sun et al., 2019) and Circles (Leskovec & Mcauley, 2012), have also been proposed to detect communities in a graph. Node Representation Learning. Many successful algorithms which learn node representations in an unsupervised way are based on random walk objectives (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Hamilton et al., 2017). Some known issues with random-walk based methods (e.g., DeepWalk, node2vec) are: (1) they sacrifice the structural information of the graph by putting over-emphasis on the proximity information (Ribeiro et al., 2017) and (2) their performance depends greatly on hyperparameters (walk length, number of hops, etc.) (Perozzi et al., 2014; Grover & Leskovec, 2016). Recently, Gilmer et al.
(2017) showed that graph convolution encoder models greatly reduce the need for using random-walk based training objectives. This is because the graph convolutions enforce that neighboring nodes have similar representations. Some interesting GCN-based approaches include graph autoencoders, e.g., GAE and VGAE (Kipf & Welling, 2016b), and DGI (Velickovic et al., 2019). Joint community detection and node representation learning. In the literature, several attempts have been made to tackle both of these tasks in a single framework. Most of these methods propose an alternating optimization process, i.e., learn node embeddings, improve community assignments with them, and vice versa (Cavallari et al., 2017; Tu et al., 2018). Some approaches, like CNRL (Tu et al., 2018) and ComE (Cavallari et al., 2017), are inspired by random walks, thus inheriting the shortcomings of random walks. Others, like GEMSEC (Rozemberczki et al., 2019), are limited to the detection of non-overlapping communities. There also exist generative models, like CommunityGAN (Jia et al., 2019) and vGraph (Sun et al., 2019), that jointly learn community assignments and node embeddings. Some methods have high computational complexity, i.e., quadratic in the number of nodes in a graph, e.g., M-NMF (Wang et al., 2017) and DNR (Yang et al., 2016a). CNRL, ComE and vGraph require learning two embeddings for each node to simultaneously tackle the two tasks. Unlike them, VECODER learns a single community-aware node representation which is directly used for both tasks. It is pertinent to highlight that although both vGraph and VECODER adopt a variational approach, the underlying models are quite different. vGraph assumes that each node can be represented as a mixture of multiple communities and is described by a multinomial distribution over communities, whereas VECODER models the node embedding by a single distribution. For a given node, vGraph first draws a community assignment, and then a connected neighbor node is generated based on the assignment. VECODER, in contrast, draws the node embedding from the prior distribution, and the community assignment is then conditioned on that single node only. In simple terms, vGraph also needs edge information in the generative process, whereas VECODER does not require it. VECODER relies on the decoder to ensure that the embeddings of connected nodes and their communities share high similarity with each other. 3 METHODOLOGY. 3.1 PROBLEM FORMULATION. Suppose an undirected graph G = (V, E) with the adjacency matrix A ∈ R^{N×N} and a matrix X ∈ R^{N×F} of F-dimensional node features, N being the number of nodes. Given K as the number of communities, we aim to jointly learn the node embeddings and the community embeddings following a variational approach such that: (1) one or more communities can be assigned to every node and (2) the node embeddings can be used for both community detection and node classification. 3.2 VARIATIONAL MODEL. Generative Model: Let us denote the latent node embedding and community assignment of the i-th node by the random variables z_i ∈ R^d and c_i, respectively. The generative model is given by:

$$p(A) = \int \sum_c p(Z, c, A) \, dZ, \qquad (1)$$

where c = [c_1, c_2, ..., c_N] and the matrix Z = [z_1, z_2, ..., z_N] stacks the node embeddings.
The joint distribution in (1) is mathematically expressed as

$$p(Z, c, A) = p(Z) \, p_\theta(c|Z) \, p_\theta(A|c, Z), \qquad (2)$$

where θ denotes the model parameters. Let us denote the elements of A by a_ij. Following existing approaches (Kipf & Welling, 2016b; Khan et al., 2020), we consider the z_i to be i.i.d. random variables. Furthermore, assuming the c_i|z_i to be i.i.d. random variables, the joint distribution in (2) can be factorized as

$$p(Z) = \prod_{i=1}^{N} p(z_i) \qquad (3)$$

$$p_\theta(c|Z) = \prod_{i=1}^{N} p_\theta(c_i|z_i) \qquad (4)$$

$$p_\theta(A|c, Z) = \prod_{i,j} p_\theta(a_{ij}|c_i, c_j, z_i, z_j), \qquad (5)$$

where Eq. (5) assumes that the edge decoder p_θ(a_ij|c_i, c_j, z_i, z_j) depends only on c_i, c_j, z_i and z_j. Inference Model: We aim to learn the model parameters θ such that log(p_θ(A)) is maximized. In order to ensure computational tractability, we introduce the approximate posterior

$$q_\phi(Z, c|I) = \prod_i q_\phi(z_i, c_i|I) = \prod_i q_\phi(z_i|I) \, q_\phi(c_i|z_i, I), \qquad (6)$$

where I = (A, X) if node features are available, otherwise I = A. We maximize the corresponding ELBO bound (for the derivation, refer to the supplementary material), given by

$$\mathcal{L}_{ELBO} \approx - \sum_{i=1}^{N} D_{KL}\big( q_\phi(z_i|I) \,\|\, p(z_i) \big) - \sum_{i=1}^{N} \frac{1}{M} \sum_{m=1}^{M} D_{KL}\big( q_\phi(c_i|z_i^{(m)}, I) \,\|\, p_\theta(c_i|z_i^{(m)}) \big) + \sum_{(i,j) \in E} \mathbb{E}_{(z_i, z_j, c_i, c_j) \sim q_\phi(z_i, z_j, c_i, c_j|I)} \big[ \log p_\theta(a_{ij}|c_i, c_j, z_i, z_j) \big], \qquad (7)$$

where D_KL(· ‖ ·) represents the KL-divergence between two distributions. The distribution q_φ(z_i, z_j, c_i, c_j|I) in the third term of Eq. (7) is factorized into two conditionally independent distributions, i.e.,

$$q_\phi(z_i, z_j, c_i, c_j|I) = q_\phi(z_i, c_i|I) \, q_\phi(z_j, c_j|I). \qquad (8)$$
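As a sanity check on the training objective, the following is a minimal numpy sketch of a Monte Carlo estimate of the three terms of Eq. (7) on a toy graph. The encoder outputs, the community embeddings used to parameterize p_θ(c_i|z_i), the stand-in for q_φ(c_i|z_i, I), and the sigmoid inner-product edge decoder are all illustrative assumptions, not the architecture of the paper.

import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kl_gauss_std(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def kl_cat(q, p, eps=1e-12):
    # KL between categorical distributions; rows sum to 1
    return np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=-1)

N, d, K, M = 5, 3, 2, 4                        # nodes, latent dim, communities, MC samples
edges = [(0, 1), (1, 2), (3, 4)]

mu = rng.normal(size=(N, d))                   # encoder means, stand-in for q_phi(z_i|I)
logvar = rng.normal(scale=0.1, size=(N, d))    # encoder log-variances
g = rng.normal(size=(K, d))                    # community embeddings

term1 = -kl_gauss_std(mu, logvar).sum()        # first term of Eq. (7)

term2 = 0.0
for _ in range(M):
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=(N, d))  # reparameterized z_i^(m)
    p_c = softmax(z @ g.T)                     # p_theta(c_i|z_i^(m)) from node/community dot products
    q_c = softmax(2.0 * (z @ g.T))             # sharper stand-in for q_phi(c_i|z_i^(m), I)
    term2 -= kl_cat(q_c, p_c).sum() / M        # second term of Eq. (7)

# Third term: one-sample estimate with a toy sigmoid inner-product edge decoder.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=(N, d))
term3 = sum(np.log(1.0 / (1.0 + np.exp(-(z[i] @ z[j])))) for i, j in edges)

print(term1 + term2 + term3)                   # Monte Carlo estimate of Eq. (7)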
The paper deals with the problem of simultaneously learning node embeddings and detecting communities on graphs. Although both tasks are particularly important while analyzing networks, most of the proposed approaches address them independently. The paper proposes a generative model, called VECODER, that aims to jointly learn overlapping communities and node representations. The proposed model follows a variational formulation which assumes that the node embeddings are generated from a prior distribution; this can be used to control how community embeddings are sampled. This leads to an encoder-decoder architecture, where the decoder ensures that similar (i.e., connected) nodes will obtain similar embeddings. The proposed model has been empirically evaluated on three tasks (overlapping and non-overlapping community detection, and node classification), and the performance has been compared against various baseline models.
SP:dc61f3b946fd4ff24d64e8a34483dd2bd0b1b333
VECoDeR - Variational Embeddings for Community Detection and Node Representation
This paper aims to learn node representations of graphs that jointly satisfy node embedding properties and community detection properties. Node embeddings must preserve proximities, guaranteeing that adjacent nodes are closer to each other than to others. Community detection must promote more similar clustering assignments for adjacent nodes than for others. These two problems have been tackled separately or simultaneously, but while maintaining two different node representations. The authors claim that the proposed VECoDeR is capable of learning a single community-aware node representation per node, which is jointly effective in both scenarios.
SP:dc61f3b946fd4ff24d64e8a34483dd2bd0b1b333
A Probabilistic Approach to Constrained Deep Clustering
1 INTRODUCTION. The ever-growing amount of data and the time cost associated with its labeling has made clustering a relevant task in the field of machine learning. Yet, in many cases, a fully unsupervised clustering algorithm might naturally find a solution which is not consistent with the domain knowledge (Basu et al., 2008). In medicine, for example, clustering could be driven by unwanted bias, such as the type of machine used to record the data, rather than by more informative features. Moreover, practitioners often have access to prior information about the types of clusters that are sought, and a principled method to guide the algorithm towards a desirable configuration is then needed. Constrained clustering therefore has a long history in machine learning, as it enforces desirable clustering properties by incorporating domain knowledge, in the form of constraints, into the clustering objective. Following recent advances in deep clustering, constrained clustering algorithms have recently been used in combination with deep neural networks (DNNs) to favor a better representation of high-dimensional data sets. The methods proposed so far mainly extend some of the most widely used deep clustering algorithms, such as DEC (Xie et al., 2016), to include a variety of loss functions that force the clustering process to be consistent with the given constraints (Ren et al., 2019; Shukla et al., 2018; Zhang et al., 2019b). Although they perform well, none of the above methods model the data generative process. As a result, they can neither uncover the underlying structure of the data, nor control the strength of the clustering preferences, nor generate new samples (Min et al., 2018). To address the above issues, we propose a novel probabilistic approach to constrained clustering, the Constrained Variational Deep Embedding (CVaDE), that uncovers the underlying data distribution conditioned on domain knowledge, expressed in the form of pairwise constraints. Our method extends previous work in unsupervised variational deep clustering (Jiang et al., 2017; Dilokthanakul et al., 2016) to incorporate clustering preferences as Bayesian prior probabilities with varying degrees of uncertainty. This allows systematic reasoning about parameter uncertainty (Zhang et al., 2019a), thereby enabling Bayesian model validation, outlier detection and data generation. By integrating prior information in the generative process of the data, our model can guide the clustering process towards the configuration sought by the practitioners. Our main contributions are as follows: (i) We propose a constrained clustering method (CVaDE) to incorporate given clustering preferences, with varying degrees of certainty, within the Variational Auto-Encoder (VAE) framework. (ii) We provide a thorough empirical assessment of our model. In particular, we show that (a) a small fraction of prior information remarkably increases the performance of CVaDE compared to unsupervised variational clustering methods, (b) our model shows superior clustering performance compared to state-of-the-art deep constrained clustering models on a wide range of data sets and (c) our model proves to be robust against noise, as it can easily incorporate the uncertainty of the given constraints.
(iii) We show that our model can drive the clustering performance towards different desirable configurations, depending on the constraints used, and that it successfully generates new samples on challenging real-world image data. 2 THEORETICAL BACKGROUND & RELATED WORK. Constrained Clustering. A constrained clustering problem differs from the classical clustering scenario in that the user has access to some pre-existing knowledge about the desired partition of the data. The constraints are usually expressed as pairwise constraints (Wagstaff & Cardie, 2000), consisting of must-links and cannot-links, which indicate whether two samples are believed to belong to the same cluster or to different clusters. Such pairwise relations contain less information than the labels used in classification tasks but are usually easier to obtain. Traditional clustering methods have then been extended to enforce pairwise constraints (Lange et al., 2005). COP-KMEANS (Wagstaff et al., 2001) and MPCK-means (Bilenko et al., 2004) adapted the well-known K-means algorithm, while several methods proposed a constrained version of Gaussian Mixture Models (Shental et al., 2003; Law et al., 2004; 2005). Among them, penalized probabilistic clustering (PPC, Lu & Leen (2004)) is most related to our work, as it expresses the pairwise constraints as Bayesian priors over the assignment of data points to clusters. However, PPC, as well as the previous models, shows poor performance and high computational complexity on high-dimensional and large-scale data sets. Deep Constrained Clustering. To overcome the limitations of the above models, constrained clustering algorithms have lately been used in combination with DNNs. Hsu & Kira (2015) train a DNN to minimize the Kullback-Leibler (KL) divergence between similar pairs of samples, while Chen (2015) performs semi-supervised maximum margin clustering of the learned features on a DNN. More recently, many extensions of the widely used DEC model (Xie et al., 2016) have been proposed to include a variety of loss functions that enforce pairwise constraints. Among them, SDEC (Ren et al., 2019) includes a distance loss function that forces data points with a must-link to be close in the latent space and vice versa. C-IDEC (Zhang et al., 2019b) uses, instead, a KL divergence loss, extending the work of Shukla et al. (2018). Other works have focused on discriminative clustering methods that self-generate pairwise constraints from either Siamese networks or KNN graphs (Smieja et al., 2020; Fogel et al., 2019). As none of the approaches proposed so far is based on generative models, the above methods fail to uncover the underlying data distribution. Additionally, DEC-based architectures rely on heavy pretraining of the autoencoder, resulting in no theoretical guarantee that the learned latent space is indeed suitable for clustering (Min et al., 2018). VAE-based deep clustering. Many models have been proposed in the literature to perform unsupervised clustering through deep generative models (Li et al., 2019; Yang et al., 2019; Manduchi et al., 2019; Kopf et al., 2019). Among them, the Variational Deep Embedding (VaDE, Jiang et al. (2017)) and the Gaussian Mixture Variational Autoencoder (GMM-VAE, Dilokthanakul et al. (2016)) propose a variant of the VAE (Kingma & Welling, 2014; Rezende et al., 2014) in which the prior is a Gaussian Mixture distribution.
With this assumption, they construct an inference model that can be directly optimised in the framework of stochastic gradient variational Bayes. However, variational deep clustering methods, such as VaDE, cannot incorporate domain knowledge and clustering preferences. Even though a semi-supervised version of the VAE has been proposed by Kingma et al. (2014), the latter cannot be naturally applied to clustering. For this reason, we aim at extending the above methods to incorporate clustering preferences in the form of constraints, modeled as Bayesian priors, to guide the clustering process towards a desirable configuration. 3 CONSTRAINED VARIATIONAL DEEP EMBEDDING. In the following section, we propose a novel constrained clustering model (CVaDE) to incorporate clustering preferences, with varying degrees of certainty, in a VAE-based deep clustering setting. In particular, we use the VaDE (Jiang et al., 2017) generative assumptions for the data, conditioned on the domain knowledge. We then illustrate how our model can be trained efficiently in the framework of stochastic gradient variational Bayes by optimizing the Conditional Variational Lower Bound. Additionally, we define concrete prior formulations to incorporate our preferences, with a focus on pairwise constraints. 3.1 THE GENERATIVE ASSUMPTIONS. Let us consider a data set X = {x_i}_{i=1}^N consisting of N samples with x_i ∈ R^M that we wish to cluster into K groups according to some prior information encoded as G. For example, we may know a priori that certain samples should be clustered together, with different degrees of certainty. Hence G encodes both our prior knowledge on the data set and the degree of confidence. We assume the data is generated from a random process consisting of three steps. First, the cluster assignments c = {c_i}_{i=1}^N, with c_i ∈ {1, ..., K}, are sampled from a distribution conditioned on the prior information, c ∼ p(c|G). Next, for each cluster assignment c_i, a continuous latent embedding, z_i ∈ R^D, is sampled from a Gaussian distribution whose mean and variance depend on the selected cluster c_i. Finally, the sample x_i is generated from a distribution conditioned on z_i. Given c_i, the generative process can be summarized as:

$$z_i \sim p(z_i|c_i) = \mathcal{N}(z_i \,|\, \mu_{c_i}, \sigma_{c_i}^2 I) \qquad (1)$$

$$x_i \sim p_\theta(x_i|z_i) = \begin{cases} \mathcal{N}(x_i \,|\, \mu_{x_i}, \sigma_{x_i}^2 I) \text{ with } [\mu_{x_i}, \sigma_{x_i}^2] = f(z_i; \theta) & \text{if } x_i \text{ is real-valued} \\ \mathrm{Ber}(\mu_{x_i}) \text{ with } \mu_{x_i} = f(z_i; \theta) & \text{if } x_i \text{ is binary} \end{cases} \qquad (2)$$

where μ_{c_i} and σ²_{c_i} are the mean and variance of the Gaussian distribution corresponding to cluster c_i in the latent space, and the function f(z; θ) is a neural network, called the decoder, parametrized by θ. Without prior information, that is, when p(c|G) = p(c) = ∏_i p(c_i) = ∏_i Cat(c_i|π), the cluster assignments are independent and identically distributed, as they follow a categorical distribution with mixing parameters π. In that case, the generative assumptions described above are equal to those of Jiang et al. (2017) and the parameters of the model can be learned using the unsupervised VaDE method (see Appendix C). In the following, we explore the case when p(c|G) ≠ p(c). 3.2 CONDITIONAL VARIATIONAL LOWER BOUND. Given the data generative assumptions illustrated in Sec. 3.1, the objective is to infer the parameters π, μ_c, σ²_c and θ which best explain the data X given prior information on the cluster assignments G.
We achieve this by maximizing the marginal log-likelihood conditioned on G, that is:

$$\log p(X|G) = \log \int_Z \sum_c p(X, Z, c|G) \, dZ, \qquad (3)$$

where Z = {z_i}_{i=1}^N is the collection of the latent embeddings corresponding to the data set X. The conditional joint probability is derived from Eqs. (1) and (2) and can be factorized as:

$$p(X, Z, c|G) = p_\theta(X|Z) \, p(Z|c) \, p(c|G) = p(c|G) \prod_{i=1}^{N} p_\theta(x_i|z_i) \, p(z_i|c_i). \qquad (4)$$

Since the conditional log-likelihood is intractable, we derive a lower bound of the log marginal conditional probability of the data, which we call the Conditional ELBO (C-ELBO, $\mathcal{L}_C$):

$$\mathcal{L}_C(X|G) = \mathbb{E}_{q_\phi(Z, c|X)} \left[ \log \frac{p(X, Z, c|G)}{q_\phi(Z, c|X)} \right] \qquad (5)$$

Similarly to Jiang et al. (2017) and Dilokthanakul et al. (2016), we employ the following amortized mean-field variational distribution:

$$q_\phi(Z, c|X) = q_\phi(Z|X) \, p(c|Z) = \prod_{i=1}^{N} q_\phi(z_i|x_i) \, p(c_i|z_i) \quad \text{with} \quad p(c_i|z_i) = \frac{p(z_i|c_i) \, p(c_i)}{\sum_k p(z_i|k) \, p(k)}, \qquad (6)$$

where q_φ(z_i|x_i) is a Gaussian distribution with mean μ(x_i) and variance σ²(x_i)I, which are the outputs of a neural network, called the encoder, parametrized by φ, and p(c_i = k) is denoted as p(k) for simplicity. It is important to note that, in this formulation, the variational distribution does not depend on G. This approximation is used to retain a mean-field variational distribution if the cluster assignments, conditioned on the prior information, are not independent (Sec. 3.4.1), that is, when p(c|G) ≠ ∏_i p(c_i|G).
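To illustrate the generative assumptions of Eqs. (1)-(2) and the closed-form cluster posterior p(c_i|z_i) from Eq. (6), here is a minimal numpy sketch for the unconstrained case p(c|G) = p(c). The linear decoder and all parameter values are toy assumptions standing in for the learned networks.

import numpy as np

rng = np.random.default_rng(2)

K, D, M = 3, 2, 4                      # clusters, latent dim, data dim
pi = np.array([0.5, 0.3, 0.2])         # mixing weights of p(c)
mu_c = rng.normal(size=(K, D))         # per-cluster latent means
var_c = np.full(K, 0.5)                # per-cluster isotropic latent variances

def decoder(z, theta):
    # Toy linear decoder standing in for the network f(z; theta): returns mu_x, var_x.
    W, b = theta
    return z @ W + b, 0.1 * np.ones(M)

theta = (rng.normal(size=(D, M)), rng.normal(size=M))

# Generative process of Eqs. (1)-(2) in the unconstrained case p(c|G) = p(c).
c = rng.choice(K, p=pi)                                  # cluster assignment
z = mu_c[c] + np.sqrt(var_c[c]) * rng.normal(size=D)     # z ~ N(mu_c, sigma_c^2 I)
mu_x, var_x = decoder(z, theta)
x = mu_x + np.sqrt(var_x) * rng.normal(size=M)           # real-valued sample x

# Closed-form cluster posterior p(c|z) proportional to p(z|c) p(c), as in Eq. (6).
def log_gauss(z, mu, var):
    return -0.5 * np.sum((z - mu) ** 2 / var + np.log(2 * np.pi * var))

logits = np.array([log_gauss(z, mu_c[k], var_c[k]) + np.log(pi[k]) for k in range(K)])
p_c_given_z = np.exp(logits - logits.max())
p_c_given_z /= p_c_given_z.sum()
print(c, p_c_given_z)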
This paper extends the variational deep embedding VaDE model (a VAE-based clustering method) to integrate pairwise constraints between objects, i.e., must-link and cannot-link. The constraints are integrated a priori as a condition. That is, the prior over the cluster labels is conditioned on the constraints. The whole model, referred to as Constrained VaDE (CVaDE), takes the form of a conditional VAE tailored for constrained clustering. Experiments are carried out on various real-world datasets, and the proposed method is compared to VaDE as well as to recent and classical constrained clustering methods.
SP:774027f8c53b842fa8ef0569dc1c9b2eaa82872b
A Probabilistic Approach to Constrained Deep Clustering
This work proposes CVaDE, an extension of the variational deep clustering model (VaDE) that additionally incorporates prior clustering preferences as supervision. These priors guide the underlying clustering process towards a user-desirable partitioning of the input data. The priors are provided in the form of pairwise constraints indicating which pairs of samples belong to the same or different classes. The clustering process is modelled using variational Bayes, in which the clustering constraints are incorporated into prior probabilities with varying degrees of uncertainty. The empirical results show that, in comparison to unconstrained clustering, a small amount of pairwise constraints significantly improves clustering performance. Further, the work demonstrates CVaDE's robustness to noise and its generation capability, as well as the successful incorporation of different desirable preferences to drive the clustering towards completely different partitionings.
SP:774027f8c53b842fa8ef0569dc1c9b2eaa82872b
On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness
1 INTRODUCTION. A substantial body of work has shown that deep networks can be highly susceptible to adversarial attacks, in which minor changes to the input lead to incorrect, even bizarre classifications (Nguyen et al., 2015; Moosavi-Dezfooli et al., 2016; Su et al., 2019; Brendel et al., 2018; Shamir et al., 2019). Much of this work has considered ℓp-norm adversarial examples, but there has also been recent interest in exploring adversarial models beyond bounded ℓp-norms (Brown et al., 2018; Engstrom et al., 2017; Gilmer et al., 2018; Xiao et al., 2018; Alaifari et al., 2019). What these results have in common is that changes that either are imperceptible or should be irrelevant to the classification task can lead to drastically different network behavior. One reason for this vulnerability to adversarial attack is the non-Lipschitzness property of typical neural networks: small but adversarial movements in the input space can often produce large perturbations in the feature space. In this work, we consider the question of whether non-Lipschitz networks are intrinsically vulnerable, or if they could still be made robust to adversarial attack, in an abstract but (we believe) instructive adversarial model. In particular, suppose an adversary, by making an imperceptible change to an input x, can cause its representation F(x) in feature space (the penultimate layer of the network) to move by an arbitrary amount: will such an adversary always win? Clearly, if the adversary can modify F(x) by an arbitrary amount in an arbitrary direction, then yes. But what if the adversary can modify F(x) by an arbitrary amount but only in a random direction (which it cannot control)? In this case, we show an interesting dichotomy: if the classifier must output a classification on any input it is given, then yes, the adversary will still win, no matter how well-separated the classes are in feature space and no matter what decision surface the classifier uses. However, if the classifier is allowed to abstain, then it can defeat such an adversary so long as natural data of different classes are reasonably well-separated in feature space. Our results hold for generalizations of these models as well, such as adversaries that can modify feature representations in random low-dimensional subspaces, or in directions that are not completely random. More broadly, our results provide a theoretical explanation for the importance of allowing abstention, or selective classification, in the presence of adversarial attack. Apart from providing a useful abstraction for non-Lipschitz feature embeddings, our model may be viewed as capturing an interesting class of real attacks. There are various global properties of an image, such as brightness, contrast, or rotation angle, whose changes might be "perceptible but not relevant" to classification tasks. Our model could also be viewed as an abstraction of attacks of that nature. Feature-space attacks of other forms, where one can perturb abstract features denoting styles, including interpretable styles such as vivid colors and sharp outlines as well as uninterpretable ones, have also been empirically studied (Xu et al., 2020; Ganeshan & Babu, 2019). An interesting property of our model is that it is critical to be able to refuse to predict: any algorithm which always predicts a class label, and therefore has no ability to abstain, is guaranteed to perform poorly.
This provides a first formal hardness result about abstention in adversarial defense, and also a first provable negative result on feature-space attacks. We therefore allow the algorithm to output "don't know" for some examples, which, as a by-product of our algorithm, serves as a detection mechanism for adversarial examples. It also results in an interesting trade-off between robustness and accuracy: by controlling how frequently we refuse to predict, we are able to trade (robust) precision off against recall. We also provide results for how to provably optimize such a trade-off using a data-driven algorithm. Our strong theoretical advances are backed by empirical evidence in the context of contrastive learning (He et al., 2020; Chen et al., 2020; Khosla et al., 2020). 1.1 OUR CONTRIBUTIONS. Our work tackles the problem of defending against adversarial perturbations in a random feature subspace, and advances the theory and practice of robust machine learning in multiple ways. • We introduce a formal model that captures feature-space attacks and the effect of non-Lipschitzness of deep networks, which can magnify input perturbations. • We begin our analysis with a hardness result concerning defending against an adversary without the option of "don't know". We show that all classifiers that partition the feature space into two or more classes, and thus lack an ability to abstain, are provably vulnerable to adversarial examples for at least one class of examples with probability nearly one half. • We explore the power of the abstention option: a variant of the nearest-neighbor classifier with the ability to abstain is provably robust against adversarial attacks, even in the presence of outliers in the training data set. We characterize the conditions under which the algorithm does not output "don't know" too often. • We leverage and extend dispersion techniques from data-driven decision making, and present a novel data-driven method for learning data-specific optimal hyperparameters in our defense algorithms to simultaneously obtain high robust accuracy and low abstention rates. Unlike typical hyperparameter tuning, our approach provably converges to a global optimum. • Experimentally, we show that our proposed algorithm achieves certified adversarial robustness on representations learned by supervised and self-supervised contrastive learning. Our method significantly outperforms algorithms without the ability to abstain. 2 RELATED WORK. Adversarial robustness with abstention options. Classification with an abstention option (a.k.a. selective classification (Geifman & El-Yaniv, 2017)) is a relatively less explored direction in adversarial machine learning. Hosseini et al. (2017) augmented the output class set with a NULL label and trained the classifier to reject adversarial examples by classifying them as NULL; Stutz et al. (2020) and Laidlaw & Feizi (2019) obtained robustness by rejecting low-confidence adversarial examples according to confidence thresholding or predictions on the perturbations of adversarial examples. Another related line of research to our method is the detection of adversarial examples (Grosse et al., 2017; Li & Li, 2017; Carlini & Wagner, 2017; Ma et al., 2018; Meng & Chen, 2017; Metzen et al., 2017; Bhagoji et al., 2018; Xu et al., 2017; Hu et al., 2019). However, theoretical understanding behind the empirical success of adversarial defenses with an abstention option remains elusive.
2 RELATED WORK. Adversarial robustness with abstention options. Classification with an abstention option (a.k.a. selective classification (Geifman & El-Yaniv, 2017)) is a relatively unexplored direction in adversarial machine learning. Hosseini et al. (2017) augmented the output class set with a NULL label and trained the classifier to reject adversarial examples by classifying them as NULL; Stutz et al. (2020) and Laidlaw & Feizi (2019) obtained robustness by rejecting low-confidence adversarial examples according to confidence thresholding or predictions on perturbations of adversarial examples. Another line of research related to our method is the detection of adversarial examples (Grosse et al., 2017; Li & Li, 2017; Carlini & Wagner, 2017; Ma et al., 2018; Meng & Chen, 2017; Metzen et al., 2017; Bhagoji et al., 2018; Xu et al., 2017; Hu et al., 2019). However, a theoretical understanding behind the empirical success of adversarial defenses with an abstention option remains elusive. Data-driven decision making. Data-driven algorithm selection refers to choosing a good algorithm from a parameterized family of algorithms for given data. It is known as "hyperparameter tuning" to machine learning practitioners and typically involves a "grid search", a "random search" (Bergstra & Bengio, 2012), or gradient-based search, with no guarantee of convergence to a global optimum. It was formally introduced to the theory-of-computing community by Gupta & Roughgarden (2017) as a learning paradigm, and was further extended in (Balcan et al., 2017). The key idea is to model the problem of identifying a good algorithm from data as a statistical learning problem. The technique has found useful applications in providing provably better algorithms for several domains, including clustering, mechanism design, and mixed integer programs, and in providing guarantees such as differential privacy and adaptive online learning (Balcan et al., 2018a;b; 2020). For learning in an adversarial setting, we provide the first demonstration of the effectiveness of data-driven algorithm selection in a defense method to optimize over the accuracy–abstention trade-off with strong theoretical guarantees. 3 PRELIMINARIES. Notation. We use bold lower-case letters such as x and y to represent vectors, lower-case letters such as x and y to represent scalars, and calligraphic capital letters such as X, Y, and D to represent distributions. Specifically, we denote by x ∈ X the sample instance and by y ∈ Y the label, where X ⊆ R^{n1} and Y indicate the image and label spaces, respectively. Denote by F : X → R^{n2} the feature embedding that maps an instance to a high-dimensional vector in the latent space F(X); it can be parameterized, e.g., by a deep neural network. We frequently use v ∈ R^{n2} to represent an adversarial perturbation in the feature space. Denote by dist(·, ·) the distance between any two vectors in the image or feature space; an example is the distance induced by a vector norm, dist(x1, x2) = ‖x1 − x2‖. We use B(x, τ) to represent a neighborhood of x, {x′ : dist(x, x′) ≤ τ}, in the image or feature space. We frequently denote by D_X the distribution of instances in the input space, by D_{X|y} the distribution of instances in the input space conditioned on the class y, by D_{F(X)} the distribution of features, and by D_{F(X)|y} the distribution of features conditioned on the class y. 3.1 RANDOM FEATURE SUBSPACE THREAT MODEL. In principle, an adversarial example for given labeled data (x, y) is a data point x′ that causes a classifier to output a different label on x′ than the true label y. Probably the most popular type of adversarial example is the norm-bounded perturbation in the input space. Despite a large literature devoted to defending against norm-bounded adversaries by improving the Lipschitzness of the neural network as a function mapping from the input space to the feature space (Zhang et al., 2019; Yang et al., 2020), it is typically not true that a small perturbation in the input space implies only a small modification in the feature space. In this paper, we study a threat model in which an adversary can modify the data by a large amount in the feature space. Because this large modification in feature space is assumed to come from a small perturbation in input space, we always assume that the true correct label y is the same for x′ as for x. Our model highlights the power of abstention in adversarial learning: there is a provable separation between having and not having an abstention option under our threat model. Our threat model. In the setting of (robust) representation learning, we are given a set of training instances x1, ..., xm ∈ X. Let x be an n1-dimensional test input for classification. The input is embedded into a high, n2-dimensional feature space using a deep neural network F. We predict the class of x by a prediction function on F(x) that can potentially output "don't know". The adversary may corrupt F(x) such that the modified feature vector is restricted to a random n3-dimensional affine subspace denoted by S + {F(x)}, while the perturbation magnitude may be arbitrarily large. The adversary is given access to everything, including F, x, S, and the true label of x. Throughout the paper, the terms adversary and adversarial example refer to this threat model.
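The following NumPy sketch (our illustration; the dimensions and magnitude are arbitrary) shows how such a perturbation can be instantiated: sample a uniformly random n3-dimensional subspace S of the n2-dimensional feature space, then move F(x) by an arbitrary amount within S + {F(x)}:

import numpy as np

rng = np.random.default_rng(0)
n2, n3 = 512, 8                             # feature and subspace dimensions (illustrative)

def random_subspace_basis(n2, n3, rng):
    # QR of a Gaussian matrix gives an orthonormal basis of a uniformly
    # random n3-dimensional subspace of R^{n2}.
    q, _ = np.linalg.qr(rng.standard_normal((n2, n3)))
    return q                                # columns span S

def perturb_in_subspace(feat, basis, magnitude, rng):
    # The adversary moves F(x) by `magnitude`, but only inside S + {F(x)}.
    c = rng.standard_normal(basis.shape[1])
    direction = basis @ (c / np.linalg.norm(c))
    return feat + magnitude * direction

feat = rng.standard_normal(n2)              # stands in for F(x)
basis = random_subspace_basis(n2, n3, rng)
feat_adv = perturb_in_subspace(feat, basis, magnitude=50.0, rng=rng)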
Algorithm 1 ROBUSTCLASSIFIER(τ, σ)
1: Input: a test feature F(x) (potentially an adversarial example), a set of training features F(xi) with their labels yi, i ∈ [m], a threshold parameter τ, and a separation parameter σ.
2: Preprocessing: delete every training example F(xi) such that min_{j ∈ [m], yi ≠ yj} dist(F(xi), F(xj)) < σ.
3: Output: a predicted label for F(x), or "don't know".
4: if min_{i ∈ [m]} dist(F(x), F(xi)) < τ then
5:     return y_j, where j = argmin_{i ∈ [m]} dist(F(x), F(xi))
6: else
7:     return "don't know"
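A minimal NumPy transcription of Algorithm 1 follows (a sketch assuming Euclidean distance; note that the separation parameter's symbol is elided in the extracted text, so σ/sigma is used as a stand-in):

import numpy as np

def robust_classifier(test_feat, train_feats, labels, tau, sigma):
    train_feats = np.asarray(train_feats, dtype=float)
    labels = np.asarray(labels)
    # Preprocessing (line 2): drop any training feature whose nearest
    # differently-labeled neighbor is closer than sigma.
    keep = []
    for i in range(len(labels)):
        other = train_feats[labels != labels[i]]
        if len(other) == 0 or np.linalg.norm(other - train_feats[i], axis=1).min() >= sigma:
            keep.append(i)
    train_feats, labels = train_feats[keep], labels[keep]
    # Prediction (lines 4-7): nearest neighbor's label if within tau, else abstain.
    dists = np.linalg.norm(train_feats - np.asarray(test_feat, dtype=float), axis=1)
    if len(dists) > 0 and dists.min() < tau:
        return labels[dists.argmin()]
    return "don't know"

For example, robust_classifier(f, feats, ys, tau=2.0, sigma=1.0) returns a label only when the test feature lies within τ of a retained (well-separated) training feature, and otherwise abstains.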
This paper studies, through a provable approach, whether abstaining (i.e., refusing to answer) can be beneficial for achieving small adversarial/robust error in settings where the input is potentially adversarially perturbed. The paper proves a separation between the power of models with and without the ability to abstain. In particular, it is shown that for a certain adversarial model (more about this below), when the model is forced to answer without an abstain option, it will have high adversarial error, but when abstaining is allowed, it can have small adversarial error as well as a small abstention rate in certain settings. The paper then studies algorithms for robust contrastive learning that map the inputs into high-dimensional spaces and then aim to classify them using an abstain-enabled model based on 1-NN. The paper studies ways to adjust the parameters of the model as the data comes in an online fashion (divided into batches), and shows how to achieve sublinear regret in such settings. The authors then compare linear classifiers with their own (1-NN-style) classifiers and show robustness advantages for such models when abstaining is allowed.
SP:95782322a8951193e0690262f6a90d2ed5ed7463
On the Power of Abstention and Data-Driven Decision Making for Adversarial Robustness
This paper proves some fundamental facts about classifiers that cannot abstain (i.e., decline to classify) and their robustness to adversarial perturbations. In Sec. 4, the authors provide a result showing that such classifiers are always vulnerable to adversarial perturbations in a technical sense: there will always be a class in which most training examples can be randomly perturbed in a way that yields an incorrect label nearly half the time. In Sec. 5, they propose a modified nearest-neighbor classification algorithm with two parameters that control abstention and "noise removal". They provide upper bounds on error under a random subspace attack scheme, and refine or loosen these results in several more specific or more general scenarios. In Secs. 6 & 7, they discuss methods to tune the two parameters and provide experimental evidence for their theoretical results.
SP:95782322a8951193e0690262f6a90d2ed5ed7463
VEM-GCN: Topology Optimization with Variational EM for Graph Convolutional Networks
1 INTRODUCTION. Complex graph-structured data are ubiquitous in the real world, ranging from social networks to chemical molecules. Inspired by the remarkable performance of convolutional neural networks (CNNs) in processing data with regular grid structures (e.g., images), a myriad of studies on GCNs have emerged to execute "convolution" in the graph domain (Niepert et al., 2016; Kipf & Welling, 2017; Gilmer et al., 2017; Hamilton et al., 2017; Monti et al., 2017; Gao et al., 2018). Many of these approaches follow a neighborhood aggregation mechanism (a.k.a. message passing scheme) that updates the representation of each node by iteratively aggregating the transformed messages sent from its neighboring nodes. Commencing with the pioneering works (Kipf & Welling, 2017; Gilmer et al., 2017), numerous strategies have been developed to improve the vanilla message passing scheme, such as introducing the self-attention mechanism (Veličković et al., 2018; Zhang et al., 2020), incorporating local structural information (Zhang et al., 2020; Jin et al., 2019; Ye et al., 2020), and leveraging link attributes (Gong & Cheng, 2019; Li et al., 2019; Jiang et al., 2019). Despite significant success in many fundamental tasks of graph-based machine learning, message-passing-based GCNs almost all treat the observed graph structure as ground truth and may suffer from the over-smoothing problem (Li et al., 2018), which can seriously degrade node classification performance. Given an observed noisy graph topology (i.e., excessive inter-class edges are present while many intra-class edges are missing), when multiple message passing layers are stacked to enlarge the receptive field (the maximum hop of neighborhoods), features of neighboring nodes in different classes come to dominate message passing. Node representations are thus corrupted by this harmful noise, degrading the discrimination of graph nodes. The over-smoothing phenomenon in GCNs has been studied from several angles. Li et al. (2018) first interpreted over-smoothing from the perspective of Laplacian smoothing, while Xu et al. (2018) and Klicpera et al. (2019a) associated it with the limit distribution of a random walk. Furthermore, Chen et al. (2020a) developed quantitative metrics to measure over-smoothness from the topological view; they argued that the key factor leading to over-smoothing is the noise passed between nodes of different categories, and that the classification performance of GCNs is positively correlated with the proportion of intra-class node pairs among all edges. In this paper, we propose VEM-GCN, a novel architecture that addresses the over-smoothing problem with topology optimization for uncertain graphs. Considering that a "clearer" graph with more intra-class edges and fewer inter-class edges improves the node classification performance of GCNs (Yang et al., 2019; Chen et al., 2020a), VEM-GCN introduces a latent adjacency matrix parameterized by the assortative-constrained stochastic block model (SBM), in which nodes sharing the same label are linked and inter-class edges are cut off. To jointly refine the latent graph structure and learn desirable node representations for classification, the variational EM algorithm (Neal & Hinton, 1998) is adopted to optimize the evidence lower bound (ELBO) of the likelihood function. In the inference procedure (E-step), the graph topology is optimized by approximating the posterior probability distribution of the latent adjacency matrix with a neural network learned from node embeddings. In the learning procedure (M-step), a conventional GCN is trained to maximize the log-likelihood of the observed node labels based on the learned latent graph structure. The E-step and M-step optimize the graph topology and improve the classification of unlabeled nodes in an alternating fashion.
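To convey the shape of this alternating scheme, here is a deliberately simplified, runnable analogue (our sketch, not the authors' procedure): soft class beliefs play the role of the E-step posterior over same-class edges under the assortative SBM, and one step of label propagation on the resulting latent graph stands in for training the GCN in the M-step.

import numpy as np

rng = np.random.default_rng(0)
n, c = 20, 2
true_y = np.repeat([0, 1], n // 2)

# A noisy observed graph: intra-class edges with prob 0.3, inter-class with prob 0.1.
p = np.where(true_y[:, None] == true_y[None, :], 0.3, 0.1)
obs_adj = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(obs_adj, 0.0)

labeled = np.arange(0, n, 4)                       # a few labeled nodes
q_y = np.full((n, c), 1.0 / c)                     # soft class beliefs
q_y[labeled] = np.eye(c)[true_y[labeled]]

for _ in range(10):
    # E-step (simplified): under the assortative SBM with p0=1, p1=0, the
    # posterior edge probability is the probability the two nodes share a class.
    edge_probs = q_y @ q_y.T
    latent_adj = np.maximum(obs_adj, (edge_probs > 0.5).astype(float))
    # M-step (simplified): propagate beliefs on the latent graph (standing in
    # for GCN training), then clamp the labeled nodes.
    deg = latent_adj.sum(axis=1, keepdims=True)
    q_y = (latent_adj / np.maximum(deg, 1e-12)) @ q_y
    q_y[labeled] = np.eye(c)[true_y[labeled]]

print((q_y.argmax(axis=1) == true_y).mean())       # accuracy of the inferred labels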
The proposed VEM-GCN architecture is flexible and general. In the E-step, the neural network can take arbitrary desirable node embeddings generated by algorithms such as node2vec (Grover & Leskovec, 2016), struc2vec (Ribeiro et al., 2017), and GCNs, or the raw node attributes. The GCN in the M-step can also be substituted with an arbitrary graph model. Furthermore, recent strategies for relieving the over-smoothing issue, i.e., AdaEdge (Chen et al., 2020a) and DropEdge (Rong et al., 2020), are shown to be specific cases of VEM-GCN under certain conditions. For empirical evaluation, we conduct extensive experiments on seven benchmarks for node classification, including four citation networks, two Amazon co-purchase graphs, and one Microsoft Academic graph. Experimental results demonstrate the effectiveness of the proposed VEM-GCN architecture in optimizing graph topology and mitigating the over-smoothing problem for GCNs. 2 BACKGROUND AND RELATED WORKS. Problem Setting. This paper focuses on the task of graph-based transductive node classification. A simple attributed graph is defined as a tuple G_obs = (V, A_obs, X), where V = {v_i}_{i=1}^N is the node set, A_obs = [a_ij^obs] ∈ {0, 1}^{N×N} is the observed adjacency matrix, and X ∈ R^{N×f} represents the collection of attributes, with each row corresponding to the features of an individual node. Given the labels Y_l = [y_ic] ∈ {0, 1}^{|V_l|×C} for a subset of graph nodes V_l ⊂ V assigned to C classes, the task is to infer the classes Y_u = [y_jc] ∈ {0, 1}^{|V_u|×C} of the unlabeled nodes V_u = V \ V_l based on G_obs. Graph Convolutional Networks (GCNs). The core of most GCNs is the message passing scheme, in which each node updates its representation by iteratively aggregating features from its neighborhoods. Denote by W^(l) the learnable weights in the l-th layer, N(i) the set of neighboring node indices for node v_i, and σ(·) the nonlinear activation function. A basic message passing layer takes the form

h_i^(l+1) = σ( Σ_{j ∈ N(i) ∪ {i}} α_ij^(l) W^(l) h_j^(l) ).   (1)

Here, h_j^(l) is the input feature of node v_j in the l-th layer, W^(l) h_j^(l) is the corresponding transformed message, and α_ij^(l) is the aggregation weight for the message passing from node v_j to node v_i. Existing GCNs mainly differ in the mechanism for computing α_ij^(l) (Kipf & Welling, 2017; Veličković et al., 2018; Ye et al., 2020; Hamilton et al., 2017; Zhang et al., 2020).
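For concreteness, here is a minimal NumPy version of such a layer (our sketch; the symmetric normalization used for α_ij^(l) is the choice of Kipf & Welling (2017), one of many possibilities):

import numpy as np

def message_passing_layer(h, adj, w, activation=np.tanh):
    # One layer of Eq. (1): h_i' = sigma(sum_{j in N(i) ∪ {i}} a_ij W h_j).
    # h: (N, f_in) node features; adj: (N, N) adjacency; w: (f_in, f_out).
    a = adj + np.eye(adj.shape[0])           # add self-loops: N(i) ∪ {i}
    deg = a.sum(axis=1)
    norm = a / np.sqrt(np.outer(deg, deg))   # a_ij = a_ij / sqrt(d_i * d_j)
    return activation(norm @ h @ w)

# Example: two stacked layers on a toy 3-node path graph.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
h = rng.standard_normal((3, 4))
h1 = message_passing_layer(h, adj, rng.standard_normal((4, 8)))
h2 = message_passing_layer(h1, adj, rng.standard_normal((8, 2)))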
Stochastic Block Model (SBM). The SBM (Holland et al., 1983) is a generative model for producing graphs with community structures. It parameterizes the edge probability between each node pair as

ā_ij | y_i, y_j ∼ Bernoulli(p0) if y_i = y_j, and Bernoulli(p1) if y_i ≠ y_j,   (2)

where ā_ij is an indicator variable for the edge linking nodes v_i and v_j, y_i and y_j denote their corresponding communities (classes), and p0 and p1 are termed the community link strength and the cross-community link probability, respectively. The case p0 > p1 is called an assortative model, while the case p0 < p1 is called disassortative. In this paper, we leverage an assortative-constrained SBM (Gribel et al., 2020) with p0 = 1 and p1 = 0 to model the latent graph for a clear topology.
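A small sketch of sampling from Eq. (2) (our illustration; sample_sbm is a name we introduce):

import numpy as np

def sample_sbm(labels, p0, p1, rng):
    # Edge (i, j) is Bernoulli(p0) if labels match, Bernoulli(p1) otherwise.
    # The assortative-constrained case used in this paper sets p0=1, p1=0,
    # i.e., the latent graph connects exactly the same-class pairs.
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p0, p1)
    upper = np.triu(rng.random(probs.shape) < probs, k=1)
    return (upper | upper.T).astype(float)   # symmetric, no self-loops

rng = np.random.default_rng(0)
adj = sample_sbm([0, 0, 1, 1, 2], p0=0.9, p1=0.05, rng=rng)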
Over-smoothing. Real-world graphs are often highly sparse and corrupted by noise, which leads to inter-class misconnections and missing intra-class edges. Over-smoothing is mainly caused by the indistinguishable features of nodes in different classes produced by message passing along inter-class edges. Various strategies have been developed to alleviate this problem. JK-Net (Xu et al., 2018) utilizes skip connections for adaptive feature aggregation, and DNA (Fey, 2019) further improves on this with an attention mechanism. PPNP and APPNP (Klicpera et al., 2019a) modify the message passing scheme with personalized PageRank (PPR) to avoid reaching the limit distribution of a random walk, and CGNN (Xhonneux et al., 2020) addresses over-smoothing in a similar manner to PPR. Zhao & Akoglu (2020) introduced a graph layer normalization scheme, termed PairNorm, that keeps the total pairwise distance between nodes unchanged across layers. GCNII (Chen et al., 2020b) extends GCN with initial residual and identity mapping. However, these methods cannot fundamentally resolve the over-smoothing issue: they all treat the observed graph as ground truth, so the features of nodes in different classes are still over-mixed along inter-class edges. AdaEdge (Chen et al., 2020a) iteratively refines the graph topology by adjusting edges in a self-training-like fashion; however, it adjusts only the edges linking nodes classified with high confidence, which leads to limited improvement or even degraded classification performance because of incorrect operations on misclassified nodes. DropEdge (Rong et al., 2020) randomly removes a certain fraction of edges to reduce message passing; despite enhanced robustness, it does not genuinely optimize the graph topology. BBGDC (Hasanzadeh et al., 2020) generalizes Dropout (Srivastava et al., 2014) and DropEdge via adaptive connection sampling. Uncertain Graphs and Topology Optimization. Learning with uncertain graphs is another related research area, in which the observed graph structure is assumed to be derived from noisy data rather than being ground truth. Bayesian approaches are typical methods that introduce uncertainty into network analysis. Zhang et al. (2019) developed BGCN, which treats the observed graph as a sample from a parametric family of random graphs and makes a maximum a posteriori (MAP) estimate of the graph parameters. Tiao et al. (2019) also viewed graph edges as Bernoulli random variables and used variational inference to optimize the posterior distribution of the adjacency matrix by approximating pre-defined graph priors. Other Bayesian methods have also been developed to combine GCNs with probabilistic models (Ng et al., 2018; Ma et al., 2019). However, without explicit optimization of the graph structure, they only improve robustness under certain conditions, such as incomplete edges, active learning, and adversarial attacks. For explicit topology optimization, Franceschi et al. (2019) presented LDS, which parameterizes edges as independent Bernoulli random variables and learns discrete structures for GCNs by solving a bilevel program; however, LDS requires an extra validation set for training and suffers from limited scalability. TO-GCN (Yang et al., 2019) adds only the intra-class edges derived from the labeled nodes, which causes a topology imbalance between V_u and V_l. GDC (Klicpera et al., 2019b) refines the adjacency matrix with graph diffusion to account for links between high-order neighborhoods; however, the added edges may still be noisy and hamper classification. GRCN (Yu et al., 2020) modifies the original adjacency matrix by adding a residual matrix whose elements measure the similarity between the corresponding node embeddings, and IDGL (Chen et al., 2020c) iteratively learns the graph structure in a similar manner. Pro-GNN (Jin et al., 2020) introduces low-rank and sparsity constraints to recover a clean graph for defense against adversarial attacks. NeuralSparse (Zheng et al., 2020) uses the Gumbel-Softmax trick (Jang et al., 2017) to sample k neighbors from the original neighborhood of each node, but does not consider recovering missing intra-class edges. Different from the aforementioned methods, VEM-GCN aims at relieving the over-smoothing issue: we introduce a learned latent graph based on the assortative-constrained SBM to explicitly enhance intra-class connection and suppress inter-class interaction with the variational EM algorithm.
The authors present a method for tackling the problem of over-smoothing in graph convolutional networks. Specifically, this is achieved by explicitly modelling a latent graph which, ideally, would connect each observation to all other observations of the same class and to no observations of a different class. In practice, only an uncertain picture of this latent graph is available, since in many applications the labels of unlabelled observations must be estimated. The authors present a variational EM algorithm for approximating this latent graph and using it to improve the estimation of a GCN. The authors demonstrate that the proposed method performs favourably in a battery of tests against an array of existing methods for the node classification problem.
SP:9977ed83006cd0ccbf385f26220aa9395a723157
VEM-GCN: Topology Optimization with Variational EM for Graph Convolutional Networks
This paper proposes a method to alleviate the over-smoothing problem of GNNs. The key idea is to generate a latent graph structure by leveraging a stochastic block model to approximate the observed graph structure and label information. The learned latent graph is expected to have a clear community structure with dense intra-class edges and sparse inter-class edges, so that the labels of unlabeled nodes are better predicted based on the latent structure. The whole framework is well designed as an MLE problem, with the ELBO solved by an alternating EM-style algorithm. The E-step and M-step are assumed to enhance each other's performance, but this point is not clearly validated in the experiments. It is also good to see some discussion of the relationship between the proposed framework and the DropEdge and AdaEdge methods. Overall, the idea makes sense in terms of joint topology optimization (via the SBM) and node classification, the methodology is well designed as an MLE problem, the paper is well written, and the experimental results demonstrate effectiveness to some extent.
SP:9977ed83006cd0ccbf385f26220aa9395a723157
Zero-Shot Recognition through Image-Guided Semantic Classification
1 INTRODUCTION. As a feasible solution for addressing the limitations of supervised classification methods, zero-shot learning (ZSL) aims to recognize objects whose instances have not been seen during training (Larochelle et al., 2008; Palatucci et al., 2009). Unseen classes are recognized by associating seen and unseen classes through some form of semantic space; the knowledge learned from seen classes is thereby transferred to unseen classes. In the semantic space, each class has a corresponding vector representation called a class prototype. Class prototypes can be obtained from human-annotated attributes that describe visual properties of objects (Farhadi et al., 2009; Lampert et al., 2014) or from word embeddings learned in an unsupervised manner from text corpora (Mikolov et al., 2013; Pennington et al., 2014; Devlin et al., 2018). A majority of ZSL methods can be viewed through the visual-semantic embedding framework, as displayed in Figure 1(a). Images are mapped from the visual space to the semantic space in which all classes reside, or images and labels are projected to a latent space (Yang & Hospedales, 2015; Liu et al., 2018). Inference is then performed in this common space (Akata et al., 2013; Frome et al., 2013; Socher et al., 2013), typically using cosine similarity or Euclidean distance. Another perspective on embedding-based methods is to construct an image classifier for each unseen class by learning the correspondence between a binary one-versus-rest image classifier (i.e., the visual representation of a class) and its class prototype in the semantic space (i.e., the semantic representation of a class) (Wang et al., 2019). Once this correspondence function is learned, a binary one-versus-rest image classifier can be constructed for an unseen class from its prototype (Wang et al., 2019). A commonly used choice for such a correspondence is the bilinear function (Frome et al., 2013; Akata et al., 2013; 2015; Romera-Paredes & Torr, 2015; Li et al., 2018), and considerable efforts have been made to extend the linear function to nonlinear ones (Xian et al., 2016; Wang et al., 2017; Elhoseiny et al., 2017; Qiao et al., 2016). Figure 1(b) illustrates this perspective. Learning the correspondence between an image classifier and a class prototype has the following drawbacks. First, the assumption of a single image classifier per class is restrictive, because the manner of separating classes in both the visual and semantic spaces is not unique. We argue that semantic classification should be conducted dynamically, conditioned on the input image. For example, the visual attribute wheel may be useful for classifying most car images; nevertheless, cars with missing wheels should also be correctly recognized using other visual attributes. Therefore, instance-specific semantic classifiers are preferable to category-specific ones, because the classifier weights can be adaptively determined based on image content. Second, the scale of the training data for learning the correspondence is constrained to be the number of class labels. In other words, a training set with C labels provides only C visual-semantic classifier pairs for building the correspondence, which may hinder the robustness of deep models that usually require large-scale training data.
Finally, although the class embedding has rich semantic meaning, each class is represented by only a single class prototype, which determines where images of that class inevitably collapse (MarcoBaroni, 2016; Fu et al., 2015). The semantic representations mapped from images may collapse to hubs, which are close to many other points in the semantic space, rather than being similar to the true class label (MarcoBaroni, 2016). In this paper, we present a new method, named Image-Guided Semantic Classification (IGSC), to address these problems. IGSC aims to learn the correspondence between an image and its corresponding label classifier, as illustrated in Figure 1(c). In contrast to existing methods focusing on the learning of visual (or semantic) representations (Zhang et al., 2016; Frome et al., 2013; Socher et al., 2013), IGSC analyzes the input image and seeks combinations of variables in the semantic space (e.g., combinations of attributes) that distinguish the class the input belongs to from other classes. The proposed IGSC method has the following characteristics:
• IGSC learns the correspondence between an image in the visual space and a classifier in the semantic space. The correspondence can be learned with training pairs on the scale of the number of training images rather than the number of classes.
• IGSC performs learning-to-learn in an end-to-end manner: label classification is conducted by a semantic classifier whose weights are generated on the fly (see the sketch after this list). This model is simple yet powerful because of its adaptive nature.
• IGSC unifies visual attribute detection and label classification. This is achieved via the design of a conditional network (the proposed classifier learning method), in which label classification is the main task of interest and the conditional input image provides additional information about the specific situation.
• IGSC alleviates the hubness problem. The correspondence between an image and a semantic classifier learned from seen classes can be transferred to recognize unseen concepts.
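As a minimal sketch of the instance-specific idea (ours, not the authors' exact architecture; the layer sizes, the ReLU weight generator, and the linear scoring form are all assumptions), a small network maps the image embedding θ(x) to the weights of a semantic classifier, which is then applied to every class prototype φ(y):

import numpy as np

rng = np.random.default_rng(0)
f_img, f_sem, n_cls = 2048, 312, 50                  # illustrative dimensions

# A toy weight generator: image embedding -> semantic classifier weights.
w1 = rng.standard_normal((f_img, 256)) * 0.01
w2 = rng.standard_normal((256, f_sem + 1)) * 0.01    # classifier weights plus a bias

def semantic_classifier_from_image(theta_x):
    hidden = np.maximum(theta_x @ w1, 0.0)           # ReLU
    return hidden @ w2                               # per-image classifier, shape (f_sem + 1,)

def predict(theta_x, prototypes):
    # Score every class prototype with the classifier generated for this image.
    wb = semantic_classifier_from_image(theta_x)
    w, b = wb[:-1], wb[-1]
    return int((prototypes @ w + b).argmax())

theta_x = rng.standard_normal(f_img)                 # stands in for theta(x)
prototypes = rng.standard_normal((n_cls, f_sem))     # class prototypes phi(y)
print(predict(theta_x, prototypes))

The point of the construction is that two different images can induce two different separating directions in the semantic space, unlike a single fixed classifier per class.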
We evaluated IGSC with experiments conducted on four public benchmark datasets: SUN (Patterson & Hays, 2012), CUB (Patterson & Hays, 2012), AWA2 (Lampert et al., 2014), and aPY (Farhadi et al., 2009). Experimental results demonstrate that the proposed method achieves promising performance compared with current state-of-the-art methods. The remainder of the paper is organized as follows: we briefly review related work in Section 2, Section 3 presents the proposed framework, and the experimental results and conclusions are provided in Sections 4 and 5, respectively. 2 RELATED WORK. Zero-shot learning has evolved rapidly during the last decade, so documenting the extensive literature within limited pages is hardly possible. In this section, we review a few representative zero-shot learning methods and refer readers to (Xian et al., 2019a; Wang et al., 2019) for a comprehensive survey. One pioneering mainstream of ZSL uses attributes to infer the label of an image belonging to one of the unseen classes (Lampert et al., 2014; Al-Halah et al., 2016; Norouzi et al., 2014; Jayaraman & Grauman, 2014; Kankuekul et al., 2012). The attributes of an image are first predicted; the class label is then inferred by searching for the class with the most similar set of attributes. For example, the Direct Attribute Prediction (DAP) model (Lampert et al., 2009) estimates the posterior of each attribute for an image by learning probabilistic attribute classifiers. A test sample is classified by each attribute classifier in turn, and the class label is predicted by probabilistic estimation. Like the attribute-based methods, the proposed method has the merit of modeling the relationships among classes. However, IGSC unifies the two steps of attribute classifier learning and inference from detected attributes to the class; furthermore, the attribute classifiers are learned jointly in IGSC. A broad family of ZSL methods apply an embedding framework that directly learns a mapping from the visual space to the semantic space (Palatucci et al., 2009; Akata et al., 2013; 2015; Romera-Paredes & Torr, 2015). The visual-to-semantic mapping can be linear (Frome et al., 2013) or nonlinear (Socher et al., 2013). For example, DeViSE (Frome et al., 2013) learns a linear mapping between the image and semantic spaces using an efficient ranking-loss formulation, while Cross-Modal Transfer (CMT) (Socher et al., 2013) uses a neural network with two hidden layers to learn a nonlinear projection from the image feature space to the word vector space. More recently, deep neural network models have been proposed to mirror learned semantic relations among classes in the visual domain at the image (Annadani & Biswas, 2018) or part (Zhu et al., 2018a) level. IGSC is also an embedding-based ZSL method, but it differs significantly from existing methods in that it learns the correspondence between an image and its semantic classifier, enabling different classification manners for separating class prototypes in the semantic space. Recent ZSL models adopt the generative adversarial network (GAN) (Goodfellow et al., 2014) or other generative models for synthesizing unseen examples (Bucher et al., 2017; Long et al., 2017; Jiang et al., 2018; Verma et al., 2018; Xian et al., 2018; Zhu et al., 2018b; Xian et al., 2019b; Verma et al., 2020; Yu et al., 2020; Ma & Hu, 2020) or for reconstructing training images (Chen et al., 2018). The synthesized images obtained at the training stage can be fed to conventional classifiers, so that ZSL is converted into a conventional supervised learning problem (Long et al., 2017). The transformation from attributes to image features requires generative models such as denoising autoencoders (Bucher et al., 2017), GANs (Xian et al., 2018; Zhu et al., 2018b), or their variants (Verma et al., 2018; Felix et al., 2018; Xian et al., 2019b; Yu et al., 2020; Ma & Hu, 2020). Despite the outstanding performance reported in these papers, such works leverage some form of unseen-class information during training. In real-world applications involving recognition in the wild, neither the image samples nor the semantic representations of novel classes may be available during model learning. The proposed method is agnostic to all unseen-class information during training. Furthermore, compared with the generative methods, the proposed method is much simpler in architecture design and has a much smaller model size. 3 APPROACH. 3.1 PROBLEM DESCRIPTION.
Given a training set S = {(x_n, y_n), n = 1, ..., N}, with y_n ∈ Y_s being a class label in the seen class set, the goal of ZSL is to learn a classifier f : X → Y that generalizes to predict the correct label of any image x, where the label may lie not only in Y_s but also in the unseen class set Y_u. In the prevailing family of compatibility-learning ZSL (Xian et al., 2019a; Ba et al., 2015), the prediction is made via

ŷ = f(x; W) = argmax_{y ∈ Y} F(x, y; W).   (1)

In particular, if Y = Y_u, this is the conventional ZSL setting; if Y = Y_s ∪ Y_u, this is the generalized zero-shot learning (GZSL) setting, which is more practical for real-world applications. The compatibility function F(·), parameterized by W, is used to associate visual and semantic information. In the visual space, each image x has a vector representation, denoted by θ(x). Similarly, each class label y has a vector representation in the semantic space (called the class prototype), denoted by φ(y). In short, θ(x) and φ(y) are the image and class embeddings, both of which are given.
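A compact sketch of this prediction rule with the commonly used bilinear compatibility F(x, y; W) = θ(x)ᵀ W φ(y) (our illustration; the dimensions are arbitrary and W is random here rather than learned, and the GZSL case simply takes the argmax over seen and unseen prototypes together):

import numpy as np

rng = np.random.default_rng(0)
d_img, d_sem = 2048, 312                        # dims of theta(x) and phi(y), illustrative
n_seen, n_unseen = 40, 10

W = rng.standard_normal((d_img, d_sem)) * 0.01  # compatibility matrix (learned in practice)
theta_x = rng.standard_normal(d_img)            # image embedding theta(x)
phi_seen = rng.standard_normal((n_seen, d_sem))     # seen class prototypes
phi_unseen = rng.standard_normal((n_unseen, d_sem)) # unseen class prototypes

def predict(theta_x, prototypes, W):
    # y_hat = argmax_y F(x, y; W), with F(x, y; W) = theta(x)^T W phi(y).
    scores = prototypes @ (W.T @ theta_x)
    return int(scores.argmax())

zsl_pred = predict(theta_x, phi_unseen, W)                          # Y = Y_u
gzsl_pred = predict(theta_x, np.vstack([phi_seen, phi_unseen]), W)  # Y = Y_s ∪ Y_u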
The authors tackle the problem of zero-shot learning, that is, the recognition of classes and categories for which no visual data are available but only semantic embeddings providing a description of the classes in terms of auxiliary textual information. To this end, the authors propose a method dubbed Image-Guided Semantic Classification, in which a two-stream network (fed with both visual and semantic embeddings) learns a compatibility function whose recognition performance is enhanced by means of calibrated stacking (Chao et al., 2016).
SP:e0a53b0c2398f49df1c8c053acb1dc4bc64a0729
This paper proposes a simple yet effective method for zero-shot learning. In the method, a network is learned to predict the weights of the compatibility function given an input image. The predicted weights are then applied to the semantic attributes, and the final class label is predicted by the maximum compatibility score. The method is evaluated on benchmark datasets and shows competitive performance.
SP:e0a53b0c2398f49df1c8c053acb1dc4bc64a0729
Learning Contextual Perturbation Budgets for Training Robust Neural Networks
1 INTRODUCTION . It has been demonstrated that deep neural networks , although achieving impressive performance on various tasks , are vulnerable to adversarial perturbations ( Szegedy et al. , 2013 ) . Models with high accuracy on clean and unperturbed data can be fooled into extremely poor performance when input data are adversarially perturbed . The existence of adversarial perturbations causes concerns in safety-critical applications such as self-driving cars , face recognition , and medical diagnosis . A number of methods have been proposed for training robust neural networks that can resist adversarial perturbations to some extent . Among them , adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) and certified defenses ( Wong et al. , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2020 ) are among the most reliable ones so far , and most of them try to make the network robust to any perturbation within an $\ell_p$ norm ball . Taking the commonly used $\ell_\infty$-ball defense as an example , robust training methods aim to make the model robust to perturbations on any pixel , which means the model is uniformly robust on all input dimensions . But is this a valid assumption to make ? As we know , human perception is non-uniform ( humans focus on important features even though these features can be sensitive to small noise ) and context-dependent ( what part of an image is important heavily depends on what is in the image ) . We expect a robust model to be close to human perception , rather than to learn to defend against a particular fixed threat model , e.g. , the traditional $\ell_\infty$-norm one . Intuitively , we expect a good model to be more sensitive to important features and less sensitive to unimportant features , and the importance of features should be context-dependent . Taking the MNIST hand-written digit classification problem as an example , the digit 9 can be transformed into 4 simply by modifying just a few pixels on its head , so those pixels should be considered more important , and enforcing them to be robust to a large perturbation may not be correct . On the other hand , the pixels on the frame of such an input image can be safely modified without changing the ground-truth label of the image . Therefore , a uniform budget in robust training can greatly hamper the performance of neural networks on certain tasks , and will force the network to ignore some features that are important for classification . Robustness certification with non-uniform perturbation budgets has been discussed in a prior work ( Liu et al. , 2019 ) , but training robust models and learning context-dependent perturbation budgets has not been addressed in prior works , which is more challenging and important for obtaining robust models . A detailed discussion of our differences from Liu et al . ( 2019 ) is in Sec . 2.2 . In this paper , we propose the first method that can learn context-dependent non-uniform perturbation budgets in certified robust training , based on prior certified defense algorithms for $\ell_p$-norm threat models ( Zhang et al. , 2020 ; Xu et al. , 2020 ) . To learn a context-dependent budget without introducing too many learnable parameters , we introduce a perturbation budget generator with an auxiliary neural network , to generate the context-dependent budgets based on the input image .
We also impose constraints on the generator to make the generated budgets satisfy target robustness volumes and budget ranges , where the robustness volume is defined as the product of the budgets on all input dimensions . We then train the classifier with a linear-relaxation-based certified defense algorithm , auto LiRPA ( Xu et al. , 2020 ) , generalized from CROWN-IBP ( Zhang et al. , 2020 ) , to minimize the verified error under the given budget constraints . The gradients of the loss function can be back-propagated to the perturbation budgets , allowing the classification network and the budget generator to be trained jointly in robust training . Our contributions can be summarized below : • We propose a novel algorithm to train robust networks with contextual perturbation budgets rather than uniform ones . We show that it can be incorporated into certified defense methods with linear-relaxation-based robustness verification . • We demonstrate that our method can effectively train both the classifier and the perturbation generator jointly , and we are able to train models with relatively larger robustness volumes and outperform those trained with uniform budgets . • We also show that the learned perturbation budgets are semantically meaningful and align well with the importance of different pixels in the input image . We further confirm this with two synthetic tasks and datasets constructed from MNIST and CIFAR-10 , respectively . 2 BACKGROUND AND RELATED WORK . 2.1 TRAINING ROBUST NEURAL NETWORKS . Since the discovery of adversarial examples ( Szegedy et al . ( 2013 ) , Biggio et al . ( 2013 ) ) , a great number of works have been devoted to improving the robustness of neural networks from both attack and defense perspectives ( Moosavi-Dezfooli et al. , 2016 ; Carlini & Wagner , 2017 ; Papernot et al. , 2016 ; Moosavi-Dezfooli et al. , 2017 ; Gowal et al. , 2019 ) . On a K-way classification task , training an adversarially robust neural network $f_w$ with weights $w$ can generally be formulated as solving the following min-max optimization problem : $\min_w \mathbb{E}_{(x,y)\sim\mathcal{D}} \max_{\delta \in \Delta} L(f_w(x+\delta), y)$ , ( 1 ) where $\mathcal{D}$ is the data distribution , $\Delta$ is a threat model defining the space of the perturbations , and $L$ is a loss function . Adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) applies adversarial attacks to solve the inner maximization problem and trains the neural network on generated adversarial examples , with efficiency advanced in some recent works ( Shafahi et al. , 2019 ; Wong et al. , 2020 ) . However , robustness improvements from adversarial training do not have provable guarantees . Some other recent works seek to train networks that have provable robustness , namely certified defense methods . Such methods solve the inner maximization by computing certified upper bounds that provably hold for any perturbation within the threat model , including abstract interpretation ( Singh et al. , 2018 ) , interval bound propagation ( Gowal et al. , 2018 ; Mirman et al. , 2018 ) , randomized smoothing ( Cohen et al. , 2019 ; Salman et al. , 2019 ; Zhai et al. , 2020 ) , and linear-relaxation-based methods ( Wong & Kolter , 2018 ; Mirman et al. , 2018 ; Wong et al. , 2018 ; Zhang et al. , 2020 ; Xu et al. , 2020 ) . However , nearly all existing certified defense methods treat all input features equally in the threat model , such as $\ell_p$-ball threat models $\Delta = \{ \delta : \|\delta\|_p \le \epsilon \}$ , especially the $\ell_\infty$ threat model commonly used in many previous works .
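For background, the inner maximization in equation ( 1 ) is typically approximated in adversarial training by projected gradient descent (Madry et al., 2018). The sketch below shows that standard loop for a uniform $\ell_\infty$ budget; it is generic context for the formulation above, not this paper's certified procedure, and the PyTorch model interface, step size, and iteration count are assumptions.

import torch

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Approximate max_{||delta||_inf <= eps} L(f_w(x + delta), y) with PGD."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta.requires_grad_(True)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()              # ascent step on the loss
            delta.clamp_(-eps, eps)                   # project back onto the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep the image in valid range
    return (x + delta).detach()

Certified defenses replace this sampled (lower-bound) inner maximum with a provable upper bound on the same quantity.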
But different input pixels are not uniformly important to the prediction , and thus we propose to learn contextual perturbation budgets for different pixels . 2.2 ROBUSTNESS CERTIFICATION WITH NON-UNIFORM BUDGETS . There has been a recent work , Liu et al . ( 2019 ) , studying robustness certification with non-uniform budgets . Their work aims to maximize the robustness volume while ensuring the network prediction is certifiably correct , and this optimization problem is solved using the augmented Lagrangian method . To highlight , their method has several limitations : 1 ) The perturbation budgets are obtained by solving a constrained optimization problem for each input example , which is very time-consuming . This is not only too inefficient for training , as it can bring a large overhead in each training step , but also incapable of learning perturbation budgets jointly with the classifier . In contrast , we have a significantly different scheme : we aim to maximize the accuracy under given target robustness volumes , and our perturbation budgets are obtained with a lightweight neural-network-based generator that can be jointly trained with the classifier in an end-to-end way . 2 ) Their work only focuses on certifying trained models due to the inherent limitation of the method , while we address the problem of training robust neural networks and learning contextual perturbation budgets simultaneously . Consequently , we are able to empirically demonstrate that we can effectively train robust models with much larger robustness volumes and achieve lower errors under given target robustness volumes . 3 ) We further provide experiments with synthetic tasks on MNIST and CIFAR-10 in Sec . 4.2 and Sec . 4.3 , respectively , to demonstrate that the learned perturbation budgets are semantically meaningful and can capture contextual information . 3 PROPOSED METHOD . 3.1 PROBLEM SETTING . Variable threat model Unlike many previous works that use an $\ell_\infty$ threat model with a uniform budget $\epsilon$ on all input dimensions , we allow different pixels to have different perturbation budgets , but they need to meet some constraints as defined below . For an n-dimensional input $x$ , when the perturbation budget of pixel $x_i$ is $\epsilon_i$ , the threat model is $\Delta(\epsilon_1, \epsilon_2, \cdots, \epsilon_n) = \{ \delta : |\delta_i| \le \epsilon_i , 1 \le i \le n \}$ . Thereby the $\ell_\infty$ threat model is a special case with $\epsilon_i = \epsilon_0$ ( $\forall 1 \le i \le n$ ) . We define the robustness volume of a threat model $\Delta$ as the product of all $\epsilon_i$ ( $1 \le i \le n$ ) : $V(\Delta) = \prod_{i=1}^{n} \epsilon_i$ , i.e. , $\log V(\Delta) = \sum_{i=1}^{n} \log \epsilon_i$ . ( 2 ) In principle , we have a target robustness volume , $V_0 = \epsilon_0^n$ , and $f_w$ is considered to be provably robust under this target robustness volume on instance $(x, y)$ if and only if : $\exists \Delta \in \mathcal{D} , \forall k \neq y , \min_{\delta \in \Delta} \left( [ f_w(x+\delta) ]_y - [ f_w(x+\delta) ]_k \right) > 0$ . ( 3 ) This means that the predicted score of the ground-truth class $y$ is certifiably larger than that of any other class $k \neq y$ under some threat model $\Delta$ . Instead of using a fixed $\Delta$ , in our framework $\Delta$ can be taken from a threat model space $\mathcal{D}$ , which consists of threat models $\Delta(\epsilon_1, \epsilon_2, \cdots, \epsilon_n)$ satisfying the following two constraints : Volume constraint : $\sum_i \log \epsilon_i = n \log \epsilon_0$ , ( 4 ) Range constraint : $\epsilon_l \le \epsilon_i \le \epsilon_u$ , $\epsilon_l = \underline{\alpha}\,\epsilon_0$ , $\epsilon_u = \min(\overline{\alpha}\,\epsilon_0, 1)$ . ( 5 ) The volume constraint states that the robustness volume of $\Delta$ is equal to the target robustness volume $\epsilon_0^n$ ( written in the log domain above ) . For the range constraint , the relative factors $\underline{\alpha}$ and $\overline{\alpha}$ control the perturbation budget range of each pixel , namely $[\epsilon_l, \epsilon_u]$ .
This constraint can be set to guarantee a minimum robustness and also to prevent the model from becoming overly invariant on each pixel . Perturbation budget generation A classifier can be accompanied by a perturbation budget generator $\epsilon(x)$ which tries to find perturbation budgets $\epsilon_1, \epsilon_2, \cdots, \epsilon_n$ and the corresponding threat model $\Delta(x)$ that minimize the verified loss of the classifier while satisfying constraints ( 4 ) and ( 5 ) , so that ( 3 ) is more likely to hold with the $\Delta$ generated by $\epsilon(x)$ . We will state the optimization problem in the next paragraph . Robust classification Accompanied by a perturbation budget generator , we aim to learn a robust classifier $f_w$ with the following min-max optimization problem : $\min_w \mathbb{E}_{(x,y)\sim\mathcal{D}} \max_{\delta \in \Delta(x)} L(f_w(x+\delta), y)$ . ( 6 ) Note that a key difference between this problem and the traditional one in ( 1 ) is that the threat model $\Delta(x)$ is now variable and dependent on the input $x$ . This $\Delta(x)$ is generated by $\epsilon(x)$ under the given volume and range constraints . We evaluate the robustness of a classifier $f_w$ by computing an average verified error on all the test instances , where the verified correctness on each instance is evaluated similarly to ( 3 ) , with $\Delta$ taken as the generated $\Delta(x)$ .
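As a concrete reading of constraints ( 4 ) and ( 5 ), here is a minimal sketch of a budget generator: mean-centering the raw network outputs in log space satisfies the volume constraint exactly, and a clamp enforces the range. The tiny architecture and the clamp-after-centering order are assumptions (the clamp can slightly perturb the volume equality); the paper's exact parameterization may differ.

import torch, torch.nn as nn

class BudgetGenerator(nn.Module):
    """Map an image to per-pixel budgets approximately satisfying (4)-(5)."""

    def __init__(self, eps0=0.1, alpha_lo=0.5, alpha_hi=2.0):
        super().__init__()
        self.net = nn.Sequential(  # tiny conv net; architecture is illustrative
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        self.log_eps0 = torch.log(torch.tensor(eps0))
        self.lo, self.hi = alpha_lo * eps0, min(alpha_hi * eps0, 1.0)

    def forward(self, x):
        raw = self.net(x)
        # Volume constraint (4): mean-center in log space so that
        # sum_i log(eps_i) = n * log(eps_0) holds before clamping.
        log_eps = raw - raw.mean(dim=(1, 2, 3), keepdim=True) + self.log_eps0
        # Range constraint (5): clamp each budget to [eps_l, eps_u].
        return log_eps.exp().clamp(self.lo, self.hi)

# Usage: budgets for a batch of 28x28 grayscale images.
eps = BudgetGenerator()(torch.rand(4, 1, 28, 28))
print(eps.shape, eps.log().mean().item())  # mean log-budget near log(0.1) = -2.3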
This paper proposes to change the perturbation budget for adversarial attacks to a non-uniform setting where different input pixels have different perturbation budgets. To achieve this, an additional network is trained to learn the perturbation budget for each part of the input. The approach seems to perform better than a uniform perturbation budget and also learns semantically meaningful budgets for the input.
SP:2997e3ea21f2a8a5dbb7952ecabcc70dfc1e0c57
This paper addresses the problem of training robust neural networks with non-uniform perturbation budgets on different input pixels. In practice, a perturbation budget generator is introduced to generate the context-aware perturbation budget (i.e., conditioned on the input) for each pixel of the input image. A “robustness volume” constraint on the generated perturbation budgets to control the robustness intensity is also proposed. Extensive experiments on MNIST and CIFAR-10 demonstrate that the proposed method outperforms the SOTA method under various uniform perturbation budgets.
SP:2997e3ea21f2a8a5dbb7952ecabcc70dfc1e0c57
Frequency Regularized Deep Convolutional Dictionary Learning and Application to Blind Denoising
1 INTRODUCTION . Sparsity in a transform domain is an important and widely applicable property of natural images . This property can be exploited in a variety of tasks such as signal representation , feature extraction , and image processing . For instance , consider restoring an image from a degraded version ( noisy , blurry , or missing pixels ) . These inverse problems are generally ill-posed and require utilizing adequate prior knowledge , for which sparsity has proven extremely effective ( Mairal et al. , 2014 ) . In recent years , such problems have been tackled with deep neural network architectures that achieve superior performance but are not well understood in terms of their building blocks . In this study , we are interested in utilizing the knowledge from the classical signal processing and sparse coding literature to introduce a learned framework which is interpretable and can perform on par with state-of-the-art deep-learning methods . We choose to explore this method on the task of natural image denoising , in line with much of the recent literature ( Sreter & Giryes , 2018 ; Simon & Elad , 2019 ; Lecouat et al. , 2020 ) . As a benefit of this interpretability , we are able to extend the framework to a blind-denoising setting using ideas from signal processing . In sparse representation we seek to approximate a signal as a linear combination of a few vectors from a set of vectors ( usually called dictionary atoms ) . Olshausen & Field ( 1996 ) , following a neuroscientific perspective , proposed to adapt the dictionary to a set of training data . Later , dictionary learning combined with sparse coding was investigated in numerous applications ( Mairal et al. , 2009a ; Protter & Elad , 2008 ) . More specifically , for a set of N image patches ( reshaped into column vectors ) $X = [x_1, \cdots, x_N] \in \mathbb{R}^{m \times N}$ , we seek to find the dictionary $D^* \in \mathbb{R}^{m \times k}$ and the sparse representation $Z^* = [z^*_1, \cdots, z^*_N] \in \mathbb{R}^{k \times N}$ such that $D^*, Z^* = \arg\min_{D, Z} \sum_{i=1}^{N} \|z_i\|_0$ subject to : $D z_i = x_i , \forall i = 1, \cdots, N$ . ( 1 ) This formulation is not tractable for large signals since minimizing the $\ell_0$ pseudo-norm involves a combinatorial optimization ( Natarajan , 1995 ) . To address this complication , a popular technique is to relax the problem by using the $\ell_1$-norm as a surrogate ( Sreter & Giryes , 2018 ) . When dealing with inverse problems such as denoising , learning the dictionary from the degraded signal has been shown to be effective ( Tošić & Frossard , 2011 ) . Let $y_i = x_i + n_i \in \mathbb{R}^m$ represent the noisy signal where $n_i$ follows an additive white Gaussian distribution , $\mathcal{N}(0, \sigma_n^2 I)$ . Then , the relaxed formulation can be written as $\min_{D, Z} \sum_{i=1}^{N} \|z_i\|_1$ s.t . $\sum_{i=1}^{N} \frac{1}{2} \|D z_i - y_i\|_2^2 \le \epsilon$ , or $\min_{D, Z} \sum_{i=1}^{N} \frac{1}{2} \|D z_i - y_i\|_2^2 + \lambda \|z_i\|_1$ , ( 2 ) where $\lambda$ is a regularization parameter that is nontrivially related to the representation error $\epsilon$ . We will refer to this as the basis-pursuit denoising ( BPDN ) formulation of dictionary learning . Many iterative algorithms have been proposed in the literature to solve this problem ( Mairal et al. , 2014 ) . A majority of these algorithms split the problem into a step updating the dictionary followed by a step solving for the sparse codes . Note that learning a dictionary over independent image patches neglects the dependencies between these patches . As a result , models involving patch processing are inherently sub-optimal ( Batenkov et al. , 2017 ; Simon & Elad , 2019 ) .
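As a minimal illustration of the alternating strategy just mentioned, the dictionary half-step alone can be written as one gradient step on the BPDN data term with the sparse codes held fixed, followed by renormalizing the atoms to unit norm. This is a generic sketch of the alternating scheme, not any specific cited algorithm.

import numpy as np

def dictionary_step(D, Y, Z, lr=0.1):
    """One gradient step on 0.5*||Y - D Z||_F^2 w.r.t. D, with codes Z fixed."""
    D = D - lr * (D @ Z - Y) @ Z.T  # gradient of the data term is (D Z - Y) Z^T
    # Project atoms back to unit norm, a standard constraint in dictionary learning.
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)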
Although enforcing local priors on merged images ( Sulam & Elad , 2015 ) and utilizing self-similarity between patches ( Mairal et al. , 2009b ) have been proposed as ideas to mitigate this flaw , ideally a global shift-invariant model is more appropriate . By constraining the dictionary to have a Toeplitz structure , the Convolutional Sparse Coding ( CSC ) model has been introduced , which replaces the local patch processing with a global convolution ( Grosse et al. , 2007 ; Papyan et al. , 2017 ) . Algorithms for solving the CSC model are also discussed in ( Moreau & Gramfort , 2019 ; Wohlberg , 2017 ) . In this study , we are interested in interpretable CSC-based deep-learning models . A metric known as the mutual coherence is well known to be related to the representation capability of the dictionary and is of special concern when using the CSC model with natural images ( Simon & Elad , 2019 ) . We take an alternative route to Simon & Elad ( 2019 ) in addressing the mutual coherence of CSC-based deep-learning models , which is both less computationally expensive and improves the denoising performance . We continue the discussion of CSC-based deep-learning models in Sec . 1.1 . Another important aspect of sparse representation is the sparse coding algorithm . For a given signal $y \in \mathbb{R}^m$ and dictionary $D$ , the iterative soft-thresholding algorithm ( ISTA ) ( Beck & Teboulle , 2009 ) finds the solution to the BPDN functional , $z^* = \arg\min_z \frac{1}{2} \|Dz - y\|_2^2 + \lambda \|z\|_1$ , by repeating the following iteration until a convergence criterion is reached : $z^{(k+1)} = S_{\lambda \eta^{(k)}}\left( z^{(k)} - \eta^{(k)} D^T ( D z^{(k)} - y ) \right)$ where $S_\theta(x) = \mathrm{sgn}(x) ( |x| - \theta )_+ , \theta \ge 0$ . ( 3 ) Here , $\eta^{(k)}$ is the step size of the descent algorithm at iteration $k$ . Note that performing sparse coding with an iterative method like ISTA for all patches is computationally expensive and slow . To resolve this issue , Gregor & LeCun ( 2010 ) proposed to approximate the sparse coding via a learned differentiable encoder , dubbed LISTA . Further extensions of LISTA , both in terms of practice and theory , have been studied in the literature ( Wu et al. , 2019 ; Chen et al. , 2018 ) . More recently , using LISTA combined with dictionary learning has been a research highlight ( Sreter & Giryes , 2018 ; Simon & Elad , 2019 ; Lecouat et al. , 2020 ) . We refer to models of this type , which leverage LISTA for convolutional dictionary learning , as CDL models . 1.1 RELATED WORKS . In this study , we are interested in the CDL model that concatenates a LISTA network with a linear convolutional synthesis dictionary . Let $D$ be a convolutional dictionary with $M$ filters ( and their integer shifts ) . We denote the filters in $D$ by $d^j$ where $j \in \{1, \cdots, M\}$ . Let $Z_i$ denote the sparse code for the data sample $y_i = x_i + n_i$ where $i \in \{1, 2, \cdots, N\}$ and $n \sim \mathcal{N}(0, \sigma_n^2 I)$ . The subband signal corresponding to $d^j$ in $Z_i$ is denoted by $z_i^j$ . Then the convolutional dictionary learning problem is written as $\min_{d^j, Z_i} \sum_{i=1}^{N} \frac{1}{2} \left\| y_i - \sum_{j=1}^{M} d^j * z_i^j \right\|_2^2 + \lambda \sum_{j=1}^{M} \|z_i^j\|_1$ . ( 4 ) Sreter & Giryes ( 2018 ) introduce the approximate convolutional sparse coding ( ACSC ) framework for “ task-driven convolutional sparse coding ” , combining a convolutional extension of LISTA with a linear convolutional decoder . The proposed framework offers a strategy for training an approximate convolutional sparse coding network and a corresponding convolutional dictionary in an end-to-end fashion .
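The ISTA iteration of equation ( 3 ) transcribes directly to NumPy; the step size below is set from the spectral norm of D (a standard choice), and the random test problem is purely illustrative.

import numpy as np

def soft_threshold(x, theta):
    # S_theta(x) = sgn(x) * max(|x| - theta, 0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, y, lam, n_iter=100):
    """Minimize 0.5*||Dz - y||^2 + lam*||z||_1 via ISTA (equation 3)."""
    eta = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L with L the Lipschitz constant of D^T D
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - eta * D.T @ (D @ z - y), lam * eta)
    return z

# Usage: sparse-code a synthetic signal against a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128)); D /= np.linalg.norm(D, axis=0)
y = D @ (rng.normal(size=128) * (rng.random(128) < 0.05))
print(np.count_nonzero(np.abs(ista(D, y, lam=0.1)) > 1e-6))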
They demonstrate competitive performance against classical patch-based methods such as K-SVD ( Aharon et al. , 2006 ) on image denoising and image inpainting . Our proposed baseline model ( CDLNet ) differs from the ACSC model by the use of mean-subtraction preprocessing , small-strided convolutions , and a norm constraint on the synthesis dictionary . Simon & Elad ( 2019 ) extend the framework of Sreter & Giryes ( 2018 ) by considering the role of stride in the stable recovery of signals and propose the “ CSCNet ” framework . They argue that the CSC model for image representation in a sparse domain is limited by the inclusion of “ smooth filters ” , which are required to represent the piecewise smooth characteristics of natural images . This limitation manifests itself in the maximum cross-correlation between atoms of the dictionary , known as the mutual coherence . They empirically show that using a relatively large stride , while processing shifted duplicates of the input , improves the denoising performance of the model . Although using a large stride reduces the mutual coherence of the learned filters , all possible shifts of the image need to be processed and averaged , yielding a model very similar to patch processing . We propose a frequency regularization strategy to mitigate the problem of smoothly varying filters which does not require shift-averaging . Note that the parameter $\lambda$ in equation 4 depends on the desired sparsity , relative to the noise level , and is directly related to the threshold values in ISTA . Sreter & Giryes ( 2018 ) propose to learn different thresholds for each channel , effectively changing the regularizer term in equation 4 to $\sum_{j=1}^{M} \|\lambda_j z_i^j\|_1$ . Inspired by the benefit of the minimax-concave ( MC ) penalty ( Selesnick , 2017 ) over the $\ell_1$ norm , Pokala et al . ( 2020 ) propose “ ConFirmNet ” , where a firm-thresholding function is used in the network . Kim & Park ( 2020 ) propose a signal-adaptive threshold scheme for LISTA where the threshold is decreased if the previous estimate of an element is large . Mohan et al . ( 2020 ) explore the role of bias vectors in the convolution operators of popular deep-learning networks . They advocate for eliminating the biases completely to improve generalization in blind denoising where there is a mismatch between training and inference noise levels . Isogawa et al . ( 2017 ) propose altering the biases of deep neural networks by scaling them with the input noise standard deviation . Their method is ultimately a non-blind denoising scheme , as they use the ground-truth noise statistics during training and inference . In contrast , we propose a blind-denoising scheme that is motivated by the interpretation of the biases in LISTA as thresholds and employs a scaling by the noise variance ( in the last layer of LISTA ) estimated from the input signal during training and inference . The performance of different denoising techniques on other noise distributions has also been studied in the literature , which is not the focus of this study ( Abdelhamed et al. , 2018 ; Plotz & Roth , 2017 ) . 1.2 CONTRIBUTION OF THIS STUDY . The unrolled convolutional sparse coding and dictionary learning frameworks have led to the field dubbed “ interpretable deep learning ” . The networks constructed in such a way have the benefit of interpretability and a decreased parameter count while performing quite closely to other state-of-the-art deep-learning models . In this study we further extend such frameworks .
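A minimal sketch of the CDL pattern described above: a convolutional LISTA encoder unrolled for a few iterations, with one learned threshold per channel (the per-channel $\lambda_j$ above), followed by a linear convolutional synthesis dictionary. The layer sizes, unrolling depth, and weight sharing are illustrative assumptions, not the exact architecture of any cited model.

import torch, torch.nn as nn

class ConvLISTA(nn.Module):
    """Unrolled convolutional LISTA encoder + linear convolutional decoder."""

    def __init__(self, channels=32, kernel=7, n_unroll=5):
        super().__init__()
        pad = kernel // 2
        self.We = nn.Conv2d(1, channels, kernel, padding=pad, bias=False)  # analysis
        self.S = nn.Conv2d(channels, channels, kernel, padding=pad, bias=False)
        self.D = nn.Conv2d(channels, 1, kernel, padding=pad, bias=False)   # synthesis dictionary
        # One learned threshold per subband, as in the per-channel lambda_j above.
        self.theta = nn.Parameter(1e-2 * torch.ones(1, channels, 1, 1))
        self.n_unroll = n_unroll

    def soft(self, x):
        return torch.sign(x) * torch.relu(torch.abs(x) - self.theta)

    def forward(self, y):
        z = self.soft(self.We(y))
        for _ in range(self.n_unroll):            # z <- S_theta(We y + S z)
            z = self.soft(self.We(y) + self.S(z))
        return self.D(z)                          # reconstruct sum_j d^j * z^j

# Usage: a denoising-style forward pass on a random grayscale batch.
print(ConvLISTA()(torch.rand(2, 1, 64, 64)).shape)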
We propose utilizing a strided convolutional dictionary with a fixed low-pass channel and a set of frequency-regularized learned filters ( Section 2.2 ) . Our experimental results demonstrate that such frequency regularization with a small stride leads to more interpretable dictionary filters than the prior work . Consequently , by limiting the number of low-pass atoms in the dictionary and using small-strided convolutions , we address the modeling assumptions associated with the convolutional sparse coding model ( Section 2.1.1 ) . Additionally , leveraging the interpretability of our network , we propose to parameterize the soft-thresholding operator in LISTA such that the thresholds are proportional to the estimated input noise level for a given image ( Section 2.3 ) . Experimentally , we show improved denoising performance at reduced computational complexity compared to other frameworks ( Section 3.2 ) . Furthermore , our parameterization of the learned thresholds greatly improves robustness to noise-level mismatch between training and inference and increases the generalizability of the network ( Section 3.3 ) .
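The noise-adaptive thresholding idea can be sketched as follows: estimate the noise level from the noisy input itself and scale the learned thresholds by it. The median-absolute-deviation estimator on first differences is a standard signal-processing choice assumed here, not necessarily the paper's estimator, and the linear scaling is one plausible reading of "proportional to the estimated noise level".

import torch

def estimate_sigma(y):
    """Robust noise std estimate from horizontal first differences.

    For white Gaussian noise, diff(y) has std sqrt(2)*sigma, and MAD/0.6745
    estimates the std of a zero-mean Gaussian (Donoho-style rule). Real images
    would usually use a high-pass or wavelet detail band instead.
    """
    d = y[..., :, 1:] - y[..., :, :-1]
    return d.abs().median() / (0.6745 * 2 ** 0.5)

def noise_adaptive_thresholds(theta_learned, y):
    # Scale per-channel learned thresholds by the estimated noise level,
    # so a model trained at one noise level can generalize to others.
    return theta_learned * estimate_sigma(y)

# Usage: check the estimator on a pure-noise image with sigma = 0.1.
print(float(estimate_sigma(0.1 * torch.randn(1, 1, 128, 128))))  # ~0.1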
The paper proposes a new regularization for the dictionary in the learned convolutional sparse coding model of Sreter & Giryes '18. The main contribution is that the dictionary is regularized to be composed of 1) a fixed low-pass filter and 2) a set of learned filters to occupy the complementary high-frequency space. A second contribution is that the thresholding in the network is adjustable according to the estimated noise level in the image.
SP:42b2a4961b167d02370a0924d0666be1bf962110
Frequency Regularized Deep Convolutional Dictionary Learning and Application to Blind Denoising
1 INTRODUCTION . Sparsity in a transform domain is an important and widely applicable property of natural images . This property can be exploited in a variety of tasks such as signal representation , feature extraction , and image processing . For instance , consider restoring an image from a degraded version ( noisy , blurry , or missing pixels ) . These inverse problems are generally ill-posed and require utilizing adequate prior knowledge , for which sparsity has proven extremely effective ( Mairal et al. , 2014 ) . In recent years , such problems have been tackled with deep neural network architectures that achieve superior performance but are not well-understood in terms of their building blocks . In this study , we are interested in utilizing the knowledge from classical signal processing and spare coding literature to introduce a learned framework which is interpretable and that can perform on-par with state-ofthe-art deep-learning methods . We choose to explore this method under the task of natural image denoising , in line with much of the recent literature ( Sreter & Giryes , 2018 ; Simon & Elad , 2019 ; Lecouat et al. , 2020 ) . As a benefit of this interpretability , we are able to extend the framework for a blind-denoising setting using ideas from signal processing . In sparse representation we seek to approximate a signal as a linear combination of a few vectors from a set of vectors ( usually called dictionary atoms ) . Olshausen & Field ( 1996 ) , following a neuroscientific perspective , proposed to adapt the dictionary to a set of training data . Later , dictionary learning combined with sparse coding was investigated in numerous applications ( Mairal et al. , 2009a ; Protter & Elad , 2008 ) . More specifically , for a set ofN image patches ( reshaped into column vectors ) X = [ x1 , · · · , xN ] ∈ Rm×N , we seek to find the dictionary D∗ ∈ Rm×k and the sparse representation Z∗ = [ z∗1 , · · · , z∗N ] ∈ Rk×N such that D∗ , Z∗ = arg min D , Z N∑ i=1 ‖zi‖0 subject to : Dzi = xi , ∀i = 1 , · · · , N. ( 1 ) This formulation is not tractable for large signals since minimizing the ` 0-pseudo-norm involves a combinatorial optimization ( Natarajan , 1995 ) . To address this complication , a popular technique is to relax the problem by using the ` 1-norm as a surrogate ( Sreter & Giryes , 2018 ) . When dealing with inverse problems such as denoising , learning the dictionary from the degraded signal has shown effective ( Toić & Frossard , 2011 ) . Let yi = xi + ni ∈ Rm represent the noisy signal where ni follows an additive white Gaussian distribution , N ( 0 , σ2nI ) . Then , the relaxed formulation can be written as min D , Z N∑ i=1 ‖zi‖1 s.t . N∑ i=1 1 2 ‖Dzi − yi‖22 ≤ or minD , Z N∑ i=1 1 2 ‖Dzi − yi‖22 + λ‖zi‖1 ( 2 ) where λ is a regularization parameter and is nontrivialy related to the representation error . We will refer to this as the basis-pursuit denoising ( BPDN ) formulation of dictionary learning . Many iterative algorithms have been proposed in the literature to solve this problem ( Mairal et al. , 2014 ) . A majority of these algorithms split the problem into a step updating the dictionary followed by a step solving for the sparse codes . Note that learning a dictionary over independent image patches neglects the dependencies between these patches . As a result , the models involving patch processing are inherently sub-optimal ( Batenkov et al. , 2017 ; Simon & Elad , 2019 ) . 
Although enforcing local priors on merged images ( Sulam & Elad , 2015 ) and utilizing self-similarity between patches ( Mairal et al. , 2009b ) have been proposed as ideas to mitigate this flaw , ideally a global shift-invariant model is more appropriate . By constraining the dictionary to have a Toeplitz structure , the Convolutional Sparse Coding ( CSC ) model has been introduced which replaces the local patch processing with a global convolution ( Grosse et al. , 2007 ; Papyan et al. , 2017 ) . Algorithms for solving the CSC model are also discussed in ( Moreau & Gramfort , 2019 ; Wohlberg , 2017 ) . In this study , we are interested in interpretable CSC-based deep-learning models . A metric known as the mutual-coherence is well known to be related to the representation capability of the dictionary and is of special concern in using the CSC model with natural images ( Simon & Elad , 2019 ) . We take an alternative route to Simon & Elad ( 2019 ) in addressing the mutual-coherence of CSC-based deep-learning models , which is both less computationally expensive and improves the denoising performance . We continue the discussion about CSC-based deep-learning models in Sec . 1.1 . Another important aspect of the sparse representation is the sparse coding algorithm . For a given signal y ∈ Rm and dictionary D , iterative soft-thresholding algorithm ( ISTA ) ( Beck & Teboulle , 2009 ) finds the solution to the BPDN functional , z∗ = arg minz 1/2 ‖Dz − y‖ 2 2 + λ‖z‖1 , by repeating the following iteration until a convergence criterion is reached : z ( k+1 ) = Sλη ( k ) ( zk − η ( k ) DT ( Dz ( k ) − y ) ) where Sθ ( x ) = sgn ( x ) ( |x| − θ ) + , θ ≥ 0 . ( 3 ) Here , η ( k ) is the step-size of the descent algorithm at iteration k. Note that performing sparse coding with an iterative method like ISTA for all patches is computationally exhausting and slow . To resolve this issue , Gregor & LeCun ( 2010 ) proposed to approximate the sparse coding via a learned differentiable encoder , dubbed LISTA . Further extensions of LISTA both in terms of practice and theory have been studied in the literature ( Wu et al. , 2019 ; Chen et al. , 2018 ) . More recently , using LISTA combined with dictionary learning has been a research highlight ( Sreter & Giryes , 2018 ; Simon & Elad , 2019 ; Lecouat et al. , 2020 ) . We refer to this type of models that leverages LISTA for convolutional dictionary learning as CDL models . 1.1 RELATED WORKS . In this study , we are interested in the CDL model that concatenates a LISTA network with a linear convolutional synthesis dictionary . Let D be a convolutional dictionary with M filters ( and their integer shifts ) . We denote the filters in D by d j where j ∈ { 1 , · · · , M } . Let Zi denote the sparse code for the data sample yi = xi + ni where i ∈ { 1 , 2 , · · · , N } and n ∼ N ( 0 , σ2nI ) . The corresponding subband signal to d j in Zi can be denoted as z j i . Then the convolutional dictionary learning problem is written as minimize dj , Zi N∑ i=1 1 2 ‖yi − M∑ j=1 dj ∗ zji ‖ 2 2 + λ M∑ j=1 ‖zji ‖1 . ( 4 ) Sreter & Giryes ( 2018 ) introduce the approximate convolutional sparse coding ( ACSC ) framework for “ task-driven convolutional sparse coding ” , combining a convolutional extension of LISTA with a linear convolutional decoder . The proposed framework offers a strategy for training an approximate convolutional sparse coding network and a corresponding convolutional dictionary in an end-to-end fashion . 
They demonstrate competitive performance against classical patch-based methods such as K-SVD ( Aharon et al. , 2006 ) , on image denoising and image inpainting . Our proposed baseline model ( CDLNet ) differs from the ACSC model by use of mean-subtraction preprocessing , employing small-strided convolutions , and imposing a norm-constraint on the synthesis dictionary . Simon & Elad ( 2019 ) extend the framework of Sreter & Giryes ( 2018 ) by considering the role of stride in the stable recovery of signals and proposed the “ CSCNet ” framework . They argue that the CSC model for image representation in a sparse domain is limited by the inclusion of “ smooth filters ” , which are required to represent the piecewise smooth characteristics of natural images . This limitation manifests itself in the maximum cross-correlation between atoms of the dictionary , known as the mutual-coherence . They empirically show that using relatively large stride , while processing shifted-duplicates of the input , improves denoising performance of the model . Although using large stride reduces the mutual coherence of the learned filters , all possible shifts of the image need to be processed and averaged , yielding a model very similar to patch-processing . We propose a frequency regularization strategy to mitigate the problem of smooth-varying filters which does not require shift-averaging . Note that the parameter λ in equation 4 depends on the desired sparsity , relative to the noise-level , and is directly related to the threshold values in ISTA . Sreter & Giryes ( 2018 ) propose to learn different thresholds for each channel , effectively changing the regularizer term in equation 4 to∑M j=1 ‖λjz j i ‖1 . Inspired by the benefit of minimax-concave ( MC ) penalty ( Selesnick , 2017 ) over ` 1 norm , Pokala et al . ( 2020 ) propose “ ConFirmNet ” where firm-thresholding function is used in the network . Kim & Park ( 2020 ) propose a signal adaptive threshold scheme for LISTA where the threshold is decreased if the previous estimate of an element is large . Mohan et al . ( 2020 ) explore the role of bias-vectors in popular deep-learning network ’ s convolution operators . They advocate for eliminating the biases completely to improve generalization in blinddenoising where there is mismatch between training and inference noise level . Isogawa et al . ( 2017 ) propose altering the biases of deep neural-networks by scaling them with the input noise standarddeviation . Their method is ultimately a non-blind denoising scheme as they use the ground-truth noise statistics during training and inference . In contrast , we propose a blind-denoising scheme that is motivated by the interpretation of the biases in LISTA as thresholds and employ a scaling by the noise variance ( in the last layer of LISTA ) , estimated from the input signal during training and inference . Performance of different denoising techniques on other noise distributions have also been studied in the literature , which is not the focus of this study ( Abdelhamed et al. , 2018 ; Plotz & Roth , 2017 ) . 1.2 CONTRIBUTION OF THIS STUDY . The unrolled convolutional sparse coding and dictionary learning frameworks have led to the field dubbed “ interpretable deep-learning ” . The networks constructed in such a way have the benefit of interpretability and decreased parameter count while performing quite closely to other state-of-theart deep-learning models . In this study we further extend such frameworks . 
We propose utilizing a strided convolutional dictionary with a fixed low-pass channel and a set of frequency-regularized learnt filters (Section 2.2). Our experimental results demonstrate that such frequency regularization, together with a small stride, leads to more interpretable dictionary filters than prior work. Consequently, by limiting the number of low-pass atoms in the dictionary and using small-strided convolutions, we address the modeling assumptions associated with the convolutional sparse coding model (Section 2.1.1). Additionally, leveraging the interpretability of our network, we propose to parameterize the soft-thresholding operator in LISTA such that the thresholds are proportional to the estimated input noise level for a given image (Section 2.3). Experimentally, we show improved denoising performance at reduced computational complexity compared to other frameworks (Section 3.2). Furthermore, our parameterization of the learned thresholds greatly improves robustness to noise-level mismatch between training and inference and increases the generalizability of the network (Section 3.3).
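As a sketch of the noise-adaptive thresholding just described, one might scale learned per-channel coefficients by an estimate of the input noise variance. The paper's own parameterization is given in its Section 2.3; the median-absolute-deviation estimator and the coefficients `c` below are illustrative assumptions rather than the paper's choices.

```python
import numpy as np

def estimate_noise_std(y):
    """Rough blind estimate of the noise standard deviation of an image.

    Assumption: a classical median-absolute-deviation estimate on a
    high-pass residual; first differences of i.i.d. Gaussian noise have
    standard deviation sigma * sqrt(2), hence the extra factor below.
    """
    hp = np.diff(y, axis=-1)                 # simple high-pass residual
    return np.median(np.abs(hp)) / (0.6745 * np.sqrt(2))

def adaptive_thresholds(c, y):
    """Thresholds proportional to the estimated input noise variance.

    c : array of learned per-channel coefficients (hypothetical parameters).
    """
    sigma_hat = estimate_noise_std(y)
    return c * sigma_hat ** 2                # variance scaling, as in the text

# Usage: thresholds fed to the soft-thresholding of the last LISTA layer.
y = np.random.default_rng(1).standard_normal((128, 128)) * 0.1
tau = adaptive_thresholds(c=np.full(32, 0.05), y=y)
```

Because the thresholds depend on the estimated noise level of the input itself, the same trained network can operate blindly across a range of noise levels.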
The paper proposes a denoising method based on a neural network inspired by convolutional dictionary learning. In the proposed method, one atom of the dictionary is constrained to be a low-frequency filter and all other atoms are constrained to be high-frequency. The authors also propose to make the thresholds depend on the noise level, to better adapt to different noise levels, and to use strided convolutions to reduce the computational cost of the method. The method is then evaluated on images from BSD68.
SP:42b2a4961b167d02370a0924d0666be1bf962110
Lifelong Learning of Compositional Structures
1 INTRODUCTION . A major goal of artificial intelligence is to create an agent capable of acquiring a general understanding of the world . Such an agent would require the ability to continually accumulate and build upon its knowledge as it encounters new experiences . Lifelong machine learning addresses this setting , whereby an agent faces a continual stream of diverse problems and must strive to capture the knowledge necessary for solving each new task it encounters . If the agent is capable of accumulating knowledge in some form of compositional representation ( e.g. , neural net modules ) , it could then selectively reuse and combine relevant pieces of knowledge to construct novel solutions . Various compositional representations for multiple tasks have been proposed recently ( Zaremba et al. , 2016 ; Hu et al. , 2017 ; Kirsch et al. , 2018 ; Meyerson & Miikkulainen , 2018 ) . We address the novel question of how to learn these compositional structures in a lifelong learning setting . We design a general-purpose framework that is agnostic to the specific algorithms used for learning and the form of the structures being learned . Evoking Piaget ’ s ( 1976 ) assimilation and accommodation stages of intellectual development , this framework embodies the benefits of dividing the lifelong learning process into two distinct stages . In the first stage , the learner strives to solve a new task by combining existing components it has already acquired . The second stage uses discoveries from the new task to improve existing components and to construct fresh components if necessary . Our proposed framework , which we depict visually in Appendix A , is capable of incorporating various forms of compositional structures , as well as different mechanisms for avoiding catastrophic forgetting ( McCloskey & Cohen , 1989 ) . As examples of the flexibility of our framework , it can incorporate naı̈ve fine-tuning , experience replay , and elastic weight consolidation ( Kirkpatrick et al. , 2017 ) as knowledge retention mechanisms , and linear combinations of linear models ( Kumar & Daumé III , 2012 ; Ruvolo & Eaton , 2013 ) , soft layer ordering ( Meyerson & Miikkulainen , 2018 ) , and a soft version of gating networks ( Kirsch et al. , 2018 ; Rosenbaum et al. , 2018 ) as the compositional structures . We instantiate our framework with the nine combinations of these examples , and evaluate it on eight different data sets , consistently showing that separating the lifelong learning process into two stages increases the capabilities of the learning system , reducing catastrophic forgetting and achieving higher overall performance . Qualitatively , we show that the components learned by an algorithm that adheres to our framework correspond to self-contained , reusable functions . 2 RELATED WORK . Lifelong learning In continual or lifelong learning , agents must handle a variety of tasks over their lifetimes , and should accumulate knowledge in a way that enables them to more efficiently learn to solve new problems . Recent efforts have mainly focused on avoiding catastrophic forgetting . At a high level , algorithms define parts of parametric models ( e.g. , deep neural networks ) to be shared across tasks . As the agent encounters tasks sequentially , it strives to retain the knowledge that enabled it to solve earlier tasks . One common approach is to impose regularization to prevent parameters from deviating in directions that are harmful for performance on the early tasks ( Kirkpatrick et al. 
, 2017 ; Zenke et al. , 2017 ; Li & Hoiem , 2017 ; Ritter et al. , 2018 ) . Another approach retains a small buffer of data from all tasks , and continually updates the model parameters utilizing data from all tasks , thereby maintaining the knowledge required to solve them ( Lopez-Paz & Ranzato , 2017 ; Nguyen et al. , 2018 ; Isele & Cosgun , 2018 ) . A related technique is to learn a generative model to “ hallucinate ” data , reducing the memory footprint at the cost of using lower-quality data and increasing the cost of accessing data ( Shin et al. , 2017 ; Achille et al. , 2018 ; Rao et al. , 2019 ) . These approaches , although effective in avoiding the problem of catastrophic forgetting , make no substantial effort toward the discovery of reusable knowledge . One could argue that the model parameters are learned in such a way that they are reusable across all tasks . However , it is unclear what the reusability of these parameters means , and moreover the way in which parameters are reused is hard-coded into the architecture design . This latter issue is a major drawback when attempting to learn tasks with a high degree of variability , as the exact form in which tasks are related is often unknown . Ideally , the algorithm would be able to determine these relations autonomously . Other methods learn a set of models that are reusable across many tasks and automatically select how to reuse them ( Ruvolo & Eaton , 2013 ; Nagabandi et al. , 2019 ) . However , such methods selectively reuse entire models , enabling knowledge reuse , but not explicitly in a compositional manner . Compositional knowledge A mostly distinct line of parallel work has explored the learning of compositional knowledge . The majority of such methods either learn the structure for piecing together a given set of components ( Cai et al. , 2017 ; Xu et al. , 2018 ; Bunel et al. , 2018 ) or learn the set of components given a known structure for how to compose them ( Bošnjak et al. , 2017 ) . A more interesting case is when neither the structure nor the set of components are given , and the agent must autonomously discover the compositional structure underlying a set of tasks . Some approaches for solving this problem assume access to a solution descriptor ( e.g. , in natural language ) , which can be mapped by the agent to a solution structure ( Hu et al. , 2017 ; Johnson et al. , 2017 ; Pahuja et al. , 2019 ) . However , many agents ( e.g. , service robots ) are expected to learn in more autonomous settings , where this kind of supervision is not available . Other approaches instead learn the structure directly from optimization of a cost function ( Rosenbaum et al. , 2018 ; Kirsch et al. , 2018 ; Meyerson & Miikkulainen , 2018 ; Alet et al. , 2018 ; Chang et al. , 2019 ) . Many of these works can be viewed as instances of neural architecture search , a closely related area ( Elsken et al. , 2019 ) . However , note that the approaches above assume that the agent will have access to a large batch of tasks , enabling it to evaluate numerous combinations of components and structures on all tasks simultaneously . More realistically , the agent faces a sequence of tasks in a lifelong learning fashion . Most work in this line assumes that each component can be fully learned by training on a single task , and then can be reused for other tasks ( Reed & de Freitas , 2016 ; Fernando et al. , 2017 ; Valkov et al. , 2018 ) . 
Unfortunately, this is infeasible in many real-world scenarios in which the agent has access to little data for each of the tasks. One notable exception was proposed by Gaunt et al. (2017), which improves early components with experience on new tasks, but is limited to very simplistic settings. Unlike prior work, our approach explicitly learns compositional structures in a lifelong learning setting. We do not assume access to a large batch of tasks or the ability to learn definitive components after training on a single task. Instead, we train on a small initial batch of tasks (four tasks, in our experiments), and then autonomously update the existing components to accommodate new tasks. Our framework also permits incorporating new components over time. Related work has increased network capacity in the non-compositional setting (Yoon et al., 2018) or in a compositional setting where previously learned parameters are kept fixed (Li et al., 2019). Another method enables adaptation of existing parameters (Rajasegaran et al., 2019), but requires expensively storing and training multiple models for each task to select the best one before adapting the existing parameters, and is designed for a specific choice of architecture, unlike our general framework.

3 THE LIFELONG LEARNING PROBLEM

We frame lifelong learning as online multi-task learning. The agent will face a sequence of tasks $\mathcal{T}^{(1)}, \dots, \mathcal{T}^{(T)}$ over its lifetime. Each task will be a learning problem defined by a cost function $\mathcal{L}^{(t)}(f^{(t)})$, where the agent must learn a prediction function $f^{(t)} \in \mathcal{F}: \mathcal{X}^{(t)} \mapsto \mathcal{Y}^{(t)}$ to minimize the cost; here $\mathcal{F}$ is a function class, and $\mathcal{X}^{(t)}$ and $\mathcal{Y}^{(t)}$ are the instance and label spaces, respectively. Each task's solution is parameterized by $\theta^{(t)}$, such that $f^{(t)} = f_{\theta^{(t)}}$. The goal of the lifelong learner is to find parameters $\theta^{(1)}, \dots, \theta^{(T)}$ that minimize the cost across all tasks: $\sum_{t=1}^{T} \mathcal{L}^{(t)}(f^{(t)})$. The number of tasks, the order in which tasks will arrive, and the task relationships are all unknown. Given limited data for each new task, the agent will strive to discover any relevant information to 1) relate it to previously stored knowledge in order to permit transfer and 2) store any new knowledge for future reuse. The agent may be evaluated on any previous task, requiring it to perform well on all tasks. In consequence, it must strive to retain knowledge from even the earliest tasks.

4 THE LIFELONG COMPOSITIONAL LEARNING FRAMEWORK

Our framework for lifelong learning of compositional structures (illustrated in Appendix A) stores knowledge in a set of $k$ shared components $M = \{m_1, \dots, m_k\}$ that are acquired and refined over the agent's lifetime. Each component $m_i = m_{\phi_i} \in M$ is a self-contained, reusable function parameterized by $\phi_i$ that can be combined with other components. The agent reconstructs each task's predictive function $f^{(t)}$ via a task-specific structure $s^{(t)}: \mathcal{X}^{(t)} \times M^k \mapsto \mathcal{F}$, with $M^k$ being the set of possible sequences of $k$ components, such that $f^{(t)}(x) = s^{(t)}(x, M)(x)$, where $s^{(t)}$ is parameterized by a vector $\psi^{(t)}$. Note that $s^{(t)}$ yields a function from $\mathcal{F}$. The structure functions select the components from $M$ and the order in which to compose them to construct the model for each task (the $f^{(t)}$'s). Specific examples of components and structures are described in Section 4.1.
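As a concrete illustration of a structure $s^{(t)}$ over shared components, the following sketch implements a soft layer ordering in the spirit of Meyerson & Miikkulainen (2018): at each depth, the outputs of all components are mixed by task-specific softmax weights. The shapes, the tanh nonlinearity, and the random initialization are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, depth = 4, 16, 3                 # components, hidden width, network depth

# Shared components: one weight matrix each (parameters phi_i).
components = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(k)]

# Task-specific structure psi: one logit per (depth, component) pair.
psi = rng.standard_normal((depth, k))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def predict(x, components, psi):
    """f(x) = s(x, M)(x): soft mixture of component outputs at every depth."""
    h = x
    for layer in range(depth):
        w = softmax(psi[layer])        # how strongly each component is used here
        h = np.tanh(sum(w[i] * components[i] @ h for i in range(len(components))))
    return h

y = predict(rng.standard_normal(d), components, psi)
```

Under this parameterization, assimilation would update only $\psi$ while the component matrices stay frozen, and adaptation would do the reverse, which is exactly the decoupling the framework requires.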
The intuition behind our framework is that, at any point in time $t$, the agent will have acquired a set of components suitable for solving tasks it encountered previously ($\mathcal{T}^{(1)}, \dots, \mathcal{T}^{(t-1)}$). If these components, with minor adaptations, can be combined to solve the current task $\mathcal{T}^{(t)}$, then the agent should first learn how to reuse these components before making any modifications to them. The rationale for keeping components fixed during the early stages of training on the current task $\mathcal{T}^{(t)}$, before the agent has acquired sufficient knowledge to perform well on it, is that premature modification could be catastrophically damaging to the set of existing components. Once the structure $s^{(t)}$ has been learned, we consider that the agent has captured sufficient knowledge about the current task, and it would be sensible to update the components to better accommodate that knowledge. If, instead, it is not possible to capture the current task with the existing components, then new components should be added. These notions loosely mirror the stages of assimilation and accommodation in Piaget's (1976) theories of intellectual development, and so we adopt those terms. Algorithms under our framework take the form of Algorithm 1, split into the following steps.

Algorithm 1: Lifelong Compositional Learning
    Initialize components M
    while T(t) ← getTask()
        Freeze M
        for i = 1, ..., structUpdates
            Assimilation step on structure s(t)
            if i mod adaptFreq = 0
                Freeze s(t), unfreeze M
                for j = 1, ..., compUpdates
                    Adaptation step on M
                Freeze M, unfreeze s(t)
        Add components via expansion
        Store info for future adaptation

Initialization The components $M$ should be initialized to encourage reusability, both across tasks and within different structural configurations of task models. The former means that each component should solve a particular sub-problem regardless of the objective of the task. The latter means that components may be reused multiple times within the structure of a single task's model, or at different structural orders across different tasks. For example, in deep nets, this means that the components could be used at different depths. We achieve this by jointly training on the first few tasks the agent encounters to initialize $M$, keeping a fixed but random structure that reuses components to encourage reusability.

Assimilation Algorithms for finding compositional knowledge vary in how they optimize each task's structure. In modular nets, component selection can be learned via reinforcement learning (Johnson et al., 2017; Rosenbaum et al., 2018; Chang et al., 2019; Pahuja et al., 2019), stochastic search (Fernando et al., 2017; Alet et al., 2018), or backpropagation (Shazeer et al., 2017; Kirsch et al., 2018; Meyerson & Miikkulainen, 2018). Our framework can use any of these approaches to assimilate the current task, keeping the components $M$ fixed and learning only the structure $s^{(t)}$. Approaches supported by our framework must permit decoupling the learning of the structure from the learning of the components themselves; this requirement holds for all the examples above.

Accommodation An effective approach should maintain performance on earlier tasks, while being flexible enough to incorporate new knowledge.
To accommodate new knowledge from the current task, the learner may adapt existing components or expand to include new components:
• Adaptation step: Approaches for non-compositional structures have been to naïvely fine-tune models with data from the current task, to impose regularization to selectively freeze weights (Kirkpatrick et al., 2017; Ritter et al., 2018), or to store a portion of data from previous tasks and use experience replay (Lopez-Paz & Ranzato, 2017; Isele & Cosgun, 2018). We instantiate our framework using any of these methods to accommodate new knowledge into the existing components once the current task has been assimilated. For this to be possible, we require that the method can be applied selectively to only the component parameters $\phi$.
• Expansion step: Often, the existing components, even with some adaptation, are insufficient to solve the current task. In this case, the learner should incorporate novel components, which should encode knowledge distinct from the existing components and combine with them to solve the new task. The ability to discover new components endows the learner with the flexibility required to learn over a lifetime. For this, we create component dropout, described in Section 4.2.
Concrete instantiations of Algorithm 1 are described in Section 5.1, with pseudocode in Appendix B.
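The following is a minimal, runnable sketch of the control flow of Algorithm 1. The task generator, gradient steps, and expansion test are stand-in stubs, not the concrete instantiations from the paper's Section 5.1; only the freeze/unfreeze scheduling mirrors the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def get_task(t):
    """Stub task: a random linear regression problem (hypothetical stand-in)."""
    X = rng.standard_normal((32, 8))
    w = rng.standard_normal(8)
    return X, X @ w

def assimilation_step(structure, components, task):
    """Update only the task-specific structure; components stay frozen (stub)."""
    return structure - 0.01 * rng.standard_normal(structure.shape)

def adaptation_step(components, task):
    """Update only the shared components; structure stays frozen (stub)."""
    return [c - 0.01 * rng.standard_normal(c.shape) for c in components]

def needs_expansion(structure, components, task):
    """Stub test for whether the existing components suffice for the task."""
    return False

components = [rng.standard_normal((8, 8)) for _ in range(4)]  # from initial tasks
STRUCT_UPDATES, ADAPT_FREQ, COMP_UPDATES = 100, 10, 5

for t in range(3):                                   # lifetime of tasks
    task = get_task(t)
    structure = rng.standard_normal((3, len(components)))   # psi^(t)
    for i in range(1, STRUCT_UPDATES + 1):
        structure = assimilation_step(structure, components, task)
        if i % ADAPT_FREQ == 0:                      # interleaved accommodation
            for _ in range(COMP_UPDATES):
                components = adaptation_step(components, task)
    if needs_expansion(structure, components, task):
        components.append(rng.standard_normal((8, 8)))       # expansion step
```

The key design choice visible here is the strict alternation: structure updates never touch component parameters, and adaptation is only triggered periodically once the structure has begun to settle.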
The authors propose a new framework for compositional lifelong learning. In the proposed approach, composition and adaptation are separated when a lifelong learner faces a new task: first, the learner learns the best way to compose its existing components for the new task (training an optional new component if the existing components aren't sufficient to reach good performance), and only then adapts the component parameters to better fit the new problem. This new framework is validated with extensive experiments, using three composition and three adaptation strategies from the literature on nine datasets. The paper is pleasing to read, and each choice is discussed and justified.
SP:56eb9cca9680e7ac118f3baf29789f172715c7d0
The paper introduces a framework for lifelong learning of compositional structures. The algorithm is loosely inspired by biological learning and consists of two main steps. The component-selection step relies on existing methods that can learn task-specific structure. In the next step (adaptation), the algorithm adapts the knowledge from previous tasks to the current task, and if that is insufficient to solve the task, new components are added. The adaptation step relies on existing continual learning methods for adapting the knowledge state given a new task (the component parameters are updated). Knowledge expansion (adding new components) uses component dropout, a method proposed by the authors that combines pruning with alternating backpropagation steps with and without the potential new component. The proposed method is beneficial in terms of computational complexity compared with standard lifelong learning methods. The authors evaluate the method on three compositional structures and show that it outperforms the baselines. The paper includes visualisations of the learned components, an extensive appendix with additional experiments and ablation studies, and a systematic overview of prior work on learning compositional structures and lifelong learning.
SP:56eb9cca9680e7ac118f3baf29789f172715c7d0
SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments
1 INTRODUCTION . Organisms can carve out environmental niches within which they can maintain relative predictability amidst the entropy around them ( Boltzmann , 1886 ; Schrödinger , 1944 ; Schneider & Kay , 1994 ; Friston , 2009 ) . For example , humans go to great lengths to shield themselves from surprise — we band together to build cities with homes , supplying water , food , gas , and electricity to control the deterioration of our bodies and living spaces amidst heat , cold , wind and storm . These activities exercise sophisticated control over the environment , which makes the environment more predictable and less “ surprising ” ( Friston , 2009 ; Friston et al. , 2009 ) . Could the motive of preserving order guide the automatic acquisition of useful behaviors in artificial agents ? We study this question in the context of unsupervised reinforcement learning , which deals with the problem of acquiring complex behaviors and skills with no supervision ( labels ) or incentives ( external rewards ) . Many previously proposed unsupervised reinforcement learning methods focus on noveltyseeking behaviors ( Schmidhuber , 1991 ; Lehman & Stanley , 2011 ; Still & Precup , 2012 ; Bellemare et al. , 2016 ; Houthooft et al. , 2016 ; Pathak et al. , 2017 ) . Such methods can lead to meaningful behavior in simulated environments , such as video games , where interesting and novel events mainly happen when the agent executes a specific and coherent pattern of behavior . However , we posit that in more realistic open-world environments , natural forces outside of the agent ’ s control already offer an excellent source of novelty : from other agents to unexpected natural forces , agents in these settings must contend with a constant stream of unexpected events . In such settings , rejecting perturbations and maintaining a steady equilibrium may pose a greater challenge than novelty seeking . Based on this observation , we devise an algorithm , surprise minimizing reinforcement learning ( SMiRL ) , that specifically aims to reduce the entropy of the states visited by the agent . SMiRL maintains an estimate of the distribution of visited states , pθ ( s ) , and a policy that seeks to reach likely future states under pθ ( s ) . After each action , pθ ( s ) is updated with the new state , while the policy is conditioned on the parameters of this distribution to construct a stationary MDP . We illustrate this with a diagram in Figure 1a . We empirically evaluate SMiRL in a range of domains that are characterized by naturally increasing entropy , including video game environments based on Tetris and Doom , and simulated robot tasks that require controlling a humanoid robot to balance and walk . Our experiments show that , in environments that satisfy the assumptions of our method , SMiRL automatically discovers complex and coordinated behaviors without any reward signal , learning to successfully play Tetris , shoot enemies in Doom , and balance a humanoid robot at the edge of a cliff . We also show that SMiRL can provide an effective auxiliary objective when a reward signal is provided , accelerating learning in these domains substantially more effectively than pure novelty-seeking methods . Videos of our results are available online1 2 RELATED WORK . Prior work on unsupervised learning has proposed algorithms that learn without a reward function , such as empowerment ( Klyubin et al. , 2005 ; Mohamed & Jimenez Rezende , 2015 ) or intrinsic motivation ( Chentanez et al. 
, 2005; Oudeyer & Kaplan, 2009; Oudeyer et al., 2007). Intrinsic motivation has typically focused on encouraging novelty-seeking behaviors by maximizing model uncertainty (Houthooft et al., 2016; Still & Precup, 2012; Shyam et al., 2018; Pathak et al., 2019), by maximizing model prediction error or improvement (Lopes et al., 2012; Pathak et al., 2017), through state visitation counts (Bellemare et al., 2016), via surprise maximization (Achiam & Sastry, 2017b; Schmidhuber, 1991; Sun et al., 2011), and through other novelty-based reward bonuses (Lehman & Stanley, 2011; Achiam & Sastry, 2017a; Burda et al., 2018a; Kim et al., 2019). We do the opposite. Inspired by the free energy principle (Friston, 2009; Friston et al., 2009; Ueltzhöffer, 2018; Faraji et al., 2018; Friston et al., 2016), including recent methods that train policies with RL (Tschantz et al., 2020a;b; Annabi et al., 2020) and encode a prior over desired observations, we instead incentivize an agent to minimize surprise over the distribution of states generated by its policy in unstable environments, and study the resulting behaviors. In such environments it is non-trivial to achieve low-entropy state distributions, and we believe they are more reflective of the real world. Learning-progress methods that minimize model parameter entropy (Lopes et al., 2012; Kim et al., 2020) avoid the issues novelty-based methods have with noisy distractors. These methods are based on learning the parameters of the dynamics, whereas our method learns to control the marginal state distribution. Several works aim to maximize state entropy to encourage exploration (Lee et al., 2019; Hazan et al., 2019); our method aims to do the opposite, minimizing state entropy. Recent work connects the free energy principle, empowerment, and predictive information maximization under the same framework to understand their differences (Biehl et al., 2018). Existing work has also studied how competitive self-play and competitive, multi-agent environments can lead to complex behaviors with minimal reward information (Silver et al., 2017; Bansal et al., 2017; Sukhbaatar et al., 2017; Baker et al., 2019; Weihs et al., 2019; Chen et al., 2020). Like these works, we also consider how complex behaviors can emerge in resource-constrained environments, but instead of multi-agent competition, we utilize surprise minimization to drive the emergence of complex skills.

1 https://sites.google.com/view/surpriseminimization

3 SURPRISE MINIMIZING AGENTS

We propose surprise minimization as a means to operationalize the idea of learning useful behaviors by seeking out low-entropy state distributions. The long-term effects of actions on surprise can be subtle, since actions change both (i) the state that the agent is in, and (ii) its beliefs, represented by a model $p_\theta(s)$, about which states are likely under its current policy. SMiRL induces the agent to modify its policy $\pi$ so that it encounters states $s$ with high $p_\theta(s)$, as well as to seek out states that will change the model $p_\theta(s)$ so that future states are more likely. In this section, we first describe what we mean by unstable environments and give the surprise minimization problem statement, and then present our practical deep reinforcement learning algorithm for learning policies that minimize surprise.
Many commonly used reinforcement learning benchmark environments are stable, in the sense that the agent remains in a narrow range of starting states unless it takes coordinated and purposeful actions. In such settings, unsupervised RL algorithms that seek out novelty can discover meaningful behaviors. However, many environments – including, as we argue, those that reflect properties commonly found in the real world – are unstable, in the sense that unexpected and disruptive events naturally lead to novelty and increased state entropy even if the agent does not carry out any particularly meaningful or purposeful behavior. In unstable environments, minimizing cumulative surprise requires taking actions to reach a stable distribution of states, and then acting continually and purposefully to stay in that distribution. An example is illustrated in Figure 1b: the agent's environment is unstable due to varied weather. If the robot builds a shelter, it will initially experience unfamiliar states, but in the long term the observations inside the shelter are more stable and less surprising than those outside. Another example is the game of Tetris (Figure 2), where the environment spawns new blocks and drops them into random configurations unless a skilled agent takes actions to control the board. The challenge of maintaining low entropy in unstable settings forces the SMiRL agent to acquire meaningful skills. We defer a more precise definition of unstable environments to Section 4, where we describe several unstable environments and contrast them with the static environments more commonly found in RL benchmark tasks. In static environments, novelty-seeking methods must discover complex behaviors to increase entropy, leading to interesting behavior, while SMiRL may trivially find low-entropy policies. We show that the reverse is true for unstable environments: a novelty-seeking agent is satisfied with watching the environment change around it, while a surprise-minimizing agent must develop meaningful skills to lower entropy.

Problem statement. To instantiate SMiRL, we design a reinforcement learning agent that receives larger rewards for experiencing more familiar states, based on the history of states it has experienced during the current episode. This translates to learning a policy with the lowest state entropy. We assume a fully-observed controlled Markov process (CMP), where we use $s_t$ to denote the state at time $t$, $a_t$ to denote the agent's action, $p(s_0)$ to denote the initial state distribution, and $T(s_{t+1}|s_t, a_t)$ to denote the transition probabilities. The agent learns a policy $\pi_\phi(a|s)$, parameterized by $\phi$. The goal is to minimize the entropy of the state marginal distribution under the current policy $\pi_\phi$ at each time step of the episode. We can estimate this entropy by fitting a stationary estimate $p_{\theta_{t-1}}(s_t)$ of the state marginal $d^{\pi_\phi}(s_t)$ at each time step $t$, using the states seen so far during the episode, $\tau_t = \{s_1, \dots, s_t\}$. The sum of the entropies of the state distributions over an episode can then be bounded as
$$\sum_{t=0}^{T} \mathcal{H}(s_t) = -\sum_{t=0}^{T} \mathbb{E}_{s_t \sim d^{\pi_\phi}(s_t)}\!\left[\log d^{\pi_\phi}(s_t)\right] \le -\sum_{t=0}^{T} \mathbb{E}_{s_t \sim d^{\pi_\phi}(s_t)}\!\left[\log p_{\theta_{t-1}}(s_t)\right], \tag{1}$$
where the inequality becomes an equality if $p_{\theta_{t-1}}(s_t)$ accurately models $d^{\pi_\phi}(s_t)$. Minimizing the right-hand side of this equation corresponds to maximizing an RL objective with rewards
$$r(s_t) = \log p_{\theta_{t-1}}(s_t). \tag{2}$$
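As an illustration of the reward in equation 2, the sketch below maintains an independent Bernoulli model over binary state dimensions and returns the log-likelihood of each new state as the reward. An independent Bernoulli factorization is a plausible choice for a Tetris-like binary board, but the paper's exact density models are given in its Appendix C, so treat this as an assumption.

```python
import numpy as np

class BernoulliStateModel:
    """Independent Bernoulli density p_theta(s) over binary state dimensions.

    theta is the vector of per-dimension empirical means; small Laplace
    pseudo-counts keep log-probabilities finite early in the episode.
    """
    def __init__(self, dim, prior=1.0):
        self.counts = np.full(dim, prior)    # pseudo-counts of ones
        self.n = 2.0 * prior                 # total pseudo-observations

    def log_prob(self, s):
        p = self.counts / self.n
        return float(np.sum(s * np.log(p) + (1 - s) * np.log(1 - p)))

    def update(self, s):                     # theta_t = U(s_t, theta_{t-1}, t-1)
        self.counts += s
        self.n += 1.0

# One episode of SMiRL reward computation (the environment is a stub).
model = BernoulliStateModel(dim=20)
rng = np.random.default_rng(0)
for t in range(5):
    s = (rng.random(20) < 0.3).astype(float)   # stand-in for env.step(a)
    r = model.log_prob(s)                      # reward uses the PREVIOUS theta
    model.update(s)                            # then the model is refit
```

Note the ordering: the reward for $s_{t+1}$ is computed under $\theta_t$ before the model is updated, matching line 9 of Algorithm 1 below.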
However, an optimal policy for solving this problem must take changes in the distribution $p_{\theta_{t-1}}(s_t)$ into account when selecting actions, since this distribution changes at each step. To ensure that the underlying RL optimization corresponds to a stationary and Markovian problem, we construct an augmented MDP to instantiate SMiRL in practice, which we describe in the following section.

Algorithm 1 SMiRL
1:  while not converged do
2:      β ← {}                                ▷ Reset experience
3:      for episode = 0, ..., M do
4:          s0 ∼ p(s0); τ0 ← {s0}             ▷ Initialize state
5:          s̄0 ← (s0, θ0, 0)                  ▷ Initialize augmented state
6:          for each t = 0, ..., T do
7:              at ∼ πφ(at | st, θt, t)        ▷ Get action
8:              st+1 ∼ T(st+1 | st, at)        ▷ Step dynamics
9:              rt ← log pθt(st+1)             ▷ SMiRL reward
10:             τt+1 ← τt ∪ {st+1}             ▷ Record state
11:             θt+1 ← U(τt+1)                 ▷ Fit model
12:             s̄t+1 ← (st+1, θt+1, t+1)
13:             β ← β ∪ {(s̄t, at, rt, s̄t+1)}
14:         end for
15:     end for
16:     φ ← RL(φ, β)                           ▷ Update policy
17: end while

Training SMiRL agents. In order to instantiate SMiRL, we construct an augmented MDP out of the original CMP, where the reward in Equation (2) can be expressed entirely as a function of the state. This augmented MDP has a state space that includes the original state $s_t$, as well as the sufficient statistics of $p_{\theta_t}(s)$. For example, if $p_{\theta_t}(s)$ is a normal distribution with parameters $\theta_t$, then $(\theta_t, t)$ – the parameters of the distribution and the number of states seen so far – is a sufficient statistic. Note that it is possible to use other, more complicated methods to summarize the statistics, including reading in the entirety of $\tau_t$ with a recurrent model. The policy conditioned on the augmented state is then given by $\pi_\phi(a_t|s_t, \theta_t, t)$. The parameters of the sufficient statistics are updated via $\theta_t = U(\tau_t)$, a maximum-likelihood state density estimation process $\theta_t = \arg\max_\theta \sum_{n=0}^{t} \log p_\theta(s_n)$ over the experience $\tau_t$ within the episode. When $(\theta_t, t)$ is a sufficient statistic, the update may be written as $\theta_t = U(s_t, \theta_{t-1}, t-1)$. The specific update functions $U(\tau_t)$ used in our experiments are described in Appendix C and at the end of this section. Since the reward is given by $r(s_t, \theta_{t-1}, t-1) = \log p_{\theta_{t-1}}(s_t)$, and $\theta_t$ is a function of $s_t$ and $(\theta_{t-1}, t-1)$, the resulting RL problem is fully Markovian and stationary, and as a result standard RL algorithms will converge to locally optimal solutions. Appendix D includes details on the MDP dynamics. In Figure 8, we illustrate the evolution of $p_{\theta_t}(s)$ during an episode of the game Tetris. Pseudocode for this algorithm is presented in Algorithm 1.

Density estimation with learned representations. SMiRL may, in principle, be used with any choice of model class for the density model $p_{\theta_t}(s)$. As we show in our experiments, relatively simple distribution classes, such as products of independent marginals, suffice to run SMiRL in many environments. However, it may be desirable in more complex environments to use more sophisticated density estimators, especially when learning directly from high-dimensional observations such as images. In these cases, we can use variational autoencoders (VAEs) (Kingma & Welling, 2014) to learn a non-linear state representation. A VAE is trained using the standard ELBO objective to reconstruct states $s$ after encoding them into a latent representation $z$ via an encoder $q_\omega(z|s)$ with parameters $\omega$.
Thus, $z$ can be viewed as a compressed representation of the state. When using VAE representations, we train the VAE online together with the policy. This approach necessitates two changes to the procedure described in Algorithm 1. First, training a VAE requires more data than the simpler independent models, which can easily be fitted to data from individual episodes. We propose to overcome this by not resetting the VAE parameters between training episodes, and instead training the VAE across episodes. Second, instead of passing the VAE model parameters to the policy, we only update a distribution over the VAE latent state, given by $p_{\theta_t}(z)$, such that $p_{\theta_t}(z)$ replaces $p_{\theta_t}(s)$ in the SMiRL algorithm and is fitted to only that episode's (encoded) state history. We represent $p_{\theta_t}(z)$ as a normal distribution with diagonal covariance, and fit it to the VAE encoder outputs. Thus, the mean and variance of $p_{\theta_t}(z)$ are passed to the policy at each time step, along with $t$. This implements the density estimate in line 9 of Algorithm 1. The corresponding update $U(\tau_t)$ is
$$z_j = \mathbb{E}\!\left[q_\omega(z|s_j)\right] \text{ for } s_j \in \tau_t, \qquad \mu = \frac{1}{t+1}\sum_{j=0}^{t} z_j, \qquad \sigma^2 = \frac{1}{t+1}\sum_{j=0}^{t} (\mu - z_j)^2, \qquad \theta_t = [\mu, \sigma].$$
Training the VAE online, over all previously seen data, deviates from the recipe in the previous section, where the density model was only updated within an episode; here the model is updated after a collection of episodes. This makes the RL objective somewhat non-stationary and could in theory cause convergence issues, but in practice we found that the increased representational capacity provides a significant improvement in performance.
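The latent-space update $U(\tau_t)$ above amounts to a running mean and variance over encoder outputs. The sketch below spells this out, with a stub encoder standing in for a trained VAE; the stub and the latent dimension are placeholders, not the paper's architecture.

```python
import numpy as np

def encode_mean(s):
    """Stub for E[q_omega(z|s)], the mean output of a trained VAE encoder.

    Hypothetical placeholder: simply takes the first 8 state dimensions.
    """
    return s[:8]

def fit_latent_gaussian(tau):
    """U(tau_t): diagonal Gaussian over the encoded episode history."""
    Z = np.stack([encode_mean(s) for s in tau])   # shape (t+1, latent_dim)
    mu = Z.mean(axis=0)
    var = ((Z - mu) ** 2).mean(axis=0)            # 1/(t+1) normalization
    return mu, var

def latent_log_prob(z, mu, var, eps=1e-6):
    """log p_theta(z) of the diagonal Gaussian: the SMiRL reward in latent space."""
    var = var + eps                               # guard against zero variance
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (z - mu) ** 2 / var))

rng = np.random.default_rng(0)
tau = [rng.standard_normal(16) for _ in range(10)]   # episode state history
mu, var = fit_latent_gaussian(tau)
r = latent_log_prob(encode_mean(rng.standard_normal(16)), mu, var)
```

The policy then receives $(\mu, \sigma, t)$ as the augmented part of its input, exactly as $(\theta_t, t)$ is passed in the raw-state formulation.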
This work proposes an RL approach, SMiRL, that learns effective policies in unstable environments without the need for an external reward. At a high level, the idea is almost the opposite of intrinsic-motivation RL approaches, which encourage novelty-seeking behaviors; the proposed method instead aims to minimize surprise, i.e., state entropy. To train the agent, rewards come from state-marginal estimates, but because this distribution changes over time, the authors construct an augmented MDP. Through experiments on game domains and robot control tasks, the authors show that SMiRL outperforms intrinsic-motivation methods. The authors also show that SMiRL can be used for imitation and can be combined with regular reward signals.
SP:0147099ac2866672f507e5e37383fa4f50addd0e
SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments
1 INTRODUCTION . Organisms can carve out environmental niches within which they can maintain relative predictability amidst the entropy around them ( Boltzmann , 1886 ; Schrödinger , 1944 ; Schneider & Kay , 1994 ; Friston , 2009 ) . For example , humans go to great lengths to shield themselves from surprise — we band together to build cities with homes , supplying water , food , gas , and electricity to control the deterioration of our bodies and living spaces amidst heat , cold , wind and storm . These activities exercise sophisticated control over the environment , which makes the environment more predictable and less “ surprising ” ( Friston , 2009 ; Friston et al. , 2009 ) . Could the motive of preserving order guide the automatic acquisition of useful behaviors in artificial agents ? We study this question in the context of unsupervised reinforcement learning , which deals with the problem of acquiring complex behaviors and skills with no supervision ( labels ) or incentives ( external rewards ) . Many previously proposed unsupervised reinforcement learning methods focus on noveltyseeking behaviors ( Schmidhuber , 1991 ; Lehman & Stanley , 2011 ; Still & Precup , 2012 ; Bellemare et al. , 2016 ; Houthooft et al. , 2016 ; Pathak et al. , 2017 ) . Such methods can lead to meaningful behavior in simulated environments , such as video games , where interesting and novel events mainly happen when the agent executes a specific and coherent pattern of behavior . However , we posit that in more realistic open-world environments , natural forces outside of the agent ’ s control already offer an excellent source of novelty : from other agents to unexpected natural forces , agents in these settings must contend with a constant stream of unexpected events . In such settings , rejecting perturbations and maintaining a steady equilibrium may pose a greater challenge than novelty seeking . Based on this observation , we devise an algorithm , surprise minimizing reinforcement learning ( SMiRL ) , that specifically aims to reduce the entropy of the states visited by the agent . SMiRL maintains an estimate of the distribution of visited states , pθ ( s ) , and a policy that seeks to reach likely future states under pθ ( s ) . After each action , pθ ( s ) is updated with the new state , while the policy is conditioned on the parameters of this distribution to construct a stationary MDP . We illustrate this with a diagram in Figure 1a . We empirically evaluate SMiRL in a range of domains that are characterized by naturally increasing entropy , including video game environments based on Tetris and Doom , and simulated robot tasks that require controlling a humanoid robot to balance and walk . Our experiments show that , in environments that satisfy the assumptions of our method , SMiRL automatically discovers complex and coordinated behaviors without any reward signal , learning to successfully play Tetris , shoot enemies in Doom , and balance a humanoid robot at the edge of a cliff . We also show that SMiRL can provide an effective auxiliary objective when a reward signal is provided , accelerating learning in these domains substantially more effectively than pure novelty-seeking methods . Videos of our results are available online1 2 RELATED WORK . Prior work on unsupervised learning has proposed algorithms that learn without a reward function , such as empowerment ( Klyubin et al. , 2005 ; Mohamed & Jimenez Rezende , 2015 ) or intrinsic motivation ( Chentanez et al. 
, 2005 ; Oudeyer & Kaplan , 2009 ; Oudeyer et al. , 2007 ) . Intrinsic motivation has typically focused on encouraging novelty-seeking behaviors by maximizing model uncertainty ( Houthooft et al. , 2016 ; Still & Precup , 2012 ; Shyam et al. , 2018 ; Pathak et al. , 2019 ) , by maximizing model prediction error or improvement ( Lopes et al. , 2012 ; Pathak et al. , 2017 ) , through state visitation counts ( Bellemare et al. , 2016 ) , via surprise maximization ( Achiam & Sastry , 2017b ; Schmidhuber , 1991 ; Sun et al. , 2011 ) , and through other novelty-based reward bonuses ( Lehman & Stanley , 2011 ; Achiam & Sastry , 2017a ; Burda et al. , 2018a ; Kim et al. , 2019 ) . We do the opposite . Inspired by the free energy principle ( Friston , 2009 ; Friston et al. , 2009 ; Ueltzhöffer , 2018 ; Faraji et al. , 2018 ; Friston et al. , 2016 ) including recent methods that train policies using RL ( Tschantz et al. , 2020a ; b ; Annabi et al. , 2020 ) that encode a prior over desired observations , we instead incentivize an agent to minimize surprise over the distribution of states generated by the policy in unstable environments , and study the resulting behaviors . In such environments it is non-trivial to achieve low entropy state distributions , which we believe are more reflective of the real world . Learning progress methods that minimize model parameter entropy ( Lopes et al. , 2012 ; Kim et al. , 2020 ) avoid the issues novelty-based methods have with noisy distractors . These methods are based on learning the parameters of the dynamics where our method is learning to control the marginal state distribution . Several works aim to maximize state entropy to encourage exploration ( Lee et al. , 2019 ; Hazan et al. , 2019 ) . Our method aims to do the opposite , minimizing state entropy . Recent work connects the free energy principle , empowerment and predictive information maximization under the same framework to understand their differences ( Biehl et al. , 2018 ) . Existing work has also studied how competitive self-play and competitive , multi-agent environments can lead to complex behaviors with minimal reward information ( Silver et al. , 2017 ; Bansal et al. , 2017 ; Sukhbaatar et al. , 2017 ; Baker et al. , 2019 ; Weihs et al. , 2019 ; Chen et al. , 2020 ) . Like these works , we also consider how complex behaviors can emerge in resource-constrained environments , but instead of multi-agent competition , we utilize surprise minimization to drive the emergence of complex skills . 1https : //sites.google.com/view/surpriseminimization 3 SURPRISE MINIMIZING AGENTS . We propose surprise minimization as a means to operationalize the idea of learning useful behaviors by seeking out low entropy state distributions . The long term effects of actions on surprise can be subtle , since actions change both ( i ) the state that the agent is in , and ( ii ) its beliefs , represented by a model pθ ( s ) , about which states are likely under its current policy . SMiRL induces the agent to modify its policy π so that it encounters states s with high pθ ( s ) , as well as to seek out states that will change the model pθ ( s ) so that future states are more likely . In this section , we will first describe what we mean by unstable environments and provide the surprise minimization problem statement , and then present our practical deep reinforcement learning algorithm for learning policies that minimize surprise . 
Many commonly used reinforcement learning benchmark environments are stable, in the sense that the agent remains in a narrow range of starting states unless it takes coordinated and purposeful actions. In such settings, unsupervised RL algorithms that seek out novelty can discover meaningful behaviors. However, many environments, including, as we argue, those that reflect properties commonly found in the real world, are unstable, in the sense that unexpected and disruptive events naturally lead to novelty and increased state entropy even if the agent does not carry out any particularly meaningful or purposeful behavior. In unstable environments, minimizing cumulative surprise requires taking actions to reach a stable distribution of states, and then acting continually and purposefully to stay in this distribution. An example of this is illustrated in Figure 1b: the agent's environment is unstable due to varied weather. If the robot builds a shelter, it will initially experience unfamiliar states, but in the long term the observations inside the shelter are more stable and less surprising than those outside. Another example is the game of Tetris (Figure 2), where the environment spawns new blocks and drops them into random configurations, unless a skilled agent takes actions to control the board. The challenge of maintaining low entropy in unstable settings forces the SMiRL agent to acquire meaningful skills. We defer a more precise definition of unstable environments to Section 4, where we describe several unstable environments and contrast them with the static environments that are more commonly found in RL benchmark tasks. In static environments, novelty-seeking methods must discover complex behaviors to increase entropy, leading to interesting behavior, while SMiRL may trivially find low-entropy policies. We show that the reverse is true for unstable environments: a novelty-seeking agent is satisfied with watching the environment change around it, while a surprise-minimizing agent must develop meaningful skills to lower entropy. Problem statement. To instantiate SMiRL, we design a reinforcement learning agent that receives larger rewards for experiencing more familiar states, based on the history of states it has experienced during the current episode. This translates to learning a policy with the lowest state entropy. We assume a fully-observed controlled Markov process (CMP), where we use st to denote the state at time t, at to denote the agent's action, p(s0) to denote the initial state distribution, and T(st+1|st, at) to denote the transition probabilities. The agent learns a policy πφ(a|s), parameterized by φ. The goal is to minimize the entropy of its state marginal distribution under its current policy πφ at each time step of the episode. We can estimate this entropy by fitting an estimate of the state marginal d^{πφ}(st) at each time step t, given by pθt−1(st), using the states seen so far during the episode, τt = {s1, ..., st}. The sum of the entropies of the state distributions over an episode can then be estimated as

$$\sum_{t=0}^{T} \mathcal{H}(s_t) = -\sum_{t=0}^{T} \mathbb{E}_{s_t \sim d^{\pi_\phi}(s_t)}\left[\log d^{\pi_\phi}(s_t)\right] \leq -\sum_{t=0}^{T} \mathbb{E}_{s_t \sim d^{\pi_\phi}(s_t)}\left[\log p_{\theta_{t-1}}(s_t)\right], \quad (1)$$

where the inequality becomes an equality if pθt−1(st) accurately models d^{πφ}(st). Minimizing the right-hand side of this equation corresponds to maximizing an RL objective with rewards:

$$r(s_t) = \log p_{\theta_{t-1}}(s_t). \quad (2)$$
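To make the reward in Equation (2) concrete, here is a minimal sketch of the SMiRL reward computation for binary state vectors (e.g., Tetris board occupancy), using a product of independent Bernoulli marginals as pθ(s). The Laplace smoothing constant and the Bernoulli model class are our assumptions for illustration; the specific density models used in the paper's experiments are described in its Appendix C.

```python
import numpy as np

class IndependentBernoulliModel:
    """Product of independent Bernoulli marginals over a binary state vector,
    fitted by maximum likelihood to the states seen so far in the episode."""

    def __init__(self, state_dim, smoothing=1.0):
        self.counts = np.zeros(state_dim)   # per-dimension count of 1s
        self.n = 0                          # number of states observed
        self.smoothing = smoothing          # Laplace smoothing (assumed)

    def update(self, state):
        # theta_t = U(s_t, theta_{t-1}, t-1): running sufficient statistics.
        self.counts += state
        self.n += 1

    def log_prob(self, state):
        # log p_theta(s) = sum_d [ s_d log p_d + (1 - s_d) log(1 - p_d) ]
        p = (self.counts + self.smoothing) / (self.n + 2.0 * self.smoothing)
        return float(np.sum(state * np.log(p) + (1 - state) * np.log(1 - p)))

# One episode of SMiRL reward computation on random binary "board" states.
rng = np.random.default_rng(0)
model = IndependentBernoulliModel(state_dim=20)
model.update(rng.integers(0, 2, 20))        # initialize with s_0
for t in range(5):
    s_next = rng.integers(0, 2, 20)         # stand-in for env.step(a_t)
    r = model.log_prob(s_next)              # r_t = log p_{theta_t}(s_{t+1})
    model.update(s_next)                    # fit model on tau_{t+1}
    print(f"t={t}  SMiRL reward={r:.2f}")
```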
However, an optimal policy for solving this problem must take changes in the distribution pθt−1(st) into account when selecting actions, since this distribution changes at each step. To ensure that the underlying RL optimization corresponds to a stationary and Markovian problem, we construct an augmented MDP to instantiate SMiRL in practice, which we describe in the following section.

Algorithm 1 SMiRL
 1: while not converged do
 2:   β ← {}                              ▷ Reset experience
 3:   for episode = 0, ..., M do
 4:     s0 ∼ p(s0); τ0 ← {s0}             ▷ Initialize state
 5:     s̄0 ← (s0, θ0, 0)                  ▷ Initialize augmented state
 6:     for t = 0, ..., T do
 7:       at ∼ πφ(at | st, θt, t)          ▷ Get action
 8:       st+1 ∼ T(st+1 | st, at)          ▷ Step dynamics
 9:       rt ← log pθt(st+1)               ▷ SMiRL reward
10:       τt+1 ← τt ∪ {st+1}               ▷ Record state
11:       θt+1 ← U(τt+1)                   ▷ Fit model
12:       s̄t+1 ← (st+1, θt+1, t+1)
13:       β ← β ∪ {(s̄t, at, rt, s̄t+1)}
14:     end for
15:   end for
16:   φ ← RL(φ, β)                        ▷ Update policy
17: end while

Training SMiRL agents. In order to instantiate SMiRL, we construct an augmented MDP out of the original CMP, where the reward in Equation (2) can be expressed entirely as a function of the state. This augmented MDP has a state space that includes the original state st, as well as the sufficient statistics of pθt(s). For example, if pθt(s) is a normal distribution with parameters θt, then (θt, t), the parameters of the distribution and the number of states seen so far, represents a sufficient statistic. Note that it is possible to use other, more complicated methods to summarize the statistics, including reading in the entirety of τt using a recurrent model. The policy conditioned on the augmented state is then given by πφ(at|st, θt, t). The parameters of the sufficient statistics are updated as θt = U(τt) using a maximum likelihood state density estimation process, $\theta_t = \arg\max_\theta \sum_{n=0}^{t} \log p_\theta(s_n)$, over the experience within the episode τt. When (θt, t) is a sufficient statistic, the update may be written as θt = U(st, θt−1, t−1). Specific update functions U(τt) used in our experiments are described in Appendix C and at the end of this section. Since the reward is given by r(st, θt−1, t−1) = log pθt−1(st), and θt is a function of st and (θt−1, t−1), the resulting RL problem is fully Markovian and stationary, and as a result standard RL algorithms will converge to locally optimal solutions. Appendix D includes details on the MDP dynamics. In Figure 8, we illustrate the evolution of pθt(s) during an episode of the game Tetris. The pseudocode for this algorithm is presented in Algorithm 1. Density estimation with learned representations. SMiRL may, in principle, be used with any choice of model class for the density model pθt(s). As we show in our experiments, relatively simple distribution classes, such as products of independent marginals, suffice to run SMiRL in many environments. However, it may be desirable in more complex environments to use more sophisticated density estimators, especially when learning directly from high-dimensional observations such as images. In these cases, we can use variational autoencoders (VAEs) (Kingma & Welling, 2014) to learn a non-linear state representation. A VAE is trained using the standard ELBO objective to reconstruct states s after encoding them into a latent representation z via an encoder qω(z|s), with parameters ω.
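The following sketch illustrates the augmented-state construction for the case where pθt(s) is a diagonal Gaussian, with the sufficient-statistics update θt = U(st, θt−1, t−1) implemented as an incremental (Welford) mean/variance update. The variance floor is an assumed numerical detail, not from the paper.

```python
import numpy as np

class GaussianSufficientStats:
    """Diagonal Gaussian p_theta(s) whose sufficient statistics (mu, var, t)
    form the augmented part of the SMiRL state, updated incrementally as
    theta_t = U(s_t, theta_{t-1}, t-1)."""

    def __init__(self, state_dim, min_var=1e-3):
        self.mu = np.zeros(state_dim)
        self.m2 = np.zeros(state_dim)   # running sum of squared deviations
        self.t = 0
        self.min_var = min_var          # variance floor (assumed)

    def update(self, s):
        # Welford's online update of the mean and variance.
        self.t += 1
        delta = s - self.mu
        self.mu += delta / self.t
        self.m2 += delta * (s - self.mu)

    def log_prob(self, s):
        var = np.maximum(self.m2 / max(self.t, 1), self.min_var)
        return float(-0.5 * np.sum((s - self.mu) ** 2 / var
                                   + np.log(2.0 * np.pi * var)))

    def augmented_state(self, s):
        # \bar{s} = (s, theta_t, t): what the policy pi_phi conditions on.
        var = np.maximum(self.m2 / max(self.t, 1), self.min_var)
        return np.concatenate([s, self.mu, var, [float(self.t)]])

# Example: one update and the augmented state the policy would see.
stats = GaussianSufficientStats(state_dim=4)
s = np.array([0.1, -0.3, 0.2, 0.0])
stats.update(s)
print(stats.log_prob(s), stats.augmented_state(s).shape)  # shape (13,) = 4+4+4+1
```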
Thus, z can be viewed as a compressed representation of the state. When using VAE representations, we train the VAE online together with the policy. This approach necessitates two changes to the procedure described in Algorithm 1. First, training a VAE requires more data than the simpler independent models, which can easily be fitted to data from individual episodes. We propose to overcome this by not resetting the VAE parameters between training episodes, and instead training the VAE across episodes. Second, instead of passing the VAE model parameters to the policy, we only update a distribution over the VAE latent state, given by pθt(z), such that pθt(z) replaces pθt(s) in the SMiRL algorithm, and is fitted to only that episode's (encoded) state history. We represent pθt(z) as a normal distribution with a diagonal covariance, and fit it to the VAE encoder outputs. Thus, the mean and variance of pθt(z) are passed to the policy at each time step, along with t. This implements the density estimate in line 9 of Algorithm 1. The corresponding update U(τt) is:

$$z_j = \mathbb{E}\left[q_\omega(z \mid s_j)\right] \text{ for } s_j \in \tau_t, \quad \mu = \frac{1}{t+1}\sum_{j=0}^{t} z_j, \quad \sigma = \frac{1}{t+1}\sum_{j=0}^{t} (\mu - z_j)^2, \quad \theta_t = [\mu, \sigma].$$

Training the VAE online, over all previously seen data, deviates from the recipe in the previous section, where the density model was only updated within an episode. In this case the model is updated after a collection of episodes. This makes the objective for RL somewhat non-stationary and could theoretically cause convergence issues; however, we found in practice that the increased representational capacity provides a significant improvement in performance.
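A minimal sketch of the VAE-variant update U(τt) described above: encode the episode's state history and fit a diagonal Gaussian over the latents. The linear `encoder_mean` stand-in replaces the trained VAE encoder mean E[qω(z|s)]; everything else follows the displayed update.

```python
import numpy as np

def fit_latent_prior(encoder_mean, states):
    """U(tau_t) for the VAE variant: encode each state in the episode history,
    then fit a diagonal Gaussian over the latent means. `encoder_mean` is a
    stand-in for E[q_omega(z|s)], e.g. the VAE encoder's mean head; any
    callable mapping a state to a latent vector works here."""
    z = np.stack([encoder_mean(s) for s in states])   # (t+1, latent_dim)
    mu = z.mean(axis=0)                               # 1/(t+1) * sum_j z_j
    sigma2 = ((z - mu) ** 2).mean(axis=0)             # 1/(t+1) * sum_j (mu - z_j)^2
    return mu, sigma2                                 # theta_t = [mu, sigma]

# Toy usage with a fixed random linear "encoder" as the stand-in.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 32))                          # latent_dim x state_dim
tau = [rng.normal(size=32) for _ in range(10)]        # episode state history
mu, sigma2 = fit_latent_prior(lambda s: W @ s, tau)
print(mu.shape, sigma2.shape)                         # (8,), (8,)
```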
The authors target the unsupervised reinforcement learning problem. In contrast to existing approaches that maximize state entropy, the proposed method minimizes it. It is interesting that such an idea achieves good performance in unstable environments. A state distribution is fitted during interaction with the environment, and the probability of the current state is used as a virtual reward. The parameters (or sufficient statistics) of this distribution are also passed to the policy. The motivation is clear and empirically verified. It is generally a good paper.
SP:0147099ac2866672f507e5e37383fa4f50addd0e
Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes
1 INTRODUCTION . Determinantal point processes ( DPPs ) have proven useful for numerous machine learning tasks . For example , recent uses include summarization ( Sharghi et al. , 2018 ) , recommender systems ( Wilhelm et al. , 2018 ) , neural network compression ( Mariet & Sra , 2016 ) , kernel approximation ( Li et al. , 2016 ) , multi-modal output generation ( Elfeki et al. , 2019 ) , and batch selection , both for stochastic optimization ( Zhang et al. , 2017 ) and for active learning ( Bıyık et al. , 2019 ) . For subset selection problems where the ground set of items to select from has cardinality M , the typical DPP is parameterized by an M ×M kernel matrix . Most prior work has been concerned with symmetric DPPs , where the kernel must equal its transpose . However , recent work has considered the more general class of nonsymmetric DPPs ( NDPPs ) and shown that these have additional useful modeling power ( Brunel , 2018 ; Gartrell et al. , 2019 ) . In particular , unlike symmetric DPPs , which can only model negative correlations between items , NDPPs allow modeling of positive correlations , where the presence of item i in the selected set increases the probability that some other item j will also be selected . There are many intuitive examples of how positive correlations can be of practical importance . For example , consider a product recommendation task for a retail website , where a camera is found in a user ’ s shopping cart , and the goal is to display several other items that might be purchased . Relative to an empty cart , the presence of the camera probably increases the probability of buying an accessory like a tripod . Although NDPPs can theoretically model such behavior , the existing approach for NDPP learning and inference ( Gartrell et al. , 2019 ) is often impractical in terms of both storage and runtime requirements . These algorithms require memory quadratic in M and time quadratic ( for inference ) or cubic ( for learning ) in M ; for the not-unusual M of 1 million , this requires storing 8TB-size objects in memory , with runtime millions or billions of times slower than that of a linear-complexity method . In this work , we make the following contributions : Learning : We propose a new decomposition of the NDPP kernel which reduces the storage and runtime requirements of learning and inference to linear in M . Fortuitously , the modified decomposition retains all of the previous decomposition ’ s modeling power , as it covers the same part of the NDPP kernel space . The algebraic manipulations we apply to get linear complexity for this decomposition can not be applied to prior work , meaning that our new decomposition is crucial for scalability . Inference : After learning , prior NDPP work applies a DPP conditioning algorithm to do subset expansion ( Gartrell et al. , 2019 ) , with quadratic runtime in M . However , prior work does not examine the general problem of MAP inference for NDPPs , i.e. , solving the problem of finding the highestprobability subset under a DPP . For symmetric DPPs , there exists a standard greedy MAP inference algorithm that is linear in M . In this work , we develop a version of this algorithm that is also linear for low-rank NDPPs . The low-rank requirement is unique to NDPPs , and highlights the fact that the transformation of the algorithm from the symmetric to the nonsymmetric space is non-trivial . To the best of our knowledge , this is the first MAP algorithm proposed for NDPPs . 
We combine the above contributions through experiments that involve learning NDPP kernels and applying MAP inference to these kernels to do subset selection for several real-world datasets. These experiments demonstrate that our algorithms are much more scalable, and that the new kernel decomposition matches the predictive performance of the decomposition from prior work. 2 BACKGROUND. Consider a finite set Y = {1, 2, ..., M} of cardinality M, which we will also denote by [[M]]. A DPP on [[M]] defines a probability distribution over all of its 2^M subsets. It is parameterized by a matrix L ∈ R^{M×M}, called the kernel, such that the probability of each subset Y ⊆ [[M]] is proportional to the determinant of its corresponding principal submatrix: Pr(Y) ∝ det(L_Y). The normalization constant for this distribution can be expressed as a single M × M determinant: $\sum_{Y \subseteq [[M]]} \det(L_Y) = \det(L + I)$ (Kulesza et al., 2012, Theorem 2.1). Hence, Pr(Y) = det(L_Y) / det(L + I). We will use P_L to denote this distribution. For intuition about the kernel parameters, notice that the probabilities of singletons {i} and {j} are proportional to L_{ii} and L_{jj}, respectively. Hence, it is common to think of L's diagonal as representing item qualities. The probability of a pair {i, j} is proportional to det(L_{{i,j}}) = L_{ii}L_{jj} − L_{ij}L_{ji}. Thus, if −L_{ij}L_{ji} < 0, this indicates i and j interact negatively. Similarly, if −L_{ij}L_{ji} > 0, then i and j interact positively. Therefore, off-diagonal terms determine item interactions. (The vague term "interactions" can be replaced by the more precise term "correlations" if we consider the DPP's marginal kernel instead; see Gartrell et al. (2019, Section 2.1) for an extensive discussion.) In order to ensure that P_L defines a probability distribution, all principal minors of L must be non-negative: det(L_Y) ≥ 0. Matrices that satisfy this property are called P_0-matrices (Fang, 1989, Definition 1). There is no known generative method or matrix decomposition that fully covers the space of all P_0 matrices, although there are many that partially cover the space (Tsatsomeros, 2004). One common partial solution is to use a decomposition that covers the space of symmetric P_0 matrices. By restricting to the space of symmetric matrices, one can exploit the fact that L ∈ P_0 if L is positive semidefinite (PSD) (Prussing, 1986); recall that a matrix L ∈ R^{M×M} is defined to be PSD if and only if x^T L x ≥ 0 for all x ∈ R^M. Any symmetric PSD matrix can be written as the Gramian matrix of some set of vectors: L := V V^T, where V ∈ R^{M×K}. Hence, the V V^T decomposition provides an easy means of generating the entire space of symmetric P_0 matrices. It also has a nice intuitive interpretation: we can view the i-th row of V as a length-K feature vector describing item i. Unfortunately, the symmetry requirement limits the types of correlations that a DPP can capture. A symmetric model is able to capture only nonpositive interactions between items, since L_{ij}L_{ji} = L_{ij}^2 ≥ 0, whereas a nonsymmetric L can also capture positive correlations. (Again, see Gartrell et al. (2019, Section 2.1) for more intuition.) To expand coverage to nonsymmetric matrices in P_0, it is natural to consider nonsymmetric PSD matrices. In what follows, we denote by P_0^+ the set of all nonsymmetric (and symmetric) PSD matrices. Any nonsymmetric PSD matrix is in P_0 (Gartrell et al., 2019, Lemma 1), so P_0^+ ⊆ P_0.
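The following sketch makes the basic DPP quantities concrete: the subset log-probability log Pr(Y) = log det(L_Y) − log det(L + I), and the sign of the pair interaction −L_{ij}L_{ji} for a symmetric Gramian kernel. This is a toy illustration with a dense kernel, not the scalable computation developed later in the paper.

```python
import numpy as np

def dpp_log_prob(L, Y):
    """log Pr(Y) = log det(L_Y) - log det(L + I) for a DPP with kernel L."""
    I = np.eye(L.shape[0])
    sign_y, logdet_y = np.linalg.slogdet(L[np.ix_(Y, Y)])
    sign_z, logdet_z = np.linalg.slogdet(L + I)
    assert sign_y > 0 and sign_z > 0, "principal minors must be positive here"
    return logdet_y - logdet_z

# Small symmetric kernel L = V V^T: captures only nonpositive interactions.
rng = np.random.default_rng(0)
V = rng.normal(size=(5, 3))
L = V @ V.T
print("log Pr({0,2}) =", dpp_log_prob(L, [0, 2]))
# Pair interaction sign: -L_ij * L_ji <= 0 for any symmetric kernel.
print("-L[0,2]*L[2,0] =", -L[0, 2] * L[2, 0])
```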
However, unlike in the symmetric case, the set of nonsymmetric PSD matrices does not fully cover the set of nonsymmetric P_0 matrices. For example, consider

$$L = \begin{pmatrix} 1 & 5/3 \\ 1/2 & 1 \end{pmatrix},$$

with det(L_{{1}}), det(L_{{2}}), det(L_{{1,2}}) ≥ 0, but x^T L x < 0 for x = (−1, 1)^T. Still, nonsymmetric PSD matrices cover a large enough portion of the P_0 space to be useful in practice, as evidenced by the experiments of Gartrell et al. (2019). This work covered the P_0^+ space by using the following decomposition: L := S + A, with S := V V^T for V ∈ R^{M×K}, and A := B C^T − C B^T for B, C ∈ R^{M×K}. This decomposition makes use of the fact that any matrix L can be decomposed uniquely as the sum of a symmetric matrix S = (L + L^T)/2 and a skew-symmetric matrix A = (L − L^T)/2. All skew-symmetric matrices A are trivially PSD, since x^T A x = 0 for all x ∈ R^M. Hence, the L here is guaranteed to be PSD simply because its S uses the standard Gramian decomposition V V^T. In this work we will also only consider P_0^+, and leave to future work the problem of finding tractable ways to cover the rest of P_0. We propose a new decomposition of L that also covers the P_0^+ space, but allows for more scalable learning. As in prior work, our decomposition has inner dimension K that could be as large as M, but is usually much smaller in practice. Our algorithms work well for modest values of K. In cases where the natural K is larger (e.g., natural language processing), random projections can often be used to significantly reduce K (Gillenwater et al., 2012a). 3 NEW KERNEL DECOMPOSITION AND SCALABLE LEARNING. Prior work on NDPPs proposed a maximum likelihood estimation (MLE) algorithm (Gartrell et al., 2019). Due to that work's particular kernel decomposition, this algorithm had complexity cubic in the number of items M. Here, we propose a kernel decomposition that reduces this to linear in M. We begin by showing that our new decomposition covers the space of P_0^+ matrices. Before diving in, let us define

$$\Sigma_i := \begin{pmatrix} 0 & \lambda_i \\ -\lambda_i & 0 \end{pmatrix}$$

as shorthand for a 2 × 2 block matrix with zeros on-diagonal and opposite values off-diagonal. Then, our proposed decomposition is as follows:

$$L := S + A, \quad \text{with } S := V V^\top \text{ and } A := B C B^\top, \quad (1)$$

where V, B ∈ R^{M×K}, and C ∈ R^{K×K} is a block-diagonal matrix with some diagonal blocks of the form Σ_i, with λ_i > 0, and zeros elsewhere. The following lemma shows that this decomposition covers the space of P_0^+ matrices. Lemma 1. Let A ∈ R^{M×M} be a skew-symmetric matrix with rank ℓ ≤ M. Then, there exist B ∈ R^{M×ℓ} and positive numbers λ_1, ..., λ_{⌊ℓ/2⌋}, such that A = B C B^T, where C ∈ R^{ℓ×ℓ} is the block-diagonal matrix with ⌊ℓ/2⌋ diagonal blocks of size 2 given by Σ_i, i = 1, ..., ⌊ℓ/2⌋, and zero elsewhere. The proof of Lemma 1 and all subsequent results can be found in Appendix F. With this decomposition in hand, we now proceed to show that it can be used for linear-time MLE learning. To do so, we must show that the corresponding NDPP log-likelihood objective and gradient can be computed in time linear in M. Given a collection of n observed subsets {Y_1, ...
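Below is a sketch of the proposed decomposition in Eq. (1): build the block-diagonal C from λ_i values, form L = V V^T + B C B^T, and verify the two properties the text relies on, skew-symmetry of A = B C B^T and x^T L x ≥ 0. Array sizes and λ values are arbitrary.

```python
import numpy as np

def build_ndpp_kernel(V, B, lams):
    """L = V V^T + B C B^T with C block-diagonal of 2x2 blocks
    Sigma_i = [[0, lam_i], [-lam_i, 0]] (Eq. 1). Requires B with an even
    number of columns, two per lambda."""
    K = B.shape[1]
    assert K == 2 * len(lams)
    C = np.zeros((K, K))
    for i, lam in enumerate(lams):
        C[2 * i, 2 * i + 1] = lam
        C[2 * i + 1, 2 * i] = -lam
    return V @ V.T + B @ C @ B.T, C

rng = np.random.default_rng(0)
M, K = 6, 4
V, B = rng.normal(size=(M, K)), rng.normal(size=(M, K))
L, C = build_ndpp_kernel(V, B, lams=[0.7, 1.3])
A = B @ C @ B.T
print("skew-symmetric:", np.allclose(A, -A.T))   # so x^T A x = 0 for all x
x = rng.normal(size=M)
print("x^T L x >= 0:", x @ L @ x >= 0)           # L is nonsymmetric PSD (P0+)
```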
..., Y_n} composed of items from Y = [[M]], the full formulation of the regularized log-likelihood is:

$$\phi(V, B, C) = \frac{1}{n}\sum_{i=1}^{n} \log\det\left(V_{Y_i} V_{Y_i}^\top + B_{Y_i} C B_{Y_i}^\top\right) - \log\det\left(V V^\top + B C B^\top + I\right) - R(V, B), \quad (2)$$

where V_{Y_i} ∈ R^{|Y_i|×K} denotes a matrix composed of the rows of V that correspond to the items in Y_i. The regularization term, R(V, B), is defined as follows:

$$R(V, B) = \alpha \sum_{i=1}^{M} \frac{1}{\mu_i}\|v_i\|_2^2 + \beta \sum_{i=1}^{M} \frac{1}{\mu_i}\|b_i\|_2^2, \quad (3)$$

where μ_i counts the number of occurrences of item i in the training set, v_i and b_i are rows of V and B, respectively, and α, β > 0 are tunable hyperparameters. This regularization is similar to that of prior works (Gartrell et al., 2017; 2019). We omit regularization for C. Theorem 1 shows that computing the regularized log-likelihood and its gradient both have time complexity linear in M. The complexities also depend on K, the rank of the NDPP, and K′, the size of the largest observed subset in the data. For many real-world datasets we observe that K′ ≪ M, and we set K = K′. Hence, linearity in M means that we can efficiently perform learning for datasets with very large ground sets, which is impossible with the cubic-complexity L decomposition in prior work (Gartrell et al., 2019). Theorem 1. Given an NDPP with kernel L = V V^T + B C B^T, parameterized by V of rank K, B of rank K, and a K × K matrix C, we can compute the regularized log-likelihood (Eq. 2) and its gradient in O(MK² + K³ + nK′³) time, where K′ is the size of the largest of the n training subsets.
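To see how the normalizer term in Eq. (2) can be computed in time linear in M, note that L = [V B] · blockdiag(I_K, C) · [V B]^T, so the Weinstein–Aronszajn identity det(I_M + Z W Z^T) = det(I_{2K} + W Z^T Z) reduces the M × M determinant to a 2K × 2K one. The sketch below checks this against the direct computation; it is our illustration of one standard route to the complexity stated in Theorem 1, not necessarily the authors' exact derivation (see their Appendix F).

```python
import numpy as np

def log_normalizer(V, B, C):
    """log det(V V^T + B C B^T + I_M) via the 2K x 2K reduction
    det(I_M + Z W Z^T) = det(I_{2K} + W Z^T Z), with Z = [V B] and
    W = blockdiag(I_K, C). The only M-dependent work is Z^T Z, an
    O(M K^2) computation."""
    K = V.shape[1]
    Z = np.hstack([V, B])                                  # M x 2K
    W = np.block([[np.eye(K), np.zeros((K, K))],
                  [np.zeros((K, K)), C]])                  # 2K x 2K
    small = np.eye(2 * K) + W @ (Z.T @ Z)                  # 2K x 2K
    sign, logdet = np.linalg.slogdet(small)
    return logdet

# Agreement check against the direct M x M computation.
rng = np.random.default_rng(0)
M, K = 50, 4
V, B = rng.normal(size=(M, K)), rng.normal(size=(M, K))
C = np.kron(np.eye(K // 2), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # Sigma blocks
L = V @ V.T + B @ C @ B.T
direct = np.linalg.slogdet(L + np.eye(M))[1]
print(np.isclose(log_normalizer(V, B, C), direct))        # True
```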
Nonsymmetric determinantal point processes (NDPPs) received some attention recently because they allow modeling of both negative and positive correlations between items. This paper developed scalable learning and MAP inference algorithms with space and time complexity linear in ground set size, which is a huge improvement compared to previous approaches. Experimental results show that the algorithms scale significantly better, and can roughly match the predictive performance of prior work.
SP:eb3a644606a97c248271782c2d9c83e699a329b9
This paper proposes a decomposition for nonsymmetric determinantal point process (NDPP) kernels (M×M) which reduces the storage and runtime requirements to linear in the cardinality M. Additionally, the authors derive an NDPP maximum a posteriori (MAP) inference algorithm that applies to both their proposed kernel decomposition and that of prior work. In their experiments, they demonstrate both kernel learning and MAP inference for subset selection on real-world datasets.
SP:eb3a644606a97c248271782c2d9c83e699a329b9
Improving Zero-Shot Voice Style Transfer via Disentangled Representation Learning
1 INTRODUCTION. Style transfer, which automatically converts a data instance into a target style while preserving its content information, has attracted considerable attention in various machine learning domains, including computer vision (Gatys et al., 2016; Luan et al., 2017; Huang & Belongie, 2017), video processing (Huang et al., 2017; Chen et al., 2017), and natural language processing (Shen et al., 2017; Yang et al., 2018; Lample et al., 2019; Cheng et al., 2020b). In speech processing, style transfer was earlier recognized as voice conversion (VC) (Muda et al., 2010), which converts one speaker's utterance as if it were from another speaker but with the same semantic meaning. Voice style transfer (VST) has received long-term research interest due to its potential for applications in security (Sisman et al., 2018), medicine (Nakamura et al., 2006), entertainment (Villavicencio & Bonada, 2010) and education (Mohammadi & Kain, 2017), among others. Although widely investigated, VST remains challenging when applied to more general application scenarios. Most of the traditional VST methods require parallel training data, i.e., paired voices from two speakers uttering the same sentence. This constraint limits the application of such models in the real world, where data are often not pair-wise available. Among the few existing models that address non-parallel data (Hsu et al., 2016; Lee & Wu, 2006; Godoy et al., 2011), most methods cannot handle many-to-many transfer (Saito et al., 2018; Kaneko & Kameoka, 2018; Kameoka et al., 2018), which prevents them from converting multiple source voices to multiple target speaker styles. Even among the few non-parallel many-to-many transfer models, to the best of our knowledge, only two models (Qian et al., 2019; Chou & Lee, 2019) allow zero-shot transfer, i.e., conversion from/to newly-coming speakers (unseen during training) without re-training the model. The only two zero-shot VST models (AUTOVC (Qian et al., 2019) and AdaIN-VC (Chou & Lee, 2019)) share a common weakness. Both methods construct encoder-decoder frameworks, which extract the style and the content information into style and content embeddings, and generate a voice sample by combining a style embedding and a content embedding through the decoder. With the combination of the source content embedding and the target style embedding, the models generate the transferred voice, based only on source and target voice samples. AUTOVC (Qian et al., 2019) uses a GE2E (Wan et al., 2018) pre-trained style encoder to ensure rich speaker-related information in style embeddings. However, AUTOVC has no regularizer to guarantee that the content encoder does not encode any style information. AdaIN-VC (Chou & Lee, 2019) applies instance normalization (Ulyanov et al., 2016) to the feature map of content representations, which helps to eliminate the style information from content embeddings. However, AdaIN-VC fails to prevent content information from being revealed in the style embeddings. Neither method can ensure that the style and content embeddings are disentangled, with no information revealed from each other. With information-theoretic guidance, we propose a disentangled-representation-learning method to enhance the encoder-decoder zero-shot VST framework, for both style and content information preservation.
We call the proposed method Information-theoretic Disentangled Embedding for Voice Conversion (IDE-VC). Our model successfully induces the style and content of voices into independent representation spaces by minimizing the mutual information between style and content embeddings. We also derive two new multi-group mutual information lower bounds to further improve the representativeness of the latent embeddings. Experiments demonstrate that our method outperforms previous works under both many-to-many and zero-shot transfer setups on two objective metrics and two subjective metrics. 2 BACKGROUND. In information theory, mutual information (MI) is a crucial concept that measures the dependence between two random variables. Mathematically, the MI between two variables x and y is

$$I(x; y) := \mathbb{E}_{p(x,y)}\left[\log \frac{p(x, y)}{p(x)\,p(y)}\right], \quad (1)$$

where p(x) and p(y) are marginal distributions of x and y, and p(x, y) is the joint distribution. Recently, MI has attracted considerable interest in machine learning as a criterion to minimize or maximize the dependence between different parts of a model (Chen et al., 2016; Alemi et al., 2016; Hjelm et al., 2018; Veličković et al., 2018; Song et al., 2019). However, the calculation of exact MI values is challenging in practice, since the closed form of the joint distribution p(x, y) in equation (1) is generally unknown. To solve this problem, several MI estimators have been proposed. For MI maximization tasks, Nguyen, Wainwright and Jordan (NWJ) (Nguyen et al., 2010) propose a lower bound by representing (1) as an f-divergence (Moon & Hero, 2014):

$$I_{\text{NWJ}} := \mathbb{E}_{p(x,y)}[f(x, y)] - e^{-1}\,\mathbb{E}_{p(x)p(y)}[e^{f(x,y)}], \quad (2)$$

with a score function f(x, y). Another widely-used sample-based MI lower bound is InfoNCE (Oord et al., 2018), which is derived with Noise Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010). With sample pairs {(x_i, y_i)}_{i=1}^{N} drawn from the joint distribution p(x, y), the InfoNCE lower bound is defined as

$$I_{\text{NCE}} := \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{f(x_i, y_i)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(x_i, y_j)}}\right]. \quad (3)$$

For MI minimization tasks, Cheng et al. (2020a) proposed a contrastively learned upper bound that requires the conditional distribution p(x|y):

$$I(x; y) \leq \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N}\left[\log p(x_i \mid y_i) - \frac{1}{N}\sum_{j=1}^{N} \log p(x_j \mid y_i)\right]\right], \quad (4)$$

where the MI is bounded by the log-ratio of the conditional distribution p(x|y) between positive and negative sample pairs. In the following, we derive our information-theoretic disentangled representation learning framework for voice style transfer based on the MI estimators described above. 3 PROPOSED MODEL. We assume access to N audio (voice) recordings from M speakers, where speaker u has N_u voice samples X_u = {x_{ui}}_{i=1}^{N_u}. The proposed approach encodes each voice input x ∈ X = ∪_{u=1}^{M} X_u into a speaker-related (style) embedding s = E_s(x) and a content-related embedding c = E_c(x), using respectively a style encoder E_s(·) and a content encoder E_c(·). To transfer a source x_{ui} from speaker u to the target style of the voice of speaker v, x_{vj}, we combine the content embedding c_{ui} = E_c(x_{ui}) and the style embedding s_{vj} = E_s(x_{vj}) to generate the transferred voice x̂_{u→v,i} = D(s_{vj}, c_{ui}) with a decoder D(s, c).
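For concreteness, the two-step transfer process at the end of this section can be sketched as follows, with linear maps standing in for the neural style encoder E_s, content encoder E_c, and decoder D; all dimensions and architectures here are placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, ds, dc = 40, 80, 16, 32    # frames, mel bins, style dim, content dim

# Stand-in linear encoders/decoder; in IDE-VC these are neural networks.
Ws = rng.normal(size=(ds, F)) / np.sqrt(F)
Wc = rng.normal(size=(dc, F)) / np.sqrt(F)
Wd = rng.normal(size=(F, ds + dc)) / np.sqrt(ds + dc)

def E_s(x):                      # style encoder: utterance -> one vector
    return np.tanh(Ws @ x.mean(axis=0))

def E_c(x):                      # content encoder: frame-wise codes
    return np.tanh(x @ Wc.T)     # (T, dc)

def D(s, c):                     # decoder: combine style + content
    return np.concatenate([np.tile(s, (c.shape[0], 1)), c], axis=1) @ Wd.T

x_u = rng.normal(size=(T, F))    # source utterance (mel-spectrogram)
x_v = rng.normal(size=(T, F))    # target-speaker utterance
x_hat = D(E_s(x_v), E_c(x_u))    # zero-shot transfer: target style, source content
print(x_hat.shape)               # (40, 80)
```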
To implement this two-step transfer process, we introduce a novel mutual information (MI)-based learning objective that induces the style embedding s and content embedding c into independent representation spaces (i.e., ideally, s contains rich style information of x with no content information, and vice versa). In the following, we first describe our MI-based training objective in Section 3.1, and then discuss the practical estimation of the objective in Sections 3.2 and 3.3. 3.1 MI-BASED DISENTANGLING OBJECTIVE. From an information-theoretic perspective, to learn a representative latent embedding (s, c), it is desirable to maximize the mutual information between the embedding pair (s, c) and the input x. Meanwhile, the style embedding s and the content embedding c are desired to be independent, so that we can control the style transfer process with different style and content attributes. Therefore, we minimize the mutual information I(s; c) to disentangle the style embedding and content embedding spaces. Consequently, our overall disentangled-representation-learning objective seeks to minimize

$$\mathcal{L} = I(s; c) - I(x; s, c) = I(s; c) - I(x; c \mid s) - I(x; s). \quad (5)$$

As discussed in Locatello et al. (2019), without inductive bias for supervision, the learned representation can be meaningless. To address this problem, we use the speaker identity u as a variable with values {1, ..., M} to learn a representative style embedding s for speaker-related attributes. Noting that the process from speaker u to his/her voice x_{ui} to the style embedding s_{ui} (as u → x → s) is a Markov chain, we conclude I(s; x) ≥ I(s; u) based on the MI data-processing inequality (Cover & Thomas, 2012) (as stated in the Supplementary Material). Therefore, we replace I(s; x) in L with I(s; u) and minimize an upper bound instead:

$$\bar{\mathcal{L}} = I(s; c) - I(x; c \mid s) - I(u; s) \geq I(s; c) - I(x; c \mid s) - I(x; s). \quad (6)$$

In practice, calculating the MI is challenging, as we typically only have access to samples and lack the required distributions (Chen et al., 2016). To solve this problem, below we provide several MI estimates for the objective terms I(s; c), I(x; c|s) and I(u; s). 3.2 MI LOWER BOUND ESTIMATION. To maximize I(u; s), we derive the following multi-group MI lower bound (Theorem 3.1), based on the NWJ bound developed in Nguyen et al. (2010). The detailed proof is provided in the Supplementary Material. Let μ_v^{(−ui)} = μ_v represent the mean of all style embeddings in group X_v, constituting the style centroid of speaker v; μ_u^{(−ui)} is the mean of all style embeddings in group X_u except data point x_{ui}, representing a leave-x_{ui}-out style centroid of speaker u. Intuitively, we minimize ‖s_{ui} − μ_u^{(−ui)}‖ to encourage the style embedding of voice x_{ui} to be more similar to the style centroid of speaker u, while maximizing ‖s_{ui} − μ_v^{(−ui)}‖ to enlarge the margin between s_{ui} and the other speakers' style centroids μ_v. We denote the right-hand side of (7) as Î_1. Theorem 3.1. Let μ_v^{(−ui)} = (1/N_v) Σ_{k=1}^{N_v} s_{vk} if u ≠ v, and μ_u^{(−ui)} = (1/(N_u − 1)) Σ_{j≠i} s_{uj}. Then,

$$I(u; s) \geq \mathbb{E}\left[\frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\left[-\|s_{ui} - \mu_u^{(-ui)}\|^2 - \frac{e^{-1}}{N}\sum_{v=1}^{M} N_v \exp\left\{-\|s_{ui} - \mu_v^{(-ui)}\|^2\right\}\right]\right]. \quad (7)$$

To maximize I(x; c|s), we derive a conditional mutual information lower bound below: Theorem 3.2.
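A direct, loop-based sketch of the bound Î_1 in Eq. (7), with leave-one-out centroids for the speaker's own group and full-group centroids for the others. A real training implementation would vectorize this and backpropagate through the style encoder; this version only mirrors the formula.

```python
import numpy as np

def style_mi_lower_bound(groups):
    """\\hat{I}_1: the right-hand side of Eq. (7). `groups` is a list of
    (N_u, d) arrays of style embeddings, one per speaker."""
    N = sum(len(g) for g in groups)
    total = 0.0
    for u, g_u in enumerate(groups):
        n_u = len(g_u)
        for i, s_ui in enumerate(g_u):
            # Leave-one-out centroid of the speaker's own group.
            mu_u = (g_u.sum(axis=0) - s_ui) / (n_u - 1)
            total -= np.sum((s_ui - mu_u) ** 2)
            # Contrastive term over all speakers' centroids.
            contrast = 0.0
            for v, g_v in enumerate(groups):
                mu_v = mu_u if v == u else g_v.mean(axis=0)
                contrast += len(g_v) * np.exp(-np.sum((s_ui - mu_v) ** 2))
            total -= np.exp(-1.0) / N * contrast
    return total / N

rng = np.random.default_rng(0)
groups = [rng.normal(loc=u, size=(8, 4)) for u in range(3)]  # 3 toy speakers
print(style_mi_lower_bound(groups))
```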
Assume that given s = s_u, samples {(x_{ui}, c_{ui})}_{i=1}^{N_u} are observed. With a variational distribution q_φ(x|s, c), we have I(x; c|s) ≥ E[Î], where

$$\hat{I} = \frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\left[\log q_\phi(x_{ui} \mid c_{ui}, s_u) - \log\left(\frac{1}{N_u}\sum_{j=1}^{N_u} q_\phi(x_{uj} \mid c_{ui}, s_u)\right)\right]. \quad (8)$$

Based on the criterion for s in equation (7), a well-learned style encoder E_s pulls all style embeddings s_{ui} from speaker u together. Suppose s_u is representative of the style embeddings of set X_u. If we parameterize the distribution q_φ(x|s, c) ∝ exp(−‖x − D(s, c)‖²) with decoder D(s, c), then based on Theorem 3.2, we can estimate the lower bound of I(x; c|s) with the following objective:

$$\hat{I}_2 := \frac{1}{N}\sum_{u=1}^{M}\sum_{i=1}^{N_u}\left[-\|x_{ui} - D(c_{ui}, s_u)\|^2 - \log\left(\frac{1}{N_u}\sum_{j=1}^{N_u}\exp\left\{-\|x_{uj} - D(c_{ui}, s_u)\|^2\right\}\right)\right].$$

When maximizing Î_2, for speaker u with his/her given voice style s_u, we encourage the content embedding c_{ui} to reconstruct the original voice x_{ui} well, with small ‖x_{ui} − D(c_{ui}, s_u)‖. Additionally, the distance ‖x_{uj} − D(c_{ui}, s_u)‖ to the other voices is enlarged, ensuring that c_{ui} does not contain information to reconstruct other voices x_{uj} from speaker u. With Î_2, the correlation between x_{ui} and c_{ui} is amplified, which improves c_{ui} in preserving the content information.
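Likewise, Î_2 can be sketched as a contrastive reconstruction objective. Below it is computed for a single speaker u with a linear stand-in for the decoder D(s, c); the per-speaker normalization and toy data are assumptions for illustration.

```python
import numpy as np

def content_mi_lower_bound(X_u, C_u, s_u, decoder):
    """\\hat{I}_2 restricted to one speaker u: contrastive reconstruction
    objective with q_phi(x|s,c) proportional to exp(-||x - D(s,c)||^2).
    `decoder` is a stand-in for D(s, c); X_u and C_u hold the speaker's
    voices and content embeddings row-wise."""
    n_u = len(X_u)
    total = 0.0
    for i in range(n_u):
        recon = decoder(s_u, C_u[i])
        pos = -np.sum((X_u[i] - recon) ** 2)          # reconstruct own voice
        negs = [-np.sum((X_u[j] - recon) ** 2) for j in range(n_u)]
        total += pos - np.log(np.mean(np.exp(negs)))  # contrast other voices
    return total / n_u

rng = np.random.default_rng(0)
X_u = rng.normal(size=(6, 10))                        # 6 voices of speaker u
C_u = X_u[:, :4].copy()                               # toy content embeddings
s_u = rng.normal(size=3)                              # speaker style embedding
W = rng.normal(size=(10, 7)) / np.sqrt(7)             # stand-in decoder weights
decoder = lambda s, c: W @ np.concatenate([s, c])
print(content_mi_lower_bound(X_u, C_u, s_u, decoder))
```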
This paper proposes a zero-shot voice style transfer (VST) algorithm that explicitly controls the disentanglement between content information and style information. Experiments show that the proposed algorithm achieves significant improvements over existing state-of-the-art VST algorithms. There are two major strengths of this paper. First, it motivates the algorithm design from an information-theoretic perspective. Second, the performance improvement is significant.
SP:86d37b08b4c0ab21d139c57bbe3b9e5535eeb3f9