Few-Round Learning for Federated Learning
1 Introduction

Today, valuable data are increasingly collected at distributed edge nodes such as mobile phones, wearable client devices, and smart vehicles or drones. Directly sending these local data to a central server for model training raises significant privacy concerns. To address this issue, an emerging approach known as federated learning (FL) [13, 9, 1, 11, 20, 16, 15], in which uploading local data to the server is unnecessary, has been actively researched. In FL, a large group of distributed clients interested in solving the same task (e.g., classification over given categories of images) collaborate to train a single global model without sharing their data. While standard supervised learning uses some dataset D to find the model φ that minimizes a loss function f(φ, D), FL instead seeks the model φ that minimizes the average of the local losses f(φ, D_k), computed at each node k using local data D_k. The learning process typically starts from a randomly initialized or pretrained model and proceeds through iterative aggregation of the local model updates.

1.1 Background and Main Contributions

Motivation. Unfortunately, FL generally requires a large number of communication rounds between the server and the clients for model exchange to achieve a desired level of performance. This makes the implementation of FL a significant challenge in bandwidth-limited or time-sensitive applications. Especially in real-time applications (e.g., connected vehicles or drones), where the model must quickly adapt to dynamically evolving environments, the need for many communication rounds becomes a major bottleneck.

Goal and challenge. To tackle this problem from the service provider's perspective, we aim to prepare an initial model that can quickly adapt to any group of clients (each focusing on its own task) within only a few rounds of FL. The key challenge is that the task of the group conducting FL (i.e., the downstream task for which the prepared model will be used) is generally unknown when the service provider prepares the initial model. In the context of classification, different tasks mean classification over different sets of classes. For example, classifying diseases A, B, C (task 1) is a different task from classifying diseases D, E, F (task 2). Since the group conducting FL for the downstream task can include classes unseen during preparation, existing FL approaches cannot tackle this problem.

Key idea. Our key idea is to adopt meta-learning (which enables reliable prediction even when the task at inference was unseen during meta-training) to prepare an initial model that enables few-round FL. In other words, we aim to meta-train an initial model for few-round downstream FL. Once meta-training is over, the service provider offers the trained model to clients who want to solve a common task by collaborating through a quick few rounds of FL. These clients may or may not have participated in the earlier meta-training phase, and their classification task is generally unseen during meta-training. A high-level description of our idea is depicted in Fig. 1(b). Given a small target value R, we take an episodic training approach to enable R-round FL for any group of clients.
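To make the episodic construction concrete, the following is a minimal single-process sketch of one meta-training episode, assuming FedAvg-style weight averaging for the R inner rounds and a first-order meta-update; all names (run_episode, support/query loaders, and so on) are illustrative, not taken from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def run_episode(init_model, clients, R, inner_lr, meta_opt):
    """One meta-training episode: simulate R federated rounds, then meta-update.

    clients: list of (support_loader, query_loader) pairs sampled for this episode.
    Assumes floating-point parameters; uses a first-order approximation, so the
    paper's exact meta-update rule may differ.
    """
    model = copy.deepcopy(init_model)
    for _ in range(R):                                  # R rounds of simulated FL
        states = []
        for support, _ in clients:
            local = copy.deepcopy(model)
            opt = torch.optim.SGD(local.parameters(), lr=inner_lr)
            for x, y in support:                        # local update on client data
                opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                opt.step()
            states.append(local.state_dict())
        avg = {k: torch.stack([s[k] for s in states]).mean(0)   # FedAvg aggregation
               for k in states[0]}
        model.load_state_dict(avg)
    meta_loss = sum(F.cross_entropy(model(x), y)        # evaluate the R-round model
                    for _, query in clients for x, y in query)
    grads = torch.autograd.grad(meta_loss, tuple(model.parameters()))
    meta_opt.zero_grad()
    for p, g in zip(init_model.parameters(), grads):    # first-order meta-gradient
        p.grad = g.clone()
    meta_opt.step()
```

Repeating run_episode over freshly sampled groups of clients (and hence freshly sampled class subsets) plays the same role that varying episodes play in few-shot learning.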
In essence, we find the initial model φ that minimizes the average of local losses f(θ_R(φ), D_k), where θ_R(φ) is the model obtained from φ through R rounds of FL among future clients in the deployment stage. Despite the high practical significance of this problem formulation, to the best of our knowledge, this is the first work to propose a meta-learning strategy geared to few-round FL. It is also worth mentioning that model preparation is not a real-time requirement and can often be done when bandwidth demands are sparse.

Comparison with personalized FL. We stress that our idea has a different purpose and approach than the recent line of work on federated meta-learning [12, 4], which initializes a model for personalized optimization at local clients (see Fig. 1(a)). The goal of these approaches is to obtain a personalized local model at each client within a few gradient-descent steps in the deployment stage. To achieve this goal, in the preparation stage, a few steps of local updates and a meta-update are performed at each participant independently (with its own local data), and FL (or aggregation) is adopted just to take advantage of the data of various participants: these approaches seek the φ that minimizes the average of local losses f(θ_k(φ), D_k), where θ_k(φ) is the local model updated from φ through a number of gradient steps using local data D_k. In contrast to personalized FL, which focuses on local client models in the deployment stage, our few-round learning inherits the ability of FL at deployment to obtain a global model. Hence, for our scheme, it is natural to adopt FL in the preparation stage to mimic the R-round FL scenario at deployment; in the preparation stage, the meta-update is performed at each participant after the collaborative R FL rounds. To sum up, our approach aims to prepare an initial model that leads to a global model within a "few rounds of FL", while personalized FL aims for an initial model leading to personalized models within "a few steps of local updates" based only on local data. These are two distinct problems with distinct solutions.

Main contributions. Technically, we utilize a model-agnostic meta-learning (MAML) approach to prepare the initial model via an episodic training strategy. While directly applying MAML independently to each local model leads to existing solutions for personalized FL [12, 4], in our approach R rounds of local updates and aggregations are first performed in each episode before the meta-update. This episode construction, unique relative to personalized FL methods, mimics the deployment stage, where actual inference is preceded by an R-round FL procedure. Another key ingredient in our solution is to adopt prototype aggregation in each FL round to construct global prototypes that serve as better class representatives than the locally computed prototypes when learning the embedding space. This strategy is especially effective when a non-IID (not independent and identically distributed) data distribution across clients tends to induce a significantly biased model after local updates. The global prototypes serve as prior knowledge, a form of regularization, and prevent local models from overfitting to the local data. Moreover, the global prototypes (reflecting all classes across clients) can help the local models learn a more general embedding space.
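As a rough sketch of the prototype machinery just described, class centroids can be computed locally on each client and combined into global prototypes on the server; the count-weighted averaging below is our assumption about how the aggregation could be done.

```python
import torch
from collections import defaultdict

def local_prototypes(embedder, loader):
    """Class-wise centroids of embedder outputs on one client's data."""
    sums, counts = defaultdict(float), defaultdict(int)
    with torch.no_grad():
        for x, y in loader:
            for zi, yi in zip(embedder(x), y):
                sums[int(yi)] = sums[int(yi)] + zi
                counts[int(yi)] += 1
    return {c: sums[c] / counts[c] for c in sums}, dict(counts)

def aggregate_prototypes(client_protos):
    """Server side: count-weighted average of local prototypes per class.

    client_protos: list of (protos, counts) pairs from local_prototypes.
    Classes missing on a client simply do not contribute (the non-IID case).
    """
    num, den = defaultdict(float), defaultdict(int)
    for protos, counts in client_protos:
        for c, proto in protos.items():
            num[c] = num[c] + counts[c] * proto
            den[c] += counts[c]
    return {c: num[c] / den[c] for c in num}
```

The resulting global prototypes can be broadcast back with the averaged weights in each round, so that local updates are regularized toward class representatives computed over all clients rather than over one client's skewed local classes.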
We call this approach the global prototype-assisted learning (GPAL) strategy. Our main contributions are summarized as follows:
• We formulate a new problem of high practical significance, namely few-round learning, where the goal is to prepare an initial model that can quickly adapt to any group of clients within only a few rounds of FL.
• We propose a meta-training algorithm specifically geared to R rounds of FL followed by inference, to be performed by a group of clients on a possibly unseen task.
• We guarantee convergence of our meta-training algorithm via theoretical analysis.
• We show via experiments that our scheme outperforms existing pretraining approaches, including fine-tuning via FedAvg and personalized FL, in both IID and non-IID scenarios.

1.2 Related Works

Few-shot learning. Few-shot learning is an instantiation of meta-learning. In the context of image classification, few-shot learning typically involves episodic training, where each episode of training data is arranged into a few training (support) images and validation (query) samples to mimic inference that uses only a few examples [19]. Through repeated exposure to a series of varying episodes with different sets of image classes, the model learns to handle new tasks (classification against unseen classes) each time. Two widely known few-shot learning methods with different philosophical twists, both conceptually relevant to the present work, are MAML [5] and Prototypical Networks [18]. MAML attempts to generate an initial model from which different models targeting different tasks can be obtained quickly via just a few gradient updates. The idea is that the initial model is meta-trained to develop an internal representation that is close, in some sense, to a variety of unseen tasks. Prototypical Networks, on the other hand, learn an embedding space in which model outputs cluster around class prototypes, the class-specific centroids of the embedder outputs. With episodic training, simple Prototypical Networks are surprisingly effective at learning an inductive bias for successful generalization to new tasks. We stress that our few-round learning scheme (which targets a few global rounds of FL) has a different purpose and technical approach than existing work on few-shot learning (which targets a few data samples). Nevertheless, we take advantage of both MAML and Prototypical Networks to achieve our own goal: we adopt MAML in updating the initial model specifically geared to R-round FL, and adopt both prototype aggregation and prototype-assisted learning to learn a general embedding space and successfully handle the non-IID issue in FL.

Federated meta-learning. Recent research has focused on improving model personalization via federated meta-learning [12, 3, 4, 7]. The common goal of these works is to generate an initial model from which each new client can find its own optimized model via a few local gradient steps using only its own data. In these works, meta-learning employed during federated learning is intended to enable each client to handle previously unseen tasks, in the spirit of the MAML of [5]. User-specific next-word prediction on individual smartphones, for example, is a possible application. Compared to this line of work, we focus on creating an initial model that leads to a high-accuracy global model rather than personalized models.
In this way, we seek to take advantage of the greater variety and volume of data made available through collaborative learning across a group of distributed nodes. A clear example is the diagnosis of a broader class of diseases, made possible by collaborative training across more examples contributed by a larger group of individuals. Personalized FL methods (e.g., [12, 4]) are at a particular disadvantage in non-IID settings, where each client necessarily lacks a sufficient variety of data. The results are reported in Section 4.

One-shot FL. Another line of work has recently focused on one-shot FL, where the goal is to train a global model with just one communication round between the server and the clients. The authors of [6] proposed an ensemble method to choose reliable client-specific models from the given clients. In the work of [17], local clients send XOR-encoded MNIST image data to the server, and the server decodes it to train the global model. While the server needs certain data in advance to decode the received results, the XOR operation can serve as data augmentation while preserving privacy. In the fusion learning of [8], each local client uploads both its model parameters and its distribution parameters to the server. The server generates artificial data samples from the distribution parameters to train a global model. When the data get complex, however, it is not clear whether conversion into a simple distribution remains reliable. Compared to existing works on one-shot FL, which start from a randomly initialized model, the key difference of our method is the use of meta-learning to prepare an initial model that can adapt to the unseen tasks of individual groups of clients within R rounds of FL. The advantage of our scheme over these methods is shown in Section 4.
This paper studied the combination of federated learning tasks in a meta-learning setting. In particular, with the assistance of a pre-trained meta-model, a new FL model's training can be completed within a limited number of communication rounds. The approach is inspired by the meta-learning methods used in the few-shot learning scenario. The paper proposed a few-round learning (FRL) algorithm and designed a global prototype-assisted learning (GPAL) scheme to assist training. Combining meta-learning with federated learning is an interesting topic.
SP:ecc41670e8132da6dd5fdc3e75405c3060733512
Cross-Node Federated Graph Neural Network for Spatio-Temporal Data Modeling
1 INTRODUCTION

Modeling the dynamics of spatio-temporal data generated by networks of edge devices or nodes (e.g., sensors, wearable devices, and Internet of Things (IoT) devices) is critical for various applications including traffic flow prediction (Li et al., 2018; Yu et al., 2018), forecasting (Seo et al., 2019; Azencot et al., 2020), and user activity detection (Yan et al., 2018; Liu et al., 2020). While existing works on spatio-temporal dynamics modeling (Battaglia et al., 2016; Kipf et al., 2018; Battaglia et al., 2018) assume that the model is trained on centralized data gathered from all devices, the volume of data generated at these edge devices precludes such centralized processing and calls for decentralized processing, where computation at the edge can significantly improve latency. In addition, for spatio-temporal forecasting, the edge devices need to leverage complex inter-dependencies to improve prediction performance. Moreover, with increasing concerns about data privacy and access restrictions due to existing licensing agreements, it is critical for spatio-temporal modeling to use decentralized data while still leveraging the underlying relationships for improved performance.

Although recent work in federated learning (FL) (Kairouz et al., 2019) provides a solution for training a model with decentralized data on multiple devices, these works either do not consider the inherent spatio-temporal dependencies (McMahan et al., 2017; Li et al., 2020b; Karimireddy et al., 2020) or model them only implicitly by imposing the graph structure as regularization on model weights (Smith et al., 2017). The latter suffers from the limitation of regularization-based methods, namely the assumption that graphs only encode similarity of nodes (Kipf & Welling, 2017), and cannot operate in settings where only a fraction of devices are observed during training (the inductive learning setting). As a result, there is a need for a spatio-temporal modeling architecture that enables reliable computation on the edge while keeping the data decentralized.

To this end, leveraging recent work on federated learning (Kairouz et al., 2019), we introduce the cross-node federated learning requirement, which ensures that data generated locally at a node remains decentralized. Specifically, our architecture, Cross-Node Federated Graph Neural Network (CNFGNN), aims to effectively model the complex spatio-temporal dependencies under the cross-node federated learning constraint. For this, CNFGNN decomposes the modeling of temporal and spatial dependencies, using an encoder-decoder model on each device to extract temporal features from local data and a Graph Neural Network (GNN) based model on the server to capture spatial dependencies among devices.

Compared to existing federated learning techniques that rely on regularization to incorporate spatial relationships, CNFGNN leverages an explicit graph structure through a GNN-based architecture, which leads to performance gains. However, the federated learning (data sharing) constraint means that the GNN cannot be trained in a centralized manner, since each node can only access the data stored on itself. To address this, CNFGNN employs Split Learning (Singh et al., 2019) to train the spatial and temporal modules.
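A single-process simulation of this split might look like the sketch below, where only encoded features (and, in the backward pass, their gradients) cross the node-server boundary; the module and variable names are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def cnfgnn_round(encoders, server_gnn, decoders, batches, graph):
    """One forward/backward pass of the split model, simulated in one process.

    batches: per-node (x, y) pairs with x of shape (batch, m, D) and
    y of shape (batch, n, D). Raw x never leaves its node in deployment;
    only the encoded features h (the "smashed" data) are uploaded.
    """
    h_local = [enc(x) for enc, (x, _) in zip(encoders, batches)]   # on each node
    h = torch.stack(h_local)                    # server receives features, not data
    h_graph = server_gnn(h, graph)              # server: propagate along the graph
    losses = []
    for i, (dec, (x, y)) in enumerate(zip(decoders, batches)):     # back on nodes
        y_hat = dec(x[:, -1], h_local[i], h_graph[i], n_steps=y.size(1))
        losses.append(F.mse_loss(y_hat, y))
    loss = torch.stack(losses).mean()
    loss.backward()     # gradients of the uploaded features flow back to encoders
    return loss
```

In a real deployment the two halves of this function run on different machines, and the stack/backward steps are replaced by the upload and download of features and gradients.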
Further, to alleviate the high communication cost incurred by Split Learning, we propose an alternating-optimization-based training procedure for these modules, which incurs only half the communication overhead of a comparable Split Learning architecture. We also use Federated Averaging (FedAvg) (McMahan et al., 2017) to train a shared temporal feature extractor for all nodes, which improves empirical performance. Our main contributions are as follows:
1. We propose Cross-Node Federated Graph Neural Network (CNFGNN), a GNN-based federated learning architecture that captures complex spatio-temporal relationships among multiple nodes while ensuring that locally generated data remains decentralized, at no extra computation cost on the edge devices.
2. Our modeling and training procedure enables GNN-based architectures to be used in federated learning settings. We achieve this by disentangling the modeling of local temporal dynamics on edge devices from spatial dynamics on the central server, and by leveraging an alternating-optimization-based procedure that updates the spatial and temporal modules using Split Learning and Federated Averaging to enable effective GNN-based federated learning.
3. We demonstrate that CNFGNN achieves the best prediction performance (in both transductive and inductive settings) at no extra computation cost on edge devices and with modest communication cost, compared to related techniques on a traffic flow prediction task.

2 RELATED WORK

Our method derives elements from graph neural networks, federated learning, and privacy-preserving graph learning; we now discuss related work in these areas in relation to ours.

Graph Neural Networks (GNNs). GNNs have shown superior performance on various learning tasks with graph-structured data, including graph embedding (Hamilton et al., 2017), node classification (Kipf & Welling, 2017), spatio-temporal data modeling (Yan et al., 2018; Li et al., 2018; Yu et al., 2018), and multi-agent trajectory prediction (Battaglia et al., 2016; Kipf et al., 2018; Li et al., 2020a). Recent GNN models (Hamilton et al., 2017; Ying et al., 2018; You et al., 2019; Huang et al., 2018) also incorporate sampling strategies and are able to scale to large graphs. While GNNs benefit from a strong inductive bias (Battaglia et al., 2018; Xu et al., 2019), most works require centralized data during training and inference.

Federated Learning (FL). Federated learning is a machine learning setting in which multiple clients collaboratively train a model with decentralized training data (Kairouz et al., 2019). It requires that the raw data of each client be stored locally without any exchange or transfer. However, decentralized training data comes at the cost of reduced utility, due to the heterogeneous data distributions across clients and the lack of information exchange among them. Various optimization algorithms have been developed for federated learning on non-IID and unbalanced data (McMahan et al., 2017; Li et al., 2020b; Karimireddy et al., 2020). Smith et al. (2017) propose a multi-task learning framework that captures relationships among clients' data. While the above works mitigate the problem of missing neighbor information to some extent, they are not as effective as GNN models and still suffer from the absence of feature exchange and aggregation.

Alternating Optimization.
Alternating optimization is a popular choice in non-convex optimization (Agarwal et al., 2014; Arora et al., 2014; 2015; Jain & Kar, 2017). In the context of federated learning, Liang et al. (2020) use alternating optimization to learn a simple global model and reduce the number of communicated parameters, and He et al. (2020) use alternating optimization for knowledge distillation from server models to edge models. In our work, we use alternating optimization to jointly and effectively train the on-device modules and the server module, which capture temporal and spatial relationships respectively.

Privacy-Preserving Graph Learning. Suzumura et al. (2019) and Mei et al. (2019) use statistics of graph structures, instead of node information exchange and aggregation, to avoid leaking node information. Recent works have also combined graph learning models with privacy-preserving techniques such as Differential Privacy (DP), Secure Multi-Party Computation (MPC), and Homomorphic Encryption (HE). Zhou et al. (2020) use MPC and HE when learning a GNN model for node classification with vertically split data, preserving silo-level rather than node-level privacy. Sajadmanesh & Gatica-Perez (2020) preprocess the input raw data with DP before feeding it into a GNN model. Composing privacy-preserving techniques for graph learning can help build federated learning systems that follow the privacy-in-depth principle, wherein the privacy properties degrade as gracefully as possible if one technique fails (Kairouz et al., 2019).

3 CROSS-NODE FEDERATED GRAPH NEURAL NETWORK

3.1 PROBLEM FORMULATION

Given a dataset with a graph G = (V, E), a feature tensor X ∈ R^{|V|×...} and a label tensor Y ∈ R^{|V|×...}, we consider learning a model under the cross-node federated learning constraint: the node feature x_i = X_{i,...}, node label y_i = Y_{i,...}, and model output ŷ_i are visible only to node i. One typical task requiring the cross-node federated learning constraint is the prediction of spatio-temporal data generated by a network of sensors. In such a scenario, V is the set of sensors and E describes the relations among sensors (e.g., e_ij ∈ E if and only if the distance between v_i and v_j is below some threshold). The feature tensor x_i ∈ R^{m×D} represents the i-th sensor's records in D-dimensional space over the past m time steps, and the label y_i ∈ R^{n×D} represents the i-th sensor's records over the future n time steps. Since records collected by sensors owned by different users or organizations may not be shareable, due to the need for edge computation or licensing restrictions on data access, it is necessary to design an algorithm that models the spatio-temporal relations without any direct exchange of node-level data.

3.2 PROPOSED METHOD

We now introduce our proposed Cross-Node Federated Graph Neural Network (CNFGNN) model. We begin by disentangling the modeling of node-level temporal dynamics and server-level spatial dynamics as follows: (i) (Figure 1c) on each node, an encoder-decoder model extracts temporal features from the node's data and makes predictions; (ii) (Figure 1b) on the central server, a Graph Network (GN) (Battaglia et al., 2018) propagates the extracted node temporal features and outputs node embeddings, which incorporate the relational information among nodes. Step (i) has access to the non-shareable node data and is executed locally on each node.
Step (ii) involves only the upload and download of smashed features and gradients, rather than the raw data on the nodes. This decomposition enables the exchange and aggregation of node information under the cross-node federated learning constraint.

3.2.1 MODELING OF NODE-LEVEL TEMPORAL DYNAMICS

We modify the Gated Recurrent Unit (GRU) based encoder-decoder architecture of Cho et al. (2014) to model node-level temporal dynamics on each node. Given an input sequence x_i ∈ R^{m×D} on the i-th node, an encoder sequentially reads the whole sequence and outputs the hidden state h_{c,i} as a summary of the input sequence, according to Equation 1:

$$h_{c,i} = \mathrm{Encoder}_i(x_i, h^{(0)}_{c,i}), \qquad (1)$$

where h^{(0)}_{c,i} is a zero-valued initial hidden state vector. To incorporate spatial dynamics into each node's prediction model, we concatenate h_{c,i} with the node embedding h_{G,c,i} generated by the procedure described in Section 3.2.2, which contains spatial information, and use the result as the initial state vector of the decoder. The decoder generates the prediction ŷ_i auto-regressively, starting from the last frame x_{i,m} of the input sequence, with the concatenated hidden state vector:

$$\hat{y}_i = \mathrm{Decoder}_i(x_{i,m}, [h_{c,i}; h_{G,c,i}]). \qquad (2)$$

We choose the mean squared error (MSE) between the prediction and the ground-truth values as the loss function, evaluated locally on each node.
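A minimal sketch of the node-level model in Equations 1 and 2 might look as follows, assuming a single-layer GRU and a linear output projection; hyperparameters and names are illustrative.

```python
import torch
import torch.nn as nn

class NodeSeq2Seq(nn.Module):
    """GRU encoder-decoder for one node; h_G is the server-side graph embedding."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.encoder = nn.GRU(d_in, d_hid, batch_first=True)
        self.decoder_cell = nn.GRUCell(d_in, 2 * d_hid)   # state = [h_c ; h_G]
        self.proj = nn.Linear(2 * d_hid, d_in)

    def encode(self, x):                        # x: (batch, m, D) -> h_c (Eq. 1)
        _, h = self.encoder(x)
        return h.squeeze(0)

    def forward(self, x_last, h_c, h_G, n_steps):         # Eq. 2, auto-regressive
        state = torch.cat([h_c, h_G], dim=-1)   # concatenated initial state
        out, frame = [], x_last                 # start from last input frame x_{i,m}
        for _ in range(n_steps):
            state = self.decoder_cell(frame, state)
            frame = self.proj(state)
            out.append(frame)
        return torch.stack(out, dim=1)          # (batch, n, D) predictions
```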
Graph neural networks and federated learning are both promising directions individually. This paper is apparently one of the first attempts to combine them for spatio-temporal data modeling. The time-series data at the local nodes is modeled by an encoder-decoder architecture, and the spatial locality among the nodes is captured at the server. The encoder at each node projects the time-series data into an embedding space. This embedding is used by the GNN at the server as node features. The server-side GNN outputs node embeddings. The encoder embeddings and the GNN embeddings are then concatenated and fed to the decoder, which predicts the outputs for the subsequent time steps. To ensure that all the nodes encode their temporal data in a common space, the encoders are shared across the clients. Overall, the results look promising.
SP:a7dd38170e565b5450928720a51a50952ce48d86
A generalized probability kernel on discrete distributions and its application in two-sample test
We propose a generalized probability kernel (GPK) on discrete distributions with finite support. This probability kernel, defined as a kernel between distributions rather than between samples, generalizes existing discrepancy statistics such as maximum mean discrepancy (MMD) as well as probability product kernels, and extends to more general cases. For both the existing and the newly proposed statistics, we estimate them through empirical frequencies and illustrate a strategy to analyze the resulting bias and convergence bounds. We further propose power-MMD, a natural extension of MMD within the GPK framework, and illustrate its use for two-sample testing. Our work connects the fields of discrete distribution-property estimation and kernel-based hypothesis testing, which might shed light on new possibilities.

1 INTRODUCTION

We focus on the two-sample problem: given two i.i.d. samples {x_1, x_2, ..., x_n} and {y_1, y_2, ..., y_n}, can we infer the discrepancy between the underlying distributions they are drawn from? For this problem, hypothesis testing (the two-sample test) is the most popular option, and a variety of statistics for estimating the discrepancy have been proposed. In recent years, RKHS-based methods such as maximum mean discrepancy (MMD) have gained much attention. Gretton et al. (2012) showed that in a universal RKHS F, MMD(F, p, q) = 0 if and only if p = q, so MMD can be used for the two-sample hypothesis test. Gretton et al. (2012) further provide an unbiased estimator of MMD with a fast asymptotic convergence rate, illustrating its advantages.

On the other hand, estimating distribution properties with plugin (empirical) estimators in the discrete setting has been an active research area in recent years, focusing on problem settings with large support size but not so large sample size. The Bernstein polynomial technique, introduced to analyze the bias of plugin estimators in Yi & Alon (2020), has produced remarkable progress on bias-reduction methods for plugin estimators. It is thus interesting to ask whether plugin estimators could motivate new results for the RKHS-based two-sample test.

Another interesting topic is the probability kernel, defined as a kernel function over probability distributions instead of over samples. Although any discrepancy measure between distributions p and q is potentially a valid probability kernel, not much work focuses on this view. While Jebara et al. (2004) introduced the so-called probability product kernels, which generalize a variety of discrepancy measures, their properties remain to be studied further.

Motivated by these observations, our work focuses on a specialized probability kernel function that directly generalizes sample-based RKHS methods such as MMD. We use the plugin estimator as the default estimator of the kernel function we define, and we illustrate that, with the help of Bernstein polynomial techniques, we can analyze the bias and convergence bounds of these plugin estimators. Our work thus connects the fields of discrete distribution-property estimation and kernel-based hypothesis testing, which brings interesting possibilities.

2 NOTATION

We use a bold symbol p, q ∈ R^k to represent a probability function over a discrete support of size k, and p_i, q_i to represent the i-th entries of p and q. We use {v_1, v_2, ..., v_k}, v_i ∈ R^d, to represent the support of p and q, and [k] := {1, 2, ..., k} to denote the set of indices of elements in {v_1, v_2, ..., v_k}.
We use φ∘(p, q) to denote an element-wise function from R^k × R^k to R^k, where (φ∘(p, q))_i = φ∘(p_i, q_i), and φ∘p to denote an element-wise function from R^k to R^k, where (φ∘p)_i = φ∘p_i. With a slight abuse of notation, we write p^ρ and p − q for element-wise functions defined as above. We use kernel(p, q) to denote a kernel function that maps from R^k × R^k to a real value in R, and kernel(x, y), x, y ∈ R^d, to denote a kernel function from R^d × R^d to R. We use K to denote the Gram matrix generated by kernel(x, y) on the finite support {v_1, v_2, ..., v_k}, where K_ij = kernel(v_i, v_j). We use {x_1, x_2, ..., x_n} ∼ p and {y_1, y_2, ..., y_n} ∼ q to denote the samples from distributions p and q, where n is the sample size.

3 GENERALIZED PROBABILITY KERNEL

A probability kernel function, defined as a kernel function between distributions instead of between samples, is a natural extension of the idea of a kernel function on the sample space.

Definition 1. Given distributions p and q belonging to a family of discrete distributions with the same finite support {v_1, v_2, ..., v_k}, v_i ∈ R^d, where k is the support size, we define the probability kernel function PK(p, q) as a kernel function that maps from R^k × R^k to a real value in R.

Many discrepancy measures, such as MMD, can serve as probability kernel functions, but they are usually not described with this term. The reason is that, most of the time, we consider only a limited number of distributions and do not need, or do not have the resources, to navigate all distributions within the family. For example, when looking into the two-sample problem, we usually assume two samples {x_1, x_2, ..., x_n} ∈ R^d and {y_1, y_2, ..., y_n} ∈ R^d drawn i.i.d. from two distributions p and q, and use the discrepancy measure MMD[F, p, q] to determine whether p and q are indistinguishable in the RKHS F; we do not consider all the other distributions in the family that are irrelevant to our samples. So far, the idea of a kernel function between distributions has not been especially useful in practice. In this paper, however, we propose that when considering the plugin estimators of many existing discrepancy measures, it is beneficial to view them as probability kernel functions.

3.1 DEFINITION OF GENERALIZED PROBABILITY KERNEL

Definition 2 (Generalized probability kernel). Let S be the family of discrete distributions on support {v_1, v_2, ..., v_k}, v_i ∈ R^d. Let F be the unit ball in a universal RKHS H with associated continuous kernel RK(x, y), where for any x ∈ R^d and y ∈ R^d, RK(x, y) maps from R^d × R^d to R. We denote the Gram matrix K_ij = RK(v_i, v_j). The generalized probability kernel function on distributions p, q ∈ S is

$$\mathrm{GPK}_{\mathcal{F},\phi}(p,q) = \phi\circ(p,q)\,K\,\phi\circ(q,p)^{T} = \sum_{i\in[k]}\sum_{j\in[k]} \phi\circ(p_i,q_i)\,K_{ij}\,\phi\circ(q_j,p_j),$$

where φ∘(p, q) is an element-wise mapping function on discrete distributions p, q ∈ S that maps from R^k × R^k to R^k. Under this definition, the GPK is clearly a symmetric probability kernel function: GPK_{F,φ}(p, q) = GPK_{F,φ}(q, p).

The mapping function φ admits a great range of possibilities. In most cases, we need to narrow down this class and equip it with convenient properties for the GPK measure to be useful.
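Since Definition 2 is just a quadratic form in the mapped probability vectors, evaluating a GPK from (empirical) frequencies takes a few lines of numpy; the Gaussian RBF choice for RK below is our assumption, and any universal kernel would do.

```python
import numpy as np

def gram(support, bandwidth=1.0):
    """Gaussian RBF Gram matrix K_ij = RK(v_i, v_j) on the finite support."""
    sq = ((support[:, None, :] - support[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def gpk(p, q, K, phi):
    """GPK_{F,phi}(p, q) = (phi∘(p, q)) K (phi∘(q, p))^T  (Definition 2)."""
    return phi(p, q) @ K @ phi(q, p)

# Example: phi∘(p, q) = p - q gives minus the squared MMD on discrete support.
p_hat = np.array([0.5, 0.3, 0.2])          # (empirical) probability vectors
q_hat = np.array([0.4, 0.4, 0.2])
support = np.array([[0.0], [1.0], [2.0]])  # v_1, ..., v_k in R^d (here d = 1)
mmd2 = -gpk(p_hat, q_hat, gram(support), lambda a, b: a - b)
```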
One example is the measurement of discrepancy, where we want GPK_{F,φ}(p, q) = 0 if and only if p = q.

Definition 3 (discrepancy probability kernel). Let S be a family of discrete distributions p ∈ S on support {v_1, v_2, ..., v_k}. A discrepancy probability kernel is a kernel function PK(p, q) such that PK(p, q) = 0 if and only if p = q.

Theorem 1. GPK_{F,φ}(p, q) is a discrepancy probability kernel if the mapping function φ satisfies:
1. symmetry or antisymmetry with respect to p and q: φ∘(p, q) = φ∘(q, p) or φ∘(p, q) = −φ∘(q, p);
2. ‖φ∘(p, q)‖_2 = ‖φ∘(q, p)‖_2 = 0 if and only if p = q, where ‖·‖_2 denotes the L2 norm.

Proof.

$$\mathrm{GPK}_{\mathcal{F},\phi}(p,q) = \sum_{i\in[k]}\sum_{j\in[k]} \phi\circ(p_i,q_i)\,K_{ij}\,\phi\circ(q_j,p_j) = \phi\circ(p,q)\,K\,\phi\circ(q,p)^{T} = \pm\,\phi\circ(p,q)\,K\,\phi\circ(p,q)^{T} = \pm\, vKv^{T}.$$

K is the Gram matrix of a universal kernel on distinct support points and hence positive definite, so vKv^T ≥ 0, with equality if and only if v = 0; since v = φ∘(p, q), this condition means φ∘(p, q) = 0, which holds if and only if p = q.

Another example is the polynomial GPK, the main focus of this paper. This subclass of GPK is interesting because we can build unbiased estimators of it using the Bernstein polynomial techniques of Qian et al. (2011). As we show in Section 5, the resulting unbiased estimators also have analyzable convergence bounds, illustrating their potential use in applications such as the two-sample test.

Definition 4 (polynomial GPK). The polynomial GPK is the subset of GPK whose mapping function φ is polynomial in p and q: φ∘(p, q) = Σ_{l=0}^{o} Σ_{s=0}^{o} α_{l,s} p^l q^s, where o ∈ Z is the degree of the polynomial and α_{l,s} ∈ R are the coefficients.

Below we give some examples of polynomial GPKs, including the MMD proposed in Gretton et al. (2012) and the power-MMD newly proposed in this paper, a natural extension of MMD from the viewpoint of probability kernels.

3.1.1 EXAMPLE 1: MMD AS A MEMBER OF POLYNOMIAL GPK

Given discrete distributions p, q with support {v_1, v_2, ..., v_k}, we can rewrite MMD in terms of the probability values p_i, q_i:

$$\mathrm{MMD}^2_{\mathcal{F}}(p,q) = \left\| \mathbb{E}_{x\sim p} f(x) - \mathbb{E}_{x'\sim q} f(x') \right\|^2_{\mathcal{H}} = \Bigg\| \sum_{i\in[k]} f(v_i)\,(p_i - q_i) \Bigg\|^2_{\mathcal{H}} = \sum_{i\in[k]}\sum_{j\in[k]} (p_i - q_i)\,K_{ij}\,(p_j - q_j) = -\mathrm{GPK}_{\mathcal{F},\phi_l}(p,q),$$

where φ_l∘(p, q) = p − q, H is the RKHS defined in the MMD literature, and f is the feature map from v_i to H. GPK_{F,φ_l}(p, q) is a special case of the polynomial GPK with α_{1,0} = 1, α_{0,1} = −1, and all other coefficients 0.
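Continuing the example, a plugin two-sample statistic fills p and q with empirical frequencies. The paper calibrates its tests through the bias and convergence-bound analysis of the plugin estimators; the permutation calibration below is only a generic stand-in to make the sketch self-contained.

```python
import numpy as np

def empirical_freq(samples, k):
    """Plugin estimate of a discrete distribution from sample indices in [0, k)."""
    return np.bincount(samples, minlength=k) / len(samples)

def mmd2_stat(xs, ys, K):
    d = empirical_freq(xs, K.shape[0]) - empirical_freq(ys, K.shape[0])
    return d @ K @ d

def perm_test(xs, ys, K, n_perm=500, seed=0):
    """Two-sample test with the plugin MMD^2 statistic and permutation nulls."""
    rng = np.random.default_rng(seed)
    observed = mmd2_stat(xs, ys, K)
    pooled = np.concatenate([xs, ys])
    null = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        null.append(mmd2_stat(pooled[:len(xs)], pooled[len(xs):], K))
    return observed, float(np.mean(np.array(null) >= observed))  # statistic, p-value
```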
The paper under review proposes to generalize MMD for discrete random variables whose labels take values in $\mathbb{R}^k$. The authors propose to estimate these generalized probability kernel distances using empirical estimators. Their properties are studied for two particular examples, namely a kernelized Stein discrepancy and polynomial versions. Consistency and bias of both estimators are studied, and the bias is corrected.
SP:9a099507d376dd1553a8d11b821ce564b8a595ff
Generalized Gumbel-Softmax Gradient Estimator for Generic Discrete Random Variables
1 INTRODUCTION

Stochastic computational graphs, including deep generative models such as variational autoencoders, are widely used for representation learning. Optimizing the network parameters through gradient methods requires an estimate of the gradient values, but the stochasticity requires computing an expectation, which distinguishes this problem from the deterministic gradients of ordinary neural networks. There are two common ways of obtaining the gradients: score-function-based methods and reparameterization methods. Score-function-based estimators tend to produce unbiased gradients with high variance, while reparameterization estimators tend to produce biased gradients with low variance (Xu et al., 2019). Hence, the core technique for score-function-based estimators is reducing the variance of the gradients to achieve stable and fast optimization. Meanwhile, using reparameterization estimators requires a differentiable non-centered parameterization (Kingma & Welling, 2014a) of the random variables.

Among reparameterization estimators, one of the most popular examples is the reparameterization in the Gaussian variational autoencoder (VAE) (Kingma & Welling, 2014b), which has an exact reparameterization form. Other VAEs with explicit priors use reparameterization tricks with approximations (Nalisnick & Smyth, 2017; Joo et al., 2020). For continuous random variables, it is feasible to estimate gradients with automatic differentiation by utilizing a transport equation (Jankowiak & Obermeyer, 2018) or an implicit reparameterization (Figurnov et al., 2018). However, these methods are not applicable to discrete random variables, due to non-differentiability.

Recently, some discrete random variables, such as Bernoulli or categorical random variables, have been well explored from the reparameterization perspective by overcoming this difficulty through a continuous relaxation (Jang et al., 2017; Maddison et al., 2017). However, other discrete distributions, such as the Poisson, binomial, multinomial, geometric, and negative binomial distributions, have not been explored from the learning perspective in the deep generative modeling community. Prior works on graphical models, such as Ranganath et al. (2015; 2016), utilized Poisson latent variables for latent counting. Another line of work (Wu et al., 2020) utilized a Gaussian approximation of the Poisson latent variable to count the number of words, which can be a poor approximation if the rate parameter is small. In this sense, the study of stochastic gradient estimators for discrete distributions is needed in the deep generative modeling community, as it broadens the choice of prior assumptions and the utilization of various distributions.

This paper proposes a generalized version of the Gumbel-Softmax reparameterization trick that can be applied to generic discrete random variables through continuous relaxation, namely Generalized Gumbel-Softmax (GenGS). The key ideas of GenGS are (1) a conversion of the sampling process into a one-hot categorical selection process; (2) a reversion of the selected category in one-hot form back to the original sample value; and (3) a relaxation of the categorical selection process into a continuous form.
To implement these steps, GenGS first truncates the discrete random variable to approximate the distribution with a finite number of possible outcomes. Afterward, GenGS applies the Gumbel-Softmax trick together with a special form of linear transformation. Our main theorem shows that the proposed GenGS is applicable to general discrete random variables beyond the Bernoulli and the categorical. Experiments show the efficacy of GenGS on synthetic examples and VAEs, as well as its usability in a topic model application.

2 PRELIMINARY: REPARAMETERIZATION TRICK & GUMBEL-SOFTMAX

2.1 BACKPROPAGATION THROUGH STOCHASTIC NODES WITH REPARAMETERIZATION TRICK

Suppose we have a stochastic node, or latent variable, z ∼ p(z|θ), whose distribution depends on a parameter θ. The goal is to optimize the loss function L(θ, η) = E_{z∼p(z|θ)}[f_η(z)], where f_η is a continuous and differentiable function with respect to η, e.g., a neural network. To optimize the loss function in terms of θ through gradient methods, we need ∇_θ L(θ, η) = ∇_θ E_{z∼p(z|θ)}[f_η(z)], which cannot be computed directly in its original form. To compute ∇_θ L(θ, η), the reparameterization trick introduces an auxiliary variable ε ∼ p(ε) that takes over all the randomness of the latent variable z, so the sampled value can be rewritten as z = g(θ, ε) with a function g that is deterministic and differentiable in θ. Figure 1(a) illustrates the reparameterization trick: shaded nodes indicate random nodes, and dotted lines denote sampling processes. The gradient of the loss function with respect to θ is then

$$\nabla_\theta L = \nabla_\theta \mathbb{E}_{z\sim p(z|\theta)}[f_\eta(z)] = \mathbb{E}_{\epsilon\sim p(\epsilon)}[\nabla_\theta f_\eta(g(\theta,\epsilon))] = \mathbb{E}_{\epsilon\sim p(\epsilon)}[\nabla_g f_\eta(g(\theta,\epsilon))\,\nabla_\theta g(\theta,\epsilon)], \qquad (1)$$

where the last term of Equation 1 is now computable. A condition for the reparameterization trick is the continuity of the random variable z, so the distribution of z is limited to a class of continuous distributions. To apply the differentiable reparameterization trick to discrete random variables, a continuous relaxation can be used: for example, the relaxation from the categorical distribution to the Gumbel-Softmax distribution, described in the next subsection.

2.2 REPARAMETERIZATION TRICK ON CATEGORICAL RANDOM VARIABLE

The Gumbel-Max trick (Gumbel, 1948) is a procedure for sampling a one-hot categorical value using the Gumbel distribution instead of sampling directly from a categorical distribution. A categorical random variable X ∼ Categorical(π), where π lies on the (n−1)-dimensional simplex Δ^{n−1}, can be reparameterized by the Gumbel-Max trick: (1) sample u_j ∼ Uniform(0, 1) and generate a Gumbel sample g_j = −log(−log u_j) for each j = 1, ..., n; and (2) compute k = argmax_{j=1}^{n} [log π_j + g_j], where π is the categorical parameter. This procedure generates a one-hot sample x such that x_j = 0 for j ≠ k and x_k = 1, with P(X_k = 1) = π_k. We denote by GM(π) the distribution whose samples are generated by the Gumbel-Max trick.

The Gumbel-Softmax trick (Jang et al., 2017; Maddison et al., 2017) is an alternative to the Gumbel-Max trick that continuously relaxes the categorical random variable.
The Gumbel-Softmax uses a softmax with temperature τ > 0 instead of the argmax in the sampling process, which enables (1) relaxing the discreteness of the categorical random variable to the one-hot-like form $x(\tau) = \mathrm{softmax}((\log \pi + g)/\tau)$ in the continuous domain; and (2) approximating the Gumbel-Max by taking τ small enough. The Gumbel-Softmax estimator has become widely used to reparameterize categorical random variables, e.g., RelaxedOneHotCategorical in TensorFlow (Abadi et al., 2016). We denote by GS(π, τ) the distribution generated by the Gumbel-Softmax trick.

3 PROCESS OF GENGS REPARAMETERIZATION

This section discusses the GenGS process to convey the concept with minimal theoretical detail; Section 4 provides the theoretical background. The three steps of GenGS are: (1) approximate the discrete distribution by truncating it; (2) reparameterize the truncated distribution with the Gumbel-Max trick and a linear transformation T, introduced below; and (3) relax the discreteness by replacing the Gumbel-Max trick in Step 2 with the Gumbel-Softmax trick. Figure 1(b) illustrates the full steps of the GenGS trick.

Step 1. Truncate the discrete distribution so that the number of possible outcomes is finite. Suppose X ∼ Poisson(100), which has a mode near x = 100 and near-zero probabilities at x < 50 and x > 150. The key idea of the first step is to ignore the outcomes with near-zero probabilities (e.g., x < 50 and x > 150) and focus only on the probable samples with meaningful probabilities (e.g., 50 ≤ x ≤ 150), i.e., to truncate the distribution, which makes the support finite. Now suppose we have a discrete random variable X ∼ D(λ) and its truncated counterpart Z ∼ TD(λ, R), where R denotes the truncation range, which must be pre-defined. Proposition 3 in Section 4 gives the theoretical reason why Z approximates X. Since truncation makes the support finite, we may assume Z has a support C = {c_0, ..., c_{n−1}} of n possible outcomes, with the corresponding constant outcome vector c = (c_0, ..., c_{n−1}). Note that the ordering of the c_k is not significant; Appendix E provides examples of how c is set.

Step 2. Divide the sampling process of Z into two parts: select a one-hot category of Z, then revert the selected one-hot category to the original value. For example, if the sampled value of Z is c_2 ∈ C, we first focus on the one-hot category vector one_hot(c_2) = (0, 0, 1, 0, ..., 0) rather than on the sampled value c_2 itself. Such a one-hot categorical selection is possible via the categorical selection w ∼ Categorical(π) or its reparameterized version, the Gumbel-Max trick GM(π). Here, the categorical parameter π = (π_0, ..., π_{n−1}) can be computed directly from the explicit probability mass function (PMF) of the distribution, i.e., π_k = P(Z = c_k). However, the PMF of the truncated distribution requires a modification of the PMF of the original distribution, which is determined by how we define Z from X; see Definitions 1 and 2 and Appendix A for the detailed configuration of π. Suppose now that we have a one-hot categorical sample w drawn with the categorical parameter π.
Afterward, we revert the selected one-hot categorical vector w = (w_0, ..., w_{n−1}) to the original sample value with the linear transformation $T(w) = \sum_k w_k c_k = w \cdot c$. Proposition 4 in Section 4 shows the validity of this alternative sampling process.

Step 3. Relax the one-hot categorical selection into a continuous form using the Gumbel-Softmax trick. Up to this point, the only obstacle to the reparameterization trick is the non-differentiability of the one-hot categorical selection via the Gumbel-Max process. As in Section 2.2, the process can be continuously relaxed with the Gumbel-Softmax trick GS(π, τ) for some temperature τ. Theorem 5 in Section 4 shows that the alternative sampling process still holds under the continuous relaxation.
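Putting the three steps together for a truncated Poisson yields a short differentiable sampler. The truncation range and temperature below are illustrative, and the log-PMF is written with differentiable torch operations so that gradients reach the rate parameter.

```python
import torch

def gengs_sample(rate, lo, hi, tau):
    """Relaxed sample from Poisson(rate) truncated to [lo, hi] (GenGS sketch)."""
    c = torch.arange(lo, hi + 1, dtype=torch.float32)      # Step 1: finite outcomes
    log_pmf = c * torch.log(rate) - rate - torch.lgamma(c + 1.0)
    pi = torch.softmax(log_pmf, dim=-1)                    # renormalized truncated PMF
    g = -torch.log(-torch.log(torch.rand(c.shape)))        # Gumbel(0, 1) noise
    w = torch.softmax((torch.log(pi) + g) / tau, dim=-1)   # Step 3: relaxed one-hot
    return (w * c).sum()                                   # Step 2 revert: T(w) = Σ w_k c_k

rate = torch.tensor(100.0, requires_grad=True)
x = gengs_sample(rate, lo=50, hi=150, tau=0.1)
x.backward()       # dx/drate exists, unlike for a raw Poisson sample
```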
The paper presents a generalization of the Gumbel-Softmax gradient estimator. The original Gumbel-Softmax is usually applied to Bernoulli and categorical random variables. The method proposed in the paper extends its applicability to other discrete distributions, such as the Poisson, multinomial, and geometric, among others. The main ideas of the approach are: (1) random variables that may take countably infinite values are truncated; (2) the sampling process of the random variable is converted to a one-hot scheme (where the Gumbel-Softmax relaxation is applied); (3) the "one-hot" samples are reverted to the original sample space.
SP:5f22f64538ccd28123d51c7f8b16fe056cc5dc0b
Structure Controllable Text Generation
1 INTRODUCTION

Natural language is not just a sequence of tokens but a well-organized structure expressing understandable information. The structure of language usually obeys a set of grammatical rules, which helps beginners grasp the language with less effort. Similarly, incorporating structure into a neural language model can yield an increasingly abstract level of representation and improve generalization, potentially reducing the need for large amounts of training data (Shen et al., 2019b). Incorporating structure information has demonstrated considerable improvements on many language understanding tasks (Zhang et al., 2019; Hao et al., 2019; Wang et al., 2019).

Text generation concerns not only the generated content (i.e., what to say) but also the structural form in which it is presented (i.e., how to say it) (Peng et al., 2019). Similar content or meaning can be presented in different structural forms. The structure and content can be considered and planned separately to achieve highly informative generated text. From an empirical view, controlling or planning the generated structure may help in several ways: i) reducing the uncertainty of the generated content under specific structure conditions, which may contribute to better generation quality; ii) enhancing the interpretability of the generated text, since more controlling attributes are realized during generation; and iii) improving structure, format, or style consistency in structure-constrained generation tasks or domain-specific generation with particular formats, such as style or paraphrase generation (Chen et al., 2019; Ficler & Goldberg, 2017), poetry generation (Deng et al., 2020; Li et al., 2020), and lyric generation (Watanabe et al., 2018; Lu et al., 2019).

The language structures determined by grammatical rules vary across granularity levels: for example, participial construction (pc) is character-level, part of speech (pos) is word/phrase-level, and sequence length is sentence-level. These kinds of structure are coupled and nested together, and in most token-by-token generation they are realized simultaneously with the content. It is difficult to disentangle the content from the text structure, and even harder to discriminate and control the different granularity levels of structure during text generation. Individually controlling specific types of structure, such as sequence length (Kikuchi et al., 2016) or verbal predicates (Tu et al., 2019), has been investigated in text generation. These works design structure representations specific to their targets and are inappropriate for controlling other types of structure, let alone multiple types of structure simultaneously. Directly embedding the structure and adding it to the word embeddings can achieve considerable control over character-level structure during text generation, such as tone level and rhyme in Chinese poetry generation (Deng et al., 2020). This method may fail, however, when the controlled structure (phrase-level or sentence-level) needs to be aware of subsequent structure during the generation process. In addition to summing structure embeddings and word embeddings, SongNet (Li et al., 2020) designs additional structure embeddings that are queried and incorporated globally by the summed embeddings to renew the representation.
With pre-training and fine-tuning, SongNet (Li et al., 2020) can also achieve good controllability over tailor-designed formats¹ (sentence-level structure). The symbol sets for such a format are specially designed and may not be applicable to other types of structure. In contrast to the above works, in this paper we do not focus on controlling one specific type of structure or format; instead we propose a framework to control more general types of structure in text generation. This framework allows for controlling an individual type of structure, or multiple and multi-granularity types of structure, during text generation. The controlled types of structure are extracted from sequence templates (any valid sentence is a valid template) by one or several auxiliary models. The extracted structure information is treated as a set of conditions, and the auxiliary model can be any credible model or tool that can extract sound structure information from a template. Since we want the generation of the current token or word to be aware of the global structure, a bi-directional Transformer encoder is adopted for structure representation and learning. The learned structure representations are further incorporated into the decoder to guide the realization of the controlled structure. The main contributions of this work are summarized as follows:
• A straightforward, interpretable structure-controllable text generation framework is proposed, capable of controlling multi-granularity sequence structure from character-level to sentence-level by explicitly incorporating the corresponding structure information.
• A simple alignment method, together with structure embedding, representation, and learning methods, is proposed for representing multi-granularity and multiple types of structure.
• A structure-aware Transformer language model is proposed, in which the structure representation and token representation are learned simultaneously. The structure information is queried globally and incorporated into the token representation with an attention mechanism, which contributes to controlling the generated structure.
• Extensive experiments on controlling individual types of structure and multi-granularity types of structure are conducted on a Chinese lyrics corpus. The structure controllability is effective, and the quality of the generated lyrics is favorable. We also conduct controlling experiments on the English Penn Treebank dataset, which demonstrate similar structure-controlling capability for the proposed framework.

2 RELATED WORKS

Controllable text generation has received much attention recently. Many efforts are devoted to controlling the content of the generated text (Kiddon et al., 2016; Lebret et al., 2016; Shen et al., 2019a). Building on a conditioned RNN language model, stylistic parameters have been incorporated as conditioning context to control stylistic aspects of the generated text (Ficler & Goldberg, 2017). Basing the generator on VAEs, Hu et al. (2017) propose a generative model that generates plausible sentences with designated semantics. A simple plug-and-play language model is proposed in Dathathri et al. (2019) to guide controlling attributes (e.g., topic or sentiment) in text generation without further training of the pre-trained language model. None of these works attempts to control the structure of the generated text. A similar approach, exemplar-based text generation, is proposed in Peng et al. (2019).
There, for each input text, an exemplar text is retrieved from the training data and then used to construct a customized decoder for outputting a target. It is ambiguous how much the exemplar contributes to the generated structure versus the content. Another similar work is SongNet (Li et al., 2020), which is proposed to control so-called rigid formats. The rigid formats are specified with a sequence of placeholder symbols, which are used to control the sentence (or sub-sentence) length. Our method differs from all the previous methods in four ways: 1) we focus on a general structure-controlling framework for text generation instead of controlling one specific type of structure; 2) both an individual type of structure and multiple or multi-granularity types of structure can be controlled; 3) instead of designing the structure symbols ourselves, we adopt the most representative structure symbols extracted by external models, increasing the applicability of our framework; and 4) the extracted structure information, decoupled from the sequence information, is fully learned and represented before being combined with the word information to guide text generation.

¹This format or structure mostly concerns the length of each sentence within one paragraph or passage.

3 MODEL DESCRIPTION

3.1 STRUCTURE CONDITIONAL LANGUAGE MODEL

Let a natural language sequence be denoted by x = [x_1, ..., x_T], with each word denoted x_t, t = 1, ..., T. The sequence joint distribution p(x) can be factorized into the product of conditional distributions p(x_t | x_{<t}):

$$p(x) = p(x_1, \dots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_{<t}). \qquad (1)$$

A standard language model models this distribution and maximizes the corresponding likelihood (Bengio et al., 2003; Peters et al., 2018; Shen et al., 2019b). This factorization considers the order structure of the natural language sequence explicitly, conditioning on the previous word tokens. Although a standard language model can generate sentences of high quality, the generated structure is inexplicable and cannot be controlled to satisfy a specific generation task. Therefore, we incorporate structure information explicitly into the language model to guide structure generation. The joint distribution of the sequence x is reformulated as in Equation 2:

$$p(x) = p(x_1, \dots, x_T) = p(s) \prod_{t=1}^{T} p(x_t \mid x_{<t}, s), \qquad (2)$$

where s represents the global structure of the natural language sequence x; the global structure can be any structure information such as pos tags or semantic roles of the sequence, and p(s) is the prior distribution of the global structure. We extract the structure information with an auxiliary model, and this structure information is considered prior knowledge that is not optimized by the language model. The model parameters are learned by maximizing the objective function of the SCLM, namely the likelihood in Equation 3:

$$\max_\theta \log p_\theta(x) = \sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t}, s). \qquad (3)$$

We use the Transformer (Vaswani et al., 2017) as the backbone for implementing our SCLM. The structure information is first extracted by the auxiliary model and then encoded by the Transformer encoder.
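A compact sketch of a model implementing Equation 3 might look as follows, assuming the structure tags s are already aligned to the token sequence (Section 3.2); module sizes and names are ours, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCLM(nn.Module):
    """Bi-directional encoder over structure tags; causal decoder over tokens
    that attends to the global structure representation."""
    def __init__(self, vocab, n_struct, d=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, d)
        self.str_emb = nn.Embedding(n_struct, d)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, 4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, 4, batch_first=True), num_layers=2)
        self.out = nn.Linear(d, vocab)

    def forward(self, x, s):          # x, s: (batch, T) aligned index sequences
        T = x.size(1)
        memory = self.encoder(self.str_emb(s))   # global (bi-directional) structure
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.decoder(self.tok_emb(x), memory, tgt_mask=causal)
        logits = self.out(h)                     # predict x_{t+1} from x_{<=t} and s
        return F.cross_entropy(logits[:, :-1].reshape(-1, logits.size(-1)),
                               x[:, 1:].reshape(-1))    # negative of Equation 3
```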
The structure information can thus be fully learned and represented, and can be further incorporated, via the attention mechanism, to make the sequence token representations aware of the structure. The reason why both the Transformer encoder and decoder are adopted here is that we want each token in the sequence to be aware of its local and global structure information; a decoder-only Transformer, such as GPT (Radford et al., 2018), ignores the structure information that follows a token. The Transformer architecture is well designed and suitable for implementing the structure conditional language model; we only modify the input representation and a few parameters of the Transformer.

3.2 STRUCTURE EXTRACTION

We use an auxiliary model (such as a lexical tool) $g(\cdot)$ to extract the structure information $s$ from a natural language sequence $\mathbf{x}$, as shown in Equation 4. The auxiliary model can be regarded as prior knowledge and is not optimized:

$$s = g(\mathbf{x}). \quad (4)$$

The structure can be any sound structural information of the language sequence, ranging from character-level structure (such as participial construction) and word-level structure (such as part of speech) to sentence-level structure (positions, for example). Multi-granularity types of sequence structure $s_1, s_2, \ldots, s_i$ can be extracted by different auxiliary models $g_1(\cdot), g_2(\cdot), \ldots, g_i(\cdot)$, respectively. Since each structure unit (especially for word-level and sentence-level structure) may contain several characters, we assign these characters the same symbol of that kind of structure, keeping the length of the structure sequence the same as that of the token sequence. To be specific, we use part of speech (pos) and participial construction (pc) as examples to illustrate the alignment of multi-granularity types of structure. The pos information can be extracted by many lexical analyzer tools, such as the Jieba analyzer for Chinese and Stanza (Qi et al., 2020) for English. In Chinese, pos is a word-level structure, and the participial construction is the character-level structure for each segmented word. We utilize the symbol collection $C_{pos} = \{n, v, r, \ldots\}$² from the lexical analyzer (e.g., Jieba) to represent the pos of each word, and the symbol collection $C_{pc} = \{P, S, B, M, E\}$³ to represent the pc of each character within each word. Suppose we have two levels (word-level and character-level) of structure information for a sequence $\mathbf{x} = [x_1, \ldots, x_i, \ldots, x_n]$. We can also present the word-level form of the sequence as $\mathbf{w} = [w_1, \ldots, w_j, \ldots, w_{n_w}]$, $n_w \le n$, with pos structure $s'_w = [pos_1, \ldots, pos_j, \ldots, pos_{n_w}]$, $pos_j \in C_{pos}$; each word contains several characters, $w_j = [\ldots, x_{j,k}, \ldots]$, $k \in [1, m_j]$, and the pc structure for each word is $s_{c,j} = [\ldots, pc_{j,k}, \ldots]$, $pc_{j,k} \in C_{pc}$, where $\sum_{j=1}^{n_w} m_j = n$. Therefore, we can obtain the word-level structure (pos) and character-level structure (pc) with the same length as the original sequence, as shown in the following expressions:

$$s_w = [\ldots, \underbrace{pos_j, \ldots, pos_j}_{m_j}, \ldots], \quad j \in [1, n_w] \quad (5)$$

$$s_c = [\ldots, \underbrace{pc_{j,1}, \ldots, pc_{j,k}, \ldots, pc_{j,m_j}}_{m_j}, \ldots] \quad (6)$$

Sentence-level structure such as positions has a unique representation for each token and does not need any further processing for alignment.
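To illustrate the alignment in Equations 5 and 6, the snippet below broadcasts word-level POS tags to the character level and derives per-character participial-construction tags. It assumes a segmenter that yields (word, pos) pairs, such as jieba.posseg for Chinese; any analyzer with this interface would serve as the auxiliary model g(·). Only the {S, B, M, E} subset of C_pc is used here.

```python
import jieba.posseg as pseg  # assumed available; yields (word, pos) pairs

def align_structures(sentence):
    """Return per-character (pos, pc) tag lists, same length as the sentence."""
    pos_tags, pc_tags = [], []
    for word, pos in pseg.cut(sentence):
        m = len(word)                       # m_j characters in word w_j
        pos_tags.extend([pos] * m)          # Eq. 5: repeat pos_j m_j times
        if m == 1:
            pc_tags.append('S')             # single-character word
        else:
            pc_tags.extend(['B'] + ['M'] * (m - 2) + ['E'])  # Eq. 6
    return pos_tags, pc_tags

pos_seq, pc_seq = align_structures('我爱自然语言处理')
# len(pos_seq) == len(pc_seq) == len(sentence): both tag sequences are
# character-aligned and ready to feed the structure encoder.
```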
With this alignment process, multi-granularity and multiple types of sequence structure can be incorporated and controlled during generation. An illustration of multi-granularity structure information for a natural language sentence is shown in Fig. 1.
This paper presents a text generation model conditioned on desired structures. The proposed method is essentially a translation model from structure information (represented with multiple sequences of tokens) to a text. This study converts a text into structure information such as part of speech (POS) and participial construction (PC). Then, this paper proposes Structure Aware Transformer (SAT), which is essentially the same as the Transformer architecture. The experiments use datasets of Chinese lyrics and English Penn Treebank. This paper reports that giving structure information improved the performance in PPL and BLEU compared with GPT-2.
SP:94c1fa434cf2eb8f4f762cd06cf838b0018c6fa0
Non-Linear Rewards For Successor Features
Recently, Reinforcement Learning (RL) algorithms have achieved superhuman performance in several challenging domains, such as Atari (Mnih et al., 2015), Go (Silver et al., 2016), and Starcraft II (Vinyals et al., 2019). The main driver of these successes has been the use of deep neural networks, a class of powerful non-linear function approximators, together with RL algorithms (LeCun et al., 2015). However, this class of Deep Reinforcement Learning (Deep RL) algorithms requires immense amounts of data within an environment, often ranging from tens to hundreds of millions of samples (Arulkumaran et al., 2017). Furthermore, commonly used algorithms often have difficulty transferring a learned policy between related tasks, such as when the environmental dynamics remain constant but the goal changes. In this case, the model must either be retrained completely or fine-tuned on the new task, in both cases requiring millions of additional samples. If the state dynamics are constant but the reward structure varies between tasks, it is wasteful to retrain the entire model. A more pragmatic approach is to decompose the RL agent's policy such that separate functions learn the state dynamics and the reward structure; doing so enables reuse of the dynamics model and only requires learning the reward component. Successor features (Dayan, 1993) do precisely this: a model-free policy's action-value function is expressed as the dot product between a vector of expected discounted future state occupancies, the successor features, and another vector representing the immediate reward in each of those successor states. The factorization follows from the assumption that reward can be predicted as the dot product between a state representation vector and a learned reward vector. Therefore, transfer to a new task requires relearning only the reward parameters instead of the entire model, and amounts to the supervised learning problem of predicting the current state's immediate reward. This factorization can be limiting because it assumes that the reward is a linear function of the current state, which might not always be the case, as the encoded features might not capture the quantities required for accurate reward modelling (Eysenbach et al., 2018; Hansen et al., 2019). Therefore, this paper introduces a new form for the reward function: non-linear with respect to the current state. We assume that the learned features are not optimal and that the reward cannot be predicted directly from the raw features, which is not a strong assumption. This form increases the reward function's representational power and makes it possible to incorporate the current state into reward estimation, lessening the burden on the encoder components. Under the new reward formulation, a secondary term emerges, which learns the future expected auto-correlation matrix of the state features. This new secondary term, referred to as Λ, can be exploited as a possible avenue for directed exploration. Exploring the environment using Λ allows us to exploit and reuse learned environmental knowledge instead of relying on a purely random approach to exploration, such as ε-greedy. Following this, the contributions of this research are as follows:
• A novel formulation of successor features that uses a non-linear reward function. This formulation increases the representational power of the reward function.
• Under the new reward formulation, a second term appears that models the future expected auto-correlation matrix of the state features.
• We provide preliminary results showing that the second term can be used for guided exploration during transfer instead of relying on ε-greedy exploration.

After the introduction of relevant background material in Section 1, we introduce the successor feature framework with a non-linear reward function in Section 2; Section 3 provides experimental support and an analysis of the new term in the decomposition. The paper concludes with a final discussion and possible avenues for future work in Section 4.

1 BACKGROUND

1.1 REINFORCEMENT LEARNING

Consider the interaction between an agent and an environment modelled by a Markov decision process (MDP) (Puterman, 2014). An MDP is defined by a set of states $\mathcal{S}$, a set of actions $\mathcal{A}$, a reward function $R : \mathcal{S} \to \mathbb{R}$, a discount factor $\gamma \in [0, 1]$, and a transition function $T : \mathcal{S} \times \mathcal{A} \to [0, 1]$. The transition function gives the next-state distribution upon taking action $a$ in state $s$ and is often referred to as the dynamics of the MDP. The objective of the agent in RL is to find a policy $\pi$, a mapping from states to actions, which maximizes the expected discounted sum of rewards within the environment. One solution to this problem is to rely on learning a value function, where the action-value function of a policy $\pi$ is defined as:

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t) \,\Big|\, S_t = s,\, A_t = a\right]$$

where $\mathbb{E}_{\pi}[\ldots]$ denotes the expected value when following the policy $\pi$. The policy is learned using an alternating process of policy evaluation, which computes the action-value of a particular policy, and policy improvement, which derives a new policy that is greedy with respect to $Q^{\pi}(s, a)$ (Puterman, 2014).

1.2 SUCCESSOR FEATURES

Successor Features (SF) offer a decomposition of the Q-value function and have appeared under various names and interpretations (Dayan, 1993; Kulkarni et al., 2016; Barreto et al., 2017; Machado et al., 2017). This decomposition follows from the assumption that the reward function can be approximately represented as a linear combination of learned features $\phi(s; \theta_\phi)$, extracted by a neural network with parameters $\theta_\phi$, and a reward weight vector $\mathbf{w}$. As such, the expected one-step reward can be computed as $r(s, a) = \phi(s; \theta_\phi)^{\top}\mathbf{w}$. Following from this, the Q-function can be rewritten as:

$$Q(s, a) \approx \mathbb{E}_{\pi}\left[r_{t+1} + \gamma r_{t+2} + \ldots \mid S_t = s, A_t = a\right] = \mathbb{E}_{\pi}\left[\phi(s_{t+1}; \theta_\phi)^{\top}\mathbf{w} + \gamma\,\phi(s_{t+2}; \theta_\phi)^{\top}\mathbf{w} + \ldots \mid S_t = s, A_t = a\right]$$

$$Q(s, a) = \psi^{\pi}(s, a)^{\top}\mathbf{w}$$

where $\psi^{\pi}(s, a)$ are referred to as the successor features under policy $\pi$. The $i$-th component of $\psi(s, a)$ gives the expected discounted sum of $\phi^{(i)}_t$ when following policy $\pi$ starting from state $s$ and action $a$. It is assumed that the features $\phi(s; \theta_\phi)$ are representative of the state $s$, such that $\psi(\cdot)$ can be written as a function $\psi^{\pi}(\phi(s_t; \theta_\phi), a_t)$. For brevity, $\phi(s_t; \theta_\phi)$ is referred to simply as $\phi_t$ and $\psi^{\pi}(s, a)$ as $\psi(s, a)$. The decomposition neatly separates the Q-function into two learning problems, for $\psi^{\pi}$ and $\mathbf{w}$: estimating the features under the current policy dynamics, and estimating the reward given a state.
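As a concrete illustration of this decomposition, the following numpy sketch computes Q(s, a) = ψ(s, a)ᵀw from given successor features and fits the reward vector w by ordinary least squares from sampled (φ, r) pairs, which is all that must be relearned when the task changes. The shapes and synthetic data are placeholders, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
z, n_actions, n_samples = 16, 4, 1000

# Successor features for one state: one z-vector per action.
psi = rng.normal(size=(n_actions, z))

# Fit w by OLS from sampled state features and observed rewards.
phi = rng.normal(size=(n_samples, z))          # state features phi(s)
true_w = rng.normal(size=z)
r = phi @ true_w + 0.01 * rng.normal(size=n_samples)
w, *_ = np.linalg.lstsq(phi, r, rcond=None)    # supervised reward fit

q_values = psi @ w                             # Q(s, a) = psi(s, a)^T w
greedy_action = int(np.argmax(q_values))       # policy improvement step
```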
Because the decomposition still has the same form as the Q-function, the successor features are computed using a Bellman equation update in which the reward is replaced by $\phi_t$:

$$\psi^{\pi}(\phi_t, a_t) = \phi_t + \gamma\, \mathbb{E}\left[\psi^{\pi}(\phi_{t+1}, a_{t+1})\right]$$

such that approximate successor features can be learned using an RL method such as Q-Learning (Szepesvári, 2009). Following from this, the approximation of the reward vector $\mathbf{w}$ becomes a supervised learning problem. Often, this weight is learned using ordinary least squares from the sampled environmental data. One benefit of having a decoupled representation is that only the relevant function must be relearned when either the dynamics or the reward changes. Therefore, if the task changes but the environmental dynamics remain constant, only the reward vector parameters $\mathbf{w}$ must be relearned, which are minimal compared to the total number of parameters in the full model.

2 MODEL, ARCHITECTURE, AND TRAINING

The Successor Feature framework has several limitations, primarily stemming from the assumptions underlying its derivation, such as constant environmental structure between tasks or the reward being linearly predictable from state features. Work towards solving the former has been developed by Zhang et al. (2017), who learn a linear mapping between task state features. The latter assumption, whereby the reward is assumed to be a linear mapping of state features, is not guaranteed and, as we show, the Successor Feature framework fails in such cases. Therefore, the method presented in this section aims to provide a stronger guarantee of the framework's performance in such cases by developing a more robust reward component. This section discusses our change to the successor feature framework, which turns the reward function from a linear function into a non-linear one. First, the new decomposition is presented, with the full derivation provided in Appendix A. Then experimental support for this change is presented and analyzed to examine what the new term in the decomposition learns.

2.1 NON-LINEAR REWARD FUNCTION

The successor feature framework builds on the assumption that the current reward $r_t$ can be represented by a linear combination of the current state representation $\phi_t \in \mathbb{R}^z$ and a learned reward vector $\mathbf{w} \in \mathbb{R}^z$, such that $r_t = \phi_t^{\top}\mathbf{w}$. This form is limiting because there is no guarantee that the reward will be a linear combination of the state features or that the required state features can be learned by the encoder (Eysenbach et al., 2018; Hansen et al., 2019). In practice, the optimal state features are often not learned; therefore, we build on the premise that the state features are sub-optimal, which in itself is not a strong assumption. To increase the flexibility of the reward model, we consider the following form:

$$r_t = \phi_t^{\top}\mathbf{o} + \phi_t^{\top} A\, \phi_t \quad (1)$$

where $\{\phi_t, \mathbf{o}\} \in \mathbb{R}^z$ and $A \in \mathbb{R}^{z \times z}$. Both $\mathbf{o}$ and $A$ are learnable parameters modelling the reward structure of the environment. Equation 1 shows that the formulation introduces a non-linear transformation with respect to $\phi$. Comparing this with the original formulation, it is equivalent to setting $\mathbf{w} = \mathbf{o} + A\phi$. The state-action value function $Q(s, a)$ under this new reward structure can be derived to yield:

$$Q^{\pi}(s_t, a) = \psi^{\pi}(s_t, a)^{\top}\mathbf{o} + \beta\, \mathrm{tr}\!\left(A\, \Lambda^{\pi}(s_t, a)\right) \quad (2)$$

where $\beta \in \{0, 1\}$ controls the inclusion of $\Lambda$ and $\mathrm{tr}$ is the trace operator.
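Under Equations 1 and 2, the reward gains a quadratic term and the Q-value gains a trace term over Λ. A minimal numpy sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
z, n_actions = 16, 4

phi = rng.normal(size=z)                 # current state features phi_t
o = rng.normal(size=z)                   # learned linear reward weights
A = rng.normal(size=(z, z))              # learned quadratic reward weights

# Eq. 1: non-linear reward, equivalent to using w = o + A @ phi.
r = phi @ o + phi @ A @ phi

# Eq. 2: Q uses psi for the linear part and Lambda for the quadratic part.
psi = rng.normal(size=(n_actions, z))    # successor features per action
Lam = rng.normal(size=(n_actions, z, z)) # expected future phi phi^T per action
beta = 1.0
q = psi @ o + beta * np.trace(A @ Lam, axis1=1, axis2=2)  # shape (n_actions,)
```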
It can now be shown that $\psi$ and $\Lambda$ satisfy Bellman equations (Bellman, 1966):

$$\psi^{\pi}(s_t, a) = \mathbb{E}_{\pi}\left[\phi_{t+1} + \gamma\, \psi(s_{t+1}, \pi(s_{t+1})) \mid S_t = s, A_t = a\right] \quad (3)$$

$$\Lambda^{\pi}(s_t, a) = \mathbb{E}_{\pi}\left[\phi_{t+1}\phi_{t+1}^{\top} + \gamma\, \Lambda(s_{t+1}, \pi(s_{t+1})) \mid S_t = s, A_t = a\right] \quad (4)$$

where, for $\psi$ and $\Lambda$, $\phi$ and $\phi\phi^{\top}$ respectively play the role of rewards. In addition to $\psi$, it is now necessary to model $\Lambda$, which outputs an $\mathbb{R}^{z \times z}$ matrix per action. The quantity $\phi_t\phi_t^{\top}$ can be interpreted as an auto-correlation matrix of the state features. This form allows the $\Lambda$ term to model a form of future expected stochasticity of the environment; for example, the diagonal of $\Lambda$ models a second-order moment capturing each feature's change with respect to itself. We provide analysis and further discussion of $\Lambda$ in Section 3.5.
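Since Equations 3 and 4 are standard Bellman equations with φ and φφᵀ playing the role of rewards, both quantities admit the usual one-sample TD targets. A sketch follows; the arrays stand in for outputs of ψ- and Λ-networks, which are assumptions of this illustration.

```python
import numpy as np

gamma = 0.99

def td_targets(phi_next, psi_next, lam_next):
    """One-sample TD targets for Eqs. 3 and 4.

    phi_next: (z,)    features of the next state
    psi_next: (z,)    psi(s', pi(s')) from a target network
    lam_next: (z, z)  Lambda(s', pi(s')) from a target network
    """
    psi_target = phi_next + gamma * psi_next                      # Eq. 3
    lam_target = np.outer(phi_next, phi_next) + gamma * lam_next  # Eq. 4
    return psi_target, lam_target
```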
Successor representations are an old idea that has seen renewed interest in the ML community. The idea is conceptually straightforward: by assuming the rewards are linear in some feature space, $r = \mathbf{\phi}(s, a) \cdot \mathbf{w}$, one learns something analogous to an action-value function for the discounted expected features under a policy, so that the action-value on task $\mathbf{w}$ is $Q^\pi_{\mathbf{w}}(s, a) = \psi^\pi(s, a) \cdot \mathbf{w}$. This allows computing the action value for the policy under a new task $\mathbf{w}'$ directly.
SP:2ffc4cfa0da20b936bb4abee091f2d056dc12dfc
A Mixture of Variational Autoencoders for Deep Clustering
1 INTRODUCTION

Clustering is one of the most fundamental techniques in unsupervised machine learning. It is the process of classifying data into several classes without using any label information. In the past decades, a plethora of clustering methods have been developed and successfully employed in various fields, including computer vision (Jolion et al., 1991), natural language processing (Ngomo & Schumacher, 2009), social networks (Handcock et al., 2007), and medical informatics (Gotz et al., 2011). The most well-known clustering approaches include the traditional k-means algorithm and the generative-model approach, which assumes that the data points are generated from a Mixture-of-Gaussians (MoG) whose parameters are learned via the Expectation-Maximization (EM) algorithm. However, using these methods on datasets of high-dimensional data is problematic since, in such vector spaces, the inter-point distances become less informative. As a result, deep learning methods have provided new opportunities for clustering (Min et al., 2018). These methods incorporate the ability to learn a (non-linear) mapping of the raw features into a low-dimensional vector space that hopefully allows a more feasible application of clustering methods. Deep learning methods are expected to automatically discover the most suitable non-linear representations for a specified task. However, a straightforward implementation of a "deep" k-means algorithm, jointly learning the embedding space and clustering the embedded data, leads to a trivial solution in which the data feature vectors collapse into a single point in the embedded space, and thus the k centroids collapse into a single spurious entity. For this reason, the objective function of many deep clustering methods is composed of both a clustering term computed in the embedded space and a regularization term in the form of a reconstruction error that avoids data collapsing. One broad family of successful deep clustering algorithms, which was shown to yield state-of-the-art results, is that of generative model-based methods. Most of these methods are based on the Variational Autoencoder framework (Kingma & Welling, 2014), e.g., Gaussian Mixture Variational Autoencoders (GMVAE) (Dilokthanakul et al., 2016) and Variational Deep Embedding (VaDE). Instead of an arbitrary prior on the latent variable, these algorithms use specific distributions, such as MoG distributions, that enable clustering at the bottleneck. This design results in a VAE-based training objective composed of a dominant reconstruction term and a second, parameter-regularization term, as discussed above. However, this objective seems to miss the clustering target, since the reconstruction term is not related to the clustering and the actual clustering is only associated with the optimization of the regularization term. This might result in inferior clustering performance, a degenerate generative model, and stability issues during training. We propose a solution that alleviates the issues introduced by previous deep clustering generative models. To that end, we propose the k-Deep Variational Auto Encoders (dubbed k-DVAE). Our k-DVAE improves upon the current state-of-the-art clustering methods in several facets: (1) A novel model that outperforms the current methods in terms of clustering accuracy.
(2) A novel Variational Bayesian framework that balances the data reconstruction and the actual clustering, differing from previous methods. (3) A network architecture that allows better generative modeling and thus more accurate data generation; importantly, this architecture uses fewer parameters than previous models. We implemented the k-DVAE algorithm on various standard document and image corpora and obtained improved results on all the datasets we experimented with, compared to state-of-the-art clustering methods.

2 RELATED WORK

Deep clustering has been studied extensively in the literature. The most common deep clustering methods aim to project the data into a non-linear, low-dimensional feature space, where the task of clustering appears to be feasible; traditional clustering methods are then applied to perform the actual clustering. Previous works have employed autoencoders (Yang et al., 2016; Ghasedi Dizaji et al., 2017; Yang et al., 2017; Fogel et al., 2019; Opochinsky et al., 2020), Variational Autoencoders (VAEs) (Jiang et al., 2016; Dilokthanakul et al., 2016; Yang et al., 2019; Li et al., 2019), and Generative Adversarial Networks (GANs) (Springenberg, 2015; Chen et al., 2016). IMSAT (Hu et al., 2017) is another recent method, which augments the training data. Our method does not make any use of augmented data during training, and therefore we do not consider IMSAT an appropriate or fair baseline for comparison. Additionally, the GMVAE method has been shown to yield inferior performance compared to the other VAE-based deep clustering methods, hence we do not present it in our evaluations. Among the aforementioned works, VaDE (Jiang et al., 2016) and k-DAE (Opochinsky et al., 2020) are the most relevant to ours. Both VaDE and our work utilize the Variational Bayes framework and use a probabilistic generative process to define the data generation model. The difference lies in both the generative process and the use of several autoencoders: our network consists of a set of k autoencoders, each specializing in encoding and reconstructing a different cluster. The k-DAE architecture also consists of a set of k autoencoders, but does not consider generative modelling, which, as we show, proves to be more powerful and yields significantly better clustering performance. The recent state-of-the-art DGG method (Yang et al., 2019) was built on the foundations of VaDE and integrates graph embeddings that serve as a regularization over the VaDE objective. Using the revised DGG objective, each pair of samples connected on the learned graph will have similar posterior distributions, measured with the Jensen-Shannon (JS) divergence similarity metric. The other baselines used in this study are described in Section 4.2.

3 THE k-DVAE CLUSTERING ALGORITHM

In this section, we describe our k-Deep Variational Auto Encoders (dubbed k-DVAE). First, we formulate the generative model that our algorithm is based on. Next, we derive the optimization objective. Then we discuss the differences between our model and previous VAE-based algorithms such as VaDE (Jiang et al., 2016) and illustrate the advantages of our approach.

3.1 GENERATIVE MODEL

In our generative modeling, we assume that the data are drawn from a mixture of VAEs, each with a standard Gaussian latent r.v., as follows:
1. Draw a cluster $y$ by sampling from $p(y = i) = \alpha_i$, $i = 1, \ldots, k$.
2. Sample a latent r.v. $z$ from the unit normal distribution, $z \sim \mathcal{N}(0, I)$.
3. Sample an observed r.v. $x$: (a) if $x$ is a real-valued vector, sample a data vector using the conditional distribution $x \mid (z, y = i) \sim \mathcal{N}(\mu_{\theta_i}(z), \Sigma_{\theta_i}(z))$; (b) if $x$ is a binary vector, sample a data vector using the conditional distribution $x \mid (z, y = i) \sim \mathrm{Ber}(\mu_{\theta_i}(z))$.

Here $\theta_i$ is the stacked parameter vector of the $i$-th neural network (NN). It defines a decoder NN corresponding to the $i$-th cluster, $1 \le i \le k$, assuming that the total number of clusters is $k$; $\mu_{\theta_i}(z)$ and $\Sigma_{\theta_i}(z)$ are computed by a decoder NN with input $z$ and parameters $\theta_i$. We denote the parameter set of all the decoders by $\theta = \{\theta_1, \ldots, \theta_k\}$. Note that the latent data representation $z$ is drawn independently of the selected class $y$; the class only affects the generation of the sample $x$.

3.2 LEARNING THE MODEL PARAMETERS BY OPTIMIZING A VARIATIONAL LOWER BOUND

Direct optimization of the likelihood function

$$p(x; \theta) = \sum_{y} \int_{z} p(z)\, p(y)\, p(x \mid z, y; \theta)\, dz$$

is intractable. Instead, we can use variational approximation methods and learn the model parameters by maximizing the Evidence Lower BOund (ELBO). The $\mathrm{ELBO}(\theta, \lambda)$ expression is given by:

$$\mathrm{ELBO}(\theta, \lambda) = \sum_{y} \int_{z} q(y, z \mid x; \lambda) \log p(x \mid y, z; \theta)\, dz - D_{KL}\big(q(y, z \mid x; \lambda)\,\|\, p(y, z; \theta)\big), \quad (1)$$

where $D_{KL}$ is the Kullback-Leibler (KL) divergence between two density functions, and $q(y, z \mid x; \lambda)$ is a conditional density function parametrized by $\lambda$. We use an approximate conditional density $q(y, z \mid x)$ that mirrors the structure of the generative model. For each cluster we define an encoder that transforms the input $x$ into the latent space of that cluster:

$$q(y = i, z \mid x; \lambda) = q(y = i \mid x)\, q(z \mid x, y = i; \lambda_i), \quad q(z \mid x, y = i; \lambda_i) = \mathcal{N}\big(z;\, \mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x)\big)$$

where $\mu_{\lambda_i}(x)$ and $\Sigma_{\lambda_i}(x)$ are computed by an encoder NN with input $x$ and parameter set $\lambda_i$, and we use the notation $\lambda = \{\lambda_1, \ldots, \lambda_k\}$. The first term of the ELBO expression (1) can be written as:

$$\sum_{y} \int_{z} q(y, z \mid x; \lambda) \log p(x \mid y, z; \theta)\, dz = \sum_{i} q(y = i \mid x)\, \mathbb{E}_{q(z \mid x, y = i; \lambda_i)} \log \mathcal{N}\big(x;\, \mu_{\theta_i}(z), \Sigma_{\theta_i}(z)\big). \quad (2)$$

We next use Monte-Carlo sampling to approximate the expectation in Eq. (2):

$$\mathbb{E}_{q(z \mid x, y = i; \lambda_i)} \log \mathcal{N}\big(x;\, \mu_{\theta_i}(z), \Sigma_{\theta_i}(z)\big) \approx \log \mathcal{N}\big(x;\, \mu_{\theta_i}(z_i), \Sigma_{\theta_i}(z_i)\big), \quad (3)$$

such that $z_i \mid (x, y = i)$ is sampled from $\mathcal{N}(\mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x))$. Applying the chain rule for KL divergence to the second term of the ELBO expression (1), we get:

$$D_{KL}\big(q(y, z \mid x; \lambda)\,\|\, p(y, z; \theta)\big) = D_{KL}\big(q(y \mid x; \lambda)\,\|\, p(y; \theta)\big) + \sum_{i} q(y = i \mid x)\, D_{KL}\big(\mathcal{N}(\mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x))\,\|\, \mathcal{N}(0, I)\big). \quad (4)$$

We next replace the soft clustering in Eq. (3) and Eq. (4) by a hard clustering:

$$\sum_{i=1}^{k} q(y = i \mid x)\Big(\log \mathcal{N}\big(x;\, \mu_{\theta_i}(z_i), \Sigma_{\theta_i}(z_i)\big) - D_{KL}\big(\mathcal{N}(\mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x))\,\|\, \mathcal{N}(0, I)\big)\Big) \approx \max_{i}\Big(\log \mathcal{N}\big(x;\, \mu_{\theta_i}(z_i), \Sigma_{\theta_i}(z_i)\big) - D_{KL}\big(\mathcal{N}(\mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x))\,\|\, \mathcal{N}(0, I)\big)\Big). \quad (5)$$

Finally, by neglecting the term $D_{KL}(q(y \mid x)\,\|\, p(y; \theta))$ in (4) (or, equivalently, setting $q(y \mid x) = p(y; \theta)$), we obtain the following objective for optimization:

$$\mathrm{ELBO}(\theta, \lambda) \approx \max_{i}\Big\{\log \mathcal{N}\big(x;\, \mu_{\theta_i}(z_i), \Sigma_{\theta_i}(z_i)\big) - D_{KL}\big(\mathcal{N}(\mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x))\,\|\, \mathcal{N}(0, I)\big)\Big\} \quad \text{s.t. } z_i \sim \mathcal{N}(\mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x)). \quad (6)$$
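Two ingredients of the objective in Eq. (6) have simple closed forms when the encoder covariances are diagonal, as is standard for VAEs: the reparameterized sample z_i and the KL term to the standard normal. A hedged sketch of both helpers (the diagonal-covariance parameterization is an assumption of this illustration):

```python
import torch

def reparameterize(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """D_KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * torch.sum(torch.exp(logvar) + mu ** 2 - 1.0 - logvar, dim=-1)
```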
Algorithm 1: ELBO score computation.
Input: data sample x. Output: estimated score.
for i = 1 to k do
  Compute $\mu_{\lambda_i}(x)$ and $\Sigma_{\lambda_i}(x)$ using the i-th encoder.
  Draw $z_i \sim \mathcal{N}(\mu_{\lambda_i}(x), \Sigma_{\lambda_i}(x))$.
  Compute $\mu_{\theta_i}(z_i)$ and $\Sigma_{\theta_i}(z_i)$ using the i-th decoder.
end for
Compute the ELBO score using Eq. (6).

Algorithm 2: Hard clustering.
Input: data sample x. Output: estimated cluster $\hat{y}(x)$ of x.
for i = 1 to k do
  Compute $\bar{z}_i \leftarrow \mu_{\lambda_i}(x)$ using the i-th encoder.
  Compute $\mu_{\theta_i}(\bar{z}_i)$ and $\Sigma_{\theta_i}(\bar{z}_i)$ using the i-th decoder.
end for
Compute the cluster $\hat{y}(x)$ using Eq. (8).

When optimizing the ELBO expression, we sample the Gaussian r.v. $z_i \mid (x, y = i)$ using the reparameterization trick. Note that the ELBO objective function (6) consists of a reconstruction term and a regularization term, and both are involved in the clustering decision. In the derivation of the objective function above we assumed that $x$ is a real-valued vector; the derivation of the ELBO objective function for the discrete case is similar. The score computation procedure is depicted in Algorithm 1, and the overall architecture of the autoencoder used in training is depicted in Fig. 1.
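Algorithms 1 and 2 map directly onto code: run each of the k encoder/decoder pairs, score each cluster by its reconstruction-minus-KL term, and take the max (Eq. 6) or the argmax for the hard assignment. The sketch below assumes per-cluster modules encoders[i] and decoders[i] that return diagonal-Gaussian parameters, and reuses the two helpers defined above; following the paper, the hard-clustering path uses the encoder mean instead of a sample.

```python
import math
import torch

def per_cluster_scores(x, encoders, decoders, sample=True):
    """Per-cluster terms of Eq. (6) for a batch x; returns shape (batch, k)."""
    scores = []
    for enc, dec in zip(encoders, decoders):
        mu_z, logvar_z = enc(x)                       # i-th encoder
        z = reparameterize(mu_z, logvar_z) if sample else mu_z
        mu_x, logvar_x = dec(z)                       # i-th decoder
        # log N(x; mu_x, diag(exp(logvar_x))), summed over data dimensions
        log_px = -0.5 * torch.sum(
            math.log(2 * math.pi) + logvar_x
            + (x - mu_x) ** 2 / torch.exp(logvar_x), dim=-1)
        scores.append(log_px - kl_to_standard_normal(mu_z, logvar_z))
    return torch.stack(scores, dim=-1)

def elbo_score(x, encoders, decoders):                # Algorithm 1
    return per_cluster_scores(x, encoders, decoders).max(dim=-1).values

def hard_cluster(x, encoders, decoders):              # Algorithm 2 (mean z)
    return per_cluster_scores(x, encoders, decoders,
                              sample=False).argmax(dim=-1)
```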
The paper proposes to cluster data using k different VAEs. The method differs from the existing VAE-based deep clustering method (VaDE), which uses only one VAE but employs a Gaussian mixture prior to achieve the clustering goal. The difficulty of the proposed model lies in how to train it efficiently. To this end, some approximations are made to the ELBO, replacing the expectation with the MAP value and dropping a KL term. These approximations are key to the training, but are not well justified. Experiments are conducted on several image and text datasets and show superior performance compared to existing deep clustering methods.
SP:1e11e9ad7288da902ed69a7735d1d89e81692b54
BERTology Meets Biology: Interpreting Attention in Protein Language Models
1 INTRODUCTION

The study of proteins, the fundamental macromolecules governing biology and life itself, has led to remarkable advances in understanding human health and the development of disease therapies. The decreasing cost of sequencing technology has enabled vast databases of naturally occurring proteins (El-Gebali et al., 2019a), which are rich in information for developing powerful machine learning models of protein sequences. For example, sequence models leveraging principles of co-evolution, whether modeling pairwise or higher-order interactions, have enabled prediction of structure or function (Rollins et al., 2019). Proteins, as sequences of amino acids, can be viewed precisely as a language and therefore modeled using neural architectures developed for natural language. In particular, the Transformer (Vaswani et al., 2017), which has revolutionized unsupervised learning for text, shows promise for a similar impact on protein sequence modeling. However, the strong performance of the Transformer comes at the cost of interpretability, and this lack of transparency can hide underlying problems such as model bias and spurious correlations (Niven & Kao, 2019; Tan & Celis, 2019; Kurita et al., 2019). In response, much NLP research now focuses on interpreting the Transformer, e.g., the subspecialty of "BERTology" (Rogers et al., 2020), which specifically studies the BERT model (Devlin et al., 2019). In this work, we adapt and extend this line of interpretability research to protein sequences. We analyze Transformer protein models through the lens of attention and present a set of interpretability methods that capture the unique functional and structural characteristics of proteins. We also compare the knowledge encoded in attention weights to that captured by hidden-state representations. Finally, we present a visualization of attention contextualized within three-dimensional protein structure. Our analysis reveals that attention captures high-level structural properties of proteins, connecting amino acids that are spatially close in three-dimensional structure but apart in the underlying sequence (Figure 1a). We also find that attention targets binding sites, a key functional component of proteins (Figure 1b). Further, we show how attention is consistent with a classic measure of similarity between amino acids: the substitution matrix. Finally, we demonstrate that attention captures progressively higher-level representations of structure and function with increasing layer depth.

Figure 1: Examples of how specialized attention heads in a Transformer recover protein structure and function, based solely on language model pre-training. Orange lines depict attention between amino acids (line width proportional to attention weight; values below 0.1 hidden). Heads were selected based on correlation with ground-truth annotations of contact maps and binding sites. (a) Attention in head 12-4, which targets amino acid pairs that are close in physical space (see inset subsequence 117D-157I) but lie apart in the sequence; the example is a de novo designed TIM-barrel (5BVL) with characteristic symmetry. (b) Attention in head 7-1, which targets binding sites, a key functional component of proteins; the example is HIV-1 protease (7HVP), where the primary location receiving attention is 27G, a binding site for protease-inhibitor small-molecule drugs. Visualizations based on the NGL Viewer (Rose et al.
, 2018; Rose & Hildebrand, 2015; Nguyen et al., 2017). In contrast to NLP, which aims to automate a capability that humans already have (understanding natural language), protein modeling also seeks to shed light on biological processes that are not fully understood. Thus we also discuss how interpretability can aid scientific discovery.

2 BACKGROUND: PROTEINS

In this section we provide background on the biological concepts discussed in later sections.

Amino acids. Just as language is composed of words from a shared lexicon, every protein sequence is formed from a vocabulary of amino acids, of which 20 are commonly observed. Amino acids may be denoted by their full name (e.g., Proline), a 3-letter abbreviation (Pro), or a single-letter code (P).

Substitution matrix. While word synonyms are encoded in a thesaurus, proteins that are similar in structure or function are captured in a substitution matrix, which scores pairs of amino acids on how readily they may be substituted for one another while maintaining protein viability. One common substitution matrix is BLOSUM (Henikoff & Henikoff, 1992), which is derived from co-occurrence statistics of amino acids in aligned protein sequences.

Protein structure. Though a protein may be abstracted as a sequence of amino acids, it represents a physical entity with a well-defined three-dimensional structure (Figure 1). Secondary structure describes the local segments of proteins; two commonly observed types are the alpha helix and the beta sheet. Tertiary structure encompasses the large-scale formations that determine the overall shape and function of the protein. One way to characterize tertiary structure is by a contact map, which describes the pairs of amino acids that are in contact (within 8 angstroms of one another) in the folded protein structure but lie apart (by at least 6 positions) in the underlying sequence (Rao et al., 2019).

Binding sites. Proteins may also be characterized by their functional properties. Binding sites are protein regions that bind with other molecules (proteins, natural ligands, and small-molecule drugs) to carry out a specific function. For example, the HIV-1 protease is an enzyme responsible for a critical process in the replication of HIV (Brik & Wong, 2003). It has a binding site, shown in Figure 1b, that is a target of drug development to ensure inhibition.

Post-translational modifications. After a protein is translated from RNA, it may undergo additional modifications, e.g., phosphorylation, which play a key role in protein structure and function.

3 METHODOLOGY

Model. We demonstrate our interpretability methods on five Transformer models that were pretrained through language modeling of amino acid sequences. We primarily focus on the BERT-Base model from TAPE (Rao et al., 2019), which was pretrained on Pfam, a dataset of 31M protein sequences (El-Gebali et al., 2019b). We refer to this model as TapeBert. We also analyze 4 pre-trained Transformer models from ProtTrans (Elnaggar et al., 2020): ProtBert and ProtBert-BFD, which are 30-layer, 16-head BERT models; ProtAlbert, a 12-layer, 64-head ALBERT (Lan et al., 2020) model; and ProtXLNet, a 30-layer, 16-head XLNet (Yang et al., 2019) model. ProtBert-BFD was pretrained on BFD (Steinegger & Söding, 2018), a dataset of 2.1B protein sequences, while the other ProtTrans models were pretrained on UniRef100 (Suzek et al., 2014), which includes 216M protein sequences.
A summary of these 5 models is presented in Appendix A.1. Here we present an overview of BERT, with additional details on all models in Appendix A.2. BERT takes as input a sequence of amino acids $x = (x_1, \ldots, x_n)$ and applies a series of encoders. Each encoder layer $\ell$ outputs a sequence of continuous embeddings $(h_1^{(\ell)}, \ldots, h_n^{(\ell)})$ using a multi-headed attention mechanism. Each attention head in a layer produces a set of attention weights $\alpha$ for an input, where $\alpha_{i,j} > 0$ is the attention from token $i$ to token $j$, such that $\sum_j \alpha_{i,j} = 1$. Intuitively, attention weights define the influence of every token on the next layer's representation of the current token. We denote a particular head by <layer>-<head_index>, e.g., head 3-7 for the 3rd layer's 7th head.

Attention analysis. We analyze how attention aligns with various protein properties. For properties of token pairs, e.g., contact maps, we define an indicator function $f(i, j)$ that returns 1 if the property is present in token pair $(i, j)$ (e.g., if amino acids $i$ and $j$ are in contact), and 0 otherwise. We then compute the proportion of high-attention token pairs ($\alpha_{i,j} > \theta$) where the property is present, aggregated over a dataset $X$:

$$p_\alpha(f) = \frac{\sum_{x \in X} \sum_{i=1}^{|x|} \sum_{j=1}^{|x|} f(i, j) \cdot \mathbf{1}_{\alpha_{i,j} > \theta}}{\sum_{x \in X} \sum_{i=1}^{|x|} \sum_{j=1}^{|x|} \mathbf{1}_{\alpha_{i,j} > \theta}} \quad (1)$$

where $\theta$ is a threshold that selects high-confidence attention weights. We also present an alternative, continuous version of this metric in Appendix B.1. For properties of individual tokens, e.g., binding sites, we define $f(i, j)$ to return 1 if the property is present in token $j$ (e.g., if $j$ is a binding site). In this case, $p_\alpha(f)$ equals the proportion of attention that is directed to the property (e.g., the proportion of attention focused on binding sites). When applying these metrics, we include two types of checks to ensure that the results are not due to chance. First, we test that the proportion of attention that aligns with particular properties is significantly higher than the background frequency of these properties, taking into account the Bonferroni correction for the multiple hypotheses corresponding to multiple attention heads. Second, we compare the results to a null model, which is an instance of the model with randomly shuffled attention weights. We describe these methods in detail in Appendix B.2.
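Equation 1 reduces to a ratio of counts over high-attention pairs, so for a single sequence it can be computed directly from a head's attention matrix and a boolean property matrix such as a contact map; in practice the two sums run over the whole dataset X. A sketch (array names are illustrative):

```python
import numpy as np

def p_alpha(attn, prop, theta=0.3):
    """Eq. 1 for one sequence: proportion of high-attention pairs
    (alpha_ij > theta) where the property holds.

    attn: (n, n) float attention weights of a single head
    prop: (n, n) boolean indicator f(i, j), e.g. a contact map
    """
    high = attn > theta
    if high.sum() == 0:
        return float('nan')          # no high-confidence pairs to analyze
    return float((prop & high).sum() / high.sum())
```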
Probing tasks. We also perform probing tasks on the model, which test the knowledge contained in model representations by using them as inputs to a classifier that predicts a property of interest (Veldhoen et al., 2016; Conneau et al., 2018; Adi et al., 2016). The performance of the probing classifier serves as a measure of the knowledge of the property encoded in the representation. We run both embedding probes, which assess the knowledge encoded in the output embeddings of each layer, and attention probes (Reif et al., 2019; Clark et al., 2019), which measure the knowledge contained in the attention weights for pairwise features. Details are provided in Appendix B.3.

Datasets. For our analyses of amino acids and contact maps, we use a curated dataset from TAPE based on ProteinNet (AlQuraishi, 2019; Fox et al., 2013; Berman et al., 2000; Moult et al., 2018), which contains amino acid sequences annotated with spatial coordinates (used for the contact map analysis). For the analysis of secondary structure and binding sites we use the Secondary Structure dataset (Rao et al., 2019; Berman et al., 2000; Moult et al., 2018; Klausen et al., 2019) from TAPE. We employed a taxonomy of secondary structure with three categories: Helix, Strand, and Turn/Bend, with the last two belonging to the higher-level beta sheet category (Sec. 2). We used this taxonomy to study how the model understood structurally distinct regions of beta sheets. We obtained token-level binding site and protein modification labels from the Protein Data Bank (Berman et al., 2000). For analyzing attention, we used a random subset of 5000 sequences from the training split of the respective datasets (note that none of the aforementioned annotations were used in model training). For the diagnostic classifier, we used the respective training splits for training and the validation splits for evaluation. See Appendix B.4 for additional details.

Experimental details. We exclude attention to the [SEP] delimiter token, as it has been shown to be a "no-op" attention token (Clark et al., 2019), as well as attention to the [CLS] token, which is not explicitly used in language modeling. We only include results for attention heads where at least 100 high-confidence attention arcs are available for analysis. We set the attention threshold θ to 0.3 to select for high-confidence attention while retaining sufficient data for analysis. We truncate all protein sequences to a length of 512 to reduce memory requirements. We note that all of the above analyses are purely associative and do not attempt to establish a causal link between attention and model behavior (Vig et al., 2020; Grimsley et al., 2020), nor to explain model predictions (Jain & Wallace, 2019; Wiegreffe & Pinter, 2019).
The authors analyzed how the attention and embeddings of Transformers trained on protein sequences correlate with protein properties such as pairwise contacts, binding sites, and post-translational modifications. The paper extends existing work such as Rives 2020 ('Biological structure and function emerge…') by showing that layers learn progressively more complex protein features with increasing depth and by proposing new visualization techniques. The paper is mostly clearly written, while the methodological contributions are incremental. The evaluation needs to be strengthened.
SP:b33ac0129381deaa5375b1f6b06b70d58f16a5a9
Optimization Planning for 3D ConvNets
1 INTRODUCTION

The recent advances in 3D Convolutional Neural Networks (3D ConvNets) have successfully pushed the limits and improved the state of the art of video recognition. For instance, an ensemble of LGD-3D networks (Qiu et al., 2019) achieves 17.88% average error in the trimmed video classification task of the ActivityNet Challenge 2019, which is dramatically lower than the error (29.3%) attained by the earlier I3D networks (Carreira & Zisserman, 2017). This result indicates the advantage and great potential of 3D ConvNets for improving the performance of video recognition. Despite these impressive advances, learning effective 3D ConvNets for video recognition remains challenging due to the large variations and complexities of video content. Existing works on 3D ConvNets (Tran et al., 2015; Carreira & Zisserman, 2017; Tran et al., 2018; Wang et al., 2018c; Feichtenhofer et al., 2019; Qiu et al., 2017; 2019) predominantly focus on the design of network architectures but seldom explore how to train a 3D ConvNet in a principled way. The difficulty in training 3D ConvNets originates from the high flexibility of the training scheme. Compared to the training of 2D ConvNets (Ge et al., 2019; Lang et al., 2019; Yaida, 2019), the involvement of the temporal dimension in 3D ConvNets brings two new problems: how many frames should be sampled from the video, and how should those frames be sampled. First, the length of the video clip is a trade-off controlling the balance between training efficiency and long-range temporal modeling when learning 3D ConvNets. On one hand, training with short clips (16 frames) (Tran et al., 2015; Qiu et al., 2017) generally leads to fast convergence with large mini-batches and also alleviates the overfitting problem through the data augmentation brought by sampling short clips. On the other hand, recent works (Varol et al., 2018; Wang et al., 2018c; Qiu et al., 2019) have demonstrated a better ability to capture long-range dependencies when training with long clips (over 100 frames), at the expense of training time. The second issue is the sampling strategy. Uniform sampling (Fan et al., 2019; Jiang et al., 2019; Martínez et al., 2019) offers the network a fast-forward overview of the entire video, while consecutive sampling (Tran et al., 2015; Qiu et al., 2017; 2019; Varol et al., 2018; Wang et al., 2018c) better captures the spatio-temporal relations across frames. Given these complex choices of training scheme, learning a powerful 3D ConvNet often requires significant engineering effort by human experts to determine the optimal strategy for each dataset. That motivates us to automate the design of the training strategy for 3D ConvNets. In this paper, we propose an optimization planning mechanism that seeks the optimal training strategy of 3D ConvNets adaptively. To this end, our optimization planning studies three problems: 1) whether to choose consecutive or uniform sampling; 2) when to increase the length of the input clip; and 3) when to decrease the learning rate. Specifically, we decompose the training process into several training states. Each state is assigned fixed hyper-parameters, namely the sampling strategy, the length of the input clip, and the learning rate. A transition between states represents a change of hyper-parameters during training. Therefore, the training process is determined by the permutation of the different states and the number of epochs spent in each state.
Here, we build a candidate transition graph to define the valid transitions between states. The search for the best optimization strategy is then equivalent to seeking the optimal path from the initial state to the final state on the graph, which can be solved by a dynamic programming algorithm. To determine the best number of epochs for each state in this process, we propose a knee-point estimation method based on fitting the performance-epoch curve. In general, our optimization planning can be viewed as a training-scheme controller and is readily applicable to training other neural networks in stages with multiple hyper-parameters. To the best of our knowledge, our work is the first to address the issue of optimization planning for 3D ConvNets training. The issue also leads to an elegant view of how the order and epochs for different hyper-parameters should be planned adaptively. We uniquely formulate the problem as seeking an optimal training path and devise a new 3D ConvNet with a dual-head classifier. Extensive experiments on seven datasets demonstrate the effectiveness of our proposal; with optimization planning, our 3D ConvNets achieve superior results to several state-of-the-art techniques.

2 RELATED WORK

Early works using Convolutional Neural Networks for video recognition are mostly extended from 2D ConvNets for image classification (Karpathy et al., 2014; Simonyan & Zisserman, 2014; Feichtenhofer et al., 2016; Wang et al., 2016). These approaches often treat a video as a sequence of frames or optical-flow images, and the pixel-level temporal evolution across consecutive frames is seldom explored. To alleviate this issue, the 3D ConvNets in Ji et al. (2013) were devised to directly learn spatio-temporal representations from a short video clip via 3D convolution. Tran et al. (2015) design a widely adopted 3D ConvNet, namely C3D, consisting of 3D convolutions and 3D poolings optimized on the large-scale Sports1M (Karpathy et al., 2014) dataset. Despite encouraging performance, training 3D ConvNets is computationally expensive and the model size grows massively. Later, in Qiu et al. (2017); Tran et al. (2018); Xie et al. (2018), decomposed 3D convolution was proposed to simulate one 3D convolution with one 2D spatial convolution plus one 1D temporal convolution. Recently, more advanced techniques have been presented for 3D ConvNets, including inflating 2D convolutions (Carreira & Zisserman, 2017), non-local pooling (Wang et al., 2018c), and local-and-global diffusion (Qiu et al., 2019). Our work expands the research horizons of 3D ConvNets and focuses on improving 3D ConvNets training by adaptively planning the optimization process. Related works on 2D ConvNets training (Chee & Toulis, 2018; Lang et al., 2019; Yaida, 2019) automate the training strategy by only changing the learning rate adaptively. Our problem is much more challenging, especially when the temporal dimension is additionally considered and involved in the training scheme of 3D ConvNets. To enhance 3D ConvNets training, recent works (Wang et al., 2018c; Qiu et al., 2019) first train 3D ConvNets with short input clips and then fine-tune the networks with lengthy clips, which balances training efficiency and long-range temporal modeling. The multigrid method (Wu et al.
, 2020) further cyclically changes the spatial resolution and temporal duration of input clips for a more efficient optimization of 3D ConvNets. The research in this paper contributes by studying not only the training of 3D ConvNets with multiple input clip lengths, but also the adaptive scheduling of changes in the input clip length through optimization planning.

3 OPTIMIZATION PLANNING

3.1 PROBLEM FORMULATION

The goal of optimization planning is to automate the learning strategy of 3D ConvNets. Formally, the optimization process of a 3D ConvNet can be represented as an optimization path $P = \langle S_0, S_1, \ldots, S_N \rangle$, which consists of one initial state $S_0$ and $N$ intermediate states. Each intermediate state is assigned fixed hyper-parameters, and training is performed with these $N$ different settings one by one. The number of training epochs for each setting is given by $T = \{t_1, t_2, \ldots, t_N\}$, in which $t_i$ denotes the number of epochs when moving from $S_{i-1}$ to $S_i$. The hyper-parameters include the sampling strategy $\in \{cs, us\}$, the length of the input clip $\in \{l_1, l_2, \ldots, l_{N_l}\}$, and the learning rate $\in \{r_1, r_2, \ldots, r_{N_r}\}$, where $cs$ and $us$ denote consecutive sampling and uniform sampling, respectively. In this case, there are $2 \times N_l \times N_r$ valid types of training states. The objective of optimization planning is to seek the optimal strategy $\{P, T\}$ by maximizing the performance of the final state $S_N$:

$$\underset{P,\, T}{\text{maximize}}\ \ V(S_N), \quad (1)$$

where $V(\cdot)$ is the target performance, i.e., the mean accuracy on the validation set in our case.

3.2 OPTIMIZATION PATH

To plan the optimal permutation of training states, we first choose a final state $S_N$, which usually has a low learning rate and lengthy input clips. The problem of seeking an optimal optimization path to $S_N$ then naturally decomposes into the subproblem of finding the optimization path to an intermediate state $S_i$ plus the state transition from $S_i$ to $S_N$. As such, the problem can be solved by dynamic programming. Specifically, the solution for the optimization path $P(S_N)$ is given in recursive form:

$$P(S_N) = \langle P(S_{i^*}), S_N \rangle, \quad i^* = \underset{i}{\arg\max}\ \{V(S_i \to S_N)\}. \quad (2)$$

When executing the transfer from state $S_i$ to state $S_N$, we fine-tune the 3D ConvNet at state $S_i$ using the hyper-parameters of state $S_N$. We then evaluate the fine-tuned model on the validation set to measure the priority of this transition, i.e., $V(S_i \to S_N)$. We choose the state $S_{i^*}$ that achieves the highest priority of transition to state $S_N$ as the preceding state of $S_N$; in other words, the optimal path for $S_N$ derives from the best-performing preceding state $S_{i^*}$. Here, we propose to pre-define all the valid transitions in a directed acyclic graph and determine the best optimization path of each state one by one in topological order. Figure 1(a) shows one example of the pre-defined transition graph. In the example, we set the number of candidate input clip lengths $N_l = 3$ and the number of candidate learning rates $N_r = 3$; hence, there are $2 \times 3 \times 3 = 18$ candidate states. The possible transitions, i.e., the connections between states, are determined by the following principles: (1) Transitions between states with different sampling strategies are forbidden; we choose $S_9$ and $S_{18}$ as the final states for consecutive sampling and uniform sampling, respectively. (2) Training only starts from a high learning rate and short input clips.
(3) An intermediate state can only be transferred to a new state in which either the learning rate is decreased or the length of the input clip is increased. Please note that some very specific learning rate strategies, e.g., schedules with restarts or warmup, show that properly increasing the learning rate may benefit training. Nevertheless, there is still no clear principle for when to increase the learning rate, and thus it is very difficult to automate such schedules. The works that adaptively change the learning rate for 2D ConvNets training (Ge et al., 2019; Lang et al., 2019; Yaida, 2019) also do not take such cyclic schedules into account. As a result, we only consider schedules that decrease the learning rate in the transition graph. These principles simplify the transition graph and reduce the time cost of solving Eq. (2). We take this graph as the basic transition graph. Furthermore, we also build an extended transition graph by allowing the input clip length to be increased and the learning rate to be decreased simultaneously, as shown in Figure 1(b). In this graph, the training strategies are more flexible.
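Equation 2 is a longest-path-style recursion over the DAG of training states, so it can be implemented as dynamic programming in topological order. In the sketch below, evaluate(path, s) is a placeholder for the expensive step of fine-tuning along the best path found so far and measuring validation accuracy at state s; the graph encoding (a predecessor dict) is an assumption of this illustration, not the paper's code.

```python
def plan_optimization(states, predecessors, evaluate):
    """Dynamic programming over a candidate transition graph (Eq. 2).

    states:       list of states in topological order
    predecessors: dict mapping each state to its list of valid predecessors
    evaluate:     evaluate(path, s) -> validation accuracy after fine-tuning
                  along `path` and then training with state s's settings
    Returns the best optimization path and score for every state.
    """
    best_path, best_score = {}, {}
    for s in states:
        preds = predecessors.get(s, [])
        if not preds:                          # initial states: no transfer
            best_path[s] = [s]
            best_score[s] = evaluate([], s)
            continue
        # Eq. 2: pick the predecessor whose transition V(S_i -> s) is best.
        cand = [(evaluate(best_path[p], s), p) for p in preds]
        score, p_star = max(cand, key=lambda t: t[0])
        best_path[s] = best_path[p_star] + [s]
        best_score[s] = score
    return best_path, best_score
```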
The paper proposes a novel way to automatically tune 3D ConvNet hyper-parameters (learning rate, input clip length, sampling strategy). This is achieved by decomposing the optimization path into several states, where a state transition is triggered when the knee point on the performance-epoch curve is reached. Extensive experiments are conducted on popular video benchmarks and show that optimization planning is effective at improving accuracy and requires less time than a hand-tuned procedure.
SP:128ae7bc4a53360e9492783e7430da8c778f3d66
Score-based Causal Discovery from Heterogeneous Data
1 INTRODUCTION

Discovering causal relations among variables is a fundamental problem in various fields such as economics, biology, drug testing, and commercial decision making. Because conducting randomized controlled trials is usually expensive or even infeasible, discovering causal relations from observational data, i.e., causal discovery (Pearl, 2000; Spirtes et al., 2000), has received much attention over the past few decades. Early causal discovery algorithms can be roughly categorized into two types: constraint-based ones (e.g., PC (Spirtes et al., 2000)) and score-based ones (e.g., GES (Chickering, 2002)). In general, these methods cannot uniquely identify the causal graph but are guaranteed to output a Markov equivalence class. Since the seminal work by Shimizu et al. (2006), several methods have been developed that achieve identifiability of the whole causal structure by making use of constrained Functional Causal Models (FCMs), including the linear non-Gaussian model (Shimizu et al., 2006), the nonlinear additive noise model (Hoyer et al., 2009), and the post-nonlinear model (Zhang & Hyvärinen, 2009). Recently, Zheng et al. (2018) proposed a score-based method that formulates the causal discovery problem as continuous optimization with a structural constraint that ensures acyclicity. Based on this continuous structural constraint, several researchers further proposed to model the causal relations with neural networks (NNs) (Lachapelle et al., 2019; Yu et al., 2019; Zheng et al., 2019). Another recent work, Zhu & Chen (2019), used reinforcement learning (RL) for causal discovery, where the RL agent searches over the graph space and outputs the graph that fits the data best. The above approaches are designed for data from a single domain with a fixed causal model, with the limitation that many of the edge directions cannot be determined without strong functional constraints. In addition, the sample size of data from one domain is usually not large enough to guarantee small statistical estimation errors. One way to improve statistical reliability is to combine datasets from multiple domains, as in P-value meta-analyses (Lee, 2015; Marot et al., 2009). The idea of combining multiple-domain data is commonly seen in learning mixtures of Bayesian networks (Thiesson et al., 1998). However, while mixtures of Bayesian networks are usually used for density estimation, the purpose of causal analysis from multiple-domain data is completely different: it aims at discovering the underlying causal graphs for all domains. Regarding causal analysis from multiple-domain data, a key challenge is data heterogeneity: the data distribution may vary across domains. For example, in fMRI hippocampus signal analysis, the connection strength among different brain regions may change across subjects (domains). Due to the distribution shift, directly pooling the data from multiple domains may lead to spurious edges. Different ways to tackle this issue have been investigated, including sliding windows (Calhoun et al., 2014), online change point detection (Adams & MacKay, 2007), online undirected graph learning (Talih & Hengartner, 2005), locally stationary structure trackers (Kummerfeld & Danks, 2013), and regime-aware learning (Bendtsen, 2016).
However, these methods may suffer from high estimation variance due to sample scarcity, large type II errors, and a large number of statistical tests. Huang et al. (2015) recover causal relations with changing modules by making use of certain types of smoothness of the change, but their method does not explicitly locate the changing causal modules. Other similar methods, including Xing et al. (2010) and Song et al. (2009), reduce to online parameter learning because the causal directions are given. By utilizing the invariance property (Hoover, 1990; Tian & Pearl, 2001; Peters et al., 2016) and the more general independent change mechanism (Pearl, 2000), Ghassami et al. (2018) recently developed two methods, identical boundaries (IB) and minimal changes (MC), for causal discovery from multi-domain data. However, the proposed methods (1) assume causal sufficiency (i.e., all common causes of variables are measured), which usually does not hold in real circumstances, (2) are designed for linear systems only, and (3) are not capable of identifying causal directions from more than ten domains. Huang et al. (2019) proposed a more general approach called CD-NOD for both linear and nonlinear heterogeneous data, extending the PC algorithm to tackle the heterogeneity issue. However, inheriting the drawbacks of constraint-based methods, CD-NOD involves a multiple testing problem and is time-consuming due to the large number of independence tests. To overcome the limitations of existing works, we propose a Multiple-Domain Score Search (MDSS) method for causal discovery from heterogeneous data, which enjoys the following properties. (1) To avoid spurious edges when combining multi-domain data, MDSS searches over the space of augmented graphs, which includes an additional domain index as a surrogate variable to characterize the distribution shift. (2) The changing causal modules can be immediately identified from the recovered augmented graph. (3) Benefiting from causal invariance and the independent change mechanism, MDSS uses a novel Multiple-Domain Score (MDS) to help identify more causal directions beyond those in the Markov equivalence class from distribution-shifted data. (4) MDSS can be readily incorporated into off-the-shelf search strategies and is time-efficient and applicable to both linear and nonlinear data. (5) Theoretically, we show that MDSS is guaranteed to find the correct graph skeleton asymptotically, and to identify more causal directions than traditional score-based and constraint-based algorithms. Empirical studies on both synthetic and real data prove the efficacy of our method. 2 METHODOLOGY. In this section, we start with a brief introduction to causal discovery and distribution shifts (Section 2.1), and then in Sections 2.2 and 2.3 we introduce our proposed Multiple-Domain Score Search (MDSS). In Section 2.2, MDSS starts with a predefined graph search algorithm to learn the skeleton of the causal graph, using the linear Bayesian information criterion (BIC) score or the nonlinear generalized score (GS (Huang et al., 2018)) on the augmented causal system. Then in Section 2.3, MDSS further identifies causal directions with the Multiple-Domain Score (MDS), based on the skeleton identified in Section 2.2. Both theoretically and empirically, we show that MDSS can identify more directions than algorithms designed for i.i.d. or stationary data.
2.1 BACKGROUND IN CAUSAL DISCOVERY AND DISTRIBUTION SHIFTS. The basic causal discovery problem can be formulated as follows. Suppose there are $d$ observable random variables, i.e., $\mathbf{V} = (V_1, \dots, V_d)$. Each random variable satisfies the generating process $V_i = f_i(PA_i, \varepsilon_i)$, where $f_i$ is a function modeling the causal relation between $V_i$ and its parents $PA_i$, and $\varepsilon_i$ is a noise variable with non-zero variance. All the noise variables are independent of each other. The task of causal discovery is to recover the causal adjacency matrix $B$ given the observed data matrix $X \in \mathbb{R}^{T \times d}$, where $B_{ij} = 1$ indicates that $V_i$ is a parent of $V_j$, and $T$ is the sample size. We denote the underlying causal graph over $\mathbf{V}$ as $G_0$. For each $V_i$, we call $P(V_i \mid PA_i)$ its causal module. For a single domain, the joint probability can be factorized as $P(\mathbf{V}) = \prod_{i=1}^{d} P(V_i \mid PA_i)$. Suppose there are $n$ domains with distribution shifts (i.e., $P(\mathbf{V})$ changes across domains), which implies that some causal modules change across domains. The changes may be caused by variation of the functional models, causal strengths, or noise variances. Furthermore, we make the following assumptions. Assumption 1. The changes of causal modules can be represented as functions of the domain index $C$, denoted by $g(C)$. Assumption 2. There is no confounder in each single dataset, but we allow the changes of different causal modules to be dependent. Remark: if changes in several causal modules are dependent, they can be regarded as special "confounders" that simultaneously affect these causal modules. As a consequence of such confounders, previous causal discovery algorithms designed for i.i.d. or stationary data may output erroneous edges (see Section 3.1 for an illustration). Thus, causal discovery from multi-domain data with distribution shifts (i.e., heterogeneous data) can be much more difficult than that from single-domain data. 2.2 SKELETON ESTIMATION ON AUGMENTED GRAPHS. Under Assumptions 1 and 2, it is natural to consider $g(C)$ as an extra variable in order to remove any potential influence caused by these special confounders. We assume there are $L$ such confounders $(g_1(C), \dots, g_L(C))$. The causal relation between each observable variable $V_i$ and its parents $PA_i$ can be formalized as
$$V_i = f_i(PA_i, \mathbf{g}_i(C), \theta_i(C), \varepsilon_i), \qquad (1)$$
where $\mathbf{g}_i(C) \subseteq \{g_l(C)\}_{l=1}^{L}$ is the set of confounders that influence $V_i$, and $\theta_i(C)$ are the effective parameters in $V_i$'s causal module, which are also assumed to be functions of $C$ and mutually independent across variables. Let $G_0$ be the underlying causal graph over $\mathbf{V}$. We denote the graph resulting from adding arrows $\mathbf{g}_i(C) \to V_i$ and $\theta_i(C) \to V_i$ to $G_0$ for each $V_i$ in $\mathbf{V}$ as $G_{aug}$, defined over $\mathbf{V} \cup \{g_l(C)\}_{l=1}^{L} \cup \{\theta_i(C)\}_{i=1}^{d}$. We call $G_{aug}$ an augmented graph (see Figure 1(d) for an example), which satisfies the following assumption. Assumption 3. The joint distribution over $\mathbf{V} \cup \{g_l(C)\}_{l=1}^{L} \cup \{\theta_i(C)\}_{i=1}^{d}$ is Markov and faithful to $G_{aug}$. To remove the potential influence from confounders and recover causal relations from multiple domains, one approach is to perform causal discovery algorithms on the augmented graph. While $\{g_l(C)\}_{l=1}^{L}$ and $\{\theta_i(C)\}_{i=1}^{d}$ are not directly observed, we take $C$ as a surrogate variable (Huang et al., 2019) for them, because $C$ is always available as a domain index.
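To make the augmented-system setup concrete, the following toy generator produces multi-domain data in the spirit of Equation (1); the three-variable chain, the drifting coefficient, and all constants here are hypothetical illustration choices, not the paper's benchmark.

```python
import numpy as np

def generate_heterogeneous_data(n_domains=5, samples_per_domain=200, seed=0):
    """Toy linear SEM V1 -> V2 -> V3 where the V2 -> V3 module changes
    with the domain index C (a hypothetical example)."""
    rng = np.random.default_rng(seed)
    rows = []
    for c in range(n_domains):
        theta_c = 1.0 + 0.5 * c  # theta(C): causal strength drifts with C
        v1 = rng.normal(0, 1, samples_per_domain)
        v2 = 0.8 * v1 + rng.normal(0, 1, samples_per_domain)
        v3 = theta_c * v2 + rng.normal(0, 1, samples_per_domain)
        # Augment each sample with the surrogate variable C (domain index)
        rows.append(np.column_stack([v1, v2, v3,
                                     np.full(samples_per_domain, c)]))
    # Pooled matrix X in R^{nT x (d+1)}, matching the augmented system
    return np.vstack(rows)

X = generate_heterogeneous_data()
print(X.shape)  # (1000, 4): columns V1, V2, V3, C
```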
Given Assumptions 1, 2 and 3, one can apply any score-based method over $\mathbf{V} \cup \{C\}$ to recover the causal relations among the variables $\mathbf{V}$, as if $\{g_l(C)\}_{l=1}^{L} \cup \{\theta_i(C)\}_{i=1}^{d}$ were known. For simplicity, we denote the graph over $\mathbf{V} \cup \{C\}$ as the augmented graph as well. Since $C$ is the domain index, $P(C)$ follows a discrete uniform distribution. Correspondingly, the generating process of non-stationary data can be considered as follows: first, we generate random values from $P(C)$; then we generate data points over $\mathbf{V}$ according to the SEM in Equation 1; finally, the generated data points are sorted in ascending order according to the values of $C$ (i.e., data points having the same value of $C$ are regarded as belonging to the same domain). In other words, we observe the distribution $P(\mathbf{V} \mid C)$, where $P(\mathbf{V} \mid C)$ may change across different values of $C$, resulting in non-stationary data. Note that if we do not include $C$ in the system explicitly, samples of $\mathbf{V}$ are not i.i.d. However, after explicitly including the domain index $C$ in the system, $P(\mathbf{V}, C)$ is fixed, and thus the pooled data are i.i.d. samples from the distribution $P(\mathbf{V}, C)$. Before stating our main result, we first give the definitions of globally and locally consistent scoring criteria, which will be used throughout the paper. Definition 1 (Globally Consistent Scoring Criterion). Let $D$ be a dataset consisting of $T$ i.i.d. samples from some distribution $P(\cdot)$. Let $\mathcal{H}$ and $\mathcal{G}$ be any DAGs. A scoring criterion $S$ is globally consistent if the following two properties hold as $T \to \infty$: 1. If $\mathcal{H}$ contains $P$ and $\mathcal{G}$ does not contain $P$, then $S(\mathcal{H}, D) > S(\mathcal{G}, D)$. (Here, a larger score means the corresponding graph is closer to the equivalence class of the true DAG, while the MDS defined in Section 2.3 should be regarded as a type of "loss function" which is to be minimized.) 2. If $\mathcal{H}$ and $\mathcal{G}$ both contain $P$, and $\mathcal{G}$ contains fewer parameters than $\mathcal{H}$, then $S(\mathcal{H}, D) < S(\mathcal{G}, D)$. Definition 2 (Locally Consistent Scoring Criterion). Let $D$ be a dataset consisting of $T$ i.i.d. samples from some distribution $P(\cdot)$. Let $\mathcal{G}$ be any DAG, and let $\mathcal{G}'$ be the DAG that results from adding the edge $V_i \to V_j$ to $\mathcal{G}$. A scoring criterion $S(\mathcal{G}, D)$ is locally consistent if the following two properties hold as $T \to \infty$: 1. If $V_j \not\perp V_i \mid PA_j^{\mathcal{G}}$, then $S(\mathcal{G}', D) > S(\mathcal{G}, D)$. 2. If $V_j \perp V_i \mid PA_j^{\mathcal{G}}$, then $S(\mathcal{G}', D) < S(\mathcal{G}, D)$. It has been shown that the BIC score and the GS score are both globally and locally consistent (Chickering, 2002; Huang et al., 2018). The procedure for skeleton estimation on augmented graphs is described in Algorithm 1; the predefined graph search algorithms are discussed in Section 2.4. Apart from the recovered skeleton over $\mathbf{V}$, the changing modules are detected as well, in Step 4 of Algorithm 1. It is important to note that we allow causal relations to be either linear or nonlinear. If they are nonlinear, we apply GS as the score function. When they are linear, although we could also use GS, we use the linear BIC instead, because it is less prone to overfitting on linear data and is computationally more efficient. Algorithm 1 Skeleton Search on Augmented Graph. Input: $n$ datasets, each with $T$ observations, $d$ variables, and index $C$. Output: skeleton $S$ of $G_{aug}$'s subgraph $G_1$ over $\mathbf{V}$, and the variables $\mathbf{V}_C \subseteq \mathbf{V}$ that are connected with $C$. 1: Pool all datasets with an extra surrogate variable $C$ to form a data matrix $X \in \mathbb{R}^{nT \times (d+1)}$.
2: Use the predefined graph search algorithm, with BIC or GS plus an acyclicity constraint, to recover the augmented graph. Eliminate any direction $V_i \to C$ in the graph, using the prior that no variable $V_i$ affects the domain index. This step yields the recovered augmented graph $G_{aug}$. 3: Discard the index variable in $G_{aug}$ to obtain the induced subgraph $G_1$. Discard the directions in $G_1$ and output the skeleton $S$ of $G_1$. 4: Detect the changing causal modules by inspecting the $G_{aug}$ recovered in Step 2, and output $\mathbf{V}_C$. The validity of searching on the augmented graph is guaranteed by Theorem 1. Theorem 1. Let $D$ be the pooling of all datasets, and let $D_C$ be the augmented dataset with the domain index as an extra random variable. Let $G_0$ be the underlying causal graph for the distribution of $D$ over $\mathbf{V}$, and $G_C$ the underlying causal graph for the distribution of $D_C$ over $\mathbf{V} \cup \{C\}$. If we denote by $G'_C$ the graph obtained by any of the following modifications of $G_C$, 1. adding any edges, 2. deleting any edges, or 3. reversing any edges, that change the conditional dependence relations of $G_C$, then we have $S(G_C, D_C) > S(G'_C, D_C)$, where $S$ is any globally consistent scoring criterion. The proof of the theorem is given in Appendix A.1. Intuitively, this theorem means that if we maximize the score, we will obtain an augmented graph in the same Markov equivalence class as the true augmented graph.
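The sketch below conveys the flavor of Algorithm 1 for the linear case: a greedy forward search that scores each node with a linear-Gaussian BIC on the pooled, C-augmented data and never orients an edge into C. It is a minimal stand-in under those assumptions; MDSS itself plugs into off-the-shelf search strategies and uses the GS score for nonlinear data.

```python
import numpy as np

def bic_node(X, j, parents):
    """Linear-Gaussian BIC contribution of node j given a parent set."""
    T = X.shape[0]
    y = X[:, j]
    A = (np.column_stack([X[:, sorted(parents)], np.ones(T)])
         if parents else np.ones((T, 1)))
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    sigma2 = max(resid.var(), 1e-12)
    return -0.5 * T * np.log(sigma2) - 0.5 * A.shape[1] * np.log(T)

def creates_cycle(adj, i, j):
    """Would adding i -> j create a cycle, i.e. can j already reach i?"""
    stack, seen = [j], set()
    while stack:
        k = stack.pop()
        if k == i:
            return True
        if k not in seen:
            seen.add(k)
            stack.extend(np.flatnonzero(adj[k]).tolist())
    return False

def greedy_skeleton_search(X, c_col):
    """Forward greedy edge additions; edges into column c_col (the domain
    index C) are forbidden, mirroring Step 2 of Algorithm 1."""
    d = X.shape[1]
    adj = np.zeros((d, d), dtype=bool)
    parents = {j: set() for j in range(d)}
    score = {j: bic_node(X, j, parents[j]) for j in range(d)}
    while True:
        best = None
        for i in range(d):
            for j in range(d):
                if i == j or adj[i, j] or j == c_col \
                        or creates_cycle(adj, i, j):
                    continue
                gain = bic_node(X, j, parents[j] | {i}) - score[j]
                if gain > 1e-6 and (best is None or gain > best[0]):
                    best = (gain, i, j)
        if best is None:
            return adj
        gain, i, j = best
        adj[i, j] = True
        parents[j].add(i)
        score[j] += gain

# Usage with the generator above: adj = greedy_skeleton_search(X, c_col=3);
# variables adjacent to C (changing modules) are np.flatnonzero(adj[3]).
```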
This paper proposes strategies for learning the structure of multiple datasets observed over a common set of variables which may exhibit distribution shift. The authors address this problem by augmenting the dataset with an indicator variable that encodes dataset membership. After augmenting the dataset, standard structure learning algorithms are applied, with the additional restriction that the indicator variable may only be an ancestor. The authors provide theory showing that the procedure consistently estimates the local structures. They then show how the additional information obtained from the structure learned with the context variable can be used to disambiguate edge directions. Experimental results show the efficacy of the proposed approach.
Reinforcement Learning with Random Delays
1 INTRODUCTION. This article is concerned with the Reinforcement Learning (RL) scenario depicted in Figure 1, which is commonly encountered in real-world applications (Mahmood et al., 2018; Fuchs et al., 2020; Hwangbo et al., 2017). Oftentimes, actions generated by the agent are not immediately applied in the environment, and observations do not immediately reach the agent. Such environments have mainly been studied under the unrealistic assumption of constant delays (Nilsson et al., 1998; Ge et al., 2013; Mahmood et al., 2018). Here, prior work has proposed different planning algorithms which naively try to undelay the environment by simulating future observations (Walsh et al., 2008; Schuitema et al., 2010; Firoiu et al., 2018). We propose an off-policy, planning-free approach that enables low-bias and low-variance multi-step value estimation in environments with random delays. First, we study the anatomy of such environments in order to exploit their structure, defining Random-Delay Markov Decision Processes (RDMDPs). Then, we show how to transform trajectory fragments collected under one policy into trajectory fragments distributed according to another policy. We demonstrate this principle by deriving a novel off-policy algorithm (DCAC) based on Soft Actor-Critic (SAC), which exhibits greatly improved performance in delayed environments. Along with this work we release our code, including a wrapper that conveniently augments any OpenAI gym environment with custom delays. 2 DELAYED ENVIRONMENTS. We frame the general setting of real-world Reinforcement Learning in terms of an agent, random observation delays, random action delays, and an undelayed environment. At the beginning of each time-step, the agent starts computing a new action from the most recent available delayed observation. Meanwhile, a new observation is sent and the most recent delayed action is applied in the undelayed environment. Real-valued delays are rounded up to the next integer time-step. For a given delayed observation $s_t$, the observation delay $\omega_t$ is the number of time-steps elapsed from when $s_t$ finishes being captured to when it starts being used to compute a new action. The action delay $\alpha_t$ is the number of time-steps elapsed from when the last action influencing $s_t$ starts being computed to one time-step before $s_t$ finishes being captured. We refer to $\omega_t + \alpha_t$ as the total delay of $s_t$. As a motivating illustration of the real-world delayed setting, we collected a dataset of communication delays between a decision-making computer and a flying robot over WiFi, summarized in Figure 2. In the presence of such delays, the naive approach is to simply use the last received observation. In this case, any delay longer than one time-step violates the Markov assumption, since the last sent action becomes an unobserved part of the current state of the environment. To overcome this issue, we define a Markov Decision Process that takes the communication dynamics into account. 2.1 RANDOM DELAY MARKOV DECISION PROCESSES. To ensure the Markov property in delayed settings, it is necessary to augment the delayed observation with at least the last $K$ sent actions, where $K$ is the maximum possible combined observation and action delay.
This is required because the oldest actions, together with the delayed observation, describe the current state of the undelayed environment, whereas the most recent actions are yet to be applied (see Appendix C). Using this augmentation suffices to ensure that the Markov property is met in certain delayed environments. However, it is possible to do much better when the delays themselves are also part of the state-space. First, this allows us to model self-correlated delays, e.g., discarding outdated actions and observations (see Appendix A.1). Second, it provides useful information to the model about how old an observation is and which actions have been applied since. Third, knowledge of the total delay allows for efficient credit assignment and off-policy partial trajectory resampling, as we show in this work. Definition 1. A Random Delay Markov Decision Process $RDMDP(E, p_\omega, p_\alpha) = (X, A, \tilde{\mu}, \tilde{p})$ augments a Markov Decision Process $E = (S, A, \mu, p)$ with: (1) state-space $X = S \times A^K \times \mathbb{N}^2$, (2) action-space $A$, (3) initial state distribution
$$\tilde{\mu}(x_0) = \tilde{\mu}(s, u, \omega, \alpha) = \mu(s)\,\delta(u - c_u)\,\delta(\omega - c_\omega)\,\delta(\alpha - c_\alpha),$$
(4) transition distribution
$$\tilde{p}(s', u', \omega', \alpha', r' \mid s, u, \omega, \alpha, a) = f_{\omega-\omega'}(s', \alpha', r' \mid s, u, \omega, \alpha, a)\; p_\omega(\omega' \mid \omega)\; p_u(u' \mid u, a),$$
where $s \in S$ is the delayed observation, $u \in A^K$ is a buffer of the last $K$ sent actions, $\omega \in \mathbb{N}$ is the observation delay, and $\alpha \in \mathbb{N}$ is the action delay as defined above. To avoid conflicting with the subscript notation, we index the action buffer's elements using square brackets. Here, $u[1]$ is the most recent and $u[K]$ the oldest action in the buffer. We denote slices by $u[i{:}j] = (u[i], \dots, u[j])$ and $u[i{:}{-}j] = (u[i], \dots, u[K-j])$. We slightly overload this notation and additionally define $u[0] = a$. The constants $c_u \in A^K$ and $c_\omega, c_\alpha \in \mathbb{N}$ initialize $u, \omega, \alpha$, where $\delta$ is the Dirac delta distribution. The transition distribution itself is composed of three parts. (1) The observation delay distribution $p_\omega$, modelling the evolution of observation delays. Note that this density must represent a discrete distribution (i.e., be a weighted sum of Dirac delta distributions). Furthermore, this process repeats observations if no new ones are available, which means the observation delay can grow by at most one from one time-step to the next. (2) The transition distribution for the action buffer, $p_u(u' \mid u, a) = \delta(u' - (a, u[1{:}{-}1]))$. (3) The distribution $f_\Delta$ describing the evolution of observations, rewards and action delays (Definition 2). Definition 2. For each change in observation delay ($\Delta = \omega - \omega'$), we define a variable-step update distribution $f_\Delta$ as
$$f_\Delta(s', \alpha', r' \mid s, u, \omega, \alpha, a) = \mathbb{E}_{s^*, \alpha^*, r^* \sim f_{\Delta-1}(\cdot \mid s, u, \omega, \alpha, a)}\Big[\, p\big(s', r' - r^* \mid s^*, u[\overbrace{\omega - \Delta}^{\omega'} + \alpha']\big)\, p_\alpha(\alpha' \mid \alpha^*) \,\Big]. \qquad (1)$$
The base case of the recursion is $f_{-1}(s', \alpha', r' \mid s, u, \omega, \alpha, a) = \delta(s' - s)\,\delta(\alpha' - \alpha)\,\delta(r')$. Here, $p_\alpha$ is the action delay distribution which, like $p_\omega$, must be discrete. The transition distribution of the underlying, undelayed MDP is $p$. The $r' - r^*$ term accumulates intermediate rewards in case observations are skipped or repeated (see Appendix A.4). Since the observation delay cannot increase by more than one, $f_{-1}$ is used when $\omega$ is increasing, whereas $f_0$ is used when there is no change in observation delay.
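As a concrete illustration of these dynamics, here is a minimal delay wrapper in the spirit of the released code. Its API and simplifications (a constant action delay, uniformly sampled observation delays capped to grow by one, no reward accumulation from Equation 1) are assumptions, not the authors' implementation.

```python
import random
from collections import deque

class RandomDelayWrapper:
    """Hypothetical sketch of a random-delay wrapper. `env` is any object
    with reset() and step(action) -> (obs, reward, done)."""

    def __init__(self, env, max_obs_delay=2, act_delay=1):
        self.env = env
        self.max_obs_delay = max_obs_delay
        self.alpha = act_delay              # constant alpha, for simplicity
        self.K = max_obs_delay + act_delay  # action buffer size K

    def reset(self):
        obs = self.env.reset()
        self.buffer = deque([0.0] * self.K, maxlen=self.K)  # u, newest first
        self.history = [obs]
        self.omega = 0
        return (obs, tuple(self.buffer), self.omega, self.alpha)

    def step(self, action):
        self.buffer.appendleft(action)       # u' = (a, u[1:-1])
        # The oldest buffered action reaches the undelayed env this step.
        obs, reward, done = self.env.step(self.buffer[-1])
        self.history.append(obs)
        # omega can grow by at most one per step (observations are repeated
        # when nothing new arrives); here it is sampled uniformly, capped.
        self.omega = min(random.randint(0, self.max_obs_delay),
                         self.omega + 1)
        delayed_obs = self.history[-(1 + self.omega)]
        x = (delayed_obs, tuple(self.buffer), self.omega, self.alpha)
        return x, reward, done
```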
A simple special case of the RDMDP is the constant observation and action delay case, with $p_\omega(\omega' \mid \omega) = \delta(\omega' - c_\omega)$ and $p_\alpha(\alpha' \mid \alpha) = \delta(\alpha' - c_\alpha)$. Here, the RDMDP reduces to a Constantly Delayed Markov Decision Process, as described by Walsh et al. (2008). In this case, the action and observation delays $\alpha, \omega$ can be removed from the state-space, as they carry no information. Examples of RDMDP dynamics are visualized in Figure 3 (see also Appendix C). 3 REINFORCEMENT LEARNING IN DELAYED ENVIRONMENTS. Delayed environments as described in Section 2 are specific types of MDPs, with an augmented state-space and delayed dynamics. Therefore, using this augmented state-space, traditional algorithms such as Soft Actor-Critic (SAC) (Haarnoja et al., 2018a; Haarnoja et al., 2018b) will always work in randomly delayed settings. However, their performance still deteriorates because of the more difficult credit assignment caused by delayed observations and rewards, on top of the exploration and generalization burdens of delayed environments. We now analyze how to compensate for the credit assignment difficulty by leveraging our knowledge of the delays' dynamics. One solution is to perform on-policy multi-step rollouts on sub-trajectories that are longer than the considered delays. However, on-policy algorithms are known to be sample-inefficient and are therefore not commonly used in real-world applications, where data collection is costly. This motivates the development of off-policy algorithms able to reuse old samples, such as SAC. Intuitively, in delayed environments one should take advantage of the fact that actions only influence observations and rewards after a number of time-steps relative to the beginning of their computation (the total delay $\omega + \alpha$). Since the delay information is part of the state-space, it can be leveraged to track the influence of actions through time. However, applying conventional off-policy algorithms in delayed settings leads to the following issue: the trajectories used to perform the aforementioned multi-step backups have been sampled under an outdated policy, and therefore contain outdated action buffers. In this section, we propose a method to tackle this issue through partial trajectory resampling. We make use of the fact that the delayed dynamics are known, to simulate the effect they would have had under the current policy, effectively transforming off-policy sub-trajectories into on-policy sub-trajectories. This enables us to derive a family of efficient off-policy algorithms for randomly delayed settings. 3.1 PARTIAL TRAJECTORY RESAMPLING IN DELAYED ENVIRONMENTS. One important observation implied by Figure 3 is that, given the delayed dynamics of RDMDPs, some actions contained in the action buffer of an off-policy state did not influence the subsequent delayed observations and rewards for a number of time-steps. Therefore, if an off-policy sub-trajectory is short enough, it is possible to recursively resample its action buffers with no influence on the return. We propose the following transformation of off-policy sub-trajectories. Definition 3.
The partial trajectory resampling operator recursively updates action buffers as follows:
$$\sigma_n^\pi\big(\underbrace{s_1^*, u_1^*, \omega_1^*, \alpha_1^*}_{x_1^*}, r_1^*, \tau_{n-1}^* \mid x_0^*;\ \underbrace{s_1, u_1, \omega_1, \alpha_1}_{x_1}, r_1, \tau_{n-1}\big) = \delta\big((s_1^*, \omega_1^*, \alpha_1^*, r_1^*) - (s_1, \omega_1, \alpha_1, r_1)\big)\; \mathbb{E}_{a_0 \sim \pi(\cdot \mid x_0^*)}\big[\delta\big(u_1^* - (a_0, u_0^*[1{:}{-}1])\big)\big]\; \sigma_{n-1}^\pi(\tau_{n-1}^* \mid x_1^*; \tau_{n-1}) \qquad (2)$$
with the trivial base case $\sigma_0^\pi(x_0^*) = 1$. This operator recursively resamples the most recent action of each action buffer in an input sub-trajectory $\tau_n$ according to a new policy $\pi$; everything else stays unchanged. A visual example is provided in Figure 4, with $n = 2$ and an action buffer of two actions. When resampled actions are delayed and would not affect the environment, they do not "invalidate" the sub-trajectory, and the resampled trajectories can then be considered on-policy. Theorem 1. The partial trajectory resampling operator $\sigma_n^\pi$ (Def. 3) transforms off-policy trajectories into on-policy trajectories,
$$\mathbb{E}_{\tau_n \sim p_n^\mu(\cdot \mid x_0)}\big[\sigma_n^\pi(\tau_n^* \mid x_0; \tau_n)\big] = p_n^\pi(\tau_n^* \mid x_0), \qquad (3)$$
on the condition that none of the delayed observations depend on any of the resampled actions, i.e.,
$$\omega_t^* + \alpha_t^* \geq t, \qquad (4)$$
where $t$ indexes the trajectory $\tau_n^* = (s_1^*, u_1^*, \omega_1^*, \alpha_1^*, r_1^*, \dots, s_n^*, u_n^*, \omega_n^*, \alpha_n^*, r_n^*)$ from $1$ to $n$. The condition in Equation 4 can be understood visually with the help of Figure 3. In the constant delay example, it is fulfilled until the third time-step; after that, the observations would have been influenced by the resampled actions (starting with $a_0$).
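A minimal sketch of the operator follows, assuming a sub-trajectory stored as (s, u, omega, alpha, r) tuples and a policy mapping augmented states to actions; both interfaces are hypothetical.

```python
def resample_partial_trajectory(x0, traj, policy):
    """Sketch of Definition 3. `x0` is an augmented state (s, u, omega,
    alpha); `traj` is a list of (s, u, omega, alpha, r) steps; `policy(x)`
    returns an action."""
    x_prev = x0
    out = []
    for t, (s, u, omega, alpha, r) in enumerate(traj, start=1):
        if omega + alpha < t:
            # Theorem 1's condition (Eq. 4) is violated: this delayed
            # observation already depends on a resampled action, so stop.
            break
        a = policy(x_prev)                     # a_{t-1} ~ pi(. | x*_{t-1})
        u_star = (a,) + tuple(x_prev[1])[:-1]  # u*_t = (a_{t-1}, u*_{t-1}[1:-1])
        out.append((s, u_star, omega, alpha, r))  # obs, delays, reward unchanged
        x_prev = (s, u_star, omega, alpha)
    return out
```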
The paper introduces an algorithm for the case where actions have delayed effects in RL, and specifically where the delay is random. A resampling approach is applied to off-policy buffered data in order to align it with the current policy, and this approach is integrated into a SAC architecture, creating the new DCAC algorithm. Empirical results in constant-delay and random-delay environments show the algorithm outperforming baselines.
Fine-grained Synthesis of Unrestricted Adversarial Examples
1 INTRODUCTION. Adversarial examples, inputs resembling real samples but maliciously crafted to mislead machine learning models, have been studied extensively in the last few years. Most of the existing papers, however, focus on norm-constrained attacks and defenses, in which the adversarial input lies in an $\epsilon$-neighborhood of a real sample under the $L_p$ distance metric (commonly with $p = 0, 2, \infty$). For small $\epsilon$, the adversarial input is quasi-indistinguishable from the natural sample. For an adversarial image to fool the human visual system, being norm-constrained is sufficient, but it is not necessary. Moreover, defenses tailored for norm-constrained attacks can fail on other subtle input modifications. This has led to a recent surge of interest in unrestricted adversarial attacks, in which the adversary is not bounded by a norm threshold. These methods typically hand-craft transformations to capture visual similarity. Spatial transformations (Engstrom et al., 2017; Xiao et al., 2018; Alaifari et al., 2018), viewpoint or pose changes (Alcorn et al., 2018), and inserting small patches (Brown et al., 2017), among other methods, have been proposed for unrestricted adversarial attacks. In this paper, we focus on fine-grained manipulation of images for unrestricted adversarial attacks. We build upon state-of-the-art generative models which disentangle factors of variation in images. We create fine- and coarse-grained adversarial changes by manipulating various latent variables at different resolutions. The loss of the target network is used to guide the generation process. The pre-trained generative model constrains the search space for our adversarial examples to realistic images, thereby revealing the target model's vulnerability in the natural image space. We verify that we do not deviate from the space of realistic images with a user study as well as a t-SNE plot comparing the distributions of real and adversarial images (see Fig. 7 in the appendix). As a result, we observe that including these examples when training the model enhances its accuracy on clean images. Our contributions can be summarized as follows: • We present the first method for fine-grained generation of high-resolution unrestricted adversarial examples in which the attacker controls which aspects of the image to manipulate, resulting in a diverse set of realistic, on-the-manifold adversarial examples. • We demonstrate that adversarial training with our examples improves performance of the model on clean images. This is in contrast to training with norm-bounded perturbations, which degrades the model's accuracy. Unlike recent approaches such as Xie et al. (2020), which use a separate auxiliary batch norm for adversarial examples, our method does not require any modifications to the architecture. • We propose the first method for generating unrestricted adversarial examples for semantic segmentation and object detection. Training with our examples improves segmentation results on clean images. • We demonstrate that our proposed attack can break certified defenses against norm-bounded perturbations. 2 RELATED WORK. 2.1 NORM-CONSTRAINED ADVERSARIAL EXAMPLES. Most of the existing works on adversarial attacks and defenses focus on norm-constrained adversarial examples: for a given classifier $F: \mathbb{R}^n \to \{1, \dots, K\}$ and an image $x \in \mathbb{R}^n$, the adversarial image $x' \in \mathbb{R}^n$ is created such that $\|x - x'\|_p < \epsilon$ and $F(x) \neq F(x')$.
Common values for $p$ are $0, 2, \infty$, and $\epsilon$ is chosen small enough that the perturbation is imperceptible. Various algorithms have been proposed for creating $x'$ from $x$. Optimization-based methods solve a surrogate optimization problem based on the classifier's loss and the perturbation norm. In their pioneering paper on adversarial examples, Szegedy et al. (2013) use box-constrained L-BFGS (Fletcher, 2013) to minimize the surrogate loss function. Carlini & Wagner (2017) propose stronger optimization-based attacks for the $L_0$, $L_2$ and $L_\infty$ norms, using better objective functions and the Adam optimizer. Gradient-based methods use the gradient of the classifier's loss with respect to the input image. The Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) uses a first-order approximation of the function for faster generation and is optimized for the $L_\infty$ norm. Projected Gradient Descent (PGD) (Madry et al., 2017) is an iterative variant of FGSM which provides a strong first-order attack by using multiple steps of gradient ascent and projecting the perturbed images onto an $\epsilon$-ball centered at the input. Other variants of FGSM are proposed by Dong et al. (2018) and Kurakin et al. (2016). Several methods have been proposed for defending against adversarial attacks. These approaches can be broadly categorized into empirical defenses, which are empirically robust to adversarial examples, and certified defenses, which are provably robust to a certain class of attacks. One of the most successful empirical defenses is adversarial training (Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2017), which augments the training data with adversarial examples generated as training progresses. Many empirical defenses attempt to combat adversaries using a form of input pre-processing or by manipulating intermediate features or gradients (Guo et al., 2017; Xie et al., 2017; Samangouei et al., 2018). Few approaches have been able to scale up to high-resolution datasets such as ImageNet (Liao et al., 2018; Xie et al., 2018; Kannan et al., 2018). Athalye et al. (2018) show that many of these defenses fail due to obfuscated gradients, which occur when the defense method is designed to mask information about the model's gradients. The vulnerabilities of empirical defenses have led to increased interest in certified defenses, which provide a guarantee that the classifier's prediction is constant within a neighborhood of the input. Several certified defenses have been proposed (Wong & Kolter, 2017; Raghunathan et al., 2018; Tsuzuku et al., 2018), which typically do not scale to ImageNet. Cohen et al. (2019) use randomized smoothing with Gaussian noise to obtain provably $L_2$-robust classifiers on ImageNet. Lecuyer et al. (2019) propose an alternative certified defense at ImageNet scale, leveraging a connection between robustness against adversarial examples and differential privacy theory.
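For reference, below is a standard sketch of the PGD attack just described; this is the generic Madry et al. (2017) procedure, not this paper's method, and pixel values are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """L-infinity PGD: iterated gradient ascent on the loss, with
    projection back onto the eps-ball around the clean input x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```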
2.2 UNRESTRICTED ADVERSARIAL EXAMPLES. For an image to be adversarial, it needs to be visually indistinguishable from real images. One way to achieve this is by applying subtle geometric transformations to the input image. Spatially transformed adversarial examples are introduced by Xiao et al. (2018), in which a flow field is learned to displace the pixels of the image. Similarly, Alaifari et al. (2018) iteratively apply small deformations to the input in order to obtain the adversarial image. Engstrom et al. (2017) show that simple translations and rotations are enough to fool deep neural networks. Alcorn et al. (2018) manipulate the pose of an object to fool deep neural networks: they estimate parameters of a 3D renderer that cause the target model to misbehave in response to the rendered image. Another approach for evading the norm constraint is to insert new objects into the image. Adversarial Patch (Brown et al., 2017) creates an adversarial image by completely replacing part of an image with a synthetic patch, which is image-agnostic and robust to transformations. The existence of on-the-manifold adversarial examples is also shown by Gilmer et al. (2018), who consider the task of classifying between two concentric n-dimensional spheres. Stutz et al. (2019) demonstrate that both robust and accurate models are possible by using on-the-manifold adversarial examples. A challenge for creating unrestricted adversarial examples and defending against them is introduced by Brown et al. (2018), using the simple task of classifying between birds and bicycles. The recent work by Gowal et al. (2020) shows that adversarial training with examples generated by StyleGAN can improve the performance of the model on clean images. They consider the classification task on low-resolution datasets such as ColorMNIST and CelebA, and only use fine changes in their adversarial training. Our approach is effective on high-resolution datasets such as CelebA-HQ and LSUN, uses a range of low-level to high-level changes for adversarial training, and encompasses several tasks including classification, segmentation and detection. In addition, we demonstrate that our adversarial examples can break certified defenses against norm-constrained perturbations and are realistic, as verified by human evaluation. Song et al. (2018) search in the latent ($z$) space of AC-GAN (Odena et al., 2017) to find generated images that can fool a target classifier but yield correct predictions on AC-GAN's auxiliary classifier. They constrain the search region of $z$ so that it is close to a randomly sampled noise vector, and show results on the MNIST, SVHN and CelebA datasets. Requiring two classifiers to have inconsistent predictions degrades the sample quality of the model. As we show in the appendix, training with these adversarial examples hurts the model's performance on clean images. Moreover, this approach has no control over the generation process, since small changes in the $z$ space can lead to large changes in the generated images and can even create unrealistic samples. On the other hand, our method manipulates high-resolution real or synthesized images in a fine-grained manner, owing to the interpretable disentangled latent space. It also generates samples which improve the model's accuracy on clean images in both classification and segmentation tasks. To further illustrate the difference between our approach and that of Song et al. (2018), we plot t-SNE embeddings of real images from CelebA-HQ as well as adversarial examples from our method and Song et al.'s approach in the appendix, and show that our adversarial images stay closer to the manifold of real images. 3 APPROACH. Most of the existing works on unrestricted adversarial attacks rely on geometric transformations and deformations, which are oblivious to latent factors of variation. In this paper, we leverage disentangled latent representations of images for unrestricted adversarial attacks.
We build upon state-of-the-art generative models and consider various target tasks: classification, semantic segmentation and object detection. 3.1 CLASSIFICATION. Style-GAN (Karras et al., 2018) is a state-of-the-art generative model which disentangles high-level attributes and stochastic variations in an unsupervised manner. Stylistic variations are represented by style variables, and stochastic details are captured by noise variables. Changing the noise only affects low-level details, leaving the overall composition and high-level aspects intact. This allows us to manipulate the noise variables such that the variations are barely noticeable by the human eye. The style variables affect higher-level aspects of image generation. For instance, when the model is trained on bedrooms, style variables from the top layers control the viewpoint of the camera, middle layers select the particular furniture, and bottom layers deal with colors and details of materials. This allows us to manipulate images in a controlled manner, providing an avenue for fine-grained unrestricted attacks. Formally, we can represent Style-GAN with a mapping function $f$ and a synthesis network $g$. The mapping function is an 8-layer MLP which takes a latent code $z$ and produces an intermediate latent vector $w = f(z)$. This vector is then specialized by learned affine transformations $A$ into style variables $y$, which control adaptive instance normalization operations after each convolutional layer of the synthesis network $g$. Noise inputs are single-channel images consisting of uncorrelated Gaussian noise that are fed to each layer of the synthesis network. Learned per-feature scaling factors $B$ are used to generate noise variables $\eta$ which are added to the output of convolutional
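A sketch of how such a fine-grained attack can be driven by the target network's loss follows; `generator`, `classifier`, and their signatures are hypothetical stand-ins for the Style-GAN components, and attacking the noise maps (rather than the style variables) is just one choice of granularity.

```python
import torch
import torch.nn.functional as F

def noise_space_attack(generator, classifier, w, noises, y, lr=0.01, steps=50):
    """Ascend the target classifier's loss w.r.t. per-layer noise maps while
    keeping the style code w fixed, so only low-level details change."""
    noises = [n.clone().detach().requires_grad_(True) for n in noises]
    opt = torch.optim.Adam(noises, lr=lr)
    for _ in range(steps):
        img = generator(w, noises)                   # assumed signature
        loss = -F.cross_entropy(classifier(img), y)  # minimize -CE = ascend CE
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return generator(w, noises)
```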
This paper proposes a mechanism for generating adversarial examples by manipulating latent variables within the StyleGAN framework. Unlike previous works, which mostly focus on image-level perturbations and geometric transformations, this work controls higher-level latent sampling such as style, so as to generate style-adversarial examples. Although a similar idea was proposed by Song et al. (2018), this work follows the same direction and achieves better performance. The loss is formulated for general classification tasks such as object classification, object detection and semantic segmentation. The experimental results show that the examples not only are qualitatively convincing to human vision but also quantitatively improve performance when testing on clean images.
Signed Graph Diffusion Network
1 INTRODUCTION. Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs model trust relationships between people with positive (trust) and negative (distrust) edges. Many online social services, such as Epinions (Guha et al., 2004) and Slashdot (Kunegis et al., 2009), that allow users to express their opinions are naturally represented as signed social graphs. Such graphs have attracted considerable attention for diverse applications including link sign prediction (Leskovec et al., 2010a; Kumar et al., 2016), node ranking (Jung et al., 2016; Li et al., 2019b), community analysis (Yang et al., 2007; Chu et al., 2016), graph generation (Derr et al., 2018a; Jung et al., 2020), and anomaly detection (Kumar et al., 2014). Node representation learning is a fundamental building block for analyzing graph data, and many researchers have put tremendous effort into developing effective models for unsigned graphs. Graph convolutional networks (GCNs) and their variants (Kipf & Welling, 2017; Velickovic et al., 2018) have attracted great attention in the machine learning community, and recent works (Klicpera et al., 2019; Li et al., 2019a) have demonstrated stunning progress by handling the performance degradation caused by over-smoothing (Li et al., 2018; Oono & Suzuki, 2020) (i.e., node representations become indistinguishable as the number of propagations increases) or the vanishing gradient problem (Li et al., 2019a) in the first generation of GCN models. However, all of these models have limited performance on node representation learning in signed graphs, since they only consider unsigned edges under the homophily assumption (Kipf & Welling, 2017). Many studies have recently been conducted to incorporate signed edges; they are categorized into network embedding and GCN-based models. Network embedding (Kim et al., 2018; Xu et al., 2019b) learns the representations of nodes by optimizing an unsupervised loss that primarily aims to place two nodes' embeddings close together (or far apart) if they are positively (or negatively) connected. However, these methods are not trained jointly with a specific task in an end-to-end manner, i.e., the latent features and the task are trained separately. Thus, their performance is limited unless each of them is tuned delicately. GCN-based models (Derr et al., 2018b; Li et al., 2020) have extended graph convolutions to signed graphs using balance theory (Holland & Leinhardt, 1971) in order to properly propagate node features along signed edges. However, these models are directly extended from existing GCNs without considering the over-smoothing problem, which degrades their performance and hinders them from exploiting more information from multi-hop neighbors when learning node features in signed graphs. We propose SGDNET (SIGNED GRAPH DIFFUSION NETWORK), a novel graph neural network for node representation learning in signed graphs. Our main contributions are summarized as follows: • End-to-end learning. We design SGDNET to perform end-to-end node representation learning. Given a signed graph, SGDNET produces node embeddings through multiple signed graph diffusion (SGD) layers (Figure 1(a)), which are fed into a loss function of a specific task such as link sign prediction. • Novel feature diffusion.
We propose a signed random walk diffusion method that propagates node embeddings along signed edges based on sign-aware random walks, and injects local features (Figure 1(c)). This enables SGDNET to learn distinguishable node representations that account for multi-hop neighbors while preserving local information. • Experiments. Extensive experiments show that SGDNET effectively learns node representations of signed social graphs for link sign prediction, achieving at least 3.9% higher accuracy than the state-of-the-art models on real datasets (Table 2). 2 RELATED WORK. 2.1 GRAPH CONVOLUTIONAL NETWORKS ON UNSIGNED GRAPHS. The graph convolutional network (GCN) (Kipf & Welling, 2017) models the latent representation of a node by employing a convolutional operation on the features of its neighbors. Various GCN-based approaches (Kipf & Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017) have aroused considerable attention, since they enable diverse graph supervised tasks (Kipf & Welling, 2017; Yao et al., 2019; Xu et al., 2019a) to be performed concisely under an end-to-end framework. However, the first generation of GCN models exhibits performance degradation due to the over-smoothing and vanishing gradient problems. Several works (Li et al., 2018; Oono & Suzuki, 2020) have theoretically analyzed the over-smoothing problem. Also, Li et al. (2019a) have empirically shown that stacking more GCN layers leads to the vanishing gradient problem, as in convolutional neural networks (He et al., 2016). Consequently, most GCN-based models (Kipf & Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017) are shallow; i.e., they do not use the feature information of faraway nodes when modeling node embeddings. A recent research direction aims at resolving this limitation. Klicpera et al. (2019) proposed APPNP, which exploits Personalized PageRank (Jeh & Widom, 2003) to propagate hidden node embeddings far while preserving local features, thereby preventing the aggregated features from being over-smoothed. Li et al. (2019a) suggested ResGCN, adding skip connections between GCN layers as in ResNet (He et al., 2016). However, none of these models specify how to use signed edges, since they are based on the homophily assumption (Kipf & Welling, 2017), i.e., connected users are likely to be similar, which is not valid for negative edges. As opposed to homophily, negative edges carry the semantics of heterophily (Rogers, 2010), i.e., connected users are dissimilar. Although these methods can still be applied to signed graphs by ignoring the edge signs, the resulting features have limited capacity. 2.2 NETWORK EMBEDDING AND GRAPH CONVOLUTIONAL NETWORKS ON SIGNED GRAPHS. Traditional methods for network embedding extract latent node features specialized for signed graphs in an unsupervised manner. Kim et al. (2018) proposed SIDE, which optimizes a likelihood over direct and indirect signed connections on truncated random walks sampled from a signed graph. Xu et al. (2019b) developed SLF, considering positive, negative, and non-linked relationships between nodes to learn non-negative node embeddings. However, such approaches are not end-to-end, i.e., they are not directly optimized for a supervised task such as link prediction.
There has been recent progress on end-to-end learning on signed networks under the GCN framework. Derr et al. (2018b) proposed SGCN, which extends the GCN mechanism to signed graphs by considering balanced and unbalanced relationships supported by structural balance theory (Holland & Leinhardt, 1971). Li et al. (2020) developed SNEA, using attention techniques to reveal the importance of these relationships. However, such state-of-the-art models do not consider the over-smoothing problem, since they are directly extended from GCN. 3 PROPOSED METHOD. We propose SGDNET (SIGNED GRAPH DIFFUSION NETWORK), a novel end-to-end model for node representation learning in signed graphs. SGDNET aims to properly aggregate node features along signed edges, and to effectively use the features of multi-hop neighbors so that the generated features are not over-smoothed. Our main ideas are to diffuse node features along random walks that account for the signs of edges, and to inject local node features at each aggregation. Figure 1 depicts the overall architecture of SGDNET. Given a signed graph $G$ and initial node features $X \in \mathbb{R}^{n \times d_0}$, as shown in Figure 1(a), SGDNET extracts the final node embeddings $H^{(L)} \in \mathbb{R}^{n \times d_L}$ through multiple SGD layers, where $n$ is the number of nodes, $L$ is the number of SGD layers, and $d_l$ is the embedding dimension of the $l$-th layer. Then $H^{(L)}$ is fed into the loss function of a specific task, so that the embeddings and the task are jointly trained in an end-to-end framework. Given $H^{(l-1)}$, the $l$-th SGD layer learns $H^{(l)}$ through feature transformations and the signed random walk diffusion $F_d(\cdot)$, as shown in Figure 1(b). The layer also uses a skip connection to prevent the vanishing gradient problem as the depth of SGDNET increases. Figure 1(c) illustrates the intuition behind the signed random walk diffusion. Each node has two features, corresponding to a positive and a negative surfer, respectively. The surfer flips its sign when moving along negative edges, while the sign is kept along positive edges. For example, the positive (or negative) surfer becomes positive at node $v$ if it moves from a positively connected node $u$ (or a negatively connected node $t$). As a result, the aggregated features at node $v$ become similar to those of nodes connected by positive edges (e.g., node $u$), and different from those of nodes connected by negative edges (e.g., node $t$). In other words, the diffusion satisfies homophily and heterophily at the same time, while unsigned GCNs cannot handle the heterophily of negative edges. Furthermore, we inject the local feature (i.e., the input feature of the module) of node $v$ at each aggregation, so that the resulting features remain distinguishable during the diffusion. 3.1 SIGNED GRAPH DIFFUSION LAYER. Given a signed graph $G$ and the node embeddings $H^{(l-1)}$ from the previous layer, the $l$-th SGD layer learns new embeddings $H^{(l)}$, as shown in Figure 1(b). It first transforms $H^{(l-1)}$ into hidden features $\tilde{H}^{(l)} = H^{(l-1)} W_t^{(l)}$ with a learnable parameter $W_t^{(l)} \in \mathbb{R}^{d_{l-1} \times d_l}$. Then it applies the signed random walk diffusion, represented as the function $F_d(G, \tilde{H}^{(l)})$, which returns $P^{(l)} \in \mathbb{R}^{n \times d_l}$ and $M^{(l)} \in \mathbb{R}^{n \times d_l}$ as the positive and negative embeddings, respectively (details in Section 3.2).
The embeddings are concatenated and transformed as follows:
$$H^{(l)} = \phi\big([P^{(l)} \,\|\, M^{(l)}]\, W_n^{(l)} + H^{(l-1)}\big) \qquad (1)$$
where $\phi(\cdot)$ is a non-linear activation such as tanh, $\|$ denotes horizontal concatenation of two matrices, and $W_n^{(l)} \in \mathbb{R}^{2d_l \times d_l}$ is a trainable weight matrix that learns a relationship between $P^{(l)}$ and $M^{(l)}$. We use the skip connection (He et al., 2016; Li et al., 2019a) with $H^{(l-1)}$ in Equation (1) to avoid the vanishing gradient issue, which frequently occurs when multiple layers are stacked.
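A simplified sketch of the layer follows, assuming equal input/output dimensions and replacing the full diffusion $F_d$ with a single propagation step over row-normalized signed adjacency matrices; the paper's $F_d$ is iterative and injects local features (Section 3.2).

```python
import torch
import torch.nn as nn

class SGDLayer(nn.Module):
    """Simplified sketch of one SGD layer (Equation 1)."""

    def __init__(self, dim):
        super().__init__()
        # d_{l-1} = d_l = dim so the residual connection type-checks
        self.W_t = nn.Linear(dim, dim, bias=False)
        self.W_n = nn.Linear(2 * dim, dim, bias=False)

    def forward(self, H, A_pos, A_neg):
        H_tilde = self.W_t(H)
        P = A_pos @ H_tilde  # stand-in for the positive-surfer embeddings
        M = A_neg @ H_tilde  # stand-in for the negative-surfer embeddings
        return torch.tanh(self.W_n(torch.cat([P, M], dim=-1)) + H)
```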
In this paper, the authors study the problem of node embedding in signed networks. They propose SGDNet, which combines the idea of diffusion/random walks in signed networks with the residual connections of GCNs. The network is trained directly with a classification loss on edge sign prediction. The authors carried out extensive experiments on several real-world networks, comparing against several state-of-the-art methods. The proposed method shows superior performance on the sign prediction task.
Predicting the impact of dataset composition on model performance
1 INTRODUCTION. The success of large-scale machine learning systems depends critically on the quantity and quality of data used during training, and we cannot expect these systems to succeed if there is not enough training data or if that data does not cover all the phenomena contained in the test distribution (Ben-David et al., 2010). Knowing this, the designer of a machine learning system might create multiple sources of data, with each one targeting a different feature or domain on which the model ought to do well (Crammer et al., 2007; Wang et al., 2019a). This data-driven design strategy provides powerful tools to improve and evaluate model behavior, but also poses an additional challenge: what is the right way to combine these various data sources? What is the optimal data collection policy for a given budget? Our goal is to answer these questions by quantifying the relationship between data sources and model performance: how well will our model do if we train it on $n$ samples using a data mixture $(q_1, \dots, q_K)$ over our $K$ data sources? A precise model for predicting model performance will allow us both to identify the optimal data collection policy and to quantify cost-performance tradeoffs. The starting point of our work is the recent observation across speech, vision and text (Hestness et al., 2017; Kaplan et al., 2020; Rosenfeld et al., 2020) that the empirical performance of a model is remarkably predictable and follows the log-linear formula
$$\log(\text{error}) \approx -\alpha \log(n) + C. \qquad (1)$$
In this work, we extend this observation to the multi-data-source setting and discover the surprising fact that the slope of the log-linear relationship ($\alpha$) does not vary with data composition; the data composition only affects the intercept ($C$). The simple dependence of log-error on data size allows us to reduce the problem of estimating model error to a learning problem. Our approach is straightforward: we hypothesize that model error follows $V(n, q) := \exp(-\alpha \log(n) + \log(C(q)))$ for a simple parametric functional form $C(q)$, and fit this to observed triples $(n, q, \text{error})$ that we obtain by subsampling the dataset and re-training a model. We show that there is a natural and simple choice of $C(q)$ as a rational function, which we derive from optimal experimental design for linear regression, M-estimation, and nonparametric smoothing. The simple, parametric dependence of $V(n, q)$ on $n$ allows us to use the resulting estimates to predict model performance under substantial extrapolation in data size. Empirically, the resulting predictions are extremely accurate and hold under substantial extrapolation. On the Amazon review prediction dataset (Mansour et al., 2009), we can learn to predict model performance nearly perfectly ($r^2 = 0.96$) from a small dataset of 1200 examples across 3 sources and extrapolate to predict the model error on datasets of up to 4000 examples. We show that this high accuracy continues to hold on a real-world task-oriented dialogue system ($r^2 = 0.93$), a multi-domain machine translation system ($r^2 = 0.83$), and boolean question answering with weak supervision ($r^2 = 0.86$). In each case, our proposed approach substantially outperforms the best baseline, with the baselines performing worse than random in both the machine translation and question answering tasks.
Related work. Quantifying the effect of data composition on model performance is closely related to the classical ideas of optimal experimental design, as well as to more recent machine learning methods such as active learning and data valuation. Our work draws inspiration from classical V-optimal experimental design (John & Draper, 1975) as a way to understand how model performance changes with data collection policies. However, our approach differs substantially beyond this. Instead of making strong linearity assumptions and identifying closed-form formulas for model performance, we treat identifying the impact of data sources on errors as itself a prediction problem, which allows us to quantify these effects for neural networks and non-separable objectives. Active learning provides methods for incrementally selecting new points to rapidly reduce a loss (Hanneke, 2007). These approaches only consider the problem of optimal data collection and do not seek to predict model performance under all data collection strategies (including suboptimal ones), which is critical when making cost-performance tradeoffs across data sources. The model performance predictions produced in our work complement existing work on active learning by providing accurate forecasts of model performance under different data collection strategies. Finally, data valuation methods such as the Shapley value attempt to estimate the impact of a data source on model performance (Ghorbani & Zou, 2019; Jia et al., 2019; Ghorbani et al., 2020; Yoon et al., 2019). These approaches are natural when pricing data sources as part of a market mechanism (Ohrimenko et al., 2019; Agarwal et al., 2019), due to the axiomatic properties of the Shapley value. Our approach differs in that we seek simply to estimate the performance of a model rather than to assign a single price to examples from a data source. This difference means that axioms such as additivity, which are critical for the Shapley value, are not relevant for our goal. We show that for the purpose of predicting errors, a rational function (rather than a linear cost) follows naturally from optimal experimental design. Our experiments also show that our rational function approximation provides better model performance predictions than a linear, additive model. 2 PROBLEM STATEMENT AND EMPIRICAL OBSERVATIONS. Our goal is to predict the performance of a model as a function of the number of training samples $n$ as well as the dataset composition $q$, where $q_k$ represents the fraction of the training data drawn from data source $k$. We now define this goal more formally in terms of the training data distribution, model fitting, and test loss. The training data consists of an $n$-sample training set $p_{n,q}$ created by sampling from the mixture $p := \sum_{k \in [K]} q_k p_k$, where the $p_k$ are the data generating distributions for each of the $K$ data sources and the $q_k$ are mixture weights with $q_k \geq 0$ and $\sum_{k \in [K]} q_k = 1$. Using this dataset, we learn a prediction model $\hat{\theta}$ that incurs loss $\ell(\hat{\theta}; x, y)$ on a training example $(x, y)$. The fitted model is the empirical loss minimizer, defined as
$$\hat{\theta}(p_{n,q}) := \arg\min_{\theta \in \Theta} \mathbb{E}_{p_{n,q}}[\ell(\theta; x, y)].$$
The performance of this classifier is evaluated on a test distribution which may differ from the training distribution by a covariate shift (i.e., $p(y \mid x) = p_{\text{test}}(y \mid x)$).
We are interested in model performance as a function of the data size and composition (and not of a fixed empirical distribution $p_{n,q}$), and thus our goal is to predict the model's expected excess loss over draws of both the training and test distributions,
$$L(n, q) := \mathbb{E}[\ell(\hat{\theta}(p_{n,q}); x, y)] - \inf_{\theta} \mathbb{E}[\ell(\theta; x, y)].$$
Estimating $L$ requires that we hypothesize a relationship between $(n, q)$ and the expected model loss. Following earlier observations by Hestness et al. (2017), we expect a log-linear relationship between $L(n, q)$ and $\log(n)$ for any fixed $q$, which implies a possible approximation
$$\log(L(n, q)) \approx \log(V(n, q)) := \alpha(q) \log(n) + C(q). \qquad (2)$$
We now examine this hypothesis in a simple toy example. Linear toy data: we start with the simplest nontrivial example, linear least-squares regression, to study $L(n, q)$. In this example, there are two data sources over $x \in \mathbb{R}^2$. The first data source has substantial variability in the first coordinate $x_0$ but not in $x_1$, and vice versa for the second data source. The overall generative process is
$$y \mid x \sim [0.5, 1]^\top x + \varepsilon, \qquad z \sim \mathrm{Bern}(q), \qquad \varepsilon \sim N(0, 1),$$
$$x \mid z = 0 \sim N\left(0, \begin{bmatrix} 1 & 0 \\ 0 & 0.001 \end{bmatrix}\right), \qquad x \mid z = 1 \sim N\left(0, \begin{bmatrix} 0.001 & 0 \\ 0 & 1 \end{bmatrix}\right).$$
Let $L(n, q)$ be the excess squared loss of a linear least-squares model trained with $n$ samples from mixture $q$ and evaluated on a test distribution with $q = 0.5$. What will $L(n, q)$ look like? Figure 1a shows a clear linear relationship between log dataset size ($\log(n)$) and $\log(L(n, q))$. The intercept of the linear relationship varies with the data mixture $q$, but the slope appears constant. Examining Figure 1a more closely, we find that the extremes of using either data source exclusively (blue/purple lines) perform worse than a mix, suggesting that $\log(L(n, q))$ is unlikely to be linear in $q$. Intuitively, we can think of each data distribution as having a different strength (i.e., more variance in either $x_0$ or $x_1$), and combining the two yields a better data distribution than either alone. We can see this more clearly when we estimate the intercept of each of these lines (Figure 1b). The estimated intercepts show a U-shaped curve that rapidly increases as $q \to 0$ or $q \to 1$ and is generally flat from 0.2 to 0.8.
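The toy example can be reproduced in a few lines; the following Monte-Carlo estimate of $L(n, q)$ follows the generative process above (the trial count and sample sizes are arbitrary choices).

```python
import numpy as np

def toy_excess_loss(n, q, trials=200, seed=0):
    """Monte-Carlo estimate of L(n, q) for the two-source linear toy
    example, evaluated against the q = 0.5 test mixture."""
    rng = np.random.default_rng(seed)
    covs = [np.diag([1.0, 0.001]), np.diag([0.001, 1.0])]
    theta_star = np.array([0.5, 1.0])
    sigma_test = 0.5 * (covs[0] + covs[1])  # covariance of the test mixture
    losses = []
    for _ in range(trials):
        z = rng.random(n) < q
        x = np.where(z[:, None],
                     rng.multivariate_normal([0, 0], covs[1], n),
                     rng.multivariate_normal([0, 0], covs[0], n))
        y = x @ theta_star + rng.normal(size=n)
        theta_hat = np.linalg.lstsq(x, y, rcond=None)[0]
        d = theta_hat - theta_star
        losses.append(d @ sigma_test @ d)  # excess loss = d' Sigma_test d
    return float(np.mean(losses))

for n in (100, 400, 1600):
    print(n, toy_excess_loss(n, q=0.5))  # log-loss falls roughly linearly in log n
```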
This results in a training set with data size $\hat{n} = \sum_k \hat{n}_k$ and composition $\hat{q}_k = \hat{n}_k / \hat{n}$. We fit a model to this subsampled data and compute its loss $L(\hat{n}, \hat{q})$. Given the triples $(\hat{n}, \hat{q}, L(\hat{n}, \hat{q}))$ we can now simply fit the hypothesized functional form, $$\min_{\lambda, \alpha} \; \mathbb{E}_{\hat{q}, \hat{n}}\Big[\big(\log(L(\hat{n}, \hat{q})) - \big(-\alpha \log(\hat{n}) + \log(C_\lambda(\hat{q}))\big)\big)^2\Big].$$ The experimental data does not specify the functional form of $C_\lambda(q)$, except that it should handle convex functions like those seen in Figure 1b. We will now study $V(n, q)$ theoretically and argue that a natural choice is the rational function $$C_\lambda(q) := \sum_{i=1}^{M} \Big(\sum_{k=1}^{K} \lambda_{ik} q_k\Big)^{-1}.$$ In the subsequent sections, we will study three settings: ordinary linear regression, M-estimation, and nonparametric regression, and show that our hypothesized log-linear approximation arises naturally in all three cases.
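As a concrete illustration, the following sketch simulates the toy two-source regression from Section 2 to produce $(\hat{n}, \hat{q}, L(\hat{n}, \hat{q}))$ triples at small $n$, then fits $\alpha$ and a positive $C_\lambda$ by least squares on the log-residuals and extrapolates to larger $n$. This is a minimal reconstruction, not the authors' code; the sample sizes, the choice $M = K = 2$, the exp-parameterization enforcing $\lambda > 0$, and the Nelder-Mead optimizer are all assumptions about unstated details.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- Toy two-source linear regression from Section 2 ---
THETA = np.array([0.5, 1.0])
COVS = [np.diag([1.0, 1e-3]), np.diag([1e-3, 1.0])]   # x | z = 0 and x | z = 1
COV_TEST = 0.5 * (COVS[0] + COVS[1])                  # test mixture, q = 0.5

def excess_loss(n, q, trials=100):
    """Monte-Carlo estimate of L(n, q) for ordinary least squares."""
    out = []
    for _ in range(trials):
        z = rng.random(n) < q                         # z ~ Bern(q)
        x = np.where(z[:, None],
                     rng.multivariate_normal(np.zeros(2), COVS[1], n),
                     rng.multivariate_normal(np.zeros(2), COVS[0], n))
        y = x @ THETA + rng.standard_normal(n)
        theta_hat, *_ = np.linalg.lstsq(x, y, rcond=None)
        d = theta_hat - THETA
        out.append(d @ COV_TEST @ d)                  # excess squared loss
    return float(np.mean(out))

# --- Collect (n, q, L) triples at small n, then fit alpha and C_lambda ---
ns = np.array([30, 60, 120, 240] * 5)
qs = np.repeat([0.1, 0.3, 0.5, 0.7, 0.9], 4)
losses = np.array([excess_loss(n, q) for n, q in zip(ns, qs)])

Q = np.stack([qs, 1.0 - qs], axis=1)                  # mixture vectors (K = 2)
M, K = 2, 2

def objective(params):
    alpha, lam = params[0], np.exp(params[1:]).reshape(M, K)  # lam > 0 via exp
    pred = -alpha * np.log(ns) + np.log((1.0 / (Q @ lam.T)).sum(axis=1))
    return np.mean((np.log(losses) - pred) ** 2)

res = minimize(objective, np.concatenate([[1.0], np.zeros(M * K)]),
               method="Nelder-Mead")
alpha, lam = res.x[0], np.exp(res.x[1:]).reshape(M, K)

def predict(n, q_vec):
    return n ** -alpha * (1.0 / (q_vec @ lam.T)).sum()

print(alpha)                                 # fitted slope, roughly constant in q
print(predict(2000, np.array([0.5, 0.5])),   # extrapolated prediction ...
      excess_loss(2000, 0.5))                # ... vs a directly measured loss
```

For this toy model, classical theory in fact gives $L(n, q) \approx \mathrm{tr}(\Sigma_{\text{test}} \Sigma_q^{-1})/n$, which is exactly of the rational form above with $M = 2$, so the fit should recover $\alpha \approx 1$.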
This work studies the problem of predicting model performance with more training data when the data are collected from different sources. The predictor is a function of the number of training examples and the ratio of examples drawn from each source. The predictor must be built from the model performance observed at small numbers of training examples, and then applied to larger numbers of training examples without actually training the model. The predictor can be used to choose a good data collection policy, i.e., one expected to yield the best model performance. The proposed solution is a simple parametric form for the predictor, which is log-linear in the log of the number of training examples and log-rational in the source distribution vector. The solution is motivated by recent literature on the same task for a single source. The correctness of the solution is proved for several cases: linear regression, M-estimation, and nonparametric binning. The performance of the predictor is then evaluated on a number of real-world tasks: linear regression for Amazon book ratings, semantic parsing, machine translation, and multitask question answering. Performance is measured by the $R^2$ score between the actual and predicted performance. The proposed predictor has a clear advantage over the baseline of using a linear predictor.
Learning Mesh-Based Simulation with Graph Networks
1 INTRODUCTION. State-of-the-art modeling of complex physical systems, such as deforming surfaces and volumes, often employs mesh representations to solve the underlying partial differential equations (PDEs). Mesh-based finite element simulations underpin popular methods in structural mechanics [31, 48], aerodynamics [13, 34], electromagnetics [32], geophysics [35, 39], and acoustics [26]. Meshes also support adaptive representations, which enable optimal use of the resource budget by allocating greater resolution to regions of the simulation domain where strong gradients are expected or more accuracy is required, such as the tip of an airfoil in an aerodynamics simulation. Adaptive meshing enables running simulations at accuracy and resolution levels impossible with regular discretization schemes [8, 27] (Figure 3b). Despite their advantages, mesh representations have received relatively little attention in machine learning. While meshes are sometimes used for learned geometry processing [9] and generative models of shapes [15, 29], most work on predicting high-dimensional physical systems focuses on grids, owing to the popularity of, and hardware support for, CNN architectures [19]. We introduce a method for predicting the dynamics of physical systems which capitalizes on the advantages of adaptive mesh representations. Our method works by encoding the simulation state into a graph, and performing computations in two separate spaces: the mesh-space, spanned by the simulation mesh, and the Euclidean world-space in which the simulation manifold is embedded (see Figure 3a). By passing messages in mesh-space, we can approximate the differential operators that underpin the internal dynamics of most physical systems. Message-passing in world-space can estimate external dynamics not captured by the mesh-space interactions, such as contact and collision. Unstructured irregular meshes, as opposed to regular grids, support learning dynamics which are independent of resolution, allowing variable resolution and scale at runtime. By learning a map of desired resolution over the mesh (a sizing field), together with a local remesher, our method can even adaptively change the discretization during rollouts, budgeting greater computational resources for important regions of the simulation domain. Videos of all our experiments can be found at https://sites.google.com/view/meshgraphnets. Together, our method allows us to learn the dynamics of vastly different physical systems, from cloth simulation over structural mechanics to fluid dynamics, directly from data, providing only very general biases such as spatial equivariance. We demonstrate that by using mesh-space computation we can reliably model materials with a rest state, such as elastics, which are challenging for mesh-free prediction models [37]. MESHGRAPHNETS outperform particle- and grid-based baselines, and can generalize to more complex dynamics than those on which they were trained. 2 RELATED WORK. Modelling high-dimensional physics problems with deep learning algorithms has become an area of great research interest in fields such as computational fluid dynamics. High-resolution simulations are often very slow, and learned models can provide faster predictions, reducing turnaround time for workflows in engineering and science [16, 6, 49, 20, 1].
Short run times are also a desirable property for fluid simulation in visualization and graphics [46, 41, 47]. Learned simulations can be useful for real-world predictions where the physical model, parameters, or boundary conditions are not fully known [12]. Conversely, the accuracy of predictions can be increased by including specialized knowledge about the modelled system in the form of loss terms [43, 23], or by physics-informed feature normalization [40]. The methods mentioned above are based on convolutional architectures over regular grids. Although this is by far the most widespread architecture for learning high-dimensional physical systems, there has recently been increased interest in particle-based representations, which are particularly attractive for modelling the dynamics of free-surface liquids and granular materials. Ladicky et al. [22] use random forests to speed up liquid simulations. Various works [24, 42, 37] use graph neural networks (GNNs) [38, 4] to model particle-based granular materials and fluids, as well as glassy dynamics [3]. Learned methods can also improve certain aspects of classical FEM simulations, e.g., more accurate handling of strongly nonlinear displacements [25] or learned elements which directly map between forces and displacements [10]. Finally, the dynamics of high-dimensional systems can be learned in reduced spaces. Holden et al. [18] perform PCA decomposition on cloth data and learn a correction model to improve the accuracy of subspace simulation. These models are, however, very domain-specific, and their expressive range is limited by the use of a linear subspace. There is increasing attention on using meshes for learned geometry and shape processing [9, 29, 17]. But despite mesh-based simulations being the tool of choice in mechanical engineering and related disciplines, adaptive mesh representations have not seen much use in machine learning for physics prediction, with a few notable exceptions [5, 2]. Belbute-Peres et al. [5] embed a differentiable aerodynamics solver in a graph convolution (GCN) [21] prediction pipeline for super-resolution of aerodynamics predictions. Our method has similarities, but without a solver in the loop, which potentially makes it easier to use and adapt to new systems. In Section 5 we show that MESHGRAPHNETS are better suited for dynamical prediction than GCN-based architectures. Finally, Graph Element Networks [2] uses meshes over 2D grid domains to more efficiently compute predictions and scene representations. Notably, they use small planar systems (< 50 nodes), while we show how to scale mesh-based predictions to complex 3D systems with thousands of nodes. 3 MODEL. We describe the state of the system at time $t$ using a simulation mesh $M^t = (V, E^M)$ with nodes $V$ connected by mesh edges $E^M$. Each node $i \in V$ is associated with a reference mesh-space coordinate $u_i$, which spans the simulation mesh, and additional dynamical quantities $q_i$ that we want to model. Eulerian systems (Figure 2c, d) model the evolution of continuous fields such as velocity over a fixed mesh, and the $q_i$ sample these fields at the mesh nodes. In Lagrangian systems, the mesh represents a moving and deforming surface or volume (e.g., Figure 2a, b), and contains an extra world-space coordinate $x_i$ describing the dynamic state of the mesh in 3D space, in addition to the fixed mesh-space coordinate $u_i$ (Figure 3a). 3.1 LEARNING FORWARD DYNAMICS.
The task is to learn a forward model of the dynamic quantities of the mesh at time $t+1$ given the current mesh $M^t$ and (optionally) a history of previous meshes $\{M^{t-1}, \ldots, M^{t-h}\}$. We propose MESHGRAPHNETS, a graph neural network model with an Encode-Process-Decode architecture [4, 37], followed by an integrator. Figure 1 shows a visual scheme of the MESHGRAPHNETS architecture. Domain-specific information on the encoding and integration can be found in Section 4. Encoder The encoder encodes the current mesh $M^t$ into a multigraph $G = (V, E^M, E^W)$. Mesh nodes become graph nodes $V$, and mesh edges become bidirectional mesh-edges $E^M$ in the graph. This serves to compute the internal dynamics of the mesh. For Lagrangian systems, we add world edges $E^W$ to the graph, to enable learning external dynamics such as (self-)collision and contact, which are non-local in mesh-space.1 World-space edges are created by spatial proximity: that is, given a fixed radius $r_W$ on the order of the smallest mesh edge lengths, we add a world edge between nodes $i$ and $j$ if $|x_i - x_j| < r_W$, excluding node pairs already connected in the mesh. This encourages using world edges to pass information between nodes that are spatially close but distant in mesh space (Figure 3a). Next, we encode features into graph nodes and edges. To achieve spatial equivariance, positional features are provided as relative edge features. We encode the relative displacement vector in mesh space $u_{ij} = u_i - u_j$ and its norm $|u_{ij}|$ into the mesh edges $e^M_{ij} \in E^M$. Then, we encode the relative world-space displacement vector $x_{ij}$ and its norm $|x_{ij}|$ into both mesh edges $e^M_{ij} \in E^M$ and world edges $e^W_{ij} \in E^W$. All remaining dynamical features $q_i$, as well as a one-hot vector indicating node type, are provided as node features in $v_i$. Finally, the concatenated features above are encoded into a latent vector of size 128 at each node and edge, using separate encoder MLPs for mesh edges $e^M_{ij}$, world edges $e^W_{ij}$, and nodes $v_i$ respectively. See Sections 4 and A.1 for more details on input encoding. Processor The processor consists of $L$ identical message passing blocks, which generalize GraphNet blocks [36] to multiple edge sets. Each block contains a separate set of network parameters, and is applied in sequence to the output of the previous block, updating the mesh edge, world edge, and node embeddings $e^M_{ij}$, $e^W_{ij}$, $v_i$ to $e'^M_{ij}$, $e'^W_{ij}$, $v'_i$ respectively by $$e'^M_{ij} \leftarrow f^M(e^M_{ij}, v_i, v_j), \qquad e'^W_{ij} \leftarrow f^W(e^W_{ij}, v_i, v_j), \qquad v'_i \leftarrow f^V\Big(v_i, \sum_j e'^M_{ij}, \sum_j e'^W_{ij}\Big) \qquad (1)$$ where $f^M$, $f^W$, $f^V$ are implemented using MLPs with a residual connection. Decoder and state updater For predicting the time $t+1$ state from the time $t$ input, the decoder uses an MLP $\delta^V$ to transform the latent node features $v_i$ after the final processing step into one or more output features $p_i$. We can interpret the output features $p_i$ as (higher-order) derivatives of $q_i$, and integrate them using a forward-Euler integrator with $\Delta t = 1$ to compute the next-step dynamical quantity $q^{t+1}_i$. For first-order systems the output $p_i$ is integrated once to update $q^{t+1}_i = p_i + q^t_i$, while for second-order systems integration happens twice: $q^{t+1}_i = p_i + 2q^t_i - q^{t-1}_i$. Additional output features $p_i$ are also used to make direct predictions of auxiliary quantities such as pressure or stress. For domain-specific details on decoding, see Section 4. Finally, the output mesh nodes $V$ are updated using $q^{t+1}_i$ to produce $M^{t+1}$.
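To make Eq. (1) concrete, here is a minimal PyTorch sketch of one processor block with the two edge sets. The width-128 latents and residual connections follow the text; the two-layer MLPs, ReLU activations, and sum aggregation at the receiving node are assumptions about unstated details, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

def mlp(width=128):
    # input is concat(edge-or-node, two partners/aggregates) -> 3 * width
    return nn.Sequential(nn.Linear(3 * width, width), nn.ReLU(),
                         nn.Linear(width, width))

class ProcessorBlock(nn.Module):
    """One message-passing block of Eq. (1), with mesh and world edge sets."""
    def __init__(self, width=128):
        super().__init__()
        self.f_mesh, self.f_world, self.f_node = mlp(width), mlp(width), mlp(width)

    def forward(self, v, e_mesh, e_world, mesh_ij, world_ij):
        # mesh_ij / world_ij: (2, E) LongTensors of sender/receiver node indices
        em = e_mesh + self.f_mesh(
            torch.cat([e_mesh, v[mesh_ij[0]], v[mesh_ij[1]]], dim=-1))
        ew = e_world + self.f_world(
            torch.cat([e_world, v[world_ij[0]], v[world_ij[1]]], dim=-1))
        # per-node sums of incoming updated edge features
        agg_m = torch.zeros_like(v).index_add_(0, mesh_ij[1], em)
        agg_w = torch.zeros_like(v).index_add_(0, world_ij[1], ew)
        vn = v + self.f_node(torch.cat([v, agg_m, agg_w], dim=-1))
        return vn, em, ew
```

A full processor would stack $L$ such blocks, each with its own parameters, as stated in the text.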
For some systems, we dynamically adapt the mesh after each prediction step; this is explained in the following section. 1 From here on, any mention of world edges and world coordinates applies only to Lagrangian systems; they are omitted for Eulerian systems. 3.2 ADAPTIVE REMESHING. Adaptive remeshing algorithms generally consist of two parts: identifying which regions of the simulation domain need coarse or fine resolution, and adapting the nodes and their connections to this target resolution. Only the first part requires domain knowledge of the type of physical system, which usually comes in the form of heuristics. For instance, in cloth simulation one common heuristic is the refinement of areas with high curvature, to ensure smooth bending dynamics (Figure 3b), while in computational fluid dynamics it is common to refine around wall boundaries, where high gradients of the velocity field are expected. In this work we adopt the sizing field methodology [27]. The sizing field tensor $S(u) \in \mathbb{R}^{2 \times 2}$ specifies the desired local resolution by encoding the maximal allowed oriented edge lengths in the simulation mesh. An edge $u_{ij}$ is valid if and only if $u_{ij}^\top S_i u_{ij} \leq 1$; otherwise it is too long and needs to be split. Given the sizing field, a generic local remeshing algorithm can simply split all invalid edges to refine the mesh, and collapse as many edges as possible, without creating new invalid edges, to coarsen the mesh. We denote this remeshing process as $M' = R(M, S)$. Learned remeshing To leverage the advantages in efficiency and accuracy of dynamic remeshing, we need to be able to adapt the mesh at test time. Since remeshing requires domain knowledge, we would however need to call the specific remesher used to generate the training data at each step during the model rollout, reducing the benefits of learning the model. Instead, we learn a model of the sizing field (the only domain-specific part of remeshing) using the same architecture as in Section 3.1, and train a decoder output $p_i$ to produce a sizing tensor for each node. At test time, for each time step we predict both the next simulation state and the sizing field, and use a generic, domain-independent remesher $R$ to compute the adapted next-step mesh as $M^{t+1} = R(\hat{M}^{t+1}, \hat{S}^{t+1})$. We demonstrate this on triangular meshes; Section A.3 describes the simple generic remesher that we use for this purpose. While the sizing field is agnostic to the mesh type, other mesh types may require different local remeshers; for tetrahedral meshes a method such as Wicke et al. [45] could be used, while quad meshes can simply be split into triangular meshes.
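A rough sketch of the edge-validity test and the generic split/collapse loop described above might look as follows. The mesh helpers (`mesh.edges`, `mesh.u`, `split_edge`, `collapse_edge`, `collapse_is_safe`) are hypothetical stand-ins, not the paper's API; only the validity criterion itself is taken from the text.

```python
import numpy as np

def is_valid_edge(u_i, u_j, S_i):
    """Edge validity test from Section 3.2: u_ij^T S_i u_ij <= 1."""
    u_ij = u_i - u_j
    return float(u_ij @ S_i @ u_ij) <= 1.0

def local_remesh(mesh, S):
    """Generic remesher M' = R(M, S): split every invalid (too long) edge,
    then collapse edges where doing so would create no new invalid edge.
    All mesh methods below are hypothetical helpers for illustration."""
    for i, j in list(mesh.edges):
        if not is_valid_edge(mesh.u[i], mesh.u[j], S[i]):
            mesh.split_edge(i, j)            # refine where edges are too long
    for i, j in list(mesh.edges):
        if mesh.collapse_is_safe(i, j, S):
            mesh.collapse_edge(i, j)         # coarsen where permitted
    return mesh
```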
This paper presents a graph-network-based architecture for learning to perform mesh-based simulations, which can be run more efficiently than the full, "ground-truth" simulations. The experiments demonstrate that the proposed method is able to learn to simulate a wide range of different physical scenarios. Moreover, the presented results also demonstrate an ability to generalize to configurations different from the one seen in training.
Differentially Private Synthetic Data: Applied Evaluations and Enhancements
1 INTRODUCTION. Maintaining an individual's privacy is a major concern when collecting sensitive information from groups or organizations. A formalization of privacy, known as differential privacy, has become the gold standard with which to protect information from malicious agents (Dwork, TAMC 2008). Differential privacy offers some of the most stringent known theoretical privacy guarantees (Dwork et al., 2014). Intuitively, for some query on some dataset, a differentially private algorithm produces an output, regulated by a privacy parameter, that is statistically indistinguishable from the output of the same query on the same dataset had any one individual's information been removed. This powerful tool has been adopted by researchers and industry leaders, and has become particularly interesting to machine learning practitioners, who hope to leverage privatized data in training predictive models (Ji et al., 2014; Vietri et al., 2020). Because differential privacy often depends on adding noise, the results of differentially private algorithms can come at the cost of data accuracy and utility. However, differentially private machine learning algorithms have shown promise across a number of domains. These algorithms can provide tight privacy guarantees while still producing accurate predictions (Abadi et al., 2016). A drawback to most methods, however, is the one-off nature of training: once the model is produced, the privacy budget for a real dataset can be entirely consumed. The differentially private model is therefore inflexible to retraining and difficult to share or verify: the output model is a black box. This can be especially disadvantageous in the presence of high-dimensional data that require rigorous training techniques like dimensionality reduction or feature selection (Hay et al., 2016). With limited budget to spend, data scientists cannot exercise free rein over a dataset, thus sacrificing model quality. In an effort to remedy this, and other challenges faced by traditional differentially private methods for querying, we can use differentially private techniques for synthetic data generation, investigate the privatized data, and train informed supervised learning models. In order to use the many state-of-the-art methods for differentially private synthetic data effectively in industry domains, we must first address pitfalls in practical analysis, such as the lack of realistic benchmarking (Arnold & Neunhoeffer, 2020). Benchmarking is non-trivial, as many new state-of-the-art differentially private synthetic data algorithms leverage generative adversarial networks (GANs), making them expensive to evaluate on large-scale datasets (Zhao et al., 2019). Furthermore, many state-of-the-art approaches lack direct comparisons to one another, and by the nature of the privatization mechanisms, interpreting experimental results is non-trivial (Jayaraman & Evans, 2019). New metrics presented to analyze differentially private synthetic data methods may themselves need more work to understand, especially in the domain of tabular data (Ruggles et al., 2019; Machanavajjhala et al., 2017). To that end, our contributions in this paper are 3-fold. (1) We introduce more realistic benchmarking. Practitioners commonly collect state-of-the-art approaches for comparison in a shared environment (Xu et al., 2019).
We provide our evaluation framework, with extensive comparisons on both standard datasets and our real-world, industry applications. (2) We provide experimentation on novel metrics at scale. We stress the tradeoff between synthetic data utility and statistical similarity, and offer guidelines for untried data. (3) We present a straightforward and pragmatic enhancement, QUAIL, that addresses the tradeoff between utility and statistical similarity. QUAIL's simple modification to a differentially private data synthesis architecture boosts synthetic data utility in machine learning scenarios without harming summary statistics or privacy guarantees. 2 BACKGROUND. Differential Privacy (DP) is a formal definition of privacy offering strong assurances against various re-identification and re-construction attacks (Dwork et al., 2006; 2014). In the last decade, DP has attracted significant attention due to its provable privacy guarantees and ability to quantify privacy loss, as well as unique properties such as robustness to auxiliary information, composability enabling modular design, and group privacy (Dwork et al., 2014; Abadi et al., 2016). Definition 1. (Differential Privacy, Dwork et al. (2006)) A randomized function $K$ provides $(\epsilon, \delta)$-differential privacy if, for all $S \subseteq \mathrm{Range}(K)$ and all neighboring datasets $D, \hat{D}$ differing on a single entry, $$\Pr[K(D) \in S] \leq e^{\epsilon} \cdot \Pr[K(\hat{D}) \in S] + \delta. \quad (1)$$ This is a standard definition of DP, implying that the outputs of a differentially private algorithm for datasets that vary by a single individual are indistinguishable, bounded by the privacy parameter. Here, $\epsilon$ is a non-negative number otherwise known as the privacy budget. Smaller values more rigorously enforce privacy, but often decrease data utility. An important property of DP is its resistance to post-processing. Given an $(\epsilon, \delta)$-differentially private algorithm $K : \mathcal{D} \to \mathcal{O}$ and an arbitrary randomized mapping $f : \mathcal{O} \to \mathcal{O}'$, the composition $f \circ K : \mathcal{D} \to \mathcal{O}'$ is also differentially private. Currently, the widespread accessibility of data has increased data protection and privacy regulations, leading to a surge of research into applied scenarios for differential privacy (Allen et al. (2019); Ding et al. (2017); Doudalis et al. (2017)). There have been several studies into protecting individuals' privacy during model training (Li et al. (2014); Zhang et al. (2015); Feldman et al. (2018)). In particular, several studies have attempted to solve the problem of preserving privacy in deep learning (Phan et al. (2017); Abadi et al. (2016); Shokri & Shmatikov (2015); Xie et al. (2018); Zhang et al. (2018); Jordon et al. (2018b); Torkzadehmahani et al. (2019)). Here, two main techniques for training models with differential privacy are discussed: DP-SGD Differentially Private Stochastic Gradient Descent (DP-SGD), proposed by Abadi et al. (2016), is one of the first approaches to make the Stochastic Gradient Descent (SGD) computation differentially private. Intuitively, DP-SGD minimizes its loss function while preserving differential privacy by clipping the gradient's $\ell_2$ norm to reduce the model's sensitivity, and adding noise to protect privacy. Further details can be found in the Appendix. PATE Private Aggregation of Teacher Ensembles (PATE), proposed by Papernot et al.
(2016), functions by first deploying multiple teacher models that are trained on disjoint datasets, then deploying the teacher models on unseen data to make predictions. On unseen data, the teacher models "vote" to determine the label; here, random noise is introduced to privatize the results of the vote. The random noise is drawn from the Laplace distribution $\mathrm{Lap}(\lambda)$. PATE further introduces student models, which try to train a model but only have access to the privatized labels garnered from the teachers' vote. By training multiple teachers on disjoint datasets and adding noise to the output predicted by those teacher models, the student cannot relearn an individual teacher's model or related parameters. 2.1 PRIVACY PRESERVING SYNTHETIC DATA MODELS. Synthetic data generation techniques, such as generative adversarial networks (GANs) (Goodfellow et al. (2014); Arjovsky et al. (2017); Xu et al. (2019)), have become a practical way to release realistic fake data for various explorations and analyses. Although these techniques are able to generate high-quality fake data, they may also reveal user-sensitive information and are vulnerable to re-identification and/or membership attacks (Hayes et al. (2019); Hitaj et al. (2017); Chen et al. (2019)). Therefore, in the interest of data protection, these techniques must be formally privatized. In recent years, researchers have combined data synthesis methods with DP solutions to allow for the release of data with high utility while preserving an individual's privacy (Xie et al. (2018); Jordon et al. (2018b); Park et al. (2018); Mukherjee et al. (2019)). Below, we briefly discuss three popular differentially private data synthesizers evaluated in this paper. MWEM Multiplicative Weights Exponential Mechanism (MWEM), proposed by Hardt et al. (2012), is a simple yet effective technique for releasing differentially private datasets. It combines Multiplicative Weights (Hardt & Rothblum, 2010) with the Exponential Mechanism (McSherry & Talwar, 2007) to achieve differential privacy. The Exponential Mechanism is a popular mechanism for designing $\epsilon$-differentially private algorithms that select a best result $r$ from a set of results $R$ using a scoring function $s(B, r)$. Informally, $s(B, r)$ can be thought of as the quality of a result $r$ for a dataset $B$. MWEM starts with a dataset approximation and uses the Multiplicative Weights update rule to improve the accuracy of the approximating distribution by selecting informative queries using the Exponential Mechanism. This process of updates iteratively improves the approximation. DPGAN Following Abadi et al. (2016)'s work, a number of studies utilized DP-SGD and GANs to generate differentially private synthetic data (Xie et al., 2018; Torkzadehmahani et al., 2019; Xu et al., 2018). These models inject noise into the GAN's discriminator during training to enforce differential privacy. DP's guarantee of post-processing privacy means that privatizing the GAN's discriminator enforces differential privacy on the parameters of the GAN's generator, as the mapping between the two does not involve any private data. We use the Differentially Private Generative Adversarial Network (DPGAN) of Xie et al. (2018) as one of our benchmark synthesizers. DPGAN leverages the Wasserstein GAN proposed by Arjovsky et al.
(2017), adds noise to the gradients, and clips only the model weights, ensuring the Lipschitz property of the network. DPGAN has previously been evaluated on image data and Electronic Health Records (EHR). PATE-GAN Jordon et al. (2018b) modified the Private Aggregation of Teacher Ensembles (PATE) framework to apply to GANs in order to preserve the differential privacy of synthetic data. Similarly to DPGAN, PATE-GAN only applies the PATE mechanism to the discriminator. The dataset is first partitioned into $k$ subsets, and $k$ teacher discriminators are initialized. Each teacher discriminator is trained to discriminate between a subset of the original data and fake data produced by the generator. The student discriminators are then trained to distinguish real from fake data using the labels generated by an ensemble of teacher discriminators with random noise added. Lastly, the generator is trained to fool the student discriminator. Jordon et al. (2018b) claim that this method outperforms DPGAN for classification tasks, and present supporting results.
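For illustration, the noisy-vote aggregation at the core of PATE (and of PATE-GAN's teacher-generated labels) can be sketched in a few lines. The exact relationship between the Laplace scale and the privacy budget is omitted here, and the function and variable names are ours, not the papers'.

```python
import numpy as np

rng = np.random.default_rng(0)

def pate_noisy_label(votes, num_classes, lam=1.0):
    """Noisy-argmax aggregation: count the teachers' votes per class,
    add Lap(lam) noise, and return the winning class. A larger lam
    gives more privacy but noisier labels."""
    counts = np.bincount(votes, minlength=num_classes)
    return int(np.argmax(counts + rng.laplace(scale=lam, size=num_classes)))

# e.g. 25 teacher discriminators voting over 2 classes (real / fake)
votes = rng.integers(0, 2, size=25)
print(pate_noisy_label(votes, num_classes=2))
```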
The proposed method works as follows. The given samples are partitioned into two parts: one is used for classifier training and the other for data synthesizer training. Both are trained in a differentially private manner. After training, the DP synthesizer generates samples and the DP classifier labels them, so that the resulting samples can be used as training samples. By the post-processing theorem, the resulting labeled samples are differentially private, and they are published as the synthesized samples.
Recall Loss for Imbalanced Image Classification and Semantic Segmentation
1 INTRODUCTION. Dataset imbalance is an important problem for many computer vision tasks such as semantic segmentation and image classification. In semantic segmentation, imbalance occurs as a result of the natural occurrence and varying sizes of different classes. For example, in an outdoor driving segmentation dataset, light poles and pedestrians are minority classes compared to large classes such as building, sky, and road. These minority classes are often more important than the large classes for safety reasons. In image classification, imbalance can occur as a result of data collection. Some classes are more difficult to obtain data for than others. For example, the iNaturalist dataset (Van Horn et al., 2018) has collected images of over 8000 natural species. Since some species are rare, the dataset exhibits the notorious long-tail distribution. When presented with imbalanced datasets, the standard cross entropy loss often yields unsatisfactory results, as the training process naturally biases towards large classes, resulting in low accuracy and precision on small classes. Researchers have studied the imbalance problem for classification, detection, and segmentation extensively. Most prior research has focused on designing balanced loss functions. We classify existing loss functions under three categories: region-based losses, statistics-balanced losses, and performance-balanced losses. Region-based losses directly optimize region metrics (e.g., the Jaccard index (Rahman & Wang, 2016)) and are mainly popular in medical segmentation applications. Statistics-balanced losses (e.g., LDAM (Cao et al., 2019), the Class-Balanced (CB) loss (Cui et al., 2019)) up- or down-weigh the contribution of a class based on its class margin or class size; however, they tend to encourage excessive false positives in minority classes to improve mean accuracy, especially in segmentation. A recent study by Zhou et al. (2020) also shows that the weighting undermines the generic representation learning capability of the feature extractors. Performance-balanced losses (e.g., focal loss (Lin et al., 2017)) use a certain performance indicator to weigh the loss of each class. As an example, focal loss assumes that the difficulty of a class is correlated with imbalance and can be reflected by the predicted confidence. However, it has not been very successful in other applications for dealing with imbalance, as reported by Cui et al. (2019). We investigate the reasons for this failure in Appendix A.1. Besides various losses, another thread focuses on training strategies to decouple classifier and representation learning in image classification, such as the two-stage (Kang et al., 2020) and two-branch (Zhou et al., 2020) approaches. The decoupling approaches have shown state-of-the-art performance compared to other carefully designed losses. As studied by Zhou et al. (2020), statistics-balanced losses might even negatively affect representation learning because they always upweigh a minority class and ignore many more examples from the large classes. We propose a novel performance-balanced loss using the recall metric to address the imbalance problem. The recall loss down- or up-weighs a class based on the training recall performance of that class. It is an example of hard class mining, as opposed to the hard example mining strategy in the focal loss.
Unlike the statistics-balanced losses, the recall loss dynamically changes its weights during training based on per-class recall performance (see Fig. 1(a)). This dynamism is the key to overcoming many drawbacks of the statistics-balanced losses. In our experiments, the CB loss improves accuracy at the expense of Intersection over Union (IOU), which considers false positives in semantic segmentation. However, our recall loss can effectively balance between the precision and recall of each class, and hence it improves accuracy while maintaining a competitive IOU. Experiments on two benchmark semantic segmentation datasets demonstrate that the proposed recall loss shows significantly better performance than state-of-the-art loss functions used in prior works. We also show that while statistics-balanced losses negatively affect representation learning, the recall loss improves representation learning for imbalanced image classification and achieves state-of-the-art results with our simple decoupled network (Fig. 1(b), (c)) on two common benchmarks. Specifically, we outperform previous state-of-the-art methods on Places-LT by 5.7% and on iNaturalist2018 by 1.1%. Our main contributions are summarized below. • We introduce a novel loss function based on the recall metric. Recall loss weighs the standard cross entropy loss for each class with its instantaneous training recall performance. • The proposed recall loss learns a better semantic segmentation model that provides improved and balanced performance in terms of accuracy and IOU. We demonstrate the loss on both synthetic and real semantic segmentation datasets. • The proposed loss also improves feature learning in image classification. We show state-of-the-art results on two common classification benchmarks with a simple decoupled network. 2 RELATED WORK. Imbalance in Image Classification. Various losses have been proposed to deal with imbalance or long-tail distributions in image classification. The cost-sensitive loss (Khan et al., 2017) proposes to iteratively optimize both the model parameters and a cost-sensitive layer which is integrated into the cost function (more in Appendix B). The Lifted loss (Oh Song et al., 2016) considers all positive and negative pairs in a mini-batch. The Range loss (Zhang et al., 2017) pushes examples in the same class together while forcing different class centers away from each other. More complicated margin-based approaches (Dong et al., 2018; Khan et al., 2019; Hayat et al., 2019) are discussed in Appendix B. The Class-Balanced loss (Cui et al., 2019) motivates a weighted cross entropy loss with the concept of the effective number of samples in each class. LDAM (Cao et al., 2019) also derives a weighted cross entropy loss based on margins between classes. However, DecoupleRC (Kang et al., 2020) pointed out that balanced losses might negatively affect the representation learning process; hence, classifier learning and representation learning should be separated. OLTR (Liu et al., 2019) first learns a good representation and then uses an attention mechanism to learn a balanced classifier. In the same spirit, DRW (Cao et al., 2019) uses two-stage training, and BBN (Zhou et al., 2020) proposes a two-branch network with a custom training schedule. Both methods emphasize generic representation learning in the beginning and rebalancing the small classes at a later stage.
However, both methods require extensive experiments to find a good learning schedule. Drawing from the same idea, we design a Simple Decoupled Network (SDN) that uses two classification heads, where one head is responsible for feature extractor training and the other for classifier training. Imbalance in Image Segmentation. In image segmentation, the Dice and Jaccard indices (Intersection over Union) are commonly used as evaluation metrics. However, the most common training criterion, cross entropy, does not directly optimize these metrics. In medical imaging, researchers have proposed to optimize soft/surrogate versions of these indices. SoftIOU (Rahman & Wang, 2016) proposes to optimize a soft version of the Jaccard index; Lovasz Softmax (Berman et al., 2018) also optimizes the Jaccard index, based on the Lovasz convex extension; SoftDice (Sudre et al., 2017) optimizes a soft version of the Dice index, and similarly softTversky (Salehi et al., 2017) optimizes a soft Tversky index. Table 1 in Appendix 3.4 provides an overview of the different indices. However, concerns have been raised in Taghanaki et al. (2019) on whether these losses consider the tradeoff between false positives and false negatives. We show that they also tend to yield high mean accuracy at the expense of lower mean IOU, whereas our loss improves accuracy while maintaining a competitive IOU. Imbalance in Object Detection. Imbalance is also a problem in object detection, where the foreground-background imbalance is extreme and undermines learning. Online Hard Example Mining (OHEM) (Shrivastava et al., 2016) proposes to find hard examples by ranking the losses and only keeping those with the highest losses. The Seesaw loss (Wang et al., 2020) proposes to dynamically weight the cross entropy loss with cumulative class ratios. The Focal loss (FL) (Lin et al., 2017) chooses to down-weigh easy samples and emphasize hard samples by weighting each sample by $(1 - p)$, where $p$ is the predicted probability for the sample. The weight for each sample dynamically changes with training, and the method never completely discards any samples. Focal loss is especially successful because it is easy to implement and proves effective in the binary foreground-background imbalance setting. We compare the proposed method with these losses on image classification and semantic segmentation. 3 RECALL LOSS. 3.1 MOTIVATION: FROM INVERSE FREQUENCY LOSS TO RECALL LOSS. To motivate our proposed loss, we first analyze the standard cross entropy loss. Let $\{x_n, y_n\}$, $\forall n \in \{1, \ldots, N\}$, where $x_n \in \mathbb{R}^d$ and $y_n \in \{1, \ldots, C\}$, denote the set of training data and corresponding labels. Let $P_n$ denote the predictive softmax distribution over all classes for input $x_n$, and let $P_n^i$ denote the probability of the $i$-th class. The cross entropy loss used in multiclass classification is defined as: $$\mathrm{CE} = -\sum_{n=1}^{N} \log(P_n^{y_n}) = -\sum_{c=1}^{C} \sum_{n : y_n = c} \log(P_n^{y_n}) = -\sum_{c=1}^{C} N_c \log(\bar{P}^c) \quad (1)$$ where $\bar{P}^c = \big(\prod_{n : y_n = c} P_n^{y_n}\big)^{1/N_c}$ denotes the geometric mean confidence of class $c$ and $N_c$ denotes the number of samples in class $c$. As shown in Eq. 1, the conventional cross entropy optimizes the geometric mean confidence of each class, weighted by the number of pixels in each class. When there is a significant class imbalance in the dataset, the loss function biases towards large classes as a result of the larger $N_c$.
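The per-class decomposition in Eq. 1 can be verified numerically. The following self-contained check (with made-up predictions, not the paper's data) confirms that the summed cross entropy equals $-\sum_c N_c \log(\bar{P}^c)$ with $\bar{P}^c$ the geometric mean confidence:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 12, 3
P = rng.dirichlet(np.ones(C), size=N)        # softmax outputs, rows sum to 1
y = rng.integers(0, C, size=N)               # labels

ce = -np.sum(np.log(P[np.arange(N), y]))     # standard (summed) cross entropy

per_class = 0.0
for c in range(C):
    idx = y == c
    n_c = idx.sum()
    if n_c:
        p_bar = np.exp(np.mean(np.log(P[idx, c])))   # geometric mean confidence
        per_class -= n_c * np.log(p_bar)

assert np.isclose(ce, per_class)             # CE = -sum_c N_c log(P_bar^c)
```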
One commonly used loss for imbalanced datasets is the inverse frequency cross entropy loss (Eigen & Fergus, 2015; Badrinarayanan et al., 2017), which assigns more weight to the loss of minority classes. Let $N$ denote the total number of pixels in the training set and $N_c$ the number of pixels belonging to class $c \in \{1, \ldots, C\}$. The frequency of a class is calculated as $\mathrm{freq}(c) = N_c / N$. We show that while the unweighted cross entropy loss optimizes the overall confidence, the loss weighted by inverse frequency optimizes the mean confidence. If we use an inverse frequency weighting, the loss is rebalanced. Note that we leave out the $N$ in $\mathrm{freq}(c)$, as it is shared by all classes. $$\mathrm{InvCE} = -\sum_{c=1}^{C} \frac{1}{\mathrm{freq}(c)} N_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \frac{1}{N_c} N_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \log(\bar{P}^c) \quad (2)$$ As shown in Eq. 2, the weighted loss optimizes the geometric mean confidence of each class directly. However, the inverse frequency loss might not be optimal in practice, because it over-weighs the minority classes and introduces excessive false positives, i.e., it sacrifices precision for recall. This problem is especially severe in semantic segmentation (Chan et al., 2019). Applying the inverse frequency loss to segmentation increases the recall for each class. However, the improvement comes at the cost of excessive false positives, especially for small classes. While the inverse frequency loss solves the problem of imbalance, it focuses on improving only one aspect of the problem in classification, i.e., the recall of each class. To solve this issue, we propose to weigh the inverse frequency loss in Eq. 2 with the false negative ($FN_c$) count for each class. The first insight is that $FN_c$ is bounded by the total number of samples in a class and zero, i.e., $$N_c \geq FN_c \geq 0. \quad (3)$$ By weighting the inverse frequency cross entropy loss in Eq. 2 by the false negative counts for each class, we obtain a moderate loss function which sits between the regular cross entropy loss and the inverse frequency loss. We want to note that the idea of finding a middle ground between these two loss functions has been explored in different forms. For example, the BBN method (Zhou et al., 2020) explicitly uses an adaptor function that controls the contribution of the two losses. However, an obvious drawback is that the adaptor function needs to be extensively searched for, based on empirical evidence and intuition. $$\mathrm{RecallCE} = -\sum_{c=1}^{C} FN_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \frac{FN_c}{N_c} N_c \log(\bar{P}^c) = -\sum_{c=1}^{C} \frac{FN_c}{FN_c + TP_c} N_c \log(\bar{P}^c) \quad (4)$$ As Eq. 4 shows, the loss can be implemented as the regular cross entropy loss weighted by the class-wise false negative rate (FNR). The second insight is that minority classes are most likely more difficult to classify, with a higher FNR, while large classes have a smaller FNR. Therefore, similar to the inverse frequency loss, gradients of minority classes will be boosted and gradients of majority classes will be suppressed. However, unlike frequency weighting, the weighting will not be as extreme, as motivated in Eq. 3. In the next section, we will derive the final dynamic form and compare it to the other performance-balanced loss: the focal loss (Lin et al., 2017).
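As a sketch of how Eq. 4 might be implemented, the snippet below weighs the standard cross entropy with a per-class false-negative rate ($1 - \text{recall}$) estimated from the current batch. The batch-level estimate, the unit weight for classes absent from the batch, and the small epsilon are our assumptions; recall can also be tracked with an exponential moving average across batches.

```python
import torch
import torch.nn.functional as F

def recall_cross_entropy(logits, targets, num_classes, eps=1e-6):
    """Cross entropy weighted per class by FN_c / (FN_c + TP_c) = 1 - recall_c,
    estimated from the current batch (Eq. 4). Classes not present in the
    batch keep weight 1 (an assumption, not specified by the paper)."""
    preds = logits.argmax(dim=1)
    weights = torch.ones(num_classes, device=logits.device)
    for c in range(num_classes):
        mask = targets == c
        if mask.any():
            recall_c = (preds[mask] == c).float().mean()
            weights[c] = 1.0 - recall_c + eps      # FNR weighting
    return F.cross_entropy(logits, targets, weight=weights)
```

Because the weights are recomputed at every step, well-recalled classes are automatically down-weighed and struggling classes up-weighed as training progresses, which is the dynamism the text emphasizes.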
A novel recall loss (RecallCE) that uses dynamically-changing class recalls is proposed in this paper to mitigate class imbalance in long-tailed recognition problems. The class recalls are estimated using either the current batch statistics or an exponential moving average, depending on the number of classes (or the class diversity) present in training batches. Relationships between RecallCE and existing widely-used loss functions are shown mathematically. RecallCE performs competitively with existing loss functions on semantic segmentation tasks and outperforms them on image classification tasks.
Mind the Pad -- CNNs Can Develop Blind Spots
1 MOTIVATION Convolutional neural networks (CNNs) serve as feature extractors for a wide variety of machine-learning tasks. Little attention has been paid to the spatial distribution of activation in the feature maps a CNN computes. Our interest in analyzing this distribution is triggered by mysterious failure cases of a traffic light detector: the detector successfully detects a small but visible traffic light in a road scene. However, it fails completely in detecting the same traffic light in the next frame captured by the ego-vehicle. The major difference between the two frames is a limited shift along the vertical dimension as the vehicle moves forward. The drastic difference in object detection is therefore surprising, given that CNNs are often assumed to have a high degree of translation invariance [8; 17]. The spatial distribution of activation in feature maps varies with the input. Nevertheless, by closely examining this distribution for a large number of samples, we found consistent patterns among them, often in the form of artifacts that do not resemble any input features. This work aims to analyze the root cause of such artifacts and their impact on CNNs. We show that these artifacts are responsible for the mysterious failure cases mentioned earlier, as they can induce 'blind spots' for the object detection head. Our contributions are: • Demonstrating how the padding mechanism can induce spatial bias in CNNs (Section 2). • Demonstrating how spatial bias can impair downstream tasks (Section 3). • Identifying uneven application of 0-padding as a resolvable source of bias (Section 5). • Relating the padding mechanism to the foveation behavior of CNNs (Section 6). • Providing recommendations to mitigate spatial bias and demonstrating how this can prevent blind spots and boost model accuracy. 2 THE EMERGENCE OF SPATIAL BIAS IN CNNS Our aim is to determine to which extent the activation magnitude in CNN feature maps is influenced by location. We demonstrate our analysis on a publicly-available traffic-light detection model [36]. This model implements the SSD architecture [26] in TensorFlow [1], using MobileNet-v1 [13] as a feature extractor. The model is trained on the BSTLD dataset [4], which annotates traffic lights in road scenes. Figure 1 shows two example scenes from the dataset. For each scene, we show two feature maps computed by two filters in the 11th convolutional layer. This layer contains 512 filters whose feature maps are used directly by the first box predictor in the SSD to detect small objects. The bottom row in Figure 1 shows the average response of each of the two aforementioned filters, computed over the test set in BSTLD. The first filter seems to respond mainly to features in the top half of the input, while the second filter responds mainly to street areas. There are visible lines in the two average maps that do not seem to resemble any scene features and are consistently present in the individual feature maps. We analyzed the prevalence of these line artifacts in the feature maps of all 512 filters. The right column in Figure 1 shows the average of these maps per scene, as well as over the entire test set (see supplemental for all 512 maps). The artifacts are largely visible in the average maps, with variations per scene depending on which individual maps are dominant.
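A small diagnostic of this kind can be scripted directly. The sketch below (PyTorch, our own construction rather than the authors' tooling) averages each convolutional layer's post-ReLU response over a set of inputs; passing a single all-zero image reproduces the zero-input probe described next.

```python
import torch
import torch.nn as nn

def average_feature_maps(model, inputs):
    """Average each Conv2d layer's post-ReLU, channel-averaged response
    over an iterable of input batches. With a single all-zero image, any
    non-constant structure in the maps is padding-induced."""
    sums, counts, hooks = {}, {}, []

    def make_hook(name):
        def fn(module, inp, out):
            act = out.detach().relu().mean(dim=1)           # mean over channels
            sums[name] = sums.get(name, 0) + act.sum(dim=0)
            counts[name] = counts.get(name, 0) + act.shape[0]
        return fn

    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            hooks.append(m.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        for x in inputs:
            model(x)
    for h in hooks:
        h.remove()
    return {n: sums[n] / counts[n] for n in sums}

# e.g., padding artifacts only (hypothetical model `cnn`):
# maps = average_feature_maps(cnn, [torch.zeros(1, 3, 224, 224)])
```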
A useful way to make the artifacts stand out is to neutralize scene features by computing the feature maps for a zero-valued input. Figure 2 depicts the resulting average map for each convolutional layer after applying the ReLU units. The first average map is constant, as we expect with a 0-valued input. The second map is also constant except for a 1-pixel boundary, where the value is lower at the left border and higher at the other three borders. We magnify the corners to make these deviations visible. The border deviations increase in thickness and in variance at subsequent layers, creating multiple line artifacts at each border. These artifacts become quite pronounced at ReLU 8, where they start to propagate inwards, resembling the ones in Figure 1. It is evident that the 1-pixel border variations in the second map are caused by the padding mechanism in use. This mechanism pads the output of the previous layer with a 1-pixel 0-valued border in order to maintain the size of the feature map after applying 3x3 convolutions. The maps in the first layer are not impacted because the input we feed is zero-valued. Subsequent layers, however, are increasingly impacted by the padding, as the preceding bias terms do not warrant 0-valued input. It is noticeable in Figure 2 that the artifacts caused by the padding differ across the four borders. To investigate this asymmetry, we analyze the convolutional kernels (often called filters) that produce the feature maps. Figure 3 depicts a per-layer mean of these 3x3 kernels. These mean kernels exhibit different degrees of asymmetry in the spatial distribution of their weights. For example, the kernels in L1 assign (on average) a negative weight at the left border and a positive weight at the bottom. This directly impacts the padding-induced variation at each border. Such asymmetries are related to uneven application of padding, as we explain in Section 5. 3 IMPLICATIONS OF SPATIAL BIAS We demonstrate how feature-map artifacts can cause blind spots for the SSD model. Similar issues arise in several small-object detectors, e.g., for faces and masks, as well as in pixel-oriented tasks such as semantic segmentation and image inpainting (see supplemental for examples). Figure 4 illustrates how the SSD predicts small objects based on the feature maps of the 11th convolutional layer. The SSD uses the pixel positions in these maps as anchors of object proposals. Each proposal is scored by the SSD to represent a target category, with "background" being an implicit category that is crucial to exclude irrelevant parts of the input. In addition to these scores, the SSD computes a bounding box to localize the predicted object at each anchor. We examine object proposals computed at a 1:2 aspect ratio, as they resemble the shape of most traffic lights in the dataset. We visualize the resulting score maps both for the background category and for traffic lights, when feeding a 0-valued input to the SSD. We also visualize the bounding boxes of these proposals in the image space. The SSD predicts the image content to be of the background category at all anchor locations, as evident from the value range in both score maps. Such predictions are expected with an input that contains no traffic lights. However, the line artifacts in the feature maps have a strong impact on the score maps.
These artifacts elevate the likelihood of anchors closer to the top to be classified as background (see the yellow band in the background score map). Conversely, these anchors have significantly lower scores for the traffic light category, compared with the other anchors in the feature map. This difference in impact between the target categories is due to the different weights the SSD assigns to the feature maps for each target. As a result, the artifacts lead to potential blind spots in which the scores for certain categories are artificially muted. To validate whether or not the blind spots hinder object detection, we examine road scenes that contain highly-visible traffic light instances in the impacted area. Figure 4-bottom shows an example of such a scene. The SSD computes a low detection score of 7% when the traffic light lies in the blind spot (see middle image), far below the detection false-positive cutoff. Shifting the scene image upwards or downwards makes the instance detectable with a high score as long as it lies outside the blind spot. This explains the failure cases mentioned in Section 1. To further validate this effect, we run the SSD on baseline images, each containing one traffic light instance at a specific location in the input. We store the detection score for each instance. Figure 5a depicts the computed scores in a 2D map. It is evident that the model fails to detect the traffic light instance exactly when it is located within the 'blind spot' band. The artifacts further disrupt the localization of the objects, as evident in the top-right plot in Figure 4, which shows per-anchor object proposals computed for a 0-valued input. 4 REMINDER : WHY IS PADDING NEEDED IN CNNS ? Padding is applied at most convolutional layers in CNNs to serve two fundamental purposes: Maintaining feature map size A padding that satisfies this property is often described as SAME or HALF padding. FULL padding expands the maps by kernel size − 1 along each dimension. VALID padding performs no padding, eroding the maps by the same amount. SAME padding is important to (1) design deep networks that can handle arbitrary input sizes (a challenge in the presence of gradual erosion), (2) maintain the aspect ratio of non-square input, and (3) concatenate feature maps from different layers, as in Inception [39] and ResNet [12] models. Reducing information bias against the boundary Consider a 3×3 kernel applied to a 2D input. An input location at least 2 pixels away from the boundary contributes to nine local convolution operations when computing the feature map. On the other hand, the corner is involved only once under VALID padding, four times under 1-pixel SAME 0-padding, and nine times under 2-pixel FULL 0-padding. With SAME 0-padding, the cumulative contribution differences among the input pixels grow exponentially over the CNN layers. We refer to such uneven treatment of input pixels as the foveation behavior of the padding mechanism, and elaborate on this in Section 6. We next explore solutions to the issues that cause padding to induce spatial bias. 5 ELIMINATING UNEVEN APPLICATION OF PADDING While useful to reduce bias against the boundary, applying padding at down-sampling layers can lead to asymmetry in CNN internals.
Figure 6a illustrates the source of this asymmetry when strided convolution is used for downsampling: at one side of the feature map the padding is consumed by the kernel, while at the other side it is not. To warrant even application of padding throughout the CNN, the following must hold at all $d$ down-sampling layers, where $(h_i, w_i)$ is the output shape at the $i$-th layer, with $k_{h_i} \times k_{w_i}$ as kernel size, $(s_{h_i}, s_{w_i})$ as strides, and $(p_{h_i}, p_{w_i})$ as padding amount (refer to Appendix A for a proof): $$\forall i \in \{1, \ldots, d\}: \quad h_{i-1} = s_{h_i} \cdot (h_i - 1) + k_{h_i} - 2 p_{h_i} \;\;\wedge\;\; w_{i-1} = s_{w_i} \cdot (w_i - 1) + k_{w_i} - 2 p_{w_i} \quad (1)$$ The values $h_0$ and $w_0$ represent the CNN input dimensions. The above constraints are not always satisfied during training or inference with arbitrary input dimensions. For example, ImageNet classifiers based on ResNet [12] and MobileNet [13] contain five down-sampling layers ($d = 5$) that apply 1-pixel 0-padding before performing 2-strided convolution. To avoid uneven application of padding, the input to these CNNs must satisfy the following, as explained in Appendix A: $$h_0 = a_1 \times 2^d + 1 = 32 \cdot a_1 + 1 \quad \text{and} \quad w_0 = a_2 \times 2^d + 1 = 32 \cdot a_2 + 1, \quad \text{where } a_1, a_2 \in \mathbb{N}^+ \quad (2)$$ The traditional1 and prevalent input size for training ImageNet models is 224×224. This size violates Eq. 2, leading to uneven padding at every down-sampling layer in ResNet and MobileNet models, where 0-padding is effectively applied only at the left and top sides of the layer input. This over-represents zeros at the top and left sides of the 3×3 feature-map patches the filters are convolved with during training. The top row of Figure 6b shows per-layer mean filters in three ResNet models in PyTorch [33], pre-trained on ImageNet with 224×224 images. In all of these models, a few of the mean filters, adjacent to down-sampling layers, exhibit stark asymmetry about their centers. We increase the image size to 225×225 without introducing additional image information2. This size satisfies Eq. 2, warranting even application of padding at every downsampling layer in the above models. Retraining the models with this size strongly reduces this asymmetry, as evident in the bottom row of Figure 6b. This, in turn, visibly boosts the accuracy in all models we experimented with, as we report in Table 1. The accuracy did not improve further when we retrained two of the models, ResNet-18 and ResNet-34, on 226×226 images. This provides evidence that the boost is due to eliminating uneven padding and not merely due to increasing the input size. 1 This size has been used to facilitate model comparison on ImageNet since the inception of AlexNet. 2 This is done via constant padding. The side to pad with one pixel is chosen at random to balance out the application of padding at both sides over the training set. No additional padding is applied at further layers. Replacing 0-padding with a padding method that reuses feature map values can alleviate the asymmetry in the learned filters in the presence of unevenly applied padding. Another possibility is to use a rigid downsampling kernel, such as max-pooling, instead of a learned one. Appendix C demonstrates both possibilities. Finally, antialiasing before downsampling [43] can strongly reduce the asymmetry, as we elaborate in Section 8 and in Appendix E.
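Eqs. 1 and 2 translate into a simple divisibility check. The sketch below simulates a stack of down-sampling layers and reports whether padding is consumed evenly at every layer; the five 3×3, stride-2, pad-1 layers are a simplification of the actual ResNet/MobileNet stacks, used here only to illustrate the arithmetic.

```python
def padding_is_even(h0, layers):
    """Check Eq. (1) layer by layer: padding at a strided layer is consumed
    evenly iff the kernel tiles the padded input exactly, i.e.
    (h_in + 2p - k) is divisible by the stride s.
    `layers` is a list of (kernel, stride, pad) per down-sampling layer."""
    h = h0
    for k, s, p in layers:
        if (h + 2 * p - k) % s != 0:
            return False
        h = (h + 2 * p - k) // s + 1
    return True

# simplified ResNet/MobileNet-style stack: five 2-strided, 1-padded 3x3 layers
stack = [(3, 2, 1)] * 5
print(padding_is_even(224, stack))   # False -> uneven padding (violates Eq. 2)
print(padding_is_even(225, stack))   # True  -> 225 = 32 * 7 + 1 satisfies Eq. 2
```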
Even when no padding is applied ($p_{h_i} = 0$ or $p_{w_i} = 0$), an input size that does not satisfy Eq. 1 can lead to uneven erosion of feature maps, in turn reducing the contribution of pixels from the impacted sides (Fig. 7e). Satisfying Eq. 1 imposes a restriction on the input size, e.g., to values in increments of $2^d = 32$ with the above models (193×193, 225×225, 257×257, ...). Depending on the application domain, this can be guaranteed either by resizing an input to the closest increment, or by padding it accordingly with suited values. 6 PADDING MECHANISM AND FOVEATION By foveation we mean the unequal involvement of input pixels in convolutional operations throughout the CNN. Padding plays a fundamental role in the foveation behavior of CNNs. We visualize this behavior by means of a foveation map that counts, for each input pixel, the number of convolutional paths through which it can propagate information to the CNN output. We obtain these counts by computing the effective receptive field [28] for the sum of the final convolutional layer after assigning all weights in the network to 1 (code in supplemental). Neutralizing the weights is essential to obtain per-pixel counts of input-output paths that reflect the foveation behavior. Figure 7a shows the extensive foveation effect when no padding is applied. The diminishing contribution of vast areas of the input explains the drastic drop in accuracy recently observed under VALID padding [16]. In contrast, FULL 0-padding does not incur foveation, however at the cost of increasing the output size after each layer, making it impractical, as explained in Section 4. SAME 0-padding incurs moderate foveation at the periphery, whose absolute extent depends on the number of convolutional layers and their filter sizes. Its relative extent depends on the input size: the larger the input, the larger the ratio of the constant area in yellow (refer to Appendix B for a detailed example). Figure 7b shows the foveation behavior of alternatives to SAME 0-padding that have roots in wavelet analysis [19] and image processing [27]. Mirror padding mirrors pixels at the boundary to fill the padding area. When the border is included (SYMMETRIC mode in TensorFlow), all input pixels have an equal number of input-output paths3, resulting in a uniform foveation map. When the border is not included (REFLECT mode both in PyTorch and in TensorFlow), the map exhibits bias against the border and towards a contour in its proximity. This bias is amplified over multiple layers. Replication padding exhibits the opposite bias when the padding area is wider than 1 pixel. This is because it replicates the outer 1-pixel border multiple times to fill this area3. The method is equivalent to SYMMETRIC if the padding area is 1-pixel wide. Circular padding wraps opposing borders, enabling the kernels to seamlessly operate on the boundary and resulting in a uniform map. Partial Convolution [22] has been proposed as a padding method that treats pixels outside the original image as missing values and rescales the computed convolutions accordingly [23]. Its foveation behavior resembles reflective padding3. Distribution padding [30] resizes the input to fill the padding area around the original feature map, aiming at preserving the distribution of the map. Its foveation map is largely uniform, except for the corners and edges.
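The foveation-map construction described above (all weights set to 1, then backpropagating the sum of the final map to the input) can be sketched as follows. The depth, kernel size, and single-channel stack are our simplifications, and PyTorch's 'zeros' and 'circular' padding modes stand in for SAME 0-padding and circular padding respectively.

```python
import torch
import torch.nn as nn

def foveation_map(input_size=64, layers=6, k=3, padding_mode="zeros"):
    """Per-pixel count of convolutional input-output paths: stack conv
    layers with all weights set to 1 and no bias, then backprop the sum
    of the final feature map to the input."""
    convs = nn.Sequential(*[
        nn.Conv2d(1, 1, k, padding=k // 2, padding_mode=padding_mode, bias=False)
        for _ in range(layers)])
    for conv in convs:
        nn.init.constant_(conv.weight, 1.0)
    x = torch.ones(1, 1, input_size, input_size, requires_grad=True)
    convs(x).sum().backward()
    return x.grad[0, 0]

print(foveation_map(padding_mode="zeros")[:3, :3])     # falloff at the corner
print(foveation_map(padding_mode="circular")[:3, :3])  # uniform counts
```

Because every weight equals 1, the gradient of the output sum with respect to an input pixel is exactly the number of paths from that pixel to the output, which is the quantity the foveation maps visualize.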
Impact of input size. Besides influencing the relative extent of foveation effects, the input size also determines the presence of uneven padding (or uneven feature-map erosion), as we discussed in Section 5. Figure 7e shows the foveation map for VGG-19 with a 127×127 input. This input violates Eq. 1 at every downsampling layer (appendix A), leading to successive feature-map erosion at the bottom and right sides, which is reflected in the foveation map (see appendix B for a detailed example). The bottom-right part of the input is hence less involved in the CNN computations.

Impact of dilation. We assign a dilation factor of 2 to all VGG-19 convolutional layers. While this exponentially increases the receptive field of the neurons at deeper layers [42], dilation doubles the extent of the non-uniform peripheral areas that emerge with SAME 0-padding, as evident in Figure 7c. SYMMETRIC and circular padding maintain uniform foveation maps regardless of dilation. In contrast, dilation increases the complexity of these maps for REFLECT and replication padding.

Impact of strides. Whether learned or based on pooling, downsampling layers can amplify the impact of succeeding convolutional layers on foveation behaviour. Furthermore, these layers can cause input pixels to vary in the count of their input-output paths. This can happen when the kernel size is not divisible by the stride, leading to a checkerboard pattern in the foveation maps. This manifests in ResNet models, as we illustrate in appendix B. In VGG-19, all max-pooling layers use a stride of 2 and a kernel size of 2. Changing the kernel size to 3 leads to a checkerboard pattern, as evident in Figure 7d. Such effects were shown to impact pixel-oriented tasks [32].

The padding technique and its foveation behaviour have a direct impact on feature-map artifacts (Section 7) and on the ability of CNNs to encode spatial information (Section 8). Understanding the foveation behavior is key to determining how suited a padding method is for a given task. For example, small object detection is known to be challenging close to the boundary [26], in part due to the foveation behavior of SAME 0-padding. In Figure 5b, we change the padding method in the SSD to SYMMETRIC. The stimulus is noticeably more detectable at the boundary, compared with 0-padding (since the input size causes uneven application of padding, the right and bottom borders remain challenging). In contrast, ImageNet classification is less sensitive to foveation effects because the target objects are mostly located away from the periphery. Nevertheless, the padding method was shown to impact classification accuracy [23] because it still affects feature-map artifacts.

7 PADDING METHODS AND FEATURE MAP ARTIFACTS

It is also noticeable that the score map in Figure 5b is more uniform than in Figure 5a. In particular, under SYMMETRIC padding the model is able to detect traffic lights placed in the blind spots of the original 0-padded model. To verify whether the line artifacts in Figure 2 are mitigated, we inspect the mean feature maps of the adapted model. With a constant input, SYMMETRIC padding warrants constant maps throughout the CNN because it reuses the border to fill the padding area. Instead, we average these maps over 30 samples generated uniformly at random. Figure 8 depicts the mean maps, which are largely uniform, unlike the case with 0-padding.
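One way to produce such mean feature maps is with forward hooks; the sketch below (an illustrative helper of our own, not the paper's released code) averages each convolutional layer's output over random inputs, so padding-induced line structure near the borders becomes visible in the returned maps.

```python
import torch
import torch.nn as nn

def mean_feature_maps(model, layer_types=(nn.Conv2d,), n_samples=30, size=224):
    # Average each matching layer's output over random inputs; line-shaped
    # structure near the borders of these mean maps exposes padding artifacts.
    model.eval()
    maps, hooks = {}, []
    def make_hook(name):
        def hook(module, inputs, output):
            m = output.detach().mean(dim=(0, 1))  # average over batch, channels
            maps[name] = maps.get(name, 0) + m / n_samples
        return hook
    for name, module in model.named_modules():
        if isinstance(module, layer_types):
            hooks.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        for _ in range(n_samples):
            model(torch.rand(1, 3, size, size))  # uniform random samples
    for h in hooks:
        h.remove()
    return maps
```

For example, `mean_feature_maps(torchvision.models.resnet18(pretrained=True))` returns one 2-D map per convolution, which can be plotted directly.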
To further analyze the impact of SYMMETRIC padding, we retrain the adapted model following the original training protocol. This significantly improves the average precision (AP), as reported in Table 2 under different overlap thresholds (matching IoU), confirming that small object detection is particularly sensitive to feature-map artifacts.

Of the padding methods listed in Section 6, mirror padding in both SYMMETRIC and REFLECT modes, PartialConv, and circular padding are generally effective at reducing the feature-map artifacts that emerge under zero padding, in particular salient line patterns. In contrast, distribution padding can induce significant artifacts. Refer to appendix D for comparative examples of artifacts under the aforementioned padding schemes.

Artifact magnitude and propagation. While feature-map artifacts are induced by the padding mechanism at the boundary, their magnitude and inward propagation in the maps are impacted by several architectural aspects of CNNs. In particular, certain normalization schemes such as batchnorm [15] tend to limit the range of variation within a feature map and to relatively harmonize this range across different maps. This, in turn, impacts how possible artifacts in these maps accumulate when they are processed by the next convolutional layer. Similarly, artifacts that manifest after applying ReLU units are of a positive sign. These factors were instrumental in the formation of the potential blind spots described in Section 3. We hence recommend involving non-convolutional layers when inspecting the feature maps. Besides having a possible impact on artifact magnitude, several aspects of convolution arithmetic, such as filter size and dilation factors, can also impact the spatial propagation of these artifacts.

8 RELATED FINDINGS AND TAKEAWAYS

Handling the boundary is an inherent challenge when dealing with spatial data [9]. Mean padding is known to cause visual artifacts in traditional image processing, with alternative methods proposed to mitigate them [24]. CNNs have often been assumed to deal with such effects implicitly. Innamorati et al. [14] propose learning separate sets of filters dedicated to the boundaries to avoid impacting the weights learned by regular filters. A grouped padding strategy, proposed to support 2×2 filters [41], offers avenues to mitigate uneven padding and the corresponding skewness in foveation maps without restrictions on input size (see our note in appendix B for an explanation). Finally, insights from signal and image processing [10; 11] could inspire further CNN padding schemes.

Zero padding has recently been linked to CNNs' ability to encode position information [7; 16; 18; 29]. In contrast, circular padding was shown to limit this ability [7] and to boost shift invariance [35]. The input sizes in those studies do induce uneven padding. This can be, in part, the underlying mechanism behind the aforementioned ability. Whether or not this ability is desirable depends on the task, with several methods proposed to explicitly encode spatial information [5; 6; 20; 25; 29; 31].

Downsampling using max-pooling or strided convolution has been shown to impact shift invariance in CNNs by incurring aliasing effects [3; 38; 43].
These effects can manifest in the same symptoms we reported in Section 1, albeit for a different reason. Zhang [43] demonstrated how blurring the feature maps before subsampling mitigates aliasing effects and improves the ImageNet classification accuracy of various popular CNNs. We analyzed the mean filters in antialiased MobileNet and ResNet models pre-trained on ImageNet under 0-padding, with 224×224 as input size (refer to Appendix E). We found that antialiasing can also mitigate the asymmetry of mean filters that exhibited high asymmetry in the baseline models, especially at deeper layers. This is remarkable given that these models are trained on 224×224 images, which incurs one-sided zero padding at every downsampling layer. This could, in part, be attributed to the ability of the BlurPool operator used in antialiased CNNs to smoothen the acuity of zero-padded borders, in turn reducing the value imbalance incurred by one-sided padding. Further analysis is needed to examine the interaction between padding and aliasing effects in CNNs and to establish possible synergy between antialiasing and eliminating uneven application of padding.

Luo et al. [28] drew connections between effective receptive fields and foveated vision. Our analysis links foveation behavior with the padding scheme and suggests that it might occur implicitly in CNNs when using VALID or SAME 0-padding, without the need for explicit mechanisms [2; 21]. Furthermore, it explains the drastic accuracy drop noted by [16] under VALID padding, which is amplified by feature-map erosion.

Choosing a padding method. SAME 0-padding is by far the most widely used method. Compared with other methods, it can enable as much as 50% faster training and inference. Problem-specific constraints can dictate different choices [34; 35; 40]. In the absence of a universally superior padding method, we recommend considering multiple ones while paying attention to the nature of the data and the task, as well as to the following aspects:

• Feature-map statistics: 0-padding can alter the value distribution within the feature maps and can shift their mean value in the presence of ReLU units. The alternatives presented in Section 6 tend to preserve this distribution, thanks to reusing existing values in the maps (see the sketch below).
• Foveation behavior: 0-padding might not be suited for tasks that require high precision at the periphery, unlike circular and SYMMETRIC mirror padding.
• Interference with image semantics (esp. with a padding amount > 1 pixel): for example, circular padding could introduce border discontinuities unless the input is panoramic [35].
• Potential to induce feature-map artifacts: all alternatives to 0-padding induce relatively fewer artifacts, except for distribution padding [30] (see appendix D).

We also recommend eliminating uneven padding at downsampling layers both at training and at inference time, as we illustrated in Section 5. This is especially important when zero padding is applied and the downsampling is learned.
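To make the first criterion concrete, here is a minimal check (ours, using PyTorch's built-in `padding_mode` options) of how the padding method affects feature-map statistics under a constant input and a mean filter:

```python
import torch
import torch.nn as nn

x = torch.ones(1, 1, 8, 8)  # constant input
for mode in ["zeros", "reflect", "replicate", "circular"]:
    conv = nn.Conv2d(1, 1, 3, padding=1, padding_mode=mode, bias=False)
    nn.init.constant_(conv.weight, 1.0 / 9.0)  # 3x3 mean (box) filter
    with torch.no_grad():
        y = conv(x)
    print(f"{mode:9s} min={y.min().item():.3f} max={y.max().item():.3f}")
```

Only `zeros` attenuates the border values (down to 4/9 at the corners); the value-reusing alternatives leave the constant map intact, matching the distribution-preservation argument above.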
The scripts used to generate the visualizations in this paper are available in the supplemental as well as at http://mind-the-pad.github.io.

Summary. We demonstrated how the padding mechanism can induce spatial bias in CNNs, in the form of skewed kernels and feature-map artifacts. These artifacts can be highly pronounced with the widely used 0-padding when applied unevenly at the four sides of the feature maps. We demonstrated how such uneven padding can inherently take place in state-of-the-art CNNs, and how the artifacts it causes can be detrimental to certain tasks such as small object detection. We provided visualization methods to expose these artifacts and to analyze the implications of various padding schemes for boundary pixels. We further proposed solutions to eliminate uneven padding and to mitigate spatial bias in CNNs. Further work is needed to closely examine the implications of spatial bias and foveation in various applications (see the supplementary for examples), as well as the impact of padding on recurrent models and 1-D CNNs.

ACKNOWLEDGEMENT

We are thankful to Ross Girshick for providing useful recommendations and experiment ideas, and to Shubham Muttepawar for implementing an interactive tool out of our analysis scripts, guided by our front-end specialist Edward Wang and our AI user-experience designer Sara Zhang.

REFERENCES

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] E. Akbas and M. P. Eckstein. Object detection through search with a foveated visual system. PLoS Computational Biology, 13(10):e1005743, 2017.
[3] A. Azulay and Y. Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? Journal of Machine Learning Research (JMLR), 20(184):1–25, 2019.
[4] K. Behrendt, L. Novak, and R. Botros. A deep learning approach to traffic lights: Detection, tracking, and classification. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 1370–1377. IEEE, 2017.
[5] C.-A. Brust, S. Sickert, M. Simon, E. Rodner, and J. Denzler. Convolutional patch networks with spatial prior for road detection and urban scene understanding. In International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), 2015.
[6] G. F. Elsayed, P. Ramachandran, J. Shlens, and S. Kornblith. Revisiting spatial invariance with low-rank local connectivity. In International Conference on Machine Learning (ICML), 2020.
[7] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller. Inverting gradients–how easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053, 2020.
[8] R. Gens and P. M. Domingos. Deep symmetry networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2537–2545, 2014.
[9] D. Griffith and C. Amrhein. An evaluation of correction techniques for boundary effects in spatial statistical analysis: traditional methods. Geographical Analysis, 15(4):352–360, 1983.
[10] V. Gupta and N. Ramani. A note on convolution and padding for two-dimensional data. Geophysical Prospecting, 26(1):214–217, 1978.
[11] L. Hamey. A functional approach to border handling in image processing. In International Conference on Digital Image Computing: Techniques and Applications, pp. 1–8, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
[13] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam.
MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[14] C. Innamorati, T. Ritschel, T. Weyrich, and N. J. Mitra. Learning on the edge: Investigating boundary filters in CNNs. International Journal of Computer Vision (IJCV), pp. 1–10, 2019.
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448–456, 2015.
[16] M. A. Islam, S. Jia, and N. D. Bruce. How much position information do convolutional neural networks encode? In International Conference on Learning Representations (ICLR), 2020.
[17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2017–2025, 2015.
[18] O. S. Kayhan and J. C. van Gemert. On translation invariance in CNNs: Convolutional layers can exploit absolute spatial location. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[19] T. L. Kijewski-Correa. Full-scale measurements and system identification: A time-frequency perspective. PhD thesis, University of Notre Dame, 2003.
[20] I. Kim, W. Baek, and S. Kim. Spatially attentive output layer for image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[21] H. Larochelle and G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1243–1251, 2010.
[22] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In European Conference on Computer Vision, 2018.
[23] G. Liu, K. J. Shih, T.-C. Wang, F. A. Reda, K. Sapra, Z. Yu, A. Tao, and B. Catanzaro. Partial convolution based padding. arXiv preprint arXiv:1811.11718, 2018.
[24] R. Liu and J. Jia. Reducing boundary artifacts in image deconvolution. In IEEE International Conference on Image Processing (ICIP), pp. 505–508, 2008.
[25] R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9605–9616, 2018.
[26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pp. 21–37, 2016.
[27] S. Lou, X. Jiang, and P. J. Scott. Fast algorithm for morphological filters. Journal of Physics: Conference Series, 311(1):012001, 2011.
[28] W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4898–4906, 2016.
[29] R. Murase, M. Suganuma, and T. Okatani. How can CNNs use image position for segmentation? arXiv preprint arXiv:2005.03463, 2020.
[30] A.-D. Nguyen, S. Choi, W. Kim, S. Ahn, J. Kim, and S. Lee. Distribution padding in convolutional neural networks. In IEEE International Conference on Image Processing (ICIP), pp. 4275–4279, 2019.
[31] D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi.
Semi-convolutional operators for instance segmentation. In European Conference on Computer Vision (ECCV), pp. 86–102, 2018.
[32] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.
[33] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8024–8035, 2019.
[34] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In European Conference on Computer Vision (ECCV), pp. 75–91, 2016.
[35] S. Schubert, P. Neubert, J. Pöschmann, and P. Pretzel. Circular convolutional neural networks for panoramic images and laser data. In IEEE Intelligent Vehicles Symposium (IV), pp. 653–660, 2019.
[36] E. Shalnov. BSTLD-demo: A sample project to train and evaluate model on BSTLD. https://github.com/e-sha/BSTLD_demo, 2019.
[37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
[38] G. Sundaramoorthi and T. E. Wang. Translation insensitive CNNs. arXiv preprint arXiv:1911.11238, 2019.
[39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, 2015.
[40] S. Vashishth, S. Sanyal, V. Nitin, N. Agrawal, and P. Talukdar. InteractE: Improving convolution-based knowledge graph embeddings by increasing feature interactions. In AAAI Conference on Artificial Intelligence, 2020.
[41] S. Wu, G. Wang, P. Tang, F. Chen, and L. Shi. Convolution with even-sized kernels and symmetric padding. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1192–1203, 2019.
[42] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations (ICLR), 2016.
[43] R. Zhang. Making convolutional networks shift-invariant again. In International Conference on Machine Learning (ICML), 2019.

A ELIMINATING UNEVEN APPLICATION OF PADDING

Consider a CNN with $d$ downsampling layers, $L_1, L_2, \dots, L_d$. To simplify the analysis, and without loss of generality, we assume that the kernels in these layers are of square shape and that all other layers maintain their input size. We denote by $s_i$ and $k_i$ the stride and kernel size of layer $L_i$, by $h_i$ and $w_i$ the dimensions of the feature maps computed by $L_i$, and by $h_0$ and $w_0$ the size of the CNN input. We examine the conditions that warrant no uneven application of padding along the height dimension; parallel conditions apply to the width dimension. We denote by $\bar{h}_i$ the height of the padded input to $L_i$. The effective portion $\hat{h}_i \leq \bar{h}_i$ of this amount processed by the convolutional filters in $L_i$ is equal to

$$\hat{h}_i = s_i \cdot (h_i - 1) + k_i$$

Our goal is to warrant that $\hat{h}_i = \bar{h}_i$, to prevent information loss and to avoid uneven padding along the vertical dimension when the unconsumed part $\bar{h}_i - \hat{h}_i < s_i$ is an odd number.
Since the non-downsampling layers maintain their input size, we can formulate the height of the padded input as $\bar{h}_i = h_{i-1} + 2 p_i$, where $p_i$ is the amount of padding applied at the top and at the bottom of the input in $L_i$. Accordingly, we can warrant no uneven padding if the following holds:

$$\forall i \in \{1, \dots, d\}: \quad h_{i-1} = s_i \cdot (h_i - 1) + k_i - 2 p_i \quad (3)$$

Example 1: ResNet-18. This network contains five downsampling layers ($d = 5$), all of which use a stride of 2. Despite performing downsampling, all of these layers apply the padding amount entailed by SAME padding to avoid information bias against the boundary. In the four layers that have 3×3 kernels ($k_i = 3$), the amount used is $p_i = 1$. For the first layer, which has 7×7 kernels, this amount is equal to 3. In both cases, the term $k_i - 2 p_i$ in Eq. 3 is equal to 1. To warrant no uneven padding along the vertical dimension, the heights of the feature maps at downsampling layers should hence satisfy

$$\forall i \in \{1, \dots, d\}: \quad h_{i-1} = 2 \cdot (h_i - 1) + 1 = 2 h_i - 1$$

Accordingly, the input height should satisfy $h_0 = 2^d \cdot h_d - (2^d - 1) = 2^d \cdot (h_d - 1) + 1$, where $h_d$ is the height of the final feature map and can be any natural number larger than 1 to avoid the degenerate case of a 1×1 input. The same holds for the input width: $w_0 = 2^d \cdot (w_d - 1) + 1$. A 225×225 input satisfies these constraints since $225 = 2^5 \cdot 7 + 1$, yielding even padding in all five downsampling layers and output feature maps of size 8×8.

Example 2: VGG-16. This network contains five max-pooling layers ($d = 5$), all of which use a stride of 2 and a kernel size of 2, and apply no padding. To warrant no uneven padding along the vertical dimension, the heights of the feature maps at all of these layers should hence satisfy

$$\forall i \in \{1, \dots, d\}: \quad h_{i-1} = 2 \cdot (h_i - 1) + 2 = 2 h_i$$

Accordingly, the input dimensions should satisfy

$$h_0 = 2^d \cdot h_d \quad \text{and} \quad w_0 = 2^d \cdot w_d \quad (4)$$

A 224×224 input satisfies these constraints since $224 = 2^5 \cdot 7$, causing no feature-map erosion at any downsampling layer and resulting in output feature maps of size 7×7.
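Both families of constraints are easy to enumerate; the following one-liners (illustrative, with the layer count fixed to $d = 5$ as in the examples above) list the input sizes that avoid uneven padding or erosion:

```python
def resnet_valid_sizes(d=5, count=8):
    # From Example 1: h0 = 2^d * (hd - 1) + 1, with hd >= 2.
    return [2**d * (hd - 1) + 1 for hd in range(2, 2 + count)]

def vgg_valid_sizes(d=5, count=8):
    # From Example 2 (Eq. 4): h0 = 2^d * hd.
    return [2**d * hd for hd in range(1, 1 + count)]

print(resnet_valid_sizes())  # [33, 65, 97, 129, 161, 193, 225, 257]
print(vgg_valid_sizes())     # [32, 64, 96, 128, 160, 192, 224, 256]
```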
B THE EXTENT OF FOVEATION UNDER SAME 0-PADDING

We illustrate how the absolute extent of foveation under SAME 0-padding depends on the number of convolutional layers, and how its relative extent depends on the input size. In the following maps, color represents the number of paths to the CNN output for each input pixel. Note: the checkerboard pattern is caused by downsampling layers in ResNet that use 3×3 kernels and a stride of 2. In the next figure, we illustrate how uneven application of padding impacts the foveation maps. Note: it is possible to rectify the skewness in the second foveation map by alternating the side where one-sided padding is applied between successive downsampling layers. This, however, does not mitigate the skewness in the learned filters (see the next section).

C THE IMPACT OF THE PADDING METHOD ON LEARNED WEIGHTS

In the presence of uneven application of padding, 0-padding causes skewness in the learned weights because the filters are exposed more frequently to feature-map patches with zeros at their top and left sides. Redundancy methods such as circular or mirror padding mitigate such skewness because they fill the padding areas with values taken from the feature maps. PartialConv also mitigates such skewness because it assumes the pixels in the padding area are missing, and rescales the partial convolutional sum to account for them. Below we show the effectiveness of these alternatives in mitigating the skewness in three ResNet architectures.

What if no padding is applied during downsampling? VGG models perform downsampling using 2×2 pooling layers that do not apply any padding. Accordingly, the mean filters do not exhibit significant skewness, even if the input size does not satisfy Eq. 4.

D THE IMPACT OF PADDING METHODS ON FEATURE-MAP ARTIFACTS

We show per-layer mean feature maps in ResNet-18 under different padding methods. The mean maps are averaged over 20 input samples generated at random.

E THE IMPACT OF ANTIALIASING ON THE LEARNED WEIGHTS

We demonstrate how antialiasing [43] significantly reduces the asymmetry of mean filters around downsampling layers, even in the presence of unevenly applied zero padding.

F FOVEATION ANALYSIS OF PADDING ALGORITHMS

Refer to http://mind-the-pad.github.io for an interactive and animated visual illustration of padding algorithms and their foveation behavior. This appendix serves as a print version. Among the SAME padding algorithms we discussed in the manuscript, two algorithms warrant that each input pixel is involved in an equal number of convolutional operations, leading to uniform foveation maps: circular padding and SYMMETRIC mirror padding. In contrast, this number varies under zero padding, REFLECT mirror padding, replication padding, and partial convolution.

We illustrate in detail how each padding algorithm treats the input pixels. For this purpose we show step by step how each pixel is processed by the convolutional kernel. We choose a set of pixels sufficient to expose the behavior of the respective algorithm. This set spans an area within two or three pixels of the boundary that encompasses all relevant cases for the analysis, and is situated at the top-left corner; the behavior at the other corners is analogous. All illustrations use a stride of 1. Except for VALID, all configurations warrant SAME padding.

• VALID Padding: this algorithm is illustrated on a 3×3 kernel without dilation. A larger kernel size or dilation factor will increase the foveation effect.
• Zero Padding: this algorithm is illustrated on a 3×3 kernel without dilation. A larger kernel size or dilation factor will increase the foveation effect.
• Circular Padding: this algorithm is illustrated on a 3×3 kernel without dilation. It is straightforward to prove that the algorithm warrants equal treatment of the pixels irrespective of the kernel size or dilation factor. This is because it effectively applies circular convolution: once the kernel hits one side, it can seamlessly operate on the pixels of the other side. Circular convolution hence renders the feature map as infinite to the kernel, warranting that edge pixels are treated in the same manner as interior pixels.
• Mirror Padding (SYMMETRIC): this algorithm warrants that each pixel is involved in the same number of convolutional operations. It is important to notice that, unlike under circular convolution, these operations do not utilize the kernel pixels uniformly, as we demonstrate in detail.
We illustrate the algorithm behavior under the following settings:
– 3×3 kernel and dilation factor of 1.
– 5×5 kernel and dilation factor of 1.
– 3×3 kernel and dilation factor of 2.
– 2×2 kernel and dilation factor of 1, along with a grouped padding strategy to compensate for uneven padding [41].
– 4×4 kernel and dilation factor of 1, along with a grouped padding strategy.
• Mirror Padding (REFLECT): this algorithm is illustrated on a 3×3 kernel without dilation.
• Replication Padding: this algorithm is illustrated on a 5×5 kernel without dilation. We choose this kernel size since a 3×3 kernel under SAME padding would render the algorithm equivalent to SYMMETRIC mirror padding.
• Partial Convolution: this algorithm is illustrated on a 3×3 kernel without dilation. Its foveation behavior is analogous to REFLECT mirror padding.

[Figures: step-by-step illustrations for each padding algorithm listed above (VALID, zero, circular, SYMMETRIC and REFLECT mirror, SYMMETRIC mirror with grouped padding, replication, and partial convolution), showing for each input pixel the number of convolutional operations it is involved in, which kernel cells those operations utilize, and whether the resulting foveation map is uniform.]
The paper studies the effect of padding on artefacts in CNN feature maps and performance on image classification and object detection. It convincingly makes the case that these artefacts have a significant detrimental effect on task performance, e.g. leading to blind spots / missed detections of small objects near the image border. It also studies the effect of uneven padding in downsampling layers, where the padding may only affect some sides of the image and not others, depending on the image size. A condition is presented for when this does / does not occur. The effect of different padding methods is also studied from the perspective of foveation by computing the number of paths from an input pixel to the output. A number of practical recommendations are given.
SP:1ccd6cfc6dce5a3f4b0c65dd1625f71ac3225c2d
"Hey, that's not an ODE'": Faster ODE Adjoints with 12 Lines of Code
1 INTRODUCTION. We begin by recalling the usual set-up for neural differential equations.

1.1 NEURAL ORDINARY DIFFERENTIAL EQUATIONS. The general approach of neural ordinary differential equations (E, 2017; Chen et al., 2018) is to use ODEs as a learnable component of a differentiable framework. Typically the goal is to approximate a map $x \mapsto y$ by learning functions $\ell_1(\cdot, \phi)$, $\ell_2(\cdot, \psi)$ and $f(\cdot, \cdot, \theta)$, which are composed such that
$$z(\tau) = \ell_1(x, \phi), \qquad z(t) = z(\tau) + \int_\tau^t f(s, z(s), \theta)\,ds \qquad \text{and} \qquad y \approx \ell_2(z(T), \psi). \quad (1)$$
The variables $\phi, \theta, \psi$ denote learnable parameters and the ODE is solved over the interval $[\tau, T]$. We include the (often linear) maps $\ell_1(\cdot, \phi)$, $\ell_2(\cdot, \psi)$ for generality, as in many contexts they are important for the expressiveness of the model (Dupont et al., 2019; Zhang et al., 2020), though our contributions are focused on the ODE component and do not depend on these maps. Here we consider neural differential equations that may be interpreted as a neural ODE.

1.2 APPLICATIONS. Neural differential equations have, to the best of our knowledge, three main applications:
1. Time series modelling. Rubanova et al. (2019) interleave neural ODEs with RNNs to produce ODEs with jumps. Kidger et al. (2020) take $f(t, z, \theta) = g(z, \theta)\frac{dX}{dt}(t)$, dependent on some time-varying input $X$, to produce a neural controlled differential equation.
2. Continuous normalising flows, as in Chen et al. (2018); Grathwohl et al. (2019), in which the overall model acts as a coupling or transformation between probability distributions.
3. Modelling or controlling physical environments, for which a differential-equation-based model may be explicitly desired; see for example Zhong et al. (2020).

1.3 ADJOINT EQUATIONS. The integral in equation (1) may be backpropagated through either by backpropagating through the internal operations of a numerical solver, or by solving the backwards-in-time adjoint equations with respect to some (scalar) loss $L$:
$$a_z(T) = \frac{dL}{dz(T)}, \qquad a_z(t) = a_z(T) - \int_T^t a_z(s) \cdot \frac{\partial f}{\partial z}(s, z(s), \theta)\,ds \qquad \text{and} \qquad \frac{dL}{dz(\tau)} = a_z(\tau),$$
$$a_\theta(T) = 0, \qquad a_\theta(t) = a_\theta(T) - \int_T^t a_z(s) \cdot \frac{\partial f}{\partial \theta}(s, z(s), \theta)\,ds \qquad \text{and} \qquad \frac{dL}{d\theta} = a_\theta(\tau),$$
$$a_t(T) = \frac{dL}{dT}, \qquad a_t(t) = a_t(T) - \int_T^t a_z(s) \cdot \frac{\partial f}{\partial s}(s, z(s), \theta)\,ds \qquad \text{and} \qquad \frac{dL}{d\tau} = a_t(\tau). \quad (2)$$
These equations are typically solved together as a joint system $a(t) = [a_z(t), a_\theta(t), a_t(t)]$. (They are already coupled; the latter two equations depend on $a_z$.) As their integrands additionally require $z(s)$, and as the results of the forward computation of equation (1) are usually not stored, the adjoint equations are typically further augmented by recovering $z$ backwards in time:
$$z(t) = z(T) + \int_T^t f(s, z(s), \theta)\,ds. \quad (3)$$

1.4 CONTRIBUTIONS. We demonstrate that the particular structure of the adjoint equations implies that numerical equation solvers will typically take too many steps, each too small, wasting time during backpropagation. Specifically, the accept/reject step of adaptive-step-size solvers is too stringent. By applying a correction to account for this, we demonstrate that the number of steps needed to solve the adjoint equations may be reduced by typically about 40%; on some problems we observe improvements of as much as 62%.
Factoring in the forward pass (which is unchanged), the overall training time is roughly halved. Our method is hyperparameter-free and requires no tuning. We do not observe any change in model performance, and at least with the torchdiffeq package (our chosen differential equation package), this correction may be applied with only 12 lines of code.

2 METHOD.

2.1 NUMERICAL SOLVERS. Both the forward pass given by equation (1) and the backward pass given by equations (2) and (3) are solved by invoking a numerical differential equation solver. Our interest here is in adaptive-step-size solvers. Indeed, the default choice for solving many equations is the adaptive-step-size Runge–Kutta 5(4) scheme of Dormand–Prince (Dormand & Prince, 1980), for example as implemented by dopri5 in the torchdiffeq package or ode45 in MATLAB. A full discussion of the internal operations of these solvers is beyond our scope here; the part of interest to us is the accept/reject scheme. Consider the case of solving the general ODE $y(t) = y(\tau) + \int_\tau^t f(s, y(s))\,ds$ with $y(t) \in \mathbb{R}^d$. Suppose for some fixed $t$ the solver has computed some estimate $\hat{y}(t) \approx y(t)$, and it now seeks to take a step $\Delta > 0$ to compute $\hat{y}(t+\Delta) \approx y(t+\Delta)$. A step is made, and some candidate $\hat{y}_{\text{candidate}}(t+\Delta)$ is generated. The solver additionally produces $y_{\text{err}} \in \mathbb{R}^d$, an estimate of the numerical error made in each channel during that step. Given some prespecified absolute tolerance ATOL (for example $10^{-9}$), relative tolerance RTOL (for example $10^{-6}$), and (semi)norm $\|\cdot\| \colon \mathbb{R}^d \to [0, \infty)$ (for example the RMS norm $\|y\| = \sqrt{\frac{1}{d}\sum_{i=1}^d y_i^2}$), an estimate of the size of the equation is given by
$$\mathrm{SCALE} = \mathrm{ATOL} + \mathrm{RTOL} \cdot \max(\hat{y}(t), \hat{y}_{\text{candidate}}(t+\Delta)) \in \mathbb{R}^d, \quad (4)$$
where the maximum is taken channel-wise, and the error ratio
$$r = \left\| \frac{y_{\text{err}}}{\mathrm{SCALE}} \right\| \in \mathbb{R} \quad (5)$$
is then computed. If $r \leq 1$ then the error is deemed acceptable, the step is accepted and we take $\hat{y}(t+\Delta) = \hat{y}_{\text{candidate}}(t+\Delta)$. If $r > 1$ then the error is deemed too large, the candidate $\hat{y}_{\text{candidate}}(t+\Delta)$ is rejected, and the procedure is repeated with a smaller $\Delta$. Note the dependence on the choice of norm $\|\cdot\|$: in particular this determines the relative importance of each channel towards the accept/reject criterion.
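To make the accept/reject rule concrete, here is a minimal NumPy sketch of equations (4) and (5); the function name and the use of absolute values inside the channel-wise maximum are our own choices, not taken from the paper.

    import numpy as np

    def error_ratio(y_prev, y_cand, y_err, rtol=1e-6, atol=1e-9, norm=None):
        # SCALE = ATOL + RTOL * max(|y(t)|, |y_candidate(t + delta)|), channel-wise (Eq. 4).
        scale = atol + rtol * np.maximum(np.abs(y_prev), np.abs(y_cand))
        if norm is None:
            norm = lambda v: np.sqrt(np.mean(v ** 2))  # RMS norm
        return norm(y_err / scale)  # Eq. 5

    # A step is accepted iff error_ratio(...) <= 1.0; otherwise the solver
    # retries with a smaller step size.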
2.2 ADJOINT SEMINORMS.

Not an ODE. The key observation is that $a_\theta$ (and in fact also $a_t$) does not appear anywhere in the vector fields of equation (2). This means that (conditioned on knowing $z$ and $a_z$) the integral corresponding to $a_\theta$ is just an integral, not an ODE. As such, it is arguably inappropriate to solve it with an ODE solver, which makes the implicit assumption that small errors now may propagate to create large errors later.

Accept/reject. This is made manifest in the accept/reject step of equation (5). Typical choices of norm $\|\cdot\|$, such as $L^2$, will usually weight each channel equally. But we have just established that, to solve the adjoint equations accurately, it is far more important that $z$ and $a_z$ be accurate than it is that $a_\theta$ be accurate.

Seminorms. Thus, when solving the adjoint equations (2), we propose to use a $\|\cdot\|$ that scales down the contribution of the channels corresponding to $a_\theta$. In practice, in our experiments, we scale $\|\cdot\|$ all the way down by applying zero weight to the offending channels, so that $\|\cdot\|$ is in fact a seminorm. This means that the integration steps are chosen solely for the accuracy of $a_z$ and $z$, and the values of $a_\theta$ are computed simply by integrating with respect to those steps.

Example. As an explicit example, note that $a_\theta(T) = 0$. When solving the adjoint equation numerically, this means for $t$ close to $T$ that the second term in equation (4) is small. As ATOL is typically also small, SCALE is then small as well, and the error ratio $r$ in equation (5) is large. This makes it easy for the error ratio to violate $r \leq 1$, and hence easy for the step to be rejected. Now, there is nothing intrinsically bad about a step being rejected (we would like to solve the ODE accurately, after all); the problem is that this is a spurious rejection, as the rejection occurred to ensure the accuracy of $a_\theta$, which, as already established, is unnecessary. In practice, we observe that spurious rejections may occur for any $t$, not just those near $T$.

Other channels. In fact, essentially the same argument applies to $a_t$ as well: it does not affect the value of the vector field either. In a continuous normalising flow, the log-probability channel is also only an integral, rather than an ODE, and again the same argument may be applied.

Does this reduce the accuracy of parameter gradients? One obvious concern is that we are typically ultimately interested in the parameter gradients $a_\theta$, in order to train a model; with respect to this, our approach seems counter-intuitive. However, we verify empirically that models still train without a reduction in performance. We explain this by noting that the $z$, $a_z$ channels truly are ODEs, so that small errors now do propagate to create larger errors later. Thus these are likely the dominant source of error overall.

2.3 CODE. Depending on the software package, the code for making this change can be trivial. For example, using PyTorch (Paszke et al., 2019) and torchdiffeq (Chen et al., 2018), the standard set-up requires only a few additional lines of code. The additional 12 lines are marked with #.

    import torchdiffeq

    def rms_norm(tensor):                                            #
        return tensor.pow(2).mean().sqrt()                           #
                                                                     #
    def make_norm(state):                                            #
        state_size = state.numel()                                   #
        def norm(aug_state):                                         #
            y = aug_state[1:1 + state_size]                          #
            adj_y = aug_state[1 + state_size:1 + 2 * state_size]     #
            return max(rms_norm(y), rms_norm(adj_y))                 #
        return norm                                                  #
                                                                     #
    torchdiffeq.odeint_adjoint(func=..., y0=..., t=...,
                               adjoint_options=dict(norm=make_norm(y0)))  #

This amounts to the extra 12 lines of code stated in the title, a number that even includes the additional whitespace and visual indents. To keep the remainder of this discussion software-agnostic, we defer further explanation of this specific code to Appendix A.
The paper proposes a modification to the adjoint method that improves the training efficiency of neural ODEs. The key idea is that some terms in the adjoint system can be solved less accurately, because they are not ODEs but simple integrals, and hence their error does not propagate. The solver can therefore take bigger steps and perform fewer steps in total. The experiments demonstrate the efficiency gains in different scenarios where neural ODEs are used.
SP:4a4c6ede9645c5b814a84fbd9e91472f0888621e
AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models
1 INTRODUCTION. Recently, research on learning from graph-structured data has gained considerable attention in the machine learning community. Graph neural networks (Gori et al., 2005; Hamilton et al., 2017; Veličković et al., 2018), particularly graph convolutional networks (Kipf & Welling, 2017; Defferrard et al., 2016; Bruna et al., 2014), have demonstrated remarkable ability on node classification (Kipf & Welling, 2017), link prediction (Zhu et al., 2016) and clustering tasks (Fortunato, 2010). Despite their enormous success, almost all of these models have shallow architectures with only two or three layers. The shallow design of GCN appears counterintuitive, as deep versions of these models in principle have access to more information, yet perform worse. Oversmoothing (Li et al., 2018) has been proposed to explain why deep GCN fails: by repeatedly applying Laplacian smoothing, GCN may mix the node features from different clusters and make them indistinguishable. This also indicates that by stacking too many graph convolutional layers, the embedding of each node in GCN tends to converge to a certain value (Li et al., 2018), making classification harder. These shallow architectures, restricted by the oversmoothing issue, limit the models' ability to extract knowledge from high-order neighbors, i.e., features from remote hops of neighbors of the current nodes. Therefore, it is crucial to design deep graph models such that high-order information can be aggregated in an effective way for better predictions. Some works (Xu et al., 2018b; Liao et al., 2019; Klicpera et al., 2018; Li et al., 2019; Liu et al., 2020) have tried to address this issue partially; a discussion can be found in Appendix A.1. By contrast, we argue that a key direction for constructing deep graph models lies in the efficient exploration and effective combination of information from different orders of neighbors. Due to the apparent sequential relationship between different orders of neighbors, it is a natural choice to incorporate a boosting algorithm into the design of deep graph models. As an important realization of boosting theory, AdaBoost (Freund et al., 1999) is extremely easy to implement and remains competitive in terms of both practical performance and computational cost (Hastie et al., 2009). Moreover, boosting theory has been used to analyze the success of ResNets in computer vision (Huang et al., 2018), and AdaGAN (Tolstikhin et al., 2017) has already successfully incorporated boosting into the training of GANs (Goodfellow et al., 2014). In this work, we focus on incorporating AdaBoost into the design of deep graph convolutional networks in a non-trivial way. First, in pursuit of introducing the AdaBoost framework, we refine the type of graph convolution and thus obtain a novel RNN-like GCN architecture called AdaGCN. Our approach can efficiently extract knowledge from different orders of neighbors and then combine this information in an AdaBoost manner, with iterative updating of the node weights. We also compare AdaGCN with existing methods from the perspectives of both architectural difference and feature representation power to show the benefits of our method.

∗Corresponding author. ¹Code is available at https://github.com/datake/AdaGCN.
Finally, we conduct extensive experiments to demonstrate the consistent state-of-the-art performance of our approach across different label rates and its computational advantage over other alternatives.

2 OUR APPROACH: ADAGCN.

2.1 ESTABLISHMENT OF ADAGCN. Consider an undirected graph $G = (V, E)$ with $N$ nodes $v_i \in V$ and edges $(v_i, v_j) \in E$. $A \in \mathbb{R}^{N \times N}$ is the adjacency matrix with corresponding degree matrix $D_{ii} = \sum_j A_{ij}$. In the vanilla GCN model (Kipf & Welling, 2017) for semi-supervised node classification, the graph embedding of nodes with two convolutional layers is formulated as:
$$Z = \hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(0)}) W^{(1)}, \quad (1)$$
where $Z \in \mathbb{R}^{N \times K}$ is the final embedding matrix (output logits) of nodes before softmax and $K$ is the number of classes. $X \in \mathbb{R}^{N \times C}$ denotes the feature matrix, where $C$ is the input dimension. $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, where $\tilde{A} = A + I$ and $\tilde{D}$ is the degree matrix of $\tilde{A}$. In addition, $W^{(0)} \in \mathbb{R}^{C \times H}$ is the input-to-hidden weight matrix for a hidden layer with $H$ feature maps, and $W^{(1)} \in \mathbb{R}^{H \times K}$ is the hidden-to-output weight matrix.

Our key motivation for constructing deep graph models is to efficiently explore the information of high-order neighbors and then combine these messages from different orders of neighbors in an AdaBoost way. Nevertheless, if we naively extract information from high-order neighbors based on GCN, we are faced with stacking $l$ layers' parameter matrices $W^{(i)}, i = 0, \ldots, l-1$, which is computationally costly. Besides, Multi-Scale Deep Graph Convolutional Networks (Luan et al., 2019) also theoretically demonstrated that if we simply deepen GCN, the output can only contain the stationary information of the graph structure and, being oversmoothed, loses all the local information in nodes. Intuitively, a desirable representation of node features does not necessarily need many nonlinear transformations $f$ applied to it. This is simply because the feature of each node is normally a one-dimensional sparse vector rather than a multi-dimensional data structure, e.g., an image, which intuitively needs a deep convolutional network to extract high-level representations for vision tasks. This insight has been empirically demonstrated in many recent works (Wu et al., 2019; Klicpera et al., 2018; Xu et al., 2018a), showing that a two-layer fully-connected neural network is a better choice in the implementation. Similarly, our AdaGCN follows this direction by choosing an appropriate $f$ in each layer rather than directly deepening GCN layers. Thus, we propose to remove the ReLU to avoid the expensive joint optimization of multiple parameter matrices. Simplified Graph Convolution (SGC) (Wu et al., 2019) also adopted this practice, arguing that the nonlinearity between GCN layers is not crucial and that the majority of the benefit arises from the local weighting of neighboring features. The simplified graph convolution is then:
$$Z = \hat{A}^l X W^{(0)} W^{(1)} \cdots W^{(l-1)} = \hat{A}^l X \tilde{W}, \quad (2)$$
where we collapse $W^{(0)} W^{(1)} \cdots W^{(l-1)}$ into $\tilde{W}$ and $\hat{A}^l$ denotes $\hat{A}$ raised to the $l$-th power. In particular, one crucial effect of ReLU in GCN is to accelerate the convergence of the matrix multiplication, since ReLU is intuitively a contraction mapping. Thus, removing the ReLU operation can also alleviate the oversmoothing issue, i.e., slow the convergence of node embeddings to indistinguishable ones (Li et al., 2018).
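For concreteness, the propagation $\hat{A}^l X$ of Eq. (2) can be computed as follows (a dense NumPy sketch with names of our choosing; practical implementations would use sparse matrices):

    import numpy as np

    def propagate(A, X, l):
        """Compute A_hat^l X with A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
        A_tilde = A + np.eye(A.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
        A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
        Z = X
        for _ in range(l):
            Z = A_hat @ Z  # each multiplication aggregates one further hop
        return Z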
Additionally, without ReLU this simplified graph convolution also avoids the aforementioned joint optimization over multiple parameter matrices, yielding computational benefits. Nevertheless, we find that this type of stacked linear transformation from graph convolution has insufficient power in representing the information of high-order neighbors, as revealed in our experiment described in Appendix A.2. Therefore, we propose to utilize an appropriate nonlinear function $f_\theta$, e.g., a two-layer fully-connected neural network, to replace the linear transformation $\tilde{W}$ in Eq. 2 and enhance the representation ability of each base classifier in AdaGCN as follows:
$$Z^{(l)} = f_\theta(\hat{A}^l X), \quad (3)$$
where $Z^{(l)}$ represents the final embedding matrix (output logits before softmax) after the $l$-th base classifier in AdaGCN. This formulation also implies that the $l$-th base classifier in AdaGCN extracts knowledge from the features of the current nodes and their $l$-th hop of neighbors. Because the function of the $l$-th base classifier in AdaGCN is similar to that of the $l$-th layer in traditional GCN-based methods that directly stack many graph convolutional layers, we regard the whole $l$-th base classifier as the $l$-th layer of AdaGCN. For the realization of multi-class AdaBoost, we apply the SAMME (Stagewise Additive Modeling using a Multi-class Exponential loss function) algorithm (Hastie et al., 2009), a natural and clean multi-class extension of two-class AdaBoost that adaptively combines weak classifiers. As illustrated in Figure 1, we apply the base classifier $f_\theta^{(l)}$ to extract knowledge from the current node features and the $l$-th hop of neighbors by minimizing the current weighted loss. We then directly compute the weighted error rate $\mathrm{err}^{(l)}$ and the corresponding weight $\alpha^{(l)}$ of the current base classifier $f_\theta^{(l)}$ as follows:
$$\mathrm{err}^{(l)} = \sum_{i=1}^n w_i \, \mathbb{I}\big(c_i \neq f_\theta^{(l)}(x_i)\big) \Big/ \sum_{i=1}^n w_i, \qquad \alpha^{(l)} = \log \frac{1 - \mathrm{err}^{(l)}}{\mathrm{err}^{(l)}} + \log(K - 1), \quad (4)$$
where $w_i$ denotes the weight of the $i$-th node and $c_i$ the category of the $i$-th node. To attain a positive $\alpha^{(l)}$, we only need $(1 - \mathrm{err}^{(l)}) > 1/K$, i.e., the accuracy of each weak classifier should be better than random guessing (Hastie et al., 2009). This is easily met and guarantees that the weights are updated in the right direction. We then adjust the node weights by increasing the weights of incorrectly classified nodes:
$$w_i \leftarrow w_i \cdot \exp\big(\alpha^{(l)} \cdot \mathbb{I}(c_i \neq f_\theta^{(l)}(x_i))\big), \quad i = 1, \ldots, n. \quad (5)$$
After re-normalizing the weights, we compute $\hat{A}^{l+1} X = \hat{A} \cdot (\hat{A}^l X)$ to sequentially extract knowledge from the $(l{+}1)$-th hop of neighbors in the following base classifier $f_\theta^{(l+1)}$. One crucial point of AdaGCN is that, unlike traditional AdaBoost, we define only one $f_\theta$, e.g., a two-layer fully-connected neural network, which in practice is recursively optimized in each base classifier, similar to a recurrent neural network. This also means that the parameters of the last base classifier are leveraged as the initialization of the next base classifier, which coincides with the intuition that the $(l{+}1)$-th hop of neighbors is directly connected via the $l$-th hop of neighbors. The efficacy of this kind of layer-wise training has recently been verified in a similar setting (Belilovsky et al., 2018).
Further, we combine the predictions from different orders of neighbors in an AdaBoost way to obtain the final prediction $C(A, X)$:
$$C(A, X) = \arg\max_k \sum_{l=0}^{L} \alpha^{(l)} f_\theta^{(l)}(\hat{A}^l X). \quad (6)$$
Finally, we obtain the concise form of AdaGCN:
$$\hat{A}^l X = \hat{A} \cdot (\hat{A}^{l-1} X), \qquad Z^{(l)} = f_\theta^{(l)}(\hat{A}^l X), \qquad Z = \mathrm{AdaBoost}(Z^{(l)}). \quad (7)$$
Note that $f_\theta$ is non-linear, rather than linear as in SGC (Wu et al., 2019), to guarantee the representation power. As shown in Figure 1, the architecture of AdaGCN is a variant of an RNN with synchronous sequence input and output. Although the same classifier architecture is adopted for each $f_\theta^{(l)}$, their parameters differ, which distinguishes AdaGCN from a vanilla RNN. We provide a detailed description of our algorithm in Section 3.
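The following is a compact NumPy sketch of the resulting training loop (Eqs. (3)-(6)); fit_predict is a hypothetical stand-in for optimizing the shared base classifier $f_\theta$ on the current features under the node weights, warm-started from the previous layer as described above:

    import numpy as np

    def adagcn_train_predict(A_hat, X, y, fit_predict, L, K):
        """fit_predict(Z, w) trains f_theta on features Z with node weights w
        and returns class logits of shape (n, K). Assumes 0 < err < 1."""
        n = X.shape[0]
        w = np.ones(n) / n
        Z, layers = X, []
        for l in range(L + 1):
            logits = fit_predict(Z, w)                  # f_theta^{(l)}(A_hat^l X)
            miss = (logits.argmax(1) != y).astype(float)
            err = (w * miss).sum() / w.sum()            # Eq. (4)
            alpha = np.log((1.0 - err) / err) + np.log(K - 1)
            w = w * np.exp(alpha * miss)                # Eq. (5)
            w = w / w.sum()                             # re-normalize
            layers.append((alpha, logits))
            Z = A_hat @ Z                               # move to the next hop
        combined = sum(a * lg for a, lg in layers)      # Eq. (6)
        return combined.argmax(1)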
By integrating AdaBoost with a fully-connected classifier, this paper provides a new graph neural network architecture. The objective of this paper is to design deeper graph models in an efficient way for better performance. The computational efficiency and performance of the proposed algorithm are evaluated on node property prediction over several public datasets. This is a new variant of GNN, but the quality of the paper is below expectations regarding clarity and organisation.
SP:43b0b8d8e0c30180cb627ef62898028f5e7dfec8
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
We propose a simple, general and effective technique, Reward Randomization, for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, Reward-Randomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and the real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player, even when using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found on our website: https://sites.google.com/view/staghuntrpg.

1 INTRODUCTION. Games have been a long-standing benchmark for artificial intelligence, prompting persistent technical advances towards our ultimate goal of building intelligent agents like humans, from Shannon's initial interest in Chess (Shannon, 1950) and IBM DeepBlue (Campbell et al., 2002) to the most recent deep reinforcement learning breakthroughs in Go (Silver et al., 2017), Dota II (OpenAI et al., 2019) and Starcraft (Vinyals et al., 2019). Hence, analyzing and understanding the challenges in various games also becomes critical for developing new learning algorithms for even harder challenges. Most recent successes in games are based on decentralized multi-agent learning (Brown, 1951; Singh et al., 2000; Lowe et al., 2017; Silver et al., 2018), where agents compete against each other and optimize their own rewards to gradually improve their strategies. In this framework, the Nash Equilibrium (NE) (Nash, 1951), where no player could benefit from altering its strategy unilaterally, provides a general solution concept, serves as a goal for policy learning, and has attracted increasingly significant interest from AI researchers (Heinrich & Silver, 2016; Lanctot et al., 2017; Foerster et al., 2018; Kamra et al., 2019; Han & Hu, 2019; Bai & Jin, 2020; Perolat et al., 2020): many existing works studied how to design practical multi-agent reinforcement learning (MARL) algorithms that can provably converge to an NE in Markov games, particularly in the zero-sum setting. Despite the empirical success of these algorithms, a fundamental question remains largely unstudied in the field: even if an MARL algorithm converges to an NE, which equilibrium will it converge to? The existence of multiple NEs is extremely common in many multi-agent games. Discovering as many NE strategies as possible is particularly important in practice, not only because different NEs can produce drastically different payoffs, but also because, when facing unknown players who are trained to play an NE strategy, we can gain an advantage by identifying which NE strategy the opponent is playing and choosing the most appropriate response. Unfortunately, in many games where multiple distinct NEs exist, the popular decentralized policy gradient algorithm (PG), which has led to great successes in numerous games including Dota II and Starcraft, always converges to a particular NE with non-optimal payoffs and fails to explore more diverse modes in the strategy space.
Consider an extremely simple example, the 2-by-2 matrix game Stag-Hunt (Rousseau, 1984; Skyrms, 2004), where two pure strategy NEs exist: a "risky" cooperative equilibrium with the highest payoff for both agents, and a "safe" non-cooperative equilibrium with strictly lower payoffs. We show, from both theoretical and practical perspectives, that even in this simple matrix-form game, PG fails with high probability to discover the high-payoff "risky" NE. The intuition is that the neighborhood from which policies converge to the "risky" NE can be substantially small compared to the entire policy space. Therefore, an exponentially large number of exploration steps is needed to ensure PG discovers the desired mode. We propose a simple technique, Reward Randomization (RR), which can help PG discover the "risky" cooperation strategy in the stag-hunt game with theoretical guarantees. The core idea of RR is to directly perturb the reward structure of the multi-agent game of interest, which is typically low-dimensional. RR directly alters the landscape of different strategy modes in the policy space and therefore makes it possible to easily discover novel behavior in the perturbed game (Fig. 1). We call this new PG variant Reward-Randomized Policy Gradient (RPG). To further illustrate the effectiveness of RPG, we introduce three Markov games: two gridworld games and a real-world online game, Agar.io. All these games have multiple NEs, including both "risky" cooperation strategies and "safe" non-cooperative strategies. We empirically show that even with state-of-the-art exploration techniques, PG fails to discover the "risky" cooperation strategies. In contrast, RPG discovers a surprisingly diverse set of human-interpretable strategies in all these games, including some non-trivial emergent behavior. Importantly, among this set are policies achieving much higher payoffs for each player compared to those found by PG. This "diversity-seeking" property of RPG also makes it feasible to build adaptive policies: by re-training an RL agent against the diverse opponents discovered by RPG, the agent is able to dynamically alter its strategy between different modes, e.g., either cooperate or compete, with respect to its test-time opponent's behavior. We summarize our contributions as follows:
• We studied a collection of challenging multi-agent games, in which the popular multi-agent PG algorithm always converges to a sub-optimal equilibrium strategy with low payoffs.
• A novel reward-space exploration technique, reward randomization (RR), for discovering hard-to-find equilibria with high payoffs. Both theoretical and empirical results show that reward randomization substantially outperforms classical policy/action-space exploration techniques in challenging trust dilemmas.
• We empirically show that RR discovers surprisingly diverse strategic behaviors in complex Markov games, which further provides a practical solution for building an adaptive agent.
• A new multi-agent environment, Agar.io, which allows complex multi-agent strategic behavior. We released the environment to the community as a novel testbed for MARL research.

2 A MOTIVATING EXAMPLE: STAG HUNT.

Table 1: The stag-hunt game, $a > b \ge d > c$.

            Stag    Hare
    Stag    a, a    c, b
    Hare    b, c    d, d

∗Equal contribution. †Work done as an intern at the Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University.
We start by analyzing a simple problem: finding the NE with the optimal payoffs in the Stag Hunt game. This game was originally introduced in Rousseau's work, "A discourse on inequality" (Rousseau, 1984): a group of hunters are silently tracking a big stag; now a hare shows up, and each hunter must decide whether to keep tracking the stag or kill the hare immediately. This leads to the 2-by-2 matrix-form stag-hunt game in Tab. 1, with two actions for each agent: Stag (S) and Hare (H). There are two pure strategy NEs: the Stag NE, where both agents choose S and receive a high payoff $a$ (e.g., $a = 4$), and the Hare NE, where both agents choose H and receive a lower payoff $d$ (e.g., $d = 1$). The Stag NE is "risky" because if one agent defects, it still receives a decent reward $b$ (e.g., $b = 3$) for eating the hare alone, while the other agent, having taken action S, may suffer a big loss $c$ for being hungry (e.g., $c = -10$). Formally, let $\mathcal{A} = \{S, H\}$ denote the action space, let $\pi_i(\theta_i)$ denote the policy of agent $i$ ($i \in \{1, 2\}$) parameterized by $\theta_i$, i.e., $P[\pi_i(\theta_i) = S] = \theta_i$ and $P[\pi_i(\theta_i) = H] = 1 - \theta_i$, and let $R(a_1, a_2; i)$ denote the payoff of agent $i$ when agent 1 takes action $a_1$ and agent 2 takes action $a_2$. Each agent $i$ optimizes its expected utility $U_i(\pi_1, \pi_2) = \mathbb{E}_{a_1 \sim \pi_1, a_2 \sim \pi_2}[R(a_1, a_2; i)]$. Using the standard policy gradient algorithm, a typical learning procedure is to repeatedly take the following two steps until convergence¹: (1) estimate the gradient $\nabla_i = \nabla U_i(\pi_1, \pi_2)$ via self-play; (2) update the policies by $\theta_i \leftarrow \theta_i + \alpha \nabla_i$ with learning rate $\alpha$. Although PG is widely used in practice, the following theorem shows that in certain scenarios, unfortunately, the probability that PG converges to the Stag NE is low.

Theorem 1. Suppose $a - b = \epsilon(d - c)$ for some $0 < \epsilon < 1$ and initialize $\theta_1, \theta_2 \sim \mathrm{Unif}[0, 1]$. Then the probability that PG discovers the high-payoff NE is upper bounded by $\frac{\epsilon^2 + 2\epsilon}{1 + 2\epsilon + \epsilon^2}$.

Theorem 1 shows that when the risk is high (i.e., $c$ is low), the probability of finding the Stag NE via PG is very low. Note this theorem applies to random initialization, which is standard in RL. Remark: one needs at least $N = \Omega(\frac{1}{\epsilon})$ restarts to ensure a constant success probability. Fig. 2 shows empirical studies: we select 4 value assignments, i.e., $c \in \{-5, -20, -50, -100\}$ with $a = 4$, $b = 3$, $d = 1$, and run a state-of-the-art PG method, proximal policy optimization (PPO) (Schulman et al., 2017), on these games. The Stag NE is rarely reached, and as $c$ becomes smaller, the probability of finding the Stag NE decreases significantly. Peysakhovich & Lerer (2018b) provided a theorem of similar flavor without analyzing the dynamics of the learning algorithm, whereas we explicitly characterize the behavior of PG. They studied a prosocial reward-sharing scheme, which transforms the reward of both agents to $R(a_1, a_2; 1) + R(a_1, a_2; 2)$. Reward sharing can be viewed as a special case of our method and, as shown in Sec. 5, it is insufficient for solving complex temporal games.
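The failure mode in Theorem 1 is easy to reproduce; below is a small self-contained simulation that follows the exact expected-utility gradients rather than sampled self-play estimates (all names are ours):

    import numpy as np

    def run_pg(a=4, b=3, c=-10, d=1, lr=0.01, steps=20000, seed=0):
        rng = np.random.default_rng(seed)
        t1, t2 = rng.uniform(0, 1, size=2)  # theta_i = P[agent i plays Stag]
        for _ in range(steps):
            # dU_i/dtheta_i = theta_j (a - b) + (1 - theta_j)(c - d)
            g1 = t2 * (a - b) + (1 - t2) * (c - d)
            g2 = t1 * (a - b) + (1 - t1) * (c - d)
            t1 = float(np.clip(t1 + lr * g1, 0.0, 1.0))
            t2 = float(np.clip(t2 + lr * g2, 0.0, 1.0))
        return t1, t2  # (1, 1) is the Stag NE, (0, 0) the Hare NE

    stag_rate = np.mean([run_pg(seed=s) == (1.0, 1.0) for s in range(200)])

With the example payoffs above, the Stag basin requires roughly $\theta_j > 11/12$ initially, so stag_rate stays small, consistent with the theorem.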
2.1 REWARD RANDOMIZATION IN THE MATRIX-FORM STAG-HUNT GAME. Thm. 1 suggests that the utility function $R$ strongly influences which strategy PG will learn. Taking one step further, even if a strategy is difficult to learn under a particular $R$, it might be easier under some other function $R'$. Hence, if we can define an appropriate space $\mathcal{R}$ over different utility functions and draw samples from $\mathcal{R}$, we may discover desired novel strategies by running PG on some sampled utility function $R'$ and evaluating the obtained policy profile on the original game with $R$. We call this procedure Reward Randomization (RR). Concretely, in the stag-hunt game, $R$ is parameterized by 4 variables $(a_R, b_R, c_R, d_R)$. We can define a distribution over $\mathbb{R}^4$, draw a tuple $R' = (a_{R'}, b_{R'}, c_{R'}, d_{R'})$ from this distribution, and run PG on $R'$. Denote the original stag-hunt game, where the Stag NE is hard to discover, as $R_0$. Reward randomization draws $N$ perturbed tuples $R_1, \ldots, R_N$, runs PG on each $R_i$, and evaluates each of the obtained strategies on $R_0$. The theorem below shows it is highly likely that the population of the $N$ policy profiles obtained from the perturbed games contains the Stag NE strategy.

Theorem 2. For any Stag-Hunt game, suppose in the $i$-th run of RR we randomly generate $a_{R_i}, b_{R_i}, c_{R_i}, d_{R_i} \sim \mathrm{Unif}[-1, 1]$ and initialize $\theta_1, \theta_2 \sim \mathrm{Unif}[0, 1]$; then with probability at least $1 - 0.6^N = 1 - \exp(-\Omega(N))$, the aforementioned RR procedure discovers the high-payoff NE.

Here we use the uniform distribution as an example; other distributions may also help in practice. Comparing Thm. 2 and Thm. 1, RR significantly improves over standard PG in success probability. Remark 1: For the scenario studied in Thm. 1, to achieve a $(1 - \delta)$ success probability for some $0 < \delta < 1$, PG requires at least $N = \Omega(\frac{1}{\epsilon} \log \frac{1}{\delta})$ random restarts. For the same scenario, RR only requires at most $N = O(\log(1/\delta))$ repetitions, which is independent of $\epsilon$. When $\epsilon$ is small, this is a huge improvement. Remark 2: Thm. 2 suggests that, compared with policy randomization, perturbing the payoff matrix makes it substantially easier to discover a strategy that can hardly be reached in the original game. Note that although in Stag Hunt we particularly focus on the Stag NE, which has the highest payoff for both agents, in general RR can also be applied to NE selection in other matrix-form games via a payoff evaluation function $E(\pi_1, \pi_2)$. For example, we can set $E(\pi_1, \pi_2) = U_1(\pi_1, \pi_2) + U_2(\pi_1, \pi_2)$ for a prosocial NE, or look for Pareto-optimal NEs by setting $E(\pi_1, \pi_2) = \beta U_1(\pi_1, \pi_2) + (1 - \beta) U_2(\pi_1, \pi_2)$ with $0 \le \beta \le 1$.

¹In general matrix games beyond stag hunt, the procedure can be cyclic as well (Singh et al., 2000).

Algorithm 1: RPG: Reward-Randomized Policy Gradient
    Input: original game M, search space R, evaluation function E, population size N;
    draw samples {R^(1), ..., R^(N)} from R;
    {π1^(i), π2^(i)} ← PG on induced games {M(R^(i))}_i in parallel;            // RR phase
    select the best candidate π1^(k), π2^(k) by k = argmax_i E(π1^(i), π2^(i)); // evaluation phase
    π1*, π2* ← fine-tune π1^(k), π2^(k) on M via PG (if necessary);             // fine-tuning phase
    return π1*, π2*;
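A minimal sketch of Algorithm 1 for the matrix-game case; train_pg and evaluate are hypothetical user-supplied callables for the RR and evaluation phases:

    import numpy as np

    def rpg(train_pg, evaluate, n_samples=20, seed=0):
        """train_pg(payoff) runs PG self-play on the perturbed game and returns
        a policy pair; evaluate(pi1, pi2) scores the pair on the ORIGINAL game."""
        rng = np.random.default_rng(seed)
        candidates = []
        for _ in range(n_samples):
            a, b, c, d = rng.uniform(-1, 1, size=4)  # R' ~ Unif[-1, 1]^4 (Thm. 2)
            candidates.append(train_pg((a, b, c, d)))   # RR phase
        best = max(candidates, key=lambda p: evaluate(*p))  # evaluation phase
        return best  # optionally fine-tune on the original game via PG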
This paper considers the problem of finding a Nash equilibrium in two-player games where each player runs an RL algorithm. The paper asks which Nash equilibrium the dynamics converge to in this two-player game (where each player optimizes via a policy gradient algorithm). The authors construct two-player games with multiple Nash equilibria: one is a favorable equilibrium where both players get high rewards, while the other is a less favorable equilibrium where both players only get medium rewards. In such games they first show that, in general, simply running policy gradient on the natural reward function, i.e., the observed payoff, will not lead to the desirable Nash equilibrium. The goal of this paper is to ameliorate this by considering perturbations in the reward space. At a high level, the algorithm learns multiple policies on a class of games generated by sampling multiple reward functions from a family and training one policy per sampled reward function using PG. Then, using an evaluation function, the best policy is picked by evaluating each of the learnt policies on the original game.
SP:04a93ed7a7bef0c8f8c99a1fa381cc920fbd2002
Predicting Video with VQVAE
1 INTRODUCTION. When it comes to real-world image data, deep generative models have made substantial progress. With advances in computational efficiency and improvements in architectures, it is now feasible to generate high-resolution, realistic images from vast and highly diverse datasets (Brock et al., 2019; Razavi et al., 2019; Karras et al., 2017). Apart from the domain of images, deep generative models have also shown promise in other data domains such as music (Dieleman et al., 2018; Dhariwal et al., 2020), speech synthesis (Oord et al., 2016), 3D voxels (Liu et al., 2018; Nash & Williams, 2017), and text (Radford et al., 2019). One particular fledgling domain is video. While some work in the area of video generation (Clark et al., 2020; Vondrick et al., 2016; Saito & Saito, 2018) has explored video synthesis (generating videos with no prior frame information), many approaches actually focus on the task of video prediction conditioned on past frames (Ranzato et al., 2014; Srivastava et al., 2015; Patraucean et al., 2015; Mathieu et al., 2016; Lee et al., 2018; Babaeizadeh et al., 2018; Oliu et al., 2018; Xiong et al., 2018; Xue et al., 2016; Finn et al., 2016; Luc et al., 2020). It can be argued that video synthesis is a combination of image generation and video prediction; in other words, one could decouple the problem of video synthesis into unconditional image generation and conditional video prediction from a generated image. We therefore focus specifically on video prediction in this paper. Potential computer vision applications of video forecasting include interpolation, anomaly detection, and activity understanding. More generally, video prediction also has broader implications for intelligent systems: the ability to anticipate the dynamics of the environment. The problem is thus also relevant for robotics and reinforcement learning (Finn et al., 2016; Ebert et al., 2017; Oh et al., 2015; Ha & Schmidhuber, 2018; Racanière et al., 2017). Approaches to video prediction have largely skewed toward variations of generative adversarial networks (Mathieu et al., 2016; Lee et al., 2018; Clark et al., 2020; Vondrick et al., 2016; Luc et al., 2020). In comparison, we are aware of only a relatively small number of approaches which propose variational autoencoders (Babaeizadeh et al., 2018; Xue et al., 2016; Denton & Fergus, 2018), autoregressive models (Kalchbrenner et al., 2017; Weissenborn et al., 2020), or flow-based approaches (Kumar et al., 2020). There may be a number of reasons for this situation. One is the explosion in the dimensionality of the input space: a generative model of video needs to model not just one image but tens of them in a coherent fashion, which makes it difficult to scale such models to large datasets or high resolutions. In addition, previous work (Clark et al., 2020) suggests that video prediction may be fundamentally more difficult than video synthesis; a synthesis model can generate simple samples from the dataset, while prediction potentially forces the model to forecast conditioned on videos that are outliers in the distribution. Furthermore, most prior work has focused on datasets with low scene diversity such as Moving MNIST (Srivastava et al., 2015), KTH (Schuldt et al., 2004), or robotic arm datasets (Finn et al., 2016; Ebert et al., 2017).
While there have been attempts to synthesize video at a high resolution (Clark et al., 2020), we know of no attempt, excluding flow-based approaches, to predict video beyond resolutions of 64×64. In this paper we address the large dimensionality of video data through compression. Using Vector Quantized Variational Autoencoders (VQ-VAE) (van den Oord et al., 2017), we can compress video into a space requiring only 1.3% of the bits needed to express it in pixels. While this compressed encoding is lossy, we can still reconstruct the original video from the latent representation with a high degree of fidelity. Furthermore, we can leverage the modularity of VQ-VAE and decompose our latent representation into a hierarchy of encodings, separating high-level, global information from details such as fine texture or small motions. Instead of training a generative model directly on pixel space, we can model this much more tractable discrete representation, allowing us to train much more powerful models, use large diverse datasets, and generate at a high resolution. While most prior work has focused on GANs, this discrete representation can also be modeled by likelihood-based models. Likelihood models in principle do not suffer from the mode collapse, training instability, and lack of sample diversity often witnessed in GANs (Denton & Fergus, 2018; Babaeizadeh et al., 2018; Razavi et al., 2019). In this paper, we propose a PixelCNN augmented with causal convolutions in time and spatiotemporal self-attention to model this space of latents. In addition, because the latent representation is decomposed into a hierarchy, we can exploit this decomposition and train separate specialized models at different levels of the hierarchy. Our paper makes four contributions. First, we demonstrate the novel application of VQ-VAE to video data. Second, we propose a set of spatiotemporal PixelCNNs to predict video by utilizing the latent representation learned with VQ-VAE. Third, we explicitly predict video at a higher resolution than ever before. Finally, we demonstrate the competitive performance of our model with a crowdsourced human evaluation.

2 BACKGROUND.

2.1 VECTOR QUANTIZED AUTOENCODERS. VQ-VAEs (van den Oord et al., 2017) are autoencoders which learn a discrete latent encoding for input data $x$. First, the output of a non-linear encoder $z_e(x)$, implemented by a neural network, is passed through a discretization bottleneck. $z_e(x)$ is mapped via nearest-neighbor lookup into a quantized codebook $e \in \mathbb{R}^{K \times D}$, where $D$ is the dimensionality of each vector $e_j$ and $K$ is the number of categories in the codebook. The discretized representation is thus given by
$$z_q(x) = e_k \quad \text{where} \quad k = \arg\min_j \|z_e(x) - e_j\|_2. \quad (1)$$
Equation 1 is not differentiable; however, van den Oord et al. (2017) note that copying the gradient of $z_q(x)$ to $z_e(x)$ is a suitable approximation, similar to the straight-through estimator (Bengio et al., 2013). A decoder $D$, also implemented by a neural network, then reconstructs the input from $z_q(x)$. The total loss function for the VQ-VAE is thus
$$L = \|D(z_q(x)) - x\|_2^2 + \|\mathrm{sg}[z_e(x)] - e\|_2^2 + \beta \|z_e(x) - \mathrm{sg}[e]\|_2^2, \quad (2)$$
where $\mathrm{sg}$ is the stop-gradient operator and $\beta$ is a parameter which regulates the rate of code change.
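The discretization bottleneck of Eq. (1), together with the straight-through gradient copy, takes only a few lines in PyTorch. The following is a minimal sketch (the function name is ours; a full implementation would also accumulate the codebook and commitment terms of Eq. (2)):

    import torch

    def vector_quantize(z_e, codebook):
        """z_e: (..., D) encoder outputs; codebook: (K, D)."""
        flat = z_e.reshape(-1, codebook.shape[1])
        # Nearest-neighbour lookup: argmin_j ||z_e - e_j||_2 (Eq. 1).
        dists = torch.cdist(flat, codebook)
        idx = dists.argmin(dim=1)
        z_q = codebook[idx].reshape(z_e.shape)
        # Straight-through estimator: copy gradients from z_q to z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx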
As in previous work (van den Oord et al., 2017; Razavi et al., 2019), we replace the second term in equation 2 and learn the codebook $e \in \mathbb{R}^{K \times D}$ via an exponential moving average of previous values during training:
$$N_i^{(t)} := N_i^{(t-1)} \gamma + n_i^{(t)} (1 - \gamma), \qquad m_i^{(t)} := m_i^{(t-1)} \gamma + \sum_j z_e(x)_{i,j}^{(t)} (1 - \gamma), \qquad e_i^{(t)} := \frac{m_i^{(t)}}{N_i^{(t)}}, \quad (3)$$
where $\gamma$ is a decay parameter and $n_i^{(t)}$ is the number of vectors in $z_e(x)$ in a batch that map to $e_i$.
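A sketch of the EMA update of Eq. (3), assuming flattened encoder outputs and precomputed code assignments; the small eps guarding against empty codes is our addition, not part of the equation:

    import torch

    @torch.no_grad()
    def ema_update(codebook, N, m, z_e_flat, idx, gamma=0.99, eps=1e-5):
        """codebook, m: (K, D); N: (K,); z_e_flat: (B, D); idx: (B,)."""
        K = codebook.shape[0]
        one_hot = torch.nn.functional.one_hot(idx, K).type_as(z_e_flat)  # (B, K)
        n = one_hot.sum(dim=0)                          # n_i: batch counts per code
        N.mul_(gamma).add_(n, alpha=1 - gamma)
        m.mul_(gamma).add_(one_hot.t() @ z_e_flat, alpha=1 - gamma)
        codebook.copy_(m / (N.unsqueeze(1) + eps))      # e_i = m_i / N_i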
2.2 PIXELCNN MODELS. PixelCNN and related models have shown promise in modeling a wide variety of data domains (van den Oord et al., 2016; Oord et al., 2016; Kalchbrenner et al., 2017; Weissenborn et al., 2020). These autoregressive models are likelihood-based: they explicitly optimize negative log-likelihood. They exploit the fact that the joint probability distribution of input data $x$ can be factored into a product of conditional distributions over each dimension of the data:
$$P_\theta(x) = \prod_{i=1}^{n} p_\theta(x_i \mid x_{<i}), \quad (4)$$
where $n$ is the full dimensionality of the data. This factorization is implemented by a neural network, and the exact set of conditional dependencies is determined by the data domain. Image pixels may depend on regions above and to the left of them (van den Oord et al., 2016), while temporal dimensions may depend on past dimensions (Oord et al., 2016; Kalchbrenner et al., 2017; Weissenborn et al., 2020).

3 METHOD. Our approach consists of two main components. First, we compress video segments into a discrete latent representation using a hierarchical VQ-VAE. We then propose a multi-stage autoregressive model based on the PixelCNN architecture, exploiting the low dimensionality of the compressed latent space and the hierarchy of the representation.

3.1 COMPRESSING VIDEO WITH VQ-VAE. Similar to Razavi et al. (2019), we use VQ-VAE to compress video in a hierarchical fashion. This multi-stage composition of the latent representation allows decomposition of global, high-level information from low-level details such as edges or fine motion. For image data (Razavi et al., 2019) this approach confers a number of advantages. First, the decomposition allows latent codes to specialize at each level: high-level information can be represented in an even more compressed manner, and the total reconstruction error is lower. In addition, this hierarchy leads to a naturally modular generative model. We can develop a generative model that specializes in modeling the high-level, global information, and then train a separate model, conditioned on the global information, that fills in the details and models the low-level information further down the hierarchy. In this paper, we adopt the terminology of Razavi et al. (2019) and call the set of high-level latents the top layer and the low-level latents the bottom layer. Consistent with the experimental setup of previous work in video prediction, we deal with 16-frame videos. Most of the videos in our training dataset are 25 frames per second. We use frames at a 256×256 resolution, so the full video voxel grid is 256×256×16. Using residual blocks with 3D convolutions, we downsample the video spatiotemporally. At the bottom layer, the video is downsampled to a quantized latent space of 64×64×8, reducing the spatial dimensions by 4 and the temporal dimension by 2. Another stack of blocks reduces all dimensions by a further factor of 2, giving a top layer of 32×32×4. Each of the voxels in a layer is quantized into one of 512 codes, with a separate codebook for each layer. The decoder then concatenates the bottom layer and the top layer after upsampling with transposed convolutions. From this concatenation as input, the decoder deterministically outputs the full 256×256×16 video. Overall, we reduce a $256 \times 256 \times 16 \times 3 \times \log(256)$-bit space down to a $64 \times 64 \times 8 \times \log(512) + 32 \times 32 \times 4 \times \log(512)$-bit space, a greater than 98% reduction in the bits required.
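The quoted figures can be checked with a two-line computation (logarithms base 2, ignoring the lossy nature of the encoding):

    from math import log2

    pixel_bits = 256 * 256 * 16 * 3 * log2(256)                     # raw RGB video
    latent_bits = 64 * 64 * 8 * log2(512) + 32 * 32 * 4 * log2(512)  # both codebooks
    print(latent_bits / pixel_bits)  # ~0.013, i.e. ~1.3% of the bits (>98% reduction)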
The authors propose to use a VQVAE-2 setup for video prediction. In particular, they propose a hierarchical discrete latent variable model that compresses videos into a latent space. An autoregressive model is then used to model dynamics in this latent space, which has reduced dimensionality, and can be used together with the VQVAE decoder to predict video. Empirical results show that this model is comparable to SOTA GAN models, and a human evaluation suggests that humans have a preference for the proposed model's predictions.
SP:24344b20e162a68ed6631aa050c2c09a8f91d5ac
Initialization and Regularization of Factorized Neural Layers
1 INTRODUCTION. Most neural network layers consist of matrix-parameterized functions followed by simple operations such as activation or normalization. These layers are the main sources of model expressivity, but also the biggest contributors to computation and memory cost; thus modifying them to improve computational efficiency while maintaining predictive performance is highly desirable. We study the approach of factorizing layers, i.e., reparameterizing them so that their weights are defined as products of two or more matrices. When these are smaller than the original matrix, the resulting networks are more efficient for both training and inference (Denil et al., 2013; Moczulski et al., 2015; Ioannou et al., 2016; Tai et al., 2016), resulting in model compression. On the other hand, if training cost is not a concern, one can increase the width or depth of the factors to over-parameterize models (Guo et al., 2020; Cao et al., 2020), improving learning without increasing inference-time cost. This can be seen as a simple, teacher-free form of knowledge distillation. Factorized layers also arise implicitly, as in the case of multi-head attention (MHA) (Vaswani et al., 2017). Despite such appealing properties, networks with factorized neural layers are non-trivial to train from scratch, requiring custom initialization, regularization, and optimization schemes. In this paper we focus on initialization and regularization, and on how they interact with gradient-based optimization of factorized layers. We first study spectral initialization (SI), which initializes factors using singular value decomposition (SVD) so that their product approximates the target un-factorized matrix. Then, we study Frobenius decay (FD), which regularizes the product of the matrices in a factorized layer rather than its individual terms. Both are motivated by matching the training regimen of the analogous un-factorized optimization. Note that SI has been previously considered in the context of model compression, albeit usually for factorizing pre-trained models (Nakkiran et al., 2015; Yaguchi et al., 2019; Yang et al., 2020) rather than as a low-rank initialization for end-to-end training; FD has been used in model compression with an uncompressed teacher (Idelbayev & Carreira-Perpiñán, 2020). We formalize and study the justifications of SI and FD both from the classical perspective (matching the un-factorized objective and scaling) and in the presence of BatchNorm (Ioffe & Szegedy, 2015), where this justification does not apply. Extending recent studies of weight decay (Zhang et al., 2019), we argue that the effective step-size at spectral initialization is controlled by the factorization's Frobenius norm, and we show convincing evidence that weight decay penalizes the nuclear norm. We then turn to applications, starting with low-memory training, which is dominated by unstructured sparsity methods, i.e., guessing "lottery tickets" (Frankle & Carbin, 2019), with a prevailing trend of viewing low-rank methods as uncompetitive for compression (Blalock et al., 2020; Zhang et al., 2020; Idelbayev & Carreira-Perpiñán, 2020; Su et al., 2020). Here we show that, without tuning, factorized neural layers outperform all structured sparsity methods on ResNet architectures (He et al., 2016), despite lagging on VGG (Simonyan & Zisserman, 2015). Through ablations, we show that this result is due to using both SI and FD on the factorized layers.
We further compare to a recent evaluation of tensor-decomposition approaches for compressed WideResNet training (Zagoruyko & Komodakis, 2016; Gray et al., 2019), showing that (a) low-rank approaches with SI and FD can outperform them, and (b) they are themselves helped by tensor variants of SI and FD. We also study a fledgling subfield we term overcomplete knowledge distillation (Arora et al., 2018; Guo et al., 2020; Cao et al., 2020), in which model weights are over-parameterized as overcomplete factorizations; after training, the factors are multiplied to obtain a compact representation of the same network. We show that FD leads to significant improvements; e.g., we outperform ResNet110 with an overcomplete ResNet56 that takes 1.5x less time to train and has 2x fewer parameters at test time. Finally, we study Transformer architectures, starting by showing that FD improves translation performance when applied to MHA. We also show that SI is critical for low-rank training of the model's linear layers. In an application to BERT pre-training (Devlin et al., 2019), we construct a Frobenius-regularized variant, FLAMBé, of the LAMB method (You et al., 2020), and show that, much like for the translation Transformers, it improves performance for both full-rank and low-rank MHA layers. To summarize, our main contributions are (1) motivating the study of training factorized layers via both the usual setting (model compression) and recent applications (distillation, multi-head attention), (2) justifying the use of SI and FD mathematically and experimentally, and (3) demonstrating their effectiveness by providing strong baselines and novel advances in many settings. Code to reproduce our results is available here: https://github.com/microsoft/fnl_paper.

1.1 RELATED WORK. We are not the first to study gradient descent on factorized layers; in particular, deep linear nets are well studied in theory (Saxe et al., 2014; Gunasekar et al., 2019). Apart from Bernacchia et al. (2018), these works largely examine existing algorithms, although Arora et al. (2018) do effectively propose overcomplete knowledge distillation. Rather than the descent method, we focus on the initialization and regularization. For the former, several papers use SI after training (Nakkiran et al., 2015; Yaguchi et al., 2019; Yang et al., 2020), while Ioannou et al. (2016) argue for initializing factors as though they were single layers, which we find inferior to SI in some cases. Outside deep learning, spectral methods have also been shown to yield better initializations for certain matrix and tensor problems (Keshavan et al., 2010; Chi et al., 2019; Cai et al., 2019). For regularization, Gray et al. (2019) suggest compression-rate scaling (CRS), which scales weight decay using the reduction in parameter count; this is justified via the usual Bayesian understanding of $\ell_2$-regularization (Murphy, 2012). However, we find that FD is superior to any tuning of regular weight decay, which subsumes CRS. Our own analysis is based on recent work suggesting that the function of weight decay is to aid optimization by preventing the effective step-size from becoming too small (Zhang et al., 2019).

2 PRELIMINARIES ON FACTORIZED NEURAL LAYERS.
In the training phase of (self-)supervised ML, we often solve optimization problems of the form $\min_{\theta \in \Theta} \frac{1}{|S|} \sum_{(x,y) \in S} \ell(f_\theta(x), y) + \Omega(\theta)$, where $f_\theta : \mathcal{X} \mapsto \mathcal{Y}$ is a function from input domain $\mathcal{X}$ to output domain $\mathcal{Y}$ parameterized by elements $\theta \in \Theta$, $\ell : \mathcal{Y} \times \mathcal{Y} \mapsto \mathbb{R}$ is a scalar-valued loss function, $\Omega : \Theta \mapsto \mathbb{R}$ is a scalar-valued regularizer, and $S \subset \mathcal{X} \times \mathcal{Y}$ is a finite set of (self-)supervised training examples. We study the setting where $f_\theta$ is a neural network: an $L$-layer function whose parameters $\theta$ consist of $L$ matrices $W_i \in \mathbb{R}^{m_i \times n_i}$ and whose output $f_\theta(x)$ given input $x$ is defined recursively using $L$ functions $g_i$ via the formula $x_i = g_i(W_i, x_{i-1})$, with $x_0 = x$ and $f_\theta(x) = x_L$. The standard approach to training $f_\theta$ is to specify the regularizer $\Omega$, (randomly) pick an initialization in $\Theta$, and iteratively update the parameters using some first-order algorithm such as SGD to optimize the objective above until some stopping criterion is met. However, in many cases we instead optimize over factorized variants of these networks, in which some or all of the matrices $W_i \in \mathbb{R}^{m_i \times n_i}$ are re-parameterized as a product $W_i = U_i \big( \prod_{j=1}^{d_i} M_{ij} \big) V_i^T$ for some inner depth $d_i \geq 0$ and matrices $U_i \in \mathbb{R}^{m_i \times r_i}$, $V_i \in \mathbb{R}^{n_i \times r_i}$, and $M_{ij} \in \mathbb{R}^{r_i \times r_i}$ $\forall j$. As discussed in the following examples, this can be done to obtain better generalization, improve optimization, or satisfy practical computational or memory constraints during training or inference. For simplicity, we drop the subscript $i$ whenever re-parameterizing only one layer, and only consider the cases where the inner depth $d$ is 0 or 1.

2.1 FULLY-CONNECTED LAYERS. A fully-connected layer takes an $n$-dimensional input $x_{i-1}$ and outputs an $m$-dimensional vector $x_i = \sigma(W x_{i-1})$, where $\sigma : \mathbb{R}^m \mapsto \mathbb{R}^m$ is an element-wise activation function. Here, decomposing $W \in \mathbb{R}^{m \times n}$ into the product $UV^T$, where $U \in \mathbb{R}^{m \times r}$, $V \in \mathbb{R}^{n \times r}$, and setting $r \ll \min\{m, n\}$ reduces computation and memory costs from $O(mn)$ to $O(mr + nr)$. We refer to this setting as model compression. Standard learning theory suggests that a small rank $r$ also improves generalization; e.g., for a factorized fully-connected ReLU network, applying $\|W\|_F^2 / \|W\|_2^2 \leq \mathrm{rank}(W)$ to Neyshabur et al. (2018, Theorem 1) and substituting $W_i = U_i V_i^T$ gives a w.h.p. margin bound $\tilde{O}(\sqrt{mr/|S|})$, suggesting that the generalization error varies with the square root of the rank (see Corollary A.1). Alternatively, by setting $r \geq \min\{m, n\}$ and/or including an inner matrix $M \in \mathbb{R}^{r \times r}$, we can attempt to take advantage of improved optimization due to increased width (Du & Hu, 2019) and/or increased depth (Arora et al., 2018). Crucially, this does not increase inference costs, because we can recompose the matrix after training and just use the product. As the goal is to obtain a better small model by first training a large one, we refer to this setting as overcomplete knowledge distillation; of course, unlike regular distillation, it is much simpler, since there is no student-teacher training stage.
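As a concrete illustration of SI and FD for a factorized layer $W \approx UV^T$, here is a minimal PyTorch sketch; splitting the singular values evenly between the two factors and penalizing the squared Frobenius norm are our own choices for the sketch, not prescriptions from the paper:

    import torch

    def spectral_init(W, r):
        """Rank-r spectral initialization: W ~= U0 @ V0.T via truncated SVD."""
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        s = S[:r].sqrt()  # split singular values evenly between the factors
        return U[:, :r] * s, Vh[:r].t() * s  # U0: (m, r), V0: (n, r)

    def frobenius_decay(U, V, lam):
        """Penalize lam * ||U V^T||_F^2 rather than decaying U and V separately.
        Uses ||U V^T||_F^2 = tr((U^T U)(V^T V)) to avoid forming the m x n product."""
        return lam * torch.trace((U.t() @ U) @ (V.t() @ V))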
A straightforward way of factorizing this tensor without using tensor decomposition is to reshape it into a $c_i k \times c_{i-1} k$ matrix $W$, which can then be decomposed as $W = UV^T$ for $U \in \mathbb{R}^{c_i k \times r}$, $V \in \mathbb{R}^{c_{i-1} k \times r}$ and some rank $r > 0$. As in the fully-connected case, we can either set the rank $r$ to be small in order to reduce the number of parameters or alternatively increase the width ($r$) or the depth ($d$) of the factorization to do overcomplete knowledge distillation. Note that in the low-rank case a naive approach does not save computation, since we must first multiply $U$ and $V^T$, reshape the product $UV^T$, and then use the resulting tensor in a regular 2d convolution of the original size and complexity. However, as shown by Tai et al. (2016), applying the 2d $k \times k$ convolution with $c_{i-1}$ input channels and $c_i$ output channels obtained by reshaping $UV^T$ is equivalent to a composition of two 1d convolutions: the first, defined by $V^T \in \mathbb{R}^{r \times c_{i-1} k}$, consists of $r$ output channels and filters of size $k$ along one input dimension, and the second, defined by $U \in \mathbb{R}^{c_i k \times r}$, consists of $c_i$ output channels and filters of size $k$ along the other input dimension. Together the two 1d convolutions require $O(kr(c_i + c_{i-1}))$ memory and computation, which is significantly better than the $O(k^2 c_i c_{i-1})$ cost of the unfactorized case if $r \ll k \min\{c_i, c_{i-1}\}$.
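As a rough illustration of this equivalence, one could build the factorized convolution directly from two 1d convolutions. The sketch below (ours) assumes stride 1 and an odd kernel size $k$ so that "same" padding is simple; the helper name is illustrative.

```python
import torch.nn as nn

def low_rank_conv2d(c_in, c_out, k, rank):
    """Tai et al. (2016)-style sketch: a k x k conv as two 1d convs.
    Cost drops from O(k^2 c_in c_out) to O(k r (c_in + c_out))."""
    return nn.Sequential(
        # First 1d conv (playing the role of V^T): r output channels,
        # k-tap filters along the height dimension.
        nn.Conv2d(c_in, rank, kernel_size=(k, 1), padding=(k // 2, 0), bias=False),
        # Second 1d conv (playing the role of U): c_out output channels,
        # k-tap filters along the width dimension.
        nn.Conv2d(rank, c_out, kernel_size=(1, k), padding=(0, k // 2), bias=False),
    )
```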
This paper studies initialization and regularization in factorized neural networks, where a weight matrix is re-parameterized as the product of several matrices. The authors propose spectral initialization, which initializes the factor matrices using the SVD of the un-factorized matrix. The authors also propose Frobenius decay, which regularizes the Frobenius norm of the product of the factor matrices. The motivation is to mimic the training routines of the non-decomposed counterparts. The authors empirically show the effectiveness of spectral initialization and Frobenius decay in different applications: compressed model training, knowledge distillation, and multi-head self-attention.
Perturbation Type Categorization for Multiple $\ell_p$ Bounded Adversarial Robustness
1 INTRODUCTION. There has been a long line of work studying the vulnerabilities of machine learning models to small changes in the input data. In particular, most existing works focus on $\ell_p$ bounded perturbations (Szegedy et al., 2013; Goodfellow et al., 2015). While the majority of prior work aims at achieving robustness against a single perturbation type (Madry et al., 2018; Kurakin et al., 2017; Tramèr et al., 2018; Dong et al., 2018; Zhang et al., 2019; Carmon et al., 2019), real-world deployment of machine learning models requires them to be robust against various imperceptible changes in the input, irrespective of the attack type. Prior work has shown that when models are trained to be robust against one perturbation type, such robustness typically does not transfer to attacks of a different type (Schott et al., 2018; Kang et al., 2019). As a result, recent works have proposed to develop models that are robust against the union of multiple perturbation types (Tramèr & Boneh, 2019; Maini et al., 2020). Specifically, these works consider adversaries limited by their $\ell_p$ distance from the original input for p ∈ {1, 2, ∞}. While these methods improve the overall robustness against multiple perturbation types, when evaluating the robustness against each individual perturbation type, the robustness of models trained by these methods is still considerably worse than that of models trained on a single perturbation type. Further, these methods are found to be sensitive to small changes in hyperparameters. In this work, we propose an alternative view that does not require a single predictor to be robust against a union of perturbation types. Instead, we propose to utilize a union of predictors to improve the overall robustness, where each predictor is specialized to defend against certain perturbation types. (We will open-source the code, pre-trained models, and perturbation type datasets upon publication.) In particular, we introduce the problem of categorizing adversarial examples based on their perturbation types. Based on this idea, we propose PROTECTOR, a two-stage pipeline that performs Perturbation Type Categorization for Robustness against multiple perturbations. Specifically, first a perturbation type classifier predicts the type of the attack. Then, among the second-level predictors, PROTECTOR selects the one that is the most robust to the predicted perturbation type to make the final prediction. We validate our approach from both theoretical and empirical aspects. First, we present a theoretical analysis to show that for benign samples with the same ground-truth label, their distributions become highly distinct when added with different types of perturbations, and thus can be separated. Further, we show that there exists a natural tension between attacking the top-level perturbation classifier and the second-level predictors: strong attacks against the second-level predictors make it easier for the perturbation classifier to predict the adversarial perturbation type, and fooling the perturbation classifier requires planting weaker (or less representative) attacks against the second-level predictors. As a result, even an imperfect perturbation classifier is sufficient to significantly improve the overall robustness of the model to multiple perturbation types. Empirically, we show that the perturbation type classifier generalizes well on classifying adversarial examples against different adversarially trained models.
Then we further compare PROTECTOR to the state-of-the-art defenses against multiple perturbations on MNIST and CIFAR-10. PROTECTOR outperforms prior approaches by over 5% against the union of the $\ell_1$, $\ell_2$ and $\ell_\infty$ attacks. While past work has focused on the worst-case metric against all attacks, on average these methods suffer significant trade-offs against individual attacks. From the suite of 25 different attacks tested, the average improvement for PROTECTOR over all the attacks w.r.t. the state-of-the-art baseline defense is ∼15% on both MNIST and CIFAR-10. In particular, by adding random noise to the model input at test time, we further increase the tension between attacking the top-level and second-level components, and bring an additional improvement of robustness against adaptive attackers. Additionally, PROTECTOR provides a modular way to integrate and update defenses against a single perturbation type. 2 RELATED WORK. Adversarial examples. The realization of the existence of adversarial examples in deep neural networks has spun active research on attack algorithms and defense proposals (Szegedy et al., 2013). Among different types of attacks (Madry et al., 2018; Hendrycks et al., 2019; Hendrycks & Dietterich, 2019; Bhattad et al., 2020), the most commonly studied ones constrain the adversarial perturbation within an $\ell_p$ ball of radius $\epsilon_p$ around the original input. To improve the model robustness in the presence of such adversaries, the majority of existing defenses utilize adversarial training (Goodfellow et al., 2015), which augments the training dataset with adversarial images. To date, different variants of the original adversarial training algorithm remain the most successful defenses against adversarial attacks (Carmon et al., 2019; Zhang et al., 2019; Wong et al., 2020; Rice et al., 2020). Other types of defenses include input transformation (Guo et al., 2018; Buckman et al., 2018) and network distillation (Papernot et al., 2016), but these were rendered ineffective under stronger adversaries (He et al., 2017; Carlini & Wagner, 2017a; Athalye et al., 2018; Tramer et al., 2020). Other works have explored the relation between randomizing the inputs and adversarial examples. Tabacof & Valle (2016) analyzed the change in adversarial robustness with varying levels of noise. Hu et al. (2019) evaluated the robustness of a data point to random noise to detect adversarial examples, whereas Cohen et al. (2019) utilized randomized smoothing for certified robustness to adversarial attacks. Defenses against multiple perturbation types. Recent research has been drawn towards the goal of universal adversarial robustness. Since $\ell_p$-norm bounded attacks are amongst the strongest attacks in the adversarial examples literature, defending against a union of such attacks is an important step towards this end goal. Schott et al. (2018) and Kang et al. (2019) showed that models trained for a given $\ell_p$-norm bounded attack are not robust against attacks in a different $\ell_q$ region. Succeeding work has aimed at developing one single model that is robust against the union of multiple perturbation types. Schott et al. (2018) proposed the use of multiple variational autoencoders to achieve robustness to multiple $\ell_p$ attacks on the MNIST dataset. Tramèr & Boneh (2019) used simple aggregations of multiple adversaries to achieve non-trivial robust accuracy against the union of the $\ell_1$, $\ell_2$, $\ell_\infty$ regions. Maini et al.
(2020) proposed the MSD algorithm that takes gradient steps in the union of multiple $\ell_p$ regions to improve multiple-perturbation robustness. In a related line of work, Croce & Hein (2020a) proposed a method for provable robustness against all $\ell_p$ regions for p ≥ 1. Instead of presenting empirical results, they study the upper and lower bounds of certified robust test error on much smaller perturbation radii. Therefore, their work has a different focus, and is not directly comparable to the empirical defenses studied in our work. Detection of adversarial examples. Multiple prior works have focused on detecting adversarial examples (Feinman et al., 2017; Lee et al., 2018; Ma et al., 2018; Cennamo et al., 2020; Fidel et al., 2019; Yin et al., 2019a;b). However, most of these defenses have been shown to be vulnerable in the presence of adaptive adversaries (Carlini & Wagner, 2017a; Tramer et al., 2020). In comparison, our work focuses on the more challenging problem of categorizing different perturbation types. However, we show that by establishing a trade-off between fooling the perturbation classifier and the individual $\ell_p$-robust models, even an imperfect perturbation classifier is sufficient to make our pipeline robust. 3 PROTECTOR: PERTURBATION TYPE CATEGORIZATION FOR ROBUSTNESS. In this section, we discuss our proposed PROTECTOR approach, which performs perturbation type categorization to improve the model robustness against multiple perturbation types. We first illustrate the PROTECTOR pipeline in Figure 1, then discuss the details of each component. At a high level, PROTECTOR performs the classification task as a two-stage process. Given an input x, PROTECTOR first utilizes a perturbation classifier $C_{adv}$ to predict its adversarial perturbation type. Then, based on the $\ell_p$ attack type predicted by $C_{adv}$, PROTECTOR uses the corresponding second-level predictor $M_p$ to provide the final prediction, where $M_p$ is specially trained to be robust against the $\ell_p$ attack. Formally, let $f_\theta$ be the PROTECTOR model; then the final prediction is: $f_\theta(x) = M_p(x)$, s.t. $p = \arg\max C_{adv}(x)$ (1). Note that when the input is a benign image, it could be classified as any perturbation type by $C_{adv}$, since all second-level predictors should achieve a high test accuracy on benign images. As shown in Figure 1, although we consider the robustness against three attack types, i.e., $\ell_1$, $\ell_2$, $\ell_\infty$ perturbations, unless otherwise specified, our perturbation classifier performs binary classification between $p \in \{\{1, 2\}, \infty\}$. As will be discussed in Section 6, using two second-level predictors achieves better overall robustness than using three second-level predictors. We hypothesize that, compared to the $\ell_\infty$ adversarial examples, $\ell_1$ and $\ell_2$ attacks are harder to separate, especially when facing an adaptive adversary which aims to attack the entire pipeline. To provide an intuitive illustration, we randomly sample 10K adversarial examples generated with PGD attacks on MNIST, and visualize the results of a Principal Component Analysis (PCA) in Figure 2. We observe that the first two principal components for $\ell_1$ and $\ell_2$ adversarial examples largely overlap, while those for $\ell_\infty$ are clearly from a different distribution. Note that this simple visualization by no means suggests that $\ell_1$ and $\ell_2$ adversarial examples are not separable; it merely serves as a motivation. 4 THEORETICAL ANALYSIS.
In this section, we provide a theoretical justification of our PROTECTOR framework design. First, we formally illustrate the setup of robust classification against multiple $\ell_p$ perturbation types, where we consider models trained for a binary classification task. Based on this problem setting, in Theorem 1 we show the existence of a classifier that can separate adversarial examples belonging to different perturbation types. Moreover, in Theorem 2 we show that our PROTECTOR framework naturally offers a trade-off between fooling the perturbation classifier $C_{adv}$ and the individual robust models $M_p$, so it is extremely difficult for adversaries to stage attacks against the entire pipeline. Note that we focus on the simplified binary classification task for the convenience of theoretical analysis, but our PROTECTOR framework improves the robustness of models trained on real-world image classification benchmarks as well; we discuss the empirical examination in Section 6.
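As a minimal sketch of the two-stage prediction rule in Eq. (1) (ours for illustration; the model names and the two-way {$\ell_1$, $\ell_2$} vs. $\ell_\infty$ split follow the description above, and the models themselves are assumed given):

```python
import torch

def protector_predict(x, C_adv, M_l12, M_linf):
    """Route each input to the predictor matching the perturbation type
    inferred by the top-level classifier C_adv (Eq. 1).
    C_adv outputs two logits: class 0 = {l1, l2} attack, class 1 = l_inf attack.
    M_l12 / M_linf are the second-level robust classifiers (our naming)."""
    p = C_adv(x).argmax(dim=-1)            # predicted perturbation type per input
    out_l12, out_linf = M_l12(x), M_linf(x)
    # Select, per example, the logits of the predictor chosen by C_adv.
    return torch.where(p.unsqueeze(-1) == 0, out_l12, out_linf)
```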
The paper proposes a two-stage defense method to improve adversarial robustness over different perturbation types. Specifically, it first builds a hierarchical binary classifier to differentiate the perturbation types, and then uses the result to route the input to the corresponding defense model. It proves that the different types of perturbations can be separated and that the adversary must be weakened in order to fool the binary classifier. Experiments show that the method achieves a clear improvement.
BiGCN: A Bi-directional Low-Pass Filtering Graph Neural Network
1 INTRODUCTION. Graphs are important research objects in the field of machine learning as they are good carriers for structural data such as social networks and citation networks. Recently, graph neural networks (GNNs) have received extensive attention due to their great performance in graph representation learning. A graph neural network takes node features and graph structure (e.g. an adjacency matrix) as input, and embeds the graph into a lower-dimensional space. With the success of GNNs (Kipf & Welling, 2017; Veličković et al., 2017; Hamilton et al., 2017; Chen et al., 2018) in various domains, more and more efforts are focused on the reasons why GNNs are so powerful (Xu et al., 2019). Li et al. (2018) re-examined graph convolutional networks (GCNs) and connected them with Laplacian smoothing. NT & Maehara (2019) revisited GCNs in terms of graph signal processing and explained that many graph convolutions can be considered as low-pass filters (e.g. Kipf & Welling, 2017; Wu et al., 2019) which capture low-frequency components and remove some feature noise by making connected nodes more similar. In fact, these findings are not new. Since their first appearance in Bruna et al. (2014), spectral GCNs have been closely related to graph signal processing and denoising. The spectral graph convolutional operation is derived from the Graph Fourier Transform, and the filter can be formulated as a function of the graph Laplacian matrix, denoted as $g(L)$. In general spectral GCNs, the forward function is: $H^{(l+1)} = \sigma(g(L) H^{(l)})$. Kipf & Welling (2017) approximated $g(L)$ using first-order Chebyshev polynomials, which can be simplified as multiplying the augmented normalized adjacency matrix with the feature matrix. Despite its efficiency, this first-order graph filter is found to be sensitive to changes in the graph signals and the underlying graph structure (Isufi et al., 2016; Bianchi et al., 2019). For instance, on isolated nodes or small single components of the graph, its denoising effect is quite limited due to the lack of reliable neighbors. Potentially incorrect structure information will also constrain the power of GCNs and cause more negative impacts with deeper layers. As noisy/incorrect information is inevitable in real-world graph data, more powerful and robust GCNs are needed to solve this problem. In this work, we propose a new graph neural network with a more powerful denoising effect from the perspective of graph signal processing and higher fault tolerance to the graph structure. Different from image data, graph data usually has high-dimensional features, and there may be latent connections/correlations between dimensions. Noting this, we take this connection information into account to offset the effects of unreliable structure information, and remove extra noise by applying a smoothness assumption on such a "feature graph". Derived from the additional Laplacian smoothing regularization on this feature graph, we obtain a novel variant of spectral GCNs, named BiGCN, which contains low-pass graph filters for both the original graph and a latent feature-connection graph in each convolution layer. Our model can extract low-frequency components from both graphs, so it is more expressive than the original spectral GCN; and it removes noise from two directions, so it is also more robust.
We evaluate our model on two tasks: node classification and link prediction. In addition to the original graph data, in order to demonstrate the effectiveness of our model with respect to graph signal denoising and fault tolerance, we design three cases with noise/structure mistakes: randomly adding Gaussian noise with different variances to a certain percentage of nodes; adding different levels of Gaussian noise to the whole graph feature matrix; and changing a certain percentage of connections. The remarkable performance of our model in these experiments verifies its power and robustness on both clean and noisy data. The main contributions of this work are summarized below. • We propose a new framework for the representation learning of graphs with node features. Instead of only considering the signals in the original graph, we take into account the feature correlations, making the model more robust. • We formulate our graph neural network based on Laplacian smoothing and derive a bi-directional low-pass graph filter using the Alternating Direction Method of Multipliers (ADMM) algorithm. • We set up three cases to demonstrate the powerful denoising capacity and high fault tolerance of our model on the tasks of node classification and link prediction. 2 RELATED WORK. We summarize the related work in the field of graph signal processing and denoising and recent work on spectral graph convolutional networks as follows. 2.1 GRAPH SIGNAL PROCESSING AND DENOISING. Graph-structured data is ubiquitous in the world. Graph signal processing (GSP) (Ortega et al., 2018) is intended for analyzing and processing graph signals whose values are defined on the set of graph vertices. It can be seen as a bridge between classical signal processing and spectral graph theory. One line of research in this area is the generalization of the Fourier transform to the graph domain and the development of powerful graph filters (Zhu & Rabbat, 2012; Isufi et al., 2016). It can be applied to various tasks, such as representation learning and denoising (Chen et al., 2014). More recently, the tools of GSP have been successfully used for the definition of spectral graph neural networks, making a strong connection between GSP and deep learning. In this work, we start from the concepts of graph signal processing and define a new smoothing model for deep graph learning and graph denoising. It is worth mentioning that the concept of denoising/robustness in GSP is different from defense/robustness against adversarial attacks (e.g. Zügner & Günnemann, 2019), so we do not make comparisons with those models. 2.2 SPECTRAL GRAPH CONVOLUTIONAL NETWORKS. Inspired by the success of convolutional neural networks in images and other Euclidean domains, researchers also started to extend the power of deep learning to graphs. One of the earliest trends for defining the convolutional operation on graphs is the use of the Graph Fourier Transform and its definition in the spectral domain instead of the original spatial domain (Bruna et al., 2014). Defferrard et al. (2016) proposed ChebyNet, which defines a filter as Chebyshev polynomials of the diagonal matrix of eigenvalues and can be exactly localized in the k-hop neighborhood. Later on, Kipf & Welling (2017) simplified the Chebyshev filters using a first-order polynomial filter, which led to the well-known graph convolutional network.
Recently, many new spectral graph filters have been developed. For example, rational auto-regressive moving average graph filters (ARMA) (Isufi et al., 2016; Bianchi et al., 2019) were proposed to enhance the modeling capacity of GNNs. Compared to polynomial filters, ARMA filters are more robust and provide a more flexible graph frequency response. Feedback-looped filters (Wijesinghe & Wang, 2019) further improved localization and computational efficiency. There is also another type of graph convolutional network that defines convolutional operations in the spatial domain by aggregating information from neighbors. The spatial types are not closely related to our work, so they are beyond the scope of our discussion. As we will discuss later, our model is closely related to spectral graph convolutional networks. We define our graph filter from the perspective of Laplacian smoothing, and then extend it not only to the original graph but also to a latent feature graph in order to improve the capacity and robustness of the model. 3 BACKGROUND: GRAPH SIGNAL PROCESSING. In this section, we briefly introduce some concepts of graph signal processing (GSP), including graph smoothness, the Graph Fourier Transform and graph filters, which will be used in later sections. Graph Laplacian and Smoothness. A graph can be represented as G = (V, E), which consists of a set of n nodes V = {1, ..., n} and a set of edges E ⊆ V × V. In this paper, we only consider undirected attributed graphs. We denote the adjacency matrix of G as $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ and the degree matrix of G as $D = \mathrm{diag}(d(1), \ldots, d(n)) \in \mathbb{R}^{n \times n}$. In the degree matrix, d(i) represents the degree of vertex i ∈ V. We consider that each vertex i ∈ V is associated with a scalar $x(i) \in \mathbb{R}$, which is also called a graph signal. All graph signals can be represented by $x \in \mathbb{R}^n$. Several variants of the graph Laplacian can be defined on a graph G. We denote the graph Laplacian of G as $L = D - A \in \mathbb{R}^{n \times n}$. It should be noted that each row of the graph Laplacian L sums to zero. The smoothness of a graph signal x can be measured through the quadratic form of the graph Laplacian: $\Delta(x) = x^T L x = \sum_{i,j} \frac{1}{2} a_{ij} (x(i) - x(j))^2$. Due to the fact that $x^T L x \geq 0$, L is a positive semi-definite and symmetric matrix. Graph Fourier Transform and Graph Filters. Decomposing the Laplacian matrix as $L = U \Lambda U^T$, we obtain the orthogonal eigenvectors U as the Fourier basis and the eigenvalues Λ as graph frequencies. The Graph Fourier Transform $\mathcal{F} : \mathbb{R}^n \to \mathbb{R}^n$ is defined by $\mathcal{F}x = \hat{x} := U^T x$. The inverse Graph Fourier Transform is defined by $\mathcal{F}^{-1}\hat{x} = x := U\hat{x}$. It enables us to transfer a graph signal to the spectral domain, and then define a graph filter g in the spectral domain for filtering the graph signal x: $g(L)x = U g(\Lambda) U^T x = U g(\Lambda) \mathcal{F}(x)$, where $g(\Lambda) = \mathrm{diag}(g(\lambda_1), \ldots, g(\lambda_N))$ controls how the graph frequencies are altered. 4 BIGCN. The Graph Fourier Transform has been successfully used to define various low-pass filters on graph signals (column vectors of the feature matrix) and derive spectral graph convolutional networks (Defferrard et al., 2016; Bianchi et al., 2019; Wijesinghe & Wang, 2019). A spectral graph convolutional operation can be formulated as a function g of the Laplacian matrix L. Although it can smooth the graph and remove certain feature-wise noise by assimilating neighboring nodes, it is sensitive to node-wise noise and unreliable structure information.
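To make the spectral filtering pipeline above concrete, here is a minimal NumPy sketch (ours); the frequency response $g(\lambda) = 1/(1+\lambda)$ is an illustrative low-pass choice, not BiGCN's derived filter:

```python
import numpy as np

def low_pass_filter(A, x, g=lambda lam: 1.0 / (1.0 + lam)):
    """Apply a spectral filter g(L) to a graph signal x.
    A: (n, n) adjacency matrix; x: (n,) signal; g: frequency response
    (1/(1 + lambda) attenuates high graph frequencies, i.e. low-pass)."""
    L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian L = D - A
    lam, U = np.linalg.eigh(L)          # graph frequencies and Fourier basis
    x_hat = U.T @ x                     # graph Fourier transform
    return U @ (g(lam) * x_hat)         # filter in spectral domain, transform back
```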
Notice that when the node features contain rich information, there may exist correlations between different dimensions of the features, which can be used to mitigate this low fault-tolerance problem. Therefore, it is natural to define filters on "feature signals" (row vectors of the graph feature matrix) based on the feature correlations. Inspired by this, we propose a bi-directional spectral GCN, named BiGCN, with column filters and row filters derived from the Laplacian smoothness assumption, as shown in Fig 1. In this way, we can enhance the denoising capacity and the fault tolerance to graph structure of spectral graph convolutions. To explain it better, we start with the following simple case.
This paper proposes a new graph convolutional network. It considers not only the original graph structure information but also the latent correlations between features, resulting in a graph neural network that acts as a bi-directional low-pass filter. The new filter is derived using the alternating direction method of multipliers (ADMM) algorithm. Experiments show that the new model's denoising performance is better than that of previous models.
Adversarially Guided Actor-Critic
1 INTRODUCTION. Research in deep reinforcement learning (RL) has proven to be successful across a wide range of problems (Silver et al., 2014; Schulman et al., 2016; Lillicrap et al., 2016; Mnih et al., 2016). Nevertheless, generalization and exploration in RL still represent key challenges that leave most current methods ineffective. First, a battery of recent studies (Farebrother et al., 2018; Zhang et al., 2018a; Song et al., 2020; Cobbe et al., 2020) indicates that current RL methods fail to generalize correctly even when agents have been trained in a diverse set of environments. Second, exploration has been extensively studied in RL; however, most hard-exploration problems use the same environment for training and evaluation. Hence, since a well-designed exploration strategy should maximize the information received from a trajectory about an environment, the exploration capabilities may not be appropriately assessed if that information is memorized. In this line of research, we choose to study the exploration capabilities of our method and its ability to generalize to new scenarios. Our evaluation domains will, therefore, be tasks with sparse rewards in procedurally-generated environments. In this work, we propose Adversarially Guided Actor-Critic (AGAC), which reconsiders the actor-critic framework by introducing a third protagonist: the adversary. Its role is to predict the actor's actions correctly. Meanwhile, the actor must not only find the optimal actions to maximize the sum of expected returns, but also counteract the predictions of the adversary. This formulation is lightly inspired by adversarial methods, specifically generative adversarial networks (GANs) (Goodfellow et al., 2014). Such a link between GANs and actor-critic methods has been formalized by Pfau & Vinyals (2016); however, in the context of a third protagonist, we draw a different analogy. The adversary can be interpreted as playing the role of a discriminator that must predict the actions of the actor, and the actor can be considered as playing the role of a generator that behaves to deceive the predictions of the adversary. This approach has the advantage, as with GANs, that the optimization procedure generates a diversity of meaningful data, corresponding to sequences of actions in AGAC. This paper analyses and explores how AGAC explicitly drives diversity in the behaviors of the agent while remaining reward-focused, and to which extent this approach allows adapting to the evolving state space of procedurally-generated environments, where the map is constructed differently with each new episode. Moreover, because stability is a legitimate concern, since specific instances of adversarial networks were shown to be prone to hyperparameter sensitivity issues (Arjovsky & Bottou, 2017), we also examine this aspect in our experiments. The contributions of this work are as follows: (i) we propose a novel actor-critic formulation inspired by adversarial learning (AGAC), (ii) we empirically analyse AGAC on key reinforcement learning aspects such as diversity, exploration and stability, (iii) we demonstrate significant gains in performance on several sparse-reward hard-exploration tasks, including procedurally-generated tasks. 2 RELATED WORK. Actor-critic methods (Barto et al., 1983; Sutton, 1984) have been extended to the deep learning setting by Mnih et al.
(2016), who combined deep neural networks and multiple distributed actors with an actor-critic setting, with strong results on Atari. Since then, many additions have been proposed, be it architectural improvements (Vinyals et al., 2019), better advantage or value estimation (Schulman et al., 2016; Flet-Berliac et al., 2021), or the incorporation of off-policy elements (Wang et al., 2017; Oh et al., 2018; Flet-Berliac & Preux, 2020). Regularization was shown to improve actor-critic methods, either by enforcing trust regions (Schulman et al., 2015; 2017; Wu et al., 2017) or by correcting for off-policiness (Munos et al., 2016; Gruslys et al., 2018); and recent works analyzed its impact from a theoretical standpoint (Geist et al., 2019; Ahmed et al., 2019; Vieillard et al., 2020a;b). Related to our work, Han & Sung (2020) use the entropy of the mixture between the policy induced from a replay buffer and the current policy as a regularizer. To the best of our knowledge, none of these methods explored the use of an adversarial objective to drive exploration. While introduced in supervised learning, adversarial learning (Goodfellow et al., 2015; Miyato et al., 2016; Kurakin et al., 2017) has been leveraged in several RL works. Ho & Ermon (2016) propose an imitation learning method that uses a discriminator whose task is to distinguish between expert trajectories and those of the agent, while the agent tries to match expert behavior to fool the discriminator. Bahdanau et al. (2019) use a discriminator to distinguish goal states from non-goal states based on a textual instruction, and use the resulting model as a reward function. Florensa et al. (2018) use a GAN to produce sub-goals at the right level of difficulty for the current agent, inducing a form of curriculum. Additionally, Pfau & Vinyals (2016) provide a parallel between GANs and the actor-critic framework. While exploration is driven in part by the core RL algorithms (Fortunato et al., 2018; Han & Sung, 2020; Ferret et al., 2021), it is often necessary to resort to exploration-specific techniques. For instance, intrinsic motivation encourages exploratory behavior from the agent. Some works use state-visitation counts or pseudo-counts to promote exhaustive exploration (Bellemare et al., 2016a), while others use curiosity rewards, expressed in the magnitude of prediction error from the agent, to push it towards unfamiliar areas of the state space (Burda et al., 2018). Ecoffet et al. (2019) propose a technique akin to tree traversal to explore while learning to come back to promising areas. Eysenbach et al. (2018) show that encouraging diversity helps with exploration, even in the absence of reward. Last but not least, generalization is a key challenge in RL. Zhang et al. (2018b) showed that, even when the environment is not deterministic, agents can overfit to their training distribution and that it is difficult to distinguish agents likely to generalize to new environments from those that will not. In the same vein, recent work has advocated using procedurally-generated environments, in which a new instance of the environment is sampled when a new episode starts, to assess generalization capabilities better (Justesen et al., 2018; Cobbe et al., 2020). Finally, methods based on network randomization (Igl et al., 2019), noise injection (Lee et al., 2020), and credit assignment (Ferret et al., 2020)
have been proposed to reduce the generalization gap for RL agents. 3 BACKGROUND AND NOTATIONS. We place ourselves in the Markov Decision Processes framework (Puterman, 1994). A Markov Decision Process (MDP) is a tuple $\mathcal{M} = \{\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma\}$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}$ is the transition kernel, $\mathcal{R}$ is the bounded reward function and $\gamma \in [0, 1)$ is the discount factor. Let π denote a stochastic policy mapping states to distributions over actions. We place ourselves in the infinite-horizon setting, i.e., we seek a policy that optimizes $J(\pi) = \mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$. The value of a state is the quantity $V^\pi(s) = \mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s]$ and the value $Q^\pi(s, a)$ of performing action a in state s and then following policy π is defined as $Q^\pi(s, a) = \mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a]$. The advantage function, which quantifies how much better an action a is than the average action in state s, is $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$. Finally, the entropy $\mathcal{H}^\pi$ of a policy is calculated as $\mathcal{H}^\pi(s) = \mathbb{E}_{\pi(\cdot|s)}[-\log \pi(\cdot|s)]$. Actor-Critic and Deep Policy Gradients. An actor-critic algorithm is composed of two main components: a policy and a value predictor. In deep RL, both the policy and the value function are obtained via parametric estimators; we denote θ and φ their respective parameters. The policy is updated via policy gradient, while the value is usually updated via temporal difference or Monte Carlo rollouts. In practice, for a sequence of transitions $\{s_t, a_t, r_t, s_{t+1}\}_{t \in [0, N]}$, we use the following policy gradient loss (including the commonly used entropic penalty): $L_{PG} = -\frac{1}{N} \sum_{t'=t}^{t+N} \big(A_{t'} \log \pi(a_{t'}|s_{t'}, \theta) + \alpha \mathcal{H}^\pi(s_{t'}, \theta)\big)$, where α is the entropy coefficient and $A_t$ is the generalized advantage estimator (Schulman et al., 2016) defined as $A_t = \sum_{t'=t}^{t+N} (\gamma\lambda)^{t'-t} \big(r_{t'} + \gamma V_{\phi_{old}}(s_{t'+1}) - V_{\phi_{old}}(s_{t'})\big)$, with λ a fixed hyperparameter and $V_{\phi_{old}}$ the value function estimator at the previous optimization iteration. To estimate the value function, we solve the non-linear regression problem $\mathrm{minimize}_\phi \sum_{t'=t}^{t+N} (V_\phi(s_{t'}) - \hat{V}_{t'})^2$, where $\hat{V}_{t'} = A_{t'} + V_{\phi_{old}}(s_{t'})$. 4 ADVERSARIALLY GUIDED ACTOR-CRITIC. To foster diversified behavior in its trajectories, AGAC introduces a third protagonist to the actor-critic framework: the adversary. The role of the adversary is to accurately predict the actor's actions, by minimizing the discrepancy between its action distribution $\pi_{adv}$ and the distribution induced by the policy π. Meanwhile, in addition to finding the optimal actions to maximize the sum of expected returns, the actor must also counteract the adversary's predictions by maximizing the discrepancy between π and $\pi_{adv}$ (see Appendix B for an illustration). This discrepancy, used as a form of exploration bonus, is defined as the difference of action log-probabilities (see Eq. (1)), whose expectation is the Kullback–Leibler divergence: $D_{KL}(\pi(\cdot|s) \,\|\, \pi_{adv}(\cdot|s)) = \mathbb{E}_{\pi(\cdot|s)}[\log \pi(\cdot|s) - \log \pi_{adv}(\cdot|s)]$. Formally, for each state-action pair $(s_t, a_t)$ in a trajectory, an action-dependent bonus $\log \pi(a_t|s_t) - \log \pi_{adv}(a_t|s_t)$ is added to the advantage. In addition, the value target of the critic is modified to include the action-independent equivalent, which is the KL-divergence $D_{KL}(\pi(\cdot|s_t) \,\|\, \pi_{adv}(\cdot|s_t))$.
We discuss the role of these mirrored terms below, and the implications of AGAC's modified objective from a more theoretical standpoint in the next section. In addition to the parameters θ (resp. $\theta_{old}$, the parameters of the policy at the previous iteration) and φ defined above (resp. $\phi_{old}$, those of the critic), we denote ψ (resp. $\psi_{old}$) those of the adversary. AGAC minimizes the following loss: $L_{AGAC} = L_{PG} + \beta_V L_V + \beta_{adv} L_{adv}$. In the new objective $L_{PG} = -\frac{1}{N} \sum_{t=0}^{N} \big(A^{AGAC}_t \log \pi(a_t|s_t, \theta) + \alpha \mathcal{H}^\pi(s_t, \theta)\big)$, AGAC modifies $A_t$ as: $A^{AGAC}_t = A_t + c\,\big(\log \pi(a_t|s_t, \theta_{old}) - \log \pi_{adv}(a_t|s_t, \psi_{old})\big)$, (1) with c a varying hyperparameter that controls the dependence on the action log-probability difference. To encourage exploration without preventing asymptotic stability, c is linearly annealed during the course of training. $L_V$ is the objective function of the critic, defined as: $L_V = \frac{1}{N} \sum_{t=0}^{N} \big(V_\phi(s_t) - (\hat{V}_t + c\,D_{KL}(\pi(\cdot|s_t, \theta_{old}) \,\|\, \pi_{adv}(\cdot|s_t, \psi_{old})))\big)^2$. (2) Finally, $L_{adv}$ is the objective function of the adversary: $L_{adv} = \frac{1}{N} \sum_{t=0}^{N} D_{KL}(\pi(\cdot|s_t, \theta_{old}) \,\|\, \pi_{adv}(\cdot|s_t, \psi))$. (3) Eqs. (1), (2) and (3) are the three equations that our method modifies (we color in blue the specific parts) in the traditional actor-critic framework. The terms $\beta_V$ and $\beta_{adv}$ are fixed hyperparameters. Under the proposed actor-critic formulation, the probability of sampling an action is increased if the modified advantage is positive, i.e. (i) the corresponding return is larger than the predicted value and/or (ii) the action log-probability difference is large. More precisely, our method favors transitions whose actions were less accurately predicted than the average action, i.e. $\log \pi(a|s) - \log \pi_{adv}(a|s) \geq D_{KL}(\pi(\cdot|s) \,\|\, \pi_{adv}(\cdot|s))$. This is particularly visible for $\lambda \to 1$, in which case the generalized advantage is $A_t = G_t - V_{\phi_{old}}(s_t)$, resulting in the appearance of both aforementioned mirrored terms in the modified advantage: $A^{AGAC}_t = G_t - \hat{V}^{\phi_{old}}_t + c\,\big(\log \pi(a_t|s_t) - \log \pi_{adv}(a_t|s_t) - \hat{D}^{\phi_{old}}_{KL}(\pi(\cdot|s_t) \,\|\, \pi_{adv}(\cdot|s_t))\big)$, with $G_t$ the observed return, $\hat{V}^{\phi_{old}}_t$ the estimated return and $\hat{D}^{\phi_{old}}_{KL}(\pi(\cdot|s_t) \,\|\, \pi_{adv}(\cdot|s_t))$ the estimated KL-divergence (estimated components of $V_{\phi_{old}}(s_t)$ from Eq. 2). To avoid instability, in practice the adversary is a separate estimator, updated with a smaller learning rate than the actor. This way, it represents a delayed and more steady version of the actor's policy, which prevents the agent from having to constantly adapt or focus solely on fooling the adversary.
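Putting Eqs. (1)-(3) together, a minimal sketch of the combined loss could look as follows (ours for illustration; it assumes a discrete action space with `torch.distributions.Categorical` policies, and treats the old-parameter quantities via `detach` rather than via separate frozen networks):

```python
import torch
from torch.distributions import Categorical, kl_divergence

def agac_loss(pi, pi_adv, actions, adv, values, v_targets, c, alpha, beta_v, beta_adv):
    """pi, pi_adv: Categorical distributions from the actor and the adversary;
    adv: generalized advantages A_t (precomputed, constant w.r.t. theta);
    v_targets: bootstrapped value targets V_hat_t."""
    logp = pi.log_prob(actions)
    logp_adv = pi_adv.log_prob(actions)
    kl = kl_divergence(pi, pi_adv)                    # D_KL(pi || pi_adv), per state
    a_agac = adv + c * (logp - logp_adv).detach()     # Eq. (1): modified advantage
    loss_pg = -(a_agac * logp + alpha * pi.entropy()).mean()
    loss_v = ((values - (v_targets + c * kl.detach())) ** 2).mean()   # Eq. (2)
    loss_adv = kl.mean()  # Eq. (3); in practice only the adversary's parameters
                          # are updated with this term, at a smaller learning rate
    return loss_pg + beta_v * loss_v + beta_adv * loss_adv
```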
This paper proposed a new adversarially guided actor-critic framework for deep reinforcement learning (RL), and introduced a new Kullback–Leibler divergence bonus term, based on the difference between the actor network and the adversary network, to deal with exploration in RL. The experimental results showed the merit of this method for exploration. Some comments are provided as follows.
Accounting for Unobserved Confounding in Domain Generalization
1 INTRODUCTION. Prediction algorithms use data, necessarily sampled under specific conditions, to learn correlations that extrapolate to new or related data. If successful, the performance gap between these two domains is small, and we say that algorithms generalize beyond their training data. Doing so is difficult, however; some form of uncertainty about the distribution of new data is unavoidable. The set of potential distributional changes that we may encounter is mostly unknown and in many cases may be large and varied. Some examples include covariate shifts (Bickel et al., 2009), interventions in the underlying causal system (Pearl, 2009), varying levels of noise (Fuller, 2009) and confounding (Pearl, 1998). All of these feature in modern applications, and while learning systems are increasingly deployed in practice, generalization of predictions and their reliability in a broad sense remains an open question. A common approach to formalize learning with uncertain data is, instead of optimizing for correlations in a fixed distribution, to do so simultaneously for a range of different distributions in an uncertainty set $\mathcal{P}$ (Ben-Tal et al., 2009): $\mathrm{minimize}_f \; \sup_{P \in \mathcal{P}} \; \mathbb{E}_{(x,y) \sim P}[L(f(x), y)]$ (1) for some measure of error L of the function f that relates input and output examples $(x, y) \sim P$. Choosing different sets $\mathcal{P}$ leads to estimators with different properties. This includes as special cases, for instance, many approaches in domain adaptation, covariate shift, robust statistics and optimization (Kuhn et al., 2019; Bickel et al., 2009; Duchi et al., 2016; 2019; Sinha et al., 2017; Wozabal, 2012; Abadeh et al., 2015; Duchi & Namkoong, 2018). Robust solutions to problem (1) are said to generalize if potentially shifted test distributions are contained in $\mathcal{P}$, but larger sets $\mathcal{P}$ also result in conservative solutions (i.e. with sub-optimal performance) on data sampled from distributions away from worst-case scenarios, in general. One formulation of causality is in fact also a version of this problem, for $\mathcal{P}$ defined as any distribution arising from arbitrary interventions on observed covariates x leading to shifts in their distribution $P_x$ (see e.g. sections 3.2 and 3.3 in Meinshausen, 2018). The invariance to changes in covariate distributions of causal solutions is powerful for generalization, but implicitly assumes that all covariates or other drivers of the outcome subject to change at test time are observed. Often shifts occur elsewhere, for example in the distribution of unobserved confounders, in which case conditional distributions $P_{y|x}$ may also shift. Perhaps surprisingly, in the presence of unobserved confounders, the goals of achieving robustness and learning a causal model can be different (and similar behaviour also occurs with varying measurement noise). There is in general an inherent trade-off in generalization performance. In the presence of unobserved confounders, causal and correlation-based solutions are both optimal in different regimes, depending on the shift in the underlying generating mechanism from which new data is generated. Consider a simple example, illustrated in Figure 1, to show this explicitly.
We assume access to observations of variables $(X_1, X_2, Y)$ in two training datasets, each dataset sampled with differing variances ($\sigma^2 = 1$ and $\sigma^2 = 2$) from the following structural model $\mathcal{F}$: $X_2 := -H + E_{X_2}$, $Y := X_2 + 3H + E_Y$, $X_1 := Y + X_2 + E_{X_1}$, where $E_{X_1}, E_{X_2} \sim \mathcal{N}(0, \sigma^2)$ and $E_Y \sim \mathcal{N}(0, 1)$ are exogenous variables. In a first scenario (leftmost panel) we consider all data (training and testing) to be generated without unobserved confounders, $H := 0$; and, in a second scenario (remaining panels), all data with unobserved confounders, $H := E_H \sim \mathcal{N}(0, 1)$. Each panel of Figure 1 shows performance on new data obtained after manipulating the underlying data generating system; the magnitude and type of intervention appear on the horizontal axis. We consider the following learning paradigms: Ordinary Least Squares (OLS) learns the linear mapping that minimizes average training risk, Domain Robust Optimization (DRO) minimizes the maximum training risk among the two available datasets, and the causal solution, assumed known, has fixed coefficients $(0, 1)$ for $(X_1, X_2)$. Two important observations motivate this paper. First, observe that Ordinary Least Squares (OLS) and Domain Robust Optimization (DRO) absorb spurious correlations (due to $H$, and the fact that $X_1$ is caused by $Y$) with unstable performance under shifts in $p(X_1, X_2)$ but, as a consequence, good performance under shifts in $p(H)$. Causal solutions, by contrast, are robust to shifts in $p(X_1, X_2)$, even on new data with large shifts, but underperform substantially under changes in the distribution of unobserved confounders $p(H)$. Second, the presence of unobserved confounding hurts generalization performance in general, with higher errors for all methods, e.g. contrast the middle and leftmost panels. To the best of our knowledge, the influence of unobserved confounders has been minimally explored in the context of generalization of learning algorithms, even though, as Figure 1 shows, in this context different shifts in distribution may have important consequences for predictive performance. Our Contributions. In this paper we provide a new choice of $\mathcal{P}$ and learning problem (1) that we show to be justified by certain statistical invariances across training and testing data, to be expected in the presence of unobserved confounders. This leads us to define a new differentiable, regularized objective for representation learning. Our proposal defines $\mathcal{P}$ as an affine combination of available training data distributions, and we show that solutions to this problem are robust to more general shifts in distribution than previously considered, spanning robustness to shifts in observed, unobserved, and target variables, depending on the properties of the available training data distributions. This approach has benefits for performance out-of-sample but also for tasks involving variable selection, where important features are consistently replicated across experiments with our objective. 2 INVARIANCES IN THE PRESENCE OF UNOBSERVED CONFOUNDERS. This section formally introduces the problem of out-of-distribution generalization. We describe in greater detail the reasons that popular learning principles, such as Empirical Risk Minimization (ERM), underperform in general, and define certain invariances to recover solutions that generalize.
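Before formalizing these invariances, the introductory structural model is easy to simulate. The following minimal sketch (ours) reproduces the confounding bias of least squares on this example:

```python
import numpy as np

def sample_env(n, sigma2=1.0, confounded=True, seed=0):
    """Simulate the introductory structural model; returns (X1, X2) and Y."""
    rng = np.random.default_rng(seed)
    H = rng.normal(0, 1, n) if confounded else np.zeros(n)
    E_X2 = rng.normal(0, np.sqrt(sigma2), n)
    E_Y = rng.normal(0, 1, n)
    E_X1 = rng.normal(0, np.sqrt(sigma2), n)
    X2 = -H + E_X2
    Y = X2 + 3 * H + E_Y
    X1 = Y + X2 + E_X1
    return np.stack([X1, X2], axis=1), Y

# OLS absorbs the spurious X1 -> Y correlation; the causal coefficients are (0, 1).
X, Y = sample_env(100_000)
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
print(beta_ols)  # far from (0, 1) when confounded=True
```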
We take the perspective that all potential distributions that may be observed over a system of variables arise from a causal model $\mathcal{M} = (\mathcal{F}, V, U)$, characterized by endogenous variables $V \in \mathcal{V}$, representing all variables determined by the system, either observed or not; exogenous variables $U \in \mathcal{U}$, in contrast imposed upon the model; and a sequence of structural equations $\mathcal{F} : \mathcal{U} \to \mathcal{V}$, describing how endogenous variables can be (deterministically) obtained from the exogenous variables (Pearl, 2009). An example is given in Figure 1: $V = (X_1, X_2, H, Y)$ are endogenous and $U = (E_{X_1}, E_{X_2}, E_H, E_Y)$ are exogenous variables. Unseen data is generated from such a system $\mathcal{M}$ after manipulating the distribution of exogenous variables U, which propagates across the system, shifting the joint distribution of all variables V, whether observed or unobserved, but keeping the causal mechanisms $\mathcal{F}$ unchanged. Representative examples include changes in data collection conditions, such as due to different measurement devices, or new data sources, such as patients in different hospitals or countries, among many others. Our goal is to learn a representation $Z = \phi(X)$ acting on a set of observed variables $X \subset V$ with the ability to extrapolate to new unseen data, and doing so acknowledging that all relevant variables in V are likely not observed. Unobserved confounders (for the task at hand, say predicting $Y \in V$) simultaneously cause X and Y, confounding or biasing the causal association between X and Y, giving rise to spurious correlations that do not reproduce in general (Pearl, 1998; 2009). We present a brief argument below highlighting the systematic bias due to unobserved confounding in ERM. 2.1 THE BIASES OF UNOBSERVED CONFOUNDING. Consider the following structural equation for observed variables $(X, Y)$: $Y := f \circ \phi(X) + E$ (2) where $f := f(\cdot\,; \beta_0)$ is a predictor acting on a representation $Z := \phi(X)$ and E stands for potential sources of misspecification and unexplained sources of variability. For a given sample of data $(x, y)$ and $z = \phi(x)$, the optimal prediction rule $\hat{\beta}$ is often taken to minimize squared residuals, with $\hat{\beta}$ the solution to the normal equations: $\nabla_\beta f(z; \hat{\beta})\, y = \nabla_\beta f(z; \hat{\beta})\, f(z; \hat{\beta})$, where $\nabla_\beta f(z; \hat{\beta})$ denotes the column vector of gradients of f with respect to the parameters β evaluated at $\hat{\beta}$. Consider the Taylor expansion of $f(z; \beta_0)$ around an estimate $\hat{\beta}$ sufficiently close to $\beta_0$: $f(z; \beta_0) \approx f(z; \hat{\beta}) + \nabla_\beta f(z; \hat{\beta})^T (\beta_0 - \hat{\beta})$. Using this approximation in our first-order optimality condition we find $\nabla_\beta f(z; \hat{\beta}) \nabla_\beta f(z; \hat{\beta})^T (\beta_0 - \hat{\beta}) + v = \nabla_\beta f(z; \hat{\beta})\, \epsilon$ (3) where v is a scaled disturbance term that includes the rest of the linear approximation of f and is small asymptotically; $\epsilon := y - f(z; \hat{\beta})$ is the residual. $\hat{\beta}$ is consistent for the true $\beta_0$ if and only if $\nabla_\beta f(z; \hat{\beta})\, \epsilon \to 0$ in probability. This assumption is satisfied if E (all sources of variation in Y not captured by X) is independent of X (i.e. exogenous), or in other words if all common causes or confounders of both X and Y have been observed. Conventional regression may assign significant associations to variables that are neither directly nor indirectly related to the outcome, and in this case, we have no performance guarantees on new data with changes in the distribution of these variables.
Omitted variables are a common source of unobserved confounding, but we note in Appendix B that similar biases also arise from other prevalent model misspecifications, such as measurement error (Carroll et al., 2006). 2.2 INVARIANCES WITH MULTIPLE ENVIRONMENTS. The underlying structural mechanism $\mathcal{F}$, which also relates unobserved with observed variables, even if unknown, is stable irrespective of manipulations in exogenous variables that may give rise to heterogeneous data sources. Under certain conditions, statistical footprints emerge from this structural invariance across different data sources, properties testable from data that have been exploited recently, for example by Peters et al. (2016), Ghassami et al. (2017) and Rothenhäusler et al. (2019). We assume that such a heterogeneous data scenario applies: input and output pairs $(X, Y)$ are observed across heterogeneous data sources or environments e, defined as probability distributions $P_e$ over an observation space $\mathcal{X} \times \mathcal{Y}$ that arise, just like new unseen data, from manipulations in the distribution of exogenous variables in an underlying model $\mathcal{M}$. For the remainder of this section, consider restricting ourselves to data sources emerging from manipulations in exogenous $E_X$, appearing in the structural equations of X only, in an underlying additive noise model (see Appendix C.1 for the precise statement of assumptions and more context). It may be shown, by considering the distributions of error terms $Y - f \circ \phi(X)$ and their correlation with any function of X, that the inner product $\mathbb{E}\,\nabla_\beta f(z; \beta_0)\,\epsilon$, even if non-zero due to unobserved confounding, converges to a fixed unknown value equal across training environments (see Appendix C.1 for the derivation). With a similar decomposition to the one given in equation (3), in the population case it holds that, up to disturbance terms, $\big(\mathbb{E}_{(x,y) \sim P_i} \nabla_\beta f(z; \beta^\star) \nabla_\beta f(z; \beta^\star)^T - \mathbb{E}_{(x,y) \sim P_j} \nabla_\beta f(z; \beta^\star) \nabla_\beta f(z; \beta^\star)^T\big) (\beta_0 - \beta^\star) = \mathbb{E}_{(x,y) \sim P_i} \nabla_\beta f(z; \beta^\star)\,\epsilon - \mathbb{E}_{(x,y) \sim P_j} \nabla_\beta f(z; \beta^\star)\,\epsilon = 0$ (4) where $\beta^\star$ is a solution to $\mathbb{E}_{(x,y) \sim P_i} \nabla_\beta f(z; \beta)\,(y - f(z; \beta)) - \mathbb{E}_{(x,y) \sim P_j} \nabla_\beta f(z; \beta)\,(y - f(z; \beta)) = 0$ (5) and is consistent for the causal parameters $\beta_0$ if unique. Here $i, j \in \mathcal{E}$ are the indices of any two observed environments in an index set $\mathcal{E}$. This invariance across environments must hold for causal parameters (under certain conditions) even in the presence of unobserved confounders. A few remarks are necessary concerning this relationship and its extrapolation properties. • The first is based on the observation that, up to a constant, each inner product in (5) is the gradient of the squared error with respect to β. This reveals that the optimal predictor, in the presence of unobserved confounding, is not one that produces minimum loss but one that produces a non-zero loss gradient equal across environments. Seeking minimum-error solutions, even in the population case, produces estimators with necessarily unstable correlations, because the variability due to unobserved confounders is not explainable from observed data. Forcing gradients to be zero then forces models to utilize artifacts of the specific data collection process that are not related to the input-output relationship; and, for this reason, they will not in general perform outside training data. • From (5) we may pose a sequence of moment conditions for each pair of available environments.
We may then seek solutions β that make all of them small simultaneously. Solutions are unique if the set of moments is sufficient to identify $\beta^\star$ exactly (and given our model assumptions they may be interpreted as causal and robust to certain interventions). We revisit our introductory example to show in Appendix A that, in contrast to ERM and Invariant Risk Minimization (IRM) (a related approach proposed by Arjovsky et al. (2019) that we discuss in more detail in later sections), this procedure does recover the underlying causal model correctly in the presence of unobserved confounding. • In practice, however, only a set of solutions may be identified, with no performance guarantees for any individual solution, and no guarantees if the assumptions fail to hold. Moreover, even if accessible, causal solutions, robust to certain distribution shifts, may not always be desirable under more general shifts (recall for instance the experiments in the rightmost panel of Figure 1).
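Since, up to a constant, each term in Eq. (5) is the gradient of the squared-error risk, one way to operationalize these pairwise moment conditions is to penalize differences between environment-wise loss gradients. The following is a minimal sketch (ours), not the paper's exact regularized objective:

```python
import torch

def moment_penalty(model, envs):
    """Pairwise conditions of Eq. (5): for every pair of environments, match the
    gradient of the squared-error risk w.r.t. the predictor's parameters.
    envs: list of (x, y) batches, one per environment; model(x) has shape (n, 1)."""
    grads = []
    for x, y in envs:
        loss = ((y - model(x).squeeze(-1)) ** 2).mean()  # squared-error risk in env e
        g = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        grads.append(torch.cat([gi.flatten() for gi in g]))
    # Penalize differences between environment-wise loss gradients.
    return sum((grads[i] - grads[j]).pow(2).sum()
               for i in range(len(grads)) for j in range(i + 1, len(grads)))
```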
This paper proposes a new regularizer that can be plugged into gradient-based learning algorithms and that aims at solving the problems induced by unobserved confounders. The authors provide an upper bound for one specific kind of distributionally robust optimization problem, whose uncertainty set is defined as the affine combinations of training distributions, and based on this an algorithm is proposed to deal with the problem of unobserved confounders. Experiments on three medical datasets validate the effectiveness of the method.
Group-Connected Multilayer Perceptron Networks
1 INTRODUCTION. Deep neural networks have been quite successful across various machine learning tasks. However, this advancement has been mostly limited to certain domains. For example, in image and voice data, one can leverage domain properties such as location invariance, scale invariance, coherence, etc. via convolutional layers (Goodfellow et al., 2016). Alternatively, for graph data, graph convolutional networks were suggested to leverage adjacency patterns present in datasets structured as a graph (Kipf & Welling, 2016; Xu et al., 2019). However, there has been little progress in learning deep representations for datasets that do not follow a particular known structure in the feature domain. Take for instance the case of a simple tabular dataset for disease diagnosis. Such a dataset may consist of features from different categories such as demographics (e.g., age, gender, income, etc.), examinations (e.g., blood pressure, lab results, etc.), and other clinical conditions. In this scenario, the lack of any known structure between features to be used as a prior would lead to the use of a fully-connected multilayer perceptron network (MLP). Nonetheless, it has been known in the literature that MLP architectures, due to their huge complexity, do not usually admit efficient training and generalization for networks of more than a few layers. In this paper, we propose Group-Connected Multilayer Perceptron (GMLP) networks. The main idea behind GMLP is to learn and leverage expressive feature subsets, henceforth referred to as feature groups. A feature group is defined as a subset of features that provides a meaningful representation or high-level concept that would help the downstream task. For instance, in the disease diagnosis example, the combination of a certain blood factor and age might be the indicator of a higher-level clinical condition which would help the final classification task. Furthermore, GMLP leverages feature groups, limiting network connections to local group-wise connections, and builds a feature hierarchy via merging groups as the network grows in depth. GMLP can be seen as an architecture that learns expressive feature combinations and leverages them via group-wise operations. The main contributions of this paper are as follows: (i) proposing a method for end-to-end learning of expressive feature combinations, (ii) suggesting a network architecture to utilize feature groups and local connections to build deep representations, (iii) conducting extensive experiments demonstrating the effectiveness of GMLP as well as visualizations and ablation studies for a better understanding of the suggested architecture. We evaluated the proposed method on five different real-world datasets in various application domains and demonstrated the effectiveness of GMLP compared to state-of-the-art methods in the literature. Furthermore, we conducted ablation studies and comparisons to study different architectural and training factors as well as visualizations on MNIST and synthesized data. To help reproduce the results and encourage future studies on group-connected architectures, we made the source code related to this paper available online (we plan to include a link to the source code and GitHub page in the camera-ready version). 2 RELATED WORK. Fully-connected MLPs are the most widely-used neural models for datasets in which no prior assumption is made on the relationship between features.
However , due to the huge complexity of fully-connected layers , MLPs are prone to overfitting , resulting in shallow architectures limited to a few layers in depth ( Goodfellow et al. , 2016 ) . Various techniques have been suggested to improve the training of these models , including regularization techniques such as L-1/L-2 regularization , dropout , etc . and normalization techniques such as layer normalization , weight normalization , batch normalization , etc . ( Srivastava et al. , 2014 ; Ba et al. , 2016 ; Salimans & Kingma , 2016 ; Ioffe & Szegedy , 2015 ) . For instance , self-normalizing neural networks ( SNNs ) have recently been suggested as a state-of-the-art normalization method that prevents vanishing or exploding gradients , which helps in training feed-forward networks of greater depth ( Klambauer et al. , 2017 ) . From the architectural perspective , there has been great attention toward networks consisting of sparse connections between layers rather than dense fully-connected layers ( Dey et al. , 2018 ) . Sparsely connected neural networks are usually trained based on either a sparse prior structure over the network architecture ( Richter & Wattenhofer , 2018 ) or on pruning a fully-connected network to a sparse network ( Yun et al. , 2019 ; Tartaglione et al. , 2018 ; Mocanu et al. , 2018 ) . However , it should be noted that the main objective of most of the sparse neural network literature has been to improve memory and compute requirements while maintaining competitive accuracies compared to MLPs . As a parallel line of research , the idea of using expressive feature combinations or groups has been suggested as a prior over the feature domain . Perhaps the most successful and widespread use of this idea is in creating random forest models , in which different trees are trained based on different feature subsets in order to deal with high-dimensional and high-variance data ( Breiman , 2001 ) . More recently , feature grouping was suggested by Aydore et al . ( 2019 ) as a statistical regularization technique to learn from datasets with a large feature size and a small number of training samples . They perform the forward network computation by projecting input features using samples taken from a bank of feature grouping matrices , reducing the input layer complexity and regularizing the model . In another recent study , Ke et al . ( 2018 ) used expressive feature combinations to learn from tabular datasets using a recursive encoder with a shared embedding network . They suggest a recursive architecture in which more important feature groups have a more direct impact on the final prediction . While promising results have been reported using these methods , feature grouping has been mostly considered as a preprocessing step . For instance , Aydore et al . ( 2019 ) use the recursive nearest agglomeration ( ReNA ) ( Hoyos-Idrobo et al. , 2018 ) clustering to determine feature groups prior to the analysis . Alternatively , Ke et al . ( 2018 ) defined feature groups based on a pre-trained gradient boosting decision tree ( GBDT ) ( Friedman , 2001 ) . Feature grouping as a preprocessing step not only increases the complexity and raises practical considerations , but also limits the optimality of the selected features in subsequent analysis . In this study , we propose an end-to-end solution to learn expressive feature groups .
Moreover , we introduce a network architecture to exploit interrelations within the feature groups to reduce the network complexity and to train deeper representations . 1 We plan to include a link to the source code and GitHub page related to this paper in the camera-ready version . 3 PROPOSED METHOD . 3.1 ARCHITECTURE OVERVIEW . In this paper , we propose GMLP , which intuitively can be broken down into three stages : ( i ) selecting expressive feature groups , ( ii ) learning dynamics within each group individually , and ( iii ) merging information between groups as the network grows in depth ( see Figure 1 ) . In this architecture , expressive groups are jointly selected during the training phase . Furthermore , GMLP leverages the feature groups and uses local group-wise weight layers to significantly reduce the number of parameters . While the suggested idea can be materialized as different architectures , in the current study we suggest organizing the network as an architecture resembling a binary tree spanning from the leaves ( i.e. , features ) to a certain abstraction depth closer to the root ( note that , in this paper , tree structures are considered to grow from leaves to the root ; in this context , limiting the depth is synonymous with considering the tree portion spanning from a certain depth to the leaf nodes ) . As the network grows deeper , after each local group-wise weight layer , half of the groups are merged using pooling operations , effectively reducing the width of the network while increasing the receptive field . At the last layer , all features within all groups are concatenated into a dense feature vector fed to the output layer . 3.2 NOTATION . We consider the generic problem of supervised classification based on a dataset of feature and target pairs , $D : ( x_{1:N} , y_{1:N} )$ , where $x_i \in \mathbb{R}^d$ , $y_i \in \{ 1 , \dots , C \}$ , and $N$ is the number of dataset samples . Furthermore , we define the group size , $m$ , as the number of neurons or elements within each group , and the group count , $k$ , as the number of selected groups , which are essentially subsets of input features . Also , $L$ is used to refer to the total depth of a network . We use $z_i^l \in \mathbb{R}^m$ to refer to the activation values of group $i$ in layer $l$ . In this paper , we define all vectors as column vectors . 3.3 NETWORK LAYERS . In this section , we present the formal definition of the different GMLP network layers . The very first layer of the network , Group-Select , is responsible for organizing features into $k$ groups of size $m$ each . A routing matrix , $\Psi$ , is used for connecting each neuron within each group to exactly one feature in the feature set : $z^0_{1:k} = \Psi x , \quad ( 1 )$ where $\Psi \in \{ 0 , 1 \}^{km \times d}$ is a sparse matrix determining the features that are present in each group . As we are interested in jointly learning $\Psi$ during the training phase , we use the following continuous relaxation : $$\Psi_{i,j} \approx \frac{ \exp ( \psi_{i,j} / \tau ) }{ \sum_{j'=1}^{d} \exp ( \psi_{i,j'} / \tau ) } . \quad ( 2 )$$ In this equation , $\psi$ is a real-valued matrix reparameterizing the routing matrix through a softmax operation with temperature $\tau$ . The lower the temperature , the more ( 2 ) converges to the desired discrete and sparse binary routing matrix . Note that , in the continuous relaxation , the matrix $\psi$ can be optimized via the backpropagation of gradients from classification loss terms . In the next section , we provide further detail on temperature annealing schedules as well as other techniques to enhance the $\Psi$ approximation .
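As a concrete illustration of Eqs. (1)–(2), the following is a minimal PyTorch sketch of the Group-Select relaxation. The module name, the initialization scale, and the usage values (d, k, m, tau) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupSelect(nn.Module):
    """Continuous relaxation of the routing matrix Psi (Eqs. 1-2).

    Each of the k*m group neurons is softly assigned to one of the d
    input features via a temperature-controlled softmax over psi.
    """
    def __init__(self, d, k, m):
        super().__init__()
        # psi is the real-valued reparameterization of the routing matrix.
        self.psi = nn.Parameter(torch.randn(k * m, d) * 0.01)

    def forward(self, x, tau):
        # Psi approaches a binary {0,1}^{km x d} routing matrix as tau -> 0.
        Psi = F.softmax(self.psi / tau, dim=-1)   # (k*m, d)
        z0 = x @ Psi.t()                          # (batch, k*m)
        return z0                                 # reshape to (batch, k, m) downstream

# Illustrative usage: 64 input features, k=8 groups of size m=4.
x = torch.randn(32, 64)
layer = GroupSelect(d=64, k=8, m=4)
z0 = layer(x, tau=1.0)   # tau would be annealed toward 0 during training
print(z0.shape)          # torch.Size([32, 32])
```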
Based on the selected groups , we suggest local fully-connected weight layers for each group : Group-FC . The goal of Group-FC is to extract higher-level representations using the selected expressive feature subsets . This operation is usually followed by non-linearity functions ( e.g. , ReLU ) , normalization operations ( e.g. , Batch Norm ) , and dropout . Formally , Group-FC can be defined as : $$z_i^{l+1} = f ( W_i^l z_i^l + b_i^l ) , \quad ( 3 )$$ where $W_i^l \in \mathbb{R}^{m \times m}$ and $b_i^l \in \mathbb{R}^m$ are the weight matrix and bias vector applied to group $i$ at layer $l$ . Here , $f$ represents the other subsequent operations such as non-linearity , normalization , and dropout . Lastly , Group-Pool is defined as an operation which merges the representations of two groups into a single group , reducing the network width by half while increasing the effective receptive field : $$z_i^{l+1} = \mathrm{pool} ( z_i^l , z_{i+k/2^{l+1}}^l ) , \quad ( 4 )$$ where $z_i^l$ and $z_{i+k/2^{l+1}}^l$ are the $i$th group from the first and second halves , respectively , and $\mathrm{pool}$ is a pooling function from $\mathbb{R}^{2m}$ to $\mathbb{R}^m$ . In this study , we explore different variants of pooling functions such as max pooling , average pooling , or using linear weight layers as transformations from $\mathbb{R}^{2m}$ to $\mathbb{R}^m$ . Please note that while we use a similar terminology as pooling in convolutional networks , the pooling operation explained here is not applied location-wise ; instead , it is applied feature-wise , between different group pairs . The values of $m$ and $k$ are closely related to the number and order of feature interactions for a certain task . Using proper $m$ and $k$ values enables us to reduce the parameter space while maintaining the model complexity required to solve the task . However , finding the ideal $m$ and $k$ directly from a given dataset is a very challenging problem . In this work , we treat $m$ and $k$ as hyperparameters to be found by a hyperparameter search .
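The group-wise layers above admit a compact batched implementation. Below is a hedged PyTorch sketch of Group-FC (Eq. 3) and a max-pooling variant of Group-Pool (Eq. 4); the einsum-based batching, initialization, and all shapes are illustrative choices rather than the paper's code.

```python
import torch
import torch.nn as nn

class GroupFC(nn.Module):
    """Per-group fully-connected layer (Eq. 3): one m x m weight per group."""
    def __init__(self, k, m):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(k, m, m) * 0.1)
        self.bias = nn.Parameter(torch.zeros(k, m))

    def forward(self, z):                      # z: (batch, k, m)
        # Apply group i's own weight matrix to group i's activations only.
        out = torch.einsum('bkm,kmn->bkn', z, self.weight) + self.bias
        return torch.relu(out)                 # f = ReLU in this sketch

def group_pool(z):
    """Feature-wise max pooling (Eq. 4): merge group i with its partner
    from the second half, halving the network width."""
    k = z.shape[1]
    first, second = z[:, : k // 2], z[:, k // 2 :]
    return torch.maximum(first, second)        # (batch, k/2, m)

z = torch.randn(32, 8, 4)                      # k=8 groups of size m=4
z = group_pool(GroupFC(8, 4)(z))               # -> (batch, 4, 4)
print(z.shape)
```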
The paper describes an MLP architecture for problems in which the features do not have a known structure (e.g., tabular data). A "differentiable routing matrix" partitions the input features into K blocks. Then, standard MLPs are applied to each block and the results are recursively aggregated as one moves forward through the model.
SP:dbb0ed3b53fc0905982b51853e83f5cdbaf3b535
Task-Agnostic and Adaptive-Size BERT Compression
1 INTRODUCTION . Pre-trained Transformer ( Vaswani et al. , 2017 ) -based language models like BERT ( Devlin et al. , 2019 ) , XLNet ( Yang et al. , 2019 ) and RoBERTa ( Liu et al. , 2019 ) have achieved impressive performance on a variety of downstream natural language processing tasks . These models are pre-trained on massive language corpus via self-supervised tasks to learn language representation and fine-tuned on specific downstream tasks . Despite their effectiveness , these models are quite expensive in terms of computation and memory cost , which makes them difficult for the deployment on different downstream tasks and various resource-restricted scenarios such as online servers , mobile phones , and embedded devices . Therefore , it is crucial to compress pre-trained models for practical deployment . Recently , a variety of compression techniques have been adopted to compress pre-trained models , such as pruning ( McCarley , 2019 ; Gordon et al. , 2020 ) , weight factorization ( Lan et al. , 2019 ) , quantization ( Shen et al. , 2020 ; Zafrir et al. , 2019 ) , and knowledge distillation ( Sun et al. , 2019 ; Sanh et al. , 2019 ; Chen et al. , 2020 ; Jiao et al. , 2019 ; Hou et al. , 2020 ; Song et al. , 2020 ) . Several existing works ( Tsai et al. , 2020 ; McCarley , 2019 ; Gordon et al. , 2020 ; Sanh et al. , 2019 ; Zafrir et al. , 2019 ; Chen et al. , 2020 ; Lan et al. , 2019 ; Sun et al. , 2019 ) compress a large pre-trained model into a small or fast model with fixed size on the pre-training or fine-tuning stage and have achieved good compression efficiency and accuracy . However , from the perspective of practical deployment , a fixed size model can not be deployed in devices with different memory and latency constraints . For example , smaller models are preferred in embedded devices than in online servers , and faster inference speed is more critical in online services than in offline services . On the other hand , some previous methods ( Chen et al. , 2020 ; Hou et al. , 2020 ) compress the models on the fine-tuning stage for each specific downstream task . This can achieve good accuracy due to the dedicated design in each task . However , compressing the model for each task can be laborious and a compressed model for one task may not generalize well on another downstream task . In this paper , we study the BERT compression in a different setting : the compressed models need to cover different sizes and latencies , in order to support devices with different kinds of memory and latency constraints ; the compression is conducted on the pre-training stage so as to be downstream task agnostic . To this end , we propose NAS-BERT , which leverages neural architecture search ( NAS ) to automatically compress BERT models . We carefully design a search space that contains multi-head attention ( Vaswani et al. , 2017 ) , separable convolution ( Kaiser et al. , 2018 ) , feed-forward network and identity operations with different hidden sizes to find efficient models . In order to search models with adaptive sizes that satisfy diverse requirements of memory and latency constraints in different devices , we train a big supernet that contains all the candidate operations and architectures with weight sharing ( Bender et al. , 2018 ; Cai et al. , 2018 ; 2019 ; Yu et al. , 2020 ) . 
In order to reduce the laborious compressing on each downstream task , we directly train the big supernet and get the compressed model on the pre-training task to make it applicable across different downstream tasks . However , it is extremely expensive to directly perform architecture search in a big supernet on the heavy pre-training task . To improve the search efficiency and accuracy , we employ several techniques including block-wise search , search space pruning and performance approximation during the search process : ( 1 ) We adopt block-wise search ( Li et al. , 2020a ) to divide the supernet into blocks so that the size of the search space can be reduced exponentially . To train each block , we leverage a pre-trained teacher model , divide the teacher model into blocks similarly , and use the input and output hidden states of the corresponding teacher block as paired data for training . ( 2 ) To further reduce the search cost of each block ( even if block-wise search has greatly reduced the search space ) due to the heavy burden of the pre-training task , we propose progressive shrinking to dynamically prune the search space according to the validation loss during training . To ensure that architectures with different sizes and latencies can be retained during the pruning process , we further divide all the architectures in each block into several bins according to their model sizes and perform progressive shrinking in each bin . ( 3 ) We obtain the compressed models under specific constraints of memory and latency by assembling the architectures in every block using performance approximation , which can reduce the cost in model selection . We evaluate the models compressed by NAS-BERT on the GLUE benchmark ( Wang et al. , 2018 ) . The results show that NAS-BERT can find lightweight models with various sizes from 5M to 60M with better accuracy than that achieved by previous work . Our contributions of NAS-BERT can be summarized as follows : • We carefully design a search space that contains various architectures and different sizes , and apply NAS on the pre-training task to search for efficient lightweight models , which is able to deliver adaptive model sizes given different requirements of memory or latency and apply for different downstream tasks . • We further apply block-wise search , progressive shrinking and performance approximation to reduce the huge search cost and improve the search accuracy . • Experiments on the GLUE benchmark datasets demonstrate the effectiveness of NAS-BERT for model compression . 2 RELATED WORK . BERT Model Compression Recently , compressing pre-trained language models has been studied extensively and several techniques have been proposed such as knowledge distillation , pruning , weight factorization , quantization and so on . Existing works ( Tsai et al. , 2020 ; Sanh et al. , 2019 ; Sun et al. , 2019 ; Song et al. , 2020 ; Jiao et al. , 2019 ; Lan et al. , 2019 ; Zafrir et al. , 2019 ; Shen et al. , 2020 ; Wang et al. , 2019b ; Lan et al. , 2019 ; Zafrir et al. , 2019 ; Chen et al. , 2020 ) aim to compress the pre-trained model into a fixed size of the model and have achieved a trade-off between the small parameter size ( usually no more than 66M ) and the good performance . However , these compressed models can not be deployed in devices with different memory and latency constraints . Recent works ( Hou et al. 
, 2020 ) can deliver adaptive models for each specific downstream task and demonstrate the effectiveness of task-oriented compression . For practical deployment , however , it can be laborious to compress models for each task . Other works ( Fan et al. , 2019 ) can produce compressed models at the pre-training stage that directly generalize to downstream tasks , and allow for efficient pruning at inference time . However , they do not explore the potential of different architectures as in our work . Different from existing works , NAS-BERT aims for task-agnostic compression at the pre-training stage , which eliminates the laborious compression for each specific downstream task , and carefully designs the search space , which is capable of exploring the potential of different architectures and delivering various models given diverse memory and latency requirements . Neural Architecture Search for Efficient Models Many works have leveraged NAS to search for efficient models ( Liu et al. , 2018 ; Cai et al. , 2018 ; Howard et al. , 2019 ; Tan & Le , 2019 ; Cai et al. , 2019 ; Yu et al. , 2020 ; Wang et al. , 2020a ; Tsai et al. , 2020 ) . Most of them focus on computer vision tasks and rely on specific designs for the convolutional layers ( e.g. , inverted bottleneck convolution ( Howard et al. , 2019 ) or elastic kernel sizes ( Cai et al. , 2019 ; Yu et al. , 2020 ) ) . Among them , once-for-all ( Cai et al. , 2019 ) and BigNAS ( Yu et al. , 2020 ) train a big supernet that contains all the candidate architectures and can obtain a specialized sub-network by selecting from the supernet to support various requirements ( e.g. , model size and latency ) . HAT ( Wang et al. , 2020a ) also trains a supernet with adaptive widths and depths for machine translation tasks . Our proposed NAS-BERT also trains a big supernet . However , different from these methods , we target model compression for BERT at the pre-training stage , which is a more challenging task due to the large model size and huge pre-training cost . Therefore , we introduce several techniques including block-wise search , progressive shrinking , and performance approximation to reduce the training cost and improve the search efficiency . Tsai et al . ( 2020 ) apply one-shot NAS to search for a faster Transformer , but they can not deliver multiple architectures to meet various constraints for deployment . Different from Tsai et al . ( 2020 ) , NAS-BERT 1 ) progressively shrinks the search space to allocate more resources to promising architectures and thus can deliver various architectures without adding much computation ; 2 ) designs bins in the shrinking algorithm to guarantee that we can search for architectures that meet diverse memory and latency constraints ; and 3 ) explores novel architectures with convolution layers , multi-head attention , and feed-forward layers , and achieves better performance than previous works on BERT compression . 3 METHOD . In this section , we describe NAS-BERT , which conducts neural architecture search to find small , novel and accurate BERT models . To meet the requirements of deployment under different memory and latency constraints and across different downstream tasks , we 1 ) train a supernet with a novel search space that contains models of different sizes for various resource-restricted devices , and 2 ) directly search for the models on the pre-training task to make them generalizable to different downstream tasks .
The method can be divided into three steps : 1 ) search space design ( Section 3.1 ) ; 2 ) supernet training ( Section 3.2 ) ; 3 ) model selection ( Section 3.3 ) . Due to the huge cost of training the big supernet on the heavy pre-training task and selecting compressed models under specific constraints , we introduce several techniques including block-wise search , search space pruning and performance approximation in Sections 3.2 and 3.3 to reduce the search space and improve the search efficiency . 3.1 SEARCH SPACE DESIGN . A novel search space allows the potential of combinations of different operations , instead of simply stacking the basic Transformer block ( multi-head attention and feed-forward network ) as in the original BERT model . We adopt the chain-structured search space ( Elsken et al. , 2018 ) , and construct an over-parameterized supernet $A$ with $L$ layers , where each layer contains all candidate operations in $O = \{ o_1 , \cdots , o_C \}$ and $C$ is the number of predefined candidate operations . A residual connection is applied to each layer by adding the input to the output . There are $C^L$ possible paths ( architectures ) in the supernet , and a specific architecture $a = ( a_1 , \cdots , a_L )$ is a sub-net ( path ) in the supernet , where $a_l \in O$ is the operation in the $l$-th layer , as shown in Fig . 2 ( a ) . We adopt the weight-sharing mechanism that is widely used in NAS ( Bender et al. , 2018 ; Cai et al. , 2019 ) for efficient training , where each architecture ( path ) shares the same set of operations in each layer . We further describe each operation in $O$ as follows : 1 ) Multi-head attention ( MHA ) and feed-forward network ( FFN ) , which are the two basic operations in the Transformer and are popular in pre-training models ( in this way we can cover the BERT model as a sub-net in our supernet ) . 2 ) Separable convolution ( SepConv ) , whose effectiveness and efficiency in natural language processing tasks have been demonstrated by previous work ( Kaiser et al. , 2018 ; Karatzoglou et al. , 2020 ) . 3 ) Identity operation , which can support architectures with fewer than $L$ layers . The identity operation is regarded as a placeholder in a candidate architecture and can be removed to obtain a shallower network . More detailed considerations on choosing the operation set are in Appendix A.1 . Apart from different types of operations , to allow adaptive model sizes , each operation can have different hidden sizes : $\{ 128 , 192 , 256 , 384 , 512 \}$ . In this way , architectures in the search space can be of different depths and widths . The complete candidate operation set $O$ contains $( 1 + 1 + 3 ) \times 5 + 1 = 26$ operations , where the first product term represents the number of types of operations ( with 3 representing the SepConv with different kernel sizes $\{ 3 , 5 , 7 \}$ ) and the second product term represents the 5 different hidden sizes . We list the 26 operations in Table 1 . The detailed structure of the separable convolution is shown in Fig . 1 .
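For concreteness, the operation-set arithmetic above can be reproduced in a few lines of Python. The operation labels and the illustrative depth L below are hypothetical, while the count follows the paper's (1+1+3)*5+1 = 26 formula.

```python
from itertools import product

# Candidate operations: {MHA, FFN, SepConv k=3/5/7} x hidden sizes
# {128, 192, 256, 384, 512}, plus one identity operation.
op_types = ['MHA', 'FFN', 'SepConv3', 'SepConv5', 'SepConv7']   # 1 + 1 + 3 types
hidden_sizes = [128, 192, 256, 384, 512]

candidate_ops = [f'{t}-{h}' for t, h in product(op_types, hidden_sizes)]
candidate_ops.append('Identity')
assert len(candidate_ops) == (1 + 1 + 3) * 5 + 1 == 26

# A specific architecture is a path a = (a_1, ..., a_L) through the supernet,
# so an L-layer chain-structured search space contains C^L = 26^L paths.
L = 4  # illustrative depth; the actual supernet is deeper
print(f'{len(candidate_ops) ** L} candidate paths for L={L}')
```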
This paper proposes to search for architectures of the BERT model under various memory and latency constraints. The search is conducted by pre-training a big supernet that contains all the sub-network structures, from which the optimal models for different requirements are selected. Once an architecture is found, it is re-trained through pretraining-finetuning or two-stage distillation for each specific task. Several approaches (block-wise training and search, progressive shrinking, performance approximation) are proposed to improve the search efficiency. Experiments on the GLUE benchmark show that the models found by the proposed method achieve better accuracy than some of the previous compressed BERT models. The paper (together with the appendix) is clearly presented, and the idea is new and interesting to me. The experiments are detailed and comprehensive.
SP:adfae2d05cdf908663fa093cd58f0e8d50ab2d9a
Deep Learning Is Composite Kernel Learning
1 INTRODUCTION . The success of deep learning is attributed to feature learning . The conventional view is that feature learning happens in the hidden layers of a deep network : in the initial layers , simple low-level features are learnt , and sophisticated high-level features are learnt as one proceeds in depth . In this viewpoint , the penultimate layer output is the final hidden feature and the final layer learns a linear model with these hidden features . While this interpretation of feature learning is intuitive , beyond the first couple of layers it is hard to make any meaningful interpretation of what happens in the intermediate layers . Recent works ( Jacot et al. , 2018 ; Arora et al. , 2019 ; Cao and Gu , 2019 ) have provided a kernel learning interpretation for deep learning by showing that in the limit of infinite width deep learning becomes kernel learning . These works are based on neural tangents , wherein the gradients of the network output with respect to the network parameters , known as the neural tangent features ( NTFs ) , are considered as the features . Arora et al . ( 2019 ) show that at randomised initialisation of weights , the kernel matrix associated with the NTFs , known as the neural tangent kernel ( NTK ) , converges to a deterministic matrix , and that the optimisation and generalisation of infinite width deep neural networks is characterised by this deterministic kernel matrix . Cao and Gu ( 2019 ) provided generalisation bounds in terms of the NTK matrix . Arora et al . ( 2019 ) also proposed a pure-kernel method based on the CNTK ( the NTK of convolutional neural networks , i.e. , CNNs ) which significantly outperformed the previous state-of-the-art kernel methods . The NTK , either as an interpretation or as a method in itself , has been very successful . Nevertheless , it has some open issues , namely i ) non-interpretability : the kernel is the inner product of gradients and has no physical interpretation , ii ) no feature learning : the NTFs are random and fixed during training , and iii ) a performance gap : a finite width CNN outperforms the infinite width CNTK , i.e. , the NTK does not fully explain the success of deep learning . Recently , Lakshminarayanan and Singh ( 2020 ) developed a neural path ( NP ) framework to provide a kernel interpretation for deep learning that addresses the open issues in the current NTK framework . Here , DNNs with ReLU activations are considered , and the gates ( the on/off states of the ReLUs ) are encoded in the so-called neural path feature ( NPF ) and the weights in the network in the so-called neural path value ( NPV ) . The key findings can be broken into the following steps . Step 1 : The NPFs and NPV are decoupled . Gates are treated as masks , which are held in a separate feature network and applied to the main network , called the value network . This enables one to study various kinds of gates ( i.e. , NPFs ) , such as random gates ( of a randomly initialised network ) , semi-learnt gates ( sampled at an intermediate epoch during training ) , and learnt gates ( sampled from a fully trained network ) . This addresses the feature learning issue . Step 2 : When the gates/masks are decoupled and applied externally , it follows that NTK $=$ const $\times$ NPK at random initialisation of weights . For a pair of input examples , the NPK1 is a similarity measure that depends on the size of the sub-network formed by the gates that are active simultaneously for the examples . 1 Introduced for the first time in the work of Lakshminarayanan and Singh ( 2020 ) .
This addresses the interpretability issue . Step 3 : The CNTK performs better than random gates/masks , and gates/masks from fully trained networks perform better than the CNTK . This explains the performance gap between CNN and CNTK . It was also observed ( on standard datasets ) that when learnt gates/masks are used , the weights of the value network can be reset and re-trained from scratch without significant loss of performance . 1.1 CONTRIBUTIONS IN THIS WORK . We attribute the success of deep learning to the following two key ingredients : ( i ) a composite kernel with gates as fundamental building blocks and ( ii ) allowing the gates to learn/adapt during training . Formally , we extend the NP framework of Lakshminarayanan and Singh ( 2020 ) as explained below . • Composite Kernel : The NPK matrix has a composite structure ( architecture dependent ) . 1 . Fully-connected networks : $H^{fc}$ is the Hadamard product of the input data Gram matrix and the kernel matrices corresponding to the binary gating features of the individual layers . 2 . Residual networks ( ResNets ) with skip connections : $H^{res}$ assumes a sum-of-products form . In particular , consider a ResNet with $( b + 2 )$ blocks and $b$ skip connections . Within this ResNet there are $2^b$ possible dense networks ( indexed by $i = 1 , \dots , 2^b$ ) , and then $H^{res} = \sum_{i=1}^{2^b} C_i H^{fc}_i$ , where $C_i > 0$ are positive constants based on the normalisation layers . 3 . Convolutional neural networks ( CNN ) with pooling : $H^{cnn}$ is rotation invariant . • Gate Learning : We show that learnt gates perform better than random gates . Starting with the setup of Lakshminarayanan and Singh ( 2020 ) , we build combinatorially many models by , 1 . permuting the order of the layers when we apply them as external masks , 2 . having two types of modes based on the input provided to the value network , namely i ) ' standard ' : the input is the actual image , and ii ) ' all-ones ' : the input is a tensor with all entries equal to ' 1 ' . We observe in our experiments that the performance is robust to such combinatorial variations . Message : This work , along with that of Lakshminarayanan and Singh ( 2020 ) , provides a paradigm shift in understanding deep learning . Here , gates play a central role . Each gate is related to a hyperplane , and gates together form layer-level binary features whose kernels are the base kernels . Laying out these binary features depth-wise gives rise to a product of the base kernels . The skip connections give a ' sum of products ' structure , and convolution with pooling gives rotation invariance . Organisation : Section 2 describes the network architectures , namely fully-connected , convolutional and residual , which we take up for theoretical analysis . Section 3 extends the neural path framework to CNN and ResNet . Section 4 explains the composite kernel . Section 5 connects the NTK and NPK for CNN and ResNet . Section 6 consists of numerical experiments . 2 ARCHITECTURES : FULLY CONNECTED , CONVOLUTIONAL AND RESIDUAL . In this section , we present the three architectures that we take up for theoretical analysis . These are i ) fully connected ( FC or FC-DNN ) , ii ) convolutional ( CNN ) and iii ) residual ( ResNets ) . In what follows , $[ n ]$ is the set $\{ 1 , \dots , n \}$ , and the dataset is given by $( x_s , y_s )_{s=1}^{n} \in \mathbb{R}^{d_{in}} \times \mathbb{R}$ . Fully Connected : We consider fully connected networks with width ' $w$ ' and depth ' $d$ ' . CNN : We consider a 1-dimensional convolutional neural network with circular convolutions ( see Table 2 ) , with $d_{cv}$ convolutional layers ( $l = 1 , \dots , d_{cv}$ ) , followed by a global-average/max-pooling layer ( $l = d_{cv} + 1$ ) and $d_{fc}$ FC layers ( $l = d_{cv} + 2 , \dots , d_{cv} + d_{fc} + 1$ ) . The convolutional window size is $w_{cv} < d_{in}$ , the number of filters per convolutional layer is $w$ , and the width of the FC layers is also $w$ . Definition 2.1 ( Circular Convolution ) . For $x \in \mathbb{R}^{d_{in}}$ , $i \in [ d_{in} ]$ and $r \in \{ 0 , \dots , d_{in} - 1 \}$ , define : ( i ) $i \oplus r = i + r$ , for $i + r \leq d_{in}$ , and $i \oplus r = i + r - d_{in}$ , for $i + r > d_{in}$ . ( ii ) $\mathrm{rot} ( x , r ) ( i ) = x ( i \oplus r )$ , $i \in [ d_{in} ]$ . ( iii ) $q_{x,\Theta} ( i_{fout} , i_{out} , l ) = \sum_{i_{cv} , i_{in}} \Theta ( i_{cv} , i_{in} , i_{out} , l ) \cdot z_{x,\Theta} ( i_{fout} \oplus ( i_{cv} - 1 ) , i_{in} , l - 1 )$ , where $i_{in}/i_{out}$ are the indices ( taking values in $[ w ]$ ) of the input/output filters , $i_{cv}$ denotes the indices of the convolutional window ( taking values in $[ w_{cv} ]$ ) between input and output filters $i_{in}$ and $i_{out}$ , and $i_{fout}$ denotes the indices ( taking values in $[ d_{in} ]$ , the dimension of input features ) of individual nodes in a given output filter . Definition 2.2 ( Pooling ) . Let $G^{pool}_{x,\Theta} ( i_{fout} , i_{out} , d_{cv} + 1 )$ denote the pooling mask ; then we have $$z_{x,\Theta} ( i_{out} , d_{cv} + 1 ) = \sum_{i_{fout}} z_{x,\Theta} ( i_{fout} , i_{out} , d_{cv} ) \cdot G^{pool}_{x,\Theta} ( i_{fout} , i_{out} , d_{cv} + 1 ) ,$$ where in the case of global-average-pooling $G^{pool}_{x,\Theta} ( i_{fout} , i_{out} , d_{cv} + 1 ) = \frac{1}{d_{in}} , \forall i_{out} \in [ w ] , i_{fout} \in [ d_{in} ]$ , and in the case of max-pooling , for a given $i_{out} \in [ w ]$ , $G^{pool}_{x,\Theta} ( i_{max} , i_{out} , d_{cv} + 1 ) = 1$ where $i_{max} = \arg\max_{i_{fout}} z_{x,\Theta} ( i_{fout} , i_{out} , d_{cv} )$ , and $G^{pool}_{x,\Theta} ( i_{fout} , i_{out} , d_{cv} + 1 ) = 0 , \forall i_{fout} \neq i_{max}$ . ResNet : We consider ResNets with ' $( b + 2 )$ ' blocks and ' $b$ ' skip connections between the blocks ( Figure 1 ) . Each block is a FC-DNN of depth ' $d_{blk}$ ' and width ' $w$ ' . Here , $\mathrm{pre}_i , \mathrm{post}_i , i \in [ b ]$ are normalisation variables . Definition 2.3 ( Sub FC-DNNs ) . Let $2^{[ b ]}$ denote the power set of $[ b ]$ and let $J \in 2^{[ b ]}$ denote any subset of $[ b ]$ . Define the ' $J$ th ' sub-FC-DNN of the ResNet to be the fully connected network obtained by ignoring/removing the skip connections $\mathrm{skip}_j , \forall j \in J$ ( see Figure 1 ) . 3 NEURAL PATH FRAMEWORK . In this section , we extend the neural path framework developed by LS2020 to the CNN and ResNet architectures described in the previous section . The neural path framework exploits the gating property of the ReLU activation , which can be thought of as a gate/mask that blocks/allows its pre-activation input depending on its 0/1 state ( 0 if the pre-activation is negative and 1 if the pre-activation is positive ) . The key idea here is to break a DNN ( with ReLU ) into paths , and express its output as a summation of the contributions of the paths . The contribution of a path is the product of the signal at its input node , the weights in the path and the gates in the path . For a DNN with $P$ paths and an input $x \in \mathbb{R}^{d_{in}}$ , the gating information is encoded in a novel neural path feature ( NPF ) , $\phi_{x,\Theta} \in \mathbb{R}^P$ , and a novel neural path value ( NPV ) , $v_\Theta \in \mathbb{R}^P$ , encodes the weights . The output of the DNN is then the inner product of the NPFs and NPVs , i.e. , $\hat{y}_\Theta ( x_s ) = \langle \phi_{x_s,\Theta} , v_\Theta \rangle$ ( Proposition 3.4 ) . Definition 3.1 . A path starts from an input node , passes through weights , hidden nodes , and normalisation constants and ends at the output node . Proposition 3.1 . The total numbers of paths in FC-DNN , CNN and ResNet are respectively given by $P^{fc} = d_{in} w^{( d - 1 )}$ , $P^{cnn} = d_{in} ( w_{cv} w )^{d_{cv}} w^{( d_{fc} - 1 )}$ and $P^{res} = d_{in} \cdot \sum_{i=0}^{b} \binom{b}{i} w^{( i + 2 ) d_{blk} - 1}$ . Notation 3.1 ( Index Maps ) .
The ranges of the index maps $I^f_l , I^{cv}_l , I_l$ are $[ d_{in} ] , [ w_{cv} ]$ and $[ w ]$ respectively . The index maps are used to identify the nodes through which a path $p$ passes . Further , let $I_J ( p ) : [ P^{res} ] \to 2^{[ b ]}$ specify the indices of the skip connections ignored in path $p$ . Also , we follow the convention that the weights and gating values of layers corresponding to skipped blocks are 1 . Definition 3.2 ( Path Activity ) . The product of the gating values in a path $p$ is its ' activity ' , denoted by $A_\Theta ( x , p )$ . We define : ( a ) $A_\Theta ( x , p ) = \prod_{l=1}^{d-1} G_{x,\Theta} ( I_l ( p ) , l )$ , for FC-DNN and ResNet . ( b ) $A_\Theta ( x , p ) = \prod_{l=1}^{d_{cv}+1} G_{x,\Theta} ( I^f_l ( p ) , I_l ( p ) , l ) \cdot \prod_{l=d_{cv}+2}^{d_{cv}+d_{fc}+1} G_{x,\Theta} ( I_l ( p ) , l )$ , for CNN . In CNN , the pooling layer is accounted for by letting $G = G^{pool}$ for $l = d_{cv} + 1$ . Definition 3.3 ( Bundles of Paths Sharing Weights ) . Let $\hat{P}^{cnn} = \frac{P^{cnn}}{d_{in}}$ , and let $\{ B_1 , \dots , B_{\hat{P}^{cnn}} \}$ be a collection of sets such that $\forall i , j \in [ \hat{P}^{cnn} ] , i \neq j$ we have $B_i \cap B_j = \emptyset$ and $\cup_{i=1}^{\hat{P}^{cnn}} B_i = [ P^{cnn} ]$ . Further , if paths $p , p' \in B_i$ , then $I^{cv}_l ( p ) = I^{cv}_l ( p' ) , \forall l = 1 , \dots , d_{cv}$ and $I_l ( p ) = I_l ( p' ) , \forall l = 0 , \dots , d_{cv}$ . Proposition 3.2 . There are exactly $d_{in}$ paths in a bundle . Definition 3.4 ( Normalisation Factor ) . Define $\Gamma ( J ) = \prod_{j \in J} \mathrm{pre}_j \cdot \prod_{j' \in [ b ]} \mathrm{post}_{j'}$ . Weight sharing is shown in the cartoon in Figure 2 , which shows a CNN with $d_{in} = 3 , w = 1 , w_{cv} = 2 , d_{cv} = 3 , d_{fc} = 0$ . Here , the red coloured paths all share the same weights $\Theta ( 1 , 1 , 1 , l ) , l = 1 , 2 , 3$ and the blue coloured paths all share the same weights given by $\Theta ( 2 , 1 , 1 , l ) , l = 1 , 2 , 3$ . Definition 3.5 ( Neural Path Value ) . The product of the weights and normalisation factors in a path $p$ is its ' value ' . The value of a path bundle is the value of any path in that bundle . The path/bundle values are denoted by $v_\Theta ( p ) / v_\Theta ( B_{\hat{p}} )$ and are defined as follows : ( a ) $v_\Theta ( p ) = \prod_{l=1}^{d} \Theta ( I_{l-1} ( p ) , I_l ( p ) , l )$ . ( b ) $v_\Theta ( B_{\hat{p}} ) = \prod_{l=1}^{d_{cv}} \Theta ( I^{cv}_l ( p ) , I_{l-1} ( p ) , I_l ( p ) , l ) \cdot \prod_{l=d_{cv}+2}^{d_{cv}+d_{fc}+1} \Theta ( I_{l-1} ( p ) , I_l ( p ) , l )$ , for any $p \in B_{\hat{p}}$ . ( c ) $v_\Theta ( p ) = \prod_{l=1}^{d} \Theta ( I_{l-1} ( p ) , I_l ( p ) , l ) \cdot \Gamma ( I_J ( p ) )$ . The neural path value is defined as $v_\Theta = ( v_\Theta ( p ) , p \in [ P^{fc} ] ) \in \mathbb{R}^{P^{fc}}$ , $v_\Theta = ( v_\Theta ( B_{\hat{p}} ) , \hat{p} \in [ \hat{P}^{cnn} ] ) \in \mathbb{R}^{\hat{P}^{cnn}}$ , and $v_\Theta = ( v_\Theta ( p ) , p \in [ P^{res} ] ) \in \mathbb{R}^{P^{res}}$ for FC-DNN , CNN and ResNet respectively . Proposition 3.3 ( Rotational Invariance ) . Internal variables in the convolutional layers are circularly symmetric , i.e. , for $r \in \{ 0 , \dots , d_{in} - 1 \}$ it follows that ( i ) $z_{\mathrm{rot} ( x , r ) , \Theta} ( i_{fout} , \cdot , \cdot ) = z_{x,\Theta} ( i_{fout} \oplus r , \cdot , \cdot )$ , ( ii ) $q_{\mathrm{rot} ( x , r ) , \Theta} ( i_{fout} , \cdot , \cdot ) = q_{x,\Theta} ( i_{fout} \oplus r , \cdot , \cdot )$ and ( iii ) $G_{\mathrm{rot} ( x , r ) , \Theta} ( i_{fout} , \cdot , \cdot ) = G_{x,\Theta} ( i_{fout} \oplus r , \cdot , \cdot )$ . Definition 3.6 . The neural path feature ( NPF ) corresponding to a path $p$ is given by ( a ) $\phi_{x,\Theta} ( p ) = x ( I^f_0 ( p ) ) A_\Theta ( x , p )$ for FC-DNN and ResNet . ( b ) $\phi_{x,\Theta} ( \hat{p} ) = \sum_{p \in B_{\hat{p}}} x ( I^f_0 ( p ) ) A_\Theta ( x , p )$ for CNN . The NPF is defined as $\phi_{x,\Theta} = ( \phi_{x,\Theta} ( p ) , p \in [ P^{fc} ] ) \in \mathbb{R}^{P^{fc}}$ , $\phi_{x,\Theta} = ( \phi_{x,\Theta} ( B_{\hat{p}} ) , \hat{p} \in [ \hat{P}^{cnn} ] ) \in \mathbb{R}^{\hat{P}^{cnn}}$ , and $\phi_{x,\Theta} = ( \phi_{x,\Theta} ( p ) , p \in [ P^{res} ] ) \in \mathbb{R}^{P^{res}}$ for FC-DNN , CNN and ResNet respectively . Proposition 3.4 ( Output $= \langle$ NPF , NPV $\rangle$ ) . The output of the network can be written as an inner product of the NPF and NPV , i.e. , $\hat{y}_\Theta ( x ) = \langle \phi_{x,\Theta} , v_\Theta \rangle$ .
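Proposition 3.4 can be checked numerically on a toy example. The NumPy sketch below enumerates the P^fc = d_in * w paths of a one-hidden-layer ReLU network (d = 2 weight layers, no biases) and verifies that the forward-pass output equals the inner product of NPF and NPV; the tiny sizes and random weights are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Toy check: network output equals <NPF, NPV>, where NPF(p) = input signal
# times path activity (product of gates on p) and NPV(p) = product of weights on p.
rng = np.random.default_rng(0)
d_in, w = 3, 4
W1 = rng.normal(size=(w, d_in))    # input -> hidden weights
W2 = rng.normal(size=(1, w))       # hidden -> output weights

x = rng.normal(size=d_in)
h = W1 @ x
g = (h > 0).astype(float)          # ReLU gates of the hidden layer
y = W2 @ (g * h)                   # standard forward pass

# Enumerate all P = d_in * w paths: input node i -> hidden node j -> output.
npf = np.array([x[i] * g[j] for i in range(d_in) for j in range(w)])
npv = np.array([W1[j, i] * W2[0, j] for i in range(d_in) for j in range(w)])
assert np.allclose(y, npf @ npv)   # Proposition 3.4 holds on this example
```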
This paper builds on recent work characterising deep neural networks in terms of Neural Tangent Kernels and Neural Path Features. Over the past few years, a number of papers have developed the theory of Neural Tangent Kernels, which can be used to interpret infinite width deep neural networks in the context of a particular type of kernel. A recent paper (Lakshminarayanan and Singh, NeurIPS 2020) provided a new perspective on Neural Tangent Kernels for Gated Neural Networks, by decomposing the network into independent paths. For a fixed set of network weights, we can consider each path to give rise to a feature, corresponding to whether this path is active (i.e., is not switched off by one of the gates on the path). Then, the output of the neural network can be viewed as a weighted sum of active paths, equivalently the dot product of the neural path feature vector and a neural path value vector. Lakshminarayanan and Singh showed that under certain assumptions, a kernel defined in terms of the neural path feature is approximately equal to the neural tangent kernel (up to a constant). Specifically, they show that the value of the neural tangent kernel matrix tends to a constant multiple of the neural path kernel matrix as the width of the network goes to infinity. This suggests that the key component in a deep neural network with RELU activations is the gating structure, which defines active subnetworks, as opposed to the values.
SP:55f630e6b41243dfe92ea4269bb1a1e6e8109974
Return-Based Contrastive Representation Learning for Reinforcement Learning
1 INTRODUCTION . Deep reinforcement learning ( RL ) algorithms can learn representations from high-dimensional inputs , and simultaneously learn policies based on such representations to maximize long-term returns . However , deep RL algorithms typically require large numbers of samples , which can be quite expensive to obtain ( Mnih et al. , 2015 ) . In contrast , it is usually much more sample efficient to learn policies with learned representations/extracted features ( Srinivas et al. , 2020 ) . To this end , various auxiliary tasks have been proposed to accelerate representation learning in aid of the main RL task ( Suddarth and Kergosien , 1990 ; Sutton et al. , 2011 ; Gelada et al. , 2019 ; Bellemare et al. , 2019 ; François-Lavet et al. , 2019 ; Shen et al. , 2020 ; Zhang et al. , 2020 ; Dabney et al. , 2020 ; Srinivas et al. , 2020 ) . Representative examples of auxiliary tasks include predicting the future in either the pixel space or the latent space with reconstruction-based losses ( e.g. , Jaderberg et al. , 2016 ; Hafner et al. , 2019a ; b ) . Recently , contrastive learning has been introduced to construct auxiliary tasks and achieves better performance compared to reconstruction-based methods in accelerating RL algorithms ( Oord et al. , 2018 ; Srinivas et al. , 2020 ) . Without the need to reconstruct inputs such as raw pixels , contrastive learning based methods can ignore irrelevant features such as the static background in games and learn more compact representations . Oord et al . ( 2018 ) propose a contrastive representation learning method based on the temporal structure of the state sequence . Srinivas et al . ( 2020 ) propose to leverage prior knowledge from computer vision , learning representations that are invariant to image augmentation . However , existing works mainly construct contrastive auxiliary losses in an unsupervised manner , without considering the feedback signals in RL problems as supervision . In this paper , we take a further step and leverage the return feedback to design a contrastive auxiliary loss to accelerate RL algorithms . Specifically , we propose a novel method called Return-based Contrastive representation learning for Reinforcement Learning ( RCRL ) .∗ In our method , given an anchor state-action pair , we choose a state-action pair with the same or similar return as the positive sample , and a state-action pair with a different return as the negative sample . Then , we train a discriminator to classify between positive and negative samples given the anchor , based on their representations , as the auxiliary task . The intuition here is to learn state-action representations that capture return-relevant features while ignoring return-irrelevant features . From a theoretical perspective , RCRL is supported by a novel state-action abstraction , called $Z^\pi$-irrelevance . $Z^\pi$-irrelevance abstraction aggregates state-action pairs with similar return distributions under a certain policy $\pi$ . We show that $Z^\pi$-irrelevance abstraction can reduce the size of the state-action space ( cf . Appendix A ) as well as approximate the Q values arbitrarily accurately ( cf . Section 4.1 ) . We further propose a method called Z-learning that can calculate the $Z^\pi$-irrelevance abstraction with sampled returns rather than the return distribution , which is hardly available in practice . Z-learning can learn the $Z^\pi$-irrelevance abstraction provably efficiently . ∗ This work was conducted at Microsoft Research Asia . The first two authors contributed equally to this work .
Our algorithm RCRL can be seen as the empirical version of Z-learning by making a few approximations such as integrating with deep RL algorithms , and collecting positive pairs within a consecutive segment in a trajectory of the anchors . We conduct experiments on Atari games ( Bellemare et al. , 2013 ) and DeepMind Control suite ( Tassa et al. , 2018 ) in low data regime . The experiment results show that our auxiliary task combined with Rainbow ( Hessel et al. , 2017 ) for discrete control tasks or SAC ( Haarnoja et al. , 2018 ) for continuous control tasks achieves superior performance over other state-of-the-art baselines for this regime . Our method can be further combined with existing unsupervised contrastive learning methods to achieve even better performance . We also perform a detailed analysis on how the representation changes during training with/without our auxiliary loss . We find that a good embedding network assigns similar/dissimilar representations to state-action pairs with similar/dissimilar return distributions , and our algorithm can boost such generalization and speed up training . Our contributions are summarized as follows : • We introduce a novel contrastive loss based on return , to learn return-relevant representations and speed up deep RL algorithms . • We theoretically build the connection between the contrastive loss and a new form of stateaction abstraction , which can reduce the size of the state-action space as well as approximate the Q values arbitrarily accurately . • Our algorithm achieves superior performance against strong baselines in Atari games and DeepMind Control suite in low data regime . Besides , the performance can be further enhanced when combined with existing auxiliary tasks . 2 RELATED WORK . 2.1 AUXILIARY TASK . In reinforcement learning , the auxiliary task can be used for both the model-based setting and the model-free setting . In the model-based settings , world models can be used as auxiliary tasks and lead to better performance , such as CRAR ( François-Lavet et al. , 2019 ) , Dreamer ( Hafner et al. , 2019a ) , and PlaNet ( Hafner et al. , 2019b ) . Due to the complex components ( e.g. , the latent transition or reward module ) in the world model , such methods are empirically unstable to train and relies on different regularizations to converge . In the model-free settings , many algorithms construct various auxiliary tasks to improve performance , such as predicting the future ( Jaderberg et al. , 2016 ; Shelhamer et al. , 2016 ; Guo et al. , 2020 ; Lee et al. , 2020 ; Mazoure et al. , 2020 ) , learning value functions with different rewards or under different policies ( Veeriah et al. , 2019 ; Schaul et al. , 2015 ; Borsa et al. , 2018 ; Bellemare et al. , 2019 ; Dabney et al. , 2020 ) , learning from many-goals ( Veeriah et al. , 2018 ) , or the combination of different auxiliary objectives ( de Bruin et al. , 2018 ) . Moreover , auxiliary tasks can be designed based on the prior knowledge about the environment ( Mirowski et al. , 2016 ; Shen et al. , 2020 ; van der Pol et al. , 2020 ) or the raw state representation ( Srinivas et al. , 2020 ) . Hessel et al . ( 2019 ) also apply auxiliary task to the multi-task RL setting . Contrastive learning has seen dramatic progress recently , and been introduced to learn state representation ( Oord et al. , 2018 ; Sermanet et al. , 2018 ; Dwibedi et al. , 2018 ; Aytar et al. , 2018 ; Anand et al. , 2019 ; Srinivas et al. , 2020 ) . Temporal structure ( Sermanet et al. 
, 2018 ; Aytar et al. , 2018 ) and local spatial structure ( Anand et al. , 2019 ) have been leveraged for state representation learning via contrastive losses . CPC ( Oord et al. , 2018 ) and CURL ( Srinivas et al. , 2020 ) adopt contrastive auxiliary tasks to accelerate representation learning and speed up the main RL tasks , by leveraging the temporal structure and image augmentation respectively . To the best of our knowledge , we are the first to leverage the return to construct a contrastive auxiliary task for speeding up the main RL task . 2.2 ABSTRACTION . State abstraction ( or state aggregation ) aggregates states by ignoring irrelevant state information . By reducing the state space , state abstraction can enable efficient policy learning . Different types of abstraction have been proposed in the literature , ranging from fine-grained to coarse-grained abstraction , each reducing the state space to a different extent . Bisimulation or model irrelevance ( Dean and Givan , 1997 ; Givan et al. , 2003 ) defines state abstractions under which both the transition and reward function are kept invariant . By contrast , other types of state abstraction are coarser than bisimulation , such as $Q^\pi$ irrelevance or $Q^*$ irrelevance ( see e.g. , Li et al. , 2006 ) , which keep the Q function invariant under any policy $\pi$ or under the optimal policy respectively . There are also some works on state-action abstractions , e.g. , MDP homomorphism ( Ravindran , 2003 ; Ravindran and Barto , 2004a ) and approximate MDP homomorphism ( Ravindran and Barto , 2004b ; Taylor et al. , 2009 ) , which are similar to bisimulation in keeping the reward and transition invariant , but extend bisimulation from state abstraction to state-action abstraction . In this paper , we consider a new form of state-action abstraction , $Z^\pi$-irrelevance , which aggregates state-action pairs with the same return distribution and is coarser than bisimulation or homomorphism , which are frequently used as auxiliary tasks ( e.g. , Biza and Platt , 2018 ; Gelada et al. , 2019 ; Zhang et al. , 2020 ) . However , it is worth noting that $Z^\pi$-irrelevance is only used to build the theoretical foundation of our algorithm , and to show that our proposed auxiliary task is well-aligned with the main RL task . Representation learning in deep RL is in general very different from aggregating states in the tabular case , though the latter may build a nice theoretical foundation for the former . Here we focus on how to design auxiliary tasks to accelerate representation learning using contrastive learning techniques , and we propose a novel return-based contrastive method based on our proposed $Z^\pi$-irrelevance abstraction . 3 PRELIMINARY . We consider a Markov Decision Process ( MDP ) , which is a tuple $( S , A , P , R , \mu , \gamma )$ specifying the state space $S$ , the action space $A$ , the state transition probability $P ( s_{t+1} | s_t , a_t )$ , the reward $R ( r_t | s_t , a_t )$ , the initial state distribution $\mu \in \Delta_S$ and the discount factor $\gamma$ . Also , we denote $x := ( s , a ) \in X := S \times A$ to be a state-action pair . A ( stationary ) policy $\pi : S \to \Delta_A$ specifies the action selection probability for each state . Following the policy $\pi$ , the discounted sum of future rewards ( or return ) is denoted by the random variable $Z^\pi ( s , a ) = \sum_{t=0}^{\infty} \gamma^t R ( s_t , a_t )$ , where $s_0 = s$ , $a_0 = a$ , $s_t \sim P ( \cdot | s_{t-1} , a_{t-1} )$ , and $a_t \sim \pi ( \cdot | s_t )$ . We divide the range of the return into $K$ equal bins $\{ R_0 = R_{min} , R_1 , \cdots , R_K = R_{max} \}$ such that $R_k - R_{k-1} = ( R_{max} - R_{min} ) / K , \forall k \in [ K ]$ , where $R_{min}$ ( resp . $R_{max}$ ) is the minimum ( resp .
maximum ) possible return , and $[ K ] := \{ 1 , 2 , \cdots , K \}$ . We use $b ( R ) = k \in [ K ]$ to denote the event that $R$ falls into the $k$th bin , i.e. , $R_{k-1} < R \leq R_k$ . Hence , $b ( R )$ can be viewed as the discretized version of the return , and the distribution of the discretized return can be represented by a $K$-dimensional vector $Z^\pi ( x ) \in \Delta^K$ , whose $k$-th element equals $\Pr [ R_{k-1} < Z^\pi ( x ) \leq R_k ]$ . The Q function is defined as $Q^\pi ( x ) = \mathbb{E} [ Z^\pi ( x ) ]$ , and the state value function is defined as $V^\pi ( s ) = \mathbb{E}_{a \sim \pi ( \cdot | s )} [ Z^\pi ( s , a ) ]$ . The objective of RL is to find a policy $\pi$ that maximizes the expected cumulative reward $J ( \pi ) = \mathbb{E}_{s \sim \mu} [ V^\pi ( s ) ]$ . We denote the optimal policy by $\pi^*$ and the corresponding optimal Q function by $Q^* := Q^{\pi^*}$ .
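To make the return-based pair construction concrete, here is a hedged NumPy sketch that discretizes sampled returns into K bins and forms (anchor, positive, negative) index triples for the discriminator. The function names and batch format are hypothetical; the paper's actual implementation additionally draws positives from consecutive trajectory segments, which this sketch omits.

```python
import numpy as np

def return_bin(R, r_min, r_max, K):
    """Discretize a return into one of K equal bins, b(R) in {1, ..., K}."""
    k = int(np.ceil((R - r_min) / ((r_max - r_min) / K)))
    return min(max(k, 1), K)

def make_contrastive_pairs(batch, r_min, r_max, K, rng):
    """For each anchor (s, a, R): a positive shares the anchor's return bin,
    a negative comes from a different bin."""
    bins = [return_bin(R, r_min, r_max, K) for (_, _, R) in batch]
    triples = []
    for i, b in enumerate(bins):
        pos = [j for j in range(len(batch)) if j != i and bins[j] == b]
        neg = [j for j in range(len(batch)) if bins[j] != b]
        if pos and neg:
            triples.append((i, rng.choice(pos), rng.choice(neg)))
    return triples  # (anchor, positive, negative) indices for the discriminator

rng = np.random.default_rng(0)
returns = rng.uniform(0.0, 10.0, size=16)        # sampled returns of 16 (s, a) pairs
batch = [(f's{i}', f'a{i}', R) for i, R in enumerate(returns)]
print(make_contrastive_pairs(batch, 0.0, 10.0, K=4, rng=rng))
```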
The authors propose the inclusion of an auxiliary task for training an RL model, where the auxiliary task objective is to learn an abstraction of the state-action space that clusters (s,a) pairs according to their expected return. The authors first describe a basic abstraction learning framework (Z-learning) followed by the extension to Deep RL as an auxiliary task (RCRL). The authors present results in Atari (discrete action) building on Rainbow, showing an improvement compared to baselines on median HNS in the low-data regime, and results on DMControl (continuous action) building on SAC, showing similar or improved performance compared to baselines.
SP:e17f92caae3e2bd4830eadeb4b268c1c82d43e4d
Adaptive Hierarchical Hyper-gradient Descent
1 INTRODUCTION . The basic optimization algorithm for training deep neural networks is the gradient descent method ( GD ) , which includes stochastic gradient descent ( SGD ) , mini-batch gradient descent , and batch gradient descent . The model parameters are updated according to the first-order gradients of the empirical risks with respect to the parameters being optimized , while back-propagation is implemented for calculating the gradients of parameters ( Ruder , 2016 ) . Naïve gradient descent methods apply fixed learning rates without any adaptation mechanisms . However , considering the change of available information during the learning process , SGD with fixed learning rates can result in inefficiency and requires a large amount of computing resources in hyper-parameter searching . One solution is to introduce a learning rate adaptation . This idea can be traced back to the work on gain adaptation for connectionist learning methods ( Sutton , 1992 ) and related extensions for non-linear cases ( Schraudolph , 1999 ; Yu et al. , 2006 ) . In recent years , optimizers with adaptive updating rules were developed in the context of deep learning , while the learning rates are still fixed in training . The proposed methods include AdaGrad ( Duchi et al. , 2011 ) , RMSProp ( Tieleman and Hinton , 2012 ) , and Adam ( Kingma and Ba , 2015 ) . In addition , there are optimizers aiming to address the convergence issue in Adam ( Reddi et al. , 2018 ; Luo et al. , 2018 ) and to rectify the variance of the adaptive learning rate ( Liu et al. , 2019 ) . Other techniques , such as Lookahead , can also achieve variance reduction and stability improvement with negligible extra computational cost ( Zhang et al. , 2019 ) . Even though the adaptive optimizers with fixed learning rates can converge faster than SGD in a wide range of tasks , the updating rules are designed manually while more hyper-parameters are introduced . Another idea is to use objective function information and update the learning rates as trainable parameters . These methods were introduced as automatic differentiation , where the hyper-parameters can be optimized with backpropagation ( Maclaurin et al. , 2015 ; Baydin et al. , 2018 ) . As gradient-based hyper-parameter optimization methods , they can be implemented as an online approach ( Franceschi et al. , 2017 ) . With the idea of auto-differentiation , learning rates can be updated in real-time with the corresponding derivatives of the empirical risk ( Almeida et al. , 1998 ) , which can be generated to all types of optimizers for deep neural networks ( Baydin et al. , 2017 ) . Another step size adaptation approach called “ L4 ” , is based on the linearized expansion of the loss functions , which rescales the gradient to make fixed predicted progress on the loss ( Rolinek and Martius , 2018 ) . Furthermore , by addressing the issue of poor generalization performance of adaptive methods , dynamically bound for gradient methods was introduced to build a gradual transition between adaptive approach and SGD ( Luo et al. , 2018 ) . Another set of approaches train an RNN ( recurrent neural network ) agent to generate the optimal learning rates in the next step given the historical training information , known as “ learning to learn ” ( Andrychowicz et al. , 2016 ) . This approach empirically outperforms hand-designed optimizers in a variety of learning tasks , but another study has shown that it may not be effective for long horizons ( Lv et al. , 2017 ) . 
The generalization ability of this approach can be improved by using meta training samples and hierarchical LSTMs ( long short-term memory ) ( Wichrowska et al. , 2017 ) . Beyond adaptive learning rates , learning rate schedules can also improve the convergence of optimizers , including time-based decay , step decay , and exponential decay ( Li and Arora , 2019 ) . The most fundamental and widely applied one is the piece-wise step-decay learning rate schedule , which can vastly improve the convergence of SGD and even of adaptive optimizers ( Luo et al. , 2018 ; Liu et al. , 2019 ) . It can be further improved by introducing a statistical test to determine when to apply step-decay ( Lang et al. , 2019 ; Zhang et al. , 2020 ) . Also , there are works on warm-restarts ( O'donoghue and Candes , 2015 ; Loshchilov and Hutter , 2017 ) , which can improve the anytime performance of SGD when training deep neural networks . We find that the existing gradient- or model-based learning rate adaptation methods , including hyper-gradient descent , L4 and learning to learn , only focus on global adaptation , which could be further extended to multi-level cases . That extension aims to introduce locally shared adaptive learning rates , such as layer-wise and parameter-wise learning rates , and to consider all levels ' information in determining the updating step-size for each parameter . The main contributions of our study can be summarized as follows : • We introduce hierarchical learning rate structures for neural networks and apply hyper-gradient descent to obtain adaptive learning rates at different levels . • We introduce a set of regularization techniques for learning rates to address the balance of global and local adaptations and show the relationship with weighted combinations . • We propose an algorithm implementing the combination of adaptive learning rates at multiple levels for model parameter updating . 2 MULTI-LEVEL ADAPTATION METHODS . 2.1 LAYER-WISE , UNIT-WISE AND PARAMETER-WISE ADAPTATION . In the paper on hyper-gradient descent ( Baydin et al. , 2017 ) , the learning rate is set to be a scalar . However , to make the most of learning rate adaptation , in this study we introduce layer-wise or even parameter-wise updating rules , where the learning rate $\alpha_t$ in each iteration time step is considered to be a vector ( layer-wise ) or even a list of matrices ( parameter-wise ) . For the sake of simplicity , we collect all the learning rates in a vector : $\alpha_t = ( \alpha_{1,t} , \dots , \alpha_{N,t} )^T$ . Correspondingly , the objective $f ( \theta )$ is a function of $\theta = ( \theta_1 , \theta_2 , \dots , \theta_N )^T$ , collecting all the model parameters . In this case , the derivative of the objective function $f$ with respect to each learning rate can be written as $$\frac{\partial f ( \theta_{t-1} )}{\partial \alpha_{i,t-1}} = \frac{\partial f ( \theta_{1,t-1} , \dots , \theta_{N,t-1} )}{\partial \alpha_{i,t-1}} = \sum_{j=1}^{N} \frac{\partial f ( \theta_{1,t-1} , \dots , \theta_{N,t-1} )}{\partial \theta_{j,t-1}} \frac{\partial \theta_{j,t-1}}{\partial \alpha_{i,t-1}} , \quad ( 1 )$$ where $N$ is the total number of model parameters . Eq . ( 1 ) can be generalized to group-wise updating , where we associate a learning rate with a particular group of parameters , and each parameter group is updated according to its own learning rate . Notice that although there is a dependency between $\alpha_{t-1}$ and $\theta_{t-2}$ through $\alpha_{t-1} = \alpha_{t-2} - \beta \nabla f ( \theta_{t-2} )$ , where $\beta$ is the updating rate of hyper-gradient descent , we consider that $\alpha_{t-1}$ is calculated after $\theta_{t-2}$ and thus a change of $\alpha_{t-1}$ will not result in a change of $\theta_{t-2}$ .
Assume $\theta_t = u(\Theta_{t-1}, \alpha)$ is the updating rule, where $\Theta_t = \{\theta_s\}_{s=0}^{t}$ and $\alpha$ is the learning rate; then the basic gradient descent method for each group i gives $\theta_{i,t} = u(\Theta_{t-1}, \alpha_{i,t-1}) = \theta_{i,t-1} - \alpha_{i,t-1}\nabla_{\theta_i} f(\theta_{t-1})$. Hence, for gradient descent,

$$\frac{\partial f(\theta_{t-1})}{\partial \alpha_{i,t-1}} = \nabla_{\theta_i} f(\theta_{t-1})^T \nabla_{\alpha_{i,t-1}} u(\Theta_{t-1}, \alpha_{i,t-1}) = -\nabla_{\theta_i} f(\theta_{t-1})^T \nabla_{\theta_i} f(\theta_{t-2}). \qquad (2)$$

Here $\alpha_{i,t-1}$ is a scalar with index i at time step t−1, corresponding to the learning rate of the i-th group, while the shape of $\nabla_{\theta_i} f(\theta)$ is the same as the shape of $\theta_i$. We consider two special cases in particular: (1) in layer-wise adaptation, $\theta_i$ is the weight matrix of the i-th layer and $\alpha_i$ is the learning rate for that layer; (2) in parameter-wise adaptation, $\theta_i$ corresponds to a single parameter of the model, which can be an element of the weight matrix in a certain layer.

2.2 REGULARIZATION ON ADAPTIVE LEARNING RATES. The appropriate adaptation level should be decided on a case-by-case basis; neither global nor parameter-wise adaptation is the optimal choice in all circumstances. Recall that for deep neural networks we typically use a relatively large architecture with regularization. The same idea can be applied to the space of learning rates with parameter structure. To address over-parameterization when implementing lower-level learning rate adaptation, we introduce regularization on the learning rates to control their flexibility. First, for layer-wise adaptation, we can add the following regularization term to the loss function:

$$L_{\mathrm{lr\_reg\_layer}} = \lambda_{\mathrm{layer}} \sum_l (\alpha_l - \alpha_g)^2, \qquad (3)$$

where l indexes the layers, $\lambda_{\mathrm{layer}}$ is the layer-wise regularization coefficient, and $\alpha_l$ and $\alpha_g$ are the layer-wise and global adaptive learning rates. A larger $\lambda_{\mathrm{layer}}$ pushes each layer's learning rate towards the global learning rate shared across all layers. Given a particular $\alpha_{g,t}$, the gradient of the loss function with respect to the learning rate $\alpha_l$ of layer l can be written as

$$\frac{\partial L_{\mathrm{full}}(\theta, \alpha)}{\partial \alpha_{l,t}} = \frac{\partial L_{\mathrm{model}}(\theta, \alpha)}{\partial \alpha_{l,t}} + \frac{\partial L_{\mathrm{lr\_reg}}(\theta, \alpha)}{\partial \alpha_{l,t}} = \nabla_{\theta_l} f(\theta_{t-1})^T \nabla_{\alpha_{l,t-1}} u(\Theta_{t-2}, \alpha_{t-1}) + 2\lambda_{\mathrm{layer}}(\alpha_{l,t} - \alpha_{g,t}). \qquad (4)$$

Notice that the time index of the layer-wise regularization term is t rather than t−1, which ensures that we push the layer-wise learning rates towards the corresponding global learning rate of the current step t. Denoting $h_{l,t-1} = -\nabla_{\theta_l} f(\theta_{t-1})^T \nabla_{\theta_l} u(\Theta_{t-2}, \alpha_{l,t-1})$, the updating rule for the learning rates can be written as

$$\alpha_{l,t} = \alpha_{l,t-1} - \beta \frac{\partial L_{\mathrm{full}}(\theta, \alpha)}{\partial \alpha_{l,t}} = \alpha_{l,t-1} - \beta\left(-h_{l,t-1} + 2\lambda_{\mathrm{layer}}(\alpha_{l,t} - \alpha_{g,t})\right). \qquad (5)$$

Eq. (5) has a closed-form solution, but it is only applicable in the two-level case, and there is an extra hyper-parameter $\lambda_{\mathrm{layer}}$ to be tuned. In addition, when there are more levels, the components of learning rates at different levels can be interdependent. To construct a workable updating scheme for Eq. (5), we replace $\alpha_{l,t}$ and $\alpha_{g,t}$ with suitable approximations. We take the strategy of using their updated versions computed without regularization, i.e., $\hat{\alpha}_{l,t} = \alpha_{l,t-1} + \beta h_{l,t-1}$ and $\hat{\alpha}_{g,t} = \alpha_{g,t-1} + \beta h_{g,t-1}$, where $h_{g,t-1} = -\nabla_{\theta} f(\theta_{t-1})^T \nabla_{\alpha_{g,t-1}} u(\Theta_{t-2}, \alpha_{g,t-1})$ is the global h for all parameters. Here we regard $\hat{\alpha}_{l,t}$ and $\hat{\alpha}_{g,t}$ as the "virtual" layer-wise and global learning rates for time step t; substituting them into the right-hand side of Eq.
(5) gives the new updating rule as follows:

$$\alpha^{*}_{l,t} = \alpha_{l,t-1} + \beta h_{l,t-1} - 2\beta\lambda_{\mathrm{layer}}(\hat{\alpha}_{l,t} - \hat{\alpha}_{g,t}) = (1 - 2\beta\lambda_{\mathrm{layer}})\,\hat{\alpha}_{l,t} + 2\beta\lambda_{\mathrm{layer}}\,\hat{\alpha}_{g,t}. \qquad (6)$$

Notice that the two terms in Eq. (6) form a weighted average of the layer-wise learning rate $\hat{\alpha}_{l,t}$ and the global learning rate $\hat{\alpha}_{g,t}$ at the current time step. Since we want to push the layer-wise learning rates towards the global one, the parameters should satisfy the constraint $0 < 2\beta\lambda_{\mathrm{layer}} < 1$, and thus they can be optimized by hyper-parameter search within a bounded interval as well as by gradient-based hyper-parameter optimization. We can also consider the case where three levels of learning rate adaptation are involved: global, layer-wise, and parameter-wise. If we introduce two more regularization terms to control the deviation of the parameter-wise learning rates from the layer-wise and global learning rates, the regularization loss can be written as

$$L_{\mathrm{lr\_reg\_para}} = \lambda_{\mathrm{layer}} \sum_l (\alpha_l - \alpha_g)^2 + \lambda_{\mathrm{para\_layer}} \sum_l \sum_p (\alpha_{pl} - \alpha_l)^2 + \lambda_{\mathrm{para}} \sum_l \sum_p (\alpha_{pl} - \alpha_g)^2,$$

where $\alpha_{pl}$ is the learning rate of the p-th parameter inside layer l. The second and third terms push each parameter-wise learning rate towards the layer-wise and global learning rates, respectively. As in the two-level case, the updating rule with this three-level regularization can be approximated by a weighted combination of three components under the "virtual approximation". The detailed updating rule for the three-level case is given by Algorithm 1 in Section 2.3. We also provide a discussion of the bias introduced by the "virtual approximation" in Appendix A.1. In general, we can organize all the learning rates in a tree structure. For example, in the three-level case above, $\alpha_g$ is the root node, the $\{\alpha_l\}$ are its children at the second level, and the $\{\alpha_{lp}\}$ are the children of $\alpha_l$, i.e., the leaf nodes at the third level of the tree. In the general case, we assume there are L levels in the tree. Denote the set of all paths from the root node to the leaf nodes by P; a path is denoted by $p = \{\alpha_1, \alpha_2, \ldots, \alpha_L\}$, where $\alpha_1$ is the root node and $\alpha_L$ is the leaf node on the path. On this path, denote by ancestors(i) the set of all ancestor nodes of $\alpha_i$ along the path, i.e., $\mathrm{ancestors}(i) = \{\alpha_1, \ldots, \alpha_{i-1}\}$. We construct a regularizer that pushes $\alpha_i$ towards each of its ancestors. The regularization can then be written as

$$L_{\mathrm{lr\_reg}} = \sum_{p \in P} \sum_{\alpha_i \in p} \sum_{\alpha_j \in \mathrm{ancestors}(i)} \lambda_{ij} (\alpha_i - \alpha_j)^2. \qquad (7)$$

Under this pair-wise L2 regularization, the updating rule for any leaf-node learning rate $\alpha_L$ is given by the following theorem, whose proof is provided in Appendix A.2. Theorem 1. Under the virtual approximation, applying the pair-wise L2 regularization of Eq. (7) amounts to performing a weighted linear combination of the virtual learning rates at different levels, $\alpha^{*}_L = \sum_{j=1}^{L} \gamma_j \hat{\alpha}_j$ with $\sum_{j=1}^{L} \gamma_j = 1$, where each component $\hat{\alpha}_j$ is calculated assuming no regularization. Remarks: Theorem 1 in fact suggests that a similar updating rule can be obtained for the learning rate at any level on the path. All of this is demonstrated in Algorithm 1 for the three-level case.
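As a small illustration of Eq. (6) and Theorem 1, the sketch below combines the three levels of "virtual" learning rates; all names are ours, and the gamma weights stand in for the (1 − 2βλ)-style coefficients, which must be non-negative and sum to one.

```python
def combine_rates(alpha_g, alpha_l, alpha_p, h_g, h_l, h_p, beta, gammas):
    """Three-level update under the virtual approximation (cf. Theorem 1).

    alpha_*: previous global / layer-wise / parameter-wise learning rates.
    h_*:     the corresponding h terms (negative hypergradients).
    gammas:  (g_g, g_l, g_p) convex-combination weights, summing to one.
    """
    g_g, g_l, g_p = gammas
    a_g = alpha_g + beta * h_g   # virtual global rate (no regularization)
    a_l = alpha_l + beta * h_l   # virtual layer-wise rate
    a_p = alpha_p + beta * h_p   # virtual parameter-wise rate
    return g_g * a_g + g_l * a_l + g_p * a_p
```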
Setting an appropriate learning rate for network optimization is an important task in deep learning applications. This paper investigates the setting of learning rates for network parameters at different levels, e.g., the individual-parameter, per-layer, and global levels. By placing constraints on the learning rates at multiple scales, the paper derives a hierarchical learning rate adaptation approach in which the effective learning rate is a combination of the adaptive learning rates at different levels.
SP:bd0775160c5ab06f765a031236995c84926b5f70
Linear Convergence and Implicit Regularization of Generalized Mirror Descent with Time-Dependent Mirrors
1 INTRODUCTION. Recent work has established the optimization and generalization benefits of over-parameterization in machine learning (Belkin et al., 2019; Liu et al., 2020; Zhang et al., 2017). In particular, several works, including Vaswani et al. (2019); Du et al. (2018); Liu et al. (2020); Li & Liang (2018), have demonstrated that over-parameterized models converge to a global minimum when trained using stochastic gradient descent and that such convergence can occur at a linear rate. Independently, other work, such as Gunasekar et al. (2018), has characterized the implicit regularization of over-parameterized models, i.e., the properties of the solution selected by a given optimization method, without proving convergence. Recently, Azizan & Hassibi (2019); Azizan et al. (2019) simultaneously proved convergence and analyzed approximate implicit regularization for mirror descent (Beck & Teboulle, 2003; Nemirovsky & Yudin, 1983). In particular, by using the fundamental identity of stochastic mirror descent (SMD), they proved that SMD converges to an interpolating solution that is approximately the closest one to the initialization in Bregman divergence. However, these works do not provide a rate of convergence for SMD and assume that there exists an interpolating solution within ε, in Bregman divergence, of the initialization. In this work, we provide sufficient conditions for linear convergence and obtain approximate implicit regularization results for generalized mirror descent (GMD), an extension of mirror descent that introduces (1) a potential-free update rule and (2) a time-dependent mirror; namely, GMD with invertible $\phi : \mathbb{R}^d \to \mathbb{R}^d$ and learning rate $\eta$ is used to minimize a real-valued loss function f according to the update rule

$$\phi^{(t)}(w^{(t+1)}) = \phi^{(t)}(w^{(t)}) - \eta \nabla f(w^{(t)}). \qquad (1)$$

We discuss the stochastic version of GMD (SGMD) in Section 3. GMD generalizes both mirror descent and preconditioning methods. Namely, if for all t, $\phi^{(t)} = \nabla\psi$ for some strictly convex function $\psi$, then GMD corresponds to mirror descent with potential $\psi$; if $\phi^{(t)} = G^{(t)}$ for some invertible matrix $G^{(t)} \in \mathbb{R}^{d\times d}$, then the update rule in equation (1) reduces to $w^{(t+1)} = w^{(t)} - \eta \, (G^{(t)})^{-1} \nabla f(w^{(t)})$ and hence represents applying a pre-conditioner to gradient updates. The following is a summary of our results: 1. We provide a simple proof of linear convergence of GMD under the Polyak-Lojasiewicz inequality (Theorem 1). 2. We provide sufficient conditions under which SGMD converges linearly under an adaptive learning rate (Theorems 2 and 3).¹ 3. As corollaries to Theorems 1 and 3, in Section 5 we provide sufficient conditions for linear convergence of stochastic mirror descent as well as of stochastic preconditioner methods such as Adagrad (Duchi et al., 2011). 4. We prove the existence of an interpolating solution and linear convergence of GMD to this solution for non-negative loss functions that locally satisfy the PL* inequality (Liu et al., 2020). This result (Theorem 4) provides approximate implicit regularization results for GMD: GMD converges linearly to an interpolating solution that is approximately the closest interpolating solution to the initialization in $\ell_2$ norm in the dual space induced by $\phi^{(t)}$.

2 RELATED WORK. Recent work (Azizan et al., 2019) established convergence of stochastic mirror descent (SMD) for nonlinear optimization problems.
It characterized the implicit bias of mirror descent by demonstrating that SMD converges to a global minimum that is within epsilon of the closest interpolating solution in Bregman divergence. The analysis in Azizan et al. (2019) relies on the fundamental identity of SMD and does not provide explicit learning rates or establish a rate of convergence for SMD in the nonlinear setting. The work in Azizan & Hassibi (2019) provided explicit learning rates for the convergence of SMD in the linear setting under a strongly convex potential, again without a rate of convergence. While these works established convergence of SMD, prior work by Gunasekar et al. (2018) analyzed the implicit bias of SMD without proving convergence. A potential-based version of generalized mirror descent with time-varying regularizers was presented for online problems in Orabona et al. (2015). That work is primarily concerned with establishing regret bounds in the online learning setting, which differs from our setting of minimizing a loss function given a set of known data points. A potential-free formulation of GMD for the continuous-time flow was presented in Gunasekar et al. (2020). The Polyak-Lojasiewicz (PL) inequality (Lojasiewicz, 1963; Polyak, 1963) serves as a simple condition for linear convergence in non-convex optimization problems and is satisfied in a number of settings, including over-parameterized neural networks (Liu et al., 2020). Work by Karimi et al. (2016) demonstrated linear convergence of a number of descent methods (including gradient descent) under the PL inequality. Similarly, Vaswani et al. (2019) proved linear convergence of stochastic gradient descent (SGD) under the PL inequality and the strong growth condition (SGC), and Bassily et al. (2018) established the same rate for SGD under just the PL inequality. Soltanolkotabi et al. (2019) also used the PL inequality to establish a local linear convergence result for gradient descent on one-hidden-layer over-parameterized neural networks. Recently, Xie et al. (2020) established linear convergence for a norm version of Adagrad (Adagrad-Norm) using the PL inequality, while Wu et al. (2019) established linear convergence for Adagrad-Norm in the particular setting of over-parameterized neural networks with one hidden layer. An alternate analysis of Adagrad-Norm for smooth, non-convex functions was presented in Ward et al. (2019), resulting in a sub-linear convergence rate. ¹ We also provide a fixed learning rate for monotonically decreasing gradients $\nabla f(w^{(t)})$. Instead of focusing on a specific method, the goal of this work is to establish sufficient conditions for linear convergence by applying the PL inequality in a more general setting (SGMD). We arrive at linear convergence for specific methods such as mirror descent and preconditioned gradient descent as corollaries. Moreover, our local convergence results provide an intuitive formulation of approximate implicit regularization for GMD and thus for mirror descent. Namely, instead of resorting to Bregman divergence, we prove that GMD converges to an interpolating solution that is approximately the closest interpolating solution to the initialization in $\ell_2$ norm in the dual space induced by $\phi^{(t)}$.

3 ALGORITHM DESCRIPTION AND PRELIMINARIES. We begin with a formal description of SGMD. Let $f_i : \mathbb{R}^d \to \mathbb{R}$ denote real-valued, differentiable loss functions and let $f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$.
In addition, let $\phi^{(t)} : \mathbb{R}^d \to \mathbb{R}^d$ be an invertible function for all non-negative integers t. We solve the optimization problem $\arg\min_{x\in\mathbb{R}^d} f(x)$ using stochastic generalized mirror descent with learning rate η²:

$$\phi^{(t)}(w^{(t+1)}) = \phi^{(t)}(w^{(t)}) - \eta \nabla f_{i_t}(w^{(t)}), \qquad (2)$$

where $i_t \in [n]$ is chosen uniformly at random. As described in the introduction, the above algorithm generalizes both gradient descent (where $\phi(x) = x$) and mirror descent (where $\phi^{(t)}(x) = \nabla\psi(x)$ for some strictly convex potential function $\psi$). In the case where $\phi^{(t)}(x) = G^{(t)}x$ for an invertible matrix $G^{(t)} \in \mathbb{R}^{d\times d}$, the update rule in equation (2) reduces to

$$w^{(t+1)} = w^{(t)} - \eta \, (G^{(t)})^{-1} \nabla f_{i_t}(w^{(t)}).$$

Hence, when $\phi^{(t)}$ is an invertible linear transformation, equation (2) is equivalent to pre-conditioned gradient descent. We now present the Polyak-Lojasiewicz inequality and lemmas from optimization theory that will be used in our proofs.³ Polyak-Lojasiewicz (PL) Inequality. A function $f : \mathbb{R}^d \to \mathbb{R}$ is $\mu$-PL if for some $\mu > 0$:

$$\frac{1}{2}\|\nabla f(x)\|^2 \ge \mu\left(f(x) - f(x^*)\right) \quad \forall x \in \mathbb{R}^d, \qquad (3)$$

where $x^* \in \mathbb{R}^d$ is a global minimizer of f. A useful variation of the PL inequality is the PL* inequality introduced in Liu et al. (2020), which does not require knowledge of $f(x^*)$. Definition. A function $f : \mathbb{R}^d \to \mathbb{R}$ is $\mu$-PL* if for some $\mu > 0$:

$$\frac{1}{2}\|\nabla f(x)\|^2 \ge \mu f(x) \quad \forall x \in \mathbb{R}^d. \qquad (4)$$

A function that is $\mu$-PL* is also $\mu$-PL when f is non-negative. Additionally, we will typically assume that f is L-smooth (i.e., has an L-Lipschitz continuous derivative). Definition. A function $f : \mathbb{R}^d \to \mathbb{R}$ is L-smooth for $L > 0$ if for all $x, y \in \mathbb{R}^d$: $\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$. If $\phi^{(t)}(x) = x$ for all t and $x \in \mathbb{R}^d$, then SGMD reduces to SGD. If f is L-smooth and satisfies the PL inequality, then SGD converges linearly to a global minimum (Bassily et al., 2018; Karimi et al., 2016; Vaswani et al., 2019). Moreover, the following lemma (proven in Appendix A) shows that the PL* condition implies the existence of a global minimum $x^*$ for non-negative, L-smooth f. Lemma 1. If $f : \mathbb{R}^d \to \mathbb{R}$ is $\mu$-PL*, L-smooth, and $f(x) \ge 0$ for all $x \in \mathbb{R}^d$, then gradient descent with learning rate $\eta < \frac{2}{L}$ converges linearly to $x^*$ satisfying $f(x^*) = 0$. ² The framework also allows for adaptive learning rates by using $\eta^{(t)}$ to denote a time-dependent step size. ³ We assume all norms are the 2-norm unless stated otherwise. Hence, in cases where the loss function is non-negative (for example the squared loss), we can remove the usual assumption about the existence of a global minimum $x^*$ and instead assume that f satisfies the PL* inequality. We now reference standard properties of L-smooth functions (Zhou, 2018), which will be used in our proofs. Lemma 2. If $f : \mathbb{R}^d \to \mathbb{R}$ is L-smooth, then for all $x, y \in \mathbb{R}^d$: (a) $f(y) \le f(x) + \langle\nabla f(x), y - x\rangle + \frac{L}{2}\|y - x\|^2$; (b) $\|\nabla f(x)\|^2 \le 2L\left(f(x) - f(x^*)\right)$. The following lemma relates $\mu$ and L (the proof is in Appendix B). Lemma 3. If $f : \mathbb{R}^d \to \mathbb{R}$ is $\mu$-PL and L-smooth, then $\mu \le L$. Using Lemma 2b in place of the strong growth condition (i.e., $\mathbb{E}_i[\|\nabla f_i(x)\|^2] \le \rho\|\nabla f(x)\|^2$) yields slightly different learning rates when establishing convergence of stochastic descent methods (as is apparent from the different learning rates in Bassily et al. (2018) and Vaswani et al. (2019)). The following simple lemma will be used in the proof of Theorem 3. Lemma 4.
If $f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ where the $f_i : \mathbb{R}^d \to \mathbb{R}$ are $L_i$-smooth, then f is $\sup_i L_i$-smooth. Note that there could exist some other constant $L' < \sup_i L_i$ for which f is $L'$-smooth, but this upper bound suffices for our proof of Theorem 3. Lastly, we define and reference standard properties of strongly convex functions (Zhou, 2018), which will be useful in demonstrating how our GMD results generalize those for mirror descent. Definition. For $\alpha > 0$, a differentiable function $\psi : \mathbb{R}^d \to \mathbb{R}$ is $\alpha$-strongly convex if for all x, y,

$$\psi(y) \ge \psi(x) + \langle\nabla\psi(x), y - x\rangle + \frac{\alpha}{2}\|y - x\|^2.$$

Lemma 5. If $\psi : \mathbb{R}^d \to \mathbb{R}$ is $\alpha$-strongly convex, then for all x, y:

$$\psi(y) \le \psi(x) + \langle\nabla\psi(x), y - x\rangle + \frac{1}{2\alpha}\|\nabla\psi(y) - \nabla\psi(x)\|^2.$$

With these preliminaries in hand, we now present our proofs of linear convergence of SGMD using the PL inequality.
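Before turning to the proofs, a small self-contained sketch may help fix ideas: the SGMD update of equation (2) with a linear (hence invertible) mirror, run on an over-parameterized least-squares problem, whose loss is non-negative and smooth. The setup and constants here are our own illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                                    # d > n: interpolation is possible
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
grad_fi = lambda w, i: (A[i] @ w - b[i]) * A[i]  # gradient of f_i(w) = (A_i.w - b_i)^2 / 2

w, eta = np.zeros(d), 0.01
for t in range(5000):
    i = rng.integers(n)          # i_t chosen uniformly at random
    G = np.eye(d)                # invertible mirror G^{(t)}: the identity recovers
                                 # plain SGD; other choices give pre-conditioned SGD
    w = w - eta * np.linalg.solve(G, grad_fi(w, i))

print(np.abs(A @ w - b).max())   # close to 0: converged to an interpolating solution
```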
This paper studies interesting properties of generalized mirror descent (GMD) and its stochastic variant for nonconvex optimization problems. First, for GMD, the paper shows linear convergence under the PL* condition (in Lemma 1) and identifies a new sufficient condition for linear convergence (in Theorem 2). Next, this result is extended to the stochastic setting (in Theorem 3). Moreover, the implicit regularization of GMD is studied, extending the previous studies by [Azizan et al.].
SP:0007eeef2280b8cd027be08249b27e2116328ab8
DHOG: Deep Hierarchical Object Grouping
1 INTRODUCTION. It is very expensive to label a dataset with respect to a particular task. Consider the alternative where a user, instead of labelling a dataset, specifies a simple set of class-preserving transformations or 'augmentations'. For example, lighting changes will not change a dog into a cat. Is it possible to learn a model that produces a useful representation by leveraging a set of such augmentations? This representation would need to be good at capturing salient information about the data and enable downstream tasks to be done efficiently. If the representation were a discrete labelling which groups the dataset into clusters, an obvious choice of downstream task is unsupervised clustering. Ideally the clusters should match direct labelling, without ever having been learnt on explicitly labelled data. Using data augmentations to drive unsupervised representation learning for images has been explored by a number of authors (Dosovitskiy et al., 2014; 2015; Bachman et al., 2019; Chang et al., 2017; Wu et al., 2019; Ji et al., 2019; Cubuk et al., 2019). These approaches typically involve learning neural networks that map augmentations of the same image to similar representations, which is reasonable since the variances across many common augmentations often align with the invariances we would require. A number of earlier works target maximising mutual information (MI) between augmentations (van den Oord et al., 2018; Hjelm et al., 2019; Wu et al., 2019; Ji et al., 2019; Bachman et al., 2019). Targeting high MI between representations computed from distinct augmentations enables learning representations that capture the invariances induced by the augmentations. We are interested in a particularly parsimonious representation: a discrete labelling of the data. This labelling can be seen as a clustering procedure (Ji et al., 2019), where MI can be computed and assessment can be done directly using the learned labelling, as opposed to via an auxiliary network trained post hoc.

1.1 SUBOPTIMAL MUTUAL INFORMATION MAXIMISATION. We argue and show that the MI objective is not maximised effectively in existing work due to the combination of: 1. greedy optimisation algorithms used to train neural networks, such as stochastic gradient descent (SGD), that tend towards local optima; and 2. a limited set of data augmentations, which can result in the existence of multiple local optima of the MI maximisation objective. SGD is greedy in the sense that early-found high-gradient features can dominate, and so networks will tend to learn easier-to-compute, locally optimal representations (for example, one that can be computed using fewer neural network layers) over those that depend on complex features. By way of example, in natural images, average colour is an easy-to-compute characteristic, whereas object type is not. If the augmentation strategy preserves average colour, then a reasonable mapping need only compute colour information, and high MI between learned image representations will be obtained. This result is suboptimal in the sense that a hypothetical higher-MI optimum exists that also captures semantic information, assuming the model has sufficient capacity to learn and represent it. The conceivable existence of many such local optima, coupled with greedy optimisation, presents a challenge: how can we leverage powerful image augmentation-driven MI objectives while avoiding greedily found local optima?
Dealing with greedy solutions. Heuristic solutions, such as Sobel edge detection (Caron et al., 2018; Ji et al., 2019) as a pre-processing step, have been suggested to remove or alter the features in images that may cause trivial representations to be learned. This is a symptomatic treatment and not a solution. In the work presented herein, we acknowledge that greedy SGD can get stuck in local optima of the MI maximisation objective because of limited data augmentations. Instead of trying to prevent a greedy solution, our technique lets a model learn this representation, but also requires it to learn an additional, distinct representation. Specifically, we minimise the MI between these two representations so that the latter cannot rely on the same features. We extend this idea by adding representations, each time requiring the latest to be distinct from all previous representations. Downstream task: clustering. For this work, our focus is on finding higher-MI representations; we then assess the downstream capability on the ground-truth task of image classification, meaning that we can either (1) learn a representation that must be 'decoded' via an additional learning step, or (2) produce a discrete labelling that requires no additional learning. Clustering methods offer a direct comparison and require no labels for learning a mapping from the learned representation to class labels. Instead, labels are only required to assign groups to appropriate classes, and no learning is done using them. Our comparisons are with respect to clustering methods.

1.2 CONTRIBUTIONS. Learning a set of representations by encouraging them to have low MI, while still maximising the original augmentation-driven MI objective for each representation, is the core idea behind Deep Hierarchical Object Grouping (DHOG). We define a mechanism to produce a set of hierarchically ordered solutions (in the sense of easy-to-hard orderings, not tree structures). DHOG is able to better maximise the original MI objective between augmentations since each representation must correspond to a unique local optimum. Our contributions are: 1. We demonstrate that current methods do not effectively maximise the MI objective¹ because greedy stochastic gradient descent (SGD) typically results in suboptimal local optima. To mitigate this problem, we introduce DHOG: a robust neural-network image grouping method to learn diverse and hierarchically arranged sets of discrete image labellings (Section 3) by explicitly modelling, accounting for, and avoiding spurious local optima, requiring only simple data augmentations and needing no Sobel edge detection. 2. We show a marked improvement over the current state of the art for standard benchmarks in end-to-end image clustering on CIFAR-10, CIFAR-100-20 (a 20-way class grouping of CIFAR-100), and SVHN; we set a new accuracy benchmark on CINIC-10; and we show the utility of our method on STL-10 (Section 4). To be clear, DHOG still learns to map data augmentations to similar representations, as this is imperative to the learning process. The difference is that DHOG enables a number of intentionally distinct data labellings to be learned, arranged hierarchically in terms of source feature complexity. ¹ We show this by finding higher mutual information solutions using DHOG, rather than by any analysis of the solutions themselves.

2 RELATED WORK.
The idea of MI maximisation for representation learning is called the infoMAX principle (Linsker, 1988; Tschannen et al., 2019). Contrastive predictive coding (CPC) (van den Oord et al., 2018) models a 2D latent space using an autoregressive model and defines a predictive setup to maximise MI between distinct spatial locations. Deep InfoMAX (DIM) (Hjelm et al., 2019) does not maximise MI across a set of data augmentations, but instead uses mutual information neural estimation (Belghazi et al., 2018) and negative sampling to balance maximising MI between global representations and local representations. Augmented multiscale Deep InfoMAX (AMDIM) (Bachman et al., 2019) incorporates MI maximisation across data augmentations and multiscale comparisons. Clustering approaches are more directly applicable for comparison with DHOG because they explicitly learn a discrete labelling. The authors of deep embedding for clustering (DEC) (Xie et al., 2016) focused their attention on jointly learning an embedding suited to clustering and a clustering itself. They argued that the notion of distance in the feature space is crucial to a clustering objective. Joint unsupervised learning of deep representations and image clusters (JULE) (Yang et al., 2016) provided a supervisory signal for representation learning. Some methods (Ghasedi Dizaji et al., 2017; Fard et al., 2018) employ autoencoder architectures along with careful regularisation of cluster assignments to (1) ensure sufficient information retention and (2) avoid cluster degeneracy (i.e., mapping all images to the same class). Deep adaptive clustering (DAC) (Chang et al., 2017) recasts the clustering problem as binary pairwise classification, pre-selecting comparison samples via feature cosine distances. A constraint on the DAC system allows for a one-hot encoding that avoids cluster degeneracy. Another mechanism for dealing with degeneracy is to use a standard clustering algorithm, such as K-means, to iteratively group learned features; this approach is used by DeepCluster (Caron et al., 2018). Associative deep clustering (ADC) (Haeusser et al., 2018) uses the idea that associations in the embedding space are useful for learning: a network is learned to associate data with (pseudo-labelled) centroids, and augmentations are leveraged by encouraging samples to output similar cluster probabilities. Deep comprehensive correlation mining (DCCM) (Wu et al., 2019) constructs a sample correlation graph for pseudo-labels and maximises the MI between augmentations, as well as the MI between local and global features for each augmentation. While many of the aforementioned methods estimate MI in some manner, invariant information clustering (IIC) (Ji et al., 2019) directly defines the MI using the c-way softmax output (i.e., the probability of belonging to class c) and maximises this over data augmentations to learn clusters. They effectively avoid degenerate solutions because MI maximisation implicitly targets marginal entropy. We use the same formulation of MI in Section 3.

3 METHOD. Figure 1 shows the DHOG architecture. DHOG is an approach for obtaining jointly trained multi-level representations as discrete labellings, arranged in a simple-to-complex hierarchy and computed by separate 'heads'. A head is a unit that computes a multivariate class probability vector.
By requiring low MI between heads, a diversity of solutions to the MI maximisation objective can be found. The head that best maximises MI between augmentations typically aligns better with a ground-truth task that also relies on the complex features that augmentations are designed to preserve. Figure 1 demonstrates the DHOG architecture and training principles. There are shared model weights (2: ResNet blocks 1, 2, and 3) and head-specific weights (the MLP layers and 3: ResNet blocks 4 to 8). For the sake of brevity, we abuse notation and use MI(z, z') between labelling probability vectors as an overloaded shorthand for the mutual information MI(c, c') between the labelling random variables c and c' that have probability vectors z and z', respectively. Any branch of the DHOG architecture (1 to any z_i) can be regarded as a single neural network. These are trained to maximise the MI between the label variables at each head for different augmentations, i.e., between label variables with probability vectors z_i(x) and z_i(x') for augmentations x and x'. Four augmentations are shown at 1. The MI is maximised pairwise between all pairs, at 4. This process can be considered as pulling the mapped representations together. [Figure 1 caption: the head-specific block is repeated k − 3 times (k = 8 here). 1: Augmentations of each image, x_a...x_d, are separately processed by the network. 2: Each shallow ResNet block (1...3) constitutes shared computation for deeper blocks, while also computing separate probability vectors z_1...z_3. Each z_i is viewed as the probability for each outcome of the random variable c_i that makes a discrete labelling choice. 3: The deepest ResNet blocks compute further z_{>3}. 4: The network is trained by maximising the MI between allocations c_i from all data augmentations, and 5: separately for each node i, minimising the MI between c_i and c_{<i} for the same data augmentation. 6: This is implemented by stopping gradients such that they are not back-propagated along later computation paths (red crosses).] Following IIC (Ji et al., 2019), we compute the MI directly from the label probability vectors within a minibatch. Let z_i, z'_i denote the random probability vectors at head i associated with sampling a data item and its augmentations and passing them through the network. Then we can compute the MI between labels associated with each augmentation using

$$\mathrm{MI}_{\mathrm{aug}}(c_i, c'_i) = \mathrm{Tr}\!\left(\mathbb{E}\!\left[z_i (z'_i)^T\right]^T \log \mathbb{E}\!\left[z_i (z'_i)^T\right]\right) - \mathbb{E}\!\left[z_i^T\right]\log \mathbb{E}[z_i] - \mathbb{E}\!\left[(z'_i)^T\right]\log \mathbb{E}[z'_i], \qquad (1)$$

where Tr is the matrix trace, logarithms are computed element-wise, and expectations are over data samples and augmentations of each sample. In practice we compute an empirical estimate of this MI based on the samples in a minibatch.
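In code, the empirical estimate of Eq. (1) over a minibatch is short. The sketch below (our own naming, PyTorch) builds the joint distribution of the two labellings and evaluates their MI; the same function can be reused, with a stop-gradient on the earlier head, for the cross-head MI minimisation described above. The symmetrisation step follows IIC and is optional.

```python
import torch

def mutual_info(z, z_prime, eps=1e-8):
    """Empirical MI between two c-way soft labellings, cf. Eq. (1).

    z, z_prime: (batch, C) softmax outputs for two augmentations
                of the same minibatch of images.
    """
    p_joint = (z.unsqueeze(2) * z_prime.unsqueeze(1)).mean(dim=0)  # (C, C)
    p_joint = (p_joint + p_joint.t()) / 2          # symmetrise, as in IIC
    p_joint = p_joint.clamp(min=eps)
    p_i = p_joint.sum(dim=1, keepdim=True)         # marginal of c_i
    p_j = p_joint.sum(dim=0, keepdim=True)         # marginal of c'_i
    return (p_joint * (p_joint.log() - p_i.log() - p_j.log())).sum()

# Per head i: maximise mutual_info(z_i(x), z_i(x')) across augmentations,
# and minimise mutual_info(z_i(x), z_j(x).detach()) for every j < i.
```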
This paper addresses the problem of unsupervised learning of class representations using data augmentation. Its key idea is to encourage the learned representations to have low MI with one another while maximizing the original augmentation-driven MI objective. It reports improved performance on the benchmarks of Ji et al. 2019 – classification on some easy datasets (e.g., CIFAR-10, CINIC-10, SVHN and STL-10).
SP:2eec02429adee2ab91752629c85df9f1463e54d8
Signatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU
1 INTRODUCTION. The signature transform, sometimes referred to as the path signature or simply the signature, is a central object in rough path theory (Lyons, 1998; 2014). It is a transformation on differentiable paths¹, and may be thought of as loosely analogous to the Fourier transform. However, whilst the Fourier transform extracts information about frequency, treats each channel separately, and is linear, the signature transform extracts information about order and area, explicitly considers combinations of channels, and is in a precise sense 'universally nonlinear' (Bonnier et al., 2019, Proposition A.6). The logsignature transform (Liao et al., 2019) is a related transform that we will also consider. In both cases, by treating sequences of data as continuous paths, the (log)signature transform may be applied to problems with sequential structure, such as time series. Indeed, there is a significant body of work using the (log)signature transform in machine learning, with examples ranging from handwriting identification to sepsis prediction; see for example Morrill et al. (2019); Fermanian (2019); Király & Oberhauser (2019); Toth & Oberhauser (2020); Morrill et al. (2020b). Earlier work often used the signature and logsignature transforms as a feature transformation. See Levin et al. (2013); Chevyrev & Kormilitzin (2016); Yang et al. (2016a;b); Kormilitzin et al. (2016); Li et al. (2017); Perez Arribas et al. (2018) for a range of examples. In this context, when training a model on top, it is sufficient to simply preprocess the entire dataset with the signature or logsignature transform and save the result. However, recent work has focused on embedding the signature and logsignature transforms within neural networks. Recent examples include Bonnier et al. (2019); Liao et al. (2019); Moor et al. (2020); Morrill et al. (2020a); Kidger et al. (2020), among others. In this context, the signature and logsignature transforms are evaluated many times throughout a training procedure, and as such efficient and differentiable implementations are crucial. Previous libraries (Lyons, 2017; Reizenstein & Graham, 2018) have been CPU-only and single-threaded, and quickly become the major source of slowdown when training and evaluating these networks. ¹ And may be extended to paths of bounded variation, or merely finite p-variation (Lyons et al., 2004).

1.1 CONTRIBUTIONS. We introduce Signatory, a CPU- and GPU-capable library for calculating, and performing functionality related to, the signature and logsignature transforms. To our knowledge it is the first GPU-capable library for these operations. The focus is on machine learning applications. Signatory is significantly faster than previous libraries (whether run on the CPU or the GPU), due to a combination of parallelism and novel algorithmic improvements. In particular, the latter include both uniform and asymptotic rate improvements over previous algorithms. Additionally, Signatory provides functionality not available in previous libraries, such as precomputation strategies for efficient querying of the (log)signature transform over arbitrary overlapping intervals. The library integrates with the open-source PyTorch ecosystem and runs on Linux or Windows. Documentation, examples, benchmarks and tests form a part of the project.
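As a quick illustration of the intended workflow, the sketch below shows the kind of call a user makes, with paths stored in a batch-stream-channel layout; treat it as indicative and check the exact API against the documentation linked below.

```python
import torch
import signatory  # pip install signatory (built against a matching PyTorch)

# A batch of 32 sequences, each with 100 time steps and 5 channels,
# interpreted as piecewise-linear paths (see Definition 4 in Section 2).
path = torch.randn(32, 100, 5, requires_grad=True)

sig = signatory.signature(path, depth=3)        # shape (32, 5 + 5**2 + 5**3)
logsig = signatory.logsignature(path, depth=3)  # more compact representation

# Both transforms are differentiable, so they can sit inside a model:
sig.sum().backward()
```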
Much of the code is written in C++ primitives and the CPU implementation utilises OpenMP. The backward operations are handwritten for both speed and memory efficiency, and do not rely on the autodifferentiation provided by PyTorch. The source code is located at https://github.com/patrick-kidger/signatory, documentation and examples are available at https://signatory.readthedocs.io, and the project may be installed directly via pip. This paper is not a guide to using Signatory – for that we refer to the documentation. It is instead a technical exposition of the library's innovations.

1.2 APPLICATIONS. Signatory has already seen a rapid uptake amongst the signature community. Recent work using Signatory includes Morrill et al. (2020b); Perez Arribas et al. (2020), who involve signatures in neural differential equations, and Moor et al. (2020); Min & Ichiba (2020), who study deep signature models (Bonnier et al., 2019). Meanwhile Ni et al. (2020) apply Signatory to hybridise signatures with GANs, and Morrill et al. (2020a) create a generalised framework for the "signature method". As a final example, Signatory is now itself a dependency for other libraries (Kidger, 2020).

2 BACKGROUND. We begin with some exposition on the theory of the signature and logsignature transforms, offering definitions first and intuition afterwards. Also see Reizenstein & Graham (2018) for an introduction focusing on computational concerns, and Lyons et al. (2004) and Hodgkinson et al. (2020) for pedagogical introductions to the motivating theory of rough paths. 2.1 THE SIGNATURE TRANSFORM. Definition 1. Let $\mathbb{R}^{d_1}\otimes\mathbb{R}^{d_2}\otimes\cdots\otimes\mathbb{R}^{d_n}$ denote the space of all real tensors of shape $d_1\times d_2\times\cdots\times d_n$. There is a corresponding binary operation $\otimes$, called the tensor product, which maps a tensor of shape $(d_1,\ldots,d_n)$ and a tensor of shape $(e_1,\ldots,e_m)$ to a tensor of shape $(d_1,\ldots,d_n,e_1,\ldots,e_m)$ via $(A_{i_1,\ldots,i_n}, B_{j_1,\ldots,j_m}) \mapsto A_{i_1,\ldots,i_n}B_{j_1,\ldots,j_m}$. For example, when applied to two vectors it reduces to the outer product. Let $(\mathbb{R}^d)^{\otimes k} = \mathbb{R}^d\otimes\cdots\otimes\mathbb{R}^d$ and $v^{\otimes k} = v\otimes\cdots\otimes v$ for $v\in\mathbb{R}^d$, in each case with $k-1$ many $\otimes$. Definition 2. Let $N\in\mathbb{N}$. The signature transform to depth N is defined as

$$\mathrm{Sig}^N : \{f\in C([0,1];\mathbb{R}^d) \mid f \text{ differentiable}\} \to \prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k},$$
$$\mathrm{Sig}^N(f) = \left( \int\!\cdots\!\int_{0<t_1<\cdots<t_k<1} \frac{df}{dt}(t_1)\otimes\cdots\otimes\frac{df}{dt}(t_k)\, dt_1\cdots dt_k \right)_{1\le k\le N}. \qquad (1)$$

Most texts define the signature transform using the notation of stochastic calculus. Here, we sacrifice some generality (that is not needed in this context) in favour of more widely used notation.² The signature transform may naturally be extended to sequences of data. Definition 3. The space of sequences of data over a set V is $S(V) = \{x = (x_1,\ldots,x_L) \mid L\in\mathbb{N},\ x_i\in V \text{ for all } i\}$. An interval of $(x_1,\ldots,x_L)\in S(V)$ is $(x_i,\ldots,x_j)\in S(V)$ for some $1\le i<j\le L$. Definition 4. Let $x=(x_1,\ldots,x_L)\in S(\mathbb{R}^d)$ with $L\ge 2$. Let $f:[0,1]\to\mathbb{R}^d$ be the unique continuous piecewise affine function such that $f\!\left(\frac{i-1}{L-1}\right)=x_i$ for all i, and which is affine on the pieces in between. Let $N\in\mathbb{N}$. Then define $\mathrm{Sig}^N(x)=\mathrm{Sig}^N(f)$. In this way we interpret $\mathrm{Sig}^N$ as a map $\mathrm{Sig}^N : S(\mathbb{R}^d)\to\prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k}$. Note that the choice of $\frac{i-1}{L-1}$ is unimportant; any L points in [0,1] would suffice, and in fact the definition is invariant to this choice (Bonnier et al., 2019, Definition A.10). 2.2 THE GROUPLIKE STRUCTURE.
With $A_0 = B_0 = 1 \in \mathbb{R}$ on the right-hand side, define $\circledast$ by³

$$\circledast : \left(\prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k}\right) \times \left(\prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k}\right) \to \prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k},$$
$$(A_1,\ldots,A_N)\circledast(B_1,\ldots,B_N) = \left(\sum_{j=0}^{k} A_j\otimes B_{k-j}\right)_{1\le k\le N}.$$

Chen's identity (Lyons et al., 2004, Theorem 2.9) states that the image of the signature transform forms a noncommutative group with respect to $\circledast$. That is, given a sequence of data $(x_1,\ldots,x_L)\in S(\mathbb{R}^d)$ and some $j\in\{2,\ldots,L-1\}$,

$$\mathrm{Sig}^N((x_1,\ldots,x_L)) = \mathrm{Sig}^N((x_1,\ldots,x_j)) \circledast \mathrm{Sig}^N((x_j,\ldots,x_L)). \qquad (2)$$

Furthermore, the signature of a sequence of length two may be computed explicitly from the definition. Letting

$$\exp : \mathbb{R}^d \to \prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k}, \qquad \exp : v \mapsto \left(v, \frac{v^{\otimes 2}}{2!}, \frac{v^{\otimes 3}}{3!}, \ldots, \frac{v^{\otimes N}}{N!}\right),$$

then $\mathrm{Sig}^N((x_1,x_2)) = \exp(x_2 - x_1)$. With Chen's identity, this implies that the signature transform may be computed by evaluating

$$\mathrm{Sig}^N((x_1,\ldots,x_L)) = \exp(x_2-x_1) \circledast \exp(x_3-x_2) \circledast \cdots \circledast \exp(x_L-x_{L-1}). \qquad (3)$$

2.3 THE LOGSIGNATURE, INVERTED SIGNATURE, AND INVERTED LOGSIGNATURE. We denote the group inverse by $^{-1}$. Additionally, a notion of logarithm may be defined (Liao et al., 2019), where

$$\log : \mathrm{image}(\mathrm{Sig}^N) \to \prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k}. \qquad (4)$$

² Additionally, many texts also include a k = 0 term, which is defined to equal one. We omit this as it does not carry any information and is therefore irrelevant to the task of machine learning. ³ Most texts use $\otimes$ rather than $\circledast$ to denote this operation, as it may be regarded as a generalisation of the tensor product. That will not be important to us, however, so we use differing notation to aid interpretation. This then defines the notions of the inverted signature transform, logsignature transform, and inverted logsignature transform as $\mathrm{InvertSig}^N(x) = \mathrm{Sig}^N(x)^{-1}$, $\mathrm{LogSig}^N(x) = \log(\mathrm{Sig}^N(x))$, and $\mathrm{InvertLogSig}^N(x) = \log(\mathrm{Sig}^N(x)^{-1})$, respectively. We emphasise that the inverted signature and logsignature transforms are not the inverse maps of the signature and logsignature transforms. The logsignature transform extracts the same information as the signature transform, but represents it in a much more compact way, as image(log) is a proper subspace⁴ of $\prod_{k=1}^{N}(\mathbb{R}^d)^{\otimes k}$. Its dimension is

$$w(d,N) = \sum_{k=1}^{N} \frac{1}{k} \sum_{i \mid k} \mu\!\left(\frac{k}{i}\right) d^{\,i},$$

which is known as Witt's formula (Lothaire, 1997), where $\mu$ is the Möbius function.
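To illustrate Eq. (3) and Witt's formula, here is a small NumPy sketch, ours rather than Signatory's implementation: it computes the depth-2 signature of a sequence by folding Chen's identity over the increments, and the logsignature dimension w(d, N).

```python
import numpy as np

def sig_depth2(x):
    """Depth-2 signature of a sequence x of shape (L, d), via Eq. (3):
    Sig((x1,...,xL)) = exp(x2-x1) * ... * exp(xL-x_{L-1}), truncated at N=2."""
    d = x.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for v in np.diff(x, axis=0):              # increments x_{i+1} - x_i
        E1, E2 = v, np.outer(v, v) / 2.0      # exp(v) truncated at depth 2
        S2 = S2 + E2 + np.outer(S1, E1)       # depth-2 term of Chen's identity
        S1 = S1 + E1                          # depth-1 term
    return S1, S2

def mobius(n):
    """Naive Mobius function, adequate for the small arguments used here."""
    if n == 1:
        return 1
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0                      # squared prime factor
            result = -result
        p += 1
    return -result if m > 1 else result

def logsig_dim(d, N):
    """Dimension of the depth-N logsignature, by Witt's formula."""
    return sum(sum(mobius(k // i) * d**i for i in range(1, k + 1) if k % i == 0) // k
               for k in range(1, N + 1))

print(logsig_dim(2, 3))  # 5: levels of dimension 2, 1, 2 in the free Lie algebra
```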
The paper presents the first GPU-capable library implementing the _"signature"_ and _"log-signature"_ functions as well as their gradients. It introduces these transformations to a machine learning audience, as well as their recent uses in ML, then proposes algorithmic improvements that reduce the necessary computation. The resulting library is benchmarked against existing implementations, and the code, benchmarks, and proofs are included in supplementary materials.
SP:22fbfa80cf81ea79a19faee749e9c8b2e23f1f3f
K-PLUG: KNOWLEDGE-INJECTED PRE-TRAINED LANGUAGE MODEL FOR NATURAL LANGUAGE UNDERSTANDING AND GENERATION
1 INTRODUCTION. Pre-trained language models (PLMs), such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019), have made remarkable breakthroughs in many natural language understanding (NLU) tasks, including text classification, reading comprehension, and natural language inference. These models are trained on large-scale text corpora with self-supervision based on either bi-directional or auto-regressive pre-training. Equally promising performance has been achieved in natural language generation (NLG) tasks, such as machine translation and text summarization, by MASS (Song et al., 2019), UniLM (Dong et al., 2019), BART (Lewis et al., 2020), T5 (Raffel et al., 2019), PEGASUS (Zhang et al., 2020), and ProphetNet (Yan et al., 2020). In contrast to the NLU-oriented models, these approaches adopt Transformer-based sequence-to-sequence models to jointly pre-train both the encoder and the decoder. While these PLMs can learn rich semantic patterns from raw text data and thereby enhance downstream NLP applications, many of them do not explicitly model domain-specific knowledge. As a result, they may not be sufficient for capturing the human-curated or domain-specific knowledge that is necessary for tasks in a certain domain, such as e-commerce scenarios. In order to overcome this limitation, several recent studies have proposed to enrich PLMs with explicit knowledge, including knowledge bases (KBs) (Zhang et al., 2019; Peters et al., 2019; Xiong et al., 2020; Wang et al., 2019; 2020), lexical relations (Lauscher et al., 2019; Wang et al., 2020), word sense (Levine et al., 2020), part-of-speech tags (Ke et al., 2019), and sentiment polarity (Ke et al., 2019; Tian et al., 2020). However, these methods only integrate domain-specific knowledge into the encoder, and the decoding process in many NLG tasks benefits little from this knowledge. ¹ Our code is available at https://github.com/ICLR21Anonymous/knowledge_pretrain. To mitigate this problem, we propose a Knowledge-injected Pre-trained Language model that is suitable for both Natural Language Understanding and Generation (K-PLUG). Different from existing knowledge-injected PLMs, K-PLUG integrates knowledge into pre-training for both the encoder and the decoder, and thus K-PLUG can be applied to both downstream knowledge-driven NLU and NLG tasks. We verify the performance of the proposed method in various e-commerce scenarios. In the proposed K-PLUG, we formulate the learning of four types of domain-specific knowledge: e-commerce domain-specific knowledge bases, aspects of product entities, categories of product entities, and unique selling propositions (USPs) (Garrett, 1961) of product entities. Specifically, the e-commerce KB stores standardized product attribute information, product aspects are features that play a crucial role in understanding product information, product categories are the backbones for constructing taxonomies for organization, and USPs are the essence of what differentiates a product from its competitors. K-PLUG learns these types of knowledge within a unified PLM, enhancing performance on various language understanding and generation tasks.
To effectively learn these four types of valuable domain-specific knowledge in K-PLUG, we propose five new pre-training objectives: knowledge-aware masked language model (KMLM), knowledge-aware masked sequence-to-sequence (KMS2S), product entity aspect boundary detection (PEABD), product entity category classification (PECC), and product entity aspect summary generation (PEASG). Among these objectives, KMLM and KMS2S learn to predict masked single tokens and masked spans of multiple tokens, respectively, that are associated with domain-specific knowledge rather than general information; PEABD determines the boundaries between descriptions of different product aspects given the full product information; PECC identifies the product category that each product belongs to; and PEASG generates a summary for each individual product aspect based on the entire product description. After pre-training K-PLUG, we fine-tune it on three domain-specific NLP tasks, namely e-commerce knowledge base completion, abstractive product summarization, and multi-turn dialogue. The results show that K-PLUG significantly outperforms comparative models on all these tasks. Our main contributions can be summarized as follows:

• We present K-PLUG, which learns domain-specific knowledge for both the encoder and the decoder in a pre-trained language model framework, benefiting both NLG and NLU tasks.
• We formulate the learning of four types of domain-specific knowledge in e-commerce scenarios: e-commerce domain-specific knowledge bases, aspects of product entities, categories of product entities, and unique selling propositions of product entities, which provide critical information for many applications in the e-commerce domain. Specifically, five self-supervised objectives are proposed to learn these four types of knowledge within a unified PLM.
• Our proposed model exhibits clear effectiveness in many downstream tasks in the e-commerce scenario, including e-commerce KB completion, abstractive product summarization, and multi-turn dialogue.
2.2 INJECTING KNOWLEDGE INTO PLMS. Recent work investigates how to incorporate knowledge into PLMs for NLU. ERNIE (Sun et al., 2019) enhances language representations with entity/phrase-level masking. ERNIE (Zhang et al., 2019) identifies and links entity mentions in text to their corresponding entities in a KB. Similar to ERNIE (Zhang et al., 2019), KnowBERT (Peters et al., 2019) injects KBs into the PLM. Xiong et al. (2020) leverage an entity-replacement pre-training objective to learn better representations for entities. KEPLER (Wang et al., 2019) adopts a knowledge-embedding objective in pre-training. Besides, SKEP (Tian et al., 2020), SenseBERT (Levine et al., 2020), SentiLR (Ke et al., 2019), and K-ADAPTER (Wang et al., 2020) propose to integrate sentiment knowledge, word sense, sentiment polarity, and lexical relations into PLMs, respectively. However, most of these studies focus on integrating knowledge for language understanding tasks; work utilizing domain-specific knowledge in pre-training for language generation tasks is limited. Inspired by this work, we construct K-PLUG, which learns domain-specific knowledge within a PLM for both NLU and NLG tasks.

3 KNOWLEDGE-INJECTED PRE-TRAINING. In this section, we explain the data used to pre-train K-PLUG, its model architecture, and our pre-training objectives. 3.1 DATA PREPARATION. We collect the pre-training data from a mainstream Chinese e-commerce platform², which contains approximately 25 million textual product descriptions and covers 40 product categories. With an average length of 405 tokens, these product descriptions constitute a corpus with a size of 10B Chinese characters. Each product description consists of information on 10.7 product aspects on average, and each product aspect is accompanied by a summary highlighting its prominent features, as shown in Figure 1(a). Additionally, the e-commerce KB and USPs (further explained below) used in our pre-training data are as specified by the e-commerce platform and its online stores. ² https://www.jd.com/ 3.2 MODEL ARCHITECTURE. K-PLUG adopts the standard sequence-to-sequence Transformer architecture (Vaswani et al., 2017), consisting of a 6-layer encoder and a 6-layer decoder, as in Song et al. (2019). We set the size of hidden vectors to 768 and the number of self-attention heads to 12. More details about the experimental settings are in the appendix. 3.3 KNOWLEDGE FORMULATION AND PRE-TRAINING OBJECTIVES. We formulate the learning of four types of knowledge in a unified PLM: e-commerce KB, aspects of product entities, categories of product entities, and USPs of product entities. Specifically, the e-commerce KB stores standardized product attribute information, e.g., (Material: Cotton) and (Collar Type: Pointed Collar); it provides details about the products (Logan IV et al., 2017). Aspects of product entities are features of a product, such as the sound quality of a stereo speaker (Li et al., 2020).
Categories of product entities, such as Clothing and Food, are widely used by e-commerce platforms to organize their products so as to present structured offerings to their customers (Luo et al., 2020; Dong et al., 2020). USPs of product entities are the essence of what differentiates a product from its competitors (Garrett, 1961). For example, a stereo speaker's USP exhibiting its supreme sound quality could be "crystal clear stereo sound". An effective USP immediately motivates the purchasing behavior of potential buyers. We propose and evaluate five novel self-supervised pre-training objectives to learn the above-mentioned four types of knowledge in the K-PLUG model (see Figure 1). Knowledge-aware Masked Language Model (KMLM). Inspired by BERT (Devlin et al., 2019), we adopt the masked language model (MLM) to train the Transformer encoder as one of our pre-training objectives; it learns to predict the masked tokens in the source sequence (e.g., "The company is [MASK] at the foot of a hill."). Similar to BERT, we mask 15% of all tokens in a text sequence; 80% of the masked tokens are replaced with the [MASK] token, 10% with a random token, and 10% are left unchanged. In particular, given an original text sequence $x = (x_1, \ldots, x_m, \ldots, x_M)$ with M tokens, a masked sequence is produced by masking $x_m$ in one of the three ways explained above, e.g., replacing $x_m$ with [MASK] to create $\tilde{x} = (x_1, \ldots, [\mathrm{MASK}], \ldots, x_M)$. MLM aims to model the conditional likelihood $P(x_m \mid \tilde{x})$, and the loss function is

$$\mathcal{L}_{\mathrm{MLM}} = \log P(x_m \mid \tilde{x}). \qquad (1)$$

The major difference from BERT is that our KMLM prioritizes knowledge tokens, which contain knowledge regarding product attributes and USPs, when selecting positions to mask; in the case that the knowledge tokens make up less than 15% of all tokens, it randomly picks non-knowledge tokens to complete the masking.
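A minimal sketch of the KMLM masking procedure just described; the function, the way knowledge positions are supplied, and the vocabulary handling are our own simplifications.

```python
import random

MASK = "[MASK]"

def kmlm_mask(tokens, knowledge_idx, vocab, ratio=0.15):
    """Knowledge-aware masking: knowledge positions (attribute/USP tokens,
    given here as the set `knowledge_idx`) are selected first; if they cover
    less than `ratio` of the sequence, random non-knowledge positions fill
    the rest. Replacement follows BERT's 80/10/10 scheme."""
    n_mask = max(1, int(ratio * len(tokens)))
    pool = list(knowledge_idx)
    random.shuffle(pool)
    chosen = pool[:n_mask]
    if len(chosen) < n_mask:                       # complete with non-knowledge tokens
        rest = [i for i in range(len(tokens)) if i not in knowledge_idx]
        random.shuffle(rest)
        chosen += rest[:n_mask - len(chosen)]
    masked, targets = list(tokens), {}
    for i in chosen:
        targets[i] = tokens[i]                     # prediction target x_m
        r = random.random()
        if r < 0.8:
            masked[i] = MASK                       # 80%: [MASK] token
        elif r < 0.9:
            masked[i] = random.choice(vocab)       # 10%: random token
        # remaining 10%: left unchanged
    return masked, targets
```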
This paper proposes a pre-trained language model for the e-commerce domain. Specifically, the authors design five pre-training objectives to incorporate various kinds of domain knowledge into models with an encoder-decoder architecture. When further fine-tuned on language understanding and generation tasks in the e-commerce domain, the proposed model, named K-PLUG, outperforms existing baseline models, including those pre-trained on general domains. The paper is generally easy to follow. The designs of the pre-training objectives are reasonable and empirically effective. Experiments are solid and convincing.
SP:42107a5481baa3bdf72b965d0db08ef92b78a92f
Learning Self-Similarity in Space and Time as a Generalized Motion for Action Recognition
1 INTRODUCTION . Learning spatio-temporal dynamics is the key to video understanding . To this end , extending convolutional neural networks ( CNNs ) with spatio-temporal convolution has been actively investigated in recent years ( Tran et al. , 2015 ; Carreira & Zisserman , 2017 ; Tran et al. , 2018 ) . The empirical results so far indicate that spatio-temporal convolution alone is not sufficient for grasping the whole picture ; it often learns irrelevant context bias rather than motion information ( Materzynska et al. , 2020 ) and thus the additional use of optical flow turns out to boost the performance in most cases ( Carreira & Zisserman , 2017 ; Lin et al. , 2019 ) . Motivated by this , recent action recognition methods learn to extract explicit motion , i.e. , flow or correspondence , between feature maps of adjacent frames and they improve the performance indeed ( Li et al. , 2020c ; Kwon et al. , 2020 ) . But , is it essential to extract such an explicit form of flows or correspondences ? How can we learn a richer and more robust form of motion information for videos in the wild ? In this paper , we propose to learn spatio-temporal self-similarity ( STSS ) representation for video understanding . Self-similarity is a relational descriptor for an image that effectively captures intrastructures by representing each local region as similarities to its spatial neighbors ( Shechtman & Irani , 2007 ) . Given a sequence of frames , i.e. , a video , it extends along the temporal dimension and thus represents each local region as similarities to its neighbors in space and time . By converting appearance features into relational values , STSS enables a learner to better recognize structural patterns in space and time . For neighbors at the same frame it computes a spatial self-similarity map , while for neighbors at a different frame it extracts a motion likelihood map . If we fix our attention to the similarity map to the very next frame within STSS and attempt to extract a single displacement vector to the most likely position at the frame , the problem reduces to optical flow , which is a particular type of motion information . In contrast , we leverage the whole volume of STSS and let our model learn to extract an effective motion representation from it in an end-toend manner . With a sufficient volume of neighborhood in space and time , it effectively captures long-term interaction and fast motion in the video , leading to robust action recognition . We introduce a neural block for STSS representation , dubbed SELFY , that can be easily inserted into neural architectures and learned end-to-end without additional supervision . Our experimental analysis demonstrates its superiority over previous methods for motion modeling as well as its complementarity to spatio-temporal features from direct convolutions . On the standard benchmarks for action recognition , Something-Something V1 & V2 , Diving-48 , and FineGym , the proposed method achieves the state-of-the-art results . 2 RELATED WORK . Video action recognition . Video action recognition is a task to categorize videos into pre-defined action classes . One of the conventional topics in action recognition is to capture temporal dynamics in videos . In deep learning , many approaches attempt to learn temporal dynamics in different ways : Two-stream networks with external optical flows ( Simonyan & Zisserman , 2014 ; Wang et al. , 2016 ) , recurrent networks ( Donahue et al. , 2015 ) , and 3D CNNs ( Tran et al. 
2015; Carreira & Zisserman, 2017). Recent approaches have introduced advanced 3D CNNs (Tran et al., 2018; 2019; Feichtenhofer, 2020; Lin et al., 2019; Fan et al., 2020) and show the effectiveness of capturing spatio-temporal features, so that 3D CNNs have now become a de facto approach to learning temporal dynamics in video. However, spatio-temporal convolution is vulnerable unless relevant features are well aligned across frames within the fixed-sized kernel. To address this issue, a few methods adaptively translate the kernel offsets with deformable convolutions (Zhao et al., 2018; Li et al., 2020a), while several methods (Feichtenhofer et al., 2019; Li et al., 2020b) modulate other hyper-parameters, e.g., a higher frame rate or larger spatial receptive fields. Unlike these methods, we address the problem of spatio-temporal convolution with a sufficient volume of STSS, capturing far-sighted spatio-temporal relations. Learning motion features. Since external optical flow helps 3D CNNs improve action recognition accuracy (Carreira & Zisserman, 2017; Zolfaghari et al., 2018; Tran et al., 2018), several approaches try to learn frame-by-frame motion features from RGB sequences inside neural architectures. Fan et al. (2018) and Piergiovanni & Ryoo (2019) internalize TV-L1 (Zach et al., 2007) optical flows into the CNN. Frame-wise feature differences (Sun et al., 2018b; Lee et al., 2018; Jiang et al., 2019; Li et al., 2020c) are also utilized as motion features. Recent correlation-based methods (Wang et al., 2020; Kwon et al., 2020) adopt a correlation operator (Dosovitskiy et al., 2015; Sun et al., 2018a; Yang & Ramanan, 2019) to learn motion features between adjacent frames. However, these methods compute frame-by-frame motion features between two adjacent frames and then rely on stacked spatio-temporal convolutions to capture long-range motion dynamics. We propose to learn STSS features, as generalized motion features, that capture both short-term and long-term interactions in the video. Self-similarity. Self-similarity represents an internal geometric layout of images. It is widely used in many computer vision tasks, such as object detection (Shechtman & Irani, 2007), image retrieval (Hörster & Lienhart, 2008), and semantic correspondence matching (Kim et al., 2015; 2017). In the video domain, Shechtman & Irani (2007) first introduce the concept of STSS and transform the STSS into a hand-crafted local descriptor for action detection. Inspired by this work, early methods adopt self-similarities for capturing view-invariant temporal patterns (Junejo et al., 2008; 2010; Körner & Denzler, 2013), but they use temporal self-similarities only, due to computational costs. Recently, several non-local approaches (Wang et al., 2018; Liu et al., 2019) utilize STSS for capturing long-range dynamics of videos. However, they use STSS for reweighting or aligning visual features, which is an indirect way of using STSS. Different from these methods, our method leverages the full STSS directly as generalized motion information and learns an effective representation for action recognition within a video-processing architecture. To the best of our knowledge, our work is the first to learn STSS representations using modern CNNs. The contributions of our paper can be summarized as follows.
First, we revisit the notion of self-similarity and propose to learn generalized, far-sighted motion representations from STSS. Second, we implement STSS representation learning as a neural block, dubbed SELFY, that can be integrated into existing neural architectures. Third, we provide comprehensive evaluations of SELFY, achieving the state of the art on benchmarks: Something-Something V1 & V2, Diving-48, and FineGym. 3 OUR APPROACH. In this section, we first revisit the notion of self-similarity and discuss its relation to motion. We then introduce our method for learning an effective spatio-temporal self-similarity representation, which can be easily integrated into video-processing architectures and learned end-to-end. 3.1 SELF-SIMILARITY REVISITED. Self-similarity is a relational descriptor that suppresses variations in appearance and reveals structural patterns in images or videos (Shechtman & Irani, 2007). Given an image feature map $I \in \mathbb{R}^{X \times Y \times C}$, self-similarity transformation of $I$ results in a 4D tensor $S \in \mathbb{R}^{X \times Y \times U \times V}$, whose elements are defined as $S_{x,y,u,v} = \mathrm{sim}(I_{x,y}, I_{x+u,y+v})$, where $\mathrm{sim}(\cdot,\cdot)$ is a similarity function, e.g., cosine similarity. Here, $(x,y)$ is a query coordinate while $(u,v)$ is a spatial offset from it. To impose locality, the offset is restricted to a neighborhood: $(u,v) \in [-d_U, d_U] \times [-d_V, d_V]$, so that $U = 2d_U + 1$ and $V = 2d_V + 1$, respectively. By converting the $C$-dimensional appearance feature $I_{x,y}$ into a $UV$-dimensional relational feature $S_{x,y}$, it suppresses variations in appearance and reveals spatial structures in the image. Note that the self-similarity transformation closely relates to conventional cross-similarity (or correlation) across two different feature maps ($I, I' \in \mathbb{R}^{X \times Y \times C}$), which can be defined as $S_{x,y,u,v} = \mathrm{sim}(I_{x,y}, I'_{x+u,y+v})$. Given two images of a moving object, the cross-similarity transformation effectively captures motion information and thus is commonly used in optical flow and correspondence estimation (Dosovitskiy et al., 2015; Sun et al., 2018a; Yang & Ramanan, 2019). For a sequence of frames, i.e., a video, one can naturally extend the spatial self-similarity along the temporal axis. Let $V \in \mathbb{R}^{T \times X \times Y \times C}$ denote a feature map of the video with $T$ frames. Spatio-temporal self-similarity (STSS) transformation of $V$ results in a 6D tensor $S \in \mathbb{R}^{T \times X \times Y \times L \times U \times V}$, whose elements are defined as $S_{t,x,y,l,u,v} = \mathrm{sim}(V_{t,x,y}, V_{t+l,x+u,y+v})$, (1) where $(t,x,y)$ is the spatio-temporal coordinate and $(l,u,v)$ is a spatio-temporal offset from it. In addition to the locality of spatial offsets above, the temporal offset $l$ is also restricted to a temporal neighborhood: $l \in [-d_L, d_L]$, so that $L = 2d_L + 1$. What types of information does STSS describe? Interestingly, for each time $t$, the STSS tensor $S$ can be decomposed along the temporal offset $l$ into a single spatial self-similarity tensor (when $l = 0$) and $2d_L$ spatial cross-similarity tensors (when $l \neq 0$); the partial tensors with a small offset (e.g., $l = -1$ or $+1$) collect motion information from adjacent frames, and those with larger offsets capture it from frames further away, both forward and backward in time. Unlike previous approaches to learning motion (Dosovitskiy et al., 2015; Wang et al., 2020; Kwon et al., 2020), which rely on cross-similarity between adjacent frames, STSS allows us to take a generalized, far-sighted view on motion, i.e.,
both short-term and long-term, both forward and backward, as well as spatial self-motion. 3.2 SPATIO-TEMPORAL SELF-SIMILARITY REPRESENTATION LEARNING. By leveraging the rich information in STSS, we propose to learn a generalized motion representation for video understanding. To achieve this goal without additional supervision, we design a neural block, dubbed SELFY, which can be inserted into a video-processing architecture and learned end-to-end. The overall structure is illustrated in Fig. 2. It consists of three steps: self-similarity transformation, feature extraction, and feature integration. Given the input video feature tensor $V$, the self-similarity transformation step converts it to the STSS tensor $S$ as in Eq. (1). In the following, we describe the feature extraction and integration steps.
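To make the transformation of Eq. (1) concrete, the sketch below computes the STSS tensor with cosine similarity. It is an illustrative reimplementation, not the authors' code; the zero padding at space-time boundaries, the function name, and the default offsets are assumptions.

```python
import numpy as np

def stss(V, dL=1, dU=3, dV=3):
    """Spatio-temporal self-similarity (Eq. 1), a minimal sketch.

    V: video feature map of shape (T, X, Y, C).
    Returns S of shape (T, X, Y, L, U, V) with L = 2*dL + 1, etc.
    Out-of-range neighbors are handled by zero padding (an assumption).
    """
    T, X, Y, C = V.shape
    # L2-normalize features so the dot product equals cosine similarity.
    Vn = V / (np.linalg.norm(V, axis=-1, keepdims=True) + 1e-8)
    # Zero-pad space and time so every (l, u, v) offset stays in bounds.
    P = np.zeros((T + 2 * dL, X + 2 * dU, Y + 2 * dV, C), dtype=V.dtype)
    P[dL:dL + T, dU:dU + X, dV:dV + Y] = Vn
    S = np.empty((T, X, Y, 2 * dL + 1, 2 * dU + 1, 2 * dV + 1), dtype=V.dtype)
    for li, l in enumerate(range(-dL, dL + 1)):
        for ui, u in enumerate(range(-dU, dU + 1)):
            for vi, v in enumerate(range(-dV, dV + 1)):
                # Neighbor features at offset (l, u, v) for every (t, x, y).
                nb = P[dL + l:dL + l + T, dU + u:dU + u + X, dV + v:dV + v + Y]
                S[:, :, :, li, ui, vi] = np.einsum('txyc,txyc->txy', Vn, nb)
    return S
```

A learned feature extractor would then consume the $(L, U, V)$ similarity volume at each $(t, x, y)$ position.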
This submission proposes a motion representation method based on spatio-temporal self-similarity (STSS), which represents each local region as similarities to its neighbors in both the spatial and temporal dimensions. While previous works (e.g., Refs. [1], [2], [5] listed here) utilize STSS for feature extraction, the authors claim that this work is the first to learn STSS representations with modern CNN architectures. The proposed method is implemented as a neural block, SELFY, which can be inserted into neural architectures and learned end-to-end without additional supervision. On three standard human action recognition datasets, Something-Something V1 & V2, Diving-48, and FineGym, the proposed method achieves strong empirical results.
SP:970151fd51696294ccd5746783a07d4cfab90054
Zero-Cost Proxies for Lightweight NAS
1 INTRODUCTION. Instead of manually designing neural networks, neural architecture search (NAS) algorithms are used to automatically discover the best ones (Tan & Le, 2019a; Liu et al., 2019; Bender et al., 2018). Early work by Zoph & Le (2017) proposed using a reinforcement learning (RL) controller that constructs candidate architectures; these are evaluated, and feedback is then provided to the controller based on the performance of each candidate. One major problem with this basic NAS methodology is that each evaluation is very costly – typically on the order of hours or days to fully train a single neural network. We focus on this evaluation phase – we propose using proxies that require a single minibatch of data and a single forward/backward propagation pass to score a neural network. This is inspired by recent pruning-at-initialization work by Lee et al. (2019), Wang et al. (2020) and Tanaka et al. (2020), wherein a per-parameter saliency metric is computed before training to inform parameter pruning. Can we use such saliency metrics to score an entire neural network? Furthermore, can we use these "single minibatch" metrics to rank and compare multiple neural networks for use within NAS? If so, how do we best integrate these metrics within existing NAS algorithms such as RL or evolutionary search? These are the questions that we hope to (empirically) tackle in this work, with the goal of making NAS less compute-hungry. Our contributions are:
• Zero-cost proxies: We adapt pruning-at-initialization metrics for use with NAS. This requires these metrics to operate at the granularity of an entire network rather than individual parameters – we devise and validate approaches that aggregate parameter-level metrics in a manner suitable for ranking candidates during NAS search.
• Comparison to conventional proxies: We perform a detailed comparison between zero-cost and conventional NAS proxies that use a form of reduced-computation training. First, we quantify the rank consistency of conventional proxies on large-scale datasets: 15k models vs. the 50 models used in (Zhou et al., 2020). Second, we show that zero-cost proxies can match or exceed the rank consistency of conventional proxies.
• Ablations on NAS benchmarks: We perform ablations of our zero-cost proxies on five different NAS benchmarks (NAS-Bench-101/201/NLP/ASR and PyTorchCV) to both test the zero-cost metrics under different settings and expose properties of successful metrics.
• Integration with NAS: Finally, we propose two ways to use zero-cost metrics effectively within NAS algorithms, which we evaluate with random search, reinforcement learning, aging evolution and predictor-based search. For all algorithms and three NAS datasets we show significant speedups, up to 4× for NAS-Bench-101 compared to the current state of the art.
2 RELATED WORK. NAS efficiency. To decrease NAS search time, various techniques have been used in the literature. Pham et al. (2018) and Cai et al. (2018) use weight sharing between candidate models to decrease the training time during evaluation. Liu et al. (2019) and others use smaller datasets (CIFAR-10) as a proxy for the full task (ImageNet1k). In EcoNAS, Zhou et al. (2020) extensively investigated reduced-training proxies wherein input size, model size, number of training samples and number of epochs are reduced in the NAS evaluation phase.
We compare to EcoNAS in this work to elucidate how well our zero-cost proxies perform compared to familiar and widely-used conventional proxies. Pruning. The goal of pruning is to reduce the number of parameters in a neural network; one way to do this is to identify a saliency (importance) metric for each parameter and remove the less-important parameters. For example, Han et al. (2015), Frankle & Carbin (2019) and others use parameter magnitudes as the criterion, while LeCun et al. (1990), Hassibi & Stork (1993) and Molchanov et al. (2017) use gradients. However, the aforementioned works require training before computing the saliency criterion. A new class of pruning-at-initialization algorithms, which require no training, was introduced by Lee et al. (2019) and extended by Wang et al. (2020) and Tanaka et al. (2020). A single forward/backward propagation pass is used to compute a saliency criterion which is successfully used to heavily prune neural networks before training. We extend these pruning-at-initialization criteria towards scoring entire neural networks, and we investigate their use with NAS algorithms. Intersection between pruning and NAS. Concepts from pruning have been used within NAS multiple times. For example, Mei et al. (2020) use channel pruning in their AtomNAS work to arrive at customized multi-kernel-size convolutions (mixconvs, as introduced by Tan & Le (2019b)). In their Blockswap work, Turner et al. (2020) use Fisher information at initialization to score different lightweight primitives that are substituted into a neural network to decrease computation. This is the earliest work we could find that attempts to perform a type of NAS by scoring neural networks without training using a pruning criterion. More recently, Mellor et al. (2020) introduced a new metric for scoring neural networks at initialization based on the correlation of Jacobians with different inputs. They perform "NAS without training" by performing random search with their zero-cost metric (jacob_cov) to rank neural networks instead of using accuracy. We include jacob_cov in our analysis and introduce five more zero-cost metrics in this work. 3 PROXIES FOR NEURAL NETWORK ACCURACY. 3.1 CONVENTIONAL NAS PROXIES (ECONAS). In conventional sample-based NAS, a proxy training regime is often used to predict a model's accuracy instead of full training. Zhou et al. (2020) investigate conventional proxies in depth by computing the Spearman rank correlation coefficient (Spearman ρ) of a proxy task to final test accuracy. The proxy used is reduced-computation training, wherein one of the following four variables is reduced: (1) number of epochs, (2) number of training samples, (3) input resolution, (4) model size (controlled through the number of channels after the first convolution). Even though such proxies were used in many prior works, EcoNAS is the first systematic study of conventional proxy tasks that we found. One main finding by Zhou et al. (2020) is that using approximately 1/4 of the model size and input resolution, all training samples, and 1/10 of the number of epochs was a reasonable proxy which yielded the best results for their experiment (Zhou et al., 2020). 3.2 ZERO-COST NAS PROXIES. We present alternative proxies for network accuracy that can be used to speed up NAS.
A simple proxy that we use is grad_norm, in which we sum the Euclidean norm of the gradients after a single minibatch of training data. The other metrics listed below were previously introduced in the context of parameter pruning at the granularity of a single parameter – a saliency is computed to rank parameters and remove the least important ones. We adapt these metrics to score and rank entire neural network models for NAS. 3.2.1 SNIP, GRASP AND SYNAPTIC FLOW. In their snip work, Lee et al. (2019) proposed performing parameter pruning based on a saliency metric computed at initialization using a single minibatch of data. This saliency criterion approximates the change in loss when a specific parameter is removed. Wang et al. (2020) attempted to improve on the snip metric by approximating the change in gradient norm (instead of loss) when a parameter is pruned in their grasp objective. Finally, Tanaka et al. (2020) generalized these so-called synaptic saliency scores and proposed a modified version (synflow) which avoids layer collapse when performing parameter pruning. Instead of using a minibatch of training data and cross-entropy loss (as in snip or grasp), with synflow we compute a loss which is simply the product of all parameters in the network; therefore, no data is needed to compute this loss or the synflow metric itself. These are the three metrics: snip: $S_p(\theta) = \left| \frac{\partial \mathcal{L}}{\partial \theta} \odot \theta \right|$, grasp: $S_p(\theta) = -\left( H \frac{\partial \mathcal{L}}{\partial \theta} \right) \odot \theta$, synflow: $S_p(\theta) = \frac{\partial \mathcal{L}}{\partial \theta} \odot \theta$ (1) where $\mathcal{L}$ is the loss function of a neural network with parameters $\theta$, $H$ is the Hessian¹, $S_p$ is the per-parameter saliency, and $\odot$ is the Hadamard product. We extend these saliency metrics to score an entire neural network by summing over all $N$ parameters in the model: $S_n = \sum_{i}^{N} S_p(\theta)_i$. 3.2.2 FISHER. Theis et al. (2018) perform channel pruning by removing activation channels (and their corresponding parameters) that are estimated to have the least effect on the loss. They build on the work of Molchanov et al. (2017) and Figurnov et al. (2016). More recently, Turner et al. (2020) aggregated this fisher metric for all channels in a convolution primitive to quantify the importance of that primitive when it is replaced by a more efficient alternative. We further aggregate the fisher metric over all layers in a neural network to score an entire network, as shown in the following equations: fisher: $S_z(z) = \left( \frac{\partial \mathcal{L}}{\partial z} \odot z \right)^2$, $S_n = \sum_{i=1}^{M} S_z(z_i)$ (2) where $S_z$ is the saliency per activation $z$, and $M$ is the length of the vectorized feature map. 3.2.3 JACOBIAN COVARIANCE. This metric was purpose-designed to score neural networks in the context of NAS – we refer the reader to the original paper for detailed reasoning and derivation of the metric, which we call jacob_cov (Mellor et al., 2020). In brief, this metric captures the correlation of activations within a network when subject to different inputs within a minibatch of data – the lower the correlation, the better the network is expected to perform, as it can differentiate between different inputs well. 4 EMPIRICAL EVALUATION OF PROXY TASKS. Generally, most of the proxies presented in the previous section try to capture how trainable a neural network is by inspecting the gradients at the beginning of training. In this work, we refrain from attempting to explain precisely why each metric works (or does not work) and instead focus on the empirical evaluation of those metrics in different scenarios.
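As a concrete illustration of how per-parameter saliencies are aggregated into a single per-network score, a minimal PyTorch sketch of grad_norm, snip, and synflow follows; grasp is omitted since it additionally needs a Hessian-vector product. The all-ones input and parameter-sign trick used for synflow are common implementation details assumed here, not quoted from the paper.

```python
import torch
import torch.nn.functional as F

def gradnorm_and_snip(net, x, y):
    """grad_norm and snip scores from a single minibatch (x, y); a sketch."""
    net.zero_grad()
    loss = F.cross_entropy(net(x), y)
    loss.backward()
    params = [p for p in net.parameters() if p.grad is not None]
    grad_norm = sum(p.grad.norm(2).item() for p in params)
    # snip: sum over parameters of |dL/dtheta * theta| (Eq. 1, summed to S_n).
    snip = sum((p.grad * p).abs().sum().item() for p in params)
    return grad_norm, snip

def synflow(net, input_shape):
    """synflow score; data-free. The 'loss' is realized as the network output
    on an all-ones input after taking parameter absolute values (assumption)."""
    signs = []
    for p in net.parameters():          # work on |theta| so signals don't cancel
        signs.append(torch.sign(p.data))
        p.data.abs_()
    net.zero_grad()
    out = net(torch.ones(input_shape))
    out.sum().backward()
    score = sum((p.grad * p).sum().item() for p in net.parameters()
                if p.grad is not None)
    for p, s in zip(net.parameters(), signs):  # restore original signs
        p.data.mul_(s)
    return score
```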
We use the Spearman rank correlation coefficient (Spearman ρ) to quantify how well a proxy ranks models compared to the ground-truth ranking produced by final test accuracy (Daniel, 1990). ¹The full Hessian does not need to be explicitly constructed, as explained by Pearlmutter (1993). 4.1 NAS-BENCH-201. NAS-Bench-201 is a purpose-built benchmark for prototyping NAS algorithms (Dong & Yang, 2020). It contains 15,625 CNN models from a cell-based search space and corresponding training statistics. We first use NAS-Bench-201 to evaluate conventional proxies from EcoNAS, and then we evaluate our zero-cost proxies and compare the two approaches.
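For reference, the rank consistency just described can be measured with an off-the-shelf implementation of Spearman ρ; the variable names below are placeholders.

```python
from scipy.stats import spearmanr

def rank_consistency(proxy_scores, final_accuracies):
    """Spearman rho between a proxy's ranking and the ground-truth ranking.
    Inputs are equal-length sequences, one entry per candidate architecture."""
    rho, p_value = spearmanr(proxy_scores, final_accuracies)
    return rho
```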
This paper provides an extensive empirical evaluation of zero-cost proxies that can be combined with existing NAS methods to speed up search time. The proposed method builds on 'pruning-at-initialization' works, which compute gradient-based saliency metrics at initialization, and uses them as a proxy for the performance of a given neural architecture. Through extensive experiments, this paper compares the zero-cost proxies against conventional proxies, performs ablation studies on five NAS benchmarks, and validates the proposed proxies.
SP:2409111bd2e2211c6e3c11c4c4eaf494d14e3f44
Hard Masking for Explaining Graph Neural Networks
1 INTRODUCTION. Graph Neural Networks (GNNs) are a flexible and powerful family of models that build representations of nodes or edges on irregular graph-structured data and have received significant attention in recent years. These methods are based on the so-called "neighborhood aggregation" scheme, in which a node representation is learned by aggregating features from its neighbors, and have achieved state-of-the-art performance on node and graph classification tasks. Despite their popularity, approaches investigating their interpretability have received limited attention. This paper focuses on explaining or interpreting the rationale underlying a given prediction of an already trained graph neural network. Numerous approaches have been proposed in the literature for the general interpretability of machine learning models. The most popular are feature attribution methods that attribute importance to input features given an input prediction, either agnostic to the model parameters (Ribeiro et al., 2018; 2016) or using model-specific attribution approaches (Xu et al., 2015; Binder et al., 2016; Sundararajan et al., 2017). However, models learned over graph-structured data pose some unique challenges. Specifically, predictions on graphs are induced by a complex combination of nodes and paths of edges between them, in addition to the node features. Thus explanations for a prediction should ideally be a small subgraph of the input graph and a small subset of node features that are most influential for the prediction (Ying et al., 2019). The only existing approach for GNN explainability proposes to learn a real-valued graph mask that selects the important subgraph of the GNN's computation graph so as to maximize the mutual information with the GNN's prediction (Ying et al., 2019). We identify two crucial limitations of such an approach. Firstly, although mathematically tractable, a continuous mask, unlike a discrete mask, does not ensure sparsity – a desirable property for interpretability. Secondly, suitable notions of what constitutes an explanation in a GNN model, and of how to evaluate it, are missing. This paper proposes an alternative notion of interpretability for GNNs grounded in ideas from data compression in information theory. Specifically, we consider an explanation as a compressed form of the original feature matrix. The goodness of the explanation is measured by the expected deviation from the prediction of the underlying model. We formalize this idea of interpreting GNN decisions as an explicit optimization problem in a rate-distortion framework. A subgraph of the node's computational graph and its set of features are relevant for a classification decision if the expected classifier score remains nearly the same when the remaining features are randomized. This formulation is arguably a crisp, robust, and understandable notion of interpretability that is easy to evaluate. We propose a simple combinatorial procedure, ZORRO, that aims to find a sparse subset of features and nodes in the computational graph while adhering to a user-specified level of fidelity. Our method aims to find multiple disjoint explanations (whenever possible) that guarantee an acceptable lower bound on fidelity to the model's decision. Another key problem in post-hoc interpretability of GNNs is that of evaluating explanation methods.
Current evaluation methods, such as those used by GNNEXPLAINER, are primarily anecdotal and lack principled metrics. Secondly, especially for real-world datasets, there is no ground truth for the explanation, making comparison difficult. We, on the other hand, posit that an explanation is faithful to the underlying model if it retains enough predictive power – a crisp and measurable quantity. To this end, our optimization metric, fidelity, encodes an information-theoretic interpretation of explanation – if the explanation is highly predictive in expectation, then it is a high-quality explanation. We conducted extensive experimentation on three datasets and four diverse GNN approaches – Graph Convolution Networks (Kipf & Welling, 2017), Graph Attention Networks (Veličković et al., 2018), GIN (Xu et al., 2019), and APPNP (Klicpera et al., 2019). Our key findings are as follows. 1. We show that not one but multiple diverse explanations are possible that sufficiently explain a prediction. This multiplicity of explanations indicates the possible configurations that could be utilized by the model to arrive at a decision. 2. Unlike earlier mutual-information-preserving interpretability approaches, i.e., GNNEXPLAINER (Ying et al., 2019), we show that our explanations are both more predictive and sparser. We show that even with sparser explanations, our approach retains far more predictive capacity than GNNEXPLAINER. 3. We then analyze the explanations across multiple GNN models to showcase differences between their learning behavior. We specifically show that GNN models rely heavily on homophily and that prediction errors are due to an inability to capture homophilic signals from their neighborhoods. 2 RELATED WORK. Representation learning approaches on graphs encode graph structure, with or without node features, into low-dimensional vector representations using deep learning and nonlinear dimensionality reduction techniques. These representations are trained in an unsupervised (Perozzi et al., 2014; Khosla et al., 2019; Funke et al., 2020) or semi-supervised manner by using neighborhood aggregation strategies and task-based objectives (Kipf & Welling, 2017; Veličković et al., 2018). This work focuses on the post-hoc interpretability of decisions made by semi-supervised models based on graph convolution networks for node classification tasks. Inspired by the success of convolutional neural networks, the graph convolution network (GCN) (Kipf & Welling, 2017) generalizes the convolution operation to irregular graph data. GCN and several of its variants follow a neighborhood aggregation strategy where they compute a node's representation by recursive aggregation and transformation of the feature representations of its neighbors. For the node classification task, the final node representations are then used to predict the classes of unlabelled nodes. Interpretability in machine learning. Post-hoc approaches to model interpretability are popularized by feature attribution methods that aim to assign importance to input features given a prediction, either agnostic to the model parameters (Ribeiro et al., 2018; 2016) or using model-specific attribution approaches (Xu et al., 2015; Binder et al., 2016; Sundararajan et al., 2017). Instance-wise feature selection (IFS) approaches (Chen et al., 2018; Carter et al., 2018; Yoon et al.,
2018), on the other hand, focus on finding a sufficient feature subset or explanation that leads to little or no degradation of the prediction accuracy when other features are masked. The advantage of this formulation is that the output explanation has a precise meaning in terms of the predictive power of the chosen subset. Applying these works directly to graph models is infeasible due to the complex form of the explanation, which should consider the complex association among nodes in addition to the input features. Interpretability in GNNs. Model-agnostic approaches like ours to interpretability in GNNs include GNNEXPLAINER (Ying et al., 2019) and XGNN (Yuan et al., 2020). GNNEXPLAINER learns a real-valued graph mask and feature mask such that the mutual information with the GNN's predictions is maximized. XGNN proposed a reinforcement-learning-based graph generation approach to generate explanations for the predicted class of a graph. We instead focus on explaining node-level decisions. As a model-introspective approach, Pope et al. (2019) extended gradient-based saliency map methods to GCNs, which rely on propagating gradients/relevance from the output to the original model's input features. Other works (Kang et al., 2019; Idahl et al., 2019) focus on explaining unsupervised network representations, which is out of scope for the current work. 3 PROBLEM DEFINITION AND APPROACH. 3.1 BACKGROUND ON GNNS. Let $G = (V, E)$ be a graph where each node is associated with a $d$-dimensional input feature vector. Graph neural networks compute node representations by recursive aggregation and transformation of the feature representations of their neighbors, which are finally used for label prediction. Formally, for an $L$-layer GNN, let $x_n^{(\ell)}$ denote the feature representation of node $n \in V$ at a layer $\ell \in L$, and let $N_n$ denote the set of its 1-hop neighbors. $x_n^{(0)}$ corresponds to the input feature vector of $n$. The $\ell$-th layer of a GNN can then be described as an aggregation of node features from the previous layer followed by a transformation operation: $z_n^{(\ell)} = \mathrm{AGGREGATION}^{(\ell)}\left( \left\{ x_n^{(\ell-1)}, \left\{ x_j^{(\ell-1)} \mid j \in N_n \right\} \right\} \right)$ (1) $x_n^{(\ell)} = \mathrm{TRANSFORMATION}^{(\ell)}\left( z_n^{(\ell)} \right)$ (2) Each GNN defines its own aggregation function, which is differentiable and usually permutation-invariant. The transformation operation is usually a non-linear transformation employing the ReLU activation. The final node embedding $z_n^{(L)}$ is then used to make the predictions $\Phi(n) \leftarrow \arg\max \sigma\left( z_n^{(L)} W \right)$, (3) where $\sigma$ is a sigmoid or softmax function, depending on whether the node belongs to multiple classes or a single class, and $W$ is a learnable weight matrix. The $i$-th element of $z_n^{(L)} W$ corresponds to the (predicted) probability that node $n$ is assigned to some class $i$. 3.2 PROBLEM FORMULATION. We are interested in explaining the prediction $\Phi(n)$ of a GNN for any node $n$. We note that for a particular node $n$, the subgraph taking part in the computation of the neighborhood aggregation operation, see Eq. (1), fully determines the information used by the GNN to predict its class. In particular, for an $L$-layer GNN, this subgraph would be the graph induced on the nodes in the $L$-hop neighborhood of $n$. We will call this subgraph the computational graph of the query node. We would like to point out that the term computational graph should not be confused with the computational graph of the neural network.
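Before the problem formulation that follows, a minimal instance of the aggregation/transformation scheme of Eqs. (1)-(2) may be helpful; mean aggregation over a dense adjacency matrix is one illustrative choice of aggregator, not a prescription from this paper.

```python
import torch
import torch.nn as nn

class MeanAggGNNLayer(nn.Module):
    """One GNN layer: mean AGGREGATION over a node and its neighbors (Eq. 1)
    followed by a linear + ReLU TRANSFORMATION (Eq. 2); a sketch."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: dense (N, N) adjacency
        # assumed to include self-loops so x_n itself is aggregated.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        z = adj @ x / deg                   # AGGREGATION: neighborhood mean
        return torch.relu(self.linear(z))   # TRANSFORMATION
```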
Let $G(n) \subseteq G$ denote the computational graph of the node $n$. Let $X(n)$, or briefly $X$, denote the feature matrix restricted to the nodes of $G(n)$, where each row corresponds to the $d$-dimensional feature vector of the corresponding node in the computational graph. We formulate the task of explaining the model prediction for a node $n$ as finding a partition of the components of its computational graph into a subset $S$ of relevant nodes and features and its complement $S^c$ of non-relevant components. In particular, the subset $S$ should be such that fixing its value to the true values already determines the model output for almost all possible assignments to the non-relevant subset $S^c$. The subset $S$ is then returned as an explanation. To quantify relevance, we compute the expected fidelity of the model's prediction under noisy assignments to the non-relevant components. Let us denote by $Y_S$ the new perturbed feature matrix obtained by fixing the components of $S$ to their actual values and filling the rest with noisy entries. The values of the components in $S^c$ are drawn from some noise distribution $\mathcal{N}$. Let $S = \{V_s, F_s\}$ be the explanation with selected nodes $V_s$ and selected features $F_s$. Let $\mathbf{S}$ be the mask matrix such that each element $\mathbf{S}_{i,j} = 1$ if and only if the $i$-th node (in $G(n)$) and the $j$-th feature are included in the sets $V_s$ and $F_s$ respectively, and 0 otherwise: $Y_S = X \odot \mathbf{S} + Z \odot (\mathbf{1} - \mathbf{S})$, $Z \sim \mathcal{N}$, (4) where $\odot$ denotes element-wise multiplication and $\mathbf{1}$ a matrix of ones of the corresponding size. Figure 1 shows how the fixed elements are selected by $F_s$ and $V_s$. Definition. The fidelity of an explanation $S$ with respect to the graph neural network $\Phi$ and the noise distribution $\mathcal{N}$ is given by $F(S) = \mathbb{E}_{Y_S \mid Z \sim \mathcal{N}}\left[ \mathbb{1}_{\Phi(X) = \Phi(Y_S)} \right]$. (5) By fixing the fidelity to a certain user-defined threshold, say $\tau$, we are then interested in all possible disjoint sets of explanations that have fidelity at least $\tau$. More precisely, our resulting set of explanations $R$ is given as $R = \{ S_1, S_2, \ldots \mid \forall i\ F(S_i) \ge \tau \text{ and } \cap_i S_i = \emptyset \}$ (6) CONNECTION TO RATE-DISTORTION THEORY. Our problem formulation is inspired by rate-distortion theory (Sims, 2016), which addresses the problem of determining the minimal information of a source signal that should be communicated over a lossy channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion $D$. In our problem, we are interested in finding a small subset $S$ such that having knowledge only of the signal on $S$ and filling in the rest of the information randomly will almost surely preserve the class prediction, provided our chosen subset contains the information that is relevant for the model's decision. Rather than measuring distortion, i.e., disagreement in the model's decisions, we instead measure fidelity, i.e., agreement between the model's decisions on the original and the distorted signal. Distortion can be computed from fidelity as $D = 1 - F$. A schematic representation of our problem in the rate-distortion framework is shown in Figure 2.
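A Monte-Carlo estimate of the fidelity in Eq. (5) is straightforward to sketch. Gaussian noise is used here as one possible instantiation of the noise distribution $\mathcal{N}$, and the sample count is an arbitrary choice; neither is fixed by the paper.

```python
import torch

def fidelity(model, X, mask, n_samples=100, noise_std=1.0):
    """Monte-Carlo estimate of Eq. (5).

    X: (N, d) feature matrix of the computational graph.
    mask: (N, d) binary mask S; 1 keeps the true value, 0 gets noise.
    model: maps a feature matrix to the predicted class of the query node.
    """
    with torch.no_grad():
        original_pred = model(X)
        agree = 0
        for _ in range(n_samples):
            Z = noise_std * torch.randn_like(X)   # Z ~ N (Gaussian assumed)
            Y = X * mask + Z * (1 - mask)         # perturbed matrix, Eq. (4)
            agree += int(model(Y) == original_pred)
    return agree / n_samples
```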
This work proposes to explain graph neural networks using hard masking techniques. Specifically, it tries to find the node mask $V_s$ and feature mask $F_s$ that identify the most important information in the input, such that the masked input yields a high fidelity score. This work proposes a greedy method, ZORRO, to explore these hard masks, which can then be used as explanations of the prediction. Experimental results are interesting and promising.
SP:ee844974cf8fa5c95205cf27dfc9b80a277aa469
Approximation Algorithms for Sparse Principal Component Analysis
1 INTRODUCTION. Principal Component Analysis (PCA) and the related Singular Value Decomposition (SVD) are fundamental data analysis and dimension reduction tools in a wide range of areas, including machine learning, multivariate statistics and many others. They return a set of orthogonal vectors of decreasing importance that are often interpreted as fundamental latent factors that underlie the observed data. Even though the vectors returned by PCA and SVD have strong optimality properties, they are notoriously difficult to interpret in terms of the underlying processes generating the data (Mahoney & Drineas, 2009), since they are linear combinations of all available data points or all available features. The concept of Sparse Principal Components Analysis (SPCA) was introduced in the seminal work of (d'Aspremont et al., 2007), where sparsity constraints were enforced on the singular vectors in order to improve interpretability. A prominent example where sparsity improves interpretability is document analysis, where sparse principal components can be mapped to specific topics by inspecting the (few) keywords in their support (d'Aspremont et al., 2007; Mahoney & Drineas, 2009; Papailiopoulos et al., 2013). Formally, given a positive semidefinite (PSD) matrix $A \in \mathbb{R}^{n \times n}$, SPCA can be defined as follows:¹ $Z^* = \max_{x \in \mathbb{R}^n, \|x\|_2 \le 1} x^\top A x$, subject to $\|x\|_0 \le k$. (1) In the above formulation, $A$ is a covariance matrix representing, for example, all pairwise feature or object similarities for an underlying data matrix. Therefore, SPCA can be applied to either the object or the feature space of the data matrix, while the parameter $k$ controls the sparsity of the resulting vector and is part of the input. Let $x^*$ denote a vector that achieves the optimal value $Z^*$ in the above formulation. Then, intuitively, the optimization problem of Eq. (1) seeks a sparse, unit-norm vector $x^*$ that maximizes the data variance. It is well known that solving the above optimization problem is NP-hard (Moghaddam et al., 2006a) and that its hardness is due to the sparsity constraint. Indeed, if the sparsity constraint were removed, then the resulting optimization problem could be easily solved by computing the top left or right singular vector of $A$, and its maximal value $Z^*$ would equal the top singular value of $A$. Notation. We use bold letters to denote matrices and vectors. For a matrix $A \in \mathbb{R}^{n \times n}$, we denote its $(i,j)$-th entry by $A_{i,j}$; its $i$-th row by $A_{i*}$ and its $j$-th column by $A_{*j}$; its 2-norm by $\|A\|_2 = \max_{x \in \mathbb{R}^n, \|x\|_2 = 1} \|Ax\|_2$; and its (squared) Frobenius norm by $\|A\|_F^2 = \sum_{i,j} A_{i,j}^2$. We use the notation $A \succeq 0$ to denote that the matrix $A$ is symmetric positive semidefinite (PSD) and $\mathrm{Tr}(A) = \sum_i A_{i,i}$ to denote its trace, which is also equal to the sum of its singular values. Given a PSD matrix $A \in \mathbb{R}^{n \times n}$, its Singular Value Decomposition is given by $A = U \Sigma U^\top$, where $U$ is the matrix of left/right singular vectors and $\Sigma$ is the diagonal matrix of singular values. ¹Recall that the $p$-th power of the $\ell_p$ norm of a vector $x \in \mathbb{R}^n$ is defined as $\|x\|_p^p = \sum_{i=1}^n |x_i|^p$ for $0 < p < \infty$. For $p = 0$, $\|x\|_0$ is a semi-norm denoting the number of non-zero entries of $x$. 1.1 OUR CONTRIBUTIONS. We present three algorithms for SPCA and associated quality-of-approximation results (Theorems 2.2, 3.1, and 4.1). All three algorithms are simple, intuitive, and run in $O(n^{3.5})$ or less time.
They return a vector that is provably sparse and, when applied to the input covariance matrix $A$, provably captures a fraction of the optimal solution $Z^*$. We note that in all three algorithms, the output vector has a sparsity that depends on $k$ (the target sparsity of the original SPCA problem of Eq. (1)) and $\epsilon$ (an accuracy parameter between zero and one). The first algorithm is based on randomized, approximate matrix multiplication: it randomly (but non-uniformly) selects a subset of $O(k/\epsilon^2)$ columns of $A^{1/2}$ (the square root of the PSD matrix $A$) and computes its top right singular vector. The output of this algorithm is precisely this singular vector, padded with zeros to become a vector in $\mathbb{R}^n$. It turns out that this simple algorithm, which, surprisingly, has not been analyzed in prior work, returns an $O(k/\epsilon^2)$-sparse vector $y \in \mathbb{R}^n$ that satisfies (with constant probability that can be amplified as desired; see Section 2 for details): $y^\top A y \ge \frac{1}{2} Z^* - \epsilon \sqrt{Z^*} \cdot \sqrt{\mathrm{Tr}(A)/k}$. Notice that the above bound depends on both $Z^*$ and its square root and therefore is not a relative-error bound. The second term scales as a function of the trace of $A$ divided by $k$, which depends on the properties of the matrix $A$ and the target sparsity. The second algorithm is a deterministic thresholding scheme. It computes a small number of the top singular vectors of the matrix $A$ and then applies a deterministic thresholding scheme on those singular vectors to (eventually) construct a sparse vector $z \in \mathbb{R}^n$ that satisfies $z^\top A z \ge (1/2) Z^* - (3\epsilon/2)\,\mathrm{Tr}(A)$. Our analysis provides unconditional guarantees for the accuracy of the solution of this simple thresholding scheme. To the best of our knowledge, no such analyses have appeared in prior work (see Section 1.2 for details). The error bound of the second algorithm is weaker than the one provided by the first algorithm, but the second algorithm is deterministic and does not need to compute the square root (i.e., all singular vectors and singular values) of the matrix $A$. Our third algorithm provides novel bounds for the following standard convex relaxation of the problem of Eq. (1): $\max_{Z \in \mathbb{R}^{n \times n}, Z \succeq 0} \mathrm{Tr}(AZ)$ s.t. $\mathrm{Tr}(Z) \le 1$ and $\sum |Z_{i,j}| \le k$. (2) It is well known that the optimal value of Eq. (2) is at least the optimal value of Eq. (1). We present a novel, two-step rounding scheme that converts the optimal solution matrix $Z \in \mathbb{R}^{n \times n}$ to a vector $z \in \mathbb{R}^n$ that has expected sparsity² $\tilde{O}(k^2/\epsilon^2)$ and satisfies $z^\top A z \ge \gamma_Z (1 - \epsilon) \cdot Z^* - \epsilon$. Here, $\gamma_Z$ is a constant that depends precisely on the top singular value of $Z$, the condition number of $Z$, and the extent to which the SDP relaxation of Eq. (2) is able to capture the original problem (see Theorem 4.1 and the following discussion for details). To the best of our knowledge, this is the first analysis of a rounding scheme for the convex relaxation of Eq. (2) that does not assume a specific model for the covariance matrix $A$. Applications to Sparse Kernel PCA. Our algorithms have immediate applications to sparse kernel PCA (SKPCA), where the input matrix $A \in \mathbb{R}^{n \times n}$ is instead given implicitly as a kernel matrix whose entry $(i,j)$ is the value $k(i,j) := \langle \phi(X_{i*}), \phi(X_{j*}) \rangle$ for some kernel function $\phi$ that implicitly maps an observation vector into some high-dimensional feature space. Although $A$ is not explicit,
we can query all $O(n^2)$ entries of $A$ in $O(n^2)$ time, assuming an oracle that computes the kernel function $k$. We can then subsequently apply our SPCA algorithms and achieve polynomial-time runtimes with the same approximation guarantees. ²For simplicity of presentation and following the lines of (Fountoulakis et al., 2017), we assume that the rows and columns of the matrix $A$ have unit norm; this assumption was not necessary for the previous two algorithms and can be removed as in (Fountoulakis et al., 2017). We are also hiding a poly-logarithmic factor for simplicity, hence the $\tilde{O}(\cdot)$ notation. See Theorem 4.1 for a detailed statement. 1.2 PRIOR WORK. SPCA was formally introduced by (d'Aspremont et al., 2007); however, previously studied PCA approaches based on rotating (Jolliffe, 1995) or thresholding (Cadima & Jolliffe, 1995) the top singular vector of the input matrix seemed to work well, at least in practice, given sparsity constraints. Following (d'Aspremont et al., 2007), there has been an abundance of interest in SPCA. (Jolliffe et al., 2003) considered LASSO (SCoTLASS) on an $\ell_1$ relaxation of the problem, while (Zou & Hastie, 2005) considered a non-convex regression-type approximation, penalized similarly to LASSO. Additional heuristics based on LASSO (Ando et al., 2009) and non-convex $\ell_1$ regularizations (Zou & Hastie, 2005; Zou et al., 2006; Sriperumbudur et al., 2007; Shen & Huang, 2008) have also been explored. Random sampling approaches based on non-convex $\ell_1$ relaxations (Fountoulakis et al., 2017) have also been studied; we highlight that, unlike our approach, (Fountoulakis et al., 2017) solved a non-convex relaxation of the SPCA problem and thus perhaps relied on locally optimal solutions. Additionally, (Moghaddam et al., 2006b) considered a branch-and-bound heuristic motivated by greedy spectral ideas. (Journée et al., 2010; Papailiopoulos et al., 2013; Kuleshov, 2013; Yuan & Zhang, 2013) further explored other spectral approaches based on iterative methods similar to the power method. (Yuan & Zhang, 2013) specifically designed a sparse PCA algorithm with early stopping for the power method, based on the target sparsity. Another line of work focused on using semidefinite programming (SDP) relaxations (d'Aspremont et al., 2007; d'Aspremont et al., 2008; Amini & Wainwright, 2009). Notably, (Amini & Wainwright, 2009) achieved provable theoretical guarantees for the SDP and thresholding approach of (d'Aspremont et al., 2007) in a specific, high-dimensional spiked covariance model, in which a base matrix is perturbed by adding a sparse maximal eigenvector. In other words, the input matrix is the identity matrix plus a "spike", i.e., a sparse rank-one matrix. Despite the variety of heuristic-based sparse PCA approaches, very few theoretical guarantees have been provided for SPCA; this is partially explained by a line of hardness-of-approximation results. The sparse PCA problem is well known to be NP-hard (Moghaddam et al., 2006a). (Magdon-Ismail, 2017) shows that if the input matrix is not PSD, then even the sign of the optimal value cannot be determined in polynomial time unless P = NP, ruling out any multiplicative approximation algorithm. In the case where the input matrix is PSD, (Chan et al., 2016) shows that it is NP-hard to approximate the optimal value up to multiplicative $(1 + \epsilon)$ error, ruling out any polynomial-time approximation scheme (PTAS).
Moreover, they show Small-Set Expansion hardness for any polynomial-time constant-factor approximation algorithm, and also that the standard SDP relaxation might have an exponential gap. We conclude by summarizing prior work that offers provable guarantees (beyond the work of (Amini & Wainwright, 2009)), typically given some assumptions about the input matrix. (d'Aspremont et al., 2014) showed that the SDP relaxation can be used to find provable bounds when the covariance input matrix is formed by a number of data points sampled from Gaussian models with a single sparse singular vector. (Papailiopoulos et al., 2013) presented a combinatorial algorithm that analyzed a specific set of vectors in a low-dimensional eigenspace of the input matrix and presented relative-error guarantees for the optimal objective, under the assumption that the input covariance matrix has a decaying spectrum. (Asteris et al., 2011) gave a polynomial-time algorithm that solves sparse PCA exactly for input matrices of constant rank. (Chan et al., 2016) showed that sparse PCA can be approximated in polynomial time within a factor of $n^{-1/3}$ and also highlighted an additive PTAS of (Asteris et al., 2015) based on the idea of finding multiple disjoint components and solving bipartite maximum-weight matching problems. This PTAS needs time $n^{\mathrm{poly}(1/\epsilon)}$, whereas all of our algorithms have running times that are a low-degree polynomial in $n$.
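To illustrate the flavor of the first algorithm described above, here is a sketch of the randomized column-selection procedure. Sampling with replacement, the exact sample size, and the use of an eigendecomposition to form $A^{1/2}$ are assumptions on our part, not details fixed by the paper.

```python
import numpy as np

def spca_randomized(A, k, eps=0.5, seed=0):
    """Sketch of algorithm 1: sample O(k/eps^2) columns of A^{1/2} with
    probability proportional to their squared norms, take the top right
    singular vector of the sampled columns, and zero-pad to R^n."""
    rng = np.random.default_rng(seed)
    # Square root of the PSD matrix A via its eigendecomposition.
    w, U = np.linalg.eigh(A)
    B = U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.T   # B = A^{1/2}
    col_norms = (B ** 2).sum(axis=0)   # note: this equals diag(A) for PSD A
    probs = col_norms / col_norms.sum()
    s = int(np.ceil(k / eps ** 2))
    idx = np.unique(rng.choice(A.shape[0], size=s, p=probs))
    # Top right singular vector of the selected columns.
    _, _, Vt = np.linalg.svd(B[:, idx], full_matrices=False)
    y = np.zeros(A.shape[0])
    y[idx] = Vt[0]                     # unit-norm, supported on idx only
    return y
```

By construction, $y^\top A y = \|B y\|_2^2$ equals the squared top singular value of the sampled submatrix, which is what the quality-of-approximation bound controls.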
This paper proposes three simple algorithms for sparse principal component analysis (SPCA): a) randomized matrix multiplication; b) a deterministic thresholding scheme; and c) a semidefinite programming relaxation. All of the proposed algorithms look like naive combinations of existing techniques and simple sparsification steps. However, it is somewhat interesting to have novel theoretical guarantees for these simple strategies, whose error bounds depend on the properties of the input matrix and the target sparsity.
SP:0f29e5886a7840aacdbce931b6c795d43b545172
Should Ensemble Members Be Calibrated?
1 INTRODUCTION. Deep learning approaches achieve state-of-the-art performance in a wide range of applications, including image classification. However, these networks tend to be overconfident in their predictions; they often exhibit poor calibration. A system is well calibrated if, when the system makes a prediction with probability 0.6, that prediction is correct 60% of the time. Calibration is very important when deploying systems, especially in risk-sensitive tasks such as medicine (Jiang et al., 2012), autonomous driving (Bojarski et al., 2016), and economics (Gneiting et al., 2007). It was shown by Niculescu-Mizil & Caruana (2005) that shallow neural networks are well calibrated. However, Guo et al. (2017) found that more complex neural network models with deep structures do not exhibit the same behaviour. This work motivated recent research into calibration for general deep learning systems. Previous research has mainly examined calibration based on samples from the true data distribution, $\{x^{(i)}, y^{(i)}\}_{i=1}^{N} \sim p(x, \omega)$, $y^{(i)} \in \{\omega_1, \ldots, \omega_K\}$ (Zadrozny & Elkan, 2002; Vaicenavicius et al., 2019). This analysis relies on the limiting behaviour as $N \to +\infty$ to define a well-calibrated system: $P(y = \hat{y} \mid P(\hat{y}|x;\theta) = p) = p \iff \lim_{N \to +\infty} \frac{\sum_{i \in S_j^p} \delta(y^{(i)}, \hat{y}^{(i)})}{|S_j^p|} = p$ (1) where $S_j^p = \{i \mid P(\hat{y}^{(i)} = j \mid x^{(i)}; \theta) = p,\ i = 1, \ldots, N\}$ and $\hat{y}^{(i)}$ is the model prediction for $x^{(i)}$; $\delta(s,t) = 1$ if $s = t$, otherwise 0. However, Eq. (1) doesn't explicitly reflect the relation between $P(y = \hat{y} \mid P(\hat{y}|x;\theta) = p)$ and the underlying data distribution $p(x,y)$. In this work we examine this explicit relationship and use it to define a range of calibration evaluation criteria, including the standard sample-based criteria. One issue with deep-learning approaches is the large number of model parameters associated with the networks. Deep ensembles (Lakshminarayanan et al., 2017) are a simple, effective approach for handling this problem. They have been found to improve performance, as well as allowing measures of uncertainty. In the recent literature there have been "contradictory" empirical observations about the relationship between the calibration of the members of an ensemble and the calibration of the final ensemble prediction (Rahaman & Thiery, 2020; Wen et al., 2020). In this paper, we examine the underlying theory and empirical results relating to calibration with ensemble methods. We found, both theoretically and empirically, that ensembling multiple calibrated models decreases the confidence of the final prediction, resulting in an ill-calibrated ensemble prediction. To address this, strategies that calibrate the final ensemble prediction, rather than individual members, are required. Additionally, we empirically examine the situation where the ensemble is comprised of models with different topologies, and hence different complexity/performance, requiring non-uniform ensemble averaging. In this study, we focus on post-hoc calibration of the ensemble, based on temperature annealing. Guo et al. (2017) conducted a thorough comparison of various existing post-hoc calibration methods and found that temperature scaling is a simple, fast, and often highly effective approach to calibration. However, standard temperature scaling acts globally for all regions of the input samples, i.e., all logits are scaled in one single direction, either increasing or decreasing the distribution entropy.
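For concreteness, the global temperature scaling just described, and the usual way of fitting the single temperature on held-out data, can be sketched as follows; the optimizer and its settings are placeholders rather than the paper's protocol.

```python
import torch
import torch.nn.functional as F

def temperature_scale(logits, T):
    """Global temperature scaling: one scalar T softens (T > 1) or sharpens
    (T < 1) every prediction in the same direction."""
    return torch.softmax(logits / T, dim=-1)

def fit_temperature(logits, labels, lr=0.01, steps=200):
    """Fit T by minimizing NLL on a validation set; a minimal sketch.
    logits: (N, K) validation logits; labels: (N,) class indices."""
    log_T = torch.zeros(1, requires_grad=True)  # optimize log T so T > 0
    opt = torch.optim.Adam([log_T], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_T.exp(), labels)
        loss.backward()
        opt.step()
    return log_T.exp().item()
```

The region-specific variant discussed next would replace the single $T$ with one temperature per input region.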
To address this constraint, which may hurt some legitimately confident predictions, we investigate the effect of region-specific temperatures. Empirical results demonstrate the effectiveness of this approach, with a minimal increase in the number of calibration parameters. 2 RELATED WORK. Calibration is inherently related to uncertainty modeling. Two of the most important aspects of calibration are calibration evaluation and calibrated system construction. One method for assessing calibration is the reliability diagram (Vaicenavicius et al., 2019; Bröcker, 2012). Though informative, it is still desirable to have an overall metric. Widmann et al. (2019) investigate different distances in the probability simplex for estimating calibration error. Nixon et al. (2019) point out the problem of fixed-spaced binning schemes: bins with few predictions may have low-bias but high-variance measurements. Calibration error measures adaptive to densely populated regions have also been proposed (Nixon et al., 2019). Vaicenavicius et al. (2019) treated calibration evaluation as hypothesis testing. All these approaches examine calibration criteria from a sample-based perspective, rather than as a function of the underlying data distribution, which is what the theoretical analysis in this work uses. There are two main approaches to calibrating systems. The first is to recalibrate uncalibrated systems with a post-hoc calibration mapping, e.g., Platt scaling (Platt et al., 1999), isotonic regression (Zadrozny & Elkan, 2002), or Dirichlet calibration (Kull et al., 2017; 2019). The second is to directly build calibrated systems, via: (i) improved model structures, e.g., deep convolutional Gaussian processes (Tran et al., 2019); (ii) data augmentation, e.g., adversarial samples (Hendrycks & Dietterich, 2019; Stutz et al., 2020) or Mixup (Zhang et al., 2018); (iii) minimizing calibration error during training (Kumar et al., 2018). Calibration based on histogram binning (Zadrozny & Elkan, 2001), Bayesian binning (Naeini et al., 2015) and scaling binning (Kumar et al., 2019) is related to our proposed dynamic temperature scaling, in the sense that the samples are divided into regions and separate calibration mappings are applied. However, our method preserves the property that all predictions belonging to one sample sum to 1. The region-based classifier by Kuleshov & Liang (2015) is also related to our approach. Ensemble diversity has been proposed for improved calibration (Raftery et al., 2005; Stickland & Murray, 2020). In Zhong & Kwok (2013), ensembles of SVMs, logistic regressors, and boosted decision trees are investigated, where the combination weights of the calibrated probabilities are based on the AUC of the ROC. However, AUC is not comparable between different models, as discussed in Ashukha et al. (2020). In this work we investigate the combination of different deep neural network structures. The weights assigned to the probabilities are optimised using a likelihood-based metric. 3 CALIBRATION FRAMEWORK. Let $\mathcal{X} \subseteq \mathbb{R}^d$ be the $d$-dimensional input space and $\mathcal{Y} = \{\omega_1, \ldots, \omega_K\}$ be the discrete output space consisting of $K$ classes. The true underlying joint distribution for the data is $p(x, \omega) = P(\omega|x)\,p(x)$, $x \in \mathcal{X}$, $\omega \in \mathcal{Y}$. Given some training data $D \sim p(x, \omega)$, a model $\theta$ is trained to predict the distribution $P(\omega|x;\theta)$ given observation features.
For a calibrated system, the average predicted posterior probability should equate to the average posterior of the underlying distribution within a specific probability region. Two extreme cases always yield perfect calibration. The first is when the predictions are the same for all inputs and equal to the class prior, $P(\omega_j|x;\theta) = P(\omega_j)$. The second is when the minimum Bayes' risk classifier is obtained, $P(\omega_j|x;\theta) = \frac{p(x,\omega_j)}{\sum_{k=1}^{K} p(x,\omega_k)}$. Note that perfect calibration doesn't imply high accuracy, as shown by the system predicting the prior distribution. 3.1 DISTRIBUTION CALIBRATION. A system is calibrated if the predictive probability values accurately indicate the portion of correct predictions. Perfect calibration for a system that yields $P(\omega|x;\theta)$, when the training and test data are obtained from the joint distribution $p(x,\omega)$, can be defined as: $\int_{x \in R_j^p(\theta,\epsilon)} P(\omega_j|x;\theta)\,p(x)\,dx = \int_{x \in R_j^p(\theta,\epsilon)} P(\omega_j|x)\,p(x)\,dx \quad \forall p, \omega_j,\ \epsilon \to 0$ (2) $R_j^p(\theta,\epsilon) = \left\{ x \mid |P(\omega_j|x;\theta) - p| \le \epsilon,\ x \in \mathcal{X} \right\}$ (3) $R_j^p(\theta,\epsilon)$ denotes the region of input space where the system's predictive probability for class $\omega_j$ is sufficiently close, within error $\epsilon$, to the probability $p$. A perfectly calibrated system will satisfy this expression for all regions: the expected predictive probability (left side of Eq. (2)) is identical to the expected correctness, i.e., the expected true probability (right side of Eq. (2)). $R_j^p(\theta,\epsilon)$ defines the region in which calibration is defined. For top-label calibration, only the most probable class is considered, and the region defined in Eq. (3) is modified to reflect this: $\tilde{R}_j^p(\theta,\epsilon) = R_j^p(\theta,\epsilon) \cap \left\{ x \mid \omega_j = \arg\max_{\omega} P(\omega|x;\theta),\ x \in \mathcal{X} \right\}$ (4) Eq. (4) is a strict subset of Eq. (3). As the two calibration regions differ between calibration and top-label calibration, perfect calibration doesn't imply top-label calibration, and vice versa. A simple illustrative example of this property is given in A.3. Binary classification, $K = 2$, is an exception to this general rule, as the regions for top-label calibration are equivalent to those for perfect calibration, i.e., $\tilde{R}_j^p(\theta,\epsilon) = R_j^p(\theta,\epsilon)$. Hence, perfect calibration is equivalent to top-label calibration for binary classification (Nguyen & O'Connor, 2015). Eq. (2) defines the requirements for a perfectly calibrated system. It is useful to define metrics that allow how close a system is to perfect calibration to be assessed. Let the region calibration error be: $C_j^p(\theta,\epsilon) = \int_{x \in R_j^p(\theta,\epsilon)} \left( P(\omega_j|x;\theta) - P(\omega_j|x) \right) p(x)\,dx$ (5) This then allows two forms of expected calibration losses to be defined: $\mathrm{ACE}(\theta) = \frac{1}{K} \int_0^1 \left| \sum_{j=1}^{K} C_j^p(\theta,\epsilon) \right| dp$; $\mathrm{ACCE}(\theta) = \frac{1}{K} \sum_{j=1}^{K} \int_0^1 \left| C_j^p(\theta,\epsilon) \right| dp$ (6) All Calibration Error (ACE) only considers the expected calibration error for a particular probability, irrespective of the class associated with the data¹ (Hendrycks et al., 2019). Hence, All Class Calibration Error (ACCE), which requires that all classes minimise the calibration error for all probabilities, is advocated by Kull et al. (2019); Kumar et al. (2019). Nixon et al. (2019) propose the Thresholded Adaptive Calibration Error (TACE) to consider only the predictions larger than a threshold; it can be described as a special case of ACCE obtained by replacing the integral range. Naeini et al. (2015) also propose to consider only the region with maximum error.
Though measures such as ACE and ACCE require consistency of the expected posteriors with the true distribution , for tasks with multiple classes , particularly large numbers of classes , the same weight is given to the ability of the model to assign low probabilities to highly unlikely classes as to its ability to assign high probabilities to the “ correct ” class . For systems with large numbers of classes this can yield artificially low scores . To address this problem it is more common to replace the regions in Eq . ( 5 ) with the top-label regions in Eq . ( 4 ) , to give a top-label calibration error $\tilde{C}_j^p(\theta,\epsilon)$ . This then yields the expected top-label equivalents of ACCE and ACE , the Expected Class Calibration Error ( ECCE ) and the Expected Calibration Error ( ECE ) . Here , for example , the ECE of Guo et al . ( 2017 ) is expressed as $$\mathrm{ECE}(\theta) = \int_0^1 \left| \sum_{j=1}^{K} \int_{x \in \tilde{R}_j^p(\theta,\epsilon)} \left( P(\omega_j|x;\theta) - P(\omega_j|x) \right) p(x)\,dx \right| dp \quad (7)$$ $$= \int_0^1 O(\theta, p)\,\left| \mathrm{Conf}(\theta, p) - \mathrm{Acc}(\theta, p) \right| dp \quad (8)$$ where $O(\theta, p) = \sum_{j=1}^{K} \int_{x \in \tilde{R}_j^p(\theta,\epsilon)} p(x)\,dx$ is the fraction of observations assigned to that particular probability , $\mathrm{Conf}(\theta, p)$ is the confidence from the model for that probability , and $\mathrm{Acc}(\theta, p)$ is the accuracy under the true distribution . For more details see the appendix . 1In this section the references given refer to the sample-based equivalent versions of the distributional calibration expressions in this paper using the same concepts , rather than identical expressions .
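The distributional ECE in Eqs . ( 7 ) – ( 8 ) is estimated in practice from finite samples by binning confidences , as in Guo et al . ( 2017 ) . Below is a minimal sample-based sketch of that estimator ; the function name , the bin count and the fixed-width binning are illustrative choices ( Nixon et al . ( 2019 ) would instead use adaptive bins ) :

```python
import numpy as np

def ece_binned(confidences, correct, n_bins=15):
    """Sample-based top-label ECE with fixed-width bins over [0, 1].

    confidences: max predicted probability per sample; correct: 0/1 indicator
    that the top-label prediction was right.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # in_bin.mean() plays the role of O(theta, p); the absolute term is
            # the bin-wise |Conf - Acc| gap of Eq. (8)
            ece += in_bin.mean() * abs(confidences[in_bin].mean() - correct[in_bin].mean())
    return ece
```

Each bin contributes its sample fraction ( the empirical $O(\theta, p)$ ) times the gap between average confidence and average accuracy , mirroring Eq . ( 8 ) .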
The paper presents an analysis of calibration in ensembles of deep learning models. Through some theoretical developments, the paper supports the claim that an ensemble cannot be more confident than the average of its individual members in regions where the ensemble is well calibrated. Empirical results, on CIFAR-100 and three different deep models, report a comparison of ensemble calibration, where calibration is done over all members in order to achieve a calibrated ensemble decision, against individual calibration of members with no feedback from the ensemble decisions. Results show that individual member calibration does not lead to calibrated ensembles, and as such calibrating directly on the ensemble output is required for obtaining a proper evaluation of its uncertainty. Different ensemble calibration approaches are also compared.
SP:c7c0fc5a3d6319117b445707e7818c6f292bf533
ERMAS: Learning Policies Robust to Reality Gaps in Multi-Agent Simulations
1 INTRODUCTION . Reinforcement learning ( RL ) offers a tool to optimize policy decisions affecting complex , multiagent systems ; for example , to improve traffic flow or economic productivity . In practice , the need for efficient policy evaluation necessitates training on simulations of multi-agent systems ( MAS ) . Agents in these systems can be emulated with fixed behavioral rules , or by optimizing for a reward function using RL ( Zheng et al. , 2020 ) . For instance , the impact of economic policy decisions is often estimated with agent-based models ( Holland & Miller , 1991 ; Bonabeau , 2002 ) . This commonly introduces a reality gap , as the reward function and resulting behavior of simulated agents might differ from those of real people ( Simon & Schaeffer , 1990 ) . This becomes especially problematic as the complexity of the simulation grows , for example , when increasing the number of agents , or adding agent affordances ( Kirman , 1992 ; Howitt , 2012 ) . As a result , policies learned in imperfect simulations need to be robust against reality gaps in order to be effective in the real world . We introduce ε-Robust Multi-Agent Simulation ( ERMAS ) , a robust optimization framework for training robust policies , termed planners , that interact with real-world multi-agent systems . ERMAS trains robust planners by simulating multi-agent systems with RL and sampling worst-case behaviors from the worst-case agents . This form of multi-agent robustness poses a very challenging multilevel ( e.g. , max-min-min ) optimization problem . Existing techniques which could be applied to ERMAS ’ s multi-agent robustness objective , e.g. , naive adversarial robustness ( Pinto et al. , 2017 ) and domain randomization ( Tobin et al. , 2017 ; Peng et al. , 2018 ) , are intractable as they would require an expensive search through a large space of agent reward functions . Alternative frameworks improve robustness , e.g. , to changes in environment dynamics , observation or action spaces ( Pinto et al. , 2017 ; Li et al. , 2019 ; Tessler et al. , 2019 ) , but do not address reality gaps due to reward function mismatches , as they use inappropriate metrics on the space of adversarial perturbations . To solve this problem , ERMAS has three key features : 1 ) It formulates a multi-agent robustness objective equivalent to finding the worst-case ε-Nash equilibria . 2 ) It optimizes a tractable dual problem of the equivalent objective . 3 ) It approximates the dual problem using local solution concepts and first-order meta-learning techniques ( Nichol et al. , 2018 ; Finn et al. , 2017 ) . ERMAS ultimately yields policies that are robust to other agents ’ behavioral deviations , up to a regret of ε . We show that ERMAS learns robust policies in repeated bimatrix games by finding the worst-case reality gaps , corresponding to highly adversarial agents , which in turn leads to more robust planners . We further consider a challenging , large-scale spatiotemporal economy that features a social planner that learns to adjust agent rewards . In both settings , we show policies trained by ERMAS are more robust by testing them in perturbed environments with agents that have optimized for reward functions unused during ERMAS training . This generalization error emulates the challenge faced in transferring policies to the real world . In particular , we show ERMAS can find AI Economist tax policies that achieve higher social welfare across a broad range of agent risk-aversion objectives .
In all , we demonstrate ERMAS is effective even in settings where baselines fail or become intractable . Contributions To summarize , our contributions are : • We derive a multi-agent adversarial robustness problem using ε-Nash equilibria , which poses a challenging nested optimization problem . • We describe how ERMAS efficiently solves the nested problem using dualization , trust regions , and first-order meta-learning techniques . • We empirically validate ERMAS by training robust policies in two multi-agent problems : sequential bimatrix games and economic simulations . In particular , ERMAS scales to complex spatiotemporal multi-agent simulations . 2 ROBUSTNESS AND REALITY GAPS IN MULTI-AGENT ENVIRONMENTS . We seek to learn a policy πp for an agent , termed the planner , that interacts with an environment featuring N other agents . The planner ’ s objective depends both on its own policy and on the behavior of other agents in response to that policy ; this is a multi-agent RL problem in which the planner and agents co-adapt . In practice , evaluating ( and optimizing ) πp requires the use of a simulation with agents that emulate those in the environment of interest ( i.e . the real world ) , which might contain agents whose reward functions differ from those used in the simulation . Our goal is to train planner policies that are robust to such reality gaps . Formally , we build on partially-observable multi-agent Markov Games ( MGs ) ( Sutton & Barto , 2018 ) , defined by the tuple M : = ( S , A , r , T , γ , o , I ) , where S and A are the state and action spaces , respectively , and I are the agent indices . Since the MG played by the agents depends on the choice of planner policy , we denote the MG given by πp as M [ πp ] . MGs proceed in episodes that last H + 1 steps ( possibly infinite ) , covering H transitions . At each time t ∈ [ 0 , H ] , the world state is denoted st . Each agent i = 1 , . . . , N receives an observation oi , t , executes an action ai , t and receives a reward ri , t . The environment transitions to the next state st+1 according to the transition distribution T ( st+1|st , at ) .1 Each agent observes oi , t , a part of the state st . Agent policies πi are parameterized by θi while the planner policy πp is parameterized by θp . The Nash equilibria of M [ πp ] are agent policies where any unilateral deviation is suboptimal : $$A_{NE}(\pi_p) := \{ \pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi : J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J_i(\pi_i, \pi_{-i}, \pi_p) \}, \quad (1)$$ where $J_i(\pi, \pi_p) := \mathbb{E}_{\pi, \pi_p}\big[\sum_{t=0}^{H} \gamma^t r_t^{(i)}\big]$ denotes the objective of agent i . Hence , a rational agent would not unilaterally deviate from π ∈ ANE ( πp ) . To evaluate a fixed planner policy πp , we simply sample outcomes using policies π ∈ ANE ( πp ) . Also optimizing πp introduces a form of two-level learning . Under appropriate conditions , this can be solved with simultaneous gradient descent ( Zheng et al. , 2020 ; Fiez et al. , 2019 ) . Robustness Objective As noted before , we wish to learn planner policies πp that are robust to reality gaps arising from changes in agent reward functions , e.g. , when agents are boundedly rational.2 We develop a robustness objective for the planner by formalizing such reality gaps as perturbations 1Bold-faced quantities denote vectors or sets , e.g. , a = ( a1 , . . . , aN ) , the action profile for N agents . 2This type of reality gap occurs when the simulated environment ’ s reward function r fails to rationalize the actual behavior of the agents in the real environment , i.e.
, when agents in the real world act suboptimally with respect to the simulation ’ s reward function . ξi ∈ Ξ to agent objectives , where the uncertainty set Ξ : ( S , A ) H → R is the space of possible perturbations and represents uncertainty about the objectives of other agents . We extend ANE ( πp ) to condition the Nash equilibria on perturbations ξ : $$A_{NE}(\pi_p, \xi) := \{ \pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi : J_i^{\xi}(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J_i^{\xi}(\pi_i, \pi_{-i}, \pi_p) \}, \quad (2)$$ $$J_i^{\xi}(\tilde{\pi}_i, \pi_{-i}, \pi_p) := J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) + \mathbb{E}_{\tau_i \sim \tilde{\pi}_i, \pi_{-i}, \pi_p}[\xi_i(\tau_i)] \quad (3)$$ where τi is a trajectory ( sequence of state-action pairs ) . Following Morimoto & Doya ( 2001 ) , a robust planner optimizes its reward , subject to agents playing a perturbed Nash equilibrium ANE ( πp , ξ ) that maximally penalizes the planner : $$\pi_p^* = \arg\max_{\pi_p} \min_{\xi \in \Xi} \min_{\pi \in A_{NE}(\pi_p, \xi)} J_p(\pi, \pi_p). \quad (4)$$ Note that agent policies π ∈ ANE ( πp , ξ ) describe agents that optimize their own reward function , and we assume an adversary chooses ξ . Bounded Uncertainty Set There are two challenges with Equation 4 . First , if the adversary can arbitrarily choose Ξ , the worst case is uninformative.3 Second , depending on the complexity of Π , the uncertainty set Ξ may be high-dimensional and intractable to search . We address these issues by upper-bounding the size of the uncertainty set , i.e . the L∞ norm of ξi ∈ Ξ , by the term ε . Thus ε upper-bounds the difference between the reward functions of agents in the training and testing environments , e.g. , between simulation and the real world . This bounded uncertainty set is : $$\Xi_{\epsilon} := \left\{ \xi \,\middle|\, \sup_{\pi, \pi_p} |\xi_i(\pi, \pi_p)| < \epsilon, \text{ for all } i \in I \right\}. \quad (5)$$ This uncertainty set is equivalent to the ε-equilibria of M [ πp ] : $$A_{NE}(\pi_p, \epsilon) := \{ \pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi : J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J_i(\pi_i, \pi_{-i}, \pi_p) + \epsilon \}. \quad (6)$$ ε is a tunable hyperparameter—this is the case with most robust RL ( Pinto et al. , 2017 ; Li et al. , 2019 ) —but a good starting value is the anticipated error in reward objective estimates ( application-specific ) . Using Eq . 6 , the robustness objective becomes the following constrained optimization problem : $$\arg\max_{\pi_p} \underbrace{J^*_{p,\min}(\pi_p, \epsilon)}_{\text{Planner-OPT}}, \quad \text{where } J^*_{p,\min}(\pi_p, \epsilon) := \underbrace{\min_{\pi \in A_{NE}(\pi_p, \epsilon)} J_p(\pi, \pi_p)}_{\text{Agent-Adv-Search}}. \quad (7)$$ Using ANE ( πp , ε ) replaces the problem of intractably searching through Ξ with searching through ANE ( πp , ε ) , and thus merges the two nested min operations in Equation 4 . Conceptually , this transfers the worst-case search problem to the agents : in Agent-Adv-Search the agents find an adversarial equilibrium ; Planner-OPT optimizes the planner given adversarial agents . Note that the constraint set in Eq . 7 is non-empty for ε ≥ 0 ; the constraints ( Eq . 6 ) simply upper-bound the regret of agents . By definition , for non-empty bounded Π , there exists an optimal policy with zero regret . 3 ERMAS : ROBUST POLICIES IN MULTI-AGENT SIMULATIONS . We now introduce ERMAS , an efficient optimization framework to solve the robustness problem in Equation 7 . ERMAS proceeds in three steps . First , it dualizes Equation 7 following constrained RL . Second , it defines a trust region for the uncertainty set ANE ( πp , ε ) , approximating the dual problem . Finally , it uses first-order meta-learning with trust regions to solve the approximate dual problem . See Appendix A.1 for the detailed algorithm description . Dualizing Agent-Adv-Search The agent search problem in Equation 7 can be formulated similarly to a constrained RL problem ( Paternain et al.
, 2019 ) , where the primary objective of the agents is to minimize the planner ’ s reward and the secondary objective is to maximize their own reward . While conventional constrained RL enforces a constant lower bound in the constraint , e.g. , Ji ( π , πp ) ≥ C , we enforce a dynamic one : ∀i ∈ 1 . . . N : Ji ( π , πp ) ≥ Ji ( π∗i , π−i , πp ) − ε , where π∗i is the optimal unilateral deviation for agent i : $\pi_i^* := \arg\max_{\tilde{\pi}_i \in \Pi} J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p)$ . Letting λ denote the Lagrange multipliers , we can dualize Agent-Adv-Search , i.e. , Equation 4 , as : $$\min_{\pi} \Big( J_p(\pi, \pi_p) - \sum_{i=1}^{N} \lambda_i \big[ J_i(\pi_i^*, \pi_{-i}, \pi_p) - J_i(\pi, \pi_p) - \epsilon_i \big] \Big) =: J^{\dagger}_{p,\min}(\pi_p, \epsilon), \quad (8)$$ This is identical to the dualization of constrained reinforcement learning , whose duality gap is empirically negligible and provably zero under weak assumptions ( Paternain et al. , 2019 ) . We now abuse notation to denote θ : = [ θ1 , . . . , θN , θp ] and Jp ( θ ) : = Jp ( π , πp ) , where θ are the parameters of π . To solve Equation 8 , the agents apply the gradients : $$\nabla_{\theta_i} J^{\dagger}_{p,\min}(\pi_p, \epsilon) = -\nabla_{\theta_i} J_p(\theta) - \lambda_i \nabla_{\theta_i} \big[ J_i(\theta_i'(\theta), \theta_{-i}) - J_i(\theta) \big], \quad (9)$$ where θ′i ( θ ) is the parameters of the optimal unilateral deviation π∗i for agent i , i.e . the parameters that minimize local regret , which depend on the current policy parameters θi . λi is updated as : $$\nabla_{\lambda_i} J^{\dagger}_{p,\min}(\pi_p, \epsilon) = J_i(\pi_i^*, \pi_{-i}, \pi_p) - J_i(\pi, \pi_p) - \epsilon_i. \quad (10)$$ Equation 8 still poses a challenge through the Ji ( π∗i , π−i , πp ) terms , which correspond to unknown agent regret . We now detail the efficient approximation of the value and derivative of agent regret using local and meta-learning approximations , respectively . Trust Regions using Local ε-equilibria Estimating regret requires knowledge of the optimal unilateral deviation for agent i . We can simplify this problem by proposing a refinement of ε-equilibria inspired by the notion of local Nash equilibria in differentiable games ( Ratliff et al. , 2014 ) . Definition 3.1 . A strategy π is a local ε-Nash equilibrium if there exist open sets Wi ⊂ ΠN such that πi ∈ Wi and for each i ∈ { 1 , . . . , N } we have that Ji ( π′i , π−i ) ≤ Ji ( π ) + ε′ for all π′i ∈ Wi \ { πi } , where $\epsilon' := \sup_{\pi_i' \in W_i} \mathrm{KL}(\pi_i \| \pi_i')$ . By instead performing Agent-Adv-Search on the local ε-Nash equilibria , we can limit the set of unilateral deviations to consider to a small trust region Πη ( π ) : $$A_{NE}(\pi_p, \epsilon, \eta) := \{ \pi \mid \forall i \in [1, N], \tilde{\pi}_i \in \Pi_{\eta}(\pi_i) : J_i(\tilde{\pi}_i, \pi_{-i}, \pi_p) \le J_i(\pi_i, \pi_{-i}, \pi_p) + \epsilon \}, \quad (11)$$ $$\Pi_{\eta}(\pi) := \{ \pi' \in \Pi \mid \mathrm{KL}(\pi \| \pi') \le \eta \}, \quad (12)$$ where η > 0 defines the size of the trust region . For small η , algorithms such as TRPO ( Schulman et al. , 2017 ) can be used to efficiently approximate optimal local deviations of πi , affording reasonable approximations of Ji ( π∗i , π−i , πp ) . Note that our usage of trust region algorithms is not for optimization purposes . ERMAS requires the use of trust region optimization to ensure that the equilibria it considers are limited to a local neighborhood of the policy space ( Eq . 11 ) . First-Order Meta-Learning Approximation The full gradient in Equation 9 is also complicated by the need to estimate the derivative of the local regret , $\nabla_{\theta_i} [ J_i(\theta_i'(\theta), \theta_{-i}) - J_i(\theta) ]$ . The second term maximizes the performance of the agent ’ s policy and is simply found with the policy gradient . 3For instance , by setting ξi such that $J_i^{\xi} = -J_p$ .
The first term is less straightforward : it minimizes the performance of the best agent policy in the current trust region . We note that this first term corresponds to a meta-learning gradient . We follow REPTILE ( Nichol et al. , 2018 ) to obtain a first-order approximation of an M-step meta-learning gradient : $$\nabla_{\theta_i} J_i(\theta_i'(\theta), \theta_{-i}) = g_1 - \frac{1}{M} \sum_{m=1}^{M} g_m, \qquad g_m = \nabla_{\theta_i} J_i\Big( \theta_i + \sum_{j=1}^{m-1} g_j,\ \theta_{-i},\ \theta_p \Big), \quad (13)$$ where gm denotes the m-th policy gradient in the direction of Ji . In practice , we scale this meta-learning term with the hyperparameter β , as β < 1 incorporates a helpful inductive bias where maximizing agent reward leads to local maxima . We can alternatively apply this gradient update periodically , to both mimic β < 1 and reduce computation overhead . First-order meta-learning approximations are known to be empirically effective , and are necessary for ERMAS to efficiently solve Eq . 8 . ERMAS By solving the dual problem ( Eq . 8 ) , ERMAS yields robustness to ε-equilibria and , equivalently , to uncertainty in agent objectives . ERMAS solves the dual problem by combining trust regions and meta-learning techniques to estimate and differentiate agent regret . Algorithms 1 and 2 ( Appendix A.1 ) describe an efficient implementation of this procedure for nested policy learning .
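As a concrete illustration of the two update rules above , here is a minimal numpy sketch of the multiplier update of Eq . ( 10 ) and the REPTILE-style regret gradient of Eq . ( 13 ) . This is a sketch under stated assumptions , not the authors ’ implementation : `regrets` and `policy_grad` stand in for quantities ERMAS would estimate from trust-region rollouts .

```python
import numpy as np

def update_multipliers(lmbda, regrets, eps, lr=0.01):
    """Dual ascent on the Lagrange multipliers, Eq. (10): the gradient is the
    (estimated) agent regret minus the tolerance eps_i; multipliers stay >= 0."""
    return np.maximum(lmbda + lr * (np.asarray(regrets) - eps), 0.0)

def reptile_regret_gradient(theta_i, policy_grad, M=5):
    """First-order approximation of grad_{theta_i} J_i(theta'_i(theta), theta_-i),
    Eq. (13): take M inner policy-gradient steps, then return g_1 - mean(g_m)."""
    theta, grads = np.array(theta_i, dtype=float), []
    for _ in range(M):
        g = policy_grad(theta)      # stand-in for a rollout-based gradient of J_i
        grads.append(g)
        theta = theta + g           # inner adaptation step along J_i, as in Eq. (13)
    return grads[0] - np.mean(grads, axis=0)

# Toy usage with a quadratic surrogate J_i(theta) = -||theta - 1||^2 / 2:
grad = reptile_regret_gradient(np.zeros(3), lambda th: -(th - 1.0))
lmbda = update_multipliers(np.ones(2), regrets=[0.3, 0.05], eps=0.1)
```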
This paper proposes an interesting method for acting and planning robustly in a multi-agent simulation, making policies robust to the reality gap between training time and testing time for agents in a MARL setting. The method does show improvements in terms of training the policy for this use case and being more robust to some out-of-distribution configurations of the environment; however, these improvements appear to be rather limited. In addition, the organization and writing of the paper is very technical and could be improved with additional background information on the uses of the metrics and environments, as well as better flow between the content in the paper, so that the reader can understand the importance of the different aspects of the method. These improvements could help the reader understand the novelty and the important aspects of the method that are difficult to measure. At the moment it comes across as a mix of different methods combined to support this more robust method, without a very clear story about the primary problem the paper is trying to solve or the more significant technical aspect of the method that provides this novel solution.
SP:2e8e7fca411be533fbe6069ba360c17189be2fee
CANVASEMB: Learning Layout Representation with Large-scale Pre-training for Graphic Design
Layout representation , which models visual elements in a canvas and their interrelations , plays a crucial role in graphic design intelligence . With a large variety of layout designs and the unique characteristic of layouts that visual elements are defined as a list of categorical ( e.g . shape type ) and numerical ( e.g . position and size ) properties , it is challenging to learn a general and compact representation with limited data . Inspired by the recent success of self-supervised pre-training techniques in various natural language processing tasks , in this paper , we propose CanvasEmb ( Canvas Embedding ) , which pre-trains a deep representation from unlabeled graphic designs by jointly conditioning on all the context elements in the same canvas , with a multi-dimensional feature encoder and a multi-task learning objective . The pre-trained CanvasEmb model can be fine-tuned with just one additional output layer and with a small amount of training data to create models for a wide range of downstream tasks . We verify our approach with presentation slides data . We construct a large-scale dataset with more than one million slides , and propose two novel layout understanding tasks with human-labeled sets , namely element role labeling and image captioning . Evaluation results on these two tasks show that our model with fine-tuning achieves state-of-the-art performance . Furthermore , we conduct a deep analysis aiming to understand the modeling mechanism of CanvasEmb , and demonstrate its great potential for use in more applications such as layout auto completion and layout retrieval . 1 INTRODUCTION . Graphic design leverages layout to set up and arrange visual elements in a canvas for conveying a message in different types of documents , while layout representation is the reverse process of understanding visual elements and their inter-relations in a canvas , which is the key to the analysis ( Stoffel et al. , 2010 ) , retrieval ( Beusekom et al. , 2006 ) and generation ( Li et al. , 2020b ; Lee et al. , 2020 ) of graphic designs . However , elements in a layout are complex , as they are defined by multi-dimensional properties such as type ( e.g. , text box , image or button ) , position and color . For example , the web page and presentation slide shown in Figure 1 are defined by many settings , as each example is constructed from several elements and each element is defined by several properties . Due to the complex and sparse features of elements , as well as the rich diversity of layouts , learning a general and compact layout representation is challenging with a limited amount of data . Previous works related to layout representations ( Li et al. , 2019 ; Tabata et al. , 2019 ; Lee et al. , 2020 ) are mostly task-oriented . They simplify the layout to only the positions of elements , and directly optimize task-specific labels with fewer than a few thousand instances . Recently a number of self-supervised pre-trained models such as ELMO ( Peters et al. , 2018 ) , GPT ( Radford , 2018 ) and BERT ( Devlin et al. , 2019 ) have shown promising results in improving a variety of natural language processing ( NLP ) tasks . The success of pre-trained models in NLP has inspired us to learn contextual layout representations from large-scale unlabeled graphic designs , which can facilitate various downstream tasks for design intelligence . As one highly related work , LayoutLM ( Xu et al.
, 2019 ) is a pre-trained document model incorporating both text content and layout information for scanned documents . However , it is difficult to generalize to other document types , since its input is word-level and it defines layout only as the word position , which is insufficient to describe a layout in graphic design . In this paper , we present CanvasEmb , a large-scale pre-trained model for learning contextual layout representations . It is designed to pre-train a deep representation from unlabeled graphic designs by jointly conditioning on all the context elements in the same canvas , and the pre-trained CanvasEmb model can be fine-tuned with just one additional output layer and with a small amount of training data to create models for a wide range of downstream tasks . Specifically , we define a generic and high-coverage vocabulary to describe element properties in the canvas . A feature encoder is designed to jointly incorporate multi-dimensional properties , and it is developed with the multi-layer Transformer ( Devlin et al. , 2019 ) for modeling element contexts . To ensure the representation conditions on all dimensions of the element contexts , we adopt the masked language modeling strategy with a multi-task objective , where we randomly mask some properties of elements for prediction during pre-training . To verify our approach , we construct a large-scale dataset with more than one million presentation slides containing rich layout meta-information for pre-training . We then propose two novel downstream tasks for layout understanding with human-labeled sets to evaluate the performance of our pre-trained CanvasEmb model . The first task is element role labeling . Given only the layout information , the goal is to classify the semantic role of each element ( e.g. , title , subtitle ) . The second task is image captioning , which detects whether a text box and an image in a slide stand in an image-captioning relation . Experimental results on the two tasks show that fine-tuning the CanvasEmb model achieves state-of-the-art performance . Furthermore , we conduct a deep analysis to understand the modeling mechanism of CanvasEmb . Also , we demonstrate the great potential of our pre-trained CanvasEmb with two extended applications , including layout auto completion ( Li et al. , 2020b ) and layout retrieval . The contributions of this work are as follows : • We propose CanvasEmb , which to the best of our knowledge is the first pre-trained model for layouts in graphic design . It can be fine-tuned with a small amount of training data for a wide range of downstream tasks . • We construct a large-scale dataset of presentation slides with rich layout information , as well as two novel tasks for layout understanding ( i.e. , element role labeling and image captioning ) with human-labeled sets . • We demonstrate that our model achieves state-of-the-art performance on the two downstream tasks , and show the potential for more applications such as layout auto-completion and layout retrieval . 2 RELATED WORK . Layout representation is the focal point of design in rich media , including presentation slides , magazines , comics , posters and web pages . High-quality representations can be conducive to multiple practical design tasks . Early works on design layout or document layout mainly rely on templates ( Hurst et al. , 2009 ; Damera-Venkata et al. , 2011 ) or heuristic rules ( O ’ Donovan et al. , 2014 ; Tabata et al. , 2019 ) and require professional knowledge and manual effort .
To efficiently facilitate the problem-solving aspects of sketching in graphic design , Todi et al . ( 2016 ) propose an interactive layout design tool which uses a real-time layout optimiser without requiring extensive input . However , these methods are restricted and usually fail to model the rich varieties of media information . Xinru Zheng & Lau ( 2019 ) make use of the content information to model graphic design layouts in a purely data-driven scenario , adapting to the contents to be laid out . Recently , there is a trend of adopting neural networks and deep learning methods to make layout automation more efficient . For example , to be more user-friendly , Pang et al . ( 2016 ) adopt attention mechanisms to trace the user ’ s attention , and Lee et al . ( 2020 ) improve the conventional GAN-based methods ( Li et al. , 2019 ; Xinru Zheng & Lau , 2019 ) to explicitly model relationships among components and user-specified constraints . BERT4Rec ( Sun et al. , 2019 ) employs deep bidirectional self-attention to model user behavior sequences . And Li et al . ( 2020b ) develop Transformer-based tree decoders for the task of auto completion of user interface layout design , which can ease the efforts of UI designers and developers . However , previous works typically deal with limited kinds of design elements and fail to give general and scalable solutions to layout representation learning . Inspired by the significant impact of large-scale pre-trained models in the areas of NLP ( Peters et al. , 2018 ; Radford , 2018 ; Devlin et al. , 2019 ; Yang et al. , 2019 ) and multi-modal learning ( Sun et al. , 2019 ; Lu et al. , 2019 ; Li et al. , 2020a ) , our work implements the attention-based Transformer framework enhanced with pre-training to propose a data-driven and scalable method that captures contextual information for layout representation , which can be applied to downstream tasks in graphic design . 3 LAYOUT IN GRAPHIC DESIGN . Layout in graphic design refers to the way in which we arrange the visual elements on a canvas . Though some settings might vary across document types , there exist basic characteristics of the elements that make up the content of layouts ( example shown in Figure 1 ) : • Type Properties . Elements can be text boxes , pictures or lines . According to their semantic roles , elements can be divided into title , subtitle , button or other placeholders . • Geometry Properties . Position and size indicate an element ’ s placement in the layout . Besides , z-order is the ordering of overlapping two-dimensional elements , and rotation describes an element ’ s circular movement . • Color Properties . Color is one of the most straightforward visual features , including the RGBA channels and extra features such as color gradient . • Content-related Properties . Though user contents are separated from the layout , some content-related properties ( e.g. , text font size and font type ) can affect the layout arrangement . Elements are complex and sparse , composed of the above properties with either categorical ( e.g. , shape type , color ) or numerical ( e.g. , position , font size , word count ) values . Hence , layouts are diverse and complicated to model . In the next section , we will introduce our approach to layout representation learning . 4 MODELING . We present our model CanvasEmb , which takes as input elements with multi-dimensional properties and outputs a representation of the layout .
To train our model , we adopt a two-stage learning framework , namely pre-training and fine-tuning . 4.1 MODEL ARCHITECTURE . We formulate the input as a sequence of visual elements { x0 , x1 , x2 , ... , xn } in the layout , where each element xi is defined by m properties $\{p_i^1, p_i^2, \dots, p_i^m\}$ . Here x0 is the sequence representation , which is randomly initialized . Figure 2 shows the overall architecture of our model , which is similar to BERT ( Devlin et al. , 2019 ) . The feature embedding encodes the high-dimensional properties of elements , and is followed by the transformer encoder to model the global context of elements . The output representation can be further used to predict element-level and layout-level labels , as well as the relations between elements , with an extra task-specific prediction layer . Here , we introduce the details of the model components . Feature Embedding . For the i-th element $x_i = [p_i^1; p_i^2; \dots; p_i^m]$ , the embedding $e_i$ is obtained by concatenating the m property embeddings : $$e_i = \Theta(e_i^1 \oplus e_i^2 \oplus \dots \oplus e_i^m), \quad (1)$$ where ⊕ is the concatenation operator and Θ is a non-linear transform function . For each channel j , the corresponding property $p_i^j$ contains multi-dimensional values . For example , given a 2-dim numerical property $p_i^j = [p_i^{j,1}; p_i^{j,2}]$ ( e.g. , element size with height and width ) , the embedding in this channel can be calculated as : $$e_i^j = \xi^j(p_i^{j,1}) \oplus \xi^j(p_i^{j,2}), \quad (2)$$ where $p_i^{j,k}$ represents the k-th value in $p_i^j$ and $\xi^j$ is the embedding function . There are two types of embedding functions . For properties with categorical values such as type and color , we use an embedding matrix as the learnable parameter . For properties with numerical values such as position and size , the positional encoding ( Vaswani et al. , 2017 ) is adopted : $$PE(p_i^{j,k}, 2h) = \sin\big(p_i^{j,k} / 10000^{2h/d_{j,k}}\big) \quad (3)$$ $$PE(p_i^{j,k}, 2h+1) = \cos\big(p_i^{j,k} / 10000^{2h/d_{j,k}}\big) \quad (4)$$ where $d_{j,k}$ denotes the embedding dimension assigned to $p_i^{j,k}$ . Transformer Encoder . On top of the feature embeddings , we use a transformer encoder ( Vaswani et al. , 2017 ) to encode the element contexts . Similar to BERT ( Devlin et al. , 2019 ) , the multi-layer transformer with the multi-head self-attention mechanism is able to capture correlations between different elements and property fields . Finally , we obtain the low-dimensional representations $\{h_0^{(L)}; h_1^{(L)}; \dots; h_n^{(L)}\}$ for all elements from the last , i.e . the L-th , encoding layer .
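To make Eqs . ( 1 ) – ( 4 ) concrete , below is a minimal PyTorch sketch of the feature embedding for a single categorical channel ( shape type ) and a single 2-dim numerical channel ( size ) . The class name , the dimensions and the two-property vocabulary are illustrative assumptions rather than the paper ’ s actual configuration :

```python
import torch
import torch.nn as nn

class PropertyEncoder(nn.Module):
    """Sketch of the CanvasEmb feature embedding (Eqs. 1-4), assuming one
    categorical property (shape type) and one 2-dim numerical property (size)."""

    def __init__(self, n_types=16, d_cat=32, d_num=32, d_model=128):
        super().__init__()
        self.type_emb = nn.Embedding(n_types, d_cat)   # categorical channel
        self.d_num = d_num                             # dims per numerical value
        # Theta: non-linear transform over the concatenated channels, Eq. (1)
        self.proj = nn.Sequential(nn.Linear(d_cat + 2 * d_num, d_model), nn.ReLU())

    def positional(self, v):
        # Sinusoidal encoding of a numerical value, Eqs. (3)-(4)
        # (sin/cos concatenated rather than interleaved, a common simplification).
        h = torch.arange(self.d_num // 2, dtype=torch.float32)
        freq = v.unsqueeze(-1) / (10000.0 ** (2 * h / self.d_num))
        return torch.cat([torch.sin(freq), torch.cos(freq)], dim=-1)

    def forward(self, shape_type, size):
        # shape_type: (n,) long tensor; size: (n, 2) float tensor (height, width)
        e = torch.cat([self.type_emb(shape_type),
                       self.positional(size[:, 0]),
                       self.positional(size[:, 1])], dim=-1)
        return self.proj(e)   # element embeddings for the transformer encoder

# Toy usage: two elements -> a (2, 128) sequence of element embeddings.
emb = PropertyEncoder()(torch.tensor([3, 7]), torch.rand(2, 2))
```

A full model would add one embedding or encoding per property channel in the vocabulary and feed the resulting element sequence into a multi-layer transformer encoder , e.g . torch.nn.TransformerEncoder .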
This paper applies state-of-the-art transformer-based neural networks to layout representation learning for slides. The most notable contribution of this paper is the construction of a large-scale parsed slide-layout dataset. This paper proposes to pre-train the network on this large-scale dataset with a masked reconstruction strategy and verifies it with several subtasks, including element role labeling, image captioning, auto-completion and layout retrieval, with a comparison to a decision-tree based method as a baseline.
SP:a4900e2a8fbd39245400e377869f8c5350ce12fd
DISE: Dynamic Integrator Selection to Minimize Forward Pass Time in Neural ODEs
1 INTRODUCTION . Neural ordinary differential equations ( Neural ODEs ) learn time-dependent physical dynamics and describe continuous residual networks ( Chen et al. , 2018 ) . It is well known that residual connections are numerically similar to the explicit Euler method , the simplest integrator for solving ODEs . In this regard , Neural ODEs are considered a generalization of residual networks . In general , it is agreed by many researchers that Neural ODEs have two advantages and one disadvantage : i ) Neural ODEs can sometimes reduce the required number of neural network parameters , e.g. , ( Pinckaers & Litjens , 2019 ) ; ii ) Neural ODEs can interpret the neural network layer ( or time ) as a continuous variable , and a hidden vector at an arbitrary layer can be calculated ; iii ) however , Neural ODEs ’ forward-pass inference can sometimes be numerically unstable ( i.e. , the underflow error of DOPRI ’ s adaptive step size ) and/or slow at solving an integral problem ( i.e. , too many steps in DOPRI ) ( Zhuang et al. , 2020b ; Finlay et al. , 2020 ; Daulbaev et al. , 2020 ; Quaglino et al. , 2020 ) . Much work has been actively devoted to addressing the numerically unstable nature of solving integral problems . In this work , however , we are interested in addressing the problem of long forward-pass inference time . To overcome the challenge , we i ) directly regularize the numerical errors of the Dormand–Prince ( DOPRI ) method ( Dormand & Prince , 1980 ) , which means we try to learn an ODE that can be quickly solved by DOPRI , and ii ) dynamically select an appropriate integrator for each sample rather than relying on only one integrator . In many cases , Neural ODEs use DOPRI , one of the most advanced adaptive-step integrators , for its best accuracy . However , our method allows us to rely on simpler integrators , such as the Euler method or the fourth-order Runge–Kutta ( RK4 ) method ( Ixaru & Vanden Berghe , 2004 ) , for carefully selected inputs . Table 1 shows an experimental result where our proposed regularization not only reduces the number of function evaluations ( NFE ) — the inference time is linearly proportional to the number of function evaluations in Neural ODEs — but also increases the inference accuracy in the MNIST classification task . We can reduce the inference time by reducing the average number of steps ( and thus , the average NFE ) of DOPRI , which can be achieved when the learned ODE is trained to be in a form suitable for DOPRI to solve , given a proper regularization . However , the NFE of DOPRI in a step is 6 , whereas RK4 has 4 and the Euler method has 1 . So , the Euler method is six times faster than DOPRI even when their step sizes are identical . Therefore , the automatic step-size adjustment of DOPRI is not enough to minimize the NFE of forward-pass inference ( see Section B in Appendix for more detailed descriptions with a concrete example ) . To this end , we design an auxiliary network that chooses an appropriate integrator for each sample . The combination of our regularization and the proposed Dynamic Integrator SElection ( DISE ) shows the best performance in the table . We conduct experiments on three different tasks and datasets : MNIST image classification , PhysioNet mortality prediction , and continuous normalizing flows . Our method shows the best ( or close to the best ) accuracy with a much smaller NFE than state-of-the-art methods . Our contributions can be summarized as follows : 1 .
We design an effective regularization to reduce the number of function evaluations ( NFE ) of Neural ODEs . 2 . We design a sample-wise dynamic integrator selection ( DISE ) method to further accelerate Neural ODEs without significantly sacrificing model accuracy . 3 . We conduct in-depth analyses on three popular tasks for Neural ODEs . 2 RELATED WORK . In this section , we review the literature on Neural ODEs . In particular , we review recent regularization designs for Neural ODEs and numerical methods to solve ODEs . 2.1 NEURAL ODES . Modeling neural networks as differential equations had been attempted by several researchers ( Weinan , 2017 ; Ruthotto & Haber , 2019 ; Lu et al. , 2018 ; Ciccone et al. , 2018 ; Chen et al. , 2018 ; Gholami et al. , 2019 ) . Among them , the seminal neural ordinary differential equations ( Neural ODEs ) , as shown in Fig . 1 , consist of three parts in general : a feature extractor , an ODE , and a classifier ( Chen et al. , 2018 ; Zhuang et al. , 2020a ) . Given an input x , the feature extractor produces an input to the ODE , denoted h ( 0 ) . Let h ( t ) be a hidden vector at layer ( or time ) t in the ODE part . In Neural ODEs , a neural network f with a set of parameters , denoted θ , approximates $\frac{\partial h(t)}{\partial t}$ , and h ( t1 ) becomes $h(0) + \int_{t_0}^{t_1} f(h(t), t; \theta)\,dt$ , where $f(h(t), t; \theta) = \frac{\partial h(t)}{\partial t}$ . In other words , the internal dynamics of the hidden vector evolution are described by an ODE . One key advantage of Neural ODEs is that we can reduce the number of parameters without sacrificing model accuracy . For instance , one recent work based on a Neural ODE marked the best accuracy for medical image segmentation with an order of magnitude fewer parameters ( Pinckaers & Litjens , 2019 ) . In general , we calculate h ( 1 ) 1 and feed it into the classifier , from which the final prediction is made . One can accordingly modify the architecture in Fig . 1 for other types of tasks . For simplicity but without loss of generality , we assume this architecture in our discussion . Neural ODEs have been used in many tasks , ranging from classification and regression to time series forecasting and generative models ( Yildiz et al. , 2019 ; Grathwohl et al. , 2019 ; Rubanova et al. , 2019 ) . 2.2 ODE SOLVERS . DOPRI is one of the most powerful integrators ( Hairer et al. , 1993 ) and is widely used in Neural ODEs . It is a member of the Runge–Kutta family of ODE solvers . DOPRI dynamically controls the step size while solving an integral problem . It is now the default method for MATLAB , GNU Octave , and Simulink . It internally estimates an error using a heuristic method , and the step size is determined by a function inversely proportional to the estimated error — the larger the error , the shorter the step size . The error at the i-th step of DOPRI for an integral problem x , denoted $err_{x,i}$ , is estimated by the difference between the fourth-order and the fifth-order Runge–Kutta solutions at that moment . The intuition behind the heuristic error estimation is simple yet effective . Among simpler methods , we consider the Euler method and the fourth-order Runge–Kutta ( RK4 ) method . The Euler method is the simplest method for solving ODEs , and both the Euler method and RK4 use a fixed step size . Therefore , their solving time is deterministic . One step of DOPRI involves six function evaluations , i.e. , six function calls of f . The Euler method calls the network f only once per step and RK4 calls it four times .
Therefore , the Euler method is six times faster than DOPRI for a step . The term ‘ NFE ’ refers to the number of function evaluations needed to solve an integral problem . For the Euler method and RK4 , the NFE is deterministic and does not vary . In DOPRI , however , the NFE varies from one sample to another , depending on the estimated error and the number of steps . We refer readers to Section B in Appendix for more detailed descriptions with a concrete example . 2.3 REGULARIZATIONS IN NEURAL ODES . One possible way to make Neural ODEs faster is to regularize the ODE function f . Two naïve methods are regularizing θ with the L1 or L2 regularizers ( Ng , 2004 ) . Strictly speaking , these two regularizers are meant to prevent overfitting , and preventing overfitting does not necessarily mean quick forward-pass inference . To this end , Dupont et al . showed that by augmenting h ( t ) with additional zeros , i.e. , augmenting the dimensionality of h ( t ) , one can achieve similar effects ( Dupont et al. , 2019 ) . However , this method is meaningful when we can not freely control the dimensionality of h ( t ) , which is not our setting . Recently , a kinetic regularization concept has been proposed by Finlay et al . ( Finlay et al. , 2020 ) , which is written as follows : $$R_k \overset{\mathrm{def}}{=} \int_{t_0}^{t_1} \| f(h(t), t; \theta) \|_2^2 \, dt. \quad (1)$$ Among all regularization terms designed so far , this kinetic regularization ’ s goal is the closest to ours . It can encourage Neural ODEs to learn straight-line paths from h ( t0 ) to h ( t1 ) . 3 PROPOSED METHOD . While enabling the design of compact models , Neural ODEs have one critical drawback : they require solving integral problems , for which many approximation methods have been proposed : the Euler method , RK4 , and DOPRI , to name a few . Almost all of them are based on discretizing t and converting an integral into a series of additions . In many cases , therefore , they require a dense discretization , resulting in a long forward-pass inference time . 1For simplicity but without loss of generality , the time duration can be fixed to t ∈ [ 0 , 1 ] . Any arbitrary-length ODE can be compressed into a unit-time-interval ODE . In some time series datasets , however , the final integral time t1 is given in a sample . In such a case , t1 is set to the sample time . In this paper , we tackle the problem of minimizing the number of function evaluations ( and thereby , the forward-pass inference time ) of Neural ODEs without significantly sacrificing model accuracy . Our proposed method consists of two parts : i ) using DOPRI ’ s error estimation as a regularizer and ii ) using an auxiliary network to select an appropriate integrator for each input sample . 3.1 DOPRI ’ S ERROR ESTIMATION AS A REGULARIZER . We re-implement the DOPRI method in PyTorch and make it return the estimated error terms . Let $\{err_{x,1}, err_{x,2}, \cdots, err_{x,N}\}$ , where N is the number of steps of DOPRI , be an array of errors estimated by DOPRI while solving an integral problem for an input x . Note that the adaptive step size is an inverse function of the error at each step . We use the following regularizer while training Neural ODEs : $$R_{err} \overset{\mathrm{def}}{=} \sum_{x \in T} \sum_{i=1}^{N} err_{x,i}, \quad (2)$$ where x is an input sample for which we have to solve an integral problem , and T is a training set . For instance , x can be an image sample to classify .
If we train a Neural ODE to classify images with the cross-entropy loss in conjunction with this regularizer , the trained Neural ODE will learn an ODE that can correctly classify images while reducing the forward-pass time of DOPRI . The backward-pass calculation of our proposed regularizer can be done in $O(1/s_{avg})$ by maintaining the forward-pass computation graph , where $s_{avg}$ is the average step size of DOPRI . However , this complexity will decrease as training goes on with our regularizer , because the average step size will increase .
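The following is a minimal numpy sketch of collecting per-step error estimates during the forward solve , which is what the $R_{err}$ penalty of Eq . ( 2 ) sums over . For brevity it estimates the local error by step doubling ( one RK4 step versus two half steps ) instead of DOPRI ’ s embedded fourth/fifth-order pair , and the fixed step count is an illustrative simplification of the adaptive solver :

```python
import numpy as np

def rk4_step(f, h, t, dt):
    k1 = f(h, t)
    k2 = f(h + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(h + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(h + dt * k3, t + dt)
    return h + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def solve_with_errors(f, h0, t0=0.0, t1=1.0, steps=20):
    """Integrate dh/dt = f(h, t) while recording a per-step error estimate
    err_{x,i}; sum(errs) plays the role of R_err in Eq. (2) for this sample."""
    h, t = np.asarray(h0, dtype=float), t0
    dt, errs = (t1 - t0) / steps, []
    for _ in range(steps):
        full = rk4_step(f, h, t, dt)
        half = rk4_step(f, rk4_step(f, h, t, dt / 2), t + dt / 2, dt / 2)
        errs.append(float(np.linalg.norm(full - half)))  # local error estimate
        h, t = half, t + dt                              # keep the finer solution
    return h, errs

# Toy usage: a linear ODE dh/dt = -h over t in [0, 1].
h1, errs = solve_with_errors(lambda h, t: -h, np.ones(4))
print(h1[0], sum(errs))   # ~exp(-1) and a small accumulated error
```

In the paper ’ s setting , f is the network , the accumulated error is added to the task loss , and gradients flow through the retained forward-pass computation graph .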
This paper addresses the complexity of the forward pass inference in neural ODEs. The paper proposes to augment training of the neural ODE with an auxiliary neural network that dynamically selects the best numerical integrator for a given input sample. Furthermore, the paper also proposes a regularizer that uses the errors of the numerical integrator to reduce the number of function evaluations, without sacrificing accuracy.
SP:e030bf232cd040a4c2ea834f6d803d7fcf4aa971
Perfect density models cannot guarantee anomaly detection
1 INTRODUCTION . Several machine learning methods aim at extrapolating a behavior observed on training data in order to produce predictions on new observations . But every so often , such extrapolation can result in wrong outputs , especially on points that we would consider infrequent with respect to the training distribution . Faced with unusual situations , whether adversarial ( Szegedy et al. , 2013 ; Carlini & Wagner , 2017 ) or just rare ( Hendrycks & Dietterich , 2019 ) , a desirable behavior for a machine learning system would be to flag these outliers so that the user can assess whether the result is reliable and gather more information if need be ( Zhao & Tresp , 2019 ; Fu et al. , 2017 ) . This can be critical for applications like medical decision making ( Lee et al. , 2018 ) or autonomous vehicle navigation ( Filos et al. , 2020 ) , where such outliers are ubiquitous . What are the situations that are deemed unusual ? Defining these anomalies ( Hodge & Austin , 2004 ; Pimentel et al. , 2014 ) manually can be laborious if not impossible , and so generally applicable , automated methods are preferable . In that regard , the framework of probabilistic reasoning has been an appealing formalism , because a natural candidate for outliers is situations that are improbable or out-of-distribution . Since the true probability density $p^*_X$ of the data is often not provided , one would instead use an estimator $p^{(\theta)}_X$ learned from this data to assess the regularity of a point . Density estimation has been a particularly challenging task in high-dimensional problems . However , recent advances in deep probabilistic models , including variational auto-encoders ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Vahdat & Kautz , 2020 ) , deep autoregressive models ( Uria et al. , 2014 ; van den Oord et al. , 2016b ; a ) , and flow-based generative models ( Dinh et al. , 2014 ; 2016 ; Kingma & Dhariwal , 2018 ) , have shown promise for density estimation , which has the potential to enable accurate density-based methods ( Bishop , 1994 ) for anomaly detection . Yet , several works have observed that a significant gap persists between the potential of density-based anomaly detection and empirical results . For instance , Choi et al . ( 2018 ) , Nalisnick et al . ( 2018 ) , and Hendrycks et al . ( 2018 ) noticed that generative models trained on a benchmark dataset ( e.g. , CIFAR-10 , Krizhevsky et al. , 2009 ) and tested on another ( e.g. , SVHN , Netzer et al. , 2011 ) are not able to identify the latter as out-of-distribution with current methods . Different hypotheses have been formulated to explain that discrepancy , ranging from the curse of dimensionality ( Nalisnick et al. , 2019 ) to a significant mismatch between $p^{(\theta)}_X$ and $p^*_X$ ( Choi et al. , 2018 ; Fetaya et al. , 2020 ; Kirichenko et al. , 2020 ; Zhang et al. , 2020 ) . In this work , we propose a new perspective on this discrepancy and challenge the expectation that density estimation should enable anomaly detection . We show that the aforementioned discrepancy persists even with perfect density models , and therefore goes beyond issues of estimation , approximation , or optimization errors ( Bottou & Bousquet , 2008 ) . We highlight that this issue is pervasive , as it occurs even in low-dimensional settings and for a variety of density-based methods for anomaly detection . 2 DENSITY-BASED ANOMALY DETECTION . 2.1 UNSUPERVISED ANOMALY DETECTION : PROBLEM STATEMENT .
Unsupervised anomaly detection is a classification problem ( Moya et al. , 1993 ; Schölkopf et al. , 2001 ) , where one aims at distinguishing between regular points ( inliers ) and irregular points ( outliers ) . However , as opposed to the usual classification task , labels distinguishing inliers and outliers are not provided for training , if outliers are even provided at all . Given an input space $\mathcal{X} \subseteq \mathbb{R}^D$ , the task can be summarized as partitioning this space between the subset of outliers $\mathcal{X}_{out}$ and the subset of inliers $\mathcal{X}_{in}$ , i.e. , $\mathcal{X}_{out} \cup \mathcal{X}_{in} = \mathcal{X}$ and $\mathcal{X}_{out} \cap \mathcal{X}_{in} = \emptyset$ . When the training data is distributed according to the probability measure $P^*_X$ ( with density $p^*_X$ 1 ) , one would usually pick the set of regular points $\mathcal{X}_{in}$ such that this set contains the majority ( but not all ) of the mass ( e.g. , 95 % ) of this distribution , i.e. , $P^*_X(\mathcal{X}_{in}) = 1 - \alpha \in (\frac{1}{2}, 1)$ . But , for any given α , there exist in theory infinitely many corresponding partitions into $\mathcal{X}_{in}$ and $\mathcal{X}_{out}$ ( see Figure 1 ) . How are these partitions defined to match our intuition of inliers and outliers ? We will focus in this paper on recently used methods based on probability density . 2.2 DENSITY SCORING . When talking about outliers as infrequent observations , the association with probability can be quite intuitive . For instance , one would expect an anomaly to happen rarely and be unlikely . Since the language of statistics often associates the term likelihood with quantities like $p^{(\theta)}_X(x)$ , one might consider an unlikely sample to have a low ” likelihood ” , that is , a low probability density $p^*_X(x)$ . Conversely , regular samples would have a high density $p^*_X(x)$ following that reasoning . This is an intuition that is not only prevalent in several modern anomaly detection methods ( Bishop , 1994 ; Blei et al. , 2017 ; Hendrycks et al. , 2018 ; Kirichenko et al. , 2020 ; Rudolph et al. , 2020 ; Liu et al. , 2020 ) but also in techniques like low-temperature sampling ( Graves , 2013 ) , used for example in Kingma & Dhariwal ( 2018 ) and Parmar et al . ( 2018 ) . The associated approach , described in Bishop ( 1994 ) , consists in defining the inliers as the points whose density exceeds a certain threshold λ > 0 ( for example , chosen such that inliers include a predefined amount of mass , e.g. , 95 % ) , making the modes the most regular points in this setting . $\mathcal{X}_{out}$ and $\mathcal{X}_{in}$ are then respectively the lower-level and upper-level sets $\{x \in \mathcal{X}, p^*_X(x) \le \lambda\}$ and $\{x \in \mathcal{X}, p^*_X(x) > \lambda\}$ ( see Figure 2b ) . 1We will also assume in the rest of the paper that for any $x \in \mathcal{X}$ , $p^*_X(x) > 0$ . [ Figure 2 : Illustration of different density-based methods applied to a particular one-dimensional distribution $p^*_X$ . Outliers are in red and inliers are in blue . The thresholds are picked so that inliers include 95 % of the mass . Panel ( a ) shows an example of a distribution density $p^*_X$ ; panel ( b ) shows the density scoring method applied to $p^*_X$ , where inliers are the points with density above the threshold λ > 0 ; panel ( c ) shows the typicality test method ( with one sample ) applied to $p^*_X$ , where inliers are the points whose log-density lies in the ε-interval around the negentropy $-H(p^*_X)$ . ] 2.3 TYPICALITY TEST . The Gaussian Annulus theorem ( Blum et al. , 2016 ) ( generalized in Vershynin , 2019 ) attests that most of the mass of a high-dimensional standard Gaussian $\mathcal{N}(0, I_D)$ is located close to the hypersphere of radius $\sqrt{D}$ .
However , the mode of its density is at the center 0 . A natural conclusion is that the curse of dimensionality creates a discrepancy between the density upper-level sets and what we expect as inliers ( Choi et al. , 2018 ; Nalisnick et al. , 2019 ; Morningstar et al. , 2020 ; Dieleman , 2020 ) . This motivated Nalisnick et al . ( 2019 ) to propose another method for testing whether a point is an inlier or not , relying on a measure of its typicality . This method relies on the notion of typical set ( Cover , 1999 ) , defined by taking as inliers points whose average log-density is close to the average log-density of the distribution ( see Figure 2c ) . Definition 1 ( Cover , 1999 ) . Given independent and identically distributed elements $(x^{(n)})_{n \le N}$ from a distribution with density $p^*_X$ , the typical set $A_\epsilon^{(N)}(p^*_X) \subset \mathcal{X}^N$ is made of all sequences that satisfy : $$\left| H(p^*_X) + \frac{1}{N} \sum_{n=1}^{N} \log p^*_X(x^{(n)}) \right| \le \epsilon,$$ where $H(X) = -\mathbb{E}[\log p^*_X(X)]$ is the ( differential ) entropy and ε > 0 a constant . This method matches the intuition behind the Gaussian Annulus theorem on the set of inliers of a high-dimensional standard Gaussian . Indeed , using a concentration inequality , we can show that $\lim_{N \to +\infty} P^*_{(X_n)_{1 \le n \le N}}\big(A_\epsilon^{(N)}\big) = 1$ , which means that with N large enough , $A_\epsilon^{(N)}(p^*_X)$ will contain most of the mass of $(p^*_X)^N$ , justifying the name typicality . 3 THE ROLE OF REPARAMETRIZATION . Given the anomaly detection problem formulated in Subsection 2.1 , we are interested in reasoning about the properties a solution ought to satisfy , in the ideal case of infinite data and capacity . For density-based methods this means that $p^{(\theta)}_X = p^*_X$ . This setting is appealing as it gives space for theoretical results without worrying about the underfitting or overfitting issues mentioned by Hendrycks et al . ( 2018 ) ; Fetaya et al . ( 2020 ) ; Morningstar et al . ( 2020 ) ; Kirichenko et al . ( 2020 ) ; Zhang et al . ( 2020 ) . Although we work in practice on points ( e.g. , vectors ) , it is important to keep in mind that these points are actually representations of an underlying outcome . As a random variable , X is by definition the function from this outcome ω to the corresponding observation x = X ( ω ) . However , at its core , an anomaly detection solution aims at classifying outcomes through these measurements . How is the choice of X affecting the problem of anomaly detection ? [ Figure 3 : Illustration of the change of variables formula and how much the application of a bijection can affect the density of the points considered in a one-dimensional case . Panel ( a ) shows an example of a distribution density $p^*_X$ ; panel ( b ) shows an example of an invertible function f from [ 0 , 1 ] to [ 0 , 1 ] ; panel ( c ) shows the resulting density $p^*_{f(X)}$ from applying f to $X \sim p^*_X$ , as a function of the new axis f ( x ) . In Figures 3a and 3c , points x with high density $p^*_X(x)$ are in blue and points with low density $p^*_X(x)$ are in red . ] While several papers studied the effects of a change of representation through the lens of inductive bias ( Kirichenko et al. , 2020 ; Zhang et al. , 2020 ) , we investigate the more fundamental effects of reparametrizations f . To sidestep concerns about loss of information ( Winkens et al. , 2020 ) , we study the particular case of an invertible map f .
The measurements x = X ( ω ) and f ( x ) = ( f ◦ X ) ( ω ) represent the same outcome ω ( although differently ) , and , since x and f ( x ) are connected by an invertible transformation f , the same method applied respectively to X or f ( X ) should classify them with the same label , either as an inlier or an outlier . The target of these methods is essentially to assess the regularity of the outcome ω . From this , we can ideally make the following requirement of a solution to anomaly detection . Principle . In an infinite data and capacity setting , the result of an anomaly detection method should be invariant to any continuous invertible reparametrization f . Do density-based methods follow this principle ? To answer that question , we look into how density behaves under a reversible change of representation . In particular , the change of variables formula ( Kaplan , 1952 ) ( used in Tabak & Turner , 2013 ; Dinh et al. , 2014 ; Rezende & Mohamed , 2015 ) formalizes a simple intuition of this behavior : where points are brought closer together the density increases , whereas the density decreases when points are spread apart . The formula itself is written as : $$p^*_{f(X)}(f(x)) = p^*_X(x) \left| \frac{\partial f}{\partial x^T}(x) \right|^{-1},$$ where $\left| \frac{\partial f}{\partial x^T}(x) \right|$ is the Jacobian determinant of f at x , a quantity that reflects the local change in volume incurred by f . Figure 3 already illustrates how the function f ( Figure 3b ) can spread apart points close to the extremities to decrease the corresponding density around 0 and 1 , and , as a result , turn the density on the left ( Figure 3a ) into the density on the right ( Figure 3c ) . With this example , one can wonder to what degree an invertible change of representation can affect the density and the anomaly detection methods presented in Subsections 2.2 and 2.3 that use it .
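As a small numerical check of this non-invariance , the following sketch applies the density-scoring rule of Subsection 2.2 before and after an invertible reparametrization on a toy one-dimensional distribution ; the choice of a Beta ( 2 , 2 ) density and the bijection f ( x ) = x² are illustrative assumptions , not taken from the paper :

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2.0, 2.0, size=200_000)   # X ~ Beta(2,2): density 6x(1-x) on [0, 1]

p_x = 6.0 * x * (1.0 - x)              # p*_X(x)
p_fx = p_x / np.abs(2.0 * x)           # p*_{f(X)}(f(x)) for f(x) = x**2, via the formula

# Density scoring: inliers are points above the density level holding 95% of the
# mass, estimated here by the empirical 5%-quantile of sampled densities.
inlier_before = p_x > np.quantile(p_x, 0.05)
inlier_after = p_fx > np.quantile(p_fx, 0.05)
print("fraction of outcomes whose label flips under f:",
      np.mean(inlier_before != inlier_after))   # nonzero: scoring is not invariant
```

Under f , the Jacobian factor $1/(2x)$ inflates the density near 0 , so outcomes near 0 flip from outliers to inliers while outcomes near 1 become the entire rejected mass , precisely the kind of reparametrization sensitivity the Principle above forbids .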
Detecting anomalies is a notoriously ill-defined problem. The notion of anomaly is not a rigorous concept and different algorithms produce different results. The paper critiques a broad set of methods which involve likelihood (or density) estimation. Its main idea revolves around the 'Principle' set out on Page 4. The principle claims that when data and capacity constraints are removed, an AD algorithm should be invariant to 'reparametrization' of the input. Roughly speaking, that means the algorithm should be invariant to arbitrary 'name changing' of the input - the result should not change if each data item x is replaced by f(x) when f is invertible.
AUTOSAMPLING: SEARCH FOR EFFECTIVE DATA SAMPLING SCHEDULES
1 INTRODUCTION .

Data sampling policies can greatly influence the performance of model training in computer vision tasks, and therefore finding robust sampling policies can be important. Handcrafted rules, e.g., data resampling, reweighting, and importance sampling, promote better model performance by adjusting the training data frequency and order (Estabrooks et al., 2004; Weiss et al., 2007; Bengio et al., 2009; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Shrivastava et al., 2016; Jesson et al., 2017). Handcrafted rules, however, rely heavily on assumptions about the dataset and cannot adapt well to datasets with their own characteristics. To handle this issue, learning-based methods (Li et al., 2019; Jiang et al., 2017; Fan et al., 2017) were designed to automatically reweight or select training data utilizing meta-learning techniques or a policy network. However, existing learning-based sampling methods still rely on human priors as proxies to optimize sampling policies, which may fail in practice. Such priors often include assumptions on policy network design for data selection (Fan et al., 2017), or dataset conditions like noisiness (Li et al., 2019; Loshchilov & Hutter, 2015) or imbalance (Wang et al., 2019). These approaches take image features, losses, importance or their representations as inputs and use a policy network or another learning approach with a small number of parameters to estimate the sampling probability. However, images with similar visual features, for example, can be redundant in training, yet their losses or features fed into the policy network are likely to be close, causing redundant samples to receive the same sampling probability if we rely on the aforementioned priors. Therefore, we propose to directly optimize the sampling schedule itself, so that no prior knowledge about the dataset is required. Specifically, the sampling schedule refers to the order in which data are selected over the entire training course. In this way, we rely only on the data themselves to determine the optimal sampling schedule, without any prior. Directly optimizing a sampling schedule is challenging due to its inherently high dimension. For example, for the ImageNet classification dataset (Deng et al., 2009) with around one million samples, the dimension of the parameters would be of the same order. While popular approaches such as deep reinforcement learning (Cubuk et al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al., 2015), population-based training (Jaderberg et al., 2017) or simple random search (Bergstra & Bengio, 2012) have already been utilized to tune low-dimensional hyper-parameters like augmentation schedules, their application to directly finding good sampling schedules remains unexplored. For instance, the dimension of a data augmentation policy is generally only in the dozens, and it takes thousands of training runs (Cubuk et al., 2018) to sample enough rewards to find an optimal augmentation policy, because high-quality rewards require many epochs of training to obtain. As such, optimizing a sampling schedule may require orders of magnitude more rewards, and hence more training runs, than data augmentation, which results in prohibitively slow convergence.
To overcome the aforementioned challenge, we propose a data sampling policy search framework, named AutoSampling, to sufficiently learn an optimal sampling schedule in a population-based training fashion (Jaderberg et al., 2017). Unlike previous methods, which focus on collecting long-term rewards and updating hyper-parameters or agents offline, our AutoSampling method collects rewards online, with a shortened collection cycle and without priors. Specifically, AutoSampling collects rewards within several training iterations, a cycle tens or hundreds of times shorter than in existing works (Ho et al., 2019; Cubuk et al., 2018). In this manner, we provide the search process with much more frequent feedback to ensure sufficient optimization of the sampling schedule. Each time a few training iterations pass, we collect the rewards from the previous several iterations, accumulate them, and later update the sampling distribution using these rewards. Then we perturb the sampling distribution to search in distribution space, and use it to generate new mini-batches for later iterations, which are recorded into the output sampling schedule. As illustrated in Sec. 4.1, shortened collection cycles with less interference can also better reflect the training value of each data point. Our contributions are as follows:

• To the best of our knowledge, we are the first to propose directly learning a robust sampling schedule from the data themselves, without any human prior or condition on the dataset.

• We propose the AutoSampling method to handle the optimization difficulty caused by the high dimension of sampling schedules, and to efficiently learn a robust sampling schedule through a shortened reward collection cycle and online updates of the sampling schedule. Comprehensive experiments on the CIFAR-10/100 and ImageNet datasets (Krizhevsky, 2009; Deng et al., 2009) with different networks show that AutoSampling can increase the top-1 accuracy by up to 2.85% on CIFAR-10, 2.19% on CIFAR-100, and 2.83% on ImageNet.

2 BACKGROUND .

2.1 RELATED WORK .

Data sampling is of great significance to deep learning and has been extensively studied. Approaches with human-designed rules use pre-defined heuristics to modify the frequency and order in which training data are presented. In particular, one intuitive method is to resample or reweight data according to their frequencies, difficulties or importance in training (Estabrooks et al., 2004; Weiss et al., 2007; Drummond et al., 2003; Bengio et al., 2009; Lin et al., 2017; Shrivastava et al., 2016; Loshchilov & Hutter, 2015; Wang et al., 2019; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Byrd & Lipton, 2018; Jesson et al., 2017). These methods have been widely used in imbalanced training or hard-mining problems. However, they are often restricted to the tasks and datasets for which they were proposed, and their ability to generalize to a broader range of tasks with different data distributions may be limited. In other words, these methods often implicitly assume certain conditions on the dataset, such as cleanness or imbalance. In addition, learning-based methods have been proposed for finding suitable sampling schemes automatically. Methods using meta-learning or reinforcement learning have also been utilized to automatically select or reweight data during training
(Li et al., 2019; Jiang et al., 2017; Ren et al., 2018; Fan et al., 2017), but they have only been tested on small-scale or noisy datasets. Whether they can generalize to tasks on other datasets remains untested. In this work, we directly study data sampling without any prior, and we also investigate its generalization across different datasets such as CIFAR-10, CIFAR-100 and ImageNet, using many typical networks. As for hyper-parameter tuning, popular approaches such as deep reinforcement learning (Cubuk et al., 2018; Zhang et al., 2020), Bayesian optimization (Snoek et al., 2015) or simple random search (Bergstra & Bengio, 2012) have already been utilized to tune low-dimensional hyper-parameters and have proven effective. Nevertheless, they have not been adopted for finding good sampling schedules due to their inherently high dimension. Some recent works tackle the challenge of optimizing high-dimensional hyper-parameters: MacKay et al. (2019) use structured best-response functions, and Lorraine (2019) achieves this goal through a combination of the implicit function theorem and efficient inverse-Hessian approximations. However, these have not been tested on the task of optimizing sampling schedules, which is the major focus of our work in this paper.

2.2 POPULATION BASED TRAINING .

The hyper-parameter tuning task can be framed as a bi-level optimization problem with the following objective function:
$$\min_{h \in \mathcal{H}} L(\theta^*, h) \quad \text{subject to} \quad \theta^* = \arg\max_{\theta \in \Theta} \text{eval}(\theta, h), \qquad (1)$$
where $\theta$ represents the model weights and $h = (h_1, h_2, \cdots, h_T)$ is the hyper-parameter schedule for $T$ training intervals. Population based training (PBT) (Jaderberg et al., 2017) solves the bi-level optimization problem by training a population $\mathcal{P}$ of child models in parallel, initialized with different hyper-parameter schedules:
$$\mathcal{P} = \{(\theta_i, h_i, t)\}_{i=1}^{N_p}, \qquad (2)$$
where $\theta_i$ and $h_i$ respectively represent the child model weights and the corresponding hyper-parameter schedule for training interval $t$ on worker $i$, and $N_p$ is the number of workers. PBT proceeds in intervals, each of which usually consists of several epochs of training. During an interval, the population of models is trained in parallel to carry out the lower-level optimization of the weights $\theta_i$. Between intervals, an exploit-and-explore procedure is adopted to conduct the upper-level optimization of the hyper-parameter schedule. In particular, for interval $t$, to exploit we evaluate the child models on a held-out validation dataset:
$$h^*_t, \theta^*_t = \arg\max_{p_i = (\theta_i, h_i, t) \in \mathcal{P}} \text{eval}(\theta_i, h_i), \qquad \theta^*_t \to \theta_i, \quad i = 1, \cdots, N_p. \qquad (3)$$
We record the best-performing hyper-parameter setting $h^*_t$ and broadcast the top-performing model $\theta^*_t$ to all workers. To explore, we initialize new hyper-parameter schedules for interval $t+1$ with different random seeds on all workers, which can be viewed as a search in the hyper-parameter space. The next exploit-and-explore cycle then continues. In the end, the top-performing hyper-parameter schedule $h^* = (h^*_1, h^*_2, \cdots, h^*_T)$ is obtained. PBT has been applied to tune low-dimensional hyper-parameters such as data augmentation schedules (Ho et al., 2019; Jaderberg et al., 2017). However, it cannot be directly used for finding sampling strategies due to their high dimension. Unlike PBT, our AutoSampling adopts a multi-exploitation-and-exploration structure, leading to much shorter reward collection cycles that contribute many more effective rewards for sufficient optimization within a practical computational budget.
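For concreteness, one exploit-and-explore cycle of PBT (Eqs. 1-3) can be sketched as follows; `train_interval`, `evaluate`, and `perturb` are assumed placeholders for the child-model training step, the held-out validation of Eq. 3, and the hyper-parameter re-initialization.

```python
import copy
import numpy as np

def pbt_interval(population, train_interval, evaluate, perturb, rng):
    # Lower-level optimization: train each child (theta_i, h_i) in parallel.
    for child in population:
        child["theta"] = train_interval(child["theta"], child["h"])
    # Exploit: pick the best-performing child on held-out data (Eq. 3) ...
    scores = [evaluate(c["theta"], c["h"]) for c in population]
    best = population[int(np.argmax(scores))]
    h_star = copy.deepcopy(best["h"])
    # ... and broadcast its weights to all workers.
    for child in population:
        child["theta"] = copy.deepcopy(best["theta"])
        # Explore: re-initialize the hyper-parameter schedule for the next interval.
        child["h"] = perturb(h_star, rng)
    return h_star
```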
3 AUTOSAMPLING WITH SEARCHING .

An overview of AutoSampling is illustrated in Fig. 1. AutoSampling alternately runs a multi-exploitation step and an exploration step. In the exploration step, we 1) update the sampling distribution using the rewards collected from the multi-exploitation step (the sampling distribution is initially uniform); 2) perturb the updated sampling distribution for the child models so that different child models have different sampling distributions; and 3) use the corresponding perturbed sampling distribution for each child model to sample mini-batches of training data. In the multi-exploitation step, we 1) train multiple child models using the mini-batches sampled in the exploration step, and 2) collect short-term rewards from the child models. AutoSampling finishes with a recorded top-performing sampling schedule, which can be transferred to other models.

Algorithm 1: The Multi-Exploitation Step
  Input: training dataset D, population P = {(θ_i, h_i, t)}_{i=1}^{N_p}, number of workers N_p, number of exploitation intervals T, exploitation interval length N_s
  Initialize H* ← ()
  for t = 1 to T do
    for j = 1 to N_s do
      for (θ_i, h_{t,i}, t) ∈ P do
        θ_i ← ∇L(θ_i, h_{t,i})          ▷ update the weights of child model i
      end for
      h*_t, θ*_t = argmax_P eval(θ_i, h_i)
      H* ← H* + h*_t                    ▷ record the best sampling
      for i = 1 to N_p do
        θ_i ← θ*_t                      ▷ clone the optimal weights
      end for
    end for
  end for
  Return H*, P
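As a rough illustration of the exploration step just described, the sketch below updates a per-example sampling distribution from accumulated rewards and hands each worker a perturbed copy to draw its next mini-batches from; the exponentiated-reward update rule and the noise scale are our own assumptions, not the paper's exact formulas.

```python
import numpy as np

def exploration_step(probs, example_rewards, n_workers, batch_size,
                     n_batches, rng, lr=0.1, noise=0.01):
    # Update the sampling distribution with accumulated per-example rewards.
    probs = probs * np.exp(lr * example_rewards)
    probs /= probs.sum()
    schedules = []
    for _ in range(n_workers):
        # Perturb the distribution so each child model searches a different
        # point in distribution space (multiplicative noise keeps positivity).
        p = probs * np.exp(rng.normal(0.0, noise, size=len(probs)))
        p /= p.sum()
        # Sample the mini-batches this worker will train on next interval
        # (assumes batch_size <= number of examples).
        batches = [rng.choice(len(probs), size=batch_size, replace=False, p=p)
                   for _ in range(n_batches)]
        schedules.append(batches)
    return probs, schedules
```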
The authors mainly concentrate on data sampling. To address the issue of optimizing high-dimensional sampling hyper-parameters and to remove the requirement of prior knowledge in current methods, the authors introduce a search-based method named AutoSampling. The method consists of an exploration step and an exploitation step that are conducted alternately. The exploitation step trains multiple child models with the current sampling strategy and saves the best model for the next iteration, while the exploration step estimates the sampling distribution according to the data sampled in the exploitation step and perturbs it so that all data can potentially be sampled. The authors have conducted extensive experiments to verify the superiority of their method, especially its effectiveness and generalizability.
Adversarial representation learning for synthetic replacement of private attributes
1 INTRODUCTION .

The increasing capacity and performance of modern machine learning models lead to increasing amounts of data being required to train them (Goodfellow et al., 2016). However, collecting and using large datasets which may contain sensitive information about individuals is often impeded by increasingly strong privacy laws protecting individual rights, and by the infeasibility of obtaining individual consent. Giving privacy guarantees on a dataset may let us share data while protecting the rights of individuals, thus unlocking the large benefits for individuals and for society that big datasets can provide. In this work, we propose a technique for selective obfuscation of image datasets. The aim is to provide the original data in as much detail as possible while making it hard for an adversary to detect specific sensitive attributes. The proposed solution is agnostic to the downstream task, with the objective of making the data as private as possible given a distortion constraint. This issue has previously been addressed using adversarial representation learning with some success: a filter model is trained to obfuscate sensitive information while an adversary model is trained to recover the information (Edwards & Storkey, 2016). In the current work, we demonstrate that it is easier to hide sensitive information if you replace it with something else: a sample which is independent of the input data. Aside from the adversary module, our proposed solution includes two main components: a filter model that is trained to remove the sensitive attribute, and a generator model that inserts a synthetically generated new value for the sensitive attribute. The generated sensitive attribute is entirely independent of the sensitive attribute in the original input image. Following a body of work in privacy-related adversarial learning, we evaluate the proposed model on faces from the CelebA dataset (Liu et al., 2015), and consider, for example, the smile or gender of a person to be the sensitive attribute. The smile is an attribute that carries interesting aspects of the transformations of a human face. The obvious changes reside close to the mouth when a person smiles, but other subtle changes also occur: eyelids tighten, dimples show and the skin wrinkles. The current work includes a thorough analysis of the dataset, including correlations between such features. These correlations make the task interesting and challenging, reflecting the real difficulty that may occur when anonymizing data: what is the right trade-off between preserving utility, defined as allowing information about other attributes to remain, and removing the sensitive information? In our setup, the adversary can make an arbitrary number of queries to the model. For each query, another sample will be produced from the distribution of the sensitive data, while keeping as much as possible of the non-sensitive information about the requested data point.

2 RELATED WORK .

Privacy-preserving machine learning has been studied from a number of different angles. Some work assumes access to a privacy-preserving mechanism, such as bounding boxes for faces, and studies how to hide people's identity by blurring (Oh et al., 2016a), removing (Orekondy et al., 2018) or generating the faces of other people (Hukkelås et al., 2019) in their place.
Other work assumes access to a utility-preserving mechanism and proposes to obfuscate everything except what should be retained (Alharbi et al., 2019). This raises the question: how do we find the pixels in an image that need to be modified to preserve privacy with respect to some attribute? Furthermore, Oh et al. (2016b) showed that blurring or removing the head of a person has a limited effect on privacy. This finding is crucial; we cannot rely on modifications of an image such as blurring or overpainting to achieve privacy. An adversarial set-up instead captures the signals that the adversary uses, and can attain stronger privacy. Adversarial learning is the process of training a model to fool an adversary (Goodfellow et al., 2014). Both models are trained simultaneously, and become increasingly good at their respective tasks during training. This approach has been successfully used to learn image-to-image transformations (Isola et al., 2017; Choi et al., 2018), and to synthesize properties such as facial expressions (Song et al., 2017; Tang et al., 2019). Privacy-preserving adversarial representation learning utilizes this paradigm to learn representations of data that hide sensitive information (Edwards & Storkey, 2016; Zhang et al., 2018; Xie et al., 2017; Beutel et al., 2017; Raval et al., 2017). Bertran et al. (2019) minimize the mutual information between the utility variable and the input image data conditioned on the learned representation. Roy & Boddeti (2019) maximize the entropy of the discriminator output rather than minimizing the log likelihood, which is beneficial for stronger privacy. Osia et al. (2020) approached the problem using an information bottleneck. Wu et al. (2018), Ren et al. (2018), and Wang et al. (2019) learn transformations of video that respect a privacy budget while maintaining performance on a downstream task. Tran et al. (2018) proposed an approach for pose-invariant face recognition. Similar to our work, their approach used adversarial learning to disentangle specific attributes in the data. Oh et al. (2017) trained a model to add a small amount of noise to the input to hide the identity of a person. Xiao et al. (2020) learn a representation from which it is hard to reconstruct the original input, but from which it is possible to predict a predefined task. The method provides control over which attributes are preserved, but no control over which attributes are censored. That is, it puts more emphasis on preserving utility than privacy, which is not always desired. All of these, with the exception of Edwards & Storkey (2016) (see below), depend on knowing the downstream task labels. Our work has no such dependency: the data produced by our method is designed to be usable regardless of the downstream task. In Edwards & Storkey (2016), a limited experiment is included which does not depend on the downstream task. In this experiment, they remove sensitive text which was overlaid on images, a task which is much simpler than the real-world problem considered in the current work. The overlaid text is independent of the underlying image, and therefore the solution does not require a trade-off between utility and privacy, which is the case in most real settings. Furthermore, we also replace the sensitive information with synthetic information, which we show further strengthens the privacy.
Like the current work, Huang et al. (2017; 2018) use adversarial learning to minimize the mutual information between the private attribute and the censored image under a distortion constraint. Our solution extends and improves upon these ideas with a modular design consisting of a filter that is trained to obfuscate the data, and a generator that further enhances the privacy by adding new, independently sampled synthetic information for the sensitive attributes.

3 PRIVACY-PRESERVING ADVERSARIAL REPRESENTATION LEARNING .

In the current work, we propose a novel solution for utility-preserving, privacy-enhancing transformations of data: we use privacy-preserving representation learning to obfuscate information in the input data, and output results that retain the information and structure of the input.

3.1 PROBLEM SETTING .

Generative adversarial privacy (GAP) (Huang et al., 2018) was proposed as a method to provide privacy in images under a distortion constraint, and will be used as the baseline in the current work. In GAP, one assumes a joint distribution $P(X, S)$ of public data points $X$ and sensitive private attributes $S$, where $S$ is typically correlated with $X$. The authors define a privacy mechanism $X' = f(X, Z_1)$, where $Z_1$ is the source of noise or randomness in $f$. Let $h_f(X')$ be an adversary's prediction of the sensitive attribute $S$ from the privatized data $X'$ according to a decision rule $h_f$. The performance of the adversary is thus measured by a loss function $\ell_f(h_f(f(x, z_1)), s)$, and the expected loss of the adversary with respect to $X$, $S$ and $Z_1$ is
$$L_f(h_f, f) = \mathbb{E}_{\substack{x, s \sim p(x, s) \\ z_1 \sim p(z_1)}}\left[\ell_f\big(h_f(f(x, z_1)), s\big)\right], \qquad (1)$$
where $p(z_1)$ is the source of noise. The privacy mechanism $f$ will be trained to be privacy-preserving and utility-preserving. That is, it should be hard for an analyst to infer $S$ from $X'$, but $X'$ should be minimally distorted with respect to $X$. Huang et al. (2018) formulate this as a constrained minimax problem
$$\min_f \max_{h_f} -L_f(f, h_f) \quad \text{s.t.} \quad \mathbb{E}_{\substack{x, s \sim p(x, s) \\ z_1 \sim p(z_1)}}\left[d\big(f(x, z_1), x\big)\right] \le \epsilon_1, \qquad (2)$$
where the constant $\epsilon_1 \ge 0$ defines the allowed distortion for the privatizer and $d(\cdot, \cdot)$ is some distortion measure. In the current work, $f$ will be referred to as the filter, since its purpose is to filter the sensitive information from $x$. A potential limitation of this formulation is that it only obfuscates the sensitive information in $x$, which may make it obvious to the adversary that $x'$ is a censored version of $x$. Instead, we propose to replace the sensitive information with a new independent value $s'$.

3.2 OUR CONTRIBUTION .

We extend the filter with a generator module $g$, defined as $X'' = g(f(X, Z_1), S', Z_2)$, where $S'$ denotes the random variable of the new synthetic value for the sensitive attribute. $Z_1$ and $Z_2$ denote the sources of randomness in $f$ and $g$ respectively. The discriminator $h_g$ is trained to predict $s$ when the input is a real image, and to predict the "fake" class when the input comes from $g$, as in the learning setup of Salimans et al. (2016). The objective of the generator $g(x', s', z_2)$ is to generate a new synthetic (independent) sensitive attribute $s'$ in $x'$ that will fool the discriminator $h_g$.
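As a rough illustration, the constrained problem in Eq. 2 is often handled in practice by moving the distortion constraint into a penalty; the sketch below alternates adversary and filter updates in that spirit. The module interfaces, noise dimension, and penalty weight `lam` are our own assumptions, not the implementation of Huang et al. (2018).

```python
import torch
import torch.nn.functional as F

def gap_step(filter_f, adversary, opt_f, opt_h, x, s, lam=1.0):
    z1 = torch.randn(x.size(0), 16)                 # noise source Z1 (dim assumed)
    x_priv = filter_f(x, z1)                        # x' = f(x, z1)
    # Adversary step: improve prediction of s from the privatized data.
    adv_loss = F.cross_entropy(adversary(x_priv.detach()), s)
    opt_h.zero_grad(); adv_loss.backward(); opt_h.step()
    # Filter step: fool the adversary while keeping d(x', x) small
    # (penalty form of the eps_1 constraint; only opt_f is stepped here).
    distortion = F.mse_loss(x_priv, x)
    filter_loss = -F.cross_entropy(adversary(x_priv), s) + lam * distortion
    opt_f.zero_grad(); filter_loss.backward(); opt_f.step()
    return adv_loss.item(), distortion.item()
```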
We define the loss of the discriminator $h_g$ as
$$L_g(h_g, g) = \mathbb{E}_{\substack{x, s \sim p(x, s) \\ s' \sim p(s') \\ z_1, z_2 \sim p(z_1, z_2)}}\left[\ell_g\big(h_g(g(f(x, z_1), s', z_2)), \text{fake}\big)\right] + \mathbb{E}_{x, s \sim p(x, s)}\left[\ell_g\big(h_g(x), s\big)\right], \qquad (3)$$
where $p(z_1, z_2)$ is the source of noise, $p(s')$ is the assumed distribution of the synthetic sensitive attributes $s'$, fake is the fake class, and $\ell_g$ is the loss function. We formulate this as a constrained minimax problem
$$\min_g \max_{h_g} -L_g(g, h_g) \quad \text{s.t.} \quad \mathbb{E}_{\substack{x, s \sim p(x, s) \\ s' \sim p(s') \\ z_1, z_2 \sim p(z_1, z_2)}}\left[d\big(g(f(x, z_1), s', z_2), x\big)\right] \le \epsilon_2, \qquad (4)$$
where the constant $\epsilon_2 \ge 0$ defines the allowed distortion for the generator. In Figure 1 we show the difference between (a) minimizing the log-likelihood of the adversary, (b) maximizing the entropy of the adversary, and (c) maximizing the entropy of the adversary while also synthetically replacing the sensitive attribute with a random sample.
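A minimal sketch of the discriminator loss in Eq. 3 could look as follows, assuming a sensitive attribute with `n_classes` values and a classifier $h_g$ with `n_classes + 1` outputs whose last class is "fake"; shapes and the noise dimension are illustrative.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(h_g, g, f, x, s, s_new, n_classes):
    z1 = torch.randn(x.size(0), 16)
    z2 = torch.randn(x.size(0), 16)
    x_fake = g(f(x, z1), s_new, z2)                # x'' with synthetic attribute s'
    # Generated images are pushed into the extra "fake" class ...
    fake_label = torch.full((x.size(0),), n_classes, dtype=torch.long)
    loss_fake = F.cross_entropy(h_g(x_fake.detach()), fake_label)
    # ... while real images keep their true sensitive-attribute label s.
    loss_real = F.cross_entropy(h_g(x), s)
    return loss_fake + loss_real
```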
The paper introduces a framework to privatize sensitive attributes of data using adversarial representation learning. The proposed method consists of a “filter” that removes the sensitive attribute from the data representation, and a “generator” that replaces the removed sensitive attribute with a randomly sampled synthetic value. The authors argue that the second step done by the generator enhances privacy, and use experiments on real image data to verify their method and compare it with a baseline.
Model-Based Offline Planning
1 INTRODUCTION .

Learnt policies for robotic and industrial systems have the potential both to increase existing systems' efficiency and robustness, and to open up possibilities for systems previously considered too complex to control. Learnt policies also afford the possibility for non-experts to program controllers for systems that would currently require weeks of specialized work. Currently, however, most approaches for learning controllers require significant interactive time with a system to converge to a performant policy. This is often either undesirable or impossible due to operating cost, safety issues, or system availability. Fortunately, many systems are designed to log sufficient data about their state and control choices to create a dataset of operator commands and resulting system states. In these cases, controllers could be learned offline, using algorithms that produce a good controller from these logs alone, without ever interacting with the system. In this paper we propose such an algorithm, called Model-Based Offline Planning (MBOP), which is able to learn policies directly from logs of a semi-performant controller without interacting with the corresponding environment. It is able to leverage these logs to generate a more performant policy than the one used to generate them, and the resulting policy can subsequently be goal-conditioned or constrained dynamically during system operation. Learning from logs of a system is often called 'Offline Reinforcement Learning' (Wu et al., 2019; Peng et al., 2019; Fujimoto et al., 2019; Wang et al., 2020), and both model-free (Wu et al., 2019; Wang et al., 2020; Fujimoto et al., 2019; Peng et al., 2019) and model-based (Yu et al., 2020; Kidambi et al., 2020) approaches have been proposed to learn policies in this setting. Current model-based approaches, MOPO (Yu et al., 2020) and MoREL (Kidambi et al., 2020), learn a model to train a model-free policy in a Dyna-like (Sutton & Barto, 2018) manner. Our proposed approach, MBOP, is a model-based approach that leverages Model-Predictive Control (MPC) (Rault et al., 1978) and extends the MPPI (Williams et al., 2017b) trajectory optimizer to provide a goal- or reward-conditioned policy using real-time planning. It combines three main elements: a learnt world model, a learnt behavior-cloning policy, and a learnt fixed-horizon value function. MBOP's key advantages are its data-efficiency and adaptability. MBOP is able to learn policies that perform better than the demonstration data from as little as 100 seconds of simulated system time (equivalent to 5000 steps). A single trained MBOP policy can be conditioned with a reward function, a goal state, as well as state-based constraints, all of which can be non-stationary, allowing for easy control by a human operator or a hierarchical system. Given these two key advantages, we believe it to be a good candidate for real-world use in control systems with offline data. We contextualize MBOP relative to existing work in Section 2, and describe MBOP in Section 3. In Section 4.2, we demonstrate MBOP's performance on standard benchmark tasks for offline RL, and in Section 4.3 we demonstrate MBOP's performance in zero-shot adaptation to varying task goals and constraints. In Section 4.4 we perform an ablation analysis and consider the combined contributions of MBOP's various elements.

2 RELATED WORKS .
Model-based approaches with neural networks have shown promising results in recent years. Guided Policy Search (Levine & Koltun, 2013) leverages differential dynamic programming as a trajectory optimizer on locally linear models, and caches the resulting piece-wise policy in a neural network. Williams et al. (2017b) show that a simple model-based controller can quickly learn to drive a vehicle on a dirt track; the BADGR robot (Kahn et al., 2020) also uses Model-Predictive Path Integral (MPPI) control (Williams et al., 2017a) with a learned model to learn to navigate to novel locations; Yang et al. (2020) show good results learning legged locomotion policies using MPC with learned models; and Ebert et al. (2018) demonstrate flexible robot arm controllers leveraging learned models with image-based goals. Silver et al. (2016) have shown the power of additional explicit planning in various board games, including Go. More recently, planning-based algorithms such as PlaNet (Hafner et al., 2019b) have shown strong results in pixel-based continuous control tasks by leveraging latent variational RNNs. Simpler approaches such as PDDM (Nagabandi et al., 2020) or PETS (Chua et al., 2018) have shown good results using full state information, both in simulation and on real robots. MBOP is strongly influenced by PDDM (Nagabandi et al., 2020) (itself an extension of PETS (Chua et al., 2018)), in particular in the use of ensembles and how they are leveraged during planning. PDDM was not designed for offline use, and MBOP adds a value-function composition as well as a policy prior during planning to increase data efficiency and strengthen the set of priors for offline learning. It leverages the same trajectory re-weighting approach used in PDDM and takes advantage of its beta-mixture of the trajectory buffer T. Both MoREL (Kidambi et al., 2020) and MOPO (Yu et al., 2020) leverage model-based approaches for offline learning. This is similar to the approaches used in MBPO (Janner et al., 2019) and DREAMER (Hafner et al., 2019a), both of which leverage a learnt model to learn a model-free controller. MoREL and MOPO, however, due to their offline nature, train their model-free learner using a surrogate MDP which penalizes underlying model uncertainty. They do not use the models for direct planning on the problem, thus making the final policy task-specific. MOPO demonstrates the ability to alter the reward function and re-train a new policy according to this reward, but it cannot leverage the final policy to dynamically adapt to an arbitrary goal or constrained objective. Matsushima et al. (2020) use a model-based policy for deployment-efficient RL. Their use case is a mix between offline and online RL, where they consider a limited number of deployments. They share a similarity with our work in the sense that they also use a behavior-cloning policy $\pi_\beta$ to guide trajectories in a learned ensemble model, but they perform policy improvement steps on a parametrized policy initialized from $\pi_\beta$ using a behavior-regularized objective function. Similarly to MoREL and MOPO, their approach learns a parameterized policy for acting in the real system. The use of a value function to extend the planning horizon of a planning-based policy has been previously proposed by Lowrey et al. (2018) with the POLO algorithm. POLO uses a ground-truth model (e.g., a physics simulator) with MPPI/MPC for trajectory optimization.
POLO additionally learns an approximate value function through interaction with the environment, which is then appended to optimized trajectories to improve return estimation. Aside from the fact that MBOP uses an entirely approximate, learned model, it uses a similar idea, but with a fixed-horizon value function to avoid bootstrapping, and separate heads of the ensemble during trajectory optimization. BC-trained policies as sampling priors have been considered by POPLIN (Wang & Ba, 2019). POPLIN does not use value bootstrapping, and re-samples an ensemble head at each timestep during rollouts, which likely provides less consistent variations in simulated plans. They show strong results relative to a series of model-based and model-free approaches, but do not manage to perform well on the Gym Walker environment. Additionally, they are overall much less data-efficient than MBOP and do not demonstrate performance in the offline setting. Task-time adaptation using model-based approaches has been considered previously in the model-based literature. Lu et al. (2019) look at mixing model-free and model-based approaches using notions of uncertainty to allow for adaptive controllers for non-stationary problems. Rajeswaran et al. (2020) use a game-theoretic framework to describe two adaptive learners that are both more sample-efficient than common MBRL algorithms and more robust to non-stationary goals and system dynamics. MBOP is able to perform zero-shot adaptation to non-stationary goals and constraints, but does not provide a mechanism for dealing with non-stationary dynamics. If brought into the online setting, approaches from these algorithms, such as concentrating on recent data, could however be leveraged to allow for this. Previous approaches all consider various elements present in MBOP, but none consider the full combination of a BC prior on the trajectory optimizer with a value-function initialization, especially in the case of fully offline learning. Along with this high-level design, many implementation details, such as consistent ensemble sampling during rollouts or averaging returns over ensemble heads, appear in our experience to be important for a stable controller.

3 MODEL-BASED OFFLINE PLANNING .

Our proposed algorithm, MBOP (Model-Based Offline Planning), is a model-based RL algorithm able to produce performant policies entirely from logs of a less-performant policy, without ever interacting with the actual environment. MBOP learns a world model and leverages a particle-based trajectory optimizer and model-predictive control (MPC) to produce a control action conditioned on the current state. It can be seen as an extension of PDDM (Nagabandi et al., 2020), with a behavior-cloned policy used as a prior on action sampling, and a fixed-horizon value function used to extend the planning horizon. In the following sections, we introduce the Markov Decision Process (MDP) formalism, briefly explain planning-based approaches, discuss offline learning, and then introduce the elements of MBOP before describing the algorithm in full.

3.1 MARKOV DECISION PROCESS .

Let us model our tasks as a Markov Decision Process (MDP), which can be defined as a tuple $(S, A, p, r, \gamma)$, where an agent is in a state $s_t \in S$ and takes an action $a_t \in A$ at timestep $t$. When in state $s_t$ and taking action $a_t$, the agent will arrive in a new state $s_{t+1}$ with probability $p(s_{t+1} \mid s_t, a_t)$, and receive a reward $r(s_t, a_t, s_{t+1})$.
The cumulative reward over a full episode is called the return $R$ and can be truncated to a specific horizon as $R^H$. Generally, reinforcement learning and control aim to provide an optimal policy function $\pi: S \to A$ which provides an action $a_t$ in state $s_t$ that leads to the highest long-term return: $\pi^*(s_t) = \arg\max_{a \in A} \sum_{t=1}^{\infty} \gamma^t r(s_t, \pi^*(s_t))$, where $\gamma$ is a time-wise discounting factor that we fix to $\gamma = 1$, therefore only considering finite-horizon returns.

3.2 PLANNING WITH LEARNED MODELS .

A large body of the contemporary work with MDPs involves Reinforcement Learning (RL) (Sutton & Barto, 2018) with model-free policies (Mnih et al., 2015; Lillicrap et al., 2015; Schulman et al., 2017; Abdolmaleki et al., 2018). These approaches learn some form of policy network which provides its approximation of the best action $a_t$ for a given state $s_t$, often as a single forward pass of the network. MBOP and other model-based approaches (Deisenroth & Rasmussen, 2011; Chua et al., 2018; Williams et al., 2017b; Hafner et al., 2019b; Lowrey et al., 2018; Nagabandi et al., 2020) are very different. They learn an approximate model of their environment and then use a planning algorithm to find a high-return trajectory through this model, which is then applied to the environment.¹ This is interesting because the final policy can be more easily adapted to new tasks, be made to respect constraints, or offer some level of explainability. When bringing learned controllers to industrial systems, many of these aspects are highly desirable, even at the expense of raw performance.

¹This approach is often called Model-Based Reinforcement Learning (MBRL) in the literature, but we chose to talk more generally about planning with learned models, as the presence of a reward is not fundamentally necessary and the notion of reinforcement is much less present.
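To preview the algorithm described in Section 3, the following is a heavily condensed sketch of one planning step: action sequences are sampled around a behavior-cloning prior, rolled through the learned model, scored by their H-step return plus a fixed-horizon terminal value, and re-weighted exponentially as in MPPI/PDDM. All callables are assumed learned components, and details such as ensemble handling, the trajectory-buffer mixture, and the 0.5 mixing weight are our own simplifications.

```python
import numpy as np

def mbop_plan(s0, model, bc_policy, value_fn, prev_plan, H=10, n_traj=100,
              sigma=0.1, kappa=1.0, rng=np.random.default_rng()):
    returns = np.zeros(n_traj)
    actions = np.zeros((n_traj, H) + prev_plan.shape[1:])
    for i in range(n_traj):
        s = s0
        for t in range(H):
            # Behavior-cloning policy as a sampling prior, mixed with the
            # previous plan and perturbed with Gaussian noise.
            a = (0.5 * (bc_policy(s) + prev_plan[t])
                 + sigma * rng.standard_normal(prev_plan.shape[1:]))
            s_next, r = model(s, a)                # learned dynamics + reward
            returns[i] += r
            actions[i, t] = a
            s = s_next
        returns[i] += value_fn(s)                  # fixed-horizon value, no bootstrapping
    # Exponential re-weighting of trajectories (MPPI-style).
    w = np.exp(kappa * (returns - returns.max()))
    plan = (w[:, None, None] * actions).sum(0) / w.sum()
    return plan                                    # execute plan[0], then re-plan (MPC)
```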
This work studies the offline RL problem and proposes MBOP to address it. The proposed method learns ensembles of dynamics models, behavioral policies, and value functions using the offline dataset. Subsequently, the approach uses online MPC with a learned terminal value function. The paper demonstrates experimental results on standard benchmark tasks (RLU and D4RL) as well as zero-shot adaptation results.
Variational Auto-Encoder Architectures that Excel at Causal Inference
1 INTRODUCTION .

As one of the main tasks in studying causality (Peters et al., 2017; Guo et al., 2018), the goal of Causal Inference is to figure out how much the value of a certain variable would change (i.e., the effect) had another certain variable (i.e., the cause) changed its value. A prominent example is the counterfactual question (Rubin, 1974; Pearl, 2009) "Would this patient have lived longer [and by how much], had she received an alternative treatment?". Such a question is often asked in the context of precision medicine, which attempts to identify which medical procedure $t \in \mathcal{T}$ will benefit a certain patient $x$ the most, in terms of the treatment outcome $y \in \mathbb{R}$ (e.g., survival time). A fundamental problem in causal inference is the unobservability of the counterfactual outcomes (Holland, 1986). That is, for each subject $i$, any real-world dataset can only contain the outcome of the administered treatment (aka the observed outcome $y_i$), but not the outcome(s) of the alternative treatment(s) (aka the counterfactual outcome(s)), i.e., $y^t_i$ for $t \in \mathcal{T} \setminus \{t_i\}$. In other words, the causal effect is never observed (i.e., it is missing in any training data) and cannot be used to train predictive models, nor can it be used to evaluate a proposed model. This makes estimating causal effects a more difficult problem than that of generalization in the supervised learning paradigm. In general, we can categorize most machine learning algorithms into two general approaches, which differ in how the input features $x$ and their target values $y$ are modeled (Ng & Jordan, 2002): Discriminative methods focus solely on modeling the conditional distribution $p(y|x)$ with the goal of direct prediction of $y$ for each instance $x$. For prediction tasks, discriminative approaches are often more accurate, since they use the model parameters more efficiently than generative approaches. Most of the current causal inference methods are discriminative, including the Balancing Neural Network (BNN) (Johansson et al., 2016), the CounterFactual Regression Network (CFR-Net) (Shalit et al., 2017), and CFR-Net's extensions, cf. (Yao et al., 2018; Hassanpour & Greiner, 2019; 2020), as well as Dragon-Net (Shi et al., 2019). Generative methods, on the other hand, describe the relationship between $x$ and $y$ by their joint probability distribution $p(x, y)$. This, in turn, allows a generative model to answer arbitrary queries, including coping with missing features $x$ using the marginal distribution $p(x)$ or [similar to discriminative models] predicting the unknown target values $y$ via $p(y|x)$. A promising direction forward for causal inference is developing generative models, using either the Generative Adversarial Network (GAN) (Goodfellow et al., 2014) or the Variational Auto-Encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014). This has led to two generative approaches for causal inference: GANs for inference of Individualised Treatment Effects (GANITE) (Yoon et al., 2018) and the Causal Effect VAE (CEVAE) (Louizos et al., 2017). However, neither of the two achieves competitive performance in terms of treatment effect estimation compared to the discriminative approaches.
Although discriminative models have excellent predictive performance, they suffer from two drawbacks: (i) overfitting, and (ii) making highly-confident predictions, even for instances that are "far" from the observed training data. Generative models based on Bayesian inference, on the other hand, can handle both of these drawbacks: issue (i) can be minimized by taking an average over the posterior distribution of model parameters; and issue (ii) can be addressed by explicitly providing model uncertainty via the posterior (Gordon & Hernández-Lobato, 2020). Although exact inference is often intractable, an efficient approximation to the parameter posterior distribution is possible through variational methods. Here, we use the Variational Auto-Encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) for the Bayesian inference component of our causal inference method.

Contribution: In this paper, we propose three interrelated Bayesian model architectures (namely Series, Parallel, and Hybrid) that employ the VAE framework to address the task of causal inference for binary treatments. We find that the best performing architecture is the Hybrid model, which is [partially] successful in decomposing the underlying factors of any observational dataset. This is a valuable property, as it means the model can accurately estimate all treatment outcomes. We demonstrate that these models significantly outperform the state-of-the-art in terms of treatment effect estimation performance on two publicly available benchmarks, as well as on a fully synthetic dataset that allows for detailed performance analyses.

2 RELATED WORKS .

CFR-Net Shalit et al. (2017) considered the binary treatment task and attempted to learn a representation space $\Phi$ that reduces selection bias by making $\Pr(\Phi(x) \mid t=0)$ and $\Pr(\Phi(x) \mid t=1)$ as close to each other as possible, provided that $\Phi(x)$ retains enough information that the learned regressors $\{h^t(\Phi(\cdot)) : t \in \{0, 1\}\}$ can generalize well on the observed outcomes. Their objective function includes $L[y_i, h^{t_i}(\Phi(x_i))]$, the loss of predicting the observed outcome for sample $i$ (described as $x_i$), weighted by $\omega_i = \frac{t_i}{2u} + \frac{1-t_i}{2(1-u)}$, where $u = \Pr(t=1)$. This is effectively setting $\omega_i = \frac{1}{2\Pr(t_i)}$, where $\Pr(t_i)$ is the probability of selecting treatment $t_i$ over the entire population.

DR-CFR Hassanpour & Greiner (2020) argued against the standard implicit assumption that all of the covariates $X$ are confounders (i.e., contribute to both treatment assignment and outcome determination). Instead, they proposed a graphical model similar to that in Figure 1 and designed a discriminative causal inference approach accordingly, built on top of the CFR-Net. Specifically, their model, named Disentangled Representations for CFR (DR-CFR), includes three representation networks, each trained with constraints to ensure that each component corresponds to its respective underlying factor. While the idea behind DR-CFR provides an interesting intuition, it is known that only generative models (and not discriminative ones) can truly identify the underlying data generating mechanism. This paper is a step in this direction.
Dragon-Net Shi et al. (2019)'s main objective was to estimate the Average Treatment Effect (ATE), which they explain requires a two-stage procedure: (i) fit models that predict the outcomes for both treatments; and (ii) find a downstream estimator of the effect. Their method is based on a classic result from strong ignorability, i.e., Theorem 3 in (Rosenbaum & Rubin, 1983), which states:
$$(y^1, y^0) \perp t \mid x \;\;\&\;\; \Pr(t=1 \mid x) \in (0,1) \implies (y^1, y^0) \perp t \mid b(x) \;\;\&\;\; \Pr(t=1 \mid b(x)) \in (0,1),$$
where $b(x)$ is a balancing score¹. They consider the propensity score as a balancing score and argue that only the parts of $X$ relevant for predicting $T$ are required for the estimation of the causal effect². This theorem only provides a way to match treated and control instances though, i.e., it helps find potential counterfactuals from the alternative group to calculate ATE. Shi et al. (2019), however, used this theorem to derive minimal representations on which to regress to estimate the outcomes.

¹That is, $X \perp T \mid b(X)$ (Rosenbaum & Rubin, 1983).
²The authors acknowledge that this would hurt the predictive performance for individual outcomes. As a result, this yields inaccurate estimation of Individual Treatment Effects (ITEs).

GANITE Yoon et al. (2018) proposed the counterfactual GAN, whose generator $G$, given $\{x, t, y^t\}$, estimates the counterfactual outcomes ($\hat{y}^{\neg t}$); and whose discriminator $D$ tries to identify which of $\{[x, 0, y^0], [x, 1, y^1]\}$ is the factual outcome. It is, however, unclear why this would require $G$ to produce samples that are indistinguishable from the factual outcomes, especially as $D$ can just learn the treatment selection mechanism instead of distinguishing the factual outcomes from counterfactuals. Although this work is among the few generative approaches for causal inference, our empirical results (in Section 4) show that it does not effectively estimate counterfactual outcomes.

CEVAE Louizos et al. (2017) used a VAE to extract latent confounders from their observed proxies in $X$. While this is an interesting step in the right direction, empirical results show that it does not always accurately estimate treatment effects (see Section 4). The authors note that this may be because CEVAE is not able to address the problem of selection bias. Another reason that we think contributes to CEVAE's sub-optimal performance is its assumed graphical model of the underlying data generating mechanism (depicted in Figure 2). This model assumes that there is only one latent variable $Z$ (confounding $T$ and $Y$) that generates the entire observational data; however, we know from (Kuang et al., 2017) and (Hassanpour & Greiner, 2020) that there must be more (see Figure 1).

M1 and M2 VAEs In an attempt to enhance conventional representation learning with VAEs, referred to as the M1 model (Kingma & Welling, 2014; Rezende et al., 2014), in a semi-supervised manner, Kingma et al. (2014) proposed the M2 VAE. While the M1 model helps learn latent representations from the covariate matrix $X$ alone, the M2 model allows the target information to also guide the representation learning process. In our work, the target information includes the treatment bit $T$ as well as the observed outcome $Y$. This additional information helps learn more expressive representations, which is not possible with the unsupervised M1 model. Appendix A.1 presents a more detailed overview of the M1 and M2 VAEs.

3 METHOD .
Following (Hassanpour & Greiner, 2020) and without loss of generality, we assume that the random variable $X$ follows an unknown joint probability distribution $\Pr(X \mid \Gamma, \Delta, \Upsilon, \Xi)$, where $\Gamma$, $\Delta$, $\Upsilon$, and $\Xi$ are non-overlapping independent factors. Moreover, we assume that treatment $T$ follows $\Pr(T \mid \Gamma, \Delta)$ (i.e., $\Gamma$ and $\Delta$ are the factors responsible for selection bias) and outcome $Y^T$ follows $\Pr^T(Y^T \mid \Delta, \Upsilon)$; see Figure 1. Observe that the factor $\Gamma$ (resp. $\Upsilon$) partially determines only $T$ (resp. $Y$), but not $Y$ (resp. $T$); and $\Delta$ includes the confounding factors between $T$ and $Y$. Our goal is to design generative model architectures that encourage learning disentangled representations of these four underlying latent factors (see Figure 1). In other words, we attempt to decompose and separately learn the underlying factors that are responsible for determining $T$ and $Y$. To achieve this, we propose three architectures (as illustrated in Figures 3(a), 3(b), and 3(c)), each employing a VAE (Kingma & Welling, 2014; Rezende et al., 2014) that includes a decoder (generative model) and an encoder (variational posterior). Specifically, we use the M1 and M2 models from (Kingma et al., 2014) as our building blocks, leading to a Series architecture, a Parallel architecture, and a Hybrid one. Each component is parametrized as a deep neural network.
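To fix ideas, a schematic of how the latent space can be split so that the treatment and outcome heads draw on the intended factors is sketched below; the dimensions, layer sizes, and the simplification of treating `t` as a single batch-wide index are our own illustrative assumptions, and the actual Series/Parallel/Hybrid wiring of M1/M2 VAEs (with variational posteriors and reconstruction terms) is more involved.

```python
import torch
import torch.nn as nn

class DisentangledHeads(nn.Module):
    def __init__(self, dx, dz=8):
        super().__init__()
        # One encoder block per latent factor (Xi is omitted since it
        # influences neither T nor Y).
        self.enc_gamma = nn.Sequential(nn.Linear(dx, 64), nn.ReLU(), nn.Linear(64, dz))
        self.enc_delta = nn.Sequential(nn.Linear(dx, 64), nn.ReLU(), nn.Linear(64, dz))
        self.enc_upsilon = nn.Sequential(nn.Linear(dx, 64), nn.ReLU(), nn.Linear(64, dz))
        self.t_head = nn.Linear(2 * dz, 1)                 # T depends on (Gamma, Delta)
        self.y_heads = nn.ModuleList([nn.Linear(2 * dz, 1) for _ in range(2)])

    def forward(self, x, t):
        # t is an int in {0, 1}, applied to the whole batch for simplicity.
        g, d, u = self.enc_gamma(x), self.enc_delta(x), self.enc_upsilon(x)
        t_logit = self.t_head(torch.cat([g, d], dim=-1))
        y = self.y_heads[t](torch.cat([d, u], dim=-1))     # Y^t depends on (Delta, Upsilon)
        return t_logit, y
```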
Some generative models have been proposed for causal effect estimation, but they often do not achieve competitive performance. Recent work suggested that a combination of generative and discriminative models may improve treatment effect estimation with observational data, and further suggested a generic latent variable model for factorizing selection bias as well as outcome. The author(s) build on this work and propose a set of deep generative models, with a hybrid objective function (generative + discriminative), that outperforms current approaches for ATE.
Real-time Uncertainty Decomposition for Online Learning Control
1 INTRODUCTION .

With improved sensor quality and more powerful computational resources, data-driven models are increasingly applied in safety-critical domains such as autonomous driving or human-robot interaction (Grigorescu et al., 2020). However, measurements usually suffer from noise, and the available data is often scarce compared to all possible states of a complex environment. This requires controllers which rely on supervised learning techniques to react properly to ignorance and imprecision in the model in order to avoid dangerous situations. To allow an implementation of risk-averse (for exploitation and safety improvements) or risk-seeking (for exploration) behavior, the model must clearly disaggregate the information in the data into more than just the "best estimate" and differentiate between different sources of uncertainty. Besides the point estimate of a model, one can distinguish aleatoric uncertainty (uncertainty in the data) from epistemic uncertainty (uncertainty in the model). The former is irreducible, as it is inherent to the stochastic process the data is recorded from, while the latter originates from the limited expressive power of the model or scarce training samples (Der Kiureghian & Ditlevsen, 2009). Gaussian processes (GPs) inherently provide a measure of their fidelity through the posterior standard deviation prediction (Rasmussen & Williams, 2006). They also allow differentiating aleatoric uncertainty (typically considered as observation noise) from epistemic uncertainty (modeled by the kernel). However, the former allows only homoscedastic (constant) estimates, while real-world applications typically require heteroscedastic uncertainty models. An extension to heteroscedastic GP regression is presented in (Lazaro-Gredilla & Titsias, 2011); however, it is a variational approximation that further increases the computational complexity, and GPs generally suffer from poor scaling to large datasets (Quinonero-Candela & Rasmussen, 2005). In deep learning, the modeling of uncertainties has also gained increasing interest over the past years (Kendall & Gal, 2017). Heteroscedastic aleatoric uncertainty can be captured well if the output of the stochastic process can be observed directly and its parametric distribution is known. For more general cases, however, approximation techniques such as variational inference or sampling are required (Bishop, 2006). For epistemic uncertainty estimation with neural networks (NNs), the key idea behind most approaches can be summarized as follows. Randomness is introduced into the neural network through sampling during training and inference. While training robustifies the network against the injected noise at the training locations, it allows the noise to pass to the output at input locations where no training data is available. For inference, multiple predictions of the network are sampled for the same inputs, allowing a statistical measure of the uncertainty at the output to be computed (Depeweg et al., 2018; Depeweg, 2019). However, sampling the network during inference imposes a high computational burden and is therefore not suitable for real-time critical control tasks. An ensemble-based approach by Lakshminarayanan et al. (2017) works with far fewer instances of a network, but does not explicitly differentiate between aleatoric and epistemic uncertainty.
Despite those drawbacks in the uncertainty representation of data-driven models, the control community has started to increasingly incorporate such representations in decision making for various applications. For example, Fanger et al. (2016) use an epistemic uncertainty measure to dynamically assign leader or follower roles for cooperative robotic manipulation. The work by Berkenkamp et al. (2016) ensures safe exploration of an unknown task space based on GP error bounds, and a gain-scheduling approach for computed torque control is presented in Beckers et al. (2019). The work by Liu et al. (2020) considers the epistemic uncertainty as an estimate of the distance between source and target domains (known as domain shift) to design a robust controller. In Umlauft & Hirche (2020) and Chowdhary et al. (2015), an online learning control approach for GP models is considered, which approaches the dual control problem (Wittenmark, 1995) as a model-based adaptive control problem. The work by Yesildirak & Lewis (1995) uses neural networks for adaptive control in a continuous-time fashion, which relies on a time-triggered (periodic) update of the model rather than the event-based adaptation we propose in this work. More generally, risk-averse control strategies have been presented by Umlauft et al. (2018); Medina et al. (2013); Todorov & Li (2005). However, all of these approaches only consider the model fidelity in general and do not differentiate between aleatoric and epistemic uncertainty. Therefore, the main contributions of this paper are the following. We propose a deep learning framework with a real-time capable epistemic uncertainty prediction. The resulting online learning model is employed by a controller which shows a distinct reaction to epistemic and aleatoric uncertainty. We evaluate the proposed methods on synthetic and real-world benchmark data sets, and simulate a quadcopter controller which learns online the disturbances injected by thermals.

2 PROBLEM FORMULATION .

Consider the discrete-time dynamical system¹ with control $u \in \mathbb{U} \subseteq \mathbb{R}^{d_u}$ and state $x \in \mathbb{X} \subseteq \mathbb{R}^{d_x}$,
$$x_{k+1} = g(x_k, u_k) + y_k, \qquad (1)$$
where $g: \mathbb{X} \times \mathbb{U} \to \mathbb{X}$ is known, while $y$ is an i.i.d. random vector sampled at every time step from
$$y_k \sim \mathcal{D}(f(x_k)), \qquad (2)$$
where $\mathcal{D}(\cdot)$ denotes a known distribution over real vectors $y \in \mathbb{Y} \subseteq \mathbb{R}^{d_x}$ that depends on the parameters $p \in \mathbb{P} \subseteq \mathbb{R}^{d_p}$. These state-dependent parameters arise from an unknown mapping $f: \mathbb{X} \to \mathbb{P}$. We generally denote the unknown component $y_k$ of the dynamical system as the disturbance, but it could also be the unmodeled part of the dynamics, such as friction, or serve as a black-box model of the dynamics if no analytic description is available ($g(\cdot, \cdot) = 0$). We assume measurements can be taken to obtain the data set $\mathcal{D}_{tr} = \{(x_i, y_i)\}_{i=1}^{N_{tr}}$ with inputs $\mathbb{X}_{tr} = \{x_i\}_{i=1}^{N_{tr}}$ and outputs $\mathbb{Y}_{tr} = \{y_i\}_{i=1}^{N_{tr}}$, such that a model $\hat{f}(\cdot)$ of $f(\cdot)$ can be learned. $N_{tr} \in \mathbb{N}$ denotes the current number of training data points and is initially zero, i.e., the training set is empty. The task is to choose a control input $u_k$ such that the system (1) follows a given reference $x_{des}$. Furthermore, the controller can take new measurements of $y$ to improve its model over time. We consider each measurement of $y$ to be costly, and therefore new training points should only be collected when necessary.
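A toy instantiation of Eqs. 1-2 may clarify the setting; here $\mathcal{D}$ is taken to be a Gaussian whose mean and standard deviation form the parameter vector $p = f(x)$, and the choices of $g$, $f$, and the controller below are placeholders, not the paper's benchmark systems.

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda x, u: 0.9 * x + u                          # known part of the dynamics
f = lambda x: (0.1 * np.sin(x), 0.05 + 0.05 * x**2)   # unknown parameter map p = f(x)

def step(x, u):
    mu, sigma = f(x)                                  # state-dependent parameters
    y = rng.normal(mu, sigma)                         # y_k ~ D(f(x_k))
    return g(x, u) + y, y

x, data = 0.0, []
for k in range(50):
    u = -0.5 * x                                      # placeholder controller
    x, y = step(x, u)
    data.append((x, y))                               # candidate (x_i, y_i) pairs for D_tr
```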
Applications where data collection is costly can be found in distributed systems, where multiple sensors share the same scarce communication channel, or in autonomous systems with limited data storage capacity. The need for high data efficiency requires models that judge their own fidelity in real-time in order to identify valuable measurements. As existing approaches for modeling epistemic uncertainty in deep learning suffer from high computational complexity, we first focus on developing a novel method for epistemic uncertainty prediction before proposing an online learning control strategy which makes use of a neural network model that decomposes its uncertainties.

¹Bold/capital symbols generally denote vectors/matrices; $\mathcal{D}(\cdot)$/$\mathcal{U}(\cdot)$/$\mathcal{N}(\cdot)$/$\mathcal{B}(\cdot)$ denote a general parametric/the uniform/Gaussian/Bernoulli distribution, respectively.

3 EPISTEMIC UNCERTAINTY ESTIMATION .

3.1 RELATED WORK .

Learning an epistemic uncertainty estimator is not straightforward, as it measures the absence of training data. Most prominently, Gaussian processes with stationary kernels offer such a measure implicitly with their posterior variance prediction. However, GPs are known to scale poorly to large data sets: while regression and uncertainty predictions can be performed with $O(N_{tr})$ and $O(N_{tr}^2)$ computations, respectively, incorporating a new data point takes $O(N_{tr}^3)$ computations (even without hyperparameter optimization; $O(N_{tr}^2)$ for rank-1 update methods). While various methods have been proposed to make GPs computationally more efficient, including sparse GPs (Quinonero-Candela & Rasmussen, 2005), distributed GPs (Deisenroth & Ng, 2015) and local GPs (Nguyen-Tuong et al., 2009a;b), these approximations typically focus only on the precision of the point estimate and distort the uncertainty prediction. For estimating the "distance" to the training set, kernel density estimation (KDE) can also be used (Rosenblatt, 1956); however, its non-parametric nature implies that the inference time grows linearly with the number of considered data points, which we aim to avoid. More recently, several different approaches for epistemic uncertainty estimation using deep learning frameworks have been proposed. Popular approaches rely on Bayesian approximations (Depeweg et al., 2016) or permanent dropouts (not only during training to avoid overfitting) (Gal, 2016; Gal & Ghahramani, 2016). Furthermore, latent inputs can also be used to achieve a decomposition into aleatoric and epistemic uncertainty, as presented in (Depeweg et al., 2017). However, in particular for Bayesian NNs, these approaches become computationally challenging. Firstly, they have a larger number of parameters to tune than their deterministic counterparts and rely on variational inference methods (Kwon et al., 2020). Secondly, prediction requires sampling the entire network before the statistics of the output can be computed. For applications in real-time critical control problems (e.g., robotics with a sampling rate of 1 kHz), these computational burdens prohibit the employment of these techniques.
( 2019 ) , but it suffers from a quadratic space complexity in the number of weights in the network and relies on first-order Taylor approximations in the propagation of the uncertainties , which might become inaccurate depending on the non-linearity of the activation functions . 3.2 EpiOut - EXPLICITLY LEARNING EPISTEMIC UNCERTAINTY . In order to allow the estimation of epistemic uncertainty in real-time , we introduce the idea of explicitly modeling it with a separate output of a neural network , calling it EpiOut . Since the epistemic uncertainty expresses the absence of data , the original data set $\mathcal{D}_{tr}$ does not contain data for training EpiOut . Therefore , we generate an epistemic uncertainty data set , with inputs $X_{epi} = \{ \tilde{x}_j \}_{j=1}^{N_{epi}}$ and outputs $Y_{epi} = \{ \tilde{y}_j \}_{j=1}^{N_{epi}}$ concatenated in $\mathcal{D}_{epi} = \{ (\tilde{x}_j , \tilde{y}_j) \}_{j=1}^{N_{epi}}$ , $N_{epi} \in \mathbb{N}$ . Different variations for sampling the set $X_{epi}$ can be chosen depending on the desired scalability properties . A naive approach is to sample the entire input space uniformly , which suffers from the curse of dimensionality . Alternatively , we propose to sample around existing training points from a normal distribution $$X_{epi} = \bigcup_{i=1}^{N_{tr}} \{ \tilde{x}_j \sim \mathcal{N}(x_i , \Gamma) , \ j = 1 , \ldots , N_{epi}/N_{tr} \} , \quad (3)$$ where we implicitly assume that $N_{epi}$ is chosen such that $\delta = N_{epi}/N_{tr}$ is a positive integer . Supposing that a standardization of the input space to unity is performed based on $X_{tr}$ , $\Gamma = I$ can be chosen if no further knowledge on $f(\cdot)$ is available . Otherwise , scaling $\Gamma$ can be interpreted similarly to the lengthscale of a GP as a measure for how far away from a training point the prediction is reliable : a larger $\Gamma$ will lead to a further spread of $X_{epi}$ and therefore low epistemic uncertainty in the neighborhood of the training data , which would be meaningful if the true function is known to have a low Lipschitz constant , and vice versa . We propose to set $\delta$ to a multiple of $2 d_x + 1$ , which corresponds to the intuition of padding each training point in both directions of each dimension with an epi point $\tilde{x}$ . The reasoning behind the additional +1 point will become clear in the following . To define the set $Y_{epi}$ , we first compute the minimal distance ( according to any distance metric $d : X \times X \to \mathbb{R}_{0,+}$ ) to the training data for each epi point $$d_j = \min_{x \in X_{tr}} d(\tilde{x}_j , x) , \ j = 1 , \ldots , N_{epi} , \quad (4)$$ keeping in mind that the closest training data point is not necessarily the one used to generate the sample . Let $d_{N_{tr}}$ be the $N_{tr}$-th smallest element of all $d_j$ ; we generate $Y_{epi}$ and update $X_{epi}$ as follows $$\tilde{y}_j = \begin{cases} 1 , & \text{if } d_j > d_{N_{tr}} \\ 0 , \ \tilde{x}_j \leftarrow \arg\min_{x \in X_{tr}} d(\tilde{x}_j , x) , & \text{if } d_j \le d_{N_{tr}} . \end{cases} \quad (5)$$ Thus , the $N_{tr}$ points in $X_{epi}$ with the least distance to a training point are replaced by the corresponding point in $X_{tr}$ . Now the choice of $2 d_x + 1$ epi points becomes clear , as one of them will be turned into $\tilde{y} = 0$ , corresponding to “ low epistemic uncertainty ” , while the $2 d_x$ points further away from the training point receive $\tilde{y} = 1$ , indicating the opposite . Given the data set $\mathcal{D}_{epi}$ , the neural network is now equipped with one additional output , i.e. , the parameter layer is $d_p + 1$ dimensional with output $[ \hat{f}(\cdot) \ \eta(\cdot) ]^T$ . The new output $\eta(\cdot)$ is terminated with a neuron using a sigmoidal activation function , such that $\eta : X \to [ 0 , 1 ]$ . This is beneficial because it immediately allows one to judge whether the predicted uncertainty is high ( ≈ 1 ) or low ( ≈ 0 ) without any reference evaluation ( see the comparison to alternative methods below ) .
Independently of the loss function for the original network , the augmented output , also referred to as the epistemic output , is trained using a binary cross-entropy loss , which is the natural choice for binary outputs . It quantifies the uncertainty in the prediction of the other outputs based on the distance to the training data measured by $d(\cdot , \cdot)$ . For the sake of focus , we will be using the Euclidean distance ; however , the method can easily be extended to other metrics , and we leave it to future work to investigate alternatives .
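To make the construction concrete , the following sketch generates the epistemic data set per equations ( 3 ) - ( 5 ) , assuming Euclidean distance and $\Gamma = I$ . Variable names mirror the text , but the code is an illustration , not the authors' implementation :

```python
# Sketch of the epistemic data set construction, equations (3)-(5).
import numpy as np

def build_epi_dataset(X_tr, delta, rng):
    N_tr, d_x = X_tr.shape
    # Equation (3): sample delta points around each training input.
    X_epi = np.concatenate(
        [rng.normal(x, 1.0, size=(delta, d_x)) for x in X_tr], axis=0)
    # Equation (4): minimal distance of each epi point to the training set.
    dists = np.linalg.norm(X_epi[:, None, :] - X_tr[None, :, :], axis=-1)
    d_min = dists.min(axis=1)
    nearest = dists.argmin(axis=1)
    # Equation (5): the N_tr closest epi points are snapped onto their
    # nearest training input and labeled 0; all others are labeled 1.
    threshold = np.sort(d_min)[N_tr - 1]
    y_epi = (d_min > threshold).astype(float)
    snap = d_min <= threshold
    X_epi[snap] = X_tr[nearest[snap]]
    return X_epi, y_epi

rng = np.random.default_rng(0)
X_tr = rng.uniform(-1, 1, size=(50, 2))       # d_x = 2
X_epi, y_epi = build_epi_dataset(X_tr, delta=2 * 2 + 1, rng=rng)
# (X_epi, y_epi) then trains the extra sigmoid output eta(.) with a
# binary cross-entropy loss, alongside the original network loss.
```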
The authors consider the problem of efficient modeling of epistemic uncertainty, separated from aleatoric uncertainty, for neural networks. They propose a novel methodology, involving automatically constructing an epistemic uncertainty support data set used to extend a given NN with an epistemic uncertainty output. The method is compared with previous, less efficient, approaches and is applied to the important problem of data-efficient online learning of a controller for real-time use, with convincing results.
Large Batch Simulation for Deep Reinforcement Learning
1 INTRODUCTION . Speed matters . It is now common for modern reinforcement learning ( RL ) algorithms leveraging deep neural networks ( DNNs ) to require billions of samples of experience from simulated environments ( Wijmans et al. , 2020 ; Petrenko et al. , 2020 ; OpenAI et al. , 2019 ; Silver et al. , 2017 ; Vinyals et al. , 2019 ) . For embodied AI tasks such as visual navigation , where the ultimate goal for learned policies is deployment in the real world , learning from realistic simulations is important for successful transfer of learned policies to physical robots . In these cases simulators must render detailed 3D scenes and simulate agent interaction with complex environments ( Kolve et al. , 2017 ; Dosovitskiy et al. , 2017 ; Savva et al. , 2019 ; Xia et al. , 2020 ; Gan et al. , 2020 ) . Evaluating and training a DNN on billions of simulated samples is computationally expensive . For instance , the DD-PPO system ( Wijmans et al. , 2020 ) used 64 GPUs over three days to learn from 2.5 billion frames of experience and achieve near-perfect PointGoal navigation in 3D scanned environments of indoor spaces . At an even larger distributed training scale , OpenAI Five used over 50,000 CPUs and 1000 GPUs to train Dota 2 agents ( OpenAI et al. , 2019 ) . Unfortunately , experiments at this scale are out of reach for most researchers . This problem will only grow worse as the field explores more complex tasks in more detailed environments . Many efforts to accelerate deep RL focus on improving the efficiency of DNN evaluation and training – e.g. , by “ centralizing ” computations to facilitate efficient batch execution on GPUs or TPUs ( Espeholt et al. , 2020 ; Petrenko et al. , 2020 ) or by parallelizing across GPUs ( Wijmans et al. , 2020 ) . However , most RL platforms still accelerate environment simulation by running many copies of off-the-shelf , unmodified simulators , such as simulators designed for video game engines ( Bellemare et al. , 2013 ; Kempka et al. , 2016 ; Beattie et al. , 2016 ; Weihs et al. , 2020 ) , on large numbers of CPUs or GPUs . This approach is a simple and productive way to improve simulation throughput , but it makes inefficient use of computation resources . For example , when rendering complex environments ( Kolve et al. , 2017 ; Savva et al. , 2019 ; Xia et al. , 2018 ) , a single simulator instance might consume gigabytes of GPU memory , limiting the total number of instances to far below the parallelism afforded by the machine . Further , running many simulator instances ( in particular when they are distributed across machines ) can introduce overhead in synchronization and communication with other components of the RL system . Inefficient environment simulation is a major reason RL platforms typically require scale-out parallelism to achieve high end-to-end system throughput . In this paper , we crack open the simulation black box and take a holistic approach to co-designing a 3D renderer , simulator , and RL training system . Our key contribution is batch simulation for RL : designing high-throughput simulators that accept large batches of requests as input ( aggregated across different environments , potentially with different assets ) and efficiently execute the entire batch at once . ∗Correspondence to bps@cs.stanford.edu .
Exposing work en masse facilitates a number of optimizations : we reduce memory footprint by sharing scene assets ( geometry and textures ) across rendering requests ( enabling orders of magnitude more environments to be rendered simultaneously on a single GPU ) , amortize rendering work using GPU commands that draw triangles from multiple scenes at once , hide latency of scene I/O , and exploit batch transfer to reduce data communication and synchronization costs between the simulator , DNN inference , and training . To further improve end-to-end RL speedups , the DNN workload must be optimized to match high simulation throughput , so we design a computationally efficient policy DNN that still achieves high task performance in our experiments . Large-batch simulation increases the number of samples collected per training iteration , so we also employ techniques from large-batch supervised learning to maintain sample efficiency in this regime . We evaluate batch simulation on the task of PointGoal navigation ( Anderson et al. , 2018 ) in 3D scanned Gibson and Matterport3D environments , and show that end-to-end optimization of batched rendering , simulation , inference , and training yields a 110× speedup over state-of-the-art prior systems , while achieving 97 % of the task performance for depth-sensor-driven agents and 91 % for RGB-camera-driven agents . Concretely , we demonstrate sample generation and training at over 19,000 frames of experience per second on a single GPU.1 In real-world terms , a single GPU is capable of training a virtual agent on 26 years of experience in a single day.2 This new performance regime significantly improves the accessibility and efficiency of RL research in realistic 3D environments , and opens new possibilities for more complex embodied tasks in the future . 2 RELATED WORK . Systems for high-performance RL . Existing systems for high-performance RL have primarily focused on improving the efficiency of DNN components of the workload ( policy inference and optimization ) and use a simulator designed for efficient single agent simulation as a black box . For example , Impala and Ape-X used multiple worker processes to asynchronously collect experience for a centralized learner ( Espeholt et al. , 2018 ; Horgan et al. , 2018 ) . SEED RL and Sample Factory built upon this idea and introduced inference workers that centralize network inference , thereby allowing it to be accelerated by GPUs or TPUs ( Espeholt et al. , 2020 ; Petrenko et al. , 2020 ) . DD-PPO proposed a synchronous distributed system for similar purposes ( Wijmans et al. , 2020 ) . A number of efficient implementations of these ideas have been proposed as part of RL frameworks or in other deep learning libraries ( Liang et al. , 2018 ; Stooke & Abbeel , 2019 ; Küttler et al. , 2019 ) . We extend the idea of centralizing inference and learning to simulation by cracking open the simulator black box and designing a new simulation architecture for RL workloads . Our large-batch simulator is a drop-in replacement for large numbers of ( non-batched ) simulation workers , making it synergistic with existing asynchronous and synchronous distributed training schemes . We demonstrate this by combining our system with DD-PPO ( Wijmans et al. , 2020 ) . 1Samples of experience used for learning , not ‘ frameskipped ’ metrics typically used in Atari/DMLab . 2Calculated based on the rate at which a physical robot ( LoCoBot ( Carnegie Mellon University , 2019 ) ) collects observations when operating constantly at maximum speed ( 0.5 m/s ) and capturing 1 frame every 0.25 m .
It reduces the number of processes and communication overhead needed for asynchronous methods and eliminates separate simulation worker processes altogether for synchronous methods . We demonstrate this by combining our system with DD-PPO ( Wijmans et al. , 2020 ) . Concurrently with our work , CuLE , a GPU-accelerated reimplementation of the Atari Learning Environment ( ALE ) , demonstrates the benefits of centralized batch simulation ( Dalton et al. , 2020 ) . While both our work and CuLE enable wide-batch execution of their respective simulation workloads , our focus is on high-performance batch rendering of complex 3D environments . This involves optimizations ( GPU-driven pipelined geometry culling , 3D asset sharing , and asynchronous data transfer ) not addressed by CuLE due to the simplicity of rendering Atari-like environments . Additionally , like CuLE , we observe that the large training batches produced by batch simulation reduce RL sample efficiency . Our work goes further and leverages large-batch optimization techniques from the supervised learning literature to mitigate the loss of sample efficiency without shrinking batch size . Large mini-batch optimization . A consequence of large batch simulation is that more experience is collected between gradient updates . This provides the opportunity to accelerate learning via large mini-batch optimization . In supervised learning , using large mini-batches during optimization typically decreases the generalization performance of models ( Keskar et al. , 2017 ) . Goyal et al . ( 2017 ) demonstrated that model performance can be improved by scaling the learning rate proportionally with the batch size and “ warming-up ” the learning rate at the start of training . You et al . ( 2017 ) proposed an optimizer modification , LARS , that adaptively scales the learning rate at each layer , and applied it to SGD to improve generalization further . In reinforcement learning and natural language processing , the Adam optimizer ( Kingma & Ba , 2015 ) is often used instead of SGD . Lamb ( You et al. , 2020 ) combines LARS ( You et al. , 2017 ) with Adam ( Kingma & Ba , 2015 ) . We do not find that large mini-batch optimization harms generalization in reinforcement learning , but we do find it decreases sample efficiency . We adapt the techniques proposed above – learning rate scaling ( You et al. , 2017 ) and the Lamb optimizer ( You et al. , 2020 ) – to improve sample efficiency . Simulators for machine learning . Platforms for simulating realistic environments for model training fall into two broad categories : those built on top of pre-existing game engines ( Kolve et al. , 2017 ; Dosovitskiy et al. , 2017 ; Lee et al. , 2019 ; Gan et al. , 2020 ; James et al. , 2020 ) , and those built from scratch using open-source 3D graphics and physics libraries ( Savva et al. , 2017 ; 2019 ; Xia et al. , 2018 ; 2020 ; Xiang et al. , 2020 ; Zeng et al. , 2020 ) . While improving simulator performance has been a focus of this line of work , it has been evaluated in a narrow sense ( i.e . frame rate benchmarks for predetermined agent trajectories ) , not accounting for the overall performance of end-to-end RL training . We instead take a holistic approach to co-design rendering and simulation modules and their interfaces to the RL training system , obtaining significant gains in end-to-end throughput over the state of the art . 3 SYSTEM DESIGN & IMPLEMENTATION . 
Batch simulation accelerates rollout generation during RL training by processing many simulated environments simultaneously in large batches . Fig . 2 illustrates how batch simulation interacts with policy inference to generate rollouts . Simulation for sensorimotor agents , such as the PointGoal navigation task targeted by our implementation , can be separated into two tasks : determining the next environment state given an agent ’ s actions and rendering its sensory observations . Therefore , our design utilizes two components : a batch simulator that performs geodesic distance and navigation mesh ( Snook , 2000 ) computations on the CPU , and a batch renderer that renders complex 3D environments on the GPU . During rollout generation , batches of requests are passed between these components – given N agents , the simulator produces a batch of N environment states . Next , the renderer processes the batch of environment states by simultaneously rendering N frames and exposing the result directly in GPU memory . Agent observations ( from both the simulator and the renderer ) are then provided as a batch to policy inference to determine the next actions for the N agents . The key idea is that the batch simulator and renderer implementations ( in addition to the DNN workload ) take responsibility for their own parallelization . Large batch sizes ( values of N on the order of hundreds to thousands of environments ) provide opportunities for implementations to efficiently utilize parallel execution resources ( e.g. , GPUs ) as well as amortize processing , synchronization , and data communication costs across many environments . The remainder of this section describes the design and key implementation details of our system ’ s batch simulator and batch renderer , as well as contributions that improve the efficiency of policy inference and optimization in this regime .
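The interaction pattern can be summarized in a schematic rollout loop . The `BatchSimulator` / `BatchRenderer` interfaces below are hypothetical stand-ins for the system's components , not its actual API :

```python
# Schematic of batched rollout generation (cf. Fig. 2). All classes and
# shapes are illustrative placeholders.
import torch

class BatchSimulator:
    def step(self, actions):                  # CPU: navmesh/geodesic updates
        return {"goal_vec": torch.randn(actions.shape[0], 2)}

class BatchRenderer:
    def render(self, states):                 # GPU: N frames in one batch
        n = states["goal_vec"].shape[0]
        return torch.randn(n, 3, 64, 64)      # stays in GPU memory in practice

def rollout(policy, sim, renderer, n_agents=1024, horizon=32):
    actions = torch.zeros(n_agents, dtype=torch.long)
    trajectory = []
    for _ in range(horizon):
        states = sim.step(actions)            # batch of N environment states
        frames = renderer.render(states)      # batch of N observations
        logits = policy(frames, states["goal_vec"])
        actions = torch.distributions.Categorical(logits=logits).sample()
        trajectory.append((frames, actions))
    return trajectory

policy = lambda frames, goal: torch.randn(frames.shape[0], 4)  # placeholder DNN
traj = rollout(policy, BatchSimulator(), BatchRenderer())
```

Because the batch dimension is exposed to every component , the simulator , renderer , and policy can each parallelize internally and amortize transfers across all N environments .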
This paper shows that batch simulation can accelerate reinforcement learning in 3D environments. Batch simulation accepts and executes large batches of simulation requests at the same time on one accelerator. The authors demonstrate that this technique can substantially speed up processing and achieve a ~100x speedup in convergence. They also propose minor changes to DD-PPO to speed up convergence even further. The authors also included the code, which is always appreciated.
DINO: A Conditional Energy-Based GAN for Domain Translation
1 INTRODUCTION . Domain translation methods exploit the information redundancy often found in data from different domains in order to find a mapping between them . Successful applications of domain translation include image style transfer ( Zhu et al. , 2017a ) and speech-enhancement ( Pascual et al. , 2017 ) . Furthermore , these systems are increasingly being used to translate across modalities in applications such as speech-driven animation ( Chung et al. , 2017 ) and caption-based image generation ( Reed et al. , 2016 ) . Some of the most popular methods for domain translation are based on conditional Generative Adversarial Networks ( cGANs ) ( Mirza & Osindero , 2014 ) . The conditional information in cGANs is used to drive the generation and to enforce the correspondence between condition and sample . Various alternatives have been proposed for how the condition should be included in the discriminator ( Miyato & Koyama , 2018 ; Reed et al. , 2016 ) but the majority of frameworks provide it as an input , hoping that the sample ’ s correlation with the condition will play a role in distinguishing between synthesized and genuine samples . The main drawback of this approach is that it does not encourage the use of the conditional information and therefore its contribution can be diminished or even ignored . This may lead to samples that are not semantically consistent with the condition . In this paper , we propose the Dual Inverse Network Optimisation ( DINO ) framework1 which is based on energy-based GANs ( Zhao et al. , 2017 ) and consists of two networks that perform translation in opposite directions as shown in Figure 1 . In this framework , one network ( Forward network ) translates data from the source domain to the target domain while the other ( Reverse network ) performs the inverse translation . The Reverse network ’ s goal is to minimize the reconstruction error for genuine data and to maximize it for generated data . The Forward network aims to produce samples that can be accurately reconstructed back to the source domain by the Reverse network . Therefore , during training the Forward network is trained as a generator and the Reverse as a discriminator . Since discrimination is based on the ability to recover source domain samples , the Forward network is driven to produce samples that are not only realistic but also preserve the shared semantics . We show that this approach is effective across a broad range of supervised translation problems , capturing the correspondence even when domains are from different modalities ( i.e. , video-audio ) . In detail , the contributions of this paper are : 1Source code : https://github.com/DinoMan/DINO • A domain translation framework , based on a novel conditioning mechanism for energy-based GANs , where the adversarial loss is based on the prediction of the condition . • An adaptive method for balancing the Forward and Reverse networks , which makes training more robust and improves performance . • A method for simultaneously training two networks to perform translation in inverse directions , which requires fewer parameters than other domain translation methods . • The first end-to-end trainable model for video-driven speech reconstruction capable of producing intelligible speech without requiring task-specific losses to enforce correct content . 2 RELATED WORK . Domain translation covers a wide range of problems including image-to-image translation ( Isola et al. , 2017 ) , caption-based image synthesis ( Qiao et al.
, 2019 ) , and text-to-speech synthesis ( Arik et al. , 2017 ) . Unsupervised translation methods attempt to find a relationship between domains using unpaired training data . However , finding correspondence without supervision is an ill-posed problem which is why these methods often impose additional constraints on their networks or objectives . The majority of unsupervised methods are applied to image-to-image translation problems . The CoGAN model ( Liu & Tuzel , 2016 ) imposes a weight-sharing constraint on specific layers of two GANs , which are trained to produce samples from different domains . The motivation is that sharing weights in layers associated with high-level features should help preserve the overall structure of the images . This approach is extended in the UNIT framework ( Liu et al. , 2017 ) , where the generative networks are Variational Autoencoders ( VAEs ) with a shared latent space . The weight-sharing used in the CoGAN and UNIT frameworks restricts them to problems where both domains are of the same modality . A more generic method of achieving domain-correspondence is presented in the CycleGAN model proposed by Zhu et al . ( 2017a ) . The CycleGAN objective includes a cycle-consistency loss to ensure that image translation between two domains is invertible . Recently , Chen et al . ( 2020 ) showed that reusing part of the discriminators in CycleGAN as encoders for the generators achieves parameter reduction as well as better results . Although it is possible to apply the cycle consistency loss for cross-modal translation it has not been widely used in such scenarios . Unlike unsupervised methods , supervised approaches rely on having a one-to-one correspondence between the data from different domains . The Pix2Pix model ( Isola et al. , 2017 ) uses cGANs to perform image-to-image translation and has inspired many subsequent works ( Zhu et al. , 2017a ; Wang et al. , 2018 ; Park et al. , 2019 ) . Compared to unsupervised methods , supervised approaches have had more success in translating across different modalities . Notable applications include speechdriven facial animation ( Vougioukas et al. , 2020 ) and text-to-image synthesis ( Reed et al. , 2016 ; Qiao et al. , 2019 ) . It is important to note that the adversarial loss in cGANs alone is often not capable of establishing domain correspondence , which is why these approaches also rely on additional reconstruction or perceptual losses ( Johnson et al. , 2016 ) in order to accurately capture semantics . In many scenarios , the relationship between domains is not bijective ( e.g . one-to-many mapping ) hence it is desirable for translation systems to produce a diverse set of outputs for a given input . Achieving this diversity is a common issue with GAN-based translation systems ( Isola et al. , 2017 ; Liu et al. , 2017 ) since they often suffer from mode collapse . The Pix2Pix model ( Isola et al. , 2017 ) proposes using dropout in both training and inference stages as a solution to this problem . Another successful approach is to apply the diversity regularisation presented in Yang et al . ( 2019 ) . Furthermore , many works ( Zhu et al. , 2017b ; Huang et al. , 2018 ; Chang et al. , 2018 ) attempt to solve this issue by enforcing a bijective mapping between the latent space and the target image domain . Finally , adding a reconstruction loss to the objective also discourages mode collapse ( Rosca et al. , 2017 ) , by requiring that the entire support of the distribution of training images is covered . 
2.1 CONDITIONAL GANS . The most common method for conditioning GANs is proposed by Mirza & Osindero ( 2014 ) and feeds the conditional information as input to both the generator and the discriminator . Using the condition in the discriminator assumes that the correlation of samples with the condition will be considered when distinguishing between real and fake samples . However , feeding the condition to the discriminator does not guarantee that the correspondence will be captured and could even lead to the condition being ignored by the network . This issue is shared across all methods which use the condition as input to the discriminator ( Miyato & Koyama , 2018 ; Reed et al. , 2016 ) . Furthermore , it explains why these models perform well when there is structural similarity between domains ( e.g . image-to-image translation ) but struggle to maintain semantics in cases where domains are significantly different such as cross-modal applications ( e.g . video-to-speech ) . Another method presented in Park et al . ( 2019 ) proposes generator conditioning through spatially-adaptive normalisation layers ( SPADE ) . This approach has been used to produce state of the art results in image generation . It should be noted that this approach requires that source domain data be one-hot encoded semantic segmentation maps and is therefore limited to specific image-translation problems ( i.e . segmentation maps to texture image translations ) . More importantly , conditioning of the discriminator is still done by feeding the condition as an input and hence will have similar drawbacks as other cGAN based methods with regards to semantic preservation . In some cases it is possible to guide the discriminator to learn specific semantics by performing a self-supervised task . An example of this is the discriminator proposed in Vougioukas et al . ( 2020 ) which enforces audio-visual synchrony in facial animation by detecting in and out of sync pairs of video and audio . However , this adversarial loss alone can not fully enforce audio-visual synchronization which is why additional reconstruction losses are required . Finally , it is important to note that finding a self-supervised task capable of enforcing the desired semantics is not always possible . 2.2 ENERGY-BASED GANS . Energy-based GANs ( Mathieu et al. , 2015 ; Berthelot et al. , 2017 ) use a discriminator D which is an autoencoder . The generator G synthesizes a sample G ( z ) from a noise sample z ∈ Z . The discriminator output is fed to a loss function L in order to form an energy function $\mathcal{L}_D(\cdot) = \mathcal{L}(D(\cdot))$ . The objective of the discriminator is to minimize the energy assigned to real data x ∈ X and maximize the energy of generated data . The generator has the opposite objective , leading to the following minimax game : $$\min_D \max_G V(D , G) = \mathcal{L}_D(x) - \mathcal{L}_D(G(z)) \quad (1)$$ The EBGAN model proposed by Mathieu et al . ( 2015 ) uses the mean square error ( MSE ) to measure the reconstruction and a margin loss to limit the penalization for generated samples . The resulting objective thus becomes : $$\min_D \max_G V(D , G) = \| D(x) - x \| + \max( 0 , m - \| D(G(z)) - G(z) \| ) , \quad (2)$$ The margin $m$ corresponds to the maximum energy that should be assigned to a synthesized sample . Performance depends on the magnitude of the margin , with large values causing instability and small values resulting in mode collapse . For this reason , some approaches ( Wang et al. , 2017 ; Mathieu et al.
, 2015 ) recommend decaying the margin during training . An alternative approach is proposed by Berthelot et al . ( 2017 ) which introduces an equilibrium concept to balance the generator and discriminator and measure training convergence . Energy-based GANs have been successful in generating high quality images although their use for conditional generation is limited . 3 METHOD . The encoder-decoder structure used in the discriminator of an energy-based GAN gives it the flexibility to perform various regression tasks . The choice of task determines how energy is distributed and can help the network focus on specific characteristics . We propose a conditional version of EBGAN where the generator ( Forward network ) and discriminator ( Reverse network ) perform translations in opposite directions . The Reverse network is trained to minimize the reconstruction error for real samples ( low energy ) and maximize the error for generated samples ( high energy ) . The Forward network aims to produce samples that will be assigned a low energy by the Reverse network . Generated samples that do not preserve the semantics can not be accurately reconstructed back to the source domain and are thus penalized . Given a condition x ∈ X , its corresponding target y ∈ Y , and networks F : X → Y and R : Y → X , the objective of the DINO framework becomes : $$\min_R \max_F V(R , F) = \mathcal{L}(R(y) , x) - \mathcal{L}(R(F(x)) , x) , \quad (3)$$ where $\mathcal{L}(\cdot , \cdot)$ is a loss measuring the reconstruction error between two samples . Multiple choices exist for the loss function and their effects are explained in Lecun et al . ( 2006 ) . We propose using the MSE to measure reconstruction error and a margin loss similar to that used in EBGAN . However , as shown in Mathieu et al . ( 2015 ) , this method is sensitive to the value of the margin parameter m , which must be gradually decayed to avoid instability . We propose using an adaptive method inspired by BEGAN ( Berthelot et al. , 2017 ) which is based on maintaining a fixed ratio γ ∈ [ 0 , 1 ) between the reconstruction errors of positive and negative samples : $$\gamma = \frac{\mathcal{L}(R(y) , x)}{\mathcal{L}(R(F(x)) , x)} \quad (4)$$ Balancing is achieved using a proportional controller with gain λ . A typical value for the gain is λ = 0.001 . The output of the controller $k_t \in [ 0 , 1 ]$ determines the amount of emphasis that the Reverse network places on the reconstruction error of generated samples . The balance determines an upper bound for the energy of fake samples , which is a fixed multiple of the energy assigned to real samples . When the generator is producing samples with a low energy , they are pushed to this limit faster than when the generator is already producing high-energy samples . Since the ratio of reconstruction errors is kept fixed , this limit will decay as the reconstruction error for real samples improves over time . This achieves a similar result to a decaying margin loss without the necessity for a decay schedule . The output of the controller as well as the reconstruction error for real and fake samples during training is shown in Figure 2 . We notice that the controller output increases at the start of training in order to push generated samples to a higher energy value and decreases once the limit determined by γ is reached . Although this approach is inspired by BEGAN , there are some key differences which prevent BEGAN from working with the predictive conditioning proposed in this paper . These are discussed in detail in Section A.4 of the appendix .
In practice we find it advantageous to use the margin loss in combination with adaptive balancing . In this case the margin parameter serves as a hard cutoff for the energy of generated samples and helps stabilize the system at the beginning of training . As training progresses and the reconstruction of real samples improves , training relies more on the soft limit enforced by the energy balancing mechanism . In this case we can set γ = 0 to fall back to a fixed margin approach . The training objective is shown in Equation 5 . When dealing with one-to-many scenarios we find that adding a reconstruction loss to the generator ’ s objective can help improve sample diversity . $$\mathcal{L}_R = \| R(y) - x \| + k_t \cdot \max( 0 , m - \| R(F(x)) - x \| )$$ $$\mathcal{L}_F = \| R(F(x)) - x \|$$ $$k_{t+1} = k_t + \lambda \cdot \left[ \| R(y) - x \| - \gamma \cdot \| R(F(x)) - x \| \right] \quad (5)$$
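A possible realization of one training step under Equation 5 , including the proportional controller for $k_t$ , could look as follows . F and R are left abstract , and the clamping of $k_t$ to [ 0 , 1 ] follows the text ; this is a sketch , not the authors' code :

```python
# One DINO training step implementing equation (5).
import torch

def dino_step(F, R, opt_F, opt_R, x, y, k, m=1.0, gamma=0.5, lam=1e-3):
    # Reverse network (discriminator) update: low energy for real data,
    # energy of generated data pushed up to the margin, weighted by k_t.
    rec_real = torch.norm(R(y) - x)
    rec_fake = torch.norm(R(F(x).detach()) - x)
    loss_R = rec_real + k * torch.clamp(m - rec_fake, min=0.0)
    opt_R.zero_grad(); loss_R.backward(); opt_R.step()

    # Forward network (generator) update: produce samples that R can
    # reconstruct back to the source domain (only F's parameters step here).
    loss_F = torch.norm(R(F(x)) - x)
    opt_F.zero_grad(); loss_F.backward(); opt_F.step()

    # Proportional controller: k_{t+1} = k_t + lam * (rec_real - gamma * rec_fake).
    k = k + lam * (rec_real.item() - gamma * rec_fake.item())
    return float(min(max(k, 0.0), 1.0))
```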
The paper proposes an adversarial framework, DINO, to train translation models from source to target and target to source. The basic idea is to replace the generator and discriminator in the energy-based GAN with two translation networks operating in opposite directions. The discriminator (reverse generator) and the generator compete in a minimax game to reconstruct the data. The framework is further extended with an adaptive balancing mechanism between discriminator and generator to enhance training robustness.
Deconstructing the Regularization of BatchNorm
Batch normalization ( BatchNorm ) has become a standard technique in deep learning . Its popularity is in no small part due to its often positive effect on generalization . Despite this success , the regularization effect of the technique is still poorly understood . This study aims to decompose BatchNorm into separate mechanisms that are much simpler . We identify three effects of BatchNorm and assess their impact directly with ablations and interventions . Our experiments show that preventing explosive growth at the final layer at initialization and during training can recover a large part of BatchNorm ’ s generalization boost . This regularization mechanism can lift accuracy by 2.9 % for Resnet-50 on Imagenet without BatchNorm . We show it is linked to other methods like Dropout and recent initializations like Fixup . Surprisingly , this simple mechanism matches the improvement of 0.9 % of the more complex Dropout regularization for the state-of-the-art Efficientnet-B8 model on Imagenet . This demonstrates the underrated effectiveness of simple regularizations and sheds light on directions to further improve generalization for deep nets . 1 INTRODUCTION . Deep learning has made remarkable progress on a variety of domains in the last decade . While part of this progress relied on training larger models on larger datasets , it also depended crucially on the development of new training methods . A prominent example of such a development is batch normalization ( BatchNorm ) ( Ioffe and Szegedy , 2015 ) , which has become a standard component of training protocols . For example , state-of-the-art models in image recognition ( Szegedy et al. , 2017 ; He et al. , 2016 ; Tan and Le , 2019 ) , object detection ( He et al. , 2019 ; Du et al. , 2019 ) , and image segmentation ( Chen et al. , 2017 ) all use BatchNorm . Despite its prominence , the mechanisms behind BatchNorm ’ s effectiveness are not well-understood ( Santurkar et al. , 2018 ; Bjorck et al. , 2018 ; Yang et al. , 2019 ) . Perhaps at the core of the confusion is that BatchNorm has many effects . It has been correlated to reducing covariate shift ( Ioffe and Szegedy , 2015 ) , enabling higher learning rates ( Bjorck et al. , 2018 ) , improving initialization ( Zhang et al. , 2019 ) , and improving conditioning ( Desjardins et al. , 2015 ) , to name a few . These entangled effects make it difficult to properly study the technique . In this work , we deconstruct some of the effects of BatchNorm in search of much simpler components . The advantage of this approach compared to previous work is that it allows going beyond correlating these effects to BatchNorm by evaluating their impact separately . The mechanisms we consider in this work are purposefully simple . These simpler mechanisms are easier to understand and , surprisingly , they are competitive even at the level of the state-of-the-art . Our contributions can be summarized as follows : 1 . How does normalization help generalization ? We isolate and quantify the benefits of the different effects of BatchNorm using additive penalties and ablations . To our knowledge , we are the first to provide empirical evidence that BatchNorm ’ s effect of regularizing against explosive growth at initialization and during training can recover a large part of its generalization boost . Replicating this effect with Fixup initialization and the proposed additive penalty improves accuracy by 2.9 % for Resnet-50 without BatchNorm . 2 . Links to Fixup and Dropout . 
We draw novel connections between the regularization on the final layer , Dropout regularization , Fixup initialization and BatchNorm . 3 . Simplicity in regularization . The mechanism we identify can be useful as a standalone regularization . It produces a 0.9 % improvement on the Efficientnet-B8 architecture , matching the more complex Dropout regularization . 2 DECOMPOSING THE REGULARIZATION EFFECTS OF BATCH NORMALIZATION . ( Figure 1 : Diagram of a neural network showing where the mechanisms operate : embedding L2 acts on the feature embedding , BatchNorm or the standardizing loss on the embedding network , and functional L2 on the output layer . ) In this section , we break BatchNorm into different mechanisms that can be studied separately . BatchNorm ( Ioffe and Szegedy , 2015 ) is a technique that accelerates training by standardizing the intermediate activations of a deep network . It achieves this standardization by using explicit normalization instead of relying on an additive regularization . While BatchNorm has been correlated to many effects , it is still unclear which effect , if any , explains most of its generalization boost . The effects we evaluate are the implicit regularizing effect on the norms at the final layer , and also its primary effect of standardizing the intermediate layers . In order to test these purposefully simple mechanisms we will rely on ablations and additive penalties . The use of additive penalties allows us to disentangle these effects , where we control for the positive effect of BatchNorm on initialization by using the recently proposed Fixup initializer ( Zhang et al. , 2019 ) . 2.1 REGULARIZING AGAINST EXPLOSIVE GROWTH IN THE FINAL LAYER . First , we characterize the implicit effect of normalization on the final layer . Consider a neural network of the form $NN(x) = W\,\mathrm{Emb}(x)$ with loss $\mathcal{L}$ , where $x \in \mathbb{R}^I$ is the input of the network , $W \in \mathbb{R}^{K \times H}$ is the final weight matrix in the model , and $\mathrm{Emb}(x) : \mathbb{R}^I \to \mathbb{R}^H$ is a feature embedding network with L layers . Let us take the common case where $\mathrm{Emb}(x) = \mathrm{Swish}( \gamma\,\mathrm{BatchNorm}( \mathrm{PreEmb}(x) ) + \beta )$ , where $\mathrm{PreEmb}(x)$ is the output of a residual network , $\mathrm{Swish}(x) = x\,\sigma(\rho x)$ is the Swish activation ( Ramachandran et al. , 2017 ; Elfwing et al. , 2018 ) with scalar parameter $\rho$ ( typically denoted $\beta$ ) , and BatchNorm parameters $\gamma , \beta$ . BatchNorm makes weight decay regularization on $\gamma , \beta$ approximately equivalent to an additive penalty on the norm of the feature embedding $$\mathcal{L}(NN(x)) + \lambda \| \gamma \|^2 + \lambda \| \beta \|^2 = \mathcal{L}(NN(x)) + \frac{\lambda}{4}\,\mathbb{E}[ \| \mathrm{Emb}(x) \|^2 ] + O(|\rho|) . \quad (1)$$ See Appendix A for the derivation . It means that the norm of the BatchNorm parameters alone is enough to directly control the norm of the feature embedding . It is a guarantee that the norm of the feature embedding can not grow explosively during training as long as these parameters are small . This regularization effect of BatchNorm can occur even without explicit weight decay due to the tendency of stochastic gradient descent to favor low norm parameters ( Wilson et al. , 2017 ) . This equivalency does not hold without BatchNorm because the activations of the embedding network become an important factor in the norm of the feature embedding ( $\| \gamma \|^2 + \| \beta \|^2 \neq \mathbb{E}[ \| \gamma\,\mathrm{PreEmb}(x) + \beta \|^2 ]$ in general ) . Indeed , Balduzzi et al . ( 2017 ) ; Gehring et al . ( 2017 ) ; Zhang et al . ( 2019 ) have shown that the activations of residual networks without BatchNorm tend to explode exponentially in the depth of the network at initialization .
This results in an extremely large embedding norm , even though the parameters are relatively small . We confirm experimentally in Section 4.3 that networks without BatchNorm have much larger feature embedding norms . Feature Embedding L2 ( EL2 ) . We propose to assess the effect of this regularization mechanism by isolating it as the following additive penalty $$R_{EL2}(NN) = \frac{1}{H}\,\mathbb{E}[ \| \mathrm{Emb}(x) \|^2 ] . \quad (2)$$ Adding this regularization to the loss allows us to test the impact of this mechanism independently on networks with and without BatchNorm . We dub this regularization embedding L2 for short in the following sections , as it is L2 with a metric at the level of the embedding network function , with no additional penalties on the intermediate layers . It is applied right before the classification layer , in other words right after the final average pooling layer for residual networks . We will see with our experiments that this simple regularization can in large part recover the regularization boost of BatchNorm , has links to several known methods and is practically useful even at the level of the state-of-the-art . Functional L2 ( FL2 ) . Regularizing the feature embedding has an implicit effect on the final output norm ( $\mathbb{E}[ \| NN(x) \| ] \le \tfrac{1}{2} \| W \|^2 + \tfrac{H}{2} \cdot R_{EL2}(NN)$ ) . In order to test the impact of this effect , we will also evaluate a direct penalty on the norm of the final output $$R_{FL2}(NN) = \frac{1}{K}\,\mathbb{E}[ \| NN(x) \|^2 ] . \quad (3)$$ We dub this regularization functional L2 for short in the following sections , as it is L2 with a metric at the level of the full network function . It is applied on the logits of the model for classification . In Section 4.4 , we investigate whether it is regularizing this norm or the feature embedding norm that is more closely correlated with better generalization . 2.2 STANDARDIZING THE INTERMEDIATE ACTIVATIONS OF THE MODEL . As a baseline mechanism , we will also consider an additive penalty that encourages the normalization of every intermediate layer , called the standardization loss ( Collins et al. , 2019 ) . This is a useful reference to gauge the effectiveness of the regularization on the embedding described in the previous sub-section . It also helps disentangle the side-effects of normalization . The penalty is $$D_{KL}( P(x) \,\|\, \mathcal{N}(x | 0 , I) ) = \frac{1}{2} \sum_i ( \mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1 ) \quad (4)$$ where $\mu_i$ is the mean and $\sigma_i$ the standard deviation of an intermediate layer of the network . These are the same statistics that are computed by BatchNorm , and a penalty is added to the loss for all the intermediate layers . This regularization has been considered in ( Collins et al. , 2019 ) . They found that it accelerated learning but fell significantly short of the generalization boost of batch normalization . However , they did not account for the positive effect of normalization on initialization . We correct for this in our experiments using the recently proposed Fixup initialization ( Zhang et al. , 2019 ) . 3 DRAWING LINKS TO OTHER METHODS . In this section , we will draw connections between the mechanisms considered for BatchNorm and other methods . 3.1 DROPOUT REGULARIZATION . Dropout ( Hinton et al. , 2012 ) is a regularization that prevents overfitting by randomly omitting subsets of features during training . Despite its early popularity , its use has declined with the rise of batch normalized convolutional networks . Ghiasi et al .
( 2018 ) , and Zagoruyko and Komodakis ( 2016 ) , find that it produces comparatively much smaller improvements when applied to the intermediate layers of such networks . However , Tan and Le ( 2019 ) have shown that state-of-the-art results can be obtained by applying Dropout only at the input of the final layer of the network . Interestingly , we can relate this particular use of Dropout to BatchNorm . Wang and Manning ( 2013 ) have shown that Dropout with MSE loss can be isolated as an additive regularization when applied at the last layer $$\mathcal{L}_{Dropout} = \frac{1}{N} \sum_i \mathbb{E}[ ( W\,\mathrm{Dropout}( \mathrm{Emb}(x_i) ) - y_i )^2 ] = \frac{1}{N} \sum_i ( W\,\mathrm{Emb}(x_i) - y_i )^2 + \lambda\,\mathrm{tr}( W \mathrm{diag}( \mathbb{E}[ \mathrm{Emb}(x) \mathrm{Emb}(x)^T ] ) W^T )$$ where the additive Dropout regularization is $R_{Dropout}(NN) = \mathrm{tr}( W \mathrm{diag}( \mathbb{E}[ \mathrm{Emb}(x) \mathrm{Emb}(x)^T ] ) W^T )$ and $\lambda$ is the strength of the penalty . In this formulation , we can see that Dropout is related to the mechanisms in Section 2.1 as follows $$K \cdot R_{FL2}(NN) \approx R_{Dropout}(NN) \le \frac{1}{4} \| W \|^4 + \frac{H}{4} R_{EL2}(NN) . \quad (5)$$ The approximate relationship to functional L2 relies on the assumption that the features are relatively decorrelated . To our knowledge , this close but simple relationship between the expected norm of the output and Dropout with MSE loss had not been noted before . The upper bound with embedding L2 gives us a guarantee on Dropout robustness when training with BatchNorm and weight decay . In some sense , this means that networks with BatchNorm already incorporate a regularization effect similar to that conferred by Dropout . This can explain why networks with BatchNorm have tended to benefit relatively less from Dropout . The approximate relationship to functional L2 can be extended to networks with cross-entropy by using a Taylor expansion and assuming the network has low confidence . This assumption is likely only correct at initialization . In comparison , a related upper bound can be found for embedding L2 using a Taylor expansion . We will see in Section 4.5 that embedding and functional L2 can match or exceed Dropout for state-of-the-art architectures on Imagenet such as Efficientnet ( Tan and Le , 2019 ) . This is surprising because using an additive penalty at the final layer is much simpler than Dropout .
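The approximate relationship in equation ( 5 ) is easy to check numerically for a random linear readout with approximately decorrelated features . The following sketch ( sizes chosen arbitrarily ) computes $R_{EL2}$ , $R_{FL2}$ and $R_{Dropout}$ :

```python
# Numerical illustration of equations (2), (3) and the Dropout relation (5).
import torch

K, H, N = 10, 512, 4096
W = 0.05 * torch.randn(K, H)                  # final weight matrix
emb = torch.randn(N, H)                       # ~decorrelated feature embedding

R_EL2 = emb.pow(2).sum(dim=1).mean() / H      # embedding L2, equation (2)
out = emb @ W.t()
R_FL2 = out.pow(2).sum(dim=1).mean() / K      # functional L2, equation (3)

# Additive Dropout penalty: tr(W diag(E[emb emb^T]) W^T); with decorrelated
# features the diagonal of E[emb emb^T] is the per-feature second moment.
second_moment = emb.pow(2).mean(dim=0)
R_drop = (W.pow(2) * second_moment).sum()

print(K * R_FL2, R_drop)                      # approximately equal here
```

Since $\mathbb{E}[\|W e\|^2] = \mathrm{tr}( W\,\mathbb{E}[e e^T]\,W^T )$ , the two quantities coincide exactly when the off-diagonal correlations of the embedding vanish , which is what the sketch demonstrates .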
The paper empirically studies the regularization of BN. It argues that BN's effect is connected with regularizing against explosive growth in the final layer. To motivate this point, it takes a single-layer case and shows that BN approximately penalizes the norm of the feature embedding. Two regularizations are proposed according to this point and are used to justify it.
Frequency-aware Interface Dynamics with Generative Adversarial Networks
1 INTRODUCTION . Complex and chaotic physical phenomena such as liquids , gels and goo are still very challenging when it comes to representing them in as detailed and realistic a manner as possible . A variety of numerical methods have been proposed to simulate such materials , from purely Eulerian methods ( Harlow & Welch , 1965 ; Stam , 1999 ) , over particle based methods ( Gingold & Monaghan , 1977 ; Ihmsen et al. , 2014 ) , to hybrids ( Zhu & Bridson , 2005 ; Stomakhin et al. , 2013 ) . Such simulations have also been targeted with deep learning methods ( Tompson et al. , 2017 ; Mrowca et al. , 2018 ; Li et al. , 2019 ) , but despite significant advances , they remain very time-consuming and highly challenging to solve . One approach to speed up the necessary calculations and to allow for more control is to employ super-sampling . This can be seen as a form of post-processing where one runs only a low-resolution simulation and uses an up-sampling technique to approximate the behavior of a high-resolution simulation . Neural networks are of special interest here because of their capability to efficiently approximate the strongly nonlinear behavior of physical simulations . Applying neural networks to space-time data sets of physical simulations has seen strongly growing interest in recent years ( Ladicky et al. , 2015 ; Kim et al. , 2020 ) , and is particularly interesting in this context to incorporate additional constraints , e.g. , for temporal coherence ( Xie et al. , 2018 ) , or for physical plausibility ( Tompson et al. , 2017 ; Kim et al. , 2019 ) . An important aspect here is that methods based on simple distance losses , such as mean square errors , quickly reach their limits . The generated data tends to be smooth , without the necessary small-scale features . Generative adversarial networks ( GANs ) have been proposed to overcome this issue ( Goodfellow , 2016 ) . They are characterized by the fact that , apart from a generative network , they also make use of a discriminator that classifies the results of the generator with respect to the ground-truth data . Via a joint training , the distribution of solutions of the generator is guided to approximate the ground-truth data distribution . As the quality of the results is primarily determined by the discriminator network , it remains an open problem to accurately evaluate the quality of the inferred results . In our work we propose to evaluate the problem in Fourier space . In this way , we are able to evaluate the given methods reliably , and it allows us to design improved learning algorithms that more faithfully recover the small scale details of the reference data . For the core of our method , we build on an existing GAN-based architecture that employs two discriminator networks , one for the spatial and one for the temporal behaviour ( Xie et al. , 2018 ) . In terms of ground truth data , we focus on multi-phase ( solid-fluid-air ) interactions with a sharp fluid-air interface . Unlike single-phase flow , whose details are visible and relevant solely due to transparency throughout the volume , the details of our data are in most cases only visible on the surface . Of course , the internal dynamics in the volume also play a role , but they are mostly hidden from the viewer ; only the effects on the surface are visible . Furthermore , we consider phenomena that build up and take place over the course of several frames .
Thus , as we will outline below , we employ a recurrent approach that is conditioned on a previous output in order to produce the solution for a subsequent timestep . In order to represent and process fine details , we treat such detail as high-frequency displacements of a low-frequency surface , and correspondingly formulate the problem in Fourier space . The transformation into Fourier space yields an isolated view of the individual frequencies , and thus allows for a much improved analysis of the results achieved by different methods . E.g. , it robustly identifies the strong smoothing behavior of L2 metrics , and can detect mode collapse problems of adversarial training runs . We also demonstrate how frequency information can be incorporated into the learning objective in order to improve results . To summarize , the central contributions of our work are : ( 1 ) A method for frequency evaluation with a consideration of spatial properties , ( 2 ) A novel frequency aware loss formulation , ( 3 ) A simple , yet intuitive evaluation of different generative methods , ( 4 ) A time consistent spatio-temporal upsampling of complex physical surfaces . Related Work Deep learning methods in conjunction with physical models were employed in a variety of contexts , ranging from learning models for physical intuition ( Battaglia et al. , 2016 ; Sanchez-Gonzalez et al. , 2018 ) , over robotic control ( Schenck & Fox , 2018 ; Hu et al. , 2019 ) to engineering applications ( Ling et al. , 2016 ; Morton et al. , 2018 ) . In the following , we focus on fluid-like materials with continuous descriptions , which encompass a wide range of behavior and pose challenging tasks for learning methods ( Mrowca et al. , 2018 ; Li et al. , 2019 ) . For fluid flows in particular , a variety of learning methods were proposed ( Tompson et al. , 2017 ; Prantl et al. , 2017 ; Um et al. , 2018 ) . A common approach to reduce the high computational cost of a simulation is to employ super-resolution techniques ( Dong et al. , 2016 ; Chu & Thuerey , 2017 ; Bai et al. , 2019 ) . In this context , our work targets the up-sampling for physics-based animations , for which we leverage the approach proposed by Xie et al . ( 2018 ) . However , in contrast to this work , we target phenomena with clear interfaces , which motivates the frequency-based viewpoint of our work . For sharp interfaces , Lagrangian models are a very popular discretization of continuum mechanical systems . E.g. , smoothed particle hydrodynamics ( SPH ) ( Gingold & Monaghan , 1977 ; Koschier et al. , 2019 ) is a widely-used particle-based simulation method . While points and particles are likewise frequently used representations for physical deep learning ( Li et al. , 2019 ; Ummenhofer et al. , 2019 ; Sanchez-Gonzalez et al. , 2020 ) , Eulerian , i.e. , grid-based representations offer advantages in terms of efficient and robust kernel evaluations . We employ generative adversarial networks ( Goodfellow , 2016 ) as a powerful and established method for learning generative models . Here , “ unconditional ” GANs typically rely on a synthetic input vector from Gaussian noise to produce the desired output distribution , e.g. , the DC-GAN approach ( Radford et al. , 2016 ) . Conditional GANs ( Mirza & Osindero , 2014 ) were introduced to provide the network with an input that allows the neural network to steer the generation of the output . Hence , super-resolution tasks for natural images ( Ledig et al. , 2016 ) , or image translation tasks ( Isola et al.
, 2017 ) employ conditional GANs . The time dimension was also taken into account in natural imaging works , e.g. , by Saito et al . in the form of a temporal generator ( Saito et al. , 2017 ) , or via a stochastic sequence generator ( Yu et al. , 2017 ) . Other works have included direct L2 loss terms as temporal regularizers ( Bhattacharjee & Das , 2017 ; Chen et al. , 2017 ) , which , however , typically strongly restricts the changes over time . Similar to flow advection , video networks also often use warping information to align data over time ( Liu et al. , 2017 ; de Bezenac et al. , 2017 ) . We will demonstrate that recurrent architectures similar to those used for video super-resolution ( Sajjadi et al. , 2018 ) are likewise very amenable to physical problems over time . 2 METHOD . The input for our method is a coarsely approximated source simulation , with the learning objective to infer the surface of a target simulation over space and time . This target is typically computed via a potentially very costly , finely resolved simulation run for the same physical setup . When it comes to the possibilities of simulation representations , there is a great variance . In our case we have chosen an implicit representation of the data , by a signed-distance field ( SDF ) denoted by $g : \mathbb{R}^3 \to \mathbb{R}$ . An SDF returns , for a given point , the signed distance to the surface , with negative being inside the medium . Such a function is realized in practice by a grid $X \in \mathbb{R}^{M_x \times M_y \times M_z}$ storing the pre-computed signed distance values , where $M_*$ , $* \in \{ x , y , z \}$ , specifies the size of the grid in the respective dimension x , y or z . We have chosen this representation because most neural network layers are designed for array-like representations , and the loss functions on grid-based data are very efficient to evaluate . Additionally , an implicit representation via a grid can leverage tools from the field of level-set processing ( Adalsteinsson & Sethian , 1999 ) , and facilitates the frequency viewpoint via a Fourier transformation . Additional values , like the velocity , are also mapped onto a grid $V \in \mathbb{R}^{M_x \times M_y \times M_z \times 3}$ . Our goal is to let a generative network $G : \mathbb{R}^{M_x \times M_y \times M_z \times 4} \to \mathbb{R}^{N_x \times N_y \times N_z}$ infer a grid $\tilde{Y}$ which approximates a desired high-resolution simulation $Y \in \mathbb{R}^{N_x \times N_y \times N_z}$ with $N_* = k M_*$ , $* \in \{ x , y , z \}$ , and up-sampling factor $k \in \mathbb{N}$ , i.e. , $G(X) = \tilde{Y} \approx Y$ . As our method only requires position and velocity data from a simulation , it is largely agnostic to the type of solver or physical model for generating the source and target particle data . 2.1 NEURAL NETWORK FORMULATION . Our method is based on a generative neural network with a 3D fully-convolutional ResNet architecture ( He et al. , 2016 ) that produces an output field at a single instance in time . The low-resolution input data is first up-sampled with a tri-linear up-sampling and then processed with several convolutional layers , as shown in Figure 1a . We use leaky ReLU as activation function after each layer , except for the last layer , where we use a tanh activation . In our case , the input data consists of the implicitly represented geometry data $X_t$ , the velocity $V_t$ of the simulation , as well as the result of the previous pass $\tilde{Y}_{t-1}$ . The previously generated data is advected with the low-resolution velocity before further processing . Through this feedback loop we train our network recurrently by iterating over a sequence of T = 10 frames .
This yields stability over longer periods of time and gives better insights into the temporal behaviour . Furthermore , the recurrent training is important to enable persistent behavior over time , such as the progression of fine surface waves . Unlike the process for generating the input data , the network training can not resort to a physical simulation at full resolution , and hence can not uniquely determine the evolution of future states . Therefore , its main learning objective is to capture the dynamics of the target simulations beyond the basic motion computed with an advection step . For the initialization of the undefined first frame $\tilde{Y}_{-1}$ we use a tri-linearly up-sampled version of the input . To train our network we first have to define a loss function that allows us to evaluate the differences between generated and ground-truth data . The most basic loss function is a simple mean squared error ( MSE ) : $$\mathcal{L}_s = \| Y - \tilde{Y} \|_2^2 . \quad (1)$$ This has the big disadvantage that it is ill-suited to measure the similarity or differences of solutions . For example , considering a function with multiple solutions for a given input , i.e. , a multi-modal setting , a method that trains with an MSE loss will learn the expected value of the output distribution , i.e. , the average of the different solutions . However , the average is typically not a part of the solution set . Thus , the MSE loss often does not correspond to the correct distance in solution space , as it disregards significant factors of the distribution of the solutions . Our super-sampling setup is such a problem : due to the low resolution input , the high resolution details can not be determined uniquely , resulting in a variety of possible solutions when up-sampling . Via physical properties of the material and its temporal sequence , some solutions can be eliminated , but nonetheless the space of solutions typically remains infinitely large . If an MSE loss is used , all such samples from the training data set are simply averaged to obtain a mean value , so that the result no longer reflects the level of detail of the ground-truth data . The MSE loss nevertheless gives a rough direction , and provides a stable learning target . Hence , we still use it as a component in the final loss formulation , in combination with an adversarial loss . In contrast to a direct distance metric , the adversarial loss approximates the ground-truth distribution . Hence , the network no longer learns one mean value , but chooses one valid solution out of the possible ones . We define a discriminator $D_s$ that takes as input a high-resolution version of a simulation frame and classifies it , distinguishing between ground-truth and generated frames . It does this through a binary output , where 0 is “ fake ” and 1 is “ real ” . Its task is to provide the generator with feedback on the correctness of the given data . The special feature is that the discriminators are trained together with the generator , thus creating a competitive interaction where both parties improve each other . As loss for the discriminator we use a binary cross-entropy : $$\mathcal{L}_{bce} = -\left[ y \log( \tilde{y} ) + ( 1 - y ) \log( 1 - \tilde{y} ) \right] , \quad (2)$$ where $y$ is the ground-truth label and $\tilde{y}$ is the value predicted by the discriminator . For complex tasks , GANs can be unstable and difficult to control . For this reason we additionally use the recent Spectral Normalization ( Miyato et al. , 2018 ) , which we found to provide more stable adversarial training .
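As an illustration of the recurrent feedback loop described above , a simplified inference-time rollout might look as follows . The semi-Lagrangian `advect` helper , the channel layout , and the use of an up-sampled velocity for the high-resolution advection are simplifying assumptions of this sketch , not the paper's exact pipeline :

```python
# Sketch of the recurrent rollout: advect the previous output, then feed it
# back into the generator together with the up-sampled inputs.
import torch
import torch.nn.functional as F

def advect(sdf, vel):
    # sdf: (1,1,D,H,W), vel: (1,3,D,H,W) in grid units; semi-Lagrangian
    # backtrace via grid_sample.
    d, h, w = sdf.shape[2:]
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xx, yy, zz], dim=-1).float()
    pos = base - vel[0].permute(1, 2, 3, 0)        # backtraced positions
    norm = 2 * pos / torch.tensor([w - 1, h - 1, d - 1]) - 1
    return F.grid_sample(sdf, norm[None], align_corners=True)

def rollout(generator, X_seq, V_seq, up):
    y_prev = up(X_seq[0])                          # init of the first frame
    outputs = []
    for X_t, V_t in zip(X_seq, V_seq):
        y_adv = advect(y_prev, up(V_t))            # warp previous output
        y_prev = generator(torch.cat([up(X_t), up(V_t), y_adv], dim=1))
        outputs.append(y_prev)
    return outputs

# Example up-sampler for factor k = 4 (illustrative):
# up = lambda t: F.interpolate(t, scale_factor=4, mode="trilinear",
#                              align_corners=True)
```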
While we have so far primarily focused on spatial content , i.e. , the surface of the material , the temporal behavior likewise plays a crucial role , and poses similar difficulties in our multi-modal setting . On the one hand , the generation of details can quickly lead to temporally incoherent results , which manifests as unappealing flickering . On the other hand , our network should also be able to match and recreate spatial solutions over time that reflect the physical behavior . Following previous work ( Xie et al. , 2018 ) , we use an additional discriminator Dt to classify the temporal behavior of data . This is done by passing three corresponding frames , which are aligned with each other using advection A : RNx×Ny×Nz×3 × RMx×My×Mz×3 → RNx×Ny×Nz×3 . Apart from this , the temporal discriminator closely follows the structure of the spatial discriminator . Both discriminators ( Figure 1b ) use a typical funnel structure , where the dimension is increasingly reduced using strided convolutional layers , with a last fully connected layer computing the classification result . We likewise use leaky ReLU activations , with a sigmoid function for the last layer . The classification of the discriminators is included in the loss formulation of the generator : LDs = ( 1/T ) ∑t Ds ( G ( Xt ) , Xt ) , LDt = Dt ( A ( G ( Xt−1 ) , Vt ) , G ( Xt ) , A ( G ( Xt+1 ) , −Vt ) , Xt ) , ( 3 ) which gives the final loss function : LG = Ls + αLDs + βLDt , ( 4 ) where α and β indicate the weighting of the individual loss terms . An additional benefit of the adversarial loss is that it allows for learning from unpaired data . A common problem for up-sampling methods is the generation of paired ground truth data for training . Due to different numerical approximations , and hence potentially differing physical behavior , the easiest solution is to simulate at high resolution , and down-sample the data . While at training time the down-sampled data is used , at test time , the model needs to be applied to data from a low-resolution simulation instead . This typically leads to large distribution shifts , and correspondingly impaired inference quality . Therefore , we consider an unpaired training approach that decouples the low- and high-resolution data . The feedback from the discriminators is still based on the ground-truth data , which makes the output conditionally dependent on the input , but also approximates the behavior of the reference data . However , there is no direct supervision in the generator anymore : the output is no longer compared with a matching ground-truth in the loss , but only related to the input . This is done by down-sampling the output and comparing it with the input : L∗s = ||X − p ( Ỹ ) ||22 , ( 5 ) where p : RNx×Ny×Nz×3 → RMx×My×Mz×3 is a down-sampling function based on average pooling . This effectively removes the need for paired low- and high-resolution samples at training time , and fully relies on the discriminator to match both distributions . To indicate the focus on surface structures , we refer to the final version of our generative network as surfGAN . For a more detailed description of the training and the network architecture we refer to appendices A.1 and A.2 .
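To tie the terms together, the following sketch assembles the generator objective of Eq. (4) and the unpaired content loss of Eq. (5). The non-saturating −log D form of the adversarial terms and the loss weights are assumptions; the paper only specifies that the discriminator outputs enter LG weighted by α and β.

import torch
import torch.nn.functional as F

def generator_loss(y_fake, y_real, d_s_fake, d_t_fake, alpha=1e-3, beta=1e-3):
    """L_G = L_s + alpha * L_Ds + beta * L_Dt (Eq. 4); weights are illustrative."""
    l_s = F.mse_loss(y_fake, y_real)                    # Eq. (1)
    l_ds = -torch.log(d_s_fake.clamp_min(1e-7)).mean()  # spatial discriminator term
    l_dt = -torch.log(d_t_fake.clamp_min(1e-7)).mean()  # temporal discriminator term
    return l_s + alpha * l_ds + beta * l_dt

def unpaired_content_loss(x_lowres, y_fake, k=4):
    """L*_s = ||X - p(Y~)||^2 with p an average-pooling down-sampler (Eq. 5)."""
    return F.mse_loss(F.avg_pool3d(y_fake, kernel_size=k), x_lowres)

x = torch.randn(1, 1, 16, 16, 16)              # low-res SDF grid X
y = torch.randn(1, 1, 64, 64, 64)              # generated high-res grid
d_s, d_t = torch.rand(1, 1), torch.rand(1, 1)  # discriminator outputs in (0, 1)
print(generator_loss(y, torch.randn_like(y), d_s, d_t).item())
print(unpaired_content_loss(x, y).item())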
This paper presents a GAN framework to learn spatial and temporal representations of complex physical surfaces for application to simulation. The method represents data in an SDF-like way, so it is agnostic to the properties of the material and the simulation model. The network is built upon a conventional GAN, with two discriminators, one temporal and one spatial. A loss function is proposed that evaluates a grid-based Fourier transform of the output and ground truth to better preserve high-frequency details (temporally and spatially). Results show performance on a physical simulation with opaque, elasto-plastic materials.
SP:da34f0f0c8f4887dc84cdb63ec13ac7550e0c37c
Characterizing Lookahead Dynamics of Smooth Games
As multi-agent systems proliferate in machine learning research , games have attracted much attention as a framework to understand the optimization of multiple interacting objectives . However , a key challenge in game optimization is that , in general , there is no guarantee for usual gradient-based methods to converge to a local solution of the game . The latest work by Chavdarova et al . ( 2020 ) reports that the Lookahead optimizer ( Zhang et al. , 2019 ) significantly improves the performance of Generative Adversarial Networks ( GANs ) and reduces the rotational force of bilinear games . While promising , their observations were purely empirical , and Lookahead optimization of smooth games still lacks theoretical understanding . In this paper , we fill this gap by theoretically characterizing the Lookahead dynamics of smooth games . We provide an intuitive geometric explanation of how and when Lookahead can improve game dynamics in terms of stability and convergence . Furthermore , we present sufficient conditions under which Lookahead optimization of bilinear games provably stabilizes or accelerates convergence to a Nash equilibrium of the game . Finally , we show that the Lookahead optimizer preserves locally asymptotically stable equilibria of base dynamics , and can either stabilize or accelerate the local convergence to a given equilibrium under proper assumptions . We verify our theoretical predictions by conducting numerical experiments on two-player zero-sum ( non-linear ) games . 1 INTRODUCTION . Recently , a plethora of learning problems have been formulated as games between multiple interacting agents , including Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ; Brock et al. , 2019 ; Karras et al. , 2019 ) , adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) , self-play ( Silver et al. , 2018 ; Bansal et al. , 2018 ) , inverse reinforcement learning ( RL ) ( Fu et al. , 2018 ) and multi-agent RL ( Lanctot et al. , 2017 ; Vinyals et al. , 2019 ) . However , the optimization of interdependent objectives is a non-trivial problem , in terms of both computational complexity ( Daskalakis et al. , 2006 ; Chen et al. , 2009 ) and convergence to an equilibrium ( Goodfellow , 2017 ; Mertikopoulos et al. , 2018 ; Mescheder et al. , 2018 ; Hsieh et al. , 2020 ) . In particular , gradient-based optimization methods often fail to converge and oscillate around a ( local ) Nash equilibrium of the game even in very simple settings ( Mescheder et al. , 2018 ; Daskalakis et al. , 2018 ; Mertikopoulos et al. , 2019 ; Gidel et al. , 2019b ; a ) . To tackle such non-convergent game dynamics , a huge effort has been devoted to developing efficient optimization methods with nice convergence guarantees in smooth games ( Mescheder et al. , 2017 ; 2018 ; Daskalakis et al. , 2018 ; Balduzzi et al. , 2018 ; Gidel et al. , 2019b ; a ; Schäfer & Anandkumar , 2019 ; Yazici et al. , 2019 ; Loizou et al. , 2020 ) . Meanwhile , Chavdarova et al . ( 2020 ) have recently reported that the Lookahead optimizer ( Zhang et al. , 2019 ) significantly improves the empirical performance of GANs and reduces the rotational force of bilinear game dynamics . Specifically , they demonstrate that class-unconditional GANs trained by a Lookahead optimizer can outperform class-conditional BigGAN ( Brock et al. , 2019 ) trained by Adam ( Kingma & Ba , 2015 ) , even with a model with 1/30 of the parameters and negligible computation overhead .
They also show that Lookahead optimization of a stochastic bilinear game tends to be more robust against large gradient variances than other popular first-order methods , and converges to a Nash equilibrium of the game where other methods fail . Despite its great promise , the study of Chavdarova et al . ( 2020 ) relied on purely empirical observations , and the dynamics of Lookahead game optimization still lacks theoretical understanding . Specifically , many open questions , such as the convergence properties of Lookahead dynamics and the impact of its hyperparameters on the convergence , remain unexplained . In this work , we fill this gap by theoretically characterizing the Lookahead dynamics of smooth games . Our contributions are summarized as follows : • We provide an intuitive geometric explanation on how and when Lookahead can improve the game dynamics in terms of stability and convergence to an equilibrium . • We analyze the convergence of Lookahead dynamics in bilinear games and present sufficient conditions under which the base dynamics can be either stabilized or accelerated . • We characterize the limit points of Lookahead dynamics in terms of their stability and local convergence rates . Specifically , we show that Lookahead ( i ) preserves locally asymptotically stable equilibria of base dynamics and ( ii ) can either stabilize or accelerate the local convergence to a given equilibrium by carefully choosing its hyperparameters . • Each of our theoretical predictions is verified with numerical experiments on two-player zero-sum ( non-linear ) smooth games . 2 PRELIMINARIES . We briefly review the objective of smooth game optimization , first-order game dynamics , and the Lookahead optimizer . Finally , we discuss previous work on game optimization . We summarize the notations used throughout this paper in Table A.1 . 2.1 SMOOTH GAMES . Following Balduzzi et al . ( 2018 ) , a smooth game between players i = 1 , . . . , n can be defined as a set of smooth scalar functions { fi } ni=1 with fi : Rd → R such that d = ∑n i=1 di . Each fi represents the cost of player i ’ s strategy xi ∈ Rdi with respect to other players ’ strategies x−i . The goal of this game optimization is finding a ( local ) Nash equilibrium of the game ( Nash , 1951 ) , which is a strategy profile where no player has a unilateral incentive to change its own strategy . Definition 1 ( Nash equilibrium ) . Let { fi } ni=1 be a smooth game with strategy spaces { Rdi } ni=1 such that d = ∑n i=1 di . Then x ∗ ∈ Rd is a local Nash equilibrium of the game if , for each i = 1 , . . . , n , there is a neighborhood Ui of x∗i such that fi ( xi , x∗−i ) ≥ fi ( x∗ ) holds for any xi ∈ Ui . Such an x∗ is said to be a global Nash equilibrium of the game when Ui = Rdi for each i = 1 , . . . , n. A straightforward computational approach to find a ( local ) Nash equilibrium of a smooth game is to carefully design a gradient-based strategy update rule for each player . Such update rules that define iterative plays between players are referred to as a dynamics of the game . Definition 2 ( Dynamics of a game ) . A dynamics of a smooth game { fi } ni=1 indicates a differentiable operator F : Rd → Rd that describes players ’ iterative strategy updates as x ( t+1 ) = F ( x ( t ) ) . One might expect that a simple myopic game dynamics , such as gradient descent , would suffice to find a ( local ) Nash equilibrium of a game as in traditional minimization problems .
However , in general , gradient descent optimization of smooth games often fails to converge and oscillates around an equilibrium of the game ( Daskalakis et al. , 2018 ; Gidel et al. , 2019b ; a ; Letcher et al. , 2019 ) . Such non-convergent behavior of game dynamics is mainly due to ( non-cooperative ) interaction between multiple cost functions , and is considered a key challenge in game optimization ( Mescheder et al. , 2017 ; 2018 ; Mazumdar et al. , 2019 ; Hsieh et al. , 2020 ) . 2.2 FIRST-ORDER METHODS FOR SMOOTH GAME OPTIMIZATION . We introduce well-known first-order methods for smooth game optimization . To ease the notation , we use ∇xf ( · ) to denote the concatenated partial derivatives ( ∇x1f1 ( · ) , . . . , ∇xnfn ( · ) ) of a smooth game { fi } ni=1 , where ∇xifi ( · ) is the partial derivative of player i ’ s cost function with respect to its own strategy . Gradient Descent ( GD ) minimizes the cost function of each player using gradient descent . Its simultaneous dynamics FGDSim of a smooth game { fi } ni=1 with a learning rate η > 0 is given by x ( t+1 ) = FGDSim ( x ( t ) ) def = x ( t ) − η∇xf ( x ( t ) ) . ( 1 ) On the other hand , its alternating dynamics FGDAlt is described by x ( t+1 ) = FGDAlt ( x ( t ) ) def = F1 ◦ . . . ◦ Fn ( x ( t ) ) , where ( 2 ) Fi ( x ) def = ( . . . , xi−1 , xi − η∇xifi ( x ) , xi+1 , . . . ) . ( 3 ) Proximal Point ( PP ) ( Martinet , 1970 ) computes an update by solving a proximal problem at each iteration . Its simultaneous dynamics F PPSim of a smooth game { fi } ni=1 with a learning rate η > 0 is x ( t+1 ) = F PPSim ( x ( t ) ) def = x ( t ) − η∇xf ( x ( t+1 ) ) . ( 4 ) Note that this update rule is implicit in the sense that x ( t+1 ) appears on both sides of the equation ; hence it requires solving the proximal subproblem for x ( t+1 ) per iteration . Extra Gradient ( EG ) ( Korpelevich , 1976 ) computes an update by using an extrapolated gradient . Its simultaneous dynamics F EGSim of a smooth game { fi } ni=1 with a learning rate η > 0 is x ( t+1 ) = F EGSim ( x ( t ) ) def = x ( t ) − η∇xf ( x ( t+1/2 ) ) , where ( 5 ) x ( t+1/2 ) def = x ( t ) − η∇xf ( x ( t ) ) . ( 6 ) 2.3 LOOKAHEAD OPTIMIZER . Lookahead ( Zhang et al. , 2019 ) is a recently proposed optimizer that wraps around a base optimizer and takes a backward synchronization step after every k forward steps . Given a dynamics FA induced by a base optimization method A , the Lookahead dynamics GLA-A with a synchronization period k ∈ N and a rate α ∈ ( 0 , 1 ) is x ( t+1 ) = GLA-A ( x ( t ) ) def = ( 1− α ) x ( t ) + αF kA ( x ( t ) ) . ( 7 ) 2.4 RELATED WORK . The convergence analysis of first-order smooth game dynamics dates back several decades and has been established in the context of saddle-point problems ( Rockafellar , 1976 ; Korpelevich , 1976 ; Tseng , 1995 ) , which are a special case of zero-sum games . For example , Rockafellar ( 1976 ) showed the linear convergence of PP in the bilinear and strongly-convex-strongly-concave ( SCSC ) saddle-point problems . Tseng ( 1995 ) and Facchinei & Pang ( 2003 ) proved the linear convergence of EG in the same problem , and Nemirovski ( 2004 ) did so in the convex-concave problem over compact sets . As many learning problems have been formulated as games in recent years ( Goodfellow et al. , 2014 ; Madry et al. , 2018 ; Silver et al. , 2018 ; Fu et al. , 2018 ; Vinyals et al. , 2019 ) , game optimization has regained considerable attention from the research community .
Optimistic gradient descent ( OGD ) ( Popov , 1980 ) , which can be seen as an efficient approximation of EG , was recently rediscovered in the context of GAN training ( Daskalakis et al. , 2018 ) . The recent work of Liang & Stokes ( 2019 ) and Gidel et al . ( 2019a ) proved linear convergence of OGD in bilinear and SCSC games . Mokhtari et al . ( 2020 ) established a unifying theoretical framework for analyzing PP , EG and OGD dynamics . Zhang & Yu ( 2020 ) presented exact and optimal conditions for PP , EG and OGD dynamics to converge in bilinear games . While there has been a growing interest in incorporating second-order information into game dynamics ( Mescheder et al. , 2017 ; Balduzzi et al. , 2018 ; Mazumdar et al. , 2019 ; Schäfer & Anandkumar , 2019 ; Loizou et al. , 2020 ) to remedy non-convergent behaviors , first-order optimization still dominates in practice ( Brock et al. , 2019 ; Donahue & Simonyan , 2019 ) due to the computational and memory costs of second-order methods . Lately , Chavdarova et al . ( 2020 ) reported that the recently developed Lookahead optimizer ( Zhang et al. , 2019 ) significantly improves the empirical performance of GANs and reduces the rotational force of bilinear game dynamics . However , this study relied on purely empirical observation and lacked a theoretical understanding of Lookahead optimization of smooth games . Although Wang et al . ( 2020 ) proved that the Lookahead optimizer globally converges to a stationary point in minimization problems , its convergence in smooth games still remains an open question .
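Before turning to the analysis, a small NumPy sketch makes the phenomenon concrete: on the bilinear game min_x max_y xy, plain simultaneous gradient descent (Eq. 1) spirals outward, while wrapping it in the Lookahead operator of Eq. (7) contracts toward the Nash equilibrium at the origin. The step size, period k, and rate α below are illustrative choices, not values from the paper.

import numpy as np

def gd_sim(z, eta=0.1):
    """Simultaneous GD on f1(x, y) = x*y, f2(x, y) = -x*y; Nash equilibrium at (0, 0)."""
    x, y = z
    return np.array([x - eta * y, y + eta * x])

def lookahead(F, z, k=5, alpha=0.5):
    """G(z) = (1 - alpha) * z + alpha * F^k(z), Eq. (7); one call does k base steps."""
    z_fast = z.copy()
    for _ in range(k):
        z_fast = F(z_fast)
    return (1 - alpha) * z + alpha * z_fast

z_gd = np.array([1.0, 1.0])
z_la = np.array([1.0, 1.0])
for t in range(200):
    z_gd = gd_sim(z_gd)             # plain simultaneous GD: rotational force, diverges
    z_la = lookahead(gd_sim, z_la)  # Lookahead-wrapped GD: contracts toward (0, 0)
print(np.linalg.norm(z_gd), np.linalg.norm(z_la))  # large norm vs. near zero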
This paper investigates "lookahead dynamics of smooth games". By this the authors mean discrete-time dynamical systems generated from a given algorithm by adding a relaxation step to the updates. The main aim of the paper is to solve smooth games. Under sufficient convexity assumptions, Nash equilibria for such games can be identified as solutions to a variational inequality with a monotone operator. This is in particular the case for convex-concave min-max problems. The main conclusion of this paper is that a combination of relaxation and lookahead effects stabilizes the learning dynamics and can lead to acceleration over the base algorithm.
SP:96e4c8e540941178aa3a9d9c0f11a58128a87e26
Disentangling Adversarial Robustness in Directions of the Data Manifold
1 INTRODUCTION . In recent years , deep neural networks ( DNNs ) ( Krizhevsky et al . ( 2012 ) ; Hochreiter and Schmidhuber ( 1997 ) ) have become popular and successful in many machine learning tasks . They have been used in a wide range of problems with great success . However , DNNs have been shown to be vulnerable to adversarial examples ( Szegedy et al . ( 2013 ) ; Goodfellow et al . ( 2014a ) ) . A well-trained model can be easily attacked by adding a small perturbation to the image . An effective way to address this issue is to train a robust model using training data augmented with adversarial examples , i.e. , adversarial training . With the growing success of generative models , researchers have tried to use generative adversarial networks ( GANs ) ( Goodfellow et al . ( 2014b ) ) and variational autoencoders ( VAEs ) ( Kingma and Welling ( 2013 ) ) to generate adversarial examples ( Xiao et al . ( 2018 ) ; Zhao et al . ( 2017 ) ; Song et al . ( 2018a ) ; Kos et al . ( 2018 ) ; Song et al . ( 2018b ) ) that fool the classification model with great success . They found that standard adversarial training cannot defend against these new attacks . Unlike regular adversarial examples , these new adversarial examples are perceptible by humans , but they preserve the semantic information of the original data . A good DNN should be robust to such semantic attacks . Since GANs and VAEs are approximations of the true data distribution , these adversarial examples stay in the data manifold . Hence they are called on-manifold adversarial examples by Stutz et al . ( 2019 ) . On the other hand , experimental evidence supports the view that regular adversarial examples leave the data manifold ( Song et al . ( 2017 ) ) . We refer to regular adversarial examples as off-manifold adversarial examples . The concepts of on-manifold and off-manifold adversarial examples are important , because they can help us understand the conflict between adversarial robustness and generalization ( Stutz et al . ( 2019 ) ; Raghunathan et al . ( 2019 ) ) , which is still an open problem . In this paper , we study the attacking mechanisms of these two types of examples , as well as the corresponding adversarial training methods . This study , as far as we know , has not been done before . Specifically , we consider a generative attack method that adds a small perturbation in the latent space of the generative models . Since standard adversarial training cannot defend against this attack , we consider training methods that use training data augmented with these on-manifold adversarial examples , which we call latent space adversarial training . We then compare it to standard adversarial training ( training with off-manifold adversarial examples ) . Contributions : We study the theoretical properties of latent space adversarial training and standard adversarial training in a Gaussian mixture model with a linear generator . We give an excess risk analysis and a saddle point analysis in this model . Based on this case study , we claim that : • Regular adversarial examples attack in directions of small variance of the data manifold and leave the data manifold . • Standard adversarial training increases the model robustness by amplifying the small variance . Hence , it extends the boundary of the data manifold in directions of small variance . • Generative adversarial examples attack in directions of large variance of the data manifold and stay in the data manifold .
• Latent space adversarial training increases the model robustness by amplifying the large variance . Hence , it extends the boundary of the data manifold in directions of large variance . We provide experiments on MNIST and CIFAR-10 and show that the above phenomena also exist in real datasets . This gives us a new perspective for understanding the behavior of on-manifold and off-manifold adversarial examples . Finally , we study the robustness trade-off between generative and regular adversarial examples . On MNIST , the robustness trade-off is unavoidable , but the conflict between generative adversarial examples and regular adversarial examples is much smaller than the conflict between regular adversarial examples of different norms . On CIFAR-10 , there is nearly no robustness trade-off between generative and regular adversarial examples . 2 RELATED WORK . Our work is related to attack and defense methods . Specifically , we care about attacks and defenses with generative models . Attack Adversarial examples for deep neural networks were first introduced in ( Szegedy et al . ( 2013 ) ) . However , adversarial machine learning , or robust machine learning , has been studied for a long time ( Biggio and Roli ( 2018 ) ) . In the setting of white-box attacks ( Kurakin et al . ( 2016 ) ; Papernot et al . ( 2016 ) ; Moosavi-Dezfooli et al . ( 2016 ) ; Carlini and Wagner ( 2017 ) ) , the attackers have full access to the model ( weights , gradients , etc. ) . In black-box attacks ( Chen et al . ( 2017 ) ; Su et al . ( 2019 ) ; Ilyas et al . ( 2018 ) ) , the attackers have limited access to the model . First-order optimization methods , which use the gradient information to craft adversarial examples , such as PGD ( Madry et al . ( 2017 ) ) , are widely used for white-box attacks . Zeroth-order optimization methods ( Chen et al . ( 2017 ) ) are used in the black-box setting . Li et al . ( 2019 ) improved the query efficiency of black-box attacks . HopSkipJumpAttack ( Chen et al . ( 2020 ) ) is another query-efficient attack method . Generative adversarial examples Recently , generative models have been used to craft adversarial examples ( Xiao et al . ( 2018 ) ; Song et al . ( 2018b ) ; Kos et al . ( 2018 ) ; Schott et al . ( 2018 ) ) . These adversarial examples are more natural ( Zhao et al . ( 2017 ) ) . They lie in the data manifold , and they are called on-manifold adversarial examples . Defense Training algorithms against adversarial attacks can be subdivided into the following categories . Adversarial training : The training data is augmented with adversarial examples to make the models more robust ( Madry et al . ( 2017 ) ; Szegedy et al . ( 2013 ) ; Tramèr et al . ( 2017 ) ) . Preprocessing : Inputs or hidden layers are quantized , projected onto different sets , or otherwise preprocessed ( Buckman et al . ( 2018 ) ; Guo et al . ( 2017 ) ; Kabilan et al . ( 2018 ) ) . Stochasticity : Inputs or hidden activations are randomized ( Prakash et al . ( 2018 ) ; Dhillon et al . ( 2018 ) ; Xie et al . ( 2017 ) ) . However , some of these are shown to be ineffective defenses due to obfuscated gradients ( Athalye et al . ( 2018 ) ) . Adaptive attacks ( Tramer et al . ( 2020 ) ) are used for evaluating defenses against adversarial examples . Defense with generative model Using generative models to design defense algorithms has been studied extensively . Using GANs , we can project the adversarial examples back to the data manifold ( Jalal et al . ( 2017 ) ; Samangouei et al . ( 2018 ) ) .
VAEs have also been used to train robust models ( Schott et al . ( 2018 ) ) . 3 PROBLEM DESCRIPTION . Original space adversarial training : Consider the classification problem of training a classifier fθ to map the data points x ∈ X ⊂ Rd to the labels y ∈ Y , where X and Y are the input data space and the label space . The classifier fθ is parameterized by θ . We assume that the data pairs ( x , y ) are sampled from the distribution P ( X , Y ) over X × Y . Standard training is to find the solution of minθ E ( x , y ) ∼P ℓ ( fθ ( x ) , y ) , where ℓ ( · , · ) is the loss function . The goal of adversarial training is to solve the minimax problem min θ E ( x , y ) ∼P max ‖x−x′‖≤ε ℓ ( fθ ( x ′ ) , y ) , ( 1 ) where ε is the threshold of perturbation . Here we can use the ℓ1 , ℓ2 or ℓ∞-norm ( Madry et al . ( 2017 ) ) . The inner maximization problem is to find the adversarial examples x′ that attack the given classifier fθ . The outer minimization problem is to train the classifier to defend against the given adversarial examples x′ . We refer to these attacks as regular attacks , and to these minimax problems as standard adversarial training or original space adversarial training . Latent space adversarial training : We assume that the data lie in a low-dimensional manifold of Rd . Furthermore , we assume the true distribution D is a pushforward from a prior Gaussian distribution z ∼ N ( 0 , I ) using G ( z ) , where G : Z → X is a mapping from the latent space Z to the original space X . This is a basic assumption of GANs and VAEs . Let I : X → Z be the inverse mapping of G ( z ) . The goal of latent space adversarial training is to solve the following minimax problem min θ E ( x , y ) ∼P max ‖z′−I ( x ) ‖≤ε ℓ ( fθ ( G ( z ′ ) ) , y ) . ( 2 ) Unlike the regular attacks , the distance between the original examples and the adversarial examples can be large . To preserve the label of the data , we use conditional generative models ( e.g. , C-GAN ( Mirza and Osindero ( 2014 ) ) and C-VAE ( Sohn et al . ( 2015 ) ) ) for adversarial training , i.e. , the generator Gy ( z ) and the inverse mapping Iy ( x ) are conditioned on the label y . We refer to these attacks as generative attacks , and to this form of training as latent space adversarial training . Regular attack algorithms Two widely used gradient-based attack algorithms for the inner maximization problem in equation ( 1 ) are the fast gradient sign method ( FGSM ) ( Goodfellow et al . ( 2014a ) ) and projected gradient descent ( PGD ) ( Madry et al . ( 2017 ) ) . Using FGSM , the adversarial examples are calculated by x′ = x+ εsgn ( ∇x ℓ ( fθ ( x ) , y ) ) , where ∇x denotes the gradient with respect to x . PGD attempts to find a near-optimal adversarial example for the inner maximization problem ( 1 ) in multiple steps . In the t-th step , xt+1 = Πx+S [ xt + α∇x ℓ ( fθ ( xt ) , y ) /‖∇x ℓ ( fθ ( xt ) , y ) ‖ ] , where α is the step size and Πx+S [ · ] is the projection operator that projects the given vector onto the constraint set x+ S = { x′|‖x− x′‖ ≤ ε } . Throughout the paper , we refer to these as FGSM-attack and PGD-attack , and to the corresponding original space adversarial training as FGSM-adv and PGD-adv . FGSM-attack is a weak attack and PGD-attack is a stronger attack . In Section 5 , we use them to show that a strong original space adversarial training , PGD-adv , does not work well against a weak attack , FGSM-attack , in the latent space . Conversely , latent space adversarial training cannot defend against a simple FGSM-attack in the original space .
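As a concrete reference for these two attack algorithms, here is a hedged PyTorch sketch of FGSM and PGD. The paper's PGD update uses a normalized-gradient step with a generic norm-ball projection; the common sign-based ℓ∞ form is shown below for simplicity, and the toy model and parameter values are illustrative.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """FGSM: x' = x + eps * sgn(grad_x loss(f(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def pgd_linf(model, x, y, eps, alpha, steps):
    """Multi-step ascent with projection onto x + S = {x' : ||x' - x||_inf <= eps};
    the sign step is the common l_inf variant of the normalized-gradient update."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # projection Pi_{x+S}
    return x_adv.detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))  # toy classifier
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
print(fgsm(model, x, y, eps=0.3).shape, pgd_linf(model, x, y, 0.3, 0.01, 40).shape)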
Generative attack algorithm In our experiments , we use FGSM in the latent space for the inner maximization problem in equation ( 2 ) : z′ = I ( x ) + εsgn ( ∇z ℓ ( fθ ( G ( z ) ) , y ) ) . Because of the mode collapse issue of GANs ( Salimans et al . ( 2016 ) ; Gulrajani et al . ( 2017 ) ) , adding a small perturbation in the latent space of a GAN may produce the same image . Thus we use a VAE in our experiments . We refer to this generative attack and the corresponding latent space adversarial training as VAE-attack and VAE-adv .
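A minimal sketch of this VAE-attack follows; the encoder and decoder stand in for the (conditional) VAE's inverse mapping I and generator G, and their toy architectures are placeholder assumptions.

import torch
import torch.nn.functional as F

def vae_attack(classifier, encoder, decoder, x, y, eps):
    """VAE-attack: z' = I(x) + eps * sgn(grad_z loss(f(G(z)), y)), then decode G(z')."""
    z = encoder(x).detach().requires_grad_(True)       # z = I(x)
    loss = F.cross_entropy(classifier(decoder(z)), y)  # loss of f(G(z))
    grad, = torch.autograd.grad(loss, z)
    return decoder(z + eps * grad.sign()).detach()     # on-manifold adversarial example

enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 16))  # toy I
dec = torch.nn.Sequential(torch.nn.Linear(16, 784),
                          torch.nn.Unflatten(1, (1, 28, 28)))            # toy G
clf = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
print(vae_attack(clf, enc, dec, x, y, eps=0.1).shape)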
This paper analytically considers two flavours of adversarial training in a Gaussian mixture model. The first uses regular adversarial examples, and the second uses examples drawn from a generative model. The authors show that the adversarial perturbations generated in the two cases differ in a cleanly-characterisable way: in the first case the perturbations differ from real data in a direction aligned with the smallest eigenvalues of the data covariance. In the latter case the perturbations are in a direction aligned with the largest eigenvalues. Experimental results on MNIST and CIFAR are presented to illustrate how the analysis transfers to real datasets.
SP:8ebdf09acf96ca3dc86a413fdcd2f524d2a54cb7
Towards Learning to Remember in Meta Learning of Sequential Domains
1 INTRODUCTION . Humans have the ability to quickly learn new skills from a few examples , without erasing old skills . It is desirable for machine-learning models to adopt this capability when learning under changing contexts/domains , which are common scenarios in real-world problems . These tasks are easy for humans , yet pose challenges for current deep-learning models , mainly for the following two reasons : 1 ) Catastrophic forgetting is a well-known problem for neural networks , which are prone to drastically losing knowledge of old tasks when the domain is shifted ( McCloskey & Cohen , 1989 ) ; 2 ) It has been a long-standing challenge to make neural networks generalize quickly from a limited amount of training data ( Wang et al. , 2020a ) . For example , a dialogue system may be trained on a sequence of domains ( hotel booking , insurance , restaurants , car services , etc . ) due to the sequential availability of datasets ( Mi et al. , 2020 ) . For each domain , each task is defined as learning one customer-specific model ( Lin et al. , 2019 ) . After finishing meta training , the model can be deployed to the previously trained domains ; new ( unseen ) customers from previous domains may arrive later , each with their own ( small ) training data ( support set ) used for adapting the sequentially meta-learned model . After adaptation , the newly adapted model for the new customers can be deployed to respond to the customers . We formulate the above problem as sequential domain few-shot learning , where a model is required to make proper decisions based on only a few training examples while undergoing constantly changing contexts/domains . It is expected that adjustments to a new context/domain should not erase knowledge already learned from old ones . The problem consists of two key components that have been considered separately in previous research : the ability to learn from a limited amount of data , referred to as few-shot learning ; and the ability to learn new tasks without forgetting old knowledge , known as continual learning . The two aspects have proven to be particularly challenging for deep learning models , explored independently by extensive previous work ( Finn et al. , 2017 ; Snell et al. , 2017 ; Kirkpatrick et al. , 2017 ; Lopez-Paz & Ranzato , 2017 ) . However , a more challenging yet useful perspective that jointly integrates the two aspects remains less explored . Generally speaking , meta-learning targets learning from a large number of similar tasks with a limited number of training examples per class . Most existing works focus on developing the generalization ability under a single context/domain ( Santoro et al. , 2016 ; Finn et al. , 2017 ; 2018 ; Snell et al. , 2017 ; Ravi & Beatson , 2019 ) . Recently , it has been shown that catastrophic forgetting often occurs when transferring a meta-learning model to a new context ( Ren et al. , 2019 ; Yoon et al. , 2020 ) . Continual learning aims to mitigate negative backward transfer effects on learned tasks when input distribution shift occurs during sequential context changes . The related techniques are currently applied mostly to standard classification problems ( Serrà et al. , 2018 ; Ebrahimi et al. , 2020b ) . In this paper , we generalize this to the sequential domain meta-learning setting , which seeks good generalization on unseen tasks from all domains with only limited training resources from previous domains . We term this problem sequential domain meta learning .
Note that this setting is different from continual few-shot learning , which focuses on remembering previously learned low-resource tasks in a single domain . Our setting does not aim to remember a specific task , but rather to maintain good generalization to a large number of unseen few-shot tasks from previous domains without catastrophic forgetting . This setting is common and fits well in dynamic real-world scenarios such as recommendation systems and dialogue training systems . The domain shift arising in this setting during meta learning poses new challenges to existing continual-learning techniques . This is mainly due to the high variability underlying a large number of dynamically formed few-shot tasks , making it infeasible for a model to explicitly remember each task . In our setting , a model is expected to remember patterns generic to a domain , while neglecting the noise and variance of a specific few-shot task . This ability , termed remember to generalize , allows a model to capture general patterns of a domain that repeatedly occur in batches of tasks while avoiding being too sensitive to a specific few-shot task . In this paper , we propose to address the aforementioned challenges by designing a dynamic learning-rate adaptation scheme for learning to remember previous domains . This scheme jointly considers gradients from multiple few-shot tasks to filter out task variance and remember only patterns that are generic to each domain . Our main idea is to meta learn both the model parameters and the learning rates by backpropagating a domain loss and a memory loss to adaptively update the model parameters and the learning rates , respectively . Specifically , our mechanism keeps a small memory of tasks from previous domains , which are then used to guide the dynamic and adaptive learning behaviors on different portions of the network parameters . The proposed mechanism is versatile and applicable to both the metric-based prototypical network ( Snell et al. , 2017 ) and the gradient-based ANIL ( Raghu et al. , 2020 ) meta-learning models . Our contributions are summarized as follows : • We propose a challenging benchmark that requires a meta learning model to learn sequentially on a sequence of domains under domain shift without much forgetting of previous domains . • We extend meta learning models with existing dynamic learning rate modeling techniques . This can mitigate catastrophic forgetting through meta learning both model parameters and learning rates to dynamically control the network update process , and can be seamlessly integrated into both metric-based and gradient-based meta learning approaches . • We conduct extensive experiments on multiple public datasets under different sequential domain few-shot learning scenarios . We further test the functionality of the dynamic learning-rate update mechanism for both metric-based and gradient-based meta-learning approaches . Comparisons are made against a wide range of representative continual-learning techniques and models . Results demonstrate that our method outperforms strong baselines by a large margin . 2 RELATED WORKS . 2.1 META LEARNING . Meta learning ( Schmidhuber , 1993 ) , aka learning to learn , aims to rapidly adapt to a new task by reusing previous experience through training on a large number of tasks . Meta learning can be roughly classified into the following categories : 1 ) Metric/Embedding-based approaches such as ( Vinyals et al. , 2016 ; Snell et al.
, 2017 ; Edwards & Storkey , 2017 ) , which map input data into embedding ( feature ) spaces with decisions made based on some distance metric in the feature space ; 2 ) Black-box learning methods such as ( Andrychowicz et al. , 2016 ; Graves et al. , 2014 ; Mishra et al. , 2018 ) ; 3 ) Optimization-based methods such as ( Finn et al. , 2017 ; Ravi & Larochelle , 2017 ; Li et al. , 2017 ; Antoniou & Storkey , 2019 ) , which improve gradient-based optimization algorithms or learn to initialize network parameters ; and 4 ) Bayesian meta-learning methods such as ( Ravi & Beatson , 2019 ; Finn et al. , 2018 ; Yoon et al. , 2018b ; Grant et al. , 2018 ; Wang et al. , 2020b ) . These methods are used either to interpret and understand MAML ( Grant et al. , 2018 ) , or to model the uncertainty of meta learning models ( Yoon et al. , 2018b ; Finn et al. , 2018 ; Wang et al. , 2020b ) . 5 ) Memory-based meta learning ( Santoro et al. , 2016 ; Munkhdalai & Yu , 2017 ; Mikulik et al. , 2020 ) , which applies an additional memory component for meta learning . Online meta learning ( Finn et al. , 2019 ) is also related to our work . It focuses on forward transfer , i.e. , achieving better performance on future tasks , and uses all the data from previous tasks to do meta learning , while our setting is significantly different as we focus on mitigating catastrophic forgetting with only very limited access to previous domains . Dynamically updating the learning rates of networks is not new and has been explored in several contexts . Meta-SGD ( Li et al. , 2017 ) learns per-parameter learning rates for meta learning to improve flexibility and performance . Gupta et al . ( 2020 ) use dynamic learning rates to mitigate forgetting in online continual learning . T-net ( Lee & Choi , 2018 ) learns a metric in activation space , which informs the update direction and step size for task-specific learning . Flennerhag et al . ( 2020 ) propose warped gradient descent , which meta-learns an efficiently parameterised preconditioning matrix to dynamically update the network . Our work extends dynamic learning rate techniques to the sequential domain meta learning setting to mitigate catastrophic forgetting . 2.2 CONTINUAL LEARNING . Continual learning tackles the problem of maintaining knowledge when input distribution shift happens in sequentially arriving tasks . There are different methods to address this problem , including 1 ) retaining memory for future replay ( Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2019a ; Riemer et al. , 2019 ; Chaudhry et al. , 2019b ) ; 2 ) designing tailored network architectures ( Rusu et al. , 2016 ; Fernando et al. , 2017 ; Yoon et al. , 2018a ) ; 3 ) performing proper regularization during parameter updates ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; von Oswald et al. , 2019 ) ; and 4 ) introducing Bayesian methods for model parameter inference ( Nguyen et al. , 2018 ; Ebrahimi et al. , 2020a ) . Specifically , methods based on memory replay store representative samples from old tasks and rehearsal is performed during training ( Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2019a ; Riemer et al. , 2019 ) . Recent research also utilizes generative models to memorize previously seen data ( Lesort et al. , 2019 ) . Representatives of architecture-based methods include Progressive Neural Networks ( Rusu et al. , 2016 ) , PathNet ( Fernando et al. , 2017 ) , Dynamically Expandable Networks ( Yoon et al. , 2018a ) , Hard Attention Mask ( HAT ) ( Serrà et al.
, 2018 ) and PackNet ( Mallya & Lazebnik , 2017 ) , etc . These models explicitly modify the network topology to preserve previous knowledge . The classic architecture-based approaches proposed in ( Serrà et al. , 2018 ) and ( Mallya & Lazebnik , 2017 ) do not fit into this setting , as they attempt to fully remember each historic task . Progressive Neural Networks ( Rusu et al. , 2016 ) guarantee zero forgetting but at the cost of growing network architectures and rapidly increasing parameters , which is unaffordable in memory-constrained cases . Regularization-based methods constrain the updated parameters to avoid drastic changes to previously learned tasks ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; von Oswald et al. , 2019 ; Ebrahimi et al. , 2020c ) . They can restrict the capacity for meta-learning new domains , and can thus hurt performance on a new domain . Bayesian methods model parameters in a probabilistic way , and parameters are then updated either based on their posterior distributions ( Nguyen et al. , 2018 ) or on their uncertainty ( Ebrahimi et al. , 2020a ) . However , in the context of meta-learning , the uncertainty or posterior estimation can be highly inaccurate due to the small-data setting in each task , thus hindering performance . Recently , several works have used meta learning to improve continual learning . For example , Javed & White ( 2019 ) propose to learn versatile representations by explicitly training towards minimizing forgetting .
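To make the learning-to-remember idea above more tangible, here is a minimal, hedged PyTorch sketch of one update: model parameters follow the current-domain loss using meta-learned per-block learning rates, while the learning rates themselves are updated by backpropagating a memory loss computed on a small buffer of tasks from previous domains. All function names, shapes, and step sizes are illustrative assumptions rather than the paper's exact algorithm.

import torch

def sequential_meta_step(params, log_lrs, domain_batch, memory_batch, loss_fn,
                         lr_lr=1e-3):
    """One illustrative update: params follow the current-domain loss with learned
    per-block learning rates; the learning rates follow a memory loss measuring
    how much the update would hurt tasks stored from previous domains."""
    lrs = [l.exp() for l in log_lrs]  # keep learning rates positive
    domain_loss = loss_fn(params, domain_batch)
    grads = torch.autograd.grad(domain_loss, params, create_graph=True)
    new_params = [p - lr * g for p, lr, g in zip(params, lrs, grads)]
    memory_loss = loss_fn(new_params, memory_batch)  # forgetting signal
    lr_grads = torch.autograd.grad(memory_loss, log_lrs)
    with torch.no_grad():
        for p, np_ in zip(params, new_params):
            p.copy_(np_)              # apply the domain update
        for l, g in zip(log_lrs, lr_grads):
            l -= lr_lr * g            # adapt learning rates via the memory loss
    return domain_loss.item(), memory_loss.item()

params = [torch.randn(5, 3, requires_grad=True), torch.zeros(5, requires_grad=True)]
log_lrs = [torch.zeros(1, requires_grad=True) for _ in params]
def loss_fn(ps, batch):
    x, y = batch
    return torch.nn.functional.cross_entropy(x @ ps[0].t() + ps[1], y)
batch = (torch.randn(8, 3), torch.randint(0, 5, (8,)))
memory = (torch.randn(8, 3), torch.randint(0, 5, (8,)))
print(sequential_meta_step(params, log_lrs, batch, memory, loss_fn))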
The paper proposes a method for the sequential meta-learning problem. The authors meta-learn not only the model parameters but also learning-rate vectors for parameter blocks. To this end, the meta-learned model finds appropriate model parameters and adaptive learning-rate vectors that capture task-general information. Overall, experiments are performed on few-shot meta-learning settings with sequential domains (datasets).
SP:000d80bbed580799f47117d2c65cb08f17b783e3
Provable Robust Learning for Deep Neural Networks under Agnostic Corrupted Supervision
1 INTRODUCTION . Corrupted supervision is a common issue in real-world learning tasks , where the learning targets are not accurate due to various factors in the data collection process . In deep learning models , such corruption is especially severe : their degrees of freedom make them easily memorize corrupted examples and thus susceptible to overfitting ( Zhang et al. , 2016 ) . There have been extensive efforts to achieve robustness against corrupted supervision . A natural approach to deal with corrupted supervision in deep neural networks ( DNNs ) is to reduce the model ’ s exposure to corrupted data points during training . By detecting and filtering ( or re-weighting ) the possibly corrupted samples , the learning is expected to deliver a model that is similar to the one trained on clean data ( without corruption ) ( Kumar et al. , 2010 ; Han et al. , 2018 ; Zheng et al. , 2020 ) . There are different criteria designed to identify the corrupted data points in training . For example , Kumar et al . ( 2010 ) ; Han et al . ( 2018 ) ; Jiang et al . ( 2018 ) leveraged the loss function values of data points ; Zheng et al . ( 2020 ) tapped prediction uncertainty for filtering data ; Malach & Shalev-Shwartz ( 2017 ) used the disagreement between two deep networks ; Reed et al . ( 2014 ) utilized the prediction consistency of neighboring iterations . The success of these methods highly depends on the effectiveness of the detection criteria in correctly identifying the corrupted data points . Since the corrupted labels remain unknown throughout the learning , such “ unsupervised ” detection approaches may not be effective : they either lack theoretical guarantees of robustness ( Han et al. , 2018 ; Reed et al. , 2014 ; Malach & Shalev-Shwartz , 2017 ; Li et al. , 2017 ) or provide guarantees only under the assumption that prior knowledge about the type of corruption is available ( Zheng et al. , 2020 ; Shah et al. , 2020 ; Patrini et al. , 2017 ; Yi & Wu , 2019 ) . Besides , another limitation of many existing approaches is that they are exclusively designed for classification problems ( e.g. , Malach & Shalev-Shwartz ( 2017 ) ; Reed et al . ( 2014 ) ; Menon et al . ( 2019 ) ; Zheng et al . ( 2020 ) ) and do not extend straightforwardly to regression problems . To tackle these challenges , this paper presents a unified optimization framework with robustness guarantees , without any assumptions on how the supervision is corrupted , which is applicable to both classification and regression problems . Instead of developing an accurate criterion for detecting corrupted samples , we adopt a novel perspective and focus on limiting the collective impact of corrupted samples during the learning process through robust mean estimation of gradients . Specifically , if our estimated average gradient is close to the gradient from the clean data during the learning iterations , then the final model will be close to the model trained on clean data . As such , a corrupted data point can still be used during training when it does not considerably alter the averaged gradient . This observation has a remarkable impact on our algorithm design : instead of explicitly quantifying ( and identifying ) individual corrupted data points , which is a hard problem in itself , we are now dealing with an easier task , i.e. , eliminating training data points that significantly distort the mean gradient estimation .
One immediate consequence of this design is that , even when a corrupted data point fails to be excluded by the proposed algorithm , it is likely to have very limited impact on the overall loss , as compared with state-of-the-art approaches that filter data points based on loss values . We perform experiments on both regression and classification with corrupted supervision on multiple benchmark datasets . The results show that the proposed method outperforms the state of the art . 2 BACKGROUND . Learning from corrupted data ( Huber , 1992 ) has attracted considerable attention in the machine learning community ( Natarajan et al. , 2013 ) . Many recent studies have investigated the robustness of classification tasks with noisy labels . For example , Kumar et al . ( 2010 ) proposed a self-paced learning ( SPL ) approach , which assigns higher weights to examples with smaller loss . A similar idea was used in curriculum learning ( Bengio et al. , 2009 ) , in which the model learns easy samples first before learning harder ones . Alternative methods inspired by SPL include learning the data weights ( Jiang et al. , 2018 ) and collaborative learning ( Han et al. , 2018 ; Yu et al. , 2019 ) . Label correction ( Patrini et al. , 2017 ; Li et al. , 2017 ; Yi & Wu , 2019 ) is another approach , which revises the original labels in the data with the goal of recovering clean labels from corrupted ones . However , since we do not have access to which data points are corrupted , it is hard to get provable guarantees for label correction without strong assumptions regarding the corruption type . Accurate estimation of gradients is a key step for successful optimization . The relationship between gradient estimation and final convergence has been widely studied in the optimization community . Since computing an approximated ( and potentially biased ) gradient is often more efficient than computing the exact gradient , many studies used approximated gradients to optimize their models and showed that they suffer from the biased estimation problem if there are no assumptions on the gradient estimation ( d'Aspremont , 2008 ; Schmidt et al. , 2011 ; Bernstein et al. , 2018 ; Hu et al. , 2020 ; Ajalloeian & Stich , 2020 ) . A closely related topic is robust estimation of the mean . Given corrupted data , robust mean estimation aims at generating an estimated mean µ̂ such that the difference between the estimated mean on corrupted data and the mean of clean data , ‖µ̂− µ‖2 , is minimized . It has been shown that the median or the trimmed mean are the optimal statistics for mean estimation in one-dimensional data ( Huber , 1992 ) . However , robustness in high dimension is quite challenging , since applying the coordinate-wise optimal robust estimator would lead to an error factor O ( √ d ) that scales with the data dimension . Although some classical work , such as the Tukey median ( Tukey , 1975 ) , successfully designed algorithms to get rid of the O ( √ d ) error , the algorithms themselves are not polynomial-time . More recently , Diakonikolas et al . ( 2016 ) ; Lai et al . ( 2016 ) successfully designed polynomial-time algorithms with dimension-free error bounds . The results have been widely applied to improve algorithmic efficiency in various scenarios ( Dong et al. , 2019 ; Cheng et al. , 2020 ) . Robust optimization aims to optimize the model given corrupted data . Many previous studies improve the robustness of the optimization in different problem settings .
However , most of them either study linear regression and its variants ( Bhatia et al. , 2015 ; 2017 ; Shen & Sanghavi , 2019 ) or convex optimization ( Prasad et al. , 2018 ) . Thus , those results cannot be directly generalized to deep neural networks . Diakonikolas et al . ( 2019 ) propose a very general non-convex optimization method with an agnostic corruption guarantee . However , the space complexity of the algorithm is high , so it cannot be applied to deep neural networks given current hardware limitations . 3 METHODOLOGY . Before introducing our algorithm , we first discuss the corrupted supervision setting . To characterize agnostic corruption , we make use of an adversary that tries to corrupt the supervision of a clean dataset . There is no limitation on how the adversary corrupts the supervision : it can either randomly permute the target , or corrupt it in a way that maximizes the negative impact ( i.e. , lowers performance ) . Firstly , the adversary can choose up to an ε-fraction of the clean targets Dy ∈ Rn×q and change the selected rows of Dy to arbitrary valid numbers , generating D̃y ∈ Rn×q . Then , the adversary returns the corrupted dataset Dx , D̃y to our learning algorithm A . In this process , the only constraint on the adversary is the fraction ε , and the adversary has full knowledge of the data , and even of the learning algorithm A . A natural question to ask is : given a dataset with ε-fraction corrupted supervision Dx ∈ Rn×p , D̃y , and a learning objective φ : Rp × Rq × Rd → R parameterized by θ , can we output parameters θ ∈ Rd such that ‖∇θφ ( θ ; Dx , Dy ) ‖ is minimized ? When ε = 0 , we have D̃y = Dy and learning is done on the clean data . The stochastic gradient descent could converge to a stationary point , where ‖∇θφ ( θ ; Dx , Dy ) ‖ = 0 . However , when the supervision is corrupted as above , this is no longer the case , due to the error in θ induced by the corrupted data . We thus want an efficient algorithm to find a model θ that minimizes ‖∇θφ ( θ ; Dx , Dy ) ‖ . A robust model θ should have a small value of ‖∇θφ ( θ ; Dx , Dy ) ‖ , and we hypothesize that a smaller ‖∇θφ ( θ ; Dx , Dy ) ‖ leads to better generalization . 3.1 STOCHASTIC GRADIENT DESCENT WITH BIASED GRADIENT . A direct consequence of corrupted supervision is biased gradient estimation . In this section , we will first analyze how such biased gradient estimation affects the robustness of learning . The classical analysis of stochastic gradient descent ( SGD ) requires access to the stochastic gradient oracle , which is an unbiased estimation of the true gradient . However , corrupted supervision leads to corrupted gradients , and it is thus difficult to get an unbiased gradient estimation without assumptions on how the gradients are corrupted . We start the analysis with the following informal theorem ( without an elaborated discussion of assumptions ) on how a biased gradient affects the final convergence of SGD . Its formal version is provided in Theorem 4 in the Appendix . Theorem 1 ( Convergence of Biased SGD ( Informal ) ) . Under mild assumptions , let ζ be the maximum ℓ2 norm of the difference between the clean and corrupted minibatch gradients , ‖g − g̃‖ ≤ ζ . Then , using the biased gradient estimation , SGD converges to a ζ-approximated stationary point : E‖∇φ ( θt ) ‖2 = O ( ζ2 ) . Remark 1 . In the corrupted supervision setting , let the gradient estimated from the corrupted data D̃ be g̃ , and the gradient estimated from the clean data D be g.
Assuming ‖g̃ − g‖ ≤ ζ , it follows that when using the corrupted dataset in SGD , the iterates converge to a ζ-approximated stationary point of the objective defined by the clean data . Note that the difference between the above theorem and a typical convergence theorem is that we are using a biased gradient estimation . According to Theorem 1 and the remark , a robust estimation of the gradient g is the key to ensuring a robust model ( i.e. , convergence to the clean solution ) . We also assume the loss function has the form L ( y , ŷ ) ; many commonly used loss functions fall into this category .
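A hedged sketch of the overall strategy — average per-sample gradients after discarding the ε-fraction whose norms distort the mean the most — is shown below; this follows the high-level description above and the review's summary of the first algorithm, not the paper's exact procedure.

import torch

def filtered_mean_gradient(per_sample_grads, eps):
    """Average per-sample gradients after removing the eps-fraction with the
    largest l2 norms - a sketch of limiting the collective influence of
    corrupted samples on the mean gradient estimate."""
    n = per_sample_grads.shape[0]
    keep = n - int(eps * n)
    norms = per_sample_grads.norm(dim=1)
    idx = norms.argsort()[:keep]      # keep the smallest-norm gradients
    return per_sample_grads[idx].mean(dim=0)

g = torch.randn(100, 10)
g[:10] += 50.0  # simulate a 10% fraction of corrupted, large-magnitude gradients
print(filtered_mean_gradient(g, eps=0.1).norm().item())  # close to the clean mean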
In this paper, the authors studied the problem of training neural networks under data poisoning, i.e., when a small fraction of the training data is corrupted by the adversary. They considered two data corruption settings: one allows both the data x and the supervision y to be corrupted, which is called general corruption, and one with only the supervision y corrupted. Their first algorithm, which removes the datapoints whose gradient norm is large when computing the average gradient, applies to the general corruption setting. They showed their algorithm has eps*sqrt(d) error or eps*L error, which can be quite large in high-dimensional deep-neural-network settings. Their second algorithm applies to the setting where only the supervision y is corrupted, and works by removing the datapoints whose output-layer gradient is large. Assuming the clean data has bounded gradients, and the dimension of y is p, their algorithm achieves error eps*sqrt(p).
SP:e3330123a00c4e32e60792230c6a7a883e84aa98
Multi-Class Uncertainty Calibration via Mutual Information Maximization-based Binning
1 INTRODUCTION . Despite great ability in learning discriminative features , deep neural network ( DNN ) classifiers often make over-confident predictions . This can lead to potentially catastrophic consequences in safety-critical applications , e.g. , medical diagnosis and autonomous driving perception tasks . A multi-class classifier is perfectly calibrated if , among the cases receiving the prediction distribution q , the ground truth class distribution is also q . The mismatch between the prediction and the ground truth distribution can be measured using the Expected Calibration Error ( ECE ) ( Guo et al. , 2017 ; Kull et al. , 2019 ) . Since the pioneering work of Guo et al . ( 2017 ) , scaling methods have been widely acknowledged as an efficient post-hoc multi-class calibration solution for modern DNNs . The common practice of evaluating their ECE resorts to histogram density estimation ( HDE ) for modeling the distribution of the predictions . However , Vaicenavicius et al . ( 2019 ) proved that with a fixed number of evaluation bins the ECE of scaling methods is underestimated even with an infinite number of samples . Widmann et al . ( 2019 ) ; Kumar et al . ( 2019 ) ; Wenger et al . ( 2020 ) also empirically showed this underestimation phenomenon . This renders scaling methods unreliable calibration solutions , as their true ECEs can be larger than the evaluated values , putting many applications at risk . Additionally , configuring the HDE faces a bias/variance trade-off . Increasing its number of evaluation bins reduces the bias , as the evaluation quantization error becomes smaller ; however , the estimation of the ground truth correctness begins to suffer from high variance . Fig . 1-a ) shows that the empirical ECE estimates of both the raw network outputs and the temperature scaling method ( TS ) ( Guo et al. , 2017 ) are sensitive to the number of evaluation bins . It remains unclear how to optimally choose the number of evaluation bins so as to minimize the estimation error . Recent work ( Zhang et al. , 2020 ; Widmann et al. , 2019 ) suggested kernel density estimation ( KDE ) instead of HDE . However , the choice of the kernel and bandwidth also remains unclear , and the smoothness of the ground truth distribution is hard to verify in practice . An alternative technique for post-hoc calibration is Histogram Binning ( HB ) ( Zadrozny & Elkan , 2001 ; Guo et al. , 2017 ; Kumar et al. , 2019 ) . Note that here HB is a calibration method and is different from the HDE used for evaluating the ECEs of scaling methods . HB produces discrete predictions , whose probability mass functions can be empirically estimated without using HDE/KDE . Therefore , its ECE estimate is constant and unaffected by the number of evaluation bins in Fig . 1-a ) , and it can converge to the true value with increasing evaluation samples ( Vaicenavicius et al. , 2019 ) , see Fig . 1-b ) . The most common variants of HB are Equal ( Eq . ) size ( uniformly partitioning the probability interval [ 0 , 1 ] ) and Eq . mass ( uniformly distributing samples over bins ) binning . These simple methods for multi-class calibration are known to degrade accuracy , since quantization through binning may remove a considerable amount of label information contained in the classifier ’ s outputs . Code is available at https://github.com/boschresearch/imax-calibration .
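For reference, the two standard HB variants mentioned above can be sketched in a few lines of NumPy; the bin count, the fallback representative for empty bins, and the toy data are illustrative assumptions.

import numpy as np

def fit_histogram_binning(conf, labels, n_bins=15, scheme="eq_mass"):
    """Fit one-vs-rest histogram binning: choose bin edges, then set each bin's
    representative to the empirical frequency of positives in that bin."""
    if scheme == "eq_size":                  # uniform partition of [0, 1]
        edges = np.linspace(0.0, 1.0, n_bins + 1)
    else:                                    # eq_mass: equal samples per bin
        edges = np.quantile(conf, np.linspace(0.0, 1.0, n_bins + 1))
        edges[0], edges[-1] = 0.0, 1.0
    idx = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    reps = np.array([labels[idx == b].mean() if np.any(idx == b) else 0.5
                     for b in range(n_bins)])
    return edges, reps

def apply_histogram_binning(conf, edges, reps):
    idx = np.clip(np.digitize(conf, edges[1:-1]), 0, len(reps) - 1)
    return reps[idx]                         # discrete calibrated confidences

rng = np.random.default_rng(0)
conf = rng.uniform(size=2000)
labels = (rng.uniform(size=2000) < conf ** 2).astype(float)  # miscalibrated toy data
edges, reps = fit_histogram_binning(conf, labels)
print(apply_histogram_binning(conf[:5], edges, reps))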
In this work we show that the key for HB to retain the accuracy of trained classifiers is choosing bin edges that minimize the amount of label information loss . Both Eq . size and mass binning are suboptimal . We present I-Max , a novel iterative method for optimizing bin edges with proven convergence . As the location of its bin edges inherently ensures sufficient calibration samples per bin , the bin representatives of I-Max can then be effectively optimized for calibration . Two design objectives , calibration and accuracy , are thus nicely disentangled under I-Max . For multi-class calibration , I-Max adopts the one-vs-rest ( OvR ) strategy to individually calibrate the prediction probability of each class . To cope with a limited number of calibration samples , we propose to share one binning scheme for calibrating the prediction probabilities of similar classes , e.g. , classes with similar class priors or belonging to the same class category . In the small-data regime , we can even choose to fit one binning scheme on the merged training sets of all per-class calibrations . Such a shared class-wise ( sCW ) calibration strategy greatly improves the sample efficiency of I-Max binning . I-Max is evaluated according to multiple performance metrics , including accuracy , ECE , Brier and NLL , and compared against benchmark calibration methods across multiple datasets and trained classifiers . For ImageNet , I-Max obtains up to 66.11 % reduction in ECE compared to the baseline and up to 38.14 % reduction compared to the state-of-the-art GP-scaling method ( Wenger et al. , 2020 ) . 2 RELATED WORK . For confidence calibration , Bayesian DNNs and their approximations , e.g . ( Blundell et al. , 2015 ) ( Gal & Ghahramani , 2016 ) , are resource-demanding methods to consider predictive model uncertainty . However , applications with limited complexity overhead and latency require sampling-free and single-model based calibration methods . Examples include modifying the training loss ( Kumar et al. , 2018 ) , scalable Gaussian processes ( Milios et al. , 2018 ) , sampling-free uncertainty estimation ( Postels et al. , 2019 ) , data augmentation ( Patel et al. , 2019 ; Thulasidasan et al. , 2019 ; Yun et al. , 2019 ; Hendrycks et al. , 2020 ) and ensemble distribution distillation ( Malinin et al. , 2020 ) . In comparison , a simple approach that requires no retraining of the models is post-hoc calibration ( Guo et al. , 2017 ) . Prediction probability ( logit ) scaling and binning are the two main solutions for post-hoc calibration . Scaling methods use parametric or non-parametric models to adjust the raw logits . Guo et al . ( 2017 ) investigated linear models , ranging from the single-parameter based TS to more complicated vector/matrix scaling . To avoid overfitting , Kull et al . ( 2019 ) suggested to regularize matrix scaling with an L2 loss on the model weights . Recently , Wenger et al . ( 2020 ) adopted a latent Gaussian process for multi-class calibration . Ji et al . ( 2019 ) extended TS to a bin-wise setting , by learning separate temperatures for various confidence subsets . To improve the expressive capacity of TS , an ensemble of temperatures was adopted by Zhang et al . ( 2020 ) . Owing to the continuous outputs of scaling methods , one critical issue discovered in recent work is that their empirical ECE estimate is not only non-verifiable ( Kumar et al. , 2019 ) , but also asymptotically smaller than the ground truth ( Vaicenavicius et al. , 2019 ) . Recent work ( Zhang et al.
, 2020; Widmann et al., 2019) exploited KDEs for an improved ECE evaluation; however, the parameter setting requires further investigation. Nixon et al. (2019) and Ashukha et al. (2020) discussed potential issues of the ECE metric, and the former suggested to 1) use equal-mass binning for ECE evaluation; 2) measure both top-1 and class-wise ECE to evaluate multi-class calibrators; and 3) only include predictions with a confidence above some epsilon in the class-wise ECE score. As an alternative to scaling, HB quantizes the raw confidences with either Eq. size or Eq. mass bins (Zadrozny & Elkan, 2001). It offers asymptotically convergent ECE estimation (Vaicenavicius et al., 2019), but is less sample efficient than scaling methods and also suffers from accuracy loss (Guo et al., 2017). Kumar et al. (2019) proposed to perform scaling before binning for improved sample efficiency. Isotonic regression (Zadrozny & Elkan, 2002) and Bayesian binning into quantiles (BBQ) (Naeini et al., 2015) are often viewed as binning methods. However, their ECE estimates face the same issue as scaling methods: though isotonic regression fits a piecewise linear function, its predictions are continuous as they are interpolated for unseen data; BBQ considers multiple binning schemes with different numbers of bins and combines them using a continuous Bayesian score, resulting in continuous predictions. In this work, we improve the current HB design by casting bin optimization into an MI maximization problem. Furthermore, our findings can also be used to improve scaling methods. 3 METHOD. Here we introduce the I-Max binning scheme, which addresses the issues of HB in terms of preserving label information in multi-class calibration. After the problem setup in Sec. 3.1, Sec. 3.2 presents a sample-efficient technique for one-vs-rest calibration. In Sec. 3.3 we formulate the training objective of binning as MI maximization and derive a simple algorithm for I-Max binning. 3.1 PROBLEM SETUP. We address supervised multi-class classification tasks, where each input x ∈ X belongs to one of K classes, and the ground truth labels are one-hot encoded, i.e., y = [y_1, y_2, ..., y_K] ∈ {0, 1}^K. Let f : X → [0, 1]^K be a DNN trained using the cross-entropy loss. It maps each x onto a probability vector q = [q_1, ..., q_K] ∈ [0, 1]^K, which is used to rank the K possible classes of the current instance, e.g., argmax_k q_k being the top-1 ranked class. As the trained classifier tends to overfit to the cross-entropy loss rather than the accuracy (i.e., the 0/1 loss), q as the prediction distribution is typically poorly calibrated. A post-hoc calibrator h revising q can deliver improved performance. To evaluate the calibration performance of h ∘ f, the class-wise ECE averaged over the K classes is a common metric, measuring the expected deviation of the predicted per-class confidence after calibration, i.e., h_k(q), from the ground truth probability p(y_k = 1 | h(q)):

$$\mathrm{cwECE}(h \circ f) = \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}_{q=f(x)}\Big[\big|\,p(y_k = 1 \mid h(q)) - h_k(q)\,\big|\Big]. \tag{1}$$

When h is a binning scheme, h_k(q) is discrete and thus repetitive. We can then empirically set p(y_k = 1 | h(q)) as the frequency of label-1 samples among those receiving the same h_k(q). On the contrary, scaling methods are continuous. It is unlikely that two samples attain the same h_k(q), thus requiring additional quantization, i.e.
, applying HDE for modeling the distribution of h_k(q), or alternatively using KDE. Note that ideally we should compare the whole distribution h(q) with the ground truth p(y | h(q)). However, neither HDE nor KDE scales well with the number of classes. Therefore, the multi-class ECE evaluation often boils down to the one-dimensional class-wise ECE as in (1), or the top-1 ECE, i.e., $\mathbb{E}\big[\,\big|p(y_{k^\ast} = 1 \mid h(q)) - \max_k h_k(q)\big|\,\big]$ with $k^\ast = \arg\max_k h_k(q)$.
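As a worked example of Eq. (1) for a binning calibrator: since h_k(q) is discrete, p(y_k = 1 | h(q)) can be estimated by simple counting, with no HDE or KDE involved. The sketch below uses our own naming and assumes the calibrated per-class confidences and one-hot labels are NumPy arrays.

```python
import numpy as np

def class_wise_ece_discrete(calibrated, labels):
    # calibrated: [N, K] per-class confidences after a binning calibrator
    # (discrete bin representatives); labels: [N, K] one-hot ground truth.
    n, k = calibrated.shape
    total = 0.0
    for c in range(k):
        for v in np.unique(calibrated[:, c]):      # each bin representative
            mask = calibrated[:, c] == v
            gt_freq = labels[mask, c].mean()       # p(y_c = 1 | h_c(q) = v)
            total += mask.mean() * np.abs(gt_freq - v)
    return total / k
```

Because the sum runs over finitely many bin representatives, this estimate does not depend on any evaluation-bin hyperparameter, which is exactly the property that makes HB's ECE verifiable.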
This paper highlights the issues with scaling methods and histogram binning, i.e., the underestimated calibration error of scaling methods, and histogram binning's failure to preserve classification accuracy together with its sample inefficiency. The authors use the I-Max concept for binning, which maximizes the mutual information between labels and quantized logits. They claim that their approach mitigates the potential loss in ranking performance and allows simultaneous improvement of ranking and calibration performance by disentangling the optimization of bin edges and representatives. They also propose a shared class-wise (sCW) strategy that fits a single calibrator on the merged training sets of all K class-wise problems to improve sample efficiency.
SP:6dd9907f23d32802fd10d9405d165269fd1492ee
Mitigating Deep Double Descent by Concatenating Inputs
1 INTRODUCTION. Underparameterization and overparameterization are at the heart of understanding modern neural networks. The traditional notion of underparameterization and overparameterization led to the classic U-shaped generalization error curve (Trevor Hastie & Friedman, 2001; Stuart Geman & Doursat, 1992), where generalization would worsen when the model had either too few (underparameterized) or too many (overparameterized) parameters. Correspondingly, it was expected that an underparameterized model would underfit and fail to identify more complex and informative patterns, and an overparameterized model would overfit and identify non-informative patterns. This view no longer holds for modern neural networks. It is widely accepted that neural networks are vastly overparameterized, yet generalize well. There is strong evidence that increasing the number of parameters leads to better generalization (Zagoruyko & Komodakis, 2016; Huang et al., 2017; Larsson et al., 2016), and models are often trained to achieve zero training loss (Salakhutdinov, 2017) while still improving in generalization error, whereas the traditional view would suggest overfitting. To bridge the gap, Belkin et al. (2018a) proposed the double descent curve, where the underparameterized region follows the U-shaped curve, and in the overparameterized region the generalization error smoothly decreases as the number of parameters increases further. This results in a peak in generalization error, where fewer samples would counter-intuitively decrease the error. There has been extensive experimental evidence of the double descent curve in deep learning (Nakkiran et al., 2019; Yang et al., 2020), as well as in models such as random forests and one-layer neural networks (Belkin et al., 2018a; Ba et al., 2020). One recurring theme in the definition of overparameterization and underparameterization lies in the number of neural network parameters relative to the number of samples (Belkin et al., 2018a; Nakkiran et al., 2019; Ba et al., 2020; Bibas et al., 2019; Muthukumar et al., 2019; Hastie et al., 2019). At a high level, a greater number of parameters than samples is generally considered overparameterization, and fewer is considered underparameterization. However, this leads to the question "What is a sample?" In this paper, we revisit the fundamental underpinnings of overparameterization and underparameterization, and stress test what it means to be overparameterized or underparameterized, through extensive experiments on a cleverly constructed input. We artificially augment existing datasets by simply stacking every combination of inputs, and show the mitigation of the double descent curve in the deep neural network setting. We humbly hypothesize that in deep neural networks we can, perhaps, artificially increase the number of samples without increasing the information contained in the dataset, and by implicitly changing the classification pipeline mitigate the double descent curve. In particular, our paper proceeds as follows: • We propose a simple construction to artificially augment existing datasets of size O(n) by stacking inputs to produce a dataset of size O(n^2). • We demonstrate that the construction has no impact on the double descent curve in the linear regression case. • We show experimentally that these results on the double descent curve do not extend to the case of neural networks.
Concretely, we reproduce results from recent landmark papers, and present the difference in behavior with respect to the double descent curve. 2 RELATED WORKS. The double descent curve was proposed recently in (Belkin et al., 2018a), where the authors define overparameterization and underparameterization via the proportion of parameters to samples. The authors explain the phenomenon through the model capacity class: with more parameters in the overparameterized region, there is larger "capacity" (i.e., the model class contains more candidates), and thus it may contain better, simpler models by the Occam's Razor rule. The interpolation region is suggested to exist when the model capacity is capable of fitting the data nearly perfectly by overfitting on non-informative features, resulting in higher test error. Experiments included a one-layer neural network, random forests, and others. The double descent curve is also observed in deep neural networks (Nakkiran et al., 2019), with the additional observation of epoch-wise double descent; there, experimentation is amplified by label noise. Following the observation of unimodal variance (Neal et al., 2018), Yang et al. (2020) also decompose the risk into bias and variance, and posit that the double descent curve arises because the bell-shaped variance curve rises faster than the bias decreases. There is substantial theoretical work on double descent, particularly in the least squares regression setting. Advani & Saxe (2017) analyse this linear setting and prove the existence of the interpolation region, where the number of parameters equals the number of samples, in the asymptotic limit where samples and parameters tend to infinity. Hastie et al. (2019) follow a similar line of work, and prove that regularization reduces the peak in the interpolation region. Belkin et al. (2019b) require only finite samples, assuming the features and target are jointly Gaussian. Other papers with a similar setup include (Bartlett et al., 2019; Muthukumar et al., 2019; Bibas et al., 2019; Mitra, 2019; Mei & Montanari, 2019). Ba et al. (2020) analyse the least squares regression setting for two-layer linear neural networks in the asymptotic setting, where the double descent curve is present when only the second layer is optimized. There is also work proving that optimally tuned ℓ2-norm regularization mitigates the double descent curve for certain linear regression models with isotropic data distribution (Nakkiran, 2019). This setting has also been studied with respect to the variance in the parameter space (Bartlett et al., 2019). Multiple descent has also been studied; in particular, there is work showing in the linear regression setting that multiple descent curves can be directly designed by the user (Chen et al., 2020). Additionally, there is supporting evidence of double descent from the sample-wise perspective (Nakkiran et al., 2020). There is other work in this area, including studying the double descent curve for least squares in random feature models (Belkin et al., 2019a; d'Ascoli et al., 2020; Ghorbani et al., 2019), leveraging the Neural Tangent Kernel to argue that for a certain number of parameters the output of the neural network diverges (Geiger et al., 2020), characterizing double descent in non-linear settings (Caron & Chretien, 2020), kernel learning (Belkin et al., 2018b; Liang et al., 2019), and connecting to other fields (Geiger et al.
, 2019). Lastly, we note that, in the deep neural network setting, models can be trained to zero training loss even with random labels (Zhang et al., 2016). 3 THE CONCATENATED INPUTS CONSTRUCTION. We introduce the concatenated inputs construction, on which our main hypothesis is based. The concatenated inputs construction refers to the general idea of concatenating pairs of inputs, and element-wise adding and averaging pairs of outputs, to produce new inputs and targets. This way the size of a dataset can be artificially (but non-trivially) increased. The construction can be applied both to the regression setting and the classification setting. In the setting of linear regression, for given input pairs (x_1, y_1), (x_2, y_2), an augmented dataset can be constructed as {([x_1, x_1], (y_1+y_1)/2), ([x_1, x_2], (y_1+y_2)/2), ([x_2, x_1], (y_2+y_1)/2), ([x_2, x_2], (y_2+y_2)/2)}, where [α, β] represents the concatenation of the inputs α and β. In the setting of classification, the process is identical: the targets are produced by element-wise addition and then averaged to sum to 1. The averaging is not strictly necessary even in the deep neural network classification case, where the binary cross entropy loss can be used instead of cross entropy. For test data, we concatenate the same input with itself, and the target is the original target. This way a dataset of size O(n) is artificially augmented to size O(n^2). Concretely, our reasons for the concatenated inputs construction are as follows: i) there is limited injection of information or semantic meaning; ii) the number of samples is significantly increased. For the purposes of understanding underparameterization, overparameterization and the double descent curve, such a construction tries to isolate the definition of a sample. We revisit and assess these implications in the context of extensive experiments in the following sections. 4 RESULTS. In this section, we reproduce settings from benchmark double descent papers, add the concatenated inputs construction, and analyze the findings. In particular, we begin with linear regression, move to one-hidden-layer feedforward neural networks, and then deep neural networks, for both model parameter-wise double descent and epoch-wise double descent. Finally, we analyze the performance of deep neural networks with the concatenated inputs construction, and the behavior of the double descent curve in the classification setting when we transfer from the cross entropy to the binary cross entropy loss. 4.1 LINEAR REGRESSION. The linear regression setting has been a fruitful testbed for empirical work on double descent, as well as yielding substantial theoretical understanding. The concatenated inputs construction is applied similarly here, however with a different motivation: we wish to argue that the construction is not expected to add any information and is therefore not expected to impact the double descent curve. We reproduce the linear regression setting from Nakkiran (2019), given in Figure 1. For the concatenated inputs construction, we first draw the number of samples before concatenation and construction of the augmented dataset. We observe that, by construction, the concatenated inputs construction does not affect the double descent curve, and the peak occurs in the exact same location.
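As a concrete illustration of the construction in Sec. 3, the following sketch builds the augmented training set and the self-concatenated test set (function names are ours; in practice one would sample pairs on the fly rather than materializing all n^2 combinations):

```python
import numpy as np

def concatenate_dataset(x, y):
    # Augmented training set: concatenate every ordered pair of inputs
    # and average the corresponding targets element-wise.
    xs, ys = [], []
    for i in range(len(x)):
        for j in range(len(x)):
            xs.append(np.concatenate([x[i], x[j]]))
            ys.append((y[i] + y[j]) / 2.0)
    return np.stack(xs), np.stack(ys)

def concatenate_test(x, y):
    # Test inputs are concatenated with themselves; targets are unchanged.
    return np.concatenate([x, x], axis=-1), y
```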
We also remark that the unchanged peak location is not surprising, and it is not complicated to understand why from a theoretical perspective. 4.2 ONE HIDDEN LAYER FEEDFORWARD NEURAL NETWORK. Following linear regression, we move to neural networks. We train a feedforward neural network with one hidden layer and ReLU activations on a subset of the MNIST dataset, reproducing the experimental setup from Belkin et al. (2018a). We vary the number of parameters in the neural network by changing the size of the hidden layer. We use the cross entropy loss instead of the original MSE loss due to the prevalence of this loss in image classification tasks. This is shown in Figure 2. We observe the double descent in the loss versus the number of parameters, but were unable to reproduce the double descent in the error. In the rightmost plot, the double descent curve is completely removed in the concatenated inputs construction relative to the other two settings: a smooth decrease in loss is observed, where there is a clear double descent in the other cases. Furthermore, we provide the extra setting of concatenating each input only with itself, and the double descent curve is present almost exactly in this scenario. This provides further evidence that the disappearance of the double descent is not due to the extra parameters which originate from the larger-sized inputs. In this setting, it appears that the behavior of underparameterization and overparameterization can be altered by simply artificially increasing the number of samples through concatenating images. In addition, the model trained on MNIST and one-hot vectors can be concatenated with itself, with all other parameters being zero, to produce a model with twice the number of hidden units which can be applied to the concatenated inputs construction. We consider this setting in the context of a possible explanation of the interpolation region, where the number of parameters nears that of samples. Concretely, it is possible for a neural network with double the hidden units in the concatenated inputs construction to recover the double descent curve by learning two smaller, disconnected networks, where each of the smaller networks is the one learned at the double descent peak of the standard, one-hot case. However, in practice, while the network can do so, it does not appear to, which leads to the smooth descent in the rightmost plot in Figure 2.
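The self-concatenation of the trained model described above can be written out explicitly. A sketch for a one-hidden-layer ReLU network with weights W1 of shape h×d and W2 of shape K×h (our notation); on a self-concatenated input [x, x], the doubled network reproduces the original logits exactly:

```python
import numpy as np

def double_network(w1, b1, w2, b2):
    # Build a net with 2h hidden units for concatenated inputs [x, x]
    # from a net with h hidden units trained on single inputs x.
    h, d = w1.shape
    w1_big = np.zeros((2 * h, 2 * d))
    w1_big[:h, :d] = w1   # first copy reads the first half of [x, x]
    w1_big[h:, d:] = w1   # second copy reads the second half
    b1_big = np.concatenate([b1, b1])
    w2_big = np.concatenate([w2, w2], axis=1) / 2.0  # average the two heads
    return w1_big, b1_big, w2_big, b2
```

Such a block-diagonal solution sits in the hypothesis space of the doubled model; the observation in Figure 2 is that gradient descent does not appear to find disconnected solutions of this kind.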
The paper investigates the double descent phenomenon. It proposes augmenting the dataset by concatenating the covariates x and interpolating the labels y, which increases the dataset size from n to n^2. The paper shows that the double descent phenomenon can be mitigated by augmenting the input in this way. The idea of investigating double descent by manipulating samples is novel and interesting.
SP:8be0ea7136590dd63b9a82556995ef1e7b1d644c
Latent Skill Planning for Exploration and Transfer
1 INTRODUCTION. Humans can effortlessly compose skills, where skills are sequences of temporally correlated actions, and quickly adapt skills learned from one task to another. In order to build re-usable knowledge about the environment, Model-based Reinforcement Learning (MBRL) (Wang et al., 2019) provides an intuitive framework which holds the promise of training agents that generalize to different situations and are sample efficient with respect to the number of environment interactions required for training. For temporally composing behaviors, hierarchical reinforcement learning (HRL) (Barto & Mahadevan, 2003) seeks to learn behaviors at different levels of abstraction explicitly. A simple approach for learning the environment dynamics is to learn a world model, either directly in the observation space (Chua et al., 2018; Sharma et al., 2019; Wang & Ba, 2019) or in a latent space (Hafner et al., 2019; 2018). World models summarize an agent's experience in the form of learned transition dynamics and reward models, which are used either to learn parametric policies by amortizing over the entire training experience (Hafner et al., 2019; Janner et al., 2019), or to perform online planning, as done in PlaNet (Hafner et al., 2018) and PETS (Chua et al., 2018). Amortization here refers to learning a parameterized policy whose parameters are updated using samples during the training phase, and which can then be directly queried at each state to output an action during evaluation. Fully online planning methods such as PETS (Chua et al., 2018) only learn the dynamics (and reward) model and rely on an online search procedure, such as the Cross-Entropy Method (CEM; Rubinstein, 1997), applied to the learned models to determine which action to execute next. Since rollouts from the learned dynamics and reward models are not executed in the actual environment during training, these learned models are sometimes also referred to as imagination models (Hafner et al., 2018; 2019). Fully amortized methods such as Dreamer (Hafner et al., 2019) train a reactive policy with many rollouts from the imagination model, and then execute the resulting policy in the environment. The benefit of the amortized method is that it becomes better with experience. Amortized policies are also faster: an action is computed in one forward pass of the reactive policy, as opposed to the potentially expensive search procedure used in CEM. Additionally, the performance of the amortized method is more consistent, as CEM relies on drawing good samples from a random action distribution. On the other hand, the shortcoming of the amortized policy is generalization: when attempting novel tasks unseen during training, CEM will plan action sequences for the new task as per the new reward function, while a fully amortized method would be stuck with a behaviour optimized for the training tasks. Since it is intractable to perform fully online random-shooting based planning in high-dimensional action spaces (Bharadhwaj et al., 2020; Amos & Yarats, 2019), this motivates the question: can we combine online search with amortized policy learning in a meaningful way to learn useful and transferable skills for MBRL?
To this end, we propose a partially amortized planning algorithm that temporally composes high-level skills through the Cross-Entropy Method (CEM) (Rubinstein, 1997), and uses these skills to condition a low-level policy that is amortized over the agent's experience. Our world model consists of a learned latent dynamics model and a learned latent reward model. We have a mutual information (MI) based intrinsic reward objective, in addition to the predicted task rewards, that is used to train the low-level policy, while the high-level skills are planned through CEM using the learned task rewards. We term our approach Learning Skills for Planning (LSP). The key idea of LSP is that the high-level skills are able to abstract out the essential information necessary for solving a task while being agnostic to irrelevant aspects of the environment, such that given a new task in a similar environment, the agent will be able to meaningfully compose the learned skills with very little fine-tuning. In addition, since the skill space is low dimensional, we can leverage the benefits of online planning in skill space through CEM, without encountering the intractability of using CEM for planning directly in the higher-dimensional action space, especially over longer time horizons (Figure 1). In summary, our main contributions are: developing a partially amortized planning approach for MBRL; demonstrating that high-level skills can be temporally composed using this scheme to condition low-level policies; and experimentally demonstrating the benefit of LSP on challenging locomotion tasks that require composing different behaviors, as well as the benefit in terms of transfer from one quadruped locomotion task to another with very little adaptation on the target task. 2 BACKGROUND. We discuss learning latent dynamics for MBRL and mutual information based skill discovery, which serve as the basic theoretical tools for our approach. 2.1 LEARNING LATENT DYNAMICS AND BEHAVIORS IN IMAGINATION. Latent dynamics models are special cases of world models used in MBRL that project observations into a latent representation amenable for planning (Hafner et al., 2019; 2018). This framework is general, as it can model both partially observed environments, where sensory inputs can be pixel observations, and fully observable environments, where sensory inputs can be proprioceptive state features. The latent dynamics models we consider in this work consist of four key components: a representation module p_θ(s_t | s_{t−1}, a_{t−1}, o_t) and an observation module q_θ(o_t | s_t) that encode observations and actions into continuous vector-valued latent states s_t; a latent forward dynamics module q_θ(s_t | s_{t−1}, a_{t−1}) that predicts future latent states given only the past states and actions; and a task reward module q_θ(r_t | s_t) that predicts the reward given the current latent state. To learn this model, the agent interacts with the environment and maximizes the following expectation under the dataset of environment interactions D = {(o_t, a_t, r_t)}:

$$\mathcal{J} \doteq \mathbb{E}_{\mathcal{D}}\Big(\sum_t \big(\mathcal{J}_O^t + \mathcal{J}_R^t + \mathcal{J}_D^t\big)\Big) + \text{const}, \qquad \mathcal{J}_O^t \doteq \ln q(o_t \mid s_t), \qquad \mathcal{J}_R^t \doteq \ln q(r_t \mid s_t),$$
$$\mathcal{J}_D^t \doteq -\beta\, \mathrm{KL}\big(p(s_t \mid s_{t-1}, a_{t-1}, o_t) \,\|\, q(s_t \mid s_{t-1}, a_{t-1})\big). \tag{1}$$
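A schematic sketch of the per-step terms in Eq. (1), assuming the decoder heads, posterior and prior are given as torch.distributions objects (module and argument names are our placeholders, not the authors' code):

```python
import torch
import torch.distributions as td

def world_model_loss(obs_dist, rew_dist, posterior, prior, obs, rew, beta=1.0):
    # One-step terms of Eq. (1): reconstruction, reward prediction,
    # and the KL regularizer between posterior and prior latents.
    j_o = obs_dist.log_prob(obs)                      # J_O = ln q(o_t | s_t)
    j_r = rew_dist.log_prob(rew)                      # J_R = ln q(r_t | s_t)
    j_d = -beta * td.kl_divergence(posterior, prior)  # J_D
    return -(j_o + j_r + j_d).mean()  # minimize the negative objective
```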
For optimizing behavior under this latent dynamics model, the agent rolls out trajectories in imagination and estimates the value V(·) of the imagined trajectories $\{s_\tau, a_\tau, r_\tau\}_{\tau=t}^{t+H}$ through TD(λ) estimates, as described by Sutton & Barto (2018); Hafner et al. (2019). The agent can either learn a fully amortized policy q_φ(a | s), as done in Dreamer, by backpropagating through the learned value network v_ψ(·), or plan online through CEM, for example as in PlaNet. 2.2 MUTUAL INFORMATION SKILL DISCOVERY. Some methods for skill discovery have adopted a probabilistic approach that uses the mutual information between skills and future states as an objective (Sharma et al., 2019). In this approach, skills are represented through a latent variable z upon which a low-level policy π(a | s, z) is conditioned. Given the current state s_0, skills are sampled from some selection distribution p(z | s_0). The skill-conditioned policy is executed under the environment dynamics p_d(s_{t+1} | s_t, a), resulting in a series of future states, abbreviated {s}. The mutual information is defined as

$$\mathrm{MI}(z, \{s\} \mid s_0) = H(z \mid s_0) - H(z \mid \{s\}, s_0) = H(\{s\} \mid s_0) - H(\{s\} \mid s_0, z).$$

It quantifies the reduction in uncertainty about the future states given the skill, and vice versa. By maximizing the mutual information with respect to the low-level policy, the skills are encouraged to produce discernible future states. 3 PARTIAL AMORTIZATION THROUGH HIERARCHY. Our aim is to learn behaviors suitable for solving complex control tasks and amenable to transfer to different tasks with minimal fine-tuning. To achieve this, we consider the setting of MBRL, where the agent builds up re-usable knowledge of the environment dynamics. For planning, we adopt a partial amortization strategy, such that some aspects of the behavior are re-used over the entire training experience, while other aspects are learned online. We achieve partial amortization by forming high-level latent plans and learning a low-level policy conditioned on the latent plan. The three different forms of amortization in planning are described visually through probabilistic graphical models in Figure 2 and Figure 3. We first describe the different components of our model, motivate the mutual information based auxiliary objective, and finally discuss the complete algorithm. World model. Our world model is a latent dynamics model consisting of the components described in Section 2. Low-level policy. The low-level policy q_φ(a_t | s_t, z) is used to decide which action to execute given the current latent state s_t and the currently active skill z. Similar to Dreamer (Hafner et al., 2019), we also train a value model v_ψ(s_t) to estimate the expected rewards the action model achieves from each state s_t. We estimate the value the same way as in equation 6 of Dreamer, balancing bias and variance. The action model is trained to maximize the estimate of the value, while the value model is trained to fit the estimate of the value, which changes as the action model is updated, as done in a typical actor-critic setup (Konda & Tsitsiklis, 2000). High-level skills. In our framework, high-level skills are continuous random variables that are held fixed for K steps. The high-level skills z are sampled from a skill selection distribution p(z_{1:⌈H/K⌉} | ζ) = N(µ, Σ), which is optimized for task performance through CEM. Here, H denotes the planning horizon.
For the sake of notational convenience we denote z_{1:⌈H/K⌉} as z. Let (j) denote the j-th CEM iteration. We first sample G skills $\{z^{(g)}\}_{g=1}^{G} \sim p(z \mid \zeta^{(j)})$ and execute G parallel imaginary rollouts of horizon H in the learned model with the skill-conditioned policy q_φ(a_t | s_t, z^{(g)}). Instead of evaluating rollouts based only on the sum of rewards, we utilize the value network and compute value estimates $\{V_g\}_{g=1}^{G}$. We sort $\{V_g\}_{g=1}^{G}$, choose the top M values, and use the corresponding skills to update the sampling distribution parameters as

$$\zeta^{(j+1)} = \big(\mu^{(j+1)}, \Sigma^{(j+1)}\big), \qquad \mu^{(j+1)} = \mathrm{Mean}\big(\{z^{(m)}\}_{m=1}^{M}\big), \qquad \Sigma^{(j+1)} = \mathrm{Variance}\big(\{z^{(m)}\}_{m=1}^{M}\big).$$
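The skill-space CEM loop just described can be sketched in a few lines. Here `rollout_value` is a placeholder for the imagined rollout with the skill-conditioned policy followed by the value estimate, and a diagonal Gaussian is assumed for p(z | ζ):

```python
import numpy as np

def plan_skills(rollout_value, skill_dim, iters=5, G=64, M=8):
    # CEM over the skill selection distribution p(z | zeta) = N(mu, diag(var)).
    mu, var = np.zeros(skill_dim), np.ones(skill_dim)
    for _ in range(iters):
        # Sample G skills and score each with an imagined rollout.
        z = mu + np.sqrt(var) * np.random.randn(G, skill_dim)
        values = np.array([rollout_value(z_g) for z_g in z])
        elites = z[np.argsort(values)[-M:]]  # top-M skills by value estimate
        mu, var = elites.mean(axis=0), elites.var(axis=0)  # refit the Gaussian
    return mu
```

Because the search happens in the low-dimensional skill space rather than the raw action space, the random-shooting step remains tractable even for long horizons.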
The paper proposes combining model-based RL with high-level skill learning and composition through hierarchical RL into a single reinforcement learning framework. More specifically, the proposed approach plans and composes skills in the low-dimensional, high-level representation, and learns low-level policies conditioned on the high-level skills. Only the low-level policies are executed in the environment to generate experience. A mutual information objective is used to learn low-level policies conditioned on high-level skills, and this was shown to improve sample efficiency, as the low-level policies do not learn to ignore the high-level skills they are conditioned on.
SP:496ef52f5094a12fe59e9966848b69b54c7763fd
Bag of Tricks for Adversarial Training
1 INTRODUCTION. Adversarial training (AT) has been one of the most effective defense strategies against adversarial attacks (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015). Based on primary AT frameworks like PGD-AT (Madry et al., 2018), many improvements have been proposed from different perspectives, demonstrating promising results (detailed in Sec. 2). However, recent benchmarks (Croce & Hein, 2020b; Chen & Gu, 2020) find that simply early stopping the training procedure of PGD-AT (Rice et al., 2020) can attain the gains from almost all the previously proposed improvements, including the state-of-the-art TRADES (Zhang et al., 2019b). This fact is somewhat striking, since TRADES also executes early stopping (one epoch after decaying the learning rate) in its code implementation. Besides, the reported robustness of PGD-AT in Rice et al. (2020) is much higher than in Madry et al. (2018), even without early stopping. This paradox motivates us to check the implementation details of these seminal works. We find that TRADES uses a weight decay of 2×10^−4, Gaussian PGD initialization δ_0 ∼ N(0, αI), and the eval mode of batch normalization (BN) when crafting adversarial examples, while Rice et al. (2020) use a weight decay of 5×10^−4, uniform PGD initialization δ_0 ∼ U(−ε, ε), and the train mode of BN to generate adversarial examples. In our experiments on CIFAR-10 (e.g., Table 8), these two slightly different settings can change the robust accuracy by ∼5%, which is significant according to the reported benchmarks. For a comprehensive study, we further investigate the implementation details of tens of papers working on AT methods, some of which are summarized in Table 1. We find that even when using the same model architectures, the basic hyperparameter settings (e.g., weight decay, learning rate schedule, etc.) used in these papers are highly inconsistent and customized, which could affect the model performance and may override the gains from the methods themselves. Under this situation, if we directly benchmark these methods using their released code or checkpoints, some actually effective improvements would be under-estimated due to improper hyperparameter settings. Our contributions. We evaluate the effects of a wide range of basic training tricks (e.g., warmup, early stopping, weight decay, batch size, BN mode, etc.) on adversarially trained models. Our empirical results suggest that improper training settings can largely degrade the model performance, while this degradation may be mistakenly ascribed to the methods themselves. We provide a baseline recipe for PGD-AT on CIFAR-10 as an example, and demonstrate the generality of the recipe by training other frameworks like TRADES. As seen in Table 16, the retrained TRADES achieves new state-of-the-art performance on the AutoAttack benchmark (Croce & Hein, 2020b). Although our empirical conclusions may not generalize to other datasets or tasks, we reveal that adversarially trained models can be sensitive to certain training settings, which are usually neglected in previous work. These results also encourage the community to re-implement the previously proposed defenses with fine-tuned training settings to better explore their potential. (Code is available at https://github.com/P2333/Bag-of-Tricks-for-AT.) 2 RELATED WORK.
In this section, we introduce related work on adversarial defenses and recent benchmarks. We detail the adversarial attacks in Appendix A.1. 2.1 ADVERSARIAL DEFENSES. To alleviate the adversarial vulnerability of deep learning models, many defense strategies have been proposed, but most of them can eventually be evaded by adaptive attacks (Carlini & Wagner, 2017b; Athalye et al., 2018). Other, more theoretically grounded routines include training provably robust networks (Dvijotham et al., 2018a;b; Hein & Andriushchenko, 2017; Wong & Kolter, 2018) and obtaining certified models via randomized smoothing (Cohen et al., 2019). While these methods are promising, they currently do not match the state-of-the-art robustness under empirical evaluations. The idea of adversarial training (AT) stems from the seminal work of Goodfellow et al. (2015), while other AT frameworks like PGD-AT (Madry et al., 2018) and TRADES (Zhang et al., 2019b) occupied the winner solutions in the adversarial competitions (Kurakin et al., 2018; Brendel et al., 2020). Based on these primary AT frameworks, many improvements have been proposed by encoding mechanisms inspired from other domains, including ensemble learning (Tramèr et al., 2018; Pang et al., 2019), metric learning (Mao et al., 2019; Li et al., 2019; Pang et al., 2020c), generative modeling (Jiang et al., 2018; Pang et al., 2018b; Wang & Yu, 2019; Deng et al., 2020), semi-supervised learning (Carmon et al., 2019; Alayrac et al., 2019; Zhai et al., 2019), and self-supervised learning (Hendrycks et al., 2019; Chen et al., 2020a;b; Naseer et al., 2020). On the other hand, due to the high computational cost of AT, many efforts are devoted to accelerating the training procedure via reusing computations (Shafahi et al., 2019b; Zhang et al., 2019a), adaptive adversarial steps (Wang et al., 2019; Zhang et al., 2020) or one-step training (Wong et al., 2020; Liu et al., 2020; Vivek B & Venkatesh Babu, 2020). Follow-up works try to resolve the side effects (e.g., catastrophic overfitting) caused by these fast AT methods (Andriushchenko & Flammarion, 2020; Li et al., 2020). 2.2 ADVERSARIAL BENCHMARKS. Due to the large number of proposed defenses, several benchmarks have been developed to rank the adversarial robustness of existing methods. Dong et al. (2020) perform large-scale experiments to generate robustness curves, which are used for evaluating typical defenses. Croce & Hein (2020b) propose AutoAttack, which is an ensemble of four selected attacks; they apply AutoAttack to tens of previous defenses and provide a comprehensive leaderboard. Chen & Gu (2020) propose the black-box RayS attack and establish a similar leaderboard for defenses. In this paper, we mainly apply the PGD attack and AutoAttack as two common ways to evaluate the models. Beyond adversarial robustness, there are other efforts that introduce augmented datasets for assessing robustness against general corruptions or perturbations. Mu & Gilmer (2019) introduce MNIST-C with a suite of 15 corruptions applied to the MNIST test set, while Hendrycks & Dietterich (2019) introduce ImageNet-C and ImageNet-P with common corruptions and perturbations on natural images. Evaluating robustness on these datasets can reflect the generality of the proposed defenses and avoid overfitting to certain attacking patterns (Engstrom et al.
, 2019; Tramèr & Boneh, 2019). 3 BAG OF TRICKS. Our overarching goal is to investigate how the usually overlooked implementation details affect the performance of adversarially trained models. Our experiments are done on CIFAR-10 (Krizhevsky & Hinton, 2009) under the ℓ∞ threat model with maximal perturbation ε = 8/255, without access to additional data. We evaluate the models under the 10-step PGD attack (PGD-10) (Madry et al., 2018) and AutoAttack (AA) (Croce & Hein, 2020b; https://github.com/fra31/auto-attack). For the PGD attack, we apply the untargeted mode using ground truth labels, a step size of 2/255, and 5 restarts for evaluation / no restart for training. For AutoAttack, we apply the standard version, with no restarts for AutoPGD and FAB, compared to 5 restarts for the plus version. We consider some basic training tricks and perform ablation studies on each of them, based on the default training setting described below. Default setting. Following Rice et al. (2020), in the default setting we apply the primary PGD-AT framework with the following hyperparameters: batch size 128; SGD momentum optimizer with an initial learning rate of 0.1; weight decay 5×10^−4; ReLU activation function and no label smoothing; train mode of batch normalization when crafting adversarial examples. All the models are trained for 110 epochs, with the learning rate decaying by a factor of 0.1 at epochs 100 and 105, respectively. We report the results on the checkpoint with the best PGD-10 accuracy. Note that our empirical observations and conclusions may not always generalize to other datasets or AT frameworks, but we emphasize the importance of using consistent implementation details (not only the same model architectures) to enable fair comparisons among different AT methods. 3.1 EARLY STOPPING AND WARMUP. Early stopping the training epoch. The trick of early stopping w.r.t. the training epoch was first applied in the implementation of TRADES (Zhang et al., 2019b), where the learning rate decays at the 75th epoch and the training is stopped at the 76th epoch. Later, Rice et al. (2020) provide a comprehensive study of the overfitting phenomenon in AT and advocate early stopping the training epoch as a general strategy for preventing adversarial overfitting, which can be triggered according to the PGD accuracy on a held-out validation set. Due to its effectiveness, we regard this trick as a default choice. Early stopping the adversarial intensity. Another level of early stopping happens on the adversarial intensity, e.g., early stopping the PGD steps when crafting adversarial examples for training. This trick was first applied by the runner-up of the defense track in the NeurIPS 2018 adversarial vision challenge (Brendel et al., 2020). Later efforts are devoted to formalizing this early stopping mechanism with different trigger rules (Wang et al., 2019; Zhang et al., 2020). Balaji et al. (2019) early stop the adversarial perturbation, which has a similar effect on the adversarial intensity. In the left part of Table 2, we evaluate the method proposed by Zhang et al. (2020) due to its simplicity. As seen, this kind of early stopping can improve the performance on clean data while keeping comparable accuracy under PGD-10; however, the performance under the stronger AutoAttack is degraded. Warmup w.r.t.
learning rate is a general trick for training deep learning models (Goodfellow et al., 2016). In the adversarial setting, Wong et al. (2020) show that the one-cycle learning rate schedule is one of the critical ingredients for the success of FastAT. Thus, we evaluate the effect of this trick for the piecewise learning rate schedule and the PGD-AT framework. We linearly increase the learning rate from zero to the preset value in the first 10 / 15 / 20 epochs. As shown in the middle part of Table 2, the effect of warming up the learning rate is marginal. Warmup w.r.t. adversarial intensity. In the AT procedure, warmup can also be executed w.r.t. the adversarial intensity. Cai et al. (2018) propose a curriculum AT process to gradually increase the adversarial intensity and monitor the overfitting trend. Qin et al. (2019) increase the maximal perturbation from zero to 8/255 in the first 15 epochs. In the right part of Table 2, we linearly increase the maximal perturbation in the first 10 / 15 / 20 epochs, while the effect is still limited. Figure 1: (a) Test accuracy w.r.t. different values of weight decay. The reported checkpoints correspond to the best PGD-10 accuracy (Rice et al., 2020). We test on two model architectures, and highlight (with red circles) the three most commonly used weight decays in previous work; (b) Curves of test accuracy w.r.t. training epochs, where the model is WRN-34-10. We set the weight decay to 1×10^−4, 2×10^−4, and 5×10^−4, respectively. We observe that a smaller weight decay lets the model learn faster but also makes it more prone to overfitting w.r.t. the robust accuracy. In Fig. 4, we decay the learning rate early, before the models overfit, but a weight decay of 5×10^−4 still achieves better robustness.
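For concreteness, the linear learning-rate warmup evaluated above can be expressed as a small schedule on top of the piecewise decay of the default setting. A sketch assuming the 110-epoch schedule with 10x drops at epochs 100 and 105:

```python
def learning_rate(epoch, base_lr=0.1, warmup_epochs=10):
    # Linear warmup from zero to the preset value, then the piecewise
    # decay of the default setting (10x drops at epochs 100 and 105).
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    if epoch < 100:
        return base_lr
    if epoch < 105:
        return base_lr * 0.1
    return base_lr * 0.01
```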
The paper provides an evaluation of different hyperparameter settings for adversarial training. Specifically, it evaluates combinations of warmup, early stopping, weight decay, batch size and other parameters on adversarially trained models. The paper states that its overarching goal is to "investigate how the implementation details affect the performance of adversarially trained models".
SP:0e32b047c35f57579f4eb935720e6a4a61c33116
VA-RED$^2$: Video Adaptive Redundancy Reduction
1 INTRODUCTION. Large, computationally expensive models based on 2D/3D convolutional neural networks (CNNs) are widely used in video understanding (Tran et al., 2015; Carreira & Zisserman, 2017; Tran et al., 2018). Thus, increasing computational efficiency is highly sought after (Feichtenhofer, 2020; Zhou et al., 2018c; Zolfaghari et al., 2018). However, most of these efficient approaches focus on architectural changes in order to maximize network capacity while maintaining a compact model (Zolfaghari et al., 2018; Feichtenhofer, 2020) or on improving the way that the network consumes temporal information (Feichtenhofer et al., 2018; Korbar et al., 2019). Despite promising results, it is well known that CNNs perform unnecessary computations at some levels of the network (Han et al., 2015a; Howard et al., 2017; Sandler et al., 2018; Feichtenhofer, 2020; Pan et al., 2018), especially video models, since the high appearance similarity between consecutive frames results in a large amount of redundancy. In this paper, we aim at dynamically reducing the internal computations of popular video CNN architectures. Our motivation comes from the existence of highly similar feature maps across both the time and channel dimensions in video models. Furthermore, this internal redundancy varies depending on the input: for instance, static videos will have more temporal redundancy, whereas videos depicting a single large moving object tend to produce a higher number of redundant feature maps. To reduce the varied redundancy across the channel and temporal dimensions, we introduce an input-dependent redundancy reduction framework called VA-RED2 (Video Adaptive REDundancy REDuction) for efficient video recognition (see Figure 1 for an illustrative example). Our method is model-agnostic and hence can be applied to any state-of-the-art video recognition network. The key mechanism that VA-RED2 uses to increase efficiency is to replace the full computation of some redundant feature maps with cheap reconstruction operations. Specifically, our framework avoids computing all the feature maps: instead, we choose to only calculate the non-redundant part of the feature maps and reconstruct the rest from them using cheap linear operations. In addition, VA-RED2 makes decisions on a per-input basis: our framework learns an input-dependent policy that defines a "full computation ratio" for each layer of a 2D/3D network. This ratio determines the amount of features that will be fully computed at that layer, versus the features that will be reconstructed from the non-redundant feature maps. Importantly, we apply this strategy in both the time and channel dimensions. We show that for both traditional video models, such as I3D (Carreira & Zisserman, 2017) and R(2+1)D (Tran et al., 2018), and more advanced models, such as X3D (Feichtenhofer, 2020), this method significantly reduces the total floating point operations (FLOPs) on common video datasets without accuracy degradation. The main contributions of our work include: (1) A novel input-dependent adaptive framework for efficient video recognition, VA-RED2, that automatically decides which feature maps to compute per input instance. Our approach is in contrast to most current video processing networks, where feature redundancy across both time and channel dimensions is not directly mitigated.
(2) An adaptive policy jointly learned with the network weights in a fully differentiable way with a shared-weight mechanism, which allows us to make decisions on how many feature maps to compute. Our approach is model-agnostic and can be applied to any backbone to reduce feature redundancy in both the time and channel domains. (3) Striking results of VA-RED2 over baselines, with a 30% reduction in computation in comparison to R(2+1)D (Tran et al., 2018), 40% over I3D-InceptionV2 (Carreira & Zisserman, 2017), and about 20% over the recently proposed X3D-M (Feichtenhofer, 2020), without any performance loss, for the video action recognition task. The superiority of our approach is extensively tested on three video recognition datasets (Mini-Kinetics-200, Kinetics-400 (Carreira & Zisserman, 2017), and Moments-In-Time (Monfort et al., 2019)) and one spatio-temporal action localization dataset (J-HMDB-21 (Jhuang et al., 2013)). (4) A generalization of our framework to video action recognition, spatio-temporal localization, and semantic segmentation tasks, achieving promising results while offering a significant reduction in computation over competing methods. 2 RELATED WORK. Efficiency in Video Understanding Models. Video understanding has made significant progress in recent years, mainly due to the adoption of convolutional neural networks, in the form of 2D CNNs (Karpathy et al., 2014; Simonyan & Zisserman, 2014; Chéron et al., 2015; Feichtenhofer et al., 2017; Gkioxari & Malik, 2015; Wang et al., 2016; Zhou et al., 2018a; Lin et al., 2019; Fan et al., 2019) or 3D CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018; Tran et al., 2018). Despite promising results on common benchmarks, there is significant interest in developing more efficient techniques and smaller models with reasonable performance. Previous works have shown reductions in computational complexity by using hybrid 2D-3D architectures (Xie et al., 2018; Zhou et al., 2018c; Zolfaghari et al., 2018), group convolution (Tran et al., 2019) or by selecting salient clips (Korbar et al., 2019). Feichtenhofer et al. (2018) propose a dedicated low-framerate pathway. Expansion of 2D architectures through a stepwise expansion approach over key variables, such as temporal duration, frame rate, spatial resolution, and network width, is proposed in (Feichtenhofer, 2020). Diba et al. (2019) learn the motion dynamics of videos with a self-supervised task for video understanding. Fan et al. (2020) incorporate an efficient learnable 3D-shift module into a 3D video network. Wang et al. (2020) devise a correlation module to learn correlation along the temporal dimension. Li et al. (2020) encode clip-level ordered temporal information with a CIDC network. While these approaches bring considerable efficiency improvements, none of them dynamically calibrates the required feature map computation on a per-input basis. Our framework achieves substantial improvements in average efficiency by avoiding redundant feature map computation depending on the input. Adaptive Inference. Many adaptive computation methods have been proposed with the goal of improving efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018; Graves, 2016; Meng et al., 2021).
Several works add decision branches to different layers of CNNs to learn whether to exit the network early for faster inference (Yu et al., 2018; Figurnov et al., 2017; McGill & Perona, 2017; Teerapittayanon et al., 2016). Wang et al. (2018) propose to skip convolutional blocks on a per-input basis using reinforcement learning and supervised pre-training. Veit & Belongie (2018) propose a block skipping method controlled by samples from a Gumbel softmax, while Wu et al. (2018) develop a reinforcement learning approach to achieve this goal. Adaptive computation time for recurrent neural networks is presented in (Graves, 2016). SpotTune (Guo et al., 2019) learns to adaptively route information through fine-tuned or pre-trained layers. A few works have recently been proposed for selecting salient frames conditioned on the input (Yeung et al., 2016; Wu et al., 2019; Korbar et al., 2019; Gao et al., 2019) while recognizing actions in long untrimmed videos. Different from adaptive data sampling (Yeung et al., 2016; Wu et al., 2019; Korbar et al., 2019; Gao et al., 2019), our goal in this paper is to remove feature map redundancy by deciding how many features need to be computed in the temporal and channel dimensions on a per-input basis, for efficient video recognition. AR-Net (Meng et al., 2020) learns to adaptively choose the resolution of input frames, with several individual backbone networks, for video inference. In contrast, our method focuses on reducing the redundancy in both the temporal and channel dimensions and is applicable to both 3D and 2D models, while AR-Net applies only to 2D models and focuses on spatial resolution. Moreover, our method integrates all the inference routes into a single model of almost the same size as the original base model; thus our model is significantly smaller than AR-Net in terms of the number of model parameters. Neural Architecture Search. Our network learns the best internal redundancy reduction scheme, which is similar to previous work on automatically searching architectures (Elsken et al., 2018). Liu et al. (2018) formulate the architecture search task in a differentiable manner; Cai et al. (2018) directly learn architectures for a target task and hardware; Tan & Le (2019) design a compound scaling strategy that searches through several key dimensions for CNNs (depth, width, resolution). Finally, Tan et al. (2019) incorporate latency to find efficient networks adapted for mobile use. In contrast, our approach learns a policy that chooses between full and reduced convolutions at inference time, effectively switching between various discovered subnetworks to minimize redundant computation and deliver high accuracy. 3 VIDEO ADAPTIVE REDUNDANCY REDUCTION. Our main goal is to automatically decide which feature maps to compute for each input video in order to classify it correctly with the minimum computation. The intuition behind our proposed method is that there are many similar feature maps along the temporal and channel dimensions. For each video instance, we estimate the ratio of feature maps that need to be fully computed along the temporal dimension and the channel dimension. Then we reconstruct the remaining feature maps from those pre-computed ones using cheap linear operations. Approach Overview.
Without loss of generality, we start from a 3D convolutional network G, and denote its l-th 3D convolution layer as f_l, with input and output feature maps X_l and Y_l, respectively. For each 3D convolution layer, we use a very lightweight policy layer p_l, called the soft modulation gate, to decide the ratio of feature maps along the temporal and channel dimensions that needs to be computed. As shown in Figure 2, for temporal-wise dynamic inference, we reduce the computation of the 3D convolution layer by dynamically scaling the temporal stride of the 3D filter with a factor $R = 2^{p_l(X_l)[0]}$. The shape of the output $Y'_l$ thus becomes $C_{out} \times T_o/R \times H_o \times W_o$. To keep the same output shape, we reconstruct the remaining features from $Y'_l$ as

$$Y_l[j + iR] = \begin{cases} \Phi^t_{i,j}\big(Y'_l[i]\big) & \text{if } j \in \{1, \ldots, R-1\} \\ Y'_l[i] & \text{if } j = 0 \end{cases}, \qquad i \in \{0, 1, \ldots, T_o/R - 1\}, \tag{1}$$

where $Y_l[j + iR]$ denotes the $(j + iR)$-th feature map of $Y_l$ along the temporal dimension, $Y'_l[i]$ denotes the $i$-th feature map of $Y'_l$, and $\Phi^t_{i,j}$ is the cheap linear operation along the temporal dimension. The total computational cost of this process can be written as

$$C(f^t_l) = \frac{1}{R} \cdot C(f_l) + \sum_{i,j} C(\Phi^t_{i,j}) \approx \frac{1}{R} \cdot C(f_l), \tag{2}$$

where the function $C(\cdot)$ returns the computational cost of a specific operation, and $f^t_l$ represents our dynamic convolution process along the temporal dimension. Different from temporal-wise dynamic inference, we reduce the channel-wise computation by dynamically controlling the number of output channels. We scale the output channel number with a factor $r = (\tfrac{1}{2})^{p_l(X_l)[1]}$. In this case, the shape of the output $Y'_l$ is $rC_{out} \times T_o \times H_o \times W_o$. As before, we reconstruct the remaining features via cheap linear operations, which can be formulated as $Y_l = [Y'_l, \Phi^c(Y'_l)]$, where $\Phi^c(Y'_l) \in \mathbb{R}^{(1-r)C_{out} \times T_o \times H_o \times W_o}$ represents the cheaply generated feature maps along the channel dimension, and $Y_l \in \mathbb{R}^{C_{out} \times T_o \times H_o \times W_o}$ is the output of the channel-wise dynamic inference. The total computational cost of joint temporal-wise and channel-wise dynamic inference is

$$C(f^{t,c}_l) \approx \frac{r}{R} \cdot C(f_l), \tag{3}$$

where $f^{t,c}_l$ is the joint process of temporal-wise and channel-wise dynamic inference. Soft Modulation Gate for Differentiable Optimization. We adopt an extremely lightweight policy layer p_l, called the soft modulation gate, for each convolution layer f_l to modulate the ratio of features that need to be computed. Specifically, the soft modulation gate takes the input feature maps X_l as input and learns two probability vectors $V^l_t \in \mathbb{R}^{S_t}$ and $V^l_c \in \mathbb{R}^{S_c}$, where $S_t$ and $S_c$ are the temporal and channel search space sizes, respectively. $V^l_t$ and $V^l_c$ are learned by

$$[V^l_t, V^l_c] = p_l(X_l) = \phi\Big(F\big(\omega_{p,2},\, \delta\big(N\big(F(\omega_{p,1}, G(X_l))\big)\big)\big) + \beta^l_p\Big), \tag{4}$$

where $F(\cdot,\cdot)$ denotes a fully-connected layer, $N$ is batch normalization, $\delta(\cdot)$ is the $\tanh(\cdot)$ function, $G$ is a global pooling operation whose output shape is $C_{in} \cdot T \times 1 \times 1$, $\phi(\cdot)$ is the output activation function (here we simply use $\max(\tanh(\cdot), 0)$, whose output range is $[0, 1)$), and $\omega_{p,1} \in \mathbb{R}^{(S_t+S_c) \times D_h}$, $\omega_{p,2} \in \mathbb{R}^{D_h \times C_{in} \cdot T}$ are the weights of the corresponding layers, with $D_h$ the hidden dimension. $V^l_t$ and $V^l_c$ are then used to modulate the ratio of feature maps to be computed in the temporal-wise and channel-wise dynamic convolutions.
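A schematic PyTorch sketch of the temporal-wise branch in Eqs. (1)-(2). We simplify deliberately: a single shared pointwise convolution stands in for the per-position cheap operations Φ^t_{i,j}, the stride factor R is assumed given by the gate, and T is assumed divisible by R. This is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalDynamicConv(nn.Module):
    # Full 3D conv on every R-th frame; skipped frames are reconstructed
    # from their computed neighbors with a cheap pointwise conv.
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv3d(c_in, c_out, k, padding=k // 2)
        self.cheap = nn.Conv3d(c_out, c_out, kernel_size=1)  # stands in for Phi

    def forward(self, x, R):
        # x: [B, C_in, T, H, W] with T divisible by R.
        y_sub = self.conv(x[:, :, ::R])            # [B, C_out, T/R, H, W]
        y = y_sub.repeat_interleave(R, dim=2)      # copy neighbors forward
        mask = torch.zeros_like(y)
        mask[:, :, ::R] = 1.0                      # positions computed in full
        return mask * y + (1.0 - mask) * self.cheap(y)
```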
During training, we obtain the final output of the dynamic convolution as a weighted sum over the feature maps computed at the different fully-computed ratios:
$$Y^l_c = \sum_{i=1}^{S_c} V^l_c[i] \cdot f^c_l\big(X_l,\, r = (\tfrac{1}{2})^{i-1}\big), \qquad Y_l = \sum_{j=1}^{S_t} V^l_t[j] \cdot f^t_l\big(Y^l_c,\, R = 2^{j-1}\big), \qquad (5)$$
where $f^c_l(\cdot, r)$ is the channel-wise dynamic convolution with channel scaling factor $r$, and $f^t_l(\cdot, R)$ is the temporal-wise dynamic convolution with temporal stride scaling factor $R$. During the inference phase, only the dynamic convolutions whose weights are nonzero are computed. Shared-weight Training and Inference. Many works on adaptive computation and neural architecture search suffer from very heavy computational cost and memory usage during training due to the large search space. In our case, under a naive implementation, the training cost and parameter size would grow linearly with the search-space size. To train our model efficiently, we utilize a weight-sharing mechanism to reduce the computational cost and training memory. Specifically, we first compute all possibly needed features using a single big kernel; then, for each dynamic convolution with a different scaling factor, we sample its corresponding ratio of necessary features and reconstruct the rest with cheap operations to obtain the final output. Through this, we keep the computational cost constant, invariant to the search-space size. More details are included in Section B of the Appendix. Efficiency Loss. To encourage our network to output a computationally efficient subgraph, we introduce an efficiency loss $L_e$ during training:
$$L_e = \Big(\mu_0 \sum_{l=1}^{L} \frac{C(f_l)}{\sum_{k=1}^{L} C(f_k)} \cdot \frac{r^s_l}{R^s_l}\Big)^2, \qquad \mu_0 = \begin{cases} 1 & \text{if correct} \\ 0 & \text{otherwise} \end{cases}, \qquad (6)$$
where $r^s_l$ is the channel scaling factor of the largest filter in the series of channel-wise dynamic convolutions and $R^s_l$ is the stride scaling factor of the largest filter among the temporal-wise dynamic convolutions. Overall, the loss function of our whole framework is $L = L_a + \lambda_e L_e$, where $L_a$ is the accuracy loss of the whole network and $\lambda_e$ weights the efficiency loss, balancing prediction accuracy against computational cost.
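As a sanity check on Eq. (6), here is a small sketch of the efficiency loss; the tensor shapes and the way the largest selected filters are passed in are our own assumptions for illustration.

```python
import torch

def efficiency_loss(costs, r_largest, R_largest, correct):
    """Sketch of the efficiency loss L_e of Eq. (6).

    costs:     per-layer costs C(f_l), shape (L,)
    r_largest: channel scaling factors r_l^s per sample/layer, shape (B, L)
    R_largest: temporal stride scaling factors R_l^s per sample/layer, shape (B, L)
    correct:   bool per sample, shape (B,); mu_0 gates the penalty so that
               only correctly classified samples are pushed to compute less.
    Returns a per-sample loss; take .mean() and add lambda_e * L_e to L_a.
    """
    weights = costs / costs.sum()                       # C(f_l) / sum_k C(f_k)
    per_sample = (weights * r_largest / R_largest).sum(dim=-1)
    return (correct.float() * per_sample) ** 2          # squared, as in Eq. (6)
```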
The paper presents a framework to reduce internal redundancy in video recognition models. Given the input frames, the framework predicts two scaling factors that reduce computation along the temporal and channel dimensions; the remaining feature maps are reconstructed by cheap operations. The authors show that the framework achieves favorable results on several benchmarks.
Parameter-Based Value Functions
1 INTRODUCTION. Value functions are central to Reinforcement Learning (RL). For a given policy, they estimate the value of being in a specific state (or of choosing a particular action in a given state). Many RL breakthroughs were achieved through improved estimates of such values, which can be used to find optimal policies (Tesauro, 1995; Mnih et al., 2015). However, learning value functions of arbitrary policies without observing their behavior in the environment is not trivial. Such off-policy learning requires correcting the mismatch between the distribution of updates induced by the behavioral policy and the one we want to learn. Common techniques include Importance Sampling (IS) (Hesterberg, 1988) and deterministic policy gradient methods (DPG) (Silver et al., 2014), which adopt the actor-critic architecture (Sutton, 1984; Konda & Tsitsiklis, 2001; Peters & Schaal, 2008). Unfortunately, these approaches have limitations. IS suffers from large variance (Cortes et al., 2010; Metelli et al., 2018; Wang et al., 2016), while traditional off-policy actor-critic methods introduce off-policy objectives whose gradients are difficult to follow, since they involve the gradient of the action-value function with respect to the policy parameters, $\nabla_\theta Q^{\pi_\theta}(s, a)$ (Degris et al., 2012; Silver et al., 2014). This term is usually ignored, resulting in biased gradients for the off-policy objective. Furthermore, off-policy actor-critic algorithms learn value functions of a single target policy; when value functions are updated to track the learned policy, the information about old policies is lost. We address the problem of generalization across many value functions in the off-policy setting by introducing a class of parameter-based value functions (PBVFs) defined for any policy. PBVFs are value functions that take the policy parameters as an additional input: the parameter-based start-state-value function PSSVF $V(\theta)$, the parameter-based state-value function PSVF $V(s, \theta)$, and the parameter-based action-value function PAVF $Q(s, a, \theta)$. PBVFs can be learned using Monte Carlo (MC) (Metropolis & Ulam, 1949) or Temporal Difference (TD) (Sutton, 1988) methods. The PAVF $Q(s, a, \theta)$ leads to novel stochastic and deterministic off-policy policy gradient theorems and, unlike previous approaches, can directly compute $\nabla_\theta Q^{\pi_\theta}(s, a)$. Based on these results, we develop off-policy actor-critic methods and compare our algorithms to two strong baselines, ARS and DDPG (Mania et al., 2018; Lillicrap et al., 2015), outperforming them in some environments. We make theoretical, algorithmic, and experimental contributions: Section 2 introduces the standard MDP setting; Section 3 formally presents PBVFs and derives algorithms for $V(\theta)$, $V(s, \theta)$, and $Q(s, a, \theta)$; Section 4 describes the experimental evaluation using shallow and deep policies; Sections 5 and 6 discuss related and future work. Proofs and derivations can be found in Appendix A.2. 2 BACKGROUND. We consider a Markov Decision Process (MDP) (Stratonovich, 1960; Puterman, 2014) $M = (S, A, P, R, \gamma, \mu_0)$, where at each step an agent observes a state $s \in S$, chooses an action $a \in A$, transitions into state $s'$ with probability $P(s'|s, a)$, and receives a reward $R(s, a)$. The agent starts from an initial state chosen with probability $\mu_0(s)$. It is represented by a parametrized stochastic policy $\pi_\theta : S \to \Delta(A)$, which gives the probability of performing action $a$ in state $s$; $\Theta$ is the space of policy parameters.
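For readers who prefer code, a minimal PyTorch sketch of such a parametrized stochastic policy follows: a diagonal Gaussian over continuous actions, where $\theta$ is simply the set of network weights. The architecture and sizes are our own illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Minimal parametrized stochastic policy pi_theta : S -> Delta(A)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        # State-independent log standard deviation for each action dimension.
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        mean = self.body(state)
        return torch.distributions.Normal(mean, self.log_std.exp())

# Sampling an action and its log-probability (used in the stochastic
# policy gradient theorems below):
# dist = policy(state); a = dist.sample(); logp = dist.log_prob(a).sum(-1)
```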
The policy is deterministic if for each state $s$ there exists an action $a$ such that $\pi_\theta(a|s) = 1$. The return $R_t$ is defined as the cumulative discounted reward from time step $t$: $R_t = \sum_{k=0}^{T-t-1} \gamma^k R(s_{t+k+1}, a_{t+k+1})$, where $T$ denotes the time horizon and $\gamma$ a real-valued discount factor. The performance of the agent is measured by the cumulative discounted expected reward (expected return), defined as $J(\pi_\theta) = \mathbb{E}_{\pi_\theta}[R_0]$. Given a policy $\pi_\theta$, the state-value function $V^{\pi_\theta}(s) = \mathbb{E}_{\pi_\theta}[R_t \mid s_t = s]$ is defined as the expected return for being in state $s$ and following policy $\pi_\theta$. By integrating over the state space $S$, we can express the maximization of the expected cumulative reward in terms of the state-value function: $J(\pi_\theta) = \int_S \mu_0(s) V^{\pi_\theta}(s)\, ds$. The action-value function $Q^{\pi_\theta}(s, a)$, defined as the expected return for performing action $a$ in state $s$ and then following policy $\pi_\theta$, is $Q^{\pi_\theta}(s, a) = \mathbb{E}_{\pi_\theta}[R_t \mid s_t = s, a_t = a]$, and it is related to the state-value function by $V^{\pi_\theta}(s) = \int_A \pi_\theta(a|s) Q^{\pi_\theta}(s, a)\, da$. We define $d^{\pi_\theta}(s')$ as the discounted weighting of states encountered starting at $s_0 \sim \mu_0(s)$ and following the policy $\pi_\theta$: $d^{\pi_\theta}(s') = \int_S \sum_{t=1}^{\infty} \gamma^{t-1} \mu_0(s) P(s \to s', t, \pi_\theta)\, ds$, where $P(s \to s', t, \pi_\theta)$ is the probability of transitioning to $s'$ after $t$ time steps, starting from $s$ and following policy $\pi_\theta$. Sutton et al. (1999) showed that, for stochastic policies, the gradient of $J(\pi_\theta)$ does not involve the derivative of $d^{\pi_\theta}(s)$ and can be expressed in a simple form:
$$\nabla_\theta J(\pi_\theta) = \int_S d^{\pi_\theta}(s) \int_A \nabla_\theta \pi_\theta(a|s)\, Q^{\pi_\theta}(s, a)\, da\, ds. \qquad (1)$$
Similarly, for deterministic policies, Silver et al. (2014) obtained the following:
$$\nabla_\theta J(\pi_\theta) = \int_S d^{\pi_\theta}(s)\, \nabla_\theta \pi_\theta(s)\, \nabla_a Q^{\pi_\theta}(s, a)\big|_{a=\pi_\theta(s)}\, ds. \qquad (2)$$
Off-policy RL. In off-policy policy optimization, we seek the parameters of the policy maximizing a performance index $J_b(\pi_\theta)$ using data collected from a behavioral policy $\pi_b$. Here the objective function $J_b(\pi_\theta)$ is typically modified to be the value function of the target policy, integrated over $d^{\pi_b}_\infty(s) = \lim_{t\to\infty} P(s_t = s \mid s_0, \pi_b)$, the limiting distribution of states under $\pi_b$ (assuming it exists) (Degris et al., 2012; Imani et al., 2018; Wang et al., 2016). Throughout the paper we assume that the support of $d^{\pi_b}_\infty$ includes the support of $\mu_0$, so that the optimal solution for $J_b$ is also optimal for $J$. Formally, we want to find:
$$J_b(\pi_{\theta^*}) = \max_\theta \int_S d^{\pi_b}_\infty(s) V^{\pi_\theta}(s)\, ds = \max_\theta \int_S d^{\pi_b}_\infty(s) \int_A \pi_\theta(a|s) Q^{\pi_\theta}(s, a)\, da\, ds. \qquad (3)$$
Unfortunately, in the off-policy setting the states are obtained from $d^{\pi_b}_\infty$ and not from $d^{\pi_\theta}_\infty$, hence the gradients suffer from a distribution shift (Liu et al., 2019; Nachum et al., 2019). Moreover, since we have no access to $d^{\pi_\theta}_\infty$, a term in the policy gradient theorem corresponding to the gradient of the action-value function with respect to the policy parameters needs to be estimated. This term is usually ignored in traditional off-policy policy gradient theorems¹. In particular, when the policy is stochastic, Degris et al. (2012) showed that:
$$\nabla_\theta J_b(\pi_\theta) = \int_S d^{\pi_b}_\infty(s) \int_A \pi_b(a|s) \frac{\pi_\theta(a|s)}{\pi_b(a|s)} \big(Q^{\pi_\theta}(s, a) \nabla_\theta \log \pi_\theta(a|s) + \nabla_\theta Q^{\pi_\theta}(s, a)\big)\, da\, ds \qquad (4)$$
$$\approx \int_S d^{\pi_b}_\infty(s) \int_A \pi_b(a|s) \frac{\pi_\theta(a|s)}{\pi_b(a|s)}\, Q^{\pi_\theta}(s, a) \nabla_\theta \log \pi_\theta(a|s)\, da\, ds. \qquad (5)$$
Analogously, Silver et al.
(2014) provided the following approximation for deterministic policies²:
$$\nabla_\theta J_b(\pi_\theta) = \int_S d^{\pi_b}_\infty(s) \big(\nabla_\theta \pi_\theta(s) \nabla_a Q^{\pi_\theta}(s, a)\big|_{a=\pi_\theta(s)} + \nabla_\theta Q^{\pi_\theta}(s, a)\big|_{a=\pi_\theta(s)}\big)\, ds \qquad (6)$$
$$\approx \int_S d^{\pi_b}_\infty(s)\, \nabla_\theta \pi_\theta(s) \nabla_a Q^{\pi_\theta}(s, a)\big|_{a=\pi_\theta(s)}\, ds. \qquad (7)$$
¹With tabular policies, dropping this term still results in a convergent algorithm (Degris et al., 2012). ²In the original formulation of Silver et al. (2014), $d^{\pi_b}_\infty(s)$ is replaced by $d^{\pi_b}(s)$. Although the term $\nabla_\theta Q^{\pi_\theta}(s, a)$ is dropped, there might be advantages in using the approximate gradient of $J_b$ in order to find the maximum of the original RL objective $J$. Indeed, if we were on-policy, the approximated off-policy policy gradients of Degris et al. (2012) and Silver et al. (2014) would revert to the on-policy policy gradients, while an exact gradient for $J_b$ would necessarily introduce a bias. However, when we are off-policy, it is not clear whether this would be better than using the exact gradient of $J_b$ in order to maximize $J$. In this work, we assume that $J_b$ can be considered a good objective for off-policy RL, and we derive an exact gradient for it. 3 PARAMETER-BASED VALUE FUNCTIONS. In this section, we introduce our parameter-based value functions, the PSSVF $V(\theta)$, PSVF $V(s, \theta)$, and PAVF $Q(s, a, \theta)$, and their corresponding learning algorithms. First, we augment the state- and action-value functions, allowing them to also receive as input the weights of a parametric policy. The parameter-based state-value function (PSVF) $V(s, \theta) = \mathbb{E}[R_t \mid s_t = s, \theta]$ is defined as the expected return for being in state $s$ and following the policy parameterized by $\theta$. Similarly, the parameter-based action-value function (PAVF) $Q(s, a, \theta) = \mathbb{E}[R_t \mid s_t = s, a_t = a, \theta]$ is defined as the expected return for being in state $s$, taking action $a$, and following the policy parameterized by $\theta$. Using PBVFs, the RL objective becomes $J(\pi_\theta) = \int_S \mu_0(s) V(s, \theta)\, ds$. Maximizing this objective leads to on-policy policy gradient theorems analogous to the traditional ones (Sutton et al., 1999; Silver et al., 2014): Theorem 3.1. Let $\pi_\theta$ be stochastic. For any Markov Decision Process, the following holds:
$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s \sim d^{\pi_\theta}(s),\, a \sim \pi_\theta(\cdot|s)}\big[Q(s, a, \theta) \nabla_\theta \log \pi_\theta(a|s)\big]. \qquad (8)$$
Theorem 3.2. Let $\pi_\theta$ be deterministic. Under standard regularity assumptions (Silver et al., 2014), for any Markov Decision Process, the following holds:
$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s \sim d^{\pi_\theta}(s)}\big[\nabla_a Q(s, a, \theta)\big|_{a=\pi_\theta(s)} \nabla_\theta \pi_\theta(s)\big]. \qquad (9)$$
Parameter-based value functions also allow us to learn a function of the policy parameters that directly approximates $J(\pi_\theta)$. In particular, the parameter-based start-state-value function (PSSVF) is defined as:
$$V(\theta) := \mathbb{E}_{s \sim \mu_0(s)}[V(s, \theta)] = \int_S \mu_0(s) V(s, \theta)\, ds = J(\pi_\theta). \qquad (10)$$
Off-policy RL. In the off-policy setting, the objective to be maximized becomes:
$$J_b(\pi_{\theta^*}) = \max_\theta \int_S d^{\pi_b}_\infty(s) V(s, \theta)\, ds = \max_\theta \int_S \int_A d^{\pi_b}_\infty(s)\, \pi_\theta(a|s)\, Q(s, a, \theta)\, da\, ds. \qquad (11)$$
By taking the gradient of the performance $J_b$ with respect to the policy parameters $\theta$, we obtain novel policy gradient theorems. Since $\theta$ is continuous, we need to use function approximators $V_w(\theta) \approx V(\theta)$, $V_w(s, \theta) \approx V(s, \theta)$, and $Q_w(s, a, \theta) \approx Q(s, a, \theta)$. Compatible function approximations can be derived to ensure that the approximated value function follows the true gradient.
Like in previous approaches, this would result in linearity conditions. However, here we consider nonlinear function approximation and leave the convergence analysis of linear PBVFs as future work. In episodic settings, we do not have access to $d^{\pi_b}_\infty$, so in the algorithm derivations and in the experiments we approximate it by sampling trajectories generated by the behavioral policy. In all cases, the policy improvement step can be very expensive, due to the computation of the arg max over a continuous space $\Theta$. Actor-critic methods can be derived to solve this optimization problem, where the critic (a PBVF) is learned using TD or MC methods, while the actor is updated following the gradient with respect to the critic. Although our algorithms for the PSSVF and PSVF can be used with both stochastic and deterministic policies, removing the stochasticity of the action-selection process might facilitate learning the value function. All our algorithms make use of a replay buffer. 3.1 PARAMETER-BASED START-STATE-VALUE FUNCTION V(θ). We first derive the PSSVF $V(\theta)$. Given the original performance index $J$ and taking the gradient with respect to $\theta$, we obtain:
$$\nabla_\theta J(\pi_\theta) = \int_S \mu_0(s) \nabla_\theta V(s, \theta)\, ds = \mathbb{E}_{s \sim \mu_0(s)}[\nabla_\theta V(s, \theta)] = \nabla_\theta V(\theta). \qquad (12)$$
In Algorithm 1, the critic $V_w(\theta)$ is learned using MC to estimate the value of any policy $\theta$. The actor is then updated following the direction of improvement suggested by the critic. Since the main application of the PSSVF is in episodic tasks³, we optimize for the undiscounted objective.
Algorithm 1: Actor-critic with Monte Carlo prediction for V(θ)
Input: differentiable critic $V_w : \Theta \to \mathbb{R}$ with parameters $w$; deterministic or stochastic actor $\pi_\theta$ with parameters $\theta$; empty replay buffer $D$.
Output: learned $V_w \approx V(\theta)$ for all $\theta$, learned $\pi_\theta \approx \pi_{\theta^*}$.
Initialize critic and actor weights $w$, $\theta$.
repeat:
  Generate an episode $s_0, a_0, r_1, s_1, a_1, r_2, \dots, s_{T-1}, a_{T-1}, r_T$ with policy $\pi_\theta$.
  Compute the return $r = \sum_{k=1}^{T} r_k$ and store $(\theta, r)$ in the replay buffer $D$.
  for many steps: sample a batch $B = \{(r, \theta)\}$ from $D$ and update the critic by stochastic gradient descent on $\nabla_w \mathbb{E}_{(r,\theta) \in B}[r - V_w(\theta)]^2$.
  for many steps: update the actor by gradient ascent on $\nabla_\theta V_w(\theta)$.
until convergence
3.2 PARAMETER-BASED STATE-VALUE FUNCTION V(s, θ). Learning the value function using MC approaches can be difficult due to the high variance of the estimate. Furthermore, episode-based algorithms like Algorithm 1 are unable to credit good actions in bad episodes. Gradient methods based on TD updates provide a biased estimate of $V(s, \theta)$ with much lower variance and can credit actions at each time step. Taking the gradient of $J_b(\pi_\theta)$ in the PSVF formulation⁴, we obtain:
$$\nabla_\theta J_b(\pi_\theta) = \int_S d^{\pi_b}_\infty(s) \nabla_\theta V(s, \theta)\, ds = \mathbb{E}_{s \sim d^{\pi_b}_\infty(s)}[\nabla_\theta V(s, \theta)]. \qquad (13)$$
Algorithm 2 (Appendix) uses the actor-critic architecture, where the critic is learned via TD⁵. 3.3 PARAMETER-BASED ACTION-VALUE FUNCTION Q(s, a, θ). The introduction of the PAVF $Q(s, a, \theta)$ allows us to derive new policy gradient theorems for stochastic and deterministic policies. Stochastic policy gradients. We want to use data collected from some stochastic behavioral policy $\pi_b$ in order to learn the action-value of a target policy $\pi_\theta$. Traditional off-policy actor-critic algorithms only approximate the gradient of $J_b$, since they do not estimate the gradient of the action-value function with respect to the policy parameters $\nabla_\theta Q^{\pi_\theta}(s, a)$ (Degris et al.
, 2012; Silver et al., 2014). With PBVFs, we can directly compute this contribution to the gradient. This yields an exact policy gradient theorem for $J_b$: Theorem 3.3. For any Markov Decision Process, the following holds:
$$\nabla_\theta J_b(\pi_\theta) = \mathbb{E}_{s \sim d^{\pi_b}_\infty(s),\, a \sim \pi_b(\cdot|s)}\Big[\frac{\pi_\theta(a|s)}{\pi_b(a|s)} \big(Q(s, a, \theta) \nabla_\theta \log \pi_\theta(a|s) + \nabla_\theta Q(s, a, \theta)\big)\Big]. \qquad (14)$$
Algorithm 3 (Appendix) uses an actor-critic architecture and can be seen as an extension of Off-PAC (Degris et al., 2012) to the PAVF. ³Alternatives include the regenerative method for MC estimation (Rubinstein & Kroese, 2016). ⁴Compared to standard methods based on the state-value function, we can directly optimize the policy following the performance gradient of the PSVF, obtaining a policy improvement step in a model-free way. ⁵Note that the differentiability of the policy $\pi_\theta$ is never required in the PSSVF and PSVF. Deterministic policy gradients. Estimating $Q(s, a, \theta)$ is in general a difficult problem due to the stochasticity of the policy. Deterministic policies of the form $\pi : S \to A$ can help improve the efficiency of learning value functions, since the expectation over the action space is no longer required. Using PBVFs, we can write the performance of a policy $\pi_\theta$ as:
$$J_b(\pi_\theta) = \int_S d^{\pi_b}_\infty(s) V(s, \theta)\, ds = \int_S d^{\pi_b}_\infty(s) Q(s, \pi_\theta(s), \theta)\, ds. \qquad (15)$$
Taking the gradient with respect to $\theta$, we obtain a deterministic policy gradient theorem: Theorem 3.4. Under standard regularity assumptions (Silver et al., 2014), for any Markov Decision Process, the following holds:
$$\nabla_\theta J_b(\pi_\theta) = \mathbb{E}_{s \sim d^{\pi_b}_\infty(s)}\big[\nabla_a Q(s, a, \theta)\big|_{a=\pi_\theta(s)} \nabla_\theta \pi_\theta(s) + \nabla_\theta Q(s, a, \theta)\big|_{a=\pi_\theta(s)}\big]. \qquad (16)$$
Algorithm 4 (Appendix) uses an actor-critic architecture and can be seen as an extension of DPG (Silver et al., 2014) to the PAVF. Despite the novel formulation of Algorithm 3, we decided to avoid the stochasticity of the policy and to implement and analyze only the deterministic PAVF.
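As an illustration of Algorithm 1 (the MC actor-critic for the PSSVF), here is a minimal PyTorch sketch. The flattened-parameter critic, the fixed numbers of update steps, and the hypothetical `run_episode` helper (which returns an episode's undiscounted return) are our own simplifications, not the paper's implementation.

```python
import torch

def pssvf_step(actor, critic, critic_opt, actor_opt, buffer, run_episode,
               n_critic_steps=10, n_actor_steps=10, batch_size=16):
    """One outer iteration of Algorithm 1 (MC actor-critic for V(theta))."""
    # 1) Run an episode with the current policy; store (theta, return).
    theta = torch.cat([p.detach().flatten() for p in actor.parameters()])
    buffer.append((theta, run_episode(actor)))   # return = sum_k r_k (undiscounted)

    # 2) Critic: regress V_w(theta) onto observed returns.
    for _ in range(n_critic_steps):
        idx = torch.randint(len(buffer), (min(batch_size, len(buffer)),)).tolist()
        thetas = torch.stack([buffer[i][0] for i in idx])
        rets = torch.tensor([float(buffer[i][1]) for i in idx])
        loss = ((critic(thetas).squeeze(-1) - rets) ** 2).mean()
        critic_opt.zero_grad(); loss.backward(); critic_opt.step()

    # 3) Actor: gradient ascent on V_w(theta) through the critic; only the
    #    actor's parameters are updated here (critic grads are re-zeroed above).
    for _ in range(n_actor_steps):
        flat = torch.cat([p.flatten() for p in actor.parameters()])
        actor_opt.zero_grad()
        (-critic(flat.unsqueeze(0))).sum().backward()
        actor_opt.step()
```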
On page 2, in the background section: the discounted state distribution as written is not a distribution (it doesn't sum to 1). In order to define this $d^{\pi_\theta}$ properly, you can multiply everything by $1-\gamma$. The interpretation is that you "reset" to your initial distribution $\mu_0$ with probability $1 - \gamma$ at every step, or continue in the discounted stationary distribution with probability $\gamma$.
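Spelled out, the suggested normalization would read as follows (reconstructed from the paper's definition, so treat the exact form as the reviewer's sketch rather than the authors'):

```latex
% Normalized discounted state distribution: the (1 - gamma) factor makes
% d^{pi_theta} integrate to 1 over states.
d^{\pi_\theta}(s') \;=\; (1-\gamma)\int_S \sum_{t=1}^{\infty} \gamma^{t-1}
  \mu_0(s)\, P(s \to s', t, \pi_\theta)\, ds,
\qquad
\int_S d^{\pi_\theta}(s')\, ds' \;=\; 1 .
```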
VideoGen: Generative Modeling of Videos using VQ-VAE and Transformers
1 INTRODUCTION . Deep generative models of multiple types ( Goodfellow et al. , 2014 ; van den Oord et al. , 2016b ; Dinh et al. , 2016 ) have seen incredible progress in the last few years on multiple modalities including natural images ( van den Oord et al. , 2016c ; Zhang et al. , 2019 ; Brock et al. , 2018 ; Kingma & Dhariwal , 2018 ; Ho et al. , 2019a ; Karras et al. , 2017 ; 2019 ; Van Den Oord et al. , 2017 ; Razavi et al. , 2019 ; Vahdat & Kautz , 2020 ; Ho et al. , 2020 ; Chen et al. , 2020 ) , audio waveforms conditioned on language features ( van den Oord et al. , 2016a ; Oord et al. , 2017 ; Bińkowski et al. , 2019 ) , natural language in the form of text ( Radford et al. , 2019 ; Brown et al. , 2020 ) , and music generation ( Dhariwal et al. , 2020 ) . These results have been made possible thanks to fundamental advances in deep learning architectures ( He et al. , 2015 ; van den Oord et al. , 2016b ; c ; Vaswani et al. , 2017 ; Zhang et al. , 2019 ; Menick & Kalchbrenner , 2018 ) as well as the availability of compute resources ( Jouppi et al. , 2017 ; Amodei & Hernandez , 2018 ) that are more powerful than a few years ago . However , one notable modality that has not seen the same level of progress in generative modeling is high fidelity natural videos . The complexity of natural videos requires modeling correlations across both space and time with much higher input dimensions , thereby presenting a natural next challenge for current deep generative models . The complexity of the problem also demands more compute resources which can be considered as one important reason for the slow progress in generative modeling of videos . It is useful to build generative models of videos , both conditional and unconditional , as it implicitly solves the problem of video prediction and forecasting . Video prediction ( Kalchbrenner et al. , 2017 ; Sønderby et al. , 2020 ) can be seen as learning a generative model of future frames conditioned on the past frames . Architectures developed for video generation can be useful in forecasting applications for autonomous driving , such as predicting the future in more semantic and dense abstractions like segmentation masks ( Luc et al. , 2017 ) . Finally , building generative models of the world around us is considered as one way to measure understanding of physical common sense ( Lake et al. , 2015 ) . Multiple classes of generative models have been shown to produce strikingly good samples such as autoregressive models ( van den Oord et al. , 2016b ; c ; Menick & Kalchbrenner , 2018 ; Radford et al. , 2019 ; Chen et al. , 2020 ) , generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ; Radford et al. , 2015 ) , variational autoencoders ( VAEs ) ( Kingma & Welling , 2013 ; Kingma et al. , 2016 ; Vahdat & Kautz , 2020 ) , Flows ( Dinh et al. , 2014 ; 2016 ; Kingma & Dhariwal , 2018 ) , vector quantized VAE ( VQ-VAE ) ( Van Den Oord et al. , 2017 ; Razavi et al. , 2019 ) , and lately diffusion and score matching models ( Sohl-Dickstein et al. , 2015 ; Song & Ermon , 2019 ; Ho et al. , 2020 ) . These different generative model families have their tradeoffs : sampling speed , sample diversity , sample quality , ease of training , compute requirements , and ease of evaluation . To build a generative model for videos , we first make a choice between likelihood-based and adversarial models . 
Likelihood-based models are convenient to train since the objective is well understood, easy to optimize across a range of batch sizes, and easy to evaluate. Given that videos already present a hard modeling challenge due to the nature of the data, we believe likelihood-based models present fewer difficulties in optimization and evaluation, allowing us to focus on architecture modeling. Among likelihood-based models, autoregressive models that work on discrete data in particular have shown great success and have well-established training recipes and modeling architectures. Second, we consider the following question: is it better to perform autoregressive modeling in a downsampled latent space without spatio-temporal redundancies, compared to modeling at the atomic level of all pixels across space and time? Below, we present our reasons for choosing the former. Natural images and videos contain a lot of spatial and temporal redundancy, which is why we use image compression tools such as JPEG (Wallace, 1992) and video codecs such as MPEG (Le Gall, 1991) every day. These redundancies can be removed by learning a denoised, downsampled encoding of the high-resolution inputs. For example, 4x downsampling across the spatial and temporal dimensions results in a 64x downsampled resolution, so that the computation of powerful deep generative models is spent on these fewer, more useful bits. As shown in VQ-VAE (Van Den Oord et al., 2017), even a lossy decoder can transform the latents into sufficiently realistic samples. Furthermore, modeling in a latent space downsampled across space and time, instead of in pixel space, improves sampling speed and compute requirements due to the reduced dimensionality.¹ The above line of reasoning leads us to our proposed model: VideoGen, a simple video generation architecture that is a minimal adaptation of the VQ-VAE and GPT architectures for videos. VideoGen employs 3D convolutions and transposed convolutions (Tran et al., 2015), along with axial attention (Clark et al., 2019; Ho et al., 2019b), for the autoencoder in the VQ-VAE in order to learn a downsampled set of discrete latents. These latents are then autoregressively generated by a GPT-like architecture (Radford et al., 2019; Child et al., 2019; Chen et al., 2020) and decoded to videos of the original resolution using the decoder of the VQ-VAE. Our results are highlighted below:
1. On the widely benchmarked BAIR Robot Pushing dataset (Ebert et al., 2017), VideoGen can generate realistic samples that are competitive with existing methods such as DVD-GAN (Clark et al., 2019), achieving an FVD of 112 when benchmarked with real samples, and an FVD* (Razavi et al., 2019) of 94 when benchmarked with reconstructions.
2. VideoGen can easily be adapted for action-conditional video generation. We present qualitative results on the BAIR Robot Pushing dataset and the Vizdoom simulator (Kempka et al., 2016).
3. We present ablations showing that axial attention blocks in the VQ-VAE and spatio-temporal position encodings in the Transformer are helpful design choices in VideoGen.
4. Our results are achievable with a maximum of 8 Quadro RTX 6000 GPUs (24 GB memory), significantly lower than the resources used in prior methods such as DVD-GAN (Clark et al., 2019) (32 to 512 16GB TPU (Jouppi et al., 2017) cores).
¹Modeling long sequences is a challenge for transformer-based architectures due to the quadratic memory complexity of the attention matrix (Child et al., 2019). 2 BACKGROUND. 2.1 VQ-VAE. The Vector Quantized Variational Autoencoder (VQ-VAE) (Van Den Oord et al., 2017) is a model that learns to compress high-dimensional data points into a discretized latent space and reconstruct them. The encoder $E(x) \to h$ first encodes $x$ into a series of latent vectors $h$, which are then discretized by a nearest-neighbor lookup in a codebook of embeddings $C = \{e_i\}_{i=1}^{K}$ of size $K$. The decoder $D(e) \to \hat{x}$ then learns to reconstruct $x$ from the quantized encodings. The VQ-VAE is trained using the following objective:
$$L = \underbrace{\|x - D(e)\|_2^2}_{L_{\text{recon}}} + \underbrace{\|\mathrm{sg}[E(x)] - e\|_2^2}_{L_{\text{codebook}}} + \beta \underbrace{\|\mathrm{sg}[e] - E(x)\|_2^2}_{L_{\text{commit}}}$$
where sg denotes the stop-gradient operator. The objective consists of a reconstruction loss $L_{\text{recon}}$, a codebook loss $L_{\text{codebook}}$, and a commitment loss $L_{\text{commit}}$. The reconstruction loss encourages the VQ-VAE to learn good representations that accurately reconstruct data samples. The codebook loss brings codebook embeddings closer to their corresponding encoder outputs, and the commitment loss, weighted by a hyperparameter $\beta$, prevents the encoder outputs from fluctuating between different code vectors. An alternative to the codebook loss described in Van Den Oord et al. (2017) is an EMA update, which empirically shows faster training and convergence; in this paper, we use the EMA update when training the VQ-VAE. 2.2 GPT. GPT and Image-GPT (Chen et al., 2020) are a class of autoregressive transformers that have shown tremendous success in modeling discrete data such as natural language and high-dimensional images. These models factorize the data distribution as $p(x) = \prod_{i=1}^{d} p(x_i \mid x_{<i})$ through masked self-attention mechanisms and are optimized through maximum likelihood. As in Vaswani et al. (2017), the architectures follow the standard design of multi-head self-attention blocks followed by pointwise MLP feedforward blocks. 3 VIDEOGEN. Our primary contribution is VideoGen, a new method to model complex video data in a computationally efficient manner. An overview of our method is shown in Fig 2. Learning Latent Codes. In order to learn a set of discrete latent codes, we first train a VQ-VAE on the video data. The encoder architecture consists of a series of 3D convolutions that downsample over space-time, followed by attention residual blocks. Each attention residual block is designed as shown in Fig 3, where we use LayerNorm (Ba et al., 2016) and axial attention layers following Ho et al. (2019b). The decoder architecture is the reverse of the encoder, with attention residual blocks followed by a series of 3D transposed convolutions that upsample over space-time. The position encodings are learned spatio-temporal embeddings shared between all axial attention layers in the encoder and decoder. Learning a Prior. The second stage of our method is to learn a prior over the latents by training a transformer model over the VQ-VAE latents. We follow the iGPT architecture, with dropout added after the feedforward and attention block layers for regularization. Although the VQ-VAE is trained unconditionally, we can generate conditional samples by training a conditional prior.
We use two types of conditioning : • Concatenation : We concatenate a conditional vector before every feedforward block in the transformer . This conditioning method is primarily used for frame conditioning , where the conditioned frame is encoded into a conditioning vector by a ResNet ( He et al. , 2016 ) backbone and then concatenated . • Conditional Norms : Similar to conditioning methods used in GANs , we parameterize the gain and bias in the transformer Layer Normalization ( Ba et al. , 2016 ) layers as affine functions of the conditional vector . This conditioning method is used for action conditioning .
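Returning to the VQ-VAE objective of Section 2.1, here is a minimal PyTorch sketch of the quantization step and loss terms. It omits the EMA codebook update the paper actually uses, and the function name and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def vq_loss(z_e, codebook, beta=0.25):
    """Minimal VQ-VAE quantization + auxiliary losses (Section 2.1).

    z_e:      encoder outputs E(x), shape (N, D)
    codebook: embeddings {e_i}, shape (K, D)
    Returns the quantized codes (with straight-through gradients) and the
    codebook + commitment losses; L_recon is computed outside on D(e).
    """
    # Nearest-neighbor lookup in the codebook.
    dists = torch.cdist(z_e, codebook)                 # (N, K)
    codes = dists.argmin(dim=1)
    e = codebook[codes]                                # quantized vectors

    codebook_loss = F.mse_loss(e, z_e.detach())        # ||sg[E(x)] - e||^2
    commit_loss = beta * F.mse_loss(z_e, e.detach())   # beta ||sg[e] - E(x)||^2

    # Straight-through estimator: copy gradients from decoder input to encoder.
    e_st = z_e + (e - z_e).detach()
    return e_st, codebook_loss + commit_loss

# Hypothetical usage:
# e, aux = vq_loss(encoder(x), codebook)
# loss = F.mse_loss(decoder(e), x) + aux   # L_recon + L_codebook + beta L_commit
```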
This paper proposes a generative model for synthesizing videos using VQ-VAEs. The scheme works in a latent space of video-sequence embeddings learned by the VQ-VAE. For inference, an autoregressive transformer prior over video sequences is learned; sampling from it and passing the result to the VQ-VAE decoder generates unconditional (or conditional) video samples. To learn the video embeddings, the paper uses a 3D convolutional network, with an extra dimension for time.
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
1 INTRODUCTION . Due to the enormous data size and non-convexity , stochastic optimization algorithms have become widely used in training deep neural networks . In addition to Stochastic Gradient Descent ( SGD ) ( Robbins & Monro , 1951 ) , many variations such as Adagrad ( Duchi et al. , 2011 ) and Adam ( Kingma & Ba , 2014 ) have been proposed . Unlike classical , hyperparameter free optimizers such as gradient descent and Newton ’ s method1 , stochastic optimizers often hold multiple hyperparameters including learning rate and momentum coefficients . Those hyperparameters are critical not only to the speed , but also to the final performance , and are often hard to tune . It is thus non-trivial to benchmark and compare optimizers in deep neural network training . And a benchmarking mechanism that focuses on the peak performance could lead to a false sense of improvement when developing new optimizers without considering tuning efforts . In this paper , we aim to rethink the role of hyperparameter tuning in benchmarking optimizers and develop new benchmarking protocols to reflect their performance in practical tasks better . We then benchmark seven recently proposed and widely used optimizers and study their performance on a wide range of tasks . In the following , we will first briefly review the two existing benchmarking protocols , discuss their pros and cons , and then introduce our contributions . Benchmarking performance under the best hyperparameters A majority of previous benchmarks and comparisons on optimizers are based on the best hyperparameters . Wilson et al . ( 2017 ) ; Shah et al . ( 2018 ) made a comparison of SGD-based methods against adaptive ones under their best hyperparameter configurations . They found that SGD can outperform adaptive methods on several datasets under careful tuning . Most of the benchmarking frameworks for ML training also assume knowing the best hyperparameters for optimizers ( Schneider et al. , 2019 ; Coleman et al. , 2017 ; Zhu et al. , 2018 ) . Also , the popular MLPerf benchmark evaluated the performance of optimizers under the best hyperparameter . It showed that ImageNet and BERT could be trained in 1 minute using the combination of good optimizers , good hyperparameters , and thousands of accelerators . 1The step sizes of gradient descent and Newton ’ s method can be automatically adjusted by a line search procedure ( Nocedal & Wright , 2006 ) . Despite each optimizer ’ s peak performance being evaluated , benchmarking under the best hyperparameters makes the comparison between optimizers unreliable and fails to reflect their practical performance . First , the assumption of knowing the best hyperparameter is unrealistic . In practice , it requires a lot of tuning efforts to find the best hyperparameter , and the tuning efficiency varies greatly for different optimizers . It is also tricky to define the “ best hyperparameter ” , which depends on the hyperparameter searching range and grids . Further , since many of these optimizers are sensitive to hyperparameters , some improvements reported for new optimizers may come from insufficient tuning for previous work . Benchmarking performance with random hyperparameter search It has been pointed out in several papers that tuning hyperparameter needs to be considered in evaluating optimizers ( Schneider et al. , 2019 ; Asi & Duchi , 2019 ) , but having a formal evaluation protocol on this topic is nontrivial . Only recently , two papers Choi et al . ( 2019 ) and Sivaprasad et al . 
(2020) take hyperparameter tuning time into account when comparing SGD with Adam/Adagrad. However, their comparisons among optimizers are based on random hyperparameter search. We argue that such comparisons could over-emphasize the role of hyperparameter tuning, leading to a pessimistic and impractical benchmarking of optimizers, for the following reasons. First, in a random-search comparison, each bad hyperparameter configuration has to run fully (e.g., 200 epochs). In practice, a user with a limited time budget can always stop the program early for bad hyperparameters: for instance, if the learning rate for SGD is too large, a user can easily observe that SGD diverges within a few iterations and directly stop the current job. The random-search hypothesis therefore over-emphasizes the role of hyperparameter tuning and does not align with a real user's practical efficiency. Second, the performance of the best hyperparameters is crucial for many applications. For example, in many real-world applications we need to re-train the model every day or every week with newly added data, so the best hyperparameters selected at the beginning might benefit all these re-train tasks rather than searching from scratch. In addition, because random search is expensive, random-search-based evaluation often focuses on the low-accuracy region², while in practice we care about the performance of reaching reasonably good accuracy. Our contributions. Given that hyperparameter tuning is either under-emphasized (assuming the best hyperparameters) or over-emphasized (assuming random search) in existing benchmarking protocols and comparisons, we develop new evaluation protocols that better reflect real use cases. Our evaluation framework includes two protocols. First, to evaluate the end-to-end training efficiency for a user training the best model from scratch, we develop an efficient evaluation protocol that compares the accuracy obtained under various time budgets, including the hyperparameter tuning time. Instead of random search, we adopt the Hyperband algorithm (Li et al., 2017) for hyperparameter tuning, since it can stop early for bad configurations and better reflects the real running time required by a user. Further, we also propose to evaluate data-addition training efficiency, where a user re-trains the model with newly added training data, with knowledge of the best hyperparameters tuned on the previous training set. We also conduct human studies on how machine learning researchers tune optimizer hyperparameters and how that aligns with our proposed protocols. Based on the proposed evaluation protocols, we study how much progress recently proposed algorithms have made compared with SGD or Adam. Note that most recently proposed optimizers have been shown to outperform SGD and Adam under the best hyperparameters on particular tasks, but it is not clear whether the improvements remain significant when hyperparameter tuning is considered, and across various tasks. To this end, we conduct comprehensive experiments comparing state-of-the-art training algorithms, including SGD (Robbins & Monro, 1951), Adam (Kingma & Ba, 2014), RAdam (Liu et al., 2019), Yogi (Zaheer et al., 2018), LARS (You et al., 2017), LAMB (You et al., 2019), and Lookahead (Zhang et al.
, 2019), on a variety of training tasks including image classification, generative adversarial networks (GANs), sentence classification (BERT fine-tuning), reinforcement learning, and graph neural network training. Our main conclusions are: 1) on CIFAR-10 and CIFAR-100, all the optimizers, including SGD, are competitive; 2) adaptive methods are generally better on more complex tasks (NLP, GCN, RL); 3) there is no clear winner among adaptive methods: although RAdam is more stable than Adam across tasks, Adam is still a very competitive baseline even compared with recently proposed methods. ²For instance, Sivaprasad et al. (2020) only reach < 50% accuracy in their CIFAR-100 comparisons. 2 RELATED WORK. Optimizers. The properties of deep learning make it natural to apply stochastic first-order methods, such as Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Severe issues such as zig-zag training trajectories and a uniform learning rate have been exposed, and researchers have drawn extensive attention to modifying SGD for improvement. Along this line of work, tremendous progress has been made, including SGDM (Qian, 1999), Adagrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014). These methods utilize momentum to stabilize and speed up training. In particular, Adam is regarded as the default algorithm due to its outstanding compatibility. Variants such as Amsgrad (Reddi et al., 2019), Adabound (Luo et al., 2019), Yogi (Zaheer et al., 2018), and RAdam (Liu et al., 2019) have then been proposed to resolve different drawbacks of Adam. Meanwhile, the requirement of large-batch training has inspired the development of LARS (You et al., 2017) and LAMB (You et al., 2019). Moreover, Zhang et al. (2019) put forward a framework called Lookahead to boost optimization performance by iteratively updating two sets of weights. Hyperparameter tuning methods. Random search and grid search (Bergstra & Bengio, 2012) are basic hyperparameter tuning methods in the literature. However, the inefficiency of these methods has stimulated the development of more advanced search strategies. Bayesian optimization methods, including Bergstra et al. (2011) and Hutter et al. (2011), accelerate random search by fitting a black-box function from hyperparameters to the expected objective in order to adaptively guide the search. Parallel to this line of work, Hyperband (Li et al., 2017) focuses on reducing the evaluation cost of each configuration and early-terminates relatively worse trials. Falkner et al. (2018) propose BOHB to combine the benefits of both Bayesian optimization and Hyperband. All these methods still require huge computational resources. A recent work (Metz et al., 2020) tries to obtain a list of potential hyperparameters by meta-learning from thousands of representative tasks. We strike a balance between effectiveness and computational cost and leverage Hyperband in our evaluation protocol to compare a wider range of optimizers. 3 PROPOSED EVALUATION PROTOCOLS. In this section, we introduce the proposed evaluation framework for optimizers.
We consider two evaluation protocols, each corresponding to an important training scenario: • Scenario I (End-to-end training): the general training scenario, where a user is given an unfamiliar optimizer and task, and the goal is to achieve the best validation performance after several trials and errors. In this case, the evaluation needs to include hyperparameter tuning time. We develop an efficiency evaluation protocol to compare various optimizers in terms of CPE (defined below) and peak performance. • Scenario II (Data-addition training): another useful scenario encountered in many applications, where the same model needs to be retrained regularly after collecting fresh data. In this case, a natural solution is to reuse the previously optimal hyperparameters and retrain the model. However, since the distribution has shifted, the result depends on the optimizer's sensitivity to that shift. We describe the detailed evaluation protocol for each setting in the following subsections. 3.1 END-TO-END TRAINING EVALUATION PROTOCOL. Before introducing our evaluation protocol for Scenario I, we first formally define the concept of an optimizer and its hyperparameters. Definition 1. An optimizer is employed to solve a minimization problem $\min_\theta L(\theta)$ and can be defined by a tuple $o = (U, \Omega) \in O$, where $O$ contains all types of optimizers, $U$ is a specific update rule, and $\Omega = (\omega_1, \dots, \omega_N) \in \mathbb{R}^N$ is a vector of $N$ hyperparameters whose search space is denoted by $F$. Given an initial parameter value $\theta_0$, together with a trajectory of the optimization procedure $H_t = \{\theta_s, L(\theta_s), \nabla L(\theta_s)\}$, the optimizer updates $\theta$ by $\theta_{t+1} = U(H_t, \Omega)$. We aim to evaluate the end-to-end time for a user to get the best model, including the hyperparameter tuning time. A recent work (Sivaprasad et al., 2020) assumes that a user conducts random search to find the best hyperparameter setting. Still, we argue that the random search procedure over-emphasizes the importance of hyperparameters when tuning is considered: it assumes a user never stops training even after observing divergence or bad results in the initial training phase, which is unrealistic. Figure 1 illustrates why random search might not lead to a fair comparison of optimizers. In Figure 1, we are given two optimizers, A and B, and their corresponding loss w.r.t. hyperparameters. According to Sivaprasad et al. (2020), optimizer B is considered better than optimizer A under a constrained budget, since most regions of the hyperparameter space of A underperform B. For instance, suppose we randomly sample the same hyperparameter settings for A and B; the final configuration ω*r(B) found under this strategy can have a lower expected loss than that of ω*r(A), as shown in Figure 1a. However, there exists a more practical search strategy that invalidates this statement under the assumption of a limited search budget: a user can terminate a configuration trial early when it is trapped in bad results or diverging. Hence, we can observe in Figure 1b that for optimizer A, this strategy early-stops many configurations and only allows a limited number of trials to explore to a deeper stage; the bad hyperparameters therefore do not affect the overall efficiency of optimizer A too much. In contrast, for optimizer B, the performances of different hyperparameters are relatively satisfactory and hard to distinguish, resulting in similarly long termination times for each trial.
Therefore, it may be easier for a practical search strategy p to find the best configuration ω*p(A) of optimizer A than ω*p(B), given the same constrained budget. This example suggests that random search may over-emphasize parameter sensitivity when benchmarking optimizers. To better reflect a practical hyperparameter tuning scenario, our evaluation assumes a user applies Hyperband (Li et al., 2017), a simple but effective hyperparameter tuning scheme, to get the best model. Hyperband formulates hyperparameter optimization as a unique bandit problem; it accelerates random search through adaptive resource allocation and early stopping, as demonstrated in Figure 1b. Compared with more complicated counterparts such as BOHB (Falkner et al., 2018), Hyperband requires fewer computing resources and performs similarly within a constrained budget. The algorithm is presented in Appendix A. Despite such hyperparameter tuning algorithms, human tuning by experts is still regarded as the most effective. To verify that Hyperband is effective and even competitive with humans, we conduct a human study as follows: for image classification on CIFAR-10, given 10 learning-rate configurations of SGD on the grid [1.0 × 10⁻⁸, 1.0 × 10⁻⁷, 1.0 × 10⁻⁶, ..., 10], participants are requested to search for the best one at their discretion; they can stop or pause a trial at any time and continue evaluating new configurations until they feel the best performance has been reached. 10 participants are sampled randomly from Ph.D. students with computer science backgrounds. We collect their tuning trajectories and average them as the human performance, which is considered "optimal" in this human study. In Figure 2, we plot hyperparameter tuning curves for humans, Hyperband, random search, random search with the early stopping (ES) strategy of Sivaprasad et al. (2020), and Hyperband with ES. We find that Hyperband matches human behavior better, while random search tends to get trapped in suboptimal configurations, although random search with early stopping mitigates this issue to some extent. This finding shows the advantage of Hyperband over random search regardless of early stopping, and justifies the use of Hyperband in optimizer benchmarking. More details of this human study can be found in Appendix B. With Hyperband incorporated in end-to-end training, we assume each configuration is run sequentially and record the best performance obtained by time step t as Pt. Specifically, Pt represents the evaluation metric for each task, e.g., accuracy for image classification and return for reinforcement learning. {Pt}Tt=1 forms a trajectory for plotting learning curves on the test set, as in Figure 3. Although it is intuitive to compare optimizers from such figures, summarizing a learning curve into a single scalar value can be more insightful for evaluation. Thus, as shown in Eq. 1, we use the λ-tunability defined in Sivaprasad et al. (2020) to further measure the performance of optimizers:
$$\lambda\text{-tunability} = \sum\nolimits_{t=1}^{T} \lambda_t \cdot P_t, \quad \text{where } \sum\nolimits_t \lambda_t = 1 \text{ and } \forall t,\ \lambda_t > 0. \qquad (1)$$
One intuitive choice is to set $\lambda_t = \mathbb{1}_{t=T}$ to determine which optimizer reaches the best model performance at the end of the whole training procedure. However, merely considering peak performance does not provide good guidance on the choice of optimizers.
In practice, we tend to take into account the complete trajectory and put more emphasis on the early stage. Thus, instead of the extreme assignment $\lambda_t = \mathbb{1}_{t=T}$, we employ the Cumulative Performance-Early weighting scheme, $\lambda_t \propto (T - t)$, to compute λ-tunability; the value obtained is termed CPE for simplicity. We present our evaluation protocol in Algorithm 1: end-to-end training with hyperparameter optimization is conducted for each optimizer on the given task, and the trajectory {Pt}Tt=1 is recorded to compute the peak performance as well as the CPE value. Note that the procedure is repeated M times to obtain a reliable result; we use M = 3 in all experiments. More details on the time cost and acceleration of the algorithm can be found in Appendix E.
Algorithm 1: End-to-End Efficiency Evaluation Protocol
Input: a set of optimizers O = {o : o = (U, Ω)}, task a ∈ A, feasible search space F
1: for o ∈ O do
2:   for i = 1 to M do
3:     Conduct hyperparameter search in F with the optimizer o using Hyperband on a
4:     Record the performance trajectory {Pt}Tt=1 explored by Hyperband
5:     Calculate the peak performance and CPE by Eq. 1
6:   end for
7:   Average peak and CPE values over the M repetitions for the optimizer o
8: end for
9: Evaluate optimizers according to their peak and CPE values
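For concreteness, here is a small sketch of the CPE computation; we read the weights as λt ∝ (T − t) for t = 1, ..., T (so the final step receives weight zero), which is our reading of the scheme above.

```python
import numpy as np

def cpe(trajectory):
    """lambda-tunability of Eq. 1 with Cumulative Performance-Early weights.

    trajectory: best performance P_t observed by each time step t, shape (T,).
    Weights lambda_t are proportional to (T - t), normalized to sum to 1.
    """
    P = np.asarray(trajectory, dtype=float)
    T = len(P)
    lam = T - np.arange(1, T + 1, dtype=float)   # (T - t) for t = 1, ..., T
    lam /= lam.sum()                             # normalize the weights
    return float((lam * P).sum())

# Peak performance and CPE summarize the same trajectory differently:
# peak = trajectory[-1]; early_weighted = cpe(trajectory)
```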
This paper studies the evaluation of optimizer performance for neural networks. It argues that existing evaluation procedures either over-emphasize the finding of optimal hyperparameters or under-evaluate an algorithm's performance by randomly sampling hyperparameters. The paper's primary objective is to propose an evaluation procedure that better aligns with a practitioner's goal than existing procedures. The proposed procedure evaluates optimization algorithms by using the Hyperband hyperparameter optimization algorithm to tune hyperparameters and then scoring each algorithm with a weighted combination of validation performance over regularly sampled training intervals. The aggregate performances of algorithms are then ranked using performance profiles.
Pointwise Binary Classification with Pairwise Confidence Comparisons
1 INTRODUCTION . Traditional supervised learning techniques have achieved great advances , while they are demanding for precisely labeled data . In many real-world scenarios , it may be too difficult to collect such data . To alleviate this issue , a large number of weakly supervised learning problems ( Zhou , 2018 ) have been extensively studied , including semi-supervised learning ( Zhu & Goldberg , 2009 ; Niu et al. , 2013 ; Sakai et al. , 2018 ) , multi-instance learning ( Zhou et al. , 2009 ; Sun et al. , 2016 ; Zhang & Zhou , 2017 ) , noisy-label learning ( Han et al. , 2018 ; Xia et al. , 2019 ; Wei et al. , 2020 ) , partial-label learning ( Zhang et al. , 2017 ; Feng et al. , 2020b ; Lv et al. , 2020 ) , complementary-label learning ( Ishida et al. , 2017 ; Yu et al. , 2018 ; Ishida et al. , 2019 ; Feng et al. , 2020a ) , positive-unlabeled classification ( Gong et al. , 2019 ) , positive-confidence classification ( Ishida et al. , 2018 ) , similarunlabeled classification ( Bao et al. , 2018 ) , unlabeled-unlabeled classification ( Lu et al. , 2019 ; 2020 ) , and triplet classification ( Cui et al. , 2020 ) . This paper considers another novel weakly supervised learning setting called pairwise comparison ( Pcomp ) classification , where we aim to perform pointwise binary classification with only pairwise comparison data , instead of pointwise labeled data . A pairwise comparison ( x , x′ ) represents that the instance x has a larger confidence of belonging to the positive class than the instance x′ . Such weak supervision ( pairwise confidence comparison ) could be much easier for people to collect than full supervision ( pointwise label ) in practice , especially for applications on sensitive or private matters . For example , it may be difficult to collect sensitive or private data with pointwise labels , as asking for the true labels could be prohibited or illegal . In this case , it could be easier for people to collect other weak supervision like the comparison information between two examples . It is also advantageous to consider pairwise confidence comparisons in pointwise binary classification with class overlapping , where the labeling task becomes difficult , and even experienced labelers may provide wrong pointwise labels . Let us denote the labeling standard of a labeler as p̃ ( y|x ) and assume that an instance x1 is more positive than another instance x2 . Facing the difficult labeling task , different labelers may hold different labeling standards , p̃ ( y = +1|x1 ) > p̃ ( y = +1|x2 ) > 1/2 , p̃ ( y = +1|x1 ) > 1/2 > p̃ ( y = +1|x2 ) , and 1/2 > p̃ ( y = +1|x1 ) > p̃ ( y = +1|x2 ) , thereby providing different pointwise labels : ( +1 , +1 ) , ( +1 , −1 ) , ( −1 , −1 ) . We can find that different labelers may provide inconsistent pointwise labels , while pairwise confidence comparisons are unanimous and accurate . One may argue that we could aggregate multiple labels of the same instance using crowdsourcing learning methods ( Whitehill et al. , 2009 ; Raykar et al. , 2010 ) . However , as not every instance will be labeled by multiple labelers , it is not always applicable to crowdsourcing learning methods . Therefore , our proposed Pcomp classification is useful in this case . Our contributions in this paper can be summarized as follows : • We propose Pcomp classification , a novel weakly supervised learning setting , and present a mathematical formulation for the generation process of pairwise comparison data . 
• We prove that an unbiased risk estimator (URE) can be derived, propose an empirical risk minimization (ERM) based method, and present an improvement using correction functions (Lu et al., 2020) for alleviating overfitting when complex models are used. • We start from the noisy-label learning perspective to introduce the RankPruning method (Northcutt et al., 2017), which holds a progressive URE for solving our proposed Pcomp classification problem, and improve it by imposing consistency regularization. • We experimentally demonstrate the effectiveness of our proposed solutions for Pcomp classification. 2 PRELIMINARIES. Binary classification with pairwise comparisons and extra pointwise labels has been studied (Xu et al., 2017; Kane et al., 2017). Our paper focuses on a more challenging problem where only pairwise comparison examples are provided. Unlike previous studies (Xu et al., 2017; Kane et al., 2017) that leverage some pointwise labels to differentiate the labels of pairwise comparisons, our methods are purely based on ERM with only pairwise comparisons. In the following, we briefly introduce some notation and review the related problem formulations of binary classification, positive-unlabeled classification, and unlabeled-unlabeled classification. Binary Classification. Since our paper focuses on how to train a binary classifier from pairwise comparison data, we first review the problem formulation of binary classification. Let the feature space be $\mathcal{X}$ and the label space be $\mathcal{Y} = \{+1, -1\}$. Suppose the collected dataset is denoted by $D = \{(x_i, y_i)\}_{i=1}^{n}$, where each example $(x_i, y_i)$ is independently sampled from the joint distribution with density $p(x, y)$ and consists of an instance $x_i \in \mathcal{X}$ and a label $y_i \in \mathcal{Y}$. The goal of binary classification is to train an optimal classifier $f : \mathcal{X} \mapsto \mathbb{R}$ by minimizing the following expected classification risk:
$$R(f) = \mathbb{E}_{p(x,y)}[\ell(f(x), y)] = \pi_+ \mathbb{E}_{p_+(x)}[\ell(f(x), +1)] + \pi_- \mathbb{E}_{p_-(x)}[\ell(f(x), -1)], \qquad (1)$$
where $\ell : \mathbb{R} \times \mathcal{Y} \mapsto \mathbb{R}_+$ denotes a binary loss function, $\pi_+ := p(y = +1)$ (or $\pi_- := p(y = -1)$) denotes the positive (or negative) class prior probability, and $p_+(x) := p(x \mid y = +1)$ (or $p_-(x) := p(x \mid y = -1)$) denotes the class-conditional probability density of the positive (or negative) data. ERM approximates the expectations over $p_+(x)$ and $p_-(x)$ by the empirical averages of positive and negative data, and the empirical risk is minimized with respect to the classifier $f$. Positive-Unlabeled (PU) Classification. In some real-world scenarios, it may be difficult to collect negative data, and only positive (P) and unlabeled (U) data are available. PU classification aims to train an effective binary classifier in this weakly supervised setting. Previous studies (du Plessis et al., 2014; 2015; Kiryo et al., 2017) showed that the classification risk $R(f)$ in Eq. (1) can be rewritten only in terms of positive and unlabeled data as
$$R(f) = R_{\mathrm{PU}}(f) = \pi_+ \mathbb{E}_{p_+(x)}[\ell(f(x), +1) - \ell(f(x), -1)] + \mathbb{E}_{p(x)}[\ell(f(x), -1)], \qquad (2)$$
where $p(x) = \pi_+ p_+(x) + \pi_- p_-(x)$ denotes the probability density of unlabeled data. This risk expression immediately allows us to employ ERM in terms of positive and unlabeled data. Unlabeled-Unlabeled (UU) Classification. The recent studies (Lu et al.
, 2019; 2020) showed that it is possible to train a binary classifier only from two unlabeled datasets with different class priors. Lu et al. (2019) showed that the classification risk can be rewritten as $$R(f) = R_{\mathrm{UU}}(f) = \mathbb{E}_{p_{\mathrm{tr}}(x)}\!\left[\frac{(1-\theta')\pi_+}{\theta-\theta'}\,\ell(f(x), +1) - \frac{\theta'(1-\pi_+)}{\theta-\theta'}\,\ell(f(x), -1)\right] + \mathbb{E}_{p'_{\mathrm{tr}}(x')}\!\left[\frac{\theta(1-\pi_+)}{\theta-\theta'}\,\ell(f(x'), -1) - \frac{(1-\theta)\pi_+}{\theta-\theta'}\,\ell(f(x'), +1)\right], \quad (3)$$ where $\theta$ and $\theta'$ are the different class priors of the two unlabeled datasets, and $p_{\mathrm{tr}}(x)$ and $p'_{\mathrm{tr}}(x')$ are the densities of the two unlabeled datasets, respectively. This risk expression immediately allows us to employ ERM only from two sets of unlabeled data. For $R_{\mathrm{UU}}(f)$ in Eq. (3), if we set $\theta = 1$ and $\theta' = \pi_+$, and replace $p_{\mathrm{tr}}(x)$ and $p'_{\mathrm{tr}}(x')$ by $p_+(x)$ and $p(x)$ respectively, then we recover $R_{\mathrm{PU}}(f)$ in Eq. (2). Therefore, UU classification can be taken as a generalized framework of PU classification in terms of URE. Besides, Eq. (3) also recovers a complicated URE of similar-unlabeled classification (Bao et al., 2018) by setting $\theta = \pi_+$ and $\theta' = \pi_+^2/(2\pi_+^2 - 2\pi_+ + 1)$. To solve our proposed Pcomp classification problem, we will present a mathematical formulation for the generation process of pairwise comparison data, based on which we will explore two UREs to train a binary classifier by ERM and establish the corresponding estimation error bounds. 3 DATA GENERATION PROCESS. In order to derive UREs for performing ERM, we first formulate the underlying generation process of pairwise comparison data, which consists of pairs of unlabeled data for which we know which one is more likely to be positive. (In contrast to Xu et al. (2019) and Xu et al. (2020), which utilized pairwise comparison data to solve the regression problem, we focus on binary classification.) Suppose the provided dataset is denoted by $\tilde{D} = \{(x_i, x'_i)\}_{i=1}^{n}$, where $(x_i, x'_i)$ (with unknown true labels $(y_i, y'_i)$) is expected to satisfy $p(y_i=+1 \mid x_i) > p(y'_i=+1 \mid x'_i)$. It is clear that we could easily collect pairwise comparison data if the positive confidence (i.e., $p(y=+1 \mid x)$) of each instance could be obtained. However, such information is much harder to obtain than class labels in real-world scenarios. Therefore, unlike some studies (Ishida et al., 2018; Shinoda et al., 2020) that assume the positive confidence of each instance is provided by the labeler, we only assume that the labeler has access to the labels of training data. Specifically, we adopt the assumption (Cui et al., 2020) that weakly supervised examples are first sampled from the true data distribution, but the labels are only accessible to the labeler. Then, the labeler provides us weakly supervised information (i.e., pairwise comparison information) according to the labels of sampled data pairs. That is, for any pair of unlabeled data $(x, x')$, the labeler tells us whether $(x, x')$ could be collected as a pairwise comparison for Pcomp classification, based on the labels $(y, y')$ rather than the positive confidences $(p(y=+1 \mid x), p(y=+1 \mid x'))$. Now, the question becomes: how does the labeler decide whether $(x, x')$ is a pairwise comparison for Pcomp classification, in terms of the labels $(y, y')$?
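Before turning to the generation process, the UU risk reviewed above is straightforward to instantiate empirically. Below is a minimal NumPy sketch of Eq. (3) for a classifier evaluated on the two unlabeled samples; the function and variable names are ours, and the logistic loss is one arbitrary choice of the binary loss $\ell$, not necessarily the one used by the authors:

```python
import numpy as np

def logistic_loss(margin):
    # ell(f(x), y) with margin y * f(x); numerically stable logistic loss
    return np.logaddexp(0.0, -margin)

def uu_risk(f_x, f_xp, theta, theta_p, pi_pos):
    """Empirical version of Eq. (3). f_x and f_xp are classifier outputs
    f(x) on the two unlabeled samples, whose class priors satisfy theta > theta_p."""
    c = theta - theta_p
    risk1 = ((1 - theta_p) * pi_pos / c) * logistic_loss(f_x) \
            - (theta_p * (1 - pi_pos) / c) * logistic_loss(-f_x)
    risk2 = (theta * (1 - pi_pos) / c) * logistic_loss(-f_xp) \
            - ((1 - theta) * pi_pos / c) * logistic_loss(f_xp)
    return risk1.mean() + risk2.mean()
```

Setting theta = 1 and theta_p = pi_pos in this sketch recovers the PU risk of Eq. (2), mirroring the reduction noted above.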
As shown in our previous example of binary classification with class overlapping, we can infer that the labels $(y, y')$ of our required pairwise comparison data $(x, x')$ for Pcomp classification can only be one of the three cases $\{(+1,-1), (+1,+1), (-1,-1)\}$, because the condition $p(y=+1 \mid x) \geq p(y'=+1 \mid x')$ is definitely violated if $(y, y') = (-1,+1)$. Therefore, we assume that the labeler takes $(x, x')$ as a pairwise comparison example in the dataset $\tilde{D}$ if the labels $(y, y')$ of $(x, x')$ belong to the above three cases. It is also worth noting that for a pair of data $(x, x')$ with labels $(y, y') = (-1,+1)$, the labeler would take $(x', x)$ as a pairwise comparison example, because by exchanging the positions of $(x, x')$, $(x', x)$ would be associated with labels $(+1,-1)$, which belong to the three cases. In summary, we assume that pairwise comparison data are sampled from those pairs of data whose labels belong to the three cases $\{(+1,-1), (+1,+1), (-1,-1)\}$. Based on the above described generation process of pairwise comparison data, we have the following theorem. Theorem 1. According to the generation process of pairwise comparison data described above, let $$\tilde{p}(x, x') = \frac{q(x, x')}{\pi_+^2 + \pi_-^2 + \pi_+\pi_-}, \quad (4)$$ where $q(x, x') = \pi_+^2\, p_+(x)p_+(x') + \pi_-^2\, p_-(x)p_-(x') + \pi_+\pi_-\, p_+(x)p_-(x')$. Then we have $\tilde{D} = \{(x_i, x'_i)\}_{i=1}^{n} \overset{\mathrm{i.i.d.}}{\sim} \tilde{p}(x, x')$. The proof is provided in Appendix A. Theorem 1 provides an explicit expression of the probability density of pairwise comparison data. Next, we would like to extract pointwise information from pairwise information, since our goal is to perform pointwise binary classification. Let $\tilde{\pi} = \pi_+^2 + \pi_-^2 + \pi_+\pi_- = \pi_+ + \pi_-^2 = \pi_+^2 + \pi_-$, and denote the pointwise data collected from $\tilde{D} = \{(x_i, x'_i)\}_{i=1}^{n}$ by breaking the pairwise comparison relation as $\tilde{D}_+ = \{x_i\}_{i=1}^{n}$ and $\tilde{D}_- = \{x'_i\}_{i=1}^{n}$. Then we can obtain the following theorem. Theorem 2. Pointwise examples in $\tilde{D}_+$ and $\tilde{D}_-$ are independently drawn from $\tilde{p}_+(x)$ and $\tilde{p}_-(x')$, where $$\tilde{p}_+(x) = \frac{\pi_+}{\pi_-^2 + \pi_+}\, p_+(x) + \frac{\pi_-^2}{\pi_-^2 + \pi_+}\, p_-(x), \qquad \tilde{p}_-(x') = \frac{\pi_+^2}{\pi_+^2 + \pi_-}\, p_+(x') + \frac{\pi_-}{\pi_+^2 + \pi_-}\, p_-(x').$$ The proof is provided in Appendix B. Theorem 2 shows the relationships between the pointwise densities and the class-conditional densities. Besides, it indicates that from pairwise comparison data, we can essentially obtain examples that are independently drawn from $\tilde{p}_+(x)$ and $\tilde{p}_-(x')$.
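To make the generation process concrete, here is a hedged NumPy simulation (the 1-D Gaussian class-conditionals are our toy choice, not the paper's): labeled pairs are drawn from $p(x,y)$, pairs with labels $(-1,+1)$ are swapped, and every pair then falls in the three admissible cases, so the resulting marginals can be checked empirically against Theorem 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_labeled(n, pi_pos):
    # toy class-conditionals p_+ = N(+2, 1), p_- = N(-2, 1), chosen for illustration
    y = np.where(rng.random(n) < pi_pos, 1, -1)
    x = rng.normal(2.0 * y, 1.0)
    return x, y

def generate_pcomp(n_pairs, pi_pos):
    x, y = sample_labeled(n_pairs, pi_pos)
    xp, yp = sample_labeled(n_pairs, pi_pos)
    swap = (y == -1) & (yp == +1)           # (-1, +1) pairs become (+1, -1)
    x[swap], xp[swap] = xp[swap], x[swap]   # boolean indexing copies, so this is safe
    return x, xp                            # all pairs now lie in the three cases

x, xp = generate_pcomp(100_000, pi_pos=0.6)
# sanity check: np.mean(x) should approach the mean of p~_+ from Theorem 2
```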
The paper develops a method to learn a binary classifier based only on pairwise comparison data. For example, the classifier learns to classify pictures of people as "adult" versus "child" based on pairwise comparisons of the form "person C is older than person X". The authors derive their method based on an empirical risk minimization argument. The authors test their methods on four standard data sets (three MNIST variants and one more). They compare to some baselines including binary biased, noisy unbiased, and RankPruning. They try 4 different variations of their method. The Pcomp-Teacher model performs especially well.
SP:8ff9e46f3d6f0c6d74158383600839bdd97478af
Practical Marginalized Importance Sampling with the Successor Representation
1 INTRODUCTION. Off-policy evaluation (OPE) is a reinforcement learning (RL) task where the aim is to measure the performance of a target policy from data collected by a separate behavior policy (Sutton & Barto, 1998). As it can often be difficult or costly to obtain new data, OPE offers an avenue for re-using previously stored data, making it an important challenge for applying RL to real-world domains (Zhao et al., 2009; Mandel et al., 2014; Swaminathan et al., 2017; Gauci et al., 2018). Marginalized importance sampling (MIS) (Liu et al., 2018; Xie et al., 2019; Nachum et al., 2019a) is a family of OPE methods which re-weight sampled rewards by directly learning the density ratio between the state-action occupancy of the target policy and the sampling distribution. This approach can have significantly lower variance than traditional importance sampling methods (Precup et al., 2001), which consider a product of ratios over trajectories, and is amenable to deterministic policies and behavior-agnostic settings where the sampling distribution is unknown. However, the body of MIS work is largely theoretical, and as a result, empirical evaluations of MIS have mostly been carried out on simple low-dimensional tasks, such as mountain car (state dim. of 2) or cartpole (state dim. of 4). In comparison, deep RL algorithms have shown successful behaviors in high-dimensional domains such as Humanoid locomotion (state dim. of 376) and Atari (image-based). In this paper, we present a straightforward approach for MIS that can be computed from the successor representation (SR) of the target policy. Our algorithm, the Successor Representation DIstribution Correction Estimation (SR-DICE), is the first method that allows MIS to scale to high-dimensional systems, far outperforming previous approaches. In comparison to previous algorithms, which rely on minimax optimization or kernel methods (Liu et al., 2018; Nachum et al., 2019a; Uehara & Jiang, 2019; Mousavi et al., 2020), SR-DICE requires only a simple convex loss applied to the linear function determining the reward, after computing the SR. Similar to the deep RL methods that learn in high-dimensional domains, the SR can be computed easily using behavior-agnostic temporal-difference (TD) methods. This makes our algorithm highly amenable to deep learning architectures and applicable to complex tasks. Our derivation of SR-DICE also reveals an interesting connection between MIS methods and value function learning. The key motivation for MIS methods is that, unlike traditional importance sampling methods, they can avoid variance with an exponential dependence on the horizon by re-weighting individual transitions rather than accumulating ratios along entire trajectories. We remark that while the MIS ratios only consider individual transitions, the optimization procedure is still subject to the dynamics of the underlying MDP. Subsequently, we use this insight to show a connection between a well-known MIS method, DualDICE (Nachum et al., 2019a), and Bellman residual minimization (Bellman, 1957; Baird, 1995), which can help explain some of the optimization properties and performance of DualDICE, as well as of other related MIS methods. We benchmark the performance of SR-DICE on several high-dimensional domains in MuJoCo (Todorov et al., 2012) and Atari (Bellemare et al., 2013), against several recent MIS methods.
Our results demonstrate two key findings regarding high-dimensional tasks. SR-DICE significantly outperforms the benchmark algorithms. We attribute this performance gap to SR-DICE's deep RL components, which outperform the MIS baselines in the same way that deep RL outperforms traditional methods on high-dimensional domains. Unfortunately, part of this performance gap is due to the fact that the baseline MIS methods scale poorly to challenging tasks. In Atari we find that the baseline MIS methods exhibit unstable estimates, often reaching errors of many orders of magnitude. MIS underperforms deep RL. Although SR-DICE achieves a high performance, we find its errors are bounded by the quality of the SR. Consequently, we find that SR-DICE and the standard SR achieve a similar performance across all tasks. Worse still, we find that using a deep TD method comparable to DQN (Mnih et al., 2015) for policy evaluation outperforms both methods. Although the performance gap is minimal, for OPE there is no convincing argument for SR-DICE, or for any current MIS method, as they introduce unnecessary complexity. However, this does not mean MIS is useless. We remark that the density ratios themselves are an independent objective which has been used for applications such as policy regularization (Nachum et al., 2019b; Touati et al., 2020), imitation learning (Kostrikov et al., 2019), off-policy policy gradients (Imani et al., 2018; Liu et al., 2019b; Zhang et al., 2019), and non-uniform sampling (Sinha et al., 2020). SR-DICE serves as a stable, scalable approach for computing these ratios. We provide extensive experimental details in the supplementary material and our code is made available. 2 BACKGROUND. Reinforcement Learning. RL is a framework for maximizing the accumulated reward of an agent interacting with its environment (Sutton & Barto, 1998). This problem is typically framed as a Markov Decision Process (MDP) $(\mathcal{S}, \mathcal{A}, R, p, d_0, \gamma)$, with state space $\mathcal{S}$, action space $\mathcal{A}$, reward function $R$, dynamics model $p$, initial state distribution $d_0$ and discount factor $\gamma$. An agent selects actions according to a policy $\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$. In this paper we address the off-policy evaluation (OPE) problem, where the aim is to measure the normalized expected per-step reward of the policy, $R(\pi) = (1-\gamma)\,\mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty}\gamma^t r(s_t, a_t)\right]$. An important notion in OPE is the value function $Q^\pi(s, a) = \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty}\gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a\right]$, which measures the expected sum of discounted rewards when following $\pi$, starting from $(s, a)$. We define $d^\pi(s, a)$ as the discounted state-action occupancy, the probability of seeing $(s, a)$ under policy $\pi$ with discount $\gamma$: $d^\pi(s, a) = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t \int_{s_0} d_0(s_0)\, p_\pi(s_0 \to s, t)\, \pi(a \mid s)\, \mathrm{d}s_0$, where $p_\pi(s_0 \to s, t)$ is the probability of arriving at the state $s$ after $t$ time steps when starting from an initial state $s_0$. This distribution is important as $R(\pi)$ equals the expected reward $r(s, a)$ under $d^\pi$: $$R(\pi) = \mathbb{E}_{(s,a)\sim d^\pi, r}[r(s, a)]. \quad (1)$$ Successor Representation. The successor representation (SR) (Dayan, 1993) of a policy is a measure of occupancy of future states. It can be viewed as a general value function that learns a vector of the expected discounted visitation for each state. The successor representation $\Psi^\pi$ of a given policy $\pi$ is defined as $\Psi^\pi(s' \mid s) = \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty}\gamma^t \mathbb{1}(s_t = s') \mid s_0 = s\right]$.
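For a finite MDP the SR just defined has a closed form, since $\Psi^\pi = \sum_{t}\gamma^t P_\pi^t = (I - \gamma P_\pi)^{-1}$, where $P_\pi$ is the state-transition matrix under $\pi$. A minimal NumPy sketch (names ours), which also supports the value-recovery identity described next:

```python
import numpy as np

def successor_representation(P_pi, gamma):
    """Psi[s, s'] = E_pi[ sum_t gamma^t 1(s_t = s') | s_0 = s ]
    for a tabular policy with state-transition matrix P_pi[s, s']."""
    n = P_pi.shape[0]
    # geometric series sum_t (gamma * P_pi)^t = (I - gamma * P_pi)^{-1}
    return np.linalg.inv(np.eye(n) - gamma * P_pi)

# value recovery (see below): V = Psi @ r_bar, with r_bar(s) = E_{a~pi}[r(s, a)]
```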
Importantly, the value function can be recovered from the SR by summing over the expected reward of each state: $V^\pi(s) = \sum_{s'} \Psi^\pi(s' \mid s)\, \mathbb{E}_{a'\sim\pi}[r(s', a')]$. For infinite state and action spaces, the SR can instead be generalized to the expected occupancy over features, known as the deep SR (Kulkarni et al., 2016) or successor features (Barreto et al., 2017). For a given encoding function $\phi: \mathcal{S}\times\mathcal{A} \to \mathbb{R}^n$, the deep SR $\psi^\pi: \mathcal{S}\times\mathcal{A} \to \mathbb{R}^n$ is defined as the expected discounted sum over features from the encoding function $\phi$ when starting from a given state-action pair and following $\pi$: $$\psi^\pi(s, a) = \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty}\gamma^t \phi(s_t, a_t) \,\middle|\, s_0 = s, a_0 = a\right]. \quad (2)$$ If the encoding $\phi(s, a)$ is learned such that the original reward function is a linear function of the encoding, $r(s, a) = w^\top \phi(s, a)$, then, similar to the original formulation of SR, the value function can be recovered from a linear function of the SR: $Q^\pi(s, a) = w^\top \psi^\pi(s, a)$. The deep SR network $\psi^\pi$ is trained to minimize the MSE between $\psi^\pi(s, a)$ and $\phi(s, a) + \gamma\psi'(s', a')$ on transitions $(s, a, s')$ sampled from the data set. A frozen target network $\psi'$ is used to provide stability (Mnih et al., 2015; Kulkarni et al., 2016), and is updated to the current network, $\psi' \leftarrow \psi^\pi$, after a fixed number of time steps. The encoding function $\phi$ is typically trained by an encoder-decoder network (Kulkarni et al., 2016; Machado et al., 2017; 2018a). Marginalized Importance Sampling. Marginalized importance sampling (MIS) is a family of importance sampling approaches for off-policy evaluation in which the performance $R(\pi)$ is evaluated by re-weighting rewards sampled from a data set $D = \{(s, a, r, s')\} \sim p(s' \mid s, a)\, d^D(s, a)$, where $d^D$ is an arbitrary distribution, typically, but not necessarily, induced by some behavior policy. It follows that $R(\pi)$ can be computed with importance sampling weights $\frac{d^\pi(s,a)}{d^D(s,a)}$ on the rewards: $$R(\pi) = \mathbb{E}_{(s,a)\sim d^D, r}\!\left[\frac{d^\pi(s,a)}{d^D(s,a)}\, r(s, a)\right]. \quad (3)$$ The goal of marginalized importance sampling methods is to learn the weights $w(s, a) \approx \frac{d^\pi(s,a)}{d^D(s,a)}$ using data contained in $D$. The main benefit of MIS is that, unlike traditional importance methods, the ratios are applied to individual transitions rather than complete trajectories, which can reduce the variance of long or infinite horizon problems. In other cases, the ratios themselves can be used for a variety of applications which require estimating the occupancy of state-action pairs. DualDICE. Dual stationary DIstribution Correction Estimation (DualDICE) (Nachum et al., 2019a) is a well-known MIS method which uses a minimax optimization to learn the density ratios. The underlying objective which DualDICE aims to minimize is the following: $$\min_f J(f) := \frac{1}{2}\,\mathbb{E}_{(s,a)\sim d^D}\!\left[\big(f(s, a) - \gamma\,\mathbb{E}_{s', \pi}[f(s', a')]\big)^2\right] - (1-\gamma)\,\mathbb{E}_{s_0, a_0\sim\pi}[f(s_0, a_0)]. \quad (4)$$ It can be shown that Equation (4) is uniquely optimized by the MIS density ratio. However, since $f(s, a) - \gamma\,\mathbb{E}_\pi[f(s', a')]$ is dependent on transitions $(s, a, s')$, there are two practical issues with this underlying objective. First, the objective contains a square within an expectation, giving rise to the double sampling problem (Baird, 1995), where the gradient will be biased when using only a single sample of $(s, a, s')$.
Second, computing $f(s, a) - \gamma\,\mathbb{E}_{s', \pi}[f(s', a')]$ for arbitrary state-action pairs, particularly those not contained in the data set, is non-trivial, as it relies on an expectation over succeeding states, which is generally inaccessible without a model of the environment. To address both concerns, DualDICE uses Fenchel duality (Rockafellar, 1970) to create the following minimax optimization problem: $$\min_f \max_w J(f, w) := \mathbb{E}_{(s,a)\sim d^D,\, a'\sim\pi,\, s'}\!\left[w(s, a)\big(f(s, a) - \gamma f(s', a')\big) - 0.5\, w(s, a)^2\right] - (1-\gamma)\,\mathbb{E}_{s_0, a_0}[f(s_0, a_0)]. \quad (5)$$ Similar to the original formulation, Equation (4), it can be shown that Equation (5) is minimized when $w(s, a)$ is the desired density ratio.
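As an illustration, a hedged PyTorch sketch of one optimization step on Equation (5): `f_net` and `w_net` are arbitrary function approximators over state-action pairs (signatures ours), `a2` and `a0` are actions drawn from the target policy at the next and initial states, and the learned `w` is then plugged into Equation (3) to estimate $R(\pi)$. This is a sketch of the objective, not the authors' implementation:

```python
import torch

def dualdice_step(f_net, w_net, opt_f, opt_w, batch, gamma):
    s, a, s2, a2, s0, a0 = batch             # a2 ~ pi(.|s2), a0 ~ pi(.|s0)
    bellman = f_net(s, a) - gamma * f_net(s2, a2)
    j = (w_net(s, a) * bellman - 0.5 * w_net(s, a) ** 2).mean()
    j = j - (1.0 - gamma) * f_net(s0, a0).mean()
    opt_f.zero_grad()
    opt_w.zero_grad()
    j.backward()
    opt_f.step()                             # descend in f ...
    for p in w_net.parameters():             # ... ascend in w (flip gradient sign)
        p.grad = -p.grad
    opt_w.step()
    return j.item()

# once trained: R(pi) ~= (w_net(s, a).detach() * r).mean()   # Equation (3)
```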
The paper proposes an approach that employs the successor representation in combination with marginalized importance sampling. The basic idea exploited in the paper consists of expressing the occupancies in terms of the successor representation and modeling it via a linear combination of features. This allows handling, although approximately, continuous state-action spaces. After deriving the objective function, an experimental evaluation on both MuJoCo and Atari domains is presented, including an ablation study.
SP:0685dd85f87da44ee57de28dd64c6c06181cdc65
Tight Second-Order Certificates for Randomized Smoothing
1 INTRODUCTION. A topic of much recent interest in machine learning has been the design of deep classifiers with provable robustness guarantees. In particular, for an m-class classifier $h: \mathbb{R}^d \to [m]$, the $L_2$ certification problem for an input $x$ is to find a radius $\rho$ such that, for all $\delta$ with $\|\delta\|_2 < \rho$, $h(x) = h(x + \delta)$. This robustness certificate serves as a lower bound on the magnitude of any adversarial perturbation of the input that can change the classification: therefore, the certificate is a security guarantee against adversarial attacks. There are many approaches to the certification problem, including exact methods, which compute the precise norm to the decision boundary (Tjeng et al., 2019; Carlini et al., 2017; Huang et al., 2017), as well as methods for which the certificate $\rho$ is merely a lower bound on the distance to the decision boundary (Wong & Kolter, 2018; Gowal et al., 2018; Raghunathan et al., 2018). One approach that belongs to the latter category is Lipschitz function approximation. Recall that a function $f: \mathbb{R}^d \to \mathbb{R}$ is L-Lipschitz if, for all $x, x'$, $|f(x) - f(x')| \leq L\|x - x'\|_2$. If a classifier is known to be a Lipschitz function, this immediately implies a robustness certificate. In particular, consider binary classification for simplicity, where we use an L-Lipschitz function $f$ as a classifier, using the sign of $f(x)$ as the classification. Then for any input $x$, we are assured that the classification (i.e., the sign) will remain constant for all $x'$ within a radius $|f(x)|/L$ of $x$. Numerous methods for training Lipschitz neural networks with small, known Lipschitz constants have been proposed (Fazlyab et al., 2019; Zhang et al., 2019; Anil et al., 2019; Li et al., 2019b). It is desirable that the network be as expressive as possible while still maintaining the desired Lipschitz property. Anil et al. (2019) in particular demonstrate that their proposed method can universally approximate Lipschitz functions, given sufficient network complexity. However, in practice, for the robust certification problem on large-scale input, randomized smoothing (Cohen et al., 2019) is the current state-of-the-art method. The key observation of randomized smoothing (as formalized by Salman et al. (2019); Levine et al. (2019)) is that, for any arbitrary base classifier function $f: \mathbb{R}^d \to [0, 1]$, the function $x \to \Phi^{-1}(p_a(x))$, where $$p_a(x) := \mathbb{E}_{\epsilon\sim\mathcal{N}(0, \sigma^2 I)}\, f(x + \epsilon), \quad (1)$$ is $(1/\sigma)$-Lipschitz, where $\mathcal{N}(0, \sigma^2 I)$ is a d-dimensional isometric Gaussian distribution with variance $\sigma^2$ and $\Phi^{-1}$ is the inverse normal CDF function. As a result, given the smoothed classifier value $p_a(x)$ at $x$, one can calculate the certified radius $\rho(x) = \sigma\Phi^{-1}(p_a(x))$ within which $p_a$ remains at least 0.5 (i.e., $\Phi^{-1}(p_a) \geq 0$). This means that we can use $p_a: \mathbb{R}^d \to [0, 1]$ as a robust binary classifier (with one class assignment if $p_a(x) \geq 0.5$, and the other if $p_a(x) < 0.5$). Cohen et al. (2019) show that this is a tight certificate result for a classifier smoothed with Gaussian noise: given the value of $p_a(x)$, there exists a base classifier function $f$ such that, if $p_a$ is the Gaussian-smoothed version of $f$, then there exists an $x'$ with $\|x - x'\|_2 = \rho$ such that $p_a(x') = 0.5$. In other words, the certificate provided by Cohen et al. (2019) is the largest possible certificate for Gaussian smoothing, given only the value of $p_a(x)$. Previous results (Li et al.
, 2019a; Lecuyer et al., 2019) provided looser bounds for Gaussian smoothing. Singla & Feizi (2020) have recently shown, for shallow neural networks, that, rather than globally bounding the (first-order) Lipschitz constant of the network, it is possible to achieve larger robustness certificates by instead globally bounding the Lipschitz constant of the gradient of the network. This second-order, curvature-based method takes advantage of the fact that the gradient at $x$ can be computed easily via back-propagation, so certificates can make use of both $f(x)$ and $\nabla_x f(x)$. This leads to a question: can we also use the gradient of a smoothed classifier, $\nabla_x p_a(x)$, to improve smoothing-based certificates? In this work, we show that there is a universal curvature-like bound for all randomly-smoothed classifiers. Therefore, given $p_a(x)$ and $\nabla_x p_a(x)$, we can compute larger certificates than is possible using the value of $p_a(x)$ alone. Moreover, our bound is tight in that, given only the pair $(p_a(x), \nabla_x p_a(x))$, the certificate we provide is the largest possible certificate for Gaussian smoothing. We call our certificates "Second-order Smoothing" (SoS) certificates. As shown in Figure 1, the smoothing-based certificates which we can achieve using second-order smoothing represent relatively modest improvements compared to the first-order bounds. This is a meaningful negative result, given the tightness of our bounds, and is therefore useful in guiding (or limiting) future research into higher-order smoothing certificates. Additionally, this result shows that randomized smoothing (or, specifically, functions in the form of Equation 1) cannot be used to universally approximate Lipschitz functions: all randomly smoothed functions will have the additional curvature constraint described in this work. If the base classifier $f$ is a neural network, computing the expectation in Equation 1 analytically is not tractable. Therefore it is standard (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019) to estimate this expectation using N random samples, and to bound the expectation probabilistically. The certificate is then a high-probability, rather than exact, result, using the estimated lower bound of $p_a(x)$. In Section 3.1, we discuss empirical estimation of the gradient norm of a smoothed classifier for second-order certification, and develop an estimator for this quantity, in which the number of samples required to estimate the gradient scales linearly with the dimensionality d of the input. (In a concurrent work initially distributed after the submission of this work, Mohapatra et al. (2020) proposed an identical second-order smoothing certificate, along with a tighter empirical estimator for the gradient norm, in which the number of samples required scales with $\sqrt{d}$.) In order to overcome this, in Section 4, we develop a modified form of Gaussian randomized smoothing, Gaussian Dipole Smoothing, which allows for a dipole certificate, related to the second-order certificate, to be computed. Unlike the second-order certificate, however, the dipole certificate has no explicit dependence on dimensionality in its estimation, and therefore can practically scale to real-world high-dimensional datasets. 2 PRELIMINARIES, ASSUMPTIONS AND NOTATION. We use $f(x)$ to represent a generic scalar-valued "base" function to be smoothed. In general, we assume $f: \mathbb{R}^d \to [0, 1]$. However, for empirical estimation results (Theorem 3), we assume that $f$ is a "hard" base classifier: $f: \mathbb{R}^d \to \{0, 1\}$. This will be made clear in context. The smoothed version of $f$ is notated as $p_a: \mathbb{R}^d \to [0, 1]$, defined as in Equation 1.
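For reference, the standard first-order Monte Carlo certification procedure described above can be sketched as follows. This is a hedged sketch in the spirit of Cohen et al. (2019), not their code: it assumes a hard base classifier vectorized over a batch of noisy inputs, and uses a Clopper-Pearson bound for the high-probability lower bound on $p_a(x)$; all names are ours:

```python
import numpy as np
from scipy.stats import norm, beta

def certify_first_order(base_clf, x, sigma, n, alpha, rng):
    """Return a radius rho such that, with prob. >= 1 - alpha over the sampling,
    the smoothed classification is constant within rho (0.0 means abstain)."""
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    k = int(base_clf(x[None] + noise).sum())   # successes among n noisy samples
    if k == 0:
        return 0.0
    p_lower = beta.ppf(alpha, k, n - k + 1)    # Clopper-Pearson lower bound on p_a(x)
    return sigma * norm.ppf(p_lower) if p_lower > 0.5 else 0.0
```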
Recall that $\Phi$ is the normal CDF function and $\Phi'$ is the normal PDF function. In randomized smoothing for multi-class problems, the base classifier is typically a vector-valued function $f: \mathbb{R}^d \to \{0, 1\}^m$, $\sum_c f_c(x) = 1$, where $m$ is the number of classes. The final classification returned by the smoothed classifier is then given by $a := \arg\max_c \mathbb{E}_\epsilon\, f_c(x + \epsilon)$. However, in most prominent implementations (Cohen et al., 2019; Salman et al., 2019), certificates are computed using only the smoothed value for the estimated top class $a$, where $a$ is estimated using a small number $N_0$ of initial random samples, before the final value of $p_a(x)$ is computed using $N$ samples. The certificate then determines the radius in which $p_a(x')$ will remain above 0.5: this guarantees that $a$ will remain the top class, regardless of the other logits. While some works (Lecuyer et al., 2019; Feng et al., 2020) independently estimate each smoothed logit, this incurs additional estimation error as the number of classes increases. In this work, we assume that only estimates for the top-class smoothed logit $p_a(x)$ and its gradient $\nabla_x p_a(x)$ are available (although we briefly discuss the case with more estimated logits in Section 3.2). When discussing empirical estimation, we use $\eta$ as the accepted probability of failure of an estimation method. 3 SECOND-ORDER SMOOTHING CERTIFICATE. We now state our main second-order robustness certificate result. Theorem 1. For all $x, x'$ with $\|x - x'\|_2 < \rho$, and for all $f: \mathbb{R}^d \to [0, 1]$, $$p_a(x') \geq \Phi\!\left(\Phi^{-1}(a' + p_a(x)) - \frac{\rho}{\sigma}\right) - \Phi\!\left(\Phi^{-1}(a') - \frac{\rho}{\sigma}\right), \quad (2)$$ where $a'$ is the (unique) solution to $$\Phi'(\Phi^{-1}(a')) - \Phi'(\Phi^{-1}(a' + p_a(x))) = -\sigma\,\|\nabla_x p_a(x)\|_2. \quad (3)$$ Further, for all pairs $(p_a(x), \|\nabla_x p_a(x)\|_2)$ that are possible, there exists a base classifier $f$ and an adversarial point $x'$ such that Equation 2 is an equality. This implies that our certificate is realizable, and therefore tight. Note that the right-hand side of Equation 2 is monotonically decreasing with $\rho$: we can then compute a robustness certificate by simply setting $p_a(x') = 0.5$ and solving for the certified radius $\rho$. Also, $a'$ can be computed easily, because the left-hand side of Equation 3 is monotonic in $a'$. Evaluated certificate values are shown in Figure 1-b and compared with first-order certificates. All proofs are presented in Appendix A. Like Cohen et al. (2019), we proceed by constructing the worst-case base classifier $f$ given $p_a(x)$ and $\|\nabla_x p_a(x)\|_2$. This is the base classifier $f$ which creates an adversarial point of the smoothed classifier as close as possible to $x$, given the constraints that $p_a(x)$ and $\|\nabla p_a(x)\|_2$ are equal to their reported values. In Cohen et al. (2019), given only $p_a(x)$, this is simply a linear classifier. With the gradient norm, the worst case is that $x$ lies in a region of class $a$ which is a slice between two linear decision boundaries, both perpendicular to $\nabla p_a(x)$. See Figure 3. Note that, by isometry, and because $\nabla p_a(x)$ is the only vector information we have, there is no benefit in certified radius to having the direction of $\nabla p_a(x)$: the norm is sufficient.
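Because the left-hand side of Equation 3 is monotone in $a'$ and the right-hand side of Equation 2 is monotone decreasing in $\rho$, the certificate reduces to two scalar root searches. A hedged SciPy sketch (bracketing constants and names ours): it assumes $p_a(x) > 0.5$ and a gradient norm strictly below the linear-classifier maximum discussed next, so that a root of Equation 3 exists:

```python
from scipy.stats import norm
from scipy.optimize import brentq

def second_order_radius(p_a, grad_norm, sigma, eps=1e-12):
    # Equation 3: find the unique a' with
    # Phi'(Phi^-1(a')) - Phi'(Phi^-1(a' + p_a)) = -sigma * grad_norm
    def eq3(ap):
        return norm.pdf(norm.ppf(ap)) - norm.pdf(norm.ppf(ap + p_a)) \
               + sigma * grad_norm
    a_p = brentq(eq3, eps, 1.0 - p_a - eps)
    # Equation 2: largest rho at which the lower bound on p_a(x') is still 0.5
    def lower_bound_minus_half(rho):
        return norm.cdf(norm.ppf(a_p + p_a) - rho / sigma) \
               - norm.cdf(norm.ppf(a_p) - rho / sigma) - 0.5
    return brentq(lower_bound_minus_half, 0.0, 20.0 * sigma)
```

Note that as `grad_norm` approaches its maximum possible value, `a_p` tends to 0 and the returned radius approaches the first-order certificate $\sigma\Phi^{-1}(p_a)$, consistent with the discussion below.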
In the case of a linear classifier, the gradient takes its maximum possible value: $\|\nabla_x p_a(x)\|_2 = \sigma^{-1}\Phi'(\Phi^{-1}(p_a(x)))$. This case is shown in Figure 3-a: if the gradient norm is equal to this value, the second-order certificate is identical to the first-order certificate (Cohen et al., 2019). However, if the gradient norm is smaller, then we cannot be in this worst-case linear-classifier scenario. Instead, the new "worst case" is constructed by introducing a second "wrong class" region opposite to the direction of the adversarial point (Figure 3-b). In the extreme case (Figure 3-c) where the gradient norm is zero, this is accomplished by balancing two adversarial regions in a "sandwich" around $x$. This "sandwich" configuration reveals the relative weakness of gradient information in improving robustness certificates: having zero gradient does not require that the adversarial regions be evenly distributed around $x$. Rather, it is sufficient to distribute the adversarial probability mass $1 - p_a(x)$ into just two adversarial regions. Therefore, the certified radius, even in this most extreme case, is similar to the Cohen et al. (2019) certificate in the case with half as much adversarial probability mass (the first-order certificate for $\bar{p}_a(x) := (1 + p_a(x))/2$). This can be seen in Figure 1-b: note that at $p_a(x) = 0.6$, if the gradient norm is known to be zero, the certificate is slightly below the certificate for $p_a(x) = 0.8$ with no gradient information. The second-order certificate when $(p_a(x) = 0.6, \|\nabla_x p_a(x)\|_2 = 0)$ is in fact slightly below the first-order certificate for $p_a(x) = 0.8$, because the Gaussian noise places mass throughout all of space, so the smoothed classifier decision boundary is slightly affected by the adversarial region in the direction opposite to $x$. Because we can explicitly construct "worst-case" classifiers that realize the equality case of Equation 2, our certificates are known to be tight: the reported certified radii are the largest possible certificates if only $p_a(x)$ and $\|\nabla p_a(x)\|_2$ are known. In Figure 2, we show how our second-order certificate behaves on a simple, two-dimensional, non-linearly separable dataset, the classic Swiss Roll. The increases are marginal, mostly because the certificates from standard randomized smoothing are already fairly tight. On these data, the certified radii for the two classes are nearly touching in many places along the decision boundary. However, for the blue class, which is surrounded on multiple sides by the red class, there are noticeable increases in the certified radius. This is especially true for points near the center of the blue class, which are at the "top of the hill" of the blue-class probability and therefore have smaller gradient.
This paper presents a randomized second-order smoothing certificate for providing robustness guarantees against adversarial attacks. By additionally using a gradient estimate of the smoothed classifier, the proposed method has been shown to outperform the existing randomized smoothing certificate in practice. A variant of the method that avoids explicitly estimating the gradient vector has also been proposed, to remove the dependence on the feature dimension in the concentration analysis.
SP:98492c9032ac3381f5897bc6f17fd0f136546999
Beyond Trivial Counterfactual Generations with Diverse Valuable Explanations
1 INTRODUCTION. Consider a face authentication system for unlocking a device. In case of non-authentications (possible false-negative predictions), this system could provide generic advice to its user, such as "face the camera" or "remove any face occlusions". However, these may not explain the reason for the possible malfunction. To provide more insights regarding its decisions, the system could instead provide information specific to the captured image (its input data). It might list the input features that most contributed to its decision (e.g., a region of the input image), but this feature could be "face", which is trivial and does not suggest an alternative action to its user. Further, it provides little useful information about the model. Instead, non-trivial explanations may be key for better understanding and diagnosing the system, including the data it was trained on, and improving its reliability. Such explanations might improve systems across a wide variety of domains, including medical imaging [58], automated driving systems [48], and quality control in manufacturing [22]. The explainability literature aims to understand the decisions made by a machine learning (ML) model such as the aforementioned face authentication system. Counterfactual explanation methods [11, 13, 4] can help discover the limitations of a ML model by uncovering data and model biases. Counterfactual explanation methods provide perturbed versions of the input data that emphasize the features that contributed most to the ML model's output. For example, if an authentication system is not recognizing a user wearing sunglasses, then the system could generate an alternative image of the user's face without sunglasses that would be correctly recognized. This is different from other types of explainability methods, such as feature importance methods [50, 51, 4] and boundary approximation methods [47, 37]. The former highlight salient regions of the input but do not indicate how the ML model could achieve a different prediction. The second family of methods produces explanations that are limited to linear approximations of the ML model. Unfortunately, these linear approximations are often inaccurate. In contrast, counterfactual methods suggest changes in the input that would lead to a change in the corresponding output, providing information not only about where the change should be but also about what the change should be. Counterfactual explanations should be actionable, i.e., a user should be able to act on them. An actionable explanation would suggest feasible changes like removing sunglasses instead of unrealistic ones like adding more eyes to the user's face. Counterfactual explanations that are valid, proximal, and sparse are more likely to be actionable [49, 38]. That is, a counterfactual explanation that changes the outcome of the ML model (valid) by changing the minimal number of input features (sparse), while remaining close to the input (proximal). Generating a set of diverse explanations increases the likelihood of finding an actionable explanation [49, 38]. A set of counterfactuals is diverse if each one proposes to change a different set of attributes. Intuitively, each of these explanations sheds light on a different action that the user can take to change the ML model's outcome. Current counterfactual generation methods like xGEM [26] generate a single explanation that is far from the input.
Thus, they fail to be proximal, sparse, and diverse. Progressive Exaggeration (PE) [53] provides higher-quality explanations that are more proximal than xGEM's, but it still fails to provide a diverse set of explanations. In addition, the image generator of PE is trained on the same data as the ML model in order to detect biases, thereby limiting its applicability. Moreover, like the previous methods in the literature, these two methods tend to produce trivial explanations. For instance, an explanation that suggests increasing the 'smile' attribute of a 'smile' classifier for an already-smiling face is trivial, and it does not explain why a misclassification occurred. In this work, we focus on diverse valuable explanations, that is, explanations that are valid, proximal, sparse, and non-trivial. We propose Diverse Valuable Explanations (DiVE), an explainability method that can interpret a ML model by identifying sets of valuable attributes that have the most effect on the ML model's output. DiVE produces multiple counterfactual explanations which are enforced to be valuable and diverse, resulting in more actionable explanations than in the previous literature. Our method first learns a generative model of the data using a β-TCVAE [5] to obtain a disentangled latent representation, which leads to more proximal and sparse explanations. In addition, the VAE is not required to be trained on the same dataset as the ML model to be explained. DiVE then learns a latent perturbation using constraints to enforce diversity, sparsity, and proximity. In order to generate non-trivial explanations, DiVE leverages the Fisher information matrix of its latent space to focus its search on the less influential factors of variation of the ML model. This mechanism enables the discovery of spurious correlations learned by the ML model. We provide experiments to assess whether our explanations are more valuable and diverse than the current state of the art. First, we assess their validity on the CelebA dataset [33] and provide quantitative and qualitative results on a bias detection benchmark [53]. Second, we show that the generated explanations are more proximal in terms of Fréchet Inception Distance (FID) [19], a measure of similarity between two datasets of images commonly used to evaluate the generation quality of GANs. In addition, we evaluate the latent space closeness and face verification accuracy, as reported by Singla et al. [53]. Third, we assess the sparsity of the generated counterfactuals by computing the average change in facial attributes. Fourth, we show that DiVE is more successful at finding non-trivial explanations than previous methods and baselines. In the supplementary material we provide additional results on the out-of-distribution performance of DiVE. We summarize the contributions of this work as follows: 1) We propose DiVE, an explainability method that can interpret a ML model by identifying the attributes that have the most effect on its output. 2) DiVE achieves state of the art in terms of the validity, proximity, and sparsity of its explanations, detecting biases in the datasets and producing multiple explanations for an image. 3) We identify the importance of finding non-trivial explanations and propose a new benchmark to evaluate how valuable the explanations are. 4) We propose to leverage the Fisher information matrix of the latent space for finding spurious features that produce non-trivial explanations.
2 RELATED WORK. Explainable artificial intelligence (XAI) is a suite of techniques developed to make either the construction or the interpretation of model decisions more accessible and meaningful. Broadly speaking, there are two branches of work in XAI: ad-hoc and post-hoc. Ad-hoc methods focus on making models interpretable, by imbuing model components or parameters with interpretations that are rooted in the data themselves [45, 39, 25]. Unfortunately, most successful machine learning methods, including deep learning ones, are uninterpretable [6, 32, 18, 24]. Post-hoc methods aim to explain the decisions of non-interpretable models. These methods can be categorized as non-generative and generative. Non-generative methods use information from a ML model to identify the features most responsible for an outcome for a given input. Approaches like [47, 37, 41] interpret ML model decisions by using derived information to fit a locally interpretable model. Others use the gradient of the ML model parameters to perform feature attribution [59, 60, 52, 54, 50, 1, 51], sometimes by employing a reference distribution for the features [51, 11]. This has the advantage of identifying alternative feature values that, when substituted for the observed values, would result in a different model outcome. These methods are limited to small contiguous regions of features with high influence on the target model outcome. In so doing, they can struggle to provide plausible changes of the input that are actionable by a user in order to correct a certain output or bias of the model. Generative methods such as [7, 5, 4] propose plausible modifications of the input that change the model decision. However, the generated perturbations are usually found in pixel space and are thus bound to masking small regions of the image without necessarily having a semantic meaning. Closest to our work are generative counterfactual explanation methods [26, 9, 15, 53], which synthesize perturbed versions of observed data that result in a corresponding change of the model prediction. While these methods provide valid and proximal explanations for a model outcome, they fail to provide a diverse set of non-trivial explanations. Mothilal et al. [38] addressed the diversity problem by introducing a diversity constraint between a set of randomly initialized counterfactuals (DICE). However, DICE shares the same problems as [7, 4], since perturbations are directly performed on the observed feature space, and it does not take trivial explanations into account. In this work we propose DiVE, a counterfactual explanation method that generates a diverse set of valid, proximal, sparse, and non-trivial explanations. Appendix A provides a more exhaustive review of the related work. 3 PROPOSED METHOD. We propose DiVE, an explainability method that can interpret a ML model by identifying the latent attributes that have the most effect on its output. Summarized in Figure 1, DiVE uses an encoder, a decoder, and a fixed-weight ML model. The ML model could be any function for which we have access to its gradients. In this work, we focus on a binary image classifier in order to produce visual explanations. DiVE consists of two main steps. First, the encoder and the decoder are trained in an unsupervised manner to approximate the data distribution on which the ML model was trained.
Unlike PE [53], our encoder-decoder model does not need to be trained on the same dataset that the ML model was trained on. Second, we optimize a set of vectors $\epsilon_i$ to perturb the latent representation $z$ generated by the trained encoder. The details of the optimization procedure are provided in Algorithm 1 in the Appendix. We use the following three main losses for this optimization: a counterfactual loss $\mathcal{L}_{CF}$ that attempts to fool the ML model, a proximity loss $\mathcal{L}_{prox}$ that constrains the explanations with respect to the number of changing attributes, and a diversity loss $\mathcal{L}_{div}$ that enforces the explainer to generate diverse explanations with only one confounding factor for each of them. Finally, we propose several strategies to mask subsets of dimensions in the latent space to prevent the explainer from producing trivial explanations. Next, we explain the methodology in more detail.
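A hedged PyTorch sketch of this second stage follows. The exact forms of the three losses in the paper differ; the binary cross-entropy flip loss, the $\ell_1$ proximity term, and the pairwise-decorrelation diversity term below are illustrative stand-ins, and all names and loss weights are ours:

```python
import torch
import torch.nn.functional as F

def dive_perturbations(encoder, decoder, clf, x, target, n_expl=4,
                       steps=200, lr=0.05, w_prox=1.0, w_div=0.1):
    with torch.no_grad():
        z = encoder(x)                        # frozen beta-TCVAE encoder, z: (1, d)
    eps = torch.zeros(n_expl, z.shape[1], requires_grad=True)
    opt = torch.optim.Adam([eps], lr=lr)
    for _ in range(steps):
        x_cf = decoder(z + eps)               # one counterfactual per eps_i
        p = clf(x_cf).squeeze(-1)             # frozen ML model, probabilities in [0, 1]
        loss_cf = F.binary_cross_entropy(p, target.expand_as(p))  # flip the prediction
        loss_prox = eps.abs().mean()          # few / small attribute changes
        u = F.normalize(eps.view(n_expl, -1), dim=1)
        loss_div = ((u @ u.t()) - torch.eye(n_expl)).pow(2).mean()  # decorrelate eps_i
        opt.zero_grad()
        (loss_cf + w_prox * loss_prox + w_div * loss_div).backward()
        opt.step()
    return decoder(z + eps).detach()
```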
The authors propose interpreting the decision of a black-box (BB) image classifier using diverse counterfactual explanations. The proposed model consists of a pre-trained β-TCVAE, which learns to extract a disentangled latent representation for the input image. To generate explanations for a given image, the model optimizes to find n latent perturbations. Each decoded output from β-TCVAE is similar to the original image and produces a desired outcome from the BB classifier. To ensure the diversity among the n latent perturbations, the model minimizes the pairwise similarity loss between the latent perturbations. The model further performs spectral clustering to partition the latent space into different attributes. Thus, at inference time, for the same input image, multiple counterfactual images can be generated as explanations by changing different dimensions of the latent space. The experiments demonstrate the realistic quality of the explanations and their ability to discover bias in the BB classifier.
SP:4590c3a3d2a389f0d09fb308793c06855ac02fea
Regression from Upper One-side Labeled Data
1 INTRODUCTION. This paper addresses a scenario in which a regression function is learned for label sensor values that are the results of sensing the magnitude of some phenomenon. A lower sensor value means not only a relatively lower magnitude than a higher value but also a missing or incomplete observation of a monitored phenomenon. Label sensor values for missing observations are lower than those for correct observations, and they are also usually lower than the optimal regression line learned from the correct observations. A naive regression algorithm using such labels produces predictions that are too low and is thus biased and underfitted in comparison with the optimal regression line. In particular, when the data coverage of a label sensor is insufficient, the bias caused by missing observations is critical. One practical example is that, for comfort in healthcare, we mimic and replace an intrusive wrist sensor (label sensor) with non-intrusive bed sensors (explanatory sensors). We learn a regression function that predicts the values of the wrist sensor from the values of the bed sensors. The wrist sensor is wrapped around a wrist. It accurately represents the motion intensity of a person and is used, for example, for sleep-wake discrimination (Tryon, 2013; Mullaney et al., 1980; Webster et al., 1982; Cole et al., 1992). However, it can sense motion only on the forearm, which causes data coverage to be insufficient and observations of movements on other body parts to be missing frequently. The bed sensors are installed under a bed; while their accuracy is limited because of their non-intrusiveness, they have much broader data coverage than that of the wrist sensor. In this case, the wrist sensor values for missing observations are improperly low and also inconsistent with the bed sensor values, as shown in Fig. 1-(1). This leads to severe bias and underfitting. The specific problem causing the bias stems from the fact that our data labeled with lower values than the estimates of the regression function are mixed with data that should originally be labeled above the regression line. Here, we call data labeled above the regression line upper-side data, depicted as circles in Fig. 1-(2), and data labeled below the regression line lower-side data, depicted as squares in Fig. 1-(2). When there are missing observations, that is, in our scenario, the original data with missing observations have been moved to the lower side, depicted as triangles in Fig. 1-(3). We cannot determine which data have been moved by just examining the label values. It follows that our lower-side data are mixed with the original upper- and lower-side data. We thus should treat our lower-side data as unlabeled data, that is, a mix of original upper- and lower-side data. We overcome the bias by handling this asymmetric label corruption, in which upper-side data are correctly labeled but lower-side data are always unlabeled. There is an established approach against such corrupted weak labels in regression, namely robust regression, which regards weak labels as containing outliers (Huber et al., 1964; Narula & Wellington, 1982; Draper & Smith, 1998; Wilcox, 1997). However, since symmetric rather than asymmetric label corruption is assumed there, it is still biased in our problem setting.
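A toy NumPy simulation of the bias just described (all numbers are ours, for illustration): label-sensor values for missing observations come out improperly low, so a naive least-squares fit is pulled below the optimal regression line obtained from complete observations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)                 # explanatory sensor values
y = 2.0 * x + rng.normal(0.0, 0.1, 2000)        # true label-sensor values
miss = rng.random(2000) < 0.4                    # missing observations
y_obs = np.where(miss, 0.1 * y, y)               # improperly low when missing
print(np.polyfit(x, y_obs, 1))  # naive fit: biased low / underfitted
print(np.polyfit(x, y, 1))      # optimal fit: ~[2.0, 0.0]
```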
In the classification problem setting, asymmetric label corruption is addressed with positive-unlabeled (PU) learning, where it is assumed that negative data cannot be obtained but unlabeled data are available as well as positive data (Denis, 1998; De Comité et al., 1999; Letouzey et al., 2000; Shi et al., 2018; Kato et al., 2019; Sakai & Shimizu, 2019; Charoenphakdee & Sugiyama, 2019; Li et al., 2019; Zhang et al., 2019; Xu et al., 2019; Zhang et al., 2020; Guo et al., 2020; Chen et al., 2020). The focus there is on classification tasks, and an unbiased risk estimator has been proposed (du Plessis et al., 2014; 2015). There is a gap between the classification problem setting and our regression problem setting, i.e., we have to estimate specific continuous values, not positive/negative classes. We fill the gap with a novel approach for deriving an unbiased solution for our regression setting. In this paper, we formulate a regression problem from upper one-side labeled data, in which the upper-side data are correctly labeled and the lower-side data are regarded as unlabeled data. We refer to this as one-side regression. Using these upper-side labeled and lower-side unlabeled data, we derive a learning algorithm that is unbiased and consistent with ordinary regression, which uses data labeled correctly in both upper- and lower-side cases. This is achieved by deriving our gradient, which requires only upper-side data and unlabeled data, as an asymptotically equivalent expression of the gradient for ordinary regression. This is a key difference from the derivation of unbiased PU classification, where the loss has been used. We additionally found that a specific class of losses makes it possible to learn an unbiased solution in practice. For implementing the algorithm, we propose a stochastic optimization method. In numerical experiments using synthetic and real-world datasets, we empirically evaluated the effectiveness of the proposed algorithm. We found that it improves performance over regression algorithms that assume that both upper- and lower-side data are correctly labeled. 2 ONE-SIDE REGRESSION. Our goal is to derive a learning algorithm with upper one-side labeled data that is unbiased and consistent with ordinary regression, which uses both upper- and lower-side labeled data. We first consider the ordinary regression problem; after that, we formulate the one-side regression problem by transforming the objective function of the ordinary one. 2.1 ORDINARY REGRESSION PROBLEM. Let $x \in \mathbb{R}^D$ ($D \in \mathbb{N}$) be a D-dimensional explanatory variable and $y \in \mathbb{R}$ be a real-valued label. We learn a regression function $f(x)$ that computes an estimate of the label, $\hat{y}$, for a newly observed $x$ as $\hat{y} = f(x)$. The optimal regression function $f^*$ is given by $$f^* \equiv \arg\min_f L(f), \quad (1)$$ where $L(f)$ is the expected loss when the regression function $f(x)$ is applied to data $x$ and $y$ distributed according to an underlying probability distribution $p(x, y)$: $$L(f) \equiv \mathbb{E}[L(f(x), y)], \quad (2)$$ where $\mathbb{E}$ denotes the expectation over $p(x, y)$, and $L(f(x), y)$ is the loss function between $f(x)$ and $y$, e.g., the squared loss $L(f(x), y) = \|f(x) - y\|_2^2$.
$L(f)$ can be written using the decomposed expectations $\mathbb{E}_{up}$, taken when labels are higher than the estimates of the regression function ($f(x) < y$, the upper-side case), and $\mathbb{E}_{lo}$, taken when labels are lower than the estimates of the regression function ($y < f(x)$, the lower-side case), as $$L(f) = \pi_{up}\,\mathbb{E}_{up}[L(f(x), y)] + \pi_{lo}\,\mathbb{E}_{lo}[L(f(x), y)], \quad (3)$$ where $\pi_{up}$ and $\pi_{lo}$ are the ratios of the upper- and lower-side cases, respectively. Note that the decomposition in Eq. (3) holds for any $f$, including $f^*$, and we omitted the decomposed expectation for $y = f(x)$ because it is always zero. 2.2 ONE-SIDE REGRESSION PROBLEM. We here consider a scenario in which we have training data $D \equiv \{x_n, y_n\}_{n=1}^{N}$ that are correctly labeled only in the upper-side case because of the existence of missing label observations. The data in the lower-side case are a mix of original upper- and lower-side data and are considered to be unlabeled data. We can divide $D$ by the estimates of the regression function $f$ into upper-side data $\{X_{up}, y_{up}\} \equiv \{x, y \in D \mid f(x) < y\}$ and unlabeled data $X_{un} \equiv \{x \in D \mid y < f(x)\}$. In ordinary regression, where both upper- and lower-side data are correctly labeled for training, the expectations $\mathbb{E}_{up}$ and $\mathbb{E}_{lo}$ in Eq. (3) can be estimated by the corresponding sample averages. In our setting, however, correctly labeled data from the lower-side case are unavailable, and therefore $\mathbb{E}_{lo}$ cannot be estimated directly. We can avoid this problem by expressing $L(f)$ as $$\tilde{L}(f) \equiv \pi_{up}\,\mathbb{E}_{up}[L(f(x), y)] + \mathbb{E}[L(f(x), \tilde{y}_{lo})] - \pi_{up}\,\mathbb{E}_{up}[L(f(x), \tilde{y}_{lo})], \quad (4)$$ where the expectation $\mathbb{E}$ for $x$ can be estimated by computing a sample average for our unlabeled data $X_{un}$, and $\tilde{y}_{lo}$ is a virtual label that is always lower than the estimates of the regression function $f(x)$, whose details will be given in the next paragraph. In this expression, the expected loss $\tilde{L}(f)$ is represented by only the expectations over the upper-side data and unlabeled data, $\mathbb{E}_{up}$ and $\mathbb{E}$. Thus, we can design a gradient-based learning algorithm using our training data. This transformation comes from Eqs. (2) and (3) with $\tilde{y}_{lo}$, as $$\mathbb{E}[L(f(x), \tilde{y}_{lo})] = \pi_{up}\,\mathbb{E}_{up}[L(f(x), \tilde{y}_{lo})] + \pi_{lo}\,\mathbb{E}_{lo}[L(f(x), \tilde{y}_{lo})], \qquad \pi_{lo}\,\mathbb{E}_{lo}[L(f(x), \tilde{y}_{lo})] = \mathbb{E}[L(f(x), \tilde{y}_{lo})] - \pi_{up}\,\mathbb{E}_{up}[L(f(x), \tilde{y}_{lo})]. \quad (5)$$ In practice, we cannot properly set the value of $\tilde{y}_{lo}$ so that it is always lower than $f(x)$. However, for learning based on gradients, this is not needed when we choose loss functions whose gradients do not depend on the value of $\tilde{y}_{lo}$ but only on the sign of $f(x) - \tilde{y}_{lo}$, which is always positive with $\mathrm{sgn}(f(x) - \tilde{y}_{lo}) = 1$ by the definition of $\tilde{y}_{lo}$; i.e., the loss functions satisfy $$\frac{\partial L(f(x), y)}{\partial\theta} = g\big(\mathrm{sgn}(f(x) - y), f(x)\big), \quad (6)$$ where $\theta$ is the parameter vector of $f$, $g(\mathrm{sgn}(f(x) - y), f(x))$ is a gradient function depending only on $\mathrm{sgn}(f(x) - y)$ and $f(x)$, and $\mathrm{sgn}(\cdot)$ is the sign function. Common losses of this kind are the absolute loss and quantile losses. For example, the gradient of the absolute loss $|f(x) - y|$ is $$\frac{\partial\,|f(x) - y|}{\partial\theta} = \begin{cases} \dfrac{\partial f(x)}{\partial\theta} & (\mathrm{sgn}(f(x) - y) = 1) \\ -\dfrac{\partial f(x)}{\partial\theta} & (\mathrm{sgn}(f(x) - y) = -1) \\ \text{undefined} & (\mathrm{sgn}(f(x) - y) = 0), \end{cases} \quad (7)$$ which does not depend on the value of $y$ but only on the sign of $f(x) - y$.
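A hedged NumPy sketch of the resulting stochastic gradient for a linear model $f(x) = \theta^\top x$ with the absolute loss (names ours): since $\mathrm{sgn}(f(x) - \tilde{y}_{lo}) = 1$ by construction, every $\tilde{y}_{lo}$ term in Eq. (4) contributes gradient $\partial f/\partial\theta = x$. We assume here that the expectation $\mathbb{E}$ is approximated over all training inputs in the batch and that $\pi_{up}$ is known or estimated separately:

```python
import numpy as np

def one_side_gradient(theta, X_up, y_up, X_all, pi_up):
    """Gradient of L~(f) in Eq. (4) for f(x) = theta @ x with the absolute loss."""
    f_up = X_up @ theta
    g_labeled = (np.sign(f_up - y_up)[:, None] * X_up).mean(0)  # E_up[dL(f,y)/dtheta]
    # y_lo~ terms: E[df/dtheta] - pi_up * E_up[df/dtheta], with df/dtheta = x
    g_virtual = X_all.mean(0) - pi_up * X_up.mean(0)
    return pi_up * g_labeled + g_virtual

# SGD step: theta -= lr * one_side_gradient(theta, X_up_b, y_up_b, X_b, pi_up)
```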
This paper considers a regression setting in which missing values are observed with lower values than the true values. The authors provide an appealing application for this problem setting. They rewrite the risk and provide an unbiased gradient estimator. However, there is a gap between the estimator and the actual implementation, which makes the overall paper less convincing.
SP:366b8c3549160787f24e8e585953ed99ecdb0aa2
On the Impossibility of Global Convergence in Multi-Loss Optimization
1 INTRODUCTION. Problem Setting. As multi-agent architectures proliferate in machine learning, it is becoming increasingly important to understand the dynamics of gradient-based methods when optimizing multiple interacting goals, otherwise known as differentiable games. This framework encompasses GANs (Goodfellow et al., 2014), intrinsic curiosity (Pathak et al., 2017), imaginative agents (Racanière et al., 2017), synthetic gradients (Jaderberg et al., 2017), hierarchical reinforcement learning (Wayne & Abbott, 2014; Vezhnevets et al., 2017) and multi-agent RL in general (Busoniu et al., 2008). The interactions between learning agents make for vastly more complex mechanics: naively applying gradient descent on each loss simultaneously is known to diverge even in simple bilinear games. Related Work. A large number of methods have recently been proposed to alleviate the failings of simultaneous gradient descent: adaptations of single-loss algorithms such as Extragradient (EG) (Azizian et al., 2019) and Optimistic Mirror Descent (OMD) (Daskalakis et al., 2018), Alternating Gradient Descent (AGD) for finite regret (Bailey et al., 2019), Consensus Optimization (CO) for GAN training (Mescheder et al., 2017), Competitive Gradient Descent (CGD) based on solving a bilinear approximation of the loss functions (Schaefer & Anandkumar, 2019), Symplectic Gradient Adjustment (SGA) based on a novel decomposition of game mechanics (Balduzzi et al., 2018; Letcher et al., 2019a), and opponent-shaping algorithms including Learning with Opponent-Learning Awareness (LOLA) (Foerster et al., 2018) and its convergent counterpart, Stable Opponent Shaping (SOS) (Letcher et al., 2019b). Let A be this set of algorithms. Each has shown promising theoretical implications and empirical results, but none offers insight into global convergence in the non-convex setting, which includes the vast majority of machine learning applications. One of the main roadblocks compared with single-loss optimization has been noted by Schaefer & Anandkumar (2019): "a convergence proof in the nonconvex case analogue to Lee et al. (2016) is still out of reach in the competitive setting. A major obstacle to this end is the identification of a suitable measure of progress (which is given by the function value in the single agent setting), since norms of gradients can not be expected to decay monotonously for competitive dynamics in non-convex-concave games." It has been established that Hamiltonian Gradient Descent converges in two-player zero-sum games under a "sufficiently bilinear" condition by Abernethy et al. (2019), but this algorithm is unsuitable for optimization as it cannot distinguish between minimization and maximization (Hsieh et al., 2020, Appendix C.4). Global convergence has also been established for some algorithms in a few special cases: potential and Hamiltonian games (Balduzzi et al., 2018), zero-sum games satisfying the two-sided Polyak-Łojasiewicz condition (Yang et al., 2020), zero-sum linear quadratic games (Zhang et al., 2019) and zero-sum games whose loss and first three derivatives are bounded (Mangoubi & Vishnoi, 2020). These are significant contributions with several applications of interest, but do not include any of the architectures mentioned above. Finally, Balduzzi et al.
( 2020 ) show that GD dynamics are bounded under a ‘ negative sentiment ’ assumption in smooth markets , which do include GANs – but this does not imply convergence , as we will show . On the other hand , failure of global convergence has been shown for the Multiplicative Weights Update method by Palaiopanos et al . ( 2017 ) , for policy-gradient algorithms by Mazumdar et al . ( 2020 ) , and for simultaneous and alternating gradient descent ( simGD and AGD ) by Vlatakis-Gkaragkounis et al . ( 2019 ) ; Bailey et al . ( 2019 ) , with interesting connections to Poincaré recurrence . Nonetheless , nothing is claimed about other optimization methods . Farnia & Ozdaglar ( 2020 ) show that GANs may have no Nash equilibria , but it does not follow that algorithms fail to converge since there may be locally-attracting but non-Nash critical points ( Mazumdar et al. , 2019 , Example 2 ) . Finally , Hsieh et al . ( 2020 ) uploaded a preprint just after the completion of this work with a similar focus to ours . They prove that generalized Robbins-Monro schemes may converge with arbitrarily high probability to spurious attractors . This includes simGD , AGD , stochastic EG , optimistic gradient and Kiefer-Wolfowitz . However , Hsieh et al . ( 2020 ) focus on the possible occurrence of undesirable convergence phenomena for stochastic algorithms . We instead prove that desirable convergence properties can not simultaneously hold for all algorithms ( including deterministic ) . Moreover , their results apply only to decreasing step-sizes whereas ours include constant step-sizes . These distinctions are further highlighted by Hsieh et al . ( 2020 ) in the further related work section . Taken together , our works give a fuller picture of the failure of global convergence in multi-loss optimization . Contribution . We prove that global convergence in multi-loss optimization is fundamentally incompatible with the ‘ reasonable ’ requirement that algorithms avoid strict maxima and converge only to critical points . We construct a two-player game with zero-sum interactions whose losses are coercive and analytic , but whose only critical point is a strict maximum ( Theorem 1 ) . Reasonable algorithms must either diverge to infinite losses or cycle ( bounded non-convergent iterates ) . One might hope that global convergence could at least be guaranteed in games with strict minima and no other critical points . On the contrary we show that strict minima can have arbitrarily small regions of attraction , in the sense that reasonable algorithms will fail to converge there with arbitrarily high probability for fixed initial parameter distribution ( Theorem 2 ) . Finally , restricting the game class even further , we construct a zero-sum game in which all algorithms in A ( as defined in Appendix A ) are proven to cycle ( Theorem 3 ) . It may be that cycles do not arise in high-dimensional games of interest including GANs . Proving or disproving this is an important avenue for further research , but requires that we recognise the impossibility of global guarantees in the first place . 2 BACKGROUND . 2.1 SINGLE LOSSES : GLOBAL CONVERGENCE OF GRADIENT DESCENT . Given a continuously differentiable function f : Rd → R , let θk+1 = θk − α∇f ( θk ) be the iterates of gradient descent with learning rate α , initialised at θ0 . Under standard regularity conditions , gradient descent converges globally to critical points : Proposition 1 . 
Assume f ∈ C2 has compact sublevel sets and is either analytic or has isolated critical points . For any θ0 ∈ Rd , define U0 = { f ( θ ) ≤ f ( θ0 ) } and let L < ∞ be a Lipschitz constant for ∇f in U0 . Then for any 0 < α < 2/L we have lim_k θ_k = θ̄ for some critical point θ̄ . The requirements for convergence are relatively mild : 1. f has compact sublevel sets iff f is coercive , lim_{‖θ‖→∞} f ( θ ) = ∞ , which mostly holds in machine learning since f is a loss function . 2. f has isolated critical points if it is a Morse function ( nondegenerate Hessian at critical points ) , which holds for almost all C2 functions . More precisely , Morse functions form an open , dense subset of all functions f ∈ C2 ( Rd , R ) in the Whitney C2-topology . 3. Global Lipschitz continuity is not assumed , which would fail even for cubic polynomials . The goal of this paper is to prove that similar ( even weaker ) guarantees can not be obtained in the multi-loss setting – not only for GD , but for any reasonable algorithm . This has to do with the more complex nature of gradient vector fields arising from multiple losses . 2.2 DIFFERENTIABLE GAMES . Following Balduzzi et al . ( 2018 ) , we frame the problem of multi-loss optimization as a differentiable game among cooperating and competing agents/players . These may simply be different internal components of a single system , like the generator and discriminator in GANs . Definition 1 . A differentiable game is a set of n agents with parameters θ = ( θ1 , . . . , θn ) ∈ Rd and twice continuously differentiable losses Li : Rd → R , where θi ∈ Rdi for each i and ∑_i di = d. Losses are not assumed to be convex/concave in any of the parameters . In practice , losses need only be differentiable almost-everywhere : think of neural nets with rectified linear units . If n = 1 , the ‘ game ’ is simply to minimise a given loss function . We write ∇iLk = ∇θiLk and ∇ijLk = ∇θj∇θiLk for any i , j , k , and define the simultaneous gradient of the game ξ = ( ∇_1 L^1 , . . . , ∇_n L^n )^T ∈ Rd as the concatenation of each player ’ s gradient . If each agent independently minimises their loss using GD with learning rate α , the parameter update for all agents is given by θ ← θ − αξ ( θ ) . We call this simultaneous gradient descent ( simGD ) , or GD for short . We call θ̄ a critical point if ξ ( θ̄ ) = 0 . Now introduce the ‘ Hessian ’ ( or Jacobian ) of the game as the block matrix

H = ∇ξ = \begin{pmatrix} ∇_{11}L^1 & \cdots & ∇_{1n}L^1 \\ \vdots & \ddots & \vdots \\ ∇_{n1}L^n & \cdots & ∇_{nn}L^n \end{pmatrix} ∈ R^{d×d} .

Importantly note that H is not symmetric in general unless n = 1 , in which case we recover the usual Hessian H = ∇2L . However H can be decomposed into symmetric and anti-symmetric components as H = S + A ( Balduzzi et al. , 2018 ) . A second useful decomposition has appeared recently in ( Letcher et al. , 2019b ) and ( Schaefer & Anandkumar , 2019 ) : H = Hd + Ho where Hd and Ho are the matrices of diagonal and off-diagonal blocks ; formally , Hd = ⊕_i ∇_{ii}L^i . One solution concept for differentiable games , analogous to the single-loss case , is defined as follows . Definition 2 . A critical point θ̄ is a ( strict , local ) minimum if H ( θ̄ ) ≻ 0 .¹ These were named ( strict ) stable fixed points by Balduzzi et al . ( 2018 ) , but the term is usually reserved in dynamical systems to the larger class defined by Hessian eigenvalues with positive real parts , which is implied by , but not equivalent to , H ≻ 0 for non-symmetric matrices . In particular , strict minima are ( differential ) Nash equilibria as defined by Mazumdar et al .
( 2019 ) , since diagonal blocks must also be positive definite : ∇_{ii}L^i ( θ̄ ) ≻ 0 . The converse does not hold . Algorithm class . This paper is concerned with any algorithm whose iterates are obtained by initialising θ0 and applying a function F to the previous iterates , namely θ_{k+1} = F ( θ_k , . . . , θ_0 ) . This holds for all gradient-based methods ( deterministic or stochastic ) ; most of them are only functions of the current iterate θ_k , so that θ_k = F^k ( θ_0 ) . ( ¹For non-symmetric matrices , positive definiteness is defined as H ≻ 0 iff u^T H u > 0 for all non-zero u ∈ Rd . This is equivalent to the symmetric part S of H being positive definite . ) All probabilistic statements in this paper assume that θ0 is initialised following any bounded and continuous measure ν on Rd . Continuity is a weak requirement and widely holds across machine learning , while boundedness mostly holds in practice since the bounded region can be made large enough to accommodate required initial points . For single-player games , the goal of such algorithms is for θk to converge to a local ( perhaps global ) minimum as k → ∞ . The goal is less clear for differentiable games , but is generally to reach a minimum or a Nash equilibrium . In the case of GANs the goal might be to reach parameters that produce realistic images , which is more challenging to define formally . Throughout the text we use the term ( limit ) cycle to mean bounded but non-convergent iterates . This terminology is used because bounded iterates are non-convergent if and only if they have at least two accumulation points , between which they must ‘ cycle ’ infinitely often . This is not to be taken literally : the set of accumulation points may not even be connected . Hsieh et al . ( 2020 ) provide a more complete characterisation of these cycles . Game class . Expecting global guarantees in all differentiable games is excessive , since every continuous dynamical system arises as simultaneous GD on the loss functions of a differentiable game ( Balduzzi et al. , 2020 , Lemma 1 ) . For this reason , the aforementioned authors have introduced a vastly more tractable class of games called markets . Definition 3 . A ( smooth ) market is a differentiable game where interactions between players are pairwise zero-sum , namely , L^i ( θ ) = L^i ( θ^i ) + ∑_{j≠i} g_{ij} ( θ^i , θ^j ) with g_{ij} ( θ^i , θ^j ) + g_{ji} ( θ^j , θ^i ) = 0 for all i , j . This generalises zero-sum games while remaining amenable to optimization and aggregation , meaning that “ we can draw conclusions about the gradient-based dynamics of the collective by summing over properties of its members ” ( Balduzzi et al. , 2020 ) . Moreover , this class captures a large number of applications including GANs and related architectures , intrinsic curiosity modules , adversarial training , task-suites and population self-play . One would modestly hope for some reasonable algorithm to converge globally in markets . We will prove that even this is too much to ask .
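To make the failure mode concrete , the following is a minimal sketch ( our own illustration , not code from the paper ) of simultaneous gradient descent on the simple bilinear zero-sum game L1 ( x , y ) = xy , L2 ( x , y ) = −xy : the origin is the unique critical point , yet the simGD iterates θ ← θ − αξ ( θ ) spiral away from it for any constant step size .

```python
# Minimal sketch (our illustration): simGD on the bilinear zero-sum game
# L1(x, y) = x*y for player 1 and L2(x, y) = -x*y for player 2.
import numpy as np

def simgd_bilinear(x0=1.0, y0=1.0, alpha=0.1, steps=100):
    x, y = x0, y0
    for _ in range(steps):
        grad_x = y           # dL1/dx for L1 = x*y
        grad_y = -x          # dL2/dy for L2 = -x*y
        # simultaneous update: theta <- theta - alpha * xi(theta)
        x, y = x - alpha * grad_x, y - alpha * grad_y
    return x, y

print(np.hypot(*simgd_bilinear()))  # distance from the origin grows every step
```

The update is the linear map ( x , y ) ↦ ( x − αy , y + αx ) , whose eigenvalues have magnitude √( 1 + α² ) > 1 , which is why the iterates leave every bounded set regardless of the learning rate .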
1. For Theorem 1, as the reviewer understands it, an optimization problem whose only critical point is a strict maximum has only four possible outcomes, which are listed in the theorem. The result seems quite intuitive and provides very limited understanding of the problem. Please list the other possible outcomes for the general problem and state the result in such a way that the paper identifies some impossible outcomes which can be excluded from consideration.
SP:07e927ae4286e3e227bf1c8ed5d17669ee871d96
Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels
1 INTRODUCTION . Neural architecture search ( NAS ) aims to automate the design of good neural network architectures for a given task and dataset . Although different NAS strategies have led to state-of-the-art neural architectures , outperforming human experts ’ design on a variety of tasks ( Real et al. , 2017 ; Zoph and Le , 2017 ; Cai et al. , 2018 ; Liu et al. , 2018a ; b ; Luo et al. , 2018 ; Pham et al. , 2018 ; Real et al. , 2018 ; Zoph et al. , 2018a ; Xie et al. , 2018 ) , these strategies behave in a black-box fashion , which returns little design insight except for the final architecture for deployment . In this paper , we introduce the idea of interpretable NAS , extending the learning scope from simply the optimal architecture to interpretable features . These features can help explain the performance of the searched networks and guide future architecture design . We make the first attempt at interpretable NAS by proposing a new NAS method , NAS-BOWL ; our method combines a Gaussian process ( GP ) surrogate with the Weisfeiler-Lehman ( WL ) subtree graph kernel ( we term this surrogate GPWL ) and applies it within the Bayesian Optimisation ( BO ) framework to efficiently query the search space . During search , we harness the interpretable architecture features extracted by the WL kernel and learn their corresponding effects on the network performance based on the surrogate gradient information . Besides offering a new perspective on interpretability , our method also improves over the existing BO-based NAS approaches . To accommodate the popular cell-based search spaces , which are noncontinuous and graph-like ( Zoph et al. , 2018a ; Ying et al. , 2019 ; Dong and Yang , 2020 ) , current approaches either rely on encoding schemes ( Ying et al. , 2019 ; White et al. , 2019 ) or manually designed similarity metrics ( Kandasamy et al. , 2018 ) , both of which scale poorly to large architectures and ignore the important topological structure of architectures . Another line of work employs graph neural networks ( GNNs ) to construct the BO surrogate ( Ma et al. , 2019 ; Zhang et al. , 2019 ; Shi et al. , 2019 ) ; however , the GNN design introduces additional hyperparameter tuning , and the training of the GNN also requires a large amount of architecture data , which is particularly expensive to obtain in NAS . ( ∗Equal contribution . Codes are available at https : //github.com/xingchenwan/nasbowl ) Our method , instead , uses the WL graph kernel to naturally handle the graph-like search spaces and capture the topological structure of architectures . Meanwhile , our surrogate preserves the merits of GPs in data-efficiency , uncertainty computation and automated hyperparameter treatment . In summary , our main contributions are as follows :
• We introduce a GP-based BO strategy for NAS , NAS-BOWL , which is highly query-efficient and amenable to the graph-like NAS search spaces . Our proposed surrogate model combines a GP with the WL graph kernel ( GPWL ) to exploit the implicit topological structure of architectures . It is scalable to large architecture cells ( e.g . 32 nodes ) and can achieve better prediction performance than competing methods .
• We propose the idea of interpretable NAS based on the graph features extracted by the WL kernel and their corresponding surrogate derivatives . We show that interpretability helps in explaining the performance of the searched neural architectures .
As a concrete example of application , we propose a simple yet effective motif-based transfer learning baseline to warm-start search on new image tasks .
• We demonstrate that our surrogate model achieves superior performance with far fewer observations in search spaces of different sizes , and that our strategy achieves state-of-the-art performance on both NAS-Bench datasets and open-domain experiments while being much more efficient than comparable methods .
2 PRELIMINARIES . Graph Representation of Neural Networks . Architectures in popular NAS search spaces can be represented as an acyclic directed graph ( Elsken et al. , 2018 ; Zoph et al. , 2018b ; Ying et al. , 2019 ; Dong and Yang , 2020 ; Xie et al. , 2019 ) , where each graph node represents an operation unit or layer ( e.g . a conv3×3-bn-relu in Ying et al . ( 2019 ) ) and each edge defines the information flow from one layer to another . With this representation , NAS can be formulated as an optimisation problem to find the directed graph and its corresponding node operations ( i.e . the directed attributed graph G ) that give the best architecture validation performance y ( G ) : G∗ = arg max_G y ( G ) . Bayesian Optimisation and Gaussian Processes . To solve the above optimisation , we adopt BO , which is a query-efficient technique for optimising a black-box , expensive-to-evaluate objective ( Brochu et al. , 2010 ) . BO uses a statistical surrogate to model the objective and builds an acquisition function based on the surrogate . The next query location is recommended by optimising the acquisition function , which balances exploitation and exploration . We use a GP as the surrogate model in this work , as it can achieve competitive modelling performance with a small amount of query data ( Williams and Rasmussen , 2006 ) and gives an analytic predictive posterior mean µ ( G_t | D_{t−1} ) and variance k ( G_t , G′_t | D_{t−1} ) on the heretofore unseen graph G_t given t − 1 observations :

µ ( G_t | D_{t−1} ) = k ( G_t , G_{1:t−1} ) K_{1:t−1}^{−1} y_{1:t−1} ,
k ( G_t , G′_t | D_{t−1} ) = k ( G_t , G′_t ) − k ( G_t , G_{1:t−1} ) K_{1:t−1}^{−1} k ( G_{1:t−1} , G′_t ) ,

where G_{1:t−1} = { G_1 , . . . , G_{t−1} } and y_{1:t−1} = [ y_1 , . . . , y_{t−1} ]^T are the t − 1 observed graphs and objective function values , respectively , and D_{t−1} = { G_{1:t−1} , y_{1:t−1} } . [ K_{1:t−1} ]_{i,j} = k ( G_i , G_j ) is the ( i , j ) -th element of the Gram matrix induced on the training samples by k ( · , · ) , the graph kernel function . We use Expected Improvement ( Mockus et al. , 1978 ) in this work , though our approach is compatible with alternative choices . Graph Kernels . Graph kernels are kernel functions defined over graphs to compute their level of similarity . A generic graph kernel may be represented by the function k ( · , · ) over a pair of graphs G and G′ ( Kriege et al. , 2020 ) : k ( G , G′ ) = ⟨ φ ( G ) , φ ( G′ ) ⟩_H ( 2.1 ) , where φ ( · ) is some feature representation of the graph extracted by the graph kernel and ⟨ · , · ⟩_H denotes the inner product in the associated reproducing kernel Hilbert space ( RKHS ) ( Nikolentzos et al. , 2019 ; Kriege et al. , 2020 ) . For more detailed reviews on graph kernels , the readers are referred to Nikolentzos et al . ( 2019 ) , Ghosh et al . ( 2018 ) and Kriege et al . ( 2020 ) . Algorithm 1 NAS-BOWL Algorithm . Optional steps of the exemplary use of motif-based warm starting ( Sec 3.2 ) are marked in gray italics .
1 : Input : maximum BO iterations T , BO batch size b , acquisition function α ( · ) , initial observed data on the target task D0 . Optional : past-task query data Dpast and surrogate Spast
2 : Output : the best architecture G∗T
3 : Initialise the GPWL surrogate S with D0
4 : for t = 1 , . . . , T do
5 :     if pruning based on the past-task motifs then
6 :         Compute the motif importance scores ( equation 3.4 ) with Spast/S on Dpast/Dt
7 :         while |Gt| < B do
8 :             Generate a batch of candidate architectures and reject those which contain none of the top 25 % good motifs ( similar procedure as Fig . 2 ( a ) )
9 :         end while
10 :    else
11 :        Generate B candidate architectures Gt
12 :    end if
13 :    { G_{t,i} }_{i=1}^{b} = argmax_{G∈Gt} α_t ( G | D_{t−1} )
14 :    Evaluate their validation accuracy { y_{t,i} }_{i=1}^{b}
15 :    D_t ← D_{t−1} ∪ ( { G_{t,i} }_{i=1}^{b} , { y_{t,i} }_{i=1}^{b} )
16 :    Update the surrogate S with D_t
17 : end for
18 : Return the best architecture seen so far G∗T
3 PROPOSED METHOD . We begin by presenting our proposed algorithm , NAS-BOWL , in Algorithm 1 , where there are a few key design features , namely the design of the GP surrogate suitable for architecture search ( we term the surrogate GPWL ) and the method to generate candidate architectures at each BO iteration . We will discuss the first one in Section 3.1 . For architecture generation , we either generate the new candidates via randomly sampling the adjacency matrices , or use a mutation algorithm similar to those used in a number of previous works ( Kandasamy et al. , 2018 ; Ma et al. , 2019 ; White et al. , 2019 ; Shi et al. , 2019 ) : at each iteration , we generate the architectures by mutating a number of queried architectures that perform the best . Generating candidate architectures in this way enables us to exploit the prior information on the best architectures observed so far to explore the large search space more efficiently . We report NAS-BOWL with both strategies in our experiments . Finally , to give a demonstration of the new possibilities opened by our work , we give an exemplary practical use of interpretable motifs for transfer learning in Algorithm 1 , which is elaborated in Sec 3.2 . 3.1 SURROGATE AND GRAPH KERNEL DESIGN . To enable the GP to work effectively on the graph-like architecture search space , selecting a suitable kernel function is arguably the most important design decision . We propose to use the Weisfeiler-Lehman ( WL ) graph kernel ( Shervashidze et al. , 2011 ) to enable the direct definition of a GP surrogate on the graph-like search space . The WL kernel compares two directed graphs based on both local and global structures . It starts by comparing the node labels of both graphs via a base kernel k_base ( φ_0 ( G ) , φ_0 ( G′ ) ) , where φ_0 ( G ) denotes the histogram of features at level h = 0 ( i.e . node features ) in the graph , and h is both the index of WL iterations and the depth of the subtree features extracted . For the WL kernel with h > 0 , as shown in Fig . 1 , it then proceeds to collect features at h = 1 by aggregating neighbourhood labels , and compares the two graphs with k_base ( φ_1 ( G ) , φ_1 ( G′ ) ) based on the subtree structures of depth 1 ( Shervashidze et al. , 2011 ; Höppner and Jahnke , 2020 ) . The procedure then repeats until the highest specified iteration level h = H , and the resulting WL kernel is given by :

k^H_{WL} ( G , G′ ) = ∑_{h=0}^{H} k_base ( φ_h ( G ) , φ_h ( G′ ) ) . ( 3.1 )

In the above equation , k_base is a base kernel ( such as the dot product ) over the vector feature embedding .
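To make equation 3.1 concrete , here is a minimal sketch ( our own code ; the paper's implementation uses compressed relabelling on directed graphs , see App . A ) of the WL subtree features and the resulting kernel with a dot-product base kernel :

```python
# Minimal sketch (our illustration) of the WL subtree kernel of equation 3.1.
# At each level h, every node label is refined with the sorted multiset of its
# neighbours' labels; graphs are compared via histograms over levels 0..H.
from collections import Counter

def wl_features(adj, labels, H=2):
    # adj: dict node -> list of neighbours; labels: dict node -> label string
    hist = Counter(labels.values())              # h = 0: raw node labels
    for _ in range(H):
        labels = {v: labels[v] + "|" + ".".join(sorted(labels[u] for u in adj[v]))
                  for v in adj}                  # refine by neighbourhood labels
        hist.update(labels.values())             # add level-h subtree patterns
    return hist

def wl_kernel(g1, g2, H=2):
    f1, f2 = wl_features(*g1, H), wl_features(*g2, H)
    return sum(f1[k] * f2[k] for k in f1)        # dot-product base kernel

g = ({0: [1], 1: [0, 2], 2: [1]}, {0: "conv3x3", 1: "relu", 2: "maxpool"})
print(wl_kernel(g, g))
```

Each histogram entry at level h counts a subtree pattern of depth h rooted at some node , which is exactly the kind of feature later inspected for interpretability .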
As h increases , the WL kernel captures higher-order features which correspond to increasingly larger neighbourhoods , and the features at each h are concatenated to form the final feature vector ( φ ( G ) = [ φ_0 ( G ) , ... , φ_H ( G ) ] ) . The readers are referred to App . A for more detailed algorithmic descriptions of the WL kernel . We argue that WL is desirable for three reasons . First , in contrast to many ad hoc approaches , WL is established with proven successes on labelled and directed graphs , by which networks are represented . Second , the WL representation of graphs is expressive , topology-preserving yet interpretable : Morris et al . ( 2019 ) show that WL is as powerful as standard GNNs in terms of discrimination power . However , GNNs require a relatively large amount of training data and are thus less data-efficient ( we validate this in Sec . 5 ) . Also , the features extracted by GNNs are harder to interpret compared to those by WL . Note that the WL kernel by itself only measures the similarity between graphs and does not aim to select useful substructures explicitly . It is our novel deployment of the WL procedure ( App . A ) for the NAS application that leads to the extraction of interpretable features while comparing different architectures . We further make smart use of these network features to help explain the architecture performance in Sec 3.2 . Finally , WL is efficient and scalable : denoting { n , m } as the number of nodes and edges respectively , computing the Gram matrix on N training graphs scales as O ( NHm + N²Hn ) ( Shervashidze et al. , 2011 ) . As we show in App . E.3 , in typical cell-based spaces H ≤ 3 suffices , suggesting that the kernel computation cost is likely eclipsed by the O ( N³ ) scaling of the GP we incur nonetheless . This is to be contrasted with approaches such as path encoding in White et al . ( 2019 ) , which scales exponentially with n without truncation , and the edit distance kernel in Jin et al . ( 2019 ) , whose exact solution is NP-complete ( Zeng et al. , 2009 ) . With the above-mentioned merits , the incorporation of the WL kernel permits the usage of GP-based BO on various NAS search spaces . This enables practitioners to harness the rich literature of GP-based BO methods on hyperparameter optimisation and redeploy them on NAS problems . Most prominently , the use of the GP surrogate frees us from hand-picking the WL hyperparameter H , as we can automatically learn the optimal value by maximising the Bayesian marginal likelihood . As we will justify in Sec . 5 and App . E.3 , this process is extremely effective . This is a further major advantage of our method , as it has no inherent hyperparameters that require manual tuning . This is in line with our belief that a practical NAS method should itself require minimal tuning , as it is almost impossible to run traditional hyperparameter search given the vast resources required . Other enhancements , such as improving the expressiveness of the surrogate by combining multiple types of kernels , are briefly investigated in App . C. We find that the amount of performance gain depends on the NAS search space and that a WL kernel alone suffices for common cell-based spaces .
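Given the Gram matrix of such a kernel , the predictive posterior of the GPWL surrogate from Section 2 reduces to a few lines of linear algebra . The sketch below is our own code , with an assumed small noise jitter added for numerical stability :

```python
# Sketch of the GP posterior mean/variance used by the GPWL surrogate.
import numpy as np

def gp_posterior(K, y, k_star, k_star_star, jitter=1e-6):
    # K: (t-1, t-1) Gram matrix on observed graphs; y: (t-1,) accuracies
    # k_star: (t-1,) kernel values between the new graph and the observations
    K_inv = np.linalg.inv(K + jitter * np.eye(len(K)))
    mean = k_star @ K_inv @ y
    var = k_star_star - k_star @ K_inv @ k_star
    return mean, var

K = np.array([[2.0, 1.0], [1.0, 2.0]])           # toy WL Gram matrix
mu, var = gp_posterior(K, np.array([0.91, 0.94]), np.array([1.5, 0.5]), 2.0)
print(mu, var)
```

In practice one would use a Cholesky solve rather than an explicit inverse , but the formulas match the posterior equations above .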
The authors propose a new neural architecture search algorithm combining Bayesian optimization with the expressive and popular Weisfeiler-Lehman (WL) graph kernel. One advantage of using WL is the interpretable results that stem from the nature of how the kernel is computed, namely a propagation scheme through the graph. Combined with the derivative of Eq. 3.2, one can extract subgraphs that are directly responsible for increased performance. In a variety of experiments, the authors show not only increased performance of the detected architectures but also find subgraphs that other algorithms discover as well.
SP:bac034cc8f02b43a03e24f0a8d327c4b68afed09
Adversarial Environment Generation for Learning to Navigate the Web
1 INTRODUCTION . Autonomous web navigation agents that complete tedious , digital tasks , such as booking a flight or filling out forms , have the potential to significantly improve user experience and systems ’ accessibility . The agents could enable a user to issue requests such as , “ Buy me a plane ticket to Los Angeles leaving on Friday ” , and have the agent automatically handle the details of completing these tasks . However , the complexity and diversity of real-world websites make this a formidable challenge . General web navigation form-filling tasks such as these require an agent to navigate through a set of web pages , matching the user ’ s information to the appropriate elements on a web page . This is a highly challenging decision-making problem for several reasons . First , the observation space is large and partially observable , consisting of a single web page in the flow of several web pages ( e.g . the payment information page is only one part of a shopping task ) . Web pages are represented using the Document Object Model ( DOM ) , a tree of web elements with hundreds of nodes . Second , actions are all possible combinations of the web elements ( fill-in boxes , drop-downs , clicks on buttons ) and their possible values . For example , drop-down selection actions are only appropriate if there is a drop-down menu present . Even if the agent is able to navigate the site to arrive at the correct page , and eventually select the correct element ( e.g . the ‘ departure ’ field for booking a flight ) , there are many possible values it can insert ( e.g . all user input ) . Therefore , the action space is discrete and prohibitively large , with the set of valid actions changing with the context . Finally , the same task , such as booking a flight , results in a very different experience and workflow depending on the website . The agent must be able to adapt and operate in the new environment to complete the task . Therefore , reinforcement learning ( RL ) agents should be capable of zero-shot generalization to new environments . Prior work made significant strides toward learning web navigation on a single website , yet the existing methods do not scale . Behavior cloning from expert demonstrations ( Shi et al. , 2017 ; Liu et al. , 2018 ) shows promising results ; however , it requires demonstrations for every single website . An RL agent trained using synthetic demonstrations created with a generative model ( Gur et al. , 2019 ) improves the performance . Yet the method still requires training a separate policy for every single website , at the cost of tens of thousands of interactions per website . Lastly , the existing benchmarks ( Shi et al. , 2017 ; Liu et al. , 2018 ) have limited complexity . Their DOM trees are fixed and considerably smaller than real websites . We aim to train RL agents to solve web navigation form-filling tasks by correctly entering relevant information into unknown websites . Successful generalization to new websites requires training an agent on a large distribution of possible tasks and environments . The question is how to create a distribution that will not only cover most realistic tasks , but can be presented in a curriculum that is learnable by the agent . Manually designing a pre-defined curriculum of hand-built websites is tedious and intractable . Another option would be to apply domain randomization ( DR ) ( as in e.g . Jakobi ( 1997 ) ; Sadeghi & Levine ( 2016 ) ; Tobin et al .
( 2017 ) ) to randomize parameters of websites , or automatically increase some parameter controlling the difficulty over time ( as in Gur et al . ( 2019 ) ) . However , all these approaches are likely to fail to cover important test cases , and can not tailor the difficulty of the parameter configuration to the current ability of the agent . Adversarial Environment Generation ( AEG ) trains a learning adversary to automatically generate a curriculum of training environments , enabling both increased complexity of training environments , and generalization to new , unforeseen test environments . However , if we naively train a minimax adversary—i.e . an adversary that seeks to minimize the performance of the learning agent—the adversary is motivated to create the hardest possible website , preventing learning . Instead , PAIRED ( Protagonist Antagonist Induced Regret Environment Design ) ( Dennis et al. , 2020 ) trains the adversary to maximize the regret , estimated as the difference between two navigation agents ( protagonist and antagonist ) . While PAIRED shows exciting results , without explicit feedback on how skillful the antagonist is and without a mechanism to control the difficulty of the environment , the method is susceptible to local minima , and has a hard time learning in complex environments when the regret is zero . We present Flexible b-PAIRED , which builds on the PAIRED framework and jointly trains the adversarial RL agent ( adversary ) and a population of navigator agents . The Flexible b-PAIRED adversary learns to present “ just the right challenge ” to the navigation agents . We enable the Flexible b-PAIRED adversary to tailor the environment difficulty to the ability of the best-performing agent by introducing an explicit difficulty budgeting mechanism and a novel multi-objective loss function . The budgeting mechanism gives the adversary direct control over the difficulty of the generated environment . The adversary training simultaneously optimizes an objective that ties the adversary ’ s difficulty budget to the navigator agent ’ s performance ( observed expected return ) , and the population-based regret similar to PAIRED . Lastly , to enable AEG web design , we present a new benchmarking environment , gMiniWoB , and a web-design adversary architecture . gMiniWoB enables an adversary to construct websites of increasing complexity out of common design primitives such as navigation bars , product carousels , item decks , web forms , and item carts . The evaluation environments in gMiniWoB are an order of magnitude more complex than MiniWoB ( Shi et al. , 2017 ) . The adversary architecture is an LSTM-based decoder , seeded with a random seed . It first selects the number of web pages . Then , at each step of an open loop , the adversary either emits a design element and its placement , or opts to skip an element and save design budget . The adversary ’ s used difficulty budget is the log-likelihood of the joint probability of not adding design elements .
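The excerpt does not spell out the multi-objective loss , so the following is only a sketch of its flavour ( the function name , the weighting and the sign conventions are our assumptions ) : the adversary is rewarded for regret against the best-performing navigator , and for spending difficulty budget only when that navigator is doing well .

```python
# Hypothetical sketch of a budgeted, regret-based adversary reward; the exact
# Flexible b-PAIRED objective is not reproduced in this excerpt.
def adversary_reward(navigator_returns, antagonist_return, used_budget, beta=1.0):
    best_return = max(navigator_returns)        # best agent in the population
    regret = antagonist_return - best_return    # PAIRED-style regret estimate
    # Assumption: reward spending budget (harder sites) in proportion to how
    # well the best navigator is already doing, so difficulty tracks skill.
    budget_bonus = beta * best_return * used_budget
    return regret + budget_bonus

print(adversary_reward([0.2, 0.6], antagonist_return=0.8, used_budget=3.0))
```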
This paper makes the following contributions : i ) a new benchmarking environment , gMiniWoB , which empowers the use of AEG for web navigation by enabling the construction of websites out of compositional design primitives ; ii ) the Flexible b-PAIRED algorithm , which computes a more stable estimate of regret and directly incentivizes the adversary to tailor the complexity of the generated environment to the performance of the best-performing agent ; iii ) a web navigation adversary decoder architecture ; and iv ) empirical results demonstrating that Flexible b-PAIRED generates a curriculum of increasingly challenging websites , and produces agents that can successfully generalize to navigating complex , unseen sites at test time . The Flexible b-PAIRED approach significantly outperforms prior work on minimax regret AEG ( Dennis et al. , 2020 ) , as well as a state-of-the-art approach for using RL to train web navigation agents ( Gur et al. , 2019 ) , resulting in agents that complete the most difficult tasks with a more than 75 % success rate , a 4x improvement over the strongest baseline . We are releasing gMiniWoB as open source in the hope of enabling further progress on this problem . We hope that this work will provide a meaningful way to make progress on the exceptionally challenging problem of learning to navigate the web , and will be of interest to the wider RL research community for auto-curriculum design in complex and compositional environments . 2 RELATED WORK . Web navigation benchmarks and tasks : Prior work on training agents to navigate the web introduced the MiniWoB ( Shi et al. , 2017 ) and MiniWoB++ ( Liu et al. , 2018 ) environments , a fixed set of manually curated toy websites , but relied on obtaining expert demonstrations for each website , which can not scale effectively to cover the large variety of real-world websites , and can not adapt to changing websites . Further , these methods failed to solve complex web navigation tasks such as flight booking or social media interaction ( Gur et al. , 2019 ) . Gur et al . ( 2019 ) take a step further by training an RL agent to solve complex web navigation tasks using a scheduled curriculum . The curriculum linearly increases a parameter p , where 1 − p controls the number of web elements that are solved by querying an oracle policy , which is obtained via expert data . This work differs in several ways . First , we introduce a new framework , gMiniWoB , that allows generating complex websites on-the-fly with tunable difficulty levels . Additionally , we do not rely on any expert demonstrations to augment sparse rewards . We use AEG to automatically learn to generate a curriculum of web navigation tasks that are tailored to the current skill level of the agent . Next , we make no assumption on the availability of any website , while they assume websites are given a priori . Lastly , our web navigation agents generalize to unseen environments . Goal Generation : Florensa et al . ( 2018 ) train a Generative Adversarial Network ( GAN ) for generating a curriculum of goals with fixed environment dynamics . A generator is trained to output new goals and the discriminator is trained to predict if the goal is achievable . The generator is bootstrapped from sample goals that the initial agent is able to reach in the environment . It is tested on simple navigation tasks with the same environments .
In contrast , we train an adversary that generates a curriculum of environments , including goals , starting with an empty environment in which bootstrapping a generator network from sample episodes is not possible . We test on unseen environments with more complicated and high-dimensional state and action spaces . Adversarial Environment Generation : Multi-agent training can be an effective method for automatically generating a curriculum of RL tasks ( e.g . Leibo et al . ( 2019 ) ; Matiisen et al . ( 2019 ) ; Graves et al . ( 2017 ) ; Portelas et al . ( 2020 ) ) . For example , Asymmetric Self Play ( ASP ) ( Sukhbaatar et al. , 2017 ) trains two agents , in which the second agent must learn to repeat the actions taken by the first , demonstrator agent . Both agents play in the same , fixed environment . In contrast , we use a third agent to learn to generate challenging new environments . POET ( Wang et al. , 2019 ; 2020 ) is an AEG technique which uses a population of adversaries to generate the terrain a 2D walker agent must learn to navigate . [ Figure 2 : An example underspecified DOM tree template ( a ) and its instantiations ( b , c ) with different values . ( a ) An underspecified DOM tree template : the text box is always included ; its text and label element are variables . ( b ) A fully specified DOM primitive where a label is created and its text is assigned . ( c ) A fully specified DOM primitive where only the inner text within the text box is assigned . ( * ) indicates a variable , either an element or one of its attributes . ( c ) is used in Page 1 and ( b ) is used in Page 2 in Figure 3 . ] To create a curriculum , POET requires generating many new environments , testing all agents within each one , and discarding environments based on a manually chosen reward threshold , which wastes a significant amount of computation . Campero et al . ( 2020 ) use a teacher to propose navigation tasks ; the teacher ’ s reward is based on whether the agent takes more steps than a threshold , a hyperparameter that is linearly increased over the course of training . Most closely related to our work is PAIRED ( Dennis et al. , 2020 ) , which is an AEG method for training agents with minimal regret that works by constraining the environment-generating adversary using the performance of a second agent . However , PAIRED only demonstrated results on simple gridworld environments , and did not expand to the type of complex , high-dimensional state-action space required for web navigation . We improve on PAIRED using a more flexible estimate of the regret , as well as a budget mechanism , and show that this significantly improves performance . RL with Autoregressive Models : Keneshloo et al . ( 2020 ) outline training sequence-to-sequence ( seq2seq ) models with RL algorithms . Previous approaches first pretrain a seq2seq model with ground-truth inputs and outputs and then fine-tune it with RL using different reward functions such as BLEU score . In this work , we propose a decoder-like autoregressive adversary model that is trained without any ground-truth data .
The model is fed its own predictions from previous time steps and updated using a novel adversarial objective .
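As an illustration of such a decoder , here is a minimal PyTorch sketch ( the class name , dimensions and sampling scheme are hypothetical ; the paper's exact architecture is not reproduced in this excerpt ) that first samples a number of pages from a random seed and then autoregressively emits either a design primitive with its placement or a budget-saving SKIP action .

```python
# Hypothetical sketch of an LSTM decoder adversary for website generation.
import torch
import torch.nn as nn

class WebDesignAdversary(nn.Module):
    def __init__(self, n_primitives, max_pages=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.page_head = nn.Linear(hidden, max_pages)
        # one logit per (primitive, page) pair, plus one SKIP action
        self.action_head = nn.Linear(hidden, n_primitives * max_pages + 1)

    def forward(self, seed, steps=10):
        h = c = torch.zeros(1, self.lstm.hidden_size)
        n_pages = torch.distributions.Categorical(
            logits=self.page_head(seed)).sample() + 1
        actions, inp = [], seed
        for _ in range(steps):
            h, c = self.lstm(inp, (h, c))
            dist = torch.distributions.Categorical(logits=self.action_head(h))
            a = dist.sample()        # a primitive+placement, or SKIP
            actions.append((a.item(), dist.log_prob(a)))
            inp = h                  # feed the state back in (open-loop decoding)
        return n_pages.item(), actions

adv = WebDesignAdversary(n_primitives=8)
print(adv(torch.randn(1, 64))[0])
```

Summing the SKIP log-probabilities over the rollout would give the consumed-budget quantity described above .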
This paper improves upon existing approaches for learning to fill forms on the web automatically. The main idea is to train an adversary to generate a curriculum of environments for training the form-filling agent. Training such an adversary can be challenging since the adversary may prove to be too strong for the main agent to learn anything from. Thus, the paper proposes a few techniques to control or shape this adversary such that the main agent is able to learn quickly as compared to similar existing approaches.
SP:90e01288266255a58201a01f06dd8fcc4cac4034
Uncertainty-Based Adaptive Learning for Reading Comprehension
1 INTRODUCTION . The goal of machine reading comprehension ( MRC ) is to train an AI model which is able to understand natural language text ( e.g . a passage ) , and answer questions related to it ( Hirschman et al. , 1999 ) ; see Figure 1 for an example . MRC has been one of the most important problems in natural language processing thanks to its various successful applications , such as smooth-talking AI speaker assistants – a technology that was highlighted as among 10 breakthrough technologies by MIT Technology Review very recently ( Karen , 2019 ) . Of central importance to MRC is the availability of benchmarking question-answering datasets , where a larger dataset often enables training of more informative neural networks . In this regard , there have been a number of benchmark datasets proposed in recent years , with the effort of pushing forward the development of MRC . A partial list includes SQuAD ( Rajpurkar et al. , 2016 ) , NewsQA ( Trischler et al. , 2017 ) , MSMARCO ( Nguyen et al. , 2016 ) , and Natural Questions ( Kwiatkowski et al. , 2019 ) . While the emergence of these high-quality datasets has stimulated a surge of research and a large volume of MRC deployments , it is often challenging to go beyond the scale of the current architectures of neural networks , in that it is extremely expensive to obtain massive amounts of labeled data . The barrier of data collection can be seen from SQuAD : the research group at Stanford University spent 1,547 working hours on the annotation of the SQuAD dataset , at a cost of over $ 14,000 . This issue was also set out and addressed by AI companies . However , even equipped with machine learning assisted labeling tools ( e.g . Amazon SageMaker Ground Truth ) , it is still expensive to hire and educate expert workers for annotation . What makes the issue more serious is that there is a rise in security and privacy concerns in various problems , which prevents researchers from scaling their projects to diverse domains efficiently . For example , all annotators are advised to get a series of training on privacy rules , such as the Health Insurance Portability & Accountability Act , before they can work on medical records . In this work , we tackle the challenge by proposing a computationally efficient learning algorithm that is amenable to label-demanding problems . Unlike prior MRC methods that separate data annotation and model training , our algorithm interleaves these two phases . Our algorithm , in spirit , resembles the theme of active learning ( Balcan et al. , 2007 ) , where the promise of active learning is that we can always concentrate on fitting only the most informative examples without suffering degraded performance . While there have been a considerable number of works showing that active learning often guarantees exponential savings of labels , the analysis typically holds for linear classification models ( Awasthi et al. , 2017 ; Zhang , 2018 ; Zhang et al. , 2020 ) . In stark contrast , less is explored for the more practical neural network based models , since it is nontrivial to extend important concepts such as the large margin of linear classifiers to neural networks . As a remedy , we consider an unsupervised sampling scheme based on the uncertainty of the instances ( Settles , 2009 ) . Our sampling scheme is adaptive ( i.e . active ) in the sense that it chooses instances that the currently learned model is most uncertain on .
To this end , we recall that the purpose of MRC is to take as input a passage and a question , and find the most accurate answer from the passage . Roughly speaking , this can be thought of as a weight assignment problem , where we need to calculate how likely each word span in the passage could be the correct answer . Ideally , we would hope that the algorithm assigns 1 to the correct answer , and assigns 0 to the rest , leading to a large separation between the correct answer and the incorrect ones . Alternatively , if the algorithm assigns , say , 0.5 to two different answers and 0 to the others , then it is very uncertain about its response – this is a strong signal that we need to query an expert for the correct answer , i.e . perform active labeling . Our uncertainty-based sampling scheme is essentially motivated by this observation : the uncertainty of an instance ( i.e . a pair of passage and question ) is defined as the gap between the weight of the best candidate answer and the second best . We will present a more formal description in Section 2 . After identifying these most uncertain , and hence most informative instances , we query their labels and use them to update the model . In this phase , in addition to minimizing the widely used entropy-based loss function , we consider an adaptive regularizer which has two important properties . First , it enforces that the new model will not deviate far from the current model , since 1 ) with reasonable initialization we would expect that the initial model should not perform too badly ; and 2 ) we do not want to overfit the data even if they are recognized as informative . Second , the regularizer has a coefficient that increases with iterations . Namely , as the algorithm proceeds , the stability of model updating outweighs loss minimization . In Section 2 we elaborate on the concrete form of our objective function . It is also worth mentioning that since in each iteration the algorithm only fits the uncertain instances , model updating is faster than in traditional methods . The pipeline is illustrated in Figure 2 . Given abundant unlabeled instances , our algorithm first evaluates their uncertainty and detects the most informative ones , marked in red . Then we send these instances to an expert to obtain the ground-truth answers , marked in yellow . With the newly added labeled samples , it is possible to perform incremental updating of the MRC model . Roadmap . We summarize our main technical contributions below , and discuss more related works in Section 5 . In Section 2 we present a detailed description of the core components of our algorithm , and in Section 3 we provide an end-to-end learning paradigm for MRC with implementation details . In Section 4 , we demonstrate the efficacy of our algorithm in terms of exact match , F-1 score , and the savings of labels . Finally , we conclude this paper in Section 6 . 1.1 SUMMARY OF CONTRIBUTIONS . We consider the problem of learning an MRC model in the label-demanding context , and we propose a novel algorithm that interleaves data annotation and model updating . In particular , there are two core components to this end : an unsupervised uncertainty-based sampling scheme that only queries labels of the most informative instances with respect to the currently learned model , and an adaptive loss minimization paradigm that simultaneously fits the data and controls the degree of model updating .
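A minimal sketch of what such an objective could look like ( our reading of the description ; the paper's concrete form is deferred to its Section 2 ) : a cross-entropy term on the newly labeled batch plus a proximal term that keeps the new model close to the current one , with a coefficient that grows with the iteration index t .

```python
# Sketch (assumed form) of the adaptive regularized objective described above.
import torch
import torch.nn.functional as F

def adaptive_loss(model, ref_params, batch, t, lam0=0.1):
    x, a = batch                                  # instances and one-hot answers
    ce = F.cross_entropy(model(x), a.argmax(dim=1))
    # proximal regularizer: penalize deviation from the current model
    prox = sum((p - q).pow(2).sum()
               for p, q in zip(model.parameters(), ref_params))
    return ce + lam0 * t * prox                   # coefficient increases with t

# usage: ref = [p.detach().clone() for p in model.parameters()]
#        loss = adaptive_loss(model, ref, (x, a), t=3); loss.backward()
```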
Moreover , our approach is modular in nature , meaning that the community would benefit from this work by leveraging our techniques in more real-world problems ( e.g . image classification ) where the availability of labels is a major concern . 2 ALGORITHM . In this section , we formally introduce the problem setup and our main algorithm ALBUS ( Algorithm 1 ) . We use x : = ( p , q ) to represent a pair of passage p and question q , which is also called an unlabeled instance , or simply an instance . If there are multiple questions , say q1 and q2 , to the same passage p , we will use two instances x1 : = ( p , q1 ) and x2 : = ( p , q2 ) . Given an instance x , our goal is to predict an answer . We use a zero-one vector a to indicate the correct answer , and ( x , a ) is called a labeled instance . The prediction made by the learner is denoted by â . We will always assume that all the coordinates of â are non-negative and that their sum equals one , which can be easily satisfied if the last layer of the neural network is a softmax . 2.1 UNSUPERVISED UNCERTAINTY-BASED RANDOM SAMPLING . Since data annotation is expensive , we treat the problem as one in which all the instances are unlabeled before running the algorithm , and as the algorithm proceeds , it may adaptively detect the most informative instances to be labeled by experts or crowd workers . Thus , the central questions to learning are : 1 ) how to measure the informativeness of the unlabeled instances in a computationally efficient manner ; and 2 ) how to select a manageable number of instances for annotation ( since the algorithm might identify a bunch of useful instances ) . We address both questions in the following . 2.1.1 METRIC OF INFORMATIVENESS . Intuition . We first address the first question , i.e . design a metric to evaluate the informativeness . To ease the discussion , suppose that for a given instance x , there are only two answers to choose from , i.e . a is a two-dimensional vector , and that the algorithm has been initialized , e.g . via pre-training . If the current model takes as input x , and predicts â = ( 1 , 0 ) , then we think of this instance as less informative , in that the algorithm has extremely high confidence in its prediction .¹ On the other end of the spectrum , if the prediction is â = ( 0.5 , 0.5 ) , then it indicates that the current model is not able to distinguish the two answers . Thus , sending the correct answer a together with the instance to the algorithm will lead to significant progress . We observe that underlying the intuition is a notion of separation between the answer with the highest confidence and that with the second highest , denoted by ∆w ( x ) , where w denotes the current model parameters . ( ¹The algorithm may of course make a mistake , but this will be treated by future model updating . Here we are just giving an intuitive explanation following the idealized scenario . )
Algorithm 1 ALBUS : Adaptive Learning By Uncertainty-Based Sampling
Require : a set of unlabeled instances U = { x1 , . . . , xn } , initial MRC model w0 , maximum iteration number T , thresholds { τ1 , . . . , τT } , number of instances to be labeled n0 .
Ensure : A new MRC model wT .
1 : U1 ← U .
2 : for t = 1 , · · · , T do
3 :     Compute ∆wt−1 ( x ) for all x ∈ Ut .
4 :     Bt ← { x ∈ Ut : ∆wt−1 ( x ) ≤ τt } .
5 :     Compute the sampling probability Pr ( x ) for all x ∈ Bt .
6 :     St ← randomly choose n0 instances from Bt by the distribution { Pr ( x ) } x∈Bt , and query their labels .
7 :     Update the model wt ← arg min_w L ( w ; St ) .
8 :     Ut+1 ← Ut \ St .
9 : end for
In fact , let our algorithm be a function fw : x ↦ â . Denote by â_( 1 ) and â_( 2 ) the highest and second highest values in â . Then ∆w ( x ) = â_( 1 ) − â_( 2 ) . ( 1 ) Given the unlabeled training set { x1 , x2 , . . . , xn } and the currently learned model , we can evaluate the degrees of separation { ∆1 , ∆2 , . . . , ∆n } , where we write ∆i : = ∆w ( xi ) to reduce notation clutter since most of the time the model w is clear from the context . This answers the first question proposed at the beginning of the section , i.e . how to measure the informativeness of the instances .
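The core of steps 3–6 fits in a few lines ; the sketch below is our implementation of the pseudocode , with one loudly flagged assumption : the paper does not specify Pr ( x ) in this excerpt , so we weight instances inversely to their separation gap .

```python
# Sketch of ALBUS steps 3-6: compute the separation of eq. (1), threshold it,
# and randomly sample n0 uncertain instances to be labeled.
import numpy as np

def separation(probs):
    top2 = np.sort(probs)[-2:]           # [second-highest, highest]
    return top2[1] - top2[0]             # Delta_w(x) = a_(1) - a_(2)

def select_uncertain(prob_list, tau, n0, rng=np.random.default_rng(0)):
    gaps = np.array([separation(p) for p in prob_list])
    pool = np.flatnonzero(gaps <= tau)   # B_t: instances below the threshold
    if len(pool) <= n0:
        return pool
    weights = 1.0 - gaps[pool]           # assumed Pr(x): smaller gap, higher weight
    weights /= weights.sum()
    return rng.choice(pool, size=n0, replace=False, p=weights)

probs = [np.array([0.50, 0.50, 0.00]),   # very uncertain
         np.array([0.90, 0.10, 0.00]),   # confident
         np.array([0.40, 0.35, 0.25])]   # uncertain
print(select_uncertain(probs, tau=0.2, n0=1))
```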
This paper proposes to apply uncertainty-based measures to guide the collection of training samples for reading comprehension. The paper describes a relatively simple metric to estimate model uncertainty on unlabeled examples, and develops an algorithm to sample the examples on which the model is least certain. The authors describe a learning and regularization scheme for this scenario and evaluate their proposal on the SQuAD and NewsQA datasets.
SP:deb175b73241e3a04c2d2887934889508db4e39e
Expressive Power of Invariant and Equivariant Graph Neural Networks
1 INTRODUCTION . Graph Neural Networks ( GNN ) are designed to deal with graph structured data . Since a graph is not changed by permutation of its nodes , GNNs should be either invariant if they return a result that must not depend on the representation of the input ( typically when building a graph embedding ) or equivariant if the output must be permuted when the input is permuted ( typically when building an embedding of the nodes ) . More fundamentally , incorporating symmetries in machine learning is a fundamental problem as it allows one to reduce the number of degrees of freedom to be learned . Deep learning on graphs . This paper focuses on learning deep representations of graphs with network architectures , namely GNN , designed to be invariant to permutation or equivariant by permutation . From a practical perspective , various message passing GNNs have been proposed , see Dwivedi et al . ( 2020 ) for a recent survey and benchmarking on learning tasks . In this paper , we study 3 architectures : Message passing GNN ( MGNN ) which is probably the most popular architecture used in practice , order-k Linear GNN ( k-LGNN ) proposed in Maron et al . ( 2018 ) and order-k Folklore GNN ( k-FGNN ) first introduced by Maron et al . ( 2019a ) . MGNN layers are local and thus highly parallelizable on GPUs , which makes them scalable for large sparse graphs . k-LGNN and k-FGNN deal with representations of graphs as tensors of order k , which makes them of little practical use for k ≥ 3 . In order to compare these architectures , the separating power of these networks has been compared to a hierarchy of graph invariants developed for the graph isomorphism problem . Namely , for k ≥ 2 , k-WL ( G ) are invariants based on the Weisfeiler-Lehman tests ( described in Section 4.1 ) . For each k ≥ 2 , ( k + 1 ) -WL has strictly more separating power than k-WL ( in the sense that there is a pair of non-isomorphic graphs distinguishable by ( k + 1 ) -WL and not by k-WL ) . GIN ( which are invariant MGNN ) introduced in Xu et al . ( 2018 ) are shown to be as powerful as 2-WL . In Maron et al . ( 2019a ) , Geerts ( 2020b ) and Geerts ( 2020a ) , k-LGNN are shown to be as powerful as k-WL and 2-FGNN is shown to be as powerful as 3-WL . In this paper , we extend this last result about k-FGNN to general values of k. So in terms of separating power , when restricted to tensors of order k , k-FGNN is the most powerful architecture among the ones considered in this work . This means that for a given pair of graphs G and G′ , if ( k + 1 ) -WL ( G ) ≠ ( k + 1 ) -WL ( G′ ) , then there exists a k-FGNN , say GNNG , G′ , such that GNNG , G′ ( G ) ≠ GNNG , G′ ( G′ ) . Approximation results for GNNs . Results on the separating power of GNNs only deal with pairwise comparison of graphs : we need a priori a different GNN for each pair of graphs in order to distinguish them . Such results are of little help in a practical learning scenario . Our main contribution in this paper overcomes this issue and we show that a single GNN can give a meaningful representation for all graphs . More precisely , we characterize the set of functions that can be approximated by MGNNs , k-LGNNs and k-FGNNs respectively . The standard Stone-Weierstrass theorem shows that if an algebra A of real continuous functions separates points , then A is dense in the set of continuous functions on a compact set . Here we extend such a theorem to general functions with symmetries and apply it to invariant and equivariant functions to get our main result for GNNs .
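As an aside for readers unfamiliar with the test , here is a minimal sketch ( standard algorithm , our own code ) of colour refinement — the classical Weisfeiler-Lehman test , called 2-WL in the indexing above — together with a pair of non-isomorphic graphs it provably can not separate .

```python
# Sketch of the classical WL colour-refinement test (2-WL in this paper's
# indexing): iteratively hash each node's colour with the multiset of its
# neighbours' colours; differing histograms certify non-isomorphism.
from collections import Counter

def wl_histogram(adj, rounds=3):
    colors = {v: 0 for v in adj}                 # uniform initial colouring
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())

def maybe_isomorphic(adj1, adj2, rounds=3):
    # False: provably non-isomorphic. True: the test cannot tell them apart.
    return wl_histogram(adj1, rounds) == wl_histogram(adj2, rounds)

c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}            # one 6-cycle
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(maybe_isomorphic(c6, two_c3))  # True: both 2-regular, WL cannot separate
```

This pair ( a 6-cycle versus two triangles ) is exactly the kind of example that forces the move up the k-WL hierarchy discussed above .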
As a consequence , we show that k-FGNNs have the best approximation power among architectures dealing with tensors of order k. Universality results for GNNs . Universal approximation theorems ( similar to Cybenko ( 1989 ) for the multi-layer perceptron ) have been proved for linear GNNs in Maron et al . ( 2019b ) ; Keriven & Peyré ( 2019 ) ; Chen et al . ( 2019 ) . They show that some classes of GNNs can approximate any function defined on graphs . To be able to approximate any invariant function , they require the use of very complex networks , namely k-LGNN where k tends to infinity with n the number of nodes . Since we prove that any invariant function less powerful than ( k + 1 ) -WL can be approximated by a k-FGNN , letting k tend to infinity directly implies universality . Universality results for k-FGNN are another contribution of our work . Equivariant GNNs . Our second set of results extends previous analysis from invariant functions to equivariant functions . There are far fewer results about equivariant GNNs : Keriven & Peyré ( 2019 ) prove the universality of linear equivariant GNNs , and Maehara & Hoang ( 2019 ) show the universality of a new class of networks they introduced . Here , we consider a natural equivariant extension of k-WL and prove that equivariant ( k + 1 ) -LGNNs and k-FGNNs can approximate any equivariant function less powerful than this equivariant ( k + 1 ) -WL for k ≥ 1 . At this stage , we should note that all universality results for GNNs by Maron et al . ( 2019b ) ; Keriven & Peyré ( 2019 ) ; Chen et al . ( 2019 ) are easily recovered from our main results . Also , our analysis is valid for graphs of varying sizes . Empirical results for the Quadratic Assignment Problem ( QAP ) . To validate our theoretical contributions , we empirically show that 2-FGNN outperforms classical MGNN . Indeed , Maron et al . ( 2019a ) already demonstrate state-of-the-art results for the invariant version of 2-FGNNs ( for graph classification or graph regression ) . Here we consider the graph alignment problem and show that the equivariant 2-FGNN is able to learn a node embedding which beats other algorithms ( based on spectral methods , SDP or GNNs ) by a large margin . Outline and contribution . After reviewing more previous works and notations in the next section , we define the various classes of GNNs studied in this paper in Section 3 : message passing GNN , linear GNN and folklore GNN . Section 4 contains our main theoretical results for GNNs . First , in Section 4.2 we describe the separating power of each GNN architecture with respect to the Weisfeiler-Lehman test . In Section 4.3 , we give approximation guarantees for MGNNs , LGNNs and FGNNs at a fixed order of tensor . They cover both the invariant and equivariant cases and are our main theoretical contributions . For these , we develop in Section D a fine-grained Stone-Weierstrass approximation theorem for vector-valued functions with symmetries . Our theorem handles both invariant and equivariant cases and is inspired by recent works in approximation theory . In Section 6 , we illustrate our theoretical results on a practical application : the graph alignment problem , a well-known NP-hard problem . We highlight a previously overlooked implementation question : the handling of batches of graphs of varying sizes . A PyTorch implementation of the code necessary to reproduce the results is available at https : //github.com/mlelarge/graph_neural_net 2 RELATED WORK .
The pioneering works that applied neural networks to graphs are Gori et al. (2005) and Scarselli et al. (2009), which learn node representations with recurrent neural networks. More recent message passing architectures make use of non-linear functions of the adjacency matrix (Kipf & Welling, 2016), for example polynomials (Defferrard et al., 2016). For regular-grid graphs, they match classical convolutional networks, which by design can only approximate translation-invariant functions and hence have limited expressive power. In this paper, we focus instead on more expressive architectures. Following the recent surge of interest in graph neural networks, some works have tried to extend the pioneering work of Cybenko (1989); Hornik et al. (1989) to various GNN architectures. Among the first ones is Scarselli et al. (2009), which studied invariant message-passing GNNs. They showed that such networks can approximate, in a weak sense, all functions whose discriminatory power is weaker than 1-WL. Yarotsky (2018) described universal architectures which are invariant or equivariant to some group action. These models rely on polynomial intermediate layers of arbitrary degree, which would be prohibitive in practice. Maron et al. (2019b) leveraged classical results about the polynomials invariant to a group action to show that k-LGNNs are universal as k tends to infinity with the number of nodes. Keriven & Peyré (2019) derived a similar result, in the more complicated equivariant case, by introducing a new Stone-Weierstrass theorem. Similarly to Maron et al. (2019b), they require the order of tensors to go to infinity. Another route towards universality is the one of Chen et al. (2019). In the invariant setting, they show for a class of GNNs that universality is equivalent to being able to discriminate between (non-isomorphic) graphs. However, the only way to achieve such discriminatory power is to use tensors of arbitrarily high order; see also Ravanbakhsh (2020). Our work encompasses and sharpens these results using high-order tensors, as it yields approximation guarantees even at a fixed tensor order. CPNGNN in Sato et al. (2019) and DimeNet in Klicpera et al. (2020) are message passing GNNs incorporating more information than those studied here. Partial results about their separating power follow from Garg et al. (2020), which provides impossibility results for deciding graph properties including girth, circumference, diameter, radius, conjoint cycles, total number of cycles, and k-cliques. Chen et al. (2020) studies the ability of GNNs to count graph substructures. Though our theorems are much more general, note that their results are improved by the present work. Note also that if the nodes are given distinct features, MGNNs become much more expressive Loukas (2019), but they lose their invariance or equivariance properties. Averaging, i.e., relational pooling (RP), has been proposed to recover these properties Murphy et al. (2019a). However, the ideal RP, leading to universal approximation, cannot be used for large graphs due to its complexity of O(|V|!). Regarding the other class of RPGNNs, i.e., k-ary pooling (Murphy et al., 2019b), we will show how our general theorems in the invariant case can be applied to characterize their approximation power (see Section 5). Note that for neural networks on sets, the situation is a bit simpler. Efficient architectures such as DeepSets (Zaheer et al., 2017) or PointNet (Qi et al., 2017) have been shown to be invariant universal. Similar results exist in the equivariant case (Segol & Lipman, 2020; Maron et al., 2020), whose proofs rely on polynomial arguments. Though this is not our main motivation, our approximation theorems could also be applied in this context; see Sections D.3 and D.4.
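To make the folklore update concrete, the following is a minimal PyTorch sketch of a 2-FGNN layer acting on a dense order-2 tensor of pairwise features, in the spirit of Maron et al. (2019a); the MLP depths, hidden sizes, and final projection are illustrative choices, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class FolkloreLayer(nn.Module):
    """Sketch of a 2-FGNN layer on a dense pairwise-feature tensor h of shape (n, n, d).

    Schematic update: h'_{ij} = proj([ m0(h_{ij}) ; sum_k m1(h_{ik}) * m2(h_{kj}) ]),
    i.e., a per-channel matrix-product aggregation over the shared index k.
    """
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        mlp = lambda: nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
        self.m0, self.m1, self.m2 = mlp(), mlp(), mlp()
        self.proj = nn.Linear(2 * d_out, d_out)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (n, n, d_in)
        # Per-channel matrix product: agg[i, j, d] = sum_k m1(h)[i, k, d] * m2(h)[k, j, d].
        agg = torch.einsum('ikd,kjd->ijd', self.m1(h), self.m2(h))
        return self.proj(torch.cat([self.m0(h), agg], dim=-1))
```

Stacking such layers and finishing with a sum over all (i, j) entries yields an invariant network, while reading off the diagonal yields an equivariant node embedding; both readouts preserve the separating power discussed above.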
The authors prove several statements about the expressiveness of different classes of graph neural nets (GNNs): conventional message passing networks, linear GNNs (LGNN) and “folklore GNNs” (FGNN). The novel theoretical contributions include an analysis of the expressiveness of FGNNs that use tensors of arbitrary order, in terms of comparison to the Weisfeiler-Lehman tests; a characterization of the functions that these classes of networks can approximate; and universality of FGNNs as the tensor order goes to infinity. The results are based on a general Stone-Weierstrass-like theorem for equivariant functions. Prior universality results can be recovered as special cases. The authors include a simple experiment showing, in a limited setting, that a practical implementation agrees with the theory.
SpreadsheetCoder: Formula Prediction from Semi-structured Context
1 INTRODUCTION.

Spreadsheets are ubiquitous for data storage, with hundreds of millions of users. Support for helping users write formulas in spreadsheets is a powerful feature for data analysis. Although spreadsheet formula languages are relatively simple compared to general-purpose programming languages for data manipulation, writing spreadsheet formulas can still be tedious and error-prone for end users (Gulwani, 2011; Hermans et al., 2012b; Cheung et al., 2016). Systems such as FlashFill (Gulwani, 2011; Gulwani et al., 2012) help end users perform string transformation tasks in spreadsheets using a few input-output examples by automatically synthesizing a program in a domain-specific language (DSL). Recently, several learning approaches based on different neural architectures have been developed for learning such programs from examples, and have demonstrated promising results (Parisotto et al., 2017; Devlin et al., 2017; Vijayakumar et al., 2018). All these previous works formalize the spreadsheet program prediction problem as a programming-by-example task, with the goal of synthesizing programs from a small number of input-output examples. We argue that this choice engenders three key limitations. First, this setup assumes that each data row is independent, and that each formula is executed on data cells of the same row. However, real spreadsheets are less structured than this. Data in spreadsheets is typically organized as semi-structured tables, and cells in different rows can be correlated. As shown in Figure 1, in the same table, different data blocks can have different structures, without a common schema. Formulas can take cell values in other rows as function arguments. Second, because spreadsheets are semi-structured, they also contain rich metadata. In particular, many spreadsheet tables include headers that provide high-level descriptions of the data, which could provide important clues for formula prediction. However, table headers are not utilized in prior work. Finally, programming-by-example methods output programs in a DSL, which is typically designed to facilitate synthesis and is much less flexible than the language in which users write formulas. For example, the FlashFill DSL only covers a subset of spreadsheet functions for string processing, and it does not support rectangular ranges, a common feature of spreadsheet formulas. In contrast, spreadsheet languages also support a wide variety of functions for numerical calculation, while argument selection is more flexible and takes the spreadsheet table structure into account. In total, these limitations can compromise the applicability of such prior efforts to more diverse real-world spreadsheets and to richer language functionality. Instead, we propose synthesizing spreadsheet formulas without an explicit specification. To predict a formula in a given cell, the context of data and metadata is used as an implicit (partial) specification of the desired program. For example (Figure 1b), if predicting a formula at the end of a column of numbers labeled “Score”, and a cell in the same row contains the text “Total”, this context might specify the user's intent to compute a column sum. Our problem brings several new challenges compared to related work in programming by example (Gulwani, 2011; Bunel et al., 2018; Balog et al., 2017), semantic parsing (Popescu et al., 2003; Zhong et al., 2017; Yu et al.
, 2018) and source code completion (Raychev et al., 2014; Li et al., 2018; Svyatkovskiy et al., 2019). Spreadsheet tables contain rich two-dimensional relational structure and natural language metadata, but the rows do not follow a fixed schema as in a relational database. Meanwhile, our tabular context is more ambiguous as a program specification, and the spreadsheet language studied in this work is more flexible than those in the program synthesis literature. In this paper, we present SPREADSHEETCODER, a neural network architecture for spreadsheet formula prediction. SPREADSHEETCODER encodes the spreadsheet context in its table format, and generates the corresponding formula in the target cell. A BERT-based encoder (Devlin et al., 2019) computes an embedding vector for each input token, incorporating the contextual information from nearby rows and columns. The BERT encoder is initialized from weights pre-trained on English text corpora, which is beneficial for encoding table headers. To handle cell references, we propose a two-stage decoding process inspired by sketch learning for program synthesis (Solar-Lezama, 2008; Murali et al., 2018; Dong & Lapata, 2018; Nye et al., 2019). Our decoder first generates a formula sketch, which does not include concrete cell references, and then predicts the corresponding cell ranges to generate the complete formula. For evaluation (Section 4), we construct a large-scale benchmark of spreadsheets publicly shared within our organization. We show that SPREADSHEETCODER outperforms neural network approaches for programming by example (Devlin et al., 2017), and achieves 42.51% top-1 full-formula accuracy and 57.41% top-1 formula-sketch accuracy, both of which are already high enough to be practically useful. Moreover, SPREADSHEETCODER can predict cell ranges and around a hundred different spreadsheet operators, which is much more flexible than the DSLs used in prior works. With various ablation experiments, we demonstrate that both the implicit specification from the context and the text from the headers are crucial for obtaining good performance.

2 PROBLEM SETUP.

In this section, we discuss the setup of our spreadsheet formula prediction problem. We first describe the input specification, then introduce the language and representation for spreadsheet formulas. Input specification. We illustrate the input context in Figure 1. The input context consists of two parts: (a) context surrounding the target cell (e.g., all cell values in rows 2–7 and columns A–D, excluding cell D4 in Figure 1a), and (b) the header row (e.g., row 1). In contrast to prior programming-by-example approaches (Gulwani, 2011; Parisotto et al., 2017; Devlin et al., 2017; Vijayakumar et al., 2018), our input specification features (a) tabular input, rather than independent rows as input-output examples, and (b) header information. Tabular input is important for many cases where formulas are executed on various input cells from different rows and columns (Figure 1), and headers hold clues about the purpose of a column as well as its intended type; e.g., the header cell "Score" in Figure 1b is likely to indicate that the column data should be numbers. Note that we do not include the intended output of the target cell in our input specification, for three reasons.
First, unlike programming-by-example problems, we do not have multiple independent input-output examples available from which to induce a formula, so providing multiple input-output examples is not an option. Second, even for our single input instance, the evaluated formula value may not be known by the spreadsheet user yet. Finally, we tried including the intended formula execution result in our specification, but it did not improve the prediction accuracy beyond what the contextual information alone allowed. The spreadsheet language. Our model predicts formulas written in the Google Sheets language (function list: https://support.google.com/docs/table/25273?hl=en). Compared to the domain-specific language defined in FlashFill, which focuses on string transformations, the spreadsheet language supports a richer set of operators. Besides string manipulation operators such as CONCATENATE, LOWER, etc., the spreadsheet language also includes operators for numerical calculations (e.g., SUM and AVERAGE), table lookups (e.g., VLOOKUP) and conditional statements (IF, IFS). As will be discussed in Section 4, around a hundred different base formula functions appear in our dataset, many more than the operators defined in the FlashFill DSL. We limit our problem to formulas with references to local cells in a spreadsheet tab; thus we exclude formulas with references to other tabs or spreadsheets, and absolute cell ranges. Formula representation. One of the key challenges in formula representation is how to represent cell references, especially ranges, which are prevalent in spreadsheet formulas. Naively using absolute cell positions, e.g., A5, may not be meaningful across different spreadsheets. Meanwhile, a single spreadsheet can have millions of cells, so the set of possible ranges is very large. To address this, we design a representation for formula sketches inspired by prior work on sketch learning for program synthesis (Solar-Lezama, 2008; Murali et al., 2018; Dong & Lapata, 2018; Nye et al., 2019). A formula sketch includes every token in the prefix representation of the parse tree of the spreadsheet formula, except for cell references. References, which can be either a single cell or a range of cells, are replaced with a special placeholder RANGE token. For example, the sketch of the formula in Figure 1a is IF <= RANGE 1 "A" IF <= RANGE 2 "B" IF <= RANGE 3 "C" IF <= RANGE 4 "D" "E" $ENDSKETCH$, where $ENDSKETCH$ denotes the end of the sketch. Notice that the sketch includes literals, such as the constants 1 and "A". To complete the formula representation, we design an intermediate representation for ranges, relative to the target cell. For example, B5 in Figure 1c is represented as $R$ R[0]C[1] $ENDR$ since it is in the next column but the same row as the target cell A5, and the range C2:C6 in Figure 1b is represented as $R$ R[-5]C[0] $SEP$ R[-1]C[0] $ENDR$. The special tokens $R$ and $ENDR$ start and conclude a concrete range, respectively, and $SEP$ separates the beginning and end (relative) references of a rectangular multi-cell range. A complete spreadsheet formula includes both the sketch and any concrete ranges; e.g., the formula in Figure 1b is represented as SUM RANGE $ENDSKETCH$ $R$ R[-5]C[0] $SEP$ R[-1]C[0] $ENDR$ EOF, where EOF denotes the end of the formula.
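As a concrete illustration of this relative-range encoding, the sketch below converts absolute cell coordinates into the token sequence described above; the function name and the 0-indexed (row, col) tuple convention are our own illustrative choices, not the paper's implementation.

```python
def relative_range_tokens(target, start, end=None):
    """Encode a cell reference relative to the target cell, as described above.

    Cells are (row, col) pairs, 0-indexed; e.g., A5 -> (4, 0), C2 -> (1, 2).
    """
    rel = lambda cell: f"R[{cell[0] - target[0]}]C[{cell[1] - target[1]}]"
    tokens = ["$R$", rel(start)]
    if end is not None:  # rectangular multi-cell range, e.g., C2:C6
        tokens += ["$SEP$", rel(end)]
    return tokens + ["$ENDR$"]

# B5 relative to target A5:    ['$R$', 'R[0]C[1]', '$ENDR$']
print(relative_range_tokens(target=(4, 0), start=(4, 1)))
# C2:C6 relative to target C7: ['$R$', 'R[-5]C[0]', '$SEP$', 'R[-1]C[0]', '$ENDR$']
print(relative_range_tokens(target=(6, 2), start=(1, 2), end=(5, 2)))
```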
In Section 3.2, we will discuss our two-stage decoding process, which sequentially predicts the formula sketch and ranges.

3 SPREADSHEETCODER MODEL ARCHITECTURE.

In this section, we present our SPREADSHEETCODER model architecture for spreadsheet formula prediction. We provide an overview of our model design in Figure 2. 3.1 TABULAR CONTEXT ENCODER. Input representation. Our model input includes the data values surrounding the target cell as a table, where the first row is the header. When there is no header in the spreadsheet table, we set the header row to be an empty sequence. We include data values in cells that are at most D rows and D columns away from the target cell, so that the input dimension is (2D+2) × (2D+1), and we set D = 10 in our experiments. Row-based BERT encoder. We first use a BERT encoder (Devlin et al., 2019) to compute a row-based contextual embedding for each token in the target cell's context. Since our 2D+1+1 rows contain many tokens and we use a standard BERT encoder with 512-token inputs, we tile our rows into bundles of three adjacent data rows, plus the header row, which is included in every bundle. Then we compute a token-wise BERT embedding for each bundle separately; the BERT weights are initialized from a pre-trained checkpoint for English. Specifically, in our experiments where D = 10, we concatenate all cell values for each row i in the context into a token sequence R_i, which has length L = 128 (we trim and pad as needed). We combine rows into bundles S^r_b = [H^r, R_{3b-1}, R_{3b}, R_{3b+1}], for b ∈ [-3, 3]; here H^r is the header row. We set the BERT segment IDs to 0 for the header tokens and 1 for data tokens in each bundle. There are 2D+1 = 21 rows of context, so each of the 21 data rows is covered exactly once by the seven bundles. The header row is assigned a different BERT representation in each bundle. To obtain a single representation of the header row, we average per token across the embeddings from all of the bundles. Column-based BERT encoder. As shown in Figure 1b, some formulas manipulate cells in the same column, in which case a column-based representation may be more desirable. Therefore, we also compute a column-based contextual embedding for all context tokens. We perform similar tiling as for the row-based BERT encoding, yielding column bundles S^c_b for b ∈ [-3, 3]. Unlike with row-wise tiling, where we include the header row H^r with every bundle, for column-wise tiling we use the column of the target cell, H^c = C_0, as the "header column" in every bundle. After obtaining all token embeddings from this tiled computation by the BERT encoder, we discard the token embeddings of C_0 in its role as header column, and only use its regular token embeddings from bundle S^c_0. Row-wise and column-wise convolution layers. Although the output vectors of the BERT encoders already contain important contextual information, such as headers and nearby rows and columns, they still do not fully embed the entire input table as context. To encode the context from more distant rows and columns, we add a row-wise convolution layer and a column-wise convolution layer on top of each BERT encoder. Specifically, the row-wise convolution layer has a kernel size of 1 × L, and the column-wise convolution layer has a kernel size of (2D+2) × 1 for the row-based BERT and (2D+1) × 1 for the column-based BERT.
In this way, the convolution layers aggregate across BERT embeddings from different bundles, allowing the model to take longer-range dependencies into account. For each input token, let e^b be its BERT output vector, c^r the output of the row-wise convolution layer, and c^c the output of the column-wise convolution layer. The final embedding of each input token is the concatenation of the BERT output and the output of the convolution layers, i.e., e = [c^r + c^c; e^b].
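For concreteness, the row tiling described above can be sketched as follows; representing each row as a token sequence in a plain Python list is a simplification of the actual input pipeline.

```python
D = 10  # context radius; 2D + 1 = 21 data rows

def row_bundles(header_row, data_rows):
    """Tile 21 data rows into 7 bundles S_b = [H, R_{3b-1}, R_{3b}, R_{3b+1}], b in [-3, 3].

    data_rows[D] is the target cell's row; each data row is covered exactly once,
    while the header row is prepended to every bundle.
    """
    assert len(data_rows) == 2 * D + 1
    mid = D  # index of the target cell's row
    return [[header_row] + [data_rows[mid + 3 * b + k] for k in (-1, 0, 1)]
            for b in range(-3, 4)]
```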
This paper presents an interesting formulation for spreadsheet formula synthesis. Instead of taking input-output pairs as input, as is done in programming by example (PBE) approaches, the proposed approach takes the semi-structured tabular context as input for predicting a formula for the target cell. A neural network architecture is presented which uses a BERT-based encoder to leverage the natural language metadata.
Skinning a Parameterization of Three-Dimensional Space for Neural Network Cloth
1 INTRODUCTION.

Cloth is particularly challenging for neural networks to model due to the complex physical processes that govern how cloth deforms. In physical simulation, cloth deformation is typically modeled via a partial differential equation that is discretized with finite element models ranging in complexity from variational energy formulations to basic masses and springs; see e.g. Baraff & Witkin (1998); Bridson et al. (2002; 2003); Grinspun et al. (2003); Baraff et al. (2003); Selle et al. (2008). Mimicking these complex physical processes and numerical algorithms with machine learning inference has shown promise, but still struggles to capture high-frequency folds/wrinkles. PCA-based methods De Aguiar et al. (2010); Hahn et al. (2014) remove important high-variance details and struggle with nonlinearities emanating from joint rotations and collisions. More recently, Gundogdu et al. (2019); Santesteban et al. (2019); Patel et al. (2020); Jin et al. (2020) leverage body skinning Magnenat-Thalmann et al. (1988); Lander (1998); Lewis et al. (2000) to capture some degree of the nonlinearity; the cloth is then represented via learned offsets from a co-dimension one skinned body surface. Building on this prior work, we propose replacing the skinned co-dimension one body surface parameterization with a skinned (fully) three-dimensional parameterization of the volume surrounding the body. We parameterize the three-dimensional space corresponding to the volumetric region of air surrounding the body with a tetrahedral mesh. In order to do this, we leverage the work of Lee et al. (2018; 2019), which proposed a number of techniques for creating and deforming such a tetrahedral mesh using a variety of skinning and simulation techniques. The resulting kinematically deforming skinned mesh (KDSM) was shown to be beneficial for both hair animation/simulation Lee et al. (2018) and water simulation Lee et al. (2019). Here, we only utilize the most basic version of the KDSM, assigning skinning weights to its vertices so that it deforms with the underlying joints similar to a skinned body surface (alternatively, one could train a neural network to learn more complex KDSM deformations). This allows us to make a very straightforward and fair comparison between learning offsets from a skinned body surface and learning offsets from a skinned parameterization of three-dimensional space. Our experiments showed an overall reduction in error of approximately 50% (see Table 2 and Figure 8) as well as the removal of visual/geometric artifacts (see e.g. Figure 9) that can be directly linked to the usage of the body surface mesh; thus, we advocate the KDSM for further study. The neural network we trained for a particular body can also be used to infer cloth with unique wrinkle patterns on different body shapes and T-shirt sizes without retraining (see supplemental material). In order to further illustrate the efficacy of our approach, we show that the KDSM is amenable to being used with recently proposed work on texture sliding for better three-dimensional reconstruction Wu et al. (2020b), as well as in conjunction with networks that use a postprocess for better physical accuracy in the L∞ norm Geng et al. (2020) (see Figure 10).
In summary, our specific contributions are: 1) a novel three-dimensional parameterization for virtual cloth adapted from the KDSM, 2) an extension (enabling plastic deformation) of the KDSM to accurately model cloth deformation, and 3) a learning framework to efficiently infer such deformations from body pose. The mean error of the cloth predicted in Jin et al. (2020) is five standard deviations higher than the mean error of our results.

2 RELATED WORK.

Cloth: Data-driven cloth prediction using deep learning has shown significant promise in recent years. To generate clothing on the human body, a common approach is to reconstruct the cloth and body jointly Alldieck et al. (2018a;b); Xu et al. (2018); Alldieck et al. (2019a;b); Habermann et al. (2019); Natsume et al. (2019); Saito et al. (2019); Yu et al. (2019); Bhatnagar et al. (2019); Onizuka et al. (2020); Saito et al. (2020). In such cases, human body models such as SCAPE Anguelov et al. (2005) and SMPL Loper et al. (2015) can be used to reduce the dimensionality of the output space. To predict cloth shape, a number of works have proposed learning offsets from the body surface Guan et al. (2012); Neophytou & Hilton (2014); Pons-Moll et al. (2017); Lahner et al. (2018); Yang et al. (2018); Gundogdu et al. (2019); Santesteban et al. (2019); Patel et al. (2020); Jin et al. (2020) such that body skinning can be leveraged. There are a variety of skinning techniques used in animation; the most popular approach is linear blend skinning (LBS) Magnenat-Thalmann et al. (1988); Lander (1998). Though LBS is efficient and computationally inexpensive, it suffers from well-known artifacts addressed in Kavan & Žára (2005); Kavan et al. (2007); Jacobson & Sorkine (2011); Le & Hodgins (2016). Since regularization often leads to overly smooth cloth predictions, additional wrinkles/folds can be added to initial network inference results Popa et al. (2009); Mirza & Osindero (2014); Robertini et al. (2014); Lahner et al. (2018); Wu et al. (2020b); Patel et al. (2020). Most recently, Patel et al. (2020) parameterized cloth as a submesh of the SMPL body mesh and decomposed cloth deformation into low-frequency and high-frequency components. However, this parameterization limits cloth to be bound by the topology of SMPL, and the high-frequency folds/wrinkles added by the network are not constrained to match those in the ground truth data. In contrast, our method allows one to predict cloth deformation independent of a predefined PCA basis, and using Geng et al. (2020) ensures that folds/wrinkles are physically consistent. 3D Parameterization: Parameterizing the air surrounding deformable objects is a way of treating collisions during physical simulation Sifakis et al. (2008); Müller et al. (2015); Wu & Yuksel (2016). For hair simulation in particular, previous works have parameterized the volume enclosing the head or body using tetrahedral meshes Lee et al. (2018; 2019) or lattices Volino & Magnenat-Thalmann (2004; 2006). These volumes are animated such that the embedded hairs follow the body as it deforms, enabling efficient hair animation, simulation, and collisions. Interestingly, deforming a low-dimensional reference map that parameterizes high-frequency details has been explored in computational physics as well, particularly for fluid simulation; see e.g.
Bellotti & Theillard (2019).

3 SKINNING A 3D PARAMETERIZATION.

We generate a KDSM using red/green tetrahedralization Molino et al. (2003); Teran et al. (2005a) to parameterize a three-dimensional volume surrounding the body. Starting with the body in the T-pose, we surround it with an enlarged bounding box containing a three-dimensional Cartesian grid. As is typical for collision bodies in computer graphics Bridson et al. (2003), we generate a level set representation separating the inside of the body from the outside (see e.g. Osher & Fedkiw (2002)). See Figure 1a. Next, a thickened level set is computed by subtracting a constant value from the current level set values (Figure 1b). Then, we use red/green tetrahedralization as outlined in Molino et al. (2003); Teran et al. (2005a) to generate a suitable tetrahedral mesh (Figure 1c). Optionally, this mesh could be compressed to the level set boundary using either physics or optimization, but we forego this step because the outer boundary is merely where our parameterization ends and does not represent an actual surface as in Molino et al. (2003); Teran et al. (2005a). Skinning weights are assigned to the KDSM using linear blend skinning (LBS) Magnenat-Thalmann et al. (1988); Lander (1998), just as one would skin a co-dimension one body surface parameterization. In order to skin the KDSM so that it follows the body as it moves, each vertex v_k is assigned a nonzero weight w_{kj} for each joint j it is associated with. Then, given a pose θ with joint transformations T_j(θ), the world space position of each vertex is given by v_k(θ) = Σ_j w_{kj} T_j(θ) v_k^j, where v_k^j is the untransformed location of vertex v_k in the local reference space of joint j. See Figure 1d. Importantly, it can be quite difficult to significantly deform tetrahedral meshes without having some tetrahedra invert Irving et al. (2004); Teran et al. (2005b); thus, we address inversion and robustness issues/details in Section 5.

4 EMBEDDING CLOTH IN THE KDSM.

In continuum mechanics, deformation is defined as a mapping from a material space to the world space, and one typically decomposes this mapping into purely rigid components and geometric strain measures; see e.g. Bonet & Wood (1997). Similar in spirit, we envision the T-pose KDSM as the material space and the skinned KDSM as being defined by a deformation mapping to world space for each pose θ. As such, we denote the position of each cloth vertex in the material space (i.e. the T-pose, see Figure 2a) as u_i^{mo}. We embed each cloth vertex u_i^{mo} into the tetrahedron that contains it via barycentric weights λ_{ik}^{mo}, which are only nonzero for the parent tetrahedron's vertices. Then, given a pose θ, a cloth vertex's world space location is defined as u_i(θ) = Σ_k λ_{ik}^{mo} v_k(θ), so that it is constrained to follow the KDSM deformation, assuming linearity in each tetrahedron (see Figure 2b). Technically, this is an indirect skinning of the cloth, with its skinning weights computed as a linear combination of the skinning weights of its parent tetrahedron's vertices, and it leads to the obvious errors one would expect (see e.g. Figure 3, second row). The KDSM approximates a deformation mapping for the region surrounding the body. This approximation could be improved via physical simulation (see e.g. Lee et al. (2018; 2019)), which is computationally expensive but could be made more efficient using a neural network.
However, the tetrahedral mesh is only well suited to capture deformations of a volumetric three-dimensional space, and as such struggles to capture deformations intrinsic to co-dimension one surfaces/shells, including the bending, wrinkling, and folding important for cloth. Thus, we take further motivation from constitutive mechanics (see e.g. Bonet & Wood (1997)) and allow the cloth vertices to move in material space (the T-pose), akin to plastic deformation. That is, we use plastic deformation in the material space in order to recapture elastic deformations (e.g. bending) lost when embedding cloth into a tetrahedral mesh. These elastic deformations are encoded as a pose-dependent plastic displacement d_i(θ) for each cloth vertex; then, the pose-dependent, plastically deformed material space position of each cloth vertex is given by u_i^m(θ) = u_i^{mo} + d_i(θ). Given a pose θ, u_i^m(θ) will not necessarily have the same parent tetrahedron or barycentric weights as u_i^{mo}; thus, a new embedding is computed for u_i^m(θ), obtaining new barycentric weights λ_{ik}^m(θ). Using this new embedding, the position of the cloth vertex in pose θ will be u_i(θ) = Σ_k λ_{ik}^m(θ) v_k(θ). Ideally, if the d_i(θ) are computed correctly, u_i(θ) will agree with the ground truth location of cloth vertex i in pose θ. The second row of Figure 4 shows cloth in the material space T-pose plastically deformed such that its skinned location in pose θ (Figure 4, first row) well matches the ground truth shown in the first row of Figure 3. Learning d_i(θ) for each vertex can be accomplished in exactly the same fashion as learning displacements from the skinned body surface mesh, and thus we use the same approach as proposed in Jin et al. (2020). Afterwards, an inferred d_i(θ) is used to compute u_i^m(θ), followed by λ_{ik}^m(θ), and finally u_i(θ). Addressing efficiency, note that only the vertices of the parent tetrahedra of u^m(θ) need to be skinned, not the entire tetrahedral mesh. In order to compute each training example (θ, d(θ)), we examine the ground truth cloth in pose θ, i.e. u^{GT}(θ). For each cloth vertex u_i^{GT}(θ), we find the deformed tetrahedron it is located in and compute barycentric weights λ_{ik}^{GT}(θ), resulting in u_i^{GT}(θ) = Σ_k λ_{ik}^{GT}(θ) v_k(θ). Then, that vertex's material space (T-pose) location is given by u_i^m(θ) = Σ_k λ_{ik}^{GT}(θ) v_k^m, where the v_k^m are the material space (T-pose) positions of the tetrahedral mesh vertices (which are the same for all poses, and thus not a function of θ). Finally, we define d_i(θ) = u_i^m(θ) − u_i^{mo}.
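The construction of these training targets can be summarized in a few lines of NumPy; locating the containing tetrahedron (e.g., via a spatial hash or BVH) is assumed to be done elsewhere, and all names below are illustrative rather than the paper's implementation.

```python
import numpy as np

def skin_vertices(v_local, weights, transforms):
    """Linear blend skinning: v_k(theta) = sum_j w_kj T_j(theta) v_k^j.

    v_local:    (K, J, 4) homogeneous vertex positions in each joint's local frame
    weights:    (K, J)    skinning weights (each row sums to 1)
    transforms: (J, 4, 4) joint transforms for the pose theta
    """
    per_joint = np.einsum('jab,kjb->kja', transforms, v_local)[..., :3]
    return np.einsum('kj,kja->ka', weights, per_joint)

def barycentric(p, tet):
    """Barycentric weights of point p inside tetrahedron tet (4 x 3 vertex array)."""
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    lam = np.linalg.solve(T, p - tet[0])
    return np.array([1.0 - lam.sum(), *lam])

# Training target for one cloth vertex i in pose theta (containing tet found elsewhere):
#   lam_gt = barycentric(u_gt_i, posed_tet)   # weights in the deformed KDSM
#   u_m_i  = lam_gt @ material_tet            # pull back to the T-pose (4 x 3 vertices)
#   d_i    = u_m_i - u_mo_i                   # plastic displacement target
```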
This paper proposes to model 3D cloth by embedding it into kinematically deforming skinned mesh (KDSM)[1], a tetrahedral mesh that parametrizes the volumetric region around the underlying body. A KDSM can be created and deformed using a variety of skinning and simulation techniques introduced in [1]. This paper extends KDSM by enabling plastic deformation in material space (T-pose), and accurately models the cloth deformation as per-vertex offsets. Inspired by [2], this paper trains a neural network to learn the per-vertex offset as a function of body pose. Once trained, the network is able to infer the 3D cloth on a particular body. Experiments show that the proposed 3D cloth parameterization method is better than the 2D UV parameterization method used in [2].
Maximum Categorical Cross Entropy (MCCE): A noise-robust alternative loss function to mitigate racial bias in Convolutional Neural Networks (CNNs) by reducing overfitting
1 INTRODUCTION.

Convolutional Neural Networks (CNNs) offer state-of-the-art results in computer vision tasks He et al. (2016); Szegedy et al. (2015); Simonyan & Zisserman (2014), but are susceptible to inherent noise in the input training data, which promotes overfitting on the input data during information propagation. When new data is presented, overfit models do not generalize well and offer significantly lower classification performance, exacerbating the problem of bias towards a specific subset of data. The fundamental learning theory behind CNNs is to approximate an underlying d-dimensional interpolated function f(X) ∈ R^d by using information from n d-dimensional input vectors X = {x_1, x_2, ..., x_n}, where x_i = ⟨x^1, x^2, ..., x^d⟩ and i, d ∈ Z_{>0} Maiorov (2006). The approximation problem is theoretically non-linear, and there is empirical evidence to support the assertion that CNNs simply memorize the input training data Zhang et al. (2016). Overfitting occurs when the internal parameters of a CNN model are so finely tuned to the unique variances of the input training data that they perfectly model its characteristics Hawkins (2004). Misclassification occurs when overfit models are unable to distinguish between overlapping variances for different classes of images. Reducing overfitting is also difficult, since establishing a theoretical understanding or analyzing the mechanisms of learning in CNNs for non-convex optimization problems such as image classification is generally not well understood Shamir (2018). A simple way to reduce overfitting is to train models using a very large number of images Shorten & Khoshgoftaar (2019), such as the ImageNet dataset consisting of millions of training images used for the purpose of natural image classification. While big data solutions might mask the underlying problem of model overfitting, acquisition of clean/noise-free labeled data for supervised model training is challenging. The problem of data acquisition is compounded further by ethical, societal, and practical concerns when dealing with facial datasets, especially for the task of race or gender classification. Another key challenge while creating datasets is the consideration that needs to be made of the distribution of data amongst the multiple classes, along with the variability of data within an individual class. Unbalanced datasets, where the data distribution of images is not equal for all the classes, introduce bias during model training Ganganwar (2012). The only viable solution to rectify imbalanced datasets is to augment or supplement datasets with new images, which, as mentioned before, is an ongoing challenge. To the best of our knowledge, there is no prior work undertaken to optimize the data distribution of the convolutional kernel weights during model training. We hypothesize that balancing convolutional kernel data during model training could aid in mitigating bias and increase classification performance by alleviating the severity of inherent noise. Some researchers attribute racial bias of CNN models to noise in the training data and associated labels, proposing alternative loss functions like Mean Absolute Error (MAE) Ghosh et al. (2017) in place of commonly used loss functions like Categorical Cross Entropy (CCE), as explained in Section 2.1.
MAE was proposed as a noise-robust alternative to mitigate the susceptibility of CNNs to noise, but as Zhang & Sabuncu (2018) assert, MAE is not applicable to complex natural image datasets like ImageNet, and as such it is not considered in this paper. The task of classifying race in human faces is established to be more complex than natural image classification because there exists a narrow range of possible variations in features between human faces of different races, especially when skin tone is not the major determining factor for racial identity Fu et al. (2014). In this paper, we explore the problem of overfitting with respect to racial classification by assessing the train-test divergence to quantify the degree of generalizability, where a higher train-test divergence indicates a greater degree of model overfitting on the training data. We also propose a novel extension to the commonly used CCE loss function using Maximum Entropy (ME) Hartley (1928) measures, called Maximum Categorical Cross Entropy (MCCE). MCCE loss calculations are determined by taking into account the distribution of convolutional kernel weights during model training and the traditional CCE loss. Most related works explore model over-parameterization Zhang et al. (2019) or under-parameterization Soltanolkotabi et al. (2018) with unrealistic assumptions made about the distribution of input data; we do not make any such assumptions. The contributions of this paper are as follows: • We propose a novel extension to the Categorical Cross Entropy (CCE) loss function using Maximum Entropy (ME) measures, known as Maximum Categorical Cross Entropy (MCCE) loss, to reduce model overfitting. • We empirically validate the MCCE loss function with respect to model overfitting using train-test divergence as a metric, and evaluate generalizability across datasets by using cross-validation testing.

2 BACKGROUND.

Section 2.1 presents an understanding of how CCE loss is calculated. Sections 2.2 and ?? detail how kernel regularization and batch normalization influence CCE loss, with their limitations. Section 2.3 provides the theoretical background of Maximum Entropy (ME) and methods to calculate ME, along with estimating the reconstruction loss. 2.1 CATEGORICAL CROSS-ENTROPY (CCE) LOSS. The most commonly used loss function is the Categorical Cross-Entropy (CCE) loss given in Equation (1), which is a measure of the difference between the probability distributions of one-hot encoded CNN-computed class labels and ground truths. CNN classification uses a softmax function to calculate the required probability distributions Goodfellow et al. (2016). H(p, q) = −Σ_{i=1}^{n} p(x_i) log q(x_i), where x_i ∈ X. (1) In Equation (1), q(x_i) and p(x_i) represent the probability distributions of the one-hot encoded CNN-predicted class labels and ground truths, respectively, for an input data vector x_i. Given that CNN model training introduces noise during convolutional operations or information propagation, and that any inherent noise present in the input data can significantly affect model performance, a noise-robust alternative to CCE would help improve classification performance and mitigate bias. This is the reason why stochastic optimizers and gradient descent algorithms function within the framework of maximum likelihood estimation. 2.2 KERNEL REGULARIZATION.
The intuition behind regularization is that of Ockham's razor: penalize complex models and promote simpler models during training. Unlike empirical risk minimization, which only considers loss minimization, regularization was proposed to minimize structural risk, which considers both complexity and loss minimization. The most prominent and simple kernels that greatly minimize loss are selected Bilgic et al. (2014). Model complexity is represented in two ways: as a function of the total number of features with nonzero weights (L1), or as a function of all the weights of all the features in a model (L2). L2 regularization is most commonly used in computer vision tasks for CNN models such as ResNet. Model complexity can be quantified using the L2 regularization formula given in Equation (2), defined by using the sum of squares of all the feature weights as the regularization term Cortes et al. (2012). ||ω||_2^2 = ω_1^2 + ω_2^2 + ω_3^2 + ⋯ + ω_n^2 (2) In Equation (2), the magnitude of the weights ω indicates complexity. Feature weights close to zero have no significant impact on model complexity, while large outlier weight values have a more pronounced impact on ω. The number of feature weights n, determined by the number of trainable model parameters, also contributes greatly to ω and model complexity. Furthermore, kernel regularization, as it is currently implemented for CCE loss, utilizes CNN-computed label errors and does not take the data distribution of the convolutional kernels into account. 2.3 MAXIMUM ENTROPY AND RECONSTRUCTION LOSS. The use of Maximum Entropy (ME) for applications such as convolutional kernel analysis is justified since ME is the only consistent way of selecting a single discrete data point from the set of input data vectors to best fit the regression curve, proven axiomatically in Shore & Johnson (1980); Johnson & Shore (1983). A method to approximate ME for digital images is through the use of distributed normalized histograms Gonzalez & Woods (2007); Jain (1989). The open-source scikit-image processing library, written in Python, can be used to calculate the ME measures for images Virtanen et al. (2020). Entropy in images is related to the complexity contained in a given neighborhood, computed by using a circular disk with a radius of r. The disk is used to measure minute variations in the local grayscale level distribution. The maximum entropy for an image depends on the number of gray levels: an 8-bit image has 256 gray levels (0–255), which gives a theoretical maximum entropy of log_2(2^8) = 8 bits per pixel. Changing the value of r can invariably produce higher or lower ME measures, as illustrated in Figure 1. Similarly, higher or lower ME values will be obtained while measuring convolutional kernel weights. A decrease in ME divergence can be observed in Figure 1 for r values of 5 and 50 relative to r values of 1 and 5. A significant difference in spatial/semantic information in the images can be seen with greater r values, which suggests loss of precision during approximation. ME measures for color images require computation on each of the three color channels, Red (R), Green (G) and Blue (B), i.e., RGB, separately, and averaging the result. The averaged ME measures for images in the colorFERET and UTKFace datasets are 2.09 and 2.25 bits per pixel respectively, using an r value of 1.
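As a sketch, the per-image ME measure described above can be computed with scikit-image's local entropy filter, averaging per channel for RGB inputs as the text describes; the exact preprocessing used in the paper may differ.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def mean_local_entropy_rgb(img, r=1):
    """Average local entropy (bits/pixel) of an 8-bit RGB image.

    Local entropy is computed per channel over a disk of radius r,
    then averaged over pixels and channels.
    """
    return float(np.mean([entropy(img[..., c], disk(r)).mean()
                          for c in range(img.shape[-1])]))
```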
The amount of time taken to calculate the ME measures is insignificant, as the ME calculation script can be executed in parallel on the CPU while CNN model training occurs on the GPU, as evidenced in the uploaded supplementary data. Solutions other than ME for image reproduction/reconstruction from noisy or incomplete measurements, such as the use of non-linear variations on Fourier transformations, fail when convolutional kernels are incorporated Donoho et al. (1990). Furthermore, ME reconstruction has been shown to provide superior noise suppression while mostly preserving de-emphasized structural noise near the baseline (relative to high-signal information) Donoho et al. (1990). Accurate reconstructions can be approximated using a 1D projection of any underlying function, which is reduced to g(X) ∈ R^d such that x_i ∈ X Reis & Roberty (1992). As discussed in Section 1, the underlying functional representation of the input dataset is f(X); the difference between the true representation f(X) and the ME reconstruction approximation g(X) is the reconstruction loss for the input dataset. Results presented in Reis & Roberty (1992) indicate that reconstructions using accurate and noisy data had insignificantly small variations compared to the original, attesting to the noise-robustness of using ME measures for reconstruction. This noise-averse characteristic of ME is especially important for race classification, as the lighting or ISO parameters of the input images can significantly affect the performance of CNN models. Reconstruction loss is described as the convolutional kernel data loss, whereas CCE can be characterized as a class label loss.
The authors propose an extension of the CE loss to reduce classification bias that occurs in present methods and datasets. They calculate Maximum Entropy (ME) for images on the entire training dataset and then calculate the reconstruction loss between this and the ME for convolutional kernels during training. Their experimental results show that minimizing this reconstruction loss along with CE speeds up convergence.
ALFWorld: Aligning Text and Embodied Environments for Interactive Learning
1 INTRODUCTION.

Figure 1: ALFWorld: interactive aligned text and embodied worlds. An example with high-level text actions (left) and low-level physical actions (right). The TextWorld transcript reads: "Welcome! You are in the middle of the room. Looking around you, you see a diningtable, a stove, a microwave, and a cabinet. Your task is to: Put a pan on the diningtable. > goto the cabinet / You arrive at the cabinet. The cabinet is closed. > open the cabinet / The cabinet is empty. > goto the stove / You arrive at the stove. Near the stove, you see a pan, a pot, a bread loaf, a lettuce, and a winebottle. > take the pan from the stove / You take the pan from the stove. > goto the diningtable / You arrive at the diningtable. > put the pan on the diningtable / You put the pan on the diningtable."

Consider helping a friend prepare dinner in an unfamiliar house: when your friend asks you to clean and slice an apple for an appetizer, how would you approach the task? Intuitively, one could reason abstractly: (1) find an apple, (2) wash the apple in the sink, (3) put the clean apple on the cutting board, (4) find a knife, (5) use the knife to slice the apple, (6) put the slices in a bowl. Even in an unfamiliar setting, abstract reasoning can help accomplish the goal by leveraging semantic priors: priors like locations of objects (apples are commonly found in the kitchen along with implements for cleaning and slicing), object affordances (a sink is useful for washing an apple, unlike a refrigerator), and pre-conditions (better to wash an apple before slicing it, rather than the converse). We hypothesize that learning to solve tasks using abstract language, unconstrained by the particulars of the physical world, enables agents to complete embodied tasks in novel environments by leveraging the kinds of semantic priors that are exposed by abstraction and interaction. To test this hypothesis, we have created the novel ALFWorld framework, the first interactive, parallel environment that aligns text descriptions and commands with physically embodied robotic simulation. We build ALFWorld by extending two prior works: TextWorld (Côté et al., 2018), an engine for interactive text-based games, and ALFRED (Shridhar et al., 2020), a large-scale dataset for vision-language instruction following in embodied environments. ALFWorld provides two views of the same underlying world and two modes by which to interact with it: TextWorld, an abstract, text-based environment, generates textual observations of the world and responds to high-level text actions; ALFRED, the embodied simulator, renders the world in high-dimensional images and responds to low-level physical actions as from a robot (Figure 1). Unlike prior work on instruction following (MacMahon et al., 2006; Anderson et al., 2018a), which typically uses a static corpus of cross-modal expert demonstrations, we argue that aligned parallel environments like ALFWorld offer a distinct advantage: they allow agents to explore, interact, and learn in the abstract environment of language before encountering the complexities of the embodied environment. While fields such as robotic control use simulators like MuJoCo (Todorov et al., 2012) to provide infinite data through interaction, there has been no analogous mechanism, short of hiring a human around the clock, for providing linguistic feedback and annotations to an embodied agent.
TextWorld addresses this discrepancy by providing programmatic and aligned linguistic signals during agent exploration. This facilitates the first work, to our knowledge, in which an embodied agent learns the meaning of complex multi-step policies, expressed in language, directly through interaction. Empowered by the ALFWorld framework, we introduce BUTLER (Building Understanding in Textworld via Language for Embodied Reasoning), an agent that first learns to perform abstract tasks in TextWorld using Imitation Learning (IL) and then transfers the learned policies to embodied tasks in ALFRED. When operating in the embodied world, BUTLER leverages the abstract understanding gained from TextWorld to generate text-based actions; these serve as high-level subgoals that facilitate physical action generation by a low-level controller. Broadly, we find that BUTLER is capable of generalizing in a zero-shot manner from TextWorld to unseen embodied tasks and settings. Our results show that training first in the abstract text-based environment is not only 7× faster, but also yields better performance than training from scratch in the embodied world. These results lend credibility to the hypothesis that solving abstract language-based tasks can help build priors that enable agents to generalize to unfamiliar embodied environments. Our contributions are as follows: § 2 ALFWorld environment: the first parallel interactive text-based and embodied environment. § 3 BUTLER architecture: an agent that learns high-level policies in language that transfer to low-level embodied executions, and whose modular components can be independently upgraded. § 4 Generalization: we demonstrate empirically that BUTLER, trained in the abstract text domain, generalizes better to unseen embodied settings than agents trained from corpora of demonstrations or from scratch in the embodied world.

2 ALIGNING ALFRED AND TEXTWORLD.

The ALFRED dataset (Shridhar et al., 2020), set in the THOR simulator (Kolve et al., 2017), is a benchmark for learning to complete embodied household tasks using natural language instructions and egocentric visual observations. As shown in Figure 1 (right), ALFRED tasks pose challenging interaction and navigation problems to an agent in a high-fidelity simulated environment. Tasks are annotated with a goal description that describes the objective (e.g., "put a pan on the dining table"). We consider both template-based and human-annotated goals; further details on goal specification can be found in Appendix H. Agents observe the world through high-dimensional pixel images and interact using low-level action primitives: MOVEAHEAD, ROTATELEFT/RIGHT, LOOKUP/DOWN, PICKUP, PUT, OPEN, CLOSE, and TOGGLEON/OFF. (Note: throughout this work, for clarity of exposition, we use ALFRED to refer to both the tasks and the grounded simulation environment, but rendering and physics are provided by THOR (Kolve et al., 2017).) The ALFRED dataset also includes crowdsourced language instructions like "turn around and walk over to the microwave" that explain how to complete a goal in a step-by-step manner. We depart from the ALFRED challenge by omitting these step-by-step instructions and focusing on the more difficult problem of using only goal descriptions specifying what needs to be achieved.
Our aligned ALFWorld framework adopts six ALFRED task-types (Table 1) of various difficulty levels (to start with, we focus on a subset of the ALFRED dataset for training and evaluation that excludes tasks involving slicing objects or using portable containers, e.g., bowls). Tasks involve first finding a particular object, which often requires the agent to open and search receptacles like drawers or cabinets. Subsequently, all tasks other than Pick & Place require some interaction with the object, such as heating (place the object in a microwave and start it) or cleaning (wash the object in a sink). To complete the task, the object must be placed in the designated location. Within each task category there is significant variation: the embodied environment includes 120 rooms (30 kitchens, 30 bedrooms, 30 bathrooms, 30 living rooms), each dynamically populated with a set of portable objects (e.g., apple, mug) and static receptacles (e.g., microwave, fridge). For each task type we construct a larger train set, as well as seen and unseen validation evaluation sets: (1) seen consists of known task instances {task-type, object, receptacle, room} in rooms seen during training, but with different instantiations of object locations, quantities, and visual appearances (e.g., two blue pencils on a shelf instead of three red pencils in a drawer seen in training); (2) unseen consists of new task instances with possibly known object-receptacle pairs, but always in unseen rooms with different receptacles and scene layouts than in training tasks. The seen set is designed to measure in-distribution generalization, whereas the unseen set measures out-of-distribution generalization. The scenes in ALFRED are visually diverse, so even the same task instance can lead to very distinct tasks, e.g., involving differently colored apples, shaped statues, or textured cabinets. For this reason, purely vision-based agents such as the unimodal baselines in Section 5.2 often struggle to generalize to unseen environments and objects. The TextWorld framework (Côté et al., 2018) procedurally generates text-based environments for training and evaluating language-based agents. In order to extend TextWorld to create text-based analogs of each ALFRED scene, we adopt a common latent structure representing the state of the simulated world. ALFWorld uses PDDL, the Planning Domain Definition Language (McDermott et al., 1998), to describe each scene from ALFRED and to construct an equivalent text game using the TextWorld engine. The dynamics of each game are defined by the PDDL domain (see Appendix C for additional details). Textual observations shown in Figure 1 are generated with templates sampled from a context-sensitive grammar designed for the ALFRED environments. For interaction, TextWorld environments use the following high-level actions: goto {recep}, take {obj} from {recep}, put {obj} in/on {recep}, open {recep}, close {recep}, toggle {obj} {recep}, clean {obj} with {recep}, heat {obj} with {recep}, cool {obj} with {recep}, where {obj} and {recep} correspond to objects and receptacles (see the sketch below). Note that heat, cool, clean, and goto are high-level actions that each correspond to several low-level embodied actions. ALFWorld, in summary, is a cross-modal framework featuring a diversity of embodied tasks with analogous text-based counterparts. Since both components are fully interactive, agents may be trained in either the language or embodied world and evaluated on held-out test tasks in either modality.
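To illustrate the interface, the high-level commands above can be instantiated into surface-form action strings for whatever objects and receptacles are in scope; this enumeration is our own illustrative sketch, not ALFWorld's internal implementation.

```python
TEMPLATES = [
    "goto {recep}", "open {recep}", "close {recep}",
    "take {obj} from {recep}", "put {obj} in/on {recep}",
    "toggle {obj} {recep}", "clean {obj} with {recep}",
    "heat {obj} with {recep}", "cool {obj} with {recep}",
]

def candidate_actions(objs, receps):
    """Enumerate action strings for the given object/receptacle instances."""
    actions = []
    for t in TEMPLATES:
        if "{obj}" in t:
            actions += [t.format(obj=o, recep=r) for o in objs for r in receps]
        else:
            actions += [t.format(recep=r) for r in receps]
    return actions

# candidate_actions(["pan 1"], ["stove 1", "diningtable 1"]) contains, e.g.,
# "take pan 1 from stove 1" and "put pan 1 in/on diningtable 1".
```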
We believe the equivalence between objects and interactions across modalities makes ALFWorld an ideal framework for studying language grounding and cross-modal learning.

3 INTRODUCING BUTLER: AN EMBODIED MULTI-TASK AGENT.

We investigate learning in the abstract language modality before generalizing to the embodied setting. The BUTLER agent uses three components to span the language and embodied modalities: BUTLER::BRAIN, the abstract text agent; BUTLER::VISION, the language state estimator; and BUTLER::BODY, the low-level controller. An overview of BUTLER is shown in Figure 2, and each component is described below. 3.1 BUTLER::BRAIN (TEXT AGENT): (o_0, o_t, g) → a_t. BUTLER::BRAIN is a novel text-based game agent that generates high-level text actions in a token-by-token fashion, akin to Natural Language Generation (NLG) approaches for dialogue (Sharma et al., 2017) and summarization (Gehrmann et al., 2018). An overview of the agent's architecture is shown in Figure 3. At game step t, the encoder takes the initial text observation o_0, the current observation o_t, and the goal description g as input and generates a context-aware representation of the current observable game state. The observation o_0 explicitly lists all the navigable receptacles in the scene, and the goal g is sampled from a set of language templates (see Appendix H). Since the games are partially observable, the agent only has access to the observation describing the effects of its previous action and its present location. Therefore, we incorporate two memory mechanisms to imbue the agent with history: (1) a recurrent aggregator, adapted from Yuan et al. (2018), combines the encoded state with the recurrent state h_{t-1} from the previous game step; (2) an observation queue feeds in the k most recent, unique textual observations. The decoder generates an action sentence a_t token by token to interact with the game. The encoder and decoder are based on a Transformer Seq2Seq model with a pointer softmax mechanism (Gulcehre et al., 2016). We leverage pre-trained BERT embeddings (Sanh et al., 2019), and tie output embeddings with input embeddings (Press and Wolf, 2016). The agent is trained in an imitation learning setting with DAgger (Ross et al., 2011) using expert demonstrations. See Appendix A for complete details. When solving a task, an agent might get stuck at certain states due to various failures (e.g., the action is grammatically incorrect, or uses a wrong object name). The observation for a failed action does not contain any useful feedback, so a fully deterministic actor tends to repeatedly produce the same incorrect action. To address this problem, during evaluation in both TextWorld and ALFRED, BUTLER::BRAIN uses Beam Search (Reddy et al., 1977) to generate alternative action sentences in the event of a failed action, but otherwise greedily picks the best word at each step for efficiency. Note that Beam Search is not used to optimize over embodied interactions like prior work (Wang et al., 2019), but rather to simply improve the generated action sentence during failures. 3.2 BUTLER::VISION (STATE ESTIMATOR): v_t → o_t. At test time, agents in the embodied world must operate purely from visual input.
To this end , BUTLER : :VISION ' s language state estimator functions as a captioning module that translates visual observations vt into textual descriptions ot . Specifically , we use a pre-trained Mask R-CNN detector ( He et al. , 2017 ) to identify objects in the visual frame . The detector is trained separately in a supervised setting with random frames from ALFRED training scenes ( see Appendix D ) . For each frame vt , the detector generates N detections { ( c1 , m1 ) , ( c2 , m2 ) , . . . , ( cN , mN ) } , where cn is the predicted object class and mn is a pixel-wise object mask . These detections are formatted into a sentence using a template , e.g. , On table 1 , you see a mug 1 , a tomato 1 , and a tomato 2 . To handle multiple instances of objects , each object is associated with a class cn and a number ID , e.g. , tomato 1 . Commands goto , open , and examine generate a list of detections , whereas all other commands generate affirmative responses if the action succeeds , e.g. , at : put mug 1 on desk 2 → ot+1 : You put mug 1 on desk 2 , and otherwise produce Nothing happens to indicate failures or no state-change . See Appendix G for a full list of templates . While this work presents preliminary results with template-based descriptions , future work could generate more descriptive observations using pre-trained image-captioning models ( Johnson et al. , 2016 ) , video-action captioning frameworks ( Sun et al. , 2019 ) , or scene-graph parsers ( Tang et al. , 2020 ) .

3.3 BUTLER : :BODY ( CONTROLLER ) : vt , at → { â1 , â2 , . . . , âL }
The controller translates a high-level text action at into a sequence of L low-level physical actions { â1 , â2 , . . . , âL } that are executable in the embodied environment . The controller handles two types of commands : manipulation and navigation . For manipulation actions , we use the ALFRED API to interact with the simulator by providing an API action and a pixel-wise mask based on the Mask R-CNN detections mn that were produced during state estimation . For navigation commands , each episode is initialized with a pre-built grid-map of the scene , where each receptacle instance is associated with a receptacle class and an interaction viewpoint ( x , y , θ , φ ) , with x and y representing the 2D position , and θ and φ representing the agent ' s yaw rotation and camera tilt . The goto command invokes an A* planner to find the shortest path between two viewpoints . The planner outputs a sequence of L displacements in terms of motion primitives : MOVEAHEAD , ROTATERIGHT , ROTATELEFT , LOOKUP , and LOOKDOWN , which are executed in an open-loop fashion via the ALFRED API . We note that a given pre-built grid-map of receptacle locations is a strong prior assumption , but future work could incorporate existing models from the vision-language navigation literature ( Anderson et al. , 2018a ; Wang et al. , 2019 ) for map-free navigation .
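To make the templating in Section 3.2 concrete , the following minimal Python sketch ( our illustration , not the released BUTLER code ) turns a list of predicted object classes into a textual observation with per-class instance IDs , mirroring the mug/tomato example above .

```python
# Minimal sketch (assumption, not the released BUTLER code) of the
# state-estimation templating described in Section 3.2: Mask R-CNN
# class predictions are numbered per class and verbalized with a template.
from collections import defaultdict

def detections_to_observation(receptacle: str, classes: list) -> str:
    """Turn predicted object classes into a templated text observation,
    e.g. ['mug', 'tomato', 'tomato'] ->
    'On table 1, you see a mug 1, a tomato 1, and a tomato 2.'"""
    counts = defaultdict(int)
    names = []
    for c in classes:
        counts[c] += 1
        names.append(f"a {c} {counts[c]}")  # class name + instance ID
    if not names:
        return f"On {receptacle}, you see nothing."
    if len(names) == 1:
        listing = names[0]
    else:
        listing = ", ".join(names[:-1]) + f", and {names[-1]}"
    return f"On {receptacle}, you see {listing}."

print(detections_to_observation("table 1", ["mug", "tomato", "tomato"]))
```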
The paper presents a new interactive environment that pairs text-based games with aligned visual simulations. The authors also propose a first agent architecture, named BUTLER, which uses both the visual and the text-based observations. The authors tested the generalization capabilities of the proposed BUTLER architecture against a seq2seq transformer baseline.
Detection Booster Training: A detection booster training method for improving the accuracy of classifiers.
Deep learning models owe their success , by and large , to the availability of a large amount of annotated data . They try to extract features from the data that contain useful information needed to improve their performance on target applications . Most works focus on directly optimizing the target loss functions to improve the accuracy by allowing the model to implicitly learn representations from the data . There has not been much work on using background/noise data to estimate the statistics of in-domain data to improve the feature representation of deep neural networks . In this paper , we probe this direction by deriving a relationship between the estimation of unknown parameters of the probability density function ( pdf ) of input data and classification accuracy . Using this relationship , we show that having a better estimate of the unknown parameters using background and in-domain data provides better features , which leads to better accuracy . Based on this result , we introduce a simple but effective detection booster training ( DBT ) method that applies a detection loss function on the early layers of a neural network to discriminate in-domain data points from noise/background data , to improve the classifier accuracy . The background/noise data comes from the same family of pdfs of input data but with different parameter sets ( e.g. , mean , variance ) . In addition , we also show that our proposed DBT method improves the accuracy even with limited labeled in-domain training samples as compared to normal training . We conduct experiments on face recognition , image classification , and speaker classification problems and show that our method achieves superior performance over strong baselines across various datasets and model architectures .

1 INTRODUCTION
Modern pattern recognition systems achieve outstanding accuracies on a vast domain of challenging computer vision , natural language , and speech recognition benchmarks ( Russakovsky et al . ( 2015 ) ; Lin et al . ( 2014 ) ; Everingham et al . ( 2015 ) ; Panayotov et al . ( 2015 ) ) . The success of deep learning approaches relies on the availability of a large amount of annotated data and on extracting useful features from them for different applications . Learning rich feature representations from the available data is a challenging problem in deep learning . A related line of work includes learning deep latent space embeddings through deep generative models ( Kingma & Welling ( 2014 ) ; Goodfellow et al . ( 2014 ) ; Berthelot et al . ( 2019 ) ) or using self-supervised learning methods ( Noroozi & Favaro ( 2016 ) ; Gidaris et al . ( 2018 ) ; Zhang et al . ( 2016b ) ) or through transfer learning approaches ( Yosinski et al . ( 2014 ) ; Oquab et al . ( 2014 ) ; Razavian et al . ( 2014 ) ) . In this paper , we propose to use a different approach to improve the feature representations of deep neural nets and eventually improve their accuracy by estimating the unknown parameters of the probability density function ( pdf ) of input data . Parameter estimation or point estimation methods are well studied in the field of statistical inference ( Lehmann & Casella ( 1998 ) ) . The insights from the theory of point estimation can help us to develop better deep model architectures for improving the model ' s performance . We make use of this theory to derive a correlation between the estimation of unknown parameters of the pdf and classifier outputs .
However , directly estimating the unknown pdf parameters for practical problems such as image classification is not feasible , since it can amount to millions of parameters . In order to overcome this bottleneck , we assume that the input data points are sampled from a family of pdfs instead of a single pdf and propose to use a detection-based training approach to better estimate the unknowns using in-domain and background/noise data . One alternative is to use generative models for this task ; however , they mimic the general distribution of training data conditioned on random latent vectors and hence can not be directly applied to estimating the unknown parameters of a family of pdfs . Our proposed detection method involves a binary class discriminator that separates the target data points from noise or background data . The noise or background data is assumed to come from the same family of distributions as the in-domain data but with different moments ( please refer to the appendix for more details about the family of distributions and its extension to a general structure ) . In image classification , this typically represents the background patches from input data that fall under the same distribution family . In the speech domain , it can be random noise or the silence intervals in speech data . Collecting such background data to improve the feature representations is much simpler as compared to using labeled training data , since it is time-consuming and expensive to collect labeled data . Since the background patches in images or noise in speech signals are used for binary classification in our method , we refer to such data as the noise of an auxiliary binary classification problem and denote it as the auxiliary binary classification ( ABC ) -noise dataset . An advantage of using ABC-noise data during training is that it can implicitly add robustness to deep neural networks against background or noisy data . Since ABC-noise data can be collected in large quantities for free and using that data in our approach improves the classification benchmarks , we investigate whether this data can act as a substitute for labeled data . We conduct an empirical analysis and show that using only a fraction of the labeled training data together with ABC-noise data in our DBT method indeed improves the accuracy as compared to normal training . To summarize , our contributions are threefold . First , we present a detailed theoretical analysis of the relation between the estimation of unknown parameters of the pdf of data and classification outputs . Second , based on the theoretical analysis , we present a simple booster training method to improve classification accuracy , which also doubles up as an augmented training method when only limited labeled data is available . Third , we consistently achieve improved performances over strong baselines on face recognition , image classification , and speaker recognition problems using our proposed method , showing its generalization across different domains and model architectures .

2 RELATED WORK
Notations and Preliminary : In this paper , vectors , matrices , functions , and sets are denoted by bold lower case , bold upper case , lower case , and calligraphic characters , respectively . Consider a data point denoted by x . We assume that x belongs to a family of probability density functions ( pdfs ) defined as P = { p ( x , θ ) , θ ∈ Θ } , where Θ is the possible set of parameters of the pdf .
In general , θ is a real vector in higher dimensions . For example , in a mixture of Gaussians , θ is a vector containing the component weights , the component means , and the component covariance matrices . In this paper , we assume that θ is an unknown deterministic vector ( there are other approaches , such as Bayesian ones , that consider θ as a random vector ) . In general , although the structure of the family of pdfs is itself unknown , defining a family of pdfs such as P can help us to develop theorems and use those results to derive a new method . For the family of distributions P , we can define the following classification problem

{ C1 : θ ∈ Θ1 , C2 : θ ∈ Θ2 , · · · , Cn : θ ∈ Θn } ( 1 )

where the set of Θi ' s is a partition of Θ . The notation of ( 1 ) means that class Ci deals with the set of data points whose pdf is p ( x , θi ) with θi ∈ Θi . A wide range of classification problems can be defined using ( 1 ) , e.g. , ( Lehmann & Casella , 2006 , Chapter 3 ) and ( Duda et al. , 2012 , Chapter 4 ) . The problem of estimating θ comes under the category of parametric estimation or point estimation ( Lehmann & Casella ( 1998 ) ) . Estimating the unknown parameters of a given pdf p ( x , θ ) has been extensively studied in the field of point estimation methods ( Lindgren ( 2017 ) ; Lee et al . ( 2018 ) ; Lehmann & Casella ( 2006 ) ) . An important estimator in this field is the minimum variance unbiased estimator , and it is governed by the Cramer-Rao bound . The Cramer-Rao bound provides the lower bound on the variance of an unbiased estimator ( Bobrovsky et al . ( 1987 ) ) . Let the estimate of θ be denoted by θ̂ , and assume that θ̂ is an unbiased estimator , i.e. , E ( θ̂ ) = θ . Its covariance matrix , denoted by Σ_θ̂ , satisfies Σ_θ̂ − I⁻¹ ( θ ) ⪰ 0 , where A ⪰ 0 implies that A is a non-negative definite matrix ( Lehmann & Casella , 1998 , Chapter 5 ) and I ( θ ) := −E ( ∂² log ( p ( x , θ ) ) / ∂θ² ) is called the Fisher information matrix . For an arbitrary differentiable function g ( · ) , an efficient estimator of g ( θ ) is an unbiased estimator whose covariance matrix equals I_g⁻¹ ( θ ) , where I_g ( θ ) is the Fisher information matrix of g ( θ ) , i.e. , the efficient estimator achieves the lowest possible variance among all unbiased estimators . The efficient estimator can be obtained from the factorization ∂ log ( p ( x , θ ) ) / ∂g ( θ ) = I_g ( θ ) ( ĝ ( x ) − g ( θ ) ) , if it exists ( Rao ( 1992 ) ; Lehmann & Casella ( 1998 ) ) . Based on these results , we derive a relationship between the efficient estimation of unknowns and the maximum likelihood classifier of ( 1 ) and use auxiliary binary classifiers to apply that result in our proposed DBT method . Parameter Estimations : Independent component analysis ( Hyvärinen ( 1999 ) ) decomposes a multivariate signal into independent non-Gaussian signals . ICA can extract non-Gaussian features from Gaussian noise . Additionally , there is a class of classifiers called generalized likelihood ratio functions that plugs the estimates of the unknown parameters into the likelihood functions . This approach provides a huge improvement in the field of parametric classifiers , where the family of pdfs of the data is given ( Zeitouni et al . ( 1992 ) , Conte et al . ( 2001 ) , Lehmann & Casella ( 2006 ) ) .
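As a side illustration ( ours , not from the paper ) , the following short numpy experiment checks the Cramer-Rao bound discussed above for the mean of a Gaussian with known variance : the sample mean is the efficient minimum variance unbiased estimator , and its empirical variance matches I⁻¹ ( θ ) = σ² / n .

```python
# Standard numerical illustration (not from the paper) of the Cramer-Rao
# bound: for n i.i.d. samples from N(theta, sigma^2) with sigma known,
# the Fisher information is n / sigma^2, and the sample mean is the
# efficient (minimum variance unbiased) estimator of theta.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, trials = 2.0, 3.0, 50, 200_000

samples = rng.normal(theta, sigma, size=(trials, n))
theta_hat = samples.mean(axis=1)              # MVUE of the Gaussian mean

crb = sigma**2 / n                            # inverse Fisher information
print(f"empirical variance of estimator: {theta_hat.var():.5f}")
print(f"Cramer-Rao lower bound:          {crb:.5f}")  # the two should agree
```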
Noise-contrastive estimation ( NCE ) ( Gutmann & Hyvärinen ( 2010 ) ) involves training a generative model that allows a model to discriminate data from a fixed noise distribution . Then , this trained model can be used for training a sequence of models of increasing quality . This can be seen as an informal competition mechanism similar in spirit to the formal competition used in the adversarial networks game . In Bachman et al . ( 2019 ) , a feature selection is proposed by maximizing the mutual information between features extracted from multiple views of a shared context . In that work , it is shown that the best results are given by using a mutual information bound based on NCE . The key difference between our method and NCE is that we do not construct a generative model for noise . Instead of estimating the pdf of noise as in NCE , we estimate the parameters of the pdf of the in-domain dataset using an auxiliary class that has many common parameters in its pdf . Moreover , we show that the estimation of those parameters is a sufficient statistic for a classifier . We assume that the noise dataset is not pure and has some similarity with the in-domain dataset , where it can help the feature selection layers to select relevant ( in-domain ) features , e.g. , see Fig . 3 . Further , in our approach , we do not construct the pdf of noise or in-domain data ; instead , we estimate its parameters directly , which is more efficient in terms of training , computation and also dimensionality reduction . Auxiliary classifiers were introduced in inception networks ( Szegedy et al . ( 2015 ) ) and used in ( Lee et al . ( 2015 ) ; Szegedy et al . ( 2016 ) ) for training very deep networks to prevent vanishing gradient problems . Further , auxiliary classifiers were also proposed for early exit schemes ( Teerapittayanon et al . ( 2016 ) ) and self-distillation methods ( Zhang et al . ( 2019a ; b ) ) . Such auxiliary classifiers tackle different problems by predicting the same target as the final classification layer . In contrast , our proposed DBT method involves auxiliary binary classifiers that detect noise , interference , and/or background data from in-domain data points for improving the target classification accuracy .

3 ESTIMATION OF PARAMETERS OF PDF AND CLASSIFICATION
For ( 1 ) , we define a deterministic discriminative function of Θi , denoted by ti ( · ) , such that the following conditions are satisfied :
• ti ( · ) maps Θ to real numbers such that ti ( θ ) > 0 if θ ∈ Θi , and ti ( θ ) ≤ 0 for θ /∈ Θi .
• ti ( · ) is a differentiable function almost everywhere and ∫_Θ | ti ( θ ) | dµl ( θ ) < ∞ , where µl denotes the Lebesgue measure .
The following theorem shows the relationship between ti ( · ) and the log-likelihood ratio of class Ci versus the other classes . The proofs of Theorems 1 , 2 and 3 are provided in the appendix .
Theorem 1 Assume that the pdf p ( x , θ ) is differentiable with respect to θ almost everywhere .
If the efficient minimum variance and unbiased estimation of a deterministic discriminative function of Θi exists , then the log likelihood ratio of class i against the rest of the classes is an increasing function of the minimum variance and unbiased estimate of ti ( θ ) . Directly from this theorem , it follows that the optimal classifier using the maximum likelihood for ( 1 ) is given by d ( x ) = arg max_{i ∈ { 1 , · · · , n } } ki ( t̂i ( x ) ) , where the ki ' s are some increasing functions and the ti ( · ) ' s are the deterministic discriminative functions of the Θi ' s such that the efficient minimum variance and unbiased estimation for them exists . Based on this result , a set of minimum variance and unbiased estimates of the deterministic discriminative functions of the Θi ' s leads us to the maximum likelihood classifier . One approach is to directly estimate the deterministic discriminative functions , instead of maximizing the likelihood function . However , finding deterministic discriminative functions that have efficient minimum variance and unbiased estimations may not be feasible in practical problems , especially when the dimension of θ increases . Theorems 2 and 3 study the same relationship between the estimation of unknown parameters and the accuracy of classifiers for sub-optimal estimators and classifiers .
Theorem 2 Consider the output of two classifiers for the ith class as follows : rj ( x ) = i if hj ( x ) > τ and rj ( x ) = other classes if hj ( x ) < τ , where j ∈ { 1 , 2 } , hj ( x ) is the estimate of a deterministic discriminative function , and τ is a classification threshold . Assume that the cumulative distribution functions of the hj ( x ) ' s have bounded inflection points , and also that the probability of true positive of rj ( x ) is an increasing function of d ( θ ) , the deterministic discriminative function of class i , for all i . Further assume that for each τ the probability of false positive of r1 ( x ) is less than the probability of false positive of r2 ( x ) and the probability of true positive of r1 ( x ) is greater than the probability of true positive of r2 ( x ) . Then , there exists an hmin such that for all d ( θ ) > hmin , all θ , and all ε > 0 we have Pr ( | h1 ( x ) − d ( θ ) | < ε ) > Pr ( | h2 ( x ) − d ( θ ) | < ε ) .
Theorem 2 shows that a better classifier leads to a better estimation of d ( θ ) . In the next theorem , we show the dual property of this result .
Theorem 3 Let Θm be a Borel set with positive Lebesgue measure in ( 1 ) for all m ∈ { 1 , · · · , n } . Assume that r1 ( · ) and r2 ( · ) are given as follows : r1 ( x ) = m if θ̂1 ∈ Θm , and r2 ( x ) = m if θ̂2 ∈ Θm . Also , assume that Pr ( ‖θ̂1 − θ‖ ≤ ε ) ≥ Pr ( ‖θ̂2 − θ‖ ≤ ε ) for all θ ∈ Θ = ∪_{m=1}^{n} Θm and all ε > 0 . Then the probability of classification error of r1 ( · ) is less than that of r2 ( · ) , where θ̂1 and θ̂2 are two different estimators of θ ∈ Θ .
Theorem 3 proves that a more accurate estimator leads to a classifier that has a lower probability of classification error . From Theorem 1 , we can infer that a sufficient statistic for developing the maximum likelihood classification is t̂i ( x ) , which is the efficient minimum variance and unbiased estimate of the deterministic discriminative function of Θi denoted by ti ( θ ) . In other words , the maximum likelihood classifier is a function of x only via the efficient minimum variance and unbiased estimate of ti ( θ ) .
We can estimate ti ( θ ) by plugging the estimate of θ into ti ( · ) , i.e. , t̂i ( x ) ≈ ti ( θ̂ ) , where θ̂ is a function of x . From the above theorems , we conclude that improving the estimation of the unknown parameters of the pdf of data can improve the accuracy of the classifier . On the other side , having a good classifier means having a good estimator of the unknowns of the pdf of input data . In many practical problems , the optimal maximum likelihood classifier may not be achievable , but the likelihood function of the classifier provides an optimal bound on the probability of error . In such cases , we can improve the accuracy of sub-optimal classifiers , and that is the main focus of this paper . Fig . 1 illustrates the proposed theorems visually .

4 PROPOSED METHOD : DETECTION BOOSTER TRAINING ( DBT )
In this section , we propose the detection booster training ( DBT ) method , based on the theorems obtained in the previous section , to improve the accuracy of deep networks . Specifically , we divide a deep model into two parts : early and later layers . We apply a detector ( detection here means detecting a target pattern from noise/background ) on the early layers of the neural network in order to improve the estimation of the unknown parameters of the family of pdfs ( based on Theorem 2 ) . A better estimation of the unknown parameters corresponds to better feature representations in the early layers , and these features are input to the rest of the layers to construct the deterministic discriminative functions ( DDF ) useful for in-domain data classification ( based on Theorem 3 ) . A general schema for dividing a deep model into two sub-models , namely PEF ( parameter estimator functions ) and DDF , is depicted in Figure 2 . The early layers of the model estimate the unknown parameters of the pdf of data while the later layers construct the discriminative functions essential for classification . Based on this scheme , we formally define the three main components of DBT as follows :
• parameter estimator functions ( PEF ) : the sub-network from the input layer to the kth layer , where k is a hyperparameter in the DBT approach .
• auxiliary binary classification ( ABC ) : some additional layers attached to the end of PEF , mapping the output of the kth layer to a one-dimensional output .
• deterministic discriminative functions ( DDF ) : the sub-network from the kth layer to the output of the model . The output of the model is a vector whose length equals the number of classes n .
From Theorem 2 , we showed that unknown parameter estimation can be improved using a detection approach . During training , we apply a binary classification on the early layers ( PEF ) of the model to improve the estimation of the unknown parameters of the pdf and subsequently provide rich feature vectors for DDF . We define the auxiliary binary classification problem ( ABC problem ) as follows :
• Class 1 ( alternative hypothesis ) of the ABC problem , denoted by H1 , is the set of all data points of classes C1 to Cn , i.e. , θ ∈ ∪_{i=1}^{n} Θi .
• Class 0 ( null hypothesis ) of the ABC problem , denoted by H0 , is a dataset of data points from the same distribution p ( x , θ ) but with θ /∈ ∪_{i=1}^{n} Θi . We also define the dataset of Class 0 of ABC as the ABC-noise dataset , i.e. , the ABC is given by the following hypothesis testing problem : H1 : θ ∈ ∪_{i=1}^{n} Θi versus H0 : θ /∈ ∪_{i=1}^{n} Θi .
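As a concrete reading of this decomposition , the following minimal PyTorch sketch ( our illustration ; the paper ' s experiments use TensorFlow ) wires up the PEF/ABC/DDF split and the masked total loss that is formalized in Eq . ( 2 ) below . All layer sizes , the split point , and the tensor values are hypothetical .

```python
# Minimal PyTorch sketch (our illustration, not the authors' code) of the
# PEF/ABC/DDF split and the masked total loss of Eq. (2) below.
# Layer sizes and the split point k are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DBTNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.pef = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # early layers (PEF)
        self.abc = nn.Linear(hidden, 1)                                 # auxiliary binary head (ABC)
        self.ddf = nn.Linear(hidden, n_classes)                         # later layers (DDF)

    def forward(self, x):
        z = self.pef(x)
        return self.abc(z).squeeze(-1), self.ddf(z)

def dbt_loss(model, x, l_abc, l_mc, lam=1.0):
    """l_abc: 1 for in-domain points, 0 for ABC-noise; l_mc: class id (ignored for noise)."""
    abc_logit, mc_logits = model(x)
    loss = F.binary_cross_entropy_with_logits(abc_logit, l_abc.float())
    # the multi-class term is masked out for ABC-noise points (l_abc = 0)
    ce = F.cross_entropy(mc_logits, l_mc.clamp(min=0), reduction="none")
    return loss + lam * (l_abc.float() * ce).mean()

model = DBTNet()
x = torch.randn(8, 784)
l_abc = torch.tensor([1, 1, 1, 1, 0, 0, 1, 0])   # zeros mark ABC-noise points
l_mc = torch.tensor([3, 0, 7, 2, -1, -1, 5, -1])  # -1 stands for "None" on noise points
print(dbt_loss(model, x, l_abc, l_mc).item())
```

At inference , only the PEF and DDF branches would be used , matching the text ' s remark that the auxiliary classifier adds no inference-time cost .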
In many practical problems , the noise , background or interference data related to the in-domain dataset have the same type of probability distribution but different pdf parameters . Hence , using that dataset is a cheap and adept choice for the null hypothesis of ABC . The auxiliary binary classification problem influences only the PEF and ABC units , while the main classification problem with n classes updates the parameters of both PEF and DDF using in-domain data . Since the auxiliary classifier is only used during training , the inference model ( IM ) consists of only PEF and DDF and hence there is no additional computation cost during inference . We formulate the aforementioned method in the following notations and loss functions . Assume that x is a data point that belongs to class Ci , i ∈ { 1 , · · · , n } , or class H0 of ABC . Here , we define two types of labels denoted by lABC and lMC , where the subscript " MC " stands for multi-class . So , if x belongs to class Ci , then lABC = 1 and lMC = i − 1 ; else , if x is an ABC-noise data point , lABC = 0 and lMC is None . Therefore , the loss function is defined as :

Ltot = LABC ( QABC ( QPEF ( x ) ) , lABC ) + λ lABC LMC ( QDDF ( QPEF ( x ) ) , lMC ) , ( 2 )

where QPEF , QABC and QDDF are the functions of the PEF , ABC and DDF blocks , respectively . We set the hyperparameter λ = 1 to balance the two loss terms . It is seen that the second term of the total loss is zero if lABC = 0 . LABC and LMC are selected based on the problem definition and datasets . For classification , a simple selection for them can be binary cross-entropy and cross-entropy , respectively . For a given task and deep neural network , the choice of k and LABC influences the feature representation of the early layers differently and consequently the accuracy of the model . We provide empirical studies in the next section to verify the same .

5 EXPERIMENTAL STUDY OF DBT
FACE RECOGNITION
We conduct experiments on face recognition benchmarks and show that the DBT method learns rich features essential for face recognition . We also discover an important observation that current state-of-the-art ( SOTA ) face recognition models are very sensitive to non-face data , in particular , animal faces . Fig . 4 shows a few examples of misidentified faces and their corresponding animal distractors from the IJB-B dataset using the ArcFace ( Deng et al . ( 2019 ) ) model . We show that our DBT method not only improves the verification accuracy but also implicitly tackles this robustness issue of current models against non-face data . Implementation details are provided in the appendix . We consider the PEF discussed in Section 4 to be the first three layers of the model and DDF to be the rest of the layers . Ablation studies on the choice of PEF and DDF are provided in the supplementary material . We define LMC in ( 2 ) as the SOTA ArcFace loss function proposed in ( Deng et al . ( 2019 ) ) . The ABC-noise is a non-face dataset containing 500K images that we collected from background patches of MS1MV2 ( Guo et al . ( 2016 ) ) ( more details in the appendix ) . We experimented with two different loss functions for LABC . For the first one , since popular face recognition models ( Deng et al . ( 2019 ) ; Wang et al . ( 2018 ) ) use normalized output features and compute the losses on a hypersphere , we select LABC as follows .
Let pf ∈ Rd and pnf ∈ Rd denote the prototypes for faces and non-faces , respectively . Following ( Mettes et al . ( 2019 ) ) , we constrain the face/non-face prototypes to diametrically opposite directions , i.e. , cos ( θ_{pf , pnf} ) = −1 , and normalize the output feature vectors for faces and non-faces such that ‖pfi‖ = ‖pnfi‖ = 1 . We then define LABC as

LABC = − ( 1/N ) Σ_{i=1}^{N} log ( e^{ s ( cos ( m1 θyi + m2 ) − m3 ) } / ( e^{ s ( cos ( m1 θyi + m2 ) − m3 ) } + e^{ s cos θ2 } ) ) + ( 1/N ) Σ_{i=1}^{N} ( −1 − ⟨ pfi , pnfi ⟩ )² , ( 3 )

where θyi and θ2 correspond to the angles between the weights and the features for face and non-face labels , respectively ; m1 , m2 , m3 are the angular margins ; s denotes the radius of the hypersphere . For the second choice , we use simple binary cross entropy for LABC . Table 1 shows that the verification accuracy on LFW ( Huang et al . ( 2007 ) ) using ( 3 ) is 0.16 % higher than with the simple cross entropy loss . This also shows that choosing a task-specific LABC is essential in obtaining more accurate results . We use Eqn . ( 3 ) as the default for LABC in all our face recognition experiments , unless otherwise stated . Table 3 compares the verification accuracy of our method versus the current SOTA method ArcFace on five different test sets : LFW , CPLFW ( Zheng & Deng ( 2018 ) ) , CALFW ( Zheng et al . ( 2017 ) ) , CFP-FP ( Sengupta et al . ( 2016 ) ) and AgeDB-30 ( Moschoglou et al . ( 2017 ) ) . For the LFW test set , we follow the unrestricted-with-labeled-outside-data protocol to report the performance . We trained ResNet-50 and ResNet-100 using the ArcFace and DBT approaches on the CASIA ( small ) and MS1MV2 ( large ) datasets , respectively . The results show that the DBT method outperforms ArcFace on all datasets . Table 7 shows the angle statistics of the trained ArcFace and DBT models on the LFW dataset . Min . Inter and Inter refer to the mean of minimum angles and the mean of all angles between the template embedding features of different classes ( the mean of the embedding features of all images for each class ) , respectively . Intra refers to the mean of angles between xi and the template embedding feature for each class . From Table 7 , we infer that DBT extracts better face features and hence reduces the intra-class variations . Directly from Tables 3 and 7 , we infer that , first , DBT consistently improves the accuracy on all test sets . Second , learning better features in the early layers is crucial to obtain rich face feature embeddings . Third , the achieved gain using DBT is more pronounced on models trained using the smaller ( CASIA ) dataset ( it has fewer identities and images ) . This shows that DBT can address the issue of the lack of in-domain data using cheap ABC-noise data . We also provide the results of training Inception-ResNet-V1 and ResNet-64 models using DBT on MS1MV2 to show the generalization capacity of the DBT method . For Inception-ResNet-V1 and ResNet-64 , the PEF is set to be the first six layers and the DDF is the rest of the model . We use the large margin cosine loss ( LMCL ) ( Wang et al . ( 2018 ) ) for LMC and cross entropy ( CE ) for LABC . Table 4 shows the verification accuracy on LFW for Inception-ResNet-V1 and ResNet-64 models trained on MS1MV2 with and without DBT . The results show that the DBT method is independent of model depth , architecture or loss function and thereby consistently improves the accuracy compared to the baseline results .
Table 4 also compares the DBT method with state-of-the-art methods on the LFW and YTF datasets . The DBT method notably improves the baselines , which are comparable to ArcFace and superior to all the other methods . We were not able to reproduce the results of the ArcFace paper using our Tensorflow implementation and dataset . We believe that using the original implementation and dataset from ArcFace will achieve superior results over the baselines on the benchmark datasets , as evident from the results of our implementation . Finally , we compare the results of ArcFace and DBT on IJB-B and IJB-C in Table 5 . It is seen that DBT provides a notable boost on both IJB-B and IJB-C by a considerable margin . DBT improves the verification accuracy by as much as 1.94 % on IJB-B and 2.57 % on IJB-C at a 10⁻⁴ false alarm rate ( FAR ) . We plot the receptive fields of the top ten maximally activated neurons of an intermediate layer of the face recognition model to visualize the features learned using the DBT method . Fig . 3 shows that the receptive fields of layer 15 of the Inception-ResNet-V1 model trained using DBT attend to the regions of the eyes , nose and mouth , as compared to insignificant regions in the normal training method . This shows that DBT learns more discriminative features essential to face recognition , corroborating our theoretical claims . To show that current SOTA models are not robust to animal faces , we performed a 1 : N identification experiment with approximately 3000 animal distractors on the IJB-B ( Whitelam et al . ( 2017 ) ) dataset . We trained the face recognition model with about 500K non-face data points , which contain 200 animal faces . This set is disjoint from the 3000 distractors used in the identification experiment . We collected the animal faces from web images using the MTCNN ( Zhang et al . ( 2016a ) ) face detector ; they are the false positives of the face detector . Table 2 shows the Rank-1 identification accuracy on the IJB-B dataset of ResNet-100 trained on MS1MV2 using the ArcFace loss ( ResNet-100-AF ) versus our DBT approach ( ResNet-100-DBT ) . The third column of Table 2 denotes the accuracy on a hard subset of images ( false positives from the ArcFace model ) of the IJB-B dataset , denoted by H-set . The results of Table 2 show that current face recognition models are unable to discriminate out-of-distribution ( non-face ) images from face images . Our ResNet-100-DBT significantly ( by as much as 21 % ) reduces the misidentification rate as compared to the ArcFace model , which shows that the DBT method inherently overcomes this issue while also improving face recognition accuracy .

IMAGE CLASSIFICATION
In this section , we evaluate ResNet-110 and ResNext-101 models trained with and without DBT on the image classification problem using CIFAR-10 , CIFAR-100 , and ImageNet . We also show the power of DBT to compensate for a smaller in-domain training set . For all implementations , PEF is defined to be the first three layers and DDF is the rest of the model . LABC and LMC are set to the cross-entropy loss . ABC-noise is the same data used in the face recognition experiments . We follow the same training configurations from ( He et al . ( 2016 ) ; Xie et al . ( 2017 ) ) .
To study the efficacy of the DBT method in augmenting smaller in-domain training datasets , we also trained ResNet-110 and ResNext-101 using partial training data on CIFAR-10 and CIFAR-100 . We randomly selected a fraction of the training data to be our training set , e.g. , k/5 of the dataset means that we only used k fifths of the total samples for training . From the first row of Table 8 , we find that models trained with DBT show 0.59 % and 0.35 % improvements on CIFAR-10 , and 0.62 % and 1.45 % improvements on CIFAR-100 , over the baseline models for the ResNet-110 and ResNext-101 architectures , respectively . Furthermore , using partial training data with our DBT method achieves superior results ( by as much as 5.49 % for ResNext ( 1/5 ) on CIFAR-100 ) as compared to normal training . Table 6 shows the results on ImageNet . We see that DBT improves the Top-1 accuracy by 0.28 % . This shows that the DBT method consistently improves the results on both small and large datasets .

SPEAKER IDENTIFICATION
We consider the problem of speaker identification using the VGG-M ( Chatfield et al . ( 2014 ) ) model . We set PEF as the first two CNN layers and DDF as the remaining CNN layers . LABC and LMC are defined to be the cross-entropy loss . The ABC-noise is generated from the silence intervals of VoxCeleb ( Nagrani et al . ( 2017 ) ) augmented with Gaussian noise with variance one . The input to the model is the short-time Fourier transform of the speech signals with a Hamming sliding window of width 25 ms and step 10 ms . Table 9 provides the accuracies of the VGG-M model trained with and without DBT on the VoxCeleb , Librispeech ( Panayotov et al . ( 2015 ) ) , VCTK ( Veaux et al . ( 2016 ) ) and ELSDR ( L. ( 2004 ) ) datasets . Table 9 shows that the models trained using DBT significantly improve the accuracy ( by as much as 5.62 % ) on all datasets . Implementation details are provided in the appendix .

MISCELLANEOUS EXPERIMENTS
In this section , we experiment with the naive way of using background data by considering non-faces as a separate class in the final classification layer . For face recognition , Table 11 shows the results of training with an additional background class on the MS1MV2 dataset with and without using DBT . ResNet+mod refers to a model trained with the ArcFace loss and n + 1 classes , where the additional class corresponds to the non-faces . ResNet-DBT+mod refers to a model trained with both DBT and the additional non-face class . We find that adding the additional non-face class hurts the performance of the model , whereas ResNet-DBT+mod improves the results significantly relative to the ResNet+mod model . Since the non-face dataset is sampled from a wider range of a family of distributions compared with faces , it has a larger range of unknown parameters , so its sufficient statistic should be larger than the sufficient statistics of the face data . Thus , when we restrict faces and non-faces to the surface of a hypersphere , the non-face data is more spread over the surface compared with each of the other face classes . We demonstrate this effect with the help of a toy example in Fig . 7 in the appendix . We also conduct this experiment on CIFAR-10/CIFAR-100 and report it in Table 10 .
We see that naively incorporating the background class is inferior to DBT , showing that DBT is an effective technique to utilize background data to boost the performance of classification models .

6 CONCLUSION
In this paper , we presented a detailed theoretical analysis of the dual relationship between estimating the unknown pdf parameters and classification accuracy . Based on the theoretical study , we presented a new method called DBT using ABC-noise data for improving in-distribution classification accuracy . We showed that using ABC-noise data helps in better estimation of the unknown parameters of the pdf of input data and thereby improves the feature representations and consequently the accuracy on image classification , speaker classification , and face recognition benchmarks . It also augments the training data when only limited labeled data is available . We showed that the concept of DBT is generic and generalizes well across domains through extensive experiments using different model architectures and datasets . Our framework is complementary to existing training methods and hence it can be easily integrated with current and possibly future classification methods to enhance accuracy . In summary , the proposed DBT method is a powerful technique that can augment limited training data and improve classification accuracy in deep neural networks .

REFERENCES
M. Abadi , A. Agarwal , P. Barham , E. Brevdo , Z. Chen , C. Citro , G. S. Corrado , A. Davis , J. Dean , M. Devin , and S. Ghemawat . TensorFlow : Large-scale machine learning on heterogeneous systems , 2015 .
Philip Bachman , R. Devon Hjelm , and William Buchwalter . Learning representations by maximizing mutual information across views . In Advances in Neural Information Processing Systems , pp . 15535–15545 , 2019 .
T. L. Berg , A. C. Berg , J. Edwards , and D. A. Forsyth . Who ' s in the picture . NeurIPS , 2004 .
David Berthelot , Colin Raffel , Aurko Roy , and Ian Goodfellow . Understanding and improving interpolation in autoencoders via an adversarial regularizer . ICLR , 2019 .
Ben-Zion Bobrovsky , E. Mayer-Wolf , and M. Zakai . Some classes of global Cramér-Rao bounds . The Annals of Statistics , pp . 1421–1438 , 1987 .
Ken Chatfield , Karen Simonyan , Andrea Vedaldi , and Andrew Zisserman . Return of the devil in the details : Delving deep into convolutional nets . BMVC , 2014 .
Ernesto Conte , Antonio De Maio , and Giuseppe Ricci . GLRT-based adaptive detection algorithms for range-spread targets . IEEE Transactions on Signal Processing , 49 ( 7 ) :1336–1348 , 2001 .
J. Deng , J. Guo , X. Niannan , and S. Zafeiriou . ArcFace : Additive angular margin loss for deep face recognition . In Computer Vision and Pattern Recognition ( CVPR ) , 2019 .
Richard O. Duda , Peter E. Hart , and David G. Stork . Pattern Classification . John Wiley & Sons , 2012 .
Mark Everingham , S. M. Ali Eslami , Luc Van Gool , Christopher K. I. Williams , John Winn , and Andrew Zisserman . The PASCAL visual object classes challenge : A retrospective . International Journal of Computer Vision , 111 ( 1 ) :98–136 , 2015 .
F. F. Li , R. Fergus , and P. Perona . Learning generative visual models from few training examples : An incremental Bayesian approach tested on 101 object categories . In CVPR Workshop , pp . 178–178 , 2004 .
Spyros Gidaris , Praveer Singh , and Nikos Komodakis . Unsupervised representation learning by predicting image rotations . ICLR , 2018 .
Ian Goodfellow , Jean Pouget-Abadie , Mehdi Mirza , Bing Xu , David Warde-Farley , Sherjil Ozair , Aaron Courville , and Yoshua Bengio . Generative adversarial nets . NeurIPS , pp . 2672–2680 , 2014 .
Y. Guo , L. Zhang , Y. Hu , X. He , and J. Gao . MS-Celeb-1M : A dataset and benchmark for large-scale face recognition . ECCV , 9907:87–102 , 2016 .
Michael Gutmann and Aapo Hyvärinen . Noise-contrastive estimation : A new estimation principle for unnormalized statistical models . In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics , pp . 297–304 , 2010 .
K. He , X. Zhang , S. Ren , and J. Sun . Deep residual learning for image recognition . Computer Vision and Pattern Recognition , pp . 770–778 , 2016 .
G. B. Huang , M. Ramesh , T. Berg , and E. Learned-Miller . Labeled faces in the wild : A database for studying face recognition in unconstrained environments . Technical Report , 2007 .
Aapo Hyvärinen . Survey on independent component analysis . 1999 .
S. Ioffe and C. Szegedy . Batch normalization : Accelerating deep network training by reducing internal covariate shift . International Conference on Machine Learning , 37:448–456 , 2015 .
Diederik P. Kingma and Max Welling . Auto-encoding variational Bayes . ICLR , 2014 .
Feng L. Speaker recognition , informatics and mathematical modelling . Technical University of Denmark , DTU , 2004 .
C.-Y. Lee , S. Xie , P. Gallagher , Z. Zhang , and Z. Tu . Deeply-supervised nets . Proceedings of Machine Learning Research ( PMLR ) , 38:562–570 , 2015 .
Youngjo Lee , John A. Nelder , and Yudi Pawitan . Generalized Linear Models with Random Effects : Unified Analysis via H-likelihood , volume 153 . CRC Press , 2018 .
E. L. Lehmann and G. Casella . Theory of Point Estimation , 2nd ed. , 1998 .
Erich L. Lehmann and George Casella . Theory of Point Estimation . Springer Science & Business Media , 2006 .
Tsung-Yi Lin , Michael Maire , Serge Belongie , James Hays , Pietro Perona , Deva Ramanan , Piotr Dollár , and C. Lawrence Zitnick . Microsoft COCO : Common objects in context . In European Conference on Computer Vision , pp . 740–755 . Springer , 2014 .
Bernard Lindgren . Statistical Theory . Routledge , 2017 .
B. Maze , J. Adams , J. A. Duncan , N. Kalka , T. Miller , C. Otto , A. K. Jain , W. T. Niggel , J. Anderson , J. Cheney , and P. Grother . IARPA Janus Benchmark-C : Face dataset and protocol . International Conference on Biometrics , pp . 158–165 , 2018 .
P. Mettes , E. van der Pol , and C. Snoek . Hyperspherical prototype networks . NeurIPS , 2019 .
S. Moschoglou , A. Papaioannou , C. Sagonas , J. Deng , I. Kotsia , and S. Zafeiriou . AgeDB : the first manually collected , in-the-wild age database . CVPR Workshop , 2 ( 3 ) :5 , 2017 .
Arsha Nagrani , Joon Son Chung , and Andrew Zisserman . VoxCeleb : a large-scale speaker identification dataset . arXiv preprint arXiv:1706.08612 , 2017 .
M. Noroozi and P. Favaro . Unsupervised learning of visual representations by solving jigsaw puzzles . ECCV , 2016 .
M. Oquab , L. Bottou , I. Laptev , and J. Sivic . Learning and transferring mid-level image representations using convolutional neural networks . CVPR , pp . 1717–1724 , 2014 .
V. Panayotov , G. Chen , D. Povey , and S. Khudanpur . Librispeech : an ASR corpus based on public domain audio books . International Conference on Acoustics , Speech and Signal Processing ( ICASSP ) , pp . 5206–5210 , 2015 .
BLS Prakasa Rao . Cramer-Rao type integral inequalities for estimators of functions of multidimensional parameter . Sankhyā : The Indian Journal of Statistics , Series A , pp . 53–73 , 1992 .
Ali Razavian , Hossein Azizpour , Josephine Sullivan , and Stefan Carlsson . CNN features off-the-shelf : an astounding baseline for recognition . CVPR Workshops , 2014 .
O. Russakovsky , J. Deng , H. Su , J. Krause , S. Satheesh , S. Ma , Z. Huang , A. Karpathy , A. Khosla , M. Bernstein , A. C. Berg , and F. F. Li . ImageNet large scale visual recognition challenge . International Journal of Computer Vision ( IJCV ) , 115 ( 3 ) :211–252 , 2015 .
C. Szegedy , V. Vanhoucke , S. Ioffe , J. Shlens , and Z. Wojna . Rethinking the inception architecture for computer vision . CVPR , 2016 .
S. Sengupta , J. Chen , C. Castillo , V. M. Patel , R. Chellappa , and D. W. Jacobs . Frontal to profile face verification in the wild . In Winter Conference on Applications of Computer Vision ( WACV ) , pp . 1–9 , 2016 .
N. Srivastava , G. Hinton , A. Krizhevsky , I. Sutskever , and R. Salakhutdinov . Dropout : A simple way to prevent neural networks from overfitting . J. Mach . Learn . Res. , 15 ( 1 ) :1929–1958 , 2014 .
C. Szegedy , W. Liu , Y. Jia , P. Sermanet , S. Reed , D. Anguelov , D. Erhan , V. Vanhoucke , and A. Rabinovich . Going deeper with convolutions . CVPR , pp . 1–9 , 2015 .
S. Teerapittayanon , B. McDanel , and H. T. Kung . BranchyNet : Fast inference via early exiting from deep neural networks . ICPR , 2016 .
Christophe Veaux , Junichi Yamagishi , Kirsten MacDonald , et al . Superseded - CSTR VCTK corpus : English multi-speaker corpus for CSTR voice cloning toolkit . University of Edinburgh , The Centre for Speech Technology Research ( CSTR ) , 2016 .
H. Wang , Y. Wang , Z. Zhou , X. Ji , D. Gong , J. Zhou , Z. Li , and W. Liu . CosFace : Large margin cosine loss for deep face recognition . CVPR , pp . 5265–5274 , 2018 .
C. Whitelam , E. Taborsky , A. Blanton , B. Maze , J. Adams , T. Miller , N. Kalka , A. K. Jain , J. A. Duncan , K. Allen , J. Cheney , and P. Grother . IARPA Janus Benchmark-B face dataset . CVPR Workshops , pp . 592–600 , 2017 .
L. Wolf , T. Hassner , and I. Maoz . Face recognition in unconstrained videos with matched background similarity . CVPR , pp . 529–534 , 2011 .
S. Xie , R. Girshick , P. Dollár , Z. Tu , and K. He . Aggregated residual transformations for deep neural networks . CVPR , pp . 5987–5995 , 2017 .
D. Yi , Z. Lei , S. Liao , and S. Z. Li . Learning face representation from scratch . arXiv , abs/1411.7923 , 2014 .
J. Yosinski , J. Clune , Yoshua Bengio , and Hod Lipson . How transferable are features in deep neural networks ? NIPS , 2014 .
Ofer Zeitouni , Jacob Ziv , and Neri Merhav . When is the generalized likelihood ratio test optimal ? IEEE Transactions on Information Theory , 38 ( 5 ) :1597–1602 , 1992 .
K. Zhang , Z. Zhang , Z. Li , and Y. Qiao . Joint face detection and alignment using multi-task cascaded convolutional networks . Signal Processing Letters , 23 ( 10 ) :1499–1503 , 2016a .
L. Zhang , J. Song , A. Gao , J. Chen , C. Bao , and K. Ma . Be your own teacher : Improve the performance of convolutional neural networks via self distillation . ICCV , 2019a .
Linfeng Zhang , Zhanhong Tan , Jiebo Song , Jingwei Chen , Chenglong Bao , and Kaisheng Ma . SCAN : A scalable neural networks framework towards compact and efficient models . NeurIPS , 2019b .
Richard Zhang , Phillip Isola , and Alexei Efros . Colorful image colorization . ECCV , 2016b .
T. Zheng and W. Deng . Cross-pose LFW : A database for studying cross-pose face recognition in unconstrained environments . Technical Report , Beijing University of Posts and Telecommunications , 2018 .
T. Zheng , W. Deng , and J. Hu . Cross-age LFW : A database for studying cross-age face recognition in unconstrained environments . arXiv , abs/1708.08197 , 2017 .

APPENDIX
IN-DOMAIN FAMILY OF PDFS AND THE EXTENDED FAMILY OF DISTRIBUTIONS
In this section , we discuss background/noise and in-domain data points and their corresponding distributions to clarify the definition of those concepts in this paper . Consider a random vector denoted by s . Assume that the corresponding distribution is Gaussian with mean and variance given by α ≠ 0 and σ = 1 , respectively . Now , assume that we observe x = s + n , where the pdf of n is assumed to be Gaussian with zero mean and variance σ²n ; hence the pdf of x is Gaussian with mean α and variance 1 + σ²n . Here , n is the background or noise data and the vector of unknowns is given by θ = [ α , σ²n ] . The in-domain family of pdfs for x is then given by Px = { N ( α , 1 + σ²n ) | α ≠ 0 , σ²n > 0 } . If we include the family of pdfs of n in Px , then we can extend Px as P = { N ( α , 1 + σ²n ) | α ∈ R , σ²n > 0 } . So P is the union of the families of pdfs of in-domain data points and noise/background data . From estimation theory , we know that the sufficient statistics and the unknown parameters of P can also represent the sufficient statistics and the unknown parameters of Px . In other words , an estimate of α can help us detect whether the observed data point is from s + n or n by comparing it with a threshold . Thus , estimating the unknown parameters of the family of pdfs using P can provide more information about the observed data , useful for tasks such as classification . In general , we can assume that a generalized family of pdfs is given by the family of pdfs of noise or background along with the family of pdfs of in-domain data . Hence , estimating from the extended family of distributions can provide more information about the in-domain distribution . Let us consider that the pdf of in-domain data points is given by px ( x , [ θs , θn ] ) and the pdf of noise/background is given by pn ( x , θn ) , so the extended pdf can be represented by h ( pn ( x , θn ) , px ( x , [ θs , θn ] ) ) , where h is a function that combines the two pdfs in a general structure . So a general family of distributions can be denoted as follows :

P = { h ( pn ( x , θn ) , px ( x , [ θs , θn ] ) ) | θ := [ θs , θn ] ∈ Θs,n } ,

where θ is defined as a new set of parameters in a higher dimension and Θs,n is the set of all possible [ θs , θn ] belonging to pn and px . The extended family of pdfs provides more information about the nuisance parameters of the pdf of in-domain data points . Inspired by this observation , we develop our detection booster training method using background/noise data . Figure 5 shows an example of background and in-domain data points .

PROOF OF THEOREM 1
Let ti ( · ) denote the deterministic discriminative function of Θi .
Since the efficient minimum variance and unbiased estimation of ti ( θ ) exists , we have

∂ ln ( p ( x , θ ) ) / ∂ti ( θ ) = I_{ti} ( θ ) ( t̂i ( x ) − ti ( θ ) ) , ( 4 )

where t̂i ( x ) is the minimum variance and unbiased estimate of ti ( θ ) using the data point x and I_{ti} ( θ ) is the Fisher information function of ti ( θ ) , which is given by I_{ti} ( θ ) = ( ∂ti ( θ ) / ∂θ )ᵀ I ( θ ) ( ∂ti ( θ ) / ∂θ ) ≥ 0 , where T denotes the transpose and I ( θ ) is the Fisher information matrix of θ . Now we show that the log-likelihood ratio is an increasing function of t̂i ( x ) . Note that I_{ti} ( θ ) ≥ 0 ( Lehmann & Casella ( 2006 ) ) . On the other hand , we have d ln ( p ( x , θ ) ) = Σ_j ( ∂ ln ( p ( x , θ ) ) / ∂θj ) dθj , therefore

ln ( p ( x , θ ) ) + k ( x ) = Σ_j ∫ ( ∂ ln ( p ( x , θ ) ) / ∂θj ) dθj = Σ_j ∫ ( ∂ ln ( p ( x , θ ) ) / ∂ti ( θ ) ) ( ∂ti ( θ ) / ∂θj ) dθj = ∫ ( ∂ ln ( p ( x , θ ) ) / ∂ti ( θ ) ) Σ_j ( ∂ti ( θ ) / ∂θj ) dθj = ∫ ( I_{ti} ( θ ) ( t̂i ( x ) − ti ( θ ) ) ) Σ_j ( ∂ti ( θ ) / ∂θj ) dθj = α ( θ ) t̂i ( x ) − β ( θ ) , ( 5 )

where the third equality is achieved based on the third property of ti ( · ) in its definition and the fourth equality is given by replacing ( 4 ) ; k ( x ) is the constant of integration . Finally , the last equality is given by defining the following terms

α ( θ ) := ∫ I_{ti} ( θ ) Σ_j ( ∂ti ( θ ) / ∂θj ) dθj , β ( θ ) := ∫ I_{ti} ( θ ) ti ( θ ) Σ_j ( ∂ti ( θ ) / ∂θj ) dθj , ( 6 )

thus dα ( θ ) / dti ( θ ) = I_{ti} ( θ ) ≥ 0 , i.e. , α ( θ ) is increasing in ti ( θ ) . Since ti is a deterministic discriminative function of Θi , for each j ≠ i and θi ∈ Θi and θj ∈ Θj we have ti ( θi ) > ti ( θj ) , therefore α ( θi ) ≥ α ( θj ) . The latter inequality is achieved based on the increasing property of α ( θ ) with respect to ti ( θ ) . Using ( 5 ) , the log likelihood ratio of class i against the rest of the classes is given by LLR := ln ( p ( x , θi ) ) − ln ( p ( x , θj ) ) , so we have LLR = ( α ( θi ) − α ( θj ) ) t̂i ( x ) − ( β ( θi ) − β ( θj ) ) . The LLR depends on x only via t̂i ( x ) and , since for each j ≠ i with θi ∈ Θi and θj /∈ Θi we have α ( θi ) − α ( θj ) > 0 , the LLR is increasing in t̂i ( x ) .

PROOF OF THEOREM 2
The probability of true positive of class i of rj is given by Ptp,i,j = Prθ ( hj ( x ) > τ ) = 1 − Fjθ ( τ ) , where Fjθ ( · ) denotes the cumulative distribution function ( CDF ) of hj . Since the probability of true positive of class i of r1 is greater than that of r2 for all τ , we have F1θ ( τ ) < F2θ ( τ ) for all τ . Now we define a function as follows : u ( τ , θ ) := F2θ ( τ ) − F1θ ( τ ) . Since the CDFs are increasing in τ and tend to 1 , and the number of inflection points of these CDFs is bounded , there is an hmin such that , for τ > hmin , u ( τ , θ ) is a monotonically decreasing function of τ . Thus for any θ that satisfies d ( θ ) > hmin we have u ( d ( θ ) + ε , θ ) < u ( d ( θ ) − ε , θ ) . Replacing u ( h , θ ) = F2θ ( h ) − F1θ ( h ) in the last inequality , we have

F2θ ( d ( θ ) + ε ) − F1θ ( d ( θ ) + ε ) < F2θ ( d ( θ ) − ε ) − F1θ ( d ( θ ) − ε ) ⇒ ( 7 )
F2θ ( d ( θ ) + ε ) − F2θ ( d ( θ ) − ε ) < F1θ ( d ( θ ) + ε ) − F1θ ( d ( θ ) − ε ) . ( 8 )

Based on the definition of the CDF , we have

Prθ ( | h2 ( x ) − d ( θ ) | < ε ) = Prθ ( d ( θ ) − ε < h2 ( x ) < d ( θ ) + ε ) < Prθ ( d ( θ ) − ε < h1 ( x ) < d ( θ ) + ε ) = Prθ ( | h1 ( x ) − d ( θ ) | < ε ) . ( 9 )
PROOF OF THEOREM 3
First , we prove the following claim . Claim : For any open set , there exists a set of disjoint countable open balls such that their union equals the original open set . Proof of claim : Consider an open set O , and also consider x0 ∈ O such that B ( x0 , r0 ) ⊆ O and r0 is the greatest possible radius among all possible open balls in O , where B ( x0 , r0 ) is the open ball with radius r0 at point x0 . Now , we define x1 ∈ O − B̄ ( x0 , r0 ) , where B̄ ( x0 , r0 ) is the closure of B ( x0 , r0 ) , as the point with the greatest radius in O − B̄ ( x0 , r0 ) , and similarly xi ∈ O − ∪_{k=0}^{i−1} B̄ ( xk , rk ) such that B ( xi , ri ) provides the greatest radius in O − ∪_{k=0}^{i−1} B̄ ( xk , rk ) . So we have O = ∪_{k=0}^{∞} B ( xk , rk ) . This is because , if the latter equality is not valid , then there exists an open ball in O − ∪_{k=0}^{∞} B ( xk , rk ) , hence another open ball with the greatest radius would be added to ∪_{k=0}^{∞} B ( xk , rk ) , which is a contradiction with the definition of ∪_{k=0}^{∞} B ( xk , rk ) . The claim is proven at this point . Now , we show that the true positive probability of r1 is greater than that of r2 . Let Θ′m be the set of interior points of Θm ; then there exists a union of disjoint open balls such that Θ′m = ∪_{k=0}^{∞} B ( xk , rk ) . From the assumptions in the theorem , we have Pr ( ‖θ̂1 − θ‖ ≤ ε ) ≥ Pr ( ‖θ̂2 − θ‖ ≤ ε ) , then Prθ ( θ̂1 ∈ B ( xk , rk ) ) ≥ Prθ ( θ̂2 ∈ B ( xk , rk ) ) , where θ ∈ Θm . Based on the claim , we have

Prθ ( θ̂1 ∈ Θ′m ) ≥ Prθ ( θ̂2 ∈ Θ′m ) . ( 10 )

Moreover , based on the definition of ri , the true positive probability of class m is given by ptp,i = Prθ ( θ̂i ∈ Θm ) = Prθ ( θ̂i ∈ Θ′m ) + Prθ ( θ̂i ∈ Θm − Θ′m ) , for i = 1 , 2 . Additionally , from the Cauchy–Schwarz inequality , we have Prθ ( θ̂i ∈ Θm − Θ′m ) ≤ µl ( Θm − Θ′m ) = 0 , so ptp,i = Prθ ( θ̂i ∈ Θ′m ) and from ( 10 ) the true positive probability of class m under r1 is greater than under r2 . The error probability of rj is given by per,j = 1 − Σ_{i=1}^{n} Pi Ptp,i,j , where Pi is the prior probability of class i . Therefore , per,1 ≤ per,2 .

CONNECTING THE THEOREMS WITH THE PROPOSED METHOD
Fig . 6 shows the connection between the proposed theorems and the approach . In part 1 , Theorem 2 connects the estimation of unknown parameters to the auxiliary classifier . In part 2 , the learned features are passed to a decision making network ( result of Theorem 2 ) . In part 3 , Theorem 3 guarantees that the multi-class classifier outperforms other classifiers , because it is using the features from a better estimation of the unknown parameters of the pdf .

TOY EXAMPLE :
We demonstrate the effect of adding a background class to the original classifier with a toy example and visualize it in Fig . 7 . In this example , the input is a sequence of binary bits ( +1 and −1 ) of length 3 in white Gaussian noise . The classifier is constructed using two fully connected layers with sigmoid activations , and the last layer is normalized on the unit circle . As seen from Fig . 7 , adding an additional noise class visibly reduces the feature separation between all the other classes .

IMPLEMENTATION DETAILS
FACE RECOGNITION
We use Tensorflow ( Abadi et al . ( 2015 ) ) to conduct all our experiments . We train with a batch size of 256 on two NVIDIA Tesla V100 ( 32G ) GPUs . We train our models following the small ( less than 1M training images ) and large ( more than 1M training images ) protocol conventions .
IMPLEMENTATION DETAILS FACE RECOGNITION We use Tensorflow ( Abadi et al . ( 2015 ) ) to conduct all our experiments . We train with a batch size of 256 on two NVIDIA Tesla V100 ( 32G ) GPUs . We train our models following the small ( less than 1M training images ) and large ( more than 1M training images ) protocol conventions . We use the CASIA-Webface ( Yi et al . ( 2014 ) ) dataset for the small protocol and the MS1MV2 dataset for the large protocol . We use ResNet-50 ( He et al . ( 2016 ) ) and ResNet-100 models for the small and large protocols , respectively . The PEF is selected as the first three layers . Following ( Deng et al . ( 2019 ) ) , we apply BN ( Ioffe & Szegedy ( 2015 ) ) and dropout ( Srivastava et al . ( 2014 ) ) to the last feature map layer , followed by a fully connected layer and batch normalization to obtain the 512-D embedding vector . We set the feature scale parameter s to 64 following ( Wang et al . ( 2018 ) ; Deng et al . ( 2019 ) ) and set the margin parameters ( m1 , m2 , m3 ) to ( 1 , 0.5 , 0 ) , respectively . For the small scale protocol , we start the learning rate at 0.01 and divide the learning rate by 10 at 40K , 80K , and 100K iterations . We train for 120K iterations . For the large scale protocol , we start the learning rate at 0.01 and divide the learning rate by 10 at 80K , 100K , and 200K iterations . We train for 240K iterations . We use the Momentum optimizer and set the momentum to 0.9 and the weight decay to 5e-4 . We use the feature centre of all images from a template or all frames from a video in order to report the results on the IJB-B , IJB-C and YTF datasets . For ABC-noise data , we cropped background image patches from the MS1MV2 ( Guo et al . ( 2016 ) ) dataset and cropped hard examples from the Caltech-101 ( F. F. Li et al . ( 2004 ) ) dataset plus a few open-sourced images ( animal faces ) using the MTCNN ( Zhang et al . ( 2016a ) ) face detector . We generated roughly 500K non-face images for training the ABCs . SPEAKER IDENTIFICATION L2 loss and dropout with a rate of 0.2 are applied during training for generalization . The ABC-noise is collected from silence intervals of the VoxCeleb dataset , where an energy-based voice activity detection ( VAD ) is applied to detect the silence intervals . To augment the ABC-noise , Gaussian noise is added to the silence intervals . The batch size is set to 64 and the optimizer is ADAM with a learning rate of 0.001 . The model is trained on the VoxCeleb dataset for 11 epochs and on the other datasets for 6 epochs . LFW AND YTF DATASETS The LFW database contains annotations for 5171 faces in a set of 2845 images taken from the Faces in the Wild data set ( Berg et al . ( 2004 ) ) . YouTubeFaces ( Wolf et al . ( 2011 ) ) contains 3,425 videos of 1,595 people . Following the standard convention , we report the results on 5000 video pairs using the unrestricted with labeled outside data protocol . IJB-B AND IJB-C DATASETS The IJB-B dataset contains 1,845 subjects with 21.8K still images and 55K frames from 7,011 videos . In total , there are 12,115 templates with 10,270 genuine matches and 8M impostor matches . The IJB-C dataset ( Maze et al . ( 2018 ) ) is a further extension of IJB-B , having 3,531 subjects with 31.3K still images and 117.5K frames from 11,779 videos . In total , there are 23,124 templates with 19,557 genuine matches and 15,639K impostor matches .
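As a concrete illustration of the small-protocol training recipe above, a minimal optimizer configuration might look as follows; treat it as a sketch, since the original code and the exact weight-decay wiring are not shown in the paper text.

```python
import tensorflow as tf

# Piecewise-constant schedule: LR starts at 0.01 and is divided by 10 at
# 40K, 80K, and 100K iterations (the small-protocol recipe above).
schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[40_000, 80_000, 100_000],
    values=[0.01, 0.001, 0.0001, 0.00001],
)
# Momentum 0.9 as stated; weight decay 5e-4 would be applied in the training
# loop or via an optimizer variant that supports it (assumption, not shown).
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)
```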
This paper proposes a training method for classification, with the goal of training with less data. The proposal is to train an auxiliary classifier at the same time. The auxiliary classifier and the main classifier share the early layers. The auxiliary classifier is a binary classifier that discriminates training data versus background/noise data. The proposed method is evaluated on image and speech classification tasks.
SP:4f7eaeae0559362f0caf13406b20914c120de74b
Prediction and generalisation over directed actions by grid cells
1 INTRODUCTION . A `` cognitive map '' encodes relations between objects and supports flexible planning ( Tolman [ 40 ] ) , with hippocampal place cells and entorhinal cortical grid cells thought to instantiate such a map ( O ’ Keefe and Dostrovsky [ 32 ] ; Hafting et al . [ 20 ] ) . Each place cell fires when the animal is near a specific location , whereas each grid cell fires periodically when the animal enters a number of locations arranged in a triangular grid across the environment . Together , this system could support representation and flexible planning in state spaces where common transition structure is preserved across states and tasks , affording generalisation and inference , e.g. , in spatial navigation where Euclidean transition rules are ubiquitous ( Whittington et al . [ 43 ] ) . Recent work suggests that place cell firing provides a local representation of state occupancy , while grid cells comprise an eigenbasis of place cell firing covariance ( Dordek et al . [ 15 ] ; Stachenfeld et al . [ 38 ] ; Sorscher et al . [ 37 ] ; Kropff and Treves [ 26 ] ) . Accordingly , grid cell firing patterns could be learned as eigenvectors of a symmetric ( diffusive ) transition matrix over state space , providing a basis set enabling prediction of occupancy distributions over future states . This “ intuitive planning '' operates by replacing multiplication of state representations by the transition matrix with multiplication of each basis vector by the corresponding eigenvalue ( Baram et al . [ 2 ] ; Corneil and Gerstner [ 13 ] ) . Thus a distribution over state space represented as a weighted sum of eigenvectors can be updated by re-weighting each eigenvector by its eigenvalue to predict future state occupancy . ∗Please send any enquiries to : changmin.yu.19 @ ucl.ac.uk and n.burgess @ ucl.ac.uk Fast prediction and inference of the common effects of actions across different environments is important for survival . Intuitive planning , in its original form , supports such ability under a single transition structure , most often corresponding to symmetrical diffusion ( Baram et al . [ 2 ] ) . Here we show that a single ( Fourier ) eigenbasis allows representation and prediction under the many different directed transition structures corresponding to different “ translation invariant '' actions ( whose effects are the same across states , such as moving North or South or left or right in an open environment ) , with predictions under different actions achieved by action-specific eigenvalues . We define a “ sense of direction '' quantity , i.e. , the optimal combinations of directed actions that most likely lead to the goal , based on the underlying translation-invariant transition structure ( e.g. , ignoring local obstacles ) . We then show how this method could be adapted to support planning in tasks that violate translation invariance ( e.g . with local obstacles ) , and show how adding these Fourier representations to a deep RL network improves performance in a continuous control task . We propose that the medial entorhinal grid cells support this planning function , as linear combinations of Fourier eigenvectors and therefore eigenvectors themselves , and show how traditional models of grid cells performing path integration are consistent with prediction under directed actions . Hence we demonstrate that the proposed spectral model acts as a unifying theoretical framework for understanding grid cell firing . 2 “ INTUITIVE PLANNING '' WITH A SINGLE TRANSITION STRUCTURE . 
Intuitive planning represents the occupancy distribution over the state space as a weighted sum of the eigenvectors of a single transition matrix ( usually corresponding to symmetric diffusion ) , so that the effect of one step of the transition dynamics on the distribution can be predicted by reweighting each of the eigenvectors by the corresponding eigenvalue . This generalises to calculating the cumulative effect of discounted future transitions ( Baram et al . [ 2 ] ) . Specifically , consider a transition matrix T ∈ R^{N×N} , Tss′ = P ( st+1 = s′ | st = s ) , where st encodes the state at time t and N is the number of states . Then T^n is the n-step transition matrix and has the same set of eigenvectors as T . Specifically , the eigendecompositions of T and T^n are : T = QΛQ^{−1} , T^n = QΛ^nQ^{−1} ( 1 ) where each column of the matrix Q is an eigenvector of T and Λ = diag ( σP ( T ) ) , where σP ( T ) is the set of eigenvalues of T . Similarly , any polynomial in T , p ( T ) , shares the same set of eigenvectors as T , with the set of eigenvalues σP ( p ( T ) ) = p ( σP ( T ) ) . Hence : Σ_{k=0}^{∞} ( γT )^k = ( I − γT )^{−1} = Q diag ( w ) Q^{−1} , where w = { 1 / ( 1 − γλ ) , for λ ∈ σP ( T ) } ( 2 ) The resolvent form ( Eq . 2 ) is an infinite discounted summation of transitions , which under a policy and transition structure corresponding to diffusion , is equivalent to the successor representation ( SR , Fig . 1E ) with discounting factor γ ( Dayan [ 14 ] ; Stachenfeld et al . [ 38 ] ) . See Mahadevan and Maggioni [ 29 ] for a related spectral approach using Fourier decomposition of T for estimating the value function . The SR has been shown to be useful for navigation via gradient ascent of the future probability of occupying the target state , and has a linear relationship with the true underlying Euclidean distances in spatial tasks ( hence `` intuitive planning '' , see Fig . 1 and Fig . 2D-E ) . The eigenvectors of the diffusion transition matrix generally show grid-like patterns , suggesting a close relationship to grid cells . However , intuitive planning is restricted to predictions over a single transition structure , hence can not flexibly adjust its predictions to the effects of arbitrary directed actions ( i.e. , a variable asymmetric transition structure ) , and hence can not support the presumed role of grid cells in path integration . Moreover , predictions over different directed actions would require different eigendecompositions , incurring high computational costs that undermine its biological plausibility . In Section 3 we unify the prediction and path integration approaches by exploiting translation-invariant symmetries to generalise across actions , using a single common eigenbasis and cheaply calculated updates via action-dependent eigenvalues .
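Before moving to directed transitions, the single-structure computation in Eqs. 1-2 can be made concrete with a short sketch; the ring-world size and discount factor below are illustrative assumptions.

```python
import numpy as np

# "Intuitive planning": build a diffusion transition matrix on a 1-D ring,
# eigendecompose it once, then form the resolvent (I - gamma*T)^-1 (the SR,
# Eq. 2) by reweighting each eigenvalue as 1 / (1 - gamma * lambda).
N, gamma = 20, 0.95
T = np.zeros((N, N))
for s in range(N):
    T[s, (s - 1) % N] = T[s, (s + 1) % N] = 0.5   # symmetric diffusion
lam, Q = np.linalg.eig(T)
w = 1.0 / (1.0 - gamma * lam)
SR = (Q * w) @ np.linalg.inv(Q)                    # Q diag(w) Q^-1
# sanity check: matches the direct matrix inverse
assert np.allclose(np.real(SR), np.linalg.inv(np.eye(N) - gamma * T))
```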
3 FLEXIBLE PLANNING WITH DIRECTED TRANSITIONS . Updating state representations to predict the consequences of arbitrary directed actions is an important ability of mobile animals , known as path integration and thought to depend on grid cells ( McNaughton et al . [ 30 ] ) . To generalise the intuitive planning scheme to simultaneously incorporate arbitrary directed transition structures , we consider the transition dynamics corresponding to translation ( drift ) and Gaussian diffusion with arbitrary variance ( including 0 , equivalent to plain translation ) . Our assumption that the transition structure is translation invariant ( implying periodic boundary conditions ) leads to circulant transition matrices . Consider a 2D rectangular environment with length L and width W where each state is a node of the unit square grid ; then the transition matrix can be represented by T ∈ R^{LW×LW} , with each row the vectorisation ( vec ( · ) ) of the matrix of transition probabilities starting from the specified location , i.e. , T [ j , : ] = vec [ P ( st+1 | st = j ) ] , where T is constructed by considering the 2D state space as a 1D vector and concatenating the rows ( j = xL + y for ( x , y ) ∈ [ 0 , W − 1 ] × [ 0 , L − 1 ] ) , see Fig . 2A . The transition matrix is circulant due to the translation invariance of the transition structure ( see Appendix Prop . A.1 ) , and takes the following form : T = [ T0 , T_{LW−1} , · · · , T2 , T1 ; T1 , T0 , T_{LW−1} , · · · , T2 ; · · · ; T_{LW−2} , · · · , T0 , T_{LW−1} ; T_{LW−1} , T_{LW−2} , · · · , T1 , T0 ] ( 3 ) where each row is a cyclic shift of the previous one . The normalised eigenvectors of the circulant matrix T ∈ R^{N×N} ( N = LW ) are the vectors of powers of the N th roots of unity ( the Fourier modes ) : q_k = ( 1 / √N ) [ 1 , ω_k , ω_k^2 , · · · , ω_k^{N−1} ]^T ( 4 ) where ω_k = exp ( 2πik / N ) , for k = 0 , . . . , N − 1 , and i = √−1 . Hence the matrix of eigenvectors ( as the columns ) , F = ( q_0 , q_1 , . . . , q_{N−1} ) , is just the ( inverse ) discrete Fourier transform matrix ( Bracewell [ 4 ] ) , where F_{kj} = ω_k^j for 0 ≤ k , j ≤ N − 1 . The Fourier modes projected back onto the L × W 2D spatial domain are plane waves , as shown in Fig . 2G , with wavevector determined by the value of k that specifies the direction and spatial frequency of each plane wave ( see Appendix B ) . We can immediately compute the corresponding eigenvalues for the eigenvectors in Eq . 4 ( equivalent to taking the discrete Fourier transform ( DFT ) of the first row ( or column ) of T , see Bracewell [ 4 ] ) : λ_m = Σ_{j=0}^{N−1} T_j ω_j^m , for m = 0 , . . . , N − 1 ( 5 ) where { T_0 , . . . , T_{N−1} } are the N unique elements that fully specify the circulant matrix T ( Eq . 3 ) . We can then utilise tools from Fourier analysis for efficient updating of the eigenvalues whilst leaving the universal eigenbasis unaffected . For a transition matrix T_v corresponding to an arbitrary action ( translation velocity ) v = ( v_x , v_y ) , each row of T_v is again a circulant , but shifted , version of the corresponding row vector of the symmetric transition matrix corresponding to zero drift velocity , T_0 . Specifically , the first rows of the two matrices are related as follows : T_v ( k ) = T_0 ( k + v_x L + v_y ) , for k = 0 , . . . , N − 1 ( 6 ) Given the eigenvalues for T_0 , Λ_0 = [ λ_0^0 , λ_1^0 , . . . , λ_{N−1}^0 ] ∈ C^N ( via the DFT of the first row of T_0 , Eq . 5 ) , we can immediately derive the eigenvalues of T_v , Λ_v , via a one-step update based on the Fourier shift theorem ( Bracewell [ 4 ] ) without recomputing the eigendecomposition : Λ_v [ k ] = exp ( ( 2πi / N ) ( v_x L + v_y ) k ) Λ_0 [ k ] , for k = 0 , . . . , N − 1 , for arbitrary v , i.e. , Λ_v = Φ_{δ ( v ) } Λ_0 , Φ_{δ ( v ) } = [ 1 , ω_{δ ( v ) } , ω_{δ ( v ) }^2 , . . . , ω_{δ ( v ) }^{N−1} ] , where δ ( v ) = v_x L + v_y ( 7 ) This allows path integration by reweighting the common set of eigenvectors at each timestep by the updated eigenvalues corresponding to the current drift velocity ( Eq . 7 ) . Note that , additionally , T_0 can include diffusion , so reweighting by the eigenvalues of the diffusive transition matrix also allows tracking of increasing uncertainty .
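The following sketch illustrates Eqs. 5-7 numerically; the grid size is an illustrative assumption, and NumPy's FFT sign convention may differ from the paper's by a conjugation.

```python
import numpy as np

# Eigenvalues of a translation-invariant (circulant) transition matrix are the
# DFT of its first row (Eq. 5); a drift v only multiplies them by a phase
# (Fourier shift theorem, Eqs. 6-7), leaving the eigenbasis untouched.
L = W = 8
N = L * W
row0 = np.zeros(N)
for dx, dy in [(0, 1), (0, -1), (1, 0), (-1, 0)]:   # 4-neighbour diffusion T0
    row0[(dx * L + dy) % N] = 0.25
lam0 = np.fft.fft(row0)                              # eigenvalues of T0
vx, vy = 1, 0                                        # directed action (drift)
k = np.arange(N)
lam_v = np.exp(2j * np.pi * (vx * L + vy) * k / N) * lam0   # Eq. 7
# One step of the drifted dynamics applied to an occupancy distribution p,
# computed entirely in the shared Fourier basis:
p = np.zeros(N)
p[0] = 1.0
p_next = np.real(np.fft.ifft(lam_v * np.fft.fft(p)))
assert np.isclose(p_next.sum(), 1.0)                 # still a distribution
```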
Utilising the fixed eigenbasis ( Eq . 4 ) and the respective eigenvalues ( Eq . 7 ) for arbitrary transition structures , we can make efficient predictions of the distribution of future state occupancy with respect to arbitrary actions ( see Figs . 2B-C ) . Adding translation to the translation-invariant transition matrix does not change the set of eigenvectors - allowing one set of eigenvectors ( Fourier modes ) to support prediction for actions in all directions ( or plain diffusion ) , hence prediction of the effects of directed actions can be efficiently generalised across environments . Sense of Direction . We define a `` sense of direction '' , θ∗ , as the angle of the transitions ( or the linear combination of the available actions in a non-spatial setting ) that maximises the future probability of reaching the target state given an initial state , which is modelled by the SR matrix : θ∗ = argmax_θ Σ_j exp [ 2πi ( x_G − x_0 ) · k_j ] / ( 1 − γ D_j exp [ 2πi v_θ · k_j ] ) ( 8 ) where γ is the discounting factor , D_j , j = 1 , · · · , LW are the eigenvalues of the symmetric diffusion transition matrix , k_j , j = 1 , . . . , LW are the wavevectors of the j-th Fourier components , x_0 , x_G are the coordinates of the start and goal states , and v_θ = ( v cos ( θ ) , v sin ( θ ) ) represents the velocity ( with speed v and head direction θ ) . We see that the `` sense of direction '' supports generalisation of predictions of the effects of actions across all environments with the same translation-invariant transition structure , i.e. , such predicted effects ignore any local deviations from translation invariance . See Appendix B for the derivation of Eq . 8 . Note that here we assume that the goal state s_G is known a priori , e.g. , we consider a problem where the animal is navigating towards a previously visited location . The derived analytical expression for the sense of direction can be retrieved via a lookup table when the state space is small and discrete , whereas in large or continuous state spaces it can be computed either via optimisation algorithms or modelled by a non-linear function approximator that represents Eq . 8 . See Bush et al . [ 11 ] for neural network approaches to finding goal directions from grid representations . We thus propose a computational role for the neural grid codes : generating a `` sense of direction '' ( capturing the transition structure of the state space , ignoring the obstacles and boundaries ) that reflects a global sense of orientation and allows generalisation to completely new environments . Flexible Planning & Application Beyond Translation-Invariant Structures . The proposed model can be applied to flexible planning under arbitrary drift velocity , as demonstrated in Fig . 3 ( A-E ) . An agent is trying to navigate towards a goal state in a windy grid world . The navigation is performed by following the ascending `` gradient '' of the SR for occupancy of the target state ( the resolvent metric , Eq . 2 ) . The SR computed from the transition matrix including the effects of diffusion and wind ( Fig . 3B ) based on our analysis ( Eq . 7 ) leads straight to the target ( Fig . 3C ) .
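Returning to the sense-of-direction objective in Eq. 8, a simple way to read it is as a one-dimensional search over headings. The sketch below grid-searches θ; the grid size, speed, discount factor, and the assumed functional form of the diffusion eigenvalues D_j are all illustrative.

```python
import numpy as np

# Grid-search the "sense of direction" objective (Eq. 8): pick the heading
# whose drifted SR maximises the future occupancy of the goal state.
L = W = 16
kx, ky = np.meshgrid(np.arange(L), np.arange(W), indexing="ij")
k_vecs = np.stack([kx.ravel() / L, ky.ravel() / W], axis=1)       # wavevectors k_j
D = np.exp(-2.0 * ((np.pi * k_vecs) ** 2).sum(axis=1))            # toy diffusion eigenvalues (assumed form)
x0, xG = np.array([2.0, 2.0]), np.array([12.0, 9.0])
gamma, speed = 0.9, 1.0

def objective(theta: float) -> float:
    v = speed * np.array([np.cos(theta), np.sin(theta)])
    num = np.exp(2j * np.pi * (k_vecs @ (xG - x0)))
    den = 1.0 - gamma * D * np.exp(2j * np.pi * (k_vecs @ v))
    return float(np.real(np.sum(num / den)))

thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
theta_star = thetas[int(np.argmax([objective(t) for t in thetas]))]
print(np.degrees(theta_star))   # should point roughly from x0 towards xG
```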
Given the analytical expression of the SR ( Eq . 2 ) , we can efficiently adjust the SR matrix to accommodate local changes in the state space , e.g. , the insertion of a barrier , using the Woodbury inversion formula to update the parts of the SR matrix affected by the local obstacles ( see Appendix A.3 for derivations [ 34 ] ) ; and again in this case , the agent correctly adjusts for the wind as well as taking the shortest path around the inserted wall ( Fig . 3D-E ) . We note , however , that the proposed model is also able to solve tasks without periodic boundary conditions , by considering the original task state space , S0 , to be embedded into a larger , periodically bounded pseudo state space Sp , at least twice as large in each dimension as S0 ( Fig . 3F ) . We again follow the previous procedures , utilising the Fourier modes , this time computed on Sp , to perform predictions in S0 ( Fig . 3F-G ) , and the performance is unaffected . Note that under such a formulation , the underlying transition structures can be applied to environments with both periodic and non-periodic boundary conditions - allowing sense-of-direction planning in either case . Path Integration . We can also use our model for path integration ( see also Section 4 ) in S0 , by taking velocity inputs ( given any path in the grid world ) to update the state occupancy distribution ( Eq . 7 ) . The path integration performance is strongly correlated with the degree of uncertainty ( i.e. , the diffusion strength caused by self-motion noise in addition to translations ) . This is indeed captured by our model ( Fig . 3H ) , with perfect path integration when the uncertainty is low , up to 1000 time steps ( the discretisation of state space means that uncertainty below 0.075 has no effect ) , and monotonically increasing path integration error when the uncertainty is higher .
The authors propose an extension of the successor-representation approach to grid cells. The paper shows that this model can generate several experimentally observed properties of grid cells, and can be used for navigation in novel/mutable environments. Overall, the work should be of interest to any ICLR attendees who engage in research surrounding grid cells.
SP:27aca0420a1a3fa6cc3fdcef19d0ffcc02345a3c
Zero-shot Fairness with Invisible Demographics
1 INTRODUCTION . Machine learning is already involved in decision-making processes that affect people ’ s lives , such as screening job candidates ( Raghavan et al. , 2020 ) and pricing credit ( Hurley & Adebayo , 2017 ) . Efficiency can be improved , costs can be reduced , and personalization of services and products can be greatly enhanced – these are some of the drivers for the widespread development and deployment of machine learning algorithms . Algorithms such as classifiers , however , are trained from large amounts of labeled data , and can therefore encode and even reinforce past discriminatory practices that are present in the data . The classifier might treat some groups of individuals unfavorably , for example , denying credit on the grounds of language , gender , age , and their combined effect . Algorithmic fairness aims at building machine learning algorithms that can take biased datasets and output fair/unbiased decisions for people with differing protected attributes , such as race , gender , and age . A typical setting of algorithmic fairness is as follows . We are given a training set of observations x ∈ X , their corresponding protected attributes s ∈ S , and the target label y ∈ Y for learning a classifier . In a statistical notion of algorithmic fairness , e.g . ( Kamiran & Calders , 2012a ; Hardt et al. , 2016 ; Zafar et al. , 2017 ) , we control the discrepancy of a classifier ’ s loss for a small number of demographic groups defined on protected attributes . Recently , several works have considered the setting where protected attributes are unknown ( Kearns et al. , 2018 ; Hashimoto et al. , 2018 ; Khani et al. , 2019 ) . They aim to control the losses of groups whose size is greater than some predefined value . These works focus on an abstract worst-off group rather than demographic groups . It has been noted that the implied worst-off groups may differ from well-specified demographic groups who are known to suffer from past discriminatory practices ( Hashimoto et al. , 2018 ) . We are interested in the setting that is in between having complete annotations for demographic groups and having none . In this paper , we introduce algorithmic fairness with invisible demographics . Who are the invisible demographics ? In the context of machine learning systems , they are individuals with thin or non-existent labeled training data . The invisible population is primarily composed of individuals with certain protected attributes ( Hendricks , 2005 ; Abualghaib et al. , 2019 ; Perez , 2019 ) . We now elaborate on several algorithmic decision scenarios involving invisible demographics . One scenario is when we observe partial outcomes for some of the demographic groups , e.g . we have labeled training data for males ( with positive and negative outcomes ) , but for the group of females we only observe one-sided labels ( negative outcome ) . Another scenario is when we do not observe any outcome for some of the demographic ( sub ) groups , e.g . we have training samples for white-skinned and dark-skinned males , and white-skinned females , but we have zero labeled data for dark-skinned females . An extreme version of the last scenario is when we do not observe any outcome for females regardless of their skin colors , e.g . we only have training samples for males and no training examples for females . To summarize , in the invisible demographics problem , we define the demographic groups that are expected to be seen , so they are not abstract .
However , not all of the demographics are observed ( labeled ) during training , forming missing or invisible demographics . This paper presents learning disentangled representations in the presence of invisible demographics . Our source of supervision is motivated by the observation that we want to deploy our classifier to the eventual real-world population . This deployment dataset will contain individuals from all demographics . We thus consider the setting where unlabeled data is available for learning a disentangled representation . We call this data a context set ; much like the deployment dataset , it is unlabeled but contains all demographics , including the invisible ones . We aim to convert our unlabeled context set into a perfect dataset ( Kleinberg et al. , 2016 ; Chouldechova , 2017 ) , a dataset in which the target label and protected attribute are independent ( i.e . y ⊥ s ) . We will then use this perfect dataset as the inductive bias for learning disentangled representations . How do we construct this perfect dataset without labels ? We assume that the number of demographic groups ( hence clusters ) is known a priori , corresponding to the diverse demographic groups in the real-world population in which our machine learning system will be deployed . We use unsupervised k-means clustering , or a supervised clustering based on rank statistics ; the latter allows forming clusters that also leverage the annotations in the training data . Once the clusters have been found , we can equalize the cluster sizes to form a perfect dataset and use it as an input for learning a disentangled fair representation . See fig . 1 for an overview of our learning with invisible demographics framework .
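To make the conversion step concrete, the sketch below shows one simple instantiation with k-means and cluster-size equalisation; the helper name, the subsampling strategy, and the use of sklearn are illustrative assumptions (the paper's rank-statistics clustering is an alternative route to the same balanced set).

```python
import numpy as np
from sklearn.cluster import KMeans

# Turn the unlabeled context set into an approximately "perfect" dataset:
# cluster it into the known number of demographic groups, then subsample
# every cluster to the same size so cluster membership is balanced.
def make_perfect(context_x: np.ndarray, n_groups: int, seed: int = 0) -> np.ndarray:
    labels = KMeans(n_clusters=n_groups, random_state=seed).fit_predict(context_x)
    per_cluster = int(np.bincount(labels, minlength=n_groups).min())
    rng = np.random.default_rng(seed)
    keep = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), per_cluster, replace=False)
        for c in range(n_groups)
    ])
    return context_x[keep]
```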
Specifically , our paper provides the following main contributions : 1 . A problem formulation of algorithmic fairness with invisible demographics , where we have zero data for some of the demographics and still have to make predictions for those groups . 2 . The application of clustering methods to the task of transforming the unlabeled context dataset into a perfect dataset . 3 . Theoretical and experimental justification that the disentangled model with the perfect dataset as an inductive bias provides a well-disentangled fair representation : one component captures the demographic factors and another component is invariant to them . Related work . We describe related work in three areas : zero-shot learning , semi-supervised learning , and disentangled representation learning . On zero-shot learning . The setting with incomplete training data , where we aim to account for seen and unseen outcomes , is also known as generalized zero-shot learning . Traditionally , zero-shot learning transfers knowledge from classes for which we have training data to classes for which we do not via auxiliary knowledge , e.g . via prototype examples ( Larochelle et al. , 2008 ) , intermediate class descriptions such as semantic attributes ( Lampert et al. , 2009 ; Xian et al. , 2018 ) , or word2vec embeddings ( Bucher et al. , 2019 ) . Our method similarly uses a context set as a source of auxiliary knowledge , but in contrast to generalized zero-shot learning , our context set is an unlabeled pool of data where class descriptions are unknown . On semi-supervised learning . Wick et al . ( 2019 ) proposed a semi-supervised method that can successfully harness unlabeled data to correct for the selection bias and label bias in the training data . The unlabeled data , despite not containing the target label y , is labeled in terms of the protected variable s . Our setting is significantly harder because there is no label information about y or s in the context set . On disentangled representation learning . Locatello et al . ( 2019a ) suggested that disentanglement in representation learning may be a useful property for encouraging fairness when protected variables are not observed . In order for disentangled representations to improve fairness measures without knowledge of the protected attribute s , they have to assume that the target label y and the protected attribute s are independent , i.e . y ⊥ s . However , in fairness settings the variable s is correlated with the variable y , and therefore unsupervised methods are not suitable for fairness ( Jaiswal et al. , 2018b ; 2019 ) . Indeed , the experiments in ( Locatello et al. , 2019a ) were wholly done with procedurally generated synthetic datasets involving 2D and 3D shapes . Without some supervision or inductive bias , disentangled representation methods would not solve the issue of algorithmic fairness with invisible demographics ( Locatello et al. , 2019b ) . 2 METHODOLOGY . 2.1 THEORETICAL BACKGROUND . In this section , we first formulate mathematically the problem of invisible demographics and its associated issue of algorithmic fairness . We then motivate theoretically the idea of a perfect dataset for achieving fairness , and its use as an inductive bias in learning disentangled representations . Invisible demographics and algorithmic fairness . Let S denote a discrete-valued protected attribute with associated domain S . S can take the values of a single protected attribute or , more generally , S = S1 × S2 × . . . × Sp with S1 , . . . , Sp discrete-valued protected attributes . X , with associated domain X , represents the other attributes of the data . Let Y denote the space of class labels for a classification task ( Y = { 0 , 1 } for binary classification or Y = { 1 , 2 , . . . , Ccls } for multi-class classification ) . For ease of exposition , we assume that we have multiple sources M of samples , one for each combination of class label y and protected attribute s . That is , we have : Mys , ∀y ∈ Y , ∀s ∈ S , ( 1 ) where , for example , the source My=0 , s=0 contains all data points with class label y = 0 and protected attribute s = 0 . As in a standard supervised learning task , we have access to a training set Dtr = { ( xi , si , yi ) } that is used to learn a model M : X → Y . Dtr is composed of several sources . This labeled training dataset , however , lacks samples from some of the sources : ∃y ∈ Y , ∃s ∈ S : Dtr ∩ Mys = ∅ . ( 2 ) For example , we might not have samples from the two sources My=0 , s=0 and My=1 , s=0 . In binary classification , this corresponds to zero labeled data for the invisible demographic group s = 0 . Or we may only observe a negative outcome for the invisible demographic s = 0 , i.e . we have My=1 , s=0 = ∅ . Once the model M is trained , we deploy it to the real-world population with diverse demographic groups . That is , we have a deployment set Dt = { ( xi ) } which has overlap with all sources : Dt ∩ Mys ≠ ∅ ∀y ∈ Y , ∀s ∈ S . ( 3 ) If the model relies only on the incomplete training set , it is not unreasonable to expect the model to easily misunderstand the invisibles . We can all agree that this sounds unfair , and we would like to rectify this .
We will be precise shortly about the adopted mathematical definitions of fairness . We propose to alleviate the issue of unfairness to the invisibles by mixing labeled with unlabeled data , which is usually much cheaper to obtain . In this paper , we call this unlabeled data a context set Dctx = { ( xi ) } . This context set has overlap with all sources : Dctx ∩ Mys ≠ ∅ ∀y ∈ Y , ∀s ∈ S ( 4 ) The context set is much like the deployment set : it has no information about class labels y or the protected attributes s . We adopt a statistical notion of algorithmic fairness in which it balances a certain condition between groups of individuals with different protected attributes . The term ȳ below is the prediction of a machine learning model M . Several statistical fairness criteria have been proposed ( Kamiran & Calders , 2012a ; Hardt et al. , 2016 ; Zafar et al. , 2017 ; Chouldechova , 2017 ; Raghavan et al. , 2020 ) ( shown below for the case where s and y are binary ) : Pr ( ȳ = 1 | s = 0 ) = Pr ( ȳ = 1 | s = 1 ) ( equality of acceptance rate ) ( 5 ) Pr ( ȳ = 1 | s = 0 , y ) = Pr ( ȳ = 1 | s = 1 , y ) ( equality of true positive/negative rate ) ( 6 ) Pr ( y = 1 | s = 0 , ȳ ) = Pr ( y = 1 | s = 1 , ȳ ) ( equality of positive/negative predicted value ) ( 7 ) Generally , those statistical notions can be expressed in terms of different ( conditional ) independence statements between the involved random variables ( Barocas et al. , 2019 ) : ȳ ⊥ s ( equation 5 ) , ȳ ⊥ s | y ( equation 6 ) , and y ⊥ s | ȳ ( equation 7 ) . If our training set has no positive outcome for the demographic s = 0 , i.e . My=1 , s=0 = ∅ , the true positive rate for this group will suffer , and therefore we will likely not be able to satisfy , among others , equality of true positive rate . Perfect dataset . We call a dataset for which y ⊥ s holds a perfect dataset ( Chouldechova , 2017 ; Kleinberg et al. , 2016 ) . If we have access to a perfect dataset , we could equalize true positive/negative rates ( eq . 6 ) and also equalize positive/negative predicted values ( eq . 7 ) for all demographic groups . This can be shown by using the sum and product rules of conditional probabilities , e.g . ( Kannan et al. , 2019 ) . Let ’ s consider a binary-valued protected attribute , s′ versus s′′ . For s′ , we can compute : Pr ( y = 1 | ȳ = 1 , s′ ) = Pr ( ȳ = 1 | y = 1 , s′ ) Pr ( y = 1 | s′ ) / ( Pr ( ȳ = 1 | y = 1 , s′ ) Pr ( y = 1 | s′ ) + Pr ( ȳ = 1 | y = 0 , s′ ) ( 1 − Pr ( y = 1 | s′ ) ) ) , and accordingly for s′′ . The conditional probability on the left hand side is a positive predicted value , and this quantity can be expressed in terms of true positive/negative rates and the base ( prior ) rate , shown on the right hand side . If we have a perfect dataset ( y ⊥ s holds , which means equal base rates Pr ( y = 1 | s′ ) = Pr ( y = 1 | s′′ ) ) , an equality in the true positive/negative rates will give us an equality in the positive/negative predicted values . Similarly , with a perfect dataset , we can equalize true positive/negative rates ( eq . 6 ) and also acceptance rates ( eq . 5 ) for all demographic groups . From the sum probability rule , we have : Pr ( ȳ = 1 | s′ ) = Pr ( ȳ = 1 | y = 1 , s′ ) Pr ( y = 1 | s′ ) + Pr ( ȳ = 1 | y = 0 , s′ ) ( 1 − Pr ( y = 1 | s′ ) ) for the value s′ , and accordingly for s′′ . Here , the acceptance rate on the left hand side is related to the true positive/negative rates and the base ( prior ) rate as shown on the right hand side . In general , however , our given dataset is likely to be imperfect .
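To ground the three criteria above, the sketch below computes the corresponding empirical gaps between the two groups for binary y and s; the function name and interface are illustrative.

```python
import numpy as np

# Empirical gaps for the three statistical fairness criteria (Eqs. 5-7),
# each reported as an absolute difference between groups s=0 and s=1.
def fairness_gaps(y_true, y_pred, s):
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))

    def rate(mask):  # P(y_pred = 1 | mask)
        return y_pred[mask].mean() if mask.any() else np.nan

    acc_gap = abs(rate(s == 0) - rate(s == 1))                                   # Eq. 5
    tpr_gap = abs(rate((s == 0) & (y_true == 1)) -
                  rate((s == 1) & (y_true == 1)))                                # Eq. 6

    def ppv(g):  # P(y = 1 | s = g, y_pred = 1), the positive predicted value
        m = (s == g) & (y_pred == 1)
        return y_true[m].mean() if m.any() else np.nan

    ppv_gap = abs(ppv(0) - ppv(1))                                               # Eq. 7
    return acc_gap, tpr_gap, ppv_gap
```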
In this paper , we pursue learning a fair classifier for all demographics as learning disentangled representations with an approximately perfect dataset . Disentangled representation . Disentanglement learning aims to find a split representation of a data point x and a mapping function f such that f ( x ) = ( z1 , z2 , . . . , zp ) , where z1 , z2 , . . . , zp are p distinct ( independent ) factors of variation . We can mathematically formalize this intuitive definition using group and representation theories Higgins et al . ( 2018 ) , or using structural causal models Suter et al . ( 2019 ) . Specifically , in this paper we would like to split the representation of the data into two factors as f ( x ) = ( zy , zs ) , where zy contains factors that are relevant for y-prediction and zs contains factors related to the demographic group s . As noted by Jaiswal et al . ( 2018a ; 2019 ) ( also vide sec . 1 ) , since the protected variable s is correlated with the class label y , we need annotations of the undesired nuisance variable s to be successful in using disentanglement learning methods for fairness . We only have annotations of variable s in the training set Dtr = { ( xi , si , yi ) } ; however , crucially , this set contains missing demographic groups . We have all demographic groups in the context set Dctx = { ( xi ) } ( also in the deployment set Dt = { ( xi ) } ) , though the challenge is that we should not expect annotations of the protected variable s at deployment time . The next section will show that we can still leverage the context set for learning the disentangled representations . Disentanglement with a perfect dataset . Our framework for learning the disentangled representations comprises four core modules : 1 ) an encoder function f that embeds x into a bipartite space f ( x ) → ( zy , zs ) ; 2 ) a decoder function g that learns the inverse of f , mapping back from the embedded space into the input domain g ( zy , zs ) → x̃ ; 3 ) a predictor function l that predicts y from zy ; and 4 ) a discriminator function h that classifies whether a given batch of samples embedded in zy derives from either the context set or the training set ; this marks a significant departure from the typical GAN discriminator , which takes as input batches of data and yields a prediction for each sample independently of the other samples in the batch . Fig . 2a shows our framework , where the training signal comes from the perfect dataset . Formally , given the training set Dtr and samples from the balanced ( i.e . perfect – see section 2.2 for details on how this can be practically achieved ) context set Xperf , our learning objective can be written as : Lmatch = Σ_{x ∈ Xtr ∪ Xperf} Lrecon ( x , g ( zs , zy ) ) + λ1 Σ_{x ∈ Xtr} Lsup ( y , l ( zy ) ) + λ2 ( log h ( f ( zy ⊂ Xperf ) ) + log ( 1 − h ( f ( zy ⊂ Xtr ) ) ) ) , ( 8 ) where Lrecon and Lsup denote the reconstruction loss and the supervised loss , respectively , and λ1 and λ2 are pre-factors . In practice , this objective is computed over mini-batches , B , and the discriminator h is trained via the standard JSD loss ( Goodfellow et al. , 2014 ) to map a batch of data points from the training set and the context set to a binary label : 1 if the batch is judged to have been sampled from the context set , 0 otherwise . Its goal is to effectively estimate the probability that a batch of samples , as a set , has been sampled from one distribution or the other .
Since the task is a set-prediction one , we require that the function it defines respects the exchangeability of the batch dimension – that is , the discriminator ’ s predictions should take into account dependencies between samples in a batch but should be invariant to the order in which they appear , i.e. , we have h ( { zy^{ ( b ) } }_{b=1}^{B} ) = h ( { zy^{ ( π ( b ) ) } }_{b=1}^{B} ) for all permutations π ∈ Π . For the entire function h , composed of sub-functions h1 ( h2 ( h3 ( ... ) ) ) , to have this property , only the innermost sub-function ρ in the chain is required to have it . While there are a number of choices when it comes to defining ρ , we choose a weighted average ρ = ( 1 / B ) Σ_{b=1}^{B} attention ( zy )_b , with weights computed according to a learned attention mechanism . It takes the form of the scaled dot-product attention ( Vaswani et al. , 2017 ) , attention ( Q , K , V ) : = softmax ( QK^T / √d ) V , weighting values ( V ) according to the similarity between the associated key ( K ) and query ( Q ) matrices , as measured by their dot-product . Q , K , and V are used after they have been embedded into linear subspaces by matrix-multiplication with learned weight matrices of dimension R^{m×d} . We found that defining K and V as zy , and Q as the mean of zy over B , yielded good results and leave it to future work to explore more sophisticated methods . The result of ρ is then processed by a series of fully-connected layers , following the DeepSets ( Zaheer et al. , 2017 ) paradigm , which ultimately computes a single prediction for the current batch . We know that the independence condition y ⊥ s holds in the perfect set , but not in the training set due to sampling bias . To do well , the discriminator should rely on this knowledge . More concretely , since the context and training set have differing support over S × Y , namely ( Str × Ytr ) ⊊ ( Sperf × Yperf ) , that support serves as an indicator of the distribution from which the data has been drawn . The scenarios we consider dictate Ytr = Yperf , making the disentangling well-posed . However , since we wish to use Sctx × Yctx \ Str × Ytr as the training signal for the encoder , and not the relative frequency of the target classes , it is important that , like the context set , we weight the samples of the training set such that p ( str|ytr ) p ( ytr ) is equal for all str , ytr ∈ Str × Ytr . To guide the network towards the desired solution , we supplement this implicit constraint with the explicit constraint that zy be predictive of y , which we achieve using a linear predictor l ; whenever we have dim ( S ) > 1 ( in our experiments this corresponds to the partial outcomes setting ) we also impose the same constraint on zs , but with respect to s . With these conditions met , to fool the discriminator , the encoder must separate out information pertaining to S into the embedded space zs , which is not part of the discriminator ’ s input , leaving only unprotected information in zy .
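A hedged sketch of such a batch-level discriminator is given below (in PyTorch); the hidden widths and head architecture are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Set discriminator: attention-pool a batch of z_y embeddings into a single
# vector (permutation-invariant), then predict one logit for the whole batch.
class SetDiscriminator(nn.Module):
    def __init__(self, d: int, hidden: int = 64):
        super().__init__()
        self.wq = nn.Linear(d, hidden, bias=False)   # query projection
        self.wk = nn.Linear(d, hidden, bias=False)   # key projection
        self.wv = nn.Linear(d, hidden, bias=False)   # value projection
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, zy: torch.Tensor) -> torch.Tensor:  # zy: (B, d)
        q = self.wq(zy.mean(dim=0, keepdim=True))          # query = batch mean, (1, h)
        k, v = self.wk(zy), self.wv(zy)                    # (B, h) each
        att = torch.softmax(q @ k.t() / k.shape[1] ** 0.5, dim=-1)  # (1, B)
        pooled = att @ v                                   # order-invariant pooling
        return self.head(pooled).squeeze()                 # one logit per batch
```

Because the query is the batch mean and the attention weights permute together with the inputs, the pooled vector (and hence the prediction) is invariant to reordering of the batch, as the exchangeability requirement demands.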
This paper tackles a fair classification problem with invisible demographics, a situation where records with certain combinations of target labels and sensitive attributes are missing. In this setting, the authors introduce a disentangled representation learning framework to make the resulting classifier fair by taking advantage of an additional dataset, the context dataset. They demonstrate through empirical evaluations that the proposed disentangled representation learning algorithm succeeds in mitigating unfair bias by utilizing the perfect dataset, a dataset in which the target label and sensitive attribute are independent. Usually, the perfect dataset is unavailable; hence, they introduce a method to convert the context dataset into a perfect dataset. The authors also show that even if the context dataset is not perfect, the presented method still succeeds in mitigating unfair bias.
SP:6c57ec1533acf8cfcc2f8d9cdc8fe4d7acf9f77f
Uncertainty in Gradient Boosting via Ensembles
1 INTRODUCTION . Gradient boosting ( Friedman , 2001 ) is a widely used machine learning algorithm that achieves state-of-the-art results on tasks containing heterogeneous features , complex dependencies , and noisy data : web search , recommendation systems , weather forecasting , and many others ( Burges , 2010 ; Caruana & Niculescu-Mizil , 2006 ; Richardson et al. , 2007 ; Roe et al. , 2005 ; Wu et al. , 2010 ; Zhang & Haghani , 2015 ) . Gradient boosting based on decision trees ( GBDT ) underlies such well-known libraries as XGBoost , LightGBM , and CatBoost . In this paper , we investigate the estimation of predictive uncertainty in GBDT models . Uncertainty estimation is crucial for avoiding costly mistakes in high-risk applications , such as autonomous driving , medical diagnostics , and financial forecasting . For example , in self-driving cars , it is necessary to know when the AI-pilot is confident in its ability to drive and when it is not , to avoid a fatal collision . In financial forecasting and medical diagnostics , mistakes on the part of an AI forecasting or diagnostic system could either lead to large financial or reputational loss or to the loss of life . Crucially , both financial and medical data are often represented in heterogeneous tabular form — data on which GBDTs are typically applied , highlighting the relevance of our work on obtaining uncertainty estimates for GBDT models . Approximate Bayesian approaches for uncertainty estimation have been extensively studied for neural network models ( Gal , 2016 ; Malinin , 2019 ) . Bayesian methods for tree-based models ( Chipman et al. , 2010 ; Linero , 2017 ) have also been widely studied in the literature . However , this research did not explicitly focus on studying uncertainty estimation and its applications . ( ∗All authors contributed equally and are listed in alphabetical order . ) Some related work was done by Coulston et al . ( 2016 ) and Shaker & Hüllermeier ( 2020 ) , who examined quantifying predictive uncertainty for random forests . However , the area has been otherwise relatively under-explored , especially for GBDT models , which are widely used in practice and known to outperform other approaches based on tree ensembles . While for classification problems GBDT models already return a distribution over class labels , for regression tasks they typically yield only point predictions . Recently , this problem was addressed in the NGBoost algorithm ( Duan et al. , 2020 ) , where a GBDT model is trained to return the mean and variance of a normal distribution over the target variable y for a given feature vector . However , such models only capture data uncertainty ( Gal , 2016 ; Malinin , 2019 ) , also known as aleatoric uncertainty , which arises due to inherent class overlap or noise in the data . They do not quantify uncertainty due to the model ’ s inherent lack of knowledge about inputs from regions either far from the training data or sparsely covered by it , known as knowledge uncertainty , or epistemic uncertainty ( Gal , 2016 ; Malinin , 2019 ) . One class of approaches for capturing knowledge uncertainty are Bayesian ensemble methods , which have recently become popular for estimating predictive uncertainty in neural networks ( Depeweg et al. , 2017 ; Gal & Ghahramani , 2016 ; Kendall et al. , 2018 ; Lakshminarayanan et al. , 2017 ; Maddox et al. , 2019 ; Smith & Gal , 2018 ) .
A key feature of ensemble approaches is that they allow overall uncertainty to be decomposed into data uncertainty and knowledge uncertainty within an interpretable probabilistic framework ( Depeweg et al. , 2017 ; Gal , 2016 ; Malinin , 2019 ) . Ensembles are also known to yield improvements in predictive performance . This work examines ensemble-based uncertainty estimation for GBDT models . The contributions are as follows . First , we consider generating ensembles using both classical Stochastic Gradient Boosting ( SGB ) as well as the recently proposed Stochastic Gradient Langevin Boosting ( SGLB ) ( Ustimenko & Prokhorenkova , 2020 ) . Importantly , SGLB allows us to guarantee that the models are asymptotically sampled from a true Bayesian posterior . Second , we show that using SGLB we can construct a virtual ensemble using only one gradient boosting model , significantly reducing the computational complexity . Third , to understand the attributes of using ensemble-based uncertainty estimation in GBDT models , we conduct extensive analysis on several synthetic datasets . Finally , we evaluate the proposed approach on a range of real regression and classification datasets . Our results show that this approach successfully enables the detection of anomalous out-of-domain inputs . Importantly , our solution is easy to combine with any implementation of GBDT . Our methods have been implemented within the open-source CatBoost library . The code of our experiments is publicly available at https://github.com/yandex-research/GBDT-uncertainty . 2 PRELIMINARIES . Uncertainty Estimation via Bayesian Ensembles . In this work we consider uncertainty estimation within the standard Bayesian ensemble-based framework ( Gal , 2016 ; Malinin , 2019 ) . Here , model parameters θ are considered random variables and a prior p ( θ ) is placed over them to compute a posterior p ( θ|D ) via Bayes ’ rule : p ( θ|D ) = p ( D|θ ) p ( θ ) / p ( D ) , ( 1 ) where D = { x ( i ) , y ( i ) }_{i=1}^{N} is the training dataset . Each set of parameters can be considered a hypothesis or explanation about how the world works . Samples from the posterior should yield explanations consistent with the observations of the world contained within the training data D . However , on data far from D , each set of parameters can yield different predictions . Therefore , estimates of knowledge uncertainty can be obtained by examining the diversity of predictions . Consider an ensemble of probabilistic models { P ( y|x ; θ ( m ) ) }_{m=1}^{M} sampled from the posterior p ( θ|D ) . Each model P ( y|x , θ ( m ) ) yields a different estimate of data uncertainty , represented by the entropy of its predictive distribution ( Malinin , 2019 ) . Uncertainty in predictions due to knowledge uncertainty is expressed as the level of spread , or “ disagreement ” , of models in the ensemble ( Malinin , 2019 ) . Note that exact Bayesian inference is often intractable , and it is common to consider either an explicit or implicit approximation q ( θ ) to the true posterior p ( θ|D ) . While a range of approximations has been explored for neural network models ( Gal & Ghahramani , 2016 ; Lakshminarayanan et al. , 2017 ; Maddox et al. , 2019 ) , to the best of our knowledge , limited work has explored Bayesian inference for gradient-boosted trees . ( 1 : A full overview is available in ( Ashukha et al. , 2020 ; Ovadia et al. , 2019 ) . )
Given p ( θ|D ) , the predictive posterior of the ensemble is obtained by taking the expectation with respect to the models in the ensemble : P ( y|x , D ) = Ep ( θ|D ) [ P ( y|x ; θ ) ] ≈ ( 1 / M ) Σ_{m=1}^{M} P ( y|x ; θ ( m ) ) , θ ( m ) ∼ p ( θ|D ) . ( 2 ) The entropy of the predictive posterior estimates the total uncertainty in predictions : H [ P ( y|x , D ) ] = EP ( y|x , D ) [ − ln P ( y|x , D ) ] . ( 3 ) Total uncertainty is due to both data uncertainty and knowledge uncertainty . However , in applications like active learning ( Kirsch et al. , 2019 ) and out-of-domain detection it is desirable to estimate knowledge uncertainty separately . The sources of uncertainty can be decomposed by considering the mutual information between the parameters θ and the prediction y ( Depeweg et al. , 2017 ) : I [ y , θ|x , D ] ( knowledge uncertainty ) = H [ P ( y|x , D ) ] ( total uncertainty ) − Ep ( θ|D ) [ H [ P ( y|x ; θ ) ] ] ( expected data uncertainty ) ≈ H [ ( 1 / M ) Σ_{m=1}^{M} P ( y|x ; θ ( m ) ) ] − ( 1 / M ) Σ_{m=1}^{M} H [ P ( y|x ; θ ( m ) ) ] . ( 4 ) This is expressed as the difference between the entropy of the predictive posterior , a measure of total uncertainty , and the expected entropy of each model in the ensemble , a measure of expected data uncertainty . Their difference is a measure of ensemble diversity and estimates knowledge uncertainty . Unfortunately , when considering ensembles of probabilistic regression models { p ( y|x ; θ ( m ) ) }_{m=1}^{M} over a continuous-valued target y ∈ R , it is no longer possible to obtain tractable estimates of the ( differential ) entropy of the predictive posterior and , by extension , the mutual information . In these cases , uncertainty estimates can instead be derived via the law of total variance : Vp ( y|x , D ) [ y ] ( total uncertainty ) = Vp ( θ|D ) [ Ep ( y|x , θ ) [ y ] ] ( knowledge uncertainty ) + Ep ( θ|D ) [ Vp ( y|x , θ ) [ y ] ] ( expected data uncertainty ) . ( 5 ) This is conceptually similar to the decomposition ( 4 ) obtained via mutual information . For an ensemble of probabilistic regression models which parameterize the normal distribution , where each model yields a mean and standard deviation { µm , σm } = f ( x ; θ ( m ) ) , the total variance can be computed as follows : Vp ( y|x , D ) [ y ] ( total uncertainty ) ≈ ( 1 / M ) Σ_{m=1}^{M} ( µm − µ̄ )^2 ( knowledge uncertainty ) + ( 1 / M ) Σ_{m=1}^{M} σm^2 ( expected data uncertainty ) , where µ̄ = ( 1 / M ) Σ_{m=1}^{M} µm . ( 6 ) However , while these measures are tractable , they are based on only first and second moments , and may therefore miss higher-order details in the uncertainty . They are also not scale-invariant , which can cause issues if the scale of predictions on in-domain and out-of-domain data is very different .
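The entropy- and variance-based decompositions above translate directly into code; the sketch below assumes an ensemble's predictions are already collected into arrays, with names chosen for illustration.

```python
import numpy as np

# Entropy-based decomposition (Eqs. 2-4) for an ensemble of M probabilistic
# classifiers: probs has shape (M, K) for a single input x, where row m is
# P(y | x; theta_m) over K classes.
def decompose_uncertainty(probs: np.ndarray, eps: float = 1e-12):
    mean_p = probs.mean(axis=0)                                    # predictive posterior, Eq. 2
    total = -(mean_p * np.log(mean_p + eps)).sum()                 # H[P(y|x,D)], Eq. 3
    expected_data = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    knowledge = total - expected_data                               # mutual information, Eq. 4
    return total, expected_data, knowledge

# Variance-based analogue (Eq. 6) for an ensemble of normal regression models,
# where mus and sigmas hold each member's predicted mean and std for one x:
def decompose_variance(mus: np.ndarray, sigmas: np.ndarray):
    knowledge = mus.var()                   # spread of the means across the ensemble
    expected_data = (sigmas ** 2).mean()    # average predicted variance
    return knowledge + expected_data, expected_data, knowledge
```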
Gradient boosting is a powerful machine learning technique especially useful on tasks containing heterogeneous features . It iteratively combines weak models , such as decision trees , to obtain more accurate predictions . Formally , given a dataset D and a loss function L : R2 → R , the gradient boosting algorithm ( Friedman , 2001 ) iteratively constructs a model F : X → R to minimize the empirical risk L ( F |D ) = ED [ L ( F ( x ) , y ) ] . At each iteration t the model is updated as : F ( t ) ( x ) = F ( t−1 ) ( x ) + ϵ h ( t ) ( x ) , ( 7 ) where F ( t−1 ) is the model constructed at the previous iteration , h ( t ) ( x ) ∈ H is a weak learner chosen from some family of functions H , and ϵ is the learning rate . The weak learner h ( t ) is usually chosen to approximate the negative gradient −g ( t ) ( x , y ) : = − ∂L ( y , s ) / ∂s |_{ s = F ( t−1 ) ( x ) } : h ( t ) = argmin_{h ∈ H} ED [ ( − g ( t ) ( x , y ) − h ( x ) )^2 ] . ( 8 ) A weak learner h ( t ) is associated with parameters ϕ ( t ) ∈ Rd . We write h ( t ) ( x , ϕ ( t ) ) to reflect this dependence . The set of weak learners H often consists of shallow decision trees , which are models that recursively partition the feature space into disjoint regions called leaves . Each leaf Rj of the tree is assigned a value , which is the estimated response y in the corresponding region . We can write h ( x , ϕ ( t ) ) = Σ_{j=1}^{d} ϕj ( t ) 1_{ { x ∈ Rj } } , so the decision tree is a linear function of ϕ ( t ) . The final GBDT model F is a sum of decision trees ( 7 ) and the parameters of the full model are denoted by θ . For classification tasks , a model yields estimates of data uncertainty if it is trained via negative log-likelihood and provides a distribution over class labels . However , classic GBDT regression models yield point predictions , and there has been little research devoted to estimating predictive uncertainty . Recently , this issue was addressed by Duan et al . ( 2020 ) via an algorithm called NGBoost ( Natural Gradient Boosting ) , which allows estimating data uncertainty . NGBoost simultaneously estimates the parameters of a conditional distribution p ( y|x , θ ) over the target y given the features x , by optimizing a proper scoring rule . Typically , a normal distribution over y is assumed and the negative log-likelihood is taken as the scoring rule . Formally , given an input x , the model F predicts two parameters of the normal distribution – the mean µ and the logarithm of the standard deviation log σ . The loss function is the expected negative log-likelihood : p ( y|x , θ ( t ) ) = N ( y|µ ( t ) , σ ( t ) ) , { µ ( t ) , log σ ( t ) } = F ( t ) ( x ) , ( 9 ) L ( θ|D ) = ED [ − log p ( y|x , θ ) ] = − ( 1 / N ) Σ_{i=1}^{N} log p ( y ( i ) |x ( i ) , θ ) . ( 10 ) Note that θ denotes the concatenation of the two parameter vectors used to predict µ and log σ .
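As a sketch of the objective in Eqs. 9-10, the Gaussian negative log-likelihood and its plain gradients with respect to the two predicted parameters are given below; NGBoost itself uses natural gradients, so treat these as illustrative.

```python
import numpy as np

# Gaussian NLL (Eq. 10) for a model that outputs (mu, log_sigma) per input.
def gaussian_nll(y: np.ndarray, mu: np.ndarray, log_sigma: np.ndarray) -> float:
    var = np.exp(2.0 * log_sigma)
    return float(np.mean(0.5 * np.log(2.0 * np.pi * var)
                         + (y - mu) ** 2 / (2.0 * var)))

# Plain gradients w.r.t. the two predicted parameters; a boosting step would
# fit these residuals with the next pair of trees (Eq. 8).
def grad_mu(y, mu, log_sigma):
    return (mu - y) / np.exp(2.0 * log_sigma)

def grad_log_sigma(y, mu, log_sigma):
    return 1.0 - (y - mu) ** 2 / np.exp(2.0 * log_sigma)
```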
This paper studies uncertainty estimation in GBDT methods. The authors describe three methods to estimate the uncertainty. With SGB, the estimation is achieved by training multiple models on data sub-samples. With SGLB, the authors derive that we can estimate the posterior distribution of the model parameters. These two methods both have the disadvantage that the training time scales multiplicatively with the number of trained models. To address this issue, the authors propose an improvement to SGLB which they call virtual SGLB. The main idea is to use a subset of trees in a GBDT as a model sample, so that we can train a single model but still be able to estimate the uncertainty.
SP:4a6172aeb95ae800b1a1e86f15a61c6b82cca9d9
Mutual Calibration between Explicit and Implicit Deep Generative Models
1 INTRODUCTION . Deep generative models , as a powerful unsupervised framework for learning the distribution of high-dimensional multi-modal data , have been extensively studied in the recent literature . Typically , there are two types of generative models : explicit and implicit ( Goodfellow et al. , 2014 ) . Explicit models define a density function of the distribution , while implicit models learn a mapping that generates samples by transforming an easy-to-sample random variable . Both models have their own power and limitations . The density form in explicit models makes it convenient to characterize the data distribution and infer the sample likelihood . However , the unknown normalizing constant often causes computational intractability . On the other hand , implicit models including generative adversarial networks ( GANs ) can directly generate vivid samples in various application domains including images , natural languages , graphs , etc . ( Goodfellow et al. , 2014 ; Radford et al. , 2016 ; Arjovsky et al. , 2017 ; Brock et al. , 2019 ) . Nevertheless , one important challenge is to design a training algorithm that does not suffer from instability and mode collapse . In view of this , it is natural to build a unified framework that takes full advantage of the two models and encourages them to compensate for each other . Intuitively , an explicit density estimator and a flexible implicit sampler could help each other ’ s training given effective information sharing . On the one hand , the density estimate given by explicit models can be a good metric that measures the quality of samples ( Dai et al. , 2017 ) , and thus can be used for scoring generated samples given by the implicit model or detecting outliers as well as noise in the input true samples ( Zhai et al. , 2016 ) . On the other hand , the generated samples from implicit models could augment the dataset and help to alleviate mode collapse , especially when true samples are insufficient , which would possibly make the explicit model fail to capture an accurate distribution . We refer to Appendix A for a more comprehensive literature review . Motivated by the discussions above , in this paper we propose a joint learning framework that enables mutual calibration between explicit and implicit generative models . In our framework , an explicit model is used to estimate the unnormalized density ; in the meantime , an implicit generator model is exploited to minimize a certain statistical distance ( such as the Wasserstein metric or Jensen-Shannon divergence ) between the distributions of the true and the generated samples . On top of these two models , a Stein discrepancy , acting as a bridge between generated samples and estimated densities , is introduced to push the two models to achieve a consensus . Unlike flow-based models ( Nguyen et al. , 2017 ; Kingma & Dhariwal , 2018 ; Papamakarios et al. , 2017 ) , our formulation does not impose invertibility constraints on the generative models and thus is flexible in utilizing general neural network architectures . Our main contributions are as follows . • Theoretically , we prove that our method allows the two generative models to impose novel mutual regularization on each other . Specifically , our formulation penalizes a large kernel Sobolev norm of the critic in the implicit ( WGAN ) model , which ensures that the critic does not change suddenly on the high-density regions and thus prevents the critic of the implicit model from becoming too strong during training .
In the meantime, our formulation also smooths the function given by the Stein discrepancy through Moreau-Yosida regularization, which encourages the explicit model to seek more modes in the data distribution and thus alleviates mode collapse. • In addition, we also show that the joint training helps to stabilize the training dynamics. Compared with other common regularization approaches for GAN models that may shift the original optimum, our method can facilitate convergence to an unbiased model distribution. • Extensive experiments on synthetic and image datasets justify our theoretical findings and demonstrate that joint training can help the two models achieve better performance. On the one hand, the energy model can detect complicated modes in data more accurately and distinguish out-of-distribution samples. On the other hand, the implicit model can generate higher-quality samples, especially when the training samples are contaminated or limited. 2 BACKGROUND . We briefly provide some technical background related to our model. Energy Model . The energy model assigns each data point $x \in \mathbb{R}^d$ a scalar energy value $E_\phi(x)$, where $E_\phi(\cdot)$ is called the energy function and is parameterized by $\phi$. The model is expected to assign low energy to true samples according to a Gibbs distribution $p_\phi(x) = \exp\{-E_\phi(x)\}/Z_\phi$, where $Z_\phi$ is a normalizing constant depending on $\phi$. The normalizing term $Z_\phi$ is often hard to compute, making the training intractable, and various methods have been proposed to sidestep this term (see Appendix A). Stein Discrepancy . Stein discrepancy (Gorham & Mackey, 2015; Liu et al., 2016; Chwialkowski et al., 2016; Oates et al., 2017; Grathwohl et al., 2020) is a measure of closeness between two probability distributions that does not require knowledge of the normalizing constant of one of the compared distributions. Let $P$ and $Q$ be two probability distributions on $\mathcal{X} \subset \mathbb{R}^d$, and assume $Q$ has an (unnormalized) density $q$. The Stein discrepancy $S(P, Q)$ is defined as
$$S(P, Q) := \sup_{f \in \mathcal{F}} \mathbb{E}_{x \sim P}[\mathcal{A}_Q f(x)] := \sup_{f \in \mathcal{F}} \left\{ \Gamma\left( \mathbb{E}_{x \sim P}\left[ \nabla_x \log q(x) f(x)^\top + \nabla_x f(x) \right] \right) \right\}, \qquad (1)$$
where $\mathcal{F}$ is often chosen to be a Stein class (see, e.g., Definition 2.1 in (Liu et al., 2016)), $f : \mathbb{R}^d \to \mathbb{R}^{d'}$ is a vector-valued function called the Stein critic, and $\Gamma$ is an operator that transforms a $d \times d'$ matrix into a scalar value. One common choice of $\Gamma$ is the trace operator when $d' = d$; one can also use other forms for $\Gamma$, such as a matrix norm when $d' \neq d$ (Liu et al., 2016). If $\mathcal{F}$ is a unit ball in some reproducing kernel Hilbert space (RKHS) with a positive definite kernel $k$, it induces the Kernel Stein Discrepancy (KSD). More details are provided in Appendix B. Wasserstein Metric . The Wasserstein metric is suitable for measuring distances between two distributions with non-overlapping supports (Arjovsky et al., 2017). The Wasserstein-1 metric between distributions $P$ and $Q$ is defined as $W(P, Q) := \min_\gamma \mathbb{E}_{(x, y) \sim \gamma}[\|x - y\|]$, where the minimization with respect to $\gamma$ is over all joint distributions with marginals $P$ and $Q$. By Kantorovich-Rubinstein duality, $W(P, Q)$ has a dual representation
$$W(P, Q) := \max_D \left\{ \mathbb{E}_{x \sim P}[D(x)] - \mathbb{E}_{y \sim Q}[D(y)] \right\}, \qquad (2)$$
where the maximization is over all 1-Lipschitz continuous functions. Sobolev space and Sobolev dual norm . Let $L^2(P)$ be the Hilbert space on $\mathbb{R}^d$ equipped with the inner product $\langle u, v \rangle_{L^2(P)} := \int_{\mathbb{R}^d} u v \, dP(x)$.
The (weighted) Sobolev space $H^1$ is defined as the closure of $C_0^\infty$, the set of smooth functions on $\mathbb{R}^d$ with compact support, with respect to the norm $\|u\|_{H^1} := \left( \int_{\mathbb{R}^d} (u^2 + \|\nabla u\|_2^2) \, dP(x) \right)^{1/2}$, where $P$ has a density. For $v \in L^2$, its Sobolev dual norm $\|v\|_{H^{-1}}$ is defined by (Evans, 2010)
$$\|v\|_{H^{-1}} := \sup_{u \in H^1} \left\{ \langle v, u \rangle_{L^2} : \int_{\mathbb{R}^d} \|\nabla u\|_2^2 \, dP(x) \le 1, \ \int_{\mathbb{R}^d} u(x) \, dP(x) = 0 \right\}.$$
The constraint $\int_{\mathbb{R}^d} u(x) \, dP(x) = 0$ is necessary to guarantee the finiteness of the supremum, and the supremum can be equivalently taken over $C_0^\infty$. 3 PROPOSED MODEL : STEIN BRIDGING . In this section, we formulate our model, Stein Bridging. A scheme of our framework is illustrated in Figure 1. Denote by $P_{real}$ the underlying real distribution from which the data $\{x\}$ are sampled. The formulation simultaneously learns two generative models – one explicit and one implicit – that represent estimates of $P_{real}$. The explicit generative model has a distribution $P_E$ on $\mathcal{X}$ with explicit probability density proportional to $\exp(-E(x))$, $x \in \mathcal{X}$, where $E$ is referred to as an energy function. We focus on an energy-based explicit model in the model formulation as it does not enforce any constraints or assume specific density forms. For specific applications, one can also consider other explicit models, such as autoregressive models, or directly use some density form such as a Gaussian distribution given domain knowledge. The implicit model transforms an easy-to-sample random noise $z$ with distribution $P_0$ via a generator $G$ to a sample $\tilde{x} = G(z)$ with distribution $P_G$. Note that for the distribution $P_E$ we have its explicit density without the normalizing term, while for $P_G$ and $P_{real}$ we have samples from the two distributions. Hence, we can use the Stein discrepancy (which does not require the normalizing constant) as a measure of closeness between the explicit distribution $P_E$ and the real distribution $P_{real}$, and use the Wasserstein metric (which requires only samples from the two distributions) as a measure of closeness between the implicit distribution $P_G$ and the real data distribution $P_{real}$. To jointly learn the two generative models $P_G$ and $P_E$, arguably the most straightforward way is to minimize the sum of the Stein discrepancy and the Wasserstein metric: $\min_{E, G} W(P_{real}, P_G) + \lambda S(P_{real}, P_E)$, where $\lambda \ge 0$. However, this approach appears no different from learning the two generative models separately. To achieve information sharing between the two models, we incorporate another term $S(P_G, P_E)$ – called the Stein bridge – that measures the closeness between the explicit distribution $P_E$ and the implicit distribution $P_G$:
$$\min_{E, G} W(P_{real}, P_G) + \lambda_1 S(P_{real}, P_E) + \lambda_2 S(P_G, P_E), \qquad (3)$$
where $\lambda_1, \lambda_2 \ge 0$. The Stein bridge term in (3) pushes the two models to achieve a consensus. Remark 1 . Our formulation is flexible in choosing both the implicit and explicit models. In (3), we can choose statistical distances other than the Wasserstein metric $W(P_{real}, P_G)$ to measure closeness between $P_{real}$ and $P_G$, such as the Jensen-Shannon divergence, as long as its computation requires only samples from the two distributions involved. Hence, one can use GAN architectures other than WGAN to parametrize the implicit model. In addition, one can replace the first Stein discrepancy term $S(P_{real}, P_E)$ in (3) by other statistical distances as long as its computation is efficient, and hence other explicit models can be used.
For instance, if the normalizing constant of $P_E$ is known or easy to calculate, one can use the Kullback-Leibler (KL) divergence. Remark 2 . The choice of the Stein discrepancy for the bridging term $S(P_G, P_E)$ is crucial and cannot be replaced by other statistical distances such as the KL divergence, since the data-generating distribution does not have an explicit density form (not even up to a normalizing constant). This is exactly one important reason why Stein bridging is proposed: it requires only samples from the data distribution and only the log-density of the explicit model, without knowledge of the normalizing constant that would otherwise have to be estimated by MCMC or other methods. In our implementation, we parametrize the generator in the implicit model and the density estimator in the explicit model as $G_\theta(z)$ and $p_\phi(x)$, respectively. The Wasserstein term in (3) is implemented using its equivalent dual representation in (2) with a parametrized critic $D_\psi(x)$. The two Stein terms in (3) can be implemented using (1) with either a Stein critic (parametrized as a neural network, i.e., $f_w(x)$) or the non-parametric Kernel Stein Discrepancy. Our implementation iteratively updates the explicit and implicit models. Details of the model specifications and optimization are in Appendix E.2. We also compare with some related works that attempt to combine the best of both worlds (such as energy-based GANs, contrastive learning, and cooperative learning) in Appendix A.3.
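To make the alternating optimization concrete, the following is a minimal PyTorch sketch of the KSD-based instantiation of objective (3). All names (`gen`, `energy`, `critic`, `ksd_sq`) are illustrative assumptions rather than the authors' actual implementation; the fixed RBF bandwidth, the V-statistic estimator, and the omission of any Lipschitz regularization on the WGAN critic are simplifying choices.

```python
import torch

def _rbf_terms(x, y, h):
    # k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 h^2)) plus the kernel
    # derivatives needed by the Stein kernel.
    diff = x.unsqueeze(1) - y.unsqueeze(0)            # (n, m, d)
    sq = (diff ** 2).sum(-1)                          # (n, m)
    k = torch.exp(-sq / (2 * h ** 2))
    grad_x = -diff / h ** 2 * k.unsqueeze(-1)         # nabla_x k
    grad_y = diff / h ** 2 * k.unsqueeze(-1)          # nabla_y k
    trace = k * (x.shape[-1] / h ** 2 - sq / h ** 4)  # tr(nabla_x nabla_y k)
    return k, grad_x, grad_y, trace

def ksd_sq(samples, energy, h=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy between
    the sample distribution and the density proportional to exp(-energy)."""
    x = samples if samples.requires_grad else samples.clone().requires_grad_(True)
    # grad log p(x) = -grad E(x); the normalizing constant drops out.
    score = -torch.autograd.grad(energy(x).sum(), x, create_graph=True)[0]
    k, gx, gy, tr = _rbf_terms(x, x, h)
    sx, sy = score.unsqueeze(1), score.unsqueeze(0)
    u = k * (sx * sy).sum(-1) + (sx * gy).sum(-1) + (sy * gx).sum(-1) + tr
    return u.mean()

def joint_losses(x_real, z, gen, energy, critic, lam1=1.0, lam2=0.5):
    """One evaluation of the three losses implied by Eq. (3)."""
    x_fake = gen(z)
    wass = critic(x_real).mean() - critic(x_fake).mean()   # dual form (2)
    loss_D = -wass                                         # critic ascends on wass
    loss_E = lam1 * ksd_sq(x_real, energy) \
           + lam2 * ksd_sq(x_fake.detach(), energy)        # S(P_real,P_E) + S(P_G,P_E)
    loss_G = -critic(x_fake).mean() + lam2 * ksd_sq(x_fake, energy)
    return loss_D, loss_E, loss_G
```

In an actual run one would alternate optimizer steps on `loss_D`, `loss_E`, and `loss_G`, and add the usual critic regularization (e.g., a gradient penalty); the parametric Stein-critic variant mentioned above would replace `ksd_sq` with an inner maximization over $f_w$.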
In this paper, the task is to train an implicit and an explicit model simultaneously in a GAN setting with a new regularizer called the "Stein bridge", which is constructed from the kernel Stein discrepancy between the implicit and explicit models. The idea of adding such regularization, with the notion of mutual regularization between the two models, is interesting. The proposed regularization term is clearly presented, the stabilization of the training procedure is well illustrated, and the empirical results are clearly shown and discussed. The sample quality of the generative models is compared.
SP:1fd72534803649141dce71dd19d3998faf96f625
The Advantage Regret-Matching Actor-Critic
1 Introduction . The notion of regret is a key concept in the design of many decision-making algorithms. Regret minimization drives most bandit algorithms, is often used as a performance metric for reinforcement learning (RL) algorithms, and underlies learning in games (3). When used in algorithm design, the common approach is to accumulate values and/or regrets and derive new policies based on these accumulated values. One particular approach, counterfactual regret (CFR) minimization (35), has been the core algorithm behind super-human play in Computer Poker research (4; 25; 6; 8). CFR computes an approximate Nash equilibrium by having players minimize regret in self-play, producing an average strategy that is guaranteed to converge to an optimal solution in two-player zero-sum games and single-agent games. We investigate the problem of generalizing these regret minimization algorithms over large state spaces in the sequential setting using end-to-end function approximators, such as deep networks. There have been several approaches that try to predict the regret or otherwise simulate regret minimization: Regression CFR (RCFR) (34), advantage regret minimization (17), regret-based policy gradients (30), Deep Counterfactual Regret minimization (5), and Double Neural CFR (22). All of these approaches have focused either on the multiagent or the single-agent problem exclusively; some have used expert features, while others have used tree search to scale. Another common approach is based on fictitious play (15; 16; 21; 24), a simple iterative self-play algorithm based on best response. A common technique is to use reservoir sampling to maintain a buffer that represents a uniform sample over past data, which is used to train a classifier representing the average policy. In Neural Fictitious Self-Play (NFSP), this produced competitive policies in limit Texas Hold'em (16), and in Deep CFR this method was shown to approach an approximate equilibrium in a large subgame of Hold'em poker. A generalization of fictitious play, policy-space response oracles (PSRO) (21), stores past policies and a meta-distribution over them, replaying policies against other policies and incrementally adding new best responses to the set; this can be seen as a population-based learning approach where the individuals are the policies and the distribution is modified based on fitness. This approach only requires simulation of the policies and aggregation of data; as a result, it was able to scale to a very large real-time strategy game (33). In this paper, we describe an approximate form of CFR in a training regime that we call retrospective policy improvement. Similar to PSRO, our method stores past policies. However, it does not store meta-distributions or reward tables, nor do the policies have to be approximate best responses, which can be costly to compute or learn. Instead, the policies are snapshots of those used in the past, which are retrospectively replayed to predict a conditional advantage that, when used in a regret-matching algorithm, produces the same policy that CFR would. In the single-agent setting, ARMAC is related to Politex (1), except that it is based on regret-matching (14) and predicts average quantities rather than explicitly summing over all the experts to obtain the policy.
In the multiagent setting, it is a sample-based, model-free variant of RCFR with one important property: it uses trajectory samples to estimate quantities without requiring importance sampling as in standard Monte Carlo CFR (20); hence it does not suffer from excessive variance in large environments. This is achieved by using critics (value estimates) of past policies that are trained off-policy using standard policy evaluation techniques. In particular, we introduce a novel training regime that estimates a conditional advantage $W_i(s, a)$, which is the cumulative counterfactual regret $R_i(s, a)$ scaled by a factor $B(s)$ that depends on the information state $s$ only; hence, using regret matching over this quantity yields the policy that CFR would compute when applying regret matching to the same (unscaled) regret values. By doing this entirely from sampled trajectories, the algorithm is model-free and can be run with any black-box simulator of the environment; hence, ARMAC inherits the scaling potential of PSRO without requiring a best-response training regime, being driven instead by regret minimization. Problem Statement . CFR is a tabular algorithm that enumerates the entire state space and has scaled to large games through domain-specific (hand-crafted) state space reductions. The problem is to define a model-free variant of CFR using only sampled trajectories and general (domain-independent) generalization via function approximation, without the importance sampling commonly used in Monte Carlo CFR, as it can cause excessive variance in large domains. 2 Background . In this section, we describe the necessary terminology. Since we want to include the (partially-observable) multiagent case and we build on algorithms from regret minimization, we use extensive-form game notation (29). A single-player game represents the single-agent case, where histories are aggregated appropriately based on the Markov property. A game is a tuple $(\mathcal{N}, \mathcal{A}, \mathcal{S}, \mathcal{H}, \mathcal{Z}, u, \tau)$, where $\mathcal{N} = \{1, 2, \cdots, n\}$ is the set of players. By convention, we use $i \in \mathcal{N}$ to refer to a player, and $-i$ for the other players ($\mathcal{N} - \{i\}$). There is a special player $c$ called chance (or nature) that plays with a fixed stochastic strategy (chance's fixed strategy determines the transition function). $\mathcal{A}$ is a finite set of actions. Every game starts in an initial state, and players sequentially take actions leading to histories of actions $h \in \mathcal{H}$. Terminal histories, $z \in \mathcal{Z} \subset \mathcal{H}$, are those which end the episode. The utility function $u_i(z)$ denotes player $i$'s return over episode $z$. The set of states $\mathcal{S}$ is a partition of $\mathcal{H}$, where histories are grouped into information states $s = \{h, h', \ldots\}$ such that the player to play at $s$, $\tau(s)$, cannot distinguish among the possible histories (world states) due to private information known only by the other players. (An information state is thus the belief about the world that a given player can infer from her limited observations, and it may correspond to many possible histories, i.e., world states.) Let $\Delta(X)$ represent all distributions over $X$: each player's (agent's) goal is to learn a policy $\pi_i : \mathcal{S}_i \to \Delta(\mathcal{A})$, where $\mathcal{S}_i = \{s \mid s \in \mathcal{S}, \tau(s) = i\}$. For some state $s$, we denote $A(s) \subseteq \mathcal{A}$ as the legal actions at state $s$, and all valid state policies $\pi(s)$ assign probability 0 to illegal actions $a \notin A(s)$. We now give a diagram to illustrate the key ideas. Kuhn poker, shown in Figure 1, is a poker game with a 3-card deck: Jack (J), Queen (Q), and King (K).
Each player antes a single chip and has one more chip to bet with, then receives a single private card at random (one card is left face down), and players proceed to bet (b) or pass (p). Initially the game starts in the empty history $h_0 = \emptyset$, where no actions have been taken, and it is chance's turn to play. Suppose chance samples, according to a fixed distribution, one of its six actions, which correspond to the size-2 permutations of deals (one card to each player). For example, suppose outcome 1Q2J is sampled, corresponding to the first player getting the queen and the second player getting the jack. This would correspond to a new history $h = (1Q2J)$. Label the information state corresponding to this history as $s$, depicted by the grey joined circles: $h' = (1Q2K)$. At this information state $s = \{h, h'\}$, it is the first player's turn ($\tau(s) = 1$), and it includes every history consistent with their information (namely, that they were dealt the queen). The legal actions are now $A(s) = \{p, b\}$. Suppose the first player chooses p and the second player chooses b; then the history is part of $s'$, the second information state shown in the figure. Finally, suppose the first player chooses to bet (call); then the first player would win, gaining 2 chips, since they have the higher-ranking card. Each player $i$'s goal is to compute $\pi_i$ that achieves maximal reward in expectation, where the expectation is taken over all players' policies, even though player $i$ controls only their own policy. Hence, ideally, the player would learn a safe policy that guarantees the best worst-case scenario. Let $\pi$ denote a joint policy. Define the state value $v_{\pi,i}(s)$ as the expected (undiscounted) return for player $i$ given that state $s$ is reached and all players follow $\pi$. Let $q_{\pi,i}$ be defined similarly, except also conditioned on player $\tau(s)$ taking action $a$ at $s$. Formally,
$$v_{\pi,i}(s) = \sum_{(h,z) \in Z(s)} \eta^\pi(h \mid s) \, \eta^\pi(h, z) \, u_i(z),$$
where $Z(s)$ are all terminal histories paired with their prefixes that pass through $s$, $\eta^\pi(h \mid s) = \eta^\pi(h) / \eta^\pi(s)$, where $\eta^\pi(s) = \sum_{h' \in s} \eta^\pi(h')$, and $\eta^\pi(h, z)$ is the product of probabilities of each action taken by the players' policies along $h$ to $z$. The state-action values $q_{\pi,i}(s, a)$ are defined analogously. Standard value-based RL algorithms estimate these quantities for policy evaluation. Regret minimization in zero-sum games uses a different notion of value, the counterfactual value:
$$v^c_{\pi,i}(s) = \sum_{(h,z) \in Z(s)} \eta^\pi_{-i}(h) \, \eta^\pi(h, z) \, u_i(z),$$
where $\eta^\pi_{-i}(h)$ is the product of the opponents' policy probabilities along $h$. We also write $\eta^\pi_i(h)$ for the product of player $i$'s own probabilities along $h$. Under the standard assumption of perfect recall, we have that for any $h, h' \in s$, $\eta^\pi_i(h) = \eta^\pi_i(h')$. Thus counterfactual values are formally related to the standard values (30): $v_{\pi,i}(s) = v^c_{\pi,i}(s) / \beta_{-i}(\pi, s)$, where $\beta_{-i}(\pi, s) = \sum_{h \in s} \eta^\pi_{-i}(h)$. Also, $q^c_{\pi,i}(s, a)$ is defined similarly, except over histories $(ha, z) \in Z(s)$, where $ha$ is history $h$ concatenated with action $a$. Counterfactual regret minimization (CFR) is a tabular policy iteration algorithm that has facilitated many advances in Poker AI (35).
On each iteration $t$, CFR computes counterfactual values $q^c_{\pi,i}(s, a)$ and $v^c_{\pi,i}(s)$ for each state $s$ and action $a \in A(s)$, and the regret of not choosing action $a$ (or, equivalently, the advantage of choosing action $a$ at state $s$): $r^t(s, a) = q^c_{\pi^t,i}(s, a) - v^c_{\pi^t,i}(s)$. CFR tracks the cumulative regrets for each state and action, $R^T(s, a) = \sum_{t=1}^T r^t(s, a)$. Define $(x)^+ = \max(0, x)$; regret matching then updates the policy of each action $a \in A(s)$ as follows (14):
$$\pi^{T+1}(s, a) = \text{NormalizedReLU}(R^T, s, a) = \begin{cases} \dfrac{R^{T,+}(s, a)}{\sum_{b \in A(s)} R^{T,+}(s, b)} & \text{if } \sum_{b \in A(s)} R^{T,+}(s, b) > 0, \\ \dfrac{1}{|A(s)|} & \text{otherwise.} \end{cases} \qquad (1)$$
In two-player zero-sum games, the mixture policy $\bar{\pi}^T$ converges to the set of Nash equilibria as $T \to \infty$. Traditional (off-policy) Monte Carlo CFR (MCCFR) is a generic family of sampling variants (20). In outcome sampling MCCFR, a behavior policy $\mu_i$ is used by player $i$, while players $-i$ use $\pi_{-i}$; a trajectory $\rho \sim (\mu_i, \pi_{-i})$ is sampled, and the sampled counterfactual value is computed:
$$\tilde{q}^c_{\pi,i}(s, a \mid \rho) = \frac{1}{\eta^{(\mu_i, \pi_{-i})}_i(z)} \, \eta^{(\mu_i, \pi_{-i})}_i(ha, z) \, u_i(z), \qquad (2)$$
if $(s, a) \in \rho$, and 0 otherwise. $\tilde{q}^c_{\pi,i}(s, a \mid \rho)$ is an unbiased estimate of $q^c_{\pi,i}(s, a)$ (20, Lemma 1). However, since these quantities are divided by $\eta^{(\mu_i, \pi_{-i})}_i(z)$, the product of player $i$'s probabilities, (i) there can be significant variance introduced by sampling, especially in problems involving long sequences of decisions, and (ii) the ranges of the $\tilde{v}^c_i$ can vary wildly (and unboundedly, if the exploration policy is insufficiently mixed) over iterations and states, which could make approximating the values in a general way particularly challenging (34). Deep CFR and Double Neural CFR are successful large-scale implementations of CFR with function approximation, and they get around this variance issue by using external sampling or a robust sampling technique, both of which require a perfect game model and enumeration of the tree. This is infeasible in very large environments or in the RL setting, where full trajectories are generated from beginning to end without access to a generative model that could be used to generate transitions from any state.
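As a concrete reference point for Equation 1 and for ARMAC's use of it, here is a small NumPy sketch of regret matching; the function names and the assumption that the per-state scaling factor $B(s)$ is strictly positive are ours.

```python
import numpy as np

def regret_matching(cum_regret, legal_mask):
    """Policy from cumulative regrets R^T(s, .) via Eq. (1).

    cum_regret: shape (num_actions,), the accumulated regrets R^T(s, a).
    legal_mask: 0/1 array marking A(s); illegal actions get probability 0.
    """
    pos = np.maximum(cum_regret, 0.0) * legal_mask   # (x)^+ over legal actions
    total = pos.sum()
    if total > 0.0:
        return pos / total
    return legal_mask / legal_mask.sum()             # uniform over A(s)

# Scaling the regrets by a positive, state-dependent factor B(s) leaves the
# regret-matched policy unchanged, which is why ARMAC can regress the scaled
# quantity W_i(s, a) = B(s) R_i(s, a) instead of the raw cumulative regret.
R = np.array([2.0, -1.0, 0.5, 0.0])
mask = np.array([1.0, 1.0, 1.0, 0.0])
assert np.allclose(regret_matching(R, mask), regret_matching(3.7 * R, mask))
```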
This paper considers the problem of counterfactual regret minimization and proposes an algorithm that does not use the importance sampling procedure. The claim is that this helps in reducing the variance usually introduced by the IS procedure. They propose a new algorithm that keeps a buffer of previously used policies and replays those policies to learn a new policy. The algorithm is also claimed to be highly scalable to games with large state-action spaces.
SP:057cf13c9fd038dc102253838b888580acc6e2b6
Identifying Physical Law of Hamiltonian Systems via Meta-Learning
1 INTRODUCTION . Hamiltonian mechanics, a reformulation of Newtonian mechanics, can be used to describe classical systems by modeling the continuous-time evolution of the system dynamics with a conserved quantity called the Hamiltonian (Goldstein et al., 2002). Interestingly, the Hamiltonian formalism provides both a geometrically meaningful interpretation (Arnol'd et al., 2001) and efficient numerical schemes (Feng & Qin, 2010) for representing the state of complex systems in phase space with a symplectic structure. Although the formalism was originally developed for classical mechanics, it has been applied to various fields of physics, such as fluid mechanics (Salmon, 1988), statistical mechanics (Reichl, 1999), and quantum mechanics (Sakurai & Commins, 1995). While it has many useful mathematical properties, establishing an appropriate Hamiltonian for an unknown phenomenon is a challenging problem. A Hamiltonian for a system can be modeled by a shared expression of the Hamiltonian together with physical parameters. For instance, the Hamiltonian of an ideal pendulum is described as $H = \frac{p^2}{2ml^2} + mgl(1 - \cos q)$ (shared expression), with mass $m$, pendulum length $l$, and gravity constant $g$ (physical parameters), whereas $q$ and $p$ are the angle of the pendulum and the corresponding conjugate momentum (the state of the system), respectively. Once an appropriate functional form of the Hamiltonian has been established from observing several pendulums, a new pendulum-like system can be readily recognized by fitting new physical parameters to the expression. Therefore, identifying an appropriate expression of the Hamiltonian is an important yet extremely difficult problem in most science and engineering areas, where numerous processes remain unknown and it is even uncertain whether a closed-form solution or a mathematically clean expression exists. In the recent era of deep learning, we can consider using learning-based algorithms to identify an appropriate expression of the Hamiltonian given sufficient data. To capture the Hamiltonian underlying an unknown physical process, the Hamiltonian should satisfy two fundamental conditions: (1) it should fit well on previously observed data or motions, and (2) it should generalize well to newly observed data from new systems if those systems share the same physical law as the previous ones. The first condition has been addressed by explicitly incorporating symplectic structure or conservation laws into neural networks, called Hamiltonian neural networks (HNN) (Greydanus et al., 2019), for learning Hamiltonian dynamics. HNN and its variants have been shown to be effective in learning many useful properties of the Hamiltonian (Toth et al., 2020; Chen et al., 2020; Zhong et al., 2020a; Sanchez-Gonzalez et al., 2019; Jin et al., 2020). In their experiments, it has been shown that HNN and its variants work well for learning conservation laws and continuous-time translational symmetry, enable stable learning of complex systems by incorporating numerical integrators, and generalize over multiple initial conditions or controls for a given system. However, there is limited work on training a model that works well on entirely new systems governed by the same physical law but with novel physical parameters.
To address the second condition, we propose that meta-learning, which aims to train a model that generalizes well to novel data after observing only a few examples, can be a key to learning a functional form of the Hamiltonian in a data-driven manner. There are several representative categories of meta-learning algorithms, such as metric-based methods (Snell et al., 2017; Sung et al., 2018), black-box methods (Santoro et al., 2016; Bertinetto et al., 2019), and gradient-based methods (Rusu et al., 2019; Flennerhag et al., 2020). Among these, we focus on the gradient-based methods, which are readily compatible with any differentiable model and flexibly applicable to a wide variety of learning problems (Finn et al., 2017; Xu et al., 2018; Hospedales et al., 2020). One of the most successful gradient-based algorithms is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which consists of a task-specific adaptation process and a meta-optimization process. The key observation supporting its potential is the resemblance between these processes and the identification of the physical law underlying the Hamiltonian. The schematic is shown in Figure 1. The task-adaptation process, which adapts the initial model parameters to a task-specific train set, resembles the process of adapting hypothesized governing equations to observations of several physical systems. The meta-optimization process, which updates the initial model parameters by validating each set of task-specific adapted parameters on a task-specific test set, is similar to correcting the hypothesized governing equations by validating each system-specific Hamiltonian on new data from the corresponding physical systems. In addition, Raghu et al. (2020) proposed that the recent success of these meta-learning algorithms comes from providing a qualitatively shared representation across tasks rather than from learning initial model parameters that encourage rapid adaptation (Finn et al., 2017). This hypothesis supports our suggestion that a meta-learner can be efficient in identifying the shared representation of a Hamiltonian. From this point of view, we experiment on several types of physical systems to verify whether these meta-learning algorithms are beneficial to our desired learning problems. Our contributions are summarized as follows: • We formulate the problem of identifying the shared representation of an unknown Hamiltonian as a meta-learning problem. • For learning to identify the Hamiltonian representations, we incorporate the HNN architecture into meta-learning algorithms. • After meta-training the meta-learner, we adapt the model to new systems by learning from partial observations and predict the dynamics of the systems as a vector field in phase space. • We evaluate our method on several types of physical systems to explore its efficiency under various experimental settings. 2 PRELIMINARIES . 2.1 HAMILTONIAN NEURAL NETWORKS . In Hamiltonian mechanics, the state of a system can be described by the vector of canonical coordinates $x = (q, p)$, which consists of the position $q = (q_1, q_2, \ldots, q_n)$ and its conjugate momentum $p = (p_1, p_2, \ldots, p_n)$ in phase space, where $n$ is the number of degrees of freedom of the system.
Then, the time evolution of the system is governed by Hamilton's equations,
$$\dot{x} = \left( \frac{\partial H}{\partial p}, -\frac{\partial H}{\partial q} \right) = \Omega \nabla_x H(x),$$
where $H(x) : \mathbb{R}^{2n} \to \mathbb{R}$ is the Hamiltonian, which is conserved during the process, and $\Omega = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}$ is a $2n \times 2n$ skew-symmetric matrix. From Hamilton's equations, the Hamiltonian vector field in phase space, which is interpreted as the time evolution of the system $\dot{x}$, is the symplectic gradient of the Hamiltonian, $\Omega \nabla_x H(x)$, which is determined by the Hamiltonian function and the state of the system itself. Then, the trajectory of the state can be computed by integrating the symplectic gradient of the Hamiltonian. If the Hamiltonian does not depend on the time variable, it remains constant during the time evolution, because moving along the direction of the symplectic gradient keeps the Hamiltonian constant (Arnol'd, 2013). In (Greydanus et al., 2019), the Hamiltonian function is approximated by a neural network $H_\theta$, called an HNN. To keep the Hamiltonian function constant along motions, the loss of the HNN is defined as the distance between the true vector field and the symplectic gradient of $H_\theta$:
$$L_{HNN} = \|\dot{x} - \Omega \nabla_x H_\theta(x)\|_2^2. \qquad (1)$$
2.2 MODEL-AGNOSTIC META-LEARNING AND FEATURE REUSE HYPOTHESES . A key assumption behind MAML is that separately trained models for each task share meta-initial parameters $\theta$ that can be improved rapidly for any task (Finn et al., 2017). Suppose that each given task $T_i$, with data $D_i = \{D^{tr}_i, D^{te}_i\}$, is drawn from a task distribution, $T_i \sim p(T)$. The learning algorithm consists of a bi-level optimization process: (inner loop) the task-specific adaptation to each train set,
$$\theta'_i = \theta - \alpha \nabla_\theta L_{T_i}(D^{tr}_i; \theta), \qquad (2)$$
and (outer loop) the meta-optimization on each test set,
$$\theta \leftarrow \theta - \beta \nabla_\theta \sum_{T_i \sim p(T)} L_{T_i}(D^{te}_i; \theta'_i), \qquad (3)$$
where $\theta$ can be any differentiable model's parameters that are expected to learn the shared representation of various tasks, and $\alpha$ and $\beta$ are the step sizes of the inner and outer loops, respectively. Meanwhile, Raghu et al. (2020) observed that during the inner-loop process, the task-specific change of the model parameters $\theta$ comes mostly from the last layer of the network, whereas the body of the model hardly changes. They therefore hypothesized that the body of the model behaves as a shared representation across the different tasks, whereas the head of the model behaves as task-specific parameters; this is called the feature reuse hypothesis. From this hypothesis, they proposed a gradient-based meta-learning algorithm called Almost No Inner Loop (ANIL), which slightly modifies MAML by freezing all but the last layer of the network during the inner-loop process. They showed that ANIL performs on par with or better than MAML on several benchmarks and has a computational benefit compared to its counterpart. In this algorithm, when the meta-learner consists of $l$ layers, $\theta = (\theta^{(1)}, \ldots, \theta^{(l-1)}, \theta^{(l)})$, the inner-loop update is modified as
$$\theta'_i = (\theta^{(1)}, \ldots, \theta^{(l-1)}, \theta^{(l)} - \alpha \nabla_{\theta^{(l)}} L_{T_i}(D^{tr}_i; \theta)). \qquad (4)$$
As many physical processes can be expressed as an invariant shared expression of the Hamiltonian together with varying physical parameters, such a meta-learning scheme, which encourages separating the invariant part from the varying part, can be expected to learn new systems more efficiently with a relatively small number of parameter updates. 3 METHOD .
3.1 IDENTIFYING SHARED REPRESENTATION OF HAMILTONIAN VIA META-LEARNER . The main goal of our study is to train a model to identify the shared representation of the Hamiltonian using observations of dynamics from several systems that are assumed to be governed by the same physical law with different physical parameters. From a meta-learning point of view, each system is regarded as a task $T_i$, where the physical parameters of the system are drawn from the distribution $p(T)$. The observations of the system $T_i$ can be split into $D_i = \{D^{tr}_i, D^{te}_i\}$, where $D^{tr}_i$ and $D^{te}_i$ denote the task-specific train and test sets, respectively. The observations in both $D^{tr}_i$ and $D^{te}_i$ are given as a set of tuples of canonical coordinates $x = (q, p)$ and their time derivatives $\dot{x} = (\dot{q}, \dot{p})$ as the ground truth. For each system, the task-specific model parameters are obtained from Equation 2 or Equation 4 by computing the task-specific loss of Equation 1 on each train set $D^{tr}_i$,
$$L_{T_i}(D^{tr}_i; \theta) = \sum_{(x, \dot{x}) \sim D^{tr}_i} \|\dot{x} - \Omega \nabla_x H_\theta(x)\|_2^2, \qquad (5)$$
and the meta-optimization is carried out over a batch of systems as in Equation 3 by minimizing the loss over the batch of physical parameters sampled from $p(T)$. Each loss is computed by evaluating the task-specific adapted model parameters $\theta'_i$ on the corresponding test set $D^{te}_i$:
$$\sum_{T_i \sim p(T)} L_{T_i}(D^{te}_i; \theta'_i) = \sum_{T_i \sim p(T)} \sum_{(x, \dot{x}) \sim D^{te}_i} \left\|\dot{x} - \Omega \nabla_x H_{\theta'_i}(x)\right\|_2^2. \qquad (6)$$
Depending on the inner-loop method, we call the algorithm Hamiltonian Model-Agnostic Meta-Learning (HAMAML) when using Equation 2 and Hamiltonian Almost No Inner-Loop (HANIL) when using Equation 4.
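Below is a minimal PyTorch sketch of how Equations 1, 4, and 5 fit together in HANIL's inner loop. It assumes, purely for illustration, that the Hamiltonian network factors into a feature body and a final linear head (w, b); the step sizes, step counts, and helper names are our own, not the paper's code.

```python
import torch

def hnn_vector_field(h_net, x):
    """Symplectic gradient Omega * grad_x H(x) for x = (q, p) of width 2n."""
    x = x.detach().clone().requires_grad_(True)
    dH = torch.autograd.grad(h_net(x).sum(), x, create_graph=True)[0]
    dHdq, dHdp = dH.chunk(2, dim=-1)
    return torch.cat([dHdp, -dHdq], dim=-1)          # (dq/dt, dp/dt)

def hnn_loss(h_net, x, x_dot):
    # Eq. (1)/(5): match the true vector field to the symplectic gradient.
    return ((x_dot - hnn_vector_field(h_net, x)) ** 2).sum(-1).mean()

def hanil_adapt(body, w, b, x_tr, xdot_tr, alpha=1e-2, steps=5):
    """Inner loop of HANIL (Eq. 4): only the linear head (w, b) is adapted;
    the shared body stays frozen during adaptation."""
    for _ in range(steps):
        h_net = lambda x, w=w, b=b: body(x) @ w.t() + b   # scalar H per sample
        loss = hnn_loss(h_net, x_tr, xdot_tr)
        gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
        w, b = w - alpha * gw, b - alpha * gb    # create_graph keeps the path
    return w, b                                  # open for meta-gradients
```

A meta-optimization step (Eq. 3/6) would evaluate `hnn_loss` on $D^{te}_i$ with the adapted head and backpropagate through the inner updates to the body and the initial head; for HAMAML, the inner loop would instead update all parameters.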
The paper presents a meta-learning method for learning Hamiltonian dynamical systems from data. More specifically, the novelty is incorporating Hamiltonian Neural Networks (HNNs) within known meta-learning methods (MAML and ANIL) in order to model new dynamical systems (with previously known structures but unknown parameters) from partially observed data. The results from the experimental evaluation (on three well-known systems) show that such an approach, and in particular HNNs w/ ANIL (HANIL), leads to more accurate models of unseen dynamics compared to other benchmark methods such as "vanilla" HNNs and HNNs w/ MAML (HAMAML).
SP:87af788d5bd4c486de01969a93d8b49a9f494da1
Transferring Inductive Biases through Knowledge Distillation
1 INTRODUCTION . Inductive biases are the characteristics of learning algorithms that influence their generalization behavior, independent of data. They are one of the main driving forces that push learning algorithms toward particular solutions (Mitchell, 1980). Having the right inductive biases is especially important for obtaining high performance when data or compute is a limiting factor, or when training data is not perfectly representative of the conditions at test time. Moreover, in the absence of strong inductive biases, a model can be equally attracted to several local minima on the loss surface, and the converged solution can be arbitrarily affected by random variations such as the initial state or the order of training examples (Sutskever et al., 2013; McCoy et al., 2020; Dodge et al., 2020). There are different ways to inject inductive biases into learning algorithms, for instance through architectural choices, the objective function, the curriculum, or the optimization regime. In this paper, we exploit the power of Knowledge Distillation (KD) to transfer the effect of inductive biases between neural networks. KD refers to the process of transferring knowledge from a teacher model to a student model, where the logits from the teacher are used to train the student. KD is best known as an effective method for model compression (Buciluǎ et al., 2006; Hinton et al., 2015; Sanh et al., 2019), which allows taking advantage of a huge number of parameters during training while having an efficient, smaller model during inference. The advantage of KD goes beyond model compression, and it can be used to combine the strengths of different learning algorithms (Kuncoro et al., 2019; 2020). Different algorithms vary in terms of computational/memory efficiency at training/inference time and in their inductive biases for learning particular patterns. This makes them better at solving certain problems and worse at others, i.e., there is no "one size fits all" learning algorithm. Hence, it is important to explore the potential of KD for finding better trade-offs. The question we ask in this paper is: "In KD, are the preferences of the teacher that are rooted in its inductive biases also reflected in its dark knowledge [1], and can they thus be transferred to the student?". We are interested in cases where the student model can realize functions that are realizable by the teacher, i.e., the student model is efficient with respect to the teacher model (Cohen et al., 2016), while the teacher has a preference inductive bias so that the desired solutions are easily learnable for the teacher (Seung et al., 1991). We consider two scenarios where the teacher and the student are neural networks with heterogeneous architectures and hence have different inductive biases. We train the models, both independently and using KD, on tasks for which having the right inductive biases is crucial. In the first test case, we study RNNs vs. Transformers (Vaswani et al., 2017) on the subject-verb agreement prediction task (Linzen et al., 2016). In this task, we use LSTMs (Hochreiter & Schmidhuber, 1997) as the most widely used RNN variant. LSTMs have been shown to perform better than vanilla Transformers on this task, and their superior performance is attributed to their so-called "recurrent" inductive bias (Tran et al., 2018).
First, we identify the sources of the recurrent inductive bias of LSTMs (sequentiality, memory bottleneck, and recursion) and design experiments to show the benefits of each. Then, we show that through distilling knowledge of LSTMs into Transformers, the solutions that the Transformer models learn become more similar to the solutions learned by LSTMs. In the second test case, we study CNNs vs. MLPs in the context of the MNIST-C (Corrupted MNIST) benchmark (Mu & Gilmer, 2019), which is designed to measure out-of-distribution robustness of models. We train our models on MNIST and evaluate them on Translated/Scaled MNIST. The particular form of parameter sharing in CNNs, combined with the pooling mechanism, makes them equivariant to these kinds of transformations (Goodfellow et al., 2016), which leads to better generalization in these scenarios compared to MLPs. In our experiments and analysis on these two test cases [2], we compare the behavior of different models, from a wide range of perspectives, when trained in different setups, including (1) when trained without KD, i.e., directly from the data; (2) when trained with KD using a teacher with a similar architecture to the student, i.e., self-distillation; and (3) when trained with KD using a teacher with a different architecture that has stronger inductive biases suiting the task, compared to the student. As the first step, in setup (1), i.e., no KD, we demonstrate how inductive biases arising from different architectural choices affect the generalization behavior of the models we study (§2.1 and §3.1). We show that the models with more suitable inductive biases not only have better accuracy, but the solutions they converge to are also better in terms of other metrics. We also show that different instances of the model with stronger inductive biases have less variance in terms of all the metrics. Then, we apply KD to train the models and contrast the behavior of models trained with setups (2) and (3) against the models trained with setup (1), i.e., with KD vs. without KD. We show that regardless of the properties of the teacher, KD is a powerful technique in which the teacher model drives the student toward a particular set of solutions that is more restricted compared to the set of possible solutions the student can converge to when it learns directly from data (§2.2, §3.2, and Appendix C). [1] Dark knowledge refers to the information encoded in the output logits of a neural network (Hinton et al., 2015). [2] The code for the input pipelines, models, analysis, and the details of the hyper-parameters used in our experiments is available at https://ANONYMIZED . Next, as the main contribution of our paper over previous works that study KD, we contrast the behavior of models trained with setup (3) with the models trained with setups (1) and (2): • We show that the performance of the student models in setup (3) increases, not only on in-distribution test sets (§2.2), but also on out-of-distribution data (§3.2). We demonstrate that this happens when the teacher has the right inductive bias, and not necessarily otherwise, i.e., in setup (2). • In setup (3), besides performance, we show that the solution a student model converges to shares characteristics with the solution of its teacher, for instance in terms of confidence calibration (§2.2 and §3.2) and the per-sample behavior of the model (Appendix E).
• We demonstrate that although the student model is merely exposed to the final logits of the teacher, the structure of the latent space of the student model becomes similar to that of the teacher, i.e., the relational similarity of the internal representations of the student and its teacher increases (§2.2 and §3.2). As an example, in our second test case (MNIST-C), when training an MLP model with KD using a CNN teacher, the student model explores the solution space in ways more similar to its teacher. Figure 1 compares the path that an MLP takes during training (Figure 1a) with that of a CNN (Figure 1b). The CNN model explores the surface in a completely different manner than the MLP, while the path of a student MLP distilled from the CNN teacher (Figure 1c) is more similar to that of the CNN. 2 DISTILLING LSTMS INTO TRANSFORMERS . LSTMs and Transformers are the basic building blocks of many state-of-the-art models for sequence modeling and natural language processing. Transformers are an expressive class of models that do extremely well on many tasks where the training data is adequate in quantity (Devlin et al., 2019; Keskar et al., 2019; Radford et al., 2019; Brown et al., 2020). Several studies, however, have shown that LSTMs can perform better than Transformers on tasks requiring sensitivity to (linguistic) structure, especially when data is limited (Tran et al., 2018; Dehghani et al., 2019). We chose the subject-verb agreement prediction task, introduced by Linzen et al. (2016), as the test case, as it yields a meaningful difference between LSTMs and Transformers (Tran et al., 2018). We compare these two families of models and conduct experiments to highlight the differences between them when trained independently and through KD. Recurrent Inductive Bias . Among sequence modeling architectures, models with recursion are particularly powerful for natural language processing due to their adequacy for modeling hierarchical structures (Linzen et al., 2016). Recursion in a model can be implemented in various ways, as in Recurrent Neural Networks (Elman, 1990), Recursive Neural Networks (Socher et al., 2010; Le & Zuidema, 2014), and Universal Transformers (Dehghani et al., 2019; Hao et al., 2019). While theoretically both recurrent neural networks (RNNs) and Transformers can deal with finite hierarchical structures, empirical results indicate the superiority of RNNs over Transformers (Tran et al., 2018; Dehghani et al., 2019; Hahn, 2020). In the literature (Sutskever et al., 2013; Dehghani et al., 2019), the inductive bias of RNNs is referred to as the recurrent inductive bias. Here, we distinguish between three main sources of this bias: 1 . Sequentiality: there is an inherent notion of order in the architecture that forces the model to access the next tokens in the input one by one and process them sequentially. 2 . Memory bottleneck: the model has no direct access to past tokens and has to compress all the information from the past into a hidden state, which is accessible when processing a new token. 3 . Recursion: the model recursively applies the same function to the varying input at every step. Transformers (Vaswani et al., 2017), in contrast, process the input in parallel. Although a weak notion of order is encoded by positional embeddings, no explicit assumption is made in the connectivity structure of the architecture.
Moreover , they have a global receptive field and can access all tokens through self-attention . Finally , standard Transformers are not recursive . However , the standard Transformer can be modified to have an architecture with specifications that are similar to RNNs . We provide empirical results to demonstrate the benefits of these different sources of inductive biases of RNNs . For this purpose , we design experiments with variants of Transformers in which we attempt to approximate some of the RNNs ’ assumptions . Task and Models . We study the performance of LSTMs and variants of Transformers on the task of predicting number-agreement between subjects and verbs in English sentences . We investigate the quality of the solutions they converge to when they are trained independently and when they are trained through distillation . We use the subject-verb agreement dataset of Linzen et al . ( 2016 ) , for which the size of the training set is ∼121k examples and the size of the test set is ∼1m . Succeeding at this task is a strong indicator that a model can learn syntactic structures and is therefore proposed by Linzen et al . ( 2016 ) as a proxy for assessing the ability of models to capture hierarchical structure in natural language . It is shown that RNNs have better inductive biases to learn this compared to standard Transformers ( Tran et al. , 2018 ; Dehghani et al. , 2019 ) . In this task , examples are grouped into different levels of difficulty based on the number of “ agreement attractors ” 3 , and distance between the verb and its subject . Hence , we report both micro accuracy ( µ−Accuracy ) and macro accuracy over different groups in terms of distance ( D−Accuracy ) and numbers of attractors ( A−Accuracy ) . Similar to Tran et al . ( 2018 ) , we follow two setups : 1 ) when the learning objective is next word prediction , i.e. , language modeling ( LM ) ; 2 ) when we directly optimize for predicting the verb number , singular or plural , i.e. , classification . In the LM setup , we look at the probabilities predicted when the target of the prediction is the verb of interest , and see whether the probability of the correct form of the verb is higher than the other form ( singular vs plural ) . In the classification setup , the input to the model is a sentence up to the position of the verb of interest and the model predicts whether the verb at that position is singular or plural . In the LM setup , we employ two unidirectional LSTMs with different sizes , LSTM and Small LSTM , and two Transformers , Transformer and Small Transformer . In this setup , corresponding LSTMs and Transformers have roughly the same number of parameters . In the classification setup we compare the following models : ( 1 ) a standard unidirectional LSTM ( sequentiality + memory bottleneck + recursion ) ( 2 ) Transformer : Transformer encoder with a class token ( CLS ) for classification , BERT ( Devlin et al. , 2019 ) style , ( 3 ) Transformer-seq : Transformer encoder with future masking where the classification is done using the representation of the last token4 ( sequentiality ) , ( 4 ) UniversalTransformer-seq : Universal Transformer ( Dehghani et al. , 2019 ) encoder , in which the parameters are shared in depth , with future masking ( sequentiality + recursion ) . Appendix H provides more details on the architectures .
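For reference, the distillation objective typically used in such setups is the soft-label loss of Hinton et al. (2015), whom the paper cites; the sketch below is that generic formulation rather than the paper's exact training code, and the temperature and mixing weight are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, lam=0.5):
    """Knowledge distillation objective: KL between temperature-softened
    teacher and student distributions, mixed with the hard-label loss.

    The T**2 factor keeps soft-label gradient magnitudes comparable
    across temperatures (Hinton et al., 2015).
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return lam * soft + (1.0 - lam) * hard
```

In the LM setup the same loss would be applied per token position, with the teacher's next-word distribution as the soft target.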
The paper investigates an oft-overlooked aspect of knowledge distillation (KD) -- why it works. The paper highlights the ability of KD to transfer not just the soft labels, but also the inductive bias (assumptions inherent in the method, e.g. an LSTM's notion of sequentiality, or a CNN's translational invariance/equivariance) from the teacher, so that the student exhibits, to an extent, the teacher's generalization properties as well. The paper explores doing KD between LSTMs and several versions of Transformers (with varying structural constraints) on a subject-verb-agreement dataset, and between CNNs and MLPs on MNIST and corrupted MNIST. Compared to prior work showing that better teacher performance leads to better student performance, this paper also shows that the student's performance on different aspects becomes more similar to the teacher's: (1) if the teacher is strong on metric A and weak on metric B compared to a student trained on its own, the student can become stronger on A and weaker on B when distilled from that teacher; (2) if the teacher generalizes well to a separate, previously unseen dataset but the student generalizes poorly on its own, after distillation the student can generalize much better than it could possibly learn to on its own.
SP:49e648763ccbfd619a4ee8286a36d85096176cc6
MCM-aware Twin-least-square GAN for Hyperspectral Anomaly Detection
1 INTRODUCTION . A hyperspectral image (HSI) appears as a three-dimensional (3D) data cube, two dimensions of which carry the spatial information of materials, while the other reveals hundreds of contiguous bands that characterize each scene (Yokoya et al., 2012). Among the wealth of HSI interpretation techniques used in practice, anomaly detection has many potential applications in video surveillance, activity recognition, scene understanding, etc. (Lanaras et al., 2015; Eyal et al., 2019; Tu et al., 2020). However, due to insufficient prior information, inaccurate labels, complex scenes, and unbalanced samples, it is costly and sometimes infeasible to accurately detect different types of anomalies in HSI. Consequently, hyperspectral anomaly detection without any priors is a challenging task and is of great importance. Deep learning-based methods have powerful and unique advantages in modeling and characterizing complex data (Stanislaw et al., 2020). A lot of research has appeared in the field of anomaly detection, which can be roughly divided into three categories: supervised, semi-supervised, and unsupervised. However, due to the difficulty of annotating and collecting labeled training data, supervised methods are rarely applied (Grnitz et al., 2013; Raghavendra & Sanjay, 2019). Semi-supervised work aims to break the dilemma between the number of samples and detection performance, but it still requires pure background training samples (Blanchar et al., 2010; Wu & Prasad, 2018). On the one hand, unsupervised-learning-based hyperspectral anomaly detection has become a new trend (Schlegl et al., 2017; Zhang et al., 2019). On the other hand, the detection performance is limited due to the lack of prior knowledge. Therefore, we propose an MCM-aware strategy to adaptively obtain reliable and stable pseudo-labeled prior information to alleviate these problems. Concretely, motivated by the observations mentioned above, we estimate the priors and model the background with multi-scale covariance matrices as the necessary preparation fed into the MTGAN model, which generates discriminative representations with second-order statistics via covariance pooling and is conducive to exploiting the intrinsic spatial-spectral information of HSI. The process of the MCM-aware prior construction strategy is illustrated in Figure 1. Furthermore, though GANs perform well in anomaly detection tasks according to the literature, the real objective of a GAN should be to capture more separable latent features between background and anomalies rather than to minimize the pixel-wise reconstruction error (Gong et al., 2020). The gradient-vanishing problem, which is partly caused by treating the discriminator as a classifier with the sigmoid cross-entropy loss function in regular GANs, is not conducive to background generation and anomaly discrimination. Hence, to facilitate training stability and alleviate the gradient-vanishing problem, we present a twin-least-square loss to perform background modeling in the feature and image domains. Accordingly, we can solve the gradient-vanishing problem and enhance the representation while aiming directly at the reconstruction of each pixel. In light of the difficulty of separating anomalies from background, we also impose an anomaly rejection loss to avoid anomaly contamination in the background estimation.
In this way, the network reconstructs closely resembling background dictionaries but dramatically changed anomalies, thereby increasing the degree of difference between them and yielding better detection accuracy. To verify the effectiveness of the proposed method, we implement evaluations on five public HSI data sets. For MTGAN, the average AUC scores of $(P_d, P_f)$ and $(P_f, \tau)$ are 0.99809 and 0.00518, respectively, which outperform previous state-of-the-art methods. To summarize, our contributions are mainly three-fold: • To solve the problem of insufficient samples that previous methods suffer from, we propose an MCM-aware strategy to reliably and adaptively generate prior dictionaries. Specifically, we calculate a series of multi-scale covariance matrices, taking advantage of second-order statistics to naturally model the distribution with integrated spectral and spatial information. • The twin-least-square loss is introduced into both the feature and image domains to overcome the gradient-vanishing problem. Meanwhile, the generative ability and training stability can be improved, which fits the high-dimensional and complex characteristics of HSI data. • To further reduce the false alarm rate, we design a novel anomaly rejection loss to enlarge the distribution diversity between background regions and anomalies, aiming to distinguish background from anomalies. Experimental results illustrate that the AUC score of $(P_f, \tau)$ for MTGAN is one order of magnitude lower than that of other state-of-the-art methods. 2 RELATED WORK . Among traditional methods, the RX method assumes that each spectral channel is Gaussian-distributed and that each pixel is $L$-dimensional multivariate Gaussian distributed (Guo et al., 2014; Luo et al., 2019; Ahmed et al., 2020). As a non-RX-based method, the ADLR method obtains abundance vectors by spectral decomposition and constructs a dictionary based on mean-value clustering of the abundance vectors (Qu et al., 2018). The PAB-DC model, imposed with low-rank and sparse constraints, considers the homogeneity of the background and the sparsity of anomalies to construct the dictionaries (Huyan et al., 2019). The emerging typical algorithm AED removes the background mainly by attribute filtering and difference operations. Additionally, the LSDM-MoG method combines mixed noise models and a low-rank background to characterize complex distributions more accurately (Li et al., 2020). However, these conventional methods are based on a single-scale Gaussian assumption and cannot represent complex and high-dimensional data sets well, leading to the exploration of deep learning-based methods (Ben et al., 2014). In the deep auto-encoding Gaussian mixture model (DAGMM) (Zong et al., 2018), an autoencoder (AE) is introduced to generate a low-dimensional representation and a reconstruction error for each input data point as the input of a Gaussian Mixture Model (GMM). GANs have attracted a lot of attention for providing a generative model that minimizes the distance between the training data distribution and the generated samples without explicitly defining a parametric function (Goodfellow et al., 2014; Yuan et al., 2019; Gu et al., 2020). A novel single-objective generative adversarial active learning (SO-GAAL) method for outlier detection treats anomaly detection as a binary-classification problem by sampling potential outliers from a uniform reference distribution (Liu et al., 2019).
Nevertheless, these deep learning-based methods cannot balance good performance against limited prior information. Moreover, their network structures are not specifically designed for hyperspectral anomaly detection. Therefore, we propose MTGAN for hyperspectral anomaly detection, for the first time, to approach the performance of supervised methods while removing the limitation on training samples. 3 PROBLEM STATEMENT AND FRAMEWORK. In this work, we elaborate on MTGAN for hyperspectral anomaly detection, as shown in Figure 2. The framework has three key components: 1) the MCM module for background dictionary construction; 2) the twin-least-square GAN module for background reconstruction; and 3) joint learning with the anomaly rejection loss. The modules are cascaded for hyperspectral anomaly detection. 3.1 CONSTRUCTING THE OVERALL MODEL. We denote the HSI as $H \in \mathbb{R}^{h \times w \times d}$, where $d$ is the number of spectral bands and $h$ and $w$ are the spatial dimensions of the data. For convenience, as the input of the network, we reshape the 3-D cube $H$ into a 2-D matrix $H = \{h_i\}_{i=1}^{n} \in \mathbb{R}^{d \times n}$, where each column of $H$ is the spectral vector of one pixel and $n = h \times w$ is the number of pixels. The HSI data matrix is decomposed into two components, background and anomaly, denoted by $Y = [y_1, y_2, \dots, y_{n_B}]$ and $X = [x_1, x_2, \dots, x_{n_A}]$ with $n_A + n_B = n$, where $y_i$ and $x_i$ are the $i$-th column vectors. Based on the defined background and anomaly dictionaries, we formulate the anomaly detection objective as
$$\mathcal{L}(Y, X) = \mathcal{L}_{\mathrm{TLS}}(Y) + \mathcal{L}_{\mathrm{auto}}(Y, \hat{Y}) + \mathcal{L}_{\mathrm{enlarge}}(\hat{Y}, X) = \mathcal{L}_{\mathrm{LS}_1}(Z) + \mathcal{L}_{\mathrm{LS}_2}(Y, \hat{Y}) + \big\|Y - \hat{Y}\big\| - \alpha \big\|\hat{Y} - X\big\|, \quad (1)$$
$$\text{s.t.} \quad Z = \mathrm{Enc}(Y; \theta_G), \quad \hat{Y} = \mathrm{Dec}(Z; \theta_D), \quad \alpha \sim \mathcal{N}(0, I),$$
where $\|Y - \hat{Y}\|$ denotes the reconstruction error of the basic AE network. The twin-least-square losses added for the two discriminators are denoted by $\mathcal{L}_{\mathrm{LS}_1}$ and $\mathcal{L}_{\mathrm{LS}_2}$, which together make up the whole twin-least-square loss $\mathcal{L}_{\mathrm{TLS}}$. $\mathcal{L}_{\mathrm{auto}}$ and $\mathcal{L}_{\mathrm{enlarge}}$ represent the spectral reconstruction loss and the background-anomaly separability loss, respectively. $\mathrm{Enc}$ and $\mathrm{Dec}$ denote the encoder and decoder, and $Z$ is the output of the encoder. The encoding-decoding process can be written as
$$\hat{Y} = \sigma(W W^{T} Y + B), \quad (2)$$
where $\sigma(\cdot)$ is the activation function, $\hat{Y}$ is the network output, $W$ is the weight of the encoder, and $B$ is the bias of the whole network. 3.2 MCM-AWARE PRIOR FOR BACKGROUND CONSTRUCTION. Inspired by estimating the Mahalanobis distance between test pixels and constructed pixels at a single scale, we generate pseudo priors for GAN training by constructing multi-scale covariance maps. We can thus satisfy the requirement for sufficient prior information while exploiting both the spatial and the spectral information of the HSI. The whole pseudo-labeling generation process can be expressed as
$$(Y, X) = f_{\mathrm{MCM}}(H), \quad (3)$$
where $f_{\mathrm{MCM}}(\cdot)$ denotes the nonlinear learning process of the MCM strategy, and $Y$ and $X$ are the background and anomaly dictionaries, respectively. 3.2.1 MULTI-SCALE LOCALIZING. For each central pixel, we first perform multi-scale localization based on the Euclidean distance with a classical classifier, i.e., K-nearest neighbors (KNN), to obtain local pixel cubes at different scales. We then generate a series of gradually enlarging cubes at these scales.
Each cube is flattened into a vector, and the covariance matrix is then computed over these vectors. 3.2.2 GENERATING COVARIANCE MAPS. For a central pixel $h_i$, taking the scale $R \times R$ as an example, the covariance map of $h_i$ at this fixed scale is extracted as
$$C_k = \frac{1}{R^2 - 1} \sum_{i=1}^{R^2} (h_i - \mu)(h_i - \mu)^{T} \in \mathbb{R}^{L \times L}, \quad (4)$$
where $\mu$ is the mean of the set of input HSI vectors $\{h_i \mid i = 1, 2, \cdots, R^2\}$, and $\{h_i \mid i = 2, \cdots, R^2\}$ are the corresponding adjacent pixels within the $R \times R$ window. In addition, $M$ scales $R_k$, $k = 1, \cdots, M$, are taken into account. The covariance maps at the different scales, denoted $C_k$, $k = 1, \cdots, M$, make up the covariance pool used to construct the background.
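To make the MCM construction concrete, the following is a minimal NumPy sketch of the covariance-pool computation in Eq. (4). The window sizes, the array layout (bands first), the border handling, and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def covariance_map(hsi, row, col, R):
    """Covariance of the R x R neighborhood around (row, col), as in Eq. (4).

    hsi: array of shape (L, h, w) -- L spectral bands.
    Returns an (L, L) covariance matrix.
    """
    L, h, w = hsi.shape
    r = R // 2
    # Clip the window at image borders (one simple convention).
    patch = hsi[:, max(0, row - r):min(h, row + r + 1),
                   max(0, col - r):min(w, col + r + 1)]
    vecs = patch.reshape(L, -1)              # each column is one pixel vector
    mu = vecs.mean(axis=1, keepdims=True)    # window mean spectrum
    centered = vecs - mu
    n = vecs.shape[1]
    return centered @ centered.T / (n - 1)   # (L, L), unbiased estimate

def covariance_pool(hsi, row, col, scales=(3, 5, 7)):
    """Multi-scale covariance pool {C_k} for one central pixel."""
    return [covariance_map(hsi, row, col, R) for R in scales]

# Usage: pool = covariance_pool(np.random.rand(50, 64, 64), row=10, col=20)
```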
In this paper, the authors proposed the MTGAN framework, a GAN-based approach to the task of anomaly detection in hyperspectral images. The main idea behind this work is to exploit twin-least-square loss to perform background modeling in feature and image domains to alleviate the gradient vanishing problem of the previous GAN-based anomaly detection methods. Specifically, they proposed i) an MCM-aware strategy to construct the multi-scale priors, ii) a twin-least-square loss on GAN for training stabilization, and iii) an anomaly rejection loss for background estimation. The experiments on multiple benchmarks show the superiority of the MTGAN to the state of the art hyperspectral anomaly detection methods.
SP:a2081fef3126e03544d6c62d6b4b0e15f79d1cc6
Neural Dynamical Systems: Balancing Structure and Flexibility in Physical Prediction
1 INTRODUCTION. The use of function approximators for dynamical system modeling has become increasingly common. This has proven quite effective when a substantial amount of real data is available relative to the complexity of the model being learned (Chua et al., 2018; Janner et al., 2019; Chen et al., 1990). These learned models are used for downstream applications such as model-based reinforcement learning (Nagabandi et al., 2017; Ross & Bagnell, 2012) or model-predictive control (MPC) (Wang & Ba, 2019). Model-based control techniques are exciting because improved controllers may let us solve new classes of problems. Problems like dexterous robotic manipulation (Nagabandi et al., 2019), game-playing (Schrittwieser et al., 2019), and nuclear fusion are increasingly being approached with model-based reinforcement learning techniques. However, learning a dynamics model using, for example, a deep neural network can require large amounts of data. This is especially problematic when optimizing real physical systems, where data collection can be expensive. As an alternative to data-hungry machine learning methods, there is also a long history of fitting models to a system using techniques from system identification, some of which incorporate prior knowledge about the system drawn from human understanding (Nelles, 2013; Ljung et al., 2009; Sohlberg & Jacobsen, 2008). These models, especially in the gray-box setting, are typically data-efficient and often contain interpretable model parameters. However, they are not well suited to situations where the given prior knowledge is approximate or incomplete. They also do not generally adapt to trajectories drawn from a variety of parameter settings at test time. This is an especially crucial point, as many systems of interest exhibit path-dependent dynamics, which we aim to recover on the fly. In sum, system identification methods are sample-efficient but inflexible under changing parameter settings and incomplete or approximate knowledge; conversely, deep learning methods are more flexible at the cost of many more samples. In this paper, we aim to solve both of these problems by biasing the model class towards our physical model of the dynamics. Physical models of dynamics are often given as systems of ordinary differential equations (ODEs), which are ubiquitous and may have free parameters that specialize them to a given physical system. We develop a model that uses neural networks to predict the free parameters of an ODE system from the previous timesteps, as well as residual terms added to each component of the system. To train this model, we integrate over the ODE and backpropagate gradients from the prediction error. This particular combination of prior knowledge and deep learning components is effective in quickly learning the dynamics and allows us to adjust system behavior in response to a wide variety of dynamic parameter settings. Even when the dynamical system is only partially understood and only a subset of the ODEs is known, our method still enjoys these benefits. We apply our algorithm to learning models in three synthetic settings: a generic ballistics model, the Lorenz system (Lorenz, 1963), and a generalized cartpole problem, which we also use for control. We also learn a high-level model of plasma dynamics for a fusion tokamak from real data.
The contributions of this paper are • We introduce Neural Dynamical Systems ( NDS ) , a new class of model for learning dynamics that can incorporate prior knowledge about the system . • We show that these models naturally handle the issue of partial or approximate prior knowledge , irregularly spaced data , and system dynamics that change across instantiations , which generalizes the typical system identification setting . We also show that these advantages extend to control settings . • We demonstrate this model ’ s effectiveness on a real dynamics problem relevant to nuclear fusion and on synthetic problems where we can compare against a ground truth model . 2 RELATED WORK . System Identification and Deep Learning with Structure There is a long tradition of forecasting physical dynamics with either machine learning or techniques based on domain knowledge of the dynamics , especially in the field of system identification , where Ljung ( 2010 ) , Schoukens & Ljung ( 2019 ) and Cressie & Wikle ( 2015 ) are good summaries . Often , this space is discussed as a spectrum from a purely prior-knowledge-based system ( white-box ) to a purely data-driven system ( black-box ) with several shades of gray in between . White-box models use prior knowledge to precisely give the relationship between quantities of interest over time and there is extensive literature on solving them ( Brenan et al. , 1995 ) . ‘ Shades of gray ’ may distinguish between levels of prior knowledge or how equations cover subsets of the state space ( Ljung , 2010 ) . Other prior work focuses on online parameter estimation ( Vahidi et al. , 2005 ) , but this relies on an ongoing trajectory through the system and is difficult to use in our setting . In nonlinear black-box settings , there are a variety of techniques used to solve system identification models . Volterra series , a generalization of Taylor series which respects dependency on the past , have been used for system identification ( Rugh , 1981 ) . Block models such as the Hammerstein ( 1930 ) and Weiner ( Billings , 1980 ) models and their combination can also identify systems . Feedforward and recurrent neural networks have been widely used to model dynamical systems ( Chua et al. , 2018 ; Nagabandi et al. , 2017 ; Hochreiter & Schmidhuber , 1997 ) , with additional constraints on stability ( Manek & Kolter , 2020 ) or the Hamiltonian ( Chen et al. , 2019 ) and many others added . Nonlinear autoregressive moving average models with exogenous variables ( NARMAX ) have also been used widely to model dynamical systems and this class is a superset of nearly everything else discussed ( Brunton et al. , 2015 ; Rahrooh & Shepard , 2009 ) . Broadly , none of these algorithms are well-suited to a setting where the dynamic parameters of the system change across rollouts . There have also been several approaches for including physical structure in deep models . Raissi et al . ( 2019 ) use automatic partial derivative computation to force a neural network to fit a given ODE or PDE solution . de Avila Belbute-Peres et al . ( 2018 ) uses a linear complementarity problem to differentiate through 2d physics simulations however their method is not general to more dimensions or other types of problems besides mechanics . Cranmer et al . ( 2019 ) uses graph networks to discover physical laws . Chen et al . ( 2019 ) , Sanchez-Gonzalez et al . ( 2019 ) and Cranmer et al . 
(2020) force the network to respect Hamiltonian and Lagrangian constraints, but without specific problem data on the system. Psichogios & Ungar (1992) predict physical parameters for a given ODE model, and Rico-Martinez et al. (1994) predict residuals. Thompson & Kramer (1994) similarly build a hybrid parameter-prediction function into a dynamical model. These last three works are especially similar to ours, though they use tiny networks, are problem-specific in their setup, and do not take advantage of backpropagation through a numerical ODE solver. Neural Ordinary Differential Equations As most numerical ODE solvers are algorithms composed of differentiable operations, it has in principle always been possible to backpropagate through the steps of these solvers, dating back at least to Runge (1895). However, since each step of the solver calls the derivative function, naïve backpropagation incurs an O(n) memory cost, where n is the number of derivative calls made by the solver. Historically, Pontryagin (2018) and, recently, Chen et al. (2018) demonstrated that by computing gradients with the adjoint sensitivity method, the memory complexity of backpropagating through a family of ODE solvers can be reduced to O(1) for a fixed network, as opposed to the naïve O(n). However, this work only used generic neural networks as the derivative function and did not consider dynamics modeling. They also provide a PyTorch package, which we build on in our work. There has been some work using neural ordinary differential equations to solve physical problems. Portwood et al. (2019) used a fully-connected neural ODE with an RNN encoder and decoder to model Navier-Stokes problems. Rudy et al. (2019) used a neural network integrated with a Runge-Kutta method for noise reduction and irregularly sampled data. There has also been work on learning the structure of dynamical systems, first with a convolutional warping scheme inspired by advection-diffusion PDEs (de Bezenac et al., 2018), then with a Neural ODE forced to respect boundary conditions and a partial observation mechanism (Ayed et al., 2019). Machine Learning for Nuclear Fusion As far back as 1995, van Milligen et al. (1995) showed that by approximating the differential operator with a (single-layer, in their case) neural network, one could fit simple cases of the Grad-Shafranov equation for magnetohydrodynamic equilibria. Recent work has shown that plasma dynamics are amenable to neural network prediction. In particular, Kates-Harbeck et al. (2019) used a convolutional and LSTM-based architecture to predict possible plasma disruptions (when a plasma instability grows large and causes a loss of plasma containment and pressure). There has also been work in plasma control: a neural network model of the neutral beam injection for the DIII-D tokamak has been deployed to diagnose the effect of controls on shots conducted at the reactor (Boyer et al., 2019b). Additionally, Boyer et al. (2019a) used classic control techniques and a simpler model of the dynamics to develop a controller that holds characteristics of the tokamak plasma at desired levels. Others have used contextual Bayesian optimization to choose single-state controls which direct the plasma to desirable states (Char et al., 2019; Chung et al., 2020). 3 PROBLEM SETTING.
Typically, a dynamical system $\dot{x} = f_\phi(x, u, t)$ with parameters $\phi$ is the conventional model for system identification problems. Here, the state is $x \in X$, the control is $u \in U$, and time is $t \in \mathbb{R}$. The objective is to predict future states given past states, past and future controls, and prior knowledge of the form of $f$. We denote by $x(\phi, t, u, x_0) = x_0 + \int_0^t f_\phi(x, u, \tau)\, d\tau$ the state obtained by integrating the dynamical system $f$ up to time $t$. In this work we consider a more general setting and address prediction and control over a class of dynamical systems, defined as the set $\{\dot{x} = f_\phi(x, u, t) \mid \phi \in \Phi\}$, where $\Phi$ is the space of parameters of the dynamical system (e.g., a spring constant or terminal velocity). We can generate a trajectory from a class by sampling $\phi \sim P(\Phi)$ for some distribution $P$ and choosing initial conditions and controls. In real data, we can view nature as choosing, but not disclosing, $\phi$. For a particular example $j$, we sample $\phi \sim P(\Phi)$ and $x_0 \sim P(X_0)$, and are given controls $u$, indexed as $u(t)$, and input data $\{x(\phi, t_i, u, x_0)\}_{i=0}^{T}$ during training. At test time, we give a shorter prefix time series $\{x(\phi, t_i, u, x_0)\}_{i=0}^{T'}$ but assume access to future controls. The prediction objective for a class of systems, over $N$ examples and timesteps $\{t_i\}_{i=T'+1}^{T}$, is then
$$\hat{x} = \arg\min_{\hat{x}} \; \mathbb{E}_{\phi \sim P(\Phi),\, x_0 \sim P(X_0)} \left[ \sum_{i=T'+1}^{T} \big\| x(\phi, t_i, u, x_0) - \hat{x}_{t_i} \big\|_2^2 \right]. \quad (1)$$
This objective differs from the traditional one in that, implicitly, $\phi$ must be identified for each trajectory from the problem data in order to predict the data generated by $f_\phi$. Similarly, the control problem is
$$u^{*} = \arg\min_{u} \; \mathbb{E}_{\phi \sim P(\Phi),\, x_0 \sim P(X_0)} \left[ \int_0^t c(u(t), x(t))\, dt \right], \quad \text{s.t.} \quad x(t) = x_0 + \int_0^t f_\phi(x, u, t)\, dt \quad (2)$$
for some cost functional $c$. We primarily explore the prediction problem in this setting but, as secondary considerations, we also explore robustness to noise, the ability to handle irregularly spaced input data, and the ability to recover the parameters $\phi$ that generated the original trajectories. We will also consider the control problem in a simple setting.
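As a concrete illustration of how a parameter- and residual-predicting dynamics model can be integrated and trained end-to-end, here is a minimal PyTorch sketch. It is a simplified reading of the setup above, not the authors' implementation: the damped-oscillator ODE family, the network sizes, and the choice to hold $\phi$ fixed over a rollout are all assumptions.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # differentiable ODE solvers (Chen et al., 2018)

class NeuralDynamicalSystem(nn.Module):
    """Sketch: predict ODE parameters phi from a trajectory prefix, then
    integrate the known ODE form plus a learned residual term."""

    def __init__(self, state_dim=2, phi_dim=1, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(state_dim, hidden, batch_first=True)
        self.phi_head = nn.Linear(hidden, phi_dim)       # predicts phi
        self.residual = nn.Sequential(                   # unmodeled dynamics
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim))

    def known_ode(self, x, phi):
        # Assumed example family: damped oscillator, phi = damping coefficient.
        pos, vel = x[..., :1], x[..., 1:]
        return torch.cat([vel, -pos - phi * vel], dim=-1)

    def forward(self, prefix, times):
        _, h = self.encoder(prefix)                      # encode the prefix
        phi = self.phi_head(h[-1])                       # fixed per rollout
        x0 = prefix[:, -1]
        f = lambda t, x: self.known_ode(x, phi) + self.residual(x)
        return odeint(f, x0, times)                      # (T, batch, state_dim)

# Training step (sketch): backprop the prediction error through the solver.
model = NeuralDynamicalSystem()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
prefix = torch.randn(8, 10, 2)                           # batch of prefixes
times = torch.linspace(0.0, 1.0, 5)
target = torch.randn(5, 8, 2)                            # future states
opt.zero_grad()
loss = ((model(prefix, times) - target) ** 2).mean()
loss.backward(); opt.step()
```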
The paper proposes a neural-network architecture for modeling dynamical systems that incorporates prior domain knowledge of the system's dynamics. More specifically, the main contributions are the mechanisms for incorporating such knowledge, in terms of fully or partially known structure (differential equations) of the system, which in turn positively affects the modeling performance. The results from the experimental evaluation (on 3 synthetic and one real-world experiments), in general, show that the proposed Neural Dynamical Systems (NDS), and in particular the ones trained with partial prior knowledge, have better performance than several standard benchmarks (such as NeuralODEs, LSTMs, Sparse Regression etc.).
SP:f0f3694b84631cb0ebb5cd4c3510f6279526a28c
Learned Threshold Pruning
1 INTRODUCTION . Deep neural networks ( DNNs ) have provided state-of-the-art solutions for several challenging tasks in many domains such as computer vision , natural language understanding , and speech processing . With the increasing demand for deploying DNNs on resource-constrained edge devices , it has become even more critical to reduce the memory footprint of neural networks and also to achieve power-efficient inference on these devices . Many methods in model compression Hassibi et al . ( 1993 ) ; LeCun et al . ( 1989 ) ; Han et al . ( 2015b ) ; Zhang et al . ( 2018 ) , model quantization Jacob et al . ( 2018 ) ; Lin et al . ( 2016 ) ; Zhou et al . ( 2017 ) ; Faraone et al . ( 2018 ) and neural architecture search Sandler et al . ( 2018 ) ; Tan & Le ( 2019a ) ; Cai et al . ( 2018 ) ; Wu et al . ( 2019 ) have been introduced with these goals in mind . Neural network compression mainly falls into two categories : structured and unstructured pruning . Structured pruning methods , e.g. , He et al . ( 2017 ) ; Li et al . ( 2017 ) ; Zhang et al . ( 2016 ) ; He et al . ( 2018 ) , change the network ’ s architecture by removing input channels from convolutional layers or by applying tensor decomposition to the layer weight matrices whereas unstructured pruning methods such as Han et al . ( 2015b ) ; Frankle & Carbin ( 2019 ) ; Zhang et al . ( 2018 ) rely on removing individual weights from the neural network . Although unstructured pruning methods achieve much higher weight sparsity ratio than structured pruning , unstructured is thought to be less hardware friendly because the irregular sparsity is often difficult to exploit for efficient computation Anwar et al . ( 2017 ) . However , recent advances in AI accelerator design Ignatov et al . ( 2018 ) have targeted support for highly efficient sparse matrix multiply-and-accumulate operations . Because of this , it is getting increasingly important to develop state-of-the-art algorithms for unstructured pruning . Most unstructured weight pruning methods are based on the assumption that smaller weights do not contribute as much to the model ’ s performance . These pruning methods iteratively prune the weights that are smaller than a certain threshold and retrain the network to regain the performance lost during pruning . A key challenge in unstructured pruning is to find an optimal setting for these pruning thresholds . Merely setting the same threshold for all layers may not be appropriate because the distribution and ranges of the weights in each layer can be very different . Also , different layers may have varying sensitivities to pruning , depending on their position in the network ( initial layers versus final layers ) or their type ( depth-wise separable versus standard convolutional layers ) . The best setting of thresholds should consider these layer-wise characteristics . Many methods Zhang et al . ( 2018 ) ; Ye et al . ( 2019 ) ; Manessi et al . ( 2018 ) propose a way to search these layer-wise thresholds but become quite computationally expensive for networks with a large number of layers , such as ResNet50 or EfficientNet . In this paper , we propose Learned Threshold Pruning ( LTP ) to address these challenges . Our proposed method uses separate pruning thresholds for every layer . We make the layer-wise thresholds trainable , allowing the training procedure to find optimal thresholds alongside the layer weights during finetuning . 
An added benefit of making these thresholds trainable is that it makes LTP fast , and the method converges quickly compared to other iterative methods such as Zhang et al . ( 2018 ) ; Ye et al . ( 2019 ) . LTP also achieves high compression on newer networks Tan & Le ( 2019a ) ; Sandler et al . ( 2018 ) ; Tan & Le ( 2019b ) with squeeze-excite Hu et al . ( 2018 ) and depth-wise convolutional layers Chollet ( 2017 ) . Our key contributions in this work are the following : • We propose a gradient-based algorithm for unstructured pruning , that introduces a learnable threshold parameter for every layer . This threshold is trained jointly with the layer weights . We use soft-pruning and soft L0 regularization to make this process end-to-end trainable . • We show that making layer-wise thresholds trainable makes LTP computationally very efficient compared to other methods that search for per-layer thresholds via an iterative pruning and finetuning process , e.g. , LTP pruned ResNet50 to 9.11x in just 18 epochs with 12 additional epochs of fine-tuning , and MixNet-S to 2x in 17 epochs without need for further finetuning . • We demonstrate state-of-the-art compression ratios on newer architectures , i.e. , 1.33× , 3× and 2× for MobileNetV2 , EfficientNet-B0 and MixNet-S , respectively , which are already optimized for efficient inference , with less than 1 % drop in Top-1 accuracy . • The proposed method provides a trace of checkpoints with varying pruning ratios and accuracies . Because of this , the user can choose any desired checkpoint based on the sparsity and performance requirements for the desired application . 2 RELATED WORK . Several methods have been proposed for both structured and unstructured pruning of deep networks . Methods like He et al . ( 2017 ) ; Li et al . ( 2017 ) use layer-wise statistics and data to remove input channels from convolutional layers . Other methods apply tensor decompositions on neural network layers , Denton et al . ( 2014 ) ; Jaderberg et al . ( 2014 ) ; Zhang et al . ( 2016 ) apply SVD to decompose weight matrices and Kim et al . ( 2015 ) ; Lebedev et al . ( 2014 ) apply tucker and cp-decompositions to compress . An overview of these methods can be found in Kuzmin et al . ( 2019 ) . These methods are all applied after training a network and need fine-tuning afterwards . Other structured methods change the shape of a neural network while training . Methods like Bayesian Compression Louizos et al . ( 2017 ) , VIBnets Dai et al . ( 2018 ) and L1/L0-regularization Srinivas et al . ( 2017 ) ; Louizos et al . ( 2018 ) add trainable gates to each layer to prune while training . In this paper we consider unstructured pruning , i.e . removing individual weights from a network . This type of pruning was already in use in 1989 in the optimal brain damage LeCun et al . ( 1989 ) and optimal brain surgeon Hassibi et al . ( 1993 ) papers , which removed individual weights in neural networks by use of Hessian information . More recently , Han et al . ( 2015a ) used the method from Han et al . ( 2015b ) as part of their full model compression pipeline , removing weights with small magnitudes and fine-tuning afterwards . This type of method is frequently used for pruning , and has recently been picked up for finding DNN subnetworks that work just as well as their mother network in Frankle & Carbin ( 2019 ) ; Zhou et al . ( 2019 ) . Another recent application of Han et al . ( 2015b ) is by Renda et al . 
(2020), where weight and learning-rate rewinding schemes are used to achieve competitive pruning performance. These methods, however, are computationally intensive, requiring many hundreds of epochs of re-training. Finally, papers such as Molchanov et al. (2017) and Ullrich et al. (2017) apply a variational Bayesian framework to network pruning. Other methods similar to our work are Zhang et al. (2018) and Ye et al. (2019). These papers apply the alternating method of Lagrange multipliers to pruning, which slowly coaxes a network into pruning weights via an L2-regularization-like term. One problem with these methods is that they are time-intensive; another is that they need manual tweaking of compression rates for each layer. Our method removes these restrictions and achieves comparable compression results at a fraction of the computational burden, without any need to set per-layer pruning ratios manually. Kusupati et al. (2020) and Manessi et al. (2018) learn per-layer thresholds automatically using the soft-thresholding operator or a close variant of it. However, they rely on L1 and/or L2 regularization which, as shown in Section 3.2, is inefficient in networks with batch normalization (Ioffe & Szegedy, 2015). He et al. (2018) use reinforcement learning to set layer-wise prune ratios for structured pruning, whereas we learn the pruning thresholds during the fine-tuning process. 3 METHOD. LTP comprises two key ideas, soft-pruning and soft L0 regularization, detailed in Sections 3.1 and 3.2, respectively. The full LTP algorithm is then presented in Section 3.3. 3.1 SOFT PRUNING. The main challenge in learning per-layer thresholds during training is that the pruning operation is not differentiable. More precisely, consider an $N$-layer DNN where the weights of the $l$-th convolutional or fully-connected layer are denoted by $\{w_{kl}\}$, with $k$ indexing the weights within the layer. In magnitude-based pruning (Han et al., 2015b), the relation between layer $l$'s uncompressed and pruned weights is given by
$$v_{kl} = w_{kl} \times \mathrm{step}(w_{kl}^2 - \tau_l), \quad (1)$$
where $\tau_l$ denotes the layer's pruning threshold and $\mathrm{step}(\cdot)$ denotes the Heaviside step function. We name this scheme hard-pruning. Since the step function is not differentiable, (1) cannot be used to learn thresholds through back-propagation. To get around this problem, during training LTP replaces (1) with soft-pruning,
$$v_{kl} \triangleq w_{kl} \cdot \mathrm{sigm}\!\left(\frac{w_{kl}^2 - \tau_l}{T}\right), \quad (2)$$
where $\mathrm{sigm}(\cdot)$ denotes the sigmoid function and $T$ is a temperature hyper-parameter. Because (2) is differentiable, back-propagation can now be applied to learn both the weights and the thresholds simultaneously. Defining soft-pruning as in (2) has another advantage. Note that if $w_{kl}^2$ is much smaller than $\tau_l$ (i.e., $\tau_l - w_{kl}^2 \gg T$), $w_{kl}$'s soft-pruned version is almost zero and it is pruned away, whereas if it is much larger (i.e., $w_{kl}^2 - \tau_l \gg T$), $v_{kl} \approx w_{kl}$. Weights falling within the transitional region of the sigmoid (i.e., $|w_{kl}^2 - \tau_l| \sim T$), however, may end up pruned or kept depending on their contribution to optimizing the loss function. If they are important, the weights are pushed above the threshold through minimization of the classification loss; otherwise, they are pulled below the threshold through regularization.
This means that although LTP utilizes pruning thresholds similarly to previous methods, it is not entirely a magnitude-based pruning method, as it allows the network to keep important weights that were initially small and to remove some unimportant weights that were initially large, cf. Figure 1 (left). Continuing from equation (2), it follows that
$$\frac{\partial v_{kl}}{\partial \tau_l} = -\frac{1}{2} \cdot \sigma_T(w_{kl}) \quad \text{and} \quad \frac{\partial v_{kl}}{\partial w_{kl}} = \mathrm{sigm}\!\left(\frac{w_{kl}^2 - \tau_l}{T}\right) + w_{kl} \cdot \sigma_T(w_{kl}), \quad (3)$$
with
$$\sigma_T(w_{kl}) \triangleq \frac{2 w_{kl}}{T} \cdot \mathrm{sigm}\!\left(\frac{w_{kl}^2 - \tau_l}{T}\right) \times \left(1 - \mathrm{sigm}\!\left(\frac{w_{kl}^2 - \tau_l}{T}\right)\right). \quad (4)$$
The $\sigma_T(\cdot)$ function also appears in subsequent equations and merits some discussion. First note that $\sigma_T(w_{kl})$ as given by (4) is the derivative of $\mathrm{sigm}((w_{kl}^2 - \tau_l)/T)$ with respect to $w_{kl}$. Since the latter approaches the step function (located at $w_{kl}^2 = \tau_l$) in the limit $T \to 0$, the former, i.e., $\sigma_T(w_{kl})$, approaches a Dirac delta function: its value approaches zero everywhere except over the transitional region, where it is inversely proportional to the region's width, i.e.,
$$\sigma_T(w_{kl}) \sim \frac{1}{T}, \quad \text{for } |w_{kl}^2 - \tau_l| \sim T. \quad (5)$$
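To make the soft-pruning mechanism concrete, below is a minimal PyTorch sketch of a layer with a learnable threshold, written directly from equations (1)-(2). The module structure, initialization values, and hyper-parameter choices are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SoftPrunedLinear(nn.Module):
    """Linear layer whose weights are soft-pruned per Eq. (2):
    v = w * sigmoid((w^2 - tau) / T), with tau learned jointly with w."""

    def __init__(self, in_features, out_features, temperature=1e-3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.tau = nn.Parameter(torch.tensor(1e-4))   # learnable threshold
        self.T = temperature                           # temperature hyper-parameter

    def soft_pruned_weight(self):
        gate = torch.sigmoid((self.weight ** 2 - self.tau) / self.T)
        return self.weight * gate                      # differentiable in w and tau

    def forward(self, x):
        return x @ self.soft_pruned_weight().t()

    def hard_prune(self):
        """After training, apply the hard step of Eq. (1) for inference."""
        with torch.no_grad():
            mask = (self.weight ** 2 > self.tau).float()
            self.weight.mul_(mask)
        return mask.mean()                             # fraction of kept weights

# A soft L0 term (a relaxed count of unpruned weights) can be added to the loss:
def soft_l0(layer):
    return torch.sigmoid((layer.weight ** 2 - layer.tau) / layer.T).sum()
```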
The paper introduces a new type of soft threshold operator in conjunction with appropriate weight regularization that can be used in the context of neural network pruning to obtain sparse, performant networks from pre-trained, dense networks. The main idea is to replace the Heaviside step function that occurs in "hard threshold" pruning, which is non-differentiable, by a sigmoid function that can be differentiated and thus enables the efficient training/optimization of relevant pruning parameters. Pruning is hereby performed on a per-layer basis by training a regularized per-layer threshold.
SP:b7e2096e6070edf0d080bcf5113e469563f98dc2
Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks
1 INTRODUCTION. Imitation learning and Reinforcement Learning (RL) often intersect when the goal is to imitate with incomplete information, for example, when imitating from motion capture (mocap) data or video. In this case, the agent needs to search for actions that will result in observations similar to the expert's. However, formulating a metric that provides a reasonable distance between the agent and the expert is difficult. Robots and people plan using internal, abstract pose representations over which reasonable distances exist; however, when animals observe others performing tasks, typically only visual information is available. Using distances in pose space is ill-suited for imitation, as changing some features can result in drastically different visual appearance. To understand how to perform tasks from visual observation, a mapping or transformation is needed that allows distances to be minimized in appearance space. Even with a method to transform observations into a similar pose space, each person has different capabilities. Because of this, people are motivated to learn transformations in space and time under which they can reproduce a behaviour to the best of their own ability. How can we learn a representation similar to this latent space? An essential property of demonstrations is their sequential and causal nature: there is both an ordering and a speed at which a demonstration is performed. Most methods require the agent to imitate the temporal and spatial structure at the same time, creating a potentially narrow solution space. When the agent becomes desynchronized with the demonstration, it receives a low reward. Consider the case where a robot has learned to stand while its goal is to walk. Standing is spatially close to the demonstration, and actions that help the robot stand, as opposed to falling, should be encouraged. How can such latent goals be encouraged? Consider a phase-based reward function $r = R(s, a, \phi)$, where $\phi$ indexes time in the demonstration and $s$ and $a$ are the agent's state and action. As the demonstration timing $\phi$, often controlled by the environment, and the agent diverge, the agent receives less reward, even if it is visiting states that exist elsewhere in the demonstration. Determining whether an agent is displaying out-of-phase behaviour can be understood as finding the phase that would result in the highest reward, $\phi' = \arg\max_{\phi} R(s, a, \phi)$; the distance $\phi' - \phi$ then indicates how far away in time, or out of phase, the agent is. This phase-independent form can be seen as a kind of reward shaping. However, this naive description ignores the ordered nature of demonstrations. What is needed is a metric that rewards behaviour that is in the proper order, independent of phase. This ordering motivates the creation of a recurrent distance metric designed to understand the context between two motions: for example, does this motion look like a walk, not, does this motion look precisely like that walk. Our proposed Visual Imitation with Reinforcement Learning (VIRL) method uses Recurrent Siamese Networks (RSNs) and has similarities to both Inverse Reinforcement Learning (IRL) (Abbeel & Ng, 2004) and Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016). The process of learning a cost function that understands the space of policies in order to find an optimal policy given a demonstration is fundamentally IRL.
While using positive examples from the expert and negative examples from the policy is similar to the method GAIL uses to train a discriminator to recognize in distribution examples . In this work , we build upon these techniques by constructing a method that can learn policies using noisy visual data without action information . Considering the problem ’ s data sparsity , we include data from other tasks to learn a more robust distance function in the space of visual sequence . We also construct a cost function that takes into account the demonstration ordering as well as pose using a recurrent Siamese network . Our contribution consists of proposing and exploring these forms of recurrent Siamese networks as a way to address a critical problem in defining reward structure for imitation learning from the video for deep RL agents and accomplishing this on simulated humanoid robots for the challenging single shot learning setting . 2 RELATED WORK . Learning From Demonstration Searching for good distance functions is an active research area ( Abbeel & Ng , 2004 ; Argall et al. , 2009 ) . Given some vector of features , the goal is to find an optimal transformation of these features , such in this transformed space , there exists a strong contextual meaning . Previous work has explored the area of state-based distance functions , but most rely on pose based metrics ( Ho & Ermon , 2016 ; Merel et al. , 2017 ) that come from an expert . While there is other work using distance functions , including for example Sermanet et al . ( 2017 ) ; Finn et al . ( 2017 ) ; Liu et al . ( 2017 ) ; Dwibedi et al . ( 2018 ) , few use image based inputs and none consider the importance of learning a distance function in time as well as space . In this work , we train recurrent Siamese networks ( Chopra et al. , 2005 ) to learn distances between videos . Partially Observable Imitation Without Actions For Learning from Demonstration ( LfD ) problems the goal is to replicate the behaviour of expert πE behaviour . Unlike the typical setting for humans learning to imitate , LfD often assumes the availability of expert action and observation data . Instead , in this work , we focus on the case where only noisy actionless observations of the expert are available . Recent work uses Behavioural Cloning ( BC ) to learn an inverse dynamics model to estimate the actions used via maximum-likelihood estimation ( Torabi et al. , 2018 ) . Still , BC often needs many expert examples and tends to suffer from state distribution mismatch issues between the expert policy and student ( Ross et al. , 2011 ) . Work in ( Merel et al. , 2017 ) proposes a system based on GAIL that can learn a policy from a partial observation of the demonstration . In this work , the discriminator ’ s state input is a customized version of the expert ’ s state and does not take into account the demonstration ’ s sequential nature . The work in ( Wang et al. , 2017 ) provides a more robust GAIL framework along with a new model to encode motions for few-shot imitation . This model uses an Recurrent Neural Network ( RNN ) to encode a demonstration but uses expert state and action observations . In our work , the agent is limited to only a partial visual observation as a demonstration . Additional works learn implicit models of distance ( Yu et al. , 2018 ; Pathak et al. , 2018 ; Finn et al. , 2017 ; Sermanet et al. , 2017 ) , none of these explicitly learn a sequential model considering the demonstration timing . 
An additional version of GAIL, infoGAIL (Li et al., 2017), included pixel-based inputs. Goals can be specified using the latent space of a Variational Auto-Encoder (VAE) (Nair et al., 2018); our work extends this VAE loss to sequence data to train a more temporally consistent latent representation. Recent work (Peng et al., 2018b) gives a 2D control example of learning from video data. We show results on more complex 3D tasks and additionally model distance in time. In contrast, here we train a recurrent siamese model that can be used to enable curriculum learning and allows distances to be computed even when the agent and demonstration are out of sync. 3 PRELIMINARIES. In this section, we outline the general RL framework and the specific formulations we rely upon when developing our method in Section 4. Reinforcement Learning We use the RL framework formulated as a Markov Decision Process (MDP): at every time step $t$, the world (including the agent) exists in a state $s_t \in S$, wherein the agent performs actions $a_t \in A$, sampled from a policy $\pi(a_t \mid s_t)$, which results in a new state $s_{t+1} \in S$ and reward $r_t$ according to the transition probability function $T(r_t, s_{t+1} \mid s_t, a_t)$. The policy is optimized to maximize the future discounted reward
$$J(\pi) = \mathbb{E}_{r_0, \dots, r_T}\left[\sum_{t=0}^{T} \gamma^t r_t\right], \quad (1)$$
where $T$ is the maximum time horizon and $\gamma$ is the discount factor, indicating the planning-horizon length. Inverse reinforcement learning refers to the problem of extracting a reward function from observed optimal behavior (Ng et al., 2000). In contrast, our approach learns a distance that works across a collection of behaviours; further, we do not assume the example data to be optimal. See Appendix 7.2 for further discussion of the connections between our work and inverse reinforcement learning. GAIL VIRL is similar to the GAIL framework (Ho & Ermon, 2016), which uses a Generative Adversarial Network (GAN) (Goodfellow et al., 2014): the discriminator is trained with positive examples from the expert trajectories and negative examples from the policy, while the generator is the combination of the environment, the policy, and the state visitation distribution $p_\pi(s)$ induced by the policy:
$$\min_{\theta_\pi} \max_{\theta_\phi} \; \mathbb{E}_{\pi_E}\big[\log D(s, a \mid \theta_\phi)\big] + \mathbb{E}_{\pi_{\theta_\pi}}\big[\log\big(1 - D(s, a \mid \theta_\phi)\big)\big]. \quad (2)$$
In this framework the discriminator provides rewards for the RL policy to optimize, as the probability of a state generated by the policy lying in the expert distribution, $r_t = D(s_t, a_t \mid \theta_\phi)$. While this framework has been shown to work in practice, the dual optimization is often unstable. In the next section we outline our method for learning a more stable distance-based reward over sequences of images. 4 CONCEPTUAL DISTANCE-BASED REINFORCEMENT LEARNING. Our approach aims to facilitate imitation learning within an underlying RL formulation over partial observations $o$. Unlike GAIL, we do not rely on access to state $s$ and action $a$ information; our idea is to minimize a function that determines the distance between two sequences of observations, one from the desired example behavior $o^e$ and another from the current agent behavior $o^a$.
We can then define the reward used within an underlying RL framework in terms of a distance function $D$, such that
$$r_{\hat{t}}(o^e, o^a) = -D(o^e, o^a, \hat{t}) = \sum_{t=0}^{\hat{t}} -d(o^e_t, o^a_t), \quad (3)$$
where, in our setting, $D(o^e, o^a, \hat{t})$ models a distance between video clips from time $t = 0$ to $\hat{t}$. A simple formulation of this approach can be overly restrictive on sequence timing. While these distances can serve as RL rewards, they often provide insufficient signal for the policy to learn a good imitative behaviour, especially when the agent only has partial observations of the expert. We can see an example of this in Figure 1a, where starting at $t_5$ the agent (in red) begins to exhibit behaviour similar to the expert's (in blue), yet the spatial distance indicates that this state is further from the desired behaviour than at $t_4$. To encourage the agent to match any part of the expert behaviour, we propose decomposing the distance into two distances by adding a type of temporal distance, shown in green. To compute a time-independent distance, we find the state in the expert sequence that is closest to the agent's current state, $\hat{t} = \arg\min_{\hat{t} \in T} d(o^e_{\hat{t}}, o^a_t)$, and use it in the following distance measure:
$$d_T(o^e, o^a, \hat{t}, t) = \dots + d(o^e_{\hat{t}-1}, o^a_{t-1}) + d(o^e_{\hat{t}}, o^a_t) + d(o^e_{\hat{t}+1}, o^a_{t+1}) + \dots \quad (4)$$
Using only a single time-aligned state may lead the agent to fixate on matching one state in the expert demonstration; to avoid this, the neighbouring states after the sequence-timing readjustment are also used in the distance computation. This framework rewards the agent for exhibiting behaviour that matches any part of the expert's demonstration: the better it learns to match parts of the demonstration, the more reward it receives. The earlier spatial distance then helps the agent learn to synchronize its timing with the demonstration. Next, we describe how we learn both of these distances. Distance Metric Learning Many methods can be used to learn a distance function in state space. Here we use a Siamese network $f(o^e, o^a)$ with a triplet loss over time and task data (Chopra et al., 2005). The triplet loss minimizes the distance between two examples that are positive (very similar, or from the same class) and maximizes the distance between pairs of examples known to be unrelated. For more details see the supplementary document. Sequence Imitation The distance metric is formulated in a recurrent style, where the distance is computed from the current state conditioned on all previous states, $d(o_t \mid o_{t-1}, \dots, o_0)$. The loss function combines the distance loss of Eq. 9 with the VAE-based representation learning objectives of Eq. 7 and Eq. 8, detailed in the supplementary material. This combination of sequence-based losses helps compress the representation while ensuring that intermediate representations remain informative. The loss used to train the distance model on a positive pair of sequences is
$$\mathcal{L}_{VIRL}(o_i, o_p, \cdot) = \lambda_0 \mathcal{L}_{SN}(o_i, o_p, \cdot) + \lambda_1 \left[\frac{1}{T}\sum_{t=0}^{T} \mathcal{L}_{SN}(o_{i,t}, o_{p,t}, \cdot)\right] + \lambda_2 \left[\frac{1}{T}\sum_{t=0}^{T} \mathcal{L}_{VAE}(o_{i,t}) + \mathcal{L}_{VAE}(o_{p,t})\right] + \lambda_3 \big[\mathcal{L}_{AE}(o_i) + \mathcal{L}_{AE}(o_p)\big],$$
where $\lambda = \{0.7, 0.1, 0.1, 0.1\}$. With a negative pair, the second sequence used in the VAE and AE losses would be the negative sequence. The Siamese loss function remains the same as in Eq. 9, but the overall learning process evolves to use an RNN-based deep network; a minimal sketch of this sequence-level triplet loss is given below.
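The following PyTorch sketch illustrates the sequence-level siamese triplet loss under stated assumptions: the margin value, the encoder shapes (a linear stand-in for the convolutional network), and the Euclidean embedding distance are illustrative choices, since the paper defers the exact loss (its Eq. 9) to the supplementary material.

```python
import torch
import torch.nn as nn

class SeqSiameseEncoder(nn.Module):
    """Shared frame encoder followed by an LSTM; returns per-frame codes and a
    sequence-level embedding for a video of shape (B, T, feat)."""
    def __init__(self, feat_dim=128, emb_dim=64):
        super().__init__()
        self.conv = nn.Linear(feat_dim, emb_dim)   # stand-in for a conv net
        self.lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)

    def forward(self, video):
        e = torch.tanh(self.conv(video))           # (B, T, emb): frame codes
        h, _ = self.lstm(e)
        return e, h[:, -1]                          # frame codes, final code

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss on sequence embeddings."""
    d_pos = (anchor - positive).pow(2).sum(-1)
    d_neg = (anchor - negative).pow(2).sum(-1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

# Positive pairs: the same clip with noise; negatives: shuffled/reversed clips,
# mirroring the unsupervised labelling described later in the text.
enc = SeqSiameseEncoder()
clip = torch.randn(4, 20, 128)
_, z_a = enc(clip)
_, z_p = enc(clip + 0.05 * torch.randn_like(clip))   # augmented positive
_, z_n = enc(clip[:, torch.randperm(20)])             # shuffled negative
loss = triplet_loss(z_a, z_p, z_n)
```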
A diagram of the full model is shown in Figure 2. The model uses a time-distributed Long Short-Term Memory (LSTM). A single convolutional network $\mathrm{conv}_a$ first transforms the demonstration images $o^a$ into encoding vectors $e^a_t$. After the image sequence has been passed through $\mathrm{conv}_a$, the encoded sequence $\langle e^a_0, \dots, e^a_t \rangle$ is fed into the RNN $\mathrm{lstm}_a$ until a final encoding $h^a_t$ is produced. The same process is performed with a copy of the RNN $\mathrm{lstm}_a$, producing $h^b_t$ for the agent observations $o^b$. The loss is computed in a similar fashion to Mueller & Thyagarajan (2016), using the sequence outputs of images from the agent and from the demonstration. The reward at each timestep is computed as
$$r_t = \|h^a_t - h^b_t\| + \|e^a_t - e^b_t\| = \|\mathrm{lstm}_a(\mathrm{conv}_a(s^a_t)) - \mathrm{lstm}_a(\mathrm{conv}_a(s^b_t))\| + \|\mathrm{conv}_a(s^a_t) - \mathrm{conv}_a(s^b_t)\|.$$
At the beginning of each episode, the RNN's internal state is reset. The policy and value function have 2 hidden layers with 512 and 256 units, respectively. The additional VAE-based image and Auto-Encoder (AE)-based sequence decoding losses improve the latent-space conditioning and representation.

Algorithm 1 Learning Algorithm
  Initialize model parameters θπ and θd
  Create experience memory D ← {}
  while not done do
    for i ∈ {0, . . . , N} do
      τi ← {}
      {st, o^e_t, o^a_t} ← env.reset()
      for t ∈ {0, . . . , T} do
        at ← π(·|st, θπ)
        {st+1, o^e_{t+1}, o^a_{t+1}} ← env.step(at)
        rt ← −d(o^e_{t+1}, o^a_{t+1} | θd)
        τi,t ← {st, o^e_t, o^a_t, at, rt}
        {st, o^e_t, o^a_t} ← {st+1, o^e_{t+1}, o^a_{t+1}}
      end for
    end for
    D ← D ∪ {τ0, . . . , τN}
    Update the distance parameters θd using D
    Update the policy parameters θπ using {τ0, . . . , τN}
  end while

Unsupervised Data Labelling To construct positive and negative pairs for training, we make use of time information in a similar fashion to Sermanet et al. (2017): observations at similar times in the same sequence are often correlated, while observations at different times will likely have little similarity. We create pairs by altering one sequence and comparing the modified version to its original. Positive pairs are created by adding noise to the sequence or altering a few of its frames; negative pairs are created by shuffling one sequence or reversing it. More details are available in the supplementary material. Imitation data from 24 other tasks are also used to help condition the distance-metric learning process. These include motion clips for running, backflips, frontflips, dancing, punching, kicking, and jumping, along with the desired motion. For details on how positive and negative pairs are created from these data, see the supplementary document. Importantly, the RL environment generates two different state representations for the agent. The first, $s_{t+1}$, is the internal robot pose; the second, $o_{t+1}$, is the agent's rendered view, shown in Figure 2. The rendered view is used with the distance metric to compute the similarity between the agent and the demonstration. We also attempted to use the visual features as the state input for the policy; this resulted in poor policy quality. Details of the algorithm used to train the distance metric and policy are outlined in Algorithm 1 and in the supplementary document.
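To illustrate the per-step reward from Figure 2, a small sketch follows. It reuses the `SeqSiameseEncoder` from the previous sketch, negates the distance as in Algorithm 1, and assumes Euclidean norms and pre-extracted frame features; none of these details are taken verbatim from the paper.

```python
import torch

def virl_reward(encoder, demo_frames, agent_frames):
    """Per-step reward from r_t = ||h^a_t - h^b_t|| + ||e^a_t - e^b_t||,
    computed with one shared encoder over demonstration and agent videos,
    then negated so that smaller distances yield larger rewards."""
    e_demo, _ = encoder(demo_frames)      # (B, T, emb) frame encodings
    e_agent, _ = encoder(agent_frames)
    # Full LSTM outputs give the recurrent encoding h_t at every step.
    h_demo, _ = encoder.lstm(e_demo)      # (B, T, emb)
    h_agent, _ = encoder.lstm(e_agent)
    dist = (h_demo - h_agent).norm(dim=-1) + (e_demo - e_agent).norm(dim=-1)
    return -dist                           # (B, T): negated distance as reward

# Usage with the sketch encoder defined earlier:
enc = SeqSiameseEncoder()
rewards = virl_reward(enc, torch.randn(1, 20, 128), torch.randn(1, 20, 128))
```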
This paper studies the problem of visual imitation learning: given a video of an expert demonstration, take actions to reproduce that same behavior. The proposed method learns a distance metric on videos and uses that distance metric as a reward function for RL. Experiments show that this method does recover reasonable behaviors across a range of simulated robotic tasks. Compared with prior methods, the main contribution of this work is that the distance metric is parametrized and trained as a siamese network.
SP:6cc0e3b4b6385061150d8e36bcbc022069b475ba
A Surgery of the Neural Architecture Evaluators
Neural architecture search (NAS) has recently received extensive attention due to its effectiveness in automatically designing effective neural architectures. A major challenge in NAS is conducting a fast and accurate evaluation (i.e., performance estimation) of neural architectures. Commonly used fast architecture evaluators include parameter-sharing and predictor-based ones. Despite their high evaluation efficiency, their evaluation correlation (especially among the well-performing architectures) is still questionable. In this paper, we conduct an extensive assessment of both parameter-sharing and predictor-based evaluators on the NAS-Bench-201 search space, and break down how and why different configurations and strategies influence the fitness of the evaluators. Specifically, we develop a set of NAS-oriented criteria to understand the behavior of fast architecture evaluators at different training stages. Based on our experimental findings, we offer observations and suggestions to guide NAS applications and motivate further research. 1 INTRODUCTION. Studies have shown that architectures automatically discovered by NAS can outperform hand-crafted architectures in various applications, such as classification (Nayman et al., 2019; Zoph & Le, 2017), detection (Ghiasi et al., 2019; Chen et al., 2019b), video understanding (Ryoo et al., 2019), and text modeling (Zoph & Le, 2017). Early NAS algorithms (Zoph & Le, 2017) suffer from an extremely heavy computational burden, since the evaluation of neural architectures is slow. Thus, estimating the performance of a neural architecture quickly and accurately is vital for addressing the computational challenge of NAS. A neural architecture evaluator outputs a score for an architecture that indicates its quality. The straightforward solution is to train the architecture from scratch to convergence and then test it on the validation dataset, which is extremely time-consuming. Instead of exactly evaluating architectures on the target task, researchers usually construct a proxy model with fewer layers or channels (Pham et al., 2018; Real et al., 2019; Wu et al., 2019) and train this model on a proxy task of smaller scale (Cai et al., 2018a; Elsken et al., 2018; Klein et al., 2017; Wu et al., 2019), e.g., a smaller dataset or subsets of a dataset, or training or finetuning for fewer epochs. Traditional evaluators conduct separate training phases to acquire weights suitable for each architecture. In contrast, one-shot evaluation amortizes the training cost across architectures through parameter sharing or a global hypernetwork, significantly reducing the evaluation cost. Pham et al. (2018) construct an over-parametrized super network (supernet) such that all architectures in the search space are sub-architectures of the supernet. Throughout the search process, the shared parameters in the supernet are updated on the training split, and each architecture is evaluated by directly using the corresponding subset of the weights. Since then, the parameter-sharing technique has been widely used for architecture search in different search spaces (Wu et al., 2019; Cai et al., 2020) and incorporated with different search strategies (Liu et al., 2018b; Nayman et al., 2019; Xie et al., 2018; Yang et al., 2019; Cai et al., 2020). Hypernetwork (Brock et al., 2018; Zhang et al.
, 2018)-based evaluation is another type of one-shot evaluation strategy, in which a hypernetwork is trained to generate proper weights for each architecture. Since hypernetwork solutions are not yet generic, this paper concentrates on the evaluation of parameter-sharing evaluators. Figure 1: An overview of fast neural architecture evaluators (i.e., performance estimators). Whether one-shot strategies can provide highly correlated architecture evaluations is essential to the efficacy of the NAS process. Many recent studies have focused on assessing the evaluation correlation of one-shot architecture evaluators (Bender et al., 2018; Sciuto et al., 2019; Zela et al., 2020). Besides one-shot strategies, predictor-based evaluation strategies (Luo et al., 2018; Liu et al., 2018a; Deng et al., 2017; Sun et al., 2019; Wang et al., 2018; Xu et al., 2019; Ning et al., 2020) use a performance predictor that takes the architecture description as input and outputs a predicted performance score. The performance predictor must be trained using “ground-truth” architecture performances. This paper utilizes the same set of criteria to evaluate and compare different performance predictors. The fast neural architecture evaluators (i.e., performance estimators) are summarized in Fig. 1, including parameter-sharing, hypernetwork, and predictor-based ones. This paper aims to systematically reveal the status of current architecture evaluation strategies. Specifically, we develop a set of NAS-oriented criteria to understand the behavior of fast architecture evaluators at different training stages, and based on our experimental findings, we offer observations and suggestions to guide NAS applications and motivate further research. The findings revealed by this paper include: 1) A layer proxy brings a larger evaluation gap than a channel proxy; thus channel proxies can be used to reduce the computational cost, while proxy-less search with respect to the layer number is worth studying. 2) The convergence rates of different criteria vary during one-shot supernet training, showing that good architectures are distinguished from bad ones in the early stages. 3) As training proceeds, the one-shot performances of isomorphic sub-architectures become closer. 4) De-isomorphic sampling or post-hoc de-isomorphism handling can help avoid over-estimating simple architectures. 5) Parameter-sharing evaluators tend to over-estimate smaller architectures and are better at comparing smaller models than larger ones. 6) Ranking losses rather than regression losses should be used to train predictors, since they are more stable. 7) Different predictors under- or over-estimate different architectures, and even the current best predictor may still have trouble comparing large architectures.
8 ) As expected , architecture predictors can distinguish good architectures better after multiple stages of training , as the training data are more and more concentrated on the good architectures . 2 RELATED WORK . 2.1 ONE-SHOT EVALUATORS . One-shot evaluation mainly consists of two types of strategies : 1 ) parameter sharing ( Pham et al. , 2018 ; Wu et al. , 2019 ; Liu et al. , 2018b ; Nayman et al. , 2019 ; Xie et al. , 2018 ; Yang et al. , 2019 ; Cai et al. , 2020 ) , 2 ) hypernetworks ( Brock et al. , 2018 ; Zhang et al. , 2018 ) . These two strategies both amortize the training cost of different architectures via the sharing of the network or hypernetwork parameters . The ranking correlation gaps of existing shared weights evaluators are brought by two factors : 1 ) proxy model and task : due to the memory constraint , a proxy supernet ( supernet ) ( Liu et al. , 2018b ; Pham et al. , 2018 ) with fewer channels or layers is usually used ; 2 ) parameter sharing . To alleviate the first factor , there are some studies ( Cai et al. , 2018b ; Chen et al. , 2019a ) that aim at making oneshot evaluation more memory efficient , thus the one-shot search could be conducted without using a proxy supernet . As for the second factor , there are a few studies that carried out correlation evaluation for one-shot evaluators . Zhang et al . ( 2018 ) conducted a correlation comparison between the GHN hypernetwork evaluator , shared weights evaluator , and several small proxy tasks . However , the correlation is evaluated using 100 architectures randomly sampled from a large search space , which is not a convincing and consistent benchmark metric . Luo et al . ( 2019 ) did a preliminary investigation into why parameter sharing evaluation fails to provide correlated evaluations , and proposed to increase the sample probabilities of the large models . Their evaluation is also conducted on dozens of architectures sampled from the search space . Zela et al . ( 2020 ) compare the evaluation correlation of different search strategies on NAS-Bench-101 . Sciuto et al . ( 2019 ) conduct parameter sharing NAS in a toy RNN search space with only 32 architectures in total , and discover that the parameter sharing rankings do not correlate with the true rankings of architectures . To improve the evaluation correlation , Chu et al . ( 2019 ) proposed a sampling strategy in a layer-wise search space . In this paper , we analyze the ranking correlation gaps brought by the model proxy ( difference in the number of channels and layers ) and the parameter sharing technique , as well as the behavior of one-shot evaluators during the training process . 2.2 PREDICTOR-BASED EVALUATORS . An architecture performance predictor takes the architecture descriptions as inputs , and outputs the predicted performance scores without training the architectures . Two factors are crucial to the fitness of the predictors : 1 ) embedding space ; 2 ) training technique . On the one hand , to embed neural architectures into a continuous space and get a meaningful embedding space , there are studies that propose different architecture encoders , e.g. , sequence-based ( Luo et al. , 2018 ; Liu et al. , 2018a ; Deng et al. , 2017 ; Sun et al. , 2019 ; Wang et al. , 2018 ) , graph-based ( Shi et al. , 2019 ; Ning et al. , 2020 ) . As for nonparametric predictors , Kandasamy et al . ( 2018 ) design a kernel function in the architecture space and exploits gaussian process to get the posterior of the architecture performances . Shi et al . 
(2019) combined a graph-based encoder and a nonparametric Gaussian process to construct the performance predictor. On the other hand, from the aspect of training techniques, Luo et al. (2018) employed an encoder-decoder structure with an auxiliary reconstruction loss term, while Xu et al. (2019) and Ning et al. (2020) employed learning-to-rank techniques to train the predictors. In the overall NAS framework, the predictor-based evaluator actually plays a different role from traditional or one-shot evaluators, since the predictor must be trained using "ground-truth" architecture performances. Usually, expensive traditional evaluators that can provide relatively accurate architecture performances are chosen as the "oracle" evaluators to output the "ground-truth" scores (Kandasamy et al., 2018; Liu et al., 2018a; Luo et al., 2018). 3 EVALUATION CRITERIA. In this section, we introduce the evaluation criteria used in this paper. We denote the search space size as $M$, and the true performances and approximated evaluation scores of architectures $\{a_i\}_{i=1,\cdots,M}$ as $\{y_i\}_{i=1,\cdots,M}$ and $\{s_i\}_{i=1,\cdots,M}$, respectively. We denote the rankings of the true performance $y_i$ and the evaluated score $s_i$ as $r_i \in \{1,\cdots,M\}$ and $n_i \in \{1,\cdots,M\}$ ($r_i = 1$ indicates that $a_i$ is the best architecture in the search space). The correlation criteria adopted in our paper are: • Linear correlation: the Pearson correlation coefficient $\mathrm{corr}(y, s) / \sqrt{\mathrm{corr}(y, y)\,\mathrm{corr}(s, s)}$. • Kendall's Tau ranking correlation: the relative difference of concordant and discordant pairs, $\sum_{i<j} \mathrm{sgn}(y_i - y_j)\,\mathrm{sgn}(s_i - s_j) / \binom{M}{2}$. • Spearman's ranking correlation: the Pearson correlation coefficient between the rank variables, $\mathrm{corr}(r, n) / \sqrt{\mathrm{corr}(r, r)\,\mathrm{corr}(n, n)}$. Besides these correlation criteria, criteria that emphasize the relative order of architectures with good performances are desired. Denoting $A_K = \{a_i \mid n_i < KM\}$ as the set of architectures whose evaluated scores are among the top $K$ proportion of the search space, we use two criteria: • Precision@K ($\mathrm{P@K} \in (0, 1]$) $= \#\{i \mid r_i < KM \wedge n_i < KM\} / (KM)$: the proportion of true top-$K$-proportion architectures among the top-$K$ architectures according to the scores. • BestRanking@K ($\mathrm{BR@K} \in (0, 1]$) $= \min_{i \in A_K} r_i / M$: the best normalized ranking among the top $K$ proportion of architectures according to the scores. These two criteria are similar to those used in Ning et al. (2020), except that rankings and architecture numbers are all normalized with respect to the search space size $M$. The above criteria are used to compare the fitness of various architecture evaluators with different configurations and in different stages. Besides that, we would also like to interpret their evaluation results. To identify which architectures are under- or over-estimated by various evaluators, and to analyze the reasons accordingly, we investigate the relationship between the true-predicted ranking differences $\{r_i - n_i\}_{i=1,\cdots,M}$ and architecture properties such as the FLOPs $\{\mathrm{FLOPs}(a_i)\}_{i=1,\cdots,M}$.
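To make these criteria concrete, the following sketch computes all five of them from arrays of true performances and evaluator scores. It is a minimal illustration of the definitions above (non-strict top-$K$ membership is used for simplicity), not code from the paper:

```python
import numpy as np
from scipy import stats

def evaluation_criteria(y, s, K=0.1):
    """Correlation and top-K criteria for an architecture evaluator.

    y: true performances, s: evaluator scores, both of length M.
    K: proportion of the search space treated as "top", e.g. 0.1.
    """
    y, s = np.asarray(y, float), np.asarray(s, float)
    M = len(y)
    pearson = stats.pearsonr(y, s)[0]
    kendall = stats.kendalltau(y, s)[0]
    spearman = stats.spearmanr(y, s)[0]

    # r_i = 1 means a_i is the best architecture in the search space.
    r = stats.rankdata(-y, method="ordinal")
    n = stats.rankdata(-s, method="ordinal")

    top_by_score = n <= K * M               # the set A_K
    p_at_k = np.mean(r[top_by_score] <= K * M)
    br_at_k = r[top_by_score].min() / M     # normalized best true ranking
    return pearson, kendall, spearman, p_at_k, br_at_k
```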
The paper assesses two different approaches to speeding up the evaluation of neural network architectures for neural architecture search (NAS). The first is weight sharing, which trains a supernetwork that contains all possible architectures of the search space; the performance of individual architectures can then be approximated by simply using the shared parameters of the supernetwork. The second approach is to use different kinds of predictors that are trained on offline-evaluated architectures. Several methods following these two approaches from the literature are evaluated on the NAS-Bench-201 benchmark using different rank-based evaluation scores.
SP:cb3cb0e206f4c3560538906a34265fcc95ca950f
Differentiable Approximations for Multi-resource Spatial Coverage Problems
1 INTRODUCTION. Allocation of multiple resources for efficient spatial coverage is an important component of many practical single-agent and multi-agent systems, e.g., robotic surveillance, mobile sensor networks, and security game modeling. Surveillance tasks generally involve a single agent assigning resources, e.g., drones or sensors, each of which can monitor physical areas, to various points in a target domain such that a loss function associated with coverage of the domain is minimized (Renzaglia et al., 2012). Alternatively, security domains follow a leader-follower game setup between two agents, where a defender defends a set of targets (or a continuous target density in a geographical area) with limited resources to be placed, while an attacker plans an attack after observing the defender's placement strategy using its own resources (Tambe, 2011). Traditional methods used to solve single-agent multi-resource surveillance problems often rely on potential fields (Howard et al., 2002), discretization-based approaches (Kong et al., 2006), Voronoi tessellations (Dirafzoon et al., 2011), and particle swarm optimization (Nazif et al., 2010; Saska et al., 2014). Similarly, many exact and approximate approaches have been proposed to maximize the defender's expected utility in two-agent multi-resource security domains against a best-responding attacker (Kiekintveld et al., 2009; Amin et al., 2016; Yang et al., 2014; Haskell et al., 2014; Johnson et al., 2012; Huang et al., 2020). Notably, most existing traditional approaches focus on exploiting some specific spatio-temporal or symmetry structure of the domain being examined. Related Work: Since spatial coverage problems feature continuous action spaces, a common technique used across most previous works is to discretize the area to be covered into grid cells and restrict the agents' actions to discrete sets (Kong et al., 2006; Yang et al., 2014; Haskell et al., 2014; Gan et al., 2017), finding equilibrium mixed strategies or optimal pure strategies using integer linear programming. However, discretization quickly becomes intractable as the number of each agent's resources grows. While some games can be characterized by succinct agent strategies and solved efficiently via mathematical programming after discretizing the agents' action spaces (Behnezhad et al., 2018), this is not true for most multi-resource games. Recent works in spatial coverage domains have focused on incorporating advances from deep learning to solve coverage problems with more general algorithms. For instance, Pham et al. (2018) address multi-UAV coverage of a field of interest using a model-free multi-agent RL method, while StackGrad (Amin et al., 2016), OptGradFP (Kamra et al., 2018), and PSRO (Lanctot et al., 2017) are model-free fictitious-play-based algorithms which can solve games in continuous action spaces. However, model-free approaches are sample inefficient and require many interactions with the domain (or with a simulator) to infer the expected utilities of agents' actions. Secondly, they often rely on policy gradients to compute the derivative of the agents' expected utilities w.r.t. their mixed strategies, which induces high variance in the estimate. To alleviate these issues, more recent works take an actor-critic-based approach (Lowe et al.
, 2017), which additionally learns a differentiable approximation to the agents' utilities (Kamra et al., 2019a; Wang et al., 2019) and calculates gradients of strategies w.r.t. the utilities. But this requires learning accurate reward/value functions, which becomes combinatorially hard for multi-resource coverage. Contributions: To address the above challenge, we present a framework to tractably approximate a general class of spatial coverage objectives and their gradients via spatial discretization, without having to learn neural-network-based reward models. We only discretize the target domain, to represent integrals and all set operations over it, but not the action spaces of the agents. Hence we mitigate the intractability caused by discretizing the high-dimensional action spaces of agents with large numbers of resources, while also keeping agents' actions amenable to gradient-based optimization. By combining our framework with existing solution methods, we successfully solve both single-agent and adversarial two-agent multi-resource spatial coverage problems. 2 MULTI-RESOURCE SPATIAL COVERAGE PROBLEMS. In this section, we formally introduce notation and definitions for multi-resource allocation problems, along with two example applications which will be used for evaluation. Multi-agent multi-resource spatial coverage: Spatial coverage problems comprise a target space $Q \subset \mathbb{R}^d$ (generally $d \in \{2, 3\}$) and a set of agents (or players) $P$, with each agent $p \in P$ having $m_p$ resources. We use the notation $-p$ to denote all agents except $p$, i.e., $P \setminus \{p\}$. Actions: An action $u_p \in \mathbb{R}^{m_p \times d_p}$ for agent $p$ is the placement of all its resources in an appropriate coordinate system of dimension $d_p$. Let $U_p$ denote the compact, continuous, and convex action set of agent $p$. Mixed strategies: We represent a mixed strategy, i.e., the probability density of agent $p$ over its action set $U_p$, as $\sigma_p(u_p) \ge 0$ s.t. $\int_{U_p} \sigma_p(u_p)\,du_p = 1$. We denote agent $p$ sampling an action $u_p \in U_p$ from its mixed strategy density as $u_p \sim \sigma_p$. Joints: Joint actions, action sets, and densities for all agents together are represented as $u = \{u_p\}_{p \in P}$, $U = \times_{p \in P} U_p$, and $\sigma = \{\sigma_p\}_{p \in P}$, respectively. Coverage: When placed, each resource covers (often probabilistically) some part of the target space $Q$. Let $\mathrm{cvg}_p : Q \times U \to \mathbb{R}$ be a function denoting the utility for agent $p$ coming from a target point $q \in Q$ due to a joint action $u$ of all agents. We do not assume a specific form for the coverage utility $\mathrm{cvg}_p$ and leave it to be defined flexibly, so that many different coverage applications are amenable to our framework. Rewards: Due to the joint action $u$, each player achieves a coverage reward $r_p : U \to \mathbb{R}$ of the form $r_p(u) = \int_Q \mathrm{cvg}_p(q, u)\,\mathrm{imp}_p(q)\,dq$, where $\mathrm{imp}_p(q)$ denotes the importance of the target point $q$ for agent $p$. Under a joint mixed strategy $\sigma$, player $p$ achieves expected utility $\mathbb{E}_{u \sim \sigma}[r_p] = \int_U r_p(u)\,\sigma(u)\,du$. Objectives: In single-agent settings, the agent directly optimizes its expected utility w.r.t. its action $u_p$. But in multi-agent settings, the expected utilities of agents depend on other agents' actions and hence cannot be maximized with a deterministic resource allocation, due to potential exploitation by other agents. Instead, agents aim to achieve Nash equilibrium mixed strategies $\sigma = \{\sigma_p\}_{p \in P}$ over their action spaces.
Nash equilibria: A joint mixed strategy $\sigma^* = \{\sigma^*_p\}_{p \in P}$ is a Nash equilibrium if no agent can increase its expected utility by changing its strategy while the other agents stick to their current strategies. Two-player settings: While our proposed framework is not restricted in the number of agents or the utility structure of the game, we focus on single-player settings and zero-sum two-player games in subsequent examples. An additional concept required by fictitious play in two-player settings is that of a best response. A best response of agent $p$ against strategy $\sigma_{-p}$ is an action which maximizes its expected utility against $\sigma_{-p}$: $br_p(\sigma_{-p}) \in \arg\max_{u_p} \mathbb{E}_{u_{-p} \sim \sigma_{-p}}[r_p(u_p, u_{-p})]$. The expected utility of any best response of agent $p$ is called the exploitability of agent $-p$: $\epsilon_{-p}(\sigma_{-p}) := \max_{u_p} \mathbb{E}_{u_{-p} \sim \sigma_{-p}}[r_p(u_p, u_{-p})]$. Notably, a Nash equilibrium mixed strategy for each player is also their least exploitable strategy. Example 1 (Single-agent Areal Surveillance). A single agent, namely the defender ($D$), allocates $m$ areal drones, with the $i$-th drone $D_i$ having three-dimensional coordinates $u_{D,i} = (p_{D,i}, h_{D,i}) \in [-1, 1]^2 \times [0, 1]$, to surveil a two-dimensional forest $Q \subset [-1, 1]^2$ of arbitrary shape and with a known but arbitrary tree density $\rho(q)$. Consequently, $u_D \in \mathbb{R}^{m \times 3}$. Each drone has a downward-looking camera with a circular lens and half-angle $\theta$, such that at position $(p_{D,i}, h_{D,i})$ the drone $D_i$ sees the set of points $S_{D,i} = \{q \mid \|q - p_{D,i}\|_2 \le h_{D,i} \tan\theta\}$. A visualization of this problem with $m = 2$ drones is shown for a sample forest in Figure 1a. We assume a probabilistic model of coverage, with a point $q$ being covered by drone $D_i$ with probability $P_H(h_{D,i}) = e^{K(h_{opt} - h_{D,i})} \left(h_{D,i}/h_{opt}\right)^{K h_{opt}}$ if $q \in S_{D,i}$, and $0$ otherwise. With multiple drones, the probability of a point $q$ being covered can then be written as $\mathrm{cvg}(q, u_D) = 1 - \prod_{i \mid q \in S_{D,i}} \bar{P}_H(h_{D,i})$, where $\bar{P}_H$ stands for $1 - P_H$. Hence, the reward function to be maximized is $r_{D,1p}(u_D) = \int_Q \big(1 - \prod_{i \mid q \in S_{D,i}} \bar{P}_H(h_{D,i})\big)\,\rho(q)\,dq$, with the tree density $\rho(q)$ being the importance of target point $q$ (the subscript $1p$ denotes one agent). Example 2 (Two-agent Adversarial Coverage). Two agents, namely the defender $D$ and the attacker $A$, compete in a zero-sum game. The defender allocates $m$ areal drones with the same coverage model as in Example 1. The attacker controls $n$ lumberjacks, each with ground coordinates $u_{A,j} \in [-1, 1]^2$, to chop trees in the forest $Q$. Consequently, $u_A \in \mathbb{R}^{n \times 2}$. Each lumberjack chops a constant fraction $\kappa$ of trees in a radius $R_L$ around its coordinates $u_{A,j}$. We denote the area covered by the $j$-th lumberjack as $S_{A,j} = \{q \mid \|q - p_{A,j}\|_2 \le R_L\}$. A visualization of this problem with $m = n = 2$ is shown for a sample forest in Figure 1b. A drone can potentially catch a lumberjack if its field of view overlaps with the chopping area. For a given resource allocation $u = (u_D, u_A)$, we define $I_j = \{i \mid \|p_{A,j} - p_{D,i}\|_2 \le R_L + h_{D,i}\tan\theta\}$ as the set of all drones which overlap with the $j$-th lumberjack. The areal overlap $\alpha_{ij} = \int_{S_{D,i} \cap S_{A,j}} dq$ controls the probability of the $j$-th lumberjack being caught by the $i$-th drone: $P_C(h_{D,i}, \alpha_{ij}) = P_H(h_{D,i})\,P_A(\alpha_{ij})$, where $P_H$ is the same as in Example 1 and captures the effect of the drone's height on the quality of coverage, while $P_A(\alpha_{ij}) = 1 - \exp(-K_a \alpha_{ij} / (\pi R_L^2))$ captures the effect of areal overlap on the probability of being caught.
Hence, the reward achieved by the $j$-th lumberjack is $r_{A,j}(u_D, u_{A,j}) = \kappa \int_{S_{A,j} \cap Q} \rho(q)\,dq$ with probability $\prod_{i \in I_j} \bar{P}_C(h_{D,i}, \alpha_{ij})$, and $-\kappa \int_{S_{A,j} \cap Q} \rho(q)\,dq$ otherwise, i.e., the number of trees chopped if the $j$-th lumberjack is not caught by any drone, or an equivalent negative penalty if it is caught. Hence, the total agent rewards are $r_{A,2p}(u_D, u_A) = -r_{D,2p}(u_D, u_A) = \sum_j r_{A,j}(u_D, u_{A,j})$ (the subscript $2p$ denotes two agents). Note that in the above examples, drones provide the best probabilistic coverage at a height $h_{opt}$; by increasing their height, a larger area can be covered at the cost of a deterioration in coverage probability. Further, the defender can increase the coverage probability for regions with high tree density by placing multiple drones to oversee them, in which case the drones can potentially stay at higher altitudes too. Example 2 adds further interactions due to overlaps between the defender's and attacker's resources.¹ Hence, these examples form a challenging set of evaluation domains with multiple trade-offs and complex coverage possibilities involving combinatorial interactions between the players' resources. For both examples, we use the following constants: $\theta = \pi/6$, $h_{opt} = 0.2$, $K = 4.0$, $R_L = 0.1$, $K_a = 3.0$, $\kappa = 0.1$. Note, however, that these values only serve as practical representative values. The techniques introduced in this paper are not specific to the above probabilistic capture models or to specific values of the game constants, but rather apply to a broad class of coverage problems where the agents act by placing resources with finite coverage fields and the agents' rewards are of the form $r_p(u) = \int_Q f_p(u, q)\,dq$. Dealing with zero gradients: In the two-agent game, the attacker's reward depends on the locations of its resources, but the defender's reward depends solely on overlaps with the attacker's resources. In the absence of such overlap, the gradient of $r_{D,2p}$ w.r.t. $u_{D,i}$ becomes $0$. Hence, we propose to use the reward from the one-agent game as an intrinsic reward for the defender, similar to how RL algorithms employ intrinsic rewards when extrinsic rewards are sparse (Pathak et al., 2017). The reward function for the defender then becomes $\tilde{r}_{D,2p}(u_D, u_A) = r_{D,2p}(u_D, u_A) + \mu\,r_{D,1p}(u_D)$. We use a small $\mu = 0.001$ so as not to cause significant deviation from the zero-sum structure of the game, while still providing a non-zero gradient to guide the defender's resources in the absence of gradients from $r_{D,2p}$. ¹In reality, lumberjacks might act independently of each other and lack knowledge of each other's plans. By allowing them to be placed by a single attacker and letting them collude, we tackle a more challenging problem and ensure that not all of them get caught by independently going to strongly covered forest regions.
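As a rough illustration of the discretization idea, the sketch below approximates the single-agent reward $r_{D,1p}$ on a fixed grid over $Q$ while keeping the drone placements $u_D$ continuous and differentiable; the hard membership test $q \in S_{D,i}$ is softened with a sigmoid so autograd can pass gradients through it. This is a simplification of the framework, not its exact estimator:

```python
import math
import torch

THETA, H_OPT, K = math.pi / 6, 0.2, 4.0   # constants from the examples above

def surveillance_reward(u_D, grid, rho, cell_area, sharpness=50.0):
    """Grid approximation of r_{D,1p}: integral over Q of
    (1 - prod_i (1 - P_H)) * rho, with the product over covering drones.

    u_D: (m, 3) drone placements (x, y, h), differentiable.
    grid: (G, 2) points discretizing Q; rho: (G,) tree density at each point.
    """
    p, h = u_D[:, :2], u_D[:, 2]
    dist = torch.cdist(grid, p)                          # (G, m)
    # Soft indicator of q in S_{D,i}: ||q - p_i|| <= h_i * tan(theta).
    inside = torch.sigmoid(sharpness * (h * math.tan(THETA) - dist))
    P_H = torch.exp(K * (H_OPT - h)) * (h / H_OPT) ** (K * H_OPT)
    uncovered = torch.prod(1.0 - inside * P_H, dim=1)    # product over drones
    return torch.sum((1.0 - uncovered) * rho) * cell_area

u_D = torch.rand(2, 3, requires_grad=True)               # m = 2 drones
# surveillance_reward(...).backward() yields d r / d u_D for gradient ascent.
```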
This paper studies the coverage game where agents allocate their resources to target spaces to maximize their coverage, and the goal of this paper is to (approximately) compute the Nash Equilibrium. The proposed method simulates the game by iteratively updating the best response, and the main contribution is an algorithm to approximate the gradient of the utility function with respect to the resource allocation (over the space). In particular, the paper proposes to decompose the gradient into two parts and estimate each part by discretization.
SP:c98c40dda3d811ff76816182962ccbed03693eb4
Balancing Constraints and Rewards with Meta-Gradient D4PG
1 INTRODUCTION. Reinforcement Learning (RL) algorithms typically try to maximize an expected return objective (Sutton & Barto, 2018). This approach has led to numerous successes in a variety of domains, including board games (Silver et al., 2017), computer games (Mnih et al., 2015; Tessler et al., 2017), and robotics (Abdolmaleki et al., 2018). However, formulating real-world problems with only an expected return objective is often sub-optimal when tackling many applied problems, ranging from recommendation systems to physical control systems, which may include robots, self-driving cars, and even aerospace technologies. In many of these domains, a variety of challenges prevent RL from being utilized as the algorithmic solution framework. Recently, Dulac-Arnold et al. (2019) presented nine challenges that need to be solved to enable RL algorithms to be utilized in real-world products and systems. One of those challenges is handling constraints. All of the above domains may include one or more constraints related to cost, wear-and-tear, or safety, to name a few. Hard and Soft Constraints: There are two types of constraints encountered in constrained optimization problems, namely hard constraints and soft constraints (Boyd & Vandenberghe, 2004). Hard constraints are pairs of pre-specified functions and thresholds that require the functions, when evaluated on the solution, to respect the thresholds; as such, these constraints may limit the feasible solution set. Soft constraints are similar in the sense that they are also defined by pairs of pre-specified functions and thresholds; however, a soft constraint does not require the solution to satisfy the constraint. Instead, it penalizes the objective function (according to a specified rule) if the solution violates the constraint (Boyd & Vandenberghe, 2004; Thomas et al., 2017). Motivating Soft Constraints: In real-world products and systems, there are many examples of soft constraints, i.e., constraints that can be violated, where the violating behaviour is undesirable but not catastrophic (Thomas et al., 2017; Dulac-Arnold et al., 2020b). One concrete example is energy minimization in physical control systems. Here, the system may wish to reduce the amount of energy used by setting a soft constraint. Violating the constraint is inefficient, but not catastrophic to the system completing the task. In fact, there may be desirable characteristics that can only be attained if there are some constraint violations (e.g., a smoother/faster control policy). Another common setting is one where it is unclear how to set a threshold. In many instances, a product manager may desire to increase the level of performance on a particular product metric A, while ensuring that another metric B on the same product does not drop by 'approximately X%'. The value 'X' is often inaccurate and may not be feasible in many cases. In both of these settings, violating the threshold is undesirable, yet does not have catastrophic consequences. Lagrange Optimization: In the RL paradigm, a number of approaches have been developed to incorporate hard constraints into the overall problem formulation (Altman, 1999; Tessler et al., 2018; Efroni et al., 2020; Achiam et al., 2017; Bohez et al., 2019; Chow et al., 2018; Paternain et al., 2019; Zhang et al., 2020).
One popular approach is to model the problem as a Constrained Markov Decision Process (CMDP) (Altman, 1999). In this case, one method is to solve the following problem formulation: $\max_\pi J^\pi_R \ \text{s.t.}\ J^\pi_C \le \beta$, where $\pi$ is a policy, $J^\pi_R$ is the expected return, $J^\pi_C$ is the expected cost, and $\beta$ is a constraint violation threshold. This is often solved by performing alternating optimization on the unconstrained Lagrangian relaxation of the original problem (e.g., Tessler et al. (2018)), defined as $\min_{\lambda \ge 0} \max_\pi \; J^\pi_R + \lambda(\beta - J^\pi_C)$. The updates alternate between learning the policy and the Lagrange multiplier $\lambda$. In many previous constrained RL works (Achiam et al., 2017; Tessler et al., 2018; Ray et al., 2019; Satija et al., 2020), because the problem is formulated with hard constraints, there are some domains in each case where a feasible solution is not found. This could be due to approximation errors, noise, or the constraints themselves being infeasible. These real-world applications, along with empirical constrained RL research results, further motivate the need to develop a soft-constrained RL optimization approach. Ideally, in this setup, we would like an algorithm that satisfies the constraints while solving the task by maximizing the objective; if the constraints cannot be satisfied, the algorithm should find a good trade-off (that is, minimizing constraint violations while still maximizing the objective). In this paper, we extend the constrained RL Lagrange formulation to perform soft-constrained optimization by formulating the constrained RL objective as a nested optimization problem (Sinha et al., 2017) using meta-gradients. We propose MetaL, which utilizes meta-gradients (Xu et al., 2018; Zahavy et al., 2020) to improve the trade-off between reducing constraint violations and improving expected return. We focus on Distributed Distributional Deterministic Policy Gradients (D4PG) (Barth-Maron et al., 2018), a state-of-the-art continuous control RL algorithm, as the underlying algorithmic framework. We show that MetaL can capture an improved trade-off between expected return and constraint violations compared to the baseline approaches. We also introduce a second approach, called MeSh, that utilizes meta-gradients by adding additional representation power to the reward shaping function. Our main contributions are as follows: (1) We extend D4PG to handle constraints by adapting it to Reward Constrained Policy Optimization (RCPO) (Tessler et al., 2018), yielding Reward Constrained D4PG (RC-D4PG); (2) We present a soft-constrained meta-gradient technique: Meta-Gradients for the Lagrange multiplier learning rate (MetaL); (3) We derive the meta-gradient update for MetaL (Theorem 1); (4) We perform extensive experiments and investigative studies to showcase the properties of this algorithm. MetaL outperforms the baseline algorithms across domains, safety coefficients, and thresholds from the Real World RL suite (Dulac-Arnold et al., 2020b). 2 BACKGROUND. A Constrained Markov Decision Process (CMDP) is an extension of an MDP (Sutton & Barto, 2018) and consists of the tuple $\langle S, A, P, R, C, \gamma \rangle$, where $S$ is the state space; $A$ is the action space; $P : S \times A \to \Delta(S)$ is a function mapping states and actions to a distribution over next states; $R : S \times A \to \mathbb{R}$ is a bounded reward function; and $C : S \times A \to$
$\mathbb{R}^K$ is a $K$-dimensional function representing immediate penalties (or costs) relating to $K$ constraints. The solution to a CMDP is a policy $\pi : S \to \Delta(A)$, a mapping from states to a probability distribution over actions. This policy aims to maximize the expected return $J^\pi_R = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$ and satisfy the constraints $J^\pi_{C_i} = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t c_{i,t}\big] \le \beta_i,\ i = 1 \ldots K$. For the purposes of this paper, we consider a single constraint, i.e., $K = 1$, but this can easily be extended to multiple constraints. Meta-Gradients is an approach to optimizing hyperparameters, such as the discount factor, learning rates, etc., by performing online cross-validation while simultaneously optimizing the overall RL objective, such as the expected return (Xu et al., 2018; Zahavy et al., 2020). The goal is to optimize both an inner loss and an outer loss. The update of the parameters $\theta$ on the inner loss is defined as $\theta' = \theta + f(\tau, \theta, \eta)$, where $\theta \in \mathbb{R}^d$ corresponds to the parameters of the policy $\pi_\theta(a \mid s)$ and the value function $v_\theta(s)$ (if applicable). The function $f : \mathbb{R}^k \to \mathbb{R}^d$ is the gradient of the policy and/or value function with respect to the parameters $\theta$; it is a function of an $n$-step trajectory $\tau = \langle s_1, a_1, r_2, s_2, \ldots, s_n \rangle$ and meta-parameters $\eta$, is weighted by a learning rate $\alpha$, and is defined as $f(\tau, \theta, \eta) = \alpha \frac{dJ^{\pi_\theta}_{obj}(\theta, \tau, \eta)}{d\theta}$, where $J^{\pi_\theta}_{obj}(\theta, \tau, \eta)$ is the objective being optimized with respect to $\theta$. The idea is then to evaluate the performance of the new parameter value $\theta'$ on an outer loss, the meta-gradient objective. We define this objective as $J'(\tau', \theta', \bar{\eta})$, where $\tau'$ is a new trajectory, $\theta'$ are the updated parameters, and $\bar{\eta}$ is a fixed meta-parameter (which needs to be selected/tuned in practice). We then take the gradient of the objective $J'$ with respect to the meta-parameters $\eta$ to yield the outer-loss update $\eta' = \eta + \alpha_\eta \frac{\partial J'(\tau', \theta', \bar{\eta})}{\partial \eta}$. This gradient is computed as $\frac{\partial J'(\tau', \theta', \bar{\eta})}{\partial \eta} = \frac{\partial J'(\tau', \theta', \bar{\eta})}{\partial \theta'} \frac{\partial \theta'}{\partial \eta}$. The outer loss is essentially the objective we are trying to optimize; it could be a policy gradient loss, a temporal difference loss, a combination of the two, etc. (Xu et al., 2018; Zahavy et al., 2020). Meta-gradients have previously been used to learn intrinsic rewards for policy gradient methods (Zheng et al., 2018) and auxiliary tasks (Veeriah et al., 2019). Meta-gradients have also been used to adapt optimizer parameters (Young et al., 2018; Franceschi et al., 2017). In our setup, we consider the continuous control setting, provide the first implementation of meta-gradients for an algorithm that uses an experience replay, and focus on adapting meta-parameters that encourage soft constraint satisfaction while maximizing expected return. D4PG is a state-of-the-art continuous control RL algorithm with a deterministic policy (Barth-Maron et al., 2018). It is an incremental improvement to DDPG (Lillicrap et al., 2015). The overall objective of DDPG is to maximize $J(\theta_a, \theta_c) = \mathbb{E}[Q_{\theta_c}(s, a) \mid s = s_t, a = \pi_{\theta_a}(s_t)]$, where $\pi_{\theta_a}(s_t)$ is a deterministic policy with parameters $\theta_a$ and $Q_{\theta_c}(s, a)$ is an action-value function with parameters $\theta_c$. The actor loss is defined as $L_{actor} = \|\mathrm{SG}(\nabla_a Q_{\theta_c}(s_t, a_t)\,|_{a_t = \pi_{\theta_a}(s)} + a_{\theta_a, t}) - a_{\theta_a, t}\|^2$, where $\mathrm{SG}$ denotes a stop gradient.
The corresponding gradient update is $\nabla_{\theta_a} J(\theta_a) = \mathbb{E}[\nabla_a Q_{\theta_c}(s, a)\,\nabla_{\theta_a} \pi_{\theta_a}(s_t)]$. The critic is updated using the standard temporal-difference error loss: $L_{critic} = \big(r(s, a) + \gamma Q_T(s', \pi_T(s')) - Q_{\theta_c}(s, a)\big)^2$, where $Q_T, \pi_T$ are the target critic and actor networks, respectively. In D4PG, the critic is a distributional critic based on the C51 algorithm (Bellemare et al., 2017), and the agent is run in a distributed setup with multiple actors executed in parallel, $n$-step returns, and prioritized experience replay. We use the non-distributional critic update in our notation for ease of visualization and clarity for the reader.
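The following pseudocode-style sketch shows how one such meta-gradient step for the Lagrange multiplier's learning rate could be wired up. Here `estimate_objectives` is an assumed helper that evaluates the expected return and cost differentiably from a batch (in a functional style over a list of parameter tensors), and all names, step sizes, and details are illustrative rather than the authors' implementation:

```python
import torch

def metal_style_step(theta, lam, eta, batch, new_batch,
                     estimate_objectives, alpha=1e-3, alpha_eta=1e-4):
    """Schematic inner/outer step. theta: list of tensors with
    requires_grad=True; eta: scalar tensor with requires_grad=True."""
    J_R, J_C, beta = estimate_objectives(theta, batch)

    # Multiplier update whose learning rate eta is the meta-parameter, so
    # lam_new (and the inner update below) is a differentiable function of eta.
    lam_new = torch.clamp(lam + eta * (J_C - beta), min=0.0)

    # Inner update on the Lagrangian relaxation J_R + lam * (beta - J_C).
    inner_loss = -(J_R + lam_new * (beta - J_C))
    grads = torch.autograd.grad(inner_loss, theta, create_graph=True)
    theta_new = [p - alpha * g for p, g in zip(theta, grads)]

    # Outer (meta) loss on fresh data; its gradient w.r.t. eta flows through
    # theta_new, matching dJ'/d(eta) = (dJ'/d(theta')) (d(theta')/d(eta)).
    J_R2, J_C2, _ = estimate_objectives(theta_new, new_batch)
    outer_loss = -(J_R2 + lam_new.detach() * (beta - J_C2))
    grad_eta = torch.autograd.grad(outer_loss, eta)[0]

    eta_new = (eta - alpha_eta * grad_eta).detach().requires_grad_()
    return ([p.detach().requires_grad_() for p in theta_new],
            lam_new.detach(), eta_new)
```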
The paper focuses on soft-constrained RL techniques and proposes a meta-gradient approach for the same. It first extends the RCPO (Tessler et al.) algorithm using the methodology of DDPG (Lillicrap et al.) to propose an off-policy version of RCPO (called RC-D4PG). The main contribution of the work is the proposal of two new meta-gradient-based algorithms for the soft-constrained RL problem that are able to find a good trade-off between constraint violation and maximizing returns. The first proposed algorithm, MetaL, is based on a meta-learned adaptive update rule for the Lagrange multiplier's learning rate. The second algorithm is based on similar principles but instead focuses on adapting the reward-shaping update in a meta manner. The authors show empirical evidence of their methods' strengths on a set of continuous-control simulator tasks.
SP:5d4084ca5f3570dfd854aa399f2778e0b649f862
Out-of-Distribution Generalization Analysis via Influence Function
1 INTRODUCTION. Most machine learning systems assume that both training and test data are independently and identically distributed, which does not always hold in practice (Bengio et al. (2019)). Consequently, performance is often greatly degraded when the test data come from a different domain (distribution). A classical example is the problem of identifying cows and camels (Beery et al. (2018)), where empirical risk minimization (ERM, Vapnik (1992)) may classify images by background color instead of object shape. As a result, when the test domain is "out-of-distribution" (OOD), e.g., when the background color is changed, performance drops significantly. OOD generalization aims to obtain a predictor that is robust to this distribution shift. Suppose that we have training data collected from $m$ domains: $S = \{S^e : e \in \mathcal{E}_{tr}, |\mathcal{E}_{tr}| = m\}$, $S^e = \{z^e_1, z^e_2, \ldots, z^e_{n_e}\}$ with $z^e_i \sim P^e$, (1) where $P^e$ is the distribution corresponding to domain $e$, $\mathcal{E}_{tr}$ is the set of all available domains, including validation domains, and $z^e_i$ is a data point. The OOD problem we consider is to find a model $f_{OOD}$ such that $f_{OOD} = \arg\min_f \sup_{P^e \in \mathcal{E}_{all}} \ell(f, P^e)$, (2) where $\mathcal{E}_{all}$ is the set of all target domains and $\ell(f, P^e)$ is the expected loss of $f$ on domain $P^e$. Recent algorithms address this OOD problem by recovering invariant (causal) features and building the optimal model on top of these features, e.g., Invariant Risk Minimization (IRM, Arjovsky et al. (2019)), Risk Extrapolation (REx, Krueger et al. (2020)), Group Distributionally Robust Optimization (gDRO, Sagawa et al. (2019)), and Inter-domain Mixup (Mixup, Xu et al. (2020); Yan et al. (2020); Wang et al. (2020)). Most works evaluate on Colored MNIST (see 5.1 for details), where one can directly obtain the worst-domain accuracy over $\mathcal{E}_{all}$. Gulrajani & Lopez-Paz (2020) assembled many algorithms and multi-domain datasets, and found that OOD algorithms cannot outperform ERM on some domain generalization tasks, e.g., VLCS (Torralba & Efros (2011)) and PACS (Li et al. (2017)). This is not surprising, since these tasks only require high performance on certain domains, while an OOD algorithm is expected to learn truly invariant features and perform well on a large set of target domains $\mathcal{E}_{all}$. This phenomenon is described as an "accuracy-vs-invariance trade-off" in Akuzawa et al. (2019). Two questions arise in the min-max problem (2). First, previous works assume that there is sufficient diversity among the domains in $\mathcal{E}_{all}$; thus the supremum of $\ell(f, P^e)$ may be much larger than the average, which implies that ERM may fail to discover $f_{OOD}$. But in reality, we do not know whether this is true. If not, the distribution of $\ell(f, P^e)$ concentrates around the expectation of $\ell(f, P^e)$, and ERM is sufficient to find an invariant model for $\mathcal{E}_{all}$. Therefore, we call for a method to judge whether an OOD algorithm is needed. Second, how can we judge a model's OOD performance? Traditionally, one considers test domains $\mathcal{E}_{test} \subset \mathcal{E}_{tr}$ and uses the worst-domain accuracy over $\mathcal{E}_{test}$ (which we call test accuracy) to approximate the OOD accuracy. However, test accuracy is a biased estimate of OOD accuracy unless $\mathcal{E}_{tr}$ is close to $\mathcal{E}_{all}$. More seriously, it may be irrelevant or even negatively correlated with the OOD accuracy.
This phenomenon is not uncommon, especially when there are features that are spurious in $\mathcal{E}_{all}$ but show a strong correlation with the target in $\mathcal{E}_{tr}$. We give a toy example in Colored MNIST where the test accuracy fails to approximate the OOD accuracy; for more details, please refer to Section 5.1 and Appendix A.4. We choose three domains from Colored MNIST and use cross-validation (Gulrajani & Lopez-Paz (2020)) to select models, i.e., we take turns selecting a domain $S \in \mathcal{E}_{tr}$ as the test domain and training on the rest, and select the model with the maximal average test accuracy. Figure 1 shows the comparison between ERM and IRM. One can see that no matter which domain is the test domain, the ERM model uniformly outperforms the IRM model on the test domain; however, the IRM model achieves consistently better OOD accuracy. The shortcomings of test accuracy here are obvious, regardless of whether cross-validation is used. In short, naive use of test accuracy may result in a non-OOD model. To address this obstacle, we hope to find a metric that correlates better with a model's OOD property, even when $\mathcal{E}_{tr}$ is much smaller than $\mathcal{E}_{all}$ and the "worst" domain remains unknown. Without any assumption on $\mathcal{E}_{all}$, this goal is unattainable; we therefore assume that features that are invariant across $\mathcal{E}_{tr}$ are also invariant across $\mathcal{E}_{all}$. This assumption is necessary: otherwise, the only thing one can do is collect more domains. We therefore need to focus on what features the model has learnt; specifically, we want to check whether the model learns invariant features and avoids varying features. The influence function (Cook & Weisberg (1980)) can serve this purpose. The influence function was proposed to measure the parameter change when a data point is removed or upweighted by a small perturbation (details in 3.2). When modified to the domain level, it measures the influence of a domain, instead of a data point, on the model. Note that we are not emulating the change of the parameters when a domain is removed; instead, we care exactly about upweighting the domain by $\delta \to 0^+$ (specified later). Based on this, the variance of the influence function allows us to measure the OOD property and overcome the obstacle above. Contributions: We summarize our contributions here: (i) We introduce the influence function at the domain level and propose the index $V_{\gamma|\theta}$ (formula 6), based on the influence function of the model $f_\theta$. Our index can measure the OOD extent of the available domains, i.e., how different these domains (distributions) are. This measurement provides a basis for deciding whether to adopt an OOD algorithm and whether to collect more diverse domains; see Section 4.1 and Section 5.1.1 for details. (ii) We point out that the proposed index $V_{\gamma|\theta}$ can address the weakness of test accuracy. Specifically, for most OOD generalization problems, using test accuracy and our index together, we can discern the OOD property of a model; see Section 4.2 for details. (iii) We propose to use only a small but important part of the model to calculate the influence function. This overcomes the huge computational cost of inverting the Hessian. It is not merely for calculation efficiency and accuracy; it also coincides with our understanding that only these parameters capture what features a model has learnt (Section 4.3). We organize the paper as follows: Section 2 reviews related work and Section 3 introduces the preliminaries of OOD methods and the influence function.
Section 4 presents our proposal and detailed analysis, Section 5 shows our experiments, and the conclusion is given in Section 6. 2 RELATED WORK. The mismatch between the development dataset and the target domain is one major challenge in machine learning (Castro et al. (2020); Kuang et al. (2020)). Many works assume that the ground truth can be represented by a causal Directed Acyclic Graph (DAG), and they use the DAG structure to discuss the worst-domain performance (Rojas-Carulla et al. (2018); Peters et al. (2016); Subbaswamy et al. (2019); Bühlmann et al. (2020); Magliacane et al. (2018)). All these works employ multiple-domain data and causal assumptions to discover the parents of the target variable. Rojas-Carulla et al. (2018) and Magliacane et al. (2018) also apply this idea to the Domain Generalization and Multi-Task Learning settings. Starting from multiple-domain data rather than model assumptions, Arjovsky et al. (2019) propose Invariant Risk Minimization (IRM) to extract causal (invariant) features and learn an invariant optimal predictor on top of the causal features, analyzing the generalization properties of IRM from the view of sufficient dimension reduction (Cook (2009); Cook et al. (2002)). Ahuja et al. (2020) consider IRM as finding the Nash equilibrium of an ensemble game among several domains and develop a simple training algorithm. Krueger et al. (2020) derive Risk Extrapolation (REx) to extract invariant features and further derive a practical objective function via variance penalization. Xie et al. (2020) employ a framework from distributional robustness to interpret the benefit of REx compared to robust optimization (Ben-Tal et al. (2009); Bagnell (2005)). Besides, Adversarial Domain Adaptation (Li et al. (2018); Koyama & Yamaguchi (2020)) uses a discriminator to look for features that are independent of domains and uses these features for further prediction. The influence function is a classic method from the robust statistics literature (Robins et al. (2008; 2017); Van der Laan et al. (2003); Tsiatis (2007)). It can be used to track the impact of a training sample on the prediction. Koh & Liang (2017) propose a second-order optimization technique to approximate the influence function, verifying their method under different assumptions on the empirical risk, ranging from strictly convex and twice-differentiable to non-convex and non-differentiable losses. Koh et al. (2019) also estimate the effect of removing a subgroup of training points via the influence function, finding that the approximation computed by the influence function is correlated with the actual effect. The influence function has been used in many machine learning tasks. Cheng et al. (2019) propose an explanation method, Fast Influence Analysis, that employs influence functions on latent factor models to address the lack of interpretability of collaborative filtering approaches for recommender systems. Cohen et al. (2020) use influence functions to detect adversarial attacks. Ting & Brochu (2018) propose an asymptotically optimal sampling method via an asymptotically linear estimator and the associated influence function. Alaa & Van Der Schaar (2019) develop a model validation procedure that estimates the estimation error of causal inference methods. Besides, Fang et al.
(2020) leverages the influence function to select a subset of normal users who are influential for the recommendations.
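To convey the flavor of a domain-level influence computation, the sketch below upweights each training domain's loss and measures how much the resulting parameter changes spread out, restricting everything to a small top part of the model so the Hessian inverse stays tractable. It is a rough reading of the idea behind $V_{\gamma|\theta}$, not the paper's formula 6:

```python
import torch

def domain_influence_spread(top_params, domain_losses):
    """Variance-style spread of domain-level influences.

    top_params: parameters of the model's top part (e.g., the last layer).
    domain_losses: list of scalar losses, one per training domain, built on
    a graph that includes top_params.
    """
    avg = sum(domain_losses) / len(domain_losses)
    g = torch.autograd.grad(avg, top_params, create_graph=True)
    flat = torch.cat([x.reshape(-1) for x in g])

    # Dense Hessian of the average loss w.r.t. the top parameters only.
    rows = [torch.cat([h.reshape(-1) for h in
                       torch.autograd.grad(gi, top_params, retain_graph=True)])
            for gi in flat]
    H = torch.stack(rows)

    # Influence of upweighting domain e: -H^{-1} grad_e (Cook & Weisberg).
    influences = []
    for loss_e in domain_losses:
        g_e = torch.cat([x.reshape(-1) for x in
                         torch.autograd.grad(loss_e, top_params,
                                             retain_graph=True)])
        influences.append(torch.linalg.solve(H, -g_e))
    return torch.stack(influences).var(dim=0).sum()
```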
The authors study the problem of out-of-distribution (OoD) generalization. The key question the authors seek to answer is: when given access to data from multiple training environments, can one rely on test accuracy alone, or does one have to rely on new measures to estimate the out-of-distribution performance of the model? The authors develop a metric based on influence functions, which they claim is a better reflection of OoD accuracy than test accuracy. The proposed metric measures the variance in the model when the data from each environment is upweighted. The authors show that the proposed metric empirically correlates with the OoD performance of the models.
SP:2dcfc5ac82356d824b2c4892372c73e678924caa
Active Contrastive Learning of Audio-Visual Video Representations
1 INTRODUCTION. Contrastive learning of audio and visual representations has delivered impressive results on various downstream scenarios (Oord et al., 2018; Hénaff et al., 2019; Schneider et al., 2019; Chen et al., 2020). This self-supervised training process can be understood as building a dynamic dictionary per mini-batch, where "keys" are typically sampled randomly from the data. The encoders are trained to perform dictionary look-up: an encoded "query" should be similar to the value of its matching key and dissimilar to others. This training objective maximizes a lower bound of the mutual information (MI) between representations and the data (Hjelm et al., 2018; Arora et al., 2019). However, such lower bounds are tight only for sample sizes exponential in the MI (McAllester & Stratos, 2020), suggesting the importance of building a large and consistent dictionary across mini-batches. Recently, He et al. (2020) designed Momentum Contrast (MoCo), which builds a queue-based dictionary with momentum updates, achieving a large and consistent dictionary by decoupling the dictionary size from the GPU/TPU memory capacity. However, Arora et al. (2019) showed that simply increasing the dictionary size beyond a threshold does not improve (and can sometimes even harm) performance on downstream tasks. Furthermore, we find that MoCo can suffer when there is high redundancy in the data, because only relevant, and thus limited, parts of the dictionary are updated in each iteration, ultimately leading to a dictionary of redundant items (we show this empirically in Fig. 3). We argue that random negative sampling is largely responsible for this: a randomly constructed dictionary will contain more "biased keys" (similar keys that belong to the same class) and "ineffective keys" (keys that can be easily discriminated by the current model) than a carefully constructed one. Furthermore, this issue is aggravated when the dictionary size is large. In this paper, we focus on learning audio-visual representations of video data by leveraging the natural correspondence between the two modalities, which serves as a useful self-supervisory signal (Owens & Efros, 2018; Owens et al., 2016; Alwassel et al., 2019). Our starting point is contrastive learning (Gutmann & Hyvärinen, 2010; Oord et al., 2018) with momentum updates (He et al., 2020). (Code is available at https://github.com/yunyikristy/CM-ACC.) However, as we discussed above, there are both practical challenges and theoretical limits to the dictionary size. This issue is common to all natural data but is especially severe in video: successive frames contain highly redundant information, and from the information-theoretic perspective, the audio-visual channels of video data contain higher MI than images, because the higher dimensionality, i.e., temporal and multimodal, reduces the uncertainty between successive video clips. Therefore, a dictionary of randomly sampled video clips will contain highly redundant information, rendering contrastive learning ineffective. We therefore propose an actively sampled dictionary to obtain an informative and diverse set of negative instances. Our approach is inspired by active learning (Settles, 2009), which aims to identify and label only the maximally informative samples, so that one can train a high-performing classifier with minimal labeling effort.
We adapt this idea to construct a non-redundant dictionary with informative negative samples. Our approach, Cross-Modal Active Contrastive Coding (CM-ACC), learns discriminative audio-visual representations and achieves substantially better results on video data with a high amount of redundancy (and thus high MI). We show that our actively sampled dictionary contains negative samples from a wider variety of semantic categories than a randomly sampled dictionary. As a result, our approach can benefit from large dictionaries even when randomly sampled dictionaries of the same size start to have a deleterious effect on model performance. When pretrained on AudioSet (Gemmeke et al., 2017), our approach achieves new state-of-the-art classification performance on UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), and ESC50 (Piczak, 2015b). 2 BACKGROUND. Contrastive learning optimizes an objective that encourages similar samples to have more similar representations than dissimilar ones (called negative samples) (Oord et al., 2018): $\min_{\theta_f, \theta_h} \mathbb{E}_{x \sim p_X}\Big[-\log \frac{e^{f(x;\theta_f)^\top h(x^+;\theta_h)}}{e^{f(x;\theta_f)^\top h(x^+;\theta_h)} + e^{f(x;\theta_f)^\top h(x^-;\theta_h)}}\Big]$ (1). The samples $x^+$ and $x^-$ are drawn from the same distribution as $x \in X$, and are assumed to be similar and dissimilar to $x$, respectively. The objective encourages $f(\cdot)$ and $h(\cdot)$ to learn representations of $x$ such that $(x, x^+)$ have a higher similarity than all the other pairs $(x, x^-)$. We can interpret this as a dynamic dictionary look-up process: given a "query" $x$, it finds the correct "key" $x^+$ among the other, irrelevant keys $x^-$ in a dictionary. Denoting the query by $q = f(x)$, the correct key by $k^+ = h(x^+)$, and the dictionary of $K$ negative samples by $\{k_i = h(x_i)\},\ i \in [1, K]$, we can express equation 1 in a softmax form, $\min_{\theta_q, \theta_k} \mathbb{E}_{x \sim p_X}\big[-\log \frac{e^{q \cdot k^+/\tau}}{\sum_{i=0}^{K} e^{q \cdot k_i/\tau}}\big]$, where $\theta_q$ and $\theta_k$ are the parameters of the query and key encoders, respectively, and $\tau$ is a temperature term that controls the shape of the probability distribution computed by the softmax function. Momentum Contrast (MoCo) decouples the dictionary size from the mini-batch size by implementing a queue-based dictionary, i.e., current mini-batch samples are enqueued while the oldest are dequeued (He et al., 2020). It then applies momentum updates to the parameters of a key encoder $\theta_k$ with respect to the parameters of a query encoder, $\theta_k \leftarrow m\theta_k + (1 - m)\theta_q$, where $m \in [0, 1)$ is a momentum coefficient. Only the parameters $\theta_q$ are updated by back-propagation, while the parameters $\theta_k$ are defined as a moving average of $\theta_q$ with exponential smoothing. These two modifications allow MoCo to build a large and slowly-changing (and thus consistent) dictionary.
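For concreteness, a minimal sketch of the softmax-form objective with a queue-based dictionary and the momentum update is given below; encoder names, shapes, and hyperparameter values are illustrative, not the exact implementation:

```python
import torch
import torch.nn.functional as F

def moco_style_step(f_q, h_k, x_query, x_key, queue, m=0.999, tau=0.07):
    """One contrastive step with a queue dictionary and momentum key encoder.

    queue: (K, d) keys from past mini-batches, detached from the graph.
    """
    q = F.normalize(f_q(x_query), dim=1)             # (B, d) queries
    with torch.no_grad():
        k = F.normalize(h_k(x_key), dim=1)           # (B, d) positive keys

    l_pos = (q * k).sum(dim=1, keepdim=True)         # (B, 1)
    l_neg = q @ queue.t()                            # (B, K) negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)   # positive at index 0
    loss = F.cross_entropy(logits, labels)           # the softmax form above

    # Momentum update: theta_k <- m * theta_k + (1 - m) * theta_q.
    with torch.no_grad():
        for p_k, p_q in zip(h_k.parameters(), f_q.parameters()):
            p_k.mul_(m).add_((1.0 - m) * p_q)

    return loss, k   # k is enqueued into the dictionary; the oldest keys leave
```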
Theoretical Limitations of Contrastive Learning: Recent work provides theoretical analysis of the shortcomings of contrastive learning. McAllester & Stratos (2020) show that lower bounds on the MI are tight only for sample sizes exponential in the MI, suggesting that a large amount of data is required to achieve a tighter lower bound on the MI. He et al. (2020) empirically showed that increasing the number of negative samples improves the learned representations. However, Arora et al. (2019) showed that this does not always hold: excessive negative samples can sometimes hurt performance. Also, when the number of negative samples is large, the chance of sampling redundant instances increases, limiting the effectiveness of contrastive learning. One of our main contributions is to address this issue with active sampling of negative instances, which reduces redundancy and improves diversity, leading to improved performance on various downstream tasks. 3 APPROACH. 3.1 CROSS-MODAL CONTRASTIVE REPRESENTATION LEARNING. Our learning objective encourages the representations of audio and visual clips to be similar if they come from the same temporal block of a video. Let $A = \{a_0, \cdots, a_{N-1}\}$ and $V = \{v_0, \cdots, v_{N-1}\}$ be collections of audio and visual clips, where each pair $(a_i, v_i)$ comes from the same block of a video. We define query encoders $f^a$ and $f^v$ and key encoders $h^a$ and $h^v$ for audio and visual clips, respectively, with learnable parameters $\{\theta^a_q, \theta^v_q\}$ for the query encoders and $\{\theta^a_k, \theta^v_k\}$ for the key encoders. These encoders compute representations of audio and visual clips as queries and keys, $q^v = f^v(v^{query}),\ k^v = h^v(v^{key}),\ q^a = f^a(a^{query}),\ k^a = h^a(a^{key})$ (2). We train our encoders to perform cross-modal dictionary look-up, e.g., given a query video clip $v^{query}$, we find the corresponding audio clip $a^{key}$ from a dictionary $D^a$. Adapting MoCo (He et al., 2020) to our cross-modal setup, we implement a queue-based dictionary $D^a$ that stores keys of audio clips $\{k^a_i\}_{i=1}^{K}$, where $K$ is the dictionary size. We compute the contrastive loss and backpropagate the gradients only to the visual query encoder $f^v$, updating the parameters $\theta^v_q$. For the audio encoder $h^a$, we apply the momentum update (He et al., 2020), $\theta^a_k \leftarrow m\theta^a_k + (1 - m)\theta^a_q$ (3). The parameter $\theta^a_q$ is not updated in this contrastive coding step; we update it during the audio-to-visual step (analogous to the above with the modalities swapped). Here we explain the visual-to-audio step only; we perform bi-directional contrastive coding and train the whole model end-to-end. 3.2 ACTIVE SAMPLING OF NEGATIVE INSTANCES: UNCERTAINTY AND DIVERSITY. The quality of negative samples is crucial in contrastive learning. Existing work typically adopts random negative sampling. However, we want a diverse set of negative samples, so that comparisons between positive and negative pairs are the most informative they can be. Motivated by active learning (Settles, 2009), we propose a gradient-based active sampling approach to improve the quality of negative samples. In active learning, the learner chooses samples that seem maximally informative and queries an oracle for labels, to obtain an optimal solution with a minimal labeling budget. Adapting this to our setting, we can empower the learner to choose the maximally informative negative samples to construct a dictionary; the main question is how to measure the informativeness of samples without labels. One way to measure informativeness is through the lens of uncertainty: if a model is highly uncertain about its prediction for a sample, we can ensure the maximum update to the model by including the sample in a mini-batch (conversely, if the uncertainty is low for all samples in a mini-batch, the model update will be small). Ash et al. (2020) showed that gradients of a loss function with respect to the model's most confident predictions can approximate the uncertainty of samples, demonstrating its effectiveness in active learning.
They provide theoretical justification by showing that gradient norms of the last layer of a neural network with respect to pseudo-labels provide a lower bound on the gradient norms induced by any other labels. In this work, we use gradients of the last layer to measure uncertainty and encourage our model to include the samples with the highest gradient magnitudes in the dictionary. While the uncertainty of each individual sample is important, the diversity of samples is also a critical measure of informativeness. Intuitively, a model may be highly uncertain about samples from particular semantic categories, but constructing a mini-batch of samples from just those categories can severely bias gradients and ultimately lead to bad local minima. There are several principled approaches to ensure diversity, e.g., submodular optimization (Fujishige, 2005) and Determinantal Point Processes (DPP) (Macchi, 1975; Kulesza & Taskar, 2011). Unfortunately, those methods are typically inefficient because of the combinatorial search space (Nemhauser et al., 1978; Gilks et al., 1995). In this work, instead of using these expensive solutions, we opt for the fast solution of Ash et al. (2020) and use the initialization scheme of the k-MEANS++ seeding algorithm (Arthur & Vassilvitskii, 2007) to sample a diverse set of negative samples.
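A sketch of this selection procedure over gradient embeddings is shown below. Following our reading of the BADGE-style recipe, the norm of a candidate's last-layer gradient (taken w.r.t. its pseudo-label) encodes uncertainty, and k-means++ seeding spreads the picks out for diversity; details such as the choice of the first pick are our simplification:

```python
import numpy as np

def kmeanspp_negative_selection(grad_embed, num_neg, rng=np.random):
    """Select diverse, uncertain negatives from (N, d) gradient embeddings."""
    # Start from the most uncertain candidate (largest gradient norm).
    chosen = [int(np.argmax((grad_embed ** 2).sum(axis=1)))]
    d2 = ((grad_embed - grad_embed[chosen[0]]) ** 2).sum(axis=1)

    while len(chosen) < num_neg:
        # k-means++ seeding: pick proportionally to the squared distance
        # from the already-chosen set, favoring far-away (diverse) samples.
        new = int(rng.choice(len(d2), p=d2 / d2.sum()))
        chosen.append(new)
        d2 = np.minimum(d2, ((grad_embed - grad_embed[new]) ** 2).sum(axis=1))
    return chosen
```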
In this paper, the authors propose a cross-modal (audio-video) self-supervised representation learning method with a contrastive learning framework. To overcome the high redundancy in the negative samples, they propose an active negative sampling method. They use a gradient with respect to the pseudo label to measure the uncertainty of a negative sample. They use K-means clustering to maximize the negative sample diversity when constructing a new negative set for queueing. They show their method's efficacy on the public benchmarks: Kinetics, AudioSet for retraining, and UCF-101, HMDB-51, ESC-50 for downstream tasks. 
SP:df0e5190360b8dd9f9ddc35a6f7c57834f483fbb
Automatic Music Production Using Generative Adversarial Networks
1 INTRODUCTION. The development of home music production has brought significant innovations to the process of pop music composition. Software like Pro Tools, Cubase, and Logic, as well as MIDI-based technologies and digital instruments, provides a wide set of tools to manipulate recordings and simplify the composition process for artists and producers. After recording a melody, perhaps with the aid of a guitar or a piano, songwriters can now build up the arrangement one piece at a time, sometimes without needing professional musicians or formal music training. As a result, singers and songwriters, as well as producers, have started asking for tools that could facilitate, or to some extent even automate, the creation of full songs around their lyrics and melodies. To meet this new demand, the goal of designing computer-based environments to assist human musicians has become central in the field of automatic music generation (Briot et al., 2020). IRCAM OpenMusic (Assayag et al., 1999), Sony CSL-Paris FlowComposer (Papadopoulos et al., 2016), and Logic Pro X Easy Drummer are just some examples. In addition, further solutions based on deep learning techniques continue to be studied, such as RL-Duet (Jiang et al., 2020), a deep reinforcement learning algorithm for online accompaniment generation, or PopMAG, a transformer-based architecture which relies on a multi-track MIDI representation of music (Ren et al., 2020). A comprehensive review of the most relevant deep learning techniques applied to music is provided by Briot et al. (2020). Most of these strategies, however, suffer from the same critical issue, which makes them less appealing for commercial music production: they rely on a symbolic/MIDI representation of music. The approach proposed in this paper, instead, is a first attempt at automatically generating a euphonic arrangement (two or more sound patterns that produce a pleasing and harmonious piece of music) in the audio domain, given a musical sample encoded in a two-dimensional time-frequency representation (in particular, we opted for the Mel-spectrogram representation). Although arrangement generation has been studied in the context of symbolic audio, switching to Mel-spectrograms allows us to preserve the sound heritage of other musical pieces (allowing operations such as sampling) and is more suitable for real-life cases where, for instance, voice cannot be encoded in MIDI. We focused our attention on two tasks of increasing difficulty: (i) given a bass line, to create credible and on-time drums, and (ii) given a voice line, to output a new and euphonic musical arrangement. Incidentally, we found that, for training samples, our model was able to reconstruct the original arrangement quite well, even though no pairing between the Mel-spectrograms of the two domains was performed. By means of the Mel-spectrogram representation of music, we can treat the problem of automatically generating an arrangement or accompaniment for a specific musical sample as equivalent to an image-to-image translation task. For instance, given the Mel-spectrogram of an a cappella song, we may want to produce the Mel-spectrogram of the same song including a suitable arrangement. To solve this task, we tested an unpaired image-to-image translation strategy known as CycleGAN (Zhu et al., 2017), which consists of translating an image from a source domain X to a target domain Y in the absence of paired examples, by training both the mapping from X to Y and from Y to X simultaneously, with the goal of minimizing a cycle consistency loss.
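As a rough sketch of such an objective on Mel-spectrogram "images": the generator and discriminator names below are illustrative, and the least-squares adversarial term is one common choice rather than necessarily the one used here:

```python
import torch
import torch.nn.functional as F

def cyclegan_losses(G, F_back, D_X, D_Y, x, y, lam=10.0):
    """x: source Mel-spectrograms (e.g., voice only); y: target (arranged).

    G: X -> Y and F_back: Y -> X are the two generators; D_X and D_Y are
    the discriminators on each domain.
    """
    fake_y, fake_x = G(x), F_back(y)

    # Adversarial terms (least-squares GAN form).
    adv = F.mse_loss(D_Y(fake_y), torch.ones_like(D_Y(fake_y))) + \
          F.mse_loss(D_X(fake_x), torch.ones_like(D_X(fake_x)))

    # Cycle consistency: x -> G(x) -> F_back(G(x)) should recover x (and
    # symmetrically for y), which ties the two unpaired domains together.
    cyc = F.l1_loss(F_back(fake_y), x) + F.l1_loss(G(fake_x), y)
    return adv + lam * cyc
```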
The aforementioned system was trained on 5 s pop music samples (equivalent to 256×256 Mel-spectrograms) coming both from the Free Music Archive (FMA) dataset (Defferrard et al., 2017; 2018) and from the Demucs dataset (Défossez et al., 2019). The short sample duration does not affect the proposed methodology, at least with respect to the arrangement task we focus on, and inference can also be performed on full songs. Part of the dataset had to be pre-processed first, since the FMA songs lack source-separated channels (i.e., differentiated vocals, bass, drums, etc.). The required channels were extracted using Demucs (Défossez et al., 2019). The main innovations presented in this contribution are as follows: (i) treating music pieces as images, we developed a framework to automatically generate music arrangements in the Mel-frequency domain, different from any previous approach; (ii) our approach is able to generate arrangements with low computational resources and limited inference time compared to other popular solutions for automatic music generation (Dhariwal et al., 2020); (iii) we developed a metric – partially based on, and correlated to, human (and expert) judgement – to automatically evaluate the obtained results and the creativity of the proposed system, given the challenges of a quantitative assessment of music. To the best of our knowledge, this is the first work to tackle automatic arrangement production in the audio domain by leveraging a two-dimensional time-frequency representation. 2 RELATED WORKS. The interest surrounding automatic music generation, translation, and arrangement has greatly increased in the last few years, as evidenced by the large number of proposed solutions – see (Briot et al., 2020) for a comprehensive and detailed survey. Here we present a brief overview of the key contributions in both the symbolic and the audio domain. Music generation & arrangement in the symbolic domain. There is a very large body of research that uses a symbolic representation of music to perform music generation and arrangement. The following contributions used MIDI, piano rolls, and chord and note names to feed several deep learning architectures and tackle different aspects of the music generation problem. In (Yang et al., 2017), CNNs are used for generating melody as a series of MIDI notes, either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars. In (Mangal et al., 2019; Jaques et al., 2016; Mogren, 2016; Makris et al., 2017), LSTM networks are used to generate musical notes, melodies, polyphonic music pieces, and long drum sequences, under constraints imposed by metrical rhythm information and a given bass sequence. The authors of (Yamshchikov & Tikhonov, 2017; Roberts et al., 2018), instead, use VAE networks to generate melodies. In (Boulanger-Lewandowski et al., 2012), symbolic sequences of polyphonic music are modeled in a completely general piano-roll representation, while the authors of (Hadjeres & Nielsen, 2017) propose a novel architecture to generate melodies satisfying positional constraints in the style of the soprano parts of the J.S. Bach chorale harmonisations encoded in MIDI.
In (Johnson, 2017), RNNs are used for the prediction and composition of polyphonic music; in (Hadjeres et al., 2017), highly convincing chorales in the style of Bach were automatically generated using note names; (Lattner et al., 2018) imposed higher-level structure on generated polyphonic music, whereas (Mao et al., 2018) designed an end-to-end generative model capable of composing music conditioned on a specific mixture of composer styles. The approach described in (Hawthorne et al., 2018), instead, relies on notes as an intermediate representation for a suite of models – namely, a transcription model based on a CNN and an RNN (Hawthorne et al., 2017), a self-attention-based music language model (Huang et al., 2018), and a WaveNet model (Oord et al., 2016) – capable of transcribing, composing, and synthesizing audio waveforms. Finally, (Zhu et al., 2018) proposes an end-to-end melody and arrangement generation framework, called XiaoIce Band, which generates a melody track with several accompaniments played by several types of instruments. As this extensive literature on music generation in the symbolic domain shows, a promising approach would be to work with symbolic music and then use state-of-the-art synthesizers to produce sounds. MIDI files, music sheets, and piano rolls, however, are not always easy to find or produce. Moreover, many musicians and artists cannot read music and would be more comfortable working in a less formalized setting. Finally, state-of-the-art synthesizers, although increasingly indistinguishable from live recordings, cannot yet reproduce the infinite nuances of real voices and instruments. Conversely, a raw audio representation can be more appealing for some creators, given its flexibility and the limited musical competence it requires. Music generation & arrangement in the audio domain. Some of the most relevant approaches proposed so far in the field of waveform music generation deal with raw audio representation in the time domain. Many of these approaches draw methods and ideas from the extensive literature on audio and speech synthesis. For instance, in (Prenger et al., 2019) a flow-based network capable of generating high-quality speech from mel-spectrograms is proposed, while in (Wang et al., 2019) the authors present a neural source-filter (NSF) waveform modeling framework that is straightforward to train and fast at generating waveforms. In (Zhao et al., 2020), recent neural waveform synthesizers such as WaveNet, WaveGlow, and the neural source-filter (NSF) models are compared. (Mehri et al., 2016) tested a model for unconditional audio generation based on generating one audio sample at a time, and (Bhave et al., 2019) applied Restricted Boltzmann Machine and LSTM architectures to raw audio files in the frequency domain in order to generate music. A fully probabilistic and autoregressive model, with the predictive distribution for each audio sample conditioned on all previous ones, is used in (Oord et al., 2016) to produce novel and often highly realistic musical fragments. (Manzelli et al., 2018) combined two types of music generation models, namely symbolic and raw audio models, to train a raw audio model based on the WaveNet architecture that incorporates the notes of the composition as a secondary input to the network. Finally, in (Dhariwal et al., 2020) the authors tackled the long context of raw audio using a multi-scale VQ-VAE to compress it to discrete codes, and modeled such context through Sparse Transformers, in order to generate music with singing in the raw audio domain.
Nonetheless, due to the computational resources required to directly model long-range dependencies in the time domain, such approaches can either generate only short samples of music or require complex, large architectures and long inference times. On the other hand, (Vasquez & Lewis, 2019) discusses a novel approach which proves that long-range dependencies can be modelled more tractably in two-dimensional time-frequency representations such as Mel-spectrograms. More precisely, the authors of this contribution designed a highly expressive probabilistic model and a multiscale generation procedure over Mel-spectrograms, capable of generating high-fidelity audio samples that capture structure at long timescales. It is worth recalling, as well, that treating spectrograms as images is the current standard for many Music Information Retrieval tasks, such as music transcription (Sigtia et al., 2016) and chord recognition. Generative adversarial networks for music generation. Our work is founded precisely on this insight, thus taking the best from the raw audio representation while tackling the main issues induced by the long-range dependencies of musical signals thanks to the waveform-to-spectrogram conversion. Such a two-dimensional representation of music paves the way to the application of several image processing techniques and image-to-image translation networks to carry out style transfer and arrangement generation (Isola et al., 2017; Zhu et al., 2017). It is worth recalling that the application of GANs to music generation tasks is not new: in (Brunner et al., 2018), Generative Adversarial Networks are applied to symbolic music to perform music genre transfer; however, to the best of our knowledge, GANs have never been applied to raw audio in the Mel-frequency domain for music generation purposes. As for the arrangement generation task, here too the large majority of approaches proposed in the literature is based on a symbolic representation of music: in (Ren et al., 2020), a novel multi-track MIDI representation (MuMIDI) is presented, which enables simultaneous multi-track generation in a single sequence and explicitly models the dependency of notes from different tracks by means of a Transformer-based architecture; in (Jiang et al., 2020), a deep reinforcement learning algorithm for online accompaniment generation is described. Coming to the most relevant issues in the development of music generation systems, both the training and the evaluation of such systems have proven challenging, mainly for the following reasons: (i) the available datasets for music generation tasks are challenging due to their inherent high entropy (Dieleman et al., 2018), and (ii) the definition of an objective metric and loss is a common problem for generative models such as GANs: as of now, generative models in the music domain are evaluated based on the subjective response of a pool of listeners, and only for the MIDI representation has a set of simple, musically informed objective metrics been proposed (Yang & Lerch, 2020).
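Since this spectrogram-as-image view underpins the whole pipeline, it is worth noting how cheaply it is obtained in practice. The sketch below (librosa, with illustrative parameter values rather than the settings used in the paper) converts a 5 s waveform into a roughly 256×256 log-Mel "image":

```python
import librosa
import numpy as np

# Load a 5-second excerpt and convert it to a log-scaled Mel-spectrogram.
y, sr = librosa.load("sample.wav", sr=22050, duration=5.0)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=256, hop_length=431)
log_mel = librosa.power_to_db(mel, ref=np.max)  # 2-D array, image-like
print(log_mel.shape)  # approximately (256, 256)
```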
In the paper, the authors adapt CycleGAN, a well-known model for unpaired image-to-image translation, to automatic music arrangement by treating Mel-spectrograms extracted from audio recordings as images. The authors also propose a novel evaluation metric, which learns how to rate generated audio from the ratings of (some) music experts. The authors make use of two large-scale datasets to train and evaluate the model in two scenarios, namely 1) generating a drum accompaniment given a bass line, and 2) generating an arrangement given a voice line. They report promising results on the first task; however, the model is not as successful on the second (more challenging) task.
SP:dc90daee29d8bea60a4033e06a9e36e660597ea2
Debiased Graph Neural Networks with Agnostic Label Selection Bias
1 INTRODUCTION. Graph Neural Networks (GNNs) are powerful deep learning algorithms on graphs with various applications (Scarselli et al., 2008; Kipf & Welling, 2016; Veličković et al., 2017; Hamilton et al., 2017). Existing GNNs mainly learn a node embedding by aggregating the features of its neighbors, and this message-passing framework is supervised by node labels in an end-to-end manner. During this training procedure, GNNs effectively learn the correlation of the structure pattern and node features with the node label, so that they are capable of learning the embeddings of new nodes and inferring their labels. One basic requirement for GNNs to make precise predictions on unseen test nodes is that the distributions of labeled training and test nodes are the same, i.e., the structure and features of labeled training and test nodes follow a similar pattern, so that the learned correlation between the current graph and the labels generalizes well to new nodes. In reality, however, there are two inevitable issues. (1) Because it is difficult to control graph collection in an unbiased environment, the relationship between the collected real-world graph and the labeled nodes is inevitably biased. Training on such a graph will induce a biased correlation with the node label. Taking a scientist collaboration network as an example, if most scientists with the “machine learning” (ML) label collaborate with those with the “computer vision” (CV) label, existing GNNs may learn the spurious correlation that scientists who cooperate with CV scientists are ML scientists. If a new ML scientist only connects with ML scientists or with scientists in other areas, they will probably be misclassified. (2) The test nodes are usually not available in real scenarios, implying that the distribution of new nodes is agnostic. Once this distribution is inconsistent with that of the training nodes, the performance of all current GNNs will suffer. Even though transfer learning can address distribution shift, it still needs a prior on the test distribution, which cannot be obtained beforehand. Therefore, agnostic label selection bias greatly affects the generalization ability of GNNs on unknown test data. In order to observe selection bias in real graph data, we conduct an experimental investigation to validate the effect of selection bias on GNNs (details in Section 2.1). We select training nodes with different degrees of bias for each dataset, making the distributions of training and test nodes inconsistent. The results clearly show that selection bias drastically hinders the performance of GNNs on unseen test nodes; moreover, the heavier the bias, the larger the performance drop. Furthermore, we theoretically analyze how data selection bias results in estimation bias in GNN parameters (details in Section 2.2). Based on the stable learning technique (Kuang et al., 2020), we can assume that the learned embeddings consist of two parts: stable variables and unstable variables. Data selection bias causes a spurious correlation between these two kinds of variables. We thereby prove that, with inevitable model misspecification, the spurious correlation will further cause parameter estimation bias. Once this weakness of current GNNs under selection bias is identified, one natural question is “how to remove the estimation bias in GNNs?”
In this paper, we propose a novel Debiased Graph Neural Network (DGNN) framework for stable graph learning that jointly optimizes a differentiated decorrelation regularizer and a weighted GNN model. Specifically, the differentiated decorrelation regularizer learns a set of sample weights under differentiated variable weights, so that the spurious correlation between stable and unstable variables is greatly reduced. Based on a causal-view analysis of the decorrelation regularizer, we theoretically prove that the weights of variables can be differentiated by the regression weights. Moreover, to better combine the decorrelation regularizer with GNNs, we prove that adding the regularizer to the embedding learned by the second-to-last layer is both theoretically sound and flexible. The sample weights learned by the decorrelation regularizer are then used to reweight the GNN loss so that the parameter estimation becomes unbiased. In summary, the contributions of this paper are three-fold: i) We investigate a new problem of learning GNNs with agnostic label selection bias. The problem setting is general and practical for real applications. ii) We bring the idea of variable decorrelation into GNNs to relieve the influence of bias on model learning, and propose a general framework, DGNN, which can be adapted to various GNNs. iii) We conduct experiments on real-world graph benchmarks with two kinds of agnostic label selection bias, and the experimental results demonstrate the effectiveness and flexibility of our model. 2 EFFECT OF LABEL SELECTION BIAS ON GNNS. In this section, we first formulate our target problem as follows: Problem 1 (Semi-supervised Learning on Graph with Agnostic Label Selection Bias). Given a training graph $G_{train} = \{A_{train}, X_{train}, Y_{train}\}$, where $A_{train} \in \mathbb{R}^{N \times N}$ ($N$ nodes) is the adjacency matrix, $X_{train} \in \mathbb{R}^{N \times D}$ ($D$ features) contains the node features, and $Y_{train} \in \mathbb{R}^{n \times C}$ ($n$ labeled nodes, $C$ classes) contains the labels available for training ($n \ll N$), the task is to learn a GNN $g_\theta(\cdot)$ with parameters $\theta$ that precisely predicts the labels of nodes on a test graph $G_{test} = \{A_{test}, X_{test}, Y_{test}\}$, where the distributions differ: $\Psi(G_{train}) \neq \Psi(G_{test})$. 2.1 EXPERIMENTAL INVESTIGATION. We conduct an experimental investigation to examine whether state-of-the-art GNNs are sensitive to selection bias. The main idea is to run two representative GNNs, GCN (Kipf & Welling, 2016) and GAT (Veličković et al., 2017), on three widely used graph datasets, Cora, Citeseer, and Pubmed (Sen et al., 2008), with different degrees of bias. If the performance drops sharply in comparison with the scenario without selection bias, this demonstrates that GNNs cannot generalize well under selection bias. To simulate the agnostic selection bias scenario, we first follow the inductive setting in Wu et al. (2019), masking the validation and test nodes to obtain the training graph $G_{train}$ used in the training phase, and then infer the labels of validation and test nodes on the whole graph $G_{test}$. In this way, the distribution of test nodes can be considered agnostic. Following Zadrozny (2004), we design a biased label selection method on the training graph $G_{train}$. A selection variable $e$ is introduced to control whether a node is selected as a labeled node, where $e = 1$ means selected and $0$ otherwise. For node $i$, we compute its neighbor distribution ratio $r_i = |\{j \mid j \in \mathcal{N}_i, y_j \neq y_i\}| / |\mathcal{N}_i|$, where $\mathcal{N}_i$ is the neighborhood of node $i$ in $G_{train}$ and $y_j \neq y_i$ means that the label of the central node $i$ differs from that of its neighbor $j$; thus $r_i$ measures how much the label of node $i$ differs from the labels in its neighborhood. We then average $r$ over all nodes to obtain a threshold $t$. For each node, the probability of being selected is
$$P(e_i = 1 \mid r_i) = \begin{cases} \epsilon, & r_i \geq t \\ 1 - \epsilon, & r_i < t \end{cases}$$
where $\epsilon \in (0.5, 1)$ controls the degree of selection bias, and a larger $\epsilon$ means heavier bias.
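For concreteness, this biased sampler can be written down directly. The numpy sketch below is an illustrative restatement of the procedure (it assumes every node has at least one neighbor, and `neighbors`/`labels` are hypothetical inputs drawn from the training graph), not the authors' code:

```python
import numpy as np

def biased_label_selection(neighbors, labels, eps=0.7, rng=np.random):
    """Select labeled nodes with the bias described above: nodes whose
    neighborhoods disagree with their own label are over-sampled when
    eps > 0.5. `neighbors[i]` lists the neighbors of node i in G_train."""
    n = len(labels)
    # Neighbor distribution ratio r_i: fraction of neighbors of node i
    # whose label differs from y_i.
    r = np.array([np.mean([labels[j] != labels[i] for j in neighbors[i]])
                  for i in range(n)])
    t = r.mean()  # threshold: average ratio over all nodes
    # P(e_i = 1 | r_i) = eps if r_i >= t, else 1 - eps.
    p = np.where(r >= t, eps, 1.0 - eps)
    return rng.random(n) < p  # boolean selection mask e
```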
We set $\epsilon$ to $\{0.7, 0.8, 0.9\}$ to obtain three bias degrees for each dataset, termed Light, Medium, and Heavy, respectively. We select 20 nodes per class for training, and the validation and test nodes are the same as in Yang et al. (2016). Furthermore, we take the unbiased datasets, where the labeled nodes are selected randomly, as baselines. Figure 1 shows the results of GCN and GAT on the biased datasets. The dashed lines denote the performance of GCN/GAT on the unbiased datasets, and the solid lines the results on the biased datasets. We can observe that: i) the dashed lines all lie above the corresponding coloured solid lines, indicating that selection bias greatly affects GNN performance; ii) all solid lines decrease monotonically with increasing bias degree, demonstrating that heavier bias causes a larger performance decrease. 2.2 THEORETICAL ANALYSIS. The above experiment empirically verifies the effect of selection bias on GNNs. Here we theoretically analyze its effect on estimating the parameters of GNNs. First, because biased labeled nodes have biased neighborhood structures, GNNs will encode this biased information into the node embeddings. Based on the stable learning technique (Kuang et al., 2020), we make the following assumption: Assumption 1. All the variables of the embeddings learned by GNNs for each node can be decomposed as $H = \{S, V\}$, where $S$ represents the stable variables and $V$ the unstable variables. Specifically, for both the training and test environments, $E(Y \mid S = s, V = v) = E(Y \mid S = s)$. Under Assumption 1, the distribution shift between the training and test sets is mainly induced by the variation in the joint distribution over $(S, V)$, i.e., $P(S_{train}, V_{train}) \neq P(S_{test}, V_{test})$. However, there is an invariant relationship between the stable variables $S$ and the outcome $Y$ in both training and test environments, which can be expressed as $P(Y_{train} \mid S_{train}) = P(Y_{test} \mid S_{test})$. Assumption 1 can be guaranteed by $Y \perp V \mid S$. Thus, one could solve the stable prediction problem by developing a function $f(\cdot)$ based on $S$. However, one can hardly identify such variables in GNNs. Without loss of generality, we take $Y$ as a continuous variable for the analysis and make the following assumption: Assumption 2. The true generation process of the target variable $Y$ contains not only a linear combination of the stable variables $S$, but also a nonlinear transformation of the stable variables.
Based on the above assumptions, we formalize the label generation process as follows:
$$Y = f(X, A) + \varepsilon = G(X, A; \theta_g)_S \beta_S + G(X, A; \theta_g)_V \beta_V + g(G(X, A; \theta_g)_S) + \varepsilon, \quad (1)$$
where $G(X, A; \theta_g) \in \mathbb{R}^{N \times p}$ denotes an unknown function of $X$ and $A$ that learns the node embeddings and can be realized by a GNN such as GCN or GAT; the output variables of $G(X, A; \theta_g)$ can be decomposed into stable variables $G(X, A; \theta_g)_S \in \mathbb{R}^{N \times m}$ and unstable variables $G(X, A; \theta_g)_V \in \mathbb{R}^{N \times q}$ ($m + q = p$); $\beta_S \in \mathbb{R}^{m \times 1}$ and $\beta_V \in \mathbb{R}^{q \times 1}$ are the linear coefficients, which can be learned by the last layer of the GNN; $\varepsilon$ is independent random noise; and $g(\cdot)$ is the nonlinear transformation of the stable variables. According to Assumption 1, the coefficients of the unstable variables $G(X, A; \theta_g)_V$ are actually 0 (i.e., $\beta_V = 0$). For a classical GNN model with a linear regressor, the prediction function can be formulated as:
$$\hat{Y} = \hat{G}(X, A; \theta_g)_S \hat{\beta}_S + \hat{G}(X, A; \theta_g)_V \hat{\beta}_V + \varepsilon. \quad (2)$$
Compared with Eq. (1), we find that the parameters of the GNN could be unbiasedly estimated if the nonlinear term $g(G(X, A; \theta_g)_S) = 0$, because the GNN model would then have the same label generation mechanism as Eq. (1). However, limited by the nonlinear power of GNNs (Xu et al., 2019), it is reasonable to assume that there is a nonlinear term $g(G(X, A; \theta_g)_S) \neq 0$ that cannot be fitted by the GNN. Under this assumption, we next take a vanilla GCN (Kipf & Welling, 2016) as an example to illustrate how the distribution shift induces parameter estimation bias. A two-layer GCN can be formulated as $\hat{A}\sigma(\hat{A}XW^{(0)})W^{(1)}$, where $\hat{A}$ is the normalized adjacency matrix, $W^{(\cdot)}$ is the transformation matrix at each layer, and $\sigma(\cdot)$ is the ReLU activation function. We decompose the GCN into two parts: the embedding learning part $\hat{A}\sigma(\hat{A}XW^{(0)})$, which can be decomposed as $[S^T, V^T]$, corresponding to $\hat{G}(X, A; \theta_g)_S$ and $\hat{G}(X, A; \theta_g)_V$ in Eq. (2); and the part $W^{(1)}$, whose learned parameters can be decomposed as $[\tilde{\beta}_S, \tilde{\beta}_V]$, corresponding to $[\hat{\beta}_S, \hat{\beta}_V]$ in Eq. (2). We aim at minimizing the square loss $\mathcal{L}_{GCN} = \sum_{i=1}^{n} (S_i^T \tilde{\beta}_S + V_i^T \tilde{\beta}_V - Y_i)^2$. According to the derivation rule of the partitioned regression model, we have:
$$\tilde{\beta}_V - \beta_V = \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} V_i^T V_i\Big)^{-1} \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} V_i^T g(S_i)\Big) + \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} V_i^T V_i\Big)^{-1} \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} V_i^T S_i\Big) (\beta_S - \tilde{\beta}_S), \quad (3)$$
$$\tilde{\beta}_S - \beta_S = \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} S_i^T S_i\Big)^{-1} \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} S_i^T g(S_i)\Big) + \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} S_i^T S_i\Big)^{-1} \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} S_i^T V_i\Big) (\beta_V - \tilde{\beta}_V), \quad (4)$$
where $n$ is the labeled node size, $S_i$ is the $i$-th sample of $S$, $\tfrac{1}{n}\sum_{i=1}^{n} V_i^T g(S_i) = E(V^T g(S)) + o_p(1)$, $\tfrac{1}{n}\sum_{i=1}^{n} V_i^T S_i = E(V^T S) + o_p(1)$, and $o_p(1)$ is a negligible error term. Ideally, $\tilde{\beta}_V - \beta_V = 0$ indicates that there is no bias between the estimated and the true parameters. However, if $E(V^T S) \neq 0$ or $E(V^T g(S)) \neq 0$ in Eq. (3), $\tilde{\beta}_V$ will be biased, leading to a biased estimate of $\tilde{\beta}_S$ in Eq. (4) as well. Since the correlation between $V$ and $S$ (or $g(S)$) might shift in the test phase, the biased parameters learned on the training set are not optimal for predicting test nodes. Therefore, to increase the stability of prediction, we need to unbiasedly estimate $\tilde{\beta}_V$ by removing the correlation between $V$ and $S$ (or $g(S)$) on the training graph, making $E(V^T S) = 0$ or $E(V^T g(S)) = 0$. Note that the term $\tfrac{1}{n}\sum_{i=1}^{n} S_i^T g(S_i)$ in Eq.
(4) can also cause estimation bias, but the relation between $S$ and $g(S)$ is stable across environments, so to some extent it does not influence the stability of prediction.
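Although DGNN's full regularizer differentiates variable weights via the regression coefficients, the core mechanism (learning sample weights that remove correlations between embedding variables) can be sketched compactly. The PyTorch snippet below is a simplified illustration in the spirit of stable learning (Kuang et al., 2020), not the authors' implementation; in particular, it drops the differentiated variable weights:

```python
import torch

def decorrelation_loss(H, w):
    """Penalize weighted correlation between every pair of embedding
    variables. H: (n, p) embeddings from the second-to-last GNN layer;
    w: (n,) non-negative sample weights (learnable parameters)."""
    w = w / w.sum()                      # normalize the sample weights
    mean = (w[:, None] * H).sum(dim=0)   # weighted mean of each variable
    Hc = H - mean                        # center the variables
    cov = (w[:, None] * Hc).T @ Hc       # weighted covariance matrix (p, p)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()         # zero iff all pairs decorrelated
```

The learned weights would then also reweight the GNN's supervised loss, e.g. `(w * per_node_loss).sum() + lam * decorrelation_loss(H, w)`, with `w` and the GNN parameters optimized jointly.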
This paper presents a novel method to remove the selection bias of graph data, which is neglected by previous methods. Specifically, the authors assume that all variables in the embeddings learned by GNNs can be decomposed into two parts: stable variables and unstable variables. Then DGNN, a framework with a differentiated decorrelation regularizer, is proposed to learn sample weights that decorrelate these variables and eliminate estimation bias. Experiments on three datasets confirm its effectiveness.
SP:bbada593ac1fae021d96b76f47f62772da50bdce
Latent Programmer: Discrete Latent Codes for Program Synthesis
1 INTRODUCTION. Our focus in this paper is program synthesis, one of the longstanding grand challenges of artificial intelligence research (Manna & Waldinger, 1971; Summers, 1977). The objective of program synthesis is to automatically write a program given a specification of its intended behavior, such as a natural language description or a small set of input-output examples. Search is an especially difficult challenge within program synthesis (Alur et al., 2013; Gulwani et al., 2017), and many different methods have been explored, including top-down search (Lee et al., 2018), bottom-up search (Udupa et al., 2013), beam search (Devlin et al., 2017), and many others (see Section 2). We take a different philosophy: can we learn a representation of programs specifically to help search? A natural way of representing a program is as a sequence of source code tokens, but the synthesis task requires searching over this representation, which can be difficult for longer, more complex programs. A programmer often starts by specifying the high-level components of a program as a plan, then fills in the details of each component; e.g., in string editing, a plan could be to extract the first name, then the last initial. We propose to use a sequence of latent variable tokens, called discrete latent codes, to represent such plans. Instead of having a fixed dictionary of codes, we let a model discover and learn which latent codes are useful and how to infer them from the specification. Our hypothesis is that a discrete latent code – a sequence of discrete latent variables – can be a useful representation for search (van den Oord et al., 2017; Roy et al., 2018; Kaiser et al., 2018). This is because we can employ standard methods from discrete search, such as beam search, over a compact space of high-level plans and then over programs conditioned on the plan, in a two-level procedure. We posit that the high-level search can help to organize the search over programs. In the string editing example above, a model could be confident that it needs to extract the last initial, but less sure about whether it needs to extract a first name. By changing one token in the latent code, two-level search can explore alternative programs that do different things at the beginning. In traditional single-level search, by contrast, the model would need to change multi-token prefixes of the alternatives, which is difficult to achieve within a limited search budget. We propose the Latent Programmer, a program synthesis method that uses learned discrete representations to guide search via two-level synthesis. The Latent Programmer is trained by a self-supervised learning principle. First, a discrete autoencoder is trained on a set of programs to learn discrete latent codes, and then an encoder is trained to map the specification of the synthesis task to these latent codes. Finally, at inference time, the Latent Programmer uses a two-level search: given the specification, the model first produces an L-best list of latent codes from the latent predictor, and uses them to synthesize potential programs. On two different program synthesis domains, we find empirically that the Latent Programmer improves synthesis accuracy by over 10% compared to standard sequence-to-sequence baselines such as RobustFill (Devlin et al., 2017). We also find that our method improves the diversity of predictions, as well as accuracy on long programs. 2 BACKGROUND.
Problem Setup. The goal in program synthesis is to find a program in a given language that is consistent with a specification. Formally, we are given a domain-specific language (DSL) which defines a space $\mathcal{Y}$ of programs. The task is described by a specification $X \in \mathcal{X}$ and is solved by some, possibly multiple, unknown program(s) $Y \in \mathcal{Y}$. For example, each specification can be a set of input/output (I/O) examples denoted $X = \{(I_1, O_1), \dots, (I_N, O_N)\}$. Then, we say that we have solved the specification $X$ if we find a program $Y$ which correctly solves all the examples: $Y(I_i) = O_i, \ \forall i = 1, \dots, N$. As another example, each specification can be a natural language description of a task, and the corresponding program implements said task. An example string transformation synthesis task with four I/O examples, together with a potential correct program in the string transformation DSL, is shown in Figure 1. Vector Quantization. Traditionally, neural program synthesis techniques process the input specification as a set of sequences and predict the output program token by token (Devlin et al., 2017). In this work, we present a new approach to synthesis that performs structured planning in latent space using a discrete code. We conjecture that programs have an underlying discrete structure; specifically, programs are compositional and modular, with components that get reused across different problems. Our approach leverages this structure to guide the search over large program spaces. Following works in computer vision (van den Oord et al., 2017; Roy et al., 2018), we discover such discrete structure by using a Vector Quantized Variational Autoencoder (VQ-VAE). VQ-VAEs work by feeding the intermediate representation of an autoencoder through a discretization bottleneck (van den Oord et al., 2017). For completeness, we provide background on VQ-VAEs below. In a VQ-VAE, latent codes are drawn from a discrete set of learned vectors $c \in \mathbb{R}^{K \times D}$, the codebook. Each element in the codebook can be viewed either as a token with id $k \in [K]$ or as an embedding $c_k \in \mathbb{R}^D$. To generate the discrete codes, the continuous autoencoder output $e$ is quantized via nearest-neighbor lookup into the codebook. Formally, the token id $q_k(e)$ and quantized embedding $q_c(e)$ are defined as
$$q_c(e) = c_{q_k(e)}, \quad \text{where} \quad q_k(e) = \arg\min_{k \in [K]} \|e - c_k\|_2. \quad (1)$$
For an input $x$, the training loss of a VQ-VAE consists of: a reconstruction loss for the encoder-decoder weights, a codebook loss that encourages codebook embeddings to be close to the continuous vectors which are quantized to them, and a commitment loss that encourages the encoded input $ec_\phi(x)$ to "commit" to codes, i.e., not switch which discrete code it is quantized to. The loss is given by
$$\mathcal{L}(c, \theta, \phi) = \log p_\theta(x \mid q_c(ec_\phi(x))) + \|\mathrm{sg}(ec_\phi(x)) - c\|_2^2 + \beta \|\mathrm{sg}(c) - ec_\phi(x)\|_2^2, \quad (2)$$
where $\theta, \phi$ are the parameters of the decoder and encoder, respectively, $\mathrm{sg}(\cdot)$ is the stop-gradient operator that prevents the operand from being updated by gradients, and $\beta$ controls the strength of the commitment loss. To stabilize training, van den Oord et al. (2017) also proposed removing the codebook loss and setting the codebook to an exponential moving average (EMA) of the encoded inputs.
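As a reference point, the nearest-neighbor quantization of Eq. (1), together with the straight-through gradient trick commonly paired with it, can be sketched in a few lines of PyTorch; this is a generic VQ-VAE sketch, not the paper's code:

```python
import torch

def vector_quantize(e, codebook):
    """Quantize encoder outputs e (n, D) to their nearest codebook
    entries (codebook: K, D), as in Eq. (1)."""
    dist = torch.cdist(e, codebook)        # pairwise L2 distances (n, K)
    ids = dist.argmin(dim=1)               # token ids q_k(e) in [K]
    quantized = codebook[ids]              # embeddings q_c(e)
    # Straight-through estimator: the forward pass uses `quantized`,
    # while gradients flow back to `e` unchanged.
    quantized = e + (quantized - e).detach()
    return ids, quantized
```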
3 SYNTHESIS WITH DISCRETE LATENT VARIABLES. We propose a two-level hierarchical approach to program synthesis that first performs high-level planning over an intermediate sequence, which is then used for fine-grained generation of the program. In our approach, a top-level module first infers a latent code, which is used by a low-level module to generate the final program. 3.1 HIERARCHY OF TWO TRANSFORMERS. Our proposed Latent Programmer (LP) architecture consists of two Transformers in a two-level structure. The architecture comprises two modules: a latent predictor, which produces a latent code that can be interpreted as a coarse sketch of the program, and a latent program decoder, which generates a program conditioned on the code. The latent code consists of discrete latent variables as tokens, which we arbitrarily denote TOK_1, ..., TOK_K, and whose meanings are assigned during training. Both components use a Transformer architecture due to its impressive performance on natural language tasks (Vaswani et al., 2017). To help the model assign useful meanings to the latents, we also leverage a program encoder, which is only used during training. The program encoder $ec(Y)$ encodes the true program $Y = [y_1, y_2, \dots, y_T]$ into a shorter sequence of discrete latent variables $Z = [z_1, z_2, \dots, z_S]$, represented as codebook entries; that is, each $z_i \in \mathbb{R}^D$ is one of $K$ entries in a codebook $c$. The latent sequence serves as the ground-truth high-level plan for the task. The function $ec(Y)$ is a Transformer encoder followed by a stack of convolutions of stride 2, each halving the length of the sequence. We apply the convolution $\ell$ times, which reduces a $T$-length program to a latent sequence of length $\lceil T/2^\ell \rceil$. This provides temporal abstraction, since the high-level planning actions are made only every $2^\ell$ steps. In summary, the program encoder is given by
$$ec(Y) \leftarrow h_\ell; \quad h_m \leftarrow \mathrm{Conv}(h_{m-1}) \ \text{for } m \in 1 \dots \ell; \quad h_0 \leftarrow \mathrm{TransformerEncoder}(Y). \quad (3)$$
Here $\mathrm{TransformerEncoder}(\cdot)$ applies a stack of self-attention and feed-forward units to the input embeddings via a residual path, described in detail by Vaswani et al. (2017). This will be used, along with the latent program decoder, as an autoencoder during training (see Section 3.2). The latent predictor $lp(X)$ autoregressively predicts a coarse latent code $lp(X) \in \mathbb{R}^{S \times K}$, conditioned on the program specification $X$. The latent predictor outputs a sequence of probabilities, which can be decoded using search algorithms such as beam search to generate a predicted latent code $Z'$. This differs from the program encoder, which outputs a single sequence $Z$, because we use the latent predictor to organize search over latent codes; at test time, we will obtain an L-best list of latent token sequences from $lp(X)$. The latent predictor is given by a stack of Transformer blocks with the specification $X$ as input. Similarly, the latent program decoder $d(Z, X)$ defines an autoregressive distribution over program tokens given the specification $X$ and the coarse plan $Z \in \mathbb{R}^{S \times K}$, represented as codebook entries. The decoder is a Transformer that jointly attends to the latent sequence and the program specification. This is performed via two separate attention modules, whose outputs are concatenated into the hidden unit. Formally, given a partially generated program $Y' = [y'_1, y'_2, \dots, y'_{t-1}]$
and the encoded specification $E = \mathrm{TransformerEncoder}(X)$, the latent program decoder computes
$$h_t = \mathrm{Concat}(\mathrm{TransformerDecoder}(Y', E)_{t-1}, \mathrm{TransformerDecoder}(Y', Z)_{t-1}), \quad (4)$$
where $\mathrm{TransformerDecoder}(x, y)$ denotes a Transformer decoder applied to outputs $y$ while attending to the input encoding $x$, and the subscript indexes an entry in the resulting output sequence. The distribution over output tokens is then given by $d_t(Z, X) = \mathrm{Softmax}(W(h_t))$, where $W$ is a learned parameter matrix. Finally, the latent program decoder defines a distribution over programs autoregressively as $p(Y \mid Z, X) = \prod_t p(y_t \mid y_{<t}, Z, X)$, where $p(y_t \mid y_{<t}, Z, X) = d_t(Z, X)$. When $X$ consists of multiple I/O examples, each example is encoded as $E_i = \mathrm{TransformerDecoder}(I_i, O_i)$. Then, a separate hidden state per I/O example is computed following Eq. (4), followed by a late max-pool to obtain the final hidden state. Note that the program encoder and latent program decoder make up a VQ-VAE model of programs, with additional conditioning on the specification. The complete LP architecture is summarized in Figure 2, and an end-to-end example run of our architecture is shown in Figure 4.
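At inference time, the two-level search described in Section 1 can be summarized as nested decoding. The Python sketch below uses hypothetical helpers `beam_search(model, inputs, k)` (returning the k best sequences under the model) and `run(program, inp)` (executing a program on an input), so it conveys the control flow rather than a runnable implementation:

```python
def two_level_search(spec, latent_predictor, program_decoder, L=10, beam=4):
    """Search latent codes first, then programs conditioned on each code.
    `spec` is a list of (input, output) examples."""
    best = None
    for code in beam_search(latent_predictor, spec, k=L):        # high level
        for prog in beam_search(program_decoder, (code, spec), k=beam):  # low level
            if all(run(prog, i) == o for i, o in spec):          # check I/O
                return prog        # first program consistent with the spec
            if best is None:
                best = prog        # remember the top-scored fallback
    return best
```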
This paper proposes a two-level hierarchical program synthesizer, the Latent Programmer, which first predicts a sequence of latent codes from given input-output examples, and then decodes the latent codes into a program. The sequence of latent codes can be viewed as a high-level synthesis plan guiding the subsequent low-level synthesis. The Latent Programmer significantly outperforms RobustFill on string manipulation tasks and achieves state-of-the-art results on Python code generation tasks.
SP:e42647e1efc0582b03c3fe8f1bb8c73d6403a97c