Representation Balancing Offline Model-based Reinforcement Learning
1 INTRODUCTION. Reinforcement learning (RL) has accomplished remarkable results in a wide range of domains, but its successes were mostly based on a large number of online interactions with the environment. However, in many real-world tasks, exploratory online interactions are either very expensive or dangerous (e.g. robotics, autonomous driving, and healthcare), and applying a standard online RL algorithm would be impractical. Consequently, the ability to optimize RL agents reliably without online interactions has been considered a key to practical deployment, which is the main goal of batch RL, also known as offline RL (Fujimoto et al., 2019; Levine et al., 2020). In an offline RL algorithm, accurate policy evaluation and reliable policy improvement are both crucial for the successful training of the agent. Evaluating policies in offline RL is essentially an off-policy evaluation (OPE) task, which aims to evaluate the target policy given the dataset collected from the behavior policy. The difference between the target and the behavior policies causes a distribution shift in the estimation, which needs to be adequately addressed for accurate policy evaluation. OPE itself is one of the long-standing hard problems in RL (Sutton et al., 1998; 2009; Thomas & Brunskill, 2016; Hallak & Mannor, 2017). However, recent offline RL studies mainly focus on how to improve the policy conservatively while using a common policy evaluation technique without much consideration for the distribution shift, e.g. mean squared temporal difference error minimization or maximum-likelihood training of an environment model (Fujimoto et al., 2019; Kumar et al., 2019; Yu et al., 2020). While conservative policy improvement helps the policy evaluation by reducing the off-policyness, we hypothesize that addressing the distribution shift explicitly during the policy evaluation can further improve the overall performance, since it provides a better foundation for policy improvement. To this end, we aim to explicitly address the distribution shift of the OPE estimator used in the offline RL algorithm. In particular, we focus on the model-based approach, where we train an environment model robust to the distribution shift. One of the notable prior works is Representation Balancing MDP (RepBM) (Liu et al., 2018b), which regularizes the representation learning of the model to be invariant between the distributions. However, despite the promising results of RepBM, its step-wise estimation of the distance between the distributions has a few drawbacks that limit the algorithm from being practical: not only does it assume a discrete-action task where the target policy is deterministic, but it also performs poorly in long-horizon tasks due to the curse of horizon of step-wise importance sampling (IS) estimators (Liu et al., 2018a). To address these limitations, we present the Representation Balancing with Stationary Distribution Estimation (RepB-SDE) framework, where we aim to learn a balanced representation by regularizing, in the representation space, the distance between the data distribution and the discounted stationary distribution induced by the target policy. Motivated by recent advances in estimating stationary distribution corrections, we present a new representation balancing objective to train a model of the environment that no longer suffers from the curse of horizon.
We empirically show that the model trained with the RepB-SDE objective is robust to the distribution shift in the OPE task, particularly when the difference between the target and the behavior policies is large. We also introduce a model-based offline RL algorithm based on the RepB-SDE framework and report its performance on the D4RL benchmark (Fu et al., 2020), showing state-of-the-art performance on a representative set of tasks.

2 RELATED WORK.

Learning balanced representations. Learning a representation invariant to specific aspects of the data is an established method for overcoming the distribution shift that arises in unsupervised domain adaptation (Ben-David et al., 2007; Zemel et al., 2013) and in causal inference from observational data (Shalit et al., 2017; Johansson et al., 2018). These works show that bounding the generalization error under distribution shift leads to an objective that learns a balanced representation, such that the training and the test distributions look similar. RepBM (Liu et al., 2018b) can be seen as a direct extension to the sequential case, which encourages the representation to be invariant under the target and behavior policies at each timestep.

Stationary distribution correction estimation (DICE). Step-wise importance sampling (IS) estimators (Precup, 2000) compute importance weights by taking the product of per-step distribution ratios. Consequently, these methods suffer from variance that grows exponentially in the length of the trajectories, a phenomenon called the curse of horizon (Liu et al., 2018a). Recently, techniques for computing a stationary DIstribution Correction Estimation (DICE) have made remarkable progress in effectively addressing the curse of horizon (Liu et al., 2018a; Nachum et al., 2019a; Tang et al., 2020; Zhang et al., 2020; Mousavi et al., 2020). DICE has also been used to explicitly address the distribution shift in online model-free RL, by directly applying IS to the policy and action-value objectives (Liu et al., 2019; Gelada & Bellemare, 2019). We adopt one of these estimation techniques, DualDICE (Nachum et al., 2019a), to measure the distance between the stationary distribution and the data distribution in the representation space.

Offline reinforcement learning. There are extensive studies on improving standard online model-free RL algorithms (Mnih et al., 2015; Lillicrap et al., 2016; Haarnoja et al., 2018) for stable learning in the offline setting. The main idea behind them is to conservatively improve the policy by (1) quantifying the uncertainty of the value function estimate, e.g. using bootstrapped ensembles (Kumar et al., 2019; Agarwal et al., 2020), and/or (2) constraining the optimized target policy to be close to the behavior policy (i.e. behavior regularization approaches) (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019; Lee et al., 2020). A notable exception is AlgaeDICE (Nachum et al., 2019b), which implicitly uses DICE to regularize the discounted stationary distribution induced by the target policy to stay inside the data support, similar to this work. On the other hand, Yu et al. (2020) argued that the model-based approach can be advantageous due to its ability to generalize predictions to states outside of the data support. They introduce MOPO (Yu et al., 2020), which uses truncated rollouts and penalized rewards for conservative policy improvement.
MOReL (Kidambi et al., 2020) trains a state-action novelty detector and uses it to penalize rewards in data-sparse regions. Matsushima et al. (2020), MOOSE (Swazinna et al., 2020) and MBOP (Argenson & Dulac-Arnold, 2020) guide their policy optimization using the behavior policy, similar to the behavior regularization approaches. Note that these aforementioned offline RL methods build on the standard approximate dynamic programming algorithm for action-value estimation (model-free) or on a maximum-likelihood environment model (model-based), without explicitly addressing the distribution shift in the estimator. In contrast, we augment the objective for model learning to obtain a robust model under the distribution shift, which is, to the best of our knowledge, the first such attempt in offline RL.

3 PRELIMINARIES.

A Markov Decision Process (MDP) is specified by a tuple M = ⟨S, A, T, R, d_0, γ⟩, consisting of state space S, action space A, transition function T : S × A → Δ(S), reward function R : S × A → Δ([0, r_max]), initial state distribution d_0, and discount rate γ. In this paper, we mainly focus on continuous state spaces S ⊆ R^{d_s} and conduct experiments on both discrete action spaces A = {a_0, ..., a_{n_a}} and continuous action spaces A ⊆ R^{d_a}. Given an MDP M and a policy π, which is a (stochastic) mapping from states to actions, a trajectory can be generated in the form s_0, a_0, r_0, s_1, a_1, r_1, ..., where s_0 ∼ d_0 and, for each timestep t ≥ 0, a_t ∼ π(s_t), r_t ∼ R(s_t, a_t), and s_{t+1} ∼ T(s_t, a_t). The goal of RL is to optimize or evaluate a policy based on the normalized expected discounted return:

R^π := (1 − γ) E_{M,π} [ Σ_{t=0}^∞ γ^t r_t ].

A useful and important concept throughout the paper is the discounted stationary distribution, which represents the long-term occupancy of states:

d^π(s, a) := (1 − γ) Σ_{t=0}^∞ γ^t Pr(s_t = s, a_t = a | M, π).

From the definition, it can be observed that R^π = E_{(s,a)∼d^π} [r(s, a)].

Offline RL and off-policy evaluation. In this paper, we focus on the offline RL problem, where the agent can only access a static dataset D = {(s_i, a_i, r_i, s′_i)}_{i=1}^N for the maximization of R^π. We consider a behavior-agnostic setting where we do not have any knowledge of the data collection process. We denote the empirical distribution of the dataset by d^D. Before improving the policy, we first aim to better evaluate R^π given a target policy π and a static dataset D, which corresponds to an off-policy evaluation (OPE) problem. We mainly focus on a model-based approach, where the algorithm first estimates the unknown dynamics T̂, R̂ using the dataset D. This defines an approximate MDP M̂ = ⟨S, A, T̂, R̂, d_0, γ⟩, with the approximate expected discounted return R̂^π := (1 − γ) E_{M̂,π} [ Σ_{t=0}^∞ γ^t r_t ] obtained from M̂. In this paper, we are interested in the MDP estimate M̂ that can effectively reduce the error in the evaluation of policy π, |R^π − R̂^π|. In order to do so, we need to learn a good representation for the model that results in a small OPE error. We assume a bijective representation function φ : S × A → Z, where Z ⊆ R^{d_z} is the representation space. We define the transition and the reward models in terms of the representation function φ, i.e. T̂ = T̂_z ◦ φ and R̂ = R̂_z ◦ φ. In practice, where we use neural networks for T̂ and R̂, z can be chosen to be the output of an intermediate hidden layer, so that φ is represented by the lower layers and T̂_z, R̂_z by the remaining upper layers.
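The identity R^π = E_{(s,a)∼d^π}[r(s, a)] above also suggests a simple way to draw samples from the discounted stationary distribution: run π from d_0 and stop after a Geometric(1 − γ) number of steps. A minimal sketch of this standard construction, assuming hypothetical env_step, policy, and d0_sampler callables (these names are illustrative, not from the paper):

```python
import numpy as np

def sample_discounted_stationary(env_step, policy, d0_sampler, gamma=0.99):
    """Draw one (s, a) pair from d^pi by stopping at a Geometric(1 - gamma) time,
    so that P(T = t) = (1 - gamma) * gamma^t for t = 0, 1, 2, ..."""
    t = np.random.geometric(1.0 - gamma) - 1  # numpy's geometric starts at 1
    s = d0_sampler()
    for _ in range(t):
        s = env_step(s, policy(s))
    return s, policy(s)
```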
We define d^π_φ(z) as the discounted stationary distribution on Z induced by d^π(s, a) under the representation function z = φ(s, a), and similarly for d^D_φ(z).
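Given the learned model M̂, the estimate R̂^π itself can be approximated by plain Monte Carlo rollouts, truncating the infinite discounted sum at a finite horizon. A hedged sketch, again with placeholder callables T_hat, R_hat, policy, and d0_sampler:

```python
import numpy as np

def model_based_ope(policy, T_hat, R_hat, d0_sampler, gamma=0.99,
                    n_rollouts=256, horizon=500):
    """Estimate R_hat^pi = (1 - gamma) * E_{M_hat, pi}[sum_t gamma^t r_t]
    by rolling out the target policy inside the learned model M_hat."""
    returns = []
    for _ in range(n_rollouts):
        s, ret, disc = d0_sampler(), 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            ret += disc * R_hat(s, a)
            disc *= gamma
            s = T_hat(s, a)
        returns.append((1.0 - gamma) * ret)
    return float(np.mean(returns))
```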
In this paper, the authors propose a model-based approach with representation balancing (RepB-SDE) to cope with the distribution shift in offline reinforcement learning. RepB-SDE learns a robust representation for the model-learning process by regularizing the distance between the data distribution and the discounted stationary distribution of the target policy in the representation space. RepB-SDE adopts the estimation technique of DualDICE; the novel point is that RepB-SDE plugs this technique into model-based representation learning and thereby yields an effective model-based offline RL algorithm.
SP:561e202e2167aa602c815441e8ca52992d81b03b
Information distance for neural network functions
1 INTRODUCTION. Deep neural networks can be trained to represent complex functions that describe sophisticated input-output relationships, such as image classification and machine translation. Because these functions are highly non-linear and are parameterized in high-dimensional spaces, there is relatively little understanding of the functions represented by deep neural networks. One could interpret deep models by linear approximations (Ribeiro et al., 2016), or from the perspective of piece-wise linear functions, as in (Arora et al., 2018). If the space of functions representable by neural networks admits a distance measure, it would be a useful tool for analyzing and gaining insight about neural networks. A major difficulty is the vast number of ways to parameterize a function, which makes it difficult to characterize the similarity of two given networks. Measuring similarity in the parameter space is straightforward but restricted to networks with the same structure. Measuring similarity at the output is likewise restricted to networks trained on the same task. Similarity of representations produced by intermediate layers of networks has proved more reliable and consistent (Kornblith et al., 2019), but it is not invariant to linear transformations and can fail in some situations, as shown in our experiments. In this paper, we provide a distance measure for functions based on information distance (Bennett et al., 1998), which is independent of the parameterization of the neural network. This also removes the arbitrariness of choosing “where” to measure the similarity in a neural network. Information distance has mostly been used in data mining (Cilibrasi & Vitányi, 2007; Zhang et al., 2007). Intuitively, information distance measures how much information is needed to transform one function into the other. We rely on prequential coding to estimate this quantity. Prequential coding can efficiently encode neural networks and datasets (Blier & Ollivier, 2018). If we regard prequential coding as a compression algorithm for neural networks, then the codelength gives an upper bound on the information quantity in a model. We propose a method for calculating an approximate version of the information distance with prequential coding for arbitrary networks. In this method, we use KL-divergence in prequential training and coding, which allows us to directly estimate the expected codelength without any sampling process. We then perform experiments to demonstrate that this information distance is invariant to the parameterization of the network while also being faithful to the intrinsic similarity of models. Using information distance, we are able to sketch a rough view of the space of deep neural networks and uncover the relationship between datasets and models. We also found that information distance can help us understand regularization techniques, measure the diversity of models, and predict a model's ability to generalize.

2 METHODOLOGY.

Information distance measures the difference between two objects by information quantity. The information distance between two functions f_A and f_B can be defined as (Bennett et al., 1998):

d(f_A, f_B) = max{ K(f_A | f_B), K(f_B | f_A) }    (1)

This definition makes use of Kolmogorov complexity: K(f_B | f_A) is the length of the shortest program that transforms f_A into f_B, and the information distance d is the larger length of the two directions.
(Note that this is not the only way to define information distance with Kolmogorov complexity; we settle on this definition for its simplicity.) Intuitively, this is the minimum number of bits needed to encode f_B with the help of f_A, or how much information is needed to know f_B if f_A is already known. Given two functions f_A : X → Y and f_B : X → Y′ defined on the same input space X, each parameterized by a neural network with weights θ_A and θ_B, we want to estimate the information distance between f_A and f_B. The Kolmogorov complexity term is estimated by the codelength of prequential coding, so what we get is an upper bound of d, which we denote by d_p (p for prequential coding).

2.1 ESTIMATING K(f_B | f_A) WITH PREQUENTIAL CODING

To send f_B to someone who already knows f_A, we generate predictions y_i from f_B using inputs x_i sampled from X. Assuming {x_i} is known, we can use prequential coding to send the labels {y_i}. If we send enough labels, the receiver can use {x_i, y_i} to train a model that recovers f_B. If f_A and f_B have something in common, i.e. K(f_B | f_A) < K(f_B), then with the help of f_A we can reduce the codelength used to transmit f_B. A convenient way of doing so is to use θ_A as the initial model in prequential coding. The codelength of k samples is:

L^preq(y_{1:k} | x_{1:k}) := − Σ_{i=1}^k log p_{θ̂_i}(y_i | x_{1:i}, y_{1:i−1})    (2)

where θ̂_i is the parameter of the model trained on {x_{1:i−1}, y_{1:i−1}}, and θ̂_1 = θ_A. With sufficiently large k, the function parameterized by θ̂_k converges to f_B. If both f_A and f_B are classification models, we can sample y from the output distribution of f_B. In this case, the codelength (2) not only transmits f_B, but also the k specific samples we draw from f_B. The information contained in these specific samples is Σ_{i=1}^k log p_{θ_B}(y_i | x_i). Because we only care about estimating K(f_B | f_A), using the “bits-back protocol” (Hinton & van Camp, 1993) the information of the samples can be subtracted from the codelength, resulting in an estimate of K(f_B | f_A) as L_k(f_B | f_A):

L_k(f_B | f_A) = − Σ_{i=1}^k log p_{θ̂_i}(y_i | x_{1:i}, y_{1:i−1}) + Σ_{i=1}^k log p_{θ_B}(y_i | x_i)    (3)

In practice, we want k sufficiently large that f_{θ̂_k} converges to f_B, for example by the criterion

E_x[ D_KL( f_B(x) || f_{θ̂_k}(x) ) ] ≤ ε    (4)

However, we found empirically that this often requires a large k, which can make the estimation using (3) unfeasible when the number of available x is small. Also, the exact value of (3) depends on the specific samples used, introducing variance into the estimation.

2.2 THE PRACTICAL INFORMATION DISTANCE d_p

We propose to directly estimate the expectation of L_k(f_B | f_A), which turns out to be much more efficient in the number of examples x, by leveraging infinite y samples. The expectation of the codelength E_{y_{1:k}}[L_k] over all possible samples y_{1:k} from f_B is:

E_{y_{1:k} ∼ f_B(x_{1:k})}[ L_k(f_B | f_A) ]
  = − Σ_{i=1}^k E_{y_{1:i}} log p_{θ̂_i}(y_i | x_{1:i}, y_{1:i−1}) + Σ_{i=1}^k E_{y_i} log p_{θ_B}(y_i | x_i)
  ≥ − Σ_{i=1}^k E_{y_i} log E_{y_{1:i−1}} p_{θ̂_i}(y_i | x_{1:i}, y_{1:i−1}) + Σ_{i=1}^k E_{y_i} log p_{θ_B}(y_i | x_i)
  = Σ_{i=1}^k D_KL( f_B(x_i) || E_{y_{1:i−1}} f_{θ̂_i}(x_i) )    (5)
  ≈ Σ_{i=1}^k D_KL( f_B(x_i) || f_{θ̄_i}(x_i) ) =: L′(f_B | f_A)    (6)

In (5), E_{y_{1:i−1}} f_{θ̂_i}(x_i) represents an infinite ensemble of models θ̂_i estimated from all possible samples y_{1:i−1}. We replace this ensemble with a single model θ̄_i that is directly trained on all the samples.
θ̄_i is trained using KL-divergence as the objective, which is equivalent to training with infinite samples (see Appendix A for the details from (5) to (6)). The expected codelength E[L_k] is thus related, via (6), to the KL-divergence between the output distributions of f_B and f_{θ̄_i}. Another interpretation of the above estimation is that we finetune model θ_A with an increasing number of outputs generated by θ_B, and aggregate the KL-divergence between the two models along the way. The more information f_A shares with f_B, the faster the KL-divergence decreases, resulting in a lower estimate of K(f_B | f_A). Now d_p is the approximation of (1) we propose in this paper:

d_p(f_A, f_B) := max{ L′(f_A | f_B), L′(f_B | f_A) }    (7)

2.3 PROPERTIES OF d_p

The information distance d in (1), applied to functions, defines a metric on the space of functions. We now check whether d_p satisfies the axioms of a metric:

1. d_p(f_A, f_B) = 0 ⇔ f_A = f_B: f_A = f_B if and only if they always produce the same predictions, which is equivalent to d_p(f_A, f_B) = 0.
2. d_p(f_A, f_B) = d_p(f_B, f_A): holds by definition.
3. d_p(f_A, f_B) ≤ d_p(f_A, f_C) + d_p(f_C, f_B): whether d_p keeps this property of d depends on the efficiency of prequential coding, which in turn depends on model optimization.

Another important property of the information distance d is its invariance with respect to the parameterization of the function f. We found that d_p is also largely invariant to the parameterization of the functions. d_p can be used to compare models trained differently, having different structures, or even trained on different tasks. The only condition is that both models have sufficient expressibility to approximate each other. There is also a connection between L_k(f_B | f_A) and the information transfer measure L_IT (Zhang et al., 2020):

L^k_IT(θ_n) = L^preq_{θ_0}(y_{1:k} | x_{1:k}) − L^preq_{θ_n}(y_{1:k} | x_{1:k})

As n → ∞, θ_n → θ_B, and when y_i ∼ f_B(x_i), we have

E[L^k_IT(θ_n)] = E[L^preq_{θ_A}(y_{1:k} | x_{1:k})] − E[L^preq_{θ_B}(y_{1:k} | x_{1:k})] = E[L_k(f_B | f_A)]    (8)

2.4 DATA-DEPENDENCY AND EQUIVALENT INTERPRETATIONS OF DATA.

In machine learning, we often only care about the output of a model on the data distribution of the task. Neural network models are trained on input data from a specific domain; for example, image classification models take natural images in RGB format as valid input. It would be meaningless to discuss the behavior of such a model on non-RGB images. This is an important distinction between a neural network function and a function in the mathematical sense, and it motivates a data-dependent formulation of the distance measure. In this paper, we limit our discussion to the distribution-dependent information distance:

d_p(f_A, f_B) = max{ K(f′_A | f_B), K(f′_B | f_A) }    (9)

where

f′_A = argmin_{f ∈ F^ε_A} K(f | f_B),  f′_B = argmin_{f ∈ F^ε_B} K(f | f_A)    (10)

are equivalencies of f_A and f_B in the following function family (defined for f_A or f_B in place of f):

F^ε = { f̃ | E_{x∼D}[ D_KL( f(x) || f̃(x) ) ] ≤ ε }    (11)

F^ε is a set containing all the functions producing outputs almost indistinguishable from those of f, in expectation over x drawn from the data distribution D. Because they produce almost identical outputs for x ∼ D, we call them equivalent interpretations of the data in D. Intuitively, this means that instead of transmitting f_B, we can transmit f′_B, which is equivalent to f_B on D, if f′_B can be transmitted in fewer bits.
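A minimal sketch of estimating L′(f_B | f_A) as in (6): finetune a copy of f_A on f_B's output distributions with a KL objective, accumulating the KL on each new chunk of inputs before training on it. The chunking, optimizer, and step counts are illustrative choices, not the paper's exact protocol:

```python
import copy
import torch
import torch.nn.functional as F

def L_prime(model_a, model_b, x_chunks, lr=1e-4, steps=25):
    """Approximate L'(f_B | f_A) = sum_i KL(f_B(x_i) || f_{theta_bar_i}(x_i))."""
    model = copy.deepcopy(model_a)       # theta_bar_1 = theta_A
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    total, seen = 0.0, []
    for x in x_chunks:
        with torch.no_grad():
            target = F.softmax(model_b(x), dim=-1)
        # KL on the new chunk *before* training on it (the codelength term)
        logp = F.log_softmax(model(x), dim=-1)
        total += F.kl_div(logp, target, reduction="sum").item()
        seen.append((x, target))
        # train on everything seen so far with the KL objective ("infinite samples")
        for _ in range(steps):
            opt.zero_grad()
            loss = sum(F.kl_div(F.log_softmax(model(xi), dim=-1), ti,
                                reduction="sum") for xi, ti in seen)
            loss.backward()
            opt.step()
    return total

def d_p(model_a, model_b, x_chunks):
    """Eq. (7): the larger of the two one-directional codelength estimates."""
    return max(L_prime(model_a, model_b, x_chunks),
               L_prime(model_b, model_a, x_chunks))
```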
A quick note on why data-dependency, here in the context of neural network models, does not break the definition of information distance: if f is a neural network trained on dataset D, then f is fully determined by the information in D plus a random seed (which carries negligible information). Introducing data-dependency enables us to approximate Kolmogorov complexity by coding samples drawn from the data distribution D; in other words, we can use the training set for coding.
This paper explores the problem of designing a distance measure on the space of functions parameterized by neural networks. Ideally, such a measure should be independent of the parameterization of the networks. The measure should also support quantifying the distance between networks with different structures and/or different underlying training tasks. The information distance meets this natural requirement. However, the information distance is computationally infeasible, as it is defined via Kolmogorov complexity.
SP:7689dac5ea0db9c2a021e33f03d8cdeb9b5e4290
Exploiting Verified Neural Networks via Floating Point Numerical Error
1 INTRODUCTION. Deep neural networks (DNNs) are known to be vulnerable to adversarial inputs (Szegedy et al., 2014): images, audio, or texts indistinguishable to human perception that cause a DNN to give substantially different results. This situation has motivated the development of network verification algorithms that claim to prove the robustness of a network (Bunel et al., 2020; Tjeng et al., 2019; Salman et al., 2019), specifically that the network produces identical classifications for all inputs in a perturbation space around a given input. Verification algorithms typically reason about the behavior of the network assuming real-valued arithmetic. In practice, however, the computation of both the verifier and the neural network is performed on physical computers that use floating point numbers and floating point arithmetic to approximate the underlying real-valued computations. This use of floating point introduces numerical error that can potentially invalidate the guarantees that the verifiers claim to provide. Moreover, the existence of multiple software and hardware systems for DNN inference further complicates the situation, because different implementations exhibit different numerical error characteristics. We present concrete instances where numerical error leads to unsound verification of real-valued networks. Specifically, we train robust networks on the MNIST and CIFAR10 datasets. We work with the MIPVerify complete verifier (Tjeng et al., 2019) and several inference implementations included in the PyTorch (Paszke et al., 2019) framework. For each implementation, we construct image pairs (x_0, x_adv), where x_0 is a brightness-modified natural image, such that the implementation classifies x_adv differently from x_0, x_adv falls in an ℓ∞-bounded perturbation space around x_0, and the verifier incorrectly claims that no such adversarial image x_adv exists for x_0 within the perturbation space. Moreover, we show that the incomplete verifier CROWN is also vulnerable to floating point error. Our method of constructing adversarial images is not limited to our setting; it is applicable to other verifiers that do not soundly model floating point arithmetic.

2 BACKGROUND AND RELATED WORK.

Training robust networks: Researchers have developed various techniques to train robust networks (Madry et al., 2018; Mirman et al., 2018; Tramer & Boneh, 2019; Wong et al., 2020). Madry et al. formulate the robust training problem as minimizing the worst-case loss within the input perturbation space and propose to train robust networks on data generated by the Projected Gradient Descent (PGD) adversary (Madry et al., 2018). In this work we consider robust networks trained with the PGD adversary.

Complete verification: The goal of complete verification (a.k.a. exact verification) methods is to either prove the property being verified or provide a counterexample to disprove it. Complete verification approaches have formulated the verification problem as a Satisfiability Modulo Theories (SMT) problem (Scheibler et al., 2015; Huang et al., 2017; Katz et al., 2017; Ehlers, 2017; Bunel et al., 2020) or as a Mixed Integer Linear Programming (MILP) problem (Lomuscio & Maganti, 2017; Cheng et al., 2017; Fischetti & Jo, 2018; Dutta et al., 2018; Tjeng et al., 2019). While SMT solvers are able to model exact floating point arithmetic (Rümmer & Wahl, 2010) or exact real arithmetic (Corzilius et al.,
2012), deployed SMT solvers for verifying neural networks all use inexact floating point arithmetic to reason about neural network inference, for efficiency reasons. MILP solvers work directly with floating point, do not attempt to exactly model real arithmetic, and therefore exhibit numerical error. Since floating point arithmetic is not associative, different neural network implementations may produce different results for the same neural network, implying that any sound verifier for this class of networks must reason about the specific floating point error characteristics of the neural network implementation at hand. To the best of our knowledge, no prior work formally recognizes the problem of floating point error in complete neural network verification or exploits floating point error to invalidate verification results.

Incomplete verification: On the spectrum of the tradeoff between completeness and scalability, incomplete methods (a.k.a. certification methods) aspire to deliver more scalable verification by adopting over-approximation, while admitting the inability to either prove or disprove the properties in certain cases. There is a large body of related research (Wong & Kolter, 2017; Weng et al., 2018; Gehr et al., 2018; Zhang et al., 2018; Raghunathan et al., 2018; Dvijotham et al., 2018; Mirman et al., 2018; Singh et al., 2019). Salman et al. (2019) unified most of the relaxation methods under a common convex relaxation framework. Their results suggest that there is an inherent barrier to tight verification via the layer-wise convex relaxations captured by their framework. We highlight that the floating point error of implementations that use a direct dot-product formulation has been accounted for in some certification frameworks (Singh et al., 2018; 2019) by maintaining upper and lower rounding bounds for sound floating point arithmetic (Miné, 2004). Such frameworks should be extensible to model numerical error in more sophisticated implementations like the Winograd convolution (Lavin & Gray, 2016), but the effectiveness of this extension remains to be studied. Most certification algorithms, however, have not considered floating point error and may be vulnerable to attacks that exploit this deficiency.

Floating point arithmetic: Floating point is widely adopted as an approximate representation of real numbers in digital computers. After each calculation, the result is rounded to the nearest representable value, which induces roundoff error. In the field of neural networks, the SMT-based verifier Reluplex (Katz et al., 2017) has been observed to produce false adversarial examples due to floating point error (Wang et al., 2018). The MILP-based verifier MIPVerify (Tjeng et al., 2019) has been observed to give NaN results when verifying pruned neural networks (Guidotti et al., 2020). Such floating point unsoundness occurs unexpectedly when running large-scale benchmarks; however, no prior work tries to systematically invalidate neural network verification results by exploiting floating point error. The IEEE-754 standard (IEEE, 2008) defines the semantics of operations and correct rounding behavior. On an IEEE-754 compliant implementation, computing a floating point expression in multiple steps that are equivalent over the reals may produce different final roundoff error, because rounding is performed after each step, which complicates the error analysis.
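Since floating point addition is not associative, a two-line example makes the point (a standalone illustration, not from the paper):

```python
import numpy as np

a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(0.1)
print((a + b) + c)  # ~0.1: a and b cancel exactly, then c is added
print(a + (b + c))  # 0.0: c is absorbed into b (ulp at 1e8 is 8), so nothing remains
```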
Research on estimating floating point roundoff error and verifying floating point programs has a long history and is actively growing (Boldo & Melquiond, 2017), but we are unaware of any attempt to apply these tools to obtain a sound verifier for any neural network inference implementation. Any such verifier must reason soundly about floating point errors in both the verifier and the neural network inference algorithm. The failure to incorporate floating point error in software systems has caused real-world disasters. For example, in 1992, a Patriot missile missed its target and led to casualties due to a floating point roundoff error in a time calculation (Skeel, 1992).

3 PROBLEM DEFINITION.

3.1 ADVERSARIAL ROBUSTNESS OF NEURAL NETWORKS.

We consider 2D image classification problems. Let y = NN(x; W) denote the classification confidence given by a neural network with weight parameters W for an input x, where x ∈ R^{m×n×c}_{[0,1]} is an image with m rows and n columns of pixels, each containing c color channels represented by floating point values in the range [0, 1], and y ∈ R^k is a logits vector containing the classification scores for each of the k classes. The class with the highest score is the classification result of the neural network. For a logits vector y and a target class number t, we define the Carlini-Wagner (CW) loss (Carlini & Wagner, 2017) as the score of the target class minus the maximal score of the other classes:

L_CW(y, t) = y_t − max_{i≠t} y_i    (1)

Note that x is classified as an instance of class t if and only if L_CW(NN(x; W), t) > 0, assuming no two classes have equal scores. Adversarial robustness of a neural network is defined for an input x_0 and a perturbation bound ε, such that the classification result is stable within the allowed perturbations:

∀x ∈ Adv_ε(x_0) : L_CW(NN(x; W), t_0) > 0,  where t_0 = argmax NN(x_0; W)    (2)

In this work we focus on ℓ∞-norm bounded perturbations:

Adv_ε(x_0) = { x | ‖x − x_0‖∞ ≤ ε ∧ min x ≥ 0 ∧ max x ≤ 1 }    (3)

3.2 FINDING ADVERSARIAL EXAMPLES FOR VERIFIED NETWORKS VIA EXPLOITING NUMERICAL ERROR.

Due to the inevitable presence of numerical error in both the network inference system and the verifier, the exact specification of NN(·; W) (i.e., a bit-level accurate description of the underlying computation) is not clearly defined in (2). We consider the following implementations of convolutional layers included in the PyTorch framework as our candidate definitions of the convolutional layers in NN(·; W); other layers use the default PyTorch implementation:

• NN_{C,M}(·; W): A matrix-multiplication-based implementation on x86/64 CPUs. The convolution kernel is copied into a matrix that describes the dot product to be applied to the flattened input for each output value.
• NN_{C,C}(·; W): The default convolution implementation on x86/64 CPUs.
• NN_{G,M}(·; W): A matrix-multiplication-based implementation on NVIDIA GPUs.
• NN_{G,C}(·; W): A convolution implementation using the IMPLICIT_GEMM algorithm from the cuDNN library (Chetlur et al., 2014) on NVIDIA GPUs.
• NN_{G,CWG}(·; W): A convolution implementation using the WINOGRAD_NONFUSED algorithm from the cuDNN library (Chetlur et al., 2014) on NVIDIA GPUs. It is based on the Winograd fast convolution algorithm (Lavin & Gray, 2016), which has much higher numerical error than the others.
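The implementation dependence is easy to observe directly: computing the same convolution by direct convolution and by an im2col-style matrix multiplication generally yields slightly different outputs. A small illustrative sketch (shapes and values are arbitrary):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 32, 32)
w = torch.randn(8, 16, 3, 3)

# Direct convolution
y1 = F.conv2d(x, w, padding=1)

# Matrix-multiplication-based convolution (im2col / unfold)
cols = F.unfold(x, kernel_size=3, padding=1)       # (1, 16*9, 32*32)
y2 = (w.view(8, -1) @ cols).view(1, 8, 32, 32)

print((y1 - y2).abs().max())  # nonzero in general: same math, different rounding
```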
For a given implementation NN_impl(·; W), our method finds pairs (x_0, x_adv), represented as single precision floating point numbers, such that:

1. x_0 and x_adv are in the dynamic range of images: min x_0 ≥ 0, min x_adv ≥ 0, max x_0 ≤ 1, and max x_adv ≤ 1.
2. x_adv falls in the perturbation space of x_0: ‖x_adv − x_0‖∞ ≤ ε.
3. The verifier claims that (2) holds for x_0.
4. x_adv is an adversarial image for the implementation: L_CW(NN_impl(x_adv; W), t_0) < 0.

Note that the first two conditions are accurately defined for any implementation compliant with the IEEE-754 standard, because the computation only involves element-wise subtraction and max-reduction, which incur no accumulated error. The Gurobi (Gurobi Optimization, 2020) solver used by MIPVerify operates with double precision internally. Therefore, to ensure that our adversarial examples satisfy the constraints considered by the solver, we also require that the first two conditions hold for x′_adv = float64(x_adv) and x′_0 = float64(x_0), the double precision representations of x_adv and x_0.
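Conditions 1, 2, and 4 can be checked mechanically; condition 3 is the verifier's claim. A hedged sketch of such a check, with nn_impl standing in for any of the implementations above:

```python
import numpy as np

def cw_loss(logits, t):
    """L_CW(y, t) = y_t - max_{i != t} y_i  (Eq. 1)."""
    return logits[t] - np.delete(logits, t).max()

def check_pair(x0, xadv, eps, nn_impl, t0):
    """Check conditions 1, 2 (in both precisions, as required for Gurobi) and 4."""
    for dtype in (np.float32, np.float64):
        a, b = x0.astype(dtype), xadv.astype(dtype)
        if not (a.min() >= 0 and a.max() <= 1 and b.min() >= 0 and b.max() <= 1):
            return False                      # condition 1 violated
        if not np.abs(b - a).max() <= dtype(eps):
            return False                      # condition 2 violated
    return cw_loss(nn_impl(xadv), t0) < 0     # condition 4
```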
In the recent literature there has been a rise in the number of papers that attempt to verify neural networks. The specification of the verification problem is often adapted to the application in mind. More specifically, for image classification networks, the problem is to prove that the output of the neural network does not flip under small perturbations to the pixel values. In a robotic setting, where the neural network operates in closed loop with the system dynamics, the problem is often safety and convergence to some goal state.
SP:3f8deffff011d2fb7cc8d38f8e7e28ede4e632ca
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization
1 INTRODUCTION. Machine learning achieves super-human performance on many tasks when the test data is drawn from the same distribution as the training data. However, when the two distributions differ, model performance can severely degrade, even to below-chance predictions (Geirhos et al., 2020). Tiny perturbations can derail classifiers, as shown by adversarial examples (Szegedy et al., 2014) and common corruptions in image classification (Hendrycks & Dietterich, 2019). Even new test sets collected from the same data acquisition pipeline induce distribution shifts that significantly harm performance (Recht et al., 2019; Engstrom et al., 2020). Many approaches have been proposed to overcome model brittleness in the face of input distribution changes. Robust optimization aims to achieve good performance on any distribution close to the training distribution (Goodfellow et al., 2015; Duchi et al., 2016; Madry et al., 2018). Domain generalization, on the other hand, tries to go one step further, to generalize to distributions potentially far away from the training distribution. The field of algorithmic fairness, meanwhile, primarily focuses on developing metrics to track and mitigate performance differences between different sub-populations or across similar individuals (Dwork et al., 2012; Corbett-Davies & Goel, 2018; Chouldechova & Roth, 2018). Like domain generalization, evaluation using data related to but distinct from the training set is needed to characterize model failure. These evaluations are curated through the design of audits, which play a central role in revealing unfair algorithmic decision making (Buolamwini & Gebru, 2018; Obermeyer et al., 2019). While the ultimate goals of domain generalization and algorithmic fairness are closely aligned, little research has focused on their similarities and how they can inform each other constructively. One of their main common goals can be characterized as: learning algorithms robust to changes across domains or population groups. Achieving this not only allows models to generalize to different, unobserved, but related distributions; it also mitigates unequal treatment of individuals solely based on group membership. In this work we explore independently developed concepts from the domain generalization and fairness literatures and exchange lessons between them to motivate new methodology for both fields. Inspired by fairness approaches for unknown group memberships (Kim et al., 2019; Hébert-Johnson et al., 2018; Lahoti et al., 2020), we develop a new domain generalization method that does not require domain identifiers and yet can outperform manual specification of domains (Table 1). Leveraging domain generalization insights in a fairness context, we show that the regularizer from IRMv1 (Arjovsky et al., 2019) optimizes a fairness criterion termed “group sufficiency”, which for the first time enables us to explicitly optimize this criterion for non-convex losses in fair classification. The following contributions show how lessons can be exchanged between the two fields:

• We draw several connections between the goals of domain generalization and those of algorithmic fairness, suggesting fruitful research directions in both fields (Section 2).
• Drawing inspiration from recent methods on inferring worst-case sensitive groups from data, we propose a novel domain generalization algorithm—Environment Inference for Invariant Learning (EIIL)—for cases where the training data does not include environment partition labels (Section 3). Our method outperforms IRM on the domain generalization benchmark ColorMNIST without access to environment labels (Section 4).
• We also show that IRM, originally developed for domain generalization tasks, affords a differentiable regularizer for the fairness notion of group sufficiency, which was previously hard to optimize for non-convex losses. On a variant of the UCI Adult dataset where confounding bias is introduced, we leverage this insight with our method EIIL to improve group sufficiency without knowledge of sensitive groups, ultimately improving generalization performance under large distribution shifts compared with a baseline robust optimization method (Section 4).
• We characterize both theoretically and empirically the limitations of our proposed method, concluding that while EIIL can correct a baseline ERM solution that uses a spurious feature or “shortcut” for prediction, it is not suitable for all settings (Sections 3 and 4).

2 DOMAIN GENERALIZATION AND ALGORITHMIC FAIRNESS.

Here we lay out some connections between the two fields. Table 2 provides a high-level comparison of the objectives and assumptions of several relevant methods. Loosely speaking, recent approaches from both areas share the goal of matching some chosen statistic across a conditioning variable e, representing sensitive group membership in algorithmic fairness or an environment/domain indicator in domain generalization. The statistic in question informs the learning objective for the resulting model, and is motivated differently in each case. In domain generalization, learning is informed by the properties of the test distribution where good generalization should be achieved. In algorithmic fairness, the choice of statistic is motivated by a context-specific fairness notion that likewise encourages a particular solution achieving “fair” outcomes (Chouldechova & Roth, 2018). Empty spaces in Table 2 suggest areas for future work, and bold-faced entries suggest connections we show in this paper.

Notation. Let X be the input space, E the set of environments (a.k.a. “domains”), and Y the target space. Let x, y, e ∼ p_obs(x, y, e) be observational data, with x ∈ X, y ∈ Y, and e ∈ E. H denotes a representation space, on which a classifier w ◦ Φ (mapping to the pre-image of Δ(Y) via a linear map w) can be applied. Φ : X → H denotes the parameterized mapping or “model” that we optimize. We refer to Φ(x) ∈ H as the “representation” of example x. ŷ ∈ Y denotes a hard prediction derived from the classifier by stochastic sampling or probability thresholding. ℓ : H × Y → R denotes the scalar loss, which guides the learning. The empirical risk minimization (ERM) solution is found by minimizing the global risk, expressed as the expected loss over the observational distribution:

C_ERM(Φ) = E_{p_obs(x,y,e)} [ ℓ(Φ(x), y) ]    (1)

Domain Generalization. Domain generalization is concerned with achieving low error rates on unseen test distributions. One way to achieve domain generalization is by casting it as a robust optimization problem (Ben-Tal et al.,
2009), where one aims to minimize the worst-case loss over every subset of the training set, or over other well-defined perturbation sets around the data (Duchi et al., 2016; Madry et al., 2018). Other approaches tackle domain generalization by adversarially learning representations invariant (Zhang et al., 2017; Hoffman et al., 2018; Ganin et al., 2016) or conditionally invariant (Li et al., 2018) to the environment. Distributionally Robust Optimization (DRO) (Duchi et al., 2016) seeks good performance on all nearby distributions by minimizing the worst-case loss sup_q E_q[ℓ] s.t. D(q || p) < ε, where D denotes a similarity measure between two distributions (e.g. χ² divergence) and ε is a hyperparameter. The objective can be computed as an expectation over p via per-example importance weights γ_i = q(x_i, y_i)/p(x_i, y_i). Recently, invariant learning approaches such as Invariant Risk Minimization (IRM) (Arjovsky et al., 2019) and Risk Extrapolation (REx) (Krueger et al., 2020) were proposed to overcome the limitations of domain-invariant representation learning (Zhao et al., 2019) by discovering invariant relationships between inputs and targets across domains. Invariance serves as a proxy for causality, as features representing “causes” of target labels rather than effects will generalize well under intervention. In IRM, a representation Φ(x) is learned that performs optimally within each environment—and is thus invariant to the choice of environment e ∈ E—with the ultimate goal of generalizing to an unknown test dataset p(x, y | e_test). Because optimal classifiers under standard loss functions can be realized via a conditional label distribution (f*(x) = E[y | x]), an invariant representation Φ(x) must satisfy the following Environment Invariance Constraint:

E[y | Φ(x) = h, e_1] = E[y | Φ(x) = h, e_2]  ∀h ∈ H, ∀e_1, e_2 ∈ E.    (EI-CONSTR)

Intuitively, the representation Φ(x) encodes features of the input x that induce the same conditional distribution over labels in each environment. Because trivial representations, such as mapping all x onto the same value, may satisfy environment invariance, other objectives must be introduced to encourage the predictive utility of Φ. Arjovsky et al. (2019) propose IRM as a way to satisfy (EI-CONSTR) while achieving a good overall risk. As a practical instantiation, the authors introduce IRMv1, a gradient-penalty regularized objective enforcing simultaneous optimality of the same classifier w ◦ Φ in all environments.¹ Denoting by R^e = E_{p_obs(x,y|e)}[ℓ] the per-environment risk, the objective to be minimized is

C_IRM(Φ) = Σ_{e∈E} [ R^e(Φ) + λ ‖∇_{w|w=1.0} R^e(w ◦ Φ)‖² ]    (2)

Krueger et al. (2020) propose the related Risk Extrapolation (REx) principle, which dictates a stronger preference to exactly equalize R^e for all e (e.g. by penalizing variance across e), and is shown to improve generalization in several settings.²
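A minimal PyTorch sketch of the IRMv1 objective (2), using the scalar dummy-classifier trick from Arjovsky et al. (2019); the model, batching, and λ value here are illustrative:

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    """Squared gradient norm of the per-environment risk w.r.t. a scalar
    dummy classifier w = 1.0 multiplying the logits."""
    w = torch.tensor(1.0, requires_grad=True)
    risk = F.cross_entropy(logits * w, y)
    grad, = torch.autograd.grad(risk, [w], create_graph=True)
    return grad.pow(2)

def irm_objective(model, env_batches, lam=100.0):
    """C_IRM(Phi) = sum_e [ R^e(Phi) + lam * penalty_e ], one batch per environment."""
    total = 0.0
    for x, y in env_batches:
        logits = model(x)
        total = total + F.cross_entropy(logits, y) + lam * irmv1_penalty(logits, y)
    return total
```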
Fairness. Early approaches to learning fair representations (Zemel et al., 2013; Edwards & Storkey, 2015; Louizos et al., 2015; Zhang et al., 2018; Madras et al., 2018) leveraged statistical independence regularizers from domain adaptation (Ben-David et al., 2010; Ganin et al., 2016), noting that marginal or conditional independence from domain to prediction relates to the fairness notions of demographic parity ŷ ⊥ e (Dwork et al., 2012) and equal opportunity ŷ ⊥ e | y (Hardt et al., 2016). Recall that (EI-CONSTR) involves an environment-specific conditional label expectation given a data representation, E[y | Φ(x) = h, e]. Objects of this type have been closely studied in the fair machine learning literature, where e now denotes a “sensitive” attribute indicating membership in a protected demographic group (age, race, gender, etc.), and the vector representation Φ(x) is typically replaced by a scalar score S(x) ∈ R. E[y | S(x), e] can now be interpreted as a calibration curve that must be regulated according to some fairness constraint. Chouldechova (2017) showed that equalizing this calibration curve across groups is often incompatible with a common fairness constraint, demographic parity, while Liu et al. (2019) studied “group sufficiency”—satisfied when E[y | S(x), e] = E[y | S(x), e′] ∀e, e′—for classifiers with strongly convex losses, concluding that ERM naturally finds group-sufficient solutions without fairness constraints. Because Liu et al. (2019) consider convex losses, their theoretical results do not hold for neural network representations. However, by noting the link between group sufficiency and the constraint from (EI-CONSTR), we observe that the IRMv1 regularizer (applicable to neural nets) in fact minimizes the group sufficiency gap in the case of a scalar representation Φ(x) ⊆ R, and when e indicates sensitive group membership. It is worth noting that Arjovsky et al. (2019) briefly discuss using groups as environments, but without specifying a particular fairness criterion. Reliance on sensitive demographic information is cumbersome, since it often cannot be collected without legal or ethical repercussions. Hébert-Johnson et al. (2018) discussed the problem of mitigating subgroup unfairness when group labels are unknown, and proposed Multicalibration as a way of ensuring a classifier's calibration curve is invariant to efficiently computable environment splits. Since the proposed algorithm requires brute-force enumeration over all possible environments/groups, Kim et al. (2019) suggested a more practical algorithm by relaxing the calibration constraint to an accuracy constraint, yielding a Multiaccurate classifier.³ The goal there is to boost the predictions of a pre-trained classifier through multiple rounds of auditing (searching for worst-case subgroups using an auxiliary model) rather than learning an invariant representation. A related line of work also leverages inferred subgroup information to improve worst-case model performance using the framework of DRO. Hashimoto et al. (2018) applied DRO to encourage long-term fairness in a dynamical setting where the average loss for a subpopulation influences their propensity to continue engaging with the model. Lahoti et al. (2020) proposed Adversarially Reweighted Learning, which extends DRO using an auxiliary model to compute the importance weights γ_i mentioned above. Amortizing this computation mitigates the tendency of DRO to overfit its reweighting strategy to noisy outliers. Wang et al. (2020) proposed a group DRO method for adaptively estimating soft group assignments, suitable for the setting where group labels are noisy.

¹ w ◦ Φ yields a classification decision via a linear weighting on the representation features.
² Analogous to REx, Williamson & Menon (2019) adapt Conditional Value at Risk (CVaR) (Rockafellar & Uryasev, 2002) to equalize risk across demographic groups.
³ Kearns et al. (2018) separately utilize boosting to equalize subgroup errors without sensitive attributes.
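Returning to the group-sufficiency notion above, here is a hedged sketch of an empirical sufficiency-gap estimate: bin the scores S(x), compare E[y | S(x) ∈ bin, e] across groups, and take the worst spread over bins. The quantile binning is an illustrative choice, not taken from any of the cited papers:

```python
import numpy as np

def sufficiency_gap(scores, y, groups, n_bins=10):
    """Max over score bins of the spread of E[y | S(x) in bin, e] across groups e."""
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    bin_idx = np.clip(np.digitize(scores, edges[1:-1]), 0, n_bins - 1)
    gap = 0.0
    for b in range(n_bins):
        rates = []
        for g in np.unique(groups):
            mask = (bin_idx == b) & (groups == g)
            if mask.any():
                rates.append(y[mask].mean())
        if len(rates) > 1:
            gap = max(gap, max(rates) - min(rates))
    return gap
```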
Limitations of generalization-first fairness. One exciting direction for future work is to apply methods developed in the domain generalization literature to tasks where distribution shift is related to some societal harm that should be mitigated. However, researchers should be wary of blind “solutionism”, which can be ineffectual or harmful when the societal context surrounding the machine learning system is ignored (Selbst et al., 2019). Moreover, many aspects of algorithmic discrimination are not simply a matter of achieving few errors on unseen distributions. Unfairness due to task definition or dataset collection, as discussed in the study of target variable selection by Obermeyer et al. (2019), may not be reversible by novel algorithmic developments.
The main contribution of the paper is to highlight the similarity between two active areas in ML, namely "domain generalization" and "fairness". Further, the paper proposes an approach for domain generalization inspired by recent developments in the fairness literature. The high-level idea is that, just as fair algorithms are able to improve the worst-case accuracy of predictors across different groups without knowing the sensitive attributes, these ideas can be applied to domain generalization when environment partitions are not known to the algorithm. In some sense, in both of these research areas the goal is to design robust algorithms. Conversely, the paper uses ideas from domain generalization to design fair algorithms w.r.t. a notion called "group sufficiency". The idea is to infer the "worst-case" subgroup (i.e., the one on which the algorithm has the worst accuracy) and then, through a round of auditing, improve the performance of the algorithm across all subgroups.
SP:efea29871d33fd89de348bc243a5ee0265b2e051
Learning Deep Latent Variable Models via Amortized Langevin Dynamics
1 INTRODUCTION. Latent variable models are widely used for generative modeling (Bishop, 1998; Kingma & Welling, 2013), principal component analysis (Wold et al., 1987), and factor analysis (Harman, 1976). To learn a latent variable model, it is essential to estimate the latent variables z from the observations x. Bayesian inference is a probabilistic approach to estimation, wherein the estimate is represented as a posterior distribution, i.e., p(z | x) = p(z) p(x | z) / p(x). A major challenge in the Bayesian approach is that the posterior distribution is typically intractable. Markov chain Monte Carlo (MCMC) methods such as Langevin dynamics (LD) provide sample approximations of the posterior distribution with an asymptotic convergence guarantee. However, MCMC methods converge slowly, so it is inefficient to perform time-consuming MCMC iterations for each latent variable, particularly for large-scale datasets. Furthermore, when we obtain new observations that we would like to perform inference for, we would need to re-run the sampling procedure for them. In the context of variational inference, a method to amortize the cost of datapoint-wise optimization, known as amortized variational inference (AVI) (Kingma & Welling, 2013; Rezende et al., 2014), was recently proposed. In this method, the optimization of datapoint-wise parameters of variational distributions is replaced with the optimization of an inference model that predicts the variational parameters from observations. This amortization enables posterior inference to be performed efficiently on large-scale datasets. In addition, inference for new observations can be performed efficiently using the optimized inference model. AVI is widely used for the training of deep generative models, and such models are known as variational autoencoders (VAEs). However, methods based on variational inference have less approximation power, because distributions with tractable densities are used for the approximations. Although there have been attempts to improve their flexibility (e.g., normalizing flows (Rezende & Mohamed, 2015; Kingma et al., 2016; Van Den Berg et al., 2018; Huang et al., 2018)), such methods typically impose constraints on the model architecture (e.g., invertibility in normalizing flows). Therefore, we propose an amortization method for LD: amortized Langevin dynamics (ALD).¹ In ALD, datapoint-wise MCMC iterations are replaced with updates of an inference model that maps observations into latent variables. This amortization enables simultaneous sampling from the posteriors over massive datasets. In particular, when minibatch training is used for the inference model, the computational cost is constant in the data size. Moreover, when inference is performed on new test data, the trained inference model can be used to initialize MCMC and improve mixing, because a properly trained inference model is expected to map data into the high-density area of the posteriors. We experimentally show that ALD can accurately sample from posteriors without datapoint-wise iterations. Furthermore, we demonstrate its applicability to the training of deep generative models. Neural networks are used for both the generative and inference models, yielding Langevin autoencoders (LAEs).

¹ An implementation is available at: https://bit.ly/2Shmsq3
LAEs can easily be extended to more flexible generative modeling, in which the latent prior distribution p(z) is also intractable and defined by an unnormalized energy function, by combining them with contrastive divergence learning (Hinton, 2002; Carreira-Perpinan & Hinton, 2005). We refer to this extension of LAEs as contrastive Langevin autoencoders (CLAEs). We experimentally show that our LAEs and CLAEs can generate sharper images than existing explicit generative models, such as VAEs. Moreover, we report their performance on unsupervised anomaly detection.

2 PRELIMINARIES.

2.1 PROBLEM DEFINITION.

Consider a probabilistic model with observations x, continuous latent variables z, and model parameters θ, as described by the probabilistic graphical model shown in Figure 1 (A). Although the posterior distribution over the latent variables is proportional to the product of the prior and the likelihood, p(z | x) = p(z) p(x | z) / p(x), it is intractable owing to the normalizing constant p(x) = ∫ p(z) p(x | z) dz. This study aims to approximate the posterior p(z | x) for all n observations x^(1), ..., x^(n) efficiently by obtaining samples from it.

2.2 LANGEVIN DYNAMICS.

Langevin dynamics (LD) (Neal, 2011) is a sampling algorithm based on the following Langevin equation:

dz = −∇_z U(x, z) dt + √(2β⁻¹) dB    (1)

where U is a potential function that is Lipschitz continuous and satisfies an appropriate growth condition, β is an inverse temperature parameter, and B is a Brownian motion. This stochastic differential equation has exp(−βU(x, z)) / ∫ exp(−βU(x, z′)) dz′ as its equilibrium distribution. We set β = 1 and define the potential as follows to obtain the target posterior p(z | x) as the equilibrium:

U(x, z) = − log p(z) − log p(x | z)    (2)

We can obtain samples from the posterior by simulating Eq. (1) using the Euler-Maruyama method (Kloeden & Platen, 2013):

z ← z′ ∼ N(z′; z − η ∇_z U(x, z), 2ηI)    (3)

where η is the step size of the discretization. When the step size is sufficiently small, the samples asymptotically converge to the target posterior as this sampling iteration is repeated. LD can be applied to any posterior inference problem over continuous latent variables, provided the potential energy is differentiable on the latent space. However, to obtain samples of the posterior p(z | x) for all observations x^(1), ..., x^(n), we must iterate Eq. (3) per datapoint, as shown in Figure 1 (B1). This is inefficient, particularly if the dataset is large. In the next section, we present a method that addresses this inefficiency by amortization.

Algorithm 1: Amortized Langevin dynamics (training time)
  φ ← initialize parameters
  Z^(1), ..., Z^(n) ← ∅  (initialize sample sets for all n datapoints)
  repeat
    φ ← φ′ ∼ N(φ′; φ − η_φ Σ_{i=1}^n ∇_φ U(x^(i), z^(i) = f_{z|x}(x^(i); φ)), 2η_φ I)
    Z^(i) ← Z^(i) ∪ {f_{z|x}(x^(i); φ)} for i = 1, ..., n  (add samples)
  until convergence of parameters
  return Z^(1), ..., Z^(n)

Algorithm 2: Amortized Langevin dynamics (test time)
  z ← f_{z|x}(x; φ*)  (initialize a sample using the trained inference model)
  Z ← ∅  (initialize a sample set)
  repeat
    z ← z′ ∼ N(z′; z − η ∇_z U(x, z), 2ηI)  (update the sample using traditional LD)
    Z ← Z ∪ {z}  (add samples)
  until convergence
  return Z

3 AMORTIZED LANGEVIN DYNAMICS.
3 AMORTIZED LANGEVIN DYNAMICS. In traditional LD, we perform MCMC iterations for each latent variable per datapoint. This is inefficient, particularly when managing massive datasets. As an alternative to simulating the latent dynamics directly, we define an inference model, $f_{z|x}$, which is a differentiable mapping from observations into latent variables, and consider the dynamics of its parameter φ as follows:
$$d\phi = -\sum_{i=1}^{n} \nabla_\phi U\big(x^{(i)}, z^{(i)} = f_{z|x}(x^{(i)}; \phi)\big)\,dt + \sqrt{2}\,dB. \qquad (4)$$
Because the function $f_{z|x}$ outputs latent variables, the stochastic dynamics on the parameter space induces other dynamics on the latent space, represented via the total derivative of $f_{z|x}$:
$$dz^{(i)} = \sum_{k=1}^{\dim\phi} \frac{\partial z^{(i)}}{\partial \phi_k}\, d\phi_k = -\sum_{k=1}^{\dim\phi} \frac{\partial z^{(i)}}{\partial \phi_k} \frac{\partial}{\partial \phi_k} U\big(x^{(i)}, f_{z|x}(x^{(i)}; \phi)\big)\,dt - \sum_{k=1}^{\dim\phi} \frac{\partial z^{(i)}}{\partial \phi_k} \sum_{j=1, j\neq i}^{n} \frac{\partial}{\partial \phi_k} U\big(x^{(j)}, f_{z|x}(x^{(j)}; \phi)\big)\,dt + \sqrt{2} \sum_{k=1}^{\dim\phi} \frac{\partial z^{(i)}}{\partial \phi_k}\, dB_k. \qquad (5)$$
Table 1: Quantitative comparison of the sample quality between traditional LD and our ALD. The mean squared error (MSE) between the true mean and the sample average, the effective sample size (ESS), and the Monte Carlo standard error (MSCE) are provided as evaluation metrics.
        MSE      ESS (1 std.)     MSCE
LD      0.00145  570.24 (25.25)   0.00136
ALD     0.00233  634.27 (40.94)   0.00151
The first term of Eq. (5) approximates $-\nabla_{z^{(i)}} U(x^{(i)}, z^{(i)})\,dt$ in Eq. (1), and the remaining terms introduce a random-walk behavior to the dynamics, as in the Brownian term of Eq. (1). For the simulation of Eq. (4), we use the Euler–Maruyama method, as in traditional LD:
$$\phi \leftarrow \phi' \sim \mathcal{N}\Big(\phi';\; \phi - \eta_\phi \sum_{i=1}^{n} \nabla_\phi U\big(x^{(i)}, z^{(i)} = f_{z|x}(x^{(i)}; \phi)\big),\; 2\eta_\phi I\Big), \qquad (6)$$
where $\eta_\phi$ is the step size. Through these iterations, posterior sampling is implicitly performed by collecting the outputs of the inference model for all datapoints in the training set, as described in Algorithm 1. When we perform inference for new test data, the trained inference model can be used as the initialization of an MCMC method (e.g., traditional LD), as shown in Algorithm 2, because a properly trained inference model is expected to map data into the high-density area of the posteriors. For minibatch training, we can substitute the minibatch statistics of m datapoints for the derivative over all n datapoints in Eq. (6):
$$\sum_{i=1}^{n} \nabla_\phi U\big(x^{(i)}, z^{(i)} = f_{z|x}(x^{(i)}; \phi)\big) \approx \frac{n}{m} \sum_{i=1}^{m} \nabla_\phi U\big(x^{(i)}, z^{(i)} = f_{z|x}(x^{(i)}; \phi)\big). \qquad (7)$$
In this case, we refer to the algorithm as stochastic gradient amortized Langevin dynamics (SGALD). SGALD enables us to sample from the posteriors of a massive dataset at a constant computational cost, whereas traditional LD requires a cost that grows linearly with the data size. For minibatch training of LD, adaptive preconditioning is known to be effective for improving convergence; this is referred to as preconditioned stochastic gradient Langevin dynamics (pSGLD) (Li et al., 2015). This preconditioning technique is also applicable to our SGALD, and we employ it throughout our experiments. Figure 2 shows a simple example of sampling from a posterior distribution whose prior and likelihood are defined using conjugate bivariate Gaussian distributions (see Appendix F for more details). ALD produces samples that match the shape of the target distributions well.
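A minimal sketch of the amortized update of Eq. (6) with the minibatch approximation of Eq. (7) follows. The linear inference model and the toy Gaussian potential are our own illustrative assumptions; the paper's actual inference models are neural networks.

```python
import torch

n, x_dim, z_dim = 1000, 2, 2
X = torch.randn(n, x_dim)                      # toy dataset
f = torch.nn.Linear(x_dim, z_dim)              # inference model f_{z|x}

def U(x, z, sigma_lik=0.5):
    # Potential of Eq. (2) for the same toy Gaussian model as above.
    log_prior = -0.5 * (z ** 2).sum(dim=-1)
    log_lik = -0.5 * ((x - z) ** 2).sum(dim=-1) / sigma_lik ** 2
    return -(log_prior + log_lik)

eta, m = 1e-4, 100
for step in range(2000):
    idx = torch.randint(0, n, (m,))            # minibatch of size m
    x = X[idx]
    z = f(x)
    # Eq. (7): rescale the minibatch gradient by n / m.
    loss = (n / m) * U(x, z).sum()
    grads = torch.autograd.grad(loss, list(f.parameters()))
    with torch.no_grad():
        for p, g in zip(f.parameters(), grads):
            # Eq. (6): phi <- phi - eta * grad + sqrt(2 * eta) * noise
            p.add_(-eta * g + (2 * eta) ** 0.5 * torch.randn_like(p))

Z = f(X).detach()   # outputs over the training set act as posterior samples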
The mean squared error (MSE) between the true mean and the sample average, the effective sample size (ESS), and the Monte Carlo standard error (MSCE) are provided for quantitative comparison, as shown in Table 1. It can be observed that the sample quality of ALD is competitive with that of standard LD, even though ALD does not directly update the samples in the latent space. Figure 3 shows the evolution of the sample values obtained by traditional LD and by our SGALD for posteriors defined by a simple univariate conjugate Gaussian (see Appendix F.2 for more experimental details). SGALD's samples converge much faster than traditional LD's. The advantage of our ALD over amortized variational inference (AVI) is the flexibility of the posterior approximation. Figure 4 shows an example where the likelihood $p(x \mid z)$ is defined using a neural network, so the posterior $p(z \mid x)$ is highly multimodal. AVI methods typically approximate posteriors using variational distributions that have tractable density functions (e.g., Gaussian distributions). Hence, their approximation power is limited by the choice of the variational distribution family, and they often fail to approximate such complex posteriors. By contrast, ALD can capture such posteriors well by obtaining samples. The results for other examples are summarized in Appendix F.
In this paper, the authors present an autoencoder framework, the Langevin autoencoder (LAE). Instead of datapoint-wise MCMC, the LAE collects samples from the posterior using amortized Langevin dynamics over a potential energy function. In the CLAE, an extended version of the LAE, the authors use an intractable energy function as the prior and collect samples from it using Langevin dynamics. The authors claim that LAEs and CLAEs are more efficient on large-scale data and have better performance compared with traditional autoencoders and variational autoencoders.
SP:ca9a9e8d0066ca55d4cd760df661bec09cdeb8eb
The Intrinsic Dimension of Images and Its Impact on Learning
1 INTRODUCTION. The idea that real-world data distributions can be described by very few variables underpins machine learning research from manifold learning to dimension reduction (Besold & Spokoiny, 2019; Fodor, 2002). The number of variables needed to describe a data distribution is known as its intrinsic dimension (ID). In applications such as crystallography, computer graphics, and ecology, practitioners depend on data having low intrinsic dimension (Valle & Oganov, 2010; Desbrun et al., 2002; Laughlin, 2014). The utility of low-dimensional representations has motivated a variety of deep learning techniques, including autoencoders and regularization methods (Hinton & Salakhutdinov, 2006; Vincent et al., 2010; Gonzalez & Balajewicz, 2018; Zhu et al., 2018). It is also known that dimensionality plays a strong role in learning function approximations and non-linear class boundaries. The exponential cost of learning in high dimensions is easily captured by the trivial case of sampling a function on a cube: in $d$ dimensions, sampling only the cube vertices would require $2^d$ measurements. Similar behaviors emerge in learning theory. It is known that learning a manifold requires a number of samples that grows exponentially with the manifold's intrinsic dimension (Narayanan & Mitter, 2010). Similarly, the number of samples needed to learn a well-conditioned decision boundary between two classes is an exponential function of the intrinsic dimension of the manifold on which the classes lie (Narayanan & Niyogi, 2009). Furthermore, these learning bounds have no dependence on the ambient dimension in which manifold-structured datasets live. In light of the exponentially large sample complexity of learning high-dimensional functions, the ability of neural networks to learn from image data is remarkable. Networks learn complex decision boundaries from small amounts of image data (often just a few hundred or thousand samples per class). At the same time, generative adversarial networks (GANs) are able to learn image "manifolds" from merely a few thousand samples. The seemingly low number of samples needed to learn these manifolds strongly suggests that image datasets have extremely low-dimensional structure. Despite the established role of low-dimensional data in deep learning, little is known about the intrinsic dimension of popular datasets and the impact of dimensionality on the performance of neural networks. Computational methods for estimating intrinsic dimension enable these measurements. We adopt tools from the dimension estimation literature to shed light on dimensionality in settings of interest to the deep learning community. Our contributions can be summarized as follows:
• We verify the reliability of intrinsic dimension estimation on high-dimensional data using generative adversarial networks (GANs), a setting in which we can a priori upper-bound the intrinsic dimension of generated data by the dimension of the latent noise vector.
• We measure the dimensionality of popular datasets such as MNIST, CIFAR-10, and ImageNet. In our experiments, we find that natural image datasets whose images contain thousands of pixels can, in fact, be described by orders of magnitude fewer variables. For example, we estimate that ImageNet, despite containing 224 × 224 × 3 = 150528 pixels per image, only has intrinsic dimension between 26 and 43; see Figure 1.
• We train classifiers on data, synthetic and real, of various intrinsic dimension and find that this variable correlates closely with the number of samples needed for learning. On the other hand, we find that extrinsic dimension, the dimension of the ambient space in which data is embedded, has little impact on generalization.
Together, these results put experimental weight behind the hypothesis that the unintuitively low dimensionality of natural images is being exploited by deep networks, and suggest that a characterization of this structure is an essential building block for a successful theory of deep learning.
2 RELATED WORK. While the hypothesis that natural images lie on or near a low-dimensional manifold is controversial, Goodfellow et al. (2016) argue that the low-dimensional manifold assumption is at least approximately correct for images, supported by two observations. First, natural images are locally connected, with each image surrounded by other highly similar images reachable through image transformations (e.g., contrast, brightness). Second, natural images seem to lie on a low-dimensional structure, as the probability distribution of images is highly concentrated; uniformly sampled pixels can hardly assemble a meaningful image. It is widely believed that the combination of natural scenes and sensor properties yields very sparse and concentrated image distributions, as has been supported by several empirical studies on image patches (Lee et al., 2003; Donoho & Grimes, 2005; Carlsson et al., 2008). This observation motivated work on efficient coding (Olshausen & Field, 1996) and served as a prior in computer vision (Peyré, 2009). Further, rigorous experiments have been conducted clearly supporting the low-dimensional manifold hypothesis for many image datasets (Ruderman, 1994; Schölkopf et al., 1998; Roweis & Saul, 2000; Tenenbaum et al., 2000; Brand, 2003); see also (Fefferman et al., 2016) for principled algorithms on verifying the manifold hypothesis. The generalization literature seeks to understand why some models generalize better from training data to test data than others. One line of work suggests that the loss landscape geometry explains why neural networks generalize well (Huang et al., 2019). Other generalization work predicts that data with low dimension, along with other properties which do not include extrinsic dimension, characterize the generalization difficulty of classification problems (Narayanan & Niyogi, 2009). In the context of deep learning, Gong et al. (2019) found that neural network features are low-dimensional. Ansuini et al. (2019) further found that the intrinsic dimension of features decreases in late layers of neural networks and observed interesting trends in the dimension of features in early layers. In contrast to Gong et al. (2019) and Ansuini et al. (2019), who study the intrinsic dimension of internal representations and find it inversely correlated with high performance, we study the dimensionality of data and its impact on performance, and we make a similar finding. Zhu et al. (2018) proposed a regularizer derived from the intrinsic dimension of images augmented with their corresponding feature vectors. Another line of work in deep learning has found that neural networks rely heavily on textures, which are low-dimensional (Geirhos et al., 2018; Brendel & Bethge, 2019).
Similarly, some have suggested that natural images can be represented as mixtures of textures which lie on a low-dimensional manifold (Vacher & Coen-Cagli, 2019; Vacher et al., 2020).
3 INTRINSIC DIMENSION ESTIMATION. Given a set of sample points $P \subset \mathbb{R}^N$, it is common to assume that $P$ lies on or near a low-dimensional manifold $\mathcal{M} \subseteq \mathbb{R}^N$ of intrinsic dimension $\dim(\mathcal{M}) = d \ll N$. As a measure of the degrees of freedom in a dataset, as well as the information content, there is great interest in estimating the intrinsic dimension $d$. In the remainder of this section, we briefly describe the dimension estimation method we use in this paper; for further information, see (Kim et al., 2019) and references therein. One of the main approaches to intrinsic dimension estimation is to examine a neighborhood around each point in the dataset and compute the Euclidean distance to the $k$th nearest neighbor. Assuming that density is constant within small neighborhoods, the Maximum Likelihood Estimation (MLE) of Levina & Bickel (2005) uses a Poisson process to model the number of points found by random sampling within a given radius around each sample point. By relating the rate of this process to the surface area of the sphere, the likelihood equations yield an estimate of the ID at a given point $x$ as:
$$\hat{m}_k(x) = \left[ \frac{1}{k-1} \sum_{j=1}^{k-1} \log \frac{T_k(x)}{T_j(x)} \right]^{-1}, \qquad (1)$$
where $T_j(x)$ is the Euclidean ($\ell_2$) distance from $x$ to its $j$th nearest neighbor. Levina & Bickel (2005) propose to average the local estimates at each point to obtain a global estimate $\bar{m}_k = \frac{1}{n} \sum_{i=1}^{n} \hat{m}_k(x_i)$. MacKay & Ghahramani (2005) suggest a correction based on averaging the inverses:
$$\bar{m}_k = \left[ \frac{1}{n} \sum_{i=1}^{n} \hat{m}_k(x_i)^{-1} \right]^{-1} = \left[ \frac{1}{n(k-1)} \sum_{i=1}^{n} \sum_{j=1}^{k-1} \log \frac{T_k(x_i)}{T_j(x_i)} \right]^{-1}, \qquad (2)$$
where $n$ is the number of samples. We use Equation (2) as our MLE estimator throughout this paper. Since the geometry of natural images is complex and unknown, we face two challenges when verifying the accuracy of MLE on natural image datasets. First, we need to choose a proper value of $k$. As shown by MacKay & Ghahramani (2005), the positive bias of the corrected estimator in Equation (2) increases as $k$ increases, but the variance decreases. In order to navigate this bias-variance tradeoff, we try various values of $k$ in Section 4. Second, in addition to the aforementioned local uniformity assumption, MLE assumes that the data arise as a sequence of i.i.d. random variables which can be written as a continuous and sufficiently smooth function of a random variable with smooth density, which may or may not be true for natural image datasets. While the truth of these assumptions is unknown for natural images, we verify the accuracy of our MLE estimates in a controlled setting in the following section. We briefly discuss other notable techniques for dimensionality estimation. GeoMLE (Gomtsyan et al., 2019) attempts to account for non-uniformity of density and nonlinearity of the manifold using a polynomial regression of the standard MLE based on distances to nearest neighbors in different-sized neighborhoods. However, GeoMLE chooses to approximate averages of $\hat{m}_k(x_i)$, instead of averaging its reciprocal as in Equation (2), resulting in a potentially incorrect maximum likelihood estimator. As a result, we find its estimation deviates significantly from expected dimensionalities. TwoNN (Facco et al., 2017) is based on the ratio of the distances to the first and second nearest neighbors.
Finally, the approach of Granata & Carnevale (2016) considers the distribution of geodesic distances over the data manifold, approximated by distances through kNN graphs, compared to the distribution of distances over hyperspheres of varying dimension. Unlike for MLE, our preliminary experiments suggest that these techniques do not provide reasonable estimates for some natural and synthetic images which are key to this work; see Appendix A.5 for further discussion.
4 VALIDATING DIMENSION ESTIMATION WITH SYNTHETIC DATA. Dimensionality estimates are often applied on "simple" manifolds or toy datasets where the dimensionality is known, and so the accuracy of the methods can be validated. Image manifolds, by contrast, are highly complex, may contain many symmetries and modes, and are of unknown dimension. In principle, there is no reason why MLE-based dimensionality estimates cannot be applied to image datasets. However, because we lack knowledge of the exact dimensionality of image datasets, we cannot directly verify that MLE-based dimensionality estimates scale up to the complexity of image structures. There is an inherent uncertainty in estimating the ID of a given dataset. First, we cannot be sure if the dataset actually resembles a sampling of points on or near a manifold. Second, there are typically no guarantees that the sampling satisfies the conditions assumed by the ID estimators we are using. Towards a principled application of ID estimates in contexts of practical relevance to deep learning, we begin by validating that MLE methods can generate accurate dimensionality estimates for complex image data. We do this by generating synthetic image datasets using generative models for which the intrinsic dimensionality can be upper-bounded a priori. We believe such validations are essential to put recent findings in perspective (Gong et al., 2019; Ansuini et al., 2019).
GAN Images. We use the BigGAN variant with 128 latent entries and outputs of size 128 × 128 × 3, trained on the ImageNet dataset (Deng et al., 2009). Using this GAN, we generate datasets with a varying number of images, where we fix most entries of the latent vectors to zero, leaving only $\bar{d}$ free entries to be chosen at random. As we increase the number of free entries, we expect the intrinsic dimension to increase, with $\bar{d}$ as an upper bound; see Section A.1 for further discussion. In particular, we create several synthetic datasets of varying intrinsic dimensionality using the ImageNet class basenji, and check if the estimates match our expectation. As seen in Figure 2, we observe increasing diversity with increasing intrinsic dimension. In Figure 3, we show convergence of the MLE estimate on basenji data with dimension bounded above by $\bar{d} = 10$. We observe that the estimates can be sensitive to the choice of $k$, as discussed in prior work; see Appendix A.2 for additional GAN classes.
Scaling to large datasets. We develop a practical approach for estimating the ID of large datasets such as ImageNet. In this approach, we randomly select a fraction $\alpha$ of the dataset as anchors. Then, we evaluate the MLE estimate using only the anchor points, where nearest neighbors are computed over the entire dataset. Note that, when anchors are chosen randomly, this acceleration has no impact on the expected value of the result. See Appendix A.3 for an evaluation of this approach.
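To make the estimator and the anchor acceleration concrete, the following is a sketch of the corrected MLE of Eq. (2) combined with random anchors. Function and parameter names (e.g., `anchor_frac`) are ours, and this is a reconstruction from the equations rather than the authors' released code.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mle_id(X, k=20, anchor_frac=1.0, seed=0):
    """Corrected MLE of intrinsic dimension, Eq. (2).

    Nearest neighbors are searched over the full dataset X, but the local
    estimates are averaged only over a random fraction `anchor_frac` of
    anchor points, as in the scaling approach described above.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    anchors = rng.choice(n, size=max(1, int(anchor_frac * n)), replace=False)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X[anchors])
    T = dists[:, 1:]                # drop the zero distance to the point itself
    # Inverse local estimate: (1 / (k - 1)) * sum_j log(T_k / T_j), Eq. (1).
    inv_mhat = np.log(T[:, -1:] / T[:, :-1]).sum(axis=1) / (k - 1)
    # MacKay & Ghahramani correction: invert the average of the inverses.
    return 1.0 / inv_mhat.mean()

# Toy check: ~10-dimensional data linearly embedded in a 50-d ambient space.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 10)) @ rng.standard_normal((10, 50))
print(mle_id(X, k=20, anchor_frac=0.2))   # should be close to 10
```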
The authors report a novel application of GANs to validate the maximum likelihood estimator (MLE) of the intrinsic dimension (ID) of image data sets. Then they use the MLE ID estimator to characterize the intrinsic dimension of several commonly used computer vision data sets, and link the data set ID to the generalizability of trained classifiers. They provide additional experiments that support the notion that it is intrinsic dimension, and not extrinsic dimension (i.e. # of pixels), that governs the performance of a binary classifier on these data sets. Also, they verify that dimension plays a large role in learning on natural data.
SP:bf07fc882fe3aca4c50e07df79f22d4b8b3abb56
Improving Generalizability of Protein Sequence Models via Data Augmentations
While protein sequence data is an emerging application domain for machine learning methods, small modifications to protein sequences can result in difficult-to-predict changes to the protein's function. Consequently, protein machine learning models typically do not use randomized data augmentation procedures analogous to those used in computer vision or natural language, e.g., cropping or synonym substitution. In this paper, we empirically explore a set of simple string manipulations, which we use to augment protein sequence data when fine-tuning semi-supervised protein models. We provide 276 different comparisons to the Tasks Assessing Protein Embeddings (TAPE) baseline models, with Transformer-based models and training datasets that vary from the baseline methods only in the data augmentations and representation learning procedure. For each TAPE validation task, we demonstrate improvements to the baseline scores when the learned protein representation is fixed between tasks. We also show that contrastive learning fine-tuning methods typically outperform masked-token prediction in these models, with increasing amounts of data augmentation generally improving performance for contrastive learning protein methods. We find the most consistent results across TAPE tasks when using domain-motivated transformations, such as amino acid replacement, as well as restricting the Transformer attention to randomly sampled sub-regions of the protein sequence. In rarer cases, we even find that information-destroying augmentations, such as randomly shuffling entire protein sequences, can improve downstream performance.
1 INTRODUCTION. Semi-supervised learning has proven to be an effective mechanism to promote generalizability for protein machine learning models, as task-specific labels are generally very sparse. However, with other common data types there are simple transformations that can be applied to the data in order to improve a model's ability to generalize: for instance, vision models use cropping, rotations, or color distortion; natural language models can employ synonym substitution; and time-series data models benefit from window restriction or noise injection. Scientific data, such as a corpus of protein sequences, has few obvious transformations that can be made to it that unambiguously preserve the meaningful information in the data. Often, an easily understood transformation to a protein sequence (e.g., replacing an amino acid with a chemically similar one) will unpredictably produce either a very biologically similar or very biologically different mutant protein. In this paper, we take the uncertainty arising from the unknown effect of simple data augmentations in protein sequence modeling as an empirical challenge that deserves a robust assessment. To our knowledge, no study has been performed to find out whether simple data augmentation techniques improve a suite of protein tasks. We focus on fine-tuning previously published self-supervised models that are typically used for representation learning with protein sequences, viz. the transformer-based methods of Rao et al. (2019), which have shown the best ability to generalize on a set of biological tasks referred to as Tasks Assessing Protein Embeddings (TAPE).
We test one or more of the following data augmentations: replacing an amino acid with a pre-defined alternative; shuffling the input sequences either globally or locally; reversing the sequence; or subsampling the sequence to focus only on a local region (see Fig. 1).
[Figure 1 panel titles: (a) Replacement (Dictionary), (b) Replacement (Alanine), (c) Global Random Shuffling]
We demonstrate that the protein sequence representations learned by fine-tuning the baseline models with data augmentations result in relative improvements between 1% (secondary structure accuracy) and 41% (fluorescence ρ), as assessed with linear evaluation for all TAPE tasks we studied. When fine-tuning the same representations during supervised learning on each TAPE task, we show significant improvement as compared to baseline for 3 out of 4 TAPE tasks, with the fourth (fluorescence) within 1σ in performance. We also study the effect of increasingly aggressive data augmentations: when fine-tuning baseline models with contrastive learning (Hadsell et al., 2006; Chen et al., 2020a) we see a local maximum in downstream performance as a function of the quantity of data augmentation, with "no augmentations" generally under-performing modest amounts of data augmentation. Conversely, performing the same experiments but using masked-token prediction instead of contrastive learning, we detect a minor trend of decreasing performance on the TAPE tasks as we more frequently use data augmentations during fine-tuning. We interpret this as evidence that contrastive learning techniques, which require the use of data augmentation, are important methods that can be used to improve the generalizability of protein models.
2 RELATED WORKS. Self-supervised and semi-supervised methods have become the dominant paradigm in modeling protein sequences for use in downstream tasks. Rao et al. (2019) have studied next-token and masked-token prediction, inspired by the BERT natural language model (Devlin et al., 2018). Riesselman et al. (2019) have extended this to autoregressive likelihoods; and Rives et al. (2019), Heinzinger et al. (2019) and Alley et al. (2019) have shown that unsupervised methods trained on unlabeled sequences are competitive with mutation effect predictors using evolutionary features. Of importance to this work are self-supervised learning algorithms employed for other data types that use or learn data augmentations. For example, Gidaris et al. (2018) learn image features through random rotations; Dosovitskiy et al. (2014) and Noroozi & Favaro (2016) study image patches and their correlations to the original samples. van den Oord et al. (2018) uses contrastive methods to predict future values of an input sequence. We consider sequence augmentations in natural language as the most relevant comparison for the data augmentations we study in this paper. Some commonly applied augmentations on strings include Lexical Substitution (Zhang et al., 2015), Back Translation (Xie et al., 2019a), Text Surface Transformation (Coulombe, 2018), Random Noise Injection (Xie et al., 2019b; Wei & Zou, 2019), and Synonym Replacement, Random Swap, and Random Deletion (RD) (Wei & Zou, 2019). However, sequence augmentations designed for natural languages often require the preservation of the contextual meaning of the sentences, a factor that is less explicit for protein sequences.
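The augmentations above are all simple string manipulations. The following sketch shows one plausible implementation of each; the substitution dictionary is an illustrative stand-in, not the paper's actual replacement table, and details such as the shuffling window size are our assumptions.

```python
import random

# Illustrative substitution map of chemically similar residues; the paper's
# actual replacement dictionary may differ.
SUBS = {"L": "I", "I": "L", "D": "E", "E": "D", "K": "R", "R": "K"}

def replace_dictionary(seq, p=0.1, rng=random):
    return "".join(SUBS.get(a, a) if rng.random() < p else a for a in seq)

def replace_alanine(seq, p=0.1, rng=random):
    # In-silico analogue of alanine scanning: swap residues for 'A'.
    return "".join("A" if rng.random() < p else a for a in seq)

def global_shuffle(seq, rng=random):
    chars = list(seq)
    rng.shuffle(chars)
    return "".join(chars)

def local_shuffle(seq, window=5, rng=random):
    # Shuffle residues only within contiguous windows.
    out = []
    for i in range(0, len(seq), window):
        chunk = list(seq[i:i + window])
        rng.shuffle(chunk)
        out.extend(chunk)
    return "".join(out)

def reverse(seq):
    return seq[::-1]

def subsample(seq, min_len=16, rng=random):
    # Restrict attention to a random contiguous sub-region.
    min_len = min(min_len, len(seq))
    length = rng.randint(min_len, len(seq))
    start = rng.randint(0, len(seq) - length)
    return seq[start:start + length]
```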
Contrastive Learning is a set of approaches that learn representations of data by distinguishing positive data pairs from negative pairs (Hadsell et al., 2006). SimCLR (v1 & v2) (Chen et al., 2020a;b) describes the current state-of-the-art contrastive learning technique; we use this approach liberally in this paper not only because it performs well, but because it requires data transformations to exist. Since we focus on protein sequence transformations, the contrastive learning part described in both (Chen et al., 2020a;b) is our focus. Following Chen et al. (2020a;b), we denote input data as $x \in D$, with $D$ being our training set; we then define an embedding function, $f_\omega: x \mapsto h$ with $h \in \mathbb{R}^N$, and a mapping function $g_\theta: h \mapsto z$, with $z \in \mathbb{R}^M$, where $\omega$ and $\theta$ are the learned model weights. For any $x$, we form two copies $x_1 = t_1(x)$ and $x_2 = t_2(x)$ given functions $t_1, t_2 \sim \mathcal{T}$, where $\mathcal{T}$ denotes the distribution of the augmentation functions. Given that $D$ is of size $N$, the contrastive loss is written as:
$$\mathcal{L} = \frac{1}{2N} \sum_{k=1}^{N} \left[ \ell\big(z^{(1)}_k, z^{(2)}_k\big) + \ell\big(z^{(2)}_k, z^{(1)}_k\big) \right], \quad \text{where} \quad \ell(u, v) \equiv -\log \frac{e^{\mathrm{sim}(u,v)/\tau}}{\sum_{w \neq u} e^{\mathrm{sim}(u,w)/\tau}}. \qquad (1)$$
Here, $z^{(i)}_k = g_\theta(f_\omega(t_i(x_k)))$, $\mathrm{sim}(\cdot,\cdot)$ is cosine similarity, and $\tau \in (0, \infty)$ is a scalar temperature; we choose $\tau = 0.2$. By minimizing the contrastive loss, we obtain the learned $h$ as the encoded feature for other downstream tasks. Note that the contrastive loss takes the $z$'s as inputs, whereas the encoded feature is $h$, the variable after the function $f_\omega(\cdot)$ and before $g_\theta(\cdot)$.
3 METHOD.
3.1 EVALUATION PROCEDURE & APPROACH TO EXPERIMENT CONTROL. Our goal is to demonstrate that training self-supervised protein sequence models, with simple string manipulations as data augmentations, will lead to better performance on downstream tasks. To attempt to control external variables, we study the following restricted setting; we provide the procedural diagram in Figure 2 and the corresponding explanations of the four major steps below (see Appendix A for detailed training setups):
Baseline.— A self-supervised model $M_0$ is trained on non-augmented sequence data $D_{seq}$ to do representation learning. To have a consistent baseline, we set $M_0$ to the Transformer-based model trained and published in Rao et al. (2019), without modification. This was trained with masked-token prediction on Pfam protein sequence data (El-Gebali et al., 2019); it has 12 self-attention layers, 8 heads per layer, and 512 hidden dimensions, yielding 38M parameters in total.
Augmented training on validation set.— We fine-tune $M_0$ on augmented subsets $D_{val} \subset D_{seq}$, given a set of pre-defined data transformations $T_{aug}$. We define $M_{aug}$ as the final trained model derived from $T_{aug}(D_{seq})$ with $M_0$ as the initial condition for the model parameters. We explore two different methods of fine-tuning on augmented data — a contrastive task (as in Eq. 1) and a masked-token task (exponentiated cross-entropy loss) — as well as different combinations of data augmentations. We use reduced subsets $|D_{val}| \ll |D_{seq}|$ both to reduce the computational cost of running bulk experiments and to protect against overfitting. For consistency, we inherit the choice of $D_{val}$ from the cross-validation split used to train $M_0$ in Rao et al. (2019).
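Eq. (1) is the quantity minimized during contrastive fine-tuning. A minimal batched implementation, assuming the projections z have already been computed by $g_\theta(f_\omega(\cdot))$, could look as follows; this is the standard NT-Xent form, a sketch rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, tau=0.2):
    """NT-Xent loss of Eq. (1) for a batch of N positive pairs.

    z1, z2: (N, M) projections g(f(t1(x))) and g(f(t2(x))).
    """
    N = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x M
    sim = z @ z.t() / tau                # cosine similarities / temperature
    sim.fill_diagonal_(float("-inf"))    # exclude the w == u term
    # The positive for row k is row (k + N) mod 2N.
    targets = torch.arange(2 * N, device=z.device).roll(N)
    # Cross-entropy over the similarity logits is exactly -log of Eq. (1)'s
    # softmax ratio, averaged over all 2N ordered pairs.
    return F.cross_entropy(sim, targets)
```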
To adapt the same baseline model $M_0$ to different self-supervised losses, we add a loss-specific, randomly-initialized layer to the $M_0$ architecture: contrastive learning uses a fully connected layer that outputs 256-dimensional vectors, and masked-token prediction uses fully connected layers with layer normalization to output one-hot vectors for each of the masked letters. We define our different choices of $T_{aug}$ in the next section.
Linear evaluation on TAPE.— To assess the representations learned by $M_{aug}$, we evaluate performance on four TAPE downstream training tasks (Rao et al., 2019): stability, fluorescence, remote homology, and secondary structure. For consistency, we use the same training, validation, and testing sets. The first two tasks are evaluated by Spearman correlation (ρ) to the ground truth and the latter two by classification accuracy. However, we do not consider the fifth TAPE task, contact map prediction, as it relies only on the single CASP12 dataset, which has an incomplete test set due to data embargoes (AlQuraishi, 2019). Secondary structure prediction is a sequence-to-sequence task where each input amino acid is classified to a particular secondary structure type (helix, beta sheet, or loop), which is evaluated on data from CASP12, TS115, and CB513 (Berman et al., 2000; Moult et al., 2018; Klausen et al., 2019), with "3-class" classification accuracy being the metric in this paper. The remote homology task classifies sequences into one of 1195 classes, representing different possible protein folds, which are further grouped hierarchically into families, then superfamilies; the datasets are derived from Fox et al. (2013). The fluorescence task regresses a protein sequence to a real-valued log-fluorescence intensity measured in Sarkisyan et al. (2016). The stability task regresses a sequence to a real-valued measure of the protein maintaining its fold above a concentration threshold (Rocklin et al., 2017). We perform linear evaluation by training only a single linear layer for each downstream task for each contrastive-learning model $M_{aug}$, without changing the parameters of $M_{aug}$, and its corresponding learned encodings, across all tasks. To compare the contrastive learning techniques to further fine-tuning with masked-token prediction, we identify the best-performing data augmentations per task, replace $M^{CL}_{aug}$ with the masked-token model trained with the same augmentation, $M^{MT}_{aug}$, and then also do linear evaluation on $M^{MT}_{aug}$.
Full fine-tuning on TAPE.— For the best-performing augmented models in the linear evaluation task (either $M^{CL}_{aug}$ or $M^{MT}_{aug}$), we further study how the models improve when allowing the parameters of $M_{aug}$ to vary along with the linear model during the task-specific supervised model-tuning.
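The linear-evaluation protocol above, freezing $M_{aug}$ and training one linear head per task, can be sketched as follows. The encoder interface (e.g., an `output_dim` attribute and a call returning pooled sequence features) is hypothetical:

```python
import torch

def linear_evaluation(encoder, train_loader, num_classes, epochs=10):
    # Freeze the fine-tuned encoder M_aug; only the linear head is trained.
    for p in encoder.parameters():
        p.requires_grad_(False)
    encoder.eval()
    head = torch.nn.Linear(encoder.output_dim, num_classes)  # hypothetical attr
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(epochs):
        for seqs, labels in train_loader:
            with torch.no_grad():
                h = encoder(seqs)        # fixed protein representations (B, d)
            loss = torch.nn.functional.cross_entropy(head(h), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```

Full fine-tuning would differ only in leaving the encoder parameters trainable and passing them to the optimizer as well.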
The paper explores the impact of different types of data augmentations on protein sequence data and provides a thorough benchmark analysis of them. The authors use a pre-trained transformer model and fine-tune it on augmented data using two approaches, namely contrastive learning and masked-token prediction. The fine-tuned model is evaluated with an added linear layer on a range of tasks.
SP:f1af5160de3da8d992ac6bba8fbb7b0086efdb12
MELR: Meta-Learning via Modeling Episode-Level Relationships for Few-Shot Learning
Most recent few-shot learning (FSL) approaches are based on episodic training whereby each episode samples few training instances (shots) per class to imitate the test condition. However, this strict adherence to the test condition has a negative side effect: the trained model is susceptible to the poor sampling of few shots. In this work, for the first time, this problem is addressed by exploiting inter-episode relationships. Specifically, a novel meta-learning via modeling episode-level relationships (MELR) framework is proposed. By sampling two episodes containing the same set of classes for meta-training, MELR is designed to ensure that the meta-learned model is robust against the presence of poorly-sampled shots in the meta-test stage. This is achieved through two key components: (1) a Cross-Episode Attention Module (CEAM) to improve the ability to alleviate the effects of poorly-sampled shots, and (2) a Cross-Episode Consistency Regularization (CECR) to enforce that the two classifiers learned from the two episodes are consistent even when there are unrepresentative instances. Extensive experiments for non-transductive standard FSL on two benchmarks show that our MELR achieves 1.0%–5.0% improvements over the baseline (i.e., ProtoNet) used for FSL in our model and outperforms the latest competitors under the same settings.
1 INTRODUCTION. Deep convolutional neural networks (CNNs) have achieved tremendous successes in a wide range of computer vision tasks including object recognition (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Russakovsky et al., 2015; He et al., 2016a), semantic segmentation (Long et al., 2015; Chen et al., 2018), and object detection (Ren et al., 2015; Redmon et al., 2016). For most visual recognition tasks, at least hundreds of labeled training images are required from each class for training a CNN model. However, collecting a large number of labeled training samples is costly and may even be impossible in real-life application scenarios (Antonie et al., 2001; Yang et al., 2012). To reduce the reliance of deep neural networks on large amounts of annotated training data, few-shot learning (FSL) has been studied (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017; Sung et al., 2018), which aims to recognize a set of novel classes with only a few labeled samples by knowledge transfer from a set of base classes with abundant samples. Recently, FSL has been dominated by meta-learning based approaches (Finn et al., 2017; Snell et al., 2017; Sung et al., 2018; Lee et al., 2019; Ye et al., 2020), which exploit the ample samples from base classes via episodic training. During meta-training, to imitate an N-way K-shot novel class recognition task, an N-way K-shot episode/meta-task is sampled in each iteration from the base classes, consisting of a support set and a query set. By setting up the meta-training episodes exactly the same way as the meta-test ones (i.e., N-way K-shot in the support set), the objective is to ensure that the meta-learned model can generalize to novel tasks. However, this also leads to an unwanted side effect: the model will be susceptible to the poor sampling of the few shots. Outlying training instances are prevalent in vision benchmarks and can be caused by various factors such as occlusions or unusual pose/lighting conditions.
When trained with ample samples, modern CNN-based recognition models are typically robust against abnormal instances as long as they are not dominant. However, when as few as one shot per class is used to build a classifier for FSL, the poorly-sampled few shots could be catastrophic, e.g., when the cat class is represented in the support set by a single image of a half-occluded cat viewed from behind, it would be extremely hard to build a classifier to recognize cats in the query set that are mostly full-body visible and frontal. Existing episodic-training based FSL models do not offer any solution to this problem. The main reason is that different episodes are sampled randomly and independently. When the cat class is sampled in two episodes, these models are not aware that it is the same class, and thus cannot enforce the independently learned classifiers to be consistent with each other, regardless of whether there exist poorly-sampled shots in one of the two episodes. In this paper, a novel meta-learning via modeling episode-level relationships (MELR) framework is proposed to address the poor sampling problem of the support set instances in FSL. In contrast to the existing episodic training strategy, MELR conducts meta-learning over two episodes deliberately sampled to contain the same set of base classes but different instances. In this way, cross-episode model consistency can be enforced so that the meta-learned model is robust against poorly-sampled shots in the meta-test stage. Concretely, MELR consists of two key components: the Cross-Episode Attention Module (CEAM) and the Cross-Episode Consistency Regularization (CECR). CEAM is composed of a cross-episode transformer which allows the support set instances to be examined through attention so that unrepresentative support samples can be identified and their negative effects alleviated (especially for computing class prototypes/centers). CECR, on the other hand, exploits the fact that since the two episodes contain the same set of classes, the obtained classifiers (class prototypes) should produce consistent predictions regardless of whether there are any poorly-sampled instances in the support set and/or which episode a query instance comes from. This consistency is enforced via cross-episode knowledge distillation. Our main contributions are three-fold: (1) For the first time, the poor sampling problem of the few shots is formally tackled by modeling the episode-level relationships in meta-learning based FSL. (2) We propose a novel MELR model with two cross-episode components (i.e., CEAM and CECR) to explicitly enforce that the classifiers of the same classes learned from different episodes need to be consistent regardless of whether there exist poorly-sampled shots. (3) Extensive experiments for non-transductive standard FSL on two benchmarks show that our MELR achieves significant improvements over the baseline ProtoNet (Snell et al., 2017) and even outperforms the latest competitors under the same settings. We will release the code and models soon.
2 RELATED WORK. Few-Shot Learning. Few-shot learning (FSL) has become topical recently. Existing methods can be generally divided into four groups: (1) Metric-based methods either learn a suitable embedding space for their chosen/proposed distance metrics (e.g., cosine similarity (Vinyals et al., 2016), Euclidean distance (Snell et al., 2017), and a novel measure SEN (Nguyen et al.
, 2020 ) ) or directly learn a suitable distance metric ( e.g. , CNN-based relation module ( Sung et al. , 2018 ; Wu et al. , 2019 ) , ridge regression ( Bertinetto et al. , 2019 ) , and graph neural networks ( Satorras & Estrach , 2018 ; Kim et al. , 2019 ; Yang et al. , 2020 ) ) . Moreover , several approaches ( Yoon et al. , 2019 ; Li et al. , 2019a ; Qiao et al. , 2019 ; Ye et al. , 2020 ; Simon et al. , 2020 ) learn task-specific metrics which are adaptive to each episode instead of learning a shared task-agnostic metric space . ( 2 ) Model-based methods ( Finn et al. , 2017 ; Nichol et al. , 2018 ; Rusu et al. , 2019 ) learn good model initializations on base classes and then quickly adapt ( i.e. , finetune ) them on novel classes with few shots and a limited number of gradient update steps . ( 3 ) Optimization-based methods ( Ravi & Larochelle , 2017 ; Munkhdalai & Yu , 2017 ; Li et al. , 2017 ) aim to learn to optimize , that is , to meta-learn optimization algorithms suitable for quick finetuning from base to novel classes . ( 4 ) Hallucination-based methods ( Hariharan & Girshick , 2017 ; Wang et al. , 2018 ; Schwartz et al. , 2018 ; Li et al. , 2020 ) learn generators on base classes and then hallucinate new novel class data to augment the few shots . Additionally , there are also other methods that learn to predict network parameters given few novel class samples ( Qiao et al. , 2018 ; Gidaris & Komodakis , 2019 ; Guo & Cheung , 2020 ) . Although the metric-based ProtoNet ( Snell et al. , 2017 ) is used as our baseline in this paper , our proposed MELR framework can be easily integrated with other episodic-training based methods . Modeling Episode-Level Relationships . In the FSL area , relatively less effort has been made to explicitly model the relationships across episodes . For modeling such episode-level relationships , there are two recent examples : ( 1 ) LGM-Net ( Li et al. , 2019b ) proposes an inter-task normalization strategy , which applies batch normalization to all support samples across a batch of episodes in each training iteration . ( 2 ) Among a batch of episodes , Meta-Transfer Learning ( Sun et al. , 2019 ) records the class with the lowest accuracy in each episode and then re-samples ‘ hard ’ meta-tasks from the set of recorded classes . In this work , instead of utilizing the relationships implicitly , we propose to model episode-level relationships ( MELR ) explicitly by focusing on episodes with the same set of classes . Furthermore , our MELR is specifically designed to cope with the poor sampling of the few shots – an objective very different from those in ( Li et al. , 2019b ; Sun et al. , 2019 ) . Attention Mechanism . Attention mechanism was first proposed by ( Bahdanau et al. , 2015 ) for machine translation and has now achieved great success in natural language processing ( Vaswani et al. , 2017 ) and computer vision ( Xu et al. , 2015 ) . An attention module typically takes a triplet ( queries , keys , values ) as input and learns interactions between queries and key-value pairs according to certain task objectives . It is referred to as self-attention or cross-attention depending on whether keys and queries are the same . Several recent works ( Hou et al. , 2019 ; Guo & Cheung , 2020 ; Ye et al. , 2020 ) have utilized attention mechanism for meta-learning based FSL . CAN ( Hou et al. , 2019 ) employs cross-attention between support and query samples to learn better feature representations . 
AWGIM (Guo & Cheung, 2020) adopts both self- and cross-attention for generating classification weights. FEAT (Ye et al., 2020) only uses self-attention on the class prototypes of the support set. The biggest difference between these methods and our MELR lies in whether attention is modeled within each episode or across episodes. Only MELR allows modeling cross-episode instance attention explicitly so that the meta-learned model can be insensitive to badly-sampled support set instances. In addition, in our MELR, query set instances are also updated using cross-attention, whilst existing models such as FEAT only apply attention to prototypes obtained using support set instances. They thus cannot directly handle instance-level anomalies.
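The exact formulation of CEAM is not given in this excerpt, so the following is only a schematic reading of cross-episode attention: instances of one episode attend, via scaled dot-product attention, to the instances of the paired episode, with a residual update. All names and shapes here are our assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def cross_episode_attention(feats_a, feats_b, W_q, W_k, W_v):
    """Schematic cross-episode attention: instances of episode A attend
    to the instances of episode B (support and query concatenated).

    feats_a: (Na, d) features of episode A; feats_b: (Nb, d) of episode B.
    W_q, W_k, W_v: (d, d) learned projection matrices.
    """
    q = feats_a @ W_q                        # queries from episode A
    k = feats_b @ W_k                        # keys from episode B
    v = feats_b @ W_v                        # values from episode B
    attn = F.softmax(q @ k.t() / q.size(-1) ** 0.5, dim=-1)
    # Residual update, so poorly-sampled instances are smoothed rather
    # than replaced outright.
    return feats_a + attn @ v
```

Class prototypes would then be computed from the updated support features, and a CECR-style consistency term would distill the predictions of the higher-accuracy episode's classifier into the other's.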
This paper proposes a way to exploit relationships across tasks in episodic training, with the goal of making trained models less susceptible to poor sampling in few-shot learning scenarios. The proposed model consists of two components: a cross-attention transformer (CEAM), which is used to share information across two episodes, and a regularization term (CECR), which imposes that two different instances of the same task (which have exactly the same classes) are consistent in terms of predictions. Cross-attention is computed via a scaled-attention transformer using both the support and query sets. The consistency loss is a knowledge distillation term that imposes agreement between the two episodes. The soft target is chosen between the two predictions by selecting the classifier with the highest accuracy.
SP:52b51e46d40e554920d48625707a433db2dc233c
TextTN: Probabilistic Encoding of Language on Tensor Network
1 INTRODUCTION. Machine learning incorporating quantum mechanics forms a novel interdisciplinary field known as quantum machine learning (Huggins et al., 2019; Ran et al., 2020). The tensor network (TN), as a novel model, has become prominent in the field of quantum machine learning (Biamonte et al., 2017). On the one hand, tensor networks can be used as a mathematical tool to enhance the theoretical understanding of existing neural network methods (Levine et al., 2018; 2019). On the other hand, new machine learning algorithms have been proposed based on tensor networks, e.g., the discriminative TN (DTN) (Stoudenmire & Schwab, 2016) for supervised tasks and the generative TN (GTN) (Han et al., 2018) for unsupervised scenarios. Based on the natural analogy between quantum concepts (e.g., the quantum many-body system (Levine et al., 2018)) and image representation, many studies and applications have been conducted for processing and learning natural pictures (Stoudenmire & Schwab, 2016; Sun et al., 2020; Liu et al., 2019). However, for natural languages, it remains unclear how to design an efficient and effective TN approach which can accurately learn and classify texts. In the field of natural language processing (NLP), researchers have realized the analogy between the quantum many-body wave function and the word interactions (by the tensor product) in a text sentence, and developed a quantum-inspired language representation (Zhang et al., 2018). Based on quantum many-body physics and tensor decomposition techniques, Zhang et al. (2018) provided a mathematical understanding of existing convolutional neural network (CNN) based text classification methods. Similarly, a tensor space language model (TSLM) has been built based on the tensor network formulation (Zhang et al., 2019). This work shows that TSLM is a more generalized language model compared with n-gram and recurrent neural network (RNN) based language models. In implementation, however, TSLM did not provide a tensor network algorithm. The challenge lies in the high dimensionality of each word vector, which is much higher than the dimensionality of each pixel representation in image scenarios. After the tensor product of a number of word vectors, the resulting high-order tensors become computationally intractable. More recently, a tensor network algorithm, namely the uniform matrix product state (u-MPS) model, has been proposed for probabilistic modeling of a text sequence (Miller et al., 2020). u-MPS is evaluated on a context-free language task, which uses a synthetic dataset. However, u-MPS has not been applied in a real-world NLP task, e.g., a typical language modeling or text classification task. In addition, the expressive power of u-MPS has not been investigated. The expressive power of a tensor network is a fundamental property of various TNs and has been systematically studied for tensor network factorizations of multivariate probability distributions (Glasser et al., 2019). This motivates us to make use of the theoretical properties of a TN's expressive power for developing a tensor network based probabilistic model for natural language representation and classification.¹ (¹In this paper, we focus on the text classification task. However, the idea and formulation of our proposed approach are general and have potential in other NLP tasks.) To build such a text tensor network, we need to address two research problems in this paper. First, how to design a probabilistic encoding architecture to efficiently and effectively learn and classify the text.
Second, how to analyse its expressive power, and make use of such analyses for more theoretical understanding and practical effectiveness of the text tensor network. In this paper, we propose a novel tensor network architecture, named TextTN. TextTN encodes each word vector in word-GTNs and classifies the sentence in a sentence-DTN. First, the proposed word-GTNs train a TN for each word and treat each element of a word vector as a node. In this manner, the word-GTNs first map a high-dimensional word vector to a low-dimensional linear space via tensor network operators. Then, the second-layer tensor network, called the sentence-DTN, trains a TN for each sentence, regarding the low-dimensional word vectors obtained by the word-GTNs as its input. In TextTN, a sentence is represented by the tensor product among word vectors. Therefore, the interaction information among different word vectors and different dimensions is modeled in TextTN. Such interactions are encoded in high-order weight tensors, which represent a high-order semantic space. In both the word-GTNs and the sentence-DTN, the high-order tensor can be solved by a tensor network, i.e., a matrix product state model, which uses the idea of low-rank approximation to conquer the exponential wall problem (Watson & Dunn, 2010). In the sentence-DTN, the bond-dimension is an important hyper-parameter and reflects the expressive power of TextTN. In this paper, we analyze the upper and lower bounds of the bond-dimension. Particularly, its lower bound can be determined by the entanglement entropy, which can be considered a measurement of the communication information encoded in the tensor network. A reference bond-dimension can be set as this lower bound, as we assume that a larger value implies information redundancy and a smaller value indicates an insufficiency of the TN's expressive power. In the experiments, such a reference bond-dimension achieves effective classification results, which indicates TextTN's practical advantage in its potential to save hyper-parameter tuning effort. Moreover, the word interaction is taken into account in the sentence-DTN through the joint effect of different words on the later class prediction by the loss functions. For the learning algorithm, we observe that different word positions have different weights in a sentence, so that the one-function (for a specific position) training in the original DTN is inappropriate. Therefore, we propose an all-function training process in the sentence-DTN to improve the stability of TextTN. We have evaluated TextTN on four major text classification datasets (MR, Subj, CR, and MPQA). The results show that TextTN outperforms the convolutional neural network (CNN) on all the datasets. This departs from vision tasks where, according to the recent literature, a tensor network has not been reported to outperform CNNs (Kim, 2014). In addition, based on the word vectors from the pre-trained model BERT, TextTN has better results than the BERT model on the SST-2 and SST-5 tasks, and the accuracy of BERT+TextTN is comparable with the state-of-the-art (SOTA) result on the SST-5 dataset.
2 BACKGROUND. We now provide the background of matrix product states (MPS), a family of tensor networks. MPS (Schollwock, 2011) is also known as the tensor-train decomposition (Oseledets, 2011). Because of its low degree of freedom, research based on MPS is developing rapidly.
At present, tensor networks based on MPS can be roughly divided into two categories. One is the generative tensor network (GTN) (Han et al., 2018; Sun et al., 2020), and the other is the supervised tensor network (also named the discriminative tensor network, DTN) (Stoudenmire & Schwab, 2016). We briefly describe existing GTN and DTN models for image classification tasks. GTNs are used to model the joint probability distribution of given data. For a picture $X$ with $n$ pixels, each pixel is encoded into a two-dimensional vector $x_i = (p_i, 1 - p_i)^T$ by a feature mapping from the pixel's value (Sun et al., 2020), where $i \in \{1, \ldots, n\}$ and $p_i$ is the mapped probability of the pixel $x_i$. The representation $\Phi(X)$ of the picture $X$ is given by the tensor product of these vectors $x_i$. A joint probability distribution of the picture $X$ is computed by the GTNs:
$$P_j(X) = |W_j \bullet \Phi(X)|^2, \qquad (1)$$
where $P_j(X)$ represents the probability of a picture $X$ with respect to the category $j$ ($j \in \{1, \ldots, m\}$), $m$ is the number of categories in an image classification dataset, and $\bullet$ is the operator of tensor contraction. The MPS decomposition of the $j$th $n$-order weight tensor $W_j$ can be written as:
$$W_j = \sum_{\{\alpha\}} A^{\alpha_1}_{s_1} A^{\alpha_1 \alpha_2}_{s_2} \cdots A^{\alpha_{n-1}}_{s_n}. \qquad (2)$$
In the left of Figure 1, we show an illustrative GTN with two categories (i.e., m = 2). The second- or third-order tensors $A_{s_i}$ ($i \in \{1, \ldots, n\}$) come from the decomposition of the weight tensor $W_j$. Each 'virtual' index $\alpha_k$ ($k \in \{1, \ldots, n-1\}$) is the rank obtained from the tensor-train decomposition (Oseledets, 2011), and the 'physical' index $s_i$ is the dimension of the pixel $x_i$. A DTN is used to identify the class or label of a picture and computes a conditional probability distribution $P(y \mid X)$. In a DTN, the conditional probability distribution is computed as follows:
$$P(y \mid X) = W_l \bullet \Phi(X), \qquad (3)$$
where the $(n+1)$-order weight tensor $W_l$ is decomposed into an MPS and $l$ is an extra order/index representing the label:
$$W_l = \sum_{\{\alpha\}} A^{\alpha_1}_{s_1} \cdots A^{l;\alpha_{i-1}\alpha_i}_{s_i} \cdots A^{\alpha_{n-1}}_{s_n}, \qquad (4)$$
where $P(y \mid X)$ is a vector that encodes the conditional probability distribution of outputting $y$ given a picture $X$. Eq. (3) is used to classify the input $X$ by choosing the label for which the value in the vector $P(y \mid X)$ is largest. In practice, the MPS shown in the right of Figure 1 is a supervised learning model (Stoudenmire & Schwab, 2016), and the rank values $\{\alpha_k\}$ are set to be equal, as the hyper-parameter (bond-dimension) of the TN.
3 TENSOR NETWORK FOR LANGUAGE ENCODING AND CLASSIFICATION. Problem setting. Our goal is to develop a tensor network architecture to encode and classify a sentence of words in a TN's probabilistic manner. For a sentence with $n$ words (i.e., $S = (w_1, \ldots, w_n)$), the tensor representation of the sentence can be written as $(w_1 \otimes \cdots \otimes w_i \otimes \cdots \otimes w_n)$ (Zhang et al., 2019), where the $w_i$ are the word vectors. However, as aforementioned in the Introduction, because of the dimensional disaster, it is infeasible to directly input such a sentence representation into a tensor network.
In order to solve this problem while still modeling the word interactions with a high-order tensor (denoted $W_l$), we can formalize the TN model as follows:
$$P(y \mid S) = W_l \bullet f(S) = W_l \bullet \big( f(w_1) \otimes f(w_2) \otimes \cdots \otimes f(w_n) \big), \qquad (5)$$
where $f$ is an operator that encodes a sentence in a low-dimensional probabilistic space. In our work, word-GTNs are used to encode the word vectors into a low-dimensional space, so that the new word representations can be efficiently input to the tensor network. Specifically, the function $f$ in Eq. (5) is embodied as the word-GTNs (Section 3.1), and $W_l$ is embodied as the sentence-DTN (Section 3.2). In the process of text classification, the word-GTNs and the sentence-DTN are unified into a new TN framework (TextTN), as shown in Figure 2. The bond-dimension is an important hyper-parameter that reflects the expressive power of the TN model and influences the effectiveness of text classification. In Section 3.2, we propose a reference bond-dimension that can be computed based on the entanglement entropy. Besides, to improve the stability of the sentence-DTN, all-function learning on the sentence-DTN is proposed in Section 3.3.
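To illustrate how a contraction such as Eq. (3)/(5) stays tractable, the following sketch evaluates an MPS classifier site by site, never forming the full order-n weight tensor. The core shapes and the placement of the label index are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def mps_predict(cores, label_core_idx, features):
    """Contract an MPS weight tensor W_l with phi = x_1 (x) ... (x) x_n.

    cores[i] has shape (D_left, s_i, D_right), except the core at
    label_core_idx, which has shape (D_left, num_labels, s_i, D_right).
    features[i] is the s_i-dimensional input vector at position i, and the
    boundary bond dimensions are assumed to be 1. Contracting site by site
    keeps the cost linear in n.
    """
    left = np.ones(1)                               # left boundary vector
    out = None
    for i, (A, x) in enumerate(zip(cores, features)):
        if i == label_core_idx:
            # (D_l) x (D_l, L, s, D_r) x (s) -> (L, D_r)
            out = np.einsum("a,alsb,s->lb", left, A, x)
        elif out is None:
            left = np.einsum("a,asb,s->b", left, A, x)
        else:
            out = np.einsum("lb,bsc,s->lc", out, A, x)
    return out[:, 0]                                # (num_labels,)

# Toy usage: n = 4 sites, physical dim 2, bond dim 3, 2 labels.
rng = np.random.default_rng(0)
cores = [rng.normal(size=(1, 2, 3)), rng.normal(size=(3, 2, 2, 3)),
         rng.normal(size=(3, 2, 3)), rng.normal(size=(3, 2, 1))]
feats = [rng.normal(size=2) for _ in range(4)]
print(mps_predict(cores, label_core_idx=1, features=feats))
```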
The paper proposes a tensor network for text classification. There are two components: (i) word-GTNs convert word embeddings into m-dimensional probability encoding vectors, and (ii) a sentence-DTN takes the word probability encoding vectors as input, combining them using a matrix product state (MPS). Experiments on several text classification datasets (e.g., SST, CR, MPQA) show that the proposed method outperforms existing ones when using word2vec and BERT word embeddings.
HyperSAGE: Generalizing Inductive Representation Learning on Hypergraphs
1 INTRODUCTION

Graphs are considered the most prevalent structures for discovering useful information within a network, especially because of their capability to combine object-level information with the underlying inter-object relations (Wu et al., 2020). However, most structures encountered in practical applications form groups and relations that cannot be properly represented using pairwise connections alone, hence a graph may fail to capture the collective flow of information across objects. In addition, the underlying data structure might be evolving and only partially observed. Such dynamic higher-order relations occur in various domains, such as social networks (Tan et al., 2011), computational chemistry (Gu et al., 2020), neuroscience (Gu et al., 2017) and visual arts (Arya et al., 2019), among others. These relations can be readily represented with hypergraphs, where an edge can connect an arbitrary number of vertices, as opposed to just two vertices in graphs. Hypergraphs thus provide a more flexible and natural framework to represent such multi-way relations (Wolf et al., 2016); however, this requires a representation learning technique that exploits the full expressive power of hypergraphs and can generalize to unseen nodes from a partially observed hypergraph. Recent work in the field of geometric deep learning has presented formulations on graph-structured data for the tasks of node classification (Kipf & Welling, 2016), link prediction (Zhang & Chen, 2018), and the classification of graphs (Zhang et al., 2018b). Subsequently, for data containing higher-order relations, a few recent papers have presented hypergraph-based learning approaches on similar tasks (Yadati et al., 2019; Feng et al., 2019). A common implicit premise in these papers is that a hypergraph can be viewed as a specific type of regular graph, and that therefore reduction of the hypergraph learning problem to that of a graph should suffice. Strategies to reduce a hypergraph to a graph include transforming the hyperedges into multiple edges using clique expansion (Feng et al., 2019; Jiang et al., 2019; Zhang et al., 2018a), converting to a heterogeneous graph using star expansion (Agarwal et al., 2006), and replacing every hyperedge with an edge created using a certain predefined metric (Yadati et al., 2019). Yet these methods are based on the wrong premise, motivated chiefly by the larger availability of graph-based approaches. By reducing a hypergraph to a regular graph, these approaches make existing graph learning algorithms applicable to hypergraphs. However, hypergraphs are not a special case of regular graphs. The opposite is true: regular graphs are simply a specific type of hypergraph (Berge & Minieka, 1976). Therefore, reducing the hypergraph problem to that of a graph cannot fully utilize the information available in the hypergraph. Two schematic examples outlining this issue are shown in Fig. 1. To address tasks based on complex structured data, a hypergraph-based formulation is needed that complies with the properties of a hypergraph. A major limitation of the existing hypergraph learning frameworks is their inherently transductive nature. This implies that these methods can only predict characteristics of nodes that were present in the hypergraph at training time, and fail to infer on previously unseen nodes.
The transductive nature of existing hypergraph approaches makes them inapplicable in, for example, finding the most promising target audience for a marketing campaign or making movie recommendations with new movies appearing all the time. An inductive solution would pave the way to solving such problems using hypergraphs. The inductive learning framework must be able to identify both the node's local role in the hypergraph, as well as its global position (Hamilton et al., 2017). This is important for generalizing the learned node embeddings that the algorithm has optimized on to a newly observed hypergraph comprising previously unseen nodes, thus making inductive learning a far more complex problem compared to transductive learning methods. In this paper, we address the above-mentioned limitations of the existing hypergraph learning methods. We propose a simple yet effective inductive learning framework for hypergraphs that is readily applicable to graphs as well. Our approach relies on neural message passing techniques, due to which it can be used on hypergraphs of any degree of cardinality without the need for reduction to graphs. The points below highlight the contributions of this paper:

• We address the challenging problem of representation learning on hypergraphs by proposing HyperSAGE, comprising a message passing scheme which is capable of jointly capturing the intra-relations (within a hyperedge) as well as inter-relations (across hyperedges).
• The proposed hypergraph learning framework is inductive, i.e., it can perform predictions on previously unseen nodes, and can thus be used to model evolving hypergraphs.
• HyperSAGE facilitates neighborhood sampling and provides the flexibility of choosing different ways to aggregate information from the neighborhood.
• HyperSAGE is more stable than state-of-the-art methods, thus providing more accurate results on node classification tasks on hypergraphs with reduced variance in the output.

2 RELATED WORK

Learning node representations using graph neural networks has been a popular research topic in the field of geometric deep learning (Bronstein et al., 2017). Graph neural networks can be broadly classified into spatial (message passing) and spectral networks. We focus on a family of spatial message passing graph neural networks that take a graph with some labeled nodes as input and learn embeddings for each node by aggregating information from its neighbors (Xu et al., 2019). Message passing operations in a graph simply propagate information along the edge connecting two nodes. Many variants of such message passing neural networks have been proposed, with some popular ones including Gori et al. (2005); Li et al. (2015); Kipf & Welling (2016); Gilmer et al. (2017); Hamilton et al. (2017). Zhou et al. (2007) introduced learning on hypergraphs to model high-order relations for semi-supervised classification and clustering of nodes. Emulating a graph-based message passing framework for hypergraphs is not straightforward, since a hyperedge involves more than two nodes, which makes the interactions inside each hyperedge more complex. Representing a hypergraph with a matrix makes it rigid in describing the structures of higher-order relations (Li et al., 2013). On the other hand, formulating message passing on a higher-dimensional representation of a hypergraph using tensors makes it computationally expensive and restricts it to only small datasets (Zhang et al., 2019).
Several tensor-based methods do perform learning on hypergraphs (Shashua et al., 2006; Arya et al., 2019); however, they are limited to uniform hypergraphs only. To resolve the above issues, Feng et al. (2019) and Bai et al. (2020) reduce a hypergraph to a graph using clique expansion and perform graph convolutions on it. These approaches cannot utilize the complete structural information in the hypergraph and lead to unreliable learning performance for, e.g., classification, clustering and active learning (Li & Milenkovic, 2017; Chien et al., 2019). Another approach by Yadati et al. (2019), named HyperGCN, replaces a hyperedge with pair-wise weighted edges between vertices (called mediators). With the use of mediators, HyperGCN can be interpreted as an improved approach to clique expansion, and to the best of our knowledge, is also the state-of-the-art method for hypergraph representation learning. However, for many cases such as the Fano plane, where each hyperedge contains at most three nodes, HyperGCN becomes equivalent to the clique expansion (Dong et al., 2020). In spectral theory of hypergraphs, methods have been proposed that fully exploit the hypergraph structure using non-linear Laplacian operators (Chan et al., 2018; Hein et al., 2013). In this work, we focus on message passing frameworks. Drawing inspiration from GraphSAGE (Hamilton et al., 2017), we propose to eliminate matrix (or tensor) based formulations in our neural message passing framework, which not only facilitates utilization of all the available information in a hypergraph, but also makes the entire framework inductive in nature.

3 PROPOSED MODEL: HYPERSAGE

The core concept behind our approach is to aggregate feature information from the neighborhood of a node spanning across multiple hyperedges, where the edges can have varying cardinality. Below, we first define some preliminary terms, and then describe our generic aggregation framework. This framework performs message passing at two levels for a hypergraph. Further, for any graph-structured data, our framework emulates one-level aggregation similar to GraphSAGE (Hamilton et al., 2017). Our approach inherently allows inductive learning, which makes it also applicable to hypergraphs with unseen nodes.

3.1 PRELIMINARIES

Definition 1 (Hypergraph). A general hypergraph H can be represented as $H = (\mathbb{V}, \mathbb{E}, \mathbf{X})$, where $\mathbb{V} = \{v_1, v_2, \ldots, v_N\}$ denotes a set of N nodes (vertices) and $\mathbb{E} = \{e_1, e_2, \ldots, e_K\}$ denotes a set of hyperedges, with each hyperedge comprising a non-empty subset of $\mathbb{V}$. $\mathbf{X} \in \mathbb{R}^{N \times d}$ denotes the feature matrix, such that $x_i \in \mathbf{X}$ is the feature vector characterizing node $v_i \in \mathbb{V}$. The maximum cardinality of the hyperedges in H is denoted as $M = \max_{e \in \mathbb{E}} |e|$. Unlike in a graph, the hyperedges of H can contain different numbers of nodes, and M denotes the largest such number. From the definition above, we see that graphs are a special case of hypergraphs with $M = 2$. Thus, compared to graphs, hypergraphs are designed to model higher-order relations between nodes. Further, we define three types of neighborhoods in a hypergraph:

Definition 2 (Intra-edge neighborhood). The intra-edge neighborhood of a node $v_i \in \mathbb{V}$ for any hyperedge $e \in \mathbb{E}$ is defined as the set of nodes $v_j$ belonging to e, and is denoted by $N(v_i, e)$. Further, let $E(v_i) = \{e \in \mathbb{E} \mid v_i \in e\}$ be the set of hyperedges that contain node $v_i$.

Definition 3 (Inter-edge neighborhood).
The inter-edge neighborhood of a node $v_i \in \mathbb{V}$, also referred to as its global neighborhood, is defined as the neighborhood of $v_i$ spanning across the set of hyperedges $E(v_i)$, and is represented by $N(v_i) = \bigcup_{e \in E(v_i)} N(v_i, e)$.

Definition 4 (Condensed neighborhood). The condensed neighborhood of any node $v_i \in \mathbb{V}$ is a sampled set of $\alpha \le |e|$ nodes from a hyperedge $e \in E(v_i)$, denoted by $N(v_i, e; \alpha) \subset N(v_i, e)$.

3.2 GENERALIZED MESSAGE PASSING FRAMEWORK

We propose to interpret the propagation of information in a given hypergraph as a two-level aggregation problem, where the neighborhood of any node is divided into intra-edge neighbors and inter-edge neighbors. For message aggregation, we define an aggregation function $\mathcal{F}(\cdot)$ as a permutation-invariant set function on a hypergraph $H = (\mathbb{V}, \mathbb{E}, \mathbf{X})$ that takes as input a countable unordered message set and outputs a reduced or aggregated message. Further, for two-level aggregation, let $\mathcal{F}_1(\cdot)$ and $\mathcal{F}_2(\cdot)$ denote the intra-edge and inter-edge aggregation functions, respectively. A schematic representation of the two aggregation functions is provided in Fig. 2. Similar to $\mathbf{X}$, we also define $\mathbf{Z}$ as the encoded feature matrix built using the outputs $z_i$ of the aggregation functions. Message passing at node $v_i$ for aggregation of information at the l-th layer can then be stated as
$$x^{(e)}_{i,l} \leftarrow \mathcal{F}_1\big(\{x_{j,l-1} \mid v_j \in N(v_i, e; \alpha)\}\big), \quad (1)$$
$$x_{i,l} \leftarrow x_{i,l-1} + \mathcal{F}_2\big(\{x^{(e)}_{i,l} \mid e \in E(v_i)\}\big), \quad (2)$$
where $x^{(e)}_{i,l}$ refers to the aggregated feature set at $v_i$ obtained with intra-edge aggregation for edge e. The combined two-level message passing is achieved using the nested aggregation function $\mathcal{F} = \mathcal{F}_2 \circ \mathcal{F}_1$. To ensure that the expressive power of a hypergraph is preserved, or at least that the loss is minimized, the choice of aggregation function should comply with certain properties. Firstly, the aggregation function should be able to capture the features of neighborhood vertices in a manner that is invariant to the permutation of the nodes and hyperedges. Many graph representation learning methods use permutation-invariant aggregation functions, such as mean, sum and max (Xu et al., 2019). These aggregations have proven to be successful for node classification problems. For the existing hypergraph frameworks, reduction to simple graphs along with a matrix-based message passing framework limits the possibilities of using different types of feature aggregation functions, and hence curtails the potential to explore unique node representations. Secondly, the aggregation function should also preserve global neighborhood invariance at the 'dominant nodes' of the graph. Here, dominant nodes refer to nodes that contain important features, thereby impacting the learning process relatively more than their neighbors. The aggregation function should ideally be insensitive to whether the provided hypergraph contains a few large hyperedges, or a larger number of smaller ones obtained from splitting them. Generally, a hyperedge would be split in a manner such that the dominant nodes are shared across the resulting hyperedges. In such cases, global neighborhood invariance would imply that the aggregated output at these nodes before and after the splitting of any associated hyperedge stays the same. Otherwise, the learned representation of a node would change significantly with each hyperedge split.
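To make Definitions 2-3 and the two-level scheme of Eqs. 1-2 concrete, the sketch below (our own illustration on a toy hypergraph; names and data are not from the paper) builds the neighborhood sets from a list of hyperedges and runs one layer of message passing with the plain arithmetic mean as both $\mathcal{F}_1$ and $\mathcal{F}_2$; the generalized-mean variant of Eqs. 3-4 follows later.

```python
import numpy as np

hyperedges = [{0, 1, 2}, {2, 3}, {1, 3, 4}]  # toy hypergraph, varying cardinality

def E(v):
    """E(v): the hyperedges containing node v."""
    return [e for e in hyperedges if v in e]

def N(v):
    """Inter-edge (global) neighborhood: union of N(v, e) over E(v) (Def. 3)."""
    return set().union(*E(v)) if E(v) else set()

def message_pass(X):
    """One layer of Eqs. 1-2 with the arithmetic mean as both F1 and F2."""
    out = X.copy()
    for v in range(X.shape[0]):
        if not E(v):
            continue
        # F1: intra-edge aggregation over N(v, e) for each e in E(v)   (Eq. 1)
        intra = [X[sorted(e)].mean(axis=0) for e in E(v)]
        # F2 over the per-edge messages, plus the skip connection       (Eq. 2)
        out[v] = X[v] + np.mean(intra, axis=0)
    return out

X = np.eye(5)
print(N(3))            # {1, 2, 3, 4}
print(message_pass(X))
```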
Based on these considerations, we define the following properties for a generic message aggregation function that should hold for accurate propagation of information through hypergraphs.

Property 1 (Hypergraph Isomorphic Equivariance). A message aggregation function $\mathcal{F}(\cdot)$ is equivariant to hypergraph isomorphism if, for two isomorphic hypergraphs $H = (\mathbb{V}, \mathbb{E}, \mathbf{X})$ and $H^* = (\mathbb{V}^*, \mathbb{E}^*, \mathbf{X}^*)$ with $H^* = \sigma \bullet H$, where $\mathbf{Z}$ and $\mathbf{Z}^*$ represent the encoded feature matrices obtained using $\mathcal{F}(\cdot)$ on H and $H^*$, the condition $\mathbf{Z}^* = \sigma \bullet \mathbf{Z}$ holds. Here, $\sigma$ denotes a permutation operator on hypergraphs.

Property 2 (Global Neighborhood Invariance). A message aggregation scheme $\mathcal{F}(\cdot)$ satisfies global neighborhood invariance at any node $v_i \in \mathbb{V}$ for a given hypergraph $H = (\mathbb{V}, \mathbb{E}, \mathbf{X})$ if, for any operation $\Gamma(\cdot)$ such that $H^* = \Gamma(H)$, with $z_i$ and $z^*_i$ denoting the encoded feature vectors obtained using $\mathcal{F}(\cdot)$ at node $v_i$ on H and $H^*$, the condition $z^*_i = z_i$ holds. Here, $\Gamma(H)$ could refer to operations such as hyperedge contraction or expansion.

The flexibility of our message passing framework allows us to go beyond simple aggregation functions on hypergraphs without violating Property 1. We introduce a series of power mean functions as aggregators, which have recently been shown to generalize well on graphs (Li et al., 2020). We perform message aggregation in hypergraphs using these generalized means, denoted by $M_p$, and provide in Section 4.2 a study of their performance. We also show that with appropriate combinations of the intra-edge and inter-edge aggregations, Property 2 is also satisfied. This property ensures that the representation of a node after message passing is invariant to the cardinality of the hyperedge, i.e., the aggregation scheme should not be sensitive to hyperedge contraction or expansion, as long as the global neighborhood of a node remains the same in the hypergraph.

Aggregation Functions. One major advantage of our strategy is that the message passing module is decoupled from the choice of the aggregation itself. This allows our approach to be used with a broad set of aggregation functions. We discuss below a few such possible choices.

Generalized means. Also referred to as power means, this class of functions is very commonly used for obtaining an aggregated measure over a given set of samples. Mathematically, generalized means can be expressed as $M_p = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^p\right)^{1/p}$, where n refers to the number of samples in the aggregation and p denotes its power. The choice of p allows different interpretations of the aggregation function. For example, $p = 1$ denotes arithmetic mean aggregation, $p = 2$ refers to the mean squared estimate, and a large value of p corresponds to max pooling from the group. Similarly, $M_p$ can be used for geometric and harmonic means with $p \to 0$ and $p = -1$, respectively. Similar to the recent work of Li et al. (2020), we use generalized means for intra-edge as well as inter-edge aggregation. The two functions $\mathcal{F}_1(\cdot)$ and $\mathcal{F}_2(\cdot)$ for aggregation at node $v_i$ are defined as
$$\mathcal{F}_1^{(i)}(s) = \Bigg(\frac{1}{|N(v_i,e)|\,|N(v_i)|}\sum_{v_j \in N(v_i,e)} \Bigg(\sum_{m=1}^{|E(v_i)|} \frac{1}{|N(v_i,e_m)|}\Bigg)^{-1} x_j^p\Bigg)^{1/p} \quad (3)$$
$$\mathcal{F}_2^{(i)}(s) = \Bigg(\frac{1}{|E(v_i)|}\sum_{e \in E(v_i)} \big(\mathcal{F}_1(s)\big)^p\Bigg)^{1/p} \quad (4)$$
where we use s as a concise representation of the unordered input set, as shown in Eq. 1.
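A minimal sketch of the generalized mean $M_p$ used as the aggregator follows; the same exponent p must be shared by $\mathcal{F}_1$ and $\mathcal{F}_2$ to preserve Property 2. This is our own illustration (names and data are ours), and it assumes non-negative features so that fractional powers are well defined.

```python
import numpy as np

def M_p(xs, p):
    """Generalized (power) mean over rows: (mean(x^p))^(1/p).

    p=1 gives the arithmetic mean, p=2 the quadratic mean, p=-1 the
    harmonic mean, and large p approaches max pooling.
    """
    xs = np.asarray(xs, dtype=float)
    return np.mean(xs ** p, axis=0) ** (1.0 / p)

feats = np.array([[1.0, 4.0], [2.0, 1.0], [8.0, 1.0]])
for p in (1, 2, -1, 10):
    print(p, M_p(feats, p))  # interpolates between mean-like and max-like behavior
```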
Here and henceforth in this paper, we drop the superscript index '(i)' for the sake of clarity, and further occurrences of the two aggregation functions should be interpreted in terms of node $v_i$. Note that in Eq. 3 and Eq. 4, we have chosen the power term p to be the same for $\mathcal{F}_1$ and $\mathcal{F}_2$ so as to satisfy the global neighborhood invariance stated in Property 2. The scaling term in $\mathcal{F}_1$ is added to balance the bias in the weighting introduced in intra-edge aggregation due to varying cardinality across the hyperedges. These restrictions ensure that the joint aggregation $\mathcal{F}_2(\cdot)$ satisfies the property of global neighborhood invariance at all times. Proof that the two aggregations satisfy Property 2 is given in Appendix B.

Sampling-based Aggregation. Our neural message passing scheme provides the flexibility to adapt the message aggregation module to fit a desired computational budget by aggregating information from only a subset $N(v_i, e; \alpha)$ of the full neighborhood $N(v_i, e)$, if needed. We propose to apply sub-sampling only on the nodes from the training set, and to use information from the full neighborhood for the test set. The advantages of this are twofold. First, the reduced number of samples per aggregation at training time reduces the relative computational burden. Second, similar to dropout (Srivastava et al., 2014), it serves to regularize the optimization process. Using the full neighborhood on test data avoids randomness in the test predictions and generates consistent output.
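As a small sketch of this train/test policy (ours, purely illustrative), the condensed neighborhood $N(v, e; \alpha)$ of Definition 4 is only drawn during training; at test time the full intra-edge neighborhood is returned, so predictions stay deterministic.

```python
import numpy as np

def intra_edge_neighborhood(e, alpha=None, training=False, rng=None):
    """Return N(v, e; alpha) during training, the full N(v, e) at test time.

    e: iterable of node ids in a hyperedge. Sub-sampling applies only when
    training, mirroring the regularization/consistency trade-off described.
    """
    nodes = sorted(e)
    if training and alpha is not None and len(nodes) > alpha:
        rng = rng or np.random.default_rng()
        return list(rng.choice(nodes, size=alpha, replace=False))
    return nodes

print(intra_edge_neighborhood({1, 4, 7, 9}, alpha=2, training=True,
                              rng=np.random.default_rng(0)))  # sampled subset
print(intra_edge_neighborhood({1, 4, 7, 9}, alpha=2, training=False))  # full set
```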
In this paper, the authors study the problem of learning node embeddings for hypergraphs. While most existing studies consider reducing hypergraphs to graphs, this paper studies learning embeddings directly on the hypergraphs using two stages of aggregation and sampling. The efficacy of the proposed method is illustrated in a semi-supervised as well as an inductive setting, where the method achieves better performance than the baselines.
Provable Memorization via Deep Neural Networks using Sub-linear Parameters
1 INTRODUCTION

The modern trend of over-parameterizing neural networks has shifted the focus of deep learning theory from analyzing their expressive power toward understanding the generalization capabilities of neural networks. While the celebrated universal approximation theorems state that over-parameterization enables us to approximate the target function with a smaller error (Cybenko, 1989; Pinkus, 1999), the theoretical gain is too small to satisfactorily explain the observed benefits of over-parameterizing already-big networks. Instead of "how well can models fit," the question of "why models do not overfit" has become the central issue (Zhang et al., 2017). Ironically, a recent breakthrough on the phenomenon known as double descent (Belkin et al., 2019; Nakkiran et al., 2020) suggests that answering the question of "how well can models fit" is in fact an essential element in fully characterizing their generalization capabilities. In particular, the double descent phenomenon characterizes two different phases according to the capability/incapability of the network size for memorizing training samples. If the network size is insufficient for memorization, the traditional bias-variance trade-off occurs. However, after the network reaches the capacity that memorizes the dataset, i.e., the "interpolation threshold," larger networks exhibit better generalization. Under this new paradigm, identifying the minimum size of networks for memorizing finite input-label pairs becomes a key issue, rather than function approximation that considers infinite inputs. The memory capacity of neural networks is a relatively old literature, where researchers have studied the minimum number of parameters for memorizing arbitrary N input-label pairs. Existing results showed that $O(N)$ parameters are sufficient for various activation functions (Baum, 1988; Huang and Babri, 1998; Huang, 2003; Yun et al., 2019; Vershynin, 2020). On the other hand, Sontag (1997) established the negative result that for any network using analytic definable activation functions with $o(N)$ parameters, there exists a set of N input-label pairs that the network cannot memorize. The sub-linear number of parameters also appears in a related topic, namely the VC-dimension of neural networks. It has been proved that there exists a set of N inputs such that a neural network with $o(N)$ parameters can "shatter," i.e., memorize with arbitrary labels (Maass, 1997; Bartlett et al., 2019). Comparing the two results on $o(N)$ parameters, Sontag (1997) showed that not all sets of N inputs can be memorized with arbitrary labels, whereas Bartlett et al. (2019) showed that at least one set of N inputs can be shattered. This suggests that there may be a reasonably large family of N input-label pairs that can be memorized with $o(N)$ parameters, which is our main interest.

1.1 SUMMARY OF RESULTS

In this paper, we identify a mild condition satisfied by many practical datasets, and show that $o(N)$ parameters suffice for memorizing such datasets. In order to bypass the negative result by Sontag (1997), we introduce a condition on the set of inputs, called $\Delta$-separateness.

Definition 1. For a set $\mathcal{X} \subset \mathbb{R}^{d_x}$, we say $\mathcal{X}$ is $\Delta$-separated if
$$\sup_{x, x' \in \mathcal{X}:\, x \ne x'} \|x - x'\|_2 < \Delta \times \inf_{x, x' \in \mathcal{X}:\, x \ne x'} \|x - x'\|_2.$$
This condition requires that the ratio of the maximum distance to the minimum distance between distinct points is bounded by $\Delta$.
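As an illustration of Definition 1, the following sketch (ours, not from the paper) computes the max/min pairwise-distance ratio of a finite dataset; the set is $\Delta$-separated for any $\Delta$ strictly larger than this ratio.

```python
import numpy as np
from scipy.spatial.distance import pdist

def separateness(X):
    """Return the max/min pairwise Euclidean distance ratio of the rows of X.

    Duplicate rows make the ratio infinite (the set is then not
    Delta-separated for any finite Delta).
    """
    d = pdist(X)                      # all pairwise Euclidean distances
    dmin, dmax = d.min(), d.max()
    return np.inf if dmin == 0 else dmax / dmin

# Toy check: 100 random 8-bit "images" with 12 pixels each.
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(100, 12)).astype(float)
ratio = separateness(X)
print(f"Delta ratio: {ratio:.1f}, log2(Delta): {np.log2(ratio):.1f}")
```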
Note that the condition is milder when $\Delta$ is bigger. By Definition 1, any given finite set of (distinct) inputs is $\Delta$-separated for some $\Delta$, so one might ask how $\Delta$-separateness differs from simply having distinct inputs in a dataset. The key difference is that even if the number of data points N grows, the ratio of the maximum to the minimum distance should remain bounded by $\Delta$. Given the discrete nature of computers, there are many practical datasets that satisfy $\Delta$-separateness, as we will see shortly. Also, this condition is more general than the minimum distance assumptions ($\forall i, \|x_i\|_2 = 1$; $\forall i \ne j, \|x_i - x_j\|_2 \ge \rho > 0$) that are employed in existing theoretical results (Hardt and Ma, 2017; Vershynin, 2020). To see this, note that the minimum distance assumption implies $2/\rho$-separateness. In our theorem statements, we use the phrase "$\Delta$-separated set of input-label pairs" to denote that the set of inputs is $\Delta$-separated. In our main theorem sketched below, we prove the sufficiency of $o(N)$ parameters for memorizing any $\Delta$-separated set of N pairs (i.e., any $\Delta$-separated set of N inputs with arbitrary labels), even for large $\Delta$. More concretely, our result is of the following form:

Theorem 1 (Informal). For any $w \in (\frac{2}{3}, 1]$, there exists an $O(N^{2-2w}/\log N + \log\Delta)$-layer, $O(N^w + \log\Delta)$-parameter fully-connected network with sigmoidal or ReLU activation that can memorize any $\Delta$-separated set of N input-label pairs.

We note that log has base 2. Theorem 1 states that if the number of layers increases with the number of pairs N, then any $\Delta$-separated set of N pairs can be memorized by a network with $o(N)$ parameters. Here, we can check from Definition 1 that the $\log\Delta$ term does not usually dominate the depth or the number of parameters, especially for modern deep architectures and practical datasets. For example, it is easy to check that any dataset consisting of 3-channel images (values from $\{0, 1, \ldots, 255\}$) of size $a \times b$ satisfies $\log\Delta < 9 + \frac{1}{2}\log(ab)$ (e.g., $\log\Delta < 17$ for the ImageNet dataset), which is often much smaller than the depth of modern deep architectures. For practical datasets, we can show that networks with fewer parameters than the number of pairs can successfully memorize the dataset. For example, in order to perfectly classify one million images in the ImageNet dataset (http://www.image-net.org/) with 1000 classes, our result shows that 0.7 million parameters are sufficient. The improvement is more significant for larger datasets. To memorize 15.8 million bounding boxes in Open Images V6 (https://storage.googleapis.com/openimages/web/index.html) with 600 classes, our result shows that only 4.5 million parameters suffice. Theorem 1 improves the sufficient number of parameters for memorizing a large class of N pairs (i.e., $2^{O(N^w)}$-separated) from $O(N)$ down to $O(N^w)$ for any $w \in (\frac{2}{3}, 1)$, for deep networks. It is then natural to ask whether depth increasing with N is necessary for memorization with a sub-linear number of parameters. The following existing result on the VC-dimension implies that this is indeed necessary for memorization with $o(N/\log N)$ parameters, at least for ReLU networks.

Theorem [Bartlett et al. (2019)] (Informal). For L-layer ReLU networks, $\Omega(N/(L\log N))$ parameters are necessary for memorizing at least a single set of N inputs with arbitrary labels.

The above theorem implies that for ReLU networks of constant depth, $\Omega(N/\log N)$ parameters are necessary for memorizing at least one set of N inputs with arbitrary labels.
In contrast, by increasing depth with N, Theorem 1 shows that there is a large class of datasets that can be memorized with $o(N/\log N)$ parameters. Combining these two results, one can conclude that increasing depth is necessary and sufficient for memorizing a large class of N pairs with $o(N/\log N)$ parameters. Given that depth is critical for memorization power, is the width also critical? We prove that it is not the case, via the following theorem.

Theorem 2 (Informal). For a fully-connected network of width 3 with a sigmoidal or ReLU activation function, $O(N^{2/3} + \log\Delta)$ parameters (i.e., layers) suffice for memorizing any $\Delta$-separated set of N input-label pairs.

Theorem 2 states that under $2^{O(N^{2/3})}$-separateness of inputs, the network width does not necessarily have to increase with N for memorization with sub-linear parameters. Furthermore, it shows that even a surprisingly narrow network of width 3 has superior memorization power compared to a fixed-depth network, requiring only $O(N^{2/3})$ parameters. Theorems 1 and 2 show the existence of network architectures that memorize N points with $o(N)$ parameters, under the condition of $\Delta$-separateness. This means that these theorems do not answer the question of how many such data points a given network can memorize. We provide generic criteria for identifying the maximum number of points given general networks (Theorem 3). In a nutshell, our criteria indicate that to memorize more pairs under the same budget for the number of parameters, a network must have a deep and narrow architecture at its final layers. In contrast to prior results stating that the number of arbitrary pairs that can be memorized is at most proportional to the number of parameters (Yamasaki, 1993; Yun et al., 2019; Vershynin, 2020), our criteria successfully incorporate the characteristics of datasets, the number of parameters, and the architecture, which enables us to memorize $\Delta$-separated datasets with a number of pairs super-linear in the number of parameters. Finally, we provide empirical results corroborating our theoretical findings that deep networks often memorize better than their shallow counterparts with a similar number of parameters. Here, we emphasize that better memorization power does not necessarily imply better generalization. We indeed observe that shallow and wide networks often generalize better than deep and narrow networks, given the same (or similar) training accuracy.

Organization. We first introduce related works in Section 2. In Section 3, we introduce necessary notation and the problem setup. We formally state our main results and discuss them in Section 4. In Section 6, we provide empirical observations on the effect of depth and width in neural networks. Finally, we conclude the paper in Section 7.

2 RELATED WORKS

2.1 NUMBER OF PARAMETERS FOR MEMORIZATION

Sufficient number of parameters for memorization. Identifying the sufficient number of parameters for memorizing arbitrary N pairs has a long history. Earlier works mostly focused on bounding the number of hidden neurons of shallow networks for memorization. Baum (1988) proved that for 2-layer STEP networks (STEP denotes the binary threshold activation function $x \mapsto \mathbb{1}[x \ge 0]$), $O(N)$ hidden neurons (i.e., $O(N)$ parameters) are sufficient for memorizing arbitrary N pairs when inputs are in general position.
Huang and Babri (1998) showed that the same bound holds for any bounded and nonlinear activation function $\sigma$ satisfying that either $\lim_{x\to-\infty}\sigma(x)$ or $\lim_{x\to\infty}\sigma(x)$ exists, without any condition on the inputs. The $O(N)$ bound on the number of hidden neurons was improved to $O(\sqrt{N})$ by exploiting an additional hidden layer by Huang (2003); nevertheless, this construction still requires $O(N)$ parameters. With the advent of deep learning, the study has been extended to modern activation functions and deeper architectures. Zhang et al. (2017) proved that $O(N)$ hidden neurons are sufficient for 2-layer ReLU networks to memorize arbitrary N pairs. Yun et al. (2019) showed that for deep ReLU (or hard tanh) networks having at least 3 layers, $O(N)$ parameters are sufficient. Vershynin (2020) proved a similar result for STEP (or ReLU) networks with an additional logarithmic factor, i.e., $\tilde{O}(N)$ parameters are sufficient, for memorizing arbitrary $\{x_i : \|x_i\|_2 = 1\}_{i=1}^N$ satisfying $\|x_i - x_j\|_2^2 = \Omega\big(\frac{\log\log d_{\max}}{\log d_{\min}}\big)$ and $N, d_{\max} = e^{O(d_{\min}^{1/5})}$, where $d_{\max}$ and $d_{\min}$ denote the maximum and the minimum hidden dimensions, respectively. In addition, the memorization power of modern network architectures has also been studied. Hardt and Ma (2017) showed that ReLU networks consisting of residual blocks with $O(N)$ hidden neurons can memorize arbitrary $\{x_i : \|x_i\|_2 = 1\}_{i=1}^N$ satisfying $\|x_i - x_j\|_2 \ge \rho$ for some absolute constant $\rho > 0$. Nguyen and Hein (2018) studied a broader class of layers and proved that $O(N)$ hidden neurons suffice for convolutional neural networks consisting of fully-connected, convolutional, and max-pooling layers for memorizing arbitrary N pairs having different patches.

Necessary number of parameters for memorization. On the other hand, the necessary number of parameters for memorization has also been studied. Sontag (1997) showed that for any neural network using analytic definable activation functions, $\Omega(N)$ parameters are necessary for memorizing arbitrary N pairs. Namely, given any network using analytic definable activations with $o(N)$ parameters, there exists a set of N pairs that the network cannot memorize. The Vapnik-Chervonenkis (VC) dimension is also closely related to the memorization power of neural networks. While memorization power studies the number of parameters for memorizing arbitrary N pairs, the VC-dimension studies the number of parameters for memorizing at least a single set of N inputs with arbitrary labels. Hence, it naturally provides a lower bound on the necessary number of parameters for memorizing arbitrary N pairs. The VC-dimension of neural networks has been studied for various types of activation functions. For memorizing at least a single set of N inputs with arbitrary labels, it is known that $\Theta(N/\log N)$ parameters are necessary (Baum and Haussler, 1989) and sufficient (Maass, 1997) for STEP networks. Similarly, Karpinski and Macintyre (1997) proved that $\Omega(\sqrt{N/U})$ parameters are necessary for sigmoid networks of U neurons. Recently, Bartlett et al. (2019) showed that $\Theta(N/(\bar{L}\log N))$ parameters are necessary and sufficient for L-layer networks using any piecewise linear activation function, where $\bar{L} := \frac{1}{W_L}\sum_{\ell=1}^{L} W_\ell$ and $W_\ell$ denotes the number of parameters up to the $\ell$-th layer.
The paper studies the memorization capacity of deep networks as a function of the number of parameters. Many prior works have shown that to memorize $N$ examples $O(N)$ parameters are sufficient and that to memorize any set of $N$ examples $\Omega(N)$ parameters are necessary. This work shows that under very mild and commonly satisfied conditions, $O(N^{\frac{2}{3}})$ parameters and layers are sufficient for memorizing $N$ examples, a significant improvement over prior results. Additionally, they show that even very narrow width-bounded networks can memorize with sub-linear parameters, but the same is not true for depth-bounded networks, demonstrating a new capability unlocked by deeper networks. Finally, they characterize the properties sufficient for memorizing $N$ examples by a given network.
Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues
1 INTRODUCTION

Traditional visual question answering (Antol et al., 2015; Jang et al., 2017) involves answering questions about a given image. Extending from this line of research, Das et al. (2017) and Alamri et al. (2019) recently added another level of complexity by positioning each question and answer pair in a multi-turn or conversational setting (see Figure 1 for an example). This line of research has promising applications for improving virtual intelligent assistants in multi-modal scenarios (e.g., assistants for people with visual impairment). Most state-of-the-art approaches in this line of research (Kang et al., 2019; Schwartz et al., 2019b; Le et al., 2019) tackle the additional complexity of the multi-turn setting by learning to process dialogue context sequentially, turn by turn. Despite the success of these approaches, they often fail to exploit dependencies between dialogue turns of long distance, e.g., the 2nd and 5th turns in Figure 1. In long dialogues, this shortcoming becomes more obvious and necessitates an approach for learning long-distance dependencies between dialogue turns. To reason over dialogue context with long-distance dependencies, recent research in dialogues discovers graph-based structures at the turn level to predict the speaker's emotion (Ghosal et al., 2019) or generate sequential questions semi-autoregressively (Chai & Wan, 2020). Recently, Zheng et al. (2019) incorporated graph neural models to connect the textual cues between all pairs of dialogue turns. These methods, however, involve a fixed graphical structure of dialogue turns, in which only a small number of nodes contains lexical overlap with the question of the current turn, e.g., the 1st, 3rd, and 5th turns in Figure 1. These methods also fail to factor in the temporality of dialogue turns, as the graph structures do not guarantee a sequential ordering among turns. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). The PDC model learns a reasoning path that traverses through dialogue turns to propagate contextual cues that are densely related to the semantics of the current question. Our approach balances between a sequential and a graphical process to exploit dialogue information. Our work is related to the long-studied research domain of discourse structures, e.g., (Barzilay & Lapata, 2008; Feng & Hirst, 2011; Tan et al., 2016; Habernal & Gurevych, 2017). A form of discourse structure is argument structures, including premises and claims and their relations. Argument structures have been studied to assess different characteristics of text, such as coherence, persuasiveness, and susceptibility to attack. However, most efforts are designed for discourse study in monologues, and much less attention is directed toward conversational data. In this work, we investigate a form of discourse structure through semantic graphs built upon the overlap of component representations among dialogue turns. We further enhance the models with a reasoning path learning model to learn the best information path for the next utterance generation. To learn a reasoning path, we incorporate into our method bridge entities, a concept often seen in reading comprehension research, and earlier used in entity-based discourse analysis (Barzilay & Lapata, 2008). In reading comprehension problems, bridge entities denote entities that are common between two knowledge bases, e.g.,
Wikipedia paragraphs in HotpotQA (Yang et al., 2018b). In discourse analysis, entities and their locations in text are used to learn linguistic patterns that indicate certain qualities of a document. In our method, we first reconstruct each dialogue turn (including question and answer) into a set of component sub-nodes (e.g., entities, action phrases) using common syntactic dependency parsers. Each resulting dialogue turn contains sub-nodes that can be used as bridge entities. Our reasoning path learning approach contains two phases: (1) first, at each dialogue turn, a graph network is constructed at the turn level, where any two turns are connected if they have an overlapping sub-node or if two of their sub-nodes are semantically similar; (2) second, a path generator is trained to predict a path from the current dialogue turn to past dialogue turns that provide additional and relevant cues to answer the current question. The predicted path is used as a skeleton layout to propagate visual features through each step of the path. Specifically, in PDC, we adopt non-parameterized approaches (e.g., cosine similarity) to construct the edges in the graph networks, and each sub-node is represented by pre-trained word embedding vectors. Our path generator is a transformer decoder that autoregressively generates the next turn index conditioned on the previously generated turn sequence. Our reasoning model is a combination of a vanilla graph convolutional network (Kipf & Welling, 2017) and a transformer encoder (Vaswani et al., 2017). In each traversal step, we retrieve visual features conditioned on the corresponding dialogue turn and propagate the features to the next step. Finally, the propagated multimodal features are used as input to a transformer decoder to predict the answer. Our experimental results show that our method improves the results on the Audio-Visual Scene-Aware Dialogues (AVSD) generation settings (Alamri et al., 2019), outperforming previous state-of-the-art methods. We evaluate our approach through comprehensive ablation analysis and a qualitative study. The PDC model also provides additional insights on how the inherent contextual cues in dialogue context are learned in neural networks, in the form of a reasoning path.

2 RELATED WORK

Discourses in monologues. Related to our work is research on discourse structures. A long-studied line of research in this domain focuses on argument mining to identify the structure of arguments, claims and premises, and the relations between them (Feng & Hirst, 2011; Stab & Gurevych, 2014; Peldszus & Stede, 2015; Persing & Ng, 2016; Habernal & Gurevych, 2017). More recently, Ghosh et al. (2016); Duthie & Budzynska (2018); Jiang et al. (2019) propose to learn argument structures in student essays and official debates. In earlier approaches, Barzilay & Lapata (2008); Lin et al. (2011); Feng et al. (2014) study discourses to derive coherence assessment methods through entity-based representations of text. These approaches are motivated by linguistic theories surrounding entity patterns in discourses, i.e., how they are introduced and discussed (Grosz et al., 1995). Guinaudeau & Strube (2013); Putra & Tokunaga (2017) extend prior work with graphical structures in which sentence similarity is calculated based on semantic vectors representing those sentences.
These lines of research show that studying discourse structures is useful in many tasks, such as document ranking and discrimination. However, most of these approaches are designed for monologues rather than dialogues.

Discourses in dialogues. More related to our problem setting is discourse research on text in a multi-turn setting. Murakami & Raymond (2010); Boltužić & Šnajder (2014); Swanson et al. (2015); Tan et al. (2016); Niculae et al. (2017); Morio & Fujita (2018); Chakrabarty et al. (2019) introduce new corpora and different methods to mine arguments in online discussion forums. Their models are trained to extract claims and premises in each user post and identify the relations between argument components in each pair of user posts. More recently, Li et al. (2020a); Jo et al. (2020) extend argument mining in online threads to identify attackability and persuasiveness in online posts. In this work, we address the problem of video-grounded dialogue, in which dialogue turns are often semantically connected by a common grounding information source, a video. In this task, a discourse-based approach enables dialogue models to learn to anticipate the upcoming textual information in future dialogue turns. However, directly applying prior work on discourse or argument structures to video-grounded dialogues is not straightforward, due to the inherent difference between online discussion posts and video-grounded dialogues. In video-grounded dialogues, the language is often closer to spoken language, and there are fewer clear argument structures to be learned. Moreover, the presence of video necessitates interaction between multiple modalities, text and vision. Incorporating traditional discourse structures to model cross-modality interaction is not straightforward. In this work, we propose to model dialogue context by using compositional graphical structures and constructing information traversal paths through dialogue turns.

Graph-based dialogue models. Related to our work is research that investigates different types of graph structures in dialogue. Hu et al. (2019); Shi & Huang (2019); Zhu et al. (2020) address the "reply_to" relationship among multi-party dialogues through graph networks that incorporate conversational flows in comment threads on social networks, e.g., Reddit and Ubuntu IRC, and online games. Zheng et al. (2019) propose a fully connected graph structure at the turn level for visual dialogues. Concurrently, Ghosal et al. (2019) also propose a fully connected graph structure with heterogeneous edges to detect the emotion of participating speakers. All of these methods discover graph structures connecting pairs of dialogue turns with little lexical overlap, resulting in sub-optimal feature propagation. This drawback becomes more significant in question answering problems in multi-turn settings. Our approach constructs graph networks based on compositional similarities.

Reasoning path learning. Our method is also motivated by recent research in machine reading comprehension, e.g., WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018a). De Cao et al. (2019); Qiu et al. (2019) construct graph networks of supporting documents with entity nodes that are connected based on different kinds of relationships. Tu et al. (2019); Tang et al. (2020) enhance these methods with additional edges connecting output candidates and documents.
Extending these methods are path-based approaches that learn to predict a reasoning path through supporting documents. Kundu et al. (2019); Asai et al. (2020) score and rank path candidates that connect entities in the question to the target answer. A common strategy among these methods is the use of bridge entities. However, unlike reading comprehension, dialogues are normally not entity-centric, and it is not trivial to directly adopt bridge entities in dialogue context.

Cross-modality feature learning. Our work is related to studies that integrate visual and linguistic information representation. A line of research in this domain is the problem of visual QA, e.g., (Minh Le et al., 2020; Gao et al., 2019). Closer to our method are approaches that adopt compositionality in textual features. Specifically, Socher et al. (2014) introduce image and language representation learning by detecting the component lexical parts in sentences and combining them with image features. The main difference between these approaches and our work is the study of cross-modalities in a multi-turn setting. Our approach directly tackles the embedded sequential order in dialogue utterances and examines how cross-modality features are passed from turn to turn.
This paper addresses the visual question answering in a multi-turn or conversational setting. Given a video (series of frames or images), a model has to reason across space and time to arrive at a correct answer for a given question. This task involves understanding the content and context of dialogue turns, i.e., given a question and N dialogue turns, only M<<N of the dialogue turns are strongly related to the question posed. This paper proposes to simulate the dependencies between dialogue turns, forming a reasoning path, to answer a given question. In a way, the proposed approach selects relevant dialogue turns that are useful to answer the question.
Generalized Energy Based Models
1 INTRODUCTION

Energy-based models (EBMs) have a long history in physics, statistics and machine learning (LeCun et al., 2006). They belong to the class of explicit models, and can be described by a family of energies $\mathcal{E}$ which define probability distributions with density proportional to $\exp(-E)$. These models are often known only up to a normalizing constant $Z(E)$, also called the partition function. The learning task consists of finding an optimal function that best describes a given system or target distribution $\mathbb{P}$. This can be achieved using maximum likelihood estimation (MLE); however, the intractability of the normalizing partition function makes this learning task challenging. Thus, various methods have been proposed to address this (Hinton, 2002; Hyvärinen, 2005; Gutmann and Hyvärinen, 2012; Dai et al., 2019a;b). All these methods estimate EBMs that are supported over the whole space. In many applications, however, $\mathbb{P}$ is believed to be supported on an unknown lower-dimensional manifold. This happens in particular when there are strong dependencies between variables in the data (Thiry et al., 2021), and suggests incorporating a low-dimensionality hypothesis into the model. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a particular way to enforce low-dimensional structure in a model. They rely on an implicit model, the generator, to produce samples supported on a low-dimensional manifold by mapping a pre-defined latent noise to the sample space using a trained function. GANs have been very successful in generating high-quality samples on various tasks, especially for unsupervised image generation (Brock et al., 2018). The generator is trained adversarially against a discriminator network whose goal is to distinguish samples produced by the generator from the target data. This has inspired further research to extend the training procedure to more general losses (Nowozin et al., 2016; Arjovsky et al., 2017; Li et al., 2017; Bińkowski et al., 2018; Arbel et al., 2018) and to improve its stability (Miyato et al., 2018; Gulrajani et al., 2017; Nagarajan and Kolter, 2017; Kodali et al., 2017). While the generator of a GAN effectively has a low-dimensional support, it remains challenging to refine the distribution of mass on that support using pre-defined latent noise. For instance, as shown by Cornish et al. (2020) for normalizing flows, when the latent distribution is unimodal and the target distribution possesses multiple disconnected low-dimensional components, the generator, as a continuous map, compensates for this mismatch using steeper slopes. In practice, this implies the need for more complicated generators. In the present work, we propose a new class of models, called Generalized Energy Based Models (GEBMs), which can represent distributions supported on low-dimensional manifolds, while offering more flexibility in refining the mass on those manifolds. GEBMs combine the strength of both implicit and explicit models in two separate components: a base distribution (often chosen to be an implicit model) which learns the low-dimensional support of the data, and an energy function that can refine the probability mass on that learned support. We propose to train the GEBM by alternating between learning the energy and the base, analogous to f-GAN training (Goodfellow et al., 2014; Nowozin et al., 2016).
The energy is learned by maximizing a generalized notion of likelihood which we relate to the Donsker-Varadhan lower bound (Donsker and Varadhan, 1975) and Fenchel duality, as in (Nguyen et al., 2010; Nowozin et al., 2016). Although the partition function is intractable in general, we propose a method to learn it in an amortized fashion without introducing additional surrogate models, as done in variational inference (Kingma and Welling, 2014; Rezende et al., 2014) or by Dai et al. (2019a;b). The resulting maximum likelihood estimate, the KL Approximate Lower-bound Estimate (KALE), is then used as a loss for training the base. When the class of energies is rich and smooth enough, we show that KALE leads to a meaningful criterion for measuring weak convergence of probabilities. Following recent work by Chu et al. (2020); Sanjabi et al. (2018), we show that KALE possesses well-defined gradients w.r.t. the parameters of the base, ensuring well-behaved training. We also provide convergence rates for the empirical estimator of KALE when the variational family is sufficiently well behaved, which may be of independent interest. The main advantage of GEBMs becomes clear when sampling from these models: the posterior over the latents of the base distribution incorporates the learned energy, putting greater mass on regions of the latent space that lead to better-quality samples. Sampling from the GEBM can thus be achieved by first sampling from the posterior distribution of the latents via MCMC in the low-dimensional latent space, then mapping those latents to the input space using the implicit map of the base. This is in contrast to standard GANs, where the latents of the base have a fixed distribution. We focus on a class of samplers that exploit gradient information, and show that these samplers enjoy fast convergence properties by leveraging the recent work of Eberle et al. (2017). While there has been recent interest in using the discriminator to improve the quality of the generator during sampling (Azadi et al., 2019; Turner et al., 2019; Neklyudov et al., 2019; Grover et al., 2019; Tanaka, 2019; Wu et al., 2019b), our approach emerges naturally from the model we consider. We begin in Section 2 by introducing the GEBM model. In Section 3, we describe the learning procedure using KALE, then derive a method for sampling from the learned model in Section 4. In Section 5 we discuss related work. Finally, experimental results are presented in Section 6, with code available at https://github.com/MichaelArbel/GeneralizedEBM.

2 GENERALIZED ENERGY-BASED MODELS

In this section, we introduce generalized energy-based models (GEBMs), which combine the strengths of both energy-based models and implicit generative models, and admit the first of these as a special case. An energy-based model (EBM) is defined by a set $\mathcal{E}$ of real-valued functions called energies, where each $E \in \mathcal{E}$ specifies a probability density over the data space $\mathcal{X} \subset \mathbb{R}^d$ up to a normalizing constant,
$$\mathbb{Q}(dx) = \exp(-E(x) - A)\,dx, \qquad A = \log\left(\int \exp(-E(x))\,dx\right). \quad (1)$$
While EBMs have recently been shown to be powerful models for representing complex high-dimensional data distributions, they still unavoidably lead to a blurred model whenever data are concentrated on a lower-dimensional manifold. This is the case in Figure 1(a), where the ground truth distribution is supported on a 1-D line and embedded in a 2-D space.
The EBM in Figure 1(d) learns to give higher density to a halo surrounding the data, and thus provides a blurred representation. That is a consequence of the EBM having a density defined over the whole space, and can result in blurred samples for image models. An implicit generative model (IGM) is a family of probability distributions $\mathbb{G}_\theta$ parametrized by a learnable generator function $G: \mathcal{Z} \to \mathcal{X}$ that maps latent samples z from a fixed latent distribution $\eta$ to the data space $\mathcal{X}$. The latent distribution $\eta$ is required to have a density over the latent space $\mathcal{Z}$ and is often easy to sample from. Thus, sampling from $\mathbb{G}$ is simply achieved by first sampling z from $\eta$, then applying G,
$$x \sim \mathbb{G} \iff x = G(z),\; z \sim \eta. \quad (2)$$
GANs are popular instances of these models, and are trained adversarially (Goodfellow et al., 2014). When the latent space $\mathcal{Z}$ has a smaller dimension than the input space $\mathcal{X}$, the IGM will be supported on a lower-dimensional manifold of $\mathcal{X}$, and thus will not possess a Lebesgue density on $\mathcal{X}$ (Bottou et al., 2017). IGMs are therefore good candidates for modelling low-dimensional distributions. While GANs can accurately learn the low-dimensional support of the data, they can have limited power in representing the distribution of mass on that support. This is illustrated in Figure 1(b). A generalized energy-based model (GEBM) $\mathbb{Q}$ is defined by the combination of a base $\mathbb{G}$ and an energy E defined over a subset $\mathcal{X}$ of $\mathbb{R}^d$. The base component can typically be chosen to be an IGM as in (2). The generalized energy component can refine the mass on the support defined by the base. It belongs to a class $\mathcal{E}$ of real-valued functions defined on the input space $\mathcal{X}$, and represents the negative log-density of a sample from the GEBM with respect to the base $\mathbb{G}$,
$$\mathbb{Q}(dx) = \exp(-E(x) - A_{\mathbb{G},E})\,\mathbb{G}(dx), \qquad A_{\mathbb{G},E} = \log\left(\int \exp(-E(x))\,\mathbb{G}(dx)\right), \quad (3)$$
where $A_{\mathbb{G},E}$ is the logarithm of the normalizing constant of the model w.r.t. $\mathbb{G}$. Thus, a GEBM $\mathbb{Q}$ re-weights samples from the base according to the un-normalized importance weights $\exp(-E(x))$. Using the latent structure of the base $\mathbb{G}$, this importance weight can be pulled back to the latent space to define a posterior latent distribution $\nu$,
$$\nu(z) := \eta(z)\exp\big(-E(G(z)) - A_{\mathbb{G},E}\big). \quad (4)$$
Hence, the posterior latent distribution $\nu$ can be used instead of the latent noise $\eta$ for sampling from $\mathbb{Q}$, as summarized by Proposition 1:

Proposition 1. Sampling from $\mathbb{Q}$ requires sampling a latent z from $\nu$ (4), then applying the map G,
$$x \sim \mathbb{Q} \iff x = G(z),\; z \sim \nu. \quad (5)$$
For Proposition 1 to hold, the generator G does not need to be invertible. We provide a proof in Appendix C.1, which relies on a characterization of probability distributions using generalized moments. We will see later in Section 4 how equation (5) can be used to provide practical algorithms for sampling from the GEBM. Next we discuss the advantages of GEBMs.

Advantages of Generalized Energy Based Models. The GEBM defined by (3) can be related to exponential tilting (re-weighting) (Siegmund, 1976; Xie et al., 2016) of the base $\mathbb{G}$. The important difference over classical EBMs is that the base $\mathbb{G}$ is allowed to change its support and shape in space. By learning the base $\mathbb{G}$, GEBMs can accurately learn the low-dimensional support of data, just as IGMs do. They also benefit from the flexibility of EBMs in representing densities, using an energy E to refine the distribution of mass on the support defined by $\mathbb{G}$, as seen in Figure 1(c).
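To illustrate Proposition 1, the sketch below (our own toy illustration, not the authors' code; the energy, generator, and step sizes are made up) draws latent samples from the posterior $\nu$ of Eq. 4 with unadjusted Langevin dynamics (one instance of the gradient-based samplers mentioned above), then pushes them through G as in Eq. 5. Note that $\nu$ only needs to be known up to the constant $A_{\mathbb{G},E}$, since the Langevin update uses $\nabla_z \log \nu(z)$.

```python
import numpy as np

# Toy base: standard normal latent eta in R^1, generator G embeds into R^2.
def G(z):
    return np.stack([z, np.sin(3.0 * z)], axis=-1)   # a 1-D manifold in 2-D

def energy(x):                                        # toy energy E on data space
    return 0.5 * np.sum((x - np.array([1.0, 0.0])) ** 2, axis=-1)

def grad_log_nu(z, eps=1e-4):
    """grad_z log nu(z) = grad_z [log eta(z) - E(G(z))]; A_{G,E} drops out.
    Finite differences stand in for autodiff in this sketch."""
    f = lambda u: -0.5 * u ** 2 - energy(G(u))        # log nu up to a constant
    return (f(z + eps) - f(z - eps)) / (2 * eps)

def langevin_latents(n_steps=500, n_chains=256, step=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n_chains)                     # initialize from eta
    for _ in range(n_steps):
        z = z + step * grad_log_nu(z) + np.sqrt(2 * step) * rng.normal(size=n_chains)
    return z

z = langevin_latents()
x = G(z)    # GEBM samples: posterior latents pushed through the base (Eq. 5)
print(x.mean(axis=0))
```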
Compared to EBMs, which put mass on the whole space by construction (positive density), GEBMs have the additional flexibility to concentrate the probability mass on a low-dimensional support learned by the base G, provided that the dimension of the latent space Z is smaller than the dimension of the ambient space X: see Figure 1 (c) vs. Figure 1 (d). In the particular case where the dimension of Z is equal to the ambient dimension and G is invertible, the base G becomes supported over the whole space X, and GEBMs recover the usual EBMs. The next proposition further shows that any EBM can be viewed as a particular case of a GEBM, as proved in Appendix C.1. Proposition 2. Any EBM with energy E (as in (1)) can be expressed as a GEBM with base G given as a normalizing flow with density exp(−r(x)) and a generalized energy Ẽ(x) = E(x) − r(x). In this particular case, the dimension of the latent is necessarily equal to the data dimension, i.e. dim(Z) = dim(X). Compared to IGMs, which rely on a fixed pre-determined latent noise distribution η, GEBMs offer the additional flexibility of learning a richer latent noise distribution. This is particularly useful when the data is multimodal. In IGMs such as GANs, the latent noise η is usually unimodal, thus requiring a more sophisticated generator to distort a unimodal noise distribution into a distribution with multiple modes, as shown by Cornish et al. (2020). Instead, GEBMs allow sampling from a posterior ν over the latent noise defined in (4). This posterior noise can be multimodal in latent space (by incorporating information from the energy) and thus can put more or less mass in specific regions of the manifold defined by the base G. This allows GEBMs to capture multimodality in the data, provided the support of the base is broad enough to subsume the data support (Figure 1 (c)). The base can be simpler, compared to GANs, as it does not need to distort the input noise as much to produce multimodal samples (see Figure 8 in Appendix G.4). This additional flexibility comes at no additional training cost compared to GANs. Indeed, GANs still require another model during training, the discriminator network, but do not use it for sampling. GEBMs avoid this waste, since the base and energy can be trained jointly, with no additional model, and both are then used for sampling.
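To make the latent-space sampling of Proposition 1 concrete, the following is a minimal sketch of an unadjusted Langevin sampler targeting ν(z) ∝ η(z) exp(−E(G(z))); the `generator` and `energy` modules are placeholder assumptions standing in for a trained base and energy, not the released implementation. Note that the constant A_{G,E} drops out of the gradient, so it never needs to be computed for sampling.

```python
# A minimal sketch (our illustration, not the paper's exact sampler) of
# gradient-based MCMC over the GEBM's latent posterior.
import torch

def sample_gebm(generator, energy, n_samples=64, latent_dim=128,
                n_steps=100, step_size=1e-2):
    z = torch.randn(n_samples, latent_dim)              # initialize from eta
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        # Negative log of the unnormalized posterior: E(G(z)) - log eta(z).
        # With a standard normal eta, -log eta(z) = ||z||^2 / 2 + const, and
        # the normalizing constant A_{G,E} drops out of the gradient.
        u = energy(generator(z)).sum() + 0.5 * (z ** 2).sum()
        grad, = torch.autograd.grad(u, z)
        with torch.no_grad():
            z = z - 0.5 * step_size * grad \
                + (step_size ** 0.5) * torch.randn_like(z)
    return generator(z.detach())                         # map latents to data
```

Because the latent space is low-dimensional, each Langevin step here is cheap relative to running MCMC directly in the data space.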
This paper proposes a framework called GEBM that combines an implicit generator and an EBM to define a probabilistic model on a low-dimensional manifold. Specifically, the implicit generator defines the base distribution, and the EBM refines the base. This method is equivalent to defining an EBM on the latent space of the implicit generator together with a mapping from the latent space to the data space. The authors propose to use KALE to train the generator, and provide a theoretical guarantee of the validity of KALE.
Clairvoyance: A Pipeline Toolkit for Medical Time Series
Python Software Repository : https://github.com/vanderschaarlab/clairvoyance 1 INTRODUCTION. Inference over time series is ubiquitous in medical problems [1–7]. With the increasing availability and accessibility of electronic patient records, machine learning for clinical decision support has made great strides in offering actionable predictive models for real-world questions [8, 9]. In particular, a plethora of methods-based research has focused on addressing specific problems along different stages of the clinical data science pipeline, including preprocessing patient data [10, 11], imputing missing measurements [12–16], issuing diagnoses and prognoses of diseases and biomarkers [17–25], estimating the effects of different treatments [26–31], optimizing measurements [32–36], capturing uncertainty [37–41], and interpreting learned models [42–46]. On the other hand, these component tasks are often formulated, solved, and implemented as mathematical problems (on their own), resulting in a stylized range of methods that may not acknowledge the complexities and interdependencies within the real-world clinical ML project lifecycle (as a composite). This leads to an often punishing translational barrier between state-of-the-art ML techniques and any actual patient benefit that could be realized from their intended application towards clinical research and decision support [47–51]. Three Challenges. To bridge this gap, we argue for a more comprehensive, systematic approach to development, validation, and clinical utilization. Specifically, due to the number of moving pieces, managing real-world clinical time-series inference workflows is challenging due to the following concerns: • First and foremost, the engineering problem is that building complex inference procedures involves significant investment: over 95% of the work in a typical mature project is consumed by software technicals, and < 5% by real scientific questions [52]. As a clinician or healthcare practitioner, however, few resources are available for easily developing and validating complete workflows. What is desired is a simple, consistent development and validation workflow that encapsulates all major aspects of clinical time-series ML—from initial data preprocessing all the way to the end. • Second, the evaluation problem is that the performance of any component depends on its context; for instance, the accuracy of a prediction model is intimately tied to the data imputation method that precedes it [13, 14]. As an ML researcher, however, current empirical practices typically examine the merits of each component individually, with surrounding steps configured as convenient for ensuring "all else equal" conditions for assessing performance. What is desired is a structured, realistic, and reproducible method of comparing techniques that honestly reflects interdependencies in the gestalt. • Lastly, the efficiency problem is that sophisticated designs tend to be resource-intensive to optimize, and state-of-the-art deep learning approaches require many knobs to be tuned. For clinical and ML practitioners alike, this computational difficulty may be compounded by pipeline combinations and the potential presence of temporal distribution shifts in time-series datasets [53].
What is desired is a platform on which the process of pipeline configuration and hyperparameter optimization can be automated—and through which new optimization algorithms to that effect may be built and tested. Contributions. We tackle all three issues simultaneously. The Clairvoyance package is a unified, end-to-end, autoML-friendly pipeline for medical time series. (i) As a software toolkit, it enables development through a single unified interface: modular and composable structures facilitate rapid experimentation and deployment by clinical practitioners, as well as simplifying collaboration and code-sharing. (ii) As an empirical standard, it serves as a complete experimental benchmarking environment: standardized, end-to-end pipelines provide realistic and systematic context for evaluating novelties within individual component designs, ensuring that comparisons are fair, transparent, and reproducible. (iii) Finally, as an interface for optimization over the pipeline abstraction, Clairvoyance enables leveraging and developing algorithms for automatic pipeline configuration and stepwise selection, accounting for interdependencies among components, hyperparameters, and time steps. Through illustrative examples on real-world medical datasets, we highlight the applicability of the proposed paradigm within personalized prediction, personalized treatment planning, and personalized monitoring. To the best of our knowledge, Clairvoyance is the first coherent effort to demonstrate the viability of a comprehensive, structured, and automatable pipeline for clinical time-series learning. 2 THE CLAIRVOYANCE PIPELINE. The Patient Journey. Consider the typical patient's interactions with the healthcare system. Their healthcare lifecycle revolves tightly around (1) forecasting outcomes of interest (i.e. the prediction problem), (2) selecting appropriate interventions (i.e. the treatment effects problem), and (3) arranging follow-up monitoring (i.e. the active sensing problem). Each of these undertakings involves the full complexity of preparing, modeling, optimizing, and drawing conclusions from clinical time series. Clairvoyance provides model pathways for these core tasks in the patient journey (see Figure 1)—integrated into a single pipeline from start to finish (see Figure 2). Formally, these pathways include: • Predictions Path. Let {(s_n, x_{n,1:T_n})}_{n=1}^{N} denote any medical time-series dataset, where s_n is the vector of static features for the n-th patient, and x_{n,1:T_n} ≐ {x_{n,t}}_{t=1}^{T_n} is the sequence of temporal feature vectors. One-shot problems seek to predict a vector of labels y_n from (s_n, x_{n,1:T_n}): e.g. prediction of mortality or discharge, where y_n ∈ {0, 1}. Online problems predict some target vector y_{n,t} from (s_n, x_{n,1:t}) at every time step: e.g. τ-step-ahead prediction of biomarkers y_{n,t} ⊆ x_{n,t+τ}. • Treatment Effects Path. For individualized treatment-effect estimation [26–31], we additionally identify interventional actions a_{n,t} ⊆ x_{n,t} at each time step (e.g. the choices and dosages of prescribed medication), as well as corresponding measurable outcomes y_{n,t} ⊆ x_{n,t+τ}. The learning problem now consists in quantifying the (factual or counterfactual) potential outcomes y_{n,t+τ} that would result from any specific sequence of interventions and patient covariates (s_n, x_{n,1:t}, a_{n,1:t}). • Active Sensing Path.
In addition to mapping (already-measured) covariates to targets, the very decision of what (and when) to measure is also important under resource constraints. In medical settings, active sensing deals with balancing this trade-off between information gain and acquisition costs [32–36]. With reference to some downstream task (e.g. predicting y_{n,t+1}), the aim is to select a subset of covariates K_{n,t} at each t to maximize the (net) benefit of observing {x_{n,t,k}}_{k∈K_{n,t}}. As a Software Toolkit. Engineering complete medical time-series workflows is hard. The primary barrier to collaborative research between ML and medicine seldom lies in any particular algorithm. Instead, the difficulty is operational [6, 48, 54]—i.e. in coordinating the entire data science process, from handling missing/irregularly sampled patient data all the way to validation on different populations [4, 55–60]. Figure 3 (Illustrative Usage; reconstructed below from the flattened figure text) gives a prototypical structure of API calls for constructing a prediction pathway model. Clairvoyance is modularized to abide by established fit/transform/predict design patterns; ellipses denote additional configuration, and further modules (treatments, sensing, uncertainty, etc.) expose similar interfaces:

```python
"""Configure Data Preprocessing"""
preprocessing = PipelineComposer(FilterNegative(...), OneHotEncoder(...), Normalizer(...), ...)
"""Configure Problem Specification"""
specification = ProblemMaker(problem_class='online', max_seq_len=24,
                             label=['ventilator'], treatment=None, window=4, ...)
"""Configure Data Imputation"""
imputation = PipelineComposer(Imputation(type='static', model_name='...', ...),
                              Imputation(type='temporal', model_name='...', ...))
"""Configure Feature Selection"""
feature_selection = PipelineComposer(FeatureSelection(type='static', model_name='...', ...),
                                     FeatureSelection(type='temporal', model_name='...', ...))
"""Configure Pathway Model"""
prediction_model = Prediction(model_name='...', parameter_dict={...}, ...)
"""Load Datasets"""
data_train = DataLoader.load(static_dir='...', temporal_dir='...', ...)
data_test = DataLoader.load(static_dir='...', temporal_dir='...')
"""Execute Pipeline"""
for component in [preprocessing, specification, imputation, feature_selection]:
    data_train = component.fit_transform(data_train)
    data_test = component.transform(data_test)
prediction_model.fit(data_train, ...)
test_output = prediction_model.predict(data_test, ...)
```

Clairvoyance gives a single unified roof under which clinicians and researchers alike can readily address such common issues—with the only requirement being that the data conform to the standard EAV open schema for clinical records (i.e. patient key, timestamp, parameter, and value). Under a simple, consistent API, Clairvoyance encapsulates all major steps of time-series modeling, including (a.) loading and (b.) preprocessing patient records, (c.) defining the learning problem, handling missing or irregular samples in both (d.) static and (e.) temporal contexts, (f.) conducting feature selection, (g.) fitting prediction models, performing (h.) calibration and (i.) uncertainty estimation of model outputs, (j.) applying global or instance-wise methods for interpreting learned models,
(k.) computing evaluation metrics, and (l.) visualizing results. Figure 2 shows a high-level overview of major components in the pipeline, and Figure 3 shows an illustrative example of usage. All component modules are designed around the established fit-transform-predict paradigms, and the modeling workflow is based around a single chain of API calls. In this manner, each stage in the pipeline is extensible with little effort: novel techniques developed for specific purposes (e.g. a new state-of-the-art imputation method) can be seamlessly integrated via simple wrappers (see Appendix G for an example of how this can be done for any existing method, e.g. from sklearn). This stepwise composability aims to facilitate rapid experimentation and deployment for research, as well as simplifying collaboration and code-sharing. Package documentation/tutorials give further software details. As an Empirical Standard. Evaluating any algorithm depends on its context. For instance, how well a proposed classifier ultimately performs is invariably coupled with the upstream feature-selection method it is paired with [44]. Likewise, the accuracy of a state-of-the-art imputation method cannot be assessed on its own: with respect to different downstream prediction models, more sophisticated imputation may actually yield inferior performance relative to simpler techniques [13, 14]—especially if components are not jointly optimized [15]. While current research practices typically seek to isolate individual gains through "all-else-equal" configurations in benchmarking experiments, the degree of actual overlap in pipeline configurations across studies is lacking: there is often little commonality in the datasets used, preprocessing done, problem types, model classes, and prediction endpoints. This dearth of empirical standardization may not optimally promote practical assessment/reproducibility, and may obscure/entangle true progress. (Tables 6–7 in Appendix A give a more detailed illustration.) Clairvoyance aims to serve as a structured evaluation framework to provide such an empirical standard. After all, in order to be relevant from a real-world medical standpoint, assessment of any single proposed component (e.g. a novel ICU mortality predictor) can—and should—be contextualized in the entire end-to-end workflow as a whole. Together, the 'problem-maker', 'pipeline-composer', and all the pipeline component modules aim to simplify the process of specifying, benchmarking, and (self-)documenting full-fledged experimental setups for each use case. At the end of the day, while results from external validation are often heterogeneous [2, 59, 61], improving transparency and reproducibility greatly facilitates code re-use and independent verification [54, 56, 57]. Just as the "environment" abstraction in OpenAI Gym does for reinforcement learning, the "pipeline" abstraction in Clairvoyance seeks to promote accessibility and fair comparison as pertains to medical time series. (Figure 4: (a) Example: SASH 'decomposed' as SMS (fulfilled here by DKL) followed by a combiner (stacking ensemble); (b) Example: SPSC 'decomposed' as PSC (fulfilled here by SKL) followed by SMS (fulfilled here by DKL).) As an Optimization Interface. Especially in cross-disciplinary clinical research—and during the initial stages of experimentation—automated optimization may alleviate a potential scarcity of expertise in the specifics of design and tuning.
The Clairvoyance pipeline abstraction serves as a software interface for optimization algorithms—through which new and existing techniques can be applied, developed, and tested in a more systematic, realistic setting. In particular, by focusing on the temporal aspect of medical time series, this adds a new dimension to classes of autoML problems. Briefly (see Figure 5), consider the standard task of hyperparameter optimization (for a given model) [62]. By optimizing over classes of algorithms, the combined algorithm selection and hyperparameter optimization ("CASH") problem [63–65] has been approached in healthcare settings by methods such as progressive sampling, filtering, and fine-tuning [50, 66]. By further optimizing over combinations of pipeline components, the pipeline selection and configuration ("PSC") problem [67] has also been tackled in clinical modeling via techniques such as fast linear search ("FLASH") [68] and structured kernel learning ("SKL") [67, 69]. What bears further emphasis is that for clinical time series, the temporal dimension is critical due to the potential for temporal distribution shifts within time-series data—a common phenomenon in the medical setting (we refer to [53, 70, 71] for additional background). Precisely to account for such temporal settings, the stepwise model selection ("SMS") problem [71] has recently been approached by methods such as relaxed parameter sharing ("RPS") [53] as well as deep kernel learning ("DKL") [71, 72]. Further, the pipeline interface naturally allows extending this to define the stepwise algorithm selection and hyperparameter optimization ("SASH") problem, or even—in the most general case—the stepwise pipeline selection and configuration ("SPSC") problem. Although these latter two are new—and clearly hard—problems (with no existing solutions), Figure 4 shows simple examples of how the interface allows minimally adapting the SMS and PSC sub-problems (which do have existing solutions) to form feasible (approximate) solutions. Two distinctions are due. First, Clairvoyance is a pipeline toolkit, not an autoML toolkit. It is not our goal to (re-)implement new or existing optimization algorithms—which abound in the literature. Rather, the standardized interface is precisely what enables existing implementations to be plugged in, as well as allowing new autoML techniques to be developed and validated within a realistic medical pipeline. All that is required is for the optimizing agent to expose an appropriate 'optimize' method given candidate components, and for such candidates to expose a 'get-hyperparameter-space' method. Second—but no less importantly—we must emphasize that we are not advocating removing human oversight from the healthcare loop. Rather, the pipeline simply encourages systematizing the initial development stages in clinical ML, which stand to benefit from the existing literature on efficient autoML techniques. Design Principles. Our philosophy is based on the authors' experience in prototyping and developing real-world collaborative projects in clinical time series. • Pipeline First, Models Second: Our first emphasis is on reproducibility: the process of engineering and evaluating complete medical time-series workflows needs to be clear and transparent.
Concretely, this manifests in the strict "separation of concerns" enforced by the high-level API of each component module along the pipeline (see e.g. Figure 3). With the 'problem-maker' and 'pipeline-composer' as first-class objects, the central abstraction here is the pipeline itself, while the intricacies and configurations of individual model choices (e.g. a specific deep-learning temporal imputation method) are limited to within each component module. • Be Minimal and Unintrusive: Our second emphasis is on standardization: while workflow development needs to be unified and systematic, learning to use the framework should be intuitive as well. Concretely, this manifests in the API's adherence to the existing and popular 'fit-transform-predict' paradigm (see e.g. sklearn) in all component modules—both 'along' the pipeline steps, as well as 'across' the pathways that define the patient's healthcare lifecycle (see Figure 2). This enables easy adoption and rapid prototyping—qualities that are paramount given the degree of collaborative research and cross-disciplinary code-sharing required in healthcare-related research. • Encourage Extension: Our third emphasis is on extensibility: given that novel methods are proposed in the ML community every day, the pipeline components should be easily extensible to incorporate new algorithms. Concretely, this manifests in the encapsulated design for models within each component module: in order to integrate a new component method (e.g. from another researcher's code, or from an external package) into the framework, all that is required is a simple wrapper class that implements the 'fit', 'predict', and 'get-hyperparameter-space' methods (a minimal sketch follows below); likewise, for an optimization agent (see the subsection on the optimization interface above), all that is required is to expose an 'optimize' method. Worked Examples. For a discussion of the choice of built-in techniques to include with the initial release, see Appendix C. Appendix E gives a worked example of using the Clairvoyance pipeline to train and use a model in a standard setting (for this, we use the predictions pathway). Appendix F gives a worked example of using the optimization interface to perform stepwise model selection (for this, we use the treatment effects pathway for variety). Appendix G gives an example of how a generic wrapper can be written for integrating an external model/algorithm that is not already implemented in the current version of Clairvoyance. Finally, the software repository contains Jupyter notebooks and top-level API code with examples of pathways and optimization.
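As a concrete illustration of the wrapper pattern named in the 'Encourage Extension' principle, here is a minimal sketch of how an external sklearn estimator might be wrapped; the class name, the `data_train.features`/`labels` accessors, and the hyperparameter-space format are illustrative assumptions rather than Clairvoyance's actual API.

```python
# A minimal sketch of a wrapper exposing the three pipeline methods.
from sklearn.linear_model import LogisticRegression

class SklearnPredictorWrapper:
    """Wraps an sklearn estimator behind the pipeline's interface."""

    def __init__(self, C=1.0):
        self.model = LogisticRegression(C=C)

    def fit(self, data_train):
        # `data_train` is assumed to expose feature and label arrays.
        self.model.fit(data_train.features, data_train.labels)
        return self

    def predict(self, data_test):
        # Return the positive-class probability for each test example.
        return self.model.predict_proba(data_test.features)[:, 1]

    def get_hyperparameter_space(self):
        # Candidate values an optimizing agent's 'optimize' method may search.
        return {"C": [0.01, 0.1, 1.0, 10.0]}
```

An optimizing agent can then treat this wrapper like any built-in component, iterating over the values returned by `get_hyperparameter_space` when configuring the pipeline.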
The authors present a new package aimed at improving the design and validation of pipelines using medical time-series data. The pipeline covers many aspects of time-series modeling, including pre-processing, prediction, treatment effect estimation, calibration, etc. The package, as depicted in the paper, appears to be very comprehensive and well motivated.
PURE: An Uncertainty-aware Recommendation Framework for Maximizing Expected Posterior Utility of Platform
1 INTRODUCTION. Commercial recommendation systems have been widely applied on prevalent content distribution platforms such as YouTube, TikTok, Amazon and Taobao. During the interactive process on the recommendation platform, users may find content of interest and avoid the information-overload problem with the help of recommendation services. Meanwhile, the platform may gain commercial benefits from user behaviors on the platform such as clicks and purchases. As the platform may serve millions of users and can determine which contents are recommended, it naturally has some advantages over individual users. Therefore, it is crucial for the platform to make full use of these advantages in order to maximize its commercial benefits. One typical advantage of the platform is its information advantage, i.e., it may collect plenty of information on users and items for conducting better recommendation. Typical state-of-the-art recommendation systems (Covington et al., 2016; Guo et al., 2017; Ren et al., 2019; Zhou et al., 2019) take such information into consideration, including user profiles, item features and historical interactions between users and recommended items. It is worth noting that information on item features is always directly incorporated into the recommendation models without considering that the user may have different levels of uncertainty over different item dimensions (which can be regarded as different hidden attributes describing different high-order features of the item). For instance, when buying a new coat on the platform, a user may be sure that the logistics is very fast, as she (he) has bought clothes from the same online store before (i.e., the user has low uncertainty over the logistics). But she (he) may be uncertain about the quality of the coat, since it is of a brand that she (he) does not know much about (i.e., the user has high uncertainty over the quality). Thus, it is crucial for the platform to figure out whether it is possible to leverage the user uncertainty over different item dimensions to maximize the platform utility, and if yes, how? (Footnote 1, item dimensions: typical state-of-the-art solutions for recommendation systems always encode each item as an embedding; the item dimensions refer to the different dimensions of the item embedding, which can be interpreted as different high-order features.) Actually, with consideration of the user uncertainty over different item dimensions, we show that more commercial benefits can be gained from the item dimensions with higher uncertainty. Another advantage of the platform is that it owns the capacity to determine which items to display to users, and may thus affect users' behaviors. It has been shown by many works (Kamenica & Gentzkow, 2011; Immorlica et al., 2019; Abdollahpouri & Mansoury, 2020) that the display signal itself can strongly affect users' behaviors, and affected behaviors would apparently result in different benefits for the platform. Regarding the recommendation as a game between the platform and the users, it is possible for the platform to achieve more commercial benefits from the game by taking a proper display (recommendation) policy. However, though there are works exploring the impact of recommendation policies, it is still not well studied in the recommendation area how to explicitly model and exploit the impact of the display policy on users.
In this paper, we propose an uncertainty-aware expected Posterior Utility maximization framework for REcommendation platforms (PURE for short). We take both of the previously mentioned factors, i.e., user uncertainty over different item dimensions and the influence of the display policy on the user, into account, and introduce a generic utility function which can be flexibly adjusted for different real-world scenarios. We then formulate the problem of maximizing the expected posterior utility of the platform as a constrained non-convex optimization problem, and correspondingly propose a solution based on the Alternating Direction Method of Multipliers (ADMM, Boyd et al. (2011)) to derive an approximately optimal policy. To verify the effectiveness of the proposed framework, extensive experiments are conducted on data collected from a real-world recommendation platform. Furthermore, we also provide practical insights derived from carefully designed experiments and empirically reveal how the platform utilizes its information advantage to achieve more commercial benefits, which may help to better understand and conduct commercial recommendation. 2 RELATED WORK. Existing state-of-the-art recommendation systems (Zhou et al., 2018; Pi et al., 2019; Qu et al., 2016) mainly try to make full use of the information advantage of the platform. These works take into consideration information including user profiles, item features, contextual information and historical interactions between users and recommended items. Typically, some works (Qu et al., 2016; Zhou et al., 2018; Li et al., 2019) focus on how to achieve better feature interactions or conduct better user-interest modeling, while other works (Ren et al., 2019; Pi et al., 2019) pay more attention to utilizing extremely long sequential interactive information. However, most of them ignore the existence of user uncertainty over different item dimensions, which might be crucial for conducting better commercial recommendation. In the research area exploring the influence of display on the information receiver, Bayesian Persuasion (Kamenica & Gentzkow, 2011) is one of the most influential works, which theoretically proves that the information sender may benefit from displaying proper information to the receiver. Some works (Immorlica et al., 2019; Mansour et al., 2016) follow this idea and strive to incentivize exploration via information asymmetry in scenarios such as recommendation. In another research direction that develops Reinforcement Learning (RL) based solutions for recommendation scenarios, a series of works (Dulac-Arnold et al., 2015; Zhao et al., 2018; Chen et al., 2019) model the recommendation process as a Markov Decision Process (MDP) and maximize the long-term reward by utilizing learned sequential patterns, which can also be regarded as taking the display (recommendation) influence into consideration to some extent. 3 METHODOLOGY. 3.1 OPTIMAL POLICY FOR MAXIMIZING THE PLATFORM'S EXPECTED POSTERIOR UTILITY. From the perspective of the platform, the optimal recommendation policy is the one with maximal expected utility (i.e., maximal expected commercial benefits). As mentioned before, the influence of the display policy on users cannot be ignored, as it strongly affects the commercial benefits of the platform.
In this paper, taking the impact of the display policy on users into consideration, we formulate the platform's optimal policy π_u for user u over a given item set I as follows:

π_u = argmax_π Σ_{i∈I} π_i U_u(i | display; π), s.t. ∀i ∈ I, π_i ≥ 0 and Σ_{i∈I} π_i = 1, (1)

where U_u(i | display; π) is the posterior utility of recommending item i to user u with consideration of the influence of the display policy π. With this formulation, the remaining problem is how to model the posterior utility properly. In the following, we illustrate two reasonable assumptions in detail, which make it possible to model the posterior utility with consideration of the user uncertainty over different item dimensions as well as the influence of the display policy on the user. As discussed before, it is crucial to explicitly consider the user uncertainty over different item dimensions when conducting recommendation. For a given user, we assume that the representation of an item is sampled from a multivariate Gaussian distribution, and adopt the variances to describe the user uncertainty over different item dimensions, which is formalized in the following assumption. Assumption 1 (Assumption of uncertainty (correlation) over different item dimensions). For a user u, the representation of item i is sampled from an n-dimensional multivariate Gaussian distribution N(μ_{u,i}, Σ_{u,i}), i.e., the probability density function of the representation is:

p_{u,i}(x) = (1 / √((2π)^n |Σ_{u,i}|)) exp(−(1/2) (x − μ_{u,i})^T Σ_{u,i}^{−1} (x − μ_{u,i})), (2)

where x ∈ R^n, and μ_{u,i} and Σ_{u,i} denote the mean vector and the covariance matrix, respectively. The covariance matrix can be decomposed as Σ_{u,i} = D_{u,i} C_{u,i} D_{u,i}, where D_{u,i} is the diagonal standard-deviation matrix and C_{u,i} is the correlation matrix (see Barnard et al. (2000) for more information). Thus, the covariance matrix can depict the user uncertainty over different item dimensions (via the diagonal standard-deviation matrix D_{u,i}) as well as the correlation between different item dimensions (via the correlation matrix C_{u,i}). Note that we provide a practical method to obtain μ_{u,i} and Σ_{u,i} in Section 4.1, while any other reasonable approach to get μ_{u,i} and Σ_{u,i} can be applied. From the perspective of users, they may try to understand the display policy of the platform through the interactive process. When an item i is recommended to a user u, the user may consider the corresponding display probability, which can influence his behavior. One reasonable assumption is that the probability of displaying item i, from the perspective of user u, is governed by a linear function of the similarity between the item representation x and the representations of historically recommended items. Without loss of generality, we formalize this assumption as follows. Assumption 2 (Assumption of the influence of the display policy on the user). Given display policy π and item representation x, the probability of recommending item i to user u from the user's perspective is:

p_{u,i}(display | x; π) = Φ(a v_u^T x + b), (3)

where display denotes the event of displaying (recommending) the corresponding item to the target user, a and b are scale hyper-parameters with a > 0, Φ(z) = ∫_{−∞}^{z} (1/√(2π)) e^{−t²/2} dt, and v_u = E_{i∼π}[μ_{u,i}] = Σ_{i∈I} π_i μ_{u,i}.
Note that Φ is the cumulative distribution function of the standard normal distribution, which is widely adopted to map its input into the range [0, 1] in models such as the probit model, and v_u is the expected value of the item representations w.r.t. the display policy π over the item set I. Thus, with a > 0, higher similarity (calculated by the inner product) between the current item representation x and the expected representation of displayed items leads to a higher likelihood, which is reasonable as discussed before. So far, we have presented the two crucial assumptions adopted in our framework. In the following, we introduce the utility function and derive the formula of the posterior utility. To ease the illustration, we denote the utility function of recommending item i to user u given sampled item representation x as f(w_u, x), where w_u is typically an embedding on the user side, and f(w_u, x) depicts the utility (benefit) that the platform can gain from recommending item i to user u with item representation x. For instance, when we regard the click-through rate (CTR) as the platform's utility, one simple choice is to model f as the inner product of w_u and x, where w_u can be regarded as the preference vector of user u. Note that the function f can be flexibly adjusted to fit the requirements of different scenarios. For example, when we try to maximize the gross merchandise volume (GMV) in a recommendation scenario, f can be defined as the product of the corresponding CTR, conversion rate (CVR) and item price. We can now combine the two assumptions and the utility function f to derive the formula of the posterior utility. By applying Bayes' theorem and the law of total probability, the posterior utility of recommending item i to user u is:

U_u(i | display; π) = ∫_{R^n} f(w_u, x) p_{u,i}(x | display; π) dx (4)
= ∫_{R^n} f(w_u, x) p_{u,i}(x) p_{u,i}(display | x; π) / p_{u,i}(display; π) dx. (5)

Equation 5 provides a way to model the posterior utility of the platform, taking into account both the user uncertainty over different item dimensions and the influence of the display policy on the user. However, it is still challenging to derive the optimal policy π_u for a given user u, as the right-hand side of Equation 5 is not a closed-form expression (which makes it intractable to calculate the exact value of the posterior utility).
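Although Equation 5 has no closed form, its value can be approximated numerically; the following is a minimal self-normalized Monte Carlo sketch of the estimate (our own illustration under the two assumptions above, not the paper's ADMM-based solution), with names mirroring the notation in Equations 2–5.

```python
# A minimal sketch: sample x ~ N(mu, Sigma), weight by Phi(a * v^T x + b),
# and take a self-normalized weighted average of the utility f(w, x).
import numpy as np
from scipy.stats import norm

def posterior_utility(f, w_u, mu_ui, sigma_ui, v_u, a=1.0, b=0.0,
                      n_mc=10000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mu_ui, sigma_ui, size=n_mc)  # x ~ p_{u,i}(x)
    weights = norm.cdf(a * (x @ v_u) + b)                    # p(display | x; pi)
    # p_{u,i}(display; pi) is estimated by the mean weight, so the estimate
    # reduces to a self-normalized weighted average over the samples.
    return float(np.sum(weights * f(w_u, x)) / np.sum(weights))

# Example with the inner-product (CTR-style) utility discussed above:
# u_hat = posterior_utility(lambda w, x: x @ w, w_u, mu, Sigma, v_u)
```

Plugging such an estimator into Equation 1 still leaves a constrained non-convex problem in π, which is why the paper resorts to an ADMM-based solution rather than direct enumeration.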
The paper aims to model the posterior utility of showing an item given a static display policy, where the utility function captures both (a) uncertainty over item dimensions from the user's perspective, and (b) the influence of the policy on the user. It is motivated by the fact that most recommender systems don't take into account that a user may be highly uncertain about value/utility in certain dimensions (e.g., the color of a product) while more certain about others. The platform can use this information explicitly while optimizing what to show.
Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning
1 INTRODUCTION. Deep learning methods have made immense progress on many reinforcement learning (RL) tasks in recent years. However, the performance of these methods still pales in comparison to human abilities in many cases. Contemporary deep reinforcement learning models still have a way to go to achieve robust generalization (Nichol et al., 2018), efficient planning over flexible timescales (Silver & Ciosek, 2012), and long-term credit assignment (Osband et al., 2019). Model-based methods in RL (MBRL) can potentially mitigate this issue (Schrittwieser et al., 2019). These methods observe sequences of state-action pairs, and from these observations are able to learn a self-supervised model of the environment. With a well-trained world model, these algorithms can then simulate the environment and look ahead to future events to establish better value estimates, without requiring expensive interactions with the environment (Sutton, 1991). Model-based methods can thus be far more sample-efficient than their model-free counterparts when multiple objectives are to be achieved in the same environment. However, for model-based approaches to be successful, the learned models must capture the relevant mechanisms that guide the world, i.e., they must discover the right causal variables and structure. Indeed, models sensitive to causality have been shown to be robust and easily transferable (Bengio et al., 2019; Ke et al., 2019). As a result, there has been a recent surge of interest in learning causal models for deep reinforcement learning (de Haan et al., 2019; Dasgupta et al., 2019; Nair et al., 2019; Goyal et al., 2019; Rezende et al., 2020). Yet many challenges remain, and a systematic framework to modulate environment causality structure and evaluate models' capacity to capture it is currently lacking, which motivates this paper. What limits the use of causal modeling approaches in many AI tasks and realistic RL settings is that most of the current causal learning literature presumes abstract domain representations in which the cause and effect variables are explicit and given (Pearl, 2009). Methods are needed to automate the inference and identification of such causal variables (i.e. causal induction) from low-level state representations (like images). Although one solution is manual labeling, it is often impractical and in some cases impossible to manually label all the causal variables. In some domains, the causal structure may not be known. Further, critical causal variables may change from one task to another, or from one environment to another. And in unknown environments, one ideally aims for an RL agent that can induce the causal structure of the environment from observations and interventions. In this work, we seek to evaluate various model-based approaches parameterized to exploit the structure of environments purposefully designed to modulate causal relations. We find that modular network architectures appear particularly well suited for causal learning. Our conjecture is that causality can provide a useful source of inductive bias to improve the learning of world models. Shortcomings of current RL development environments, and a path forward. Most existing RL environments are not a good fit for investigating causal induction in MBRL, as they have a single fixed causal graph, lack proper evaluation, and entangle distinct aspects of causal learning.
For instance, many tasks have complicated causal structures as well as unobserved confounders. These issues make it difficult to measure progress in causal learning. As we look towards the next great challenges for RL and AI, there is a need to better understand the implications of varying different aspects of the underlying causal graph for various learning procedures. Hence, to systematically study various aspects of causal induction (i.e., learning the right causal graph from pixel data), we propose a new suite of environments as a platform for investigating inductive biases, causal representations, and learning algorithms. The goal is to disentangle distinct aspects of causal learning by allowing the user to choose and modulate various properties of the ground-truth causal graph, such as the structure and size of the graph, the sparsity of the graph, and whether variables are observed or not (see Figure 1 (a)-(d)). We also provide evaluation criteria for measuring causal induction in MBRL that, we argue, help measure progress and facilitate further research in these directions. We believe that the availability of standard experiments, and of a platform that can easily be extended to test different aspects of causal modeling, will play a significant role in speeding up progress in MBRL. Insights and causally sufficient inductive biases. Using our platform, we investigate the impact of explicit structure and modularity on causal induction in MBRL. We evaluated two typical monolithic models (autoencoders and variational autoencoders) and two typical models with explicit structure: graph neural networks (GNNs) and modular models (shown in Figure 5). Graph neural networks have a factorized representation of variables and can model undirected relationships between variables. Modular models also have a factorized representation of variables, along with directed edges between variables, which can model directed relationships such as A causing B but not the other way around. We investigated the performance of such structured approaches on learning from causal graphs of varying complexity, e.g., the size of the graph, the sparsity of the graph, and the length of cause-effect chains (Figure 1 (a)-(d)). The proposed environment yields novel insights in a number of settings. In particular, we found that even our naive implementation of modular networks can scale significantly better than the other models (including graph neural networks). This suggests that explicit structure and modularity, such as factorized representations and directed edges between variables, help with causal induction in MBRL. We also found that graph neural networks, such as the ones from Kipf et al. (2019), are good at modeling pairwise interactions and significantly outperform monolithic models in this setting. However, they have difficulty modeling complex causal graphs with long cause-effect chains, such as the chain graph (chain graphs are illustrated in Figure 1 (i)). Another finding is that evaluation metrics such as likelihood and ranking loss do not always correspond to the performance of these models in downstream RL tasks. 2 ENVIRONMENTS FOR CAUSAL INDUCTION IN MODEL-BASED RL. Causal models are frequently described using graphs in which the edges represent causal relationships.
In these structural causal models, the existence of a directed edge from A to B indicates that intervening on A directly impacts B, and the absence of an edge indicates no direct interventional impact (see Appendix A for formal definitions). In parallel, world models in MBRL describe the underlying data-generating process of the environment by modeling the next state given the current state-action pair, where the actions are interventions in the environment. Hence, learning world models in MBRL can be seen as a causal induction problem. Below, we first outline how a collection of simple causal structures can capture real-world MBRL cases, and we propose a set of elemental environments to express them for training. Second, we describe precise ways to evaluate models in these environments. 2.1 MINI-ENVIRONMENTS: EXPLICIT CASES FOR CAUSAL MODULATION IN RL. The ease with which an agent learns a task greatly depends on the structure of the environment's underlying causal graph. For example, it might be easier to learn causal relationships in a collider graph (see Figure 1 (a)), where all interactions are pairwise, meaning that an intervention on one variable X_i impacts no more than one other variable X_j; hence the cause-effect chain has a length of at most 1. However, causal graphs such as full graphs (see Figure 1 (a)) can have more complex causal interactions, where intervening on one variable can impact up to n − 1 other variables for graphs of size n (see Figure 1). Therefore, one important aspect of understanding a model's performance on causal induction in MBRL is to analyze how well the model performs on causal graphs of varying complexity. Important factors that contribute to the complexity of discovering the causal graph are the structure, size, and edge sparsity of the graph, and the length of its cause-effect chains (Figure 1). The presence of unobserved variables also adds to the complexity. The size of the graph increases complexity because the number of possible graphs grows super-exponentially with the size of the graph (Eaton & Murphy, 2007; Peters et al., 2016; Ke et al., 2019). The sparsity of graphs also impacts the difficulty of learning, as observed in (Ke et al., 2019): given graphs of the same size, denser graphs are often more challenging to learn. Furthermore, the length of the cause-effect chains can also impact learning. We have observed in our experiments that graphs with shorter cause-effect chains, such as colliders (Figure 1 (a)), can be easier to model than chain graphs with longer cause-effect chains. Finally, unobserved variables, which commonly exist in the real world, can greatly impact learning, especially if they are confounders (shared causes of observed variables). Taking these factors into account, we designed two suites of (toy) environments: the physics environment and the chemistry environment, which we discuss in more detail in the following section. They are designed with a focus on the underlying causal graph and thus have a minimalist design that is easy to visualize. 2.1.1 PHYSICS ENVIRONMENT: WEIGHTED-BLOCK PUSHING. The physics environment simulates very simple physics in the world. It consists of blocks of different, unique weights. The rule for interaction between blocks is that heavier objects can push lighter ones.
Interventions amount to moving a particular block, and the consequence depends on whether the block next to it (if present) is heavier or lighter (a minimal sketch of this rule follows at the end of this subsection). For an accurate world model, inferring the weights thus becomes essential. Additionally, the weight of the objects can be either observed through the intensity of the color or unobserved, leading to the two environment settings described below. The underlying causal graph is an acyclic tournament, shown in Figure 3. For more details about the setup, please refer to Appendix E. Fully observed setting. In the fully observed setting, all objects are given a particular color and the weight of each block is represented by the intensity of the color. Once the agent learns this underlying causal structure, it does not have to perform interventions on new objects in order to infer how they will interact with the others. Unobserved setting. In this setting, the weight of each object is not directly observable from its color. The agent thus needs to interact with the objects in order to understand the order of weights associated with the blocks. In this case, the weight of objects needs to be inferred through interventions. We consider two sub-divisions of this setting: FixedUnobserved, where there is a fixed assignment between the shapes of the objects and their weights, and Unobserved, where there is no fixed assignment between shape and weight, making it a more challenging environment. We refer the reader to Appendix E.2 for details.
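The following is a minimal sketch of the block-pushing intervention rule described above; it is our own illustration of the stated dynamics (the grid size, the recursive pushing of chains of lighter blocks, and the dictionary-based state are all assumptions), not the environments' released code.

```python
# Intervening moves a block; the move succeeds only if any block occupying
# the target cell is strictly lighter (weights are unique per the setup).
def intervene(positions, weights, block, direction, grid_size=5):
    """positions: dict block -> (row, col); weights: dict block -> float."""
    r, c = positions[block]
    dr, dc = {"up": (-1, 0), "down": (1, 0),
              "left": (0, -1), "right": (0, 1)}[direction]
    target = (r + dr, c + dc)
    if not (0 <= target[0] < grid_size and 0 <= target[1] < grid_size):
        return positions                       # move off-grid: no effect
    occupant = next((b for b, p in positions.items() if p == target), None)
    if occupant is not None:
        if weights[occupant] >= weights[block]:
            return positions                   # heavier occupant: blocked
        # Heavier blocks push lighter ones: recursively push the occupant.
        positions = intervene(positions, weights, occupant, direction, grid_size)
        if positions[occupant] == target:      # occupant itself was blocked
            return positions
    positions = dict(positions)                # copy, then apply the move
    positions[block] = target
    return positions
```

Under this rule, observing the outcome of a single move is informative about the relative weights of the blocks involved, which is exactly what the unobserved settings force the agent to infer through interventions.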
This paper reviews model-based approaches to integrating causal inference into reinforcement learning (RL) in different environments (application areas). The authors provide software to analyse how three types of models ("monolithic", i.e. latent-space models without a graph-like structure of the latent space; graph neural networks (GNN); and "modular", i.e. the C-SWM model (Kipf et al., 2020)) perform in two artificial "environments" devised by the authors (physics and chemistry), based on a number of metrics, some of them also proposed by the authors. The main contributions are the platform for evaluating models in the environments and the insights from the experiments performed on the selected models (taken from the existing literature).
N-Bref : A High-fidelity Decompiler Exploiting Programming Structures
1 INTRODUCTION. Decompilation, the process of recovering source code from binary, is useful in many situations where it is necessary to analyze or understand software for which source code is not available. For example, decompilation is highly valuable in many security and forensics applications (Lin et al. (2010); Lee et al. (2011); Brumley et al. (2011)). Given a binary executable, an ideal decompiler generates a high-level program that preserves both the semantics and the functionality of the source code. However, this process is difficult, as the data structures and semantics are largely destroyed or obfuscated during compilation. Inspired by their remarkable performance on neural machine translation (NMT) tasks (Liu et al. (2019); Vaswani et al. (2017); Dai et al. (2019); Devlin et al. (2018); Dong & Lapata (2016)), recent works (Fu et al. (2019); Katz et al. (2019)) leverage NMT models for neural-based decompilation and achieve promising performance on small code snippets. To make neural-based decompilation useful in practice, many challenges remain: (C1) Current state-of-the-art neural architectures for machine translation—the transformer (Vaswani et al. (2017)) or its variants (Dai et al. (2019); Devlin et al. (2018); Liu et al. (2019))—focus on sequential data (e.g., language), while neural decompilers deal with data with intrinsic structures (e.g., trees/graphs) and long-range dependencies. (C2) The main decompilation task consists of many sub-tasks (e.g., data-type recovery, control/dataflow recovery); training one neural network cannot solve them all. (C3) Practical data types (e.g., pointers) are not modeled, and compiling configurations need to be known beforehand (Fu et al. (2019)). (C4) Due to a lack of unification in terms of library usage, variable types, and/or control-flow complexity, simply crawling public repositories does not work well: source code of different styles can be compiled into identical binary code (i.e., "expression collision", or EC), which yields issues when evaluating decompiled code against the original source code. To our best knowledge, no code generation toolkit with configurable code complexity exists. In this paper, we present N-Bref (Footnote 1: N-Bref is the abbreviation for "neural-based binary reverse engineering framework"), an end-to-end neural-based decompiler framework that learns to decompile assembly back into source code. For (C1), we design a backbone structural transformer by incorporating inductive Graph Neural Networks (GNNs) (Hamilton et al. (2017)) to represent the low-level code (LLC) as control/dataflow dependency graphs and the source code as an Abstract Syntax Tree (AST). To better model long-range correlations in the structural representations, we add a graph neural network after each of the self-attention layers in the transformer. The AST decoder expands the AST of the source code in a tree fashion to better capture the dependency of each predicted node. We also adopt memory augmentation (Cornia et al. (2019)) and new tokenizing methods to improve the scalability of our neural networks with the growing size of programs. The backbone network learns to iteratively generate the AST of the source code from a structured representation of the assembly.
For (C2) and (C3), we decouple decompilation into two sub-tasks: a data-type solver (DT-Solver) and a source-code generator (SC-Gen), both using the same backbone structural transformer with different parameters. The output of the data-type solver is used as the decoder input for source-code generation. For (C4), we design a dataset generator to generate training data and to test and analyze the performance of different design principles across configurable code complexity. Different from conventional dataset generators (Yang et al. (2011); IntelC++compiler (2017)) used in programming-language studies, our generator produces code styles similar to those written by human programmers, has a unified source-code representation that avoids EC, has configurable complexity and data types to facilitate factor analysis, and is specifically designed for learning-based methodologies. Extensive experiments show that, on our new metrics, N-Bref outperforms the transformer baseline/the previous neural-based decompiler (Fu et al. (2019)) by 3.5%/6.1% and 5.5%/8.8% on the data-type recovery and source-code generation tasks, respectively. Furthermore, on 5 human-written Leetcode solutions, N-Bref shows 4.1%/6.0% and 6.0%/9.7% margins over the transformer/the previous neural decompiler in data-type recovery and source-code generation, respectively. We also perform a comprehensive study of the design components of neural-based decompilers across different dataset configurations. In summary, this paper makes the following contributions: We construct an end-to-end decompilation system by integrating an LLC encoder, an AST encoder, an AST decoder, and a set of novel embedding methods in a holistic manner. Our new architecture bridges the gap between low-level and high-level code by transforming both into a graph space. We perform a comprehensive analysis of the influence of each neural-based decompiler design component on the overall program-recovery accuracy across different dataset configurations. We corroborate the design performance on various generated benchmarks and Leetcode tasks. We boost decompilation performance by decomposing the decompilation process into separate tasks: data-type recovery and AST generation. In addition, we present corresponding new metrics to evaluate data-type recovery and source-code generation. We develop the first dataset-generation tool for neural-based decompiler development and testing. It randomly generates programs with configurable complexity and data types; it also unifies the source-code representation to prevent "expression collision". 2 PRELIMINARIES OF DECOMPILERS. Decompilation takes an executable file as input and attempts to create high-level source code that is more semantically meaningful and can be compiled back. Figure 1 shows a low-level code snippet disassembled from a stripped binary and the corresponding high-level program. A commonly used low-level code (LLC) is assembly (ASM). An assembly program is a sequence of instructions that can be executed on a particular processor architecture (e.g. MIPS, x86-64). The first token of each instruction is called an "opcode", which specifies the operation to be performed by the instruction. Many instructions in a program operate on processor registers (a small amount of fast storage in the processor) or immediate values to perform arithmetic operations, such as shifting (e.g. shl, shr), floating-point multiplication (e.g. mulss), etc.
Other instructions include (1) memory instructions that load (Figure 1 (b), Line 1) or store (Line 9) data from memory/register to register/memory; and (2) branch instructions that conditionally (Line 6) or unconditionally (Line 10) redirect program execution to a different sequence. (Footnote 2: the complete assembly code and graph are shown in Appendices H & I.) Each instruction has a certain internal structure, depending on the opcode. For example, in Line 8 of Figure 1 (b), the first operand is a floating-point value in memory; mulss multiplies this value with the destination register (xmm0) and stores the result back into xmm0. Besides, connections also exist between instructions: (i) branch instructions (e.g., je, jmp) reveal the 'control flow' of the high-level program; (ii) the register that stores the new value of mulss (Line 8) is consumed later as a source register (Line 9). These data movements reveal the 'data flow' of the program. In this paper, we formulate the low-level instructions as a graph using the instruction structure and the control flow and data flow between nodes, as shown in Figure 1 (b). High-level programs can be represented by their equivalent abstract syntax trees (ASTs) (Baxter et al. (1998)) during code generation (Figure 1 (a)). This representation has many advantages over sequential representations: (i) adjacent nodes are logically closer in the AST than in sequential representations, (ii) error propagation in sequential expansion can be alleviated in a tree decoder, and (iii) the AST grammar helps prevent erroneous predictions. 3 N-BREF OVERVIEW. In this section, we provide an overview of our design components with an illustrative example. Figure 2 shows an example of the prediction procedure. The Backbone Structural Transformer. Our structural transformer has three components: (1) an LLC encoder, (2) an AST encoder, and (3) an AST decoder (detailed in Sec. 4). The LLC encoder takes as input the low-level instructions converted from binaries using a disassembler. The AST encoder takes as input a previous (partial) AST, and the predictions of the AST decoder are AST nodes, which can be equivalently converted to a high-level program. As mentioned earlier, we formulate the input low-level code as graphs and the high-level code as tree structures. As the AST of the data declarations is very distinct from that of the rest of the code (Figure 1 (a)) in a high-level program, we decompose decompilation into two tasks: the data-type solver (DT-Solver) and the source-code generator (SC-Gen). Both have the backbone structural transformer. Prediction Procedure. Figure 1 shows an example of the code recovery process of N-Bref. The assembly graph and the high-level AST are the inputs of the LLC encoder and the AST encoder. The input of the AST decoder is the tree path from the root node to the expansion node. Initially, a single root node is fed into the AST encoder/decoder. Each time a new node is generated by the decoder, we update the AST and use it as the AST encoder input in the next prediction step. We expand the AST in a breadth-first (BFS) fashion. The AST contains explicit terminal nodes, which are tokens with no children, such as registers, numerics, variable references and variable types. Non-terminal nodes (e.g. the binary operator '=') must have children; otherwise there is a syntax error. A branch stops expanding when its leaf nodes are all terminal nodes.
Note that during training, we apply 'teacher forcing' by attaching the correct node label to the AST encoder input at each step (see Appendix E for the formal algorithm). Cascading DT-Solver and SC-Gen. As shown in Figure 1, we divide the AST into two parts: (i) the AST of the data types and (ii) the AST of the main code body. The two parts are generated by DT-Solver and SC-Gen, respectively. This division allows the network to focus on each task individually and to resolve more complicated data types. During testing, DT-Solver first generates the left part of the AST in Figure 1, and SC-Gen then continues the expansion from this intermediate result. During training, the initial data type input to SC-Gen is the golden (ground-truth) data type AST of the program.
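To make the expansion procedure concrete, the following is a minimal Python sketch of the breadth-first AST expansion described above. It is a simplification under stated assumptions: `predict_children` stands in for the AST decoder (a hypothetical callable that maps the root-to-node path to predicted child labels), and the terminal-node categories are reduced to a small set of placeholder tags.

```python
from collections import deque

# Placeholder terminal categories; the real model uses concrete tokens.
TERMINALS = {"register", "numeric", "var_ref", "var_type"}

class Node:
    def __init__(self, label, parent=None):
        self.label, self.parent, self.children = label, parent, []

def path_to_root(node):
    # Tree path fed to the AST decoder: root -> expansion node.
    path = []
    while node is not None:
        path.append(node.label)
        node = node.parent
    return list(reversed(path))

def expand_ast(root, predict_children):
    frontier = deque([root])
    while frontier:                      # BFS over nodes awaiting expansion
        node = frontier.popleft()
        if node.label in TERMINALS:      # terminal nodes have no children
            continue
        for label in predict_children(path_to_root(node)):
            child = Node(label, parent=node)
            node.children.append(child)
            frontier.append(child)       # a branch stops once all leaves are terminal
    return root
```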
The authors present a neural-based decompilation framework. They generate synthetic input data to ensure that source code examples have a consistent code style that can be learned more easily. They predict the types of variables and the actual code in two separate steps. For both steps, they employ a custom transformer architecture where every second layer of the encoder is a graph neural network. There are two separate encoders of this kind: one conditions on a CFG obtained from static analysis of the input assembly, while the other conditions on the partial output AST that has been generated thus far.
SP:ec1ff351fa8fb2cd61f9662a9d0e7db6531fcb4f
Meta Back-Translation
1 INTRODUCTION . While Neural Machine Translation (NMT) delivers state-of-the-art performance across many translation tasks, this performance is usually contingent on the existence of large amounts of training data (Sutskever et al., 2014; Vaswani et al., 2017). Since large parallel training datasets are often unavailable for many languages and domains, various methods have been developed to leverage abundant monolingual corpora (Gulcehre et al., 2015; Cheng et al., 2016; Sennrich et al., 2016; Xia et al., 2016; Hoang et al., 2018; Song et al., 2019; He et al., 2020). Among such methods, one particularly popular approach is back-translation (BT; Sennrich et al. (2016)). In BT, in order to train a source-to-target translation model, i.e., the forward model, one first trains a target-to-source translation model, i.e., the backward model. This backward model is then employed to translate monolingual data from the target language into the source language, resulting in a pseudo-parallel corpus. This pseudo-parallel corpus is then combined with the real parallel corpus to train the final forward translation model. While the resulting forward model from BT typically enjoys a significant boost in translation quality, we identify two inherent weaknesses of BT. First, while the backward model provides a natural way to utilize monolingual data in the target language, the backward model itself is still trained on the parallel corpus. This means that the backward model's quality is as limited as that of a forward model trained in the vanilla setting. Hoang et al. (2018) proposed iterative BT to avoid this weakness, but this technique requires multiple rounds of retraining models in both directions, which is slow and expensive. Second, we do not understand how the pseudo-parallel data translated by the backward model affects the forward model's performance. For example, Edunov et al. (2018) observed that pseudo-parallel data generated by sampling or by beam search with noise from the backward model trains better forward models, even though these generating methods typically result in lower BLEU scores compared to standard beam search. While Edunov et al. (2018) attributed their observation to the diversity of the generated pseudo-parallel data, diversity alone is obviously insufficient – some degree of quality is necessary as well. In summary, while BT is an important technique, training a good backward model for BT is either hard or slow and expensive, and even with a good backward model, there is no single recipe for how to use it to train a good forward model. In this paper, we propose a novel technique to alleviate both aforementioned weaknesses of BT. Unlike vanilla BT, which keeps the trained backward model fixed and merely uses it to generate pseudo-parallel data to train the forward model, we continue to update the backward model throughout the forward model's training. Specifically, we update the backward model to improve the forward model's performance on a held-out set of ground-truth parallel data. We provide an illustrative example of our method in Fig. 1, where we highlight how the forward model's held-out performance depends on the pseudo-parallel data sampled from the backward model. This dependency allows us to mathematically derive an end-to-end update rule to continue training the backward model throughout the forward model's training.
As our derivation technique is similar to meta-learning (Schmidhuber, 1992; Finn et al., 2017), we name our method Meta Back-Translation (MetaBT). In theory, MetaBT effectively resolves both aforementioned weaknesses of vanilla BT. First, the backward model continues its training based on its own generated pseudo-parallel data, and hence is no longer limited by the available parallel data. Furthermore, MetaBT only trains one backward model and then trains one pair of forward and backward models, eschewing the expense of the multiple iterations of Iterative BT (Hoang et al., 2018). Second, since MetaBT updates its backward model in an end-to-end manner based on the forward model's performance on a held-out set, MetaBT no longer needs to explicitly understand the effect of its generated pseudo-parallel data on the forward model's quality. Our empirical experiments verify the theoretical advantages of MetaBT with definitive improvements over strong BT baselines in various settings. In particular, on the classical benchmark of WMT En-De 2014, MetaBT leads to a +1.66 BLEU score improvement over sampling-based BT. Additionally, we discover that MetaBT allows us to extend the initial parallel training set of the backward model by including parallel data from slightly different languages. Since MetaBT continues to refine the backward model, the negative effect of the language discrepancy is gradually mitigated throughout the forward model's training, boosting low-resource translation tasks by up to +1.20 BLEU score. 2 A PROBABILISTIC PERSPECTIVE OF BACK-TRANSLATION . To facilitate the discussion of MetaBT, we introduce a probabilistic framework to interpret BT. Our framework helps to analyze the advantages and disadvantages of a few methods to generate pseudo-parallel data, such as sampling, beam search, and beam search with noise (Sennrich et al., 2016; Edunov et al., 2018). Analyses of these generating methods within our framework also motivate MetaBT and further allow us to mathematically derive MetaBT's update rules in § 3. Our Probabilistic Framework. We treat a language $S$ as a probability distribution over all possible sequences of tokens. Formally, we denote by $P_S(x)$ the distribution of a random variable $x$, each instance of which is a sequence of tokens. To translate from a source language $S$ into a target language $T$, we learn the conditional distribution $P_{S,T}(y|x)$ for sentences from the languages $S$ and $T$ with a parameterized probabilistic model $P(y|x;\theta)$. Ideally, we learn $\theta$ by minimizing the objective $J(\theta) = \mathbb{E}_{x,y \sim P_{S,T}(x,y)}[\ell(x,y;\theta)]$, where $\ell(x,y;\theta) = -\log P(y|x;\theta)$ (1). Since $P_{S,T}(x,y) = P_{S,T}(y)P_{S,T}(x|y) = P_T(y)P_{S,T}(x|y)$, we can refactor $J(\theta)$ from Eq. 1 as $J(\theta) = \mathbb{E}_{y \sim P_T(y)}\mathbb{E}_{x \sim P_{S,T}(x|y)}[\ell(x,y;\theta)]$ (2). Motivating BT. In BT, since it is not feasible to draw exact samples $y \sim P_T(y)$ and $x \sim P_{S,T}(x|y)$, we rely on two approximations. First, instead of sampling $y \sim P_T(y)$, we collect a corpus $D_T$ of monolingual data in the target language $T$ and draw samples $y \sim \text{Uniform}(D_T)$. Second, instead of sampling $x \sim P_{S,T}(x|y)$, we derive an approximate distribution $\hat{P}(x|y)$ and sample $x \sim \hat{P}(x|y)$. Before we explain the derivation of $\hat{P}(x|y)$, let us state that with these approximations, the objective $J(\theta)$ from Eq.
2 becomes the following BT objective: $\hat{J}_{BT}(\theta) = \mathbb{E}_{y \sim \text{Uniform}(D_T)}\mathbb{E}_{x \sim \hat{P}(x|y)}[\ell(x,y;\theta)]$ (3). Rather unsurprisingly, $\hat{P}(x|y)$ in Eq. 3 above is derived from a pre-trained parameterized backward translation model $P(x|y;\psi)$. For example: • $\hat{P}(x|y) \triangleq \mathbb{1}[x = \arg\max_{\dot{x}} P(\dot{x}|y;\psi)]$ results in BT via beam search (Sennrich et al., 2016). • $\hat{P}(x|y) \triangleq P(x|y;\psi)$ results in BT via sampling (Edunov et al., 2018). • $\hat{P}(x|y) \triangleq \mathbb{1}[x = \arg\max_{\dot{x}} \tilde{P}(\dot{x}|y;\psi)]$ results in BT via noisy beam search (Edunov et al., 2018), where $\tilde{P}(x|y;\psi)$ denotes the joint distribution of the backward model $P(x|y;\psi)$ and the noise. Therefore, we have shown that in our probabilistic framework for BT, the three common techniques to generate pseudo-parallel data from a pre-trained backward model correspond to different derivations from the backward model's distribution $P(x|y;\psi)$. Our framework naturally motivates two questions: (1) given a translation task, how do we tell which derivation of $\hat{P}(x|y)$ from $P(x|y;\psi)$ is better than another? and (2) can we derive better choices for $\hat{P}(x|y)$ from a pre-trained backward model $P(x|y;\psi)$ according to the answer to question (1)? Metric for the Generating Methods. In the existing literature, the answer to our first question is relatively straightforward. Most papers view the method of generating pseudo-parallel data as a hyper-level design choice, i.e., similar to the choice of an architecture like the Transformer or an LSTM, and hence practitioners choose one method over another based on the performance of the resulting forward model on held-out validation sets. Automatically Deriving Good Generating Methods. We now turn to the second question that our probabilistic framework motivates. Thanks to the generality of our framework, every choice of $\hat{P}(x|y)$ results in an optimization objective. Using this objective, we can train a forward model and measure its validation performance to evaluate our choice of $\hat{P}(x|y)$. This process of choosing and evaluating $\hat{P}(x|y)$ can be posed as the following bi-level optimization problem: Outer loop: $\hat{P}^* = \arg\max_{\hat{P}} \text{ValidPerformance}(\theta^*_{\hat{P}})$; Inner loop: $\theta^*_{\hat{P}} = \arg\min_\theta \hat{J}_{BT}(\theta;\hat{P})$, where $\hat{J}_{BT}(\theta;\hat{P}) = \mathbb{E}_{y \sim \text{Uniform}(D_T)}\mathbb{E}_{x \sim \hat{P}(x|y)}[\ell(x,y;\theta)]$ (4). The optimal solution of this bi-level optimization problem can potentially train a forward model that generalizes well, as the forward model learns from a pseudo-parallel dataset and yet achieves good performance on a held-out validation set. Unfortunately, directly solving this optimization problem is not feasible. Not only is the inner loop quite expensive, as it includes training a forward model from scratch according to $\hat{P}$, but the outer loop is also poorly defined, as we have no restriction on the space that $\hat{P}$ can take. Next, in § 3, we introduce a restriction on the space that $\hat{P}$ can take, and show that our restriction turns the task of choosing $\hat{P}$ into a differentiable problem that can be solved with gradient descent. 3 META BACK-TRANSLATION . Continuing our discussion from § 2, we design Meta Back-Translation (MetaBT), which finds a strategy to generate pseudo-parallel data from a pre-trained backward model such that a forward model trained on the generated pseudo-parallel data achieves strong performance on a held-out validation set. The Usage of "Validation" Data.
Throughout this section, readers will see that MetaBT makes extensive use of the "validation" set to provide feedback for refining the pseudo-parallel data's generating strategy. Thus, to avoid nullifying the meaning of a held-out validation set, we henceforth refer to the ground-truth parallel dataset on which the forward model's performance is measured throughout its training as the meta validation dataset and denote it by $D_{\text{MetaDev}}$. Other than this meta validation set, we also have a separate validation set for hyper-parameter tuning and model selection. A Differentiable Bi-level Optimization Problem. We now discuss MetaBT, starting with formulating a differentiable version of Problem 4. Suppose we have pre-trained a parameterized backward translation model $P(x|y;\psi)$. Instead of designing the generating distribution $\hat{P}(x|y)$ by applying actions such as sampling or beam search to $P(x|y;\psi)$, we let $\hat{P}(x|y) \triangleq P(x|y;\psi)$ and continue to update the backward model's parameters $\psi$ throughout the course of training the forward model. Clearly, under this association $\hat{P}(x|y) \triangleq P(x|y;\psi)$, the parameters $\psi$ control the generating distribution of the pseudo-parallel data used to train the forward model. By setting the differentiable parameters $\psi$ as the optimization variable for the outer loop, we turn the intractable Problem 4 into a differentiable one: Outer loop: $\psi^* = \arg\max_\psi \text{Performance}(\theta^*(\psi), D_{\text{MetaDev}})$; Inner loop: $\theta^*(\psi) = \arg\min_\theta \mathbb{E}_{y \sim \text{Uniform}(D_T)}\mathbb{E}_{x \sim \hat{P}(x|y)}[\ell(x,y;\theta)]$ (5). Bi-level optimization problems whose outer and inner loops both operate on differentiable variables, like Problem 5, have appeared repeatedly in the recent meta-learning literature, spanning many areas such as learning initializations (Finn et al., 2017), learning hyper-parameters (Baydin et al., 2018), designing architectures (Liu et al., 2019), and reweighting examples (Wang et al., 2019b). We thus follow their successful techniques and design a two-phase alternating update rule for the forward model's parameters $\theta$ in the inner loop and the backward model's parameters $\psi$ in the outer loop. Phase 1: Update the Forward Parameters $\theta$. Given a batch of monolingual target data $y \sim \text{Uniform}(D_T)$, we sample the pseudo-parallel data $(\hat{x} \sim P(x|y;\psi), y)$ and update $\theta$ as if $(\hat{x}, y)$ were real data. For simplicity, assuming that $\theta$ is updated using gradient descent on $(\hat{x}, y)$ with a learning rate $\eta_\theta$, we have: $\theta_t = \theta_{t-1} - \eta_\theta \nabla_\theta \ell(\hat{x}, y; \theta_{t-1})$ (6). Phase 2: Update the Backward Parameters $\psi$. Note that Eq. 6 means that $\theta_t$ depends on $\psi$, because $\hat{x}$ is sampled from a distribution parameterized by $\psi$. This dependency allows us to compute the meta validation loss of the forward model at $\theta_t$, which we denote by $J(\theta_t(\psi), D_{\text{MetaDev}})$, and back-propagate this loss to compute the gradient $\nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}})$. Once we have this gradient, we can perform a gradient-based update on the backward parameters $\psi$ with learning rate $\eta_\psi$: $\psi_t = \psi_{t-1} - \eta_\psi \nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}})$ (7). Computing $\nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}})$. Our derivation of this gradient utilizes two techniques: (1) the chain rule to differentiate $J(\theta_t(\psi), D_{\text{MetaDev}})$ with respect to $\psi$ via $\theta_t$; and (2) the log-gradient trick from the reinforcement learning literature (Williams, 1992) to propagate gradients through the sampling of the pseudo-source $\hat{x}$. We refer readers to § A.1 for the full derivation.
Here, we present the final result: $\nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}}) \approx -\left[\nabla_\theta J(\theta_t, D_{\text{MetaDev}})^\top \cdot \nabla_\theta \ell(\hat{x}, y; \theta_{t-1})\right] \cdot \nabla_\psi \log P(\hat{x}|y;\psi)$ (8). In our implementation, we leverage recent advances in higher-order AutoGrad tools to efficiently compute the gradient dot-product term via Jacobian-vector products. By alternating the update rules in Eq. 6 and Eq. 7, we have the complete MetaBT algorithm. Remark: An Alternative Interpretation of MetaBT. The update rule of the backward model in Eq. 8 strongly resembles the REINFORCE equation from the reinforcement learning literature. This similarity suggests that the backward model is trained as if it were an agent in reinforcement learning. From this perspective, the backward model is trained so that the pseudo-parallel data sampled from it maximize the "reward": $R(\hat{x}) = \nabla_\theta J(\theta_t, D_{\text{MetaDev}})^\top \cdot \nabla_\theta \ell(\hat{x}, y; \theta_{t-1})$ (9). Since this dot product measures the directional similarity of the two gradients, MetaBT can be interpreted as optimizing the backward model so that the forward model's gradient on pseudo-parallel data sampled from the backward model is similar to the forward model's gradient computed on the meta validation set. This is a desirable goal because the reward guides the backward model's parameters to favor samples that are similar to those in the meta validation set.
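The two-phase update can be summarized in a short PyTorch-style sketch. This is a minimal illustration, not the authors' implementation: `forward_model.loss`, `backward_model.sample`, and `backward_model.log_prob` are hypothetical interfaces, and the gradient dot-product reward of Eq. 9 is computed explicitly rather than via the Jacobian-vector products mentioned above.

```python
import torch

def metabt_step(forward_model, backward_model, y_mono, meta_batch, opt_fwd, opt_bwd):
    # Phase 1 (Eq. 6): sample pseudo-source x_hat ~ P(x|y; psi), update theta on (x_hat, y).
    with torch.no_grad():
        x_hat = backward_model.sample(y_mono)           # hypothetical sampler
    loss_train = forward_model.loss(x_hat, y_mono)      # l(x_hat, y; theta_{t-1})
    opt_fwd.zero_grad()
    loss_train.backward()
    g_train = [p.grad.detach().clone() for p in forward_model.parameters()]
    opt_fwd.step()                                      # theta_{t-1} -> theta_t

    # Phase 2 (Eqs. 7-9): REINFORCE-style update of psi on the meta validation batch.
    x_meta, y_meta = meta_batch
    meta_loss = forward_model.loss(x_meta, y_meta)      # J(theta_t, D_MetaDev)
    g_meta = torch.autograd.grad(meta_loss, list(forward_model.parameters()))
    # Reward (Eq. 9): alignment between the training and meta-validation gradients.
    reward = sum((gm * gt).sum() for gm, gt in zip(g_meta, g_train)).detach()
    log_prob = backward_model.log_prob(x_hat, y_mono)   # log P(x_hat | y; psi)
    opt_bwd.zero_grad()
    (-reward * log_prob.sum()).backward()               # descend on -R * log-prob (Eq. 8)
    opt_bwd.step()
```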
This paper applies techniques from meta-learning to derive an end-to-end update rule for a workflow involving back-translation, specifically maximizing the translation performance of the forward model while updating the backward model to produce back-translations that are maximally useful for improving the forward model's quality (as measured on a meta validation set). The approach is evaluated on WMT EN-DE and EN-FR and compared against a simple sampling strategy for back-translation, and against dual learning. In addition, the paper considers a multilingual setup where the translation direction is low-resource, and the initial back-translation model is trained on a mix of parallel data from the language pair of interest, as well as auxiliary data with a high-resource, related source language.
SP:0586aff632d77cbf60cefae509a93bc22c95655e
Response Modeling of Hyper-Parameters for Deep Convolutional Neural Networks
1 INTRODUCTION . The choice of hyper-parameters (HPs) – such as the initial learning rate, batch size, and weight decay – has been shown to greatly impact the generalization performance of Deep Neural Network (DNN) training (Keskar et al., 2017; Wilson et al., 2017; Li et al., 2019; Yu & Zhu, 2020). With the increasing complexity of network architectures (from highly to less parameterized models) and training datasets (number of classes and samples), manually tuning these parameters becomes a practically expensive and highly challenging task. Therefore, the problem of Hyper-Parameter Optimization (HPO) is central to developing highly efficient training workflows. Recent studies have shifted toward developing meaningful metrics that explain effective HP tuning for DNN training. This has been done in several behavioural studies, including changes in loss surfaces (Keskar et al., 2017), input perturbation analysis (Novak et al., 2018), and the energy norm of the covariance of gradients (Jastrzebski et al., 2020), to name a few. In fact, the abstract formulation of the HPO problem, as highlighted by Bergstra & Bengio (2012), can be modelled by $\lambda^* \leftarrow \arg\min_{\lambda \in \Lambda} \{\mathbb{E}_{x \sim \mathcal{M}}[\mathcal{L}(x; \mathcal{A}_\lambda(X^{(\text{train})}))]\}$ (1), where $X^{(\text{train})}$ and $x$ are random variables, modelled by some natural distribution $\mathcal{M}$, that represent the training and validation data, respectively, $\mathcal{L}(\cdot)$ is some expected loss, and $\mathcal{A}_\lambda(X^{(\text{train})})$ is a learning algorithm that maps $X^{(\text{train})}$ to some learned function, conditioned on the hyper-parameter set $\lambda$. Note that this learned function, denoted $f(\theta; \lambda; X^{(\text{train})})$, involves its own inner optimization problem. The HPO problem in (1) comprises two nested optimization problems, in which optimization over $\lambda$ cannot occur until optimization over $f(\theta; \lambda; X^{(\text{train})})$ is complete. This fact imposes a heavy computational burden on HPO. Bergstra & Bengio (2012) reduce this burden by attempting to solve $\lambda^* \leftarrow \arg\min_{\lambda \in \Lambda} \tau(\lambda)$ (2), where $\tau$ is called the hyper-parameter response function or response surface, and $\Lambda$ is some set of choices for $\lambda$ (i.e., the search space). The goal of the response surface is to introduce an auxiliary function parameterized by $\lambda$ whose minimization is directly correlated with minimization of the objective function $f(\theta)$. The lack of advancement in analytical models of the response surface has led to estimating it by (a) running multiple trials of different HP configurations (e.g., grid searching), using evaluation against validation sets as an estimate of $\tau$; or (b) characterizing the distribution model of a configuration's performance metric (e.g., cross-validation performance) to numerically define a relationship between $\tau$ and $\lambda$. An important shift occurred when Bergstra & Bengio (2012) showed that random searching is more efficient than grid searching, particularly when optimizing high-dimensional HP sets. To mitigate the time complexity and increase overall performance, subsequent methods attempted to characterize the distribution model of such random configurations (Snoek et al., 2012; Eggensperger et al., 2013; Feurer et al., 2015a;b; Klein et al., 2017; Falkner et al., 2018) or employed population control (Young et al., 2015; Jaderberg et al., 2017) or early stopping (Karnin et al., 2013; Li et al., 2017; 2018).
However, these methods suffer from (a) additional internal HPs that require manual tuning facilitated by extensive domain knowledge; (b) heavy computational overhead, whereby the optimization process takes days to weeks in most cases (Li et al., 2017; Falkner et al., 2018; Yu & Zhu, 2020); (c) poor generalization across model selection, datasets, and general experimental configurations (e.g., optimizers); and (d) strong dependence on manually defined search ranges that heavily influence results (Choi et al., 2020; Sivaprasad et al., 2020). Importantly, these ranges are generally chosen based on intuition, expert domain knowledge, or some form of a priori knowledge. In this paper, we employ the notion of knowledge gain (Hosseini & Plataniotis, 2020) to model a response surface – solvable with low computational overhead – and use it to perform automatic HPO that requires no a priori knowledge while still achieving competitive performance against baselines and existing state-of-the-art (SOTA) methods. Our goal is therefore to develop a fully autonomous, domain-independent algorithm that achieves competitive (not necessarily superior) performance. We restrict our response surface to a single HP, namely the initial learning rate η, and support this choice by noting that the initial learning rate is the most sensitive and important HP for final model performance (Goodfellow et al., 2016; Bergstra & Bengio, 2012; Yu & Zhu, 2020) (see also Figure 10 in Appendix C). We demonstrate how our method's optimization directly correlates with optimizing model performance. Finally, we provide empirical measures of the computational requirements of our algorithm and present thorough experiments on a diverse set of Convolutional Neural Networks (CNNs) and Computer Vision datasets that demonstrate the generalization of our response surface. The main contributions of this work are as follows: 1. Inspired by knowledge gain, we introduce a well-defined, analytical response surface using the low-rank factorization of convolution weights (Equation 5). 2. We propose a dynamic tracking algorithm with low computational overhead, on the order of minutes to hours, dubbed autoHyper, to optimize our response surface and conduct HPO. 3. This algorithm requires no domain knowledge, human intuition, or manual intervention, and is not bound by a manually set search space, allowing for completely automatic setting of the initial learning rate – a novelty for deep learning practitioners. 1.1 RELATED WORKS . We leave extensive analysis of the related works to established surveys (Luo, 2016; He et al., 2019; Yu & Zhu, 2020) but present a general overview here. Grid searching and manual tuning techniques, which require extensive domain knowledge, trial various configurations and retain the best. Random search (Bergstra & Bengio, 2012) was proven to be more efficient, particularly in high-dimensional cases, but these methods suffer from redundancy and high computational overhead. Bayesian optimization techniques (Snoek et al., 2012; Eggensperger et al., 2013; Feurer et al., 2015a;b; Klein et al., 2017) attempt to characterize the distribution model of random HP configurations. They fail to properly define the response surface τ and resort to estimating it by rationalizing a Gaussian process over sampled points.
The use of neural networks instead of Gaussian processes to model the generalization performance was shown to have better computational performance (Snoek et al., 2015; Springenberg et al., 2016). Furthermore, early-stopping methods (Karnin et al., 2013; Li et al., 2017; 2018) spawn various configurations with equal resource distributions, successively stopping poor-performing configurations and reassigning resources dynamically. Population-based training (PBT) methods (Young et al., 2015; Jaderberg et al., 2017) follow an evolutionary approach by spawning various experimental configurations and adapting poor-performing trials to warm-restart with inherited learnable parameters and HPs. In addition, other methods such as orthogonal array tuning (Zhang et al., 2019), box-constrained derivative-free optimization (Diaz et al., 2017), the reverse dynamics algorithm for SGD optimization (Maclaurin et al., 2015), and hybrid methods (Swersky et al., 2013; 2014; Domhan et al., 2015; Falkner et al., 2018; Kandasamy et al., 2016) exist but demonstrate no significant benefits over the previous techniques. Generally, each of these methods suffers from high computational overhead – on the order of days to weeks to converge – as well as additional internal HPs that heavily influence performance and generalization. In recent years, many Python libraries have also been developed that implement these optimization methods (Bergstra et al., 2013; Kotthoff et al., 2017; Akiba et al., 2019). 2 A NEW RESPONSE SURFACE MODEL . In this section, we motivate and develop a new response surface model $\tau(\lambda)$ based on the low-rank factorization of convolutional weights in a CNN. Unlike the common approach of cross-validation performance measures, we define a new measure of the well-posedness of the intermediate layers of a CNN and relate this measure to the general performance of the network. We start by adopting the low-rank measure of convolution weights. 2.1 KNOWLEDGE GAIN VIA LOW-RANK FACTORIZATION . Consider a four-way array (4-D tensor) $W \in \mathbb{R}^{N_1 \times N_2 \times N_3 \times N_4}$ as the convolution weights of an intermediate layer of a CNN ($N_1$ and $N_2$ being the height and width of the kernel, and $N_3$ and $N_4$ the input and output channel sizes, respectively). Under the convolution operation, the input feature maps $F^I \in \mathbb{R}^{W \times H \times N_3}$ are mapped to an arbitrary output feature map $F^O \in \mathbb{R}^{W \times H \times N_4}$ by $F^O_{:,:,i_4} = \sum_{i_3=1}^{N_3} F^I_{:,:,i_3} * W_{:,:,i_3,i_4}$. The weight tensor $W$ (4-D tensor) is unfolded into $W_d$ (2-D matrix), then factorized and decomposed as $W_d = \hat{U}_d\hat{\Sigma}_d\hat{V}_d^\top + E_d$, where $\hat{W}_d = \hat{U}_d\hat{\Sigma}_d\hat{V}_d^\top$ is the low-rank component. We note the importance of factorizing the unfolded matrix $W_d$ using a low-rank factorization (we use Variational Bayesian Matrix Factorization (VBMF) (Nakajima et al., 2013)). Without this factorization, the presence of noise will inhibit proper analysis. The noise $E_d$ will "capture" the randomness of initialization, and ignoring it will allow us to better analyze our unfolded matrices and make our response surface robust to the initialization method. Following the definition of Knowledge Gain (KG) from Hosseini & Plataniotis (2020), one can now define a metric for each network layer using the norm energy of the low-rank factorization as $G_d(\hat{W}_d) = \frac{1}{N_d \cdot \sigma_1(\hat{W}_d)} \sum_{i=1}^{N'_d} \sigma_i(\hat{W}_d)$ (3), where $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{N_d}$ are the associated low-rank singular values in descending order.
Here $N_d = \text{rank}\{\hat{W}_d\}$, and the unfolding can be done along either the input or the output channels, i.e., $d \in \{3, 4\}$. For more information on KG, as well as its efficient algorithmic computation, we refer the reader to Hosseini & Plataniotis (2020). The metric defined in (3) is normalized such that $G_d \in [0, 1]$ and can be used to probe CNN layers to monitor their efficiency in carrying information from input to output feature maps. We can further parameterize the KG by the HP set $\lambda$, epoch $t$, and network layer $\ell$ as $\bar{G}_{d,t,\ell}(\lambda)$. A perfect network and set of HPs would yield $\bar{G}_{d,T,\ell}(\lambda) = 1\ \forall \ell \in [L]$, where $L$ is the number of layers in the network and $T$ is the last epoch. In this case, each network layer functions as a better autoencoder through iterative training, and the carriage of information throughout the network is maximized. Conversely, $\bar{G}_{d,T,\ell}(\lambda) = 0$ indicates that the information flow is very weak, such that the mapping is effectively random ($\|E_d\|$ is maximized).
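As an illustration, a small NumPy sketch of the knowledge-gain computation in Eq. 3 is given below. It is a simplified stand-in, not the autoHyper implementation: a plain truncated SVD replaces the VBMF low-rank factorization, so the noise term $E_d$ is only crudely suppressed, and the retained rank is passed in by hand.

```python
import numpy as np

def knowledge_gain(weight, d=4, rank=None):
    # weight: conv tensor of shape (N1, N2, N3, N4); unfold along channel mode d in {3, 4}.
    axis = d - 1
    W_d = np.moveaxis(weight, axis, 0).reshape(weight.shape[axis], -1)
    sigma = np.linalg.svd(W_d, compute_uv=False)   # singular values, descending
    if rank is not None:                           # crude stand-in for the VBMF low-rank step
        sigma = sigma[:rank]
    N_d = len(sigma)
    return sigma.sum() / (N_d * sigma[0])          # Eq. 3, normalized to [0, 1]

# toy usage: KG of a randomly initialized 3x3 conv mapping 16 -> 32 channels
G = knowledge_gain(np.random.randn(3, 3, 16, 32), d=4)
```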
The paper proposes an efficient framework to search for the optimal initial learning rate for training neural networks. The key idea is to introduce Knowledge Gain, a metric derived from the singular values of each layer, to indicate the convergence quality of training. Taking advantage of this metric, a logarithmic grid search algorithm (AutoHyper) is proposed to search for the optimal learning rate according to Eq. 5 via short-time training (e.g., for 5 epochs), which is demonstrated to be very efficient and effective to some extent.
SP:ea8234f4533090e0cfe197ddb70f375f3ed49418
Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach
1 INTRODUCTION . Generative Adversarial Networks (GANs) have attracted much attention due to their ability to generate realistic-looking synthetic data (Goodfellow et al., 2014; Zhang et al., 2018; Liu et al., 2019b; Shaham et al., 2019; Dai et al., 2017; Kumar et al., 2017). To obtain a powerful GAN model, one needs to use data with a wide range of characteristics (Qi, 2019). However, such diverse data are often owned by different sources, and acquiring their data is often infeasible. For instance, most hospitals and research institutions are unable to share data with the research community, due to privacy concerns (Annas et al., 2003; Mercuri, 2004; lex, 2014; Gostin et al., 2009) and government regulations (Kerikmäe, 2017; Seddon & Currie, 2013). To circumvent the barrier of data sharing for GAN training, one may resort to Federated Learning (FL), a promising new decentralized learning paradigm (McMahan et al., 2017). In FL, one trains a centralized model but only exchanges model information with the different data sources. Since the central model has no direct access to the data at each source, privacy concerns are alleviated (Yang et al., 2019; Kairouz et al., 2019). This opens the opportunity for a federated GAN, i.e., a centralized generator with multiple local and privately hosted discriminators (Hardy et al., 2019). Each local discriminator is trained only on its local data and provides feedback to the generator w.r.t. synthesized data (e.g., gradients). A federated GAN empowers the GAN with much more diversified data without violating privacy constraints. Despite this promise, a convincing approach for training a federated GAN remains unknown. The major challenge comes from the non-identical local distributions of the multiple data sources/entities. The centralized generator is supposed to learn a mixture of these local distributions from different entities, whereas each discriminator is trained only on local data and learns one of the local distributions. The algorithm and theoretical guarantee of the traditional single-discriminator GAN (Goodfellow et al., 2014) do not easily generalize to this federated setting. A federated GAN should integrate feedback from local discriminators in an intelligent way, so that the generator can 'correctly' learn the mixture distribution. Directly averaging feedback from local discriminators (Hardy et al., 2019) results in a strong bias toward common patterns. However, such a non-identical distribution setting is classical in federated learning (Zhao et al., 2018; Smith et al., 2017; Qu et al., 2020), and the distinct characteristics of local data improve the diversity of the data. In this paper, we propose the first theoretically guaranteed federated GAN that can correctly learn the mixture of local distributions. Our method, called Universal Aggregation GAN (UA-GAN), focuses on the odds values rather than the predictions of the local discriminators. We simulate an unbiased centralized discriminator whose odds value approximates that of the mixture of local discriminators. We prove that by aggregating gradients from local discriminators based on the odds value of the central discriminator, we are guaranteed to learn the desired mixture of local distributions. A second theoretical contribution of this paper is an analysis of the quality of the federated GAN when the local discriminators cannot perfectly learn from their local datasets.
This is a real concern in a federated learning setting; the quantity and quality of local data can be highly variable given the limitations of real-world institutions/sites. Classical theoretical analysis of GANs (Goodfellow et al., 2014) assumes an optimal discriminator. To understand the consequences of suboptimal discriminators, we develop a novel analysis framework for the Jensen-Shannon divergence loss (Goodfellow et al., 2014; Lin, 1991) through the odds value of the local discriminators. We show that when the local discriminators behave suboptimally, the approximation error of the learned generator degrades linearly in their error. It is worth noting that our theoretical result on suboptimality also applies to the classical GAN. To the best of our knowledge, this is the first suboptimality bound for federated or classical GANs. In summary, the contributions are threefold. • We propose UA-GAN, a novel federated GAN approach that aggregates feedback from local discriminators through their odds values rather than their posterior probabilities. • We prove that UA-GAN correctly learns the mixture of local distributions when they are perfectly modeled by the local discriminators. • We prove that when the discriminators model their local distributions suboptimally, the generator's approximation error also grows linearly; we also show that our bound is tight. We show with various experiments that our method (UA-GAN) outperforms state-of-the-art federated GAN approaches both qualitatively and quantitatively. Training on large-scale heterogeneous datasets makes it possible to unleash the power of GANs. Federated GANs show their promise in utilizing large amounts of sensitive data without privacy and regulatory concerns. Our method, as the first theoretically guaranteed federated GAN, is one step further toward building such a foundation. Fig. 1 shows the workflow of UA-GAN. 2 RELATED WORK . Generative Adversarial Networks (GANs) have enjoyed much success in various machine learning and computer vision tasks (Zhang et al., 2018; Liu et al., 2019b; Shaham et al., 2019; Dai et al., 2017; Kumar et al., 2017). Numerous methods have been proposed for GAN training, such as Spectral Normalization (SN) (Miyato et al., 2018), zero-centered gradient penalty (Mescheder et al., 2018; Thanh-Tung et al., 2019), WGAN (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017), WGAN-TS (Liu et al., 2018), WGAN-QC (Liu et al., 2019a), etc. A common approach in practice is the conditional GAN (cGAN) (Mirza & Osindero, 2014), which uses supervision from data (e.g., class labels) to improve GAN performance. Multi-discriminator/-generator GANs have been proposed for various learning tasks. To train these GANs, one common strategy is to directly exchange generator/discriminator model parameters during training (Xin et al., 2020; Hardy et al., 2019). This is very expensive in communication; a simple ResNet18 (He et al., 2016a) has 11 million parameters (40 MB). Closest to us is MD-GAN (Hardy et al., 2019), which aggregates feedback (gradients) from local discriminators through averaging. It also swaps parameters between discriminators. None of these methods provides a theoretical guarantee like ours. Meanwhile, our method is the only one without model swapping, and it is thus much more efficient in bandwidth consumption.
Algorithm 1 Training Algorithm of UA-GAN.
1: Input: batch size m, datasets {D_j}, dataset fractions {π_j = n_j / n}.
2: Output: G, D_j for all j ∈ [K].
3: for t = 1, ..., T do
4:   {Work at the central server.}
5:   G generates synthetic data: x̂_i = G(z_i), i = 1, ..., m.
6:   Send the batch of synthetic data D_syn = {x̂_1, ..., x̂_m} to all K sites.
7:   for j = 1, ..., K do
8:     {Work at each local site.}
9:     Update the local discriminator D_j using real samples from D_j and the synthetic batch D_syn, based on Eq. 2.
10:    Output predictions and gradients for the synthetic data, D_j(x̂_i) and ∂D_j(x̂_i)/∂x̂_i, i = 1, ..., m, and send them to the central server.
11:  end for
12:  {Work at the central server.}
13:  Simulate the value of D_ua(x̂_i) via Eq. 4 for all i.
14:  Update G based on Eq. 5, using the gradients from the D_j's.
15: end for
Federated Learning (FL) (Kairouz et al., 2019; McMahan et al., 2016) offers the opportunity to integrate sensitive datasets from multiple sources through distributed training. Much work has been done tackling practical concerns in FL, such as convergence under non-IID data assumptions (Yu et al., 2019; Lian et al., 2017; Li et al., 2020), decentralized SGD without freezing parameters (Recht et al., 2011; Nguyen et al., 2018), communication efficiency (Konečnỳ et al., 2016; Li et al., 2019), and provable privacy guarantees (Alistarh et al., 2017; Wei et al., 2020). Federated GANs are also of great interest from a federated learning perspective. A successful federated GAN makes it possible to train a centralized model (e.g., a classifier) using data synthesized by the centralized generator. This becomes a solution when an existing trained FL model needs to be replaced and updated by advanced machine learning approaches, as one can retrain the model at any time using the generator. It also alleviates some privacy concerns of FL, e.g., the gradient leakage problem (Zhu et al., 2019). 3 METHOD . To introduce our algorithm, we first introduce notation and formalize the mixture distribution learning problem. Next, we present our Universal Aggregation approach and prove that it is guaranteed to learn the target mixture distribution. We also analyze the suboptimality of the model when the local discriminators are suboptimal. For ease of exposition, we mostly use the ordinary GAN to illustrate the algorithm and prove its theoretical properties. At the end of this section, we extend the algorithm, as well as its theoretical guarantees, to the conditional GAN (cGAN) (Mirza & Osindero (2014)). The empirical results in this work are established on the cGAN, since its training is much more controllable, thanks to the additional supervision by the auxiliary variable (e.g., classes of images). Notation and problem formulation. We assume a cross-silo FL setting (Kairouz et al. (2019)), i.e., K entities hosting K private datasets D_1, ..., D_K, with sizes n_1, ..., n_K. The total data size is $n = \sum_{j=1}^K n_j$. The overall goal is to learn a target mixture distribution $p(x) = \sum_{j=1}^K \pi_j p_j(x)$ (1), in which each component distribution $p_j(x)$ is approximated by the empirical distribution of the j-th local dataset D_j. The mixing weight $\pi_j$ is computed as the fraction of dataset D_j: $\pi_j = n_j/n$. In general, different mixture components $p_i(x)$ and $p_j(x)$ may (but need not) be non-identical, namely, $\exists x$ such that $p_i(x) \neq p_j(x)$. Universal Aggregation GAN: Now we are ready to introduce our multi-discriminator aggregation framework.
Pseudo-code for the UA framework can be found in Algorithm 1. We have a centralized (conditional) generator $G(z)$ seeking to learn the global distribution $p(x)$. At each local site, a discriminator $D_j(x)$ has access to the local dataset $D_j$. Note that data in $D_j$ are sampled only from $p_j(x)$. During training, the generator generates a batch of synthetic data and sends it to all $K$ sites. The $j$-th discriminator seeks to minimize the cross-entropy loss of the GAN from a local perspective (Goodfellow et al. (2014)): $\max_{D_j} V(G, D_j) = \mathbb{E}_{x \sim p_j(x)}[\log D_j(x)] + \mathbb{E}_{z \sim N(0, I_d)}[\log(1 - D_j(G(z)))]$ (2). To formalize the generator training, we first introduce the odds value, an essential quantity for our algorithm and its analysis. Definition 1 (odds value). Given a probability $\phi \in (0, 1)$, its odds value is $\Phi(\phi) \triangleq \frac{\phi}{1-\phi}$. Note that the definition requires $\phi \neq 1$. It is also straightforward to see that $\phi = \frac{\Phi(\phi)}{1+\Phi(\phi)}$. The central idea of the UA framework is to simulate a centralized discriminator $D_{ua}(x)$ that behaves like the mixture of all local discriminators (in terms of odds value). A well-behaved $D_{ua}(x)$ can then train the centralized generator $G$ using its gradient, just as in a classical GAN. We design $D_{ua}$ so that its odds value $\Phi(D_{ua}(x))$ is identical to the mixture of the odds values of the local discriminators: $\Phi(D_{ua}(x)) = \sum_{j=1}^K \pi_j \Phi(D_j(x))$ (3). Given $\Phi(D_{ua}(x))$, we can compute $D_{ua}(x)$ as $D_{ua}(x) = \frac{\Phi(D_{ua}(x))}{1 + \Phi(D_{ua}(x))}$ (4). Once the central discriminator $D_{ua}$ is simulated, the generator can be trained by minimizing the generator loss: $\min_G V(G, D_{ua}) = \mathbb{E}_{x \sim p(x)}[\log D_{ua}(x)] + \mathbb{E}_{z \sim N(0, I_d)}[\log(1 - D_{ua}(G(z)))]$ (5). Note that, mathematically, Eq. (5) can be written directly in terms of the local discriminators $D_j$ (by substituting in Eqs. (3) and (4)). In implementation, the simulated central discriminator can be written as a PyTorch or TensorFlow layer. Intuition. The reason we define $D_{ua}$'s behavior using a mixture of odds values instead of a mixture of predictions is mathematical. It has been shown in Goodfellow et al. (2014) that a perfect discriminator for a data distribution $p(x)$ and a fixed generator distribution $q(x)$ satisfies $D(x) = \frac{p(x)}{p(x)+q(x)}$. It can be shown that only with the odds-value equivalency can this optimal solution of the central discriminator $D(x)$ be recovered when each individual discriminator is optimal, i.e., $D_j(x) = \frac{p_j(x)}{p_j(x)+q(x)}$. This is not true if we define the central discriminator's behavior using the average prediction, i.e., $D_{ua} = \sum_j \pi_j D_j$. More details can be found in Theorem (4) and its proof. Remark 1 (Privacy Safety). For federated learning, it is essential to ensure that information about the real data is not leaked outside the local site. This privacy safety is guaranteed in our method. To optimize $G$ w.r.t. Eq. (5), we only need to optimize the second term and use the gradients on the synthetic images $G(z)$ from the local discriminators. One important concern is the optimal discriminator condition: $D_{ua}(x)$ is designed to be optimal only when the $D_j$'s are optimal. We need to investigate the consequences when the local discriminators $D_j$ are suboptimal. We provide an error bound on the learned distribution w.r.t. the suboptimality of the $D_j$'s in Corollary (2).
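The odds-value aggregation of Eqs. 3-4 is simple enough to state as a few lines of NumPy. The sketch below is illustrative only (function names are ours); the toy check at the end verifies the intuition above: when each $D_j$ is optimal, the simulated $D_{ua}$ recovers $p/(p+q)$.

```python
import numpy as np

def odds(d, eps=1e-7):
    # Odds value Phi(d) = d / (1 - d); eps guards against d == 1 (Definition 1).
    d = np.clip(d, eps, 1.0 - eps)
    return d / (1.0 - d)

def simulate_d_ua(local_preds, weights):
    # local_preds: (K, m) array of D_j(x_i); weights: pi_j = n_j / n, summing to one.
    phi_ua = np.tensordot(weights, odds(local_preds), axes=1)   # Eq. 3
    return phi_ua / (1.0 + phi_ua)                              # Eq. 4

# toy check: with optimal D_j = p_j / (p_j + q), D_ua recovers p / (p + q)
p = np.array([[0.2, 0.7], [0.6, 0.1]])   # two local densities at two sample points
q = np.array([0.4, 0.4])                  # generator density at the same points
pi = np.array([0.5, 0.5])
d_ua = simulate_d_ua(p / (p + q), pi)
assert np.allclose(d_ua, (pi @ p) / (pi @ p + q))
```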
The paper proposes a new method, UA-GAN, to train GANs in a federated learning setup. The method simulates a central discriminator D_ua such that the odds value of the central discriminator is equivalent to the weighted sum of the local discriminators' odds values. The central generator is then trained based on the simulated central discriminator. The paper provides theoretical analysis of the proposed method and conducts experiments on toy datasets and mixtures of real-world datasets that simulate a federated learning setup.
SP:3d65be849f99ab19e16eab84ba7cd7748d3ed8ad
Parameter-Efficient Transfer Learning with Diff Pruning
1 INTRODUCTION . Task-specific finetuning of pretrained deep networks is the dominant paradigm in contemporary NLP, achieving state-of-the-art results across a suite of natural language understanding tasks (Devlin et al., 2019; Liu et al., 2019c; Yang et al., 2019; Lan et al., 2020). While straightforward and empirically effective, this approach is difficult to scale to multi-task, memory-constrained settings (e.g., on-device applications), as it requires shipping and storing a full set of model parameters for each task. Inasmuch as these models learn generalizable, task-agnostic language representations through self-supervised pretraining, finetuning the entire model for each task is an especially inefficient use of model parameters. A popular approach to parameter efficiency is to learn sparse models for each task, where a subset of the final model parameters is exactly zero (Gordon et al., 2020; Sajjad et al., 2020; Zhao et al., 2020; Sanh et al., 2020). Such approaches often face a steep sparsity/performance tradeoff, and a substantial portion of nonzero parameters (e.g., 10%-30%) is still typically required to match the performance of the dense counterparts. An alternative is to use multi-task learning or feature-based transfer for more parameter-efficient transfer learning with pretrained models (Liu et al., 2019b; Clark et al., 2019; Stickland & Murray, 2019; Reimers & Gurevych, 2019; Feng et al., 2020). These methods learn only a small number of additional parameters (e.g., a linear layer) on top of a shared model. However, multi-task learning generally requires access to all tasks during training to prevent catastrophic forgetting (French, 1999), while feature-based transfer learning (e.g., based on task-agnostic sentence representations) is typically outperformed by full finetuning (Howard & Ruder, 2018). Adapters (Rebuffi et al., 2018) have recently emerged as a promising approach to parameter-efficient transfer learning within the pretrain-finetune paradigm (Houlsby et al., 2019; Pfeiffer et al., 2020a;b;c). Adapter layers are smaller, task-specific modules that are inserted between the layers of a pretrained model, which remains fixed and is shared across tasks. These approaches do not require access to all tasks during training, making them attractive in settings where one hopes to obtain and share performant models as new tasks arrive in a stream. Houlsby et al. (2019) find that adapter layers trained on BERT can match the performance of fully finetuned BERT on the GLUE benchmark (Wang et al., 2019a) while only requiring 3.6% additional parameters (on average) per task. In this work, we consider a setting similar to adapters but propose a new diff pruning approach with the goal of even more parameter-efficient transfer learning. Diff pruning views finetuning as learning a task-specific difference vector that is applied on top of the pretrained parameter vector, which remains fixed and is shared across different tasks. To learn this vector, we reparameterize the task-specific model parameters as $\theta_{\text{task}} = \theta_{\text{pretrained}} + \delta_{\text{task}}$, where the pretrained parameter vector $\theta_{\text{pretrained}}$ is fixed and the task-specific diff vector $\delta_{\text{task}}$ is finetuned. The diff vector is regularized with a differentiable approximation to the L0-norm penalty (Louizos et al., 2018) to encourage sparsity.
This approach can become parameter-efficient as the number of tasks increases, as it only requires storing the nonzero positions and weights of the diff vector for each task. The cost of storing the shared pretrained model remains constant and is amortized across multiple tasks. On the GLUE benchmark (Wang et al., 2019a), diff pruning can match the performance of the fully finetuned BERT baselines while finetuning only 0.5% of the pretrained parameters per task, making it a potential alternative to adapters for parameter-efficient transfer learning. 2 BACKGROUND : TRANSFER LEARNING FOR NLP . The field of NLP has recently seen remarkable progress through transfer learning with a pretrain-and-finetune paradigm, which initializes a subset of the model parameters for all tasks from a pretrained model and then finetunes on a task-specific objective. Pretraining objectives include context prediction (Mikolov et al., 2013), autoencoding (Dai & Le, 2015), machine translation (McCann et al., 2017), and more recently, variants of language modeling (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) objectives. Here we consider applying transfer learning to multiple tasks. We consider a setting with a potentially unknown set of tasks, where each $\tau \in \mathcal{T}$ has an associated training set $\{x_\tau^{(n)}, y_\tau^{(n)}\}_{n=1}^N$. (Our setup therefore differs from the classic multi-task setting, which usually assumes that the set of tasks is known.) For all tasks, the goal is to produce (possibly tied) model parameters $\theta_\tau$ to minimize the empirical risk, $\min_{\theta_\tau} \frac{1}{N}\sum_{n=1}^N L(f(x_\tau^{(n)}; \theta_\tau), y_\tau^{(n)}) + \lambda R(\theta_\tau)$, where $f(\cdot;\theta)$ is a parameterized function over the input (e.g., a neural network), $L(\cdot,\cdot)$ is a loss function (e.g., cross-entropy), and $R(\cdot)$ is an optional regularizer with hyperparameter $\lambda$. This multi-task setting can use the pretrain-then-finetune approach by simply learning independent parameters for each task; however, the large size of pretrained models makes this approach exceedingly parameter-inefficient. For example, widely adopted models such as BERTBASE and BERTLARGE have 110M and 340M parameters respectively, while their contemporaries such as T5 (Raffel et al., 2020), Megatron-LM (Shoeybi et al., 2019), and Turing-NLG (Rajbhandari et al., 2019) have parameter counts in the billions. Storing the fully finetuned models becomes difficult even for a moderate number of tasks. (An intriguing line of work suggests that large-scale language models can be used without finetuning for a variety of tasks if given the appropriate context (Radford et al., 2019; Brown et al., 2020). While interesting, these models generally underperform task-specific models and require billions of parameters, though recent work suggests that they can be made substantially smaller (Schick & Schutze, 2020).) A classic approach to tackling this parameter inefficiency (Caruana, 1997) is to train a single shared model (along with a task-specific output layer) against multiple tasks through joint training. However, the usual formulation of multi-task learning requires the set of tasks $\mathcal{T}$ to be known in advance to prevent catastrophic forgetting (French, 1999) (though work on continual learning mitigates these issues to an extent (Shin et al., 2017; Lopez-Paz & Ranzato, 2017; Lee et al., 2017; Kirkpatrick et al., 2017; Parisi et al., 2018)), making it unsuitable for applications in which the set of tasks is unknown (e.g., when tasks arrive in a stream). 3 DIFF PRUNING . Diff pruning formulates task-specific finetuning as learning a diff vector $\delta_\tau$ that is added to the pretrained model parameters $\theta_{\text{pretrained}}$. We first reparameterize the task-specific model parameters, $\theta_\tau = \theta_{\text{pretrained}} + \delta_\tau$,
which results in the following empirical risk minimization problem: $\min_{\delta_\tau} \frac{1}{N}\sum_{n=1}^N L(f(x_\tau^{(n)}; \theta_{\text{pretrained}} + \delta_\tau), y_\tau^{(n)}) + \lambda R(\theta_{\text{pretrained}} + \delta_\tau)$. This trivial reparameterization is equivalent to the original formulation. Its benefit comes in the multi-task setting, where the cost of storing the pretrained parameters $\theta_{\text{pretrained}}$ is amortized across tasks, and the only marginal cost for new tasks is the diff vector. If we can regularize $\delta_\tau$ to be sparse such that $\|\delta_\tau\|_0 \ll \|\theta_{\text{pretrained}}\|_0$, then this approach becomes more parameter-efficient as the number of tasks increases. We can specify this goal with an L0-norm penalty on the diff vector, $R(\theta_{\text{pretrained}} + \delta_\tau) = \|\delta_\tau\|_0 = \sum_{i=1}^d \mathbb{1}\{\delta_{\tau,i} \neq 0\}$. 3.1 DIFFERENTIABLE APPROXIMATION TO THE L0-NORM . This regularizer is difficult to optimize directly, as it is non-differentiable. To approximate this L0 objective, we follow the standard approach for gradient-based learning with L0 sparsity using a relaxed mask vector (Louizos et al., 2018). This approach involves relaxing a binary vector into continuous space and multiplying it with a dense weight vector to determine how much of the weight vector is applied during training. After training, the mask is deterministic, and a large portion of the diff vector is exactly zero. To apply this method, we first decompose $\delta_\tau$ into a binary mask vector multiplied with a dense vector: $\delta_\tau = z_\tau \odot w_\tau$, $z_\tau \in \{0,1\}^d$, $w_\tau \in \mathbb{R}^d$. We can now instead optimize an expectation with respect to $z_\tau$, whose distribution $p(z_\tau; \alpha_\tau)$ is initially Bernoulli with parameters $\alpha_\tau$: $\min_{\alpha_\tau, w_\tau} \mathbb{E}_{z_\tau \sim p(z_\tau;\alpha_\tau)}\left[\frac{1}{N}\sum_{n=1}^N L(f(x_\tau^{(n)}; \theta_{\text{pretrained}} + z_\tau \odot w_\tau), y_\tau^{(n)}) + \lambda\|\delta_\tau\|_0\right]$. This objective is still difficult in practice because $z_\tau$ is discrete (which requires the score-function gradient estimator), but the expectation provides some guidance for empirically effective relaxations. We follow prior work (Louizos et al., 2018; Wang et al., 2019b) and relax $z_\tau$ into continuous space $[0,1]^d$ with a stretched Hard-Concrete distribution (Jang et al., 2017; Maddison et al., 2017), which allows for the use of pathwise gradient estimators. Specifically, $z_\tau$ is now defined as a deterministic and (sub)differentiable function of a sample $u$ from a uniform distribution: $u \sim U(0,1)$, $s_\tau = \sigma(\log u - \log(1-u) + \alpha_\tau)$, $\bar{s}_\tau = s_\tau \times (r - l) + l$, $z_\tau = \min(1, \max(0, \bar{s}_\tau))$. Here $l < 0$ and $r > 1$ are two constants used to stretch $s_\tau$ into the interval $(l, r)^d$ before it is clamped to $[0,1]^d$ with the $\min(1, \max(0, \cdot))$ operation. In this case we have a differentiable closed-form expression for the expected L0-norm: $\mathbb{E}[\|\delta_\tau\|_0] = \sum_{i=1}^d \mathbb{E}[\mathbb{1}\{z_{\tau,i} > 0\}] = \sum_{i=1}^d \sigma\left(\alpha_{\tau,i} - \log\frac{-l}{r}\right)$.
Thus the final optimization problem is given by $\min_{\alpha_\tau, w_\tau} \mathbb{E}_{u \sim U[0,1]}\left[\frac{1}{N}\sum_{n=1}^N L(f(x_\tau^{(n)}; \theta_{\text{pretrained}} + z_\tau \odot w_\tau), y_\tau^{(n)})\right] + \lambda \sum_{i=1}^d \sigma\left(\alpha_{\tau,i} - \log\frac{-l}{r}\right)$, and we can now utilize pathwise gradient estimators to optimize the first term with respect to $\alpha_\tau$, since the expectation no longer depends on it. (To reduce notational clutter, we subsume the parameters of the task-specific output layer, which is not pretrained, into $\theta_{\text{pretrained}}$; we do not apply the L0-norm penalty to these parameters during training.) After training, we obtain the final diff vector $\delta_\tau$ by sampling $u$ once to obtain $z_\tau$ (which is not necessarily a binary vector but has a significant number of dimensions equal to exactly zero due to the clamping function), then setting $\delta_\tau = z_\tau \odot w_\tau$. (We found sampling once to work as well as more complicated alternatives, e.g., those based on multiple samples.) 3.2 L0-BALL PROJECTION WITH MAGNITUDE PRUNING FOR SPARSITY CONTROL . Differentiable L0 regularization provides a strong way to achieve a high sparsity rate. However, it would be ideal to have more fine-grained control over the exact sparsity rate of the diff vector, especially for applications with specific parameter budgets. As $\lambda$ is just the Lagrange multiplier for the constraint $\mathbb{E}[\|\delta_\tau\|_0] < \eta$ for some $\eta$, this could be achieved in principle by searching over different values of $\lambda$. However, we found it more efficient and empirically effective to achieve an exact sparsity rate by simply projecting onto the L0-ball after training. Specifically, we use magnitude pruning on the diff vector $\delta_\tau$ and target a sparsity rate of $t\%$ by keeping only the top $t\% \times d$ values in $\delta_\tau$. Note that unlike standard magnitude pruning, this is based on the magnitudes of the diff vector values and not the model parameters. As is usual in magnitude pruning, we found it important to further finetune $\delta_\tau$ with the nonzero masks fixed to maintain good performance (Han et al., 2016). Since this type of parameter efficiency through projection onto the L0-ball can be applied without adaptive diff pruning, such an approach will serve as one of our baselines in the empirical study.
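For concreteness, the following is a minimal PyTorch sketch of the stretched Hard-Concrete mask and the closed-form expected L0 penalty described above. The stretch constants $l = -0.1$ and $r = 1.1$ are common choices from Louizos et al. (2018), not values stated here, and the task loss is a placeholder.

```python
import torch

def hard_concrete_mask(alpha, l=-0.1, r=1.1):
    # Sample z: sigmoid of a logistic perturbation of alpha, stretched to (l, r),
    # then clamped to [0, 1] so many entries land exactly on 0 (or 1).
    u = torch.rand_like(alpha).clamp(1e-6, 1 - 1e-6)   # avoid log(0)
    s = torch.sigmoid(torch.log(u) - torch.log(1 - u) + alpha)
    return (s * (r - l) + l).clamp(0.0, 1.0)

def expected_l0(alpha, l=-0.1, r=1.1):
    # Closed form: E[||delta||_0] = sum_i sigmoid(alpha_i - log(-l / r)).
    return torch.sigmoid(alpha - torch.log(torch.tensor(-l / r))).sum()

# toy diff-pruning step on a d-dimensional diff vector
d, lam = 10, 0.1
alpha = torch.zeros(d, requires_grad=True)
w = torch.randn(d, requires_grad=True)
delta = hard_concrete_mask(alpha) * w        # delta_tau = z * w
task_loss = (delta - 1.0).pow(2).sum()       # placeholder for L(f(x; theta + delta), y)
loss = task_loss + lam * expected_l0(alpha)
loss.backward()                               # pathwise gradients flow to alpha and w
```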
This paper proposes diff pruning, an alternative paradigm for parameter-efficient transfer learning of pre-trained models. Similar to adapters, diff pruning leaves the body of the pre-trained model unchanged. Rather than inserting additional task-specific parameters into the pre-trained model, diff pruning reparameterizes the parameters of the transferred model $\theta_\tau$ by adding a diff vector $\delta_\tau$ to the pre-trained parameters: $\theta_\tau = \theta_{\text{pretrained}} + \delta_\tau$. Parameter efficiency is achieved by regularizing $\delta_\tau$ to be sparse. The authors achieve this by using a relaxed mask vector to approximate the $L_0$ norm. They also propose a way to control for a specific sparsity rate via projection onto the $L_0$ ball after training and to enforce group sparsity that takes the model's structure into account. The approach is evaluated on the GLUE benchmark, where it achieves performance competitive with full fine-tuning of a BERT Large model and with adapters, while being more parameter-efficient than both.
SP:f08a22a7a003f9fff3d1366b923ed961489d9158
Interpretability Through Invertibility: A Deep Convolutional Network With Ideal Counterfactuals And Isosurfaces
Current state-of-the-art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods, which are not guaranteed to be faithful to the model. To elucidate a model's decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier's input, we can also create "isofactuals": image interpolations with the same outcome but meaningfully different visual features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether non-experts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision. Code: https://anonymous.4open.science/r/ae263acc-aad1-42f8a639-aec20ff31fc3/

1 INTRODUCTION

The lack of interpretability is a significant obstacle for adopting deep learning in practice. As deep convolutional neural networks (CNNs) can fail in unforeseeable ways, are susceptible to adversarial perturbations, and may reinforce harmful biases, companies rightly refrain from automating high-risk applications without understanding the underlying algorithms and the patterns used by the model. Interpretable machine learning aims to discover insights into how the model makes its predictions. For image classification with CNNs, a common explanation technique is the saliency map, which estimates the importance of individual image areas for a given output. The underlying assumption that users studying local explanations can obtain a global understanding of the model (Ribeiro et al., 2016) was, however, refuted. Several user studies demonstrated that saliency explanations did not significantly improve users' task performance, trust calibration, or model understanding (Kaur et al., 2020; Adebayo et al., 2020; Alqaraawi et al., 2020; Chu et al., 2020). Alqaraawi et al. (2020) attributed these shortcomings to the inability to highlight global image features or absent ones, making it difficult to provide counterfactual evidence. Even worse, many saliency methods fail to represent the model's behavior faithfully (Sixt et al., 2020; Adebayo et al., 2018; Nie et al., 2018). While no commonly agreed definition of faithfulness exists, it is often characterized by describing what an unfaithful explanation is (Jacovi & Goldberg, 2020), for example, a method that fails to create the same explanations for identically behaving models. To ensure faithfulness, previous works have proposed building networks with interpretable components (e.g., ProtoPNet (Chen et al., 2018) or Brendel & Bethge (2018)) or mapping network activations to human-defined concepts (e.g., TCAV (Kim et al., 2018)). However, the interpretable network components mostly rely on fixed-size patches, and concepts have to be defined a priori. Here, we argue that explanations should neither be limited to patches nor rely on a priori knowledge. Instead, users should discover hypotheses in the input space themselves with faithful counterfactuals that are ideal, i.e.
samples that exhibit changes that directly and exclusively correspond to changes in the network's prediction (Wachter et al., 2018). We can guarantee this property by combining an invertible deep neural network $z = \varphi(x)$ with a linear classifier $y = w^T\varphi(x) + b$. This yields three major advantages: 1) the model is powerful (it can approximate any function; Zhang et al. (2019)); 2) the weight vector $w$ of the classifier directly and faithfully encodes the feature importance of a target class $y$ in the $z$ feature space; 3) human-interpretable explanations can be obtained by simply inverting explanations for the linear classifier back to input space. As a local explanation for one sample $x$, we generate ideal counterfactuals by altering its feature representation $z$ along the direction of the weight vector, $\tilde{z} = z + \alpha w$. The logit score can be manipulated directly via $\alpha$. Inverting $\tilde{z}$ back to input space results in a human-understandable counterfactual $\tilde{x} = \varphi^{-1}(z + \alpha w)$. Any change orthogonal to $w$ will create an "isofactual", a sample that looks different but results in the same prediction. While many vectors are orthogonal to $w$, we find the directions that explain the highest variance of the features $z$ using PCA. As the principal components explain all variance of the features, they can be used to summarize the model's behavior globally. We demonstrate the usefulness of our method on a broad range of evaluations. We compared our approach to gradient-based saliency methods and found that gradient-based counterfactuals are not ideal, as they also change irrelevant features. We evaluated our method on three datasets, which allowed us to create hypotheses about potential biases in all three. After statistical evaluation, we confirmed that these biases existed. Finally, we evaluated our method's utility against a strong baseline of example-based explanations in an online user study. We confirmed that participants could identify the patterns relevant to the model's output and reject irrelevant ones. This work demonstrates that invertible neural networks provide interpretability that conceptually stands out against the more commonly used alternatives.

2 METHOD

Throughout this work, we rely on the following definitions, which are based on Wachter et al. (2018):

Definition 2.1 (Counterfactual Example). Given a data point $x$ and its prediction $y$, a counterfactual example is an alteration of $x$, defined as $\tilde{x} = x + \Delta x$, with an altered prediction $\tilde{y} = y + \Delta y$ where $\Delta y \neq 0$. Samples $\bar{x}$ with $\Delta y = 0$ are designated "isofactuals".

Almost any $\Delta x$ will match the counterfactual definition, including those that additionally change aspects unrelated to the model's prediction, e.g., removing an object but also changing the background's color. It is desirable to isolate the change most informative about a prediction:

Definition 2.2 (Ideal Counterfactual). Given a set of unrelated properties $\xi(x) = \{\xi_i(x)\}$, a sample $\tilde{x}$ is called an ideal counterfactual of $x$ if all unrelated properties $\xi_i$ remain the same.

The following paragraphs describe how we generate explanations using an invertible neural network $\varphi: \mathbb{R}^n \mapsto \mathbb{R}^n$. The forward function $\varphi$ maps a data point $x$ to a feature vector $z = \varphi(x)$. Since $\varphi$ is invertible, one can regain $x$ by applying the inverse $x = \varphi^{-1}(z)$. We used the features $z$ to train a binary classifier $f(x) = w^T z + b$ that predicts the label $y$.
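The counterfactual-generation recipe just described reduces to a few lines of code. The sketch below is schematic, under the assumption that `phi` and `phi_inv` expose the forward and inverse passes of the trained invertible network; both names are placeholders rather than an actual API.

```python
import torch

def generate_counterfactual(phi, phi_inv, w, x, alpha):
    """Ideal counterfactual: move the representation of x along the
    classifier weight vector w, then invert back to input space."""
    z = phi(x)                     # encode: z = phi(x)
    z_tilde = z + alpha * w        # shifts the logit by alpha * w^T w
    return phi_inv(z_tilde)        # decode: x_tilde = phi^{-1}(z + alpha w)
```

By construction, the logit of the returned sample differs from that of $x$ by exactly $\alpha w^\top w$.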
In addition to the supervised loss, we also trained $\varphi$ as a generative model (Dinh et al., 2016; 2015) to ensure that the inverted samples are human-understandable.

Counterfactuals. To create a counterfactual example $\tilde{x}$ for a datapoint $x$, we can exploit that $w$ encodes feature importance in the $z$-space directly. To change the logit score of the classifier, we simply add the weight vector to the features $z$ and then invert the result back to the input space: $\tilde{x} = \varphi^{-1}(z + \alpha w)$. Hence, for any sample $x$, we can create counterfactuals $\tilde{x}$ with an arbitrary change in logit value $\Delta y = \alpha w^T w$ by choosing $\alpha$ accordingly. Figure 1a shows several such examples. Since the generation ($\varphi^{-1}$) and prediction ($\varphi$) are performed by the same model, we know that $\tilde{x}$ will correspond exactly to the logit offset $\alpha w^T w$. Consequently, $\tilde{x}$ is a faithful explanation. To show that our counterfactuals are ideal, we have to verify that no property unrelated to the prediction is changed. For such a property $\xi(x) = v^T z$, $v$ has to be orthogonal to $w$.¹ As the unrelated property $\xi$ does not change for the counterfactual, $\xi(\tilde{x}) = v^T(z + \alpha w) = v^T z = \xi(x)$, we know that $\tilde{x} = \varphi^{-1}(z + \alpha w)$ is indeed an ideal counterfactual.

¹$\xi(x)$ could actually be non-linear in the features $z$ as long as the gradient $\partial\xi/\partial z$ is orthogonal to $w$.

PCA Isosurface. Since users can only study a limited number of examples, it is desirable to choose samples that summarize the model's behavior well (Ribeiro et al., 2016; Alqaraawi et al., 2020). For counterfactual explanations, the change $\Delta x$ may vary significantly per example, as $\varphi(x)$ is a non-linear function. As each $x$ has a unique representation $z$ in the feature space, we want to find examples describing the different directions of the feature distribution. To isolate the effect of $w$, such examples would have the same prediction and only vary in features unrelated to the prediction. We implement this by first removing the variation along $w$ using a simple projection $z_\perp = z - (w^T z / w^T w)\, w$ and then applying PCA on $z_\perp$. The resulting principal components $e_1, \ldots, e_m$ are orthogonal to $w$, except for the last principal component $e_m$, which has zero variance and can therefore be discarded. The principal components span a hyperplane $\alpha w + \sum_{i}^{m-1} \beta_i e_i$. Since all samples on this hyperplane have the same prediction (a logit value of $\alpha w^T w$), it is an isosurface. As a principal component $e_i$ is a vector in the $z$-space, we can create counterfactuals for it, $\varphi^{-1}(e_i + \alpha w)$, and understand how the changes from adding $w$ differ per location in the $z$-space. The $e_1, \ldots, e_{m-1}$ are sorted by explained variance, allowing us to prioritize the most relevant changes in the data. As the principal components cover the whole feature distribution, understanding the effect of $w$ on them allows forming a global understanding of the model's behavior.

Saliency maps. Saliency maps are supposed to draw attention to the features most relevant to a prediction. In our case, it is most reasonable to highlight the difference between $x$ and the counterfactual $\tilde{x}$. We measure this difference in an intermediate feature map $h$. The saliency map of an intermediate layer can be resized to fit the input's resolution, as information remains local in convolutional networks. Per feature map location $(i, j)$, we calculate the similarity measure $m(i,j) = |\Delta h_{ij}| \cos(\angle(\Delta h_{ij}, h_{ij}))$.
The sign of the saliency map $m$ depends on the alignment of the change $\Delta h$ with the feature vector $h$, i.e., whether $\cos(\angle(\Delta h_{ij}, h_{ij})) > 0$. The magnitude is dominated by the length of the change $|\Delta h_{ij}|$. Figure 1b presents saliency maps for the CELEBA Attractive label.

Model. Our invertible network follows the Glow architecture (Kingma & Dhariwal, 2018). The network is trained to map the data distribution to a standard normal distribution. We reduce the input dimensionality from (3, 128, 128) down to 786 by fading half of the channels out with each downsampling step. When generating a counterfactual, we reuse the $z$ values faded out from the lower layers, as they correspond to small details and noise. We have 7 downsampling steps and 351 flow layers. The network has 158,769,600 parameters in total. An important design decision is that the final layer's output is not input to the linear classifier: the PCA would fail to discover meaningful directions, as the $\mathcal{N}(0, I)$ prior induces equal variance in all directions. The classifier uses the output of layer 321. Layers 322–351 are optimized using the standard unsupervised flow objective. For the first 321 layers, we also train on the classifier's supervised loss (for details see Appendix A.1).
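The saliency measure $m(i,j)$ defined above can be computed directly from the two intermediate feature maps. The following sketch assumes feature maps of shape (channels, height, width) and takes the angle over the channel dimension; it is our illustrative reading, not the authors' code.

```python
import torch
import torch.nn.functional as F

def saliency_map(h, h_tilde, eps=1e-8):
    """m(i, j) = |dh_ij| * cos(angle(dh_ij, h_ij)) for feature maps
    h, h_tilde of shape (C, H, W)."""
    dh = h_tilde - h                                  # change caused by the counterfactual
    cos = F.cosine_similarity(dh, h, dim=0, eps=eps)  # alignment per location, shape (H, W)
    return dh.norm(dim=0) * cos                       # magnitude times sign of alignment
```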
The paper presents a promising idea for building interpretable models by combining discriminative and generative approaches. The proposed model uses an invertible neural network to model the data distribution; invertibility makes it possible to transform the learned feature vector back to the image domain. A linear discriminative classifier is trained on the feature vector to perform binary classification. Using the inverse function, the model generates a counterfactual explanation by inverting a modified logit score to create a new image as an explanation. The authors further construct an orthogonal basis using PCA, such that modifying the feature vector in those directions results in no change in the classifier's prediction. Decomposing the feature space into such a basis helps discover potential biases in the dataset and the classification model. The experiments compare the proposed method's performance with fully discriminative models and post-hoc interpretability methods such as gradient-based saliency maps.
SP:83a6062cbcad4c8c40fe066abc7bd32a62f38b52
Impact-driven Exploration with Contrastive Unsupervised Representations
1 INTRODUCTION

Reinforcement learning (RL) algorithms aim to learn an optimal policy that maximizes expected reward from the environment. The search for better RL algorithms is motivated by the fact that many complex real-world problems can be formulated as RL problems. Yet environments with sparse rewards, which often occur in the real world, pose a significant challenge for RL algorithms that rely on random actions for exploration. Sparsity of the reward can make it extremely unlikely for the agent to stumble upon any positive feedback by chance; the agent may spend a long time simply exploring and not receiving a single positive reward. To overcome this issue of exploration, several previous works have used intrinsic rewards (Schmidhuber, 1991; Oudeyer et al., 2007; 2008; Oudeyer & Kaplan, 2009). Intrinsic rewards, as the name suggests, are reward signals generated by the agent which can make RL algorithms more sample-efficient by encouraging exploratory behavior that is more likely to encounter rewards. Previous works have used state novelty in the form of state visitation counts (Strehl & Littman, 2008) for tabular states, pseudo-counts for high-dimensional state spaces (Bellemare et al., 2016; Ostrovski et al., 2017; Martin et al., 2017), prediction error of random networks (Burda et al., 2019b), and curiosity about environment dynamics (Stadie et al., 2015; Pathak et al., 2017) as intrinsic rewards. Although such advances in exploration methods have enabled RL agents to achieve high rewards in notoriously difficult sparse-reward environments such as Montezuma's Revenge and Pitfall (Bellemare et al., 2013), many existing exploration methods use the same environment for training and testing (Bellemare et al., 2016; Pathak et al., 2017; Aytar et al., 2018; Ecoffet et al., 2019). As a result, agents trained in this fashion do not generalize to new environments. Indeed, several recent papers point out that deep RL agents overfit to the environment they were trained on (Rajeswaran et al., 2017; Zhang et al., 2018; Machado et al., 2018), leading to the creation of new benchmarks consisting of procedurally-generated environments (Cobbe et al., 2019; Risi & Togelius, 2020; Küttler et al., 2020). In practice, agents often have to act in environments that are similar to, but different from, the environments they were trained on. Hence, it is crucial that the agent learns a policy that generalizes across diverse (but similar) environments. This adds another layer of difficulty, the diversity of environment layout for each episode, to the already challenging sparse reward structure. To tackle this challenge head-on, Raileanu & Rocktäschel (2020) focus on exploration in procedurally-generated environments and propose RIDE, an intrinsic rewarding scheme based on the "impact" of a new observation. Denoting the observation embedding function by $\phi$, RIDE measures the impact of observation $o'$ by computing $\|\phi(o') - \phi(o)\|_2$, where $o$ is the previous observation. Similarly, Savinov et al. (2019) propose episodic curiosity (EC), an intrinsic rewarding scheme which rewards visiting states that are dissimilar to states in the agent's episodic memory. RIDE uses forward and inverse dynamics prediction (Pathak et al., 2017) to train the observation embedding $\phi$ in a self-supervised manner. Hence, a question that one might ask is: what is the $\ell_2$-distance in this embedding space measuring?
We address this question by modifying the embedding training procedure, thereby changing the definition of impact. That is, we modify RIDE so that impact corresponds to an explicitly trained similarity measure in the embedding space, where we define two observations to be similar if they are reachable from each other within a few steps. The original definition of "impact" in RIDE is not conceptually clear because the learned embedding space is not inherently equipped with a similarity measure, let alone an $\ell_2$-distance. It is still possible that RIDE's measure of impact based on $\ell_2$-distance may implicitly correspond to some similarity measure in the embedding space, but we leave this investigation for future work. Our main contributions are 1) proposing a conceptually clear measure of impact by training observation embeddings explicitly with a cosine similarity objective instead of forward and inverse dynamics prediction, 2) providing a new perspective which connects RIDE and EC, and 3) outperforming RIDE via episodic memory extensions. We use SimCLR (Chen et al., 2020) to train the embedding function and propose a novel intrinsic rewarding scheme, which we name RIDE-SimCLR. As in EC, the positive pairs used in the contrastive learning component of RIDE-SimCLR correspond to pairs of observations which are within $k$ steps of each other (referred to as "$k$-step reachability" in their work). Following the experimental setup of Raileanu & Rocktäschel (2020), we use MiniGrid (Chevalier-Boisvert et al., 2018) to evaluate our method, as it provides a simple, diverse suite of tasks that allows us to focus on the issue of exploration instead of other issues such as visual perception. We focus on the comparison of our approach to RIDE since Raileanu & Rocktäschel (2020) report that RIDE achieves the best performance on all their MiniGrid benchmarks among other exploration methods such as intrinsic curiosity (ICM) by Pathak et al. (2017) and random network distillation (RND) by Burda et al. (2019b). We note that MiniGrid provides a sufficiently challenging suite of tasks for RL agents despite its apparent simplicity, as ICM and RND fail to learn any effective policies for some tasks due to the difficulty posed by procedurally-generated environments. Our experimental results show that RIDE-SimCLR performs similarly to RIDE on these benchmarks, with the added benefit of having a conceptually clear similarity measure for the embedding space. Our qualitative analysis shows interesting differences between RIDE and RIDE-SimCLR. For instance, RIDE highly rewards interactions with controllable objects such as opening a door, which is not the case in RIDE-SimCLR. We also observe that our episodic memory extension improves the quantitative performance of both methods, which demonstrates the benefit of establishing a connection between RIDE and EC. The Never Give Up (NGU) agent by Badia et al. (2020) can be seen as a close relative of our memory extension of RIDE, since it uses the $\ell_2$ distance in an embedding space trained with the same inverse dynamics objective to compute approximate counts of states and aggregates episodic memory to compute a novelty bonus. Our work is different from EC because we do not explicitly sample negative pairs for training the observation embedding network, and we use cosine similarity, instead of a separately trained neural network, to output similarity scores for pairs of observations. We note that Campero et al.
(2020) report state-of-the-art results on even more challenging MiniGrid tasks by training an adversarially motivated teacher network to generate intermediate goals for the agent (AMIGo), but we do not compare against this method since their agent receives full observations of the environment. Both RIDE and RIDE-SimCLR agents only receive partial observations.

2 BACKGROUND

We consider the standard episodic RL setting in which an agent interacts with the environment to maximize its expected scalar reward for each episode. The interaction proceeds in discrete time steps and terminates at some fixed time $T$ unless the goal is achieved earlier. At time step $t$, the agent receives observation $o_t$ and samples an action from its policy $\pi(o_t)$. The environment then provides the agent with a scalar reward $r_t$, the next observation $o_{t+1}$, and an end-of-episode indicator. The goal of the agent is to maximize the expected discounted reward $R = \sum_{t=1}^{T} \gamma^t r_t$, where $r_t$ is the (extrinsic) reward given by the environment at time step $t$. When extrinsic reward is sparse, standard RL algorithms such as PPO (Schulman et al., 2017) or IMPALA (Espeholt et al., 2018) often fail to learn a good policy. To overcome this challenge, previous works have proposed augmenting the reward function as $\hat{r}_t = r_t + w_i r^i_t$, where $r^i_t$ is the intrinsic reward and $w_i$ is a hyperparameter which controls the relative importance of $r^i_t$. The intrinsic reward $r^i_t$ is typically designed to be a dense reward function which pushes the policy towards exploratory behavior more likely to encounter positive extrinsic rewards. Note that intrinsic rewards can be used with any existing RL algorithm without altering its underlying network architecture or training procedure; it suffices to simply replace the extrinsic reward $r_t$ with the augmented reward $\hat{r}_t$. The observation embedding used to compute intrinsic rewards can be trained either 1) offline with respect to a uniformly random policy or 2) online with respect to the agent's current policy. For a fair comparison with Raileanu & Rocktäschel (2020), we use the embedding trained online for computing intrinsic rewards.

2.1 IMPACT-DRIVEN EXPLORATION (RIDE)

RIDE (Raileanu & Rocktäschel, 2020) is an intrinsic reward based on the magnitude of change in the observation embedding produced by the agent's action (see Figure 1). More precisely, RIDE is defined as

$$r^i_t(s_t, a_t) = \frac{\|\phi(o_{t+1}) - \phi(o_t)\|_2}{\sqrt{N_{ep}(s_{t+1})}}, \qquad (1)$$

where $\phi$ is the observation embedding function and $N_{ep}(s)$ is the number of times state $s$ has been visited within the current episode. The purpose of this discount by $N_{ep}(s)$ is to prevent the agent from going back and forth between a sequence of states with large $\ell_2$ differences. The embedding function $\phi(o)$ used in RIDE is parametrized by a neural network and trained by minimizing forward and inverse dynamics prediction error (Pathak et al., 2017). Note that the policy network $\pi$ has its own observation embedding network $\psi$, which is trained separately from $\phi$. The embedding $\phi$ is only used to compute the intrinsic reward and never used for control, and the opposite holds for $\psi$. The purpose of using forward and inverse dynamics models to train the embedding is to store information that is useful for predicting the agent's action or the effects actions have on the environment. This leads to learning an action-focused embedding space. RIDE builds upon the intrinsic curiosity module (ICM) by Pathak et al.
(2017), which uses the forward dynamics prediction error as the intrinsic reward. The novelty in RIDE is using the $\ell_2$ distance between two different observations as a measure of qualitative change between the states. Indeed, visualizations of RIDE by Raileanu & Rocktäschel (2020) show that RIDE assigns higher intrinsic rewards to actions that qualitatively "change the dynamics", such as opening doors or interacting with objects in MiniGrid. However, RIDE introduces conceptual difficulties, as the embedding space is not explicitly trained with any similarity measure. That is, the forward and inverse dynamics objective does not explicitly pull together or push apart embeddings of different observations. In ICM, the $\ell_2$ distance between $\phi(o_{t+1})$ and the prediction $\hat{\phi}(o_{t+1})$ has a clear interpretation as the forward prediction "error", since the forward dynamics objective explicitly minimizes this $\ell_2$ distance. RIDE, on the other hand, uses the $\ell_2$ distance between different observations $o_t$ and $o_{t+1}$ as the intrinsic reward. Yet the forward and inverse dynamics prediction objective does not specify which pairs of observation embeddings $(o_i, o_j)$ should be pulled closer together and which should be pushed apart (in $\ell_2$ distance). This does not mean that RIDE fails to capture qualitative changes in the dynamics. In fact, our visualizations (Figure 7) corroborate the findings of Raileanu & Rocktäschel (2020) by demonstrating that RIDE assigns higher rewards to actions such as opening doors. However, it is difficult to precisely define what having "different dynamics" means, let alone to give a quantitative definition of it. Moreover, the question of why the $\ell_2$ distance is larger for pairs of observations corresponding to such actions is not well understood, and thus requires further investigation. Without understanding why, we cannot guarantee that RIDE will always assign higher rewards to actions we perceive as "significantly changing the dynamics".
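For reference, Equation (1) can be sketched as follows. The episodic count table keyed by a hashable state representation is an implementation assumption on our part, not a detail specified in the text.

```python
import torch

def ride_reward(phi, obs, next_obs, episodic_counts, state_key):
    """Impact-driven intrinsic reward: the L2 change in the observation
    embedding, discounted by the episodic count of the next state (Eq. 1)."""
    with torch.no_grad():
        impact = (phi(next_obs) - phi(obs)).norm(p=2).item()
    episodic_counts[state_key] = episodic_counts.get(state_key, 0) + 1
    return impact / episodic_counts[state_key] ** 0.5
```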
The work provides a nice new method with some performance gains by combining several existing techniques. The presentation is clear and organized, with the new method achieving both better performance and some improvements in interpretability. The paper provides a variety of visual analyses that are typical of this area of research and presents the contrasts between this work and prior efforts.
SP:0b0a27b56520c182d6cdc92a338695f8a7813b83
Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies
1 INTRODUCTION . Until recently , machine learning models were largely trained in the data center setting ( Dean et al. , 2012 ) using powerful computing nodes , fast inter-node communication links , and large centrally available training datasets . The future of machine learning lies in moving both data collection as well as model training to the edge . The emerging paradigm of federated learning ( McMahan et al. , 2017 ; Kairouz et al. , 2019 ; Bonawitz et al. , 2019 ) considers a large number of resource-constrained mobile devices that collect training data from their environment . Due to limited communication capabilities and privacy concerns , these data can not be directly sent over to the cloud . Instead , the nodes locally perform a few iterations of training using local-update stochastic gradient descent ( SGD ) ( Yu et al. , 2018 ; Stich , 2018 ; Wang & Joshi , 2018 ; 2019 ) , and only send model updates periodically to the aggregating cloud server . Besides communication limitations , the key scalability challenge faced by the federated learning framework is that the client nodes can have highly heterogeneous local datasets and computation speeds . The effect of data heterogeneity on the convergence of local-update SGD is analyzed in several recent works ( Reddi et al. , 2020 ; Haddadpour & Mahdavi , 2019 ; Khaled et al. , 2020 ; Stich & Karimireddy , 2019 ; Woodworth et al. , 2020 ; Koloskova et al. , 2020 ; Huo et al. , 2020 ; Zhang et al. , 2020 ; Pathak & Wainwright , 2020 ; Malinovsky et al. , 2020 ; Sahu et al. , 2019 ) and methods to overcome the adverse effects of data and computational heterogeneity are proposed in ( Sahu et al. , 2019 ; Wang et al. , 2020 ; Karimireddy et al. , 2019 ) , among others . Partial Client Participation . Most of the recent works described above assume full client participation , that is , all nodes participate in every training round . In practice , only a small fraction of client nodes participate in each training round , which can exacerbate the adverse effects of data heterogeneity . While some existing convergence guarantees for full client participation and methods to tackle heterogeneity can be generalized to partial client participation ( Li et al. , 2020 ) , these generalizations are limited to unbiased client participation , where each client ’ s contribution to the expected global objective optimized in each round is proportional to its dataset size . In Ruan et al . ( 2020 ) , the authors analyze the convergence with flexible device participation , where devices can freely join or leave the training process or send incomplete updates to the server . However , adaptive client selection that is cognizant of the training progress at each client has not been understood yet . It is important to analyze and understand biased client selection strategies since they can sharply accelerate error convergence , and hence boost communication efficiency in heterogeneous environments by preferentially selecting clients with higher local loss values , as we show in our paper . This idea has been explored in recent empirical studies ( Goetz et al. , 2019 ; Laguel et al. , 2020 ; Ribero & Vikalo , 2020 ) . Nishio & Yonetani ( 2019 ) proposed grouping clients based on hardware and wireless resources in order to save communication resources . Goetz et al . ( 2019 ) ( which we include as a benchmark in our experiments ) proposed client selection with local loss , and Ribero & Vikalo ( 2020 ) proposed utilizing the progression of clients ’ weights . 
But these schemes are limited to empirical demonstration, without a rigorous analysis of how selection skew affects convergence speed. Another relevant line of work (Jiang et al., 2019; Katharopoulos & Fleuret, 2018; Shah et al., 2020; Salehi et al., 2018) employs biased selection or importance sampling of data to speed up convergence of classic centralized SGD: these works propose preferentially selecting samples with the highest loss or the highest gradient norm to perform the next SGD iteration. In contrast, Shah et al. (2020) propose biased selection of lower-loss samples to improve robustness to outliers. Generalizing such strategies to the federated learning setting is a non-trivial and open problem because of the large-scale distributed and heterogeneous nature of the training data.

Our Contributions. In this paper, we present the first (to the best of our knowledge) convergence analysis of federated learning with biased client selection that is cognizant of the training progress at each client. We discover that biasing the client selection towards clients with higher local losses increases the rate of convergence compared to unbiased client selection. Using this insight, we propose the POWER-OF-CHOICE client selection strategy and show by extensive experiments that POWER-OF-CHOICE yields up to 3× faster convergence with 10% higher test performance than standard federated averaging with random selection. POWER-OF-CHOICE is designed to incur minimal communication and computation overhead, enhancing resource efficiency in federated learning. In fact, we show that even with 3× fewer clients participating in each round compared to random selection, POWER-OF-CHOICE gives 2× faster convergence and 5% higher test accuracy.

2 PROBLEM FORMULATION

Consider a cross-device federated learning setup with $K$ total clients, where client $k$ has a local dataset $B_k$ consisting of $|B_k| = D_k$ data samples. The clients are connected via a central aggregating server and seek to collectively find the model parameter $w$ that minimizes the empirical risk:

$$F(w) = \frac{1}{\sum_{k=1}^{K} D_k} \sum_{k=1}^{K} \sum_{\xi \in B_k} f(w, \xi) = \sum_{k=1}^{K} p_k F_k(w), \qquad (1)$$

where $f(w, \xi)$ is the composite loss function for sample $\xi$ and parameter vector $w$. The term $p_k = D_k / \sum_{k=1}^{K} D_k$ is the fraction of data at the $k$-th client, and $F_k(w) = \frac{1}{|B_k|} \sum_{\xi \in B_k} f(w, \xi)$ is the local objective function of client $k$. In federated learning, the vectors $w^*$ and $w^*_k$ for $k = 1, \ldots, K$ that minimize $F(w)$ and $F_k(w)$, respectively, can be very different from each other. We define $F^* = \min_w F(w) = F(w^*)$ and $F^*_k = \min_w F_k(w) = F_k(w^*_k)$.

Federated Averaging with Partial Client Participation. The most common algorithm for solving (1) is federated averaging (FedAvg), proposed in McMahan et al. (2017). The algorithm divides the training into communication rounds. At each round, to save communication cost, the central server selects only a fraction $C$ of the clients, i.e., $m = CK$ clients, to participate in the training. Each selected/active client performs $\tau$ iterations of local SGD (Stich, 2018; Wang & Joshi, 2018; Yu et al., 2018) and sends its locally updated model back to the server. Then, the server updates the global model using the local models and broadcasts the global model to a new set of active clients. Formally, we index the local SGD iterations with $t \geq 0$. The set of active clients at iteration $t$ is denoted by $S^{(t)}$.
Since active clients perform $\tau$ steps of local update, the active set $S^{(t)}$ remains constant for every $\tau$ iterations. That is, if $(t+1) \bmod \tau = 0$, then $S^{(t+1)} = S^{(t+2)} = \cdots = S^{(t+\tau)}$. Accordingly, the update rule of FedAvg can be written as follows:

$$w^{(t+1)}_k = \begin{cases} w^{(t)}_k - \eta_t g_k(w^{(t)}_k, \xi^{(t)}_k) & \text{for } (t+1) \bmod \tau \neq 0 \\ \frac{1}{m} \sum_{j \in S^{(t)}} \left( w^{(t)}_j - \eta_t g_j(w^{(t)}_j, \xi^{(t)}_j) \right) \triangleq w^{(t+1)} & \text{for } (t+1) \bmod \tau = 0 \end{cases} \qquad (2)$$

where $w^{(t+1)}_k$ denotes the local model parameters of client $k$ at iteration $t+1$, $\eta_t$ is the learning rate, and $g_k(w^{(t)}_k, \xi^{(t)}_k) = \frac{1}{b} \sum_{\xi \in \xi^{(t)}_k} \nabla f(w^{(t)}_k, \xi)$ is the stochastic gradient over a mini-batch $\xi^{(t)}_k$ of size $b$ that is randomly sampled from client $k$'s local dataset $B_k$. Moreover, $w^{(t+1)}$ denotes the global model at the server. Although $w^{(t)}$ is only updated after every $\tau$ iterations, for the purpose of convergence analysis we consider a virtual sequence of $w^{(t)}$ that is updated at each iteration as follows:

$$w^{(t+1)} = w^{(t)} - \eta_t g^{(t)} = w^{(t)} - \eta_t \frac{1}{m} \sum_{k \in S^{(t)}} g_k(w^{(t)}_k, \xi^{(t)}_k), \qquad (3)$$

with $g^{(t)} = \frac{1}{m} \sum_{k \in S^{(t)}} g_k(w^{(t)}_k, \xi^{(t)}_k)$. Note that in (2) and (3) we do not weight the client models by their dataset fractions $p_k$, because $p_k$ is accounted for in the client selection scheme used to decide the set $S^{(t)}$. Our convergence analysis can be generalized to the case where the global model is a weighted average instead of a simple average of client models, and we show in Appendix E that our convergence analysis also covers the "sample uniformly at random without replacement" scheme proposed by Li et al. (2020). The set $S^{(t)}$ can be sampled either with or without replacement. For sampling with replacement, we assume that multiple copies of the same client in the set $S^{(t)}$ behave as different clients, that is, they perform local updates independently.

Client Selection Strategy. To guarantee that FedAvg converges to the stationary points of the objective function (1), most current analysis frameworks (Li et al., 2020; Karimireddy et al., 2019; Wang et al., 2020) consider a strategy that selects the set $S^{(t)}$ by sampling $m$ clients at random (with replacement) such that client $k$ is selected with probability $p_k$, the fraction of data at that client. This sampling scheme is unbiased since it ensures that, in expectation, the update rule (3) is the same as under full client participation. Hence, it enjoys the same convergence properties as local-update SGD methods (Stich, 2018; Wang & Joshi, 2018). We denote this unbiased random client selection strategy as $\pi_{\text{rand}}$. In this paper, we consider a class of biased client selection strategies that are cognizant of the global training progress, which (to the best of our knowledge) has not been studied before. Note that for any aggregation scheme and sampling scheme with partial client participation, if the expectation over the sampling scheme of the global model's update rule equals the update rule under full client participation, we call this an unbiased client participation scheme. For example, in Horváth & Richtárik (2020), even with a biased sampling scheme, the normalizing aggregation makes the update rule unbiased. Henceforth, we state that our paper encompasses both biased and unbiased update rules.
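To make the update rules (2)–(3) concrete, here is a minimal sketch of one FedAvg communication round with partial participation. The `client.batches(tau)` and `client.loss` interfaces are hypothetical stand-ins for local data access and the local loss, not part of any specific library.

```python
import copy
import torch

def fedavg_round(global_model, clients, select, tau, lr):
    """One round: each selected client runs tau local SGD steps starting
    from the global model; the server averages the resulting models."""
    local_states = []
    for client in select(global_model, clients):     # selection strategy pi
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for x, y in client.batches(tau):             # tau local mini-batches
            opt.zero_grad()
            client.loss(model(x), y).backward()
            opt.step()
        local_states.append(model.state_dict())
    avg = {name: torch.stack([s[name].float() for s in local_states]).mean(0)
           for name in local_states[0]}              # simple average, as in Eq. (2)
    global_model.load_state_dict(avg)
    return global_model
```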
In the two-client example in Figure 1, we set $S^{(t+1)} = \arg\max_{k \in [K]} F_k(w^{(t)})$, i.e., the single client with the highest local loss at the current global model. In this toy example, the selection strategy cannot guarantee that the update (3) equals the full-client-participation case in expectation. Nevertheless, it gives faster convergence to the global minimum than random selection does. Motivated by this observation, we define a client selection strategy $\pi$ as a function that maps the current global model $w$ to a selected set of clients $S(\pi, w)$.
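A loss-based selection strategy in this spirit can be sketched as below. The exact POWER-OF-CHOICE procedure is specified later in the paper, which is not shown here; this sketch only illustrates the principle of drawing a candidate set and keeping the clients with the highest local loss, and all details in it are our assumptions.

```python
import numpy as np

def loss_based_selection(local_losses, p, d, m, rng=None):
    """Draw a candidate set of d clients with probabilities p_k, then keep
    the m candidates with the highest current local loss F_k(w)."""
    rng = rng or np.random.default_rng()
    candidates = rng.choice(len(p), size=d, replace=False, p=p)
    ranked = sorted(candidates, key=lambda k: local_losses[k], reverse=True)
    return ranked[:m]
```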
This work investigates federated optimization considering data heterogeneity, communication and computation limitations, and partial client participation. In contrast to past works, this paper focuses on a deeper understanding of the effect of partial client participation on the convergence rate by considering biased client participation. The paper provides a convergence analysis for any biased selection strategy, showing that the rate is composed of a vanishing error term and a non-vanishing bias term. The obtained rates explicitly show the effect of the client selection strategy and the trade-off between convergence speed and solution bias. It then proposes a parametric family of biased selection strategies, called power-of-choice, which aims to speed up the convergence of the error term at the cost of a possibly bigger bias term. Experiments are provided to highlight the benefits of the proposed pow-d strategy over standard unbiased selection strategies.
SP:a25b3465107fa32de50e4457b87eba134792e07b
Generalized Variational Continual Learning
1 INTRODUCTION

Continual learning methods enable learning when a set of tasks changes over time. This topic is of practical interest, as many real-world applications require models to be regularly updated as new data is collected or new tasks arise. Standard machine learning models and training procedures fail in these settings (French, 1999), so bespoke architectures and fitting procedures are required. This paper makes two main contributions to continual learning for neural networks. First, we develop a new regularization-based approach to continual learning. Regularization approaches adapt parameters to new tasks while keeping them close to settings that are appropriate for old tasks. Two popular approaches of this type are Variational Continual Learning (VCL) (Nguyen et al., 2018) and Online Elastic Weight Consolidation (Online EWC) (Kirkpatrick et al., 2017; Schwarz et al., 2018). The former is based on a variational approximation of a neural network's posterior distribution over weights, while the latter uses Laplace's approximation. In this paper, we propose Generalized Variational Continual Learning (GVCL), of which VCL and Online EWC are two special cases. Under this unified framework, we are able to combine the strengths of both approaches. GVCL is closely related to likelihood-tempered variational inference (VI), which has been found to improve performance in standard learning settings (Zhang et al., 2018; Osawa et al., 2019). We also see significant performance improvements in continual learning. Our second contribution is to introduce an architectural modification to the neural network that combats the deleterious overpruning effect of VI (Trippe & Turner, 2018; Turner & Sahani, 2011). We analyze pruning in VCL and show how task-specific FiLM layers mitigate it. Combining this architectural change with GVCL results in a hybrid architectural-regularization based algorithm. This additional modification results in performance that exceeds or is within statistical error of strong baselines such as HAT (Serra et al., 2018) and PathNet (Fernando et al., 2017). The paper is organized as follows. Section 2 outlines the derivation of GVCL, shows how it unifies many continual learning algorithms, and describes why it might be expected to perform better than them. Section 3 introduces FiLM layers, first from the perspective of multi-task learning, and then through the lens of variational over-pruning, showing how FiLM layers mitigate this pathology of VCL. Finally, in Section 5 we test GVCL and GVCL with FiLM layers on many standard benchmarks, including ones with few samples, a regime that could benefit more from continual learning. We find that GVCL with FiLM layers outperforms existing baselines on a variety of metrics, including raw accuracy, forwards and backwards transfer, and calibration error. In Section 5.4 we show that FiLM layers provide a disproportionate improvement to variational methods, confirming our hypothesis from Section 3.¹

2 GENERALIZED VARIATIONAL CONTINUAL LEARNING

In this section, we introduce Generalized Variational Continual Learning (GVCL) as a likelihood-tempered version of VCL, with further details in Appendix C. We show how GVCL recovers Online EWC. We also discuss further links between GVCL and the Bayesian cold posterior in Appendix D.

2.1 LIKELIHOOD-TEMPERING IN VARIATIONAL CONTINUAL LEARNING

Variational Continual Learning (VCL).
Bayes' rule calculates a posterior distribution over model parameters $\theta$ based on a prior distribution $p(\theta)$ and some dataset $D_T = \{X_T, y_T\}$. Bayes' rule naturally supports online and continual learning by using the previous posterior $p(\theta|D_{T-1})$ as a new prior when seeing new data (Nguyen et al., 2018). Due to the intractability of Bayes' rule in complicated models such as neural networks, approximations are employed, and VCL (Nguyen et al., 2018) uses one such approximation, variational inference (VI). This approximation is based on approximating the posterior $p(\theta|D_T)$ with a simpler distribution $q_T(\theta)$, such as a Gaussian. This is achieved by optimizing the ELBO for the optimal $q_T(\theta)$,

$$\text{ELBO}_{\text{VCL}} = \mathbb{E}_{\theta \sim q_T(\theta)}[\log p(D_T|\theta)] - D_{\text{KL}}(q_T(\theta)\,\|\,q_{T-1}(\theta)), \qquad (1)$$

where $q_{T-1}(\theta)$ is the approximation to the previous task posterior. Intuitively, this refines a distribution over weight samples that balances good predictive performance (the first expected prediction accuracy term) while remaining close to the prior (the second KL-divergence regularization term).

Likelihood-tempered VCL. Optimizing the ELBO will recover the true posterior if the approximating family is sufficiently rich. However, the simple families used in practice typically lead to poor test-set performance. Practitioners have found that performance can be improved by down-weighting the KL-divergence regularization term by a factor $\beta$, with $0 < \beta < 1$. Examples of this are seen in Zhang et al. (2018) and Osawa et al. (2019), where the latter uses a "data augmentation factor" for down-weighting. In a similar vein, sampling from "cold posteriors" in SG-MCMC has also been shown to outperform the standard Bayes posterior, where the cold posterior is given by $p_T(\theta|D) \propto p(\theta|D)^{\frac{1}{T}}$, $T < 1$ (Wenzel et al., 2020). Values of $\beta > 1$ have also been used to improve disentanglement in learned variational autoencoder models (Higgins et al., 2017). We down-weight the KL-divergence term in VCL, optimizing the $\beta$-ELBO,²

$$\beta\text{-ELBO} = \mathbb{E}_{\theta \sim q_T(\theta)}[\log p(D_T|\theta)] - \beta D_{\text{KL}}(q_T(\theta)\,\|\,q_{T-1}(\theta)).$$

VCL is trivially recovered when $\beta = 1$. We will now show that, surprisingly, as $\beta \to 0$ we recover a special case of Online EWC. Then, by modifying the term further as required to recover the full version of Online EWC, we will arrive at our algorithm, Generalized VCL.

2.2 ONLINE EWC IS A SPECIAL CASE OF GVCL

We analyze the effect of KL-reweighting on VCL in the case where the approximating family is restricted to Gaussian distributions over $\theta$. We will consider training all the tasks with a KL-reweighting factor of $\beta$, and then take the limit $\beta \to 0$, recovering Online EWC. Let the approximate posteriors at the previous and current tasks be denoted as $q_{T-1}(\theta) = \mathcal{N}(\theta; \mu_{T-1}, \Sigma_{T-1})$ and $q_T(\theta) = \mathcal{N}(\theta; \mu_T, \Sigma_T)$, respectively, where we are learning $\{\mu_T, \Sigma_T\}$. The optimal $\Sigma_T$ under the $\beta$-ELBO has the form (see Appendix C)

$$\Sigma_T^{-1} = \frac{1}{\beta} \nabla_{\mu_T}\nabla_{\mu_T} \mathbb{E}_{q_T(\theta)}[-\log p(D_T|\theta)] + \Sigma_{T-1}^{-1}. \qquad (2)$$

Now take the limit $\beta \to 0$. From Equation 2, $\Sigma_T \to 0$, so $q_T(\theta)$ becomes a delta function, and

$$\Sigma_T^{-1} = -\frac{1}{\beta} \nabla_{\mu_T}\nabla_{\mu_T} \log p(D_T|\theta = \mu_T) + \Sigma_{T-1}^{-1} = \frac{1}{\beta} H_T + \Sigma_{T-1}^{-1} = \frac{1}{\beta} \sum_{t=1}^{T} H_t + \Sigma_0^{-1}, \qquad (3)$$

where $H_T$ is the $T$-th task Hessian.³

¹Code is available at https://github.com/yolky/gvcl
²We slightly abuse notation by writing the likelihood as $p(D_T|\theta)$ instead of $p(y_T|\theta, X_T)$.
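For diagonal Gaussian posteriors, the $\beta$-ELBO's regularizer is just a down-weighted closed-form KL divergence. The sketch below is a minimal implementation of that term, assuming mean and log-variance parameterizations; it is illustrative, not the authors' code.

```python
import torch

def beta_weighted_kl(mu, log_var, mu_prev, log_var_prev, beta):
    """beta * KL(q_T || q_{T-1}) for diagonal Gaussians q_T = N(mu, diag(var))
    and q_{T-1} = N(mu_prev, diag(var_prev))."""
    var, var_prev = log_var.exp(), log_var_prev.exp()
    kl = 0.5 * (log_var_prev - log_var
                + (var + (mu - mu_prev) ** 2) / var_prev - 1.0).sum()
    return beta * kl
```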
Although the learnt distribution $q_T(\theta)$ becomes a delta function (and not a full Gaussian distribution as in Laplace's approximation), we will see that a cancellation of $\beta$ factors in the $\beta$-ELBO leads to the eventual equivalence between GVCL and Online EWC. Consider the terms in the $\beta$-ELBO that only involve $\mu_T$:

$$\beta\text{-ELBO} = \mathbb{E}_{\theta \sim q_T(\theta)}[\log p(D_T|\theta)] - \frac{\beta}{2}(\mu_T - \mu_{T-1})^\top \Sigma_{T-1}^{-1} (\mu_T - \mu_{T-1}) = \log p(D_T|\theta = \mu_T) - \frac{1}{2}(\mu_T - \mu_{T-1})^\top \left( \sum_{t=1}^{T-1} H_t + \beta \Sigma_0^{-1} \right) (\mu_T - \mu_{T-1}), \qquad (4)$$

where we have set the form of $\Sigma_{T-1}$ to be as in Equation 3. Equation 4 is an instance of the objective function used by a number of continual learning methods, most notably Online EWC⁴ (Kirkpatrick et al., 2017; Schwarz et al., 2018), Online-Structured Laplace (Ritter et al., 2018), and SOLA (Yin et al., 2020). These algorithms can be recovered by changing the approximate posterior class $Q$ to Gaussians with diagonal, block-diagonal Kronecker-factored covariance matrices, and low-rank precision matrices, respectively (see Appendices C.4 and C.5). Based on this analysis, we see that $\beta$ can be seen as interpolating between VCL, with $\beta = 1$, and continual learning algorithms which use point-wise approximations of curvature, as $\beta \to 0$. In Appendix A we explore how $\beta$ controls the scale of the quadratic curvature approximation, verifying with experiments on a toy dataset. Small $\beta$ values learn distributions with good local structure, while higher $\beta$ values learn distributions with a more global structure. We explore this in more detail in Appendices A and B, where we show the convergence of GVCL to Online EWC on a toy experiment.

Inference using GVCL. When performing inference with GVCL at test time, we use samples from the unmodified $q(\theta)$ distribution. This means that when $\beta = 1$ we recover the VCL predictive, and as $\beta \to 0$ the posterior collapses as described earlier, meaning that the weight samples are effectively deterministic. This is in line with the inference procedure given by Online EWC and its variants. In practice, we use values of $\beta = 0.05$–$0.2$ in Section 5, meaning that some uncertainty is retained, but not all. We can increase the uncertainty at inference time by using an additional tempering step, which we describe, along with further generalizations, in Appendix D.

2.3 REINTERPRETING λ AS COLD POSTERIOR REGULARIZATION

As described above, the $\beta$-ELBO recovers instances of a number of existing second-order continual learning algorithms, including Online EWC, as special cases. However, the correspondence does not recover a key hyperparameter $\lambda$ used by these methods that up-weights the quadratic regularization term. Instead, our derivation produces an implicit value of $\lambda = 1$, i.e., equal weight between tasks of equal sample count. In practice it is found that algorithms such as Online EWC perform best when $\lambda > 1$, typically 10–1000. In this section, we view the $\lambda$ hyperparameter as a form of cold posterior regularization. In the previous section, we showed that $\beta$ controls the length-scale over which we approximate the curvature of the posterior. However, the magnitude of the quadratic regularizer stays the same, because the $O(\beta^{-1})$ precision matrix and the $\beta$ coefficient in front of the KL term cancel out. Taking inspiration from cold posteriors (Wenzel et al., 2020), which temper both the likelihood and the prior and improve accuracy with Bayesian neural networks, we suggest tempering the prior in GVCL.
Therefore, rather than measuring the KL divergence between the posterior $q_T$ and prior $q_{T-1}$, we suggest regularizing towards a tempered version of the prior, $q_{T-1}^\lambda$. However, this form of regularization has a problem: in continual learning, over the course of many tasks, old tasks will be increasingly (exponentially) tempered. In order to combat this, we also use the tempered version of the posterior in the KL divergence, $q_T^\lambda$. This should allow us to gain the benefits of tempering the prior while remaining stable over multiple tasks in continual learning. As we now show, tempering in this way recovers the $\lambda$ hyperparameter from algorithms such as Online EWC. Note that raising the distributions to the power $\lambda$ is equivalent to tempering by $\tau = \lambda^{-1}$. For Gaussians, tempering a distribution by a temperature $\tau = \lambda^{-1}$ is the same as scaling the covariance by $\lambda^{-1}$. We can therefore expand our new KL divergence,

$$D_{\text{KL}}(q_T^\lambda \,\|\, q_{T-1}^\lambda) = \frac{1}{2}\left( (\mu_T - \mu_{T-1})^\top \lambda \Sigma_{T-1}^{-1} (\mu_T - \mu_{T-1}) + \mathrm{Tr}(\lambda \Sigma_{T-1}^{-1} \lambda^{-1} \Sigma_T) + \log\frac{|\Sigma_{T-1}|\,\lambda^{-d}}{|\Sigma_T|\,\lambda^{-d}} - d \right) = \frac{1}{2}\left( (\mu_T - \mu_{T-1})^\top \lambda \Sigma_{T-1}^{-1} (\mu_T - \mu_{T-1}) + \mathrm{Tr}(\Sigma_{T-1}^{-1} \Sigma_T) + \log\frac{|\Sigma_{T-1}|}{|\Sigma_T|} - d \right) = D_{\text{KL}_\lambda}(q_T \,\|\, q_{T-1}).$$

In the limit of $\beta \to 0$, our $\lambda$ coincides with Online EWC's $\lambda$ if the tasks have the same number of samples. However, this form of $\lambda$ has a slight problem: it increases the regularization strength of the initial prior $\Sigma_0$ on the mean parameter update. We empirically found that this negatively affects performance. We therefore propose a different version of $\lambda$, which only up-weights the "data-dependent" parts of $\Sigma_{T-1}$; this can be viewed as likelihood-tempering the previous task posterior, as opposed to tempering both the initial prior and likelihood components. This new version still converges to Online EWC as $\beta \to 0$, since the $O(1)$ prior becomes negligible compared to the $O(\beta^{-1})$ Hessian terms. We define

$$\tilde{\Sigma}_{T,\lambda}^{-1} := \frac{\lambda}{\beta} \sum_{t=1}^{T} H_t + \Sigma_0^{-1} = \lambda(\Sigma_T^{-1} - \Sigma_0^{-1}) + \Sigma_0^{-1}.$$

In practice, it is necessary to clip negative values of $\Sigma_T^{-1} - \Sigma_0^{-1}$ to keep $\tilde{\Sigma}_{T,\lambda}^{-1}$ positive definite. This is only required because of errors during optimization. We then use a modified KL divergence,

$$D_{\text{KL}_{\tilde{\lambda}}}(q_T \,\|\, q_{T-1}) = \frac{1}{2}\left( (\mu_T - \mu_{T-1})^\top \tilde{\Sigma}_{T-1,\lambda}^{-1} (\mu_T - \mu_{T-1}) + \mathrm{Tr}(\Sigma_{T-1}^{-1} \Sigma_T) + \log\frac{|\Sigma_{T-1}|}{|\Sigma_T|} - d \right).$$

Note that in Online EWC there is another parameter, $\gamma$, that down-weights the previous Fisher matrices. As shown in Appendix C, we can introduce this hyperparameter by taking the KL divergence between priors and posteriors at different temperatures, $q_{T-1}^\lambda$ and $q_T^{\gamma\lambda}$. However, we do not find that this approach improves performance. Combining everything, we have our objective for GVCL,

$$\mathbb{E}_{\theta \sim q_T(\theta)}[\log p(D_T|\theta)] - \beta D_{\text{KL}_{\tilde{\lambda}}}(q_T(\theta) \,\|\, q_{T-1}(\theta)).$$

³The actual Hessian may not be positive semidefinite while $\Sigma$ is, so here we refer to a positive semidefinite approximation of the Hessian.
⁴EWC uses the Fisher information, but our derivation results in the Hessian. The two matrices coincide when the model has near-zero training loss, as is often the case (Martens, 2020).
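A diagonal-Gaussian sketch of the modified divergence $D_{\text{KL}_{\tilde{\lambda}}}$ is given below: only the quadratic term uses the $\lambda$-up-weighted data-dependent precision $\tilde{\Sigma}^{-1}_{T-1,\lambda}$, with the clipping mentioned above, while the trace and log-determinant terms are unchanged. This is our illustrative reading of the equations, not reference code.

```python
import torch

def tempered_kl(mu, var, mu_prev, var_prev, var0, lam):
    """D_KL_lambda-tilde between diagonal Gaussians: the quadratic term uses
    lam * (prec_prev - prec_0) + prec_0, clipped to stay positive definite."""
    prec_tilde = lam * (1.0 / var_prev - 1.0 / var0).clamp(min=0.0) + 1.0 / var0
    quad = ((mu - mu_prev) ** 2 * prec_tilde).sum()
    trace = (var / var_prev).sum()
    log_det = (var_prev.log() - var.log()).sum()
    return 0.5 * (quad + trace + log_det - mu.numel())
```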
This paper proposes Generalized Variational Continual Learning (GVCL). It is shown that Online EWC and VCL are special cases of GVCL, along with other theoretical contributions. Further, GVCL is augmented with FiLM to alleviate weaknesses of VCL and GVCL. GVCL and GVCL-F are applied to a number of continual learning tasks and demonstrate competitive performance.
SP:40f435881d361a57f68c000a5cf06d868acbcda8
FedMix: Approximation of Mixup under Mean Augmented Federated Learning
Federated learning ( FL ) allows edge devices to collectively learn a model without directly sharing data within each device , thus preserving privacy and eliminating the need to store data globally . While there are promising results under the assumption of independent and identically distributed ( iid ) local data , current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases . To resolve this issue , we propose a simple framework , Mean Augmented Federated Learning ( MAFL ) , where clients send and receive averaged local data , subject to the privacy requirements of target applications . Under our framework , we propose a new augmentation algorithm , named FedMix , which is inspired by a phenomenal yet simple data augmentation method , Mixup , but does not require local raw data to be directly shared among devices . Our method shows greatly improved performance in the standard benchmark datasets of FL , under highly non-iid federated settings , compared to conventional algorithms . 1 INTRODUCTION . As we enter the era of edge computing , more data is being collected directly from edge devices such as mobile phones , vehicles , facilities , and so on . By decoupling the ability to learn from the delicate process of merging sensitive personal data , Federated learning ( FL ) proposes a paradigm that allows a global neural network to learn to be trained collaboratively from individual clients without directly accessing the local data of other clients , thus preserving the privacy of each client ( Konečný et al. , 2016 ; McMahan et al. , 2017 ) . Federated learning lets clients do most of the computation using its local data , with the global server only aggregating and updating the model parameters based on those sent by clients . One of the standard and most widely used algorithm for federated learning is FedAvg ( McMahan et al. , 2017 ) , which simply averages model parameters trained by each client in an element-wise manner , weighted proportionately by the size of data used by clients . FedProx ( Li et al. , 2020b ) is a variant of FedAvg that adds a proximal term to the objective function of clients , improving statistical stability of the training process . While several other methods have been proposed until recently ( Mohri et al. , 2019 ; Yurochkin et al. , 2019 ; Wang et al. , 2020 ) , they all build on the idea that updated model parameters from clients are averaged in certain manners . Although conceptually it provides an ideal learning environment for edge devices , the federated learning still has some practical challenges that prevent the widespread application of it ( Li et al. , 2020a ; Kairouz et al. , 2019 ) . Among such challenges , the one that we are interested in this paper is the heterogeneity of the data , as data is distributed non-iid across clients in many real-world settings ; in other words , each local client data is not fairly drawn from identical underlying distribution . Since each client will learn from different data distributions , it becomes harder for the model to be trained efficiently , as reported in ( McMahan et al. , 2017 ) . While theoretical evidence on the convergence of FedAvg with non-iid case has recently been shown in ( Li et al. , 2020c ) , efficient algorithms suitable for this setting have not yet been developed or systematically examined despite some efforts ( Zhao et al. , 2018 ; Hsieh et al. , 2020 ) . 
In addition to non-iid problem , another important issue is that updating model parameters individually trained by each client is very costly and becomes even heavier as the model complexity increases . Some existing works Smith et al . ( 2016 ) ; Sattler et al . ( 2019 ) target this issue to decrease the amount of communications while maintaining the performance of FedAvg . A more practical approach to reduce communication cost is to selectively update individual models at each round , rather than having all clients participate in parameter updates . This partial participation of clients per round hardly affects test performance in ideal iid settings but it can exacerbate the heterogeneity of weight updates across clients and as a result , the issue of non-iid ( McMahan et al. , 2017 ) . In order to mitigate the heterogeneity across clients while protecting privacy , we provide a novel yet simple framework , mean augmented federated learning ( MAFL ) , in which each client exchanges the updated model parameters as well as its mashed ( or averaged ) data . MAFL framework allows the trade-off between the amount of meaningful information exchanged and the privacy across clients , depending on several factors such as the number of data instances used in computing the average . We first introduce a naive approach in our framework that simply applies Mixup ( Zhang et al. , 2018 ) between local data and averaged external data from other clients to reduce a myopic bias . Here , we go further in our framework and ask the following seemingly impossible question : can only averaged data in our framework that has lost most of the discriminative information , bring the similar effect as a global Mixup in which clients directly access others ’ private data without considering privacy issues ? Toward this , we introduce our second and more important approach in our framework , termed Federated Mixup ( FedMix ) , that simply approximates the loss function of global Mixup via Taylor expansion ( it turns out that such approximation only involves the averaged data from other clients ! ) . Figure 1 briefly describes the concept of our methods . We validate our method on standard benchmark datasets for federated learning , and show its effectiveness against the standard federated learning methods especially for non-iid settings . In particular , we claim that FedMix shows better performance and smaller drop in accuracy with more heterogeneity or fewer clients update per communication round , further increasing difficulty of federated learning . Our contribution is threefold : • We propose a simple framework for federated learning that averages and exchanges each local data . Even naive approach in this framework performing Mixup with other clients ’ mashed data shows performance improvement over existing baselines on several settings . • We further develop a novel approximation for insecure global Mixup accessing other clients ’ local data , and find out that Taylor expansion of global Mixup only involves the averaged data from other clients . Based on this observation , we propose FedMix in our framework approximating global Mixup without accessing others ’ raw data . • We validate FedMix on several FL benchmark datasets especially focusing on non-iid data settings where our method significantly outperforms existing baselines while still preserving privacy with minimal increases in communication cost . 2 RELATED WORK . Federated learning Federated learning was first proposed in Konečný et al . 
( 2016 ) , where the prevalent asynchronous SGD ( Dean et al. , 2012 ) is used to update a global model in a distributed fashion . A pioneering work in this field proposed the currently most widely used algorithm , FedAvg ( McMahan et al. , 2017 ) , which is also the first synchronous algorithm dedicated to the federated setting . Shortly after , Li et al . ( 2020b ) proposed a variant of FedAvg , named FedProx , which the authors claimed to overcome statistical heterogeneity and increase stability in federated learning . Recent studies attempt to expand federated learning with the aim of providing learning in more diverse and practical environments such as multi-task learning ( Smith et al. , 2017 ) , generative models ( Augenstein et al. , 2020 ) , continual learning ( Yoon et al. , 2020 ) , semi-supervised learning ( Jeong et al. , 2020 ) , and data with noisy labels ( Tuor et al. , 2020 ) . Our paper focuses on general federated settings , but it could be extended to such various situations . However , these algorithms may obtain suboptimal performance when clients participating in FL have non-iid distributions ( Zhao et al. , 2018 ; Hsieh et al. , 2020 ) . While the convergence of FedAvg in such settings was initially shown by experiments in McMahan et al . ( 2017 ) and later proved in Li et al . ( 2020c ) , it does not guarantee performance as good as in the iid setting . Existing algorithms that pointed out this issue have major limitations , such as privacy violation by partial global sharing of local data ( Zhao et al. , 2018 ) or no indication of improvement over baseline algorithms such as FedAvg ( Hsieh et al. , 2020 ) . Our method aims to improve performance particularly in these non-iid situations , without compromising privacy . Mixup Mixup ( Zhang et al. , 2018 ) is a popular data augmentation technique that generates additional data by linear interpolation between actual data instances . Mixup has usually been applied to image classification tasks and shown to improve test accuracy on various datasets such as CIFAR-10 and ImageNet-2012 ( Russakovsky et al. , 2015 ) , on popular architectures such as ResNet ( He et al. , 2016 ) and ResNeXt ( Xie et al. , 2017 ) , and for various model complexities . It is also reported that Mixup helps with stability and adversarial robustness ( Zhang et al. , 2018 ) , as well as calibration and predictive certainty ( Thulasidasan et al. , 2019 ) . Mixup is expanding in various directions due to its simplicity and popularity . First , beyond image classification tasks , its effectiveness has been proven in various domains such as image segmentation ( Eaton-Rosen et al. , 2020 ) , speech recognition ( Warden , 2018 ) , and natural language processing ( Guo et al. , 2019 ) . Also , several extensions have been proposed , such as Manifold Mixup ( Verma et al. , 2018 ) , which performs Mixup in latent space , or CutMix ( Yun et al. , 2019 ) , which replaces specific regions of an image with patches from other images . In most of the previous studies on federated learning , Mixup was partially ( or locally ) used as a general data augmentation technique . Some recent studies ( Oh et al. , 2020 ; Shin et al. , 2020 ) proposed to send blended data to the server using Mixup , but they require sending locally and linearly mixed data ( mostly from two instances ) to the server at every round , and are therefore susceptible to privacy issues with large communication costs .
Our work properly modifies Mixup under the restrictions of federated learning and mitigates major challenges of federated learning such as non-iid clients . 3 MEAN AUGMENTED FEDERATED LEARNING ( MAFL ) AND FEDMIX . After briefly introducing the setup , we present our framework , which exchanges averaged data for federated learning , and our main method , which approximates the insecure global Mixup under this framework . 3.1 SETUP AND BACKGROUND . Federated learning and FedAvg Federated Averaging ( FedAvg ) ( McMahan et al. , 2017 ) has been the most popular algorithmic framework for federated learning . For every communication round t = 0 , . . . , T − 1 , a client k ∈ { 1 , . . . , N } selected for local training sends back its model $w_t^k$ ( or only the difference , to reduce communication cost ) to a global server . In every round , K clients are selected to locally update and send model parameters . The server simply averages the received parameters , so that the global model after t rounds of communication becomes $w_t = \sum_k p_k w_t^k$ , where $p_k$ is the importance of client k based on its relative number of data points among all clients selected at round t. The updated global model is sent back to clients for the next round , where it undergoes the following E local updates via stochastic gradient descent ( SGD ) : $w^k_{t+1,i+1} \leftarrow w^k_{t+1,i} - \eta \, \nabla \ell ( f ( x^k_i ; w^k_{t+1,i} ) , y^k_i )$ for i = 0 , 1 , . . . , E − 1 , batch size B , and local learning rate η . Here , $\ell$ is the loss function for learning and $f ( x ; w )$ is the model output for input x given model weights w. Mixup Mixup ( Zhang et al. , 2018 ) is a simple data augmentation technique using a linear interpolation between two input-label pairs $( x_i , y_i )$ and $( x_j , y_j )$ to augment $\tilde{x} = \lambda x_i + ( 1 - \lambda ) x_j$ and $\tilde{y} = \lambda y_i + ( 1 - \lambda ) y_j$ . The mixing coefficient $\lambda \in [ 0 , 1 ]$ is drawn from a beta distribution at each training step .
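To make the aggregation and augmentation steps above concrete, here is a minimal NumPy sketch of one FedAvg aggregation combined with local Mixup; the function names and the toy client data are hypothetical, and a real implementation would train actual models between aggregation steps.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Element-wise average of client models, weighted by local data size (p_k)."""
    p = np.asarray(client_sizes, dtype=float)
    p = p / p.sum()
    return sum(pk * wk for pk, wk in zip(p, client_weights))

def mixup(x_i, y_i, x_j, y_j, alpha=1.0):
    """Standard Mixup: linear interpolation of inputs and (one-hot) labels."""
    lam = np.random.beta(alpha, alpha)
    x_tilde = lam * x_i + (1.0 - lam) * x_j
    y_tilde = lam * y_i + (1.0 - lam) * y_j
    return x_tilde, y_tilde

# Toy example: three clients with locally trained weight vectors.
client_weights = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
client_sizes = [100, 50, 50]
w_global = fedavg_aggregate(client_weights, client_sizes)

# Naive MAFL-style step: mix a local example with averaged ("mashed") external data.
x_local, y_local = np.random.rand(32), np.eye(10)[3]
x_avg, y_avg = np.random.rand(32), np.full(10, 0.1)  # averaged data from another client
x_mix, y_mix = mixup(x_local, y_local, x_avg, y_avg)
```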
The paper proposes MAFL, a novel approach to conducting Mixup in the federated learning setting while preserving data privacy. The proposed FedMix scheme is inspired by a Taylor expansion of the global Mixup formulation. The effectiveness of MAFL is justified via empirical studies in a simulated federated learning environment, which indicate that FedMix achieves better test accuracies on various machine learning tasks.
Cortico-cerebellar networks as decoupled neural interfaces
1 INTRODUCTION . Efficient credit assignment in the brain is a critical part of learning . However , how the brain solves the credit assignment problem remains a mystery . One of the central issues of credit assignment across multiple stages of processing is the need to wait for previous stages to finish their computation before others can proceed ( Rumelhart et al. , 1986 ; Schmidhuber , 1990 ; Lee et al. , 2015 ; Marblestone et al. , 2016 ; Jaderberg et al. , 2017 ) . In deep artificial neural networks these constraints are explicit . During the forward phase a given layer has to wait for all its previous layers to finish before it can proceed , a constraint known as the forward lock . Similarly , during the backward phase a given layer has to wait for all the layers above to finish computing their gradients – the backward lock . Recently , a framework was introduced to decouple artificial neural networks – decoupled neural interfaces ( DNI ; Jaderberg et al. , 2017 ) – effectively breaking forward and/or backward locks ; DNIs are related to earlier work on using network critics to train neural networks ( Schmidhuber , 1990 ) . Here , we propose that a specialised brain area , the cerebellum , performs a similar role in the brain . In the classical view the cerebellum is key for fine motor control and learning by constructing internal models of behaviour ( Marr , 1969 ; Albus , 1971 ; Raymond and Medina , 2018 ; Wolpert et al. , 1998 ; Miall et al. , 1993 ) . More recently , however , the idea that the cerebellum is also involved in cognition has gained significant traction ( Schmahmann et al. , 2019 ; Wagner and Luo , 2020 ; Brissenden and Somers , 2019 ) . An increasing body of behavioural , anatomical and imaging studies points to a role of the cerebellum in cognition in humans and non-human primates ( Schmahmann et al. , 2019 ; Brissenden and Somers , 2019 ; Guell et al. , 2015 ; 2018 ) . Impairments in cerebellar patients occur across a range of tasks including language ( Guell et al. , 2015 ) , working memory ( Deverett et al. , 2019 ) , planning ( Baker et al. , 1996 ) , and others ( Fiez et al. , 1992 ) . These observations suggest that the cerebellum implements a universal function across the brain ( Marr , 1969 ; Albus , 1971 ; Raymond and Medina , 2018 ; Diedrichsen et al. , 2019 ) . Moreover , experimental studies looking at cortico-cerebellar interactions have demonstrated that cerebellar output is crucial for maintaining neocortical representations in order to drive behaviour ( Chabrol et al. , 2019 ; Gao et al. , 2018 ) . However , to the best of our knowledge , no theoretical framework has considered what might be the function of such interactions between the cerebellum and cortical areas . In an attempt to reduce the existing gap between experimental observations and existing computational approaches we introduce DNI as a cortico-cerebellar model – cortico-cerebellar DNI ( CC-DNI ) . Consistent with the cerebellar universal role we theorise that the cerebellum serves to break the locks inherent to both feedforward and feedback information processing in the brain , akin to DNI . In particular , we posit that the two classical internal models of the cerebellum , forward and inverse models , are equivalent to DNI-mediated unlocking of feedback ( gradients ) and feedforward communication , respectively . Following this view the cerebellum not only provides motor or sensory estimates , but also any other modality encoded by a particular brain region .
Inspired by neuroscientific studies , we test our model on sensorimotor tasks : ( i ) a target reaching task ( Sanes et al. , 1990 ; Butcher et al. , 2017 ; Nashef et al. , 2019 ) and ( ii ) a set of more complex temporal tasks based on the MNIST dataset , but also ( iii ) on a cognitive task – caption generation ( Guell et al. , 2015 ) . Our results support the cortico-cerebellar DNI models we study and show that they generally speed up learning by unlocking the main network , qualitatively consistent with a wide range of behavioural observations ( Guell et al. , 2015 ; Sanes et al. , 1990 ; Butcher et al. , 2017 ; Nashef et al. , 2019 ) . Two defining features of the cerebellum are the large expansion at the granule cell input layer , with 50 billion neurons ( the most numerous cell type in the brain ) , and the highly sparse connectivity ( each granule cell receives ∼ 4 synapses ) ( Sanger et al. , 2020 ) . These observations have long been suggested to help speed up learning in the cerebellum through decorrelation ( Albus , 1971 ; Sanger et al. , 2020 ; Cayco-Gajic et al. , 2017 ) . Building on these studies we introduce a new DNI model , sparse CC-DNI . Consistent with classical cerebellar models ( Albus , 1971 ; Cayco-Gajic et al. , 2017 ) we show that input sparsity can improve learning in the presence of high correlations . We finish with a discussion of the implications and predictions of this new brain-wide model of the cerebellum . 2 CEREBELLUM AS A DECOUPLING MACHINE . We first describe DNIs following Jaderberg et al . ( 2017 ) and then establish the link to cortico-cerebellar networks . Assume that a feedforward neural network consists of N layers , with the i-th layer ( 1 ≤ i ≤ N ) performing a “ computational step ” $f_i$ with parameters $\theta_i$ . Given input x at layer 1 , the output of the network at its final layer is therefore given by $f_N ( f_{N-1} ( \dots f_2 ( f_1 ( x ) ) \dots ) )$ . We use $F_i^j$ to denote the composition of steps from layer i to layer j ( inclusive ) . Finally , let $h_i$ denote the ( hidden ) activity at layer i , so that $h_i = f_i ( h_{i-1} )$ with $h_0 = x$ . To illustrate the locking constraints of standard artificial neural networks used in deep learning , suppose that a network is in the process of learning via backpropagation , with current input-target pair $( x , y_{targ} )$ . To update the layer parameters $\theta_i$ , the gradient $\frac{\partial L}{\partial \theta_i}$ is required , where $L = L ( y , y_{targ} )$ is the loss which compares the target value against the model output $y = F_1^N ( x )$ under some loss function L ; we then apply gradient descent on the parameters , $\theta_i \leftarrow \theta_i - \alpha \frac{\partial L}{\partial \theta_i}$ , with learning rate α > 0 . Suppose however that the network has only recently received the input x and is currently only at layer i of the forward computation . In order to update the corresponding parameters of that layer , $\theta_i$ , the layer must first wait for all remaining layers to finish $f_j$ ( j > i ) so that the loss can be computed . Only then are the various gradients of the loss backpropagated and $\frac{\partial L}{\partial \theta_i}$ finally becomes available . These two characteristics of backpropagation make layer i “ backward locked ” to $F_{i+1}^N$ , enforcing a strong dependence of the layer ’ s learning on the speed of forward and backward propagation through the rest of the network . Similarly , the network is forward locked during the forward pass . DNI can be used to unlock both the backward and forward locks ( Jaderberg et al. , 2017 ) .
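As a minimal illustration of the backward lock just described, consider a hypothetical three-layer PyTorch network (all shapes and names are ours, not the paper's): the update for the first layer's parameters only becomes available after the full forward pass and the loss computation have completed.

```python
import torch
import torch.nn as nn

# A toy network F = f3(f2(f1(x))): layer 1 cannot compute dL/dtheta_1
# until f2, f3 and the loss have all finished (the "backward lock").
f1, f2, f3 = nn.Linear(10, 20), nn.Linear(20, 20), nn.Linear(20, 5)

x, y_targ = torch.randn(8, 10), torch.randn(8, 5)

h1 = f1(x)                         # layer 1 finishes its forward step here ...
h2 = torch.relu(f2(h1))
y = f3(h2)                         # ... but must wait for the remaining layers,
loss = nn.functional.mse_loss(y, y_targ)

loss.backward()                    # ... and for the full backward pass,
grad_theta1 = f1.weight.grad       # before dL/dtheta_1 is finally available.
```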
To illustrate the model , here we focus on backward DNI , whose goal is to break the backward lock by feeding the hidden layer i activity $h_i$ to a separate neural network , a backward synthesiser $C_i^B$ , that learns to produce a synthetic gradient $\hat{g}_i$ – an estimate of the real gradient expected by layer i , $C_i ( h_i ) = \hat{g}_i \approx \frac{\partial L}{\partial h_i}$ . This synthetic gradient can then be used to update the weights of layer i as soon as $h_i$ is available , following $$\theta_i \leftarrow \theta_i - \alpha_C \, \hat{g}_i \frac{\partial h_i}{\partial \theta_i} , \quad \hat{g}_i = C_i ( h_i ) \quad ( 1 )$$ More specifically , the parameters of $C_i^B ( h_i )$ are learned by comparing the estimated gradient with a target gradient $\bar{g}_i$ so as to minimise $L_C = \| \bar{g}_i - \hat{g}_i \|$ . Ideally we set the target gradient as the true gradient , $\bar{g}_i = \frac{\partial L}{\partial h_i}$ , as can be implemented without difficulty in the case of feedforward networks . However , if we consider the temporal case ( see sections 2.1.1 and 3 ) , where we wish to estimate gradients of future losses many timesteps ahead , $L = \sum_t L_t$ , the true backpropagated gradient is computationally expensive to obtain . Jaderberg et al . ( 2017 ) counter this potential problem by applying a bootstrapping principle in which a DNI module itself is used to guide learning ; explicitly , the target gradient at timestep t is $\bar{g}_t = \sum_{\tau > t}^{T} \frac{\partial L_\tau}{\partial h_t} + C_T ( h_T ) \frac{\partial h_T}{\partial h_t}$ , where T defines some limited horizon ; note the interesting resemblance to the n-step return used in reinforcement learning algorithms . 2.1 CORTICO-CEREBELLAR DNI MODELS . Building directly on DNI we introduce a model of cortico-cerebellar computation ( CC-DNI ) . These models use a simple feedforward neural network , consistent with the mostly feedforward architecture of the cerebellum ( Marr , 1969 ; Albus , 1971 ; Raymond and Medina , 2018 ) . The input layer of the cerebellum module C models the mossy fiber input onto granule cells ( GCs ; Fig . 1 ) , which are the most numerous neurons in the brain ( > 50 billion in humans ( Herculano-Houzel , 2009 ) ) . Consistent with this cerebellar dimensionality expansion , in our models we use M > N , where M is the number of GCs and N the number of neurons in the main ( cortical ) network ( Fig . 1 ) . In particular , we use ratios M/N ∼ 4 , consistent with experimental observations ( Herculano-Houzel , 2009 ) . This is different from DNI , in which the synthesizer uses a single hidden layer with the same number of units as the LSTM ( for a comparison in performance see Fig . S2 ) . In addition , the hidden layers and the output of C approximate the role of GCs and Purkinje cells , and the cerebellar output nuclei , respectively . Cerebellar granule cells receive sparse input connections ( K ) , with only around 3-7 synapses per GC ( Herculano-Houzel , 2009 ; Eccles et al. , 2013 ) . These architectural constraints have led to sparse encoding and decorrelation theories of cerebellar function ( Albus , 1971 ; Cayco-Gajic et al. , 2017 ; Schweighofer et al. , 2001 ; Broomhead and Lowe , 1988 ; Billings et al. , 2014 ; Litwin-Kumar et al. , 2017 ; Sanger et al. , 2019 ) . Inspired by these features , we introduce a new model – sparse CC-DNI ( sCC-DNI ) , for which we set a small number of incoming input connections K = 4 ( Fig . 1 ) .
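Before turning to the decorrelation measure, here is a minimal PyTorch sketch of the backward-DNI update in equation ( 1 ), with a feedforward synthesiser standing in for the cerebellar module C; all sizes and the single-hidden-layer design are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

N, M = 64, 256  # main-network width and (expanded) synthesiser hidden width

layer = nn.Linear(32, N)                       # cortical layer i
synthesiser = nn.Sequential(                   # cerebellar module C_i
    nn.Linear(N, M), nn.ReLU(), nn.Linear(M, N))

opt_layer = torch.optim.SGD(layer.parameters(), lr=1e-2)
opt_synth = torch.optim.SGD(synthesiser.parameters(), lr=1e-2)

x = torch.randn(8, 32)
h = torch.relu(layer(x))

# Unlocked update: use the synthetic gradient g_hat = C_i(h) immediately,
# instead of waiting for the rest of the network (Eq. 1).
g_hat = synthesiser(h.detach())                # graph kept only inside C
h.backward(gradient=g_hat.detach())            # applies g_hat via dh/dtheta_i
opt_layer.step(); opt_layer.zero_grad()

# Later, once a target gradient g_bar arrives (true or bootstrapped),
# train the synthesiser to minimise ||g_bar - g_hat||.
g_bar = torch.randn_like(g_hat)                # placeholder target gradient
loss_C = (g_bar - g_hat).pow(2).mean()
loss_C.backward()
opt_synth.step(); opt_synth.zero_grad()
```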
To measure the decorrelation achieved by sCC-DNI we use the Pearson correlation and a population correlation metric $r_{pop}$ that has been shown to better capture cerebellar effects ( Cayco-Gajic et al. , 2017 ) : $$r_{pop} = \frac{Z}{Z-1} \left( \frac{ \max_i \{ \sqrt{\lambda_i} \} }{ \sum_i \sqrt{\lambda_i} } - \frac{1}{Z} \right)$$ where Z is the number of neurons being considered and $\lambda_i$ are the eigenvalues of the covariance matrix of the neuronal activity ( e.g . $h_M$ ) . To manipulate the hidden correlations of the model , we adjust the variance of its input and recurrent weights by scaling each by a factor b with b ≠ 1 ( see SM ) . There is a strong similarity between CC-DNI models and the flow of information in standard internal models of cortico-cerebellar networks ( Fig . 1 , Table S1 ) . Below we draw a parallel between classical cerebellar internal models and our CC-DNI models . For simplicity , below and in our results we focus on the link between the forward internal model and backward DNI ( but see S1 for our interpretation of the inverse internal model as forward DNI ) .
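For reference, a small NumPy sketch of the population correlation metric $r_{pop}$ defined above, under our reading of the formula; the activity matrix here is random placeholder data.

```python
import numpy as np

def r_pop(activity):
    """Population correlation: activity has shape (timesteps, Z neurons)."""
    Z = activity.shape[1]
    cov = np.cov(activity, rowvar=False)        # Z x Z covariance matrix
    lam = np.linalg.eigvalsh(cov)               # eigenvalues of the covariance
    lam = np.clip(lam, 0.0, None)               # guard against tiny negatives
    sqrt_lam = np.sqrt(lam)
    return (Z / (Z - 1)) * (sqrt_lam.max() / sqrt_lam.sum() - 1.0 / Z)

h_M = np.random.randn(1000, 256)                # placeholder hidden activity
print(r_pop(h_M))                               # near 0 for uncorrelated activity
```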
The authors propose that the cerebellum computes synthetic gradients, as used in decoupled neural interfaces (DNI), to enable learning in neural circuits without waiting for gradients to propagate backwards. The authors incorporate several architectural properties of the biological cerebellum into their cerebellar model. They show that an LSTM trained with such synthetic gradients can learn a variety of tasks, from motor reaching to caption generation. The paper is clearly written, and the link between DNI and the cerebellum is a novel idea. However, the authors make little attempt to actually compare their model with experimental findings from the cerebellum (except for the cerebellar properties built into the network), limiting its scientific impact. Meanwhile, it is not clear whether the cerebellum-inspired DNI provides concrete advantages over the DNI proposed in Jaderberg et al. (2017).
Ruminating Word Representations with Random Noise Masking
1 Introduction Most machine learning methodologies can be formulated as obtaining computational representations from real-life objects ( e.g. , images , languages , and sounds ) and then obtaining high-level representations using model architectures . Therefore , there have been two main approaches to improve model performance : ( 1 ) starting with better representations ( Melamud et al. , 2016 ; Peters et al. , 2018 ) , and ( 2 ) building more sophisticated architectures that can extract important features and generate higher-level representations ( Vaswani et al. , 2017 ; Conneau et al. , 2017 ) . For better initial representations , many NLP researchers have used pretrained word vectors trained on substantially large corpora through unsupervised algorithms , such as word2vec ( Mikolov et al. , 2013a ) , GloVe ( Pennington et al. , 2014 ) , and fastText ( Bojanowski et al. , 2016 ) . The pretrained word vectors represent the general meaning of words and increase model performance on most NLP tasks ( Turian et al. , 2010 ) . In addition to these algorithms , word vector post-processing research ( Faruqui et al. , 2015 ; Vulić et al. , 2017 ; Mrkšić et al. , 2017 ; Jo & Choi , 2018 ) has attempted to enrich the pretrained representations using external resources . These works simply modify the values of the vector representations in some way , and show improved performance . This implies that we can get further improvement through better initial representations . When training NLP models , we first initialize word representations with pretrained word vectors and then update both the model parameters and the word representations . However , in the training process , the model performance can be limited by the initial word vectors . For example , the pretrained word representations capture general meanings of words , but , in some tasks , the words might not be used in their general meaning . Although the gap between meanings can be learned through the training process , this could fail . Since the pretrained representations are trained from a huge dataset , and their objective functions are based on language modeling , the word vectors are naturally biased toward general and frequently used meanings . Besides , the word vectors are updated through gradient descent algorithms , so their values change only slightly ; the word vectors thus easily converge to local minima . Therefore , our method starts with an idea : using the word representations fine-tuned by a training process as pretrained word vectors in the next re-training process . Then , word vectors can be trained to learn representations more appropriate to the task . However , the model would then overfit , and the word representations would be stuck in local minima . Thus , we add random noise and bias to the word representations before the re-training processes , in order to prevent the model from overfitting and to take the word representations far from the local minima . ( Code : http://github.com/Sweetblueday/GraVeR ) In this paper , we propose a simple training framework to find better representations by adding random noise and bias to the word vectors during iterative training processes , which we call GraVeR ( Gradual Vector Rumination ) . We expect that the model makes good use of the re-training processes with noises , both for learning better representations and for model regularization . 2 Related Works The representations fine-tuned by GraVeR can be considered as pretrained representations from the previous training process .
Also , GraVeR utilizes word-level noises , which are used for model regularization . 2.1 Pretrained Representations Pretrained Embedding Vector is also called pretrained word representation . Following the distributional representation hypothesis ( Mikolov et al. , 2013b ) , pretrained embedding vectors are composed of pairs of ( token , n-dimensional float vector ) . Unsupervised algorithms ( e.g. , word2vec ( Mikolov et al. , 2013a ) , GloVe ( Pennington et al. , 2014 ) , fastText ( Bojanowski et al. , 2016 ) ) learn the word vectors on substantial corpora to represent general meanings of words . The pretrained embedding vectors are widely used to initialize the word vectors in models . Pretrained Embedding Model is suggested to get a deep representation of each word in context . Previous research ( McCann et al. , 2017 ; Peters et al. , 2018 ; Devlin et al. , 2018 ) trained deep architecture models and then utilized the model weights to represent words by using the outputs of the models . Although recent advanced pretrained representations ( Peters et al. , 2018 ; Devlin et al. , 2018 ) show good performance , we take the pretrained embedding vector approach because ( 1 ) re-training processes with pretrained embedding models are very expensive , and ( 2 ) we use word-level noises , whereas the embedding models use token-level embeddings combined with position embeddings . 2.2 Word-level Noises Adding noises to input data is an old idea ( Plaut et al. , 1986 ) . However , only a small number of studies have examined word-level noises , since noises on words can distort the words ’ meanings . Word Dropping . NLP tasks that utilize text in the form of sentences and phrases consider each word as a feature . However , too many features can lead models to be overfitted to the training data due to the curse of dimensionality . Therefore , the easiest way to reduce the number of features is to drop words in the sentence at random . Word Embedding Perturbation . Miyato et al . ( 2016 ) perturbed word vectors and used them in an adversarial training framework for model regularization . Cheng et al . ( 2018 ) utilized such noises to build a robust machine translation model . Also , Zhang & Yang ( 2018 ) considered the perturbation as a data augmentation method . These previous works added noises to all word embeddings . This can regularize the model weights , but it ignores the change of word representations and its re-usability . On the other hand , our method gradually adds noises to word embeddings by controlling the amount of noise . Also , iterative training processes that re-use fine-tuned word representations as pretrained word vectors for the next training process can benefit from the noises and produce better word representations . 2.3 Regularization Techniques Some research has explained that normalization can be used for model regularization ( van Laarhoven , 2017 ; Luo et al. , 2018 ; Hoffer et al. , 2018 ) . Dropout ( Srivastava et al. , 2014 ) is applied to neural network models , masking random neurons with 0 . Dropout randomly and temporarily removes neural activations during training , so the masked weights are not updated . As a result , the model is prevented from over-tuning on specific features , which provides regularization . Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) normalizes the features according to mini-batch statistics .
Batch normalization helps the features avoid covariate shift , i.e. , the situation where the weight gradients are highly dependent on the gradients of previous layers . Besides , batch normalization speeds up the training process by reshaping the loss function . Layer Normalization ( LN ) ( Ba et al. , 2016 ) also utilizes mini-batch statistics to normalize the features . The difference from batch normalization is that layer normalization normalizes the inputs across the features ; the statistics are computed for each feature , in the same way for all feature dimensions . 3 Proposed Method 3.1 Overall Process The overall process of GraVeR is illustrated in Figure 1 . GraVeR is applied to a conventional training framework , but it needs a meta-level approach that trains the model again . The iterative training process will also be denoted as a meta-epoch . When a training process finishes , we extract the fine-tuned word embeddings ( W ′ ) and add the bias ( W ′ − W ) weighted by 1/ [ MaskingRate ] . The additive bias is optional , but it takes the word embeddings far from the minima to which the fine-tuned embeddings converged . Next , we add maskers filled with random values to a portion of W ′ , and then re-train the model from scratch with the noised word embeddings . Observing the validation performance in the training process , we select the best-performing model and its fine-tuned embeddings . Additional details of GraVeR are described below . In short , GraVeR is a training framework that adds random noises ( and bias ) to the fine-tuned word embeddings and then repeats the training process with those word embeddings . 3.2 GraVeR Details The details of GraVeR are based on the idea that moderate noises are required to change the word vector distribution , but the noise scale should not be so large as to distort the word vector distribution . Random Noises and Bias . We have two hyperparameters related to the noise maskers : Masking Rate and Noise Scale . The masking rate denotes the portion of word vectors masked in every ( re ) training process . We set the masking rate to 20 % of the vocabulary size as a default . The maskers are filled with random values in a range , which is denoted as the noise scale . The default is sampled from a uniform distribution ( -1 , 1 ) , since word vectors in this range are widely used . Well-defined perturbation methods like Gaussian kernels ( Vilnis & McCallum , 2014 ) can be an extension . Next , we add a bias to the fine-tuned word embeddings ( W ′ ) in order to take W ′ to the same embedding space with better initial values . Intuitively , since we initialize with the word embeddings fine-tuned in the previous training process , the embedding space in the next training process is the same as the space in the previous training process . And the difference between the fine-tuned embedding and the initial embedding ( |W ′ − W | ) in this space represents the learning done in a training process , which improved the performance in the previous training process . So , we add the bias to W ′ , weighted by 1/ [ MaskingRate ] where 0 < MaskingRate ≤ 1 , in order to give a large bias when a small number of word vectors are noised : 1/ [ MaskingRate ] × ( W ′ − W ) . When the validation performance decreases , the bias is [ MaskingRate ] × ( W − W ′ ) for small noise . When selecting the portion of words to noise , we use word frequency derived from the training data . Intuitively , if the representations of frequently used words are changed , which means the input data are changed a lot , high-level representations such as the sentence representation are largely affected .
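A minimal NumPy sketch of the bias-and-noise step described above, under our reading of the notation (W is the initial embedding matrix, W′ the fine-tuned one, and the masked rows are chosen by word frequency); this is an illustration, not the authors' released code.

```python
import numpy as np

def graver_step(W, W_prime, masked_rows, masking_rate=0.2, improved=True):
    """One GraVeR update: additive bias plus uniform noise on masked word rows."""
    if improved:  # validation accuracy went up: large bias away from the minima
        W_next = W_prime + (1.0 / masking_rate) * (W_prime - W)
    else:         # validation accuracy went down: small bias in the other direction
        W_next = W - masking_rate * (W_prime - W)
    W_next = W_next.copy()
    noise = np.random.uniform(-1.0, 1.0, size=(len(masked_rows), W.shape[1]))
    W_next[masked_rows] += noise
    return W_next

V, d = 10000, 300                            # vocabulary size, embedding dimension
W = np.random.uniform(-1, 1, (V, d))         # initial embeddings
W_prime = W + 0.01 * np.random.randn(V, d)   # stand-in for fine-tuned embeddings
masked = np.arange(0, int(0.2 * V))          # top-20% most frequent words (assumed)
W_new = graver_step(W, W_prime, masked)
```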
Gradualness . While following the aforementioned process , we change the number of random maskers according to the validation performance . If the re-trained model ’ s validation performance increases , we move the maskers to the next most frequently used words . Otherwise , we gradually increase the number of random maskers without moving them , so that the maskers add noises to the same words again in the next re-training process . As a result , GraVeR can process both the words from the previous training process and the words in the current training process . This gradualness lets the re-training processes benefit from newly generated noises again , which makes GraVeR dynamic . In the end , the overall algorithm of GraVeR is summarized as follows :

Algorithm 1 Training Framework with GraVeR
Input : train set ( Train ) , validation set ( Val ) , a classifier ( M ) , word embeddings ( W ) , word frequency information counted from the training data
Initialize : Accval ← 0 , MaxAccval ← 0
repeat
  Train M with W
  Get trained M′ and fine-tuned W′
  Meta-level validation : Accval ← M′ ( Val ; W′ )
  if MaxAccval < Accval then
    MaxAccval ← Accval
    MaskedWords ← NextFrequentlyUsedWords
    W ← W′ + 1/MaskingRate × ( W′ − W )
  else
    MaskedWords ← MaskedWords + NextFrequentlyUsedWords
    W ← W − MaskingRate × ( W′ − W )
  end if
  W [ MaskedWords ] ← W [ MaskedWords ] + U ( −1 , 1 )
until all the words are masked

GraVeR can be applied to any conventional training framework , so the method is independent of model architectures , in that most NLP models use word-level embeddings . The random noises might disturb the representations , but the noises which harm the performance are compensated during the re-training processes , while the other noises are used to update the word vectors beyond their initial values . Besides , the additive bias makes the use of random noises stable . By using this re-training-with-noise process , we expect that the model is prevented from overfitting . Meanwhile , the model is incrementally fitted to the validation set through early stopping in a training process and meta-level early stopping in the re-training processes . Therefore , the model keeps being fitted to the validation set with regularization , showing better performance on the test set , since the model performance on the validation set is normally correlated with the performance on the test set . 4 Experiment 4.1 Datasets We prepare three topic classification datasets : DBpedia ontology ( Lehmann et al. , 2015 ) , YahooAnswer ( Chang et al. , 2008 ) , and AGNews . We also prepare two sentiment classification datasets : Yelp reviews ( Zhang et al. , 2015 ) and IMDB ( Maas et al. , 2011 ) . The YahooAnswer dataset is used for two different tasks : classifying upper-level categories and classifying lower-level categories . The data statistics are presented in the Appendix . We assign 15 % of each train set to the validation set , and each dataset has its own test set . The validation set is used for early stopping , both at every epoch and at every meta-epoch . We use all words tokenized by whitespace and all symbols , with a 300-dimensional embedding space . ( Dataset : https://cogcomp.seas.upenn.edu/page/resource view/89 . Note that Chang et al . ( 2008 ) state that the dataset has 20 top-level categories , but it has three duplicated top-level categories because of typos . )
4.2 Classifier We use TextCNN ( Kim , 2014 ) classifiers . The model consists of 2 convolutional layers with 32 channels and 16 channels , respectively . We also adopt multiple kernel sizes – 2 , 3 , 4 , and 5 – followed by ReLU activation ( Hahnloser et al. , 2000 ) and max-pooling . The kernel outputs are concatenated after every max-pooling layer . Finally , the features are classified into the classes using a fully connected layer . Although this model has few parameters ( 136K ) compared with recent high-performance models like BERT ( Devlin et al. , 2018 ) , we use this model in order to utilize multiple re-training processes . Also , we employ simple techniques for model improvement , such as Dropout and Layer Normalization , to get performance similar to recent models and to justify the use of a simple model as the basic classifier . The vanilla classifier and the tuned classifier are denoted as TextCNNbase and TextCNNtune , respectively . An additional comparison with very deep models is described in the Appendix . We optimize the model using Adam ( Kingma & Ba , 2014 ) with a 1e-3 learning rate and early stopping . If the validation accuracy does not increase over five epochs , we stop model training . Initial word embeddings are random unless mentioned explicitly . 4.3 Baseline Implementation We cannot fully compare our method with Word Embedding Perturbation ( Miyato et al. , 2016 ) because we do not use the adversarial training framework . Instead , random noises are added to all word embeddings , as other word embedding perturbation methods did ( Cheng et al. , 2018 ; Zhang & Yang , 2018 ) . In order to compare the effect of regularization , we implement five regularization ( including normalization ) methods . Word dropping is implemented in the pre-processing part , removing random words in the text . We set the random probability p to 0.1 . Dropout ( Srivastava et al. , 2014 ) is added to the final fully connected layer with dropout probability 0.1 , which performs best in our experiments . Batch Normalization ( Ioffe & Szegedy , 2015 ) is placed between every convolutional layer and activation function , as used in the original paper . Layer Normalization ( Ba et al. , 2016 ) is implemented in the same position . We report the performance averaged over five runs . 5 Results 5.1 Performance Firstly , we present the effect of GraVeR on major pretrained word embeddings , as shown in Table 1 . We use three pretrained word embeddings – word2vec ( w2v ) ( Mikolov et al. , 2013a ) GoogleNews-vectors-negative300.bin , GloVe ( glv ) ( Pennington et al. , 2014 ) glove.42B.300d.txt , and fastText ( ftt ) ( Bojanowski et al. , 2016 ) wiki-news-300d-1M-subword.vec – and one random embedding . GraVeR improves the model performance on most of the datasets , and the largest performance gain is on random embeddings . Random embeddings with GraVeR even perform better than pretrained embeddings on a few datasets . This result implies that we can learn better word representations through GraVeR in any embedding space , since we train a model from scratch except for the word embeddings . However , with GloVe , GraVeR is not effective on the model performance , because the distribution of word vectors in the pretrained embeddings is already good enough for the tasks . In contrast , GraVeR makes substantial improvements when using relatively poor embeddings for the tasks . The comparison with regularization techniques is presented in Table 2 ( Top ) .
The result shows that the performance gain from GraVeR is larger than that of the other regularization methods . We also present results when our method is combined with other regularization methods in Table 2 ( Bottom ) . The results show that GraVeR combines positively with the other regularization techniques , further improving the model performance . Figure 2 also shows that our method clearly increases validation performance , which results in improvements on the test set . Comparisons with a recent model ( e.g. , BERT ) without extra data resources and with word embedding perturbation methods are discussed in Further Analysis ( §6 ) . 5.2 Word Distributions To analyze GraVeR with respect to word representation , we extract the word representations updated on the DBpedia dataset and present the list of the top-20 nearest words to a cue word in Table 3 . In order to reduce the effect of randomness , we use GloVe as the initial embedding . We can see that the word vectors are further fine-tuned and even find other similar words not found by the embeddings fine-tuned only once . These results imply that our method can change the word vector distribution so that it is further fine-tuned to the task . We also present results for another cue word and a visualization of the top-100 nearest word vector distribution using t-SNE ( Maaten & Hinton , 2008 ) in the Appendix . 6 Further Analysis From this section , we mainly use TextCNNtune to show the effect of GraVeR on a model whose performance is similar to state-of-the-art models . 6.1 Random Noise and Bias Noises . The range of the random values filling the maskers also affects the performance . We first re-train the model without any modification to observe the effect of the re-training processes alone . The model shows slightly better performance within a few meta-epochs , but it then overfits , as we expected ( see Table 4 ) . The performance when we use Gaussian noise instead of the uniform distribution is also presented ; it shows a comparable but slightly worse result than our default setting . When the noise range is 0 , the noise consists only of the additive bias , which also shows marginally worse performance . The word perturbation method performs slightly better than simple re-training , but there are large gaps compared to the variations of GraVeR . 6.2 Hyperparameters Masking Rate The amount of noise added by GraVeR is an important factor , in that some noises should be small enough to be corrected during the re-training processes , while other noises should be large enough to change the word vector distribution . We first change the masking rate , i.e. , how far the random maskers move in every re-training process . The larger the masking rate becomes , the more words are masked in a re-training process , so the noise increases . Conversely , the amount of noise decreases as the masking rate becomes small . The effect of the masking rate is presented in Table 5 . Masking Rate = 0.1 also shows good performance , but it needs twice as many re-training processes as Masking Rate = 0.2 . We thus use 0.2 as the default . Gradualness Policy Our proposed method increases the number of maskers when the validation performance decreases , in order to add noises again to the words masked in the previous re-training process . Otherwise , we simply move to the next most frequently used words .
We try variations of the gradualness : ( 1 ) no gradualness , ( 2 ) gradualness only when the validation performance increases , which is the reverse of our proposed method , and ( 3 ) gradualness always applied regardless of the validation performance . The results are presented in Table 6 . Among the ablations , our proposed approach performs best . 6.3 On Other Model Architectures We apply our method to a transformer-based classifier . The transformer classifier has a 300-dimensional embedding with positional embeddings , a batch size of 32 , and a sequence length of 512 . It also has 10 attention heads but only 1 encoder layer ; stacking more than 2 encoder layers harms the performance , and we conjecture that the training sets are not large enough to train the model parameters . We apply average pooling to the encoded sentence vectors and use a linear layer for classification . The other parameters follow the default settings of PyTorch nn.TransformerEncoderLayer . Table 7 shows that GraVeR works well even on other model architectures . 6.4 Pretraining by Training Data . Table 8 shows the classification performance according to the usage of pretrained word embeddings . Gen means general representations trained on a large corpus , i.e. , the pretrained embeddings mentioned in §5.1 and bert-base-uncased from huggingface ( Wolf et al. , 2019 ) . Sp denotes specialized representations trained using only the training data . We set the hyperparameters for training to the default settings in their APIs . Despite its small number of parameters , TextCNNtune with GraVeR shows performance comparable to BERT ( Sp ) . GraVeR ’ s improvement is even similar to that of glv ( Gen ) , which means GraVeR can complement the information from external resources , in this case Common Crawl ( 42B tokens , 1.9M vocab ) . Although there is a performance gap with pretrained BERT , the advantages of GraVeR are that it ( 1 ) does not need any extra data resources , and ( 2 ) does not need very deep model architectures consisting of huge numbers of parameters . Therefore , GraVeR is an attractive option when using pretrained word embeddings . 7 Discussion Re-training Cost . Although GraVeR shows strong performance , its re-training processes take 1/ [ MaskingRate ] times as long as a conventional training process . However , we showed that a small model with GraVeR can even perform on par with recent huge models . Considering parameter sizes , for example , TextCNNtune has 136K parameters and needs five re-training processes in our default setting MaskingRate = 0.2 , while BERT has 110M parameters with one fine-tuning process . Then the numbers of parameters that need to be trained are 780K and 110M , respectively , i.e. , 141 times cheaper . Furthermore , the training time for the small model is much shorter , since it can be trained with a larger mini-batch size . As representation learning . GraVeR ’ s representations are learned from the given training set only . In Table 8 , general word embeddings trained from large data show worse performance than domain- ( or data- ) specialized word embeddings . That is , in order to solve the task , the word representations should be specialized to the given data . By using GraVeR , we can easily get specialized ( further fine-tuned ) representations only with the random noise , including bias , and the iterative training processes . Besides , we believe that using a sophisticated noising trick instead of simple random noise would make further improvements in GraVeR .
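For concreteness, a rough PyTorch sketch of the transformer classifier configuration described in Section 6.3; the vocabulary size and number of classes are placeholders, and any setting not stated in the text is left at the nn.TransformerEncoderLayer default, as the paper indicates.

```python
import torch
import torch.nn as nn

class TransformerClassifier(nn.Module):
    def __init__(self, vocab_size=30000, num_classes=10, d_model=300,
                 nhead=10, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)      # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.fc = nn.Linear(d_model, num_classes)

    def forward(self, x):                              # x: (batch, seq_len) token ids
        pos_ids = torch.arange(x.size(1), device=x.device)
        h = self.encoder(self.tok(x) + self.pos(pos_ids))
        return self.fc(h.mean(dim=1))                  # average-pool, then classify

model = TransformerClassifier()
logits = model(torch.randint(0, 30000, (32, 512)))     # batch 32, length 512
```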
8 Conclusion We propose GraVeR , which adds random noises and bias to word embeddings in order to change the word vector distribution and regularize a model . Through the re-training processes , we can make use of the noises to learn better representations . In the experiments , as the model incrementally fits the validation set , GraVeR largely improves model performance . We expect that our general training approach can be applied to various models to improve their performance . References Jimmy Lei Ba , Jamie Ryan Kiros , and Geoffrey E Hinton . Layer normalization . arXiv preprint arXiv:1607.06450 , 2016 . Piotr Bojanowski , Edouard Grave , Armand Joulin , and Tomas Mikolov . Enriching word vectors with subword information . arXiv preprint arXiv:1607.04606 , 2016 . Ming-Wei Chang , Lev-Arie Ratinov , Dan Roth , and Vivek Srikumar . Importance of semantic representation : Dataless classification . In AAAI , volume 2 , pp . 830–835 , 2008 . Yong Cheng , Zhaopeng Tu , Fandong Meng , Junjie Zhai , and Yang Liu . Towards robust neural machine translation . In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics ( Volume 1 : Long Papers ) , pp . 1756–1766 , 2018 . Alexis Conneau , Holger Schwenk , Loïc Barrault , and Yann Lecun . Very deep convolutional networks for text classification . In European Chapter of the Association for Computational Linguistics EACL ' 17 , 2017 . Jacob Devlin , Ming-Wei Chang , Kenton Lee , and Kristina Toutanova . Bert : Pre-training of deep bidirectional transformers for language understanding . arXiv preprint arXiv:1810.04805 , 2018 . Manaal Faruqui , Jesse Dodge , Sujay Kumar Jauhar , Chris Dyer , Eduard Hovy , and Noah A Smith . Retrofitting word vectors to semantic lexicons . In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics : Human Language Technologies , pp . 1606–1615 , 2015 . Richard HR Hahnloser , Rahul Sarpeshkar , Misha A Mahowald , Rodney J Douglas , and H Sebastian Seung . Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit . Nature , 405 ( 6789 ) :947 , 2000 . Elad Hoffer , Ron Banner , Itay Golan , and Daniel Soudry . Norm matters : efficient and accurate normalization schemes in deep networks . In Advances in Neural Information Processing Systems , pp . 2164–2174 , 2018 . Sergey Ioffe and Christian Szegedy . Batch normalization : Accelerating deep network training by reducing internal covariate shift . In International Conference on Machine Learning , pp . 448–456 , 2015 . Hwiyeol Jo and Stanley Jungkyu Choi . Extrofitting : Enriching word representation and its vector space with semantic lexicons . ACL 2018 , pp . 24 , 2018 . Yoon Kim . Convolutional neural networks for sentence classification . In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing ( EMNLP ) , pp . 1746–1751 , 2014 . Diederik P Kingma and Jimmy Ba . Adam : A method for stochastic optimization . arXiv preprint arXiv:1412.6980 , 2014 . Jens Lehmann , Robert Isele , Max Jakob , Anja Jentzsch , Dimitris Kontokostas , Pablo N Mendes , Sebastian Hellmann , Mohamed Morsey , Patrick Van Kleef , Sören Auer , et al . Dbpedia – a large-scale , multilingual knowledge base extracted from wikipedia . Semantic Web , 6 ( 2 ) :167–195 , 2015 . Ping Luo , Xinjiang Wang , Wenqi Shao , and Zhanglin Peng . Towards understanding regularization in batch normalization . 2018 .
Andrew L Maas , Raymond E Daly , Peter T Pham , Dan Huang , Andrew Y Ng , and Christopher Potts . Learning word vectors for sentiment analysis . In Proceedings of the 49th annual meeting of the association for computational linguistics : Human language technologies-volume 1 , pp . 142–150 . Association for Computational Linguistics , 2011 . Laurens van der Maaten and Geoffrey Hinton . Visualizing data using t-sne . Journal of machine learning research , 9 ( Nov ) :2579–2605 , 2008 . Bryan McCann , James Bradbury , Caiming Xiong , and Richard Socher . Learned in translation : Contextualized word vectors . In Advances in Neural Information Processing Systems , pp . 6294–6305 , 2017 . Oren Melamud , Jacob Goldberger , and Ido Dagan . context2vec : Learning generic context embedding with bidirectional lstm . In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning , pp . 51–61 , 2016 . Tomas Mikolov , Kai Chen , Greg Corrado , and Jeffrey Dean . Efficient estimation of word representations in vector space . arXiv preprint arXiv:1301.3781 , 2013a . Tomas Mikolov , Ilya Sutskever , Kai Chen , Greg S Corrado , and Jeff Dean . Distributed representations of words and phrases and their compositionality . In Advances in neural information processing systems , pp . 3111–3119 , 2013b . Takeru Miyato , Andrew M Dai , and Ian Goodfellow . Adversarial training methods for semi-supervised text classification . arXiv preprint arXiv:1605.07725 , 2016 . Nikola Mrkšić , Ivan Vulić , Diarmuid Ó Séaghdha , Ira Leviant , Roi Reichart , Milica Gašić , Anna Korhonen , and Steve Young . Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints . Transactions of the Association for Computational Linguistics , 5:309–324 , 2017 . Jeffrey Pennington , Richard Socher , and Christopher Manning . Glove : Global vectors for word representation . In Proceedings of the 2014 conference on empirical methods in natural language processing ( EMNLP ) , pp . 1532–1543 , 2014 . Matthew Peters , Mark Neumann , Mohit Iyyer , Matt Gardner , Christopher Clark , Kenton Lee , and Luke Zettlemoyer . Deep contextualized word representations . In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics : Human Language Technologies , Volume 1 ( Long Papers ) , volume 1 , pp . 2227–2237 , 2018 . David C Plaut et al . Experiments on learning by back propagation . 1986 . Nitish Srivastava , Geoffrey Hinton , Alex Krizhevsky , Ilya Sutskever , and Ruslan Salakhutdinov . Dropout : a simple way to prevent neural networks from overfitting . The Journal of Machine Learning Research , 15 ( 1 ) :1929–1958 , 2014 . Joseph Turian , Lev Ratinov , and Yoshua Bengio . Word representations : a simple and general method for semi-supervised learning . In Proceedings of the 48th annual meeting of the association for computational linguistics , pp . 384–394 . Association for Computational Linguistics , 2010 . Twan van Laarhoven . L2 regularization versus batch and weight normalization . arXiv preprint arXiv:1706.05350 , 2017 . Ashish Vaswani , Noam Shazeer , Niki Parmar , Jakob Uszkoreit , Llion Jones , Aidan N Gomez , Łukasz Kaiser , and Illia Polosukhin . Attention is all you need . In Advances in Neural Information Processing Systems , pp . 5998–6008 , 2017 . Luke Vilnis and Andrew McCallum . Word representations via gaussian embedding . arXiv preprint arXiv:1412.6623 , 2014 .
Ivan Vulić , Nikola Mrkšić , and Anna Korhonen . Cross-lingual induction and transfer of verb classes based on word vector space specialisation . arXiv preprint arXiv:1707.06945 , 2017 . Thomas Wolf , Lysandre Debut , Victor Sanh , Julien Chaumond , Clement Delangue , Anthony Moi , Pierric Cistac , Tim Rault , Rémi Louf , Morgan Funtowicz , Joe Davison , Sam Shleifer , Patrick von Platen , Clara Ma , Yacine Jernite , Julien Plu , Canwen Xu , Teven Le Scao , Sylvain Gugger , Mariama Drame , Quentin Lhoest , and Alexander M. Rush . Huggingface ’ s transformers : State-of-the-art natural language processing . ArXiv , abs/1910.03771 , 2019 . Dongxu Zhang and Zhichao Yang . Word embedding perturbation for sentence classification . arXiv preprint arXiv:1804.08166 , 2018 . Xiang Zhang , Junbo Zhao , and Yann LeCun . Character-level convolutional networks for text classification . In Advances in neural information processing systems , pp . 649–657 , 2015 .
It's already known that embeddings like word2vec and glove are biased [1] and needs postprocessing for better performance. This paper designed a novel approach to do embedding normalizations. After each round of training, noise is intentionally introduced to perturbate the finetuned parameters. Afterwards, another round of training starting from perturbated local optima could in potential converge to a better one. This method is validated via CNN-based text classification.
Improving the accuracy of neural networks in analog computing-in-memory systems by a generalized quantization method
1 INTRODUCTION . Deep neural networks ( DNNs ) have been widely used in a variety of fields , such as computer vision ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2015 ; He et al. , 2016 ) , speech recognition ( Graves et al. , 2013 ; Hinton et al. , 2012 ; Graves & Jaitly , 2014 ) , natural language processing ( Kim , 2014 ; Yang et al. , 2016 ; Lai et al. , 2015 ) and so on ( Mnih et al. , 2015 ; Silver et al. , 2016 ) . However , the high complexity of DNN models makes them hard to apply on edge devices ( mobile phones , onboard computers , smart sensors , wearable devices , etc . ) , which can only provide limited computing speed and power ( Sze et al. , 2017 ) . Crossbar-enabled analog computing-in-memory ( CACIM ) systems are a promising approach to facilitate the application of DNNs on edge devices ( Yang et al. , 2013 ) . They can carry out some typical operations in situ , exactly where the data are located ( Ielmini & Wong , 2018 ) , such as the multiply-accumulate ( MAC ) operation , which is the most frequently performed operation in DNNs . The cost of transferring data for these operations can thus be reduced , and both the computation speed and the energy efficiency can be improved significantly ( Yao et al. , 2020 ) . The footstone of CACIM systems for DNNs is the crossbar array of computational memory units ( Hu et al. , 2012 ) . As shown in Figure 1 , taking the memristor device as an example , each weight ( $W_{ij}$ ) of a connection in one layer of a neural network is stored as the conductance state ( $G_{ij}$ ) of a memristor . The input data are represented as voltages ( $V_i$ ) . After applying the voltage $V_i$ to each row , the current ( $I_j$ ) collected at each column is exactly the MAC result according to Kirchhoff ’ s law and Ohm ’ s law , $I_j = \sum_i V_i G_{ij}$ . Before applying a DNN in CACIM systems , an essential step is writing the weights of the DNN into the memory units , which is usually called mapping . However , the mapping overhead is directly related to the precision of the weights . Therefore , a weight quantization method that compresses high-precision weights in a DNN is important for an efficient implementation of DNNs in CACIM systems . The most important criterion of a weight quantization method is the accuracy loss , which has a strong correlation with the quantization error . The quantization error is determined by the quantizer used in the quantization method . To the best of our knowledge , generalized quantizers have not been used to quantize DNN weights in CACIM systems . Previous work used either uniform quantizers ( Jung et al. , 2019 ; Yang et al. , 2019 ) or non-uniform quantizers ( Tang et al. , 2017 ; Sun et al. , 2018 ) . However , an empirical study has shown that the weights in one layer of a DNN usually follow a bell-shaped and long-tailed distribution ( Han et al. , 2016 ) . Meanwhile , we found that weights in the last fully-connected layer of DNNs for classification tasks usually follow an asymmetric distribution . A generalized density-aware quantizer ( GDQ ) , in which the quantization results are determined entirely by the data without any constraints , can obtain a smaller quantization error than either uniform or non-uniform quantizers ( Figure 2 ) . Since the weights are stored and operated on as analog quantities in CACIM systems , using GDQ to quantize the weights does not introduce extra cost in the inference phase . In CACIM systems , the noise on analog weights is inevitable and cannot be ignored .
The perturbations of weights will degrade the performance of networks severely . It is better to quantize the weights and improve the robustness to noise together in the training process . In this work , we introduce a generalized density-aware quantization method and a noise-aware training scheme ( NATS ) for DNNs in CACIM systems , and achieve no degradation of performance using 8-level weights on a series of computer vision tasks . Under the same number of weight levels , our proposed method performs better than others . 2 PRELIMINARY . A quantization method for DNNs consists of two parts : one is the quantizer , the other is the quantization algorithm , which describes how to use the quantizer in a neural network . 2.1 QUANTIZER . Formulation : A quantizer is a function $f : \mathbb{R} \to q$ , where $q = \{ q_i \in \mathbb{R} \mid i = 1 , 2 , \cdots , v \}$ . Let $x = \{ x_i \in \mathbb{R} \mid i = 1 , 2 , \cdots , d \}$ be the data to be quantized . Each $q_i$ has a corresponding domain $Q_i \subset \mathbb{R}$ such that $$f ( x ) = q_i , \ \text{if} \ x \in Q_i , \quad ( 1 )$$ where $\bigcup_{i=1}^{v} Q_i = \mathbb{R}$ and $Q_i \cap Q_j = \emptyset$ when $i \neq j$ . In most cases , $\{ Q_i \mid i = 1 , 2 , \cdots , v \}$ are v intervals separated by v − 1 endpoints $e = \{ e_i \in \mathbb{R} \mid i = 1 , 2 , \cdots , v - 1 \}$ on the real axis . Without loss of generality , we assume that $q_1 < q_2 < \cdots < q_v$ and $e_1 < e_2 < \cdots < e_{v-1}$ , that is , $$Q_1 = \{ x : -\infty < x \leq e_1 \} , \ Q_2 = \{ x : e_1 < x \leq e_2 \} , \ \ldots , \ Q_{v-1} = \{ x : e_{v-2} < x \leq e_{v-1} \} , \ Q_v = \{ x : e_{v-1} < x < \infty \} . \quad ( 2 )$$ We use $\Theta = \{ q , e \}$ to denote a quantizer , and call $v = |q|$ the precision of the quantizer . The quantization error of a data point is defined as $$z ( x_i ) = f ( x_i ) - x_i = q_\alpha - x_i , \ \text{if} \ x_i \in Q_\alpha . \quad ( 3 )$$ A quantizer $\Theta = \{ q , e \}$ is uniform if q is an arithmetic progression . A quantization method using a uniform quantizer is referred to as a uniform quantization method ; examples include BinaryConnect ( Courbariaux et al. , 2015 ) and the binary weight network ( BWN ) ( Rastegari et al. , 2016 ) , which have two levels , the ternary weight networks ( TWN ) ( Li et al. , 2016 ) and the trained ternary quantization ( TTQ ) ( Zhu et al. , 2016 ) , which have three levels , and some other methods that have more levels ( He & Fan , 2019 ; Jung et al. , 2019 ; Yang et al. , 2019 ; Shin et al. , 2017 ; Esser et al. , 2019 ) . The q in a non-uniform quantizer is constrained to follow some kind of non-uniform distribution , such as a geometric sequence ( Li et al. , 2020 ; Miyashita et al. , 2016 ; Zhou et al. , 2017 ) . These quantization methods work best when the data to be quantized follow the corresponding exponential distribution . The work of Choi et al . ( 2016 ) adopts a k-means-like algorithm to quantize the weights . Beyond uniform and non-uniform quantization , generalized quantization methods do not constrain the distribution of q : q is determined based on the distribution of the data to be quantized , which is robust to all kinds of data distributions . 2.2 QUANTIZATION ALGORITHM . To accelerate the inference process of a neural network , the quantizer is usually applied to both the weights and the feature maps . In most CACIM systems , however , the activations are still implemented by digital circuits , which is significantly different from the analog weights , so in this work we focus on the weight quantization problem . There are two main strategies for weight quantization . The first one directly quantizes the weights of a trained neural network without fine-tuning or retraining ( Zhao et al. , 2019 ; Banner et al. , 2019 ) . This strategy is fast and convenient , but its accuracy loss is usually greater than that of the second one , which repeats training and quantization iteratively until the performance is good enough ( Courbariaux et al. , 2015 ; Rastegari et al. , 2016 ; Li et al. , 2016 ; Zhu et al. , 2016 ) .
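Before describing the specific algorithm, here is a small NumPy sketch of the generic quantizer defined in equations ( 1 )–( 3 ): given levels q and endpoints e, each value is mapped to the level of the interval it falls into; the example levels and endpoints are arbitrary.

```python
import numpy as np

def quantize(x, q, e):
    """Map each value in x to a level q_i according to endpoints e (Eqs. 1-2)."""
    q, e = np.asarray(q), np.asarray(e)
    idx = np.searchsorted(e, x, side='left')   # interval index: e_{i-1} < x <= e_i
    return q[idx]

q = np.array([-0.6, -0.2, 0.2, 0.6])           # v = 4 levels (arbitrary example)
e = np.array([-0.4, 0.0, 0.4])                 # v - 1 endpoints
x = np.random.randn(10)
x_q = quantize(x, q, e)
z = x_q - x                                    # per-point quantization error (Eq. 3)
```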
2.2 QUANTIZATION ALGORITHM . To accelerate the inference of a neural network, a quantizer is usually applied to both the weights and the feature maps. In most CACIM systems, however, the activations are still implemented by digital circuits, which differ significantly from the analog weights, so in this work we focus on the weight quantization problem. There are two main strategies for weight quantization. The first directly quantizes the weights of a trained neural network without fine-tuning or retraining (Zhao et al., 2019; Banner et al., 2019). This strategy is fast and convenient, but its accuracy loss is usually greater than that of the second strategy, which repeats training and quantization iteratively until the performance is good enough (Courbariaux et al., 2015; Rastegari et al., 2016; Li et al., 2016; Zhu et al., 2016). In the iterative quantization scheme, starting from a pre-trained neural network is more likely to give better performance than starting from scratch (Yang et al., 2019).

3 GENERAL QUANTIZATION ALGORITHMS FOR DNNS IN CACIM SYSTEMS .

3.1 LLOYD'S QUANTIZER . We use Lloyd's (1982) quantizer to quantize the weights of the DNN in this work. Let Θ = {q, e} be a quantizer as defined in Section 2.1. The quantization distortion E of a quantizer is defined as

E = ∫_{−∞}^{+∞} z²(x) dF(x)   (4)
  = Σ_{α=1}^{v} ∫_{Q_α} (q_α − x)² dF(x),   (5)

where F(x) is the cumulative probability distribution function of x. To minimize E, the quantizer iteratively optimizes q and e until the relative change of E between two consecutive iterations is smaller than a given threshold.

3.2 NOISE-AWARE TRAINING SCHEME . We use the noise-aware training scheme (Murray & Edwards, 1994) to improve robustness to weight noise. A Gaussian noise with zero mean and standard deviation σ is added to each weight during the forward pass, where σ is determined by the product of the maximum absolute value of the quantized weights |W̄|_max and a constant ratio δ:

W̃ ~ N(0, σ²), σ = δ · |W̄|_max.   (6)

3.3 TRAIN A WEIGHT QUANTIZED NEURAL NETWORK .

Algorithm 1 Training an L-layer quantized network
Input: a mini-batch of inputs and targets (I, Y); the pre-trained full-precision weights W; v distinct quantization levels; learning rate η.
Initialize: quantizer Θ by Lloyd (1982); projection threshold T.
for i = 1 to L do
  compute the quantized weight W̄_i with Θ_i
  compute the noisy weight Ŵ_i = W̄_i + W̃_i
end for
compute the model output: Ŷ = forward(I, Ŵ)
compute the loss L
for i = L to 1 do
  compute the weight gradient ∂L/∂W_i = ∂L/∂Ŵ_i
  update the full-precision weight W_i according to ∂L/∂W_i and η
  if | ‖W̄_i ⊙ W_i‖₁ / ‖W̄_i ⊙ W̄_i‖₁ − 1 | > T then
    re-initialize Θ by Lloyd (1982)
  end if
end for

The training algorithm with quantization and noise awareness is shown in Algorithm 1, where ‖X‖_p = (Σ_i Σ_j |x_ij|^p)^{1/p} is the p-norm and X ⊙ Y denotes element-wise multiplication. There is one quantizer for each layer of the neural network. As in previous work (Courbariaux et al., 2015; Rastegari et al., 2016), the quantized weights are used in the forward and backward propagations, but the updates are applied to the full-precision weights. The Straight-Through Estimator (STE) (Bengio et al., 2013) is used during backward propagation. To reduce the frequency of re-optimizing the quantizer, we use the projection of the quantized weight vector onto the full-precision weight vector to decide whether the quantizer needs to be updated after each iteration; when the projection exceeds a certain range, the quantizer is updated. We found that, when starting from a pre-trained neural network, the distribution of the weights does not change much; the projection therefore does not exceed the proper range during the whole training phase, and the quantizer is optimized only once at the beginning.
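A compact sketch of the two ingredients above follows: a Lloyd-style alternating update of levels and endpoints, and a noise-aware forward pass as in eq. (6). This is a plain re-implementation under the stated assumptions (Gaussian weight noise with σ = δ·|W̄|_max), not the authors' code:

```python
import numpy as np

def lloyd(x, v, tol=1e-4, iters=100):
    """Alternate between nearest-level assignment and centroid update (Lloyd, 1982)."""
    q = np.quantile(x, np.linspace(0.05, 0.95, v))  # initial levels
    prev = np.inf
    for _ in range(iters):
        e = (q[:-1] + q[1:]) / 2                    # optimal endpoints: midpoints
        idx = np.searchsorted(e, x)
        q = np.array([x[idx == a].mean() if np.any(idx == a) else q[a]
                      for a in range(v)])           # optimal levels: conditional means
        E = np.mean((q[idx] - x) ** 2)              # distortion, eqs. (4)-(5)
        if abs(prev - E) / max(E, 1e-12) < tol:     # stop on small relative change
            break
        prev = E
    return q, e

def noisy_quantized_forward(W, q, e, delta, rng):
    """Quantize, then inject Gaussian noise with sigma = delta * |W_bar|_max (eq. 6)."""
    Wq = q[np.searchsorted(e, W)]
    sigma = delta * np.abs(Wq).max()
    return Wq + rng.normal(0.0, sigma, size=Wq.shape)
# The backward pass (not shown) would copy the gradient w.r.t. the noisy weights
# straight through to the full-precision weights, as the STE prescribes.
```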
This paper proposes a method to train weight-quantized neural networks. The authors propose to directly calculate the endpoints that minimize the quantization error according to the weight distribution of each layer. Empirical results on image classification tasks and object detection tasks show that the proposed method outperforms other compared weight quantization methods under the same number of bits.
SP:1d5d07627d5218eea719362a51fd1175bc2f841e
Understanding the effects of data parallelism and sparsity on neural network training
1 INTRODUCTION . Data parallelism is a straightforward and common approach to accelerate neural network training by processing training data in parallel using distributed systems. Being model-agnostic, it is applicable to training any neural network, and in synchronized settings the degree of parallelism equates to the mini-batch size, in contrast to other forms of parallelism such as task or model parallelism. While its utility has attracted much attention in recent years, distributing and updating large network models at distributed communication rounds still remains a bottleneck (Dean et al., 2012; Hoffer et al., 2017; Goyal et al., 2017; Smith et al., 2018; Shallue et al., 2019; Lin et al., 2020). Meanwhile, diverse approaches to compressing such large network models have been developed, and network pruning, the sparsification process that zeros out many parameters of a network to reduce the computation and memory associated with these zero values, has been widely employed (Reed, 1993; Han et al., 2015). In fact, recent studies discovered that pruning can be done at initialization, prior to training (Lee et al., 2019; Wang et al., 2020); by separating the training process from pruning entirely, this not only saves tremendous time and effort in finding trainable sparse networks, but also facilitates the analysis of pruned sparse networks in isolation. Nevertheless, there has been little study of the subsequent training of these sparse networks, and various aspects of the optimization of sparse networks remain rather unknown as of yet. In this work, we focus on studying data parallelism and sparsity (for the purpose of this work, we equate data parallelism and sparsity with increasing batch size and pruning model parameters, respectively; we explain this in more detail in Appendix E), and we provide clear explanations of their effects on neural network training. Despite a surge of recent interest in their complementary benefits in modern deep learning, there is a lack of fundamental understanding of their effects. For example, Shallue et al. (2019) provide comprehensive yet purely empirical evaluations of the effect of data parallelism, while Zhang et al. (2019) use a simple noisy quadratic model to describe the effect; for sparsity, Lee et al. (2020) approach the difficulty of training under sparsity solely from the perspective of initialization. In this regard, we first accurately measure their effects by performing an extensive metaparameter search independently for each and every study case of batch size and sparsity level. As a result, we find a general scaling trend as the effect of data parallelism in training sparse neural networks, across varying sparsity levels and workloads of data set, model, and optimization algorithm. Also, the critical batch size turns out to be no smaller for sparse networks, despite the general difficulty of training them. We formalize our observation and theoretically prove the effect of data parallelism based on the convergence properties of generalized stochastic gradient methods, irrespective of sparsity level. We take this result further to understand the effect of sparsity based on Lipschitz smoothness analysis, and we find that pruning results in a sparse network whose gradient changes relatively too quickly.
Notably, this result is developed under standard assumptions used in the optimization literature and applies generally to training with any stochastic gradient method, nonconvex objective, and learning rate schedule. Being precise and general, our results could help in understanding the effects of data parallelism and sparsity on neural network training.

2 SETUP . We follow closely the experimental settings used in Shallue et al. (2019). We describe more details, including the scale of our experiments, in Appendix B, and provide additional results in Appendix D. The code can be found here: https://github.com/namhoonlee/effect-dps-public

Experiment protocol. For a given workload (data set, network model, optimization algorithm) and study setting (batch size, sparsity level), we measure the number of training steps required to reach a predefined goal error. We repeat this process for a budget of runs while searching for the best metaparameters involved in the optimization (e.g., learning rate, momentum), so as to record the lowest number of steps, namely steps-to-result, as our primary quantity of interest. To this end, we regularly evaluate intermediate models on the entire validation set for each training run.

Workload and study. We consider workloads given by combinations of the following: (data set) MNIST, Fashion-MNIST, CIFAR-10; (network model) Simple-CNN, ResNet-8; (optimization algorithm) SGD, Momentum, Nesterov, with either a fixed or a decaying learning rate schedule. For the study setting, we consider batch sizes from 2 up to 16384 and sparsity levels from 0% to 90%.

Metaparameter search. We perform a quasi-random search to tune metaparameters efficiently. More precisely, we first generate Sobol low-discrepancy sequences in a unit hypercube and convert them into the metaparameters of interest, taking into account a predefined search space for each metaparameter. For each metaparameter, the number of generated values equals the budget of trials, and the search space is designed based on preliminary experimental results.

Pruning. Sparse networks can be obtained in many different ways, yet for the purpose of this work they must not undergo any training beforehand, so that we can measure the effects of data parallelism when training from scratch. Recent pruning-at-initialization approaches satisfy this requirement, and we adopt the connection sensitivity criterion of Lee et al. (2019) to obtain sparse networks.

3 EXPERIMENTAL RESULTS .

3.1 MEASURING THE EFFECT OF DATA PARALLELISM . First of all, in each and every sparsity level across different workloads, we observe a general scaling trend in the relationship between batch size and steps-to-result (see the 1st and 2nd columns in Figure 1): initially, there is a period of linear scaling where doubling the batch size halves the steps needed to achieve the goal error (i.e., the curve aligns closely with the dashed line), followed by a region of diminishing returns where increasing the batch size reduces the required number of steps by less than the inverse proportional amount (i.e., the curve starts to digress from the linear scaling region), which eventually arrives at a regime of maximal data parallelism (i.e., the curve hits a plateau) where increasing the batch size no longer decreases the steps needed to reach the goal error.
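The three regimes just described can be read off a steps-to-result curve programmatically. The sketch below uses made-up curve data; the 10% deviation threshold that marks the end of linear scaling is an assumption for illustration, not a value from the paper:

```python
import numpy as np

# Hypothetical steps-to-result at doubling batch sizes (illustrative numbers).
batch_sizes = np.array([2, 4, 8, 16, 32, 64, 128, 256, 512])
steps = np.array([80000, 40000, 20000, 10200, 5600, 3400, 2500, 2300, 2250])

# Under perfect linear scaling, steps[i+1] == steps[i] / 2.
ratios = steps[1:] / steps[:-1]
deviation = ratios / 0.5 - 1.0          # 0 means perfect halving

for b, d in zip(batch_sizes[1:], deviation):
    regime = "linear" if d < 0.10 else "diminishing returns"
    print(f"batch {b:4d}: deviation {d:+.2f} -> {regime}")
# The plateau (maximal data parallelism) shows up as ratios approaching 1.0.
```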
The same trend is observed across various workloads of data set, network model, and optimization algorithm, as well as for different goal errors (see Appendix D). We note that our observation is consistent with the results for regular network training presented in Shallue et al. (2019) and Zhang et al. (2019). When we put the results for all sparsity levels together, we observe that training a sparser network takes longer; a data parallelism curve for higher sparsity usually lies above that for lower sparsity (see the 3rd column in Figure 1). For example, compared to the case of sparsity 0% (i.e., the dense, over-parameterized network), the 90% sparse network takes about 2 to 4 times more training steps, consistently across different batch sizes (see Figure 12 in Appendix D.1 for more precise comparisons). Recall that we tune all metaparameters independently for each and every study case of batch size and sparsity level, without relying on a single predefined training rule, in order to find the best steps-to-result. This result therefore corroborates the general difficulty of training sparse neural networks, in contrast to the relative ease of training overly parameterized ones. On the other hand, when we normalize the y-axis of each plot by dividing by the number of steps at the first batch size, the phase transitions become clearer. We find that the regions of diminishing returns and maximal data parallelism appear no earlier when training sparse networks than when training the dense network (see the 4th column in Figure 1). This is quite surprising: one could easily have guessed that the general optimization difficulty incurred by sparsification would influence data parallelism too, at least to some degree; it turns out, however, that the effects of data parallelism on sparse network training remain no worse than in the dense case. Moreover, in many cases the breakdown of the linear scaling regime occurs at much larger batch sizes for higher sparsity; this is especially evident for the Momentum and Nesterov optimizers (e.g., compare training the 90% sparse network using Momentum against the 0% dense network). In other words, for sparse networks the critical batch size can be larger, and hence, when it comes to training sparse neural networks, one can increase the batch size (or design a parallel computing system for distributed optimization) more effectively, while better exploiting the given resources. We find this result particularly promising, since SGD with momentum is often the method of choice in practice. We further show that the capability of momentum optimizers to exploit large batch sizes holds across different sparsity levels by displaying all plots together in Figure 2. Overall, we believe it is important to confirm the robustness of data parallelism in sparse neural network training, which has been unknown thus far and is difficult to estimate a priori.

3.2 ANALYZING METAPARAMETER SEARCH . In this section, we analyze the metaparameter search used to measure the effect of data parallelism. We specifically investigate the workload {MNIST, Simple-CNN, Momentum}, where there are two metaparameters to tune (i.e., learning rate and momentum), so that all metaparameters can be visualized easily in a 2D figure (see Appendix D.2 for other results).
The results are presented in Figure 3, and we summarize our key findings below: • Our quasi-random search samples metaparameters efficiently, so that they are distributed evenly (without being cluttered in log-space) and flexibly (rather than sitting on a grid with fixed spacing) within the search spaces. Also, the best metaparameters yielding the lowest steps (marked by a gold star in the figure) are located in the middle of the search ranges rather than at the search boundaries, across different batch sizes and sparsity levels. This means that our experiments are designed reasonably well and the results are reliable. • There are two distinct regions (complete and incomplete runs, marked by different symbols in the figure), separated by a seemingly linear boundary in the relationship between learning rate and momentum. This indicates that the optimization is governed by an interplay between these two metaparameters; if one metaparameter is not chosen carefully with respect to the other (e.g., increasing the learning rate for a fixed momentum), the optimizer may get stuck in a region, spend time oscillating, and eventually yield an incomplete run. This highlights the importance of performing metaparameter search, expensive as it is, rather than relying on predetermined heuristic training strategies, in order to accurately measure the effect of data parallelism and avoid potentially suboptimal results. • The successful region (filled with blue circles) becomes larger with increasing batch size, showing that large-batch training reaches a given goal error in fewer training iterations than small-batch training and hence yields more complete runs. Notably, the best learning rate tends to increase with increasing batch size too. This aligns well with the classic result in learning theory that large-batch training allows bigger learning rates (Robbins & Monro, 1951; Bottou, 1998; Krizhevsky, 2014).
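The quasi-random search described in Section 2 is straightforward to reproduce with an off-the-shelf Sobol generator. The sketch below maps unit-hypercube points to a log-uniform learning rate and a momentum range; the ranges and budget are assumptions standing in for the paper's tuned search spaces:

```python
import numpy as np
from scipy.stats import qmc

budget = 64                                  # number of trials (assumed)
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
u = sampler.random(budget)                   # low-discrepancy points in the unit square

# Convert to metaparameters over assumed search spaces:
lr = 10 ** (u[:, 0] * 4.0 - 4.0)                      # log-uniform in [1e-4, 1e0)
momentum = 1 - 10 ** (-3.0 * u[:, 1])                 # in [0, 0.999)

for trial, (l, m) in enumerate(zip(lr, momentum)):
    pass  # train(workload, batch_size, sparsity, lr=l, momentum=m); record steps-to-result
```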
The manuscript studies the effect of batch size at different sparsity levels (achieved by applying connection sensitivity pruning) on the required number of optimisation steps to reach a certain accuracy. The goal is to understand the interplay between those fundamental parameters. The empirical evaluation is performed for different triples of dataset, network architecture and optimisation scheme. The theoretical analysis is based on established bounds on the expected gradient norm.
SP:1a2c88b471d463d79a172c254483d8c92314fe3b
Contrastive Code Representation Learning
1 INTRODUCTION . Programmers increasingly rely on machine-aided programming tools to support software development (Kim et al., 2012). However, the wide diversity of programs encountered in practice limits the generalization of hand-written rules. Catching semantic bugs such as naming errors requires deeper language understanding, motivating learning-based programming tools. Recent work uses machine learning for bug detection (Pradel & Sen, 2018) and optimization (Mendis et al., 2019). Consider predicting the type of the variable declaration "var median = ...;". Static analysis fails as the type is underspecified, but the variable name indicates that the value is a float. Programming language datasets suffer from scarce annotations due to the time and expertise required to label them. State-of-the-art approaches generally rely on either (1) synthetic supervised datasets or (2) self-supervised pre-training. Synthetic auto-generated labels have been used for method naming (Alon et al., 2019a;b) and bug detection (Ferenc et al., 2018; Benton et al., 2019; Pradel & Sen, 2018). However, synthetic code datasets suffer from duplication issues (Allamanis, 2019) and biases (Shin et al., 2019) that degrade generalization. Moreover, auto-generated data do not cover the diverse program behaviors encountered in the wild. In contrast, self-supervised learning can leverage large open-source repositories such as GitHub with limited or no annotations. Inspired by the success of pre-training in natural language processing, recent work uses self-supervision to learn code representations. Authors have explored context-based token embeddings (Ben-Nun et al., 2018) and masked language modeling, where tokens are corrupted and reconstructed (Feng et al., 2020; Kanade et al., 2020). However, reconstruction focuses on superficial language reasoning and does not explicitly address the underlying program functionality; the resulting models attend to program implementation specifics such as variable names. We hypothesize that programs with the same functionality should have the same underlying representation for downstream code understanding tasks, a principle illustrated in Fig. 1. While it is time-intensive to identify equivalent programs in a large corpus, it is cheap to leverage static compiler transformations to automatically generate many equivalent versions of a particular source program. In this work, we develop ContraCode, a self-supervised representation learning algorithm that uses source-to-source compiler transformation techniques (e.g., dead code elimination, obfuscation, and constant folding) to generate syntactically diverse but functionally equivalent programs. ContraCode uses these equivalent programs to construct a challenging discriminative pretext task that requires the model to identify equivalent programs out of a large dataset of distractors. In doing so, it has to embed the functionality, not the form, of the code. In essence, the domain knowledge encoded in our code transformations induces knowledge of program structure in the learned representations. The contributions of our work include: 1. the novel use of compiler-inspired transformations as data augmentations for code, 2. the concept of program representation learning based on functional equivalence, and 3.
a detailed analysis of architectures, code transforms, and pre-training strategies, where ContraCode improves static type inference top-1 accuracy by 9%, learned type inference by 2%-13%, summarization F1 score by up to 8%, and clone detection AUROC by 5%-10%.

2 RELATED WORK . Self-supervised learning (SSL) is a general representation learning strategy where some dimensions or attributes of a datapoint are predicted from the remaining parts. These methods are unsupervised in the sense that they do not rely on labels, but SSL tasks often adapt losses and architectures designed for supervised learning. Self-supervised pre-training has yielded large improvements in both NLP (Howard & Ruder, 2018; Devlin et al., 2018; Radford et al., 2018; 2019) and computer vision (Mahajan et al., 2018) by improving generalization (Erhan et al., 2010; Hao et al., 2019). Weak visual features, such as orientation (Gidaris et al., 2018), color (Zhang et al., 2016), and context (Pathak et al., 2016), are meaningful signals for representations (Mahajan et al., 2018). Contrastive learning unifies many past SSL approaches that compare pairs or collections of similar and dissimilar items (Hadsell et al., 2006). Rather than training the network to predict labels or reconstruct data, contrastive methods minimize the distance between the representations of similar examples (positives) while maximizing the distance between dissimilar examples (negatives). Examples include Siamese networks (Bromley et al., 1994) and triplet losses (Schroff et al., 2015). Contrastive predictive coding (Oord et al., 2018; Hénaff et al., 2019) learns to encode chunks of sequential data to predict future chunks with the InfoNCE loss, a variational lower bound on the mutual information between views of the data (Tian et al., 2019; Wu et al., 2020), inspired by noise-contrastive estimation (Gutmann & Hyvärinen, 2010). In instance discrimination tasks (Wu et al., 2018), views of an entire image, rather than pieces of it, are compared. SimCLR (Chen et al., 2020a) and Momentum Contrast (He et al., 2019; Chen et al., 2020b) recently made progress by using many negatives for a dense loss signal. Beyond images, InfoNCE has been applied to NLP (Chuang et al., 2020; Giorgi et al., 2020), but may require supervision (Fang & Xie, 2020).

Code representation learning. There has been substantial work on architectures and tasks for machine learning on code (Allamanis et al., 2018). [Figure 2: a JavaScript method from the unlabeled training set, shown with two automatically generated, semantically equivalent variants: one with renamed variables, explicit object style, explicit concatenation, and an inlined conditional, and one with mangled identifiers and compressed whitespace. The original method is from the StackEdit Markdown editor.] We adopt the summarization task of Alon et al. (2019a) and the variable type inference task of DeepTyper (Hellendoorn et al.
, 2018 ) . Other authors have explored summarization ( Movshovitz-Attias & Cohen , 2013 ; Allamanis et al. , 2016 ; Iyer et al. , 2016 ) and type inference ( Pradel et al. , 2019 ; Pandi et al. , 2020 ; Wei et al. , 2020 ; Allamanis et al. , 2020 ; Bielik & Vechev , 2020 ) with different languages and datasets . The tree or graph structure of code can be exploited to encode invariances in the representation . Inst2vec ( Ben-Nun et al. , 2018 ) locally embeds individual statements in LLVM IR by processing a contextual flow graph with a context prediction objective ( Mikolov et al. , 2013 ) . Tree-Based CNN embeds the Abstract Syntax Tree ( AST ) nodes of high-level source code . Code2seq ( Alon et al. , 2019a ) embeds AST paths with an attention-based encoder and LSTM decoder for supervised sequence-to-sequence tasks . Kanade et al . ( 2020 ) ; Feng et al . ( 2020 ) pre-train the Transformer ( Vaswani et al. , 2017 ) on code using the masked language modeling objective ( Devlin et al. , 2018 ) , an instance of the cloze task ( Taylor , 1953 ) where the model reconstructs corrupted tokens . Recurrent networks have also been pre-trained on code ( Hussain et al. , 2020 ) as language models ( Peters et al. , 2018 ; Karampatsis & Sutton , 2020 ) . Wang & Christodorescu ( 2019 ) ; Wang & Su ( 2019 ) assess the stability of program analyzers under semi-automated program transformations . Concurrent work by Rabin & Alipour ( 2020 ) found that code2vec and code2seq often change their classifications when statements are permuted , variables are renamed , or other-semantic preserving transformations are applied . 3 METHOD : CONTRASTIVE CODE REPRESENTATION LEARNING . Understanding program functionality and global structure is important for difficult tasks like summarizing code in natural language . For these problems , learned code representations should be similar for functionally equivalent programs and dissimilar for non-equivalent programs ( Figure 1 ) . The principle of contrastive learning offers a simple objective for learning such representations if data can be organized into pairs of positives and negatives . We use each pair to shape representation space , drawing positives together and pushing negatives apart . However , a major question remains : given an unlabeled corpus of programs , how do we identify or generate similar programs ? We address this question in Sec . 3.1 , then introduce our learning framework in Sec . 3.2 . 3.1 COMPILATION AS DATA AUGMENTATION . Modern programming languages afford great flexibility to software developers , allowing them to implement the same desired functionality in different ways . Crowdsourced datasets mined from developers , such as GitHub repositories , have many near-duplicates in terms of textual similarity ( Allamanis , 2019 ) , and are bound to contain even more functional equivalences for common tasks . Satisfiability solvers can identify these equivalent programs ( Joshi et al. , 2002 ; Bansal & Aiken , 2006 ) , but functional equivalence is also undecidable in general ( Rice , 1953 ) . Also , formal documentation of semantics is required . Programs can instead be compared approximately using test-cases ( Massalin , 1987 ) , but this is costly and requires executing untrusted code . Instead of searching for equivalences , we propose correct by construction data augmentation . Our insight is to apply source-to-source compiler transformations to unlabeled code to generate many variants with the same functionality . 
For example, dead-code elimination (DCE) is a common compiler optimization that removes operations that leave the output of a function unchanged. While DCE preserves program functionality, Wang & Christodorescu (2019) find that up to 12.7% of the predictions of current algorithm classification models change after DCE: supervised datasets were not enough to acquire the domain knowledge that DCE does not matter.

Table 1: We augment programs with 11 automated source-to-source compiler transformations; 10 of the 11 transformations are correct-by-construction and do not modify operational semantics. More details are in Section A.3.
  Code compression: Reformatting (R) ✓; Beautification (B) ✓; Compression (C) ✓; Dead-code elimination (DCE) ✓; Type upconversion (T) ✓; Constant folding (CF) ✓.
  Identifier modification: Variable renaming (VR) ✓; Identifier mangling (IM) ✓.
  Regularization: Dead-code insertion (DCI) ✓; Subword regularization (SW) ✓; Line subsampling (LS) ✗.
  (✓ = semantics-preserving transformation, ✗ = lossy transformation)

A particular source code sequence, e.g., "W * x + b", is parsed unambiguously into a tree-structured representation "(+ (* W x) b)". This tree is then transformed by automated traversal algorithms. A rich body of prior programming languages work explores parsing and then transforming Abstract Syntax Trees to optimize a program prior to machine code generation; if source code is output rather than machine code, this is called source-to-source transformation. Source-to-source transformations are common for optimization and obfuscation purposes in dynamic languages like JavaScript. If each transformation preserves code functionality, then any composition of transformations also preserves code functionality. We leverage the Babel and Terser compiler infrastructure tools for JavaScript (McKenzie et al., 2020; Santos et al., 2020) to parse code into an Abstract Syntax Tree (AST) and then perform correctness-preserving transformations on method bodies. Table 1 and Appendix A.3 list all transformations; broadly, we group program transformations into three categories. Code compression changes the syntactic structure of code and performs correct-by-construction transformations such as precomputing constant expressions at compile time. Identifier modification substitutes method and variable names with random tokens, thereby masking part of the semantic information in programs. Finally, transformations for regularization improve model generalization by reducing the number of trivial positive pairs with high textual overlap; this group can modify program semantics through the line subsampling pass.
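The pretext task pairs each program with its transformed variants and trains the encoder to pick the matching variant out of a batch of distractors. A minimal InfoNCE-style sketch is below; the encoder, batch construction, and temperature are placeholders, not the paper's exact architecture or hyperparameters:

```python
import torch
import torch.nn.functional as F

def info_nce(z_query, z_key, temperature=0.07):
    """Contrastive loss: z_query[i] should match z_key[i] against all other keys.

    z_query, z_key: (N, D) embeddings of a program and an equivalent variant,
    e.g. an original method and its compiler-transformed version.
    """
    q = F.normalize(z_query, dim=1)
    k = F.normalize(z_key, dim=1)
    logits = q @ k.t() / temperature        # (N, N) similarity matrix
    labels = torch.arange(q.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Usage sketch (encoder, tokenize, and transform are assumed to exist):
# z_q = encoder(tokenize(programs))               # original methods
# z_k = encoder(tokenize(transform(programs)))    # equivalent variants
# loss = info_nce(z_q, z_k)
```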
This paper studies self-supervised learning of functional code representations and proposes a method called ContraCode. ContraCode uses functionality-invariant code transformations to generate positive pairs from the same program and negative pairs from different programs; these pairs are then used for contrastive pre-training. Experimental results on two tasks are reported.
SP:e30f87da31dcb7e7ee9dd0abd503731d11d5160a
Evaluating representations by the complexity of learning low-loss predictors
1 INTRODUCTION . One of the first steps in building a machine learning system is selecting a representation of data . Whereas classical machine learning pipelines often begin with feature engineering , the advent of deep learning has led many to argue for pure end-to-end learning where the deep network constructs the features ( LeCun et al. , 2015 ) . However , huge strides in unsupervised learning ( Hénaff et al. , 2019 ; Chen et al. , 2020 ; He et al. , 2019 ; van den Oord et al. , 2018 ; Bachman et al. , 2019 ; Devlin et al. , 2019 ; Liu et al. , 2019 ; Raffel et al. , 2019 ; Brown et al. , 2020 ) have led to a reversal of this trend in the past two years , with common wisdom now recommending that the design of most systems start from a pretrained representation . With this boom in representation learning techniques , practitioners and representation researchers alike have the question : Which representation is best for my task ? This question exists as the middle step of the representation learning pipeline . The first step is representation learning , which consists of training a representation function on a training set using an objective which may be supervised or unsupervised . The second step , which this paper considers , is representation evaluation . In this step , one uses a measure of representation quality and a labeled evaluation dataset to see how well the representation performs . The final step is deployment , in which the practitioner or researcher puts the learned representation to use . Deployment could involve using the representation on a stream of user-provided data to solve a variety of end tasks ( LeCun , 2015 ) , or simply releasing the trained weights of the representation function for general use . In the same way that BERT ( Devlin et al. , 2019 ) representations have been applied to a whole host of problems , the task or amount of data available in deployment might differ from the evaluation phase . We take the position that the best representation is the one which allows for the most efficient learning of a predictor to solve the task . We will measure efficiency in terms of either number of samples or information about the optimal predictor contained in the samples . This position is motivated by practical concerns ; the more labels that are needed to solve a task in the deployment phase , the more expensive to use and the less widely applicable a representation will be . We build on a substantial and growing body of literature that attempts to answer the question of which representation is best . Simple , traditional means of evaluating representations , such as the validation accuracy of linear probes ( Ettinger et al. , 2016 ; Shi et al. , 2016 ; Alain & Bengio , 2016 ) , have been widely criticized ( Hénaff et al. , 2019 ; Resnick et al. , 2019 ) . Instead , researchers have taken up a variety of alternatives such as the validation accuracy ( VA ) of nonlinear probes ( Conneau et al. , 2018 ; Hénaff et al. , 2019 ) , mutual information ( MI ) between representations and labels ( Bachman et al. , 2019 ; Pimentel et al. , 2020 ) , and minimum description length ( MDL ) of the labels conditioned on the representations ( Blier & Ollivier , 2018 ; Yogatama et al. , 2019 ; Voita & Titov , 2020 ) . We find that these methods all have clear limitations . As can be seen in Figure 1 , VA and MDL are liable to choose different representations for the same task when given evaluation datasets of different sizes . 
Instead, we want an evaluation measure that depends on the data distribution, not on a particular dataset or dataset size. Furthermore, VA and MDL lack a predefined notion of success in solving a task. In combination with small evaluation datasets, these measures may lead to premature evaluation by producing a judgement even when there is not enough data to solve the task or to meaningfully distinguish one representation from another. Meanwhile, MI measures the lowest loss achievable by any predictor, irrespective of the complexity of learning it. We note that while these methods do not correspond to our notion of the best representation, they may be correct for different notions of "best". To eliminate these issues, we propose two measures. In both of our measures, the user must specify a tolerance ε, so that a population loss of less than ε qualifies as solving the task. The first measure is the surplus description length (SDL), which modifies the MDL to measure the complexity of learning an ε-loss predictor rather than the complexity of the labels in the evaluation dataset. The second is the ε-sample complexity (εSC), which measures the sample complexity of learning an ε-loss predictor. To facilitate our analysis, we also propose a framework called the loss-data framework, illustrated in Figure 1, which plots the validation loss against the evaluation dataset size (Talmor et al., 2019; Yogatama et al., 2019; Voita & Titov, 2020). This framework simplifies comparisons between measures. Prior work measures integrals (MDL) and slices (VA and MI) along the data axis. Our work proposes instead measuring integrals (SDL) and slices (εSC) along the loss axis. This illustrates how prior work makes tacit choices about the function to learn based on the choice of dataset size; our work instead makes an explicit, interpretable choice of threshold ε and measures the complexity of solving the task to ε error. We experimentally investigate the behavior of these methods, illustrating the sensitivity of VA and MDL, and the robustness of SDL and εSC, to dataset size.

Efficient implementation. To enable reproducible and efficient representation evaluation for representation researchers, we have developed a highly optimized open-source Python package (see supplementary materials). This package enables the construction of loss-data curves with arbitrary representations and datasets, and it is library-agnostic, supporting representations and learning algorithms implemented in any Python ML library. By leveraging the JAX library (Bradbury et al., 2018) to parallelize the training of probes on a single accelerator, our package constructs loss-data curves in around two minutes on one GPU.

2 THE LOSS-DATA FRAMEWORK FOR REPRESENTATION EVALUATION . In this section we formally present the representation evaluation problem, define our loss-data framework, and show how prior work fits into the framework.

Notation. We use bold letters to denote random variables. A supervised learning problem is defined by a joint distribution D over observations and labels (X, Y) in the sample space X × Y, with density denoted by p. Let the random variable Dn be a sample of n i.i.d. (X, Y) pairs, realized by Dn = (X^n, Y^n) = {(x_i, y_i)}_{i=1}^{n}. Let R denote a representation space and φ : X → R a representation function.
The methods we consider all use parametric probes, which are neural networks p̂_θ : R → P(Y), parameterized by θ ∈ R^d, that are trained on Dn to estimate the conditional distribution p(y | x). We often abstract away the details of learning the probe by simply referring to an algorithm A which returns a predictor: p̂ = A(φ(Dn)). Abusing notation, we denote the composition of A with φ by Aφ. Define the population loss and the expected population loss for p̂ = Aφ(Dn), respectively, as

L(Aφ, Dn) = E_{(X,Y)}[ − log p̂(Y | X) ],   L(Aφ, n) = E_{Dn}[ L(Aφ, Dn) ].   (1)

In this section we focus on population quantities, but note that any algorithmic implementation must replace them by their empirical counterparts.

The representation evaluation problem. The representation evaluation problem asks us to define a real-valued measurement of the quality of a representation φ for solving the task defined by (X, Y). Explicitly, each method defines a real-valued function m(φ, D, A, Ψ) of a representation φ, a data distribution D, a probing algorithm A, and a method-specific set of hyperparameters Ψ. By convention, smaller values of the measure m correspond to better representations. Defining such a measurement allows us to compare different representations.

2.1 DEFINING THE LOSS-DATA FRAMEWORK . The loss-data framework is a lens through which we contrast different measures of representation quality. The key idea, demonstrated in Figure 1, is to plot the loss L(Aφ, n) against the dataset size n. Explicitly, at each n, we train a probing algorithm A using a representation φ to produce a predictor p̂, and then plot the loss of p̂ against n. Similar analysis has appeared in Voita & Titov (2020); Yogatama et al. (2019); Talmor et al. (2019). We can represent each of the prior measures as points on the curve at fixed x (VA, MI) or integrals of the curve along the x-axis (MDL). Our measures correspond to evaluating points at fixed y (εSC) and integrals along the y-axis (SDL).

2.2 EXISTING METHODS IN THE LOSS-DATA FRAMEWORK .

Nonlinear probes with limited data. A simple strategy for evaluating representations is to choose a probe architecture and train it on a limited amount of data from the task and representation of interest (Hénaff et al., 2019; Zhang & Bowman, 2018). On the loss-data curve, this corresponds to evaluation at x = n, so that

m_VA(φ, D, A, n) = L(Aφ, n).   (2)

Mutual information. The mutual information (MI) between a representation φ(X) and targets Y is another often-proposed metric for learning and evaluating representations (Pimentel et al., 2020; Bachman et al., 2019). In terms of entropy, mutual information is equivalent to the information gain about Y from knowing φ(X):

I(φ(X); Y) = H(Y) − H(Y | φ(X)).   (3)

In general, mutual information is intractable to estimate for high-dimensional or continuous-valued variables (McAllester & Stratos, 2020), and a common approach is to use a very expressive model for p̂ and maximize a variational lower bound:

I(φ(X); Y) ≥ H(Y) + E_{(X,Y)}[ log p̂(Y | φ(X)) ].   (4)

Since H(Y) is not a function of the parameters, maximizing the lower bound is equivalent to minimizing the negative log-likelihood. Moreover, if we assume that p̂ is expressive enough to represent p and take n → ∞, this inequality becomes tight.
As such, MI estimation can be seen as a special case of nonlinear probing as described above, where instead of choosing some particular setting of n we push it to infinity. We formally define the mutual information measure of a representation as

m_MI(φ, D, A) = lim_{n→∞} L(Aφ, n).   (5)

A decrease in this measure reflects an increase in the mutual information. On the loss-data curve, this corresponds to evaluation at x = ∞.

Minimum description length. Recent studies (Yogatama et al., 2019; Voita & Titov, 2020) propose using the Minimum Description Length (MDL) principle (Rissanen, 1978; Grünwald, 2004) to evaluate representations. These works use an online or prequential code (Blier & Ollivier, 2018) to encode the labels given the representations. The codelength ℓ of Y^n given φ(X^n) is then defined as

ℓ(Y^n | φ(X^n)) = − Σ_{i=1}^{n} log p̂_i(y_i | φ(x_i)),   (6)

where p̂_i is the output of running a pre-specified algorithm A on the dataset up to element i: p̂_i = Aφ(X_{1:i}, Y_{1:i}). Taking an expectation over the sampled datasets for each i, we define a population variant of the MDL measure (Voita & Titov, 2020) as

m_MDL(φ, D, A, n) = E[ ℓ(Y^n | φ(X^n)) ] = Σ_{i=1}^{n} L(Aφ, i).   (7)

Thus, m_MDL measures the area under the loss-data curve on the interval x ∈ [0, n].
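Given a tabulated loss-data curve, all of these measures reduce to slices and integrals along different axes. The sketch below computes VA, a discrete MDL-style sum, and loss-axis analogues of SDL and εSC from such a curve; the curve values and the tolerance ε are illustrative assumptions:

```python
import numpy as np

# Hypothetical loss-data curve: expected population loss after training a
# probe on n examples (illustrative values, decreasing in n).
ns = np.array([1, 2, 4, 8, 16, 32, 64, 128])
loss = np.array([2.3, 1.9, 1.4, 1.0, 0.7, 0.5, 0.42, 0.40])
eps = 0.5  # tolerance: population loss at or below eps counts as "task solved"

va = loss[ns == 64][0]                       # slice at a fixed n (eq. 2)
mdl = np.sum(loss)                           # area along the data axis (cf. eq. 7),
                                             # treating ns as the prequential steps
sdl = np.sum(np.maximum(loss - eps, 0.0))    # surplus description length:
                                             # only the loss in excess of eps counts
esc = ns[np.argmax(loss <= eps)]             # eps-sample complexity: first n whose
                                             # loss reaches the tolerance
print(va, mdl, sdl, esc)
```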
The submission addresses the problem of representation evaluation from the perspective of efficient learning of downstream predictors. Leveraging the introduced loss-data curve framework, the paper studies and demonstrates the limitations of the existing methods in terms of their implicit dependency on evaluation dataset size. Motivated by practicality and interpretability of the measures for choosing the best representations, the paper introduces two novel methods, $\epsilon$ sample complexity ($\epsilon$SC) and surplus description length (SDL), which are well-motivated and supported both theoretically and empirically. The paper also delivers efficient implementation.
SP:5682a82e8671bdd5dee966273b981f63b4eebf2d
An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder
Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., the input follows the probability distribution of the training set. Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless. Classification-based methods have been proposed for OOD detection; however, in this study we show that this type of method has no theoretical guarantee and is practically breakable by our OOD Attack algorithm, because of the dimensionality reduction in DNN models. We also show that Glow likelihood-based OOD detection is breakable as well.

1 INTRODUCTION . Deep neural networks (DNNs), especially convolutional neural networks (CNNs), have become the method of choice for image classification. Under the i.i.d. (independent and identically distributed) assumption, a high-performance DNN model can correctly classify an input sample as long as the sample is "generated" from the distribution of the training data. If an input sample is not from this distribution, i.e., it is Out-Of-Distribution (OOD), then the predicted class label from the model is meaningless. It would be desirable for the model to have the ability to distinguish OOD samples from in-distribution samples. OOD detection is needed especially when applying DNN models in life-critical applications, e.g., vision-based self-driving or image-based medical diagnosis. It was shown by Nguyen et al. (2015) that DNN classifiers can be easily fooled by OOD data, and an evolutionary algorithm was used to generate OOD samples on which DNN classifiers had high output confidence. Since then, many methods have been proposed for OOD detection using classifiers or encoders (Hendrycks & Gimpel, 2017; Hendrycks et al., 2019; Liang et al., 2018; Lee et al., 2018b;a; Alemi et al., 2018). For instance, Hendrycks & Gimpel (2017) show that a classifier's prediction probabilities for OOD examples tend to be more uniform, and therefore the maximum predicted class probability from the softmax layer was used for OOD detection. Regardless of the details of these methods, every method needs a classifier or an encoder which takes an image x as input and compresses it into a vector z in the latent space; after some further transform, z is converted to an OOD detection score τ. This computing process can be expressed as: z = f(x) and τ = d(z). To perform OOD detection, a detection threshold needs to be specified, and x is declared OOD if τ is smaller/larger than the threshold. For the evaluation of OOD detection methods (Hendrycks & Gimpel, 2017), an OOD detector is usually trained on one dataset (e.g., Fashion-MNIST as in-distribution) and then tested against another dataset (e.g., MNIST as OOD). As will be shown in this study, the above-mentioned classification-based OOD detection methods are practically breakable. As an example (more details in Section 3), we used the Resnet-18 model (He et al., 2016) pre-trained on the ImageNet dataset.
Let x_in denote a 224×224×3 image (an in-distribution sample) in ImageNet, and let x_out denote an OOD sample, which could be any kind of image (even random noise) not belonging to any category in ImageNet. Let z denote the 512-dimensional feature vector in Resnet-18, which is the input to the last fully-connected linear layer before the softmax operation. Thus, we have z_in = f(x_in) and z_out = f(x_out). In Fig. 1, x_in is the image of Santa Claus, and x_out could be a chest x-ray image or a random-noise image; "surprisingly", z_out ≅ z_in, which renders the OOD detection score useless: d(z_out) ≅ d(z_in). In Section 2, we introduce an algorithm to generate OOD samples such that z_out ≅ z_in. In Section 3, we show evaluation results on publicly available datasets, including an ImageNet subset, GTSRB, OCT, and COVID-19 CT. Since some generative models (e.g., Glow (Kingma & Dhariwal, 2018)) can approximate the distribution of training samples (i.e., p(x_in)), likelihood-based generative models have been utilized for OOD detection (Nalisnick et al., 2019). It has been shown that likelihoods derived from generative models may not distinguish between OOD and training samples (Nalisnick et al., 2019; Ren et al., 2019; Choi et al., 2018), and a fix could be to use a likelihood ratio instead of the raw likelihood score (Serrà et al., 2019). Although it is not the main focus of this study, we will show that an OOD sample's likelihood score from the Glow model (Kingma & Dhariwal, 2018; Serrà et al., 2019) can be arbitrarily manipulated by our algorithm (Section 2.1), such that the output probability p(x_in) ≅ p(x_out), which further diminishes the effectiveness of any Glow likelihood-based detection method.

2 METHODOLOGY .

2.1 OOD ATTACK ON DNN ENCODER . We introduce an algorithm to perform an OOD attack on a DNN encoder z = f(x), which takes an image x as input and transforms it into a feature vector z in a latent space. Preprocessing of x can be considered the very first layer inside the model f(x). The algorithm needs only the weak assumption that f(x) is sub-differentiable. A CNN classifier can be considered a composition of a feature encoder z = f(x) and a feature classifier p = g(z), where p is the softmax probability distribution over the classes. Consider an in-distribution sample x_in and an OOD sample x′_out, and apply the model: z_in = f(x_in) and z′_out = f(x′_out). Usually, z′_out ≠ z_in. However, if we add a relatively small amount of noise δ to x′_out, it may become possible that f(x′_out + δ) = z_in while x′_out + δ is still OOD. This idea is realized in Algorithm 1, OOD Attack on DNN Encoder.

Algorithm 1 OOD Attack on DNN Encoder
Input: an in-distribution sample x_in in a dataset; an OOD sample x′_out not similar to any sample in the dataset; f, the neural network feature encoder; ε, the maximum perturbation measured by Lp norm; N, the total number of iterations; α, the learning rate of the optimizer.
Output: an OOD sample x_out s.t. f(x_out) ≅ f(x_in).
Process:
1: Generate a random noise ξ with ||ξ|| ≤ ε
2: Initialize x_out = x′_out + ξ
3: Set up the loss J(x_out) = ||f(x_out) − f(x_in)||_2 (L2 norm)
4: for n from 1 to N do
5:   x_out ← clip(x_out − α · h(J′(x_out))), where J′(x) = ∂J/∂x
6: end for
Note: the clip operation ensures that ||x_out − x′_out||_p ≤ ε, and also ensures that pixel values stay within the feasible range (e.g., 0 to 1). If the L-inf norm is used, h(J′) is the sign function; if the L2 norm is used, h(J′) normalizes J′ by its L2 norm. The Adamax optimizer is used in the implementation.

The clip operation in Algorithm 1 is very important: it limits the difference between x_out and x′_out so that x_out remains OOD. The algorithm is inspired by the method called projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), which is used for adversarial attacks. We note that the term "adversarial attack" usually refers to adding a small perturbation to a clean sample x in a dataset such that a classifier will incorrectly classify the noisy sample while still being able to correctly classify the original clean sample x. Thus, OOD attack and adversarial attack are completely different things. In practice, Algorithm 1 can be repeated many times to find the best solution. Random initialization is performed in lines 1 and 2 of the algorithm: by adding initial random noise ξ to x′_out, the algorithm has a better chance of avoiding local minima caused by a bad initialization.

2.2 DIMENSIONALITY REDUCTION AND OOD ATTACK . Recall that in a classification-based OOD detection approach, a DNN encoder transforms the input into a feature vector, z = f(x), and an OOD detection score is computed by another transform on z, τ = d(z). If z_out ≅ z_in, then d(z_out) ≅ d(z_in), which breaks the OOD detector regardless of the transform d. Usually, a DNN encoder performs dimensionality reduction: the dimension of z is significantly smaller than the dimension of x. In the example shown in Fig. 1, z is a 512-dimensional feature vector (dim(z) = 512) in Resnet-18, and the dimension of x is 150528 (224 × 224 × 3). Dimensionality reduction in an encoder makes it possible for OOD and in-distribution samples to be mapped to the same locations in the latent space, simply because the vectors in a lower-dimensional space cannot represent all of the vectors/objects in a higher-dimensional space: the Pigeonhole Principle. Let us do an analysis of the Resnet-18 example in Fig. 1. A pixel of the color image x has 8 bits, so in the 150528-dimensional discrete input space there are 256^(224×224×3) different images/vectors, which defines the size of the input space. The float32 data type is usually used in computation, and a float32 variable can represent roughly 2^32 unique real numbers; thus, in the 512-dimensional latent space, there are 2^(32×512) unique vectors/objects, which defines the size of the latent space. The ratio is 2^(32×512) / 256^(224×224×3) ≪ 1, which shows that the latent space is dramatically smaller than the input space. Thus, for some sample x in the dataset, we can find another sample x′ such that f(x′) = f(x), as long as dim(z) < dim(x). A question arises: will x′ be in-distribution or OOD? To answer this question, let us partition the discrete input space Ω into two disjoint regions (Ω = Ω_in ∪ Ω_out), with Ω_in containing the in-distribution samples and Ω_out the OOD samples; |Ω| denotes the size of Ω. Usually, the training set is only a subset of Ω_in, and the size of Ω_out is significantly larger than the size of Ω_in. For example, if Ω_in is ImageNet, then Ω_out contains medical images, noise images, and other unrelated images. If Ω_in contains human face images, then Ω_out contains non-face images, and |Ω_in| ≪ |Ω_out|.
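Because these counts are astronomically large, the comparisons are easiest to verify in log scale. A quick arithmetic check of the exponents used above, before continuing to the latent-space partition:

```python
# Size of the latent space: 2^(32*512) float32 vectors in 512 dimensions.
log2_latent = 32 * 512            # = 16384 bits of distinct codes

# Size of the input space: 256^(224*224*3) possible 8-bit color images.
log2_input = 8 * 224 * 224 * 3    # = 1204224 bits

print(log2_latent, log2_input)    # 16384 vs 1204224
# The ratio |F| / |Omega| is 2^(16384 - 1204224) = 2^(-1187840), i.e. ~ 0:
# the latent space cannot cover even a vanishing fraction of the inputs.
```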
The latent space (z-space) is denoted by F and partitioned into two subspaces: F = F_in ∪ F_out. An encoder is applied such that Ω_in → F_in and Ω_out → F_out. If there is an overlap, F_in ∩ F_out ≠ ∅, then the encoder is vulnerable to OOD attack. Usually, the encoder is part of a classifier trained to classify in-distribution samples into different classes, and therefore the encoder cannot guarantee that there is no overlap between F_in and F_out. What is the size of F_in ∩ F_out, or what is the probability P(|F_in ∩ F_out| ≥ a)? While this is hard to calculate for an arbitrary encoder and dataset, we can do a worst-case-scenario analysis. Assuming that every OOD sample is i.i.d. mapped to the latent space with a uniform distribution over the |F| spots, the probability of the OOD samples covering the entire latent space is P(F_out = F) = |F|! × Stirling(|Ω_out|, |F|) / |F|^{|Ω_out|} → 1 as |F|/|Ω_out| → 0, where Stirling denotes the Stirling number of the second kind. Noting that |F|/|Ω_out| = 2^{32×512} / (256^{224×224×3} − 1.4×10^7) ≈ 0, with 1.4×10^7 being the number of samples in ImageNet, it could well be true that almost the entire latent space of Resnet-18 is covered (with probability close to 1) by the z vectors of OOD samples. Next, we discuss how to construct OOD samples to fool neural networks. First, consider a one-layer linear network, z = Wx, with the following notation: an in-distribution input x ∈ R^M, a latent code z ∈ R^K, and K ≪ M. W is a K × M matrix, and rank(W) ≤ K. The null space of W is Ω_null = {η : Wη = 0}. Now take the basis vectors of this space, η_1, η_2, ..., η_{M−K}, and compute x′ = x + Σ_i λ_i η_i, where each λ_i is a non-zero scalar. Obviously, z′ = Wx′ = z. We can set the magnitude of the "noise" Σ_i λ_i η_i to be arbitrarily large, such that x′ looks like garbage and becomes OOD, which is another explanation of the existence of OOD samples. We can then try to apply this attack method to a multi-layer neural network. If the network uses only ReLU activations, the input-output relationship can be exactly expressed as a piecewise-linear mapping (Ding et al., 2020), and a similar approach can be applied layer by layer; if ReLU is not used, a new method is needed. We note that the filter bank of a convolution layer can be converted to a weight matrix. We have examined the state-of-the-art CNN models pre-trained on ImageNet and available in PyTorch, and dimensionality reduction is performed in most of the layers (except one or two layers near the input), i.e., |F| ≤ |Ω_in| ≪ |Ω_out|. Instead of constructing an OOD sample by adding perturbations to an in-distribution sample, in Algorithm 1 we construct an OOD sample paired with an in-distribution sample by starting from an initial sample that is already OOD. Could an encoder be made robust to the OOD attack by including OOD samples in the training set for supervised binary classification (in vs. out)? Usually |Ω_in| ≪ |Ω_out|, so we would have to collect and label "enough" samples in Ω_out, which is infeasible considering the large size of Ω_out ≈ Ω. As a comparison, to enhance DNN classifier robustness against adversarial noise, it is very effective to include noisy samples in the training set, i.e., Ω_in = Ω_in_clean ∪ Ω_in_noisy; this is known as adversarial training (Goodfellow et al., 2018) and is computationally feasible since |Ω_in_noisy| ≪ |Ω_out|.
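A sketch of Algorithm 1 in PyTorch follows. It matches the description above (feature-matching loss, Adamax update, clipping back into the perturbation ball), but the encoder handle, ε, N, and α are placeholders, so treat it as an illustration rather than the authors' released implementation:

```python
import torch

def ood_attack(f, x_in, x_out_init, eps=0.05, steps=500, lr=0.01):
    """Perturb an OOD seed so its features match those of an in-distribution image.

    f: sub-differentiable encoder mapping images to feature vectors.
    eps: max L-inf perturbation around the OOD seed (assumed norm choice).
    """
    with torch.no_grad():
        z_in = f(x_in)                             # target feature vector
    xi = torch.empty_like(x_out_init).uniform_(-eps, eps)   # line 1: random init
    x = (x_out_init + xi).clamp(0, 1).requires_grad_(True)  # line 2
    opt = torch.optim.Adamax([x], lr=lr)
    for _ in range(steps):
        loss = (f(x) - z_in).pow(2).sum()   # J(x) = ||f(x) - f(x_in)||_2, squared
        opt.zero_grad()                     # (same minimizer as the L2 norm)
        loss.backward()
        opt.step()
        with torch.no_grad():               # clip back into the eps-ball and [0, 1]
            x.copy_((x_out_init + (x - x_out_init).clamp(-eps, eps)).clamp(0, 1))
    return x.detach()
```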
The paper presents a method that attacks existing out-of-distribution (OOD) detection methods. Most existing OOD detection methods perform detection using a latent representation. The main motivation of the paper is that the latent representation is much smaller than the input image space, which results in OOD and in-distribution images being mapped to the same place in the latent space, diminishing OOD detection performance. With this motivation, the proposed method perturbs input images to obtain an image whose latent representation is similar to that of an in-distribution image. Since such perturbations can be obtained for any OOD image, existing OOD detection algorithms fail to distinguish such OOD samples. The paper contains experiments on multiple datasets to demonstrate that the proposed method obtains latent representations similar to those of in-distribution images.
SP:193b21c862d83fd412bd5a07f49ca62e7285f62d
Inferring Principal Components in the Simplex with Multinomial Variational Autoencoders
INFERRING PRINCIPAL COMPONENTS IN THE SIMPLEX WITH MULTINOMIAL VARIATIONAL AUTOENCODERS . 1 ABSTRACT . Covariance estimation on high-dimensional data is a central challenge across multiple scientific disciplines. Sparse high-dimensional count data, frequently encountered in biological applications such as DNA sequencing and proteomics, are often well modeled using multinomial logistic normal models. In many cases, these datasets are also compositional, presented item-wise as fractions of a normalized total, due to measurement and instrument constraints. In compositional settings, three key factors limit the ability of these models to estimate covariance: (1) the computational complexity of inverting high-dimensional covariance matrices, (2) the non-exchangeability introduced by the summation constraint on multinomial parameters, and (3) the irreducibility of the multinomial logistic normal distribution, which necessitates the use of parameter augmentation, or similar techniques, during inference. Using real and synthetic data, we show that a variational autoencoder augmented with a fast isometric log-ratio (ILR) transform can address these issues and accurately estimate principal components from multinomial logistic normal distributed data. This model can be optimized on GPUs and modified to handle mini-batching, with the ability to scale across thousands of dimensions and thousands of samples. 2 INTRODUCTION . Many scientific disciplines that collect survey data, such as economics, psychology, political science and the biological sciences, routinely deal with compositional data, where only relative information can be measured. These datasets are often in the form of counts, where the total counts within a sample are only indicative of the confidence of the measured proportions. The resulting proportions lie within a simplex, and failing to account for the structure of this simplicial sample space can confound the interpretation of the measurements. As a result, there has been wide discussion across disparate disciplines (1; 2; 3; 4) concerning the reproducibility crisis that has arisen from the misinterpretation of compositional data. One of the obstacles to the appropriate analysis of compositional data is the difficulty of efficiently estimating the latent parameters that lie in the simplex. Accurately scaling probabilistic inference across high-dimensional count data is a major outstanding challenge (5). This problem is apparent in the social sciences and is particularly pronounced in biological fields, where datasets can contain observations on tens of thousands of features across hundreds or millions of samples. One major computational bottleneck with Gaussian distributed data is the inversion of a d-dimensional covariance matrix, which has a runtime of O(d^3) (6; 7). As a result, probabilistic covariance estimation for high-dimensional data is a computationally challenging problem. Recent theoretical developments (8) cementing the connection between Variational Autoencoders (VAEs) (9) and Probabilistic Principal Components Analysis (PPCA) (10) hold much promise for enabling accurate, scalable, low-rank approximations of large covariance matrices.
Variational autoencoders were originally proposed as a generative model ( 9 ) , but are now commonly deployed across scientific disciplines and have made contributions to single-cell RNA sequencing ( 11 ) , microbiome modeling ( 12 ) , protein modeling ( 13 ; 14 ; 15 ) , natural language processing ( 16 ) and image processing ( 9 ) . Following insights that connected regularized linear autoencoders and PCA ( 17 ) , Lucas et al . ( 8 ) showed that carefully designed VAEs can recover the weights that are solved by PPCA . A computational advantage of VAEs is that they do not require the inversion of a covariance matrix , and the resulting runtime is O ( ndkT ) for n samples , d dimensions , k latent dimensions and T epochs . While it has been noted that VAEs may take tens of thousands of epochs to estimate the principal component ( 18 ) , VAEs are easily parallelizable and can be accelerated with GPUs , presenting an attractive alternative to estimating principal components ( 17 ) and the resulting covariance matrix . The connection between VAEs and PPCA is currently limited to Gaussian distributed data and not well-suited to a compositional setting . Showing that VAEs can recover the correct principal components from count data is nontrivial due to the non-conjugacy issues between the logistic normal distribution and count distributions such as the multinomial distribution . Furthermore , the parameters of the multinomial distribution are compositional ; they are constrained within the simplex and the resulting covariance matrix is singular and non-invertible ( 1 ; 19 ) . Aitchison ( 20 ) showed that PCA can be adapted to compositional data through the use of the center log-ratio ( CLR ) transform , which maintains isometry . However , this transformation is not isomorphic , requiring that the resulting log-ratios sum to zero , and as a result , CLR-transformed data will produce a singular covariance matrix and rank-deficient principal components . It has been shown that the isometric log-ratio ( ILR ) transform ( 21 ) satisfies both isomorphism and isometry and can handle this singularity issue ( 22 ; 23 ) while enabling the estimation of full-rank principal components . Here , we show that VAEs augmented with the ILR transform can infer principal components learned from PPCA on multinomially distributed data , beginning to address these critical shortcomings . 3 RELATED WORK . In the microbiome literature , there have been a number of methods ( 24 ; 25 ; 26 ; 27 ; 28 ) that have attempted to model ecological networks through the estimation of pairwise microbe correlations or pairwise inverse-covariance , where microbes are aggregated at different taxonomical scales or ‘ taxa ’ . Of these tools , only Flashweave can scale across more than thousands of taxa ; however , it does this by avoiding the estimation of the covariance matrix . Methods that attempt to estimate the covariance matrix can only handle on the order of a few thousand dimensions . Although there is no widely accepted consensus definition of Multinomial PPCA in this context , being able to efficiently estimate the parameters of Multinomial PPCA would be highly useful for exploratory biological analysis . A number of studies have proposed using mixture modeling as a proxy for PCA ( 29 ; 30 ; 31 ) ; however , these techniques depend either on the Dirichlet distribution , whose covariance matrix is not flexible , or on stick-breaking , which violates permutation invariance ( 32 ) . Lucas et al . 
(8) have previously shown that the following two models obtain the same maximum likelihood estimates of the principal components W:

Probabilistic PCA:
p(x|z) = N(Wz + µ, σ²I_d)
p(z) = N(0, I_k)

Linear VAE:
p(x|z) = N(Wz + µ, σ²I_d)
q(z|x) = N(V(x − µ), D)

Here, p(x|z) denotes the likelihood of observations x ∈ R^d given the latent representation z ∈ R^k, p(z) denotes the prior on z, and q(z|x) denotes the estimated variational posterior distribution of z given an encoder parameterized by V and diagonal variances D. Both models estimate the same low-dimensional representation of the data through z, and learn the same factorization of the covariance matrix through W. While PPCA parameters are typically estimated through expectation maximization (10), linear VAEs are optimized by maximizing the Evidence Lower Bound (ELBO), given by

log p(x) ≥ E_{q(z|x)}[ log p(x|z) ] − KL( q(z|x) || p(z) )

For linear VAEs with a Gaussian likelihood, the variational posterior distribution q(z|x) can be shown to analytically agree with the posterior distribution p(z|x) learned from PPCA (8). However, deriving this connection for count-based likelihoods such as the multinomial distribution is complicated due to non-conjugacy issues (Appendix A). This is a major obstacle for many biological applications; multiple works have shown the merits of incorporating count distributions explicitly into the model (33; 34; 35; 36). Here, we provide directions for overcoming this issue. 4 METHODS . First, we will redefine Multinomial PPCA with the ILR transform (21). Then we will make the connection between Multinomial VAEs and Multinomial PPCA by leveraging insights from the Collapse-Uncollapse (CU) sampler (33). We will then derive an algorithm to obtain the maximum a posteriori (MAP) estimate for the VAE parameters. 4.1 PROBABILISTIC MULTINOMIAL PCA . PPCA can be extended to multinomially distributed data with the following generative model:

p(x|η) = Mult(φ(Ψη))    (1)
p(η|z) = N(Wz, σ²I_{d−1})    (2)
p(z) = N(0, I_k)    (3)

Here W ∈ R^{(d−1)×k} represents the PCA loading matrix, σ² is the variance, Ψ ∈ R^{d×(d−1)} is a fixed contrast matrix whose columns sum to zero, and φ is the softmax transform. For a single sample, x ∈ N^d are the observed d-dimensional counts, η ∈ R^{d−1} are the latent logits and z ∈ R^k is the latent representation. The term φ(Ψη) is distributed logistic normal, φ(Ψη) ∼ LN(Wz, σ²I), as shown by Aitchison (37). Furthermore, p(x|z) yields a multinomial logistic normal distribution, which is given by marginalizing out η in the following expression:

MLN(x|z) = ∫ p(x|η) p(η|z) dη

This integral is not tractable; as a result, this distribution does not have an analytically defined expectation, variance or probability density function. There have been multiple attempts to estimate the posterior distribution with MCMC (38; 39; 35; 40), but the complexity of this distribution requires a large number of samples, limiting the scalability of these methods. Variational methods have been developed to estimate the logistic normal distribution, but due to conditional non-conjugacy, these methods often rely on approximations to the ELBO, further complicating estimation (41). Recently, Silverman et al. (33) proposed to use a Laplace approximation to estimate the parameters of the multinomial logistic normal posterior distribution.
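As a sanity check of the PPCA-linear-VAE equivalence above, here is a minimal self-contained sketch (our illustration, not the authors' code): a linear VAE with a Gaussian likelihood is trained on synthetic PPCA data, and the column space of the learned decoder W is compared against the top-k principal components via principal angles. The dimensions, learning rate and epoch count are arbitrary assumptions.

```python
import torch

torch.manual_seed(0)
n, d, k = 2000, 10, 2
# Synthetic data from a PPCA generative model: x = W_true z + noise.
W_true = torch.randn(d, k)
X = torch.randn(n, k) @ W_true.T + 0.1 * torch.randn(n, d)
X = X - X.mean(0)  # center, so mu = 0

# Linear VAE: decoder p(x|z) = N(Wz, sigma^2 I), encoder q(z|x) = N(Vx, D).
W = torch.randn(d, k, requires_grad=True)
V = torch.randn(k, d, requires_grad=True)
log_D = torch.zeros(k, requires_grad=True)       # diagonal posterior variances
log_sigma2 = torch.zeros(1, requires_grad=True)  # observation variance
opt = torch.optim.Adam([W, V, log_D, log_sigma2], lr=1e-2)

for _ in range(3000):
    mu_z = X @ V.T
    z = mu_z + torch.exp(0.5 * log_D) * torch.randn_like(mu_z)  # reparameterization
    recon = z @ W.T
    sigma2 = torch.exp(log_sigma2)
    # Negative ELBO = Gaussian NLL + KL(q(z|x) || N(0, I)), up to constants.
    nll = 0.5 * (((X - recon) ** 2).sum(1) / sigma2 + d * torch.log(sigma2)).mean()
    kl = 0.5 * (torch.exp(log_D) + mu_z ** 2 - 1 - log_D).sum(1).mean()
    loss = nll + kl
    opt.zero_grad(); loss.backward(); opt.step()

# Compare subspaces: cosines of principal angles between span(W) and top-k PCA.
U_pca = torch.linalg.svd(X, full_matrices=False).Vh[:k].T
Q_w, _ = torch.linalg.qr(W.detach())
print(torch.linalg.svdvals(Q_w.T @ U_pca))  # all near 1 if the subspaces agree
```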
This approach relies on a two-stage optimization procedure on the factorized posterior distribution given by

p(η, z|x) ∝ p(η|x) p(z|η)    (4)

If η can be directly estimated, then conditional independence can be used to factorize the posterior distribution. Since the probability densities of the multinomial distribution and the normal distribution are both log-concave functions, a global optimum can be obtained for the multinomial logistic normal distribution (33) (Appendix B.3). Furthermore, the multinomial distribution does not introduce additional local optima for estimating Multinomial PCA. Given this, in addition to recent evidence that a PPCA MAP estimator can be obtained from regularized linear autoencoders (17), we can design a new algorithm to obtain a Multinomial PPCA MAP estimator. 4.2 THE ILR TRANSFORM ENFORCES IDENTIFIABILITY . The softmax function is shift-invariant, which introduces an identifiability issue that has been addressed by the compositional data analysis community (1; 42). In order to remove the identifiability issue, an isomorphism between the logits η and the multinomial parameters must be maintained. One commonly used solution is to use a degenerate softmax, also known as the inverse additive log-ratio (ALR) transform (1) (Appendix B.2). Previous work has suggested that the isometric log-ratio (ILR) transform (21; 22) is more suitable for principal components analysis (Appendix B). The ILR and inverse ILR are given as follows:

ILR(x) = Ψᵀ log x,   ILR⁻¹(x) = φ(Ψx)    (5)

where Ψ ∈ R^{d×(d−1)} is a basis such that ΨᵀΨ = I_{d−1} and ΨΨᵀ = I_d − (1/d) 1_{d×d}. A naive implementation of the ILR transform can be memory- and computation-intensive for large d. However, any orthonormal basis can be used to parameterize the ILR transform, and some of these bases can be represented by binary trees (43; 44; 45). A binary tree can be used to represent Ψ with O(d log d) elements, where the l-th column vector of Ψ is given as follows:

Ψ_{·l} = ( 0, ..., 0, a, ..., a, b, ..., b, 0, ..., 0 )    (6)
with k zeros, then r entries equal to a, then s entries equal to b, then t zeros, where

a = √|s| / √( |r| (|r| + |s|) ),   b = −√|r| / √( |s| (|r| + |s|) )    (7)

where l indexes an internal node in the tree with left children r, right children s, k nodes to the left and t nodes to the right (46) (Figure S2). Due to rotation invariance, it does not matter which tree is used to parameterize the ILR basis, but the choice of tree can influence the runtime of the ILR transform. If a balanced binary tree is used, the memory requirement for representing Ψ can be brought down from O(d²) to O(d log d), and the matrix-vector multiplication runtime can be reduced from O(d²) to O(d log d) (see Appendix B.1). This can speed up the matrix-vector multiplication operations by an order of magnitude for datasets with more than ten thousand dimensions.
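To illustrate the ILR transform of Eq. (5), here is a small sketch (our own, using an arbitrary dense orthonormal basis rather than the tree-structured one from the paper) that builds Ψ for a given d, checks the two identities ΨᵀΨ = I_{d−1} and ΨΨᵀ = I_d − (1/d)11ᵀ, and verifies that ILR⁻¹(ILR(p)) recovers a composition p.

```python
import numpy as np

def ilr_basis(d):
    """Dense orthonormal ILR contrast matrix Psi of shape (d, d-1).

    Any orthonormal basis of the hyperplane orthogonal to the all-ones
    vector works; this QR-based construction is just one convenient
    choice (not the tree-structured basis from the paper).
    """
    ones = np.ones((d, 1)) / np.sqrt(d)
    Q, _ = np.linalg.qr(np.hstack([ones, np.random.randn(d, d - 1)]))
    return Q[:, 1:]  # drop the ones-direction, keep d-1 contrasts

def ilr(p, Psi):
    return Psi.T @ np.log(p)

def ilr_inv(y, Psi):
    z = np.exp(Psi @ y)
    return z / z.sum()  # the softmax phi(Psi y)

d = 6
Psi = ilr_basis(d)
print(np.allclose(Psi.T @ Psi, np.eye(d - 1)))                    # Psi^T Psi = I
print(np.allclose(Psi @ Psi.T, np.eye(d) - np.ones((d, d)) / d))  # centering identity

p = np.random.dirichlet(np.ones(d))  # a random composition on the simplex
print(np.allclose(ilr_inv(ilr(p, Psi), Psi), p))                  # exact round trip
```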
This paper extends prior results, namely that VAEs are able to learn the principal components. The novelty is the extension to a new distribution: multinomial logistic-normal distribution. This is achieved by using the Isometric log-ratio (ILR) transform. While prior results were derived analytically, this paper provides (only) empirical evidence for the claim regarding the multinomial logistic-normal distribution.
SP:19a28a50180cda10be3344064701fee76f354cf9
Adaptive Self-training for Neural Sequence Labeling with Few Labels
1 INTRODUCTION . Motivation . Deep neural networks typically require large amounts of training data to achieve state-of-the-art performance. Recent advances with pre-trained language models like BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019) and RoBERTa (Liu et al., 2019) have reduced this annotation bottleneck. In this paradigm, large neural network models are trained on massive amounts of unlabeled data in a self-supervised manner. However, the success of these large-scale models still relies on fine-tuning them on large amounts of labeled data for downstream tasks. For instance, our experiments show a 27% relative improvement on average when fine-tuning BERT with the full training set (2.5K-705K labels) vs. fine-tuning with only 10 labels per class. This poses several challenges for many real-world tasks. Not only is acquiring large amounts of labeled data for every task expensive and time-consuming, it is also infeasible in many cases due to data access and privacy constraints. This issue is exacerbated for sequence labeling tasks that require annotations at token- and slot-level, as opposed to instance-level classification tasks. For example, an NER task can have slots like B-PER, I-PER, O-PER marking the beginning, intermediate and out-of-span markers for person names, and similar slots for the names of locations and organizations. Similarly, language understanding models for dialog systems rely on effective identification of what the user intends to do (intents) and the corresponding values as arguments (slots) for use by downstream applications. Therefore, fully supervised neural sequence taggers are expensive to train for such tasks, given the requirement of thousands of annotations for hundreds of slots across the many different intents. Semi-supervised learning (SSL) (Chapelle et al., 2010) is one of the promising paradigms to address labeled data scarcity by making effective use of large amounts of unlabeled data in addition to task-specific labeled data. Self-training (ST) (Scudder, 1965), one of the earliest SSL approaches, has recently shown state-of-the-art performance for tasks like image classification (Li et al., 2019; Xie et al., 2020), performing on par with supervised systems while using very few training labels. In contrast to such instance-level classification tasks, sequence labeling tasks have dependencies between the slots, demanding different design choices for slot-level loss optimization in the limited labeled data setting. For instance, prior work (Ruder & Plank, 2018) using classic self-training techniques for sequence labeling did not find much success in the low-data regime with 10% labeled data for the target domain, although there has been some success with careful task-specific data selection (Petrov & McDonald, 2012) and, more recently, with distant supervision (Liang et al., 2020) using external resources like knowledge bases (e.g., Wikipedia). In contrast to these prior works, we develop techniques for self-training with limited labels and without any task-specific assumptions or external knowledge. For self-training, a base model (teacher) is trained on some amount of labeled data and used to pseudo-annotate (task-specific) unlabeled data. The original labeled data is augmented with the pseudo-labeled data and used to train a student model. The student-teacher training is repeated until convergence.
Traditionally in self-training frameworks , the teacher model pseudo-annotates unlabeled data without any sample selection . This may result in gradual drifts from self-training on noisy pseudo-labeled instances ( Zhang et al. , 2017 ) . In order to deal with noisy labels and training set biases , Ren et al . ( 2018 ) propose a meta-learning technique to automatically re-weight noisy samples by their loss changes on a held-out clean labeled validation set . We adopt a similar principle in our work and leverage meta-learning to re-weight noisy pseudo-labeled examples from the teacher . While prior techniques for learning to re-weight examples have been developed for instance-level classification tasks , we extend them to operate at token-level for discrete sequence labeling tasks . To this end , we address some key challenges on how to construct an informative held-out validation set for token-level re-weighting . Prior works ( Ren et al. , 2018 ; Shu et al. , 2019 ) for instance classification construct this validation set by random sampling . However , sequence labeling tasks involve many slots ( e.g . WikiAnn has 123 slots over 41 languages ) with variable difficulty and distribution in the data . In case of random sampling , the model oversamples from the most populous category and slots . This is particularly detrimental for low-resource languages in the multilingual setting . To this end , we develop an adaptive mechanism to create the validation set on the fly considering the diversity and uncertainty of the model for different slot types . Furthermore , we leverage this validation set for token-level loss estimation and re-weighting pseudo-labeled sequences from the teacher in the meta-learning setup . While prior works ( Li et al. , 2019 ; Sun et al. , 2019 ; Bansal et al. , 2020 ) on meta-learning for image and text classification leverage multi-task learning to improve a target classification task based on several similar tasks , in this work we focus on a single sequence labeling task – making our setup more challenging altogether . Our task and framework overview . We focus on sequence labeling tasks with only a few annotated samples ( e.g. , K = { 5 , 10 , 20 , 100 } ) per slot type for training and large amounts of task-specific unlabeled data . Figure 1 shows an overview of our framework with the following components : ( i ) Self-training : Our self-training framework leverages a pre-trained language model as a teacher and co-trains a student model with iterative knowledge exchange ( ii ) Adaptive labeled data acquisition for validation : Our few-shot learning setup assumes a small number of labeled training samples per slot type . The labeled data from multiple slot types are not equally informative for the student model to learn from . While prior works in meta-learning randomly sample some labeled examples for held-out validation set , we develop an adaptive mechanism to create this set on the fly . To this end , we leverage loss decay as a proxy for model uncertainty to select informative labeled samples for the student model to learn from in conjunction with the re-weighting mechanism in the next step . ( iii ) Meta-learning for sample re-weighting : Since pseudo-labeled samples from the teacher can be noisy , we employ meta-learning to re-weight them to improve the student model performance on the held-out validation set obtained from the previous step . In contrast to prior work ( Ren et al. 
, 2018) on sample re-weighting operating at instance-level, we incorporate the re-weighting mechanism at token-level for sequence labeling tasks. Here the token-level weights are determined by the student model loss on the above validation set. Finally, we learn all of the above steps jointly with end-to-end learning in the self-training framework. We refer to our adaptive self-training framework with the meta-learning based sample re-weighting mechanism as MetaST. We perform extensive experiments on six benchmark datasets for several tasks, including multilingual Named Entity Recognition and slot tagging for user utterances from task-oriented dialog systems, to demonstrate the generalizability of our approach across diverse tasks and languages. We adopt BERT and multilingual BERT as encoders and show that their performance can be significantly improved by nearly 10% in low-resource settings with few training labels (e.g., 10 labeled examples per slot type) and large amounts of unlabeled data. In summary, our work makes the following contributions. (i) Develops a self-training framework for neural sequence tagging with few labeled training examples. (ii) Leverages an acquisition strategy to adaptively select a validation set from the labeled set for meta-learning of the student model. (iii) Develops a meta-learning framework for re-weighting pseudo-labeled samples at token-level to reduce drifts from noisy teacher predictions. (iv) Integrates the aforementioned components into an end-to-end learning framework and demonstrates its effectiveness for neural sequence labeling across six benchmark datasets with multiple slots, shots, domains and languages. 2 BACKGROUND . Sequence labeling and slot tagging . This is the task of identifying the entity span of several slot types (e.g., names of person, organization, location, date, etc.) in a text sequence. Formally, given a sentence with N tokens X = {x_1, ..., x_N}, an entity or slot value is a span of tokens s = [x_i, ..., x_j] (0 ≤ i ≤ j ≤ N) associated with a type. This task assumes a pre-defined tagging policy like BIO (Tjong et al., 1999), where B marks the beginning of the slot, I marks an intermediate token in the span, and O marks out-of-span tokens. These span markers are used to extract multi-token values for each of the slot types, with phrase-level evaluation of the performance. Self-training . Let f(·; θ_tea) and f(·; θ_stu) denote the teacher and student models respectively in the self-training framework. The role of the teacher model (e.g., a pre-trained language model) is to assign pseudo-labels to unlabeled data that are used to train a student model. The teacher and student models can exchange knowledge, and the training schedules are repeated till convergence. The success of self-training with deep neural networks in recent works (He et al., 2019; Xie et al., 2020) has been attributed to a number of factors, including stochastic regularization with dropouts and data regularization with unlabeled data. Formally, given the m-th unlabeled sentence with N tokens X_m^u = {x_{1,m}^u, ..., x_{N,m}^u} and C pre-defined labels, consider the pseudo-labels Ŷ_m^{(t)} = [ŷ_{m,1}^{(t)}, ..., ŷ_{m,N}^{(t)}] generated by the teacher model at the t-th iteration, where

ŷ_{m,n}^{(t)} = argmax_{c∈C} f_{n,c}(x_{m,n}^u; θ_tea^{(t)})    (1)
The pseudo-labeled data set, denoted as (X^u, Ŷ^{(t)}) = {(X_m^u, Ŷ_m^{(t)})}_{m=1}^{M}, is used to train the student model and learn its parameters as:

θ̂_stu^{(t)} = argmin_θ (1/M) Σ_{m=1}^{M} l( Ŷ_m^{(t)}, f(X_m^u; θ_stu^{(t−1)}) )    (2)

where l(·,·) can be modeled as the cross-entropy loss. 3 ADAPTIVE SELF TRAINING . Given a pre-trained language model (e.g., BERT (Devlin et al., 2019)) as the teacher, we first fine-tune it on the small labeled data to make it aware of the underlying task. The fine-tuned teacher model is then used to pseudo-label the large unlabeled data. We consider the student model as another instantiation of the pre-trained language model that is trained over the pseudo-labeled data. However, our few-shot setting with limited labeled data results in a noisy teacher. A naive transfer of teacher knowledge to the student results in the propagation of noisy labels, limiting the performance of the student model. To address this challenge, we develop an adaptive self-training framework to re-weight pseudo-labeled predictions from the teacher with a meta-learning objective that optimizes the token-level loss of the student model on a held-out labeled validation set. This held-out set is adaptively constructed via labeled data acquisition, which selects labeled samples with high uncertainty for efficient data exploration.
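The following is a minimal sketch of the vanilla self-training loop in Eqs. (1)-(2), written for a generic token classifier; the model interface, data shapes and the number of epochs are placeholder assumptions, and it omits MetaST's re-weighting and data acquisition components.

```python
import torch
import torch.nn.functional as F

def pseudo_label(teacher, unlabeled_loader, device="cpu"):
    """Eq. (1): hard token-level pseudo-labels from the teacher."""
    teacher.eval()
    batches = []
    with torch.no_grad():
        for tokens in unlabeled_loader:           # tokens: (B, N) token ids
            logits = teacher(tokens.to(device))   # (B, N, C) per-token scores
            batches.append((tokens, logits.argmax(dim=-1)))  # argmax over C
    return batches

def train_student(student, pseudo_batches, epochs=3, lr=2e-5, device="cpu"):
    """Eq. (2): minimize token-level cross-entropy on pseudo-labeled data."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    student.train()
    for _ in range(epochs):
        for tokens, labels in pseudo_batches:
            logits = student(tokens.to(device))   # (B, N, C)
            loss = F.cross_entropy(logits.flatten(0, 1),
                                   labels.to(device).flatten())
            opt.zero_grad(); loss.backward(); opt.step()
    return student

# One self-training round per iteration t: teacher labels, student learns,
# then the roles swap ("iterative knowledge exchange"):
# for t in range(T):
#     pseudo = pseudo_label(teacher, unlabeled_loader)
#     student = train_student(student, pseudo)
#     teacher = student
```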
This paper proposes an adaptive self-training framework, called MetaST, for tackling few-shot sequence labeling tasks. The framework consists of several components: a teacher model that finetunes with the few-shot training data and generates noisy labels for the unlabeled examples; a student model that learns from re-weighted noisy labels (at the token level), and an iterative process to update the teacher with the trained student. It also uses a meta-learning mechanism to adjust the token-level weights based on a subsampled set of clean data. This subset is sampled based on the student model’s uncertainty to improve learning efficiency.
SP:dd0782278b556d2946ddd4bb7ea71c2bfbea948d
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Neural network scaling has been critical for improving model quality in many real-world machine learning applications with vast amounts of training data and compute. Although this trend of scaling is affirmed to be a sure-fire approach for better model quality, there are challenges on the path, such as the computation cost, ease of programming, and efficient implementation on parallel devices. In this paper we demonstrate conditional computation as a remedy to the above-mentioned impediments, and demonstrate its efficacy and utility. We make extensive use of GShard, a module composed of a set of lightweight annotation APIs and an extension to the XLA compiler that enables large-scale models with up to trillions of parameters. GShard and conditional computation enable us to scale up a multilingual neural machine translation Transformer model with Sparsely-Gated Mixture-of-Experts. We demonstrate that such a giant model with 600 billion parameters can be trained efficiently on 2048 TPU v3 cores in 4 days to achieve far superior quality for translation from 100 languages to English compared to the prior art. 1 INTRODUCTION . Scaling neural networks brings dramatic quality gains over a wide array of machine learning problems, such as computer vision, language understanding and neural machine translation (Devlin et al., 2018; Mahajan et al., 2018; Arivazhagan et al., 2019; Huang et al., 2019; Brown et al., 2020b). This general tendency motivated recent studies to scrutinize the factors playing a critical role in the success of scaling, including the amount of training data, the model size, and the computation being utilized, as found by past studies (Advani & Saxe, 2017; Hestness et al., 2019; Geiger et al., 2020). While the final model quality was found to have a power-law relationship with these factors (Hestness et al., 2017; Kaplan et al., 2020), the significant quality gains brought by larger models also came with various practical challenges. Training efficiency, which we define as the amount of compute and time used to achieve a superior model quality relative to the best existing system, is oftentimes left out. In this study, we strive to improve model quality while training efficiently. We built a sequence-to-sequence Transformer model with 600 billion parameters and Sparsely-Gated Mixture-of-Experts layers, which enjoys sub-linear computation cost and O(1) compilation time. We trained this model with 2048 TPU v3 devices for 4 days on a multilingual machine translation task and achieved far superior translation quality compared to the prior art when translating 100 languages to English with a single non-ensemble model. We conducted experiments with various model sizes and found that the translation quality increases as the model gets bigger, yet the total wall-time to train only increases sub-linearly with respect to the model size, as illustrated in Figure 1. To train such an extremely large model, we relied on the following key design choices. Conditional computation First, the model architecture should be designed to keep the computation and communication requirements sublinear in the model capacity. Conditional computation enables us to satisfy training and inference efficiency by having a sub-network activated on a per-input basis. Shazeer et al. (2017) showed that scaling RNN model capacity by adding Sparsely-Gated Mixture-of-Experts (MoE) layers achieved improved results with sub-linear cost.
We therefore present our approach to extending the Transformer architecture with MoE layers in this study. GShard Annotation Second, the model description should be separated from the partitioning implementation and optimization. This separation of concerns lets model developers focus on the network architecture and flexibly change the partitioning strategy, while the underlying system applies semantic-preserving transformations and implements efficient parallel execution. To this end we propose a module, GShard, which only requires the user to annotate a few critical tensors in the model with partitioning policies. It consists of a set of simple APIs for annotations, and a compiler extension in XLA for automatic parallelization. Model developers write models as if there were a single device with huge memory and computation capacity, and the compiler automatically partitions the computation for the target based on the user annotations and its own heuristics. 2 MODEL . The Transformer (Vaswani et al., 2017) architecture has been widely used for natural language processing. We scale Transformer with conditional computation by replacing every other feed-forward layer with a sparsely activated position-wise Mixture-of-Experts (MoE) layer (Shazeer et al., 2017), with a variant of top-2 gating in both the encoder and the decoder (Figure 2). Each subword token in the training example activates a sub-network of the MoE Transformer during both training and inference. The size of the sub-network is roughly independent of the number of experts per MoE layer, allowing sublinear scaling of the computation cost. 2.1 POSITION-WISE MIXTURE-OF-EXPERTS LAYER . The Mixture-of-Experts (MoE) layers used in our model differ from those of Shazeer et al. (2017) in the sparse gating function and the auxiliary loss being used. A MoE layer for Transformer consists of E feed-forward networks FFN_1 ... FFN_E, each of which outputs

FFN_e(x_s) = wo_e · ReLU(wi_e · x_s)

where x_s is the input token to the MoE layer, and wi_e and wo_e are the input and output projection matrices for the feed-forward layer (an expert), with shapes [M, H] and [H, M], respectively. The output of a MoE layer is the combination of the expert outputs

y_s = Σ_{e=1}^{E} G_{s,e} · FFN_e(x_s)

where the vector G_{s,E} is computed by a gating function GATE(·). We choose to let each token be dispatched to at most two experts. The corresponding gating entries G_{s,e} become non-zero, representing how much an expert contributes to the final network output. The gating function GATE(·) is critical to the MoE layer. It is modeled by a softmax activation function to indicate the weight of each expert in processing incoming tokens. We designed a novel efficient gating function with the following mechanisms (details illustrated in Algorithm 1). Load balancing Naively picking the top-k experts from the softmax probability distribution leads to a load imbalance problem in training, as shown in Shazeer et al. (2017): most tokens would be dispatched to a small number of experts, leaving the other experts insufficiently trained. To ensure that the load is balanced, we enforce that the number of tokens processed by one expert stays below some uniform threshold called the expert capacity. Assuming N total tokens in a batch and at most two experts per token, the expert capacity C is set to be O(N/E). GATE(·) keeps a running counter c_e of how many tokens are dispatched to each expert.
When both experts selected by a token have already exceeded their capacity, the token is considered an overflowed token, and G_{s,E} degenerates into a zero vector. Such tokens are passed on to the next layer via residual connections. The introduction of a fixed expert capacity, instead of the load balancing functions in Shazeer et al. (2017), allows us to run the gating function in parallel, as described below. Local dispatching for parallel gating Load balancing makes the token assignments of one expert dependent on the assignments of the other experts. The original gating function proposed by Shazeer et al. (2017) had to be implemented sequentially, especially under the static shape constraints on TPUs. In our study, we distribute thousands of experts over thousands of devices, and a sequential implementation of the gating function would keep most of the devices idle most of the time. Instead, we propose a new GATE(·) function that partitions all tokens in a training batch evenly into G local groups, i.e., each group contains S = N/G tokens for local dispatching. All local groups are processed independently in parallel. Each group is given a fractional capacity of each expert, C = 2N/(G·E), to ensure that at most this many tokens are dispatched to an expert. In general, increasing the expert capacity C decreases the number of overflowed tokens and thus improves the model quality. Since G×C is a constant, however, a higher capacity leads to a smaller number of groups, which hurts the training throughput by limiting the number of parallel gating executions. In this way, we can ensure that the expert capacity is still enforced and the overall load is balanced. With fixed expert capacity and local dispatching, we are able to speed up the gating function by O(G) times. Auxiliary loss Following Shazeer et al. (2017), we define a new differentiable auxiliary loss term ℓ_aux to enforce load balancing. It is added to the overall loss function of the model, L = ℓ_ori + k·ℓ_aux, with a constant multiplier k, where ℓ_aux is defined in line (15) of Algorithm 1, and the term c_e/S represents the fraction of input routed to each expert. We replace the mean square (c_e/S)² with the differentiable approximation m_e·(c_e/S), which can provide better numerical stability since it can be optimized with gradient descent.

Algorithm 1: Group-level top-2 gating with auxiliary loss
Data: x_S, a group of tokens of size S
Data: C, expert capacity allocated to this group
Result: G_{S,E}, group combine weights
Result: ℓ_aux, group auxiliary loss
(1)  for e ← 1 to E do
(2)    c_e ← 0  ▷ gating decisions per expert
(3)    g_{S,e} ← softmax(w_g · x_S)  ▷ gates per token per expert, w_g are trainable weights
(4)    m_e ← (1/S) Σ_{s=1}^{S} g_{s,e}  ▷ mean gates per expert
(5)  end
(6)  for s ← 1 to S do
(7)    g1, e1, g2, e2 = top_2({g_{s,e} | e = 1...E})  ▷ top-2 gates and expert indices
(8)    g1 ← g1 / (g1 + g2)  ▷ normalized g1
(9)    c ← c_{e1}  ▷ position in e1 expert buffer
(10)   if c_{e1} < C then
(11)     G_{s,e1} ← g1  ▷ e1 expert combine weight for x_s
(12)   end
(13)   c_{e1} ← c + 1  ▷ incrementing e1 expert decision count
(14)  end
(15)  ℓ_aux = (1/E) Σ_{e=1}^{E} (c_e / S) · m_e
(16)  for s ← 1 to S do
(17)   g1, e1, g2, e2 = top_2({g_{s,e} | e = 1...E})  ▷ top-2 gates and expert indices
(18)   g2 ← g2 / (g1 + g2)  ▷ normalized g2
(19)   rnd ← uniform(0, 1)  ▷ dispatch to second-best expert with probability ∝ 2·g2
(20)   c ← c_{e2}  ▷ position in e2 expert buffer
(21)   if c < C ∧ 2·g2 > rnd then
(22)     G_{s,e2} ← g2  ▷ e2 expert combine weight for x_s
(23)   end
(24)   c_{e2} ← c + 1
(25)  end

Random routing Intuitively, the output y_s is a weighted average of what the selected experts return. If the weight for the 2nd expert is very small, we can simply ignore the 2nd expert to conserve the overall expert capacity. Hence, in addition to respecting the expert capacity constraint, GATE(·) dispatches to the 2nd-best expert with probability proportional to its weight g2. We observed many fewer overflowed tokens, and thus better accuracy, with random routing for models at the small scale. We then adopted this approach for our experiments at large scale.
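Below is a compact sketch of the group-level top-2 gating logic of Algorithm 1 for a single group (our simplification for illustration: a plain Python loop rather than the TPU-friendly vectorized formulation); the tensor shapes and capacity value in the usage example are toy assumptions.

```python
import torch

def top2_gating(x, w_g, capacity):
    """Group-level top-2 gating with capacity and random 2nd-expert routing.

    x: (S, M) tokens in one group; w_g: (M, E) trainable gating weights.
    Returns combine weights G of shape (S, E) and the auxiliary loss.
    """
    S, E = x.shape[0], w_g.shape[1]
    gates = torch.softmax(x @ w_g, dim=-1)   # (S, E) gates per token
    m = gates.mean(dim=0)                    # mean gate per expert (m_e)
    c = torch.zeros(E, dtype=torch.long)     # gating decisions per expert (c_e)
    G = torch.zeros(S, E)

    top2 = gates.topk(2, dim=-1)
    for s in range(S):
        (g1, g2), (e1, e2) = top2.values[s], top2.indices[s]
        denom = g1 + g2
        if c[e1] < capacity:                 # best expert, if it has room
            G[s, e1] = g1 / denom
        c[e1] += 1
        # Dispatch to the 2nd-best expert with probability proportional to 2*g2.
        if c[e2] < capacity and 2 * (g2 / denom) > torch.rand(()):
            G[s, e2] = g2 / denom
        c[e2] += 1

    aux_loss = ((c.float() / S) * m).sum() / E   # line (15) of Algorithm 1
    return G, aux_loss

# Toy usage: 16 tokens, model dim 8, 4 experts, capacity 2*S/E = 8.
x, w_g = torch.randn(16, 8), torch.randn(8, 4)
G, aux = top2_gating(x, w_g, capacity=8)
print(G.shape, float(aux))
```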
The paper applies mixture-of-experts (MoE) [1] to Transformers to significantly increase the number of parameters in the model while keeping the total computational cost feasible. The main differences from [1] are: (1) only the top-2 experts are chosen at each timestep; (2) a capacity upper bound is set for each expert to make sure that no expert becomes a straggler. The paper also presents a convenient library that makes implementing MoE models easier. The proposed method is tested on a large multilingual machine translation dataset and shows performance gains over models trained on a single language pair and over a model trained without MoE.
SP:f76f1289d7b47dd1bd381108f5b86a410613af9e
Fine-Tuning Offline Reinforcement Learning with Model-Based Policy Optimization
1 INTRODUCTION . Deep reinforcement learning has recently been able to achieve impressive results in a variety of video games (Badia et al., 2020) and board games (Schrittwieser et al., 2020). However, it has had limited success in complicated real-world tasks. In contrast, deep supervised learning algorithms have achieved extraordinary success in scaling to difficult real-world datasets and tasks, especially in computer vision (Deng et al., 2009) and NLP (Rajpurkar et al., 2016). The success of supervised learning algorithms can be attributed to the combination of deep neural networks and methods that can effectively scale with large corpora of varied data. The previous successes of deep RL (Levine, 2016; Schrittwieser et al., 2020) seem to indicate that reinforcement learning can potentially scale with large active data exploration to solve specific tasks. However, the ability to collect such large datasets online is infeasible in many real-world applications, such as automated driving or robot-assisted surgery, due to the difficulty and inherent risks of collecting online exploratory data with an imperfect agent. Existing off-policy RL algorithms can potentially leverage large, previously collected datasets, but they often struggle to learn effective policies without collecting their own online exploratory data (Agarwal et al., 2020). These failures are often attributed to the Q-function poorly extrapolating to out-of-distribution actions, which leads to overly optimistic agents that largely over-estimate the values of unseen actions. Because Q-functions are trained using bootstrapping, these errors often compound and lead to divergent Q-functions and unstable policy learning (Kumar et al., 2019). Recently, a variety of offline RL approaches have attempted to address these issues. Broadly, we group these approaches into two main categories based on how they address the extrapolation issue. The first set of approaches (Wu et al., 2019; Kumar et al., 2019) relies on behavior-regularization to limit the learned policy's divergence from the perceived behavioral policy that collected the data. These approaches discourage the agent from considering out-of-distribution actions in order to avoid erroneous extrapolation. While these methods can often be effective when given some amount of expert demonstrations, they often prove too conservative and rarely outperform the best demonstrated behavior. The second set of approaches (Yu et al., 2020; Kidambi et al., 2020) leverages uncertainty-aware model-based RL to learn a policy that is discouraged from taking state-action transitions where the learned model has low confidence. Thus, these methods allow a certain degree of extrapolation where the models are confident. Because these methods tend to be less restrictive, they can generalize better than behavior-regularization methods and sometimes outperform the behavioral dataset. However, this flexibility also seems to make it harder for these methods to recover the expert policy when it is present in the dataset, and it reduces their effectiveness when trained with a narrow data distribution. In this work, we develop an algorithmic framework that combines ideas from behavior-regularization and uncertainty-aware model-based learning. Specifically, we first train a policy using behavior-regularized model-free RL.
Then, we fine-tune our results with our novel algorithm, Model-Based Behavior-Regularized Policy Optimization (MB2PO). We find that our approach is able to combine the upsides of these approaches and achieve competitive or superior results on most of the Gym MuJoCo (Todorov et al., 2012) tasks in the D4RL (Fu et al., 2020) benchmark. 2 RELATED WORK . While there exist many off-policy RL methods that can learn to solve a large variety of complex control tasks and can scale with large amounts of online data collection, these methods often perform quite poorly when run completely offline without any online data collection. Recently, several methods have made progress in improving the capabilities of offline RL. For a general overview of the field of offline RL, we refer the reader to Levine et al. (2020). Here we discuss some recent works that are particularly relevant to our approach. 2.1 IMPROVING OFF-POLICY Q-LEARNING . Many of the recent advances in both discrete- and continuous-action off-policy deep RL can be attributed to improvements in stabilizing off-policy Q-learning and reducing overestimation due to erroneous extrapolation. Some notable methods include target networks (Mnih et al., 2013), double Q-learning (DDQN) (van Hasselt et al., 2015), distributional RL (Bellemare et al., 2017; Dabney et al., 2017), and variance reduction through invertible transforms (Pohlen et al., 2018). In learning for continuous control, Fujimoto et al. (2018) introduced a conservative method that uses the minimum estimate of an ensemble of Q-networks as the target, which is often referred to as clipped double Q-learning. Agarwal et al. (2020) demonstrated that Quantile Regression DDQN (Dabney et al., 2017) and other ensemble methods can be effective in certain discrete-action offline RL problems. However, Agarwal et al. (2020) showed that, when used naively, these methods do not perform well on complex continuous control tasks. In our work, we incorporate the mentioned advances in off-policy Q-learning into our approach to stabilize performance and prevent potential divergence. Additionally, the offline RL algorithm Conservative Q-learning (CQL) (Kumar et al., 2020) has attempted to address Q-learning's overestimation issue on offline data directly by including a constraint term that discourages the agent from valuing an out-of-distribution action more than the demonstrated actions. In our method, instead of using a constraint on the Q-values, we use a combination of behavior-regularized model-free RL and uncertainty-aware model-based RL to discourage erroneous extrapolation. 2.2 BEHAVIOR-REGULARIZED MODEL-FREE RL . A variety of recent offline RL approaches have incorporated constraints or penalties on the learned policy's divergence from the empirical behavioral policy. In particular, recent works have used both the KL divergence (Wu et al., 2019) and the maximum mean discrepancy (MMD) (Kumar et al., 2019). MMD is sometimes used over the KL divergence because MMD approximately constrains the learned policy to be in the support of the behavioral policy, which is less restrictive than the KL divergence. However, most behavior-regularization or policy-constraint methods require the behavioral policy to be represented explicitly in order to estimate these divergences or to enforce their policy constraint (Laroche et al., 2019). In contrast, AWAC (Nair et al., 2020) and CRR (Wang et al.
, 2020) are able to incorporate a KL divergence constraint without explicitly representing the behavioral policy. They do this by reformulating the policy-constrained RL optimization equations into a form that resembles behavioral cloning re-weighted by the exponential of the advantage. Wang et al. (2020) demonstrate that this method can effectively learn complex control tasks purely from offline data, and Nair et al. (2020) demonstrate that performance can even be improved with further online data collection. In this work, we demonstrate that these properties make AWAC work exceptionally well when used for initialization as well as when used for fine-tuning with Model-Based Policy Optimization (MBPO) (Janner et al., 2019). 2.3 UNCERTAINTY-AWARE MODEL-BASED RL . MB RL algorithms have several natural advantages over model-free RL algorithms for offline RL. First, MB RL algorithms rely on supervised learning, which provides more robust gradient signals compared to bootstrapped learning and policy gradients. Second, learning a dynamics model provides strong task-independent supervision, which allows MB RL algorithms to learn from sub-optimal trajectories. These benefits make generalization easier and can allow MB RL algorithms to surpass the performance of the demonstrated data. In fact, in many environments, MB RL methods have already been effective in learning with offline or randomly collected datasets. Additionally, there is a rich history of prior works that have explored robust solutions to MDPs with uncertain transition dynamics (Nilim & El Ghaoui, 2005; Iyengar, 2005). However, it can be difficult to scale these types of methods to high-dimensional continuous control tasks, especially when using deep neural networks. Recently, incorporating uncertainty estimation techniques from supervised learning into MB RL has led to further improvements in both online (Chua et al., 2018) and offline deep RL. In particular, two recent works, Model-Based Offline Policy Optimization (MOPO) (Yu et al., 2020) and Model-Based Offline Reinforcement Learning (MoREL) (Kidambi et al., 2020), have demonstrated impressive results by combining uncertainty-aware MB RL with the Dyna-style (Sutton, 1991) algorithm MBPO (Janner et al., 2019). Both methods use these models to create conservative MDPs that have a lower potential expected sum of rewards compared to the true MDP. By performing policy optimization in the conservative MDP through MBPO, they are able to learn a conservative policy that can outperform the demonstrated trajectories. However, these methods can often fail to recover the expert policy even when it is demonstrated in the dataset. We believe that this is largely due to a lack of effective methods for estimating epistemic uncertainty in neural network regression. 3 PRELIMINARIES . In RL, we assume our agent operates within a standard Markov decision process (MDP) M = (S, A, T, r, ρ0, γ), where S denotes the state space, A denotes the action space, T(s′|s, a) represents the probabilistic transition dynamics, r is the reward function, ρ0 is the initial state distribution, and γ ∈ (0, 1) is the discount factor. The objective in RL is to learn a policy π(a|s) that optimizes the expected discounted sum of rewards

R_π = E_{π, T, ρ0} [ Σ_{t=0}^{∞} γ^t r(s_t, a_t) ].
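As a rough illustration of the advantage-weighted update described above, here is a minimal sketch (our reading of the AWAC-style objective, not the authors' released code): the policy is updated by behavioral cloning on dataset actions, weighted by the exponentiated advantage; the temperature, the clamping, and the Monte-Carlo value estimate are placeholder assumptions.

```python
import torch

def awac_policy_loss(policy, q_net, states, actions, beta=1.0):
    """Advantage-weighted regression step: maximize
    E_D[ exp(A(s, a) / beta) * log pi(a|s) ],
    the closed-form solution of a KL-constrained policy update.
    policy(states) is assumed to return a torch.distributions object.
    """
    with torch.no_grad():
        pi_actions = policy(states).sample()         # a ~ pi(.|s)
        v = q_net(states, pi_actions)                # Monte-Carlo estimate of V(s)
        advantage = q_net(states, actions) - v       # A(s, a) for dataset actions
        weights = torch.exp(advantage / beta).clamp(max=20.0)  # stabilize
    log_prob = policy(states).log_prob(actions).sum(-1)        # log pi(a|s)
    return -(weights * log_prob).mean()              # minimize the negative objective
```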
In offline RL, we assume that during training we only have access to a fixed dataset D_β containing a set of tuples (s, a, s′, r) of environment transitions and associated rewards. We assume that the data was collected by a policy π_β, which we call the behavioral policy. Typically, when training with data not collected by the current policy π, we use either off-policy model-free algorithms or model-based algorithms. The most common off-policy model-free algorithms are actor-critic algorithms that use policy iteration. Policy iteration alternates between policy evaluation and policy improvement in order to learn an effective policy. In policy evaluation, these methods train a parametric Q-function by iteratively minimizing the temporal difference error:

Q^π_{k+1} = argmin_Q E_{s,a,s′∼D} [ ( ( r(s, a) + γ E_{a′∼π(·|s′)} [ Q^π_k(s′, a′) ] ) − Q^π(s, a) )² ]    (1)

In policy improvement, we update the parametric policy π to maximize the current Q-function:

π_{k+1} = argmax_π E_{s∼D, a∼π(·|s)} [ Q^π_k(s, a) ]    (2)

In MB RL, we attempt to learn a model T̂ of the transition dynamics and a model r̂ of the reward function. With these learned models of the dynamics and reward function, we can create a model MDP M̂ = (S, A, T̂, r̂, ρ0, γ) to estimate the true underlying MDP M. These methods tend to use either trajectory optimization or policy optimization in the model MDP to produce their policy.
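The following is a small sketch of one gradient step each on Eqs. (1)-(2) for a batch of offline transitions; the network modules, the absence of target-network updates, and the hyperparameters are placeholder assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def policy_evaluation_step(q_net, q_target, policy, batch, q_opt, gamma=0.99):
    """One TD step on Eq. (1): fit Q to r + gamma * E_{a'~pi}[Q_target(s', a')]."""
    s, a, r, s2 = batch  # tensors sampled from the fixed offline dataset D
    with torch.no_grad():
        a2 = policy(s2).sample()               # a' ~ pi(.|s')
        target = r + gamma * q_target(s2, a2)  # bootstrapped TD target
    loss = F.mse_loss(q_net(s, a), target)
    q_opt.zero_grad(); loss.backward(); q_opt.step()

def policy_improvement_step(q_net, policy, states, pi_opt):
    """One step on Eq. (2): maximize E_{s~D, a~pi}[Q(s, a)] by reparameterization."""
    a = policy(states).rsample()               # differentiable action sample
    loss = -q_net(states, a).mean()
    pi_opt.zero_grad(); loss.backward(); pi_opt.step()
```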
This paper proposes an improved offline RL (batch RL) algorithm that combines a state-of-the-art behavior-regularized actor-critic method (Nair et al., 2020) with a model-based RL technique. After the policy is pretrained with behavior-regularization solely on the offline data, an ensemble of N probabilistic dynamics models generates fictitious trajectories with uncertainty-penalized rewards. Both these generated data and the original offline data are then used for further behavior-regularized actor-critic training. Numerical results showed that the proposed method outperformed recent offline model-free and model-based RL algorithms.
SP:36c53ab1d8f25c8c61d8b1538ed304b710c14849
Conditional Generative Modeling for De Novo Hierarchical Multi-Label Functional Protein Design
1 INTRODUCTION . Designing proteins with a target biological function is an important task in biotechnology, with high-impact implications in pharmaceutical research, such as drug design or synthetic biology (Huang et al., 2016). However, the task is challenging, since the sequence-structure-function relationship of proteins is extremely complex and not yet understood (Dill & MacCallum, 2012). Functional protein design is currently done by traditional methods such as directed evolution (Arnold, 1998), which rely on a few random mutations of known proteins and selective pressure to explore a space of related proteins. However, this process can be time-consuming and cost-intensive, and most often explores only a small part of the sequence space. In parallel, data characterizing proteins and their functions is readily available and constitutes a promising opportunity for machine learning applications in protein sequence design. Moreover, the hierarchical organisation of protein functions in a complex ontology of labels could help machine learning models capture sequence-information relationships adequately. Recently, generative models have attempted to design proteins for different tasks, such as developing new therapies (Muller et al., 2018; Davidsen et al., 2019) or enzymes (Repecka et al., 2019). Nonetheless, most de novo protein sequence design methods, which generate sequences from scratch, focus on a specific function or on families of short proteins. Instead, we would like to model several different biological functions at the same time, to eventually be able to freely combine them. To this end, one first requires a model that is able to deal with and to understand the inherent label structure. We concern ourselves with the development of such a generative model. In this work, we introduce the general-purpose generative model ProteoGAN, a conditional generative adversarial network (cGAN) that is able to generate protein sequences given a large set of functions in the Gene Ontology (GO) Molecular Function directed acyclic graph (DAG) (Gene Ontology Consortium, 2019). To the extent of our knowledge, we are the first to propose a hierarchical multi-label de novo protein design framework that does not require prior knowledge about the protein, such as seed sequence fragments or structure. Our contributions can be summarized as follows: (i) we propose a data-driven approach to de novo functional protein generation that leverages a large set of annotated sequences, (ii) we present a new extensive evaluation scheme to assess validity, conditional consistency, diversity, and biological relevance of the generated sequences, and (iii) we conduct an in-depth model optimization to derive actionable insights on architectural choices and efficient conditioning mechanisms, while outperforming existing state-of-the-art protein generators. We focus on generative adversarial networks due to their promising performance on specific sequence design tasks (Repecka et al., 2019). We choose a conditional setting so as not to rely on oracles nor on multiple rounds of training-generation-measurement, since to this date a well-performing general-purpose predictor of protein function remains elusive (Zhou et al., 2019).
As opposed to most existing methods ( see Section 2 ) , we aim to generate a comprehensive variety of proteins exhibiting a wide range of functions , rather than focusing on optimising a single function within a unique protein family . As this is a different task from the ones found in the literature , we need to define an adequate evaluation pipeline . Therefore , we establish a multiclass protein generation evaluation scheme centered around validity and conditional consistency . The model should generate protein sequences whose distribution resembles that of natural proteins and hence have similar chemo-physical properties , and it should do so conditionally , namely generating proteins of a given functional class without off-target functions . We are hence confronted with the problem of assessing i ) the performance of the generative model in a general sense , which is defined by how well the generated distribution fits the training data distribution , and ii ) the conditional performance of the model which we define as a special case of the general performance , where we compare sequence feature distributions between labels . We therefore require distribution-based evaluations . A natural choice to evaluate the performance of a generative model is a two-sample test , which allows to answer whether a generated and a real set of samples ( i.e . the dataset ) could originate from the same distribution . The difficulty here is to define a measure that can handle the structured data , in our case protein sequences . To this end , we design Maximum Mean Discrepancy ( MMD ) -based evaluation criteria ( Gretton et al. , 2012 ) , which ensure good model performance and a functioning conditioning mechanism by measuring differences in empirical distribution between sets of generated and real protein sequences . To ensure diversity , we monitor the duality gap ( Grnarova et al. , 2019 ) , a domain-agnostic indicator for GAN training . Lastly , we use a series of biologically-driven criteria in the evaluation phase that confirms the biological validity of the generated protein by relying on the standard protein feature software ProFET ( Ofer & Linial , 2015 ) . With this arsenal of measures , and given the low computational complexity of our MMD-based criteria , we compare different architectural choices and hyperparameters in an extensive and efficient Bayesian Optimization and HyperBand ( BOHB ) ( Falkner et al. , 2018 ) search . In particular , we develop improved variants of two existing conditional mechanisms on GANs ( Odena et al. , 2017 ; Miyato & Koyama , 2018 ) and show for the first time that the previously unexplored combination of both is beneficial to conditional generation . Moreover , the selected model outperforms ( i ) de novo conditional model CVAE ( Greener et al. , 2018 ) , repurposed and trained towards functional protein generation , other introduced baselines ( HMM , n-gram model ) , and ( ii ) models specifically built to challenge the necessity of a conditional mechanism . The remainder of the document is organized as follows . First , the background and related work section gives a concise overview of the biological mechanisms underlying the function of proteins , summarises the state-of-the-art generative models applied to protein design , details some conditional mechanisms in GANs and identifies existing evaluation criteria for GANs and cGANs . 
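To make the MMD-based two-sample criterion concrete, here is a small sketch (our illustration, not ProteoGAN's evaluation code) of an unbiased squared-MMD estimate with a Gaussian kernel over fixed-length feature vectors extracted from sequences; the featurization and the kernel bandwidth are placeholder assumptions.

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased squared MMD (Gretton et al., 2012) with a Gaussian kernel.

    X: (n, p) features of generated sequences, Y: (m, p) features of real
    ones, e.g. k-mer counts or physico-chemical descriptors per sequence.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Drop the diagonal terms for the unbiased estimate.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 16))       # stand-ins for feature vectors
fake_good = rng.normal(0.0, 1.0, size=(200, 16))  # same distribution: MMD ~ 0
fake_bad = rng.normal(0.5, 1.0, size=(200, 16))   # shifted: noticeably larger MMD
print(mmd2_unbiased(real, fake_good), mmd2_unbiased(real, fake_bad))
```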
Subsequently , the method section describes ProteoGAN and its components and explains our protein generation evaluation framework . Finally , the results obtained by conditioning the generation of new sequences on 50 GO classes are presented and discussed before concluding with some final remarks . 2 BACKGROUND AND RELATED WORK . Biological mechanisms underlying protein functions . Proteins are biological structures that serve a wide variety of purposes in organisms . They are composed of chains of amino acids and can therefore be represented as simple sequences . However , the relationship between physico-chemical properties of amino-acids , three dimensional structure and resulting biological activity of the macromolecule is highly complex ( see supplementary Section A.1 ) . Nevertheless , since the advent of modern sequencing techniques , millions of proteins have been registered in databases , along with curated descriptions of their function . For example , the GO is a species-agnostic ontology that aims at classifying genes ( and the resulting proteins ) according to their functions , locations , and governing biological processes using a hierarchical structure of functional labels . As such , it represents an ideal interface between scientists who wish to design proteins with descriptive and modular labels , and a generative model that captures the complex relationships of sequence , structure and function . Guided and conditional generative models . Machine learning models and more recently deep generative models ( Eddy , 2004 ; Goodfellow et al. , 2014 ; Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Vaswani et al. , 2017 ; Li et al. , 2017a ) have been used to design in silico biological sequences , such as RNA , DNA or protein sequences ( R. Durbin & Mitchinson , 1998 ; Davidsen et al. , 2019 ; Brookes et al. , 2019 ; Hawkins-Hooker et al. , 2020 ; Costello & Martin , 2019 ; Anand & Huang , 2018 ) . Among them , several approaches have been developed in order to control sequence generation . They can be sorted into three categories : guided , conditional , or combinations thereof . Guided approaches use a predictor oracle in order to guide the design towards target properties , through iterative training , generation and prediction steps ( Brookes et al. , 2019 ; Gane et al. , 2019 ; Angermueller et al. , 2019 ; Gupta & Zou , 2019 ; Killoran et al. , 2017 ; Repecka et al. , 2019 ) . While these guided methods have the theoretical advantage of producing proteins with specific characteristics , for example brightness ( Brookes et al. , 2019 ) , they require an independent oracle . This oracle can itself be hard to train and remains imperfect , even for highly specialized prediction tasks . Moreover , the lack of well-functioning predictors for large numbers of labels impairs the usage of guided-generation techniques for multiclass applications such as functional protein generation ( Zhou et al. , 2019 ) . On the contrary , conditional approaches integrate the desired properties in the generation mechanism , eliminating the need for an oracle . Karimi et al . ( 2019 ) provided a guided conditional Wasserstein GAN to generate proteins with novel folds . Interestingly , Madani et al . ( 2020 ) developed ProGen , a conditional transformer that enables a controlled generation of a large range of functional proteins . However , the method ’ s need for sequence context can be experimentally constraining and is not compatible with de novo design . Ingraham et al .
( 2019 ) present a graph-based conditional generative model that unfortunately requires structural information , which is only sparsely available . Das et al . ( 2018 ) and Greener et al . ( 2018 ) train conditional VAEs in order to generate specific proteins , such as metalloproteins . Conditional mechanisms in GANs . Several conditional mechanisms have been proposed to conditionally generate samples with GANs . Among the most successful ones in conditional image generation tasks , Odena et al . ( 2017 ) introduced the auxiliary classifier GAN ( AC-GAN ) , which uses a third integrated network , in addition to the generator and the discriminator , to predict labels of both real and generated inputs to the discriminator . Miyato & Koyama ( 2018 ) proposed an alternative conditioning mechanism , where the label information is introduced to the network as the inner product of the embedded label vector and an intermediate layer of the network , a mechanism they refer to as projection . Projections can be seen as an alternative to simple concatenations of label information to the network input ( Mirza & Osindero , 2014 ) , in a way that respects the underlying probabilistic model . Generative models evaluation . To this date , there is no definitive consensus on the best measures for the evaluation of quality , diversity and conditional consistency of the output of a ( conditional ) generative model ( Papineni et al. , 2002 ; Salimans et al. , 2016 ; Heusel et al. , 2017 ; Shmelkov et al. , 2018 ; Kynkäänniemi et al. , 2019 ; DeVries et al. , 2019 ) . Most measures that stand out in computer vision such as the Inception Score ( IS ) ( Salimans et al. , 2016 ) , the Frechet Inception Distance ( FID ) ( Heusel et al. , 2017 ) , GAN-train and GAN-test ( Shmelkov et al. , 2018 ) depend on an external , domain-specific predictor . On the contrary , the domain-agnostic duality gap can be computed during training and at test time , and has been shown to correlate well with FID ( Grnarova et al. , 2019 ) . In functional protein prediction , results obtained by state-of-the-art classification models are encouraging but still neither accurate nor fast enough to entirely rely on them when evaluating and training GANs ( Fmax = 0.631 ) ( Radivojac et al. , 2013 ; Zhou et al. , 2019 ; You et al. , 2019 ) .
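To make the MMD-based evaluation criteria described earlier concrete , the following is a minimal sketch of a kernel two-sample statistic between feature sets of real and generated sequences . The k-mer featurization hinted at in the comments and the Gaussian-kernel bandwidth are illustrative assumptions , not the paper ’ s exact choices .

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Pairwise Gaussian (RBF) kernel between rows of X and Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of the squared MMD between samples X and Y."""
    k_xx = gaussian_kernel(X, X, bandwidth).mean()
    k_yy = gaussian_kernel(Y, Y, bandwidth).mean()
    k_xy = gaussian_kernel(X, Y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Illustrative usage: feats_real / feats_gen stand for fixed-length
# feature vectors (e.g. k-mer count profiles) of protein sequences.
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(128, 400))
feats_gen = rng.normal(loc=0.1, size=(128, 400))
print(mmd2(feats_real, feats_gen))
```

A per-label variant of the same statistic can probe conditional consistency by restricting both sample sets to sequences annotated with a given GO term .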
In this manuscript, the authors present a conditional GAN for generating protein sequences given specified GO terms. They argue that this approach to conditional protein generation is more appropriate than sequence-based generation, because it gets directly at functional specification. At a high level, this is an interesting idea though it has already started to be explored by other works. The authors are correct that these works focus primarily on optimizing a single function of interest. However, there doesn’t seem to be any specific reason that guided design approaches could not generalize to multiple criteria. Regardless, controlled generation of proteins with pre-specified functions is certainly interesting.
SP:c5a5db22e2ac2eaa16a74238256753e567b07d9a
AC-VAE: Learning Semantic Representation with VAE for Adaptive Clustering
1 INTRODUCTION Unsupervised representation learning is a long-standing interest in the field of machine learning ( Peng et al. , 2016a ; Chen et al. , 2016 ; 2018 ; Deng et al. , 2019 ; Peng et al. , 2016b ) , which offers a promising way to scale up the usable data amount for current artificial intelligence methods without the requirement for human annotation by leveraging the vast amount of unlabeled data ( Chen et al. , 2020b ; a ) . Recent works ( Chen et al. , 2020b ; a ; He et al. , 2020 ) advocate to structure the unsupervised representation learning at the pre-training stage and then apply semi-supervised or self-supervised techniques on the learned representations in the fine-tuning stage . The representation learning thus acts as a feature extractor , which extracts semantic features from the image , and well-extracted features should lead to excellent classification performance ( He et al. , 2020 ) . Moreover , representation learning assigns close vectors to images with similar semantic meanings , thus making it possible to cluster images with the same meaning together ( Xie et al. , 2016 ; Van Gansbeke et al. , 2020 ) . When no label is available , unsupervised or self-supervised classification methods rely on neighbor clustering to provide the supervisory signal that guides the self-supervised fine-tuning process ( Van Gansbeke et al. , 2020 ; Xie et al. , 2016 ) . In this scenario , accurately clustering neighbors among representations is crucial for the subsequent classification fine-tuning . In many of the prior unsupervised methods ( Van Gansbeke et al. , 2020 ; Xie et al. , 2016 ) , the neighbor clustering process is performed by KNN ( k-nearest neighbor ) based methods . However , KNN based methods introduce k as a hyperparameter , which needs to be fine-tuned for different datasets . In an unsupervised setup , selecting a suitable k without any annotation or prior knowledge is not straightforward . Therefore it is desirable to have a neighbor clustering process that automatically adapts to different datasets , thus eliminating the need for pre-selecting the hyperparameter k. To achieve adaptive neighbor clustering , the proposed method encodes the image representation into a multivariate normal distribution , as the multivariate normal distribution provides distance information , such as the z-score , which can naturally adapt to different datasets without the help of any additional mechanism . Prior works ( Kingma & Welling , 2013 ; Higgins et al. , 2016 ; Burgess et al. , 2018 ) showed VAE ’ s ability to encode images into multivariate normal distributions ; nonetheless , these works struggled to extract high-level semantic features , as most of them were trained by image recovery tasks , which encourage the network to focus on low-level imagery features . Consequently , the extracted low-level features cannot be utilized in the unsupervised classification method , which needs semantic features to function . To provide VAE with the ability to extract high-level semantic features , as well as to utilize its strength to produce adaptive clusters , this paper proposes a framework , AC-VAE , including a VAE-based network and a z-score based clustering method , as shown in Figure 1 . The VAE-based network encodes the image into the multivariate normal distribution N ( µ , Σ ) . The distribution ’ s mean µ is taken as the representation ; meanwhile , its z-score provides the boundary information that can naturally adapt to different datasets .
The proposed clustering method takes advantage of the boundary information to achieve adaptive neighbor clustering . The proposed framework ’ s efficacy is evaluated on the CIFAR10 , CIFAR100-20 , and STL datasets , and it surpasses the current state-of-the-art methods in neighbor clustering on these datasets . Particularly , AC-VAE achieves 95 % and 82 % accuracy on the CIFAR10 dataset when the average neighbor cluster sizes are 10 and 100 , surpassing the current state-of-the-art method by a margin of 10 % . Our main innovations and contributions can be summarized as follows : • This work proposes a VAE-based network to encode the image into a representation with its boundary information . The representation and boundary information are retrieved from the multivariate normal distribution , which is encoded from the image . The efficacy of the adaptive boundary is demonstrated by neighbor clustering results . • In this work , a loss function is proposed based on consistency regularization to train the VAE-based network for extracting high-level semantic features from the image . Experiments demonstrate that the proposed method assigns close vectors to images with similar semantic meanings . • This work proposes a clustering method that takes advantage of the adaptive boundary of each representation . The proposed method delivers high accuracy neighbor clusters . Besides , the neighbor clusters are found to converge within the clustering range ( α ≤ 2 ) , and a self-supervised learning framework utilizing the converged clusters delivers competitive results without the need to pre-select the parameter k. 2 RELATED WORKS Many frameworks cluster the dataset directly into semantic classes and train the network in an end-to-end manner ( Asano et al. , 2019 ; Caron et al. , 2019 ; Haeusser et al. , 2018 ; Yang et al. , 2016 ; Xie et al. , 2016 ) . Although the end-to-end training method is easy to apply , the network ’ s initialization largely influences these frameworks ’ performance . Therefore , complex mechanisms ( such as cluster reassignment ) are needed to assist the clustering process . As an alternative approach , methods ( Caron et al. , 2018 ; Hu et al. , 2017 ; Yang et al. , 2016 ) based on maximizing the mutual information between image augmentations have been proposed to address this issue . In contrast to end-to-end training , the multi-stage method ( Van Gansbeke et al. , 2020 ) was introduced , which first aims to obtain accurate neighbor clusters from representation learning and then applies these neighbor clusters in the subsequent fine-tuning ; this method made breakthroughs in unsupervised classification . It depends mainly on accurate neighbor cluster results from the representation learning . A large number of representation learning methods ( Doersch et al. , 2015 ; Gidaris et al. , 2018 ; Noroozi & Favaro , 2016 ; Pathak et al. , 2016 ; Zhang et al. , 2016 ) have been introduced , and these methods usually assign pretext tasks to the network , so that the network learns the image representation by solving these tasks . However , most of these methods aim to use the learned representations to serve subsequent supervised or semi-supervised tasks . Therefore , the neighbor clustering performance of these learned representations is not optimized , and few of these methods have strong neighbor clustering performance . SimCLR ( Chen et al. , 2020a ) and MoCo ( He et al. , 2020 ) ,
utilizing consistency regularization , outperform other methods in neighbor clustering performance and assisted the SCAN framework ( Van Gansbeke et al. , 2020 ) in reaching state-of-the-art results in unsupervised classification tasks . However , the SCAN framework needs to pre-select the hyperparameter k to perform the KNN clustering from the start . This paper aims to provide an adaptive clustering method that needs no hyperparameters , by creating a boundary for each representation . The representation and its boundary information are retrieved from a VAE-based structure . VAE-based networks are typically used for image generation tasks ( Kingma & Welling , 2013 ; Razavi et al. , 2019 ) or disentanglement tasks ( Higgins et al. , 2016 ; Burgess et al. , 2018 ) . Although VAE shows the potential to encode images into multivariate normal distributions ( Razavi et al. , 2019 ; Burgess et al. , 2018 ) , the efficacy of utilizing VAE to extract high-level representations is not heavily studied . Besides , VAEs are usually trained by different forms of image recovery tasks , which keep VAE away from extracting high-level semantic features . This paper adopts consistency regularization to train the proposed VAE-based network for extracting the high-level representation and its boundary information . Moreover , a clustering method is proposed to utilize this boundary information to deliver adaptive cluster results . In the end , these adaptive clusters are utilized in the unsupervised classification task . 3 METHOD The following sections first present the generative network that produces the representation and its boundary information , and then introduce the clustering method that benefits from the boundary information . 3.1 GENERATIVE MODEL In the unsupervised setup , the ground truth of the desired image representation is not available . However , there are general assumptions about the desired representation ’ s behavior pattern , i.e. , how desired representations should interact with each other , such as that images showing the same kind of object should have similar semantic representations . This paper introduces a latent vector that controls the representation ’ s behavior pattern and utilizes a VAE-based network to generate the representation from this latent vector . The proposed network aims to generate representations that follow the behavior assumption of the expected representation . In that case , the generated representation and the expected one would share the same latent vector , as the latent vector decides the representation ’ s behavior pattern . However , the generated representation may differ from the expected one , even when they have the same behavior pattern . It is hard to directly generate the desired representation , which would need the ground truth to train from the start . Therefore , this paper adopts the latent vector as a close approximation of the desired representation . The proposed VAE-based network is shown in Figure 2 . This paper represents the latent vector as a multivariate normal distribution N ( x ; µ , Σ ) encoded from the image x by an encoder e ( x ) . Then , a sample z is drawn from this distribution with a stochastic process . This random process creates variations that support tryouts to find this latent vector ’ s acceptable range in the training stage . Besides , the distribution ’ s mean µ is also taken as a standard latent vector to provide a short-cut for better encoder training .
As the encoder is a deep neural network for extracting high-level semantic features , the stochastically sampled latent vector z cannot provide a stable guide to train the deep encoder . The sum of the sampled latent vector z and the standard latent vector µ is therefore fed into the decoder d ( x ) to generate the representation r. The network is trained by behavior regulations , which impose constraints on the generated representation . This work adopts consistency regularization , a commonly used behavior assumption for semantic representations , which regulates an image and its augmentations to have the same semantic representation . The consistency regularization can be performed by minimizing the behavior loss stated in Equation ( 1 ) : BL_x = d ( r_i , r'_i ) = d ( v ( x_i ) , v ( T ( x_i ) ) ) , ( 1 ) in which T ( x_i ) is the augmentation of image x_i , r_i is the representation of x_i generated by the proposed network v ( · ) , and d ( · , · ) measures the distance between two representations . As the proposed network is based on VAE , the loss function of the vanilla VAE ( Kingma & Welling , 2013 ) is adapted to train the proposed network by replacing its image recovery loss with the behavior loss BL_x . The loss function used to train the proposed network is shown in Equation ( 2 ) : E_{z∼Q(z|x)} [ log P ( r|z ) ] − KL [ Q ( z|x ) || P ( z ) ] , ( 2 ) in which P ( r|z ) is the distribution of the representation r generated by the latent vector z , P ( z ) is the distribution of the latent vector , Q ( z|x ) is the distribution of z given x , and KL [ Q ( z|x ) || P ( z ) ] is the KL divergence between Q ( z|x ) and P ( z ) . As mentioned earlier , the latent distribution acts as a close approximation of the desired representation . The mean µ is regarded as the image representation , and the z-score of the distribution characterizes its boundary information .
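As an illustration of Equations ( 1 ) and ( 2 ) , here is a minimal training-step sketch , assuming a PyTorch-style encoder that returns ( µ , log σ² ) of Q ( z|x ) and a decoder that maps the summed latent vector to the representation r ; the cosine distance is a stand-in for the NT-Xent loss the paper actually uses in its experiments .

```python
import torch
import torch.nn.functional as F

def training_step(encoder, decoder, x, x_aug):
    """One step of the behavior-loss VAE objective (Eqs. 1-2)."""
    def represent(inp):
        mu, logvar = encoder(inp)              # parameters of N(mu, Sigma)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterized sample
        r = decoder(z + mu)                    # sampled + standard latent
        return r, mu, logvar

    r, mu, logvar = represent(x)
    r_aug, _, _ = represent(x_aug)

    # Behavior loss BL_x = d(r, r'); cosine distance is an illustrative
    # placeholder for the NT-Xent loss used in the paper's experiments.
    behavior = 1.0 - F.cosine_similarity(r, r_aug, dim=-1).mean()

    # KL[Q(z|x) || N(0, I)], the VAE regularizer that replaces the
    # usual image recovery term.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return behavior + kl
```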
3.2 NEIGHBOR CLUSTERING METHOD This work clusters the images based on the z-score based boundary information . The insight is that , when an image is encoded into a distribution , the image ’ s close variations should be located within a small z-score range of this distribution . Figure 3 ( a ) illustrates the representation and its boundary information , the z-score range , in a five-dimensional distribution . For neighbor clustering , a neighbor whose mean µ falls into the required z-score ranges will be clustered , and an illustrated cluster criterion is shown in Figure 3 ( b ) . This cluster criterion is strict , as it requires the clustered neighbor not only to be close to the root of the cluster but also to have a similar flow as the root . For fast calculation , this work proposes a z-score based distance-vector , in which each element of the vector corresponds to the distance in each dimension . The z-score is used because directly comparing distances between different normal distributions is not straightforward . The proposed z-score based distance-vector d ( x_i , x_j ) between x_i and x_j is shown in Equation ( 3 ) : d ( x_i , x_j ) = |µ_i − µ_j| / ( 2ασ_i ) − 0.5 , ( 3 ) in which µ_i is the mean of x_i ’ s latent distribution , and σ_i is the vector of diagonal elements of the covariance matrix Σ . The α controls the z-score range to be used : when α = 1 , the distance is normalized by the z-score range [ -1 , 1 ] . This work expands the z-score range to increase the cluster size until it reaches [ -2 , 2 ] , covering more than 95 % of the normal distribution . A sample that falls out of the z-score range of [ -2 , 2 ] is most unlikely to come from the distribution ’ s population ; therefore , the z-score range is limited to [ -2 , 2 ] . To be clustered , no element of the z-score based distance-vector should be larger than 0 . Besides , d ( x_i , x_j ) may not equal d ( x_j , x_i ) , as the z-score ranges of different representations may differ from each other , as demonstrated in Figure 3 ( c ) . By modifying α in Equation ( 3 ) , the z-score based distance changes accordingly . The cluster threshold α indicates how strict the clustering criterion is : a small α tightens the cluster restriction , and a large α relaxes it . As the experiments in Section 4 demonstrate , clusters are observed to converge on all evaluated datasets after α surpasses a certain value , so α needs no fine-tuning for each dataset . Notably , early convergence is observed in the experiments , in which the cluster size stops increasing before reaching the desired cluster number . This situation is introduced by the strict clustering method , which requires the neighbor representation to satisfy the criterion in all dimensions . In some cases , the clustering criteria are hard to reach ; hence the clustering process stops . To address this issue , this work introduces the loose match strategy and a parameter θ ( 0 < θ < 1 ) . Instead of requiring a full match in every dimension as in the standard clustering process , a certain rate of mismatch , 1 − θ , is accepted , as demonstrated in Figure 3 ( d ) . The loose match strategy is a backup method and is unnecessary for the unsupervised classification , as will be demonstrated in the experiment section ; a sketch of the full clustering rule follows below .
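A minimal sketch of the z-score based clustering rule referenced above , combining Equation ( 3 ) , the threshold α , and the loose-match parameter θ ; the array shapes and helper names are illustrative assumptions .

```python
import numpy as np

def zscore_distance(mu_i, sigma_i, mu_j, alpha=1.0):
    """Per-dimension distance of Eq. (3); an element <= 0 means
    that dimension lies within the root's z-score boundary."""
    return np.abs(mu_i - mu_j) / (2.0 * alpha * sigma_i) - 0.5

def neighbors(mu, sigma, root, alpha=1.0, theta=1.0):
    """Indices whose mean falls inside the root's z-score boundary.

    theta < 1 enables the loose-match strategy: a fraction 1 - theta of
    dimensions may violate the boundary and the sample is still clustered.
    """
    d = zscore_distance(mu[root], sigma[root], mu, alpha)  # (N, D)
    match_frac = (d <= 0).mean(axis=1)
    return np.flatnonzero(match_frac >= theta)

# Illustrative usage on random 512-dimensional latent distributions.
rng = np.random.default_rng(0)
mu = rng.normal(size=(1000, 512))
sigma = np.abs(rng.normal(size=(1000, 512))) + 0.1
print(neighbors(mu, sigma, root=0, alpha=2.0, theta=0.95))
```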
4 EXPERIMENTS This paper first evaluates the neighbor clustering performance of the proposed method . Then it uses the neighbor cluster results in a self-supervised classification framework to demonstrate the potential of the proposed method to support self-supervised classification . At last , an additional feature of the proposed method is introduced . 4.1 EXPERIMENTAL SETUP Experiments are performed on CIFAR10 ( Krizhevsky et al. , 2009 ) , CIFAR100-20 ( Krizhevsky et al. , 2009 ) and STL10 ( Coates et al. , 2011 ) . The proposed network is trained on the training set of all datasets . For all experiments , the same set of configurations is applied . A ResNet-34 is adopted as the encoder network , and a two-layer fully connected network is utilized as the decoder network . The latent distribution dimension is set to 512 , and the decoded representation vector has a size of 64 . Both the encoder and the decoder networks are initialized randomly . Besides , the NT-Xent loss ( Chen et al. , 2020a ) and its augmentation strategy are used for the behavior loss implementation . 4.2 EVALUATION OF NEIGHBOR CLUSTER The neighbor clustering performance is evaluated under cluster sizes of 10 , 50 , and 100 , as it is desired that a clustering method can maintain high accuracy when the cluster size keeps increasing . The accuracy of the neighbor clusters is obtained by averaging all neighbor clusters ’ accuracy . While in the proposed method neighbor clusters have different cluster sizes , the average cluster size is used for convenient comparison . For comparison , two instance discrimination based methods ( Caron et al. , 2018 ; 2019 ) , Rot ( Gidaris et al. , 2018 ) , SimCLR ( Chen et al. , 2020a ) , and MoCo ( He et al. , 2020 ) are chosen , as they all use consistency regularization to perform unsupervised representation learning . To get the cluster results , a KNN is applied to the learned representations . For the proposed VAE-based network , both the KNN and z-score based methods are employed for clustering . The comparing methods are not suitable for the z-score based clustering method , as it requires the distributional information . The neighbor cluster accuracy on the training set is studied , as the neighbor cluster results on the training set will be utilized to support the self-supervised classification training . The neighbor cluster accuracy comparison on the training set is shown in Table 1 . The proposed VAE-based network with KNN is compared with the others to demonstrate its ability to project images with the same meaning to close representations . With KNN clustering , the proposed method ’ s accuracy is lower than the state-of-the-art , SimCLR . This is because the sampling process introduced by the proposed network creates uncertainty in the representation , which contributes to the decline in accuracy . After the KNN is replaced by the z-score based clustering method , the proposed method ( AC-VAE ) outperforms all other methods in all cases . Notably , around 10 % increases are found when the cluster size is 10 on the CIFAR10 and STL datasets . This performance comes from the z-score based clustering method , which uses the boundary information to exclude those nearby samples that do not have the same flow shape . Figure 4 is used to demonstrate the efficacy of the proposed clustering method . In Figure 4 , the representations from different classes overlap each other in some areas . It is hard for a method that only considers distance to deliver a highly accurate cluster . To utilize the neighbor clustering method in unsupervised classification , the parameter α ’ s effect on the cluster size and cluster accuracy is also studied . The results are shown in Figure 5 . Clusters are found to naturally converge on the training set of all three datasets within the range of 0 < α ≤ 2 . As shown in Figure 5 ( a ) , the cluster size remains the same after the threshold α reaches a certain point . These results benefit from encoding the image as a distribution . For a distribution , most of its population will fall into its z-score range of [ -2 , 2 ] ( α = 2 ) , covering 95 % of the distribution population . During the network training , the VAE-based network should push samples with similar meanings into this high-possibility region . This analysis matches the experimental results , in which the convergence happened before the z-score range expanded to [ -2 , 2 ] , as shown in Figure 5 .

Table 2 : The α and θ applied in the experiments

Dataset        CIFAR10            CIFAR100-20         STL
Cluster Size   10    50    100    10    50    100     10    50    100
α              0.52  0.94  1.28   1.45  1.62  1.72    0.57  1.24  1.42
θ              N/A   N/A   N/A    N/A   0.95  0.91    N/A   0.89  0.92

To get the results mentioned in Table 1 , the α and θ listed in Table 2 are applied . However , these hyperparameters are precisely selected for easy comparison to the KNN based clustering results . In practice , there is no need to fine-tune α or θ to reach a specific cluster size unless a particular neighbor cluster size is highly desired . The clustered neighbor number and cluster accuracy under different thresholds α were also examined : the cluster size remains the same after the threshold α reaches a certain point , implying that the method provides a converged cluster result , and after the clusters converge , the cluster accuracy remains 87 % on the CIFAR10 dataset . This behavior is offered by the clustering method , which only clusters a representation when every dimension of the representation falls into the boundary ; as the representation is a high-dimensional distribution , this boundary requires a neighbor to have the same manifold in order to be clustered . Figure 4 : ( a ) the clustered neighbor number vs. the threshold α ; ( b ) the average cluster accuracy vs. the threshold α . Curriculum learning with prototypes . As training progresses , the representation ’ s clustering ability increases without hugely sacrificing the accuracy of the clustering results . This suggests the framework produces a natural curriculum learning , in which easy samples first start to collect data , and complex samples collect data later along with the network training progress . Besides , we visualize the prototypes from each class . The results are shown together in Figure 5 . Figure 5 : Prototype images on the different datasets , ( a ) CIFAR10 , ( b ) CIFAR100-20 , ( c ) STL . 4.3 SELF-SUPERVISED CLASSIFICATION In order to demonstrate the efficacy of utilizing the proposed method to perform unsupervised classification , the self-supervised classification framework SCAN ( Van Gansbeke et al. , 2020 ) is adapted by replacing its KNN based neighbor clustering method with the proposed method . The converged clusters shown in Section 4.2 are utilized , in which all the clusters naturally converged without using the loose match strategy or fine-tuning α . Therefore , the self-adaptive cluster results are used to perform the unsupervised classification . As Table 3 demonstrates , the framework utilizing the adaptive cluster results outperforms most of the comparing methods . When compared to the state-of-the-art ( SCAN , Van Gansbeke et al . ( 2020 ) ) , the proposed method still delivers competitive results without selecting the hyperparameter k.
This paper proposes an adaptive neighbor clustering method by estimating a normal distribution in the representation space. The proposed neighbor clustering can utilize the acceptable range for each dimension of each instance, derived from the estimated variance, which leads to a different neighbor size for each instance and improves neighbor clustering performance. In addition, the proposed neighbor clustering method can replace the KNN-based neighbor clustering in the previous SCAN (Semantic Clustering by Adopting Nearest neighbors) framework for image semantic clustering.
SP:9ebf89e9e24ce1a745f97b9d33bb5ec9979e60e5
Multi-View Disentangled Representation
1 INTRODUCTION . Multi-view representation learning ( MRL ) involves learning representations by effectively leveraging information from different perspectives . The representations produced by MRL are effective when correlations across different views are accurately modeled and thus properly exploited for downstream tasks . One representative algorithm , Canonical Correlation Analysis ( CCA ) ( Hotelling , 1992 ) , aims to maximize linear correlations between two views under the assumption that factors from different views are highly correlated . Under a similar assumption , the extended versions of CCA , including kernelized CCA ( Akaho , 2006 ) and Deep CCA ( Andrew et al. , 2013 ) , explore more general correlations . There are also several methods ( Cao et al. , 2015 ; Sublime et al. , 2017 ) that maximize the independence between different views to enhance the complementarity . Going beyond the simple assumptions above , the latent representation encodes different views with a degradation process implicitly exploiting both consistency and complementarity ( Zhang et al. , 2019 ) . These existing MRL algorithms are effective ; however , the assumed correlations between different views are usually simple and thus cannot accurately model or explicitly disentangle complex real-world correlations , which hinders further improvement and interpretability . Although there are a few heuristic algorithms ( Tsai et al. , 2019 ; Hu et al. , 2017 ) that explicitly decompose the multi-view representation into shared and view-specific parts , they are specifically designed for supervised learning tasks without any disentangled representation guarantee and fall short in formally defining the relationships between different parts . To address this issue , we propose to unsupervisedly disentangle the original data from different views into a shared representation across different views and an exclusive ( private ) part within each view , which explicitly depicts the correlations and thus not only enhances the performance of existing tasks but could also inspire potential applications . Specifically , we firstly provide a definition for the multi-view disentangled representation by introducing the sufficient and necessary conditions for guaranteeing the disentanglement of different views . According to these conditions , an information-theory-based algorithm is proposed to accurately disentangle different views . To summarize , the main contributions of our work are as follows : • To the best of our knowledge , this is the first work to formally study multi-view disentangled representation with strict conditions , which might serve as the foundation of future research on this problem . • Based on our definition , we propose a multi-view disentangling model , in which information-theory-based multi-view disentangling can accurately decompose the information into a shared representation across different views and an exclusive representation within each view . The explicit decomposition enhances the performance of multi-view analysis tasks and could also inspire new potential applications . • Different from single-view unsupervised disentangled representation learning ( Locatello et al. , 2019 ) , we provide a new paradigm for unsupervised disentangled representation learning from a fresh perspective - disentangling factors between different views instead of within each single view .
• Extensive experiments on a range of applications verify that the proposed information-theory-based multi-view disentangling algorithm can accurately disentangle data from multiple views into the expected shared and exclusive representations . 2 MULTI-VIEW DISENTANGLED REPRESENTATION Existing multi-view representation learning methods ( Wu & Goodman , 2018 ; Zhang et al. , 2019 ) can obtain a common representation for multi-view data ; however , the correlations between different views are not explicitly expressed . The supervised algorithms ( Hu et al. , 2017 ; Tan et al. , 2019 ) can decompose multiple views into a common part and private parts , but there is no disentangling guarantee . Therefore , we propose a multi-view disentanglement algorithm that can explicitly separate the shared and exclusive information in unsupervised settings . Formally , we first propose a definition of a multi-view disentangled representation by introducing four criteria , which are considered as sufficient and necessary conditions of disentangling multiple views . The definition is as follows : Definition 2.1 ( Multi-View Disentangled Representation ) Given a sample with two views , i.e. , X = { x_i }_{i=1}^{2} , the representation S_dis = { s_i , e_i }_{i=1}^{2} is a multi-view disentangled representation if the following conditions are satisfied : • Completeness : ( 1 ) The shared representation s_i and exclusive representation e_i should jointly contain all information of the original representation x_i ; • Exclusivity : ( 2 ) There is no shared information between the common representation s_i and the exclusive representation e_i , which ensures the exclusivity within each view ( intra-view ) . ( 3 ) There is no shared information between e_i and e_j , which ensures the exclusivity between the private information of different views ( inter-view ) . • Commonality : ( 4 ) The common representations s_i and s_j should contain the same information . Equipped with the exclusivity constraints , the common representations are guaranteed to not only be the same but also contain maximized common information . The necessity of each criterion is illustrated in Fig . 1 ( satisfaction of all four conditions produces exact disentanglement , and violation of any condition may result in an unexpected disentangled representation ) . Note that existing ( single-view ) unsupervised disentanglement focuses on learning a representation to identify explanatory factors of variation , which has been proved fundamentally impossible ( Locatello et al. , 2019 ) . The goal of the proposed multi-view disentanglement is to disentangle multiple views into the shared and exclusive parts , which can be well guaranteed as illustrated in Definition 2.1 and Fig . 1 . Mutual information has been widely used in representation learning ( Hjelm et al. , 2019 ; Belghazi et al. , 2018 ) . In probability theory and information theory , the mutual information of two random variables quantifies the “ amount of information ” obtained about one random variable when observing the other one , which is well-suited for measuring the amount of shared information between two different views . To approach the disentangling goal , according to conditions ( 1 ) – ( 4 ) , the general form of the objective function is naturally induced as : max \sum_{i=1}^{2} [ I ( x_i ; e_i , s_i ) − I ( e_i ; s_i ) ] − \sum_{i ≠ j} I ( e_i ; e_j ) + \sum_{i ≠ j} I ( s_i ; s_j ) , ( 1 ) where the four terms correspond to conditions ( 1 ) , ( 2 ) , ( 3 ) and ( 4 ) respectively , and I ( · ; · ) denotes the mutual information . We provide an implementation in Fig .
2 and , in the following subsections , we will describe this implementation in detail . 2.1 CONDITION ( 1 ) : INFORMATION PRESERVATION FOR THE SHARED AND EXCLUSIVE REPRESENTATIONS . • How to maximize I ( x ; e , s ) ? For simplicity , x , s , e and x_i , s_i , e_i are denoted with the same meanings and used alternately , where the former and the latter are used for intra-view and inter-view cases , respectively . To preserve the information from the original data in the shared and exclusive representations , the mutual information I ( x ; e , s ) should be maximized . There are different ways to implement the maximization of I ( x ; e , s ) based on the following assumptions . Assumption 2.1 The shared representation s and exclusive representation e are simultaneously independent and conditionally independent : p ( s , e ) = p ( s ) p ( e ) , p ( s , e|x ) = p ( s|x ) p ( e|x ) . ( 2 ) Firstly , we expand I ( x ; e , s ) to obtain the following equation ( more details are shown in supplement C ) : I ( x ; e , s ) = ∫∫∫ p ( x ) p ( e , s|x ) log [ p ( e , s|x ) / p ( e , s ) ] de ds dx . Then , under Assumption 2.1 , the following equation is derived ( more details are shown in supplement C ) : I ( x ; e , s ) = I ( x ; e ) + I ( x ; s ) . ( 3 ) According to the above equation , it seems that we can maximize I ( x ; e ) + I ( x ; s ) to maximize I ( x ; e , s ) , which involves making s and e contain as much information from x as possible ( ideally , it will produce e and s that meet I ( x ; e ) = I ( x ; s ) = H ( x ) , where H ( x ) is the entropy of x ) . This actually leads to a strong correlation between s and e , which is in conflict with the independence Assumption 2.1 about s and e. In other words , it is difficult to balance the completeness ( condition ( 1 ) ) and intra-view exclusivity ( condition ( 2 ) ) ( see experimental results in supplement B.4 ) . Fortunately , there is an alternative strategy which avoids the difficulty in balancing the completeness and intra-view exclusivity . Specifically , we introduce a latent representation r generated by two independent distributions with respect to s and e under a mild assumption : Assumption 2.2 ( Relationship between s , e and r ) : p ( s , e , x ) = p ( r , x ) . ( 4 ) In our formulation , we define r = f ( s , e ) , where r is derived from s and e with the underlying function f ( · ) and satisfies p ( r , x ) = p ( s , e , x ) . Eq . 4 is a mild assumption ; for example , the invertibility of the mapping r = f ( s , e ) is a sufficient condition which can be easily verified . Note that r = [ s , e ] is one special case and will be discussed later . Based on Assumption 2.2 , we can get ( more details are shown in supplement C ) : p ( r ) = p ( s , e ) , p ( r|x ) = p ( s , e|x ) . ( 5 ) Then , we can induce the following result ( more details are shown in supplement C ) : I ( x ; e , s ) = I ( x ; r ) . ( 6 ) This result indicates that the maximization of I ( x ; e , s ) can be achieved by maximizing the mutual information of the agency r and x . In this way , the independence of e and s is well preserved and the previous conflict is dispelled . Next , we will explain how to encode the information of x into the independent representations s and e by introducing the agency r. • How to obtain independent representations e and s by maximizing I ( x ; r ) ? First , we consider encoding the observed data x into a latent representation r by maximizing the mutual information between x and r. Considering robustness and effectiveness ( Alemi et al. , 2018 ) ,
we can maximize the mutual information between r and x through Variational Autoencoders ( VAEs ) ( Kingma & Welling , 2014 ) . Accordingly , we have the following objective function : min_{q_r , d} E_{x∼p(x)} [ − E_{r∼q_r(r|x)} [ log d ( x|r ) ] + E_{r∼q_r(r|x)} log ( q_r ( r|x ) / p ( r ) ) ] , ( 7 ) where d ( x|r ) ( the “ decoder ” ) is a variational approximation to p ( x|r ) , and q_r ( r|x ) ( the “ encoder ” ) is a variational approximation to p ( r|x ) , which converts the observed data x into the latent representation r. Second , we consider how to obtain independent representations e and s by modeling q_r ( r|x ) . For this goal , the relationships between s , e and r should be jointly modeled . As shown in Eq . 5 , we obtain p ( r|x ) = p ( s , e|x ) . Under Assumption 2.1 , Eq . 5 can be rewritten as p ( r|x ) = p ( s|x ) p ( e|x ) , which implies that q_r ( r|x ) can be considered as the product of p ( s|x ) and p ( e|x ) . Furthermore , we introduce PoE ( product-of-experts ) ( Hinton , 2002 ; Wu & Goodman , 2018 ) to model the product of q_s ( s|x ) and q_e ( e|x ) , where the variational networks q_s ( s|x ) and q_e ( e|x ) are designed to approximate p ( s|x ) and p ( e|x ) . It is worth noting that the key difference from MVAE ( Multimodal Variational Autoencoder ) ( Wu & Goodman , 2018 ) is that our model obtains the latent representation r from two independent components within each single view , while MVAE achieves the unified representation of all views by assuming independence of the representations of different views . Under the assumption that the true posteriors for the individual factors p ( s|x ) and p ( e|x ) are contained in the family of their variational counterparts q_s ( s|x ) and q_e ( e|x ) , we have q_r ( r|x ) = q_s ( s|x ) q_e ( e|x ) . With Gaussian distributions , we can obtain the closed-form solution for the product of two distributions : µ_r = ( µ_s σ_e^2 + µ_e σ_s^2 ) / ( σ_s^2 + σ_e^2 ) , σ_r^2 = σ_s^2 σ_e^2 / ( σ_s^2 + σ_e^2 ) . Therefore , the independence between e and s is well preserved by modeling q_r ( r|x ) . Accordingly , with the results q_r ( r|x ) = q_s ( s|x ) q_e ( e|x ) and p ( r ) = p ( s ) p ( e ) , the objective in Eq . 7 is rewritten as : min_{q_s , q_e , d} E_{x∼p(x)} [ − E_{r∼q_s(s|x) q_e(e|x)} [ log d ( x|r ) ] + E_{s∼q_s(s|x)} log ( q_s ( s|x ) / p ( s ) ) + E_{e∼q_e(e|x)} log ( q_e ( e|x ) / p ( e ) ) ] , where p ( s ) and p ( e ) are set to Gaussian distributions , which in turn forces q_s ( s|x ) and q_e ( e|x ) to be closer to Gaussian distributions , allowing us to find the product of the two distributions . The above objective is actually the ELBO ( evidence lower bound ) ( Kingma & Welling , 2014 ) with the first term being the reconstruction loss , and the second and third terms being the KL divergence . The proposed variant of VAE inherits two advantages from VAE and PoE , respectively . The first is that we can obtain approximate distributions of s and e given x to preserve the independence . The second is that the proposed model still works even when e or s is missing at test time . This means that we can use only s or e as input to the decoder to reconstruct x ( shown in the experimental section ) , which is quite different from the concatenation of e and s or other forms that require e and s simultaneously to obtain r. In addition , the way of concatenating s and e does not fully exploit the independence prior of s and e .
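The closed-form product of two diagonal Gaussians used above can be sketched in a few lines ; the precision-weighted form below is the standard PoE result and is written here as an illustration , not as the authors ’ exact implementation .

```python
import numpy as np

def product_of_gaussians(mu_s, var_s, mu_e, var_e):
    """Closed-form product of two diagonal Gaussians (PoE).

    The product density N(mu_r, var_r) is proportional to
    N(mu_s, var_s) * N(mu_e, var_e), combined precision-weighted.
    """
    var_r = (var_s * var_e) / (var_s + var_e)
    mu_r = (mu_s * var_e + mu_e * var_s) / (var_s + var_e)
    return mu_r, var_r

# Illustrative usage with 8-dimensional shared/exclusive factors.
rng = np.random.default_rng(0)
mu_s, var_s = rng.normal(size=8), np.full(8, 0.5)
mu_e, var_e = rng.normal(size=8), np.full(8, 2.0)
mu_r, var_r = product_of_gaussians(mu_s, var_s, mu_e, var_e)
print(mu_r, var_r)
```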
The goal of this paper is to define multi-view disentanglement in an unsupervised manner. The authors list four principal rules for representation disentanglement, including completeness of the combined shared and specific representations, exclusivity both between the two specific representations and between the specific and shared representations, and commonality of the two shared representations. The authors follow the above rules to design a VAE-based model and demonstrate favorable results on image clustering and classification tasks.
SP:751b7dfd3e7de8e6f533137fc9ae7a65583c09e0
You Only Need Adversarial Supervision for Semantic Image Synthesis
1 INTRODUCTION . Conditional generative adversarial networks ( GANs ) ( Mirza & Osindero , 2014 ) synthesize images conditioned on class labels ( Zhang et al. , 2019 ; Brock et al. , 2019 ) , text ( Reed et al. , 2016 ; Zhang et al. , 2018a ) , other images ( Isola et al. , 2017 ; Huang et al. , 2018 ) , or semantic label maps ( Wang et al. , 2018 ; Park et al. , 2019 ) . In this work , we focus on the latter , addressing semantic image synthesis . Semantic image synthesis enables rendering of realistic images from user-specified layouts , without the use of an intricate graphic engine . Therefore , its applications range widely from content creation and image editing to generating training data that needs to adhere to specific semantic requirements ( Wang et al. , 2018 ; Chen & Koltun , 2017 ) . Despite the recent progress on stabilizing GANs ( Gulrajani et al. , 2017 ; Miyato et al. , 2018 ; Zhang & Khoreva , 2019 ) and developing their architectures ( Zhang et al. , 2019 ; Karras et al. , 2019 ) , state-of-the-art GAN-based semantic image synthesis models ( Park et al. , 2019 ; Liu et al. , 2019 ) still greatly suffer from training instabilities and poor image quality when trained only with adversarial supervision ( see Fig . 1 ) . An established practice to overcome this issue is to employ a perceptual loss ( Wang et al. , 2018 ) to train the generator , in addition to the discriminator loss . The perceptual loss aims to match intermediate features of synthetic and real images , that are estimated via an external perception network . A popular choice for such a network is VGG ( Simonyan & Zisserman , 2015 ) , pre-trained on ImageNet ( Deng et al. , 2009 ) . Although the perceptual loss substantially improves the accuracy of previous methods , it comes with the computational overhead introduced by utilizing an extra network for training . Moreover , it usually dominates over the adversarial loss during training , which can have a negative impact on the diversity and quality of generated images , as we show in our experiments . Therefore , in this work we propose a novel , simplified model that achieves state-of-the-art results without requiring a perceptual loss . A fundamental question for GAN-based semantic image synthesis models is how to design the discriminator to efficiently utilize information from the given semantic label maps . Conventional methods ( Park et al. , 2019 ; Wang et al. , 2018 ; Liu et al. , 2019 ; Isola et al. , 2017 ) adopt a multi-scale classification network , taking the label map as input along with the image , and making a global image-level real/fake decision . Such a discriminator has limited representation power , as it is not incentivized to learn high-fidelity pixel-level details of the images and their precise alignment with the input semantic label maps . To mitigate this issue , we propose an alternative architecture for the discriminator , re-designing it as an encoder-decoder semantic segmentation network ( Ronneberger et al. , 2015 ) , and directly exploiting the given semantic label maps as ground truth via a ( N+1 ) -class cross-entropy loss ( see Fig . 3 ) . This new discriminator provides semantically-aware pixel-level feedback to the generator , partitioning the image into segments belonging to one of the N real semantic classes or the fake class .
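A minimal sketch of such an ( N+1 ) -class adversarial segmentation objective , assuming a discriminator that outputs per-pixel logits of shape ( B , N+1 , H , W ) and integer label maps ; loss weighting and other details of the actual OASIS code are omitted .

```python
import torch
import torch.nn.functional as F

def d_loss(disc, real_img, fake_img, label_map, n_classes):
    """(N+1)-class adversarial segmentation loss for the discriminator.

    label_map holds the semantic class (0..N-1) of every real pixel;
    class index n_classes is reserved for 'fake'.
    """
    logits_real = disc(real_img)
    logits_fake = disc(fake_img.detach())
    fake_target = torch.full_like(label_map, n_classes)

    loss_real = F.cross_entropy(logits_real, label_map)    # real -> own class
    loss_fake = F.cross_entropy(logits_fake, fake_target)  # fake -> class N
    return loss_real + loss_fake

def g_loss(disc, fake_img, label_map):
    """The generator tries to get fake pixels classified as their semantic class."""
    return F.cross_entropy(disc(fake_img), label_map)
```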
Enabled by the discriminator ’ s per-pixel response , we further introduce a LabelMix regularization , which fosters the discriminator to focus more on the semantic and structural differences of real and synthetic images ( a sketch is given at the end of this introduction ) . The proposed changes lead to a much stronger discriminator , that maintains a powerful semantic representation of objects , giving more meaningful feedback to the generator , and thus making the perceptual loss supervision superfluous ( see Fig . 1 ) . Next , we propose to enable multi-modal synthesis of the generator via 3D noise sampling . Previously , directly using 1D noise as input was not successful for semantic image synthesis , as the generator tended to mostly ignore it or synthesized images of poor quality ( Isola et al. , 2017 ; Wang et al. , 2018 ) . Thus , prior work ( Wang et al. , 2018 ; Park et al. , 2019 ) resorted to using an image encoder to produce multi-modal outputs . In this work , we propose a lighter solution . Empowered by our stronger discriminator , the generator can effectively synthesize different images by simply re-sampling a 3D noise tensor , which is used not only as the input but also combined with intermediate features via conditional normalization at every layer . Such noise is spatially sensitive , so we can re-sample it both globally ( channel-wise ) and locally ( pixel-wise ) , allowing us to change not only the appearance of the whole scene , but also of specific semantic classes or any chosen areas ( see Fig . 2 ) . We call our model OASIS , as it needs only adversarial supervision for semantic image synthesis . In summary , our main contributions are : ( 1 ) We propose a novel segmentation-based discriminator architecture , that gives more powerful feedback to the generator and eliminates the necessity of the perceptual loss supervision . ( 2 ) We present a simple 3D noise sampling scheme , notably increasing the diversity of multi-modal synthesis and enabling complete or partial change of the generated image . ( 3 ) With the OASIS model , we achieve high-quality results on the ADE20K , Cityscapes and COCO-stuff datasets , on average improving the state of the art by 6 FID and 5 mIoU points , while relying only on adversarial supervision . We show that images synthesized by OASIS exhibit much higher diversity and more closely follow the color and texture distributions of real images . Our code and pretrained models are available at https : //github.com/boschresearch/OASIS .
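As announced above , a minimal sketch of a LabelMix-style consistency term , under the assumption that LabelMix mixes real and fake images with a binary mask generated from the label map ( so class boundaries are respected ) and penalizes the discrepancy between the discriminator ’ s logits on the mixed image and the same mix applied to its logits on the real and fake images ; this is a plausible reading of the description , not the paper ’ s verbatim definition .

```python
import torch

def labelmix_loss(disc, real_img, fake_img, mask):
    """Consistency regularization on mixed real/fake images.

    mask is an assumed binary (B, 1, H, W) tensor whose regions follow
    the semantic label map, so no class boundary is cut by the mix.
    """
    mixed = mask * real_img + (1 - mask) * fake_img
    logits_mixed = disc(mixed)
    # The same mix applied in logit space should agree with D(mixed).
    logits_target = mask * disc(real_img) + (1 - mask) * disc(fake_img)
    return torch.mean((logits_mixed - logits_target) ** 2)
```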
2 RELATED WORK . Semantic image synthesis . Pix2pix ( Isola et al. , 2017 ) first proposed to use conditional GANs ( Mirza & Osindero , 2014 ) for semantic image synthesis , adopting an encoder-decoder generator which takes semantic label maps as input , and employing a PatchGAN discriminator . Since then , various generator and discriminator modifications have been introduced ( Wang et al. , 2018 ; Park et al. , 2019 ; Liu et al. , 2019 ; Tang et al. , 2020c ; b ; Ntavelis et al. , 2020 ) . Besides GANs , Chen & Koltun ( 2017 ) proposed to use a cascaded refinement network ( CRN ) for high-resolution semantic image synthesis , and SIMS ( Qi et al. , 2018 ) extended it with a non-parametric component , serving as a memory bank of source material to assist the synthesis . Further , Li et al . ( 2019 ) employed implicit maximum likelihood estimation ( Li & Malik , 2018 ) to increase the variety of the CRN model . However , these approaches still underperform in comparison to state-of-the-art GAN models . Therefore , next we focus on the recent GAN architectures for semantic image synthesis . Discriminator architectures . Pix2pix ( Isola et al. , 2017 ) , Pix2pixHD ( Wang et al. , 2018 ) and SPADE ( Park et al. , 2019 ) all employed a multi-scale PatchGAN discriminator , that takes an image and its semantic label map as input . CC-FPSE ( Liu et al. , 2019 ) proposed a feature-pyramid discriminator , embedding both images and label maps into a joint feature map , and then consecutively upsampling it in order to classify it as real/fake at multiple scales . LGGAN ( Tang et al. , 2020c ) introduced a classification-based feature learning module to learn more discriminative and class-specific features . In this work , we propose to use a pixel-wise semantic segmentation network as a discriminator instead of multi-scale image classifiers as in the above approaches , and to directly exploit the semantic label maps for its supervision . Segmentation-based discriminators have been shown to improve semantic segmentation ( Souly et al. , 2017 ) and unconditional image synthesis ( Schönfeld et al. , 2020 ) , but to the best of our knowledge have not been explored for semantic image synthesis , and our work is the first to apply an adversarial semantic segmentation loss for this task . Generator architectures . Conventionally , the semantic label map is provided to the image generation pipeline via an encoder ( Isola et al. , 2017 ; Wang et al. , 2018 ; Tang et al. , 2020c ; b ; Ntavelis et al. , 2020 ) . However , this is shown to be suboptimal at preserving the semantic information until the later stages of image generation . Therefore , SPADE introduced a spatially-adaptive normalization layer that directly modulates the label map onto the generator ’ s hidden layer outputs at various scales . Alternatively , CC-FPSE proposed to use spatially-varying convolution kernels conditioned on the label map . Struggling with generating diverse images from noise , both Pix2pixHD and SPADE resorted to having an image encoder in the generator design to enable multi-modal synthesis . The generator then combines the extracted image style with the label map to reconstruct the original image . By altering the style vector , one can generate multiple outputs conditioned on the same label map . However , using an image encoder is a resource-demanding solution . In this work , we enable multi-modal synthesis directly through sampling of a 3D noise tensor injected at every layer of the network . Differently from the structured noise injection of Alharbi & Wonka ( 2020 ) and the class-specific latent codes of Zhu et al . ( 2020 ) , we inject the 3D noise along with label maps and adjust it to the image resolution , also enabling re-sampling of selected semantic segments ( see Fig . 2 ) . Perceptual losses . Gatys et al . ( 2015 ) ; Gatys et al . ( 2016 ) ; Johnson et al . ( 2016 ) and Bruna et al . ( 2016 ) were pioneers at exploiting perceptual losses to produce high-quality images for super-resolution and style transfer using convolutional networks . For semantic image synthesis , the VGG-based perceptual loss was first introduced by CRN , and later adopted by Pix2pixHD . Since then , it has become a default for training the generator ( Park et al. , 2019 ; Liu et al. , 2019 ; Tan et al. , 2020 ; Tang et al. , 2020a ) . As the perceptual loss is based on a VGG network pre-trained on ImageNet ( Deng et al. , 2009 ) , methods relying on it are constrained by the ImageNet domain and the representational power of VGG .
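For reference , a minimal sketch of the VGG feature-matching ( perceptual ) loss that the paper argues against ; the layer indices , the L1 distance , and the equal weighting are illustrative assumptions rather than the exact CRN/Pix2pixHD configuration .

```python
import torch
import torchvision

class PerceptualLoss(torch.nn.Module):
    """VGG feature-matching loss between synthesized and real images."""

    def __init__(self, layer_ids=(3, 8, 15, 22)):
        super().__init__()
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
        self.vgg = vgg
        self.layer_ids = set(layer_ids)   # illustrative layer choice
        for p in self.vgg.parameters():
            p.requires_grad_(False)       # VGG stays fixed

    def forward(self, fake, real):
        loss, x, y = 0.0, fake, real
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                loss = loss + torch.nn.functional.l1_loss(x, y)
        return loss
```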
With the recent progress on GAN training , e.g . by architecture designs and regularization techniques , the actual necessity of the perceptual loss requires a reassessment . We experimentally show that such a loss imposes unnecessary constraints on the generator , significantly limiting sample diversity . Our model , trained without the VGG loss , achieves improved image diversity while not compromising image quality . 3 OASIS MODEL . In this section , we present our OASIS model , which , in contrast to other semantic image synthesis methods , needs only adversarial supervision for generator training . Using SPADE as a starting point ( Sec . 3.1 ) , we first propose to re-design the discriminator as a semantic segmentation network , directly using the given semantic label maps as ground truth ( Sec . 3.2 ) . Empowered by the spatially- and semantically-aware feedback of the new discriminator , we next re-design the SPADE generator , enabling its effective multi-modal synthesis via 3D noise sampling ( Sec . 3.3 ) .
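A minimal sketch of how a spatially sensitive 3D noise tensor could be sampled and locally re-sampled for a chosen semantic class ; the function name , channel count , and one-hot label-map layout are illustrative assumptions , not the paper ’ s implementation .

```python
import torch

def sample_3d_noise(label_map, noise_ch=64, resample_class=None, old=None):
    """Spatial 3D noise tensor matched to the label map resolution.

    label_map: assumed one-hot tensor of shape (B, C, H, W).
    With resample_class set (and 'old' noise given), only pixels of that
    semantic class receive fresh noise, so a single segment changes
    appearance while the rest of the scene stays fixed.
    """
    b, _, h, w = label_map.shape
    fresh = torch.randn(b, noise_ch, h, w)
    if resample_class is None or old is None:
        return fresh
    region = (label_map.argmax(dim=1, keepdim=True) == resample_class)
    return torch.where(region, fresh, old)

# Usage idea: concatenate the noise with the one-hot label map and feed
# it to the generator (and its conditional normalization layers).
```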
In this paper, the authors approach the problem of conditional image generation via generative adversarial networks. To this end, they propose an approach that utilizes only semantic segmentation annotations and an adversarial loss. No perceptual loss is required. Their discriminator leverages semantic labels to improve the image generations. They evaluate their approach on a variety of datasets including ADE20K, COCO, and CityScapes. They demonstrate substantial quantitative and qualitative improvements over baselines and perform an ablation analysis.
SP:cef0728e41977750c56af5228b0e0dff4ec13358
Parametric Density Estimation with Uncertainty using Deep Ensembles
In parametric density estimation, the parameters of a known probability density are typically recovered from measurements by maximizing the log-likelihood. Prior knowledge of measurement uncertainties is not included in this method, potentially producing degraded or even biased parameter estimates. We propose an efficient two-step, general-purpose approach for parametric density estimation using deep ensembles. Feature predictions and their uncertainties are returned by a deep ensemble and then combined in an importance weighted maximum likelihood estimation to recover parameters representing a known density, along with their respective errors. To compare the bias-variance tradeoff of different approaches, we define an appropriate figure of merit. We illustrate a number of use cases for our method in the physical sciences and demonstrate state-of-the-art results for X-ray polarimetry that outperform current classical and deep learning methods.

1 INTRODUCTION

The majority of state-of-the-art NN performances are single (high-dimensional) input, multiple-output tasks, for instance classifying images (Krizhevsky et al., 2012), scene understanding (Redmon et al., 2015) and voice recognition (Graves et al., 2006). These tasks typically involve one input vector or image and a single output vector of predictions.

In parametric density estimation, there is a known probability density that the data (or latent features of the data) are expected to follow. The goal is to find representative distribution parameters for a given dataset. In simple cases where the likelihood is calculable, maximum likelihood estimation can be used effectively. In cases where latent features of the data follow a known distribution (e.g., heights of people in a dataset of photographs), NNs can potentially be used to directly estimate the distribution parameters. For clarity, we define this direct/end-to-end approach as parametric feature density estimation (PFDE). Such an approach requires employing entire datasets (with potentially thousands to millions of high-dimensional examples) as inputs in order to output a vector of density parameters. Furthermore, to be useful these NNs would need to generalize to arbitrarily sized dataset-inputs.

One example of NNs making sense of large dataset-inputs is found in natural language processing. Here large text corpora, converted to word vectors (Pennington et al., 2014; Devlin et al., 2019), can be input and summarized by single output vectors using recurrent neural networks (RNNs), for instance in sentiment analysis (Can et al., 2018). However, these problems and RNNs themselves contain inductive bias: there is inherent structure in text. Not all information need be given at once, and a concept of memory or attention is sufficient (Vaswani et al., 2017). The same can be said about time-domain problems, such as audio processing or voice recognition. Memory is inherently imperfect; for PFDE, one ideally wants to know all elements of the ensemble at once to make the best prediction: sequential inductive bias is undesirable. Ultimately, memory and architectural constraints make training NNs for direct PFDE computationally intractable.

On the other hand, density estimation on data directly (not on its latent features) is computationally tractable.
Density estimation lets us find a complete statistical model of the data generating process. Applying deep learning to density estimation has advanced the field significantly (Papamakarios, 2019). Most of the work so far focuses on density estimation where the density is unknown a priori. This can be achieved with non-parametric methods such as neural density estimation (Papamakarios et al., 2018), or with parametric methods such as mixture density networks (Bishop, 1994). In PFDE, however, we have a known probability density over some features of the whole dataset. The features may be more difficult to predict accurately in some datapoints than in others.

Typical parametric density estimation does not make use of data uncertainties where some elements in the dataset may be noisier than others. Not including uncertainty information can lead to biased or even degraded parameter estimates. The simplest example of parametric density estimation using uncertainties is a weighted mean; this is the result of a maximum likelihood estimate for a multi-dimensional Gaussian. For density estimation on predicted data features (PFDE), we would like a way to quantify the predictive uncertainty. A general solution is offered by deep ensembles (Lakshminarayanan et al., 2017). While these are not strictly equivalent to a Bayesian approach (although they can be made so using appropriate regularization (Pearce et al., 2018)), they offer practical predictive uncertainties and have been shown to generalize readily (Fort et al., 2019). Additionally, Ovadia et al. (2019) have shown that deep ensembles perform best across a number of uncertainty metrics, including dataset shift, compared to competing methods such as stochastic variational inference and Monte Carlo methods.

In this work, we propose a NN approach that circumvents large dataset-input training or recurrent architectures to predict known feature density parameters over large input datasets. We use predictive uncertainties on features of individual dataset elements as importance weights in a maximum likelihood estimation. We will show that estimating known density parameters in a two-step approach provides greater interpretability and flexibility. We are able to predict uncertainties on our density parameter estimates using bootstrap methods (Efron, 1979). Our method is widely applicable to a number of applied machine learning fields; §3 showcases a few important examples.

Contributions: Our contributions in this paper are as follows. (1) We introduce a general, flexible method for PFDE using NNs. The method can be applied to any domain requiring PFDE; we illustrate a number of varied domain examples in the physical sciences in §3. (2) In an in-depth evaluation we show that our method outperforms not only classical methods for density estimation, but also standard NN implementations, in an application to X-ray polarimetry. (3) We investigate the bias-variance tradeoff associated with our method and introduce a tuneable hyperparameter to control it. Note: in the following we focus on regression examples (since unbinned density estimation is preferable to binned).
However, a similar method can be applied to prediction examples where softmax class probabilities are used as heteroscedastic aleatoric uncertainty.

2 IMPORTANCE WEIGHTED ESTIMATION WITH DEEP ENSEMBLES

2.1 PROBLEM SETUP AND HIGH-LEVEL SUMMARY

We wish to estimate the feature density parameters of $N$ high dimensional data points $\{x\}$: $f(\{x_n\}_{n=1}^{N})$. Here $x \in \mathbb{R}^D$ can be any high dimensional data (e.g. images, time series). $N$ is arbitrary, although usually large, since otherwise density estimation is inaccurate. For example, consider estimating the mean and variance of human heights from a dataset consisting of photographs of people. A person's height in each photograph is the image feature, and we know this feature approximately follows a Gaussian distribution. We develop a method that can estimate the density parameters (mean and variance) and generalize to any dataset of photographs.

In general, the function $f$ mapping the high dimensional data points to the desired density parameters is unknown, since the high dimensional data is abstracted from its features. Learning $f$ directly is typically infeasible because an entire ensemble of inputs $\{x_n\}_{n=1}^{N}$ must be processed simultaneously to estimate density parameters, and this approach would have to generalize to arbitrary $N$ and density parameter values. We discuss some special cases where this is possible in §1. However, the function $g$ mapping data features $y_n$ to the density parameters is known.

We cast this as a supervised learning problem where we have a dataset $\mathcal{D}$ consisting of $N$ data points $\mathcal{D} = \{x_n, y_n\}_{n=1}^{N_{\mathrm{train}}}$ with labels $y \in \mathbb{R}^K$, where $x \in \mathbb{R}^D$. We want to estimate the density parameters $\psi_1, \psi_2, \ldots, \psi_k$ for an unseen test set, $g(\{y_n\}_{n=1}^{N_{\mathrm{test}}})$, for arbitrary $N_{\mathrm{test}}$.

The basic recipe that comes to mind is training a single NN to predict output labels $\{y_n\}_{n=1}^{N}$ and then evaluating $g$ directly. This ignores the high variance in single NN predictions (dependent on training/random initialization), that some individual examples may be more informative than others, and that an objective to predict the most accurate output labels may not be the best for predicting good density parameters (high bias may be introduced, for instance).

Our hybrid approach is as follows: (i) train a deep ensemble of $M$ NNs¹ to predict $\{y_n, \sigma_n\}_{n=1}^{N}$, where $\sigma_n$ is the total uncertainty on each prediction $y_n$; (ii) use the $\{\sigma_n\}_{n=1}^{N}$ as weights in an importance weighted maximum likelihood estimate. The next section, §2.2, describes procedure (i).

2.2 DEEP ENSEMBLES

Deep ensembles (Lakshminarayanan et al., 2017) return robust and accurate supervised learning predictions and predictive uncertainties, which enable the best density parameter predictions. These use an ensemble of individual NNs (with different random initializations) trained to predict features and their aleatoric uncertainties. Final predictions and their epistemic uncertainties are then recovered by combining the estimates from each of the NNs in the ensemble.

In regression, deep ensembles model heteroscedastic aleatoric uncertainty $\sigma_a$ by modifying the typical mean-squared error (MSE) objective to a negative log-likelihood (NLL) (Lakshminarayanan et al., 2017),

$$\mathrm{Loss}(y \mid x) = \frac{1}{2}\log\sigma_a^2(x) + \frac{1}{2\sigma_a^2(x)}\,\lVert y - \hat{y}(x)\rVert_2^2. \tag{1}$$
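A minimal sketch of this NLL head in code, assuming a hypothetical regression network that outputs a mean and a log-variance per example (shapes are assumptions, not the authors' exact architecture):

```python
import torch

def gaussian_nll(y, y_hat, log_var):
    """Heteroscedastic Gaussian NLL of Eq. (1), averaged over the batch.
    y, y_hat: (B, K) targets and predicted means; log_var: (B,) predicted log sigma_a^2."""
    sq_err = ((y - y_hat) ** 2).sum(dim=-1)
    return (0.5 * log_var + 0.5 * torch.exp(-log_var) * sq_err).mean()
```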
Extensions using more complex distributions, like mixture density networks or heavy-tailed distributions, may be more applicable to certain problems with prior knowledge about the error distribution. In practice, the log-likelihood of any exponential family could be used; we find this simple Gaussian approach to be sufficient and robust for regression problems. Our results in §3.4 compare a Gaussian and a von Mises distribution.

Epistemic uncertainty $\sigma_e$ is modelled using a uniformly weighted ensemble of $M$ NNs, each trained starting from a different random initialization. The regression prediction and uncertainty are approximated by the mean and standard deviation over the $M$ NN ensemble predictions, respectively (each NN in the ensemble contributes equally), i.e. $\hat{y}(x) = M^{-1}\sum_{m=1}^{M}\hat{y}_m(x)$ and $\sigma_e^2(x) = \mathrm{Var}(\{\hat{y}_m(x)\}_{m=1}^{M})$. The epistemic uncertainty is then combined with the aleatoric in quadrature to arrive at the total uncertainty: $\sigma^2 = \sigma_a^2 + \sigma_e^2$. Typically $M \sim 5$-$15$.

In part (i) of our hybrid approach for PFDE, we train a deep ensemble to minimize the NLL (1) on the desired features $y$. We follow the deep ensemble training procedure outlined in Lakshminarayanan et al. (2017) (with the recast loss function from Kendall & Gal (2017)) without using adversarial examples, using the full dataset for each NN. Since the individual density parameters over predicted features are the final desired values in PFDE, it is possible that an objective maximizing feature accuracy on the validation set is not the true objective. This is possible if the training dataset is biased or the model (1) is highly misspecified for the particular problem. The Kitaguchi et al. (2019) single-CNN method in Table 1, §3.4, shows a clear case of training bias. If de-biasing the training dataset or using a more appropriate model is not possible, we have identified two potential ways of ameliorating this issue for PFDE:

1. Include terms in the individual NN objectives to penalize known sources of bias.
2. Select the top $M$ performing NNs, as measured by a criterion that includes density parameter prediction bias on a held-out test set.

In practice both can be used simultaneously. However, the former runs into batch size problems (since one needs a large sample size to accurately estimate bias), and the source of bias is not always well understood. The latter naturally arises from the use of deep ensembles, but could introduce its own unwanted bias and risks underestimating the epistemic uncertainty. We compare selecting the top performing NNs for the ensemble by a domain-specific criterion against randomly selecting NNs for the ensemble in §3.

¹We note that the NN architecture used will of course depend on the dataset domain.

2.3 IMPORTANCE WEIGHTED LOG-LIKELIHOOD

Provided a mapping between high dimensional inputs and interpretable features $x_n \mapsto y_n$, we can calculate the density parameters $\psi_1, \psi_2, \ldots, \psi_k$ by minimizing the appropriate negative log-likelihood function $p(\{y_n\} \mid \psi_1, \psi_2, \ldots, \psi_k)$. Some feature predictions $y_n$ will have greater total predictive uncertainties $\sigma_n$. We estimate feature density parameters by incorporating the total uncertainty into an importance weighted maximum likelihood estimate. This makes up part (ii) of our hybrid method.
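As a prerequisite for the weighting, a small sketch of the §2.2 combination rule that yields the per-example pairs $(\hat{y}_n, \sigma_n)$ used below (array shapes are assumptions):

```python
import numpy as np

def combine_ensemble(y_preds, var_a_preds):
    """y_preds, var_a_preds: (M, N) per-network feature means and aleatoric variances.
    Returns the per-example prediction and total uncertainty of Section 2.2."""
    y_hat = y_preds.mean(axis=0)          # y_hat(x) = M^-1 sum_m y_hat_m(x)
    var_e = y_preds.var(axis=0)           # epistemic: variance over ensemble members
    var_a = var_a_preds.mean(axis=0)      # aleatoric: averaged over the ensemble
    sigma = np.sqrt(var_a + var_e)        # sigma^2 = sigma_a^2 + sigma_e^2
    return y_hat, sigma
```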
An importance weight quantifies the relative importance of one example over another. Importance weighting an element should have the same effect as including that element multiple times in the dataset, proportional to its importance weight (Karampatziakis & Langford, 2011). The deep ensemble, once trained, acts as a mapping between high dimensional inputs $x_n$ and feature-uncertainty output pairs $(y_n, \sigma_n)$. For each input $x_n$ there will be $M$ output pairs $\{\hat{y}_{nm}, (\sigma_a)_{nm}\}_{m=1}^{M}$, one for each NN in the deep ensemble. Both the features $\hat{y}_{nm}$ and the aleatoric uncertainty variances $(\sigma_a)^2_{nm}$ can be combined by taking the appropriate mean over $m$; this mean may depend on the distribution used in (1), but for the simple Gaussian case the standard mean is sufficient. Taking the mean results in a single output pair $(\hat{y}_n, (\sigma_a)_n)$ for each input. Epistemic uncertainties are included as in §2.2, resulting in the final output $(\hat{y}_n, \sigma_n)$.

In order to use all possible information when estimating the desired density parameters $\psi_1, \psi_2, \ldots, \psi_k$, we define an importance weighted negative log-likelihood function

$$L_w(\{\hat{y}_n\}, \psi_1, \psi_2, \ldots, \psi_k) = -\sum_{n=1}^{N} w_n \log L(\hat{y}_n \mid \psi_1, \psi_2, \ldots, \psi_k), \tag{2}$$

$$w_n = \sigma_n^{-\lambda}. \tag{3}$$

Each individual prediction $y_n$ has an associated importance weight $w_n$. The $\sigma_n^{-\lambda}$ term weights each $y_n$ by its predictive uncertainty. The hyperparameter $\lambda \ge 0$ controls the importance weighting distribution: a high $\lambda$ means the $y_n$ with the lowest (estimated) MSE will dominate the final ensemble statistic. As always in estimation problems, there is a trade-off between lower-variance predictions and more bias. This can be tuned for a specific application using $\lambda$; we discuss the procedure in detail in our example application, §3. Final density parameters are found by minimizing (2) over the domain of the density parameters $\psi$.

Typically, the weights in weighted likelihood estimation are determined heuristically (Hu & Zidek, 2002). In this example, we choose $w = \sigma^{-\lambda}$ since it approximates the simple functional form of the likelihood used in a weighted mean estimate ($\lambda = 2$). This weighting choice is also inspired by the dispersion parameter used in generalized linear models (GLMs) (Nelder & Wedderburn, 1972). We expect that this weighting will retain similar robustness properties in terms of model fitting, and will generalize well to many domains. However, of course, any decreasing function $f : \mathbb{R}^+ \to \mathbb{R}^+$ may be used to determine weights, with the most suitable choice of function $f$ within a given class of functions (in our case, parameterized by $\lambda$) to be determined by either cross-validation or performance on a holdout set. In some applications it is possible to find the exact weighting function [in prep., reference deleted to maintain integrity of review process]. Further discussion of weight choice in our application is given in §3.4.

Confidence intervals on the density parameters can be calculated using the non-parametric bootstrap (Efron, 1979): select $N$ $(y_n, \sigma_n)$ pairs with replacement and minimize (2). In the limit of many trials with different random subsamples, this gives the output distribution of the density parameters.

2.4 DENSITY PARAMETER REGRESSION

For a special class of parameterized densities it is possible to find the global minimizer or to minimize (2) analytically (e.g. for a multivariate Gaussian).
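A sketch of Eqs. (2)-(3) and of the analytic Gaussian special case just mentioned; the generic `log_lik` callback is a placeholder for whatever known density applies, not part of the paper:

```python
import numpy as np

def weighted_nll(psi, y_hat, sigma, log_lik, lam=2.0):
    """Importance weighted NLL of Eqs. (2)-(3).
    log_lik(y_hat, psi) -> per-example log-likelihoods."""
    w = sigma ** (-lam)                     # w_n = sigma_n^(-lambda), Eq. (3)
    return -np.sum(w * log_lik(y_hat, psi))

def weighted_gaussian_mle(y_hat, sigma, lam=2.0):
    """Closed-form minimizer of Eq. (2) for a 1D Gaussian density;
    lam=2 recovers the classical inverse-variance weighted mean."""
    w = sigma ** (-lam)
    mu = np.sum(w * y_hat) / np.sum(w)
    var = np.sum(w * (y_hat - mu) ** 2) / np.sum(w)
    return mu, var
```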
In practice, the majority of parametric densities of interest for PFDE are likely to be convex (exponential families, our application example in §3, etc.), so they will fall into this special class. In the general case, minimization is performed numerically to find locally optimal solutions.

In this work, we employ Ipopt (Wächter & Biegler, 2006), an open-source interior-point solver for large-scale non-convex optimization problems, to minimize (2). This method can be used for convex or non-convex parametric density estimates, but only convex ones are guaranteed to be globally optimal. Because Ipopt finds locally optimal solutions, which are highly dependent upon the initial guess of the parameters provided to the solver, in the non-convex case we recommend nested sampling (Feroz et al., 2009) to test many initial guesses and then select the best local solution. Constraints on the density parameters, for instance if they have a finite domain, can be incorporated in both the convex and non-convex cases. Of course, any optimizer appropriate for (2) can be used, and this will depend on the problem.

The overall training and evaluation procedure is summarized in Algorithm 1.

Algorithm 1: Pseudocode for our PFDE method.
1: Identify output features $y_n$ relevant to the desired density parameter(s) (e.g., subject height in photographs).
2: Train a deep ensemble of NNs using loss function (1) to maximise accuracy on the desired output features.
3: Evaluate the density parameter(s) using importance weights by minimizing (2).
4: Tune the $\lambda$ hyperparameter for the specific application.

3 EXPERIMENTS

3.1 X-RAY POLARIMETRY

Measuring X-ray polarization has been a major goal in astrophysics for the last 40 years. X-ray polarization can provide essential measurements of magnetic fields very close to high-energy sources, such as accreting black holes and astrophysical jets (Weisskopf, 2018). The recent development of photoelectron tracking detectors (Bellazzini et al., 2003) has greatly improved the prospects of doing so. X-ray polarization telescopes with photoelectron tracking detectors directly image electron tracks formed from photoelectrons scattered by the incoming X-ray photons. We describe an application of our hybrid PFDE method to X-ray polarimetry using photoelectron tracking detectors. We use data from the upcoming NASA Imaging X-ray Polarimetry Explorer (IXPE) (Sgrò & IXPE Team, 2019) as a working example. The problem of recovering polarization parameters from a dataset of (IXPE) electron track images has recently been announced as an open problem in the machine learning community (Moriakov et al., 2020).

The linear polarization of light can be fully described by two degrees of freedom: the polarization fraction $0 \le \Pi \le 1$ (0%-100%), and the electric vector position angle $-\pi/2 \le \phi \le \pi/2$. These can be thought of as the magnitude and direction of a vector perpendicular to the direction of propagation of the light.
In imaging X-ray polarimetry, when the detector images an X-ray source, it measures individual 2D images of electron tracks excited by incoming X-ray photons. The initial directions the electrons travel in follow a known probability density that depends on the source polarization, and the problem is to recover the polarization parameters $\Pi$ and $\phi$ from the collected dataset of 2D track images.

In the case of IXPE, charge tracks are imaged by hexagonal pixels. Fig. 1 shows some example photoelectron tracks at different X-ray energies. Each track represents the interaction of a single photon with a single gas molecule. The initial track angle $y$ follows the probability density

$$p(y \mid \Pi, \phi) = \frac{1}{2\pi}\left(1 + \Pi\cos(2(y + \phi))\right), \tag{4}$$

where $\Pi$ and $\phi$ are fixed polarization parameters that depend on the source. By estimating $y$ for a large number of tracks, we may recover the original polarization parameters $\Pi$ and $\phi$ using parametric density estimation.

Track morphologies vary greatly with energy (and even for the same energy); this affects how difficult it is to recover an accurate initial photoelectron angle $y$. Low-energy tracks are typically less elliptical and so more difficult to estimate. For this reason it is essential to incorporate some form of quality control in the tracks used for polarization estimates.

Current IXPE methods estimate the individual track $y$ using a moment analysis (Sgro, 2017). This calculates the first, second and third charge moments using the 2D coordinates of the hexagonal detector pixels, combining them to extract $y$. For each track, a single $-\pi \le y \le \pi$ is output. The polarization parameters are then estimated using a standard (unweighted) MLE. The moment analysis additionally outputs an estimate of the track ellipticity, which can be used as a proxy for $y$ estimation accuracy. The standard moment analysis uses a track cut to improve polarization recovery: 20% of the tracks are cut based on ellipticity. NNs have also recently been applied to this problem (Kitaguchi et al., 2019). This approach uses single CNNs for classification on $y$, with binned fits to $y$ histograms to extract polarization parameters, and track quality cuts. Our hybrid method exhibits significantly improved performance over both the standard IXPE method and this basic NN approach.

3.2 PARAMETRIC FEATURE DENSITY ESTIMATION

Following §2, we define CNNs that take single track images as input and $(\hat{y}, \hat{\sigma})$ as output. In this case the track angles $y$ are the data features that follow the known density (4), the density parameters are $\Pi \equiv \psi_1$ and $\phi \equiv \psi_2$, and the CNNs make up the deep ensemble.

To make the hexagonal track images admissible inputs to standard CNN architectures, we first convert the hexagonal images to square image arrays by shifting every other column and rescaling the distance between points, as described in Steppa & Holch (2019). Since there are two possible shifts (odd and even rows), we apply both and stack the two shifted images, similar to color channels in RGB images. We do this to more closely approximate spatial equivariance of the CNN convolution kernels in the hexagonal space. At test time, we apply the deep ensemble to the same track 3 times, each time rotated by 120° in hexagonal space.
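Stepping back to the track-angle density (4), a minimal sketch for evaluating and sampling it (rejection sampling; a hypothetical helper for generating synthetic angles, not the IXPE simulation chain):

```python
import numpy as np

def track_angle_pdf(y, Pi, phi):
    """Eq. (4): density of initial photoelectron track angles."""
    return (1.0 + Pi * np.cos(2.0 * (y + phi))) / (2.0 * np.pi)

def sample_track_angles(n, Pi, phi, rng=None):
    """Rejection-sample n angles in [-pi, pi) from Eq. (4)."""
    rng = rng or np.random.default_rng()
    pdf_max = (1.0 + Pi) / (2.0 * np.pi)   # density maximum, used as envelope
    out = np.empty(0)
    while out.size < n:
        y = rng.uniform(-np.pi, np.pi, size=n)
        u = rng.uniform(0.0, pdf_max, size=n)
        out = np.concatenate([out, y[u < track_angle_pdf(y, Pi, phi)]])
    return out[:n]
```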
We find that this rotation averaging reduces all relevant prediction bias on $\hat{y}$ (and later $\Pi$, $\phi$) introduced when converting from hexagonal to square coordinates.

To recover $\Pi$, $\phi$ we need to predict $2y$, so we use the loss function (1) but parameterize the true angle $y$ as a 2D vector $v = (\cos 2y, \sin 2y)$ to capture the periodicity. The loss function is as follows:

$$\mathrm{Loss}(v, \hat{v}) = \frac{1}{2}\log\hat{\sigma}^2 + \frac{1}{2\hat{\sigma}^2}\lVert v - \hat{v}\rVert_2^2. \tag{5}$$

The final NN ensembles output the 3-vector $(\hat{v}, \hat{\sigma})$. In this case the mean over ensemble predictions is calculated using the circular mean of $\{\hat{v}_m\}_{m=1}^{M}$. Then $\hat{y} = \frac{1}{2}\arctan\frac{\hat{v}_2}{\hat{v}_1}$. To calculate the final $\Pi$, $\phi$ with an ensemble of $M$ NNs for a given test dataset with $N$ tracks, we minimize the importance weighted NLL (2) with likelihood

$$L(\hat{y}_n \mid \Pi, \phi) = \frac{1}{2\pi}\left(1 + \Pi\cos(2(\hat{y}_n + \phi))\right). \tag{6}$$

We can recast this as the convex optimization problem

$$\min_{x}\; -\sum_{n=1}^{N} \hat{\sigma}_n^{-\lambda} \log\left(1 + v_n^{\top} x\right) \quad \text{subject to } \lVert x \rVert_2 \le 1, \tag{7}$$

where $v_n = (\cos\hat{y}_n, \sin\hat{y}_n)$ and $x = (\Pi\cos\phi, \Pi\sin\phi)$. By recasting (2) as a convex optimization problem, we have a guaranteed globally optimal solution for $(\Pi, \phi)$. We can solve (7) quickly and efficiently using second-order Newton methods. In practice we use the robust open-source software Ipopt (§2.4).

We also consider a more domain-specific, non-Gaussian likelihood function for our loss (5). We use the log-likelihood of the von Mises distribution for the NN loss:

$$\mathrm{Loss}(v, \hat{v}) = \log\left(I_0(\hat{\sigma}^{-2})\right) - \frac{1}{\hat{\sigma}^2}\, v^{\top}\hat{v}, \tag{8}$$

where $I_0$ is the modified Bessel function of the first kind. This is a close approximation of the wrapped Gaussian on the circle. It is more appropriate than the Gaussian (5) for angular estimates since it can capture the $\pi$ periodicity in $\hat{y}$. For very small $\hat{\sigma}$ it is equivalent to the Gaussian. We compare the results from both losses in §3.4 and Table 1.

3.2.1 FIGURE OF MERIT

In polarization estimation, we want high recovered $\hat{\Pi}_{100\%}$ (and accurate $\phi$) for a known 100% polarized source ($\Pi = 1$), and low recovered $\hat{\Pi}_{0\%}$ for an unpolarized source ($\Pi = 0$). Since there is irreducible noise in the tracks, it is impossible for any method to achieve $\hat{\Pi}_{100\%} \sim 1$, so $\hat{\Pi}_{\mathrm{meas}}$ estimates are calibrated to get the final $\hat{\Pi}$ for an unknown source²: $\hat{\Pi} = \hat{\Pi}_{\mathrm{meas}} / \hat{\Pi}_{100\%}$. We define a figure of merit for polarization estimation:

$$\mathrm{FoM} = 100 \times \hat{\Pi}_{0\%} / \hat{\Pi}_{100\%}. \tag{9}$$

We use the FoM to evaluate model performance: a lower FoM means better polarization estimation. This is effectively a measure of the signal-to-noise ratio, a simplified extension of the minimum detectable polarization (MDP) typically defined for X-ray polarization (Weisskopf et al., 2010) that does not preclude biased estimators. It is evaluated on unseen polarized and unpolarized datasets. In estimating the FoM, we take the number of tracks $N \sim 360{,}000$ so we can compare directly to Kitaguchi et al. (2019). We average the FoM over 200 independent track dataset samples of size $N$. We use the FoM as the criterion to select the hyperparameter $\lambda$ in (2). In this way we can trade off accuracy and bias in our $\Pi$, $\phi$ estimates.

²$\hat{\Pi}_{100\%}$ is measured beforehand on a source with the same track energy distribution.

3.3 NN TRAINING AND SELECTION

Our training dataset consists of 3 million simulated tracks, examples of which are shown in Fig. 1. The track energies uniformly span 1.0-9.0 keV, IXPE's most sensitive range, and are unpolarized (uniform track angle distribution).
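As an illustration of solving (7), a minimal sketch using scipy as a stand-in for Ipopt (following the parameterization exactly as written above; a hypothetical helper, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

def recover_polarization(y_hat, sigma, lam=2.0):
    """Minimize the objective of Eq. (7) over x = (Pi*cos(phi), Pi*sin(phi))."""
    v = np.stack([np.cos(y_hat), np.sin(y_hat)], axis=1)   # v_n as written in (7)
    w = sigma ** (-lam)

    def objective(x):
        return -np.sum(w * np.log1p(v @ x))                # -sum_n w_n log(1 + v_n^T x)

    constraint = {"type": "ineq", "fun": lambda x: 1.0 - np.linalg.norm(x)}
    res = minimize(objective, x0=np.zeros(2), constraints=[constraint])
    Pi = np.linalg.norm(res.x)
    phi = np.arctan2(res.x[1], res.x[0])
    return Pi, phi
```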
Since we don't know a priori what energy each track is, we want NNs that can make predictions for tracks of all energies. This also makes for a more generalizable system, since some high-energy tracks have similar characteristics to lower-energy ones. Each track is labelled with its 2D angle vector $v$.

We use a ResNet-19 (He et al., 2015) convolutional NN architecture as our base NN. This particular architecture is large enough to overfit the training set, and trains in a reasonable amount of time. Before training we preprocess the training data (square track images), applying pixel-wise centering and rescaling. We use stochastic gradient descent with momentum and a decaying learning rate starting at 1e-2. We choose batch sizes of 512, 1024, and 2048 tracks per batch. We trained for 150 epochs, using early stopping to prevent overfitting, and L2-norm regularization of $5 \times 10^{-5}$. We train 30 NNs and, to make up our final NN ensemble, compare randomly selecting $M = 10$ NNs against selecting the $M = 10$ NNs with the top MSEs on $y$ for an unseen test dataset spanning all energies. The results for both methods are shown in Table 1.

3.4 RESULTS

Table 1 shows the results of our deep ensemble PFDE method alongside the current state-of-the-art methods. The single-CNN method with optimized cuts, developed in Kitaguchi et al. (2019), provides significant improvements in $\Pi_{100\%}$ over the moment analysis, but adds bias to the unpolarized measurement $\Pi_{0\%}$, increasing its FoM and making it a worse method at all energies. We perform an ablation study over our method, testing a single NN without weighting when estimating $(\Pi, \phi)$ (i.e. $w_n = 1\ \forall n$ in (3)), an ensemble of NNs without weighting, a randomly selected ensemble with weighting, a top-MSE-selected ensemble with weighting, and a von Mises loss weighted ensemble. We find that a single NN without weighting beats the classical moments and moments-with-cuts baselines. This result is visualized in the right panel of Fig. 3.3 for the 6.4 keV dataset: the single NN shows improved $\hat{y}$ estimates and thus a density that more closely resembles the ground truth. Using an ensemble of NNs improves this result slightly, but the real power of our method comes with the importance weights. Our final importance weighted ensemble method, with $\lambda$ tuned accordingly for each energy, significantly outperforms the rest, especially on the power-law datasets, where there is a reduction in FoM of almost a factor of 1.5. This shows the power of a simple weighting scheme over quality cuts in PFDE: it allows our method to take advantage of the higher signal ($\Pi_{100\%}$) at higher energies in the power-law datasets. The $\lambda$ tuning procedure is shown in the left panel of Fig. 3.3. Comparing a randomly selected ensemble with a top-MSE-selected ensemble, we find the results are almost identical. Random selection should yield more accurate approximations of the epistemic uncertainty and thus better weights, while selecting the top-performing NNs on MSE should improve $\hat{y}$ accuracy. Since the results are identical, but selecting NNs has the potential to bias density estimation, we recommend randomly selecting NNs. We note that, although not included in the table, a single NN with importance weighting performs only slightly worse than the weighted ensemble.
Since a single NN only produces aleatoric uncertainties, this suggests, as expected, that for a correctly specified model aleatoric uncertainties dominate epistemic ones. Finally, the von Mises loss shows a small improvement over the simple Gaussian. This is expected, since characterizing the predictive uncertainties by a periodic distribution is more appropriate for the polarimetry application, but the improvement is small, suggesting that the Gaussian is a robust starting point for many applications. We plan to release further results and more domain-specific information for this particular application [reference deleted to maintain integrity of review process].

3.5 OTHER APPLICATIONS

There are numerous applications of PFDE with uncertainty in the physical sciences and engineering. In high-energy particle physics, massive short-lived particles can be detected by fitting a Cauchy distribution to the frequencies of measured decay states. Raw sensor data from hadronic particle colliders like the LHC are very noisy with variable uncertainty, meaning our PFDE approach to estimating the Cauchy distribution parameters could be very fruitful. This is especially true given the widespread current use of deep learning in particle physics (Guest et al., 2018). Our approach is heuristically justified by the asymptotic efficiency of the maximum likelihood estimator in a Cauchy location model (Cohen Freue, 2007). In manufacturing, GLMs fit to binomial distributions are commonly used to assess product quality, or the probability of a product being defective. Today, computer vision is used for much of the inspection (Rossol, 1983), making our hybrid PFDE method a potential step forward. These are just a few application examples; our method may be useful for any GLM-based method with high-dimensional data.

4 DISCUSSION

We have proposed a supervised learning framework for parametric feature density estimation. Our method uses deep ensembles to predict high dimensional data features and their aleatoric and epistemic uncertainties. We estimate feature density parameters by incorporating both of these uncertainties into an importance weighted maximum likelihood estimate. We include a tuneable weighting hyperparameter $\lambda$, allowing one to control the bias-variance tradeoff for density estimation. Intuitively, in many real feature density estimation problems, some high dimensional data points may be much more informative than others due to complex noise or differing generative distributions. Our method models this explicitly, weighting datapoint features by their predictive uncertainty when estimating density parameters. This avoids throwing away valuable data with quality cuts, yielding improved density estimates. Our method is scalable to any feature dataset size and is completely flexible for specific domain applications; most NN architectures can be used. We achieve state-of-the-art results over standard deep learning methods and classical algorithms in X-ray polarimetry, a recent open problem in ML. We expect our method would provide similar improvements in a number of PFDE application fields, including high-energy particle physics and manufacturing.

We perform an ablation study comparing a single NN, a deep ensemble, and various importance weighted deep ensembles.
A single NN approach or standard deep ensemble improves slightly on the classical baselines, but importance weighting by predictive uncertainty provides the main improvement of our method. Selecting NNs for the deep ensemble based on quality of density estimation provides no additional gain in performance compared to random selection; since it is possible that performance-based NN selection can degrade epistemic uncertainty estimates, we recommend randomly selecting NNs for the ensemble. Comparing the Gaussian and von Mises distributions for feature prediction, we find the standard Gaussian likelihood (1) an effective and robust approximation, although results can potentially be improved for specific applications by choosing a more appropriate distribution over the predictive uncertainties.

While our method works well for densities with convex log-likelihoods, non-convex ones will not necessarily yield globally optimal solutions and may be very time consuming to evaluate. Future Work: Future additions to the method include more complex aleatoric uncertainty modelling. We assume a Gaussian distribution for our feature prediction (1), but for domain applications where there is an expected feature uncertainty, one could use an alternative distribution, or even a mixture density network (Bishop, 1994) for more flexibility. In that case the functional form of the weighting would have to be reconsidered. Additionally, finding the optimal weighting function for specific problem applications is likely to yield significant improvements.

REFERENCES

Ronaldo Bellazzini, F. Angelini, Luca Baldini, Alessandro Brez, Enrico Costa, Giuseppe Di Persio, Luca Latronico, M. M. Massai, Nicola Omodei, Luigi Pacciani, Paolo Soffitta, and Gloria Spandre. Novel gaseous x-ray polarimeter: data analysis and simulation. In Polarimetry in Astronomy, volume 4843, pp. 383-393. International Society for Optics and Photonics, February 2003. doi: 10.1117/12.459381.

Christopher Bishop. Mixture Density Networks. January 1994. URL https://www.microsoft.com/en-us/research/publication/mixture-density-networks/.

Ethem F. Can, Aysu Ezen-Can, and Fazli Can. Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data. arXiv:1806.04511, June 2018.

Gabriela V. Cohen Freue. The Pitman estimator of the Cauchy location parameter. Journal of Statistical Planning and Inference, 137(6):1900-1913, June 2007. doi: 10.1016/j.jspi.2006.05.002.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805, May 2019.

B. Efron. Bootstrap Methods: Another Look at the Jackknife. Annals of Statistics, 7(1):1-26, January 1979. doi: 10.1214/aos/1176344552.

F. Feroz, M. P. Hobson, and M. Bridges. MultiNest: an efficient and robust Bayesian inference tool for cosmology and particle physics. Monthly Notices of the Royal Astronomical Society, 398(4):1601-1614, October 2009. doi: 10.1111/j.1365-2966.2009.14548.x.

Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep Ensembles: A Loss Landscape Perspective. arXiv:1912.02757, December 2019.

Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pp. 369-376, June 2006. doi: 10.1145/1143844.1143891.

Dan Guest, Kyle Cranmer, and Daniel Whiteson. Deep Learning and its Application to LHC Physics. Annual Review of Nuclear and Particle Science, 68(1):161-181, October 2018. doi: 10.1146/annurev-nucl-101917-021019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. arXiv:1512.03385, December 2015.

Feifang Hu and James V. Zidek. The Weighted Likelihood. The Canadian Journal of Statistics, 30(3):347-371, 2002. doi: 10.2307/3316141.

Nikos Karampatziakis and John Langford. Online importance weight aware updates. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI '11), pp. 392-399, July 2011.

Alex Kendall and Yarin Gal. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? In Advances in Neural Information Processing Systems 30, pp. 5574-5584, 2017.

Takao Kitaguchi, Kevin Black, Teruaki Enoto, Asami Hayato, Joanne E. Hill, Wataru B. Iwakiri, Philip Kaaret, Tsunefumi Mizuno, and Toru Tamagawa. A convolutional neural network approach for reconstructing polarization information of photoelectric X-ray polarimeters. Nuclear Instruments and Methods in Physics Research Section A, 942:162389, October 2019. doi: 10.1016/j.nima.2019.162389.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25, pp. 1097-1105, 2012.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30, pp. 6405-6416, December 2017.

Nikita Moriakov, Ashwin Samudre, Michela Negro, Fabian Gieseke, Sydney Otten, and Luc Hendriks. Inferring astrophysical X-ray polarization with deep learning. arXiv:2005.08126, May 2020.

J. A. Nelder and R. W. M. Wedderburn. Generalized Linear Models. Journal of the Royal Statistical Society, Series A, 135(3):370-384, 1972. doi: 10.2307/2344614.

Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift. arXiv:1906.02530, December 2019.

George Papamakarios. Neural Density Estimation and Likelihood-free Inference. arXiv:1910.13233, October 2019.

George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. arXiv:1705.07057, June 2018.

Tim Pearce, Mohamed Zaki, and Andy Neely. Bayesian Neural Network Ensembles. arXiv:1811.12188, November 2018.

Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, October 2014. doi: 10.3115/v1/D14-1162.

Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. arXiv:1506.02640, June 2015.

Lothar Rossol. Computer Vision in Industry. In Alan Pugh (ed.), Robot Vision, International Trends in Manufacturing Technology, pp. 11-18. Springer, Berlin, Heidelberg, 1983. doi: 10.1007/978-3-662-09771-7_2.

Carmelo Sgro. The gas pixel detector on board the IXPE mission. In UV, X-Ray, and Gamma-Ray Space Instrumentation for Astronomy XX, pp. 16. SPIE, August 2017. doi: 10.1117/12.2273922.

C. Sgrò and IXPE Team. The Imaging X-ray Polarimetry Explorer (IXPE). Nuclear Instruments and Methods in Physics Research A, 936:212-215, August 2019. doi: 10.1016/j.nima.2018.10.111.

Constantin Steppa and Tim Lukas Holch. HexagDLy - Processing hexagonally sampled data with CNNs in PyTorch. SoftwareX, 9:193-198, January 2019. doi: 10.1016/j.softx.2019.02.010.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is All you Need. In Advances in Neural Information Processing Systems 30, pp. 5998-6008, 2017.

Martin Weisskopf. An Overview of X-Ray Polarimetry of Astronomical Sources. Galaxies, 6:33, March 2018. doi: 10.3390/galaxies6010033.

Martin C. Weisskopf, Ronald F. Elsner, and Stephen L. O'Dell. On understanding the figures of merit for detection and measurement of x-ray polarization. arXiv:1006.3711, July 2010. doi: 10.1117/12.857357.

Andreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25-57, March 2006. doi: 10.1007/s10107-004-0559-y.
This paper addresses the problem of estimating the distribution parameters of features extracted from a set of high dimensional observations, a problem that is common in the physical sciences. To solve this problem, the authors present a deep learning approach that utilises a combination of (i) deep ensemble training, (ii) post hoc model selection, and (iii) importance weighted parameter estimation. First, a deep ensemble is trained to solve a regression task (observation -> feature). During testing, this ensemble is frozen and used to generate feature samples from unseen observations. Using these feature samples, it is possible to estimate the distribution parameters using maximum likelihood estimation. The authors evaluate their method on X-ray polarimetry, and compare it with two other approaches, one of which is also a deep learning approach. On all tasks, the presented method outperforms both baseline approaches.
SP:f4fafd66830ad4d90a13395ac5327de33d127a73
Energy-Based Models for Continual Learning
1 INTRODUCTION

Humans are able to rapidly learn new skills and continuously integrate them with prior knowledge. The field of Continual Learning (CL) seeks to build artificial agents with the same capabilities (Parisi et al., 2019). In recent years, CL has seen increased attention, particularly in the context of classification problems. A crucial characteristic of continual learning is the ability to learn new data without forgetting prior data. Models must also be able to incrementally learn new skills, without necessarily having a notion of an explicit task identity. However, standard neural networks (He et al., 2016; Simonyan & Zisserman, 2014; Szegedy et al., 2015) experience the catastrophic forgetting problem and perform poorly in this setting. Different approaches have been proposed to mitigate catastrophic forgetting, but many rely on the usage of external memory (Lopez-Paz & Ranzato, 2017; Li & Hoiem, 2017), additional models (Shin et al., 2017), or auxiliary objectives and regularization (Kirkpatrick et al., 2017; Schwarz et al., 2018; Zenke et al., 2017; Maltoni & Lomonaco, 2019), which can constrain the wide applicability of these methods.

In this work, we focus on classification tasks. These tasks are usually tackled by utilizing a normalized probability distribution (i.e., a softmax output layer) trained with a cross-entropy objective. In this paper, we argue that by viewing classification through the lens of training an un-normalized probability distribution, we can significantly improve continual learning performance in classification problems. In particular, we interpret classification as learning an Energy-Based Model (EBM) across separate classes (Grathwohl et al., 2019). Training becomes a wake-sleep process, where the energy of an input and its ground-truth label is decreased while the energy of the input and another selected class is increased. This offers freedom to choose which classes to update in the CL process. By contrast, the cross-entropy objective reduces the likelihood of all negative classes when given a new input, creating updates that lead to forgetting. The energy function, which maps a data-class pair to a scalar energy, also provides a way for the model to select and filter portions of the input that are relevant to the classification at hand. We show that this enables EBM training updates for new data that interfere less with previous data. In particular, our formulation of the energy function allows us to compute the energy of an input by learning a conditional gain based on the input label, which serves as an attention filter to select the most relevant information. In the event of a new class, a new conditional gain can be learned. These unique benefits are applicable across a range of continual learning tasks.

Most existing works (Kirkpatrick et al., 2017; Zhao et al., 2020) toward continual learning typically learn a sequence of distinct tasks with clear task boundaries (Boundary-Aware). Many of these methods depend on knowing the task boundaries, which provide proper moments to consolidate knowledge. However, this scenario is not very common in the real world; a more natural scenario is the Boundary-Agnostic setting (Zeno et al., 2018; Rajasegaran et al., 2020), in which data gradually changes without a clear notion of task boundaries.
This setting has also been used as a standard evaluation in continual reinforcement learning (Al-Shedivat et al., 2017; Nagabandi et al., 2018). Many common CL methods are not applicable to the Boundary-Agnostic scenario, as the task boundaries are unknown or undefined. In contrast, EBMs are readily applied to this setting without any modification and are able to support both Boundary-Aware and Boundary-Agnostic settings.

There are four primary contributions of our work. First, we introduce energy-based models for classification CL problems in both boundary-aware and boundary-agnostic regimes. Secondly, we use the standard contrastive divergence training procedure and show that it significantly reduces catastrophic forgetting. Thirdly, we propose to learn new conditional gains during the training process, which makes EBM parameter updates cause less interference with old data. Lastly, we show that in practice EBMs bring a significant improvement on four standard CL benchmarks: split MNIST, permuted MNIST, CIFAR-10, and CIFAR-100. These observations point towards EBMs as a class of models naturally inclined towards the CL regime.

2 RELATED WORK

2.1 CONTINUAL LEARNING SETTINGS

Boundary-aware versus boundary-agnostic. In most existing continual learning studies, models are trained in a "boundary-aware" setting, in which a sequence of distinct tasks with clear task boundaries is given (e.g., Kirkpatrick et al., 2017; Zenke et al., 2017; Shin et al., 2017). Typically there are no overlapping classes between any two tasks; for example, task 1 has data with ground-truth class labels "1, 2" and task 2 has data with ground-truth class labels "3, 4". In this setting, models are first trained on the entire first task and then move to the second one. Moreover, models are typically told when there is a transition from one task to the next. However, it could be argued that it is more realistic for tasks to change gradually and for models not to be explicitly informed about the task boundaries. Such a boundary-agnostic setting has been explored in (Zeno et al., 2018; Rajasegaran et al., 2020; Aljundi et al., 2019). In this setting, models learn in a streaming fashion and the data distributions gradually change over time. For example, the percentage of "1s" presented to the model might gradually decrease while the percentage of "2s" increases. Importantly, most existing continual learning approaches are not applicable to the boundary-agnostic setting, as they require the task boundaries to decide when to consolidate knowledge (Zeno et al., 2018). In this paper, we will show that our proposed approach can also be applied to the boundary-agnostic setting.

Task-incremental versus class-incremental learning. Another important distinction in continual learning is between task-incremental learning and class-incremental learning (van de Ven & Tolias, 2019; Prabhu et al., 2020). In task-incremental learning, also referred to as the multi-head setting (Farquhar & Gal, 2018), models have to predict the label of an input by choosing only from the labels in the task the input comes from. On the other hand, in class-incremental learning, also referred to as the single-head setting, models have to choose between the classes from all tasks so far when asked to predict the label of an input.
Class-incremental learning is substantially more challenging than task-incremental learning, as it requires models to select the correct labels from the mixture of new and old classes. So far, only methods that store data or use replay have been shown to perform well in the class-incremental learning scenario (Rebuffi et al., 2017; Rajasegaran et al., 2019). In this paper, we try to tackle class-incremental learning without storing data and without replay.

2.2 CONTINUAL LEARNING APPROACHES

In recent years, numerous methods have been proposed for CL. Here we broadly partition these methods into three categories: task-specific, regularization, and replay-based approaches.

Task-specific methods. One way to reduce interference between tasks is to use different parts of a neural network for different problems. For a fixed-size network, such specialization can be achieved by learning a separate mask for each task (Fernando et al., 2017; Serra et al., 2018), by a priori defining a different, random mask for every task to be learned (Masse et al., 2018), or by using a different set of parameters for each task (Zeng et al., 2019; Hu et al., 2019). Other methods let a neural network grow or recruit new resources when it encounters new tasks, examples of which are progressive neural networks (Rusu et al., 2016) and dynamically expandable networks (Yoon et al., 2017). Although these task-specific approaches are generally successful in reducing catastrophic forgetting, an important disadvantage is that they require knowledge of the task identity at both training and test time. These methods are therefore not suitable for class-incremental learning.

Regularization-based methods. Regularization is used in continual learning to encourage stability of those aspects of the network that are important for previously learned tasks. A popular strategy is to add a regularization loss to penalize changes to model parameters that are important for previous tasks. EWC (Kirkpatrick et al., 2017) and online EWC (Schwarz et al., 2018) evaluate the importance of each parameter using the diagonal elements of the Fisher information matrices, while SI (Zenke et al., 2017) tracks the past and current parameters and estimates their importance online. An alternative strategy is to regularize the network at the functional level. Learning without Forgetting (Li & Hoiem, 2017) uses knowledge distillation to encourage stability of the network's learned input-output mapping. However, these regularization-based approaches gradually reduce the model's capacity for learning new tasks, and they have been shown to consistently fail in class-incremental learning (Farquhar & Gal, 2018; van de Ven & Tolias, 2019).

Replay methods. To preserve knowledge, replay methods periodically rehearse previously acquired information during training (Robins, 1995). One way to do this, referred to as exact or experience replay, is to store data from previous tasks and revisit them when training on a new task. Although this might seem straightforward, critical non-trivial questions are how to select the data to be stored as well as exactly how to use them (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Rajasegaran et al., 2019; Hou et al., 2019; Wu et al., 2019; Mundt et al., 2020). An alternative to storing data is to generate the data to be replayed. In such generative replay (Shin et al., 2017),
, 2017 ) , a generative model is sequentially trained to generate input samples representative of those from previously seen tasks . While both types of replay can prevent catastrophic forgetting in both task- and class-incremental learning settings , an important disadvantage is that these methods are relatively expensive computationally , as each replay event requires at least one forward and one backward pass through the model . In addition , storing data might not always be possible , while incrementally training a generative model is a challenging problem in itself ( Lesort et al. , 2019 ; van de Ven et al. , 2020 ) . In contrast , our EBMs reduce catastrophic forgetting without requiring knowledge of task identity , without gradually restricting the model ’ s learning capabilities and without using stored data . 3 BACKGROUND . In this section , we introduce traditional continual learning methods , which are typically built on a feed-forward classifier . We will show the limitations of such structures and , in the next section , how EBMs can be applied to these problems .
This paper explores the use of EBMs in continual learning for classification. Although the application of EBMs to continual learning is novel, the general idea is a special case of using EBMs for structured prediction, which has been widely studied. For instance, multi-class classification can be considered a special case of multi-label classification, which has been studied in Belanger and McCallum (2016) and a set of follow-up works. The main difference is that multi-class classification is a simpler problem: all possible classes can be enumerated in O(N), whereas in multi-label classification more complicated inference, such as gradient-descent-based approaches, must be used.
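Because the label set can be enumerated, the negative phase of contrastive divergence can be computed exactly here rather than approximated by sampling. The following is a minimal sketch of that exact training objective under our own assumptions ( a generic `model(x, y)` energy network ; this is not the authors' released code ) :

```python
import torch
import torch.nn.functional as F

def enumeration_loss(model, x, y_true, candidate_labels):
    """Push down the energy of the true class and push up the energies of the
    other classes. With a small, enumerable label set the partition function
    is exact, so this reduces to cross-entropy on the negative energies."""
    energies = torch.stack([model(x, y) for y in candidate_labels])  # (K,)
    true_idx = candidate_labels.index(y_true)
    log_probs = F.log_softmax(-energies, dim=0)  # softmax(-E) should favor y_true
    return -log_probs[true_idx]
```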
SP:ff7570a39b118ef58a9bf05561824c85c5b48535
Variational Structured Attention Networks for Dense Pixel-Wise Prediction
1 INTRODUCTION . Over the past decade , convolutional neural networks ( CNNs ) have become the privileged methodology to address computer vision tasks requiring dense pixel-wise prediction , such as semantic segmentation ( Chen et al. , 2016b ; Fu et al. , 2019 ) , monocular depth prediction ( Liu et al. , 2015 ; Roy & Todorovic , 2016 ) , contour detection ( Xu et al. , 2017a ) and surface normal computation ( Eigen et al. , 2014 ) . Recent studies provided clear evidence that attention mechanisms ( Mnih et al. , 2014 ) within deep networks are a crucial factor in improving performance ( Chen et al. , 2016b ; Xu et al. , 2017a ; Fu et al. , 2019 ; Zhan et al. , 2018 ) . In particular , previous works demonstrated that deeply learned attention , acting as soft weights that interact with different deep features at each channel ( Zhong et al. , 2020 ; Zhang et al. , 2018 ; Song et al. , 2020 ) and at each pixel location ( Li et al. , 2020a ; Johnston & Carneiro , 2020 ; Tay et al. , 2019 ) , improves pixel-wise prediction accuracy ( see Fig.1.a and Fig.1.b ) . Recently , Fu et al . ( 2019 ) proposed the Dual Attention Network ( DANet ) , embedding in a fully convolutional network ( FCN ) two complementary attention modules , specifically conceived to separately model the semantic dependencies associated with the spatial and the channel dimensions ( Fig.1.c ) . Concurrently , other approaches have considered the use of structured attention models integrated within a graph network framework ( Zhang et al. , 2020 ; Chen et al. , 2019 ; Xu et al. , 2017a ) , showing the empirical advantage of adopting a graphical model to effectively capture the structured information present in the hidden layers of the neural network and thus enabling the learning of better deep feature representations . Notably , Xu et al . ( 2017a ) first introduced attention-gated conditional random fields ( AG-CRFs ) , a convolutional neural network implementing a probabilistic graphical model that considers attention variables as gates ( Minka & Winn , 2009 ) in order to learn improved deep features and effectively fuse multi-scale information . However , their structured attention model is only learned at the spatial level , while channel-wise dependencies are not considered . This paper advances the state of the art in dense pixel-wise prediction by proposing a novel approach to learn more effective deep representations by integrating a structured attention model which jointly accounts for spatial- and channel-level dependencies using an attention tensor ( Fig.1.d ) within a CRF framework . More precisely , inspired by Xu et al . ( 2017a ) , we model the attention as gates . Crucially , we address the question of how to enforce structure within these latent gates , in order to jointly model spatial- and channel-level dependencies while learning deep features . To do so , we hypothesize that the attention tensor is nothing but the sum of T rank-1 tensors , each of them being the tensor product of a spatial attention map and a channel attention vector . This attention tensor is used as a structured latent attention gate , enhancing the feature maps . We cast the inference problem into a maximum-likelihood estimation formulation that is made computationally tractable thanks to a variational approximation . Furthermore , we implement the maximum likelihood update rules within a neural network , so that they can be jointly learned with the preferred CNN front-end .
We call our approach , based on structured attention and variational inference , VarIational STructured Attention Networks , or VISTA-Net . We evaluate our method on multiple pixel-wise prediction problems , i.e . monocular depth estimation , semantic segmentation and surface normal prediction , considering six publicly available datasets , i.e . NYUD-V2 ( Silberman et al. , 2012 ) , KITTI ( Geiger et al. , 2013 ) , Pascal-Context ( Mottaghi et al. , 2014 ) , Pascal VOC2012 ( Everingham et al. , 2010 ) , Cityscapes ( Cordts et al. , 2016 ) and ScanNet ( Dai et al. , 2017 ) . Our results demonstrate that VISTA-Net is able to learn rich deep representations thanks to the proposed structured attention and our probabilistic formulation , outperforming state-of-the-art methods . Related Work . Several works have considered integrating attention models within deep architectures to improve performance in several tasks such as image categorization ( Xiao et al. , 2015 ) , speech recognition ( Chorowski et al. , 2015 ) and machine translation ( Vaswani et al. , 2017 ; Kim et al. , 2017 ; Luong et al. , 2015 ) . Focusing on pixel-wise prediction , Chen et al . ( 2016b ) first described an attention model to combine multi-scale features learned by a FCN for semantic segmentation . Zhang et al . ( 2018 ) designed EncNet , a network equipped with a channel attention mechanism to model global context . Zhao et al . ( 2018 ) proposed to account for pixel-wise dependencies by introducing relative position information along the spatial dimension within the convolutional layers . Huang et al . ( 2019b ) described CCNet , a deep architecture that embeds a criss-cross attention module with the idea of modeling contextual dependencies using sparsely-connected graphs , so as to achieve higher computational efficiency . Fu et al . ( 2019 ) proposed to model semantic dependencies associated with spatial and channel dimensions by using two separate attention modules . Zhong et al . ( 2020 ) introduced a squeeze-and-attention network ( SANet ) specialized for pixel-wise prediction that takes into account spatial and channel inter-dependencies in an efficient way . Attention was first adopted within a CRF framework by Xu et al . ( 2017a ) , who introduced gates to control the message passing between latent variables and showed that this strategy is effective for contour detection . Our work significantly departs from these previous approaches , as we introduce a novel structured attention mechanism , jointly handling spatial- and channel-level dependencies within a probabilistic framework . Notably , we also show that our model can be successfully employed on several challenging dense pixel-level prediction tasks . Our work is also closely related to previous studies on dual graph convolutional networks ( Zhang et al. , 2019c ) and dynamic graph message passing networks ( Zhang et al. , 2020 ) , which have been successfully used for pixel-level prediction tasks . However , while they also resort to message passing for learning refined deep feature representations , they lack a probabilistic formulation . Finally , previous studies ( Xu et al. , 2017c ; Arnab et al. , 2016 ; Chen et al. , 2019 ) described CRF-based models for pixel-wise estimation , e.g . to learn and optimally fuse deep representations at multiple scales . However , they did not employ structured attention gates . 2 VARIATIONAL STRUCTURED ATTENTION NETWORKS : VISTA-NET .
As previously discussed , we aim to enhance the learned representation by structuring the attention within a probabilistic formulation . On the one hand , inducing structure in attention mechanisms has been proven successful ( Fu et al. , 2019 ; Zhong et al. , 2020 ) . On the other hand , probabilistic formulations combined with deep architectures are attractive for pixel-level prediction tasks ( Xu et al. , 2017b ) . To the best of our knowledge , we are the first to bring together recent advances in pixel-wise prediction by formulating a novel structured attention mechanism within a probabilistic CRF-like inference framework . Inspired by Fu et al . ( 2019 ) , where two full-rank spatial- and channel-wise tensors are computed , we opt to infer separate spatial and channel attention variables . Very differently from Fu et al . ( 2019 ) , we propose to structure a generic attention tensor $a$ of dimension $W \times H \times C$ ( width , height , channels ) as the sum of T rank-1 tensors :

$$a = \sum_{t=1}^{T} m_t \otimes v_t , \quad m_t \in \mathbb{R}^{1 \times W \times H} , \; v_t \in \mathbb{R}^{C \times 1 \times 1} , \qquad (1)$$

meaning that $m_t$ can be understood as an image of $W \times H$ pixels and $v_t$ as a vector of dimension $C$ , and $\otimes$ denotes the tensor product , in the case above leading to a 3-way tensor of dimensions $W \times H \times C$ . Each of the tensor products within the sum yields a rank-1 tensor , consequently limiting the rank of $a$ to be at most T . Equation ( 1 ) is the algebraic expression of the proposed structured attention mechanism , and is the methodological foundation of VISTA-Net . Moreover , we draw on the CRF formulation with gating variables proposed in ( Xu et al. , 2017a ) , and derive a new energy function and variational approximation to enable efficient learning and inference procedures . Additionally , this formulation allows us to consider the CRF kernels as latent variables and infer them from the data , together with the structured attention variables $m_t$ and $v_t$ . We believe learning the kernels is important because it allows the CRF to weight the information flow depending on the content of the image rather than keeping the same weights for all images . We assume a generic CNN front-end providing a set of S multi-scale feature maps $F = \{ f_s \}_{s=1}^{S}$ . To ease notation , we assume that each feature map has P pixels and C channels , but in practice these dimensions depend on the scale s . For each scale , we also consider the set of hidden variables $z_s$ corresponding to $f_s$ , and $Z = \{ z_s \}_{s=1}^{S}$ . These hidden variables correspond to refined convolutional features that incorporate information and attention from other feature maps , so as to better represent the key information for the pixel-level task at hand . Intuitively , the structured attention tensor should help refine the hidden variables to allow better performance at various pixel-level prediction tasks . As in ( Xu et al. , 2017a ) , for every pair of emitting scale $e$ and receiving scale $r$ , we consider a dedicated attention tensor $a_{e,r}$ . Very importantly , in our case this attention tensor is structured following ( 1 ) , and so we have a set of hidden spatial attention maps $M = \{ m^t_{e,r} \}_{e,r,t=1}^{S,S,T}$ and hidden channel attention vectors $V = \{ v^t_{e,r} \}_{e,r,t=1}^{S,S,T}$ . More precisely , $m^t_{e,r} \in \{0,1\}^P$ and $v^t_{e,r} \in \{0,1\}^C$ are a binary spatial map and a stochastic channel-wise vector , hence $\sum_{c=1}^{C} v^{t,c}_{e,r} = 1$ . In this way , we reduce ambiguity and ease the learning . This also means that the model is conceived to pay attention to only T channels of the feature map .
While this could seem limiting at first glance , we remark that : ( i ) the model learns which are the optimal T channels , among the possible C , to be used to refine the hidden variables , and ( ii ) the posterior distribution of $m_t$ boils down to a convex combination of all channels , as will become clear when discussing the inference procedure . 2.1 ENERGY FUNCTION AND VARIATIONAL APPROXIMATION . Our model consists of three different latent variables : the hidden features Z , and the hidden attention maps M and vectors V . In addition , we also consider inferring the CRF kernels , denoted by K , from the data . More precisely , the energy function associated with the proposed model writes :

$$-E(Z,M,V,K,F,\Theta) = \sum_{r}\sum_{p,c}\phi_z(z^{p,c}_r, f^{p,c}_r) + \sum_{e,r}\sum_{p,c,p',c'}\sum_{t} m^{t,p}_{e,r} v^{t,c}_{e,r}\,\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) + \phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}), \qquad (2)$$

where $\phi_z$ , $\phi_k$ and $\psi$ are potentials to be defined and $k^{e,p',c'}_{r,p,c}$ denotes the kernel value weighting the information flow from the $(p',c')$-th value of the feature map of scale $e$ to the $(p,c)$-th value of the feature map of scale $r$ . Since the exact posterior distribution is not computationally tractable , we opt to approximate it with the following family of separable distributions :

$$p(Z,M,V,K \mid F,\Theta) \approx q(Z,M,V,K) = q_z(Z)\, q_m(M)\, q_v(V)\, q_k(K). \qquad (3)$$

In that case , the optimal solution for each factor of the distribution is obtained by taking the expectation with respect to all the others , for instance :

$$q_z(Z) \propto \exp\big(-\mathbb{E}_{q_m(M)\, q_v(V)\, q_k(K)}\{E(Z,M,V,K,F,\Theta)\}\big). \qquad (4)$$

It can be shown that the optimal variational factors write :

$$q_z(z^{p,c}_r) \propto \exp\Big(\phi_z(z^{p,c}_r, f^{p,c}_r) + \sum_{e \neq r}\sum_{t}\bar m^{t,p}_{e,r}\,\bar v^{t,c}_{e,r}\sum_{p',c'}\mathbb{E}_{q_z q_k}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big),$$
$$q_m(m^{t,p}_{e,r}) \propto \exp\Big(m^{t,p}_{e,r}\sum_{c}\bar v^{t,c}_{e,r}\sum_{p',c'}\mathbb{E}_{q_z, q_k}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big),$$
$$q_v(v^{t,c}_{e,r}) \propto \exp\Big(v^{t,c}_{e,r}\sum_{p}\bar m^{t,p}_{e,r}\sum_{p',c'}\mathbb{E}_{q_z, q_k}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big),$$
$$q_k(k^{e,p',c'}_{r,p,c}) \propto \exp\Big(\phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}) + \sum_{t}\bar m^{t,p}_{e,r}\,\bar v^{t,c}_{e,r}\,\mathbb{E}_{q_z}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big), \qquad (5)$$

where $\bar m^{t,p}_{e,r} = \mathbb{E}_{q_m}\{m^{t,p}_{e,r}\}$ denotes the posterior mean , and analogously for $\bar v^{t,c}_{e,r}$ . This result also implies that , thanks to the variational approximation in ( 3 ) , the posterior distributions factorize over each of the variables above , e.g . $q_z(Z) = \prod_{r,p,c=1}^{S,P,C} q_z(z^{p,c}_r)$ . The relation between the various hidden variables and their inference is shown in Figure 2 ( left ) . In addition , we also show the information flow between the hidden variables using arrows . Finally , in Figure 2 ( right ) we show the relation between the channel-wise and spatial attention variables and how the final structured attention tensor is computed .
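To make the rank-T structure in equation ( 1 ) concrete , the following is a minimal NumPy sketch of how the attention tensor is assembled from spatial maps and channel vectors ; the array names , shapes , and the random placeholders ( the actual maps and vectors are inferred , not sampled ) are our own illustrative assumptions , not the authors' code :

```python
import numpy as np

T, W, H, C = 4, 32, 32, 64  # number of rank-1 terms and tensor dimensions (illustrative)

# T spatial attention maps (W x H) and T channel attention vectors (C,).
m = np.random.rand(T, W, H)               # stand-ins for the inferred posterior means
v = np.random.rand(T, C)
v = v / v.sum(axis=1, keepdims=True)      # stochastic channel vectors: each sums to 1

# Structured attention tensor: sum of T rank-1 tensor (outer) products, eq. (1).
a = np.einsum('twh,tc->whc', m, v)        # shape (W, H, C), rank at most T

assert a.shape == (W, H, C)
```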
This paper proposes VarIational STructured Attention Networks (VISTA-Net), which improve on previous SOTA models for dense pixel-wise prediction tasks. The proposed VISTA-Net has two key features: 1) a new structured attention mechanism that jointly models spatial-level and channel-level dependencies; 2) the structured attention is incorporated into a CRF-like inference framework, which enables probabilistic inference. Experimental studies are conducted on monocular depth estimation and semantic image segmentation, showing consistently improved performance for VISTA-Net.
SP:5e964be1417deb994f62cd256e24ed7cafd2bd9c
Manifold Regularization for Locally Stable Deep Neural Networks
1 INTRODUCTION . Recent results in deep learning highlight the remarkable performance deep neural networks can achieve on tasks using data from the natural world , such as images , video , and audio . Though such data inhabits an input space of high dimensionality , the physical processes which generate the data often manifest significant biases , causing realistic inputs to be sparse in the input space . One way of capturing this intuition is the manifold assumption , which states that input data is not drawn uniformly from the input space , but rather supported on some smooth submanifold(s) of much lower dimension . Starting with the work of Belkin et al . ( 2006 ) , this formulation has been studied extensively in the setting of semi-supervised kernel and regression methods , where algorithms exploit the unlabelled data points to learn functions which are smooth on the input manifold ( Geng et al. , 2012 ; Goldberg et al. , 2008 ; Niyogi , 2013 ; Sindhwani et al. , 2005 ; Tsang and Kwok , 2007 ; Xu et al. , 2010 ) . Such techniques have seen less use in the context of deep neural networks , owing in part to the ability of such models to generalize from relatively sparse data ( Zhang et al. , 2016 ) . Contributions We apply concepts from manifold regularization to train locally stable deep neural networks . In light of recent results showing that neural networks suffer widely from adversarial inputs ( Szegedy et al. , 2013 ) , our goal is to learn a function which does not vary much in the neighborhoods of natural inputs , independently of whether the network classifies correctly . We show that this definition of local stability has a natural interpretation in the context of manifold regularization , and propose an efficient regularizer based on an approximation of the graph Laplacian when the data is sparse , i.e. , the pairwise distances are large . Crucially , our regularizer exploits the continuous piecewise linear nature of ReLU networks to learn a function which is smooth over the data manifold in not only its outputs but also its decision boundaries . We evaluate our approach by training neural networks with our regularizers for the task of image classification on CIFAR-10 ( Krizhevsky et al. , 2009 ) . Empirically , our networks exhibit robustness against a variety of adversarial models implementing $\ell_2$ , $\ell_\infty$ , and Wasserstein-based attacks . We also achieve state-of-the-art verified robust accuracy under $\ell_\infty$ perturbations of size $\epsilon = 8/255$ . Furthermore , our regularizers are cheap : we simply evaluate the network at two additional random points for each training sample , so the total computational cost is on par with three parallel forward passes through the network . Our techniques thus present a novel , regularization-only approach to learning robust neural networks , which achieves performance comparable to existing defenses while also being an order of magnitude more efficient . 2 BACKGROUND . Manifold regularization The manifold assumption states that input data is not drawn uniformly from the input domain $\mathcal{X}$ , also known as the ambient space , but rather is supported on a submanifold $\mathcal{M} \subset \mathcal{X}$ , called the intrinsic space . There is thus a distinction between regularizing on the ambient space , where the learned function is smooth with respect to the entire input domain ( e.g. , Tikhonov regularization ( Phillips , 1962 ; Tikhonov et al. , 2013 ) ) , and regularizing over the intrinsic space , which uses the geometry of the input submanifold to determine the regularization norm .
A common form of manifold regularization assumes the gradient of the learned function , $\nabla_{\mathcal{M}} f(x)$ , should be small where the probability of drawing a sample is large ; we call such functions “ smooth ” . Let $\mu$ be a probability measure with support $\mathcal{M}$ . This leads to the following intrinsic regularizer :

$$\|f\|_I^2 := \int_{\mathcal{M}} \|\nabla_{\mathcal{M}} f(x)\|^2 \, d\mu(x) \qquad (1)$$

In general , we can not compute this integral because $\mathcal{M}$ is not known , so Belkin et al . ( 2006 ) propose the following discrete approximation that converges to the integral as the number of samples grows :

$$\|f\|_I^2 \approx \frac{1}{N^2} \sum_{i,j=1}^{N} \big(f(x_i) - f(x_j)\big)^2 L_{i,j} \qquad (2)$$

Here , the $x_1 , \ldots , x_N$ are samples drawn , by assumption , from the input manifold $\mathcal{M}$ according to $\mu$ , and $L$ is a matrix of weights measuring the similarity between samples . The idea is to approximate the continuous input manifold using a discrete graph , where the vertices are samples , the edge weights are distances between points , and the Laplacian matrix $L$ encodes the structure of this graph . A common choice of weights is a heat kernel : $L_{i,j} = L(x_i , x_j) := \exp(-\|x_i - x_j\|^2 / s)$ . To improve computational costs , weights are often truncated to the k-nearest neighbors or within some $\epsilon$-ball . Note that the Laplacian can also be interpreted as a discrete matrix operator , which converges under certain conditions to the continuous Laplace operator ( Belkin and Niyogi , 2008 ) . ReLU networks Our development focuses on a standard architecture for deep neural networks : fully-connected feedforward networks with ReLU activations . In general , we can write the function represented by such a network with n layers and parameters $\theta = \{A_i , b_i\}_{i=1,\ldots,n-1}$ as

$$z_0 = x \qquad (3)$$
$$\hat z_i = A_i \cdot z_{i-1} + b_i \quad \text{for } i = 1 , \ldots , n-1 \qquad (4)$$
$$z_i = \sigma(\hat z_i) \quad \text{for } i = 1 , \ldots , n-2 \qquad (5)$$
$$f(x ; \theta) = \hat z_{n-1} \qquad (6)$$

where the $A_i$ are the weight matrices and the $b_i$ are the bias vectors . We call the $z_i$ “ hidden activations ” , or more simply , activations , and the $\hat z_i$ “ pre-activations ” . In this work , we consider networks in which $\sigma(\cdot)$ in ( 5 ) is the Rectified Linear Unit ( ReLU )

$$z_i = \sigma(\hat z_i) := \max(0 , \hat z_i) \qquad (7)$$

It is clear from this description that ReLU networks are a family of continuous piecewise linear functions . We denote the linear function induced by an input $x$ as $f_x(\cdot ; \theta)$ , i.e. , the analytic extension of the local linear component about $x$ over the input domain . Adversarial robustness One common measure of robustness for neural networks is against a norm-bounded adversary . In this model , the adversary is given an input budget $\epsilon$ over a norm $\|\cdot\|_{in}$ , and asked to produce an output perturbation $\delta$ over a norm $|\cdot|_{out}$ . A point $x'$ is an $\epsilon$-$\delta$ adversarial example for an input pair $(x , y)$ if

$$\|x' - x\|_{in} \le \epsilon \qquad (8)$$
$$|f(x' ; \theta) - y|_{out} \ge \delta \qquad (9)$$

When the specific norm is either unimportant or clear from context , we also write the first condition as $x' \in N_\epsilon(x)$ , where $N_\epsilon(x)$ refers to the $\epsilon$-ball or neighborhood about $x$ . If such an adversarial example does not exist , we say that the network is $\epsilon$-$\delta$ robust at $x$ . Standard examples of $\|\cdot\|_{in}$ include the $\ell_2$ and $\ell_\infty$ “ norm ” , defined for vectors as $\|x\|_\infty := \max_i |x_i|$ . For classification tasks , the adversary is successful if it produces an example in the $\epsilon$-neighborhood of $x$ which causes the network to misclassify . In this case , we drop $\delta$ and say that the network is $\epsilon$-robust at $x$ . Note that if $f(x ; \theta)$ is already incorrect , then $x$ suffices as an adversarial example .
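As a concrete illustration of the discrete approximation in equation ( 2 ) , here is a small self-contained sketch that computes the heat-kernel weights and the resulting manifold regularization term for a batch of samples ; the function names and the use of full ( untruncated ) pairwise weights are our own simplifying assumptions :

```python
import numpy as np

def heat_kernel_weights(X, s=1.0):
    """Pairwise similarity weights L_ij = exp(-||x_i - x_j||^2 / s)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # (N, N)
    return np.exp(-sq_dists / s)

def manifold_reg(f_vals, X, s=1.0):
    """Discrete intrinsic regularizer (1/N^2) * sum_ij (f(x_i)-f(x_j))^2 L_ij, eq. (2)."""
    N = len(X)
    L = heat_kernel_weights(X, s)
    diffs = (f_vals[:, None] - f_vals[None, :]) ** 2           # (N, N)
    return (diffs * L).sum() / N**2

# Toy usage: samples on a 1-D "manifold" embedded in 2-D, and a scalar function.
X = np.stack([np.linspace(0, 1, 50), np.zeros(50)], axis=1)
f_vals = np.sin(2 * np.pi * X[:, 0])
print(manifold_reg(f_vals, X, s=0.1))
```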
3 RELATED WORK . Manifold regularization was first introduced by Belkin et al . ( 2006 ) in the context of semi-supervised learning , where the goal was to leverage unlabeled samples to learn a function which behaves well ( e.g. , is smooth , or has low complexity ) over the data manifold . The use of manifold regularization for deep neural networks has been explored in several contexts ( Tomar and Rose , 2014 ; 2016 ; Hu et al. , 2018 ; Zhu et al. , 2018 ) . In particular , Lee et al . ( 2015 ) combine manifold regularization with adversarial training and show improvements in standard test accuracy . Our approach is to use manifold regularization to induce stability separately from accuracy . We note that a similar decomposition between accuracy and stability forms the basis for the TRADES algorithm ( Zhang et al. , 2019 ) , though the training procedure ultimately relies on adversarial training . Hein and Andriushchenko ( 2017 ) propose a conceptually similar regularizer to minimize the difference between logits and show improved $\ell_2$ certified robustness . Finally , several prior works explore regularizing for robustness based on other differential forms ( Ross and Doshi-Velez , 2017 ; Jakubovitz and Giryes , 2018 ) , though they only report results using the weaker single-step PGD adversary . In particular , a recent work by Zhai et al . ( 2020 ) uses randomized smoothing for $\ell_2$ certified robustness , and claims a similar computational advantage due to avoiding adversarial training , but still takes 61 hours to train , compared to only 3 hours in our approach . Adversarial examples were introduced by Szegedy et al . ( 2013 ) , who found that naively trained neural networks suffer almost complete degradation of performance on natural images under slight perturbations which are imperceptible to humans . A standard class of defenses is adversarial training , which is characterized by training on adversarially generated input points ( Goodfellow et al. , 2014 ) . In particular , the Projected Gradient Descent ( PGD ) attack ( Kurakin et al. , 2016 ; Madry et al. , 2017 ) is widely considered to be an empirically sound algorithm for both training and evaluation of robust models . However , such training methods rely on solving an inner optimization via an iterative method , effectively increasing the number of epochs by a multiplicative factor ( e.g . , an overhead of 5–10x for standard PGD ) . Achieving robustness against multiple adversarial models has also been explored previously ( Schott et al. , 2018 ; Tramer and Boneh , 2019 ; Maini et al. , 2019 ; Croce and Hein , 2019b ) , though in most cases these works use weaker variants of the subset of standard adversaries we consider ( e.g. , a smaller $\epsilon$ or the single-step version of PGD ) . Another approach is to train models which are provably robust . One method is to use an exact verification method , such as an MILP solver , to prove that the network is robust on given inputs ( Tjeng et al. , 2017 ) . In particular , Xiao et al . ( 2019 ) use a similar loss based on ReLU pre-activations to learn stable ReLUs for efficient verification , but rely on a PGD adversary to train a robust model . Certification methods modify models to work directly with neighborhoods instead of points ( Dvijotham et al. , 2018 ; Gowal et al. , 2018 ; Mirman et al. , 2018 ; Wong et al. , 2018 ) . In practice , the inference algorithms must overapproximate the neighborhoods to preserve soundness while keeping the representation compact as it passes through the network .
This strategy can be interpreted as solving a convex relaxation of the exact verification problem . Though certification thus far has produced better lower bounds , verification as a technique is fully general and can be applied to any model ( given sufficient time ) ; recent work also suggests that methods using layerwise convex relaxations may face an inherent barrier to tight verification ( Salman et al. , 2019 ) .
This work introduces manifold regularization as an approach for learning stable deep nets, towards the goal of adversarial robustness. Several regularizers are proposed: intrinsic, sparse Laplacian and Hamming regularizers. As the proposed method relies only on adding these regularization terms to the loss, it is more computationally efficient than methods that require computation of adversarial examples during training. The proposed method is evaluated on CIFAR-10 under $\ell_2$ and $\ell_{\infty}$ ball attacks and shown to be state-of-the-art in terms of verifiable robustness to $\ell_{\infty}$ attacks at $\epsilon = 8/255$.
SP:0d62919086db1e43bdd5acbb80c25f82e5466cf6
Inverse Constrained Reinforcement Learning
1 INTRODUCTION . Reward functions are a critical component in reinforcement learning settings . As such , it is important that reward functions are designed accurately and are well-aligned with the intentions of the human designer . This is known as agent ( or value ) alignment ( see , e.g. , Leike et al . ( 2018 ; 2017 ) ; Amodei et al . ( 2016 ) ) . Misspecified rewards can lead to unwanted and unsafe situations ( see , e.g. , Amodei & Clark ( 2016 ) ) . However , designing accurate reward functions remains a challenging task . Human designers , for example , tend to prefer simple reward functions that agree well with their intuition and are easily interpretable . For example , a human designer might choose a reward function that encourages an RL agent driving a car to minimize its traveling time to a certain destination . Clearly , such a reward function makes sense in the case of a human driver since inter-human communication is contextualized within a framework of unwritten and unspoken constraints , often colloquially termed ‘ common sense ’ . That is , while a human driver will try to minimize their traveling time , they will be careful not to break traffic rules , take actions that endanger passersby , and so on . However , we can not assume such behaviors from RL agents since they are not imbued with common-sense constraints . Constrained reinforcement learning provides a natural framework for maximizing a reward function subject to some constraints ( we refer the reader to Ray et al . ( 2019 ) for a brief overview of the field ) . However , in many cases , these constraints are hard to specify explicitly in the form of mathematical functions . One way to address this issue is to automatically extract constraints by observing the behavior of a constraint-abiding agent . Consider , for example , the cartoon in Figure 1 . Agents start at the bottom-left corner and are rewarded according to how quickly they reach the goal at the bottom-right corner . However , what this reward scheme misses is that in the real world the lower bridge is occupied by a lion which attacks any agents attempting to pass through it . Therefore , agents that are naïvely trained to maximize the reward function will end up performing poorly in the real world . If , on the other hand , the agent had observed that the expert ( in Figure 1 ( a ) ) actually performed suboptimally with respect to the stipulated reward scheme by taking a longer route to the goal , it could have concluded that ( for some unknown reason ) the lower bridge must be avoided and consequently would not have been eaten by the lion ! Scobee & Sastry ( 2020 ) formalizes this intuition by casting the problem of recovering constraints in the maximum entropy framework for inverse RL ( IRL ) ( Ziebart et al. , 2008 ) and proposes a greedy algorithm to infer the smallest number of constraints that best explain the expert behavior . However , Scobee & Sastry ( 2020 ) has two major limitations : it assumes ( 1 ) a tabular ( discrete ) setting , and ( 2 ) knowledge of the environment ’ s transition dynamics . In this work , we aim to address both of these issues by instead learning a constraint function through a sample-based approximation of the objective function of Scobee & Sastry . Consequently , our approach is model-free , admits continuous states and actions and can learn arbitrary Markovian constraints ( see Footnote 1 ) . Further , we empirically show that it scales well to high dimensions .
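The footnote's definition of a Markovian constraint suggests a direct parameterization : a per-state-action network whose per-step outputs are multiplied along a trajectory . Below is a minimal sketch of that idea under our own assumptions ( the architecture and names are illustrative , not the authors' code ) :

```python
import torch
import torch.nn as nn

class ConstraintNet(nn.Module):
    """c_theta(s, a) in [0, 1]: per-step feasibility of a state-action pair."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def trajectory_feasibility(c_net, states, actions):
    """Markovian constraint on a whole trajectory: c(tau) = prod_t c(s_t, a_t)."""
    return c_net(states, actions).prod()
```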
Typical inverse RL methods only make use of expert demonstrations and do not assume any knowledge about the reward function at all . However , most reward functions can be expressed in the form “ do this task while not doing these other things ” , where the other things are generally constraints that a designer wants to impose on an RL agent . The main task ( “ do this ” ) is often quite easy to encode in the form of a simple nominal reward function . In this work , we focus on learning the constraint part ( “ do not do that ” ) from provided expert demonstrations and using it in conjunction with the nominal reward function to train RL agents . In this perspective , our work can be seen as a principled way to inculcate prior knowledge about the agent ’ s task in IRL . This is a key advantage over other IRL methods , which also often end up making assumptions about the agent ’ s task in the form of regularizers such as in Finn et al . ( 2016 ) . The main contributions of our work are as follows : • We formulate the problem of inferring constraints from a set of expert demonstrations as a learning problem , which allows it to be used in continuous settings . To the best of our knowledge , this is the first work in this regard . • We eliminate the need to assume , as Scobee & Sastry do , the environment ’ s transition dynamics . • We demonstrate the ability of our method to train constraint-abiding agents in high dimensions and show that it can also be used to prevent reward hacking . 2 PRELIMINARIES . 2.1 UNCONSTRAINED RL . A finite-horizon Markov Decision Process ( MDP ) $\mathcal{M}$ is a tuple $(\mathcal{S} , \mathcal{A} , p , r , \gamma , T)$ , where $\mathcal{S} \in \mathbb{R}^{|\mathcal{S}|}$ is a set of states , $\mathcal{A} \in \mathbb{R}^{|\mathcal{A}|}$ is a set of actions , $p : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto [0 , 1]$ is the transition probability function ( where $p(s'|s , a)$ denotes the probability of transitioning to state $s'$ from state $s$ by taking action $a$ ) , $r : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ is the reward function , $\gamma$ is the discount factor and $T$ is the time-horizon . ( Footnote 1 : Markovian constraints are of the form $c(\tau) = \prod_{t=1}^{T} c(s_t , a_t)$ , i.e . the constraint function is independent of the past states and actions in the trajectory . ) A trajectory $\tau = \{s_1 , a_1 , \ldots , s_T , a_T\}$ denotes a sequence of state-action pairs such that $s_{t+1} \sim p(\cdot|s_t , a_t)$ . A policy $\pi : \mathcal{S} \mapsto P(\mathcal{A})$ is a map from states to probability distributions over actions , with $\pi(a|s)$ denoting the probability of taking action $a$ in state $s$ . We will sometimes abuse notation to write $\pi(s , a)$ to mean the joint probability of visiting state $s$ and taking action $a$ under the policy $\pi$ , and similarly $\pi(\tau)$ to mean the probability of the trajectory $\tau$ under the policy $\pi$ . Define $r(\tau) = \sum_{t=1}^{T} \gamma^t r(s_t , a_t)$ to be the total discounted reward of a trajectory . Forward RL algorithms try to find an optimal policy $\pi^*$ that maximizes the expected total discounted reward $J(\pi) = \mathbb{E}_{\tau \sim \pi}[r(\tau)]$ . On the other hand , given a set of trajectories sampled from the optimal ( also referred to as expert ) policy $\pi^*$ , inverse RL ( IRL ) algorithms aim to recover the reward function $r$ , which can then be used to learn the optimal policy $\pi^*$ via some forward RL algorithm . 2.2 CONSTRAINED RL . While normal ( unconstrained ) RL tries to find a policy that maximizes $J(\pi)$ , constrained RL instead focuses on finding a policy that maximizes $J(\pi)$ while respecting explicitly-defined constraints . A popular framework in this regard is the one presented in Altman ( 1999 ) , which introduces the notion of a constrained MDP ( CMDP ) .
A CMDP $\mathcal{M}^c$ is a simple MDP augmented with a cost function $c : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ and a budget $\alpha \ge 0$ . Define $c(\tau) = \sum_{t=1}^{T} \gamma^t c(s_t , a_t)$ to be the total discounted cost of the trajectory $\tau$ and $J^c(\pi) = \mathbb{E}_{\tau \sim \pi}[c(\tau)]$ to be the expected total discounted cost . The forward constrained RL problem is to find the policy $\pi^*_c$ that maximizes $J(\pi)$ subject to $J^c(\pi) \le \alpha$ . In this work , given a set $\mathcal{D}$ of trajectories sampled from $\pi^*_c$ , the corresponding unconstrained MDP $\mathcal{M}$ ( i.e. , $\mathcal{M}^c$ without the cost function $c$ ) and a budget $\alpha$ , we are interested in recovering a cost function which , when augmented with $\mathcal{M}$ , has an optimal policy that generates the same set of trajectories as in $\mathcal{D}$ . We call this the inverse constrained reinforcement learning ( ICRL ) problem . If the budget $\alpha$ is strictly greater than 0 , then the cost function $c$ defines soft constraints over all possible state-action pairs . In other words , a policy is allowed to visit states and take actions that have non-zero costs as long as the expected total discounted cost remains less than $\alpha$ . If , however , $\alpha$ is 0 , then the cost function translates into hard constraints over all state-action pairs that have a non-zero cost associated with them . A policy can thus never visit these state-action pairs . In this work , we restrict ourselves to this hard constraint setting . Note that this is not particularly restrictive since , for example , safety constraints are often hard constraints , as are constraints imposed by physical laws . Since we restrict ourselves to hard constraints , we can rewrite the constrained RL problem as follows : define $C = \{(s , a) \mid c(s , a) \neq 0\}$ to be the constraint set induced by $c$ . The forward constrained RL problem is to find the optimal constrained policy $\pi^*_C$ that maximizes $J(\pi)$ subject to $\pi^*_C(s , a) = 0 \;\; \forall (s , a) \in C$ . The inverse constrained RL problem is to recover the constraint set $C$ from trajectories sampled from $\pi^*_C$ . Finally , we will refer to our unconstrained MDP as the nominal MDP hereinafter . The nominal MDP represents the nominal environment ( simulator ) in which we train our agent . 3 FORMULATION . 3.1 MAXIMUM LIKELIHOOD CONSTRAINT INFERENCE . We take Scobee & Sastry as our starting point . Suppose that we have a set of trajectories $\mathcal{D} = \{\tau^{(i)}\}_{i=1}^{N}$ sampled from an expert $\pi^*_C$ navigating in a constrained MDP $\mathcal{M}^{C^*}$ , where $C^*$ denotes the ( true ) constraint set . Furthermore , we are also given the corresponding nominal MDP $\mathcal{M}$ ( Footnote 2 : availability of a transition dynamics model of the nominal MDP is not necessary ) . Our goal is to recover a constraint set which , when augmented with $\mathcal{M}$ , results in a CMDP that has an optimal policy that respects the same set of constraints as $\pi^*_C$ does . Scobee & Sastry pose this as a maximum likelihood problem . That is , if we let $p_{\mathcal{M}}$ denote probabilities given that we are considering MDP $\mathcal{M}$ and assume a uniform prior on all constraint sets , then we can choose $C^*$ according to

$$C^* \leftarrow \arg\max_{C} \; p_{\mathcal{M}}(\mathcal{D} \mid C). \qquad (1)$$

Under the maximum entropy ( MaxEnt ) model presented in Ziebart et al .
( 2008 ) , the probability of a trajectory under a deterministic MDP $\mathcal{M}$ can be modelled as

$$\pi_{\mathcal{M}}(\tau) = \frac{\exp(\beta r(\tau))}{Z_{\mathcal{M}}}\, \mathbb{1}^{\mathcal{M}}(\tau), \qquad (2)$$

where $Z_{\mathcal{M}} = \int \exp(\beta r(\tau))\, \mathbb{1}^{\mathcal{M}}(\tau)\, d\tau$ is the partition function , $\beta \in [0 , \infty)$ is a parameter describing how close the agent is to the optimal distribution ( as $\beta \to \infty$ the agent becomes a perfect optimizer and as $\beta \to 0$ the agent simply takes random actions ) and $\mathbb{1}$ is an indicator function that is 1 for trajectories feasible under the MDP $\mathcal{M}$ and 0 otherwise . Assume that all trajectories in $\mathcal{D}$ are i.i.d . and sampled from the MaxEnt distribution . We have

$$p(\mathcal{D} \mid C) = \frac{1}{(Z_{\mathcal{M}^C})^N} \prod_{i=1}^{N} \exp\big(\beta r(\tau^{(i)})\big)\, \mathbb{1}^{\mathcal{M}^C}(\tau^{(i)}). \qquad (3)$$

Note that $\mathbb{1}^{\mathcal{M}^C}(\tau^{(i)})$ is 0 for all trajectories that contain any state-action pair that belongs to $C$ . To maximize this , Scobee & Sastry propose a greedy strategy wherein they start with an empty constraint set and incrementally add state-action pairs that result in the maximal increase in $p(\mathcal{D} \mid C)$ .
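For intuition , the greedy scheme can be sketched as follows for a small tabular setting . This is our own schematic rendering , not the authors' implementation ; `maxent_log_likelihood` stands in for a routine that evaluates equation ( 3 ) under a candidate constraint set , and the stopping rule is an illustrative choice :

```python
def greedy_constraint_inference(demos, state_actions, maxent_log_likelihood, n_iters=10):
    """Greedily grow a constraint set one (s, a) pair at a time, choosing at each
    step the pair whose exclusion most increases the MaxEnt likelihood of the
    demonstrations (eq. 3)."""
    C = set()
    # Pairs visited by the demonstrations cannot be constrained: p(D|C) would be 0.
    demo_pairs = {(s, a) for tau in demos for (s, a) in tau}
    for _ in range(n_iters):
        candidates = [sa for sa in state_actions if sa not in demo_pairs | C]
        if not candidates:
            break
        best = max(candidates, key=lambda sa: maxent_log_likelihood(demos, C | {sa}))
        if maxent_log_likelihood(demos, C | {best}) <= maxent_log_likelihood(demos, C):
            break  # no candidate improves the likelihood further
        C.add(best)
    return C
```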
The submission focuses on a variant of inverse reinforcement learning, where the learner knows the task reward but is unaware of hard constraints that need to be respected while completing the task. The authors provide an algorithm to recover these constraints from expert demonstrations. The proposed algorithm builds upon a recent technique (Scobee & Sastry 2020) and addresses problems with large and continuous state spaces.
SP:a1bb6da48c8ed54c0bbc88d2109a17276a529c5f
The Lipschitz Constant of Self-Attention
1 INTRODUCTION . Lipschitz continuity is a strong form of continuity for functions . Loosely speaking , a function is Lipschitz continuous if changing its input by a certain amount can not change its output by more than K times that amount . The constant K is a hard constraint on how rapidly the function ’ s output can vary , and the smallest such K is known as the function ’ s Lipschitz constant . For example , $f_1(x) = \sqrt{|x|}$ and $f_2(x) = \exp(x)$ for $x \in \mathbb{R}$ are not Lipschitz continuous , because their output can change arbitrarily fast as $x$ approaches 0 and $+\infty$ respectively . On the other hand , $g_1(x) = \tanh(x)$ and $g_2(x) = \alpha x$ are Lipschitz continuous , because their rate of change ( derivative ) is bounded . In deep learning , we often use Lipschitz continuity as a constraint for neural networks , to control how much a network ’ s output can change relative to its input . Such Lipschitz constraints are useful in several contexts . For example , Lipschitz constraints can endow models with provable robustness against adversarial perturbations ( Cisse et al. , 2017 ; Tsuzuku et al. , 2018 ; Anil et al. , 2019 ) , and guaranteed generalisation bounds ( Sokolić et al. , 2017 ) . Moreover , the dual form of the Wasserstein distance is defined as a supremum over Lipschitz functions with a given Lipschitz constant , hence Lipschitz-constrained networks are used for estimating Wasserstein distances ( Peyré & Cuturi , 2019 ) . Further , Lipschitz-constrained networks can stabilise training for GANs , an example being spectral normalisation ( Miyato et al. , 2018 ) . Finally , Lipschitz-constrained networks are also used to construct invertible models and normalising flows . For example , Lipschitz-constrained networks can be used as a building block for invertible residual networks and hence flow-based generative models ( Behrmann et al. , 2019 ; Chen et al. , 2019 ) . Additionally , Neural ODEs ( Chen et al. , 2018 ; Grathwohl et al. , 2019 ) are typically defined using vector fields parameterized via Lipschitz networks , so that the flow generated by the vector field is guaranteed to exist for all times . Nonetheless , designing Lipschitz-continuous neural networks and computing ( or even upper-bounding ) their Lipschitz constant is a hard problem . Previous work mostly focused on fully-connected and convolutional networks , not only because they are common in deep learning , but also because they are relatively simple to analyze , as compositions of linear maps and pointwise non-linearities . Even in this case , however , exact evaluation of the Lipschitz constant of fully-connected and convolutional networks is NP-hard ( Virmaux & Scaman , 2018 ) and obtaining a tight upper bound remains a challenging task ( Virmaux & Scaman , 2018 ; Fazlyab et al. , 2019 ; Latorre et al. , 2020 ) . Fully-connected and convolutional networks are not the only neural networks worthy of interest . Recently , self-attention ( Vaswani et al. , 2017 ) has become a popular alternative to recurrent neural networks . Self-attention is a key component of the Transformer ( Vaswani et al. , 2017 ) , which has found success as a building block in models of various data modalities , starting with natural-language processing ( Vaswani et al. , 2017 ; Devlin et al. , 2019 ; Brown et al. , 2020 ) and extending to computer vision ( Zhang et al. , 2019 ; Parmar et al. , 2019 ) , audio generation ( Huang et al. , 2019 ) , and reinforcement learning ( Parisotto et al. , 2020 ) .
However , so far no previous work has analyzed the Lipschitz properties of self-attention , and thus it has been unclear whether self-attention is a viable option in applications that require Lipschitz constraints . In this work , we address this gap in the theory of self-attention by providing a thorough analysis of its Lipschitz properties . In particular , we make the following contributions : • We prove that the widely used dot-product self-attention is not Lipschitz , and therefore not suitable to use in applications requiring Lipschitz constraints . • We formulate L2 self-attention as an alternative , and show that it is Lipschitz . • We derive a theoretical upper bound on the Lipschitz constant of L2 self-attention , and provide empirical evidence of the asymptotic tightness of the bound . • As a practical demonstration of the theory , we use this bound to formulate invertible self-attention , and explore its use in a Transformer architecture for character-level language modelling . 2 LIPSCHITZ CONSTANT OF FULLY-CONNECTED/CONVOLUTIONAL LAYERS . We first define the notion of Lipschitz continuity , and proceed to define the Lipschitz constant . Definition 2.1 . Given two metric spaces $(\mathcal{X} , d_{\mathcal{X}})$ and $(\mathcal{Y} , d_{\mathcal{Y}})$ , a function $f : \mathcal{X} \to \mathcal{Y}$ is called Lipschitz continuous ( or K-Lipschitz ) if there exists a constant $K \ge 0$ such that

$$d_{\mathcal{Y}}(f(x) , f(x')) \le K\, d_{\mathcal{X}}(x , x') \quad \text{for all } x , x' \in \mathcal{X}. \qquad (1)$$

The smallest such K is the Lipschitz constant of $f$ , denoted $\mathrm{Lip}(f)$ . In this paper , we focus on the common case where $\mathcal{X} = \mathbb{R}^n$ , $\mathcal{Y} = \mathbb{R}^m$ , and $d_{\mathcal{X}} , d_{\mathcal{Y}}$ are induced by a p-norm $\|x\|_p := (\sum_i |x_i|^p)^{1/p}$ . We will primarily consider the cases $p = 2$ and $p = \infty$ , where $\|x\|_\infty := \max_i |x_i|$ . To emphasise the dependence of the Lipschitz constant on the choice of p-norm , we will often denote it by $\mathrm{Lip}_p(f)$ . In this case , it follows directly from Definition 2.1 that the Lipschitz constant is given by

$$\mathrm{Lip}_p(f) = \sup_{x \neq x' \in \mathbb{R}^n} \frac{\|f(x) - f(x')\|_p}{\|x - x'\|_p}. \qquad (2)$$

Next , we outline some basic results that are useful for estimating Lipschitz constants , also covered in related works ( Virmaux & Scaman , 2018 ; Behrmann et al. , 2019 ) . We describe how these results are used to provide bounds on the Lipschitz constant of fully-connected networks ( FCN ) and convolutional neural networks ( CNN ) , using the fact that both are compositions of linear maps and pointwise non-linearities . To begin with , the following theorem suggests a way to bound $\mathrm{Lip}_p(f)$ for a differentiable Lipschitz function $f$ : Theorem 2.1 ( Federer , 1969 ) . Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be differentiable and Lipschitz continuous under a choice of p-norm $\|\cdot\|_p$ . Let $J_f(x)$ denote its total derivative ( Jacobian ) at $x$ . Then $\mathrm{Lip}_p(f) = \sup_{x \in \mathbb{R}^n} \|J_f(x)\|_p$ where $\|J_f(x)\|_p$ is the induced operator norm on $J_f(x)$ . Hence if $f$ is a linear map represented by a matrix $W$ then

$$\mathrm{Lip}_p(f) = \|W\|_p := \sup_{\|x\|_p = 1} \|Wx\|_p = \begin{cases} \sigma_{\max}(W) , & \text{if } p = 2 \\ \max_i \sum_j |W_{ij}| , & \text{if } p = \infty \end{cases} \qquad (3)$$

where $\|W\|_p$ is the operator norm on matrices induced by the vector p-norm , and $\sigma_{\max}(W)$ is the largest singular value of $W$ . Under this choice of norm , many common non-linearities ( including relu , sigmoid , tanh , elu ) are 1-Lipschitz . $\|W\|_2 = \sigma_{\max}(W)$ is usually estimated via power iteration ; we provide details on how this is done in Appendix B . Since we now know the Lipschitz constants of the components of both FCN and CNN , we can bound their Lipschitz constants by applying the following lemma : Lemma 2.1 ( Federer , 1969 ) .
Let $g , h$ be two composable Lipschitz functions . Then $g \circ h$ is also Lipschitz with $\mathrm{Lip}(g \circ h) \le \mathrm{Lip}(g)\,\mathrm{Lip}(h)$ . Corollary 2.1 . For a fully-connected network ( FCN ) or a convolutional neural network ( CNN ) $f = W_K \circ \rho_{K-1} \circ W_{K-1} \circ \ldots \circ \rho_1 \circ W_1$ , we have $\mathrm{Lip}_p(f) \le \prod_k \|W_k\|_p$ under a choice of p-norm with 1-Lipschitz non-linearities $\rho_k$ . The above bound is not necessarily tight ; there are various works that compute tighter bounds for FCN and CNN ( e.g . Virmaux & Scaman , 2018 ; Fazlyab et al. , 2019 ; Latorre et al. , 2020 ) . 3 LIPSCHITZ CONSTANT OF SELF-ATTENTION . 3.1 DOT-PRODUCT SELF-ATTENTION IS NOT LIPSCHITZ . Moving on , we investigate whether self-attention is Lipschitz . We first consider the widely used ( scaled ) dot-product multihead self-attention as formulated by Vaswani et al . ( 2017 ) . Let $x_1 , \ldots , x_N$ be a sequence of N elements , where $x_i \in \mathbb{R}^D$ for $i = 1 , \ldots , N$ . We represent this sequence as a matrix $X \in \mathbb{R}^{N \times D}$ such that the $i$-th row of $X$ is the $i$-th element of the sequence , i.e . $X_{i:} = x_i^\top$ . Dot-product multihead self-attention ( DP-MHA ) is a map from $\mathbb{R}^{N \times D}$ to $\mathbb{R}^{N \times D}$ consisting of H ‘ heads ’ , where H is chosen to divide D . Each head is a map from $\mathbb{R}^{N \times D}$ to $\mathbb{R}^{N \times D/H}$ defined by

$$\mathrm{DP}(X) := \mathrm{softmax}\big(X W^Q (X W^K)^\top / \sqrt{D/H}\big)\, X W^V, \qquad (4)$$

where $W^Q , W^K , W^V \in \mathbb{R}^{D \times D/H}$ are learnable parameters specific to each head . The input to the softmax is an $N \times N$ matrix of dot products ( hence dot-product self-attention ) , and the softmax is applied to each row of this matrix . Finally , the outputs of all heads are concatenated into an $N \times D$ matrix and are right multiplied by $W^O \in \mathbb{R}^{D \times D}$ , thus DP-MHA is defined by

$$\mathrm{MHA}(X) := \big[\mathrm{DP}^1(X) , \ldots , \mathrm{DP}^H(X)\big]\, W^O. \qquad (5)$$

In what follows , we will prove that MHA as defined above is not Lipschitz , assuming that the MHA map is non-trivial , i.e . $W^Q , W^K , W^V , W^O \neq 0$ . It is sufficient to show that a single head DP is not Lipschitz , since MHA is a linear combination of the outputs of each head . Let us write Equation ( 4 ) as $\mathrm{DP}(X) = P X W^V$ , where $P \in \mathbb{R}^{N \times N}$ is the output of the softmax ( we suppress the dependence of $P$ on $X$ to reduce clutter below ) . $P$ is a stochastic matrix , i.e . its entries are non-negative and its rows sum to 1 . Since the rows of $X$ are the $x_i$ ’ s , a linear transformation of each $x_i$ by some matrix $A$ is equivalent to right multiplication of $X$ by $A^\top$ . So right multiplication of $X$ by $W^V$ is a linear map and thus Lipschitz . Therefore , we are interested in the mapping $f(X) = PX$ ; this is not a linear mapping because $P$ itself is a non-linear function of $X$ . In fact , we show that $f$ is not Lipschitz , thus proving the first main result of the paper : Theorem 3.1 . DP-MHA is not Lipschitz for any vector p-norm $\|\cdot\|_p$ with $p \in [1 , \infty]$ . Summary of Proof . We use Theorem 2.1 , noting that if the supremum of the norm of the Jacobian is infinite , then the mapping is not Lipschitz . In particular , we show that when $x_i = 0$ for some $i$ , some elements of the Jacobian of $f$ grow proportionally to the sample variance of $x_{\neq i}$ , which is unbounded . Proof . We show the proof for the case $D = 1$ ( i.e . $X \in \mathbb{R}^{N \times 1}$ , a column vector ) for readability . See Appendix C for the general case , which follows the same logic . The mapping $f$ can be written as

$$f(X) = PX = \mathrm{softmax}(a X X^\top)\, X = \begin{bmatrix} f_1(X)^\top \\ \vdots \\ f_N(X)^\top \end{bmatrix} \in \mathbb{R}^{N \times 1}, \qquad (6)$$

where $a = W^K W^Q \in \mathbb{R}$ ( we assume $a \neq 0$ such that self-attention is non-trivial ) and $f_i(X) = \sum_{j=1}^{N} P_{ij} x_j$ with $P_{i:}^\top = \mathrm{softmax}(a x_i X)$ .
Hence $f$ can be interpreted as a map of each $x_i$ to a point in the convex hull of $x_1 , \ldots , x_N$ . Since $f$ is a map from $\mathbb{R}^{N \times 1}$ to $\mathbb{R}^{N \times 1}$ , its Jacobian is

$$J_f = \begin{bmatrix} J_{11} & \ldots & J_{1N} \\ \vdots & \ddots & \vdots \\ J_{N1} & \ldots & J_{NN} \end{bmatrix} \in \mathbb{R}^{N \times N}, \qquad (7)$$

where $J_{ij} = \partial f_i(X) / \partial x_j \in \mathbb{R}$ . By taking partial derivatives we can show that $J_{ij} = a X^\top P^{(i)} [ e_{ji} X + \delta_{ij} X ] + P_{ij} I$ , where $e_{ij} \in \mathbb{R}^{N \times N}$ is a binary matrix with zeros everywhere except the $(i , j)$-th entry , $\delta_{ij}$ is the Kronecker delta , and $P^{(i)} := \mathrm{diag}(P_{i:}) - P_{i:}^\top P_{i:}$ . So for $i = j$ :

$$J_{ii} = a X^\top P^{(i)} e_{ii} X + a X^\top P^{(i)} X + P_{ii}. \qquad (8)$$

Let us investigate the scalar $X^\top P^{(i)} X$ . We observe that it is in fact the variance of a discrete distribution . Specifically :

$$X^\top P^{(i)} X = \sum_k P_{ik} x_k^2 - \Big(\sum_k P_{ik} x_k\Big)^2 = \mathrm{Var}(\mathbb{X}), \qquad (9)$$

where $\mathbb{X}$ is a discrete distribution with support at the inputs $\{x_1 , \ldots , x_N\}$ and probability mass function given by their softmax probabilities $P(\mathbb{X} = x_j) = P_{ij}$ . A consequence of this interpretation is that $P^{(i)}$ is positive semi-definite ( PSD ) since $X^\top P^{(i)} X = \mathrm{Var}(\mathbb{X}) \ge 0$ , with equality if and only if the $x_j$ are all equal . We use this observation to show that $J_{ii}$ is unbounded , and so $\|J_f\|_p$ is unbounded , hence DP-MHA is not Lipschitz . Consider the case $x_i = 0$ . Then $P_{i:}^\top = \mathrm{softmax}(a x_i X) = \frac{1}{N}\mathbf{1}$ , i.e . we have uniform attention regardless of $x_{\neq i}$ . The first term of $J_{ii}$ in Equation ( 8 ) disappears since $e_{ii} X = [ 0 , \ldots , x_i , \ldots , 0 ]^\top = 0$ , and the last term becomes $\frac{1}{N} I$ . Now consider the second term $a X^\top P^{(i)} X = a\,\mathrm{Var}(\mathbb{X})$ . Note $\mathbb{X}$ is uniformly distributed , since $P(\mathbb{X} = x_j) = P_{ij} = 1/N$ . Hence the second term is equal to $a$ times the sample variance of $x_1 , \ldots , x_N$ , which can be arbitrarily large . High-level intuition for proof . At $x_i = 0$ , $f_i(X) = \frac{1}{N}\sum_k x_k$ , the mean of the inputs . The rate of change of $f_i$ is governed by how fast the softmax saturates when $x_i$ is perturbed , which is determined by how spread out the $x_{\neq i}$ are . The more spread out they are ( the higher the sample variance ) , the greater the rate of saturation of the softmax , and the faster the rate of change of $f_i$ . Since the sample variance of $x_{\neq i}$ can be arbitrarily large , the rate of change of $f_i$ can also be arbitrarily large , i.e . the entries of the Jacobian ( and hence its p-norm ) can become arbitrarily large . In Appendix D , we show that adding bias terms to $x_i^\top W^Q$ and $x_j^\top W^K$ does not resolve the issue . The implications of this result are the following . ( 1 ) There can be undesirable behaviour ( e.g . training instabilities ) for the Transformer when some inputs are close to zero . ( 2 ) Dot-product self-attention ( and hence the standard Transformer ) is not a suitable choice when we require a Lipschitz neural network , such as for formulating invertible residual networks ( Behrmann et al. , 2019 ) . Therefore , to use self-attention and Transformers in such applications , a Lipschitz formulation of self-attention is required , together with an explicit ( ideally tight ) upper bound to its Lipschitz constant , to quantify how much the output can change with respect to changes in the input . One method to make dot-product self-attention Lipschitz is by ensuring its inputs are bounded . Indeed , if the input space is compact , e.g . $[0 , 1]^{N \times D}$ , any continuously differentiable function is Lipschitz , including dot-product self-attention . However , as we further discuss in Section 6 , such an approach has its own challenges , since it makes the Lipschitz constant depend on the input range .
Instead , in the next section we formulate a version of self-attention that is provably Lipschitz on all of $\mathbb{R}^{N \times D}$ , allowing us to derive an upper bound that holds for any subset of $\mathbb{R}^{N \times D}$ .
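The unboundedness argument in the proof of Theorem 3.1 is easy to check numerically . The following sketch ( our own illustrative code using PyTorch autograd , not the authors' experiments ) estimates the Jacobian norm of the $D = 1$ map $f(X) = \mathrm{softmax}(a X X^\top) X$ at a point with $x_1 = 0$ , and shows it growing as the remaining inputs are spread out :

```python
import torch

def f(X, a=1.0):
    """D = 1 dot-product self-attention core: f(X) = softmax(a * X X^T) X."""
    P = torch.softmax(a * X @ X.T, dim=-1)
    return P @ X

for scale in [1.0, 10.0, 100.0]:
    # x_1 = 0; the other entries are spread out by `scale`.
    X = torch.arange(5, dtype=torch.float32).reshape(5, 1) * scale
    J = torch.autograd.functional.jacobian(f, X).reshape(5, 5)
    print(scale, torch.linalg.matrix_norm(J, ord=2).item())  # grows with the spread
```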
This paper studies the Lipschitz continuity properties of self-attention. It is proved that the widely-used dot-product self-attention is not Lipschitz continuous. A novel L2 self-attention is proposed and proven to be Lipschitz continuous. Experiments show that the derived upper bound on the Lipschitz constant of L2 self-attention is asymptotically tight. The bound is further used to formulate an invertible self-attention layer via the MHA residual map. Finally, experiments on Transformers with L2 self-attention are presented.
SP:cd6f5c3ee37991ff572589467b2216ba364275ba
Robust and Generalizable Visual Representation Learning via Random Convolutions
1 INTRODUCTION . Generalizability and robustness to out-of-distribution samples have been major pain points when applying deep neural networks ( DNNs ) in real-world applications ( Volpi et al. , 2018 ) . Though DNNs are typically trained on datasets with millions of training samples , they still lack robustness to domain shift , small perturbations , and adversarial examples ( Luo et al. , 2019 ) . Recent research has shown that neural networks tend to use superficial features rather than global shape information for prediction even when trained on large-scale datasets such as ImageNet ( Geirhos et al. , 2019 ) . These superficial features can be local textures or even patterns imperceptible to humans but detectable to DNNs , as is the case for adversarial examples ( Ilyas et al. , 2019 ) . In contrast , image semantics often depend more on object shapes than on local textures . For image data , local texture differences are one of the main sources of domain shift , e.g. , between synthetic virtual images and real data ( Sun & Saenko , 2014 ) . Our goal is therefore to learn visual representations that are invariant to local texture and that generalize to unseen domains . While texture and color may be treated as different concepts , we follow the convention in Geirhos et al . ( 2019 ) and include color when talking about texture . We address the challenging setting of robust visual representation learning from single-domain data . Limited work exists in this setting . Proposed methods include data augmentation ( Volpi et al. , 2018 ; Qiao et al. , 2020 ; Geirhos et al. , 2019 ) , domain randomization ( Tobin et al. , 2017 ; Yue et al. , 2019 ) , self-supervised learning ( Carlucci et al. , 2019 ) , and penalizing the predictive power of low-level network features ( Wang et al. , 2019a ) . Following the spirit of adding an inductive bias towards global shape information over local textures , we propose using random convolutions to improve the robustness to domain shifts and small perturbations . While recently Lee et al . ( 2020 ) proposed a similar technique for improving the generalization of reinforcement learning agents in unseen environments , we focus on visual representation learning and examine our approach on visual domain generalization benchmarks . ( Footnote 1 : Code is available at https://github.com/wildphoton/RandConv . ) Our method also includes a multi-scale design and a mixing variant . In addition , considering that many computer vision tasks rely on training deep networks based on ImageNet-pretrained weights ( including some domain generalization benchmarks ) , we ask “ Can a more robust pretrained model make the finetuned model more robust on downstream tasks ? ” Different from ( Kornblith et al. , 2019 ; Salman et al. , 2020 ) , who studied the transferability of a pretrained ImageNet representation to new tasks while focusing on in-domain generalization , we explore generalization performance on unseen domains for new tasks . We make the following contributions : • We develop RandConv , a data augmentation technique using multi-scale random convolutions to generate images with random texture while maintaining global shapes . We explore using the RandConv output as training images or mixing it with the original images . We show that a consistency loss can further enforce invariance under texture changes .
• We provide insights and justification on why RandConv augments images with different local texture but the same semantics , based on the shape-preserving property of random convolutions . • We validate RandConv and its mixing variant in extensive experiments on synthetic and real-world benchmarks as well as on the large-scale ImageNet dataset . Our methods outperform single-domain generalization approaches by a large margin on digit recognition datasets and for the challenging case of generalizing to the Sketch domain in PACS and to ImageNet-Sketch . • We explore if the robustness/generalizability of a pretrained representation can transfer . We show that transferring a model pretrained with RandConv on ImageNet can further improve domain generalization performance on new downstream tasks on the PACS dataset . 2 RELATED WORK . Domain Generalization ( DG ) aims at learning representations that perform well when transferred to unseen domains . Modern techniques range between feature fusion ( Shen et al. , 2019 ) , meta-learning ( Li et al. , 2018a ; Balaji et al. , 2018 ) , and adversarial training ( Shao et al. , 2019 ; Li et al. , 2018b ) . Note that most current DG work ( Ghifary et al. , 2016 ; Li et al. , 2018a ; b ) requires a multi-source training setting to work well . However , in practice , it might be difficult and expensive to collect data from multiple sources , such as collecting data from multiple medical centers ( Raghupathi & Raghupathi , 2014 ) . Instead , we consider the stricter single-domain generalization setting , where we train the model on source data from a single domain and generalize it to new unseen domains ( Carlucci et al. , 2019 ; Wang et al. , 2019b ) . Domain Randomization ( DR ) was first introduced as a DG technique by Tobin et al . ( 2017 ) to handle the domain gap between simulated and real data . As the training data in ( Tobin et al. , 2017 ) is synthesized in a virtual environment , it is possible to generate diverse training samples by randomly selecting background images , colors , lighting , and textures of foreground objects . When a simulation environment is not accessible , image stylization can be used to generate new domains ( Yue et al. , 2019 ; Geirhos et al. , 2019 ) . However , this requires extra effort to collect data and to train an additional model ; further , the number of randomized domains is limited by the number of predefined styles . Data Augmentation has been widely used to improve the generalization of machine learning models ( Simard et al. , 2003 ) . DR approaches can be considered a type of synthetic data augmentation . To improve performance on unseen domains , Volpi et al . ( 2018 ) generate adversarial examples to augment the training data ; Qiao et al . ( 2020 ) extend this approach via meta-learning . As with other adversarial training algorithms , significant extra computation is required to obtain adversarial examples . Learning Representations Biased towards Global Shape Geirhos et al . ( 2019 ) demonstrated that convolutional neural networks ( CNNs ) tend to use superficial local features even when trained on large datasets . To counteract this effect , they proposed to train on stylized ImageNet , thereby forcing a network to rely on object shape instead of textures . Wang et al . improved out-of-domain performance by penalizing the correlation between a learned representation and superficial features such as the gray-level co-occurrence matrix ( Wang et al. , 2019b ) , or by penalizing the predictive power of local , low-level layer features in a neural network via an adversarial classifier ( Wang et al. , 2019a ) . Our approach shares the idea that learning representations invariant to local texture helps generalization to unseen domains . However , RandConv avoids searching over many hyper-parameters , collecting extra data , and training other networks . It also scales to large-scale datasets since it adds minimal computation overhead . Random Mapping in Machine Learning Random projections have also been effective for dimensionality reduction based on the distance-preserving property of the Johnson–Lindenstrauss lemma ( Johnson & Lindenstrauss , 1984 ) . Vinh et al . ( 2016 ) applied random projections on entire images as data augmentation to make neural networks robust to adversarial examples . Lee et al . ( 2020 ) recently used random convolutions to help reinforcement learning ( RL ) agents generalize to new environments . Neural networks with fixed random weights can encode meaningful representations ( Saxe et al. , 2011 ) and are therefore useful for neural architecture search ( Gaier & Ha , 2019 ) , generative models ( He et al. , 2016b ) , natural language processing ( Wieting & Kiela , 2019 ) , and RL ( Osband et al. , 2018 ; Burda et al. , 2019 ) . In contrast , RandConv uses non-fixed , randomly-sampled weights to generate images with different local texture . 3 RANDCONV : RANDOMIZE LOCAL TEXTURE AT DIFFERENT SCALES . We propose using a convolution layer with non-fixed random weights as the first layer of a DNN during training . This strategy generates images with random local texture but consistent shapes , and is beneficial for robust visual representation learning . Sec . 3.1 justifies the shape-preserving property of a random convolution layer . Sec . 3.2 describes RandConv , our data augmentation algorithm using a multi-scale randomized convolution layer and input mixing . 3.1 A RANDOM CONVOLUTION LAYER PRESERVES GLOBAL SHAPES . Convolution is the key building block of deep convolutional neural networks . Consider a convolution layer with filters Θ ∈ R^{h×w×C_in×C_out} and an input image I ∈ R^{H×W×C_in} , where H and W are the height and width of the input , C_in and C_out are the numbers of feature channels for the input and output , and h and w are the height and width of the layer ’ s filter . The output ( with appropriate input padding ) will be g = I ∗ Θ with g ∈ R^{H×W×C_out} . In images , nearby pixels with similar color or texture can be grouped into primitive shapes that represent parts of objects or the background . A convolution layer linearly projects local image patches to features at corresponding locations on the output map using shared parameters . While a convolution with random filters can project local patches to arbitrary output features , the output of a random linear projection approximately preserves relative similarity between input patches , as proved in Appendix B . In other words , since any two locations within the same shape have similar local textures in the input image , they tend to be similar in the output feature map . Therefore , shapes that emerge in the output feature map are similar to shapes in the input image , provided that the filter size is sufficiently small compared to the size of a typical shape . That is , the size of a convolution filter determines the smallest shape it can preserve .
For example , 1x1 random convolutions preserve shapes at the single-pixel level and thus work as a random color mapping ; large filters perturb shapes smaller than the filter size , which are considered local texture of a shape at this larger scale . See Fig . 1 for examples . More discussion and a formal proof are in Appendix A and B . 3.2 MULTI-SCALE IMAGE AUGMENTATION WITH A RANDOMIZED CONVOLUTION LAYER .
Algorithm 1 Learning with Data Augmentation by Random Convolutions
1 : Input : Model Φ , task loss L_task , training images { I_i } _{i=1}^{N} and their labels { y_i } _{i=1}^{N} , pool of filter sizes K = { 1 , ... , n } , fraction of original data p , whether to mix with original images , consistency loss weight λ
2 : function RANDCONV ( I , K , mix , p )
3 :   Sample p_0 ∼ U ( 0 , 1 )
4 :   if p_0 < p and mix is False then
5 :     return I   ▷ When not in mix mode , use the original image with probability p
6 :   else
7 :     Sample scale k ∼ K
8 :     Sample convolution weights Θ ∈ R^{k×k×3×3} ∼ N ( 0 , 1/(3k^2) )
9 :     I_rc = I ∗ Θ   ▷ Apply convolution on I
10 :    if mix is True then
11 :      Sample α ∼ U ( 0 , 1 )
12 :      return αI + ( 1 − α ) I_rc   ▷ Mix with original images
13 :    else
14 :      return I_rc
15 : Learning Objective :
16 : for i = 1 → N do
17 :   for j = 1 → 3 do
18 :     ŷ_i^j = Φ ( RandConv ( I_i ) )   ▷ Predict labels for three augmented variants of the same image
19 :   L_cons = λ ∑_{j=1}^{3} KL ( ŷ_i^j || ȳ_i ) where ȳ_i = ∑_{j=1}^{3} ŷ_i^j / 3   ▷ Consistency loss
20 :   L = L_task ( ŷ_i^1 , y_i ) + L_cons   ▷ Learning with the task loss and the consistency loss
Sec . 3.1 discussed how outputs of randomized convolution layers approximately maintain shape information at a scale larger than their filter sizes . Here , we develop our RandConv data augmentation technique using a randomized convolution layer with C_out = C_in to generate shape-consistent images with randomized texture ( see Alg . 1 ) . Our goal is not to use RandConv to parameterize or represent texture as in previous filter-bank based texture models ( Heeger & Bergen , 1995 ; Portilla & Simoncelli , 2000 ) . Instead , we only use the three-channel outputs of RandConv as new images with the same shape and different “ style ” ( loosely referred to as “ texture ” ) . We also note that a convolution layer is different from a convolution operation in image filtering . Standard image filtering applies the same 2D filter on three color channels separately . In contrast , our convolution layer applies three different 3D filters , and each takes all color channels as input and generates one channel of the output . Our proposed RandConv variants are as follows : RCimg : Augmenting Images with Random Texture A simple approach is to use the randomized convolution layer outputs , I ∗ Θ , as new images , where Θ are the randomly sampled weights and I is a training image . If the original training data is in the domain D_0 , a sampled weight Θ_k generates images with consistent global shape but random texture forming the random domain D_k . Thus , by random weight sampling , we obtain an infinite number of random domains D_1 , D_2 , . . . , D_∞ . Input image intensities are assumed to follow a standard normal distribution N ( 0 , 1 ) ( which is often true in practice thanks to data whitening ) . As the outputs of RandConv should follow the same distribution , we sample the convolution weights from N ( 0 , σ^2 ) where σ = 1 / √( C_in × h × w ) , which is commonly applied for network initialization ( He et al. , 2015 ) . We include the original images for training at a ratio p as a hyperparameter .
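To make the procedure concrete , the following is a minimal PyTorch sketch of the RANDCONV function in Alg . 1 for the RCimg variant ; the helper name , the kernel pool , and the probability p are illustrative choices of ours rather than the authors ' released implementation .

import torch
import torch.nn.functional as F

def randconv(img, kernel_pool=(1, 3, 5, 7), p=0.5):
    # img: (B, 3, H, W) tensor, assumed roughly whitened to N(0, 1)
    if torch.rand(1).item() < p:
        return img  # keep the original image with probability p
    k = kernel_pool[torch.randint(len(kernel_pool), (1,)).item()]
    # variance 1/(3 k^2) = 1/(C_in k^2) keeps the output distribution close to the input's
    theta = torch.randn(3, 3, k, k) * (1.0 / (3 * k * k)) ** 0.5
    return F.conv2d(img, theta, padding=k // 2)  # shape-preserving random convolution

A fresh Θ is sampled at every call , so each mini-batch effectively comes from a newly sampled random domain D_k .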
RCmix : Mixing Variant As shown in Fig . 1 , outputs from RCimg can vary significantly from the appearance of the original images . Although generalizing to domains with significantly different local texture distributions is useful , we may not want to sacrifice much performance on domains similar to the training domain . Inspired by the AugMix ( Hendrycks et al. , 2020b ) strategy , we propose to blend the original image with the outputs of the RandConv layer via linear convex combinations αI + ( 1 − α ) ( I ∗ Θ ) , where α is the mixing weight uniformly sampled from [ 0 , 1 ] . In RCmix , the RandConv outputs provide shape-consistent perturbations of the original images . Varying α , we continuously interpolate between the training domain and the randomly sampled domains of RCimg . Multi-scale Texture Corruption As discussed in Sec . 3.1 , image shape information at a scale smaller than a filter ’ s size will be corrupted by RandConv . Therefore , we can use filters of varying sizes to preserve shapes at various scales . We choose to uniformly sample a filter size k from a pool K = { 1 , 3 , ... , n } before sampling convolution weights Θ ∈ R^{k×k×C_in×C_out} from a Gaussian distribution N ( 0 , 1/(k^2 C_in) ) . Fig . 1 shows examples of multi-scale RandConv outputs . Consistency Regularization To learn representations invariant to texture changes , we use a loss encouraging consistent network predictions for the same RandConv-augmented image under different random filter samples . Approaches for transform-invariant domain randomization ( Yue et al. , 2019 ) , data augmentation ( Hendrycks et al. , 2020b ) , and semi-supervised learning ( Berthelot et al. , 2019 ) use similar strategies . We use the Kullback-Leibler ( KL ) divergence to measure consistency . However , enforcing prediction similarity of two augmented variants may be too strong . Instead , following ( Hendrycks et al. , 2020b ) , we use RandConv to obtain 3 augmentation samples of image I : G^j = RandConv_j ( I ) for j = 1 , 2 , 3 , and obtain their predictions with a model Φ : y^j = Φ ( G^j ) . We then compute the relaxed loss as λ ∑_{j=1}^{3} KL ( y^j || ȳ ) , where ȳ = ∑_{j=1}^{3} y^j / 3 is the sample average .
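Below is a hedged sketch of this relaxed consistency loss , reusing the randconv helper sketched earlier ; the names model and lam and the use of softmax probabilities are our assumptions about the setup rather than verbatim details from the paper .

import torch
import torch.nn.functional as F

def consistency_loss(model, img, lam=10.0):
    # three predictions for the same image under independent RandConv samples
    probs = [F.softmax(model(randconv(img)), dim=1) for _ in range(3)]
    y_bar = torch.stack(probs).mean(dim=0)  # sample-average prediction
    # sum of KL(y_j || y_bar) over the three augmented views
    loss = sum(F.kl_div(y_bar.log(), p, reduction="batchmean") for p in probs)
    return lam * loss

The total training objective then adds the task loss on one of the augmented views , L = L_task ( ŷ^1 , y ) + L_cons , as in Alg . 1 .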
This paper proposes a simple way to increase the robustness of the representations learned by a network performing a series of object recognition tasks: adding a random convolution layer as a pre-processing stage, thus “filtering the image” and preserving the global shape while altering the local ‘texture’ of the newly transformed image. Here, the hope is that -- analogous to Geirhos et al. 2019, which induces a shape bias by transforming the image distribution into a new one with altered *global* textures, increasing general robustness to o.o.d. distortions -- the authors do something similar at the local level: given the small size of the receptive field of the filter, the shape is preserved while “the texture” is slightly altered.
SP:a0f8915a46a06042331002c9fe2ed47382cc25e9
Neural Architecture Search of SPD Manifold Networks
1 INTRODUCTION . Designing a favorable neural network architecture for a given application requires a lot of time , effort , and domain expertise . To mitigate this issue , researchers in recent years have started developing algorithms to automate the design process of neural network architectures ( Zoph & Le , 2016 ; Zoph et al. , 2018 ; Liu et al. , 2017 ; 2018a ; Real et al. , 2019 ; Liu et al. , 2018b ; Tian et al. , 2020 ) . Although these neural architecture search ( NAS ) algorithms have shown great potential to provide an optimal architecture for a given application , they are limited to architectures with Euclidean operations and representations . To the best of our knowledge , researchers have barely proposed any NAS algorithms to deal with non-Euclidean data representations and their corresponding sets of operations . It is well-known that manifold-valued data representations such as symmetric positive definite ( SPD ) matrices have shown overwhelming accomplishments in many real-world applications such as pedestrian detection ( Tuzel et al. , 2006 ; 2008 ) , magnetic resonance imaging analysis ( Pennec et al. , 2006 ) , action recognition ( Harandi et al. , 2014 ) , face recognition ( Huang et al. , 2014 ; 2015 ) , brain-computer interfaces ( Barachant et al. , 2011 ) , structure from motion ( Kumar et al. , 2018 ; Kumar , 2019 ) , etc . Also , in applications like diffusion tensor imaging of the brain and drone imaging , samples are collected directly as SPD matrices . As a result , neural network usage based on Euclidean data representation becomes inefficient for those applications . Consequently , this has led to the development of SPD neural network ( SPDNet ) architectures for further improvements in these areas of research ( Huang & Van Gool , 2017 ; Brooks et al. , 2019 ) . However , these architectures are handcrafted , so the operations or the parameters defined for these networks generally change as per the application . This motivated us to propose a new NAS problem of SPD manifold networks . A solution to this problem can reduce unwanted efforts in SPDNet design . Compared to the traditional NAS problem , our NAS problem requires a new definition of the computation cell and a proposal for a diverse SPD candidate operation set . In particular , we model the basic architecture cell with a specific directed acyclic graph ( DAG ) , where each node is a latent SPD representation , and each edge corresponds to an SPD candidate operation . Here , the intermediate transformations between nodes respect the geometry of the SPD manifolds . For solving the suggested NAS problem , we exploit a supernet search strategy which models the architecture search problem as a one-shot training process of a supernet that comprises a mixture of SPD neural architectures . The supernet modeling enables us to perform a differentiable architecture search on a continuous relaxation of the SPD neural architecture search space , which can therefore be solved using a gradient descent approach . Our evaluation validates that the proposed method can build a reliable SPD network from scratch . Results on benchmark datasets clearly show that our method outperforms handcrafted SPDNets . Our work makes the following contributions : • We introduce a NAS problem of SPD manifold networks that opens up a new direction of research in automated machine learning and SPD manifold learning . Based on a supernet modeling , we propose a novel differentiable NAS algorithm for SPD neural architecture search .
Concretely , we exploit a sparsemax-based Fréchet mixture of SPD operations to introduce sparsity that is essential for an effective differentiable search , and bi-level optimization with a manifold-based update and a convexity-based update to jointly optimize architecture parameters and network kernel weights . • Besides well-studied operations from existing SPDNets ( Huang & Van Gool , 2017 ; Brooks et al. , 2019 ; Chakraborty et al. , 2020 ) , we follow Liu et al . ( 2018b ) to further introduce some new SPD layers , i.e. , skip connection , none operation , max pooling and average pooling . Our introduced additional set of SPD operations makes the search space more diverse for the neural architecture search algorithm to obtain more generalized SPD neural network architectures . • Evaluation on three benchmark datasets shows that our searched SPD neural architectures can outperform the existing handcrafted SPDNets ( Huang & Van Gool , 2017 ; Brooks et al. , 2019 ; Chakraborty et al. , 2020 ) and the state-of-the-art NAS methods ( Liu et al. , 2018b ; Chu et al. , 2020 ) . Notably , our searched architecture is more than 3 times lighter than those searched by the traditional NAS algorithms . 2 BACKGROUND . In recent years , plenty of research work has been published in the area of NAS ( Gong et al. , 2019 ; Liu et al. , 2019 ; Nayman et al. , 2019 ; Guo et al. , 2020 ) . This is probably due to the success of deep learning for several applications , which has eventually led to the automation of neural architecture design . Also , improvements in the processing capabilities of machines have encouraged researchers to work on this computationally expensive yet important problem . The computational cost of some of the well-known NAS algorithms is in thousands of GPU days , which has resulted in the development of several computationally efficient methods ( Zoph et al. , 2018 ; Real et al. , 2019 ; Liu et al. , 2018a ; 2017 ; Baker et al. , 2017 ; Brock et al. , 2017 ; Bender , 2019 ; Elsken et al. , 2017 ; Cai et al. , 2018 ; Pham et al. , 2018 ; Negrinho & Gordon , 2017 ; Kandasamy et al. , 2018 ; Chu et al. , 2020 ) . In this work , we propose a new NAS problem of SPD networks . We solve this problem using a supernet modeling methodology with a one-shot differentiable training process of an overparameterized supernet . Our modeling is driven by the recent progress in supernet methodology , which has shown greater potential than other NAS methodologies in terms of search efficiency . Since our work is directed towards solving a new NAS problem , we confine our discussion to the works that have greatly influenced our method , i.e. , one-shot NAS methods and SPD networks . To the best of our knowledge , there are mainly two types of one-shot NAS methods based on the architecture modeling ( Elsken et al. , 2018 ) : ( a ) parameterized architecture ( Liu et al. , 2018b ; Zheng et al. , 2019 ; Wu et al. , 2019 ; Chu et al. , 2020 ) , and ( b ) sampled architecture ( Deb et al. , 2002 ; Chu et al. , 2019 ) . In this paper , we adhere to the parametric modeling due to its promising results on conventional neural architectures . A majority of the previous work on NAS with continuous search spaces fine-tunes the explicit features of specific architectures ( Saxena & Verbeek , 2016 ; Veniat & Denoyer , 2018 ; Ahmed & Torresani , 2017 ; Shin et al. , 2018 ) . On the contrary , Liu et al . ( 2018b ) ; Liang et al . ( 2019 ) ; Zhou et al . ( 2019 ) ; Zhang et al . ( 2020 ) ; Wu et al .
( 2020 ) ; Chu et al . ( 2020 ) provide architectural diversity for NAS with highly competitive performances . The other part of our work focuses on SPD network architectures . There exist algorithms to develop handcrafted SPDNets ( Huang & Van Gool , 2017 ; Brooks et al. , 2019 ; Chakraborty et al. , 2020 ) . To automate the process of SPD network design , in this work , we choose the most promising approaches from these fields ( NAS ( Liu et al. , 2018b ) , SPD networks ( Huang & Van Gool , 2017 ) ) and propose a NAS algorithm for SPD inputs . Next , we summarize the essential notions of the Riemannian geometry of SPD manifolds , followed by an introduction of some basic SPDNet operations and layers . As some of the introduced operations and layers have been well-studied in the existing literature , we apply them directly to define our SPD neural architectures ’ search space . Representation and Operation : We denote an n × n real SPD matrix as X ∈ S^n_{++} . A real SPD matrix X ∈ S^n_{++} satisfies the property that for any non-zero z ∈ R^n , z^T X z > 0 ( Harandi et al. , 2017 ) . We denote T_X M as the tangent space of the manifold M at X ∈ S^n_{++} , and log corresponds to the matrix logarithm . Let X_1 , X_2 be any two points on the SPD manifold ; then the distance between them is given by
δ_M ( X_1 , X_2 ) = 0.5 ‖ log ( X_1^{−1/2} X_2 X_1^{−1/2} ) ‖_F    ( 1 )
There are other efficient methods to compute the distance between two points on the SPD manifold ( Gao et al. , 2019 ; Dong et al. , 2017b ) ; however , their discussion is beyond the scope of our work . Another property of the Riemannian manifold of interest to us is the local diffeomorphism of geodesics , which is a one-to-one mapping from a point on the tangent space of the manifold to the manifold ( Pennec , 2020 ; Lackenby , 2020 ) . To define such notions , let X ∈ S^n_{++} be the base point and Y ∈ T_X S^n_{++} ; then Eq . ( 2 ) associates Y ∈ T_X S^n_{++} to a point on the manifold ( Pennec , 2020 ) :
exp_X ( Y ) = X^{1/2} exp ( X^{−1/2} Y X^{−1/2} ) X^{1/2} ∈ S^n_{++} , ∀ Y ∈ T_X    ( 2 )
Similarly , an inverse map is defined as log_X ( Z ) = X^{1/2} log ( X^{−1/2} Z X^{−1/2} ) X^{1/2} ∈ T_X , ∀ Z ∈ S^n_{++} . 1 ) Basic operations of SPD Network : It is well-known that operations such as mean centralization , normalization , and adding bias to a batch of data are inherent performance boosters for most neural networks . In the same spirit , existing works like Brooks et al . ( 2019 ) ; Chakraborty ( 2020 ) use the notion of these operations for SPD or general manifold data to define analogous operations on manifolds . Below we introduce them following the work of Brooks et al . ( 2019 ) . • Batch mean , centering and bias : Given a batch of N SPD matrices { X_i } _{i=1}^{N} , we can compute its Riemannian barycenter B as B = argmin_{X_µ ∈ S^n_{++}} ∑_{i=1}^{N} δ_M^2 ( X_i , X_µ ) . It is sometimes referred to as the Fréchet mean ( Moakher , 2005 ; Bhatia & Holbrook , 2006 ) . This definition can be extended to compute the weighted Riemannian barycenter , also known as the weighted Fréchet mean ( wFM ) :
B = argmin_{X_µ ∈ S^n_{++}} ∑_{i=1}^{N} w_i δ_M^2 ( X_i , X_µ ) , s.t . w_i ≥ 0 and ∑_{i=1}^{N} w_i = 1    ( 3 )
Eq . ( 3 ) can be approximated using the Karcher flow ( Karcher , 1977 ; Bonnabel , 2013 ; Brooks et al. , 2019 ) or the recursive geodesic mean ( Cheng et al. , 2016 ; Chakraborty et al. , 2020 ) . ( Following Tuzel et al . ( 2006 ; 2008 ) and Brooks et al . ( 2019 ) , we focus on the estimate of the wFM with the Karcher flow ; a thorough study of the general wFM ’ s existence and uniqueness is beyond the focus of this paper . ) 2 ) Basic layers of SPD Network : Analogous to standard CNNs , methods like Huang & Van Gool ( 2017 ) ; Brooks et al . ( 2019 ) ; Chakraborty et al . ( 2020 ) designed SPD layers to perform operations that respect SPD manifold constraints .
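Before turning to these layers , here is a minimal numpy sketch ( ours , for illustration only ) of the geometric primitives above : the affine-invariant distance of Eq . ( 1 ) and a Karcher-flow approximation of the ( unweighted ) Fréchet mean ; the iteration count is an arbitrary choice .

import numpy as np
from scipy.linalg import sqrtm, logm, expm

def spd_dist(X1, X2):
    s_inv = np.linalg.inv(np.real(sqrtm(X1)))       # X1^{-1/2}
    return 0.5 * np.linalg.norm(np.real(logm(s_inv @ X2 @ s_inv)), "fro")

def karcher_mean(Xs, iters=10):
    B = np.mean(Xs, axis=0)                          # Euclidean initialization
    for _ in range(iters):
        s = np.real(sqrtm(B))
        s_inv = np.linalg.inv(s)
        # average the log-mapped samples in the tangent space at B, then map back
        T = np.mean([np.real(logm(s_inv @ X @ s_inv)) for X in Xs], axis=0)
        B = s @ np.real(expm(T)) @ s
    return B

Weighting the tangent-space average by w_i gives the wFM of Eq . ( 3 ) .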
Assuming X_{k−1} ∈ S^n_{++} is the input SPD matrix to the k-th layer , the SPD network layers are defined as follows : • BiMap layer : This layer corresponds to a dense layer for SPD data . The BiMap layer reduces the dimension of an input SPD matrix via a transformation matrix W_k as X_k = W_k X_{k−1} W_k^T . To ensure that the matrix X_k is an SPD matrix , the W_k matrix must be of full row-rank . • Batch normalization layer : To perform batch normalization after each BiMap layer , we first compute the Riemannian barycenter of the batch of SPD matrices , followed by a running mean update step , which is a Riemannian weighted average between the batch mean and the current running mean , with the weights ( 1 − θ ) and θ respectively . Once the mean is calculated , we centralize and add bias to each SPD sample of the batch using Eq . ( 4 ) ( Brooks et al. , 2019 ) , where P denotes parallel transport and I is the identity matrix :
Batch centering ( centering from the batch mean B ) : X_i^c = P_{B→I} ( X_i ) = B^{−1/2} X_i B^{−1/2}
Biasing the batch ( bias towards G ) : X_i^b = P_{I→G} ( X_i^c ) = G^{1/2} X_i^c G^{1/2}    ( 4 )
• ReEig layer : The ReEig layer is analogous to ReLU-like layers present in classical ConvNets . It aims to introduce non-linearity to the SPD network . The ReEig for the k-th layer is defined as X_k = U_{k−1} max ( εI , Σ_{k−1} ) U_{k−1}^T , where X_{k−1} = U_{k−1} Σ_{k−1} U_{k−1}^T , I is the identity matrix , and ε > 0 is a rectification threshold value . U_{k−1} and Σ_{k−1} are the orthonormal matrix and singular-value matrix , respectively , which are obtained via matrix factorization of X_{k−1} . • LogEig layer : To map the manifold representation of SPD matrices to a flat space so that Euclidean operations can be performed , the LogEig layer is introduced . The LogEig layer is defined as X_k = U_{k−1} log ( Σ_{k−1} ) U_{k−1}^T , where X_{k−1} = U_{k−1} Σ_{k−1} U_{k−1}^T . The LogEig layer is used with fully connected layers to solve tasks with SPD representations . • ExpEig layer : This layer maps the corresponding SPD representation from the flat space back to the SPD manifold space . It is defined as X_k = U_{k−1} exp ( Σ_{k−1} ) U_{k−1}^T , where X_{k−1} = U_{k−1} Σ_{k−1} U_{k−1}^T . • Weighted Riemannian pooling layer : It uses the wFM definition to compute the output of the layer . Recent methods use the recursive geodesic mean algorithm to calculate the mean ( Chakraborty et al. , 2020 ) ; in contrast , we use the Karcher flow algorithm to compute it ( Karcher , 1977 ) , as it is simple and widely used in practice .
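The eigenvalue-based layers above share a common pattern ; the following PyTorch sketch ( ours , not the authors ' code ) illustrates BiMap , ReEig , and LogEig on a single SPD matrix , using a symmetric eigendecomposition in place of the SVD-style factorization in the text .

import torch

def bimap(X, W):
    # dense SPD layer: W must have full row-rank so that W X W^T stays SPD
    return W @ X @ W.T

def reeig(X, eps=1e-4):
    # eigenvalue rectification: X -> U max(eps*I, Sigma) U^T
    sigma, U = torch.linalg.eigh(X)
    return U @ torch.diag(torch.clamp(sigma, min=eps)) @ U.T

def logeig(X):
    # map to the flat (tangent) space: X -> U log(Sigma) U^T
    sigma, U = torch.linalg.eigh(X)
    return U @ torch.diag(torch.log(sigma)) @ U.T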
The paper considers a generalization of convolutional neural networks (CNNs) to manifold-valued data such as symmetric positive definite (SPD) matrices. It proposes a neural architecture search problem for SPD manifold networks, and introduces an SPD cell representation with a corresponding candidate operation search space. The authors demonstrate on drone, action, and emotion recognition datasets that their method performs well compared to handcrafted SPD approaches.
SP:9d2df0c7b57ce7d4ee0a222ad11361172cb7cbc7
Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors
1 INTRODUCTION Recently , excellent breakthroughs in various domains have been achieved with the success of deep learning ( Ronneberger et al. , 2015 ; Devlin et al. , 2018 ; Ren et al. , 2015 ) . However , the most advanced deep neural networks always consume a large amount of computation and memory , which has limited their deployment in edge devices such as self-driving cars and mobile phones . To address this problem , abundant techniques have been proposed , including pruning ( Han et al. , 2016 ; Zhang et al. , 2018 ; Liu et al. , 2018 ; Frankle & Carbin , 2018 ) , quantization ( Nagel et al. , 2019 ; Zhou et al. , 2017 ) , compact model design ( Sandler et al. , 2018 ; Howard et al. , 2019 ; Ma et al. , 2018 ; Iandola et al. , 2016 ) and knowledge distillation ( Hinton et al. , 2014 ; Buciluǎ et al. , 2006 ) . Knowledge distillation , which is also known as teacher-student learning , aims to transfer the knowledge of an over-parameterized teacher to a lightweight student . Since the student is trained to mimic the logits or features of the teacher , the student can inherit the dark knowledge from the teacher , and thus often achieves much higher accuracy . Due to its simplicity and effectiveness , knowledge distillation has become a popular technique for both model compression and model accuracy boosting . ( Code : https://github.com/ArchipLab-LinfengZhang/Object-Detection-Knowledge-Distillation-ICLR2021 ) As one of the most crucial challenges in computer vision , object detection has an urgent requirement for both accurate and efficient models . Unfortunately , most of the existing knowledge distillation methods in computer vision are designed for image classification and usually lead to trivial improvements on object detection ( Li et al. , 2017 ) . In this paper , we impute the failure of knowledge distillation on object detection to the following two issues , which we address in turn below . Imbalance between foreground and background . In an image to be detected , the background pixels often vastly outnumber the pixels of the foreground objects . However , in previous knowledge distillation , the student is always trained to mimic the features of all pixels with the same priority . As a result , students pay most of their attention to learning background pixel features , which suppresses the student ’ s learning of foreground object features . Since foreground pixels are more crucial in detection , this imbalance severely hurts the performance of knowledge distillation . To overcome this obstacle , we propose attention-guided distillation , which distills only the crucial foreground pixels . Since the attention map can reflect the positions of the important pixels ( Zhou et al. , 2016 ) , we adopt the attention map as the mask for knowledge distillation . Concretely , a pixel with a higher attention value is regarded as a pixel of a foreground object and is then learned by the student model with a higher priority . Compared with the previous binary mask method ( Wang et al. , 2019 ) , the mask generated by attention maps in our method is more fine-grained and requires no additional supervision . Compared with previous attention-based distillation methods ( Zagoruyko & Komodakis , 2017 ) , the attention map in our method is not only utilized as the information to be distilled but also utilized as the mask signal for feature distillation . Lack of distillation on relation information .
It is generally acknowledged that the relation between different objects contains valuable information in object detection . Recently , many researchers have successfully improved the performance of detectors by enabling them to capture and make use of these relations , for instance via non-local modules ( Wang et al. , 2018 ) and relation networks ( Hu et al. , 2018 ) . However , the existing object detection knowledge distillation methods only distill the information of individual pixels but ignore the relations between different pixels . To solve this issue , we propose non-local distillation , which aims to capture the relation information of students and teachers with non-local modules and then distill it from teachers to students . Since the non-local modules and the attention mechanism in our methods are only required during the training period , our methods do not introduce additional computation or parameters in the inference period . Besides , our methods are feature-based distillation methods which do not depend on a specific detection algorithm , so they can be directly utilized in all kinds of detectors without any modification . On MS COCO2017 , 2.9 , 2.9 and 2.2 AP improvements can be observed on two-stage , one-stage , and anchor-free models on average , respectively . Experiments on Mask RCNN show that our methods can also improve the performance of instance segmentation by 2.0 AP , on average . We have conducted a detailed ablation study and sensitivity study to show the effectiveness and stability of each distillation loss . Moreover , we study the relation between teachers and students on object detection and find that knowledge distillation on object detection requires a high-AP teacher , which is different from the conclusion in image classification , where a high-AP teacher may harm the performance of students ( Mirzadeh et al. , 2019 ; Cho & Hariharan , 2019 ) . We hope that these results will prompt more contemplation of knowledge distillation on tasks beyond image classification . To sum up , the contributions of this paper can be summarized as follows . • We propose attention-guided distillation , which emphasizes students ’ learning on the foreground objects and suppresses students ’ learning on the background pixels . • We propose non-local distillation , which enables the students to learn from teachers not only the information of individual pixels but also the relations between different pixels . • We show that a teacher with higher AP is usually a better teacher in knowledge distillation on object detection , which is different from the conclusion in image classification . 2 RELATED WORK . As an effective method for model compression and model accuracy boosting , knowledge distillation has been widely utilized in various domains and tasks , including image classification ( Hinton et al. , 2014 ; Romero et al. , 2015 ; Zagoruyko & Komodakis , 2017 ) , object detection ( Chen et al. , 2017 ; Li et al. , 2017 ; Wang et al. , 2019 ; Bajestani & Yang , 2020 ) , semantic segmentation ( Liu et al. , 2019 ) , face recognition ( Ge et al. , 2018 ) , pretrained language models ( Sanh et al. , 2019 ; Xu et al. , 2020 ) , multi-exit network training ( Zhang et al. , 2019b ; a ) , model robustness ( Zhang et al. , 2020b ) and so on . Hinton et al . ( 2014 ) first propose the concept of knowledge distillation , where the students are trained to mimic the results after the softmax layers of teachers .
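For reference , a standard sketch ( ours ) of that logit-mimicking loss ; tau is the usual softmax temperature , and the tau^2 scaling is the common convention for keeping gradient magnitudes comparable to the task loss .

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, tau=4.0):
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    # KL(teacher || student) on temperature-softened distributions
    return (tau ** 2) * F.kl_div(log_p_student, p_teacher, reduction="batchmean")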
Then , abundant methods were proposed to transfer the knowledge in the teacher ’ s features ( Romero et al. , 2015 ) or its variants , such as attention ( Zagoruyko & Komodakis , 2017 ; Hou et al. , 2019 ) , FSP ( Yim et al. , 2017 ) , mutual information ( Ahn et al. , 2019 ) , positive features ( Heo et al. , 2019 ) , and the relation of samples in a batch ( Park et al. , 2019 ; Tung & Mori , 2019 ) . Improving the performance of object detection has recently become a hot topic in knowledge distillation . Chen et al . ( 2017 ) design the first knowledge distillation method on object detection , which includes distillation losses on the backbone , the classification head and the regression head . Then , many researchers found that the imbalance between the foreground objects and the background is a crucial problem in detection distillation . Instead of distilling the whole features of backbone networks , Li et al . ( 2017 ) only apply an L2 distillation loss to the features sampled by the RPN . Bajestani & Yang ( 2020 ) propose temporal knowledge distillation , which introduces a hyper-parameter to balance the distillation loss between the pixels of the foreground and background . Wang et al . ( 2019 ) propose fine-grained feature imitation , which only distills the features near object anchor locations . However , although these works have tried to distill only the pixels of foreground objects , they always rely on annotations in the ground truth , anchors , and bounding boxes , and thus can not be transferred to different kinds of detectors and tasks . In contrast , in our method , the pixels of foreground objects are found with an attention mechanism , which can be easily generated from features . As a result , it can be directly utilized in all kinds of detectors without any modification . As shown in Figure 3 , the difference between the previous mask-based detection distillation method ( Wang et al. , 2019 ) and our attention-guided distillation can be summarized as follows : ( i ) Our methods generate the mask with an attention mechanism while they generate the mask with ground truth bounding boxes and anchor priors . ( ii ) The mask in our methods is a pixel-wise and fine-grained mask while the mask in their method is an object-wise and binary mask . ( iii ) The masks in our methods are composed of a spatial mask and a channel mask while they only have a spatial mask . A more detailed comparison with related work can be found in Appendix E . 3 METHODOLOGY . 3.1 ATTENTION-GUIDED DISTILLATION . We use A ∈ R^{C×H×W} to denote the feature of the backbone in an object detection model , where C , H , W denote its channel number , height and width , respectively . Then , the generation of the spatial attention map and the channel attention map is equivalent to finding the mapping functions G^s : R^{C×H×W} → R^{H×W} and G^c : R^{C×H×W} → R^C , respectively . Note that the superscripts s and c here are utilized to discriminate ‘ spatial ’ and ‘ channel ’ . Since the absolute value of each element in the feature implies its importance , we construct G^s by summing the absolute values across the channel dimension and construct G^c by summing the absolute values across the width and height dimensions , which can be formulated as G^c ( A ) = ( 1 / HW ) ∑_{i=1}^{H} ∑_{j=1}^{W} |A_{· , i , j}| and G^s ( A ) = ( 1 / C ) ∑_{k=1}^{C} |A_{k , · , ·}| , where i , j , k denote the i-th , j-th , and k-th slices of A in the height , width , and channel dimensions , respectively .
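In code , the two mappings just defined are a pair of reductions over the feature tensor ; a minimal PyTorch sketch ( ours ) :

import torch

def spatial_attention(A):              # A: (C, H, W) -> (H, W)
    return A.abs().mean(dim=0)         # G^s(A) = (1/C) sum_k |A_{k,.,.}|

def channel_attention(A):              # A: (C, H, W) -> (C,)
    return A.abs().mean(dim=(1, 2))    # G^c(A) = (1/HW) sum_{i,j} |A_{.,i,j}|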
Then , the spatial attention mask M^s and the channel attention mask M^c used in attention-guided distillation can be obtained by summing the attention maps from the teacher and the student detector , which can be formulated as M^s = HW · softmax ( ( G^s ( A^S ) + G^s ( A^T ) ) / T ) and M^c = C · softmax ( ( G^c ( A^S ) + G^c ( A^T ) ) / T ) . Note that the superscripts S and T here are used to discriminate students and teachers . T is a hyper-parameter in the softmax introduced by Hinton et al . ( 2014 ) to adjust the distribution of elements in attention masks ( see Figure 4 ) . The attention-guided distillation loss L_AGD is composed of two components – the attention transfer loss L_AT and the attention-masked loss L_AM . L_AT is utilized to encourage the student model to mimic the spatial and channel attention of the teacher model , which can be formulated as
L_AT = L2 ( G^s ( A^S ) , G^s ( A^T ) ) + L2 ( G^c ( A^S ) , G^c ( A^T ) ) .    ( 1 )
L_AM is utilized to encourage the student to mimic the features of the teacher model via an L2 norm loss masked by M^s and M^c , which can be formulated as
L_AM = ( ∑_{k=1}^{C} ∑_{i=1}^{H} ∑_{j=1}^{W} ( A_{k , i , j}^T − A_{k , i , j}^S )^2 · M_{i , j}^s · M_k^c )^{1/2} .    ( 2 )
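A hedged PyTorch sketch ( ours ) of the masks and the two loss terms in Eqs . ( 1 ) - ( 2 ) , with As and At denoting student and teacher backbone features of shape ( C , H , W ) ; the temperature value and the use of mean squared error for L2 are placeholder choices .

import torch
import torch.nn.functional as F

def attention_guided_losses(As, At, T=0.5):
    C, H, W = As.shape
    gs_s, gs_t = As.abs().mean(0), At.abs().mean(0)            # spatial maps
    gc_s, gc_t = As.abs().mean((1, 2)), At.abs().mean((1, 2))  # channel maps
    Ms = H * W * F.softmax(((gs_s + gs_t) / T).flatten(), dim=0).view(H, W)
    Mc = C * F.softmax((gc_s + gc_t) / T, dim=0)
    l_at = F.mse_loss(gs_s, gs_t) + F.mse_loss(gc_s, gc_t)     # Eq. (1)
    masked = (At - As).pow(2) * Ms.unsqueeze(0) * Mc.view(C, 1, 1)
    l_am = masked.sum().sqrt()                                 # Eq. (2)
    return l_at, l_am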
This paper explores the knowledge distillation problem in object detection. It claims that the failure of knowledge distillation in object detection is mainly caused by the imbalance between foreground and background pixels and by the lack of relation distillation between different pixels. The authors then propose attention-guided distillation and non-local distillation to tackle these problems. Extensive experiments are conducted on MS COCO and verify the effectiveness of the proposed methods.
SP:eb8d96fd5cd18569cfa519c5a09af90ea272d533
Learning Structural Edits via Incremental Tree Transformations
1 INTRODUCTION . Iteratively revising existing data for a certain purpose is ubiquitous . For example , researchers repetitively polish their manuscript until the writing becomes satisfactory ; computer programmers keep editing existing code snippets and fixing bugs until desired programs are produced . Can we properly model such iterative editing processes with neural generative models ? To answer this question , previous works have examined models for editing sequential data such as natural language sentences . Some example use cases include refining results from a first-pass text generation system ( Simard et al. , 2007 ; Xia et al. , 2017 ) , editing retrieved text into desired outputs ( Gu et al. , 2018 ; Guu et al. , 2018 ) , or revising a sequence of source code tokens ( Yin et al. , 2019 ; Chen et al. , 2019 ; Yasunaga & Liang , 2020 ) . These examples make a single editing pass by directly generating the edited sequence . In contrast , there are also works on modeling the incremental edits of sequential data , which predict sequential edit operations ( e.g . keeping , deleting or adding a token ) either in a single pass ( Shin et al. , 2018 ; Vu & Haffari , 2018 ; Malmi et al. , 2019 ; Dong et al. , 2019 ; Stahlberg & Kumar , 2020 ; Iso et al. , 2020 ) or iteratively ( Zhao et al. , 2019 ; Stern et al. , 2019 ; Gu et al. , 2019a ; b ) , or modify a sequence in a non-autoregressive way ( Lee et al. , 2018 ) . However , much interesting data in the world has strong underlying structure such as trees . For example , a syntactic parse can be naturally represented as a tree to indicate the compositional relations among constituents ( e.g . phrases , clauses ) in a sentence . A computer program inherently is also a tree defined by the programming language ’ s syntax . In the case that this underlying structure exists , many edits can be expressed much more naturally and concisely as transformations over the underlying trees than conversions of the tokens themselves . For example , removing a statement from a computer program can be easily accomplished by deleting the corresponding tree branch as opposed to deleting tokens one by one . Despite this fact , work on editing tree-structured data has been much more sparse . In addition , it has focused almost entirely on single-pass modification of structured outputs as exemplified by Yin et al . ( 2019 ) ; Chakraborty et al . ( 2020 ) for computer program editing . In this work , we are interested in a generic model for incremental editing of structured data ( “ structural edits ” ) . Particularly , we focus on tree-structured data , taking abstract syntax trees of computer programs as our canonical example . We propose a neural editor that runs iteratively . At each step , the editor generates and applies a tree edit ( e.g . deleting or adding a subtree ) to the partially edited tree , which deterministically transforms the tree into its modified counterpart . Therefore , the entire tree editing process can be formulated as consecutive , incremental tree transformations ( Fig . 1 ) . While recent works ( Tarlow et al. , 2019 ; Dinella et al. , 2020 ; Brody et al. , 2020 ) have also examined models that make changes to trees , our work is distinct from them in that : First , compared with Dinella et al . ( 2020 ) , we studied a different problem of editing tree-structured data particularly triggered by an edit specification ( which implies a certain edit intent such as a code refactoring rule ) .
Second , we model structural edits via incremental tree transformations , while Tarlow et al . ( 2019 ) and Brody et al . ( 2020 ) predict a complete edit sequence based on the fixed input tree , without applying the edits or performing any tree transformations incrementally . Although Dinella et al . ( 2020 ) have explored a similar idea , our proposed tree editor is more general owing to the adoption of the Abstract Syntax Description Language ( ASDL ; Wang et al . ( 1997 ) ) . This offers our editor two properties : being language-agnostic and ensuring grammar validity . In contrast , Dinella et al . ( 2020 ) include JavaScript-specific design and employ only ad-hoc grammar checking . Finally , our tree editor supports a comprehensive set of operations such as adding or deleting a tree node and copying a subtree , which can fulfill a broad range of tree editing requirements . These operations are not fully allowed by previous work , e.g. , Brody et al . ( 2020 ) can not add ( or generate ) a new tree node from scratch ; Tarlow et al . ( 2019 ) and Dinella et al . ( 2020 ) do not support subtree copying . We further propose two modeling and training improvements , specifically enabled by and tailored to our incremental editing formalism . First , we propose a new edit encoder for learning to represent the edits to be performed . Unlike existing edit encoders , which compress tree differences at the token level ( Yin et al. , 2019 ; Hoang et al. , 2020 ; Panthaplackel et al. , 2020b ) or jointly encode the initial and the target tree pairs in their surface forms ( Yin et al. , 2019 ) , our proposed edit encoder learns the representation by encoding the sequence of gold tree edit actions . Second , we propose a novel imitation learning ( Ross et al. , 2011 ) method to train our editor to correct its mistakes dynamically , given that it can modify any part of a tree at any time . We evaluate our proposed tree editor on two source code edit datasets ( Yin et al. , 2019 ) . Our experimental results show that , compared with previous approaches that generate the edited program in one pass , our editor can better capture the underlying semantics of the intended edits , which allows it to outperform existing approaches by more than 7 % accuracy in a one-shot evaluation setting . With the proposed edit encoder , our editor significantly improves accuracy over existing state-of-the-art methods on both datasets . We also demonstrate that our editor can become more robust by learning to imitate expert demonstrations dynamically . Our source code is available at https://github.com/neulab/incremental_tree_edit . 2 PROBLEM FORMULATION . As stated above , our goal is to create a general-purpose editor for tree-structured data . Specifically , we are interested in editing tree structures defined following an underlying grammar that , for every parent node type , delineates the allowable choices of child nodes . Such syntactic tree structures , like syntax trees of sentences or computer programs , are ubiquitous in fields like natural language processing and software engineering . In this paper , we formulate editing such tree structures as revising an input tree C− into an output tree C+ according to an edit specification ∆ . As a concrete example , we use editing abstract syntax trees ( ASTs ) of C # programs , as illustrated in Fig . 1 . This figure shows transforming the AST of “ x=list.ElementAt ( i+1 ) ” ( C− ) to the AST of “ x=list [ i+1 ] ” ( C+ ) .
In this case , the edit specification ∆ could be interpreted as a refactoring rule that uses the bracket operator [ · ] for accessing elements in a list ( the corresponding Roslyn analyzer in C # can be found at https://github.com/JosefPihrt/Roslynator/blob/master/docs/analyzers/RCS1246.md ) . In practice , the edit specification is learned by an edit encoder f_∆ from a pair of input-output examples 〈C ′− , C ′+〉 , and encoded as a real-valued edit representation , i.e . f_∆ ( C ′− , C ′+ ) ∈ R^n . The learned edit representation could then be used to modify C− in a similar way as editing C ′− . Onwards we use f_∆ as a simplified notation for edit representations . Revising one tree into another typically involves a sequence of incremental edits . For instance , to modify the input tree in the aforementioned example , one may first delete the subtree rooted at the node MethodCall , which corresponds to the code fragment “ list.ElementAt ( i+1 ) ” , and then replace it with an ElementAccess node denoting the bracket operator , etc . We formulate this editing process as a sequential decision making process ( 〈g_1 , a_1〉 , . . . , 〈g_T , a_T 〉 ) , where for each tree g_t at time step t , the editor executes a tree edit action a_t , deterministically transforming it into g_{t+1} . In particular , g_1 is the initial input tree C− . Notably , the special case of an empty initial tree corresponds to code generation from scratch ; thus our formulation applies both to editing existing trees and to generating new ones . The process stops at g_T when the editor predicts a special Stop action as a_T . Denoting g_{1 : t} = ( g_1 , ... , g_t ) as the tree history and a_{1 : t} = ( a_1 , ... , a_t ) as the edit history until step t , the editing can be framed as the following autoregressive process :
p ( a_{1 : T} | f_∆ , g_1 ) = p ( a_1 | f_∆ , g_1 ) p ( a_2 | f_∆ , g_{1:2} ) · · · p ( a_T | f_∆ , g_{1 : T} ) = ∏_{t=1}^{T} p ( a_t | f_∆ , g_{1 : t} ) .    ( 1 )
3 MODEL . We will introduce our neural editor for modeling p ( a_t | · ) in § 3.1 , followed by the edit representation model f_∆ in § 3.2 . 3.1 NEURAL TREE EDITOR . Fig . 1 ( c ) illustrates our editor architecture . At each time step , the editor first encodes the current tree g_t and the tree history g_{1 : t} . It then employs a modular decoder to predict a tree edit action a_t . Next , we will first introduce our tree edit actions and then elaborate on the model details . 3.1.1 TREE EDIT ACTIONS . Our editor uses a sequence of editing actions to incrementally modify a tree-structured input . At each time step , the decoder takes an action a_t to update a partially-edited tree g_t . Specifically , an action a_t consists of an operator ( e.g . an operator that removes a subtree from g_t ) with its optional arguments ( e.g . the target subtree to delete ) . Importantly , the space of actions is limited to maintain consistency with the underlying syntax of the language . While a number of syntactic formalisms such as context free grammar ( Chomsky , 1956 ) or tree substitution grammar ( Cohn et al. , 2010 ) exist , in this work we choose the ASDL formalism due to its ability to flexibly handle optional and sequential fields ( interested readers may reference Wang et al . ( 1997 ) and Yin & Neubig ( 2018 ) for details ) . Under this framework , we define four types of operators . Delete operators take a tree node n_t as argument and remove n_t and its descendants from g_t ( e.g . t = 1 in Fig . 1 ( b ) ) .
Note that removing arbitrary ( child ) nodes from g_t might produce syntactically invalid trees , since under the grammar , a parent node type always has a fixed set of edge types . For instance , if the node MethodCall and its incoming edge right were to be removed at t = 1 , the resulting AST would be syntactically invalid under C # ’ s grammar , as the node AssignStmt denoting a variable assignment statement would be missing a child node representing its right operand . To maintain syntactic correctness ( no missing child nodes for any parent nodes ) , we therefore replace the to-be-deleted node with a pseudo Dummy node as a placeholder . Next , we define an Add operator to add new nodes to g_t . The operator first locates a target position by selecting an existing tree node . We consider two cases based on the edge type ( or “ field ” ) of the target position : for single or optional fields that allow at most one child ( e.g . field right in Fig . 1 ( b ) at t = 1 ) , the selected tree node has to be their dummy child node ( e.g . node Dummy at t = 1 ) , and the Add operator will then replace the dummy node with the new tree node ; for sequential fields that accept more than one child ( e.g . the field of a “ statement block ” that allows an arbitrary number of statements ) , the selected tree node can be any child node of the field ( including a rightmost dummy node we append to every sequential field ) , and the Add operator will then insert the new node before the selected node . We elaborate on this mechanism in § A.1 . For our editor , adding a non-terminal node ( e.g . node ElementAccess in Fig . 1 ( b ) at t = 2 ) is equivalent to selecting a production rule to derive its field ( e.g . AssignStmt −right→ ElementAccess ) . As with Delete actions , to ensure there is no missing child node , we instantiate the set of child nodes with dummy nodes for the newly added node based on the underlying grammar , which leads to nodes Dummy1 and Dummy2 at t = 2 . Add can also be used to populate empty terminal nodes with actual values ( e.g . the string token “ list ” at t = 3 ) . This is the same as picking a token from the token vocabulary . Additionally , observing that in many cases revising a tree can be easily done by copying a subtree from the initial input g_1 ( e.g . the subtree Expr ↦ i + 1 in Fig . 1 ( a ) ) to a new position in the updated tree g_t ( e.g . the right child position of node ElementAccess in Fig . 1 ( b ) at t = 4 ) , we introduce a high-level operator CopySubTree . This operator locates a target position similarly to the Add operator and then copies a complete subtree from g_1 to the target position in a single step . Finally , a Stop action is used to terminate the iterative tree editing procedure , after which the remaining dummy nodes will be cleared . We note that our framework decouples the language grammar specifications ( handled by ASDL ) from the model architecture ( corresponding to our language-agnostic model implementation ) , and thus can be applied to various languages flexibly .
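A hedged Python sketch ( ours , not the released implementation ) of how these four operators and the autoregressive process of Eq . ( 1 ) fit together ; editor.predict and g.apply are hypothetical interfaces standing in for the decoder and the deterministic tree transformation .

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class EditAction:
    op: str                        # "delete" | "add" | "copy_subtree" | "stop"
    target: Optional[Any] = None   # node locating the edit position
    payload: Optional[Any] = None  # production rule, terminal token, or copied subtree

def run_editor(editor, g, edit_repr, max_steps=100):
    trees = [g]
    for _ in range(max_steps):
        action = editor.predict(edit_repr, trees)   # samples a_t ~ p(a_t | f_delta, g_{1:t})
        if action.op == "stop":
            break                                   # remaining dummy nodes are then cleared
        g = g.apply(action)                         # deterministic transformation g_t -> g_{t+1}
        trees.append(g)
    return g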
The paper presents an approach for predicting edits in programs by modeling the programs as trees. The approach is mainly an extension of Yin et al. (2019), with the main difference that the model is required to predict only the output **actions**, instead of generating the entire output tree as in Yin et al. (2019). This difference of predicting only output actions is, however, shared with other previous work.
SP:d5f2c31689e6b6f52bb6f21916e8acacba444f76
LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
1 INTRODUCTION . Facial recognition systems ( FR ) are widely deployed for mass surveillance by government agencies , government contractors , and private companies alike on massive databases of images belonging to private individuals ( Hartzog , 2020 ; Derringer , 2019 ; Weise & Singer , 2020 ) . Recently , these systems have been thrust into the limelight in the midst of outrage over invasion into personal life and concerns regarding fairness ( Singer , 2018 ; Lohr , 2018 ; Cherepanova et al. , 2021 ) . Practitioners populate their databases by hoarding publicly available images from social media outlets , and so users are forced to choose between keeping their images outside of public view or taking their chances with mass surveillance . We develop a tool , LowKey , for protecting users from unauthorized surveillance by leveraging methods from the adversarial attack literature , and make it available to the public as a webtool . LowKey is the first such evasion tool that is effective against commercial facial recognition APIs . Our system pre-processes user images before they are made publicly available on social media outlets so they can not be used by a third party for facial recognition purposes . We establish the effectiveness of LowKey throughout this work . Our contributions can be summarized as follows : • We design a black-box adversarial attack on facial recognition models . Our algorithm moves the feature space representations of gallery faces so that they do not match corresponding probe images while preserving image quality . • We interrogate the performance of our method on commercial black-box APIs , including Amazon Rekognition and Microsoft Azure Face , whose inner workings are not publicly known . We provide comprehensive comparisons with the existing data poisoning alternative , Fawkes ( Shan et al. , 2020 ) , and we find that while Fawkes is ineffective in every experiment , our method consistently prevents facial recognition . • We release an easy-to-use webtool , LowKey , so that social media users are no longer confronted with a choice between withdrawing their social media presence from public view and risking the repercussions of being surveilled . 2 RELATED WORK . Neural networks are known to be vulnerable to adversarial attacks , small perturbations to inputs that do not change semantic content , and yet cause the network to misbehave ( Goodfellow et al. , 2014 ) . The adversarial attack literature has largely focused on developing new algorithms that , in simulations , are able to fool neural networks ( Carlini & Wagner , 2017 ; Chiang et al. , 2020 ) . Most works to date focus on the idea of physical world attacks , in which the attacker places adversarial patterns on an object in hopes that the adversarial properties transfer to an image of the object . Such attacks do not succeed reliably because the adversarial perturbation must survive imaging under various lighting conditions , object orientations , and occlusions ( Kurakin et al. , 2016 ) . While researchers have succeeded in crafting such attacks against realistic systems , these attacks do not work consistently across environments ( Wu et al. , 2019 ; Xu et al. , 2019 ; Goldblum et al. , 2020 ) . In facial recognition , attacks have largely focused on physical backdoor threat models , evasion attacks on verification ( Wenger et al. , 2020 ; Zhong & Deng , 2020 ) and attacks on face detection ( Pedraza et al. , 2018 ) .
Unlike these physical threat models, the setting in which we operate is purely digital, meaning that we can manipulate the contents of digital media at the bit level and then hand the manipulated data directly to a machine learning system. The ability to digitally manipulate media greatly simplifies the task of attacking a system, and has been shown to enhance transferability to black-box industrial systems for applications like copyright detection (Saadatpanah et al., 2020) and financial time series analysis (Goldblum et al., 2020). Recently, the Fawkes algorithm was developed for preventing social media images from being used by unauthorized facial recognition systems (Shan et al., 2020). However, Fawkes, along with the experimental setup on which it is evaluated in the original work, suffers from critical problems. First, Fawkes assumes that facial recognition practitioners train their models on each individual's data. However, high-performance FR systems instead harness large pre-trained Siamese networks (Liu et al., 2017; Deng et al., 2019). Second, the authors primarily use image classifiers. In contrast, commercial systems are trained with FR-specific heads and loss functions, as opposed to the standard cross-entropy loss used by classifiers. Third, the authors perform evaluations on very small datasets. Specifically, they test Fawkes against commercial APIs with a gallery containing only 50 images. Fourth, the system was only evaluated using top-1 accuracy, but FR users such as police departments often compile a list of suspects rather than a single individual. As a result, other metrics like top-50 accuracy are often used in facial recognition and are a more realistic measure of whether a system has been successfully suppressed. Fifth, while the original work portrays Fawkes' perturbations as undetectable by the human eye, experience with the codebase suggests the opposite (indeed, a New York Times journalist likewise noted that the Fawkes images she was shown during a demonstration were visibly heavily distorted). Finally, Fawkes has not yet released an app or a webtool, and regular social media users are unlikely to make use of git repositories. Our attack avoids the aforementioned limitations, and we perform thorough evaluations on a large collection of images and identities. When comparing with Fawkes, we use the authors' own implementation in order to make sure that all evaluations are fair. Furthermore, we use Fawkes' highest protection setting to make sure that LowKey performs better than Fawkes' best attack. Another work uses targeted adversarial attacks on probe images for facial recognition systems so that they cannot be matched with images in a database (Yang et al., 2020). 3 THE LOWKEY ATTACK ON MASS SURVEILLANCE. 3.1 PROBLEM SETUP. To help make our work more widely accessible, we begin by introducing common facial recognition terms. Gallery images are database images with known identities. These often originate from such sources as passport photos and social media profiles. The gallery is used as a reference for comparing new images. Probe images are new photos whose subject the FR system user wants to identify. For example, probe images may be extracted from video surveillance footage. The extracted images are then fed into the FR system and matched against gallery images with known identities. Identification is the task of answering the question, “who is this person?
” Identification entails comparing a probe image to gallery images in order to find potential matches. In contrast, verification answers the question, “is this person who they say they are?”, or equivalently, “are these two photos of the same person?” Verification is used, for example, to unlock phones. In our work, we focus on identification, which can be used for mass surveillance. State-of-the-art facial recognition systems first detect and align faces before extracting facial features from the probe image using a neural network. These systems then find gallery images with the closest feature vectors using a k-nearest neighbors search. The matched gallery images are then considered as likely identities corresponding to the person in the probe photo. LowKey applies a filter to user images which may end up in an organization's database of gallery images. The result is to corrupt the gallery feature vectors so that they will not match feature vectors corresponding to the user's probe images. A visual depiction of the LowKey pipeline can be found in Figure 2. 3.2 THE LOWKEY ATTACK. LowKey manipulates potential gallery images so that they do not match probe images of the same person. LowKey does this by generating a perturbed image whose feature vector lies far away from that of the original image, while simultaneously minimizing a perceptual similarity loss between the original and perturbed image. Maximizing the distance in feature space prevents the image from matching other images of the individual, while the perceptual similarity loss prevents the image quality from degrading. In this section, we formulate the optimization problem and describe a number of important details. LowKey is designed to evade proprietary FR systems that contain pre-processing steps and neural network backbones that are not publicly known. In order to improve the transferability of our attack to unknown facial recognition systems, LowKey simultaneously attacks an ensemble of models with various backbone architectures that are produced using different training algorithms. Additionally, for each model in the ensemble, the objective function considers the locations of feature vectors of the attacked image both with and without a Gaussian blur. We find that this technique improves both the appearance and transferability of attacked images. Experiments and ablations concerning ensembling and Gaussian smoothing can be found in Section 6. For the perceptual similarity loss, we use LPIPS, a metric based on $\ell_2$ distance in the feature space of an ImageNet-trained feature extractor (Zhang et al., 2018). LPIPS has been used effectively in the image classification setting to improve the image quality of adversarial examples (Laidlaw et al., 2020). Formally, the optimization problem we solve is

$$\max_{x'} \; \frac{1}{2n}\sum_{i=1}^{n} \frac{\overbrace{\| f_i(A(x)) - f_i(A(x')) \|_2^2}^{\text{non-smoothed}} \;+\; \overbrace{\| f_i(A(x)) - f_i(A(G(x'))) \|_2^2}^{\text{smoothed}}}{\| f_i(A(x)) \|_2} \;-\; \alpha \underbrace{\mathrm{LPIPS}(x, x')}_{\text{perceptual loss}}, \quad (1)$$

where $x$ is the original image, $x'$ is the perturbed image, $f_i$ denotes the $i$-th model in our ensemble, $G$ is the Gaussian smoothing function with fixed parameters, and $A$ denotes face detection and extraction followed by 112×112 resizing and alignment.
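For concreteness, the following is a minimal PyTorch-style sketch of objective (1). It is an illustration under stated assumptions, not the authors' released implementation: the ensemble `models`, the detection-and-alignment helper `align`, the `gaussian_blur` function, the `lpips_fn` metric, and the weight `alpha` are all assumed stand-ins for the components described above.

```python
# Hypothetical sketch of the LowKey objective in Equation (1); module names
# (models, align, gaussian_blur, lpips_fn) and alpha are assumed placeholders.
import torch

def lowkey_loss(x, x_adv, models, align, gaussian_blur, lpips_fn, alpha=0.05):
    """Ensemble feature-space distance, with and without Gaussian smoothing,
    normalized by the reference feature norm, minus an LPIPS penalty."""
    feat_term = 0.0
    for f in models:                                   # ensemble members f_i
        ref = f(align(x)).detach()                     # f_i(A(x)), fixed reference
        adv = f(align(x_adv))                          # non-smoothed term
        adv_blur = f(align(gaussian_blur(x_adv)))      # smoothed term
        num = (ref - adv).pow(2).sum() + (ref - adv_blur).pow(2).sum()
        feat_term = feat_term + num / ref.norm(p=2)
    feat_term = feat_term / (2 * len(models))
    return feat_term - alpha * lpips_fn(x, x_adv)      # LowKey maximizes this
```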
The face detection step is an important part of the LowKey objective function, as commercial systems rely on face detection and extraction because probe images often contain a scene much larger than a face, or else contain a face whose alignment is not compatible with the face recognition system. We solve this maximization problem iteratively with signed gradient ascent, which is known to be highly effective for breaking common image classification systems (Madry et al., 2017). Namely, we iteratively update $x'$ by adding the sign of the gradient of the maximization objective (1) with respect to $x'$. By doing this, we move $x'$ and $G(x')$ far away from the original image $x$ in the feature spaces of the models $f_i$ used in the LowKey ensemble. The ensemble contains four feature extractors: IR-152 and ResNet-152 backbones trained with ArcFace and CosFace heads. More details can be found in the next section. Additional details concerning attack hyperparameters can be found in Appendix 8.1.
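A hedged sketch of the iterative optimization follows, reusing the `lowkey_loss` sketch above; the step size and iteration count are illustrative placeholders, not the Appendix 8.1 hyperparameters.

```python
# Illustrative signed gradient ascent on the LowKey objective; steps and
# step_size are assumed values, not the paper's tuned hyperparameters.
import torch

def lowkey_attack(x, models, align, gaussian_blur, lpips_fn,
                  steps=50, step_size=0.004):
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = lowkey_loss(x, x_adv, models, align, gaussian_blur, lpips_fn)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend: add the sign of the gradient of objective (1) w.r.t. x'.
        x_adv = (x_adv.detach() + step_size * grad.sign()).clamp(0, 1)
        x_adv.requires_grad_(True)
    return x_adv.detach()
```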
This paper presents a method/tool, i.e., LowKey, to protect user privacy, which leverages adversarial attacks to pre-process facial images against black-box facial recognition systems used on social media, while the processed facial images remain visually acceptable. The LowKey method proposes to attack an ensemble of face recognition models, incorporating a Gaussian blur of the perturbed face images into the objective and using the LPIPS metric, based on the L2 distance in the \emph{feature space}, as a perceptual penalty. Thus, the processed face images remain visually legible to humans. The ensemble of face recognition models includes ResNet-50, ResNet-152, IR-50, and IR-152 trained on the MS-Celeb-1M dataset. The LowKey method is demonstrated to be very effective in combating the black-box commercial face recognition systems from Amazon and Microsoft.
Goal-Driven Imitation Learning from Observation by Inferring Goal Proximity
1 INTRODUCTION. Humans are capable of effectively leveraging demonstrations from experts to solve a variety of tasks. Specifically, by watching others performing a task, we can learn to infer how close we are to completing the task, and then take actions towards states closer to the goal of the task. For example, after watching a few tutorial videos for chair assembly, we learn to infer how close an intermediate configuration of a chair is to completion. With the guidance of such a task progress estimate, we can efficiently learn to assemble the chair, progressively getting closer to, and eventually reaching, the fully assembled chair. Can machines likewise first learn an estimate of progress towards a goal from demonstrations and then use this estimate as guidance to move closer to and eventually reach the goal? Typical learning from demonstration (LfD) approaches (Pomerleau, 1989; Pathak et al., 2018; Finn et al., 2016) greedily imitate the expert policy and therefore suffer from accumulated errors causing a drift away from states seen in the demonstrations. On the other hand, adversarial imitation learning approaches (Ho & Ermon, 2016; Fu et al., 2018) encourage the agent to imitate expert trajectories with a learned reward that distinguishes agent and expert behaviors. However, such adversarially learned reward functions often overfit to the expert demonstrations and do not generalize to states not covered in the demonstrations (Zolna et al., 2019), leading to unsuccessful policy learning. Inspired by how humans leverage demonstrations to measure progress and complete tasks, we devise an imitation learning from observation (LfO) method which learns a task progress estimator and uses the task progress estimate as a dense reward signal for training a policy, as illustrated in Figure 1. To measure the progress of a goal-driven task, we define goal proximity as an estimate of temporal distance to the goal (i.e., the number of actions required to reach the goal). In contrast to prior adversarial imitation learning algorithms, by having additional supervision of task progress and learning to predict it, the goal proximity function can acquire more structured task-relevant information, and hence generalize better to unseen states and provide better reward signals. However, the goal proximity function can still output inaccurate predictions on states not in demonstrations, which results in unstable policy training. To improve the accuracy of the goal proximity function, we continually update the proximity function with trajectories from both the expert and the agent. In addition, we penalize trajectories with the uncertainty of the goal proximity prediction, which prevents the policy from exploiting high proximity estimates with high uncertainty. As a result, by leveraging the agent experience and predicting the proximity function uncertainty, our method can achieve more efficient and stable policy learning. The main contributions of this paper include (1) an algorithm for imitation from observation that uses estimated goal proximity to inform an agent of the task progress; (2) modeling uncertainty of goal proximity estimation to prevent exploiting uncertain predictions; and (3) a joint training algorithm of the goal proximity function and policy.
We show that the policy learned with our proposed goal proximity function is more effective and generalizes better than state-of-the-art LfO algorithms on various domains, such as navigation, robot manipulation, and locomotion. Moreover, our method demonstrates comparable results with GAIL (Ho & Ermon, 2016), which learns from expert actions. 2 RELATED WORK. Imitation learning (Schaal, 1997) aims to leverage expert demonstrations to acquire skills. While behavioral cloning (Pomerleau, 1989) is simple and effective with a large number of demonstrations, it suffers from compounding errors caused by distributional drift (Ross et al., 2011). On the other hand, inverse reinforcement learning (Ng & Russell, 2000; Abbeel & Ng, 2004; Ziebart et al., 2008) estimates the underlying reward from demonstrations and learns a policy through reinforcement learning with this reward, which can better handle the compounding errors. Specifically, generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016) and its variants (Fu et al., 2018; Kostrikov et al., 2020) show improved demonstration efficiency by training a discriminator to distinguish expert and agent transitions and using the discriminator output as a reward for policy training. While most imitation learning algorithms require expert actions, imitation learning from observation (LfO) approaches learn from state-only demonstrations. This enables LfO methods to learn from diverse sources of demonstrations, such as human videos, demonstrations with different controllers, and other robots. To imitate demonstrations without expert actions, inverse dynamics models (Niekum et al., 2015; Torabi et al., 2018a; Pathak et al., 2018) or learned reward functions (Edwards et al., 2016; Sermanet et al., 2017; 2018; Liu et al., 2018; Lee et al., 2019a) can be used to train the policy. However, these methods require large amounts of data to train inverse dynamics models or representations. On the other hand, state-only adversarial imitation learning (Torabi et al., 2018b; Yang et al., 2019) can imitate an expert with few demonstrations, similar to GAIL. In addition to discriminating expert and agent trajectories, our method proposes to also estimate the proximity to the goal, which can provide more informed reward signals and generalize better. Closely related to our approach are reinforcement learning algorithms that learn a value function or proximity estimator from successful trajectories and use them as an auxiliary reward (Mataric, 1994; Edwards & Isbell, 2019; Lee et al., 2019b). While these value functions and proximity estimators are similar to our proposed goal proximity function, these works require environment reward signals and do not incorporate adversarial online training and uncertainty estimates. Moreover, demonstrating the value of learning a proximity estimate for imitation learning, Angelov et al. (2020) utilize the learned proximity to choose a proper sub-policy but do not train a policy from the learned proximity. Similar to our method, Burke et al. (2020) propose to learn a reward function using a ranking model and use it for policy optimization, demonstrating the advantage of using goal proximity as a reward for training a policy. However, they learn the proximity function from demonstrations alone and solely provide proximity as a reward.
This hinders agent learning when the proximity function fails to generalize to agent experience, allowing the agent to exploit inaccurate proximity predictions for reward. By incorporating the online update, uncertainty estimates, and difference-based proximity reward, our method can robustly imitate state-only demonstrations to solve goal-driven tasks without access to the true environment reward. 3 METHOD. In this paper, we address the problem of learning from observations for goal-driven tasks. Adversarial imitation learning methods (Torabi et al., 2018b; Yang et al., 2019) suggest learning a reward function that penalizes agent state transitions off the expert trajectories. However, these learned reward functions often overfit to expert demonstrations and do not generalize to states which are not covered in the demonstrations, leading to unsuccessful policy learning. To acquire a more structured and generalizable reward function from demonstrations, we propose to learn a goal proximity function that estimates proximity to the goal distribution in terms of temporal distance (i.e., the number of actions required to reach the goal). Then, a policy learns to reach states with higher proximity (i.e., that are closer to the goal) as predicted by the goal proximity function. Moreover, during policy training, we propose to measure the uncertainty of the goal proximity function, which prevents the policy from exploiting over-optimistic proximity predictions and yielding undesired behaviors. In Section 3.2, we describe the goal proximity function in detail. Then, in Section 3.3, we elaborate on how the policy is jointly trained with the goal proximity function. 3.1 PRELIMINARIES. We formulate our learning problem as a Markov decision process (Sutton, 1984) defined through a tuple $(S, A, R, P, \rho_0, \gamma)$ for the state space $S$, action space $A$, reward function $R(s, a)$, transition distribution $P(s'|s, a)$, initial state distribution $\rho_0$, and discounting factor $\gamma$. We define a policy $\pi(a|s)$ that maps from states to actions and correspondingly moves an agent to a new state according to the transition probabilities. The policy is trained to maximize the expected sum of discounted rewards, $\mathbb{E}_{(s,a)\sim\pi}\big[\sum_{t=0}^{T_i} \gamma^t R(s_t, a_t)\big]$, where $T_i$ represents the variable length of episode $i$. In imitation learning, the learner receives a fixed set of expert demonstrations, $D_e = \{\tau^e_1, \ldots, \tau^e_N\}$. In this paper, we specifically consider the learning from observation (LfO) setup where each demonstration $\tau^e_i$ is a sequence of states. Moreover, we assume that all expert demonstrations are successful; therefore, the final state of an expert trajectory reaches the task goal. 3.2 LEARNING GOAL PROXIMITY FUNCTION. In goal-driven tasks, an estimate of how close an agent is to the goal can be utilized as a direct learning signal. Therefore, instead of learning to discriminate agent and expert trajectories (Ho & Ermon, 2016; Torabi et al., 2018b), we propose a goal proximity function, $f: S \to \mathbb{R}$, that learns how close states are to the goal distribution. Specifically, we define goal proximity as a proximity that is discounted based on its temporal distance to the goal (i.e., inversely proportional to the number of actions required to reach the goal). Note that the goal proximity function measures the temporal distance, not the spatial distance, between the current and goal states.
Therefore, a single proximity value can entail all information about the task, goal, and any roadblocks. In this paper, we define the goal proximity of a state $s_t$ as the linearly discounted proximity $f(s_t) = 1 - \delta \cdot (T_i - t)$, where $\delta \in (0, 1)$ is a discounting factor and $T_i$ is the episode horizon. We set $\delta$ to $1/H$ to evenly distribute the proximity between 0 and 1, where $H$ is the maximum task horizon. Note that we use the maximum episode length $H$, instead of the variable episode length $T_i$, to define a fixed $\delta$ so that the temporal discounting is consistent between episodes. We use the mean squared error as the objective for training the goal proximity function $f_\phi$ parameterized by $\phi$:

$$\mathcal{L}_\phi = \mathbb{E}_{\tau^e_i \sim D_e,\, s_t \sim \tau^e_i}\left[ f_\phi(s_t) - \big(1 - \delta \cdot (T_i - t)\big) \right]^2. \quad (1)$$

Algorithm 1: Imitation learning with goal proximity function
Require: Expert demonstrations $D_e = \{\tau^e_1, \ldots, \tau^e_N\}$
1: Initialize weights of goal proximity function $f_\phi$ and policy $\pi_\theta$
2: for $i = 0, 1, \ldots, M$ do ▷ Offline proximity function training
3:   Sample expert demonstration $\tau^e \sim D_e$
4:   Update goal proximity function $f_\phi$ with $\tau^e$ to minimize Equation 1
5: end for
6: for $i = 0, 1, \ldots, L$ do ▷ Policy training
7:   Roll out trajectories $\tau_i = (s_0, \ldots, s_{T_i})$ with $\pi_\theta$
8:   Compute proximity reward $R_\phi(s_t, s_{t+1})$ for $(s_t, s_{t+1}) \sim \tau_i$ using Equation 5
9:   Update $\pi_\theta$ using any RL algorithm
10:  Update $f_\phi$ with $\tau_i$ and $\tau^e \sim D_e$ to minimize Equation 4
11: end for

There are alternative ways to represent and learn goal proximity, such as exponentially discounted proximity and ranking-based proximity (Brown et al., 2019). But, in our experiments, linearly discounted proximity consistently performed better than the alternatives; therefore, the linearly discounted proximity is used throughout this paper (see Figure 5b and Figure 11). By learning to predict the goal proximity, the goal proximity function not only learns to discriminate agent and expert trajectories (i.e., predict 0 proximity for an agent trajectory and positive proximity for an expert trajectory with Equation 4), but also acquires task information about the temporal progress entailed in the trajectories. From this additional supervision, the proximity function provides more informative learning signals to the policy and generalizes better to unseen states, as empirically shown in Section 4.
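To make Equation 1 concrete, here is a minimal sketch of the proximity regression, assuming a PyTorch network `f_phi` mapping a batch of states to scalar proximities and demonstrations stored as state tensors; these interfaces are illustrative, not the authors' code.

```python
# Minimal sketch of training the goal proximity function with Equation 1.
# f_phi's interface and the demonstration tensor shapes are assumptions.
import torch
import torch.nn.functional as F

def proximity_targets(episode_len, horizon):
    """Linearly discounted targets f(s_t) = 1 - delta * (T_i - t), delta = 1/H."""
    delta = 1.0 / horizon
    t = torch.arange(episode_len, dtype=torch.float32)
    T_i = episode_len - 1                  # index of the final (goal) state
    return 1.0 - delta * (T_i - t)         # final state gets proximity 1

def proximity_loss(f_phi, demo_states, horizon):
    """Mean squared error between predicted and target proximity (Equation 1)."""
    targets = proximity_targets(demo_states.shape[0], horizon)
    preds = f_phi(demo_states).squeeze(-1)  # one scalar proximity per state
    return F.mse_loss(preds, targets)
```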
The authors propose a new method for imitation learning from observation that attempts to estimate and leverage a notion of goal proximity in order to help the learning process. The authors provide a framework for computing this estimate, and a technique for using that estimate -- along with a measure of uncertainty -- to perform imitation learning from observation. Experimental results for several domains are presented in which the proposed technique achieves better performance than the comparison methods.
Adaptive Gradient Methods Can Be Provably Faster than SGD with Random Shuffling
1 INTRODUCTION. We consider the finite-sum minimization problem in stochastic optimization:

$$\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x), \quad (1)$$

where $f$ is the objective function and its component functions $f_i: \mathbb{R}^d \to \mathbb{R}$ are smooth and possibly non-convex. This formulation has been used extensively in training neural networks today. Stochastic gradient descent (SGD) and its variants have been shown to be quite effective for solving this problem, whereas recent works demonstrate another prominent line of gradient-based algorithms by introducing adaptive step sizes to automatically adjust the learning rate (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2014). Despite the superior performance of adaptive gradient methods in many tasks (Devlin et al., 2019; Vaswani et al., 2017), their theoretical convergence remains the same or even worse for non-convex objectives, compared to SGD. In general non-convex settings, it is often impractical to discuss optimal solutions. Therefore, the attention of analysis turns to stationary points instead. Many works have been proposed to study first-order (Chen et al., 2019; Zhou et al., 2018; Zaheer et al., 2018; Ward et al., 2018; Zhou et al., 2018) and second-order (Allen-Zhu, 2018; Staib et al., 2019) stationary points. Table 1 summarizes some previous best-known results for finding first-order stationary points. One might notice that the best dependence on the total iteration number $T$ of adaptive gradient methods matches that of vanilla SGD. In addition, with the introduction of incremental sampling techniques, an even better convergence of SGD can be obtained (Haochen & Sra, 2019; Nguyen et al., 2020). This gap between theory and practice of adaptive gradient methods has been an open problem that we aim to solve in this paper. Motivated by the analysis of sampling techniques, we rigorously prove that adaptive gradient methods exhibit a faster non-asymptotic convergence rate that matches the best result on SGD. In particular, we make the following contributions: • Our main contribution (Theorems 1, 2, 3) is to prove that two variants of AdaGrad can find $\tilde{O}(T^{-1/2})$-approximate first-order stationary points under the strong growth condition assumption (Schmidt & Roux, 2013). This improves the previous best convergence results of adaptive gradient methods and shuffling SGD by factors of $O(T^{1/4})$ and $O(T^{1/6})$, respectively. • Our analysis points out two key components that lead to better convergence results of adaptive gradient methods: the epoch-wise analysis of random shuffling can incorporate the benefit of full gradients; the adaptive learning rates along with the strong growth condition provide better improvement of the objective value in consecutive epochs. Finite-Sum Minimization vs. Expectation Minimization. The comparison in Table 1 shows the convergence rates in the non-convex setting with respect to first-order stationary points. The results in the first two categories apply to the general expectation minimization problem with $f(x) = \mathbb{E}_z f(x, z)$. Whereas the convergences for expectation minimization naturally transfer to finite-sum minimization, the statements remain asymptotic, meaning $\mathbb{E}\|\nabla f(x)\| \sim O(T^{-\alpha})$, where the expectation is taken to compensate for the stochastic gradients. Many efforts have been made to reduce variance in finite-sum minimization (Johnson & Zhang, 2013; Reddi et al., 2016; Haochen & Sra, 2019).
In particular, non-asymptotic results can be gained using random shuffling, under which the dependency on the sample size $n$ seems to be unavoidable (Haochen & Sra, 2019). 2 PRELIMINARIES. A typical setting of machine learning using gradient methods is the finite-sum minimization in equation (1). In this problem, the number of samples $n$ is usually very large, rendering the evaluation of full gradients expensive. Therefore, a mini-batch gradient is introduced to approximate the full gradient. Mini-batch gradient descent is often carried out in epochs, where each epoch includes several iterations of parameter updates. This epoch-wise implementation can easily incorporate shuffling techniques, which have proven to be effective for SGD both in theory and practice. We aim to analyze the convergence rate of adaptive gradient methods under this framework, where the objective can be non-convex. Throughout this paper, we restrict the discussion of convergence to achieving an $\epsilon$-approximate first-order stationary point, defined as $x$ satisfying $\|\nabla f(x)\| \le \epsilon$. We leave for future work analysis related to saddle points and second-order stationary points. We want to show that adaptive gradient methods can find $x$ such that $\|\nabla f(x)\| = \tilde{O}(T^{-1/2})$ in $T$ epochs. Notations. $v^2$ denotes the matrix $vv^\top$ and $\|v\|$ is the $\ell_2$-norm of vector $v$; $\mathrm{diag}(V)$, $\|V\|$, $\lambda_{\max}(V)$, and $\lambda_{\min}(V)$ are the diagonal matrix, the spectral norm, and the largest and smallest non-zero eigenvalues of the matrix $V$, respectively. For alphabets with subscripts, $v_{i:j}$ denotes the collection $\{v_i, v_{i+1}, \ldots, v_j\}$ and $v_:$ denotes the entire set of $v_\cdot$; similar notations are used for alphabets with double subscripts. Let $[n] = \{1, \ldots, n\}$, and let $O(\cdot)$, $\tilde{O}(\cdot)$ be the standard asymptotic notations. Denote $e_i$ as the unit vector with its $i$-th component being 1 and $e$ the all-one vector whose dimension depends on the context. As a clarification, we use $T$ to denote the number of epochs (instead of the number of iterations as in Table 1) starting from this section. AdaGrad-type methods. As opposed to SGD, adaptive gradient methods assign a coordinate-wise adaptive learning rate to the stochastic gradient. We formulate the generic AdaGrad-type optimizers, including their full and diagonal versions, as follows. At the $i$-th iteration of epoch $t$, the parameter is updated by:

$$x_{t,i+1} = x_{t,i} - \eta_t V_{t,i}^{-1/2} g_{t,i} \quad \text{(full version)}, \qquad x_{t,i+1} = x_{t,i} - \eta_t\, \mathrm{diag}(V_{t,i})^{-1/2} g_{t,i} \quad \text{(diagonal version)},$$

where $g_{t,i}$ is the mini-batch gradient of the objective at $x_{t,i}$, the matrix $V_{t,i}$ contains second moments calculated using all the past stochastic gradients, and $\eta_t$ is the step size of epoch $t$. The initial parameter of epoch $t+1$ is taken to be the parameter updated by epoch $t$, i.e., $x_{t+1,1} = x_{t,m+1}$, where we have $m$ iterations in each epoch. The full version is impractical for high-dimensional $x$. Thus the diagonal version is often preferred in the literature. As an example, the second-moment matrix in AdaGrad is taken to be $V_{t,i} = \big(\sum_{s=1}^{t-1}\sum_{j=1}^{m} g_{s,j}^2 + \sum_{j=1}^{i} g_{t,j}^2\big)/t$. SGD can also be written in this general form, where we set $V_{t,i}$ to be the identity matrix. Sampling Strategy. Random shuffling, also known as sampling without replacement, is an often-used technique to accelerate the convergence of SGD.
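Before turning to the sampling strategy, the diagonal update rule above can be made concrete with a short NumPy sketch; the mini-batch gradient oracle `grad_fn` and the epoch structure are assumptions for illustration.

```python
# Illustrative diagonal AdaGrad-type epoch: x <- x - eta_t * diag(V)^(-1/2) g.
# grad_fn and the batch structure are assumed placeholders.
import numpy as np

def adagrad_diagonal_epoch(x, grad_fn, batches, v_accum, eta_t, eps=1e-12):
    """One epoch of the diagonal update; v_accum holds the accumulated squared
    gradients that define diag(V_{t,i})."""
    for batch in batches:
        g = grad_fn(x, batch)            # mini-batch gradient g_{t,i}
        v_accum = v_accum + g ** 2       # coordinate-wise second moment
        x = x - eta_t * g / (np.sqrt(v_accum) + eps)
    return x, v_accum
```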
The idea is to sample a random permutation of the function indices $[n]$ for each epoch and slide through this permutation to get the mini-batch gradients for the iterations in this epoch. Some implementations shuffle the set $[n]$ uniformly and independently for each epoch, while others shuffle the set once during initialization and use the same permutation for all epochs. Generally speaking, suppose we have a permutation $\sigma = (\sigma_1, \ldots, \sigma_n)$ at epoch $t$; we define the set $B_{t,i} = \{\sigma_j : (i-1)\frac{n}{m} < j \le i\frac{n}{m}\}$, where $m$ is the number of iterations in one epoch. Then the mini-batch gradient is taken to be $g_{t,i} = \frac{m}{n} \sum_{j \in B_{t,i}} \nabla f_j(x_{t,i})$. This sampling method of mini-batches benefits the theoretical analysis of SGD by providing a bounded error between the full gradient and the aggregation of mini-batch gradients in one epoch (Haochen & Sra, 2019; Nguyen et al., 2020). A naive observation that backs up this point can be made by assuming $x_{t,1} = \cdots = x_{t,m}$: since $\cup_{i=1}^{m} B_{t,i} = [n]$, we would have $\frac{1}{m}\sum_{i=1}^{m} g_{t,i} = \nabla f(x_{t,1})$. Then the full gradient can be used to obtain convergence better than plain SGD. Random shuffling for AdaGrad. Unlike SGD, in adaptive methods it is hard to approximate the full gradient with the aggregation of mini-batch gradient updates in one epoch due to the presence of the second moments. As we will show in experiments, the simple shuffling variant that only changes the sampling method of mini-batches in AdaGrad does not lead to better convergence. The major difficulty hampering the analysis of this variant is that the second-moment matrix uses all the gradient information in history without distinction. Thus, to be able to leverage the benefit of the full gradient, we propose to study a slight modification of AdaGrad. Formally, we shuffle the set $[n]$ once at initialization and obtain the mini-batch gradients in a random shuffling manner. We update the parameters by the same rules of AdaGrad-type methods described above, where the second-moment matrix is taken to be:

$$V_{t,i} = \sum_{j=i+1}^{m} g_{t-1,j}^2 + \sum_{j=1}^{i} g_{t,j}^2. \quad \text{(AdaGrad-window)}$$

The difference between AdaGrad-window and AdaGrad is that the former only uses the latest $m$ mini-batch gradients instead of an epoch-wise average of all the mini-batch gradients in history. The step size is $\eta_t = \eta / \sqrt{t}$, where $\eta$ is a constant for both methods. The updates of AdaGrad-window are also very similar to the GGT method (Agarwal et al., 2019) without momentum. However, GGT uses the full matrix inversion, whereas our analysis applies to both full and diagonal versions. 3 MAIN RESULTS. We will show that AdaGrad-window has a convergence rate of $\tilde{O}(T^{-1/2})$ for non-convex problems under some mild assumptions. This is a significant improvement compared with the previous best convergence results of adaptive gradient methods and random shuffling SGD, which are of order $O(T^{-1/4})$ and $\tilde{O}(T^{-1/3})$, respectively. The key towards our convergence rate improvement is twofold: the epoch-wise analysis of random shuffling enables us to leverage the benefit of full gradients; the adaptive learning rates and the strong growth condition endow a better improvement of the objective value in consecutive epochs. In order to achieve this better convergence, we first state the assumptions and important concepts used in the proof. Apart from the general assumptions (A1) and (A2) used in previous analyses (Fang et al., 2018; Zaheer et al., 2018; Ward et al.
, 2018), we pose another assumption, described below in (A3), to characterize the consistency between individual gradients and the full gradient. Assumptions. We assume the following for AdaGrad-window: (A1) The objective function is lower bounded and component-wise $L$-smooth, i.e., $\exists f^* \in \mathbb{R}$ s.t. $f(x) \ge f^* > -\infty$, $\forall x$, and $\|\nabla f_i(x) - \nabla f_i(y)\| \le L \|x - y\|$, $\forall x, y, i$. (A2) The mini-batch gradients in the algorithm are uniformly upper bounded, i.e., $\exists G \in \mathbb{R}$ s.t. $\|g_{t,i}\| \le G$, $\forall t, i$. (A3) The objective function satisfies the strong growth condition with constant $r^2$, i.e., $\forall x$, $\frac{1}{n}\sum_{i=1}^{n} \|\nabla f_i(x)\|^2 \le r^2 \|\nabla f(x)\|^2$. The strong growth condition assumption essentially enforces the norms of individual gradients to be at the same scale as the norm of the full gradient. This condition was originally used to derive a faster convergence for SGD in the context of convex finite-sum minimization (Schmidt & Roux, 2013). It was further explored to analyze SGD-type methods after its close relationship with interpolation was shown (Ma et al., 2018; Vaswani et al., 2019; Gower et al., 2019). Building upon these previous analyses, we will give a more in-depth discussion of this assumption in Section 6. Under these assumptions, the following theorems show our convergence results for the full and diagonal versions of AdaGrad-window. Theorem 1 (The convergence rate of full AdaGrad-window). For any $T > 4$, set $\eta = m^{-5/4}$, and denote $C_1 = m^{5/4}\sqrt{11/3 + 8}\; r^2 \big(f(x_{1,1}) - f^* + G\big)/\sqrt{2}$ and $C_2 = 5 m^{5/4}\sqrt{11/3 + 8}\; r^2 L/\sqrt{2}$ as constants independent of $T$. We have:

$$\min_{1 \le t \le T} \|\nabla f(x_{t,1})\| \le \frac{1}{\sqrt{T}}\,(C_1 + C_2 \ln T). \quad (2)$$

Theorem 2 (The convergence rate of diagonal AdaGrad-window). For any $T > 4$, set $\eta = m^{-5/4}$, and denote $C'_1 = m^{5/4}\sqrt{11/3 + 8}\; r^2 \big(f(x_{1,1}) - f^* + G\sqrt{d}\big)/\sqrt{2}$ and $C'_2 = 5 m^{5/4}\sqrt{11/3 + 8}\; r^2 d^{3/2} L/\sqrt{2}$ as constants independent of $T$. We have:

$$\min_{1 \le t \le T} \|\nabla f(x_{t,1})\| \le \frac{1}{\sqrt{T}}\,(C'_1 + C'_2 \ln T). \quad (3)$$

The interpretation of these two theorems is that we are able to find an approximate first-order stationary point such that $\|\nabla f(x)\| = \tilde{O}(T^{-1/2})$ within $T$ epochs using both versions. We notice that the convergence rate of AdaGrad-window matches that of GD when $m = 1$, which indicates that our results are relatively tight with respect to $T$. The complete proof is included in the appendix. We will give the intuition and key lemmas explaining how to utilize random shuffling and second moments to obtain these results in the next section. In addition, we also prove a result for another variant of AdaGrad, namely AdaGrad-truncation, with the second-moment matrix defined as $V_{1,i} = m \cdot I$ and $V_{t,i} = m \|\sum_{j=1}^{m} g_{t-1,j}\|^2 \cdot I$ when $t > 1$. This second-moment matrix is very similar to the norm version of AdaGrad (Ward et al., 2018), whereas we use the aggregation of mini-batch gradients in the previous epoch as the coefficient. AdaGrad-truncation is beneficial since the formulation leads to a fast and simple implementation without needing to distinguish the full and diagonal versions. Due to space limitations, we list the result below and defer the discussion to the appendix. Theorem 3 (The convergence rate of AdaGrad-truncation). For any $T > 4$, set $\eta = \sqrt{3}/(10 m^{1/2} L r) \cdot \sqrt{f(x_{1,1}) - f^* + G^2}/\sqrt{L + 2}$, and denote $C = 80 m (L r + 2) \sqrt{f(x_{1,1}) - f^* + G^2}$ as a constant independent of $T$. We have:

$$\min_{1 \le t \le T} \|\nabla f(x_{t,1})\| \le \frac{C}{\sqrt{T}}. \quad (4)$$
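As a concrete picture of AdaGrad-window, the sketch below keeps only the latest m squared mini-batch gradients in the second moment, with the step size eta_t = eta / sqrt(t); the gradient oracle `grad_fn` and the batching via a single initial shuffle are assumptions for illustration, not the authors' implementation.

```python
# Illustrative AdaGrad-window with random shuffling: V_{t,i} sums the latest
# m squared mini-batch gradients. grad_fn is an assumed gradient oracle.
import numpy as np

def adagrad_window(x, grad_fn, n, m, epochs, eta, eps=1e-12):
    perm = np.random.permutation(n)      # shuffle once at initialization
    batches = np.array_split(perm, m)    # m mini-batches per epoch
    window = []                          # sliding window of g^2, length <= m
    for t in range(1, epochs + 1):
        eta_t = eta / np.sqrt(t)
        for batch in batches:
            g = grad_fn(x, batch)
            window.append(g ** 2)
            if len(window) > m:          # drop gradients older than m steps
                window.pop(0)
            v = np.sum(window, axis=0)   # diagonal of V_{t,i} (AdaGrad-window)
            x = x - eta_t * g / (np.sqrt(v) + eps)
    return x
```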
This paper shows that adaptive learning rates are beneficial for finding critical points of finite-sum optimization problems. In particular, with appropriate learning rates, a variant of AdaGrad can find an epsilon-critical point in \tilde O(1/\epsilon^2) iterations. This improves upon previous results of either O(1/\epsilon^3) or O(1/\epsilon^4) in various situations. The key new assumption is a “consistency condition” that bounds how big individual example gradients can be when the overall gradient is small.
Batch Reinforcement Learning Through Continuation Method
1 INTRODUCTION. While RL is fundamentally an online learning paradigm, many practical applications of RL algorithms, e.g., recommender systems [5, 7] or autonomous driving [36], fall under the batch RL setup. Under this setting, the agent is asked to learn its policy from a fixed set of interactions collected by a different (and possibly unknown) policy, commonly referred to as the behavior policy, without the flexibility to gather new interactions. Realizing that the interactive nature of online RL has been hindering its wider adoption, researchers strive to bring these techniques offline [24, 11, 20, 23, 31, 12, 21, 2, 32, 8]. We focus on policy optimization under the batch RL setup. As pointed out in [3, 26], even with access to the exact gradient, the loss surface of the objective function maximizing the expected return is difficult to optimize, leading to slow convergence. Chen et al. [8] show that the objective function of expected return exhibits sub-optimal plateaus and exponentially many local optima in the worst case. The batch setup makes learning even harder, as it adds large variance to the gradient estimate, especially when the learned policy differs from the behavior policy used to generate the fixed trajectories. Recent works propose to constrain the size of the policy update [27, 28] or the distance between the learned policy and the behavior policy [14, 21]. The strength of that constraint is a critical hyperparameter that can be hard to tune [28], as a loose constraint does not alleviate the distribution shift while a strict one results in conservative updates. Here we propose to address these challenges using continuation methods [35, 6, 17]. Continuation methods attempt to solve the global optimization problem by progressively solving a sequence of new objectives that can be optimized more efficiently and then tracing the solutions back to the original one. We change the objective function of policy optimization by including an additional term penalizing the KL divergence between the parameterized policy $\pi_\theta$ and the behavior policy. We then gradually decrease the weight of that penalty, eventually converging to optimizing the expected return. With this additional constraint, we benefit from more accurate policy evaluation in the early stage of training, as the target policy is constrained to be close to the behavior policy. As training continues, we relax the constraint and allow for more aggressive improvement over the behavior policy as long as the policy evaluation is still stable and relatively reliable, i.e., with a small enough variance. By doing so, the proposed method exhaustively exploits the information in the collected trajectories while avoiding the overestimation of state-action pairs that lack support. The contributions of this paper are as follows: (1) We propose a soft policy iteration approach to batch RL through the continuation method. (2) We theoretically verify that in the tabular setting with exact gradients, maximizing the KL-regularized expected return leads to faster convergence than optimizing the expected return alone. Also, our method converges to the globally optimal policy if there are sufficient data samples for accurate value estimation. (3) We demonstrate the effectiveness of our method in reducing errors in value estimation using visualization. (4) We empirically verify the advantages of our method over existing batch RL methods on various complex tasks. 2 RELATED WORK.
Batch Reinforcement Learning. Off-policy reinforcement learning has been extensively studied [11, 20, 30, 23, 31], with many works [12, 21, 2] focusing on variants of Q-learning. Fujimoto et al. [12] and Kumar et al. [21] investigated the extrapolation error in batch RL resulting from the mismatch of state-action visitation distributions between the fixed dataset and the current policy, and proposed to address it by constraining the action distribution of the current policy from deviating much from the training dataset distribution. Recent works [29, 33] studied policy iteration under batch RL. The Q function is estimated in the policy evaluation step without special treatment, while the policy updates are regularized to remain close to the prior policy with a fixed constraint. To further reduce uncertainty in Q-learning, an ensemble of Q networks [21, 29] and distributional Q-functions [2, 33] are introduced for value estimation. [34, 18] use the KL divergence between the target policy and the behavior policy as a regularization term in the policy update and/or value estimation. The constraint is controlled by a fixed weight on the KL regularization or a fixed threshold for the KL divergence. While all of these works apply a fixed constraint determined by a sensitive hyperparameter to control the distance between the behavior/prior policy and the target policy, we focus on gradually relaxed constraints. Constrained Policy Updates. Several works [27, 1, 15] studied constrained policy updates in online settings. Kakade & Langford [19] show that large policy updates can be destructive, and propose a conservative policy iteration algorithm to find an approximately optimal policy. Schulman et al. [27] constrain the KL divergence between the old policy and the new policy to guarantee policy improvement in each update. Grau-Moya et al. [15] force the policy to stay close to a learned prior distribution over actions, deriving a mutual-information regularization between state and action. Cheng et al. [9] propose to regularize in the function space. Again, these methods focus on a fixed constraint, while we are interested in continually relaxing the constraint to eventually maximize the expected return. Also, none of these methods have been extensively tested for batch RL with fixed training data. Continuation Method. The continuation method [35] is a global optimization technique. The main idea is to transform a nonlinear and highly non-convex objective function into a series of smoother objective functions that are easier to optimize. The optimization procedure is successively applied to new functions that are progressively more complex and closer to the original non-convex problem, to trace their solutions back to the original objective function. Chapelle et al. [6] use the continuation method to optimize the objective function of semi-supervised SVMs and reach lower test error compared with algorithms directly minimizing the original objective. Hale et al. [17] apply the continuation method to l1-regularized problems and demonstrate better performance for compressed sensing problems. Inspired by prior works, we employ the continuation method to transform the objective of batch RL problems by adding regularization. We gradually decrease the regularization weight to trace the solution back to the original problem. 3 METHOD. In classical RL, an agent interacts with the environment while updating its policy.
At each step $t$, the agent observes a state $s_t \in S$, selects an action $a_t \in A$ according to its policy, receives a reward $r_t = r(s_t, a_t)$, where $r: S \times A \to \mathbb{R}$, and transitions to the next state $s_{t+1} \sim P(\cdot|s_t, a_t)$. The state value of a policy $\pi$ at a state $s$ is $V^\pi(s) = \mathbb{E}_{s_0 = s,\, a_t \sim \pi(\cdot|s_t),\, s_{t+1} \sim P(\cdot|s_t, a_t)}\big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\big]$, where $\gamma \in [0, 1)$ is the discounting factor. At each step, the agent updates the policy $\pi$ so that the expected return $V^\pi(\rho) = \mathbb{E}_{s \sim \rho}[V^\pi(s)]$ (where $\rho$ is the initial state distribution) is maximized. In batch RL, the agent is not allowed to interact with the environment during policy learning. Instead, it has access to a fixed set of trajectories sampled from the environment according to a behavior policy $\beta$ (if the behavior policy is not known in advance, it can be fitted from the data [30, 7]). A trajectory $\{(s_0, a_0, r_0), (s_1, a_1, r_1), \ldots, (s_T, a_T, r_T)\}$ is generated by sampling $s_0$ from the initial state distribution $\rho$, sampling the action $a_t \sim \beta(\cdot|s_t)$ at the state $s_t$, and moving to $s_{t+1} \sim P(\cdot|s_t, a_t)$ for each step $t \in [0, 1, \ldots, T]$. The length $T$ can vary among trajectories. We then convert the generated trajectories into a dataset $D = \{(s_i, a_i, r_i, s'_i)\}_{i=1}^{N}$, where $s'_i$ is the next state after $s_i$ in a trajectory. The goal of batch RL is to learn a parameterized policy $\pi_\theta$ with the provided dataset to maximize the expected return $V^\pi(\rho)$. In Section 3.1, we first introduce a new objective function $\tilde{V}^{\pi,\tau}(\rho)$, i.e., the expected return of policy $\pi$ with a KL regularization term and regularization weight $\tau$. With exact gradients, $\tilde{V}^{\pi,\tau}(\rho)$ can be optimized more efficiently than the original objective $V^\pi(\rho)$. With the continuation method, solving a sequence of optimization problems for $\tilde{V}^{\pi,\tau}(\rho)$ with a decaying value of $\tau$ converges toward optimizing $V^\pi(\rho)$ and makes the optimization easier. In Section 3.2, we derive soft policy iteration with KL regularization to optimize $\tilde{V}^{\pi,\tau}(\rho)$, without the assumption of exact gradients. Finally, in Section 3.3, we propose a practical batch RL algorithm with value estimation for the target policy based on this theory. 3.1 OPTIMIZING EXPECTED RETURN WITH KL REGULARIZATION. In batch RL, the distribution of the trajectories generated by the behavior policy can be very different from that of the learned policy. We thus restrict the learned policy to stay close to the behavior policy via the regularization of KL divergence. Define the soft state value of a policy $\pi$ at a state $s$ as

$$\tilde{V}^{\pi,\tau}(s) = \mathbb{E}_{s_0 = s,\, a_t \sim \pi(\cdot|s_t),\, s_{t+1} \sim P(\cdot|s_t, a_t)}\left[\sum_{t=0}^{\infty} \gamma^t \left( r(s_t, a_t) - \tau \log \frac{\pi(a_t|s_t)}{\beta(a_t|s_t)} \right)\right], \quad (1)$$

where the temperature parameter $\tau$ controls the deviation from $\beta$. The new objective function becomes $\tilde{V}^{\pi,\tau}(\rho) = \mathbb{E}_{s \sim \rho}[\tilde{V}^{\pi,\tau}(s)]$. This KL-regularized objective differs from the original objective $V^\pi(\rho)$, which however can be recovered as $\tau \to 0$. As pointed out in [3], even with exact gradients, the objective function $V^\pi(\rho)$ is still difficult to optimize due to its highly non-smooth landscape. Mei et al. [26] further prove that, in a tabular setting with a softmax parameterized policy and exact gradients, the vanilla policy gradient method (i.e.
directly updating the parameters of policy $\pi$ to maximize $V^\pi(\rho)$ with gradient descent) converges to the globally optimal policy at a convergence rate of $O(1/t)$, while the entropy-regularized policy gradient enjoys a significantly faster linear convergence rate of $O(e^{-t})$. Motivated by this line of work, we investigate the convergence rate of optimizing $\tilde{V}^{\pi,\tau}(\rho)$ with exact gradient descent and compare it with the vanilla policy gradient method. We study the smoothness and Łojasiewicz inequality for the function $\tilde{V}^{\pi,\tau}(\rho)$ to prove the convergence rate, similar to [26]. The detailed proofs of all following theorems are provided in the appendix. Theorem 1. In the tabular setting with softmax parameterized policy $\pi_\theta$, maximizing $\tilde{V}^{\pi,\tau}(\rho)$ using policy gradient with the learning rate $\eta = \frac{(1-\gamma)^3}{8M + \tau(4 + 8\log A)}$, for all $t > 1$, we have

$$\tilde{V}^{\pi^*_\tau,\tau}(\rho) - \tilde{V}^{\pi_{\theta_t},\tau}(\rho) \le C \cdot e^{-C_\tau (t-1)} \cdot \frac{M + \tau \log A}{(1-\gamma)^2},$$

where $\pi^*_\tau$ is the optimal policy maximizing $\tilde{V}^{\pi,\tau}(\rho)$, $M$ is the bound on the absolute value of $r(s,a) + \tau \log \beta(a|s)$, $A$ is the size of the action space, $S$ is the size of the state space, $C_\tau \propto \frac{(1-\gamma)^4}{(8M/\tau + 4 + 8\log A) \cdot S}$, and $C$ is a constant independent of $t$ and $\tau$. Theorem 1 states that the KL-regularized expected return $\tilde{V}^{\pi,\tau}(\rho)$ can be optimized with a convergence rate of $O(e^{-t})$ rather than the $O(1/t)$ convergence rate of vanilla policy gradient for the expected return alone. The faster convergence inspires us to optimize $\tilde{V}^{\pi,\tau}(\rho)$ to reach the policy $\pi^*_\tau$, then use $\pi^*_\tau$ as initialization, gradually decrease the temperature $\tau$ towards 0, and eventually move from $\pi^*_\tau$ to $\pi^* = \arg\max_\pi V^\pi(\rho)$. With a reasonable value of $\tau$, we enjoy a linear convergence rate toward $\pi^*_\tau$ from the randomly initialized policy $\pi_\theta$. As $\tau$ decreases, $\pi^*_\tau$ gets closer to $\pi^*$. The final optimization of $V^{\pi_\theta}(\rho)$ starting from $\pi^*_\tau$ can be much faster than starting from a randomly initialized $\pi_\theta$. Figure 1: (a) A grid world with sparse rewards. (b) Learning curves of the value $V^{\pi_{\theta_i}}(\rho)$ of the learned policy over iterations, for the vanilla and continuation methods. We conduct a hyper-parameter search over the learning rates {5, 1, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001} and report the best performance for each method. We construct a toy example to illustrate this motivation. In the grid world (Fig. 1a), the start state, annotated with 'S', is in the center and the terminal states are marked in yellow. There are only two states with positive rewards (0.9 and 1). There are four actions {up, down, left, right}. A badly initialized policy $\pi_{\theta_0}$ is shown as arrows in Fig. 1a. The initialization results in a poor policy, having a high tendency to go right toward a terminal state with zero reward. The vanilla policy gradient method (i.e., maximizing $V^\pi(\rho)$ with the true gradient) starting from this initial point takes more than 7000 iterations to escape a sub-optimal solution (Fig. 1b).
In contrast, we escape the sub-optimal solution much faster when applying the continuation method to update the policy with the gradients of $\tilde{V}^{\pi,\tau}(\rho)$, where the behavior policy $\beta(\cdot|s) = [u_1, u_2, u_3, u_4]$, with $u_i$, $i = 1, 2, 3, 4$, randomly sampled from $U[0,1]$ and normalized for each state $s$. In Fig. 1b, as we decrease $\tau$, the value of the learned policy $\pi_{\theta_i}$ at each iteration $i$ quickly converges to the optimal value. In other words, optimizing a sequence of objective functions $\tilde{V}^{\pi,\tau}(\rho)$ can reach the optimal solution for $V^\pi(\rho)$ significantly faster.

Algorithm 1: Soft Policy Iteration through Continuation Method
1: Initialize: actor network $\pi_\theta$, ensemble critic networks $\{Q^{(1)}, Q^{(2)}, \ldots, Q^{(K)}\}$, behavior policy network $\beta$, penalty coefficient $\tau$, decay rate $\lambda$, number of iterations $I$ for each $\tau$
2: Input: training dataset $D = \{(s_i, a_i, r_i, s'_i)\}_{i=0}^{N}$
3: for update $j = 0, 1, \ldots$ do
4:   Sample a batch of data $\{(s_i, a_i, r_i)\}_{i=1}^{B}$ from $D$
5:   # Learn the behavior policy with the behavior cloning objective
6:   Update $\beta$ to maximize $\frac{1}{B}\sum_{i=1}^{B} \log \beta(a_i|s_i)$
7:   # Train the critic network
8:   Update $Q^{(k)}$ to minimize the temporal difference $\frac{1}{B}\sum_{i=1}^{B} \big(r_i + \gamma V(s'_i) - Q^{(k)}(s_i, a_i)\big)^2$,
9:     where $V(s) = \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}_{a \sim \pi_\theta(\cdot|s)}\big[Q^{(k)}(s, a)\big] - \tau\, \mathrm{KL}\big(\pi_\theta(\cdot|s)\,\|\,\beta(\cdot|s)\big)$
10:  # Train the actor network
11:  Update $\theta$ to maximize $\frac{1}{B}\sum_{i=1}^{B} \Big[\frac{1}{K}\sum_{k=1}^{K} \mathbb{E}_{a \sim \pi_\theta(\cdot|s_i)}\big[Q^{(k)}(s_i, a)\big] - \tau\, \mathrm{KL}\big(\pi_\theta(\cdot|s_i)\,\|\,\beta(\cdot|s_i)\big)\Big]$
12:  # Decay the weight of KL regularization $\tau$ every $I$ updates
13:  if $j \bmod I = 0$ then
14:    $\tau \leftarrow \tau \cdot \lambda$
15:  end if
16: end for
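The following PyTorch-style sketch mirrors one update of Algorithm 1 under stated assumptions: the `actor`, `critics`, and `behavior` modules and their interfaces (`dist`, `sample`, `log_prob`) are hypothetical stand-ins, not the authors' code, and the analytic KL is computed via the policies' distribution objects. Outside this step, tau would be multiplied by the decay rate every I updates (lines 12-15).

```python
# Hedged sketch of one Algorithm 1 update (soft policy iteration with a
# KL penalty). All module interfaces are illustrative assumptions.
import torch

def soft_policy_iteration_step(batch, actor, critics, behavior,
                               opt_actor, opt_critic, opt_bc, tau, gamma=0.99):
    s, a, r, s_next = batch

    # Behavior cloning: fit beta to the logged actions (line 6).
    bc_loss = -behavior.dist(s).log_prob(a).mean()
    opt_bc.zero_grad(); bc_loss.backward(); opt_bc.step()

    def soft_value(states):
        a_pi = actor.sample(states)
        q = torch.stack([Q(states, a_pi) for Q in critics]).mean(0)
        kl = torch.distributions.kl_divergence(actor.dist(states),
                                               behavior.dist(states))
        return q - tau * kl                   # KL-regularized value (line 9)

    # Critic update: TD error against the soft value target (line 8).
    with torch.no_grad():
        target = r + gamma * soft_value(s_next)
    critic_loss = sum(((Q(s, a) - target) ** 2).mean() for Q in critics)
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    # Actor update: maximize the KL-regularized value (line 11).
    actor_loss = -soft_value(s).mean()
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
```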
The paper extends soft actor-critic (SAC) to the batch RL setting, replacing the policy entropy in the objective function with the KL divergence from the behavioral policy. The temperature parameter tau, weighting the reward against the KL term, is annealed towards zero during the optimization process, which corresponds to starting with behavioral cloning for high values of tau and ending up with the standard reward-maximization RL objective for tau=0. Theoretical analysis and experiments confirm the advantages of the proposed method.
Shape-Texture Debiased Neural Network Training
1 INTRODUCTION. It is known that both shape and texture serve as essential cues for object recognition. A decade ago, computer vision researchers had explicitly designed a variety of hand-crafted features, either based on shape (e.g., shape context (Belongie et al., 2002) and inner distance shape context (Ling & Jacobs, 2007)) or texture (e.g., textons (Malik et al., 2001)), for object recognition. Moreover, researchers found that properly combining shape and texture can further improve recognition performance (Shotton et al., 2009; Zheng et al., 2007), demonstrating the superiority of possessing both features. Nowadays, as popularized by Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012), the features used for object recognition are automatically learned, rather than manually designed. This change not only eases human efforts on feature engineering, but also yields much better performance on a wide range of visual benchmarks (Simonyan & Zisserman, 2015; He et al., 2016; Girshick et al., 2014; Girshick, 2015; Ren et al., 2015; Long et al., 2015; Chen et al., 2015). But interestingly, as pointed out by Geirhos et al. (2019), the features learned by CNNs tend to bias toward either shape or texture, depending on the training dataset. We verify that such biased representation learning (towards either shape or texture) weakens CNNs' performance (biased models are acquired similarly to Geirhos et al. (2019); see Section 2 for details). Nonetheless, surprisingly, we also find (1) the model with shape-biased representations and the model with texture-biased representations are highly complementary to each other, e.g., they focus on completely different cues for predictions (an example is provided in Figure 1); and (2) being biased towards either cue may inevitably limit model performance, e.g., models may not be able to tell the difference between a lemon and an orange without texture information. These observations altogether deliver a promising message—biased models (e.g., ImageNet-trained (texture-biased) CNNs (Geirhos et al., 2019) or (shape-biased) CNNs (Shi et al., 2020)) are improvable. To this end, we hereby develop a shape-texture debiased neural network training framework to guide CNNs toward learning better representations. Our method is a data-driven approach, which lets CNNs automatically figure out how to avoid being biased towards either shape or texture from their training samples. Specifically, we apply style transfer to generate cue conflict images, which break the correlation between shape and texture, for augmenting the original training data. The most important recipe for training a successful shape-texture debiased model is that we need to provide supervision from both shape and texture on these generated cue conflict images; otherwise, models will remain biased. Experiments show that our proposed shape-texture debiased neural network training significantly improves recognition models. For example, on the challenging ImageNet dataset (Russakovsky et al., 2015), our method helps ResNet-152 gain an absolute improvement of 1.2%, achieving 79.8% top-1 accuracy. Additionally, compared to its vanilla counterpart, this debiased ResNet-152 shows better generalization on ImageNet-A (Hendrycks et al., 2019) (+5.2%), ImageNet-C (Hendrycks & Dietterich, 2019) (+8.3%) and Stylized ImageNet (Geirhos et al.
, 2019) (+11.1%), and stronger robustness in defending against the FGSM adversarial attacker on ImageNet (+14.4%). Our shape-texture debiased neural network training is orthogonal to other advanced data augmentation strategies, e.g., it further boosts CutMix-ResNeXt-101 (Yun et al., 2019) by 0.7% on ImageNet, achieving 81.2% top-1 accuracy. 2 SHAPE/TEXTURE BIASED NEURAL NETWORKS. The biased feature representation of CNNs mainly stems from the training dataset, e.g., Geirhos et al. (2019) point out that models will be biased towards shape if trained on the Stylized-ImageNet dataset. Following Geirhos et al. (2019), we hereby present a similar training pipeline to acquire shape-biased models or texture-biased models. By evaluating these two kinds of models, we observe the necessity of possessing both shape and texture representations for CNNs to better recognize objects. 2.1 MODEL ACQUISITION. Data generation. Similar to Geirhos et al. (2019), we apply images with conflicting shape and texture information as training samples to obtain shape-biased or texture-biased models. But different from Geirhos et al. (2019), an important change in our cue conflict image generation procedure is that we override the original texture information with the informative texture patterns from another randomly selected image, rather than with the uninformative style of randomly selected artistic paintings. That being said, to create a new training sample, we need to first select a pair of images from the training set uniformly at random, and then apply style transfer to blend their shape and texture information. Such a generated example is shown in Figure 2, i.e., the image of chimpanzee shape but with lemon texture. Figure 2: Illustration of our training pipeline for acquiring (a) a shape-biased model, (b) a texture-biased model, and (c) a shape-texture debiased model. Specifically, these models share the same training samples, i.e., images with conflicting texture and shape information, generated by style transfer between two randomly selected images; but they apply distinct labelling strategies: in (a) & (b), labels are determined by the images that provide shape (or texture) information in style transfer, for guiding models to learn more shape (or texture) representations; in (c), labels are jointly determined by the pair of images in style transfer, for avoiding bias in representation learning. Label assignment. The way of assigning labels to cue conflict images controls the bias of learned models. Without loss of generality, we show the case of learning a texture-biased model. To guide the model to attend more to texture, the labels assigned to the cue conflict images will be based exclusively on the texture information, e.g., the image of chimpanzee shape but with lemon texture will be labelled as lemon, as shown in Figure 2(b). In this way, the texture information is highly related to the “ground-truth” while the shape information only serves as a nuisance factor during learning.
Similarly, to learn a shape-biased model, the label assignment of cue conflict images will be based on shape only, e.g., the image of chimpanzee shape but with lemon texture now will be labelled as chimpanzee, as shown in Figure 2(a).

2.2 EVALUATION AND OBSERVATION. To reduce the computational overhead in this ablation, all models are trained and evaluated on ImageNet-200, which is a 200-class subset of the original ImageNet, including 100,000 images (500 images per class) for training and 10,000 images (50 images per class) for validation. Akin to Geirhos et al. (2019), we observe that models with biased feature representations tend to have inferior accuracy compared to their vanilla counterparts. For example, our shape-biased ResNet-18 only achieves 73.9% top-5 ImageNet-200 accuracy, which is much lower than the vanilla ResNet-18 with 88.2% top-5 ImageNet-200 accuracy. Though biased representations weaken the overall classification accuracy, surprisingly, we find they are highly complementary to each other. We first visualize the attended image regions of biased models, via Class Activation Mapping (Zhou et al., 2016), in Figure 3. As we can see here, the shape-biased model and the texture-biased model concentrate on different cues for predictions. For instance, on the leftmost tabby cat image, the shape-biased model mainly focuses on the cat head, while the texture-biased model mainly focuses on the lower body and the front legs of the cat. Such attention mechanisms are correlated to their learned representations—the shape-biased model extracts the shape of the cat head as an important signal for predictions, while the texture-biased model relies on the texture information of cat fur for predictions. As distinct cues are picked by shape-biased/texture-biased models, a more concrete observation is that they are good/bad at classifying quite different object categories. As shown in Figure 4, the shape-biased model is good at recognizing objects with representative shape structure like obelisk, but is bad at recognizing objects whose shape is uninformative or almost indistinguishable from others like fur coat. Similarly, the texture-biased model can effectively recognize objects with unique texture patterns like brain coral but may fail to recognize objects with unpredictable texture like trolleybus (as its side body can be painted with different advertisements). Besides, biased models may inevitably perform poorly on certain categories as insufficient cues are used. For example, it is challenging to distinguish between a lemon and an orange if texture information cannot be utilized, or to distinguish between a lion and a tabby cat without shape information. Given the analysis above, we can conclude that biased representations limit models' recognition ability. But meanwhile, our ablation delivers a promising message—the features learned by biased models are highly complementary to each other. This observation indicates the current training framework is improvable (as the resulting models are biased towards texture (Geirhos et al., 2019) or shape (Shi et al.
, 2020)), and offers a potential direction for building a stronger one—we should train models to properly acquire both shape and texture feature representations. We will introduce a simple method for doing so next.

3 SHAPE-TEXTURE DEBIASED NEURAL NETWORK TRAINING. Recall that when obtaining a biased model, the strategy of label assignment is pivotal—when the labels are exclusively determined by the images that provide shape (or texture) information in style transfer, we will obtain a shape-biased (or texture-biased) model. Therefore, to guide models to leverage both shape and texture for predictions, we hereby propose a simple way, inspired by Mixup (Zhang et al., 2018), to softly construct labels during training. In other words, given the one-hot label of the shape-source image y_s and the one-hot label of the texture-source image y_t, the new label that we assign to the cue conflict image is

ỹ = γ · y_s + (1 − γ) · y_t,   (1)

where γ ∈ [0, 1] is a manually selected hyperparameter to control the relative importance between shape and texture. By ranging the shape-texture coefficient γ from 0 to 1, we obtain a path to evolve the model from being a texture-biased one (i.e., γ = 0) to being a shape-biased one (i.e., γ = 1). Although the two extreme ends lead to biased models with inferior performance, we empirically show that there exists a sweet spot along this interpolation path, i.e., the learned models can properly acquire both shape and texture feature representations and achieve superior performance on a wide range of image recognition benchmarks. We name this simple method shape-texture debiased neural network training, and illustrate the training pipeline in Figure 2(c). It is worth mentioning that, although Figure 2 only shows the procedure of applying our method to the image classification task, this training framework is general and has the potential to be extended to other computer vision tasks, e.g., a simple showcase on semantic segmentation is presented in Section 4.4.
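To make the label construction concrete, here is a minimal sketch of the debiased augmentation in PyTorch. The `style_transfer` function is a hypothetical stand-in (any model that renders the content image with the texture of the style image, e.g., an AdaIN-style network, would do); it is not specified in this excerpt.

```python
import torch
import torch.nn.functional as F

def debiased_batch(images, labels, style_transfer, num_classes, gamma=0.5):
    # Pair each image with another training image drawn uniformly at random.
    perm = torch.randperm(images.size(0))
    # Cue conflict images: shape from `images`, texture from `images[perm]`.
    # `style_transfer` is an assumed interface, not the paper's exact model.
    mixed = style_transfer(content=images, style=images[perm])
    # Soft labels per Eq. (1): y~ = gamma * y_shape + (1 - gamma) * y_texture.
    y_shape = F.one_hot(labels, num_classes).float()
    y_texture = F.one_hot(labels[perm], num_classes).float()
    soft_labels = gamma * y_shape + (1.0 - gamma) * y_texture
    return mixed, soft_labels

def soft_cross_entropy(logits, soft_labels):
    # Cross-entropy against soft targets, as in Mixup-style training.
    return -(soft_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

Setting gamma = 0 or gamma = 1 in this sketch recovers the texture-biased and shape-biased pipelines of Figure 2(a)-(b), so the same code can reproduce the biased baselines of Section 2.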
The authors propose a method to mitigate the bias towards either texture or shape in convolutional network training. The method follows the idea from Geirhos et al. (2019), but uses images randomly sampled from the same dataset as style sources, instead of style transfer from paintings. Then, depending on a manually selected hyperparameter, the conflicting labels are blended by a weighted average of their one-hot encodings.
SP:d5a996a81845a53ae405b4aac0a9f5342129d43c
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search
1 INTRODUCTION. The architecture of deep neural networks (NNs) is critical to their performance. This fact motivates neural architecture search (NAS), wherein the choice of architecture is often framed as an automated search for effective motifs, i.e., the design of a repeating recurrent cell or activation function that is repeated often in a larger NN blueprint. However, evaluating a candidate architecture's ground-truth performance in a task of interest depends upon training the architecture to convergence. Complicating efficient search, the performance of an architectural motif nearly always benefits from increased computation (i.e., larger NNs trained with more data). The implication is that the best architectures often require training near the bounds of what computational resources are available, rendering naive NAS (i.e., where each candidate architecture is trained to convergence) exorbitantly expensive. To reduce the cost of NAS, methods often exploit heuristic surrogates of true performance. For example, motif performance can be evaluated after a few epochs of training or with scaled-down architectural blueprints, which is often still expensive (because maintaining reasonable fidelity between ground-truth and surrogate performance precludes aggressive scaling-down of training). Another approach learns models of the search space (e.g., Gaussian process models used within Bayesian optimization), which improve as more ground-truth models are trained, but cannot generalize well beyond the examples seen. This paper explores whether the computational efficiency of NAS can be improved by creating a new kind of surrogate, one that can benefit from miniaturized training and still generalize beyond the observed distribution of ground-truth evaluations. To do so, we take inspiration from an idea in biology, bringing to machine learning the application of a Synthetic Petri Dish microcosm that aims to identify high-performing architectural motifs. The overall motivation behind "in vitro" (test-tube) experiments in biology is to investigate in a simpler and controlled environment the key factors that explain a phenomenon of interest in a messier and more complex system. For example, to understand causes of atypical mental development, scientists extract individual neuronal cells taken from brains of those demonstrating typical and atypical behavior and study them in a Petri dish (Adhya et al., 2018). The approach proposed in this paper attempts to algorithmically recreate this kind of scientific process for the purpose of finding better neural network motifs. The main insight is that biological Petri dish experiments often leverage both (1) key aspects of a system's dynamics (e.g., the behavior of a single cell taken from a larger organism) and (2) a human-designed intervention (e.g., a measure of a test imposed on the test-tube). In an analogy to NAS, (1) the dynamics of learning through backpropagation are likely important to understanding the potential of a new architectural motif, and (2) compact synthetic datasets can illuminate an architecture's response to learning. That is, we can use machine learning to learn data such that training an architectural motif on the learned data results in performance indicative of the motif's ground-truth performance. In the proposed approach, motifs are extracted from their ground-truth evaluation setting (i.e.,
from large-scale NNs trained on the full dataset of the underlying domain of interest, e.g., MNIST), instantiated into very small networks (called motif-networks), and evaluated on learned synthetic data samples. These synthetic data samples are trained such that the performance ordering of motifs in this Petri dish setting (i.e., a miniaturized network trained on a few synthetic data samples) matches their ground-truth performance ordering. Because the relative performance of motifs is sufficient to distinguish good motifs from bad ones, the Petri dish evaluations of motifs can be a surrogate for ground-truth evaluations in NAS. Training the Synthetic Petri Dish is also computationally inexpensive, requiring only a few ground-truth evaluations, and once trained it enables extremely rapid evaluations of new motifs. A key motivating hypothesis is that because the Synthetic Petri Dish evaluates the motif by actually using it in a simple experiment (e.g., training it with SGD and then evaluating it), its predictions can generalize better than other neural network (NN) based models that predict motif performance based on only observing the motif's structure and resulting performance (Liu et al., 2018a; Luo et al., 2018). For example, consider the demonstration problem of predicting the ground-truth performance of a two-layer feedforward MNIST network with sigmoidal non-linearity. The blue points in Figure 1 show how the ground-truth performance of the MNIST network varies when the slope of its sigmoid activations (the term c in the sigmoid formula 1/(1 + e^{−cx})) is varied in the range 0.01–2.01. The MNIST network performance peaks near a slope value of 0.23. Similarly to the NN-based model previously developed in Liu et al. (2018a); Luo et al. (2018), one can try to train a neural network that predicts the performance of the corresponding MNIST network given the sigmoid slope value as input (Section 4.1 provides full details). When training points (tuples of sigmoid slope value and its corresponding MNIST network performance) are restricted to an area to the right of the peak (Figure 1, blue-shaded region), the NN-based prediction model (Figure 1, red diamonds) generalizes poorly to the test points on the left side of the peak (c < 0.23). However, unlike such a conventional prediction model, the prediction of the Synthetic Petri Dish generalizes to test points left of the peak (despite their behavior being drastically different than what would be expected solely based on the points in the blue-shaded region). That occurs because the Synthetic Petri Dish trains and evaluates the actual candidate motifs, rather than just making predictions about their performance based on data from past trials. Beyond this explanatory experiment, the promise of the Synthetic Petri Dish is further demonstrated on a challenging and compute-intensive language modelling task that serves as a popular NAS benchmark. The main result is that the Petri dish obtains highly competitive results even in a limited-compute setting. Interestingly, these results suggest that it is indeed possible to extract a motif from a larger setting and create a controlled setting (through learning synthetic data) where the instrumental factor in the performance of the motif can be isolated and tested quickly, just as scientists use Petri dishes to test specific hypotheses to isolate and understand causal factors in biological systems.

2 RELATED WORK.
NAS methods have discovered novel architectures that significantly outperform hand-designed solutions (Zoph and Le, 2017; Elsken et al., 2018; Real et al., 2017). These methods commonly explore the architecture search space with either evolutionary algorithms (Suganuma et al., 2017; Miikkulainen et al., 2018; Real et al., 2019; Elsken et al., 2019) or reinforcement learning (Baker et al., 2016; Zoph and Le, 2017). Because running NAS with full ground-truth evaluations can be extremely expensive (i.e., requiring many thousands of GPU hours), more efficient methods have been proposed. For example, instead of evaluating new architectures with full-scale training, heuristic evaluation can leverage training with reduced data (e.g., sub-sampled from the domain of interest) or for fewer epochs (Baker et al., 2017; Klein et al., 2017). More recent NAS methods such as DARTS (Liu et al., 2018b) and ENAS (Pham et al., 2018) exploit sharing weights across architectures during training to circumvent full ground-truth evaluations. However, a significant drawback of such weight-sharing approaches is that they constrain the architecture search space and therefore limit the discovery of novel architectures. Another approach to accelerate NAS is to train an NN-based performance prediction model that estimates architecture performance based on its structure (Liu et al., 2018a). Building on this idea, Neural Architecture Optimization (NAO) trains an LSTM model to simultaneously predict architecture performance as well as to learn an embedding of architectures. Search is then performed by taking gradient ascent steps in the embedding space to generate better architectures. NAO is used as a baseline for comparison in Experiment 4.2. Bayesian optimization (BO) based NAS methods have also shown promising results (Kandasamy et al., 2018; Cao et al., 2019). BO models the architecture space using a Gaussian process (GP), although its behavior is sensitive to the choice of a kernel function that models the similarity between any two architectures. Another recent NAS method presents a technique to progressively grow an architecture by adding skip connections, and is named similarly ("Petridish") to the method proposed here (Hu et al., 2019). However, unlike the Synthetic Petri Dish introduced here, which is a learned surrogate for NAS, Petridish (Hu et al., 2019) is instead an incremental growth method. Generative teaching networks (GTNs) also learn synthetic data to accelerate NAS (Such et al., 2020). However, learned data in GTNs helps to more quickly train full-scale networks to evaluate their potential on real validation data. In the Petri dish, synthetic training and validation instead enable a surrogate microcosm training environment for much smaller extracted motif-networks. Additionally, GTNs are not explicitly trained to differentiate between different networks (or network motifs). In contrast, the Synthetic Petri Dish is optimized to find synthetic input data on which the performance of various architectural motifs differs.

3 METHODS. Recall that the aim of the Synthetic Petri Dish is to create a microcosm training environment such that the performance of a small-scale motif trained within it well predicts the performance of the fully-expanded motif in the ground-truth evaluation. First, a few initial ground-truth evaluations of motifs are needed to create training data for the Petri dish.
In particular, consider N motifs for which ground-truth validation loss values (L^i_true, where i ∈ 1, 2, ..., N) have already been pre-computed by training each motif in the ground-truth setting. The next section details how these initial evaluations are leveraged to train the Synthetic Petri Dish.

3.1 TRAINING THE SYNTHETIC PETRI DISH. To train the Synthetic Petri Dish first requires extracting the N motifs from their ground-truth setting and instantiating each of them in miniature as separate motif-networks. For the experiments performed in this paper, the ground-truth network and the motif-network have the same overall blueprint and differ only in the width of their layers. For example, Figure 2a shows a ground-truth network's size reduced from a 2-layer, 100-neuron wide MLP to a motif-network that is a 2-layer MLP with a single neuron per layer. Given such a collection of extracted motif-networks, a small number of synthetic training and validation data samples are then learned that can respectively be used to train and evaluate the motif-networks. The learning objective is that the validation loss of motifs trained in the Petri dish resembles the validation loss of the motif's ground-truth evaluation (L^i_true). Note that this training process requires two nested optimization loops: an inner loop that trains and evaluates the motif-networks on the synthetic data and an outer loop that trains the synthetic data itself.

Initializing the Synthetic Petri Dish: Before training the Petri dish, the motif-networks and synthetic data must be initialized. Once the motifs have been extracted into separate motif-networks, each motif-network is assigned the same initial random weights (θ_init). This constraint reduces confounding factors by ensuring that the motif-networks differ from each other only in their instantiated motifs. At the start of Synthetic Petri Dish training, synthetic training data (S_train = (x_train, y_train)) and validation data samples (S_valid = (x_valid, y_valid)) are randomly initialized. Note that these learned training and validation data can play distinct and complementary roles, e.g., the validation data can learn to test out-of-distribution generalization from a learned training set. Empirically, setting the training and validation data to be the same initially (i.e., S_train = S_valid) benefited optimization at the beginning of outer-loop training; over iterations of outer-loop training, the synthetic training and validation data then diverge. The size of the motif-network and the number of synthetic data samples are chosen through the hyperparameter selection procedure described in Appendix A.2.

Inner-loop training: The inner optimization loop is where the performance of motif-networks is evaluated by training each such network independently with synthetic data. This training reveals a sense of the quality of the motifs themselves. In each inner loop, the motif-networks are independently trained with SGD using the synthetic training data (S_train). The motif-networks take synthetic training inputs (x_train) and produce their respective output predictions (ŷ_train). For each motif-network, a binary cross-entropy (BCE) loss is computed between the output predictions (ŷ_train) and the synthetic training labels (y_train).
Because the Petri dish is an artificial setting, the choice of BCE as the inner-loop loss (L_inner) is independent of the actual domain loss (used for ground-truth training), and other losses like regression loss could instead be used. The gradients of the BCE loss w.r.t. the motif-network weights inform weight updates (as in regular SGD):

θ^i_{t+1} = θ^i_t − α ∇L^i_inner_train(S_train, θ^i_t),   i ∈ 1, 2, ..., N   (1)

where α is the inner-loop learning rate and θ^i_0 = θ_init. Inner-loop training proceeds until individual BCE losses converge. Once trained, each motif-network is independently evaluated using the synthetic validation data (S_valid) to obtain individual validation loss values (L^i_inner_valid). These inner-loop validation losses then enable calculating an outer-loop loss to optimize the synthetic data, which is described next.

Outer-loop training: Recall that an initial sampling of candidate motifs evaluated in the ground-truth setting serves as a training signal for crafting the Petri dish's synthetic data. That is, in the outer loop, synthetic training data is optimized to encourage motif-networks trained upon it to become accurate surrogates for the performance of full networks built with that motif evaluated in the ground-truth setting. The idea is that training motif-networks on the right (small) set of synthetic training data can potentially isolate the key properties of candidate motifs that make them effective. To frame the outer-loop loss function, what is desired is for the validation loss of the motif-network to induce the same relative ordering as the validation loss of the ground-truth networks; such relative ordering is all that is needed to decide which new motif is likely to be best. One way to design such an outer-loop loss with this property is to penalize differences between normalized loss values in the Petri dish and ground-truth setting.¹ To this end, the motif-network (inner-loop) loss values and their respective ground-truth loss values are first independently normalized to have zero mean and unit variance. Then, for each motif, a mean squared error (MSE) loss is computed between the normalized inner-loop validation loss (L̂^i_inner_valid) and the normalized ground-truth validation loss (L̂^i_true). The MSE loss is averaged over all the motifs and used to compute a gradient step to improve the synthetic training and validation data:

L_outer = (1/N) Σ_{i=1}^{N} (L̂^i_inner_valid − L̂^i_true)²   (2)

S_train_{t+1} = S_train_t − β ∇L_outer   and   S_valid_{t+1} = S_valid_t − β ∇L_outer   (3)

where β is the outer-loop learning rate. For simplicity, only the synthetic training (x_train) and validation (x_valid) inputs are learned and the corresponding labels (y_train, y_valid) are kept fixed to their initial random values throughout training. (¹We tried an explicit rank-loss as well, but the normalized regression loss performed slightly better empirically.) Minimizing the outer-loop MSE loss (L_outer) modifies the synthetic training and validation inputs to maximize the similarity between the motif-networks' performance ordering and the motifs' ground-truth ordering. After each outer-loop training step, the motif-networks are reset to their original initial weights (θ_init) and the inner-loop training and evaluation procedure (Equation 1) is carried out again. The outer-loop training proceeds until the MSE loss converges, resulting in optimized synthetic data.
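The following is a minimal sketch of one outer-loop step, under simplifying assumptions: the inner loop runs a fixed number of differentiable SGD steps (rather than training to convergence), and each motif-network is a callable taking explicit parameters so that gradients can flow back to the synthetic inputs. Function and variable names are illustrative, not from the paper's code.

```python
import torch
import torch.nn.functional as F

def petri_dish_outer_loss(motifs, init_params, x_train, y_train, x_valid, y_valid,
                          true_losses, inner_steps=20, alpha=0.1):
    """One outer-loop loss evaluation (Eq. 2) of a simplified Synthetic Petri Dish.

    motifs:           list of callables; motifs[i](params, x) runs motif-network i.
    init_params:      shared initial weights theta_init (plain tensors, not optimized).
    x_train, x_valid: learnable synthetic inputs (requires_grad=True).
    true_losses:      pre-computed ground-truth validation losses L_true^i.
    """
    inner_valid = []
    for motif in motifs:
        # Reset to theta_init so motif-networks differ only in their motifs.
        params = [p.detach().clone().requires_grad_(True) for p in init_params]
        for _ in range(inner_steps):  # differentiable inner loop, Eq. (1)
            loss = F.binary_cross_entropy_with_logits(motif(params, x_train), y_train)
            grads = torch.autograd.grad(loss, params, create_graph=True)
            params = [p - alpha * g for p, g in zip(params, grads)]
        inner_valid.append(
            F.binary_cross_entropy_with_logits(motif(params, x_valid), y_valid))
    inner_valid = torch.stack(inner_valid)
    true = torch.as_tensor(true_losses, dtype=inner_valid.dtype)

    # Normalize both loss vectors to zero mean / unit variance, then MSE (Eq. 2).
    normalize = lambda v: (v - v.mean()) / (v.std() + 1e-8)
    return ((normalize(inner_valid) - normalize(true)) ** 2).mean()

# Outer loop (Eq. 3): gradient-descend the synthetic inputs themselves, e.g.,
# opt = torch.optim.SGD([x_train, x_valid], lr=beta)
# loss = petri_dish_outer_loss(...); opt.zero_grad(); loss.backward(); opt.step()
```

Because `create_graph=True` keeps the inner-loop graph, the outer gradient of the normalized MSE with respect to x_train and x_valid is exactly what Eq. (3) prescribes in this simplified setting.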
This paper presents an approach to accelerating NAS with 'petri-dish' networks, which aim to mimic the response of original networks at a fraction of the training-time cost. The key idea is to evaluate an architectural setting on a miniaturized network as opposed to the original network. With this approach, computational effort is saved by eschewing expensive 'ground truth' evaluations of the original networks.
SP:ef8a9ec9f2c482ffacdf56b7a36e1fa567b6ba29
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization
1 INTRODUCTION. Machine learning has achieved state-of-the-art (SOTA) performance in many fields, especially in computer vision tasks. This success can mainly be attributed to the deep architecture of convolutional neural networks (CNNs) that typically have 10 to 100 million learnable parameters. Such a huge number of parameters enables the deep CNNs to solve complex problems. However, besides the powerful representation ability, a huge number of parameters increases the probability of overfitting when the number of training examples is insufficient, which results in poor generalization of the model. In order to improve the generalization ability of deep learning models, several data augmentation strategies have been studied. Random feature removal is one of the popular techniques that guides the CNNs not to focus on some small regions of input images or on a small set of internal activations, thereby improving the model robustness. Dropout (Srivastava et al., 2014; Tompson et al., 2015) and regional dropout (Junsuk & Hyunjung, 2019; Terrance & Graham, 2017; Golnaz et al., 2018; Singh & Lee, 2017; Zhun et al., 2017) are two established training strategies: the former randomly turns off some internal activations and the latter removes and/or alters random regions of the input images. Both of them force a model to learn the entire object region rather than focusing only on the most important features, thereby improving the generalization of the model.

(Figure 1: original images and labels (Dog, Cat); target and source images; and the augmented images with their interpolated label, Dog 80%? Cat 20%?)

Although dropout and regional dropout improve the classification performance, this kind of feature removal is undesired since they discard a notable portion of informative pixels from the training images. Recently, Yun et al. (2019) proposed CutMix, which randomly replaces an image region with a patch from another training image and mixes their labels according to the ratio of mixed pixels. Unlike Cutout (Devries & Taylor, 2017), this method can enjoy the properties of regional dropout without having any blank image region. However, we argue that the random selection process may select a patch from the background region that is irrelevant to the target objects of the source image, by which an augmented image may not contain any information about the corresponding object, as shown in Figure 1. The selected source patch (background) is highlighted with a black rectangle on the source image. Two possible augmented images are shown wherein, in both cases, there is no information about the source object (cat) in the augmented images despite their mixing location on the target image. However, their interpolated labels encourage the model to learn both objects' features (dog and cat) from that training image. But we recognize that this is undesirable and misleads the CNN to learn unexpected feature representations. Because CNNs are highly sensitive to textures (Geirhos et al.
, 2019) and since the interpolated label indicates the selected background patch as the source object, it may encourage the classifier to learn the background as the representative feature for the source object class. We address the aforementioned problem by carefully selecting the source image patch with the help of some prior information. Specifically, we first extract a saliency map of the source image that highlights important objects, then select a patch surrounding the peak salient region of the source image to ensure that we select from the object part, and then mix it with the target image. Now the selected patch contains relevant information about the source object, which leads the model to learn a more appropriate feature representation. This more effective data augmentation strategy is what we call "SaliencyMix". We present extensive experiments on various standard CNN architectures, benchmark datasets, and multiple tasks, to evaluate the proposed method. In summary, SaliencyMix has obtained the new best known top-1 errors of 2.76% and 16.56% for WideResNet (Zagoruyko & Komodakis, 2016) on CIFAR-10 and CIFAR-100 (Krizhevsky, 2012), respectively. Also, on the ImageNet (Russakovsky et al., 2015) classification problem, SaliencyMix has achieved the best known top-1 and top-5 errors of 21.26% and 5.76% for ResNet-50 and 20.09% and 5.15% for ResNet-101 (He et al., 2016). In the object detection task, initializing the Faster R-CNN (Ren et al., 2015) with a SaliencyMix-trained model and then fine-tuning the detector has improved the detection performance on the Pascal VOC (Everingham et al., 2010) dataset by +1.77 mean average precision (mAP). Moreover, the SaliencyMix-trained model has proved to be more robust against adversarial attacks and improves the top-1 accuracy by 1.96% on an adversarially perturbed ImageNet validation set. All of these results clearly indicate the effectiveness of the proposed SaliencyMix data augmentation strategy in enhancing model performance and robustness.

2 RELATED WORK. 2.1 DATA AUGMENTATION. The success of deep learning models can be accredited to the volume and diversity of data. But collecting labeled data is a cumbersome and time-consuming task. As a result, data augmentation has been introduced, which aims to increase the diversity of existing data by applying various transformations, e.g., rotation, flip, etc. Since this simple and inexpensive technique significantly improves the model performance and robustness, data augmentation has been widely used to train deep learning models. LeCun et al. (1998) applied data augmentation to train LeNet for handwritten character recognition. They performed several affine transformations such as translation, scaling, shearing, etc. For the same task, Bengio et al. (2011) applied more diverse transformations such as Gaussian noise, salt and pepper noise, Gaussian smoothing, motion blur, local elastic deformation, and various occlusions to the images. Krizhevsky et al. (2012) applied random image patch cropping, horizontal flipping, and random color intensity changes based on principal component analysis (PCA). In Deep Image (Wu et al., 2015), color casting, vignetting, and lens distortion are applied besides flipping and cropping to improve the robustness of a very deep network. Besides these manually designed data augmentations, Lemley et al. (2017) proposed an end-to-end learnable augmentation process, called Smart Augmentation.
They used two different networks, where one is used to learn the suitable augmentation type and the other is used to train the actual task. Devries & Taylor (2017) proposed Cutout, which randomly removes square regions of the input training images to improve the robustness of the model. Zhang et al. (2017) proposed MixUp, which blends two training images to some degree, where the labels of the augmented image are assigned by the linear interpolation of those two images. But the augmented images look unnatural and locally ambiguous. Recently, Cubuk et al. (2019) proposed an effective data augmentation method called AutoAugment, which defines a search space of various augmentation techniques and selects the most suitable one for each mini-batch. Kim et al. (2020) proposed PuzzleMix, which jointly optimizes two objectives, i.e., selecting an optimal mask and an optimal mixing plan. The mask tries to reveal the most salient data of the two images, and the optimal transport plan aims to maximize the saliency of the revealed portion of the data. Yun et al. (2019) proposed CutMix, which randomly cuts and mixes image patches among training samples and mixes their labels proportionally to the size of those patches. However, due to the randomness in the source patch selection process, it may select a region that does not contain any informative pixel about the source object, and label mixing according to those uninformative patches misleads the classifier to learn unexpected feature representations. In this work, the careful selection of the source patch ensures the patch contains some information about the source object, thereby solving the class probability assignment problem and helping to improve model performance and robustness.

2.2 LABEL SMOOTHING. In object classification, the class labels are usually represented by one-hot codes, i.e., the true labels are expected to have a probability of exactly 1 while the others have exactly 0. In other words, this encourages the model to be overconfident, which causes overfitting to the training dataset. As a result, such models have low performance on unseen test data. To alleviate this problem, label smoothing relaxes the model's confidence on the true label by setting the class probability to a slightly lower value, e.g., lower than 1. As a result, it guides the model to be more adaptive instead of being over-confident, and ultimately improves model robustness and performance (Szegedy et al., 2016). Our method also mixes the class labels and enjoys the benefit of label smoothing.

2.3 SALIENCY DETECTION. Saliency detection aims to simulate the natural attention mechanism of the human visual system (HVS) and can be classified into two main categories. The first one is the bottom-up approach (Cheng et al., 2014; Zhu et al., 2014; Li et al., 2015; Zhou et al., 2015; Achanta et al., 2009; Li et al., 2013; Hou & Zhang, 2007; Qin et al., 2015; Peng et al., 2016; Lei et al., 2016; Montabone & Soto, 2010), which focuses on exploring low-level vision features. Some visual priors inspired by HVS properties are utilized to describe a salient object. Cheng et al. (2014) utilized a contrast prior and proposed a regional-contrast-based salient object detection algorithm. Zhu et al. (2014) introduced a robust background measure in an optimization framework to integrate multiple low-level cues to obtain clean and uniform saliency maps. Li et al.
(2015) optimized the image boundary selection by a boundary removal mechanism and then used random-walk ranking to formulate pixel-wise saliency maps. Zhou et al. (2015) proposed a saliency detection model where the saliency information is propagated using a manifold-ranking diffusion process on a graph.

(Figure: the SaliencyMix pipeline: source image, saliency map of the source image, selection of the peak salient region of the saliency map, selection of the source patch based on the peak salient region, and mixing of the source patch with the target image to produce the augmented image.)

In addition, some traditional techniques are also introduced to achieve image saliency detection, such as frequency domain analysis (Achanta et al., 2009), sparse representation (Li et al., 2013), log-spectrum (Hou & Zhang, 2007), cellular automata (Qin et al., 2015), low-rank recovery (Peng et al., 2016), and Bayesian theory (Lei et al., 2016). Hou & Zhang (2007) proposed a spectral residual method that focuses on the properties of the background. Achanta et al. (2009) proposed a frequency-tuned approach that preserves boundary information by retaining a sufficient amount of high-frequency content. Montabone & Soto (2010) introduced a method that was originally designed for fast human detection in a scene by proposing novel features derived from a visual saliency mechanism. Later on, this feature extraction mechanism was generalized for other forms of saliency detection. The second one is the top-down approach, which is task-driven and utilizes supervised learning with labels. Several deep learning based methods have been proposed for saliency detection (Deng et al., 2018; Liu et al., 2018; Zhang et al., 2018a;b; Qin et al., 2019). Deng et al. (2018) proposed a recurrent residual refinement network (R3Net) equipped with residual refinement blocks (RRBs) to more accurately detect salient regions. Contexts play an important role in the saliency detection task, and based on that, Liu et al. (2018) proposed a pixel-wise contextual attention network, called PiCANet, to learn to selectively attend to informative context locations for each pixel. Zhang et al. (2018a) introduced a multi-scale context-aware feature extraction module and proposed a bi-directional message passing model for salient object detection. Zhang et al. (2018b) focused on powerful feature extraction and proposed an attention-guided network which selectively integrates multi-level contextual information in a progressive manner. Recently, Qin et al. (2019) proposed a predict-and-refine architecture for salient object detection called the boundary-aware saliency detection network (BASNet). The authors introduced a hybrid loss to train a densely supervised encoder-decoder network. However, despite the high performance of the top-down approach, it lacks generalization to various applications since such methods are biased towards the training data and limited to specific objects. In this study, we require a saliency model to focus on the important object/region in a given scene without knowing its label. As a result, we rely on bottom-up approaches, which are unsupervised, scale-invariant, and more robust to unseen data.
It is worth noting that training-based saliency methods can also be applied, where the quality and quantity of the data used to train the saliency method may be correlated with the effectiveness of the data augmentation. Section 3.3 further explains the effects of different saliency detection algorithms on the proposed data augmentation method.
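As a concrete illustration, a minimal SaliencyMix sketch is given below, using OpenCV's spectral-residual saliency (Hou & Zhang, 2007; available in opencv-contrib) as the bottom-up detector; the paper's exact detector and patch-sizing details may differ, and same-sized HWC uint8 images are assumed.

```python
import cv2
import numpy as np

def saliency_mix(target_img, source_img, target_lab, source_lab, num_classes, lam=0.7):
    # Bottom-up saliency map of the source image (spectral residual method).
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = detector.computeSaliency(source_img)
    assert ok, "saliency computation failed"

    # Peak salient location of the source image.
    cy, cx = np.unravel_index(np.argmax(sal_map), sal_map.shape)

    # Patch size from the mixing ratio lam, as in CutMix.
    h, w = target_img.shape[:2]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)

    # Paste the source patch (centered on the salient peak) onto the target.
    mixed = target_img.copy()
    mixed[y1:y2, x1:x2] = source_img[y1:y2, x1:x2]

    # Mix labels by the actual pixel ratio of the pasted patch.
    lam_adj = 1.0 - ((y2 - y1) * (x2 - x1)) / float(h * w)
    y = np.zeros(num_classes, dtype=np.float32)
    y[target_lab] += lam_adj
    y[source_lab] += 1.0 - lam_adj
    return mixed, y
```

Unlike CutMix, the patch location is chosen by the saliency peak rather than at random, so the pasted region carries object (not background) evidence for the source label.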
This paper proposes a new augmentation method based on CutMix. The authors find that randomly selecting patches may mix in background textures, and this will mislead the model. So, they propose to use saliency maps to control the selection of mixed patches, which is called SaliencyMix. The idea is simple and reasonable, and many experiments are conducted to prove the effectiveness of the proposed method. However, the experimental results fail to show the ability of the method, and some explanations are missing.
SP:c4bb47d4a04a539331e2ab2ef62b2854804f6a3c
Efficient estimates of optimal transport via low-dimensional embeddings
1 Introduction. Optimal Transport metrics (Kantorovich, 1960), or Wasserstein distances, have emerged successfully in the field of machine learning, as outlined in the review by Peyré et al. (2017). They provide machinery to lift distances on X to distances over probability distributions in P(X). They have found multiple applications in machine learning: domain adaptation (Courty et al., 2017), density estimation (Bassetti et al., 2006) and generative networks (Genevay et al., 2017; Patrini et al., 2018). However, it is prohibitively expensive to compute OT between distributions with support in a high-dimensional space, and it might not even be practically possible, as the sample complexity can grow exponentially, as shown by Dudley (1969). Similarly, work by Weed et al. (2019) showed a theoretical improvement when the support of the distributions is found in a low-dimensional space. Furthermore, picking the ground metric that one should use is not obvious when using high-dimensional data. One of the earlier ideas, from Santambrogio (2015), showed that OT projections onto a 1-D space may be sufficient to extract geometric information from high-dimensional data. This further prompted Kolouri et al. (2018) to use this method to build generative models, namely the Sliced Wasserstein Autoencoder. Following a similar approach, Paty & Cuturi (2019) and Muzellec & Cuturi (2019) project the measures into a linear subspace E of low dimension k that maximizes the transport cost, and show how this can be used in applications of color transfer and domain adaptation. This can be seen as an extension of earlier work by Cuturi & Doucet (2014) whereby the cost function is parameterized. One of the fundamental innovations that made OT appealing to the machine learning community was the seminal paper by Cuturi (2013) that introduced the idea of entropic regularization of OT distances and the Sinkhorn algorithm. Since then, regularized OT has been successfully used as a loss function to construct generative models such as GANs (Genevay et al., 2017) or RBMs (Montavon et al., 2015) and to compute barycenters (Cuturi & Doucet, 2014; Claici et al., 2018). More recently, the new class of Sinkhorn Divergences was shown by Feydy et al. (2018) to have good geometric properties, and to interpolate between Maximum Mean Discrepancies (MMD) and OT. Building on this previous work, we introduce a general framework for approximating high-dimensional OT using low-dimensional projections f, by finding the subspace with the worst OT cost, i.e., the one maximizing the ground cost in the low-dimensional space. By taking a general family of parameterizable f_φ's that are 1-Lipschitz, we show that our method generates a pseudo-metric and is computationally efficient and robust. We start the paper in §2 with background on optimal transport and pseudo-metrics. In §3 we define the theoretical framework for approximating OT distances and show how both linear (Paty & Cuturi, 2019) and non-linear projections can be seen as special instances of our framework. In §4 we present an efficient algorithm for computing OT distances using Sinkhorn Divergences and f_φ's that are 1-Lipschitz under the L2 norm. We conclude in §5 with experiments illustrating the efficiency and robustness of our method.

2 Preliminaries. We start with a brief reminder of the basic notions needed for the rest of the paper.
Let X be a set equipped with a map d_X : X × X → R≥0 with non-negative real values. The pair (X, d_X) is said to be a metric space and d_X is said to be a metric on X if it satisfies the usual properties:
• d_X(x, y) = 0 if and only if x = y
• d_X(x, y) = d_X(y, x)
• d_X(x, z) ≤ d_X(x, y) + d_X(y, z)
If d_X verifies the above except for the "only if" condition, it is called a pseudo-metric, and (X, d_X) is said to be a pseudo-metric space. For a pseudo-metric, it may be that d_X(x, y) = 0 while x ≠ y. We write d_X ≤ d′_X if for all x, y, d_X(x, y) ≤ d′_X(x, y). It is easy to see that: 1) "≤" is a partial order on pseudo-metrics over X; 2) "≤" induces a complete lattice structure on the set of pseudo-metrics over X, where 3) suprema are computed pointwise (but not infima).
Consider X, Y, two metric spaces equipped with respective metrics d_X, d_Y. A map f from X to Y is said to be α-Lipschitz continuous if d_Y(f(x), f(x′)) ≤ α d_X(x, x′). A 1-Lipschitz map is also called non-expansive. Given a map f from X to Y, one defines the pullback of d_Y along f as:

f̂(d_Y)(x, x′) = d_Y(f(x), f(x′))   (1)

It is easily seen that: 1) f̂(d_Y) is a pseudo-metric on X; 2) f̂(d_Y) is a metric iff f is injective; 3) f̂(d_Y) ≤ d_X iff f is non-expansive; 4) f̂(d_Y) is the least pseudo-metric on the set X such that f is non-expansive from (X, f̂(d_Y)) to (Y, d_Y).
Thereafter, we assume that all metric spaces considered are complete and separable, i.e., have a dense countable subset. Let (X, d_X) be a (complete separable) metric space. Let Σ_X be the σ-algebra generated by the open sets of X (a.k.a. the Borelian subsets). We write P(X) for the set of probability distributions on (X, Σ_X). Given a measurable map f : X → Y and µ ∈ P(X), one defines the push-forward of µ along f as:

f#(µ)(B) = µ(f⁻¹(B))   (2)

for B ∈ Σ_Y. It is easily seen that f#(µ) is a probability measure on (Y, Σ_Y). Given µ in P(X), ν in P(Y), a coupling of µ and ν is a probability measure γ over X × Y such that for all A in Σ_X, B in Σ_Y, γ(A × Y) = µ(A) and γ(X × B) = ν(B). Equivalently, µ = π0#(γ) and ν = π1#(γ), for π0, π1 the respective projections. We write Γ(µ, ν) for the set of couplings of µ and ν. There are several ways to lift a given metric structure d_X to one on P(X). We will be specifically interested in metrics on P(X) derived from optimal transport problems. The p-Wasserstein metric with p ∈ [1, ∞) is defined by:

W_p(d_X)(µ, ν)^p = inf_{γ ∈ Γ(µ, ν)} ∫_{X×X} d_X^p dγ   (3)

Villani (2008) establishes that if d_X is a (pseudo-)metric, so is W_p(d_X). The natural 'Dirac' embedding of X into P(X) is isometric (there is only one coupling). The idea behind the definition is that d_X^p is used as a measure of the cost of transporting units of mass in X, while a coupling γ specifies how to transport the µ distribution to the ν one. One can therefore compute the mean transportation cost under γ, and pick the optimal γ. Hence the name optimal transport. In most of the paper, we are concerned with the case X = R^d_+ for some large d, with the metric structure d_X given by the Euclidean norm, and we wish to compute the W_2 metric between distributions with finite support.
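To make Eq. (3) concrete for the finite-support case: for two uniform empirical measures of equal size, an optimal coupling can be taken to be a permutation (a consequence of Birkhoff's theorem), so W_2 reduces to a linear assignment problem. A small sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_empirical(x, y):
    """Exact W_2 between mu = (1/n) sum_i delta_{x_i} and nu = (1/n) sum_j delta_{y_j}
    on R^d (x, y: arrays of shape (n, d)); Eq. (3) with p = 2 and uniform weights."""
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)               # optimal permutation coupling
    return np.sqrt(cost[rows, cols].mean())

# Example: two small point clouds in R^3.
rng = np.random.default_rng(0)
print(w2_empirical(rng.normal(size=(64, 3)), rng.normal(loc=1.0, size=(64, 3))))
```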
Since OT metrics are costly to compute in high dimension, to estimate them efficiently and mitigate the impact of dimension, we will use a well-chosen family of f's to push the data along a map with a low-dimensional co-domain Y, also equipped with the Euclidean metric. The reduction maps may be linear or not. They have to be non-expansive to guarantee that the associated pullback metrics are always below the Euclidean one, and therefore we provide a lower estimate of W_2(d_2).

3 Approximate OT with General Projections - GPW. With the ingredients from the above section in place, we can now construct a general framework for approximating Wasserstein-like metrics by low-dimensional mappings of X. We write simply W instead of W_p, as the value of p plays no role in the development. Pick two metric spaces (X, d_X), (Y, d_Y), and a family S = (f_φ : X → Y; φ ∈ S) of mappings from X to Y. Define a map from P(X) × P(X) to the non-negative reals as follows:

d_S(µ, ν) = sup_{φ ∈ S} W(d_Y)(f_φ#(µ), f_φ#(ν))   (4)

Equivalently and more concisely, d_S can be defined as:

d_S(µ, ν) = sup_φ W(f̂_φ(d_Y))(µ, ν)   (5)

It is easily seen that:
1. the two definitions are equivalent;
2. d_S is a pseudo-metric on P(X);
3. d_S is a metric (not just a pseudo one) if the family f_φ jointly separates points in X; and
4. if the f_φ's are non-expansive from (X, d_X) to (Y, d_Y), then d_S ≤ W(d_X).
The second point follows readily from the second definition. Each f̂_φ(d_Y) is a pseudo-metric on X obtained by pulling back d_Y (see the preceding section); hence, so is W(f̂_φ(d_Y)) on P(X), and therefore d_S, being the supremum of this family (in the lattice of pseudo-metrics over X), is itself a pseudo-metric. The first definition is important because it allows one to perform the OT computation in the target space, where it will be cheaper. Thus we have derived from S a pseudo-metric d_S on the space of probability measures P(X). We assume from now on that the mappings in S are non-expansive. By point 4 above, we know that d_S is bounded above by W(d_X). We call d_S the generalized projected Wasserstein metric (GPW) associated to S. In good cases, it is both cheaper to compute and a good estimate.
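A minimal sketch of estimating d_S from samples follows, under two assumptions not fixed by this excerpt: the family is taken to be linear maps rescaled to spectral norm at most 1 (hence non-expansive), and the inner OT cost is computed with a small entropic-regularized Sinkhorn loop so that the whole estimate is differentiable in φ.

```python
import math
import torch

def sinkhorn_cost(x, y, eps=0.05, iters=100):
    # Entropic-regularized OT cost between uniform empirical measures (log domain).
    C = torch.cdist(x, y) ** 2
    n, m = C.shape
    log_mu, log_nu = torch.full((n,), -math.log(n)), torch.full((m,), -math.log(m))
    f, g = torch.zeros(n), torch.zeros(m)
    for _ in range(iters):
        f = -eps * torch.logsumexp((g[None, :] - C) / eps + log_nu[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - C) / eps + log_mu[:, None], dim=0)
    P = torch.exp((f[:, None] + g[None, :] - C) / eps + log_mu[:, None] + log_nu[None, :])
    return (P * C).sum()

def gpw_lower_estimate(x, y, k=2, steps=200, lr=0.05):
    # Maximize the projected OT cost (Eq. 4) over a non-expansive linear map A: R^d -> R^k.
    A = torch.randn(k, x.shape[1], requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(steps):
        A1 = A / torch.linalg.matrix_norm(A, ord=2).clamp(min=1.0)  # spectral norm <= 1
        loss = -sinkhorn_cost(x @ A1.T, y @ A1.T)                   # ascend on OT cost
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        A1 = A / torch.linalg.matrix_norm(A, ord=2).clamp(min=1.0)
        return sinkhorn_cost(x @ A1.T, y @ A1.T).sqrt()             # lower estimate of W_2
```

Because the rescaled map is non-expansive, point 4 above guarantees the returned value never exceeds W_2(d_2), up to the bias introduced by the entropic Sinkhorn approximation.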
This paper uses the fact that the Wasserstein distance is decreasing under 1-Lipschitz mappings from the ambient space to a feature space in order to propose more robust (to dimensionality ?) estimation of the Wasserstein distance between two probability distributions. A neural network with some sort of weight renormalization is used to produce 1-Lipschitz embeddings. The authors then maximise over the parametrized maps.
SP:62a3d12370f3248b2283ea33d6767b1c914bcbe2
InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective
1 INTRODUCTION. Self-supervised representation learning pre-trains good feature extractors from massive unlabeled data, which show promising transferability to various downstream tasks. Recent successes include large-scale pre-trained language models (e.g., BERT, RoBERTa, and GPT-3 (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020)), which have advanced the state of the art over a wide range of NLP tasks such as NLI and QA, even surpassing human performance. Specifically, in the computer vision domain, many studies have shown that self-supervised representation learning is essentially solving the problem of maximizing the mutual information (MI) I(X; T) between the input X and the representation T (van den Oord et al., 2018; Belghazi et al., 2018; Hjelm et al., 2019; Chen et al., 2020). Since MI is computationally intractable in high-dimensional feature spaces, many MI estimators (Belghazi et al., 2018) have been proposed to serve as lower bounds (Barber & Agakov, 2003; van den Oord et al., 2018) or upper bounds (Cheng et al., 2020) of MI. Recently, Kong et al. point out that the MI maximization principle of representation learning can be applied not only to computer vision but also to the NLP domain, and propose a unified view that recent pre-trained language models are maximizing a lower bound of MI among different segments of a word sequence. On the other hand, deep neural networks are known to be prone to adversarial examples (Goodfellow et al., 2015; Papernot et al., 2016; Eykholt et al., 2017; Moosavi-Dezfooli et al., 2016), i.e., the outputs of neural networks can be arbitrarily wrong when human-imperceptible adversarial perturbations are added to the inputs. Textual adversarial attacks typically perform word-level substitution (Ebrahimi et al., 2018; Alzantot et al., 2018; Ren et al., 2019) or sentence-level paraphrasing (Iyyer et al., 2018; Zhang et al., 2019) to achieve semantic/utility preservation that seems innocuous to humans, while fooling NLP models. Recent studies (Jin et al., 2020; Zang et al., 2020; Nie et al., 2020; Wang et al., 2020) further show that even large-scale pre-trained language models (LMs) such as BERT are vulnerable to adversarial attacks, which raises the challenge of building robust real-world LM applications against unknown adversarial attacks. We investigate the robustness of language models from an information-theoretic perspective, and propose a novel learning framework, InfoBERT, which focuses on improving the robustness of language representations by fine-tuning both local features (word-level representations) and global features (sentence-level representations) for robustness purposes. InfoBERT considers two MI-based regularizers: (i) the Information Bottleneck regularizer manages to extract approximate minimal sufficient statistics for downstream tasks, while removing excessive and noisy information that may incur adversarial attacks; (ii) the Anchored Feature regularizer carefully selects useful local stable features that are invulnerable to adversarial attacks, and maximizes the mutual information between local stable features and global features to improve the robustness of the global representation.
In this paper, we provide a detailed theoretical analysis to explicate the effect of InfoBERT on robustness improvement, along with extensive empirical adversarial evaluation to validate the theory. Our contributions are summarized as follows. (i) We propose a novel learning framework, InfoBERT, from the information theory perspective, aiming to effectively improve the robustness of language models. (ii) We provide a principled theoretical analysis of model robustness, and propose two MI-based regularizers to refine the local and global features, which can be applied to both standard and adversarial training for different NLP tasks. (iii) Comprehensive experimental results demonstrate that InfoBERT can substantially improve robust accuracy by a large margin without sacrificing the benign accuracy, yielding state-of-the-art performance across multiple adversarial datasets on NLI and QA tasks.

2 RELATED WORK. Textual Adversarial Attacks/Defenses. Most existing textual adversarial attacks focus on word-level adversarial manipulation. Ebrahimi et al. (2018) were the first to propose a white-box gradient-based attack to search for adversarial word/character substitutions. Follow-up work (Alzantot et al., 2018; Ren et al., 2019; Zang et al., 2020; Jin et al., 2020) further constrains the perturbation search space and adopts part-of-speech checking to make NLP adversarial examples look natural to humans. To defend against textual adversarial attacks, existing work can be classified into three categories: (i) Adversarial Training is a practical method to defend against adversarial examples. Existing work either uses PGD-based attacks to generate adversarial examples in the embedding space of NLP as data augmentation (Zhu et al., 2020a), or regularizes the standard objective using virtual adversarial training (Jiang et al., 2020; Liu et al., 2020; Gan et al., 2020). However, one drawback is that the threat model is often unknown, which renders adversarial training less effective when facing unseen attacks. (ii) Interval Bound Propagation (IBP) (Dvijotham et al., 2018) is proposed as a new technique to consider the worst-case perturbation theoretically. Recent work (Huang et al., 2019; Jia et al., 2019) has applied IBP in the NLP domain to certify the robustness of models. However, IBP-based methods rely on strong assumptions about model architecture and are difficult to adapt to recent transformer-based language models. (iii) Randomized Smoothing (Cohen et al., 2019) provides a tight robustness guarantee in the ℓ2 norm by smoothing the classifier with Gaussian noise. Ye et al. (2020) adapt the idea to the NLP domain, and replace the Gaussian noise with synonym words to certify the robustness as long as adversarial word substitutions fall into predefined synonym sets. However, guaranteeing the completeness of the synonym set is challenging.

Representation Learning. The MI maximization principle has been adopted by many studies on self-supervised representation learning (van den Oord et al., 2018; Belghazi et al., 2018; Hjelm et al., 2019; Chen et al., 2020). Specifically, InfoNCE (van den Oord et al., 2018) is used as a lower bound of MI, framing the problem as contrastive learning (Saunshi et al., 2019; Yu et al., 2020). However, Tian et al. (2020) suggest that the InfoMax (Linsker, 1988) principle may introduce excessive and noisy information, which could be adversarial.
To generate robust representations, Zhu et al. (2020b) formalize the problem from a mutual-information perspective, essentially performing adversarial training for worst-case perturbations, while mainly considering the continuous space in computer vision. In contrast, InfoBERT originates from an information-theoretic perspective and is compatible with both standard and adversarial training for the discrete input space of language models.

3 INFOBERT. Before diving into details, we first discuss the textual adversarial examples we consider in this paper. We mainly focus on the dominant word-level attack as the main threat model, since it achieves higher attack success and is less noticeable to human readers than other attacks. Due to the discrete nature of the text input space, it is difficult to measure adversarial distortion at the token level. Instead, because most word-level adversarial attacks (Li et al., 2019; Jin et al., 2020) constrain word perturbations via a bounded magnitude in the semantic embedding space, by adapting from Jacobsen et al. (2019), we define adversarial text examples with distortions constrained in the embedding space.

Definition 3.1. (ε-bounded Textual Adversarial Examples). Given a sentence x = [x_1; x_2; ...; x_n], where x_i is the word at the i-th position, the ε-bounded adversarial sentence x′ = [x′_1; x′_2; ...; x′_n] for a classifier F satisfies: (1) F(x) = o(x) = o(x′) but F(x′) ≠ o(x′), where o(·) is the oracle (e.g., a human decision-maker); (2) ||t_i − t′_i||_2 ≤ ε for i = 1, 2, ..., n, where ε ≥ 0 and t_i is the word embedding of x_i.

3.1 INFORMATION BOTTLENECK AS A REGULARIZER. In this section, we first discuss the general IB implementation, and then explain how the IB formulation is adapted to InfoBERT as a regularizer, along with a theoretical analysis supporting why the IB regularizer can help improve the robustness of language models. The IB principle formulates the goal of deep learning as an information-theoretic trade-off between representation compression and predictive power (Tishby & Zaslavsky, 2015). Given the input source X, a deep neural net learns the internal representation T of some intermediate layer and maximizes the MI between T and the label Y, so that T, subject to a constraint on its complexity, contains sufficient information to infer the target label Y. Finding an optimal representation T can be formulated as the maximization of the Lagrangian

L_IB = I(Y; T) − β I(X; T),   (1)

where β > 0 is a hyper-parameter to control the trade-off, and I(Y; T) is defined as:

I(Y; T) = ∫ p(y, t) log [ p(y, t) / (p(y) p(t)) ] dy dt.   (2)

Since Eq. (2) is intractable, we instead use the lower bound from Barber & Agakov (2003):

I(Y; T) ≥ ∫ p(y, t) log q_ψ(y | t) dy dt,   (3)

where q_ψ(y | t) is the variational approximation learned by a neural network parameterized by ψ for the true distribution p(y | t). This indicates that maximizing the lower bound of the first term of IB, I(Y; T), is equivalent to minimizing the task cross-entropy loss ℓ_task = H(Y | T). To derive a tractable lower bound of IB, we here use an upper bound (Cheng et al., 2020) of I(X; T):

I(X; T) ≤ ∫ p(x, t) log(p(t | x)) dx dt − ∫ p(x) p(t) log(p(t | x)) dx dt.   (4)

By combining Eq.
By combining Eqs . ( 3 ) and ( 4 ) , we can maximize a tractable lower bound L̂_IB of IB in practice : L̂_IB = ( 1/N ) Σ_{i=1}^{N} [ log q_ψ ( y^{(i)} | t^{(i)} ) ] − ( β/N ) Σ_{i=1}^{N} [ log ( p ( t^{(i)} | x^{(i)} ) ) − ( 1/N ) Σ_{j=1}^{N} log ( p ( t^{(j)} | x^{(i)} ) ) ] ( 5 ) with data samples { x^{(i)} , y^{(i)} }_{i=1}^{N} , where q_ψ can represent any classification model ( e.g. , BERT ) , and p ( t | x ) can be viewed as the feature extractor f_θ : X → T , where X and T are the support of the input source X and the extracted feature T , respectively . The above is a general implementation of the IB objective function . In InfoBERT , we consider T as the features consisting of the local word-level features after the BERT embedding layer f_θ . The following BERT self-attentive layers , along with the linear classification head , serve as q_ψ ( y | t ) , which predicts the target Y given the representation T . Formally , given random variables X = [ X_1 ; X_2 ; ... ; X_n ] representing input sentences with X_i ( the word token at the i-th index ) , let T = [ T_1 ; ... ; T_n ] = f_θ ( [ X_1 ; X_2 ; ... ; X_n ] ) = [ f_θ ( X_1 ) ; f_θ ( X_2 ) ; ... ; f_θ ( X_n ) ] denote the random variables representing the features generated from input X via the BERT embedding layer f_θ , where T_i ∈ R^d is the high-dimensional word-level local feature for word X_i . Due to the high dimensionality d of each word feature ( e.g. , 1024 for BERT-large ) , when the sentence length n increases , the dimensionality of the features T becomes too large to compute I ( X ; T ) in practice . Thus , we propose to maximize a localized formulation of IB , L_LIB , defined as : L_LIB := I ( Y ; T ) − n β Σ_{i=1}^{n} I ( X_i ; T_i ) . ( 6 ) Theorem 3.1 . ( Lower Bound of L_IB ) Given a sequence of random variables X = [ X_1 ; X_2 ; ... ; X_n ] and a deterministic feature extractor f_θ , let T = [ T_1 ; ... ; T_n ] = [ f_θ ( X_1 ) ; f_θ ( X_2 ) ; ... ; f_θ ( X_n ) ] . Then the localized formulation of IB , L_LIB , is a lower bound of L_IB ( Eq . ( 1 ) ) , i.e. , I ( Y ; T ) − β I ( X ; T ) ≥ I ( Y ; T ) − n β Σ_{i=1}^{n} I ( X_i ; T_i ) . ( 7 ) Theorem 3.1 indicates that we can maximize the localized formulation L_LIB as a lower bound of L_IB when I ( X ; T ) is difficult to compute . In Eq . ( 6 ) , if we regard the first term ( I ( Y ; T ) ) as a task-related objective , the second term ( − n β Σ_{i=1}^{n} I ( X_i ; T_i ) ) can be considered a regularization term that constrains the complexity of the representation T , and is thus named the Information Bottleneck regularizer . Next , we give a theoretical analysis of the adversarial robustness of IB and demonstrate why the localized IB objective function can help improve robustness to adversarial attacks . Following Definition 3.1 , let T = [ T_1 ; T_2 ; ... ; T_n ] and T′ = [ T′_1 ; T′_2 ; ... ; T′_n ] denote the features for the benign sentence X and the adversarial sentence X′ . The distributions of X and X′ are denoted by probabilities p ( x ) and q ( x ) with supports X and X′ , respectively . We assume that the feature representation T has finite support , denoted by T , considering the finite vocabulary size in NLP . Theorem 3.2 . ( Adversarial Robustness Bound ) For random variables X = [ X_1 ; X_2 ; ... ; X_n ] and X′ = [ X′_1 ; X′_2 ; ... ; X′_n ] , let T = [ T_1 ; T_2 ; ... ; T_n ] = [ f_θ ( X_1 ) ; f_θ ( X_2 ) ; ... ; f_θ ( X_n ) ] and T′ = [ T′_1 ; T′_2 ; ... ; T′_n ] = [ f_θ ( X′_1 ) ; f_θ ( X′_2 ) ; ... ; f_θ ( X′_n ) ] with finite support T , where f_θ is a deterministic feature extractor .
The performance gap between benign and adversarial data , |I ( Y ; T ) − I ( Y ; T′ ) | , is bounded above by |I ( Y ; T ) − I ( Y ; T′ ) | ≤ B_0 + B_1 Σ_{i=1}^{n} √|T| ( I ( X_i ; T_i ) )^{1/2} + B_2 Σ_{i=1}^{n} |T|^{3/4} ( I ( X_i ; T_i ) )^{1/4} + B_3 Σ_{i=1}^{n} √|T| ( I ( X′_i ; T′_i ) )^{1/2} + B_4 Σ_{i=1}^{n} |T|^{3/4} ( I ( X′_i ; T′_i ) )^{1/4} , ( 8 ) where B_0 , B_1 , B_2 , B_3 and B_4 are constants depending on the sequence length n , ε , and p ( x ) . The sketch of the proof is to express the difference |I ( Y ; T ) − I ( Y ; T′ ) | in terms of I ( X_i ; T_i ) . Specifically , Eq . ( 25 ) factorizes the difference into two summands . The first summand , the conditional entropy difference |H ( T | Y ) − H ( T′ | Y ) | , can be bounded by Eq . ( 42 ) in terms of the MI between benign/adversarial inputs and representations , I ( X_i ; T_i ) and I ( X′_i ; T′_i ) . The second summand , |H ( T ) − H ( T′ ) | , has a constant upper bound ( Eq . ( 85 ) ) , since language models have bounded vocabulary size and embedding space , and thus have bounded entropy . The intuition of Theorem 3.2 is to bound the adversarial performance drop |I ( Y ; T ) − I ( Y ; T′ ) | by I ( X_i ; T_i ) . As explained in Eq . ( 3 ) , I ( Y ; T ) and I ( Y ; T′ ) can be regarded as the model performance on benign and adversarial data . Thus , the LHS of the bound represents this performance gap . The adversarial robustness bound of Theorem 3.2 indicates that the gap shrinks as I ( X_i ; T_i ) and I ( X′_i ; T′_i ) decrease . Note that our IB regularizer in the objective function of Eq . ( 6 ) achieves exactly this goal of minimizing I ( X_i ; T_i ) while learning the most efficient information features , or approximately minimal sufficient statistics , for downstream tasks . Theorem 3.2 also suggests that combining adversarial training with our IB regularizer can further minimize I ( X′_i ; T′_i ) , leading to better robustness , which is verified in §4 .
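To make the training objective of §3.1 concrete , the following is a minimal sketch of the localized IB loss of Eq . ( 6 ) , estimated with the sample-based bound of Eq . ( 5 ) . This is not the authors ' released code : encoder ( standing in for f_θ ) , classifier ( q_ψ ) , vnet ( a diagonal-Gaussian variational approximation of p ( t | x ) , in the spirit of the Cheng et al . ( 2020 ) bound ) , and the β value are all illustrative assumptions .

```python
import torch
import torch.nn.functional as F

def mi_upper_bound(t, mu, log_var):
    """Sample-based estimate of sum_i I(X_i; T_i), following Eq. (5).

    t:       (B, n, d) local features T_i = f_theta(X_i)
    mu:      (B, n, d) means of the Gaussian approximation of p(t | x)
    log_var: (B, n, d) log-variances of the same approximation
    """
    var = log_var.exp()
    # positive term: log p(t^(i) | x^(i)); additive constants cancel below
    pos = -((t - mu) ** 2 / var).sum(-1)                     # (B, n)
    # negative term: (1/N) sum_j log p(t^(j) | x^(i))
    diff = t.unsqueeze(0) - mu.unsqueeze(1)                  # diff[i, j] = t_j - mu_i
    neg = -((diff ** 2) / var.unsqueeze(1)).sum(-1).mean(1)  # (B, n)
    return 0.5 * (pos - neg).sum(-1).mean()

def infobert_objective(encoder, classifier, vnet, tokens, labels, beta=5e-3):
    t = encoder(tokens)                            # (B, n, d) word-level features
    mu, log_var = vnet(tokens)                     # Gaussian approx. of p(t | x)
    task = F.cross_entropy(classifier(t), labels)  # lower-bounds I(Y; T)
    n = t.size(1)                                  # sentence length
    return task + n * beta * mi_upper_bound(t, mu, log_var)
```

In training , vnet would be fit jointly by maximum likelihood on ( input , feature ) pairs so that the bound remains a valid upper estimate of the regularized MI terms .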
This work (InfoBERT) proposes additional objectives for transformer finetuning to obtain models that are more robust to adversarial inputs. The authors first propose a mutual-information-based information bottleneck objective; next, they propose an adversarial-loss-inspired method for identifying robust features and a subsequent objective that emphasizes the mutual information between global representations and these robust features. The experiments demonstrate that InfoBERT consistently outperforms other adversarial training approaches on a variety of adversarial evaluations.
Language Controls More Than Top-Down Attention: Modulating Bottom-Up Visual Processing with Referring Expressions
1 INTRODUCTION . As human beings , we can easily understand the surrounding environment with our visual system and interact with each other using language . Since the work of Winograd ( 1972 ) , developing a system that understands human language in a situated environment has been one of the long-standing goals of artificial intelligence . Recent successes of deep learning studies in both the language and vision domains have increased the interest in tasks that combine language and vision ( Antol et al. , 2015 ; Xu et al. , 2015 ; Krishna et al. , 2016 ; Suhr et al. , 2017 ; Anderson et al. , 2018b ; Hudson & Manning , 2019 ) . However , how to best integrate linguistic and perceptual processing is still an important open problem . In this work , we investigate whether language should be used to control the filters for bottom-up visual processing as well as top-down attention . In the human visual system , attention is driven by both “ top-down ” cognitive processes ( e.g . focusing on a target ’ s color or location ) and “ bottom-up ” salient , behaviourally relevant stimuli ( e.g . fast moving objects ) ( Corbetta & Shulman , 2002 ; Connor et al. , 2004 ; Theeuwes , 2010 ) . Studies on embodied language explore the link between linguistic and perceptual representations ( Pulvermüller , 1999 ; Vigliocco et al. , 2004 ; Gallese & Lakoff , 2005 ) , and it is often assumed that language has a high-level effect on perception and drives the “ top-down ” visual attention ( Bloom , 2002 ; Jackendoff & Jackendoff , 2002 ; Dessalegn & Landau , 2008 ) . However , recent studies from cognitive science point out that language comprehension also affects low-level visual processing ( Meteyard et al. , 2007 ; Boutonnet & Lupyan , 2015 ) . Motivated by this , we propose a model that can modulate either or both of “ bottom-up ” and “ top-down ” visual processing with language-conditional filters ( we will release our code and pre-trained models along with a reproducible environment after the blind review process ) . Current deep learning systems for language-vision tasks typically start with low-level image processing that is not conditioned on language , then connect the language representation with high-level visual features to control the visual focus . To integrate both modalities , concatenation ( Malinowski et al. , 2015 ) , element-wise multiplication ( Malinowski et al. , 2015 ; Lu et al. , 2016 ; Kim et al. , 2016 ) or attention from language to vision ( Xu et al. , 2015 ; Xu & Saenko , 2016 ; Yang et al. , 2016 ; Lu et al. , 2017 ; Anderson et al. , 2018a ; Zellers et al. , 2019 ) may be used . Notably , these approaches do not condition low-level visual features on language . One exception is De Vries et al . ( 2017 ) , which proposes conditioning the ResNet ( He et al. , 2016 ) image processing network with language-conditioned batch normalization parameters at every stage . Our model differs from these architectures by having explicit “ bottom-up ” and “ top-down ” branches , allowing us to experiment with modulating one or both branches with language-generated kernels . We evaluate our proposed model on the task of image segmentation from referring expressions , where given an image and a natural language description , the model returns a segmentation mask that marks the object ( s ) described . We can contrast this with purely image based object detection ( Girshick , 2015 ; Ren et al. , 2017 ) and semantic segmentation ( Long et al. , 2015 ; Ronneberger et al. , 2015 ; Chen et al .
, 2017 ) tasks , which are limited to predefined semantic classes . Our task gives users more flexibility to interact with the system by allowing them to describe objects of interest in free-form language . The language input may contain various visual attributes ( e.g. , color , shape ) , spatial information ( e.g. , “ on the right ” , “ in front of ” ) , actions ( e.g. , “ running ” , “ sitting ” ) and interactions/relations between different objects ( e.g. , “ arm of the chair that the cat is sitting in ” ) . This makes the task both more challenging and suitable for comparing different strategies of language control . The perceptual module of our model is based on the U-Net image segmentation architecture ( Ronneberger et al. , 2015 ) . This architecture has clearly separated bottom-up and top-down branches , which allows us to easily vary what parts are conditioned on language . The bottom-up branch starts from low-level visual features and applies a sequence of contracting filters that result in successively higher-level feature maps with lower spatial resolution . Following this is a top-down branch which takes the final low-resolution feature map and applies a sequence of expanding filters that eventually result in a segmentation mask at the original image resolution . Information flows between branches through skip connections between contracting and expanding filters at the same level . We experiment with conditioning one or both of these branches on language . To make visual processing conditional on language , we add language-conditional filters at each level of the architecture , similar to Misra et al . ( 2018 ) . Our baseline only applies language-conditional filters on the top-down branch . Modulating only the top-down/expanding branch with language means the high-level features extracted by the bottom-up/contracting branch can not be language-conditional . Our model expands on this baseline by modulating both branches with language-conditional filters . Empirically , we find that adding language modulation to the bottom-up/contracting branch yields a significant improvement over the baseline model . Our proposed model achieves state-of-the-art performance on three different English referring expression datasets . 2 RELATED WORK . In this section , we review work in several related areas : Semantic segmentation classifies the object category of each pixel in an image without language input . Referring expression comprehension locates a bounding box for the object ( s ) described in the language input . Image segmentation from referring expressions generates a segmentation mask for the object ( s ) described in the language input . We also cover work on language-conditional ( dynamic ) filters and studies that use them to modulate deep-learning models with language . 2.1 SEMANTIC SEGMENTATION . Early semantic segmentation models are based on Fully Convolutional Networks ( FCN ) ( Long et al. , 2015 ) . DeepLab ( Chen et al. , 2017 ) and U-Net ( Ronneberger et al. , 2015 ) are the most notable state-of-the-art semantic segmentation models related to our work . DeepLab replaces regular convolutions with atrous ( dilated ) convolutions in the last residual block of ResNets ( He et al. , 2016 ) and implements Atrous Spatial Pyramid Pooling ( ASPP ) , which fuses multi-scale visual information . The U-Net architecture ( Ronneberger et al .
, 2015 ) improves over the standard FCN by connecting the contracting ( bottom-up ) and expanding ( top-down ) paths at the same resolution : the output of the encoder layer at each level is passed to the decoder at the same level . 2.2 REFERRING EXPRESSION COMPREHENSION . Early models for this task were typically built using a hybrid LSTM-CNN architecture ( Hu et al. , 2016b ; Mao et al. , 2016 ) . Newer models ( Hu et al. , 2017 ; Yu et al. , 2016 ; 2018 ; Wang et al. , 2019 ) use a Region-based CNN ( R-CNN ) variant ( Girshick et al. , 2014 ; Ren et al. , 2017 ; He et al. , 2017 ) as a sub-component to generate object proposals . Nagaraja et al . ( 2016 ) proposes a solution based on multiple instance learning . Cirik et al . ( 2018 ) implements a model based on Neural Module Networks ( NMN ) by using syntax information . Among the literature , Compositional Modular Network ( CMN ) ( Hu et al. , 2017 ) , Modular Attention Network ( MAttNet ) ( Yu et al. , 2018 ) and Neural Module Tree Networks ( NMTree ) ( Liu et al. , 2019 ) are the most notable state-of-the-art methods , and all of them are based on NMN ( Andreas et al. , 2016 ) . 2.3 IMAGE SEGMENTATION FROM REFERRING EXPRESSIONS . Notable models for this task include the Recurrent Multimodal Interaction ( RMI ) model ( Liu et al. , 2017 ) , Recurrent Refinement Networks ( RRN ) ( Li et al. , 2018 ) , Dynamic Multimodal Network ( DMN ) ( Margffoy-Tuay et al. , 2018 ) , Convolutional RNN with See-through-Text Embedding Pixelwise heatmaps ( Step-ConvRNN or ConvRNN-STEM ) ( Chen et al. , 2019a ) , the Caption-aware Consistent Segmentation Model ( CAC ) ( Chen et al. , 2019b ) , the Bi-directional Relationship Inferring Network ( BRINet ) ( Hu et al. , 2020 ) and the Linguistic Structure guided Context Modelling ( LSCM ) module ( Hui et al. , 2020 ) . RRN , which has a structure similar to U-Net , is built on top of a Convolutional LSTM ( ConvLSTM ) ( SHI et al. , 2015 ) network . Unlike our model , its ConvLSTM filters are not generated from the language representation , and the multi-modal representation is used only in the initial time step . DMN generates 1 x 1 language-conditional filters for the language representation of each word . It performs a convolution operation on the visual representation with these language-conditional filters to generate a multi-modal representation for each word . Like RMI , the word-level multi-modal representations are fed as input to a multi-modal RNN to obtain a multi-modal representation for image/language pairs . Step-ConvRNN starts with a visual-textual co-embedding and uses a ConvRNN to iteratively refine a heatmap for image segmentation . Step-ConvRNN uses a bottom-up and top-down approach similar to this work ; however , our model uses spatial language-generated kernels within a simpler architecture . CAC also generates 1 x 1 language-conditional dynamic filters . Unlike our model , CAC applies these dynamic filters to a single resolution / single feature map and additionally generates location-specific dynamic filters ( e.g . left , bottom ) to capture relations between objects that exist at different parts of the image . BRINet implements two different attention mechanisms : language-guided visual attention and vision-guided linguistic attention . LSCM implements a dependency-parsing-guided bottom-up attention mechanism to predict masks . 2.4 LANGUAGE-CONDITIONAL FILTERS . To control a deep learning model with language , early work such as Modulated ResNet ( MODERN ) ( De Vries et al .
, 2017 ) and Feature-wise Linear Modulation ( FiLM ) ( Perez et al. , 2018 ) used conditional batch normalization layers with only language-conditioned coefficients rather than customized filters . Finn et al . ( 2016 ) generates action-conditioned dynamic filters . Li et al . ( 2017 ) is the first work to generate dynamic language-conditional filters . Gao et al . ( 2018 ) proposes a VQA method that has a group convolutional layer whose filters are generated from the question input . Gavrilyuk et al . ( 2018 ) introduces a new task called actor and action segmentation and , to solve this task , proposes an architecture which uses dynamic filters at multiple resolutions . Similar to our work , Misra et al . ( 2018 ) adds language-conditional filters to a U-Net based architecture for the task of mapping instructions to actions in virtual environments . Subsequent work also uses an architecture based on U-Net and Misra et al . ( 2018 ) to solve a navigation and spatial reasoning problem . Those models only modulate top-down visual processing with language . Referring expression models that incorporate language-conditional filters into the architecture include ( Chen et al. , 2019b ; Margffoy-Tuay et al. , 2018 ) . Margffoy-Tuay et al . ( 2018 ) generates language-conditional filters for words individually rather than for the whole sentence . Chen et al . ( 2019b ) generates 1 x 1 language-conditional filters from expressions . To make the 1 x 1 language-conditional filters spatially aware , different filters are generated for different image regions ( e.g . top , left , right , bottom ) . Our main contribution in this work is an explicit evaluation of language-conditional filters for bottom-up visual processing in comparison to only using language for top-down attention control .
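As a concrete illustration of the kind of language-conditional filters discussed above , the snippet below sketches one way to generate per-sample 1 x 1 convolution kernels from a sentence embedding and apply them to a feature map at one level of a U-Net branch . All module names and sizes are our own illustrative assumptions , not the authors ' implementation .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LangConditionalConv(nn.Module):
    """Generates per-sample 1x1 conv kernels from a sentence embedding and
    applies them to a visual feature map at one level of the U-Net."""
    def __init__(self, lang_dim, in_ch, out_ch):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.gen = nn.Linear(lang_dim, out_ch * in_ch)  # kernel generator

    def forward(self, feat, lang):
        # feat: (B, in_ch, H, W); lang: (B, lang_dim)
        B, _, H, W = feat.shape
        k = self.gen(lang).view(B * self.out_ch, self.in_ch, 1, 1)
        # grouped conv applies each sample's own generated kernel
        out = F.conv2d(feat.reshape(1, B * self.in_ch, H, W), k, groups=B)
        return out.view(B, self.out_ch, H, W)

# Usage: modulate a bottom-up (contracting) stage with language.
lang = torch.randn(2, 256)           # e.g., an LSTM sentence embedding
feat = torch.randn(2, 64, 80, 80)    # visual features at some level
fused = torch.relu(LangConditionalConv(256, 64, 64)(feat, lang))
```

The grouped-convolution trick applies each sample 's own generated kernel in a single batched call ; the same module can be attached to the bottom-up branch , the top-down branch , or both .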
This paper concerns the problem of image segmentation from referring expressions. Given an image and a query phrase about a particular object in the image, the goal is to locate the target object as a mask at the pixel level. The basic framework is U-Net, which consists of two branches: an image encoder and a segmentation map decoder (connected at the bottom in a U-shape). The paper proposes to use language to modulate the image encoding and decoding process intensively, by applying auxiliary convolutional connections between the two branches and further conditioning the convolution kernels on the language embedding. Overall, the paper is easy to follow and provides a good literature review.
Acting in Delayed Environments with Non-Stationary Markov Policies
1 INTRODUCTION . The body of work on reinforcement learning ( RL ) and planning problem setups has grown vast in recent decades . Examples of such distinctions are different objectives and constraints , assumptions on access to the model or logged trajectories , on-policy or off-policy paradigms , etc . ( Puterman , 2014 ) . However , the study of delay in RL remains scarce . It is almost always assumed the action is executed as soon as the agent chooses it . This assumption seldom holds in real-world applications ( Dulac-Arnold et al. , 2019 ) . Latency in action execution can either stem from the increasing computational complexity of modern systems and related tasks , or from the infrastructure itself . The wide range of such applications includes robotic manipulation , cloud computing , financial trading , sensor feedback in autonomous systems , and more . To elaborate , consider an autonomous vehicle required to respond immediately to a sudden hazard on the highway . Driving at high speed , it suffers from perception module latency when inferring the surrounding scene , as well as delay in actuation once a decision has been made . While the latter phenomenon is an instance of execution delay , the former corresponds to observation delay . These two types of delay are in fact equivalent and can thus be treated with the same tools ( Katsikopoulos & Engelbrecht , 2003 ) . Related works . The notion of delay is prominent in control theory with linear time-invariant systems ( Bar-Ilan & Sulem , 1995 ; Dugard & Verriest , 1998 ; Richard , 2003 ; Fridman , 2014 ; Bruder & Pham , 2009 ) . While the delayed control literature is vast , our work intersects with it mostly in motivation . In the above control theory formulations , the system evolves according to some known diffusion or stochastic differential equation . Differently , the discrete-time MDP framework does not require any structural assumption on the transition function or reward . A few works consider a delay in the reward signal rather than in observation or execution . Delayed reward has been studied on multi-armed bandits for deterministic and stochastic latencies ( Joulani et al. , 2013 ) and for the resulting arm credit assignment problem ( Pike-Burke et al. , 2017 ) . In the MDP setting , Campbell et al . ( 2016 ) proposed a Q-learning variant for reward delay that follows a Poisson distribution . Katsikopoulos & Engelbrecht ( 2003 ) considered three types of delay : observation , execution , and reward . Chen et al . ( 2020b ) studied execution delay in multi-agent systems . The above works on MDPs employed state-augmentation with a primary focus on empirical evaluation of the degradation introduced by the delay . In this augmentation method , all missing information is concatenated with the original state to overcome the partial observability induced by the delay . The main drawback of this embedding method is the exponential growth of the state-space with the delay value ( Walsh et al. , 2009 ; Chen et al. , 2020a ) and , in the case of ( Chen et al. , 2020b ) , an additional growth that is polynomial in the number of agents . Walsh et al . ( 2009 ) avoided state-augmentation in MDPs with delayed feedback via a planning approach . By assuming the transition kernel to be close to deterministic , their model-based simulation ( MBS ) algorithm relies on a most-likely present state estimate .
Since the Delayed-Q algorithm we devise here resembles MBS in spirit , we highlight crucial differences between them : First , MBS is a conceptual algorithm that requires the state-space to be finite or discretized . This makes it highly sensitive to the state-space size , as we shall demonstrate in Sec . 7 [ Fig . 5 ( c ) ] , prohibiting it from running on domains like Atari . Differently , Delayed-Q works with the original , possibly continuous state-space . Second , MBS is an offline algorithm : it estimates a surrogate , non-delayed MDP from samples , and only then does it solve that MDP to obtain the optimal policy ( Walsh et al. , 2009 ) [ Alg . 2 , l. 16 ] . This is inapplicable to large continuous domains and is again in contrast to Delayed-Q . Recent studies considered a concurrent control setting where action sampling occurs simultaneously with state transition ( Ramstedt & Pal , 2019 ; Xiao et al. , 2020 ) . Both assumed a single action selection between two consecutive observations , thus reducing the problem to an MDP with execution delay of m = 1 . Chen et al . ( 2020a ) have generalized it to an arbitrary number of actions between two observations . Hester & Stone ( 2013 ) addressed execution delay in the braking control of autonomous vehicles with a relatively low delay of m ≤ 3 . All these works employ state-augmentation to preserve the Markov property of the process , whereas we are interested in whether this restriction can be lifted . Additionally , they studied policy-gradient ( policy-based ) methods , while we introduce a Q-learning style ( value-based ) algorithm . Likewise , Firoiu et al . ( 2018 ) proposed a modified version of the policy-based IMPALA ( Espeholt et al. , 2018 ) which is evaluated on a single video game with delay values of m ≤ 7 . To the best of our knowledge , our work is the first to tackle a delayed variant of the popular Atari suite ( Bellemare et al. , 2013 ) . Contributions . Revisiting RL with execution delay both in theory and practice , we introduce : 1 . An analysis of a delayed MDP quantifying the trade-off between stochasticity and delay . 2 . The first tight upper and lower complexity bounds on policy iteration for action-augmented MDPs . We stress that this is also a contribution to the general RL theory of non-delayed MDPs . 3 . A new formalism of execution-delay MDPs that avoids action-embedding . Using it , we prove that , out of the larger set of history-dependent policies , restricting to non-stationary deterministic Markov policies is sufficient for optimality in delayed MDPs . We also derive a Bellman-type recursion for a delayed value function . 4 . A model-based DQN-style algorithm that yields non-stationary Markov policies . Our algorithm outperforms the alternative standard and state-augmented DDQN in 39 of 42 experiments spanning 3 environment categories and delays of up to m = 25 . 2 PRELIMINARIES : NON-DELAYED STANDARD MDP . Here , we describe the standard non-delayed MDP setup . Later , in Sec . 5 , we introduce its generalization to the delayed case . We follow and extend notations from ( Puterman , 2014 ) [ Sec . 2.1. ] . An infinite horizon discounted MDP is a tuple ( S , A , P , r , γ ) where S and A are finite state and action spaces , P : S × A → Δ_S is a transition kernel , the reward r : S × A → R is a bounded function , and γ ∈ [ 0 , 1 ) is a discount factor . At time t , the agent is in s_t and draws an action a_t according to a decision rule d_t that maps past information to a probability distribution q_{d_t} over the action set .
Once a_t is taken , the agent receives a reward r ( s_t , a_t ) . A decision rule can be history-dependent ( H ) or Markovian ( M ) , and randomized ( R ) or deterministic ( D ) . Denote by H_t the set of possible histories up to time t . Then , a history-dependent decision rule is given by d_t : H_t → Δ_A with h_t ↦ q_{d_t} ( h_t ) ( · ) . A Markovian decision rule , on the other hand , maps states to actions , i.e. , d_t : S → Δ_A with s ↦ q_{d_t} ( s ) ( · ) . A policy π := ( d_t )_{t ≥ 0} is a sequence of decision rules whose type dictates that of the policy . It can be either Markovian deterministic ( Π^MD ) or randomized ( Π^MR ) , history-dependent deterministic ( Π^HD ) or randomized ( Π^HR ) . It is stationary if its decision rules do not depend on time , i.e. , d_t = d for all t ≥ 0 . This defines the smaller class of stationary policies : deterministic ( Π^SD ) and randomized ( Π^SR ) . Note that stationary policies are inherently Markovian . Indeed , at time t = 0 , d : H_0 → Δ_A is state-dependent because H_0 = S . Since the policy is stationary , i.e. , d_t = d for all t , subsequent decision rules are also state-dependent , thus Markovian . This makes Π^HR the most general set and Π^SD the most specific . We denote the probability model by P_0^π , where the subscript 0 stands for the delay value m = 0 . The related random variables are denoted by s̃_t ∈ S , ã_t ∈ A and h̃_t ∈ ( S × A )^t × S . The value function given policy π ∈ Π^HR is defined as v^π ( s ) = E_0^π [ Σ_{t=0}^∞ γ^t r ( s̃_t , ã_t ) | s̃_0 = s ] , where the expectation is taken with respect to ( w.r.t . ) P_0^π ( · | s̃_0 = s ) . Let the optimal value function be v* ( s ) := max_{π ∈ Π^HR} v^π ( s ) , for all s ∈ S . ( 1 ) Our goal is to find a policy π* that yields v* , and it is known that focusing on stationary deterministic policies π ∈ Π^SD is sufficient for reaching the optimum in ( 1 ) ( Puterman , 2014 ) [ Thm . 6.2.10. ] . 3 MDPS WITH DELAY : A DEGRADATION EXAMPLE . In an MDP with execution delay m , any action chosen at time t is executed at t + m . Therefore , at each step , the agent witnesses the current state and the action being executed , but selects a new action that will be applied in a future state . We assume that m decided actions are already awaiting execution at t = 0 , so at any given time , the queue of pending actions is of constant length m . As we illustrate in the next example , having a delay generally comes at a price . Example 3.1 ( Two-state MDP ) . Consider the MDP in Fig . 1 . It has two states and two actions : S = { s_0 , s_1 } , A = { a_0 , a_1 } . The transition kernel is independent of the action : for all s , s′ ∈ S s.t . s ≠ s′ , P ( s′ | s , a ) = P ( s′ | s ) = p where p ∈ [ 0.5 , 1 ] . The reward is positive for one of the two actions only : r ( s_0 , a_0 ) = r ( s_1 , a_1 ) = 1 , r ( s_0 , a_1 ) = r ( s_1 , a_0 ) = 0 . We inspect the return obtained from the commonly used set of stationary deterministic policies Π^SD . As expected , the highest possible return is attained when m = 0 , but it monotonically decreases with the delay m and increases with the level of certainty p . We analytically quantify this effect in the following and give a proof in Appx . A.1 . Proposition 3.1 . For delay m ∈ N and p ∈ [ 0.5 , 1 ] , the optimal return of π* ∈ Π^SD is ( 1 + ( 2p − 1 )^m ) / ( 2 ( 1 − γ ) ) . Remark 3.1 . This result demonstrates a clear tradeoff between stochasticity and delay . For p → 0.5 or m → ∞ , the return goes to its minimal value of 0.5 / ( 1 − γ ) . Contrarily , for p → 1 or m → 0 , it goes to its maximal value of 1 / ( 1 − γ ) .
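The closed form in Proposition 3.1 is easy to sanity-check numerically . The short Monte-Carlo script below ( our own sketch , not from the paper ) simulates the two-state MDP with an m-step action queue under the stationary deterministic policy that targets the most likely state m steps ahead : it matches the observed state when m is even and the opposite state when m is odd ( our assumption about the optimal policy , given the symmetric flip probability p ≥ 0.5 ) .

```python
import numpy as np

def simulated_return(p, m, gamma, T=1000, runs=2000, seed=0):
    rng = np.random.default_rng(seed)
    # parity rule: target the most likely state m steps ahead
    decide = (lambda s: s) if m % 2 == 0 else (lambda s: 1 - s)
    total = 0.0
    for _ in range(runs):
        s, queue = 0, []
        for _ in range(m):                 # warm-up: fill the pending queue
            queue.append(decide(s))
            s = 1 - s if rng.random() < p else s
        disc, ret = 1.0, 0.0
        for _ in range(T):
            queue.append(decide(s))        # decided now, executed m steps later
            a = queue.pop(0)               # action decided m steps ago executes
            ret += disc * (a == s)         # reward 1 iff action matches state
            disc *= gamma
            s = 1 - s if rng.random() < p else s
        total += ret
    return total / runs

p, m, gamma = 0.8, 4, 0.95
analytic = (1 + (2 * p - 1) ** m) / (2 * (1 - gamma))
print(round(simulated_return(p, m, gamma), 2), round(analytic, 2))
```

With p = 0.8 , m = 4 and γ = 0.95 , both values come out near ( 1 + 0.6^4 ) / ( 2 · 0.05 ) ≈ 11.3 , matching the proposition .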
This paper investigated the problem of learning agents when there are execution delays. The authors (i) used a two-state MDP example to show some equivalence between execution delay and stochasticity of transitions; (ii) analyzed the action-augmentation method, which accumulates all the history and then makes decisions. They show that a classic Policy Iteration (PI) method with the augmentation unfortunately has iteration complexity depending exponentially on the delay time $m > 0$; (iii) formulated the Execution-Delay (ED)-MDP and showed that there exists a non-stationary Markov policy which attains the optimal value, while stationary policies can be suboptimal; (iv) proposed a model-based Q-learning method, Delayed-Q, which uses the predicted future state-action sequence to make decisions; (v) did experiments on Maze, CartPole, Acrobot and Atari tasks to verify the proposed Delayed-Q method.
Feature-Robust Optimal Transport for High-Dimensional Data
1 INTRODUCTION . Optimal transport ( OT ) is a machine learning problem with several applications in the computer vision and natural language processing communities . The applications include Wasserstein distance estimation ( Peyré et al. , 2019 ) , domain adaptation ( Yan et al. , 2018 ) , multitask learning ( Janati et al. , 2019 ) , barycenter estimation ( Cuturi & Doucet , 2014 ) , semantic correspondence ( Liu et al. , 2020 ) , feature matching ( Sarlin et al. , 2019 ) , and photo album summarization ( Liu et al. , 2019 ) . The OT problem is extensively studied in the computer vision community as the earth mover ’ s distance ( EMD ) ( Rubner et al. , 2000 ) . However , the computational cost of EMD is cubic and highly expensive . Recently , the entropic regularized EMD problem was proposed ; this problem can be solved using the Sinkhorn algorithm with a quadratic cost ( Cuturi , 2013 ) . Owing to the development of the Sinkhorn algorithm , researchers have replaced the EMD computation with its regularized counterparts . However , the optimal transport problem for high-dimensional data has remained unsolved for many years . Recently , a robust variant of OT was proposed for high-dimensional OT problems and used for divergence estimation ( Paty & Cuturi , 2019 ; 2020 ) . In the robust OT framework , the transport plan is computed with the discriminative subspace of the two data matrices X ∈ R^{d×n} and Y ∈ R^{d×m} . The subspace can be obtained using dimensionality reduction . An advantage of the subspace robust approach is that it does not require prior information about the subspace . However , given prior information such as feature groups , we can consider a computationally efficient formulation . The computation of the subspace can be expensive if the dimensionality of the data is high , for example , 10^4 . One of the most common forms of prior information is a feature group . The use of group features is popular in feature selection problems in the biomedical domain and has been extensively studied in Group Lasso ( Yuan & Lin , 2006 ) . The key idea of Group Lasso is to prespecify the group variables and select the set of group variables using the group norm ( also known as the sum of ℓ2 norms ) . For example , if we use a pretrained neural network as a feature extractor and compute OT using the features , then we require careful selection of important layers to compute OT . Specifically , each layer output is regarded as a grouped input . Therefore , using a feature group as prior information is a natural setup and is important for considering OT for deep neural networks ( DNNs ) . In this paper , we propose a high-dimensional optimal transport method that utilizes prior information in the form of grouped features . Specifically , we propose a feature-robust optimal transport ( FROT ) problem , for which we select distinct group feature sets to estimate a transport plan , instead of determining discriminative subspaces as proposed in ( Paty & Cuturi , 2019 ; 2020 ) . To this end , we formulate the FROT problem as a min–max optimization problem and transform it into a convex optimization problem , which can be accurately solved using the Frank–Wolfe algorithm ( Frank & Wolfe , 1956 ; Jaggi , 2013 ) . The FROT subproblem can be efficiently solved using the Sinkhorn algorithm ( Cuturi , 2013 ) . An advantage of FROT is that it can yield a transport plan from high-dimensional data using feature selection , whereby the significance of the features is obtained without any additional cost .
Therefore , the FROT formulation is highly suited for high-dimensional OT problems . Through synthetic experiments , we initially demonstrate that the proposed FROT is robust to noise dimensions ( see Figure 1 ) . Furthermore , we apply FROT to a semantic correspondence problem ( Liu et al. , 2020 ) and show that the proposed algorithm achieves SOTA performance . Contribution : . • We propose a feature-robust optimal transport ( FROT ) problem and derive a simple and efficient Frank–Wolfe based algorithm . Furthermore , we propose a feature-robust Wasserstein distance ( FRWD ) . • We apply FROT to a high-dimensional feature selection problem and show that FROT is consistent with the Wasserstein distance-based feature selection algorithm , with less computational cost than the original algorithm . • We use FROT for the layer selection problem in a semantic correspondence problem and show that the proposed algorithm outperforms existing baseline algorithms . 2 BACKGROUND . In this section , we briefly introduce the OT problem . Optimal transport ( OT ) : The following are given : independent and identically distributed ( i.i.d . ) samples X = { x_i }_{i=1}^{n} ∈ R^{d×n} from a d-dimensional distribution p , and i.i.d . samples Y = { y_j }_{j=1}^{m} ∈ R^{d×m} from the d-dimensional distribution q . In the Kantorovich relaxation of OT , admissible couplings are defined by the set of transport plans : U ( µ , ν ) = { Π ∈ R_+^{n×m} : Π 1_m = a , Π^⊤ 1_n = b } , where Π ∈ R_+^{n×m} is called the transport plan , 1_n is the n-dimensional vector whose elements are ones , and a = ( a_1 , a_2 , . . . , a_n )^⊤ ∈ R_+^{n} and b = ( b_1 , b_2 , . . . , b_m )^⊤ ∈ R_+^{m} are the weights . The OT problem between two discrete measures µ = Σ_{i=1}^{n} a_i δ_{x_i} and ν = Σ_{j=1}^{m} b_j δ_{y_j} determines the optimal transport plan of the following problem : min_{Π ∈ U ( µ , ν )} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} c ( x_i , y_j ) , ( 1 ) where c ( x , y ) is a cost function . For example , the squared Euclidean distance is used , that is , c ( x , y ) = ‖x − y‖_2^2 . Solving the OT problem of Eq . ( 1 ) ( also known as the earth mover ’ s distance ) using linear programming requires O ( n^3 ) computation ( for n = m ) , which is computationally expensive . To address this , an entropic-regularized optimal transport is used ( Cuturi , 2013 ) : min_{Π ∈ U ( µ , ν )} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} c ( x_i , y_j ) + ε H ( Π ) , where ε ≥ 0 is the regularization parameter , and H ( Π ) = Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} ( log ( π_{ij} ) − 1 ) is the entropic regularization . If ε = 0 , then the regularized OT problem reduces to the EMD problem . Owing to entropic regularization , the entropic regularized OT problem can be accurately solved using Sinkhorn iteration ( Cuturi , 2013 ) with an O ( nm ) computational cost ( see Algorithm 1 ) . Wasserstein distance : If the cost function is defined as c ( x , y ) = d ( x , y ) with d ( x , y ) a distance function and p ≥ 1 , then we define the p-Wasserstein distance of two discrete measures µ = Σ_{i=1}^{n} a_i δ_{x_i} and ν = Σ_{j=1}^{m} b_j δ_{y_j} as W_p ( µ , ν ) = ( min_{Π ∈ U ( µ , ν )} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} d ( x_i , y_j )^p )^{1/p} . Recently , a robust variant of the Wasserstein distance , called the subspace robust Wasserstein distance ( SRW ) , was proposed ( Paty & Cuturi , 2019 ) . The SRW computes the OT problem in a discriminative subspace , which can be determined by solving dimensionality-reduction problems . Owing to its robustness , it can compute the Wasserstein distance from noisy data .
The SRW is given as SRW ( µ , ν ) = ( min_{Π ∈ U ( µ , ν )} max_{U ∈ R^{d×k} , U^⊤ U = I_k} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} ‖U^⊤ x_i − U^⊤ y_j‖_2^2 )^{1/2} , ( 2 ) where U is the projection matrix with k ≤ d , and I_k ∈ R^{k×k} is the identity matrix . The SRW or its relaxed problem can be efficiently estimated using either eigenvalue decomposition or the Frank–Wolfe algorithm . 3 PROPOSED METHOD . This paper proposes FROT . We assume that the vectors are grouped as x = ( x^{(1)⊤} , . . . , x^{(L)⊤} )^⊤ and y = ( y^{(1)⊤} , . . . , y^{(L)⊤} )^⊤ . Here , x^{(ℓ)} ∈ R^{d_ℓ} and y^{(ℓ)} ∈ R^{d_ℓ} are the d_ℓ-dimensional vectors , where Σ_{ℓ=1}^{L} d_ℓ = d . This setting is useful if we know the explicit group structure for the feature vectors a priori . In an application to L-layer neural networks , we consider x^{(ℓ)} and y^{(ℓ)} as outputs of the ℓ-th layer of the network . If we do not have a priori information , we can consider each feature independently ( i.e. , d_1 = d_2 = . . . = d_L = 1 and L = d ) . All proofs in this section are provided in the Appendix . 3.1 FEATURE-ROBUST OPTIMAL TRANSPORT ( FROT ) . The FROT formulation is given by min_{Π ∈ U ( µ , ν )} max_{α ∈ Σ_L} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} Σ_{ℓ=1}^{L} α_ℓ c ( x_i^{(ℓ)} , y_j^{(ℓ)} ) , ( 3 ) where Σ_L = { α ∈ R_+^{L} : α^⊤ 1_L = 1 } is the probability simplex . The underlying concept of FROT is to estimate the transport plan Π using distinct groups with large distances between { x_i^{(ℓ)} }_{i=1}^{n} and { y_j^{(ℓ)} }_{j=1}^{m} . We note that determining the transport plan from nondistinct groups is difficult because the data samples in { x_i^{(ℓ)} }_{i=1}^{n} and { y_j^{(ℓ)} }_{j=1}^{m} overlap . By contrast , in distinct groups , { x_i^{(ℓ)} }_{i=1}^{n} and { y_j^{(ℓ)} }_{j=1}^{m} are different , and this aids in determining an optimal transport plan . This is an idea intrinsically similar to the subspace robust Wasserstein distance ( Paty & Cuturi , 2019 ) , which estimates the transport plan in the discriminative subspace , while our approach selects important groups . Therefore , FROT can be regarded as a feature selection variant of the vanilla OT problem in Eq . ( 1 ) , whereas the subspace robust version uses dimensionality-reduction counterparts . Algorithm 1 ( Sinkhorn algorithm ) : 1 : Input : a , b , C , ε , t_max . 2 : Initialize K = e^{−C/ε} , u = 1_n , v = 1_m , t = 0 . 3 : while t ≤ t_max and not converged do : 4 : u = a / ( K v ) ; 5 : v = b / ( K^⊤ u ) ; 6 : t = t + 1 ; 7 : end while . 8 : return Π = diag ( u ) K diag ( v ) . Algorithm 2 ( FROT with the Frank–Wolfe algorithm ) : 1 : Input : { x_i }_{i=1}^{n} , { y_j }_{j=1}^{m} , η , and ε . 2 : Initialize Π , compute { C_ℓ }_{ℓ=1}^{L} . 3 : for t = 0 . . . T do : 4 : Π̂ = argmin_{Π ∈ U ( µ , ν )} 〈 Π , M_{Π^{(t)}} 〉 + ε H ( Π ) ; 5 : Π^{(t+1)} = ( 1 − γ ) Π^{(t)} + γ Π̂ with γ = 2 / ( 2 + t ) . 6 : end for . 7 : return Π^{(T)} . Using FROT , we can define a p-feature robust Wasserstein distance ( p-FRWD ) . Proposition 1 . For the distance function d ( x , y ) , FRWD_p ( µ , ν ) = ( min_{Π ∈ U ( µ , ν )} max_{α ∈ Σ_L} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} Σ_{ℓ=1}^{L} α_ℓ d ( x_i^{(ℓ)} , y_j^{(ℓ)} )^p )^{1/p} , ( 4 ) is a distance for p ≥ 1 . Note that we can show that 2-FRWD is a special case of SRW with d ( x , y ) = ‖x − y‖_2 ( see Appendix ) . The key difference between SRW and FRWD is that FRWD can use any distance , while SRW can only use d ( x , y ) = ‖x − y‖_2 .
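For concreteness , the sketch below implements Algorithm 1 and mirrors the Frank–Wolfe loop of Algorithm 2 . One assumption to flag : the paper 's η input suggests the inner max over the simplex is smoothed , so we take M_Π to be the softmax-weighted ( temperature η ) combination of the per-group cost matrices ; treat this as our reading of the algorithm , not the authors ' exact linearization . Hyperparameter values are illustrative .

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, t_max=1000, tol=1e-9):
    """Algorithm 1: entropic-regularized OT plan for cost matrix C."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(t_max):
        u_prev = u
        u = a / (K @ v)
        v = b / (K.T @ u)
        if np.max(np.abs(u - u_prev)) < tol:
            break
    return u[:, None] * K * v[None, :]       # diag(u) K diag(v)

def frot(a, b, group_costs, eta=10.0, eps=0.5, n_iter=50):
    """Algorithm 2 sketch: Frank-Wolfe on a softmax-smoothed FROT objective.
    group_costs: list of L per-group cost matrices C_l of shape (n, m)."""
    C = np.stack(group_costs)                     # (L, n, m)
    Pi = np.outer(a, b)                           # feasible initialization
    for t in range(n_iter):
        scores = (Pi[None] * C).sum(axis=(1, 2))  # <Pi, C_l> per group
        alpha = np.exp(eta * (scores - scores.max()))
        alpha /= alpha.sum()                      # smoothed max over simplex
        M = np.tensordot(alpha, C, axes=1)        # alpha-weighted cost M_Pi
        Pi_hat = sinkhorn(a, b, M, eps=eps)       # linearized subproblem
        gamma = 2.0 / (2.0 + t)                   # step size from Algorithm 2
        Pi = (1 - gamma) * Pi + gamma * Pi_hat
    return Pi, alpha                              # plan + group importance

# Toy usage: two feature groups; only the first group is discriminative.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(6, 4)), rng.normal(size=(5, 4))
Y[:, :2] += 3.0                                   # shift group 1 only
costs = [((X[:, None, g] - Y[None, :, g]) ** 2).sum(-1)
         for g in (slice(0, 2), slice(2, 4))]
a, b = np.full(6, 1 / 6), np.full(5, 1 / 5)
Pi, alpha = frot(a, b, costs)
print(Pi.sum(1), alpha)   # row marginals ~ a; alpha concentrates on group 1
```

Note how the returned α doubles as the feature-group importance that FROT provides at no additional cost .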
This work proposes variants of robust OT / the p-Wasserstein distance (Eqs. (3)/(4)), where the ground cost is, in some sense, the maximum over costs computed on (pre-specified) groups of features. The motivation is similar to that for feature selection: perhaps only a few of these groups of features are critical/sufficient for OT purposes. So it can also be understood as joint feature-group selection with OT. The resulting convex problem is proposed to be solved using FW, whose details are presented (including convergence).
Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent
1 Introduction . Stochastic Gradient Descent ( SGD ) is one of the main optimization algorithms used throughout machine learning . Scaling SGD can mean aggregating more but inevitably less well-sanitized data , and distributing the training over several machines , making SGD even more vulnerable to Byzantine faults : corrupted/malicious training datapoints , software vulnerabilities , etc . Many Byzantine-resilient techniques have been proposed to keep SGD safer from these faults , e.g . Alistarh et al . ( 2018 ) ; Damaskinos et al . ( 2018 ) ; Yang & Bajwa ( 2019b ) ; TianXiang et al . ( 2019 ) ; Bernstein et al . ( 2019 ) ; Yang & Bajwa ( 2019a ) ; Yang et al . ( 2019 ) ; Rajput et al . ( 2019 ) ; Muñoz-González et al . ( 2019 ) . These techniques mainly use the same adversarial model ( Figure 2 ) : a central , trusted parameter server distributing gradient computations to several workers , a minority of which is controlled by an adversary and can submit arbitrary gradients . Two families of defense techniques can be distinguished . The first employs redundancy schemes , inspired by coding theory . This approach has strong resilience guarantees , but its requirement to share data between workers makes it unsuitable for several classes of applications , e.g . when data can not be shared for privacy , scalability or legal reasons . The second family uses statistically-robust aggregation schemes , and is the focus of this paper . The underlying idea is simple . At each training step , the server aggregates the stochastic gradients computed by the workers into one gradient , using a function called a Byzantine-resilient Gradient Aggregation Rule ( GAR ) . These statistically-robust GARs are designed to produce at each step a gradient that is expected to decrease the loss . Intuitively , one can think of this second family as different formulations of the multivariate median . In particular , if the non-Byzantine gradients were all equal at each step , any different ( adversarial ) gradient would be rejected by each of these medians , and no attack would succeed . But due to their stochastic nature , the non-Byzantine gradients are different : their variance is strictly positive . Formal guarantees on any given statistically-robust GAR typically require that the variance-norm ratio , the ratio between the variance of the non-Byzantine gradients and the norm of the expected non-Byzantine gradient , remains below a certain constant ( a constant which depends on the GAR itself and fixed hyperparameters ) . Intuitively , this notion of variance-norm ratio can be understood quite analogously to the inverse of the signal-to-noise ratio ( i.e . the “ noise-to-signal ” ratio ) in signal processing . However , Baruch et al . ( 2019 ) noted that an attack could send gradients that are close to non-Byzantine outlier gradients , building an apparent majority of gradients that could be sufficiently far from the expected non-Byzantine gradient to increase the loss . This can happen against most statistically-robust GARs in practice , as the variance-norm ratio is often too large for them . Two recent attacks ( Baruch et al. , 2019 ; Xie et al. , 2019a ) were able to exploit this fact to substantially hamper the training process ( which our experiments confirm ) .
The work presented here aims at ( substantially ) improving the resilience of statistically robust GARs “ also in practice ” , by reducing the variance-norm ratio of the gradients received by the server . We do that by taking advantage of an old technique normally used for acceleration : momentum . This technique is regularly applied at the server , but instead we propose to confer it upon each distributed worker , effectively making the Byzantine-resilient GAR aggregate accumulated gradients . Crucially , there is no computational complexity attached to our reformulation : it only reorders operations in existing ( distributed ) algorithms . Contributions . Our main contributions can be summarized as follows : • A reformulation of classical/Nesterov momentum which can significantly improve the effectiveness ( Figure 1 ) of any statistically-robust Gradient Aggregation Rule ( GAR ) . We formally analyze the impact of our reformulation on the variance-norm ratio of the aggregated gradients , a ratio on which the studied GARs assume an upper bound . • An extensive and reproducible set of experiments ( namely 736 × 5 = 3680 seeded runs , with one single script to reproduce all of our results , graphs included ) substantiating the effectiveness of our reformulation of momentum in improving existing defenses against state-of-the-art attacks . Paper Organization . Section 2 provides the necessary background . Section 3 presents our distributed momentum scheme and provides some intuitions on its effects . Formal developments of these intuitions are given in the appendix . Section 4 describes our experimental settings in detail , before presenting and analysing some of our experimental results . The appendix reports on the entirety of our experiments , and details how they can be reproduced ( in one command , graphs included ) . Section 5 discusses related and future work . 2 Background . 2.1 Byzantine Distributed SGD . Stochastic Gradient Descent ( SGD ) . We consider the classical problem of optimizing a non-convex , differentiable loss function Q : R^d → R , where Q ( θ_t ) := E_{x∼D} [ q ( θ_t , x ) ] for a fixed data distribution D . Ideally , we seek θ* such that θ* = arg min_θ Q ( θ ) . We employ mini-batch SGD optimization . Starting from an initial parameter θ_0 ∈ R^d , at every step t ≥ 0 , b samples ( x_t^{(1)} . . . x_t^{(b)} ) are sampled from D to estimate one stochastic gradient g_t := ( 1/b ) Σ_{k=1}^{b} ∇q ( θ_t , x_t^{(k)} ) ≈ ∇Q ( θ_t ) . This stochastic gradient is then used to update the parameters θ_t , with : θ_{t+1} = θ_t − α_t g_t . The sequence α_t > 0 is called the learning rate . Classical and Nesterov momentum . One field-tested amendment to mini-batch SGD is classical momentum ( Polyak , 1964 ) , where each gradient keeps an exponentially-decreasing effect on every subsequent update . Formally : θ_{t+1} = θ_t − α_t Σ_{u=0}^{t} µ^{t−u} g_u , with 0 < µ < 1 . Nesterov ( 1983 ) proposed another revision . Noting v_t the velocity vector , with v_0 = 0 , formally : v_{t+1} = µ v_t + ( 1/b ) Σ_{k=1}^{b} ∇q ( θ_t − α_t µ v_t , x_t^{(k)} ) and θ_{t+1} = θ_t − α_t v_{t+1} . Compared to classical momentum , the gradient is estimated at θ_t − α_t µ v_t instead of θ_t . Distributed SGD with Byzantine workers . We follow the parameter server model ( Li et al. , 2014 ) : one single process ( the parameter server ) holds the parameter vector θ_t ∈ R^d , and n others ( the workers ) estimate gradients . Among these n workers , up to f < n are said to be Byzantine , i.e . adversarial . Unlike the other n − f honest workers , these f Byzantine workers can submit arbitrary gradients ( Figure 2 ) . At each step t , the parameter server receives n different gradients g_t^{(1)} . . .
g_t^{(n)} , among which f are arbitrary ( submitted by the Byzantine workers ) . So the update equation becomes : θ_{t+1} = θ_t − α_t G_t , where : G_t := Σ_{u=0}^{t} µ^{t−u} F ( g_u^{(1)} , . . . , g_u^{(n)} ) . ( 1 ) Function F is called a Gradient Aggregation Rule ( GAR ) . In non-Byzantine settings , averaging is used ; formally : F ( g_t^{(1)} , . . . , g_t^{(n)} ) = ( 1/n ) Σ_{i=1}^{n} g_t^{(i)} . In the presence of Byzantine workers , a more robust aggregation is performed with a Byzantine-resilient GAR . Sections 2.2 and 2.3 respectively describe the 6 existing GARs and the 2 attacks studied in this paper . Adversarial Model . The goal of the adversary is to impede the learning process , which is defined as the maximization of the loss Q or , more judiciously for the image classification tasks tackled in this paper , as the minimization of the model ’ s top-1 cross-accuracy ( e.g. , with 10 classes , the worst possible final accuracy is arguably 0.1 ) . The adversary can not directly overwrite θ_t at the parameter server . The adversary only submits f arbitrary gradients to the server per step , via the f Byzantine workers it controls ( said otherwise , the f Byzantine workers can collude ) . We assume an omniscient adversary . In particular , the adversary knows the GAR used by the parameter server and , at each step , the adversary can generate Byzantine gradients dependent on the honest gradients submitted at the same step and any previous step . 2.2 Byzantine-resilient GARs . We briefly present below the 6 studied Gradient Aggregation Rules ( GARs ) . These GARs are Byzantine-resilient ( Section A ) , a notion first introduced by Blanchard et al . ( 2017 ) under the name ( α , f ) -Byzantine-resilience . When used within its operating assumptions , a Byzantine-resilient GAR guarantees convergence even in an adversarial setting . Let n be the number of gradients the parameter server received from the n workers ( Figure 2 ) , and let f be the maximum number of Byzantine gradients the GAR must tolerate . Krum ( Blanchard et al. , 2017 ) . Each received gradient is assigned a score . The score of gradient x is the sum of the squared ℓ2-distances between x and the n − f − 2 closest gradients to x . The aggregated gradient is then the arithmetic mean of the n − f − 2 gradients with the smallest scores . This variant is called Multi-Krum in the original paper . To be proven ( α , f ) -Byzantine resilient , Krum requires the variance of the honest gradients E ‖G_t − E G_t‖^2 to be bounded above as follows : 2 · ( n − f + ( f ( n − f − 2 ) + f^2 ( n − f − 1 ) ) / ( n − 2f − 2 ) ) · E ‖G_t − E G_t‖^2 < ‖E G_t‖^2 . ( 2 ) Median ( Yin et al. , 2018 ) . The coordinate-wise median of the n received gradients . Median is proven ( α , f ) -Byzantine resilient with the following condition on the variance-norm ratio : ( n − f ) · E ‖G_t − E G_t‖^2 < ‖E G_t‖^2 . ( 3 ) Trimmed Mean ( Yin et al. , 2018 ) . The coordinate-wise trimmed mean of the n received gradients . The trimmed mean of n values is the arithmetic mean of the values remaining after the f smallest and the f largest values have been discarded . From Theorem 1 of Xie et al . ( 2018b ) , we can derive the following condition on the variance-norm ratio : ( 2 ( f + 1 ) ( n − f ) / ( n − 2f )^2 ) · E ‖G_t − E G_t‖^2 < ‖E G_t‖^2 . ( 4 ) Phocas ( Xie et al. , 2018b ) . The coordinate-wise arithmetic mean of the n − f closest values to the coordinate-wise trimmed mean . From Theorem 2 of Xie et al . ( 2018b ) : ( 4 + 12 ( f + 1 ) ( n − f ) / ( n − 2f )^2 ) · E ‖G_t − E G_t‖^2 < ‖E G_t‖^2 . ( 5 )
MeaMed ( Xie et al. , 2018a ) . Same as Phocas , but with the median replacing the trimmed mean . Theorem 5 of Xie et al . ( 2018a ) provides the following condition : 10 ( n − f ) · E ‖G_t − E G_t‖^2 < ‖E G_t‖^2 . ( 6 ) Bulyan ( El-Mhamdi et al. , 2018 ) . This is a composite GAR , iterating on another GAR in a first selection phase . In the remainder of this paper , Bulyan will use Krum , so the first phase selects n − 2f − 2 gradients , at each iteration removing the highest scoring gradient . The aggregated gradient is the coordinate-wise arithmetic mean of the n − 4f − 2 closest values to the ( coordinate-wise ) median of the selected gradients . The theoretical requirements on the variance-norm ratio are the same as those of the underlying GAR . That is , in this paper , they are the same as Krum ( Equation 2 ) .
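For concreteness , the snippet below gives minimal numpy sketches ( ours , for illustration only ) of two of the GARs above , together with the worker-side momentum reformulation the paper proposes : each worker accumulates its own momentum vector and the server aggregates those vectors instead of raw gradients .

```python
import numpy as np

def krum(grads, f):
    """Multi-Krum (Blanchard et al., 2017): average the n-f-2 gradients
    with the smallest scores. grads: (n, d) array of received gradients."""
    n = len(grads)
    d2 = ((grads[:, None, :] - grads[None, :, :]) ** 2).sum(-1)  # (n, n)
    d2.sort(axis=1)
    scores = d2[:, 1:n - f - 1].sum(axis=1)  # n-f-2 closest; skip self (0)
    best = np.argsort(scores)[:n - f - 2]
    return grads[best].mean(axis=0)

def coordinate_median(grads):
    """Median GAR (Yin et al., 2018): coordinate-wise median."""
    return np.median(grads, axis=0)

class MomentumWorker:
    """Worker-side momentum: the worker submits its accumulated momentum
    vector instead of the raw stochastic gradient."""
    def __init__(self, d, mu=0.99):
        self.v = np.zeros(d)
        self.mu = mu
    def submit(self, grad):
        self.v = self.mu * self.v + grad
        return self.v

# One server step with n workers, f of them Byzantine (toy values):
rng = np.random.default_rng(0)
n, f, d, lr = 11, 2, 10, 0.1
workers = [MomentumWorker(d) for _ in range(n)]
theta = np.zeros(d)
grads = rng.normal(loc=1.0, scale=0.2, size=(n, d))  # stand-in honest grads
grads[:f] = -50.0                                    # crude Byzantine inputs
submitted = np.stack([w.submit(g) for w, g in zip(workers, grads)])
theta -= lr * krum(submitted, f)
```

Note that the only change relative to server-side momentum is where the exponential accumulation happens ; as the paper stresses , this reorders existing operations without adding computational cost .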
The paper describes an approach to counteract Byzantine attacks for distributed stochastic gradient descent by using the momentum of the gradient computed at the workers, which relies only on memory of the previous momentum. This seems to thwart current attacks in the majority of scenarios tested. The theoretical analysis seems appropriate. The empirical results are accompanied by precise details and should facilitate reproducibility (given a week of computation time).
Central Server Free Federated Learning over Single-sided Trust Social Networks
1 INTRODUCTION . Federated learning has been well recognized as a framework able to protect data privacy ( Konečný et al. , 2016 ; Smith et al. , 2017a ; Yang et al. , 2019 ) . State-of-the-art federated learning adopts the centralized network architecture where a centralized node collects the gradients sent from child agents to update the global model . Despite its simplicity , the centralized method suffers from communication and computational bottlenecks in the central node , especially for federated learning , where a large number of clients are usually involved . Moreover , to prevent reverse engineering of the user ’ s identity , a certain amount of noise must be added to the gradient to protect user privacy , which partially sacrifices efficiency and accuracy ( Shokri and Shmatikov , 2015 ) . To further protect data privacy and avoid the communication bottleneck , the decentralized architecture has recently been proposed ( Vanhaesebrouck et al. , 2017 ; Bellet et al. , 2018 ) , where the centralized node is removed , and each node only communicates with its neighbors ( with mutual trust ) by exchanging their local models . Exchanging local models is usually favored over sending private gradients for data privacy protection , because a local model is the aggregation or mixture of quite a large amount of data , while a local gradient directly reflects only one or a batch of private data samples . Although the advantages of the decentralized architecture over its state-of-the-art centralized counterpart have been well recognized , it usually can only be run on a network with mutual trust . That is , two nodes ( or users ) can exchange their local models only if they trust each other reciprocally ( e.g. , node A may trust node B , but if node B does not trust node A , they cannot communicate ) . Given a social network , one can only use the edges with mutual trust to run decentralized federated learning algorithms . Two immediate drawbacks are : ( 1 ) if the mutual trust edges do not form a connected network , federated learning does not apply ; ( 2 ) removing all single-sided edges from the communication network could significantly reduce the efficiency of communication . These drawbacks lead to the question : how do we effectively utilize the single-sided trust edges under the decentralized federated learning framework ? In this paper , we consider the social network scenario , where the centralized network is unavailable ( e.g. , there does not exist a central node that can build up connections with all users , or the centralized communication cost is not affordable ) . We make a minimal assumption on the social network : the data may come in a streaming fashion on each user node as the federated learning algorithm runs ; the trust between users may be single-sided , where user A trusts user B , but user B may not trust user A ( “ trust ” means “ would like to send information to ” ) . For the setting mentioned above , we develop a decentralized learning algorithm called online push-sum ( OPS ) which possesses the following features : • Only models rather than local gradients are exchanged among clients in our algorithm . This scheme can reduce the risk of exposing clients ’ data privacy ( Aono et al. , 2017 ) . • Our algorithm removes some constraints imposed by typical decentralized methods , which makes it more flexible in allowing arbitrary network topology . Each node only needs to know its out-neighbors instead of the global topology .
• We provide a rigorous regret analysis for the proposed algorithm and specifically distinguish two components in the online loss function: the adversarial component and the stochastic component, which can model clients' private data and the internal connections between clients, respectively.
Notation We adopt the following notation in this paper:
• For random variables $\xi_t^{(i)}$ subject to distributions $\mathcal{D}_t^{(i)}$, we use $\Xi_{n,T}$ and $\mathcal{D}_{n,T}$ to denote the sets of random variables and distributions, respectively: $\Xi_{n,T} = \{\xi_t^{(i)}\}_{1\le i\le n,\,1\le t\le T}$, $\mathcal{D}_{n,T} = \{\mathcal{D}_t^{(i)}\}_{1\le i\le n,\,1\le t\le T}$. The notation $\Xi_{n,T} \sim \mathcal{D}_{n,T}$ means that $\xi_t^{(i)} \sim \mathcal{D}_t^{(i)}$ for all $i \in [n]$ and $t \in [T]$.
• For a decentralized network with $n$ nodes, we use $W \in \mathbb{R}^{n\times n}$ to represent the confusion matrix, where $W_{ij} \ge 0$ is the weight that node $i$ sends to node $j$ ($i, j \in [n]$). We also write $N_i^{\text{out}} = \{j \in [n] : W_{ij} > 0\}$ and $N_i^{\text{in}} = \{k \in [n] : W_{ki} > 0\}$ for the sets of out-neighbors and in-neighbors of node $i$, respectively.
• The norm $\|\cdot\|$ denotes the $\ell_2$ norm $\|\cdot\|_2$ by default.
2 RELATED WORK . The concept of federated learning was first proposed in McMahan et al. (2016), which advocates a novel learning setting that learns a shared model by aggregating locally-computed gradient updates without centralizing distributed data on devices. Early examples of research into federated learning also include Konečný et al. (2015; 2016) and a widespread blog article posted by Google AI, McMahan and Ramage (2017). To address both statistical and system challenges, Smith et al. (2017b) and Caldas et al. (2018) propose a multi-task learning framework for federated learning and its related optimization algorithm, which extends the earlier works SDCA (Shalev-Shwartz and Zhang, 2013; Yang, 2013; Yang et al., 2013) and COCOA (Jaggi et al., 2014; Ma et al., 2015; Smith et al., 2016) to the federated learning setting. Among these optimization methods, Federated Averaging (FedAvg), proposed by McMahan et al. (2016), outperforms conventional synchronized mini-batch SGD in terms of communication rounds while still converging on non-IID and unbalanced data. Recent rigorous theoretical analysis (Stich, 2018; Wang and Joshi, 2018; Yu et al., 2018; Lin et al., 2018) shows that FedAvg is a special case of averaging periodic SGD (also called "local SGD"), which allows nodes to perform local updates with infrequent synchronization, so as to communicate less while converging quickly. However, these methods cannot be applied to a single-sided trust network (an asymmetric topology matrix). Decentralized learning is a typical parallel strategy where each worker is only required to communicate with its neighbors, which means the communication bottleneck (in the parameter server) is removed. It has already been proved that decentralized learning can outperform traditional centralized learning when the number of workers is comparably large and the network condition is poor (Lian et al., 2017). There are two main types of decentralized learning algorithms: those with a fixed network topology (He et al., 2018) and those whose topology is time-varying during training (Nedić and Olshevsky, 2015; Lian et al., 2018). Wu et al. (2017) and Shen et al. (2018) show that decentralized SGD converges at a rate comparable to the centralized algorithm with less communication, making large-scale model training feasible. Li et al.
(2018) provide a systematic analysis of the decentralized learning pipeline. Online learning has been studied for decades. It is well known that the lower bounds for online optimization methods are $O(\sqrt{T})$ and $O(\log T)$ for convex and strongly convex loss functions, respectively (Hazan et al., 2016; Shalev-Shwartz et al., 2012). In recent years, due to the increasing volume of data, distributed online learning, and especially decentralized methods, have attracted much attention. Examples of these works include Kamp et al. (2014); Shahrampour and Jadbabaie (2017); Lee et al. (2016). Notably, Zhao et al. (2019) shares a similar problem definition and theoretical result with our paper. However, single-sided communication is not allowed in their setting, which restricts their results.
3 PROBLEM SETTING . In this paper, we consider federated learning with $n$ clients (a.k.a. nodes). Each client can be either an edge server or some other kind of computing device, such as a smartphone, which stores local private data and a local machine learning model $x_i$. We assume the topological structure of the network of these $n$ nodes can be represented by a directed graph $G = ([n], E)$ with vertex set $[n] = \{1, 2, \ldots, n\}$ and edge set $E \subset [n] \times [n]$. If there exists an edge $(u, v) \in E$, then nodes $u$ and $v$ have a network connection and $u$ can directly send messages to $v$. Let $x_t^{(i)}$ denote the local model on the $i$-th node at iteration $t$. In each iteration, node $i$ receives a new sample and computes a prediction for this new sample according to the current model $x_t^{(i)}$ (e.g., it may recommend some items to the user in an online recommendation system). After that, node $i$ receives a loss function $f_{i,t}(\cdot)$ associated with that new sample. The typical goal of online learning is to minimize the regret, which is defined as the difference between the total loss incurred by the nodes' predictions and the corresponding loss of the globally optimal model $x^*$:
$$\tilde{R}_T := \sum_{t=1}^{T} \sum_{i=1}^{n} \left( f_{i,t}\big(x_t^{(i)}\big) - f_{i,t}(x^*) \right),$$
where $x^* = \arg\min_x \sum_{t=1}^{T} \sum_{i=1}^{n} f_{i,t}(x)$ is the optimal solution. However, here we consider a more general online setting: the loss function of the $i$-th node at iteration $t$ is $f_{i,t}(\cdot\,; \xi_t^{(i)})$, which is additionally parametrized by a random variable $\xi_t^{(i)}$. This $\xi_t^{(i)}$ is drawn from the distribution $\mathcal{D}_t^{(i)}$ and is mutually independent across $i$ and $t$; we call this part the stochastic component of the loss function $f_{i,t}(\cdot\,; \xi_t^{(i)})$. The stochastic component can be used to characterize the internal randomness of nodes' data and the potential connections among different nodes. For example, music preference may be impacted by popular trends on the Internet, which can be formulated in our model by letting $\mathcal{D}_t^{(i)} \equiv \mathcal{D}_t$ for all $i \in [n]$ with some time-varying distribution $\mathcal{D}_t$. On the other hand, the function $f_{i,t}(\cdot\,;\cdot)$ is the adversarial component of the loss, which may encode, for example, the user's profile, location, etc. Therefore, the objective regret naturally becomes the expectation over all past losses:
$$R_T := \mathbb{E}_{\Xi_{n,T} \sim \mathcal{D}_{n,T}} \left[ \sum_{t=1}^{T} \sum_{i=1}^{n} \left( f_{i,t}\big(x_t^{(i)}; \xi_t^{(i)}\big) - f_{i,t}\big(x^*; \xi_t^{(i)}\big) \right) \right] \tag{1}$$
with $x^* = \arg\min_x \mathbb{E}_{\Xi_{n,T} \sim \mathcal{D}_{n,T}} \sum_{t=1}^{T} \sum_{i=1}^{n} f_{i,t}(x; \xi_t^{(i)})$. One benefit of the above formulation is that it partially resolves the non-I.I.D. issue in federated learning.
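To make the regret in (1) concrete, here is a minimal numerical sketch. The quadratic loss, the targets, and the placeholder local models are hypothetical illustrations rather than objects from the paper; the point is only how the adversarial part $a_{i,t}$ and the stochastic part $\xi_t^{(i)}$ enter the loss, with the outer expectation approximated by a single Monte Carlo realization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, d = 4, 100, 5        # nodes, rounds, model dimension (toy sizes)

# hypothetical loss f_{i,t}(x; xi) = ||x - (a_{i,t} + xi)||^2: a_{i,t} is the
# adversarial component, xi ~ D_{i,t} the zero-mean stochastic component
A = rng.uniform(-1, 1, (T, n, d))       # adversarial targets
XI = rng.normal(0.0, 0.1, (T, n, d))    # one realization of Xi_{n,T}

def loss(x, t, i):
    return np.sum((x - (A[t, i] + XI[t, i])) ** 2)

x_star = A.mean(axis=(0, 1))            # minimizer of the expected total loss
models = rng.normal(0, 1, (T, n, d))    # placeholder local models x_t^{(i)}

regret = sum(loss(models[t, i], t, i) - loss(x_star, t, i)
             for t in range(T) for i in range(n))
```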
A fundamental assumption in many traditional distributed machine learning methods is that the data samples stored on all nodes are I.I.D. , which fails to hold for federated learning since the data on each user ’ s device is highly correlated to that user ’ s preferences and habits . However , our formulation does not require the I.I.D . assumption to hold for the adversarial component at all . Even though the random samples for the stochastic component still need to be independent , they are allowed to be drawn from different distributions . Finally , one should note that online optimization also includes stochastic optimization ( i.e. , data samples are drawn from a fixed distribution ) and offline optimization ( i.e. , data are already collected before optimization begins ) as its typical cases Shalev-Shwartz et al . ( 2012 ) . Hence , our setting covers a wide range of applications .
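The OPS update rule itself is developed later in the paper and is not reproduced here; as a rough, generic illustration of why push-sum-style averaging tolerates single-sided edges, the sketch below runs plain push-sum consensus on a small made-up directed graph. Each node splits its mass among itself and its out-neighbors (so only out-neighbor knowledge is needed), and the ratio x/w at every node converges to the average of the initial models even though the mixing matrix is asymmetric.

```python
import numpy as np

# directed "trust" edges: (i -> j) means i is willing to send to j
out_neighbors = {0: [1, 3], 1: [2], 2: [0, 1], 3: [0]}
n = 4

# node i splits its mass uniformly over itself and its out-neighbors,
# so row i of W sums to one and requires only local knowledge
W = np.zeros((n, n))
for i, outs in out_neighbors.items():
    for j in outs + [i]:
        W[i, j] = 1.0 / (len(outs) + 1)

x = np.random.default_rng(0).normal(size=(n, 3))  # initial local models
w = np.ones(n)                                    # push-sum weights

for _ in range(200):
    x, w = W.T @ x, W.T @ w   # each node sums the shares pushed to it

print(x / w[:, None])         # every row approaches the initial average
```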
The paper proposes the Online Push-Sum (OPS) method, which aims to solve decentralized federated optimization problems in a social network scenario where no centralized authority exists in the federated learning (FL) system. OPS assumes a social network setting in which the graph may contain single-sided trust edges. The authors further extend the proposed OPS method to the online setting and provide a regret analysis. The experimental study indicates that OPS is effective and converges faster than other decentralized online methods.
Fundamental Limits and Tradeoffs in Invariant Representation Learning
Many machine learning applications involve learning representations that achieve two competing goals : To maximize information or accuracy with respect to a target while simultaneously maximizing invariance or independence with respect to a subset of features . Typical examples include privacy-preserving learning , domain adaptation , and algorithmic fairness , just to name a few . In fact , all of the above problems admit a common minimax game-theoretic formulation , whose equilibrium represents a fundamental tradeoff between accuracy and invariance . In this paper , we provide an information theoretic analysis of this general and important problem under both classification and regression settings . In both cases , we analyze the inherent tradeoffs between accuracy and invariance by providing a geometric characterization of the feasible region in the information plane , where we connect the geometric properties of this feasible region to the fundamental limitations of the tradeoff problem . In the regression setting , we also derive a tight lower bound on the Lagrangian objective that quantifies the tradeoff between accuracy and invariance . Our results shed new light on this fundamental problem by providing insights on the interplay between accuracy and invariance . These results deepen our understanding of this fundamental problem and may be useful in guiding the design of adversarial representation learning algorithms . 1 INTRODUCTION . One of the fundamental tasks in both supervised and unsupervised learning is to learn proper representations of data for various downstream tasks . Due to the recent advances in deep learning , there has been a surge of interest in learning so-called invariant representations . Roughly speaking , the underlying problem of invariant representation learning is to find a feature transformation of the data that balances two goals simultaneously . First , the features should preserve enough information with respect to the target task of interest , e.g. , good predictive accuracy . On the other hand , the representations should be invariant to the change of a pre-defined attribute , e.g. , in visual perceptions the representations should be invariant to the change of perspective or lighting conditions , etc . Clearly , in general there is often a tension between these two competing goals of error minimization and invariance maximization . Understanding the fundamental limits and tradeoffs therein remains an important open problem . In practice , the problem of learning invariant representations is often formulated as solving a minimax sequential game between two agents , a feature encoder and an adversary . Under this framework , the goal of the feature encoder is to learn representations that could confuse a worst-case adversary in discriminating the pre-defined attribute . Meanwhile , the representations given by the feature encoder should be amenable for a follow-up predictor of target task . In this paper , we consider the situation where both the adversary and the predictor have infinity capacity , so that the tradeoff between accuracy and invariance solely depends on the representations given by the feature encoder . In particular , our results shed light on the best possible tradeoff attainable by any algorithm . 
This leads to a Lagrangian objective with a tradeoff parameter between these two competing goals, and we study the fundamental limitations of this tradeoff by analyzing the extremal values of this Lagrangian in both classification and regression settings. Our results shed new light on the fundamental tradeoff between accuracy and invariance, and give a crisp characterization of how the dependence between the target task and the pre-defined attribute affects the limits of representation learning. Contributions We geometrically characterize the tradeoff between accuracy and invariance via the information plane (Shwartz-Ziv & Tishby, 2017) analysis under both classification and regression settings, where each feature transformation corresponds to a point on the information plane. For the classification setting, we provide a fundamental characterization of the feasible region in the information plane, including its boundedness, convexity, and extremal vertices. For the regression setting, we provide an analogous characterization of the feasible region by replacing mutual information with conditional variances. Finally, in the regression setting, we prove a tight information-theoretic lower bound on a Lagrangian objective that trades off accuracy and invariance. The proof relies on an interesting SDP relaxation, which may be of independent interest. Related Work There are abundant applications of learning invariant representations in various downstream tasks, including domain adaptation (Ben-David et al., 2007; 2010; Ganin et al., 2016; Zhao et al., 2018), algorithmic fairness (Edwards & Storkey, 2015; Zemel et al., 2013; Zhang et al., 2018; Zhao et al., 2019b), privacy-preserving learning (Hamm, 2015; 2017; Coavoux et al., 2018; Xiao et al., 2019), invariant visual representations (Quiroga et al., 2005; Gens & Domingos, 2014; Bouvrie et al., 2009; Mallat, 2012; Anselmi et al., 2016), and causal inference (Johansson et al., 2016; Shalit et al., 2017; Johansson et al., 2020), just to name a few. To the best of our knowledge, no previous work studies the particular tradeoff problem considered in this paper. Closest to our work are results in domain adaptation (Zhao et al., 2019a) and algorithmic fairness (Menon & Williamson, 2018; Zhao & Gordon, 2019) showing a lower bound on the classification accuracy on two groups, e.g., source vs. target in domain adaptation and majority vs. minority in algorithmic fairness. Compared to these previous results, our work directly characterizes the tradeoff between accuracy and invariance using information-theoretic concepts in both classification and regression settings. Furthermore, we also give an approximation to the Pareto frontier between accuracy and invariance in both cases. 2 BACKGROUND AND PRELIMINARIES . Notation We adopt the usual setup: given $(X, Y) \in \mathcal{X} \times \mathcal{Y}$, where $Y$ is the response and $X \in \mathbb{R}^p$ represents the input vector, we seek a classification/regression function $f(X)$ that minimizes $\mathbb{E}\,\ell(f(X), Y)$, where $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ is some loss function depending on the context of the underlying problem. In this paper, we consider two typical choices of $\ell$: (1) the cross-entropy loss, i.e., $\ell(y, y') = -y\log(y') - (1-y)\log(1-y')$, which is typically used when $Y$ is a discrete variable, as in classification; (2) the squared loss, i.e., $\ell(y, y') = (y - y')^2$, which is suitable for continuous $Y$, as in regression.
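As a small sketch, the two loss choices translate directly into code; the binary form of the cross-entropy below assumes $y \in \{0, 1\}$ and $y' \in (0, 1)$, matching the formula above.

```python
import numpy as np

def cross_entropy(y, y_pred):
    # l(y, y') = -y log(y') - (1 - y) log(1 - y'), for classification
    return -y * np.log(y_pred) - (1 - y) * np.log(1 - y_pred)

def squared_loss(y, y_pred):
    # l(y, y') = (y - y')^2, for regression
    return (y - y_pred) ** 2
```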
Throughout the paper, we will assume that all random variables have finite second-order moments. Problem Setup Apart from the input/output pairs, in our setting there is a third variable $A$, which corresponds to a variable that a predictor should be invariant to. Depending on the particular application, $A$ could correspond to potential protected attributes in algorithmic fairness, e.g., the ethnicity or gender of an individual, or $A$ could be the identity of the domain index in domain adaptation, etc. In general, we assume that there is a joint distribution $\mathcal{D}$ over the triple $(X, A, Y)$ from which our observational data are sampled. Upon receiving the data, the learner's goal is twofold. On one hand, the learner aims to accurately predict the target $Y$. On the other hand, it also tries to be insensitive to variation in $A$. To achieve this dual goal, one standard approach in the literature (Zemel et al., 2013; Edwards & Storkey, 2015; Hamm, 2015; Ganin et al., 2016; Zhao et al., 2018) is through the lens of representation learning. Specifically, let $Z = g(X)$, where $g(\cdot)$ is a (possibly randomized) transformation function that takes $X$ as input and gives the corresponding feature encoding $Z$. The hope is that, by learning the transformation function $g(\cdot)$, $Z$ contains as much information as possible about the target $Y$ while at the same time filtering out information related to $A$. This problem is often phrased as an adversarial game:
$$\min_{f, g}\; \max_{f'}\;\; \mathbb{E}_{\mathcal{D}}\big[\ell(f \circ g(X), Y)\big] - \lambda \cdot \mathbb{E}_{\mathcal{D}}\big[\ell(f' \circ g(X), A)\big], \tag{1}$$
where the two competing agents are the feature transformation $g$ and the adversary $f'$, and $\lambda > 0$ is a tradeoff hyperparameter between the task variable $Y$ and the attribute $A$. For example, the adversary $f'$ could be understood as a domain discriminator in applications related to domain adaptation, or as an auditor of the sensitive attribute in algorithmic fairness. In the above minimax game, the first term corresponds to the accuracy of the target task, and the second term is the loss incurred by the adversary. It is worth pointing out that the minimax problem in (1) is separable for any fixed feature transformation $g$, in the sense that once $g$ has been fixed, the optimizations of $f$ and $f'$ are independent of each other. Formally, define $R^*_Y(g) := \inf_f \mathbb{E}_{\mathcal{D}}\,\ell(f(g(X)), Y)$ to be the optimal risk in predicting $Y$ using $Z = g(X)$ under the loss $\ell$, and define $R^*_A(g)$ similarly. The separation structure of the problem leads to the following compact form:
$$\mathrm{OPT}(\lambda) := \min_{g}\; R^*_Y(g) - \lambda \cdot R^*_A(g). \tag{2}$$
The minimization here is taken over a family of (possibly randomized) transformations $g$. Intuitively, (2) characterizes the situation where, for a given transformation $Z = g(X)$, both $f$ and $f'$ play their optimal responses. Hence this objective function characterizes a fundamental limit on the best possible representation we can hope to achieve for a fixed value of $\lambda$. In general, with $0 < \lambda < \infty$, there is an inherent tension between the minimization of $R^*_Y(g)$ and the maximization of $R^*_A(g)$, and a choice of the tradeoff hyperparameter $\lambda$ essentially corresponds to a realization of this tradeoff. Motivating Examples We discuss several examples to which the above framework is applicable. Example 2.1 (Privacy-Preservation).
In privacy applications, the goal is to make it difficult to predict sensitive data, represented by the attribute $A$, while retaining information about $Y$ (Hamm, 2015; 2017; Coavoux et al., 2018; Xiao et al., 2019). A way to achieve this is to pass information through $Z$, the "privatized" or "sanitized" data. Example 2.2 (Algorithmic Fairness). In fairness applications, we seek to make predictions about the response $Y$ without discriminating based on the information contained in the protected attributes $A$. For example, $A$ may represent a protected class of individuals defined by, e.g., race or gender. This definition of fairness is also known as statistical parity in the literature, and has recently received increasing attention from an information-theoretic perspective (McNamara et al., 2019; Zhao & Gordon, 2019; Dutta et al., 2019). Example 2.3 (Domain Adaptation). In domain adaptation, our goal is to train a predictor using labeled data from the source domain that generalizes to the target domain. In this case, $A$ corresponds to the identity of the domain, and the hope is to learn a domain-invariant representation $Z$ that is informative about the target $Y$ (Ben-David et al., 2007; 2010; Ganin et al., 2016; Zhao et al., 2018). Example 2.4 (Group Invariance). In many applications in computer vision, it is desirable to learn predictors that are invariant to the action of a group $G$ on the input space. Typical examples include rotation, translation, and scale. By considering random variables $A$ that take their values in $G$, one approach to this problem is to learn a representation $Z$ that "ignores" changes in $A$ (Quiroga et al., 2005; Gens & Domingos, 2014; Bouvrie et al., 2009; Mallat, 2012; Anselmi et al., 2016). Example 2.5 (Information Bottleneck). The information bottleneck (Tishby et al., 2000) is the problem of finding a representation $Z$ that minimizes the objective $I(Z;X) - \lambda I(Z;Y)$, compressing $X$ while preserving information about $Y$. This is closely related to, but not the same as, the problem we study, owing to the invariant attribute $A$.
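The paper analyzes the equilibrium of the game in (1) rather than prescribing a training algorithm, but for concreteness, here is one common way such minimax objectives are optimized in practice: alternating gradient steps between the adversary f′ and the encoder/predictor pair (g, f). The layer sizes, the class counts, and the alternating schedule are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 4))  # encoder
f = nn.Linear(4, 2)       # target predictor (Y assumed to have 2 classes)
f_adv = nn.Linear(4, 2)   # adversary predicting the attribute A
lam, ce = 0.5, nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(g.parameters()) + list(f.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(f_adv.parameters(), lr=1e-3)

def step(x, y, a):
    # 1) adversary best-responds on frozen features (inner max over f')
    opt_adv.zero_grad()
    ce(f_adv(g(x).detach()), a).backward()
    opt_adv.step()
    # 2) encoder/predictor: task loss minus lambda times the adversary's loss
    opt_main.zero_grad()
    (ce(f(g(x)), y) - lam * ce(f_adv(g(x)), a)).backward()
    opt_main.step()

x = torch.randn(64, 10)
y, a = torch.randint(0, 2, (64,)), torch.randint(0, 2, (64,))
step(x, y, a)
```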
The paper formulates the problem of learning invariant representations as a min-max game, exploring the tradeoffs between accuracy and invariance of these representations via an information-plane analysis. Specifically, the paper considers both classification (cross-entropy loss) and regression (squared loss) settings. The related minimax problem is separable in the sense that, for any fixed feature transformation, the optimizations for the min and max agents are independent of each other, resulting in a simple, concise representation of the overall optimization problem. This separable structure allows for a geometric description of the feasible set with respect to the actions of both min-max agents, and the paper goes on to provide characterizations of extremal points and other properties (e.g., convexity) of this region. The paper also derives a tight lower bound for the Lagrangian objective that trades off accuracy and invariance.
Fast Binarized Neural Network Training with Partial Pre-training
1 INTRODUCTION . Quantizing neural networks ( Gupta et al. , 2015 ) , constraining weights and activations to take on values within some small fixed set , is a popular set of techniques for reducing the storage ( Han et al. , 2016 ) or compute ( Fromm et al. , 2020 ) requirements of deep neural networks . Weights and activations can often be quantized down to as few as 8 bits with no loss in accuracy compared to a full-precision model . Further quantization often comes at the expense of accuracy : it is possible to binarize neural networks ( Hubara et al. , 2016 ; Rastegari et al. , 2016 ) , constraining weights and activations to take on values within a set of two elements ( often { −1 , 1 } ) , but such binarization often lowers the accuracy of the resultant network , necessitating a tradeoff between desired compression and accuracy . In the literature , there are two primary techniques for obtaining a quantized neural network : quantizing a pre-trained full-precision network ( Banner et al. , 2019 ; Han et al. , 2016 ) , and training a quantized network from scratch ( Hubara et al. , 2016 ; Gupta et al. , 2015 ) . Full-precision training . Quantizing a full-precision network requires few or even no additional training epochs on top of training that full-precision network . Typical procedures for quantizing a full-precision network range from data-blind procedures like selecting quantization bins to minimize distance from the original weights ( Banner et al. , 2019 ) , to data-intensive procedures such as retraining the network to be more amenable to quantization ( Han et al. , 2016 ) . However without significant additional training time , quantizing a pre-trained network often does not reach the highest accuracy possible for the quantized network architecture ( Alizadeh et al. , 2019 ) . Further , achieving high accuracy with heavy quantization , such as binarization , often requires changing the network architecture , for instance by adding skip connections ( Bethge et al. , 2019 ) ; such architectural changes mean that the weights of a pre-trained full-precision network may not transfer to the new architecture . Low-precision training . Alternatively , training a quantized network from scratch allows for achieving high accuracy regardless of the availability of pre-trained full-precision weights ( Alizadeh et al. , 2019 ) . Typical procedures for training a quantized network from scratch involve tracking and optimizing latent weights , weights which are quantized during the forward pass but treated as fullprecision during the backward pass ( Hubara et al. , 2016 ) . However , training a quantized network from scratch can be costly . Quantized networks typically require more training iterations to plateau in accuracy ( Hubara et al. , 2016 , Figure 1 ; Bethge et al. , 2019 , Figure 2 ) . Further , since quantized networks are often trained by simulating the quantized operations in floating-point ( Zhang et al. , 2019 ) , low-precision training can be even more computationally expensive than the full-precision equivalent . Research question . In this paper , we explore the question : Can we accelerate training a binarized neural network from scratch to a given target accuracy ? Concretely , we assume that a network architecture and standard training schedule are provided , but that pre-trained full-precision networks are not available . We also specifically focus on achieving accuracy in the early phase of training , exposing the tradeoff between training cost and accuracy . 
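As background, the latent-weight scheme mentioned above fits in a few lines: the forward pass sees binarized weights while the optimizer updates a persistent full-precision copy. This is a generic sketch of the standard practice (here with an identity straight-through gradient on a toy linear model), not code from any of the cited papers.

```python
import torch

w_latent = torch.randn(10, requires_grad=True)  # full-precision latent weights
opt = torch.optim.SGD([w_latent], lr=0.1)

for _ in range(100):
    x, y = torch.randn(32, 10), torch.randn(32)
    # forward value is sign(w_latent); gradient passes straight through to w_latent
    w_bin = w_latent + (torch.sign(w_latent) - w_latent).detach()
    loss = ((x @ w_bin - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```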
Partial pre-training . To answer the above research question , we evaluate a technique , partial pre-training , that allows for faster training of binarized neural networks by first training the network as a standard floating point network with standard full-precision training for a short amount of time , then converting the network to a binarized neural network and continuing to train from there with standard low-precision training for the remainder of the budgeted training time . We specifically evaluate partial pre-training ’ s speedup over standard low-precision training , when training a binarized neural network from scratch . We find that partial pre-training can train VGG , ResNet , and Neural Collaborative Filtering networks on CIFAR-10 , ImageNet , and MovieLens-20m between 1.26× and 1.61× faster than standard low-precision training . Contributions . • We present partial pre-training , which can train binarized neural networks from scratch between 1.26× and 1.61× faster than standard low-precision training . • We find that partial pre-training both requires fewer iterations to train to a given accuracy , and also that partial pre-training takes on average less time per iteration than standard low-precision training . • We analyze the sensitivity of partial pre-training to the choice of split between full-precision and low-precision training finding that an even split , though not always optimal , nearly matches the highest accuracy achievable by any other choice of split . All together , we find that partial pre-training is a simple and effective approach for accelerating binarized neural network training . Partial pre-training is a step towards the goal of binarized neural network training procedures that can match the efficiency gains of binarized neural network inference . 2 BACKGROUND . Binarized neural networks trade off accuracy for inference efficiency . However , binarized neural networks often take longer to train than the full-precision versions of the same network architecture , both in terms of training iterations until convergence and wall-clock time per iteration . Training iterations . Binarized neural networks tend to take more iterations to train than the fullprecision versions of the same network architecture . For instance , Hubara et al . ( 2016 , Figure 1 ) show a binarized neural network with a custom architecture requiring 4× as many training iterations to plateau in accuracy as a full-precision baseline on CIFAR-10 . Bethge et al . ( 2019 , Figure 2 ) similarly show a binarized ResNet-18 taking 2× as many training iterations to plateau in accuracy as a full-precision baseline on ImageNet . Wall-clock time . Beyond requiring more iterations to train , binarized neural networks tend to take more wall-clock time to complete each iteration of training than full-precision networks do . This is because binarized neural networks are often trained by simulating low-precision operations with standard floating point operations ( Zhang et al. , 2019 ; Fromm et al. , 2020 ) , and require additional bookkeeping that full-precision networks do not require , such as performing the binarization of weights and activations ( Hubara et al. , 2016 ) and calculating scaling factors ( Rastegari et al. , 2016 ) . While it is theoretically possible to accelerate binarized neural network training , we are not aware of any effort to exploit binarization during the training phase . 
It is also not clear what the maximum speedup possible from accelerating binarized neural network training would be : Fromm et al . ( 2020 ) show an acceleration of 6.33× for a VGG during inference ; with the additional bookkeeping of binarized neural network training and potentially requiring higher precision gradients in the backward pass ( Zhou et al. , 2016 ) , real training speedups would likely be lower . 3 PARTIAL PRE-TRAINING . This paper focuses on accelerating the training time of binarized neural networks . Specifically , we aim to reduce both the training iterations and wall clock time of training binarized neural networks . We achieve this by leveraging the faster training time—both iteration-count and time-per-iteration—of full-precision networks . This section formalizes the design space of partial pre-training algorithms , describing the exact training methodology used in the experiments in Sections 5 and 6 . Partial pre-training splits training into multiple phases . First , partial pre-training trains the network for a short amount of time at full precision ( 32 bits ) using no quantization methods : weights , activations , and gradients are not quantized or clipped . Next , the binarization operators are added to the network ( binarizing weights and activations ) . Partial pre-training then continues to train the binarized neural network using standard low-precision training . To avoid requiring hyperparameter search for each different network , we prescribe a standard training schedule based on the original training schedule of the full-precision network ( i.e. , learning rate and associated decay , which is assumed to be provided ) . Each step of partial pre-training takes 50 % of the allotted training time . Within the training time for each step of partial pre-training , the network is trained using the original learning rate schedule compressed to the allotted training time . The partial pre-training algorithm is presented in Algorithm 1 : Algorithm 1 Partial pre-training . 1 . Train the network at full precision , using the original learning rate schedule compressed down to half of the desired training time . 2 . Binarize the network , inserting quantization operators into the network . 3 . Train the binarized network , using the original learning rate schedule compressed down to the remaining half of the desired training time . 4 EXPERIMENTAL METHODOLOGY . 4.1 DATASETS AND NETWORKS . We evaluate partial pre-training across a variety of datasets and networks . Specifically , we evaluate partial pre-training on a CIFAR-10 ( Krizhevsky , 2009 ) ResNet-20 ( He et al. , 2016 ) , a CIFAR-10 VGG-16 ( Simonyan & Zisserman , 2014 ) , an ImageNet ( Russakovsky et al. , 2015 ) ResNet-34 , and a MovieLens 20M ( Harper & Konstan , 2015 ) Neural Collaborative Filtering ( NCF ) ( He et al. , 2017 ) model . Following Bethge et al . ( 2019 ) , our ResNet-20 and ResNet-34 are extended with additional skip connections past every quantized convolution , rather than past every block of quantized convolutions as is standard for ResNets ; this architectural change facilitates training binarized ResNets from scratch . Our NCF model additionally has BatchNorm ( Ioffe & Szegedy , 2015 ) layers after each binarized layer . The networks otherwise use standard architectures , data augmentation schemes , and training schedules drawn from the literature . Details about the networks and their respective training regimes are presented in Table 1 . 4.2 TRAINING DETAILS . 
All networks were trained on AWS GPU instances. The CIFAR-10 and MovieLens-20M networks were trained on p3.2xlarge instances, using an NVIDIA V100 GPU. The ImageNet networks were trained on p3.8xlarge instances, training with 4 data-parallel NVIDIA V100 GPUs. The vision networks were trained using custom implementations of each network in TensorFlow 2.2.0 (Abadi et al., 2016). The NCF was trained using the PyTorch 0.4 (Paszke et al., 2019) implementation included in the MLPerf 0.5 training benchmark suite (Mattson et al., 2020). 4.3 BINARIZATION . We evaluate partial pre-training using standard approaches to neural network binarization: Inputs. We binarize input activations using PACT (Choi et al., 2018), a gradient-based method of determining the activation scale, which is used in place of the ReLU activation in binarized neural networks. PACT introduces one trainable parameter per layer, $\alpha$, which controls the scale of the activations. The PACT activation on binarized networks has the following form:
$$\mathrm{PACT}(x) = \begin{cases} 0, & x \in (-\infty, \alpha/2] \\ \alpha, & x \in (\alpha/2, \infty) \end{cases} \qquad \frac{\partial\,\mathrm{PACT}(x)}{\partial \alpha} = \begin{cases} 0, & x \in (-\infty, \alpha) \\ 1, & x \in [\alpha, \infty) \end{cases} \qquad \frac{\partial\,\mathrm{PACT}(x)}{\partial x} = \begin{cases} 1, & x \in [0, \alpha] \\ 0, & \text{otherwise} \end{cases}$$
We initialize $\alpha = 3$ for each layer, and control its magnitude with an L2 regularization penalty with coefficient 0.0002. Weights. We binarize weights using the sign of the weight and the straight-through estimator:
$$\mathrm{sign}(x) = \begin{cases} -1, & x \le 0 \\ 1, & x > 0 \end{cases} \qquad \frac{\partial\,\mathrm{sign}(x)}{\partial x} = \begin{cases} 1, & x \in [-1, 1] \\ 0, & \text{otherwise} \end{cases}$$
Full precision layers. Following standard practice (Hubara et al., 2016; Simons & Lee, 2019; Qin et al., 2020), we do not binarize inputs or weights to the first, last, or batch normalization layers, nor do we binarize projection layers in ResNets (Bethge et al., 2019).
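The two quantizers above map directly onto custom autograd functions. The following is a sketch of one possible PyTorch implementation of exactly these forward/backward rules; it is an illustration written for this summary (the surrounding module structure is omitted), not the authors' code, which used TensorFlow for the vision models.

```python
import torch

class SignSTE(torch.autograd.Function):
    """sign(x) in {-1, +1}; straight-through gradient, zeroed outside [-1, 1]."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x > 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

class BinaryPACT(torch.autograd.Function):
    """Binarized PACT: 0 below alpha/2, alpha above, with the stated gradients."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.save_for_backward(x, alpha)
        return torch.where(x > alpha / 2, alpha, torch.zeros_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        x, alpha = ctx.saved_tensors
        grad_x = grad_out * ((x >= 0) & (x <= alpha)).to(grad_out.dtype)
        grad_alpha = (grad_out * (x >= alpha).to(grad_out.dtype)).sum()
        return grad_x, grad_alpha

alpha = torch.nn.Parameter(torch.tensor(3.0))  # per layer; L2 penalty 0.0002
h = BinaryPACT.apply(torch.randn(8), alpha)    # binarized activations
w_b = SignSTE.apply(torch.randn(8))            # binarized weights
```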
The paper suggests a method for training binary neural networks. The proposed method is to partially train with full precision and then continue with binarized training using the straight-through estimator. The method is very simple and there is very limited technical contribution, so in order to be worthy of publication it needs to be supported with compelling experimental results. Unfortunately this is not the case.
Shuffle to Learn: Self-supervised learning from permutations via differentiable ranking
1 INTRODUCTION . Supervised learning has achieved important successes on large annotated datasets ( Deng et al. , 2009 ; Amodei et al. , 2016 ) . However , most available data , whether images , audio , or videos are unlabelled . For this reason , pre-training representations in an unsupervised way , with subsequent fine-tuning on labelled data , has become the standard to extend the performance of deep architectures to applications where annotations are scarce , such as understanding medical images ( Rajpurkar et al. , 2017 ) , recognizing speech from under-resourced languages ( Rivière et al. , 2020 ; Conneau et al. , 2020 ) , or solving specific language inference tasks ( Devlin et al. , 2018 ) . Among unsupervised training schemes , self-supervised learning focuses on designing a proxy training objective , that requires no annotation , such that the representations incidentally learned will generalize well to the task of interest , limiting the amount of labeled data needed for fine-tuning . Such “ pretext ” tasks , a term coined by Doersch et al . ( 2015 ) , include learning to colorize an artificially gray-scaled image ( Larsson et al. , 2017 ) , inpainting removed patches ( Pathak et al. , 2016 ) or recognizing with which angle an original image was rotated ( Gidaris et al. , 2018 ) . Other approaches for self-supervision include classification to original images after data augmentation ( Chen et al. , 2020 ) and clustering ( Caron et al. , 2018 ) . In this work , we consider the pretext task of reordering patches of an input , first proposed for images by Noroozi & Favaro ( 2016 ) , the analogue of solving a jigsaw puzzle . In this setting , we first split an input into patches and shuffle them by applying a random permutation . We train a neural network to predict which permutation was applied , taking the shuffled patches as inputs . We then use the inner representations learned by the neural network as input features to a low-capacity supervised classifier ( see Figures 1 and 2 for illustration ) . We believe that permutations provide a promising avenue for self-supervised learning , as they are conceptually general enough to be applied across a large range of modalities , unlike colorization ( Larsson et al. , 2017 ) or rotations ( Gidaris et al. , 2018 ) that are specific to images . The idea of using permutations was also explored in Santa Cruz et al . ( 2018 ) where they use a bi level optimization scheme which leverages sinkhorn iterations to learn visual reconstructions . Their method resorts to approximating the permuation matrix with such continuous methods . Our method relies on no such approximations and can efficiently represent all possible permutations . Moreover , the encouraging results of Noroozi & Favaro ( 2016 ) when transferring learned image features for object detection and image retrieval inspire us to advance this method a step forward . However , including permutations into an end-to-end differentiable pipeline is challenging , as permutations are a discontinuous operation . Noroozi & Favaro ( 2016 ) circumvent this issue by using a fixed set of permutations and casting the permutation prediction problem as a classification one . Given that the number of possible permutations of n patches is n ! , this approach can not scale to exploiting the full set of permutations , even when n is moderately small . In this work , we leverage recent advances in differentiable ranking ( Berthet et al. , 2020 ; Blondel et al. 
, 2020b ) to integrate permutations into end-to-end neural training . This allows us to solve the permutation inversion task for the entire set of permutations , removing a bottleneck that was heretofore sidestepped in manners that could deteriorate downstream performance . Moreover , we successfully demonstrate for the first time the effectiveness of permutations as a pretext task on multiple modalities with minimal modality-specific adjustments . In particular , we improve music understanding by learning to reorder spectrogram frames , over the time and frequency axes . We also improve video understanding by reordering video frames along time . To summarize , we make the following two contributions . - We integrate differentiable ranking into end-to-end neural network training for representation learning . This provides an efficient manner to learn in reordering tasks for all permutations , for larger numbers of patches . We show that this drastic increase in the number of permutations improves the quality of learned representations for downstream tasks . - We successfully demonstrate for the first time the effectiveness of permutations as a general purpose self-supervision method , efficient on multiple modalities with extremely minimal modifications to the network . Additionally , the pre-trained representations perform well across diverse tasks of the same modality . We purposefully divert from domain-specific transformations , often inspired by data augmentation techniques , as predicting the permutation applied to input patches is not restricted either to a domain , nor a modality or a dimensionality . This also creates opportunities for applications beyond the scope of what we illustrate in our work . The rest of the paper is organized as follows . In Section 2 we present the problem formulation and the methods used . In Section 3 we demonstrate the effectiveness of our experiments on audio , video , and image tasks . Further details can be found in the Appendix . 2 METHODS . 2.1 GENERAL METHODOLOGY . To efficiently leverage the existence of large amounts of unlabeled data , we present a self-supervised pretext task that predicts the permutation applied to patches of an input . We do so in a manner that removes an existing bottleneck , and allows to use all possible permutations as targets during training . This pretext task is performed upstream and the internal representation learned by the pretext neural network can then be transferred and used on secondary downstream tasks – see Figures 1 and 2 . In the upstream task , for each data point , n patches , sub-parts x1 , . . . , xn of of identical dimensions d are extracted . Their exact structure depend naturally on the modality : e.g . horizontal bands of an image , frequency bands of a spectrogram , frames of a video . Accordingly , the dimensions in d can represent height , width , channels , etc . These patches are then permuted randomly , and organized in a tensor Xi of dimension n× d ( see Figure 2 ) , which is paired to the applied permutation as a label yi ( see Section 2.2 for details on permutation encoding ) . We then train the weights w of a neural network to minimize the loss between its output fw ( Xi ) , of size n , and the encoding yi ∈ Rn of the permutation applied to the n patches ( see Figure 2 ) . Note that the last operation of the network is a differentiable ranking operator y∗ε . 
This operator , the encoding of the permutations for the labels , and the loss used in this upstream task are detailed in Section 2.2 below . The network , and the details of the data-processing pipeline generating the patches , are detailed in Section 2.3 . After upstream training on the initial dataset , the upstream network weights can be used to generate representations . By truncating the network , removing some of the last layers , we can extract an embedding of any input vector . These representations can be used in a downstream task to train a new network , with its own weights , minimizing a loss ( typically classification or regression ) between its output and the downstream task labels ( see Figures 1 and 2 ) . We mostly evaluate our methods on downstream performance : the accuracy after training of the downstream network , on a task that is unknown during upstream training . However , the pretext reordering task can be of interest in and of itself , as in learning-to-rank problems ( Liu , 2011 ) , and we also report generalization performance in this task . As an aside , in the Jigsaw puzzle reassembly task , Noroozi & Favaro ( 2016 ) show that the choice of permutation set matters when it comes to performance . They make use of the Hamming distance to choose a permutation set that it maximally separated . This permutation set is diverse but is not close to covering the entire permutation space . By supporting the full set of permutations , our approach does not suffer from this issue . The downstream tasks are dataset-dependent . However , the network in the upstream reordering task requires minimal modification across tasks and modalities ( see Section 3 for details ) . In this work , we demonstrate the effectiveness of our method on audio , video , and images , across several classification and regression tasks . 2.2 DIFFERENTIABLE RANKING METHODOLOGY . Our methodology for representation learning relies importantly on the ability to incorporate ordering or ranking operations in an end-to-end differentiable pipeline . During training for the upstream task , the last two layers of the network consist of : a vector of score values θw ( X ) ∈ Rn , and network outputs fw ( X ) = y∗ε ( θw ( X ) ) ∈ Rn , using a differentiable ranking operator y∗ε that we detail here . The goal of the upstream task is to find good parameters w such that the network outputs fw ( X ) correctly recover the label y representing the permutation applied to the patches in X . In earlier works using permutations as pretext task ( Noroozi & Favaro , 2016 ; Lee et al. , 2017 ) , training with permutation labels is achieved by reducing the permutation inversion task to classification . More specifically , it is encoding a fixed number of permutations L n ! as classes . Each class is represented by a one-hot vector , and network outputs are logits θw ( X ) of size L , leading to a prediction by taking a softmax among these L classes . This approach is obviously limited : representing all the permutations requires in principle n ! classes , which is quickly not manageable , even for small values of n. Further , this does not take into account the similarity between permutations : with this encoding , permutations are orthogonal , no matter how similar they are . We address these issues by having network outputs θw ( X ) of size only n , and interpreting their relative orders as the predicted permutation ( e.g . y = ( 0 , 1 , 2 , 3 ) if they are in decreasing order , predicting the identity permutation ) . 
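Concretely, this relative-order encoding can be computed with two argsorts (ties broken by index); a minimal sketch:

```python
import numpy as np

def rank_encode(theta):
    # y[i] = number of entries of theta larger than theta[i]; decreasing
    # scores map to (0, 1, ..., n-1), i.e. the identity permutation
    return np.argsort(np.argsort(-theta))

rank_encode(np.array([3.0, 2.0, 1.0, 0.5]))  # -> array([0, 1, 2, 3])
```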
The pretext labels also encode the permutations in this manner . This gives a unique encoding to each permutation , operates in dimension n , and can be computed with sorting operations in time O ( n log n ) . Further , small distances in these encodings naturally represent similar permutations . However , applying directly a ranking operator y∗ on the last layer of the network would not allow for backpropagation of gradients : the function of the weights w 7→ L ( y∗ ( θw ( X ) ) ; y ) is piece-wise constant . Small changes in the weights w induce either large jumps or no change in value at all , its gradients are 0 almost everywhere , and undefined otherwise . In order to overcome this matter , we consider instead two differentiable ranking operations , one using stochastic perturbations , introduced in Berthet et al . ( 2020 ) and another one using regularization ( Blondel et al. , 2020b ) . These operations , denoted here by y∗ε , map any vector of k values to a point in the convex hull of permutation encodings in dimension k ( e.g . ( 0.1 , 0.9 , 2.2 , 2.8 ) over 4 elements ) . They can be thought of analogous to the softmax operator for ranking . They share some of its properties : good approximation of the original function , differentiability in the input values θ with non-zero derivatives everywhere , ease of computation , and tuning by a temperature parameter ε ( see Appendix A.1 for further details ) . Adjusting the network parameters w requires a notion of loss function between y∗ε ( θ ) and y . For the version of y∗ε ( θ ) based on stochastic perturbations ( Berthet et al. , 2020 ) , we use the associated Fenchel–Young loss ( Blondel et al. , 2020a ) , that act directly on θ = θw ( X ) ( outputs of the vector , and inputs of the sorting operations ) , written here as LFY ( θ ; y ) ( see Appendix A.1 ) . This loss is convex in θ , smooth , equal to 0 if and only if y∗ε ( θ ) = y . Its gradients are given by ∇θLFY ( θ ; y ) = y∗ε ( θ ) − y . We call this loss “ Perturbed F-Y ” in our empirical results . For the regularized version of y∗ε ( θ ) ( Blondel et al. , 2020b ) , we use 1 2 ‖y∗ε ( θ ) − y‖2 . We call this loss “ Fast Soft Ranking ( FSR ) ” in our empirical results . We opt for these two losses for their good theoretical properties and O ( n log n ) complexity . Other choices ( Mena et al. , 2018 ; Cuturi et al. , 2019 ; Vlastelica et al. , 2019 ; Rolı́nek et al. , 2020 ; Grover et al. , 2019 ) are also possible , potentially with higher computational cost , or regions with zero gradient .
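As a rough illustration of the perturbation-based operator, a plain Monte Carlo version can be written in a few lines; the cited works use more refined estimators, and the target y below is an arbitrary example label under the descending-rank encoding above.

```python
import numpy as np

def soft_rank(theta, eps=0.5, n_samples=1000, seed=0):
    # Monte Carlo version of the perturbed operator: E[rank(theta + eps * Z)]
    z = np.random.default_rng(seed).standard_normal((n_samples, theta.size))
    ranks = np.argsort(np.argsort(-(theta + eps * z), axis=1), axis=1)
    return ranks.mean(axis=0)

theta = np.array([2.1, 0.3, 1.2, -0.5])  # scores produced by the network
y = np.array([0, 3, 1, 2])               # label: encoding of the permutation
grad_theta = soft_rank(theta) - y        # Fenchel-Young gradient y_eps(theta) - y
```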
This paper presents a self-supervised learning task of shuffling input patches and training the network to unshuffle them. A related prior work, Noroozi and Favaro (2016), uses a fixed set of permutations to perform this task for a given number of patches, and the current paper argues for expanding this idea to the full set of permutations. To this end, the paper encodes a permutation as a tuple of ranks, with the goal of the network being to produce the correct tuple with the numbers in order. As ranking is discrete and thus non-differentiable, the paper uses differentiable soft variants based on stochastic perturbations and regularization (Fenchel-Young losses). Experiments are provided on audio and video tasks and show promise over the method of Noroozi and Favaro (2016).
Identifying the Sources of Uncertainty in Object Classification
In image-based object classification , the visual appearance of objects determines which class they are assigned to . External variables that are independent of the object , such as the perspective or the lighting conditions , can modify the object ’ s appearance resulting in ambiguous images that lead to misclassifications . Previous work has proposed methods for estimating the uncertainty of predictions and measure their confidence . However , such methods do not indicate which variables are the potential sources that cause uncertainty . In this paper , we propose a method for image-based object classification that uses disentangled representations to indicate which are the external variables that contribute the most to the uncertainty of the predictions . This information can be used to identify the external variables that should be modified to decrease the uncertainty and improve the classification . 1 INTRODUCTION . An object from the real world can be represented in terms of the data gathered from it through an observation/sensing process . These observations contain information about the properties of the object that allows their recognition , identification , and discrimination . In particular , one can obtain images from objects which represent its visual characteristics through photographs or rendering of images from 3D models . Image-based object classification is the task of assigning a category to images obtained from an object based on their visual appearance . The visual appearance of objects in an image is determined by the properties of the object itself ( intrinsic variables ) and the transformations that occur in the real world ( extrinsic variables ) ( Kulkarni et al. , 2015 ) . Probabilistic classifiers based on neural networks can provide a measure for the confidence of a model for a given prediction in terms of a probability distribution over the possible categories an image can be classified into . However , they do not indicate what variable contributes to the uncertainty . In some cases the extrinsic variables can affect the visual appearance of objects in images in such way that the predictions are highly uncertain . A measure of the uncertainty in terms of these extrinsic features can improve interpretability of the output of a classifier . Disentangled representation learning is the task of crating low-dimensional representations of data that capture the underlying variability of the data and in particular this variability can be explained in terms of the variables involved in data generation . These representations can provide interpretable data representations that can be used for different tasks such as domain adaptation ( Higgins et al. , 2017 ) , continuous learning ( Achille et al. , 2018 ) , noise removal ( Lopez et al. , 2018 ) , and visual reasoning ( van Steenkiste et al. , 2019 ) . In this paper we propose a method for the identification of the sources of uncertainty in imagebased object classification with respect to the extrinsic features that affect the visual appearance of objects in images by using disentangled data representations . Given an image of an object , our model identifies which extrinsic feature contributes the most to the classification output and provides information on how to modify such feature to reduce the uncertainty in the predictions . 2 RELATED WORK . 
Achieving explainable results in predictive models is an important task , especially for critical applications in which the decisions can affect human life such as in health , security , law and defence Barredo Arrieta et al . ( 2020 ) . Even though deep neural networks provide successful results for image classification , their predictions can ’ t be directly interpreted due to their complexity ( Zhang & Zhu , 2018 ) . In order to solve this different approaches have been proposed to provide visual interpretability to the results such as identification of the image regions that contribute to classification ( Selvaraju et al. , 2016 ) . The uncertainty of predictions provides an extra level of interpretability to the predictions of a model by indicating the level of confidence in a prediction Kendall & Gal ( 2017 ) . There are different methods to introduce uncertainty measures in classifiers which include bayesian neural networks , ensembles , etc . Obtaining disentangled representations , that capture distinct sources of variation independently , is an important step towards interpretable machine learning systems Kim & Mnih ( 2018 ) . Despite the lack of agreement on the definition , one description states that a disentangled representation should separate the distinct , informative factors of variations in the data Bengio et al . ( 2012 ) . Within deep generative models , disentanglement is achieved by using neural networks that approximate a conditional distribution on the data and a set of unobserved latent variables . Particularly variational autoencoders ( VAEs ) Kingma & Welling ( 2014 ) are heavily favored due to their ability to model a joint distribution while maintaining scalability and training stability Higgins et al . ( 2016 ) . Therefore most of the methods are based on augmentations on original VAE framework Higgins et al . ( 2016 ) ; Kim & Mnih ( 2018 ) ; Kulkarni et al . ( 2015 ) ; Mathieu et al . ( 2018 ) . In image-based object classification the variables that explain the visual characteristics of objects in data can be divided into those which represent the inherent properties of objects and those which represent its transformations . This explanation has been explored in Kulkarni et al . ( 2015 ) by describing the object ’ s properties as the intrinsic variables and the properties that describe the object transformations as the extrinsic variables . Other work refers to similar sets of variables and their disentanglement under different names but representing similar concepts . Hamaguchi et al . ( 2019 ) disentangles the variables corresponding to ambient variables with respect to object identity information on images . ( Gabbay & Hoshen , 2020 ) proposes the learning of disentangled representations that express the intra-class variability in terms of the class and content . ( Detlefsen & Hauberg , 2019 ) proposes the disentanglement of the appearance and the perspective . Separate the identity of cars from their pose ( Yang et al. , 2015 ) . 3 SETTING . Consider a dataset of images that have been generated from the observations of an object and which should be classified into a certain category . We will assume that this category depends only on the properties of the object itself and not on its surroundings . We use a neural network as a probabilistic classifier to assign each of the images to a category . 
Usually the output of a neural network cannot be directly interpreted in terms of which characteristics of the object have affected the confidence of the prediction. Disentanglement serves as a method to produce interpretable low-dimensional data representations that separate the variables describing the properties of the objects and their surroundings. The main idea is that one can train a probabilistic classifier on disentangled low-dimensional representations and identify which variables contribute to the uncertainty of the classification. 3.1 PROBABILISTIC CLASSIFIERS ON IMAGES . A probabilistic classifier is a model which outputs a conditional probability density $P_{Y|x}$ over the set of $K \in \mathbb{N}$ possible categories $\mathcal{Y} = \{1, 2, \ldots, K\}$, conditioned on an input image $x \in \mathcal{X}$. The value $P_{Y|x}(y)$ can be interpreted as the degree of confidence the model assigns for an image $x \in \mathcal{X}$ to be classified into category $y \in \mathcal{Y}$. Throughout this work we will only consider probabilistic classifiers that use deep neural networks to obtain the predictions. One can train a probabilistic classifier using a dataset of labeled images. Given a labeled datapoint $x \in \mathcal{X}$ with true label $y^* \in \mathcal{Y}$, the neural network's weights are optimized to reduce the cross entropy between the network's output distribution $P_{Y|x}$ and the true label $y^*$. The cross-entropy loss corresponds to
$$\mathcal{L}(P_{Y|x}, y^*) = -\sum_{y \in \mathcal{Y}} \delta_{y, y^*} \log P_{Y|x}(y), \tag{1}$$
with $\delta_{y, y^*}$ the Kronecker delta. One can measure the degree of uncertainty of the output by calculating the entropy of the output distribution $P_{Y|x}$ for a given image; higher entropy corresponds to higher uncertainty. The entropy is calculated as
$$H(P_{Y|x}) = -\sum_{y \in \mathcal{Y}} P_{Y|x}(y) \log P_{Y|x}(y). \tag{2}$$
Even though it is possible to attach a degree of uncertainty to the estimates of a probabilistic classifier trained on images, these estimates do not indicate which of the true generative variables that describe the image have contributed to the uncertainty. In this paper we assume that those sources of uncertainty are the extrinsic variables that participate in the data generation process. 3.2 DATA GENERATION : INTRINSIC AND EXTRINSIC VARIABLES . We will assume that the underlying generative process for our data can be modelled by the joint probability distribution $P_{X \times V}$ over the data space $\mathcal{X}$ and a set of true generative variables $\mathcal{V}$. The true variables condition the data generative process and represent the properties of the real world. In our case, we will consider that these variables can be divided into two sets $\mathcal{V} = \mathcal{V}^{(I)} \times \mathcal{V}^{(E)}$ that represent the intrinsic and extrinsic variables, respectively. The intrinsic variables are those which represent the properties of the object itself (e.g., its color, shape, size, material), while the extrinsic variables represent external factors (e.g., the light conditions or the relative position and orientation of the object with respect to the observer that generates the image). Intrinsic variables are invariant to the transformations described by the extrinsic variables. The generative process consists of the independent sampling of an intrinsic variable $v^{(I)} \sim P_{V^{(I)}}$ and an extrinsic variable $v^{(E)} \sim P_{V^{(E)}}$. Those generative variables together determine a conditional distribution over the data space, from which a data point $x \sim P_{X|(v^{(I)}, v^{(E)})}$ is sampled. We assume that the intrinsic variables are independent of the extrinsic variables.
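Before moving on, a minimal sketch of the uncertainty measure of Section 3.1: the entropy in (2) is directly computable from the classifier's output distribution (illustrative code, assuming softmax outputs).

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # H(P_{Y|x}) = -sum_y P(y) log P(y); larger value = more uncertain prediction
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

predictive_entropy(np.array([[0.98, 0.01, 0.01],    # confident -> low entropy
                             [0.34, 0.33, 0.33]]))  # ambiguous -> high entropy
```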
3.2 DATA GENERATION : INTRINSIC AND EXTRINSIC VARIABLES . We assume that the underlying generative process for our data can be modelled by the joint probability distribution $P_{X \times V}$ over the data space $\mathcal{X}$ and a set of true generative variables $\mathcal{V}$. The true variables condition the data generative process and represent properties of the real world. In our case, we consider that these variables can be divided into two sets $\mathcal{V} = \mathcal{V}^{(I)} \times \mathcal{V}^{(E)}$, representing the intrinsic and extrinsic variables respectively. The intrinsic variables are those that represent the properties of the object itself (e.g. its color, shape, size, material), while the extrinsic variables represent external factors (e.g. the light conditions, or the relative position and orientation of the object with respect to the observer that generates the image). Intrinsic variables are invariant to the transformations described by the extrinsic variables. The generative process consists of the independent sampling of an intrinsic variable $v^{(I)} \sim P_{V^{(I)}}$ and an extrinsic variable $v^{(E)} \sim P_{V^{(E)}}$. Together, these generative variables determine a conditional distribution over the data space from which a data point $x \sim P_{X|(v^{(I)}, v^{(E)})}$ is sampled. We assume that the intrinsic variables are independent of the extrinsic variables. During data generation both sets of variables are combined to produce the visual appearance of the objects, following the formula for the joint probability density
$$P_{X \times V}(x, (v^{(I)}, v^{(E)})) = P_{X|(v^{(I)}, v^{(E)})}(x)\, P_{V^{(I)}}(v^{(I)})\, P_{V^{(E)}}(v^{(E)}). \qquad (3)$$
We assume that the sets of intrinsic and extrinsic variables can be separated into finite products of $M_I, M_E \in \mathbb{N}$ variables, such that $\mathcal{V}^{(I)} = \mathcal{V}^{(I)}_1 \times \cdots \times \mathcal{V}^{(I)}_{M_I}$ and $\mathcal{V}^{(E)} = \mathcal{V}^{(E)}_1 \times \cdots \times \mathcal{V}^{(E)}_{M_E}$. The total number of true generative variables is then $M = M_I + M_E$. 3.3 DISENTANGLEMENT FOR INTERPRETABILITY . Learning disentangled representations is the task of creating interpretable low-dimensional data representations that separate the information about the variables involved in the generation of the data (Locatello et al., 2018). There is no common agreement on the definition of a disentangled representation. However, two properties have been proposed for disentangled representations that will be useful for our goal of producing interpretable measures of uncertainty in classification predictions:
• Modularity : no more than a single dimension of the data representation should encode information about a true generative variable.
• Compactness : each of the true generative variables is encoded by a single dimension of the data representation.
If a data representation is both modular and compact, we can identify for each true generative variable the data representation dimension that carries all information about it. Consider $\mathcal{Z} = \mathcal{Z}_1 \times \mathcal{Z}_2 \times \cdots \times \mathcal{Z}_D$ a $D$-dimensional data representation space. One can measure the compactness of a learned representation via the mutual information between a data representation dimension $\mathcal{Z}_i$ and a true variable $\mathcal{V}_m$, $I(P_{Z_i}, P_{V_m}) = \mathrm{KL}(P_{Z_i \times V_m} \,\|\, P_{Z_i} \otimes P_{V_m})$. A disentangled representation is modular and compact if for each generative variable $\mathcal{V}_m$ there is a unique latent dimension $\mathcal{Z}_i$ such that $I(P_{Z_i}, P_{V_m}) \neq 0$ and $I(P_{Z_j}, P_{V_m}) = 0$ for all $j \neq i$. If a disentangled data representation is obtained which fulfills both modularity and compactness, then we can separate the latent dimensions in such a way that there is a decomposition of the latent space $\mathcal{Z} = \mathcal{Z}^{(I)} \times \mathcal{Z}^{(E)}$ into intrinsic and extrinsic sets of latent dimensions. This would mean that a probabilistic classifier trained on the latent variables, $P_{Y|Z}(y) = P_{Y|Z^{(I)}}(y)$, would depend only on the intrinsic variables. However, it has been shown (Locatello et al., 2018; Mathieu et al., 2018) that disentanglement cannot be achieved without supervision, so one might not be able to achieve perfect modularity or compactness. This means that some information about the true intrinsic variables might be contained in the extrinsic latent dimensions, so that the uncertainty depends on those variables, as seen in Section 4.2.
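Both properties can be probed empirically with the mutual information defined above. The sketch below is a minimal illustration, assuming discrete (histogram-binned) estimates and synthetic data; the estimator and all variable names are our own, not the paper's evaluation protocol.

```python
import numpy as np

def mutual_information(z, v, bins=20):
    """Histogram estimate of I(Z_i; V_m) = KL(P_{Z_i x V_m} || P_{Z_i} (x) P_{V_m})."""
    joint, _, _ = np.histogram2d(z, v, bins=bins)
    joint = joint / joint.sum()
    pz, pv = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0  # sum only over cells with probability mass
    return np.sum(joint[nz] * np.log(joint[nz] / np.outer(pz, pv)[nz]))

rng = np.random.default_rng(0)
v_true = rng.uniform(size=(5000, 2))                 # two true generative factors
z_codes = np.stack([v_true[:, 0],                    # latent dim 0 encodes factor 0 ...
                    rng.uniform(size=5000)], axis=1) # ... latent dim 1 encodes nothing

mi = np.array([[mutual_information(z_codes[:, i], v_true[:, m]) for m in range(2)]
               for i in range(2)])
print(np.round(mi, 2))
# Compactness: each column (factor) should have one dominant entry;
# modularity: each row (latent dimension) should respond to at most one factor.
```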
The paper presents an approach that, for every object, identifies the factors that have a high impact on the model's uncertainty. The approach consists of i) learning disentangled representations, ii) training a classifier on top of the learned representations, and iii) a technique that selects the dimensions of an object's representation (factors) that impact the uncertainty of the model the most. Steps i) and ii) are done in a known way: the disentanglement is obtained via the Deep Convolutional Inverse Graphics Network (Kulkarni et al., 2015), and the classifier is trained with a standard maximum likelihood approach.
SP:eed27a2d9c5d77bfc9aacb5d2ca5c7885b2e29f9
Conservative Safety Critics for Exploration
1 INTRODUCTION . Reinforcement learning (RL) is a powerful framework for learning-based control because it can enable agents to learn to make decisions automatically through trial and error. However, in the real world, the cost of those trials – and those errors – can be quite high: a quadruped learning to run as fast as possible might fall down and crash, and then be unable to attempt further trials due to extensive physical damage. At the same time, learning complex skills without any failures at all is likely impossible. Even humans and animals regularly experience failure, but quickly learn from their mistakes and behave cautiously in risky situations. In this paper, our goal is to develop safe exploration methods for RL that similarly exhibit conservative behavior, erring on the side of caution in particularly dangerous settings and limiting the number of catastrophic failures. A number of previous approaches have tackled this problem of safe exploration, often by formulating the problem as a constrained Markov decision process (CMDP) (Garcıa & Fernández, 2015; Altman, 1999). However, most of these approaches require additional assumptions, such as access to a function that can be queried to check if a state is safe (Thananjeyan et al., 2020), access to a default safe controller (Koller et al., 2018; Berkenkamp et al., 2017), or knowledge of all the unsafe states (Fisac et al., 2019); others only obtain safe policies after training converges, while being unsafe during the training process (Tessler et al., 2018; Dalal et al., 2018). In this paper, we propose a general safe RL algorithm with bounds on the probability of failures during training. Our method only assumes access to a sparse (e.g., binary) indicator for catastrophic failure, in the standard RL setting. We train a conservative safety critic that overestimates the probability of catastrophic failure, building on tools from the recently proposed conservative Q-learning framework (Kumar et al., 2020) for offline RL. In order to bound the likelihood of catastrophic failures at every iteration, we impose a KL-divergence constraint on successive policy updates so that the stationary distributions of states induced by the old and the new policies are not arbitrarily different. Based on the safety critic's value, we consider a chance constraint denoting the probability of failure, and optimize the policy through primal-dual gradient descent. Our key contributions are an algorithm that we refer to as Conservative Safety Critics (CSC), which learns a conservative estimate of how safe a state is and uses this estimate for safe exploration and policy updates, together with theoretical upper bounds on the probability of failures throughout training. Through empirical evaluation in five simulated robotic control domains spanning manipulation, navigation, and locomotion, we show that CSC is able to learn effective policies while reducing the rate of catastrophic failures by up to 50% over prior safe exploration methods. 2 PRELIMINARIES . We describe the problem setting of a constrained MDP (Altman, 1999) specific to our approach, and the conservative Q-learning (Kumar et al., 2020) framework that we build on in our algorithm. Constrained MDPs .
We take a constrained RL view of safety (Garcıa & Fernández, 2015; Achiam et al., 2017), and define safe exploration as the process of ensuring that the constraints of the constrained MDP (CMDP) are satisfied while exploring the environment to collect data samples. A CMDP is a tuple $(\mathcal{S}, \mathcal{A}, P, R, \gamma, \mu, \mathcal{C})$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ is a transition kernel, $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is a task reward function, $\gamma \in (0, 1)$ is a discount factor, $\mu$ is a starting state distribution, and $\mathcal{C} = \{(c_i : \mathcal{S} \to \{0, 1\}, \chi_i \in \mathbb{R}) \mid i \in \mathbb{Z}\}$ is a set of (safety) constraints that the agent must satisfy, with constraint functions $c_i$ taking values either 0 (alive) or 1 (failure), and limits $\chi_i$ defining the maximal allowable amount of non-satisfaction in terms of expected probability of failure. A stochastic policy $\pi : \mathcal{S} \to \mathcal{P}(\mathcal{A})$ is a mapping from states to action distributions, and the set of all stationary policies is denoted by $\Pi$. Without loss of generality, we can consider a single constraint, where $C$ denotes the constraint satisfaction function $C : \mathcal{S} \to \{0, 1\}$ ($C \equiv \mathbb{1}\{\text{failure}\}$), defined similarly to the task reward function, with an upper limit $\chi$. Note that since we assume only a sparse binary indicator of failure $C(s)$ from the environment, in purely online training the agent must fail a few times during training, and hence 0 failures is impossible. However, we will discuss how we can reduce the number of failures to a small rate, for constraint satisfaction. We define the discounted state distribution of a policy $\pi$ as $d^\pi(s) = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi)$, the state value function as $V^\pi_R(s) = \mathbb{E}_{\tau \sim \pi}[R(\tau) \mid s_0 = s]$, the state-action value function as $Q^\pi_R(s, a) = \mathbb{E}_{\tau \sim \pi}[R(\tau) \mid s_0 = s, a_0 = a]$, and the advantage function as $A^\pi_R(s, a) = Q^\pi_R(s, a) - V^\pi_R(s)$. We define similar quantities for the constraint function: $V_C$, $Q_C$, and $A_C$. So we have $V^\pi_R(\mu) = \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty} R(s_t, a_t)]$, and $V^\pi_C(\mu)$ denotes the average episodic failures, which can also be interpreted as the expected probability of failure, since $V^\pi_C(\mu) = \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty} C(s_t)] = \mathbb{E}_{\tau \sim \pi}[\mathbb{1}\{\text{failure}\}] = P(\text{failure} \mid \mu)$. For a policy parameterized as $\pi_\phi$, we denote $d^\pi(s)$ as $\rho_\phi(s)$. Note that although $C : \mathcal{S} \to \{0, 1\}$ takes on binary values in our setting, the function $V^\pi_C(\mu)$ is a continuous function of the policy $\pi$. Conservative Q-Learning . CQL (Kumar et al., 2020) is a method for offline/batch RL (Lange et al., 2012; Levine et al., 2020) that aims to learn a Q-function such that the expected value of a policy under the learned Q-function lower-bounds its true value, preventing over-estimation due to out-of-distribution actions. In addition to training Q-functions via the standard Bellman error, CQL minimizes the expected Q-values under a particular distribution of actions, $\mu(a|s)$, and maximizes the expected Q-value under the on-policy distribution, $\pi(a|s)$. CQL in and of itself might lead to unsafe exploration, whereas we will show in Section 3 how the theoretical tool introduced in CQL can be used to devise a safe RL algorithm. 3 THE CONSERVATIVE SAFE-EXPLORATION FRAMEWORK . In this section we describe our safe exploration framework. The safety constraint $C(s)$ defined in Section 2 is an indicator of catastrophic failure: $C(s) = 1$ when a state $s$ is unsafe and $C(s) = 0$ when it is not, and we ideally desire $C(s) = 0$ for all $s \in \mathcal{S}$ that the agent visits.
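Because $V^\pi_C(\mu)$ is just the expectation of a binary failure indicator over episodes, it can be estimated with plain Monte Carlo rollouts. A minimal sketch, assuming a gym-style `env` and a `policy` callable (both interfaces are our own placeholders, not part of the paper):

```python
def estimate_failure_probability(env, policy, n_episodes=100, max_steps=1000):
    """Monte Carlo estimate of V_C(mu) = E_tau[sum_t C(s_t)] = P(failure | mu).

    Assumes env.reset() -> s, env.step(a) -> (s, r, done, info), and that
    info['failure'] holds the binary constraint indicator c = C(s').
    """
    failures = 0
    for _ in range(n_episodes):
        s, done, t = env.reset(), False, 0
        while not done and t < max_steps:
            s, r, done, info = env.step(policy(s))
            t += 1
            if info['failure']:   # an episode contributes at most one failure
                failures += 1
                break
    return failures / n_episodes
```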
Since we do not make any assumptions about the problem structure for RL (for example, a known dynamics model), we cannot guarantee this, but can at best reduce the probability of failure in every episode. So, we formulate the constraint as $V^\pi_C(\mu) = \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty} C(s_t)] \leq \chi$, where $\chi \in [0, 1)$ denotes the allowed probability of failure. Our approach is motivated by the insight that by being “conservative” with respect to how safe a state is, and hence by over-estimating this probability of failure, we can effectively ensure constrained exploration. Figure 1 provides an overview of the approach. The key idea of our algorithm is to train a conservative safety critic, denoted $Q_C(s, a)$, that overestimates how unsafe a particular state is, and to modify the exploration strategy to appropriately account for this safety under-estimate (by overestimating the probability of failure). During policy evaluation in the environment, we use the safety critic $Q_C(s, a)$ to reduce the chance of catastrophic failures by checking whether taking action $a$ in state $s$ has $Q_C(s, a)$ less than a threshold. If not, we re-sample $a$ from the current policy $\pi(a|s)$. We now discuss our algorithm more formally. We start by describing the procedure for learning the safety critic $Q_C$, then discuss how we incorporate it into the policy gradient updates, and finally discuss how we perform safe exploration (Garcıa & Fernández, 2015) during policy execution in the environment. Overall objective . Our objective is to learn an optimal policy $\pi^*$ that maximizes task rewards while respecting the constraint on the expected probability of failures:
$$\pi^* = \arg\max_{\pi \in \Pi_C} V^\pi_R(\mu) \quad \text{where} \quad \Pi_C = \{\pi \in \Pi : V^\pi_C(\mu) \leq \chi\} \qquad (1)$$
Learning the safety critic . The safety critic $Q_C$ is used to obtain an estimate of how unsafe a particular state is, by providing an estimate of the probability of failure that will be used to guide exploration. We want these estimates to be “conservative”, in the sense that the probability of failure should be an over-estimate of the actual probability, so that the agent can err on the side of caution while exploring. To train such a critic, we incorporate tools from CQL and estimate $Q_C$ through updates similar to those obtained by reversing the sign of $\alpha$ in Equation 2 of CQL(H) (Kumar et al., 2020). This gives us an upper bound on $Q_C$ instead of the lower bound ensured by CQL. We denote the over-estimated advantage corresponding to this safety critic as $\hat{A}_C$. Formally, the safety critic is trained via the following objective, where the objective inside $\arg\min$ is called CQL($\zeta$), $\zeta$ parameterizes $Q_C$, and $k$ denotes the $k$-th update iteration:
$$\hat{Q}^{k+1}_C \leftarrow \arg\min_{Q_C} \alpha \cdot \left( -\mathbb{E}_{s \sim \mathcal{D}_{env}, a \sim \pi_\phi(a|s)}[Q_C(s, a)] + \mathbb{E}_{(s, a) \sim \mathcal{D}_{env}}[Q_C(s, a)] \right) + \frac{1}{2}\, \mathbb{E}_{(s, a, s', c) \sim \mathcal{D}_{env}}\left[ \left( Q_C(s, a) - \hat{\mathcal{B}}^{\pi_\phi} \hat{Q}^k_C(s, a) \right)^2 \right] \qquad (2)$$
Here, $\hat{\mathcal{B}}^{\pi_\phi}$ is the empirical Bellman operator discussed in Section 3.1 and Equation 2 of Kumar et al. (2020), and $\alpha$ is a weight on the first term of Equation 2 that controls the magnitude of value over-estimation. For states sampled from the replay buffer $\mathcal{D}_{env}$, the first term seeks to maximize the expectation of $Q_C$ over actions sampled from the current policy, while the second term seeks to minimize the expectation of $Q_C$ over actions sampled from the replay buffer. $\mathcal{D}_{env}$ can include off-policy data, and also offline data (if available).
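For concreteness, a sketch of one gradient step on the objective in Equation 2, written in PyTorch. The network, policy, and batch interfaces are placeholders we introduce for illustration; in particular, treating failure states as absorbing via the `(1 - c)` factor in the backup is our assumption, not something stated in the excerpt.

```python
import torch

def safety_critic_loss(q_c, q_c_target, policy, batch, alpha, gamma):
    """CQL-style conservative loss for the safety critic (Eq. 2)."""
    s, a, s2, c = batch['s'], batch['a'], batch['s2'], batch['c']

    # Conservatism term with the sign of alpha flipped relative to CQL(H):
    # push Q_C up on current-policy actions, down on dataset actions -> over-estimate.
    a_pi = policy.sample(s)
    conservative = -q_c(s, a_pi).mean() + q_c(s, a).mean()

    # Empirical Bellman backup on the failure signal c = C(s').
    with torch.no_grad():
        a2 = policy.sample(s2)
        target = c + gamma * (1.0 - c) * q_c_target(s2, a2)  # assumes failures are absorbing

    bellman = 0.5 * ((q_c(s, a) - target) ** 2).mean()
    return alpha * conservative + bellman
```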
We interleave the gradient descent updates for training $Q_C$ with gradient ascent updates for the policy $\pi_\phi$ and gradient descent updates for the Lagrange multiplier $\lambda$, which we describe next. Policy learning . Since we want to learn policies that obey the constraint we set in terms of the safety critic, we can solve the objective in Equation 1 via:
$$\max_{\pi_\phi} \mathbb{E}_{s \sim \rho_\phi, a \sim \pi_\phi}[A^{\pi_\phi}_R(s, a)] \quad \text{s.t.} \quad \mathbb{E}_{s \sim \rho_\phi, a \sim \pi_\phi}[Q_C(s, a)] \leq \chi \qquad (3)$$
We can construct a Lagrangian and solve the policy optimization problem through primal-dual gradient descent:
$$\max_{\pi_\phi} \min_{\lambda \geq 0} \mathbb{E}_{s \sim \rho_\phi, a \sim \pi_\phi}\left[A^{\pi_\phi}_R(s, a) - \lambda(Q_C(s, a) - \chi)\right]$$
We can apply vanilla policy gradients or an actor-critic style Q-function approximator for optimization. Here, $Q_C$ is the safety critic trained through CQL as described in Equation 2. We defer specific implementation details for policy learning to the final paragraph of this section.
Algorithm 1 CSC : safe exploration with conservative safety critics
1 : Initialize $V^r_\theta$ (task value fn), $Q^s_\zeta$ (safety critic), policy $\pi_\phi$, $\lambda$, $\mathcal{D}_{env}$, thresholds $\epsilon$, $\delta$, $\chi$.
2 : Set $\hat{V}^{\pi_{\phi_{old}}}_C(\mu) \leftarrow \chi$. ▷ $\hat{V}^{\pi_{\phi_{old}}}_C(\mu)$ denotes avg. failures in the previous epoch.
3 : for epochs until convergence do ▷ Execute actions in the environment. Collect on-policy samples.
4 : for episode $e$ in $\{1, \dots, M\}$ do
5 : Set $\epsilon \leftarrow (1 - \gamma)(\chi - \hat{V}^{\pi_{\phi_{old}}}_C(\mu))$
6 : Sample $a \sim \pi_{\phi_{old}}(s)$. Execute $a$ iff $Q_C(s, a) \leq \epsilon$. Else, resample $a$.
7 : Obtain next state $s'$, $r = R(s, a)$, $c = C(s')$.
8 : $\mathcal{D}_{env} \leftarrow \mathcal{D}_{env} \cup \{(s, a, s', r, c)\}$ ▷ If available, $\mathcal{D}_{env}$ can be seeded with off-policy/offline data
9 : end for
10 : Store the average episodic failures $\hat{V}^{\pi_{\phi_{old}}}_C(\mu) \leftarrow \frac{1}{M}\sum_{e=1}^{M} \hat{V}^e_C$
11 : for step $t$ in $\{1, \dots, N\}$ do ▷ Policy and Q-function updates using $\mathcal{D}_{env}$
12 : Gradient ascent on $\phi$ and (optionally) add entropy regularization (Appendix A.2)
13 : Gradient updates for the Q-function: $\zeta := \zeta - \eta_Q \nabla_\zeta \text{CQL}(\zeta)$
14 : Gradient descent step on the Lagrange multiplier $\lambda$ (Appendix A.2)
15 : end for
16 : $\phi_{old} \leftarrow \phi$
17 : end for
Executing rollouts (i.e., safe exploration) . Since we are interested in minimizing the number of constraint violations while exploring the environment, we do not simply execute the learned policy iterate in the environment for active data collection. Rather, we query the safety critic $Q_C$ to obtain an estimate of how unsafe an action is, and choose a safe action via rejection sampling. Formally, we sample an action $a \sim \pi_{\phi_{old}}(s)$ and check whether $Q_C(s, a) \leq \epsilon$. We keep re-sampling actions from $\pi_{\phi_{old}}(s)$ until this condition is met, and once met, we execute that action in the environment. In practice, we execute this loop for 100 iterations, and among all sampled actions in state $s$ for which $Q_C(s, a) \leq \epsilon$, we choose the one with minimal $Q_C(s, a)$. If no action $a$ is found that maintains $Q_C(s, a) \leq \epsilon$, we choose the $a$ for which $Q_C(s, a)$ is minimal (although above the threshold). Here, $\epsilon$ is a threshold that varies across iterations and is defined as $\epsilon = (1 - \gamma)(\chi - \hat{V}^{\pi_{\phi_{old}}}_C(\mu))$, where $\hat{V}^{\pi_{\phi_{old}}}_C(\mu)$ is the average episodic failure count in the previous epoch, a sample estimate of the true $V^{\pi_{\phi_{old}}}_C(\mu)$. This value of $\epsilon$ is obtained theoretically such that Lemma 1 holds. In the replay buffer $\mathcal{D}_{env}$ we store tuples of the form $(s, a, s', r, c)$, where $s$ is the previous state, $a$ is the action executed, $s'$ is the next state, $r$ is the task reward from the environment, and $c = C(s')$ is the constraint value; in our setting, $c$ is binary, with 0 denoting a live agent and 1 denoting failure.
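The rejection-sampling step (Algorithm 1, line 6, and the rollout paragraph above) can be sketched as follows; the 100-sample budget follows the text, while the `policy` and `q_c` interfaces are assumed for illustration.

```python
def safe_action(policy, q_c, s, eps, n_tries=100):
    """Sample a ~ pi(s); among samples with Q_C(s, a) <= eps pick the minimum,
    otherwise fall back to the least-unsafe sample seen."""
    safe, best_a, best_q = [], None, float('inf')
    for _ in range(n_tries):
        a = policy.sample(s)
        q = float(q_c(s, a))
        if q < best_q:
            best_a, best_q = a, q
        if q <= eps:
            safe.append((q, a))
    if safe:
        return min(safe, key=lambda t: t[0])[1]  # safest among admissible actions
    return best_a  # no admissible action found: least unsafe, though above threshold
```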
Overall algorithm . Our overall algorithm, shown in Algorithm 1, executes policy rollouts in the environment while respecting the constraint $Q_C(s, a) \leq \epsilon$, stores the observed data tuples in the replay buffer $\mathcal{D}_{env}$, and uses the collected tuples to train the safety critic $Q_C$ using Equation 2 and to update the policy and the dual variable $\lambda$ following the optimization objective in Equation 6. Implementation details . Here we discuss the specifics of the implementation for policy optimization. We consider the surrogate policy improvement problem (Sutton, 2020):
$$\max_{\pi_\phi} \mathbb{E}_{s \sim \rho_{\phi_{old}}, a \sim \pi_\phi}[A^{\pi_{\phi_{old}}}_R(s, a)] \quad \text{s.t.} \quad \mathbb{E}_{s \sim \rho_{\phi_{old}}}[D_{KL}(\pi_{\phi_{old}}(\cdot|s) \,\|\, \pi_\phi(\cdot|s))] \leq \delta \quad \text{and} \quad V^{\pi_\phi}_C(\mu) \leq \chi \qquad (4)$$
Here we have introduced a $D_{KL}$ constraint to ensure that successive policies are close, which lets us bound the expected failures of the new policy in terms of the expected failures of the old policy in Section 4. We replace the $D_{KL}(\pi_{\phi_{old}}(\cdot|s) \,\|\, \pi_\phi(\cdot|s))$ term by its second-order Taylor expansion (expressed in terms of the Fisher information matrix $F$) and enforce the resulting constraint exactly (Schulman et al., 2015a). Following Equation 22 (Appendix A.2), we have
$$\max_{\pi_\phi} \mathbb{E}_{s \sim \rho_{\phi_{old}}, a \sim \pi_\phi}[A^{\pi_{\phi_{old}}}_R(s, a)] \quad \text{s.t.} \quad V^{\pi_{\phi_{old}}}_C(\mu) + \frac{1}{1-\gamma}\, \mathbb{E}_{s \sim \rho_{\phi_{old}}, a \sim \pi_\phi}[A_C(s, a)] \leq \chi \quad \text{and} \quad \mathbb{E}_{s \sim \rho_{\phi_{old}}}[D_{KL}(\pi_{\phi_{old}}(\cdot|s) \,\|\, \pi_\phi(\cdot|s))] \leq \delta \qquad (5)$$
We replace the true $A_C$ by the learned over-estimate $\hat{A}_C$ and consider the Lagrangian dual of this constrained problem, which we can solve by alternating gradient descent, as shown below:
$$\max_{\pi_\phi} \min_{\lambda \geq 0} \mathbb{E}_{s \sim \rho_{\phi_{old}}, a \sim \pi_\phi}[A^{\pi_{\phi_{old}}}_R(s, a)] - \lambda \left( V^{\pi_{\phi_{old}}}_C(\mu) + \frac{1}{1-\gamma}\, \mathbb{E}_{s \sim \rho_{\phi_{old}}, a \sim \pi_\phi}[\hat{A}_C(s, a)] - \chi \right) \quad \text{s.t.} \quad \frac{1}{2}(\phi - \phi_{old})^T F (\phi - \phi_{old}) \leq \delta \qquad (6)$$
Note that although we use the FIM for the updates, we can also apply vanilla policy gradients or an actor-critic style Q-function approximator to optimize Equation 6. Detailed derivations of the gradient updates are in Appendix A.2.
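A rough sketch of the alternating updates for Equation 6, dropping the Fisher-information trust region for brevity (the text notes vanilla policy gradients are also admissible). All estimator and variable names here are our own illustrative assumptions.

```python
import torch

def primal_dual_step(policy_opt, lam, lam_opt, logp, adv_r, adv_c_hat, v_c_old, chi, gamma):
    """One ascent step on pi_phi and one descent step on lambda for Eq. 6.

    logp: log pi_phi(a|s) on a batch; adv_r / adv_c_hat: detached estimates of A_R
    and the over-estimated A_C; v_c_old: average episodic failures of pi_phi_old.
    """
    # Policy step: maximize A_R - lambda * A_C (chi and V_C_old are constants in phi).
    policy_loss = -(logp * (adv_r - lam.detach() * adv_c_hat)).mean()
    policy_opt.zero_grad(); policy_loss.backward(); policy_opt.step()

    # Dual step: lambda grows when the estimated constraint is violated.
    constraint = v_c_old + adv_c_hat.mean() / (1.0 - gamma) - chi
    lam_loss = -lam * constraint
    lam_opt.zero_grad(); lam_loss.backward(); lam_opt.step()
    with torch.no_grad():
        lam.clamp_(min=0.0)  # enforce lambda >= 0
```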
This paper introduces a method for performing safe exploration in RL. It addresses the problem of ensuring that partially-trained policies do not visit unsafe regions of the state space, while still being exploratory enough to collect useful training experiences. The proposed technique is based on learning conservative estimates of the probability of a catastrophic failure occurring at different states. Based on these, the authors show that it is possible to upper bound the likelihood of reaching an unsafe state at every training step, thereby guaranteeing that all safety constraints are satisfied with high probability. Importantly, the authors also show that (at least asymptotically), the method is no worse than standard unsafe reinforcement learning algorithms.
SP:fa6456aa23ea8635e04081a043a07915b1c0808f
$\alpha$VIL: Learning to Leverage Auxiliary Tasks for Multitask Learning
1 INTRODUCTION . In Machine Learning, we often encounter tasks that are at least similar, if not almost identical. For example, in Computer Vision, multiple datasets might require object segmentation or recognition (Deng et al., 2009; LeCun et al., 1998; Lin et al., 2014), whereas in Natural Language Processing, tasks can deal with sentence entailment (De Marneffe et al., 2019) or paraphrase recognition (Quirk et al., 2004), both of which share similarities and fall under the category of Natural Language Understanding. Given that many such datasets are accessible to researchers, a naturally emerging question is whether we can leverage their commonalities in training setups. Multitask Learning (Caruana, 1993) is a Machine Learning paradigm that aims to address the above by training a group of sufficiently similar tasks together. Instead of optimizing each individual task's objective, a shared underlying model is fit so as to maximize a global performance measure, for example a LeNet-like architecture (LeCun et al., 1998) for Computer Vision, or a Transformer-based encoder (Vaswani et al., 2017) for Natural Language Processing problems. For a broader perspective on Multitask Learning approaches, we refer the reader to the overviews of Ruder (2017) and Vandenhende et al. (2020). In this paper we introduce αVIL, an approach to Multitask Learning that estimates individual task weights through direct, gradient-based meta-optimization on a weighted accumulation of task-specific model updates. To our knowledge, this is the first attempt to leverage task-specific model deltas, that is, the realized differences of model parameters before and after a task's training steps, to directly optimize task weights for target task-oriented multitask learning. We perform initial experiments on multitask setups in two domains, Computer Vision and Natural Language Understanding, and show that our method is able to successfully learn a good weighting of classification tasks. 2 RELATED WORK . Multitask Learning (MTL) can be divided into techniques which aim to improve a joint performance metric for a group of tasks (Caruana, 1993), and methods which use auxiliary tasks to boost the performance of a single target task (Caruana, 1998; Bingel & Søgaard, 2017). Some combinations of tasks suffer when their model parameters are shared, a phenomenon that has been termed negative transfer. There have been efforts to identify the cause of negative transfer. Du et al. (2018) use negative cosine similarity between gradients as a heuristic for determining negative transfer between target and auxiliary tasks. Yu et al. (2020) suggest that these conflicting gradients are detrimental to training when the joint optimization landscape has high positive curvature and there is a large difference in gradient magnitudes between tasks. They address this by projecting task gradients onto the normal plane when they conflict with each other. Wu et al. (2020) hypothesize that the degree of transfer between tasks is influenced by the alignment of their data samples, and propose an algorithm which adaptively aligns embedded inputs. Sener & Koltun (2018) avoid the issue of negative transfer due to competing objectives altogether, by casting MTL as a Multiobjective Optimization problem and searching for a Pareto-optimal solution.
In this work, we focus on the target task approach to Multitask Learning, tackling the problem of auxiliary task selection and weighting in order to avoid negative transfer and maximally exploit positively related tasks. Auxiliary tasks have been used to improve target task performance in Computer Vision, Reinforcement Learning (Jaderberg et al., 2016), and Natural Language Processing (Collobert et al., 2011). They are commonly selected based on knowledge about which tasks should be beneficial to each other, through the insight that they utilize features similar to the target task (Caruana, 1998), or are grouped empirically (Søgaard & Goldberg, 2016). While this may often result in successful task selection, such approaches have some obvious drawbacks. Manual feature-based selection requires the researcher to have deep knowledge of the available data, an undertaking that becomes ever more difficult with the introduction of more datasets. Furthermore, this approach is prone to failure when it comes to Deep Learning, where model behaviour does not necessarily follow human intuition. Empirical task selection, e.g., through trialling various task combinations, quickly becomes computationally infeasible when the number of tasks grows large. Therefore, in both approaches to Multitask Learning (optimizing either a target task using auxiliary data or a global performance metric), automatic task weighting during training can be beneficial for optimally exploiting relationships between tasks. To this end, Guo et al. (2019) use a two-stage approach: first, a subset of auxiliary tasks which are most likely to improve the main task's validation performance is selected using a Multi-Armed Bandit, whose estimates are continuously updated during training; the second step uses a Gaussian Process to infer a mixing ratio for data points belonging to the selected tasks, which subsequently are used to train the model. A different approach by Wang et al. (2020) directly differentiates, at each training step, the model's validation loss with respect to the probability of selecting instances of the training data (parametrised by a scorer network). This approach is used in multilingual translation by training the scorer to output probabilities for all of the tasks' training data. However, this method relies on noisy per-step estimates of the gradients of the scorer's parameters, as well as on an analytical derivation that depends on the optimizer used; our method, in comparison, is agnostic to the optimization procedure. Most similar to our method, Sivasankaran et al. (2017) recently introduced Discriminative Importance Weighting for acoustic model training. In their work, the authors train a model on the CHiME-3 dataset (Barker et al., 2015), adding 6 artificially perturbed datasets as auxiliary tasks. Their method relies on estimating model performance on the targeted validation data when training tasks in isolation, and subsequently using those estimates as a proxy to adjust individual task weights. Our method differs from this approach by directly optimizing the target validation loss with respect to the weights applied to the model updates originating from training each task. 3 α-VARIABLE IMPORTANCE LEARNING . The target task-oriented approach to Multitask Learning can be defined as follows.
A set of classification tasks $T = \{t_1, t_2, \dots, t_n\}$ is given, each associated with training and validation datasets $D^{train}_{t_i}$ and $D^{dev}_{t_i}$, as well as a target task $t^* \in T$. We want to find weights $W = \{\omega_1, \omega_2, \dots, \omega_n\}$ capturing the importance of each task, such that training the parameters $\theta$ of a Deep Neural Network on the weighted sum of losses for each task maximizes the model's performance on the target task's development set:
$$\theta^* = \arg\min_\theta \sum_{i=1}^{|T|} \frac{w_i}{\sum_j w_j} \cdot \mathcal{L}^{t_i}(D^{train}_{t_i}, \theta) \quad \text{s.t.} \quad \mathcal{L}^{t^*}(D^{dev}_{t^*}, \theta^*) \approx \min_\theta \mathcal{L}^{t^*}(D^{dev}_{t^*}, \theta) \qquad (1)$$
where $\mathcal{L}^{t_i}(D_{t_i}, \theta)$ is defined as the average loss over all data points in the dataset $D_{t_i}$ for network parameters $\theta$, computed using the appropriate loss function for task $t_i$ (in our case the standard cross-entropy loss):
$$\mathcal{L}^{t_i}(D_{t_i}, \theta) = \frac{1}{|D_{t_i}|} \sum_k L(x_k, y_k; \theta), \quad (x_k, y_k) \in D_{t_i} \qquad (2)$$
The introduction of task weights $w$ in Equation 1 aims at scaling the tasks' model updates in a way that exploits positive transfer towards the target and avoids negative transfer. It is crucial, therefore, to have an efficient and reliable way of estimating the influence of tasks on the target's performance, and of adjusting the weights accordingly during model training. To this end, we introduce α-Variable Importance Learning (αVIL), a novel method for target task-oriented Multitask training, outlined in Algorithm 1. αVIL introduces a number of additional parameters (α-variables) into the model, which are associated with the actually realized task-specific model updates.
Algorithm 1 : The α-Variable Importance Learning algorithm.
Data : model parameters $\theta$; a set of tasks $T = \{t_1, \dots, t_n\}$; a target task $t^*$; training data $D^{train}_{t_i}$ for each task $t_i$; development data $D^{dev}_{t^*}$ for the target task; maximum number of training epochs $E$; ratio $\rho$ of tasks' training data to sample per epoch; number of α tuning steps $s$
Result : updated parameters $\theta$, optimized for performance on $t^*$
1 : $W \leftarrow \{w_{t_i} = 1 \mid t_i \in T\}$ // initialize all task weights to 1
2 : for $e = 1 \dots E$ do
3 : for $t_i \in T$ do
4 : $\tilde{D}_{t_i} \overset{\rho}{\sim} D^{train}_{t_i}$ // sample task-specific data
5 : $\theta_{t_i} \leftarrow \arg\min_{\theta'} \frac{w_{t_i}}{\sum w}\, \mathcal{L}^{t_i}(\tilde{D}_{t_i}, \theta')$ // task's model update starting at $\theta$
6 : $\delta_{t_i} \leftarrow \theta_{t_i} - \theta$
7 : // task-specific weight update, optimizing wrt. α parameters on δ
8 : for $1 \dots s$ do
9 : $\{\alpha_1, \alpha_2, \dots, \alpha_{|T|}\} \leftarrow \arg\min_{\{\alpha_1, \alpha_2, \dots, \alpha_{|T|}\}} \mathcal{L}(D^{dev}_{t^*}, \theta + \alpha_1\delta_1 + \dots + \alpha_{|T|}\delta_{|T|})$
10 : $\theta \leftarrow \theta + \alpha_1\delta_1 + \dots + \alpha_{|T|}\delta_{|T|}$
11 : $W \leftarrow \{w_{t_i} + (\alpha_{t_i} - 1) \mid t_i \in T\}$
During training, our approach first performs weighted task-specific model updates on a proportion of the available training data for each individual task, starting from the current model parameters. It collects the resulting model deltas, i.e., the differences between the model's parameters before and after the single-task update, and resets the model. After this delta collection phase, the optimal mixing factors $\{\alpha_1, \alpha_2, \dots, \alpha_{|T|}\}$ of the model updates are found, such that the parameters resulting from the interpolation of the scaled task-specific δ's minimize the loss on the target task's development data. The α-parameters can be optimized through any type of optimization method; however, since our models are end-to-end differentiable, we can backpropagate directly and use gradient descent. Once we have found the optimal mixing ratio of task updates, we write the new state back to the model and update the task weights subject to the optimized α parameters.
The task weight update rule (line 11 in Algorithm 1), combined with the weighted task-specific model updates (line 5), captures the intuition that if a task update was up- or down-scaled in the α-tuning stage, we likely want to update the parameters more (or less) for this task during the next delta collection phase.
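A compact sketch of one αVIL epoch (Algorithm 1), treating the model as a flat parameter vector; `task_update` and `target_dev_loss` stand in for the task-specific training step and the development-set loss, and are assumptions of this illustration.

```python
import torch

def alpha_vil_epoch(theta, tasks, weights, task_update, target_dev_loss, s_steps=10, lr=0.1):
    """Delta collection (lines 3-6), alpha tuning (lines 8-9), write-back (lines 10-11)."""
    deltas = []
    for task, w in zip(tasks, weights):                   # lines 3-6: per-task updates
        theta_i = task_update(theta.clone(), task, w / sum(weights))
        deltas.append((theta_i - theta).detach())

    alphas = torch.ones(len(tasks), requires_grad=True)
    opt = torch.optim.SGD([alphas], lr=lr)
    for _ in range(s_steps):                              # line 9: tune mixing factors
        mixed = theta + sum(a * d for a, d in zip(alphas, deltas))
        loss = target_dev_loss(mixed)
        opt.zero_grad(); loss.backward(); opt.step()

    with torch.no_grad():                                 # lines 10-11: write back
        theta = theta + sum(a * d for a, d in zip(alphas, deltas))
        weights = [w + (a.item() - 1.0) for w, a in zip(weights, alphas)]
    return theta, weights
```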
This paper proposes a novel multi-task learning method which adjusts task weights dynamically during training by exploiting task-specific updates of the model parameters between training epochs. Specifically, the proposed method takes the differences between the model's parameters before and after each single-task update; mixing factors for these model deltas are then found so as to minimize the loss on the target task's development data. Empirical studies are performed on tasks in computer vision and natural language understanding.
SP:6945d14d266d0ca1c931a55091166604e7984604
Learning What To Do by Simulating the Past
1 INTRODUCTION . As deep learning has become popular, many parts of AI systems that were previously designed by hand have been replaced with learned components. Neural architecture search has automated architecture design (Zoph & Le, 2017; Elsken et al., 2019), population-based training has automated hyperparameter tuning (Jaderberg et al., 2017), and self-supervised learning has led to impressive results in language modeling (Devlin et al., 2019; Radford et al., 2019; Clark et al., 2020) and reduced the need for labels in image classification (Oord et al., 2018; He et al., 2020; Chen et al., 2020). However, in reinforcement learning, one component continues to be designed by humans: the task specification. Hand-coded reward functions are notoriously difficult to specify (Clark & Amodei, 2016; Krakovna, 2018), and learning from demonstrations (Ng et al., 2000; Fu et al., 2018) or preferences (Wirth et al., 2017; Christiano et al., 2017) requires a lot of human input. Is there a way that we can automate even the specification of what must be done? It turns out that we can learn part of what the user wants simply by looking at the state of the environment: after all, the user will already have optimized the state towards their own preferences (Shah et al., 2019). For example, when a robot is deployed in a room containing an intact vase, it can reason that if its user wanted the vase to be broken, it would already have been broken; thus she probably wants the vase to remain intact. However, we must ensure that the agent distinguishes between aspects of the state that the user couldn't control and aspects that the user deliberately designed. This requires us to simulate what the user must have done to lead to the observed state: anything that the user put effort into in the past is probably something the agent should do as well. As illustrated in Figure 1, if we observe a Cheetah balancing on its front leg, we can infer how it must have launched itself into that position. Unfortunately, it is unclear how to simulate these past trajectories that lead to the observed state. So far, this has only been done in gridworlds, where all possible trajectories can be considered using dynamic programming (Shah et al., 2019). Our key insight is that we can sample such trajectories by starting at the observed state and simulating backwards in time. To enable this, we derive a gradient that is amenable to estimation through backwards simulation, and learn an inverse policy and an inverse dynamics model using supervised learning to perform the backwards simulation. Then, the only remaining challenge is finding a reward representation that can be meaningfully updated from a single state observation. To that end, rather than defining the reward directly on the raw input space, we represent it as a linear combination of features learned through self-supervised representation learning. Putting these components together, we propose the Deep Reward Learning by Simulating the Past (Deep RLSP) algorithm. We evaluate Deep RLSP on MuJoCo environments and show that it can recover fairly good performance on the task reward given access to a small number of states sampled from a policy optimized for that reward. We also use Deep RLSP to imitate skills generated using a skill discovery algorithm
(Sharma et al., 2020), in some cases given just a single state sampled from the policy for that skill. Information from the environment state cannot completely replace reward supervision. For example, it would be hard to infer how clean Bob would ideally want his room to be if the room is currently messy because Bob is too busy to clean it. Nonetheless, we are optimistic that information from the environment state can be used to significantly reduce the burden of human supervision required to train useful, capable agents. 2 METHOD . In this section, we describe how Deep RLSP can learn a reward function for high-dimensional environments given access only to a simulator and the observed state $s_0$. Notation . A finite-horizon Markov Decision Process (MDP) $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{T}, r, \mathcal{P}, T \rangle$ contains a set of states $\mathcal{S}$ and a set of actions $\mathcal{A}$. The transition function $\mathcal{T} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto [0, 1]$ determines the distribution over next states given a state and an action, and $\mathcal{P}$ is a prior distribution over initial states. The reward function $r : \mathcal{S} \mapsto \mathbb{R}$ determines the agent's objective. $T \in \mathbb{Z}^+$ is a finite planning horizon. A policy $\pi : \mathcal{S} \times \mathcal{A} \mapsto [0, 1]$ specifies how to choose actions given a state. Given an initial state distribution, a policy, and the transition function, we can sample a trajectory $\tau$ by sampling the first state from $\mathcal{P}$, every subsequent action from $\pi$, and every subsequent state from $\mathcal{T}$. We denote the resulting probability distribution over trajectories as $\langle \mathcal{P}, \pi, \mathcal{T} \rangle$ and write $\tau \sim \langle \mathcal{P}, \pi, \mathcal{T} \rangle$ for the sampling step. We will sometimes write a single state $s$ instead of a distribution $\mathcal{P}$ if the initial state is deterministic. The goal of reinforcement learning (RL) is to find a policy $\pi^*$ that maximizes the expected cumulative reward $\mathbb{E}_{\tau \sim \langle \mathcal{P}, \pi, \mathcal{T} \rangle}[\sum_{t=1}^T r(s_t)]$. We use $\phi : \mathcal{S} \to \mathbb{R}^n$ to denote a feature function (whether hand-coded or learned) that produces a feature vector of length $n$ for every state. The reward function $r$ is linear over $\phi$ if it can be expressed in the form $r(s) = \theta^T \phi(s)$ for some $\theta \in \mathbb{R}^n$. We assume that some past trajectory $\tau_{-T:0} = s_{-T} a_{-T} \dots a_{-1} s_0$ produced the observed state $s_0$. 2.1 IDEALIZED ALGORITHM . We first explain what we would ideally do if we had a hand-coded feature function $\phi$ and an enumerable (small) state space $\mathcal{S}$ that affords dynamic programming. This is a recap of Reward Learning by Simulating the Past (RLSP; Shah et al., 2019). We assume the human follows a Boltzmann-rational policy $\pi_t(a \mid s, \theta) \propto \exp(Q_t(s, a; \theta))$, where the Q-values are computed using soft value iteration. Marginalizing over past trajectories yields a distribution over the observed state, $p(s_0 \mid \theta) = \sum_{s_{-T} \dots a_{-1}} p(\tau = s_{-T} a_{-T} \dots a_{-1} s_0 \mid \theta)$. We compute the maximum likelihood estimate, $\arg\max_\theta \ln p(s_0 \mid \theta)$, via gradient ascent, by expressing the gradient for the observed state as a weighted combination of gradients of consistent trajectories (Shah et al., 2019, Appendix B):
$$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\tau_{-T:-1} \sim p(\tau_{-T:-1} \mid s_0, \theta)}\left[\nabla_\theta \ln p(\tau \mid \theta)\right] \qquad (1)$$
$\nabla_\theta \ln p(\tau \mid \theta)$ is a gradient for inverse reinforcement learning. Since we assume a Boltzmann-rational human, this is the gradient for Maximum Causal Entropy Inverse Reinforcement Learning (MCEIRL; Ziebart et al., 2010). However, we still need to compute an expectation over all trajectories that end in $s_0$, which is in general intractable. Shah et al. (2019) use dynamic programming to compute this gradient in tabular settings.
2.2 GRADIENT AS BACKWARDS-FORWARDS CONSISTENCY . Approximating the expectation . For higher-dimensional environments, we must approximate the expectation over past trajectories $p(\tau_{-T:-1} \mid s_0, \theta)$. We would like to sample from this distribution, but it is not clear how to sample the past conditioned on the present. Our key idea is that just as we can sample the future by rolling out forwards in time, we should be able to sample the past by rolling out backwards in time. Note that by the Markov property we have:
$$p(\tau_{-T:-1} \mid s_0, \theta) = \prod_{t=-T}^{-1} p(s_t \mid a_t, s_{t+1}, \dots, s_0, \theta)\, p(a_t \mid s_{t+1}, a_{t+1}, \dots, s_0, \theta) = \prod_{t=-T}^{-1} p(s_t \mid a_t, s_{t+1}, \theta)\, p(a_t \mid s_{t+1}, \theta)$$
Thus, given the inverse policy $\pi^{-1}_t(a_t \mid s_{t+1}, \theta)$, the inverse dynamics $\mathcal{T}^{-1}_t(s_t \mid a_t, s_{t+1}, \theta)$, and the observed state $s_0$, we can sample a past trajectory $\tau_{-T:-1} \sim p(\tau_{-T:-1} \mid s_0, \theta)$ by iteratively applying $\pi^{-1}$ and $\mathcal{T}^{-1}$, starting from $s_0$. Analogously to forward trajectories, we express the sampling as $\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle$, so we can write the gradient in Equation 1 as $\mathbb{E}_{\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle}[\nabla_\theta \ln p(\tau \mid \theta)]$. Learning $\pi$, $\pi^{-1}$ and $\mathcal{T}^{-1}$ . In order to learn $\pi^{-1}$, we must first know $\pi$. We assumed that the human was Boltzmann-rational, which corresponds to the maximum entropy reinforcement learning objective (Levine, 2018). We use the Soft Actor-Critic algorithm (SAC; Haarnoja et al., 2018) to estimate the policy $\pi(a \mid s, \theta)$, since it explicitly optimizes the maximum entropy RL objective. Given the forward policy $\pi(a \mid s, \theta)$ and the simulator $\mathcal{T}$, we can construct a dataset of sampled forward trajectories and learn the inverse policy $\pi^{-1}$ and the inverse dynamics $\mathcal{T}^{-1}$ using supervised learning. Given these, we can then sample $\tau_{-T:-1}$, allowing us to approximate the expectation in the gradient. In general, both $\pi^{-1}$ and $\mathcal{T}^{-1}$ could be stochastic and time-dependent. Estimating the gradient for a trajectory . We now turn to the term within the expectation, which is the inverse reinforcement learning gradient given a demonstration trajectory $\tau = s_{-T} a_{-T} \dots s_0$. Assuming that the user is Boltzmann-rational, this is the MCEIRL gradient (Ziebart et al., 2010), which can be written as (Shah et al., 2019, Appendix A):
$$\nabla_\theta \ln p(\tau \mid \theta) = \left( \sum_{t=-T}^{0} \phi(s_t) \right) - \mathcal{F}_{-T}(s_{-T}) + \sum_{t=-T}^{-1} \left( \mathbb{E}_{s'_{t+1} \sim \mathcal{T}(\cdot \mid s_t, a_t)}\left[\mathcal{F}_{t+1}(s'_{t+1})\right] - \mathcal{F}_{t+1}(s_{t+1}) \right) \qquad (2)$$
$\mathcal{F}$ is the expected feature count under $\pi$, that is, $\mathcal{F}_{-t}(s_{-t}) \triangleq \mathbb{E}_{\tau_{-t:0} \sim \langle s_{-t}, \pi, \mathcal{T} \rangle}\left[ \sum_{t'=-t}^{0} \phi(s_{t'}) \right]$. The first term computes the feature counts of the demonstrated trajectory $\tau$, while the second term computes the feature counts obtained by the policy for the current reward function $\theta$ (starting from the initial state $s_{-T}$). Since $r(s) = \theta^T \phi(s)$, these terms increase the reward of features present in the demonstration $\tau$ and decrease the reward of features visited under the current policy. Thus, the gradient incentivizes consistency between the demonstration and rollouts from the learned policy. The last term is essentially a correction for the observed dynamics: if we see that $s_t, a_t$ led to $s_{t+1}$, it corrects for the fact that we “could have” seen some other state $s'_{t+1}$. Since this correction is zero in expectation (and expensive to compute), we drop it in our estimator. Gradient estimator .
After dropping the last term in Equation 2, expanding the definition of $\mathcal{F}$, and substituting into Equation 1, our final gradient estimator is:
$$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle}\left[ \left( \sum_{t=-T}^{0} \phi(s_t) \right) - \mathbb{E}_{\tau' \sim \langle s_{-T}, \pi, \mathcal{T} \rangle}\left[ \sum_{t=-T}^{0} \phi(s'_t) \right] \right] \qquad (3)$$
Thus, given $s_0$, $\theta$, $\pi$, $\mathcal{T}$, $\pi^{-1}$, and $\mathcal{T}^{-1}$, computing the gradient consists of three steps: 1. Simulate backwards from $s_0$, and compute the feature counts of the resulting trajectories. 2. Simulate forwards from the $s_{-T}$ of these trajectories, and compute their feature counts. 3. Take the difference between these two quantities. This again incentivizes consistency, this time between the backwards and forwards trajectories: the gradient leads to movement towards “what the human must have done” and away from “what the human would do if they had this reward”. The gradient becomes zero when they are identical. It may seem like the backwards and forwards trajectories should always be consistent with each other, since $\pi^{-1}$ and $\mathcal{T}^{-1}$ are inverses of $\pi$ and $\mathcal{T}$. The key difference is that $s_0$ imposes constraints on the backwards trajectories, but not on the forward trajectories. For example, suppose we observe $s_0$ in which a vase is unbroken, and our current hypothesis is that the user wants to break the vase. When we simulate backwards, our trajectory will contain an unbroken vase, but when we simulate forwards from $s_{-T}$, $\pi$ will break the vase. The gradient would then reduce the reward for a broken vase and increase the reward for an unbroken vase.
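The three steps above translate directly into a sampling routine. The sketch below assumes rollout helpers for the learned $\pi$, $\mathcal{T}$ and their learned inverses, and a feature function `phi`; these interfaces are placeholders for illustration, not the paper's implementation.

```python
import numpy as np

def rlsp_gradient(s0, horizon, phi, rollout_backward, rollout_forward, n_samples=8):
    """Estimate grad_theta log p(s0 | theta) via Eq. 3.

    rollout_backward(s0, T) -> [s_{-T}, ..., s_0] using pi^{-1}, T^{-1};
    rollout_forward(s, T)   -> [s_{-T}, ..., s_0] using pi, T.
    """
    grad = 0.0
    for _ in range(n_samples):
        back = rollout_backward(s0, horizon)            # step 1: simulate the past
        back_counts = sum(phi(s) for s in back)
        fwd = rollout_forward(back[0], horizon)         # step 2: re-simulate forwards from s_{-T}
        fwd_counts = sum(phi(s) for s in fwd)
        grad += (back_counts - fwd_counts) / n_samples  # step 3: backwards-forwards difference
    return grad
```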
This paper introduces an algorithm, called Deep Reward Learning by Simulating the Past (Deep RLSP), that seeks to infer a reward function by looking at states in demonstration data. An example described in the paper is an environment with a vase: if the demonstration data shows an intact vase in the presence of an embodied agent, then breaking the vase is unlikely to be the intended behavior; otherwise the vase would already be broken in the demo.
SP:e23149389db2c50bba31eacfdef723e015e58386
Amortized Conditional Normalized Maximum Likelihood
1 INTRODUCTION . Current machine learning methods provide unprecedented accuracy across a range of domains , from computer vision to natural language processing . However , in many high-stakes applications , such as medical diagnosis or autonomous driving , rare mistakes can be extremely costly , and thus effective deployment of learned models requires not only high expected accuracy , but also a way to measure the certainty in a model ’ s predictions in order to assess risk and allow the model to abstain from making decisions when there is low confidence in the prediction . While deep networks offer excellent prediction accuracy , they generally do not provide the means to accurately quantify their uncertainty . This is especially true on out-of-distribution inputs , where deep networks tend to make overconfident incorrect predictions ( Ovadia et al. , 2019 ) . In this paper , we tackle the problem of obtaining reliable uncertainty estimates under distribution shift . Most prior work approaches the problem of uncertainty estimation from the standpoint of Bayesian inference . By treating parameters as random variables with some prior distribution , Bayesian inference can compute posterior distributions that capture a notion of epistemic uncertainty and allow us to quantitatively reason about uncertainty in model predictions . However , computing accurate posterior distributions becomes intractable as we use very complex models like deep neural nets , and current approaches require highly approximate inference methods that fall short of the promise of full Bayesian modeling in practice . Bayesian methods also have a deep connection with the minimum description length ( MDL ) principle , a formalization of Occam ’ s razor that recasts learning as performing efficient lossless data compression and has been widely used as a motivation for model selection techniques . Codes corresponding to maximum-a-posteriori estimators and Bayes marginal likelihoods have been commonly used within the MDL framework . However , other coding schemes have been proposed in MDL centered around achieving different notions of minimax optimality . Interpreting coding schemes as predictive distributions , such methods can directly inspire prediction strategies that give conservative predictions and do not suffer from excessive overconfidence due to their minimax formulation . One such predictive distribution is the conditional normalized maximum likelihood ( CNML ) ( Grünwald , 2007 ; Rissanen and Roos , 2007 ; Roos et al. , 2008 ) model , also known as sequential NML or predictive NML ( Fogel and Feder , 2018b ) . To make a prediction on a new input , CNML considers every possible label and tries to find the model that best explains that label for the query point together with the training set . It then uses that corresponding model to assign probabilities for each input and normalizes to obtain a valid probability distribution . Intuitively , instead of relying on a learned model to extrapolate from the training set to the new ( potentially out-of-distribution ) input , CNML can obtain more reasonable predictive distributions by asking “ given the training data , which labels would make sense for this input ? ” While CNML provides compelling minimax regret guarantees , practical instantiations have been exceptionally difficult , because computing predictions for a test point requires retraining the model on the test point concatenated with the entire training set . 
With large models like deep neural networks, this can potentially require hours of training for every prediction. In this paper, we propose amortized CNML (ACNML), a tractable and practical algorithm for approximating CNML using approximate Bayesian inference. ACNML avoids the need to optimize over large datasets during inference by using an approximate posterior in place of the training set. We demonstrate that our proposed approach is substantially more feasible and computationally efficient than prior techniques for using CNML predictions with deep neural networks, and compares favorably to a number of prior techniques for uncertainty estimation on out-of-distribution inputs. 2 MINIMUM DESCRIPTION LENGTH : BACKGROUND AND PRELIMINARIES . ACNML is motivated by the minimum description length (MDL) principle, which can be used to derive a connection between optimal codes and prediction. We begin with a review of the MDL principle and discuss the challenges in implementing minimax codes that motivate our method. For more comprehensive treatments of MDL, we refer the reader to Grünwald (2007) and Rissanen (1989). Minimum description length . The MDL principle states that any regularities in a dataset can be exploited to compress it, and hence learning is reformulated as losslessly transmitting the data with the fewest number of bits (Rissanen, 1989; Grünwald, 2007). Simplicity is thus formalized as the length of the resulting description. MDL was originally formulated in a generative setting where the goal is to code arbitrary data, and we present a brief overview in this setting. We can translate the results to a supervised learning setting, which corresponds to transmitting the labels after assuming either a fixed coding scheme for the inputs or that the inputs are known beforehand. While MDL is typically described in terms of code lengths, we can in general associate codes with probability distributions, with the code length of an object corresponding to its negative log-likelihood under that distribution (Cover and Thomas, 2006). Normalized Maximum Likelihood . Let $\hat\theta(x_{1:n})$ denote the maximum likelihood estimator for a sequence of data $x_{1:n}$ over all $\theta \in \Theta$. For any $x_{1:n} \in \mathcal{X}^n$ and distribution $q$ over $\mathcal{X}^n$, we can define a regret relative to the model class $\Theta$ as
$$R(q, \Theta, x_{1:n}) \overset{\text{def}}{=} \log p_{\hat\theta(x_{1:n})}(x_{1:n}) - \log q(x_{1:n}). \qquad (1)$$
This regret corresponds to the excess number of bits $q$ uses to encode $x_{1:n}$ compared to the best distribution in $\Theta$, denoted $\hat\theta(x_{1:n})$. We can then define the normalized maximum likelihood (NML) distribution with respect to $\Theta$ as
$$p_{NML}(x_{1:n}) = \frac{p_{\hat\theta(x_{1:n})}(x_{1:n})}{\sum_{\tilde{x}_{1:n} \in \mathcal{X}^n} p_{\hat\theta(\tilde{x}_{1:n})}(\tilde{x}_{1:n})} \qquad (2)$$
when the denominator is finite. The NML distribution can be shown to achieve minimax regret (Shtarkov, 1987; Rissanen, 1996):
$$p_{NML} = \arg\min_q \max_{x_{1:n} \in \mathcal{X}^n} R(q, \Theta, x_{1:n}). \qquad (3)$$
This corresponds, in a sense, to an optimal coding scheme for sequences of known fixed length. Conditional NML . Instead of making predictions across entire sequences at once, we can adapt NML to the setting where we make predictions about the next data point based on the previously seen data, resulting in conditional NML (CNML) (Rissanen and Roos, 2007; Grünwald, 2007; Fogel and Feder, 2018a). While several variations of CNML exist, we consider the following:
$$p_{CNML}(x_n \mid x_{1:n-1}) \propto p_{\hat\theta(x_{1:n})}(x_n). \qquad (4)$$
For any fixed sequence $x_{1:n-1}$, $p_{CNML}$ solves the minimax regret problem
$$p_{CNML} = \arg\min_q \max_{x_n} \log p_{\hat\theta(x_{1:n})}(x_n) - \log q(x_n), \qquad (5)$$
where the inner maximization is only over the last data point $x_n$. We can extend this approach to the supervised classification setting, where our models represent conditional distributions $p_\theta(y \mid x)$. The CNML distribution, given a sequence of already seen datapoints $(x_{1:n-1}, y_{1:n-1})$ and the next input $x_n$, then takes the form
$$p_{CNML}(y_n \mid x_n; x_{1:n-1}, y_{1:n-1}) \propto p_{\hat\theta(y_{1:n} \mid x_{1:n})}(y_n \mid x_n), \qquad (6)$$
and solves the minimax problem
$$p_{CNML} = \arg\min_q \max_{y_n} \log p_{\hat\theta(y_{1:n} \mid x_{1:n})}(y_n \mid x_n) - \log q(y_n). \qquad (7)$$
We see that this conditional distribution is amenable to our usual inductive learning procedure, where $(x_{1:n-1}, y_{1:n-1})$ is our training set and we want to output a predictive distribution over labels $y_n$ for a new test input $x_n$. CNML provides conservative predictions . For each query point, CNML considers each potential label and finds the model that would be most consistent with that label and with the training set. If that model assigns high probability to the label, then minimizing the worst-case regret forces CNML to assign relatively high probability to it. In particular, compared to simply letting a model trained only on the training set extrapolate blindly, we expect CNML to give more conservative predictions on out-of-distribution inputs, since it explicitly considers what would have happened if the new data point had been included in the training dataset with each particular label. We use a 2D logistic regression example to illustrate CNML's conservative predictions, showing a heatmap of CNML probabilities in Figure 1. CNML provides uniform predictions on most of the input space away from the training samples. In Figure 2, we illustrate how CNML arrives at these predictions, showing the predictions for the parameters $\hat\theta_0$ and $\hat\theta_1$, corresponding to labeling the test point (shown in pink in Figure 2, left) with either the label 0 or 1. However, CNML may be too conservative when the model class $\Theta$ is very expressive. Naïvely applying CNML with large model classes can result in the per-label models fitting their labels for the query point arbitrarily well, such that CNML gives unhelpful uniform predictions even on inputs we would hope to reasonably extrapolate from. We see this in the 2D logistic regression example in Figure 1. Thus, the model class $\Theta$ needs to be restricted in some form, for example by considering only parameters within a certain distance of the training set solution, as a hard constraint. Another approach to controlling the expressivity of the model class is to generalize CNML to use regularized estimators instead of maximum likelihood, resulting in normalized maximum a posteriori (NMAP) codes (Kakade et al., 2006). Instead of maximum likelihood parameters, NMAP selects $\hat\theta$s that maximize both the data likelihood and a regularization term, or prior, over parameters; defining slightly altered notions of regret with these MAP estimators in all the previous equations yields a conditional normalized maximum a posteriori distribution instead (see Appendix D for completeness). Going back to the logistic regression example, we plot heatmaps of CNMAP predictions in Figure 3, adding different amounts of L2 regularization to the logistic regression weights.
As we add more regularization, the model class becomes effectively less expressive, and the CNMAP predictions become less conservative. Computational Costs of CNML . A major practical issue with actually utilizing CNML or CNMAP with neural networks is the prohibitive computational cost of computing the maximum likelihood estimators for each new input and label combination. To evaluate the distribution on a new test point, one must solve a nonconvex optimization problem for each possible label, with each problem involving the entire training dataset along with the new test point. This direct evaluation of CNML therefore becomes computationally infeasible with large datasets and high-capacity models, and further requires that the model carry around the entire training set even when it is deployed. In settings where critical decisions must be made in real time, even running a single epoch of additional training would be infeasible. For this reason, NML-based methods have not gained much traction as a practical tool for improving the predictive performance of high-capacity models.
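As a concrete (and deliberately naive) illustration of the per-query cost described above, the sketch below instantiates Equation 6 for a small problem, refitting a logistic-regression classifier once per candidate label and normalizing. This is exactly the expensive retraining that ACNML is designed to amortize; scikit-learn's default L2 penalty makes it closer to CNMAP than pure CNML, and all data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cnml_predict(X_train, y_train, x_query, labels):
    """p_CNML(y | x) ∝ p_{theta_hat(train + (x, y))}(y | x): one refit per label (Eq. 6)."""
    scores = []
    for y in labels:
        X = np.vstack([X_train, x_query[None]])
        Y = np.append(y_train, y)
        model = LogisticRegression(max_iter=1000).fit(X, Y)   # theta_hat for this label
        proba = model.predict_proba(x_query[None])[0]
        scores.append(proba[list(model.classes_).index(y)])
    scores = np.array(scores)
    return scores / scores.sum()                               # normalize over labels

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
# A far-away query point should receive a more conservative (less extreme)
# distribution than the blind extrapolation of a single trained model.
print(cnml_predict(X_train, y_train, np.array([8.0, 8.0]), labels=[0, 1]))
```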
The paper presents an approach based on conditional normalized maximum likelihood (CNML) for uncertainty estimation, calibration, and out-of-distribution robustness with deep networks. CNML is intractable to compute in general and therefore the authors propose a tractable approximation which uses approximate Bayesian inference techniques. Experimentally, the authors show that their new approach is competitive and sometimes better than existing approaches for uncertainty estimation and calibration on out-of-distribution test data points.
SP:134b968f05fc55567e46428a36359228efa15c85
The role of Disentanglement in Generalisation
1 INTRODUCTION . Generalisation to unseen data has been a key challenge for neural networks since the early days of connectionism, with considerable debate about whether these models can emulate the kinds of behaviours that are present in humans (McClelland et al., 1986; Fodor & Pylyshyn, 1988; Smolensky, 1987; 1988; Fodor & McLaughlin, 1990). While the modern successes of Deep Learning do indeed point to impressive gains in this regard, human-level generalisation still remains elusive (Lake & Baroni, 2018; Marcus, 2018). One explanation for this is that humans encode stimuli in a compositional manner, with a small set of independent and more primitive features (e.g., separate representations of size, position, line orientation, etc.) being used to build more complex representations (e.g., a square of a given size and position). The meaning of the more complex representation comes from the meaning of its parts. Critically, compositional representations afford the ability to recombine primitives in novel ways: if a person has learnt to recognize squares and circles in a context where all squares are blue and all circles are red, they can nevertheless also recognise red squares, even though they have never seen these in the training data. This ability to perform combinatorial generalisation based on compositional representations is thought to be a hallmark of human-level intelligence (Fodor & Pylyshyn, 1988) (see McClelland et al. (1986) for a diverging opinion). Recently it has been proposed that generalisation in neural networks can be improved by extracting disentangled representations (Higgins et al., 2017) from data using (variational) generative models (Kingma & Welling, 2013; Rezende et al., 2014). In this view, disentangled representations capture the compositional structure of the world (Higgins et al., 2018a; Duan et al., 2020), separating the generative factors present in the stimuli into separate components of the internal representation (Higgins et al., 2017; Burgess et al., 2018). It has been argued that these representations allow downstream models to perform better due to their structured nature (Higgins et al., 2017; 2018b) and to share information across related tasks (Bengio et al., 2014). Here we are interested in the question of whether networks can support combinatorial generalisation and extrapolation by exploiting these disentangled representations. In this study we systematically tested whether and how disentangled representations support three forms of generalisation: two forms of combinatorial generalisation that vary in difficulty, as well as extrapolation, as detailed below. We explored this issue by assessing how well models could render images when we varied (1) the image datasets (dSprites and 3DShapes), (2) the models used to reconstruct these images (β-VAEs and FactorVAEs with different disentanglement pressures, and decoder models in which we dropped the encoders and directly input perfectly disentangled latents), and (3) the tasks, which varied in their combinatorial requirements (image reconstruction vs. image transformation). Across all conditions we found that models only supported the simplest versions of combinatorial generalisation, and that the degree of disentanglement had no impact on the degree of generalisation.
These findings suggest that models with entangled and disentangled representations are both generalising on the basis of overall similarity of the trained and test images ( interpolation ) , and that combinatorial generalisation requires more than learning disentangled representations . 1.1 PREVIOUS WORK . Recent work on learning disentangled representations in unsupervised generative models has indeed shown some promise in improving the performance of downstream tasks ( Higgins et al. , 2018b ; van Steenkiste et al. , 2019 ) , but this benefit is mainly related to sample efficiency rather than generalisation . Indeed , we are only aware of two studies that have considered the importance of learned disentanglement for combinatorial generalisation , and they have used different network architectures and have reached opposite conclusions . Bowers et al . ( 2016 ) showed that a recurrent model of short-term memory tested on lists of words that required some degree of combinatorial generalisation ( recalling a sequence of words when one or more of the words at test were novel ) only succeeded when it had learned highly selective ( disentangled ) representations ( " grandmother cell " units for letters ) . By contrast , Chaabouni et al . ( 2020 ) found that models with disentangled representations do not confer significant improvements in generalisation over entangled ones in a language modeling setting , with both entangled and disentangled representations supporting combinatorial generalisation as long as the training set was rich enough . At the same time , they found that languages generated through compositional representations were easier to learn , suggesting this as a pressure to learn disentangled representations . A number of recent papers have reported that VAEs can support some degree of combinatorial generalisation , but there is no clear understanding of whether and how disentangled representations played any role in supporting this performance . Esmaeili et al . ( 2019 ) showed that a model trained on the MNIST dataset could reconstruct images even when some particular combinations of factors were removed during training , such as a thick number 7 or a narrow 0 . The authors also showed that the model had learned disentangled representations and concluded that the disentangled representations played a role in the successful performance . However , the authors did not vary the degree of disentanglement in their models and , accordingly , it is possible that a VAE that learned entangled representations would do just as well . Similarly , Higgins et al . ( 2018c ) have highlighted how VAEs that learn disentangled representations can support some forms of combinatorial generalisation when generating images from text . For example , their model could render a room with white walls , pink floor and blue ceiling even though it was never shown that combination in the training set . This is an impressive form of combinatorial generalisation but , as we show below , truly compositional representations should be able to support several other forms of combinatorial generalisation that were not tested in this study . Moreover , it is not clear what role disentanglement played in this successful instance of generalisation . Finally , Zhao et al . ( 2018 ) assessed VAE performance on a range of combinatorial generalisation tasks that varied in difficulty , and found that the model performed well in the simplest settings but struggled in more difficult ones .
But again , they did not consider whether learning disentangled representations was relevant to generalisation performance . Another work that has significant relation to ours is Locatello et al . ( 2019 ) , who examine how hard it is to learn disentangled representations and their relation to sampling efficiency for downstream tasks . We are interested in a related , but different question : even if a model learns a disentangled representation in an intermediate layer , does this enable models to achieve combinatorial generalisation ? So while Locatello et al . ( 2019 ) train their models on complete datasets to investigate the degree of disentanglement and sampling efficiency , we systematically exclude generative factors from training in order to test for combinatorial generalisation ( see Methods and Results ) . 2 METHODS AND RESULTS . We assessed combinatorial generalisation on two different datasets . The dSprites image dataset ( Matthey et al. , 2017 ) contains 2D images in black and white that vary along five generative factors : shape , scale , orientation , position-x and position-y , and focuses on manipulations of single objects . The 3D Shapes dataset ( Burgess & Kim , 2018 ) contains 3D images in colour that vary along six generative factors : floor-hue , wall-hue , object-hue , object-shape , object-scale , object-orientation . In contrast to dSprites , the images are more realistic , which has been shown to aid reconstruction performance ( Locatello et al. , 2019 ) . To test combinatorial generalisation , we systematically excluded some combinations of these generative factors from the training data and tested reconstruction on these unseen values . Test cases can be divided into three broad categories based on the number of combinations excluded from training . • Recombination-to-Element ( red squares in Figure 1 ) : The model has never been trained on one combination of all of the generative factors . In dSprites , an example of this case would be excluding the combination : [ shape=ellipse , scale=1 , orientation < 120◦ , position-x > 0.5 , position-y > 0.5 ] from the training set – i.e . the model has never seen a large ellipse at < 120◦ in the bottom-right corner , though it has seen all other combinations . • Recombination-to-Range ( green squares in Figure 1 ) : The model has never been trained on all combinations of some of the factors ( i.e . a subset of generative factors ) . For example , in the 3D Shapes dataset , all combinations with [ object-hue=1 , shape=sphere ] have been left out of the training set – i.e . none of the training images contain a blue sphere . This condition is more complex than Recombination-to-Element , as an entire range of combinations [ floor-hue=0 . . . 1 , wall-hue=0 . . . 1 , object-hue=1 , shape=sphere , scale=0 . . . 1 , orientation=0 . . . 1 ] have been left out ( here bold text indicates the range of values excluded ) . When the number of generative factors is larger than three , “ Recombination-to-Range ” is , in fact , a set of conditions that vary in difficulty , depending upon how many generative factors have been excluded . Another example would be excluding all combinations where [ floor-hue=1 , wall-hue=1 , object-hue=1 , shape=1 , scale=1 ] . Here a smaller range of combinations [ floor-hue=1 , wall-hue=1 , object-hue=1 , shape=1 , scale=1 , orientation=0 . . . 1 ] have been excluded .
• Extrapolation ( blue squares in Figure 1 ) : This is the most challenging form of generalisation , where models are tested on values of generative factors that are beyond the range of values observed in the training dataset . For example , in the dSprites dataset , all combinations where [ position-x > 0.5 ] have never been seen . Each of these conditions is interesting for different reasons . A model that learns compositional representations should be able to combine observed values of shape ( ellipses ) , translation ( bottom-right ) and rotation ( 0◦ to 120◦ ) to generalise to all unseen combinations of factors . The simplest case is the Recombination-to-Element condition , in which all combinations but one have been trained , but a model that learns entangled representations might also succeed based on its training on highly similar patterns ( generalisation by interpolation ) . A more challenging case is the Recombination-to-Range condition , given that more combinations have been excluded , making generalisation by similarity ( interpolation ) more difficult . The final condition is not a form of combinatorial generalisation , as the model can not combine observed values of generative factors to render images . Indeed , compositional representations may be inadequate for this form of generalisation . 2.1 IMAGE RECONSTRUCTION WITH DSPRITES DATASET . In the dSprites dataset , for testing the Recombination-to-Element case , we split each range of values of a generative factor into three bins , so that we had 3 × 3 × 3 × 3 × 3 such combinations of bins for all five generative factors . We then removed one of these 243 combinations during training , namely the images that satisfied [ shape=ellipse , position-x >= 0.6 , position-y >= 0.6 , 120◦ <= rotation <= 240◦ , scale < 0.6 ] . In other words , ellipses in the bottom-right corner with those given rotations , which is a relatively small number of combinations that are all very similar to each other . For the Recombination-to-Range case , we tested three different variants . First , we excluded all combinations where [ shape=square , position-x > 0.5 ] . The model sees other shapes at those positions during training and it sees squares on the left-hand side of the screen . Thus the model experiences both generative factor values independently and has to recombine them to produce a novel image at test time . In the second case , we excluded all combinations where [ shape=square , scale > 0.5 ] . In the third case , we excluded all combinations where [ shape=square , rotation > 90◦ ] . We observed very similar results for all three cases and below we report the results for the first variant . Finally , for the Extrapolation case , we excluded all combinations of generative factors where [ position-x > x ] . We chose a set of different values for x : x ∈ { 0.16 , 0.25 , 0.50 , 0.75 } , where x is normalised in the range [ 0 , 1 ] ( results shown in Figure 2 for x = 0.50 ) . At test time the model needed to reconstruct images where translation along the x-axis was greater than the cutoff value ( a code sketch of these exclusion-based splits is given below ) . We tested three classes of models on all three types of generalisation : standard Variational Autoencoders ( VAEs ; Kingma & Welling ( 2013 ) ; Rezende et al . ( 2014 ) ) , β-VAE ( Higgins et al. , 2017 ; Burgess et al. , 2018 ) with β = 8 and β = 12 , and FactorVAE ( Kim & Mnih , 2019 ) with γ = 20 , γ = 50 and γ = 100 . The architectures are the ones found in Higgins et al . ( 2017 ) , Burgess et al . ( 2018 ) and Kim & Mnih ( 2019 ) ( details in the Appendix ) .
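The exclusion-based splits referenced above can be sketched as follows ( a minimal sketch , not the authors ' code ; the factor arrays , shape coding and cutoff values are hypothetical illustrations of the conditions described in the text ) :

```python
import numpy as np

# Hypothetical arrays: one entry per image, each generative factor
# normalised to [0, 1] as in the text.
rng = np.random.default_rng(0)
factors = {name: rng.random(10000)
           for name in ["shape", "scale", "orientation", "pos_x", "pos_y"]}
is_square = factors["shape"] < 1 / 3  # e.g. first of three shape codes

# Recombination-to-Range (first variant): hold out squares on the right half.
held_out = is_square & (factors["pos_x"] > 0.5)

# Extrapolation: hold out everything right of a cutoff along position-x.
held_out_extra = factors["pos_x"] > 0.5

train_idx = np.where(~held_out)[0]  # train only on the remaining combinations
test_idx = np.where(held_out)[0]    # measure reconstruction on held-out ones
```

Training then proceeds on train_idx only , and generalisation is measured by reconstruction quality on test_idx .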
We used a batch size of 64 and a learning rate of 5e-4 for the Adam optimizer ( Kingma & Ba , 2017 ) . In each case , we simulated three seeds and we report results for runs where we obtained the largest disentanglement . As shown by Locatello et al . ( 2019 ) , none of the models trained end-to-end in an unsupervised manner produce perfectly disentangled representations . Since we were interested in studying the effect of disentanglement on generalisation , we compared our results with a model where we removed the encoder and directly gave disentangled latents as inputs to the decoder . We call this model the ground-truth decoder ( GT Decoder from here on ) . This decoder uses the same MLP architecture as the one used in Higgins et al . ( 2017 ) . We tested deeper decoders with convolutions and batch norm as well , but found no benefit or a decrease in performance . We measured the level of disentanglement using the framework introduced in Eastwood & Williams ( 2018 ) . The procedure consists of using the latent representations generated for each image to predict the true generative factors using a regression model ( in our case , Lasso regression ; see Appendix A ) . The level of disentanglement is quantified by their ‘ Overall disentanglement metric ’ , which we call D-score here . [ Figure 2 caption : ( a ) Model reconstructions . Left ) Recombination-to-Element condition where the models did not see [ shape = ellipse , scale = 1 , orientation < 120◦ , position-x > 0.5 , position-y > 0.5 ] ; Middle ) Recombination-to-Range condition where models did not see [ shape = square , position-x > 0.5 ] ; Right ) Extrapolation condition where models did not see [ position-x > 0.5 ] . ( b ) Visualisation of disentanglement . In each panel , columns show latent variables and rows show the generative factors . The size of the square represents the relative importance of the latent variable for predicting the generative factor . Sparse matrices indicate higher disentanglement ( Eastwood & Williams , 2018 ) . Each disentanglement matrix corresponds to the model on that row in ( a ) in the Recombination-to-Range condition . The visualisation of the entire set of models and all conditions is shown in Appendix B . ] Figure 2 shows examples of model reconstructions for each of the conditions , which help assess the reconstruction success qualitatively ( more examples are shown in Appendix C ) . A more quantitative assessment of the models can be made by examining the negative log-likelihood of reconstructions for different conditions , plotted in Figure 3 . The amount of disentanglement achieved by the models trained end-to-end varied over a broad range and was a function of model architecture and the hyperparameter ( β and γ ) values . In general , reconstruction accuracy was better for smaller values of β both during training and testing . This has been observed before and is a known issue encountered when increasing the value of the β parameter ( Hoffman & Johnson , 2016 ) . We found that models were able to perform the Recombination-to-Element generalisation but failed in the Recombination-to-Range and Extrapolation cases . In these cases , models either showed very poor reconstruction of the critical element or substituted one of the excluded combinations with a combination that had been observed during training ( see reconstructions for test cases in Figure 2 ( a ) ) . Moreover , the amount of generalisation did not depend on the degree of disentanglement .
Indeed , the GT Decoder using perfectly disentangled representations was no better than the end-to-end models . Even though this model achieved a lower NLL score , examining the image reconstructions showed that it failed to reconstruct the essential combination excluded from the training data ( see Appendix B ) . The Recombination-to-Range condition shows another interesting qualitative difference between the entangled and disentangled models . All models failed to generalise , but in different ways . Entangled models tended to put a blob in the correct location , which allows them to minimise loss in pixel space over a large set of test examples . In contrast , the models with a higher level of disentanglement fell back to the most similar shape ( in pixel space ) that they had seen at that location . Finally , the Recombination-to-Element condition was solved by all the models , regardless of disentanglement score . In fact , the entangled models tended to achieve better reconstructions , as evidenced by the disentangled models with β=12 , which had a hard time reconstructing ellipses at small scales and tended to just produce a circle instead . The second panel in Figure 2 shows the coefficients computed by the disentanglement metric for the Recombination-to-Range condition . The size of each square denotes the relative importance of a latent ( column ) in predicting the corresponding generative factor ( row ) . The higher the disentanglement , the sparser the matrices ( a sketch of this importance-matrix computation is given below ) . An examination of these matrices revealed that different models achieved a large range of disentanglement , though none of the end-to-end models achieved perfect disentanglement .
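The importance matrix underlying the D-score can be sketched as follows ( a minimal sketch of the Lasso-based procedure of Eastwood & Williams ( 2018 ) as described above , not the authors ' code ; the regularisation strength alpha is our assumption ) :

```python
import numpy as np
from sklearn.linear_model import Lasso

def importance_matrix(latents, factors, alpha=0.01):
    # latents: (N, L) latent codes; factors: (N, F) true generative factors.
    # Fit one Lasso regressor per factor; the magnitudes of its coefficients
    # give the relative importance of each latent for that factor.
    F = factors.shape[1]
    R = np.zeros((F, latents.shape[1]))
    for f in range(F):
        reg = Lasso(alpha=alpha).fit(latents, factors[:, f])
        R[f] = np.abs(reg.coef_)
    return R  # rows = factors, columns = latents; sparser rows => higher D-score
```

Plotting R as a grid of squares scaled by magnitude reproduces the disentanglement matrices shown in Figure 2 ( b ) .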
Learning disentangled representations is often considered an important step toward achieving human-like generalization. This paper studies how the degree of disentanglement affects various forms of generalization. Variational autoencoders (VAEs) are trained with different levels of disentanglement on an unsupervised task by excluding combinations of generative factors during training. At test time the models are used to reconstruct the missing combinations in order to measure generalization performance. The paper shows that the models support only weak combinatorial generalization. The paper also tests the models on a more complex task which explicitly requires independent generative factors to be controlled. The paper concludes that learning disentangled representations is not sufficient for supporting more difficult forms of generalization.
SP:81573408426e479610a9d751ebed97dc74f63fb1
Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning
1 INTRODUCTION . The lottery ticket hypothesis ( LTH ) ( Frankle & Carbin , 2019 ) suggests the existence of an extremely sparse sub-network , within an overparameterized dense neural network , that can reach similar performance as the dense network when trained in isolation with proper initialization . Such a sub-network together with the used initialization is called a winning ticket ( Frankle & Carbin , 2019 ) . The original LTH studies the sparse pattern of neural networks with a single task ( classification ) , leaving the question of generalization across multiple tasks open . Following that , a few works ( Morcos et al. , 2019 ; Mehta , 2019 ) have explored LTH in transfer learning . They study the transferability of a winning ticket found in a source task to another target task . This provides insights on one-shot transferability of LTH . In parallel , lifelong learning not only suffers from notorious catastrophic forgetting over sequentially arriving tasks but also often comes at the price of increasing model capacity . With those in mind , we ask a much more ambitious question : Does LTH hold in the setting of lifelong learning when different tasks arrive sequentially ? Intuitively , a desirable “ ticket ” sub-network in lifelong learning ( McCloskey & Cohen , 1989 ; Parisi et al. , 2019 ) needs to be : 1 ) independently trainable , same as the original LTH ; 2 ) trained to perform competitively to the dense lifelong model , including both maintaining the performance of previous tasks , and quickly achieving good generalization at newly added tasks ; 3 ) found online , as the tasks sequentially arrive without any pre-assumed order . We define such a sub-network with its initialization as a lifelong lottery ticket . This paper seeks to locate the lifelong ticket in class-incremental learning ( CIL ) ( Wang et al. , 2017 ; Rosenfeld & Tsotsos , 2018 ; Kemker & Kanan , 2017 ; Li & Hoiem , 2017 ; Belouadah & Popescu , 2019 ; 2020 ) , a popular , realistic and challenging setting of lifelong learning . A natural idea to extend the original LTH is to introduce sequential pruning : we continually prune the dense network until the desired sparsity level , as new tasks are incrementally added . However , we show that the direct application of the iterative magnitude pruning ( IMP ) used in LTH fails in the scenario of CIL , since the pruning schedule becomes critical when tasks arrive sequentially . To circumvent this challenge , we generalize IMP to incorporate a curriculum pruning schedule . We term this technique top-down lifelong pruning . When the total number of tasks is pre-known and small , then with some “ lottery ” initialization ( achieved by rewinding ( Frankle et al. , 2019 ) or similar ) , we find that the pruned sparse ticket can be re-trained to similar performance as the dense network . However , if the number of tasks keeps increasing , the above ticket will soon witness performance collapse as its limited capacity can not afford the over-pruning . The limitation of top-down lifelong pruning reminds us of two unique dilemmas that might challenge the validity of lifelong tickets . i ) Greedy weight pruning vs. all tasks ' performance : While the sequential pruning has to be performed online , its greedy nature inevitably biases against later arriving tasks , as earlier tasks apparently will contribute to shaping the ticket more ( and might even use up the sparsity budget ) . ii ) Catastrophic forgetting vs.
small ticket size : To overcome the notorious catastrophic forgetting ( McCloskey & Cohen , 1989 ; Tishby & Zaslavsky , 2015 ) , many lifelong learning models have to frequently consolidate weights to carefully re-assign the model capacity ( Zhang et al. , 2020 ) or even grow model size as tasks come in ( Wang et al. , 2017 ) . Those seem to contradict our goal of pruning by seeing more tasks . To address the above two limitations , we propose a novel bottom-up lifelong pruning approach , which allows for re-growing the model capacity to compensate for any excessive pruning . It therefore flexibly calibrates between increasing and decreasing tickets throughout the entire learning process , alleviating the intrinsic greedy bias caused by the top-down pruning . We additionally introduce lottery teaching to overcome forgetting , which regularizes previous task models ' soft logit outputs by using free unlabeled data . This is inspired by lifelong knowledge preservation techniques ( Castro et al. , 2018 ; He et al. , 2018 ; Javed & Shafait , 2018 ; Rebuffi et al. , 2017 ) . For validating our proposal , we conduct extensive experiments on the CIFAR-10 , CIFAR-100 , and TinyImageNet datasets for class-incremental learning ( Rebuffi et al. , 2017 ) . The results demonstrate the existence and the high competitiveness of lifelong tickets . Our best lifelong tickets ( found by bottom-up pruning and lottery teaching ) achieve comparable or better performance across all sequential tasks , with as few as 3.64 % of the parameters , compared to state-of-the-art dense models . Our contributions can be summarized as : • The problem of lottery tickets is formulated and studied in lifelong learning ( class-incremental learning ) for the first time . • Top-down pruning : a generalization of the iterative weight magnitude pruning used in the original LTH over continual learning tasks . • Bottom-up pruning : a novel pruning method , which is unique in allowing for re-growing model capacity throughout the lifelong process . • Extensive experiments and analyses demonstrating the promise of lifelong tickets in achieving superior yet extremely lightweight lifelong learners . 2 RELATED WORK . Lifelong Learning A lifelong learning system aims to continually learn sequential tasks and accommodate new information while maintaining previously learned knowledge ( Thrun & Mitchell , 1995 ) . One of its major challenges is called catastrophic forgetting ( McCloskey & Cohen , 1989 ; Kirkpatrick et al. , 2017 ; Hayes & Kanan , 2020 ) , i.e. , the network can not maintain expertise on tasks that it has not experienced for a long time . This paper 's study subject is class-incremental learning ( CIL ) ( Rebuffi et al. , 2017 ; Elhoseiny et al. , 2018 ) : a popular , realistic , albeit challenging setting of lifelong learning . CIL requires the model to recognize new classes emerging over time while maintaining recognizing ability over old classes without access to the previous data . Typical solutions are based on regularization ( Li & Hoiem , 2017 ; Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Aljundi et al. , 2018a ; Ebrahimi et al. , 2019 ) ; for example , knowledge distillation ( Hinton et al. , 2015 ) is a common regularizer to inherit previous knowledge through preserving soft logits of those samples ( Li & Hoiem , 2017 ) while learning new tasks . Besides , several approaches are learning with memorized data ( Castro et al. , 2018 ; Javed & Shafait , 2018 ; Rebuffi et al.
, 2017 ; Belouadah & Popescu , 2019 ; 2020 ; Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2018 ) . And some generative lifelong learning methods ( Liu et al. , 2020 ; Shin et al. , 2017 ) mitigate catastrophic forgetting by generating simulated data of previous tasks . There also exist a few architecture-manipulation-based lifelong learning methods ( Rajasegaran et al. , 2019 ; Aljundi et al. , 2018b ; Hung et al. , 2019 ; Abati et al. , 2020 ; Rusu et al. , 2016 ; Kemker & Kanan , 2017 ) , while their target is dividing a dense model into task-specific parts for lifelong learning , rather than localizing sparse networks and the lottery tickets . Pruning and Lottery Ticket Hypothesis It is well-known that deep networks can be pruned of excess capacity ( LeCun et al. , 1990b ) . Pruning algorithms can be categorized into unstructured ( Han et al. , 2015b ; LeCun et al. , 1990a ; Han et al. , 2015a ) and structured pruning ( Liu et al. , 2017 ; He et al. , 2017 ; Zhou et al. , 2016 ) . The former sparsifies weight elements based on magnitudes , while the latter removes network sub-structures such as channels for more hardware friendliness . LTH ( Frankle & Carbin , 2019 ) advocates the existence of an independently trainable sparse sub-network within a dense network . In addition to image classification ( Frankle & Carbin , 2019 ; Liu et al. , 2019 ; Wang et al. , 2020 ; Evci et al. , 2019 ; Frankle et al. , 2020 ; Savarese et al. , 2020 ; You et al. , 2020 ; Ma et al. , 2021 ; Chen et al. , 2020a ) , LTH has been explored widely in numerous contexts , such as natural language processing ( Gale et al. , 2019 ; Chen et al. , 2020b ) , reinforcement learning ( Yu et al. , 2019 ) , generative adversarial networks ( Chen et al. , 2021b ) , graph neural networks ( Chen et al. , 2021a ) , and adversarial robustness ( Cosentino et al. , 2019 ) . Most of them adopt unstructured weight magnitude pruning ( Han et al. , 2015a ; Frankle & Carbin , 2019 ) to obtain the ticket , which we also follow in this work . Frankle et al . ( 2019 ) analyze large models and datasets , and present a rewinding technique that re-initializes ticket training from the early training stage rather than from scratch . Renda et al . ( 2020 ) further compare different retraining techniques and endorse the effectiveness of rewinding . Mehta ( 2019 ) , Morcos et al . ( 2019 ) and Desai et al . ( 2019 ) pioneered the study of the transferability of a ticket identified on one source task to another target task , which delivers insights on one-shot transferability of LTH . One recent work ( Golkar et al. , 2019 ) aims at lifelong learning in fixed-capacity models based on pruning neurons of low activity . The authors observed that a controlled way of “ graceful forgetting ” after training each task can regain network capacity for new tasks while not suffering from forgetting . Sokar et al . ( 2020 ) further compress the sparse connections of each task during training , which reduces the interference between tasks and alleviates forgetting . 3 LOTTERY TICKET FROM SINGLE-TASK LEARNING TO CIL . 3.1 PROBLEM SETUP . In CIL , a model continuously learns from a sequential data stream in which new tasks ( namely , classification tasks with new classes ) are added over time , as shown in Figure 1 . At the inference stage , the model can operate without having access to the information of task IDs . Following ( Castro et al. , 2018 ; He et al. , 2018 ; Rebuffi et al.
, 2017 ) , a handful of samples from previous classes are stored in a fixed memory buffer . More formally , let $T_1 , T_2 , \cdots$ represent a sequence of tasks , where the $i$th task $T_i$ contains data that fall into $( k_i - k_{i-1} )$ classes $C_i = \{ c_{k_{i-1}+1} , c_{k_{i-1}+2} , \cdots , c_{k_i} \}$ , with $k_0 = 0$ by convention . Let $\Theta^{(i)} = \{ \theta^{(i)} , \theta_c^{(i)} \}$ denote the model of the learner used at task $i$ , where $\theta^{(i)}$ corresponds to the base model across all tasks from $T_1$ to $T_i$ , and $\theta_c^{(i)}$ denotes the task-specific classification head at $T_i$ . Thus , the size of $\theta^{(i)}$ is fixed , but the dimension of $\theta_c^{(i)}$ aligns with the number of classes that have been seen up to $T_i$ . In general , the learner has access to two types of information at task $i$ : the current training data $D^{(i)}$ , and certain previous information $P^{(i)}$ . The latter includes a small amount of data from previous tasks $\{ T_j \}_{j=1}^{i-1}$ stored in the memory buffer , and the previous model $\Theta^{(i-1)}$ at task $T_{i-1}$ . This is commonly used to overcome the catastrophic forgetting issue of the current task $i$ against the previous tasks . Based on the aforementioned setting , we state the CIL problem as below . Problem of CIL . At the current task $i$ , we aim to learn a full model $\Theta^{(i)} = \{ \theta^{(i)} , \theta_c^{(i)} \}$ based on the information $( D^{(i)} , P^{(i)} )$ such that $\Theta^{(i)}$ not only ( I ) yields the generalization ability to the newly added data at task $T_i$ but also ( II ) does not lose its power on the previous tasks $\{ T_j \}_{j=1}^{i-1}$ . We note that the aforementioned problem statement applies to CIL with any fixed-length learning period . That is , for $n$ time stamps ( one task per time ) , the validity of the entire trajectory $\{ \Theta^{(i)} \}_{i=1}^{n}$ is justified by each $\Theta^{(i)}$ satisfying the CIL criteria ( I ) and ( II ) stated in ‘ Problem of CIL ’ .
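As a concrete illustration of this setup , a shared base with a growing classification head can be sketched as follows ( a minimal sketch of the $\Theta^{(i)} = \{ \theta^{(i)} , \theta_c^{(i)} \}$ decomposition described above , not the authors ' implementation ; the bias-free head and explicit class counts are our assumptions ) :

```python
import torch
import torch.nn as nn

class CILModel(nn.Module):
    # theta: a fixed-size base shared across tasks; theta_c: a classification
    # head whose output dimension grows with the classes seen so far.
    def __init__(self, base, feat_dim, initial_classes):
        super().__init__()
        self.base, self.feat_dim = base, feat_dim
        self.head = nn.Linear(feat_dim, initial_classes, bias=False)

    def add_task(self, num_new_classes):
        # Extend theta_c with rows for the new classes while keeping the
        # weights of previously seen classes intact.
        old = self.head
        self.head = nn.Linear(self.feat_dim,
                              old.out_features + num_new_classes, bias=False)
        with torch.no_grad():
            self.head.weight[:old.out_features] = old.weight

    def forward(self, x):  # logits over all classes seen so far (no task IDs)
        return self.head(self.base(x))
```

Training at task $i$ would then combine the current data $D^{(i)}$ with the small memory buffer carried in $P^{(i)}$ .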
The research question of this paper is whether there exists an extremely sparse network, with an initial weight assignment, that can be trained online over sequentially arriving tasks to compete with a dense network in a lifelong continual learning configuration. A second research question is how to identify this sparse network and achieve competitive performance. To address these questions, the authors propose to incrementally introduce new non-zero weights when learning incoming tasks (Figure 2 and Equation 1). The network considered by the authors has a common base shared across all tasks and a head for each individual task.
SP:f7f0a0d566d1de41a8a8edfed70d363a57c671ef
Contrasting distinct structured views to learn sentence embeddings
1 INTRODUCTION . We propose a self-supervised method that builds sentence embeddings from the combination of diverse explicit syntactic structures . Such a method aims at improving the ability of models to capture compositional knowledge . In particular , we evaluate the embeddings ' potential to solve downstream tasks . Building generic sentence embeddings remains an open question . Many training methods have been explored : generating past and previous sentences ( Kiros et al. , 2015 ; Hill et al. , 2016 ) , discriminating context sentences ( Logeswaran & Lee , 2018 ) , predicting specific relations between pairs of sentences ( Conneau et al. , 2017 ; Nie et al. , 2019 ) . While all these methods propose efficient training objectives , they all rely on a similar RNN encoder architecture . Nonetheless , model architectures have been subject to extensive work as well ( Tai et al. , 2015 ; Zhao et al. , 2015 ; Arora et al. , 2017 ; Lin et al. , 2017 ) , and in supervised frameworks , many encoder structures outperform standard RNN networks . We hypothesize that structure is a crucial element for capturing compositional knowledge . In particular , the heterogeneity of performances across models and tasks makes us assume that some structures may be better adapted for a given example or task . Therefore , combining diverse structures should be more robust for tasks requiring complex word composition to derive their meaning . Hence , we aim here to evaluate the potential benefit from interactions between pairs of encoders . In particular , we propose a training method for which distinct encoders are learned jointly . We conjecture this association might improve our embeddings ' power of generalization and propose an experimental setup to corroborate our hypothesis . We take inspiration from multi-view learning , which is successfully applied in a variety of domains . In such a framework , the model learns representations by aligning separate observations of the same object . Traditionally , views are issued from a complementary natural perception of the data , for example , a picture and a sound recording of a dog . However , this can be extended to any pair of samples that share similar semantic content , such as the translation of the same sentence in two different languages . The definition can be extended to synthetic views , which are derived from the same unimodal data . In our case , we derive multiple views from a single sentence by pairing it with a distinct syntactic framework . We illustrate in Figure 2 two views derived from the same input sentence by applying respectively a constituency or dependency parser . As proposed in image processing ( Tian et al. , 2019 ; Bachman et al. , 2019 ) , we propose to align the different views using a contrastive learning framework . Indeed , contrastive learning is broadly used in NLP ( Mikolov et al. , 2013b ; a ; Logeswaran & Lee , 2018 ) . We propose to enhance the sentence embedding framework proposed in Logeswaran & Lee ( 2018 ) with a multi-view paradigm . As detailed in Section 2 , composing multiple views has demonstrated its effectiveness in many NLP applications . However , as far as we are aware , combining distinct structured models to build standalone embeddings has not yet been explored . Nevertheless , this paradigm benefits from several structural advantages : as already mentioned , it pairs nicely with contrastive learning . It might thus be trained in a self-supervised manner that does not require data annotation .
Moreover , contrary to the models presented in Section 2 , our method is not specific to a certain kind of encoder architecture . It does not require , for example , the use of attention layers or tree-structured models . More generally , it could be extended to any notion of view , even in domains other than language processing . Our setup could therefore be extended with any encoding function . Finally , our training method induces an interaction between models during inference and , importantly , during the training phase . Our paper is organized as follows : we detail our contrastive multi-view framework in Section 3 . In Section 4 , we propose an evaluation of our framework on standard evaluation benchmarks and propose qualitative analyses of our embeddings . 2 RELATED WORK . Multi-view learning is effectively used in a broad variety of domains . For image processing , some methods aim to learn representations by filling in the missing part of an image or solving jigsaw puzzles . For video , Tian et al . ( 2019 ) propose to build image tuples using video frames and flow . For audio , van den Oord et al . ( 2018 ) maximize the mutual information between the embedding of the signal at different time steps . Regarding NLP , combining different structural views has already been proven to be successful . Kong & Zhou ( 2011 ) provide a heuristic to combine dependency and constituency analysis for coreference resolution . Zhou et al . ( 2016 ) ; Ahmed et al . ( 2019 ) combine Tree LSTM and standard sequential LSTM with a cross-attention method and observe improvements on a semantic textual similarity task . Chen et al . ( 2017a ) combine CNN and Tree LSTM using attention methods on a sentiment classification task , and the combination outperforms both Tree-LSTM and CNN separately . Finally , Chen et al . ( 2017b ) combine sequential LSTM and Tree LSTM for natural language inference tasks . However , to our knowledge , combining distinct structured models in a contrastive learning setup has not been attempted for building sentence embeddings . 3 METHOD . Given a sentence s , the model aims at discriminating the sentences s+ in the neighborhood of s from sentences s− outside of this neighborhood . This is contrastive learning ( Section 3.1 ) . The representation of each sentence is acquired by using multiple views ( Section 3.2 ) . 3.1 CONTRASTIVE LEARNING . Contrastive learning is successfully applied in a variety of domains including audio ( van den Oord et al. , 2018 ) , image ( Wu et al. , 2018 ; Tian et al. , 2019 ) , video , and natural language processing for word embeddings ( Mikolov et al. , 2013b ) or sentence embeddings ( Logeswaran & Lee , 2018 ) . Some mathematical foundations are detailed in Saunshi et al . ( 2019 ) . The idea is to build a dataset such that each sample x is combined with another sample x+ , which is somehow close . For word or sentence embeddings , the close samples are the words or the sentences appearing in the given textual context . For image processing , close samples might be two different parts of the same image . Systems are trained to bring close sentences together while dispersing negative examples . In particular , a sentence embedding framework is proposed by Logeswaran & Lee ( 2018 ) . The method takes inspiration from the distributional hypothesis successfully applied for words but , this time , to identify context sentences . The network is trained using a contrastive method .
Given a sentence $s$ , a corresponding context sentence $s^+$ and a set of $K$ negative samples $s_1^- , \cdots , s_K^-$ , the training objective is to maximize the probability of discriminating the correct sentence among the negative samples : $p ( s^+ | s , s_1^- , \cdots , s_K^- )$ . The algorithm architecture used to estimate $p$ is close to word2vec ( Mikolov et al. , 2013b ; a ) . Two sentence encoders $f$ and $g$ are defined and the conditional probability is estimated as follows : $$p ( s^+ | s , s_1^- , \cdots , s_K^- ) = \frac{ e^{ f ( s ) ^\top g ( s^+ ) } }{ e^{ f ( s ) ^\top g ( s^+ ) } + \sum_{ i=1 }^{ K } e^{ f ( s ) ^\top g ( s_i^- ) } }$$ At inference time , the sentence representation is obtained as the concatenation of the two encoders $f$ and $g$ , i.e . $s \rightarrow [ f ( s ) ; g ( s ) ]$ . In Logeswaran & Lee ( 2018 ) , $f$ and $g$ are chosen identical and consist of two RNNs . However , the authors observe that the encoders might learn redundant features . To limit this effect , they propose to use a distinct set of embeddings for each encoder .
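In code , this discrimination objective can be sketched as follows ( a minimal sketch , not the authors ' implementation ; the batched tensor shapes and the use of cross-entropy over concatenated scores are our assumptions ) :

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f_s, g_pos, g_neg):
    # f_s:   (B, D) anchor embeddings from encoder f
    # g_pos: (B, D) context-sentence embeddings from encoder g
    # g_neg: (B, K, D) embeddings of the K negative sentences from encoder g
    pos = (f_s * g_pos).sum(-1, keepdim=True)      # f(s)^T g(s+), shape (B, 1)
    neg = torch.einsum('bd,bkd->bk', f_s, g_neg)   # f(s)^T g(s_i^-), shape (B, K)
    logits = torch.cat([pos, neg], dim=-1)         # (B, 1 + K)
    target = torch.zeros(logits.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, target)         # = -log p(s+ | s, negatives)
```

Here the positive pair sits at index 0 , so minimizing the cross-entropy maximizes the probability in the equation above .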
We address this aspect by enhancing the method with a multi-view framework and using a distinct structured model for each of the encoders $f$ and $g$ . We hypothesize that some structures may be better adapted for a given example or task . For example , Figure 2 illustrates that dependency parsing uses the verb “ filled ” as the root node , whereas in constituency parsing , subject and verb are , respectively , the right and left children of the root node . Therefore , the combination of different structures should be more robust for tasks requiring complex word composition and less sensitive to lexical variations . Consequently , we propose a training procedure that allows the model to benefit from the interaction of various syntactic structures . The choice of encoder architectures is detailed in the following section . 3.2 LANGUAGE VIEWS . Multi-view learning aims at learning representations from data represented by multiple independent sets of features . The method is not specific to any particular nature of data and can be applied to a broad range of domains , making it an efficient framework in self-supervised representation learning . As depicted in Section 1 , we generalize the notion of view for a sentence as the application of a specific syntactic framework . For each view , we use an ad-hoc algorithm that maps the structured representation of the sentence into an embedding space . For tree-based views , we consider both phrase structure trees and dependency trees . The phrase structure of a sentence is represented as nested multi-word constituents . The dependency tree represents the relationships between individual words . Although equivalences might be derived between the two representation schemes , we hypothesize that , in our context , the corresponding sequences of operations might allow capturing rather distinct linguistic properties . The various models may , therefore , be complementary and their combination allows for more fine-grained analysis . Together with the trees , we consider the following views of a sentence . Bag Of Words ( BOW ) This setup does not assume any underlying structure . The sentence is modeled as an unordered set of words . The associated encoding method is a simple commutative sum operation over the word embeddings . In our case , vectors are initialized using the publicly available GloVe vectors ( Pennington et al. , 2014 ; https : //nlp.stanford.edu/projects/glove/ ) . We used 300-dimensional word vectors trained on the Common Crawl dataset ( 840B tokens ) with a vocabulary of 2.2M case-sensitive words . Vanilla LSTM ( SEQ ) assumes a sequential structure where each word depends on the previous words in the sentence . The framework is a bidirectional sequential LSTM ( Hochreiter & Schmidhuber , 1997 ) . The concatenation of the forward and backward last hidden states of the model is used as the sequence embedding . Dependency tree ( DEP ) In the dependency tree model , words are connected through dependency edges . A word might have an arbitrary number of dependants . As illustrated in Figure 2 , the sentence can be represented as a tree where nodes correspond to words and edges indicate whether or not the words are connected in the dependency tree . In our case , the dependency tree is obtained using the deep biaffine parser from Dozat & Manning ( 2017 ) ; we use an open-source implementation of the parser ( https : //github.com/yzhangcs/biaffine-parser ) and replace the pos-tag features with features obtained with BERT , so we do not need pos-tag annotations to parse our corpus . For this view , we compute sentence embeddings with the Child-Sum Tree LSTM model described in Tai et al . ( 2015 ) : each node is assigned an embedding given its dependants with a recursive function . The recursive node function is derived from the standard LSTM formulation but adapted for tree inputs . In particular , the hidden state is computed as the sum of all children hidden states : $$\tilde{h}_j = \sum_{ k \in C ( j ) } h_k \quad ( 1 )$$ with $C ( j )$ the set of children of node $j$ . All equations are detailed in Tai et al . ( 2015 ) . However , we slightly modify the computation of $\tilde{h}_j$ using Equation 2 . As in Zhou et al . ( 2016 ) , we propose to compute $\tilde{h}_j$ as a weighted sum of the children vectors in order to allow the model to filter out semantically less relevant children : $$\tilde{h}_j = \sum_{ k \in C ( j ) } \alpha_{kj} h_k \quad ( 2 )$$ The parameters $\alpha_{kj}$ are attention weights computed using a soft attention layer . Given a node $j$ , let $h_1 , h_2 , \ldots , h_n$ denote the corresponding children hidden states . The soft attention layer produces a weight $\alpha_{kj}$ for each child 's hidden state . We do not use any external query to compute the attention but instead use a projection of the current node embedding . The attention mechanism is detailed in the equations below : $$q_j = W^{ ( q ) } x_j + b^{ ( q ) } \quad ( 3 )$$ $$p_k = W^{ ( p ) } h_k + b^{ ( p ) } \quad ( 4 )$$ $$a_{kj} = \frac{ q_j \cdot p_k^\top }{ \| q_j \|_2 \cdot \| p_k \|_2 } \quad ( 5 )$$ $$\alpha_{kj} = \mathrm{softmax}_k ( a_{1j} , \cdots , a_{nj} ) \quad ( 6 )$$ As the Tree LSTM model computes representations bottom-up , the embedding at the root of the tree is used as the sentence embedding . Constituency tree ( CONST ) Constituency analysis describes the sentence as a nested multi-word structure . In this framework , words are grouped recursively into constituents . In the resulting tree , only leaf nodes correspond to words , while internal nodes recursively encode word sequences . The structure is obtained using the neural constituency parser from Kitaev & Klein ( 2018 ) . This view is associated with the N-ary Tree LSTM , which is defined in Tai et al . ( 2015 ) . Similarly to the original article , we binarize the trees to ensure that every node has exactly two dependants . The binarization is performed using a left markovization , and unary productions are collapsed into a single node . Again the representation is computed bottom-up and the embedding of the tree root node is used as the sentence embedding . The equations detailed in Tai et al . ( 2015 ) distinguish between right and left nodes .
Therefore we do not enhance this architecture with a weighted sum , as we did for the DEP view .
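For the DEP view , the attention-weighted child aggregation of Equations ( 2 ) to ( 6 ) can be sketched as follows ( a minimal sketch ; the tensor shapes and parameter names are our assumptions ) :

```python
import torch

def attentive_child_sum(x_j, child_h, Wq, bq, Wp, bp, eps=1e-8):
    # x_j: (D,) embedding of node j; child_h: (n, H) children hidden states.
    q = Wq @ x_j + bq                                  # Eq. (3), shape (A,)
    p = child_h @ Wp.T + bp                            # Eq. (4), shape (n, A)
    a = (p @ q) / (q.norm() * p.norm(dim=-1) + eps)    # Eq. (5), cosine scores
    alpha = torch.softmax(a, dim=0)                    # Eq. (6)
    return (alpha.unsqueeze(-1) * child_h).sum(dim=0)  # Eq. (2): weighted sum
```

The resulting $\tilde{h}_j$ replaces the plain sum of Equation ( 1 ) inside the Child-Sum Tree LSTM cell .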
The authors introduce a pretraining paradigm based on contrastive learning between multiple syntactic views of the same sentence. The method maximizes the similarity between the representations that different sentence encoders produce for the same sentence, and minimizes the similarity to all other sentence representations. The results on the InferSent benchmark show competitive performance of the approach when compared to non-syntactic pretraining methods.
SP:a74439e1ce3691416cb8557a7662c20855b187ee
Hamiltonian Q-Learning: Leveraging Importance-sampling for Data Efficient RL
Model-free reinforcement learning ( RL ) , in particular Q-learning , is widely used to learn optimal policies for a variety of planning and control problems . However , when the underlying state-transition dynamics are stochastic and high-dimensional , Q-learning requires a large amount of data and incurs a prohibitively high computational cost . In this paper , we introduce Hamiltonian Q-Learning , a data efficient modification of the Q-learning approach , which adopts an importance-sampling based technique for computing the Q function . To exploit the stochastic structure of the state-transition dynamics , we employ Hamiltonian Monte Carlo to update Q function estimates by approximating the expected future rewards using Q values associated with a subset of next states . Further , to exploit the latent low-rank structure of the dynamic system , Hamiltonian Q-Learning uses a matrix completion algorithm to reconstruct the updated Q function from Q value updates over a much smaller subset of state-action pairs . By providing an efficient way to apply Q-learning in stochastic , high-dimensional problems , the proposed approach broadens the scope of RL algorithms for real-world applications , including classical control tasks and environmental monitoring . 1 INTRODUCTION . In recent years , reinforcement learning ( Sutton & Barto , 2018 ) has achieved remarkable success with sequential decision making tasks , especially in complex , uncertain environments . RL algorithms have been widely applied to a variety of real world problems , such as resource allocation ( Mao et al. , 2016 ) , chemical process optimization ( Zhou et al. , 2017 ) , automatic control ( Duan et al. , 2016 ) , and robotics ( Kober et al. , 2013 ) . Existing RL techniques often offer satisfactory performance only when the agent is allowed to explore the environment long enough , generating a large amount of data in the process ( Mnih et al. , 2015 ; Kamthe & Deisenroth , 2018 ; Yang et al. , 2020a ) . This can be prohibitively expensive and thereby limits the use of RL for complex decision support problems . Q-Learning ( Watkins , 1989 ; Watkins & Dayan , 1992 ) is a model-free RL framework that captures the salient features of sequential decision making , where an agent , after observing the current state of the environment , chooses an action and receives a reward . The action chosen by the agent is based on a policy defined by the state-action value function , also called the Q function . Performance of such policies strongly depends on the accessibility of a sufficiently large data set covering the space spanned by the state-action pairs . In particular , for high-dimensional problems , existing model-free RL methods using random sampling techniques lead to poor performance and high computational cost . To overcome this challenge , in this paper we propose an intelligent sampling technique that exploits the inherent structure of the underlying space related to the dynamics of the system . It has been observed that formulating planning and control tasks in a variety of dynamical systems , such as video games ( Atari games ) , classical control problems ( simple pendulum , cart pole and double integrator ) and adaptive sampling ( ocean sampling , environmental monitoring ) , as Q-Learning problems leads to low-rank structures in the Q matrix ( Ong , 2015 ; Yang et al. , 2020b ; Shah et al. , 2020 ) .
Since these systems naturally consist of a large number of states , efficient exploitation of the low rank structure of the Q matrix can potentially lead to significant reduction in computational complexity and improved performance . However , when the state space is high-dimensional and , further , the state transition is probabilistic , the high computational complexity associated with calculating the expected Q values of next states renders existing Q-Learning methods impractical . A potential solution for this problem lies in approximating the expectation of Q values of next states with the sample mean of Q values over a subset of next states . A natural way to select a subset of next states is by drawing IID samples from the transition probability distribution . However , this straightforward approach becomes challenging when the state transition probability distribution is high-dimensional and is known only up to a constant . We address this problem by using Hamiltonian Monte Carlo ( HMC ) to sample next states ; HMC draws samples by integrating a Hamiltonian dynamics governed by the transition probability ( Neal et al. , 2011 ) . We improve the data efficiency further by using matrix completion methods to exploit the low rank structure of the Q matrix . RELATED WORK Data efficient Reinforcement Learning : The last decade has witnessed a growing interest in improving data efficiency in RL methods by exploiting emergent global structures from underlying system dynamics . Deisenroth & Rasmussen ( 2011 ) ; Pan & Theodorou ( 2014 ) ; Kamthe & Deisenroth ( 2018 ) ; Buckman et al . ( 2018 ) have proposed model-based RL methods that improve data efficiency by explicitly incorporating prior knowledge about the state transition dynamics of the underlying system . Dearden et al . ( 1998 ) ; Koppel et al . ( 2018 ) ; Jeong et al . ( 2017 ) propose Bayesian methods to approximate the Q function . Ong ( 2015 ) and Yang et al . ( 2020b ) consider a model-free RL approach that exploits structures of the state-action value function . The work by Ong ( 2015 ) decomposes the Q matrix into a low-rank and sparse matrix model and uses matrix completion methods ( Candes & Plan , 2010 ; Wen et al. , 2012 ; Chen & Chi , 2018 ) to improve data efficiency . A more recent work by Yang et al . ( 2020b ) has shown that incorporation of low rank matrix completion methods for recovering the Q matrix from a small subset of Q values can improve the learning of optimal policies . At each time step the agent chooses a subset of state-action pairs and updates their Q values according to the Bellman optimality equation , which considers a discounted average between the reward and the expectation of the Q values of next states . Shah et al . ( 2020 ) extend this work by proposing a novel matrix estimation method and providing theoretical guarantees for the convergence to an ε-optimal Q function . On the other hand , entropy regularization ( Ahmed et al. , 2019 ; Yang et al. , 2019 ; Smirnova & Dohmatob , 2020 ) , by penalizing excessive randomness in the conditional distribution of actions for a given state , provides an alternative means to implicitly exploit the underlying low-dimensional structure of the value function . Lee et al . ( 2019 ) propose an approach that samples a whole episode and then updates values in a recursive , backward manner . CONTRIBUTION The main contribution of this work is three-fold . First , we introduce a modified Q-learning framework , called Hamiltonian Q-learning , which uses HMC sampling for efficient computation of Q values .
This innovation , by proposing to sample Q values from the region with the dominant contribution to the expectation of the discounted reward , provides a data-efficient approach for using Q-learning in real-world problems with high-dimensional state spaces and probabilistic state transitions . Furthermore , integration of this sampling approach with matrix completion enables us to update Q values for only a small subset of state-action pairs and thereafter reconstruct the complete Q matrix . Second , we provide theoretical guarantees that the error between the optimal Q function and the Q function obtained by updating Q values using HMC sampling can be made arbitrarily small . This result also holds when only a handful of Q values are updated using HMC and the rest are estimated using matrix completion . Further , we provide a theoretical guarantee that the sampling complexity of our algorithm matches the mini-max sampling complexity proposed by Tsybakov ( 2008 ) . Finally , we demonstrate the effectiveness of Hamiltonian Q-learning by applying it to a cart-pole stabilization problem and an adaptive ocean sampling problem . Our results also indicate that our proposed approach becomes more effective with increasing state space dimension . 2 PRELIMINARY CONCEPTS . In this section , we provide a brief background on Q-Learning , HMC sampling and matrix completion , as well as introduce the mathematical notation . In this paper , $| Z |$ denotes the cardinality of a set $Z$ , $\mathbb{R}$ represents the real line , and $A^\top$ denotes the transpose of a matrix $A$ . 2.1 Q-LEARNING A Markov Decision Process ( MDP ) is a mathematical formulation that captures salient features of sequential decision making ( Bertsekas , 1995 ) . In particular , a finite MDP is defined by the tuple $( \mathcal{S} , \mathcal{A} , P , r , \gamma )$ , where $\mathcal{S}$ is the finite set of system states , $\mathcal{A}$ is the finite set of actions , $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [ 0 , 1 ]$ is the transition probability kernel , $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is a bounded reward function , and $\gamma \in [ 0 , 1 )$ is a discounting factor . Without loss of generality , states $s \in \mathcal{S}$ and actions $a \in \mathcal{A}$ can be assumed to be $D_s$-dimensional and $D_a$-dimensional real vectors , respectively . Moreover , by letting $s^i$ denote the $i$th element of a state vector , we define the range of the state space in terms of intervals $[ d_i^- , d_i^+ ]$ such that $s^i \in [ d_i^- , d_i^+ ]$ for all $i \in \{ 1 , \ldots , D_s \}$ . At each time $t \in \{ 1 , \ldots , T \}$ over the decision making horizon , an agent observes the state of the environment $s_t \in \mathcal{S}$ and takes an action $a_t$ according to some policy $\pi$ which maximizes the discounted cumulative reward . Once this action has been executed , the agent receives a reward $r ( s_t , a_t )$ from the environment and the state of the environment changes to $s_{t+1}$ according to the transition probability kernel $P ( \cdot | s_t , a_t )$ . The Q function , which represents the expected discounted reward for taking a specific action at the current time and following the policy thereafter , is defined as a mapping from the space of state-action pairs to the real line , i.e . $Q : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ . Then , by letting $Q_t$ represent the Q matrix at time $t$ , i.e . the tabulation of the Q function over all possible state-action pairs associated with the finite MDP , we can express the Q value iteration over time steps as $$Q_{t+1} ( s_t , a_t ) = \sum_{ s \in \mathcal{S} } P ( s | s_t , a_t ) \left( r ( s_t , a_t ) + \gamma \max_a Q_t ( s , a ) \right) . \quad ( 1 )$$ Under this update rule , the Q function converges to its unique optimal value $Q^*$ ( Melo , 2001 ) .
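A direct tabular implementation of the update in Equation ( 1 ) can be sketched as follows ( a minimal sketch for a finite MDP with the transition kernel and rewards stored as dense arrays , which is exactly the regime where the full sum becomes expensive ) :

```python
import numpy as np

def q_update(Q, P, R, gamma):
    # Q: (S, A) current Q matrix; P: (S, A, S) transition kernel P(s'|s,a);
    # R: (S, A) rewards. Applies Eq. (1) to every state-action pair at once.
    V = Q.max(axis=1)                               # max_a Q_t(s', a)
    return R + gamma * np.einsum('xas,s->xa', P, V)
```

The einsum is the expensive step : it touches every possible next state for every state-action pair , which motivates replacing the full sum with a sample mean over a subset of next states .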
But computing the sum in ( 1 ) over all possible next states is computationally expensive in certain problems ; in these cases , taking the summation over a subset of the next states provides an efficient alternative for updating the Q values . 2.2 HAMILTONIAN MONTE CARLO . Hamiltonian Monte Carlo is a sampling approach for drawing samples from probability distributions known up to a constant . It offers faster convergence than Markov Chain Monte Carlo ( MCMC ) sampling ( Neal et al. , 2011 ; Betancourt ; Betancourt et al. , 2017 ; Neklyudov et al. , 2020 ) . To draw samples from a smooth target distribution $P ( s )$ , which is defined on the Euclidean space and assumed to be known up to a constant , HMC extends the target distribution to a joint distribution over the target variable $s$ ( viewed as position within the HMC context ) and an auxiliary variable $v$ ( viewed as momentum within the HMC context ) . We define the Hamiltonian of the system as $$H ( s , v ) = - \log P ( s , v ) = - \log P ( s ) - \log P ( v | s ) = U ( s ) + K ( v , s ) ,$$ where $U ( s ) \triangleq - \log P ( s )$ and $K ( v , s ) \triangleq - \log P ( v | s ) = \frac{1}{2} v^\top M^{-1} v$ represent the potential and kinetic energy , respectively , and $M$ is a suitable choice of the mass matrix . The HMC sampling method consists of the following three steps : ( i ) a new momentum variable $v$ is drawn from a fixed probability distribution , typically a multivariate Gaussian ; ( ii ) then a new proposal $( s' , v' )$ is obtained by generating a trajectory that starts from $( s , v )$ and obeys Hamiltonian dynamics , i.e . $\dot{s} = \frac{ \partial H }{ \partial v }$ , $\dot{v} = - \frac{ \partial H }{ \partial s }$ ; and ( iii ) finally this new proposal is accepted with probability $\min \{ 1 , \exp ( H ( s , v ) - H ( s' , -v' ) ) \}$ following the Metropolis-Hastings acceptance/rejection rule . 2.3 LOW-RANK STRUCTURE IN Q-LEARNING AND MATRIX COMPLETION Prior work ( Johns & Mahadevan , 2007 ; Geist & Pietquin , 2013 ; Ong , 2015 ; Shah et al. , 2020 ) on value function approximation based approaches for RL has implicitly assumed that the state-action value functions are low-dimensional and used various basis functions to represent them , e.g . CMAC , radial basis functions , etc . This can be attributed to the fact that the underlying state transition and reward function are often endowed with some structure . More recently , Yang et al . ( 2020b ) provide empirical evidence that the Q-matrices for benchmark Atari games and classical control tasks exhibit low-rank structure . Therefore , using matrix completion techniques ( Xu et al. , 2013 ; Chen & Chi , 2018 ) to recover $Q \in \mathbb{R}^{ | \mathcal{S} | \times | \mathcal{A} | }$ from a small number of observed Q values constitutes a viable approach towards improving data efficiency . As low-rank matrix structures can be recovered by constraining the nuclear norm , the Q matrix can be reconstructed from its observed values $\hat{Q}$ by solving $$Q = \arg \min_{ \tilde{Q} \in \mathbb{R}^{ | \mathcal{S} | \times | \mathcal{A} | } } \| \tilde{Q} \|_* \quad \text{subject to} \quad J_\Omega ( \tilde{Q} ) = J_\Omega ( \hat{Q} ) , \quad ( 2 )$$ where $\| \cdot \|_*$ denotes the nuclear norm ( i.e. , the sum of the singular values ) , $\Omega$ is the observed set of elements , and $J_\Omega$ is the observation operator , i.e . $J_\Omega ( x ) = x$ if $x \in \Omega$ and zero otherwise .
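A standard way to approximately solve the nuclear-norm problem in Equation ( 2 ) is singular value thresholding ; a minimal sketch is given below ( the threshold , step size and iteration count are our assumptions , and the paper 's actual solver may differ ) :

```python
import numpy as np

def svt_complete(Q_obs, mask, tau=5.0, step=1.2, iters=200):
    # Q_obs: (S, A) matrix with observed Q values; mask: 1 on observed entries.
    # Alternates singular-value shrinkage with re-enforcing the observations.
    Y = np.zeros_like(Q_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y = Y + step * mask * (Q_obs - X)         # match observed entries
    return X  # low-rank estimate of the full Q matrix
```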
3 HAMILTONIAN Q-LEARNING A large class of real-world sequential decision-making problems (for example, board/video games, control of robot movement, and portfolio optimization) involves high-dimensional state spaces and often has a large number of distinct states along each individual dimension. As using a Q-learning-based approach to train RL agents for these problems typically requires tens to hundreds of millions of samples (Mnih et al., 2015; Silver et al., 2017), there is a strong need for data-efficient Q-learning algorithms. In addition, state transitions in such systems are often probabilistic in nature; even when the underlying dynamics of the system are inherently deterministic, the presence of external disturbances and parameter variations/uncertainties leads to probabilistic state transitions. Learning an optimal $Q^*$ function through value iteration methods requires updating the Q values of state-action pairs using a sum of the reward and a discounted expectation of the Q values associated with the next states. In this work, we assume the reward to be a deterministic function of the state-action pair. However, when the reward is stochastic, these results can be extended by replacing the reward with its expectation. Subsequently, we can express (1) as
$$Q_{t+1}(s_t, a_t) = r(s_t, a_t) + \gamma \, \mathbb{E}\left(\max_{a} Q_t(s, a)\right), \qquad (3)$$
where $\mathbb{E}$ denotes the expectation over the discrete probability measure $P$. When the underlying state space is high-dimensional and has a large number of states, obtaining an accurate estimate of this expectation is computationally very expensive: the complexity increases quadratically with the number of states and linearly with the number of actions, rendering existing algorithms impractical. In this work, we address this issue by introducing an importance-sampling-based method that approximates the expectation by a sample mean of Q values over a subset of next states. A natural way to sample a subset from the set of all possible next states is to draw independent and identically distributed (IID) samples from the transition probability distribution $P(\cdot \mid s_t, a_t)$. However, when the transition probability distribution is high-dimensional and known only up to a constant, drawing IID samples incurs a very high computation cost.
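Since HMC needs the target density only up to a constant, a single transition can be sketched as follows; this is a generic leapfrog HMC step with an identity mass matrix, written as an illustration rather than the paper's tuned sampler, and `grad_U` is an assumed user-supplied gradient of the potential.

```python
import numpy as np

def hmc_step(s, U, grad_U, step_size=0.05, n_leapfrog=20, rng=None):
    """One HMC transition targeting P(s) proportional to exp(-U(s)).

    s: 1-D state vector; U(s) = -log P(s) up to a constant.
    Identity mass matrix; a sketch, not the paper's sampler.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(s.shape)                 # (i) resample momentum
    # (ii) leapfrog integration of Hamiltonian dynamics.
    s_new = s.copy()
    v_new = v - 0.5 * step_size * grad_U(s)          # initial half step
    for _ in range(n_leapfrog):
        s_new = s_new + step_size * v_new
        v_new = v_new - step_size * grad_U(s_new)
    v_new = v_new + 0.5 * step_size * grad_U(s_new)  # undo extra half step
    # (iii) Metropolis-Hastings accept/reject on the Hamiltonian difference.
    H_old = U(s) + 0.5 * (v @ v)
    H_new = U(s_new) + 0.5 * (v_new @ v_new)
    accept = rng.random() < np.exp(min(0.0, H_old - H_new))
    return s_new if accept else s
```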
This work focuses on dynamic programming in the tabular setting. It proposes to use Hamiltonian Monte-Carlo (HMC) to sample the next states (instead of IID samples) and matrix completion to learn a low-rank Q matrix. It shows theoretical convergence. Experiments on discretized problems (CartPole and an ocean sampling problem) show that HMC and low-rank learning can behave more benignly compared to IID samples.
Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning
1 INTRODUCTION . Graph representation learning (GRL) has become an invaluable approach for a variety of tasks, such as node classification (e.g., in biological and citation networks; Veličković et al. (2018); Kipf & Welling (2017); Hamilton et al. (2017); Xu et al. (2018)), edge classification (e.g., link prediction for social and protein networks; Perozzi et al. (2014); Grover & Leskovec (2016)), entire-graph classification (e.g., for chemistry and drug discovery; Gilmer et al. (2017); Chen et al. (2018a)), etc. In this work, we propose an algorithmic unification of various GRL methods that allows us to re-implement existing GRL methods and introduce new ones in merely a handful of code lines per method. Our algorithm (abbreviated GTTF, Section 3.2) receives graphs as input, traverses them using efficient tensor operations (to disambiguate: by tensors, we refer to multi-dimensional arrays, as used in the deep learning literature; by operations, we refer to routines such as matrix multiplication, advanced indexing, etc.), and invokes specializable functions during the traversal. We show function specializations for recovering popular GRL methods (Section 3.3). Moreover, since GTTF is stochastic, these specializations automatically scale to arbitrarily large graphs, without careful derivation per method. Importantly, such specializations, in expectation, recover unbiased gradient estimates of the objective w.r.t. the model parameters. GTTF uses a data structure $\hat{A}$ (Compact Adjacency, Section 3.1): a sparse encoding of the adjacency matrix. Node $v$'s neighbors are stored in row $\hat{A}[v]$, notably in its first $\deg(v)$ columns. This encoding allows stochastic graph traversals using standard tensor operations. GTTF is a functional, as it accepts functions ACCUMULATEFN and BIASFN, to be provided by each GRL specialization, respectively, to accumulate the information necessary for computing the objective, and optionally to parametrize the sampling procedure $p(v\text{'s neighbors} \mid v)$. The traversal internally constructs a walk forest as part of the computation graph. Figure 1 depicts the data structure and the computation. From a generalization perspective, GTTF shares similarities with Dropout (Srivastava et al., 2014). Our contributions are: (i) a stochastic graph traversal algorithm (GTTF) based on tensor operations that inherits the benefits of vectorized computation and libraries such as PyTorch and TensorFlow; (ii) we list specialization functions allowing GTTF to approximately recover the learning of a broad class of popular GRL methods; (iii) we prove that this learning is unbiased, with controllable variance; (iv) for this class of methods, we show that GTTF can scale previously-unscalable GRL algorithms, setting the state of the art on a range of datasets; finally, (v) we open-source GTTF along with new stochastic-traversal versions of several algorithms, to aid practitioners from various fields in applying and designing state-of-the-art GRL methods for large graphs. 2 RELATED WORK . We take a broad standpoint in summarizing related work to motivate our contribution.
Method | Family | Scale | Learning
Models
  GCN, GAT | MP | ✗ | exact
  node2vec | NE | ✓ | approx
  WYS | NE | ✗ | exact
Stochastic Sampling Methods
  SAGE | MP | ✓ | approx
  FastGCN | MP | ✓ | approx
  LADIES | MP | ✓ | approx
  GraphSAINT | MP | ✓ | approx
  ClusterGCN | MP | ✓ | heuristic
Software Frameworks
  PyG | Both | inherits / re-implements
  DGL | Both | inherits / re-implements
Algorithmic Abstraction (ours)
  GTTF | Both | ✓ | approx

Models for GRL have been proposed, including message passing (MP) algorithms, such as the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) and Graph Attention (GAT) (Veličković et al., 2018), as well as node embedding (NE) algorithms, including node2vec (Grover & Leskovec, 2016) and WYS (Abu-El-Haija et al., 2018), among many others (Xu et al., 2018; Wu et al., 2019; Perozzi et al., 2014). The full-batch GCN of Kipf & Welling (2017), which drew recent attention and motivated many MP algorithms, was not initially scalable to large graphs, as it processes all graph nodes at every training step. To scale MP methods to large graphs, researchers proposed stochastic sampling methods that, at each training step, assemble a batch constituting subgraph(s) of the (large) input graph. Some of these sampling methods yield unbiased gradient estimates (with some variance), including SAGE (Hamilton et al., 2017), FastGCN (Chen et al., 2018b), LADIES (Zou et al., 2019), and GraphSAINT (Zeng et al., 2020). On the other hand, ClusterGCN (Chiang et al., 2019) is a heuristic in the sense that, despite its good performance, it provides no guarantee of unbiased gradient estimates of the full-batch learning. Gilmer et al. (2017) and Chami et al. (2021) generalized many GRL models into message passing and auto-encoder frameworks. These frameworks prompted the bundling of GRL methods into software libraries, like PyG (Fey & Lenssen, 2019) and DGL (Wang et al., 2019), offering consistent interfaces on data formats. We now position our contribution relative to the above. Unlike generalized message passing (Gilmer et al., 2017), rather than abstracting the model computation, we abstract the learning algorithm. As a result, GTTF can be specialized to recover the learning of MP as well as NE methods. Moreover, unlike software frameworks, which are re-implementations of many algorithms and therefore inherit the scale and learning behavior of the copied algorithms, we re-write the algorithms themselves, giving them new properties (memory and computation complexity) while maintaining (in expectation) the original algorithm outcomes. Further, while the listed stochastic sampling methods target MP algorithms (such as GCN, GAT, and the like), as their initial construction could not scale to large graphs, our learning algorithm applies to a wider class of GRL methods, additionally encapsulating NE methods. Finally, while some NE methods such as node2vec (Grover & Leskovec, 2016) and DeepWalk (Perozzi et al., 2014) are scalable in their original form, their scalability stems from their multi-step process: sample many (short) random walks, save them to disk, and then learn node embeddings using positional embedding methods (e.g., word2vec; Mikolov et al. (2013)). They are sub-optimal in the sense that the first step (walk sampling) takes considerable time (before training even starts) and also places an artificial limit on the number of training samples (the number of simulated walks), whereas our algorithm conducts walks on the fly while training.
3 GRAPH TRAVERSAL VIA TENSOR FUNCTIONALS (GTTF) . At its core, GTTF is a stochastic algorithm that recursively conducts graph traversals to build representations of the graph. We describe the data structure and traversal algorithm below, using the following notation. $G = (V, E)$ is an unweighted graph with $n = |V|$ nodes and $m = |E|$ edges, described as a sparse adjacency matrix $A \in \{0, 1\}^{n \times n}$. Without loss of generality, let the nodes be zero-based numbered, i.e., $V = \{0, \ldots, n-1\}$. We denote the out-degree vector by $\delta \in \mathbb{Z}^n$; it can be calculated by summing over the rows of $A$ as $\delta_u = \sum_{v \in V} A[u, v]$. We assume $\delta_u > 0$ for all $u \in V$: pre-processing can add self-connections to orphan nodes. $B$ denotes a batch of nodes. 3.1 DATA STRUCTURE . Internally, GTTF relies on a reformulation of the adjacency matrix, which we term CompactAdj (for "Compact Adjacency", Figure 1c). It consists of two tensors: 1. $\delta \in \mathbb{Z}^n$, a dense out-degree vector (Figure 1c, right); 2. $\hat{A} \in \mathbb{Z}^{n \times n}$, a sparse edge-list matrix in which row $u$ contains left-aligned $\delta_u$ non-zero values. The consecutive entries $\{\hat{A}[u, 0], \hat{A}[u, 1], \ldots, \hat{A}[u, \delta_u - 1]\}$ contain the IDs of nodes receiving an edge from node $u$. The remaining $|V| - \delta_u$ entries are left unset; therefore, $\hat{A}$ only occupies $O(m)$ memory when stored as a sparse matrix (Figure 1c, left). CompactAdj allows us to concisely describe stochastic traversals using standard tensor operations. To uniformly sample a neighbor of node $u \in V$, one can draw $r \sim U[0 .. (\delta_u - 1)]$, then get the neighbor ID with $\hat{A}[u, r]$. In vectorized form, given a node batch $B$ and access to continuous $U[0, 1)$, we sample neighbors for each node in $B$ as: $R \sim U[0, 1)^b$, where $b = |B|$; then $B' = \hat{A}[B, \lfloor R \circ \delta[B] \rfloor]$ is a $b$-sized vector, with $B'_u$ containing a neighbor of $B_u$, where the floor operation $\lfloor \cdot \rfloor$ is applied element-wise and $\circ$ is the Hadamard product. 3.2 STOCHASTIC TRAVERSAL FUNCTIONAL ALGORITHM . Our traversal algorithm starts from a batch of nodes. It expands from each into a tree, resulting in a walk forest rooted at the nodes in the batch, as depicted in Figure 1d. In particular, given a node batch $B$, the algorithm instantiates $|B|$ seed walkers, placing one at every node in $B$. Iteratively, each walker first replicates itself a fanout ($f$) number of times. Each replica then samples and transitions to a neighbor. This process repeats a depth ($h$) number of times. Therefore, each seed walker becomes the ancestor of an $f$-ary tree with height $h$. Setting $f = 1$ recovers the traditional random walk. In practice, we provide flexibility by allowing a custom fanout value per depth.

Algorithm 1: Stochastic Traverse Functional, parametrized by ACCUMULATEFN and BIASFN.
Input: u (current node); T ← [] (path leading to u, starts empty); F (list of fanouts); ACCUMULATEFN (function with side-effects and no return value; it is model-specific and records information for computing the model and/or objective, see text); BIASFN ← U (function mapping u to a distribution on u's neighbors, defaults to uniform).

def Traverse(T, u, F, ACCUMULATEFN, BIASFN):
    if F.size() = 0 then return                      # base case: traversed up to requested depth
    f ← F.pop()                                      # fanout (duplication factor, i.e., breadth) at this depth
    sample_bias ← BIASFN(T, u)
    if sample_bias.sum() = 0 then return             # special case: no sampling from zero mass
    sample_bias ← sample_bias / sample_bias.sum()    # normalize to a valid distribution
    K ← Sample(Â[u, :δu], sample_bias, f)            # sample f nodes from u's neighbors
    for k ← 0 to f − 1 do
        T_next ← concatenate(T, [u])
        ACCUMULATEFN(T_next, K[k], f)
        Traverse(T_next, K[k], F, ACCUMULATEFN, BIASFN)   # recursion on the remaining fanouts

def Sample(N, W, f):
    C ← tf.cumsum(W)                                 # cumulative sum; last entry must equal 1
    coin_flips ← tf.random.uniform((f,), 0, 1)
    indices ← tf.searchsorted(C, coin_flips)
    return N[indices]

Functional Traverse is listed in Algorithm 1. It accepts: a batch of nodes; a list of fanout values $F$ (e.g., $F = [3, 5]$ samples 3 neighbors per $u \in B$, then 5 neighbors for each of those); and, more notably, two functions: ACCUMULATEFN and BIASFN. These functions will be called by the functional on every node visited along the traversal and will be passed relevant information (e.g., the path taken from the root seed node). Custom settings of these functions allow recovering wide classes of graph learning methods. At a high level, our functional can be used in the following manner: 1. Construct the model & initialize parameters (e.g., to random). Define ACCUMULATEFN and BIASFN. 2. Repeat (many rounds): i. Reset accumulation information (from the previous round) and then sample a batch $B \subset V$. ii. Invoke Traverse on ($B$, ACCUMULATEFN, BIASFN), which invokes the FNs, allowing the first to accumulate information sufficient for running the model and estimating an objective. iii. Use the accumulated information to: run the model, estimate the objective, apply a learning rule (e.g., SGD). ACCUMULATEFN is a function used to track the information necessary for computing the model and the objective function. For instance, an implementation of DeepWalk (Perozzi et al., 2014) on top of GTTF specializes ACCUMULATEFN to measure an estimate of the sampled softmax likelihood of nodes' positional distribution, modeled as a dot product of node embeddings. On the other hand, GCN (Kipf & Welling, 2017) on top of GTTF uses it to accumulate a sampled adjacency matrix, which it passes to the underlying model (e.g., a 2-layer GCN) as if this were the full adjacency matrix. BIASFN is a function that customizes the sampling procedure for the stochastic transitions. If provided, it must yield a probability distribution over nodes, given the current node and the path that led to it. If not provided, it defaults to U, transitioning to any neighbor with equal probability. It can be defined to read edge weights, if they denote importance, or, more intricately, used to parameterize a second-order Markov chain (Grover & Leskovec, 2016) or to use neighborhood attention to guide sampling (Veličković et al., 2018), as discussed in the Appendix. (Our pseudo-code displays the traversal starting from one node rather than a batch only for clarity, as our actual implementation is vectorized: e.g., u would be a vector of nodes, T would be a 2D matrix with each row containing the transition path preceding the corresponding entry in u, etc. Refer to the Appendix and code.) 3.3 SOME SPECIALIZATIONS OF ACCUMULATEFN & BIASFN
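As one illustration, a DeepWalk-style specialization only needs ACCUMULATEFN to record (seed, visited-node) pairs for a sampled-softmax loss, with BIASFN left at its uniform default. The NumPy sketch below is ours, written for exposition; the function signatures are assumptions, and the released GTTF implementation is vectorized and TensorFlow-based.

```python
import numpy as np

def traverse(u, path, fanouts, A_hat, deg, accumulate_fn, bias_fn, rng):
    """Single-walker sketch of Algorithm 1 (the real GTTF is vectorized)."""
    if not fanouts:
        return
    f, rest = fanouts[0], fanouts[1:]
    w = bias_fn(path, u, deg)
    if w.sum() == 0:
        return
    w = w / w.sum()
    # Inverse-CDF sampling of f neighbors from the CompactAdj row A_hat[u, :deg[u]].
    idx = np.searchsorted(np.cumsum(w), rng.random(f))
    for k in A_hat[u, :deg[u]][idx]:
        accumulate_fn(path + [u], int(k), f)
        traverse(int(k), path + [u], rest, A_hat, deg, accumulate_fn, bias_fn, rng)

def uniform_bias(path, u, deg):
    # Default BIASFN: uniform weights over u's neighbors.
    return np.ones(deg[u])

# DeepWalk-style ACCUMULATEFN: record (seed, visited-node) pairs, which would
# then feed a sampled softmax over dot products of node embeddings.
pairs = []
def deepwalk_accumulate(path, v, f):
    pairs.append((path[0], v))
```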
The authors propose a "meta-algorithm" for approximating various graph representation learning schemes: generate batches of random trees with fixed fanout (and possibly biased probabilities of selecting different edges), and use them to accumulate information to approximate operations on the graph. The idea is beautifully simple, and generalizes running independent random walkers, an approach that is used in deriving many related algorithms. The biasing and accumulation functions are user provided, and the authors show how different choices of these functions can be used to approximate a number of graph representation learning schemes. The authors also provide a software framework, though it was inaccessible at review due to anonymization. Experiments show the approach is much more scalable than competing approaches (though, to be fair, some of the competition was not targeting scalability).
A Truly Constant-time Distribution-aware Negative Sampling
1 INTRODUCTION . Neural networks (NNs) have successfully pushed the boundaries of many application tasks, such as image or text classification (Wang et al., 2017; Yao et al., 2019), speech recognition (Dong et al., 2018), and recommendation systems (Zhang et al., 2015; Medini et al., 2019). Many hard AI problems are currently modeled as massive multiclass or multilabel problems, leading to drastic improvements over prior work. For example, popular NLP models predict the best word given the full context observed so far; such models are becoming the state of the art. Recommendation systems and related information retrieval (IR) problems are classical examples of machine learning with outrageously large outputs (Medini et al., 2019; Jain et al., 2019). In IR, given the user query, the task is to predict a few relevant documents (or products) from among hundreds of millions of possible documents, a typical machine learning problem with a massive output space. Owing to the significance of the problem, machine learning with large output spaces, alternatively known as extreme classification, is a field in itself (Bengio et al., 2019). A large number of classes naturally brings a new set of computational and memory challenges. Fortunately, with access to powerful Graphics Processing Units (GPUs) (Owens et al., 2008), the training of large models has been heavily accelerated. That is because GPUs have a unique advantage for matrix multiplication, which usually requires a cubic-time algebraic operation ($O(N^3)$) and is the major and costly building block of NN computations. However, the number of concurrent operations required in the large matrix multiplications for classification with an extensive number of classes has reached a limit for further speedups, even using GPUs. 1.1 NEGATIVE SAMPLING . The common approach to address this challenge is known as negative sampling (Pennington et al., 2014; Jean et al., 2014; Rawat et al., 2019; Mikolov et al., 2013b). In negative sampling, we only sample a small subset of classes for each input and compute the softmax and cross-entropy function over this subset, which usually includes the positive (true) class and a small set of negative (false) classes. Negative sampling scales down the computations in the most cumbersome last layer, thereby making training efficient. However, approximating the full softmax with a small sub-sample results in poor convergence if the negative samples are not chosen appropriately. For instance, take the example of a recommendation system (predicting products relevant to a query) with a large number of products. If the input query is 'Nike Running Shoes', the true loss concentrates on a specific small number of confusing ('hard') negative classes, like 'Adidas Running Shoes'. Since the number of classes is huge, random sampling is unlikely to identify this hard negative class. Other heuristics, like sampling frequent classes as negatives, are also unlikely to find these hard negatives most of the time. Clearly, without discriminating between closely related negative samples, the classifier cannot achieve good accuracy. Our experiments on recommendation datasets clearly indicate this sub-optimality of current negative sampling heuristics. If there existed a way to sample the subset of confusing classes from this skewed distribution, training progress would be greatly accelerated.
However, as evident from the example, such a ground-truth distribution depends on the input sample and the current model parameters. Moreover, this distribution varies significantly as training progresses. Consider the same query 'Nike Running Shoes': initially, when the network has not learned anything and has random weights, all classes are equally confusing. Thus, uniform sampling is optimal initially, as the network has just started to learn. As training progresses, the network's belief starts concentrating on a few classes; at this point, a negative sample of, say, 'baby toys' is not at all useful, because the network has already learned to tell them apart. The sampling distribution keeps changing, often drastically, as training progresses. To the best of our knowledge, there does not exist any statistical sampling scheme and implementation for adaptive negative sampling where the cost of maintaining and updating the distribution, per iteration, is $O(1)$ (independent of the number of classes). This is because the input, the current true class, and the parameter updates all change the sampling weights in every iteration. It is widely assumed that no such sampling scheme exists, and hence several heuristic alternatives have been proposed. The first set of alternatives uses a static distribution. The most popular ones, implemented in TensorFlow, assume a static distribution such as one based on the frequency of classes; uniform sampling is another popular choice. Learning-based alternatives have also been proposed (Bamler & Mandt, 2020), where a machine learning generator predicts (or generates) the negative samples. The sampler is then solving the same hard problem, prediction over a large number of classes, as a sub-routine. Most importantly, since the sampling distribution for the same data point shifts drastically throughout training, ML models are likely to suffer. Negative sampling alternatives try to balance the sampling cost with quality. So far, negative sampling methods other than those based on static sampling have failed to demonstrate any training-time improvement over an optimized full-softmax implementation on GPUs. Static sampling strategies are known to be fast but lead to poor accuracy. With current alternatives, the cost of improving sample quality does not seem worth it compared to the GPU acceleration of the full softmax. In this paper, we change this. Our work provides a truly constant-time adaptive sampling scheme utilizing recent advances in locality sensitive sampling (Charikar & Siminelakis, 2017; Spring & Shrivastava, 2017a). More impressively, we provide an efficient implementation of our proposal on CPU, which outperforms TensorFlow's implementation of softmax and other negative sampling strategies on some of the best available GPUs (V100) in terms of wall-clock training time. Summary of contributions: 1) We propose two efficient schemes for sampling 'hard' negatives where the negative sampling distribution provably adapts to the changing parameters and the data instance; furthermore, the sampling cost is provably constant (independent of the number of classes). 2) We show that our technique is not only provably adaptive but also practical. We provide an efficient CPU implementation, in C++, of our negative sampling approach.
We demonstrate the effectiveness of a truly constant-time negative sampler by showing that our implementation significantly outperforms standard TensorFlow on a V100 GPU in wall-clock speed, while retaining accuracy. 3) We provide a rigorous evaluation of our proposal, with its efficient implementation, against full softmax and popular approximations like sampled softmax, frequency-based sampled softmax, top-K activation softmax, differentiated softmax (D-Softmax), and Noise Contrastive Estimation (NCE). We report time-wise and iteration-wise precision on large recommendation datasets like Amazon670K and WikiLSH-325K, and on the popular natural language processing dataset, the Text8 corpus. 1.2 LSH-BASED HASH TABLES . In this section, we briefly describe recent developments in using locality sensitive hashing for sampling and estimation (Chen et al., 2019b; Spring & Shrivastava, 2017a; Charikar & Siminelakis, 2017; Spring & Shrivastava, 2017b). Locality Sensitive Hashing (LSH) (Indyk & Motwani, 1998; Indyk & Woodruff, 2006) is a widely used paradigm for large-scale similarity search and nearest-neighbor search. LSH is a family of hash functions with the unique property that vectors 'close' with respect to some distance metric are more likely to have the same hash code than vectors that are 'far' from each other. Formally, one sufficient condition for a hash family $\mathcal{H}$ to be an LSH family is that the collision probability $\Pr_{\mathcal{H}}(h(x) = h(y))$ is a monotonically increasing function of the similarity:
$$\Pr_{\mathcal{H}}(h(x) = h(y)) = f(\mathrm{Sim}(x, y)), \qquad (1)$$
where $f$ is a monotonically increasing function. The idea is to use the hash value of $x$, i.e., $h(x)$, to generate the key of $x$ in a hash table. We first initialize $L$ hash tables by constructing a meta-LSH hash function using $K$ independent hash functions for each of them; for details, see (Andoni & Indyk, 2004). There are three major steps. Pre-processing phase: given a dataset of size $n$, we first insert all the data points into the hash tables using the meta-LSH formed by concatenating $K$ independent LSH hash functions. We only store the index/pointer of each data point in the hash tables instead of the entire vector. The cost of an addition is $K \times L$ hash computations followed by $L$ insertions into the buckets. Query phase: during the query phase, we use the same meta-LSH hash to compute the hash codes for the query. We then probe the corresponding bucket of each table and retrieve samples from it. The union of the candidates from all hash tables constitutes the samples for the particular query. Update phase: if an existing element in the database is updated, we can delete it from the hash table and re-add it. The cost is equivalent to twice the insertion cost of an element, which is $2 \times K \times L$. 1.3 ADAPTIVE SAMPLING VIEW OF LSH Let $p_{qx}$ denote the probability of retrieving $x$ from the dataset when queried with a given query $q$. In (Indyk & Motwani, 1998), it was shown that for a $(K, L)$-parametrized LSH algorithm, the precise form is $p_{qx} = 1 - (1 - \alpha^K)^L$, where $\alpha$ is the collision probability of query $q$ and $x$ under the given LSH function, i.e., $\alpha = \Pr_{\mathcal{H}}(h(x) = h(q))$. $p_{qx}$ is monotonic in $\alpha$, which is in turn monotonic in the similarity between the query $q$ and the data element $x$. Note that the similarity measure depends on the LSH function in use. See (Spring & Shrivastava, 2017a) for details.
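To see how sharply $p_{qx}$ adapts to similarity, here is a quick numeric check for an assumed setting $(K, L) = (6, 32)$; these parameter values are illustrative, not the paper's.

```python
# p_qx = 1 - (1 - alpha**K)**L with illustrative (K, L) = (6, 32).
K, L = 6, 32
for alpha in (0.9, 0.7, 0.5):   # collision probability; higher = more similar to q
    print(alpha, 1 - (1 - alpha**K) ** L)
# alpha = 0.9 -> p ~ 1.000; alpha = 0.7 -> p ~ 0.982; alpha = 0.5 -> p ~ 0.396
```

Elements very similar to the query are retrieved almost surely, while dissimilar ones are retrieved rarely, which is precisely the adaptive 'hard negative' behavior sought above.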
Constant-time sampling: it should be noted that the cost of sampling is the cost of querying, which is only $K \times L$. This sampling cost is independent of the number of elements in the data. Clearly, the probability $p_{qx}$ depends on the query, and every element $x$ in the data has a different sampling probability. Thus, even though our sampling scheme induces $n$ different sampling probabilities every time the query $q$ is changed, the sampling cost is independent of $n$, and is in fact constant if $K$ and $L$ are small constants. All of this assumes a one-time $O(n)$ preprocessing. This efficient sampling view of LSH has been used in a wide range of applications, such as deep neural networks (Spring & Shrivastava, 2017b; Chen et al., 2019a), kernel density estimation (Charikar & Siminelakis, 2017), record linkage (Chen et al., 2018), and optimization (Chen et al., 2019c). Recent advances in fast inner-product search using asymmetric LSH have made it possible to sample large inner products (Shrivastava & Li, 2014). Effectively, given a query $q$, it is possible to sample an element $x$ from the database with probability proportional to a monotonic function of the inner product, $f(q^{\top}x)$, where $f$ is a monotonically increasing function.
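Putting the three phases together, here is a small sketch of an LSH-table sampler; it uses signed random projections (SimHash) purely for illustration, whereas the paper relies on asymmetric LSH for inner products, and all class and function names are our own assumptions.

```python
import numpy as np

class LSHSampler:
    """Sketch of a (K, L)-parametrized LSH sampler over class vectors.

    K sign bits per table, L tables; signed random projections stand in
    for the paper's asymmetric inner-product LSH.
    """
    def __init__(self, dim, K=6, L=32, rng=None):
        self.rng = np.random.default_rng() if rng is None else rng
        self.planes = self.rng.standard_normal((L, K, dim))
        self.tables = [dict() for _ in range(L)]

    def _codes(self, x):
        # K sign bits per table, packed into one integer key per table.
        bits = (np.einsum('lkd,d->lk', self.planes, x) > 0).astype(int)
        return (bits * (1 << np.arange(self.planes.shape[1]))).sum(axis=1)

    def index(self, class_vectors):
        # Pre-processing phase: store class IDs, not the vectors themselves.
        for c, w in enumerate(class_vectors):
            for table, key in zip(self.tables, self._codes(w)):
                table.setdefault(key, []).append(c)

    def sample(self, query):
        # Query phase: union of colliding classes; cost O(K*L), not O(n).
        candidates = set()
        for table, key in zip(self.tables, self._codes(query)):
            candidates.update(table.get(key, ()))
        return candidates
```

After a gradient step changes a class vector, that class would be deleted and re-inserted (the update phase), at cost $2 \times K \times L$ per class.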
When the number of classes is very large, computing the softmax for classification (e.g., in backpropagation) is computationally costly. Approaches based on negative sampling have been used in the literature to alleviate this problem. However, most existing approaches are (argued to be) either inaccurate or computationally costly. This paper proposes to use the well-known LSH (locality sensitive hashing) method to address this problem. In particular, two variants, LSH Label and LSH Embedding, are shown to speed up training, in terms of time needed to converge, compared with a number of baseline methods over three large-scale datasets.
The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods
1 INTRODUCTION . Understanding the success of deep convolutional neural networks on images remains challenging, because images are high-dimensional signals and deep neural networks are highly non-linear models with a substantial number of parameters: yet the curse of dimensionality is seemingly avoided by these models. This problem has attracted considerable interest from the machine learning community. One approach taken by several authors (Mairal, 2016; Li et al., 2019; Shankar et al., 2020; Lu et al., 2014) has been to construct simpler models with more tractable analytical properties (Jacot et al., 2018; Rahimi and Recht, 2008) that still share various elements with standard deep learning models. Those simpler models are based on kernel methods with a particular choice of kernel that provides a convolutional representation of the data. In general, these methods are able to achieve reasonable performance on the CIFAR-10 dataset. However, despite their simplicity compared to deep learning models, it remains unclear which of the multiple ingredients they rely on are essential. Moreover, due to their computational cost, it remains open to what extent they achieve similar performance on more complex datasets such as ImageNet. In this work, we show that an additional implicit ingredient, common to all those methods, consists in a data-dependent feature extraction step that makes the convolutional kernel data-driven (as opposed to purely handcrafted) and is key to obtaining good performance. Data-driven convolutional kernels compute a similarity between two images $x$ and $y$, using both their translation invariances and statistics from the training set of images $\mathcal{X}$. In particular, we focus on similarities $K$ that are obtained by first standardizing a representation $\Phi$ of the input images and then feeding it to a predefined kernel $k$:
$$K_{k, \Phi, \mathcal{X}}(x, y) = k(L\Phi x, L\Phi y), \qquad (1)$$
where a rescaling and shift is (potentially) performed by a diagonal affine operator $L = L(\Phi, \mathcal{X})$, which is mainly necessary for the optimization step (Jin et al., 2009): it is typically a standardization. The kernel $K(x, y)$ is said to be data-driven if $\Phi$ depends on the training set $\mathcal{X}$, and data-independent otherwise. This is the case, for instance, if a dictionary is computed from the data (Li et al., 2019; Mairal, 2016; Mairal et al., 2014) or a ZCA (Shankar et al., 2020) is incorporated in the representation. The convolutional structure of the kernel $K$ can come either from the choice of the representation $\Phi$ (convolutions with a dictionary of patches (Coates et al., 2011)), from the design of the predefined kernel $k$ (Shankar et al., 2020), or from a combination of both (Li et al., 2019; Mairal, 2016). One of the goals of this paper is to clearly state that kernel methods for vision need to be data-driven, and that this is explicitly responsible for their success. We thus investigate, via a shallow model, to what extent this common step is responsible for the success of those methods. Our methodology is based on ablation experiments: we would like to measure the effect of incorporating data while reducing other side effects related to the design of $\Phi$, such as the depth of $\Phi$ or the implicit bias of a potential optimization procedure. Consequently, we focus on one-hidden-layer neural networks of arbitrary width, which have favorable properties, like the ability to be universal approximators under non-restrictive conditions.
The output linear layer is optimized for a classification task, and we consider first layers which are predefined and kept fixed, similarly to Coates et al. (2011). We will see below that simply initializing the weights of the first layer with whitened patches leads to a significant improvement in performance compared to a random initialization, a wavelet initialization, or even a learning procedure. This patch initialization is used by several works (Li et al., 2019; Mairal, 2016) and is implicitly responsible for their good performance. Other works rely on a whitening step followed by very deep kernels (Shankar et al., 2020), yet we noticed that this was not sufficient in our context. Here, we also try to understand why incorporating whitened patches is helpful for classification. Informally, this method can be thought of as one of the simplest possible in the context of deep convolutional kernel methods, and we show that the depth or the non-linearities of such kernels play a minor role compared to the use of patches. In our work, we decompose and analyze each step of our feature design on gold-standard datasets and find that a method based solely on patches and simple non-linearities is actually a strong baseline for image classification. We investigate the effect of patch-based pre-processing for image classification through a simple baseline representation that does not involve learning (up to a linear classifier), on both the CIFAR-10 and ImageNet datasets: the path from CIFAR-10 to ImageNet had never been explored in this context until now. Thus, we believe our baseline to be of high interest for understanding convolutional kernel methods on ImageNet, which almost systematically rely on a patch (or patch-descriptor) encoding step. Indeed, this method is straightforward and involves limited ad-hoc feature engineering compared to deep learning approaches: here, contrary to (Mairal, 2016; Coates et al., 2011; Recht et al., 2019; Shankar et al., 2020; Li et al., 2019), we employ modern techniques that are necessary for scalability (from thousands to millions of samples) but can still be understood through the lens of kernel methods (e.g., convolutional classifier, data augmentation, ...). Our work allows us to understand the relative improvement of such an encoding step, and we show that our method is a challenging baseline for classification on ImageNet: we outperform by a large margin the classification accuracy of former attempts to get rid of representation learning on this large-scale dataset. While the literature provides a detailed analysis of the behavior of a dictionary of patches for image compression (Wallace, 1992), texture synthesis (Efros and Leung, 1999), and image inpainting (Criminisi et al., 2004), we have limited knowledge and understanding of it in the context of image classification. The behavior of such dictionaries of patches in classification methods is still not well understood, despite often being the very first component of many classic vision pipelines (Perronnin et al., 2010; Lowe, 2004; Oyallon et al., 2018b). Here, we propose a refined analysis: we define a Euclidean distance between patches and show that the decision boundary between image classes can be approximated using a rough description of the image-patch neighborhoods; this is implied, for instance, by the famous low-dimensional manifold hypothesis (Fefferman et al., 2016).
Our paper is structured as follows: first, we discuss related work in Sec. 2. Then, Sec. 3 explains precisely how our visual representation is built. In Sec. 4, we present experimental results on the vision datasets CIFAR-10 and the large-scale ImageNet. The final Sec. 4.3 is a collection of numerical experiments to better understand the dictionary of patches that we used. Our code, as well as commands to reproduce our results, is available here: https://github.com/louity/patches. 2 RELATED WORK . The seminal works by Coates et al. (2011) and Coates and Ng (2011) study patch-based representations for classification on CIFAR-10. They set the first baseline for a single-layer convolutional network initialized with random patches, and they show it can achieve a non-trivial performance (~80%) on the CIFAR-10 dataset. Recht et al. (2019) published an implementation of this technique and conducted numerous experiments with hundreds of thousands of random patches, improving the accuracy (~85%) on this dataset. However, both works lack two key ingredients: an online optimization procedure (which allows scaling up to ImageNet) and a well-designed linear classifier (we propose a factorization of our linear classifier). Recently, (Li et al., 2019; Shankar et al., 2020) proposed to handcraft kernels, combined with deep learning tools, in order to obtain high performance on CIFAR-10. Those performances match standard supervised methods (~90%) which involve end-to-end learning of deep neural networks. Note that the line of work (Li et al., 2019; Shankar et al., 2020; Mairal, 2016) employs a well-engineered combination of a patch-extracted representation and a cascade of kernels (possibly some neural tangent kernels). While their works suggest that patch extraction is crucial, the relative improvement due to basic hyper-parameters, such as the number of patches or the choice of classifier, is unclear, as is the limit of their approach on more challenging datasets. We address those issues. From a kernel-methods perspective, a dictionary of random patches can be viewed as the building block of a random features method (Rahimi and Recht, 2008) that makes kernel methods computationally tractable. Rudi et al. (2017) provided convergence rates and released an efficient implementation of such a method. However, the previously mentioned kernel methods (Mairal, 2016; Li et al., 2019; Shankar et al., 2020) have not, to our knowledge, been tested on ImageNet. Simple methods involving solely a single layer of features have been tested on the ImageNet-2010 dataset (as one can see on the ImageNet-2010 leaderboard, http://image-net.org/challenges/LSVRC/2010/results; the accuracies on ImageNet-2010 and ImageNet-2012 are comparable), using for example SIFT, color histogram, and Gabor texture encodings of the image with K-nearest neighbors; yet there is a substantial gap in accuracy that we attempt to fill in this work on ImageNet-2012 (or simply ImageNet). We also note that CNNs with random weights have been tested on ImageNet, yielding low accuracies (~20% top-1, (Arandjelovic et al., 2017)). The Scattering Transform (Mallat, 2012) is also a deep non-linear operator that does not involve representation learning; it has been tested on ImageNet (~45% top-5 accuracy (Zarka et al., 2019)) and CIFAR-10 (~80%, (Oyallon and Mallat, 2015)) and is related to the HoG and SIFT transforms (Oyallon et al., 2018a).
Some works also directly study patch encoders that achieve competitive accuracy on ImageNet but involve deep cascades of layers that are difficult to interpret (Oyallon et al., 2017; Zarka et al., 2019; Brendel et al., 2019). Here, we focus on shallow classifiers. 3 METHOD . We first introduce preliminary notation to describe an image. A patch $p$ of size $P$ of a larger image $x$ is the restriction of that image to a square domain of area $P^2$. We denote by $N^2$ the size of the natural image $x$ and require that $P \le N$. Hence, for a spatial index $i$ of the image, $p_{i,x}$ represents the patch of image $x$ located at $i$. We further introduce the collection of all overlapping patches of that image, denoted by $\mathcal{P}_x = \{p_{i,x},\ i \in I\}$, where $I$ is a spatial index set such that $|I| = (N - P + 1)^2$. Fig. 1 gives an overview of our classification pipeline, which consists of three steps: an initial whitening step applied to a dictionary $D$ of random patches, followed by a nearest-neighbor quantization of image patches via $D$, and finally a spatial averaging. Whitening We describe the single pre-processing step that we use on our image data, namely a whitening procedure on patches. Here, we view natural image patches of size $P^2$ as samples from a random vector with mean $\mu$ and covariance $\Sigma$. We then consider whitening operators which act at the level of each image patch by first subtracting its mean $\mu$ and then applying the linear transformation $W = (\lambda I + \Sigma)^{-1/2}$ to the centered patch. The additional whitening regularization with parameter $\lambda$ is used to avoid ill-conditioning effects. Figure 1: Our classification pipeline, described synthetically to explain how we build the representation $\Phi(x)$ of an input image $x$.
[Figure 1 panels: the input image $x$ is split into overlapping patches; each patch is assigned its Q-nearest neighbors in the whitened dictionary $D$, producing a binary code; the codes are spatially average-pooled into the representation $\Phi(x)$.]

The whitening operation is defined up to an isometry, but the Euclidean distance between whitened patches (i.e., the Mahalanobis distance (Chandra et al., 1936)) is not affected by the choice of such an isometry (choices leading to PCA, ZCA, ...), as discussed in Appendix A. In practice, the mean and covariance are estimated empirically from the training set to construct the whitening operators. For the sake of simplicity, we only consider whitened patches and, unless explicitly stated, we assume that each patch $p$ is already whitened, which holds in particular for the collection of patches in $\mathcal{P}_x$ of any image $x$. Once this whitening step is performed, the Euclidean distance over patches is approximately isotropic and is used in the next section to represent our signals. Q-Nearest Neighbors on patches The basic idea of this algorithm is to compare the distances between each patch of an image and a fixed dictionary of patches $D$, of size $|D|$, the number of patches extracted. Note that we also propose a variant where we simply use a soft-assignment operator. For a fixed dataset, this dictionary $D$ is obtained by uniformly sampling patches from images over the whole training set. We augment $D$ into $\cup_{d \in D} \{d, -d\}$ because this makes the dictionary of patches contrast invariant, and we observe that it leads to better classification accuracies; we still refer to it as $D$. An illustration is given by Fig. 2. Once the dictionary $D$ is fixed, for each patch $p_{i,x}$ we consider the set $C_{i,x}$ of pairwise distances, $C_{i,x} = \{\|p_{i,x} - d\|,\ d \in D\}$. For each whitened patch, we encode the Q-nearest neighbors of $p_{i,x}$ in the set $D$, for some $Q \in \mathbb{N}$. More formally, letting $\tau_{i,x}$ be the $Q$-th smallest element of $C_{i,x}$, we define the Q-nearest-neighbors binary encoding as follows, for $(d, i) \in D \times I$:
$$\phi(x)_{d,i} = \begin{cases} 1, & \text{if } \|p_{i,x} - d\| \le \tau_{i,x} \\ 0, & \text{otherwise.} \end{cases} \qquad (2)$$
Eq. (2) can be viewed as a Vector Quantization (VQ) step with hard assignment (Coates and Ng, 2011). The representation $\phi$ encodes the patch neighborhood in a subset of randomly selected patches and can be seen as a crude description of the topological geometry of the image patches. Moreover, it allows viewing the distance between two images $x, y$ as a Hamming distance between the patch-neighborhood encodings:
$$\|\phi(x) - \phi(y)\|^2 = \sum_{i,d} \mathbf{1}_{\phi(x)_{d,i} \neq \phi(y)_{d,i}}. \qquad (3)$$
In order to reduce the computational burden of our method, we perform an intermediary average-pooling step. Indeed, we subdivide $I$ into squared overlapping regions $I_j \subset I$, leading to the representation $\Phi$ defined, for $d \in D$ and index $j$, by:
$$\Phi(x)_{d,j} = \sum_{i \in I_j} \phi(x)_{d,i}. \qquad (4)$$
Hence, the resulting kernel is simply given by $K(x, y) = \langle \Phi(x), \Phi(y) \rangle$. Implementation details can be found in Appendix B. The next section describes our classification pipeline, as we feed our representation $\Phi$ to a linear classifier on challenging datasets.
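As a rough illustration of the two computational steps above, here is a short NumPy sketch of patch whitening and the Q-NN encoding of Eq. (2); the function names are ours, the dictionary `D` is assumed to be already whitened and sign-augmented, and the brute-force distance computation is written for clarity, not efficiency.

```python
import numpy as np

def whiten(patches, lam=1e-2):
    """Whiten flattened patches with W = (lam*I + Sigma)^(-1/2); a sketch."""
    mu = patches.mean(axis=0)
    X = patches - mu
    Sigma = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(Sigma + lam * np.eye(Sigma.shape[0]))
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # ZCA choice of isometry
    return X @ W, mu, W

def qnn_encode(patches_w, D, Q=10):
    """Eq. (2): binary Q-nearest-neighbor code of each patch against D."""
    # Squared Euclidean distances, shape (num_patches, |D|).
    d2 = ((patches_w[:, None, :] - D[None, :, :]) ** 2).sum(-1)
    # Q-th smallest distance per patch (tau_{i,x}), kept as a column.
    tau = np.partition(d2, Q - 1, axis=1)[:, Q - 1:Q]
    return (d2 <= tau).astype(np.float32)
```

The binary codes would then be summed over the pooling regions $I_j$ as in Eq. (4) before being fed to the linear classifier.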
This paper proposes a strong non-learned, kernel-based baseline for ImageNet classification. The proposed baseline (which can be interpreted as a vector quantization) shows results (88.5) comparable to AlexNet (89.1) in CIFAR-10 top-1 accuracy. The ImageNet result (39.4) shows that it is still challenging to classify images without deep features, but about 40% is an impressive baseline without any learning (e.g., this result is almost comparable to the BagNet top-5 error).
Deep Convolution for Irregularly Sampled Temporal Point Clouds
1 INTRODUCTION . Many real-world problems feature observations that are sparse and irregularly sampled in both space and time: weather stations scattered across the landscape reporting at variable rates without synchronization; citizen-science applications producing observations at the whim of individuals; or even opportunistic reports of unit positions in search-and-rescue or military operations. These sparse and irregular observations naturally map to a set of discrete space-time points, forming a spatio-temporal point cloud representing the underlying process. Critically, the dynamics of these points are often highly related to those of the other points in their spatio-temporal neighborhood. Modeling spatio-temporal point clouds is difficult with standard deep networks, which assume observations are dense and regular: at every grid location in CNNs, every time step in RNNs, or both for spatio-temporal models like Convolutional LSTMs (Xingjian et al., 2015). While there has been work examining irregularly sampled data through time (Rubanova et al., 2019; Shukla & Marlin, 2018) and in space (Wu et al., 2019), modeling both simultaneously has received little attention (Choy et al., 2019). This is due in part to the difficulty of scaling prior solutions across both space and time. For instance, voxelization followed by sparse convolution (Choy et al., 2019) or dense imputation (Shukla & Marlin, 2018) now faces a multiplicative increase in the number of cells. Rather than forcing irregular data into dense representations, an emerging line of research treats spatial point clouds as first-class citizens (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018). Several works directly extend 2D convolutions to point clouds (Simonovsky & Komodakis, 2017; Wang et al., 2019; Hermosilla et al., 2018), with (Wu et al., 2019) being the first to allow efficient exact computation of convolution with dozens of layers. In this work, we build on this line of research to model spatio-temporal point clouds. Specifically, we extend the work of Wu et al. (2019) with an additional module to reason about point representations through time. Our new model, TemporalPointConv (TPC), is a simple but powerful extension that can learn from an arbitrary number of space-time points. Each layer in TemporalPointConv updates the representation of each point by applying two operators in sequence: one that considers the spatial neighborhood in a narrow temporal window, and another that models how this spatial representation changes over time. By factorizing the representation update into separate spatial and temporal operators, we gain significant modeling flexibility. Further, by operating directly on point clouds, we can make predictions at arbitrary space-time locations, regardless of the distribution of the observations. We demonstrate TemporalPointConv on two distinct problems: 1) predicting future states of a custom Starcraft II environment involving battles between variable-sized groups, and 2) predicting the weather at stations distributed throughout the state of Oklahoma. Further, we show the utility of these networks in identifying damaged or anomalous weather sensors after training exclusively on the associated prediction problem.
The results show that TemporalPointConv outperforms both state-of-the-art set functions and a discrete sparse convolution algorithm in terms of raw performance, ability to detect anomalies, and generalization to previously unseen input and query distributions. 2 RELATED WORK . Xingjian et al. (2015) give an early approach to spatio-temporal modeling via convolution by incorporating a standard convolutional structure into the latent memory of an LSTM. This approach is appropriate for situations where the data is regularly sampled in both space and time, which differs from our setting. Interaction networks (Battaglia et al., 2016) and related approaches allow modeling sets of interacting objects or points over time, with an original motivation to model physics processes. These models are more flexible in their modeling of spatial relationships among points; however, they assume uniform temporal sampling, which is violated in our setting. A significant amount of work on spatio-temporal modeling for non-uniform spatial sampling uses Graph Convolutional Networks (GCNs) to model spatial interactions. For example, Li et al. (2018b) used a GCN followed by an RNN, and Yu et al. (2018) used GCNs for spatial correlation and temporal convolution for temporal correlations. These methods require sampling at uniform temporal intervals and do not deal with generalization outside the fixed given graph. Rather, our approach generalizes to any spatio-temporal point outside of the training data. Yao et al. (2019) introduce an attention model to deal with dynamic spatial relationships; however, this is only possible for the dense CNN version in their paper, whereas their version with irregular spatial sampling utilizes a GCN and shares the same issues as the above GCN approaches. PointNet (Qi et al., 2017a) sparked significant interest in networks for 3D point cloud processing. A number of networks have been proposed (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018), with the highest-performing using either sparse convolutional networks (Graham & van der Maaten, 2018; Choy et al., 2019) or point convolutional networks (Wu et al., 2019; Thomas et al., 2019). Set networks, such as DeepSets (Zaheer et al., 2017b), are similar to PointNet (Qi et al., 2017a), with neither explicitly considering the neighborhood information of elements/points, making them less powerful than convolutional methods. Recently, Horn et al. (2020) proposed a set-network approach for non-uniform time-series prediction, which encodes time into the feature vector of points. Our experiments show that this approach is outperformed by our convolutional method. Sparse convolutional networks are similar to dense volumetric convolutional networks that use a regular grid to discretize space-time, but they are computed only at locations with occupied points. Minkowski networks (Choy et al., 2019) are sparse convolutional networks that model spatio-temporal correlations by concatenating the spatial location and time of each point sample into a 4D tesseract. They are thus sensitive to the choice of an appropriate resolution for the discretization, since excess sparsity can result in empty neighborhoods and trivial convolutions, while too coarse a resolution may result in an inaccurate representation of the data. Furthermore, the approach has difficulty accounting for the case where points should be treated as moving entities themselves.
On the other hand, point convolutions discretize 3D volumetric convolutions on each point directly and hence easily generalize to the entire space under irregular sampling density. Early versions (Simonovsky & Komodakis, 2017; Hermosilla et al., 2018; Wang et al., 2019) require explicit discretization and hence cannot scale to large networks. Recently, PointConv (Wu et al., 2019) proposed an equivalent form that avoids explicit discretization and significantly improves scalability. However, so far it has been applied only to static point clouds. Our work builds on PointConv by extending it in the temporal direction and demonstrating that space-time convolutions can be effectively learned and used for modeling and anomaly detection. On the temporal side, much of the recent state of the art is based on point processes, which study time-series models from a statistical perspective (Du et al., 2016; Li et al., 2018a; Zuo et al., 2020; Zhang et al., 2019). These support irregular temporal sampling but generally do not consider the spatial correlation among points. 3 PROBLEM SETUP. We consider extrapolative tasks in which the value at new locations must be inferred from existing observations. Let P be a spatio-temporal point cloud with each individual point $p_j \in P$ defined as $p_j = (l_j, t_j, o_j)$, where $p_j$ exists at location $l_j$ at time $t_j$ and has associated features $o_j$ (e.g. temperature and humidity values for a weather station). Further, let Q be a set of query locations at which the model is to make predictions given P. For example, a forecasting model might be given queries $q_k = (l_k, t_k)$ for locations in the future and be tasked with predicting the corresponding features $o_k$ representing the desired properties. We place no restrictions on the regularity of either P or Q, so this corresponds to a setting where both input and output may be sparse and irregularly sampled through space and time. Further, query points may be in the future, the past, or concurrent with those in P, corresponding to weather forecasting, backcasting, or nowcasting respectively. We aim to train models that can accurately answer queries as represented via a training set of point-cloud/query-set pairs $D = \{(P_i, Q_i)\}_{i=1}^N$. 4 TEMPORAL POINTCONV ARCHITECTURE. Given a spatio-temporal point cloud containing points $p_j = (l_j, t_j, o_j)$, a Temporal PointConv layer is an operator that produces an updated point representation $p'_j = (l_j, t_j, o'_j)$ for each point. The updated feature representation $o'_j$ incorporates information from a spatio-temporal neighborhood around $p_j$. This is accomplished by applying two point-based convolutional operators in sequence for each point: first a spatial PointConv over points within a narrow temporal band, and then a temporal PointConv over points within a narrow spatial band. These Temporal PointConv layers can be stacked to arbitrary depth. Below we give background on PointConv and describe our model. 4.1 PRELIMINARIES: POINTCONV. PointConv is based on the idea of discretizing a continuous convolution on irregularly sampled points: $\text{Conv}(P, p_0; w, d(\cdot,\cdot)) = \sum_{p_i \in N_d(p_0)} \langle w(p_i - p_0), o_i \rangle$ (1), where P is a point cloud with features at each point, $w(\cdot)$ is a vector-valued weight function of the positional difference between a point $p_i$ in the neighborhood $N_d$ of a centroid $p_0$, defined by a metric d, and $o_i$ is the input feature vector at $p_i$.
$w(\cdot)$ can be learned with a neural network (Simonovsky & Komodakis, 2017). PointConv (Wu et al., 2019) introduces an equivalent form so that w does not need to be computed explicitly, saving computation and memory. This approach is flexible: since $w(\cdot)$ is a function, it can be applied to any point in the space of P, hence the convolution can be computed over any irregularly sampled neighborhood $N_d$. We note that this holds even when we do not have any features at $p_0$, since a neighborhood can still be found in this case and eq. (1) can still be used. Previously, PointConv has only been used in spatial domains in cases where $p_0$ has features associated with it. In this paper we generalize it to spatio-temporal neighborhoods and to points $p_0$ that are featureless query points. For expositional clarity, we denote PointConv as an operator that transforms a feature-augmented point cloud P into a new point cloud P′ consisting of points at target locations Q via eq. (1): $P' = \text{PointConv}(P, Q; d(\cdot,\cdot))$, where we omit Q if Q = P.
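To make eq. (1) concrete, below is a minimal NumPy sketch of a single-output-channel point convolution evaluated at one centroid. The radius-based neighborhood, the toy linear weight function, and all names here are illustrative assumptions; the actual PointConv layer parameterizes w with a learned MLP and uses the efficient equivalent form rather than materializing the weights as done here.

```python
import numpy as np

def point_conv(points, feats, center, weight_fn, radius):
    # points: (N, D) space-time coordinates; feats: (N, C) per-point features o_i
    # center: (D,) coordinates of the (possibly featureless) centroid p_0
    # weight_fn: maps (K, D) offsets (p_i - p_0) to (K, C) weights w(p_i - p_0)
    offsets = points - center                         # p_i - p_0
    mask = np.linalg.norm(offsets, axis=1) <= radius  # neighborhood N_d(p_0)
    w = weight_fn(offsets[mask])                      # (K, C)
    # sum over the neighborhood of inner products <w(p_i - p_0), o_i>
    return float(np.sum(w * feats[mask]))

# toy usage with a fixed random linear-tanh weight function (hypothetical)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))        # D = 3 (x, y, t), C = 4 feature channels
pts, fts = rng.normal(size=(50, 3)), rng.normal(size=(50, 4))
out = point_conv(pts, fts, np.zeros(3), lambda d: np.tanh(d @ W), radius=1.0)
```

A multi-channel output would simply stack one such weight function per output channel.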
This paper studies the problem of modeling spatio-temporal point clouds that are sampled at irregular points in space and time. It proposes the Temporal PointConv model, an extension of the PointConv model (Wu et al., 2019). In particular, PointConv computes a convolution by aggregating the features of points near a point p as the new feature of p. Temporal PointConv extends this by aggregating the features of points near p in both space and time in a two-step process: first weighting the aggregation by spatial distance and then weighting the aggregation by temporal distance.
AdaLead: A simple and robust adaptive greedy search algorithm for sequence design
1 INTRODUCTION. An important problem across many domains in biology is the challenge of finding DNA, RNA, or protein sequences which perform a function of interest at a desired level. This task is challenging for two reasons: (i) the map $\phi$ between sequences $X = \{x_1, \ldots, x_n\}$ and their biological function $y = \{y_1, \ldots, y_n\}$ is non-convex, and (ii) it has sparse support in the space of possible sequences $A^L$, which grows exponentially in the length of the sequence L for alphabet A. This map $\phi$ is otherwise known as a "fitness landscape" (de Visser et al., 2018). Currently, the most widely used practical approach in sequence design is "directed evolution" (Arnold, 1998), where populations of biological entities are selected through an assay according to their function y, with each iteration becoming more stringent in the selection criteria. However, this model-free approach relies on evolutionary random walks through the sequence space, and most attempted optimization steps (mutations) are discarded due to their negative impact on y. Recent advances in DNA sequencing and synthesis technologies allow large assays which query y for specific sequences x, with up to $10^5$ physical samples per batch (Barrera et al., 2016). This development presents an opening for machine learning to contribute by building better surrogate models $\phi' : X \to y$ which approximate the oracle $\phi$ that maps each sequence to its true function. We may use these models $\phi'$ as proxies of $\phi$ in order to generate and screen sequences in silico before they are sent to synthesis (Yang et al., 2019; Fox et al., 2007). While a large body of work has focused on building better local approximate models $\phi'$ on already published data (Otwinowski et al., 2018; Alipanahi et al., 2015; Riesselman et al., 2017; Sinai et al., 2017), more recent work addresses optimization in this setting (Biswas et al., 2018; Angermueller et al., 2020; Brookes & Listgarten, 2018; Brookes et al., 2019). Although synthesizing many sequences within a batch is now possible, because of the labor-intensive nature of the process only a handful of iterations of learning can be performed. Hence data is often collected in serial batches $b_i$, comprising data $D_t = \{b_0, \ldots, b_t\}$, and the problem of sequence design is generally cast as that of proposing batches so that we may find the optimal sequence $x^*_t = \arg\max_{x \in D_t} \phi(x)$ over the course of these experiments. In this paper, we focus our attention on ML-augmented exploration algorithms which use (possibly non-differentiable) surrogate models $\phi'$ to improve the process of sequence design. While the work takes place in an active learning setting, in which an algorithm may select samples to be labelled and data arrives in batches $b_i$, our primary objective is black-box optimization rather than improving the accuracy of the surrogate model. We define $E_\theta(D, \phi')$ to denote an exploration algorithm with parameters $\theta$, which relies on dataset D and surrogate model $\phi'$. When the context is clear, we will simply use E as shorthand. In most contexts, the sequence space is large enough that even computational evaluation is limited to a very small subset of possible options.
For this reason, we consider the optimization to be sample-restricted, not only in the number of queries to the ground-truth oracle but also in the number of queries to the surrogate model. (Among other reasons, this allows us to thoroughly study the algorithms on landscapes that can be brute-forced, simulating the common situation where the sequence space is very large compared to the available computing power.) The algorithm E may perform v sequence evaluations in silico for every sequence proposed. For example, v × B samples may be evaluated by the model before B samples are proposed for measurement. Ideally, E should propose strong sequences even when v is small; that is, the algorithm should not need to evaluate many sequences to arrive at a strong one. 2 CONTRIBUTIONS. In this study, we make three main contributions towards improving algorithms for sequence design: 1. To build on recent progress in biological sequence design, the research community needs good benchmarks and reference algorithm implementations against which to compare new methods. We implement an open-source simulation environment, FLEXS, that can emulate complex biological landscapes and can be readily used for training and evaluating sequence design algorithms. We hope that FLEXS will help ensure meaningful and reproducible research results and accelerate the process of algorithm development for ML-guided biological sequence design. 2. We introduce an abstracted oracle to allow the empirical study of exploration strategies, independent of the underlying models. This helps us understand relevant properties, such as the robustness and consistency of the algorithms. 3. Inspired by evolutionary and Follow-the-Perturbed-Leader approaches in combinatorial optimization, we propose a simple model-guided greedy approach, termed Adapt-with-the-Leader (ADALEAD). ADALEAD is simple to implement and is competitive with previous state-of-the-art algorithms. We propose ADALEAD as a strong, accessible baseline for testing sequence design algorithms. We show that, in general, simple evolutionary algorithms are strong benchmarks to compete against and should be included in future analyses of new methods. 3 EVALUATION. We evaluate the algorithms on a set of criteria designed to be relevant both to biological applicability and to the soundness of the algorithms considered (Purohit et al., 2018). We run the algorithms using FLEXS, where all of these algorithms and criteria evaluators are implemented. • Optimization: We let maximization be the objective. Most optimization algorithms operate under the assumption that critical information, such as the best possible $y^*$ or the set of all local maxima $\mathcal{M}$, is unknown. While it is reasonable to assume that the best sequence is the one with the highest fitness, this is not necessarily the case in reality. For instance, we might wish to bind a particular target, but binding it too strongly may be less desirable than binding it at a moderate level. As measurements of this criterion, we consider the maximum $y = \max_x \phi(x)$ over all sequences considered, as well as the cardinality $|S|$, where $S = \{x_i \mid \phi(x_i) > y_\tau\}$ and $y_\tau > 0$ is a minimum threshold value. It is noteworthy that we often do not know whether any solutions with $y > y_\tau$ exist; hence an algorithm finding many solutions is a sign of its strength.
• Robustness: A major challenge for input design in model-guided algorithms is that optimizing directly on the surrogate $\phi'$ can result in proposing a sequence x with large error, instead of approximating $x^*$ (e.g. if the proposed input x is far outside D). Additionally, while biological models $\phi$ may contain regularities that are universally shared, those regularities are not known. Hence, a desired property of the algorithm is that it is robust in the face of a poor model. • Consistency: The algorithm E should produce better-performing sequences if it has access to a higher-quality model $\phi'$. Additionally, we desire that the high-performing sequences proposed by the algorithm are diverse. Because a sequence may be disqualified for reasons which are unknown during the design phase, we would like to find distinct solutions which meet the optimality criteria. However, metrics for measuring diversity can be ambiguous, and we only focus on measuring diversity in a narrow sense. We measure diversity using $|S|$. When the ground-truth model can be fully enumerated to find its local optima (peaks) by brute force (i.e. we know the maxima $\mathcal{M}$), we can use the number of found maxima $|\mathcal{M}'_{y_\tau}|$ as a measure of diversity, where $\mathcal{M}'_{y_\tau} \subset \mathcal{M}$ represents the maxima found by the algorithm above fitness $y_\tau$. 4 RELATED WORK. Bayesian Optimization (BO). BO algorithms are designed to optimize black-box functions which are expensive to query. Importantly, these algorithms make use of the uncertainty of model estimates to negotiate exploration versus exploitation. In a pioneering study, Romero et al. (2013) demonstrated the use of BO for protein engineering. Many successful attempts have followed since (e.g. Gonzalez et al. (2015) and Yang et al. (2019)); however, in each of these cases a specific model of the design space is assumed in order to shrink the search space significantly. Otherwise, BO is known to perform poorly in higher-dimensional spaces (Frazier, 2018), and to our knowledge no general-purpose sequence design algorithm using BO has performed better than the models considered below. For this reason, while we implement variations of BO as benchmarks (see similar interpretations in Belanger et al. (2019)), we do not consider these implementations to be competitive standards. In our figures, we use the EI (Expected Improvement) acquisition function with an evolutionary sequence generator as our BO algorithm, and show comparisons with alternatives (on TF landscapes) in the supplement. Generative models. Another class of algorithms approaches the task of sequence design using regularized generative models. At a high level, these approaches pair a generative model $G_\phi$ with an oracle $\phi'$ and construct a feedback loop that updates $G_\phi$ (and sometimes $\phi'$) to produce high-performing sequences. In Feedback-GAN (FBGAN), Gupta & Zou (2018) pair a generative adversarial network (Goodfellow et al., 2014), which is trained to predict whether sequences belong to the functional set, with a frozen oracle $\phi'$ that filters a subset of sequences at each training step. They bias the training of the generator and discriminator towards high-performing sequences. Killoran et al. (2017) pursue sequence optimization by regularizing an "inverted model", $\phi'^{-1}_\theta(y_i) = x_i$, with a Wasserstein GAN (Arjovsky et al., 2017) which is trained to produce realistic samples. In this case, both $\phi'_\theta$ and the generator are trained jointly.
Brookes & Listgarten (2018) propose an algorithm, Design by Adaptive Sampling (DbAS), that works by training a generative model $G_\phi$ on a set of sequences $X_0$ and generating a set of proposal sequences $\hat{X} \sim G_\phi$. They then use $\phi'_\theta$ to filter $\hat{X}$ for high-performing sequences, retrain $G_\phi$, and iterate this process until convergence. This scheme is identical to the cross-entropy method with a VAE as the generative model, an important optimization scheme (Rubinstein, 1999). In follow-up work, termed Conditioning by Adaptive Sampling (CbAS) (Brookes et al., 2019), the authors aim to address the pitfall in which the oracle is biased and gives poor estimates outside its training domain D. The authors enforce a soft pessimism penalty for samples that are very distinct from those that the oracle could have possibly learned from. Specifically, they modify the paradigm so that, as the generator updates its parameters $\phi_t \to \phi'_t$ while training on samples in the tail of the distribution, it discounts the weight of each sample $x_i$ by $\frac{\Pr(x_i \mid G; \phi_0)}{\Pr(x_i \mid G; \phi_t)}$. In other words, if the generative model that was trained on the original data was more enthusiastic about a sample than the one updated according to the oracle's recommendations, that sample is up-weighted in the next training round (and vice versa). Notably, as the oracle is not updated during the process, there are two rounds of experiments in which they maximize the potential gains from the oracle given what it already knows: a round to create the oracle, and a round to improve the generative model. While it is trivial to repeat the process for multiple rounds, the process could be improved by incorporating information about how many rounds it will be used for. We use DbAS and CbAS as state-of-the-art representatives of this class of regularized generative algorithms. Reinforcement Learning (RL). RL algorithms are another class of methods used to approach this problem. As these algorithms learn to perform tasks by experience, their success is often dependent on whether interactions with the environment are cheap or whether there is a good simulator of the environment on which they can practice (Mnih et al., 2015). However, in our setting good simulators often do not exist, and sampling the environment directly can be very expensive. As a result, recent approaches to this problem have built locally accurate simulators and used them to train an RL agent. With DyNA-PPO, Angermueller et al. (2020) achieve state-of-the-art results in sequence design tasks following this approach. They train a policy network based on Proximal Policy Optimization (PPO) (Schulman et al., 2017) by simulating $\phi$ through an ensemble of models $\phi''$. The models are trained on all data measured so far, and those that achieve a high $R^2$ score in cross-validation are selected as part of the ensemble. Additionally, to increase the diversity of proposed sequences, they add a penalty for proposing samples that are close to previously proposed samples. They compare their results against some of the methods mentioned above, as well as other RL training schemes, showing superior results. While DyNA-PPO reaches state-of-the-art performance, a major drawback of policy gradient algorithms is that they are complex to implement and their performance is highly sensitive to implementation details (Engstrom et al., 2019). We use DyNA-PPO as a representative state of the art for all sequence design algorithms in our study.
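As a rough sketch of the CbAS reweighting described above, the density ratio is most stably computed in log space; this assumes the generator exposes per-sample log-densities and omits the surrounding sampling and retraining loop.

```python
import numpy as np

def cbas_sample_weights(log_p_init, log_p_curr):
    # log_p_init: log Pr(x_i | G; phi_0) under the generator before any updates
    # log_p_curr: log Pr(x_i | G; phi_t) under the current generator
    # Samples the initial generator liked more than the current one get
    # up-weighted in the next training round, and vice versa.
    return np.exp(np.asarray(log_p_init) - np.asarray(log_p_curr))
```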
Most recent sequence design algorithms do not use evolution-inspired methods. Despite their lack of popularity, as we show here, such methods are fairly strong baselines that should be considered. Here we investigate classic algorithms such as the standard Wright-Fisher process, with and without models, the Covariance Matrix Adaptation evolution strategy (CMA-ES) (Hansen & Ostermeier, 2001), and ADALEAD. It is important to note that the focus of our study is distinct from work focused on better local approximate models (e.g. Rao et al., 2019; Riesselman et al., 2017) or on semi-supervised and transfer learning approaches (e.g. Madani et al., 2020; Biswas et al., 2020).
In this work, the authors propose a greedy search approach for designing biological sequences in an active learning, batch setting. The algorithm is a fairly standard evolutionary algorithm which identifies the set of candidates at each iteration by adapting the best candidates from the previous iteration, and then evaluating those adaptations using a surrogate model. A set of experiments (using an evaluation sandbox also proposed by the authors for this work) suggests the proposed approach finds more high-quality solutions than competing approaches; however, especially in the experiments using the realistic/empirical surrogate model (an ensemble of 3 CNNs), the quality of the best solutions found by several approaches is statistically similar.
Zero-Shot Learning with Common Sense Knowledge Graphs
1 INTRODUCTION . Deep neural networks require large amounts of labeled training data to achieve optimal performance . This is a severe bottleneck , as obtaining large amounts of hand-labeled data is an expensive process . Zero-shot learning is a training strategy which allows a machine learning model to predict novel classes without the need for any labeled examples for the new classes ( Romera-Paredes & Torr , 2015 ; Socher et al. , 2013 ; Wang et al. , 2019 ) . Zero-shot models learn parameters for seen classes along with their class representations . During inference , new class representations are provided for the unseen classes . Previous zero-shot learning systems have used hand-engineered attributes ( Akata et al. , 2015 ; Farhadi et al. , 2009 ; Lampert et al. , 2014 ) , pretrained embeddings ( Frome et al. , 2013 ) and learned embeddings ( e.g . sentence embeddings ) ( Xian et al. , 2016 ) as class representations . Class representations in a zero-shot learning framework should satisfy the following properties : ( 1 ) they should adapt to unseen classes without requiring additional human effort , ( 2 ) they should provide rich features such that the unseen classes have sufficient distinguishing characteristics among themselves , ( 3 ) they should be applicable to a range of downstream tasks . Previous approaches for class representations have various limitations . On one end of the spectrum , attribute-based methods provide rich features but the attributes have to be fixed ahead of time for the unseen classes . On the other end of the spectrum , pretrained embeddings such as GloVe ( Pennington et al. , 2014 ) and Word2Vec ( Mikolov et al. , 2013 ) offer the flexibility of easily adapting to new classes but rely on unsupervised training on large corpora—which may not provide distinguishing characteristics necessary for zero-shot learning . Many methods lie within the spectrum and learn class representations for zero-shot tasks from descriptions such as attributes , text , and image prototypes . Existing approaches that have achieved state-of-the-art performance make task-specific adjustments and can not exactly be adapted to tasks in different domains ( Liu et al. , 2019a ; Verma et al. , 2020 ) . Methods using graph neural networks on the ImageNet graph to learn class representations have achieved strong performance on zero-shot object classification ( Kampffmeyer et al. , 2019 ; Wang et al. , 2018 ) . These methods are general-purpose , since we show that they can be adapted to other tasks as well . However , the ImageNet graph may not provide rich features suitable for a wide range of downstream tasks . In our work , we propose to learn class representations from common sense knowledge graphs . Common sense knowledge graphs ( Liu & Singh , 2004 ; Speer et al. , 2017 ; Tandon et al. , 2017 ; Zhang et al. , 2020 ) are an untapped source of explicit high-level knowledge that requires little human effort to apply to a range of tasks . These graphs have explicit edges between related concept nodes and provide valuable information to distinguish between different concepts . However , adapting existing zero-shot learning frameworks to learn class representations from common sense knowledge graphs is challenging in several ways . GCNZ ( Wang et al. , 2018 ) learns graph neural networks with a symmetrically normalized graph Laplacian , which not only requires the entire graph structure during training but also needs retraining if the graph structure changes , i.e. 
, GCNZ is not inductive. Common sense knowledge graphs can be large (2 million to 21 million edges), and training a graph neural network on the entire graph can be prohibitively expensive. DGP (Kampffmeyer et al., 2019) is an inductive method and aims to generate expressive class representations, but assumes a directed acyclic graph such as WordNet. Common sense knowledge graphs do not have a directed acyclic graph structure. To address these limitations, we propose ZSL-KG, a general-purpose framework with a novel transformer graph convolutional network (TrGCN) to learn class representations. Graph neural networks learn to represent the structure of graphs by aggregating information from each node's neighbourhood. Aggregation techniques used in GCNZ, DGP, and most other graph neural network approaches are linear, in the sense that they take a (possibly weighted) mean or maximum of the neighbourhood features. To capture the complex information in the common sense knowledge graph, TrGCN learns a transformer-based aggregator to compute a non-linear combination of a node's neighbours. A few prior works have considered LSTM-based aggregators (Hamilton et al., 2017a) as a way to increase the expressivity of graph neural networks, but their outputs can be sensitive to the ordering imposed on the nodes in each neighborhood. For example, on the Animals with Attributes 2 dataset, we find that when given the same test image 10 times with different neighborhood orderings, an LSTM-based graph neural network outputs inconsistent predictions 16% of the time (Appendix A). One recent work considers trying to make LSTMs less sensitive by averaging the outputs over permutations, but this significantly increases the computational cost and provides only a small boost to prediction accuracy (Murphy et al., 2019). In contrast, TrGCN learns a transformer-based aggregator, which is non-linear and naturally permutation invariant. Additionally, our framework is inductive, i.e., the graph neural network can be executed on graphs that are different from the training graph, which is necessary for inductive zero-shot learning, under which the test classes are unknown during training. We demonstrate the effectiveness of our framework on three zero-shot learning tasks in vision and language: object classification, intent classification, and fine-grained entity typing. We report new state-of-the-art accuracies on six zero-shot benchmark datasets (Xian et al., 2018a; Farhadi et al., 2009; Deng et al., 2009; Coucke et al., 2018; Gillick et al., 2014; Weischedel & Brunstein, 2005). ZSL-KG outperforms the state-of-the-art specialized method for each task by an average of 1.7 accuracy points. ZSL-KG also outperforms GCNZ, the best general-purpose method, by an average of 5.3 accuracy points. Our ablation study on ZSL-KG with alternate graph neural networks shows that our transformer-based aggregator adds up to 2.8 accuracy points of improvement on these tasks. In summary, our main contributions are the following: 1. We propose to learn class representations from common sense knowledge graphs for zero-shot learning. 2. We present ZSL-KG, a general-purpose framework based on graph neural networks with a novel transformer-based architecture. Our proposed architecture learns a non-linear combination of a node's neighbourhood and generates expressive class representations. 3. ZSL-KG achieves new state-of-the-art accuracies on Animals with Attributes 2 (Xian et al.
, 2018a), aPY (Farhadi et al., 2009), ImageNet (Deng et al., 2009), SNIPS-NLU (Coucke et al., 2018), OntoNotes (Gillick et al., 2014), and BBN (Weischedel & Brunstein, 2005). 2 BACKGROUND. In this section, we summarize zero-shot learning and graph neural networks. 2.1 ZERO-SHOT LEARNING. Zero-shot learning has several variations (Wang et al., 2019; Xian et al., 2018a). Our work focuses on inductive zero-shot learning, under which we do not have access to the unseen classes during training. We train a zero-shot classifier by optimizing over the seen classes. But, unlike traditional methods, zero-shot classifiers are trained along with class representations such as attributes, pretrained embeddings, etc. Recent approaches learn a class encoder $\phi(y) \in \mathbb{R}^d$ to produce vector-valued class representations from an initial input, such as a string or other identifier of the class. (In our case, y is a node in a graph and its k-hop neighborhood.) During inference, the class representations are used to label examples with the unseen classes by passing the examples through an example encoder $\theta(x) \in \mathbb{R}^d$ and predicting the class whose representation has the highest inner product with the example representation. Recent work in zero-shot learning commonly uses one of two approaches to learn the class encoder $\phi(y)$. One approach uses a bilinear similarity function defined by a compatibility matrix $W \in \mathbb{R}^{d \times d}$ (Frome et al., 2013; Xian et al., 2018a): $f(\theta(x), W, \phi(y)) = \theta(x)^T W \phi(y)$ (1). The bilinear similarity function gives a score for each example-class pair. The parameters of $\theta$, W, and $\phi$ are learned by taking a softmax over f for all possible seen classes $y \in Y_S$ and minimizing either the cross-entropy loss or a ranking loss with respect to the true labels. In other words, f should give a higher score for the correct class(es) and lower scores for the incorrect classes. W is often constrained to be low rank, to reduce the number of learnable parameters (Obeidat et al., 2019; Yogatama et al., 2015). Lastly, other variants of the similarity function add minor variations such as non-linearities between factors of W (Socher et al., 2013; Xian et al., 2016). The other common approach is to first train a neural network classifier in a supervised fashion. The final fully connected layer of this network has a vector representation for each seen class, and the remaining layers are used as the example encoder $\theta(x)$. Then, the class encoder $\phi(y)$ is trained by minimizing the L2 loss between the representations from supervised learning and $\phi(y)$ (Kampffmeyer et al., 2019; Socher et al., 2013; Wang et al., 2018). The class encoder that we propose in Section 3 can be plugged into either approach. 2.2 GRAPH NEURAL NETWORKS. The basic idea behind graph neural networks is to learn node embeddings that reflect the structure of the graph (Hamilton et al., 2017b). Consider the graph $G = (V, E, R)$, where V is the set of vertices with node features $X_v$, $(v_i, r, v_j) \in E$ are the labeled edges, and $r \in R$ are the relation types. Graph neural networks learn node embeddings by iterative aggregation of the k-hop neighbourhood. Each layer of a graph neural network has two main components, AGGREGATE and COMBINE (Xu et al.
, 2019): $a_v^{(l)} = \text{AGGREGATE}^{(l)}\left(\{h_u^{(l-1)} \;\forall u \in N(v)\}\right)$ (2), where $a_v^{(l)} \in \mathbb{R}^{d_{l-1}}$ is the aggregated node feature of the neighbourhood, and $h_u^{(l-1)}$ is the node feature in the neighbourhood $N(\cdot)$ of node v, including a self-loop. The aggregated node feature is passed to COMBINE to generate the node representation $h_v^{(l)} \in \mathbb{R}^{d_l}$ for the l-th layer: $h_v^{(l)} = \text{COMBINE}^{(l)}\left(h_v^{(l-1)}, a_v^{(l)}\right)$ (3), with $h_v^{(0)} = x_v$, where $x_v$ is the initial feature vector for the node. Previous works on graph neural networks for zero-shot learning have used GloVe (Pennington et al., 2014) to represent the initial features (Kampffmeyer et al., 2019; Wang et al., 2018).
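For concreteness, the sketch below instantiates eqs. (2)-(3) with a mean aggregator and a tanh combine step; these particular choices, and the weight-matrix names, are illustrative assumptions only. ZSL-KG's TrGCN replaces the mean with a transformer-based aggregator to obtain a non-linear, permutation-invariant AGGREGATE.

```python
import numpy as np

def gnn_layer(h, neighbors, W_self, W_agg):
    # h: dict mapping node id -> feature vector h_v^{(l-1)}
    # neighbors: dict mapping node id -> list of neighbor ids (self-loop included)
    h_new = {}
    for v, h_v in h.items():
        # AGGREGATE (eq. 2): mean-pool the neighborhood (permutation-invariant)
        a_v = np.mean([h[u] for u in neighbors[v]], axis=0)
        # COMBINE (eq. 3): merge the node's own state with the aggregate
        h_new[v] = np.tanh(W_self @ h_v + W_agg @ a_v)
    return h_new
```

Stacking k such layers lets each node representation incorporate its k-hop neighbourhood, as described above.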
This paper tackles zero-shot learning by leveraging a large-scale knowledge graph, i.e., ConceptNet, to propagate the knowledge learned from seen classes to unseen classes. The authors propose a novel propagation rule that aggregates node embeddings via a self-attention mechanism. It is infeasible to run a GCN on such a large-scale knowledge graph, so they reduce the knowledge graph by adopting a neighborhood sampling strategy based on random walks. The method is evaluated on multiple zero-shot learning tasks, including object classification, intent classification, and fine-grained entity typing, and achieves state-of-the-art results.
Blending MPC & Value Function Approximation for Efficient Reinforcement Learning
1 INTRODUCTION. Model-free Reinforcement Learning (RL) is increasingly used in challenging sequential decision-making problems, including high-dimensional robotics control tasks (Haarnoja et al., 2018; Schulman et al., 2017) as well as video and board games (Silver et al., 2016; 2017). While these approaches are extremely general and can theoretically solve complex problems with little prior knowledge, they also typically require a large quantity of training data to succeed. In robotics and engineering domains, data may be collected from real-world interaction, a process that can be dangerous, time-consuming, and expensive. Model-Predictive Control (MPC) offers a simpler, more practical alternative. While RL typically uses data to learn a global model offline, which is then deployed at test time, MPC solves for a policy online by optimizing an approximate model over a finite horizon at a given state. This policy is then executed for a single timestep and the process repeats. MPC is one of the most popular approaches for control of complex, safety-critical systems such as autonomous helicopters (Abbeel et al., 2010), aggressive off-road vehicles (Williams et al., 2016), and humanoid robots (Erez et al., 2013), owing to its ability to use approximate models to optimize complex cost functions with nonlinear constraints (Mayne et al., 2000; 2011). However, approximations in the model used by MPC can significantly limit performance. Specifically, model bias may result in persistent errors that eventually compound and become catastrophic. For example, in non-prehensile manipulation, practitioners often use a simple quasi-static model that assumes an object does not roll or slide away when pushed. For more dynamic objects, this can lead to aggressive pushing policies that perpetually over-correct, eventually driving the object off the surface. Recently, there have been several attempts to combine MPC with model-free RL, showing that the combination can improve over the individual approaches alone. Many of these approaches involve using RL to learn a terminal cost function, thereby increasing the effective horizon of MPC (Zhong et al., 2013; Lowrey et al., 2018; Bhardwaj et al., 2020). However, the learned value function is only applied at the end of the MPC horizon; model errors would still persist within the horizon, leading to sub-optimal policies. Similar approaches have also been applied to great effect in discrete games with known models (Silver et al., 2016; 2017; Anthony et al., 2017), where value functions and policies learned via model-free RL are used to guide Monte-Carlo Tree Search. In this paper, we focus on a somewhat broader question: can machine learning be used both to increase the effective horizon of MPC and to correct for model bias? One straightforward approach is to try to learn (or correct) the MPC model from real data encountered during execution; however, there are practical barriers to this strategy. Hand-constructed models are often crude approximations of reality and lack the expressivity to represent the encountered dynamics. Moreover, increasing the complexity of such models leads to computationally expensive updates that can harm MPC's online performance. Model-based RL approaches such as Chua et al. (2018); Nagabandi et al. (2018); Shyam et al. (2019) aim to learn general neural network models directly from data.
However, learning globally consistent models is an exceptionally hard task due to issues such as covariate shift (Ross & Bagnell, 2012). We propose a framework, MPQ(λ), for weaving together MPC with learned value estimates to trade off errors in the MPC model and approximation error in a learned value function. Our key insight is to view MPC as tracing out a series of local Q-function approximations. We can then blend each of these Q-functions with value estimates from reinforcement learning. We show that by using a blending parameter λ, similar to the trace decay parameter in TD(λ), we can systematically trade off errors between these two sources. Moreover, by smoothly decaying λ over learning episodes we can achieve the best of both worlds: a policy can depend on a prior model before it has encountered any data and then gradually become more reliant on learned value estimates as it gains experience. To summarize, our key contributions are: 1. A framework that unifies MPC and model-free RL through value function approximation. 2. Theoretical analysis of finite-horizon planning with approximate models and value functions. 3. Empirical evaluation on challenging manipulation problems with varying degrees of model bias. 2 PRELIMINARIES. 2.1 REINFORCEMENT LEARNING. We consider an agent acting in an infinite-horizon discounted Markov Decision Process (MDP). An MDP is defined by a tuple $M = (S, A, c, P, \gamma, \mu)$, where S is the state space, A is the action space, $c(s, a)$ is the per-step cost function, $s_{t+1} \sim P(\cdot \mid s_t, a_t)$ is the stochastic transition dynamics, $\gamma$ is the discount factor, and $\mu(s_0)$ is a distribution over initial states. A closed-loop policy $\pi(\cdot \mid s)$ outputs a distribution over actions given a state. Let $\mu^\pi_M$ be the distribution over state-action trajectories obtained by running policy $\pi$ on M. The value function for a given policy $\pi$ is defined as $V^\pi_M(s) = \mathbb{E}_{\mu^\pi_M}\left[\sum_{t=0}^{\infty} \gamma^t c(s_t, a_t) \mid s_0 = s\right]$, and the action-value function as $Q^\pi_M(s, a) = \mathbb{E}_{\mu^\pi_M}\left[\sum_{t=0}^{\infty} \gamma^t c(s_t, a_t) \mid s_0 = s, a_0 = a\right]$. The objective is to find an optimal policy $\pi^* = \arg\min_\pi \mathbb{E}_{s_0 \sim \mu}[V^\pi_M(s_0)]$. We can also define the (dis-)advantage function $A^\pi_M(s, a) = Q^\pi_M(s, a) - V^\pi_M(s)$, which measures how good an action is compared to the action taken by the policy in expectation. It can be equivalently expressed in terms of the Bellman error as $A^\pi_M(s, a) = c(s, a) + \gamma \mathbb{E}_{s' \sim P, a' \sim \pi}[Q^\pi_M(s', a')] - \mathbb{E}_{a \sim \pi}[Q^\pi_M(s, a)]$. 2.2 MODEL-PREDICTIVE CONTROL. MPC is a widely used technique for synthesizing closed-loop policies for MDPs. Instead of trying to solve for a single, globally optimal policy, MPC follows a more pragmatic approach of optimizing simple, local policies online. At every timestep on the system, MPC uses an approximate model of the environment to search for a parameterized policy that minimizes cost over a finite horizon. An action is sampled from the policy and executed on the system. The process is then repeated from the next state, often by warm-starting the optimization from the previous solution. We formalize this process as solving a simpler surrogate MDP $\hat{M} = (S, A, \hat{c}, \hat{P}, \gamma, \hat{\mu}, H)$ online, which differs from M by using an approximate cost function $\hat{c}$, approximate transition dynamics $\hat{P}$, and a horizon limited to H. Since it plans to a finite horizon, it is also common to use a terminal state-action value function $\hat{Q}$ that estimates the cost-to-go.
The start state distribution $\hat{\mu}$ is a Dirac delta centered on the current state, $s_0 = s_t$. MPC can be viewed as iteratively constructing an estimate of the Q-function of the original MDP M, given policy $\pi_\phi$ at state s: $Q^\phi_H(s, a) = \mathbb{E}_{\mu^{\pi_\phi}_{\hat{M}}}\left[\sum_{i=0}^{H-1} \gamma^i \hat{c}(s_i, a_i) + \gamma^H \hat{Q}(s_H, a_H) \,\middle|\, s_0 = s, a_0 = a\right]$ (1). MPC then iteratively optimizes this estimate (at the current system state $s_t$) to update the policy parameters: $\phi^*_t = \arg\min_\phi Q^\phi_H(s_t, \pi_\phi(s_t))$ (2). Alternatively, we can view the above procedure from the perspective of disadvantage minimization. Let us define an estimator for the 1-step disadvantage with respect to the potential function $\hat{Q}$ as $A(s_i, a_i) = c(s_i, a_i) + \gamma \hat{Q}(s_{i+1}, a_{i+1}) - \hat{Q}(s_i, a_i)$. We can then equivalently write the above optimization as minimizing the discounted sum of disadvantages over time via the telescoping-sum trick: $\arg\min_{\pi \in \Pi} \mathbb{E}_{\mu^{\pi_\phi}_{\hat{M}}}\left[\hat{Q}(s_0, a_0) + \sum_{i=0}^{H-1} \gamma^i A(s_i, a_i) \,\middle|\, s_0 = s_t\right]$ (3). Although the above formulation queries $\hat{Q}$ at every timestep, it is still exactly equivalent to the original problem and hence does not mitigate the effects of model bias. In the next section, we build a concrete method that addresses this issue by formulating a novel way to blend Q-estimates from MPC and a learned value function so as to balance their respective errors. 3 MITIGATING BIAS IN MPC VIA REINFORCEMENT LEARNING. In this section, we develop our approach to systematically deal with model bias in MPC by blending in learned value estimates. First, we take a closer look at the different sources of error in the estimate in (1), and then propose an easy-to-implement yet effective strategy for trading them off. 3.1 SOURCES OF ERROR IN MPC. The performance of MPC algorithms critically depends on the quality of the Q-function estimator $Q^\phi_H(s, a)$ in (1). There are three major sources of approximation error. First, model bias can cause compounding errors in predicted state trajectories, which biases the estimates of the costs of different action sequences. The effect of model error becomes more severe as $H \to \infty$. Second, error in the terminal value function gets propagated back to the estimate of the Q-function at the start state. With discounting, the effect of an inaccurate terminal value function diminishes as H increases. Third, using a small H with an inaccurate terminal value function can make the MPC algorithm greedy and myopic to rewards further out in the future. We can formally bound the performance of the policy under approximate models and approximate learned value functions. In Theorem 3.1, we show the loss in performance of the resulting policy as a function of the model error, the value function error, and the planning horizon. Theorem 3.1 (proof in Appendix A.1.2). Let the MDP $\hat{M}$ be an α-approximation of M such that for all (s, a) we have $\|\hat{P}(s' \mid s, a) - P(s' \mid s, a)\|_1 \le \alpha$ and $|\hat{c}(s, a) - c(s, a)| \le \alpha$. Let the learned value function $\hat{Q}(s, a)$ be an ε-approximation of the true value function, $\|\hat{Q}(s, a) - Q^{\pi^*}_M(s, a)\|_\infty \le \epsilon$. The performance of the MPC policy is then bounded with respect to the optimal policy as $\|V^{\pi^*}_M(s) - V^{\hat{\pi}}_M(s)\|_\infty \le 2\left(\frac{\gamma(1-\gamma^{H-1})}{(1-\gamma^H)(1-\gamma)} \alpha H \left(\frac{c_{\max}-c_{\min}}{2}\right) + \frac{\gamma^H \alpha H}{1-\gamma^H}\left(\frac{V_{\max}-V_{\min}}{2}\right) + \frac{\alpha}{1-\gamma} + \frac{\gamma^H \epsilon}{1-\gamma^H}\right)$ (4). This theorem generalizes several established results.
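To make the trade-off in eq. (4) tangible, the bound can be evaluated numerically as a function of the horizon. The small helper below is purely illustrative, with names chosen here rather than taken from the paper; it simply computes the right-hand side of the bound as reconstructed in eq. (4).

```python
def mpc_performance_bound(alpha, eps, H, gamma, c_range, v_range):
    # Right-hand side of eq. (4); c_range = c_max - c_min, v_range = V_max - V_min.
    t1 = gamma * (1 - gamma ** (H - 1)) / ((1 - gamma ** H) * (1 - gamma)) \
         * alpha * H * c_range / 2
    t2 = gamma ** H * alpha * H / (1 - gamma ** H) * v_range / 2
    t3 = alpha / (1 - gamma)                  # cost-error term
    t4 = gamma ** H * eps / (1 - gamma ** H)  # value-function-error term
    return 2 * (t1 + t2 + t3 + t4)

# e.g. sweep the horizon to see model error grow and value error shrink with H
bounds = [mpc_performance_bound(0.05, 0.1, H, 0.99, 1.0, 10.0) for H in range(1, 64)]
```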
Setting H = 1 and ε = 0 recovers the 1-step simulation lemma of Kearns & Singh (2002) (Appendix A.1.1), while setting α = 0, i.e. assuming the true model, recovers the cost-shaping result of Sun et al. (2018). Further inspecting the terms in (4), we see that the model error increases with the horizon H (the first two terms) while the learned value error decreases with H, which matches our intuition. In practice, the errors in the model and value function are usually unknown and hard to estimate, making it impossible to set the MPC horizon to its optimal value. Instead, we next propose a strategy to blend the Q-estimates from MPC and the learned value function at every timestep along the horizon, instead of just at the terminal step, such that we can properly balance the different sources of error.
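The specific blending scheme is developed in the remainder of the paper; as a purely illustrative sketch of the idea, one natural λ-weighted combination of the h-step estimates (each formed from model-rollout costs plus a learned terminal value, as in eq. (1)) is shown below. The exponential weighting mirrors TD(λ) and is an assumption here, not necessarily the exact MPQ(λ) estimator.

```python
import numpy as np

def blended_q_estimate(costs, q_values, gamma, lam):
    # costs:    [c_0, ..., c_{H-1}] per-step costs along one model rollout
    # q_values: [Qhat(s_1, a_1), ..., Qhat(s_H, a_H)] learned value estimates
    # For each horizon h, form Q_h = sum_{i<h} gamma^i c_i + gamma^h Qhat(s_h, a_h),
    # then blend with weights that sum to 1: lam -> 1 trusts the full model
    # rollout, lam -> 0 trusts the learned value after a single step.
    H = len(costs)
    running_cost, q_h = 0.0, []
    for h in range(1, H + 1):
        running_cost += gamma ** (h - 1) * costs[h - 1]
        q_h.append(running_cost + gamma ** h * q_values[h - 1])
    weights = np.array([(1 - lam) * lam ** (h - 1) for h in range(1, H)]
                       + [lam ** (H - 1)])
    return float(np.dot(weights, q_h))
```

Decaying lam over learning episodes then shifts reliance from the prior model to the learned value estimates, matching the motivation above.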
The paper proposes to combine MPC and model-free RL to overcome possible modelling errors, thereby achieving the sample efficiency of MPC and the control quality of model-free RL. The resulting MPQ(λ) algorithm uses MPPI to obtain actions by optimizing the blended MPC objective. The Q-targets for fitting the Q-function also use the blended Q-estimate.
Multiscale Score Matching for Out-of-Distribution Detection
1 INTRODUCTION. Modern neural networks do not tend to generalize well to out-of-distribution samples. This phenomenon has been observed in both classifier networks (Hendrycks & Gimpel, 2017; Nguyen et al., 2015; Szegedy et al., 2013) and deep likelihood models (Nalisnick et al., 2018; Hendrycks et al., 2018; Ren et al., 2019). This certainly has implications for AI safety (Amodei et al., 2016), as models need to be aware of uncertainty when presented with unseen examples. Moreover, an out-of-distribution detector can be applied as an anomaly detector. Ultimately, our research is motivated by the need for a sensitive outlier detector that can be used in a medical setting. Particularly, we want to identify atypical morphometry in early brain development. This requires a method that is generalizable to highly variable, high-resolution, unlabeled real-world data while being sensitive enough to detect an unspecified, heterogeneous set of atypicalities. To that end, we propose multiscale score matching to effectively detect out-of-distribution samples (code: https://github.com/ahsanMah/msma). Hyvärinen (2005) introduced score matching as a method to learn the parameters of a non-normalized probability density model, where a score is defined as the gradient of the log density with respect to the data. Conceptually, a score is a vector field that points in the direction where the log density grows the most. The authors mention the possibility of matching scores via a non-parametric model but circumvent this by using gradients of the score estimate itself. However, Vincent (2011) later showed that the objective function of a denoising autoencoder (DAE) is equivalent to matching the score of a non-parametric Parzen density estimator of the data. Thus, DAEs provide a methodology for learning score estimates via the objective: $\frac{1}{2}\mathbb{E}_{\tilde{x} \sim q_\sigma(\tilde{x}|x) p_{data}(x)}\left[\left\|s_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}|x)\right\|^2\right]$ (1). Here $s_\theta(x)$ is the score network being trained to estimate the true score $\nabla_x \log p_{data}(x)$, and $q_\sigma(\tilde{x}) = \int q_\sigma(\tilde{x}|x) p_{data}(x) dx$. It should be noted that the score of the estimator only matches the true score when the noise perturbation is minimal, i.e. $q_\sigma(\tilde{x}) \approx p_{data}(x)$. Recently, Song & Ermon (2019) employed multiple noise levels to develop a deep generative model based on score matching, called the Noise Conditioned Score Network (NCSN). Let $\{\sigma_i\}_{i=1}^L$ be a positive geometric sequence that satisfies $\frac{\sigma_1}{\sigma_2} = \ldots = \frac{\sigma_{L-1}}{\sigma_L} > 1$. NCSN is a conditional network, $s_\theta(x, \sigma)$, trained to jointly estimate scores for various levels of noise $\sigma_i$ such that $\forall \sigma \in \{\sigma_i\}_{i=1}^L : s_\theta(x, \sigma) \approx \nabla_x \log q_\sigma(x)$. In practice, the network is explicitly provided a one-hot vector denoting the noise level used to perturb the data. The network is then trained via a denoising score matching loss. They choose their noise distribution to be $\mathcal{N}(\tilde{x} \mid x, \sigma^2 I)$; therefore $\nabla_{\tilde{x}} \log q_\sigma(\tilde{x}|x) = -\frac{\tilde{x} - x}{\sigma^2}$. Thus the objective function is: $\frac{1}{L}\sum_{i=1}^{L} \lambda(\sigma_i)\left[\frac{1}{2}\mathbb{E}_{\tilde{x} \sim q_{\sigma_i}(\tilde{x}|x) p_{data}(x)}\left[\left\|s_\theta(\tilde{x}, \sigma_i) + \frac{\tilde{x} - x}{\sigma_i^2}\right\|_2^2\right]\right]$ (2). Song & Ermon (2019) set $\lambda(\sigma_i) = \sigma_i^2$ after empirically observing that $\|\sigma s_\theta(x, \sigma)\|_2 \propto 1$. We similarly scaled our score norms for all our experiments. Our work directly utilizes the training objective proposed by Song & Ermon (2019), i.e. we use an NCSN as our score estimator.
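A minimal sketch of this training objective (eq. 2) is given below, assuming a generic score_net callable that stands in for the σ-conditioned network; the function and variable names are placeholders, not the authors' implementation.

```python
import numpy as np

def ncsn_dsm_loss(score_net, x, sigmas, rng):
    # score_net: callable (x_noisy, i) -> estimated score for noise level i
    #            (the index i stands in for the one-hot conditioning vector)
    # x: (N, D) batch of data; sigmas: geometric sequence sigma_1 > ... > sigma_L
    total = 0.0
    for i, sigma in enumerate(sigmas):
        noise = rng.normal(scale=sigma, size=x.shape)
        x_tilde = x + noise                  # x~ sampled from N(x, sigma^2 I)
        target = -noise / sigma ** 2         # grad_x~ log q_sigma(x~ | x)
        resid = score_net(x_tilde, i) - target
        # lambda(sigma_i) = sigma_i^2 weighting, as in Song & Ermon (2019)
        total += sigma ** 2 * 0.5 * np.mean(np.sum(resid ** 2, axis=1))
    return total / len(sigmas)
```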
However, we use the score outputs for out-of-distribution (OOD) detection rather than for generative modeling. We demonstrate how the space of multiscale score estimates can separate in-distribution samples from outliers, outperforming state-of-the-art methods. We also apply our method to real-world medical imaging data of brain MRI scans. 2 MULTISCALE SCORE ANALYSIS. Consider taking the L2-norm of the score function: $\|s(x)\| = \|\nabla_x \log p(x)\| = \left\|\frac{\nabla_x p(x)}{p(x)}\right\|$. Since the data density term appears in the denominator, a high likelihood will correspond to a low norm. Since out-of-distribution samples should have a low likelihood with respect to the in-distribution log density (i.e. p(x) is small), we can expect them to have high score norms. However, if these outlier points reside in "flat" regions with very small gradients (e.g. in a small local mode), then their score norms can be low despite the points belonging to a low-density region. This is our first indication that a true score norm may not be sufficient for detecting outliers. We empirically validate our intuition by considering score estimates for a relatively simple toy dataset: FashionMNIST. Following the denoising score matching objective (Equation 2), we can obtain multiple estimates of the true score by using different noise distributions $q_\sigma(\tilde{x}|x)$. Like Song & Ermon (2019), we choose the noise distributions to be zero-centered Gaussians scaled according to $\sigma_i$. Recall that the scores for samples perturbed by the lowest-σ noise should be closest to the true score. Our analyses show that this alone was inadequate at separating inliers from OOD samples. We trained a score network $s_{FM}(x, \sigma_L)$ on FashionMNIST and used it to estimate scores of the FashionMNIST ($x \sim D_{FM}$), MNIST ($x \sim D_M$), and CIFAR-10 ($x \sim D_C$) test sets. Figure 1a shows the distribution of the score norms corresponding to the lowest noise level used. Note that CIFAR-10 samples are appropriately given a high score by the model. However, the model is unable to distinguish FashionMNIST from MNIST, giving MNIST roughly the same scores as in-distribution samples. Though far from ideal, this result is still a considerable improvement over existing likelihood methods, which have been shown to assign higher likelihoods to OOD samples (Nalisnick et al., 2018). Our next line of inquiry was to utilize multiple noise levels. That is, instead of simply considering $s_{FM}(x, \sigma_L)$, we analyze the L-dimensional space $[\|s_{FM}(x, \sigma_1)\|, \ldots, \|s_{FM}(x, \sigma_L)\|]$ for $x \sim \{D_{FM}, D_M, D_C\}$. Our observations showed that the datasets did tend to be separable in the L-dimensional space of score norms. Figure 1b visualizes the UMAP embeddings of scores calculated via a network trained to estimate L = 10 scales of σ, with the lowest σ being the same as the one in Figure 1a. 2.1 SCALES AND NEIGHBORHOODS. To our knowledge, multiscale score analysis has not been explored in the context of OOD detection. In this section, we present an analysis in order to give an intuition for why multiple scales can be beneficial. Consider the toy distribution shown in Figure 2. We have three regions of interest: an inlier region with high density centered around x = 10, an outlier region with low density around x = 30, and a second outlier region with a local mode centered around x = 50. Recall that adding Gaussian noise to a distribution is equivalent to convolving it with the Gaussian distribution.
This not only allows us to visualize perturbations of our toy distribution, but also to analytically compute the score estimates given any $\sigma_i$. Initially, with no perturbation, both a point in the low-density region and one very close to (or at) the local mode will have small gradients. As we perturb the samples we smooth the original density, causing it to widen. The relative change in density at each point depends on the neighboring modes. A large-scale perturbation will proportionally take a larger neighborhood into account at each point of the convolution. Therefore, at a sufficiently large scale, nearby outlier points gain context from in-distribution modes. This results in an increased gradient signal in the direction of inliers. Figure 3 plots the score norms of samples generated from the original density along with markers indicating our key regions. Note how even a small-scale perturbation ($\sigma_L = 0.1$) is enough to bias the density of the Low-Density outliers towards the nearby in-distribution mode. A medium-scale ($\sigma_M = 10$) Gaussian perturbation is still not wide enough to reach the inlier region from the Local-Mode outlier densities, causing them to simply smooth away into flat nothingness. It is only after we perform a large-scale ($\sigma_H = 20$) perturbation that the in-distribution mode gets taken into account, resulting in a higher gradient norm. This analysis allows us to intuit that larger noise levels account for a larger neighborhood context. We surmise that given a sufficiently large scale, we can capture gradient signals from distant outliers. It is imperative to note that one large scale is not guaranteed to work for all outliers. Consider outliers close to inlier modes, such as the samples between the Low-Density outliers and the Inliers in Figure 2: $\sigma_H$ results in an overlap between the score distributions of inliers and Low-Density outliers. This makes it difficult to differentiate the aforementioned "in-between" outliers from the in-distribution samples. However, this large scale was necessary to get a big enough neighborhood context to capture the more distant Local-Mode outliers. Therefore, we postulate that a range of scales is necessary for separating outliers. Admittedly, selecting this range according to the dataset is not a trivial problem. In very recent work, Song & Ermon (2020) outlined some techniques for selecting $\{\sigma_i\}_{i=1}^L$ for NCSNs from the perspective of generative modeling. Perhaps there is a similar analog for OOD detection. We leave such analyses for future work and use the default range for NCSN in all our experiments. However, we observed that our defaults are surprisingly generalizable, as evidenced by the fact that all our experiments in Section 5 were performed with the same scale range. In Section 5.5, we further analyze how varying the scale range affects downstream accuracy and observe that our defaults already provide near-optimal performance. 2.2 PROPOSED TRAINING SCHEME.
In this work, we propound the inclusion of all noisy score estimates for the task of separating in- and out-of-distribution points, allowing for a Multiscale Score Matching Analysis (MSMA). Concretely, given L noise levels, we calculate the L2-norms of per-sample scores for each level, resulting in an L-dimensional vector for each input sample. Motivated by our observations, we posit that in-distribution data points occupy distinct and dense regions in the L-dimensional score space. The cluster assumption states that decision boundaries should not pass through high-density regions, but instead lie in low-density regions. This implies that any auxiliary method trained to learn in-distribution regions should be able to identify OOD data points that reside outside the learned space. Thus, we propose a two-step unsupervised training scheme. First, we train an NCSN model $s_{IN}(x, \sigma_L)$ to estimate scores for inlier samples, given $\{\sigma_i\}_{i=1}^L$ levels of noise. Once trained, we calculate all L noisy score estimates for the N training samples and take the L2-norms across the input dimensions: $[\|s_{IN}(X, \sigma_1)\|_2^2, \ldots, \|s_{IN}(X, \sigma_L)\|_2^2]$. This results in an N × L matrix. We then train an auxiliary model (such as a Gaussian Mixture Model) on this matrix to learn the spatial regions of in-distribution samples in the L-dimensional space.
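A compact sketch of this two-step scheme is shown below, assuming a trained score network and using scikit-learn's Gaussian mixture model as the auxiliary density model; the component count is an arbitrary placeholder, not a value taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def score_norm_features(score_net, X, L):
    # (N, L) matrix of squared L2 score norms, one column per noise level;
    # any sigma-scaling of the norms can be folded into score_net itself
    return np.stack([np.sum(score_net(X, i) ** 2, axis=1) for i in range(L)],
                    axis=1)

def msma_fit(score_net, X_train, L):
    feats = score_norm_features(score_net, X_train, L)
    return GaussianMixture(n_components=5).fit(feats)

def msma_ood_score(gmm, score_net, X, L):
    # lower log-likelihood under the fitted GMM -> more likely out-of-distribution
    return -gmm.score_samples(score_norm_features(score_net, X, L))
```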
The authors leverage and repurpose the Noise Conditioned Score Network (NCSN), originally introduced by Song & Ermon (2019) for generative modeling, for detecting out-of-distribution (OOD) images. The authors unfold the intuition and rationale behind score matching, followed by the equivalence to denoising autoencoders (DAE), to derive NCSN as a score estimator, and provide an analysis demonstrating the value of multiscale score analysis. In an experimental analysis on the SVHN and CIFAR datasets, they demonstrate the superiority of their method (MSMA) over previously reported findings in the literature using state-of-the-art models (ODIN, JEM, Likelihood Ratios) on the OOD task.
Recurrent Neural Network Architecture based on Dynamic Systems Theory for Data Driven Modelling of Complex Physical Systems
1 INTRODUCTION. Dynamic systems occur in many different areas of life (Isermann & Münchhof, 2011): from biology, engineering, and medicine to economics and beyond. Often, if a system changes its state based on an external input, the system can be viewed as a dynamic system. Dynamic system identification is the process of modelling the system's properties. Such models can be used, for example, for anomaly detection, controller design, or outcome prediction. For linear systems, this identification task is already well understood and state-of-the-art methods exist. However, if a system exhibits non-linear behaviour, for example slip-stick effects due to mechanical friction, the applicability of these methods is limited. In this case, different approaches implemented in the state of the art range from white-box to black-box models. Generally, increasing system complexity raises the need for more powerful and often less understandable model architectures in order to produce satisfactory results: white-box models (based on differential equations or numerical simulations of the physical system components), black-box models (such as Gaussian processes, deep neural networks, and support vector machines), and grey-box models, which often employ a mix of linear and non-linear building blocks. One example of a tool used in engineering is the Hammerstein-Wiener model, a combination of linear equations and non-linear equations known a priori (shown in Figure 1). The linear model parameters are determined based on the training data. The non-linear behaviour is modeled using lookup tables or user-defined non-linear functions. In this work we present a new type of recurrent neural network layer called the Dynamic Recurrent Neural Network (DYRNN). It is designed for data-based modelling of dynamic systems in a sequence-to-sequence manner based on input (x(t)) and output (y(t)) data. With it, we intend to bridge the gap between dynamic systems theory and recurrent neural networks. The layer's internal computation is based on elemental transfer blocks from linear system identification. By combining it with non-linear neural networks, a Hammerstein-Wiener-style model is emulated. This way, the model can offer additional knowledge about the examined system's internal properties. Furthermore, while the model is trained on data sampled at one rate, it can be applied to data of the same system at a different sampling rate. This can be used to check the robustness of the model or to save time during training. We show that our network produces results which are better than or comparable to other recurrent networks (RNN, LSTM, GRU) on three different problem datasets. Since the layer can be implemented to be compatible with current deep learning frameworks, it can be combined with state-of-the-art neural network layers (such as convolutional or fully connected layers) and training techniques. 2 RELATED WORK. Dynamic system identification can be viewed as a sequence-to-sequence task of modelling a system's output based on certain inputs. Isermann & Münchhof (2011), for example, list several different tools, such as ARIMA processes for linear systems and multiple neural network architectures for non-linear systems. Examples of the latter are locally recurrent globally feedforward networks (LRGF), Multi-Layer Perceptrons (MLP), and Radial Basis Function (RBF) networks with different types of dynamics.
These model structures are generalized , however , and as we will show , further theoretical background from linear systems theory can be leveraged . Generally , deep learning offers multiple neural network layer types that can be employed when dealing with sequence-to-sequence problems , like fully connected ( FC ) networks , convolutional networks ( CNN ) or recurrent networks . Recurrent networks are also known as sequential models ( like RNN , LSTM by Hochreiter & Schmidhuber ( 1997 ) and GRU by Cho et al . ( 2014 ) ) and have been used successfully for text based sequence-to-sequence problems like machine translation or text processing . Wang ( 2017 ) demonstrates a concept of LSTM for dynamic system identification by using several parallel LSTM layers which predict the system ' s behaviour based on its input and prior predictions and their derivatives . A different approach to modelling dynamic systems are neural ordinary differential equations ( ODEs ) by Chen et al . ( 2018 ) . These networks learn dy/dt of a function f with y ( t ) = f ( x ( t ) ) , and the resulting ODE model is used with a numerical integrator/solver ( like Runge-Kutta ) to compute y ( t ) . This has the advantage of a varying sampling step size , which is determined by the solver , but these methods are agnostic of dynamic systems theory knowledge . Similarly , Raissi et al . ( 2019 ) use deep learning to learn Partial Differential Equations ( PDE ) of physical systems in a FC model combined with a numerical integrator . Furthermore , since the evaluation of ODE/PDE models is done using a numerical integrator , such models are difficult to apply in combination with other neural network layers like , for example , convolutional or recurrent layers . In terms of the sampling frequency of the measurement data , recurrent network architectures can only be trained on one specific data frequency and do not provide the functionality to generalize to other sampling rates of the same system . In such a case one would have to resample the new data to the frequency of the training set . Explainability approaches for sequential models in text processing deduce which parts of the sentences are relevant for the model ' s prediction based on the activation of the internal gates ( as shown by , e.g. , Krakovna & Doshi-Velez ( 2016 ) ) . Interpretability of RNN , LSTM or GRU models for continuous measurement data has , to our knowledge , not been explored yet . 3 DYNAMIC RECURRENT NETWORK . The complexity of modelling dynamic systems results not only from the potential non-linearity , but also from the fact that the model has to keep track of the system ' s current and past states in order to predict the output based on new input . We intend to model a dynamic system in a sequence-to-sequence task , which is why we chose a recurrent network architecture . Recurrent neural networks can be seen as units which iterate over a given sequence and predict an output . During this computation , a hidden state is computed which is leveraged for the prediction of the next time step . Our network differs from other state of the art recurrent networks in its computation function , which is derived from linear dynamic systems theory . In the following we explain the theoretical background used in this work . Then we describe the structure of our DYRNN . Finally , we show different advantages that result from this structure , like training and prediction at different signal sampling rates and interpretability of the models . 3.1 LAYER STRUCTURE .
The background knowledge in this section is covered by Föllinger et al . ( 2016 ) and Yarlagadda ( 2010 ) . In dynamic systems theory , a linear system with the external input u ( t ) and resulting system output y ( t ) is expressed as the differential equation

$$y(t) + a_1 \dot{y}(t) + \dots + a_n y^{(n)}(t) = b_0 u(t) + b_1 \dot{u}(t) + \dots + b_n u^{(n)}(t) . \quad (1)$$

A linear system therefore acts as a transformation of the input u ( t ) based on a convolution ( $*$ ) with the system ' s linear transfer function g ( t ) , producing the system output

$$y(t) = u(t) * g(t) = \int u(\tau)\, g(t - \tau)\, d\tau . \quad (2)$$

The transfer function g ( t ) is used in engineering for , amongst others , controller design , system stability analysis or frequency response estimation . Larger , more complicated transfer functions can be seen as several basic functions which are interconnected in a circuit type fashion in parallel or in series ( see Figure 3 ) . Dynamic systems theory identifies five basic linear transfer functions from which all linear dynamic systems can be modeled , which we use in the construction of the DYRNN : P , I , D , PT1 and PD ( for further information and visualization see Appendix Section A ) . They have the following functionalities :
• P : Proportional gain of the input , implemented as a multiplication with a constant
• I : Integrating component , meaning the step-wise summation of an input over time
• D : Differential component acting as a high-pass filter
• PT1 : Proportional multiplication of the input with a time delay , used to model e.g . the voltage across capacitors in RC circuits . This function also acts as a low-pass filter
• PD : Proportional increase with a differential component
In the following equations , K stands for a constant with influence on the output amplitude , while T is a time constant which influences the speed of the system ' s reaction . K and T act as the trainable weights in a DYRNN layer . For usage in a recurrent layer , these differential equations are discretized with – in our case – first degree forward Euler . This is common in many engineering applications , and means replacing $\dot{y}(t)$ with

$$\dot{y}(t) = \frac{y(k) - y(k-1)}{\Delta t} . \quad (3)$$

This results in discrete recurrence equations , with the sample number k and the time distance between samples $\Delta t$ . The respective equations of the basic transfer functions are as follows :

$$p(k) = K_P \cdot x(k) \quad (4)$$
$$i(k) = i(k-1) + \frac{\Delta t}{K_I} \cdot x(k) \quad (5)$$
$$d(k) = \frac{K_D}{\Delta t} \cdot \big( x(k) - x(k-1) \big) \quad (6)$$
$$pt1(k) = pt1(k-1) + \big( K_{PT1} \cdot x(k) - pt1(k-1) \big) \cdot \frac{\Delta t}{\Delta t + T_{PT1}} \quad (7)$$
$$pd(k) = K_{PD} \cdot \Big( x(k) + \frac{T_{PD}}{\Delta t} \cdot \big( x(k) - x(k-1) \big) \Big) , \quad (8)$$

with the input x ( k ) and all K , T > 0 . The equations above are implemented as the computation function of a recurrent network layer as described in Appendix Section A.1 . K and T in the equations become trainable weights of the layer , while the hidden state consists of x ( k − 1 ) , i ( k − 1 ) and pt1 ( k − 1 ) . In the following we refer to these internal computations as subcomponents . We explore two different network variants in our work : the DYRNN5 with all five known subcomponents and the DYRNN3 with just P , PD and PT1 . The reason for this is that a D subcomponent can be approximated by a PD , and an I by a PT1 . Since integrators can also cause model instabilities , we explore both variants in our experiments .
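To make the layer structure concrete , the following sketch implements the DYRNN3 variant as a recurrent cell . It is a minimal illustration under our own assumptions ( a single input channel , $\Delta t$ fixed at construction , and a softplus reparameterization to keep K and T positive – the paper only states K , T > 0 , so this parameterization is our choice , not necessarily the authors ' ) :

```python
import torch
import torch.nn as nn

class DYRNN3Cell(nn.Module):
    """Minimal DYRNN3 cell: P, PT1 and PD subcomponents (Eqs. 4, 7, 8),
    discretized with first-degree forward Euler."""

    def __init__(self, dt: float):
        super().__init__()
        self.dt = dt  # time distance between samples
        # Raw weights; the softplus below enforces K, T > 0.
        self.k_p = nn.Parameter(torch.zeros(1))
        self.k_pt1 = nn.Parameter(torch.zeros(1))
        self.t_pt1 = nn.Parameter(torch.zeros(1))
        self.k_pd = nn.Parameter(torch.zeros(1))
        self.t_pd = nn.Parameter(torch.zeros(1))

    def forward(self, x, state):
        x_prev, pt1_prev = state  # hidden state: x(k-1) and pt1(k-1)
        sp = nn.functional.softplus
        p = sp(self.k_p) * x  # Eq. (4)
        pt1 = pt1_prev + (sp(self.k_pt1) * x - pt1_prev) * (
            self.dt / (self.dt + sp(self.t_pt1)))  # Eq. (7)
        pd = sp(self.k_pd) * (
            x + sp(self.t_pd) / self.dt * (x - x_prev))  # Eq. (8)
        out = torch.cat([p, pt1, pd], dim=-1)  # one channel per subcomponent
        return out, (x, pt1)
```

A DYRNN5 cell would additionally carry i ( k − 1 ) in the hidden state and emit the I and D subcomponents of Eqs . ( 5 ) and ( 6 ) .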
We formulate one unit ' s computations based on the number of input channels $n_{ic}$ and the number of outputs per subcomponent type $n_{oc}$ . The number of output channels per layer amounts to $n_{layer} = (\text{subcomponent count}) \cdot n_{oc}$ , i.e . $3 \cdot n_{oc}$ for DYRNN3 and $5 \cdot n_{oc}$ for DYRNN5 ( see Figure 4a ) . The fact that the DYRNN can be implemented with an interface compatible with other recurrent networks allows the modelling of a system ' s properties by training in a sequence-to-sequence fashion using out of the box backpropagation through time , gradient descent and optimizer functions as implemented in state of the art frameworks like PyTorch ( by Paszke et al . ( 2019 ) ) and TensorFlow ( by Martín Abadi et al . ( 2015 ) ) . Since each input of the layer is connected to its basic subcomponents , stacking DYRNN layers results in an increasing number of output channels , as shown in Figure 4b . In our experiments , we achieved good results with two cascading DYRNN layers followed by a per-time-step linear FC layer with one neuron . This final layer can be enhanced using non-linear activation functions and more layers/neurons , which would result in a structure similar to Hammerstein-Wiener models for the modelling of static non-linearities in the data . In case of noisy input data , for example due to sensor noise , additional CNN layers could be used as well .
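A hypothetical usage sketch , continuing the cell above , shows the per-time-step read-out and the sampling-rate change described in this section ; the driver loop , the zero initial state and the halved $\Delta t$ are illustrative assumptions rather than the authors ' exact training setup :

```python
def run_dyrnn(cell, u):
    """Iterate a DYRNN cell over a (T, 1) input sequence, zero initial state."""
    state = (torch.zeros_like(u[0]), torch.zeros_like(u[0]))
    outputs = []
    for x in u:
        y, state = cell(x, state)
        outputs.append(y)
    return torch.stack(outputs)

dt_train = 0.01
cell = DYRNN3Cell(dt=dt_train)
readout = nn.Linear(3, 1)              # per-time-step linear FC layer
u = torch.randn(200, 1)                # hypothetical input signal
y_hat = readout(run_dyrnn(cell, u))    # (200, 1) predicted system output

# The same system sampled twice as fast: keep the trained K and T,
# only the time distance between samples changes.
cell.dt = dt_train / 2
```

Stacking a second DYRNN layer before the read-out would , as noted above , multiply the channel count by the number of subcomponents , since each input channel feeds every subcomponent .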
This paper proposes the Dynamic Recurrent Neural Network (DYRNN), a recurrent layer for data-driven modelling of dynamic physical systems that exposes the underlying system properties. Starting from the five basic linear transfer functions of dynamic systems theory, the paper formulates DYRNN units. To handle the growing number of output channels when stacking layers, inputs and intermediate results are concatenated before being passed into an FC layer.
SP:9f5792697be57f9be662cebfb28e46f123d96682
NeurWIN: Neural Whittle Index Network for Restless Bandits via Deep RL
1 INTRODUCTION . Many sequential decision problems can be modeled as multi-armed bandit problems . A bandit problem models each potential decision as an arm . In each round , we play M arms out of a total of N arms by choosing the corresponding decisions . We then receive a reward from the played arms . The goal is to maximize the long-term total discounted reward . Consider , for example , displaying advertisements on an online platform with the goal to maximize the long-term discounted clickthrough rates . This can be modeled as a bandit problem where each arm is a piece of advertisement and we choose which advertisements to be displayed every time a particular user visits the platform . It should be noted that the reward , i.e. , click-through rate , of an arm is not stationary , but depends on our actions in the past . For example , a user that just clicked on a particular advertisement may be much less likely to click on the same advertisement in the near future . Such a problem is a classic case of the restless bandit problem , where the reward distribution of an arm depends on its state , which changes over time based on our past actions . The restless bandit problem is notoriously intractable ( Papadimitriou & Tsitsiklis , 1999 ) . Most recent efforts , such as recovering bandits ( Pike-Burke & Grunewalder , 2019 ) , rotting bandits ( Seznec et al. , 2020 ) , and Brownian bandits ( Slivkins & Upfal , 2008 ) , only study some special instances of the restless bandit problem . The fundamental challenge of the restless bandit problem lies in the explosion of state space , as the state of the entire system is the Cartesian product of the states of individual arms . A powerful tool to address the explosion of state space is the Whittle index policy ( Whittle , 1988 ) . In a nutshell , the Whittle index policy calculates a Whittle index for each arm based on the arm ’ s current state , where the index loosely corresponds to the amount of cost that we are willing to pay to play the arm , and then plays the arm with the highest index . It has been shown that the Whittle index policy is either optimal or asymptotically optimal in many settings . In this paper , we present Neural Whittle Index Network ( NeurWIN ) , a principled machine learning approach that finds the Whittle indices for virtually all restless bandit problems . We note that the Whittle index is an artificial construct that can not be directly measured . Finding the Whittle index is typically intractable . As a result , the Whittle indices of many practical problems remain unknown except for a few special cases . We are able to circumvent the challenges of finding the Whittle indices by leveraging an important mathematical property of the Whittle index : Consider an alternative problem where there is only one arm and we decide whether to play the arm in each time instance . In this problem , we need to pay a constant cost of λ every time we play the arm . The goal is to maximize the long-term discounted net reward , defined as the difference between the rewards we obtain from the arm and the costs we pay to play it . Then , the optimal policy is to play the arm whenever the Whittle index becomes larger than λ . Based on this property , a neural network that produces the Whittle index can be viewed as one that finds the optimal policy for the alternative problem for any λ . Using this observation , we propose a deep reinforcement learning method to train NeurWIN . 
To demonstrate the power of NeurWIN , we employ NeurWIN for three recently studied restless bandit problems , namely , recovering bandit ( Pike-Burke & Grunewalder , 2019 ) , wireless scheduling ( Aalto et al. , 2015 ) , and stochastic deadline scheduling ( Yu et al. , 2018 ) . There is no known Whittle index for the first problem , and there is only an approximation of the Whittle index under some relaxations for the second problem . Only the third problem has a precise characterization of the Whittle index . For the first two problems , the index policy using our NeurWIN achieves better performance than existing studies . For the third problem , the index policy using our NeurWIN has virtually the same performance as the Whittle index policy . The rest of the paper is organized as follows : Section 2 reviews related literature . Section 3 provides formal definitions of the Whittle index and our problem statement . Section 4 introduces our training algorithm for NeurWIN . Section 5 demonstrates the utility of NeurWIN by evaluating its performance under three recently studied restless bandit problems . Finally , Section 6 concludes the paper . 2 RELATED WORK . Restless bandit problems were first introduced in ( Whittle , 1988 ) . They are known to be intractable , and are in general PSPACE hard ( Papadimitriou & Tsitsiklis , 1999 ) . As a result , many studies focus on finding the Whittle index policy for restless bandit problems , such as in ( Le Ny et al. , 2008 ; Meshram et al. , 2018 ; Tripathi & Modiano , 2019 ; Dance & Silander , 2015 ) . However , these studies are only able to find the Whittle indices under various specific assumptions about the bandit problems . There has been a lot of studies on applying RL methods for bandit problems . ( Dann et al. , 2017 ) proposed a tool called Uniform-PAC for contextual bandits . ( Zanette & Brunskill , 2018 ) described a framework-agnostic approach towards guaranteeing RL algorithms ’ performance . ( Jiang et al. , 2017 ) introduced contextual decision processes ( CDPs ) that encompass contextual bandits for RL exploration with function approximation . ( Riquelme et al. , 2018 ) compared deep neural networks with Bayesian linear regression against other posterior sampling methods . However , none of these studies are applicable to restless bandits , where the state of an arm can change over time . Deep RL algorithms have been utilized in problems that resemble restless bandit problems , including HVAC control ( Wei et al. , 2017 ) , cyber-physical systems ( Leong et al. , 2020 ) , and dynamic multichannel access ( Wang et al. , 2018 ) . In all these cases , a major limitation for deep RL is scalability . As the state spaces grows exponentially with the number of arms , these studies can only be applied to small-scale systems , and their evaluations are limited to cases when there are at most 5 zones , 6 sensors , and 8 channels , respectively . An emerging research direction is applying machine learning algorithms to learn Whittle indices . ( Borkar & Chadha , 2018 ) proposed employing the LSPE ( 0 ) algorithm ( Yu & Bertsekas , 2009 ) coupled with a polynomial function approximator . The approach was applied in ( Avrachenkov & Borkar , 2019 ) for scheduling web crawlers . However , this work can only be applied to restless bandits whose states can be represented by a single number , and it only uses a polynomial function approximator , which may have low representational power ( Sutton & Barto , 2018 ) . ( Fu et al. 
, 2019 ) proposed a Q-learning based heuristic to find Whittle indices . However , as shown in its experiment results , the heuristic may not produce Whittle indices even when the training converges . 3 PROBLEM SETTING . In this section , we provide a brief overview of restless bandit problems and the Whittle index . We then formally define the problem statement . 3.1 RESTLESS BANDIT PROBLEMS . A restless bandit problem consists of N restless arms . In each round t , a control policy observes the state of each arm i , denoted by $s_i[t]$ , and selects M arms to activate . We call the selected arms active and the others passive . We use $a_i[t]$ to denote the policy ' s decision on each arm i , where $a_i[t] = 1$ if the arm is active and $a_i[t] = 0$ if it is passive at round t . Each arm i generates a stochastic reward $r_i[t]$ with distribution $R_{i,\mathrm{act}}(s_i[t])$ if it is active , and with distribution $R_{i,\mathrm{pass}}(s_i[t])$ if it is passive . The state of each arm i in the next round evolves by the transition kernel of either $P_{i,\mathrm{act}}(s_i[t])$ or $P_{i,\mathrm{pass}}(s_i[t])$ , depending on whether the arm is active . The goal of the control policy is to maximize the total discounted reward , which can be expressed as $\sum_{t=1}^{\infty}\sum_{i=1}^{N}\beta^{t} r_i[t]$ , with β being the discount factor . A control policy is effectively a function that takes the vector $(s_1[t], s_2[t], \dots, s_N[t])$ as the input and produces the vector $(a_1[t], a_2[t], \dots, a_N[t])$ as the output . It should be noted that the space of inputs is exponential in N : if each arm can be in one of K possible states , then the number of possible inputs is $K^N$ . This feature , which is usually referred to as the curse of dimensionality , makes finding the optimal control policy intractable . 3.2 THE WHITTLE INDEX . An index policy seeks to address the curse of dimensionality through decomposition . In each round , it calculates an index , denoted by $W_i(s_i[t])$ , for each arm i based on its current state . The index policy then selects the M arms with the highest indices to activate . It should be noted that the index of an arm i is independent of the states of any other arms . Obviously , the performance of an index policy depends on the design of the index function $W_i(\cdot)$ . A popular index with a solid theoretical foundation is the Whittle index , which is defined below . Since we only consider one arm at a time , we drop the subscript i for the rest of the paper . Consider a system with only one arm , and a control policy that determines whether to activate the arm in each round t . Suppose that the policy needs to pay an activation cost of λ every time it chooses to activate the arm . The goal of the control policy is to maximize the total discounted net reward , $\sum_{t=1}^{\infty}\beta^{t}(r[t] - \lambda a[t])$ . The optimal control policy can be expressed by the set of states in which it would activate this arm for a particular λ , and we denote this set by A ( λ ) . Intuitively , the higher the cost , the less likely the optimal control policy would activate the arm in a given state , and hence the set A ( λ ) should decrease monotonically . When an arm satisfies this intuition , we say that the arm is indexable . Definition 1 ( Indexability ) . An arm is said to be indexable if A ( λ ) decreases monotonically from the set of all states to the empty set as λ increases from $-\infty$ to $\infty$ . A restless bandit problem is said to be indexable if all arms are indexable . Definition 2 ( The Whittle Index ) .
If an arm is indexable , then its Whittle index of each state s is defined as $W(s) := \sup\{\lambda : s \in A(\lambda)\}$ . Even when an arm is indexable , finding its Whittle index can still be intractable , especially when the transition kernel of the arm is convoluted.¹ Our NeurWIN finds the Whittle index by leveraging the following property of the Whittle index : Consider the single-armed bandit problem , and suppose the initial state of an indexable arm is s at round one . Consider two possibilities : the first is that the control policy activates the arm at round one , and then uses the optimal policy starting from round two ; the second is that the control policy does not activate the arm at round one , and then uses the optimal policy starting from round two . Let $Q_{\lambda,\mathrm{act}}(s)$ and $Q_{\lambda,\mathrm{pass}}(s)$ be the expected discounted net rewards for these two possibilities , respectively , and let $D_s(\lambda) := Q_{\lambda,\mathrm{act}}(s) - Q_{\lambda,\mathrm{pass}}(s)$ be their difference . Clearly , the optimal policy should activate an arm under state s and activation cost λ if $D_s(\lambda) \ge 0$ . We then have the following : Theorem 1 . ( Zhao , 2019 , Thm 3.14 ) If an arm is indexable , then , for every state s , $D_s(\lambda) \ge 0$ if and only if $\lambda \le W(s)$ .
¹ Niño-Mora ( 2007 ) described a generic approach for finding the Whittle index . The complexity of this approach is at least exponential in the number of states .
Our NeurWIN uses Thm . 1 to train neural networks that predict the Whittle index for any indexable arm . From Def . 1 , a sufficient condition for indexability is that $D_s(\lambda)$ is a decreasing function . Thus , we define the concept of strong indexability as follows : Definition 3 ( Strong Indexability ) . An arm is said to be strongly indexable if $D_s(\lambda)$ is strictly decreasing in λ for every state s .
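Theorem 1 also suggests a simple baseline for arms whose dynamics are known and finite : since $D_s(\lambda) \ge 0$ exactly when $\lambda \le W(s)$ , the Whittle index can be bracketed by bisection on λ . The sketch below is our own illustration of this property ( all names are ours ) ; NeurWIN replaces this per-state computation with a trained network , which is what makes the approach scale :

```python
import numpy as np

def d_s(lam, s, r_act, r_pass, P_act, P_pass, beta=0.99, iters=3000):
    """D_s(lambda) = Q_{lambda,act}(s) - Q_{lambda,pass}(s), computed by
    value iteration for a single finite-state arm with activation cost lam."""
    V = np.zeros(len(r_act))
    for _ in range(iters):
        q_act = r_act - lam + beta * P_act @ V
        q_pass = r_pass + beta * P_pass @ V
        V = np.maximum(q_act, q_pass)  # optimal continuation from round two
    return (r_act[s] - lam + beta * P_act[s] @ V) - (r_pass[s] + beta * P_pass[s] @ V)

def whittle_index(s, arm, lo=-1e3, hi=1e3, tol=1e-4):
    """Bisection on lambda via Theorem 1: D_s(lambda) >= 0 iff lambda <= W(s)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d_s(mid, s, *arm) >= 0:
            lo = mid  # mid is still at or below W(s)
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Here `arm` is a tuple `(r_act, r_pass, P_act, P_pass)` of per-state reward vectors and transition matrices ; for an indexable arm the bisection converges because A ( λ ) shrinks monotonically in λ .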
This paper considers the problem of learning how to control restless bandits. When all parameters of the system are known, Whittle index policy usually offers a good performance. The main contribution of the paper is to propose an algorithm, NeurWIN, that uses a neural network architecture to learn the Whittle indices. Most of the paper is devoted to the description and the derivation of the algorithm. At the end of the paper, the authors present four illustrations of how the algorithm works. The algorithm is compared to an index-based policy and an (old?) RL algorithm REINFORCE for the small systems. The learning behavior of NeurWIN is very good in all tested cases.
SP:e2d78e6eba2bc0e6273a6ce65549866bc3a29fe7
Enforcing robust control guarantees within neural network policies
1 INTRODUCTION . The field of robust control , dating back many decades , has been able to provide rigorous guarantees on when controllers will succeed or fail in controlling a system of interest . In particular , if the uncertainties in the underlying dynamics can be bounded in specific ways , these techniques can produce controllers that are provably robust even under worst-case conditions . However , as the resulting policies tend to be simple ( i.e. , often linear ) , this can limit their performance in typical ( rather than worst-case ) scenarios . In contrast , recent high-profile advances in deep reinforcement learning have yielded state-of-the-art performance on many control tasks , due to their ability to capture complex , nonlinear policies . However , due to a lack of robustness guarantees , these techniques have still found limited application in safety-critical domains where an incorrect action ( either during training or at runtime ) can substantially impact the controlled system . In this paper , we propose a method that combines the guarantees of robust control with the flexibility of deep reinforcement learning ( RL ) . Specifically , we consider the setting of nonlinear , time-varying systems with unknown dynamics , but where ( as common in robust control ) the uncertainty on these dynamics can be bounded in ways amenable to obtaining provable performance guarantees . Building upon specifications provided by traditional robust control methods in these settings , we construct a new class of nonlinear policies that are parameterized by neural networks , but that are nonetheless provably robust . In particular , we project the outputs of a nominal ( deep neural network-based ) controller onto a space of stabilizing actions characterized by the robust control specifications . The resulting nonlinear control policies are trainable using standard approaches in deep RL , yet are guaranteed to be stable under the same worst-case conditions as the original robust controller . We describe our proposed deep nonlinear control policy class and derive efficient , differentiable projections for this class under various models of system uncertainty common in robust control . We demonstrate our approach on several different domains , including synthetic linear differential inclusion ( LDI ) settings , the cart-pole task , a quadrotor domain , and a microgrid domain . Although these domains are simple by modern RL standards , we show that purely RL-based methods often produce unstable policies in the presence of system disturbances , both during and after training . In contrast , we show that our method remains stable even when worst-case disturbances are present , while improving upon the performance of traditional robust control methods . 2 RELATED WORK . We employ techniques from robust control , ( deep ) RL , and differentiable optimization to learn provably robust nonlinear controllers . We discuss these areas of work in connection to our approach . Robust control . Robust control is concerned with the design of feedback controllers for dynamical systems with modeling uncertainties and/or external disturbances ( Zhou and Doyle , 1998 ; Başar and Bernhard , 2008 ) , specifically controllers with guaranteed performance under worst-case conditions . Many classes of robust control problems in both the time and frequency domains can be formulated using linear matrix inequalities ( LMIs ) ( Boyd et al. , 1994 ; Kothare et al. 
, 1996 ) ; for reasonably-sized problems , these LMIs can be solved using off-the-shelf numerical solvers based on interior-point or first-order ( gradient-based ) methods . However , providing stability guarantees often requires the use of simple ( linear ) controllers , which greatly limits average-case performance . Our work seeks to improve performance via nonlinear controllers that nonetheless retain the same stability guarantees . Reinforcement learning ( RL ) . In contrast , RL ( and specifically , deep RL ) is not restricted to simple controllers or problems with uncertainty bounds on the dynamics . Instead , deep RL seeks to learn an optimal control policy , represented by a neural network , by directly interacting with an unknown environment . These methods have shown impressive results in a variety of complex control tasks ( e.g. , Mnih et al . ( 2015 ) ; Akkaya et al . ( 2019 ) ) ; see Buşoniu et al . ( 2018 ) for a survey . However , due to its lack of safety guarantees , deep RL has been predominantly applied to simulated environments or highly-controlled real-world problems , where system failures are either not costly or not possible . Efforts to address the lack of safety and stability in RL fall into several main categories . The first tries to combine control-theoretic ideas , predominantly robust control , with the nonlinear control policy benefits of RL ( e.g. , Morimoto and Doya ( 2005 ) ; Abu-Khalaf et al . ( 2006 ) ; Feng et al . ( 2009 ) ; Liu et al . ( 2013 ) ; Wu and Luo ( 2013 ) ; Luo et al . ( 2014 ) ; Friedrich and Buss ( 2017 ) ; Pinto et al . ( 2017 ) ; Jin and Lavaei ( 2018 ) ; Chang et al . ( 2019 ) ; Han et al . ( 2019 ) ; Zhang et al . ( 2020 ) ) . For example , RL has been used to address stochastic stability in H∞ control synthesis settings by jointly learning Lyapunov functions and policies in these settings ( Han et al. , 2019 ) . As another example , RL has been used to address H∞ control for continuous-time systems via min-max differential games , in which the controller and disturbance are the “ minimizer ” and “ maximizer ” ( Morimoto and Doya , 2005 ) . We view our approach as thematically aligned with this previous work , though our method is able to capture not only H∞ settings , but also a much broader class of robust control settings . Another category of methods addressing this challenge is safe RL , which aims to learn control policies while maintaining some notion of safety during or after learning . Typically , these methods attempt to restrict the RL algorithm to a safe region of the state space by making strong assumptions about the smoothness of the underlying dynamics , e.g. , that the dynamics can be modeled as a Gaussian process ( GP ) ( Turchetta et al. , 2016 ; Akametalu et al. , 2014 ) or are Lipschitz continuous ( Berkenkamp et al. , 2017 ; Wachi et al. , 2018 ) . This framework is in theory more general than our approach , which requires using stringent uncertainty bounds ( e.g . state-control norm bounds ) from robust control . However , there are two key benefits to our approach . First , norm bounds or polytopic uncertainty can accommodate sharp discontinuities in the continuous-time dynamics . Second , convex projections ( as used in our method ) scale polynomially with the state-action size , whereas GPs in particular scale exponentially ( and are therefore difficult to extend to high-dimensional problems ) . A third category of methods uses Constrained Markov Decision Processes ( C-MDPs ) . 
These methods seek to maximize a discounted reward while bounding some discounted cost function ( Altman , 1999 ; Achiam et al. , 2017 ; Taleghan and Dietterich , 2018 ; Yang et al. , 2020 ) . While these methods do not require knowledge of the cost functions a priori , they only guarantee that the cost constraints hold at test time . Additionally , using C-MDPs can yield other complications , such as optimal policies being stochastic and the constraints only holding for a subset of states . Differentiable optimization layers . A great deal of recent work has studied differentiable optimization layers for neural networks : e.g. , layers for quadratic programming ( Amos and Kolter , 2017 ) , SAT solving ( Wang et al. , 2019 ) , submodular optimization ( Djolonga and Krause , 2017 ; Tschiatschek et al. , 2018 ) , cone programs ( Agrawal et al. , 2019 ) , and other classes of optimization problems ( Gould et al. , 2019 ) . These layers can be used to construct neural networks with useful inductive bias for particular domains or to enforce that networks obey hard constraints dictated by the settings in which they are used . We create fast , custom differentiable optimization layers for the latter purpose , namely , to project neural network outputs into a set of certifiably stabilizing actions . 3 BACKGROUND ON LQR AND ROBUST CONTROL SPECIFICATIONS . In this paper , our aim is to control nonlinear ( continuous-time ) dynamical systems of the form

$$\dot{x}(t) \in A(t)x(t) + B(t)u(t) + G(t)w(t), \quad (1)$$

where $x(t) \in \mathbb{R}^s$ denotes the state at time t ; $u(t) \in \mathbb{R}^a$ is the control input ; $w(t) \in \mathbb{R}^d$ captures both external ( possibly stochastic ) disturbances and any modeling discrepancies ; $\dot{x}(t)$ denotes the time derivative of the state x at time t ; and $A(t) \in \mathbb{R}^{s \times s}$ , $B(t) \in \mathbb{R}^{s \times a}$ , $G(t) \in \mathbb{R}^{s \times d}$ . This class of models is referred to as linear differential inclusions ( LDIs ) ; however , we note that despite the name , this class does indeed characterize nonlinear systems , as , e.g. , w ( t ) can depend arbitrarily on x ( t ) and u ( t ) ( though we omit this dependence in the notation for brevity ) . Within this class of models , it is often possible to construct robust control specifications certifying system stability . Given such specifications , our proposal is to learn nonlinear ( deep neural network-based ) policies that provably satisfy these specifications while optimizing some objective of interest . We start by giving background on the robust control specifications and objectives considered in this work . 3.1 ROBUST CONTROL SPECIFICATIONS . In the continuous-time , infinite-horizon settings we consider here , the goal of robust control is often to construct a time-invariant control policy $u(t) = \pi(x(t))$ , alongside some certification that guarantees that the controlled system will be stable ( i.e. , that trajectories of the system will converge to an equilibrium state , usually x = 0 by convention ; see Haddad and Chellaboina ( 2011 ) for a more formal definition ) . For many classes of systems ,¹ this certification is typically in the form of a positive definite Lyapunov function $V : \mathbb{R}^s \to \mathbb{R}$ , with $V(0) = 0$ and $V(x) > 0$ for all $x \neq 0$ , such that the function is decreasing along trajectories – for instance ,

$$\dot{V}(x(t)) \le -\alpha V(x(t)) \quad (2)$$

for some design parameter α > 0 .
( This particular condition implies exponential stability with a rate of convergence α.² ) For certain classes of bounded dynamical systems , time-invariant linear control policies $u(t) = Kx(t)$ , and quadratic Lyapunov functions $V(x) = x^T P x$ , it is possible to construct such guarantees using semidefinite programming . For instance , consider the class of norm-bounded LDIs ( NLDIs )

$$\dot{x} = Ax(t) + Bu(t) + Gw(t), \quad \|w(t)\|_2 \le \|Cx(t) + Du(t)\|_2, \quad (3)$$

where $A \in \mathbb{R}^{s \times s}$ , $B \in \mathbb{R}^{s \times a}$ , $G \in \mathbb{R}^{s \times d}$ , $C \in \mathbb{R}^{k \times s}$ , and $D \in \mathbb{R}^{k \times a}$ are time-invariant and known , and the disturbance w ( t ) is arbitrary ( and unknown ) but obeys the norm bound above.³ For these systems , it is possible to specify a set of stabilizing policies via a set of linear matrix inequalities ( LMIs , Boyd et al . ( 1994 ) ) :

$$\begin{bmatrix} AS + SA^T + \mu GG^T + BY + Y^T B^T + \alpha S & SC^T + Y^T D^T \\ CS + DY & -\mu I \end{bmatrix} \preceq 0, \quad S \succ 0, \quad \mu > 0, \quad (4)$$

where $S \in \mathbb{R}^{s \times s}$ and $Y \in \mathbb{R}^{a \times s}$ . For matrices S and Y satisfying ( 4 ) , $K = YS^{-1}$ and $P = S^{-1}$ are then a stabilizing linear controller gain and Lyapunov matrix , respectively . While the LMI above is specific to NLDI systems , this general paradigm of constructing stability specifications using LMIs applies to many settings commonly considered in robust control ( e.g. , settings with norm-bounded disturbances or polytopic uncertainty , or H∞ control settings ) . More details about these types of formulations are given in , e.g. , Boyd et al . ( 1994 ) ; in addition , we provide the relevant LMI constraints for the settings we consider in this work in Appendix A .
¹ In this work , we consider sub-classes of system ( 1 ) that may indeed be stochastic ( e.g. , due to a stochastic external disturbance w ( t ) ) , but that can be bounded so as to be amenable to deterministic stability analysis . However , other settings may require stochastic stability analysis ; please see Astrom ( 1971 ) .
² See , e.g. , Haddad and Chellaboina ( 2011 ) for a more rigorous definition of ( local and global ) exponential stability . Condition ( 2 ) comes from Lyapunov ' s Theorem , which characterizes various notions of stability using Lyapunov functions .
³ A slightly more complex formulation involves an additional term in the norm bound , i.e. , $Cx(t) + Du(t) + Hw(t)$ , which creates a quadratic inequality in w . The mechanics of obtaining robustness specifications in this setting are largely the same as presented here , though with some additional terms in the equations . As such , as is often done , we assume that H = 0 for simplicity .
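As a concrete illustration of specification ( 4 ) , the sketch below solves the LMI for a small , made-up NLDI with an off-the-shelf SDP modeling tool ( CVXPY ) ; the system matrices , tolerances and solver choice are all assumptions for the example , and feasibility of course depends on the particular system :

```python
import cvxpy as cp
import numpy as np

# Hypothetical NLDI data for Eq. (3); all numbers are made up for the example.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
G = np.array([[0.1], [0.1]])
C = np.array([[0.1, 0.0]])
D = np.array([[0.1]])
alpha = 0.1                       # desired exponential decay rate
s, a, k = 2, 1, 1

S = cp.Variable((s, s), symmetric=True)
Y = cp.Variable((a, s))
mu = cp.Variable()

# Block LMI of Eq. (4); symmetrized explicitly so the solver accepts it.
M = cp.bmat([
    [A @ S + S @ A.T + mu * (G @ G.T) + B @ Y + (B @ Y).T + alpha * S,
     S @ C.T + Y.T @ D.T],
    [C @ S + D @ Y, -mu * np.eye(k)],
])
M = 0.5 * (M + M.T)
prob = cp.Problem(cp.Minimize(0), [M << -1e-6 * np.eye(s + k),
                                   S >> 1e-6 * np.eye(s),
                                   mu >= 1e-6])
prob.solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(S.value)   # stabilizing gain; P = inv(S)
```

Any ( S , Y ) in the feasible set yields a certified gain , which is what makes projecting a network ' s outputs onto this set meaningful .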
In this paper, a neural control method with stability guarantees is proposed. The control input is assumed to come from a neural network that takes in the state. Stability is guaranteed by projecting the control onto the set that satisfies the Lyapunov stability condition for the LQR problem. In particular, minimizing the LQR cost subject to stability constraints can be cast as an SDP for norm-bounded linear differential inclusions. By making use of the convex optimization layers proposed in Agrawal et al. (2019), the SDP can be added as a layer after the neural policy, and efficient projections can be derived such that the implicit function theorem can be utilized to differentiate through the fixed point (the optimality conditions of the SDP), enabling end-to-end learning. The proposed approach is compared with the unconstrained method on various tasks. Both model-based and model-free RL algorithms are used as the neural policy for comparison. The stability-guaranteed approach is able to remain stable even under bounded adversarial dynamics. In comparison, the non-robust methods fail to maintain stability.
SP:14f1bc469eb56dec5dd691c4a4865aa607fa344e
Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data
1 INTRODUCTION . Existing machine learning methods , particularly deep learning models , typically require big data to pursue remarkable performance . For instance , conditional deep generative models are able to generate high-fidelity and diverse images , but they have to rely on vast amounts of labeled data ( Lucic et al. , 2019 ) . Nevertheless , it is often laborious or impractical to collect large-scale , accurately class-labeled data in real-world scenarios , and thus label scarcity is ubiquitous . Under such circumstances , the performance of classification and conditional generation ( Mirza & Osindero , 2014 ) drops significantly ( Lucic et al. , 2019 ) . At the same time , diverse unlabeled data are available in enormous quantities , and therefore a key issue is how to take advantage of the extra data to enhance conditional generation or classification . Within the unlabeled data , both in-distribution and out-of-distribution data exist , where in-distribution data conform to the distribution of the labeled data while out-of-distribution data do not . Our key insight is to harness the out-of-distribution data . In generation with extra data , most related works focused on the in-distribution data ( Lucic et al. , 2019 ; Gui et al. , 2020 ; Donahue & Simonyan , 2019 ) . When it comes to out-of-distribution data , the majority of existing methods ( Noguchi & Harada , 2019 ; Yamaguchi et al. , 2019 ; Zhao et al. , 2020 ) attempted to forcibly train generative models on a large amount of unlabeled data , and then transferred the learned knowledge of the pre-trained generator to the in-distribution data . In classification , a common setting for utilizing unlabeled data is semi-supervised learning ( Miyato et al. , 2018 ; Sun et al. , 2019 ; Berthelot et al. , 2019 ) , which usually assumes that the unlabeled and labeled data come from the same distribution , ignoring their distributional mismatch . In contrast , Positive and Unlabeled ( PU ) Learning ( Bekker & Davis , 2020 ; Kiryo et al. , 2017 ) is an elegant way of handling this under-studied problem , where a model has access only to positive samples and unlabeled data . It is therefore possible to utilize pseudo labels predicted by a PU classifier on unlabeled data to guide the conditional generation . However , the predicted signals from the classifier tend to be noisy . Although there is a flurry of papers about learning from noisy labels for classification ( Tsung Wei Tsai , 2019 ; Ge et al. , 2020 ; Guo et al. , 2019 ) , to the best of our knowledge , no work has considered leveraging noisy labels seamlessly in joint classification and generation . Additionally , another work ( Hou et al. , 2018 ) leveraged GANs to recover both the positive and negative data distributions to step away from overfitting , but it never considered noise-invariant generation or the mutual improvement of the two tasks . Generative-discriminative complementary learning ( Xu et al. , 2019 ) was investigated in weakly supervised learning , but ours is the first attempt to tackle the ( Multi- ) Positive and Unlabeled learning setting while developing a method for noise-invariant generation from noisy labels . Please refer to Section 5 for a discussion of more related works . In this paper , we focus on the mutual benefits of conditional generation and PU classification when only a little class-labeled data is accessible , but extra unlabeled data , including out-of-distribution data , is available .
Firstly , a parallel non-negative multi-class PU estimator is derived to classify both the positive data of all classes and the negative data . Then we design a Classifier-Noise-Invariant Conditional Generative Adversarial Network ( CNI-CGAN ) that is able to learn the clean data distribution from all unlabeled data with noisy labels provided by the PU classifier . Simultaneously , we also leverage our CNI-CGAN to enhance the performance of the PU classification through data augmentation , demonstrating a reciprocal benefit for both generation and classification . We provide a theoretical analysis of the optimal condition of our CNI-CGAN and conduct extensive experiments to verify the superiority of our approach . 2 OUR METHOD . 2.1 POSITIVE-UNLABELED LEARNING . Traditional Binary Positive-Unlabeled Problem Setting Let $X \in \mathbb{R}^d$ and $Y \in \{\pm 1\}$ be the input and output variables and $p(x, y)$ the joint distribution with marginal distributions $p_p(x) = p(x \mid Y = +1)$ and $p_n(x) = p(x \mid Y = -1)$ . In particular , we denote $p(x)$ as the distribution of the unlabeled data . $n_p$ , $n_n$ and $n_u$ are the amounts of positive , negative and unlabeled data , respectively . Parallel Non-Negative PU Estimator Vanilla PU learning ( Bekker & Davis , 2020 ; Kiryo et al. , 2017 ; Du Plessis et al. , 2014 ; 2015 ) employs an unbiased and consistent risk estimator . Denote $g_\theta : \mathbb{R}^d \to \mathbb{R}$ as the score function parameterized by θ , and $\ell : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$ as the loss function . The risk of $g_\theta$ can be approximated by its empirical version , denoted $\hat{R}_{pn}(g_\theta)$ :

$$\hat{R}_{pn}(g_\theta) = \pi_p \hat{R}_p^+(g_\theta) + \pi_n \hat{R}_n^-(g_\theta), \quad (1)$$

where $\pi_p$ represents the class prior probability , i.e . $\pi_p = P(Y = +1)$ with $\pi_p + \pi_n = 1$ . In addition , $\hat{R}_p^+(g_\theta) = \frac{1}{n_p}\sum_{i=1}^{n_p} \ell(g_\theta(x_i^p), +1)$ and $\hat{R}_n^-(g_\theta) = \frac{1}{n_n}\sum_{i=1}^{n_n} \ell(g_\theta(x_i^n), -1)$ . As negative data $x^n$ are unavailable , a common strategy is to offset $R_n^-(g_\theta)$ . We also know that $\pi_n p_n(x) = p(x) - \pi_p p_p(x)$ , and hence $\pi_n \hat{R}_n^-(g_\theta) = \hat{R}_u^-(g_\theta) - \pi_p \hat{R}_p^-(g_\theta)$ . The resulting unbiased risk estimator $\hat{R}_{pu}(g_\theta)$ can then be formulated as :

$$\hat{R}_{pu}(g_\theta) = \pi_p \hat{R}_p^+(g_\theta) - \pi_p \hat{R}_p^-(g_\theta) + \hat{R}_u^-(g_\theta), \quad (2)$$

where $\hat{R}_p^-(g_\theta) = \frac{1}{n_p}\sum_{i=1}^{n_p} \ell(g_\theta(x_i^p), -1)$ and $\hat{R}_u^-(g_\theta) = \frac{1}{n_u}\sum_{i=1}^{n_u} \ell(g_\theta(x_i^u), -1)$ . The advantage of this unbiased risk minimizer is that the optimal solution can be easily obtained if g is linear in θ . However , in real scenarios we tend to leverage more flexible models $g_\theta$ , e.g. , deep neural networks . This strategy will push the estimator to a point where it starts to suffer from overfitting . Hence , we decide to utilize the non-negative risk ( Kiryo et al. , 2017 ) for our PU learning , which has been verified ( Kiryo et al. , 2017 ) to allow deep neural networks to mitigate overfitting . The non-negative PU estimator is formulated as :

$$\hat{R}_{pu}(g_\theta) = \pi_p \hat{R}_p^+(g_\theta) + \max\{0,\, \hat{R}_u^-(g_\theta) - \pi_p \hat{R}_p^-(g_\theta)\}. \quad (3)$$

In pursuit of a parallel implementation of $\hat{R}_{pu}(g_\theta)$ , we replace $\max\{0, \hat{R}_u^-(g_\theta) - \pi_p \hat{R}_p^-(g_\theta)\}$ with its lower bound $\frac{1}{N}\sum_{i=1}^{N} \max\{0,\, \hat{R}_u^-(g_\theta; \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(g_\theta; \mathcal{X}_p^i)\}$ , where $\mathcal{X}_u^i$ and $\mathcal{X}_p^i$ denote the unlabeled and positive data in the i-th mini-batch , and N is the number of batches .
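For concreteness , a mini-batch version of the non-negative estimator ( Eq . 3 ) can be written in a few lines ; this is a sketch under our own choices ( PyTorch , the sigmoid loss as the surrogate ℓ ) rather than the authors ' exact implementation :

```python
import torch

def sigmoid_loss(scores, y):
    # Surrogate loss l(g(x), y); our choice for illustration.
    return torch.sigmoid(-y * scores)

def nn_pu_risk(g_pos, g_unl, pi_p):
    """Non-negative PU risk (Eq. 3) on one mini-batch.
    g_pos / g_unl: classifier scores g_theta(x) on positive / unlabeled data."""
    r_p_plus = sigmoid_loss(g_pos, +1.0).mean()    # \hat{R}^+_p
    r_p_minus = sigmoid_loss(g_pos, -1.0).mean()   # \hat{R}^-_p
    r_u_minus = sigmoid_loss(g_unl, -1.0).mean()   # \hat{R}^-_u
    return pi_p * r_p_plus + torch.clamp(r_u_minus - pi_p * r_p_minus, min=0.0)
```

Averaging this per-batch quantity over the N mini-batches realizes the parallel lower bound described above , since the clamp is applied inside each batch .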
From Binary PU to Multi-PU Learning Previous PU learning focuses on learning a classifier from positive and unlabeled data , and cannot easily be adapted to K+1 multi-classification tasks , where K represents the number of classes in the positive data . Multi-Positive and Unlabeled learning ( Xu et al. , 2017 ) has been developed , but the proposed algorithm may not accommodate deep neural networks . Instead , we extend binary PU learning to a multi-class version in a straightforward way by additionally incorporating a cross-entropy loss on all the positive data with labels for the different classes . More precisely , we consider the K+1-class classifier $f_\theta$ as a score function $f_\theta = (f_\theta^1(x), \dots, f_\theta^{K+1}(x))$ . After the softmax function , we select the first K positive classes to construct the cross-entropy loss $\ell_{CE}$ , i.e. , $\ell_{CE}(f_\theta(x), y) = \log \sum_{j=1}^{K+1} \exp(f_\theta^j(x)) - f_\theta^y(x)$ , where $y \in [K]$ . For the PU loss , we consider the composite function $h(f_\theta(x)) : \mathbb{R}^d \to \mathbb{R}$ , where $h(\cdot)$ conducts a logit transformation on the accumulated probability of the first K classes , i.e. , $h(f_\theta(x)) = \ln\big(\frac{p}{1-p}\big)$ , in which $p = \sum_{j=1}^{K} \exp(f_\theta^j(x)) \big/ \sum_{j=1}^{K+1} \exp(f_\theta^j(x))$ . The final mini-batch risk of our PU learning can be presented as :

$$\tilde{R}_{pu}(f_\theta; \mathcal{X}^i) = \pi_p \hat{R}_p^+(h(f_\theta); \mathcal{X}_p^i) + \max\{0,\, \hat{R}_u^-(h(f_\theta); \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(h(f_\theta); \mathcal{X}_p^i)\} + \hat{R}_p^{CE}(f_\theta; \mathcal{X}_p^i), \quad (4)$$

where $\hat{R}_p^{CE}(f_\theta; \mathcal{X}_p^i) = \frac{1}{n_p}\sum_{i=1}^{n_p} \ell_{CE}(f_\theta(x_i^p), y_i)$ .
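A sketch of this multi-class risk ( Eq . 4 ) , reusing `nn_pu_risk` from the snippet above ; again , the concrete code is our illustration , not the authors ' released implementation :

```python
import torch
import torch.nn.functional as F

def h_of_f(logits, K):
    """h(f_theta(x)) = ln(p / (1 - p)), where p is the accumulated softmax
    probability of the first K (positive) classes."""
    log_probs = F.log_softmax(logits, dim=-1)            # (batch, K+1)
    log_p = torch.logsumexp(log_probs[:, :K], dim=-1)    # ln p
    log_one_minus_p = log_probs[:, K]                    # class K+1 carries 1-p
    return log_p - log_one_minus_p

def multi_pu_risk(logits_pos, y_pos, logits_unl, pi_p, K):
    """Mini-batch risk of Eq. (4): nnPU on h(f) plus cross-entropy on the
    labeled positives (y_pos in {0, ..., K-1})."""
    pu = nn_pu_risk(h_of_f(logits_pos, K), h_of_f(logits_unl, K), pi_p)
    ce = F.cross_entropy(logits_pos, y_pos)
    return pu + ce
```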
2.2 CLASSIFIER-NOISE-INVARIANT CONDITIONAL GENERATIVE ADVERSARIAL NETWORK ( CNI-CGAN ) . To leverage the extra data , i.e. , all the unlabeled data , to benefit the generation , we deploy our conditional generative model on all data with pseudo labels predicted by our PU classifier . However , these predicted labels tend to be noisy , reducing the reliability of the supervision signals and thus worsening the performance of the conditional generative model . Besides , the noise depends on the accuracy of the given PU classifier . To address this issue , we focus on developing a novel noise-invariant conditional GAN that is robust to noisy labels provided by a specified classifier , e.g . a PU classifier . We call our method Classifier-Noise-Invariant Conditional Generative Adversarial Network ( CNI-CGAN ) and the architecture is depicted in Figure 1 . In the following , we elaborate on each part of it . Principle of the Design of CNI-CGAN Albeit noisy , the pseudo labels given by the PU classifier still provide rich information that we can exploit . The key is to take the noise generation mechanism into consideration during the generation . We denote the real data as $x_r$ and the predicted hard label through the PU classifier as $PU_\theta(x_r)$ , i.e. , $PU_\theta(x_r) = \arg\max_i f_\theta^i(x_r)$ , as displayed in Figure 1 . We let the generator “ imitate ” the noise generation mechanism that produces pseudo labels for the labeled data . With both pseudo and real labels , we can leverage the PU classifier $f_\theta$ to estimate a confusion matrix $\tilde{C}$ to model the label noise from the classifier . During the generation , a real label y , while being fed into the generator G , will also be polluted by $\tilde{C}$ to compute a noisy label $\tilde{y}$ , which then will be combined with the generated fake sample $x_g$ for the following discrimination . Finally , the discriminator D will distinguish the real samples $[x_r, PU_\theta(x_r)]$ from the fake samples $[x_g, \tilde{y}]$ . Overall , the noise “ generation ” mechanisms from both sides can be balanced . Estimation of $\tilde{C}$ The key in the design of $\tilde{C}$ is to estimate the label noise of the pre-trained PU classifier by considering all the samples of each class . More specifically , the confusion matrix $\tilde{C}$ is K+1 by K+1 and each entry $\tilde{C}_{ij}$ represents the probability of a generated sample $x_g$ , given a label i , being classified as class j by the PU classifier . Mathematically , we denote $\tilde{C}_{ij}$ as :

$$\tilde{C}_{ij} = P(PU_\theta(x_g) = j \mid y = i) = \mathbb{E}_z\big[\mathbb{I}_{\{PU_\theta(x_g) = j \mid y = i\}}\big], \quad (5)$$

where $x_g = G(z, y = i)$ and $\mathbb{I}$ is the indicator function . Owing to the stochastic nature of the optimization when training deep neural networks , we incorporate the estimation of $\tilde{C}$ into the training process via the Exponential Moving Average ( EMA ) method . This choice balances the utilization of information from previous training samples and from the updated PU classifier to estimate $\tilde{C}$ . We formulate the update of $\tilde{C}^{(l+1)}$ in the l-th mini-batch as follows :

$$\tilde{C}^{(l+1)} = \lambda \tilde{C}^{(l)} + (1 - \lambda)\, \Delta \tilde{C}_{\mathcal{X}_l}, \quad (6)$$

where $\Delta \tilde{C}_{\mathcal{X}_l}$ denotes the incremental change of $\tilde{C}$ on the current l-th mini-batch data $\mathcal{X}_l$ via Eq . 5 , and λ is the averaging coefficient of the EMA . Theoretical Guarantee of Clean Data Distribution Firstly , we denote O ( x ) as the oracle class of sample x from an oracle classifier O ( · ) . Let $\pi_i$ , i = 1 , ... , K+1 , be the class-prior probability of class i in the multi-positive unlabeled setting . Theorem 1 establishes the optimal condition under which CNI-CGAN is guaranteed to converge to the clean data distribution . The proof is provided in Appendix A . Theorem 1 . ( Optimal Condition of CNI-CGAN ) Let $P^g$ be a probabilistic transition matrix where $P^g_{ij} = P(O(x_g) = j \mid y = i)$ indicates the probability of a sample $x_g$ with oracle label j being generated by G with the initial label i . We assume that the conditional sample spaces of the classes are disjoint from each other . Then ( 1 ) $P^g$ is a permutation matrix if the generator G in CNI-CGAN is optimal , where the permutation , compared with an identity matrix , only happens on rows r whose corresponding $\pi_r$ , $r \in \mathbf{r}$ , are equal ; and ( 2 ) if $P^g$ is an identity matrix and the generator G in CNI-CGAN is optimal , then $p_r(x, y) = p_g(x, y)$ , where $p_r(x, y)$ and $p_g(x, y)$ are the real and the generated joint distributions , respectively . Briefly speaking , CNI-CGAN can learn the clean data distribution if $P^g$ is an identity matrix . More importantly , the method we have elaborated so far already guarantees that $P^g$ is a permutation matrix , which is very close to an identity one ; we only need an additional constraint , since the permutation happens only when equal class-prior probabilities exist . The Auxiliary Loss The optimal G in CNI-CGAN can only guarantee that $p_g(x, y)$ is close to $p_r(x, y)$ , as the optimal permutation matrix $P^g$ is close to the identity matrix . Hence in practice , to ensure that we exactly learn an identity matrix for $P^g$ and thus achieve the clean data distribution , we introduce an auxiliary loss that encourages a larger trace of $P^g$ , i.e. , $\sum_{i=1}^{K+1} P(O(x_g) = i \mid y = i)$ . As O ( · ) is intractable , we approximate it by the current PU classifier $PU_\theta(x_g)$ . We then obtain the auxiliary loss $\ell_{aux}$ :

$$\ell_{aux}(z, y) = \max\Big\{\kappa - \frac{1}{K+1}\sum_{i=1}^{K+1} \mathbb{E}_z\big(\mathbb{I}_{\{PU_\theta(x_g) = i \mid y = i\}}\big),\, 0\Big\}, \quad (7)$$

where $\kappa \in (0, 1)$ is a hyper-parameter . With the support of the auxiliary loss , $P^g$ tends to converge to the identity matrix , so that CNI-CGAN can learn the clean data distribution even in the presence of noisy labels . Comparison with RCGAN ( Thekumparampil et al. , 2018 ; Kaneko et al. , 2019 ) The theoretical property of CNI-CGAN has a major advantage over the existing Robust CGAN ( RCGAN )
( Thekumparampil et al. , 2018 ; Kaneko et al. , 2019 ) , for which the optimal condition can only be achieved when the label confusion matrix is known a priori . Although heuristics can be employed , such as RCGAN-U ( Thekumparampil et al. , 2018 ) , to handle the unknown label noise setting , these approaches still lack a theoretical guarantee of convergence to the clean data distribution . To guarantee the efficacy of our approach , one implicit and mild assumption is that our PU classifier will not overfit on the training data ; our non-negative estimator helps to ensure this , as explained in Section 2.1 . To further clarify the optimization process of CNI-CGAN , we elaborate the training steps of D and G , respectively . D-Step : We train D on an adversarial loss from both the real data and the generated pairs $(x_g, \tilde{y})$ , where $\tilde{y}$ is corrupted by $\tilde{C}$ , and $\tilde{C}_y$ denotes the y-th row of $\tilde{C}$ . We formulate the loss of D as :

$$\max_{D \in \mathcal{F}} \; \mathbb{E}_{x \sim p(x)}\big[\phi\big(D(x, PU_\theta(x))\big)\big] + \mathbb{E}_{z \sim P_Z,\, y \sim P_Y,\, \tilde{y}|y \sim \tilde{C}_y}\big[\phi\big(1 - D(G(z, y), \tilde{y})\big)\big], \quad (8)$$

where $\mathcal{F}$ is a family of discriminators and $P_Z$ is the distribution of the latent space vector z , e.g. , a Normal distribution . $P_Y$ is a discrete uniform distribution on [ K + 1 ] and φ is the measuring function . G-Step : We train G additionally on the auxiliary loss $\ell_{aux}(z, y)$ as follows :

$$\min_{G \in \mathcal{G}} \; \mathbb{E}_{z \sim P_Z,\, y \sim P_Y,\, \tilde{y}|y \sim \tilde{C}_y}\big[\phi\big(1 - D(G(z, y), \tilde{y})\big) + \beta\, \ell_{aux}(z, y)\big], \quad (9)$$

where β controls the strength of the auxiliary loss and $\mathcal{G}$ is a family of generators . In summary , our CNI-CGAN conducts K+1-class generation , which can be further leveraged to benefit the K+1-class PU classification via data augmentation .
Algorithm 1 Alternating Minimization for PU Learning and Classifier-Noise-Invariant Generation .
Input : Training data ( $\mathcal{X}_p$ , $\mathcal{X}_u$ ) . Batch size M and hyper-parameters β > 0 , λ , κ ∈ ( 0 , 1 ) . $L_0$ and $L \in \mathbb{N}^+$ . Initialize $\tilde{C}^{(1)}$ as the identity matrix . Number of batches N during the training .
Output : Model parameters for the generator G and θ for the PU classifier $f_\theta$ .
1 : /* Pre-train the PU classifier $f_\theta$ */
2 : for i = 1 to N do
3 : Update $f_\theta$ by descending its stochastic gradient of $\tilde{R}_{pu}(f_\theta; \mathcal{X}^i)$ via Eq . 4 .
4 : end for
5 : repeat
6 : /* Update CNI-CGAN */
7 : for l = 1 to L do
8 : Sample $\{z_1, \dots, z_M\}$ , $\{y_1, \dots, y_M\}$ and $\{x_1, \dots, x_M\}$ from $P_Z$ , $P_Y$ and all training data , respectively , and then sample $\{\tilde{y}_1, \dots, \tilde{y}_M\}$ through the current $\tilde{C}^{(l)}$ . Then , update the discriminator D by ascending its stochastic gradient of $\frac{1}{M}\sum_{i=1}^{M}\big[\phi(D(x_i, PU_\theta(x_i))) + \phi(1 - D(G(z_i, y_i), \tilde{y}_i))\big]$ .
9 : Sample $\{z_1, \dots, z_M\}$ and $\{y_1, \dots, y_M\}$ from $P_Z$ and $P_Y$ , and then sample $\{\tilde{y}_1, \dots, \tilde{y}_M\}$ through the current $\tilde{C}^{(l)}$ . Update the generator G by descending its stochastic gradient of $\frac{1}{M}\sum_{i=1}^{M}\big[\phi(1 - D(G(z_i, y_i), \tilde{y}_i)) + \beta\, \ell_{aux}(z_i, y_i)\big]$ .
10 : if l ≥ $L_0$ then
11 : Compute $\Delta \tilde{C}_{\mathcal{X}_l} = \frac{1}{M}\sum_{i=1}^{M} \mathbb{I}_{\{PU_\theta(G(z_i, y_i)) \mid y_i\}}$ via Eq . 5 , and then update the estimate of $\tilde{C}$ by $\tilde{C}^{(l+1)} = \lambda \tilde{C}^{(l)} + (1 - \lambda)\Delta \tilde{C}_{\mathcal{X}_l}$ .
12 : end if
13 : end for
14 : /* Update the PU classifier via Data Augmentation */
15 : Sample $\{z_1, \dots, z_M\}$ and $\{y_1, \dots, y_M\}$ from $P_Z$ and $P_Y$ , respectively , and then update the PU classifier $f_\theta$ by descending its stochastic gradient of $\frac{1}{M}\sum_{i=1}^{M} \ell_{CE}(f_\theta(G(z_i, y_i)), y_i)$ .
16 : until convergence
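Steps 10–12 of Algorithm 1 , i.e . the estimation of $\tilde{C}$ via Eqs . ( 5 ) and ( 6 ) , reduce to a small amount of bookkeeping . The following sketch assumes a generator G ( z , y ) and a classifier returning logits , with all names , batch sizes and the latent dimension chosen by us for illustration :

```python
import torch

@torch.no_grad()
def confusion_increment(G, pu_classifier, n_classes, n_per_class=64, z_dim=100):
    """Monte-Carlo estimate of Delta C~ (Eq. 5): row i is the empirical
    distribution of PU-classifier predictions on samples generated with label i."""
    C = torch.zeros(n_classes, n_classes)
    for i in range(n_classes):
        z = torch.randn(n_per_class, z_dim)
        y = torch.full((n_per_class,), i, dtype=torch.long)
        preds = pu_classifier(G(z, y)).argmax(dim=-1)
        C[i] = torch.bincount(preds, minlength=n_classes).float() / n_per_class
    return C

def ema_update(C_tilde, delta_C, lam=0.99):
    """EMA update of Eq. (6); lam is the averaging coefficient lambda."""
    return lam * C_tilde + (1.0 - lam) * delta_C
```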
This paper proposed the combination of two techniques for improved learning with unlabelled data: 1) Positive-Unlabelled (PU) classifier, and 2) class-conditional GAN (cGAN). The idea is that the PU classifier can help produce more accurate pseudo labels for training of a cGAN, and with the improved cGAN, the generated images can be used in turn to further improve the PU classifier. The idea looks interesting and the empirical results verified its effectiveness.
SP:bf9538a602859eaf9e0c3138c5e46c782863a054
PODS: Policy Optimization via Differentiable Simulation
1 INTRODUCTION . The main goal in RL is to formalize principled algorithmic approaches to solving sequential decision-making problems . As a defining characteristic of RL methodologies , agents gain experience by acting in their environments in order to learn how to achieve specific goals . While learning directly in the real world ( Haarnoja et al. , 2019 ; Kalashnikov et al. , 2018 ) is perhaps the holy grail in the field , this remains a fundamental challenge : RL is notoriously data hungry , and gathering real-world experience is slow , tedious and potentially unsafe . Fortunately , recent years have seen exciting progress in simulation technologies that create realistic virtual training grounds , and sim-2-real efforts ( Tan et al. , 2018 ; Hwangbo et al. , 2019 ) are beginning to produce impressive results . A new class of differentiable simulators ( Zimmermann et al. , 2019 ; Liang et al. , 2019 ; de Avila Belbute-Peres et al. , 2018 ; Degrave et al. , 2019 ) is currently emerging . These simulators not only predict the outcome of a particular action , but they also provide derivatives that capture the way in which the outcome will change due to infinitesimal changes in the action . Rather than using simulators as simple black box oracles , we therefore ask the following question : how can the additional information provided by differentiable simulators be exploited to improve RL algorithms ? To provide an answer to this question , we propose a novel method to efficiently learn control policies for finite horizon problems . The policies learned with our approach use neural networks to model deterministic actions . In a departure from established methodologies , learning these policies does not hinge on learned approximations of the system dynamics or of the value function . Instead , we leverage differentiable simulators to directly compute the analytic gradient of a policy ' s value function with respect to the actions it outputs for a specific set of points sampled in state space . We show how to use this gradient information to compute first and second order update rules for locally optimal policy improvement iterations . Through a simple line search procedure , the process of updating a policy avoids instabilities and guarantees monotonic improvement of its value function . To evaluate the policy optimization scheme that we propose , we apply it to a set of control problems that require payloads to be manipulated via stiff or elastic cables . We have chosen to focus our attention on this class of high-precision dynamic manipulation tasks for the following reasons : • they are inspired by real-world applications ranging from cable-driven parallel robots and crane systems to UAV-based transportation ( Figure 1 ) ; • the systems we need to learn control policies for exhibit rich , highly non-linear dynamics ; • the specific tasks we consider constitute a challenging benchmark because they require very precise sequences of actions . This is a feature that RL algorithms often struggle with , as the control policies they learn work well on average but tend to output noisy actions .
Given that sub-optimal control signals can lead to significant oscillations in the motion of the payload , these manipulation tasks make it possible to provide an easy-to-interpret comparison of the quality of the policies generated with different approaches ; • by varying the configuration of the payloads and actuation setups , we can finely control the complexity of the problem to test systematically the way in which our method scales . ( Figure 1 : Real-world applications that inspire the control problems we focus on in this paper . ) The results of our experiments confirm our theoretical derivations and show that our method consistently outperforms two state-of-the-art ( SOTA ) model-free RL algorithms , Proximal Policy Optimization ( PPO ) ( Wang et al. , 2019 ) and Soft Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) , as well as the model-based approach of Backpropagation Through Time ( BPTT ) . Although our policy optimization scheme ( PODS ) can be interleaved within the algorithmic framework of most RL methods ( e.g. by periodically updating the means of the probability distributions represented by stochastic policies ) , we focused our efforts on evaluating it in isolation to pinpoint the benefits it brings . This allowed us to show that with minimal hyper-parameter tuning , the second order update rule that we derive provides an excellent balance between rapid , reliable convergence and computational complexity . In conjunction with the continued evolution of accurate differentiable simulators , our method promises to significantly improve the process of learning control policies using RL . 2 RELATED WORK . Deep Reinforcement Learning . Deep RL ( DRL ) algorithms have been increasingly more successful in tackling challenging continuous control problems in robotics ( Kober et al. , 2013 ; Li , 2018 ) . Recent notable advances include applications in robotic locomotion ( Tan et al. , 2018 ; Haarnoja et al. , 2019 ) , manipulation ( OpenAI et al. , 2018 ; Zhu et al. , 2019 ; Kalashnikov et al. , 2018 ; Gu et al. , 2016 ) , and navigation ( Anderson et al. , 2018 ; Kempka et al. , 2016 ; Mirowski et al. , 2016 ) to mention a few . Many model-free DRL algorithms have been proposed over the years , which can be roughly divided into two classes , off-policy methods ( Mnih et al. , 2016 ; Lillicrap et al. , 2016 ; Fujimoto et al. , 2018 ; Haarnoja et al. , 2018 ) and on-policy methods ( Schulman et al. , 2015 ; 2016 ; Wang et al. , 2019 ) , based on whether the algorithm can learn independently from how the samples were generated . Recently , model-based RL algorithms ( Nagabandi et al. , 2017 ; Kurutach et al. , 2018 ; Clavera et al. , 2018 ; Nagabandi et al. , 2019 ) have emerged as a promising alternative for improving the sample efficiency . Our method can be considered as an on-policy algorithm as it computes first or second-order policy improvements given the current policy's experience . Policy Update as Supervised Learning . Although policy gradient methods are some of the most popular approaches for optimizing a policy ( Kurutach et al. , 2018 ; Wang et al. , 2019 ) , many DRL algorithms also update the policy in a supervised learning ( SL ) fashion by explicitly aiming to mimic expert demonstrations ( Ross et al. , 2011 ) or optimal trajectories ( Levine & Koltun , 2013a ; b ; Mordatch & Todorov , 2015 ) .
Optimal trajectories , in particular , can be computed using numerical methods such as iterative linear–quadratic regulators ( Levine & Koltun , 2013a ; b ) or contact invariant optimization ( Mordatch & Todorov , 2015 ) . The solutions they provide have the potential to improve the sample efficiency of RL methods either by guiding the learning process through meaningful samples ( Levine & Koltun , 2013a ) or by explicitly matching action distributions ( Mordatch & Todorov , 2015 ) . Importantly , these approaches are not only evaluated in simulation but have also been shown to be effective for many real-world robotic platforms , including manipulators ( Schenck & Fox , 2016 ; Levine et al. , 2016 ) and exoskeletons ( Duburcq et al. , 2019 ) . Recently , Peng et al . ( 2019 ) proposed an off-policy RL algorithm that uses SL both to learn the value function and to fit the policy to the advantage-weighted target actions . While our method shares some similarities with this class of approaches that interleave SL and RL , the updates of our policy do not rely on optimal trajectories that must be given as input . Rather , we show how to leverage differentiable simulators to compute locally optimal updates to a policy . These updates are computed by explicitly taking the gradient of the value function with respect to the actions output by the policy . As such , our method also serves to reinforce the bridge between the fields of trajectory optimization and reinforcement learning . Differentiable Models . Our approach does not aim to learn a model of the system dynamics , but rather leverages differentiable simulators that explicitly provide gradients of simulation outcomes with respect to control actions . We note that traditional physics simulators such as ODE Drumwright et al . ( 2010 ) or PyBullet Coumans & Bai ( 2016–2019 ) are not designed to provide this information . We build , in particular , on a recent class of analytically differentiable simulators that have been shown to effectively solve trajectory optimization problems , with a focus on sim-2-real transfer , for both manipulation ( Zimmermann et al. , 2019 ) and locomotion tasks ( Bern et al. , 2019 ) . Degrave et al . ( 2019 ) embed a differentiable rigid body simulator within a recurrent neural network to concurrently perform simulation steps while learning policies that minimize a loss corresponding to the control objective . While their goal is related to ours , we show how to leverage explicitlycomputed gradients to formulate second order policy updates that have a significant positive effect on convergence . Furthermore , in contrast to Degrave et al . ( 2019 ) , we show that PODS consistently outperforms two common RL baselines , PPO ( Wang et al. , 2019 ) and SAC ( Haarnoja et al. , 2018 ) . Also related to our method is the very recent work of Clavera et al . ( 2020 ) . Their observation is that while most model-based RL algorithms use models simply as a source of data augmentation or as a black-box oracle to sample from ( Nagabandi et al. , 2017 ) , the differentiability of learned dynamics models can and should be exploited further . In an approach that is related to ours , they propose a policy optimization algorithm based on derivatives of the learned model . 
In contrast , we directly use differentiable simulators for policy optimization , bypassing altogether the need to learn the dynamics – including all the hyperparameters that are involved in the process , as well as the additional strategies required to account for the inaccuracies introduced by the learned dynamics ( Boney et al. , 2019 ) . Thanks to the second order update rule that we derive , our method consistently outperforms SOTA model-free RL algorithms on the tasks we propose . In contrast , their method only matches the asymptotic performance of model-free RL ( which is a feat for model-based RL ) . It is also worth pointing out that while model-based approaches hold the promise of enabling learning directly in the real world , with continued progress in sim-2-real transfer , methods such as ours that rely on accurate simulation technologies will continue to be indispensable in the field of RL . A common approach to leveraging differentiable models is backpropagation through time ( BPTT ) , which is the main focus of Grzeszczuk et al . ( 1998 ) , Deisenroth & Rasmussen ( 2011 ) , Parmas ( 2018 ) , Degrave et al . ( 2019 ) , and Clavera et al . ( 2020 ) , where a policy $\pi_\theta$ parametrized by $\theta$ is optimized directly in parameter space ( PS ) , coupling the actions at each time step through the policy parameters . In contrast , our approach alternates between optimizing in trajectory space ( TS ) , following gradient information of the value function for an independent set of actions $a_t = \pi_\theta(s)|_{s=s_t}$ , and in parameter space ( PS ) by doing imitation learning of the monotonically improved actions $a_t$ with $\pi_\theta$ . Alternating between TS and PS allows PODS to avoid the well-known problems of BPTT ( vanishing and exploding gradients ) that have been reported for a long time ( Bengio et al. , 1994 ) .
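To make the TS/PS alternation concrete , here is a minimal sketch of one first-order PODS-style iteration in Python . The simulator oracle value_and_grad and the regression routine fit_policy are hypothetical stand-ins for what a differentiable simulator and a supervised learner would provide ; this is our reading of the scheme described above , not the authors' code .

```python
import numpy as np

def pods_first_order_step(policy, states, value_and_grad, fit_policy,
                          alpha0=1.0, shrink=0.5, max_backtracks=10):
    """One PODS-style iteration (first-order variant), as a sketch.

    value_and_grad(s, a) -> (V, dV/da): assumed to be provided by a
    differentiable simulator rolling out from state s with action a.
    fit_policy(states, actions): supervised regression of the policy
    onto the improved actions (the parameter-space step).
    """
    improved_actions = []
    for s in states:
        a = policy(s)                      # current action (trajectory space)
        v, dv_da = value_and_grad(s, a)    # analytic gradient from the simulator
        alpha = alpha0
        a_new, v_new = a, v
        # Backtracking line search: only accept an update that improves the
        # value, which is what yields the monotonic-improvement guarantee.
        for _ in range(max_backtracks):
            cand = a + alpha * dv_da       # ascend the value function
            v_cand, _ = value_and_grad(s, cand)
            if v_cand > v_new:
                a_new, v_new = cand, v_cand
                break
            alpha *= shrink
        improved_actions.append(a_new)
    # Parameter-space step: imitate the monotonically improved actions.
    fit_policy(states, np.asarray(improved_actions))
```

Because each per-state action is updated independently before the policy is refit , no gradients are chained across time steps , which is how the scheme sidesteps the vanishing and exploding gradients of BPTT .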
The paper argues for using differentiable simulators for policy optimization. To avoid backpropagation through time, the paper splits the policy optimization problem into two steps: i) find an improved action sequence for a set of initial conditions, ii) fit a parametric policy to the set of improved action sequences. First and second order methods are presented and evaluated on a version of a payload-on-crane stabilization problem.
SP:d262c708016b776be1799df31f4b052c107c2b5b
Outlier Preserving Distribution Mapping Autoencoders
1 INTRODUCTION . Background . Outlier detection , the task of discovering abnormal instances in a dataset , is critical for applications from fraud detection and error measurement identification to system fault detection ( Singh & Upadhyaya , 2012 ) . Given that outliers are by definition rare , it is often infeasible to get enough labeled outlier examples that are representative of all the forms the outliers could take . Consequently , unsupervised outlier detection methods that do not require prior labeling of inliers or outliers are frequently adopted ( Chandola et al. , 2009 ) . State-of-the-Art Deep Learning Methods for Outlier Detection . Deep learning methods for outlier detection commonly utilize the reconstruction error of an autoencoder model as an outlier score for outlier detection ( Sakurada & Yairi , 2014 ; Vu et al. , 2019 ) . However , directly using the reconstruction error as the outlier score has a major flaw . As the learning process converges , both outliers and inliers tend to converge to the average reconstruction error ( to the same outlier score ) – making them indistinguishable ( Beggel et al. , 2019 ) . This is demonstrated in Figure 1a , which shows that the ratio of average reconstruction error for outliers converges to that of the inliers . To overcome this shortcoming , recent work ( Beggel et al. , 2019 ; Perera et al. , 2019 ) utilizes the distribution-mapping capabilities of generative models that encourage data to follow a prior distribution in the latent space . These cutting-edge methods assume that while the mapping of inlier points will follow the target prior distribution , outliers will not due to their anomalous nature . Instead , outliers will be mapped to low-probability regions of the prior distribution , making it easy to detect them as outliers ( Beggel et al. , 2019 ; Perera et al. , 2019 ) . However , this widely held assumption has been shown to not hold in practice ( Perera et al. , 2019 ) . Unfortunately , as shown in Figure 1b , both inliers and outliers are still mapped to the same high probability regions of the target prior distribution , making them difficult to distinguish . Problem Definition . Given a dataset $X \in \mathbb{R}^M$ of multivariate observations , let $f : \mathbb{R}^M \to \mathbb{R}^N$ , $N \le M$ , be a function from the multivariate feature space of $X$ to a latent space , $f(x) \in \mathbb{R}^N$ , such that $f(X) \sim P_Z$ , where $P_Z$ is a known and tractable prior probability density function . The dataset $X$ is composed as $X = X_O + X_I$ , where $X_O$ and $X_I$ are the sets of outlier and inlier points , respectively . During training , it is unknown whether any given point $x \in X$ is an outlier or an inlier . Intuitively , our goal is to find a function $f$ that maps instances of a dataset $X$ into a latent space $S$ with a known distribution , such that outliers are mapped to low probability regions and inliers to high probability regions . More formally , we define unsupervised distribution-mapping outlier detection as the problem of finding a function $f^*$ with the aforementioned properties of $f$ such that we maximize the number of outliers $x_o \in X_O$ and inliers $x_i \in X_I$ for which $P_Z(f^*(x_o)) < P_Z(f^*(x_i))$ holds . Challenges . To address the open problem defined above , the following challenges exist : 1 . Overpowering divergence penalty . Intuitively , distribution mapping methods utilize a divergence penalty to achieve a latent space mapping of input data that has a high probability of following a target prior distribution .
While the data overall should follow this prior distribution , a solution must be found that instead maps outliers to low-probability regions of the prior . Having the data match the prior overall while having outliers mapped to low probability regions of the prior creates a conflict , as the two tasks are diametrically opposed . To achieve such a mapping requires overpowering the divergence penalty in order to map outliers to low probability regions in the latent space . 2 . Unknown outlier status . In unsupervised outlier detection , points carry no labels during training indicating whether they are outliers or inliers . This unsupervised scenario , while common in practice ( Singh & Upadhyaya , 2012 ) , makes it challenging to design strategies that explicitly coerce outliers to be mapped to low-probability regions . Our OP-DMA Approach . In this work , we propose the Outlier Preserving Distribution Mapping Autoencoder ( OP-DMA ) . Our core idea is a novel Prior-Weighted Loss ( PWL ) function that solves the two conflicting tasks of mapping the input data to a prior distribution while encouraging outliers to be mapped to low probability regions of that prior . This PWL directly addresses the shortcomings of the existing distribution mapping outlier detection methods ( Vu et al. , 2019 ; Perera et al. , 2019 ) , and to the best of our knowledge is the first unsupervised cost function that explicitly encourages outliers to be mapped to low probability regions . We assume that outliers will have a high reconstruction error during the initial stages of training , which causes the PWL to place them in low-probability ( low PDF ) regions in the latent space . This way , PWL overcomes the challenge of overpowering the divergence penalty . It succeeds in mapping outliers to low-probability regions ( far from the mean of the latent distribution ) even though each input point's outlier status is unknown . Our OP-DMA framework is pluggable , meaning off-the-shelf distance-based outlier methods can be flexibly plugged in post-transformation . Our key contributions are as follows : 1 . Propose OP-DMA , a novel distribution-mapping autoencoder that effectively separates outliers from inliers in the latent space without knowing or making assumptions on the original distribution of the data in the feature space . 2 . Design the Prior-Weighted Loss ( PWL ) , which when coupled with a divergence penalty encourages outliers to be mapped to low-probability regions while inliers are mapped to high-probability regions of the latent space of an autoencoder . 3 . Provide rigorous theoretical proof that the optimal solution for OP-DMA places outliers further than inliers from the mean of the distribution of the data in the latent space . 4 . Demonstrate experimentally that OP-DMA consistently outperforms other state-of-the-art outlier detection methods on a rich variety of real-world benchmark outlier datasets . Significance : OP-DMA is a versatile outlier detection strategy as it can handle input data that has arbitrary distributions in the feature space , while not making any distance or density assumptions on the data . To the best of our knowledge , we are the first to propose a loss function that explicitly encourages outliers to be mapped to low-probability regions while inliers are mapped to high probability regions . Our PWL approach is pluggable , and can easily be incorporated into alternate outlier detectors .
Our ideas could also spur further research into various prior-weighted loss functions . 2 RELATED WORK . State-of-the-art deep outlier detection methods fall into one of three categories : 1 ) autoencoders coupled with classic outlier detectors ( Erfani et al. , 2016 ; Chalapathy et al. , 2018 ) , 2 ) reconstruction error-based outlier detection methods ( Zhou & Paffenroth , 2017 ; Chen et al. , 2017 ; Sabokrou et al. , 2018 ; Xia et al. , 2015 ) , or 3 ) generative outlier detection methods ( Perera et al. , 2019 ; Vu et al. , 2019 ; Liu et al. , 2019 ) . 1 ) Autoencoders coupled with classic outlier detectors project data into a lower dimensional latent space before performing outlier detection on that latent representation . These methods make the strict assumption that outliers in the original space will remain outliers in the latent space . Further , they fail to explicitly encourage this in the mapping function . 2 ) Reconstruction error-based outlier detection methods utilize the reconstruction error of an autoencoder network to identify outliers . They typically use the reconstruction error directly as the anomaly score ( An & Cho , 2015 ) . In more recent work , they try to separate outliers into a separate low-rank matrix analogous to RPCA ( Zhou & Paffenroth , 2017 ) or they introduce a separate discriminator network ( Sabokrou et al. , 2018 ) . However , as shown in ( Beggel et al. , 2019 ) , for autoencoders the reconstruction error of outliers often converges to that of inliers . This negatively impacts the performance of such reconstruction error methods . 3 ) Generative outlier detection methods leverage deep generative models ( Goodfellow et al. , 2014 ; Kingma & Welling , 2013 ) to generate the latent space such that its distribution is encouraged to match a known prior , so that thereafter an appropriate outlier method for the prior can be applied to the latent space ( Vu et al. , 2019 ) , or a discriminator can identify outliers in the latent space ( Vu et al. , 2019 ) or in both the latent space and the reconstructed space ( Perera et al. , 2019 ) . However , as discussed in Section 1 , in practice inliers and outliers are both mapped to the prior distribution , as outliers that are mapped to low-probability regions will generally incur a high cost from the divergence term which matches the latent distribution to the prior . OP-DMA shares characteristics with each of these three categories . However , unlike the other methods in these categories , OP-DMA actively encourages outliers to be mapped to low-probability regions instead of just assuming that this will be the case . OP-DMA is a generative outlier method that uses the reconstruction error to encourage outliers to be mapped to low-probability regions . Further , it can flexibly be paired with nearly any classic outlier detector after distribution mapping . 3 PROPOSED APPROACH : OP-DMA . Overview of approach . OP-DMA consists of three main components : 1 . A distribution mapping autoencoder ( DMA ) that OP-DMA utilizes to map a dataset $X$ from the feature space $\mathbb{R}^M$ into a lower dimensional latent space $\mathbb{R}^N$ , such that the distribution of the encoded data in the lower dimensional latent space has a known probability distribution $P_Z$ . This step is crucial as it makes it easy for OP-DMA to identify low probability regions of the latent space ( outliers should be mapped here ) .
This can be done because after the distribution mapping , we can explicitly calculate the probability density function ( PDF ) of the latent space so long as we selected a prior distribution with a known PDF . 2 . A novel Prior-Weighted Loss ( PWL ) function for distribution mapping that encourages outliers to be mapped to low-probability regions of the latent space , solving both the challenge of the overpowering divergence penalty and that of unknown outlier status . 3 . A traditional outlier detection method is used to identify outliers in the transformed latent space . The choice of outlier detection method is flexible as long as it is amenable to the prior distribution $P_Z$ selected in step 1 of OP-DMA . For instance , when a Gaussian distribution is used for the prior , then OP-DMA utilizes a classical distance-based outlier detection method for step 3 . These steps are described in the following subsections and illustrated in Figure 2 . 3.1 DISTRIBUTION MAPPING AUTOENCODER ( DMA ) . In order to use prior-weighting to map outliers to low-probability regions of a known PDF in a latent space , our distribution mapping method must meet two design requirements : 1 . A one-to-one mapping between each original data point , its latent representation and the reconstructed data point must be established so that each data point's reconstructed data point is unique and can be determined , and vice versa . 2 . The divergence term must impose a cost based on how well a batch of latent data points matches the prior overall , rather than requiring individual data points to have a high probability of being a draw from the prior . To meet these requirements , we select the Wasserstein AutoEncoder ( WAE ) ( Tolstikhin et al. , 2017 ) as the foundation for our distribution mapping . WAEs are distribution-mapping autoencoders that minimize the Wasserstein distance between the original data and its reconstruction , while mapping the input data to a latent space with a known prior distribution . To see why we base our distribution-mapping technique on this method , consider the WAE objective function for encoder network $Q$ and decoder network $G$ : $$W_c^{\lambda}(X, Y) = \underbrace{\inf_{Q} \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}\left[ c(X, G(Z)) \right]}_{\text{Reconstruction Error}} + \underbrace{\lambda\, D(P_Q, P_Z)}_{\text{Divergence Penalty}} . \quad (1)$$ The first term on the right hand side of Equation 1 corresponds to the reconstruction error between the input data and reconstructed data for cost function $c$ . The second term $D$ is a divergence penalty between the distribution of the latent space and the prior distribution , with $\lambda$ a constant weight term that determines how much that divergence is penalized . Let us deterministically produce the latent representation $Q(X)$ and output $G(Q(X)|X)$ ( by using $Q(X) = \delta_{\mu(X)}$ , where $\mu$ is some function mapping the input data set $X$ to $Q(X)$ , for instance ) . It is now clear why Wasserstein autoencoders are an appropriate choice for our distribution mapping method , as the reconstruction error term $\mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}\left[ c(X, G(Z)) \right]$ in Equation 1 represents a one-to-one correspondence between input data , its latent representation and the reconstructed output ( meeting requirement 1 ) . Additionally , $D$ is a batch-level cost term that would be incurred if the latent representation of a batch doesn't match the prior distribution , but doesn't require individual points to be mapped to a high probability region of the prior ( meeting requirement 2 ) .
However , we note that WAEs unfortunately do not encourage outliers in the feature space to remain outliers in the latent space . Consider D to be a discriminator network . Then D is likely to learn a boundary around the high probability region of the prior distribution . Thus the encoder network Q will be penalized for mapping an outlier to a low probability region outside of the boundary found by D as the discriminator D would correctly identify it as a generated point .
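To illustrate how such a prior-weighted objective might look in practice , here is a hedged PyTorch sketch . The exact functional form of OP-DMA's PWL is not given in this excerpt ; the sketch assumes the per-point reconstruction error is weighted by the prior PDF at the latent code , with an RBF-kernel MMD ( a common WAE divergence ) standing in for the divergence term D , and all function names are ours .

```python
import torch
from torch.distributions import MultivariateNormal

def prior_weighted_loss(x, x_rec, z, lam=1.0):
    """Sketch of a prior-weighted WAE-style objective (assumed form).

    Per-point reconstruction error is weighted by the prior PDF at the
    latent code z: points the autoencoder reconstructs poorly (outliers,
    early in training) can lower the loss by moving to low-PDF regions.
    """
    d = z.shape[1]
    prior = MultivariateNormal(torch.zeros(d), torch.eye(d))
    pdf = prior.log_prob(z).exp()                       # P_Z(Q(x)), shape (batch,)
    recon = ((x - x_rec) ** 2).flatten(1).mean(dim=1)   # c(x, G(Q(x))) per point
    pwl = (pdf * recon).mean()
    # Batch-level divergence D(P_Q, P_Z): cheap to compute against prior samples,
    # and it penalizes the batch as a whole rather than individual points.
    z_prior = prior.sample((z.shape[0],))
    mmd = rbf_mmd(z, z_prior)
    return pwl + lam * mmd

def rbf_mmd(a, b, sigma=1.0):
    """Biased MMD^2 estimate between samples a and b with an RBF kernel."""
    def k(u, v):
        d2 = torch.cdist(u, v) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()
```

Note how this differs from a discriminator-based divergence: the MMD term scores the batch against prior samples as a whole , so an individual outlier sent to a low-PDF region incurs only a small batch-level cost , which the PWL weighting can overpower .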
The paper proposes an autoencoder-based outlier detection system. The main idea of the paper is to ensure that outlier points are mapped to areas distant from the inliers in the embedding space. To this end, a novel cost function is introduced, which weighs the reconstruction error based on a prior distribution for the embedding space. This cost function is intended to force inliers to be mapped to the high probability region of the prior distribution and push outliers to low probability regions. Combined with a multivariate normal prior distribution, this then enables the use of simple distance-based outlier detection methods.
SP:172fcdf24499acfeba5a1593b48135c0e2b5e6b1
Geometry of Program Synthesis
1 INTRODUCTION . The idea of program synthesis dates back to the birth of modern computation itself ( Turing , 1948 ) and is recognised as one of the most important open problems in computer science ( Gulwani et al. , 2017 ) . However , there appear to be serious obstacles to synthesising programs by gradient descent at scale ( Neelakantan et al. , 2016 ; Kaiser & Sutskever , 2016 ; Bunel et al. , 2016 ; Gaunt et al. , 2016 ; Evans & Grefenstette , 2018 ; Chen et al. , 2018 ) and these problems suggest that it would be appropriate to make a fundamental study of the geometry of loss surfaces in program synthesis , since this geometry determines the learning process . To that end , in this paper we explain a new point of view on program synthesis using the singular learning theory of Watanabe ( 2009 ) and the smooth relaxation of Turing machines from Clift & Murfet ( 2018 ) . In broad strokes this new geometric point of view on program synthesis says : • Programs to be synthesised are singularities of analytic functions . If $U \subseteq \mathbb{R}^d$ is open and $K : U \to \mathbb{R}$ is analytic , then $x \in U$ is a critical point of $K$ if $\nabla K(x) = 0$ and a singularity of the function $K$ if it is a critical point where $K(x) = 0$ . • The Kolmogorov complexity of a program is related to a geometric invariant of the associated singularity called the Real Log Canonical Threshold ( RLCT ) . This invariant controls both the generalisation error and the learning process , and is therefore an appropriate measure of “ complexity ” in continuous program synthesis . See Section 3 . • The geometry has concrete practical implications . For example , an MCMC-based approach to program synthesis will find , with high probability , a solution that is of low complexity ( if it finds a solution at all ) . We sketch a novel point of view on the problem of “ bad local minima ” ( Gaunt et al. , 2016 ) based on these ideas . See Section 4 . We demonstrate all of these principles in experiments with toy examples of synthesis problems . Program synthesis as inference . We use Turing machines , but mutatis mutandis everything applies to other programming languages . Let T be a Turing machine with tape alphabet Σ and set of states Q and assume that on any input x ∈ Σ∗ the machine eventually halts with output T ( x ) ∈ Σ∗ . Then to the machine T we may associate the set $\{ (x , T(x)) \}_{x \in \Sigma^*} \subseteq \Sigma^* \times \Sigma^*$ . Program synthesis is the study of the inverse problem : given a subset of Σ∗ × Σ∗ we would like to determine ( if possible ) a Turing machine which computes the given outputs on the given inputs . If we presume given a probability distribution q ( x ) on Σ∗ then we can formulate this as a problem of statistical inference : given a probability distribution q ( x , y ) on Σ∗ × Σ∗ determine the most likely machine producing the observed distribution q ( x , y ) = q ( y|x ) q ( x ) . If we fix a universal Turing machine U then Turing machines can be parametrised by codes $w \in W^{\mathrm{code}}$ with U ( x , w ) = T ( x ) for all x ∈ Σ∗ . We let p ( y|x , w ) denote the probability of U ( x , w ) = y ( which is either zero or one ) so that solutions to the synthesis problem are in bijection with the zeros of the Kullback-Leibler divergence between the true distribution and the model : $$K(w) = \iint q(y|x)\, q(x) \log \frac{q(y|x)}{p(y|x,w)} \, dx \, dy . \quad (1)$$ So far this is just a trivial rephrasing of the combinatorial optimisation problem of finding a Turing machine T with T ( x ) = y for all ( x , y ) with q ( x , y ) > 0 . Smooth relaxation .
One approach is to seek a smooth relaxation of the synthesis problem consisting of an analytic manifold $W \supseteq W^{\mathrm{code}}$ and an extension of K to an analytic function $K : W \to \mathbb{R}$ so that we can search for the zeros of K using gradient descent . Perhaps the most natural way to construct such a smooth relaxation is to take W to be a space of probability distributions over $W^{\mathrm{code}}$ and prescribe a model p ( y|x , w ) for propagating uncertainty about codes to uncertainty about outputs ( Gaunt et al. , 2016 ; Evans & Grefenstette , 2018 ) . The particular model we choose is based on the semantics of linear logic ( Clift & Murfet , 2018 ) . Supposing that such a smooth relaxation has been chosen together with a prior ϕ ( w ) over W , smooth program synthesis becomes the study of the statistical learning theory of the triple ( p , q , ϕ ) . There are perhaps two primary reasons to consider the smooth relaxation . Firstly , one might hope that stochastic gradient descent or techniques like Markov chain Monte Carlo will be effective means of solving the original combinatorial optimisation problem . This is not a new idea ( Gulwani et al. , 2017 , §6 ) but so far its effectiveness for large programs has not been proven . Independently , one might hope to find powerful new mathematical ideas that apply to the relaxed problem and shed light on the nature of program synthesis . This is the purpose of the present paper . Singular learning theory . We denote by $W_0 = \{ w \in W \mid K(w) = 0 \}$ , so that $$W_0 \cap W^{\mathrm{code}} \subseteq W_0 \subseteq W \quad (2)$$ where $W_0 \cap W^{\mathrm{code}}$ is the discrete set of solutions to the original synthesis problem . We refer to these as the classical solutions . As the vanishing locus of an analytic function , W0 is an analytic space over R ( Hironaka , 1964 , §0.1 ; Griffiths & Harris , 1978 ) and it is interesting to study the geometry of this space near the classical solutions . Since K is a Kullback-Leibler divergence it is non-negative , and so not only does it vanish on W0 but ∇K also vanishes ; hence every point of W0 is a singular point . Beyond this the geometry of W0 depends on the particular model p ( y|x , w ) that has been chosen , but some aspects are universal : the nature of program synthesis means that typically W0 is an extended object ( i.e . it contains points other than the classical solutions ) and the Hessian matrix of second order partial derivatives of K at a classical solution is not invertible – that is , the classical solutions are degenerate critical points of K . This means that singularity theory is the appropriate branch of mathematics for studying the geometry of W0 near a classical solution . It also means that the Fisher information matrix $$I(w)_{ij} = \iint \frac{\partial}{\partial w_i}\big[ \log p(y|x , w) \big] \, \frac{\partial}{\partial w_j}\big[ \log p(y|x , w) \big] \, q(y|x)\, q(x) \, dx \, dy$$ is degenerate at a classical solution , so that the appropriate branch of statistical learning theory is singular learning theory ( Watanabe , 2007 ; 2009 ) . For an introduction to singular learning theory in the context of deep learning see ( Murfet et al. , 2020 ) . Broadly speaking the contribution of this paper is to realise program synthesis within the framework of singular learning theory , at both a theoretical and an experimental level . In more detail the contents of the paper are : • We define a staged pseudo-UTM ( Appendix E ) which is well-suited to experiments with the ideas discussed above .
Propagating uncertainty about the code through this UTM using the ideas of ( Clift & Murfet , 2018 ) defines a triple ( p , q , ϕ ) associated to a synthesis problem . This formally embeds program synthesis within singular learning theory . • We realise this embedding in code by providing an implementation in PyTorch of this propagation of uncertainty through a UTM . Using the No-U-Turn variant of MCMC ( Hoffman & Gelman , 2014 ) we can approximate the Bayesian posterior of any program synthesis problem ( of course in practice we are limited by computational constraints in doing so ) . • We explain how the real log canonical threshold ( a geometric invariant ) is related to Kolmogorov complexity ( Section 3 ) . • We give a simple example ( Appendix C ) in which W0 contains the set of classical solutions as a proper subset and every point of W0 is a degenerate critical point of K . • For two simple synthesis problems detectA and parityCheck we demonstrate all of the above , using MCMC to approximate the Bayesian posterior and theorems from Watanabe ( 2013 ) to estimate the RLCT ( Section 5 ) . We discuss how W0 is an extended object and how the RLCT relates to the local dimension of W0 near a classical solution . RELATED WORK . The idea of synthesising Turing machines can be traced back to the work of Solomonoff on inductive inference ( Solomonoff , 1964 ) . A more explicit form of the problem was given in Biermann ( 1972 ) who proposed an algorithmic method . Machine learning based approaches appear in Schmidhuber ( 1997 ) and Hutter ( 2004 ) , which pay particular attention to model complexity , and Gaunt et al . ( 2016 ) and Freer et al . ( 2014 ) , the latter using the notion of “ universal probabilistic Turing machine ” ( De Leeuw et al. , 1956 ) . A different probabilistic extension of a universal Turing machine was introduced in Clift & Murfet ( 2018 ) via linear logic . Studies of the singular geometry of learning models go back to Amari et al . ( 2003 ) and , notably , the extensive work of Watanabe ( 2007 ; 2009 ) . 2 TURING MACHINE SYNTHESIS AS SINGULAR LEARNING . All known approaches to program synthesis can be formulated in terms of a singular learning problem . Singular learning theory is the extension of statistical learning theory to account for the fact that the set of learned parameters W0 has the structure of an analytic space as opposed to an analytic manifold ( Watanabe , 2007 ; 2009 ) . It is organised around triples ( p , q , ϕ ) consisting of a class of models $\{ p(y|x , w) : w \in W \}$ , a true distribution q ( y|x ) and a prior ϕ on W . In our approach we fix a Universal Turing Machine ( UTM ) , denoted U , with a description tape ( which specifies the code of the Turing machine to be executed ) , a work tape ( simulating the tape of that Turing machine during its operation ) and a state tape ( simulating the state of that Turing machine ) . The general statistical learning problem that can be formulated using U is the following : given some initial string x on the work tape , predict the state of the simulated machine and the contents of the work tape after some specified number of steps ( Clift & Murfet , 2018 , §7.1 ) . For simplicity , in this paper we consider models that only predict the final state ; the necessary modifications in the general case are routine . We also assume that W parametrises Turing machines whose tape alphabet Σ and set of states Q have been encoded by individual symbols in the tape alphabet of U .
Hence U is actually what we call a pseudo-UTM ( see Appendix E ) . Again , treating the general case is routine and for the present purposes only introduces uninteresting complexity . Let Σ denote the tape alphabet of the simulated machine , Q the set of states and let L , S , R stand for left , stay and right , the possible motions of the Turing machine head . We assume that |Q| > 1 since otherwise the synthesis problem is trivial . The set of ordinary codes $W^{\mathrm{code}}$ for a Turing machine sits inside a compact space W of probability distributions over codes : $$W^{\mathrm{code}} := \prod_{\sigma , q} \Sigma \times Q \times \{ L , S , R \} \subseteq \prod_{\sigma , q} \Delta\Sigma \times \Delta Q \times \Delta\{ L , S , R \} =: W \quad (3)$$ where ∆X denotes the set of probability distributions over a set X , see ( 8 ) , and the product is over pairs ( σ , q ) ∈ Σ × Q . [ Footnote 1 : The space W of parameters is clearly semi-analytic , that is , it is cut out of $\mathbb{R}^d$ for some d by the vanishing $f_1(x) = \cdots = f_r(x) = 0$ of finitely many analytic functions on open subsets of $\mathbb{R}^d$ together with finitely many inequalities $g_1(x) \ge 0 , \ldots , g_s(x) \ge 0$ where the $g_j(x)$ are analytic . In fact W is semi-algebraic , since the $f_i$ and $g_j$ may all be chosen to be polynomial functions . ] For example the point $\{ (\sigma' , q' , d) \}_{\sigma , q} \in W^{\mathrm{code}}$ encodes the machine which , when it reads σ under the head in state q , writes σ′ , transitions into state q′ and moves in direction d . Given $w \in W^{\mathrm{code}}$ let $\operatorname{step}_t(x , w) \in Q$ denote the contents of the state tape of U after t timesteps ( of the simulated machine ) when the work tape is initialised with x and the description tape with w . There is a principled extension of this operation of U to a smooth function $$\Delta\operatorname{step}_t : \Sigma^* \times W \longrightarrow \Delta Q \quad (4)$$ which propagates uncertainty about the symbols on the description tape to uncertainty about the final state , and we refer to this extension as the smooth relaxation of U . The details are given in Appendix F but at an informal level the idea behind the relaxation is easy to understand : to sample from $\Delta\operatorname{step}_t(x , w)$ we run U to simulate t timesteps in such a way that whenever the UTM needs to “ look at ” an entry on the description tape we sample from the corresponding distribution specified by w ( noting that this sampling procedure is repeated every time the UTM looks at a given entry ) . The significance of the particular smooth relaxation that we use is that its derivatives have a logical interpretation ( Clift & Murfet , 2018 , §7.1 ) . The class of models that we consider is $$p(y|x , w) = \Delta\operatorname{step}_t(x , w) \quad (5)$$ where t is fixed for simplicity in this paper . More generally we could also view x as consisting of a sequence and a timeout , as is done in ( Clift & Murfet , 2018 , §7.1 ) . The construction of this model is summarised in Figure 1 .
Figure 1 : The state of U is represented by the state of the work tape , state tape and description ( code ) tape . The work tape is initialised with a sequence x ∈ Σ∗ , the code tape with w ∈ W and the state tape with some standard initial state , the smooth relaxation ∆ step of the pseudo-UTM is run for t steps and the final probability distribution over states is y . Definition 2.1 ( Synthesis problem ) . A synthesis problem for U consists of a probability distribution q ( x , y ) over Σ∗ × Q . We say that the synthesis problem is deterministic if there is f : Σ∗ −→ Q such that q ( y = f ( x ) |x ) = 1 for all x ∈ Σ∗ . Definition 2.2 . The triple ( p , q , ϕ ) associated to a synthesis problem is the model p of ( 5 ) together with the true distribution q and uniform prior ϕ on the parameter space W .
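To make the parametrisation ( 3 ) and the sampling semantics of $\Delta\operatorname{step}_t$ concrete , here is a minimal Python sketch for a small alphabet ; the representation and helper names are ours , not the paper's staged pseudo-UTM .

```python
import numpy as np

SIGMA = ["_", "A", "B"]          # tape alphabet of the simulated machine
Q = ["reject", "accept"]         # simulated states
DIRS = ["L", "S", "R"]           # head motions: left, stay, right

rng = np.random.default_rng(0)

def random_code():
    """A point w in W: for every (symbol, state) pair, a triple of
    probability distributions over (write symbol, next state, direction).
    Classical codes W^code are the vertices (one-hot distributions)."""
    def simplex(n):
        p = rng.random(n)
        return p / p.sum()
    return {(s, q): (simplex(len(SIGMA)), simplex(len(Q)), simplex(len(DIRS)))
            for s in SIGMA for q in Q}

def sample_transition(w, symbol, state):
    """Sample one transition of the simulated machine: whenever the UTM
    'looks at' a description-tape entry, draw from the distribution in w
    (re-drawn on every look, as in the relaxation described above)."""
    p_write, p_state, p_dir = w[(symbol, state)]
    write = SIGMA[rng.choice(len(SIGMA), p=p_write)]
    nxt = Q[rng.choice(len(Q), p=p_state)]
    move = DIRS[rng.choice(len(DIRS), p=p_dir)]
    return write, nxt, move
```

Sampling from $\Delta\operatorname{step}_t(x , w)$ then amounts to running t such transitions from the initial tape x many times and tallying the empirical distribution over final states .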
The Kullback-Leibler function K ( w ) of the synthesis problem is defined by ( 1 ) and a solution to the synthesis problem is a point of W0 . A classical solution is a point of $W_0 \cap W^{\mathrm{code}}$ . As $\Delta\operatorname{step}_t$ is a polynomial function , K is analytic and so W0 is a semi-analytic space ( it is cut out of the semi-analytic space W by the vanishing of K ) . If the synthesis problem is deterministic and q ( x ) is uniform on some finite subset of Σ∗ then W0 is semi-algebraic ( it is cut out of W by polynomial equations ) and all solutions lie at the boundary of the parameter space W ( Appendix D ) . However in general W0 is only semi-analytic and intersects the interior of W ( Example C.2 ) . We assume that q ( y|x ) is realisable , that is , there exists $w_0 \in W$ with $q(y|x) = p(y|x , w_0)$ . A triple ( p , q , ϕ ) is regular if the model is identifiable , i.e . for all inputs $x \in \mathbb{R}^n$ , the map sending w to the conditional probability distribution p ( y|x , w ) is one-to-one , and the Fisher information matrix is non-degenerate . Otherwise , the learning machine is strictly singular ( Watanabe , 2009 , §1.2.1 ) . Triples arising from synthesis problems are typically singular : in Example 2.5 below we show an explicit example where multiple parameters w determine the same model , and in Example C.2 we give an example where the Hessian of K is degenerate everywhere on W0 ( Watanabe , 2009 , §1.1.3 ) . Remark 2.3 . Non-deterministic synthesis problems arise naturally in various contexts , for example in the fitting of algorithms to the behaviour of deep reinforcement learning agents . Suppose an agent is acting in an environment with starting states encoded by x ∈ Σ∗ and possible episode end states by y ∈ Q . Even if the optimal policy is known to determine a computable function Σ∗ −→ Q the statistics of the observed behaviour after finite training time will only provide a function Σ∗ −→ ∆Q and if we wish to fit algorithms to behaviour it makes sense to deal with this uncertainty directly . Definition 2.4 . Let ( p , q , ϕ ) be the triple associated to a synthesis problem . The Real Log Canonical Threshold ( RLCT ) λ of the synthesis problem is defined so that −λ is the largest pole of the meromorphic extension ( Atiyah , 1970 ) of the zeta function $$\zeta(z) = \int K(w)^{z} \, \varphi(w) \, dw .$$ The more singular the analytic space W0 of solutions is , the smaller the RLCT . One way to think of the RLCT is as a count of the effective number of parameters near W0 ( Murfet et al. , 2020 , §4 ) . In Section 3 we relate the RLCT to Kolmogorov complexity and in Section 5 we estimate the RLCT of the synthesis problem detectA given below , using the method explained in Appendix A . Example 2.5 ( detectA ) . The deterministic synthesis problem detectA has Σ = { , A , B } , Q = { reject , accept } and q ( y|x ) is determined by the function taking in a string x of A's and B's and returning the state accept if the string contains an A and state reject otherwise . The conditional true distribution q ( y|x ) is realisable because this function is computed by a Turing machine . Two solutions are shown in Figure 2 . On the left is a parameter $w_l \in W_0 \setminus W^{\mathrm{code}}$ and on the right is $w_r \in W_0 \cap W^{\mathrm{code}}$ . Varying the distributions in $w_l$ that have nonzero entropy we obtain a submanifold $V \subseteq W_0$ containing $w_l$ of dimension 14 .
This leads by ( Watanabe , 2009 , Remark 7.3 ) to a bound on the RLCT of $\lambda \le \frac{1}{2}(30 - 14) = 8$ , which is consistent with the experimental results in Table 1 . This highlights that solutions need not lie at vertices of the probability simplex , and W0 may contain a high-dimensional submanifold around a given classical solution .
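For a sense of how such RLCT estimates are obtained from posterior samples , here is a hedged sketch of a two-temperature estimator in the spirit of Watanabe ( 2013 ) ; the paper's own procedure is in its Appendix A , which is not reproduced in this excerpt , so this stand-in and its names are assumptions .

```python
import numpy as np

def rlct_estimate(nLn_beta1, nLn_beta2, beta1, beta2):
    """Two-temperature RLCT estimator (assumed form, after Watanabe 2013).

    nLn_betaK: samples of n * L_n(w), the scaled empirical negative log
    likelihood, with w drawn from the tempered posterior
    p_beta(w) ∝ p(data | w)^beta * phi(w), e.g. via NUTS at inverse
    temperature beta. Typical choices: beta1 = 1/log(n), beta2 = c/log(n)
    with c != 1, so the poles of the zeta function become visible.
    """
    e1 = np.mean(nLn_beta1)   # tempered-posterior expectation at beta1
    e2 = np.mean(nLn_beta2)   # tempered-posterior expectation at beta2
    return (e1 - e2) / (1.0 / beta1 - 1.0 / beta2)
```

An estimate well below half the nominal parameter count ( here $\frac{30}{2} = 15$ for detectA ) is the experimental signature that the model is strictly singular at its solutions .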
The authors apply algebraic geometry to program synthesis, by identifying programs with points of analytic varieties. They construct a smooth relaxation of the synthesis problem by considering the space of probability distributions over codes for universal Turing machines (which is a smooth/continuous manifold), and then translate this probability into corresponding probabilities of generating a correct program. This allows them to extend the KL divergence on the discrete distribution of codes to a smooth function, whose zeros correspond to the desired programs. They use MCMC to (approximately) find these zeros.
SP:d19db4a50cde893b283fb305d8ce11ef37f3edfc
Network-Agnostic Knowledge Transfer for Medical Image Segmentation
1 INTRODUCTION . Deep learning often requires a sufficiently large training dataset , which is expensive to build and not easy to share between users . For example , a big challenge with semantic segmentation of medical images is the limited availability of annotated data ( Litjens et al. , 2017 ) . Due to ethical concerns and confidentiality constraints , medical datasets are not often released with the trained networks . This highlights the need for knowledge transfer between neural networks , wherein the original training dataset does not need to be accessed . On the other hand , according to the black-box metaphor in deep learning based methods , transferring knowledge between heterogeneous neural networks is difficult . To address these limitations , algorithms were proposed to reuse or share the knowledge of neural networks , such as network weight transfer ( Tan et al. , 2018 ) , knowledge distillation ( Hinton et al. , 2015 ) , federated learning ( Yang et al. , 2019 ) , and self-training ( Xie et al. , 2020b ) . Some conventional algorithms directly transfer the weights of standard large models that were trained on natural image datasets for different tasks ( Kang & Gwak , 2019 ; Motamed et al. , 2019 ; Jodeiri et al. , 2019 ; Raghu et al. , 2019 ) . For example , Iglovikov & Shvets ( 2018 ) adopted VGG11 pre-trained on ImageNet as the encoder of U-Net for 2D image segmentation . Similarly , the convolutional 3D network ( Tran et al. , 2015 ) , pre-trained on natural video datasets , was used as the encoder of 3D U-Net for 3D MR ( Magnetic Resonance ) medical image segmentation ( Zeng et al. , 2017 ) . Transferring the network weights generally requires adjustments to be made to the architecture of the receiver model , which in turn limits the flexibility of the receiver network . Another technique that involves knowledge transfer is federated learning ( Yang et al. , 2019 ) ; it has received attention for its capability to train a large-scale model in a decentralized manner without requiring users' data . In general , federated learning approaches adopt a central model to capture the shared knowledge of all users by aggregating their gradients . Due to the difficulties in transferring knowledge between heterogeneous networks , federated learning often requires all devices , including both the central servers and local users , to use the same neural architecture ( Xie et al. , 2020a ) . To the best of our knowledge , there has been no federated learning system that uses heterogeneous networks . Knowledge distillation is the process of transferring the knowledge of a large neural network or an ensemble of neural networks ( teacher ) to a smaller network ( student ) ( Hinton et al. , 2015 ) . Given a set of trained teacher models , one feeds training data to them and uses their predictions instead of the true labels to train the student model . For effective transfer of knowledge , however , it is essential that a reasonable fraction of the training examples are observable by the student ( Li et al. , 2018 ) or that the metadata at each layer is provided ( Lopes et al. , 2017 ) . Yoo et al . ( 2019 ) used a generative network to extract the knowledge of a teacher network , which generated labeled artificial images to train another network . As can be seen , Yoo et al.'s method had to train an additional generative network for each teacher network . Different from knowledge distillation , self-training aims to transfer knowledge to a more capable model .
The self-training framework ( Scudder , 1965 ) has three main steps : train a teacher model on labeled images ; use the teacher to generate pseudo labels on unlabeled images ; and train a student model on the combination of labeled images and pseudo-labeled images . Xie et al . ( 2020b ) proposed self-training with a noisy student for classification , which iterates this process a few times by treating the student as a teacher to relabel the unlabeled data and training a new student . These studies required labeled images for training the student ; moreover , they implicitly required that the pseudo-labeled images be similar in content to the original labeled images . Inspired by self-training , we propose a network-agnostic knowledge transfer algorithm for medical image segmentation . This algorithm transfers the knowledge of a teacher model to a student model by training the student on a transferal dataset whose annotations are generated by the teacher . The algorithm has the following characteristics : the transferal dataset requires no manual annotation and is independent of the teacher-training dataset ; the student does not need to inherit the weights of the teacher , and as such , the knowledge transfer can be conducted between heterogeneous neural architectures ; it is straightforward to implement the algorithm with fine-tuning to solve a downstream task , especially by using the downstream task dataset as the transferal dataset ; and the algorithm is able to transfer the knowledge of an ensemble of models that are trained independently into one model . We conducted extensive experiments on semantic segmentation using five state-of-the-art neural networks and seven datasets . The neural networks include DeepLabv3+ ( Chen et al. , 2018 ) , U-Net ( Ronneberger et al. , 2015 ) , AttU-Net ( Oktay et al. , 2018 ) , SDU-Net ( Wang et al. , 2020 ) , and Panoptic-FPN ( Kirillov et al. , 2019 ) . Out of the seven datasets , four public datasets involve breast lesion , nerve structure , skin lesion , and natural image objects , and three internal/in-house datasets involve breast lesion ( a single dataset with two splits ) and thyroid nodule . Experiments showed that the proposed algorithm performed well for knowledge transfer on semantic image segmentation . 2 ALGORITHM . The main goal of the proposed algorithm is to transfer the knowledge of one or more neural networks to an independent one , without access to the original training datasets . This section presents the proposed knowledge transfer algorithm for semantic segmentation in Algorithm 1 and its application with fine-tuning for a downstream task in Algorithm 2 ( A.4 ) . The knowledge transfer algorithm first employs one or more teacher models to generate pseudo masks for the transferal dataset and then trains the student model on the pseudo-annotated transferal dataset . For an image x in the transferal dataset D , we get the pseudo mask as the weighted average of the teacher models' outputs : $y = \sum_{T_i \in T} w_i \cdot T_i(x)$ , where $w_i$ is the weight for the model $T_i$ . The output of a model $T_i(x)$ is either a soft mask ( pixel value varies from 0 to 1 ) or a binary mask ( pixel value is 0 or 1 ) . Since the ensemble of teacher models is not our primary focus , we simply set the weights equally for all teacher models . We adopt two constraints to exclude images without salient targets , as shown in Figure 1 . The first constraint is on the target pixel number in the pseudo mask , where a target pixel is a pixel with a value above 0.5 .
We exclude the image ( of size 384 × 384 pixels ) if the target pixel number is less than a threshold of 256 . The second constraint is on the gray-level entropy of the pseudo mask . A high entropy value implies that the target and the background have no obvious difference , and vice versa . We exclude images with a gray-level entropy higher than a threshold of 2.5 . We employ the second constraint only in the case that the teacher model outputs a soft mask rather than a binary mask .

Algorithm 1 Network-Agnostic Knowledge Transfer for Semantic Segmentation
Require : Teacher set T with trained models , randomly initialized student model S
Require : Transferal dataset D
Require : Target pixel number threshold α , gray-level entropy threshold β
1 : for each datapoint x in D do
2 :   Obtain pseudo mask $y = \sum_{T_i \in \mathcal{T}} w_i \cdot T_i(x)$
3 :   N ← number of target pixels ( value above 0.5 ) in y
4 :   E ← gray-level entropy of y
5 :   if N < α or E > β then
6 :     Exclude x from D
7 :   else
8 :     Update x as ( x , y )
9 :   end if
10 : end for
11 : Train student model S on pseudo-annotated D
12 : return Trained S

In this algorithm , the teacher models and the student model can be independent of each other and do not rely on each other during the training or testing phase . The transferal dataset is independent of the teacher-training datasets , which are not observable for the student model . The student model gains the knowledge of the teacher through the transferal dataset . As a result , the student model is trained to work well on datasets similar to the teacher-training dataset , while it is not aimed at predicting the ground truth of the transferal dataset . In the case that a small dataset Dt with ground truth is available for a downstream segmentation task ( target task ) , it is straightforward to further fine-tune the student model that has been trained on the pseudo-annotated transferal dataset , as Algorithm 2 shows . The transferal dataset D is independent of the target task , while it can also be a non-annotated dataset from the target task . 3 RELATED NEURAL NETWORKS . We posit that if the teacher model is ineffective , a more capable student model can easily achieve similar performance ; if the student model , however , is ineffective , the cause can not be easily identified and may be attributed to the knowledge transfer algorithm or to the inherent limitations of the student model . We experimented with five neural networks , all of which have different architectures and have been shown to provide the best outcome on various segmentation tasks . DeepLabv3+ : DeepLabv3+ ( Chen et al. , 2018 ) is one of Google ' s latest and best performing semantic segmentation models . It combines the advantages of spatial pyramid pooling with an encoder-decoder structure . Spatial pyramid pooling captures rich contextual information , while the decoder path is able to gradually recover object boundaries . U-Net : U-Net ( Ronneberger et al. , 2015 ) has been the most widely used network for medical image segmentation . U-Net consists of an encoder-decoder structure , with the encoder path extracting rich semantic information and the decoder path recovering resolution and integrating contextual information . AttU-Net : AttU-Net ( Oktay et al. , 2018 ) is a modification of U-Net , where attention gates are used to control the skip connections from the encoder to the decoder . Attention gates make the model focus on target structures by highlighting important features and suppressing irrelevant features . SDU-Net : SDU-Net ( Wang et al.
, 2020 ) utilizes parallel atrous convolutions with different dilation rates in each layer , which is effective in capturing features in both small and large receptive fields . SDU-Net has demonstrated better performance using merely 40 percent of the parameters of U-Net . Panoptic-FPN : Panoptic-FPN ( Kirillov et al. , 2019 ) merges semantic segmentation and object detection , such that each pixel is given a class as in semantic segmentation and each object is given a unique ID as in object detection . Panoptic-FPN provides a richer and more complete segmentation .
The paper proposes to use student-teacher training as a way of transferring knowledge between neural networks with different architectures without access to the source data. Instead, the authors propose to use a separate dataset to transfer the knowledge of the teacher network and a potentially different dataset for fine-tuning. The paper evaluates the method with various segmentation architectures by pretraining a DeepLabv3+ on an internal breast lesion dataset and testing transfer and fine-tuning on different medical datasets. The authors find that knowledge transfer performs similarly to regular transfer learning in most combinations of datasets.
SP:8ddb96d9abf2c524bd664360a755cbe76703c109
Better sampling in explanation methods can prevent dieselgate-like deception
1 INTRODUCTION . Machine learning models are used in many areas where , besides predictive performance , their comprehensibility is also important , e.g. , in healthcare , the legal domain , banking , insurance , consultancy , etc . Users in those areas often do not trust a machine learning model if they do not understand why it made a given decision . Some models , such as decision trees , linear regression , and naïve Bayes , are intrinsically easier to understand due to the simple representation used . However , complex models , mostly used in practice due to better accuracy , are incomprehensible and behave like black boxes , e.g. , neural networks , support vector machines , random forests , and boosting . For these models , the area of explainable artificial intelligence ( XAI ) has developed post-hoc explanation methods that are model-independent and determine the importance of each feature for the predicted outcome . Frequently used methods of this type are IME ( Štrumbelj & Kononenko , 2013 ) , LIME ( Ribeiro et al. , 2016 ) , and SHAP ( Lundberg & Lee , 2017 ) . To determine the features ' importance , these methods use perturbation sampling . Slack et al . ( 2020 ) recently noticed that the data distribution obtained in this way is significantly different from the original distribution of the training data , as we illustrate in Figure 1a . They showed that this can be a serious weakness of these methods . The possibility to manipulate the post-hoc explanation methods is a critical problem for the ML community , as the reliability and robustness of explanation methods are essential for their use and public acceptance . These methods are used to interpret otherwise black-box models , help in debugging models , and reveal models ' biases , thereby establishing trust in their behavior . Non-robust explanation methods that can be manipulated can lead to catastrophic consequences , as explanations do not detect racist , sexist , or otherwise biased models if the model owner wants to hide these biases . This would enable dieselgate-like cheating , where owners of sensitive prediction models could hide the socially , morally , or legally unacceptable biases present in their models . As the schema of the attack on explanation methods in Figure 1b shows , owners of prediction models could detect when their models are examined and return unbiased predictions in this case and biased predictions in normal use . This could have serious consequences in areas where predictive models ' reliability and fairness are essential , e.g. , in healthcare or banking . Such weaknesses can undermine users ' trust in machine learning models in general and slow down technological progress . In this work , we propose to change the main perturbation-based explanation methods and make them more resistant to manipulation attempts . In our solution , the problematic perturbation-based sampling is replaced with more advanced sampling , which uses modern data generators that better capture the distribution of the training dataset . We test three generators : the RBF-network-based generator ( Robnik-Šikonja , 2016 ) , the random-forest-based generator available in the R library semiArtificial ( Robnik-Šikonja , 2019 ) , and the generator using variational autoencoders ( Miok et al. , 2019 ) . We show that the modified gLIME and gSHAP methods are much more robust than their original versions . For the IME method , which was not previously analyzed , we show that it is already quite robust .
We release the modified explanation methods under an open-source license ( https://anonymous.4open.science/r/5d550c62-5c5c-4ee3-81ef-ab96fe0838ca/ ) . In this work , we use the term robustness of the explanation method as a notion of resilience against adversarial attacks , i.e . as the ability of an explanation method to recognize a biased classifier in an adversarial environment . This type of robustness could be more formally defined as the number of instances where the adversarial model ' s bias was correctly recognized . We focus on robustness concerning the attacks described in Slack et al . ( 2020 ) . There are other notions of robustness in explanation methods ; e.g. , Alvarez-Melis & Jaakkola ( 2018 ) define the robustness of explanations in the sense that similar inputs should give rise to similar explanations . The remainder of the paper is organized as follows . In Section 2 , we present the necessary background and related work on explanation methods , attacks on them , and data generators . In Section 3 , we propose a defense against the described weaknesses of explanation methods , and in Section 4 , we empirically evaluate the proposed solution . In Section 5 , we draw conclusions and present ideas for further work . 2 BACKGROUND AND RELATED WORK . In this section , we first briefly describe the background on post-hoc explanation methods and attacks on them , followed by data generators and related work on the robustness of explanation methods . 2.1 POST-HOC EXPLANATION METHODS . The current state-of-the-art perturbation-based explanation methods , IME , LIME , and SHAP , explain predictions for individual instances . To form an explanation of a given instance , they measure the difference in prediction between the original instance and its neighboring instances , obtained with perturbation sampling . Using the generated instances , the LIME method builds a local interpretable model , e.g. , a linear model . The SHAP and IME methods determine the impact of the features as Shapley values from coalitional game theory ( Shapley , 1988 ) . In this way , they ensure that the produced explanations obey the four Shapley fairness axioms ( Štrumbelj & Kononenko , 2013 ) . Due to the exponential time complexity of Shapley value calculation , both methods try to approximate the values . The three methods are explained in detail in the above references , and a formal overview is presented in Appendix A ; below , we give a brief description . In our exposition of the explanation methods , we denote with f the predictive model and with x the instance we are explaining . Explanations of instances with the LIME method are obtained with an interpretable model g. The model g has to be both locally accurate ( so that it can obtain correct feature contributions ) and simple ( so that it is interpretable ) . Therefore , in LIME , g is a linear regression model trained on instances sampled in the neighborhood of the explained instance x , which is first normalized . The perturbed explanation instances are generated by adding Gaussian noise to each feature of x independently . Using linear models to determine the contributions of features , LIME is fast but may produce poor explanations for instances close to complex decision boundaries .
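As an aside , here is a minimal sketch of the LIME-style perturbation sampling just described , and of why it is detectable ( the point of Figure 1a ) ; the synthetic data , noise scale , and names are our own illustrative assumptions :

```python
import numpy as np

def lime_perturb(x, n_samples=1000, scale=1.0, rng=None):
    # Add Gaussian noise to each (normalized) feature independently.
    rng = rng if rng is not None else np.random.default_rng(0)
    return x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])            # correlated features
X = rng.multivariate_normal(np.zeros(2), cov, 500)  # "original" data
Z = lime_perturb(X[0], n_samples=500, rng=rng)      # perturbation samples
# The perturbed cloud is isotropic around X[0] and ignores the feature
# correlation present in X, which is what lets a detector tell them apart.
print(np.corrcoef(X.T)[0, 1], np.corrcoef(Z.T)[0, 1])
```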
SHAP estimates Shapley values using the approach of LIME ( i.e . local linear models computed with the weighted least squares algorithm ) but with a different neighborhood and loss function for local accuracy , which ensures that Shapley values are obtained . The sampling of explanation instances in SHAP relies on the distribution set D , obtained from the training set by computing the centroids of k-means clusters . IME computes Shapley values using a pure sampling approach ( no proxy with an interpretable model ) . Its sampling replaces part of the explained instance with existing values of selected features and then averages the differences in prediction between the explained instance and the samples . This makes the method slower but also potentially more robust , as the method does not assume the shape of the dependency . 2.2 ATTACKS ON EXPLANATIONS . The idea of the attack on explanations presented in Slack et al . ( 2020 ) is that the attacker creates an adversarial ( biased ) model whose behavior it wants to hide from explanation methods , e.g. , a racist model that does not grant credits to blacks . As Figure 1b illustrates , a part of the attacker ' s adversarial model is an unbiased model , which , e.g. , does not take disputed attributes such as race into account . The adversarial model tries to manipulate the explanation method by behaving like an unbiased model on the perturbed instances used for the explanation – in this case , it uses the unbiased model ' s output . On instances from the original distribution that are a part of normal model use , the adversarial model uses the biased model ' s output . To distinguish between the original and perturbation-sampled data , the adversarial model contains a decision model that selects the right type of model ( biased or unbiased ) based on the predicted type of instance . The reason that the decision function can distinguish between the original data and sampled instances lies in the shortcomings of explanation methods that use perturbation sampling ( see Figure 1a ) . For that graph , we created new instances based on the COMPAS dataset ( see Section 4 ) with Gaussian noise , as in the LIME explanation method . We mapped the original and sampled instances to 2D space using a PCA transformation . We can see that the sampled data is distributed differently from the original data . As the difference is revealed by PCA , other machine learning algorithms can also effectively distinguish between sampled and original data . The attack can be formally defined in the following way . The attacker has a biased model b and a sample S from the distribution $X_{dist}$ . The attacker has no additional information about the distribution $X_{dist}$ . The attacker ' s goal is to produce an adversarial model e that behaves like b on data from $X_{dist}$ but does not reveal b ' s bias to the explanation methods . We assume that the attacker also has an unbiased model ψ that hides b ' s bias . The attacker creates a decision model d that should output 1 if the input instance x is from $X_{dist}$ and 0 otherwise . The model d is trained on S and generated perturbation samples . The attacker creates the adversarial model e with its output defined by the following equation :

$$ e(x) = \begin{cases} b(x) , & d(x) = 1 \\ \psi(x) , & d(x) = 0 \end{cases} \qquad (1) $$
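A minimal sketch of the adversarial model ' s routing logic in Eq . ( 1 ) . The names `biased_model` and `unbiased_model` , the scikit-learn decision model , and the noise scale are illustrative assumptions , not the exact construction of Slack et al . ( 2020 ) :

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_decision_model(S, n_perturb=10, noise_scale=1.0, rng=None):
    # d: label real samples S as 1 and LIME-style perturbations as 0.
    rng = rng if rng is not None else np.random.default_rng(0)
    X_pert = np.repeat(S, n_perturb, axis=0)
    X_pert = X_pert + rng.normal(scale=noise_scale, size=X_pert.shape)
    X = np.vstack([S, X_pert])
    y = np.concatenate([np.ones(len(S)), np.zeros(len(X_pert))])
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def adversarial_model(biased_model, unbiased_model, decision_model):
    # e(x) = b(x) if d(x) = 1 else psi(x), applied row-wise to a batch X.
    def e(X):
        in_dist = decision_model.predict(X)
        return np.where(in_dist == 1,
                        biased_model.predict(X),
                        unbiased_model.predict(X))
    return e
```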
2.3 DATA GENERATORS . We used three different data generators , based on different algorithms that model the distribution of the training set : a variational autoencoder with Monte Carlo dropout ( Miok et al. , 2019 ) , an RBF network ( Robnik-Šikonja , 2016 ) , and a random forest ensemble ( Robnik-Šikonja , 2019 ) . In the remainder of the paper , we refer to these generators as MCD-VAE , rbfDataGen , and TreeEnsemble , respectively . An autoencoder ( AE ) consists of two neural networks , called the encoder and the decoder . It aims to compress the input instances by passing them through the encoder and then reconstructing them to the original values with the decoder . Once the AE is trained , it can be used to generate new instances . A variational autoencoder ( Doersch , 2016 ) is a special type of autoencoder where the vectors z in the latent space ( the output of the encoder and input of the decoder ) are normally distributed . The encoder therefore approximates the posterior distribution $p(z|x)$ , where we assume $p(z|x) \sim \mathcal{N}(\mu_x, \Sigma_x)$ . The generator proposed by Miok et al . ( 2019 ) uses Monte Carlo dropout ( Gal & Ghahramani , 2016 ) on the trained decoder . The idea of this generator is to propagate the instance x through the encoder to obtain its latent encoding z . This encoding can be propagated many times through the decoder , each time obtaining a different result due to the Monte Carlo dropout while preserving similarity to the original instance x . The RBF network ( Moody & Darken , 1989 ) uses Gaussian kernels as hidden layer units in a neural network . Once the network ' s parameters are learned , the rbfDataGen generator ( Robnik-Šikonja , 2016 ) can sample from the normal distributions defined by the obtained Gaussian kernels to generate new instances . The TreeEnsemble generator ( Robnik-Šikonja , 2019 ) builds a set of random trees ( a forest ) that describe the data . When generating new instances , the generator traverses from the root to a leaf of a randomly chosen tree , setting the values of features in the decision nodes on the way . When reaching a leaf , it assumes that it has captured the dependencies between features . Therefore , the remaining features can be generated independently according to the observed empirical distribution in this leaf . For each generated instance , all attributes can be generated in one leaf , or another tree can be randomly selected where unassigned feature values are filled in . By selecting different trees , different features are set in the interior nodes and leaves .
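A minimal sketch of MCD-VAE-style generation as described above : encode an instance once , then decode repeatedly with dropout left active ( PyTorch ' s train-mode dropout stands in for MC dropout ) . The layer sizes are our own illustrative assumptions , not the architecture of Miok et al . ( 2019 ) :

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, x_dim=20, z_dim=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, z_dim), nn.Linear(64, z_dim)
    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, z_dim=8, x_dim=20, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, x_dim))
    def forward(self, z):
        return self.net(z)

@torch.no_grad()
def mcd_generate(encoder, decoder, x, n_samples=10):
    mu, _ = encoder(x)   # latent encoding z of the seed instance x
    decoder.train()      # keep dropout stochastic at generation time
    return torch.stack([decoder(mu) for _ in range(n_samples)])

# Usage: generate 10 new instances similar to a seed instance.
x = torch.randn(1, 20)
samples = mcd_generate(Encoder(), Decoder(), x)
```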
This paper proposes a defense against the adversarial attacks on explanation methods described in Slack et al. (2020). In particular, by using sampling methods that more closely resemble the original data distribution, the authors make it difficult for the Out-of-Distribution detector to successfully discriminate between instances used for predicting and instances used for explaining.
SP:8c8d93b1668b5497a4d2b318b6d709f200788262
How Does Mixup Help With Robustness and Generalization?
1 INTRODUCTION . Mixup was introduced by Zhang et al . ( 2018 ) as a data augmentation technique . It has been empirically shown to substantially improve test performance and robustness to adversarial noise of state-of-the-art neural network architectures ( Zhang et al. , 2018 ; Lamb et al. , 2019 ; Thulasidasan et al. , 2019 ; Arazo et al. , 2019 ) . Despite the impressive empirical performance , it is still not fully understood why Mixup leads to such improvement across the different aspects mentioned above . We first provide more background about the robustness and generalization properties of deep networks and Mixup . Then we give an overview of our main contributions . Adversarial robustness . Although neural networks have achieved remarkable success in many areas such as natural language processing ( Devlin et al. , 2018 ) and image recognition ( He et al. , 2016a ) , it has been observed that neural networks are very sensitive to adversarial examples – predictions can be easily flipped by human-imperceptible perturbations ( Goodfellow et al. , 2014 ; Szegedy et al. , 2013 ) . Specifically , in Goodfellow et al . ( 2014 ) , the authors use the fast gradient sign method ( FGSM ) to generate adversarial examples , which causes an image of a panda to be classified as a gibbon with high confidence . Although various defense mechanisms have been proposed against adversarial attacks , those mechanisms typically sacrifice test accuracy in exchange for robustness ( Tsipras et al. , 2018 ) , and many of them require a significant amount of additional computation time . In contrast , Mixup training tends to improve test accuracy and at the same time also exhibits a certain degree of resistance to adversarial examples , such as those generated by FGSM ( Lamb et al. , 2019 ) . Moreover , the corresponding training time is relatively modest . As an illustration , we compare the robust test accuracy between a model trained with Mixup and a model trained with standard empirical risk minimization ( ERM ) under adversarial attacks generated by FGSM ( Fig . 1a ) . The model trained with the Mixup loss has much better robust accuracy . The robustness of Mixup under other attacks has also been empirically studied in Lamb et al . ( 2019 ) . Generalization . Generalization theory has been a central focus of learning theory ( Vapnik , 1979 ; 2013 ; Bartlett et al. , 2002 ; Bartlett & Mendelson , 2002 ; Bousquet & Elisseeff , 2002 ; Xu & Mannor , 2012 ) , but it still remains a mystery for many modern deep learning algorithms ( Zhang et al. , 2016 ; Kawaguchi et al. , 2017 ) . For Mixup , from Fig . 1b , we observe that Mixup training results in better test performance than standard empirical risk minimization . This is mainly due to its good generalization properties , since the training errors are small for both Mixup training and empirical risk minimization ( experiments with training error results are included in the appendix ) . While there have been many enlightening studies trying to establish generalization theory for modern machine learning algorithms ( Sun et al. , 2015 ; Neyshabur et al. , 2015 ; Hardt et al. , 2016 ; Bartlett et al. , 2017 ; Kawaguchi et al. , 2017 ; Arora et al. , 2018 ; Neyshabur & Li , 2019 ) , few existing studies have illustrated the generalization behavior of Mixup training in theory . Our contributions . In this paper , we theoretically investigate how Mixup improves both adversarial robustness and generalization .
We begin by relating the loss function induced by Mixup to the standard loss with additional adaptive regularization terms . Based on the derived regularization terms , we show that Mixup training minimizes an upper bound on the adversarial loss , which leads to robustness against single-step adversarial attacks . For generalization , we show how the regularization terms can reduce over-fitting and lead to better generalization behavior than that of standard training . Our analysis provides insights and a framework to understand the impact of Mixup . Outline of the paper . Section 2 introduces the notation and problem setup . In Section 3 , we present our main theoretical results , including the regularization effect of Mixup and the subsequent analysis showing that such regularization improves adversarial robustness and generalization . Section 4 concludes with a discussion of future work . Proofs are deferred to the Appendix . 1.1 RELATED WORK . Since its advent , Mixup training ( Zhang et al. , 2018 ) has been shown to substantially improve generalization and single-step adversarial robustness across a wide range of tasks , in both supervised ( Lamb et al. , 2019 ; Verma et al. , 2019a ; Guo et al. , 2019 ) and semi-supervised ( Berthelot et al. , 2019 ; Verma et al. , 2019b ) settings . This has motivated a recent line of work developing a number of variants of Mixup , including Manifold Mixup ( Verma et al. , 2019a ) , Puzzle Mix ( Kim et al. , 2020 ) , CutMix ( Yun et al. , 2019 ) , Adversarial Mixup Resynthesis ( Beckham et al. , 2019 ) , and PatchUp ( Faramarzi et al. , 2020 ) . However , theoretical understanding of the underlying mechanism of why Mixup and its variants perform well on generalization and adversarial robustness is still limited . Some of the theoretical tools we use in this paper are related to Wang & Manning ( 2013 ) and Wager et al . ( 2013 ) , where the authors use a second-order Taylor approximation to derive a regularized loss function for Dropout training . This technique was then extended to derive more properties of Dropout , including the inductive bias of Dropout ( Helmbold & Long , 2015 ) , the regularization effect in matrix factorization ( Mianjy et al. , 2018 ) , and the implicit regularization in neural networks ( Wei et al. , 2020 ) . This technique has recently been applied to Mixup in a parallel and independent work ( Carratino et al. , 2020 ) to derive regularization terms . Compared with the results in Carratino et al . ( 2020 ) , our derived regularization enjoys a simpler form and therefore enables the subsequent analysis of adversarial robustness and generalization . We clarify the detailed differences in Section 3 . To the best of our knowledge , our paper is the first to provide a theoretical treatment connecting regularization , adversarial robustness , and generalization for Mixup training . 2 PRELIMINARIES . In this section , we state our notation and briefly recap the definition of Mixup . Notation . We denote the general parameterized loss as $l(\theta, z)$ , where $\theta \in \Theta \subseteq \mathbb{R}^d$ and $z = (x, y)$ is the input and output pair . We consider a training dataset $S = \{(x_1, y_1), \cdots, (x_n, y_n)\}$ , where $x_i \in \mathcal{X} \subseteq \mathbb{R}^p$ and $y_i \in \mathcal{Y} \subseteq \mathbb{R}^m$ are drawn i.i.d . from a joint distribution $P_{x,y}$ . We further denote $\tilde{x}_{i,j}(\lambda) = \lambda x_i + (1-\lambda) x_j$ and $\tilde{y}_{i,j}(\lambda) = \lambda y_i + (1-\lambda) y_j$ for $\lambda \in [0,1]$ , and let $\tilde{z}_{i,j}(\lambda) = (\tilde{x}_{i,j}(\lambda), \tilde{y}_{i,j}(\lambda))$ .
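To make the notation concrete , here is a sketch of forming $\tilde{z}_{i,j}(\lambda)$ for a minibatch ; shapes , the Beta parameters , and the permutation-based pairing are illustrative assumptions :

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, beta=1.0, rng=None):
    # x: (n, p) inputs; y: (n, m) one-hot targets; lambda ~ Beta(alpha, beta).
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, beta, size=len(x))
    j = rng.permutation(len(x))                 # random partner for each i
    x_mix = lam[:, None] * x + (1 - lam[:, None]) * x[j]
    y_mix = lam[:, None] * y + (1 - lam[:, None]) * y[j]
    return x_mix, y_mix

# Example: mix a batch of 4 two-dimensional inputs with one-hot labels.
x = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
y = np.eye(2)[[0, 0, 1, 1]]
print(mixup_batch(x, y))
```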
Let $L(\theta) = \mathbb{E}_{z \sim P_{x,y}} l(\theta, z)$ denote the standard population loss and $L^{std}_n(\theta, S) = \frac{1}{n} \sum_{i=1}^n l(\theta, z_i)$ denote the standard empirical loss . For two distributions $D_1$ and $D_2$ , we use $p D_1 + (1-p) D_2$ for $p \in (0,1)$ to denote the mixture distribution such that a sample is drawn with probabilities $p$ and $1-p$ from $D_1$ and $D_2$ , respectively . For a parameterized function $f_\theta(x)$ , we use $\nabla f_\theta(x)$ and $\nabla_\theta f_\theta(x)$ to denote the gradients with respect to $x$ and $\theta$ , respectively . For two vectors $a$ and $b$ , we use $\cos(a, b)$ to denote $\langle a, b \rangle / (\|a\| \cdot \|b\|)$ . Mixup . Generally , for classification , the output $y_i$ is the embedding of the class of $x_i$ , i.e . the one-hot encoding obtained by taking $m$ as the total number of classes and letting $y_i \in \{0,1\}^m$ be the binary vector with all entries equal to zero except for the one corresponding to the class of $x_i$ . In particular , if we take $m = 1$ , this degenerates to binary classification . For regression , $y_i$ can be any real number/vector . The Mixup loss is defined in the following form :

$$ L^{mix}_n(\theta, S) = \frac{1}{n^2} \sum_{i,j=1}^n \mathbb{E}_{\lambda \sim \mathcal{D}_\lambda} \, l\big(\theta, \tilde{z}_{i,j}(\lambda)\big) , \qquad (1) $$

where $\mathcal{D}_\lambda$ is a distribution supported on $[0,1]$ . Throughout the paper , we consider the most commonly used $\mathcal{D}_\lambda$ – the Beta distribution $Beta(\alpha, \beta)$ for $\alpha, \beta > 0$ . 3 MAIN RESULTS . In this section , we first introduce a lemma that characterizes the regularization effect of Mixup . Based on this lemma , we then derive our main theoretical results on adversarial robustness and generalization error bounds in Sections 3.2 and 3.3 , respectively . 3.1 THE REGULARIZATION EFFECT OF MIXUP . As a starting point , we demonstrate how Mixup training is approximately equivalent to optimizing a regularized version of the standard empirical loss $L^{std}_n(\theta, S)$ . Throughout the paper , we consider the following class of loss functions for the prediction function $f_\theta(x)$ and target $y$ :

$$ \mathcal{L} = \big\{ l(\theta, (x, y)) \mid l(\theta, (x, y)) = h(f_\theta(x)) - y f_\theta(x) \text{ for some function } h \big\} . \qquad (2) $$

This function class $\mathcal{L}$ includes many commonly used losses , including the losses induced by Generalized Linear Models ( GLMs ) , such as linear regression and logistic regression , and also cross-entropy for neural networks . In the following , we introduce a lemma stating that Mixup training with $\lambda \sim \mathcal{D}_\lambda = Beta(\alpha, \beta)$ induces a regularized loss function with the weight of each regularization term specified by a mixture of Beta distributions , $\tilde{\mathcal{D}}_\lambda = \frac{\alpha}{\alpha+\beta} Beta(\alpha+1, \beta) + \frac{\beta}{\alpha+\beta} Beta(\beta+1, \alpha)$ . Lemma 3.1 . Consider the loss function $l(\theta, (x, y)) = h(f_\theta(x)) - y f_\theta(x)$ , where $h(\cdot)$ and $f_\theta(\cdot)$ for all $\theta \in \Theta$ are twice differentiable . We further denote $\tilde{\mathcal{D}}_\lambda$ as the mixture of two Beta distributions , i.e. , $\frac{\alpha}{\alpha+\beta} Beta(\alpha+1, \beta) + \frac{\beta}{\alpha+\beta} Beta(\beta+1, \alpha)$ , and $\mathcal{D}_X$ as the empirical distribution of the training dataset $S = (x_1, \cdots, x_n)$ . The corresponding Mixup loss $L^{mix}_n(\theta, S)$ , as defined in Eq . ( 1 ) with $\lambda \sim \mathcal{D}_\lambda = Beta(\alpha, \beta)$ , can be rewritten as

$$ L^{mix}_n(\theta, S) = L^{std}_n(\theta, S) + \sum_{i=1}^3 R_i(\theta, S) + \mathbb{E}_{\lambda \sim \tilde{\mathcal{D}}_\lambda}\big[(1-\lambda)^2 \varphi(1-\lambda)\big] , $$

where $\lim_{a \to 0} \varphi(a) = 0$ and

$$ R_1(\theta, S) = \frac{\mathbb{E}_{\lambda \sim \tilde{\mathcal{D}}_\lambda}[1-\lambda]}{n} \sum_{i=1}^n \big(h'(f_\theta(x_i)) - y_i\big)\, \nabla f_\theta(x_i)^\top\, \mathbb{E}_{r_x \sim \mathcal{D}_X}[r_x - x_i] , $$

$$ R_2(\theta, S) = \frac{\mathbb{E}_{\lambda \sim \tilde{\mathcal{D}}_\lambda}[(1-\lambda)^2]}{2n} \sum_{i=1}^n h''(f_\theta(x_i))\, \nabla f_\theta(x_i)^\top\, \mathbb{E}_{r_x \sim \mathcal{D}_X}\big[(r_x - x_i)(r_x - x_i)^\top\big]\, \nabla f_\theta(x_i) , $$

$$ R_3(\theta, S) = \frac{\mathbb{E}_{\lambda \sim \tilde{\mathcal{D}}_\lambda}[(1-\lambda)^2]}{2n} \sum_{i=1}^n \big(h'(f_\theta(x_i)) - y_i\big)\, \mathbb{E}_{r_x \sim \mathcal{D}_X}\big[(r_x - x_i)^\top \nabla^2 f_\theta(x_i)\, (r_x - x_i)\big] . $$
By collecting the higher-order terms of the approximation in $\varphi(\cdot)$ , this result shows that Mixup is related to regularizing $\nabla f_\theta(x_i)$ and $\nabla^2 f_\theta(x_i)$ , the first and second directional derivatives with respect to $x_i$ . Throughout the paper , our theory is mainly built upon an analysis of the quadratic approximation of $L^{mix}_n(\theta, S)$ , which we further denote as

$$ \tilde{L}^{mix}_n(\theta, S) := L^{std}_n(\theta, S) + \sum_{i=1}^3 R_i(\theta, S) . \qquad (3) $$

Comparison with related work . The result in Lemma 3.1 relies on a second-order Taylor expansion of the loss function in Eq . ( 1 ) . Similar approximations have been proposed before to study the regularization effect of Dropout training ; see Wang & Manning ( 2013 ) ; Wager et al . ( 2013 ) ; Mianjy et al . ( 2018 ) ; Wei et al . ( 2020 ) . Recently , Carratino et al . ( 2020 ) independently used a similar approximation to study the regularization effect of Mixup . However , the regularization terms derived in Carratino et al . ( 2020 ) are much more complicated than those in Lemma 3.1 . For example , for GLMs , our technique yields the regularization term shown in Lemma 3.3 , which is much simpler than those in Corollaries 2 and 3 of Carratino et al . ( 2020 ) . One technical step we use to simplify the regularization expression is to equate Mixup with input perturbation ; see the proof in the Appendix for more details . This simpler expression enables us to study the robustness and generalization of Mixup in the subsequent sections . Validity of the approximation . In the following , we present numerical experiments to support the approximation in Eq . ( 3 ) . Following the setup of the numerical validations in Wager et al . ( 2013 ) ; Carratino et al . ( 2020 ) , we experimentally show that the quadratic approximation is generally very accurate . Specifically , we train a logistic regression model ( as one example of a GLM , which we study later ) and a two-layer neural network with ReLU activations . We use the two-moons dataset ( Buitinck et al. , 2013 ) . Fig . 2 shows the training and test losses for two models trained with different loss functions : the original Mixup loss and the approximate Mixup loss . Both models had the same random initialization scheme . Throughout training , we compute the test and training loss of each model using its own loss function . The empirical results show that the approximation of the Mixup loss is quite close to the original Mixup loss .
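As a self-contained sanity check on Lemma 3.1 and Eq . ( 3 ) , the following sketch compares a Monte Carlo estimate of the exact Mixup loss with the quadratic approximation for logistic regression ( where $\nabla^2 f_\theta = 0$ , so $R_3$ vanishes ) ; the synthetic data , weight scale , and sample sizes are illustrative assumptions :

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))
theta = 0.5 * rng.normal(size=p)                # moderate weights
y = (rng.random(n) < 1 / (1 + np.exp(-X @ theta))).astype(float)

f = X @ theta                                   # f_theta(x_i) = theta^T x_i
sigmoid = lambda t: 1 / (1 + np.exp(-t))
std_loss = np.mean(np.logaddexp(0, f) - y * f)  # h(f) - y f, h(f) = log(1+e^f)

alpha, beta = 1.0, 1.0
m = 200_000
# Exact Mixup loss: Monte Carlo over random pairs (i, j) and lambda.
i, j = rng.integers(n, size=m), rng.integers(n, size=m)
lam = rng.beta(alpha, beta, size=m)
fm = lam * f[i] + (1 - lam) * f[j]              # f is linear, so f(x~) mixes
ym = lam * y[i] + (1 - lam) * y[j]
mix_exact = np.mean(np.logaddexp(0, fm) - ym * fm)

# Quadratic approximation (3): std loss + R1 + R2 (R3 = 0 here).
pick = rng.random(m) < alpha / (alpha + beta)   # sample from D~_lambda
lam_t = np.where(pick, rng.beta(alpha + 1, beta, size=m),
                       rng.beta(beta + 1, alpha, size=m))
c1, c2 = np.mean(1 - lam_t), np.mean((1 - lam_t) ** 2)
resid = sigmoid(f) - y                          # h'(f) - y
R1 = c1 * np.mean(resid * (np.mean(f) - f))     # E_r[theta^T (r - x_i)]
quad = np.mean(f**2) - 2 * f * np.mean(f) + f**2  # E_r[(theta^T (r-x_i))^2]
R2 = c2 / 2 * np.mean(sigmoid(f) * (1 - sigmoid(f)) * quad)
# Should roughly agree, up to MC error and the higher-order phi term:
print(mix_exact, std_loss + R1 + R2)
```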
The paper theoretically studies the beneficial effect of Mixup on the robustness and generalization of machine learning models. The Mixup loss is rewritten as the sum of the original empirical loss and a regularization term (plus a higher-order term). For robustness, the regularization term is proven to upper bound the first- and second-order terms of the adversarial loss's Taylor expansion; hence, the Mixup loss can upper bound the approximate adversarial loss. For generalization, the regularization term is used to constrain the hypothesis class to have small Rademacher complexity. The paper is clearly written and well organized.
SP:2d40321225b606569305bd303ebad2e1711fd07b
Deep Clustering and Representation Learning that Preserves Geometric Structures
In this paper , we propose a novel framework for Deep Clustering and multi-manifold Representation Learning ( DCRL ) that preserves the geometric structure of data . In the proposed DCRL framework , manifold clustering is done in the latent space , guided by a clustering loss . To overcome the problem that clustering-oriented losses may deteriorate the geometric structure of embeddings in the latent space , an isometric loss is proposed for preserving the intra-manifold structure locally , and a ranking loss for the inter-manifold structure globally . Experimental results on various datasets show that the DCRL framework leads to performance comparable to current state-of-the-art deep clustering algorithms , yet exhibits superior performance for manifold representation . Our results also demonstrate the importance and effectiveness of the proposed losses in preserving geometric structure , in terms of both visualization and performance metrics . The code is provided in the Supplementary Material . 1 INTRODUCTION . Clustering , a fundamental tool for data analysis and visualization , has been an essential research topic in data science and machine learning . Conventional clustering algorithms such as K-Means ( MacQueen , 1965 ) , Gaussian Mixture Models ( GMM ) ( Bishop , 2006 ) , and spectral clustering ( Shi & Malik , 2000 ) perform clustering based on distance or similarity . However , handcrafted distance or similarity measures are rarely reliable for large-scale high-dimensional data , making it increasingly challenging to achieve effective clustering . An intuitive solution is to transform the data from the high-dimensional input space to a low-dimensional latent space and then to cluster the data in the latent space . This can be achieved by applying dimensionality reduction techniques such as PCA ( Wold et al. , 1987 ) , t-SNE ( Maaten & Hinton , 2008 ) , and UMAP ( McInnes et al. , 2018 ) . However , since these methods are not specifically designed for clustering tasks , some of their properties may be contrary to our expectations ; e.g. , two data points from different manifolds that are close in the input space will be even closer in the latent space derived by UMAP . Therefore , the first question here is : how can we learn manifold representations that favor clustering ? The two main points of multi-manifold representation learning are Point ( 1 ) , preserving the local geometric structure within each manifold , and Point ( 2 ) , ensuring discriminability between different manifolds . Most previous work seems to start with the assumption that the label of each data point is known and then designs the algorithm in a supervised manner , which greatly simplifies the problem of multi-manifold learning . However , it is challenging to decouple complex crossover relations and ensure discriminability between different manifolds , especially in unsupervised settings . One natural strategy is to achieve Point ( 2 ) by performing clustering in the input space to get pseudo-labels and then performing representation learning for each manifold . However , clustering is in fact contradictory to Point ( 1 ) ( which will be analyzed in detail in Sec . 3.3 ) , making it important to alleviate this contradiction so that clustering helps both Point ( 1 ) and Point ( 2 ) . Thus , the second question here is : how can we cluster data in a way that favors learning manifold representations ?
To answer these two questions , some pioneering work has proposed to integrate deep clustering and representation learning into a unified framework by defining a clustering-oriented loss . Though promising performance has been demonstrated on various datasets , we observe that a vital factor has been ignored by these works : the defined clustering-oriented loss may deteriorate the geometric structure of the latent space ( this claim was first made by IDEC ( Guo et al. , 2017 ) , but without supporting experiments ; in this paper , we show that the geometry of the latent space is indeed disrupted , through the visualization of learned embeddings ( Fig . 4 ) , the visualization of the clustering process ( Fig . A3 ) , and statistical analysis ( Fig . A5 ) ) , which in turn hurts the performance of visualization , clustering generalization , and manifold representation . In this paper , we propose to jointly perform deep clustering and multi-manifold representation learning with geometric structure preservation . Inspired by Xie et al . ( 2016 ) , the clustering centers are defined as a set of learnable parameters , and we use a clustering loss to simultaneously guide the separation of data points from different manifolds and the learning of the clustering centers . To prevent the clustering loss from deteriorating the latent space , an isometric loss and a ranking loss are proposed to preserve the intra-manifold structure locally and the inter-manifold structure globally . Finally , we achieve the following three goals related to clustering , geometric structure , and manifold representation : ( 1 ) clustering helps to ensure inter-manifold discriminability ; ( 2 ) local structure preservation can be achieved in the presence of clustering ; ( 3 ) geometric structure preservation helps clustering . The contributions of this work are summarized as follows :
• Proposing to integrate deep clustering and multi-manifold representation learning into a unified framework with local and global structure preservation .
• Unlike conventional multi-manifold learning algorithms that deal with all point-pair relationships between different manifolds simultaneously , we set the clustering centers as a set of learnable parameters and achieve global structure preservation in a faster , more efficient , and easier-to-optimize manner by applying the ranking loss to the clustering centers .
• Analyzing the contradiction between the two optimization goals of clustering and local structure preservation , and proposing an elegant training strategy to alleviate it .
• The proposed DCRL algorithm outperforms competing algorithms in terms of clustering effect , generalizability to out-of-sample data , and performance in manifold representation .
2 RELATED WORK . Clustering analysis . As a fundamental tool in machine learning , clustering has been widely applied in various domains . One branch of classical clustering comprises K-Means ( MacQueen , 1965 ) and Gaussian Mixture Models ( GMM ) ( Bishop , 2006 ) , which are fast , easy to understand , and applicable to a large number of problems . However , limited by the Euclidean measure , their performance on high-dimensional data is often unsatisfactory . Spectral clustering and its variants ( such as SCNcut ( Bishop , 2006 ) ) extend clustering to high-dimensional data by allowing more flexible distance measures . However , limited by the computational cost of the full Laplacian matrix , spectral clustering is challenging to extend to large-scale datasets . Deep clustering . The success of deep learning has contributed to the growth of deep clustering . One branch of deep clustering performs clustering after learning a representation through existing unsupervised techniques . For example , Tian et al . ( 2014 ) use an autoencoder to learn low-dimensional features and then run K-Means to get clustering results ( AE+K-Means ) .
Considering the geometric structure of the data , N2D ( McConville et al. , 2019 ) applies UMAP to find the best clusterable manifold of the obtained embedding and then runs K-Means to discover higher-quality clusters . The other category of algorithms tries to optimize clustering and representation learning jointly . The closest work to ours is Deep Embedding Clustering ( DEC ) ( Xie et al. , 2016 ) , which learns a mapping from the input space to a low-dimensional latent space by iteratively optimizing a clustering-oriented objective . As a modified version of DEC , while IDEC ( Guo et al. , 2017 ) claims to preserve the local structure of the data , in reality its contribution is nothing more than adding a reconstruction loss . JULE ( Yang et al. , 2016b ) unifies unsupervised representation learning with clustering based on a CNN architecture to improve clustering accuracy , which can be considered a neural extension of hierarchical clustering . DSC ( Yang et al. , 2019 ) devises a dual autoencoder to embed data into the latent space , and then deep spectral clustering ( Shaham et al. , 2018 ) is applied to obtain label assignments . ASPC-DA ( Guo et al. , 2019 ) combines data augmentation with self-paced learning to encourage the learned features to be cluster-oriented . While both are sometimes evaluated in terms of accuracy , we would like to highlight that deep clustering and visual self-supervised learning ( SSL ) are two different research fields . SSL typically uses more powerful CNN architectures ( applicable only to image data ) and uses sophisticated techniques such as contrastive learning ( He et al. , 2020 ) , data augmentation ( Chen et al. , 2020 ) , and clustering ( Zhan et al. , 2020 ; Ji et al. , 2019 ; Van Gansbeke et al. , 2020 ) for better performance on large-scale datasets such as ImageNet . Deep clustering , however , uses a general MLP architecture ( applicable to both image and vector data ) , so it is difficult to scale directly to large datasets without considering those sophisticated techniques . Manifold Representation Learning . Isomap , a representative algorithm of single-manifold learning , aims to capture global nonlinear features and seek an optimal subspace that best preserves the geodesic distances between data points ( Tenenbaum et al. , 2000 ) . In contrast , some algorithms , such as Locally Linear Embedding ( LLE ) ( Roweis & Saul , 2000 ) , are more concerned with the preservation of local neighborhood information . Combining DNNs with manifold learning , the recently proposed Markov-Lipschitz Deep Learning ( MLDL ) algorithm achieves the preservation of local and global geometry by imposing Locally Isometric Smoothness ( LIS ) prior constraints ( Li et al. , 2020 ) . Furthermore , multi-manifold learning has been proposed to obtain the intrinsic properties of different manifolds . Yang et al . ( 2016a ) proposed a supervised discriminant Isomap where data points are partitioned into different manifolds according to label information . Similarly , Zhang et al .
( 2018 ) proposed a semi-supervised learning framework that uses both labeled and unlabeled training samples to perform joint learning of local neighborhood-preserving features . In most previous work on multi-manifold learning , the problem is considered from the perspective that the labels are known or partially known , which significantly simplifies the problem . However , it is challenging to decouple multiple overlapping manifolds in unsupervised settings , and that is what this paper aims to explore . 3 PROPOSED METHOD . Consider a dataset X with N samples , where each sample $x_i \in \mathbb{R}^d$ is sampled from C different manifolds $\{\mathcal{M}_c\}_{c=1}^C$ . Assume that each category in the dataset lies on a compact low-dimensional manifold and that the number of manifolds C is known a priori . Define two nonlinear mappings $z_i = f(x_i, \theta_f)$ and $y_i = g(z_i, \theta_g)$ , where $z_i \in \mathbb{R}^m$ is the embedding of $x_i$ in the latent space and $y_i$ is the reconstruction of $x_i$ . The j-th cluster center is denoted as $\mu_j \in \mathbb{R}^m$ , where $\{\mu_j\}_{j=1}^C$ is defined as a set of learnable parameters . We aim to find optimal parameters $\theta_f$ and $\mu$ so that the embeddings $\{z_i\}_{i=1}^N$ achieve clustering with local and global structure preservation . To this end , a denoising autoencoder ( Vincent et al. , 2010 ) , shown in Fig . 1 , is first pre-trained in an unsupervised manner to learn an initial latent space . The denoising autoencoder optimizes the self-reconstruction loss $L_{AE} = \mathrm{MSE}(\hat{x}, y)$ , where $\hat{x}$ is a copy of x with Gaussian noise added , that is , $\hat{x} = x + \mathcal{N}(0, \sigma^2)$ . The autoencoder is then fine-tuned by optimizing the clustering-oriented loss $L_{cluster}(z, \mu)$ and the structure-oriented losses $L_{rank}(x, \mu)$ , $L_{LIS}(x, z)$ , and $L_{align}(z, \mu)$ . Since clustering should be performed on features of clean data , rather than the noised data $\hat{x}$ used in the denoising autoencoder , the clean data x is used for fine-tuning .
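A minimal sketch of the denoising-autoencoder pretraining step described above . The encoder f and decoder g are illustrative stand-ins for the paper ' s networks , and the loss follows the text ' s $L_{AE} = \mathrm{MSE}(\hat{x}, y)$ ( many DAE implementations instead reconstruct the clean x ) :

```python
import torch
import torch.nn as nn

d, m = 20, 5                                  # input and latent dimensions
f = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, m))  # encoder
g = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, d))  # decoder
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

X = torch.randn(256, d)                       # placeholder training data
sigma = 0.2                                   # noise level, an assumption
for _ in range(100):
    x_hat = X + sigma * torch.randn_like(X)   # noisy copy of x
    y = g(f(x_hat))                           # reconstruction
    loss = nn.functional.mse_loss(y, x_hat)   # L_AE as stated in the text
    opt.zero_grad(); loss.backward(); opt.step()
```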
In this paper, the authors propose a deep clustering model that enables clustering and representation learning to favor each other by preserving the geometric structure of the data. The proposed DCRL framework integrates an isometric loss for the local intra-manifold structure and a ranking loss for the global inter-manifold structure. The authors evaluate the proposed framework on five datasets, and the experimental results show that it brings certain improvements over the baseline approaches.
SP:f0fdbe6d66e21168cf3653190ea1f751acf8f2bb
Disambiguating Symbolic Expressions in Informal Documents
1 INTRODUCTION . Despite huge advancements in machine learning , the task of understanding informal reasoning is still beyond current methods . In fact , it has become commonplace for humans to annotate informal documents containing reasoning in many domains , e.g . law ( Libal & Steen , 2020 ) . Reasoning is most visible in mathematical documents and software specifications , and as such , in the last decades the formalization of mathematical knowledge , and the verification of formal proofs , has become increasingly popular . By now , dozens of interactive and automated theorem prover systems are available , each providing libraries with up to hundreds of thousands of formalizations of mathematical definitions , theorems , and their proofs written by human mathematicians ( Harrison et al. , 2014 ) . While formal methods are still primarily used by computer scientists ( e.g . to verify software and hardware , as well as in program synthesis ) , by now they have also drawn the interest of an increasing number of research mathematicians – primarily thanks to famous problems such as Kepler ' s conjecture ( Hales et al. , 2017 ) or the classification theorem for finite simple groups ( Solomon , 1995 ) , which have successfully been verified using theorem prover systems . However , while some mathematicians have begun actively adopting formal methods for their work , there is a prohibitively large discrepancy between the way new mathematical results are developed , presented , and published in mathematical practice , and the way they are formalized and implemented in formal systems ( Kaliszyk & Rabe , 2020 ) : most theorem proving systems implement a fixed logical foundation ( such as variants of set theory or various kinds of type theories ) , a surface syntax in which a user declares new definitions and statements in terms of the underlying foundations , and either a tactic language or a language for expressing proof terms ( usually on the basis of the Curry-Howard correspondence in a typed λ-calculus ) that allow for declaring proofs . Consequently , the process of formalizing new content in a formal system resembles programming much more than it does developing informal proofs . This discrepancy results in severe challenges for traditional mathematicians : formal systems are difficult to learn and use , even if one is well acquainted with the ( informal ) mathematics involved . They require learning dedicated formal languages resembling programming languages and declaring content at a level of detail that is prohibitive for beginners , even for " obvious " conclusions , and their libraries are difficult to grasp without already being familiar with the system ' s language , conventions , and functionalities . Due to the required level of detail , knowledge of the existing libraries is crucial when formalizing new content . Furthermore , many " intuitively valid " arguments can not be easily expressed in terms of a logical foundation in the first place , and knowing how to deal with those requires familiarity with the logical foundation involved and lots of practice . Consequently , the utility of formalizing mathematical results can too easily be ( and too often is ) dismissed in light of the additional time and work required for non-experts .
This is despite the fact that many services available for formal mathematics are already enabled by semi-formal ( or flexiformal ) representations , such as semantic annotations in natural language texts , or formal representations containing opaque informal expressions ( see e.g . Kohlhase ( 2013 ) ; Lange ( 2011a ) ; Iancu ( 2017 ) ; Kohlhase et al . ( 2017a ) ; Corneli & Schubotz ( 2017 ) ; Dehaye et al . ( 2016 ) ) . Therefore , we need to invest in methods for bridging the gap between informal mathematical practice and ( semi- ) formal mathematics . One way to do so is to investigate autoformalization , the task of ( semi-automatically ) converting existing informal mathematical presentations to ( increasingly ) formal representations . Notably , these issues extend beyond pure mathematics to other STEM ( science , technology , engineering , and math ) fields , where the formal verification ( or lack thereof ) of results can have direct real-world implications – examples include an infamous and costly error in the floating-point unit of Intel processors ( Harrison , 2003 ) and several human failures to adequately convert between SI and imperial units , most famously in NASA ' s Mars orbiter ( Grossman ) . In fact , the former has already established formal verification as a vital tool in hardware design ( Harrison , 2003 ) . Two observations motivate the research presented here : 1 . The vast majority of STEM researchers can be assumed to be comfortable with using LATEX ; any integration of formal methods into a LATEX development environment ( e.g . via new packages or IDE integration ) would consequently lower the entry barrier significantly . 2 . The task of going from purely informal mathematical texts to fully formal representations of the contained knowledge is best done via a separation of concerns , by focussing on individual subtasks ( such as disambiguating symbolic expressions , parsing natural language , and translating it to a formal foundation ) using dedicated tools for each . In this paper , we discuss specifically the task of disambiguating symbolic expressions – i.e . associating all symbols in an expression with their precise semantics – in LATEX documents as a machine learning task , using sTEX , semantically annotated LATEX ( Kohlhase , 2008 ) . The contributions are threefold : 1 . we discuss the details of disambiguating symbolic expressions in informal STEM documents as a neural machine translation task ; 2 . we present a new dataset specifically for this task , based on the existing SMGloM library of sTEX macros ( see Subsection 2.2 ) ; and 3 . we present a methodology ( using transformer language models ) that allows us to achieve positive results on our dataset . We previously evaluated several baseline NMT models ( such as Luong et al . ( 2017 ) ; Vaswani et al . ( 2017 ) and a plain character-based sequence-to-sequence model ) , which all failed to yield meaningful results due to our dataset being considerably smaller than is required for traditional NMT models . 2 PRELIMINARIES . By disambiguating , we mean the task of transforming a sequence of symbols ( representing a mathematical formula ) into an abstract syntax tree and associating each leaf in the tree with a unique identifier specifying the precise semantics of the corresponding symbol .
While this might superficially seem an easy task , closer consideration shows that even obvious-seeming statements such as " a + b " can in fact correspond to a multitude of possible disambiguations : a and b can be variables or previously defined constants , whereas + can represent , e.g. , addition on multiple different number spaces , generic ring or vector space operations , or string concatenation . In order to disambiguate expressions adequately and generically , it is therefore necessary to take into account the context in which the expression occurs . ( All code and data relevant to this paper are available at https://gl.kwarc.info/dmueller/fifom . ) In this paper , we consider informal documents in LATEX specifically , which we disambiguate with the sTEX package , using semantic identifiers provided by the SMGloM library . This eventually enables various formal knowledge management services ( such as type/proof checking ) provided by the MMT system . 2.1 STEX . Kohlhase proposed sTEX ( Kohlhase , 2008 ) , a package for annotating LATEX documents with structural and formal semantics , which is today used by multiple groups formalizing mathematics in various systems . In particular , sTEX is based on OMDOC ( Kohlhase , 2006 ) , an extension of OpenMath ( Buswell et al. , 2004 ) , which is foundation-agnostic in the sense that it does not favor a specific foundation ( such as type or set theories ) over any other . This approach is consequently best suited for semantifying informal documents , where foundations are often unspecified , left implicit , or switched fluently . For example , category-theoretic and set-theoretic formulations are often used interchangeably in algebraic settings , whereas type theories are generally favored for computational aspects and formal systems . Figure 1 shows example sTEX macros and their usage in various stages . Relevant for this paper is primarily the \symdef command , which introduces a new mathematical concept ( e.g . \nattimes in Figure 1 ) . It takes as arguments a macro name ( e.g . nattimes ) , a symbolic notation ( the last argument ) , and optionally an OMDOC-name ( e.g . multiplication ) , an arity ( e.g . [ 1 ] , which may be flexary ) , and a notational precedence ( e.g . p=600 , for automatic bracketing ) . It generates a unique identifier for the concept being declared ( based on the provided OMDOC-name ) and a new LATEX macro ( e.g . \nattimes ) for referring to the symbol . Alternative notational variants for symbols can be introduced via \symvariant , which are used as options to the macro ( e.g . \nattimes [ cdot ] ) . In addition to being valid LATEX , compilable via pdflatex , sTEX documents can be transformed to OMDOC using the LaTeXML software ( Ginev et al. , 2011 ) , yielding a formally disambiguated representation of the document and , in particular , of the symbolic expressions therein , on the basis of the macros provided by \symdefs . LaTeXML also heuristically attempts to disambiguate non-sTEX symbols , e.g . by considering " = " and " + " as infix notations for generic equality and addition operators , respectively . 2.2 SMGLOM . The SMGloM ( Kohlhase , 2014 ; the semantic multilingual glossary of mathematics ) is a library of hundreds of sTEX modules containing mathematical concepts and definitions .
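Returning to the " a + b " example above , here is a schematic illustration of the task . The macro names \natplus and \strconcat are hypothetical stand-ins for SMGloM-style identifiers , not actual library macros , and each would be introduced elsewhere by a \symdef declaration as described above :

```latex
% Surface syntax, as an author would normally write it:
$a + b$
% Two of its possible disambiguations (hypothetical sTeX-style macros):
$\natplus{a}{b}$     % addition on the natural numbers
$\strconcat{a}{b}$   % concatenation of strings
```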
It is separated into signature modules ( using the modsig environment , see Figure 1 ) containing only symbol declarations , and natural-language modules ( using the mhmodnl environment , here exemplarily for English ) that serve as dictionary entries for these , in which the semantics of the symbols are described in a semi-formal manner . The second row of Figure 1 shows an SMGloM entry . 2.3 MMT . sTEX itself is integrated with , and shares an underlying OMDOC ontology with , the MMT system ( Rabe & Kohlhase , 2013 ; Horozal et al. , 2012 ; Rabe , 2017 ) – a foundation-independent meta-framework and API for knowledge management services . This integration makes the generic services provided by MMT – e.g . type checking , library management/browsing , and translation – available to informal mathematical texts . Using alignments ( Müller , 2019 ; Müller et al. , 2017 ) , OMDOC expressions can be translated between different libraries , languages , and foundations . This allows for , e.g. , translating ( originally ) sTEX content to a typed setting in order to check expressions and run type inference . Additionally , several theorem prover libraries have been translated to OMDOC and integrated into the MMT system , e.g . Kohlhase et al . ( 2017b ) ; Müller et al . ( 2019 ) ( for a detailed overview , see Müller ( 2019 ) and Kohlhase & Rabe ( 2020 ) ) . Extending these integrations to enable exporting from MMT as well ( and in conjunction with natural language processing ) could enable verifying informal mathematics imported via sTEX using external state-of-the-art theorem prover systems . 3 STATE OF THE ART . Various papers over the last years have – explicitly or implicitly – attempted to extract formal information from informal documents using machine learning . These fall into two categories . First , there are projects that attempt to fully formalize informal mathematical documents using machine learning techniques , using the surface language of some theorem prover system directly as a target . In Kaliszyk et al . ( 2017a ; 2015 ; 2014 ) , the Flyspeck project ( Hales et al. , 2017 ) – the formalization of Kepler ' s theorem – was used as the basis for a parallel dataset in order to translate from informal mathematics to HOL Light ( Harrison , 1996 ) syntax . Kaliszyk et al . ( 2017b ) ; Wang et al . ( 2018 ; 2020 ) target the Mizar language ( Mizar ) instead , using the Journal of Formalized Mathematics ( JFM ) as data – an informal representation of the formal Mizar Mathematical Library ( Bancerek et al. , 2018 ) . While these projects achieved impressive results given the ambitious nature of the task , their success rate is naturally limited by the involved models having to solve several tasks at once ( see the second observation in Section 1 ) , including ours . Additionally , by going to a fully formal language ( and logical foundation ) immediately , the result does not preserve the narrative presentation of the input document , effectively losing ( for us ) valuable information in the process . Consequently , our task and the results obtained on it are not directly comparable to these projects . Second , various projects have aimed to solve informally presented mathematical problems of various kinds . These include Arai et al . ( 2014 ) ; Matsuzaki et al . ( 2014 ; 2017 ; 2018 ) on pre-university math problems , Saxton et al . ( 2019 ) and Lample & Charton ( 2019 ) on high-school level equations , Gan & Yu ( 2017 ) and Seo et al . ( 2015 ) on geometric problems , and Huang et al .
While this naturally entails disambiguating symbolic expressions, all these projects reduce their domain of applicability to specific areas where all occurring formal symbols are syntactically unambiguous (primarily common arithmetic operations, functions, and relations on real numbers), such that disambiguation reduces to simple parsing of a fixed, small set of a priori known symbols.
Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Learning Beyond Global Prior
Meta-learning methods learn the meta-knowledge among various training tasks and aim to promote the learning of new tasks under the task similarity assumption. Such meta-knowledge is often represented as a fixed distribution; this, however, may be too restrictive to capture various specific task information, because the discriminative patterns in the data may change dramatically across tasks. In this work, we aim to equip the meta learner with the ability to model and produce task-specific meta-knowledge and, accordingly, present a localized meta-learning framework based on PAC-Bayes theory. In particular, we propose a Local Coordinate Coding (LCC) based prior predictor that allows the meta learner to generate local meta-knowledge for specific tasks adaptively. We further develop a practical algorithm with deep neural networks based on the bound. Empirical results on real-world datasets demonstrate the efficacy of the proposed method.

1 INTRODUCTION

[Figure 1: global meta-knowledge versus localized meta-knowledge, with adaptation across Task 1 and Task 2 instances.]

Recent years have seen a resurgence of interest in the field of meta-learning, or learning-to-learn (Thrun & Pratt, 2012), especially for empowering deep neural networks with the capability of fast adaptation to unseen tasks, just as humans do (Finn et al., 2017; Ravi & Larochelle, 2017). More concretely, the neural networks are trained from a sequence of datasets associated with different tasks sampled from a meta-distribution (also called a task environment (Baxter, 2000; Maurer, 2005)). The principal aim of the meta learner is to extract transferable meta-knowledge from observed tasks and facilitate the learning of new tasks sampled from the same meta-distribution. The performance is measured by the generalization [...] tasks: distinguishing motorcycle versus bicycle and distinguishing motorcycle versus car. Intuitively, each task uses distinct discriminative patterns, and thus the desired meta-knowledge is required to extract these patterns simultaneously. It could be a challenging problem to represent it with a global hyperposterior, since the most significant patterns in the first task could be irrelevant or even detrimental to the second task. The figure schematically illustrates this notion. Therefore, customized meta-knowledge, such that the patterns are most discriminative for a given task, is urgently desired. Can the meta-knowledge be adaptive to tasks? How can one achieve it? Intuitively, we could implement this idea by reformulating the meta-knowledge as a mapping function: leveraging the samples in the target task, the meta model produces task-specific meta-knowledge.

Naturally yet interestingly, one can see quantitatively how customized prior knowledge improves generalization capability, in light of the PAC-Bayes literature on data distribution-dependent priors (Catoni, 2007; Parrado-Hernández et al., 2012; Dziugaite & Roy, 2018). Specifically, PAC-Bayes bounds control the generalization error of Gibbs classifiers. They usually depend on a tradeoff between the empirical error of the posterior Q and a KL-divergence term KL(Q‖P), where P is the prior. Since this KL-divergence term forms part of the generalization bound and is typically large in standard PAC-Bayes approaches (Lever et al., 2013), the choice of posterior is constrained by the need to minimize the KL-divergence between prior P and posterior Q.
Thus, choosing for each task an appropriate prior which is close to the related posterior could yield improved generalization bounds. This encourages the study of data distribution-dependent priors for PAC-Bayes analysis and gives rise to principled approaches to localized PAC-Bayes analysis. Previous related work is mainly discussed in Appendix A.

Inspired by this, we propose a Localized Meta-Learning (LML) framework by formulating the meta-knowledge as a conditional distribution over priors. Given the task data distribution, we allow the meta learner to adaptively generate an appropriate prior for a new task. The challenges of developing this model are three-fold. First, the task data distribution is not explicitly given, and our only perception of it is via the associated sample set. Second, it should be permutation invariant: the output of the model should not change under any permutation of the elements in the sample set. Third, the learned model should be usable for solving unseen tasks. To address these problems, we develop a prior predictor using Local Coordinate Coding (LCC) (Yu et al., 2009). In particular, if the classifier in each task is specialized to a parametric model, e.g. a deep neural network, the proposed LCC-based prior predictor predicts the base model parameters using the task sample set. The main contributions include: (1) a localized meta-learning framework which provides a means to tighten the original PAC-Bayes meta-learning bound (Pentina & Lampert, 2014; Amit & Meir, 2018) by minimizing the task-complexity term through a data-dependent prior; (2) an LCC-based prior predictor, an implementation of the conditional hyperposterior, which generates local meta-knowledge for a specific task; (3) a practical algorithm for probabilistic deep neural networks obtained by minimizing the bound (though the optimization method can be applied to a large family of differentiable models); (4) experimental results which demonstrate improved performance over meta-learning methods in this field.

2 PRELIMINARIES

Our prior predictor is implemented via Local Coordinate Coding (LCC), and the LML framework is inspired by PAC-Bayes theory for meta-learning. In this section we briefly review the related definitions and formulations.

2.1 LOCAL COORDINATE CODING

Definition 1. (Lipschitz Smoothness, Yu et al. (2009).) A function f(x) in R^d is (α, β)-Lipschitz smooth w.r.t. a norm ‖·‖ if ‖f(x) − f(x′)‖ ≤ α‖x − x′‖ and ‖f(x′) − f(x) − ∇f(x)^⊤(x′ − x)‖ ≤ β‖x − x′‖².

Definition 2. (Coordinate Coding, Yu et al. (2009).) A coordinate coding is a pair (γ, C), where C ⊂ R^d is a set of anchor points (bases), and γ is a map of x ∈ R^d to [γ_u(x)]_{u∈C} ∈ R^{|C|} such that Σ_u γ_u(x) = 1. It induces the following physical approximation of x in R^d: x̄ = Σ_{u∈C} γ_u(x) u.

Definition 3. (Latent Manifold, Yu et al. (2009).) A subset M ⊂ R^d is called a smooth manifold with intrinsic dimension |C| := d_M if there exists a constant c_M such that given any x ∈ M, there exist |C| anchor points u_1(x), ..., u_{|C|}(x) ∈ R^d so that, for all x′ ∈ M,
$$\inf_{\gamma\in\mathbb{R}^{|C|}} \Big\| x' - x - \sum_{j=1}^{|C|} \gamma_j u_j(x) \Big\|_2 \le c_M \|x' - x\|_2^2,$$
where γ = [γ_1, ..., γ_{|C|}]^⊤ are the local codings w.r.t. the anchor points.

Definitions 2 and 3 imply that any point in R^d can be expressed as a linear combination of a set of anchor points. Later, we will show that a high-dimensional nonlinear prior predictor can be approximated by a simple linear function w.r.t. the coordinate coding, and that the approximation quality is ensured by the locality of such a coding (each data point can be well approximated by a linear combination of its nearby anchor points).
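To make Definition 2 concrete, the following is a minimal numerical sketch (not the paper's actual pipeline) of computing a coordinate coding for one point: the codings γ are obtained by solving the constrained least-squares problem min_γ ‖x − Σ_u γ_u u‖² subject to Σ_u γ_u = 1, via its KKT linear system. The anchor points, dimensions, and the small ridge term are illustrative assumptions.

```python
import numpy as np

def coordinate_coding(x, anchors):
    """Solve min_gamma ||x - anchors.T @ gamma||^2  s.t.  sum(gamma) = 1.

    KKT system for the equality-constrained least squares:
        [2 A^T A  1] [gamma]   [2 A^T x]
        [1^T      0] [lam  ] = [1      ]
    where A = anchors.T has the anchors as columns.
    """
    A = anchors.T                                  # (d, k)
    k = anchors.shape[0]
    kkt = np.zeros((k + 1, k + 1))
    kkt[:k, :k] = 2.0 * A.T @ A + 1e-8 * np.eye(k)  # ridge for stability when |C| > d
    kkt[:k, k] = 1.0
    kkt[k, :k] = 1.0
    rhs = np.concatenate([2.0 * A.T @ x, [1.0]])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:k]                                 # the codings gamma_u(x)

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 3))                  # |C| = 8 anchor points in R^3
x = rng.normal(size=3)
gamma = coordinate_coding(x, anchors)
x_bar = anchors.T @ gamma                          # physical approximation of x
print(gamma.sum())                                 # 1.0 (up to numerical error)
print(np.linalg.norm(x - x_bar))                   # approximation error ||x - x_bar||
```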
2.2 PAC-BAYES REGULAR META-LEARNING

In order to present the advances proposed in this paper, we recall some definitions in PAC-Bayes theory for single-task learning and meta-learning (Catoni, 2007; Baxter, 2000; Pentina & Lampert, 2014; Amit & Meir, 2018). In the context of classification, we assume all tasks share the same input space X, output space Y, space of classifiers (hypotheses) H ⊂ {h : X → Y}, and loss function ℓ : Y × Y → [0, 1]. The meta learner observes n tasks in the form of sample sets S_1, ..., S_n. The number of samples in task i is denoted by m_i. Each observed task i consists of a set of i.i.d. samples S_i = {(x_j, y_j)}_{j=1}^{m_i}, drawn from a data distribution, S_i ∼ D_i^{m_i}. Following the meta-learning setup in Baxter (2000), we assume that each data distribution D_i is generated i.i.d. from the same meta-distribution τ. Let h(x) be the prediction for x; the goal of each task is to find a classifier h that minimizes the expected loss $\mathbb{E}_{(x,y)\sim D}\,\ell(h(x), y)$. Since the underlying "true" data distribution D_i is unknown, the base learner receives a finite set of samples S_i and produces an "optimal" classifier h = A_b(S_i) with a learning algorithm A_b(·), which will be used to predict the labels of unseen inputs.

PAC-Bayes theory studies the properties of a randomized classifier, called the Gibbs classifier. Let Q be a posterior distribution over H. To make a prediction, the Gibbs classifier samples a classifier h ∈ H according to Q and then predicts a label with the chosen h. The expected error under the data distribution D and the empirical error on the sample set S are then given by averaging over the distribution Q, namely
$$er(Q) = \mathbb{E}_{h\sim Q}\,\mathbb{E}_{(x,y)\sim D}\,\ell(h(x), y) \quad\text{and}\quad \hat{er}(Q) = \mathbb{E}_{h\sim Q}\,\frac{1}{m}\sum_{j=1}^{m}\ell(h(x_j), y_j),$$
respectively.

In the context of meta-learning, the goal of the meta learner is to extract meta-knowledge contained in the observed tasks that will be used as prior knowledge for learning new tasks. In each task, the prior knowledge P is in the form of a distribution over the classifiers H. The base learner produces a posterior Q = A_b(S, P) over H based on a sample set S and a prior P. All tasks are learned through the same learning procedure. The meta learner treats the prior P itself as a random variable and assumes the meta-knowledge is in the form of a distribution over all possible priors. Letting the hyperprior P be an initial distribution over priors, the meta learner uses the observed tasks to adjust its original hyperprior P into a hyperposterior Q through the learning process. Given this, the quality of the hyperposterior Q is measured by the expected task error of learning new tasks using priors generated from it, which is formulated as
$$er(\mathcal{Q}) = \mathbb{E}_{P\sim\mathcal{Q}}\;\mathbb{E}_{(D,m)\sim\tau,\, S\sim D^m}\; er\big(Q = A_b(S, P)\big). \quad (1)$$
Accordingly, the empirical counterpart of the above quantity is given by
$$\hat{er}(\mathcal{Q}) = \mathbb{E}_{P\sim\mathcal{Q}}\;\frac{1}{n}\sum_{i=1}^{n} \hat{er}\big(Q_i = A_b(S_i, P)\big). \quad (2)$$
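As a minimal illustration of the Gibbs-classifier quantities above, the sketch below estimates the empirical Gibbs error by Monte Carlo for a linear classifier whose weights follow a Gaussian posterior. The toy data, the 0-1 loss, and the posterior parameters are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary task: labels given by a ground-truth linear rule.
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(200, 2))
y = np.sign(X @ w_true)

# Gaussian posterior Q = N(w_Q, sigma^2 I) over linear classifiers.
w_Q, sigma = np.array([0.8, -1.5]), 0.5

def gibbs_empirical_error(X, y, w_Q, sigma, n_samples=1000):
    """Monte Carlo estimate of hat{er}(Q) = E_{h~Q} (1/m) sum_j l(h(x_j), y_j)."""
    errs = []
    for _ in range(n_samples):
        w = w_Q + sigma * rng.normal(size=w_Q.shape)   # sample h ~ Q
        errs.append(np.mean(np.sign(X @ w) != y))      # 0-1 loss on the sample set
    return np.mean(errs)

print(gibbs_empirical_error(X, y, w_Q, sigma))
```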
2.3 PAC-BAYES REGULAR META-LEARNING BOUND WITH GAUSSIAN RANDOMIZATION

Based on the above definitions, Pentina & Lampert (2014) and Amit & Meir (2018) present regular meta-learning PAC-Bayes generalization bounds w.r.t. the hyperposterior Q. Notably, the proof technique in Amit & Meir (2018) allows incorporating different single-task bounds. Considering the benefit of Catoni's bound (Catoni, 2007) (the minimization problem derived from the bound is a simple linear combination of the empirical risk plus a regularizer), we instantiate a regular meta-learning bound with Gaussian randomization based on it. For a fair comparison, we will later adopt the same Catoni's bound to analyze the proposed LML framework. In particular, the classifier h is parameterized as h_w with w ∈ R^{d_w}. The prior and posterior are distributions over the set of all possible parameters w. We choose both the prior P and posterior Q to be spherical Gaussians, i.e. P = N(w^P, σ_w² I_{d_w}) and Q = N(w^Q, σ_w² I_{d_w}). The mean w^P is a random variable distributed first according to the hyperprior P, which we formulate as N(0, σ_w² I_{d_w}), and later according to the hyperposterior Q, which we model as N(w^Q, σ_w² I_{d_w}). When encountering a new task i, we first sample the mean of the prior w_i^P from the hyperposterior N(w^Q, σ_w² I_{d_w}), and then use it as a basis to learn the mean of the posterior w_i^Q = A_b(S_i, P), as shown in Figure 2 (left). We can then derive the following PAC-Bayes meta-learning bound.

Theorem 1. Consider the regular meta-learning framework, given the hyperprior P = N(0, σ_w² I_{d_w}). Then, for any hyperposterior Q, any c₁, c₂ > 0 and any δ ∈ (0, 1], with probability ≥ 1 − δ we have
$$\begin{aligned} er(\mathcal{Q}) \le\;& c_1' c_2'\,\hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n}\frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2} + \frac{c_1'}{2 c_1 n \sigma_w^2}\Big)\|w^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n}\frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2}\Big\|\mathbb{E}_{w^P} w_i^Q - w^{\mathcal{Q}}\Big\|^2 \\ &+ \sum_{i=1}^{n}\frac{c_1' c_2'}{c_2 n m_i \sigma_w^2}\Big(\frac{1}{2} + \log\frac{2n}{\delta}\Big) + \frac{c_1'}{c_1 n \sigma_w^2}\log\frac{2}{\delta}, \end{aligned} \quad (3)$$
where $c_1' = \frac{c_1}{1 - e^{-c_1}}$ and $c_2' = \frac{c_2}{1 - e^{-c_2}}$. To get a better understanding, we simplify the notation and obtain
$$er(\mathcal{Q}) \le c_1' c_2'\,\hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n}\frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2} + \frac{c_1'}{2 c_1 n \sigma_w^2}\Big)\|w^{\mathcal{Q}}\|^2 + \underbrace{\sum_{i=1}^{n}\frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2}\Big\|\mathbb{E}_{w^P} w_i^Q - w^{\mathcal{Q}}\Big\|^2}_{task\text{-}complexity} + \mathrm{const}(\delta, n, m_i, \sigma_w, c_1, c_2). \quad (4)$$
See Appendix D.4 for the proof. Notice that the expected task generalization error is bounded by the empirical multi-task error plus two complexity terms, which measure the environment-complexity and the task-complexity, respectively.

3 PAC-BAYES LOCALIZED META-LEARNING

3.1 MOTIVATION AND OVERALL FRAMEWORK

Our motivation stems from a core challenge in the PAC-Bayes meta-learning bound in (4): the task-complexity term $\sum_{i=1}^{n}\frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2}\|\mathbb{E}\, w_i^Q - w^{\mathcal{Q}}\|^2$, which measures the closeness between the mean of the posterior and the mean of the global hyperposterior for each task, is typically vital to the generalization bound. Finding the tightest possible bound generally depends on minimizing this term. It is obvious that the optimal $w^{\mathcal{Q}}$ is $\sum_{i=1}^{n}\frac{c_1' c_2'\,\mathbb{E}\, w_i^Q}{2 c_2 n m_i \sigma_w^2}$. This solution for the global hyperposterior requires the task similarity assumption to hold, i.e., that the optimal posteriors for the tasks are close together and lie within a small subset of the model space. Under this circumstance, there exists a global hyperposterior from which a good prior for any individual task is reachable. However, if the optimal posteriors for the tasks are not related, or are even mutually exclusive, i.e., one optimal posterior has a negative effect on another task, the global hyperposterior may impede the learning of some tasks. Moreover, this complexity term could be inevitably large and incur a large generalization error.
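The appearance of ‖E w_i^Q − w^Q‖²/(2σ_w²)-type terms in the bound mirrors the closed-form KL divergence between two isotropic Gaussians with shared variance, KL(N(w_Q, σ²I) ‖ N(w_P, σ²I)) = ‖w_Q − w_P‖²/(2σ²). The snippet below is a didactic check of this identity, not part of the paper's algorithm; all names and values are illustrative.

```python
import numpy as np

def kl_isotropic_gaussians(w_q, w_p, sigma):
    """KL( N(w_q, sigma^2 I) || N(w_p, sigma^2 I) ) = ||w_q - w_p||^2 / (2 sigma^2)."""
    return np.sum((w_q - w_p) ** 2) / (2.0 * sigma ** 2)

rng = np.random.default_rng(2)
w_q, w_p, sigma = rng.normal(size=5), rng.normal(size=5), 0.3
print(kl_isotropic_gaussians(w_q, w_p, sigma))
# A prior mean w_p chosen close to w_q (e.g. by a per-task prior predictor)
# drives this KL, and hence the task-complexity term, toward zero.
print(kl_isotropic_gaussians(w_q, w_q, sigma))  # 0.0
```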
Note that w^Q is the mean of the hyperposterior Q, and this complexity term naturally indicates the divergence between the mean of the prior w_i^P sampled from the hyperposterior Q and the mean of the posterior w_i^Q in each task. Therefore, we propose to adaptively choose the mean of the prior w_i^P according to task i. It is obvious that the complexity term vanishes if we set w_i^P = w_i^Q, but the prior P_i in each task has to be chosen independently of the sample set S_i. Fortunately, the PAC-Bayes theorem allows us to choose the prior based on the data distribution D_i. Therefore, we propose a prior predictor Φ : D^m → w^P which receives the task data distribution D^m and outputs the mean of the prior w^P. In this way, the generated priors can focus locally on those regions of the model parameters that are of particular interest for solving specific tasks.

In particular, the prior predictor is parameterized as Φ_v with v ∈ R^{d_v}. We assume v to be a random variable distributed first according to the hyperprior P, which we reformulate as N(0, σ_v² I_{d_v}), and later according to the hyperposterior Q, which we reformulate as N(v^Q, σ_v² I_{d_v}). Given a new task i, we first sample v from the hyperposterior N(v^Q, σ_v² I_{d_v}) and estimate the mean of the prior w_i^P by leveraging the prior predictor, w_i^P = Φ_v(D_i^m). Then, the base learner utilizes the sample set S_i and the prior P_i = N(w_i^P, σ_w² I_{d_w}) to produce a mean posterior w_i^Q = A_b(S_i, P_i), as shown in Figure 2 (right).

To make w^P close to w^Q in each task, what properties is the prior predictor expected to exhibit? Importantly, it is required to (i) uncover the tight relationship between the sample set and the model parameters. Intuitively, features and parameters yield similar local and global structures in their respective spaces in the classification problem. Features in the same category tend to be spatially clustered together while maintaining separation between different classes. Taking linear classifiers as an example, let w_k be the parameters w.r.t. category k; the separability between classes is implemented as x · w_k, which also explicitly encourages intra-class compactness. A reasonable choice of w_k is to maximize the inner-product distance with the input features of the same category and minimize the distance with the input features of the non-belonging categories. Besides, the prior predictor should be (ii) category-agnostic, since it will be used continuously as new tasks, and hence new categories, become available. Lastly, it should be (iii) invariant under permutations of its inputs.

3.2 LCC-BASED PRIOR PREDICTOR

There exist many implementations, such as the set transformer (Lee et al., 2018), relation networks (Rusu et al., 2019), and task2vec (Achille et al., 2019), that satisfy the above conditions. We follow the idea of the nearest class mean classifier (Mensink et al., 2013), which represents a class parameter by averaging its feature embeddings. This idea has been explored in transductive few-shot learning problems (Snell et al., 2017; Qiao et al., 2018).
Snell et al. (2017) learn a metric space across tasks such that, when represented in this embedding, the prototype (centroid) of each class can be used for label prediction in the new task. Qiao et al. (2018) directly predict the classifier weights from the activations, exploiting the close relationship between the parameters and the activations in a neural network associated with the same category. In summary, the classification problem of each task is transformed into a generic metric learning problem which is shared across tasks. Once this mapping has been learned on observed tasks, due to the structure-preserving property, it can easily generalize to new tasks. Formally, consider each task as a K-class classification problem, and denote the parameters of the classifier in task i as w_i = [w_i[1], ..., w_i[k], ..., w_i[K]]. The prior predictor for class k can be defined as
$$w_i^P[k] = \Phi_v\big(D_{ik}^{m_{ik}}\big) = \mathbb{E}_{S_{ik}\sim D_{ik}^{m_{ik}}}\;\frac{1}{m_{ik}}\sum_{x_j\in S_{ik}} \phi_v(x_j), \quad (5)$$
where φ_v(·) : R^d → R^{d_w} is the feature embedding function, m_{ik} is the number of samples belonging to category k, and S_{ik} and D_{ik} are the sample set and data distribution for category k in task i. We call this function the expected prior predictor. Since the data distribution D_{ik} is unknown and our only insight into D_{ik} is through the sample set S_{ik}, we approximate the expected prior predictor by its empirical counterpart. Note that if the prior predictor is relatively stable to perturbations of the sample set, then the generated prior can still reflect the underlying task data distribution, rather than the data, resulting in a generalization bound that still holds, perhaps with smaller probability (Dziugaite & Roy, 2018). Formally, the empirical prior predictor is defined as
$$\hat{w}_i^P[k] = \hat{\Phi}_v(S_{ik}) = \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}} \phi_v(x_j). \quad (6)$$
Although we can implement the embedding function φ_v(·) with a multilayer perceptron (MLP), both the input x ∈ R^d and the model parameter w ∈ R^{d_w} are high-dimensional, making the empirical prior predictor Φ̂_v(·) difficult to learn. Inspired by the local coordinate coding method, if the anchor points are sufficiently localized, the embedding function φ_v(x_j) can be approximated by a linear function w.r.t. a set of codings, [γ_u(x_j)]_{u∈C}. Accordingly, we propose an LCC-based prior predictor, defined as
$$\bar{w}_i^P[k] = \bar{\Phi}_v(S_{ik}) = \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\sum_{u\in C} \gamma_u(x_j)\,\phi_v(u), \quad (7)$$
where φ_v(u) ∈ R^{d_w} is the embedding of the corresponding anchor point u ∈ C. As such, the parameters of the LCC-based prior predictor w.r.t. category k can be represented as v_k = [φ_{v_k}(u_1), φ_{v_k}(u_2), ..., φ_{v_k}(u_{|C|})]. Lemma 1 quantifies the approximation error between the empirical prior predictor and the LCC-based prior predictor.

Lemma 1. (Empirical Prior Predictor Approximation) Given the definitions of ŵ_i^P[k] and w̄_i^P[k] in Eq. (6) and Eq. (7), let (γ, C) be an arbitrary coordinate coding on R^d and φ_v(·) be an (α, β)-Lipschitz smooth function. We have for all x ∈ R^d
$$\big\|\hat{w}_i^P[k] - \bar{w}_i^P[k]\big\| \le O_{\alpha,\beta}(\gamma, C), \quad (8)$$
where $O_{\alpha,\beta}(\gamma, C) = \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\big(\alpha\|x_j - \bar{x}_j\| + \beta\sum_{u\in C}\|\bar{x}_j - u\|^2\big)$ and $\bar{x}_j = \sum_{u\in C}\gamma_u(x_j)\,u$.

See Appendix D.1 for the proof. Lemma 1 shows that a good LCC-based prior predictor should make x close to its physical approximation x̄ and should be localized. The complexity of the LCC coding scheme depends on the number of anchor points |C|.
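A minimal sketch of Eq. (7): with the codings of a fixed LCC scheme, the class-k prior mean is the average, over the class samples, of the coded combination of anchor embeddings. Both the distance-based coding (standing in for the optimized (γ, C) of Appendix B) and the random linear anchor embeddings (standing in for the learned φ_v) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, d_w, n_anchors = 3, 4, 8

anchors = rng.normal(size=(n_anchors, d))   # LCC anchor points C
V = rng.normal(size=(n_anchors, d_w))       # phi_v(u) for each anchor u (stand-in embedding)

def soft_coding(x, anchors, tau=1.0):
    """Toy localized coding: softmax over negative squared distances.
    Satisfies sum_u gamma_u(x) = 1; the paper instead optimizes (gamma, C), see Appendix B."""
    logits = -np.sum((anchors - x) ** 2, axis=1) / tau
    w = np.exp(logits - logits.max())
    return w / w.sum()

def lcc_prior_predictor(S_k, anchors, V):
    """Eq. (7): w_bar^P[k] = (1/m_k) sum_{x_j in S_k} sum_u gamma_u(x_j) phi_v(u)."""
    codes = np.stack([soft_coding(x, anchors) for x in S_k])  # (m_k, |C|)
    return codes.mean(axis=0) @ V

S_k = rng.normal(size=(50, d))              # the class-k samples of one task
w_prior_k = lcc_prior_predictor(S_k, anchors, V)
print(w_prior_k.shape)                      # (d_w,): mean of the Gaussian prior for class k
```

Because the codes are mean-pooled over the sample set, the predictor is invariant under permutations of its inputs, matching property (iii) above.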
We follow the optimization method in Yu et al. (2009) to find the coordinate coding (γ, C), which is presented in Appendix B.

3.3 PAC-BAYES LOCALIZED META-LEARNING BOUND WITH GAUSSIAN RANDOMIZATION

In order to derive a PAC-Bayes generalization bound for localized meta-learning, we first bound the approximation error between the expected prior predictor and the LCC-based prior predictor.

Lemma 2. Given the definitions of w^P and w̄^P in Eq. (5) and (7), let X be a compact set with radius R, i.e., ∀x, x′ ∈ X, ‖x − x′‖ ≤ R. For any δ ∈ (0, 1], with probability ≥ 1 − δ, we have
$$\|w^P - \bar{w}^P\|^2 \le \sum_{k=1}^{K}\Big(\frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\tfrac{1}{\delta}}\Big) + O_{\alpha,\beta}(\gamma, C)\Big)^2.$$
See Appendix D.2 for the proof. Lemma 2 shows that the approximation error between the expected prior predictor and the LCC-based prior predictor depends on (i) the concentration of the prior predictor and (ii) the quality of the LCC coding scheme. The first term implies that the number of samples for each category should be larger for a better approximation. This is consistent with the results for estimating the center of mass (Cristianini & Shawe-Taylor, 2004). Based on Lemma 2, using the same Catoni's bound, we have the following PAC-Bayes LML bound.

Theorem 2. Consider the localized meta-learning framework. Given the hyperprior P = N(0, σ_v² I_{d_v}), for any hyperposterior Q, any c₁, c₂ > 0 and any δ ∈ (0, 1], with probability ≥ 1 − δ we have
$$\begin{aligned} er(\mathcal{Q}) \le\;& c_1' c_2'\,\hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n}\frac{c_1' c_2'}{2 c_2 n m_i \sigma_v^2} + \frac{c_1'}{2 c_1 n \sigma_v^2}\Big)\|v^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n}\frac{c_1' c_2'}{c_2 n m_i \sigma_w^2}\Big\|\mathbb{E}_v\, w_i^Q - \bar{\Phi}_{v^{\mathcal{Q}}}(S_i)\Big\|^2 \\ &+ \sum_{i=1}^{n}\frac{c_1' c_2'}{c_2 n m_i \sigma_w^2}\bigg(\frac{1}{\sigma_w^2}\sum_{k=1}^{K}\Big(\frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\tfrac{4n}{\delta}}\Big) + O_{\alpha,\beta}(\gamma, C)\Big)^2 + d_w K\Big(\frac{\sigma_v}{\sigma_w}\Big)^2\bigg) \\ &+ \sum_{i=1}^{n}\frac{c_1' c_2'}{c_2 n m_i \sigma_w^2}\log\frac{4n}{\delta} + \frac{c_1'}{2 c_1 n \sigma_v^2}\log\frac{2}{\delta}, \end{aligned} \quad (9)$$
where $c_1' = \frac{c_1}{1 - e^{-c_1}}$ and $c_2' = \frac{c_2}{1 - e^{-c_2}}$. To get a better understanding, we simplify the notation and obtain
$$er(\mathcal{Q}) \le c_1' c_2'\,\hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n}\frac{c_1' c_2'}{2 c_2 n m_i \sigma_v^2} + \frac{c_1'}{2 c_1 n \sigma_v^2}\Big)\|v^{\mathcal{Q}}\|^2 + \underbrace{\sum_{i=1}^{n}\frac{c_1' c_2'}{c_2 n m_i \sigma_w^2}\Big\|\mathbb{E}_v\, w_i^Q - \bar{\Phi}_{v^{\mathcal{Q}}}(S_i)\Big\|^2}_{task\text{-}complexity} + \mathrm{const}(\alpha, \beta, R, \delta, n, m_i, \sigma_v, \sigma_w, c_1, c_2). \quad (10)$$
See Appendix D.3 for the proof. As in the regular meta-learning bound of Theorem 1, the expected task error er(Q) is bounded by the empirical task error êr(Q) plus the task-complexity and environment-complexity terms. The main innovation here is to exploit the potential to choose the mean of the prior w^P adaptively, based on the task data S. Intuitively, if the selection of the LCC-based prior predictor is appropriate, it narrows the divergence between the mean of the prior w_i^P sampled from the hyperposterior Q and the mean of the posterior w_i^Q in each task. Therefore, the bound can be tighter than the ones for regular meta-learning (Pentina & Lampert, 2014; Amit & Meir, 2018). Our empirical study in Section 4 will illustrate that the algorithm derived from this bound can reduce the task-complexity and thus achieve better performance than methods derived from regular meta-learning bounds.

When choosing the number of anchor points |C|, there is a balance between the accuracy and the simplicity of the prior predictor. As we increase |C|, we essentially increase the expressive power of Φ̄_v(·) and reduce the task-complexity term ‖E_v w^Q − Φ̄_{v^Q}(S)‖². At the same time, however, this increases the environment-complexity term ‖v^Q‖² and makes the bound loose. If we set |C| to 1, the framework degenerates to regular meta-learning.
3.4 LOCALIZED META-LEARNING ALGORITHM

Since the bound in (9) holds uniformly w.r.t. Q, the guarantees of Theorem 2 also hold for the resulting learned hyperposterior Q = N(v^Q, σ_v² I_{d_v}), so the mean of the prior w^P sampled from the learned hyperposterior works well for future tasks. The PAC-Bayes localized meta-learning bound in (9) can be compactly written as
$$\sum_{i=1}^{n}\mathbb{E}_v\,\hat{er}_i\big(Q_i = A_b(S_i, P)\big) + \alpha_1\|v^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n}\frac{\alpha_2}{m_i}\Big\|\mathbb{E}_v\, w_i^Q - \bar{\Phi}_{v^{\mathcal{Q}}}(S_i)\Big\|^2,$$
where α₁, α₂ > 0 are hyperparameters. For task i, the learning algorithm A_b(·) can be formulated as
$$w_i^\star = \arg\min_{w_i^Q}\;\mathbb{E}_v\,\hat{er}_i\big(Q_i = \mathcal{N}(w_i^Q, \sigma_w^2 I_{d_w})\big).$$
To make a fair comparison, and to guarantee that the benefit of the proposed LML does not come from an improved optimization method, we follow the same learning algorithm as Amit & Meir (2018). Specifically, we jointly optimize the parameters of the LCC-based prior predictor v and the parameters of the classifiers in each task, w_1, w_2, ..., w_n, which is formulated as
$$\arg\min_{v,\, w_1, \ldots, w_n}\;\sum_{i=1}^{n}\mathbb{E}_v\,\hat{er}_i(w_i) + \alpha_1\|v^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n}\frac{\alpha_2}{m_i}\Big\|\mathbb{E}_v\, w_i^Q - \bar{\Phi}_{v^{\mathcal{Q}}}(S_i)\Big\|^2. \quad (11)$$
We can optimize v and w via mini-batch SGD. The details of the localized meta-learning algorithm are given in Appendix F. The expectation over the Gaussian distribution and its gradient can be efficiently estimated using the re-parameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). For example, to sample w from the posterior Q = N(w^Q, σ_w² I_{d_w}), we first draw ξ ∼ N(0, I_{d_w}) and then apply the deterministic function w^Q + ξ ⊙ σ, where ⊙ is an element-wise multiplication.
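Below is a minimal PyTorch-style sketch of one stochastic optimization loop for objective (11) with the re-parameterization trick, restricted to a single task. The tiny linear model, the mean-pooled linear stand-in for the LCC-based prior predictor, and all hyperparameter values are illustrative assumptions rather than the exact training code of Appendix F.

```python
import torch

torch.manual_seed(0)
d, n_classes, m = 8, 4, 32
alpha1, alpha2, sigma_w = 1e-3, 1e-2, 0.1

# Toy task data: features X, integer labels y.
X = torch.randn(m, d)
y = torch.randint(0, n_classes, (m,))

w_q = torch.nn.Parameter(torch.zeros(d, n_classes))  # posterior mean w_i^Q for this task
P = torch.nn.Parameter(0.01 * torch.randn(d, d))     # stand-in for phi_v (prior-predictor params)
opt = torch.optim.Adam([w_q, P], lr=1e-2)

def prior_mean(X, y, P, n_classes):
    """Stand-in prior predictor: class-k prior column = P @ (mean of class-k features)."""
    cols = []
    for k in range(n_classes):
        mask = (y == k)
        mu = X[mask].mean(dim=0) if mask.any() else torch.zeros(X.shape[1])
        cols.append(P @ mu)
    return torch.stack(cols, dim=1)                   # (d, n_classes)

for step in range(100):
    xi = torch.randn_like(w_q)
    w = w_q + sigma_w * xi                            # re-parameterized sample w ~ N(w_q, sigma_w^2 I)
    emp = torch.nn.functional.cross_entropy(X @ w, y) # empirical task error (one MC sample)
    w_p = prior_mean(X, y, P, n_classes)
    loss = emp + alpha1 * P.pow(2).sum() \
               + (alpha2 / m) * (w_q - w_p).pow(2).sum()  # objective (11) for one task
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The full objective sums the first and third terms over all n observed tasks; the single-task loop above only illustrates how the re-parameterized sample keeps the expectation differentiable in w_q.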
4 EXPERIMENTS

Datasets and Setup. We use CIFAR-100 and Caltech-256 in our experiments. CIFAR-100 (Krizhevsky, 2009) contains 60,000 images from 100 fine-grained categories and 20 coarse-level categories. As in Zhou et al. (2018), we use 64, 16, and 20 classes for meta-training, meta-validation, and meta-testing, respectively. Caltech-256 has 30,607 color images from 256 classes (Griffin et al., 2007). Similarly, we split the dataset into 150, 56, and 50 classes for meta-training, meta-validation, and meta-testing. We consider a 5-way classification problem. Each task is generated by randomly sampling 5 categories, and each category contains 50 samples. The base model uses the convolutional architecture of Finn et al. (2017), which consists of 4 convolutional layers, each with 32 filters, and a fully-connected layer on top mapping to the number of classes. High-dimensional data often lies on some low-dimensional manifold; we utilize an auto-encoder to extract the semantic information of the image data and then construct the LCC scheme based on the embeddings. The parameters of the prior predictor and the base model are randomly perturbed in the form of a Gaussian distribution.

We design two different meta-learning environment settings to validate the efficacy of the proposed method. The first uses a pre-trained base model as initialization, which utilizes all the meta-training classes (64-class classification in the CIFAR-100 case) to train the feature extractor. The second uses random initialization. We compare the proposed LML method with the ML-PL method (Pentina & Lampert, 2014), the ML-AM method (Amit & Meir, 2018), and ML-A, which is derived from Theorem 1. In all these methods, we use their main theorems on the generalization upper bound to derive the objective of the algorithm. To ensure a fair comparison, all approaches adopt the same network architecture and pre-trained feature extractor (more details can be found in Appendix E).

Results. In Figure 3, we show the average test error of learning a new task as a function of the number of training tasks, together with the standard deviation, in the different settings (with or without a pre-trained feature extractor). The performance continually improves as we increase the number of training tasks for all methods. This is consistent with the generalization bounds: the complexity terms converge to zero if large numbers of tasks are observed. ML-A consistently outperforms ML-PL and ML-AM, since the single-task bound used in Theorem 1 (ML-A) converges at the rate O(1/m) while the bounds of ML-PL and ML-AM converge at the rate O(1/√m). This demonstrates the importance of using a tight generalization bound. Moreover, our proposed LML significantly outperforms the baselines, which validates the effectiveness of the proposed LCC-based prior predictor. This confirms that the LCC-based prior predictor is a more suitable representation for meta-knowledge than the traditional global hyperposterior of ML-A, ML-AM, and ML-PL. Finally, we observe that when the pre-trained feature extractor is provided, all of these methods do better than meta-training with random initialization. This is because the pre-trained feature extractor can be regarded as a data-dependent hyperprior: it is closer to the hyperposterior than the randomly initialized hyperprior, and therefore reduces the environment-complexity term and improves generalization performance.

In Figure 4(b), we show the divergence between the mean of the generated prior w^P from the meta model and the mean of the learned posterior w^Q for LML and ML-A. This further validates the effectiveness of the LCC-based prior predictor, which narrows the divergence term and thus tightens the bound. In Figure 4(a), we vary the number of anchor points |C| in the LCC scheme from 4 to 256; the optimal value is around 64 on both datasets. This indicates that LML is sensitive to the number of anchor points |C|, which affects the quality of the LCC-based prior predictor and the performance of LML.

5 CONCLUSION

This work contributes a novel localized meta-learning framework from both the theoretical and computational perspectives. In order to tailor meta-knowledge to individual tasks, we formulate the meta model as a mapping function that leverages the samples in the target task and produces task-specific meta-knowledge as a prior. Quantitatively, this idea provides a means to theoretically tighten the PAC-Bayes meta-learning generalization bound. We propose an LCC-based prior predictor that outputs localized meta-knowledge using task information, and further develop a practical algorithm with deep neural networks by minimizing the generalization bound. An interesting topic for future work would be to explore other principles for constructing the prior predictor and to apply the localized meta-learning framework to more realistic scenarios where tasks are sampled non-i.i.d. from an environment.
Another challenging problem is to extend our techniques to derive localized meta-learning algorithms for regression and reinforcement learning problems.

REFERENCES

Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C. Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6430–6439, 2019.

Ron Amit and Ron Meir. Meta-learning by adjusting priors based on extended PAC-Bayes theory. In International Conference on Machine Learning, pp. 205–214, 2018.

Maria-Florina Balcan, Mikhail Khodak, and Ameet Talwalkar. Provable guarantees for gradient-based meta-learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, pp. 424–433, 2019.

Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.

O. Catoni. PAC-Bayesian supervised classification: The thermodynamics of statistical learning. Institute of Mathematical Statistics Lecture Notes—Monograph Series 56. IMS, Beachwood, OH. MR2483528, 2007.

Nello Cristianini and John Shawe-Taylor. Kernel Methods for Pattern Analysis, volume 173. Cambridge University Press, Cambridge, 2004.

Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Incremental learning-to-learn with statistical guarantees. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, pp. 457–466, 2018a.

Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Learning to learn around a common mean. In Advances in Neural Information Processing Systems, pp. 10169–10179, 2018b.

Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, and Massimiliano Pontil. Learning-to-learn stochastic gradient descent with biased regularization. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, pp. 1566–1575, 2019.

Gintare Karolina Dziugaite and Daniel M. Roy. Data-dependent PAC-Bayes priors via differential privacy. In Advances in Neural Information Processing Systems, pp. 8430–8441, 2018.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126–1135. JMLR.org, 2017.

Tomer Galanti, Lior Wolf, and Tamir Hazan. A theoretical framework for deep transfer learning. Information and Inference: A Journal of the IMA, 5(2):159–209, 2016.

Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.

Benjamin Guedj. A primer on PAC-Bayesian learning. arXiv preprint arXiv:1901.05353, 2019.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, Conference Track Proceedings, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, Conference Track Proceedings, 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. arXiv preprint arXiv:1810.00825, 2018.

Guy Lever, François Laviolette, and John Shawe-Taylor. Tighter PAC-Bayes bounds through distribution-dependent priors. Theoretical Computer Science, 473:4–28, 2013.

Andreas Maurer. Algorithmic stability and meta-learning. Journal of Machine Learning Research, 6:967–994, 2005.

Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2624–2637, 2013.

Emilio Parrado-Hernández, Amiran Ambroladze, John Shawe-Taylor, and Shiliang Sun. PAC-Bayes bounds with data dependent priors. Journal of Machine Learning Research, 13:3507–3531, 2012.

Anastasia Pentina and Christoph Lampert. A PAC-Bayesian bound for lifelong learning. In International Conference on Machine Learning, pp. 991–999, 2014.

Siyuan Qiao, Chenxi Liu, Wei Shen, and Alan L. Yuille. Few-shot image recognition by predicting parameters from activations. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, pp. 7229–7238, 2018.

Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, Conference Track Proceedings, 2017.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, pp. 1278–1286, 2014.

Omar Rivasplata, Csaba Szepesvari, John S. Shawe-Taylor, Emilio Parrado-Hernandez, and Shiliang Sun. PAC-Bayes bounds for stable algorithms with instance-dependent priors. In Advances in Neural Information Processing Systems, pp. 9214–9224, 2018.

Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 2019.

Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087, 2017.

Sebastian Thrun and Lorien Pratt. Learning to Learn. Springer Science & Business Media, 2012.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630–3638, 2016.

Risto Vuorio, Shao-Hua Sun, Hexiang Hu, and Joseph J. Lim. Toward multimodal model-agnostic meta-learning. arXiv preprint arXiv:1812.07172, 2018.
Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, and Joseph E. Gonzalez. TAFE-Net: Task-aware feature embeddings for low shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1831–1840, 2019.

Kai Yu, Tong Zhang, and Yihong Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems, pp. 2223–2231, 2009.

Fengwei Zhou, Bin Wu, and Zhenguo Li. Deep meta-learning: Learning to learn in the concept space. arXiv preprint arXiv:1802.03596, 2018.

Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, pp. 7693–7702, 2019.

Supplementary Materials for Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Learning Beyond Global Prior

This supplementary document contains the discussion of previous work, the technical proofs of the theoretical results, and details of the experiments. It is structured as follows. Appendix A gives a detailed discussion of previous work. Appendix B presents the optimization method for LCC. Appendix C collects the notation for the prior predictors. Appendix D gives the proofs of the main results: Appendices D.1 and D.2 bound the approximation error of the LCC-based prior predictor against the empirical and the expected prior predictor, respectively, and are used in the proof of Theorem 2; Appendices D.3 and D.4 then prove the PAC-Bayes generalization bound of localized meta-learning (Theorem 2) and the PAC-Bayes generalization bound of regular meta-learning (Theorem 1). Details of the experiments and further empirical results are presented in Appendix E. Finally, we summarize the localized meta-learning algorithm in Appendix F.

A RELATED WORK

Meta-Learning. The meta-learning literature commonly considers the empirical task error by directly optimizing a loss of the meta learner across tasks in the training data. Recently, this has been successfully applied in a variety of models for few-shot learning (Ravi & Larochelle, 2017; Snell et al., 2017; Finn et al., 2017; Vinyals et al., 2016). Although Vuorio et al. (2018), Rusu et al. (2019), Zintgraf et al. (2019), and Wang et al. (2019) consider task adaptation when using meta-knowledge for specific tasks, none of these works is based on generalization error bounds, which is the perspective taken in our work. Meta-learning in the online setting has regained attention recently (Denevi et al., 2018b;a; 2019; Balcan et al., 2019), where online-to-batch conversion results can imply generalization bounds. Galanti et al. (2016) analyze transfer learning in neural networks with PAC-Bayes tools. Most related to our work are Pentina & Lampert (2014) and Amit & Meir (2018), which provide PAC-Bayes generalization bounds for the meta-learning framework. In contrast, neither work provides a principled way to derive localized meta-knowledge for specific tasks.

Localized PAC-Bayes Learning. There has been a prosperous line of research on learning priors to improve PAC-Bayes bounds (Catoni, 2007; Guedj, 2019).
Parrado-Hernández et al. (2012) showed that priors can be learned by splitting the available training data into two parts, one for learning the prior and one for learning the posterior. Lever et al. (2013) bounded the KL divergence by a term independent of the data distribution and derived an expression for the overall optimal prior, i.e., the prior distribution resulting in the smallest bound value. Recently, Rivasplata et al. (2018) bounded the KL divergence by investigating the stability of the hypothesis. Dziugaite & Roy (2018) optimized the prior term in a differentially private way. In summary, these methods construct quantities that reflect the underlying data distribution, rather than the sample set, and then choose the prior P based on these quantities. These works, however, are only applicable to the single-task problem and cannot transfer knowledge across tasks in the meta-learning setting.

B OPTIMIZATION OF LCC

We minimize the inequality in (8) to obtain a set of anchor points. As in Yu et al. (2009), we simplify the localization error term by assuming x̄ = x, and then optimize the following objective function:
$$\arg\min_{\gamma, C}\;\sum_{i=1}^{n}\sum_{x_j\in S_i}\Big(\alpha\|x_j - \bar{x}_j\|^2 + \beta\sum_{u\in C}\|x_j - u\|^2\Big) \quad \text{s.t.}\;\forall x,\ \sum_{u\in C}\gamma_u(x) = 1, \quad (12)$$
where x̄ = Σ_{u∈C} γ_u(x) u. In practice, we update C and γ by alternately optimizing a LASSO problem and a least-squares regression problem, respectively.

C NOTATIONS

Let φ_v(·) : R^d → R^{d_w} be the feature embedding function. m_{ik} denotes the number of samples belonging to category k, and S_{ik} and D_{ik} are the sample set and data distribution for category k in task i, respectively. Then, the expected prior predictor w.r.t. class k in task i is defined as
$$w_i^P[k] = \Phi_v\big(D_{ik}^{m_{ik}}\big) = \mathbb{E}_{S_{ik}\sim D_{ik}^{m_{ik}}}\;\frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\phi_v(x_j).$$
The empirical prior predictor w.r.t. class k in task i is defined as
$$\hat{w}_i^P[k] = \hat{\Phi}_v(S_{ik}) = \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\phi_v(x_j).$$
The LCC-based prior predictor w.r.t. class k in task i is defined as
$$\bar{w}_i^P[k] = \bar{\Phi}_v(S_{ik}) = \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\sum_{u\in C}\gamma_u(x_j)\,\phi_v(u).$$

D THEORETICAL RESULTS

D.1 PROOF OF LEMMA 1

This lemma bounds the error between the empirical prior predictor ŵ_i^P[k] and the LCC-based prior predictor w̄_i^P[k].

Lemma 1. Given the definitions of ŵ_i^P[k] and w̄_i^P[k] in Eq. (6) and Eq. (7), let (γ, C) be an arbitrary coordinate coding on R^d and φ be an (α, β)-Lipschitz smooth function. We have for all x ∈ R^d
$$\big\|\hat{w}_i^P[k] - \bar{w}_i^P[k]\big\| \le \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\Big(\alpha\|x_j - \bar{x}_j\| + \beta\sum_{u\in C}\|\bar{x}_j - u\|^2\Big) = O_{\alpha,\beta}(\gamma, C), \quad (13)$$
where $\bar{x}_j = \sum_{u\in C}\gamma_u(x_j)\,u$.

Proof. Let $\bar{x}_j = \sum_{u\in C}\gamma_u(x_j)\,u$. We have
$$\begin{aligned} \big\|\hat{\Phi}_v(S_{ik}) - \bar{\Phi}_v(S_{ik})\big\| &= \frac{1}{m_{ik}}\Big\|\sum_{x_j\in S_{ik}}\Big(\phi_v(x_j) - \sum_{u\in C}\gamma_u(x_j)\,\phi_v(u)\Big)\Big\| \\ &\le \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\Big(\|\phi_v(x_j) - \phi_v(\bar{x}_j)\| + \Big\|\sum_{u\in C}\gamma_u(x_j)\big(\phi_v(u) - \phi_v(\bar{x}_j)\big)\Big\|\Big) \\ &= \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\Big(\|\phi_v(x_j) - \phi_v(\bar{x}_j)\| + \Big\|\sum_{u\in C}\gamma_u(x_j)\big(\phi_v(u) - \phi_v(\bar{x}_j) - \nabla\phi_v(\bar{x}_j)(u - \bar{x}_j)\big)\Big\|\Big) \\ &\le \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\Big(\|\phi_v(x_j) - \phi_v(\bar{x}_j)\| + \sum_{u\in C}|\gamma_u(x_j)|\,\big\|\phi_v(u) - \phi_v(\bar{x}_j) - \nabla\phi_v(\bar{x}_j)(u - \bar{x}_j)\big\|\Big) \\ &\le \frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\Big(\alpha\|x_j - \bar{x}_j\| + \beta\sum_{u\in C}\|\bar{x}_j - u\|^2\Big) = O_{\alpha,\beta}(\gamma, C). \end{aligned}$$
In the above derivation, the first inequality holds by the triangle inequality. The second equality holds since $\sum_{u\in C}\gamma_u(x_j) = 1$ for all x_j, so the added gradient terms sum to zero. The last inequality uses the (α, β)-Lipschitz smoothness of φ_v(·).
This implies the desired bound.

This lemma demonstrates that the quality of the LCC approximation is bounded by two terms: the first term, ‖x_j − x̄_j‖, indicates that x should be close to its physical approximation x̄; the second term, ‖x̄_j − u‖, implies that the coding should be localized. According to the Manifold Coding Theorem in Yu et al. (2009), if the data points x lie on a compact smooth manifold M, then given any ε > 0 there exist anchor points C ⊂ M and a coding γ such that
$$\frac{1}{m_{ik}}\sum_{x_j\in S_{ik}}\Big(\alpha\|x_j - \bar{x}_j\|_2 + \beta\sum_{u\in C}\|\bar{x}_j - u\|_2^2\Big) \le \big[\alpha c_M + (1 + 5\sqrt{d_M})\beta\big]\,\varepsilon^2. \quad (14)$$
This shows that the approximation error of local coordinate coding depends on the intrinsic dimension of the manifold instead of the dimension of the input.

D.2 PROOF OF LEMMA 2

In order to prove Lemma 2, we first introduce a relevant theorem.

Theorem 3. (Vector-valued extension of McDiarmid's inequality, Rivasplata et al. (2018)) Let X₁, ..., X_m ∈ X be independent random variables, and f : X^m → R^{d_w} a vector-valued mapping function. If, for all i ∈ {1, ..., m} and for all x₁, ..., x_m, x′_i ∈ X, the function f satisfies
$$\sup_{x_i, x_i'}\big\|f(x_{1:i-1}, x_i, x_{i+1:m}) - f(x_{1:i-1}, x_i', x_{i+1:m})\big\| \le c_i, \quad (15)$$
then $\mathbb{E}\,\|f(X_{1:m}) - \mathbb{E}[f(X_{1:m})]\| \le \sqrt{\sum_{i=1}^{m} c_i^2}$, and for any δ ∈ (0, 1), with probability ≥ 1 − δ we have
$$\big\|f(X_{1:m}) - \mathbb{E}[f(X_{1:m})]\big\| \le \sqrt{\sum_{i=1}^{m} c_i^2} + \sqrt{\frac{\sum_{i=1}^{m} c_i^2}{2}\log\Big(\frac{1}{\delta}\Big)}. \quad (16)$$
The above theorem indicates that bounded differences in norm imply the concentration of f(X_{1:m}) around its mean in norm, i.e., ‖f(X_{1:m}) − E[f(X_{1:m})]‖ is small with high probability.

Then, we bound the error between the expected prior predictor w_i^P and the empirical prior predictor ŵ_i^P.

Lemma 3. Given the definitions of w_i^P[k] and ŵ_i^P[k] in (5) and (6), let X be a compact set with radius R, i.e., ∀x, x′ ∈ X, ‖x − x′‖ ≤ R. For any δ ∈ (0, 1], with probability ≥ 1 − δ, we have
$$\big\|w_i^P[k] - \hat{w}_i^P[k]\big\| \le \frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\big(\tfrac{1}{\delta}\big)}\Big). \quad (17)$$

Proof. According to the definition of Φ̂_v(·) in (6), for all points x₁, ..., x_{j−1}, x_{j+1}, ..., x_{m_k}, x′_j in the sample set S_{ik}, we have
$$\sup_{x_j, x_j'}\big\|\hat{\Phi}_v(x_{1:j-1}, x_j, x_{j+1:m_k}) - \hat{\Phi}_v(x_{1:j-1}, x_j', x_{j+1:m_k})\big\| = \frac{1}{m_{ik}}\sup_{x_j, x_j'}\|\phi_v(x_j) - \phi_v(x_j')\| \le \frac{1}{m_{ik}}\sup_{x_j, x_j'}\alpha\|x_j - x_j'\| \le \frac{\alpha R}{m_{ik}}, \quad (18)$$
where R denotes the radius of the domain of x. The first inequality follows from the Lipschitz smoothness of Φ_v(·), and the second from the definition of the domain X. Utilizing Theorem 3, for any δ ∈ (0, 1], with probability ≥ 1 − δ we have
$$\big\|w_i^P[k] - \hat{w}_i^P[k]\big\| = \big\|\hat{\Phi}_v(S_{ik}) - \mathbb{E}[\hat{\Phi}_v(S_{ik})]\big\| \le \frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\big(\tfrac{1}{\delta}\big)}\Big). \quad (19)$$
This implies the bound.

Lemma 3 shows that the bounded difference of the function Φ_v(·) implies its concentration, which can be further used to bound the difference between the empirical prior predictor ŵ_i^P[k] and the expected prior predictor w_i^P[k]. Now, we bound the error between the expected prior predictor w_i^P and the LCC-based prior predictor w̄_i^P.

Lemma 2. Given the definitions of w_i^P and w̄_i^P in (5) and (7), let X be a compact set with radius R, i.e., ∀x, x′ ∈ X, ‖x − x′‖ ≤ R. For any δ ∈ (0, 1], with probability ≥ 1 − δ, we have
$$\big\|w_i^P - \bar{w}_i^P\big\|^2 \le \sum_{k=1}^{K}\Big(\frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\big(\tfrac{1}{\delta}\big)}\Big) + O_{\alpha,\beta}(\gamma, C)\Big)^2. \quad (20)$$
Proof. According to the definitions of w^P, w̄^P, and ŵ^P, we have
$$\begin{aligned} \big\|w_i^P - \bar{w}_i^P\big\|^2 &= \sum_{k=1}^{K}\big\|w_i^P[k] - \bar{w}_i^P[k]\big\|^2 = \sum_{k=1}^{K}\big\|\mathbb{E}[\hat{\Phi}_v(S_{ik})] - \hat{\Phi}_v(S_{ik}) + \hat{\Phi}_v(S_{ik}) - \bar{\Phi}_v(S_{ik})\big\|^2 \\ &\le \sum_{k=1}^{K}\Big(\big\|\mathbb{E}[\hat{\Phi}_v(S_{ik})] - \hat{\Phi}_v(S_{ik})\big\|^2 + \big\|\hat{\Phi}_v(S_{ik}) - \bar{\Phi}_v(S_{ik})\big\|^2 + 2\,\big\|\mathbb{E}[\hat{\Phi}_v(S_{ik})] - \hat{\Phi}_v(S_{ik})\big\|\,\big\|\hat{\Phi}_v(S_{ik}) - \bar{\Phi}_v(S_{ik})\big\|\Big). \end{aligned} \quad (21)$$
Substituting Lemma 3 and Lemma 1 into the above inequality, we derive
$$\mathbb{P}_{S_{ik}\sim D_{ik}^{m_{ik}}}\bigg\{\big\|w^P - \bar{w}^P\big\|^2 \le \sum_{k=1}^{K}\Big(\frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\big(\tfrac{1}{\delta}\big)}\Big) + O_{\alpha,\beta}(\gamma, C)\Big)^2\bigg\} \ge 1 - \delta. \quad (22)$$
This gives the assertion.

Lemma 2 shows that the approximation error between the expected prior predictor and the LCC-based prior predictor depends on the number of samples in each category and on the quality of the LCC coding scheme.

D.3 PROOF OF THEOREM 2

Theorem 2 (restated). Let Q = N(w^Q, σ_w² I_{d_w}) be the posterior of the base learner and P = N(Φ̄_v(S), σ_w² I_{d_w}) the prior. The mean of the prior is produced by the LCC-based prior predictor Φ̄_v(S) in Eq. (7), and its parameter v is sampled from the hyperposterior of the meta learner Q = N(v^Q, σ_v² I_{d_v}). Given the hyperprior P = N(0, σ_v² I_{d_v}), for any hyperposterior Q, any c₁, c₂ > 0, and any δ ∈ (0, 1], with probability ≥ 1 − δ the bound (9) holds, with the simplified form (10).

Proof. Our proof contains two steps. First, we bound the error within the observed tasks due to observing a limited number of samples. Then we bound the error at the task-environment level due to observing a finite number of tasks. Both steps utilize Catoni's classical PAC-Bayes bound (Catoni, 2007) to measure the error. We give here a general statement of Catoni's classical PAC-Bayes bound.

Theorem 4. (Classical PAC-Bayes bound, general notation) Let X be a sample space and X a distribution over X, and let F be a hypothesis space of functions over X. Define a loss function g(f, X) : F × X → [0, 1], and let X₁^G ≜ {X₁, ..., X_G} be a sequence of G independent random variables distributed according to X. Let π be some prior distribution over F (which must not depend on the samples X₁, ..., X_G). For any δ ∈ (0, 1], the following bound holds uniformly for all posterior distributions ρ over F (even sample-dependent ones):
$$\mathbb{P}_{X_1^G\,\sim\,\text{i.i.d.}\,\mathcal{X}}\bigg\{\mathbb{E}_{X\sim\mathcal{X}}\,\mathbb{E}_{f\sim\rho}\,g(f, X) \le \frac{c}{1 - e^{-c}}\Big[\frac{1}{G}\sum_{g=1}^{G}\mathbb{E}_{f\sim\rho}\,g(f, X_g) + \frac{KL(\rho\|\pi) + \log\frac{1}{\delta}}{G\,c}\Big],\ \forall\rho\bigg\} \ge 1 - \delta. \quad (25)$$

First step. We utilize Theorem 4 to bound the generalization error in each of the observed tasks. Let i ∈ {1, ..., n} be the task index. For task i, we substitute the following definitions into Catoni's PAC-Bayes bound: the samples are X_g ≜ (x_{ij}, y_{ij}) with G ≜ m_i, and X ≜ D_i denotes the data distribution.
We instantiate the hypotheses with a hierarchical model f ≜ (v, w), where v ∈ R^{d_v} and w ∈ R^{d_w} are the parameters of the meta learner (prior predictor) Φ_v(·) and the base learner h(·), respectively. The loss function only considers the base learner and is defined as g(f, X) ≜ ℓ(h_w(x), y). The prior over the model parameters is represented as π ≜ (P, P) ≜ (N(0, σ_v² I_{d_v}), N(w^P, σ_w² I_{d_w})), a Gaussian distribution (the hyperprior of the meta learner) centered at 0 and a Gaussian distribution (the prior of the base learner) centered at w^P, respectively. We set the posterior to ρ ≜ (Q, Q) ≜ (N(v^Q, σ_v² I_{d_v}), N(w^Q, σ_w² I_{d_w})), a Gaussian distribution (the hyperposterior of the meta learner) centered at v^Q and a Gaussian distribution (the posterior of the base learner) centered at w^Q. According to Theorem 4, the generalization bound holds for any posterior distribution, including the one generated in our localized meta-learning framework. Specifically, we first sample v from the hyperposterior N(v^Q, σ_v² I_{d_v}) and estimate w^P via the expected prior predictor w^P = Φ_v(D). The base learner algorithm A_b(S, P) utilizes the sample set S and the prior P = N(w^P, σ_w² I_{d_w}) to produce a posterior Q = A_b(S, P) = N(w^Q, σ_w² I_{d_w}). Then we sample the base learner parameter w from the posterior N(w^Q, σ_w² I_{d_w}) and compute the incurred loss ℓ(h_w(x), y). On the whole, the meta-learning algorithm A_m(S₁, ..., S_n, P) observes a series of tasks S₁, ..., S_n and adjusts its hyperprior P = N(v^P, σ_v² I_{d_v}) into the hyperposterior Q = A_m(S₁, ..., S_n, P) = N(v^Q, σ_v² I_{d_v}).

The KL divergence between the prior π and the posterior ρ is computed as follows:
$$KL(\rho\|\pi) = \mathbb{E}_{f\sim\rho}\log\frac{\rho(f)}{\pi(f)} = \mathbb{E}_{v}\log\frac{\mathcal{N}(v^{\mathcal{Q}}, \sigma_v^2 I_{d_v})}{\mathcal{N}(0, \sigma_v^2 I_{d_v})} + \mathbb{E}_{v}\,\mathbb{E}_{w}\log\frac{\mathcal{N}(w^Q, \sigma_w^2 I_{d_w})}{\mathcal{N}(w^P, \sigma_w^2 I_{d_w})} = \frac{1}{2\sigma_v^2}\|v^{\mathcal{Q}}\|^2 + \mathbb{E}_{v\sim\mathcal{N}(v^{\mathcal{Q}}, \sigma_v^2 I_{d_v})}\frac{1}{2\sigma_w^2}\|w^Q - w^P\|^2. \quad (26)$$

In our localized meta-learning framework, in order to make KL(Q‖P) small, the center of the prior distribution w^P is generated by the expected prior predictor w^P = Φ_v(D). However, the data distribution D is unknown, and our only insight into D_{ik} is through the sample set S_{ik}. In this work, we therefore approximate the expected prior predictor Φ_v(D) with the LCC-based prior predictor w̄^P = Φ̄_v(S). Denoting the term $\mathbb{E}_{v\sim\mathcal{N}(v^{\mathcal{Q}}, \sigma_v^2 I_{d_v})}\frac{1}{2\sigma_w^2}\|w^Q - w^P\|^2$ by $\mathbb{E}_v\frac{1}{2\sigma_w^2}\|w^Q - w^P\|^2$ for convenience, we have
$$\mathbb{E}_v\frac{1}{2\sigma_w^2}\|w^Q - w^P\|^2 = \mathbb{E}_v\frac{1}{2\sigma_w^2}\|w^Q - \bar{w}^P + \bar{w}^P - w^P\|^2 \le \frac{1}{\sigma_w^2}\,\mathbb{E}_v\|w^Q - \bar{\Phi}_v(S)\|^2 + \frac{1}{\sigma_w^2}\,\mathbb{E}_v\|\bar{w}^P - w^P\|^2, \quad (27)$$
where the inequality follows from ‖a + b‖² ≤ 2‖a‖² + 2‖b‖². Since w̄_i^P = Φ̄_v(S_i) = [Φ̄_v(S_{i1}), ..., Φ̄_v(S_{ik}), ..., Φ̄_v(S_{iK})], we have
$$\mathbb{E}_v\big\|w_i^Q - \bar{\Phi}_v(S_i)\big\|^2 = \sum_{k=1}^{K}\Big(\big\|\mathbb{E}_v\, w_i^Q[k] - \bar{\Phi}_{v^{\mathcal{Q}}}(S_{ik})\big\|^2 + \frac{d_v}{|C|}\sigma_v^2\Big) = \big\|\mathbb{E}_v\, w_i^Q - \bar{\Phi}_{v^{\mathcal{Q}}}(S_i)\big\|^2 + d_w K\sigma_v^2, \quad (28)$$
where the variance of ‖Φ̄_v(S_{ik})‖ contributes the second term in each summand, and the last equality uses the fact that d_v = |C| d_w.
Combining Lemma 2, for any δ′ ∈ (0, 1], with probability ≥ 1 − δ′ we have
$$\mathbb{E}_v\frac{1}{2\sigma_w^2}\|w_i^Q - w_i^P\|^2 \le \frac{1}{\sigma_w^2}\big\|\mathbb{E}_v\, w_i^Q - \bar{\Phi}_{v^{\mathcal{Q}}}(S_i)\big\|^2 + d_w K\Big(\frac{\sigma_v}{\sigma_w}\Big)^2 + \frac{1}{\sigma_w^2}\sum_{k=1}^{K}\Big(\frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\tfrac{1}{\delta'}}\Big) + O_{\alpha,\beta}(\gamma, C)\Big)^2. \quad (29)$$
Then, according to Theorem 4, we obtain that for any δ_i > 0,
$$\mathbb{P}_{S_i\sim D_i^{m_i}}\bigg\{\mathbb{E}_{(x,y)\sim D_i}\,\mathbb{E}_{v}\,\mathbb{E}_{w}\,\ell(h_w(x), y) \le \frac{c_2}{1 - e^{-c_2}}\cdot\frac{1}{m_i}\sum_{j=1}^{m_i}\mathbb{E}_{v}\,\mathbb{E}_{w}\,\ell(h_w(x_j), y_j) + \frac{1}{(1 - e^{-c_2})\,m_i}\Big(\frac{1}{2\sigma_v^2}\|v^{\mathcal{Q}}\|^2 + \mathbb{E}_{v}\frac{1}{2\sigma_w^2}\|w_i^Q - w_i^P\|^2 + \log\frac{2}{\delta_i}\Big),\ \forall\mathcal{Q}\bigg\} \ge 1 - \frac{\delta_i}{2}, \quad (30)$$
for all observed tasks i = 1, ..., n, where v ∼ N(v^Q, σ_v² I_{d_v}) and w ∼ N(w^Q, σ_w² I_{d_w}). Setting δ′ = δ_i/2 and combining inequality (29), we obtain
$$\mathbb{P}_{S_i\sim D_i^{m_i}}\bigg\{\mathbb{E}_{(x,y)\sim D_i}\,\mathbb{E}_{v}\,\mathbb{E}_{w}\,\ell(h_w(x), y) \le \frac{c_2}{1 - e^{-c_2}}\cdot\frac{1}{m_i}\sum_{j=1}^{m_i}\mathbb{E}_{v}\,\mathbb{E}_{w}\,\ell(h_w(x_j), y_j) + \frac{1}{(1 - e^{-c_2})\,m_i}\Big(\frac{1}{2\sigma_v^2}\|v^{\mathcal{Q}}\|^2 + \frac{1}{\sigma_w^2}\big\|\mathbb{E}_v\, w_i^Q - \bar{\Phi}_{v^{\mathcal{Q}}}(S_i)\big\|^2 + \log\frac{2}{\delta_i} + d_w K\Big(\frac{\sigma_v}{\sigma_w}\Big)^2 + \frac{1}{\sigma_w^2}\sum_{k=1}^{K}\Big(\frac{\alpha R}{\sqrt{m_{ik}}}\Big(1 + \sqrt{\tfrac{1}{2}\log\tfrac{2}{\delta_i}}\Big) + O_{\alpha,\beta}(\gamma, C)\Big)^2\Big),\ \forall\mathcal{Q}\bigg\} \ge 1 - \delta_i. \quad (31)$$
Using the notation of Section 3, this bound can be written compactly in terms of er(A_b(S_i, P_i)) and êr(A_b(S_i, P_i)), where v ∼ N(v^Q, σ_v² I_{d_v}), w^P = Φ_v(D), and P_i = N(w^P, σ_w² I_{d_w}); this is inequality (32).

Second step. Next, we bound the error due to observing a limited number of tasks from the environment. We reuse Theorem 4 with the following substitutions. The samples are (D_i, m_i, S_i), i = 1, ..., n, where (D_i, m_i) are sampled from the same meta-distribution τ and S_i ∼ D_i^{m_i}. The hypothesis is parameterized as Φ_v(D) with meta-learner parameter v. The loss function is g(f, X) ≜ E_{(x,y)∼D} E_{w∼N(w^Q, σ_w² I_{d_w})} ℓ(h_w(x), y), where w^Q = A_b(S_i, P_i). Letting π ≜ N(0, σ_v² I_{d_v}) be the prior over the meta-learner parameter, the following holds for any δ₀ > 0:
$$\mathbb{P}_{(D_i^{m_i})\sim\tau,\ S_i\sim D_i^{m_i}}\bigg\{\mathbb{E}_{(D,m)\sim\tau}\,\mathbb{E}_{S\sim D^m}\,\mathbb{E}_{v}\,\mathbb{E}_{w}\,\mathbb{E}_{(x,y)\sim D}\,\ell(h_w(x), y) \le \frac{c_1}{1 - e^{-c_1}}\cdot\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{v}\,\mathbb{E}_{w}\,\mathbb{E}_{(x,y)\sim D_i}\,\ell(h_w(x), y) + \frac{1}{(1 - e^{-c_1})\,n}\Big(\frac{1}{2\sigma_v^2}\|v^{\mathcal{Q}}\|^2 + \log\frac{1}{\delta_0}\Big),\ \forall\mathcal{Q}\bigg\} \ge 1 - \delta_0, \quad (33)$$
which, in the notation of Section 3, reads
$$\mathbb{P}\bigg\{er(\mathcal{Q}) \le \frac{c_1}{1 - e^{-c_1}}\cdot\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{v,\,w^P = \Phi_v(D),\,P_i}\ er\big(A_b(S_i, P_i)\big) + \frac{1}{(1 - e^{-c_1})\,n}\Big(\frac{1}{2\sigma_v^2}\|v^{\mathcal{Q}}\|^2 + \log\frac{1}{\delta_0}\Big),\ \forall\mathcal{Q}\bigg\} \ge 1 - \delta_0. \quad (34)$$

Finally, by the union bound we bound the probability of the intersection of the events in (32) and (34). For any δ > 0, set δ₀ ≜ δ/2 and δ_i ≜ δ/(2n) for i = 1, ..., n; combining the two steps yields exactly the bound stated in (9), which holds with probability at least 1 − δ. This is inequality (35). We can further simplify the notation and obtain
$$P_{(\mathcal{D}_i^{m_i})\sim\tau,\, S_i\sim\mathcal{D}_i^{m_i},\, i=1,\ldots,n}\Bigg\{ er(\mathcal{Q}) \le c_1' c_2'\, \widehat{er}(\mathcal{Q}) + \Bigg( \sum_{i=1}^n \frac{c_1' c_2'}{2 c_2 n m_i \sigma_v^2} + \frac{c_1'}{2 c_1 n \sigma_v^2} \Bigg) \|v^Q\|^2 + \sum_{i=1}^n \frac{c_1' c_2'}{c_2 n m_i \sigma_w^2} \big\|\mathbb{E}_v w_i^Q - \bar{\Phi}_{v^Q}(S_i)\big\|^2 + \mathrm{const}(\alpha, \beta, R, \delta, n, m_i), \ \forall \mathcal{Q} \Bigg\} \ge 1 - \delta, \tag{36}$$

where $c_1' = \frac{c_1}{1-e^{-c_1}}$ and $c_2' = \frac{c_2}{1-e^{-c_2}}$. This completes the proof.

D.4 PROOF OF THEOREM 1

Theorem 2 Let $Q = \mathcal{N}(w^Q, \sigma_w^2 I_{d_w})$ be the posterior of the base learner and $P = \mathcal{N}(w^P, \sigma_w^2 I_{d_w})$ the prior. The mean of the prior is sampled from the hyperposterior of the meta learner, $\mathcal{Q} = \mathcal{N}(w^Q, \sigma_w^2 I_{d_w})$. Given the hyperprior $\mathcal{P} = \mathcal{N}(0, \sigma_w^2 I_{d_w})$, for any hyperposterior $\mathcal{Q}$, any $c_1, c_2 > 0$, and any $\delta \in (0, 1]$, with probability at least $1-\delta$ we have

$$er(\mathcal{Q}) \le c_1' c_2'\, \widehat{er}(\mathcal{Q}) + \Bigg( \sum_{i=1}^n \frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2} + \frac{c_1'}{2 c_1 n \sigma_w^2} \Bigg) \|w^Q\|^2 + \sum_{i=1}^n \frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2} \big\|\mathbb{E}_{w^P} w_i^Q - w^Q\big\|^2 + \sum_{i=1}^n \frac{c_1' c_2'}{c_2 n m_i \sigma_w^2}\Big( \frac{1}{2} + \log\frac{2n}{\delta} \Big) + \frac{c_1'}{c_1 n \sigma_w^2} \log\frac{2}{\delta}, \tag{37}$$

where $c_1' = \frac{c_1}{1-e^{-c_1}}$ and $c_2' = \frac{c_2}{1-e^{-c_2}}$.

Proof Instead of generating the mean of the prior with a prior predictor, the vanilla meta-learning framework directly produces the mean of the prior $w^P$ by sampling from the hyperposterior $\mathcal{Q} = \mathcal{N}(w^Q, \sigma_w^2 I_{d_w})$. The base-learner algorithm $A_b(S, P)$ then uses the sample set $S$ and the prior $P = \mathcal{N}(w^P, \sigma_w^2 I_{d_w})$ to produce a posterior $Q = A_b(S, P) = \mathcal{N}(w^Q, \sigma_w^2 I_{d_w})$. As in the two-step proof of Theorem 2, we first obtain an intra-task bound by applying Theorem 4: for any $\delta_i > 0$,

$$P_{S_i\sim\mathcal{D}_i^{m_i}}\Bigg\{ \mathbb{E}_{(x,y)\sim\mathcal{D}_i}\,\mathbb{E}_{w^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\,\mathbb{E}_{w\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\, \ell(h_w(x), y) \le \frac{c_2}{1-e^{-c_2}} \cdot \frac{1}{m_i}\sum_{j=1}^{m_i} \mathbb{E}_{w^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\,\mathbb{E}_{w\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\, \ell(h_w(x_j), y_j) + \frac{1}{(1-e^{-c_2})\,m_i}\Big( \frac{1}{2\sigma_w^2}\|w^Q\|^2 + \mathbb{E}_{w_i^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})} \frac{1}{2\sigma_w^2}\|w_i^Q - w_i^P\|^2 + \log\frac{1}{\delta_i} \Big), \ \forall Q \Bigg\} \ge 1 - \delta_i. \tag{38}$$

The term $\mathbb{E}_{w_i^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})} \frac{1}{2\sigma_w^2}\|w_i^Q - w_i^P\|^2$ can be simplified as

$$\mathbb{E}_{w_i^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})} \frac{1}{2\sigma_w^2}\|w_i^Q - w_i^P\|^2 = \frac{1}{2\sigma_w^2}\Big( \mathbb{E}_{w^P}\|w_i^Q\|^2 - 2\big(\mathbb{E}_{w^P} w_i^Q\big)^\top w^Q + \|w^Q\|^2 + \mathbb{V}_{w_i^P}\big[\|w_i^P\|\big] \Big) = \frac{1}{2\sigma_w^2}\Big( \big\|\mathbb{E}_{w^P} w_i^Q - w^Q\big\|^2 + \sigma_w^2 \Big), \tag{39}$$

where $\mathbb{V}_{w_i^P}[\|w_i^P\|]$ denotes the variance of $\|w_i^P\|$. We then obtain an inter-task bound: for any $\delta_0 > 0$,

$$P_{(\mathcal{D}_i^{m_i})\sim\tau,\, S_i\sim\mathcal{D}_i^{m_i},\, i=1,\ldots,n}\Bigg\{ \mathbb{E}_{(\mathcal{D},m)\sim\tau}\,\mathbb{E}_{S\sim\mathcal{D}^m}\,\mathbb{E}_{w^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\,\mathbb{E}_{w\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\,\mathbb{E}_{(x,y)\sim\mathcal{D}}\, \ell(h_w(x), y) \le \frac{c_1}{1-e^{-c_1}} \cdot \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{w^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\,\mathbb{E}_{w\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w})}\,\mathbb{E}_{(x,y)\sim\mathcal{D}_i}\, \ell(h_w(x), y) + \frac{1}{(1-e^{-c_1})\,n}\Big( \frac{1}{2\sigma_w^2}\|w^Q\|^2 + \log\frac{1}{\delta_0} \Big), \ \forall \mathcal{Q} \Bigg\} \ge 1 - \delta_0. \tag{40}$$

For any $\delta > 0$, set $\delta_0 \triangleq \delta/2$ and $\delta_i \triangleq \delta/(2n)$ for $i = 1, \ldots, n$. Using the union bound, we finally get

$$P_{(\mathcal{D}_i^{m_i})\sim\tau,\, S_i\sim\mathcal{D}_i^{m_i},\, i=1,\ldots,n}\Bigg\{ er(\mathcal{Q}) \le \frac{c_1 c_2}{(1-e^{-c_1})(1-e^{-c_2})} \cdot \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{w^P\sim\mathcal{N}(w^Q,\sigma_w^2 I_{d_w}),\, P_i=\mathcal{N}(w^P,\sigma_w^2 I_{d_w})}\, \widehat{er}(A_b(S_i, P_i)) + \frac{c_1}{1-e^{-c_1}} \cdot \frac{1}{n}\sum_{i=1}^n \frac{1}{(1-e^{-c_2})\,m_i}\Big( \frac{1}{2\sigma_w^2}\|w^Q\|^2 + \frac{1}{2\sigma_w^2}\big\|\mathbb{E}_{w^P} w_i^Q - w^Q\big\|^2 + \frac{1}{2} + \log\frac{2n}{\delta} \Big) + \frac{1}{(1-e^{-c_1})\,n}\Big( \frac{1}{2\sigma_w^2}\|w^Q\|^2 + \log\frac{2}{\delta} \Big), \ \forall \mathcal{Q} \Bigg\} \ge 1 - \delta. \tag{41}$$

Similarly, we can further simplify the notation and obtain

$$P_{(\mathcal{D}_i^{m_i})\sim\tau,\, S_i\sim\mathcal{D}_i^{m_i},\, i=1,\ldots,n}\Bigg\{ er(\mathcal{Q}) \le c_1' c_2'\, \widehat{er}(\mathcal{Q}) + \Bigg( \sum_{i=1}^n \frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2} + \frac{c_1'}{2 c_1 n \sigma_w^2} \Bigg) \|w^Q\|^2 + \sum_{i=1}^n \frac{c_1' c_2'}{2 c_2 n m_i \sigma_w^2} \big\|\mathbb{E}_{w^P} w_i^Q - w^Q\big\|^2 + \mathrm{const}(\delta, n, m_i), \ \forall \mathcal{Q} \Bigg\} \ge 1 - \delta, \tag{42}$$

where $c_1' = \frac{c_1}{1-e^{-c_1}}$ and $c_2' = \frac{c_2}{1-e^{-c_2}}$. This completes the proof.

E DETAILS OF EXPERIMENTS

While the theorems assume a bounded loss, we use an unbounded loss in our experiments; we can still obtain theoretical guarantees on a variant of the loss clipped to $[0, 1]$.
Besides, in practice the loss function is almost always smaller than one.

E.1 DATA PREPARATION

We used a 5-way 50-shot classification setup, where each task instance involves classifying images from 5 different categories sampled randomly from one of the meta-sets. We did not employ any data augmentation or feature averaging during meta-training, nor any data apart from the corresponding training and validation meta-sets.

E.2 NETWORK ARCHITECTURE

Auto-Encoder for LCC For CIFAR100, the encoder has 7 layers with 16-32-64-64-128-128-256 channels. Each convolutional layer is followed by a LeakyReLU activation and a batch normalization layer. The 1st, 3rd, and 5th layers have stride 1 and kernel size (3, 3). The 2nd, 4th, and 6th layers have stride 2 and kernel size (4, 4). The 7th layer has stride 1 and kernel size (4, 4). The decoder mirrors the encoder with the layers in reverse order. The input is resized to 32 × 32. For Caltech-256, the encoder has 5 layers with 32-64-128-256-256 channels. Each convolutional layer is followed by a LeakyReLU activation and a batch normalization layer. The first 4 layers have stride 2 and kernel size (4, 4). The last layer has stride 1 and kernel size (6, 6). The decoder mirrors the encoder with the layers in reverse order. The input is resized to 96 × 96.

Base Model The network architecture used for the classification task is a small CNN with 4 convolutional layers, each with 32 filters, and a linear output layer, similar to Finn et al. (2017). Each convolutional layer is followed by a batch normalization layer, a LeakyReLU layer, and a max-pooling layer. For CIFAR100, the input is resized to 32 × 32; for Caltech-256, to 96 × 96.

E.3 OPTIMIZATION

Auto-Encoder for LCC As optimizer we used Adam (Kingma & Ba, 2015) with β1 = 0.9 and β2 = 0.999. The initial learning rate is 1 × 10−4, the number of epochs is 100, and the batch size is 512.

LCC Training We alternately train the coefficients and bases of LCC with Adam (β1 = 0.9, β2 = 0.999). Specifically, for both datasets, we alternately update the coefficients for 60 steps and then the bases for 60 steps. The number of training epochs is 3, the number of bases is 64, and the batch size is 256.

Pre-Training of Feature Extractor We use 64-way classification on CIFAR-100 and 150-way classification on Caltech-256 to pre-train the feature embedding on the meta-training dataset only. For both CIFAR100 and Caltech-256, an L2 regularization term of 5e−4 was used. We used the Adam optimizer with initial learning rate 1 × 10−3, β1 = 0.9, and β2 = 0.999. The number of epochs is 50 and the batch size is 512.

Meta-Training We use the cross-entropy loss as in Amit & Meir (2018). Although this is inconsistent with the bounded-loss setting of our theoretical framework, we can still obtain a guarantee on a variant of the loss clipped to [0, 1]; in practice, the loss is almost always smaller than one. For CIFAR100 and Caltech-256, the number of epochs is 12 in the meta-training phase and 40 in the meta-testing phase. The batch size is 32 for both datasets. As optimizer we used Adam with β1 = 0.9 and β2 = 0.999.
In the setting with a pre-trained base model, the learning rate is 1 × 10−5 for the convolutional layers and 5 × 10−4 for the linear output layer. In the setting without a pre-trained base model, the learning rate is 1 × 10−3 for the convolutional layers and 5 × 10−3 for the linear output layer. The confidence parameter is chosen as δ = 0.1. The variance hyperparameters for the prior predictor and the base model are σw = σv = 0.01. The hyperparameters α1, α2 in LML and ML-A are set to 0.01.

E.4 MORE EXPERIMENTAL RESULTS

We also compare with two typical meta-learning few-shot learning methods: MAML (Finn et al., 2017) and MatchingNet (Vinyals et al., 2016). Both methods use the Adam optimizer with initial learning rate 0.0001. In the meta-training phase, we randomly split the samples of each class into a support set (5 samples) and a query set (45 samples). The number of epochs is 100. For MAML, the learning rate of the inner update is 0.01.

In Figure 5, we plot the average test error of learning a new task as a function of the number of training tasks, together with the standard deviation, in different settings (with or without a pre-trained feature extractor). We find that all PAC-Bayes baselines outperform MAML and MatchingNet. Note that MAML and MatchingNet adopt the episodic training paradigm to solve the few-shot learning problem; their meta-training process requires millions of tasks, each containing limited samples, which is not the case in our experiments. Scarce tasks in meta-training lead to severe meta-overfitting. In our method, the learned prior serves both as an initialization of the base model and as a regularizer that restricts the solution space in a soft manner while allowing variation based on task-specific data. It yields a model with smaller error than its unbiased counterpart when applied to a similar task.

F PSEUDO CODE

Algorithm 1 Localized Meta-Learning (LML) algorithm
  Input: Data sets of observed tasks: S1, ..., Sn.
  Output: Learned prior predictor Φ̄ parameterized by v.
  Initialize v ∈ R^{d_v} and w_i ∈ R^{d_w} for i = 1, ..., n.
  Construct the LCC scheme (γ, C) from the whole training data by optimizing Eq. (12).
  while not converged do
    for each task i ∈ {1, ..., n} do
      Sample a random mini-batch from the data S′_i ⊂ S_i.
      Approximate E_v êr_i(w_i) using S′_i.
    end for
    Compute the objective in Eq. (11): J ← Σ_{i=1}^n E_v êr_i(w_i) + α1 ‖v^Q‖² + Σ_{i=1}^n (α2 / m_i) ‖E_v w_i^Q − Φ̄_{v^Q}(S_i)‖².
    Evaluate the gradient of J w.r.t. {v, w_1, ..., w_n} using backpropagation.
    Take an optimization step.
  end while
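For concreteness, here is a minimal PyTorch sketch of the training loop in Algorithm 1 under strong simplifications: features are random stand-ins for embedded images, the LCC-based prior predictor Φ̄v is approximated by a single linear map applied to class-mean features, and the posterior means are treated as point estimates (no sampling of v or w). All module names, shapes, and the mini-batch size are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_classes, n_tasks, alpha1, alpha2 = 64, 5, 10, 0.01, 0.01

# Toy tasks: (features, labels) with 50 samples per class (hypothetical stand-ins).
tasks = [(torch.randn(50 * n_classes, feat_dim),
          torch.arange(n_classes).repeat_interleave(50)) for _ in range(n_tasks)]

prior_predictor = nn.Linear(feat_dim, feat_dim)                 # plays the role of Phi_v
base_weights = [nn.Parameter(torch.zeros(n_classes, feat_dim))  # w_i, one per task
                for _ in range(n_tasks)]
opt = torch.optim.Adam([*prior_predictor.parameters(), *base_weights], lr=1e-3)

for step in range(100):
    loss = torch.zeros(())
    for (x, y), w in zip(tasks, base_weights):
        idx = torch.randperm(x.size(0))[:32]                    # mini-batch S'_i
        er_hat = F.cross_entropy(x[idx] @ w.t(), y[idx])        # approximates E_v er_i(w_i)
        # Class-wise mean features stand in for the LCC code of S_i in this toy setting.
        class_means = torch.stack([x[y == c].mean(0) for c in range(n_classes)])
        prior_mean = prior_predictor(class_means)               # stand-in for Phi_bar_v(S_i)
        loss = loss + er_hat + (alpha2 / x.size(0)) * (w - prior_mean).pow(2).sum()
    loss = loss + alpha1 * sum(p.pow(2).sum() for p in prior_predictor.parameters())
    opt.zero_grad(); loss.backward(); opt.step()
```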
The paper presents an algorithm for offline meta-learning, where tasks are drawn from a distribution and presented to a learner sequentially, the objective being to use accumulated knowledge to facilitate the learning of new tasks. The algorithm is motivated by the PAC-Bayes theory of generalization, extended in recent years to the meta-learning setup, and provides a new localized approach in which the prior used for a novel task is allowed to depend on the data from the new task in addition to all previous tasks. The authors make use of a previously proposed local learning method (LCC) that adds flexibility and tightens the meta-learning bounds, and derive a learning algorithm based on minimizing the resulting upper bound. Empirical results are provided demonstrating the efficacy of the method in comparison to other recent approaches.
SP:c7ca1f4cc4801fa55cf90d97980f49bf144a1a4c
Diverse Exploration via InfoMax Options
1 INTRODUCTION

Abstracting a course of action as a higher-level action, or an option (Sutton et al., 1999), is a key ability for reinforcement learning (RL) agents in several respects, including exploration. In RL problems, an agent learns to approximate an optimal policy only from experience, given no prior knowledge. This leads to the necessity of exploration: an agent needs to explore poorly known states to collect environmental information, sometimes sacrificing immediate rewards. For statistical efficiency, it is important to explore the state space in a deep and directed manner rather than by taking uniformly random actions (Osband et al., 2019). Options can represent such directed behaviors by capturing long state jumps from their starting regions to their terminating regions. It has been shown that well-defined options can facilitate exploration by exploiting environmental structure (Barto et al., 2013) or, more generally, by reducing decision steps (Fruit and Lazaric, 2017). A key requirement for such explorative options is diversity. If all options have the same terminating region, they will never encourage exploration. Instead, options should lead to a variety of regions in order to encourage exploration. However, automatically discovering diverse options in a scalable, online manner is challenging due to two difficulties: generalization and data limitation. Generalization with function approximation (Sutton, 1995) is important for scaling up RL methods to large or continuous domains. However, most existing option discovery methods for exploration are graph-based (e.g., Machado et al. (2017)) and incompatible with function approximation, except for the method of Jinnai et al. (2020). Discovering options online, in parallel with policies, requires us to work with the limited data sampled from the environment and to train the model that evaluates diversity in a data-efficient manner. To address these difficulties, we introduce the infomax termination objective, defined as the mutual information (MI) between options and their corresponding state transitions. This formulation reflects a simple inductive bias: to encourage exploration, options should terminate in a variety of regions per starting region. Thanks to the information-theoretic formulation, this objective is compatible with function approximation and scales to continuous domains. A key technical contribution of this paper is the optimization scheme for maximizing this objective. Specifically, we employ a simple classification model over options as a critic for termination conditions, which makes our method data-efficient and tractable in many domains. The paper is organized as follows. After introducing background and notation, we present the infomax termination objective and derive a practical optimization scheme using the termination gradient theorem (Harutyunyan et al., 2019). We then implement the infomax objective on the option-critic architecture (OC) (Bacon et al., 2017) with algorithmic modifications, yielding the InfoMax Option Critic (IMOC) algorithm. Empirically, we show that (i) IMOC improves exploration in structured environments, (ii) IMOC improves exploration in lifelong learning, (iii) IMOC scales to MuJoCo continuous control tasks, and (iv) the options learned by IMOC are diverse and meaningful. We then relate our method to other option-learning methods and to the empowerment concept (Klyubin et al.,
2005), and finally give concluding remarks.

2 BACKGROUND AND NOTATION

We assume the standard RL setting in a Markov decision process (MDP), following Sutton and Barto (2018). An MDP $\mathcal{M}$ consists of a tuple $(\mathcal{X}, \mathcal{A}, p, r, \gamma)$, where $\mathcal{X}$ is the set of states, $\mathcal{A}$ is the set of actions, $p : \mathcal{X} \times \mathcal{A} \times \mathcal{X} \rightarrow [0, 1]$ is the state transition function, $r : \mathcal{X} \times \mathcal{A} \rightarrow [r_{\min}, r_{\max}]$ is the reward function, and $0 \le \gamma \le 1$ is the discount factor. A policy is a probability distribution over actions conditioned on a state $x$, $\pi : \mathcal{X} \times \mathcal{A} \rightarrow [0, 1]$. For simplicity, we consider the episodic setting where each episode ends when a terminal state $x_T$ is reached. In this setting, the goal of an RL agent is to approximate a policy that maximizes the expected discounted cumulative reward per episode:

$$J_{\mathrm{RL}}(\pi) = \mathbb{E}_{\pi, x_0}\Bigg[\sum_{t=0}^{T-1} \gamma^t R_t\Bigg], \tag{1}$$

where $R_t = r(x_t, a_t)$ is the reward received at time $t$ and $x_0$ is the initial state of the episode. Relatedly, we define the action-value function $Q^\pi(x_t, a_t) \triangleq \mathbb{E}_{x_t, a_t, \pi}\big[\sum_{t'=t}^{T-1} \gamma^{t'-t} R_{t'}\big]$ and the state-value function $V^\pi(x_t) \triangleq \sum_a \pi(a|x_t)\, Q^\pi(x_t, a)$. Assuming that $\pi$ is differentiable in the policy parameters $\theta_\pi$, a simple way to maximize the objective (1) is the policy gradient method (Williams, 1992), which estimates the gradient by

$$\nabla_{\theta_\pi} J_{\mathrm{RL}}(\pi) = \mathbb{E}_{\pi, x_t}\big[\nabla_{\theta_\pi} \log \pi(a_t|x_t)\, \hat{A}(x_t, a_t)\big], \tag{2}$$

where $\hat{A}(x_t, a_t)$ is an estimate of the advantage function $A^\pi(x_t, a_t) \triangleq Q^\pi(x_t, a_t) - V^\pi(x_t)$. A common choice of $\hat{A}(x_t, a_t)$ is the $N$-step TD error $\sum_{i=0}^{N-1} \gamma^i R_{t+i} + \gamma^N \hat{V}(x_{t+N}) - \hat{V}(x_t)$, where $N$ is a fixed rollout length (Mnih et al., 2016).

2.1 OPTIONS FRAMEWORK

Options (Sutton et al., 1999) provide a framework for representing temporally abstracted actions in RL. An option $o \in \mathcal{O}$ consists of a tuple $(\mathcal{I}_o, \beta^o, \pi^o)$, where $\mathcal{I}_o \subseteq \mathcal{X}$ is the initiation set, $\beta^o : \mathcal{X} \rightarrow [0, 1]$ is a termination function with $\beta^o(x)$ denoting the probability that option $o$ terminates in state $x$, and $\pi^o$ is the intra-option policy. Following related studies (Bacon et al., 2017; Harutyunyan et al., 2019), we assume that $\mathcal{I}_o = \mathcal{X}$ and learn only $\beta^o$ and $\pi^o$. Letting $x_s$ denote an option-starting state and $x_f$ an option-terminating state, we can write the option transition function as

$$P^o(x_f | x_s) = \beta^o(x_f)\, \mathbb{I}_{x_f = x_s} + (1 - \beta^o(x_s)) \sum_x p^{\pi^o}(x | x_s)\, P^o(x_f | x), \tag{3}$$

where $\mathbb{I}$ is the indicator function and $p^{\pi^o}$ is the policy-induced transition function $p^{\pi^o}(x'|x) \triangleq \sum_{a\in\mathcal{A}} \pi^o(a|x)\, p(x'|x, a)$. We assume that all options eventually terminate, so that $P^o$ is a valid probability distribution over $x_f$, following Harutyunyan et al. (2019). To present option-learning methods, we define two option-value functions, $Q_{\mathcal{O}}$ and $U_{\mathcal{O}}$, where $Q_{\mathcal{O}}$ denotes the value of selecting an option $o$ at a state $x_t$, defined by $Q_{\mathcal{O}}(x_t, o) \triangleq \mathbb{E}_{\pi, \beta, \mu}\big[\sum_{t'=t}^{T-1} \gamma^{t'-t} R_{t'}\big]$. Analogously to $Q^\pi$ and $V^\pi$, we let $V_{\mathcal{O}}$ denote the marginalized option-value function $V_{\mathcal{O}}(x) \triangleq \sum_o \mu(o|x)\, Q_{\mathcal{O}}(x, o)$, where $\mu : \mathcal{X} \times \mathcal{O} \rightarrow [0, 1]$ is the policy over options. The function $U_{\mathcal{O}}(x, o) \triangleq (1 - \beta^o(x))\, Q_{\mathcal{O}}(x, o) + \beta^o(x)\, V_{\mathcal{O}}(x)$ is called the option-value function upon arrival (Sutton et al., 1999) and denotes the value of reaching a state $x_t$ with $o$ and not yet having selected a new option.

2.2 OPTION CRITIC ARCHITECTURE

OC (Bacon et al., 2017) provides an end-to-end algorithm for learning $\pi^o$ and $\beta^o$ in parallel.
To optimize $\pi^o$, OC uses the intra-option policy gradient method, the option-conditional version of the gradient estimator (2): $\nabla_{\theta_{\pi^o}} J_{\mathrm{RL}}(\pi^o) = \mathbb{E}\big[\nabla_{\theta_{\pi^o}} \log \pi^o(a_t|x_t)\, \hat{A}^o(x_t, a_t)\big]$, where $\hat{A}^o$ is an estimate of the option-conditional advantage $A^{\pi^o}$. For optimizing $\beta^o$, OC directly maximizes $Q_{\mathcal{O}}$ using the estimated gradient

$$\nabla_{\theta_{\beta^o}} Q_{\mathcal{O}}(x, o) = \gamma\, \mathbb{E}\big[-\nabla_{\theta_{\beta^o}} \beta^o(x)\, (Q_{\mathcal{O}}(x, o) - V_{\mathcal{O}}(x))\big]. \tag{4}$$

Intuitively, this decreases the termination probability $\beta^o(x)$ when holding $o$ is advantageous, i.e., when $Q_{\mathcal{O}}(x, o) - V_{\mathcal{O}}(x)$ is positive, and vice versa. Our method basically follows OC but uses a different objective for learning $\beta^o$.

2.3 TERMINATION CRITIC

The recently proposed termination critic (TC) (Harutyunyan et al., 2019) optimizes $\beta^o$ by maximizing an information-theoretic objective called predictability:

$$J^{\mathrm{TC}}(P^o) = -H(X_f | o), \tag{5}$$

where $H$ denotes entropy and $X_f$ is the random variable denoting option-terminating states. Maximizing $-H(X_f|o)$ makes the terminating region of an option smaller and more predictable. In other words, we can compress terminating regions by optimizing the objective (5). To differentiate this objective with respect to the parameters $\theta_{\beta^o}$, Harutyunyan et al. (2019) introduced the termination gradient theorem:

Theorem 1. Let $\beta^o$ be parameterized with a sigmoid function and let $\ell_{\beta^o}$ denote the logit of $\beta^o$. We have

$$\nabla_{\theta_\beta} P^o(x_f | x_s) = \sum_x P^o(x | x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \big(\mathbb{I}_{x_f = x} - P^o(x_f | x)\big). \tag{6}$$

Leveraging Theorem 1, TC performs gradient ascent using the estimated gradient

$$\nabla_{\theta_{\beta^o}} J^{\mathrm{TC}}(P^o) = -\mathbb{E}_{x_s, x, x_f}\Bigg[\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x)\, \Bigg( \big(\log P^o_\mu(x) - \log P^o_\mu(x_f)\big) + \bigg(1 - \frac{P^o(x_f | x_s)\, P^o_\mu(x)}{P^o_\mu(x_f)\, P^o(x | x_s)}\bigg) \Bigg)\Bigg],$$

where $P^o_\mu(x)$ is the marginalized distribution of option-terminating states. Contrary to the termination objective of OC (4), this objective does not depend on state values, making the learned options robust to the reward structure of the environment. Our method is inspired by TC and optimizes a similar information-theoretic objective, not for predictability but for diversity. Moreover, our infomax objective requires an estimate of $\hat{p}(o | x_s, x_f)$ instead of the option transition model $P^o(x_f | x_s)$, which makes our method tractable in more environments.

3 INFOMAX OPTION CRITIC

We now present the key idea behind the InfoMax Option Critic (IMOC) algorithm. We first formulate the infomax termination objective based on MI maximization, then derive a practical gradient estimator for maximizing this objective with respect to $\beta^o$, utilizing the termination gradient theorem (Theorem 1). To evaluate the diversity of options, we use the MI between options and option-terminating states conditioned on option-starting states:

$$J^{\mathrm{IMOC}} = I(X_f; O | X_s) = H(X_f | X_s) - H(X_f | X_s, O), \tag{7}$$

where $I$ denotes the conditional MI $I(A; B|Z) = H(A|Z) - H(A|B, Z)$, $X_s$ is the random variable denoting option-starting states, and $O$ is the random variable denoting options. We call this objective the infomax termination objective. Let us interpret $X_f | X_s$ as the random variable denoting a state transition induced by an option. Maximizing the MI (7) then (i) diversifies state transitions $X_f | X_s$ and (ii) makes option-conditional state transitions $X_f | X_s, o$ more deterministic. Note that the marginalized MI $I(X_f; O)$ also makes sense in that it prevents the terminating region of each option from being too broad, as predictability (5) does.
However, in this study we focus on the conditional objective, since it is easier to optimize. To illustrate the limitations of infomax options, we conducted an analysis in a toy four-state deterministic chain environment, which has four states and two deterministic actions (go left and go right) in each state. Since deriving the exact solution is computationally difficult, we searched for options that maximize $H(X_f|X_s)$ among deterministic options, i.e., options with deterministic option-policies and termination functions (and thus minimal $H(X_f|X_s, O)$). Among multiple solutions, Figure 1 shows two interesting instances of deterministic infomax options when $|\mathcal{O}| = 2$. The left options enable diverse behaviors per state, although they fail to capture the long-term behaviors generally favored in the literature (e.g., Mann et al. (2015)). The right options, on the other hand, enable relatively long, two-step state transitions, but the two options coincide in the rightmost state and the state next to it. Furthermore, an agent can be caught in a small loop consisting of the leftmost state and the state next to it. This example shows that (i) we can obtain short and diverse options with only a few options, (ii) to obtain long and diverse options we need sufficiently many options, and (iii) an agent can be caught in a small loop with only a few options, failing to visit diverse states. As we show in Appendix A, this "small loop" problem cannot happen with four options. Thus, the number of options is important when we are to maximize the MI (7), and it is a limitation of this method. In our experiments, however, we show that we can learn diverse options in practice with a relatively small number of options. To maximize the MI by gradient ascent, we now derive the gradient of the infomax termination objective (7). First, we estimate the gradient of the objective using the option transition model $P^o$ and the marginalized option-transition model $P(x_f|x_s) = \sum_o \mu(o|x_s)\, P^o(x_f|x_s)$.

Proposition 1. Let $\beta^o$ be parameterized with a sigmoid function. Given a trajectory $\tau = x_s, \ldots, x, \ldots, x_f$ sampled by $\pi^o$ and $\beta^o$, we can obtain unbiased estimates of $\nabla_{\theta_\beta} H(X_f|X_s)$ and $\nabla_{\theta_\beta} H(X_f|X_s, O)$ by

$$\nabla_{\theta_\beta} H(X_f|X_s) = \mathbb{E}_{x_s, x, x_f, o}\big[-\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x)\, (\log P(x|x_s) - \log P(x_f|x_s))\big] \tag{8}$$
$$\nabla_{\theta_\beta} H(X_f|X_s, O) = \mathbb{E}_{x_s, x, x_f, o}\big[-\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x)\, (\log P^o(x|x_s) - \log P^o(x_f|x_s))\big] \tag{9}$$

where $\ell_{\beta^o}(x)$ denotes the logit of $\beta^o(x)$. Note that the additional factor $\beta^o$ is necessary because $x$ is not actually a terminating state. The proof follows Section 4 of Harutyunyan et al. (2019) and is given in Appendix B.1. The estimated gradient of the infomax termination objective (7) can now be written as

$$\nabla_{\theta_\beta} I(X_f; O|X_s) = \nabla_{\theta_\beta} H(X_f|X_s) - \nabla_{\theta_\beta} H(X_f|X_s, O)$$
$$= \mathbb{E}_{x_s, x, x_f, o}\big[-\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x)\, \big(\log P(x|x_s) - \log P(x_f|x_s) - (\log P^o(x|x_s) - \log P^o(x_f|x_s))\big)\big], \tag{10}$$

which means that we can optimize this objective by estimating $P^o$ and $P$. However, estimating probabilities over the state space can be difficult, especially when the state space is large, as is common in the deep RL setting. Hence, we reformulate the gradient using Bayes' rule, in a similar way to Gregor et al. (2017). The resulting expression involves the reverse option transition $p(o|x_s, x_f)$, the probability of having option $o$ given a state transition $x_s, x_f$.

Proposition 2.
We now have

$$\nabla_{\theta_\beta} I(X_f; O|X_s) = \nabla_{\theta_\beta} H(X_f|X_s) - \nabla_{\theta_\beta} H(X_f|X_s, O) = \mathbb{E}_{x_s, x, x_f, o}\big[\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x)\, (\log p(o|x_s, x) - \log p(o|x_s, x_f))\big]. \tag{11}$$

The proof is given in Appendix B.2. In the following sections, we estimate the gradient (11) by learning a classification model over options $\hat{p}(o|x_s, x_f)$ from sampled option transitions.
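As an illustration of how the estimator in Eq. (11) might be used in practice, the sketch below implements a surrogate loss whose gradient with respect to the termination parameters matches Eq. (11) with the sign flipped, so that minimizing it performs gradient ascent on $I(X_f; O|X_s)$. The network shapes and the classifier architecture are assumptions; the classifier for $\hat{p}(o|x_s, x_f)$ would be trained separately (e.g., with cross-entropy on sampled option transitions), which is why its output is detached here.

```python
import torch
import torch.nn as nn

state_dim, n_options = 16, 4
beta_logits = nn.Linear(state_dim, n_options)          # l_{beta^o}(x) for every option o
classifier = nn.Sequential(nn.Linear(2 * state_dim, 64), nn.ReLU(),
                           nn.Linear(64, n_options))   # logits of p_hat(o | x_s, x_f)

def termination_loss(x_s, x, x_f, o):
    """Surrogate whose gradient w.r.t. the beta parameters matches Eq. (11),
    sign-flipped so that minimizing it ascends I(X_f; O | X_s)."""
    logit = beta_logits(x).gather(1, o[:, None]).squeeze(1)       # l_{beta^o}(x)
    beta = torch.sigmoid(logit)
    log_p = lambda a, b: classifier(torch.cat([a, b], dim=1)).log_softmax(1)
    delta = (log_p(x_s, x).gather(1, o[:, None])
             - log_p(x_s, x_f).gather(1, o[:, None])).squeeze(1)  # log-prob difference
    weight = (beta * delta).detach()                   # fixed coefficient, as in Eq. (11)
    return -(logit * weight).mean()

# Usage on a fabricated batch of option transitions (x_s, x, x_f, o):
x_s, x, x_f = (torch.randn(32, state_dim) for _ in range(3))
o = torch.randint(n_options, (32,))
termination_loss(x_s, x, x_f, o).backward()
```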
The authors propose a modification of the option-critic algorithm for hierarchical reinforcement learning. The proposed algorithm changes how the termination conditions of the options are improved from experience: specifically, it aims to maximize the mutual information between the options and their termination states. The authors develop an optimization scheme for achieving this objective and provide empirical results in a number of domains.
SP:f24872ae71c6883964f865312805f1c969e97d2c
Learning Long-term Visual Dynamics with Region Proposal Interaction Networks
1 INTRODUCTION

As argued by Kenneth Craik, if an organism carries a model of external reality and of its own possible actions within its head, it is able to react in a much fuller, safer, and more competent manner to the emergencies which face it (Craik, 1952). Indeed, building prediction models has long been studied in computer vision and intuitive physics. In computer vision, most approaches make predictions in pixel space (Denton & Fergus, 2018; Lee et al., 2018; Ebert et al., 2018b; Jayaraman et al., 2019; Walker et al., 2016), which ends up capturing optical flow (Walker et al., 2016) and is difficult to generalize to long horizons. In intuitive physics, a common approach is to learn the dynamics directly in an abstracted state space of objects to capture Newtonian physics (Battaglia et al., 2016; Chang et al., 2016; Sanchez-Gonzalez et al., 2020). However, the states end up being detached from raw sensory perception. Unfortunately, these two extremes have barely been connected. In this paper, we argue for a middle ground that treats images as a window into the world: objects exist but can only be accessed via images, which are neither to be used for predicting pixels nor to be isolated from dynamics. We operationalize this by learning to extract a rich state representation directly from images and building dynamics models on the extracted state representations.

"It is difficult to make predictions, especially about the future." — Niels Bohr

Contrary to Niels Bohr, predictions are, in fact, easy if made only for the short term. The predictions that are genuinely difficult to make, and that actually matter, are those made over the long term. Consider the example of "three-cushion billiards" in Figure 1. The goal is to hit the cue ball in such a way that it touches the other two balls and contacts the wall thrice before hitting the last ball. This task is extremely challenging even for human experts because successful trajectories are very sparse. Do players perform classical Newtonian physics calculations to obtain the best action before each shot, or do they just memorize the solution by practicing through exponentially many configurations? Both extremes are not impossible, but often impractical. Players rather build a physical understanding by experience (McCloskey, 1983; Kubricht et al., 2017) and plan by making intuitive, yet accurate, long-term predictions. Learning such a long-term prediction model is arguably the "Achilles' heel" of modern machine learning methods. Current approaches to learning the physical dynamics of the world cleverly side-step the long-term dependency by re-planning at each step via model-predictive control (MPC) (Allgöwer & Zheng, 2012; Camacho & Alba, 2013). The common practice is to train short-term dynamics models (usually 1-step) in a simulator. However, small errors in short-term predictions can accumulate over time in MPC. Hence, in this work, we focus primarily on the long-term aspect of prediction by considering only environments, such as the three-cushion billiards example or PHYRE (Bakhtin et al., 2019) in Figure 1, where an agent is allowed to take a single action at the beginning so as to preclude any scope for re-planning. How to learn an accurate dynamics model has been a popular research topic for years. Recently, a series of works has tried to represent video frames using object-centric representations (Battaglia et al.,
2016; Watters et al., 2017; Chang et al., 2016; Janner et al., 2019; Ye et al., 2019; Kipf et al., 2020). However, these methods either operate in the state space or ignore environment information, neither of which is practical in real-world scenarios. In contrast, our objective is to build a data-driven prediction model that can both (a) model long-term interactions over time to plan successfully for new instances, and (b) work from raw visual input in complex real-world environments. The question we ask is therefore: how do we extract such an effective and flexible object representation and perform long-term predictions? We propose the Region Proposal Interaction Network (RPIN), which contains two key components. First, we leverage the region-of-interest pooling (RoIPooling) operator (Girshick, 2015) to extract object feature maps from the frame-level feature. Object feature extraction based on region proposals has achieved huge success in computer vision (Girshick, 2015; He et al., 2017; Dai et al., 2017; Gkioxari et al., 2019), and yet it is surprisingly under-explored in the field of intuitive physics. By using RoIPooling, each object feature contains not only its own information but also the context of the environment. Second, we extend Interaction Networks and propose Convolutional Interaction Networks, which perform interaction reasoning on the extracted RoI features. Interaction Networks were originally proposed by Battaglia et al. (2016), where interaction reasoning is conducted via MLPs. By replacing the MLPs with convolutions, we can effectively utilize the spatial information of an object and make accurate predictions of future object locations and shape changes. Notably, our approach is simple, yet outperforms state-of-the-art methods on both simulated and real datasets. In Section 5, we thoroughly evaluate our approach across four datasets to study questions related to (a) prediction quality, (b) generalization to time horizons longer than training, (c) generalization to unseen configurations, and (d) planning ability for downstream tasks. Our method reduces the prediction error by 75% in the complex PHYRE environment and achieves state-of-the-art performance on the PHYRE reasoning benchmark.

2 RELATED WORK

Physical Reasoning and Intuitive Physics. Learning models that can predict the changing dynamics of a scene is the key to building physical common sense. Such models date back to "NeuroAnimator" (Grzeszczuk et al., 1998) for simulating articulated objects. Several methods in recent years have leveraged deep networks to build data-driven models of intuitive physics (Bhattacharyya et al., 2016; Ehrhardt et al., 2017; Fragkiadaki et al., 2015; Chang et al., 2016; Stewart & Ermon, 2017). However, these methods either require access to the underlying ground-truth state space or do not scale to long horizons due to the absence of interaction reasoning. A more generic yet explicit approach has been to leverage graph neural networks (Scarselli et al., 2009) to capture interactions between entities in a scene (Battaglia et al., 2018; Chang et al., 2016). Closest to our approach are interaction models that scale to pixels and reason about object interaction (Watters et al., 2017; Ye et al., 2019). However, these approaches either reason about object crops with no surrounding context or can only deal with a predetermined number and ordering of objects.
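To make this contrast concrete, here is a minimal sketch of context-aware object feature extraction in the spirit of the RoIPooling step described above, using torchvision's RoIAlign as the pooling operator. The backbone output, box format, image size, and crop size are all illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from torchvision.ops import roi_align

B, C, H, W, n_obj = 2, 64, 32, 32, 3
frame_feat = torch.randn(B, C, H, W)                   # frame-level feature map
# One [n_obj, 4] box tensor (x1, y1, x2, y2) in image coordinates per frame.
boxes = [torch.tensor([[8., 8., 24., 24.]] * n_obj) for _ in range(B)]

# Each object feature is pooled from the shared frame feature map, so it carries
# surrounding context (e.g., walls) rather than being an isolated object crop.
obj_feat = roi_align(frame_feat, boxes, output_size=(5, 5),
                     spatial_scale=H / 128.0)          # image assumed to be 128x128
obj_feat = obj_feat.view(B, n_obj, C, 5, 5)            # per-frame object feature maps
```

These object feature maps keep their spatial dimensions, so they could then be fed to a convolutional interaction module rather than flattened into vectors for an MLP.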
A concurrent work (Girdhar et al., 2020) studies using prediction for physical reasoning, but its prediction model operates either in the state space or in the pixel space. Other common ways to measure physical understanding are to predict future judgments given a scene image, e.g., predicting the stability of a configuration (Groth et al., 2018; Jia et al., 2015; Lerer et al., 2016; Li et al., 2016a;b). Several hybrid methods take a data-driven approach to estimate Newtonian parameters from raw images (Brubaker et al., 2009; Wu et al., 2016; Bhat et al., 2002; Wu et al., 2015), or model Newtonian physics via latent variables to predict motion trajectories in images (Mottaghi et al., 2016a;b; Ye et al., 2018). An extreme example is to use an actual simulator to perform inference over objects (Hamrick et al., 2011). The reliance on explicit Newtonian physics makes these methods infeasible on real-world data and in un-instrumented settings. In contrast, we take into account the context around each object via RoIPooling and explicitly model object interactions with each other and with the environment without relying on Newtonian physics; our approach is hence easily scalable to real videos and long-range predictions.

Video Prediction. Instead of modeling physics from raw images, an alternative is to treat visual reasoning as an image translation problem. This approach has been adopted in the line of work that falls under video prediction. The most common theme is to leverage latent-variable models for predicting the future (Lee et al., 2018; Denton & Fergus, 2018; Babaeizadeh et al., 2017). Since predicting pixels is difficult, several methods leverage auxiliary information such as background/foreground (Villegas et al., 2017a; Tulyakov et al., 2017; Vondrick et al., 2016), optical flow (Walker et al., 2016; Liu et al., 2017), and appearance transformations (Jia et al., 2016; Finn et al., 2016; Chen et al., 2017; Xue et al., 2016). These inductive biases help over short intervals but do not capture the long-range behavior needed in scenarios like playing billiards, due to the lack of explicit reasoning. Some approaches can scale to relatively longer horizons but are domain-specific, e.g., operating in a pre-defined human-pose space (Villegas et al., 2017b; Walker et al., 2017). Our goal, however, is to model long-term interactions not only for prediction but also to facilitate planning for downstream tasks.

Learning Dynamics Models. Unlike video prediction, dynamics models take actions into account when predicting the future; they are also known as forward models (Jordan & Rumelhart, 1992). Learning these forward dynamics models from images has recently become popular in robotics, both for specific tasks (Wahlström et al., 2015; Agrawal et al., 2016; Oh et al., 2015; Finn et al., 2016) and for exploration (Pathak et al., 2017; Burda et al., 2019). In contrast to these methods, where a deep network directly predicts the whole outcome, we leverage our proposed region-proposal interaction module to capture each object's trajectory explicitly and learn long-range forward dynamics as well as video prediction models.

Planning via Learned Models. Leveraging models to plan is the standard approach in control for obtaining task-specific behavior. The common approach is to re-plan after each action via model predictive control (Allgöwer & Zheng, 2012; Camacho & Alba, 2013; Deisenroth & Rasmussen, 2011).
Scaling such models and planning in high-dimensional spaces is a challenging problem. With deep learning, several approaches have shown promising results on real-world robotic tasks (Finn et al., 2016; Finn & Levine, 2017; Agrawal et al., 2016; Pathak et al., 2018). However, the horizon of these approaches is still very short, and re-planning over the long term drifts away in practice. Some methods try to alleviate this issue via object modeling (Janner et al., 2019; Li et al., 2019) or skip connections (Ebert et al., 2018a), but they assume the models are trained with state-action pairs. In contrast to prior work, where a short-range dynamics model is unrolled in time, we learn our long-range models from passive data and then couple them with short-range forward models to infer actions during planning.
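As a rough illustration of the one-shot planning setting discussed above (a single action, no re-planning), the following sketch scores sampled candidate actions by rolling out a learned long-range model; the model interface, the action parameterization, and the scoring function are all hypothetical placeholders, not the paper's actual planner.

```python
import torch

def plan_one_action(model, x0, score_fn, n_candidates=1000, horizon=20):
    """Pick the single action whose predicted long-range rollout scores best."""
    actions = torch.rand(n_candidates, 3)              # e.g., (x, y, radius) of a shot
    best_action, best_score = None, -float("inf")
    for a in actions:
        traj = model(x0, a, horizon)                   # predicted object states over T steps
        s = score_fn(traj)                             # e.g., closeness to goal configuration
        if s > best_score:
            best_action, best_score = a, s
    return best_action

# Dummy stand-ins to make the sketch executable end-to-end.
dummy_model = lambda x0, a, T: x0 + a[:2] * T          # fake "trajectory": final position only
dummy_score = lambda traj: -traj.norm()                # fake score: end near the origin
print(plan_one_action(dummy_model, torch.zeros(2), dummy_score, n_candidates=100))
```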
The paper proposes a variation of interaction networks (IN) called Region Proposal Interaction Networks. The key idea is to build a richer object-centric feature representation by using RoIPooling to encode the objects for prediction, and to use convolution operators to help the IN handle the resulting change in the dimensionality of the feature representation. The paper is well written and is evaluated on several popular benchmarks. The proposed variations appear to have a considerable effect on performance, offering significant gains on challenging benchmarks.
SP:72e513837413282d60e4e2ab71276a0f7856e87e