# Datasets: allenai/mup

Dataset Preview
paper_name (string) | text (string) | summary (string) | paper_id (string)
For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability
1 INTRODUCTION . Statistical learning theory studies the learning properties of machine learning algorithms , and more fundamentally , the conditions under which learning from finite data is possible . In this context , classical learning theory focuses on the size of the hypothesis space in terms of different complexity measures , such as combinatorial dimensions , covering numbers and Rademacher/Gaussian complexities ( Shalev-Shwartz & Ben-David , 2014 ; Boucheron et al. , 2005 ) . Another more recent approach is based on defining suitable notions of stability with respect to perturbation of the data ( Bousquet & Elisseeff , 2001 ; Kutin & Niyogi , 2002 ) . In this view , the continuity of the process that maps data to estimators is crucial , rather than the complexity of the hypothesis space . Different notions of stability can be considered , depending on the data perturbation and metric considered ( Kutin & Niyogi , 2002 ) . Interestingly , the stability and complexity approaches to characterizing the learnability of problems are not at odds with each other , and can be shown to be equivalent as shown in Poggio et al . ( 2004 ) and Shalev-Shwartz et al . ( 2010 ) . In modern machine learning overparameterized models , with a larger number of parameters than the size of the training data , have become common . The ability of these models to generalize is well explained by classical statistical learning theory as long as some form of regularization is used in the training process ( Bühlmann & Van De Geer , 2011 ; Steinwart & Christmann , 2008 ) . However , it was recently shown - first for deep networks ( Zhang et al. , 2017 ) , and more recently for kernel methods ( Belkin et al. , 2019 ) - that learning is possible in the absence of regularization , i.e. , when perfectly fitting/interpolating the data . Much recent work in statistical learning theory has tried to find theoretical ground for this empirical finding . Since learning using models that interpolate is not exclusive to deep neural networks , we study generalization in the presence of interpolation in the case of kernel methods . We study both linear and kernel least squares problems in this paper . Our Contributions : • We characterize the generalization properties of interpolating solutions for linear and kernel least squares problems using a stability approach . While the ( uniform ) stability properties of regularized kernel methods are well known ( Bousquet & Elisseeff , 2001 ) , we study interpolating solutions of the unregularized (  ridgeless '' ) regression problems . • We obtain an upper bound on the stability of interpolating solutions , and show that this upper bound is minimized by the minimum norm interpolating solution . This also means that among all interpolating solutions , the minimum norm solution has the best test error . In particular , the same conclusion is also true for gradient descent , since it converges to the minimum norm solution in the setting we consider , see e.g . Rosasco & Villa ( 2015 ) . • Our stability bounds show that the average stability of the minimum norm solution is controlled by the condition number of the empirical kernel matrix . It is well known that the numerical stability of the least squares solution is governed by the condition number of the associated kernel matrix ( see the discussion of why overparametrization is “ good ” in Poggio et al . ( 2019 ) ) . Our results show that the condition number also controls stability ( and hence , test error ) in a statistical sense . 
Organization : In section 2 , we introduce basic ideas in statistical learning and empirical risk minimization , as well as the notation used in the rest of the paper . In section 3 , we briefly recall some definitions of stability . In section 4 , we study the stability of interpolating solutions to kernel least squares and show that the minimum norm solutions minimize an upper bound on the stability . In section 5 we discuss our results in the context of recent work on high dimensional regression . We conclude in section 6 . 2 STATISTICAL LEARNING AND EMPIRICAL RISK MINIMIZATION . We begin by recalling the basic ideas in statistical learning theory . In this setting , X is the space of features , Y is the space of targets or labels , and there is an unknown probability distribution µ on the product space Z = X × Y . In the following , we consider X = Rd and Y = R. The distribution µ is fixed but unknown , and we are given a training set S consisting of n samples ( thus |S| = n ) drawn i.i.d . from the probability distribution on Zn , S = ( zi ) ni=1 = ( xi , yi ) n i=1 . Intuitively , the goal of supervised learning is to use the training set S to “ learn ” a function fS that evaluated at a new value xnew should predict the associated value of ynew , i.e . ynew ≈ fS ( xnew ) . The loss is a function V : F × Z → [ 0 , ∞ ) , where F is the space of measurable functions from X to Y , that measures how well a function performs on a data point . We define a hypothesis space H ⊆ F where algorithms search for solutions . With the above notation , the expected risk of f is defined as I [ f ] = EzV ( f , z ) which is the expected loss on a new sample drawn according to the data distribution µ . In this setting , statistical learning can be seen as the problem of finding an approximate minimizer of the expected risk given a training set S. A classical approach to derive an approximate solution is empirical risk minimization ( ERM ) where we minimize the empirical risk IS [ f ] = 1 n ∑n i=1 V ( f , zi ) . A natural error measure for our ERM solution fS is the expected excess risk ES [ I [ fS ] −minf∈H I [ f ] ] . Another common error measure is the expected generalization error/gap given by ES [ I [ fS ] − IS [ fS ] ] . These two error measures are closely related since , the expected excess risk is easily bounded by the expected generalization error ( see Lemma 5 ) . 2.1 KERNEL LEAST SQUARES AND MINIMUM NORM SOLUTION . The focus in this paper is on the kernel least squares problem . We assume the loss function V is the square loss , that is , V ( f , z ) = ( y − f ( x ) ) 2 . The hypothesis space is assumed to be a reproducing kernel Hilbert space , defined by a positive definite kernel K : X ×X → R or an associated feature map Φ : X → H , such that K ( x , x′ ) = 〈Φ ( x ) , Φ ( x′ ) 〉H for all x , x′ ∈ X , where 〈· , ·〉H is the inner product in H. In this setting , functions are linearly parameterized , that is there exists w ∈ H such that f ( x ) = 〈w , Φ ( x ) 〉H for all x ∈ X . The ERM problem typically has multiple solutions , one of which is the minimum norm solution : f†S = arg min f∈M ‖f‖H , M = arg min f∈H 1 n n∑ i=1 ( f ( xi ) − yi ) 2 . ( 1 ) Here ‖·‖H is the norm onH induced by the inner product . The minimum norm solution can be shown to be unique and satisfy a representer theorem , that is for all x ∈ X : f†S ( x ) = n∑ i=1 K ( x , xi ) cS [ i ] , cS = K †y ( 2 ) where cS = ( cS [ 1 ] , . . . , cS [ n ] ) , y = ( y1 . . . 
yn ) ∈ Rn , K is the n by n matrix with entries Kij = K ( xi , xj ) , i , j = 1 , . . . , n , and K† is the Moore-Penrose pseudoinverse of K. If we assume n ≤ d and that we have n linearly independent data features , that is the rank of X is n , then it is possible to show that for many kernels one can replace K† by K−1 ( see Remark 2 ) . Note that invertibility is necessary and sufficient for interpolation . That is , if K is invertible , f†S ( xi ) = yi for all i = 1 , . . . , n , in which case the training error in ( 1 ) is zero . Remark 1 ( Pseudoinverse for underdetermined linear systems ) A simple yet relevant example are linear functions f ( x ) = w > x , that correspond toH = Rd and Φ the identity map . If the rank of X ∈ Rd×n is n , then any interpolating solution wS satisfies w > S xi = yi for all i = 1 , . . . , n , and the minimum norm solution , also called Moore-Penrose solution , is given by ( w†S ) > = y > X† where the pseudoinverse X† takes the form X† = X > ( XX > ) −1 . Remark 2 ( Invertibility of translation invariant kernels ) Translation invariant kernels are a family of kernel functions given by K ( x1 , x2 ) = k ( x1 − x2 ) where k is an even function on Rd . Translation invariant kernels are Mercer kernels ( positive semidefinite ) if the Fourier transform of k ( · ) is non-negative . For Radial Basis Function kernels ( K ( x1 , x2 ) = k ( ||x1 − x2|| ) ) we have the additional property due to Theorem 2.3 of Micchelli ( 1986 ) that for distinct points x1 , x2 , . . . , xn ∈ Rd the kernel matrix K is non-singular and thus invertible . The above discussion is directly related to regularization approaches . Remark 3 ( Stability and Tikhonov regularization ) Tikhonov regularization is used to prevent potential unstable behaviors . In the above setting , it corresponds to replacing Problem ( 1 ) by minf∈H 1 n ∑n i=1 ( f ( xi ) − yi ) 2 + λ ‖f‖ 2 H where the corresponding unique solution is given by fλS ( x ) = ∑n i=1K ( x , xi ) c [ i ] , c = ( K + λIn ) −1y . In contrast to ERM solutions , the above approach prevents interpolation . The properties of the corresponding estimator are well known . In this paper , we complement these results focusing on the case λ→ 0 . Finally , we end by recalling the connection between minimum norm and the gradient descent . Remark 4 ( Minimum norm and gradient descent ) In our setting , it is well known that both batch and stochastic gradient iterations converge exactly to the minimum norm solution when multiple solutions exist , see e.g . Rosasco & Villa ( 2015 ) . Thus , a study of the properties of the minimum norm solution explains the properties of the solution to which gradient descent converges . In particular , when ERM has multiple interpolating solutions , gradient descent converges to a solution that minimizes a bound on stability , as we show in this paper .
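As a rough numerical illustration of Eq. (2) and Remark 3 above, the sketch below computes the minimum-norm interpolating solution c = K†y and the Tikhonov-regularized solution c = (K + λI)⁻¹y on synthetic data. The RBF kernel, data sizes, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))          # n = 20 samples, d = 5 features
y = rng.normal(size=20)

K = rbf_kernel(X, X)

# Minimum-norm interpolating solution (Eq. 2): c = K^+ y.  For distinct points
# and an RBF kernel, K is invertible (Remark 2), so K^+ coincides with K^-1.
c_interp = np.linalg.pinv(K) @ y

# Tikhonov-regularized solution (Remark 3): c = (K + lambda * I)^-1 y.
lam = 1e-2
c_ridge = np.linalg.solve(K + lam * np.eye(len(K)), y)

def predict(X_new, c):
    return rbf_kernel(X_new, X) @ c

print(np.abs(predict(X, c_interp) - y).max())  # ~0: the training data is interpolated
print(np.abs(predict(X, c_ridge) - y).max())   # > 0: regularization prevents interpolation
print(np.linalg.cond(K))                       # condition number appearing in the stability bound
```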
This paper investigates kernel ridgeless regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds has been widely adopted in machine learning; however, related studies on kernel ridgeless regression are still sparse. The present study fills this gap, which, in my opinion, is also one of its main contributions.
SP:4d08cdb2de2044bcb574a425b42963b83fbebfbc
Discriminative Representation Loss (DRL): A More Efficient Approach than Gradient Re-Projection in Continual Learning
This paper presents a novel way of making full use of a compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. The authors give an insightful analysis of the influence of gradient diversity on the performance of continual learning and propose a regularization that connects metric learning and continual learning. However, there are still some issues to be addressed, as noted below.
This paper proposes a new framework that computes task-specific representations to modulate the model parameters during multi-task learning (MTL). This framework uses a single model with shared representations for learning multiple tasks together. Also, explicit task information may not always be available, and in such cases the proposed framework is useful. The proposed framework is evaluated on various datasets spanning multiple modalities, where the MTL model even achieves state-of-the-art results on some datasets.
SP:09f2fe6a482bbd6f9bd2c62aa841f995171ba939
A Robust Fuel Optimization Strategy For Hybrid Electric Vehicles: A Deep Reinforcement Learning Based Continuous Time Design Approach
This work proposes a deep reinforcement learning-based optimization strategy for the fuel optimization problem in hybrid electric vehicles. The problem is formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A continuous-time representation of the problem is also used, in contrast to conventional techniques, which mostly rely on a discrete-time formulation.
SP:a1e2218e6943bf138aeb359e23628676b396ed66
Neural representation and generation for RNA secondary structures
This paper proposes three deep generative models based on VAEs (with different encoding schemes for RNA secondary structure) for the generation of RNA secondary structures. The authors test each model on three benchmark tasks: unsupervised generation, semi-supervised learning, and targeted generation. This paper has many interesting contributions: a comparison of VAE models that use different RNA secondary structure encoding schemes, including the traditional dot-bracket notation and a more complex hierarchical encoding, as well as various decoding schemes introduced to encourage valid secondary structures.
SP:43e525fb3fa611df7fd44bd3bc9843e57b154c66
DiP Benchmark Tests: Evaluation Benchmarks for Discourse Phenomena in MT
This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty lies in the relatively large scale, spanning three translation directions, four discourse phenomena, and 150-5000 data points per language and phenomenon. A relatively large number of systems from previous work is benchmarked on each test set, and agreement with human judgments is measured.
SP:0bd749fe44c37b521bd40f701e1428890aaa9c95
Private Image Reconstruction from System Side Channels Using Generative Models
The authors present a framework that uses a combination of a VAE and a GAN to recover private user images from side-channel analysis of memory accesses. A VAE-LP model first reconstructs a coarse image from side-channel information, which is reshaped and processed using a convolutional network. The output of the VAE-LP model is refined using a GAN to add fine details. Compelling results are demonstrated for the recovery of private information, and state-of-the-art metrics are reported.
DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
This paper proposes a method of learning ensembles that adhere to an "ensemble version" of the information bottleneck principle. Whereas the information bottleneck principle says the representation should avoid spurious correlations between the representation (Z) and the training data (X) that are not useful for predicting the labels (Y), i.e., I(X;Z) or I(X;Z|Y), this paper proposes that ensembles should additionally avoid spurious correlations between the ensemble members that are not useful for predicting Y, i.e., I(Z_i; Z_j | Y). They show empirically that increasing the coefficient on this term increases diversity at the expense of decreasing the accuracy of individual members of the ensemble.
SP:7fb11c941e8d79248ce5ff7caa0535a466303395
Zero-shot Synthesis with Group-Supervised Learning
The paper proposes a new training framework, GSL, for novel content synthesis. GSL enables learning disentangled representations of tangible attributes and achieves novel image synthesis by recombining those swappable components in a zero-shot setting. The framework leverages the underlying semantic links across samples, which can be instantiated as a multigraph. A cycle-consistent reconstruction loss as well as a reconstruction loss are computed on synthetic samples obtained from swapped latent representations.
SP:5561773ab024b083be4e362db079e371abf79653
Asymmetric self-play for automatic goal discovery in robotic manipulation
This paper presents an approach to learn goal conditioned policies by relying on self-play which sets the goals and discovers a curriculum of tasks for learning. Alice and Bob are the agents. Alice's task is to set a goal by following a number of steps in the environment and she is rewarded when the goal is too challenging for Bob to solve. Bob's task is to solve the task by trying to reproduce the end state of Alice's demonstration. As a result, the learned policy performs various tasks and can work in zero-shot settings.
SP:9f70871f0111b58783f731748d8750c635998f32
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
The paper introduces a theoretical framework for analyzing GNN transferability. The main idea is to view a graph as a collection of subgraph samples carrying both the connection and the feature information. Based on this view, the authors define the EGI score of a graph as a learnable function that is optimized by maximizing the mutual information between a subgraph and the GNN output embedding of its center node. The authors then give an upper bound on the difference of the EGI scores of two graphs in terms of the difference of the eigenvalues of the graph Laplacians of the subgraph samples from the two graphs. The implication is that if the difference of the eigenvalues is small, then the EGI scores are similar, which means the GNN has a similar ability to encode the structure of the two graphs.
SP:038a1d3066f8273977337262e975d7a7aab5002f
Information Lattice Learning
The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions of the data which admit a compact definition. Compact definitions are those formed by composing a small number of operations from a predefined (prior) set of mathematical operations. Projection and lifting operations are defined to relate descriptions of partition cells to one another through rules. The quality of a description is measured by the divergence between the data and the (special) lifting of the rule set, under the constraint that the rules satisfy an upper bound on their entropy.
SP:40cba7b6c04d7e44709baed351382c27fa89a129
Don't be picky, all students in the right family can learn from good teachers
This paper proposes searching for an architecture generator that outputs good student architectures for a given teacher. The authors claim that by learning the parameters of the generator instead of searching the architecture space directly, it is possible to explore the space of architectures more effectively, increasing the diversity of the architectures explored. They show that this approach, combined with the standard knowledge distillation loss, is able to learn good student architectures while requiring substantially fewer samples and achieving competitive performance compared to other knowledge distillation algorithms.
SP:1ee00313e354c4594bbf6cf8bdbe33e3ec8df62f
The paper proposes a defense that works by adding multiple targeted adversarial perturbations (with random classes) on the input sample before classifying it. There is little theoretical reasoning for why this is a sensible defense. More importantly though, the defense is only evaluated in an oblivious threat model where the attacker is unaware of the defense mechanism. As has been argued again and again in the literature and in community guidelines such as [1, 2], the oblivious threat model is trivial and yields absolutely no insights into the effectiveness of a defense (e.g. you can just manipulate the backpropagated gradient in random ways to prevent any gradient-based attack from finding adversarial perturbations). The problem with oblivious attacks is clearly visible in the results section where more PGD iterations are less effective than fewer iterations - a clear red flag that the evaluation is ineffective. The paper also fails to point out that Pang et al. 2020, one of the methods they combine their method with, has been shown to be ineffective [2].
SP:eea3b3ec32cce61d6b6df8574cf7ce9376f2230a
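For reference, below is a minimal untargeted L∞ PGD loop of the kind the review refers to; in an adaptive (non-oblivious) evaluation it would be run against the full defended pipeline, including the added targeted perturbations, rather than the bare classifier. The PyTorch model handle and the attack hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=40):
    """Untargeted L-infinity PGD.  More iterations should not make the attack
    weaker; if they do (as the review notes), the evaluation likely suffers
    from gradient masking or an oblivious threat model."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                # stay a valid image
    return x_adv.detach()
```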
Defuse: Debugging Classifiers Through Distilling Unrestricted Adversarial Examples
The technique is described in sufficient detail and the paper is easy to read. Experimental results involve three datasets: MNIST, Street View House Numbers, and German traffic signs. They show that the proposed technique finds significant failures in all datasets, including critical failure scenarios. After these failures are corrected, the performance of the model improves.
Improving Learning to Branch via Reinforcement Learning
1 INTRODUCTION . Mixed Integer Programming ( MIP ) has been applied widely in many real-world problems , such as scheduling ( Barnhart et al. , 2003 ) and transportation ( Melo & Wolsey , 2012 ) . Branch and Bound ( B & B ) is a general and widely used paradigm for solving MIP problems ( Wolsey & Nemhauser , 1999 ) . B & B recursively partitions the solution space into a search tree and compute relaxation bounds along the way to prune subtrees that provably can not contain an optimal solution . This iterative process requires sequential decision makings : node selection : selecting the next solution space to evaluate , variable selection : selecting the variable by which to partition the solution space ( Achterberg & Berthold , 2009 ) . In this work , we focus on learning a variable selection strategy , which is the core of the B & B algorithm ( Achterberg & Wunderling , 2013 ) . Very often , instances from the same MIP problem family are solved repeatedly in industry , which gives rise to the opportunity for learning to improve the variable selection policy ( Bengio et al. , 2020 ) . Based on the human-designed heuristics , Di Liberto et al . ( 2016 ) learn a classifier that dynamically selects an existing rule to perform variable selection ; Balcan et al . ( 2018 ) consider a weighted score of multiple heuristics and analyse the sample complexity of finding such a good weight . The first step towards learning a variable selection policy was taken by Khalil et al . ( 2016 ) , who learn an instance customized policy in an online fashion , as well as Alvarez et al . ( 2017 ) and Hansknecht et al . ( 2018 ) who learn a branching rule offline on a collection of similar instances . Those methods need extensively feature engineering and require strong domain knowledge in MIP . To avoid that , Gasse et al . ( 2019 ) propose a graph convolutional neural network approach to obtain competitive performance , only requiring raw features provided by the solver . In each case , the branching policy is learned by imitating the decision of strong branching as it consistently leads to the smallest B & B trees empirically ( Achterberg et al. , 2005 ) . In this work , we argue that strong branching is not a good expert to imitate . The excellent performance ( the smallest B & B tree ) of strong branching relies mostly on the information obtained in solving branch linear programming ( LP ) rather than the decision it makes . This factor prevents learning a good policy by imitating only the decision made by strong branching . To obtain more effective and non-myopic policies , i.e . minimizing the total solving nodes rather than maximizing the immediate duality gap gap , we use reinforcement learning ( RL ) and model the variable selection process as a Markov Decision Process ( MDP ) . Though the MDP formulation for MIP has been mentioned in the previous works ( Gasse et al. , 2019 ; Etheve et al. , 2020 ) , the advantage of RL has not been demonstrated clearly in literature . The challenges of using RL are multi-fold . First , the state space is a complex search tree , which can involve hundreds or thousands of nodes ( with a linear program on each node ) and evolve over time . In the meanwhile , the objective of MIP is to solve problems faster . Hence a trade-off between decision quality and computation time is required when representing the state and designing a policy based on this state representation . Second , learning a branching policy by RL requires rolling out on a distribution of instances . 
Moreover , for each instance , the solving trajectory could contain thousands of steps and actions can have long-lasting effects . These result in a large variance in gradient estimation . Third , each step of variable selection can have hundreds of candidates . The large action set makes the exploration in MIP very hard . In this work , we address these challenges by designing a policy network inspired by primal-dual iteration and employing a novelty search evolutionary strategy ( NS-ES ) to improve the policy . For efficiency-effectiveness trade-off , the primal-dual policy ignores the redundant information and makes high-quality decisions on the fly . For reducing variance , the ES algorithm is an attractive choice as its gradient estimation is independent of the trajectory length ( Salimans et al. , 2017 ) . For exploration , we introduce a new representation of the B & B solving process employed by novelty search ( Conti et al. , 2018 ) to encourage visiting new states . We evaluate our RL trained agent over a range of problems ( namely , set covering , maximum independent set , capacitated facility location ) . The experiments show that our approach significantly outperforms stateof-the-art human-designed heuristics ( Achterberg & Berthold , 2009 ) as well as imitation based learning methods ( Khalil et al. , 2016 ; Gasse et al. , 2019 ) . In the ablation study , we compare our primal-dual policy net with GCN ( Gasse et al. , 2019 ) , our novelty based ES with vanilla ES ( Salimans et al. , 2017 ) . The results confirm that both our policy network and the novelty search evolutionary strategy are indispensable for the success of the RL agent . In summary , our main contributions are the followings : • We point out the overestimation of the decision quality of strong branching and suggest that methods other than imitating strong branching are needed to find better variable selection policy . • We model the variable selection process as MDP and design a novel policy net based on primal-dual iteration over reduced LP relaxation . • We introduce a novel set representation and optimal transport distance for the branching process associated with a policy , based on which we train our RL agent using novelty search evolution strategy and obtain substantial improvements in empirical evaluation . 2 BACKGROUND . Mixed Integer Programming . MIP is an optimization problem , which is typically formulated as minx∈Rn { cTx : Ax ≤ b ,  ≤ x ≤ u , xj ∈ Z , ∀j ∈ J } ( 1 ) where c ∈ Rn is the objective vector , A ∈ Rm×n is the constraint coefficient matrix , b ∈ Rm is the constraint vector ,  , u ∈ Rn are the variable bounds . The set J ⊆ { 1 , · · · , n } is an index set for integer variables . We denote the feasible region of x as X . Linear Programming Relaxation . LP relaxation is an important building block for solving MIP problems , where the integer constraints are removed : minx∈Rn { cTx : Ax ≤ b ,  ≤ x ≤ u } . ( 2 ) Algorithm 1 : Branch and Bound Input : A MIP P in form Equation 1 Output : An optimal solution set x∗ and optimal value c∗ 1 Initialize the problem set S : = { PLP } . where PLP is in form Equation 2 . Set x∗ = φ , c∗ =∞ ; 2 If S = φ , exit by returning x∗ and c∗ ; 3 Select and pop a LP relaxation Q ∈ S ; 4 Solve Q with optimal solution x̂ and optimal value ĉ ; 5 If ĉ ≥ c∗ , go to 2 ; 6 If x̂ ∈ X , set x∗ = x̂ , c∗ = ĉ , go to 2 ; 7 Select variable j , split Q into two subproblems Q+j and Q − j , add them to S and go to 3 ; Branch and Bound . 
LP based B & B is the most successful method in solving MIP . A typical LP based B & B algorithm for solving MIP looks as Algorithm 1 ( Achterberg et al. , 2005 ) . It consists of two major decisions : node selection , in line 3 , and variable selection , in line 7 . In this paper , we will focus on the variable selection . Given a LP relaxation and its optimal solution x̂ , the variable selection means selecting an index j . Then , branching splits the current problem into two subproblems , each representing the original LP relaxation with a new constraint xj ≤ bx̂jc for Q−j and xj ≥ dx̂je for Q + j respectively . This procedure can be visualized by a binary tree , which is commonly called search tree . We give a simple visualization in Section A.1 . Evolution Strategy . Evolution Strategies ( ES ) is a class of black box optimization algorithm ( Rechenberg , 1978 ) . In this work , we refer to the definition in Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2008 ) . NES represents the population as a distribution of parameter vectors θ characterized by parameters φ : pφ ( θ ) . NES optimizes φ to maximize the expectation of a fitness f ( θ ) over the population Eθ∼pφ [ f ( θ ) ] . In recent work , Salimans et al . ( 2017 ) outlines a version of NES applied to standard RL benchmark problems , where θ parameterizes the policy πθ , φt = ( θt , σ ) parameterizes a Gaussian distribution pφ ( θ ) = N ( θt , σ2I ) and f ( θ ) is the cumulative reward R ( θ ) over a full agent interaction . At every iteration , Salimans et al . ( 2017 ) apply n additive Gaussian noises to the current parameter and update the population as θt+1 = θt + α 1 nσ n∑ i=1 f ( θt + σ i ) i ( 3 ) To encourage exploration , Conti et al . ( 2018 ) propose Novelty Search Evolution Strategy ( NS-ES ) . In NSES , the fitness function f ( θ ) = λN ( θ ) + ( 1−λ ) R ( θ ) is selected as a combination of domain specific novelty score N and cumulative reward R , where λ is the balancing weight . 3 WHY IMITATING STRONG BRANCHING IS NOT GOOD . Strong branching is a human-designed heuristic , which solves all possible branch LPs Q+j , Q − j ahead of branching . As strong branching usually produces the smallest B & B search trees ( Achterberg , 2009 ) , many learning-based variable selection policy are trained by mimicking strong branching ( Gasse et al. , 2019 ; Khalil et al. , 2016 ; Alvarez et al. , 2017 ; Hansknecht et al. , 2018 ) . However , we claim that strong branching is not a good expert : the reason strong branching can produce a small search tree is the reduction obtained in solving branch LP , rather than its decision quality . Specifically , ( i ) Strong branching can check lines 5 , 6 in Algorithm 1 before branching . If the pruning condition is satisfied , strong branching does not need to add the subproblem into the problem set S. ( ii ) Strong branching can strengthen other LP relaxations in the problem set S via domain propagation ( Rodosek et al. , 1999 ) and conflict analysis ( Achterberg , 2007 ) . For example , if strong branching finds x1 ≥ 1 and x2 ≥ 1 can be pruned during solving branch LP , then any other LP relaxations containing x1 ≥ 1 can be strengthened by adding x2 ≤ 0 . These two reductions are the direct consequence of solving branch LP , and they can not be learned by a variable selection policy . ( iii ) Strong branching activates primal heuristics ( Berthold , 2006 ) after solving LPs . 
To examine the decision quality of strong branching , we employ vanilla full strong branching ( Gamrath et al. , 2020 ) , which takes the same decision as full strong branching , while the side-effect of solving branch LP is switched off . Experiments in Section 5.2 show that vanilla full strong branching has poor decision quality . Hence , imitating strong branching is not a wise choice for learning variable selection policy .
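A toy sketch of the novelty-search evolution strategy update in Eq. (3) above, mixing reward and novelty as in NS-ES. The reward and novelty functions below are simple placeholders standing in for the B&B rollout and the tree-based novelty score, so this illustrates the update rule rather than the paper's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulative_reward(theta):
    # Placeholder for R(theta), e.g. the (negative) number of B&B nodes over rollouts.
    return -float(np.sum((theta - 1.0) ** 2))

def novelty(theta, archive):
    # Placeholder for N(theta): mean distance to an archive of past behaviour descriptors.
    if not archive:
        return 0.0
    return float(np.mean([np.linalg.norm(theta - a) for a in archive]))

def ns_es_step(theta, archive, n=50, sigma=0.1, alpha=0.01, lam=0.3):
    eps = rng.normal(size=(n, theta.size))                 # Gaussian perturbations
    fitness = np.array([
        lam * novelty(theta + sigma * e, archive)
        + (1.0 - lam) * cumulative_reward(theta + sigma * e)
        for e in eps
    ])
    # Eq. (3): theta <- theta + alpha * (1 / (n * sigma)) * sum_i f_i * eps_i
    return theta + alpha / (n * sigma) * (fitness @ eps)

theta, archive = np.zeros(8), []
for _ in range(200):
    theta = ns_es_step(theta, archive)
    archive.append(theta.copy())
print(cumulative_reward(theta))   # reward of the final parameters
```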
The paper proposes a model for *variable selection* in *Mixed Integer Programming (MIP)* solvers. While this problem is clearly a sequential decision making task, modeling it as an MDP is challenging. As a result, existing works use other approaches such as ranking or imitation learning. This paper overcomes these challenges by introducing a new problem representation.
SP:bbaedd5d8e7591fa3a5587260bf19f3d05779976
Frequency Decomposition in Neural Processes
Neural Processes are a powerful tool for learning representations of function spaces purely from examples , in a way that allows them to perform predictions at test time conditioned on so-called context observations . The learned representations are finite-dimensional , while function spaces are infinite-dimensional , and so far it has been unclear how these representations are learned and what kinds of functions can be represented . We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components , similar to a Fourier transform . In this context , we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce , depending on their representation size . This bound is confirmed empirically . Finally , we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others , which makes them programmable band-pass or band-stop filters . 1 INTRODUCTION . Neural Processes ( Garnelo et al. , 2018a ; b ) are a class of models that can learn a distribution over functions , or more generally a function space . In contrast to many other approaches that do the same , for example Bayesian Neural Networks , Neural Processes learn an explicit representation of such a function space , which allows them to condition their predictions on an arbitrary number of observations that are only available at test time . This representation is finite-dimensional , while function spaces are infinite-dimensional , and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so . Our work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space , and in the process describes constraints and conditions that decide what kinds of function spaces can be represented . We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective , which allows us to derive a theoretical upper bound on the frequencies , i.e . Fourier components , of functions that can be represented . We subsequently confirm this bound empirically , which suggests that the learned representations should contain a notion of frequency . To further investigate this hypothesis , we continue with a visualization of the learned representations , which reveals that Neural Processes can decompose a function space into different frequency components , essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour . As further evidence of this we train Neural Processes to represent only certain frequencies , which results in them suppressing those frequencies that were not observed in the training data . Our contributions can be summarized as follows1 : • We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct . As we show , the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval . • We investigate learned representations qualitatively , presenting evidence that Neural Processes perform a frequency decomposition of the function space , akin to a Fourier transform . This behaviour is not incentivized externally but rather emerges naturally . 
1The complete source code to reproduce our experiments is available at https : //github.com/ * * * • We show that by choosing the training distribution appropriately , Neural Processes can be made to represent certain frequencies and suppress others , which turns them into programmable band-pass or band-stop filters . 2 BACKGROUND . Neural Processes ( Garnelo et al. , 2018a ; b ) are maps P : C , X → Y , where C is a set of tuples { ( x , f ( x ) ) } Nc=1 = : ( xc , f ( xc ) ) 2 with arbitrary but positive cardinality N , and f ∈ F : X → Y . C is often called the context , because Neural Processes perform predictions for values xt ∈ X ( t for target ) , conditioned on these points . F is the function space we would like to find a representation of . Note that some sources define function spaces as any set of functions with a shared domain and co-domain , while others require them to be vector spaces as well . We don ’ t concern ourselves with this distinction and further restrict our work to X = Y = R , because it allows us to visualize learned representations . We only look at the original Neural Processes , namely the deterministic Conditional Neural Processes ( CNP ) ( Garnelo et al. , 2018a ) and the variational Neural Processes ( NP ) ( Garnelo et al. , 2018b ) , because newer contributions in the field work in ways that preclude them from being analyzed in the same way . We discuss this further in Section 5 . In CNPs and NPs , the map P is separated into two parts , a so called encoding E : C → Z and a decoding or generating part G : Z , X → Y . Z is referred to as the representation or latent space . To allow Neural Processes to approximate arbitrary3 function spaces F , E and G are typically chosen to be powerful approximators , specifically neural networks , as the name suggests . The defining characteristic of CNPs and NPs is that E encodes individual pairs ( x , f ( x ) ) from the context separately , and the resulting representations are averaged to form a global representation , meaning one that is independent of the target points xt at which we then evaluate the Neural Process . This is often not the case in later work , for example in Attentive Neural Processes ( Kim et al. , 2019 ) , where the individual representations are instead aggregated using an attention mechanism that depends on xt . In CNPs the representations are deterministic , while in NPs they parametrize mean and ( log- ) variance of a Gaussian distribution , so the latter are trained using variational inference . For details on implementation and training we refer to Appendix A.1 . Our work will investigate how these global representations , which are finite-dimensional , represent infinite-dimensional function spaces . As stated above , E and by extension the Neural Process P acts on set-valued inputs . This is contrary to the vast majority of machine learning work where inputs are vectors of fixed dimension and ordering . Recall that sets are permutation invariant , so we must ensure that the same is true for the output of E. It is easy to see that this is given when we average individual encodings , but Zaheer et al . ( 2017 ) show that it is in fact the only way to ensure it : E is permutation-invariant if and only if it has a so-called sum-decomposition , i.e . it can be represented in the form E ( x ) = ρ ( N∑ i=1 φ ( xi ) ) ( 1 ) where ρ , φ are appropriately chosen functions . Wagstaff et al . 
( 2019 ) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N , the dimension of the image Z must at least be N . This will become relevant in the following section . 3 AN UPPER BOUND ON SIGNAL FREQUENCIES . We mentioned in the previous section that the encoder E in a Neural Process should have a sumdecomposition , so that the global representations are permutation-invariant , as shown in Zaheer et al . ( 2017 ) . Expanding on this , Wagstaff et al . ( 2019 ) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N . What these works do not consider are the implications for situations where the elements of 2We use boldface as a shorthand for sets , not vectors . 3This will depend on the implementation of E and G , and for neural networks F is practically restricted to continuous and differentiable functions . the sets are input-output tuples of some function f , as it is typically the case in Neural Processes . We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process . In order to do this , we must first define what it means to successfully learn a representation of a function space . Definition 3.1 ( Representation of Function Spaces in Neural Processes ) . We say that a Neural Processes P has learned a representation of a function space F , defined on an interval [ a , b ] ⊂ R , if , for some error tolerance , it holds for all x ∈ [ a , b ] and for all f ∈ F , represented as a suitable set of discrete measurements ( xf , f ( xf ) ) , that |P ( ( xf , f ( xf ) ) , x ) − f ( x ) | < . That means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance . The choice of this tolerance is essentially arbitrary , but should reflect that for g /∈ F the reconstructions should generally not be accurate within . We also write that f is represented as a suitable set of discrete measurements , by which we mean that it must be possible to reconstruct f from those measurements . Switching to signal-processing terminology , we know that to represent a continuous signal as a set of discrete measurements , we need to sample it at points with a distance of at most τ = 1/ ( 2νmax ) , where νmax is the maximum frequency component of the signal . This is most commonly known as the Nyquist-Shannon sampling theorem ( Whittaker , 1915 ; Kotelnikov , 1933 ; Shannon , 1949 ) . For any finite real interval [ a , b ] , this translates to a number of sampling points N > 2|b − a|νmax . The latter allows us to make a connection to the findings by Wagstaff et al . ( 2019 ) , so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct . Theorem 3.1 ( Maximum Frequency in Neural Process Representations ) . A Neural Process P with latent dimension Dr can only learn a representation of some function space F defined on a finite interval [ a , b ] ⊂ R if for all f ∈ F with a maximum frequency content νmax , f it holds that : νmax , f < Dr 2|b− a| ( 2 ) Note that this means we should in theory be able to represent any function space that obeys Eq . ( 2 ) to within arbitrarily small . 
In practice , we will typically have less control over F , and we only find approximate representations . Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq . ( 2 ) . It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points . During training , we work with randomly sampled inputs , but at test time equidistant points are used , as we outline in Appendix A.2 .
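A small sketch of the bound in Eq. (2) and of the Nyquist-style sample count it is derived from; the numbers are purely illustrative.

```python
import math

def max_representable_frequency(latent_dim, a, b):
    # Theorem 3.1: nu_max < D_r / (2 * |b - a|)
    return latent_dim / (2.0 * abs(b - a))

def min_context_points(nu_max, a, b):
    # Nyquist-Shannon: need N > 2 * |b - a| * nu_max equidistant samples
    return math.floor(2 * abs(b - a) * nu_max) + 1

D_r = 128                        # representation size of the Neural Process
a, b = -3.0, 3.0                 # interval on which the function space lives
print(max_representable_frequency(D_r, a, b))   # ~10.67 cycles per unit length
print(min_context_points(10.0, a, b))           # 121 equidistant samples
```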
The work examines properties of Neural Processes (NPs), more precisely of deterministic NPs and how they form finite-dimensional representations of infinite-dimensional function spaces. NPs learn functions f that best represent/fit discrete sets of points in space. Based on signal-theoretic aspects of discretisation, the authors infer a theoretical upper bound on the frequencies of functions f that can be used to represent the points. The bound depends on the latent dimension/representation size and the finite interval spanned by the points. Simulations are run to test the validity of the upper bound. The authors find that NPs behave like a Fourier transform and decompose the spectrum of the signal. Since the representation learned during training captures specific frequencies, NPs can be used as band-pass/band-stop filters.
SP:a20769de2c7acf390c7e3bece904a17df6a991bd
Multi-agent Policy Optimization with Approximatively Synchronous Advantage Estimation
The paper deals with the problem of credit assignment and synchronous estimation in cooperative multi-agent reinforcement learning. The authors introduce marginal advantage functions and use them for the estimation of the counterfactual advantage function. These functions permit decomposing the multi-agent policy optimization problem into single-agent policy optimization subproblems, which are solved using TRPO.
SP:ba25b5b02701e01998e9dd22e4230c4e095f4542
This paper addresses the problem of vertex classification using a new Graph Convolutional Neural Network (GNN) architecture. The linear operator within each layer of the GNN is formed by a polynomial graph filter (i.e., a matrix polynomial of either the adjacency or the Laplacian matrix). Rather than working in the frequency domain, the paper focuses on learning the polynomial coefficients of the filter in the vertex domain. The key novelty is the consideration of a stacked architecture in which the polynomial filter is formed by the successive application (i.e., matrix multiplication) of filters of order one. Numerical experiments with real datasets showcase the merits of the proposed architecture, including superior classification performance.
SP:37bdb147b866b9e32a94d55dae82d7a42cea8da9
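A rough sketch of the stacked order-one graph filter idea described in the summary above. The scalar parameterization of each order-one factor is an assumption made for illustration and is not claimed to match the paper's exact architecture.

```python
import numpy as np

def stacked_first_order_filter(A, X, alphas, betas):
    """Apply H = prod_k (beta_k * I + alpha_k * A) to node features X.

    A      : (n, n) graph shift operator (adjacency or Laplacian matrix)
    X      : (n, f) node feature matrix
    alphas : one learnable scalar per order-one factor (assumed parameterization)
    betas  : one learnable scalar per order-one factor
    """
    H = X
    for a_k, b_k in zip(alphas, betas):
        H = b_k * H + a_k * (A @ H)   # one order-one filter; K factors give a degree-K polynomial
    return H

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy path graph
X = np.eye(3)
Y = stacked_first_order_filter(A, X, alphas=[0.5, 0.25], betas=[1.0, 1.0])
print(Y)  # equivalent to applying a degree-2 polynomial of A to X
```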
Deep $k$-NN Label Smoothing Improves Reproducibility of Neural Network Predictions
The main objective of this paper is to improve model stability, in particular to reduce the prediction churn of neural networks. Prediction churn is defined as the change in predictions due to model randomness, e.g., across multiple training runs of a network. The paper proposes to use an interpolation of global label smoothing and k-NN label smoothing. Theoretically, it is shown that the k-NN rule converges to the Bayes rule when k is small, and to a kernel-smoothed version of the Bayes rule when k is linear in n. Experiments show that the proposed method gives the highest test accuracy and the lowest churn rate in most cases.
SP:f19be0fdce321827638f91d57607ba340b1c3e4b
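A hedged sketch of what an interpolation between global label smoothing and k-NN label smoothing could look like; the exact mixing rule and neighbour weighting used by the paper are not specified here, so every formula below is an illustrative assumption.

```python
import numpy as np

def smoothed_targets(y, X, num_classes, k=5, eps=0.1, alpha=0.5):
    """Blend uniform (global) label smoothing with a k-NN smoothed label.

    alpha interpolates between the two: alpha=0 -> global smoothing only,
    alpha=1 -> k-NN smoothing only.  This mixing rule is an assumption.
    """
    n = len(y)
    one_hot = np.eye(num_classes)[y]
    uniform = np.full((n, num_classes), 1.0 / num_classes)
    global_smooth = (1 - eps) * one_hot + eps * uniform

    # k-NN smoothing: average the one-hot labels of the k nearest neighbours.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]
    knn_smooth = one_hot[knn].mean(axis=1)

    return (1 - alpha) * global_smooth + alpha * knn_smooth

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = rng.integers(0, 3, size=50)
T = smoothed_targets(y, X, num_classes=3)
print(T.shape, T.sum(axis=1)[:3])   # (50, 3); each row is a valid distribution
```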
This paper proposes Adversarial Feature Desensitization (AFD) as a defense against adversarial examples. AFD employs a min-max adversarial learning framework in which the classifier learns to encode features of both clean and adversarial images as the same distribution, thereby desensitizing the features to adversarial perturbations. With the aim of fooling a separate discriminator model into categorizing the classifier's adversarial features as coming from clean images, the classifier is trained with the standard cross-entropy loss and an adversarial loss term. The authors show through experiments on the MNIST, CIFAR10, and CIFAR100 datasets that AFD mostly outperforms previous defenses across different adversarial attacks under white-box and black-box conditions.
Syntactic representations in the human brain: beyond effort-based metrics
1 INTRODUCTION . Neuroscientists have long been interested in how the brain processes syntax . To date , there is no consensus on which brain regions are involved in processing it . Classically , only a small number of regions in the left hemisphere were thought to be involved in language processing . More recently , the language system was proposed to involve a set of brain regions spanning the left and right hemisphere ( Fedorenko & Thompson-Schill , 2014 ) . Similarly , some findings show that syntax is constrained to specific brain regions ( Grodzinsky & Friederici , 2006 ; Friederici , 2011 ) , while other findings show syntax is distributed throughout the language system ( Blank et al. , 2016 ; Fedorenko et al. , 2012 ; 2020 ) . The biological basis of syntax was first explored through studies of the impact of brain lesions on language comprehension or production ( Grodzinsky , 2000 ) and later through non-invasive neuroimaging experiments that record brain activity while subjects perform language tasks , using methods such as functional Magnetic Resonance Imaging ( fMRI ) or electroencephalography ( EEG ) . These experiments usually isolate syntactic processing by contrasting the activity between a difficult syntactic condition and an easier one and by identifying regions that increase in activity with syntactic effort ( Friederici , 2011 ) . An example of these conditions is reading a sentence with an object-relative clause ( e.g . “ The rat that the cat chased was tired '' ) , which is more taxing than reading a sentence with a subject-relative clause ( e.g . “ The cat that chased the rat was tired '' ) . In the past decade , this approach was extended to study syntactic processing in naturalistic settings such as when reading or listening to a story ( Brennan et al. , 2012 ; Hale et al. , 2018 ; Willems et al. , 2015 ) . Because such complex material is not organized into conditions , neuroscientists have instead devised effort-based metrics capturing the word-by-word evolving syntactic demands required to understand the material . Brain regions with activity correlated with those metrics are suggested to be involved in processing syntax . We use the term effort-based metrics to refer to uni-dimensional measures capturing word-by-word syntactic demands . A standard approach for constructing a syntactic effort-based metric is to assume a sentence ’ s syntactic representation and estimate the number of syntactic operations performed at each word . Node Count is popular such metric . It relies on constituency trees ( structures that capture the hierarchical grammatical relationship between the words in a sentence ) . While traversing the words of the sentence in order , subtrees of the constituency tree get completed ; Node Count refers to the number of such subtrees that get completed at each word , effectively capturing syntactic load or effort . Brennan et al . ( 2012 ) use Node Count to support the theory that the Anterior Temporal Lobe ( ATL ) is involved in syntactic processing . Another example of an effort-based metric is given by an EEG study by Hale et al . ( 2018 ) . They show that parser action count ( the number of possible actions a parser can take at each word ) is predictive of the P600 , a positive peak in the brain ’ s electrical activity occurring around 600ms after word onset . The P600 is hypothesized to be driven by syntactic processing ( to resolve incongruencies ) , and the results of Hale et al . ( 2018 ) align with this hypothesis . 
Though effort-based metrics are a good proposal for capturing the effort involved in integrating a word into the syntactic structure of a sentence , they are not reflective of the entire syntactic information in play . Hence , these metrics can not be used to study the brain representation of syntactic constructs such as nouns , verbs , relationships and dependencies between words , and the complex hierarchical structure of phrases and sentences . Constituency trees and dependency trees are the two main structures that capture a sentence ’ s syntactic structure . Constituency trees are derived using phrase structure grammars that encode valid phrase and clause structure ( see Figure 1 ( A ) for an example ) . Dependency trees encode relations between pairs of words such as subject-verb relationships . We use representations derived from both types of trees . We derive word level dependency ( DEP ) labels from dependency trees , and we focus on encoding the structural information given by constituency trees since we want to analyze if the brain builds hierarchical representations of phrase structure . We characterize the syntactic structure inherent in sentence constituency trees by computing an evolving vector representation of the syntactic structure processed at each word using the subgraph embedding algorithm by Adhikari et al . ( 2018 ) . We show that our syntactic structure embeddings – along with other simpler syntactic structure embeddings built using conventional syntactic features such as part-of-speech ( POS ) tags and DEP tags – are better than effort-based metrics at predicting the fMRI data of subjects reading text . This indicates that representations of syntax , and not just syntactic effort , can be observed in fMRI . We also address the important question of whether regions that are predicted by syntactic features are selective for syntax , meaning they are only responsive to syntax and not to other language properties such as semantics . To answer this question , we model the semantic properties of words using a contextual word embedding space ( Devlin et al. , 2018 ) . We find that regions that are predicted by syntactic features are also predicted by semantic features and thus are not selective for syntax . Scientific questions We ask three main questions : • How can scientists construct syntactic structure embeddings that capture the syntactic structure inherent in phrases and sentences ? • Are these embeddings better at predicting brain activity compared to effort-based metrics when used as inputs to encoding models ? • Which brain regions are involved in processing complex syntactic structure and are they different from regions involved in semantic processing ? Contributions We make four main contributions : • We propose a subgraph embeddings-based method to model the syntactic structure inherent in phrases and sentences . • We show that effort-based metrics can be complemented by syntactic structure embeddings which can predict brain activity to a larger extent than effort-based metrics . • Using our syntactic structure embeddings , we find some evidence supporting the hypothesis that the brain processes and represents complex syntactic information such as phrase and clause structure . • We find evidence supporting the existing hypothesis that syntactic processing appears to be distributed in the language network in regions that are not selective for syntax . 2 METHODS . We first describe the syntactic features used in this study and their generation . 
All of the features we use are incremental i.e . they are computed per word . We then describe our fMRI data analyses . Effort-based metrics We use four effort-based metrics in our analyses - Node Count , Syntactic Surprisal , word frequency and word length . Node Count is an effort-based metric popular in neuroscience . To compute it , we obtain the constituency tree of each sentence using the self-attentive encoder-based constituency parser by Kitaev & Klein ( 2018 ) . We compute Node Count for each word as the number of subtrees that are completed by incorporating this word into its sentence . Syntactic Surprisal is another effort-based metric proposed by Roark et al . ( 2009 ) and is computed using an incremental top down parser ( Roark , 2001 ) . Both of these metrics aim to measure the amount of effort that is required to integrate a word into the syntactic structure of its sentence . The word frequency metric is computed using the wordfreq package ( Speer et al. , 2018 ) as the Zipf frequency of a word . This is the base-10 logarithm of the number of occurrences per billion of a given word in a large text corpus . Finally , word length is the number of characters in the presented word . The last two metrics approximate the amount of effort that is required to read a word . Constituency tree-based Graph Embeddings ( ConTreGE ) Constituency trees are a rich source of syntactic information . We build three representations of these trees that encode this information : ( a ) The largest subtree which is completed upon incorporating a word into a sentence ( see figure 1 ( B ) ) is representative of the implicit syntactic information given by the word . Given that Node Count reduces all of the information present in these subtrees to just one number , it is easy to see that it can not effectively capture this information . POS tags ( categorize words into nouns , verbs , adjectives , etc . ) also capture some of the information present in these trees as they encode phrase structure to a certain extent . But , they are incapable of completely encoding their hierarchical structure and the parsing decisions which are made while generating them . In order to better encode their structure , we first build subgraph embeddings of these completed subtrees called ConTreGE Comp vectors . ( b ) We hypothesize that the brain not only processes structure seen thus far but also predicts future structure from structure it already knows . To test this , we construct embeddings , simply called ConTreGE vectors , using incomplete subtrees that are constructed by retaining all the phrase structure grammar productions that are required to derive the words seen till now , thereby allowing us to capture future sentence structure ( in the form of future constituents ) before the full sentence is read ( see figure 1 ( C ) ) . These subtrees contain leaves that are non-terminal symbols unlike complete subtrees that only have terminal symbols ( words and punctuation ) as leaves . In this context , a non-terminal symbol is a symbol that can be derived further using some rule in the phrase structure grammar ( ex . NP , VP , etc. ) . If incomplete subtrees are more representative of the brain ’ s processes , it would mean that the brain expects certain phrase structures even before the entire phrase or sentence is read . ConTreGE Comp and ConTreGE vectors need to be built using accurate constituency trees constructed using the whole sentence . Thus , we reuse the trees generated to compute Node Count to build them . 
( c ) Further , the brain could be computing several possible top down partial parses that can derive the words seen thus far ( see figures 1 ( D ) and ( E ) ) and modifying the list of possible parses as future words are read . To test this hypothesis , we designed Incremental ConTreGE ( InConTreGE ) vectors that are representative of the most probable parses so far . For a given word , its InConTreGE vector is computed as : v = ∑5 i=1 e −siWi where Wi is the subgraph embedding of a partial parse tree built by an incremental top-down parser ( Roark 2001 CoLing ) after reading the word and si is the score assigned to this partial parse that is inversely proportional to the parser ’ s confidence in this tree . To effectively capture the structure of all subtrees , we encode them using the subgraph embeddings proposed by Adhikari et al . ( 2018 ) which preserve the neighbourhood properties of subgraphs . A long fixed length random walk on a subgraph is generated to compute its embedding . Since consecutive nodes in a random walk are neighbours , a long walk can effectively inform us about the neighbourhoods of nodes in the subgraph . Each node in a walk is identified using its unique ID . So , a random walk can be interpreted as a “ paragraph '' where the words are the node IDs . Finally , the subgraph ’ s embedding is computed as the Paragraph Vector ( Le & Mikolov , 2014 ) of this paragraph that is representative of the subgraph ’ s structure . Note that all of the subtrees of a given type ( complete , incomplete or partial parse ) are encoded together . This ensures that all ConTreGE Comp vectors , all ConTreGE vectors and all InConTreGE vectors are in our own spaces . Figure 2 illustrates the subtree encoding process . First , every unique non-terminal in the subtrees is mapped to a unique number ( ex . S is mapped to 1 , NP is mapped to 2 , etc . ) and every terminal is mapped to a unique number that is representative of the order in which they were presented ( the first presented token is mapped to 10000 , the second token is mapped to 10001 and so on ) . We did not map each unique terminal to a unique number ( for instance , we did not map all instances of  Harry '' to one number ) because a random walk through the tree could give us word co-occurrence information and thus lead to the inclusion of some semantic information in the vectors . Every tree node ’ s label is then replaced by the number it was mapped to in the previous step . The edge lists of these subtrees are supplied to the subgraph embedding generation algorithm to finally obtain 15-dimensional vectors for every presented word . The length of the random walks is set to 100000 and we use an extension of the Distributed Bag of Nodes ( DBON ) model proposed by Le & Mikolov ( 2014 ) for generating Paragraph Vectors called Sub2Vec-DBON by Adhikari et al . ( 2018 ) . The length of the sliding window is set to 5 and the model is trained for 20 epochs . Since ConTreGE Comp , ConTreGE and InConTreGE encode information about the neighbourhoods of all nodes in the constituency trees , they can capture their hierarchical structure . Thus , brain regions predicted by these vectors are likely to be involved in building and encoding hierarchical sentence structure . Punctuation We create one-hot binary vectors indicating the type of punctuation that was presented along with a word ( e.g. . or , ) . For example , a sentence might have ended with  Malfoy. '' . In this punctuation-based feature space , the column corresponding to . 
will be set to 1 for this word . While punctuation is seldom considered a syntactic feature , sentence boundaries are highly correlated with changes in working memory load . These changes are bound to be a great source of variability in the fMRI signal ( as we will observe later ) . Failing to account for sentence boundaries and working memory might be a source of confounding that has been ignored in the literature . Part-of-speech tags and dependency tags We use two standard word-level syntactic features - POS and DEP tags . The POS tag of a word is read off previously generated constituency trees . The DEP tag of a word ( ex . subject , object , etc . ) correspond to its assigned role in the dependency trees of the presented sentences which were generated using the spaCy English dependency parser ( 2 ) . We create one-hot binary vectors indicating the POS tag and the DEP tag of each word and concatenate them to create one feature space which we refer to as simple syntactic structure embeddings . Semantic features We adapt the vectors obtained from layer 12 of a pretrained ( 1 ) cased BERTlarge model ( Devlin et al. , 2018 ) to identify regions that process semantics . We use layer 12 because of previous work showing that middle layers of sentence encoders are optimal for predicting brain activity ( Jain & Huth , 2018 ; Toneva & Wehbe , 2019 ) . We obtain the contextual embeddings for a word by running the pretrained model only on the words seen thus far , preventing the inclusion of future semantic information . Since a presented word can be broken up into multiple subtokens , we compute its embedding as the average of the subtokens ’ embeddings . Using principal component analysis ( PCA ) , we reduce their dimensionality to 15 to match the ConTreGE vectors ’ dimensionality . fMRI data We use the fMRI data of 9 subjects reading chapter 9 of Harry Potter and the Sorcerer ’ s Stone ( Rowling , 2012 ) , collected and made available by Wehbe et al . ( 2014 ) . Words are presented one at a time at a rate of 0.5s each . All the brain plots shown here are averages over the 9 subjects in the Montreal Neurological Institute ( MNI ) space . Preprocessing details are in Appendix B . Predicting brain activity The applicability of a given syntactic feature in studying syntactic processing is determined by its efficacy in predicting the brain data described above . Ridge regression is used to perform these predictions and their coefficient of determination ( R2 score ) measures the feature ’ s efficacy . For each voxel of each subject , the regularization parameter is chosen independently . We use Ridge regression because of its computational efficiency and because of the Wehbe et al . ( 2015 ) results showing that with such fMRI data , as long as the regularization parameter is chosen by cross-validation for each voxel independently , different regularization techniques lead to similar results . Indeed , Ridge regression is a common regularization technique used for predictive fMRI models ( Mitchell et al. , 2008 ; Nishimoto et al. , 2011 ; Wehbe et al. , 2014 ; Huth et al. , 2016 ) . For every voxel , a model is fit to predict the signals Y = [ y1 , y2 , . . . , yn ] recorded in that voxel where n is the number of time points ( TR , or time to repetition ) . The words are first grouped by the TR in which they were presented . Then , the features of words in every group are summed to form a sequence of features X = [ x1 , x2 , . . . , xn ] aligned with the brain signals . 
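The sketch below illustrates the feature-preparation step just described: per-word vectors are optionally reduced to 15 dimensions with PCA (as done for the BERT embeddings) and then summed within each TR to align them with the fMRI time series. It is a minimal sketch under assumed inputs (`word_feats`, `word_tr`), not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def words_to_tr_features(word_feats, word_tr, n_trs, reduce_to=None):
    """Optionally PCA-reduce per-word features, then sum them within each TR."""
    word_feats = np.asarray(word_feats, dtype=float)
    if reduce_to is not None:                        # e.g. 15, to match the ConTreGE dimension
        word_feats = PCA(n_components=reduce_to).fit_transform(word_feats)
    X = np.zeros((n_trs, word_feats.shape[1]))
    for feats, tr in zip(word_feats, word_tr):
        X[tr] += feats                               # words presented in the same TR are summed
    return X

# Example: 40 words at 0.5 s each with a 2 s TR -> 4 words per TR, 10 TRs.
feats = np.random.randn(40, 20)
word_tr = [i // 4 for i in range(40)]
print(words_to_tr_features(feats, word_tr, n_trs=10, reduce_to=15).shape)  # (10, 15)
```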
The response measured by fMRI is an indirect consequence of brain activity that peaks about 6 seconds after stimulus onset . A common solution to account for this delay is to express brain activity as a function of the features of the preceding time points ( Nishimoto et al. , 2011 ; Wehbe et al. , 2014 ; Huth et al. , 2016 ) . Thus , we train our models to predict any yi using xi−1 , xi−2 , xi−3 and xi−4 . We test the models in a cross-validation loop : the data is first split into 4 contiguous and equal sized folds . Each model uses three folds of the data for training and one fold for evaluation . We remove the data from the 5 TRs which either precede or follow the test fold from the training set of folds . This is done to avoid any unintentional data leaks since consecutive yis are correlated with each other because of the lag and continuous nature of the fMRI signal . The brain signals and the word features which comprise the training and testing data for each model are individually Z-scored . After training we obtain the predictions for the validation fold . The predictions for all folds are concatenated ( to form a prediction for the entire experiment in which each time point is predicted from a model trained without the data for that time point ) . Note that since all 3 ConTreGe vectors are stochastic , we construct them 5 times each , and learn a different model each time . The predictions of the 5 models are averaged together into a single prediction . The R2 score is computed for every voxel using the predictions and the real signals . We run a permutation test to test if R2 scores are significantly higher than chance . We permute blocks of contiguous fMRI TRs , instead of individual TRs , to account for the slowness of the underlying hemodynamic response . We choose a common value of 10 TRs ( Deniz et al. , 2019 ) . The predictions are permuted within fold 5000 times , and the resulting R2 scores are used as an empirical distribution of chance performance , from which the p-value of the unpermuted performance is estimated . We also run a bootstrap test to test if a model has a higher R2 score than another . The difference is that in each iteration , we permute ( using the same indices ) the predictions of both models and compute the difference of their R2 and use the resulting distribution to estimate the p-value of the unpermuted difference . Finally , the Benjamni-Hochberg False Discovery Rate correction ( Benjamini & Hochberg , 1995 ) is used for all tests ( appropriate because fMRI data is considered to have positive dependence ( Genovese , 2000 ) ) . The correction is performed by grouping together all the voxel-level p-values ( i.e . across all subjects and feature groups ) and choosing one threshold for all of our results . The correction is done in this way since we test multiple prediction models across multiple voxels and subjects . To compute Region of Interest ( ROI ) statistics , left-hemisphere ROI masks for the language system obtained from a “ sentence vs. non-word '' fMRI contrast ( Fedorenko et al. , 2010 ) are obtained from ( 3 ) and mirrored to obtain the right-hemisphere ROIs .
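A minimal sketch of the encoding-model evaluation described above: each TR is predicted from the features of the four preceding TRs with ridge regression, and significance is assessed with a blocked permutation test over contiguous blocks of 10 TRs. The cross-validation buffering, per-voxel regularization, and FDR correction are simplified away here, and all array names are assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def lag_features(X, lags=(1, 2, 3, 4)):
    """Concatenate features from the preceding TRs (zero-padded at the start)."""
    parts = []
    for lag in lags:
        shifted = np.zeros_like(X)
        shifted[lag:] = X[:-lag]
        parts.append(shifted)
    return np.concatenate(parts, axis=1)

def r2_per_voxel(pred, Y):
    ss_res = ((Y - pred) ** 2).sum(axis=0)
    ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot

def blocked_permutation_pvals(pred, Y, block=10, n_perm=5000, seed=0):
    """Permute predictions in contiguous blocks of TRs to build a null for R^2."""
    rng = np.random.default_rng(seed)
    observed = r2_per_voxel(pred, Y)
    n = len(pred)
    starts = np.arange(0, n, block)
    exceed = np.zeros(Y.shape[1])
    for _ in range(n_perm):
        order = rng.permutation(len(starts))
        idx = np.concatenate([np.arange(s, min(s + block, n)) for s in starts[order]])
        exceed += (r2_per_voxel(pred[idx], Y) >= observed)
    return (exceed + 1) / (n_perm + 1)

# Synthetic example; in the paper the regularization parameter is chosen per voxel
# (recent scikit-learn versions allow this via RidgeCV(..., alpha_per_target=True)).
X, Y = np.random.randn(200, 15), np.random.randn(200, 50)
Xl = lag_features(X)
model = RidgeCV(alphas=np.logspace(-2, 4, 10)).fit(Xl[:150], Y[:150])
pred = model.predict(Xl[150:])
print(blocked_permutation_pvals(pred, Y[150:], n_perm=200).shape)  # (50,)
```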
This paper derives various types of graph embeddings to encode aspects of the syntactic information the brain may be processing during real-time sentence comprehension. These embeddings, along with indicators of punctuation, POS and dependency tags, and BERT embeddings, are used to predict brain activity recorded via fMRI. The authors argue that this is an improvement over the use of effort-based metrics to predict brain activity, as the embeddings contain richer information than is captured by distilling syntax down to a single measure of effort. They show that various brain regions are predicted significantly better by the syntactic embeddings than by the effort-based metrics and the POS+dependency indicators. BERT embeddings, however, prove to be a better predictor than the syntactic and other predictors across substantially larger areas of activity.
Analogical Reasoning for Visually Grounded Compositional Generalization
This paper explores the problem of generalizing to novel combinations of verbs and nouns in a task of captioning stills from cooking videos. The paper introduces a new dataset based on EPIC-Kitchens (Damen et al. 2018) which masks out verbs and nouns and splits the evaluation data into seen and unseen combinations of verb/noun pairs, challenging a model to generate captions for pairs that were not seen during training.
SP:7327dc440b5c193c1dda156276860f89594721fa
A Unified Framework for Convolution-based Graph Neural Networks
This paper presents a unified framework for graph convolutional neural networks based on regularized optimization, connecting different variants of graph neural networks including vanilla, attention-based, and topology-based approaches. The authors also propose a novel regularization technique to mitigate the oversmoothing problem in graph convolution. Experiments in the standard node-classification setting on Citeseer, Cora, and Pubmed demonstrate the effectiveness of the proposed regularization technique.
SP:5be9a3c39234c10c226c42eec95e29cbddbaf8c0
Benchmarks for Deep Off-Policy Evaluation
1 INTRODUCTION . Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction , such as in robotics ( Kober et al. , 2013 ) , board games and video games ( Tesauro , 1995 ; Mnih et al. , 2013 ; Vinyals et al. , 2019 ) , and recommender systems ( Aggarwal et al. , 2016 ) . However , this sort of active online interaction is often impractical for real-world problems , where active data collection can be costly ( Li et al. , 2010 ) , dangerous ( Hauskrecht & Fraser , 2000 ; Kendall et al. , 2019 ) , or time consuming ( Gu et al. , 2017 ) . Batch ( or offline ) reinforcement learning , has been studied extensively in domains such as healthcare ( Thapa et al. , 2005 ; Raghu et al. , 2018 ) , recommender systems ( Dudík et al. , 2014 ; Theocharous et al. , 2015 ; Swaminathan et al. , 2017 ) , education ( Mandel et al. , 2014 ) , and robotics ( Kalashnikov et al. , 2018 ) . A major challenge with such methods is the off-policy evaluation ( OPE ) problem , where one must evaluate the expected performance of policies solely from offline data . This is critical for several reasons , including providing high-confidence guarantees prior to deployment ( Thomas et al. , 2015 ) , and performing policy improvement and model selection ( Bottou et al. , 2013 ; Doroudi et al. , 2017 ) . The goal of this paper is to provide a standardized benchmark for evaluating OPE methods . Although considerable theoretical ( Thomas & Brunskill , 2016 ; Swaminathan & Joachims , 2015 ; Jiang & Li , 2015 ; Wang et al. , 2017 ; Yang et al. , 2020 ) and practical progress ( Gilotte et al. , 2018 ; Nie et al. , 2019 ; Kalashnikov et al. , 2018 ) on OPE algorithms has been made in a range of different domains , there are few broadly accepted evaluation tasks that combine complex , high-dimensional problems ∗Equally major contributors . †Policies and evaluation code are available at https : //github.com/google-research/deep_ ope . See Section 5 for links to modelling code . commonly explored by modern deep reinforcement learning algorithms ( Bellemare et al. , 2013 ; Brockman et al. , 2016 ) with standardized evaluation protocols and metrics . Our goal is to provide a set of tasks with a range of difficulty , excercise a variety of design properties , and provide policies with different behavioral patterns in order to establish a standardized framework for comparing OPE algorithms . We put particular emphasis on large datasets , long-horizon tasks , and task complexity to facilitate the development of scalable algorithms that can solve high-dimensional problems . Our primary contribution is the Deep Off-Policy Evaluation ( DOPE ) benchmark . DOPE is designed to measure the performance of OPE methods by 1 ) evaluating on challenging control tasks with properties known to be difficult for OPE methods , but which occur in real-world scenarios , 2 ) evaluating across a range of policies with different values , to directly measure performance on policy evaluation , ranking and selection , and 3 ) evaluating in ideal and adversarial settings in terms of dataset coverage and support . These factors are independent of task difficulty , but are known to have a large impact on OPE performance . To achieve 1 , we selected tasks on a set of design principles outlined in Section 3.1 . To achieve 2 , for each task we include 10 to 96 policies for evaluation and devise an evaluation protocol that measures policy evaluation , ranking , and selection as outlined in Section 3.2 . 
To achieve 3 , we provide two domains with differing dataset coverage and support properties described in Section 4 . Finally , to enable an easy-to-use research platform , we provide the datasets , target policies , evaluation API , as well as the recorded results of state-of-the-art algorithms ( presented in Section 5 ) as open-source . 2 BACKGROUND We briefly review the off-policy evaluation ( OPE ) problem setting . We consider Markov decision processes ( MDPs ) , defined by a tuple ( S , A , T , R , ρ0 , γ ) , with state space S , action space A , transition distribution T ( s′|s , a ) , initial state distribution ρ0 ( s ) , reward function R ( s , a ) and discount factor γ ∈ ( 0 , 1 ] . In reinforcement learning , we are typically concerned with optimizing or estimating the performance of a policy π ( a|s ) . The performance of a policy is commonly measured by the policy value V π , defined as the expected sum of discounted rewards : V π : = Es0∼ρ0 , s1 : ∞ , a0 : ∞∼π [ ∞∑ t=0 γtR ( st , at ) ] . ( 1 ) If we have access to state and action samples collected from a policy π , then we can use the sample mean of observed returns to estimate the value function above . However , in off-policy evaluation we are typically interested in estimating the value of a policy when the data is collected from a separate behavior policy πB ( a|s ) . This setting can arise , for example , when data is being generated online from another process , or in the purely offline case when we have a historical dataset . In this work we consider the latter , purely offline setting . The typical setup for this problem formulation is that we are provided with a discount γ , a dataset of trajectories collected from a behavior policy D = { ( s0 , a0 , r0 , s1 , . . . ) } , and optionally the action probabilities for the behavior policy πB ( at|st ) . In many practical applications , logging action propensities is not possible , for example , when the behavior policy is a mix of ML and hard-coded business logic . For this reason , we focus on the setting without propensities to encourage future work on behavior-agnostic OPE methods . For the methods that require propensities , we estimate the propensities with behavior cloning . The objective can take multiple flavors , as shown in Fig . 1 . A common task in OPE is to estimate the performance , or value , of a policy π ( which may not be the same as πB ) so that the estimated value is as close as possible to V π under a metric such as MSE or absolute error . A second task is to perform policy selection , where the goal is to select the best policy or set of policies out of a group of candidates . This setup corresponds to how OPE is commonly used in practice , which is to find the best performing strategy out of a pool when online evaluation is too expensive to be feasible . 3 DOPE : DEEP OFF-POLICY EVALUATION . The goal of the Deep Off-Policy Evaluation ( DOPE ) benchmark is to provide tasks that are challenging and effective measures of progress for OPE methods , yet is easy to use in order to better facilitate research . Therefore , we design our benchmark around a set of properties which are known to be difficult for existing OPE methods in order to gauge their shortcomings , and keep all tasks amenable to simulation in order for the benchmark to be accessible and easy to evaluate . 3.1 TASK PROPERTIES . 
We describe our motivating properties for selecting tasks for the benchmark as follows : High Dimensional Spaces ( H ) High-dimensionality is a key-feature in many real-world domains where it is difficult to perform feature engineering , such as in robotics , autonomous driving , and more . In these problems , it becomes challenging to accurately estimate quantities such as the value function without the use of high-capacity models such a neural networks and large datasets with wide state coverage . Our benchmark contains complex continuous-space tasks which exercise these challenges . Long Time-Horizon ( L ) Long time horizon tasks are known to present difficult challenges for OPE algorithms . Some algorithms have difficulty doing credit assignment for these tasks . This can be made worse as the state dimension or action dimension increases . Sparse Rewards ( R ) Sparse reward tasks increase the difficulty of credit assignment and add exploration challenges , which may interact with data coverage in the offline setting . We include a range robotics and navigation tasks which are difficult to solve due to reward sparsity . Temporally extended control ( T ) The ability to make decisions hierarchically is major challenge in many reinforcement learning applications . We include two navigation tasks which require high-level planning in addition to low-level control in order to simulate the difficulty in such problems . 3.2 EVALUATION PROTOCOL The goal of DOPE to provide metrics for policy ranking , evaluation and selection . Many existing OPE methods have only been evaluated on point estimates of value such as MSE , but policy selection is an important , practical use-case of OPE . In order to explicitly measure the quality of using OPE for policy selection , we provide a set of policies with varying value , and devise two metrics that measure how well OPE methods can rank policies . For each task we include a dataset of logged experiencesD , and a set of policies { π1 , π2 , ... , πN } with varying values . For each policy , OPE algorithms must use D to produce an estimate of the policy ’ s value . For evaluation of these estimates , we provide  ground truth values '' { V π1 , V π2 , ... , V πN } that are computed by running the policy forM ≥ 1000 episodes , where the exact value ofM is given by the number of episodes needed to lower the error bar on the ground truth values to 0.666 . The estimated values are then compared to these ground truth values using three different metrics encompassing both policy evaluation and selection ( illustrated in Figure 2 ; see Appendix A.1 for mathematical definitions ) . Absolute Error This metric measures estimate accuracy instead of its usefulness for ranking . Error is the most commonly used metric to assess performance of OPE algorithms . We opted to use absolute error instead of MSE to be robust to outliers . Regret @ k This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set . It is computed by identifying the top-k policies according to the estimated returns . Regret @ k is the difference between the actual expected return of the best policy in the entire set , and the actual value of the best policy in the top-k set . Rank correlation This metric directly measures how well estimated values rank policies , by computing the correlation between ordinal rankings according by the OPE estimates and ordinal rankings according to the ground truth values . 4 DOMAINS . 
DOPE contains two domains designed to provide a more comprehensive picture of how well OPE methods perform in different settings . These two domains are constructed using two benchmarks previously proposed for offline reinforcement learning : RL Unplugged ( Gulcehre et al. , 2020 ) and D4RL ( Fu et al. , 2020 ) , and reflect the challenges found within them . The DOPE RL Unplugged domain is constrained in two important ways : 1 ) the data is always generated using online RL training , ensuring there is adequate coverage of the state-action space , and 2 ) the policies are generated by applying offline RL algorithms to the same dataset we use for evaluation , ensuring that the behavior policy and evaluation policies induce similar state-action distributions . Using it , we hope to understand how OPE methods work as task complexity increases from simple Cartpole tasks to controlling a Humanoid body while controlling for ideal data . On the other hand , the DOPE D4RL domain has : 1 ) data from various sources ( including random exploration , human teleoperation , and RL-trained policies with limited exploration ) , which results in varying levels of coverage of the state-action space , and 2 ) policies that are generated using online RL algorithms , making it less likely that the behavior and evaluation policies share similar induced state-action distributions . Both of these result in distribution shift which is known to be challenging for OPE methods , even in simple tasks . So , using it we hope to measure how well OPE methods work in more practical data settings .
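To make the evaluation protocol concrete, the sketch below shows one way the ground-truth values (Monte Carlo rollouts of each policy, as in Section 3.2) and the three DOPE metrics — absolute error, Regret@k, and rank correlation — could be computed. It assumes a Gym-style `reset()`/`step()` environment interface and simplified metric definitions; the precise definitions are in the paper's Appendix A.1.

```python
import numpy as np
from scipy.stats import spearmanr

def estimate_policy_value(env, policy, gamma=0.99, n_episodes=1000, max_steps=1000):
    """Monte Carlo estimate of V^pi (Eq. 1) by rolling out the policy."""
    returns = []
    for _ in range(n_episodes):
        s, ret, disc = env.reset(), 0.0, 1.0
        for _ in range(max_steps):
            a = policy(s)
            s, r, done, _ = env.step(a)
            ret += disc * r
            disc *= gamma
            if done:
                break
        returns.append(ret)
    return float(np.mean(returns))

def absolute_error(estimates, true_values):
    return np.abs(np.asarray(estimates) - np.asarray(true_values))

def regret_at_k(estimates, true_values, k=1):
    estimates, true_values = np.asarray(estimates), np.asarray(true_values)
    top_k = np.argsort(estimates)[-k:]               # policies the OPE method ranks best
    return true_values.max() - true_values[top_k].max()

def rank_correlation(estimates, true_values):
    return spearmanr(estimates, true_values).correlation

est, true = [0.2, 0.9, 0.5], [1.0, 0.7, 0.3]
print(absolute_error(est, true), regret_at_k(est, true, k=1), rank_correlation(est, true))
```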
This article proposes a benchmark for off-policy evaluation that provides metrics for policy ranking, evaluation, and selection. OPE methods estimate policy values from logged data, and their estimates are scored against ground-truth values using absolute error, rank correlation, and regret, in order to verify the effectiveness of different offline evaluation methods. The benchmark provides two evaluation domains, DOPE RL Unplugged and DOPE D4RL. In the experiments, the authors apply the benchmark in MuJoCo environments to evaluate the effectiveness of several offline evaluation methods.
SP:dd2a50abff85d2b52b02dfe27cd42e443ea265cf
Triple-Search: Differentiable Joint-Search of Networks, Precision, and Accelerators
This paper proposes Triple-Search (TRIPS), a differentiable framework for jointly searching over network architecture, quantization precision, and accelerator parameters. To address the dilemma between exploding training memory and biased search, the framework uses heterogeneous sampling, applying soft Gumbel-Softmax for the weight update and hard Gumbel-Softmax for the architecture probabilities \beta. To integrate accelerator search, hard Gumbel-Softmax is applied to the hardware design choices, and the overall hardware cost is used as a penalty. Experiments on an FPGA platform with the CIFAR and ImageNet datasets show the superiority of TRIPS over NAS-only methods.
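A brief, hedged sketch of the soft vs. hard Gumbel-Softmax sampling that the summary refers to, using PyTorch's built-in `gumbel_softmax`; the actual heterogeneous sampling scheme in TRIPS and the exact form of its hardware-cost penalty are not reproduced here, and `per_choice_cost` is a hypothetical quantity.

```python
# Soft vs. hard Gumbel-Softmax over candidate design choices (illustrative only).
import torch
import torch.nn.functional as F

logits = torch.randn(4, requires_grad=True)            # scores over 4 candidate operations

soft = F.gumbel_softmax(logits, tau=1.0, hard=False)    # relaxed sample, low-variance gradients
hard = F.gumbel_softmax(logits, tau=1.0, hard=True)     # one-hot forward pass,
                                                        # straight-through gradients
print(soft, hard)

# A hardware-cost penalty could then be added to the task loss, e.g.
#   loss = task_loss + lam * (hard * per_choice_cost).sum()
# where `per_choice_cost` is a hypothetical vector of estimated costs per design choice.
```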
SP:1037f94ce6eae4a42ea7913c76007f5f3c26aeaf
This paper deals with continual learning. Specifically, given a stream of tasks, we want to maximise performance across all tasks. Neural networks typically suffer from catastrophic forgetting, which results in worse performance on tasks seen earlier in training. There are many proposed solutions to this problem; one specific family of approaches is "memory-based" algorithms, which store some training examples from the tasks seen thus far and mix them in with new training data to encourage the model not to forget past tasks.
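A toy sketch of the memory-based (replay) idea described in this summary: keep a small reservoir of past examples and mix them into each new batch. The buffer size, sampling scheme, and all names are assumptions; specific methods differ in what they store and how the stored examples are used.

```python
import random

class ReplayBuffer:
    """Reservoir-sampled memory of past (x, y) examples."""
    def __init__(self, capacity=200, seed=0):
        self.capacity, self.data, self.seen = capacity, [], 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:                                    # reservoir sampling keeps a uniform sample
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return self.rng.sample(self.data, min(k, len(self.data)))

# Training-loop sketch: mix replayed examples into every new batch.
buffer = ReplayBuffer()
for batch in []:                                 # placeholder stream of (x, y) batches
    mixed = list(batch) + buffer.sample(len(batch))
    # model.train_step(mixed)                    # hypothetical update on the mixed batch
    for ex in batch:
        buffer.add(ex)
```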
SP:d850572819200f79545616fc92e789ce958b30d4
Improving Transformation Invariance in Contrastive Representation Learning
Given one image, the paper first generates different views controlled by a differentiable parameter \alpha, and then minimizes an additional "conditional variance" term (the expectation of the squared differences between these views' representations). The paper thereby encourages representations of the same image to remain similar under augmentation. A testing strategy is further proposed that votes over features computed with different augmentations. Results demonstrate the effectiveness of the approach.
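A small sketch of the regularizer this summary describes: the average squared deviation among representations of different augmented views of the same image (its "conditional variance"), added to whatever base contrastive loss is used. The exact weighting and view generation in the paper are not reproduced; array names and the combination with the contrastive loss are assumptions.

```python
import torch

def conditional_variance(reps):
    """reps: (n_images, n_views, d) representations of augmented views of each image.
    Returns the mean squared deviation of views from their per-image mean."""
    mean = reps.mean(dim=1, keepdim=True)
    return ((reps - mean) ** 2).sum(dim=-1).mean()

reps = torch.randn(32, 4, 128)                   # 32 images, 4 views each
reg = conditional_variance(reps)
# total_loss = contrastive_loss + lam * reg      # hypothetical combination
print(reg)
```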
SP:a692e1e43991839e08a02e9122757224e1582cfd
Understanding the Effect of Bias in Deep Anomaly Detection
1 INTRODUCTION . Anomaly detection ( Chandola et al. , 2009 ; Pimentel et al. , 2014 ) trains a formal model to identify unexpected or anomalous instances in incoming data , whose behaviors differ from normal instances . It is particularly useful for detecting problematic events such as digital fraud , structural defects , and system malfunctions . Building accurate anomaly detection models is a well-known challenge in machine learning , due to the scarcity of labeled anomaly data . The classical and most common approach is to train anomaly detection models using only normal data1 , i.e. , first train a model using a corpus of normal data to capture normal behaviors , then configure the model to flag instances with large deviations as anomalies . Researchers have also developed deep learning methods to better capture the complex structure in the data ( Ruff et al . ( 2018 ) ; Wang et al . ( 2019a ) ; Zhou & Paffenroth ( 2017 ) ) . Following the terminology introduced by Chandola et al . ( 2009 ) , we refer to these models as semi-supervised anomaly detection . Recently , a new line of anomaly detection models proposes to leverage available labeled anomalies during model training , i.e. , train an anomaly detection model using both normal data and additional labeled anomaly samples as they become available ( Ruff et al . ( 2020b ) ; Yamanaka et al . ( 2019 ) ; Ruff et al . ( 2020a ) ; Hendrycks et al . ( 2019a ) ) . Existing works show that these new models achieve considerable performance improvements beyond the models trained using only normal data . We hereby refer to these models as deep supervised2 anomaly detection ( Chandola et al. , 2009 ) . When exploring these models , we found that when the labeled anomalies ( used to train the model ) do not align with the target distribution , they could introduce harmful bias to the trained model . Specifically , when comparing the performance of a supervised anomaly detector to its semi-supervised 1Existing literature has used different terms to describe this type of models : some using semi-supervised anomaly detection ( Chandola et al. , 2009 ) and others using unsupervised anomaly detection ( Ruff et al. , 2018 ) . 2Some works termed these models as semi-supervised anomaly detection ( Ruff et al. , 2020b ; Yamanaka et al. , 2019 ; Ruff et al. , 2020a ; Hendrycks et al. , 2019a ) while others termed them as supervised anomaly detection ( Chandola et al. , 2009 ) . version , the performance difference varies significantly across test anomaly data , some better and some worse . That is , using labeled anomalies during model training does not always improve model performance ; instead , it may introduce large variance ( or bias ) in anomaly detection outcomes . In this paper , we aim to understand the effect of a biased training set on deep anomaly detection models . We formally state the anomaly detection problem , focusing on the anomaly detector ’ s recall at a given false positive rate as the main performance metric . We factor the contribution of the labeled anomalies by the detector ’ s anomaly scoring function , and show that different types of labeled anomalies produce different anomaly scoring functions . Next , given any two different anomaly scoring functions , we formally define their difference in performance as the relative scoring bias of the anomaly detectors . 
Our novel notion of scoring bias for anomaly detection aligns with the notion of bias in the classical supervised learning setting , with the key difference being the different performance metric—we target recall at a given false positive rate , the metric used by real-world anomaly detection tasks ( Li et al. , 2019 ; Liu et al. , 2018 ) . Along this line , we establish the first finite sample rates for estimating the relative scoring bias for deep anomaly detection . We empirically validate our assumptions and theoretical results on both synthetic and three real-world datasets ( Fashion-MNIST , Statlog ( Landsat Satellite ) , and Cellular Spectrum Misuse ( Li et al. , 2019 ) ) . Furthermore , we provide an empirical study on how a biased training anomaly set affects the anomaly score function and therefore the resulting detection performance . We consider the above three real-world datasets and six deep-learning based anomaly detection models . Our study demonstrates scenarios in which the biased anomaly set can be useful or problematic , and provides a solid benchmark for future research . In this paper , we introduce a formal analysis on the effect of a biased training set on deep anomaly detection . Our main contributions are the following : • We discover the issue of large performance variance in deep anomaly detectors , caused by the use of the biased anomaly set as training data . • We model the effect of biased training as relative scoring bias , and establish the first finite sample rates for estimating the relative scoring bias of the trained models . • We conduct empirical experiments to verify and characterize the impact of the relative scoring bias on six popular anomaly detection models , and three real-world datasets . To the best of our knowledge , our work is the first to formally study the effect of a biased anomaly training set on deep anomaly detection . Our results show both significant positive and negative impacts of these biases , and suggest that model trainers must treat anomalies with additional care . We believe this leads to new opportunities for improving deep anomaly detectors and deserves more attention from the research community . 2 RELATED WORK . Anomaly Detection Models . While the literature on anomaly detection models is extensive , the most relevant to our work are deep learning based models . Following the terminology used by Chandola et al . ( 2009 ) , we consider two types of models : • Semi-supervised anomaly detection refers to models trained on only normal data , e.g. , Ruff et al . ( 2018 ) ; Sakurada & Yairi ( 2014 ) ; Zhou & Paffenroth ( 2017 ) ; • Supervised anomaly detection refers to models trained on normal data and a small set of labeled anomalies , e.g. , Pang et al . ( 2019 ) ; Daniel et al . ( 2019 ) ; Yamanaka et al . ( 2019 ) ; Ruff et al . ( 2020a ; b ) . One can also categorize models by their architecture : hypersphere ( Ruff et al. , 2018 ; 2020a ; b ) and autoencoder ( or reconstruction ) based models ( Zhou & Paffenroth , 2017 ; Yamanaka et al. , 2019 ) . Another line of recent work proposes to use synthetic or auxiliary anomalies to train anomaly detection models ( Golan & El-Yaniv ( 2018 ) ; Hendrycks et al . ( 2019c ) ; Lee et al . ( 2018 ) ; Hendrycks et al . ( 2019b ) ) , “ forcing ” the model to learn a more compact representation of the normal data . 
While the existing work has shown empirically that the choice of abnormal data in training can help detect some unseen abnormal distributions , it does not offer any theoretical explanation for the phe- nomenon , nor does it consider the counter-cases when additional abnormal data in training hurt the detection performance . Bias in Anomaly Detection . To the best of our knowledge , we are the first to identify the presence of bias caused by an additional labeled anomaly set in deep anomaly detection models , especially when there exists a mismatch between the anomalies present in training and those encountered in testing ( as shown in Section 5 ) . Existing work has explored the presence of bias in semi-supervised anomaly detection models when there exists defective normal data in training , like outliers and simple-to-reconstruct examples ( Tong et al. , 2019 ) , or examples with background noise ( Liu & Ma , 2019 ) . There is also literature on the bias-variance tradeoff for ensembles of semi-supervised anomaly detection models ( Aggarwal & Sathe , 2015 ; Rayana et al. , 2016 ) . But little or no work has been done on the bias of anomaly detection in the supervised setting ( i.e. , models trained on both normal data and some labeled anomalies ) . Finally , another line of work in transfer learning has identified the value of additional labeled data in training ( Kpotufe & Martinet , 2018 ; Hanneke & Kpotufe , 2019 ) and the performance bias on target data by transferring knowledge from a less related source ( Wang et al. , 2019b ; Wu et al. , 2020 ) . Yet most work only considered the cases of classification models . PAC guarantees for Anomaly Detection . Despite significant progress on developing theoretical guarantees for classification tasks ( Valiant ( 1984 ) ; Kearns et al . ( 1994 ) ) , little has been done for anomaly detection tasks . Siddiqui et al . ( 2016 ) first establishes a PAC framework for anomaly detection models using the notion of pattern space ; however , it is hard to apply such pattern spaces to deep learning models with complex latent spaces . Liu et al . ( 2018 ) proposes a model-agnostic approach to provide the PAC guarantee for anomaly detection performance , by analyzing the convergence for the cumulative distribution of anomaly scores . We follow the basic setting from this line of work to address the convergence of the relative scoring bias . In contrast to prior work , our proof relies on a novel adaption of the key theoretical tool from Massart ( 1990 ) , which allows us to extend our theory to characterize the notion of scoring bias as defined in Section 3.2 . 3 PROBLEM FORMULATION . We now formally state the anomaly detection problem . Consider a model class Θ for anomaly detection , and a ( labeled ) training set D sampled from a mixture distribution D over the normal and anomalous instances . In the context of anomaly detection , a model θ maps each input instance x to a continuous output , which corresponds to anomaly score sθ ( x ) . The model further uses a threshold τθ on the score function to produce a binary label for input x . For a given threshold value τθ , we can define the False Positive Rate ( FPR ) of the model θ on the input data distribution as FPR ( sθ , τθ ) = P [ sθ ( x ) > τθ | y = 0 ] , and the True Positive Rate ( TPR , a.k.a . Recall ) as TPR ( sθ , τθ ) = P [ sθ ( x ) > τθ | y = 1 ] . 
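As a concrete reading of the two metrics just defined, the sketch below computes empirical FPR and TPR for a scoring function at a given threshold; it assumes plain arrays of anomaly scores with binary labels.

```python
import numpy as np

def fpr_tpr(scores, labels, tau):
    scores, labels = np.asarray(scores), np.asarray(labels)
    fpr = np.mean(scores[labels == 0] > tau)   # P[s(x) > tau | y = 0]
    tpr = np.mean(scores[labels == 1] > tau)   # P[s(x) > tau | y = 1] (recall)
    return fpr, tpr

print(fpr_tpr([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1], tau=0.3))
```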
The FPR and TPR are competing objectives—therefore , a key challenge for anomaly detection algorithms is to identify a configuration of the score , threshold pair ( sθ , τθ ) that strikes a balance between the two performance metrics . W.l.o.g.3 , in this paper we focus on the following scenario , where the objective is to maximize TPR subject to achieving a target FPR . Formally , let q be the target FPR ; we define the optimal anomaly detector as4 ( s∗θ , τ∗θ ) ∈ arg max ( sθ , τθ ) : θ∈Θ TPR ( sθ , τθ ) s.t . FPR ( sθ , τθ ) ≤ q ( 3.1 ) 3.1 A GENERAL ANOMALY DETECTION FRAMEWORK . Note that the performance metric ( namely TPR ) in Problem 3.1 is statistics that depends on the entire predictive distribution , and can not be easily evaluated on any single data point . Therefore , rather than directly solving Problem 3.1 , practical anomaly detection algorithms ( such as OCSVM ( Schölkopf et al. , 1999 ) , Deep SAD ( Ruff et al. , 2020b ) , etc ) often rely on a two-stage process : ( 1 ) 3Our results can be easily extended to the setting where the goal is to minimize FPR subject to a given TPR . 4This formulation aligns with many contemporary works in deep anomaly detection . For example , Li et al . ( 2019 ) show that in real-world anomaly detection problems , it is desirable to detect anomalies with a prefixed low false alarm rate ; Liu et al . ( 2018 ) formulate the anomaly detection in a similar way , where the goal is to minimize FPR for a fixed TPR . learning the score function sθ from training data via a surrogate loss , and ( 2 ) given sθ from the previous step , computing the threshold function τθ on the training data . Formally , given a model class Θ , a training set D , a loss function  , and a target FPR q , a two-staged anomaly detection algorithm outputs { ŝθ ∈ arg minsθ : θ∈Θ  ( sθ , D ) τ̂θ ∈ arg maxτθ : θ∈Θ TPR ( ŝθ , τθ ) s.t . FPR ( ŝθ , τθ ) ≤ q ( 3.2 ) Note that the first part of Equation 3.2 amounts to solving a supervised learning problem . Here , the loss function  could be instantiated into latent-space-based losses ( e.g. , Deep SAD ) , marginbased losses ( e.g. , OCSVM ) , or reconstruction-based losses ( e.g. , ABC ( Yamanaka et al. , 2019 ) ) ; therefore , many contemporary anomaly detection models fall into this framework . To set the threshold τ̂θ , we consider using the distribution of the anomaly scores ŝθ ( · ) from a labeled validation set Dval ∼ D. Let Dval : = Dval0 ∪ Dvala where Dval0 and Dvala denote the subset of normal data and the subset of abnormal data of Dval . Denote the empirical CDFs for anomaly scores assigned to x in Dval0 and D val a as F̂0 and F̂a , respectively . Then , given a target FPR value q , following a similar argument as Liu et al . ( 2018 ) , one can compute the threshold as τ̂θ = max { u ∈ R : F̂0 ( u ) ≤ q } . The steps for solving the second part of Equation 3.2 is summarized in Algorithm 1 . Algorithm 1 : Computing the anomaly detection threshold for Problem 3.2 Data : A validation dataset Dval and a scoring function s ( · ) . Result : A score threshold achieving a target FPR and the corresponding recall on Dval . 1 Get anomaly score s ( x ) for each x in Dval . 2 Compute empirical CDF F̂0 ( x ) and F̂a ( x ) for anomaly scores of x in Dval0 and Dvala . 3 Output detection threshold τ̂ = max { u ∈ R : F̂0 ( u ) ≤ q } . 4 Output TPR ( recall ) on Dvala as r̂ = 1− F̂a ( τ̂ ) .
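A minimal sketch of the threshold-selection step in Algorithm 1: choose the largest threshold whose empirical false-positive rate on the normal validation scores stays at or below the target q (equivalently, the (1-q)-quantile of the normal scores), then report recall on the validation anomalies. Variable names and the synthetic example are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_threshold_and_recall(scores_normal, scores_anomalous, q=0.05):
    """Return (tau, empirical FPR, recall) for the largest tau with FPR <= q."""
    s0 = np.sort(np.asarray(scores_normal))
    sa = np.asarray(scores_anomalous)
    k = int(np.floor(q * len(s0)))               # number of allowed false positives
    tau = s0[len(s0) - k - 1] if k < len(s0) else -np.inf
    fpr = np.mean(s0 > tau)
    tpr = np.mean(sa > tau)                      # recall on the validation anomalies
    return tau, fpr, tpr

rng = np.random.default_rng(0)
normal, anom = rng.normal(0, 1, 10000), rng.normal(3, 1, 500)
print(fit_threshold_and_recall(normal, anom, q=0.05))
```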
This paper studies the potential bias in deep semi-supervised anomaly detection. The bias is evaluated in terms of the TPR achieved at a fixed FPR. The anomaly scores output by unsupervised anomaly detectors are used as a benchmark to examine the relative scoring bias of deep semi-supervised anomaly detectors. The paper further establishes finite-sample rates for estimating this type of scoring bias. The bias is verified on synthetic and real-world datasets, and the empirical results also show its potential impact on several anomaly detectors.
SP:a24603a5dbc07070aeba98e1206511799111bec6
Calibration tests beyond classification
1 INTRODUCTION . We consider the general problem of modelling the relationship between a featureX and a target Y in a probabilistic setting , i.e. , we focus on models that approximate the conditional probability distribution P ( Y |X ) of target Y for given feature X . The use of probabilistic models that output a probability distribution instead of a point estimate demands guarantees on the predictions beyond accuracy , enabling meaningful and interpretable predicted uncertainties . One such statistical guarantee is calibration , which has been studied extensively in metereological and statistical literature ( DeGroot & Fienberg , 1983 ; Murphy & Winkler , 1977 ) . A calibrated model ensures that almost every prediction matches the conditional distribution of targets given this prediction . Loosely speaking , in a classification setting a predicted distribution of the model is called calibrated ( or reliable ) , if the empirically observed frequencies of the different classes match the predictions in the long run , if the same class probabilities would be predicted repeatedly . A classical example is a weather forecaster who predicts each day if it is going to rain on the next day . If she predicts rain with probability 60 % for a long series of days , her forecasting model is calibrated for predictions of 60 % if it actually rains on 60 % of these days . If this property holds for almost every probability distribution that the model outputs , then the model is considered to be calibrated . Calibration is an appealing property of a probabilistic model since it 1The source code of the experiments is available at https : //github.com/devmotion/ Calibration_ICLR2021 . provides safety guarantees on the predicted distributions even in the common case when the model does not predict the true distributions P ( Y |X ) . Calibration , however , does not guarantee accuracy ( or refinement ) —a model that always predicts the marginal probabilities of each class is calibrated but probably inaccurate and of limited use . On the other hand , accuracy does not imply calibration either since the predictions of an accurate model can be too over-confident and hence miscalibrated , as observed , e.g. , for deep neural networks ( Guo et al. , 2017 ) . In the field of machine learning , calibration has been studied mainly for classification problems ( Bröcker , 2009 ; Guo et al. , 2017 ; Kull et al. , 2017 ; 2019 ; Kumar et al. , 2018 ; Platt , 2000 ; Vaicenavicius et al. , 2019 ; Widmann et al. , 2019 ; Zadrozny , 2002 ) and for quantiles and confidence intervals of models for regression problems with real-valued targets ( Fasiolo et al. , 2020 ; Ho & Lee , 2005 ; Kuleshov et al. , 2018 ; Rueda et al. , 2006 ; Taillardat et al. , 2016 ) . In our work , however , we do not restrict ourselves to these problem settings but instead consider calibration for arbitrary predictive models . Thus , we generalize the common notion of calibration as : Definition 1 . Consider a model PX : = P ( Y |X ) of a conditional probability distribution P ( Y |X ) . Then model P is said to be calibrated if and only if P ( Y |PX ) = PX almost surely . ( 1 ) If P is a classification model , Definition 1 coincides with the notion of ( multi-class ) calibration by Bröcker ( 2009 ) ; Kull et al . ( 2019 ) ; Vaicenavicius et al . ( 2019 ) . Alternatively , in classification some authors ( Guo et al. , 2017 ; Kumar et al. , 2018 ; Naeini et al. , 2015 ) study the strictly weaker property of confidence calibration ( Kull et al. 
, 2019 ) , which only requires P ( Y = arg maxPX |maxPX ) = maxPX almost surely . ( 2 ) This notion of calibration corresponds to calibration according to Definition 1 for a reduced problem with binary targets Ỹ : = 1 ( Y = arg maxPX ) and Bernoulli distributions P̃X : = Ber ( maxPX ) as probabilistic models . For real-valued targets , Definition 1 coincides with the so-called distribution-level calibration by Song et al . ( 2019 ) . Distribution-level calibration implies that the predicted quantiles are calibrated , i.e. , the outcomes for all real-valued predictions of the , e.g. , 75 % quantile are actually below the predicted quantile with 75 % probability ( Song et al. , 2019 , Theorem 1 ) . Conversely , although quantile-based calibration is a common approach for real-valued regression problems ( Fasiolo et al. , 2020 ; Ho & Lee , 2005 ; Kuleshov et al. , 2018 ; Rueda et al. , 2006 ; Taillardat et al. , 2016 ) , it provides weaker guarantees on the predictions . For instance , the linear regression model in Fig . 1 empirically shows quantiles that appear close to being calibrated albeit being uncalibrated according to Definition 1 . Figure 1 also raises the question of how to assess calibration for general target spaces in the sense of Definition 1 , without having to rely on visual inspection . In classification , measures of calibration such as the commonly used expected calibration error ( ECE ) ( Guo et al. , 2017 ; Kull et al. , 2019 ; Naeini et al. , 2015 ; Vaicenavicius et al. , 2019 ) and the maximum calibration error ( MCE ) ( Naeini et al. , 2015 ) try to capture the average and maximal discrepancy between the distributions on the left hand side and the right hand side of Eq . ( 1 ) or Eq . ( 2 ) , respectively . These measures can be generalized to other target spaces ( see Definition B.1 ) , but unfortunately estimating these calibration errors from observations of features and corresponding targets is problematic . Typically , the predictions are different for ( almost ) all observations , and hence estimation of the conditional probability P ( Y |PX ) , which is needed in the estimation of ECE and MCE , is challenging even for low-dimensional target spaces and usually leads to biased and inconsistent estimators ( Vaicenavicius et al. , 2019 ) . Kernel-based calibration errors such as the maximum mean calibration error ( MMCE ) ( Kumar et al. , 2018 ) and the kernel calibration error ( KCE ) ( Widmann et al. , 2019 ) for confidence and multi-class calibration , respectively , can be estimated without first estimating the conditional probability and hence avoid this issue . They are defined as the expected value of a weighted sum of the differences of the left and right hand side of Eq . ( 1 ) for each class , where the weights are given as a function of the predictions ( of all classes ) and chosen such that the calibration error is maximized . A reformulation with matrix-valued kernels ( Widmann et al. , 2019 ) yields unbiased and differentiable estimators without explicit dependence on P ( Y |PX ) , which simplifies the estimation and allows to explicitly account for calibration in the training objective ( Kumar et al. , 2018 ) . Additionally , the kernel-based framework allows the derivation of reliable statistical hypothesis tests for calibration in multi-class classification ( Widmann et al. , 2019 ) . However , both the construction as a weighted difference of the class-wise distributions in Eq . 
( 1 ) and the reformulation with matrix-valued kernels require finite target spaces and hence can not be applied to regression problems . To be able to deal with general target spaces , we present a new and more general framework of calibration errors without these limitations . Our framework can be used to reason about and test for calibration of any probabilistic predictive model . As explained above , this is in stark contrast with existing methods that are restricted to simple output distributions , such as classification and scalar-valued regression problems . A key contribution of this paper is a new framework that is applicable to multivariate regression , as well as situations when the output is of a different ( e.g. , discrete ordinal ) or more complex ( e.g. , graph-structured ) type , with clear practical implications . Within this framework a KCE for general target spaces is obtained . We want to highlight that for multi-class classification problems its formulation is more intuitive and simpler to use than the measure proposed by Widmann et al . ( 2019 ) based on matrix-valued kernels . To ease the application of the KCE we derive several estimators of the KCE with subquadratic sample complexity and their asymptotic properties in tests for calibrated models , which improve on existing estimators and tests in the two-sample test literature by exploiting the special structure of the calibration framework . Using the proposed framework , we numerically evaluate the calibration of neural network models and ensembles of such models . 2 CALIBRATION ERROR : A GENERAL FRAMEWORK . In classification , the distributions on the left and right hand side of Eq . ( 1 ) can be interpreted as vectors in the probability simplex . Hence ultimately the distance measure for ECE and MCE ( see Definition B.1 ) can be chosen as a distance measure of real-valued vectors . The total variation , Euclidean , and squared Euclidean distances are common choices ( Guo et al. , 2017 ; Kull et al. , 2019 ; Vaicenavicius et al. , 2019 ) . However , in a general setting measuring the discrepancy between P ( Y |PX ) and PX can not necessarily be reduced to measuring distances between vectors . The conditional distribution P ( Y |PX ) can be arbitrarily complex , even if the predicted distributions are restricted to a simple class of distributions that can be represented as real-valued vectors . Hence in general we have to resort to dedicated distance measures of probability distributions . Additionally , the estimation of conditional distributions P ( Y |PX ) is challenging , even more so than in the restricted case of classification , since in general these distributions can be arbitrarily complex . To circumvent this problem , we propose to use the following construction : We define a random variable ZX ∼ PX obtained from the predictive model and study the discrepancy between the joint distributions of the two pairs of random variables ( PX , Y ) and ( PX , ZX ) , respectively , instead of the discrepancy between the conditional distributions P ( Y |PX ) and PX . Since ( PX , Y ) d = ( PX , ZX ) if and only if P ( Y |PX ) = PX almost surely , model P is calibrated if and only if the distributions of ( PX , Y ) and ( PX , ZX ) are equal . The random variable pairs ( PX , Y ) and ( PX , ZX ) take values in the product space P×Y , where P is the space of predicted distributions PX and Y is the space of targets Y . 
For instance , in classification , P could be the probability simplex and Y the set of all class labels , whereas in the case of Gaussian predictive models for scalar targets P could be the space of normal distributions and Y be R. The study of the joint distributions of ( PX , Y ) and ( PX , ZX ) motivates the definition of a generally applicable calibration error as an integral probability metric ( Müller , 1997 ; Sriperumbudur et al. , 2009 ; 2012 ) between these distributions . In contrast to common f -divergences such as the Kullback-Leibler divergence , integral probability metrics do not require that one distribution is absolutely continuous with respect to the other , which can not be guaranteed in general . Definition 2 . Let Y denote the space of targets Y , and P the space of predicted distributions PX . We define the calibration error with respect to a space of functions F of the form f : P × Y → R as CEF : = sup f∈F ∣∣EPX , Y f ( PX , Y ) − EPX , ZX f ( PX , ZX ) ∣∣ . ( 3 ) By construction , if model P is calibrated , then CEF = 0 regardless of the choice of F . However , the converse statement is not true for arbitrary function spaces F . From the theory of integral probability metrics ( see , e.g. , Müller , 1997 ; Sriperumbudur et al. , 2009 ; 2012 ) , we know that for certain choices of F the calibration error in Eq . ( 3 ) is a well-known metric on the product space P×Y , which implies that CEF = 0 if and only if model P is calibrated . Prominent examples include the maximum mean discrepancy2 ( MMD ) ( Gretton et al. , 2007 ) , the total variation distance , the Kantorovich distance , and the Dudley metric ( Dudley , 1989 , p. 310 ) . As pointed out above , Definition 2 is a generalization of the definition for multi-class classification proposed by Widmann et al . ( 2019 ) —which is based on vector-valued functions and only applicable to finite target spaces—to any probabilistic predictive model . In Appendix E we show this explicitly and discuss the special case of classification problems in more detail . Previous results ( Widmann et al. , 2019 ) imply that in classification MMCE and , for common distance measures d ( · , · ) such as the total variation and squared Euclidean distance , ECEd and MCEd are special cases of CEF . In Appendix G we show that our framework also covers natural extensions of ECEd and MCEd to countably infinite discrete target spaces , which to our knowledge have not been studied before and occur , e.g. , in Poisson regression . The literature of integral probability metrics suggests that we can resort to estimating CEF from i.i.d . samples from the distributions of ( PX , Y ) and ( PX , ZX ) . For the MMD , the Kantorovich distance , and the Dudley metric tractable strongly consistent empirical estimators exist ( Sriperumbudur et al. , 2012 ) . Here the empirical estimator for the MMD is particularly appealing since compared with the other estimators “ it is computationally cheaper , the empirical estimate converges at a faster rate to the population value , and the rate of convergence is independent of the dimension d of the space ( for S = Rd ) ” ( Sriperumbudur et al . ( 2012 ) ) . Our specific design of ( PX , ZX ) can be exploited to improve on these estimators . If EZx∼Pxf ( Px , Zx ) can be evaluated analytically for a fixed prediction Px , then CEF can be estimated empirically with reduced variance by marginalizing out ZX . 
Otherwise $\mathbb{E}_{Z_x \sim P_x} f(P_x, Z_x)$ has to be estimated , but in contrast to the common estimators of the integral probability metrics discussed above , the artificial construction of $Z_X$ allows us to approximate it by numerical integration methods such as ( quasi ) Monte Carlo integration or quadrature rules with arbitrarily small error and variance . Monte Carlo integration preserves statistical properties of the estimators such as unbiasedness and consistency . ( Footnote 2 : As we discuss in Section 3 , the MMD is a metric if and only if the employed kernel is characteristic . )
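As an illustration of the construction above (and not the paper's exact kernel calibration error or its estimators), the sketch below measures calibration of a Gaussian predictive model as a squared-MMD two-sample statistic between samples of (P_X, Y) and (P_X, Z_X), drawing Z_X from the model's own predictive distribution rather than marginalizing it analytically. The Gaussian kernel over (mu, sigma, target), the lengthscale, and all names are assumptions.

```python
import numpy as np

def gauss_kernel(a, b, lengthscale=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def mmd2_calibration(mu, sigma, y, seed=0):
    """Squared MMD between {(P_X, Y)} and {(P_X, Z_X)} with Z_X ~ N(mu, sigma^2)."""
    rng = np.random.default_rng(seed)
    z = mu + sigma * rng.standard_normal(len(mu))
    P = np.column_stack([mu, sigma])              # representation of the predicted distribution
    A = np.column_stack([P, y])                   # samples of (P_X, Y)
    B = np.column_stack([P, z])                   # samples of (P_X, Z_X)
    k_aa, k_bb, k_ab = gauss_kernel(A, A), gauss_kernel(B, B), gauss_kernel(A, B)
    n = len(A)
    return (k_aa.sum() - np.trace(k_aa)) / (n * (n - 1)) \
         + (k_bb.sum() - np.trace(k_bb)) / (n * (n - 1)) \
         - 2.0 * k_ab.mean()

# A calibrated model (targets truly drawn from N(mu, sigma^2)) should give a value
# close to zero; an over-confident one (sigma too small) should not.
rng = np.random.default_rng(1)
mu, sigma = rng.normal(size=500), np.full(500, 1.0)
y = mu + rng.standard_normal(500)
print(mmd2_calibration(mu, sigma, y), mmd2_calibration(mu, 0.2 * sigma, y))
```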
The authors present an approach for testing calibration in conditional probability estimation models. They build on a line of work in the kernel estimation literature assessing whether the conditional distributions are well calibrated (i.e. P(Y | f(X)) = f(X), where f is some predictive model). They develop an MMD kernel estimator and expand on practical choices of kernels that are computationally tractable. They then derive an asymptotic null distribution for calibrated models, enabling control over the error rate when labeling a model uncalibrated. A few simulation studies are done with neural networks to show the applicability of the method.
Semantic Hashing with Locality Sensitive Embeddings
The authors consider the problem of learning a hash function such that semantically similar elements have a high collision probability. They modify the Deep Hashing Networks approach (Zhu et al., 2016) with a new loss function: rather than a sigmoid-based loss, they argue that a loss based on angular similarity and SimHash is better, and specifically use the probability of SimHash collisions as the loss. They experimentally verify their method on synthetic data from a Stochastic Block Model distribution, image data (CIFAR-10 and ImageNet), and text data (OSCAR), showing improvements over related methods.
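An illustrative sketch of the core quantity behind the SimHash-based objective described in this summary: for vectors u and v with angle theta, a single random-hyperplane (SimHash) bit collides with probability 1 - theta/pi, so similar pairs can be trained to increase this probability and dissimilar pairs to decrease it. The exact loss used in the paper may differ; the pairwise cross-entropy form below is an assumption.

```python
import numpy as np

def simhash_collision_prob(u, v, eps=1e-7):
    """Per-bit SimHash collision probability, 1 - theta/pi, for row-wise pairs."""
    cos = (u * v).sum(-1) / (np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
    theta = np.arccos(np.clip(cos, -1 + eps, 1 - eps))
    return 1.0 - theta / np.pi

def pairwise_loss(u, v, similar):
    """Cross-entropy between the pair label and the per-bit collision probability."""
    p = simhash_collision_prob(u, v)
    return -np.mean(similar * np.log(p) + (1 - similar) * np.log(1 - p))

u, v = np.random.randn(8, 64), np.random.randn(8, 64)
print(pairwise_loss(u, v, similar=np.random.randint(0, 2, 8)))
```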
SP:becb496310e88c1e2e7d03131093b9ebcf075c1d
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
1 INTRODUCTION . When models are tested on distributions that are different from the training distribution , they typically suffer large drops in performance ( Blitzer and Pereira , 2007 ; Szegedy et al. , 2014 ; Jia and Liang , 2017 ; AlBadawy et al. , 2018 ; Hendrycks et al. , 2019a ) . For example , in remote sensing , central tasks include predicting poverty , crop type , and land cover from satellite imagery for downstream humanitarian , policy , and environmental applications ( Xie et al. , 2016 ; Jean et al. , 2016 ; Wang et al. , 2020 ; Rußwurm et al. , 2020 ) . In some developing African countries , labels are scarce due to the lack of economic resources to deploy human workers to conduct expensive surveys ( Jean et al. , 2016 ) . To make accurate predictions in these countries , we must extrapolate to out-of-distribution ( OOD ) examples across different geographic terrains and political borders . We consider a semi-supervised setting with few in-distribution labeled examples and many unlabeled examples from both in- and out-of-distribution ( e.g. , global satellite imagery ) . While labels are scarce , auxiliary information is often cheaply available for every input and may provide some signal for the missing labels . Auxiliary information can come from additional data sources ( e.g. , climate data from other satellites ) or derived from the original input ( e.g. , background or non-visible spectrum image channels ) . This auxiliary information is often discarded or not leveraged , and how to best use them is unclear . One way is to use them directly as input features ( aux-inputs ) ; another is to treat them as prediction outputs for an auxiliary task ( aux-outputs ) in pre-training . Which approach leads to better in-distribution or OOD performance ? Aux-inputs provide more features to potentially improve in-distribution performance , and one may hope that this also improves OOD performance . Indeed , previous results on standard datasets show that improvements in in-distribution accuracy correlate with improvements in OOD accuracy ( Recht et al. , 2019 ; Taori et al. , 2020 ; Xie et al. , 2020 ; Santurkar et al. , 2020 ) . However , in this paper we find that aux-inputs can introduce more spurious correlations with the labels : as a result , while aux-inputs often improve in-distribution accuracy , they can worsen OOD accuracy . We give examples of this trend on CelebA ( Liu et al. , 2015 ) and real-world satellite datasets in Sections 5.2 and 5.3 . Conversely , aux-output methods such as pre-training may improve OOD performance through auxiliary supervision ( Caruana , 1997 ; Weiss et al. , 2016 ; Hendrycks et al. , 2019a ) . Hendrycks et al . ∗Equal contribution . 𝑥 𝑧 𝑤 𝑦 𝑢 𝐵∗ 𝐴∗ 𝐶∗ 𝜃 '' 𝜃 # Figure 2 : Graphical model for our theoretical setting : prediction task with input x , target y , and auxiliary information z , which is related to y through the latent variable w and latent noise u . ( 2019a ) show that pre-training on ImageNet can improve adversarial robustness , and Hendrycks et al . ( 2019b ) show that auxiliary self-supervision tasks can improve robustness to synthetic corruptions . In this paper , we find that while aux-outputs improve OOD accuracy , the in-distribution accuracy is worse than with aux-inputs . Thus , we elucidate a tradeoff between in- and out-of-distribution accuracy that occurs when using auxiliary information as inputs or outputs . 
To theoretically study how to best use auxiliary information , we extend the multi-task linear regression setting ( Du et al. , 2020 ; Tripuraneni et al. , 2020 ) to allow for distribution shifts . We show that auxiliary information helps in-distribution error by providing useful features for predicting the target , but the relationship between the aux-inputs and the target can shift significantly OOD , worsening the OOD error . In contrast , the aux-outputs model first pre-trains on unlabeled data to learn a lower-dimensional representation and then solves the target task in the lower-dimensional space . We prove that the aux-outputs model improves robustness to arbitrary covariate shift compared to not using auxiliary information . Can we do better than using auxiliary information as inputs or outputs alone ? We answer affirmatively by proposing the In-N-Out algorithm to combine the benefits of auxiliary inputs and outputs ( Figure 1 ) . In-N-Out first uses an aux-inputs model , which has good in-distribution accuracy , to pseudolabel in-distribution unlabeled data . It then pre-trains a model using aux-outputs and finally fine-tunes this model on the larger training set consisting of labeled and pseudolabeled data . We prove that In-N-Out , which combines self-training and pre-training , further improves both in-distribution and OOD error over the aux-outputs model . We show empirical results on CelebA and two remote sensing tasks ( land cover and cropland prediction ) that parallel the theory . On all datasets , In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over aux-inputs or aux-outputs alone and improves 1–2 % in-distribution , 2–3 % OOD over not using auxiliary information on remote sensing tasks . Ablations of In-N-Out show that In-N-Out achieves similar improvements over pre-training or self-training alone ( up to 5 % in-distribution , 1–2 % OOD on remote sensing tasks ) . We also find that using OOD ( rather than in-distribution ) unlabeled examples for pre-training is crucial for OOD improvements . 2 SETUP . Let x∈Rd be the input ( e.g. , a satellite image ) , y ∈R be the target ( e.g. , crop type ) , and z ∈RT be the cheaply obtained auxiliary information either from additional sources ( e.g. , climate information ) or derived from the original data ( e.g. , background ) . Training data . Let Pid and Pood denote the underlying distribution of ( x , y , z ) triples in-distribution and out-of-distribution , respectively . The training data consists of ( i ) in-distribution labeled data { ( xi , yi , zi ) } ni=1 ∼ Pid , ( ii ) in-distribution unlabeled data { ( xidi , zidi ) } mid i=1 ∼ Pid , and ( iii ) out-of-distribution unlabeled data { ( xoodi , zoodi ) } mood i=1 ∼Pood . Goal and risk metrics . Our goal is to learn a model from input and auxiliary information to the target , f : Rd×RT →R . For a loss function  , the in-distribution population risk of the model f is Rid ( f ) =Ex , y , z∼Pid [  ( f ( x , z ) , y ) ] , and its OOD population risk isRood ( f ) =Ex , y , z∼Pood [  ( f ( x , z ) , y ) ] . 2.1 MODELS . We consider three common ways to use the auxiliary information ( z ) to learn a model . Baseline . The baseline minimizes the empirical risk on labeled data while ignoring the auxiliary information ( accomplished by setting z to 0 ) : f̂bs =argmin f 1 n n∑ i=1  ( f ( xi,0 ) , yi ) . ( 1 ) Aux-inputs . 
The aux-inputs model minimizes the empirical risk on labeled data while using the auxiliary information as features : f̂in =argmin f 1 n n∑ i=1  ( f ( xi , zi ) , yi ) . ( 2 ) Aux-outputs . The aux-outputs model leverages the auxiliary information z by using it as the prediction target of an auxiliary task , in hopes that there is a low-dimensional feature representation that is common to predicting both z and y . Training the aux-outputs model consists of two steps : In the pre-training step , we use all the unlabeled data to learn a shared feature representation . Let h : Rd→Rk denote a feature map and gz-out : Rk→RT denote a mapping from feature representation to the auxiliary outputs . Let  aux denote the loss function for the auxiliary information . We define the empirical risk of h and gz-out as : R̂pre ( h , gz-out ) = 1 mid+mood ( mid∑ i=1  aux ( gz-out ( h ( x id i ) ) , z id i ) + mood∑ i=1  aux ( gz-out ( h ( x ood i ) ) , z ood i ) ) . ( 3 ) The estimate of the feature map is ĥout =argminhmingz-outR̂pre ( h , gz-out ) . In the transfer step , the model uses the pre-trained feature map ĥout and the labeled data to learn the mapping gy-out : Rk→R from feature representation to target y . We define the transfer empirical risk as : R̂trans ( ĥout , gy-out ) = 1 n n∑ i=1  ( gy-out ( ĥout ( xi ) ) , yi ) ( 4 ) The estimate of the target mapping is ĝy-out = argmingy-out R̂trans ( ĥout , gy-out ) . The final aux-outputs model is f̂out ( x , z ) = ĝy-out ( ĥout ( x ) ) . ( 5 ) Like the baseline model , the aux-outputs model ignores the auxiliary information for prediction . 3 THEORETICAL ANALYSIS OF AUX-INPUTS AND AUX-OUTPUTS MODELS . We now analyze the baseline , aux-inputs , and aux-outputs models introduced in Section 2 . Our setup extends a linear regression setting commonly used for analyzing multi-task problems ( Du et al. , 2020 ; Tripuraneni et al. , 2020 ) . Setup . See Figure 2 for the graphical model . Letw=B ? x∈Rk be a low-dimensional latent feature ( k≤d ) shared between auxiliary information z and the target y . Let u∈Rm denote unobserved latent variables not captured in x . We assume z and y are linear functions of u andw : y=θ > ww+θ > u u+ , ( 6 ) z=A ? w+C ? u , ( 7 ) where ∼ P denotes noise with mean 0 and variance σ2 . As in Du et al . ( 2020 ) , we assume the dimension of the auxiliary information T is greater than the feature dimension k , that is T ≥k , and thatA ? , B ? andC ? have full rank ( rank k ) . We also assume T ≥m , wherem is the dimension of u . Data . Let Px and Pu denote the distribution of x and u in-distribution ( ID ) , and let P ′x , P ′u denote the distribution x and uOOD . We assume x and u are independent , have distributions with bounded density everywhere , and have invertible covariance matrices . We assume the mean of u is zero in- and out-of-distribution1 . We assume we have n≥m+d in-distribution labeled training examples and unlimited access to unlabeled data both ID and OOD , a common assumption in unsupervised domain adaptation theory ( Sugiyama et al. , 2007 ; Kumar et al. , 2020 ; Raghunathan et al. , 2020 ) . Loss metrics . We use the squared loss for the target and auxiliary losses :  ( ŷ , y ) = ( y− ŷ ) 2 and  aux ( z , z ′ ) =‖z−z′‖22 . Models . We assume all model families ( f , h , gz-out , gy-out ) in Section 2 are linear . Let S= ( A ? , B ? , C ? , θw , θu , Px , Pu ) denote a problem setting which satisfies all the above assumptions . 3.1 AUXILIARY INPUTS HELP IN-DISTRIBUTION , BUT CAN HURT OOD . 
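To make the three estimators concrete, here is a minimal sketch in the linear setting of Section 3 (squared loss; linear $f$, $h$, $g$). The helper names and the SVD-based rank-$k$ pre-training step are simplifications assumed for illustration, not the paper's exact procedure.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares; returns the coefficient vector/matrix."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def baseline(X, y):
    return fit_linear(X, y)                        # f_bs(x) = x @ theta

def aux_inputs(X, Z, y):
    return fit_linear(np.hstack([X, Z]), y)        # f_in(x, z) = [x, z] @ theta

def aux_outputs(X_unlab, Z_unlab, X, y, k):
    """Pre-train a rank-k linear feature map h(x) = x @ B by regressing z on x over
    all unlabeled data, then transfer: regress y on the k-dimensional features."""
    W = fit_linear(X_unlab, Z_unlab)               # d x T map minimizing ||X W - Z||^2
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    B = U[:, :k]                                   # shared representation directions
    g = fit_linear(X @ B, y)
    return B, g                                    # f_out(x) = (x @ B) @ g
```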
We first show that the aux-inputs model ( 2 ) performs better than the baseline model ( 1 ) in-distribution . Intuitively , the target y depends on both the inputs x ( throughw ) and latent variable u ( Figure 2 ) . The baseline model only uses x to predict y ; thus it can not capture the variation in y due to u . On the other hand , the aux-inputs model uses x and z to predict y . Since z is a function of x ( through w ) and u , u can be recovered from x and z by inverting this relation . Note that u is unobserved but implicitly recovered . The aux-inputs model can then combine u and x to predict y better . Let σ2u=Eu∼Pu [ ( θ > u u ) 2 ] denote the ( in-distribution ) variance of y due to the latent variables u . The following proposition shows that if σ2u > 0 then with enough training examples the aux-inputs model has lower in-distribution population risk than the baseline model.2 Proposition 1 . For all problem settings S , P , assuming regularity conditions ( bounded x , u , sub-Gaussian noise , and T =m ) , and σ2u > 0 , for all δ > 0 , there existsN such that for n≥N number of training points , with probability at least 1−δ over the training examples , the aux-inputs model improves over the baseline : Rid ( f̂in ) < Rid ( f̂bs ) . ( 8 ) Although using z as input leads to better in-distribution performance , we show that the aux-inputs model can perform worse than the baseline model OOD for any number of training examples . Intuitively , the aux-inputs model uses z , which can be unreliable OOD because z depends on u and u can shift OOD . In more detail , the aux-inputs model learns to predict ŷ= θ̂ > x , inx+θ̂ > z , inz , where the true output y=θ > x x+θ > z z , and θ̂z , in is an approximation to the true parameter θz , that has some error . Out-of-distribution u and hence z can have very high variance , which would magnify ( θ̂z , in−θz ) > z and lead to bad predictions . Example 1 . There exists a problem setting S , P , such that for every n , there is some test distribution P ′x , P ′ u with : E [ Rood ( f̂in ) ] > E [ Rood ( f̂bs ) ] ( 9 )
This paper introduces a new method for leveraging auxiliary information and unlabeled data to improve out-of-distribution model performance. Theoretically, in a linear model with latent variables, they demonstrate that using auxiliary data as inputs helps in-distribution test error but can hurt out-of-distribution error, while using auxiliary data to pretrain a "good" representation always improves out-of-distribution error. The proposed method uses the auxiliary data to learn an initial model, which generates pseudolabels used to fine-tune the pretrained model.
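A sketch of the In-N-Out pipeline in the same linear setting, reusing the hypothetical `fit_linear`, `aux_inputs`, and `aux_outputs` helpers from the earlier snippet; the step ordering follows the description above (pseudolabel with the aux-inputs model, pre-train with aux-outputs, fine-tune on labeled plus pseudolabeled data), while the details are illustrative rather than the authors' exact implementation.

```python
import numpy as np

def in_n_out(X, Z, y, X_id_unlab, Z_id_unlab, X_ood_unlab, Z_ood_unlab, k):
    """Illustrative In-N-Out pipeline for linear models."""
    # 1) Aux-inputs model (good in-distribution) pseudolabels the ID unlabeled data.
    theta_in = aux_inputs(X, Z, y)
    y_pseudo = np.hstack([X_id_unlab, Z_id_unlab]) @ theta_in
    # 2) Pre-train the shared representation on all unlabeled data (aux-outputs).
    X_unlab = np.vstack([X_id_unlab, X_ood_unlab])
    Z_unlab = np.vstack([Z_id_unlab, Z_ood_unlab])
    B, _ = aux_outputs(X_unlab, Z_unlab, X, y, k)
    # 3) Fine-tune the target head on labeled + pseudolabeled data.
    X_all = np.vstack([X, X_id_unlab])
    y_all = np.concatenate([y, y_pseudo])
    g = fit_linear(X_all @ B, y_all)
    return B, g                                    # prediction: (x @ B) @ g
```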
SP:7611ee6b9dfabf7ec6a65da58cb6e3892705e1c9
Variance Reduction in Hierarchical Variational Autoencoders
1 INTRODUCTION . Variational autoencoders ( VAE ) [ 10 ] are a popular latent variable model for unsupervised learning that simplifies learning by the introduction of a learned approximate posterior . Given data x and latent variables z , we specify the conditional distribution p ( x|z ) by parameterizing the distribution parameters by a neural network . Since it is difficult to learn such a model directly , another conditional distribution q ( z|x ) is introduced to approximate the posterior distribution . During learning the goal is to maximize the evidence lower bound ( ELBO ) , which lower bounds the log likelihood , log p ( x ) ≥ Eq ( z|x ) [ log p ( x|z ) +log p ( z ) − log q ( z|x ) ] . In their simplest form , the generative model p ( x|z ) and the approximate posterior q ( z|x ) are Gaussian distributions optimized in unison . A natural way to increase the modeling capacity of VAE is to incorporate a hierarchy of stochastic variables . Such models , however , turn out to be difficult to train and higher levels in the hierarchy tend to remain independent of input data – a problem termed posterior collapse . Posterior collapse in VAEs manifests itself by the latent distribution tending to fall back to the prior . With hierarchical VAEs the effect is found to be more pronounced in the top layers farther from the output . For the purpose of the paper and for clarity of exposition , we focus on the simplest extension of hierarchical variational autoencoders where stochastic layers are stacked serially on top of each other [ 2 , 21 ] , p ( x , z ) = p ( x|z1 ) p ( zL ) ∏L−1 i=1 p ( zi|zi+1 ) and q ( z|x ) = q ( z1|x ) ∏L−1 i=1 q ( zi+1|zi ) . The intermediate distributions in this model are commonly taken to be Gaussian distributions parameterized by neural network functions , so that p ( zi|zi+1 ) = N ( zi|µ ( zi+1 ) , σ ( zi+1 ) ) , where µ ( z ) , σ ( z ) are neural networks computing the mean and variance of the Gaussian distribution . We refer to them as vanilla hierarchical variational autoencoders . For each stochastic layer in this model there is a corresponding KL divergence term in the objective given by E [ KL ( q ( zi|zi−1 ) ||p ( zi|zi+1 ) ] . ( 1 ) As described later , expression 1 can be easily decomposed to show an explicit dependence on the variance of the parameterizing functions µ ( zi ) , σ ( zi ) of the intermediate Gaussian distribution . We further show the KL divergence term to be closely related to the harmonics of the parameterizing function . For complex parameterizing functions the KL divergence term has large high frequency components ( and thus high variance ) which leads to unstable training causing posterior collapse . Building on this , we suggest a method for training the simplest hierarchical extension of VAE that avoids the problem of posterior collapse without introducing further architectural complexity [ 13 , 21 ] . Given a hierarchical variational autoencoder , our training method incorporates a smoothing parameter ( we denote this by ρ ) in the neural network functions used to parameterize the intermediate latent distributions . The smoothing is done such that expected values are preserved , the higher frequencies are attenuated and the variance is reduced . Next , the gradients computed with the smooth functions are used to train the original hierarchical variational autoencoder . For the construction of the smoothing transformations for VAEs with Gaussian latent spaces we make use of ideas from the analysis of Gaussian spaces . 
We analyze the stochastic functions in vanilla hierarchical VAEs as Hermite expansions on Gaussian spaces [ 9 ] . The Ornstein-Uhlenbeck ( OU ) semigroup from Gaussian analysis is a set of operators that we show to smoothly interpolate between a random variable and its expectation . The OU semigroup provides the appropriate set of smoothing operators which enable us to control variance and avoid posterior collapse . We further show that by smoothing the intermediate parameterizing functions µ ( z ) , σ ( z ) in the proposed manner , the KL divergence of the top layer sees a sudden sharp drop toward zero as the amount of smoothing is decreased . This behaviour is retained when we evaluate the KL divergence on the original unsmoothed variational autoencoder model . This behaviour is reminiscent of phase transitions from statistical mechanics and we adopt the same terminology to describe the phenomenon . Our experiments suggest that the phenomenon is general across datasets and commonly used architectures . Furthermore , the critical value of the smoothing parameter ρ at which the transition occurs is fixed for a given model configuration and varies with stochastic depth and width . We make the following contributions . First , we establish a connection between higher harmonics , variance , posterior collapse and phase transitions in hierarchical VAEs . Second , we show that by using the Ornstein-Uhlenbeck semigroup of operators on the generative stochastic functions in VAEs we reduce higher frequencies and consequently variance to mitigate posterior collpase . We corroborate our findings experimentally and further obtain in CIFAR-10 likelihoods competitive with more complex architectural solutions alongside a reduction in model size . We refer to the proposed family of models as Hermite variational autoencoders ( HVAE ) . 2 HERMITE VARIATIONAL AUTOENCODERS . 2.1 ANALYSIS ON GAUSSIAN SPACES . The analysis of Gaussian spaces studies functions of Gaussian random variables . These are realvalued functions defined on Rn endowed with the Gaussian measure . Many functions employed in machine learning are instances of such functions : decoders for variational autoencoders , as is the case in this work , and generators for generative adversarial networks being two examples . By way of summary , the main facts we use from this field are that a function on a Gaussian space can be expanded in an orthonormal basis , where the basis functions are the Hermite polynomials . This orthonormal expansion is akin to a Fourier transform in this space . The second fact is that the coefficients of such an expansion can be modified in a way to reduce the variance of the expanded function by applying an operator from the Ornstein-Uhlenbeck semigroup of operators . Next , we give a brief introduction . For further details on Gaussian analysis we refer to [ 9 ] . Gaussian Spaces : Let L2 ( Rn , γ ) be the space of square integrable functions , f : Rn → R , with the Gaussian measure γ ( z ) = ∏ iN ( zi|0 , 1 ) . Given functions f , g in this space , the inner product is given by 〈f , g〉 = Eγ ( z ) [ f ( z ) g ( z ) ] . Basis functions for L2 ( R , γ ) : Taking the space of univariate functions L2 ( R , γ ) , it is known that the polynomial functions φi ( z ) = zi are a basis for this space . By a process of orthonormalization we obtain the normalized Hermite polynomial basis for this space . The first few Hermite polynomials are the following : h0 ( z ) = 1 , h1 ( z ) = z , h2 = z 2−1√ 2 , . . .. 
Basis functions for $L^2(\mathbb{R}^n, \gamma)$: Letting $\alpha \in \mathbb{N}^n$ be a multi-index, the basis functions for $L^2(\mathbb{R}^n, \gamma)$ are obtained by multiplying the univariate basis functions across dimensions, $h_\alpha(z) = \prod_i h_{\alpha_i}(z_i)$. Hermite expansion: A function in $L^2(\mathbb{R}^n, \gamma)$ can be expressed as $f = \sum_{\alpha \in \mathbb{N}^n} \hat{f}(\alpha)\, h_\alpha$, where the $\hat{f}(\alpha)$ are the Hermite coefficients of $f$, computed as $\hat{f}(\alpha) = \langle f, h_\alpha \rangle = \mathbb{E}_{\gamma(z)}[f(z)\, h_\alpha(z)]$. Plancherel's theorem is the following relation between the norms of $f$ and $\hat{f}$, which follows from the orthonormality of the basis functions: $\langle f, f \rangle = \sum_\alpha \hat{f}(\alpha)^2$. (2) Ornstein-Uhlenbeck (OU) semigroup: Given a parameter $\rho \in [0, 1]$ and a Gaussian variable $z$, we construct a correlated variable $z'$ as $z' = \rho z + \sqrt{1 - \rho^2}\, z_\omega$, where $z_\omega \sim N(0, 1)$ is a random standard Gaussian sample. The OU semigroup is a set of operators, denoted $U_\rho$ and parameterized by $\rho \in [0, 1]$. The action of $U_\rho$ on $f$ at $z$ is to average the function values at correlated points $z'$ around $z$: $U_\rho f(z) = \mathbb{E}_{z'|z}[f(z')] = \mathbb{E}_{z_\omega}[f(\rho z + \sqrt{1 - \rho^2}\, z_\omega)]$. (3) The action of the $U_\rho$ operators on the Hermite expansion of $f(z)$ is to decay the Hermite coefficients according to their degree, $U_\rho f(z) = \sum_{\alpha \in \mathbb{N}^n} \rho^{|\alpha|} \hat{f}(\alpha)\, h_\alpha$, where $|\alpha| = \sum_i \alpha_i$. If $z$ is reparameterized as $z = \sigma \epsilon_1 + \mu$, the correlated OU sample is given by $z' = \sigma(\rho \epsilon_1 + \sqrt{1 - \rho^2}\, \epsilon_2) + \mu$, where $\epsilon_1, \epsilon_2$ are standard Gaussian variables. This can also be expressed in terms of $z$ as $z' = \rho z + (1 - \rho)\mu + \sigma\sqrt{1 - \rho^2}\, \epsilon_2$. (4) 2.2 HERMITE EXPANSIONS FOR VAES. Our proposed method is a new training procedure for the vanilla hierarchical variational autoencoder that builds on Hermite expansions of Gaussian functions and properties of the OU semigroup. In the context of hierarchical variational autoencoders, the Gaussian function $f$ is the generative model $\mu_i(z_{i+1})$ and $\sigma_i(z_{i+1})$ that receives the latent variable $z_{i+1}$ as input and returns the Gaussian latent variable of the next layer, $z_i \sim N(\mu_i(z_{i+1}), \sigma_i(z_{i+1}))$. We make use of the following properties of the OU semigroup to construct Gaussian functions of lower variance. The first property we employ is that the OU semigroup of operators interpolates between a random variable ($\rho = 1$) and its expectation ($\rho = 0$), where the parameter $\rho$ controls the extent of the interpolation. Proposition 1: The operators $U_\rho$ retain the expected value of the operated function, $\mathbb{E}[f] = \mathbb{E}[U_\rho f]$. Proposition 2: The operators $U_\rho$ interpolate between a random variable and its expectation. In particular, as $\rho \to 1$, $U_\rho f = f$, and as $\rho \to 0$, $U_\rho f = \mathbb{E}[f]$. The second property we exploit is that the new random variable $U_\rho f(z)$ has lower variance than the original variable $f(z)$ and is in general a smoother function than $f(z)$. The smoothing properties of the operator $U_\rho$ can be understood by examining the Hermite expansion of $U_\rho f$. First we note that we can express the expectation and variance of a function $f$ in terms of its Hermite coefficients, specifically $\mathbb{E}[f] = \hat{f}(0)$ and $\mathrm{Var}(f) = \mathbb{E}[(f - \mathbb{E}[f])^2] = \mathbb{E}[(f - \hat{f}(0))^2] = \sum_{\alpha : |\alpha| > 0} \hat{f}(\alpha)^2$, which follows from Plancherel's theorem (equation 2). Replacing $f$ with $U_\rho f$ and using the Hermite expansion of $U_\rho f$ from equation 3, the mean remains the same, $\mathbb{E}[U_\rho f] = \rho^0 \hat{f}(0) = \hat{f}(0)$, and the variance reduces as $\mathrm{Var}[U_\rho f] = \mathbb{E}[(U_\rho f - \hat{f}(0))^2] = \sum_{\alpha : |\alpha| > 0} \rho^{2|\alpha|} \hat{f}(\alpha)^2$. (5)
The last equation indicates that the contribution to the variance by $\hat{f}(\alpha)$ decays by a factor $\rho^{2|\alpha|}$ when $\rho \in (0, 1)$. This, in turn, leads to a decrease in variance. Algorithm. In essence, Hermite variational autoencoders are similar to variational autoencoders, save for applying the OU semigroup to the latent distributions $p(z_i|z_{i+1})$ that comprise the generator, and only when computing gradients during training. Specifically, we apply these operators to the functions parameterizing the mean and variance of the latent Gaussian distributions. For each distribution $p(z_i|z_{i+1})$ we substitute $N(z_i|\mu_i(z_{i+1}), \sigma_i(z_{i+1}))$ with $N(z_i|U_\rho\mu_i(z_{i+1}), U_\rho\sigma_i(z_{i+1}))$. The new functions result in latent distributions with parameters that have lower variance but the same expected value relative to the conditional input latent distribution. In an alternative parameterization we apply the OU semigroup to the ratio of the mean and variance functions, $U_\rho \frac{\mu_i}{\sigma_i}(z_{i+1})$ (see the next section for a justification). The OU semigroup operators can also be applied to the approximate posterior functions, but we observe little benefit. In practice, we compute $U_\rho\mu_i(z_{i+1})$ and $U_\rho\sigma_i(z_{i+1})$ by Monte Carlo averaging. Since for a function $f$, $U_\rho f = \mathbb{E}_{z'|z}[f(z')]$, where the $z'$ are the correlated samples, we estimate the expectation by Monte Carlo averaging over $z'$. Experiments show that 5 to 10 samples suffice. It is important to emphasize that the substitution of the lower-variance functions for parameterizing the distributions is only done when computing gradients during training. All evaluations, training or test, are still done on the original hierarchical variational autoencoder model. Thus, the new training procedure has an additional computational cost only for the intermediate distributions in the generator, proportional to the number of correlated samples during training. Complexity. In the Hermite VAE the OU sampling operation is only applied in the intermediate stochastic layers of the generator network. In particular, it is not applied in the inference network or in the last layer of the decoder. The fact that OU sampling is not applied in the final stochastic layer computing $p(x|z_1)$ is especially important for deep VAEs for images, since feature maps are upsampled to match image dimensions in this layer. Thus, for 5 OU samples, the added computational and activation-memory complexity is significantly less than 5 times the total cost of the base VAE model, and is 5 times the cost of only the higher decoder layers of the base model. An empirical comparison of maximum memory usage of various models can be found in Table 6.
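A minimal PyTorch sketch of the Monte Carlo estimate of $U_\rho \mu_i$ and $U_\rho \sigma_i$ using the correlated samples of equation (4); the function and attribute names are hypothetical, and the snippet only illustrates the averaging step, not a full HVAE training loop.

```python
import torch

def ou_smooth(f, z, mu, sigma, rho, num_samples=5):
    """Monte Carlo estimate of (U_rho f)(z), where the conditioning latent was sampled
    as z = mu + sigma * eps1.  Correlated samples follow equation (4):
    z' = rho * z + (1 - rho) * mu + sigma * sqrt(1 - rho^2) * eps2."""
    vals = []
    for _ in range(num_samples):
        eps2 = torch.randn_like(z)
        z_corr = rho * z + (1.0 - rho) * mu + sigma * (1.0 - rho ** 2) ** 0.5 * eps2
        vals.append(f(z_corr))
    return torch.stack(vals).mean(dim=0)

# Hypothetical use inside one generator layer, for the gradient computation only:
# mu_smooth    = ou_smooth(layer.mu_net,    z_next, mu_next, sigma_next, rho)
# sigma_smooth = ou_smooth(layer.sigma_net, z_next, mu_next, sigma_next, rho)
```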
This paper studies the training of deep hierarchical VAEs and focuses on the problem of posterior collapse. It is argued that reducing the variance of the gradient estimate may help to overcome posterior collapse. The authors focus on reducing the variance of the functions parameterizing the intermediate latent distributions of the generative model at each layer, using a layer-wise smoothing operator based on the Ornstein-Uhlenbeck semigroup (parameterized by a parameter $\rho$). The operator requires additional Monte Carlo samples. The authors provide an analytical analysis of bias and variance. Lastly, they train multiple VAE models, measure posterior collapse, and observe a phase-transition behaviour depending on the parameter $\rho$.
SP:b6dd62914f7464efb601c6d9f8a4d35e047447d5
Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
The paper proposes an approximation method, called NEMO (Normalized maximum likelihood Estimation for Model-based Optimization), to compute the conditional normalized maximum likelihood of a query data point as a way to quantify the uncertainty of a forward prediction model in offline model-based optimization problems. The main idea is to construct a conditional NML (CNML) distribution that maps high-dimensional inputs to a distribution over output variables. In addition, the paper provides a theoretical motivation: estimating the true function with the CNML is close to the best possible expert even if the test label is chosen adversarially, which makes it harder for the optimizer to exploit errors in the model. Using this CNML with gradient-ascent-based optimization on three offline optimization benchmark datasets (Superconductor, GFP, MoleculeActivity), NEMO outperforms the other four baselines on the Superconductor dataset by roughly 1.4x to 1.7x, and generates comparable results to those baselines on the GFP and MoleculeActivity datasets.
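For intuition, here is a naive sketch of the conditional NML computation the summary describes, over a discretized label grid; `fit` and `loglik` are assumed helper routines, and NEMO itself replaces this brute-force refitting with tractable approximations rather than running it literally.

```python
import numpy as np

def cnml_distribution(fit, loglik, X_train, y_train, x_query, y_grid):
    """Conditional NML over a discretized label grid: for each candidate label y,
    refit the model on the training data augmented with (x_query, y), record the
    likelihood the refit model assigns to y at x_query, then normalize over y.

    fit(X, y)            -> model parameters        (assumed helper)
    loglik(params, x, y) -> log-likelihood of y at x (assumed helper)
    """
    scores = []
    for y in y_grid:
        X_aug = np.vstack([X_train, x_query[None, :]])
        y_aug = np.append(y_train, y)
        params = fit(X_aug, y_aug)
        scores.append(loglik(params, x_query, y))
    scores = np.array(scores)
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()
```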
SP:2d25eeb93ba90f9c4064bf794f9a132a6859c8e4
Unsupervised Discovery of Interpretable Latent Manipulations in Language VAEs
This paper proposes a simple approach to discover interpretable latent manipulations in trained text VAEs. The method essentially involves performing PCA on the latent representations to find directions that maximize variance. The authors argue that this results in more interpretable directions. The method is applied on top of a VAE model (OPTIMUS), and the authors argue that different directions discovered by PCA correspond to interpretable concepts.
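A minimal sketch of the PCA step, assuming the latent codes (e.g., posterior means) of a trained text VAE such as OPTIMUS have already been collected into an array; the helper names and the choice to manipulate by adding a scaled direction are illustrative.

```python
import numpy as np

def latent_pca_directions(latents, num_directions=10):
    """Candidate manipulation directions = top principal components of the latent codes."""
    mu = latents.mean(axis=0)
    _, _, Vt = np.linalg.svd(latents - mu, full_matrices=False)
    return mu, Vt[:num_directions]          # each row of Vt is a unit direction

def manipulate(z, direction, scale):
    """Move a latent code along a discovered direction before decoding."""
    return z + scale * direction
```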
SP:ce75f565c3c17363695c9e39f28b49a66e3731b8
Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures
√ n dependence . We demonstrate the usefulness of our tail bounds by showing that they lead to estimates of the test loss achievable with several neural network architectures trained on MNIST and Fashion-MNIST that match the state-of-the-art bounds available in the literature . 1 INTRODUCTION . In recent years , there has been a surge of interest in the use of information-theoretic techniques for bounding the loss of learning algorithms . While the first results of this flavor can be traced to the probably approximately correct ( PAC ) -Bayesian approach ( McAllester , 1998 ; Catoni , 2007 ) ( see also ( Guedj , 2019 ) for a recent review ) , the connection between loss bounds and classical information-theoretic measures was made explicit in the works of Russo & Zou ( 2016 ) and Xu & Raginsky ( 2017 ) , where bounds on the average population loss were derived in terms of the mutual information between the training data and the output hypothesis . Since then , these average loss bounds have been tightened ( Bu et al. , 2019 ; Asadi et al. , 2018 ; Negrea et al. , 2019 ) . Furthermore , the information-theoretic framework has also been successfully applied to derive tail probability bounds on the population loss ( Bassily et al. , 2018 ; Esposito et al. , 2019 ; Hellström & Durisi , 2020a ) . Of particular relevance to the present paper is the random-subset setting , introduced by Steinke & Zakynthinou ( 2020 ) and further studied in ( Hellström & Durisi , 2020b ; Haghifam et al. , 2020 ) . In this setting , a random vector S is used to select n training samples Z ( S ) from a larger set Z̃ of 2n samples . Then , bounds on the average population loss are derived in terms of the conditional mutual information ( CMI ) I ( W ; S|Z̃ ) between the chosen hypothesis W and the random vector S given the set Z̃ . The bounds obtained by Xu & Raginsky ( 2017 ) depend on the mutual information I ( W ; Z ) , a quantity that can be unbounded if W reveals too much about the training set Z . In contrast , bounds for the random-subset setting are always finite , since I ( W ; S|Z̃ ) is never larger than n bits . Most information-theoretic population loss bounds mentioned thus far are given by the training loss plus a term with a √ IM ( PWZ ) /n-dependence , where IM ( PWZ ) denotes an information measure , such as mutual information or maximal leakage ( Issa et al. , 2020 ) . Assuming that the information measure grows at most polylogarithmically with n , the convergence rate of the population loss to the training loss is Õ ( 1/ √ n ) , where the Õ-notation hides logarithmic factors . This is sometimes referred to as a slow rate . In the context of bounds on the excess risk , defined as the difference between the achieved population loss for a chosen hypothesis w and its infimum over the hypothesis class , it is known that slow rates are optimal for worst-case distributions and hypothesis classes ( Talagrand , 1994 ) . However , it is also known that under the assumption of realizability ( i.e. , the existence of a w in the hypothesis class such that the population loss LPZ ( w ) = 0 ) and when the hypothesis class is finite , the dependence on the sample size can be improved to Õ ( 1/n ) ( Vapnik , 1998 , Chapter 4 ) . This is referred to as a fast rate . Excess risk bounds with fast rates for randomized classifiers have also been derived , under certain additional conditions , for both bounded losses ( Van Erven et al. , 2015 ) and unbounded losses ( Grünwald & Mehta , 2020 ) . 
Notably , Steinke & Zakynthinou ( 2020 , Thm . 2 ( 3 ) ) derive a population loss bound whose dependence on n is I ( W ; S|Z̃ ) /n . The price for this improved dependence is that the training loss that is added to the n-dependent term is multiplied by a constant larger than 1 . Furthermore , ( Steinke & Zakynthinou , 2020 , Thm . 8 ) shows that if the Vapnik-Chervonenkis ( VC ) dimension of the hypothesis class is finite , there exists an empirical risk minimizer ( ERM ) whose CMI grows at most logarithmically with n. This implies that the CMI approach leads to fast-rate bounds in certain scenarios . However , the result in ( Steinke & Zakynthinou , 2020 , Thm . 2 ( 3 ) ) pertains only to the average population loss : no tail bounds on the population loss are provided . Throughout the paper , we will , with an abuse of terminology , refer to bounds with an n-dependence of the form IM ( PWZ ) /n as fast-rate bounds . Such bounds are also known as linear bounds ( Dziugaite et al. , 2020 ) . Note that the n-dependence of the information measure IM ( PWZ ) has to be at most polylogarithmic for such bounds to actually achieve a fast rate in the usual sense . An intriguing open problem in statistical learning is to find a theoretical justification for the capability of overparameterized neural networks ( NNs ) to achieve good generalization performance despite being able to memorize randomly labeled training data sets ( Zhang et al. , 2017 ) . As a consequence of this behavior , classical population loss bounds that hold uniformly over a given hypothesis class , such as VC bounds , are vacuous when applied to overparameterized NNs . This has stimulated recent efforts aimed at obtaining tighter population loss bounds that are algorithm-dependent or data-dependent . In the past few years , several studies have shown that promising bounds are attainable by using techniques from the PAC-Bayesian literature ( Dziugaite & Roy , 2017 ; Zhou et al. , 2019 ; Dziugaite et al. , 2020 ) . The PAC-Bayesian approach entails using the Kullback-Leibler ( KL ) divergence to compare the distribution on the weights of the NN induced by training to some reference distribution . These distributions are referred to as the posterior and the prior , respectively . Recently , Dziugaite et al . ( 2020 ) used data-dependent priors to obtain state-of-the-art bounds for LeNet-5 trained on MNIST and Fashion-MNIST . In their approach , the available data is used both for training the network and for choosing the prior . This leads to a bound that is tighter than previously available bounds . Furthermore , the bound can be further improved by minimizing the KL divergence between the posterior and the chosen prior during training . One drawback of the PAC-Bayesian approach is that it applies only to stochastic NNs , whose weights are randomly chosen each time the network is used , and not to deterministic NNs with fixed weights . Information-theoretic bounds have also been derived for iterative , noisy training algorithms such as stochastic gradient Langevin dynamics ( SGLD ) ( Bu et al. , 2019 ) . These bounds lead to nonvacuous estimates of the population loss of overparameterized NNs that are trained using SGLD through the use of data-dependent priors ( Negrea et al. , 2019 ) . However , these bounds do not apply to deterministic NNs , nor to standard stochastic gradient descent ( SGD ) training . Furthermore , the bounds pertain to the average population loss , and not to its tails . 
Although the techniques yielding these estimates can be adapted to the PAC-Bayesian setting , as discussed by Negrea et al . ( 2019 , App . I ) , the resulting bounds are generally loose . 1.1 CONTRIBUTIONS . In this paper , we extend the fast-rate average loss bound by Steinke & Zakynthinou ( 2020 ) to the PAC-Bayesian and the single-draw settings . We then use the resulting PAC-Bayesian and single-draw bounds to characterize the test loss of NNs used to classify images from the MNIST and FashionMNIST data sets . The single-draw bounds can be applied to deterministic NNs trained through SGD but with Gaussian noise added to the final weights , whereas the PAC-Bayesian bounds apply only to randomized neural networks , whose weights are drawn from a Gaussian distribution each time the network is used . For the same setup , we also evaluate the slow-rate PAC-Bayesian and single-draw bounds from ( Hellström & Durisi , 2020b ) . Our numerical experiments reveal that both the slow-rate bounds from ( Hellström & Durisi , 2020b ) and the newly derived fast-rate bounds are nonvacuous . Furthermore , for some settings , the fast-rate bounds presented in this paper are quantitatively stronger than the corresponding slow-rate ones from ( Hellström & Durisi , 2020b ) , and essentially match the best bounds available in the literature for SGD-trained NNs ( Dziugaite et al. , 2020 ) . 1.2 PRELIMINARIES . We now detail some notation and describe the random-subset setting introduced in ( Steinke & Zakynthinou , 2020 ) . Let Z be the instance space , W be the hypothesis space , and  : W ×Z → R+ be the loss function . Throughout the paper , we will assume that the range of  ( w , z ) is restricted to [ 0 , 1 ] for all w ∈ W and all z ∈ Z . A typical example of such a loss function is the classification error . In this setting , the sample Z consists of an example X ∈ X and a corresponding label Y ∈ Y . Then , the loss is given by  ( W , Z ) = 1 { fW ( X ) 6= Y } , where fW ( · ) is the map from X to Y induced by the hypothesis W . We note that , when applying our bounds to NNs , the function  ( · , · ) used to characterize the performance of the network does not necessarily need to coincide with the loss function used when training the NN . For instance , one could use the ( unbounded ) cross-entropy loss when training the NN , and apply the bounds for the scenario in which  ( · , · ) is the classification error . In the random-subset setting , 2n training samples Z̃ = ( Z̃1 , . . . , Z̃2n ) are available , with all entries of Z̃ being drawn independently from some distribution PZ onZ . However , only a randomly selected subset of cardinality n is actually used for training . Following ( Steinke & Zakynthinou , 2020 ) , we assume that the training dataZ ( S ) is selected as follows . Let S = ( S1 , . . . , Sn ) be an n-dimensional random vector , the elements of which are drawn independently from a Bern ( 1/2 ) distribution and are independent of Z̃ . Then , for i = 1 , . . . , n , the ith training sample in Z ( S ) is Zi ( Si ) = Z̃i+Sin . Thus , the binary variable Si determines whether the training set Z ( S ) will contain the sample Z̃i or the sample Z̃i+n . The selected training procedure , including the loss function used for training , will determine the conditional distribution PW |Z ( S ) on the hypothesis class given the training data . For a given W ∼ PW |Z ( S ) , we let LZ ( S ) ( W ) = 1n ∑n i=1  ( W , Zi ( Si ) ) denote the training loss . 
Furthermore, we let $\bar S$ denote the modulo-2 complement of $S$. Then $L_{Z(\bar S)}(W)$ can be interpreted as a test loss, since $W$ is conditionally independent of $Z(\bar S)$ given $Z(S)$. Finally, we note that the average over $(\tilde Z, S)$ of the test loss is the population loss $L_{P_Z}(W) = \mathbb{E}_{P_{\tilde Z S}}[L_{Z(\bar S)}(W)] = \mathbb{E}_{P_Z}[\ell(W, Z)]$. Our bounds will depend on several different information-theoretic quantities, which we shall introduce next. The information density $\imath(W, Z)$ between $W$ and $Z$ is defined as $\imath(W, Z) = \log \frac{dP_{WZ}}{dP_W P_Z}$, where $\frac{dP_{WZ}}{dP_W P_Z}$ is the Radon-Nikodym derivative of $P_{WZ}$ with respect to $P_W P_Z$. The information density is well-defined if $P_{WZ}$ is absolutely continuous with respect to $P_W P_Z$, denoted by $P_{WZ} \ll P_W P_Z$. The conditional information density $\imath(W, S | \tilde Z)$ between $W$ and $S$ given $\tilde Z$ is defined as $\imath(W, S | \tilde Z) = \log \frac{dP_{W \tilde Z S}}{dP_{W|\tilde Z} P_{\tilde Z S}}$, provided that $P_{W \tilde Z S} \ll P_{W|\tilde Z} P_{\tilde Z S}$. The mutual information can be obtained as $I(W; Z) = \mathbb{E}_{P_{WZ}}[\imath(W, Z)]$ and the conditional mutual information as $I(W; S | \tilde Z) = \mathbb{E}_{P_{W \tilde Z S}}[\imath(W, S | \tilde Z)]$. We will also need the KL divergences $D(P_{W|Z} \| P_W) = \mathbb{E}_{P_{W|Z}}[\imath(W, Z)]$ and $D(P_{W|\tilde Z S} \| P_{W|\tilde Z}) = \mathbb{E}_{P_{W|\tilde Z S}}[\imath(W, S | \tilde Z)]$. In practical applications, the marginal distribution $P_W$ is not available, since $P_Z$ is unknown. Furthermore, $P_{W|\tilde Z}$ is also difficult to compute, since marginalizing $P_S P_{W|\tilde Z S}$ over $S$ involves performing training $2^n$ times. Hence, bounds depending on $\imath(W, Z)$ or on $\imath(W, S | \tilde Z)$ cannot typically be evaluated. Therefore, it will be convenient to replace the information density $\imath(W, Z)$ with the proxy $\log \frac{dP_{WZ}}{dQ_W P_Z}$ and $\imath(W, S | \tilde Z)$ with $\log \frac{dP_{W \tilde Z S}}{dQ_{W|\tilde Z} P_{\tilde Z S}}$. Here, $Q_W$ and $Q_{W|\tilde Z}$ are suitably chosen auxiliary distributions (priors) that are used in place of the intractable, true marginals.
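To make the random-subset construction concrete, here is a small NumPy sketch of drawing $S$, forming the training set $Z(S)$ and its complement $Z(\bar S)$, and evaluating the corresponding training and test losses; `train_fn` and `loss_fn` are assumed placeholders for the learning algorithm and the $[0, 1]$-valued loss, not part of the paper.

```python
import numpy as np

def random_subset_split(z_tilde, rng):
    """Random-subset setting: z_tilde holds 2n samples; S_i ~ Bern(1/2) selects
    z_tilde[i + S_i * n] for training, and the complement acts as test data."""
    n = len(z_tilde) // 2
    S = rng.integers(0, 2, size=n)
    train_idx = np.arange(n) + S * n
    test_idx = np.arange(n) + (1 - S) * n
    return z_tilde[train_idx], z_tilde[test_idx], S

def empirical_losses(train_fn, loss_fn, z_tilde, rng):
    """Training loss L_{Z(S)}(W) and test loss L_{Z(S-bar)}(W) for one draw of S."""
    z_train, z_test, _ = random_subset_split(z_tilde, rng)
    w = train_fn(z_train)                        # assumed training routine
    return loss_fn(w, z_train).mean(), loss_fn(w, z_test).mean()

# Example: rng = np.random.default_rng(0); losses averaged over repeated draws of S
# approximate the expectations appearing in the CMI-based bounds.
```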
This paper extends results of prior work by Steinke and Zakynthinou, by providing generalization bounds in the PAC-Bayesian and single-draw settings that depend on the conditional mutual information. The emphasis in this work is on obtaining fast rates ($1/n$ vs. $1/\sqrt{n}$). The authors also conduct empirical experiments showing how the fast rate bounds they propose can be useful for obtaining non-vacuous generalization bounds in the context of over-parameterized neural networks.
Neural Time-Dependent Partial Differential Equation
This work proposes a sequence-to-sequence approach for learning the time evolution of PDEs. The method employs a bi-directional LSTM to predict solutions of a PDE-based formulation for a chosen number of time steps. By itself this is an interesting and important goal, but the method does not seem to contain any novel components apart from demonstrating that LSTMs can be used to learn data from PDEs. The paper only compares to a simple form of PINNs, but not to a variety of other time-forecasting algorithms available in the deep learning field (LSTMs are just one of many methods used these days, a more state-of-the-art one being, e.g., transformers). In addition, the examples only contain single cases with relatively simple model equations.
Experimental Design for Overparameterized Learning with Application to Single Shot Deep Active Learning
In this paper, the authors develop a data selection scheme aimed at minimizing a notion of Bayes excess risk for overparametrized linear models. The excess Bayes risk is the expected squared error between the prediction and the target. The authors note that solutions such as V-optimality exist for the underparametrized case (linear regression), and offer extensions to ridge regression. After developing a greedy scheme and a tentative extension to deep learning models, the authors show that their selection scheme can outperform random selection on MNIST with a specific model.
SP:797b07cd8142a35333037bb573db0dfe5dde65ac
Offline Policy Optimization with Variance Regularization
This paper proposes a novel algorithm for offline policy optimization. The main idea is to prevent overestimation bias by regularizing against the variance of the importance weighted value estimate. There are two key modifications: (1) using an importance weight from the stationary distribution and (2) using Fenchel duality to introduce a min-max problem to avoid double sampling when estimating the gradient of the variance regularization term. The theory section motivates the use of variance regularization and the experiments show improvements over BCQ when adding the proposed variance regularization algorithm.
SP:4989f7703e106a20401cec0a5058d440720b0379
Quantifying Statistical Significance of Neural Network Representation-Driven Hypotheses by Selective Inference
1 INTRODUCTION . The remarkable predictive performance of deep neural networks ( DNNs ) stems from their ability to learn appropriate representations from data . In order to understand the decision-making process of DNNs , it is thus important to be able to explain and interpret DNN representations . For example , in image classification tasks , knowing the attention region from DNN representation allows us to understand the reason for the classification . In the past few years , several methods have been developed to explain and interpret DNN representations ( Ribeiro et al. , 2016 ; Bach et al. , 2015 ; Doshi-Velez & Kim , 2017 ; Lundberg & Lee , 2017 ; Zhou et al. , 2016 ; Selvaraju et al. , 2017 ) ; however , some of them have turned out to be unstable and not reproducible ( Kindermans et al. , 2017 ; Ghorbani et al. , 2019 ; Melis & Jaakkola , 2018 ; Zhang et al. , 2020 ; Dombrowski et al. , 2019 ; Heo et al. , 2019 ) . Therefore , it is crucially important to develop a method to quantify the reliability of DNN representations . In this paper , we interpret these representations as hypotheses that are driven by DNN ( called DNNdriven hypotheses ) and employ statistical hypothesis testing framework to quantify the reliability of DNN representations . For example , in an image classification task , the reliability of an attention region can be quantified based on the statistical significance of the difference between the attention region and the rest of the image . Unfortunately , however , traditional statistical test can not be applied to this problem because the hypothesis ( attention region in the above example ) itself is selected by the data . Traditional statistical test is valid only when the hypothesis is non-random . Roughly speaking , if a hypothesis is selected by the data , the hypothesis will over-fit to the data and the bias needs to be corrected when assessing the reliability of the hypothesis . Our main contribution in this paper is to introduce Selective Inference ( SI ) approach for testing the reliability of DNN representations . The basic idea of SI is to perform statistical inference under the condition that the hypothesis is selected . SI approach has been demonstrated to be effective in the context of feature selections such as Lasso . In this paper , in order to introduce SI for DNN representations , we develop a novel SI algorithm based on homotopy method , which enables us to derive the exact ( non-asymptotic ) conditional sampling distribution of the DNN-driven hypothesis . We use p-value as a criterion to quantify the reliability of DNN representation . In the literature , pvalues are often misinterpreted and there are various source of mis-interpretation has been discussed ( Wasserstein & Lazar , 2016 ) . In this paper , by using SI , we address one of the sources of misinterpreted p-values ; the p-values are biased when the hypothesis is selected after looking at the data ( often called double-dipping or data dredging ) . We believe our approach is a first significant step to provide valid p-values for assessing the reliability of DNN representations . Figure 1 shows an example that illustrates the importance of our method . Related works . Several recent approaches have been developed to visualize and understand a trained DNN . Many of these post-hoc approaches ( Mahendran & Vedaldi , 2015 ; Zeiler & Fergus , 2014 ; Dosovitskiy & Brox , 2016 ; Simonyan et al. 
, 2013 ) have focused on developing visualization tools for the activation maps and/or the filter weights within trained networks . Others have aimed to identify the discriminative regions in an input image , given a trained network ( Selvaraju et al. , 2017 ; Fong & Vedaldi , 2017 ; Zhou et al. , 2016 ; Lundberg & Lee , 2017 ) . In parallel , some recent studies have showed that many popular methods for explanation and interpretation are not stable with respect to the perturbation or the adversarial attack on the input data and the model ( Kindermans et al. , 2017 ; Ghorbani et al. , 2019 ; Melis & Jaakkola , 2018 ; Zhang et al. , 2020 ; Dombrowski et al. , 2019 ; Heo et al. , 2019 ) . However , there are no previous studies that quantitatively evaluate the stability and reproducibility of DNN representations with a rigorous statistical inference framework . In the past few years , SI has been actively studied for inference on the features of linear models selected by several feature selection methods , e.g. , Lasso ( Lee et al. , 2016 ; Liu et al. , 2018 ; Duy & Takeuchi , 2020 ) . The basic idea of SI is to make inference conditional on the selection event , which allows us to derive the exact ( non-asymptotic ) sampling distribution of the test statistic . Besides , SI has also been applied to various problems ( Bachoc et al. , 2014 ; Fithian et al. , 2015 ; Choi et al. , 2017 ; Tian et al. , 2018 ; Chen & Bien , 2019 ; Hyun et al. , 2018 ; Bachoc et al. , 2018 ; Loftus & Taylor , 2014 ; Loftus , 2015 ; Panigrahi et al. , 2016 ; Tibshirani et al. , 2016 ; Yang et al. , 2016 ; Suzumura et al. , 2017 ; Duy et al. , 2020 ) . However , to the best of our knowledge , there is no existing study that provides SI for DNNs , which is technically challenging . This study is partly motivated by Tanizaki et al . ( 2020 ) where the authors provide a framework to compute p-values for image segmentation results provided by graph cut and threshold-based segmentation algorithms . As we demonstrate in this paper , our method can be also used to assess the reliability of DNN-based segmentation results . Contribution . To our knowledge , this is the first study that provides an exact ( non-asymptotic ) inference method for statistically quantifying the reliability of data-driven hypotheses that are discovered from DNN representation . We propose a novel SI homotopy method , inspired by Duy & Takeuchi ( 2020 ) , for conducting powerful and efficient SI for DNN representations . We conduct experiments on both synthetic and real-world datasets , through which we offer evidence that our proposed method can successfully control the false positive rate , has decent performance in terms of computational efficiency , and provides good results in practical applications . We provide our implementation in the supplementary document and it will be released when this paper is published . 2 PROBLEM STATEMENT . To formulate the problem , we denote an image with n pixels corrupted with Gaussian noise as X = ( X1 , ... , Xn ) > = µ+ ε , ε ∼ N ( 0 , Σ ) , ( 1 ) where µ ∈ Rn is an unknown mean pixel intensity vector and ε ∈ Rn is a vector of Normally distributed noise with the covariance matrix Σ that is known or able to be estimated from external data . We note that we do not assume that the pixel intensities in an image follow Normal distribution in Equation ( 1 ) . Instead , we only assume that the vector of noises added to the true pixel values follows a multivariate Normal distribution . 
For an image X and a trained DNN , the main target is to identify an attention region ( discriminative/informative region ) in the input image X based on a DNN representation . A pixel is assigned to the attention region if its corresponding value in the representation layer is greater than a pre-defined threshold . We denote the set of pixels ofX divided into attention region and non-attention region as C+X and C − X , respectively . Definition 1 . We define A ( X ) as the event that the result of dividing pixels of image X into two sets of pixels C+X and C − X is obtained by applying a DNN onX , i.e. , A ( X ) = { C+X , C − X } . ( 2 ) Quantifying the statistical significance of DNN-driven hypotheses . Given an observed image xobs ∈ Rn sampled from the model ( 1 ) , we can obtain C+ xobs and C− xobs by applying DNN on xobs . Let us consider a score ∆ that represents the degree to which the attention region differs from the non-attention region . In general , we can define any score as long as it is written in the form ∆ = η > xobs . For example , we can define ∆ as the difference in average pixel values between the attention region and the non-attention region , i.e. , ∆ = mC+ xobs −mC− xobs = 1 |C+ xobs | ∑ i∈C+ xobs xobsi − 1 |C− xobs | ∑ i∈C− xobs xobsi = η > xobs , where η = 1|C+ xobs |1 n C+ xobs − 1|C− xobs |1 n C− xobs , and 1nC ∈ Rn is a vector whose elements belonging to a set C are 1 , and 0 otherwise . If the value of |∆| is sufficiently large , the difference between C+ xobs and C− xobs is significant and the attention region is reliable . To quantify the statistical significance , we consider a statistical hypothesis testing with the following null hypothesis H0 and alternative hypothesis H1 : H0 : µC+ xobs = µC− xobs vs. H1 : µC+ xobs 6= µC− xobs , ( 3 ) where µC+ xobs and µC− xobs are the true means of the pixel values in the attention region and nonattention region , respectively . Given a significance level α ( e.g. , 0.05 ) , we reject H0 if the p-value is smaller than α , which indicates the attention region differs from the non-attention region . Otherwise , we can not say that the difference is significant . In a standard ( naive ) statistical test , the hypotheses in ( 3 ) are assumed to be fixed , i.e. , non-random . Then , the naive ( two-sided ) p-value is simply given as pnaive = PH0 ( |η > X| ≥ |∆| ) = PH0 ( |η > X| ≥ |η > xobs| ) . ( 4 ) However , since the hypotheses in ( 3 ) are actually not fixed in advance , the naive p-value is not valid in the sense that , if we reject H0 with a significance level α , the false detection rate ( type-I error ) can not be controlled at level α , which indicates that pnaive is unreliable . This is due to the fact that the hypotheses ( the attention region ) in ( 3 ) are selected by looking at the data ( the input image ) , and thus selection bias exists . This selection bias is sometimes called data dredging , data snooping or p-hacking ( Ioannidis , 2005 ; Head et al. , 2015 ) . Selective inference ( SI ) for computing valid p-values . The basic idea of SI is to make inference conditional on the selection event , which allows us to derive the exact ( non-asymptotic ) sampling distribution of the test statistic η > X in an attempt to avoid the selection bias . Thus , we employ the following conditional p-value pselective = PH0 ( |η > X| ≥ |η > xobs| | A ( X ) = A ( xobs ) , q ( X ) = q ( xobs ) ) , ( 5 ) where q ( X ) = ( In − cη > ) X with c = Ση ( η > Ση ) −1 . 
The first condition A ( X ) = A ( xobs ) indicates the event that the result of dividing pixels into an attention region and non-attention region for a random image X is the same as that of the observed image xobs , i.e. , C+X = C + xobs and C−X = C − xobs . The second condition q ( X ) = q ( xobs ) indicates the component that is independent of the test statistic forX is the same as the one for xobs . The q ( X ) corresponds to the component z in the seminal SI paper of Lee et al . ( 2016 ) ( Sec 5 , Eq 5.2 and Theorem 5.2 ) . The p-value in ( 5 ) , which is called selective type I error or selective p-values in the SI literature ( Fithian et al. , 2014 ) , is valid in the sense that PH0 ( pselective < α ) = α , ∀α ∈ [ 0 , 1 ] , i.e. , the false detection rate is theoretically controlled at level α indicating the selective p-value is reliable . To calculate the selective p-value in ( 5 ) , we need to identify the conditional data space . Let us define the set of x ∈ Rn that satisfies the conditions in ( 5 ) as X = { x ∈ Rn | A ( x ) = A ( xobs ) , q ( x ) = q ( xobs ) } . ( 6 ) According to the second condition , the data in X are restricted to a line ( Sec 6 in Liu et al . ( 2018 ) , and Fithian et al . ( 2014 ) ) . Therefore , the set X can be re-written , using a scalar parameter z ∈ R , as X = { x ( z ) = a+ bz | z ∈ Z } , ( 7 ) where a = q ( xobs ) , b = Ση  ( η >  Ση  ) −1 , and Z = { z ∈ R | A ( x ( z ) ) = A ( xobs ) } . ( 8 ) Now , let us consider a random variable Z ∈ R and its observation zobs ∈ R that satisfyX = a+bZ and xobs = a+ bzobs . Then , the selective p-value in ( 5 ) is re-written as pselective = PH0 ( |η > X| ≥ |η > xobs| |X ∈ X ) = PH0 ( |Z| ≥ |zobs| | Z ∈ Z ) . ( 9 ) Since the variable Z ∼ N ( 0 , η > Ση ) under the null hypothesis , the law of Z | Z ∈ Z follows a truncated Normal distribution . Once the truncation region Z is identified , the selective p-value ( 9 ) can be computed as pselective = F Z 0 , η > Ση ( −|z obs| ) + 1− FZ0 , η > Ση ( |z obs| ) , ( 10 ) where F Em , s2 is the c.d.f . of the truncated normal distribution with mean m , variance s 2 and truncation region E . Therefore , the most important task is to identify Z . Extension of the problem setup to hypothesis driven from DNN-based image segmentation . We interpret the hypothesis driven from image segmentation result as the one obtained from the representation at output layer instead of internal representation . Our problem setup is general and can be directly applied to this case . For example , we can consider the attention region as the object region and the non-attention region as the background region . Then , we can conduct SI to quantify the significance of the difference between object and background regions . We note that we consider the case where the image is segmented into two regions—object and background—to simplify the problem and notations . The extension to more than two regions is straightforward .
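Once the truncation region $\mathcal{Z}$ has been identified (e.g., by the proposed homotopy method) as a union of intervals, the selective p-value in (10) reduces to two tail masses of a truncated normal distribution. A minimal sketch, assuming the intervals and the test-statistic variance $\eta^\top \Sigma \eta$ are already given:

```python
import numpy as np
from scipy.stats import norm

def selective_p_value(z_obs, sigma, intervals):
    """Two-sided selective p-value P(|Z| >= |z_obs| | Z in truncation region) for
    Z ~ N(0, sigma^2).  `intervals` is the truncation region as (lo, hi) pairs."""
    def mass(a, b):
        return norm.cdf(b / sigma) - norm.cdf(a / sigma)
    t = abs(z_obs)
    total = sum(mass(lo, hi) for lo, hi in intervals)
    tail = sum(mass(max(lo, t), hi) for lo, hi in intervals if hi > t)
    tail += sum(mass(lo, min(hi, -t)) for lo, hi in intervals if lo < -t)
    return tail / total
```

The ratio of the two truncated masses is exactly the expression $F^{\mathcal{Z}}_{0, \eta^\top \Sigma \eta}(-|z^{obs}|) + 1 - F^{\mathcal{Z}}_{0, \eta^\top \Sigma \eta}(|z^{obs}|)$ written out interval by interval.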
This paper proposes a novel method to quantify the reliability of DNN-driven hypotheses in a statistical hypothesis testing framework. Naive statistical tests are not appropriate for DNN-driven hypotheses, because the hypotheses are selected by looking at the data (i.e., selection bias exists). To address this problem, the authors develop a novel homotopy method under the Selective Inference (SI) framework, which can derive the exact sampling distribution of the test statistic for DNN-driven hypotheses. The authors mainly focus on DNNs that consist of affine operations, max-operations, and piecewise-linear activations. As described by Lee et al. (2016), the main idea of SI is to make the inference conditional on the selection event. For DNN-driven hypotheses specifically, the authors propose a method with two steps: 1) adding extra conditioning to make the problem tractable, and 2) combining multiple over-conditioning cases via the homotopy method to resolve the over-conditioning problem. Experimental results on both synthetic and real-world datasets illustrate that the proposed method can successfully control the false positive (type-I) error rate.
SP:4e77d43eb99688600f6c2115e1882e0b1e11a751
This paper proposes a variant of the GTD2 algorithm that adds a regularization term to the objective function; the new algorithm is named Gradient-DD (GDD). The regularization ensures that the value function does not change drastically between consecutive iterations. The authors show that the update rule of GDD can be written as a difference equation and further aim to establish convergence via a Lyapunov-based analysis. A simulation study is provided to compare the proposed GDD algorithm with TD, ETD, and GTD.
SP:8a32dfc80f31fd3da97e15ce98193144d03836b5
FactoredRL: Leveraging Factored Graphs for Deep Reinforcement Learning
We propose a simple class of deep reinforcement learning ( RL ) methods , called FactoredRL , that can leverage factored environment structures to improve the sample efficiency of existing model-based and model-free RL algorithms . In tabular and linear approximation settings , the factored Markov decision process literature has shown exponential improvements in sample efficiency by leveraging factored environment structures . We extend this to deep RL algorithms that use neural networks . For model-based algorithms , we use the factored structure to inform the state transition network architecture and for model-free algorithms we use the factored structure to inform the Q network or the policy network architecture . We demonstrate that doing this significantly improves sample efficiency in both discrete and continuous state-action space settings . 1 INTRODUCTION . In many domains , the structure of the Markov Decision Process ( MDP ) is known at the time of problem formulation . For example , in inventory management , we know the structure of the state transition : how inventory flows from a vendor , to a warehouse , to a customer ( Giannoccaro & Pontrandolfo , 2002 ; Oroojlooyjadid et al. , 2017 ) . In portfolio management , we know that a certain asset changes only when the agent buys or sells a corresponding item ( Jiang et al. , 2017 ) . Similar structural information is available in vehicle routing , robotics , computing , and many others . Our work stems from the observation that we can exploit the known structure of a given MDP to learn a good policy . We build on the Factored MDP literature ( Boutilier et al. , 1995 ; Osband & Van Roy , 2014 ; Kearns & Singh , 2002 ; Cui & Khardon , 2016 ) , and propose a factored graph to represent known relationships between states , actions and rewards in a given problem . We use the factored graphs to inform the structure of the neural networks used in deep reinforcement learning ( RL ) algorithms to improve their sample efficiency . We give literature references and example factor graphs for real world applications in Appendix A . Consider a motivational example , where the goal of the agent is to balance multiple independent cartpoles simultaneously , with each cartpole defined as per OpenAI gym ( G. Brockman & Zaremba , 2016 ) . The agent can take a ‘ left ’ or ‘ right ’ action on each cartpole , and the state includes the position and velocity of each cart and each pole . We refer to this as the Multi-CartPole problem . Both model-based and model-free algorithms treat the state-action space as a single entity , which makes exploration combinatorially complex . As a consequence , the sample efficiency of RL algorithms degrades exponentially with the number of cartpoles , despite the problem remaining conceptually simple for a human . By allowing the agent access to the problem ’ s factored structure ( i.e . each action affects only one cartpole ) , we bypass the need to learn about each action ’ s relationship with the entire state , and instead only need to learn about each action ’ s relationship with its single , related cartpole . We show how to integrate knowledge of the factored graph into both model-based and model-free deep RL algorithms , and thereby improve sample efficiency . In all cases , we first write down a factored graph as an adjacency matrix , representing the relationships between state , action , and reward . 
From this adjacency matrix , we then define a Factored Neural Network ( Factored NN ) , which uses input and output masking to reflect the structure of the factored graph . Finally , we show how to integrate this Factored NN into existing deep RL algorithms . For modelbased , we use the Factored NN to learn decomposed state transitions , and then integrate this state transition model with Monte Carlo Tree Search ( MCTS ) ( Kocsis & Szepesvári , 2006 ) . For model-free , we use the Factored NN to learn a decomposed Q-function , and then integrate with DQN ( Mnih et al. , 2015 ) . Also for model-free , we use the Factored NN to learn a decomposed policy function , and then integrate with PPO ( Schulman et al. , 2017 ) . In all three cases , we demonstrate empirically that these Factored RL methods ( Factored MCTS , DQN , and PPO ) are able to achieve better sample efficiency than their vanilla implementations , on a range of environments . 2 RELATED WORK . Several methods have been proposed that exploit the structural information of a problem in the Factored MDP literature . Kearns & Koller ( 1999 ) propose a method to conduct model-based RL with a Dynamic Bayesian Network ( DBN ) ( Dean & Kanazawa , 1989 ) and learn its parameters based on an extension of the Explicit Explore or Exploit ( E3 ) algorithm ( Kearns & Singh , 2002 ) . Guestrin et al . ( 2003 ) propose a linear program and a dynamic program based algorithm to learn linear value functions in Factored MDPs , and extend it to multi-agent settings ( Guestrin et al. , 2002 ) . They exploit the context specific and additive structure in Factored MDP that capture the locality of influence of specific states and actions . We use the same structures in our proposed algorithms . Cui & Khardon ( 2016 ) propose a symbolic representation of Factored MDPs . Osband & Van Roy ( 2014 ) propose posterior sampling and upper confidence bounds based algorithms and prove that they are near-optimal . They show that the sample efficiency of the algorithm scales polynomially with the number of parameters that encode the factored MDP , which may be exponentially smaller than the full state-action space . Xu & Tewari ( 2020 ) extend the results to non-episodic settings and Lattimore et al . ( 2016 ) show similar results for contextual bandits . The algorithms proposed in these prior works assume a tabular ( Cui et al. , 2015 ; Geißer et al . ) or linear setting ( Guestrin et al. , 2003 ) , or require symbolic expressions ( Cui & Khardon , 2016 ) . We extend these ideas to deep RL algorithms by incorporating the structural information in the neural network . Li & Czarnecki ( 2019 ) propose a factored DQN algorithm for urban driving applications . Our proposed algorithms are similar , but we extend the ideas to model-based algorithms like MCTS ( Kocsis & Szepesvári , 2006 ) , and model-free on-policy algorithms like PPO ( Schulman et al. , 2017 ) . We also evaluate our algorithms on a variety of environments which encompass discrete and continuous stateaction spaces . The Factored NN we propose is closely related to Graph Neural Networks ( Scarselli et al. , 2008 ; Zhou et al. , 2018 ) , which are deep learning based methods that operate on graph domain and have been applied to domains such as network analysis ( Kipf & Welling , 2016 ) , molecule design ( Liu et al. , 2018 ) and computer vision ( Xu et al. , 2018 ) . Instead of explicitly embedding the neighbors of all the nodes with neural networks , we use a single neural network with masking . 
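The following is a rough sketch of the input/output masking idea described above, under the assumption that the factored graph is supplied as a binary adjacency matrix between input and output factors; the authors' exact architecture and masking scheme may differ.

```python
# Illustrative sketch, not the authors' code: a masked linear layer that zeroes
# connections absent from a factored (adjacency-matrix) graph.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, mask):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # mask[i, j] = 1 if output i may depend on input j, else 0
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        return nn.functional.linear(x, self.linear.weight * self.mask,
                                    self.linear.bias)

# Hypothetical Multi-CartPole-style structure with two independent subsystems:
# outputs 0-1 depend only on inputs 0-3, outputs 2-3 only on inputs 4-7.
mask = torch.zeros(4, 8)
mask[0:2, 0:4] = 1.0
mask[2:4, 4:8] = 1.0
layer = MaskedLinear(8, 4, mask)
print(layer(torch.randn(1, 8)).shape)  # torch.Size([1, 4])
```

In a layout like this, each subsystem's outputs stay connected only to that subsystem's inputs, which is exactly the sparsity the factored graph is meant to encode.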
NerveNet Wang et al . ( 2018 ) addresses the expressiveness of structure in an MDP , similar to our work . They focus on robotics applications and demonstrate state-action factorization with PPO . In our work , we additionally demonstrate state transition and state-reward factorization in MCTS and DQN respectively . In addition , they propose imposing a structure with Graph Neural Networks . In contrast , we propose using input and output masking without modifying the neural architecture . Working Memory Graphs Loynd et al . ( 2020 ) use Transformer networks for modeling both factored observations and dependencies across time steps . However , they only evaluate their method in a grid world with a single discrete action . In contrast , we demonstrate our methods on multiple environments and algorithms with factorization in state transition , state-action and state-reward relationships . In addition , our factored network is a simple extension to the existing network used to solve a problem , whereas they impose a complex network architecture . Action masking has been used effectively to improve RL performance in multiple works ( Williams & Zweig , 2016 ; Williams et al. , 2017 ; Vinyals et al. , 2017 ) . We use a similar trick when applying our Factored NN to policy networks in model-free RL . However , we use both an action mask as well as a state mask to incorporate factored structure in policy networks . Our state transition networks for model-based RL also impose masks on both input and output , corresponding to the current state-action and the next state respectively . Wu et al . ( 2018 ) introduce an action-dependent baseline in actor-critic algorithms , where a separate advantage function is learned for each action . Their method also exploits structure available in the action space . Our method to incorporate structure is orthogonal , as we modify the policy network in actor-critic methods . There is also a relationship between our work and the emerging intersection of reinforcement learning and causal inference , as factored graphs are a superset of causal graphs in the MDP setting . Lu et al . ( 2018 ) use the backdoor criterion in causal inference and variational autoencoders . Zhang & Bareinboim ( 2019 ) propose a near-optimal algorithm by taking advantage of causal inference in non-Markovian dynamic treatment regimes . Both works assume there exist unobserved confounders in the environment . We instead tackle a different problem where there are no unobserved confounders and show that there are still benefits to leveraging structural information . 3 TERMINOLOGY . We briefly describe the terminology used in this paper . We use Directed Acyclic Graphs ( DAGs ) to represent relationships between the variables . DAGs consist of nodes and edges where the nodes correspond to random variables $X = ( X_1 , \ldots , X_d )$ , and a directed edge from variable $X_i$ to $X_j$ represents that $X_i$ has an effect on $X_j$ ( $X_i$ is also called the parent of $X_j$ ) . Under Markov conditions , the joint distribution of the variables can be factored as $p ( X_{1:d} ) = \prod_{i=1}^{d} p ( X_i \mid PA ( X_i ) )$ . Consider a general Markov Decision Process ( MDP ) defined by $( \mathcal{S} , \mathcal{A} , P , R , \rho_0 , \gamma )$ , where $\mathcal{S} , \mathcal{A}$ denote the state and action space respectively , $P$ denotes the transition probability , $R$ represents the reward function , and $\rho_0$ and $\gamma$ represent the initial state distribution and discount factor respectively .
In the classic RL setting , one typically assumes each state $S^k_{t+1}$ depends on all of the previous states and actions , i.e. , $PA ( S^k_{t+1} ) = \{ \{ S^k_t \}_{k=1}^{|\mathcal{S}|} , \{ A^k_t \}_{k=1}^{|\mathcal{A}|} \}$ , where $| \cdot |$ denotes the cardinality of the space , and $PA$ denotes the parents of a node in a Bayesian network . However , in many scenarios , one component of the action $A^k_t$ may only cause part of the state space $\{ S^k_t \}_{k \in C_k}$ to change , where $C_k$ is the index set of the related states of the $k$th component of the action . In other words , the parents of each state may only be a subset of the actions and previous states , i.e. , $PA ( S^k_{t+1} ) \subsetneq \{ \{ S^k_t \}_{k=1}^{|\mathcal{S}|} , \{ A^k_t \}_{k=1}^{|\mathcal{A}|} \}$ . Simplifying the conditional dependencies helps to construct a more accurate model , enabling us to better decompose the dynamics and reduce the complexity of the learning tasks . We assume the factored structure of the environment does not change over time . This paper presents a methodology for incorporating factor graphs into model-based and model-free RL methods. The work starts by assuming access to a correct factor graph showing the relationship between individual state factors, actions, and rewards. The authors propose to make use of this factor graph by using a Factored Neural Network - which is similar to the standard feed-forward MLP networks that would typically be used to parameterize a policy or Q-function - except that it masks out connections between input and output nodes that are not connected in the factor graph. Presumably this results in a sparser neural network which can lead to faster learning and better sample complexity. The authors demonstrate how these factored NNs can be incorporated with model-based MCTS as well as model-free DQN and PPO. In short - the algorithm remains unchanged and the only substitution seems to be the Factored NN rather than a fully-connected NN. Experiments are performed on Multi-CartPole (simultaneous control over several cartpoles), Taxi, BitFlip, and PyBullet's Ant, Half-Cheetah, and Humanoid. Each of the factored algorithms is compared with the un-factored equivalent and increased sample efficiency of learning is noted for the factored variants. The authors provide the manually-defined factor graphs used for each of these environments in the Appendix. SP:dcb62a0cc1b03e9ea24b2ed167f14255d9386f95 Parallel Training of Deep Networks with Local Updates 1 INTRODUCTION . Backpropagation ( Rumelhart et al. , 1985 ) is by far the most common method used to train neural networks . Alternatives to backpropagation are typically used only when backpropagation is impractical due to a non-differentiable loss ( Schulman et al. , 2015 ) , non-smooth loss landscape ( Metz et al. , 2019 ) , or due to memory and/or compute requirements ( Ororbia et al. , 2020 ) . However , progress in deep learning is producing ever larger models in terms of parameter count and depth , in vision ( Hénaff et al. , 2019 ; Chen et al. , 2020 ) , language ( Radford et al. , 2019 ; Brown et al. , 2020 ) , and many other domains ( Silver et al. , 2017 ; Vinyals et al. , 2019 ; Berner et al. , 2019 ) . As model size increases , backpropagation incurs growing computational , memory , and synchronization overhead ( Ben-Nun & Hoefler , 2018 ) . This raises the question of whether there are more efficient training strategies , even for models and losses that are considered well matched to training by backpropagation .
Much of the work on training large scale models focuses on designing compute infrastructure which makes backpropagation more efficient , despite growing model size ( Dean et al. , 2012b ; Chen et al. , 2015 ; Sergeev & Balso , 2018 ) . One of the most common ways to achieve efficient training of deep neural networks with backpropagation is to scale utilizing data parallelism ( Zhang et al. , 1989 ; Chen et al. , 2016 ) , training on bigger batch sizes spread across multiple devices . However , diminishing returns have been reported with this method for larger batch sizes , effectively wasting compute ( Goyal et al. , 2017 ; Masters & Luschi , 2018 ; Shallue et al. , 2018 ; McCandlish et al. , 2018 ) . Training based on pipeline parallelism has also been introduced , but still requires large batches for efficient training ( Petrowski et al. , 1993 ; Ben-Nun & Hoefler , 2018 ; Huang et al. , 2019 ) . Moreover , in addition to the limitation that in the forward pass each layer can only process the input data in sequence ( forward locking ) , the use of backpropagation implies that the network parameters of each layer can only be updated in turn after completing the full forward pass ( backward locking ) . This backward locking results in increased memory overhead , and precludes efficient parallel processing across layers ( Jaderberg et al. , 2017 ) . The challenges of scaling compute infrastructure to support deep networks trained with backpropagation motivate the need for alternative approaches to training deep neural networks . In this work , we explore how layer-wise local updates ( Belilovsky et al. , 2019a ; Löwe et al. , 2019 ; Xiong et al. , 2020 ) can help overcome these challenges and scale more efficiently with compute than backpropagation . With local updates , each layer is updated before even completing a full forward pass through the network . This remedies the forward and backward locking problems which harm memory efficiency and update latency in standard backprop . Layer-wise local updates are not proportional to gradients of the original loss , and are not even guaranteed to descend a loss function . Nevertheless , in practice they are effective at training neural networks . We refer to this approach of parallelizing compute , which is alternative and complementary to data and model parallelism , as local parallelism . Our investigation focuses on the trade-offs of using local update methods as opposed to global backpropagation . To summarize our contributions : ( i ) We provide the first large scale investigation into local update methods in both vision and language domains . We find training speedups ( as measured by the reduction in required sequential compute steps ) of up to 10× on simple MLPs , and 2× on Transformer architectures . These training speedups are the result of local training methods being able to leverage more parallel compute than backprop . ( ii ) We provide insight into how local parallelism methods work , and experimentally compare the similarity of their gradient and features to those from backprop . ( iii ) We demonstrate a prototype implementation of local parallelism for ResNets , and show up to a 40 % increase in sample throughput ( number of training points per second ) relative to backprop , due to higher hardware utilization . We believe that local parallelism will provide benefits whenever there are diminishing returns from data parallelism , and avoid stale weights from pipelined model parallelism . 
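To make the contrast with end-to-end backpropagation concrete, here is a heavily simplified sketch of greedy layer-wise local updates in the spirit described above. It is not the released implementation; the block sizes, auxiliary heads, and optimizer settings are arbitrary assumptions made for illustration.

```python
# Simplified sketch of greedy local updates (not the paper's code): each block
# is trained with its own auxiliary loss; detach() blocks inter-block gradients.
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
                        nn.Sequential(nn.Linear(256, 256), nn.ReLU())])
aux_heads = nn.ModuleList([nn.Linear(256, 10), nn.Linear(256, 10)])
optims = [torch.optim.SGD(list(b.parameters()) + list(h.parameters()), lr=0.1)
          for b, h in zip(blocks, aux_heads)]
loss_fn = nn.CrossEntropyLoss()

def local_update_step(x, y):
    h = x
    for block, head, opt in zip(blocks, aux_heads, optims):
        h = block(h)                  # forward through this block only
        loss = loss_fn(head(h), y)    # local auxiliary objective
        opt.zero_grad()
        loss.backward()               # gradient stays inside the block
        opt.step()
        h = h.detach()                # no gradient flows to the next block

# Dummy batch to illustrate usage
local_update_step(torch.randn(32, 784), torch.randint(0, 10, (32,)))
```

Because each block's loss is computed as soon as that block has run, the blocks can in principle be placed on different devices and updated in parallel, which is the source of the claimed speedups.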
Additionally , we have released code showing an example of local parallelism , available at hiddenurl . 2 RELATED WORK . 2.1 PARALLELIZATION IN DEEP LEARNING . Scaling large models has led to the development of a number of techniques to train deep models in a parallel fashion ( Ben-Nun & Hoefler , 2018 ) , summarized in Figure 1 . Data Parallelism : Data Parallelism ( Zhang et al. , 1989 ) is an attempt to speed up training of a model by splitting the data among multiple identical models and training each model on a shard of the data independently . Data parallelism is effectively training with larger minibatches ( Kaplan et al. , 2020 ) . This creates issues around the consistency of a model which then needs to be synchronized ( Deng et al. , 2012 ; Dean et al. , 2012a ) . There are two main ways to synchronize weights across model copies : ( i ) Synchronous optimization , where data parallel training synchronizes at the end of every minibatch ( Das et al. , 2016 ; Chen et al. , 2016 ) , with a communication overhead that increases with the number of devices ; ( ii ) Asynchronous optimization that implements data parallel training with independent updates of local model parameters without global synchronization ( Niu et al. , 2011 ; Dean et al. , 2012a ) – this increases device utilization , but empirically gradients are computed on stale weights , which results in a poor sample efficiency and thus slower overall training time compared to synchronous optimization . Model Parallelism : Model Parallelism is used when a model is too large to fit in the memory of a single device and is instead spread over multiple processors ( Krizhevsky et al. , 2012 ; Shazeer et al. , 2018 ; Harlap et al. , 2018 ; Lepikhin et al. , 2020 ) . This is increasingly common as state of the art performance continues to improve with increasing model size ( Brown et al. , 2020 ) . Model parallelism unfortunately has a few downsides : ( i ) High communication costs – the total training time for larger networks can become dominated by communication costs ( Simonyan & Zisserman , 2015 ) , which in the worst case can grow quadratically with the number of devices , and can reach up to 85 % of the total training time of a large model such as VGG-16 ( Harlap et al. , 2018 ; Simonyan & Zisserman , 2015 ) ; ( ii ) Device under-utilization – forward propagation and backward propagation are both synchronous operations , which can result in processor under-utilization in model-parallel systems . This problem becomes worse as we increase the number of layers ( Ben-Nun & Hoefler , 2018 ; Jia et al. , 2014 ; Collobert et al. , 2011 ; Abadi et al. , 2016 ; Huang et al. , 2018 ) . Pipeline Parallelism : Due to the forward and backward locking , using multiple devices to process consecutive blocks of the deep model would make an inefficient use of the hardware resources . Pipelining ( Harlap et al. , 2018 ) concurrently passes multiple mini-batches to multiple layers on multiple devices . This increases device utilization but can introduce staleness and consistency issues which lead to unstable training . Harlap et al . ( 2018 ) alleviates the consistency issue by storing past versions of each layer . Huang et al . ( 2019 ) addresses the staleness issue by pipelining microbatches and synchronously updating at the end of each minibatch . Guan et al . ( 2019 ) builds on this work by introducing a weight prediction strategy and Yang et al . 
( 2020 ) investigates to what extent the tradeoff between staleness/consistency and device utilization is necessary . Local updates on the other hand can keep device utilization high with both small and large batches and avoid the weight staleness problem . Local Learning Rules : Local learning describes a family of methods that perform parameter updates based only on local information , where locality is defined as dependence of neighboring neurons , layers , or groups of layers . The earliest local method we are aware of is Hebbian Learning ( Hebb , 1949 ) which has further been explored in BCM theory ( Izhikevich & Desai , 2003 ; Coesmans et al. , 2004 ) , Oja ’ s rule ( Oja , 1982 ) , Generalized Hebbian Learning ( Sanger , 1989 ) , and meta-learned local learning rules ( Bengio et al. , 1990 ; 1992 ; Metz et al. , 2018 ; Gu et al. , 2019 ) . Architectures like Hopfield Networks ( Hopfield , 1982 ) and Boltzmann Machines ( Ackley et al. , 1985 ) also employ a local update , and predate backprogation in deep learning . Modern variants of local training methods have attempted to bridge the performance gap with backpropagation . These include projection methods such as Hebbian learning rules for deep networks ( Krotov & Hopfield , 2019 ; Grinberg et al. , 2019 ; Ryali et al. , 2020 ) , and local layer-wise learning with auxiliary losses ( Belilovsky et al. , 2019a ; b ) . Most similar to our work is decoupled greedy layer-wise learning ( Belilovsky et al. , 2019b ; Löwe et al. , 2019 ) , which trained auxiliary image classifiers greedily , and local contrastive learning ( Xiong et al. , 2020 ) . These methods mainly focus on matching the performance of backpropagation with respect to training epochs , whereas our work focuses on tradeoffs . Finally , while not local in the sense that parallelized layers still optimize for the global objective , Huo et al . ( 2018b ) parallelize layers by caching gradients and using delayed gradient signals to overcome the backward locking problem and update decoupled layers in parallel . 3 LOCAL PARALLELISM . Given a deep neural network , we divide the layers into a sequence of J blocks , which may contain one or more layers . Each block is trained independently with an auxiliary objective , and receives the activations output by the previous block as input or , in the case of the first block , the data from the sampled minibatch . We consider five variants to train this sequence of J blocks : backpropagation , greedy local parallelism , overlapping local parallelism , and chunked local parallelism , as shown in Figure 2 . We also include a baseline method of just training the last , or last two , layers . In all of the local methods , training occurs by attaching objective functions to the end of each block and back propagating the signal locally into the corresponding block or blocks . In this work the auxiliary objective functions that we use take the same form as the global objective . For example , to train a classifier on CIFAR-10 , we attach auxiliary linear classifiers to each local block . See Belilovsky et al . ( 2019b ) for further discussion on the form of this objective . Backpropagation : In our notation , backpropagation groups all layers into one block and thus J = 1 . The parameters are updated with one instance of global error correction . While backpropagation ensures that all weights are updated according to the final output loss , it also suffers from forward and backward locking ( Jaderberg et al. 
, 2017 ) , an issue that local parallelized methods aim to resolve . Greedy local parallelism : A straightforward approach to enable local training is to attach an auxiliary network to each local layer , which generates predictions from the activations of hidden layers . After generating predictions , each local gradient is backpropagated to its respective local block , shown in Figure 2 ( b ) . The activations are then passed as input to the next layer . We refer to this approach , introduced in ( Belilovsky et al. , 2019b ) , as greedy . Greedy local parallelism is the most parallelizable of all the schemes we consider . However , a potential downside is that fully greedy updates force the layers to learn features that are only relevant to their local objective and preclude inter-layer communication , which may result in lower evaluation performance for the global objective , or worse generalization . Overlapping local parallelism : One issue with the purely greedy approach is that features learned for any individual block may not be useful for subsequent blocks , since there is no inter-block propagation of gradient . For this reason , we consider overlapping local architectures where the first layer of each block is also the last layer of the previous block , as shown in Figure 2 ( c ) , though overlapping of more layers is also possible . This redundancy enables inter-block propagation of gradient that is still local , since only neighboring blocks overlap . However , this comes at the cost of running additional backward passes . The overlapping architecture has appeared before in Xiong et al . ( 2020 ) , but was used only for contrastive losses . Ours is the first work to investigate overlapping local architectures for standard prediction objectives in computer vision and language . Overlapping updates are parallelizable , but come with the additional complexity of keeping duplicates of the overlapping components and averaging updates for these layers . Chunked local parallelism : The greedy architecture is maximally parallel in the sense that it distributes one layer per block . However , it is also possible to have fewer parallel blocks by combining multiple layers into one . We refer to this architecture , shown in Figure 2 ( d ) , as chunked local parallelism . This method trades off parallelizability and therefore throughput for an error signal that propagates through more consecutive layers . It differs from overlapping local parallelism by not needing to duplicate any layer . While previous work has investigated the asymptotic performance of chunked parallelism ( Belilovsky et al. , 2019b ) , ours is the first to consider the compute efficiency and parallelizability of local parallelism . By stacking multiple layers per each parallelized block , chunked parallelism sits between fully parallelized methods , such as greedy and overlapping updates , and fully sequential methods like backpropagation . It is a very poorly written paper. Basic idea of finding a way to not have to wait for full forward pass is not new. Multiple research papers have been published from the extreme of using stale weight to some form of sub-network backdrop as a proxy for the full network. This paper proposed no new idea for local update. Prior work have all suffered with one or both of these two limitations: a) poor experimental framework, or b) not being able to meet the accuracy bar set by backprop. This work suffers from both. 
The experimental basis is very poorly described - and the method fails to come even close to the backprop accuracy target with any decent speedup claim. The former is my biggest concern. Section 6 starts with 'Here we show that performance gains of local parallelism can be realized on real hardware' - with near-zero description of any 'real' hardware, except a footnote on '1000 IPUs on a chip'. SP:ad7eb2bcb3a83153f140e5e8bfaa8b76110e62ab Simple and Effective VAE Training with Calibrated Decoders 1 INTRODUCTION . Deep density models based on the variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) have found ubiquitous use in probabilistic modeling and representation learning as they are both conceptually simple and are able to scale to very complex distributions and large datasets . These VAE techniques are used for tasks such as future frame prediction ( Castrejon et al. , 2019 ) , image segmentation ( Kohl et al. , 2018 ) , generating speech ( Chung et al. , 2015 ) and music ( Dhariwal et al. , 2020 ) , as well as model-based reinforcement learning ( Hafner et al. , 2019a ) . However , in practice , many of these approaches require careful manual tuning of the balance between two terms that correspond to distortion and rate from information theory ( Alemi et al. , 2017 ) . This balance trades off fidelity of reconstruction and quality of samples from the model : a model with low rate would not contain enough information to reconstruct the data , while allowing the model to have high rate might lead to unrealistic samples from the prior as the KL-divergence constraint becomes weaker ( Alemi et al. , 2017 ; Higgins et al. , 2017 ) . While a proper variational lower bound does not expose any free parameters to control this tradeoff , many prior works heuristically introduce a weight on the prior KL-divergence term , often denoted β . Usually , β needs to be tuned for every dataset and model variant as a hyperparameter , which slows down development and can lead to poor performance as finding the optimal value is often prohibitively computationally expensive . Moreover , using β ≠ 1 precludes the appealing interpretation of the VAE objective as a bound on the data likelihood , and is undesirable for applications like density modeling . While many architectures for calibrating decoders have been proposed in the literature ( Kingma & Welling , 2014 ; Kingma et al. , 2016 ; Dai & Wipf , 2019 ) , more applied work typically employs VAEs with uncalibrated decoding distributions , such as Gaussian distributions without a learned variance , where the decoder only outputs the mean parameter ( Castrejon et al. , 2019 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ; Babaeizadeh et al. , 2018 ; Lee et al. , 2018 ; Hafner et al. , 2019b ; Pong et al. , 2019 ; Zhu et al. , 2017 ; Pavlakos et al. , 2019 ) , or uses other ad-hoc modifications to the objective ( Sohn et al. , 2015 ; Henaff et al. , 2019 ) . Indeed , it is well known that attempting to learn the variance in a Gaussian decoder may lead to numerical instability ( Rezende & Viola , 2018 ; Dai & Wipf , 2019 ) , and naïve approaches often lead to poor results . As a result , it remains unclear whether practical empirical performance of VAEs actually benefits from calibrated decoders or not . To rectify this , our first contribution is a comparative analysis of various calibrated decoder architectures and practical recommendations for simple and effective VAE training .
We find that , while naı̈ve calibrated decoders often lead to worse results , a careful choice of the decoder distribution can work very well , and removes the need to tune the additional parameter β . Indeed , we note that the entropy of the decoding distribution controls the mutual information I ( x ; z ) . Calibrated decoders allow the model to control I ( x ; z ) automatically , instead of relying on manual tuning . Our second contribution is a simple but novel technique for optimizing the decoder variance analytically , without requiring the decoder network to produce it as an additional output . We call the resulting approach to learning the Gaussian variance the σ-VAE . In our experiments , the σ-VAE outperforms the alternative of learning the variance through gradient descent , while being simpler to implement and extend . We validate our results on several VAE and sequence VAE models and a range of image and video datasets . 2 RELATED WORK . Prior work on variational autoencoders has studied a number of different decoder parameterizations . Kingma & Welling ( 2014 ) ; Rezende et al . ( 2014 ) use the Bernoulli distribution for the binary MNIST data and Kingma & Welling ( 2014 ) use Gaussian distributions with learned variance parameter for grayscale images . However , modeling images with continuous distributions is prone to instability as the variance can converge to zero ( Rezende & Viola , 2018 ; Mattei & Frellsen , 2018 ; Dai & Wipf , 2019 ) . Some work has attempted to rectify this problem by using dequantization ( Gregor et al. , 2016 ) , which is theoretically appealing as it is tightly related to the log-likelihood of the original discrete data ( Theis et al. , 2016 ) , optimizing the variance in a two-stage procedure ( Arvanitidis et al. , 2017 ) , or training a post-hoc prior ( Ghosh et al. , 2019 ) . Takahashi et al . ( 2018 ) ; Barron ( 2019 ) proposed more expressive distributions . Additionally , different choices for representing such variance exist , including diagonal covariance ( Kingma & Welling , 2014 ; Sønderby et al. , 2016 ; Rolfe , 2016 ) , or a single shared parameter ( Kingma et al. , 2016 ; Dai & Wipf , 2019 ; Edwards & Storkey , 2016 ; Rezende & Viola , 2018 ) . We analyze these and notice that learning a single variance parameter shared across images leads to stable training and good performance , without the use of dequantization or even clipping the variance , although these techniques can be used with our decoders ; and further improve the estimation of this variance with an analytic solution . Early work on discrete VAE decoders for color images modeled them with the Bernoulli distribution , treating the color intensities as probabilities ( Gregor et al. , 2015 ) . Further work has explored various parameterizations based on discretized continuous distributions , such as discretized logistic ( Kingma et al. , 2016 ) . More recent work has improved expressivity of the decoder with a mixture of discretized logistics ( Chen et al. , 2016 ; Maaløe et al. , 2019 ) . However , these models also employ powerful autoregressive decoders ( Chen et al. , 2016 ; Gulrajani et al. , 2016 ; Maaløe et al. , 2019 ) , and the latent variables in these models may not represent all of the significant factors of variation in the data , as some factors can instead be modeled internally by the autoregressive decoder ( Alemi et al. 
, 2017 ) .1 While many calibrated decoders have been proposed , outside the core generative modeling community uncalibrated decoders are ubiquitous . They are used in work on video prediction ( Denton & Fergus , 2018 ; Castrejon et al. , 2019 ; Lee et al. , 2018 ; Babaeizadeh et al. , 2018 ) , image segmentation ( Kohl et al. , 2018 ) , image-to-image translation ( Zhu et al. , 2017 ) , 3D human pose ( Pavlakos et al. , 2019 ) , as well as model-based reinforcement learning ( Henaff et al. , 2019 ; Hafner et al. , 2019b ; a ) , and representation learning ( Lee et al. , 2019 ; Watter et al. , 2015 ; Pong et al. , 2019 ) . Most of these works utilize the heuristic hyperparameter β instead , which is undesirable both as the resulting objective is no longer a bound on the likelihood , and as β usually requires extensive tuning . In this work , we analyze the common pitfalls of using calibrated decoders that may have prevented the practitioners from using them , propose a simple and effective analytic way of learning such calibrated distribution , and provide a comprehensive experimental evaluation of different decoding distributions . Alternative discussions of the hyperparameter β are presented by Zhao et al . ( 2017 ) ; Higgins et al . ( 2017 ) ; Alemi et al . ( 2017 ) ; Achille & Soatto ( 2018 ) , who show that it controls the amount of information in the latent variable , I ( x ; z ) . Peng et al . ( 2018 ) ; Rezende & Viola ( 2018 ) further discuss constrained optimization objectives for VAEs , which also yield a similar hyperparameter . Here , we focus on β-VAEs with Gaussian decoders with constant variance , as commonly used in recent work , and show that the hyperparameter β can be incorporated in the decoding likelihood for these models . 1BIVA ( Maaløe et al. , 2019 ) uses the Mixture of Logistics decoder proposed in ( Salimans et al. , 2017 ) that produces the channels for each pixel autoregressively , see also App D . 3 ANALYSING DECODING DISTRIBUTIONS . The generative model of a VAE ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) with parameters θ is specified with a prior distribution over the latent variable pθ ( z ) , commonly unit Gaussian , and a decoding distribution pθ ( x|z ) , which for color images is commonly a conditional Gaussian parameterized with a neural network . We would like to fit this generative model to a given dataset by maximizing the evidence lower bound ( ELBO ( Neal & Hinton , 1998 ; Jordan et al. , 1999 ; Kingma & Welling , 2014 ; Rezende et al. , 2014 ) ) , which uses an approximate posterior distribution qφ ( z|x ) , also commonly a conditional Gaussian specified with a neural network . In this work , we focus on the form of the decoding distribution pθ ( x|z ) . To achieve the best results , we want a decoding distribution that represents the required probability p ( x|z ) accurately In this section , we will review and analyze various choices of decoding distributions that enable better decoder calibration , including expressive decoding distributions that can represent both the prediction of the image and the uncertainty about such prediction , or even multimodal predictions . 3.1 GAUSSIAN DECODERS . We first analyse the commonly used Gaussian decoders . 
We note that the commonly used MSE reconstruction loss between the reconstruction $\hat{x}$ and ground truth data $x$ is equivalent to the negative log-likelihood objective with a Gaussian decoding distribution with constant variance : $- \ln p ( x | z ) = \frac{1}{2} \| \hat{x} - x \|^2 + D \ln \sqrt{2\pi} = \frac{1}{2} \| \hat{x} - x \|^2 + c = \frac{D}{2} \mathrm{MSE} ( \hat{x} , x ) + c$ , where $p ( x | z ) \sim \mathcal{N} ( \hat{x} , I )$ , the prediction $\hat{x}$ is produced with a neural network $\hat{x} = \mu_\theta ( z )$ , and $D$ is the dimensionality of $x$ . This demonstrates a drawback of methods that rely simply on the MSE loss ( Castrejon et al. , 2019 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ; Hafner et al. , 2019b ; Pong et al. , 2019 ; Zhu et al. , 2017 ; Henaff et al. , 2019 ) , as it is equivalent to assuming a particular , constant variance of the Gaussian decoding distribution . By learning this variance , we can achieve much better performance due to better calibration of the decoder . There are several ways in which we can specify this variance . An expressive way to specify the variance is to specify a diagonal covariance matrix for the image , with one value per pixel ( Kingma & Welling , 2014 ; Sønderby et al. , 2016 ; Rolfe , 2016 ) . This can be done , for example , by letting a neural network $\sigma_\theta$ output the diagonal entries of the covariance matrix given a latent sample $z$ : $p_\theta ( x | z ) \sim \mathcal{N} ( \mu_\theta ( z ) , \sigma_\theta ( z )^2 )$ . ( 1 ) This parameterization of the decoding distribution outputs one variance value per each pixel and channel . While powerful , we observe in Section 5.3 that this approach attains suboptimal performance , and is moreover prone to numerical instability . Instead , we will find experimentally that a simpler parameterization , in which the covariance matrix is specified with a single shared ( Kingma et al. , 2016 ; Dai & Wipf , 2019 ; Edwards & Storkey , 2016 ; Rezende & Viola , 2018 ) parameter $\sigma$ as $\Sigma = \sigma^2 I$ , often works better in practice : $p_{\theta , \sigma} ( x | z ) \sim \mathcal{N} ( \mu_\theta ( z ) , \sigma^2 I )$ . ( 2 ) The parameter $\sigma$ can be optimized together with parameters of the neural network $\theta$ with gradient descent . Of particular interest is the interpretation of this parameter . Writing out the expression for the decoding likelihood , we obtain $- \ln p ( x | z ) = \frac{1}{2\sigma^2} \| \hat{x} - x \|^2 + D \ln ( \sigma \sqrt{2\pi} ) = \frac{1}{2\sigma^2} \| \hat{x} - x \|^2 + D \ln \sigma + c = D \ln \sigma + \frac{D}{2\sigma^2} \mathrm{MSE} ( \hat{x} , x ) + c$ . The full objective of the resulting Gaussian σ-VAE is : $\mathcal{L}_{\theta , \phi , \sigma} = D \ln \sigma + \frac{D}{2\sigma^2} \mathrm{MSE} ( \hat{x} , x ) + D_{KL} ( q ( z | x ) \,\|\, p ( z ) )$ . ( 3 ) Note that $\sigma$ may be viewed as a weighting parameter between the MSE reconstruction term and the KL-divergence term in the objective . Moreover , this objective explicitly specifies how to select the optimal variance : the variance should be selected to minimize the ( weighted ) MSE loss while also minimizing the logarithm of the variance . Decoder Calibration It is important that the decoder distribution be calibrated in the statistical sense , that is , the predicted probabilities should correspond to the frequencies of seeing a particular value of $x$ given that prediction ( DeGroot & Fienberg , 1983 ; Dawid , 1982 ) . The calibration of a neural network can be usually improved by estimating the uncertainty of that prediction ( Guo et al. , 2017 ) , such as the variance of a Gaussian ( Kendall & Gal , 2017 ) . Since the naive MSE loss assumes a constant variance , it does not effectively represent the uncertainty of the prediction , and is often poorly calibrated . Instead , learning the variance as in Eq . 3 leads to better uncertainty estimation and better calibration .
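A minimal sketch of the objective in equation ( 3 ) with a single shared log-standard-deviation parameter follows, written for illustration only; the model producing $\hat{x}$ and the approximate posterior parameters is assumed, and this is not the authors' released code.

```python
# Minimal sketch of the sigma-VAE loss of Eq. (3) (illustrative, not the
# authors' implementation): a single shared log-sigma is learned jointly.
import torch

log_sigma = torch.zeros(1, requires_grad=True)  # shared log std, trained by SGD

def sigma_vae_loss(x_hat, x, mu_q, logvar_q):
    d = x[0].numel()                              # data dimensionality D
    mse = ((x_hat - x) ** 2).flatten(1).mean(1)   # per-sample, per-dimension MSE
    rec = d * log_sigma + d / (2 * torch.exp(2 * log_sigma)) * mse
    # KL(q(z|x) || N(0, I)), summed over latent dimensions
    kl = -0.5 * (1 + logvar_q - mu_q ** 2 - logvar_q.exp()).sum(1)
    return (rec + kl).mean()

# Toy usage with random tensors standing in for a real model's outputs
x = torch.rand(8, 3, 32, 32)
loss = sigma_vae_loss(x_hat=torch.rand_like(x), x=x,
                      mu_q=torch.randn(8, 16), logvar_q=torch.zeros(8, 16))
loss.backward()
print(log_sigma.grad)
```

Setting the derivative of ( 3 ) with respect to $\sigma$ to zero gives $\sigma^2$ equal to the per-dimension MSE, which is presumably the basis of the analytic variance estimate mentioned earlier; the sketch above instead simply learns $\log \sigma$ by gradient descent.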
In Sec 5.1 , we show that learning a good estimate of this uncertainty is crucial for the quality of the VAE generations . Connection to β-VAE . The β-VAE objective ( Higgins et al. , 2017 ) for a Gaussian decoder with unit variance is : $\mathcal{L}_\beta = \frac{D}{2} \mathrm{MSE} ( \hat{x} , x ) + \beta D_{KL} ( q ( z | x ) \,\|\, p ( z ) )$ . ( 4 ) We see that it can be interpreted as a particular case of the objective ( 3 ) , where the variance is constant and the term $D \ln \sigma$ can be ignored during optimization . The β-VAE objective is then equivalent to a σ-VAE with a constant variance $\sigma = \sqrt{\beta / 2}$ ( for a particular learning rate setting ) . In recent work ( Zhu et al. , 2017 ; Denton & Fergus , 2018 ; Lee et al. , 2019 ) , β-VAE models are often used in this exact regime . By tuning the β term , practitioners are able to tune the variance of the decoder , manually producing a more calibrated decoder . However , by re-interpreting the β-VAE objective as a special case of the VAE and introducing the missing $D \ln \sigma$ term , we can both obtain a valid evidence lower bound , and remove the need to manually select β . Instead , the variance σ can simply be learned end-to-end , reducing the need for hyperparameter tuning . An alternative discussion of this connection in the context of linear VAEs is also presented by Lucas et al . ( 2019 ) . While the β term is not necessary for good performance if the decoder is calibrated , it can still be employed if desired , such as when the aim is to attain better disentanglement ( Higgins et al. , 2017 ) or a particular rate-distortion tradeoff ( Alemi et al. , 2017 ) . However , we found that with calibrated decoders , the best sample quality is obtained when β = 1 . Loss implementation details . For the correct evidence lower bound computation , it is necessary to add the values of the MSE loss and the KL divergence across the dimensions . We observe that common implementations of these losses ( Denton & Fergus , 2018 ; Abadi et al. , 2016 ; Paszke et al. , 2019 ) use averaging instead , which will lead to poor results if the number of image dimensions is significantly different from the number of the latent dimensions . While this can be conveniently ignored in the β-VAE regime , where the balance term is tuned manually anyway , for the σ-VAE it is essential to compute the objective value correctly . Variance implementation details . Since the variance is non-negative , we parameterize it logarithmically as $\sigma^2 = e^{2\lambda}$ , where $\lambda$ is the logarithm of the standard deviation . For some models , such as per-pixel variance decoders , we observed that it is necessary to restrict the variance range for numerical stability . We do so by using the soft clipping operations proposed by Chua et al . ( 2018 ) : $\lambda := \lambda_{\max} - \mathrm{softplus} ( \lambda_{\max} - \lambda )$ ; $\lambda := \lambda_{\min} + \mathrm{softplus} ( \lambda - \lambda_{\min} )$ . We observe that setting $\lambda_{\min} = -6$ to lower bound the standard deviation to be at least half of the distance between allowed color values works well in practice . We also observe that this clipping is unnecessary when learning a shared σ value . This paper discusses a well-known problem of VAE training: the decoder produces blurry reconstructions with a constant variance. While much existing work addressed this problem by introducing independent variance training (as in the original VAE model) or additional hyper-parameters, those approaches usually come with additional training/tuning difficulty and even break the ELBO assumption.
This paper proposed a simple $\sigma$-VAE that addresses the above problem by optimizing a single variance variable. This can also be easily connected to the well-known $\beta$-VAE works. The experimental results in Tables 2 and 3 show that the proposed model obtains a better FID score than existing works on multiple datasets. SP:a3e5acdd322677d019a4582db78dab2dc1102818 Bayesian Neural Networks with Variance Propagation for Uncertainty Evaluation 1 INTRODUCTION . Uncertainty evaluation is a core technique in practical applications of deep neural networks ( DNNs ) . As an example , let us consider cyber-physical systems ( CPS ) such as the automated driving system . In the past decade , machine learning methods have been widely utilized to realize the environment perception and path-planning components in the CPS . In particular , the automated driving system has drawn huge attention as a safety-critical and real-time CPS ( NITRD CPS Senior Steering Group , 2012 ; Wing , 2009 ) . In the automated driving system , the environment perception component is built using DNN-based predictive models . In real-world applications , the CPS is required to deal with unexpected samples that have not been seen in the training process . Therefore , not only achieving high prediction accuracy under the ideal environment but also providing uncertainty evaluation for real-world data is significant for safety-critical systems ( Henne et al. , 2019 ) . The CPS should prepare some options , such as the rejection of the recommended action to promote the user 's intervention , when the uncertainty is high . Such an interactive system is necessary to build fail-safe systems ( Varshney & Alemzadeh , 2017 ; Varshney , 2016 ) . On the other hand , uncertainty evaluation is useful to enhance the efficiency of learning algorithms , i.e. , samples with high uncertainty are thought to convey important information for training networks . Active data selection based on uncertainty has been studied for a long time under the name of active learning ( David et al. , 1996 ; Gal et al. , 2017 ; Holub et al. , 2008 ; Li & Guo , 2013 ; Shui et al. , 2020 ) . In statistics and machine learning , Bayesian estimation has been commonly exploited for uncertainty evaluation ( Bishop , 2006 ) . In the Bayesian framework , the prior knowledge is represented as the prior distribution of the statistical model . The prior distribution is updated to the posterior distribution based on observations . The epistemic model uncertainty is represented in the prior distribution , and upon observing data , those beliefs can be updated in the form of a posterior distribution , which yields model uncertainty conditioned on observed data . The entropy or the variance is representative of uncertainty measures ( Cover & Thomas , 2006 ) . For complicated models such as DNNs , however , a direct application of Bayesian methods is prohibitive , as the required high-dimensional integration is computationally costly . In deep learning , Bayesian methods are related to stochastic learning algorithms . This relation is utilized to approximate the posterior over complex models . The stochastic method called dropout is a powerful regularization method for DNNs ( Srivastava et al. , 2014 ) . In each layer of the DNN , some units are randomly dropped during learning with stochastic gradient descent methods . Gal & Ghahramani ( 2016a ) revealed that dropout can be interpreted as a variational Bayes method .
Based on this interpretation , they proposed a simple method for sampling DNN parameters from the approximate posterior distribution . Furthermore , the uncertainty of the DNN-based prediction is evaluated using the Monte-Carlo ( MC ) method called MC dropout . While the Bayesian DNN trained using dropout is realized by a simple procedure , the computational overhead is not negligible . In MC dropout , dropout is also used at test time , with a number of repeated feed-forward calculations to effectively sample from the approximate posterior . Hence , naive MC dropout is not well suited to systems demanding real-time responses . In this work , we propose a sampling-free method to evaluate the uncertainty of the DNN-based prediction . Our method is computationally inexpensive compared to MC dropout and provides reliable uncertainty evaluation . In the following , we will first outline related works . Section 3 is devoted to the detailed formulae for calculating the uncertainty . In our method , an upper bound of the variance is propagated in each layer to evaluate the uncertainty of the output . We show that our method alleviates overconfident predictions . This property is shared with scaling methods for the calibration of the class-probability on test samples . In Section 4 , we study the relation between our method and scaling methods . In Section 5 , we demonstrate the computational efficiency and statistical reliability of our method through some numerical experiments using both DNNs and RNNs . 2 RELATED WORKS . The framework of Bayesian inference is often utilized to evaluate the uncertainty of DNN-based predictions . In Bayesian methods , the uncertainty is represented by the predictive distribution defined from the posterior distribution of the weight parameters . MacKay ( 1992 ) proposed a simple approximation method of the posterior distribution for neural networks , and demonstrated that the Bayesian method improves the prediction performance on classification tasks . Graves ( 2011 ) showed that the variational method efficiently works to approximate the posterior distribution of complex neural network models . There are many approaches to evaluate the uncertainty of modern DNNs ( Alex Kendall & Cipolla , 2017 ; Choi et al. , 2018 ; Lu et al. , 2017 ; Le et al. , 2018 ) . We briefly review MC-based methods and sampling-free methods . Monte-Carlo methods based on Stochastic Learning : The randomness in the learning process can be interpreted as a prior distribution . In particular , dropout is a landmark stochastic regularization method to train DNNs ( Srivastava et al. , 2014 ) . Gal & Ghahramani ( 2016a ) proposed a simple method to generate weight parameters from the posterior distribution induced from the prior corresponding to the dropout regularization . The predictive distribution is approximated by MC dropout , which computes the expected output over Monte-Carlo sampling of the weight parameters . Gal & Ghahramani ( 2016b ) reported that MC dropout efficiently works not only for feed-forward DNNs but also for recurrent neural networks ( RNNs ) . Another sampling-based method uses ensemble-based posteriors with different random seeds ( Lakshminarayanan et al. , 2017 ) . However , the computation cost is high , as the bootstrap method requires repeated training of parameters using resampled data .
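For later contrast with the sampling-free approach, the snippet below sketches the MC dropout baseline just described: dropout is kept active at prediction time and the predictive mean and variance are estimated from repeated stochastic forward passes. The architecture and number of samples are arbitrary choices for illustration, not the paper's setup.

```python
# Illustrative MC dropout baseline (not the paper's code): keep dropout active
# at test time and average T stochastic forward passes.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_samples=100):
    model.train()  # keep dropout stochastic even at prediction time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)  # predictive mean and variance

mean, var = mc_dropout_predict(model, torch.randn(4, 16))
print(mean.shape, var.shape)  # torch.Size([4, 1]) torch.Size([4, 1])
```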
Sampling-free methods : Though MC dropout is a simple and practical method to evaluate the uncertainty , a number of feed-forward computations are necessary to approximate the predictive distribution . Recently , some sampling-free methods have been proposed for uncertainty evaluation . A probabilistic network is a direct way to deal with uncertainty . The parameters of the probabilistic model , say the mean and the variance of the Gaussian distribution , are propagated in probabilistic neural networks . Then , the uncertainty evaluation is given by a single feed-forward calculation . Choi et al . ( 2018 ) used the mixture of Gaussian distributions as a probabilistic neural network and Wang et al . ( 2016 ) proposed natural-parameter networks as a class of probabilistic neural networks based on exponential families . For a given input vector , the network outputs the parameters of the distribution . For recurrent neural networks , Hwang et al . ( 2019 ) proposed a variant of the natural-parameter networks . Instead of parameters of statistical models , Wu et al . ( 2019 ) developed a sampling-free method to propagate the first and second order moments of the posterior distribution . Sampling-free methods can evaluate the uncertainty with a one-pass computation for neural networks . However , specialized learning algorithms are required to train the probabilistic networks . Our method is applicable to DNNs and RNNs trained by common learning methods with dropout . Postels et al . ( 2019 ) and Shekhovtsov & Flach ( 2019 ) proposed similar methods that propagate the uncertainty of the network to the output layer . Differently from the past works , our method takes the upper limit of the correlations among the inputs at the affine layer into account when the uncertainty is evaluated . In addition , we show that our method works efficiently even for RNNs . 3 UNCERTAINTY EVALUATION WITH VARIANCE PROPAGATION . In this work , we assume that we have access to the weight parameters of the DNN and the dropout probability used in the training process . As the variance is a common measure of uncertainty , we propose a variance propagation algorithm for the trained DNN . The implementation of our method , called nn2vpbnn , is presented in Section A in the appendix . In our method , we need only the DNN or RNN trained using dropout . Unlike various kinds of probabilistic NNs , we do not need any specialized training procedure to evaluate the uncertainty . This is a great advantage for our implementation . Furthermore , the representative values of the predictive distribution , i.e . the mean and variance , are obtained by a one-pass feed-forward calculation . Hence , we can circumvent iterative Monte-Carlo calculations . 3.1 UNCERTAINTY IN AFFINE LAYER . Let us consider the output of the affine layer $y = W x + b$ for the random input $x$ , where $W = ( W_{ij} ) \in \mathbb{R}^{\ell \times m}$ and $b = ( b_i )_{i=1}^{\ell} \in \mathbb{R}^{\ell}$ . Suppose that the random vector $x$ has the mean vector $\mathbb{E} [ x ]$ and the variance-covariance matrix $( \Sigma_x )_{i , j} = \mathrm{Cov} ( x_i , x_j )$ for $i , j = 1 , \ldots , m$ . Then , the mean vector $\mathbb{E} [ y ]$ and the variance-covariance matrix $\Sigma_y$ of $y$ are given by $\mathbb{E} [ y ] = W \mathbb{E} [ x ] + b$ and $\Sigma_y = W \Sigma_x W^\top$ . As the estimation of the full variance-covariance matrix is not necessarily reliable , we use only the variances of each $x_i$ and an upper bound of the absolute correlation coefficient to evaluate the uncertainty . For $W = ( W_{ij} )$ , the variance $\mathrm{Var} [ y_i ]$ is $\mathrm{Var} [ y_i ] = \sum_j W_{ij}^2 \mathrm{Var} [ x_j ] + \sum_{j , j' : j \neq j'} W_{ij} W_{ij'} \mathrm{Cov} ( x_j , x_{j'} )$ .
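A small numerical sketch of variance propagation through an affine layer may help here; it computes the mean $\mathbb{E} [ y ] = W \mathbb{E} [ x ] + b$ stated above together with the correlation-based upper bound on $\mathrm{Var} [ y_i ]$ that is derived next in equation ( 1 ). The weights, input statistics, and $\rho$ are made-up values, and this is an illustrative reading rather than the authors' nn2vpbnn implementation.

```python
# Sketch of variance propagation through an affine layer y = W x + b
# (illustrative only). The variance bound follows the correlation-based upper
# bound of Eq. (1); rho is an assumed bound on the absolute input correlation.
import numpy as np

def propagate_affine(W, b, mean_x, var_x, rho=0.0):
    mean_y = W @ mean_x + b                            # E[y] = W E[x] + b
    std_x = np.sqrt(var_x)
    term_diag = (W ** 2) @ var_x                       # independent (rho = 0) part
    term_corr = (np.abs(W) @ std_x) ** 2               # fully correlated part
    var_y = (1.0 - rho) * term_diag + rho * term_corr  # Eq. (1) upper bound
    return mean_y, var_y

# Toy usage with assumed numbers
W = np.array([[0.5, -1.0, 0.2], [1.5, 0.3, -0.7]])
b = np.array([0.1, -0.2])
mean_y, var_y = propagate_affine(W, b, mean_x=np.zeros(3),
                                 var_x=np.array([0.1, 0.2, 0.05]), rho=0.3)
print(mean_y, var_y)
```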
Suppose the absolute correlation coefficient among x_1, . . . , x_m is bounded above by ρ, 0 ≤ ρ ≤ 1. Using the relation between the correlation and the variance, we have Var[y_i] ≤ ∑_j W_ij^2 Var[x_j] + ρ ∑_{j≠j′} |W_ij| |W_ij′| √Var(x_j) √Var(x_j′) = (1 − ρ) ∑_j |W_ij|^2 Var[x_j] + ρ (∑_j |W_ij| √Var(x_j))^2, i = 1, . . . , ℓ. (1) Under the independence assumption, i.e., ρ = 0, the minimum upper bound is obtained. A prediction with a small variance leads to overconfident decision making. Hence, upper bounding the variance is important to build fail-safe systems. A simple method for estimating ρ is presented in Section 3.5. Using the above formula, the mean and an upper bound of the variance of y are computed from the mean and an upper bound of the variance of x. In this paper, such a computation is referred to as Variance Propagation, or VP for short. Let us define the variance vector of the m-dimensional random vector x = (x_1, . . . , x_m) ∈ R^m by Var[x] = (Var[x_1], . . . , Var[x_m]) ∈ R^m. Furthermore, we denote the concatenated vector of the mean and variance of z, or its approximation, as U(z), i.e., U(z) = (E[z], Var[z]). The VP at the affine layer is expressed by the function T_aff, U(y) = (m, v) = T_aff(U(x)), (2) where m = W E[x] + b ∈ R^ℓ and each element of v ∈ R^ℓ is defined by equation 1. The average pooling layer, the global average pooling layer (Lin et al., 2013), and the batch normalization layer (Ioffe & Szegedy, 2015) are examples of affine layers. Hence, the VP of the affine layer also works to evaluate the uncertainty of these layers. The distribution of y_i is well approximated by a univariate Gaussian distribution if the correlation among x is small (Wang & Manning, 2013; Wu et al., 2019). Based on this fact, the uncertainty of y_i can be represented by the univariate Gaussian distribution N(E[y_i], Var[y_i]). In our method, the variance Var[y_i] of the approximate Gaussian is given by the variance v in equation 2. This paper proposes a sampling-free technique based on variance propagation to model predictive distributions of deep learning models. Estimating uncertainty of deep learning models is an important line of research for understanding the reliability of predictions and ensuring robustness to out-of-distribution data. Results are shown using synthetic data, perplexity analysis for a language modeling task and out-of-distribution detection performance using a convolutional network. SP:3a1d7f7165762299ba2d9bab4144576660b9a784 Private Post-GAN Boosting 1 INTRODUCTION. The vast collection of detailed personal data, including everything from medical history to voting records, to GPS traces, to online behavior, promises to enable researchers from many disciplines to conduct insightful data analyses. However, many of these datasets contain sensitive personal information, and there is a growing tension between data analyses and data privacy. To protect the privacy of individual citizens, many organizations, including Google (Erlingsson et al., 2014), Microsoft (Ding et al., 2017), Apple (Differential Privacy Team, Apple, 2017), and more recently the 2020 US Census (Abowd, 2018), have adopted differential privacy (Dwork et al., 2006) as a mathematically rigorous privacy measure. However, working with noisy statistics released under differential privacy requires training.
A natural and promising approach to tackle this challenge is to release differentially private synthetic data—a privatized version of the dataset that consists of fake data records and that approximates the real dataset on important statistical properties of interest . Since they already satisfy differential privacy , synthetic data enable researchers to interact with the data freely and to perform the same analyses even without expertise in differential privacy . A recent line of work ( Beaulieu-Jones et al. , 2019 ; Xie et al. , 2018 ; Yoon et al. , 2019 ) studies how one can generate synthetic data by incorporating differential privacy into generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) . Although GANs provide a powerful framework for synthetic data , they are also notoriously hard to train and privacy constraint imposes even more difficulty . Due to the added noise in the private gradient updates , it is often difficult to reach convergence with private training . In this paper , we study how to improve the quality of the synthetic data produced by private GANs . Unlike much of the prior work that focuses on fine-tuning of network architectures and training techniques , we propose Private post-GAN boosting ( Private PGB ) —a differentially private method that boosts the quality of the generated samples after the training of a GAN . Our method can be viewed as a simple and practical amplification scheme that improves the distribution from any ex- isting black-box GAN training method – private or not . We take inspiration from an empirical observation in Beaulieu-Jones et al . ( 2019 ) that even though the generator distribution at the end of the private training may be a poor approximation to the data distribution ( due to e.g . mode collapse ) , there may exist a high-quality mixture distribution that is given by several generators over different training epochs . PGB is a principled method for finding such a mixture at a moderate privacy cost and without any modification of the GAN training procedure . To derive PGB , we first formulate a two-player zero-sum game , called post-GAN zero-sum game , between a synthetic data player , who chooses a distribution over generated samples over training epochs to emulate the real dataset , and a distinguisher player , who tries to distinguish generated samples from real samples with the set of discriminators over training epochs . We show that under a “ support coverage ” assumption the synthetic data player ’ s mixed strategy ( given by a distribution over the generated samples ) at an equilibrium can successfully “ fool ” the distinguisher–that is , no mixture of discriminators can distinguish the real versus fake examples better than random guessing . While the strict assumption does not always hold in practice , we demonstrate empirically that the synthetic data player ’ s equilibrium mixture consistently improves the GAN distribution . The Private PGB method then privately computes an approximate equilibrium in the game . The algorithm can be viewed as a computationally efficient variant of MWEM ( Hardt & Rothblum , 2010 ; Hardt et al. , 2012 ) , which is an inefficient query release algorithm with near-optimal sample complexity . Since MWEM maintains a distribution over exponentially many “ experts ” ( the set of all possible records in the data domain ) , it runs in time exponential in the dimension of the data . 
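As a rough, non-private sketch of this idea, the post-GAN zero-sum game can be approximated by multiplicative weights over the pooled generated samples, with the distinguisher repeatedly best-responding with the saved discriminator that best separates real data from the current synthetic mixture (all names are illustrative; the private variant would additionally randomize these selections under differential privacy):

```python
import numpy as np

def post_gan_boost(fake_samples, discriminators, real_data,
                   n_rounds=50, eta=0.5):
    """fake_samples: array of generated samples pooled over training epochs.
    discriminators: list of callables mapping a batch to P(real) per sample.
    Returns a distribution (weights) over the pooled generated samples."""
    n = len(fake_samples)
    log_w = np.zeros(n)                 # synthetic-data player's weights
    avg_dist = np.zeros(n)
    for _ in range(n_rounds):
        dist = np.exp(log_w - log_w.max())
        dist /= dist.sum()
        # Distinguisher best-responds: pick the discriminator that best
        # separates real data from the current synthetic mixture.
        scores = [d(real_data).mean() - np.dot(dist, d(fake_samples))
                  for d in discriminators]
        d_best = discriminators[int(np.argmax(scores))]
        # Multiplicative-weights update: upweight samples the chosen
        # discriminator still believes are real.
        log_w += eta * d_best(fake_samples)
        avg_dist += dist
    return avg_dist / n_rounds          # the player's average mixture
```

Sampling the final synthetic dataset then amounts to drawing from the pooled generated samples according to the returned weights.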
In contrast , we rely on private GAN to reduce the support to only contain the set of privately generated samples , which makes PGB tractable even for high-dimensional data . We also provide an extension of the PGB method by incorporating the technique of discriminator rejection sampling ( Azadi et al. , 2019 ; Turner et al. , 2019 ) . We leverage the fact that the distinguisher ’ s equilibrium strategy , which is a mixture of discriminators , can often accurately predict which samples are unlikely and thus can be used as a rejection sampler . This allows us to further improve the PGB distribution with rejection sampling without any additional privacy cost since differential privacy is preserved under post-processing . Our Private PGB method also has a natural non-private variant , which we show improves the GAN training without privacy constraints . We empirically evaluate both the Private and Non-Private PGB methods on several tasks . To visualize the effects of our methods , we first evaluate our methods on a two-dimensional toy dataset with samples drawn from a mixture of 25 Gaussian distributions . We define a relevant quality score function and show that the both Private and Non-Private PGB methods improve the score of the samples generated from GAN . We then show that the Non-Private PGB method can also be used to improve the quality of images generated by GANs using the MNIST dataset . Finally , we focus on applications with high relevance for privacy-protection . First we synthesize US Census datasets and demonstrate that the PGB method can improve the generator distribution on several statistical measures , including 3-way marginal distributions and pMSE . Secondly , we evaluate the PGB methods on a dataset with a natural classification task . We train predictive models on samples from Private PGB and samples from a private GAN ( without PGB ) , and show that PGB consistently improves the model accuracy on real out-of-sample test data . Related work . Our PGB method can be viewed as a modular boosting method that can improve on a growing line of work on differentially private GANs ( Beaulieu-Jones et al. , 2019 ; Xie et al. , 2018 ; Frigerio et al. , 2019 ; Torkzadehmahani et al. , 2020 ) . To obtain formal privacy guarantees , these algorithms optimize the discriminators in GAN under differential privacy , by using private SGD , RMSprop , or Adam methods , and track the privacy cost using moments accounting Abadi et al . ( 2016 ) ; Mironov ( 2017 ) . Yoon et al . ( 2019 ) give a private GAN training method by adapting ideas from the PATE framework ( Papernot et al. , 2018 ) . Our PGB method is inspired by the Private Multiplicative Weigths method ( Hardt & Rothblum , 2010 ) and its more practical variant MWEM ( Hardt et al. , 2012 ) , which answer a large collection of statistical queries by releasing a synthetic dataset . Our work also draws upon two recent techniques ( Turner et al . ( 2019 ) and Azadi et al . ( 2019 ) ) that use the discriminator as a rejection sampler to improve the generator distribution . We apply their technique by using the mixture discriminator computed in PGB as the rejection sampler . There has also been work that applies the idea of boosting to ( non-private ) GANs . For example , Arora et al . ( 2017 ) and Hoang et al . ( 2018 ) propose methods that directly train a mixture of generators and discriminators , and Tolstikhin et al . 
( 2017 ) proposes AdaGAN that reweighes the real examples during training similarly to what is done in AdaBoost ( Freund & Schapire , 1997 ) . Both of these methods may be hard to make differentially private : they either require substantially more privacy budget to train a collection of discriminators or increase the weights on a subset of examples , which requires more adding more noise when computing private gradients . In contrast , our PGB method boosts the generated samples post training and does not make modifications to the GAN training procedure . 2 PRELIMINARIES . Let X denote the data domain of all possible observations in a given context . Let pd be a distribution over X . We say that two datasets X , X ′ ∈ Xn are adjacent , denoted by X ∼ X ′ , if they differ by at most one observation . We will write pX to denote the empirical distribution over X . Definition 1 ( Differential Privacy ( DP ) ( Dwork et al. , 2006 ) ) . A randomized algorithm A : Xn → R with output domain R ( e.g . all generative models ) is ( ε , δ ) -differentially private ( DP ) if for all adjacent datasets X , X ′ ∈ Xn and for all S ⊆ R : P ( A ( X ) ∈ S ) ≤ eεP ( A ( X ′ ) ∈ S ) + δ . A very nice property of differential privacy is that it is preserved under post-processing . Lemma 1 ( Post-processing ) . LetM be an ( ε , δ ) -differentially private algorithm with output range R and f : R→ R′ be any mapping , the composition f ◦M is ( ε , δ ) -differentially private . As a result , any subsequent analyses conducted on DP synthetic data also satisfy DP . The exponential mechanism ( McSherry & Talwar , 2007 ) is a private mechanism for selecting among the best of a discrete set of alternativesR , where “ best ” is defined by a quality function q : Xn×R → R that measures the quality of the result r for the dataset X . The sensitivity of the quality score q is defined as ∆ ( q ) = maxr∈RmaxX∼X′ |q ( X , r ) −q ( X ′ , r ) | . Then given a quality score q and privacy parameter ε , the exponential mechanismME ( q , ε , X ) simply samples a random alternative from the rangeR such that the probability of selecting each r is proportional to exp ( εq ( X , r ) / ( 2∆ ( q ) ) ) . 2.1 DIFFERENTIALLY PRIVATE GAN . The framework of generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) consists of two types of neural networks : generators and discriminators . A generator G is a function that maps random vectors z ∈ Z drawn from a prior distribution pz to a sample G ( z ) ∈ X . A discriminator D takes an observation x ∈ X as input and computes a probability D ( x ) that the observation is real . Each observation is either drawn from the underlying distribution pd or the induced distribution pg from a generator . The training of GAN involves solving the following joint optimization over the discriminator and generator : min G max D Ex∼pX [ f ( D ( x ) ) ] + Ez∼pz [ f ( 1−D ( G ( z ) ) ) ] where f : [ 0 , 1 ] → R is a monotone function . For example , in standard GAN , f ( a ) = log a , and in Wasserstein GAN ( Arjovsky et al. , 2017 ) , f ( a ) = a . The standard ( non-private ) algorithm iterates between optimizing the parameters of the discriminator and the generator based on the loss functions : LD = −Ex∼pX [ f ( D ( x ) ) ] − Ez∼pz [ f ( 1−D ( G ( z ) ) ) ] , LG = Ez∼pz [ f ( 1−D ( G ( z ) ) ) ] The private algorithm for training GAN also performs the same alternating optimization , but it optimizes the discriminator under differential privacy while keeping the generator optimization the same . 
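For reference, the exponential mechanism defined in the preliminaries above can be implemented in a few lines (the quality scores and sensitivity below are placeholders):

```python
import numpy as np

def exponential_mechanism(qualities, epsilon, sensitivity, rng=None):
    """Sample one of len(qualities) candidates with probability
    proportional to exp(eps * q / (2 * sensitivity))."""
    rng = rng or np.random.default_rng()
    logits = epsilon * np.asarray(qualities) / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(qualities), p=probs)

# e.g., privately pick the "best" of four candidates with epsilon = 0.5
idx = exponential_mechanism([0.1, 0.7, 0.4, 0.65], epsilon=0.5, sensitivity=1.0)
```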
In general, the training proceeds over epochs τ = 1, . . . , N, and at the end of each epoch τ the algorithm obtains a discriminator D_τ and a generator G_τ by optimizing the respective loss functions. In Beaulieu-Jones et al. (2019); Xie et al. (2018), the private optimization of the discriminators is done by running the private SGD method of Abadi et al. (2016) or its variants. Yoon et al. (2019) performs the private optimization by incorporating the PATE framework (Papernot et al., 2018). For all of these private GAN methods, the entire sequence of discriminators {D_1, . . . , D_N} satisfies privacy, and thus the sequence of generators {G_1, . . . , G_N} is also private since they can be viewed as post-processing of the discriminators. Our PGB method is agnostic to the exact private GAN training method. This paper studies differentially private synthetic dataset generation. Unlike previous DP-based GAN models, this paper aims to boost the sample quality after the training stage. In particular, the final synthetic dataset is sampled from the sequence of generators obtained during GAN training. The distribution is obtained by a private two-player game between the privately selected discriminator and a sampler from the mixture of generators. The results are demonstrated on Gaussian data and tabular data. SP:72d1283f3602edc22896934271fcec5b03f25d9e A Near-Optimal Recipe for Debiasing Trained Machine Learning Models 1 INTRODUCTION. Machine learning is increasingly applied to critical decisions which can have a lasting impact on individual lives, such as credit lending (Bruckner, 2018), medical applications (Deo, 2015), and criminal justice (Brennan et al., 2009). Consequently, it is imperative to understand and improve the degree of bias of such automated decision-making. Unfortunately, despite the fact that bias (or "fairness") is a central concept in our society today, it is difficult to define it in precise terms. In fact, as people perceive ethical matters differently depending on a plethora of factors including geographical location or culture (Awad et al., 2018), no universally agreed-upon definition of bias exists. Moreover, the definition of bias may depend on the application and might even be ignored in favor of accuracy when the stakes are high, such as in medical diagnosis (Kleinberg et al., 2017; Ingold and Soper, 2016). As such, it is not surprising that several definitions of "unbiased classification" have been introduced. These include statistical parity (Dwork et al., 2012; Zafar et al., 2017a), equality of opportunity (Hardt et al., 2016), and equalized odds (Hardt et al., 2016; Kleinberg et al., 2017). Unfortunately, such definitions are not generally compatible (Chouldechova, 2017) and some might even be in conflict with calibration (Kleinberg et al., 2017). In addition, because fairness is a societal concept, it does not necessarily translate into a statistical criterion (Chouldechova, 2017; Dixon et al., 2018). Statistical parity. Let X be an instance space and let Y = {0, 1} be the target set in a standard binary classification problem. In the fair classification setting, we may further assume the existence of a (possibly randomized) sensitive attribute s : X → {0, 1, . . . , K}, where s(x) = k if and only if x ∈ X_k for some total partition X = ∪_k X_k. For example, X might correspond to the set of job applicants while s indicates their gender.
Here, the sensitive attribute can be randomized if, for instance, the gender of an applicant is not a deterministic function of the full instance x ∈ X (e.g., number of publications, years of experience, etc.). Then, a commonly used criterion for fairness is to require similar mean outcomes across the sensitive attribute. This property is well captured through the notion of statistical parity (a.k.a. demographic parity) (Corbett-Davies et al., 2017; Dwork et al., 2012; Zafar et al., 2017a; Mehrabi et al., 2019): Definition 1 (Statistical Parity). Let X be an instance space and X = ∪_k X_k be a total partition of X. A classifier f : X → {0, 1} satisfies statistical parity across all groups X_1, . . . , X_K if: max_{k ∈ {1,2,...,K}} E_x[f(x) | x ∈ X_k] − min_{k ∈ {1,2,...,K}} E_x[f(x) | x ∈ X_k] ≤ ε. To motivate and further clarify the definition, we showcase empirical results on the Adult benchmark dataset (Blake and Merz, 1998) in Figure 1. When tasked with predicting whether the income of individuals is above $50K per year, all considered classifiers exhibit gender-related bias. One way of removing such bias is to enforce statistical parity across genders. Crucially, however, without taking ethnicity into account, different demographic groups may experience different outcomes. In fact, gender bias can actually increase in some minority groups after enforcing statistical parity. This can be fixed by redefining the sensitive attribute to be the cross product of both gender and ethnicity (green bars in Figure 1). Our main contribution is to present a near-optimal recipe for debiasing models, including deep neural networks, according to Definition 1. Specifically, we formulate the task of debiasing learned models as a regularized optimization problem that is solved efficiently using the projected SGD method. We show how the algorithm produces thresholding rules with randomization near the thresholds, where the width of randomization is controlled by the regularization parameter. We also show that randomization near the threshold is necessary for Bayes risk consistency. While we focus on binary sensitive attributes in our experiments in Section 5, our algorithm and its theoretical guarantees continue to hold for non-binary sensitive attributes as well. Statement of Contribution. 1. We derive a near-optimal post-processing algorithm for debiasing learned models (Section 3). 2. We prove theoretical guarantees for the proposed algorithm, including a proof of correctness and an explicit bound on the Bayes excess risk (Section 4). 3. We empirically validate the proposed algorithm on benchmark datasets across both classical algorithms and modern DNN architectures. Our experiments demonstrate that the proposed algorithm significantly outperforms previous post-processing methods (Section 5). In Appendix E, we also show how the proposed algorithm can be modified to handle other criteria of bias as well. 2 RELATED WORK. Algorithms for fair machine learning can be broadly classified into three groups: (1) pre-processing methods, (2) in-processing methods, and (3) post-processing methods (Zafar et al., 2019). Pre-processing algorithms transform the data into a different representation such that any classifier trained on it will not exhibit bias. This includes methods for learning a fair representation (Zemel et al., 2013; Lum and Johndrow, 2016; Bolukbasi et al., 2016; Calmon et al., 2017; Madras et al.
, 2018 ; Kamiran and Calders , 2012 ) , label manipulation ( Kamiran and Calders , 2009 ) , data augmentation ( Dixon et al. , 2018 ) , or disentanglement ( Locatello et al. , 2019 ) . On the other hand , in-processing methods constrain the behavior of learning algorithms in order to control bias . This includes methods based on adversarial learning ( Zhang et al. , 2018 ) and constraint-based classification , such as by incorporating constrains on the decision margin ( Zafar et al. , 2019 ) or features ( Grgić-Hlača et al. , 2018 ) . Agarwal et al . ( 2018 ) showed that the task of learning an unbiased classifier could be reduced to a sequence of cost-sensitive classification problems , which could be applied to any black-box classifier . One caveat of the latter approach is that it requires solving a linear program ( LP ) and retraining classifiers , such as neural networks , many times before convergence . The algorithm we propose in this paper is a post-processing method , which can be justified theoretically ( Corbett-Davies et al. , 2017 ; Hardt et al. , 2016 ; Menon and Williamson , 2018 ; Celis et al. , 2019 ) . Fish et al . ( 2016 ) and Woodworth et al . ( 2017 ) fall under this category . However , the former only provides generalization guarantees without consistency results while the latter proposes a twostage approach that requires changes to the original training algorithm . Kamiran et al . ( 2012 ) also proposes a post-processing algorithm , called Reject Option Classifier ( ROC ) , without providing any theoretical guarantees . In contrast , our algorithm is Bayes consistent and does not alter the original classification method . In Celis et al . ( 2019 ) and Menon and Williamson ( 2018 ) , instance-dependent thresholding rules are also learned . However , our algorithm also learns to randomize around the threshold ( Figure 2 ( a ) ) and this randomization is key to our algorithm both theoretically as well as experimentally ( Appendix C and Section 5 ) . Hardt et al . ( 2016 ) learns a randomized post-processing rule but our proposed algorithm outperforms it in all of our experiments ( Section 5 ) . Woodworth et al . ( 2017 ) showed that the post-processing approach can , sometimes , be highly suboptimal . Nevertheless , the latter result does not contradict the statement that our post-processing rule is near-optimal because we assume that the original classifier outputs a monotone transformation of some approximation to the posterior probability p ( y = 1 | x ) ( e.g . margin or softmax output ) whereas Woodworth et al . ( 2017 ) assumed in their construction that the post-processing rule had access to the binary predictions only . We argue that the proposed algorithm has distinct advantages , particularly for deep neural networks ( DNNs ) . First , stochastic convex optimization methods are well-understood and can scale well to massive amounts of data ( Bottou , 2010 ) , which is often the case in deep learning today . Second , the guarantees provided by our algorithm hold w.r.t . the binary predictions instead of using a proxy , such as the margin as in some previous works ( Zafar et al. , 2017b ; 2019 ) . Third , unlike previous reduction methods that would require retraining a deep neural network several times until convergence ( Agarwal et al. , 2018 ) , which can be prohibitively expensive , our algorithm operates on learned models that are trained once and does not require retraining . 
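As a point of reference for the methods discussed in this section, the statistical parity gap of Definition 1 is simple to measure on held-out predictions; a minimal sketch (array names are illustrative):

```python
import numpy as np

def statistical_parity_gap(predictions, groups):
    """predictions: binary 0/1 predictions f(x); groups: group index s(x).
    Returns max_k E[f | group k] - min_k E[f | group k] (Definition 1)."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = [predictions[groups == k].mean() for k in np.unique(groups)]
    return max(rates) - min(rates)

# A classifier satisfies the parity constraint if the gap is at most epsilon.
gap = statistical_parity_gap([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1])
```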
Besides developing algorithms for fair classification, several recent works focused on other related aspects, such as proposing new definitions of fairness, e.g., demographic parity (Dwork et al., 2012; Mehrabi et al., 2019), equalized odds (Hardt et al., 2016), equality of opportunity/disparate mistreatment (Zafar et al., 2017a; Hardt et al., 2016), and individual fairness (Dwork et al., 2012). Recent works have also established several impossibility results related to fair classification, such as Kleinberg et al. (2017); Chouldechova (2017). In our case, we derive a new impossibility result that holds for any deterministic binary classifier and relate it to the task of controlling the covariance between the classifier's predictions and the sensitive attribute (Appendix E). 3 NEAR-OPTIMAL ALGORITHM FOR STATISTICAL PARITY. Notation. We reserve boldface letters for random variables (e.g., x), small letters for instances (e.g., x), capital letters for sets (e.g., X), and calligraphic typeface for universal sets (e.g., the instance space X). Given a set S, 1_S(x) ∈ {0, 1} is the characteristic function indicating whether x ∈ S. We denote by [n] the set of integers {1, . . . , n} and [x]_+ = max{0, x}. Algorithm. Given a classifier f : X → [−1, +1], our goal is to post-process the predictions made by f in order to control the bias with respect to a sensitive attribute s : X → [K] as in Definition 1. To this end, instead of learning a deterministic classifier, we consider randomized prediction rules of the form h̃ : X × {1, 2, . . . , K} × [−1, 1] → [0, 1], where h̃(x) represents the probability of predicting the positive class given (i) the instance x ∈ X, (ii) the sensitive attribute s(x), and (iii) the classifier's output f(x). As discussed in Appendix B, for a post-processing rule h̃(x), and for each group X_k ⊆ X, the fairness constraint in Definition 1 can be written as |E_x[h̃(x) | x ∈ X_k] − ρ| ≤ ε, where ρ ∈ [0, 1] is a hyper-parameter tuned via a validation dataset. On the other hand, minimizing the probability of altering the predictions of the original classifier can be achieved by maximizing the inner product E_x[h̃(x) · f(x)]. Instead of optimizing this quantity directly, which would lead to a pure thresholding rule, we minimize the regularized objective (γ/2) E_x[h̃(x)^2] − E_x[h̃(x) · f(x)] for some regularization parameter γ > 0. This regularization leads to randomization around the threshold, which we show to be critical, both theoretically (Section 4 and Appendix C) and experimentally (Section 5). Using Lagrange duality, we show that the solution reduces to the update rules in Equation 2 with optimization variables {λ_k, µ_k}_{k ∈ [K]}, and the corresponding predictor, which outputs +1 for group X_k with probability h̃_γ(x), is given by h̃_γ(x) = 0 if f(x) ≤ λ_k − µ_k; h̃_γ(x) = (f(x) − λ_k + µ_k)/γ if λ_k − µ_k ≤ f(x) ≤ λ_k − µ_k + γ; and h̃_γ(x) = 1 if f(x) ≥ λ_k − µ_k + γ, (1) where ξ_γ is given by Eq. (3).
Update rules. To learn these parameters, one can apply the following update rules (Appendix B): λ_{s(x)} ← max{0, λ_{s(x)} − η(2ε + ρ + ∂/∂λ_{s(x)} ξ_γ(f(x) − (λ_{s(x)} − µ_{s(x)})))} and µ_{s(x)} ← max{0, µ_{s(x)} − η(2ε − ρ + ∂/∂µ_{s(x)} ξ_γ(f(x) − (λ_{s(x)} − µ_{s(x)})))}, (2) where, again, ρ ∈ [0, 1] is a hyperparameter tuned via a validation dataset, s : X → [K] is the sensitive attribute, and γ > 0 is a regularization parameter that controls the level of randomization. In addition, the function ξ_γ : R → R_+ is given by: ξ_γ(w) = (w^2 / (2γ)) · I{0 ≤ w ≤ γ} + (w − γ/2) · I{w > γ}. (3) Note that ξ_γ is convex and its derivative ξ′_γ is (1/γ)-Lipschitz continuous; it can be interpreted as a differentiable approximation to the ReLU unit (Nair and Hinton, 2010). A full pseudocode of the proposed algorithm is presented in Appendix A.
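A minimal sketch of the randomized rule in Eq. (1) and the smoothed hinge ξ_γ of Eq. (3), assuming the dual variables λ_k, µ_k have already been learned (names are placeholders; the full training procedure is the pseudocode referenced above):

```python
import numpy as np

def xi(w, gamma):
    """Smoothed ReLU of Eq. (3): w^2/(2*gamma) on [0, gamma], w - gamma/2 beyond.
    Its derivative is what appears inside the update rules of Eq. (2)."""
    w = np.asarray(w, dtype=float)
    return np.where(w <= 0, 0.0,
           np.where(w <= gamma, w ** 2 / (2 * gamma), w - gamma / 2))

def h_tilde(f_x, group, lam, mu, gamma):
    """Probability of predicting +1 under the randomized rule of Eq. (1).
    f_x: classifier scores in [-1, 1]; group: sensitive-attribute index s(x);
    lam, mu: arrays of per-group dual variables."""
    t = lam[group] - mu[group]              # group-dependent threshold
    p = (np.asarray(f_x) - t) / gamma       # linear ramp of width gamma
    return np.clip(p, 0.0, 1.0)

rng = np.random.default_rng(0)

def predict(f_x, group, lam, mu, gamma):
    # Randomized prediction: output 1 with probability h_tilde(x).
    return (rng.random(len(f_x)) < h_tilde(f_x, group, lam, mu, gamma)).astype(int)
```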
In this paper, the authors propose a post-processing method for removing bias from a trained model. The bias is defined as conditional statistical parity — for a given partitioning of the data, the predicted label should be conditionally uncorrelated with the sensitive (bias inducing) attribute for each partition. The authors relax this strong requirement to an epsilon-constraint on the conditional covariance for each partition. As an example, race (sensitive attribute) should be conditionally uncorrelated to whether an individual will default on their loan (predicted target) for each city (data partition). The authors propose a constrained optimization problem that takes the input data, sensitive attribute, partitioning and a trained model to yield a probabilistic decision rule. Subsequently, they propose an iterative solution to the problem, proving some theoretical properties as well as showing how the method compares to different baselines.
SP:a6280b6605e621403de6ac4c3fc80fa71184ab6d
DeLighT: Deep and Light-weight Transformer
This paper presents a variant of the Transformer where low-dimensional matrix multiplications and single-head attention are used. Stacked group linear transformations (GLTs) are applied to the input of each layer to perform dimension growth and then reduction. The paper is well-written and easy to follow. Experiments demonstrate that the proposed architecture matches or improves the performance of baseline Transformers with fewer parameters.
SP:90ffef024018f59b3bde23aa2e2a4677602d41e8
On the mapping between Hopfield networks and Restricted Boltzmann Machines
1 INTRODUCTION . Hopfield networks ( HNs ) ( Hopfield , 1982 ; Amit , 1989 ) are a classical neural network architecture that can store prescribed patterns as fixed-point attractors of a dynamical system . In their standard formulation with binary valued units , HNs can be regarded as spin glasses with pairwise interactions Jij that are fully determined by the patterns to be encoded . HNs have been extensively studied in the statistical mechanics literature ( e.g . ( Kanter & Sompolinsky , 1987 ; Amit et al. , 1985 ) ) , where they can be seen as an interpolation between the ferromagnetic Ising model ( p = 1 pattern ) and the Sherrington-Kirkpatrick spin glass model ( many random patterns ) ( Kirkpatrick & Sherrington , 1978 ; Barra & Guerra , 2008 ) . By encoding patterns as dynamical attractors which are robust to perturbations , HNs provide an elegant solution to pattern recognition and classification tasks . They are considered the prototypical attractor neural network , and are the historical precursor to modern recurrent neural networks . Concurrently , spin glasses have been used extensively in the historical machine learning literature where they comprise a sub-class of “ Boltzmann machines ” ( BMs ) ( Ackley et al. , 1985 ) . Given a collection of data samples drawn from a data distribution , one is generally interested in “ training ” a BM by tuning its weights Jij such that its equilibrium distribution can reproduce the data distribution as closely as possible ( Hinton , 2012 ) . The resulting optimization problem is dramatically simplified when the network has a two-layer structure where each layer has no self-interactions , so that there are only inter-layer connections ( Hinton , 2012 ) ( see Fig . 1 ) . This architecture is known as a Restricted Boltzmann Machine ( RBM ) , and the two layers are sometimes called the visible layer and the hidden layer . The visible layer characteristics ( dimension , type of units ) are determined by the training data , whereas the hidden layer can have binary or continuous units and the dimension is chosen somewhat arbitrarily . In addition to generative modelling , RBMs and their multi-layer extensions have been used for a variety of learning tasks , such as classification , feature extraction , and dimension reduction ( e.g . Salakhutdinov et al . ( 2007 ) ; Hinton & Salakhutdinov ( 2006 ) ) . There has been extensive interest in the relationship between HNs and RBMs , as both are built on the Ising model formalism and fulfill similar roles , with the aim of better understanding RBM behaviour and potentially improving performance . Various results in this area have been recently reviewed ( Marullo & Agliari , 2021 ) . In particular , an exact mapping between HNs and RBMs has been previously noted for the special case of uncorrelated ( orthogonal ) patterns ( Barra et al. , 2012 ) . Several related models have since been studied ( Agliari et al. , 2013 ; Mézard , 2017 ) , which partially relax the uncorrelated pattern constraint . However , the patterns observed in most real datasets exhibit significant correlations , precluding the use of these approaches . In this paper , we demonstrate exact correspondence between HNs and RBMs in the case of correlated pattern HNs . Specifically , we show that any HN with N binary units and p < N arbitrary ( i.e . non-orthogonal ) binary patterns encoded via the projection rule ( Kanter & Sompolinsky , 1987 ; Personnaz et al. 
, 1986), can be transformed into an RBM with N binary and p Gaussian variables. We then characterize when the reverse map from RBMs to HNs can be made. We consider a practical example using the mapping, and discuss the potential importance of this correspondence for the training and interpretability of RBMs. 2 RESULTS. We first introduce the classical solution to the problem of encoding N-dimensional binary {−1, +1} vectors {ξ^µ}_{µ=1}^p, termed "patterns", as global minima of a pairwise spin glass H(s) = −(1/2) s^T J s. This is often framed as a pattern retrieval problem, where the goal is to specify or learn J_ij such that an energy-decreasing update rule for H(s) converges to the patterns (i.e., they are stable fixed points). Consider the N × p matrix ξ with the p patterns as its columns. Then the classical prescription known as the projection rule (or pseudo-inverse rule) (Kanter & Sompolinsky, 1987; Personnaz et al., 1986), J = ξ(ξ^T ξ)^{−1} ξ^T, guarantees that the p patterns will be global minima of H(s). This resulting spin model is commonly called a (projection) Hopfield network, and has the Hamiltonian H(s) = −(1/2) s^T ξ(ξ^T ξ)^{−1} ξ^T s. (1) Note that invertibility of ξ^T ξ is guaranteed as long as the patterns are linearly independent (we therefore require p ≤ N). Also note that in the special (rare) case of orthogonal patterns ξ^µ · ξ^ν = N δ_{µν} (also called "uncorrelated"), studied in the previous work (Barra et al., 2012), one has ξ^T ξ = N I and so the pseudo-inverse interactions reduce to the well-known Hebbian form J = (1/N) ξ ξ^T (the properties of which are studied extensively in Amit et al. (1985)). Additional details on the projection HN Eq. (1) are provided in Appendix A. To make progress in analyzing Eq. (1), we first consider a transformation of ξ which eliminates the inverse factor. 2.1 MAPPING A HOPFIELD NETWORK TO A RESTRICTED BOLTZMANN MACHINE. In order to obtain a more useful representation of the quadratic form Eq. (1) (for our purposes), we utilize the QR decomposition (Schott & Stewart, 1999) of ξ to "orthogonalize" the patterns, ξ = QR, (2) with Q ∈ R^{N×p}, R ∈ R^{p×p}. The columns of Q are the orthogonalized patterns, and form an orthonormal basis (of non-binary vectors) for the p-dimensional subspace spanned by the binary patterns. R is upper triangular, and if its diagonals are held positive then Q and R are both unique (Schott & Stewart, 1999). Note that both the order and sign of the columns of ξ are irrelevant for HN pattern recall, so there are n = 2^p · p! possible Q, R pairs. Fixing a pattern ordering, we can use the orthogonality of Q to re-write the interaction matrix as J = ξ(ξ^T ξ)^{−1} ξ^T = QR(R^T R)^{−1} R^T Q^T = QQ^T (3) (the last equality follows from (R^T R)^{−1} = R^{−1}(R^T)^{−1}). Eq. (3) resembles the simple Hebbian rule but with non-binary orthogonal patterns. Defining q ≡ Q^T s in analogy to the classical pattern overlap parameter m ≡ (1/N) ξ^T s (Amit et al., 1985), we have H(s) = −(1/2) s^T QQ^T s = −(1/2) q(s) · q(s). (4) Using a Gaussian integral as in Amit et al. (1985); Barra et al. (2012); Mézard (2017) to transform (exactly) the partition function Z ≡ ∑_{s} e^{−βH(s)} of Eq. (1), we get Z = ∑_{s} e^{(1/2)(βq)^T (β^{−1} I)(βq)} = ∑_{s} ∫ e^{−(β/2) ∑_µ λ_µ^2 + β ∑_µ λ_µ ∑_i Q_{iµ} s_i} ∏_µ dλ_µ / √(2π/β).
(5) The second line can be seen as the partition function of an expanded Hamiltonian for the N (binary) original variables {s_i} and the p (continuous) auxiliary variables {λ_µ}, i.e., H_RBM({s_i}, {λ_µ}) = (1/2) ∑_µ λ_µ^2 − ∑_µ ∑_i Q_{iµ} s_i λ_µ. (6) Note that this is the Hamiltonian of a binary-continuous RBM with inter-layer weights Q_{iµ}. The original HN is therefore equivalent to an RBM described by Eq. (6) (depicted in Fig. 1). As mentioned above, there are many RBMs which correspond to the same HN due to the combinatorics of choosing Q. In fact, instead of the QR factorization one can use any decomposition which satisfies J = UU^T, with orthogonal U ∈ R^{N×p} (see Appendix B), in which case U acts as the RBM weights. Also note that the inclusion of an applied field term −∑_i b_i s_i in Eq. (1) trivially carries through the procedure, i.e., H̃_RBM({s_i}, {λ_µ}) = (1/2) ∑_µ λ_µ^2 − ∑_i b_i s_i − ∑_µ ∑_i Q_{iµ} s_i λ_µ. Instead of working with the joint form Eq. (6), one could take a different direction from Eq. (5) and sum out the original variables {s_i}, i.e., Z = ∫ e^{−(β/2) ∑_µ λ_µ^2} 2^N ∏_i cosh(β ∑_µ Q_{iµ} λ_µ) ∏_µ dλ_µ / √(2π/β). (7) This continuous, p-dimensional representation is useful for numerical estimation of Z (Section 3.1). We may write Eq. (7) as Z = ∫ e^{−F_0(λ)} ∏_µ dλ_µ, where F_0({λ_µ}) = (1/2) ∑_µ λ_µ^2 − (1/β) ∑_i ln cosh(β ∑_µ Q_{iµ} λ_µ). (8) Eq. (8) is an approximate Lyapunov function for the mean dynamics of {λ_µ}; ∇_λ F_0 describes the effective behaviour of the stochastic dynamics of the N binary variables {s_i} at temperature β^{−1}. 2.2 COMMENTS ON THE REVERSE MAPPING. With the mapping from HNs (with correlated patterns) to RBMs established, we now consider the reverse direction. Consider a binary-continuous RBM with inter-layer weights W_{iµ} which couple a visible layer of N binary variables {s_i} to a hidden layer of p continuous variables {λ_µ}, H(s, λ) = (1/2) ∑_µ λ_µ^2 − ∑_i b_i s_i − ∑_µ ∑_i W_{iµ} s_i λ_µ. (9) Here we use W instead of Q for the RBM weights to emphasize that the RBM is not necessarily an HN. First, following Mehta et al. (2019), we transform the RBM to a BM with binary states by integrating out the hidden variables. The corresponding Hamiltonian for the visible units alone is (see Appendix D.1 for details) H̃(s) = −∑_i b_i s_i − (1/2) ∑_i ∑_j ∑_µ W_{iµ} W_{jµ} s_i s_j, (10) a pairwise Ising model with a particular coupling structure J_ij = ∑_µ W_{iµ} W_{jµ}, which in matrix form is J = ∑_µ w_µ w_µ^T = WW^T, (11) where {w_µ} are the p columns of W. In general, the Ising model Eq. (10) produced by integrating out the hidden variables need not have Hopfield structure (discussed below). However, it automatically does (as noted in Barra et al. (2012)) in the very special case where W_{iµ} ∈ {−1, +1}. In that case, the binary patterns are simply {w_µ}, so that Eq. (11) represents a Hopfield network with the Hebbian prescription. This situation is likely rare and may only arise as a by-product of constrained training; for a generically trained RBM the weights will not be binary. It is therefore interesting to clarify when and how real-valued RBM interactions W can be associated with HNs. Approximate binary representation of W: In Section 2.1, we orthogonalized the binary matrix ξ via the QR decomposition ξ = QR, where Q is an orthogonal (but non-binary) matrix, which allowed us to map a projection HN (defined by its patterns ξ, Eq. (1)) to an RBM (defined by its inter-layer weights Q, Eq. (6)).
Here we consider the reverse map. Given a trained RBM with weights W ∈ R^{N×p}, we look for an invertible transformation X ∈ R^{p×p} which binarizes W. We make the mild assumption that W is rank p. If we find such an X, then B = WX will be the Hopfield pattern matrix (analogous to ξ), with B_{iµ} ∈ {−1, +1}. This is a non-trivial problem, and an exact solution is not guaranteed. As a first step to study the problem, we relax it to that of finding a matrix X ∈ GL_p(R) (i.e., invertible, p × p, real) which minimizes the binarization error argmin_{X ∈ GL_p(R)} ||WX − sgn(WX)||_F. (12) We denote the approximately binary transformation of W via a particular solution X by B_p = WX. (13) We also define the associated error matrix E ≡ B_p − sgn(B_p). We stress that B_p is non-binary and approximates B ≡ sgn(B_p), the columns of which will be HN patterns under certain conditions on E. We provide an initial characterization and example in Appendix D.
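A short NumPy check of the forward map (Eqs. (1)-(3)): the projection-rule couplings J = ξ(ξ^T ξ)^{−1} ξ^T coincide with QQ^T, where Q collects the RBM inter-layer weights obtained from the QR decomposition of the pattern matrix (random patterns are used purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 5
xi = rng.choice([-1.0, 1.0], size=(N, p))    # p binary patterns of length N

# Projection (pseudo-inverse) rule couplings of the Hopfield network, Eq. (1)
J = xi @ np.linalg.inv(xi.T @ xi) @ xi.T

# Orthogonalize the patterns: xi = Q R, with Q acting as the RBM weights
Q, R = np.linalg.qr(xi)
assert np.allclose(J, Q @ Q.T)               # Eq. (3): J = Q Q^T

# Each stored pattern is a fixed point of the zero-temperature dynamics
s = xi[:, 0]
assert np.array_equal(np.sign(J @ s), s)
```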
This paper shows a relationship between the projection-rule weights of a Hopfield network (HN) and the interaction weights in a corresponding restricted Boltzmann machine (RBM). The mapping from HN to RBM is facilitated by realising that the partition function of the HN can be seen as the partition function of a binary-continuous (Bernoulli-Gaussian) RBM. The authors also comment on the mapping from RBM to HN. The experiments show the advantages of training an RBM with weights initialised from the HN projection weights in generation and classification.
SP:c83ecc74eb885df5f29e5a7080a8c60d1ee0a3b0
One Reflection Suffice
The authors present a way to learn the action of an arbitrary orthogonal matrix on a vector via a map from $\mathbb{R}^{n\times n}$ onto $\operatorname{O}(n)$. They show that the map is surjective, and give conditions under which they can invert this action. They then compare against previously proposed schemes on one task and show the performance of their models on two others.
SP:3d705a1b70254d2b9d05277efff8ac08b0539086
PCPs: Patient Cardiac Prototypes
This paper proposes to learn patient-specific representations using patient physiological signals. The authors design a PCP representation for each patient, which is learned to agree with signals from the same patient and disagree with those of the remaining patients. In the supervised part, the classifier is generated from patient-specific parameters by meta-learning. The model was evaluated on three large ECG datasets: PhysioNet 2020 ECG, Chapman ECG, and PTB-XL ECG.
SP:0cb862cf3806c4f04d2d30f200c25841a1cb52a8
Activation-level uncertainty in deep neural networks
1 INTRODUCTION . Deep Neural Networks ( DNNs ) have achieved state-of-the-art performance in many different tasks , such as speech recognition ( Hinton et al. , 2012 ) , natural language processing ( Mikolov et al. , 2013 ) or computer vision ( Krizhevsky et al. , 2012 ) . In spite of their predictive power , DNNs are limited in terms of uncertainty estimation . This has been a classical concern in the field ( MacKay , 1992 ; Hinton & Van Camp , 1993 ; Barber & Bishop , 1998 ) , which has attracted a lot of attention in the last years ( Lakshminarayanan et al. , 2017 ; Guo et al. , 2017 ; Sun et al. , 2019 ; Wenzel et al. , 2020 ) . Indeed , this ability to “ know what is not known ” is essential for critical applications such as medical diagnosis ( Esteva et al. , 2017 ; Mobiny et al. , 2019 ) or autonomous driving ( Kendall & Gal , 2017 ; Gal , 2016 ) . Bayesian Neural Networks ( BNNs ) address this problem through a Bayesian treatment of the network weights1 ( MacKay , 1992 ; Neal , 1995 ) . This will be refered to as weight-space stochasticity . However , dealing with uncertainty in weight space is challenging , since it contains many symmetries and is highly dimensional ( Wenzel et al. , 2020 ; Sun et al. , 2019 ; Snoek et al. , 2019 ; Fort et al. , 2019 ) . Here we focus on two specific limitations . First , it has been recently shown that BNNs with well-established inference methods such as Bayes by Backprop ( BBP ) ( Blundell et al. , 2015 ) and MC-Dropout ( Gal & Ghahramani , 2016 ) underestimate the predictive uncertainty for instances located in-between two clusters of training points ( Foong et al. , 2020 ; 2019 ; Yao et al. , 2019 ) . Second , the weight-space prior does not allow BNNs to guide extrapolation to out-of-distribution ( OOD ) data ( Sun et al. , 2019 ; Nguyen et al. , 2015 ; Ren et al. , 2019 ) . Both aspects are illustrated graphically in Figure 3 , more details in Section 3.1 . ∗Work developed mostly while visiting Cambridge University , UK . 1The bias term will be absorbed within the weights throughout the work . As an alternative to standard BNNs , Functional Bayesian Neural Nets ( fBNN ) specify the prior and perform inference directly in function space ( Sun et al. , 2019 ) . This provides a mechanism to guide the extrapolation in OOD data , e.g . predictions can be encouraged to revert to the prior in regions of no observed data . However , the posterior stochastic process is still defined by a factorized Gaussian on the network weights ( i.e . as in BBP ) , see ( Sun et al. , 2019 , Sect . 3.1 ) . We will show that this makes fBNN inherit the problem of underestimating the predictive uncertainty for in-between data . In this work , we adopt a different approach by moving stochasticity from the weights to the activation function , see Figure 1 . This will be referred to as auNN ( activation-level uncertainty for Neural Networks ) . The activation functions are modelled with ( one-dimensional ) GP priors , for which a triangular kernel inspired by the ReLu non-linearity ( Nair & Hinton , 2010 ; Glorot et al. , 2011 ) is used . Since non-linearities are typically simple functions ( e.g . ReLu , sigmoid , tanh ) , our GPs are sparsified with few inducing points . The network weights are deterministic parameters which are estimated to maximize the marginal likelihood of the model . The motivation behind auNN is to avoid inference in the complex space of weights . 
We hypothesise that it could be enough to introduce stochasticity in the activation functions that follow the linear projections to provide sensible uncertainty estimations. We show that auNN obtains well-calibrated estimations for in-between data, and its prior makes it possible to guide the extrapolation to OOD data by reverting to the empirical mean. This will be visualized in a simple 1D example (Figure 3 and Table 1). Moreover, auNN obtains competitive performance in standard benchmarks, is scalable (datasets of up to ten million training points are used), and can be readily used for classification. The use of GPs for the activations establishes an interesting connection with deep GPs (DGPs) (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017). The main difference is the linear projection before the GP, recall Figure 1 (c-d). This allows auNN units to model simpler mappings between layers, which are defined along one direction of the input space, similarly to neural networks. However, DGP units model more complex mappings defined on the whole input space, see also Figure 2a. We will show that auNN units require fewer inducing points and are better suited for deep architectures, achieving superior performance. Also, a thorough discussion of additional related work will be provided in Section 4. In summary, the main contributions of this paper are: (1) a new approach to model uncertainty in DNNs, based on deterministic weights and simple stochastic non-linearities (in principle, not necessarily modelled by GPs); (2) the specific use of non-parametric GPs as a prior, including the triangular kernel inspired by the ReLU; (3) auNN addresses a well-known limitation of BNNs and fBNNs (uncertainty underestimation for in-between data), can guide the extrapolation to OOD data by reverting to the empirical mean, and is competitive in standard prediction tasks; (4) auNN units require fewer inducing points and are better suited for deep architectures than DGP ones, achieving superior performance. 2 PROBABILISTIC MODEL AND INFERENCE. Model specification. We focus on a supervised task (e.g., regression or classification) with training data {x_{n,:}, y_{n,:}}_{n=1}^N. (The output is represented as a vector since all the derivations apply to the multi-output case.) The graphical model in Figure 2b will be useful throughout this section. We assume a model of L layers, each one with D_l units as in Figure 1c. Each activation is modelled with a (1D) GP prior, i.e., f_d^l(a_d^l) ∼ GP(µ_d^l, k_d^l), with µ_d^l : R → R and k_d^l : R × R → R. The GP hyperparameters θ_d^l will be omitted for clarity (for the kernels used here, θ_d^l includes the amplitude and the lengthscale). Assuming independence between units, each layer depends on the previous one as: p(F^l | F^{l−1}, W^l) = p(F^l | A^l) = ∏_{d=1}^{D_l} p(f_d^l | a_d^l), (1) where F^l is the N × D_l matrix of outputs of the l-th layer for N inputs, W^l is the D_{l−1} × D_l matrix of weights in that layer, and A^l is the N × D_l matrix of pre-activations, i.e., A^l = F^{l−1} · W^l. As usual, the columns and rows of F^l are denoted f_d^l and f_{n,:}^l, respectively (and analogously for the other matrices). Since the activation is defined by a GP, we have p(f_d^l | a_d^l) = N(f_d^l | µ_d^l, K_d^l), with µ_d^l (resp. K_d^l) the result of evaluating µ_d^l (resp. k_d^l) on a_d^l (that is, µ_d^l is an N-dimensional vector and K_d^l is an N × N matrix).
To fully specify the model, the output Y is defined from the last layer with a distribution that factorizes across data points, i.e., p(Y | F^L) = ∏_{n=1}^N p(y_{n,:} | f_{n,:}^L). This formulation resembles that of DGPs (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017). The main difference is that we model F^l | F^{l−1} through D_l 1D GPs evaluated on the pre-activations A^l (i.e., the projections of F^{l−1} through W^l), whereas DGPs use D_l GPs of dimension D_{l−1} evaluated directly on F^{l−1}, recall Figure 1 (c-d). Variational Inference. Inference in the proposed model is intractable. To address this, we follow standard sparse variational GP approaches (Titsias, 2009; Hensman et al., 2013; 2015), similarly to the Doubly Stochastic Variational Inference (DSVI) for DGPs (Salimbeni & Deisenroth, 2017). Specifically, in each unit of each layer we introduce M^l inducing values u_d^l, which are the result of evaluating the GP on the one-dimensional inducing points z_d^l. We naturally write U^l and Z^l for the corresponding M^l × D_l matrices associated to the l-th layer, respectively. Following eq. (1), the augmented model for one layer is p(F^l, U^l | F^{l−1}, W^l, Z^l) = p(F^l | U^l, A^l, Z^l) p(U^l | Z^l) = ∏_{d=1}^{D_l} p(f_d^l | u_d^l, a_d^l, z_d^l) p(u_d^l | z_d^l). (2) Variational inference (VI) involves the approximation of the true posterior p({F^l, U^l}_l | Y). Following (Hensman et al., 2013; Salimbeni & Deisenroth, 2017), we propose a posterior given by p(F | U) and a parametric Gaussian on U: q({F^l, U^l}_l) = ∏_{l=1}^L p(F^l | U^l, A^l, Z^l) q(U^l) = ∏_{l=1}^L ∏_{d=1}^{D_l} p(f_d^l | u_d^l, a_d^l, z_d^l) q(u_d^l), (3) where q(u_d^l) = N(u_d^l | m_d^l, S_d^l), with m_d^l ∈ R^{M^l} and S_d^l ∈ R^{M^l × M^l} variational parameters to be estimated. Minimizing the KL divergence between q({F^l, U^l}_l) and the true posterior is equivalent to maximizing the following evidence lower bound (ELBO): log p(Y | {W^l, Z^l}_l) ≥ ELBO = ∑_{n=1}^N E_{q(f_{n,:}^L)}[log p(y_{n,:} | f_{n,:}^L)] − ∑_{l=1}^L ∑_{d=1}^{D_l} KL(q(u_d^l) || p(u_d^l)). (4) In the ELBO, the KL term can be computed in closed form, as both q(u_d^l) and p(u_d^l) are Gaussians. The log-likelihood term can be approximated by sampling from the marginal posterior q(f_{n,:}^L), which can be done efficiently through univariate Gaussians as in (Salimbeni & Deisenroth, 2017). Specifically, U^l can be analytically marginalized in eq. (3), which yields q({F^l}_l) = ∏_l q(F^l | F^{l−1}, W^l) = ∏_{l,d} N(f_d^l | µ̃_d^l, Σ̃_d^l), with: [µ̃_d^l]_i = µ_d^l(a_{id}^l) + α_d^l(a_{id}^l)^T (m_d^l − µ_d^l(z_d^l)), (5) [Σ̃_d^l]_{ij} = k_d^l(a_{id}^l, a_{jd}^l) − α_d^l(a_{id}^l)^T (k_d^l(z_d^l) − S_d^l) α_d^l(a_{jd}^l), (6) where α_d^l(x) = k_d^l(x, z_d^l) [k_d^l(z_d^l)]^{−1} and a_{n,:}^l = W^l f_{n,:}^{l−1}. Importantly, the marginal posterior q(f_{n,:}^l) is a Gaussian that depends only on a_{n,:}^l, which in turn only depends on q(f_{n,:}^{l−1}). Therefore, sampling from f_{n,:}^l is straightforward using the reparametrization trick (Kingma & Welling, 2013): f_{nd}^l = [µ̃_d^l]_n + ε · ([Σ̃_d^l]_{nn})^{1/2}, with ε ∼ N(0, 1), and f_{n,:}^0 = x_{n,:}. (7) Training consists in maximizing the ELBO, eq. (4), w.r.t. the variational parameters {m_d^l, S_d^l}, the inducing points {z_d^l}, and the model parameters (i.e., the weights {w_d^l} and the kernel parameters {θ_d^l}). This can be done in batches, allowing for scalability to very large datasets.
The complexity of evaluating the ELBO is O(N M^2 (D_1 + · · · + D_L)), the same as DGPs with DSVI (Salimbeni & Deisenroth, 2017). Predictions. Given a new x_{∗,:}, we want to compute p(f_{∗,:}^L | X, Y) ≈ E_{q({U^l})}[p(f_{∗,:}^L | {U^l})]. As in (Salimbeni & Deisenroth, 2017), this can be approximated by sampling S values up to the (L−1)-th layer with the same eq. (7), but starting with x_{∗,:}. Then, p(f_{∗,:}^L | X, Y) is given by the mixture of the S Gaussian distributions obtained from eqs. (5)-(6). Triangular kernel. One of the most popular kernels in GPs is the RBF (Williams & Rasmussen, 2006), which produces very smooth functions. However, the ReLU non-linearity led to a general boost in performance in DNNs (Nair & Hinton, 2010; Glorot et al., 2011), and we aim to model similar activations. Therefore, we introduce the use of the triangular (TRI) kernel. Just like the RBF, TRI is an isotropic kernel, i.e., it depends on the distance between the inputs, k(x, y) = γ · g(|x − y| / ℓ), with γ and ℓ the amplitude and lengthscale. For RBF, g(t) = e^{−t^2/2}. For TRI, g(t) = max(1 − t, 0). This is a valid kernel (Williams & Rasmussen, 2006, Section 4.2.1). Similarly to the ReLU, the functions modelled by TRI are piecewise linear, see Figure 6a in the main text and Figure 8 in Appendix C. Comparison with DGP. The difference between auNN and DGP units is graphically illustrated in Figure 2a. Whereas DGP mappings from one layer to the next are complex functions defined on D_{l−1} dimensions (D_{l−1} = 2 in the figure), auNN mappings are defined just along one direction via the weight projection. This is closer in spirit to NNs, whose mappings are also simpler and better suited for feature extraction and learning more abstract concepts. Moreover, since the GP is defined on a 1D space, auNN requires fewer inducing points than DGPs (which, intuitively, can be interpreted as using inducing (hyper)planes in the D_{l−1}-dimensional space before the projection).
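A minimal sketch of the TRI kernel described above, next to the RBF, together with draws from the corresponding 1D GP priors over activation functions (NumPy; the amplitude, lengthscale, and grid are arbitrary choices):

```python
import numpy as np

def rbf_kernel(x, y, amplitude=1.0, lengthscale=1.0):
    d = np.abs(x[:, None] - y[None, :]) / lengthscale
    return amplitude * np.exp(-0.5 * d ** 2)

def tri_kernel(x, y, amplitude=1.0, lengthscale=1.0):
    """Triangular kernel: g(t) = max(1 - t, 0) on the scaled distance,
    yielding piecewise-linear (ReLU-like) GP samples."""
    d = np.abs(x[:, None] - y[None, :]) / lengthscale
    return amplitude * np.maximum(1.0 - d, 0.0)

# Draw a few activation functions from the 1D GP prior with each kernel
a = np.linspace(-3, 3, 200)                 # pre-activations of one unit
rng = np.random.default_rng(0)
jitter = 1e-6 * np.eye(len(a))              # numerical stabilization
f_tri = rng.multivariate_normal(np.zeros(len(a)), tri_kernel(a, a) + jitter, size=3)
f_rbf = rng.multivariate_normal(np.zeros(len(a)), rbf_kernel(a, a) + jitter, size=3)
```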
Putting the uncertainty on the weights (e.g., Bayes by Backprop), on the activations (e.g., fast dropout or variants of natural-parameter networks [2,3] or Bayesian dark knowledge [4]), or on both [1] has been investigated before. The idea of moving the uncertainty from the weights to the activation function is not new. One could argue that VAE-style parameterization or the local reparameterization trick is also a kind of method that puts uncertainty in the activation function. In fact, the proposed method does involve the reparameterization trick in each layer, as shown in Eq. 7.
Local SGD Meets Asynchrony
In this paper, the authors argue that the mini-batch method and the local SGD method suffer generalization performance degradation for large local mini-batch sizes. An asynchronous method is proposed to improve the generalization performance. A sublinear convergence rate is provided for the non-convex objective. As there are some missing definitions and little explanation of the proposed method, the reviewer finds the paper hard to read.
SP:4d94ef57fdaf5f1100b6b09331d5cff5264fcdf6
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues
1 INTRODUCTION. Negotiation is ubiquitous in human interaction, from e-commerce to the multi-billion dollar sales of companies. Learning how to negotiate effectively involves deep pragmatic understanding and planning the dialogue strategically (Thompson; Bazerman et al., 2000b; Pruitt, 2013). Modern dialogue systems for collaborative tasks such as restaurant or flight reservations have made considerable progress by modeling the dialogue history and structure explicitly using the semantic content, like slot-value pairs (Larionov et al., 2018; Young, 2006), or implicitly with encoder-decoder architectures (Sordoni et al., 2015; Li et al., 2016). In such tasks, users communicate explicit intentions, enabling systems to map the utterances into specific intent slots (Li et al., 2020). However, such a mapping is less clear in complex non-collaborative tasks like negotiation (He et al., 2018) and persuasion (Wang et al., 2019), where user intent and the most effective strategies are hidden. Hence, along with the generated dialogue, the strategic choice of framing and the sequence of chosen strategies play a vital role, as depicted in Figure 1. Indeed, prior work on negotiation dialogues has primarily focused on optimizing dialogue strategies, from high-level task-specific strategies (Lewis et al., 2017), to more specific task execution planning (He et al., 2018), to fine-grained planning of linguistic outputs given strategic choices (Zhou et al., 2019). These studies have confirmed that it is crucial to control for the pragmatics of the dialogue to build effective negotiation systems. To model the explicit dialogue structure, prior work incorporated Hidden Markov Models (HMMs) (Zhai & Williams, 2014; Ritter et al., 2010), Finite State Transducers (FSTs) (Zhou et al., 2020) and RNNs (He et al., 2018; Shi et al., 2019). While RNN-based models lack interpretability, HMM- and FST-based approaches may lack expressivity. In this paper, we hypothesize that Graph Neural Networks (GNNs) (Wu et al., 2020) can combine the benefits of interpretability and expressivity because of their effectiveness in encoding graph-structured data through message propagation. While being sufficiently expressive to model graph structures, GNNs also provide a natural means for interpretation via intermediate states (Xie & Lu, 2019; Pope et al., 2019). We propose DIALOGRAPH, an end-to-end negotiation dialogue system that leverages Graph Attention Networks (GAT) (Veličković et al., 2018) to model complex negotiation strategies while providing interpretability for the model via intermediate structures. (Code, data and a demo system are released at https://github.com/rishabhjoshi/DialoGraph_ICLR21.) DIALOGRAPH incorporates the recently proposed hierarchical graph pooling based approaches (Ranjan et al., 2020) to learn the associations between negotiation strategies, including conceptual and linguistic strategies and dialogue acts, and their relative importance in predicting the best sequence. We focus on buyer-seller negotiations in which two individuals negotiate on the price of an item through a chat interface, and we model the seller's behavior on the CraigslistBargain dataset (He et al., 2018); we focus on the seller's side following Zhou et al. (2019), who devised a set of strategies specific to maximizing the seller's success, though our proposed methodology is general. We demonstrate that DIALOGRAPH outperforms previous state-of-the-art methods on strategy prediction and downstream dialogue responses. This paper makes several contributions.
First , we introduce a novel approach to model negotiation strategies and their dependencies as graph structures , via GNNs . Second , we incorporate these learned graphs into an end-to-end negotiation dialogue system and demonstrate that it consistently improves future-strategy prediction and downstream dialogue generation , leading to better negotiation deals ( sale prices ) . Finally , we demonstrate how to interpret intermediate structures and learned sequences of strategies , opening-up the black-box of end-to-end strategic dialogue systems . 2 DIALOGRAPH . We introduce DIALOGRAPH , a modular end-to-end dialogue system , that incorporates GATs with hierarchical pooling to learn pragmatic dialogue strategies jointly with the dialogue history . DIALOGRAPH is based on a hierarchical encoder-decoder model and consists of three main components : ( 1 ) hierarchical dialogue encoder , which learns a representation for each utterance and encodes its local context ; ( 2 ) structure encoder for encoding sequences of negotiation strategies and dialogue acts ; and ( 3 ) utterance decoder , which finally generates the output utterance . Formally , our dialogue input consists of a sequence of tuples , D = [ ( u1 , da1 , ST1 ) , ( u2 , da2 , ST2 ) , ... , ( un , dan , STn ) ] where ui is the utterance , dai is the coarse dialogue act and STi = { sti,1 , sti,2 , . . . , sti , k } is the set of k fine-grained negotiation strategies for the utterance ui.3 The dialogue context forms the input to ( 1 ) and the previous dialogue acts and negotiation strategies form the input to ( 2 ) . The overall architecture is shown in Figure 2 . In what follows , we describe DIALOGRAPH in detail . 2.1 HIERARCHICAL DIALOGUE ENCODER . A dialogue context typically comprises of multiple dialogue utterances which are sequential in nature . We use hierarchical encoders for modeling such sequential dialogue contexts ( Jiao et al. , 2019 ) . To encode the utterance ut at time t , we use the pooled representations from BERT ( Devlin et al. , 2019 ) to obtain the corresponding utterance embedding et . We then pass the utterance embeddings through a GRU to obtain the dialogue context encoding till time t , denoted by hUt . 2We focus on the seller ’ s side following Zhou et al . ( 2019 ) who devised a set of strategies specific to maximizing the seller ’ s success . Our proposed methodology , however , is general . 3For example , in an utterance Morning ! My bro destroyed my old kit and I ’ m looking for a new pair for $10 , the coarse dialogue act is Introduction , and the finer grained negotiation strategies include Proposing price , Being informal and Talking about family for building rapport . 2.2 STRUCTURE ENCODER . Our structure encoder is designed to model the graph representations of the strategies and dialogue acts using GATs and output their structural representations . These structural representations are used to predict the next set of strategies and dialogue acts and enrich the encoded dialogue representation . Below we describe the structure encoder for negotiation strategies . We model the sequence of negotiation strategies , ST = [ ST1 , ST2 , . . . , STt ] by creating a directed graph , where STi is the set of k fine-grained negotiation strategies for the utterance ui . Formally , we define a graph G ( V , E , X ) with |E| edges andN = |V| nodes where each node vi ∈ V represents a particular negotiation strategy for an utterance and has a d-dimensional feature representation denoted by zi . 
Z ∈ R^{N×d} denotes the feature matrix of the nodes and A ∈ R^{N×N} represents the adjacency matrix , where N is the total number of nodes ( strategies ) that have occurred in the conversation till that point . Therefore , each node represents a strategy-utterance pair . We define the set of edges as E = { ( a , b ) } ; a , b ∈ V , where a and b denote strategies at utterances u_a and u_b , present at turns t_a and t_b , such that t_b > t_a . In other words , we make a directed edge from a particular node ( strategy in an utterance ) to all the consecutive nodes . This ensures a direct connection from all the previous strategies to the more recent ones ( Appendix C shows an example of the graph obtained from a sequence of strategies ) . In the same way , we form the graph out of the sequence of dialogue acts . These direct edges and learned edge attention weights help us interpret the dependence and influence of strategies on each other . To get the structural representations from the strategy graphs , we pass them through a hierarchical graph pooling based encoder , which consists of l layers of GAT , each followed by the Adaptive Structure Aware Pooling ( ASAP ) layer ( Ranjan et al. , 2020 ) . As part of the ASAP layer , the model first runs GAT over the input graph representations to obtain structurally informed representations of the nodes . Then a cluster assignment step is performed , which generates a cluster assignment matrix S that tells the model which nodes appear in a similar structural context . After that , the clusters are ranked and the graph is pooled by taking the top few clusters as new nodes and forming edges between them using the existing graph . This way the size of the graph is reduced at every step , which leads to a structurally informed graph representation . We take advantage of the cluster formulation to obtain the associations between the negotiation strategies , as identified from the cluster assignment matrix S. These association scores can later be used to interpret which strategies are associated with each other and tend to co-occur in similar contexts . Moreover , we also use the node attention scores from GAT to interpret the influence of different strategies on the representation of a particular strategy , which essentially gives the dependence information between strategies . In this way , the structure representation is learned and accumulated in a manner that preserves the structural information ( Ying et al. , 2018 ; Lee et al. , 2019 ) . After each pooling step , the graph representation is summarized using the concatenation of the mean and max of the node representations . The summaries are then added and passed through fully connected layers to obtain the final structural representation of the strategies , h^{ST}_t . We employ a similar structure encoder to encode the graph obtained from the sequence of dialogue acts , to obtain h^{da}_t . 2.3 UTTERANCE DECODER . The utterance decoder uses the dialogue context representation and the structural representations of dialogue acts and negotiation strategies to produce the dialogue response ( next utterance ) . We enrich the dialogue representation by concatenating the structural representations before passing it to a standard greedy GRU ( Cho et al. , 2014 ) decoder . This architecture follows Zhou et al . ( 2020 ) , who introduced a dynamic negotiation system that incorporates negotiation strategies and dialogue acts via FSTs . We thus follow their utterance decoder architecture to enable direct baseline comparison .
For the j-th word of utterance u_{t+1} , denoted w^j_{t+1} , we condition on the previous word w^{j−1}_{t+1} to calculate the probability distribution over the vocabulary as p_{w^j_{t+1}} = softmax ( GRU ( h_t , w^{j−1}_{t+1} ) ) , where h_t = [ h^u_t ; h^{ST}_t ; h^{da}_t ] and [ ; ] represents the concatenation operator . For encoding the price , we replace all price information in the dataset with placeholders representing the percentage of the offer price . For example , we would replace $35 with < price−0.875 > if the original selling price is $40 . The decoder generates these placeholders , which are then replaced with the calculated price before generating the utterance . 2.4 MODEL TRAINING . We use h^{ST}_t to predict the next set of strategies ST_{t+1} , a binary-valued vector which represents the k-hot representation of negotiation strategies for the next turn . We compute the probability of the j-th strategy occurring in u_{t+1} as p ( st_{t+1 , j} | h^{ST}_t ) = σ ( h^{ST}_t ) , where σ denotes the sigmoid operator . We threshold the probability at 0.5 to obtain the k-hot representation . We denote the weighted negative log-likelihood of strategies L_ST as the loss function for the task of next-strategy prediction : L_ST = − Σ_j δ_j log ( p ( st_{t+1 , j} ) ) − Σ_k log ( 1 − p ( st_{t+1 , k} ) ) , where the summations over j and k run over the strategies present ( st′_{t+1 , j} = 1 ) and not present ( st′_{t+1 , k} = 0 ) in the ground-truth strategy set ST′ . Here δ_j is the positive weight associated with the particular strategy . We add this weight to the positive examples to trade off precision and recall , and set δ_j = ( # of instances not having strategy j ) / ( # of instances having strategy j ) . Similarly , we use h^{da}_t to predict the dialogue act for the next utterance , da_{t+1} . Given the target dialogue act da′_{t+1} and the class weights ρ_da for the dialogue acts , we denote the class-weighted cross-entropy loss over the set of possible dialogue acts as L_DA = − ρ_da log ( softmax ( h^{da}_t ) ) . We pass h_t = [ h^u_t ; h^{ST}_t ; h^{da}_t ] through a linear layer to predict the negotiation success , which is denoted by the sale-to-list ratio r = ( sale price − buyer target price ) / ( listed price − buyer target price ) ( Zhou et al. , 2019 ) . We split the ratios into 5 negotiation classes of equal sizes using the training data and use those to predict the success of the negotiation . Therefore , given the predicted probabilities for the target utterance u′_{t+1} from §2.3 , the target ratio class y′_r and the learnable parameters W_r and b_r , we use the cross-entropy loss for both the generation task ( L_NLG ) and the negotiation outcome prediction task ( L_R ) : L_NLG = − Σ_{w^j ∈ u′_{t+1}} log ( p_{w^j_{t+1}} ) and L_R = − Σ_{r ∈ [ 1 , 5 ]} y′_r log ( softmax ( W_r h_t + b_r ) ) . The L_R loss optimizes the encoding of negotiation strategies to enable accurate prediction of the negotiation outcome . We use hyperparameters α , β and γ to weight the joint loss L_joint of strategy prediction , dialogue act prediction , utterance generation and outcome prediction , optimized together using the Adam optimizer ( Kingma & Ba , 2014 ) : L_joint = L_NLG + α L_ST + β L_DA + γ L_R .
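The joint objective above is a weighted sum of four standard losses. The following is a minimal PyTorch-style sketch of how such a combination might be assembled; the use of the logits form of the weighted binary cross-entropy, the tensor shapes, and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(strategy_logits, strategy_targets, pos_weight,      # (k,), (k,), (k,)
               da_logits, da_target, da_class_weights,             # (num_acts,), scalar, (num_acts,)
               word_logits, word_targets,                          # (seq_len, vocab), (seq_len,)
               ratio_logits, ratio_target,                         # (5,), scalar
               alpha=1.0, beta=1.0, gamma=1.0):
    # L_ST: weighted binary cross-entropy over the k-hot strategy vector
    l_st = F.binary_cross_entropy_with_logits(
        strategy_logits, strategy_targets.float(), pos_weight=pos_weight)
    # L_DA: class-weighted cross-entropy over dialogue acts
    l_da = F.cross_entropy(da_logits.unsqueeze(0), da_target.view(1), weight=da_class_weights)
    # L_NLG: token-level cross-entropy for the generated utterance
    l_nlg = F.cross_entropy(word_logits, word_targets)
    # L_R: cross-entropy over the 5 sale-to-list ratio classes
    l_r = F.cross_entropy(ratio_logits.unsqueeze(0), ratio_target.view(1))
    # L_joint = L_NLG + alpha * L_ST + beta * L_DA + gamma * L_R
    return l_nlg + alpha * l_st + beta * l_da + gamma * l_r
```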
This paper deals with the problem of natural language generation for a dialogue system involved in complex communication tasks such as negotiation or persuasion. The proposed architecture consists of two encoders: one for the utterances and the other for dialogue acts and negotiation strategies. The decoder is an RNN that converts the encoded vectors into the output utterance. Each utterance is first passed through BERT to get an utterance-level encoding. The sequence of utterance encodings is then passed through an RNN to generate a conversation-level encoding. The negotiation strategies and dialogue acts in a conversation are represented using a node-edge graph, where the nodes are one of the N different strategies/acts and there exists an edge from node a to node b if an utterance with strategy a precedes any utterance with strategy b. The entire architecture is trained in a multi-task setup where the loss function accounts for both the predictions of the model and the generated language. The proposed architecture is evaluated on the CraigslistBargain dataset and compared against Zhou et al. (2020).
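The node-edge graph described here (one node per strategy-utterance pair, with a directed edge from every node at an earlier turn to every node at a later turn) can be reproduced in a few lines. A minimal NumPy sketch with placeholder node features; the function name and feature dimension are illustrative, not taken from the paper.

```python
import numpy as np

def build_strategy_graph(strategy_sets, feat_dim=16):
    """Node features Z and adjacency A for a sequence of per-utterance strategy
    sets, with a directed edge from every earlier node to every later node."""
    nodes = [(turn, s) for turn, strategies in enumerate(strategy_sets) for s in strategies]
    n = len(nodes)
    Z = np.random.randn(n, feat_dim)          # placeholder d-dimensional node features
    A = np.zeros((n, n))
    for i, (t_i, _) in enumerate(nodes):
        for j, (t_j, _) in enumerate(nodes):
            if t_j > t_i:                     # edge only from a strictly earlier turn
                A[i, j] = 1.0
    return nodes, Z, A

# toy usage: three turns, each annotated with fine-grained strategies
nodes, Z, A = build_strategy_graph([["propose_price", "informal"], ["counter_price"], ["agree"]])
print(len(nodes), int(A.sum()))               # 4 nodes, 5 directed edges
```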
Learning the Step-size Policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm
The paper studies the problem of learning a step-size policy for the L-BFGS algorithm. It falls into the general category of meta-learning algorithms that try to derive a data-driven approach to learning one of the parameters of a learning algorithm; in this case, the learning rate (step size) of L-BFGS. The paper is very similar in nature to the papers of Ravi & Larochelle, MAML, and Andrychowicz.
SP:3b3e7833784c53527eb32d5f6ac8d720f9d764bd
Uncertainty Calibration Error: A New Metric for Multi-Class Classification
This paper proposes a new calibration error measurement named UCE (Uncertainty Calibration Error) for deep classification models. It measures how far a model is from "perfect calibration" (i.e., the predicted uncertainty matches the classification error at all levels in [0, 1]), relying on normalized entropy for multi-class classification. The UCE is well justified for classification problems with many classes, where the entropy is shown to be asymptotically equivalent to the (top-1) classification error. A further point in favor of the UCE metric is that it has some interpretability in terms of its value, and it is said to be robust to the number of bins used.
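Although the review does not spell out the binning procedure, a binned calibration error driven by normalized entropy might be computed roughly as follows; the bin count, equal-width bins, and the use of top-1 error per bin are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def normalized_entropy(probs, eps=1e-12):
    """Entropy of each predictive distribution, normalized to [0, 1] by log(C)."""
    C = probs.shape[1]
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    return ent / np.log(C)

def uncertainty_calibration_error(probs, labels, n_bins=10):
    """Bin samples by normalized entropy, then compare mean uncertainty with
    mean top-1 error inside each bin, weighted by the bin's share of samples."""
    unc = normalized_entropy(probs)
    err = (np.argmax(probs, axis=1) != labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    uce = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (unc >= lo) & (unc <= hi) if hi == 1.0 else (unc >= lo) & (unc < hi)
        if mask.any():
            uce += mask.mean() * abs(err[mask].mean() - unc[mask].mean())
    return uce
```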
SP:7a92beaba926a93a627208abebe4a455ae3e0400
Multiscale Invertible Generative Networks for High-Dimensional Bayesian Inference
1 INTRODUCTION . Bayesian inference provides a powerful framework to blend prior knowledge , the data generation process and ( possibly small ) data for statistical inference . With some prior knowledge ρ ( a distribution ) for the quantity of interest x ∈ R^d , and some ( noisy ) measurement y ∈ R^{d_y} , it casts on x a posterior q ( x|y ) ∝ ρ ( x ) L ( y|x ) , where L ( y|x ) = N ( y − F ( x ) ; 0 , Γ ) . ( 1 ) Here L ( y|x ) is the likelihood that compares the data y with the system prediction F ( x ) from the candidate x , F denotes the forward process , and Γ is the noise covariance . We can use different distributions to model the mismatch y − F ( x ) , and for illustration simplicity , we assume a Gaussian in Equation 1 . For example , Bayesian deep learning generates model-predicted logits F ( x ) from model parameters x , and compares them with discrete labels y through a binomial or multinomial distribution . Sampling or inferring from q is a long-standing challenge , especially for high-dimensional ( high-d ) cases . An arbitrary high-d posterior can have its importance regions ( also called “ modes ” ) anywhere in the high-d space , and finding these modes requires computational cost that grows exponentially with the dimension d. This intrinsic difficulty is the consequence of “ the curse of dimensionality ” , which all existing Bayesian inference methods suffer from , e.g. , MCMC-based methods ( Neal et al. , 2011 ; Welling & Teh , 2011 ; Cui et al. , 2016 ) , SVGD-type methods ( Liu & Wang , 2016 ; Chen et al. , 2018 ; 2019a ) , and generative modeling ( Morzfeld et al. , 2012 ; Parno et al. , 2016 ; Hou et al. , 2019 ) . In this paper , we focus on Bayesian inference problems with multiscale structure and exploit this structure to sample from a high-d posterior . While the original problem has a high spatial resolution ( fine-scale ) , its low-resolution ( coarse-scale ) analogue is computationally attractive because it lies in a low-dimensional ( low-d ) space . A problem has the multiscale structure if such a coarse-scale low-d surrogate exists and gives a good approximation to the fine-scale high-d problem , see Section 2.1 . Such a multiscale property is very common in high-d Bayesian inference problems . For example , inferring the 3-D permeability field of the subsurface at the scale of meters is a reasonable approximation of itself at the scale of centimeters , while the problem dimension is 10^6 times smaller . We propose a Multiscale Invertible Generative Network ( MsIGN ) to sample from high-d Bayesian inference problems with multiscale structure . MsIGN is a flow-based generative network that can both generate samples and give density evaluation . It consists of multiple scales that recursively lift samples up to a finer scale ( higher resolution ) , except that the coarsest scale directly samples from a low-d ( low-resolution ) distribution . At each scale , a fixed prior conditioning layer combines coarse-scale samples with some random noise according to the prior to enhance the resolution , and then an invertible flow modifies the samples for better accuracy , see Figure 1 . The architecture of MsIGN makes it fully invertible between the final sample and the random noise at all scales . MsIGN undergoes a multi-stage training that learns a hierarchy of distributions with dimensions growing from the lowest to the highest ( the target posterior ) . Each stage gives a good initialization to the next stage thanks to the multiscale property .
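Equation 1 can be evaluated pointwise up to a normalizing constant, which is all that most samplers need. A minimal sketch under the Gaussian assumptions above; the forward map and both covariance matrices below are placeholders, not quantities from the paper.

```python
import numpy as np

def log_unnormalized_posterior(x, y, forward, prior_cov_inv, noise_cov_inv):
    """log q(x|y) up to a constant: log rho(x) + log N(y - F(x); 0, Gamma),
    assuming a zero-mean Gaussian prior and Gaussian observation noise."""
    residual = y - forward(x)
    log_prior = -0.5 * x @ prior_cov_inv @ x
    log_like = -0.5 * residual @ noise_cov_inv @ residual
    return log_prior + log_like

# toy usage with a linear forward map F(x) = Gx
d, dy = 4, 2
G = np.random.randn(dy, d)
x, y = np.random.randn(d), np.random.randn(dy)
print(log_unnormalized_posterior(x, y, lambda v: G @ v, np.eye(d), np.eye(dy)))
```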
To capture multiple modes , we choose the Jeffreys divergence D_J ( p ‖ q ) as the training objective at each stage , which is defined as D_J ( p ‖ q ) = D_KL ( p ‖ q ) + D_KL ( q ‖ p ) = E_{x∼p} [ log ( p ( x ) / q ( x ) ) ] + E_{x∼q} [ log ( q ( x ) / p ( x ) ) ] . ( 2 ) The Jeffreys divergence removes bad local minima of the single-sided Kullback-Leibler ( KL ) divergence to avoid missing modes . We build an unbiased estimate of it by leveraging the prior conditioning layer in importance sampling . A proper loss function and good initialization from multi-stage training solve the non-convex optimization stably and capture multiple modes of the high-d distribution . In summary , we claim four contributions in this work . First , we propose a Multiscale Invertible deep Generative Network ( MsIGN ) with a novel prior conditioning layer , which can be trained in a coarse-to-fine scale manner . Second , the Jeffreys divergence is used as the objective function to avoid mode collapse , and is estimated by importance sampling based on the prior conditioning layer . Third , when applied to two Bayesian inverse problems , MsIGN clearly captures multiple modes in the high-d posterior and approximates the posterior accurately , demonstrating its superior performance compared with previous methods via the generative modeling approach . Fourth , we also apply MsIGN to image synthesis tasks , where it achieves superior performance in bits-per-dimension among our baseline models , like Glow ( Kingma & Dhariwal , 2018 ) , FFJORD ( Grathwohl et al. , 2018 ) , Flow++ ( Ho et al. , 2019 ) , i-ResNet ( Behrmann et al. , 2019 ) , and Residual Flow ( Chen et al. , 2019b ) . MsIGN also yields great interpretability of its neurons in intermediate layers . 2 METHODOLOGY . We will abbreviate q ( x|y ) in Equation 1 as q ( x ) for simplicity in the following , because y only plays the role of defining the target distribution q ( x ) in MsIGN . In Section 2.1 , we discuss in detail the multiscale structure of the posterior q ( x ) and derive a scale decoupling that can be utilized to divide and conquer the high-d challenge of Bayesian inference . As a flow-based generative model like that in Dinh et al . ( 2016 ) , MsIGN models a bijective map that takes Gaussian noise z to a sample x whose distribution is denoted p_θ ( x ) , where θ denotes the network parameters . MsIGN allows fast generation of samples x and density evaluation p_θ ( x ) , so we train our working distribution p_θ ( x ) to approximate the target distribution q ( x ) . We present the architecture of MsIGN in Section 2.2 and the training algorithm in Section 2.3 . 2.1 MULTISCALE STRUCTURE AND SCALE DECOUPLING . We say a Bayesian inference problem has multiscale structure if the associated coarse-scale likelihood L_c approximates the original likelihood L well : L ( y|x ) ≈ L_c ( y|x_c ) , where L_c ( y|x_c ) := N ( y − F_c ( x_c ) ; 0 , Γ ) . ( 3 ) Here x_c ∈ R^{d_c} is a coarse-scale version of the fine-scale quantity x ∈ R^d ( d_c < d ) , given by a deterministic pooling operator A : x_c = A ( x ) . The map F_c : R^{d_c} → R^{d_y} is a forward process that gives the system prediction based on the coarse-scale information x_c . A popular case of the multiscale structure is when A is the average pooling operator and F ( x ) ≈ F_c ( x_c ) , meaning that the system prediction mainly depends on the lower-resolution information x_c .
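Returning to the training objective in Equation 2, the symmetric divergence has a simple Monte Carlo estimate when one can sample from, and evaluate the log-densities of, both distributions; the paper's actual estimator additionally relies on importance sampling through the prior conditioning layer, which is not shown in this sketch.

```python
import numpy as np

def jeffreys_divergence_mc(sample_p, sample_q, log_p, log_q, n=10000):
    """D_J(p||q) = KL(p||q) + KL(q||p), each KL term estimated with samples
    drawn from the corresponding distribution."""
    xp = sample_p(n)                                  # x ~ p
    xq = sample_q(n)                                  # x ~ q
    kl_pq = np.mean(log_p(xp) - log_q(xp))
    kl_qp = np.mean(log_q(xq) - log_p(xq))
    return kl_pq + kl_qp

# toy check with two 1-D Gaussians N(0,1) and N(1,1); the closed-form value is 1.0
log_p = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_q = lambda x: -0.5 * (x - 1.0)**2 - 0.5 * np.log(2 * np.pi)
print(jeffreys_divergence_mc(np.random.randn, lambda n: np.random.randn(n) + 1.0, log_p, log_q))
```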
Equation 3 motivates us to define a surrogate distribution q̃ ( x ) ∝ ρ ( x ) L_c ( y|A ( x ) ) that approximates the target posterior q ( x ) well : q̃ ( x ) = ρ ( x ) L_c ( y|A ( x ) ) = ρ ( x ) L_c ( y|x_c ) ≈ ρ ( x ) L ( y|x ) = q ( x ) . ( 4 ) We also notice that the prior ρ allows an exact scale decoupling . To generate a sample x from ρ , one can first sample its coarse-scale version x_c = A ( x ) , and then replenish the missing fine-scale details without changing the coarse-scale structure by sampling from the conditional distribution ρ ( x|x_c ) = ρ ( x|A ( x ) = x_c ) . Using ρ_c to denote the distribution of x_c = A ( x ) , the conditional probability calculation summarizes this scale decoupling process as ρ ( x ) = ρ ( x|x_c ) ρ_c ( x_c ) . Combining the scale effect in the likelihood and the scale decoupling in the prior , we decouple the surrogate q̃ ( x ) = ρ ( x ) L_c ( y|A ( x ) ) into the prior conditional distribution ρ ( x|x_c ) and a coarse-scale posterior , defined as q_c ( x_c ) := ρ_c ( x_c ) L_c ( y|x_c ) . The decoupling goes as q̃ ( x ) = ρ ( x ) L_c ( y|x_c ) = ρ ( x|x_c ) ρ_c ( x_c ) L_c ( y|x_c ) = ρ ( x|x_c ) q_c ( x_c ) . ( 5 ) The prior conditional distribution ρ ( x|x_c ) bridges the coarse-scale posterior q_c ( x_c ) and the surrogate q̃ ( x ) , which in turn approximates the original fine-scale posterior q ( x ) . Parno et al . ( 2016 ) proposed a similar scale decoupling relation , and we leave the discussion and comparison to Appendix A . Figure 1 shows the integrated sampling strategy . To sample an x from q , we start with an x_c from q_c . The prior conditioning layer then performs random upsampling from the prior conditional distribution ρ ( ·|x_c ) , and the output will be a sample x̃ of the surrogate q̃ . Due to the approximation q̃ ≈ q from Equation 4 , we stack multiple invertible blocks for the invertible flow F to modify the sample x̃ ∼ q̃ into a sample x ∼ q : x = F ( x̃ ) . F is initialized as an identity map in training . Finally , to obtain the x_c from q_c , we apply the above procedure recursively until the dimension of the coarsest scale is small enough so that q_c can be easily sampled by a standard method . 2.2 MULTISCALE INVERTIBLE GENERATIVE NETWORK : ARCHITECTURE . Our proposed MsIGN has multiple levels to recursively apply the above strategy . We denote by L the number of levels , by x_l ∈ R^{d_l} the sample at level l , and by A_l : R^{d_l} → R^{d_{l−1}} the pooling operator from level l to l−1 : x_{l−1} = A_l ( x_l ) . Following the idea in Section 2.1 , we can define the l-th level target q_l ( x_l ) and surrogate q̃_l ( x̃_l ) , and the last-level target q_L is our original target q in Equation 1 . The l-th level of MsIGN uses a prior conditioning layer PC_l and an inverse transform F_l to capture q_l . Prior conditioning layer . The prior conditioning layer PC_l at level l lifts a coarse-scale sample x_{l−1} ∈ R^{d_{l−1}} up to a random fine-scale one x_l ∈ R^{d_l} following the conditional distribution ρ ( x_l|x_{l−1} ) . The difference in dimension is compensated by a Gaussian noise z_l ∈ R^{d_l − d_{l−1}} , which is the source of randomness : x_l = PC_l ( x_{l−1} , z_l ) . PC_l depends only on the prior conditional distribution ρ ( x_l|x_{l−1} ) , and thus can be pre-computed independently for different levels regardless of the likelihood L. When the prior is Gaussian and the pooling operators are linear ( e.g. , average pooling ) , the prior conditional distribution is still Gaussian with moments specified as follows .
Lemma 2.1 Suppose that ρ ( x_l ) = N ( x_l ; 0 , Σ_l ) and A_l ( x_l ) = A_l x_l for some A_l ∈ R^{d_{l−1} × d_l} . Then with U_{l−1} := Σ_l A_l^T ( A_l Σ_l A_l^T )^{−1} and Σ_{l|l−1} := Σ_l − Σ_l A_l^T ( A_l Σ_l A_l^T )^{−1} A_l Σ_l , we have ρ ( x_l | x_{l−1} = A_l x_l ) = N ( x_l ; U_{l−1} x_{l−1} , Σ_{l|l−1} ) . ( We omit normalizing constants ; equivalence and approximation are up to normalization in the following . ) With the Cholesky decomposition ( or eigen-decomposition ) Σ_{l|l−1} = B_l B_l^T , we design the prior conditioning layer PC_l as below , which is invertible between x_l and ( x_{l−1} , z_l ) : x_l = PC_l ( x_{l−1} , z_l ) := U_{l−1} x_{l−1} + B_l z_l , z_l ∼ N ( 0 , I_{d_l − d_{l−1}} ) . ( 6 ) We refer readers to Appendix B for the proof of Lemma 2.1 and the invertibility in Equation 6 . When the prior is non-Gaussian or the pooling operators are nonlinear , there exists a nonlinear invertible prior conditioning operator x_l = PC_l ( x_{l−1} , z_l ) such that x_l follows the prior conditional distribution ρ ( x_l|x_{l−1} ) given x_{l−1} and z_l ∼ N ( 0 , I_{d_l − d_{l−1}} ) . We can pre-train an invertible network to approximate this sampling process , and fix it as the prior conditioning layer . Invertible flow . The invertible flow F_l at level l modifies the surrogate q̃_l towards the target q_l . The more accurate the multiscale structure in Equation 3 is , the better q̃_l approximates q_l , and the closer F_l is to the identity map . Therefore , we parameterize F_l by some flow-based generative model and initialize it as an identity map . In practice , we utilize the invertible block of Glow ( Kingma & Dhariwal , 2018 ) , which consists of actnorm , an invertible 1×1 convolution , and an affine coupling layer , and stack several such blocks as the invertible flow F_l in MsIGN . Overall model . MsIGN is a bijective map between the random noise inputs at different scales { z_l }_{l=1}^{L} and the finest-scale sample x_L . The forward direction of MsIGN maps { z_l }_{l=1}^{L} to x_L as below : x_1 = F_1 ( z_1 ) , x̃_l = PC_l ( x_{l−1} , z_l ) , x_l = F_l ( x̃_l ) , 2 ≤ l ≤ L . ( 7 ) As a flow-based generative model , sample generation as in Equation 7 and density evaluation of p_θ ( x ) by the change-of-variables rule are accessible and fast for MsIGN . In scenarios where certain bounds need to be enforced on the output , we can append element-wise output activations at the end of MsIGN . For example , image synthesis can use the sigmoid function so that pixel values lie in [ 0 , 1 ] . Such activations should be bijective to keep the invertible relation between the random noise and the sample .
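Lemma 2.1 and Equation 6 translate directly into code for the Gaussian-prior, linear-pooling case. A minimal NumPy sketch; note that the paper's B_l keeps only the d_l − d_{l−1} non-degenerate noise directions, while for simplicity the sketch keeps a full (rank-deficient) square factor, and all matrices below are toy placeholders.

```python
import numpy as np

def build_prior_conditioning(Sigma_l, A_l):
    """Precompute U_{l-1} and B_l of Eq. 6 from the prior covariance Sigma_l
    and the linear pooling operator A_l (Lemma 2.1)."""
    M = A_l @ Sigma_l @ A_l.T                        # A Sigma A^T
    U = Sigma_l @ A_l.T @ np.linalg.inv(M)           # U_{l-1}
    Sigma_cond = Sigma_l - U @ A_l @ Sigma_l         # Sigma_{l|l-1}
    # Sigma_{l|l-1} is only positive semi-definite, so use an eigen-decomposition
    # to obtain a factor B with B B^T = Sigma_{l|l-1}.
    w, V = np.linalg.eigh(Sigma_cond)
    B = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
    return U, B

def prior_conditioning_sample(U, B, x_coarse, rng):
    """x_l = U x_{l-1} + B z_l with z_l ~ N(0, I)."""
    z = rng.standard_normal(B.shape[1])
    return U @ x_coarse + B @ z

# toy usage: average-pool a 4-d standard Gaussian down to 2-d and lift a coarse sample back up
rng = np.random.default_rng(0)
Sigma = np.eye(4)
A = np.array([[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]])
U, B = build_prior_conditioning(Sigma, A)
print(prior_conditioning_sample(U, B, np.array([1.0, -1.0]), rng))
```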
This paper presents a model and a corresponding training approach for multi-scale invertible models. The presented model is defined on multiple scales with information on finer scales being conditioned on coarser scales. Data generation is hence done sequentially from a coarser to finer scale. The authors argue that this multi-scale sampling helps in addressing the curse of dimensionality problem by allowing to sample from high density regions more efficiently.
SP:92d112388a1eac20c2208f0596cdfcdcca685c8f
This study is presented clearly, and the core idea is interesting. However, the presented novelty is limited to a globally (for all tasks) and locally (task-specific) learning paradigm using a framework inspired by (Badirli et al., 2020). The authors have presented experimental results for both regression and classification setups, which are interesting.
SP:077926a214f87b9fdcd5a5f9d818d6313437cd90
1 INTRODUCTION . There is a surge of interest in studying test-time adaptation to help generalization to unseen domains ( e.g. , recent work by Sun et al . ( 2020 ) ; Wang et al . ( 2020 ) ; Nado et al . ( 2020 ) ) . At a high level , a generic test-time adaptation can be modeled as an algorithm Γ which accepts an ( optional ) labeled training dataset D , an ( optional ) model F trained on D ( usually used as a starting point ) , and an unlabeled test feature set U , and outputs a model F̃ = Γ ( F , D , U ) , in order to achieve high test accuracy on U . For a large test set U , test-time adaptation can be viewed as a form of transductive learning ( Joachims ( 1999 ) ; Vapnik ( 1998 ) ) ( i.e. , using D , U to train a model to predict specific instances in U ) , which is argued to be easier than more traditional inductive learning . This paper studies test-time adaptation in the context of adversarial robustness ( i.e. , there is an active agent who tries to fool the test-time adaptation by perturbing the input so that F̃ gives wrong predictions ) . There are several motivations for pursuing this direction . First , this question is of practical interest : Many practical ML pipelines run in a batch mode ( for example , Instagram collects a large batch of photos before sending them to a model to tag people ) , where they first collect a set of unlabelled data points , and then send them to a model ( e.g . Nado et al . ( 2020 ) ) . In such cases , data in the batch may have been adversarially perturbed , and it is a natural question whether we can leverage the large batch size and test-time adaptation to enhance adversarial robustness . Second , from a purely theoretical point of view , since test-time adaptation is a form of transductive learning , it is intriguing to ask whether transductive adversarial learning can be easier , given that traditional adversarial robustness is formulated in the inductive learning setting ( e.g . Madry et al . ( 2018 ) ) . To this end , a recent work by Goldwasser et al . ( 2020 ) shows that , with transductive learning , one can achieve nontrivial guarantees for classes of bounded VC dimension with arbitrary train and test distributions . The current work complements their paper in the setting of deep learning . To study these questions , we formalize a threat model , which we call the ( test-time ) maximin threat model , for the adversarial robustness of test-time adaptation . Recall that the classic adversarial robustness game is a minimax game min_F E_V [ max_Ṽ L ( F , Ṽ ) ] , where V is randomly sampled data , Ṽ is the perturbed data generated from V by the adversary , and L ( F , Ṽ ) is the loss of the model F on Ṽ . By contrast , in the maximin threat model , we allow V to be sampled from a different domain , and the game is maximin : E_V [ max_U min_F̃ L ( F̃ , Ṽ ) ] ( where U is the perturbed features of V , subject to the attack type , and Ṽ is the labeled perturbed data , see Definition 2 ) . By the maximin inequality , it follows that this threat model is no harder than the minimax model ( to allow source and target domains to be different , we need to generalize the classic minimax model , see Definition 3 ) . We then move on to the focus of this work : whether the maximin threat model is “ strictly weaker ” than the minimax threat model .
We note that any good defender solution ( a robust model ) in the minimax game induces a good defender solution in the maximin game ( an adaptation algorithm that outputs that robust model ) ; thus , intuitively , the good defender solutions of the minimax model are a subset of the good defender solutions of the maximin threat model . We ask whether such a containment is proper : that is , whether there exists a defender solution that is good in the maximin threat model , but is bad in the minimax threat model . The existence of such a defender will demonstrate that the maximin threat model admits more good solutions . Besides theoretical interest , this question is also of practical importance since these “ new ” solutions may possess desirable properties that good solutions in the minimax threat model may lack . For example , one such property is that the defender solution is attack agnostic ( Goodfellow ( 2018 ) ( pp.30 ) ) : that is , the solution does not directly optimize the performance measure for a particular type of perturbation . To this end , we first present a provable separation between the maximin and minimax threat models in a natural Gaussian data model . In fact , the separation holds even when U only contains a single point , indicating the power of transductive learning . We then move to deep learning . While we do not have provable guarantees , we empirically examine Domain Adversarial Neural Networks ( DANN ) ( Ganin et al . ( 2017 ) ) , an algorithm designed for unsupervised domain adaptation ( UDA ) , as a candidate for the separation . Specifically , we demonstrate that DANN provides nontrivial test-time adversarial robustness against both transfer attacks and adaptive attacks , in both homogeneous and inhomogeneous cases . This is somewhat surprising as DANN is attack agnostic as we mentioned above , and has not been considered for adversarial robustness . Not surprisingly , as we hypothesized for a separation , the accuracy becomes very low when evaluating F̃ in the minimax model . Complementing the above result , we explore the maximin robustness of recent data-oblivious adaptation algorithms ( namely , adaptation algorithms that do not use D , but just the pretrained model F and the unlabeled test set U ) . Specifically , we consider Test-Time Training ( TTT ) by Sun et al . ( 2020 ) . We show that TTT can be easily attacked using simple transfer attacks . While this is not surprising , as the authors of Sun et al . ( 2020 ) have cautioned that TTT is not designed for adversarial robustness , the situation is in sharp contrast to our results with DANN . The rest of the paper is organized as follows : Section 2 presents the setup . In Section 3 we define threat models . In Section 4 we present theoretical results about the separation , and examine DANN as a candidate separation in the deep learning setting . Finally , Section 5 explores the maximin robustness of oblivious test-time adaptation , and concludes the paper with future directions . 2 PRELIMINARIES . Let F be a model . For a data point ( x , y ) ∈ X × Y , a loss function ℓ ( F ; x , y ) gives the loss of F on x given the true label y . Let V be a set of labeled data points . We use the notation L ( F , V ) = ( 1 / |V| ) Σ_{( x , y ) ∈ V} ℓ ( F ; x , y ) to denote the empirical loss of F on V . For example , if we use the binary loss ℓ_{0,1} ( F ; x , y ) = 1 [ F ( x ) ≠ y ] , this gives the test error of F on V . We use the notation V |X to denote the projection of V to its features , that is , { ( x_i , y_i ) }_{i=1}^{m} ↦ { x_1 , . . . , x_m } .
Threat model for classic adversarial robustness . To formulate the threat model for test-time adaptation , we first present a threat model for classic adversarial robustness . Although classic adversarial robustness can be written down succinctly as a minimax objective , namely min_F E_{( x , y ) ∼ ( X , Y )} [ max_{x′ ∈ N ( x )} ℓ ( F ; x′ , y ) ] ( N ( x ) is a neighborhood function of x , determined by the attack type ) , a threat model formulation will help us develop more nuanced models . ( Two side notes : the computational feasibility of finding a good solution , given the hardness of minimax optimization ( Katz et al. , 2017 ; Daskalakis et al. , 2020 ) , is beyond the scope of this paper ; and while TTT does not use the training data D at test time , it has a special self-training component , and the joint architecture is a Y-structure , with a more domain-agnostic approach discussed in Wang et al . ( 2020 ) . ) Definition 1 ( Threat model for classic adversarial robustness ) . Attacker and defender agree on a particular attack type . The attacker is an algorithm A , and the defender is a supervised learning algorithm T . Before the game starts . • A ( labeled ) training set D is sampled i.i.d . from ( X , Y ) . Training time . • ( Defender ) Train a model F on D as F = T ( D ) . Test time . • A ( labeled ) natural test set V is sampled i.i.d . from ( X , Y ) . • ( Attacker ) On input F , D , and V , A perturbs each point ( x , y ) ∈ V to ( x′ , y ) ( subject to the agreed attack type ) , giving Ṽ = A ( F , D , V ) . Evaluation . Evaluate the test loss of F on Ṽ , L ( F , Ṽ ) . The attacker ’ s goal is to maximize the test loss , while the defender ’ s goal is to minimize the test loss . We stress that the i.i.d . sampling of V is important ( which is also present in the expectation in the minimax objective ) : otherwise an attacker could pick any point that fools F and repeat it arbitrarily many times ( we refer readers to Goodfellow ( 2019 ) for more discussion along this line ) . Notations for models and attacks . In this paper we mainly use PGD attacks ( Projected Gradient Descent attacks ) with norm-based perturbations ( Madry et al. , 2018 ) . For example , given a model F , we use the notation PGD ( F , V ) to denote PGD attacks against F , on data V ( the attack type is specified in the context ) . We adopt the following notations : T : a target model trained on the labeled target data V ; AdvT : an adversarially trained target model using the labeled target data V ; S : a source model trained on the labeled source data D ; AdvS : an adversarially trained source model using the labeled source data D ; PGD ( · , · ) : PGD attacks on a model and data . For example , PGD ( AdvT , V ) means to use PGD attacks on the model AdvT and data V . Test-time defenses and BPDA . Various previous works have investigated test-time defenses where a pretrained model is fixed and there is a “ preprocessing procedure ” to preprocess an input before sending it to the model . Several such defenses were described and attacked in Athalye et al . ( 2018 ) , by the BPDA technique ( Backward Pass Differentiable Approximation ) . While syntactically one can fit these defenses into our framework , they only form some very special cases of our framework , which reuse a fixed pretrained model and focus on input sanitization .
As we will show later in the paper , for both of our provable separation and deep learning results , the adaptation algorithms train new models ( beyond sanitizing inputs ) ; and theoretically attacking these adaptations becomes a bilevel optimization . In these cases , it is unclear how to apply BPDA , and indeed it is an intriguing direction to further study attacking unsupervised domain adaptation algorithms , such as DANN .
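For reference, the PGD(·,·) notation used in the preliminaries corresponds to the standard projected gradient attack of Madry et al. (2018). A minimal l_inf-bounded PGD sketch in PyTorch; the epsilon, step size, and number of steps are illustrative defaults, not values taken from this paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """PGD(model, (x, y)): find an l_inf-bounded perturbation of x that maximizes the loss."""
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                          # ascend the loss
            x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)   # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                                    # stay in valid pixel range
    return x_adv.detach()
```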
SP:2969ff98eb93abe37242a962df458541311090ff
Subspace Clustering via Robust Self-Supervised Convolutional Neural Network
1 INTRODUCTION . Subspace clustering approaches have achieved encouraging performance when compared with the clustering algorithms that rely on proximity measures between data points . The main idea behind the subspace model is that the data can be drawn from low-dimensional subspaces which are embedded in a high-dimensional ambient space ( Lodhi & Bajwa , 2018 ) . Grouping such data associated with respective subspaces is known as the subspace clustering ( Vidal , 2011 ) . That is , each low-dimensional subspace corresponds to a class or category . Up to now , two main approaches for recovering lowdimensional subspaces are developed : models that are based on the self-representation property , and non-linear generalization of subspace clustering called union of subspaces ( UoS ) ( Lodhi & Bajwa , 2018 ; Lu & Do , 2008 ; Wu & Bajwa , 2014 ; 2015 ) . UoS algorithms are out of the scope of this work . Self-representation subspace clustering is achieved in two steps : ( i ) learning representation matrix C from data X and building corresponding affinity matrix A = |C|+ |CT | ; ( ii ) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix that correspond with the leading k eigenvalues . This second step is known as spectral clustering ( Ng et al. , 2002 ; Von Luxburg , 2007 ) . Owning to the presumed subspace structure , the data points obey the self-expressiveness or self-representation property ( Elhamifar & Vidal , 2013 ; Peng et al. , 2016b ; Liu et al. , 2012 ; Li & Vidal , 2016 ; Favaro et al. , 2011 ) . In other words , each data point can be represented as a linear combination of other points in a dataset : X=XC . The self-representation approach is facing serious limitations regarding real-world datasets . One limitation relates to the linearity assumption because in a wide range of applications samples lie in nonlinear subspaces , e.g . face images acquired under non-uniform illumination and different poses ( Ji et al. , 2017 ) . Standard practice for handling data from nonlinear manifolds is to use the kernel trick on samples mapped implicitly into high dimensional space . Therein , samples better conform to linear subspaces ( Patel et al. , 2013 ; Patel & Vidal , 2014 ; Xiao et al. , 2015 ; Brbić & Kopriva , 2018 ) . However , identifying an appropriate kernel function for a given data set is quite a difficult task ( Zhang et al. , 2019b ) . The second limitation of existing deep SC methods relates to their assumption that the origin of data corruption is known , in which case the proper error model can be employed . In real-word applications origin of data corruption is unknown . That can severely harm the algorithm ’ s learning process if the non-robust loss function is used . Furthermore , validation ( i.e . stopping of the learning process ) in most of the deep SC methods often requires access to the ground-truth labels . That stands for violation of the basic principle of unsupervised machine learning and yields the overly-optimistic results . Dataset size is also a limitation when it comes to memory requirements . Since the self-representation subspace clustering is based on building the affinity matrix , memory complexity increases as the square of the dataset size . However , the latter limitation is not in the main focus of this work . 
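Step (ii) of the self-representation pipeline described above is the standard spectral clustering recipe (Ng et al., 2002). A minimal sketch given a learned representation matrix C; the normalization details and the final k-means step follow the generic recipe and are not tied to this paper's exact implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering_from_C(C, k):
    """Step (ii): affinity A = |C| + |C^T|, symmetric normalization, then
    k-means on the k leading eigenvectors."""
    A = np.abs(C) + np.abs(C.T)
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = D_inv_sqrt @ A @ D_inv_sqrt          # normalized affinity, as defined in the text
    _, vecs = eigh(L)                        # eigenvalues returned in ascending order
    V = vecs[:, -k:]                         # eigenvectors of the k leading eigenvalues
    V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(V)
```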
Motivated by the exceptional ability of deep neural networks to capture complex underlying structures of data and learn discriminative features for clustering ( Hinton & Salakhutdinov , 2006 ; Dilokthanakul et al. , 2016 ; Ghasedi Dizaji et al. , 2017 ; Tian et al. , 2014 ; Xie et al. , 2016 ) , deep subspace clustering approaches emerged recently ( Ji et al. , 2017 ; Abavisani & Patel , 2018 ; Peng et al. , 2016a ; Yang et al. , 2019 ; Zhou et al. , 2018 ; Ji et al. , 2019b ; Peng et al. , 2018 ; 2017 ; Zhou et al. , 2019 ; Zhang et al. , 2019a ; Kheirandishfard et al. , 2020 ) . In particular , it is shown that convolutional neural networks ( CNNs ) , when applied to images of different classes , can learn features that lie in a UoS ( Lezama et al. , 2018 ) . Mostly , the base of the recently developed deep subspace-clustering networks is convolutional autoencoder . It is an end-to-end fully convolutional network that is based on the minimization of the reconstruction error . Together , the autoencoder and an additional self-expression ( SE ) module are forming a Deep subspace clustering network ( DSCNet ) ( Ji et al. , 2017 ) . Hence , the total loss function of DSCNet is composed of reconstruction loss and SE model loss . That is , during the learning process the clustering quality is not taken into account . Self-supervised convolutional SC network ( S2ConvSCN ) ( Zhang et al. , 2019a ) addressed this issue through the addition of a fully connected layer ( FC ) module and a spectral clustering module that , respectively , generate soft- and pseudo-labels . Dual self-supervision is achieved by forcing these two modules to converge towards consensus . Related accumulated loss , therefore , participates in enhancing the self-representation matrix and the quality of features extracted in the encoder layer . The architecture of S2ConvSCN has a possibility of direct classification once the learning process is completed . A trained encoder and the FC module can make a new network that can directly classify unseen data , also known as an out-of-sample problem . However , while this network can be validated and compared with other algorithms on a separate data set , such an ablation study was not completed . Furthermore , the main disadvantage of the DSCNet architecture , and indirectly S2ConvSCN , is that the network training is stopped when the accuracy is highest ( Ji et al. , 2019a ) . First , it is a direct violation of the unsupervised learning principle as the ground-truth labels are exposed . Second , the reported performance ( Zhang et al. , 2019a ; Ji et al. , 2017 ) is overly-optimistic and can not be compared to other algorithms . Also , as mentioned in ( Haeffele et al. , 2020 ) , most self-expressive based deep subspace clustering models suffer from the need of post-processing the self-representation matrix . Compared to the baseline model , we significantly reduced the post-processing while maintaining the noise-free matrix . Mentioned research problems led to three main contributions of proposed Robust S2ConvSCN : • robustness to errors of the unknown ( arbitrary ) origin is achieved by using the correntropy induced metric ( CIM ) in the self-expression loss , • the network is trained using the early-stopping method while monitoring only the accumulated loss , • thanks to correntropy based loss function the training process is less sensitive to data corruptions which enables the network to generalize better . 
This study has , also , three side-contributions : • the performance of models is estimated using unseen ( out-of-sample ) data , • block-diagonal regularization of the self-representation matrix is integrated into the gradient descent learning process , • post-processing of the self-representation matrix is reduced to a significant extent . A complete head-to-head comparison of the baseline S2ConvSCN model and our robust approach can be seen in Figure 1 . 2 BACKGROUND AND RELATED WORK . 2.1 MAIN NOTATIONS AND DEFINITIONS . Throughout this paper , matrices are represented with bold capital symbols and vectors with bold lower-case symbols . X ∈ R^{d×N} represents the data matrix comprised of N data samples with dimensionality d. { H_i^{( l )} }_{i=1}^{m^{( l )}} represent the feature maps produced at the output of layer l−1 . Thus , H^{( 0 )} = X and H^{( L )} = X̂ , where X̂ represents the output of the decoder and L represents the number of convolutional layers in the autoencoder . { w_i^{( l )} }_{i=1}^{m^{( l )}} stand for a set of filters with associated biases { b_i^{( l )} }_{i=1}^{m^{( l )}} that form a convolutional layer l = 1 , . . . , L. z_n = [ h_1^{( L/2 )} ( : ) . . . h_{m^{( L/2 )}}^{( L/2 )} ( : ) ]^T ∈ R^{d̂×1} stands for the feature vector comprised of vectorized and concatenated feature maps , with d̂ extracted features , in the top layer L/2 ( encoder output ) representing input sample x_n , n = 1 , . . . , N . C ∈ R^{N×N} stands for the representation matrix in the self-expressive model Z = ZC . A = |C| + |C^T| is the affinity matrix and L = D^{−1/2} A D^{−1/2} is the corresponding graph Laplacian matrix , where D is the diagonal degree matrix such that D_ii = Σ_{j=1}^{N} A_ij . ‖X‖_F = ( Σ_{i , j=1}^{N} x_ij^2 )^{1/2} is the Frobenius norm of matrix X . ℓ_p ( x ) = ‖x‖_p = ( Σ_{i=1}^{d} |x_i|^p )^{1/p} , 0 < p ≤ 1 , is the ℓ_p norm of x . ℓ_0 ( x ) = ‖x‖_0 = # { x_i ≠ 0 , i = 1 , . . . , d } , where # denotes the cardinality function , is the ℓ_0 quasi-norm of x . The S_p , 0 < p ≤ 1 , Schatten norms of matrix X are defined as the corresponding ℓ_p norms of the vector of singular values of X , i.e . S_p ( X ) = ‖σ ( X )‖_p , where σ ( X ) stands for the vector of singular values of X . Depending on the context , 0 represents a matrix/vector of all zeros and 1 represents a matrix/vector of all ones . Grouping the data according to the linear subspaces they are drawn from is known as subspace clustering ( Vidal , 2011 ) . The problem is formally defined as follows . Definition 1 . Let X = [ X_1 , . . . , X_k ] be a set of sample vectors drawn from a union of k subspaces in R^d , ∪_{i=1}^{k} { S_i } , of dimensions d_i ≪ min { d , N } , for i = 1 , . . . , k. Let X_i be a collection of N_i samples drawn from subspace S_i , N = Σ_{i=1}^{k} N_i . The problem of subspace clustering is to segment the samples into the subspaces they are drawn from . Throughout this paper , as is the case in the majority of other papers , we assume that the number of clusters k is known a priori . 2.2 APPROACHES TO SUBSPACE CLUSTERING . Usually , processes that operate in different modes generate data in real-world scenarios . Each mode models such data as lying on a subspace , while the whole process thus generates data lying on a union of subspaces ( UoS ) ( Lodhi & Bajwa , 2018 ) . The alternative to the UoS model is the self-representation based subspace model . It implies that every sample from the dataset can be represented as a linear combination of other samples from the same cluster .
While shallow models directly optimize such a self-representation matrix , their deep counterparts train the whole network to better extract features from the raw data and achieve representation linearity . Many approaches to deep subspace clustering are based on the introduction of the self-representation in the feature space ( Abavisani & Patel , 2018 ; Ji et al. , 2017 ; Peng et al. , 2016a ; Zhou et al. , 2018 ; 2019 ; Zhang et al. , 2019a ; Kheirandishfard et al. , 2020 ; Zhang et al. , 2020 ) . However , one weakness of self-expressive deep subspace clustering models is that their performance mainly depends on the self-representation matrix . Thus , elimination of the noise is done by post-processing ( Haeffele et al. , 2020 ) . It appears in many cases that , from the final performance point of view , the post-processing matters more than the depth of the network . By virtue of the self-representation property , improvements of shallow subspace clustering methods are of direct relevance to their deep counterparts . The subspace clustering task is accomplished through ( i ) learning the representation matrix C from data X , and ( ii ) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix L that correspond with the k leading eigenvalues . This second step is known as spectral clustering ( Ng et al. , 2002 ; Von Luxburg , 2007 ) . Low-rank ( Liu et al. , 2012 ; Favaro et al. , 2011 ) and sparse models ( Elhamifar & Vidal , 2013 ) are among the commonly used algorithms to solve the SC problem . They aim to learn the low-rank and sparse representation matrix by solving the following optimization problem ( Li & Vidal , 2016 ) : min_C λ ‖C‖_{S_p}^p + τ ‖C‖_p^p s.t . Z = ZC , diag ( C ) = 0 , ( 1 ) where λ and τ are nonnegative regularization constants . If the number of layers L = 0 , problem ( 1 ) is related to shallow subspace clustering . The constraint diag ( C ) = 0 is necessary to prevent sparseness-regularized optimization algorithms from converging towards the trivial solution where each data point represents itself . This constraint is not necessary for the problem constrained only by low rank . When data samples are contaminated with additive white Gaussian noise ( AWGN ) , problem ( 1 ) becomes : min_C ‖E‖_F^2 + λ ‖C‖_{S_p}^p + τ ‖C‖_p^p s.t . diag ( C ) = 0 , ( 2 ) where E stands for the modelling error ( noise ) : E = Z − ZC . ( 3 ) Alternatively , the square of the Frobenius norm of C is used for regularization ( Lu et al. , 2012 ) : min_C ‖E‖_F^2 + λ ‖C‖_F^2 . ( 4 ) Objective ( 4 ) is also used in the self-expression module of S2ConvSCN in ( Zhang et al. , 2019a ) . As seen from ( 2 ) and ( 4 ) , the MSE measure for the discrepancy between Z and its self-representation ZC is justified only for contamination by AWGN . For sample-specific corruptions ( outliers ) the proper norm is ‖E‖_{2,1} , while for large random corruptions the proper choice is ‖E‖_1 ( Liu et al. , 2012 ) . However , errors in real-world data have different origins and magnitudes and may not follow a specific probabilistic model . Sometimes , it is hard to know the true origin of the corruptions present in data . Thus , to obtain a method robust to arbitrary corruptions , we propose to introduce the CIM of the error . The rationale behind introducing any regularization on C is to reflect its structural property of block-diagonality . Even though ‖C‖_{S_p} and ‖C‖_p , 0 ≤ p ≤ 1 , in principle satisfy the enforced block-diagonality condition , their approximation of the BD structure of C is indirect ( Lu et al. , 2018 ) .
Hence , for comparison , this study proposes the introduction of a loss function with gradient-based BD regularization on the representation matrix C .
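For reference, the Frobenius-norm-regularized objective (4), without the diag(C) = 0 constraint, has the well-known closed-form solution C = (Z^T Z + λI)^{−1} Z^T Z (Lu et al., 2012); a short sketch of that baseline step:

```python
import numpy as np

def least_squares_representation(Z, lam):
    """Closed-form minimizer of ||Z - ZC||_F^2 + lam * ||C||_F^2 (objective 4)."""
    N = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(N), Z.T @ Z)

# toy usage: 20 samples of dimensionality 5
Z = np.random.randn(5, 20)
C = least_squares_representation(Z, lam=0.1)
print(C.shape)   # (20, 20)
```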
This paper presents an approach to deep subspace clustering based on minimizing the correntropy induced metric (CIM), with the goal of establishing when training should be stopped and generalizing to unseen data. The main contribution over the existing S2ConfSCN method is a change from squared error loss to CIM when optimizing over the affinity matrix. A key benefit of CIM as a loss is that it does not decrease arbitrarily with training epochs, so it provides a means of estimating when training should cease without needing ground truth labels. The authors argue that CIM "ensures a smooth decrease of the loss function that enables the use of label-free stopping criterion." However, this claim is only justified through a minimal empirical evaluation. The authors also include a means of enforcing block diagonal structure in the learned affinity matrix.
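The summary refers to the correntropy induced metric but does not reproduce its formula. A common definition from the correntropy literature, which the sketch below assumes (a Gaussian kernel normalized so that κ(0) = 1), is CIM^2(E) = mean(1 − exp(−e_ij^2 / 2σ^2)); because each term is bounded by 1, a few gross outliers cannot dominate the loss, which is consistent with the robustness argument above.

```python
import numpy as np

def cim_loss(E, sigma=1.0):
    """Correntropy-induced metric of a residual matrix E, assuming a Gaussian
    kernel with kappa(0) = 1; each entry contributes at most 1 to the mean."""
    k = np.exp(-(E ** 2) / (2.0 * sigma ** 2))    # kappa_sigma(e_ij)
    return np.sqrt(np.mean(1.0 - k))

# compared with the mean squared error, a single gross outlier barely moves the CIM
E_clean = np.random.randn(10, 10) * 0.1
E_outlier = E_clean.copy()
E_outlier[0, 0] = 100.0
print(cim_loss(E_clean), cim_loss(E_outlier))
```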
SP:b7532fd6e281d88fff5a0a89c73ae3e6651f8827
UNSUPERVISED ANOMALY DETECTION FROM SEMANTIC SIMILARITY SCORES
The authors present a new Algorithm for performing unsupervised anomaly detection in diverse applications such as visual, audio and text data. They propose a two-step method in which first they utilise contrastive learning in order to find a semantically dense map of the data onto the unit-hypersphere. Then, they classify neighbouring pairs of test examples as in- or out-of- distribution based on the amount of the shared semantic information. Finally, they show that in several anomaly detection problems in the field of visual data their proposed method outperforms several existing methods.
SP:f0e0d909df518f25eb9243837939225d7db1196e
Learning to Generate 3D Shapes with Generative Cellular Automata