Reweighted Proximal Pruning for Large-Scale Language Representation
1 INTRODUCTION

Pre-trained language representations such as GPT (Radford et al., 2018), BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) have shown substantial performance improvements using self-supervised training on large-scale corpora (Dai & Le, 2015; Peters et al., 2018; Radford et al., 2018; Liu et al., 2019a). More interestingly, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering (Rajpurkar et al., 2016; 2018) and language inference (Bowman et al., 2015; Williams et al., 2017), without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful (Devlin et al., 2019). However, along with the significant performance enhancement, the parameter volume and complexity of these pre-trained language representations increase significantly. As a result, it becomes difficult to deploy these large-scale language representations on real-life computation-constrained devices such as mobile phones and edge devices.

Throughout this paper, we attempt to answer the following questions. Question 1: Is it possible to compress large-scale language representations such as BERT via weight pruning? Question 2: How would the weight-pruned, pre-trained model affect the performance of the downstream multi-task transfer learning objectives?

The problem of weight pruning has been studied for many types of deep neural networks (DNNs) (Goodfellow et al., 2016), such as AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016), and MobileNet (Howard et al., 2017). It has been shown that weight pruning can result in a notable reduction in model size. A suite of weight pruning techniques has been developed, such as non-structured weight pruning (Han et al., 2015), structured weight pruning (Wen et al., 2016), filter pruning (Li et al., 2016), channel pruning (He et al., 2017), ADMM-NN (Ren et al., 2019), and PCONV (Ma et al., 2019), to name a few. Different from pruning CNN-type models, pruning BERT must not only consider the metrics on the pre-training task, but also make allowance for the downstream multi-task transfer learning objectives. Thus, the desired weight pruning needs to preserve the capacity of transfer learning from a sparse pre-trained model to downstream fine-tuning tasks.

In this work, we investigate irregular weight pruning techniques on the BERT model, including the iterative pruning method (Han et al., 2015) and the one-shot pruning method (Liu et al., 2019b). However, these methods fail to converge to a sparse pre-trained model without incurring a significant accuracy drop, or in many cases do not converge at all (see supporting results in the Appendix). Note that the aforementioned weight pruning techniques are built on different sparsity-promoting regularization schemes (Han et al., 2015; Wen et al., 2016), e.g., lasso regression ($\ell_1$ regularization) and ridge regression ($\ell_2$ regularization). We find that the failure of previous methods on weight pruning of BERT is possibly due to the inaccurate sparse pattern learnt from the simple $\ell_1$- or $\ell_2$-based sparsity-promoting regularizer.
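For contrast, the sparsity-promoting baseline these pruning pipelines build on simply adds a lasso penalty to the training loss and thresholds small weights afterwards. The sketch below is a generic PyTorch rendering of that recipe, not the exact setup of Han et al. (2015) or Wen et al. (2016); `gamma` and `threshold` are illustrative values:

```python
import torch

def l1_regularized_loss(model, task_loss, gamma=1e-5):
    # Lasso-style objective: the penalty gradient is folded into the same
    # gradient-based update as the task loss, which is precisely the coupling
    # identified above as problematic for very deep models such as BERT.
    penalty = sum(p.abs().sum() for p in model.parameters())
    return task_loss + gamma * penalty

def magnitude_prune_(model, threshold=1e-2):
    # After regularized training, zero out weights whose magnitude fell
    # below the threshold, fixing the sparse pattern in place.
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() >= threshold).to(p.dtype))
```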
In fact, the difficulty of applying regularization to generate weight sparsity coincides with the observation of Loshchilov & Hutter (2018) on the incompatibility of conventional weight decay ($\ell_2$ regularization) for training super-deep DNNs such as BERT. They point out that the main reason is that direct optimization of a regularization penalty term causes divergence from the original loss function and has a negative effect on the effectiveness of the gradient-based update. To mitigate this limitation, Loshchilov & Hutter (2018) modified the regularization in Adam by decoupling weight decay regularization from the gradient-based update, and achieved state-of-the-art results on large-scale language pre-training and downstream multi-task transfer learning objectives (Devlin et al., 2019).

In this work, we aim at a more accurate universal sparse pattern search (see Figure 1 for an overview of our approach), motivated by our experiments and the conclusion of Loshchilov & Hutter (2018). We propose Reweighted Proximal Pruning (RPP), which integrates reweighted $\ell_1$ minimization (Candes et al., 2008) with the proximal algorithm (Parikh et al., 2014). RPP consists of two parts: reweighted $\ell_1$ minimization and the proximal operator. Reweighted $\ell_1$ minimization serves as a better method of generating sparsity in DNN models, matching the nature of weight pruning, compared with $\ell_1$ regularization. Thanks to the closed-form solution of the proximal operation on a weighted $\ell_1$ norm, in RPP the sparsity pattern search can be decoupled from computing the gradient of the training loss. In this way, the aforementioned pitfall of prior weight pruning techniques on BERT can be avoided. We show that RPP achieves effective weight pruning on BERT for the first time, to the best of our knowledge. Experimental results demonstrate that the proximally pruned BERT model keeps high accuracy on a wide range of downstream tasks, including SQuAD (Rajpurkar et al., 2016; 2018) and GLUE (Wang et al., 2018).

We summarize our contributions as follows.
• We develop the pruning algorithm Reweighted Proximal Pruning (RPP), which achieves the first effective weight pruning result on a large pre-trained language representation model, BERT. RPP achieves 59.3% weight sparsity without inducing performance loss on either pre-training or fine-tuning tasks.
• We spotlight the relationship between the pruning ratio of the pre-trained DNN model and the performance on the downstream multi-task transfer learning objectives. We show that many downstream tasks, except for SQuAD, allow at least an 80% pruning ratio, compared with 59.3% under the more challenging task SQuAD.
• We observe that as the pruning ratio of the pre-trained language model increases, the performance on the downstream transfer learning tasks decreases, and the extent of the drop varies across downstream transfer learning tasks. However, the proposed RPP approach is able to achieve a consistently high pruning ratio compared to iterative pruning based methods.
• We show that, different from weight pruning in image classification tasks, RPP helps to find structured sparsity patterns in the transformer blocks used in BERT. Moreover, we peer into the effect of network pruning on the language representation embedded in BERT.

2 RELATED WORK

BERT and prior work on model compression. BERT (Devlin et al., 2019) is a self-supervised approach for pre-training a deep transformer encoder (Vaswani et al.
, 2017), before fine-tuning it for particular downstream tasks. Pre-training of BERT optimizes two training objectives, masked language modeling (MLM) and next sentence prediction (NSP), which require a large collection of unlabeled text. We use BooksCorpus (800M words) (Zhu et al., 2015) and the English instance of Wikipedia (2,500M words) as the pre-training corpus, the same as Devlin et al. (2019). For detailed information about the BERT model, readers can refer to the original paper (Devlin et al., 2019).

Michel et al. (2019) mask some heads in the multi-head attention modules of BERT and then evaluate the performance on a machine translation task. Similarly, Hao et al. (2019) eliminate certain heads in the multi-head attention module. First, this limited previous work does not consider the pre-training metrics or the other downstream multi-task transfer learning objectives. It considers only a specific machine translation task (out of more than ten transfer tasks), which is a single fine-tuning setting and thus limited for a universal pre-trained language representation such as BERT. Second, the multi-head attention module uses a weight sharing mechanism (Vaswani et al., 2017), so masking some heads does not reduce the weight volume. Finally, multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions, while a single attention head inhibits this effect (Vaswani et al., 2017). As a result, masking some heads in multi-head attention harms the weight sharing mechanism without any weight volume reduction. In summary, the limited previous work in this area does not provide an effective weight pruning method for BERT. Shen et al. (2019) report quantization results for the BERT model, which are orthogonal to our work and can be combined with it for further compression/acceleration.

Reweighted $\ell_1$ and the proximal algorithm. Candes et al. (2008) present the reweighted $\ell_1$ algorithm and demonstrate its remarkable performance and broad applicability in the areas of statistical estimation, error correction, and image processing. Proximal algorithms can be viewed as an analogous tool for non-smooth, constrained, large-scale, or distributed versions of these problems (Parikh et al., 2014). To the best of our knowledge, ours is the first work that applies reweighted $\ell_1$ minimization to network compression, particularly for BERT pruning.

3 REWEIGHTED PROXIMAL PRUNING FOR LARGE-SCALE LANGUAGE REPRESENTATION DURING PRE-TRAINING

Pruning for pre-trained language representations should not only consider the performance of the pre-training objectives, but also make allowance for the downstream fine-tuning transfer learning tasks. Let $f_i$ denote the loss function of the network for downstream task $T_i \sim p(T)$, where $p(T)$ denotes the distribution of tasks. Let $w$ denote the parameters of the pre-trained model (pre-training in BERT), and $z_i$ denote the $i$-th task-specific model parameters (fine-tuning in BERT). The downstream tasks have separate fine-tuned models, even though they are initialized with the same pre-trained parameters (Devlin et al., 2019). Starting from the pre-trained parameters $w$, the parameters $z_i(w)$ are obtained through fine-tuning

$$\min_{w \in \mathbb{R}^d} f_i(w) \quad (1)$$

3.1 PRUNING FORMULATION IN TRANSFER LEARNING
Following the conventional weight pruning formulation, we first consider the problem of weight pruning during pre-training:

$$\min_{w \in \mathbb{R}^d} f_0(w) + \gamma \|w\|_p \quad (2)$$

where $f_0$ is the loss function of pruning, $p \in \{0, 1\}$ denotes the type of regularization norm, and $\gamma$ is a regularization parameter. We note that the sparsity-promoting regularizer in the objective could also be replaced with a hard $\ell_p$ constraint, $\|w\|_p \le \tau$ for some $\tau$. Let $\hat{w}$ denote the solution to problem (2); the corresponding sparse pattern $S_{\hat{w}}$ is given by

$$S_{\hat{w}} = \{ i \mid \hat{w}_i = 0, \forall i \in [d] \} \quad (3)$$

For a specific transfer task $i$, we allow an additional retraining/fine-tuning step to train/fine-tune weights starting from the pre-training result $\hat{w}$ and subject to the determined, fixed sparse pattern $S_{\hat{w}}$, denoted as $z_i(\hat{w}; S_{\hat{w}})$. That is, we solve the modified problem (1)

$$\min_{z_i} f_i(z_i(\hat{w}; S_{\hat{w}})) \quad (4)$$

Here, different from (1), the task-specific fine-tuning weight variable $z_i(\hat{w}; S_{\hat{w}})$ is now defined over $S_{\hat{w}}$. Our goal is to seek a sparse (weight-pruned) model during pre-training, with weight collection $\hat{w}$ and sparsity pattern $S_{\hat{w}}$, which can perform as well as the original pre-trained model over multiple new tasks (indexed by $i$). These fine-tuned models $z_i(\hat{w}; S_{\hat{w}})$ (for different $i$) share the identical universal sparsity $S_{\hat{w}}$.
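To make the decoupling concrete, one RPP-style update can be sketched as a gradient step on the training loss followed by the closed-form proximal map of the weighted $\ell_1$ norm, with the reweighting rule of Candes et al. (2008) refreshing the per-weight coefficients. This is a minimal PyTorch sketch assuming plain proximal gradient descent; the paper pairs the proximal step with an Adam-style optimizer, and `lr`, `gamma`, and `eps` are illustrative:

```python
import torch

def rpp_step(params, alphas, lr, gamma, eps=1e-3):
    """One proximal update in the spirit of RPP (a sketch, not the authors' code).

    params and alphas are matching lists of tensors; each param must already
    hold its gradient from a backward pass on the *unregularized* training loss.
    gamma sets the sparsity strength; eps is the reweighting stabilizer of
    Candes et al. (2008).
    """
    with torch.no_grad():
        for w, a in zip(params, alphas):
            w -= lr * w.grad                 # plain gradient step on the training loss
            thr = lr * gamma * a             # per-weight threshold from the weighted l1 term
            # closed-form proximal operator of the weighted l1 norm (soft-thresholding):
            # this step alone decides the sparse pattern, decoupled from the gradient
            w.copy_(torch.sign(w) * torch.clamp(w.abs() - thr, min=0.0))
            # reweighting rule: smaller weights receive larger penalties next round
            a.copy_(1.0 / (w.abs() + eps))
```

Since thresholding, not the penalty gradient, zeroes the weights, the sparsity pattern $S_{\hat{w}}$ emerges from the proximal step alone.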
Models such as BERT are pre-trained language models which provide significant improvements on different tasks; however, they suffer from huge size and complexity. This paper proposes using proximal gradient descent to find sparse weights for BERT, reducing the number of parameters and making the model smaller. The authors concentrate on the drawbacks of previous sparsity-based approaches and claim that they have convergence issues (they provide some evidence in the appendix). Therefore, they propose a reweighted sparsity method and optimise it using proximal gradient descent, which provides a closed-form solution for the sparsity constraint.
SP:fd30a45391475363c65ab80a809f654676cbea71
The paper proposes a new weight pruning approach designed with large-scale pre-trained language representations like BERT in mind. Such a method is desirable for deploying these models on memory-limited devices such as phones. Experiments on the SQuAD and GLUE datasets show that a pruned version of the model maintains high accuracy on these tasks.
SP:fd30a45391475363c65ab80a809f654676cbea71
Network Pruning for Low-Rank Binary Index
1 INTRODUCTION

Numerous parameter pruning techniques have been introduced based on the observation that significant amounts of connectivity can be removed without sacrificing model accuracy. Current active research strives to enhance the pruning rate at the cost of additional computations (Guo et al., 2016; Molchanov et al., 2017), to reduce the computational complexity of pruning procedures (Zhu & Gupta, 2017; Han et al., 2015), and to find the underlying principles of pruning (Liu et al., 2019; Frankle & Carbin, 2019), to name a few.

Figure 1 shows a dense matrix after pruning redundant parameters and various masking index representations. Fine-grained parameter pruning (i.e., each parameter is individually evaluated to be pruned) results in sparse matrices, which can be represented by various formats depending on how the indices of non-zero weights are described (Lee et al., 2018a). A desirable index format in sparse DNNs should produce 1) a low memory footprint for the index and 2) high parallelism for index storage accesses. Unfortunately, the widely used compressed sparse row (CSR) format involves two index values for each non-zero weight, while the number of bits to represent indices increases as the parameter size increases. Even though the binary index form is a regular structure that can utilize full memory bandwidth, sparsity does not reduce its memory footprint.

A critical concern about these sparse matrix formats (CSR format and binary index format) is that the relative amount of index data increases as the number of bits to represent a non-zero weight shrinks due to advanced quantization techniques. Table 1 describes the index size divided by the amount of non-zero weight data, assuming that sparsity is S and non-zero weights are quantized to Q bits. It is clear that reducing quantization bits aggravates the issue of an overly large index portion, even if a relative indexing scheme (Han et al., 2016b) with 5 bits is used for the CSR format. If pruning is performed in a fine-grained manner to improve the overall pruning rate, then each row/column or block exhibits different sparsity. Consequently, row/column-wise or block-wise computations require vastly different computation latencies, leading to significantly degraded parallelism (Han et al., 2016a). Thus, some recent pruning techniques suggest removing connectivity in a well-structured form (Wen et al., 2016; He et al., 2017) or at a block level (Yu et al., 2017; Narang et al., 2017).

In this paper, we propose a new fine-grained pruning method to find an efficient sparse matrix representation based on binary index-matrix factorization. As shown in Figure 1, our proposed index compression scheme decomposes a binary index matrix into two small binary matrices in order to reduce index storage and maintain a regular structure. Binary matrix multiplication is inherently parallelizable, and sparse non-zero weights can be decoded with high parallelism using recent approaches (Ahn et al., 2019). Thus, decoding sparse matrices can be performed with high performance. In order to accomplish such a new indexing form, we propose an algorithm that finds a particular fine-grained pruning result by generating a low-rank binary index matrix. Since the factorization may not exactly reproduce the original binary index matrix, we investigate whether a low-rank binary index matrix produces a pruning result that maintains acceptable model accuracy.
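To make the trend behind Table 1 concrete, the back-of-the-envelope sketch below compares index overhead per non-zero weight for the two formats discussed above. The 5-bit relative CSR index comes from the text; the exact entries of Table 1 are not reproduced in this excerpt, so the numbers are illustrative only:

```python
def index_overhead(S, Q, fmt):
    """Index bits per non-zero weight, relative to its Q-bit value.

    Illustrative reconstruction of the comparison behind Table 1. Assumes a
    dense binary mask of 1 bit per matrix element and, per the text, a 5-bit
    relative CSR index per non-zero weight.
    """
    if fmt == "binary":
        bits = 1.0 / (1.0 - S)   # the mask covers every element, kept or pruned
    elif fmt == "csr":
        bits = 5.0               # relative index per non-zero weight (Han et al., 2016b)
    else:
        raise ValueError(fmt)
    return bits / Q              # overhead grows as quantization shrinks Q

print(index_overhead(S=0.9, Q=4, fmt="binary"))  # 2.5: index costs 2.5x the weights
```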
Our proposed binary matrix factorization technique significantly reduces the amount of index data compared to using CSR, as demonstrated in the next sections. We also introduce a tiling technique to alleviate the on-chip memory size and the burden of on-chip binary matrix multiplication, and to further improve the compression ratio. Recently, a regular-structured index compression method was proposed using the Viterbi algorithm (Lee et al., 2018a), which explores sequences to be used as pruning mask bits, decompressed by Viterbi encoders. Even though its compression rate improves over CSR, a large number of XOR gates and delay units is required for every input bit. In comparison, our proposed decompression relies on simple binary matrix multiplications and achieves even higher compression.

2 BINARY MATRIX FACTORIZATION FOR PRUNING INDEX

Suppose that a $5 \times 5$ weight matrix $W$ is given as

$$W = \begin{bmatrix} -0.1 & 0.9 & 1.2 & -0.2 & -0.6 \\ 1.8 & 0.2 & -0.7 & -1.6 & 0.6 \\ -0.1 & -1.7 & 0.1 & -0.3 & 1.2 \\ -0.4 & 1.4 & -0.9 & 0.6 & 1.4 \\ -1.1 & 0.5 & 1.0 & 1.0 & -0.3 \end{bmatrix} \quad (1)$$

Following the magnitude-based pruning method (Han et al., 2015), all weights with magnitude smaller than a certain threshold value are pruned to zero. For example, for a threshold of 0.7, we obtain the following pruning index (i.e., binary masking layer) matrix $I$:

$$I = \begin{bmatrix} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 \end{bmatrix} \quad (2)$$

It is important to note that magnitude-based pruning is not an optimal solution (Lee et al., 2018a; LeCun et al., 1990; Han et al., 2016b) for maximizing the pruning rate, i.e., a variety of masking layers exist for the same pruning rate. Binary matrix factorization (BMF) (Zhang et al., 2007) is a reasonable approach to compress $I$. For $I \in \{0,1\}^{m \times n}$, BMF tries to find the best $I_p \in \{0,1\}^{m \times k}$ and $I_z \in \{0,1\}^{k \times n}$ to approximate $I$ as $I \approx I_p \otimes I_z = I_a$, where $k$ corresponds to the binary matrix factorization rank and $\otimes$ stands for binary matrix multiplication. The binary product of $I_p$ and $I_z$ is then defined as

$$(I_a)_{i,j} = \bigvee_{l=1}^{k} (I_p)_{i,l} \wedge (I_z)_{l,j} \quad (3)$$

While BMF should minimize the number of mismatched bits between $I$ and $I_a$, such an optimization is NP-hard. Hence, several heuristic methods for BMF have been proposed in the literature (Zhang et al., 2007). Moreover, BMF using $I$ alone (without referring to $W$) lacks weight magnitude information. Yet, in the context of magnitude-based pruning, weights of small magnitude should be pruned with higher probability (i.e., they have lower importance). Therefore, we explore a method that preserves the importance of each weight when it is not possible to exactly reproduce $I$ through matrix factorization.

2.1 BINARY MATRIX FACTORIZATION BASED ON NON-NEGATIVE MATRIX FACTORIZATION

Non-negative matrix factorization (NMF) factorizes a real-valued matrix $H$ into two real-valued matrices $H_1$ and $H_2$ under the constraint that all three matrices consist of non-negative elements. Similar to singular value decomposition (SVD), NMF attempts to minimize $\|H - H_1 H_2\|_F^2$, where $\|H\|_F$ denotes the Frobenius norm of the matrix $H$. The non-negativity property of NMF is useful for the analysis of various multivariate data. Numerous numerical approximations of NMF have been suggested, since an exact solution of NMF is not generally available (Lee & Seung, 1999; Zhang et al., 2007). In our proposed technique, we take the magnitude of each element of $W$ to generate $M$ (i.e., $M_{i,j} = |W_{i,j}|$), and $M$ is factorized by an NMF library (e.g., Zitnik & Zupan, 2012) into two matrices $M_p$ and $M_z$.
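Before continuing the worked example, note that both the masking rule of eq. (2) and the binary product of eq. (3) are one-liners in NumPy; the following sanity check is illustrative, not code from the paper:

```python
import numpy as np

W = np.array([[-0.1,  0.9,  1.2, -0.2, -0.6],
              [ 1.8,  0.2, -0.7, -1.6,  0.6],
              [-0.1, -1.7,  0.1, -0.3,  1.2],
              [-0.4,  1.4, -0.9,  0.6,  1.4],
              [-1.1,  0.5,  1.0,  1.0, -0.3]])

# Magnitude-based mask of eq. (2): a 1 marks a surviving weight with |w| >= 0.7.
I = (np.abs(W) >= 0.7).astype(int)

def binary_product(Ip, Iz):
    # Eq. (3): (Ia)_{ij} = OR over l of ( (Ip)_{il} AND (Iz)_{lj} );
    # for 0/1 matrices, an entry is 1 exactly when the integer product is nonzero.
    return (Ip @ Iz > 0).astype(int)
```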
For example, the magnitude matrix $M$ of the matrix $W$ of Eq. (1) can be factorized into

$$M_p = \begin{bmatrix} 0.2 & 0.5 \\ 1.3 & 0.0 \\ 0.0 & 0.9 \\ 0.3 & 0.8 \\ 0.8 & 0.2 \end{bmatrix}, \quad M_z = \begin{bmatrix} 1.3 & 0.1 & 0.7 & 1.2 & 0.3 \\ 0.0 & 1.8 & 0.7 & 0.2 & 1.3 \end{bmatrix} \quad (4)$$

where the rank $k$ is 2. The next step is to convert $M_p$ and $M_z$ into two binary matrices $I_p$ and $I_z$ using threshold values $T_p$ and $T_z$ (i.e., $(I_p)_{i,j} = 1$ if $(M_p)_{i,j} \ge T_p$, or 0 otherwise). The sparsity of $I_p$ and $I_z$ can each be controlled by $T_p$ and $T_z$. Our goal is to achieve similar sparsity between $I$ and $I_p \otimes I_z$. Suppose that $T_p = 0.5$ and $T_z = 0.6$ are carefully chosen to produce sparsity similar to $I$. We obtain

$$I_p = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad I_z = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 \end{bmatrix} \quad (5)$$

and the binary product of $I_p$ and $I_z$ becomes

$$I_a = I_p \otimes I_z = \begin{bmatrix} 0 & 1 & 1 & 0 & \underline{1} \\ 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & \underline{1} & 0 & 1 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 \end{bmatrix} \quad (6)$$

Compared with the pruning-index matrix $I$ in Eq. (2), there are 2 mismatched elements (underlined in Eq. (6)).

Algorithm 1: Binary pruning-index-data matrix factorization
input: $W \in \mathbb{R}^{m \times n}$, rank $k$, target sparsity $S$
output: $I_p \in \{0,1\}^{m \times k}$, $I_z \in \{0,1\}^{k \times n}$
1: Generate the magnitude matrix $M$ from $W$
2: $M_p, M_z = \mathrm{NMF}(M, k)$
3: $Cost_{\min} \leftarrow \infty$, $S_p^{\min} \leftarrow 0.0$, $S_z^{\min} \leftarrow 0.0$
4: for $S_p = 0.0$ to $1.0$ do
5: &nbsp;&nbsp;Compute $S_z$ using Eq. (7)
6: &nbsp;&nbsp;repeat
7: &nbsp;&nbsp;&nbsp;&nbsp;Convert $(M_p, M_z)$ into $(I_p, I_z)$ with $(S_p, S_z)$
8: &nbsp;&nbsp;&nbsp;&nbsp;Adjust $S_z$ depending on $(S_a - S)$
9: &nbsp;&nbsp;until $S_a \approx S$
10: &nbsp;&nbsp;$Cost \leftarrow \sum_{I_{i,j}=1,\, (I_a)_{i,j}=0} M_{i,j}$
11: &nbsp;&nbsp;if $Cost_{\min} > Cost$ then
12: &nbsp;&nbsp;&nbsp;&nbsp;$Cost_{\min} \leftarrow Cost$, $S_p^{\min} \leftarrow S_p$, $S_z^{\min} \leftarrow S_z$
13: &nbsp;&nbsp;end if
14: end for
15: Convert $(M_p, M_z)$ into $(I_p, I_z)$ with $(S_p^{\min}, S_z^{\min})$
16: return $I_p, I_z$

The rationale behind this approach is as follows: 1) if $M_{i,j}$ is large, then its corresponding $k$ components of $M_p$ and $M_z$ (i.e., $(M_p)_{i,:}$ and $(M_z)_{:,j}$) will also be large with high probability, and correspondingly, 2) binary matrix conversion using $T_p$ and $T_z$ yields a high probability of a '1' within $I_p$ and $I_z$ if the corresponding $M_{i,j}$ is large. Note that, in order to match the 'OR' operations in Eq. (3), negative values in $M_p$ or $M_z$ are not allowed; thus, SVD is not applicable for our proposed scheme.

Let $S$, $S_a$, $S_p$, and $S_z$ be the sparsity of $I$, $I_a$, $I_p$, and $I_z$, respectively. From the dot product operation in Eq. (6), the expression for the pruning rate $S$ becomes

$$S = \left( 1 - (1 - S_p)(1 - S_z) \right)^k \quad (7)$$

assuming that the probability of a bit being '0' in $I_p$ and $I_z$ follows $S_p$ and $S_z$. Then $S_z = (S^{1/k} - S_p)/(1 - S_p)$, which needs to be fine-tuned in practice. If $T_p$ and the associated $S_p$ are given, then $T_z$ and $S_z$ are automatically determined by the target pruning rate. Subsequently, given $W$ and rank $k$, it is necessary to find the $S_p$ that produces the best masking layer for pruning. In order to optimize $S_p$, we define the cost function for pruning-index compression to be $\sum M_{i,j}$ over all $(i,j)$ with $I_{i,j} = 1$ and $(I_a)_{i,j} = 0$ (i.e., the sum of the magnitudes of all weights unintentionally pruned by the binary matrix decomposition). Algorithm 1 describes a method to find the best $S_p$ by sweeping $S_p$ and monitoring $Cost$. Given the linear relationship between $S_a$ and $S_z$, the algorithm can use binary search to expedite the adjustment of $S_z$.
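The end-to-end procedure of Algorithm 1 can be sketched with an off-the-shelf NMF solver. This is an illustrative reconstruction under stated simplifications, not the authors' implementation: quantile thresholding stands in for the threshold pair $(T_p, T_z)$, so the inner repeat-loop that nudges $S_z$ until $S_a \approx S$ is not needed here:

```python
import numpy as np
from sklearn.decomposition import NMF

def bmf_prune_index(W, k, S, num_sweep=50):
    """Sketch of Algorithm 1: factorize the pruning index via NMF + thresholding.

    W: weight matrix, k: factorization rank, S: target sparsity of the mask.
    Returns the binary factors (Ip, Iz) minimizing the total magnitude of
    weights the factorized mask prunes unintentionally.
    """
    M = np.abs(W)                                    # step 1: magnitude matrix
    nmf = NMF(n_components=k, init="random", random_state=0, max_iter=500)
    Mp, Mz = nmf.fit_transform(M), nmf.components_   # step 2: M ~ Mp @ Mz
    I = (M >= np.quantile(M, S)).astype(int)         # reference magnitude mask

    def binarize(A, sparsity):                       # zero the smallest fraction
        return (A >= np.quantile(A, sparsity)).astype(int)

    best_cost, best = np.inf, None
    for Sp in np.linspace(0.05, 0.95, num_sweep):    # sweep Sp; Sz from eq. (7)
        Sz = (S ** (1.0 / k) - Sp) / (1.0 - Sp)
        if not 0.0 <= Sz < 1.0:
            continue
        Ip, Iz = binarize(Mp, Sp), binarize(Mz, Sz)
        Ia = (Ip @ Iz > 0).astype(int)               # binary product of eq. (3)
        cost = (M * ((I == 1) & (Ia == 0))).sum()    # unintentionally pruned magnitude
        if cost < best_cost:
            best_cost, best = cost, (Ip, Iz)
    return best
```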
The paper addresses the problem of reducing the computational complexity of neural network pruning. The main idea is to compute a low-rank approximation of the binary index matrix used to represent the structure of the pruned network. In the considered setup, the binary index matrix is the (sparse) boolean matrix marking the network's nonzero weights. As low-rank decomposition of binary matrices is a hard problem, the authors propose a method to approximate the solution by computing a more standard non-negative matrix factorization.
SP:8323e9c866137e2f5c7a692bbeb89cce8d2fd6df
This paper proposes a new network pruning method that generates a low-rank binary index matrix to compress index data, together with a tile-based factorization technique to save memory. The binary index can achieve a larger compression ratio than the CSR index, and the low-rank binary index can further reduce memory usage. Results for various networks, including DNNs, CNNs and LSTMs, show the effectiveness of the proposed method. The paper is well-written and easy to follow.
SP:8323e9c866137e2f5c7a692bbeb89cce8d2fd6df
Constant Time Graph Neural Networks
1 INTRODUCTION

Machine learning on graph structures has various applications such as chemo-informatics (Gilmer et al., 2017), question answering systems (Schlichtkrull et al., 2018), and recommender systems (Fan et al., 2019). Recently, a novel machine learning model for graph data called graph neural networks (GNNs) (Gori et al., 2005; Scarselli et al., 2009; Kipf & Welling, 2017; Hamilton et al., 2017) demonstrated state-of-the-art performance in various graph learning tasks. However, large-scale graphs such as social networks and web graphs contain billions of nodes, and even a linear computation cost per iteration is prohibitive. Therefore, applying GNNs to huge graphs is challenging. Although Ying et al. (2018) succeeded in applying GNNs to a web-scale network using MapReduce, it still requires massive computational resources.

There are several node sampling techniques to reduce GNN computation. For example, an empirical neighbor sampling scheme is used to speed up GraphSAGE (Hamilton et al., 2017). FastGCN employs random layer-wise node sampling (Chen et al., 2018b). Huang et al. (2018) further improved FastGCN by using an adaptive sampling technique to reduce the variance of the estimators. Chen et al. (2018a) proposed a variant of neighbor sampling which uses historical activations to reduce the estimator variance. Overall, the existing sampling techniques for GNNs work well in practice. However, these techniques either are not theoretically guaranteed in terms of approximation error or require at least a linear computation cost.

In this study, we consider the problem of approximating the embedding of one node using GNNs in constant time with maximum precision. (For example, consider predicting whether a user of an SNS clicks an advertisement by GNNs in real time, i.e., when the user accesses the service. A user may have many neighbors, but the GNN must respond in limited time, which prohibits exact computation. This motivates approximating the exact computation in limited time with a theoretical guarantee.) We analyze the neighbor sampling technique (Hamilton et al., 2017) to show that a constant number of samples suffices to guarantee the approximation error. It should be noted that neighbor sampling was originally introduced as a heuristic method without any theoretical guarantees. Specifically, given an error tolerance $\varepsilon$ and confidence probability $1 - \delta$, our analysis shows that an estimate $\hat{z}_v$ of the exact embedding $z_v$ of a node $v$ such that $\Pr[\|\hat{z}_v - z_v\|_2 \ge \varepsilon] \le \delta$, and an estimate $\widehat{\partial z_v / \partial \theta}$ of the exact gradient $\partial z_v / \partial \theta$ of the embedding $z_v$ with respect to the network parameters $\theta$ such that $\Pr[\|\widehat{\partial z_v / \partial \theta} - \partial z_v / \partial \theta\|_F \ge \varepsilon] \le \delta$, can be computed in constant time. In particular, uniform node sampling can approximate the exact embedding and its gradients within $O\left(\frac{1}{\varepsilon^{2L}} \left(\log \frac{1}{\varepsilon} + \log \frac{1}{\delta}\right)^{L-1} \log \frac{1}{\delta}\right)$ time, where $L$ denotes the number of layers. This complexity is completely independent of the number of nodes, edges, and neighbors of the input, which enables us to deal with graphs irrespective of their size. Moreover, the complexity is polynomial with respect to $\frac{1}{\varepsilon}$ and $\log \frac{1}{\delta}$. We demonstrate that the time complexity is optimal with respect to the error tolerance $\varepsilon$ when $L = 1$. Through experiments, we show that the approximation error between the exact computation and its approximation rapidly converges to zero.
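As a rough feel for the bound, the following one-liner evaluates the stated complexity for concrete $(\varepsilon, \delta, L)$; the leading constant is not given in this excerpt, so `c` is a placeholder and the output should be read only for its scaling, not its absolute value:

```python
import math

def sample_budget(eps, delta, L, c=1.0):
    # Evaluates c * (1/eps^(2L)) * (log(1/eps) + log(1/delta))^(L-1) * log(1/delta),
    # the stated query bound; note it involves no graph quantities n, m, or deg(v).
    return (c / eps ** (2 * L)
            * (math.log(1 / eps) + math.log(1 / delta)) ** (L - 1)
            * math.log(1 / delta))

print(f"{sample_budget(0.1, 0.05, L=2):.3g}")  # tolerance, confidence, layers
```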
To the best of our knowledge, this is the first constant time approximation algorithm for GNNs with a theoretical guarantee on the approximation error.

Contributions: The contributions of this paper are summarized as follows:
• We analyze the neighbor sampling technique for GraphSAGE, GAT, and GCN to provide theoretical justification. In particular, our analysis shows that the complexity is completely independent of the number of nodes, edges, and neighbors of the input.
• We show that some existing GNNs, including the original GraphSAGE (Hamilton et al., 2017), cannot be approximated in constant time by any algorithm (see Table 1 for details).
• We empirically validate our theorems using synthetic and real-world datasets.

2 RELATED WORK

2.1 GRAPH NEURAL NETWORKS

Graph neural networks (GNNs) were first introduced by Gori et al. (2005) and Scarselli et al. (2009), who obtained node embeddings by recursively applying the propagation function until convergence. Kipf & Welling (2017) proposed graph convolutional networks (GCN), which significantly outperformed existing methods, including approaches not based on neural networks. Gilmer et al. (2017) proposed message passing neural networks (MPNNs), a general framework of GNNs based on the message passing mechanism. Veličković et al. (2018) proposed graph attention networks (GAT), which incorporate the attention mechanism into GNNs. With the advent of GAT, various GNN models with attention mechanisms have recently been proposed (Wang et al., 2019; Park et al., 2019). GraphSAGE (Hamilton et al., 2017) is another GNN model, which employs neighbor sampling to reduce the computational costs of training and inference. Owing to neighbor sampling, GraphSAGE can deal with large graphs. However, neighbor sampling was introduced without any theoretical guarantee, and the number of samples is chosen empirically. An alternative computationally efficient GNN is FastGCN (Chen et al., 2018b), which employs layer-wise random node sampling to speed up training and inference. Huang et al. (2018) further improved FastGCN by using an adaptive node sampling technique to reduce the variance of the estimators; thanks to the adaptive sampling, it reduces computational costs and outperforms neighbor sampling in terms of classification accuracy and convergence speed. Chen et al. (2018a) proposed an alternative neighbor sampling technique, which uses historical activations to reduce the estimator variance and can achieve zero variance after a certain number of iterations. However, because it uses the same sampling technique as GraphSAGE to obtain the initial solution, the approximation error is not theoretically bounded until the $\Omega(n)$-th iteration. Overall, the existing sampling techniques work well in practice. However, these techniques either are not theoretically guaranteed in terms of approximation error or require at least a linear computation cost to calculate the embedding of a node and its gradient. Moreover, it is not clear whether these sampling techniques can be applied to state-of-the-art GNN models such as GAT.

2.2 SUBLINEAR TIME ALGORITHMS

Sublinear time algorithms were originally proposed for property testing (Rubinfeld & Sudan, 1996).
Sublinear property testing algorithms check whether the input has some property $\pi$, or is sufficiently far from having property $\pi$, with high probability in sublinear time with respect to the input size. Sublinear time approximation algorithms are another type of sublinear time algorithm: they calculate a value sufficiently close to the exact value with high probability in sublinear time. Constant time algorithms are a subclass of sublinear time algorithms; they run not merely in sublinear time with respect to the input size but in constant time. The proposed algorithm is classified as a constant time approximation algorithm. Examples of sublinear time approximation algorithms include minimum spanning tree in metric space (Czumaj & Sohler, 2004) and minimum spanning tree with integer weights (Chazelle et al., 2005). Parnas & Ron (2007) proposed a method to convert distributed local algorithms into constant time approximation algorithms, and constructed constant time algorithms for the minimum vertex cover and dominating set problems. A classic example of sublinear time algorithms related to machine learning is clustering (Indyk, 1999; Mishra et al., 2001). Examples of recent work in this stream include constant time approximation of the minimum value of quadratic functions (Hayashi & Yoshida, 2016) and constant time approximation of the residual error of the Tucker decomposition (Hayashi & Yoshida, 2017); these works adopted simple sampling strategies to obtain theoretical guarantees, similar to our work. In this paper, we provide a theoretical guarantee for the approximation of GNNs in constant time for the first time.

3 BACKGROUND

3.1 NOTATIONS

Let $G$ be the input graph, $V = \{1, 2, \ldots, n\}$ be the set of nodes, $n = |V|$ be the number of nodes, $E$ be the set of edges, $m = |E|$ be the number of edges, $\deg(v)$ be the degree of a node $v$, $N(v)$ be the set of neighbors of a node $v$, $x_v \in \mathbb{R}^{d_0}$ be the feature vector associated with a node $v \in V$, and $X = (x_1, x_2, \ldots, x_n)^\top \in \mathbb{R}^{n \times d_0}$ be the stacked feature vectors, where $\top$ denotes the matrix transpose.

3.2 NODE EMBEDDING MODEL

We consider the node embedding problem using GNNs. In particular, we employ the message passing neural networks (MPNNs) framework (Gilmer et al., 2017), which includes many GNN models such as GraphSAGE and GCN. Algorithm 1 shows the algorithm of MPNNs. We refer to $z_v^{(L)}$ simply as $z_v$. The aim of this study is to develop a constant time approximation algorithm for calculating the embedding vector $z_v$ and the gradients $\partial z_v / \partial \theta$ for a given node $v$ and model parameters $\theta$.

Algorithm 1 $O_z$: Exact embedding
Require: Graph $G = (V, E)$; features $X \in \mathbb{R}^{n \times d_0}$; node index $v \in V$; model parameters $\theta$.
Ensure: Exact embedding $z_v$
1: $z_i^{(0)} \leftarrow x_i \;\; (\forall i \in V)$
2: for $l \in \{1, \ldots, L\}$ do
3: &nbsp;&nbsp;for $i \in V$ do
4: &nbsp;&nbsp;&nbsp;&nbsp;$h_i^{(l)} \leftarrow \sum_{u \in N(i)} M_l(z_i^{(l-1)}, z_u^{(l-1)}, e_{iu}, \theta)$
5: &nbsp;&nbsp;&nbsp;&nbsp;$z_i^{(l)} \leftarrow U_l(z_i^{(l-1)}, h_i^{(l)}, \theta)$
6: &nbsp;&nbsp;end for
7: end for
8: return $z_v^{(L)}$

3.3 COMPUTATIONAL MODEL ASSUMPTIONS

We have to specify how the input is accessed in order to design constant time algorithms, because constant time algorithms cannot read the entire input. We follow the standard convention of sublinear time algorithms (Parnas & Ron, 2007; Nguyen & Onak, 2008).
We model our algorithm as an oracle machine that can issue queries about the input, and we measure complexity by query complexity. Algorithms can access the input only by querying the following oracles: (1) $O_{\deg}(v)$: the degree of node $v$; (2) $O_G(v, i)$: the $i$-th neighbor of node $v$; and (3) $O_{\mathrm{feature}}(v)$: the feature vector of node $v$. We assume that our algorithm can query each oracle in constant time per query.
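Under this oracle model, the neighbor-sampling estimator the paper analyzes can be sketched in a few lines. This is a simplified illustration, assuming mean aggregation in place of the model's message and update functions $M_l$ and $U_l$; `deg`, `nbr`, and `feat` play the roles of the three oracles above, and `feat(v)` is assumed to return a plain list of floats:

```python
import random

def estimate_embedding(v, deg, nbr, feat, L, r):
    """Estimate z_v by recursive uniform neighbor sampling.

    A sketch of the estimator analyzed in the paper, not its exact aggregator:
    deg(v), nbr(v, i), and feat(v) stand for O_deg, O_G, and O_feature.
    """
    if L == 0:
        return feat(v)
    # draw r neighbors uniformly with replacement instead of reading all deg(v)
    sampled = [nbr(v, random.randrange(deg(v))) for _ in range(r)]
    msgs = [estimate_embedding(u, deg, nbr, feat, L - 1, r) for u in sampled]
    # the sample mean is an unbiased estimate of the average neighbor message
    return [sum(dim) / r for dim in zip(*msgs)]
```

The recursion touches at most $r^L$ nodes, so the number of oracle queries depends only on $r$ and $L$, never on $n$ or $\deg(v)$, which is the source of the constant running time.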
In this paper, the authors propose a constant-time approximation of the graph convolution operation via a theoretical analysis of the number of samples drawn from each neighborhood. The authors prove that both the node embedding and its gradient can be approximated using a constant number of samples from the neighbors. Extensive experiments are carried out to verify the correctness of the proposed bounds.
Constant Time Graph Neural Networks
1 INTRODUCTION . Machine learning on graph structures has various applications, such as chemo-informatics (Gilmer et al., 2017), question answering systems (Schlichtkrull et al., 2018), and recommender systems (Fan et al., 2019). Recently, a novel machine learning model for graph data called graph neural networks (GNNs) (Gori et al., 2005; Scarselli et al., 2009; Kipf & Welling, 2017; Hamilton et al., 2017) demonstrated state-of-the-art performance in various graph learning tasks. However, large scale graphs such as social network graphs and web graphs contain billions of nodes, and even a linear computation cost per iteration is prohibitive. Therefore, applying GNNs to huge graphs is challenging. Although Ying et al. (2018) succeeded in applying GNNs to a web-scale network using MapReduce, it still requires massive computational resources. There are several node sampling techniques to reduce GNN computation. For example, an empirical neighbor sampling scheme is used to speed up GraphSAGE (Hamilton et al., 2017). FastGCN employs a random layer-wise node sampling (Chen et al., 2018b). Huang et al. (2018) further improved FastGCN by using an adaptive sampling technique to reduce the variance of estimators. Chen et al. (2018a) proposed a variant of neighbor sampling, which uses historical activations to reduce the estimator variance. Overall, the existing sampling techniques for GNNs work well in practice. However, these techniques are either not theoretically guaranteed in terms of approximation error, or they require at least a linear time computation cost. In this study, we consider the problem of approximating the embedding of one node using GNNs in constant time with maximum precision. This setting matters, for example, when predicting in real time whether a user of an SNS clicks an advertisement (i.e., when the user accesses the page): a user may have many neighbors, but the GNN must respond within a limited time, which prohibits exact computation and motivates approximating it with a theoretical guarantee. We analyze the neighbor sampling technique (Hamilton et al., 2017) to show that a constant number of samples suffices to guarantee the approximation error. It should be noted that neighbor sampling was originally introduced as a heuristic, without any theoretical guarantees. Specifically, given an error tolerance $\varepsilon$ and confidence probability $1 - \delta$, our analysis shows that an estimate $\hat{z}_v$ of the exact embedding $z_v$ of a node $v$ such that $\Pr[\|\hat{z}_v - z_v\|_2 \ge \varepsilon] \le \delta$, and an estimate $\widehat{\partial z_v / \partial \theta}$ of the exact gradient $\partial z_v / \partial \theta$ of the embedding $z_v$ with respect to the network parameters $\theta$ such that $\Pr[\|\widehat{\partial z_v / \partial \theta} - \partial z_v / \partial \theta\|_F \ge \varepsilon] \le \delta$, can be computed in constant time. In particular, uniform node sampling can approximate the exact embedding and its gradients within $O\!\left(\frac{1}{\varepsilon^{2L}}\left(\log \frac{1}{\varepsilon} + \log \frac{1}{\delta}\right)^{L-1} \log \frac{1}{\delta}\right)$ time, where $L$ denotes the number of layers. This complexity is completely independent of the number of nodes, edges, and neighbors of the input, which enables us to deal with graphs irrespective of their size. Moreover, the complexity is polynomial with respect to $\frac{1}{\varepsilon}$ and $\log \frac{1}{\delta}$. We demonstrate that the time complexity is optimal when $L = 1$ with respect to the error tolerance $\varepsilon$. Through experiments, we show that the approximation error between the exact computation and its approximation rapidly converges to zero.
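To give a feel for how such a sample budget depends only on $(\varepsilon, \delta, L)$ and not on the graph, the following sketch computes a Hoeffding-style per-layer sample size. The even split of $\varepsilon$ and $\delta$ across layers and the boundedness assumption are our illustrative simplifications, not the constants of the actual analysis:

```python
import math

def samples_per_layer(eps: float, delta: float, L: int, bound: float = 1.0) -> int:
    """Illustrative Hoeffding-style sample size for one layer of neighbor sampling.

    Assumes each sampled message is bounded by `bound` and that the error
    tolerance eps and failure probability delta are split evenly across the
    L layers. The constants are placeholders, not the paper's exact ones.
    """
    eps_layer = eps / L
    delta_layer = delta / L
    return math.ceil((2 * bound ** 2 / eps_layer ** 2) * math.log(2.0 / delta_layer))

# Worst-case total work is roughly (samples per layer) ** L, which depends
# only on eps, delta and L -- never on the number of nodes or edges.
print(samples_per_layer(eps=0.1, delta=0.01, L=2))
```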
To the best of our knowledge , this is the first constant time approximation algorithm for GNNs with a theoretical guarantee in terms of approximation error . Contributions : The contributions of this paper are summarized as follows : • We analyze the neighbor sampling technique for GraphSAGE , GAT , and GCN to provide theoretical justification . Especially , our analysis shows that the complexity is completely independent of the number of nodes , edges , and neighbors of the input . • We show that some existing GNNs , including the original GraphSAGE ( Hamilton et al. , 2017 ) , can not be approximated in constant time by any algorithm ( see Table 1 for details ) . • We empirically validate our theorems using synthetic and real-world datasets . 2 RELATED WORK . 2.1 GRAPH NEURAL NETWORKS . Graph neural networks ( GNNs ) were first introduced by Gori et al . ( 2005 ) and Scarselli et al . ( 2009 ) . They obtained node embedding by recursively applying the propagation function until convergence . Kipf & Welling ( 2017 ) proposed graph convolutional networks ( GCN ) , which significantly outperformed the existing methods , including non-neural network based approaches . Gilmer et al . ( 2017 ) proposed the message passing neural networks ( MPNNs ) , a general framework of GNNs using the message passing mechanism . Veličković et al . ( 2018 ) proposed the graph attention networks ( GAT ) , which incorporate the attention mechanism into GNNs . With the advent of GAT , various GNN models with the attention mechanism have been recently proposed ( Wang et al. , 2019 ; Park et al. , 2019 ) . GraphSAGE ( Hamilton et al. , 2017 ) is another GNN model , which employs neighbor sampling to reduce the computational costs of training and inference . Owing to neighbor sampling , GraphSAGE can deal with large graphs . However , neighbor sampling was introduced without any theoretical guarantee , and the number of samples is chosen empirically . An alternative computationally efficient GNN would be FastGCN ( Chen et al. , 2018b ) , which employs layer-wise random node sampling to speed up training and inference . Huang et al . ( 2018 ) further improved FastGCN by using an adaptive node sampling technique to reduce the variance of estimators . Thanks to the adaptive sampling technique , it reduces the computational costs and outperforms neighbor sampling in terms of classification accuracy and convergence speed . Chen et al . ( 2018a ) proposed an alternative neighbor sampling technique , which uses historical activations to reduce the estimator variance . Additionally , it could achieve zero variance after a certain number of iterations . However , because it used the same sampling technique of GraphSAGE to obtain the initial solution , the approximation error was not theoretically bounded until the Ω ( n ) -th iteration . Overall , the existing sampling techniques work well in practice . However , these techniques are either not theoretically guaranteed in terms of approximation error , or they require at least a linear time computation cost to calculate the embedding of a node and its gradient of GNN models . Moreover , it is not clear whether we can apply the sampling techniques to state-of-the-art GNN models such as GAT . 2.2 SUBLINEAR TIME ALGORITHMS . The sublinear time algorithms were originally proposed for property testing ( Rubinfeld & Sudan , 1996 ) . 
Sublinear property testing algorithms check, with high probability and in time sublinear in the input size, whether the input has some property $\pi$ or is sufficiently far from $\pi$. Sublinear time approximation algorithms are another type of sublinear time algorithm: they calculate a value sufficiently close to the exact value with high probability in sublinear time. Constant time algorithms are a subclass of sublinear time algorithms; they run not merely in sublinear time with respect to the input size but in constant time. The proposed algorithm is classified as a constant time approximation algorithm. Examples of sublinear time approximation algorithms include the minimum spanning tree in metric spaces (Czumaj & Sohler, 2004) and the minimum spanning tree with integer weights (Chazelle et al., 2005). Parnas & Ron (2007) proposed a method to convert distributed local algorithms into constant time approximation algorithms; in their study, they constructed constant time algorithms for the minimum vertex cover problem and the dominating set problem. A classic example of sublinear time algorithms related to machine learning is clustering (Indyk, 1999; Mishra et al., 2001). Recent work in this stream includes constant time approximation of the minimum value of quadratic functions (Hayashi & Yoshida, 2016) and constant time approximation of the residual error of the Tucker decomposition (Hayashi & Yoshida, 2017). These works adopted simple sampling strategies to obtain theoretical guarantees similar to ours. In this paper, we provide a theoretical guarantee for the constant-time approximation of GNNs for the first time.

3 BACKGROUND . 3.1 NOTATIONS . Let $G$ be the input graph, $V = \{1, 2, \ldots, n\}$ the set of nodes, $n = |V|$ the number of nodes, $E$ the set of edges, $m = |E|$ the number of edges, $\deg(v)$ the degree of a node $v$, $N(v)$ the set of neighbors of a node $v$, $x_v \in \mathbb{R}^{d_0}$ the feature vector associated with a node $v \in V$, and $X = (x_1, x_2, \ldots, x_n)^\top \in \mathbb{R}^{n \times d_0}$ the stacked feature vectors, where $^\top$ denotes the matrix transpose.

3.2 NODE EMBEDDING MODEL

Algorithm 1 $O_z$: Exact embedding
Require: Graph $G = (V, E)$; features $X \in \mathbb{R}^{n \times d_0}$; node index $v \in V$; model parameters $\theta$.
Ensure: Exact embedding $z_v$
1: $z_i^{(0)} \leftarrow x_i$ $(\forall i \in V)$
2: for $l \in \{1, \ldots, L\}$ do
3:   for $i \in V$ do
4:     $h_i^{(l)} \leftarrow \sum_{u \in N(i)} M_l(z_i^{(l-1)}, z_u^{(l-1)}, e_{iu}, \theta)$
5:     $z_i^{(l)} \leftarrow U_l(z_i^{(l-1)}, h_i^{(l)}, \theta)$
6:   end for
7: end for
8: return $z_v^{(L)}$

We consider the node embedding problem using GNNs. In particular, we employ the message passing neural networks (MPNNs) framework (Gilmer et al., 2017). This framework includes many GNN models, such as GraphSAGE and GCN. Algorithm 1 shows the algorithm of MPNNs. We simply write $z_v$ for $z_v^{(L)}$. The aim of this study is to develop a constant time approximation algorithm for calculating the embedding vector $z_v$ and the gradient $\partial z_v / \partial \theta$ for given model parameters $\theta$ and node $v$.

3.3 COMPUTATIONAL MODEL ASSUMPTIONS . Because constant time algorithms cannot read the entire input, we have to specify how the input is accessed. We follow the standard convention of sublinear time algorithms (Parnas & Ron, 2007; Nguyen & Onak, 2008).
We model our algorithm as an oracle machine that can issue queries about the input, and we measure complexity by the query complexity. Algorithms can access the input only by querying the following oracles: (1) $O_{\deg}(v)$: the degree of node $v$; (2) $O_G(v, i)$: the $i$-th neighbor of node $v$; and (3) $O_{\mathrm{feature}}(v)$: the feature vector of node $v$. We assume that our algorithm can query the oracles in constant time per query.
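Putting Algorithm 1 and the oracle model together, the sketch below first transcribes the exact computation and then the sampled, oracle-only estimator. The function names, the folding of the edge features $e_{iu}$ and parameters $\theta$ into the callables $M_l$ and $U_l$, and the free sample size $s$ are our illustrative choices rather than the paper's exact construction:

```python
import random
import numpy as np

def exact_embedding(neighbors, X, v, M, U, L):
    """Exact MPNN embedding z_v (Algorithm 1); cost O(L * m), linear in the graph."""
    z = {i: X[i] for i in neighbors}                        # z_i^{(0)} <- x_i
    for l in range(L):
        h = {i: sum(M[l](z[i], z[u]) for u in neighbors[i])
             for i in neighbors}                            # h_i^{(l)}
        z = {i: U[l](z[i], h[i]) for i in neighbors}        # z_i^{(l)}
    return z[v]

def approx_embedding(O_deg, O_G, O_feature, v, M, U, L, s):
    """Estimate z_v with oracle access only (assumes deg(i) >= 1 for visited i).

    Each layer samples s neighbors uniformly with replacement and estimates
    the sum over N(i) by deg(i) times the sample mean, so the total number
    of oracle queries is O((s + 1)^L): constant in the graph size. The
    analysis would set s from (eps, delta); here it is a free parameter.
    """
    def z_hat(i, l):
        if l == 0:
            return O_feature(i)
        z_i = z_hat(i, l - 1)
        deg = O_deg(i)
        picks = [O_G(i, random.randrange(deg)) for _ in range(s)]
        msgs = [M[l - 1](z_i, z_hat(u, l - 1)) for u in picks]
        h = deg * np.mean(msgs, axis=0)    # unbiased estimate of the neighbor sum
        return U[l - 1](z_i, h)
    return z_hat(v, L)
```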
In this paper, the authors provide a theoretical framework for characterizing the approximation guarantees provided by node sampling to estimate embeddings in various GNN architectures. In particular, they prove several PAC learning-style bounds on the embedding and gradient estimation when using node sampling approaches. They also observe that since the number of nodes selected for sampling is not dependent on the size of the graph, this amounts to a constant time operation for determining embedding and gradient estimates.
Identity Crisis: Memorization and Generalization Under Extreme Overparameterization
1 INTRODUCTION . The remarkable empirical success of deep neural networks is often attributed to the availability of large data sets for training. However, sample size does not provide a comprehensive rationale, since complex models often outperform simple ones on a given data set, even when the model size exceeds the number of training examples. What form of inductive bias leads to better generalization performance from highly overparameterized models? Numerous theoretical and empirical studies of inductive bias in deep learning have been conducted in recent years (Dziugaite & Roy, 2016; Kawaguchi et al., 2017; Bartlett et al., 2017; Neyshabur et al., 2017; Liang et al., 2017; Neyshabur et al., 2018; Arora et al., 2018; Zhou et al., 2019), but these postmortem analyses do not identify the root source of the bias. One popular belief among researchers is that gradient-based optimization methods provide an implicit bias toward simple solutions (Neyshabur et al., 2014; Soudry et al., 2018; Shah et al., 2018; Arora et al., 2019). However, when a network is sufficiently large (e.g., the number of hidden units in each layer is polynomial in the input dimension and the number of training examples), then under some mild assumptions, gradient methods are guaranteed to fit the training set perfectly (Allen-Zhu et al., 2018b; Du et al., 2018a;b; Zou et al., 2018). These results do not distinguish a model trained on a data distribution with strong statistical regularities from one trained on the same inputs but with randomly shuffled labels. Although the former model might achieve good generalization, the latter can only memorize the training labels. Consequently, these analyses do not tell the whole story on the question of inductive bias. Another line of research characterizes sufficient conditions on the input and label distribution that guarantee generalization from a trained network. These conditions range from linear separability (Brutzkus et al., 2018) to compact structures (Li & Liang, 2018). While very promising, this direction has thus far identified only structures that can be solved by linear or nearest neighbor classifiers over the original input space. The fact that in many applications deep neural networks significantly outperform these simpler models reveals a gap in our understanding of deep neural networks. As a formal understanding of inductive bias in deep networks has been elusive, we conduct a novel exploration in a highly restrictive setting that admits visualization and quantification of inductive bias, allowing us to compare variations in architecture, optimization procedure, initialization scheme, and hyperparameters. The particular task we investigate is learning an identity mapping in a regression setting. The identity mapping is interesting for four reasons. First, it imposes a structural regularity between the input and output, the type of regularity that could in principle lead to systematic generalization (He et al., 2016; Hardt & Ma, 2017). Second, it requires that every input feature be transmitted to the output and thus provides a sensitive indicator of whether a model succeeds in passing activations (and gradients) between inputs and outputs. Third, conditional image generation is a popular task in the literature (e.g., Mirza & Osindero, 2014; Ledig et al., 2017); an identity mapping is the simplest form of such a generative process.
Fourth, and perhaps most importantly, it admits detailed analysis and visualization of model behaviors and hidden representations. Consider networks trained on the identity task with 60k MNIST digits. Although only digit images are presented during training, one might expect the strong regularity of the task to lead to good generalization to images other than digits. Figure 1 compares three different architectures. The top row shows various input patterns, and the next three rows are outputs from a 20-layer convolutional net (CNN), a 10-layer fully connected net (FCN) with rectified-linear unit (ReLU) activation functions, and a 1-layer FCN. The 1-layer FCN amounts to a convex optimization problem with infinitely many solutions; however, gradient descent converges to a unique closed-form solution. All nets perform well on the training set (first three columns) and transfer well to novel digits and digit blends (columns 4-6). Yet, outside of the hull of hand-printed digits, only the CNN discovers a reasonably good approximation to the identity function. Figure 1 reflects architecture-specific inductive bias that persists even with 60k training examples. Despite this persistence, a model's intrinsic bias is more likely to be revealed with a smaller training set. In this paper, we push this argument to the limit by studying learning with a single training example. Although models are free to reveal their natural proclivities in this maximally overparameterized regime, our initial intuition was that a single example would be uninteresting, as models would be algebraically equivalent to the constant function (e.g., via biases on output units). Further, it seemed inconceivable that inductive biases would be sufficiently strong to learn a mapping close to the identity. Unexpectedly, our experiments show that model behavior is subtle and architecture dependent. In a broad set of experiments, we highlight model characteristics (including depth, initialization, and hyperparameters) that determine where a model lands on the continuum between memorization (learning a constant function) and generalization (learning the identity function). The simplicity of the training scenario permits rich characterization of inductive biases.

2 RELATED WORK . The consequences of overparameterized models in deep learning have been extensively studied in recent years, both on the optimization landscape and convergence of SGD (Allen-Zhu et al., 2018b; Du et al., 2018a;b; Bassily et al., 2018; Zou et al., 2018; Oymak & Soltanolkotabi, 2018), and on generalization guarantees under stronger structural assumptions on the data (Li & Liang, 2018; Brutzkus et al., 2018; Allen-Zhu et al., 2018a). Another line of related work is the study of the implicit regularization effects of SGD on training overparameterized models (Neyshabur et al., 2014; Zhang et al., 2017; Soudry et al., 2018; Shah et al., 2018; Arora et al., 2019). The traits of memorization in learning have also been explicitly studied from various perspectives, such as prioritizing the learning of simple patterns (Arpit et al., 2017) or perfect interpolation of the training set (Belkin et al., 2018; Feldman, 2019). More recently, coincident with the writing of this paper, Radhakrishnan et al. (2018) reported on the effects of the downsampling operator in convolutional auto-encoders on image memorization.
Their empirical framework is similar to ours, fitting CNNs to the auto-encoding problem with few training examples. We focus on investigating the general inductive bias in the extreme overparameterization case, and study a broader range of network types without enforcing a bottleneck in the architectures.

3 EXPERIMENTS . We explore a progression of models: linear convex models, linear non-convex models (with multiple linear layers), fully-connected multilayered architectures with nonlinearities, and finally the case of greatest practical importance, fully convolutional networks. In all architectures we study, we ensure that there is a simple realization of the identity function (see Appendix B). We train networks by minimizing the mean squared error using standard gradient descent.

3.1 FULLY CONNECTED NETWORKS . Figure 2 shows examples of predictions from multi-layer fully connected networks. Infinitely many solutions exist for all models under this extreme over-parameterization, and the figure shows that all the models fit the training example perfectly. However, on new test examples, contrasting behaviors are observed between shallow and deep networks. In particular, deeper models are biased toward predicting a constant output, whereas shallower networks tend to predict random white noise on unseen inputs. The random predictions can be characterized as follows (proof in Appendix C).

Theorem 1. A one-layer fully connected network, when trained with gradient descent on a single training example $\hat{x}$, converges to a solution that makes the following prediction on a test example $x$:
$$f(x) = \Pi_\parallel(x) + R\,\Pi_\perp(x), \quad (1)$$
where $x = \Pi_\parallel(x) + \Pi_\perp(x)$ decomposes $x$ into orthogonal components that are parallel and perpendicular to the training example $\hat{x}$, respectively, and $R$ is a random matrix from the network initialization, independent of the training data.

For test examples similar to the training example, i.e., where $\Pi_\parallel(x)$ dominates $\Pi_\perp(x)$, the outputs resemble the training output; on the other hand, for test examples that are not highly correlated with the training example, $\Pi_\perp(x)$ dominates and the outputs look like white noise due to the random projection by $R$. The behavior can be empirically verified from the second row in Figure 2. Specifically, the first test example is a mixture of the training example and an unseen test example, and the corresponding output is a mixture of white noise and the training output. For the remaining test examples, the outputs appear random. Although Theorem 1 characterizes only the 1-layer linear case, the empirical results in Figure 2 suggest that shallow (2-layer) networks tend to have this inductive bias. However, this inductive bias does not miraculously yield good generalization: the trained model fails to learn either the identity or the constant function. Specifically, it predicts well in the vicinity (measured by correlation) of the training example $\hat{x}$, but further away its predictions are random. In particular, when the test example $x$ is orthogonal to $\hat{x}$, the prediction is completely random. In deeper (6-layer) nets, the deviations from the identity function take on a quite different characteristic form. Interestingly, deeper linear networks behave more like deeper ReLU networks, with a strong bias towards a constant function that maps any input to the single training output.
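Theorem 1 can be checked numerically in a few lines for the linear, bias-free one-layer case. In the sketch below (dimensions, learning rate, and step count are arbitrary choices of ours), a matrix initialized at $R$ is trained by gradient descent on the single example $\hat{x}$, and its prediction on a fresh input is compared against the closed form $\Pi_\parallel(x) + R\,\Pi_\perp(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
x_hat = rng.normal(size=d)                 # single training example (target = x_hat)
R = 0.01 * rng.normal(size=(d, d))         # random initialization
W = R.copy()
for _ in range(5000):                      # gradient descent on ||W x_hat - x_hat||^2
    W -= 0.01 * np.outer(W @ x_hat - x_hat, x_hat) / (x_hat @ x_hat)

x = rng.normal(size=d)                     # a test point
u = x_hat / np.linalg.norm(x_hat)
x_par = (x @ u) * u                        # parallel component  Pi_par(x)
x_perp = x - x_par                         # perpendicular component Pi_perp(x)
closed_form = x_par + R @ x_perp           # Theorem 1's prediction
print(np.allclose(W @ x, closed_form, atol=1e-4))   # True
```

The check works because each gradient step is a rank-one update along $\hat{x}^\top$, so the perpendicular component of any test input still passes through the untouched random initialization $R$.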
A multilayer linear network with no hidden-layer bottleneck has essentially the same representational power as a 1-layer linear network, but gradient descent produces different learning dynamics that alter the inductive biases. See Appendix E for more results and analysis on FCNs.

3.2 CONVOLUTIONAL NETWORKS . We next study the inductive bias of convolutional neural networks with ReLU activation functions. Figure 3 shows predictions on various test patterns obtained by training CNNs of varying depths. Compared to FCNs, CNNs have strong structural constraints that limit the receptive field of each neuron to a spatially local neighborhood, and the weights are tied and reused across the spatial array. These two constraints match the structure of the identity target function. (See Appendix B.3 for an example of constructing the identity function with CNNs.) Similar to the fully connected case, for a one-layer CNN we can bound the error as follows (proof in Appendix D).

Theorem 2. A one-layer convolutional neural network can learn the identity map from a single training example with the mean squared error over all output pixels bounded as
$$\mathrm{MSE} \le \tilde{O}\!\left(\frac{m\,(m/C - r)}{C}\right), \quad (2)$$
where $m$ is the number of network parameters, $C$ is the number of channels in the image, and $r \le m/C$ is the rank of the subspace formed by the span of the local input patches.

The error grows with $m$, the number of parameters in the network. For example, learning CNNs with larger receptive field sizes will be harder. Even though the bound seems to decrease with more (input and output) channels in the image, note that the number of channels $C$ also contributes to the number of parameters ($m = K_H K_W C^2$), so there is a trade-off. Unlike typical generalization bounds that decay with the number of i.i.d. training examples, we have only one training example here, and the key quantity that reduces the bound is the rank $r$ of the subspace formed by the local image patches. The size of the training image implicitly affects the bound, as a larger image generates more image patches. Note that the rank $r$ also depends heavily on the contents of the training image. For example, simply padding the image with zeros on all boundaries will not reduce the error bound. With enough linearly independent image patches, the subspace becomes full rank ($r = m/C$), and learning of the global identity map is guaranteed. The theorem guarantees only the one-layer case. Empirically, as shown in Figure 3, CNNs with depths of up to 5 layers learn a fairly accurate approximation to the identity function, with the exception of a few artifacts at the boundaries. For a quantitative evaluation, we measure the performance by calculating the correlation (see Appendix J for the results in MSE) to two reference functions: the identity function and the constant function that maps every input to the training point $\hat{x}$. To examine how a model's response varies with similarity to the training image, we generate test images having correlation $\rho \in [0, 1]$ to the training image by: (1) sampling an image with random pixels; (2) adding $\alpha\hat{x}$ to the image, picking $\alpha$ such that the correlation with $\hat{x}$ is $\rho$; (3) renormalizing the image to have the same norm as $\hat{x}$. For $\rho = 0$, the test images are orthogonal to $\hat{x}$, whereas for $\rho = 1$, the test images equal $\hat{x}$. The results for CNNs of different depths are shown in Figure 4.
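The test-image construction can be written compactly. The sketch below is an equivalent form of the three-step recipe above: we combine the unit-normalized training image with a random direction orthogonal to it and rescale, treating "correlation" as the uncentered cosine (an assumption of ours):

```python
import numpy as np

def test_image(x_hat: np.ndarray, rho: float, rng) -> np.ndarray:
    """Generate a test image with (uncentered) correlation rho to x_hat."""
    u = x_hat / np.linalg.norm(x_hat)
    n = rng.normal(size=x_hat.shape)
    n -= (n @ u) * u                      # remove the component along x_hat
    n /= np.linalg.norm(n)
    x = rho * u + np.sqrt(1.0 - rho ** 2) * n
    return x * np.linalg.norm(x_hat)      # renormalize to ||x_hat||

rng = np.random.default_rng(0)
x_hat = rng.normal(size=784)
x = test_image(x_hat, rho=0.5, rng=rng)
print((x @ x_hat) / (np.linalg.norm(x) * np.linalg.norm(x_hat)))  # ~0.5
```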
The quantitative findings are consistent with the visualizations: shallow CNNs are able to learn the identity function from only one training example; very deep CNNs are biased towards the constant function; and CNNs of intermediate depth correlate well with neither the identity nor the constant function. However, unlike FCNs, which produce white-noise-like predictions, Figure 3 shows that CNNs of intermediate depth behave like edge detectors.

[Figure 5: Illustration of the collapse of predictive power as a function of layer depth. Error rate is measured by feeding the representations computed at each layer to a simple averaging-based classifier on the MNIST test set. The error rate at each layer is plotted for a number of trained CNNs of different depths. The thick red line shows the curve of an untrained 20-layer CNN for reference.]

To evaluate how much information is lost in the intermediate layers, we use the following simple criterion to assess the representation in each layer. We feed each image in the MNIST dataset through a network trained on our single example. We collect the representations at a given layer and perform a simple similarity-weighted classification. For each example $x_i$ from the MNIST test set, we predict its class as a weighted average of the (one-hot) label vectors of the examples $x_j$ from the MNIST training set, where the weight is the inner product of $x_i$ and $x_j$. This metric does not quantify how much information is preserved as the image representation propagates through the layers, because the representation could still be maintained yet not captured by the simple correlation-weighted classifier. It nonetheless provides a simple metric for exploring the (identity) mapping: using the input image as the baseline, if a layer represents the identity function, then the representation at that layer would obtain an error rate similar to that of the input representation; on the other hand, if a layer degenerates into the constant function, then the corresponding representation would have an error rate close to random guessing of the label. The results are plotted in Figure 5. The error curve for a randomly initialized 20-layer CNN is shown as reference: at random initialization, the smoothing effect renders the representations beyond the sixth layer useless for the averaging-based classifier. After training, the concave nonmonotonicity in the curves indicates loss and then recovery of the information present in the input. Trained networks try to recover the washed-out intermediate layer representations as a means to link the input and the output layer. However, if the depth is too large, the network tries to infer input-output relations using partial information, resulting in models that behave like edge detectors. Finally, for the case of 20 layers, the curve shows that the bottom few layers do obtain small improvements in error rate compared to random initialization, but the big gap between the input and output layer drives the network to learn the constant function instead. At first sight, this seems to underscore a vanishing gradient problem, but Figure 1 reveals that given a sufficient number of training examples, a 20-layer CNN can still learn the identity map. See also Appendix G for further discussions on vanishing gradients.
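The similarity-weighted classifier amounts to two matrix products on flattened activations. The sketch below is our rendering of the metric as described, with no normalization beyond the raw inner products (an assumption of ours):

```python
import numpy as np

def similarity_weighted_predict(feats_test, feats_train, labels_train):
    """Correlation-weighted classifier used to probe layer representations.

    feats_test: (n_test, d) and feats_train: (n_train, d) layer activations
    flattened to vectors; labels_train: (n_train, k) one-hot labels.
    Each test example's class scores are the inner-product-weighted average
    of the training labels; the predicted class is the argmax.
    """
    weights = feats_test @ feats_train.T          # inner products <x_i, x_j>
    scores = weights @ labels_train               # weighted sum of one-hot labels
    return scores.argmax(axis=1)
```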
Since CNNs preserve spatial structure , we can also visualize information loss in the intermediate layers . The visualization results , described in Appendix F , are consistent with the aforementioned observations .
This paper studies the inductive bias in deep neural networks. The authors train several feedforward image recognition networks and many CNN variants on a single image, and observe that most networks fall into one of two categories: either memorizing the output of the single training sample, or learning the identity function and generalizing to new, unseen images. The paper is clearly written, presents a broad set of experiments, and provides interesting insights that are somewhat surprising.
This paper studies the inductive bias of neural nets by considering the toy example of learning an identity map through a single data point (hence the networks are always overparameterized). The authors compare CNNs with FCNs, and find that CNNs tend to "generalize" in the sense of actually learning the concept of an identity, whereas FCNs are prone to memorization. The authors also present results under various settings, such as changing the filter size or the number of hidden channels of CNNs. The conclusion is that the simpler the network architecture, the better it generalizes. Another observation is that deep CNNs exhibit extreme memorization.
PCMC-Net: Feature-based Pairwise Choice Markov Chains
1 INTRODUCTION . Choice modeling aims at finding statistical models capturing human behavior when faced with a set of alternatives. Classical examples include consumer purchasing decisions, choices of schooling or employment, and commuter choices for modes of transportation among available options. Traditional models are based on different assumptions about human decision making, e.g., Thurstone's Case V model (Thurstone, 1927) or the Bradley-Terry-Luce (BTL) model (Bradley & Terry, 1952). Nevertheless, in complex scenarios, like online shopping sessions presenting numerous alternatives to user-specific queries, these assumptions are often too restrictive to provide accurate predictions. Formally, there is a universe of alternatives $U$, possibly infinite. In each choice situation, some finite choice set $S \subseteq U$ is considered. A choice model is a distribution over the alternatives of a given choice set $S$, where the probability of choosing the item $i$ among $S$ is denoted $P_S(i)$. These models can be further parameterized by the alternatives' features and by those of the individual making the choice. An important class of choice models is the Multinomial Logit (MNL), a generalization of the BTL model (defined for pairwise choices only) to larger sets. MNL models satisfy Luce's axiom, also known as independence of irrelevant alternatives (Luce, 1959), which states that the probability of selecting one alternative over another from a set of many alternatives is not affected by the presence or absence of other alternatives in the set. Moreover, any model satisfying Luce's axiom is equivalent to some MNL model (Luce, 1977). Equivalently, the probability of choosing some item $i$ from a given set $S$ can be expressed as $P_S(i) = w_i / \sum_{j \in S} w_j$, where $w_i$ is the latent value of item $i$. Luce's axiom implies stochastic transitivity, i.e., if $P(a \rhd b) \ge 1/2$ and $P(b \rhd c) \ge 1/2$, then $P(a \rhd c) \ge \max(P(a \rhd b), P(b \rhd c))$, where $P(i \rhd j) \equiv P_{\{i,j\}}(i)$ (Luce, 1977). Stochastic transitivity implies the necessity of a total order across all elements and also prevents expressing cyclic preference situations like the stochastic rock-paper-scissors game described in Section 3.2. Thurstone's Case V model exhibits strict stochastic transitivity but does not satisfy Luce's axiom (Adams & Messick, 1958). Luce's axiom and stochastic transitivity are strong assumptions that often do not hold for empirical choice data (see Ragain & Ugander (2016) and references therein). For example, Luce's axiom prevents models from expressing context effects like the attraction effect (also known as asymmetric dominance or the decoy effect), the similarity effect, and the compromise effect. The attraction effect occurs when two alternatives are augmented with an asymmetrically dominated one (i.e., a new option that is inferior in all aspects with respect to one option, but inferior in only some aspects and superior in others with respect to the other option) and the probability of selecting the better, dominant alternative increases (Huber et al., 1982). The similarity effect arises from the introduction of an alternative that is similar to, and competitive with, one of the original alternatives, and causes a decrease in the probability of choosing the similar alternative (Tversky, 1972).
The compromise effect occurs when there is an increase in the probability of choosing an alternative that becomes the intermediate option when a third, extreme option is introduced (Simonson, 1989). Examples of these effects are visualized in Section 4. A larger class of models is that of Random Utility Models (RUM) (Block & Marschak, 1960; Manski, 1977), which includes MNL but also other models satisfying neither Luce's axiom nor stochastic transitivity. This class associates with each $i \in U$ a random variable $X_i$ and defines for each subset $S \subseteq U$ the probability $P_S(i) = P(X_i \ge X_j,\ \forall j \in S)$. RUM exhibits regularity, i.e., if $A \subseteq B$ then $P_A(x) \ge P_B(x)$. Regularity also prevents models from expressing context effects (Huber et al., 1982). The class of Nested MNL (McFadden, 1980) can express RUM models but also others that do not obey regularity. Nevertheless, inference is practically difficult for Nested MNL models. Recently, a more flexible class of models called Pairwise Choice Markov Chains has been introduced in Ragain & Ugander (2016). This class includes MNL but also other models that satisfy neither Luce's axiom, nor stochastic transitivity, nor regularity. This class defines the choice distribution as the stationary distribution of a continuous time Markov chain defined by some transition rate matrix. Still, it satisfies a weakened version of Luce's axiom called uniform expansion, stating that if we add "copies" (with no preference between them), the probability of choosing one element of the copies is invariant to the number of copies. Although the flexibility of this class is appealing, the proposed inference is based on maximizing the likelihood of the rate matrix for the observed choices, which is prone to overfitting when the number of observations for each possible alternative is small, and is inappropriate when new alternatives can be seen at test time. Alternatives and individuals making choices can be described by a set of features that can then be used to understand their impact on the choice probability. A linear-in-features MNL assumes that the latent value is given by a linear combination of the parameters of the alternatives and the individual. Features of the individual can be taken into account by these models, but inference suffers from data scarcity and is inappropriate when new alternatives can be seen at test time. The latent class MNL (LC-MNL) model (Greene & Hensher, 2003) takes into account individual heterogeneity by using a Bayesian mixture over different latent classes (whose number must be specified) in which homogeneity and linearity are assumed. A linear-in-features parameterization of PCMC, for features in $\mathbb{R}^d$, was suggested in the appendix of Ragain & Ugander (2016), but it requires fitting a weight matrix of size $|U| \times d$, which makes it scale poorly and does not allow predicting unseen alternatives. In this work, we propose an amortized inference approach for PCMC, in the sense that the statistical parameters are reused for any pair of alternatives and their number is thus independent of the size of the universe. In addition, we allow non-linear modeling by using a neural network. In complex cases like airline itinerary choice, where the alternatives strongly depend on an individual-specific query and some features, like price, can be dynamic, the previous approaches have limited expressive power or are inappropriate.
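Luce's axiom can be made concrete in a few lines: under MNL, adding a third alternative rescales the probabilities of $a$ and $b$ but leaves their ratio untouched. The sketch below is a minimal illustration with made-up latent values $w$:

```python
def mnl_probs(w, S):
    """MNL choice probabilities P_S(i) = w_i / sum_{j in S} w_j."""
    total = sum(w[j] for j in S)
    return {i: w[i] / total for i in S}

w = {"a": 3.0, "b": 1.0, "c": 2.0}          # illustrative latent values
p_ab = mnl_probs(w, {"a", "b"})
p_abc = mnl_probs(w, {"a", "b", "c"})
# IIA: the ratio P(a)/P(b) is unchanged by adding c to the choice set.
print(p_ab["a"] / p_ab["b"], p_abc["a"] / p_abc["b"])  # both 3.0
```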
Two recently introduced methods allow complex feature handling for alternatives and individuals. Mottini & Acuna-Agost (2017) propose a recurrent neural network method that learns to point, within a sequence of alternatives, to the chosen one. This model is appealing because of its feature learning capability, but neither its choice-theoretic properties nor its dependence on the order of the sequence have been studied. Lhéritier et al. (2019) propose to train a Random Forest classifier to predict whether an alternative is going to be chosen or not, independently of the rest of the alternatives in the choice set. This approach does not take into account the fact that in each choice set exactly one alternative is chosen. For this reason, the probabilities provided by the model are only used as scores to rank the alternatives, which can be interpreted as latent values, making it essentially equivalent to a non-linear MNL. To escape this limitation and make the latent values dependent on the choice set, relative features are added (e.g., the price of the $i$-th alternative, $\mathrm{price}_i$, is converted to $\mathrm{price}_i / \min_{j \in S} \mathrm{price}_j$). The non-parametric nature of this model is appealing, but its choice-theoretic properties have not been studied either. In this work, we propose to endow PCMC with neural-network-based feature handling, thereby enjoying both the good theoretical properties of PCMC and the complex feature handling of the aforementioned neural network based and non-parametric methods. This neural network parameterization of PCMC makes the inference amortized, allowing us to handle universes of large (and even infinite) size, as shown in our experiments on airline itinerary choice modeling in Section 5.

2 BACKGROUND : PAIRWISE CHOICE MARKOV CHAINS . 2.1 DEFINITION . A Pairwise Choice Markov Chain (PCMC) (Ragain & Ugander, 2016) defines the choice probability $P_S(i)$ as the probability mass on alternative $i \in S$ of the stationary distribution of a continuous time Markov chain (CTMC) whose set of states corresponds to $S$. The model's parameters are the off-diagonal entries $q_{ij} \ge 0$ of a rate matrix $Q$ indexed by pairs of elements in $U$. Given a choice set $S$, the choice distribution is the stationary distribution of the CTMC given by the matrix $Q_S$ obtained by restricting the rows and columns of $Q$ to the elements of $S$ and setting $q_{ii} = -\sum_{j \in S \setminus i} q_{ij}$ for each $i \in S$. Therefore, the distribution $P_S$ is parameterized by the $|S|(|S|-1)$ transition rates of $Q_S$. The constraint
$$q_{ij} + q_{ji} > 0 \quad (1)$$
is imposed in order to guarantee that the chain has a single closed communicating class, which implies the existence and uniqueness of the stationary distribution $\pi_S$ (see, e.g., Norris (1997)), obtained by solving
$$\pi_S Q_S = \mathbf{0}, \qquad \pi_S \mathbf{1}^\top = 1, \quad (2)$$
where $\mathbf{0}$ and $\mathbf{1}$ are row vectors of zeros and ones, respectively. Since any column of $Q_S$ is the opposite of the sum of the remaining columns, this is equivalent to solving
$$\pi_S Q'_S = [\mathbf{0} \mid 1], \quad (3)$$
where $Q'_S \equiv \left[\,((Q_S)_{ij})_{1 \le i \le |S|,\, 1 \le j < |S|} \mid \mathbf{1}^\top\,\right]$.

2.2 PROPERTIES . In Ragain & Ugander (2016), it is shown that PCMC can represent any MNL model, but also models that are non-regular and do not satisfy stochastic transitivity (using the rock-paper-scissors example of Section 3.2). In the classical red bus/blue bus example (see, e.g., Train (2009)), the color of the bus is irrelevant to the preference for the "bus" transportation mode over the "car" mode.
Nevertheless, MNL models reduce the probability of choosing the "car" mode when color variants of buses are added, which does not match empirical behavior. PCMC models can handle this kind of situation thanks to a property termed contractibility, which intuitively means that we can "contract" subsets A_i ⊆ U to a single "type" when the probability of choosing an element of A_i is independent of the pairwise probabilities between elements within the subsets. Formally, a partition of U into non-empty sets A_1, ..., A_k is a contractible partition if q_{a_i a_j} = λ_{ij} for all a_i ∈ A_i, a_j ∈ A_j, for some Λ = {λ_{ij}}, i, j ∈ {1, ..., k}. Then, the following proposition is shown. Proposition 1 (Ragain & Ugander (2016)). For a given Λ, let A_1, ..., A_k be a contractible partition for two PCMC models on U represented by Q, Q′ with stationary distributions π, π′. Then, for any A_i, ∑_{j∈A_i} P_U(j) = ∑_{j∈A_i} P′_U(j). It is then shown that contractibility implies uniform expansion, formally defined as follows. Definition 1 (Uniform Expansion). Consider a choice between n elements in a set S(1) = {i_{1,1}, ..., i_{n,1}}, and another choice from a set S(k) containing k copies of each of the n elements: S(k) = {i_{1,1}, ..., i_{1,k}, i_{2,1}, ..., i_{2,k}, ..., i_{n,1}, ..., i_{n,k}}. The axiom of uniform expansion states that for each m ∈ {1, ..., n} and all k ≥ 1, P_{S(1)}(i_{m,1}) = ∑_{j=1}^{k} P_{S(k)}(i_{m,j}).
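Both the stationary-distribution computation of Section 2.1 (Eqs. (2)–(3)) and uniform expansion can be illustrated with a short numerical sketch. The rate matrix, the number of copies k, and the within-copy rate c below are made up; the contractible partition consists of the k copies of each element:

```python
# PCMC choice probabilities and a numerical check of uniform expansion.
import numpy as np

def stationary(Q):
    """Stationary distribution of the CTMC with off-diagonal rates Q (Eq. (3))."""
    Q = Q.astype(float)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))        # q_ii = -sum_{j != i} q_ij
    n = Q.shape[0]
    A = np.hstack([Q[:, :-1], np.ones((n, 1))])  # replace last column by ones
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A.T, b)               # solves pi @ A = [0 | 1]

def expand(Q, k, c=1.0):
    """k copies of each element: original rates between different elements,
    rate c between copies of the same element (a contractible partition)."""
    n = Q.shape[0]
    Qk = np.kron(Q, np.ones((k, k)))
    for m in range(n):
        block = slice(m * k, (m + 1) * k)
        Qk[block, block] = c
    return Qk

Q = np.array([[0.0, 2.0, 0.5],                 # made-up, non-symmetric rates
              [0.5, 0.0, 1.5],
              [2.0, 1.0, 0.0]])
pi = stationary(Q)                             # P_{S(1)}
pi_k = stationary(expand(Q, k=3))
print(pi)
print(pi_k.reshape(3, 3).sum(axis=1))          # per-element sums match P_{S(1)}
```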
This paper introduces a novel approximate inference method, called PCMC-Net, for models from the family of Pairwise Choice Markov Chains (PCMC). The method relies on training a neural network. Consequently, the authors claim that inference is amortized, but its computational complexity is still quadratic in the number of choice alternatives due to separate processing of all pairs of alternatives. PCMC-Net bakes the definition of PCMC into the neural net structure and therefore satisfies the theoretical properties of contractibility and uniform expansion, which are desired properties of choice models.
SP:e4f04edbe1885c93875d82d78d3e0cf2d0359393
PCMC-Net: Feature-based Pairwise Choice Markov Chains
1 INTRODUCTION. Choice modeling aims at finding statistical models capturing human behavior when faced with a set of alternatives. Classical examples include consumer purchasing decisions, choices of schooling or employment, and commuter choices for modes of transportation among available options. Traditional models are based on different assumptions about human decision making, e.g., Thurstone's Case V model (Thurstone, 1927) or the Bradley-Terry-Luce (BTL) model (Bradley & Terry, 1952). Nevertheless, in complex scenarios, like online shopping sessions presenting numerous alternatives to user-specific queries, these assumptions are often too restrictive to provide accurate predictions. Formally, there is a universe of alternatives U, possibly infinite. In each choice situation, some finite choice set S ⊆ U is considered. A choice model is a distribution over the alternatives of a given choice set S, where the probability of choosing the item i among S is denoted as P_S(i). These models can be further parameterized by the alternatives' features and by those of the individual making the choice. An important class of choice models is the Multinomial Logit (MNL), a generalization of the BTL model—defined for pairwise choices only—to larger sets. MNL models satisfy Luce's axiom, also known as independence of irrelevant alternatives (Luce, 1959), which states that the probability of selecting one alternative over another from a set of many alternatives is not affected by the presence or absence of other alternatives in the set. Moreover, any model satisfying Luce's axiom is equivalent to some MNL model (Luce, 1977). Equivalently, the probability of choosing some item i from a given set S can be expressed as P_S(i) = w_i / ∑_{j∈S} w_j, where w_i is the latent value of the item i. Luce's axiom implies stochastic transitivity, i.e., if P(a ≻ b) ≥ 1/2 and P(b ≻ c) ≥ 1/2, then P(a ≻ c) ≥ max(P(a ≻ b), P(b ≻ c)), where P(i ≻ j) ≡ P_{{i,j}}(i) (Luce, 1977). Stochastic transitivity implies the necessity of a total order across all elements and also prevents models from expressing cyclic preference situations like the stochastic rock-paper-scissors game described in Section 3.2. Thurstone's Case V model exhibits strict stochastic transitivity but does not satisfy Luce's axiom (Adams & Messick, 1958). Luce's axiom and stochastic transitivity are strong assumptions that often do not hold for empirical choice data (see (Ragain & Ugander, 2016) and references therein). For example, Luce's axiom prevents models from expressing context effects like the attraction effect (also known as asymmetric dominance or the decoy effect), the similarity effect and the compromise effect. The attraction effect occurs when two alternatives are augmented with an asymmetrically dominated one (i.e., a new option that is inferior in all aspects with respect to one option, but inferior in only some aspects and superior in other aspects with respect to the other option) and the probability of selecting the better, dominant alternative increases (Huber et al., 1982). The similarity effect arises from the introduction of an alternative that is similar to, and competitive with, one of the original alternatives, and causes a decrease in the probability of choosing the similar alternative (Tversky, 1972).
The compromise effect occurs when there is an increase in the probability of choosing an alternative that becomes the intermediate option when a third extreme option is introduced (Simonson, 1989). Examples of these effects are visualized in Section 4. A larger class of models is that of Random Utility Models (RUM) (Block & Marschak, 1960; Manski, 1977), which includes MNL but also other models satisfying neither Luce's axiom nor stochastic transitivity. This class associates with each i ∈ U a random variable X_i and defines for each subset S ⊆ U the probability P_S(i) = P(X_i ≥ X_j, ∀j ∈ S). RUM exhibits regularity, i.e., if A ⊆ B then P_A(x) ≥ P_B(x). Regularity also prevents models from expressing context effects (Huber et al., 1982). The class of Nested MNL (McFadden, 1980) can express RUM models but also others that do not obey regularity. Nevertheless, inference is practically difficult for Nested MNL models. Recently, a more flexible class of models called Pairwise Choice Markov Chains was introduced in Ragain & Ugander (2016). This class includes MNL but also other models that satisfy neither Luce's axiom, nor stochastic transitivity, nor regularity. It defines the choice distribution as the stationary distribution of a continuous time Markov chain given by some transition rate matrix. Still, it satisfies a weakened version of Luce's axiom called uniform expansion, stating that if we add "copies" (with no preference between them), the probability of choosing one element of the copies is invariant to the number of copies. Although the flexibility of this class is appealing, the proposed inference is based on maximizing the likelihood of the rate matrix for the observed choices, which is prone to overfitting when the number of observations for each possible alternative is small and is inappropriate when new alternatives can be seen at test time. Alternatives and individuals making choices can be described by a set of features that can then be used to understand their impact on the choice probability. A linear-in-features MNL assumes that the latent value is given by a linear combination of the features of the alternatives and the individual. Features of the individual can be taken into account by these models, but inference suffers from data scarcity and is inappropriate when new alternatives can be seen at test time. The latent class MNL (LC-MNL) model (Greene & Hensher, 2003) takes into account individual heterogeneity by using a Bayesian mixture over different latent classes – whose number must be specified – within which homogeneity and linearity are assumed. A linear-in-features parameterization of PCMC, for features in R^d, was suggested in (Ragain & Ugander, 2016, Appendix), but it requires fitting a weight matrix of size |U| × d, which makes it scale poorly and prevents predicting unseen alternatives. In this work, we propose an amortized inference approach for PCMC, in the sense that the statistical parameters are reused for any pair of alternatives and their number is thus independent of the size of the universe. In addition, we allow non-linear modeling by using a neural network. In complex cases like airline itinerary choice, where the alternatives strongly depend on an individual-specific query and some features, like price, can be dynamic, the previous approaches have limited expressive power or are inappropriate.
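To make the MNL choice rule and Luce's axiom concrete, here is a tiny sketch with made-up latent values; it also previews the red bus/blue bus issue discussed in Section 2.2:

```python
# An MNL model: P_S(i) = w_i / sum_{j in S} w_j, and the IIA property.
w = {"car": 4.0, "red_bus": 1.0, "blue_bus": 1.0}  # made-up latent values

def mnl(S):
    z = sum(w[i] for i in S)
    return {i: w[i] / z for i in S}

p2 = mnl(["car", "red_bus"])
p3 = mnl(["car", "red_bus", "blue_bus"])
print(p2["car"] / p2["red_bus"])  # 4.0
print(p3["car"] / p3["red_bus"])  # still 4.0: ratios are set-independent (IIA)
# But the probability of "car" drops from 0.8 to 2/3 when a duplicate blue
# bus is added, even though the bus color should be irrelevant.
print(p2["car"], p3["car"])
```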
Two recently introduced methods allow complex feature handling for alternatives and individuals. Mottini & Acuna-Agost (2017) propose a recurrent neural network method that learns to point, within a sequence of alternatives, to the chosen one. This model is appealing because of its feature learning capability, but neither its choice-theoretic properties nor its dependence on the order of the sequence have been studied. Lhéritier et al. (2019) propose to train a Random Forest classifier to predict whether an alternative is going to be chosen or not, independently of the rest of the alternatives in the choice set. This approach does not take into account the fact that in each choice set exactly one alternative is chosen. For this reason, the probabilities provided by the model are only used as scores to rank the alternatives, which can be interpreted as latent values – making it essentially equivalent to a non-linear MNL. To escape this limitation and make the latent values dependent on the choice set, relative features are added (e.g., the price of the i-th alternative, price_i, is converted to price_i / min_{j∈S} price_j). The non-parametric nature of this model is appealing, but its choice-theoretic properties have not been studied either. In this work, we propose to equip PCMC with neural-network-based feature handling, therefore enjoying both the good theoretical properties of PCMC and the complex feature handling of the previous neural-network-based and non-parametric methods. This neural network parameterization of PCMC makes the inference amortized, allowing it to handle large (and even infinite) universes, as shown in our experiments on airline itinerary choice modeling in Section 5. 2 BACKGROUND: PAIRWISE CHOICE MARKOV CHAINS. 2.1 DEFINITION. A Pairwise Choice Markov Chain (PCMC) (Ragain & Ugander, 2016) defines the choice probability P_S(i) as the probability mass on the alternative i ∈ S of the stationary distribution of a continuous time Markov chain (CTMC) whose set of states corresponds to S. The model's parameters are the off-diagonal entries q_ij ≥ 0 of a rate matrix Q indexed by pairs of elements in U. Given a choice set S, the choice distribution is the stationary distribution of the continuous time Markov chain given by the matrix Q_S obtained by restricting the rows and columns of Q to elements in S and setting q_ii = −∑_{j∈S\i} q_ij for each i ∈ S. Therefore, the distribution P_S is parameterized by the |S|(|S| − 1) transition rates of Q_S. The constraint q_ij + q_ji > 0 (1) is imposed in order to guarantee that the chain has a single closed communicating class, which implies the existence and uniqueness of the stationary distribution π_S (see, e.g., Norris (1997)), obtained by solving π_S Q_S = 0 and π_S 1^T = 1 (2), where 0 and 1 are row vectors of zeros and ones, respectively. Since any column of Q_S is the opposite of the sum of the remaining columns, it is equivalent to solve π_S Q′_S = [0 | 1] (3), where Q′_S ≡ [((Q_S)_ij)_{1≤i≤|S|, 1≤j<|S|} | 1^T]. 2.2 PROPERTIES. In Ragain & Ugander (2016), it is shown that PCMC models can represent any MNL model, but also models that are non-regular and do not satisfy stochastic transitivity (using the rock-paper-scissors example of Section 3.2). In the classical red bus/blue bus example (see, e.g., Train (2009)), the color of the bus is irrelevant to the preference for the transportation mode "bus" with respect to the "car" mode.
Nevertheless, MNL models reduce the probability of choosing the "car" mode when color variants of buses are added, which does not match empirical behavior. PCMC models can handle this kind of situation thanks to a property termed contractibility, which intuitively means that we can "contract" subsets A_i ⊆ U to a single "type" when the probability of choosing an element of A_i is independent of the pairwise probabilities between elements within the subsets. Formally, a partition of U into non-empty sets A_1, ..., A_k is a contractible partition if q_{a_i a_j} = λ_{ij} for all a_i ∈ A_i, a_j ∈ A_j, for some Λ = {λ_{ij}}, i, j ∈ {1, ..., k}. Then, the following proposition is shown. Proposition 1 (Ragain & Ugander (2016)). For a given Λ, let A_1, ..., A_k be a contractible partition for two PCMC models on U represented by Q, Q′ with stationary distributions π, π′. Then, for any A_i, ∑_{j∈A_i} P_U(j) = ∑_{j∈A_i} P′_U(j). It is then shown that contractibility implies uniform expansion, formally defined as follows. Definition 1 (Uniform Expansion). Consider a choice between n elements in a set S(1) = {i_{1,1}, ..., i_{n,1}}, and another choice from a set S(k) containing k copies of each of the n elements: S(k) = {i_{1,1}, ..., i_{1,k}, i_{2,1}, ..., i_{2,k}, ..., i_{n,1}, ..., i_{n,k}}. The axiom of uniform expansion states that for each m ∈ {1, ..., n} and all k ≥ 1, P_{S(1)}(i_{m,1}) = ∑_{j=1}^{k} P_{S(k)}(i_{m,j}).
This paper enables a feature-based parametrization and amortized inference of Pairwise Choice Markov Chains (PCMCs), a model for decisions in the face of a set of alternative choices (e.g. the rock-paper-scissors game). Previous approaches to fitting PCMCs have leveraged sequential least squares programming, making optimization unstable, the model prone to overfitting, and test-time inference difficult. The authors propose parametrizing PCMCs with neural networks to fix these issues. Relying on universal function approximation results, the authors show that their PCMC-Net can represent arbitrary transition matrices. The experiments report results on a dataset of airline booking behavior, comparing PCMC-Net with four other baselines from the literature.
SP:e4f04edbe1885c93875d82d78d3e0cf2d0359393
Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
1 INTRODUCTION. First-order methods such as Stochastic Gradient Descent (SGD) are currently the standard choice for training deep neural networks. The merit of first-order methods is obvious: they only calculate the gradient and therefore are computationally efficient. In addition to better computational efficiency, SGD has even more advantages among the first-order methods. At each iteration, SGD computes the gradient only on a mini-batch instead of all training data. Such randomness introduced by sampling the mini-batch can lead to better generalization (Hardt et al., 2015; Keskar et al., 2016; Masters & Luschi, 2018; Mou et al., 2017; Zhu et al., 2018) and better convergence (Ge et al., 2015; Jin et al., 2017a;b), which is crucial when the function class consists of highly overparameterized deep neural networks. Recently there has been a huge body of work trying to develop more efficient first-order methods beyond SGD (Duchi et al., 2011; Kingma & Ba, 2014; Luo et al., 2019; Liu et al., 2019). Second-order methods, despite their better convergence rate, are rarely used to train deep neural networks. At each iteration, the algorithm has to compute second-order information, for example, the Hessian or its approximation, which is typically an m-by-m matrix where m is the number of parameters of the neural network. Moreover, the algorithm needs to compute the inverse of this matrix. The computational cost is prohibitive and usually it is not even possible to store such a matrix. Recently, there is a series of works (Du et al., 2018b;a; Zou et al., 2018; Allen-Zhu et al., 2018a; Oymak & Soltanolkotabi, 2018; Arora et al., 2019b;a; Cao & Gu, 2019; Zou & Gu, 2019) considering the optimization of neural networks based on the idea of the neural tangent kernel (NTK) (Jacot et al., 2018). Roughly speaking, the idea of NTK is to linearly approximate the output of a network w.r.t. the parameters in a local region, thus resulting in a kernel feature map, which is the gradient of the output w.r.t. the parameters. Jacot et al. (2018) show that when the width of the network tends to infinity, the NTK tends to be unchanged during gradient flow, since the network only needs an o(1) change of the parameters (going to zero as the width tends to infinity) to fit the data, so the change of the kernel is also o(1). For finite-width networks, however, the NTK changes, and previous works (Allen-Zhu et al., 2018b; Arora et al., 2019a) show that for sufficiently wide neural networks the optimization dynamics of GD/SGD are equivalent to those of using GD/SGD to solve an NTK kernel regression problem where the kernel is slowly evolving (see Lemma 1 in Section 2 for a precise description). A natural question then arises: Can we gain acceleration by directly solving the kernel regression w.r.t. the NTK at each step? In this paper, we give a positive answer to this question and reveal the connection between NTK regression and the Gauss-Newton method. We propose a novel optimization method – the Gram-Gauss-Newton (GGN) method. Instead of doing gradient descent, GGN solves the kernel regression w.r.t. the NTK at each step of the optimization. Following this idea, we theoretically prove that for overparameterized networks, GGN enjoys quadratic convergence compared to the linear rate of gradient descent. Besides the theoretically fast convergence rate, GGN is also very efficient in practice.
In fact, GGN is implicitly a reformulation of the Gauss-Newton method (see Section 3.1 for details), which is a classic second-order algorithm often used for solving nonlinear regression problems with square loss. In the Gauss-Newton method, one uses J^T J as an approximation of the Hessian (see Section 2 for a formal description), where J is the Jacobian matrix. However, the original Gauss-Newton method faces challenges when used for training deep neural networks. Most seriously, the size of the approximate Hessian J^T J is m-by-m, where m is the number of parameters of the neural network. Moreover, for overparameterized neural networks, J^T J is not invertible, which may make the algorithm intractable for training commonly-used neural networks. GGN bypasses the difficulty stated above as follows. Instead of using J^T J as the approximate Hessian and applying a Newton-type method, each step of GGN only involves the Gram matrix JJ^T, whose size is n-by-n where n is the number of data points. Furthermore, as already mentioned, to get better generalization performance, it is crucial to use mini-batches to introduce sampling noise when calculating derivatives. Therefore, like SGD, we also use mini-batches in GGN. In this case, the size of the Gram matrix further reduces to b-by-b, where b is the batch size. Though conventional wisdom may suggest that applying a mini-batch scheme to second-order methods will introduce a biased estimation of the accelerated gradient direction, we give the first convergence result for a mini-batch second-order method on overparameterized networks. Regarding computational complexity, we show that at each iteration, the overhead of GGN is small compared to SGD: the extra computation of GGN is mainly the matrix product JJ^T and the inverse of this matrix, whose size is small for a mini-batch. Detailed analyses can be found in Section 3.3. We next conduct experiments on two regression tasks to study the effectiveness of the GGN algorithm. We demonstrate that in these two real applications, using a practical neural network (e.g., ResNet-32) with standard width, our proposed GGN algorithm can converge faster and achieve better performance than several baseline algorithms. 1.1 RELATED WORKS. Despite the prevalence of first-order methods for training deep neural networks, there have been continuing efforts in developing practical second-order methods (Becker et al., 1988; Pascanu & Bengio, 2013). We summarize some of these works below. The main approach of these methods is to develop delicate approximations of the second-order information matrix so that the update direction can be computed as efficiently as possible. For example, Botev et al. (2017) proposed a recursive block-diagonal approximation of the Hessian. The blocks are Kronecker factored and can be efficiently computed and inverted. Grosse and Martens in a series of works developed the K-FAC method (Martens & Grosse, 2015; Grosse & Martens, 2016). The key idea is a Kronecker-factored approximation of the Fisher information matrix, which is used as the second-order matrix in natural gradient methods. These works received considerable attention and have been further improved (Wu et al., 2017; George et al., 2018; Martens et al., 2018). Bernacchia et al. (2018) derived an exact expression of the natural gradient update, but it only works for linear networks.
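The claim that each GGN step only involves the Gram matrix JJ^T can be made precise with a short derivation. In the overparameterized case (m > n, with J_t of full row rank), the linearized problem of Eq. (5) below has many solutions, and the minimum-norm one requires inverting only JJ^T; this is a sketch consistent with the description above, not necessarily the paper's exact Eq. (9):

```latex
% Minimum-norm solution of the linearized least squares step (Eq. (5) below),
% assuming m > n and J_t of full row rank:
\min_{w}\ \|w - w_t\|_2
\quad \text{s.t.}\quad f_t + J_t (w - w_t) = y
\;\Longrightarrow\;
w_{t+1} = w_t + J_t^{\top}\left(J_t J_t^{\top}\right)^{-1}(y - f_t),
% so only the n-by-n (or b-by-b, with mini-batches) Gram matrix is inverted.
```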
Different from all these works, our GGN algorithm does not try to approximate the second-order matrix, whose size is inevitably huge. Instead, we present an easy-to-compute solution for the update direction, reducing the computational cost significantly. One exception is the concurrent work of Ren & Goldfarb (2019), which also aims to use the exact Gauss-Newton update. They focus on reducing the complexity of inverting the approximate Hessian via the Sherman-Morrison-Woodbury formula and require subtle implementation tricks to use backpropagation. In contrast, GGN has a simpler update rule and better guarantees for neural networks. In a concurrent and independent work, Zhang et al. (2019a) showed that the natural gradient method and K-FAC have a linear convergence rate for sufficiently wide networks in the full-batch setting. In contrast, our method enjoys a higher-order (quadratic) convergence rate guarantee for overparameterized networks, and we focus on developing a practical and theoretically sound optimization method. We also reveal the relation between our method and NTK kernel regression, so using results based on NTK (Arora et al., 2019b), one can easily give generalization guarantees for our method. Another independent work (Achiam et al., 2019) proposed a preconditioned Q-learning algorithm which has a similar form of update rule. Unlike the methods considered in Zhang et al. (2019a); Achiam et al. (2019), which contain a learning rate that needs to be tuned, our derivation of GGN does not introduce a learning rate term (or, equivalently, it suggests that the learning rate can be fixed to 1 to get good performance, which is verified in Figure 2(c)). 2 NEURAL TANGENT KERNEL AND THE CLASSIC GAUSS-NEWTON METHOD FOR NONLINEAR LEAST SQUARES REGRESSION. Nonlinear least squares regression is a general machine learning problem. Given data pairs {x_i, y_i}_{i=1}^n and a class of nonlinear functions f, e.g., neural networks, parameterized by w, nonlinear least squares regression aims to solve the optimization problem min_{w∈R^m} L(w) = (1/2) ∑_{i=1}^n (f(w, x_i) − y_i)^2. (1) In the seminal work (Jacot et al., 2018), the authors consider the case when f is a neural network with infinite width. They showed that optimization of this problem using gradient flow involves a special kernel which is called the neural tangent kernel (NTK). Follow-up works further extended the relation between optimization and NTK, which can be summarized in the following lemma: Lemma 1 (Lemma 3.1 in Arora et al. (2019a), see also Dou & Liang (2019); Mei et al. (2019)). Consider optimizing problem (1) by gradient descent with infinitesimally small learning rate: dw_t/dt = −∇L(w_t), where w_t denotes the parameters at time t. Let f_t = (f(w_t, x_i))_{i=1}^n ∈ R^n be the network outputs on all x_i's at time t, and y = (y_i)_{i=1}^n be the desired outputs. Then f_t follows the evolution df_t/dt = −G_t · (f_t − y), (2) where G_t is an n × n positive semidefinite matrix, i.e., the Gram matrix w.r.t. the NTK at time t, whose (i, j)-th entry is 〈∇_w f(w_t, x_i), ∇_w f(w_t, x_j)〉. The key idea of Jacot et al. (2018) and its extensions (Du et al., 2018b;a; Zou et al., 2018; Allen-Zhu et al., 2018a; Oymak & Soltanolkotabi, 2018; Lee et al., 2019; Yang, 2019; Arora et al.
, 2019b;a; Cao & Gu, 2019; Zou & Gu, 2019) is that when the network is sufficiently wide, the Gram matrix at initialization, G_0, is close to a fixed positive definite matrix defined by the infinite-width kernel, and G_t is close to G_0 during training for all t. In this situation, G_t remains invertible, and the above dynamics are then identical to the dynamics of solving kernel regression with gradient flow w.r.t. the current kernel at time t. In fact, Arora et al. (2019a) rigorously prove that a fully-trained, sufficiently wide ReLU neural network is equivalent to the kernel regression predictor. As pointed out in Chizat & Bach (2018), the idea of NTK can be summarized as a linear approximation using a first-order Taylor expansion. We give an example of this idea for the NTK at initialization: f(w, x_i) − f(w_0, x_i) ≈ ∇_w f(w_0, x_i) · (w − w_0), (3) where ∇_w f(w_0, x) can then be viewed as an explicit expression of the feature map at x, w − w_0 is the parameter in the reproducing kernel Hilbert space (RKHS) induced by the NTK, and f(w, x_i) − f(w_0, x_i) is the target value. The idea of linear approximation is also used in the classic Gauss-Newton method (Golub, 1965) to obtain an accelerated algorithm for solving the nonlinear least squares problem (1). Concretely, at iteration t, the Gauss-Newton method takes the following first-order approximation: f(w, x_i) − f(w_t, x_i) ≈ ∇_w f(w_t, x_i) · (w − w_t), (4) where w_t stands for the parameter at iteration t. We note that this is also the linear expansion for deriving the NTK at time t. According to Eq. (1) and (4), to update the parameter, one can instead solve the following problem: w_{t+1} = argmin_w (1/2) ‖f_t + J_t (w − w_t) − y‖_2^2, (5) where f_t, y have the same meaning as in Lemma 1, and J_t = (∇_w f(w_t, x_1), ..., ∇_w f(w_t, x_n))^T ∈ R^{n×m} is the Jacobian matrix. A necessary and sufficient condition for w to be a solution of Eq. (5) is (J_t^T J_t) · (w − w_t) = −J_t^T (f_t − y). (6) Below we will denote H_t := J_t^T J_t ∈ R^{m×m}. For under-parameterized models (i.e., when the number of parameters m is less than the number of data points n), H_t is invertible, and the update rule is w_{t+1} = w_t − H_t^{-1} J_t^T (f_t − y). (7) This can also be viewed as an approximate Newton's method using H_t = J_t^T J_t to approximate the Hessian matrix. In fact, the exact Hessian matrix is ∇_w^2 (1/2) ∑_{i=1}^n (f(w_t, x_i) − y_i)^2 = J_t^T J_t + ∑_{i=1}^n (f(w_t, x_i) − y_i) ∇_w^2 f(w_t, x_i). (8) In the case when f is only mildly nonlinear w.r.t. w at the data points x_i, ∇_w^2 f(w_t, x_i) ≈ 0, and H_t is close to the real Hessian. In this situation, the behavior of the Gauss-Newton method is similar to that of Newton's method, and thus can achieve a superlinear convergence rate (Golub, 1965).
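The classic Gauss-Newton update of Eq. (7) can be illustrated on a small under-parameterized problem. Below is a sketch with made-up data, fitting y ≈ c·exp(−λx) with two parameters; a tiny damping term is added for numerical safety (far from the optimum, Gauss-Newton may need stronger damping):

```python
# Classic Gauss-Newton iteration (Eq. (7)) on an m = 2, n = 50 toy problem.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 50)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.normal(size=x.size)  # made-up data

w = np.array([1.5, 1.0])                          # initial guess for (c, lam)
for t in range(10):
    c, lam = w
    f = c * np.exp(-lam * x)                      # model outputs f(w, x_i)
    J = np.column_stack([np.exp(-lam * x),        # df/dc
                         -c * x * np.exp(-lam * x)])  # df/dlam
    H = J.T @ J + 1e-9 * np.eye(2)                # H_t of Eq. (6), lightly damped
    w = w - np.linalg.solve(H, J.T @ (f - y))     # update of Eq. (7)
    print(t, 0.5 * np.sum((f - y) ** 2))          # loss (1) decreases rapidly
print(w)                                          # approaches (2.0, 1.5)
```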
The authors propose a scalable second-order method for optimization with a quadratic loss. The method is inspired by the neural tangent kernel approach, which also allows them to provide global convergence rates for GD and mini-batch SGD. The algorithm has a computational complexity that is linear in the number of parameters and requires solving a linear system of the size of the mini-batch. They also show experimentally the advantage of using their proposed method over SGD.
SP:7d6388235c53030aa92499c15b4543f82b9ff27c
Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
1 INTRODUCTION. First-order methods such as Stochastic Gradient Descent (SGD) are currently the standard choice for training deep neural networks. The merit of first-order methods is obvious: they only calculate the gradient and therefore are computationally efficient. In addition to better computational efficiency, SGD has even more advantages among the first-order methods. At each iteration, SGD computes the gradient only on a mini-batch instead of all training data. Such randomness introduced by sampling the mini-batch can lead to better generalization (Hardt et al., 2015; Keskar et al., 2016; Masters & Luschi, 2018; Mou et al., 2017; Zhu et al., 2018) and better convergence (Ge et al., 2015; Jin et al., 2017a;b), which is crucial when the function class consists of highly overparameterized deep neural networks. Recently there has been a huge body of work trying to develop more efficient first-order methods beyond SGD (Duchi et al., 2011; Kingma & Ba, 2014; Luo et al., 2019; Liu et al., 2019). Second-order methods, despite their better convergence rate, are rarely used to train deep neural networks. At each iteration, the algorithm has to compute second-order information, for example, the Hessian or its approximation, which is typically an m-by-m matrix where m is the number of parameters of the neural network. Moreover, the algorithm needs to compute the inverse of this matrix. The computational cost is prohibitive and usually it is not even possible to store such a matrix. Recently, there is a series of works (Du et al., 2018b;a; Zou et al., 2018; Allen-Zhu et al., 2018a; Oymak & Soltanolkotabi, 2018; Arora et al., 2019b;a; Cao & Gu, 2019; Zou & Gu, 2019) considering the optimization of neural networks based on the idea of the neural tangent kernel (NTK) (Jacot et al., 2018). Roughly speaking, the idea of NTK is to linearly approximate the output of a network w.r.t. the parameters in a local region, thus resulting in a kernel feature map, which is the gradient of the output w.r.t. the parameters. Jacot et al. (2018) show that when the width of the network tends to infinity, the NTK tends to be unchanged during gradient flow, since the network only needs an o(1) change of the parameters (going to zero as the width tends to infinity) to fit the data, so the change of the kernel is also o(1). For finite-width networks, however, the NTK changes, and previous works (Allen-Zhu et al., 2018b; Arora et al., 2019a) show that for sufficiently wide neural networks the optimization dynamics of GD/SGD are equivalent to those of using GD/SGD to solve an NTK kernel regression problem where the kernel is slowly evolving (see Lemma 1 in Section 2 for a precise description). A natural question then arises: Can we gain acceleration by directly solving the kernel regression w.r.t. the NTK at each step? In this paper, we give a positive answer to this question and reveal the connection between NTK regression and the Gauss-Newton method. We propose a novel optimization method – the Gram-Gauss-Newton (GGN) method. Instead of doing gradient descent, GGN solves the kernel regression w.r.t. the NTK at each step of the optimization. Following this idea, we theoretically prove that for overparameterized networks, GGN enjoys quadratic convergence compared to the linear rate of gradient descent. Besides the theoretically fast convergence rate, GGN is also very efficient in practice.
In fact, GGN is implicitly a reformulation of the Gauss-Newton method (see Section 3.1 for details), which is a classic second-order algorithm often used for solving nonlinear regression problems with square loss. In the Gauss-Newton method, one uses J^T J as an approximation of the Hessian (see Section 2 for a formal description), where J is the Jacobian matrix. However, the original Gauss-Newton method faces challenges when used for training deep neural networks. Most seriously, the size of the approximate Hessian J^T J is m-by-m, where m is the number of parameters of the neural network. Moreover, for overparameterized neural networks, J^T J is not invertible, which may make the algorithm intractable for training commonly-used neural networks. GGN bypasses the difficulty stated above as follows. Instead of using J^T J as the approximate Hessian and applying a Newton-type method, each step of GGN only involves the Gram matrix JJ^T, whose size is n-by-n where n is the number of data points. Furthermore, as already mentioned, to get better generalization performance, it is crucial to use mini-batches to introduce sampling noise when calculating derivatives. Therefore, like SGD, we also use mini-batches in GGN. In this case, the size of the Gram matrix further reduces to b-by-b, where b is the batch size. Though conventional wisdom may suggest that applying a mini-batch scheme to second-order methods will introduce a biased estimation of the accelerated gradient direction, we give the first convergence result for a mini-batch second-order method on overparameterized networks. Regarding computational complexity, we show that at each iteration, the overhead of GGN is small compared to SGD: the extra computation of GGN is mainly the matrix product JJ^T and the inverse of this matrix, whose size is small for a mini-batch. Detailed analyses can be found in Section 3.3. We next conduct experiments on two regression tasks to study the effectiveness of the GGN algorithm. We demonstrate that in these two real applications, using a practical neural network (e.g., ResNet-32) with standard width, our proposed GGN algorithm can converge faster and achieve better performance than several baseline algorithms. 1.1 RELATED WORKS. Despite the prevalence of first-order methods for training deep neural networks, there have been continuing efforts in developing practical second-order methods (Becker et al., 1988; Pascanu & Bengio, 2013). We summarize some of these works below. The main approach of these methods is to develop delicate approximations of the second-order information matrix so that the update direction can be computed as efficiently as possible. For example, Botev et al. (2017) proposed a recursive block-diagonal approximation of the Hessian. The blocks are Kronecker factored and can be efficiently computed and inverted. Grosse and Martens in a series of works developed the K-FAC method (Martens & Grosse, 2015; Grosse & Martens, 2016). The key idea is a Kronecker-factored approximation of the Fisher information matrix, which is used as the second-order matrix in natural gradient methods. These works received considerable attention and have been further improved (Wu et al., 2017; George et al., 2018; Martens et al., 2018). Bernacchia et al. (2018) derived an exact expression of the natural gradient update, but it only works for linear networks.
Different from all these works, our GGN algorithm does not try to approximate the second-order matrix, whose size is inevitably huge. Instead, we present an easy-to-compute solution for the update direction, reducing the computational cost significantly. One exception is the concurrent work of Ren & Goldfarb (2019), which also aims to use the exact Gauss-Newton update. They focus on reducing the complexity of inverting the approximate Hessian via the Sherman-Morrison-Woodbury formula and require subtle implementation tricks to use backpropagation. In contrast, GGN has a simpler update rule and better guarantees for neural networks. In a concurrent and independent work, Zhang et al. (2019a) showed that the natural gradient method and K-FAC have a linear convergence rate for sufficiently wide networks in the full-batch setting. In contrast, our method enjoys a higher-order (quadratic) convergence rate guarantee for overparameterized networks, and we focus on developing a practical and theoretically sound optimization method. We also reveal the relation between our method and NTK kernel regression, so using results based on NTK (Arora et al., 2019b), one can easily give generalization guarantees for our method. Another independent work (Achiam et al., 2019) proposed a preconditioned Q-learning algorithm which has a similar form of update rule. Unlike the methods considered in Zhang et al. (2019a); Achiam et al. (2019), which contain a learning rate that needs to be tuned, our derivation of GGN does not introduce a learning rate term (or, equivalently, it suggests that the learning rate can be fixed to 1 to get good performance, which is verified in Figure 2(c)). 2 NEURAL TANGENT KERNEL AND THE CLASSIC GAUSS-NEWTON METHOD FOR NONLINEAR LEAST SQUARES REGRESSION. Nonlinear least squares regression is a general machine learning problem. Given data pairs {x_i, y_i}_{i=1}^n and a class of nonlinear functions f, e.g., neural networks, parameterized by w, nonlinear least squares regression aims to solve the optimization problem min_{w∈R^m} L(w) = (1/2) ∑_{i=1}^n (f(w, x_i) − y_i)^2. (1) In the seminal work (Jacot et al., 2018), the authors consider the case when f is a neural network with infinite width. They showed that optimization of this problem using gradient flow involves a special kernel which is called the neural tangent kernel (NTK). Follow-up works further extended the relation between optimization and NTK, which can be summarized in the following lemma: Lemma 1 (Lemma 3.1 in Arora et al. (2019a), see also Dou & Liang (2019); Mei et al. (2019)). Consider optimizing problem (1) by gradient descent with infinitesimally small learning rate: dw_t/dt = −∇L(w_t), where w_t denotes the parameters at time t. Let f_t = (f(w_t, x_i))_{i=1}^n ∈ R^n be the network outputs on all x_i's at time t, and y = (y_i)_{i=1}^n be the desired outputs. Then f_t follows the evolution df_t/dt = −G_t · (f_t − y), (2) where G_t is an n × n positive semidefinite matrix, i.e., the Gram matrix w.r.t. the NTK at time t, whose (i, j)-th entry is 〈∇_w f(w_t, x_i), ∇_w f(w_t, x_j)〉. The key idea of Jacot et al. (2018) and its extensions (Du et al., 2018b;a; Zou et al., 2018; Allen-Zhu et al., 2018a; Oymak & Soltanolkotabi, 2018; Lee et al., 2019; Yang, 2019; Arora et al.
, 2019b;a; Cao & Gu, 2019; Zou & Gu, 2019) is that when the network is sufficiently wide, the Gram matrix at initialization, G_0, is close to a fixed positive definite matrix defined by the infinite-width kernel, and G_t is close to G_0 during training for all t. In this situation, G_t remains invertible, and the above dynamics are then identical to the dynamics of solving kernel regression with gradient flow w.r.t. the current kernel at time t. In fact, Arora et al. (2019a) rigorously prove that a fully-trained, sufficiently wide ReLU neural network is equivalent to the kernel regression predictor. As pointed out in Chizat & Bach (2018), the idea of NTK can be summarized as a linear approximation using a first-order Taylor expansion. We give an example of this idea for the NTK at initialization: f(w, x_i) − f(w_0, x_i) ≈ ∇_w f(w_0, x_i) · (w − w_0), (3) where ∇_w f(w_0, x) can then be viewed as an explicit expression of the feature map at x, w − w_0 is the parameter in the reproducing kernel Hilbert space (RKHS) induced by the NTK, and f(w, x_i) − f(w_0, x_i) is the target value. The idea of linear approximation is also used in the classic Gauss-Newton method (Golub, 1965) to obtain an accelerated algorithm for solving the nonlinear least squares problem (1). Concretely, at iteration t, the Gauss-Newton method takes the following first-order approximation: f(w, x_i) − f(w_t, x_i) ≈ ∇_w f(w_t, x_i) · (w − w_t), (4) where w_t stands for the parameter at iteration t. We note that this is also the linear expansion for deriving the NTK at time t. According to Eq. (1) and (4), to update the parameter, one can instead solve the following problem: w_{t+1} = argmin_w (1/2) ‖f_t + J_t (w − w_t) − y‖_2^2, (5) where f_t, y have the same meaning as in Lemma 1, and J_t = (∇_w f(w_t, x_1), ..., ∇_w f(w_t, x_n))^T ∈ R^{n×m} is the Jacobian matrix. A necessary and sufficient condition for w to be a solution of Eq. (5) is (J_t^T J_t) · (w − w_t) = −J_t^T (f_t − y). (6) Below we will denote H_t := J_t^T J_t ∈ R^{m×m}. For under-parameterized models (i.e., when the number of parameters m is less than the number of data points n), H_t is invertible, and the update rule is w_{t+1} = w_t − H_t^{-1} J_t^T (f_t − y). (7) This can also be viewed as an approximate Newton's method using H_t = J_t^T J_t to approximate the Hessian matrix. In fact, the exact Hessian matrix is ∇_w^2 (1/2) ∑_{i=1}^n (f(w_t, x_i) − y_i)^2 = J_t^T J_t + ∑_{i=1}^n (f(w_t, x_i) − y_i) ∇_w^2 f(w_t, x_i). (8) In the case when f is only mildly nonlinear w.r.t. w at the data points x_i, ∇_w^2 f(w_t, x_i) ≈ 0, and H_t is close to the real Hessian. In this situation, the behavior of the Gauss-Newton method is similar to that of Newton's method, and thus can achieve a superlinear convergence rate (Golub, 1965).
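Lemma 1 above can also be checked numerically: after one gradient step with a small learning rate η, the change in the network outputs is approximately −η G_t (f_t − y). A minimal sketch on a one-hidden-layer tanh network with made-up sizes and data (illustrative only, not the paper's architecture):

```python
# Finite-step check of the NTK dynamics of Lemma 1 / Eq. (2).
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 3, 64, 5
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
W = rng.normal(size=(h, d)) / np.sqrt(d)   # hidden weights
a = rng.normal(size=h) / np.sqrt(h)        # output weights

def outputs_and_jacobian(W, a):
    H = np.tanh(X @ W.T)                                    # (n, h) activations
    f = H @ a                                               # network outputs
    dW = (a * (1.0 - H ** 2))[:, :, None] * X[:, None, :]   # df/dW, shape (n, h, d)
    J = np.hstack([dW.reshape(n, -1), H])                   # df/d(all params)
    return f, J

f, J = outputs_and_jacobian(W, a)
G = J @ J.T                          # NTK Gram matrix G_t of Lemma 1
eta = 1e-4
grad = J.T @ (f - y)                 # gradient of the squared loss (1)
theta = np.concatenate([W.ravel(), a]) - eta * grad
f2, _ = outputs_and_jacobian(theta[:h * d].reshape(h, d), theta[h * d:])
# The actual change in outputs matches -eta * G @ (f - y) up to O(eta^2).
print(np.abs((f2 - f) + eta * G @ (f - y)).max())
```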
The authors propose training neural networks by solving a kernel ridge regression problem at each step (Formula 9 and Algorithm 1). The main difference of this method compared to Gauss-Newton is that it uses JJ^T as the curvature matrix, which has dimensions b-by-b (batch size b), instead of J^T J, which has dimensions m-by-m (number of parameters m).
SP:7d6388235c53030aa92499c15b4543f82b9ff27c
Switched linear projections and inactive state sensitivity for deep neural network interpretability
We introduce switched linear projections for expressing the activity of a neuron in a ReLU-based deep neural network in terms of a single linear projection in the input space. The method works by isolating the active subnetwork, a series of linear transformations, which completely determines the entire computation of the deep network for a given input instance. We also propose that for interpretability it is instructive and meaningful to focus on the patterns that deactivate the neurons in the network, which are ignored by the existing methods that implicitly track only the active aspect of the network's computation. We introduce a novel interpretability method based on inactive state sensitivity (Insens). Comparison against existing methods shows that Insens is robust (in the presence of noise), complete (in terms of patterns that affect the computation), and a very effective interpretability method for deep neural networks. 1 INTRODUCTION. It is notoriously hard to interpret how deep networks accomplish the tasks for which they are trained. At the same time, due to the pervasiveness of deep learning in numerous aspects of computing, it is increasingly important to gain an understanding of how they work. There are risks associated with the possibility that a neural network might not be "looking" at the "right" patterns (Nguyen et al.; Geirhos et al., 2019), as well as opportunities to learn from networks capable of better-than-human performance (Sadler & Regan, 2019). Hence, there is an ongoing effort to improve the interpretation and interpretability of the internal representations of neural networks. What makes this interpretation of the inside of a neural network hard is the high dimensionality and the distributed nature of its internal computation. Aside from the first hidden layer, neurons operate in an abstract high-dimensional space. If that was not hard enough, the analysis of individual components of the network (such as the activity of individual neurons) is rarely instructive, since it is the intricate relationships and interplay of those components that contain the "secret sauce". The two broad approaches to dealing with this complexity are to either use simpler interpretable models to approximate what a neural network does, or to trace back the elements of the computation into the input space in order to make the internal dynamics relatable to the input. In the latter approach we are typically interested in neurons' sensitivity – how changes in the network input affect their output – and decomposition – how different components of the input contribute to the output. In this paper we propose a straightforward and elegant method for expressing the computation of an arbitrary neuron's activity as a single linear projection in the input space. This projection consists of a switched weight vector and a switched bias that easily lend themselves to sensitivity analysis (analogous to gradient-based sensitivity) and decomposition of the internal computation. We also introduce a new approach for interpretability analysis, called inactive state sensitivity (Insens), which uses switched linear projections to aggregate the contribution of patterns in the input that deactivate neurons in the network. We demonstrate on several networks and image-based datasets that Insens provides a comprehensive picture of a deep network's internal computation.
The only constraint for the proposed methods is that the network must use ReLU activation functions for its hidden neurons. 2 RELATED WORK. Previous work on deep learning interpretability is extensive, with a wide variety of methods and approaches – Simonyan et al. (2014); Zeiler & Fergus; Bach et al. (2015); Mahendran & Vedaldi; Montavon et al. (2017); Sundararajan et al. (2017); Zhou et al. (2019) being just a selection of the most prominent efforts in this area. Our work on the single linear projection follows an approach akin to that of Lee et al. and Erhan et al. (2009), where the objective was to interpret the computation performed by an arbitrary neuron for a particular input vector as a projection in the input space. However, whereas these previous attempts were based on Deep Belief Nets (Hinton et al., 2006) and required an approximation of the said projection, our method is a forward computation that gives the neuron's activity exactly in terms of a linear projection in the input space. It works for any neural network, including convolutional ones, as long as all hidden neurons use piecewise linear activation functions. All existing methods for interpretability of deep learning, due to the nature of ReLU computation, necessarily provide information only about the active subnetwork of the ReLU-based architecture. Our own observations, as well as other evidence showing that in practice neural networks produce a relatively low number of activation regions (Hanin & Rolnick, 2019), lead us to the hypothesis that the analysis of the patterns in the input that switch neurons off gives an excellent picture of a network's sensitivity. We also take the view that too much interpretation in interpretability introduces the risk of showing us what we expect to see and not what the network is actually focussing on. For instance, in Deep Taylor Decomposition (Montavon et al., 2017), choices of different root points for the decomposition of the relevance function lead to different rules for Layerwise Relevance Propagation (LRP) (Bach et al., 2015), which can lead to different interpretations of what is important in the input. The LRP-α1β0 rule, for example, emphasises the computation over the positive weights in the network while discounting the relevance of the information passing through the negative weights. This rule is justified by assumptions about desired properties of the explanation, but this comes with a risk of confirmation bias. Insens is an attempt to take into account the patterns in the input that cause the neurons inside the network to produce zero output. The information related to the inactive network may seem irrelevant, since inactive neurons do not directly contribute to the computation of the overall output. However, there is something in the input that switches a particular set of neurons off, thus regulating the active computation, and as we show in this paper, this something carries a lot of meaningful information. 3 SWITCHED LINEAR PROJECTIONS. The basis of the switched network concept is the fact that neurons that produce an output of zero do not contribute to the computation of the overall output of the network. The notion of dead neurons, that is, neurons that always output zero, is not new, nor is the realisation that these neurons, along with their connecting weights, can be taken out of the network without any impact on the computation.
In a switched projection, we treat the zero-output neurons as temporarily dead for a given instance of the input. We refer to these neurons as inactive, since they may become active for a different network input. Thus we isolate the subnetwork of the active neurons in a given computation. As it happens, for a ReLU network the active neurons are those that pass their activity, the weighted sum of their inputs plus bias, directly to their output¹. This means that a subnetwork of active ReLU neurons is just a series of linear transformations, which is equivalent to a single linear transformation. As a result, we can express the computation performed by any neuron in a ReLU network as a projection onto a switched weight vector in the input space plus a switched bias. The term switched indicates that this weight and bias vector changes when the state of the network changes, the state corresponding to the particular combination of active and inactive neurons in the network. Figure 1 illustrates the concept graphically, and a formal description is given in the following proposition: Proposition 1 (Switched linear projections). Let x ∈ R^d be a vector of inputs, w_{li} ∈ R^{U_{l−1}} the weight vector, and b_{li} ∈ R the bias of neuron i in layer l (with U_{l−1} inputs from the previous layer). Let the activity of neuron i in layer l be defined as v_{li}(x) = (⋯ σ_r(σ_r(x W_1 + b_1) W_2 + b_2) ⋯) w_{li}^T + b_{li}, (1) where W_l = [w_{l1}^T ⋯ w_{lU_l}^T], T denotes transpose, b_l = [b_{l1} ⋯ b_{lU_l}], and σ_r(v) = max(v, 0) is the ReLU activation function. If we define an input-dependent state of the network as W_l^{(x)} = [σ̇_r(v_{l1}(x)) w_{l1}^T ⋯ σ̇_r(v_{lU_l}(x)) w_{lU_l}^T] and b_l^{(x)} = [σ̇_r(v_{l1}(x)) b_{l1} ⋯ σ̇_r(v_{lU_l}(x)) b_{lU_l}], where σ̇_r(v) = dσ_r(v)/dv, then for ŵ_{li}^T(x) = W_1^{(x)} W_2^{(x)} ⋯ W_{l−1}^{(x)} w_{li}^T and b̂_{li}(x) = b_1^{(x)} W_2^{(x)} ⋯ W_{l−1}^{(x)} w_{li}^T + b_2^{(x)} W_3^{(x)} ⋯ W_{l−1}^{(x)} w_{li}^T + ⋯ + b_{l−1}^{(x)} w_{li}^T + b_{li}, we have v_{li}(x) = x ŵ_{li}^T(x) + b̂_{li}(x). (2) The proof is provided in Appendix A. Note that the ReLU derivative, σ̇_r(v), is just a convenient notation for a step function, so that σ̇_r(v) w = w if v > 0, and 0 otherwise. (3) To simplify the notation, whenever referring to the parameters of the switched projection ŵ, b̂, as well as the activity v, we will drop the explicit dependency on x. While Figure 1 illustrates the switching concept on a small fully connected network, switched linear projections can be computed for networks with convolutional as well as pooling layers. A convolutional layer is just a special case of a fully connected layer with many weights being zero and groups of neurons constrained to share the weight values on their connections. For max pooling, the neurons that do not win the competition, and whose output therefore does not affect the computation from then on, are deemed to be inactive regardless of the output they produce. 3.1 SENSITIVITY. Equation 2 makes it obvious that a given neuron's switched weight vector is just the derivative of its activity with respect to the network input, ŵ = ∂v(x)/∂x. Thus, the switched weight vector is analogous to gradient-based sensitivity analysis. Figure 2 shows the heatmaps of the switched weight sensitivity for the same set of hidden neurons with different inputs from the MNIST-trained 2CONV neural network (for details on the network architectures featured in this paper see Appendix B).
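Proposition 1 is easy to verify numerically. The sketch below builds a small random ReLU network (the sizes are made up), records the activation pattern for one input, and accumulates the switched weight vector ŵ and switched bias b̂ by masking out inactive neurons; the neuron's activity then equals x ŵ^T + b̂:

```python
# Numerical check of Eq. (2): activity as a single switched linear projection.
import numpy as np

rng = np.random.default_rng(0)
d = 4
widths = [5, 6, 3]                          # hidden and output layer widths
Ws, bs, prev = [], [], d
for u in widths:
    Ws.append(rng.normal(size=(prev, u)))
    bs.append(rng.normal(size=u))
    prev = u

x = rng.normal(size=d)

# Forward pass, recording the activities (pre-activation values) per layer.
acts, a = [], x
for W, b in zip(Ws, bs):
    v = a @ W + b
    acts.append(v)
    a = np.maximum(v, 0.0)                  # ReLU

# Switched projection for neuron i in the final layer l, accumulated backward.
l, i = len(widths) - 1, 1
w_hat = Ws[l][:, i]
b_hat = bs[l][i]
for k in range(l - 1, -1, -1):
    masked = (acts[k] > 0) * w_hat          # zero out inactive neurons (Eq. (3))
    b_hat += bs[k] @ masked
    w_hat = Ws[k] @ masked

print(acts[l][i], x @ w_hat + b_hat)        # identical up to rounding
```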
In this visualisation we show the normalised ŵ, with the intensity of red corresponding to larger positive values and the intensity of blue to negative values. The neurons were chosen from the 2nd convolutional layer and the penultimate fully connected layer, respectively, such that for the four considered inputs some neurons were always active, some inactive, and others sometimes active and sometimes not.¹ [Figure 2: heatmaps of the switched weight sensitivity for selected hidden neurons, annotated per input with the label, the prediction, the activity v, and the switched bias b̂.] ¹In our terminology, activity denotes the output before the activation function, and an active neuron is one that produces non-zero output after the activation function; for a ReLU neuron the active and inactive neurons are those that have positive and negative activity, respectively. Figure 2 makes it clear that a given neuron is not necessarily sensitive to the same pattern for different network inputs – this is most evident in the sensitivity of the neurons of the fully connected layer. Also note that some neurons in the convolutional layers are active despite the fact that they only "see" the part of the input image that is "empty" (all pixels are black). This leads us to the conclusion that a given neuron is not necessarily a detector of a particular pattern in the input space, which is often the underlying assumption of existing interpretability techniques. Something also apparent in our switched projection analysis, though not evident from Figure 2, is that for a given input most of the neurons in the network are inactive. On average, only 17% of the neurons were active for a given MNIST input in this architecture. The fact that only a subset of neurons is active in a given computation is not a quirk of one specific network, as observed by Hanin & Rolnick (2019). Switched linear projections give us an interpretation of a deep network as a set of linear, input-dependent transformations. Something about the input activates a subset of neurons, but also keeps all the remaining neurons inactive. Traditional sensitivity analysis, as well as the one shown in Figure 2, shows a direction (or magnitude) of the gradient of the input that would increase a neuron's activity provided the state of the network remained unchanged. However, the state of these networks often changes even after small perturbations of the input, often resulting in the same classification but a different input gradient. We reason that the analysis of the state, including information about what makes the neurons inactive, adds a missing and likely very important aspect to interpretability, compared with analysis of the active subnetwork alone.
This manuscript introduces a novel method to explain the activities of ReLU-based deep networks by constructing a linear subnetwork which only contains neurons activated by the input. The status of each neuron can be obtained given any input sample. Moreover, the authors apply the notion of a "neuron's center", which is a neutral data point that is similar to the actual input x but with differences in particular objects that cause f(x) to be positive. The activity of each neuron can be decomposed into the attribution of each input pixel, and this decomposition can also be used to measure the contribution of each pixel to the network's stability. Overall, the proposed methodology is intuitive and distinct from the state-of-the-art interpretability methods.
SP:7d3b2759cf0b3dfc61a0de94781dc460b29b5b84
Switched linear projections and inactive state sensitivity for deep neural network interpretability
We introduce switched linear projections for expressing the activity of a neuron in a ReLU-based deep neural network in terms of a single linear projection in the input space. The method works by isolating the active subnetwork, a series of linear transformations, which completely determines the entire computation of the deep network for a given input instance. We also propose that for interpretability it is instructive and meaningful to focus on the patterns that deactivate the neurons in the network, which are ignored by the existing methods that implicitly track only the active aspect of the network's computation. We introduce a novel interpretability method based on inactive state sensitivity (Insens). Comparison against existing methods shows that Insens is robust (in the presence of noise), complete (in terms of patterns that affect the computation), and a very effective interpretability method for deep neural networks. 1 INTRODUCTION. It is notoriously hard to interpret how deep networks accomplish the tasks for which they are trained. At the same time, due to the pervasiveness of deep learning in numerous aspects of computing, it is increasingly important to gain an understanding of how they work. There are risks associated with the possibility that a neural network might not be "looking" at the "right" patterns (Nguyen et al.; Geirhos et al., 2019), as well as opportunities to learn from networks capable of better-than-human performance (Sadler & Regan, 2019). Hence, there is an ongoing effort to improve the interpretation and interpretability of the internal representations of neural networks. What makes this interpretation of the inside of a neural network hard is the high dimensionality and the distributed nature of its internal computation. Aside from the first hidden layer, neurons operate in an abstract high-dimensional space. If that was not hard enough, the analysis of individual components of the network (such as the activity of individual neurons) is rarely instructive, since it is the intricate relationships and interplay of those components that contain the "secret sauce". The two broad approaches to dealing with this complexity are to either use simpler interpretable models to approximate what a neural network does, or to trace back the elements of the computation into the input space in order to make the internal dynamics relatable to the input. In the latter approach we are typically interested in neurons' sensitivity – how changes in the network input affect their output – and decomposition – how different components of the input contribute to the output. In this paper we propose a straightforward and elegant method for expressing the computation of an arbitrary neuron's activity as a single linear projection in the input space. This projection consists of a switched weight vector and a switched bias that easily lend themselves to sensitivity analysis (analogous to gradient-based sensitivity) and decomposition of the internal computation. We also introduce a new approach for interpretability analysis, called inactive state sensitivity (Insens), which uses switched linear projections to aggregate the contribution of patterns in the input that deactivate neurons in the network. We demonstrate on several networks and image-based datasets that Insens provides a comprehensive picture of a deep network's internal computation.
The only constraint for the proposed methods is that the network must use ReLU activation functions for its hidden neurons. 2 RELATED WORK. Previous work on deep learning interpretability is extensive, with a wide variety of methods and approaches – Simonyan et al. (2014); Zeiler & Fergus; Bach et al. (2015); Mahendran & Vedaldi; Montavon et al. (2017); Sundararajan et al. (2017); Zhou et al. (2019) being just a selection of the most prominent efforts in this area. Our work on the single linear projection follows an approach akin to Lee et al. and Erhan et al. (2009), where the objective was to interpret the computation performed by an arbitrary neuron for a particular input vector as a projection in the input space. However, whereas these previous attempts were based on Deep Belief Nets (Hinton et al., 2006) and required an approximation of the said projection, our method is a forward computation that gives the neuron's activity in terms of a linear projection in the input space. It works for any neural network, including convolutional ones, as long as all hidden neurons use piecewise linear activation functions. All existing methods for interpretability of deep learning, due to the nature of ReLU computation, necessarily provide information only about the active subnetwork of the ReLU-based architecture. Our own observations, as well as other evidence showing that in practice neural networks produce a relatively low number of activation regions (Hanin & Rolnick, 2019), lead us to the hypothesis that the analysis of the patterns in the input that switch neurons off gives an excellent picture of a network's sensitivity. We also take the view that too much interpretation in interpretability introduces the risk of showing us what we expect to see and not what the network is actually focussing on. For instance, in Deep Taylor Decomposition (Montavon et al., 2017) choices of different root-points for the decomposition of the relevance function lead to different rules for Layerwise Relevance Propagation (LRP) (Bach et al., 2015), which can lead to different interpretations of what is important in the input. The LRP-α1β0 rule, for example, emphasises the computation over the positive weights in the network while discounting the relevance of the information passing through the negative weights. This rule is justified by assumptions about desired properties of the explanation, but this comes with a risk of confirmation bias. Insens is an attempt to take into account the patterns in the input that cause the neurons inside the network to produce zero output. The information related to the inactive network may seem irrelevant, since inactive neurons do not directly contribute to the computation of the overall output. However, there is something in the input that switches a particular set of neurons off, thus regulating the active computation, and as we show in this paper, this something carries a lot of meaningful information. 3 SWITCHED LINEAR PROJECTIONS. The basis of the switched network concept is the fact that neurons that produce output of zero do not contribute to the computation of the overall output of the network. The notion of dead neurons, that is, neurons that always output zero, is not new, nor is the realisation that these neurons, along with their connecting weights, can be taken out of the network without any impact on the computation.
In a switched projection, we treat the zero-output neurons as temporarily dead for a given instance of input. We refer to these neurons as inactive, since they may become active for a different network input. Thus we isolate the subnetwork of the active neurons in a given computation. As it happens, for a ReLU neuron the active neurons are those that pass their activity, the weighted sum of their inputs plus bias, directly to their output (in our terminology, activity denotes the output before the activation function, and an active neuron is one that produces non-zero output after the activation function; for a ReLU neuron, the active and inactive neurons are those that have positive and negative activity respectively). This means that a subnetwork of active ReLU neurons is just a series of linear transformations, which is equivalent to a single linear transformation. As a result, we can express the computation performed by any neuron in a ReLU network as a projection onto a switched weight vector in the input space plus a switched bias. The term switched indicates that this weight vector and bias change when the state of the network changes, the state corresponding to the particular combination of active and inactive neurons in the network. Figure 1 illustrates the concept graphically, and a formal description is given in the following proposition:

Proposition 1 (Switched linear projections). Let $x \in \mathbb{R}^d$ be a vector of inputs, $w_{li} \in \mathbb{R}^{U_{l-1}}$ the weight vector, and $b_{li} \in \mathbb{R}$ the bias of neuron $i$ in layer $l$ (with $U_{l-1}$ inputs from the previous layer). Let the activity of a neuron $i$ in layer $l$ be defined as:

$$v_{li}(x) = \big(\ldots \sigma_r(\sigma_r(x W_1 + b_1) W_2 + b_2) \ldots\big)\, w_{li}^T + b_{li}, \qquad (1)$$

where $W_l = [w_{l1}^T \ldots w_{lU_l}^T]$, $T$ denotes transpose, $b_l = [b_{l1} \ldots b_{lU_l}]$, and $\sigma_r(v) = \max(v, 0)$ is the ReLU activation function. If we define an input-dependent state of the network as $W_l^{(x)} = [\dot{\sigma}_r(v_{l1}(x))\, w_{l1}^T \ldots \dot{\sigma}_r(v_{lU_l}(x))\, w_{lU_l}^T]$ and $b_l^{(x)} = [\dot{\sigma}_r(v_{l1}(x))\, b_{l1} \ldots \dot{\sigma}_r(v_{lU_l}(x))\, b_{lU_l}]$, where $\dot{\sigma}_r(v) = \frac{d\sigma_r(v)}{dv}$, then for

$$\hat{w}_{li}^T(x) = W_1^{(x)} W_2^{(x)} \cdots W_{l-1}^{(x)} w_{li}^T$$

and

$$\hat{b}_{li}(x) = b_1^{(x)} W_2^{(x)} \cdots W_{l-1}^{(x)} w_{li}^T + b_2^{(x)} W_3^{(x)} \cdots W_{l-1}^{(x)} w_{li}^T + \ldots + b_{l-1}^{(x)} w_{li}^T + b_{li},$$

we have

$$v_{li}(x) = x\, \hat{w}_{li}^T(x) + \hat{b}_{li}(x). \qquad (2)$$

The proof is provided in Appendix A. Note that the ReLU derivative $\dot{\sigma}_r(v)$ is just a convenient definition for a step function, so that

$$\dot{\sigma}_r(v)\, w = \begin{cases} w, & v > 0 \\ 0, & \text{otherwise.} \end{cases} \qquad (3)$$

To simplify the notation, whenever referring to the parameters of the switched projection $\hat{w}$, $\hat{b}$, as well as the activity $v$, we will drop the explicit dependency on $x$. While Figure 1 illustrates the switching concepts on a small fully connected network, switched linear projections can be computed for networks with convolutional as well as pooling layers. A convolutional layer is just a special case of a fully connected layer with many weights being zero and groups of neurons constrained to share the weight values on their connections. For max pooling, the neurons that do not win the competition, and whose output therefore does not affect the computation from then on, are deemed to be inactive regardless of the output they produce. 3.1 SENSITIVITY. Equation 2 makes it obvious that a given neuron's switched weight vector is just the derivative of its activity with respect to the network input, $\hat{w} = \frac{\partial v(x)}{\partial x}$. Thus, the switched weight vector is analogous to gradient-based sensitivity analysis. Figure 2 shows the heatmaps of the switched weight sensitivity for the same set of hidden neurons with different inputs from the MNIST-trained 2CONV neural network (for details on the network architectures featured in this paper see Appendix B).
In this visualisation we show the normalised $\hat{w}$, with the intensity of red corresponding to larger positive values and the intensity of blue to negative values. The neurons were chosen from the 2nd convolutional layer and the penultimate fully connected layer respectively, such that for the four considered inputs some neurons were always active, some always inactive, and others sometimes active and sometimes not. [Figure 2: heatmaps of switched weight sensitivity for a set of hidden neurons across several MNIST inputs, with each panel annotated with the prediction, the neuron's activity $v$, and its switched bias $\hat{b}$.] Figure 2 makes it clear that a given neuron is not necessarily sensitive to the same pattern for different network inputs – this is most evident in the sensitivity of the neurons of the fully connected layer. Also note that some neurons in the convolutional layers are active despite the fact that they only "see" the part of the input image that is "empty" (all pixels are black). This leads us to the conclusion that a given neuron is not necessarily a detector of a particular pattern in the input space, which is often the underlying assumption of existing interpretability techniques. Something also apparent in our switched projection analysis, though not evident from Figure 2, is that for a given input most of the neurons in the network are inactive. On average, only 17% of the neurons were active for a given MNIST input in this architecture. The fact that only a subset of neurons is active in a given computation is not a quirk of one specific network, as observed by Hanin & Rolnick (2019). Switched linear projections give us an interpretation of a deep network as a set of linear, input-dependent transformations. Something about the input activates a subset of neurons, but also keeps all the remaining neurons inactive. Traditional sensitivity analysis, such as the one shown in Figure 2, shows the direction (or magnitude) of the input gradient that would increase a neuron's activity, provided the state of the network remained unchanged. However, in these networks the state often changes even after small perturbations of the input, frequently resulting in the same classification but a different input gradient. We reason that analysis of the state, including information about what makes the neurons inactive, adds a missing and likely very important aspect to interpretability beyond analysis of the active subnetwork alone.
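To make Proposition 1 and the observations above concrete, the following is a minimal NumPy sketch (ours, not the authors' code release; the toy two-layer network and all sizes are illustrative assumptions). It builds the switched weight vector and bias for a neuron in the second layer, checks Equation (2) against a direct forward pass, and illustrates how small input perturbations can change the network state, and hence the switched projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected ReLU network in the row-vector convention of
# Proposition 1: activities are x @ W_l + b_l.
d, U1, U2 = 4, 5, 3
W1, b1 = rng.normal(size=(d, U1)), rng.normal(size=U1)
W2, b2 = rng.normal(size=(U1, U2)), rng.normal(size=U2)

x = rng.normal(size=d)
v1 = x @ W1 + b1                      # layer-1 activities (pre-activation)
v2 = np.maximum(v1, 0.0) @ W2 + b2    # layer-2 activities

# Input-dependent state: the step function zeroes out the weights and
# biases of inactive (v <= 0) neurons.
s1 = (v1 > 0).astype(float)
W1_x, b1_x = W1 * s1, b1 * s1         # switched W_1^(x) and b_1^(x)

# Switched projection of neuron i in layer 2 (Equation 2).
i = 0
w_hat = W1_x @ W2[:, i]               # switched weight vector
b_hat = b1_x @ W2[:, i] + b2[i]       # switched bias
assert np.allclose(v2[i], x @ w_hat + b_hat)

# w_hat is also the gradient of v_{2i} w.r.t. x (Section 3.1), but it is
# only valid while the state is unchanged; perturbations can flip it.
for eps in (1e-3, 1e-1):
    flips = ((x + eps * rng.normal(size=d)) @ W1 + b1 > 0) != (v1 > 0)
    print(f"eps={eps}: {flips.sum()} layer-1 neurons changed state")
```

The same construction extends layer by layer: multiplying the switched matrices $W_1^{(x)} W_2^{(x)} \cdots$ collapses the active subnetwork into one linear map per neuron.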
The work basically introduces a new way of looking at interpretability: instead of focusing on the source of activations in the network for a given input image, focus on the source of stability, i.e. the inactive neurons (in a ReLU network). The work starts by proving (although it is trivial) that in a ReLU network (more generally, any piecewise-linear network), for a given input image there is a locally linear relationship between a given neuron's activation and the image: v = w^T x + b. As the authors correctly mention, focusing on 'w' for sensitivity analysis is basically the vanilla gradient method. The contribution, however, is focusing on the projection of the bias and the introduced notion of 'centre'. With this notion, one can focus on the deactivated neurons in the network and how each input pixel is responsible for them. In other words, unlike previous work that focuses on the activation map, the authors correctly refer to the deactivated neurons as another source of the network's prediction.
SP:7d3b2759cf0b3dfc61a0de94781dc460b29b5b84
Bio-Inspired Hashing for Unsupervised Similarity Search
The fruit fly Drosophila's olfactory circuit has inspired a new locality sensitive hashing (LSH) algorithm, FlyHash. In contrast with classical LSH algorithms that produce low-dimensional hash codes, FlyHash produces sparse high-dimensional hash codes and has also been shown to have superior empirical performance compared to classical LSH algorithms in similarity search. However, FlyHash uses random projections and cannot learn from data. Building on inspiration from FlyHash and the ubiquity of sparse expansive representations in neurobiology, our work proposes a novel hashing algorithm BioHash that produces sparse high-dimensional hash codes in a data-driven manner. We show that BioHash outperforms previously published benchmarks for various hashing methods. Since our learning algorithm is based on a local and biologically plausible synaptic plasticity rule, our work provides evidence for the proposal that LSH might be a computational reason for the abundance of sparse expansive motifs in a variety of biological systems. We also propose a convolutional variant BioConvHash that further improves performance. From the perspective of computer science, BioHash and BioConvHash are fast, scalable and yield compressed binary representations that are useful for similarity search. 1 INTRODUCTION. Sparse expansive representations are ubiquitous in neurobiology. Expansion means that a high-dimensional input is mapped to an even higher dimensional secondary representation. Such expansion is often accompanied by a sparsification of the activations: dense input data is mapped into a sparse code, where only a small number of secondary neurons respond to a given stimulus. A classical example of the sparse expansive motif is the Drosophila fruit fly olfactory system. In this case, approximately 50 projection neurons send their activities to about 2500 Kenyon cells (Turner et al., 2008), thus accomplishing an approximately 50x expansion. An input stimulus typically activates approximately 50% of projection neurons, and less than 10% of Kenyon cells (Turner et al., 2008), providing an example of significant sparsification of the expanded codes. Another example is the rodent olfactory circuit. In this system, dense input from the olfactory bulb is projected into the piriform cortex, which has 1000x more neurons than the number of glomeruli in the olfactory bulb. Only about 10% of those neurons respond to a given stimulus (Mombaerts et al., 1996). A similar motif is found in the rat's cerebellum and hippocampus (Dasgupta et al., 2017). From the computational perspective, expansion is helpful for increasing the number of classification decision boundaries by a simple perceptron (Cover, 1965) or increasing memory storage capacity in models of associative memory (Hopfield, 1982). Additionally, sparse expansive representations have been shown to reduce intrastimulus variability and the overlaps between representations induced by distinct stimuli (Sompolinsky, 2014). Sparseness has also been shown to increase the capacity of models of associative memory (Tsodyks & Feigelman, 1988). The goal of the present work is to use this "biological" inspiration about sparse expansive motifs, as well as local Hebbian learning, for designing a novel hashing algorithm BioHash that can be used in similarity search. Below we describe the task and the algorithm, and demonstrate that BioHash improves retrieval performance on common benchmark datasets.
Similarity search and LSH. In similarity search, given a query $q \in \mathbb{R}^d$, a similarity measure $\mathrm{sim}(q, x)$, and a database $X \in \mathbb{R}^{n \times d}$ containing $n$ items, the objective is to retrieve a ranked list of $R$ items from the database most similar to $q$. When data is high-dimensional (e.g. images/documents) and the databases are large (millions or billions of items), this is a computationally challenging problem. However, approximate solutions are generally acceptable, with Locality Sensitive Hashing (LSH) being one such approach (Wang et al., 2014). Similarity search approaches may be unsupervised or supervised. Since labelled information for extremely large datasets is infeasible to obtain, our work focuses on the unsupervised setting. In LSH (Indyk & Motwani, 1998; Charikar, 2002), the idea is to encode each database entry $x$ (and query $q$) with a binary representation $h(x)$ ($h(q)$ respectively) and to retrieve the $R$ entries with smallest Hamming distances $d_H(h(x), h(q))$. Intuitively (see Charikar (2002) for a formal definition), a hash function $h : \mathbb{R}^d \to \{-1, 1\}^m$ is said to be locality sensitive if similar (dissimilar) items $x_1$ and $x_2$ are close by (far apart) in Hamming distance $d_H(h(x_1), h(x_2))$. LSH algorithms are of fundamental importance in computer science, with applications in similarity search, data compression and machine learning (Andoni & Indyk, 2008). Drosophila olfactory circuit and FlyHash. In classical LSH approaches, the data dimensionality $d$ is much larger than the embedding space dimension $m$, resulting in low-dimensional hash codes (Wang et al., 2014; Indyk & Motwani, 1998; Charikar, 2002). In contrast, a new family of hashing algorithms has been proposed (Dasgupta et al., 2017) where $m \gg d$, but the secondary representation is highly sparse, with only a small number $k$ of the $m$ units being active; see Figure 1. We call this algorithm FlyHash in this paper, since it is motivated by the computation carried out by the fly's olfactory circuit. The expansion from the $d$-dimensional input space into an $m$-dimensional secondary representation is carried out using a random set of weights $W$ (Dasgupta et al., 2017; Caron et al., 2013). The resulting high-dimensional representation is sparsified by k-Winner-Take-All (k-WTA) feedback inhibition in the hidden layer, resulting in the top ~5% of units staying active (Lin et al., 2014; Stevens, 2016). While FlyHash uses random synaptic weights, sparse expansive representations are not necessarily random (Sompolinsky, 2014), perhaps not even in the case of Drosophila (Gruntman & Turner, 2013; Zheng et al., 2018). Moreover, using synaptic weights that are learned from data might help to further improve the locality sensitivity property of FlyHash. Thus, it is important to investigate the role of learned synapses in hashing performance. A recent work, SOLHash (Li et al., 2018), takes inspiration from FlyHash and attempts to adapt the synapses to data, demonstrating improved performance over FlyHash. However, every learning update step in SOLHash invokes a constrained linear program and also requires computing pairwise inner products between all training points, making it very time consuming and limiting its scalability to datasets of even modest size. These limitations restrict SOLHash to training on only a small fraction of the data (Li et al., 2018).
Additionally, SOLHash is biologically implausible (for an extended discussion, see Sec. 5). BioHash also takes inspiration from FlyHash and demonstrates improved performance compared to the random weights used in FlyHash, but it is fast, online, scalable and, importantly, neurobiologically plausible. Not only can "biological" inspiration lead to improved hashing techniques, but the opposite might also be true. One of the statements of the present paper is that BioHash satisfies the locality sensitive property and, at the same time, utilizes a biologically plausible learning rule for the synaptic weights (Krotov & Hopfield, 2019). This provides evidence toward the proposal that the reason sparse expansive representations are so common in biological organisms is that they perform locality sensitive hashing. In other words, they cluster similar stimuli together and push distinct stimuli far apart. Thus, our work provides evidence toward the proposal that LSH might be a fundamental computational principle utilized by sparse expansive circuits, Fig. 1 (right). Importantly, the learning of synapses must be neurobiologically plausible (the synaptic plasticity rule should be local). Contributions. Building on inspiration from FlyHash and, more broadly, the ubiquity of sparse, expansive representations in neurobiology, our work proposes a novel hashing algorithm BioHash that, in contrast with previous work (Dasgupta et al., 2017; Li et al., 2018), produces sparse high-dimensional hash codes in a data-driven manner, with learning of synapses in a neurobiologically plausible way. We provide an existence proof for the proposal that LSH may be a fundamental computational principle in neural circuits (Dasgupta et al., 2017) in the context of learned synapses. We incorporate convolutional structure into BioHash, resulting in hashing with improved performance compared to previously published benchmarks. From the perspective of computer science, we show that BioHash is simple, scalable to large datasets, and demonstrates good performance for similarity search. Interestingly, BioHash outperforms a number of recent SOTA deep hashing methods trained via backpropagation. 2 APPROXIMATE SIMILARITY SEARCH VIA BIOHASHING. Formally, if we denote a data point as $x \in \mathbb{R}^d$, we seek a binary hash code $y \in \{-1, 1\}^m$. We define the hash length of a binary code as $k$ if the exact Hamming distance computation is $O(k)$. Below we present our bio-inspired hashing algorithm. 2.1 BIO-INSPIRED HASHING (BIOHASH). We adopt a biologically plausible unsupervised algorithm for representation learning from Krotov & Hopfield (2019). Denote the synapses from the input layer to the hash layer as $W \in \mathbb{R}^{m \times d}$. The learning dynamics for the synapses of an individual neuron $\mu$, denoted by $W_{\mu i}$, are given by

$$\tau \frac{dW_{\mu i}}{dt} = g\big[\operatorname{Rank}\big(\langle W_\mu, x\rangle_\mu\big)\big]\big(x_i - \langle W_\mu, x\rangle_\mu W_{\mu i}\big), \qquad (1)$$

where $W_\mu = (W_{\mu 1}, W_{\mu 2}, \ldots, W_{\mu d})$, and

$$g[\mu] = \begin{cases} 1, & \mu = 1 \\ -\Delta, & \mu = r \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

and $\langle x, y\rangle_\mu = \sum_{i,j} \eta^\mu_{ij} x_i y_j$, with $\eta^\mu_{ij} = |W_{\mu i}|^{p-2}\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta and $\tau$ is the time scale of the learning dynamics. The Rank operation in Equation (1) sorts the inner products from the largest ($\mu = 1$) to the smallest ($\mu = m$). It can be shown that the synapses converge to a unit ($p$-norm) sphere (Krotov & Hopfield, 2019).
The training dynamics can be shown to minimize the following energy function

$$E = -\sum_A \sum_{\mu=1}^m g\big[\operatorname{Rank}\big(\langle W_\mu, x^A\rangle_\mu\big)\big] \frac{\langle W_\mu, x^A\rangle_\mu}{\langle W_\mu, W_\mu\rangle_\mu^{\frac{p-1}{p}}}, \qquad (3)$$

where $A$ indexes the training example. Note that the training dynamics do not perform gradient descent, i.e. $\dot{W}_\mu \neq \nabla_{W_\mu} E$. However, the time derivative of the energy function under dynamics (1) is always negative (we show this for the case $\Delta = 0$ below):

$$\tau \frac{dE}{dt} = -\sum_A \frac{\tau(p-1)}{\langle W_{\hat\mu}, W_{\hat\mu}\rangle_{\hat\mu}^{\frac{p-1}{p}+1}} \left[\Big\langle \frac{dW_{\hat\mu}}{dt}, x^A\Big\rangle_{\hat\mu}\langle W_{\hat\mu}, W_{\hat\mu}\rangle_{\hat\mu} - \langle W_{\hat\mu}, x^A\rangle_{\hat\mu}\Big\langle \frac{dW_{\hat\mu}}{dt}, W_{\hat\mu}\Big\rangle_{\hat\mu}\right] = -\sum_A \frac{\tau(p-1)}{\langle W_{\hat\mu}, W_{\hat\mu}\rangle_{\hat\mu}^{\frac{p-1}{p}+1}} \left[\langle x^A, x^A\rangle_{\hat\mu}\langle W_{\hat\mu}, W_{\hat\mu}\rangle_{\hat\mu} - \langle W_{\hat\mu}, x^A\rangle_{\hat\mu}^2\right] \leq 0, \qquad (4)$$

where the Cauchy–Schwarz inequality is used. For every training example $A$ the index of the activated hidden unit is defined as

$$\hat\mu = \arg\max_\mu \big[\langle W_\mu, x^A\rangle_\mu\big]. \qquad (5)$$

Thus, the energy function decreases during learning. A similar result can be shown for $\Delta \neq 0$. After the learning phase is complete, the hash code is generated, as in FlyHash, via WTA sparsification: for a given query $x$ we generate a hash code $y \in \{-1, 1\}^m$ as

$$y_\mu = \begin{cases} 1, & \langle W_\mu, x\rangle_\mu \text{ is in the top } k \\ -1, & \text{otherwise.} \end{cases} \qquad (6)$$

Thus, the hyperparameters of the method are $p$, $r$, $m$ and $\Delta$. Note that the synapses are updated based only on pre- and post-synaptic activations, resulting in Hebbian or anti-Hebbian updates. Many "unsupervised" learning-to-hash approaches provide a sort of "weak supervision" in the form of similarities evaluated in the feature space of deep CNNs trained on ImageNet (Jin et al., 2019) to achieve good performance. BioHash does not assume such information is provided and is completely unsupervised.
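A minimal NumPy sketch of the pipeline described above (our illustration, not the reference implementation): a discrete-time version of the update in Equation (1) for the special case $p = 2$, where $\langle W_\mu, x\rangle_\mu$ reduces to an ordinary dot product, followed by the WTA encoding of Equation (6). The learning rate, epoch count, and toy data are assumptions; with the training step skipped and a random fixed $W$, the encoder reduces to FlyHash.

```python
import numpy as np

def biohash_train(X, m=256, r=2, delta=0.2, lr=1e-2, epochs=5, seed=0):
    """Discrete-time version of Equation (1) with p = 2: a local Hebbian
    update for the top-ranked unit, anti-Hebbian for the unit at rank r."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(m, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in rng.permutation(X):
            acts = W @ x                          # <W_mu, x> for every unit
            order = np.argsort(-acts)             # Rank: largest first
            for mu, g in ((order[0], 1.0), (order[r - 1], -delta)):
                W[mu] += lr * g * (x - acts[mu] * W[mu])
    return W

def encode(X, W, k=16):
    """Equation (6): +1 for the k units with the largest <W_mu, x>,
    -1 otherwise. With random, untrained W this is FlyHash."""
    A = X @ W.T
    Y = -np.ones_like(A)
    top = np.argpartition(A, -k, axis=1)[:, -k:]
    np.put_along_axis(Y, top, 1.0, axis=1)
    return Y

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64))                    # stand-in database
W = biohash_train(X)
codes = encode(X, W)

query = X[0] + 0.05 * rng.normal(size=64)         # noisy copy of item 0
d_hamming = (codes != encode(query[None, :], W)[0]).sum(axis=1)
print("nearest items by Hamming distance:", np.argsort(d_hamming)[:5])
```

Since only $k$ of the $m$ bits are $+1$, the Hamming distance between two codes can be computed from their active sets in $O(k)$, which is the sense in which $k$ is the hash length.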
This paper studies a new model of locality sensitive hashing (LSH) that is inspired by the fruit fly Drosophila's olfactory circuit. Instead of mapping each input to a low-dimensional space, such LSH methods (FlyHash) map given $d$-dimensional inputs to an $m$-dimensional space such that $m \gg d$. However, these mappings enforce sparsity on the hash maps such that only $k$ out of $m$ coordinates of the output of the hash map are non-zero.
SP:97237fefef6ae3891e0d3558373cccf759002ab5
This paper introduces a variant of FlyHash for similarity search in vector spaces. The basic idea is motivated by the intuition that, since the original FlyHash method is data-independent, one might improve FlyHash's locality sensitivity by learning from data. It does so by learning the weights of the projection layer and using winner-take-all sparsification to generate sparse binary hash codes. This leads to the bio-inspired hashing algorithm (BioHash). The paper argues that by taking into account the density of the data, the learned projection helps to further push similar entities to have hash codes that point in similar directions while repelling dissimilar objects in opposite directions. Experimental results are reported on MNIST and CIFAR10, and the proposed approach demonstrates better retrieval precision compared to several other hashing-based methods.
SP:97237fefef6ae3891e0d3558373cccf759002ab5
Model-Agnostic Feature Selection with Additional Mutual Information
1 INTRODUCTION. Model interpretation techniques aim to select features important for a response by reducing models (sometimes locally) to be human interpretable. However, the phrase model interpretation can be a bit of a misnomer. Any interpretation of a model must be imbued in the model by the population distribution that provides the data to train the model. In this sense, interpreting a model should be viewed as understanding the population distribution of data through the lens of a model. Existing methods for understanding population distributions only work with particular models fit to the population, particular choices of test statistic, or particular auxiliary models for interpretation (Ribeiro et al., 2016; Lundberg and Lee, 2017). Such structural restrictions limit the applicability of these methods to a smaller class of population distributions. To be able to work in a black-box manner, feature selection methods can use models but must not require a particular structure in the models used in the selection process. Understanding the population distribution can be phrased as assessing whether a response is independent of a feature given the rest of the features; this test is called a conditional randomization test (Candes et al., 2018). Conditional randomization tests require test statistics. Test statistics like linear model coefficients (Barber et al., 2015) or correlation may miss dependence between the response and the features. To avoid missing relationships between variables, we develop the notion of a proper test statistic. Proper test statistics are those whose power increases to one as the amount of data increases. Conditional independence implies the conditional joint factorizes into conditional marginals. Measuring the divergence between these distributions yields a proper test statistic. Of the class of integral probability metrics (Müller, 1997) and f-divergences (Csiszár, 1964), the KL-divergence simplifies estimation and allows for reuse of the model structures and code from the standard task of predicting the response from the features. Using the KL-divergence in this context has a natural interpretation; it is a measure of the additional information each feature provides about the outcome over the rest. This measure of information is known as the additional mutual information (AMI) (Ranganath and Perotte, 2018). Our proposed procedure is called the additional mutual information conditional randomization test (AMI-CRT). AMI-CRT uses regressions to simulate data from the null for each feature and compares the additional mutual information (AMI) of the original data to the AMI of the simulations from the null to assess significance. AMI-CRT works with any regression with probabilistic outputs, like cross-entropy-trained neural networks. Training many regressions for each sample from the null can require substantial computation. While this is an embarrassingly parallel computation, we develop FAST-AMI-CRT, which only requires a single model trained on null data. FAST-AMI-CRT uses an average of the model from the null with the model from the original data. We show this mixture guards against both variance in model training and poor estimation of the null. Though simple, AMI-CRT outperforms popular procedures for feature importance on a wide variety of simulated data, hospital records, and biological data. Working with data sometimes requires interpreting individual datapoints.
For example, a doctor may benefit from knowing which features for a particular patient relate to their health. The process of identifying features at a datapoint level is called instance-wise feature selection (Ribeiro et al., 2016; Lundberg and Lee, 2017; Gimenez and Zou, 2019). We identify an issue in instance-wise feature selection, where even features selected using the true population distribution do not yield the features that were used to generate the response of an instance. The crux of this disparity is that the response generation process, conditional on the features, may use randomness to select features. We provide an example to demonstrate where instance-wise feature selection can go awry. We develop sufficient conditions for instance-wise feature selection to avoid this issue. The same regression estimates from AMI-CRT can be used to estimate feature importances with minimal computational overhead, resulting in a method we term additional mutual information instance-wise feature selection (AMI-IW). We demonstrate AMI-IW on multiple simulations and image data. Across all of these tasks AMI-IW outperforms popular baselines. Related Work. Permutation tests (Fisher, 1937) provide a test for marginal independence between each feature and the outcome. However, they fail to test conditional independence, which is required when covariates are dependent on each other. To address this, solutions like Sure Independence Screening (Barut et al., 2016; Fan and Lv, 2008) and Conditional Randomization Tests (Barber et al., 2015; Candes et al., 2018) have been proposed. These outline frameworks for conditional independence testing. However, they often make linearity or additive noise assumptions about the data generating distribution. Furthermore, they require the choice of a test statistic to capture some notion of conditional independence. The user of such frameworks is often burdened with the task of choosing this test statistic, which may require strong assumptions about the data generating distribution. Extending this approach to neural networks, Lu et al. (2018) propose a fully connected network whose weights are used as a test statistic. Though moving beyond linear models, their method is specific to fully connected networks. Tansey et al. (2018) propose holdout randomization tests (HRTs) that use empirical loss as a test statistic. The loss they use for continuous-valued distributions of the response is the mean-squared error (MSE), which may ignore higher-order dependencies between the response and features. Using AMI inside an HRT would capture these higher-order dependencies. In our results, we adapt our AMI test statistic to HRTs and show that this produces better calibration than other choices of loss. HRTs provide computational speed-ups over CRTs. However, this speedup comes at the cost of robustness to poor estimation of the null feature distribution. We demonstrate empirically that FAST-AMI-CRT is robust to such poor estimation. Beyond understanding the population distribution, some tasks require interpreting a population distribution on the level of an individual datapoint. Methods that test for conditional independence work under distributional notions of feature selection, but are not designed to identify the relevant features for a particular sample. To address this issue of "instance-wise feature selection," several methods have been proposed, including local perturbations (Simonyan et al.
, 2013; Sundararajan et al., 2017; Ribeiro et al., 2016) and fitting simpler auxiliary models to explain the predictions of a large model (Chen et al., 2018; Lundberg and Lee, 2017; Yoon et al., 2019; Turner, 2016; Štrumbelj and Kononenko, 2014; Shrikumar et al., 2017). Our instance-wise work is most similar to that of Burns et al. (2019), who repurpose the HRT framework to perform instance-wise feature selection, or Gimenez and Zou (2019), who define a conditional randomization test (CRT) procedure for subsets of the feature space. In general, however, the conditions under which instance-wise feature selection with predictive models may be possible are not well developed. We address this issue by first identifying a set of sufficient conditions under which instance-wise feature selection is always possible. We then show how estimators used in AMI-CRT can be repurposed for use in an instance-wise setting, yielding a procedure called AMI-IW. 2 PROPER TESTS FOR FEATURE SELECTION. Practitioners of machine learning use feature selection to identify important features for their predictive task. One way to filter out important features is to find those that improve predictions given the rest of the features. This can be formalized through conditional independence. Let $x_j$ be the $j$th feature of $x$ and let $x_{-j}$ be all features but the $j$th one. The goal is to discover a set $S$ such that $\forall x_j \notin S$, $x_j \perp y \mid x_{-j}$, where independence is with respect to the true population distribution $q$. The only knowledge about $q$ comes from a finite set of samples $D_N := \{(x^{(i)}, y^{(i)})\}_{i=1}^N$ sampled from the population. This means that it is impossible to assess exact conditional independence. Therefore, in the finite sample setting, we must formulate a statistical hypothesis test. A conditional randomization test (CRT) (Candes et al., 2018) defines a hypothesis test for conditional independence. For the $j$th feature, CRTs first compute a test statistic $t$ using the $N$ samples of data $D_N$. CRTs place this statistic in a null distribution where samples of the $j$th feature $x_j$ are replaced by samples of $\tilde{x}_j \mid x_{-j}$, which by construction satisfy $\tilde{x}_j \perp y \mid x_{-j}$. Letting $\tilde{D}_{j,N}$ be a dataset where $\{x_j^{(i)}\}_{i=1}^N$ has been replaced by $\{\tilde{x}_j^{(i)}\}_{i=1}^N$, the p-value for a CRT is

$$p_j(D_N) = \mathbb{E}_{\tilde{x}_j^{(i)} \sim q(x_j \mid x_{-j} = x_{-j}^{(i)}),\, \forall i = 1 \ldots N}\Big[\mathbb{1}\big(t(D_N) \le t(\tilde{D}_{j,N})\big)\Big] = \mathbb{E}_{\tilde{x}_j^{(i)} \sim q(x_j \mid x_{-j} = x_{-j}^{(i)}),\, \forall i = 1 \ldots N}\Big[\mathbb{1}\big(t(D_N) - t(\tilde{D}_{j,N}) \le 0\big)\Big], \qquad (1)$$

Under smoothness constraints, the p-value is uniform under the null because it computes the cumulative distribution function of the test statistic under the null. While CRTs provide a general method for conditional independence testing, they leave several components, including the choice of test statistic, unspecified. 2.1 CHOOSING THE RIGHT TEST STATISTIC. Imagine a test statistic $t(\cdot) = t(\{x_j^{(i)}, y^{(i)}\}_{i=1}^N)$ that uses only a feature $x_j$ and the outcome $y$. Any p-values computed using this test statistic would be meaningless when testing for conditional independence, as $t$ never considers the remaining features $x_{-j}$. Therefore, particular choices of test statistic limit what can be tested. To address this, we introduce the concept of a proper test statistic. Definition 1. Proper Test Statistic: A test statistic $t(D_N)$ is proper if p-values produced by the statistic converge to 0 when the null must be rejected, and are uniformly distributed otherwise.
Using $t$ in Equation (1), this is:

$$p_j(D_N) \xrightarrow[N \to \infty]{d} \begin{cases} \text{Uniform}(0, 1) & \text{if } x_j \perp y \mid x_{-j} \\ 0 \text{ with probability } 1 & \text{if } x_j \not\perp y \mid x_{-j}, \end{cases} \qquad (2)$$

where $\xrightarrow{d}$ indicates convergence in distribution. Under the alternate hypothesis, which in the case of feature selection is $x_j \not\perp y \mid x_{-j}$, the power to reject the null hypothesis must be 1, implying $p_j \to 0$. A proper test statistic requires that Equation (2) hold for all distributions of $y, x$. Proper test statistics in a CRT select the features in $S$ as the data grows. Definition 1 mirrors the concept of a scoring rule (Gneiting and Raftery, 2007), which measures the calibration of a probabilistic prediction by a model. A proper scoring rule is one such that the highest expected score is obtained by a model that uses the true probability distribution to make predictions. Divergences are proper test statistics. Conditional independence means the conditional distribution $r$ factorizes:

$$r(x_j, y \mid x_{-j}) = r(x_j \mid x_{-j})\, r(y \mid x_{-j}). \qquad (3)$$

Divergences measure the closeness between two distributions. A divergence is zero when the two distributions are the same and positive otherwise. Computing any divergence $K$, like an integral probability metric (Müller, 1997) or an $f$-divergence (Csiszár, 1964), between the left-hand side and right-hand side of Equation (3) would be a proper test statistic. Let $K(a, b) \ge 0$ with equality holding only when $a$ is equal in distribution to $b$; then a proper test statistic is

$$K_j(r) := \mathbb{E}_{r(x_{-j})}\big[K\big(r(x_j, y \mid x_{-j}),\ r(x_j \mid x_{-j})\, r(y \mid x_{-j})\big)\big].$$

A consistent estimator of this quantity is a proper test statistic (see Appendix B.1). Casting conditional independence testing as divergence estimation reduces this test to fitting univariate regressions that can reuse pre-developed model code from the features to the response. Define the resampling distribution $\tilde{q}_j = \tilde{q}_j(\tilde{x}_j \mid x_{-j})\, q(y, x_{-j})$. Using a divergence in a CRT requires estimates of the following conditional distributions: $q(x_j \mid x_{-j})$, $q(y \mid x)$, $\tilde{q}_j(y \mid \tilde{x}_j, x_{-j})$, and $q(y \mid x_{-j})$. The first distribution, $q(x_j \mid x_{-j})$, is required for any CRT. The next distribution, $q(y \mid x)$, corresponds to the standard task of building a good regression model. The third distribution, $\tilde{q}_j(y \mid \tilde{x}_j, x_{-j})$, requires a regression model with corrupted inputs. This regression can reuse the model structure and code from the standard regression task $q(y \mid x)$. However, the last distribution, $q(y \mid x_{-j})$, could require the development of new model structures. For example, if $x$ is an image, a good model for $q(y \mid x)$ could be a convolutional neural network. If the conditioning set $x_{-j}$ is a subregion of that image, the convolutional neural network used for $q(y \mid x)$ would need to be modified for different padding and filter sizes. This means new models could be needed for each $x_{-j}$. In the next section, we show that the KL-divergence removes the need for estimating this distribution, and therefore only requires the piece needed for all CRTs, $q(x_j \mid x_{-j})$, and model code to fit the response from the features.
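As a concrete illustration of the machinery in this section, here is a schematic Monte Carlo CRT in the spirit of Equation (1), with a linear-Gaussian model standing in for $q(x_j \mid x_{-j})$ and a KL-flavoured test statistic built from two Gaussian regressions, one on clean and one on corrupted inputs. Every modelling choice here (linear-Gaussian conditionals, the fixed train/held-out split, the number of null draws) is our simplifying assumption for the sketch; the paper's AMI-CRT allows any probabilistic regression.

```python
import numpy as np

def fit_gaussian(A, y):
    """Linear-Gaussian conditional model: mean coefficients, residual variance."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, max((y - A @ coef).var(), 1e-12)

def heldout_loglik(X, y):
    """Average held-out log-likelihood of a linear-Gaussian q(y | x),
    standing in for the regression terms in the AMI statistic."""
    n = len(y); half = n // 2
    A = np.column_stack([X, np.ones(n)])
    coef, s2 = fit_gaussian(A[:half], y[:half])
    r = y[half:] - A[half:] @ coef
    return -0.5 * np.mean(np.log(2 * np.pi * s2) + r**2 / s2)

def crt_pvalue(X, y, j, n_null=200, seed=0):
    """Monte Carlo CRT p-value for feature j: compare the AMI-style statistic
    on the real data against draws where x_j ~ q(x_j | x_-j)."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([np.delete(X, j, axis=1), np.ones(len(X))])
    coef, s2 = fit_gaussian(A, X[:, j])          # model of q(x_j | x_-j)
    mean, sigma = A @ coef, np.sqrt(s2)

    def t_stat(X_):                               # loglik(clean model) minus
        X_null = X_.copy()                        # loglik(corrupted-input model)
        X_null[:, j] = mean + sigma * rng.normal(size=len(X_))
        return heldout_loglik(X_, y) - heldout_loglik(X_null, y)

    t_obs = t_stat(X)
    X_res = X.copy()
    exceed = 0
    for _ in range(n_null):
        X_res[:, j] = mean + sigma * rng.normal(size=len(X))
        exceed += t_stat(X_res) >= t_obs
    return (1 + exceed) / (1 + n_null)            # finite-sample valid p-value

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 4))
y = 2.0 * X[:, 0] + rng.normal(size=400)          # only feature 0 matters
print([round(crt_pvalue(X, y, j), 3) for j in range(4)])
```

Feature 0 should receive a p-value near $1/(1 + n_{\text{null}})$ while the rest stay roughly uniform. Replacing the Gaussian regressions with any cross-entropy-trained model gives the general AMI-CRT; the FAST-AMI-CRT variant described in Section 1 avoids retraining a model for every null draw.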
This paper addresses supervised feature selection: given a D-dimensional input variable x = (x_1, ..., x_D) and a response variable y, the goal is to find a subset of "useful" features in x. Here, a feature x_j is useful if it is dependent on y even when conditioning on all other input variables (denoted by x_{-j}, which is a set). A generic procedure that can produce a p-value for each feature (allowing one to test whether each feature is useful) is the conditional randomization test (CRT) proposed in Candes et al., 2018. For the CRT to produce a valid p-value for each feature (input dimension) x_j, one needs to specify a test statistic that measures conditional dependence between x_j and y given the rest of the features.
SP:c71c1b8b8e1a7fe2225d8e232288de3c08f0356c
Model-Agnostic Feature Selection with Additional Mutual Information
1 INTRODUCTION . Model interpretation techniques aim to select features important for a response by reducing models ( sometimes locally ) to be human interpretable . However , the phrase model interpretation can be a bit of a misnomer . Any interpretation of a model must be imbued to the model by the population distribution that provides the data to train the model . In this sense , interpreting a model should be viewed as understanding the population distribution of data through the lens of a model . Existing methods for understanding the population distributions only work with particular models fit to the population , particular choices of test statistic , or particular auxiliary models for interpretation ( Ribeiro et al. , 2016 ; Lundberg and Lee , 2017 ) . Such structural restrictions limit the applicability of these methods to a smaller class of population distributions . To be able to work in a black-box manner , feature selection methods can use models but must not require a particular structure in models used in selection processes . Understanding the population distribution can be phrased as assessing whether a response is independent of a feature given the rest of the features ; this test is called a conditional randomization test ( Candes et al. , 2018 ) . Conditional randomization tests require test statistics . Test statistics like linear model coefficients ( Barber et al. , 2015 ) or correlation may miss dependence between the response and outcome . To avoid missing relationships between variables , we develop the notion of a proper test statistic . Proper test statistics are those whose power increases to one as the amount of data increases . Conditional independence implies the conditional-joint factorizes into conditionalmarginals . Measuring the divergence between these distributions yields a proper test statistic . Of the class of integral probability metrics ( Müller , 1997 ) and f -divergences ( Csiszár , 1964 ) , the KLdivergence simplifies estimation and allows for reuse of the model structures and code from the standard task of predicting the response from the features . Using the KL-divergence in this context has a natural interpretation ; it is a measure of the additional information each feature provides about the outcome over the rest . This measure of information is known as the additional mutual information ( AMI ) ( Ranganath and Perotte , 2018 ) . Our proposed procedure is called the additional mutual information conditional randomization test ( AMI-CRT ) . AMI-CRT uses regressions to simulate data from the null for each feature and compares the additional mutual information ( AMI ) of the original data to the AMI of the simulations from the null to assess significance . AMI-CRT works with any regression with probabilistic outputs like cross-entropy-trained neural networks . Training many regressions for each sample from the null can require substantial computation . While this is an embarrassingly parallel computation , we develop FAST-AMI-CRT that only requires a single model trained from null data . FAST-AMI-CRT uses an average of the model from the null with the model from the original data . We show this mixture guards against both variance in model training and poor estimation of the null . Though simple , AMI-CRT outperforms popular procedures for feature importance on a wide variety of simulated data , hospital records , and biological data . Working with data sometimes requires interpreting individual datapoints . 
For example , a doctor may benefit from knowing which features for a particular patient relate to their health . The process of identifying features at a datapoint-level is called instance-wise feature selection ( Ribeiro et al. , 2016 ; Lundberg and Lee , 2017 ; Gimenez and Zou , 2019 ) . We identify an issue in instance-wise feature selection , where even features selected using the true population distribution do not yield the features that were used to generate the response of an instance . The crux of this disparity is that the response generation process , conditional on the features , may use randomness to select features . We provide an example to demonstrate where instance-wise feature selection can go awry . We develop sufficient conditions for instance-wise feature selection to avoid this issue . The same regression estimates from AMI-CRT can be used to estimate feature-importances with minimal computational overhead , resulting in a method we term additional mutual information instance-wise feature selection ( AMIIW ) . We demonstrate AMI-IW on multiple simulations and image data . Across all of these tasks AMI-IW outperforms popular baselines . Related Work . Permutation tests ( Fisher , 1937 ) provide a test for marginal independence between each feature and the outcome . However , they fail to test conditional independence , which is required when covariates are dependent on each other . To address this , solutions like Sure Independence Screening ( Barut et al. , 2016 ; Fan and Lv , 2008 ) and Conditional Randomization Tests ( Barber et al. , 2015 ; Candes et al. , 2018 ) have been proposed . These outline frameworks for conditional independence testing . However , they often make linearity or additive noise assumptions about the data generating distribution . Furthermore , they require the choice of a test statistic to capture some notion of conditional independence . The user of such frameworks is often burdened with the task of choosing this test statistic , which may require strong assumptions about the data generating distribution . Extending this approach to neural networks , Lu et al . ( 2018 ) propose a fully connected network whose weights are used as a test statistic . Though moving beyond linear models , their method is specific to fully connected networks . Tansey et al . ( 2018 ) propose holdout randomization tests ( HRTs ) that use empirical loss as a test-statistic . The loss they use for continuous-valued distributions of response is the mean-squared-error ( MSE ) , which may ignore higher order dependencies between the response and features . Using AMI inside an HRT would capture these higher order dependencies . In our results , we adapt our test-statistic AMI to HRTs and show that this produces better calibration than other choices of loss . HRTs provide computational speed-ups over CRTs . However this speedup comes at the cost of robustness to poor estimations of the null feature distribution . We demonstrate empirically that FAST-AMI-CRT is robust to such poor estimations . Beyond understanding the population distribution , some tasks require interpreting a population distribution on the level of an individual datapoint . Methods that test for conditional independence work under distributional notions of feature selection , but are not designed to identify the relevant features for a particular sample . To address this issue of “ instance-wise feature selection , ” several methods have been proposed , including local perturbations ( Simonyan et al. 
, 2013; Sundararajan et al., 2017; Ribeiro et al., 2016) and fitting simpler auxiliary models to explain the predictions of a large model (Chen et al., 2018; Lundberg and Lee, 2017; Yoon et al., 2019; Turner, 2016; Štrumbelj and Kononenko, 2014; Shrikumar et al., 2017). Our instance-wise work is most similar to that of Burns et al. (2019), who repurpose the HRT framework to perform instance-wise feature selection, or Gimenez and Zou (2019), who define a conditional randomization test (CRT) procedure for subsets of the feature space. In general, however, the conditions under which instance-wise feature selection with predictive models may be possible are not well developed. We address this issue by first identifying a set of sufficient conditions under which instance-wise feature selection is always possible. We then show how estimators used in AMI-CRT can be repurposed for use in an instance-wise setting, yielding a procedure called AMI-IW.

2 PROPER TESTS FOR FEATURE SELECTION. Practitioners of machine learning use feature selection to identify important features for their predictive task. One way to filter out important features is to find those that improve predictions given the rest of the features. This can be formalized through conditional independence. Let $x_j$ be the $j$th feature of $x$ and let $x_{-j}$ be all features but the $j$th one. The goal is to discover a set $S$ such that $\forall x_j \notin S$, $x_j \perp y \mid x_{-j}$, where independence is with respect to the true population distribution $q$. The only knowledge about $q$ comes from a finite set of samples $\mathcal{D}_N := \{(x^{(i)}, y^{(i)})\}_{i=1}^N$ sampled from the population. This means that it is impossible to assess exact conditional independence. Therefore, in the finite sample setting, we must formulate a statistical hypothesis test. A conditional randomization test (CRT) (Candes et al., 2018) defines a hypothesis test for conditional independence. For the $j$th feature, CRTs first compute a test statistic $t$ using the $N$ samples of data $\mathcal{D}_N$. CRTs place this statistic in a null distribution where samples of the $j$th feature $x_j$ are replaced by samples of $\tilde{x}_j \mid x_{-j}$, which by construction satisfy $\tilde{x}_j \perp y \mid x_{-j}$. Letting $\tilde{\mathcal{D}}_{j,N}$ be a dataset where $\{x_j^{(i)}\}_{i=1}^N$ has been replaced by $\{\tilde{x}_j^{(i)}\}_{i=1}^N$, the p-value for a CRT is
$$p_j(\mathcal{D}_N) = \mathbb{E}_{\tilde{x}_j^{(i)} \sim q(x_j \mid x_{-j} = x_{-j}^{(i)}),\ \forall i=1,\dots,N}\left[\mathbb{1}\left(t(\mathcal{D}_N) \le t(\tilde{\mathcal{D}}_{j,N})\right)\right] = \mathbb{E}_{\tilde{x}_j^{(i)} \sim q(x_j \mid x_{-j} = x_{-j}^{(i)}),\ \forall i=1,\dots,N}\left[\mathbb{1}\left(t(\mathcal{D}_N) - t(\tilde{\mathcal{D}}_{j,N}) \le 0\right)\right]. \quad (1)$$
Under smoothness constraints, the p-value is uniform under the null because it computes the cumulative distribution function of the test statistic under the null. While CRTs provide a general method for conditional independence testing, they leave several components, including the choice of test statistic, unspecified.

2.1 CHOOSING THE RIGHT TEST STATISTIC. Imagine a test statistic $t(\cdot) = t(\{x_j^{(i)}, y^{(i)}\}_{i=1}^N)$ that uses only a feature $x_j$ and the outcome $y$. Any p-values computed using this test statistic would be meaningless when testing for conditional independence, as $t$ never considers the remaining features $x_{-j}$. Therefore, particular choices of test statistic limit what can be tested. To address this, we introduce the concept of a proper test statistic. Definition 1. Proper Test Statistic: A test statistic $t(\mathcal{D}_N)$ is proper if p-values produced by the statistic converge to 0 when the null must be rejected, and are uniformly distributed otherwise.
Using $t$ in Equation (1), this is:
$$p_j(\mathcal{D}_N) \xrightarrow[N \to \infty]{d} \begin{cases} \mathrm{Uniform}(0, 1) & \text{if } x_j \perp y \mid x_{-j} \\ 0 \text{ with probability } 1 & \text{if } x_j \not\perp y \mid x_{-j}, \end{cases} \quad (2)$$
where $\xrightarrow{d}$ indicates convergence in distribution. Under the alternate hypothesis, which in the case of feature selection is $x_j \not\perp y \mid x_{-j}$, the power to reject the null hypothesis must be 1, implying $p_j \to 0$. A proper test statistic requires that Equation (2) hold for all distributions of $y, x$. Proper test statistics in a CRT select the features in $S$ as the data grows. Definition 1 mirrors the concept of a scoring rule (Gneiting and Raftery, 2007), which measures the calibration of a probabilistic prediction by a model. A proper scoring rule is one such that the highest expected score is obtained by a model that uses the true probability distribution to make predictions. Divergences are proper test statistics. Conditional independence means the conditional distribution $r$ factorizes:
$$r(x_j, y \mid x_{-j}) = r(x_j \mid x_{-j})\, r(y \mid x_{-j}). \quad (3)$$
Divergences measure the closeness between two distributions. A divergence is zero when the two distributions are the same and positive otherwise. Computing any divergence $K$, like an integral probability metric (Müller, 1997) or an $f$-divergence (Csiszár, 1964), between the left hand side and right hand side of Equation (3) would be a proper test statistic. Let $K(a, b) \ge 0$ with equality holding only when $a$ is equal in distribution to $b$; then a proper test statistic is
$$K_j(r) := \mathbb{E}_{r(x_{-j})}\left[K\left(r(x_j, y \mid x_{-j}),\ r(x_j \mid x_{-j})\, r(y \mid x_{-j})\right)\right].$$
A consistent estimator of this quantity is a proper test statistic (see Appendix B.1). Casting conditional independence testing as divergence estimation reduces this test to fitting univariate regressions that can reuse pre-developed model code from the features to the response. Define the resampling distribution $\tilde{q}_j = \tilde{q}_j(\tilde{x}_j \mid x_{-j})\, q(y, x_{-j})$. Using a divergence in a CRT requires estimates of the following conditional distributions: $q(x_j \mid x_{-j})$, $q(y \mid x)$, $\tilde{q}_j(y \mid \tilde{x}_j, x_{-j})$, and $q(y \mid x_{-j})$. The first distribution, $q(x_j \mid x_{-j})$, is required for any CRT. The next distribution, $q(y \mid x)$, corresponds to the standard task of building a good regression model. The third distribution, $\tilde{q}_j(y \mid \tilde{x}_j, x_{-j})$, requires a regression model with corrupted inputs. This regression can reuse the model structure and code from the standard regression task $q(y \mid x)$. However, the last distribution, $q(y \mid x_{-j})$, could require the development of new model structures. For example, if $x$ is an image, a good model for $q(y \mid x)$ could be a convolutional neural network. If the conditioning set $x_{-j}$ is a subregion of that image, the convolutional neural network used for $q(y \mid x)$ would need to be modified for different padding and filter sizes. This means new models could be needed for each $x_{-j}$. In the next section, we show that the KL-divergence removes the need for estimating this distribution, and therefore only requires the piece needed for all CRTs, $q(x_j \mid x_{-j})$, and model code to fit the response from the features.
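To make the test concrete, the following is a minimal sketch of a CRT with a log-likelihood-based (KL-style) test statistic. It is an illustration under simplifying assumptions rather than the authors' exact AMI-CRT: $q(x_j \mid x_{-j})$ is modeled as Gaussian linear, the response model is a logistic regression evaluated in-sample, and the function name `crt_pvalue` and all hyperparameters are ours.

```python
# Sketch of a CRT p-value (Equation (1)) with a log-likelihood test statistic.
# Simplifying assumptions: Gaussian linear model for q(x_j | x_-j), logistic
# regression for q(y | x), in-sample evaluation; not the authors' exact AMI-CRT.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def crt_pvalue(X, y, j, n_null=100, seed=0):
    rng = np.random.default_rng(seed)
    X_minus_j = np.delete(X, j, axis=1)

    # Estimate q(x_j | x_-j); this piece is needed by every CRT.
    cond = LinearRegression().fit(X_minus_j, X[:, j])
    resid_std = np.std(X[:, j] - cond.predict(X_minus_j))

    def stat(X_cur):
        # KL-style statistic: mean log-likelihood of y under a fitted q(y | x).
        # Assumes y is encoded as integers 0..K-1.
        clf = LogisticRegression(max_iter=1000).fit(X_cur, y)
        proba = clf.predict_proba(X_cur)[np.arange(len(y)), y]
        return np.mean(np.log(proba + 1e-12))

    t_obs = stat(X)
    t_null = np.empty(n_null)
    for k in range(n_null):
        X_tilde = X.copy()
        # Resample x_j from the estimated conditional: the null by construction.
        X_tilde[:, j] = cond.predict(X_minus_j) + resid_std * rng.standard_normal(len(y))
        t_null[k] = stat(X_tilde)

    # Fraction of null statistics at least as large as the observed one.
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_null)
```

Because the resampled $\tilde{x}_j$ carries no information about $y$ beyond $x_{-j}$, a large gap between the observed statistic and the null statistics is evidence against conditional independence; the embarrassingly parallel loop over null draws is the cost that FAST-AMI-CRT is designed to avoid.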
This paper presents a method to provide some level of interpretation of the influence of input features on the response of a machine learning model, all the way down to the instance level. The proposed method is model-agnostic. Quoting the authors, they advocate for methods that look at interpretability “as understanding the population distribution through the lens of the model” without restriction on the models fit. The problem is posed as a hypothesis testing problem. The paper proposes “proper test statistics” for model-agnostic feature selection. It is argued that f-divergences yield proper test statistics, with the KL-divergence being particularly interesting as it provides computational advantages.
SP:c71c1b8b8e1a7fe2225d8e232288de3c08f0356c
Estimating counterfactual treatment outcomes over time through adversarially balanced representations
1 INTRODUCTION. As clinical decision-makers are often faced with the problem of choosing between treatment alternatives for patients, reliably estimating their effects is paramount. While clinical trials represent the gold standard for causal inference, they are expensive and have few patients and narrow inclusion criteria (Booth & Tannock, 2014). Leveraging the increasingly available observational data about patients, such as electronic health records, represents a more viable alternative for estimating treatment effects. A large number of methods have been proposed for performing causal inference using observational data in the static setting (Johansson et al., 2016; Shalit et al., 2017; Alaa & van der Schaar, 2017; Li & Fu, 2017; Yoon et al., 2018; Alaa & van der Schaar, 2018; Yao et al., 2018) and only a few methods address the longitudinal setting (Xu et al., 2016; Roy et al., 2016; Soleimani et al., 2017; Schulam & Saria, 2017; Lim et al., 2018). However, estimating the effects of treatments over time poses unique opportunities, such as understanding how diseases evolve under different treatment plans, how individual patients respond to medication over time, and which timings are optimal for assigning treatments, thus providing new tools to improve clinical decision support systems. The biggest challenge when estimating the effects of time-dependent treatments from observational data involves correctly handling the time-dependent confounders: patient covariates that are affected by past treatments which then influence future treatments and outcomes (Platt et al., 2009). For instance, consider that treatment A is given when a certain patient covariate (e.g. white blood cell count) has been outside of normal range values for several consecutive timesteps. Suppose also that this patient covariate was itself affected by the past administration of treatment B. If these patients are more likely to die, then without adjusting for the time-dependent confounding (e.g. the changes in the white blood cell count over time), we will incorrectly conclude that treatment A is harmful to patients. Moreover, estimating the effect of a different sequence of treatments on the patient outcome would require adjusting not only for the bias at the current step (in treatment A), but also for the bias introduced by the previous application of treatment B. Existing methods for causal inference in the static setting cannot be applied in this longitudinal setting since they are designed to handle the cross-sectional set-up, where the treatment and outcome depend only on a static value of the patient covariates. If we consider again the above example, these methods would not be able to model how the changes in patient covariates over time affect the assignment of treatments, and they would also not be able to estimate the effect of a sequence of treatments on the patient outcome (e.g. sequential application of treatment A followed by treatment B). Different models that can handle these temporal dependencies in the observational data and varying-length patient histories are needed for estimating treatment effects over time. Time-dependent confounders are present in observational data because doctors follow policies: the history of the patients’ covariates and the patients’ response to past treatments are used to decide future treatments (Mansournia et al., 2012).
The direct use of supervised learning methods will be biased by the treatment policies present in the observational data and will not be able to correctly estimate counterfactuals for different treatment assignment policies. Standard methods for adjusting for time-varying confounding and estimating the effects of time-varying exposures are based on ideas from epidemiology. The most widely used among these are Marginal Structural Models (MSMs) (Robins et al., 2000; Mansournia et al., 2012), which use the inverse probability of treatment weighting (IPTW) to adjust for the time-dependent confounding bias. Through IPTW, MSMs create a pseudo-population where the probability of treatment does not depend on the time-varying confounders. However, MSMs are not robust to model misspecification in computing the IPTWs. MSMs can also give high-variance estimates due to extreme weights; computing the IPTW involves dividing by the probability of assigning a treatment conditional on patient history, which can be numerically unstable if the probability is small. We introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence architecture for estimating treatment effects over time. CRN leverages recent advances in representation learning (Bengio et al., 2012) and domain adversarial training (Ganin et al., 2016) to overcome the problems of existing methods for causal inference over time. Our main contributions are as follows. Treatment invariant representations over time. CRN constructs treatment invariant representations at each timestep in order to break the association between patient history and treatment assignment and thus removes the bias from time-dependent confounders. For this, CRN uses domain adversarial training (Ganin et al., 2016; Li et al., 2018; Sebag et al., 2019) to trade off between building this balancing representation and predicting patient outcomes. We show that these representations remove the bias from time-varying confounders and can be reliably used for estimating counterfactual outcomes. This represents the first work that introduces ideas from domain adaptation to the area of estimating treatment effects over time. In addition, by building balancing representations, we propose a novel way of removing the bias introduced by time-varying confounders. Counterfactual estimation of future outcomes. To estimate counterfactual outcomes for treatment plans (and not just single treatments), we integrate the domain adversarial training procedure into a sequence-to-sequence architecture. CRN consists of an encoder network which builds treatment invariant representations of the patient history that are used to initialize the decoder. The decoder network estimates outcomes under an intended sequence of future treatments, while also updating the balanced representation. By performing counterfactual estimation of future treatment outcomes, CRN can be used to answer critical medical questions such as deciding when to give treatments to patients, when to start and stop treatment regimes, and how to select among multiple treatments over time. We illustrate in Figure 1 the applicability of our method in choosing optimal cancer treatments. In our experiments, we evaluate CRN in a realistic set-up using a model of tumour growth (Geng et al., 2017).
We show that CRN achieves better performance than current state-of-the-art methods, both in predicting counterfactual outcomes and in choosing the right treatment and timing of treatment. 2 RELATED WORK. We focus on methods for estimating treatment effects over time and for building balancing representations for causal inference. A more in-depth review of related work is in Appendix A. Treatment effects over time. Standard methods for estimating the effects of time-varying exposures were first developed in the epidemiology literature and include the g-computation formula, Structural Nested Models and Marginal Structural Models (MSMs) (Robins, 1986; 1994; Robins et al., 2000; Robins & Hernán, 2008). Originally, these methods used predictors performing logistic/linear regression, which makes them unsuitable for handling complex time-dependencies (Hernán et al., 2001; Mansournia et al., 2012; Mortimer et al., 2005). To address these limitations, methods that use Bayesian non-parametrics or recurrent neural networks as part of these frameworks have been proposed (Xu et al., 2016; Roy et al., 2016; Lim et al., 2018). To begin with, Xu et al. (2016) use Gaussian processes to model discrete patient outcomes as a generalized mixed-effects model and use the g-computation method to handle time-varying confounders. Soleimani et al. (2017) extend the approach in Xu et al. (2016) to the continuous-time setting and model treatment responses using linear time-invariant dynamical systems. Roy et al. (2016) use Dirichlet and Gaussian processes to model the observational data and estimate the IPTW in Marginal Structural Models. Schulam & Saria (2017) build upon work from Lok et al. (2008); Arjas & Parner (2004) and use marked point processes and Gaussian processes to learn causal effects in continuous-time data. These Bayesian non-parametric methods make strong assumptions about model structure and consequently cannot handle well heterogeneous treatment effects arising from baseline variables (Soleimani et al., 2017; Schulam & Saria, 2017) and multiple treatment outcomes (Xu et al., 2016; Schulam & Saria, 2017). The work most related to ours is that of Lim et al. (2018), which improves on the standard MSMs by using recurrent neural networks to estimate the inverse probability of treatment weights (IPTWs). Lim et al. (2018) introduce Recurrent Marginal Structural Networks (RMSNs), which also use a sequence-to-sequence deep learning architecture to forecast treatment responses in a similar fashion to our model. However, RMSNs require training additional RNNs to estimate the propensity weights and do not overcome the fundamental problems with IPTWs, such as the high variance of the weights. Conversely, CRN takes advantage of recent advances in machine learning, in particular representation learning, to propose a novel way of handling time-varying confounders. Balancing representations for treatment effect estimation. Balancing the distributions of control and treated groups has been used for counterfactual estimation in the static setting. The methods proposed in the static setting for balancing representations are based on using discrepancy measures in the representation space between treated and untreated patients, which do not generalize to multiple treatments (Johansson et al., 2016; Shalit et al., 2017; Li & Fu, 2017; Yao et al., 2018).
Moreover, due to the sequential assignment of treatments in the longitudinal setting, and due to the change of patient covariates over time according to previous treatments, the methods for the static setting are not directly applicable to the time-varying setting (Hernán et al., 2000; Mansournia et al., 2012). 3 PROBLEM FORMULATION. Consider an observational dataset $\mathcal{D} = \{\{x_t^{(i)}, a_t^{(i)}, y_{t+1}^{(i)}\}_{t=1}^{T^{(i)}} \cup \{v^{(i)}\}\}_{i=1}^N$ consisting of information about $N$ independent patients. For each patient $(i)$, we observe time-dependent covariates $X_t^{(i)} \in \mathcal{X}_t$, treatment received $A_t^{(i)} \in \{A_1, \dots, A_K\} = \mathcal{A}$ and outcomes $Y_{t+1}^{(i)} \in \mathcal{Y}_{t+1}$ for $T^{(i)}$ discrete timesteps. The patient can also have baseline covariates $V^{(i)} \in \mathcal{V}$ such as gender and genetic information. Note that the outcome $Y_{t+1}^{(i)}$ will be part of the observed covariates $X_{t+1}^{(i)}$. For simplicity, the patient superscript $(i)$ will be omitted unless explicitly needed. We adopt the potential outcomes framework proposed by Neyman (1923) and Rubin (1978) and extended by Robins & Hernán (2008) to account for time-varying treatments. Let $Y[\bar{a}]$ be the potential outcomes, either factual or counterfactual, for each possible course of treatment $\bar{a}$. Let $\bar{H}_t = (\bar{X}_t, \bar{A}_{t-1}, V)$ represent the history of the patient covariates $\bar{X}_t = (X_1, \dots, X_t)$, treatment assignments $\bar{A}_t = (A_1, \dots, A_t)$ and static features $V$. We want to estimate:
$$\mathbb{E}\left(Y_{t+\tau}[\bar{a}(t, t+\tau-1)] \mid \bar{H}_t\right), \quad (1)$$
where $\bar{a}(t, t+\tau-1) = [a_t, \dots, a_{t+\tau-1}]$ represents a possible sequence of treatments from timestep $t$ until just before the potential outcome $Y_{t+\tau}$ is observed. We make the standard assumptions (Robins et al., 2000; Lim et al., 2018) needed to identify the treatment effects: consistency, positivity and no hidden confounders (sequential strong ignorability). See Appendix B for more details.
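As a concrete illustration of the balancing idea, here is a minimal PyTorch sketch of domain adversarial training with a gradient reversal layer in the style of Ganin et al. (2016). The layer sizes, heads, and loss weighting are illustrative placeholders of our own choosing, not the actual CRN encoder-decoder architecture.

```python
# Gradient-reversal sketch for building treatment-invariant representations.
# Illustrative only: sizes, heads, and losses are placeholders, not CRN itself.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient so the encoder unlearns treatment information.
        return -ctx.lam * grad_output, None

encoder = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
outcome_head = nn.Linear(32, 1)    # predicts the next outcome y_{t+1}
treatment_head = nn.Linear(32, 4)  # predicts current treatment a_t (K = 4 here)

def training_loss(history, y_next, a_t, lam=1.0):
    _, h = encoder(history)        # h summarizes the patient history H_t
    phi = h[-1]                    # balancing representation
    loss_y = nn.functional.mse_loss(outcome_head(phi).squeeze(-1), y_next)
    # Adversarial branch: the treatment classifier sees reversed gradients,
    # pushing phi towards being non-predictive of the assigned treatment.
    loss_a = nn.functional.cross_entropy(
        treatment_head(GradReverse.apply(phi, lam)), a_t)
    return loss_y + loss_a
```

The outcome loss pulls the representation towards predictive accuracy while the reversed treatment loss pushes it away from encoding the treatment policy, which is the trade-off described above.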
The paper introduces the Counterfactual Recurrent Network (CRN), which is able to estimate the effects of various treatments from longitudinal data. The claim is that the model can decide (i) the treatment plan; (ii) the optimal time of treatment; and (iii) when to stop treatment. The proposed method attempts to learn treatment-invariant representations that are not predictive of the next treatment by borrowing ideas from Ganin et al. (2016)’s work on domain adversarial training. In fact, this paper is an extension of (Atan et al., 2018) to be applicable to longitudinal data.
SP:e1844adc4921e5ff485d07de51ca078b21321390
This work addresses the problem of causal inference in time-dependent treatment regimes. To address the problem, the authors propose an extension of the balancing-representation framework for causal inference that seeks to render the current treatment independent of a representation of the history of treatments and confounders. This is sensibly actualized within an RNN. The authors provide empirical results that demonstrate the proposed method performing very well in comparison to prior art.
SP:e1844adc4921e5ff485d07de51ca078b21321390
Confidence Scores Make Instance-dependent Label-noise Learning Possible
1 INTRODUCTION. The recent success of deep neural networks has increased the need for high-quality labeled data. However, such a labelling process can be time-consuming and costly. A compromise is to resort to weakly-supervised annotations, using crowdsourcing platforms or trained classifiers that annotate the data automatically. These weakly-supervised annotations tend to be low-quality and noisy, which negatively affects the accuracy of high-capacity models due to memorization effects (Zhang et al., 2017). Thus, learning with noisy labels has drawn a lot of attention. Early works on noisy labels studied random classification noise (RCN) for binary classification (Angluin & Laird, 1988; Kearns, 1993). In the RCN model, each instance has its label flipped with a fixed noise rate $\rho \in [0, \frac{1}{2})$. A natural extension of RCN is class-conditional noise (CCN) for multiclass classification (Stempfel & Ralaivola, 2009; Natarajan et al., 2013; Scott et al., 2013; Menon et al., 2015; van Rooyen & Williamson, 2015; Patrini et al., 2016) (Appendix A). In the CCN model, each instance from class $i$ has a fixed probability $\rho_{i,j}$ of being assigned to class $j$. Thus, it is possible to encode some similarity information between classes. For example, we can expect that the image of a “dog” is more likely to be erroneously labelled as “cat” than “boat”. To handle the CCN model, a common method is loss correction, which aims to correct the prediction or the loss of the classifier using an estimated noise transition matrix (Patrini et al., 2017; Sukhbaatar et al., 2015; Goldberger & Ben-Reuven, 2017; Ma et al., 2018). Another common approach is label correction, which aims to improve the label quality during training. For example, Reed et al. (2015) introduced a bootstrapping scheme. Similarly, Tanaka et al. (2018) proposed to update the weights of a classifier iteratively using noisy labels, and use the updated classifier to yield more high-quality pseudo-labels for the training set. Although these methods have theoretical guarantees, they are unable to cope with real-world noise, e.g., instance-dependent noise (IDN). The IDN model considers a more general noise (Manwani & Sastry, 2013; Ghosh et al., 2014; Menon et al., 2016; Cheng et al., 2017; Menon et al., 2018), where the probability that an instance is mislabeled depends on both its class and its features. Intuitively, this noise is quite realistic, as poor-quality or ambiguous instances are more likely to be mislabeled in real-world datasets. However, it is much more complex to formulate the IDN model, since the probability of a mislabeled instance is a function not only of the label space but also of the input space, which can be very high dimensional. As a result, several pioneer works have considered stronger assumptions on noise functions. However, stronger assumptions tend to restrict the utility of these works (Table 1). For instance, the boundary-consistent noise model considers stronger noise for samples closer to the decision boundary of the Bayesian optimal classifier (Du & Cai, 2015; Menon et al., 2018). However, such a model is restricted to binary classification and cannot estimate noise functions. Cheng et al. (2017) recently studied a particular case of the IDN model, where noise functions are upper-bounded. Nonetheless, their method is limited to binary classification and has only been tested on small datasets.
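To fix intuition before the formal definitions in Section 2.1 below, here is a toy contrast between CCN and IDN noise generation. The construction (feature dimension, transition matrix values, and the choice of the feature norm as a difficulty proxy) is our own illustration and an assumption, not an experiment from the paper.

```python
# Toy contrast between class-conditional (CCN) and instance-dependent (IDN)
# label noise; the specific numbers and the difficulty proxy are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, K = 1000, 3
X = rng.normal(size=(n, 2))
y = rng.integers(0, K, size=n)

# CCN: one fixed row-stochastic transition matrix, independent of x.
T = np.full((K, K), 0.1)
np.fill_diagonal(T, 0.8)
y_ccn = np.array([rng.choice(K, p=T[yi]) for yi in y])

# IDN: the flip probability also depends on x (here, a normalized feature
# norm stands in for instance "difficulty").
difficulty = np.linalg.norm(X, axis=1)
flip_prob = 0.4 * difficulty / difficulty.max()
flip = rng.random(n) < flip_prob
y_idn = np.where(flip, rng.integers(0, K, size=n), y)  # may flip to any class
```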
Instead of simplifying assumptions on noise functions, we propose to tackle the IDN model from the source, by considering confidence scores to be available for the label of each instance. We term this new setting confidence-scored instance-dependent noise (CSIDN, Figure 1c). The confidence scores denote how likely an instance is to be correctly labeled. Assuming that (i) confidence scores are available for each instance, (ii) transition probabilities to other classes are independent of the instance conditionally on the assigned label being erroneous, and (iii) a set of anchor points is available, we derive an instance-level forward correction algorithm which can fully estimate the transition probability for each instance, and subsequently train a robust classifier with a loss-correction method similar to Patrini et al. (2017). Note that confidence scores can be easily and cheaply derived during the construction of the dataset. For example, in crowdsourcing platforms, simply counting how many annotators agree on a given instance can give a notion of how confident a label is. Besides, many real-world datasets are automatically annotated using a trained classifier, such as web-scraped datasets (Tong Xiao et al., 2015) and physiological features inferred from medical records (Agarwal et al., 2016). In these cases, the class-probabilities of the labels assigned by the classifier can be seen as confidence scores, provided that the classifier is well calibrated (Guo et al., 2017). To sum up, we first formulate instance-dependent noise in Section 2.1, and expose its robustness challenge in Section 2.2. Then, we explain our motivation to use confidence scores, and propose the confidence-scored instance-dependent noise (CSIDN) model in Section 2.3. Lastly, to handle this new noise model, we present the first practical algorithm, termed instance-level forward correction, in Section 3, and validate the proposed algorithm through extensive experiments in Section 4. 2 TACKLING INSTANCE-DEPENDENT NOISE FROM THE SOURCE. In this section, we present the IDN model along with the limitations of existing approaches, and introduce the CSIDN model as a tractable instance-dependent noise model. 2.1 NOISE MODELS: FROM CLASS-CONDITIONAL TO INSTANCE-DEPENDENT NOISE. We formulate the problem of learning with noisy labels in this section. Let $D$ be the distribution of a pair of random variables $(X, Y) \in \mathcal{X} \times \mathcal{Y}$, where $X \in \mathbb{R}^d$, $\mathcal{Y} = \{1, 2, \dots, K\}$ and $K$ is the number of classes. In the classification task with noisy labels, we hope to train a classifier while having only access to samples from a noisy distribution $\bar{D}$ of random variables $(X, \bar{Y}) \in \mathcal{X} \times \mathcal{Y}$. Given a point $x$ sampled from $X$, $\bar{Y}$ is derived from the random variable $Y$ via a noise transition matrix $T(x) = (T_{i,j}(x))_{i,j=1}^K \in [0, 1]^{K \times K}$:
$$\forall 1 \le j \le K, \quad P(\bar{Y} = j \mid X = x) = \sum_{i=1}^K T_{i,j}(x)\, P(Y = i \mid X = x). \quad (1)$$
Each noise function $T_{i,j} : \mathcal{X} \mapsto [0, 1]$ is defined as $T_{i,j}(x) = P(\bar{Y} = j \mid Y = i, X = x)$. In the class-conditional noise (CCN) model (Figure 1a), the transition matrix does not depend on the instance $x$ and the noise is entirely characterized by the $K^2$ constants $T_{i,j}$. However, in the instance-dependent noise (IDN) model (Figure 1b), the transition matrix depends on the actual instance.
This tremendously complicates the problem, as the noise is now characterized by $K^2$ functions over the input space $\mathcal{X}$, which can be very high dimensional (e.g., $d \sim 10^4$–$10^6$ for an object recognition dataset). 2.2 CHALLENGES FROM INSTANCE-DEPENDENT NOISE. Limitation of existing CCN methods. Due to the complexity of the IDN model, most recent works in learning with noisy labels have focused on the CCN model (Figure 1a), and the CCN model can be seen as a simplified IDN model (Figure 1b) free of feature information. In addition to the loss correction and label correction mentioned before, another method for the CCN model is sample selection, which aims to find reliable samples during training, such as the small-loss approaches (Jiang et al., 2018; Han et al., 2018). Inspired by memorization in deep learning (Arpit et al., 2017), those methods first run a standard classifier on a noisy dataset, then select the small-loss samples for reliable training. However, none of these approaches can handle the IDN model directly. Specifically, loss correction considers the noise model to be characterized by a fixed transition matrix, which does not include any instance-level information. Meanwhile, label correction is vulnerable to the IDN model, since the classifier will be much weaker on noisy regions and labels corrected by the current prediction would likely be erroneous. Similarly, sample selection is easily affected by the IDN model. For example, in the small-loss approaches, instance-dependent noise functions can leave partial regions of the input space clean and other regions very noisy (e.g., in an object recognition dataset, poor-quality pictures will tend to receive more noisy labels than high-quality ones). Since clean regions will tend to receive smaller losses than noisy regions, the small-loss approaches, which only train on the points with the smallest losses, will focus on clean regions and neglect harder noisy regions. Then, since the distribution of clean regions will subsequently be different from the global distribution, this will introduce a covariate shift (Shimodaira, 2000), which greatly degrades performance. Moreover, it is hard to use importance reweighting (Sugiyama et al., 2007) to alleviate the issue, since importance reweighting would require estimating the clean posterior probability, which is intractable for the IDN model. To validate this fact, we generate a 3-class distribution of concentric circles (cf. Figure 2a), with $\forall (x, y) \in \mathbb{R}^2 \times \{1, 2, 3\}$, $P(\bar{y} \neq y \mid x) = \frac{1}{2}\left(\frac{w \cdot x}{\|w\| \|x\|} + 1\right)$ with $w = (0, 1)$ (cf. Figure 2b). We then train a network on the top $R(T)$ small-loss instances at each epoch $T$ based on the losses of the previous epoch, with $R(T)$ decreasing in $T$ as described in Han et al. (2018). Figure 2c shows the density of the top 50% small-loss instances selected after 10 epochs: since noisy regions are associated with higher losses, the network eventually tends to select instances from the clean region and neglect the noisy region. This leads to covariate shift, which is associated with decreased performance (Shimodaira, 2000). Limitation of pioneer IDN methods. The main challenge of the IDN model is the wide range of possible noise functions included in its formulation.
Since each $T_{i,j}(\cdot)$ is a function of the high-dimensional input space $\mathcal{X}$, it is challenging for a model to be flexible enough to fit any real-world noise function while being trainable on corrupted datasets, let alone derive theoretical results. Instead, various recent works have considered stronger assumptions on noise functions. For instance, boundary-consistent noise (BCN), first introduced by Du & Cai (2015) and generalized in Menon et al. (2018), considers stronger noise for samples closer to the decision boundary of the Bayesian optimal classifier. This is a reasonable model for noise from human annotators, since “harder” instances (i.e., instances closer to the decision boundary) are more likely to be corrupted. Moreover, it is simple enough to derive some theoretical guarantees, as done in Menon et al. (2018). Additionally, an extension of the BCN model was studied in Bootkrajang & Chaijaruwanich (2018), where the noise function is a Gaussian mixture of the distance to the Bayesian optimal boundary. However, the BCN model and its extension are restricted to binary classification, and their geometry-based assumption becomes difficult to fathom for high-dimensional input spaces. Furthermore, Cheng et al. (2017) recently studied a particular case of the IDN model, where the probabilities that the true labels of samples flip into corrupted ones have upper bounds. They proposed a method based on distilled samples, whose noisy labels agree with the optimal Bayesian classifier on the clean distribution. However, their method is limited to binary classification and has only been tested on small UCI datasets. Table 1 summarizes the characteristics of those approaches.
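The synthetic experiment of Section 2.2 can be reproduced in a few lines. The sketch below generates the concentric circles with the angle-dependent noise rate $P(\bar{y} \neq y \mid x) = \frac{1}{2}(\frac{w \cdot x}{\|w\|\|x\|} + 1)$ and applies a small-loss filter; the data sizes and the selection helper are our simplifications of the paper's setup, which trains a network with a decreasing schedule $R(T)$.

```python
# Concentric circles with angle-dependent label noise, plus small-loss
# selection; sizes and the selection helper simplify the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n = 3000
theta = rng.uniform(0, 2 * np.pi, n)
ring = rng.integers(1, 4, n)                  # ring index 1, 2, 3 -> class
X = np.stack([ring * np.cos(theta), ring * np.sin(theta)], axis=1)
X += 0.1 * rng.normal(size=X.shape)
y = ring - 1

# Noise rate from the paper: higher in the upper half-plane (w = (0, 1)).
w = np.array([0.0, 1.0])
rho = 0.5 * (X @ w / (np.linalg.norm(w) * np.linalg.norm(X, axis=1)) + 1)
flip = rng.random(n) < rho
y_bar = np.where(flip, (y + rng.integers(1, 3, n)) % 3, y)  # flip to another class

def select_small_loss(losses, R=0.5):
    # Keep the fraction R of samples with the smallest loss. Under IDN these
    # concentrate in the clean (lower) region, inducing covariate shift.
    k = int(R * len(losses))
    return np.argsort(losses)[:k]
```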
This paper focuses on the instance-dependent label noise problem, which is a new and important area in learning with noisy labels. The authors propose confidence-scored instance-dependent noise (CSIDN) to overcome strong assumptions on noise models. They clearly define confidence scores and justify their availability. To solve the CSIDN model, they propose instance-level forward correction with theoretical guarantees. Their experiments on both synthetic and real-world datasets show the advantage of this algorithm.
SP:b258e7150a73085f7bf32c927126a3601210bcec
Learning with noisy labels is currently a popular topic, since deep learning algorithms often require large-scale supervised training samples and labelling a large amount of data is costly. However, almost all of the existing methods assume that the label noise is instance-independent: it either depends on the clean classes or is completely random. This paper studies instance-dependent label noise, which is more realistic and applicable but difficult to address. The authors aim to solve this problem; a feasible solution would contribute a lot to the community.
SP:b258e7150a73085f7bf32c927126a3601210bcec
Topic Models with Survival Supervision: Archetypal Analysis and Neural Approaches
1 INTRODUCTION . Predicting time-to-event outcomes arises in a variety of applications . For example , in healthcare , we may be interested in predicting how much time a patient has to live . In criminology , we may be interested in predicting when a convicted criminal might reoffend . In e-commerce and on streaming platforms , companies with subscription services like Amazon and Netflix may be interested in predicting when users might cancel their subscriptions . In many such applications , we can now collect an enormous number of measurements per person/subject . However , how all of these measurements relate is typically unknown . In this paper , we aim to address the twin objectives of learning how measurements relate in the form of a topic model , and learning how topics can assist in predicting time-to-event outcomes via a survival analysis model . For ease of exposition , we phrase the problem we consider in the classical survival analysis context of predicting time until death . We assume that we have access to a training dataset of n subjects . For each subject , we know how many times each of d “ words ” appears , where the dictionary of words is pre-specified . As an example , in a clinical context , one word might correspond to “ low blood pressure reading ” ; for a given subject , we can count how many such readings the subject has had recorded in the past . We denote Xi , u to be the number of times word u ∈ { 1 , . . . , d } appears for subject i ∈ { 1 , . . . , n } . Viewing X as an n-by-d matrix , the i-th row of X can be thought of as the feature vector for the i-th subject . As for the training label for the i-th subject , we have two recordings : event indicator δi ∈ { 0 , 1 } specifies whether the i-th subject died , and observed time Yi ∈ R+ is the i-th subject ’ s “ survival time ” ( time until death ) if δi = 1 or the “ censoring time ” if δi = 0 . The censoring time gives a lower bound on the survival time for the i-th subject . For example , when we stop collecting data , some subjects will still be alive , so we know they live at least as long as when we stopped collecting training data . Our goal is to discover topics for the d words that help predict survival times of unseen test subjects . Note that an unsupervised topic model like latent Dirichlet allocation ( LDA ) ( Blei et al. , 2013 ) would not use any of the training labels ( the event indicators δi ’ s and observed times Yi ’ s ) , learning topics using only the word counts matrix X . Meanwhile , in survival analysis , a standard approach would involve learning a survival model using all the patients ’ feature vectors and labels but the model would not learn thematic structure in the different features , e.g. , topics . Jointly learning both a topic model and a survival model was first done by Dawson & Kendziorski ( 2012 ) , who combined LDA with a Cox proportional hazards model ( Cox , 1972 ) . Using LDA with r topics , Dawson and Kendziorski represent the i-th subject as a probability vector Wi ∈ [ 0 , 1 ] r specifying the subject ’ s membership in each of the r topics ; then Wi ’ s are treated as the input covariates to the Cox model . Dawson and Kendziorski called this joint model SURVLDA and derived a variational EM algorithm to estimate its parameters . In this paper , we build on SURVLDA by proposing two new survival-supervised topic modeling approaches , both of which allow for either the topic or the survival model to be replaced . 
Our contributions are as follows : • ( Section 3 ) We show how to take a discriminative approach to jointly learning topic and survival models , where topics are estimated via archetypal analysis ( Cutler & Breiman , 1994 ; Javadi & Montanari , 2019 ) . Archetypal analysis represents each subject as a convex combination of “ archetypes ” , which are optimized to be diverse yet still close to the convex hull of the subjects ’ feature vectors . Applied to topic modeling , the archetypes are the topics , with each archetype specifying a particular topic ’ s word distribution . Archetypal analysis does not assume a parametric model and can learn a wide class of topic models . We describe how to combine archetypal analysis with any survival analysis model for which we can take a specific partial derivative . • ( Section 4 ) We approximate Dawson and Kendziorski ’ s SURVLDA model in a neural net framework , which allows for different choices of topic and survival models to be combined . This approach requires that the topic and survival models already have neural net approximations or formulations . For example , LDA and some variants of it can already be approximated using variational autoencoders ( Srivastava & Sutton , 2017 ; Card et al. , 2018 ) . In particular , Card et al . ( 2018 ) show how to approximate supervised LDA ( McAuliffe & Blei , 2008 ) in a neural net framework that they call SCHOLAR ; they specifically consider classification as the supervised task , although they mention that their framework could be used to predict other real-valued outputs . We specifically combine their approach with that of Katzman et al . ( 2018 ) to handle survival supervision . • ( Section 5 ) We apply our two proposed approaches to two survival analysis datasets ( predicting how long pancreatitis patients stay in an intensive care unit , and time until death for breast cancer subjects ) , comparing against a number of classical and recently developed deep survival analysis baselines . Survival-supervised topic models have time-to-event prediction accuracy that is competitive with top-performing existing baselines while producing clinically interpretable topics . 2 BACKGROUND . We begin with some background on archetypal analysis , topic modeling , and survival analysis . Along the way , we introduce notation that recurs throughout the paper . As a reminder , we assume that we have access to training data ( X1 , Y1 , δ1 ) , ( X2 , Y2 , δ2 ) , . . . , ( Xn , Yn , δn ) , where the i-th training subject has feature vector Xi ∈ Rd , observed time Yi ∈ R+ , and event indicator δi ∈ { 0 , 1 } . Throughout this paper , we generally take Xi,u ( for u ∈ { 1 , 2 , . . . , d } ) to be the number of times word u appears , for some user-specified dictionary of d words . We then normalize so that Xi,u denotes the fraction of times word u appears for subject i , i.e. , we replace $X_{i,u}$ with $X_{i,u} \big/ \sum_{v=1}^{d} X_{i,v}$ . Note that X is an n-by-d matrix , and we use Xi to denote the i-th row of X . We use this indexing notation for other matrices as well . 2.1 ARCHETYPAL ANALYSIS AND TOPIC MODELING . Archetypal analysis ( Cutler & Breiman , 1994 ; Javadi & Montanari , 2019 ) posits that each training vector Xi can be well-approximated by a convex combination of r different unknown “ archetypes ” H1 , H2 , . . . , Hr ∈ Rd : $X_i \approx \sum_{g=1}^{r} W_{i,g} H_g$ ( 2.1 ) for some weights Wi,1 , . . . , Wi,r ∈ [ 0 , 1 ] that sum to 1 , i.e. , the vector Wi := ( Wi,1 , . . .
, Wi,r ) resides in the probability simplex $\Delta^r := \{ w \in [0, 1]^r : \sum_{g=1}^{r} w_g = 1 \}$ . By stacking the archetypes H1 , . . . , Hr as rows to form the matrix H , equation ( 2.1 ) can be expressed as X ≈ WH . Archetypal analysis aims to estimate W and H given X . If the archetypes H1 , . . . , Hr are constrained to be in the probability simplex ∆d , then we get a topic model , and each archetype corresponds to a word distribution . For example , if rows of W are generated i.i.d . from a Dirichlet distribution , and rows of H are generated i.i.d . from another Dirichlet distribution , then we get LDA ( Blei et al. , 2013 ) . As a slight modification of this setup , if the rows of W are instead generated from a logistic normal distribution that allows correlation between topics , we get the correlated topic model ( Lafferty & Blei , 2006 ) . For an example that is not generative , if the archetypes are on a probability simplex , and for each archetype g ∈ { 1 , . . . , r } there exists a word w that only appears in archetype g ( i.e. , Hg,w > 0 and Hh,w = 0 for all h ≠ g ) , then we have a topic model satisfying the separability or “ anchor word ” assumption ( Donoho & Stodden , 2004 ; Arora et al. , 2012a ; b ; 2013 ) . Archetypal analysis optimizes over matrices W and H that include all of the aforementioned topic models as special cases . In fact , archetypal analysis does not require that archetypes be on a probability simplex or that they be nonnegative ; the input matrix X need not consist of word frequencies and could contain positive or negative real-valued measurements . Crucially , the error in approximation ( 2.1 ) should be small ; precise details , including identifiability and degeneracy issues , can be found in Section 3 of Javadi & Montanari ( 2019 ) . To estimate weights W and archetypes H , Javadi and Montanari proposed the following approach . First , for a point u ∈ Rd and a matrix V ∈ Rm×d , we define the distance from u to the convex hull of the rows of V as $D(u, V) := \min_{w \in \Delta^m} \| u - V^\top w \|_2$ . The vector w ∈ ∆m that achieves the minimum consists of the convex combination weights that best combine rows of V to approximate the point u . Then Javadi and Montanari ( approximately ) minimize the nonconvex loss $L_{\text{arch}}(W, H; \lambda) := \underbrace{\sum_{i=1}^{n} \| X_i - H^\top W_i \|_2^2}_{\spadesuit} + \lambda \underbrace{\sum_{g=1}^{r} D^2(H_g, X)}_{\heartsuit}$ ( 2.2 ) subject to the constraint that Wi ∈ ∆r for i = 1 , . . . , n ; the constant λ ≥ 0 is a user-specified regularization parameter . Minimizing term ♠ ( the error of approximating the input data Xi ’ s as convex combinations of archetypes ) encourages the archetypes to be far apart and have a convex hull that contains the input data . However , this term does not prevent the archetypes from taking on extreme values ; for example , if the archetypes already have a convex hull that contains the Xi ’ s ( so that ♠ = 0 ) , we can move the archetypes even farther apart and still have their convex hull contain the Xi ’ s ( so we still have ♠ = 0 ) . We prevent this behavior by minimizing term ♥ , which encourages each archetype to be close to the convex hull of the input data . To learn a topic model , we enforce that the archetypes correspond to distributions over words by requiring each row of H to be in the probability simplex ∆d . The resulting optimization problem is $(\hat{W}, \hat{H}) \in \operatorname*{argmin}_{\substack{W \in \mathbb{R}^{n \times r},\, H \in \mathbb{R}^{r \times d} \\ \text{s.t. } W_i \in \Delta^r \text{ for all } i,\; H_g \in \Delta^d \text{ for all } g}} L_{\text{arch}}(W, H; \lambda) .$ ( 2.3 ) A local minimum can be found by alternating between minimizing over W with H fixed , and vice versa ; a sketch of such an alternating scheme is given below .
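The following is a minimal projected-gradient sketch of problem ( 2.3 ) , not the authors ' implementation . The hull penalty ♥ is handled by introducing auxiliary simplex weights B ( one row Bg per archetype ) so that $D^2(H_g, X)$ becomes $\| H_g - B_g X \|_2^2$ when jointly minimized over B ; the step sizes and iteration count are illustrative assumptions .

```python
# Alternating projected-gradient sketch for simplex-constrained archetypal analysis.
import numpy as np

def project_simplex_rows(V):
    """Euclidean projection of each row of V onto the probability simplex
    (sort-based algorithm of Duchi et al., 2008)."""
    n, m = V.shape
    U = -np.sort(-V, axis=1)                      # rows sorted descending
    css = np.cumsum(U, axis=1) - 1.0
    idx = np.arange(1, m + 1)
    rho = (U - css / idx > 0).sum(axis=1) - 1     # last index where condition holds
    theta = css[np.arange(n), rho] / (rho + 1)
    return np.maximum(V - theta[:, None], 0.0)

def archetypal_topics(X, r, lam=1.0, iters=500, lr=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = project_simplex_rows(rng.random((n, r)))  # subject-topic weights
    H = X[rng.choice(n, r, replace=False)].copy() # init archetypes from data points
    B = project_simplex_rows(rng.random((r, n)))  # hull weights for the penalty
    for _ in range(iters):
        R = W @ H - X                             # reconstruction residual
        W = project_simplex_rows(W - lr * 2 * R @ H.T)
        R = W @ H - X
        H = project_simplex_rows(H - lr * (2 * W.T @ R + 2 * lam * (H - B @ X)))
        B = project_simplex_rows(B - lr * 2 * lam * (B @ X - H) @ X.T)
    return W, H

# Toy check: 3 planted topics over a 10-word vocabulary.
rng = np.random.default_rng(1)
H_true = project_simplex_rows(rng.random((3, 10)))
W_true = project_simplex_rows(rng.random((200, 3)))
W_hat, H_hat = archetypal_topics(W_true @ H_true, r=3)
print(np.abs(W_hat @ H_hat - W_true @ H_true).mean())
```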
The paper addresses the problem of survival analysis (predicting time until, e.g., death) using topic modeling. The point of introducing topic modeling here is to gain better insight into what helps predict survival times of unseen test subjects. This contrasts with much of the earlier work in survival analysis, which is mainly or only concerned with getting the best prediction, without concern for the "thematic structure" of the features. Such a thematic structure could be useful to clinicians, but analysis of the learned topic structure by clinical experts is apparently left for future work. Empirical results on the pancreatitis and METABRIC datasets show that the methods of the paper are comparable to several established baselines.
SP:97bd209e33851a48ea9e5c3cab5c3888438e5189
The paper considers the problem of interpreting the predictions for survival analysis using topic models. The classical survival analysis problem assumes each datapoint is a subject (X, Y, \delta), where X is a feature vector and Y is a lifetime or a censoring time, depending on whether the subject is dead (when \delta = 1) or alive (when \delta = 0). The usual objective here is to predict survival times. The authors assume the features in X are interpretable readings (e.g., "low blood pressure") that indicate the number of times that reading was observed. Under this setting, such features can also be seen as words, with datapoints being (bag-of-words) documents. The goal then becomes that of finding topics that help predict lifetimes of unseen subjects.
SP:97bd209e33851a48ea9e5c3cab5c3888438e5189
EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
1 INTRODUCTION . While deep learning models are powerful tools to perform a large variety of tasks , their predictive process often remains obscure . Understanding their behavior thus becomes crucial . This is particularly true with text data , where predicting without justifications has limited applicability . There has been a recent focus on trying to make deep models more interpretable ; see for example ( Ribeiro et al. , 2016 ; Bach et al. , 2015 ; Shrikumar et al. , 2017 ; Simonyan et al. , 2014 ; Sundararajan et al. , 2017 ) . Specifically , methods have been recently proposed to provide such explanations simultaneously with the prediction . For example , ( Lei et al. , 2016 ; Yu et al. , 2019 ; Bastings et al. , 2019 ) select subsets of words in the input text that can account for the model ’ s prediction ( called rationales ) . In this work , we propose a model that provides an explanation based on the absence or presence of “ concepts ” that are automatically discovered in the texts . Suppose we ask a user to tell what the category ( or class ) of the top text of Figure 1 ( text x ) is . She detects that the words the government said relate to a specific concept that is present in x , a concept she also detects in the words made official what in text x′ . Let us call this concept “ politics ” in the remainder . She also notes that the words retails sales bounced back refer to a concept different from the previous one . Let us call this other concept “ economy ” . As the text is concerned with politics and economy , she infers that its category is Business . Said otherwise , she detects excerpts that relate to particular concepts and decides on the text category based on the detected concepts . Similarly , our paradigm assumes that an explanation of a model ’ s prediction is understandable if it relies on a few concepts , where each concept relates to parts of text ( referred to as excerpts ) that are semantically consistent across multiple texts . Our methodology is as follows . First , our model encodes the input text into a binary low-dimensional vector , where each value ( 0 or 1 ) is computed from an excerpt and denotes the presence or absence of a specific concept in the input . Then , the prediction is made from this binary vector alone , allowing an easy interpretation of the decision in terms of presence/absence of concepts . While we set a maximal value of the number of concepts ( which is the dimensionality of the binary representation ) , the concepts are unsupervised and not defined a priori . The model automatically determines them in a way that eases interpretation : each concept is encouraged to be semantically consistent and not to overlap with other concepts . Therefore , extracted excerpts for a concept must be discriminative of that concept only and share similar semantics . This is enforced through a concept consistency and separability constraint added as an auxiliary loss in the learning optimization problem . As a result , each discovered concept can be understood from the corresponding extracted excerpts that activate its appearance : in our previous example , the meaning of the first concept the user identifies is inferred from the excerpts she detected for that concept in x and x′ , i.e . the government said , made official what . Looking at these excerpts , we identify that concept as politics . Our idea relates to Latent Dirichlet Allocation ( LDA ) ( Blei et al. , 2003 ) , where text documents are described by a set of topics that are semantically consistent .
However , LDA builds a probabilistic model of text generation , whereas our goal is to discover and define latent concepts that are relevant for the prediction task at hand . In comparison to rationale-based text processing models ( Lei et al. , 2016 ; Yu et al. , 2019 ; Bastings et al. , 2019 ) , we rely on a different paradigm : our model ’ s prediction is based on the absence or presence of discovered concepts , and makes no direct use of the words captured as excerpts . We do so to ease interpretation of the prediction . This makes the interpretation of a different nature than these methods , and simpler for the user to understand . Our contribution with this work is a new self-interpretable model that predicts solely from the presence/absence of concepts in the input . Concepts are learned in an unsupervised manner , and each is described by a semantically consistent set of excerpts . We experiment on three text categorization tasks and a multi-aspect sentiment analysis task , compare our model ’ s performance with state-of-the-art prediction models , and demonstrate its interpretability . Note that an instance of our model for image processing is described in the supplementary Section E with illustrative experiments . 2 THE EDUCE MODEL . We present our model , called EDUCE for Explaining model Decisions through Unsupervised Concepts Extraction , using a multi-class classification task , but our method can be used to perform regression or multi-label classification . We consider a training dataset D of inputs ( x1 , .... , xN ) , xn ∈ X , and corresponding labels ( y1 , ... , yN ) , yn ∈ Y . Contrary to black-box models that use complex computations over low-level input features to predict the output class y , our objective is to force the model to map an input x to an easy-to-interpret representation z ( x ) ∈ { 0 , 1 } C , on which the output prediction relies . EDUCE ’ s inference process consists of the following steps . Step 1 : For each concept c ( i.e . each dimension of z ( x ) ) , compute pγ ( s|x , c ) : the probability of each excerpt s in x being selected . Sample a unique excerpt sc ( x ) ∼ pγ ( s|x , c ) per concept ( per dimension of z ( x ) ) . Note that the same excerpt can be selected for multiple concepts . Step 2 : Decide on each value zc ( x ) of the representation z ( x ) by sampling from pα ( zc|sc ( x ) , c ) . Given sc ( x ) , zc ( sc ( x ) ) = 1 means concept c is detected as present and zc ( sc ( x ) ) = 0 means it is absent . Step 3 : Predict the output class y from z ( x ) = ( z1 ( s1 ( x ) ) , ... , zC ( sC ( x ) ) ) . We do not have concept-level annotations . To ensure semantic consistency and prevent overlap of the concepts , we jointly train a concept classifier to recognize , for every excerpt sc ( x ) such that zc ( sc ( x ) ) = 1 , the concept ( i.e . the dimension ) it was extracted for : the label for each sc ( x ) is simply c ∈ [ 1 , C ] . Figure 1 illustrates these steps , which we detail below . 2.1 GENERATING PREDICTIONS AND EXPLANATIONS . Step 1 : Extract a unique excerpt sc ( x ) per concept c. Given an input sentence x : w1 , ... , wM , where each wi is a word , an excerpt s consists of a span of consecutive words of flexible size between 3 and 10 words : s = wk , ... , wk+l , 3 ≤ l ≤ 10 . For each concept , we extract a unique excerpt , defined by its first and last word . Our extraction process is very similar to the one used in question-answering models ( e.g. , Devlin et al . ( 2019 ) ) .
An excerpt is sampled by first sampling its start word and then its end word conditioned on the start word . We denote by pγ ( s|x , c ) the probability of each excerpt s representing concept c in x , and we write it as the product of ( i ) pstart ( wk|x , c ) , the probability of s ’ s first word wk being the start word of the excerpt to extract , and ( ii ) pstop ( wk+l|x , c , wk ) , the probability of wk+l being the stop word given the start word wk : $p_\gamma(s \mid x, c) = p_\gamma(w_k, \dots, w_{k+l} \mid x, c) = p_{\text{stop}}(w_{k+l} \mid x, c, w_k)\, p_{\text{start}}(w_k \mid x, c)$ . We parametrize pstart ( wk|x , c ) and pstop ( wk+l|x , c , wk ) using recurrent neural networks . First , we feed the entire input sentence x : w1 , ... , wM through a bidirectional LSTM , and we represent each word wi in the sentence as the concatenation of the forward-pass and backward-pass hidden states for that word : $h_k = [\overrightarrow{h_k}, \overleftarrow{h_k}], \ \forall k \in [1, M]$ . Second , each hk is fed to a linear layer with parameters γstart , which outputs a score for wk , followed by the softmax activation function over all possible words to obtain the probability distribution over each word being the start word : $p_{\text{start}}(w_k \mid x, c) = \frac{\exp(\gamma_{\text{start}} \cdot h_k)}{\sum_{k'=1}^{M} \exp(\gamma_{\text{start}} \cdot h_{k'})}, \quad k = 1, \dots, M.$ ( 1 ) Using this distribution , we can sample a specific start word wstart ∼ pstart ( wk|x , c ) . Third , we feed the vectors hk again to another linear layer with parameters γstop , which gives a score for each word being the stop word . We mask out these scores before taking the softmax , such that the probabilities of wstart+l being the stop word for l < 3 or l > 10 are 0 : $p_{\text{stop}}(w_{\text{start}+l} \mid x, c, w_{\text{start}}) = \begin{cases} \dfrac{\exp(\gamma_{\text{stop}} \cdot h_{\text{start}+l})}{\sum_{l'=3}^{10} \exp(\gamma_{\text{stop}} \cdot h_{\text{start}+l'})}, & 3 \le l \le 10 \\ 0, & l < 3 \text{ or } l > 10 . \end{cases}$ We then sample a specific stop word wstop ∼ pstop ( wstart+l|x , c , wstart ) 1 . We now have the start word wstart and stop word wstop . The corresponding excerpt is wstart , ... , wstop , and it is represented as a fixed-size vector sc ( x ) by computing the average of the ( pre-trained , fixed ) word embedding vectors of wstart , ... , wstop . Hence , sampled excerpts sc ( x ) belong to Rd , where d is the dimension of the embedding vectors . This process is illustrated in Figure 1 : for text x , the excerpt Retails sales bounced back is selected for concept c1 , the government said for concept c2 , and a midsummer slump for concept c3 . Step 2 : For each c , from the excerpt sc ( x ) , decide on the value zc ( x ) denoting the presence/absence of concept c. Each extracted excerpt ( one per concept ) is processed to decide on the absence or presence of each concept in x . Specifically , for each concept c we take the dot product of a weight vector αc ∈ Rd with sc ( x ) , followed by a sigmoid activation function , in order to obtain the Bernoulli probability $p_\alpha(z_c = 1 \mid s_c(x), c) = \sigma(\alpha_c \cdot s_c(x))$ ( 2 ) This is the probability that sc ( x ) , extracted from x for concept c , triggers the presence of concept c. The binary vector z ( x ) is obtained by independently sampling each dimension : ∀c , zc ∼ pα ( zc|sc ( x ) , c ) . This step can be seen as C independent binary classifiers , each of them deciding on the presence of a particular concept . We illustrate this in Step 2 of Figure 1 : for text x , the excerpt Retails sales bounced back activates the presence of concept c1 , the government said activates the presence of c2 , but a midsummer slump does not activate the presence of c3 . A small numerical sketch of the sampling procedure of Step 1 is given below .
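A minimal numpy sketch of Step 1 , not the released code : the BiLSTM is replaced by a precomputed matrix of hidden states h ( M × dim ) , and γstart , γstop , and the embedding matrix E are random placeholders standing in for learned parameters ; the boundary handling near the end of the text is an assumption following footnote 1 .

```python
# Sampling an excerpt: start word over all positions (eq. (1)),
# then a stop word masked to offsets 3..10 from the start.
import numpy as np

rng = np.random.default_rng(0)

def softmax(s):
    s = s - s.max()
    e = np.exp(s)
    return e / e.sum()

def sample_excerpt(h, gamma_start, gamma_stop, rng, min_len=3, max_len=10):
    M = h.shape[0]
    p_start = softmax(h @ gamma_start)        # start-word distribution, eq. (1)
    start = rng.choice(M, p=p_start)
    # stop word constrained to offsets min_len..max_len, clipped to the text
    # (assumed handling of spans near the end of the sentence)
    hi = min(start + max_len, M - 1)
    lo = min(start + min_len, hi)
    scores = h[lo:hi + 1] @ gamma_stop
    stop = lo + rng.choice(len(scores), p=softmax(scores))
    return start, stop

# Excerpt representation s_c(x): mean of fixed pre-trained word embeddings.
M, dim, d_emb = 25, 16, 8
h = rng.standard_normal((M, dim))             # stand-in for BiLSTM states
E = rng.standard_normal((M, d_emb))           # embedding of each word in x
start, stop = sample_excerpt(h, rng.standard_normal(dim),
                             rng.standard_normal(dim), rng)
s_c = E[start:stop + 1].mean(axis=0)          # fixed-size vector s_c(x)
print(start, stop, s_c.shape)
```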
Step 3 : Predict y from z ( x ) . As shown in Step 3 of Figure 1 , given an input x , the prediction of the output y is made solely from the intermediate binary representation z ( x ) . We use a linear classifier without bias , parametrized by a weight matrix δ ∈ R|Y|×C , followed by a softmax activation function , returning pδ ( y|z ( x ) ) ∀y ∈ Y . Output classification training objective . The parameters { γ , α , δ } are learned in an end-to-end manner by minimizing the cross-entropy , which writes for each x as : $\mathcal{L}^{\text{output}}(x, y, \delta, \alpha, \gamma) = \mathbb{E}_{s(x) \sim p_\gamma}\big[ \mathbb{E}_{z(x) \sim p_\alpha}\big[ \mathcal{L}^{\text{output}}(y, z(x), \delta) \big] \big] = \mathbb{E}_{\forall c\; s_c(x) \sim p_\gamma(s \mid x, c)}\big[ \mathbb{E}_{\forall c\; z_c \sim p_\alpha(z_c \mid s_c(x), c)}\big[ -\log p_\delta(y \mid z(x)) \big] \big] .$ ( 3 ) We give details about the optimization in Section 2.3 . 1 We also prevent the stop word from falling outside the text length .
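Continuing the sketch above , Steps 2–3 amount to per-concept Bernoulli gates followed by a bias-free linear classifier ; α , δ , and the excerpt vectors below are random placeholders , and the true class index is arbitrary .

```python
# Concept gating (eq. (2)) and prediction from the binary code z(x).
import numpy as np

rng = np.random.default_rng(1)
C, d_emb, n_classes = 4, 8, 3
alpha = rng.standard_normal((C, d_emb))       # one gating vector per concept
delta = rng.standard_normal((n_classes, C))   # linear classifier, no bias
s = rng.standard_normal((C, d_emb))           # one excerpt vector per concept

sigma = lambda t: 1.0 / (1.0 + np.exp(-t))
p_z = sigma(np.einsum("cd,cd->c", alpha, s))  # eq. (2): per-concept Bernoulli
z = (rng.uniform(size=C) < p_z).astype(float) # sample binary code z(x)

logits = delta @ z
p_y = np.exp(logits - logits.max())
p_y /= p_y.sum()                              # softmax over classes
loss = -np.log(p_y[0])                        # cross-entropy, true class = 0
print(z, p_y, loss)
```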
The paper introduces a new concept-based interpretability method that lies in the family of self-interpretable models (i.e., it is not a post-hoc method). Self-interpretability is achieved by a two-stage model: first, a concept extractor finds pieces of consecutive words (excerpts) in a given text that are related to a concept among a set of learned concepts (if any); then the model makes its predictions solely based on the presence or absence of concepts (binary). The most useful part of the algorithm is that there is no need for concept annotations. The work then experimentally shows that, although their method does not outperform a non-self-interpretable baseline, it has better performance and interpretation compared to rival methods.
SP:1f205607623dfcb3aa5e21f7d13d3182ba29bad5
The authors propose a self-explainable deep net architecture that could be used for text categorization. The main idea is to force the network to extract "excerpts" from the input text, each corresponding to a concept, where the concepts are also learned for interpretation. The classification is finally made based on the learned concepts, represented as a binary vector. All three steps are learned in an end-to-end manner. The learning of concepts is regularized to make sure the concepts are consistent and non-overlapping. The idea sounds interesting, and the experimental results support the usefulness of the proposed method on a variety of datasets. My sole concern is about the sensitivity analysis of the explanation, i.e., how robust the explanation is with respect to perturbations that do not change the classifier's prediction. It has been discussed in the literature that many explanation methods suffer from this sensitivity issue.
SP:1f205607623dfcb3aa5e21f7d13d3182ba29bad5
Quantifying the Cost of Reliable Photo Authentication via High-Performance Learned Lossy Representations
1 INTRODUCTION . Increasing adoption of machine learning in computer graphics has rapidly decreased the time-frame and skill set needed for convincing photo manipulation . Point-and-click solutions are readily available for plausible object insertion ( Portenier et al. , 2019 ) , removal ( Xiong et al. , 2019 ) , sky replacement ( Tsai et al. , 2016 ) , face editing ( Portenier et al. , 2018 ) and many other popular operations . While often performed with humorous or artistic intent , they can wreak havoc by altering medical records ( Mirsky et al. , 2019 ) , concealing scientific misconduct ( Gilbert , 2009 ; Bik et al. , 2016 ; Bucci , 2018 ) or even interfering with democratic elections ( Chesney & Citron , 2019 ) . Reasoning about photo integrity and origin relies on subtle statistical traces , e.g. , fingerprints of imaging sensors ( Chen et al. , 2008 ) , color interpolation artifacts ( Popescu & Farid , 2005 ) , or pixel co-occurrence patterns ( Marra et al. , 2019b ; Mayer & Stamm , 2019 ) . Unfortunately , such traces are commonly destroyed during online dissemination , since social networks are forced to aggressively compress digital media to optimize storage and bandwidth expenditures - especially on mobile devices ( Cabral & Kandrot , 2015 ) . As a result , detection of photo manipulations online is notoriously unreliable . Some platforms perform forensic photo analysis at the ingress ( Truepic , 2019 ) , but it may already be too late . Existing photo compression standards , like JPEG , optimize for human perception alone and aggressively remove weak micro-signals already at the device . We demonstrate that huge gains in photo manipulation detection accuracy are possible at low cost by carefully optimizing lossy compression . Thanks to explicit optimization , a fractional increase in bitrate is sufficient to significantly increase detection accuracy . We build upon the work of Korus & Memon ( 2019 ) and use their toolbox for end-to-end modeling of photo dissemination channels . We design a lightweight and high-performance lossy image codec , and optimize it for reliable manipulation detection - a backbone of modern forensic analysis ( Wu et al. , 2019 ; Mayer & Stamm , 2019 ) . Interestingly , the model learns complex frequency attenuation patterns , since simple inclusion of high-frequency information turns out to be insufficient . This suggests new directions in ongoing efforts to revisit the standard rate-distortion paradigm ( Blau & Michaeli , 2019 ) . We believe such solutions could be useful for social media platforms , photo attestation services , or insurance companies , which may exploit asymmetric compression setups and acquire photos from smart-phones in analysis-friendly formats . In terms of rate-distortion , our model is comparable with modern hand-engineered codecs , like BPG ( Bellard , 2014 ) , which delivers only slightly better results . On GPU-enabled platforms , our codec can be faster , even without low-level optimization . 2 RELATED WORK . Learned Compression : Rapid progress in deep learning has rekindled interest in lossy image compression . While some studies consider fully end-to-end solutions dispensing with conventional entropy coding ( Toderici et al. , 2017 ) , the most successful solutions tend to be variations of autoencoders combined with context-adaptive arithmetic coding . Such codecs have recently surpassed state-of-the-art hand-crafted solutions ( Rippel & Bourdev , 2017 ; Mentzer et al. , 2018 ) .
Adoption of generative models makes it possible to hallucinate unimportant details and reach extreme compression rates while maintaining good perceptual quality ( Agustsson et al. , 2018 ) . This research direction makes explicit provenance objectives increasingly pressing . Compression vs High-level Vision : JPEG compression is commonly used for data augmentation to retain high machine vision performance on compressed images . Despite this , severe compression is known to degrade accuracy ( Dodge & Karam , 2016 ) , and restoration techniques are often used to mitigate the problem ( Wang et al. , 2016 ) . Some studies optimize JPEG compression to encode semantically salient regions with better quality in a format-compliant way ( Prakash et al. , 2017 ) . Researchers also explore trainable variations of the JPEG codec optimized for minimal performance degradation and low power use in IoT devices ( Liu et al. , 2018 ) . In high-volume applications , the computational footprint can be reduced by running high-level vision directly on the DCT coefficients ( Gueguen et al. , 2018 ) . Adoption of trainable latent representations gives more flexibility and allows for end-to-end training ( Torfason et al. , 2018 ) . Optimization of Photo Dissemination Channels : The large volume of photos shared online spawned the need to aggressively optimize all steps of photo dissemination ( uplink , downlink and storage ) . Social media platforms already rely on in-house solutions ( Facebook , 2018 ) , and employ extreme measures , like header transplantation , to minimize overhead and improve user experience ( Cabral & Kandrot , 2015 ) . The platforms actively engage in research and development of image compression , including optimization of the standard JPEG codec ( Google , 2016 ) , development of new backward-compatible standards like JPEG-XL ( Rhatushnyak et al. , 2019 ) , and development of entirely new codecs - both hand-engineered ( e.g. , WebP ) and end-to-end trained ( Toderici et al. , 2017 ) . 3 END-TO-END TRAINABLE PHOTO DISSEMINATION MODEL . We build upon a recently published end-to-end trainable model of photo acquisition and dissemination ( Korus & Memon , 2019 ) . The model uses a forensic analysis network ( FAN ) for photo manipulation detection , and allows for joint optimization of the FAN and the camera ISP , leading to distinct imaging artifacts that facilitate authentication . The published toolbox included only standard JPEG compression , and we extended it to support trainable codecs . We show a generic version of the updated model in Fig . 1 , with the potentially trainable elements highlighted . In this study , we fix the camera model , and jointly optimize the FAN and a deep compression network ( DCN ) . We describe the design of our DCN codec and its pre-training protocol below . 3.1 BASELINE DCN ARCHITECTURE . Our DCN model follows the general auto-encoder architecture proposed by Theis et al . ( 2017 ) , but uses different quantization , entropy estimation and entropy coding schemes ( Section 3.2 ) . The model is fully convolutional , and consists of 3 sub-sampling ( stride-2 ) convolutional layers with 3 residual blocks in between ( Fig . 2 ) . We do not use any normalization layers ( such as GDN ) , and rely solely on a single trainable scaling factor . Distribution shaping occurs organically thanks to entropy regularization ( see Fig . A.3b in the appendix ) .
The decoder mirrors the encoder , and implements up-sampling using sub-pixel convolutions ( a combination of convolutional and depth-to-space layers ) . We experimented with different variants of latent representation quantization , eventually converging on soft quantization with a fixed codebook of integers with a given maximal number of bits per feature ( bpf ) . We used a 5-bpf uniform codebook ( M = 32 values from -15 to 16 ) . We show the impact of codebook size in the appendix ( Fig . A.3a ) . The model is trained to minimize distortion between the input and reconstructed images , regularized by the entropy of the latent representation : $L_{\text{dcn}} = \mathbb{E}_{X}\big[ d(X, D \circ Q \circ E(X)) + \lambda_H H(Q \circ E(X)) \big] ,$ ( 1 ) where X is the input image , and E , Q , and D denote the encoder , quantization , and decoder , respectively . We used a simple L2 loss in the RGB domain as the distortion measure d ( · , · ) , a differentiable soft estimate of the entropy H ( Section 3.2 ) , and SSIM as the validation metric . 3.2 SOFT QUANTIZATION AND ENTROPY ESTIMATION . We developed our own quantization and entropy estimation mechanism , because we found existing approaches unnecessarily complicated and/or lacking in accuracy . Some of the most recent solutions include : ( 1 ) addition of uniform random noise to quantized samples and non-parametric entropy modeling by a fitted piece-wise linear model ( Ballé et al. , 2016 ) ; ( 2 ) a differentiable entropy upper bound with a uniform random noise component ( Theis et al. , 2017 ) ; ( 3 ) regularization by penalizing the norm of quantized coefficients and differences between spatial neighbors ( Rippel & Bourdev , 2017 ) ; ( 4 ) a PixelCNN for entropy estimation and context modeling ( Mentzer et al. , 2018 ) . Our approach builds upon the soft quantization used by Mentzer et al . ( 2018 ) , but is extended to address numerical stability problems and allow for accurate entropy estimation . Let z be a vectorized latent representation Z of N images , i.e . : $z_k = z_{n,i,j,c}$ where n , i , j , c advance sequentially along an arbitrary memory layout ( here image , width , height , channel ) . Let c denote a quantization codebook with M centers $[c_1, \dots, c_M]$ ( code words ) . Then , given a weight matrix $W \in [0, 1]^{N \times M}$ with $\forall n\ \sum_m w_{n,m} = 1$ , we can define : hard quantization as $\hat{z} = [\, c_{\arg\max_m w_{n,m}} \,]$ ; and soft quantization as $\tilde{z} = W c$ . Hard quantization replaces an input value with the closest available codeword , and corresponds to the rounding operation performed by the image codec . Soft quantization is a differentiable relaxation , which uses a linear combination of all code-words - as specified by the weight matrix . A detailed comparison of both quantization modes , along with an illustration of potential numerical pitfalls , can be observed in the top row of Fig . A.1 in the appendix . The hard and soft quantizations are used in the forward and backward passes , respectively . In Tensorflow , this can be implemented as z = tf.stop_gradient ( ẑ - z̃ ) + z̃ . The weights for individual code-words in the mixture are computed by applying a kernel κ to the distances between the values and the code-words , which can be organized into a distance matrix D : $D = z - c^\top = [\, d_{n,m} = z_n - c_m \,] ,$ ( 2 ) $W = \kappa(D) = [\, w_{n,m} = \kappa(d_{n,m}) \,] .$ ( 3 ) The most commonly used implementations use a Gaussian kernel : $\kappa_\gamma = e^{-\gamma d_{n,m}^2} ,$ ( 4 ) which suffers from numerical problems for edge cases overflowing the codebook range ( see Fig . A.1 , top row , 4-th and 5-th columns ) .
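A numpy sketch of the hard/soft quantization above with the Gaussian kernel of eq . ( 4 ) ; this is not the authors ' code , the straight-through trick itself requires an autodiff framework , and γ = 25 is an illustrative choice .

```python
# Forward-pass quantities of soft/hard quantization (eqs. (2)-(4)).
import numpy as np

codebook = np.arange(-15, 17, dtype=float)       # 5-bpf uniform codebook

def quantize(z, c, gamma=25.0):
    D = z[:, None] - c[None, :]                  # distance matrix, eq. (2)
    W = np.exp(-gamma * D**2)                    # Gaussian kernel, eq. (4)
    W /= W.sum(axis=1, keepdims=True)            # numerical row normalization
    z_soft = W @ c                               # soft: mixture of codewords
    z_hard = c[W.argmax(axis=1)]                 # hard: nearest codeword
    return z_hard, z_soft, W

# Values are clipped into the codebook range; far outside it, all Gaussian
# weights underflow to zero -- the numerical pitfall mentioned above.
z = np.clip(np.random.default_rng(0).laplace(scale=4.0, size=1000), -15, 16)
z_hard, z_soft, W = quantize(z, codebook)
print(np.abs(z_hard - z_soft).max())             # soft tracks hard closely
```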
To alleviate these numerical problems, we adopt a t-Student kernel:

$$\kappa_{\gamma, v}(d_{n,m}) = \left(1 + \frac{\gamma d_{n,m}^2}{v}\right)^{-(v+1)/2}, \qquad (5)$$

which behaves much better in practice. We do not normalize the kernels themselves; instead, we ensure correct proportions of the weights by numerically normalizing the rows of the weight matrix. We estimate the entropy of the quantized values by summing the weight matrix along the sample dimension, which yields an estimate of the histogram w.r.t. the codebook entries (a comparison with an actual histogram is shown in Fig. A.3):

$$\tilde{h} = \left[\,\tilde{h}_m = \sum_n w_{n,m}\,\right]. \qquad (6)$$

This makes it possible to estimate the entropy of the latent representation simply as:

$$\hat{H} = -\sum_m \tilde{h}_m \log_2 \tilde{h}_m, \qquad (7)$$

where $\tilde{h}$ is normalized to a probability distribution (i.e., divided by N) before taking the logarithm. We assess the quality of the estimate both for synthetic random numbers (1,000 numbers sampled from Laplace distributions of various scales) and for an actual latent representation of 128 × 128 px RGB image patches sampled from the clic test set (see Section 3.5 and examples in Fig. 4a). For the random sample, we fixed the quantization codebook to integers from -5 to 5 and performed the experiment numerically. For the real patches, we fed the images through a pre-trained DCN model (a medium-quality model with 32 feature channels; 32-C) and used the codebook embedded in the model (integers from -15 to 16). Fig. 3 shows the entropy estimation error (both absolute and relative) and scatter plots of real entropies vs. their soft estimates using the Gaussian and t-Student kernels. It can be observed that the t-Student kernel consistently outperforms the commonly used Gaussian. The impact of the kernels' hyperparameters on the relative estimation error is shown in Fig. A.2. The best combination of kernel parameters (v = 50, γ = 25) is highlighted in red and used in all subsequent experiments.
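A compact sketch of Eqs. (5)-(7) follows. It assumes the soft histogram is normalized to a probability distribution before the entropy is computed; the function and argument names are ours, not the paper's.

```python
import tensorflow as tf

def soft_entropy(z, codebook, gamma=25.0, v=50.0):
    """Differentiable entropy estimate (in bits) of the quantized latent values."""
    d2 = tf.square(z[:, None] - codebook[None, :])        # squared distances
    w = tf.pow(1.0 + gamma * d2 / v, -(v + 1.0) / 2.0)    # t-Student kernel, Eq. (5)
    w = w / tf.reduce_sum(w, axis=-1, keepdims=True)      # numerically normalize rows
    h = tf.reduce_sum(w, axis=0)                          # soft histogram, Eq. (6)
    p = h / tf.reduce_sum(h)                              # normalize to a distribution
    return -tf.reduce_sum(p * tf.math.log(p + 1e-12)) / tf.math.log(2.0)   # Eq. (7)
```

The defaults match the best kernel parameters reported above (v = 50, γ = 25).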
The paper describes a pipeline for image compression that makes it possible to reliably detect specific manipulation patterns in compressed images. The results show that it is possible to learn image compression that performs similarly to a modern image compression algorithm while at the same time being optimized to reveal specific kinds of manipulations. The authors build upon (Korus & Memon, 2019), but use a learnable codec instead of differentiable JPEG.
SP:d8e30dcadff63f56df4894cb5ba871cb2f8ee0ea
Quantifying the Cost of Reliable Photo Authentication via High-Performance Learned Lossy Representations
1 INTRODUCTION. Increasing adoption of machine learning in computer graphics has rapidly decreased the time-frame and skill set needed for convincing photo manipulation. Point-and-click solutions are readily available for plausible object insertion (Portenier et al., 2019), removal (Xiong et al., 2019), sky replacement (Tsai et al., 2016), face editing (Portenier et al., 2018), and many other popular operations. While often performed with humorous or artistic intent, such manipulations can wreak havoc by altering medical records (Mirsky et al., 2019), concealing scientific misconduct (Gilbert, 2009; Bik et al., 2016; Bucci, 2018), or even interfering with democratic elections (Chesney & Citron, 2019). Reasoning about photo integrity and origin relies on subtle statistical traces, e.g., fingerprints of imaging sensors (Chen et al., 2008), color interpolation artifacts (Popescu & Farid, 2005), or pixel co-occurrence patterns (Marra et al., 2019b; Mayer & Stamm, 2019). Unfortunately, such traces are commonly destroyed during online dissemination, since social networks are forced to aggressively compress digital media to optimize storage and bandwidth expenditures - especially on mobile devices (Cabral & Kandrot, 2015). As a result, detection of photo manipulations online is notoriously unreliable. Some platforms perform forensic photo analysis at the ingress (Truepic, 2019), but by then it may already be too late. Existing photo compression standards, like JPEG, optimize for human perception alone and aggressively remove weak micro-signals already at the device. We demonstrate that huge gains in photo manipulation detection accuracy are possible at low cost by carefully optimizing lossy compression. Thanks to explicit optimization, a fractional increase in bitrate is sufficient to significantly increase detection accuracy. We build upon the work of Korus & Memon (2019) and use their toolbox for end-to-end modeling of photo dissemination channels. We design a lightweight and high-performance lossy image codec and optimize it for reliable manipulation detection - a backbone of modern forensic analysis (Wu et al., 2019; Mayer & Stamm, 2019). Interestingly, the model learns complex frequency attenuation patterns, as simple inclusion of high-frequency information turns out to be insufficient. This suggests new directions in ongoing efforts to revisit the standard rate-distortion paradigm (Blau & Michaeli, 2019). We believe such solutions could be useful for social media platforms, photo attestation services, or insurance companies, which may exploit asymmetric compression setups and acquire photos from smart-phones in analysis-friendly formats. In terms of rate-distortion performance, our model is comparable with modern hand-engineered codecs, like BPG (Bellard, 2014), which delivers only slightly better results. On GPU-enabled platforms, our codec can be faster, even without low-level optimization. 2 RELATED WORK. Learned Compression: Rapid progress in deep learning has rekindled interest in lossy image compression. While some studies consider fully end-to-end solutions dispensing with conventional entropy coding (Toderici et al., 2017), the most successful solutions tend to be variations of auto-encoders combined with context-adaptive arithmetic coding. Such codecs have recently surpassed state-of-the-art hand-crafted solutions (Rippel & Bourdev, 2017; Mentzer et al., 2018).
This paper presents a learned image compression method that is robust across a variety of tasks. The results aren't state of the art in terms of rate-distortion performance, but the paper offers a very good analysis of the results and produces a very fast codec. In that sense, this is a very interesting paper that may lead to other fast methods (the other fast method they compared runtime against, WaveOne, never published a complete description).
SP:d8e30dcadff63f56df4894cb5ba871cb2f8ee0ea
Self-Adversarial Learning with Comparative Discrimination for Text Generation
1 INTRODUCTION. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have achieved tremendous success for image generation and received much attention in computer vision. For text generation, however, the performance of GANs is severely limited due to reward sparsity and mode collapse: reward sparsity refers to the difficulty for the generator to receive reward signals when its generated samples can hardly fool the discriminator, which is much easier to train; mode collapse refers to the phenomenon that the generator only learns limited patterns from the real data. As a result, both the quality and the diversity of generated text samples are limited. To address the above issues, we propose a novel self-adversarial learning (SAL) paradigm for improving adversarial text generation. In contrast to standard GANs (Figure 1(a)), which use a binary classifier as the discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator, a pairwise classifier assessing whether the currently generated sample is better than a previously generated one, as shown in Figure 1(b). During training, SAL rewards the generator when its currently generated samples are found to be better than its previously generated samples. In the earlier training stage, when the quality of generated samples is far below the real data, this self-improvement reward mechanism makes it easier for the generator to receive non-sparse rewards with informative learning signals, effectively alleviating the reward sparsity issue; in the later training stage, SAL can prevent a sample from continually receiving high reward, as self-improvement for a popular mode becomes more and more difficult, and therefore helps the generator avoid collapsing toward the limited patterns of the real data. We comprehensively evaluate the proposed self-adversarial learning paradigm on both synthetic data and real data on the text generation benchmark platform (Zhu et al., 2018). Compared to previous approaches for adversarial text generation (Yu et al., 2017; Che et al., 2017; Lin et al., 2017), our approach* shows a substantial improvement in terms of both the quality and the diversity of generated samples, as well as better performance stability in adversarial learning. (*This work was done during the first author's internship at Microsoft Research Asia.) 2 BACKGROUND: ADVERSARIAL TEXT GENERATION. Adversarial text generation has drawn much attention in recent years due to its advantages (e.g., sequence-level guidance without the exposure bias issue (Bengio et al., 2015)) over maximum likelihood estimation (MLE) for natural language generation. It formulates the learning process as a minimax game between a generator Gθ parameterized by θ and a discriminator Dφ parameterized by φ: the discriminator is trained to distinguish between samples drawn from the real data distribution pdata and samples generated by the generator, while the generator is trained to generate samples that can "fool" the discriminator. The adversarial learning objective of the generator and the discriminator can be formulated as:

$$\min_\theta \max_\phi \; \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D_\phi(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D_\phi(G_\theta(z))\right)\right], \qquad (1)$$

where x is a sample from the real data and Gθ(z) is a sample generated by the generator from the initialization z drawn from the noise distribution pz (e.g., a standard normal distribution).
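For concreteness, a minimal sketch of the two losses implied by Eq. (1) is shown below. The saturating generator loss follows the equation literally; in practice the non-saturating variant -log Dφ(Gθ(z)) is commonly used instead. This is illustrative, not the paper's code.

```python
import tensorflow as tf

def gan_losses(d_real, d_fake, eps=1e-8):
    """d_real, d_fake: discriminator outputs D(x) and D(G(z)), probabilities in (0, 1)."""
    # The discriminator ascends Eq. (1); equivalently, minimize the negated objective.
    loss_d = -tf.reduce_mean(tf.math.log(d_real + eps) + tf.math.log(1.0 - d_fake + eps))
    # The generator descends Eq. (1) as written (the saturating form).
    loss_g = tf.reduce_mean(tf.math.log(1.0 - d_fake + eps))
    return loss_d, loss_g
```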
While GANs have shown some promising results (Yu et al., 2017; Guo et al., 2018), there are two fundamental issues that impede their progress in text generation: (i) reward sparsity, which is due to the fact that the discriminator tends to learn much better than the generator and thus easily recognizes generated samples as fake - in such cases, it is difficult for the generator to receive rewards; and (ii) mode collapse, which arises from the intrinsic nature of GANs and leads adversarial models to learn only limited patterns from the real samples. These two issues limit the ability of GANs to generate high-quality and diverse text samples, and they have not been well addressed yet. 3 SELF-ADVERSARIAL LEARNING. To address the aforementioned issues, we propose a novel self-adversarial learning (SAL) paradigm. Inspired by self-play (Silver et al., 2017; Rennie et al., 2017) in reinforcement learning, the core idea of SAL is to reward the generator if its currently generated sample is found to be better than its previously generated ones. Like AlphaGo (Silver et al., 2017), the generator in SAL strives to generate better samples than its previously generated ones in order to pass the "self-improvement" test posed by a comparative discriminator, a pairwise classifier trained to compare the quality of two samples, as Figure 1(b) shows. Compared to conventional GANs (Figure 1(a)), SAL has the following advantages. First, in the earlier training stage, when the quality of generated samples is far below the real data, the self-improvement reward mechanism of SAL allows the generator to receive informative learning signals more easily, as it adapts the assessment of sample quality to the current capability of the generator, making the model less likely to suffer from reward sparsity. Second, in the later training stage, when the quality of generated samples is high, SAL prevents a sample from continually receiving high reward, as passing the "self-improvement" test becomes more and more difficult, thus reducing the risk of the generator collapsing toward limited patterns. The self-improvement mechanism and the 'tie' option in the comparative discriminator also provide a reasonable baseline corresponding to cases where newly generated samples are found to be indistinguishable from previous ones, thus improving training stability. We provide a more detailed qualitative analysis of why the proposed self-adversarial learning paradigm alleviates these problems in the Appendix. 3.1 COMPARATIVE DISCRIMINATOR. As introduced above, the core component of SAL is the comparative discriminator, a pairwise classifier comparing the quality of two samples. It learns a total order of sample quality and encodes the inductive bias that one sample is better (>), worse (<), or indistinguishable (≈) in quality compared to the other. For a (text) sample, the comparative discriminator can offer more informative learning signals than the conventional binary (i.e., real/fake) classification-based discriminator, because the sample can receive multiple feedback signals by being compared with multiple other samples. For training the comparative discriminator, we construct pairwise training examples from the real and generated samples, as Figure 2 shows; a minimal sketch of such a three-way pairwise classifier follows below.
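As a rough illustration, a three-way pairwise classifier could be built as follows. The encoder choice (embedding + GRU), the hidden sizes, and the concatenation of the two sample encodings are all assumptions made for illustration; the paper specifies the three output classes but not this exact architecture.

```python
import tensorflow as tf

NUM_CLASSES = 3  # better (>), worse (<), indistinguishable (~)

def build_comparative_discriminator(vocab_size=5000, emb_dim=128, hidden=256):
    # Shared text encoder applied to both samples (architecture assumed).
    encoder = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, emb_dim),
        tf.keras.layers.GRU(hidden),
    ])
    s1 = tf.keras.Input(shape=(None,), dtype=tf.int32)   # token ids of sample 1
    s2 = tf.keras.Input(shape=(None,), dtype=tf.int32)   # token ids of sample 2
    pair = tf.keras.layers.Concatenate()([encoder(s1), encoder(s2)])
    logits = tf.keras.layers.Dense(NUM_CLASSES)(pair)    # scores for {>, <, ~}
    return tf.keras.Model([s1, s2], logits)
```

Sharing the encoder between the two inputs keeps the comparison symmetric up to the final pairing layer, which is one natural design choice for a pairwise classifier.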
For a real sample $s_+$ and a generated sample $s_-$, we assign the label "better (>)" to the pair $(s_+, s_-)$ and "worse (<)" to $(s_-, s_+)$. For two samples both from the real data or both from the generated samples, we assign the label "indistinguishable (≈)" to such pairs (i.e., $(s_+^i, s_+^j)$ and $(s_-^i, s_-^j)$). For a training set with n real samples and n generated samples, the comparative discrimination can construct $\binom{2n}{2}$ pairwise training examples, which enhances the generalization ability of the comparative discriminator. Moreover, to improve the model's ability to distinguish between good and bad generated samples for self-play learning, we additionally select samples generated during the later stage of MLE training as pseudo-real samples, and select those generated in earlier epochs, when the generator has not fully converged, as fake sentences. We then pair the pseudo-real samples with the fake samples to construct training instances that supervise the model to compare their qualities. In this way, the comparative discriminator is prevented from learning to always recognize two generated samples as equally bad and assign zero reward to the generator. As a result, it becomes more sensitive to the quality difference within a pair of text samples and thus allows the generator to receive rewards more easily. 3.2 TRAINING. Before formally introducing the training procedure for SAL, we first define the learning objectives of the comparative discriminator Dφ and the generator Gθ in SAL:

$$\mathcal{L}_D = -\mathbb{E}_{(x_1, x_2) \sim (\mathcal{M} \cup p_{\text{data}}(x))^2}\left[\log D_\phi^{Q(x_1, x_2)}(x_1, x_2)\right], \qquad (2)$$

$$\mathcal{L}_G = -\mathbb{E}_{(z, x_r) \sim (p_z(z), \mathcal{M})}\left[\sum_{q \in \{>, <, \approx\}} w_q \log D_\phi^{q}(G_\theta(z), x_r)\right]. \qquad (3)$$

In Eq. (2) and Eq. (3), $\mathcal{M}$ is the set of samples previously generated by the generator, $Q(x_1, x_2) \in \{>, <, \approx\}$ is the true label for the pair $(x_1, x_2)$, and $D_\phi^{q}(x_1, x_2)$ is the probability of the comparative discriminator predicting $q$ ($q \in \{>, <, \approx\}$) for the pair $(x_1, x_2)$. $w_q$ is the reward weight for case $q$, a hyperparameter of SAL. If the generator generates a sample Gθ(z) that is better (>) than its previously generated sample $x_r$, it receives a positive reward; if Gθ(z) is worse (<) than $x_r$, it receives a negative reward; and if the quality of Gθ(z) is classified as similar (≈) to $x_r$, it receives zero credit. Therefore, we have $w_{(>)} > 0 = w_{(\approx)} > w_{(<)}$. Since $\mathcal{L}_G$ can only be directly optimized in standard continuous GAN training, we instead employ the policy gradient algorithm (Sutton et al., 2000) to train the generator, as in previous approaches to adversarial text generation. For SAL, we define the reward for a generated sample $x_g$, compared with a reference sample $x_r$ previously generated by the generator, as the reward expected under the probability distribution of the comparative discriminator's prediction:

$$\gamma_\phi(x_g, x_r) = w_{(>)} D_\phi^{(>)}(x_g, x_r) + w_{(<)} D_\phi^{(<)}(x_g, x_r) + w_{(\approx)} D_\phi^{(\approx)}(x_g, x_r). \qquad (4)$$

In text generation, the generator Gθ obtains the reward only once a sample has been completely generated, which means no intermediate reward is gained. To relieve this problem, following the practice in SeqGAN (Yu et al., 2017), we use the Monte Carlo rollout method to approximate intermediate rewards, sampling the unknown tokens following the generated prefix $Y_{1:t}$ with generator policy Gθ until sample completion.
Empirically, we found that the Monte Carlo rollout also helps to reduce the variance of the reference sample used for comparison. We calculate the expected reward as

$$R_{\theta, \phi}(s = Y_{1:t-1}, a = y_t) = \mathbb{E}_{(x_g, x_r) \sim (G_\theta(Y_{1:t-1}), \mathcal{M})}\left[\gamma_\phi(x_g, x_r)\right]. \qquad (5)$$

The objective of the generator is to generate a sequence that maximizes its expected final reward. With the likelihood-ratio trick (Sutton et al., 2000), we can formulate the gradient of the objective function for the generator Gθ as:

$$\nabla_\theta J(\theta) = \sum_{t=1}^{T} \mathbb{E}_{Y_{1:t-1} \sim G_\theta}\left[\nabla_\theta \log G_\theta(y_t \mid Y_{1:t-1}) \cdot R_{\theta, \phi}(s = Y_{1:t-1}, a = y_t)\right]. \qquad (6)$$

To improve self-adversarial learning, we borrow ideas from the field of deep reinforcement learning and propose two training techniques. Scheduled rewarding: Similar to the exploitation-exploration trade-off in reinforcement learning (Langford & Zhang, 2007), the positive reward assigned for generating better samples encourages exploration, while the penalty for generating worse samples makes the generator more conservative. Intuitively, in the earlier stage of self-adversarial learning, the generator should explore better policies by receiving higher rewards for relative progress; in the later stage, the generator should be more conservative, penalizing worse samples more heavily to avoid performance degradation. We simply decrease $w_{(>)}$ and increase $w_{(<)}$ linearly with the training iteration and refer to this technique as scheduled rewarding. Memory replay: Continuously comparing the generator with its most recent stage may suffer from correlation between generated samples and reference samples, which makes the training process unstable. Inspired by experience replay (Lillicrap et al., 2015), we construct a memory buffer that contains samples generated in the last K training steps. Reference samples are drawn from the memory buffer rather than from the most recent stage of the generator, which empirically helps stabilize the training process. The training process of SAL is summarized in Algorithm 1:

Algorithm 1: Self-Adversarial Learning with a Comparative Discriminator
Require: generator Gθ; comparative discriminator Dφ; set of real sentences S+; self-adversarial learning steps g; discriminator steps k; memory buffer M for previously generated samples
1: Pretrain Gθ using MLE on S+
2: Generate samples with Gθ and store them into M
3: repeat
4:   for k steps do
5:     Collect a mini-batch of balanced sample pairs (x1, x2) from M ∪ S+
6:     Update Dφ via Eq. (2)
7:   end for
8:   for g steps do
9:     Generate a mini-batch of samples xg ∼ Gθ
10:    Collect a mini-batch of reference samples xr from M
11:    Update Gθ via Eq. (6)
12:   end for
13:   Update M with Gθ
14: until convergence

Self-adversarial learning with the proposed comparative discriminator reaches a Nash equilibrium when the generator models the distribution of real samples perfectly. In this case, the comparative discriminator cannot successfully distinguish generated samples from real samples and tends to recognize two samples as "indistinguishable". The reward received by the generator is thus zero, and training converges. However, how a non-Bernoulli GAN converges to such an equilibrium is still an open problem (Goodfellow et al., 2014; Goodfellow, 2014) and is beyond the scope of this work.
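A small sketch of the comparative reward of Eq. (4) and the scheduled-rewarding heuristic is given below. The class ordering in the logits, the initial weights, and the linear schedule endpoints are assumptions; the paper only states that w(>) is decreased and w(<) is increased linearly with training.

```python
import tensorflow as tf

def comparative_reward(logits, w_better, w_worse, w_tie=0.0):
    """Expected reward under the discriminator's 3-class prediction, Eq. (4).
    logits: shape (batch, 3); class order [>, <, ~] assumed, for pairs (x_g, x_r)."""
    p = tf.nn.softmax(logits, axis=-1)
    return w_better * p[:, 0] + w_worse * p[:, 1] + w_tie * p[:, 2]

def scheduled_weights(step, total_steps, w_better0=1.0, w_worse0=-0.2):
    """Scheduled rewarding: linearly shrink w_(>) and grow the penalty w_(<).
    The initial values and schedule endpoints are illustrative assumptions."""
    frac = tf.cast(step, tf.float32) / float(total_steps)
    w_better = w_better0 * (1.0 - 0.5 * frac)   # exploration bonus decays
    w_worse = w_worse0 * (1.0 + frac)           # penalty magnitude grows
    return w_better, w_worse
```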
This paper introduces a self-adversarial learning (SAL) mechanism for GAN-based text generation, aiming to tackle the problems of mode collapse and reward sparsity. Specifically, motivated by the "self-play" mechanism in the RL community, instead of using a binary classifier as the discriminator as in the original GAN, SAL employs a comparative discriminator, a pairwise classifier with three classes: "better", "worse", and "indistinguishable". The authors provide extensive experimental results and an ablation study showing the effectiveness of the proposed mechanism in comparison with previous GAN models.
SP:e3e9a73988c8a2fb968f9ac2739ccac95f5a01bf
To alleviate the issues of reward sparsity and mode collapse in most text-generation GANs with a binary discriminator, this paper proposes a self-adversarial learning (SAL) framework with a novel comparative discriminator that takes pairs of text examples from the real and generated examples and outputs better, worse, or indistinguishable. Inspired by self-play in reinforcement learning, SAL uses self-play to reward the generator for generating better samples than its previous ones, using self-improvement signals from the comparative discriminator. It is argued that, because the comparative discriminator always produces self-improvement signals during training, and because the self-improvement signal will not be very strong once generated samples are already good enough, the issues of reward sparsity and mode collapse in conventional text GANs are reduced. Experimental results on synthetic data and benchmark datasets demonstrate that SAL outperforms SeqGAN, MaliGAN, and RankGAN both quantitatively and qualitatively.
SP:e3e9a73988c8a2fb968f9ac2739ccac95f5a01bf
Robust training with ensemble consensus
Since deep neural networks are over-parameterized, they can memorize noisy examples. We address this memorization issue in the presence of label noise. From the fact that deep neural networks cannot generalize to neighborhoods of memorized features, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation. Based on this, we propose a novel training method called Learning with Ensemble Consensus (LEC) that prevents overfitting to noisy examples by removing them based on the consensus of an ensemble of perturbed networks. One of the proposed LECs, LTEC, outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner. 1 INTRODUCTION. Deep neural networks (DNNs) have shown excellent performance (Krizhevsky et al., 2012; He et al., 2016) on visual recognition datasets (Deng et al., 2009). However, it is difficult to obtain high-quality labeled datasets in practice (Wang et al., 2018a). Even worse, DNNs might not learn patterns from the training data in the presence of noisy examples (Zhang et al., 2016). Therefore, there is an increasing demand for robust training methods. In general, DNNs optimized with SGD first learn patterns relevant to clean examples under label noise (Arpit et al., 2017). Based on this, recent studies regard examples that incur small losses on a network that does not overfit noisy examples as clean (Han et al., 2018; Shen & Sanghavi, 2019). However, such small-loss examples could be noisy, especially under a high level of noise. Therefore, sampling trainable examples from a noisy dataset by relying on small-loss criteria alone might be impractical. To address this, we develop a method to identify noisy examples among small-loss ones, based on two well-known observations: (i) noisy examples are learned via memorization rather than via pattern learning, and (ii) under a certain perturbation, network predictions for memorized features easily fluctuate, while those for generalized features do not. Based on these two observations, we hypothesize that, among small-loss examples, the training losses of noisy examples will increase when a certain perturbation is injected into the network parameters, while those of clean examples will not. This suggests that examples that consistently incur small losses under multiple perturbations can be regarded as clean. This idea stems from an artifact of SGD optimization and is thereby applicable to any architecture optimized with SGD. In this work, we introduce a method to perturb parameters so as to distinguish noisy examples from small-loss examples. We then propose a method to robustly train neural networks under label noise, termed learning with ensemble consensus (LEC). In LEC, the network is initially trained on the entire training set for a while and then trained on the intersection of the small-loss examples of an ensemble of perturbed networks. We present three LECs with different perturbations and evaluate their effectiveness on three benchmark datasets with random label noise (Goldberger & Ben-Reuven, 2016; Ma et al., 2018), open-set noise (Wang et al., 2018b), and semantic noise. Our proposed LEC outperforms existing robust training methods by efficiently removing noisy examples from training batches. 2 RELATED WORK. Generalization of DNNs. Although DNNs are over-parameterized, they have impressive generalization ability (Krizhevsky et al.
, 2012; He et al., 2016). Some studies argue that gradient-based optimization plays an important role in regularizing DNNs (Neyshabur et al., 2014; Zhang et al., 2016). Arpit et al. (2017) show that DNNs optimized with gradient-based methods learn patterns relevant to clean examples in the early stage of training. Since mislabeling reduces the correlation with other training examples, it is likely that noisy examples are learned via memorization. Therefore, we analyze the difference between generalized and memorized features to discriminate between clean and noisy examples. Training DNNs with Noisy Datasets. Label noise issues can be addressed by reducing the negative impact of noisy examples. One direction is to train with a modified loss function based on the noise distribution. Most studies in this direction estimate the noise distribution prior to training, as it is not accessible in general (Sukhbaatar et al., 2014; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Hendrycks et al., 2018). Another direction is to train with modified labels using the current model prediction (Reed et al., 2014; Ma et al., 2018). Aside from these directions, recent work suggests exploiting small-loss examples (Jiang et al., 2017; Han et al., 2018; Yu et al., 2019; Shen & Sanghavi, 2019) based on the generalization ability of DNNs. However, it is still hard to find clean examples by relying on training losses alone. This study presents a simple method to overcome this problem with small-loss criteria. 3 ROBUST TRAINING WITH ENSEMBLE CONSENSUS. 3.1 PROBLEM STATEMENT. Suppose that ε% of the examples in a dataset $D := D_{\text{clean}} \cup D_{\text{noisy}}$ are noisy. Let $S_{\epsilon, D, \theta}$ denote the set of (100−ε)% small-loss examples of the network f parameterized by θ, out of the examples in D. Since it is generally hard to learn all and only the clean examples, especially on a highly corrupted training set, it is problematic to regard all examples in $S_{\epsilon, D, \theta}$ as clean. To mitigate this, we suggest a simple idea: to find the noisy examples among the examples in $S_{\epsilon, D, \theta}$. 3.2 LEARNING WITH ENSEMBLE CONSENSUS (LEC). Since noisy examples are little correlated with other training examples, they are likely to be learned via memorization. However, DNNs cannot generalize to neighborhoods of memorized features. This means that even if the training losses of noisy examples are small, they can easily be increased under a certain perturbation δ, i.e., for $(x, y) \in D_{\text{noisy}}$: $(x, y) \in S_{\epsilon, D, \theta} \Rightarrow (x, y) \notin S_{\epsilon, D, \theta + \delta}$. Unlike noisy examples, the network f trained on the entire set D can learn patterns from some clean examples in the early stage of training; thus, their training losses remain consistently small in the presence of the perturbation δ, i.e., for $(x, y) \in D_{\text{clean}}$: $(x, y) \in S_{\epsilon, D, \theta} \Rightarrow (x, y) \in S_{\epsilon, D, \theta + \delta}$. This suggests that noisy examples can be identified from the inconsistency of their losses under a certain perturbation δ. Based on this, we regard the examples in the intersection of the (100−ε)% small-loss examples of an ensemble of M networks generated by adding perturbations δ1, δ2, ..., δM to θ, i.e., $\cap_{m=1}^{M} S_{\epsilon, D, \theta + \delta_m}$, as clean. We call this ensemble consensus filtering, because examples are selected via ensemble consensus. With this filtering, we develop a training method termed learning with ensemble consensus (LEC), described in Algorithms 1 and 2. Both algorithms consist of a warming-up process and a filtering process; the small-loss selection step they share is sketched below.
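A minimal sketch of the shared small-loss selection step, assuming per-example losses are available as an array; the function name and the NumPy-based implementation are ours.

```python
import numpy as np

def small_loss_indices(losses, eps):
    """Indices of the (100 - eps)% smallest-loss examples.
    losses: per-example training losses; eps: assumed noise ratio in percent."""
    n_keep = int(len(losses) * (100.0 - eps) / 100.0)
    return set(np.argsort(losses)[:n_keep])
```

Ensemble consensus filtering then reduces to intersecting such sets across the M perturbed networks, e.g., set.intersection(*[small_loss_indices(l, eps) for l in ensemble_losses]).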
The difference between the two lies in the filtering process. During the filtering process of Algorithm 1, the network is trained on the intersection of the (100−ε)% small-loss examples of the M networks within a mini-batch B; therefore, the number of examples updated at once varies. We can encourage more stable training, with a fixed number of examples updated at once, as described in Algorithm 2. During the filtering process of Algorithm 2, we first obtain the intersection of the small-loss examples of the M networks within the full batch D at each epoch. We then sample a subset of batchsize from the intersection and train on it at each update, like normal SGD.

Algorithm 1: LEC
Require: noisy dataset D with noise ratio ε%; duration of warming-up Tw; number of networks used for filtering M; perturbation δ
1: Initialize θ randomly
2: for epoch t = 1 : Tw do  (warming-up process)
3:   for mini-batch index b = 1 : |D| / batchsize do
4:     Sample a subset Bb of batchsize from the full batch D
5:     θ ← θ − α∇θ (1/|Bb|) Σ_{(x,y)∈Bb} CE(fθ(x), y)
6:   end for
7: end for
8: for epoch t = Tw + 1 : Tend do  (filtering process)
9:   for mini-batch index b = 1 : |D| / batchsize do
10:    Sample a subset Bb of batchsize from the full batch D
11:    for m = 1 : M do
12:      θm = θ + δm,b,t  (adding perturbation)
13:      S_{ε,Bb,θm} := (100−ε)% small-loss examples of fθm within the mini-batch Bb
14:    end for
15:    B′b = ∩_{m=1}^{M} S_{ε,Bb,θm}  (ensemble consensus filtering)
16:    θ ← θ − α∇θ (1/|B′b|) Σ_{(x,y)∈B′b} CE(fθ(x), y)
17:  end for
18: end for

Algorithm 2: LEC-full
Require: noisy dataset D with noise ratio ε%; duration of warming-up Tw; number of networks used for filtering M; perturbation δ
1: Initialize θ randomly
2: for epoch t = 1 : Tw do  (warming-up process)
3:   for mini-batch index b = 1 : |D| / batchsize do
4:     Sample a subset Bb of batchsize from the full batch D
5:     θ ← θ − α∇θ (1/|Bb|) Σ_{(x,y)∈Bb} CE(fθ(x), y)
6:   end for
7: end for
8: for epoch t = Tw + 1 : Tend do  (filtering process)
9:   for m = 1 : M do
10:    θm = θ + δm,t  (adding perturbation)
11:    S_{ε,D,θm} := (100−ε)% small-loss examples of fθm within the full batch D
12:  end for
13:  D′t = ∩_{m=1}^{M} S_{ε,D,θm}  (ensemble consensus filtering)
14:  for mini-batch index b = 1 : |D′t| / batchsize do
15:    Sample a subset B′b of batchsize from D′t
16:    θ ← θ − α∇θ (1/|B′b|) Σ_{(x,y)∈B′b} CE(fθ(x), y)
17:  end for
18: end for

3.3 PERTURBATION TO IDENTIFY NOISY EXAMPLES. We now aim to find a perturbation δ that can be injected to discriminate memorized features from generalized ones. We present three LECs with different perturbations in the following; the pseudocodes can be found in Section A.1.3.
• Network-Ensemble Consensus (LNEC): Inspired by the observation that an ensemble of networks with the same architecture is correlated during generalization and decorrelated during memorization (Morcos et al., 2018), the perturbation δ comes from the difference between M networks. During the warming-up process, the M networks are trained independently. During the filtering process, the M networks are trained on the intersection of their (100−ε)% small-loss examples.
• Self-Ensemble Consensus (LSEC): We focus on the relationship between Morcos et al. (2018) and Lakshminarayanan et al. (2017): network predictions for memorized features are uncertain, while those for generalized features are certain.
Since the uncertainty of predictions can also be captured by multiple stochastic predictions (Gal & Ghahramani, 2016), the perturbation δ comes from the difference between M stochastic predictions of a single network.¹ During the filtering process, the network is trained on the intersection of the (100−ε)% small-loss examples obtained with M stochastic predictions.
• Temporal-Ensemble Consensus (LTEC): Inspired by the observation that, during training, atypical features are more easily forgotten than typical features (Toneva et al., 2018), the perturbation δ comes from the difference between the networks at the current and preceding epochs. During the filtering process, the network is trained on the intersection of the (100−ε)% small-loss examples at the current epoch t and the preceding min(M − 1, t − 1) epochs. We collect the (100−ε)% small-loss examples at the preceding epochs, rather than the network parameters, to reduce memory usage.

¹ As in Gal & Ghahramani (2016), the stochasticity of predictions is caused by stochastic operations such as dropout (Srivastava et al., 2014).
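To illustrate the temporal variant (LTEC), here is a hedged sketch that intersects small-loss index sets over the current and up to M − 1 preceding epochs. Keeping per-epoch loss arrays (or the index sets themselves) in a bounded deque mirrors the memory-saving choice of storing selected examples rather than network parameters; the names and data layout are assumptions, not the authors' code.

```python
from collections import deque
import numpy as np

def ltec_consensus(loss_history, eps):
    """Temporal-ensemble consensus over the epochs retained in loss_history."""
    keep_sets = []
    for losses in loss_history:
        n_keep = int(len(losses) * (100.0 - eps) / 100.0)
        keep_sets.append(set(np.argsort(losses)[:n_keep]))
    return sorted(set.intersection(*keep_sets))   # indices treated as clean this epoch

# A bounded history keeps only the current and (M - 1) preceding epochs:
M, eps = 5, 20.0
loss_history = deque(maxlen=M)
loss_history.append(np.random.rand(1000))         # stand-in for real per-example losses
clean_idx = ltec_consensus(loss_history, eps)
```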
In this paper, the authors propose to identify noisy training examples using ensemble consensus. The authors argue, and demonstrate through numerical studies, that, contrary to some earlier work, training examples with low training loss are not necessarily correctly labeled. Rather, the authors hypothesize that noisy examples are learned via memorization, which is sensitive to perturbations. Thus, the authors propose to identify, and subsequently remove from training, such examples by looking at the loss after small perturbations of the model parameters; examples with consistently low training loss are retained for training. The authors also provide several alternative perturbations, including examining the consensus between an ensemble of networks, between multiple stochastic predictions, or between predictions from prior training epochs. Finally, the authors demonstrate the performance of their procedures using numerical studies.
SP:672b4b380be73c57e2e7fd3d9f7ea8af0d98f6d1
Robust training with ensemble consensus
Since deep neural networks are over-parameterized, they can memorize noisy examples. We address this memorization issue in the presence of label noise. From the fact that deep neural networks cannot generalize to neighborhoods of memorized features, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation. Based on this, we propose a novel training method called Learning with Ensemble Consensus (LEC) that prevents overfitting to noisy examples by removing them based on the consensus of an ensemble of perturbed networks. One of the proposed LECs, LTEC, outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner.
1 INTRODUCTION. Deep neural networks (DNNs) have shown excellent performance (Krizhevsky et al., 2012; He et al., 2016) on visual recognition datasets (Deng et al., 2009). However, it is difficult to obtain high-quality labeled datasets in practice (Wang et al., 2018a). Even worse, DNNs might not learn patterns from the training data in the presence of noisy examples (Zhang et al., 2016). Therefore, there is an increasing demand for robust training methods. In general, DNNs optimized with SGD first learn patterns relevant to clean examples under label noise (Arpit et al., 2017). Based on this, recent studies regard examples that incur small losses on a network that does not overfit noisy examples as clean (Han et al., 2018; Shen & Sanghavi, 2019). However, such small-loss examples could be noisy, especially under a high level of noise. Therefore, sampling trainable examples from a noisy dataset by relying on the small-loss criterion alone might be impractical. To address this, we develop a method to identify noisy examples among small-loss ones, based on two well-known observations: (i) noisy examples are learned via memorization rather than via pattern learning, and (ii) under a certain perturbation, network predictions for memorized features easily fluctuate, while those for generalized features do not. Based on these two observations, we hypothesize that, among small-loss examples, the training losses of noisy examples would increase when a certain perturbation is injected into the network parameters, while those of clean examples would not. This suggests that examples that consistently incur small losses under multiple perturbations can be regarded as clean. This idea rests on an artifact of SGD optimization and is thereby applicable to any architecture optimized with SGD. In this work, we introduce a method of perturbing parameters to distinguish noisy examples from small-loss examples. We then propose a method to robustly train neural networks under label noise, termed learning with ensemble consensus (LEC). In LEC, the network is initially trained on the entire training set for a while and is then trained on the intersection of the small-loss examples of an ensemble of perturbed networks. We present three LECs with different perturbations and evaluate their effectiveness on three benchmark datasets with random label noise (Goldberger & Ben-Reuven, 2016; Ma et al., 2018), open-set noise (Wang et al., 2018b), and semantic noise. Our proposed LEC outperforms existing robust training methods by efficiently removing noisy examples from training batches.
2 RELATED WORK. Generalization of DNNs. Although DNNs are over-parameterized, they have impressive generalization ability (Krizhevsky et al.
, 2012; He et al., 2016). Some studies argue that gradient-based optimization plays an important role in regularizing DNNs (Neyshabur et al., 2014; Zhang et al., 2016). Arpit et al. (2017) show that DNNs optimized with gradient-based methods learn patterns relevant to clean examples in the early stage of training. Since mislabeling reduces the correlation with other training examples, it is likely that noisy examples are learned via memorization. Therefore, we analyze the difference between generalized and memorized features to discriminate clean and noisy examples.
Training DNNs with noisy datasets. Label noise issues can be addressed by reducing the negative impact of noisy examples. One direction is to train with a modified loss function based on the noise distribution. Most studies in this direction estimate the noise distribution prior to training, as it is not accessible in general (Sukhbaatar et al., 2014; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Hendrycks et al., 2018). Another direction is to train with modified labels using the current model prediction (Reed et al., 2014; Ma et al., 2018). Aside from these directions, recent work suggests exploiting small-loss examples (Jiang et al., 2017; Han et al., 2018; Yu et al., 2019; Shen & Sanghavi, 2019) based on the generalization ability of DNNs. However, it is still hard to find clean examples by relying on training losses alone. This study presents a simple method to overcome this problem of the small-loss criterion.
3 ROBUST TRAINING WITH ENSEMBLE CONSENSUS.
3.1 PROBLEM STATEMENT. Suppose that ε% of the examples in a dataset D := D_clean ∪ D_noisy are noisy. Let S_{ε,D,θ} denote the set of (100−ε)% small-loss examples of the network f parameterized by θ out of the examples in D. Since it is generally hard to learn only clean examples, especially on a highly corrupted training set, it is problematic to regard all examples in S_{ε,D,θ} as being clean. To mitigate this, we suggest a simple idea: to find the noisy examples among the examples in S_{ε,D,θ}.
3.2 LEARNING WITH ENSEMBLE CONSENSUS (LEC). Since noisy examples are only weakly correlated with other training examples, they are likely to be learned via memorization. However, DNNs cannot generalize to neighborhoods of memorized features. This means that even if the training losses of noisy examples are small, they can easily be increased under a certain perturbation δ, i.e., for (x, y) ∈ D_noisy,
(x, y) ∈ S_{ε,D,θ} ⇒ (x, y) ∉ S_{ε,D,θ+δ}.
Unlike noisy examples, the network f trained on the entire set D can learn patterns from some clean examples in the early stage of training. Thus, their training losses remain small in the presence of the perturbation δ, i.e., for (x, y) ∈ D_clean,
(x, y) ∈ S_{ε,D,θ} ⇒ (x, y) ∈ S_{ε,D,θ+δ}.
This suggests that noisy examples can be identified from the inconsistency of losses under a certain perturbation δ. Based on this, we regard the examples in the intersection of the (100−ε)% small-loss examples of an ensemble of M networks generated by adding perturbations δ_1, δ_2, ..., δ_M to θ, i.e., ∩_{m=1}^{M} S_{ε,D,θ+δ_m}, as clean. We call this ensemble consensus filtering because examples are selected via ensemble consensus. With this filtering, we develop a training method termed learning with ensemble consensus (LEC), described in Algorithms 1 and 2. Both algorithms consist of warming-up and filtering processes.
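The hypothesis above can be probed directly in code. The sketch below uses an isotropic Gaussian parameter perturbation purely for illustration; the paper's actual perturbations come from network, self-, and temporal ensembles (Section 3.3), and the helper name and sigma value are our own.

```python
import copy

import torch
import torch.nn.functional as F

def loss_shift_under_perturbation(model, x, y, sigma=1e-2):
    """Per-example loss before and after a random parameter perturbation.
    Under the paper's hypothesis, memorized (noisy) examples should show a
    much larger loss increase than generalized (clean) ones."""
    with torch.no_grad():
        base = F.cross_entropy(model(x), y, reduction="none")
        perturbed = copy.deepcopy(model)
        for p in perturbed.parameters():
            p.add_(sigma * torch.randn_like(p))  # an isotropic Gaussian delta
        shifted = F.cross_entropy(perturbed(x), y, reduction="none")
    return base, shifted - base  # large positive shift -> likely mislabeled
```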
The difference between the two algorithms lies in the filtering process. During the filtering process of Algorithm 1, the network is trained on the intersection of the (100−ε)% small-loss examples of the M networks within a mini-batch B, so the number of examples updated at each step varies. We can encourage more stable training, with a fixed number of examples updated at once, as described in Algorithm 2: during its filtering process, we first obtain the intersection of the small-loss examples of the M networks within the full batch D at each epoch, and then sample subsets of size batchsize from this intersection and train on them at each update, as in standard SGD.
Algorithm 1 LEC
Require: noisy dataset D with noise ratio ε%, duration of warming-up T_w, number of networks used for filtering M, perturbation δ
1: Initialize θ randomly
2: for epoch t = 1 : T_w do ▷ Warming-up process
3:   for mini-batch index b = 1 : |D|/batchsize do
4:     Sample a subset B_b of size batchsize from the full batch D
5:     θ ← θ − α ∇_θ (1/|B_b|) Σ_{(x,y)∈B_b} CE(f_θ(x), y)
6:   end for
7: end for
8: for epoch t = T_w + 1 : T_end do ▷ Filtering process
9:   for mini-batch index b = 1 : |D|/batchsize do
10:    Sample a subset B_b of size batchsize from the full batch D
11:    for m = 1 : M do
12:      θ_m = θ + δ_{m,b,t} ▷ Adding perturbation
13:      S_{ε,B_b,θ_m} := (100−ε)% small-loss examples of f_{θ_m} within the mini-batch B_b
14:    end for
15:    B′_b = ∩_{m=1}^{M} S_{ε,B_b,θ_m} ▷ Ensemble consensus filtering
16:    θ ← θ − α ∇_θ (1/|B′_b|) Σ_{(x,y)∈B′_b} CE(f_θ(x), y)
17:  end for
18: end for
Algorithm 2 LEC-full
Require: noisy dataset D with noise ratio ε%, duration of warming-up T_w, number of networks used for filtering M, perturbation δ
1: Initialize θ randomly
2: for epoch t = 1 : T_w do ▷ Warming-up process
3:   for mini-batch index b = 1 : |D|/batchsize do
4:     Sample a subset B_b of size batchsize from the full batch D
5:     θ ← θ − α ∇_θ (1/|B_b|) Σ_{(x,y)∈B_b} CE(f_θ(x), y)
6:   end for
7: end for
8: for epoch t = T_w + 1 : T_end do ▷ Filtering process
9:   for m = 1 : M do
10:    θ_m = θ + δ_{m,t} ▷ Adding perturbation
11:    S_{ε,D,θ_m} := (100−ε)% small-loss examples of f_{θ_m} within the full batch D
12:  end for
13:  D′_t = ∩_{m=1}^{M} S_{ε,D,θ_m} ▷ Ensemble consensus filtering
14:  for mini-batch index b = 1 : |D′_t|/batchsize do
15:    Sample a subset B′_b of size batchsize from D′_t
16:    θ ← θ − α ∇_θ (1/|B′_b|) Σ_{(x,y)∈B′_b} CE(f_θ(x), y)
17:  end for
18: end for
3.3 PERTURBATION TO IDENTIFY NOISY EXAMPLES. Now we aim to find a perturbation δ to be injected to discriminate memorized features from generalized ones. We present three LECs with different perturbations in the following; the pseudocodes can be found in Section A.1.3.
• Network-Ensemble Consensus (LNEC): Inspired by the observation that an ensemble of networks with the same architecture is correlated during generalization and decorrelated during memorization (Morcos et al., 2018), the perturbation δ comes from the difference between M networks. During the warming-up process, the M networks are trained independently. During the filtering process, the M networks are trained on the intersection of the (100−ε)% small-loss examples of the M networks.
• Self-Ensemble Consensus (LSEC): We focus on the relationship between Morcos et al. (2018) and Lakshminarayanan et al. (2017): network predictions for memorized features are uncertain, while those for generalized features are certain.
Since the uncertainty of predictions can also be captured by multiple stochastic predictions (Gal & Ghahramani, 2016), the perturbation δ comes from the difference between M stochastic predictions of a single network.¹ During the filtering process, the network is trained on the intersection of the (100−ε)% small-loss examples obtained with M stochastic predictions.
• Temporal-Ensemble Consensus (LTEC): Inspired by the observation that during training, atypical features are more easily forgotten than typical features (Toneva et al., 2018), the perturbation δ comes from the difference between the networks at the current and preceding epochs. During the filtering process, the network is trained on the intersection of the (100−ε)% small-loss examples at the current epoch t and the preceding min(M − 1, t − 1) epochs. We store the (100−ε)% small-loss examples of the preceding epochs, rather than the network parameters, to reduce memory usage.
¹As in Gal & Ghahramani (2016), the stochasticity of predictions is caused by stochastic operations such as dropout (Srivastava et al., 2014).
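For the LSEC variant, a minimal sketch of the consensus filter over M stochastic forward passes might look as follows; it assumes the network contains dropout layers, and the function name and defaults are our own, not the authors' code.

```python
import torch
import torch.nn.functional as F

def lsec_filter(model, x, y, eps, M=5):
    """LSEC-style consensus within a batch: keep the examples that fall in
    the (100-eps)% small-loss set under each of M stochastic forward passes
    (dropout left active, as in MC-dropout)."""
    model.train()  # keeps dropout stochastic; a full implementation would
                   # put batch-norm layers back into eval mode
    k = max(1, int(round(len(y) * (1.0 - eps / 100.0))))
    keep = None
    with torch.no_grad():
        for _ in range(M):
            losses = F.cross_entropy(model(x), y, reduction="none")
            s = set(torch.topk(losses, k, largest=False).indices.tolist())
            keep = s if keep is None else keep & s
    return sorted(keep)  # batch indices that survive ensemble consensus
```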
This paper proposes a general method for eliminating noisy labels in supervised learning based on the combination of two ideas: outputs of noisy examples are less robust under noise, and noisy labels are less likely to have a low loss. The authors then propose 3 concrete instantiations of the idea, and do a thorough empirical study (including ablations) across multiple architectures, datasets, noise types, and comparing to multiple related methods. The results show pretty convincingly that one of the new methods (LTEC) that uses past networks outputs to build an ensemble performs really well.
SP:672b4b380be73c57e2e7fd3d9f7ea8af0d98f6d1
Deep Evidential Uncertainty
1 INTRODUCTION. Recent advances in deep supervised learning have yielded superhuman levels of performance and precision. While these models empirically generalize well when placed into new test environments, they are often easily fooled by adversarial perturbations (Goodfellow et al., 2014) and have difficulty understanding when their predictions should not be trusted. Today, regression-based neural networks (NNs) are being deployed in safety-critical domains of computer vision (Godard et al., 2017) as well as in robotics and control (Bojarski et al., 2016), where the ability to infer model uncertainty is crucial for eventual wide-scale adoption. Furthermore, precise uncertainty estimates are useful both for human interpretation of confidence and anomaly detection, and for propagating these estimates to other autonomous components of a larger, connected system. Existing approaches to uncertainty estimation are roughly split into two categories: (1) learning aleatoric uncertainty (uncertainty in the data) and (2) epistemic uncertainty (uncertainty in the prediction). While representations for aleatoric uncertainty can be learned directly from data, approaches for estimating epistemic uncertainty focus on placing probabilistic priors over the weights and sampling to obtain a measure of variance. In practice, many challenges arise with this approach, such as the computational expense of sampling during inference, how to pick an appropriate weight prior, or even how to learn such a representation given the prior. Instead, we formulate learning as an evidence acquisition process, where the model can acquire evidence during training in support of its prediction (Sensoy et al., 2018; Malinin & Gales, 2018). Every training example adds support to a learned higher-order, evidential distribution. Sampling from this distribution yields instances of the lower-order likelihood functions from which the data was drawn (cf. Fig. 1). We demonstrate that, by placing priors over our likelihood function, we can learn a grounded representation of epistemic and aleatoric uncertainty without sampling during inference. In summary, this work makes the following contributions: 1. A novel and scalable method for learning representations of epistemic and aleatoric uncertainty, specifically on regression problems, by placing evidential priors over the likelihood; 2. Formulation of a novel evidential regularizer for continuous regression problems, which we show is necessary for expressing lack of evidence on out-of-distribution examples; 3. Evaluation of learned epistemic uncertainty on benchmark regression tasks and comparison against other state-of-the-art uncertainty estimation techniques for neural networks; and 4. Robustness evaluation against out-of-distribution and adversarially perturbed test data.
2 MODELLING UNCERTAINTIES FROM DATA.
2.1 PRELIMINARIES. Consider the following supervised optimization problem: given a dataset D of N paired training examples, (x_1, y_1), ..., (x_N, y_N), we aim to learn a function f, parameterized by a set of weights w, which approximately solves
min_w J(w);  J(w) = (1/N) Σ_{i=1}^{N} L_i(w),  (1)
where L_i(·) denotes a loss function. In this work, we consider deterministic regression problems, which commonly optimize the sum of squared errors, L_i(w) = ½ ‖y_i − f(x_i; w)‖².
In doing so, the model is encouraged to learn the average correct answer for a given input, but it does not explicitly model any underlying noise or uncertainty in the data when making its estimation.
2.2 MAXIMUM LIKELIHOOD ESTIMATION. We can also approach our optimization problem from a maximum likelihood perspective, where we learn model parameters that maximize the likelihood of observing a particular set of training data. In the context of deterministic regression, we assume our targets y_i were drawn i.i.d. from a Gaussian distribution with mean and variance parameters θ = (μ, σ²). In maximum likelihood estimation, we aim to learn a model that infers θ = (μ, σ²) to maximize the likelihood of observing our targets y, given by p(y_i | θ). In practice, we minimize the negative log likelihood by setting
L_i(w) = −log p(y_i | μ, σ²) = ½ log(2πσ²) + (y_i − μ)² / (2σ²).  (2)
In learning the parameters θ, this likelihood function allows us to successfully model the uncertainty in our data, also known as the aleatoric uncertainty. However, our model remains oblivious to the predictive model, or epistemic, uncertainty (Kendall & Gal, 2017). In this paper, we present a novel approach for estimating the evidence in support of network predictions by directly learning both the inferred aleatoric uncertainty and the underlying epistemic uncertainty over its predictions. We achieve this by placing higher-order prior distributions over the learned parameters governing the distribution from which our observations are drawn.
3 EVIDENTIAL UNCERTAINTY FOR REGRESSION.
3.1 PROBLEM SETUP. We consider the problem where our observed targets y_i are drawn i.i.d. from a Gaussian distribution, now with unknown mean and variance (μ, σ²), which we seek to probabilistically estimate. We model this by placing a conjugate prior distribution on (μ, σ²). If we assume our observations are drawn from a Gaussian, this leads to placing a Gaussian prior on our unknown mean and an Inverse-Gamma prior on our unknown variance:
(y_1, ..., y_N) ~ N(μ, σ²),  μ ~ N(γ, σ²λ⁻¹),  σ² ~ Γ⁻¹(α, β),
where Γ(·) is the gamma function, m = (γ, λ, α, β), and γ ∈ ℝ, λ > 0, α > 0, β > 0. Our aim is to estimate the posterior distribution q(μ, σ²) = p(μ, σ² | y_1, ..., y_N). To obtain an approximation of the true posterior, we assume that the estimated distribution can be factorized (Parisi, 1988) such that q(μ, σ²) = q(μ) q(σ²). Thus, our approximation takes the form of the Gaussian conjugate prior, the Normal-Inverse-Gamma (N.I.G.) distribution:
p(μ, σ² | γ, λ, α, β) = (β^α √λ) / (Γ(α) √(2πσ²)) · (1/σ²)^{α+1} · exp{ −(2β + λ(γ − μ)²) / (2σ²) }.  (3)
A popular interpretation of the parameters of a conjugate prior distribution is in terms of "virtual observations" in support of a given property (Jordan, 2009). For example, the mean of a N.I.G. distribution can be interpreted as being estimated from λ virtual observations with sample mean γ, while its variance was estimated from 2α virtual observations with sample mean γ and sum of squared deviations 2β. Following this interpretation, we define the total evidence Φ of our evidential distribution as the sum of all inferred virtual-observation counts: Φ = λ + 2α. Drawing a sample θ_j from the N.I.G. distribution yields a single instance of our likelihood function, namely N(μ_j, σ²_j). Thus, the N.I.G.
hyperparameters (γ, λ, α, β) determine not only the location but also the dispersion concentrations, or uncertainty, associated with our inferred likelihood function. Therefore, we can interpret the N.I.G. distribution as a higher-order, evidential distribution on top of the unknown lower-order likelihood distribution from which observations are drawn. For example, in Fig. 2A we visualize different evidential N.I.G. distributions with varying model parameters. We illustrate that by increasing the evidential parameters (i.e., λ, α) of this distribution, the p.d.f. becomes tightly concentrated about its inferred likelihood function. Considering a single parameter realization of this higher-order distribution (cf. Fig. 2B), we can subsequently sample many lower-order realizations of our likelihood function, as shown in Fig. 2C. In this work, we use neural networks to infer, given an input, the hyperparameters of this higher-order, evidential distribution. This approach presents several distinct advantages compared to prior work. First, our method enables simultaneous learning of the desired regression task, along with aleatoric and epistemic uncertainty estimation built in, by enforcing evidential priors. Second, since the evidential prior is a higher-order N.I.G. distribution, the maximum likelihood Gaussian can be computed analytically from the expected values of the (μ, σ²) parameters, without the need for sampling. Third, we can effectively estimate the epistemic or model uncertainty associated with the network's prediction by simply evaluating the variance of our inferred evidential distribution.
3.2 LEARNING THE EVIDENTIAL DISTRIBUTION. Having formalized the use of an evidential distribution to capture both aleatoric and epistemic uncertainty, we next describe our approach for learning a model (cf. Fig. 2D) to output the hyperparameters of this distribution. For clarity, we structure the learning objective into two distinct parts: (1) acquiring or maximizing model evidence in support of our observations, and (2) minimizing evidence or inflating uncertainty when the prediction is wrong. At a high level, we can think of (1) as a way of fitting our data to the evidential model, while (2) enforces a prior to inflate our uncertainty estimates.
(1) Maximizing the model fit. From Bayesian probability theory, the "model evidence", or marginal likelihood, is defined as the likelihood of an observation y_i given the evidential distribution parameters m, and is computed by marginalizing over the likelihood parameters θ:
p(y_i | m) = p(y_i | θ, m) p(θ | m) / p(θ | y_i, m) = ∫_θ p(y_i | θ, m) p(θ | m) dθ.  (4)
The model evidence is not, in general, straightforward to evaluate, since computing it involves integrating out the dependence on the latent model parameters:
p(y_i | m) = ∫_{σ²=0}^{∞} ∫_{μ=−∞}^{∞} p(y_i | μ, σ²) p(μ, σ² | m) dμ dσ².  (5)
However, by placing a N.I.G. evidential prior on our Gaussian likelihood function, an analytical solution for the model evidence does exist. For computational reasons, we minimize the negative logarithm of the model evidence, L^NLL_i(w); for a complete derivation please refer to Sec. 7.1:
L^NLL_i(w) = −log p(y_i | m) = −log( (Γ(α + ½)/Γ(α)) · 2^{½+α} β^α √(λ / (2π(1+λ))) · (2β + λ(γ − y_i)² / (1 + λ))^{−½−α} ).  (6)
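For reference, Eq. (6) can be rearranged into a numerically friendlier log-gamma form (it is the Student-t model evidence of the N.I.G. prior). The PyTorch sketch below is our own rendering of that rearrangement under the reconstruction above, not the authors' reference code.

```python
import torch

def nig_nll(y, gamma, lam, alpha, beta):
    """Negative log model evidence of Eq. (6), elementwise over a batch
    of tensors. Writing the loss with lgamma and logs avoids overflow in
    beta**alpha for large alpha."""
    omega = 2.0 * beta * (1.0 + lam)
    return (0.5 * torch.log(torch.pi / lam)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(lam * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
```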
Instead of modeling this loss using empirical Bayes, where the objective is to maximize the model evidence, we can alternatively minimize the sum-of-squares (SOS) error between the evidential prior and the data that would be sampled from the associated likelihood. Thus, we define L^SOS_i(w) as
L^SOS_i(w) = E_{θ′∼p(θ|m)} [ E_{y′∼p(y|θ′)} [ ‖y′ − y_i‖²₂ ] ]  (7)
= ∫_{σ²=0}^{∞} ∫_{μ=−∞}^{∞} E_{y′∼p(y|μ,σ²)} [ ‖y′ − y_i‖²₂ ] p(μ, σ² | m) dμ dσ²  (8)
= (Γ(α − ½) / (4 Γ(α) λ √β)) · ( 2β(1 + λ) + (2α − 1) λ (y_i − γ)² ).  (9)
A step-by-step derivation is given in Sec. 7.1. In our experiments, using L^SOS_i(w) resulted in greater training stability and increased performance compared to the L^NLL_i(w) loss. Therefore, L^SOS_i(w) is used in all presented results.
(2) Minimizing evidence on errors. In the first term of our objective above, we outlined a loss function for training a NN to output the parameters of a N.I.G. distribution that fits our observations, either by maximizing the model evidence or by minimizing the sum-of-squares error. Now, we describe how to regularize training by applying a lack-of-evidence prior (i.e., maximum uncertainty). During training, we aim to minimize our evidence (or maximize our uncertainty) everywhere except where we have training data. This can be done by minimizing the KL-divergence between the inferred posterior q(θ) and a prior p(θ). This has been demonstrated with success in the categorical setting, where the uncertainty prior can be set to a uniform Dirichlet (Malinin & Gales, 2018; Sensoy et al., 2018). In the regression setting, however, the KL-divergence between our posterior and a N.I.G. zero-evidence prior (i.e., {α, λ} = 0) is not well defined (Soch & Allefeld, 2016); please refer to Sec. 7.2 for a derivation. Furthermore, this prior needs to be enforced specifically where there is no support from the data. Past works in classification accomplish this by using the ground-truth likelihood classification (i.e., the one-hot encoded labels) to remove the non-misleading evidence. However, in regression, labels are provided as point targets (not ground-truth Gaussian likelihoods). Unlike in classification, it is not possible to penalize evidence everywhere except our single point estimate, as this space is infinite and unbounded. Thus, these previously explored approaches for evidential optimization are not directly applicable. To address both of these shortcomings of past works, now in the regression setting, we formulate a novel evidence regularizer L^R_i based on the error of the i-th prediction:
L^R_i(w) = ‖y_i − E[μ_i]‖_p · Φ = ‖y_i − γ‖_p · (2α + λ),  (10)
where ‖x‖_p denotes the L-p norm of x. The value of p impacts the penalty imposed on the evidence when a wrong prediction is made. For example, p = 2 heavily over-penalizes the evidence on larger errors, whereas p = 1 and p = 0.5 saturate the evidence penalty for larger errors. We found that p = 1 provided the best stability during training and use this value in all presented results. This regularization loss imposes a penalty whenever there is an error in the prediction, and the penalty scales with the total evidence of our inferred posterior. Conversely, large amounts of predicted evidence will not be penalized as long as the prediction is close to the target observation. We provide an ablation analysis to quantitatively demonstrate the added value of this evidential regularizer in Sec. 7.3.2.
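The regularizer of Eq. (10) is a one-liner in code. In the sketch below we read the p-norm of the scalar error as |y − γ|^p, which is our interpretation for scalar targets; the function name is illustrative.

```python
import torch

def nig_regularizer(y, gamma, lam, alpha, p=1):
    """Evidence regularizer of Eq. (10): the prediction error scaled by
    the total evidence Phi = 2*alpha + lam; p = 1 is the paper's choice."""
    return (torch.abs(y - gamma) ** p) * (2.0 * alpha + lam)
```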
The combined loss function employed during training consists of the two loss terms, one maximizing the model fit and one regularizing evidence:
L_i(w) = L^SOS_i(w) + L^R_i(w).  (11)
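Putting the pieces together, a minimal evidential output head and training loss might look as follows. This sketch reuses the `nig_nll` and `nig_regularizer` helpers from above (the paper's reported results use the SOS fit term instead of the NLL), and the softplus parameterization with α shifted above 1 is a common convention we are assuming rather than something specified in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Final layer emitting NIG hyperparameters; softplus keeps lam and
    beta positive and shifts alpha above 1 so E[sigma^2] is finite."""
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)

    def forward(self, h):
        gamma, lam, alpha, beta = self.fc(h).chunk(4, dim=-1)
        return gamma, F.softplus(lam), 1.0 + F.softplus(alpha), F.softplus(beta)

def evidential_loss(y, gamma, lam, alpha, beta):
    # Eq. (11): model-fit term plus evidence regularizer, averaged
    return (nig_nll(y, gamma, lam, alpha, beta)
            + nig_regularizer(y, gamma, lam, alpha)).mean()
```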
This paper proposed deep evidential regression, a method for training neural networks to estimate not only the output but also the associated evidence in support of that output. The main idea follows the evidential deep learning work proposed in (Sensoy et al., 2018), extending it from the classification regime to the regression regime by placing evidential priors over the Gaussian likelihood function and performing type-II maximum likelihood estimation similar to the empirical Bayes method [1,2]. The authors demonstrated that both the epistemic and aleatoric uncertainties can be estimated in one forward pass under the proposed framework, without resorting to multiple passes, and showed favorable uncertainty estimates compared to existing methods. Robustness against out-of-distribution and adversarially perturbed data is illustrated as well.
SP:3225cc5911de7539e93553a7794975ea4358671e
Deep Evidential Uncertainty
1 INTRODUCTION. Recent advances in deep supervised learning have yielded superhuman levels of performance and precision. While these models empirically generalize well when placed into new test environments, they are often easily fooled by adversarial perturbations (Goodfellow et al., 2014) and have difficulty understanding when their predictions should not be trusted. Today, regression-based neural networks (NNs) are being deployed in safety-critical domains of computer vision (Godard et al., 2017) as well as in robotics and control (Bojarski et al., 2016), where the ability to infer model uncertainty is crucial for eventual wide-scale adoption. Furthermore, precise uncertainty estimates are useful both for human interpretation of confidence and anomaly detection, and for propagating these estimates to other autonomous components of a larger, connected system. Existing approaches to uncertainty estimation are roughly split into two categories: (1) learning aleatoric uncertainty (uncertainty in the data) and (2) epistemic uncertainty (uncertainty in the prediction). While representations for aleatoric uncertainty can be learned directly from data, approaches for estimating epistemic uncertainty focus on placing probabilistic priors over the weights and sampling to obtain a measure of variance. In practice, many challenges arise with this approach, such as the computational expense of sampling during inference, how to pick an appropriate weight prior, or even how to learn such a representation given the prior. Instead, we formulate learning as an evidence acquisition process, where the model can acquire evidence during training in support of its prediction (Sensoy et al., 2018; Malinin & Gales, 2018). Every training example adds support to a learned higher-order, evidential distribution. Sampling from this distribution yields instances of the lower-order likelihood functions from which the data was drawn (cf. Fig. 1). We demonstrate that, by placing priors over our likelihood function, we can learn a grounded representation of epistemic and aleatoric uncertainty without sampling during inference. In summary, this work makes the following contributions: 1. A novel and scalable method for learning representations of epistemic and aleatoric uncertainty, specifically on regression problems, by placing evidential priors over the likelihood; 2. Formulation of a novel evidential regularizer for continuous regression problems, which we show is necessary for expressing lack of evidence on out-of-distribution examples; 3. Evaluation of learned epistemic uncertainty on benchmark regression tasks and comparison against other state-of-the-art uncertainty estimation techniques for neural networks; and 4. Robustness evaluation against out-of-distribution and adversarially perturbed test data.
2 MODELLING UNCERTAINTIES FROM DATA.
2.1 PRELIMINARIES. Consider the following supervised optimization problem: given a dataset D of N paired training examples, (x_1, y_1), ..., (x_N, y_N), we aim to learn a function f, parameterized by a set of weights w, which approximately solves
min_w J(w);  J(w) = (1/N) Σ_{i=1}^{N} L_i(w),  (1)
where L_i(·) denotes a loss function. In this work, we consider deterministic regression problems, which commonly optimize the sum of squared errors, L_i(w) = ½ ‖y_i − f(x_i; w)‖².
In doing so, the model is encouraged to learn the average correct answer for a given input, but it does not explicitly model any underlying noise or uncertainty in the data when making its estimation.
2.2 MAXIMUM LIKELIHOOD ESTIMATION. We can also approach our optimization problem from a maximum likelihood perspective, where we learn model parameters that maximize the likelihood of observing a particular set of training data. In the context of deterministic regression, we assume our targets y_i were drawn i.i.d. from a Gaussian distribution with mean and variance parameters θ = (μ, σ²). In maximum likelihood estimation, we aim to learn a model that infers θ = (μ, σ²) to maximize the likelihood of observing our targets y, given by p(y_i | θ). In practice, we minimize the negative log likelihood by setting
L_i(w) = −log p(y_i | μ, σ²) = ½ log(2πσ²) + (y_i − μ)² / (2σ²).  (2)
In learning the parameters θ, this likelihood function allows us to successfully model the uncertainty in our data, also known as the aleatoric uncertainty. However, our model remains oblivious to the predictive model, or epistemic, uncertainty (Kendall & Gal, 2017). In this paper, we present a novel approach for estimating the evidence in support of network predictions by directly learning both the inferred aleatoric uncertainty and the underlying epistemic uncertainty over its predictions. We achieve this by placing higher-order prior distributions over the learned parameters governing the distribution from which our observations are drawn.
3 EVIDENTIAL UNCERTAINTY FOR REGRESSION.
3.1 PROBLEM SETUP. We consider the problem where our observed targets y_i are drawn i.i.d. from a Gaussian distribution, now with unknown mean and variance (μ, σ²), which we seek to probabilistically estimate. We model this by placing a conjugate prior distribution on (μ, σ²). If we assume our observations are drawn from a Gaussian, this leads to placing a Gaussian prior on our unknown mean and an Inverse-Gamma prior on our unknown variance:
(y_1, ..., y_N) ~ N(μ, σ²),  μ ~ N(γ, σ²λ⁻¹),  σ² ~ Γ⁻¹(α, β),
where Γ(·) is the gamma function, m = (γ, λ, α, β), and γ ∈ ℝ, λ > 0, α > 0, β > 0. Our aim is to estimate the posterior distribution q(μ, σ²) = p(μ, σ² | y_1, ..., y_N). To obtain an approximation of the true posterior, we assume that the estimated distribution can be factorized (Parisi, 1988) such that q(μ, σ²) = q(μ) q(σ²). Thus, our approximation takes the form of the Gaussian conjugate prior, the Normal-Inverse-Gamma (N.I.G.) distribution:
p(μ, σ² | γ, λ, α, β) = (β^α √λ) / (Γ(α) √(2πσ²)) · (1/σ²)^{α+1} · exp{ −(2β + λ(γ − μ)²) / (2σ²) }.  (3)
A popular interpretation of the parameters of a conjugate prior distribution is in terms of "virtual observations" in support of a given property (Jordan, 2009). For example, the mean of a N.I.G. distribution can be interpreted as being estimated from λ virtual observations with sample mean γ, while its variance was estimated from 2α virtual observations with sample mean γ and sum of squared deviations 2β. Following this interpretation, we define the total evidence Φ of our evidential distribution as the sum of all inferred virtual-observation counts: Φ = λ + 2α. Drawing a sample θ_j from the N.I.G. distribution yields a single instance of our likelihood function, namely N(μ_j, σ²_j). Thus, the N.I.G.
hyperparameters (γ, λ, α, β) determine not only the location but also the dispersion concentrations, or uncertainty, associated with our inferred likelihood function. Therefore, we can interpret the N.I.G. distribution as a higher-order, evidential distribution on top of the unknown lower-order likelihood distribution from which observations are drawn. For example, in Fig. 2A we visualize different evidential N.I.G. distributions with varying model parameters. We illustrate that by increasing the evidential parameters (i.e., λ, α) of this distribution, the p.d.f. becomes tightly concentrated about its inferred likelihood function. Considering a single parameter realization of this higher-order distribution (cf. Fig. 2B), we can subsequently sample many lower-order realizations of our likelihood function, as shown in Fig. 2C. In this work, we use neural networks to infer, given an input, the hyperparameters of this higher-order, evidential distribution. This approach presents several distinct advantages compared to prior work. First, our method enables simultaneous learning of the desired regression task, along with aleatoric and epistemic uncertainty estimation built in, by enforcing evidential priors. Second, since the evidential prior is a higher-order N.I.G. distribution, the maximum likelihood Gaussian can be computed analytically from the expected values of the (μ, σ²) parameters, without the need for sampling. Third, we can effectively estimate the epistemic or model uncertainty associated with the network's prediction by simply evaluating the variance of our inferred evidential distribution.
3.2 LEARNING THE EVIDENTIAL DISTRIBUTION. Having formalized the use of an evidential distribution to capture both aleatoric and epistemic uncertainty, we next describe our approach for learning a model (cf. Fig. 2D) to output the hyperparameters of this distribution. For clarity, we structure the learning objective into two distinct parts: (1) acquiring or maximizing model evidence in support of our observations, and (2) minimizing evidence or inflating uncertainty when the prediction is wrong. At a high level, we can think of (1) as a way of fitting our data to the evidential model, while (2) enforces a prior to inflate our uncertainty estimates.
(1) Maximizing the model fit. From Bayesian probability theory, the "model evidence", or marginal likelihood, is defined as the likelihood of an observation y_i given the evidential distribution parameters m, and is computed by marginalizing over the likelihood parameters θ:
p(y_i | m) = p(y_i | θ, m) p(θ | m) / p(θ | y_i, m) = ∫_θ p(y_i | θ, m) p(θ | m) dθ.  (4)
The model evidence is not, in general, straightforward to evaluate, since computing it involves integrating out the dependence on the latent model parameters:
p(y_i | m) = ∫_{σ²=0}^{∞} ∫_{μ=−∞}^{∞} p(y_i | μ, σ²) p(μ, σ² | m) dμ dσ².  (5)
However, by placing a N.I.G. evidential prior on our Gaussian likelihood function, an analytical solution for the model evidence does exist. For computational reasons, we minimize the negative logarithm of the model evidence, L^NLL_i(w); for a complete derivation please refer to Sec. 7.1:
L^NLL_i(w) = −log p(y_i | m) = −log( (Γ(α + ½)/Γ(α)) · 2^{½+α} β^α √(λ / (2π(1+λ))) · (2β + λ(γ − y_i)² / (1 + λ))^{−½−α} ).  (6)
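The sampling story of Fig. 2B-C is easy to reproduce numerically. The sketch below draws (μ, σ²) pairs from the N.I.G. prior of Eq. (3) using numpy; the parameter values are illustrative, chosen only to contrast low and high evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nig(gamma, lam, alpha, beta, n=1000):
    """Draw (mu, sigma^2) pairs from the N.I.G. prior of Eq. (3):
    sigma^2 ~ InvGamma(alpha, beta), then mu | sigma^2 ~ N(gamma, sigma^2/lam)."""
    sigma2 = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n)  # inverse-gamma
    mu = rng.normal(loc=gamma, scale=np.sqrt(sigma2 / lam))
    return mu, sigma2

# As in Fig. 2A-C: larger evidential parameters (lam, alpha) concentrate the
# sampled likelihood functions N(mu, sigma^2) around the inferred one.
mu_lo, s2_lo = sample_nig(0.0, lam=1.0, alpha=2.0, beta=1.0)
mu_hi, s2_hi = sample_nig(0.0, lam=10.0, alpha=20.0, beta=10.0)
```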
Instead of modeling this loss using empirical Bayes, where the objective is to maximize the model evidence, we can alternatively minimize the sum-of-squares (SOS) error between the evidential prior and the data that would be sampled from the associated likelihood. Thus, we define L^SOS_i(w) as
L^SOS_i(w) = E_{θ′∼p(θ|m)} [ E_{y′∼p(y|θ′)} [ ‖y′ − y_i‖²₂ ] ]  (7)
= ∫_{σ²=0}^{∞} ∫_{μ=−∞}^{∞} E_{y′∼p(y|μ,σ²)} [ ‖y′ − y_i‖²₂ ] p(μ, σ² | m) dμ dσ²  (8)
= (Γ(α − ½) / (4 Γ(α) λ √β)) · ( 2β(1 + λ) + (2α − 1) λ (y_i − γ)² ).  (9)
A step-by-step derivation is given in Sec. 7.1. In our experiments, using L^SOS_i(w) resulted in greater training stability and increased performance compared to the L^NLL_i(w) loss. Therefore, L^SOS_i(w) is used in all presented results.
(2) Minimizing evidence on errors. In the first term of our objective above, we outlined a loss function for training a NN to output the parameters of a N.I.G. distribution that fits our observations, either by maximizing the model evidence or by minimizing the sum-of-squares error. Now, we describe how to regularize training by applying a lack-of-evidence prior (i.e., maximum uncertainty). During training, we aim to minimize our evidence (or maximize our uncertainty) everywhere except where we have training data. This can be done by minimizing the KL-divergence between the inferred posterior q(θ) and a prior p(θ). This has been demonstrated with success in the categorical setting, where the uncertainty prior can be set to a uniform Dirichlet (Malinin & Gales, 2018; Sensoy et al., 2018). In the regression setting, however, the KL-divergence between our posterior and a N.I.G. zero-evidence prior (i.e., {α, λ} = 0) is not well defined (Soch & Allefeld, 2016); please refer to Sec. 7.2 for a derivation. Furthermore, this prior needs to be enforced specifically where there is no support from the data. Past works in classification accomplish this by using the ground-truth likelihood classification (i.e., the one-hot encoded labels) to remove the non-misleading evidence. However, in regression, labels are provided as point targets (not ground-truth Gaussian likelihoods). Unlike in classification, it is not possible to penalize evidence everywhere except our single point estimate, as this space is infinite and unbounded. Thus, these previously explored approaches for evidential optimization are not directly applicable. To address both of these shortcomings of past works, now in the regression setting, we formulate a novel evidence regularizer L^R_i based on the error of the i-th prediction:
L^R_i(w) = ‖y_i − E[μ_i]‖_p · Φ = ‖y_i − γ‖_p · (2α + λ),  (10)
where ‖x‖_p denotes the L-p norm of x. The value of p impacts the penalty imposed on the evidence when a wrong prediction is made. For example, p = 2 heavily over-penalizes the evidence on larger errors, whereas p = 1 and p = 0.5 saturate the evidence penalty for larger errors. We found that p = 1 provided the best stability during training and use this value in all presented results. This regularization loss imposes a penalty whenever there is an error in the prediction, and the penalty scales with the total evidence of our inferred posterior. Conversely, large amounts of predicted evidence will not be penalized as long as the prediction is close to the target observation. We provide an ablation analysis to quantitatively demonstrate the added value of this evidential regularizer in Sec. 7.3.2.
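The SOS fit term used for all reported results translates directly into code. How the Γ(α − ½) / (4 Γ(α) λ √β) prefactor groups is our reading of Eq. (9) as printed, so treat this sketch as an assumption-laden illustration rather than a definitive implementation.

```python
import torch

def nig_sos(y, gamma, lam, alpha, beta):
    """Sum-of-squares fit term of Eq. (9), following the printed grouping;
    requires alpha > 1/2 so that Gamma(alpha - 1/2) is finite."""
    coef = torch.exp(torch.lgamma(alpha - 0.5) - torch.lgamma(alpha)) \
           / (4.0 * lam * torch.sqrt(beta))
    return coef * (2.0 * beta * (1.0 + lam)
                   + (2.0 * alpha - 1.0) * lam * (y - gamma) ** 2)
```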
The combined loss function employed during training consists of the two loss terms, one maximizing the model fit and one regularizing evidence:
L_i(w) = L^SOS_i(w) + L^R_i(w).  (11)
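At inference time, the uncertainty read-outs follow from the moments of the N.I.G. distribution. The closed forms below are standard N.I.G. moments that the excerpt implies but does not print (they require α > 1), so this is a sketch under that assumption.

```python
def nig_uncertainties(gamma, lam, alpha, beta):
    """Read-outs from the evidential head using standard N.I.G. moments:
    prediction E[mu] = gamma, aleatoric uncertainty E[sigma^2] =
    beta / (alpha - 1), epistemic uncertainty Var[mu] =
    beta / (lam * (alpha - 1)). All in a single forward pass."""
    return gamma, beta / (alpha - 1.0), beta / (lam * (alpha - 1.0))
```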
This paper proposes a novel approach to estimate the confidence of predictions in a regression setting. The approach starts from the standard modelling assuming iid samples from a Gaussian distribution with unknown mean and variances and places evidential priors (relying on the Dempster-Shafer Theory of Evidence [1] /subjective logic [2]) on those quantities to model uncertainty in a deterministic fashion, i.e. without relying on sampling as most previous approaches. This opens the door to online applications with fully integrated uncertainty estimates.
SP:3225cc5911de7539e93553a7794975ea4358671e
Adaptive Loss Scaling for Mixed Precision Training
1 INTRODUCTION . Training deep neural networks ( DNNs ) is well-known to be time and energy consuming , motivating the development of new methods and hardware to make training more efficient . One way to improve training efficiency is to use numerical representations that are more hardware-friendly . This is the reason that the IEEE 754 32-bit single-precision floating point format ( FP32 ) is more widely used for training DNNs than the more precise double precision format ( FP64 ) , which is commonly used in other areas of high-performance computing . In an effort to further improve hardware efficiency , there has been increasing interest in using data types with even lower precision than FP32 for training ( Micikevicius et al. , 2018 ; Kuchaiev et al. , 2018 ; Wang et al. , 2018 ; Kalamkar et al. , 2019 ; Mellempudi et al. , 2019 ; Sakr et al. , 2019 ) . Of these , the IEEE half-precision floating-point ( FP16 ) format is already well supported by modern GPU vendors ( Choquette et al. , 2018 ) . Using FP16 for training can reduce the memory footprint by half compared to FP32 and significantly improve the runtime performance and power efficiency . Nevertheless , numerical issues like overflow , underflow , and rounding errors frequently occur when training in low precision only . Recent works propose various improvements , of which mixed precision training ( MPT ) ( Micikevicius et al. , 2018 ) is the state-of-the-art . Its core idea is to use FP16 for the compute-intensive yet precision-insensitive operations , such as matrix multiplication , for computational efficiency , while using FP32 for the operations that require high precision , such as batch normalization ( Ioffe & Szegedy , 2015 ) and gradient update accumulation . Activations and gradients , which largely contribute to memory consumption , are stored in FP16 , while the weights are stored in FP32 for more accurate accumulation of gradient updates . Even though MPT seems promising and has wide support from both hardware and software frameworks , it still suffers from reliability issues , mainly due to the more limited dynamic range of FP16 being unable to adequately cover possible gradient values during training . The most common issue is for small gradients to fall into the underflow gap and become zero , which makes training less effective . Loss scaling ( Micikevicius et al. , 2018 ; Kuchaiev et al. , 2018 ; Mellempudi et al. , 2019 ) addresses the range limitation in FP16 by introducing a hyperparameter α to scale the loss value before the start of the backward pass so that the computed ( scaled ) gradients can then be properly represented in FP16 without significant underflow . For an appropriate choice of α , loss scaling can achieve state of the art results that are competitive with regular FP32 training . Unfortunately , there is no single value of α that will work in arbitrary models , and so it often needs to be tuned per model . Its value must be chosen large enough to prevent underflow issues from affecting training accuracy . However , if α is chosen too large , it could amplify rounding errors caused by swamping ( Higham , 1993 ) or even result in overflow . This observed sensitivity to the particular choice of loss scale is also reported by Mellempudi et al . ( 2019 ) , who find that different values can lead to very different ResNet-50 MPT convergence behavior . 
Furthermore, the data distribution of gradients can vary both between layers and between iterations (Figure 1), which implies that a single scale is insufficient. For instance, gradients closer to the input require a higher loss scale, which may cause overflow or severe rounding errors if the same value were used in layers closer to the output. Including the time spent tuning α, the total training time of MPT can even exceed that of regular FP32 training. We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use. We hope that this will help to better utilize existing hardware with support for fast FP16 operations. Our method improves the usability of MPT compared to existing methods by removing the need to tune a model-specific loss scale hyperparameter, while retaining (and in some cases surpassing) the accuracy of regular FP32 training. We achieve this by introducing layer-wise loss scale values that are automatically computed and dynamically updated during training to deal with underflow more effectively than existing methods. Experimental results on several examples show that MPT with adaptive loss scaling can achieve the best model accuracy and the shortest overall training time, especially when training deep models on large datasets.
2 BACKGROUND.
2.1 PRELIMINARIES. As mentioned above, MPT (Micikevicius et al., 2018) uses FP16 for storing the activations and gradients and for the most compute-intensive tasks, while FP32 is used only where increased precision is required. FP16 has three fewer exponent bits than FP32, limiting its dynamic range to magnitudes between u_min = 2^{-24} and u_max = 65504. In practice, the gradients often have a larger range than this, resulting in numerical issues when using MPT. In FP16, if the absolute value of a gradient |g| is smaller than u_min, it becomes 0; if it is larger than u_max, it becomes infinite. Moreover, even if a value lies in [u_min, u_max), the closer it comes to either bound, the less accurate its FP16 representation is in terms of absolute rounding error; e.g., 1024.1 is rounded to 1024. Underflow is what motivates loss scaling, while overflow and rounding error are what loss scaling must be careful about.
Figure 2: Comparison between the standard backpropagation algorithm and the loss-scaled one. Each layer has a single output in this formulation. Output finds the output layer index, GradW calculates the gradient update with respect to the weights, and Backprop calculates the activation gradient given the layer type OP and the input gradient.
Algorithm 1: Standard backpropagation algorithm.
δ_{N+1} ← initial error gradient
for i ← layer indices in reversed topological order do
  j ← Output(i)
  W_i ← W_i + GradW(δ_j)
  δ_i ← Backprop(OP(i), δ_j)
end for
Algorithm 2: Standard loss scaling algorithm. α is the loss scale.
δ_{N+1} ← initial error gradient
δ_{N+1} ← α δ_{N+1}
for i ← layer indices in reversed topological order do
  j ← Output(i)
  W_i ← W_i + GradW(δ_j)/α
  δ_i ← Backprop(OP(i), δ_j)
end for
Figure 2 shows the basic loss scaling algorithm (Algorithm 2) compared to standard backpropagation without loss scaling (Algorithm 1). Note that they differ only in that in Algorithm 2, the initial error gradients δ_{N+1} from the output layer are scaled by α before the start of the backward pass, and the weight gradient update GradW is then unscaled by the same α just before the weight update.
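A minimal sketch of Algorithm 2 in PyTorch-style code follows; the helper name and the default α = 1024 (a typical static value) are our assumptions, and the FP16/FP32 tensor placement of a full MPT setup is omitted.

```python
import torch

def loss_scaled_step(model, loss, optimizer, alpha=1024.0):
    """One optimizer step following Algorithm 2: scale the loss by alpha so
    the backward pass avoids FP16 underflow, then unscale the gradients
    just before the weight update."""
    optimizer.zero_grad()
    (alpha * loss).backward()          # scaled gradients flow backward
    for p in model.parameters():
        if p.grad is not None:
            p.grad.div_(alpha)         # the GradW(...) / alpha step
    optimizer.step()
```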
Recall that α should be chosen large enough to prevent underflow issues from affecting training, while also being small enough to prevent overflow. Even when kept within this range, using a larger value than necessary can introduce more absolute rounding error, as mentioned above. Also, since loss scaling amplifies the ratio between the largest and smallest elements within each gradient, swamping (Higham, 1993), the phenomenon that summing small floating-point values with much larger ones is inaccurate, becomes more likely and may hinder training (Wang et al., 2018).
2.2 RELATED WORK. Many recent works focus on reducing rounding error to improve training performance. Wang et al. (2018) devise a chunk-based accumulation mechanism to mitigate the swamping issue. Sakr et al. (2019) improve the solution to the same problem by finding a lower precision for accumulation through variance analysis. Alternatively, Hoffer et al. (2018) identify the numerical issues caused by batch normalization and propose to replace it with a more numerically stable and efficient alternative. These methods are orthogonal to loss scaling, and we plan to study the effect of applying them together with adaptive loss scaling as future work. Loss scaling, which aims to improve mixed precision training by reducing the underflow rate in the computed gradients, can be traced back to Micikevicius et al. (2018). They originally suggest choosing a constant loss scale either empirically, or as a factor that cannot scale the maximal absolute gradient value into overflow. Kuchaiev et al. (2018) propose two improved versions: one is called backoff, which simply makes the loss scale smaller whenever a numerical error is encountered during training; the other is logmax, which models the maximal absolute gradient value across training iterations with a log-normal distribution in order to estimate the proper loss scale value for the next iteration. We argue that these solutions are still not ideal: backoff is simply trial-and-error and can waste training workload, while logmax is risky to use when there are not enough gradient values to fit the log-normal distribution, or when that assumption does not apply. Mellempudi et al. (2019) further study the effect of the backoff method for 8-bit floating point.
3 ADAPTIVE LOSS SCALING. As a preview, Figure 3 shows a concrete example of our adaptive loss scaling approach in the backward pass of a 3-layer multi-layer perceptron (MLP). After the forward pass has completed, starting from the rightmost node, the gradients δ are first propagated from the loss L and are then scaled by a scalar α_4 before being consumed by the last linear layer. The weight gradients for this layer are then scaled by 1/α_4 just before the weight update for W_3, in order to prevent the particular choice of α_4 from affecting the computed gradient magnitudes. It is at this point that our approach begins to differ from the standard loss scaling method. In the standard method, the same α_4 would be used for all layers in the network. In our method, however, each layer i calculates its own local loss scale value β_i, based on the statistics of its output gradients and weights in the current iteration, in order to minimize underflow in its computed input gradients. This β_i is then used to scale the weights W_i before computing the scaled input activation gradients for layer i.
Since the scaling effects of these local loss scales accumulate, when unscaling gradients for the weight update we use the product of all downstream scale values, i.e., α_i = α_4 ∏_{j=i+1}^{3} β_j in this 3-layer example. Thus, our approach attempts to minimize underflow in every layer simultaneously through the use of layer-local loss scales β_i, which are computed automatically from the current layer statistics. Compared to the standard method, this removes the need to perform model-specific hyperparameter tuning and enables layer-wise loss scaling.
3.1 LOSS SCALED BACKPROPAGATION. We use a 2-tuple notation to denote the entity propagated from layer i: ⟨α_i, δ_i⟩, in which α_i is the loss scale value for layer i and δ_i is the gradient that has been scaled by α_i. To be more specific about this notation, we can take an N-layer MLP as an example (see Figure 3 for the notation). In this case, layer i takes in ⟨α_{i+1}, δ_{i+1}⟩, updates its weights by (y^T_{i−1} δ_{i+1}) / α_{i+1}, and produces ⟨α_i, δ_i⟩ for the previous layer i − 1. We elaborate on how α_i is calculated in the following sections.
Algorithm 3: Backpropagation algorithm with adaptive loss scaling, assuming each layer has a single output (Section 3.2.2 shows how multiple outputs are handled).
⟨α_{N+1}, δ_{N+1}⟩ ← initial loss scale, and error gradient scaled by α_{N+1}
for i ← layer ID in a reversed topological order do
  j ← GetLayerOutput(i)
  W_i ← W_i + GetWeightGradient(δ_j) / α_j
  β_i ← GetLossScale(OP(i), ⟨α_j, δ_j⟩)
  ⟨α_i, δ_i⟩ ← ⟨α_j β_i, Backprop(OP(i), β_i δ_j)⟩
end for
Algorithm 3 shows the pseudocode for adaptive loss scaled backpropagation for the case where each layer has a single output (we describe how to handle the multiple-output case in Section 3.2.2):
1. We start with the error gradients δ_{N+1} computed from the output loss value for the last layer N + 1. We may optionally scale this gradient by α_{N+1}; normally we keep it at 1.
2. Visiting each layer i in a reversed topological order of the computational graph, we retrieve the tuple ⟨α_j, δ_j⟩ propagated to it from the next downstream layer, which represents the scaled loss for layer i's output. We calculate a local loss scale value β_i, which will be used to scale δ_j before we calculate the activation gradients for the previous layer.
3. We use δ_j and other cached inputs (omitted) to compute the gradients for W_i. However, since these gradients have been scaled, we must unscale them using α_j before performing the weight update.
4. Since β_i contributes to the magnitude of the gradient δ_i, we calculate the loss scale value α_i to be passed to the previous layer as α_j β_i.
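The paper derives β_i from layer statistics in Section 3.2, which is not reproduced in this excerpt; the sketch below is therefore only an illustrative stand-in for GetLossScale in Algorithm 3, using a heuristic power-of-two rule of our own devising.

```python
import math

import torch

U_MIN = 2.0 ** -24   # smallest positive FP16 magnitude (subnormal)

def get_loss_scale(grad_out, weight, target=2.0 ** -14):
    """Illustrative stand-in for GetLossScale: pick a power-of-two beta
    that lifts the typical magnitude of the incoming (scaled) gradients,
    as attenuated by the layer's weights, up to a target comfortably
    above U_MIN. This heuristic is ours, not the authors' formula."""
    mag = grad_out.abs().float().mean() * weight.abs().float().mean()
    if mag.item() <= 0.0:
        return 1.0
    return 2.0 ** max(0, math.ceil(math.log2(target / mag.item())))
```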
The authors propose an adaptive loss scaling method applied during the backpropagation stage of mixed precision training to reduce underflow. In contrast to previous work, where the loss scale is chosen by hand and must stay the same across all layers, the authors state that their method can decide the scale value layer by layer automatically, reducing underflow in low-precision settings.
SP:4da7ae6cfcf4cab7581bab283e12354b60d3b2dd
Adaptive Loss Scaling for Mixed Precision Training
1 INTRODUCTION . Training deep neural networks ( DNNs ) is well-known to be time and energy consuming , motivating the development of new methods and hardware to make training more efficient . One way to improve training efficiency is to use numerical representations that are more hardware-friendly . This is the reason that the IEEE 754 32-bit single-precision floating point format ( FP32 ) is more widely used for training DNNs than the more precise double precision format ( FP64 ) , which is commonly used in other areas of high-performance computing . In an effort to further improve hardware efficiency , there has been increasing interest in using data types with even lower precision than FP32 for training ( Micikevicius et al. , 2018 ; Kuchaiev et al. , 2018 ; Wang et al. , 2018 ; Kalamkar et al. , 2019 ; Mellempudi et al. , 2019 ; Sakr et al. , 2019 ) . Of these , the IEEE half-precision floating-point ( FP16 ) format is already well supported by modern GPU vendors ( Choquette et al. , 2018 ) . Using FP16 for training can reduce the memory footprint by half compared to FP32 and significantly improve the runtime performance and power efficiency . Nevertheless , numerical issues like overflow , underflow , and rounding errors frequently occur when training in low precision only . Recent works propose various improvements , of which mixed precision training ( MPT ) ( Micikevicius et al. , 2018 ) is the state-of-the-art . Its core idea is to use FP16 for the compute-intensive yet precision-insensitive operations , such as matrix multiplication , for computational efficiency , while using FP32 for the operations that require high precision , such as batch normalization ( Ioffe & Szegedy , 2015 ) and gradient update accumulation . Activations and gradients , which largely contribute to memory consumption , are stored in FP16 , while the weights are stored in FP32 for more accurate accumulation of gradient updates . Even though MPT seems promising and has wide support from both hardware and software frameworks , it still suffers from reliability issues , mainly due to the more limited dynamic range of FP16 being unable to adequately cover possible gradient values during training . The most common issue is for small gradients to fall into the underflow gap and become zero , which makes training less effective . Loss scaling ( Micikevicius et al. , 2018 ; Kuchaiev et al. , 2018 ; Mellempudi et al. , 2019 ) addresses the range limitation in FP16 by introducing a hyperparameter α to scale the loss value before the start of the backward pass so that the computed ( scaled ) gradients can then be properly represented in FP16 without significant underflow . For an appropriate choice of α , loss scaling can achieve state of the art results that are competitive with regular FP32 training . Unfortunately , there is no single value of α that will work in arbitrary models , and so it often needs to be tuned per model . Its value must be chosen large enough to prevent underflow issues from affecting training accuracy . However , if α is chosen too large , it could amplify rounding errors caused by swamping ( Higham , 1993 ) or even result in overflow . This observed sensitivity to the particular choice of loss scale is also reported by Mellempudi et al . ( 2019 ) , who find that different values can lead to very different ResNet-50 MPT convergence behavior . 
Furthermore, the data distribution of gradients can vary both between layers and between iterations (Figure 1), which implies that a single scale is insufficient. For instance, gradients closer to the input require a higher loss scale, which may cause overflow or severe rounding errors if the same value were used in layers closer to the output. Including the time spent tuning α, the total training time of MPT can even exceed regular FP32 training. We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use. We hope that this will help to better utilize existing hardware with support for fast FP16 operations. Our method improves the usability of MPT compared to existing methods by removing the need to tune a model-specific loss scale hyperparameter, while retaining (and in some cases surpassing) the accuracy of regular FP32 training. We achieve this by introducing layer-wise loss scale values which are automatically computed and dynamically updated during training to deal with underflow more effectively than existing methods. Experimental results on several examples show that MPT with adaptive loss scaling can achieve the best model accuracy and the shortest overall training time, especially when training deep models on large datasets.

2 BACKGROUND . 2.1 PRELIMINARY . As mentioned above, MPT (Micikevicius et al., 2018) uses FP16 for storing the activations and gradients and for the most compute-intensive tasks, while FP32 is used only where increased precision is required. FP16 has three fewer exponent bits than FP32, limiting its dynamic range to magnitudes between u_min = 2^{-24} and u_max = 65504. In practice, the gradients often have a larger range than this, resulting in numerical issues when using MPT. In FP16, if the absolute value of a gradient |g| is smaller than u_min, it becomes 0; if it is larger than u_max, it becomes infinite. Also, even if a value lies in [u_min, u_max), the closer it comes to either bound, the less accurate its FP16 representation is in terms of absolute rounding error; e.g., 1024.1 is rounded to 1024. Underflow is what motivates loss scaling, while overflow and rounding error are what loss scaling must be careful about.

Figure 2: Comparison between the standard backpropagation algorithm and the loss-scaled one. Each layer has a single output in this formulation. Output finds the output layer index, GradW calculates the gradient update with respect to the weights, and Backprop calculates the activation gradient given the layer type OP and the input gradient.

Algorithm 1 (standard backpropagation):
  δ_{N+1} ← initial error gradient
  for i ← layer indices in reversed topological order do
    j ← Output(i)
    W_i ← W_i + GradW(δ_j)
    δ_i ← Backprop(OP(i), δ_j)
  end

Algorithm 2 (standard loss scaling; α is the loss scale):
  δ_{N+1} ← initial error gradient
  δ_{N+1} ← α δ_{N+1}
  for i ← layer indices in reversed topological order do
    j ← Output(i)
    W_i ← W_i + GradW(δ_j) / α
    δ_i ← Backprop(OP(i), δ_j)
  end

Figure 2 shows the basic loss scaling algorithm (Algorithm 2) compared to standard backpropagation without loss scaling (Algorithm 1). Note that they differ only in that, in Algorithm 2, the initial error gradients δ_{N+1} from the output layer are scaled by α before the start of the backward pass, and the weight gradient update GradW is then unscaled by the same α just before the weight update.
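As a minimal runnable sketch of Algorithms 1 and 2, consider a stack of linear layers. The Linear class and learning rate below are our own illustrative assumptions; the α bookkeeping follows the pseudocode.

```python
# Minimal sketch of Algorithms 1/2 for a chain of linear layers. The Linear
# class and learning rate are illustrative assumptions; the alpha bookkeeping
# mirrors the pseudocode: scale delta once at the output, unscale GradW by the
# same alpha just before each weight update.
import numpy as np

class Linear:
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)).astype(np.float32)
        self.x = None                       # input cached by the forward pass
    def forward(self, x):
        self.x = x
        return x @ self.W
    def grad_w(self, delta):                # GradW in the pseudocode
        return self.x.T @ delta
    def backprop(self, delta):              # activation gradient for layer i-1
        return delta @ self.W.T

def loss_scaled_backprop(layers, error_grad, alpha, lr=0.01):
    delta = alpha * error_grad              # Algorithm 2: scale at the output
    for layer in reversed(layers):
        layer.W -= lr * layer.grad_w(delta) / alpha   # unscale before update
        delta = layer.backprop(delta)       # stays scaled by alpha throughout
```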
Recall that α should be chosen large enough to prevent underflow from affecting training, while also being small enough to prevent overflow. Even within this range, using a larger value than necessary introduces more absolute rounding error, as noted above. Also, since loss scaling amplifies the ratio between the largest and smallest elements within each gradient, swamping (Higham, 1993), the phenomenon that summing small floating-point values with much larger ones is inaccurate, becomes more likely and may hinder training (Wang et al., 2018).

2.2 RELATED WORK . Many recent works focus on reducing rounding error to improve training performance. Wang et al. (2018) devise a chunk-based accumulation mechanism to mitigate the swamping issue. Sakr et al. (2019) improve on this by using variance analysis to find lower precisions that are still safe for accumulation. Alternatively, Hoffer et al. (2018) identify the numerical issues caused by batch normalization and propose to replace it with a more numerically stable and efficient alternative. These methods are orthogonal to loss scaling, and we plan to study the effect of applying them together with adaptive loss scaling as future work. Loss scaling, which aims to improve mixed precision training by reducing the underflow rate in the computed gradients, can be traced back to Micikevicius et al. (2018). They originally suggest choosing a constant loss scale either empirically, or as a factor that cannot scale the maximal absolute gradient value into overflow. Kuchaiev et al. (2018) propose two improved versions: one is called backoff, which simply makes the loss scale smaller whenever a numerical error is encountered during training; the other is logmax, which models the maximal absolute gradient value across training iterations with a log-normal distribution in order to estimate a proper loss scale for the next iteration. We argue that these solutions are still not ideal, since backoff is simple trial-and-error and can waste training work, while logmax is risky to use when there are too few gradient values to fit the log-normal distribution, or when that assumption does not hold. Mellempudi et al. (2019) further study the backoff method for 8-bit floating point.

3 ADAPTIVE LOSS SCALING . As a preview, Figure 3 shows a concrete example of our adaptive loss scaling approach in the backward pass of a 3-layer Multi-Layer Perceptron (MLP). After the forward pass has completed, starting from the rightmost node, the gradients δ are first propagated from the loss L and are then scaled by a scalar α_4 before being consumed by the last linear layer. The weight gradients for this layer are then scaled by 1/α_4 just before the weight update for W_3, in order to prevent the particular choice of α_4 from affecting the computed gradient magnitudes. It is at this point that our approach begins to differ from the standard loss scaling method. In the standard method, the same α_4 would be used for all layers in the network. In our method, however, each layer i calculates its own local loss scale value β_i based on the statistics of its output gradients and weights in the current iteration, in order to minimize underflow in its computed input gradients. This β_i is then used to scale the weights W_i before computing the scaled input activation gradients for layer i.
Since the scaling effects from these local loss scales accumulate, when unscaling gradients for the weight update we use the product of all previous scale values, α_i = α_4 ∏_{j=i+1}^{3} β_j. Thus, our approach attempts to minimize underflow in every layer simultaneously through layer-local loss scales β_i that are computed automatically from the current layer statistics. Compared to the standard method, this removes the need for model-specific hyperparameter tuning and enables layer-wise loss scaling.

3.1 LOSS SCALED BACKPROPAGATION . We use a 2-tuple notation 〈α_i, δ_i〉 to denote the entity propagated from layer i, in which α_i is the loss scale value for layer i and δ_i is the gradient that has been scaled by α_i. To make this notation concrete, consider an N-layer MLP (see Figure 3): layer i takes in 〈α_{i+1}, δ_{i+1}〉, updates its weights by (y_{i-1}^T δ_{i+1}) / α_{i+1}, and produces 〈α_i, δ_i〉 for the previous layer i−1. We elaborate on how α_i is calculated in the following sections.

Algorithm 3: Backpropagation with adaptive loss scaling, assuming each layer has a single output (Section 3.2.2 shows how multiple outputs are handled):
  〈α_{N+1}, δ_{N+1}〉 ← initial loss scale, and error gradient scaled by α_{N+1}
  for i ← layer ID in reversed topological order do
    j ← GetLayerOutput(i)
    W_i ← W_i + GetWeightGradient(δ_j) / α_j
    β_i ← GetLossScale(OP(i), 〈α_j, δ_j〉)
    〈α_i, δ_i〉 ← 〈α_j β_i, Backprop(OP(i), β_i δ_j)〉
  end

Algorithm 3 proceeds as follows:
1. We start with the error gradients δ_{N+1} computed from the output loss value for the last layer N+1. We may optionally scale this gradient by α_{N+1}; normally we keep it at 1.
2. Visiting each layer i in reversed topological order of the computational graph, we retrieve the tuple 〈α_j, δ_j〉 propagated to it from the next downstream layer, which represents the scaled loss for layer i's output. We calculate a local loss scale value β_i, which is used to scale δ_j before we calculate the activation gradients for the previous layer.
3. We use δ_j and other cached inputs (omitted) to compute the gradients for W_i. Since these gradients have been scaled, we must unscale them by α_j before performing the weight update.
4. Since β_i contributes to the magnitude of the gradient δ_i, we calculate the loss scale value α_i to be passed to the previous layer as α_j β_i.
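A sketch of Algorithm 3 under the same assumptions, reusing the Linear class from the earlier sketch; GetLossScale is stubbed out as a caller-supplied function, since the paper derives the statistics-based rule for β_i in Section 3.2.

```python
# Sketch of Algorithm 3 for a linear chain (assumptions as above). The
# statistics-based rule for beta_i is abstracted into get_loss_scale; the
# (alpha, delta) bookkeeping is the point: unscale weight gradients by the
# accumulated alpha_j, then pass alpha_i = alpha_j * beta_i upstream.
def adaptive_loss_scaled_backprop(layers, error_grad, get_loss_scale,
                                  alpha_init=1.0, lr=0.01):
    alpha, delta = alpha_init, alpha_init * error_grad
    for layer in reversed(layers):
        layer.W -= lr * layer.grad_w(delta) / alpha      # unscale with alpha_j
        beta = get_loss_scale(layer, alpha, delta)       # layer-local beta_i
        alpha, delta = alpha * beta, layer.backprop(beta * delta)
    return alpha
```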
In this paper, the authors propose a method to train models in FP16 precision. They show that a key cause of degraded training performance is overflow or underflow of the backpropagated gradients. Instead of using a fixed loss scale, or the dynamically adjusted scale proposed in previous work, this paper adopts a more elaborate, layer-wise scheme to minimize underflow.
SP:4da7ae6cfcf4cab7581bab283e12354b60d3b2dd
Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization
1 INTRODUCTION . Batch Normalization (BN) (Ioffe & Szegedy, 2015) is one of the most popular techniques for training neural networks. It has been widely proven effective in many applications and has become an indispensable part of many state-of-the-art deep models. Despite the success of BN, it is still challenging to use BN when the batch size is extremely small. (In this paper, "batch size" or "normalization batch size" refers to the number of samples used to compute statistics unless otherwise stated, while "gradient batch size" refers to the number of samples used to update weights.) Batch statistics computed with a small batch size are highly unstable, leading to slow convergence during training and poor performance at inference. For example, in detection or segmentation tasks, the batch size is often limited to 1 or 2 per GPU due to the high-resolution inputs or the complex structure of the model. Directly computing batch statistics on each GPU without any modification severely degrades model performance. To address these issues, many modified normalization methods have been proposed. They can be roughly divided into two categories: some try to improve vanilla BN by correcting batch statistics (Ioffe, 2017; Singh & Shrivastava, 2019), but they all fail to completely restore the performance of vanilla BN; other methods avoid the instability of BN by using instance-level normalization (Ulyanov et al., 2016; Ba et al., 2016; Wu & He, 2018), so that models are not affected by batch statistics. This type of method can restore performance in small-batch cases to some extent. However, instance-level normalization hardly meets industrial or commercial needs so far, because these methods must compute instance-level statistics both in training and inference, which introduces additional nonlinear operations into the inference procedure and dramatically increases its cost (Shao et al., 2019). Vanilla BN, in contrast, uses statistics computed over the whole training data rather than over a batch of samples once training has finished; it is thus a linear operator at inference time and can be merged with the convolution layer. Figure 1(a) shows that with ResNet-50 (He et al., 2016), instance-level normalization almost doubles inference time compared with vanilla BN. It is therefore a difficult but necessary task to restore the performance of BN in small-batch training without introducing any nonlinear operations into the inference procedure. In this paper, we first analyze the formulation of vanilla BN, revealing that there are actually not two but four batch statistics involved in normalization, during forward propagation (FP) as well as backward propagation (BP). The two additional batch statistics involved in BP are associated with the gradients of the model and have never been well discussed before. They play an important role in regularizing the gradients of the model during BP. In our experiments (see Figure 2), the variance of the gradient-related batch statistics in BP at small batch size is even larger than that of the widely known batch statistics (the mean and variance of the feature maps). We believe the instability of the gradient-related batch statistics is one of the key reasons why BN performs poorly in small-batch cases.
Based on our analysis, we propose a novel normalization method named Moving Average Batch Normalization (MABN). MABN can completely overcome small-batch issues without introducing any nonlinear manipulation into the inference procedure. The core idea of MABN is to replace batch statistics with moving average statistics: we substitute the batch statistics involved in BP and FP with different types of moving average statistics, and we give a theoretical analysis to prove the benefits. However, we observed that directly using moving average statistics as substitutes for batch statistics does not make training converge in practice. We believe this failure is due to occasional large gradients during training, as mentioned in Ioffe (2017). To avoid training collapse, we modify the vanilla normalization form by reducing the number of batch statistics, centralizing the weights of the convolution kernels, and utilizing a renormalizing strategy. We also theoretically prove that the modified normalization form is more stable than the vanilla form. MABN shows its effectiveness on multiple public vision datasets and tasks, including ImageNet (Russakovsky et al., 2015) and COCO (Lin et al., 2014). All experimental results show that MABN with a small batch size (1 or 2) can achieve performance comparable to BN with a regular batch size (see Figure 1(b)). Besides, it has the same inference cost as vanilla BN (see Figure 1(a)). We also conducted extensive ablation experiments to further verify the effectiveness of MABN.

2 RELATED WORK . Batch normalization (BN) (Ioffe & Szegedy, 2015) normalizes the internal feature maps of a deep neural network using channel-wise statistics (mean, standard deviation) along the batch dimension. It has been widely proven effective in most tasks, but vanilla BN relies heavily on a sufficient batch size in practice. To restore the performance of BN in small-batch cases, many normalization techniques have been proposed. Batch Renormalization (BRN) (Ioffe, 2017) introduces renormalizing parameters in BN to correct the batch statistics during training, where the renormalizing parameters are computed using moving average statistics. Unlike BRN, EvalNorm (Singh & Shrivastava, 2019) corrects the batch statistics during the inference procedure. Both BRN and EvalNorm restore the performance of BN to some extent, but neither fully overcomes the small-batch issue. Instance Normalization (IN) (Ulyanov et al., 2016), Layer Normalization (LN) (Ba et al., 2016), and Group Normalization (GN) (Wu & He, 2018) all try to avoid the effect of batch size by utilizing instance-level statistics. IN uses channel-wise statistics per instance instead of per batch, while LN uses instance-level statistics along the channel dimension, but IN and LN show no superiority over vanilla BN in most cases. GN divides the channels into predefined groups and uses group-wise statistics per instance; it restores the performance of vanilla BN very well in classification and detection tasks, but it has to introduce extra nonlinear manipulations into the inference procedure and severely increases inference cost, as pointed out in Section 1. SyncBN (Peng et al., 2018) handles the small-batch issue by computing the mean and variance across multiple GPUs; this does not essentially solve the problem and requires considerable resources. Online Normalization (Chiley et al.,
2019) modifies BP by using moving average statistics, so the batch size can be set to 1 without performance degradation; however, Online Normalization still has to use instance-level normalization to cooperate with the modification in BP, so its inference efficiency is much lower than that of the original BN. Apart from operating on feature maps, some works normalize the weights of the convolution: Weight Standardization (Qiao et al., 2019) first centralizes the weights and then divides them by their standard deviation, but it still has to be combined with GN to handle small-batch cases.

3 STATISTICS IN BATCH NORMALIZATION . 3.1 REVIEW OF BATCH NORMALIZATION . First, let us review the formulation of Batch Normalization (Ioffe & Szegedy, 2015). Assume the input of a BN layer is denoted X ∈ R^{B×p}, where B denotes the batch size and p the number of features. During training, the normalized feature map Y at iteration t is computed as

    Y = (X − μ_{B_t}) / σ_{B_t},    (1)

where the batch statistics μ_{B_t} and σ²_{B_t} are the sample mean and sample variance computed over the batch of samples B_t at iteration t:

    μ_{B_t} = (1/B) Σ_b X_{b,:},    σ²_{B_t} = (1/B) Σ_b (X_{b,:} − μ_{B_t})².    (2)

Besides, a pair of parameters γ, β is used to scale and shift the normalized value Y:

    Z = Yγ + β.    (3)

The scaling and shifting part is included in all normalization forms by default and is omitted in the following discussion for simplicity. As Ioffe & Szegedy (2015) demonstrated, the batch statistics μ_{B_t} and σ²_{B_t} are both involved in backward propagation (BP). The formulation of BP in BN can be derived as follows. Let L denote the loss and Θ_t the set of all learnable parameters of the model at iteration t. Given the partial gradients ∂L/∂Y |_{Θ_t,B_t}, the partial gradients ∂L/∂X |_{Θ_t,B_t} are computed as

    ∂L/∂X |_{Θ_t,B_t} = (1/σ_{B_t}) ( ∂L/∂Y |_{Θ_t,B_t} − g_{B_t} − Y · Ψ_{B_t} ),    (4)

where · denotes element-wise product, and g_{B_t} and Ψ_{B_t} are computed as

    g_{B_t} = (1/B) Σ_b ∂L/∂Y_{b,:} |_{Θ_t,B_t},    Ψ_{B_t} = (1/B) Σ_b Y_{b,:} · ∂L/∂Y_{b,:} |_{Θ_t,B_t}.    (5)

It can be seen from (5) that g_{B_t} and Ψ_{B_t} are also batch statistics involved in BN during BP, but they have never been well discussed before.

3.2 INSTABILITY OF BATCH STATISTICS . According to Ioffe & Szegedy (2015), the ideal normalization is to normalize the feature maps X using the expectation and variance computed over the whole training data set:

    Y = (X − E[X]) / √(Var[X]).    (6)

But this is impractical under stochastic optimization. Therefore, Ioffe & Szegedy (2015) use mini-batches in stochastic gradient training: each mini-batch produces estimates of the mean and variance of each activation. This simplification makes it possible to involve the mean and variance in BP. From the derivation in Section 3.1, we can see that the batch statistics μ_{B_t} and σ²_{B_t} are Monte Carlo (MC) estimators of the population statistics E[X|Θ_t] and Var[X|Θ_t], respectively, at iteration t. Similarly, the batch statistics g_{B_t} and Ψ_{B_t} are MC estimators of the population statistics E[∂L/∂Y_{b,:} |Θ_t] and E[Y_{b,:} · ∂L/∂Y_{b,:} |Θ_t] at iteration t, computed over the whole data set. These contain information about how the mean and variance of the population will change as the model updates, so they play an important role in trading off the change of the individual sample against that of the population. It is therefore crucial to estimate the population statistics precisely, in order to properly regularize the gradients of the model as the weights update.
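The four batch statistics of equations (1)-(5) can be written out directly; the following is a minimal NumPy sketch (ours) for an input X of shape (B, p) with upstream gradient dL_dY. The small eps is a standard numerical-stability addition not shown in the equations.

```python
# Minimal sketch of equations (1)-(5): two batch statistics in FP and two in
# BP. eps is a standard stability term not present in the paper's equations.
import numpy as np

def bn_forward(X, eps=1e-5):
    mu = X.mean(axis=0)                       # mu_{B_t}, eq. (2)
    var = X.var(axis=0)                       # sigma^2_{B_t}, eq. (2)
    Y = (X - mu) / np.sqrt(var + eps)         # eq. (1)
    return Y, var

def bn_backward(Y, var, dL_dY, eps=1e-5):
    g = dL_dY.mean(axis=0)                    # g_{B_t}, eq. (5)
    psi = (Y * dL_dY).mean(axis=0)            # Psi_{B_t}, eq. (5)
    dL_dX = (dL_dY - g - Y * psi) / np.sqrt(var + eps)   # eq. (4)
    return dL_dX
```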
It is well known that the variance of an MC estimator is inversely proportional to the number of samples, hence the variance of the batch statistics increases dramatically when the batch size is small. Figure 2 shows the evolution of the batch statistics from a specific normalization layer of ResNet-50 during training on ImageNet. Regular batch statistics (orange line) are regarded as a good approximation of the population statistics. Small-batch statistics (blue line) are highly unstable and contain notable error relative to the regular batch statistics during training. In fact, the bias of g_{B_t} and Ψ_{B_t} in BP is more serious than that of μ_{B_t} and σ²_{B_t} (see Figures 2(c), 2(d)). The instability of small-batch statistics can worsen model capacity in two ways: first, it makes training unstable, resulting in slow convergence; second, it produces a large gap between batch statistics and population statistics. Since the model is trained using batch statistics but evaluated using population statistics, this gap causes an inconsistency between the training and inference procedures, leading to poor performance of the model on evaluation data.
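The 1/B scaling of estimator variance is easy to check numerically; the toy below (ours) draws standard-normal "activations" and shows the variance of the batch-mean estimator shrinking with batch size.

```python
# Toy check (ours): the variance of the batch-mean estimator of a
# standard-normal variable is close to 1/B, so very small batches give
# noisy statistics, which is the instability Figure 2 illustrates.
import numpy as np

rng = np.random.default_rng(0)
for B in (2, 8, 32, 128):
    batch_means = rng.standard_normal((10000, B)).mean(axis=1)
    print(B, round(batch_means.var(), 4))   # roughly 0.5, 0.125, 0.031, 0.008
```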
The paper extends the recently proposed Batch Renormalization (BRN) technique, which uses exponential moving average (EMA) statistics in the forward and backward passes of BatchNorm (BN) instead of vanilla batch statistics. The motivation of the work is to stabilize the training of neural networks in small-batch-size setups. The authors propose to replace the EMA in the backward pass with a simple moving average (SMA) and show that, under some assumptions, this replacement reduces variance. They also consider a slightly different normalization that does not centralize the features X but instead centralizes the convolutional kernels, following Qiao et al. (2019).
SP:85a8e18145acb8bc9d79e188c173ac2c1d1007ed
The paper proposes a new approach to batch normalization. Standard approaches are sensitive to the batch size, because small batches lead to unstable statistics, so performance can drop significantly when the mini-batch is small. The paper addresses this issue by analyzing the extra statistics in batch normalization and introducing moving average statistics, weight centralization, and a slightly modified normalization. The proposed method does not require large batch sizes or nonlinear operations, but still maintains robustness. Theoretical analysis and guarantees are provided as well. Experiments on typical datasets demonstrate the effectiveness of the proposed technique.
SP:85a8e18145acb8bc9d79e188c173ac2c1d1007ed
Prune or quantize? Strategy for Pareto-optimally low-cost and accurate CNN
1 INTRODUCTION . Reducing the execution cost of deep learning inference is one of the most active research topics for bringing superhuman recognition to embedded IoT devices and robots. Typical approaches for employing memory- and computation-efficient components are separable convolution, a combination of depth-wise and point-wise convolutions (Iandola et al., 2016; Zoph et al., 2018; Zhang et al., 2018; Howard et al., 2017); structured/unstructured pruning of connections and activations; and quantizing activations, weights, and their vectors (Stock et al., 2019; Jegou et al., 2011; Gong et al., 2014). Among these, separable convolution and structured pruning are similar, in that separable convolution can be viewed as convolution pruned in a handcrafted manner. From a pruning viewpoint, since the separable convolution structure results from applying aggressive pruning to normal convolution, the result is drastic reductions in memory and computational cost at the expense of greatly decreased accuracy (Stock et al., 2019). On the other hand, structured pruning and quantization are seemingly orthogonal approaches that can be naturally combined (Tung & Mori, 2018; Han et al., 2016). However, their interactions are still not well studied. For instance, the use of a single-bit representation is being actively explored as an extreme form of quantization. Since a non-negligible accuracy drop is inevitable in extreme quantization, some papers have proposed increasing the number of channels to compensate for the lack of expressivity (Lin et al., 2017). In other words, a quantization approach can further reduce the number of bits by accepting an increase in the number of channels, and hence in the number of computations. This indicates that, conversely, reducing channels by pruning may limit the capability for quantization. This discussion raises a controversial question: which is better, a fat model with smaller bit width or a slim model with larger bit width? Answering this question requires a metric that fairly measures the effects of both pruning and quantization. One such metric in the literature is the inference speed when the model is executed on specific hardware. This metric is useful, or even ideal, when the target hardware is known in advance, but it strongly depends on features of the hardware architecture. Yang et al. (2018) searched for an optimal architecture using inference time as the optimization objective and found different optimal architectures depending on the target device. For example, if the hardware cannot handle extremely low bit widths (1 or 2 bits), instead treating them as 8-bit integers with the upper bits filled with zeros, we cannot exploit the reduction in bit width to improve inference speed. From a theoretical viewpoint, figuring out the extent to which we can reduce the computational complexity of deep neural networks is another important open question. The discussion so far urges us to develop a hardware-agnostic and theoretically reasonable metric for measuring the computational costs of neural network architectures. In this paper, we propose the Frobenius norm of the effective value of the weight parameters as one such metric.
This metric is proportional to the total energy when the model is executed on ideal hardware, where the energy consumption for a single multiply-accumulate (MAC) computation is proportional to the squared effective amplitude of the individual weight parameter used for the MAC computation. The basic idea of the metric is analogous to a highly efficient class-B amplifier circuit, whose energy consumption is determined by the instantaneous signal amplitude (Sechi, 1976). This metric successfully reflects the effects of both quantization and structured/unstructured pruning in accordance with intuition. Using the proposed metric, we empirically find that a slimmer model can achieve a far better Pareto frontier in the lower computational cost region than a fatter model after quantization, while a fat model is advantageous for achieving higher accuracy in the larger computational cost region. Finally, we perform experiments under a post-training quantization scenario (Banner et al., 2018) on the ImageNet dataset (Deng et al., 2009) to verify the validity of our claim, namely that prune-then-quantize is superior to quantize-only or prune-only for achieving a better Pareto frontier. Further, since this metric is related to the signal-to-noise ratio (S/N), it is measurable during SGD training, in which the absolute value of the weights and the random walk of the weight parameters correspond to signal and noise, respectively. We observe that the dependence of validation accuracy on the metric appears correlated between measurements taken during training and measurements taken after post-training quantization. From this observation, we point out some possibilities: we could predict the robustness of a model to quantization from information obtained during training, we could determine an optimal quantization policy for that model, and we could develop a novel optimization or regularization scheme.

The main contributions of this paper are as follows:
• We define a hardware-agnostic metric for measuring the computational cost of pruned and quantized models.
• We empirically find that models with fewer parameters achieve far better accuracy in the low computational cost region after quantization.
• We show a potential quantitative relation between quantization noise and the perturbation of weight parameters during SGD training.

As implications, we hope to exploit our findings for:
• thorough comparison of various neural network architectures using the proposed hardware-agnostic metric,
• development of a method for extracting a quantization policy from information obtained during SGD training, and
• development of a training algorithm or regularization scheme for producing robust models, based on the relation between quantization noise and the perturbation of weight parameters during SGD training.

2 EFFECTIVE SIGNAL NORM . We seek a metric that properly reflects the effects of both quantization and pruning. Conventionally, quantization effectiveness is evaluated according to the number of bits required to achieve a given accuracy, or the accuracy achieved using certain bit widths for specific network architectures (Stock et al., 2019). This cannot be used to compare efficiency between models with different architectures (e.g., MobileNet versus ResNet-18). The number of MAC computations or parameters can be used to compare different architectures, but the number of MAC computations does not account for quantization, and the number of parameters is not directly related to inference time.
Recently, the use of actual or estimated inference speed as a metric for comparing network architectures has been proposed (Yang et al., 2018; Wang et al., 2019a; Cai et al., 2019). This metric is very useful when the target hardware is known in advance, and ideal for those who wish to use the model that performs best on that hardware. However, it is strongly hardware dependent: indeed, Yang et al. (2018), Wang et al. (2019a), and Cai et al. (2019) found that the optimal architectures for different types of target hardware are totally different. If one is interested in, for example, the simplest realizable deep neural network model that achieves a required accuracy, a hardware-agnostic metric is needed. The metric for model evaluation should correlate with the energy consumed when the model is executed on ideal hardware. We assume that the energy consumption of ideal hardware monotonically decreases when the bit width is reduced by quantization and when the number of nonzeros in the weight parameters is reduced by pruning. For example, hardware with an 8-bit integer MAC array cannot be further accelerated even if the bit width is reduced from 8 to 1 or 2 bits; thus, energy consumption measured on such hardware does not satisfy the aforementioned requirement and cannot be our metric. Hardware like a CPU, which processes each computation serially, can naturally exploit the structured or unstructured sparsity of weight parameters by skipping computations with zeroed weights. However, because it is difficult to parallelize computations while maintaining such a strategy, it is generally difficult to benefit from sparsity on GPU-like hardware employing massively parallel MAC units. Hardware dedicated to sparse convolution (Lu et al., 2019) tends to show better performance only when sparsity is sufficiently high, due to the relatively large overheads of encoding and decoding sparse weight parameters in a special format. Therefore, the benefit of sparsity from pruning and of low bit width from quantization depends largely on the hardware architecture, so long as we consider only existing hardware. Because we require a hardware-agnostic metric, we assume ideal hardware in which energy consumption is linearly proportional to the number of nonzero weight parameters and depends monotonically on the bit width of the weight parameters, as shown in Figure 1, setting aside the feasibility of such ideal hardware.

2.1 DEFINITION OF EFFECTIVE SIGNAL NORM . We define a metric called the effective signal norm (ESN) as

    ESN = Σ_l || c_l f(W_int^l) ||_F²,    (1)

with W_int^l = ⌊W^l / Δ^l⌋ + 0.5, where W^l is the weight tensor and Δ^l is the quantization step size of the l-th layer, and c_l is a layer-dependent coefficient: if c_l = 1, ESN is related to the number of parameters (cf. memory footprint), and if c_l is the number of computations per parameter in the l-th layer, ESN is related to the number of computations (cf. FLOPs). f(·) is an element-wise function that determines how the metric responds to the value of each weight parameter. We propose two choices of f(·). The first is f(W_int^l) = W_int^l, based on the assumption that energy consumption increases with the square of the value of each weight parameter, or of each computation. When c_l = 1, the definition becomes

    ESN_a = Σ_l || W_int^l ||_F².    (2)

This assumption is reasonable when we employ an analog (or in-memory) MAC computation engine (Shafiee et al.,
2016; Miyashita et al., 2017), because energy consumption is proportional to the square of the signal amplitude when the signal represents an analog quantity such as a voltage or current. Assuming ideal hardware, we adopt a definition in which energy consumption varies with the instantaneous amplitude (cf. a class-B amplifier), which is more energy efficient than the case where energy consumption is constant and determined by the maximal amplitude (cf. a class-A amplifier) (Sechi, 1976). The second proposed function is f(W_int^l) = ⌈log₂(abs(W_int^l)) + 1⌉, where log₂(·) and abs(·) are applied element-wise to the tensor argument. This is based on the assumption that energy consumption increases with the binary logarithm of the value of each weight parameter. When c_l = 1, the definition becomes

    ESN_d = Σ_l || ⌈log₂(abs(W_int^l)) + 1⌉ ||_F².    (3)

In a digital circuit, a number is represented as binary digits (bits), so the energy consumption for moving or processing signals is roughly proportional to the number of bits, which is the binary logarithm of the value. It is therefore reasonable to use Equation (3) for a digital circuit.
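Equations (1)-(3) translate directly into code; the sketch below (ours) assumes c_l = 1 and a given per-layer step size Δ^l. Note that |W_int^l| ≥ 0.5 by construction, so the logarithm in ESN_d is always defined.

```python
# Sketch of ESN_a (eq. 2) and ESN_d (eq. 3) with c_l = 1. `weights` is a list
# of weight tensors, `deltas` the per-layer quantization step sizes.
import numpy as np

def quantize(W, delta):
    return np.floor(W / delta) + 0.5          # W^l_int in eq. (1)

def esn_a(weights, deltas):                   # squared-amplitude (analog) cost
    return sum(np.sum(quantize(W, d) ** 2) for W, d in zip(weights, deltas))

def esn_d(weights, deltas):                   # bit-count (digital) cost
    total = 0.0
    for W, d in zip(weights, deltas):
        w = np.abs(quantize(W, d))            # always >= 0.5, so log2 is safe
        bits = np.ceil(np.log2(w) + 1.0)      # bits per weight; 0 for |w|=0.5
        total += np.sum(bits ** 2)
    return total
```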
The paper proposes a new metric to jointly evaluate the amounts of pruning and quantization. The metric is agnostic to the hardware architecture and is obtained simply by computing the Frobenius norm of a point-wise transformation of the quantized weights. The authors first show empirically that this metric is correlated with validation accuracy, then use it to provide some general rules for pruning/quantizing while preserving the highest validation accuracy. Finally, they derive a strategy for pruning by monitoring the signal-to-noise ratio during training and show experimentally that this method performs better than competing ones.
SP:c7c30d05c86b1bb69f9098afe8bc2514e8eb22c0
Prune or quantize? Strategy for Pareto-optimally low-cost and accurate CNN
1 INTRODUCTION . Reducing execution cost of deep learning inference is one of the most active research topics for applying superhuman recognition in embedded IoT devices and robots . A typical approach for employing memory- and computation-efficient components is separable convolution , which is a combination of depth-wise and point-wise convolutions ( Iandola et al. , 2016 ; Zoph et al. , 2018 ; Zhang et al. , 2018 ; Howard et al. , 2017 ) , structured/unstructured pruning of connections and activations , and quantizing activation , weight , and their vectors ( Stock et al. , 2019 ; Jegou et al. , 2011 ; Gong et al. , 2014 ) . Among these , separable convolution and structured pruning are similar , in that separable convolution can be viewed as convolutions pruned in a handcrafted manner . From a pruning viewpoint , since the separable convolution structure results from applying aggressive pruning to normal convolution , the result is drastic reductions in memory and computational cost at the expense of greatly decreased accuracy ( Stock et al. , 2019 ) . On the other hand , structured pruning and quantization are seemingly orthogonal approaches that can be naturally combined ( Tung & Mori , 2018 ; Han et al. , 2016 ) . However , their interactions are still not well-studied . For instance , the use of a single-bit representation is being actively explored as an extreme quantization . Since a nonnegligible accuracy drop is inevitable in extreme quantization , some papers have proposed increasing the number of channels to compensate for the lack of expressivity ( Lin et al. , 2017 ) . In other words , a quantization approach can further reduce the number of bits by compromising the increase in number of channels , or the increase in number of computations . This indicates that , con- versely , reducing channels by pruning may limit capability for quantization . This discussion raises a controversial question : which is better , a fat model with smaller bit width or a slim model with larger bit width ? Answering this question requires a metric that fairly measures the effects of both pruning and quantization . One such metric in the literature is the inference speed when the model is executed on specific hardware . This metric is useful or even ideal when the target hardware is known in advance but strongly depends on features of the hardware architecture . Yang et al . ( 2018 ) searched for an optimal architecture using inference time as the optimization objective and found different optimal architectures depending on the target device . For example , if the hardware can not handle extremely low bit-widths ( 1 or 2 bits ) , instead treating them as 8-bit integers with upper bits filled with zeros , we can not exploit the reduction of bit width to improve inference speed . From a theoretical viewpoint , figuring out the extent to which we can reduce the computational complexity of deep neural networks is another important open question . The discussion so far urges us to develop a hardware-agnostic and theoretically reasonable metric for measuring computational costs of neural network architectures . In this paper , we propose the Frobenius norm of the effective value of weight parameters as one such metric . 
This metric is proportional to the total energy when the model is executed on ideal hardware , where energy consumption for a single multiply-accumulate ( MAC ) computation is proportional to the squared effective amplitude of the individual weight parameter used for the MAC computation . The basic idea of the metric is analogous to a highly efficient class-B amplifier circuit whose energy consumption is determined by the instant signal amplitude ( Sechi , 1976 ) . This metric successfully reflects the effects of both quantization and structured/unstructured pruning in accordance with intuition . Using the proposed metric , we empirically find that a slimmer model can achieve a far better Pareto frontier in a lower computational cost region than can a fatter model after quantization , while a fat model is advantageous for achieving higher accuracy in a larger computational cost region . Finally , we perform experiments under a post-training quantization scenario ( Banner et al. , 2018 ) on ImageNet dataset ( Deng et al. , 2009 ) to verify the validity of our claim , namely that prune-then-quantize is superior to quantize-only or prune-only for achieving a better Pareto frontier . Further , since this metric is relevant to the signal-to-noise ratio ( S/N ) , it is measurable during SGD training , in which the absolute value of weights and the random walk of weight parameters correspond to signal and noise , respectively . We observe that the dependencies of the metric on validation accuracy seem to be correlated between those during training and those applying quantization after training . From this observation , we point out some possibilities for which we could expect robustness of a model for quantization from information obtained during training , we could determine an optimal policy for quantization of that model , and we could develop a novel optimization or regularization scheme . The main contributions of this paper are as follows : • We define a hardware-agnostic metric for measuring the computational cost of pruned and quantized models . • We empirically find that models with fewer parameters achieve far better accuracy in a low computational cost region after quantization . • We show a potential quantitative relation between quantization noise and perturbation of weight parameters during SGD training . And as implications , we hope to exploit our findings for • thorough comparison of various neural network architectures using the proposed hardwareagnostic metric , • development of a method for extracting a quantization policy from information obtained during SGD training , and • development of a training algorithm or regularization scheme for producing robust models based on the relation between quantization noise and perturbation of weight parameters during SGD training . 2 EFFECTIVE SIGNAL NORM . We seek a metric that properly reflects the effects of both quantization and pruning . Conventionally , quantization effectiveness is evaluated according to the number of bits required to achieve a given accuracy , or the accuracy achieved by using certain bit numbers for specific network architectures ( Stock et al. , 2019 ) . We can not use this to compare efficiencies between different architecture models ( e.g. , MobileNet versus ResNet-18 ) . The number of MAC computations or parameters can be used to compare different architectures , but the number of MAC computations does not consider quantization and the number of parameters is not directly related to inference time . 
Recently , the use of actual or estimated inference speeds as a metric for comparing network architectures has been proposed ( Yang et al. , 2018 ; Wang et al. , 2019a ; Cai et al. , 2019 ) . This metric is very useful when the target hardware is known in advance , and ideal for those who wish to use the model that performs best on that hardware . However , this metric is strongly hardware dependent . Indeed , Yang et al . ( 2018 ) ; Wang et al . ( 2019a ) ; Cai et al . ( 2019 ) found that optimal architectures for different types of target hardware are totally different . Considering interest in , for example , the simplest realizable deep neural network model while achieving a required accuracy , there is a need for a hardware-agnostic metric . The metric for model evaluation should correlate with energy consumed when the model is executed on ideal hardware . We assume that energy consumption by ideal hardware monotonically decreases when the bit width is reduced by quantization and when the number of nonzeros in weight parameters is reduced by pruning . For example , hardware with an 8-bit integer MAC array can not be further accelerated even if the bit width is reduced from 8 to 1 or 2 bits . Thus , the energy consumption measured using such hardware does not satisfy the aforementioned requirement and can not be our metric . Hardware like a CPU , which processes each computation in serial , can naturally exploit the structured or unstructured sparsity of weight parameters by skipping computations with zeroed weights . However , because it is difficult to parallelize computations while maintaining such a strategy , it is generally difficult to benefit from sparsity in GPU-like hardware employing massively parallel MAC units . Hardware dedicated to sparse convolution ( Lu et al. , 2019 ) tends to show better performance only when sparsity is sufficiently high , due to relatively large overheads for encoding and decoding sparse weight parameters in a special format . Therefore , the benefit of sparsity from pruning and low bit width from quantization largely depends on the hardware architecture , so long as we consider only existing hardware . Because we require a hardware-agnostic metric , we assume ideal hardware in which energy consumption is linearly proportional to the number of nonzero weight parameters and monotonically depends on the bit width of weight parameters , as shown in Figure 1 , setting aside the feasibility of such ideal hardware . 2.1 DEFINITION OF EFFECTIVE SIGNAL NORM . We define a metric called the effective signal norm ( ESN ) as ESN = ∑ l ||clf ( Wlint ) ||2F , ( 1 ) with Wlint = ⌊W l/∆l⌋+0.5 , where Wl is the weight tensor and ∆l is the quantization step size of the lth layer ; and cl is a coefficient depending on the layer , in that if cl = 1 , ESN is related to the number of parameters ( cf . memory footprint ) , and if cl is the number of computations per parameter at the lth layer , ESN is related to the number of computations ( cf . FLOP ) . f ( · ) is an element-wise function that determines how the metric responds to the value of each weight parameter . We propose two functions for f ( · ) . The first is f ( Wlint ) = W l int , based on the assumption that energy consumption increases with the square of the value for each weight parameter or for each computation . When cl = 1 , the definition is ESNa = ∑ l ||Wlint||2F . ( 2 ) This assumption is reasonable when we employ an analog ( or in-memory ) MAC computation engine ( Shafiee et al. 
, 2016; Miyashita et al., 2017), because energy consumption is proportional to the square of the signal amplitude when the signal represents an analog quantity such as voltage or current. Assuming ideal hardware, we adopt a definition in which energy consumption varies according to the instantaneous amplitude (cf. class-B amplifier), which is more energy efficient than the case where energy consumption is constant and determined by the maximal amplitude (cf. class-A amplifier) (Sechi, 1976). The second proposed function is f(W^l_{int}) = \lceil \log_2(\mathrm{abs}(W^l_{int})) + 1 \rceil, where \log_2(\cdot) and \mathrm{abs}(\cdot) are applied element-wise to a tensor argument. This is based on the assumption that energy consumption increases with the binary logarithm of the value of each weight parameter. When c_l = 1, the definition is ESN_d = \sum_l \| \lceil \log_2(\mathrm{abs}(W^l_{int})) + 1 \rceil \|_F^2. (3) In a digital circuit, a number is represented as binary digits (bits), so the energy consumption for moving or processing signals is roughly proportional to the number of bits, which is the binary logarithm of the value. It is therefore reasonable to use Equation (3) for a digital circuit.
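To make the definitions concrete, the following NumPy sketch computes ESN_a and ESN_d from Equations (1)-(3) for a list of (possibly pruned) layer weights. It is a minimal illustration, not the authors' code: the function name and the assumption of a single scalar step size \Delta^l per layer are ours.

```python
import numpy as np

def effective_signal_norm(weights, deltas, cs=None, variant="a"):
    """Compute ESN over per-layer weight tensors (Eqs. 1-3).

    weights: list of arrays, the layer weight tensors W^l
    deltas:  list of floats, the quantization step sizes Delta^l
    cs:      optional per-layer coefficients c_l (defaults to 1, the
             memory-footprint-like variant)
    variant: "a" for the analog form (Eq. 2), "d" for the digital form (Eq. 3)
    """
    if cs is None:
        cs = [1.0] * len(weights)
    total = 0.0
    for w, delta, c in zip(weights, deltas, cs):
        w_int = np.floor(np.asarray(w, dtype=float) / delta) + 0.5  # Eq. (1)
        if variant == "a":
            f = w_int                    # f(W) = W; the Frobenius norm squares it
        else:
            # Bit count per weight; abs(w_int) >= 0.5, so log2 is well defined,
            # and a zeroed (pruned) weight maps to ceil(log2(0.5) + 1) = 0.
            f = np.ceil(np.log2(np.abs(w_int)) + 1.0)
        total += float(np.sum((c * f) ** 2))  # squared Frobenius norm, scaled by c_l
    return total
```

Note that under this literal reading of Equation (1), a pruned weight still contributes (0.5)^2 to ESN_a because of the +0.5 offset; only ESN_d sends pruned weights exactly to zero.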
The authors propose a hardware-agnostic metric called the effective signal norm (ESN) to measure the computational cost of convolutional neural networks. The metric aims to fairly measure the effects of pruning and quantization. Moreover, based on the metric, the authors demonstrate that models with fewer parameters achieve far better accuracy after quantization. A large number of experiments are carried out to support the metric and the related conclusions; however, several experiments and arguments are confusing.
SP:c7c30d05c86b1bb69f9098afe8bc2514e8eb22c0
Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs
1 INTRODUCTION. Deep Learning frameworks such as MXNet (Chen et al., 2015), PyTorch (Paszke et al., 2017), and TensorFlow (TensorFlow Authors, 2016a) represent neural network models as computation graphs. Efficiently executing such graphs requires optimizing discrete decisions about how to map the computations in a graph onto hardware so as to minimize a relevant cost metric (e.g., running time, peak memory). Given that execution efficiency is critical for the success of neural networks, there is growing interest in the use of optimizing static compilers for neural network computation graphs, such as Glow (Rotem et al., 2018), MLIR (MLIR Authors, 2018), TVM (Chen et al., 2018a), and XLA (XLA team, 2017). Here we consider the model parallelism setting, where a computation graph can be executed using multiple devices in parallel. Nodes of the graph are computational tasks, and directed edges denote dependencies between them. We consider jointly optimizing over placement, i.e., which nodes are executed on which devices, and schedule, i.e., the node execution order on each device. These decisions are typically made in either one or two passes in the compiler. We consider two different objectives: 1) minimize running time, subject to not exceeding device memory limits, and 2) minimize peak memory usage. In the optimization literature, such problems are studied under the class of task scheduling, which is known to be NP-hard in typical settings (Sinnen, 2007; Kwok & Ahmad, 1999). As scheduling and placement are just a few of the many complex decisions made in a compiler, it is essential in a production setting that an approach 1) produce solutions of acceptable quality quickly, even on large graphs (e.g., thousands of nodes) and decision spaces, and 2) handle diverse graphs from various types of applications, neural network architectures, and users. In this work we consider learning an optimizer that satisfies these requirements. Crucially, we aim to learn an optimizer that generalizes to a broad set of previously unseen computation graphs, without the need for training on such graphs, thus allowing it to be fast at test time. Previous works on learning to optimize model parallelism decisions (Mirhoseini et al., 2017; 2018; Addanki et al., 2019) have considered neither generalization to a broad set of graphs nor joint optimization of placement and scheduling. In Mirhoseini et al. (2017; 2018), learning is done from scratch for each computation graph and for placement decisions only, requiring hours (e.g., 12 to 27 hours per graph). This is too slow to be broadly useful in a general-purpose production compiler. We propose an approach that takes only seconds to optimize similar graphs. In concurrent work to ours, Addanki et al. (2019) show generalization to unseen graphs, but their graphs are generated artificially by architecture search for a single learning task and dataset. In contrast, we collect real user-defined graphs spanning a broad set of tasks, architectures, and datasets. In addition, both Mirhoseini et al. (2017; 2018) and Addanki et al. (2019) consider only placement decisions and rely on TensorFlow's dynamic scheduler; they do not address the static compiler setting, where it is natural to jointly optimize scheduling and placement.
The key idea of our approach (Figure 1) is to learn a neural network that, conditioned on the input graph to be optimized, directs an existing optimization algorithm's search such that it finds a better solution within the same search budget. We choose the Biased Random-Key Genetic Algorithm (BRKGA; Gonçalves & Resende, 2011) as the optimization algorithm after an extensive evaluation of several choices showed that it gives by far the best speed-vs-quality trade-off for our application. BRKGA produces good solutions in just a few seconds even for real-world TensorFlow graphs with thousands of nodes, and we use learning to improve the solution quality significantly at similar speed. We train a graph neural network (Battaglia et al., 2018) to take a computation graph as input and output node-specific proposal distributions to use in the mutant generation step of BRKGA's inner loop. BRKGA is then run to completion with those input-dependent distribution choices, instead of input-agnostic default choices, to compute execution decisions. The distributions are predicted at each node, resulting in a high-dimensional prediction problem. There is no explicit supervision available, so we use the objective value as a reward signal in a contextual bandit approach with REINFORCE (Williams, 1992). Our approach, "Reinforced Genetic Algorithm Learning" (REGAL), uses the network's ability to generalize to new graphs to significantly improve the solution quality of the genetic algorithm for the same objective evaluation budget. We follow the static compiler approach of constructing a coarse static cost model to evaluate execution decisions and optimizing them with respect to it, as done in Addanki et al. (2018) and Jia et al. (2018). This is in contrast to evaluating the cost by executing the computation graph on hardware (Mirhoseini et al., 2017; 2018). A computationally cheap cost model enables fast optimization. It is also better suited for distributed training of RL policies, since a cost model is cheap to replicate in parallel actors while hardware environments are not. Our cost model corresponds to classical NP-hard scheduling problems, so optimizing it is difficult. In this paper we focus fully on learning to optimize this cost model, leaving integration with a compiler for future work. We structure the neural network's task as predicting proposal distributions to use in the search over execution decisions, rather than predicting the decisions themselves directly. Empirically, we have found the direct prediction approach to be too slow at inference time for our application and to generalize poorly. Our approach potentially allows the network to learn a more abstract policy that is not directly tied to detailed decisions specific to particular graphs, which may generalize better to new graphs. It can also make the learning task easier, as the search may succeed even with sub-optimal proposal distribution predictions, thus smoothing the reward function and allowing the network to incrementally learn better proposals. The node-specific proposal distribution choices provide a rich set of knobs for the network to flexibly direct the search. Combining learning with a search algorithm has been shown to be successful (e.g., Silver et al., 2017; 2018), and our work can be seen as an instance of the same high-level idea.
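The contextual-bandit REINFORCE setup described above can be sketched compactly. The snippet below is a simplified stand-in, not REGAL itself: the graph neural network is replaced with a plain logits matrix, the per-node action is assumed to be a choice among K candidate proposal distributions, and BRKGA is stubbed out as a black-box objective.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 4                 # nodes in the graph, candidate distributions per node
logits = np.zeros((N, K))   # stand-in for per-node GNN outputs

def run_brkga(choices):
    """Stub for running BRKGA with the chosen per-node proposal distributions
    and returning the best objective found (e.g., runtime or peak memory)."""
    return float(np.sum(choices) + 1.0)   # placeholder cost

def reinforce_step(logits, lr=0.1, baseline=0.0):
    # Per-node softmax policy over the K candidates
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    choices = np.array([rng.choice(K, p=pi) for pi in p])
    reward = -run_brkga(choices)          # lower cost => higher reward
    # REINFORCE for a one-step (contextual bandit) episode:
    # grad log pi(a) = one_hot(a) - p, scaled by the advantage
    grad = -p
    grad[np.arange(N), choices] += 1.0
    logits += lr * (reward - baseline) * grad
    return reward

for _ in range(100):
    reinforce_step(logits)
```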
This paper makes several contributions : • We are the first to demonstrate learning a policy for jointly optimizing placement and scheduling that generalizes to a broad set of real-world TensorFlow graphs . REGAL significantly outperforms all baseline algorithms on two separate tasks of minimizing runtime and peak memory usage ( section 5.3 ) on datasets constructed from 372 unique real-world TensorFlow graphs , the largest dataset of its kind in the literature and at least an order of magnitude larger than the ones in previous works ( Mirhoseini et al. , 2017 ; 2018 ; Chen et al. , 2018b ; Addanki et al. , 2018 ; 2019 ) . • We use a graph neural network to predict mutant sampling distributions of a genetic algorithm , specifically BRKGA , for the input graph to be optimized . This directs BRKGA ’ s search in an input-dependent way , improving solution quality for the same search budget . • We compare extensively to classical optimization algorithms , such as enumerative search , local search , genetic search , and other heuristics , and analyze room-for-improvement in the objective value available to be captured via learning . Both are missing in previous works . 2 RELATED WORK . Learning to optimize computation graphs : AutoTVM ( Chen et al. , 2018b ) applies learning to the very different problem of optimizing low-level implementations of operators in a tensor program , while we focus on optimizing higher-level decisions such as placement and scheduling of ops . Mao et al . ( 2019 ) use graph neural nets and RL to learn a scheduling policy for data processing jobs on clusters . These works are conceptually similar to ours in their use of learning , applied to a different domain . Learning for combinatorial optimization : Our work is an instance of applying learning for combinatorial optimization ( Bengio et al. , 2018 ) . Previous works on learning graph combinatorial optimization algorithms ( e.g. , Li et al . ( 2018 ) ; Khalil et al . ( 2017 ) ) have focused on problems such as Minimum Vertex Cover , Maximum Clique , Maximum Independent Set , etc . The task scheduling problem we consider is significantly different in that the objective value is a more complex function on node-level decisions . Also , we focus on large-scale , real-world TensorFlow graphs , while e.g. , Khalil et al . ( 2017 ) uses small-scale , synthetic graph distributions . Learning a proposal distribution for stochastic search : Bunel et al . ( 2017 ) learns a policy for predicting instance-dependent proposal distributions to be used in the stochastic optimizer STOKE ( Schkufza et al. , 2013 ) for superoptimizing programs . However , it uses handcrafted instance features and shows results on relatively simple , small programs . In contrast , we automatically learn the instance representations and show results on real-world graphs . An earlier work by Paige & Wood ( 2016 ) similarly learns a neural network to predict input-dependent proposal distributions for sequential Monte Carlo search for inference in a graphical model . Optimization without learning : Parallel task scheduling ( Sinnen , 2007 ; Kwok & Ahmad , 1999 ) is a classical problem for scheduling ops in a computational graph to minimize runtime . Learning is not traditionally a part of the approaches proposed in this literature . Mayer et al . ( 2017 ) studies greedy task scheduling approaches for TensorFlow . Jia et al . 
(2018) develops a simulation-based optimizer for deep learning computation graphs that uses a larger decision space by combining data, model, and attribute parallelism. Our approach can potentially be extended to such larger decision spaces to achieve even bigger improvements in execution cost. 3 BACKGROUND. Figure 1 shows an overview of our approach. Given an input graph to optimize, instead of applying BRKGA directly with the default uniform distribution at all nodes, a graph neural network predicts beta distribution choices at each node. BRKGA is run with these choices to optimize placement and scheduling decisions with respect to the objective defined by the performance model. We first explain the performance model and BRKGA in this section, and the learning component in the next. 3.1 PERFORMANCE MODEL. A computation graph has a set of ops to run. Each op produces zero or more tensors and requires zero or more tensors as input. The runtime of each op is known and fixed (e.g., given by a simulator as in Jia et al. (2018)). The memory use of each tensor is known (an assumption that holds in static compilers like XLA). We assume a collection of d homogeneous devices that have separate local memory and can run at most one op at a time. An op can run only when its input tensors are present in local memory. Tensors can be transferred across devices by synchronous (blocking) transfers. Tensors are freed from local memory after all local consumers have run. In this setting, we consider the problem of finding an assignment of ops to devices and an overall schedule such that each op is run once, with the objectives of (1) minimizing the peak local memory use across devices (e.g., to find a feasible way to run a large computation graph), or (2) minimizing the runtime subject to a constraint on the peak memory used on any device. The performance model does not consider rematerialization of tensors, fragmentation when computing memory use, or asynchronous transfers between devices. Despite these simplifications, the model yields slight variants of problems that are known to be NP-hard (Eyraud-Dubois et al., 2015) and therefore remains a challenging setting in which to study how to learn an optimizer. See section A.4 for more details of the model.
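As a concrete illustration of the peak-memory objective under this simplified model, the sketch below evaluates a given placement and global schedule. It is our own simplification for exposition, not the paper's cost model: transfers are tracked only through residency, the source copy of a transferred tensor is kept until its last consumer anywhere has run, and op runtimes are ignored.

```python
def peak_memory(ops, size, consumers, placement, schedule, num_devices):
    """ops: op -> (input tensors, output tensors); size: tensor -> bytes;
    consumers: tensor -> number of ops consuming it; placement: op -> device;
    schedule: global op order consistent with the graph's dependencies."""
    mem = [0] * num_devices
    peak = [0] * num_devices
    resident = [set() for _ in range(num_devices)]
    remaining = dict(consumers)
    for op in schedule:
        dev = placement[op]
        inputs, outputs = ops[op]
        for t in inputs:                        # transfer input if not local
            if t not in resident[dev]:
                resident[dev].add(t)
                mem[dev] += size[t]
        for t in outputs:                       # allocate op outputs locally
            resident[dev].add(t)
            mem[dev] += size[t]
        peak[dev] = max(peak[dev], mem[dev])
        for t in inputs:                        # free once all consumers ran
            remaining[t] -= 1
            if remaining[t] == 0:
                for q in range(num_devices):
                    if t in resident[q]:
                        mem[q] -= size[t]
                        resident[q].discard(t)
    return max(peak)
```

For the runtime objective one would additionally simulate per-device clocks and blocking transfer times, which we omit here.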
This paper proposes an ML-based method to optimize TensorFlow graph execution. Specifically, it combines graph neural networks (GNNs) and BRKGA (a genetic algorithm) to search over the joint space of TF node-device placement and scheduling. The core claims on the advantages of this method are that (1) it co-searches the placement and scheduling space, and (2) the trained model can generalize to different graphs and its inference cost is very small. The experimental results show that REGAL can outperform a few baseline methods on this problem.
SP:3044e58365db4d2ac46059803c3696add32881ba
In this paper, the authors propose a framework for generating task schedules for a compiler to reduce the execution cost of neural networks. A computation graph is first fed into a GNN to produce beta distributions, which are then fed into the BRKGA algorithm to yield the encoded solutions. The motivation is interesting, and the proposed method is technically reasonable. The details are also included in the appendix. To improve the quality, the following concerns may be considered:
SP:3044e58365db4d2ac46059803c3696add32881ba
The Detection of Distributional Discrepancy for Text Generation
1 INTRODUCTION. Text generation by neural language models (LMs) such as LSTMs (Hochreiter & Schmidhuber, 1997) has made much progress, and such models are now used for dialogue generation (Li et al., 2017), machine translation (Wu et al., 2016) and image captioning (Xu et al., 2015). However, the generated sentences are still poor in semantics and global coherence, and are often not even grammatically perfect (Caccia et al., 2019). This means that the discrepancy between generated text and real text is large. One reason lies in the architecture and parameter count of the LM itself (Radford et al., 2019; Santoro et al., 2018). Many researchers attribute it to exposure bias (Bengio et al., 2015): the LM is trained with maximum likelihood estimation (MLE) and predicts the next word conditioned on words from the ground truth during training, but conditions only on words generated by itself during inference. Statistically, this discrepancy means that the distribution functions of real texts and generated texts are different. Reducing this distributional difference may be a practicable way to improve text generation. Some researchers try to reduce this difference with GANs (Goodfellow et al., 2014). They use a discriminator to detect the discrepancy between real samples and generated samples, and feed the signal back to update the generator (an LM). To address the non-differentiability that arises from handling discrete tokens, reinforcement learning (RL) (Williams, 1992) is adopted by SeqGAN (Yu et al., 2017), RankGAN (Lin et al., 2017), and LeakGAN (Guo et al., 2018). The Gumbel-Softmax is also introduced by GSGAN (Jang et al., 2017) and RelGAN (Nie et al., 2019) to solve this issue. These language GANs pre-train both the generator (G) and the discriminator (D) before adversarial learning (an exception is RelGAN, which need not pre-train D). During adversarial learning, in each round G is trained for several epochs and then D is trained for tens of epochs; learning stops when the model converges. Furthermore, to consider the quality and diversity of the generated texts simultaneously (Shi et al., 2018), MaskGAN (Fedus et al., 2018), DpGAN (Xu et al., 2018), FMGAN (Chen et al., 2018) and RelGAN (Nie et al., 2019) are proposed. They evaluate the generated text with BLEU and self-BLEU (Zhu et al., 2018) or LM score and reverse LM score (Cífka et al., 2018), and claim these GANs improve the performance of the generator. However, questions have recently been raised over these claims. Semeniuta et al. (2018) and Caccia et al. (2019) showed, via more precise experiments and evaluation, that these GAN variants are defeated by a well-adjusted language model. d'Autume et al. (2019) trained language GANs from scratch; nevertheless, they only achieve performance "comparable" to an LM. He et al. (2019) quantifies exposure bias and concludes that its effect is either a 3 percent drop in performance or indistinguishable. All the aforementioned methods treat the GAN as a black box for evaluation. For these language GANs, several critical issues remain unclear: whether D detects the discrepancy, whether the detected discrepancy is severe, and whether the feedback signal from D can improve the generator. In this paper, we try to resolve these issues by investigating the GAN in both the pre-training and the adversarial learning process.
Theoretically analyzing the signal from D, we obtain two metric functions to measure the distributional difference. With these two functions, we first measure the difference between real text and text generated by an MLE-trained language model (pre-training). Second, we try several methods to update the generator with the feedback signal from D, and then use these metric functions to evaluate the updated generator. Finally, we analyze existing language GANs during adversarial learning with these two functions. All the code and data can be found at https://github.com/. Our contributions are as follows: • We propose two metric functions to measure the distributional difference between real text and generated text, together with a method to estimate them. • Evaluated with these two functions, a number of experiments show that there is an obvious discrepancy between real text and generated text, even when the text is generated by a well-adjusted language model. • Although this discrepancy can be detected by D, the feedback signal from D cannot improve G using existing methods. • Experimenting on two existing language GANs, SeqGAN and RelGAN, the distributional discrepancy between real text and generated text increases with more adversarial learning rounds, demonstrating that both of these language GANs fail. 2 METHOD. In a GAN, the generator G_θ implicitly defines a probability distribution p_θ(x) to mimic the real data distribution p_d(x): min_{G_θ} max_{D_φ} V(D_φ, G_θ) = E_{x∼p_d(x)}[log D_φ(x)] + E_{x∼p_θ(x)}[log(1 − D_φ(x))]. (1) We use D_φ to detect the discrepancy between p_θ(x) and p_d(x), and optimize it as max_{D_φ} V(D_φ, G_θ) = max_{D_φ} E_{x∼p_d}[log D_φ(x)] + E_{x∼p_θ}[log(1 − D_φ(x))]. (2) Let D*_φ(x) be the optimal solution for a given θ; according to Goodfellow et al. (2014), D*_φ(x) = p_d(x) / (p_d(x) + p_θ(x)). (3) From this we obtain two metric functions to measure the discrepancy. Note that D*_φ(x) ≥ 0.5 iff p_d(x) ≥ p_θ(x), and D*_φ(x) < 0.5 iff p_d(x) < p_θ(x). (4) With this, integrals of the density functions can be converted into sample statistics. Based on that, we obtain a way, described next, to compute the discrepancy precisely. 2.1 APPROXIMATE DISCREPANCY. Let q_d(x) = p_d(x) / (p_d(x) + p_θ(x)) and q_θ(x) = p_θ(x) / (p_d(x) + p_θ(x)). (5) Thus q_d(x) = p(x comes from real data | x), q_θ(x) = p(x comes from generated data | x), and q_d(x) + q_θ(x) = 1. With Equation 5, we obtain a constraint and an approximate measure of the distributional difference. Figure 1(a) illustrates the relationship between q_θ(x) and q_d(x). Let u_d = E_{x∼p_d(x)}[D*_φ(x)] and u_θ = E_{x∼p_θ(x)}[D*_φ(x)]. (6) These are two statistics: the expectations of D*_φ's predictions on real text and on generated text, respectively. From the above equations, it is easy to derive ½(u_d + u_θ) = 0.5. (7) This gives a constraint for D_φ converging to D*_φ, which we should take into account when estimating the ideal function D*_φ. From Equation 3, optimizing the discriminator makes u_d large and u_θ small. So we can estimate the distributional discrepancy with the following function. Intuitively, using u_d and u_θ, we obtain a metric function measuring the discrepancy between p_θ(x) and p_d(x): d_a = u_d − u_θ. (8) We call it the approximate discrepancy.
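Given a trained discriminator standing in for D*_φ, the approximate discrepancy of Equation (8) and the balance constraint of Equation (7) reduce to simple sample averages. A minimal sketch (our own helper, assuming score arrays in [0, 1]):

```python
import numpy as np

def approximate_discrepancy(scores_real, scores_fake):
    """d_a = u_d - u_theta (Eq. 8) from discriminator scores.

    scores_real: D(x) on samples x ~ p_d (real text)
    scores_fake: D(x) on samples x ~ p_theta (generated text)
    """
    u_d = float(np.mean(scores_real))
    u_theta = float(np.mean(scores_fake))
    balance = 0.5 * (u_d + u_theta)  # near 0.5 for a near-optimal D (Eq. 7)
    return u_d - u_theta, balance
```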
It is the difference between the average scores that a well-trained discriminator (denoted D̂_φ) assigns to real samples and to generated samples, and it reflects the discrepancy between the two sets to some degree. From Equations 5 and 6, Equation 8 becomes d_a = ∫ [q_d(x) − q_θ(x)] p_d(x) dx = E_{x∼p_d(x)}[q_d(x) − q_θ(x)]. (9) Figure 1(a) illustrates the discrepancy between the two distribution functions q_θ(x) and q_d(x); both are symmetric about the line q = 0.5. However, d_a is not a complete measure, because the integrand has both a positive part and a negative part. A complete metric function is given in the next section. 2.2 ABSOLUTE DISCREPANCY. To measure the discrepancy precisely, we define d_s = ½ ∫ |p_d(x) − p_θ(x)| dx. (10) Its range is 0 to 1, and larger values indicate greater discrepancy; d_s = 0 means p_d(x) ≡ p_θ(x), i.e., there is no discrepancy. Fortunately, this quantity can be estimated by the statistical method described by the following equation, whose proof is given in Appendix A: d_s = ½ [ P_{x∼p_d(x)}(D*_φ(x) > 0.5) − P_{x∼p_d(x)}(D*_φ(x) ≤ 0.5) + P_{x∼p_θ(x)}(D*_φ(x) ≤ 0.5) − P_{x∼p_θ(x)}(D*_φ(x) > 0.5) ]. (11) With Equation 11, we can estimate the discrepancy between p_θ(x) and p_d(x) more precisely. If the classification accuracy of D*_φ is a, then the error rate is b = 1 − a, and according to Equation 11, d_s = a − b. So the discrepancy between p_θ(x) and p_d(x) equals the classification accuracy of D*_φ minus its error rate. 2.3 USING D*_φ(x) TO IMPROVE G_θ. Given an instance x generated by G_θ, a larger D*_φ(x) means a higher probability that x is real. For an instance with D*_φ(x) = 0.8, we have p_θ(x) < p_d(x) according to Equation 3, so we should update G_θ to increase the probability density p_θ(x), which may improve the performance of G_θ. Based on this, we can select generated instances according to the value of D*_φ(x) to update the generator. In fact, we find it helpful to use the fake samples to which D̂_φ assigns higher scores. Experiment 4.3 shows the results.
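The estimator of Equation (11) is likewise a few lines of code: thresholding the discriminator at 0.5 turns d_s into balanced accuracy minus error rate. Again a hedged sketch with names of our own choosing:

```python
import numpy as np

def absolute_discrepancy(scores_real, scores_fake):
    """Estimate d_s (Eq. 11) from discriminator scores on real/generated samples."""
    real_correct = np.mean(np.asarray(scores_real) > 0.5)   # P(D > 0.5 | real)
    fake_correct = np.mean(np.asarray(scores_fake) <= 0.5)  # P(D <= 0.5 | fake)
    a = 0.5 * (real_correct + fake_correct)                 # balanced accuracy
    return 2.0 * a - 1.0                                    # d_s = a - b = 2a - 1
```

For example, a discriminator at chance level (a = 0.5) yields d_s = 0, while a perfect discriminator (a = 1) yields d_s = 1, matching the stated range of Equation (10).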
This paper proposes an estimator that quantifies the difference between the distributions of real and generated text, based on a classifier that discriminates between real and generated text. The methodology is, however, not particularly well motivated, and the experiments do not convince me that the proposed measure is superior to other reasonable choices. Overall, the writing also contains many grammatical errors and is confusing in places.
SP:cbb28b39e7a2f17e1b229130ba484e49b69ab695
This paper proposes two metrics to measure the discrepancy between generated text and real text, based on the discriminator score in GANs. Empirically, it shows that text generated by current text generation methods is still far from human-generated text, as measured by the proposed metric. The writing is a bit rough so sometimes it's hard to figure out what has been done. It's also unclear how the proposed metrics compare to simply using the discriminator for evaluation. Therefore, I'm inclined to reject the current submission.
SP:cbb28b39e7a2f17e1b229130ba484e49b69ab695
Augmenting Self-attention with Persistent Memory
1 INTRODUCTION. Transformer networks (Vaswani et al., 2017) are sequence models that rely on the attention mechanism (Bahdanau et al., 2015) to capture long-term dependencies. Since their introduction in the context of machine translation, they have been applied to many natural language processing tasks, such as language modeling (Al-Rfou et al., 2019) and sentence representation (Devlin et al., 2019). On most of them, they now surpass the former state-of-the-art models based on recurrent (Hochreiter & Schmidhuber, 1997) or convolutional networks (Dauphin et al., 2017). At their core, transformers use a self-attention layer that forms a representation of the current input by gathering the most relevant information from its context. This layer is repeated along the network depth, allowing information to flow over long distances and to form rich sequence representations. The self-attention mechanism is often considered the key component of their success, and many have worked on improving transformers by increasing the size of the context captured by those layers (Wu et al., 2019; Dai et al., 2019; Sukhbaatar et al., 2019). However, self-attention layers are not the only component of transformer networks, and they do not by themselves explain the effectiveness of transformers. Each of these layers is followed by a feedforward layer, and these feedforward layers contain most of the parameters of the model. This suggests that their role is probably as important as the self-attention mechanism. In fact, the transformer layer, i.e., the sequence of self-attention and feedforward sublayers, should be regarded as a single mechanism that gathers information from the context and transforms it into a rich representation. Having two such different layer types at the core makes transformer models harder to analyse and understand. In particular, there are not many works exploring the properties of feedforward layers. In this work, we simplify the transformer architecture by revisiting its mechanism while keeping its properties. We introduce a new layer that merges the self-attention and feedforward sublayers into a single unified attention layer, as illustrated in Figure 1. As opposed to the two-step mechanism of the transformer layer, it directly builds its representation from the context and a persistent memory block, without going through a feedforward transformation. The additional persistent memory block stores, in the form of key-value vectors, information that does not depend on the context. In terms of parameters, these persistent key-value vectors replace the feedforward sublayer. This modification dramatically simplifies the structure of the network with no loss of performance. We evaluate the resulting architecture on standard word-level and character-level language modeling benchmarks and report performance that is competitive with transformers. (Figure 1 panels: self-attention, feedforward, all-attention.) 2 RELATED WORK. Neural language modeling. Different network architectures have been proposed for language modeling, such as feedforward networks (Bengio et al., 2003a), recurrent networks (Mikolov et al., 2010), gated convolutional networks (Dauphin et al., 2017) and transformer networks (Vaswani et al., 2017). Of particular interest, Al-Rfou et al. (2019) apply deep transformers to character-level language modeling. Dai et al.
(2019) introduce a caching mechanism, relying on the relative position embeddings from Shaw et al. (2018), which makes inference in these models much more efficient for unbounded sequences. More recently, Sukhbaatar et al. (2019) add a learnable self-attention span to extend the size of the context. Word-level language models deal with large vocabularies, and computing the most probable word is computationally demanding. Solutions are to replace the softmax loss with an approximation (Goodman, 2001; Morin & Bengio, 2005), to sample from the vocabulary during training (Bengio et al., 2003b; Jozefowicz et al., 2016), or to include subword units (Sennrich et al., 2016). A simple yet effective solution is to replace the loss by a hierarchical softmax designed to better take advantage of GPU specifics (Grave et al., 2017a). Finally, many works focus on the regularization of large language models. In particular, Zaremba et al. (2014) show that dropout (Srivastava et al., 2014) is effective for recurrent networks. More recently, Press & Wolf (2017) show that tying the embedding and classifier weights significantly improves generalization. Baevski & Auli (2019) further show that combining this regularization technique with the adaptive softmax of Grave et al. (2017a) reduces the memory footprint of a transformer while improving its performance. Attention-based models. The attention mechanism was first introduced in the context of mixtures of experts by Jordan & Jacobs (1994). It is only recently that Bahdanau et al. (2015) showed its potential when used in neural networks, in the context of machine translation. Since then, this mechanism has been commonly incorporated within many models, with applications in natural language processing and computer vision beyond transformers. Sukhbaatar et al. (2015) apply the attention mechanism to the same sequence, i.e., so-called self-attention, in an auto-regressive model called the end-to-end memory network, and show its potential in the context of language modeling. Graves et al. (2014) use the attention mechanism for reading from and writing to an internal memory to solve algorithmic tasks. Vinyals et al. (2015) combine this self-attention mechanism with a recurrent network to solve simple algorithmic problems. Later, Merity et al. (2017) show that these networks can be used as language models if combined with a cache mechanism (Grave et al., 2017b). The attention mechanism has also been applied to question answering (Miller et al., 2016) and image captioning (Xu et al., 2015). Finally, Shazeer et al. (2017) use the attention mechanism as a mixture of experts in a recurrent network. 3 TRANSFORMER LAYER. A transformer model is made of a stack of identical layers, called transformer layers. Each layer is composed of a multi-head self-attention sublayer followed by a feedforward sublayer. Each sublayer is also followed by an add-norm operation, i.e., a skip-connection (He et al., 2016) and layer normalization (Lei Ba et al., 2016). In this section, we review the structure of the transformer layer and refer the reader to Vaswani et al. (2017) for additional details of the overall model. Multi-head self-attention sublayer. A core mechanism of a transformer network is the multi-head self-attention layer, which consists of multiple attention heads applied in parallel.
Each attention head applies the attention mechanism of Bahdanau et al. (2015) to an input sequence of vectors. More formally, given a sequence x_1, ..., x_T of d-dimensional input vectors, each head applies two linear transformations to these vectors to form the key and value vectors: k_t = W_k x_t, (1) v_t = W_v x_t, (2) where W_k and W_v are the "key" and "value" matrices of size d_h × d, where d_h = d/H is the dimension of a head and H is the number of heads. The key vectors are then used to compute a similarity score between an element t of the input sequence and all the elements of its context C_t. The context can be, for instance, the elements of the sequence that precede t in the case of language modeling, or the whole sequence in the encoder for machine translation. The similarity score between t and an element c of its context C_t is defined as s_{tc} = x_t^⊤ W_q^⊤ (k_c + p(t, c)), (3) where W_q ∈ R^{d_h × d} is the "query" matrix, and p(t, c) is a position encoding function. There are several ways to encode positions: fixed absolute (Vaswani et al., 2017), learned absolute (Al-Rfou et al., 2019), and learned relative (Sukhbaatar et al., 2015; Shaw et al., 2018). The relative position encoding function improves the efficiency for unbounded sequences, making it useful for language modeling (Dai et al., 2019). In this paper, we thus use the relative position encoding defined as p(t, c) = u_{t−c}, where the u_i are position embeddings learned during training. The head then outputs a vector y_t by taking the average of the context representations weighted by attention weights a_{tc}, obtained by applying a softmax function to the similarity scores: y_t = Σ_{c∈C_t} a_{tc} (v_c + p(t, c)), with a_{tc} = exp(s_{tc}/√d_h) / Σ_{i∈C_t} exp(s_{ti}/√d_h). (4) Note that one can use different position encoding functions for the key and value sides. Finally, the outputs from the different heads are concatenated for each timestep t and multiplied by the d × d "output" matrix W_o. The final output of this sublayer is thus a sequence of T vectors of dimension d. Feedforward sublayer. The second element of a transformer layer is a fully connected feedforward layer. This sublayer is applied to each position t in the input sequence independently, and consists of two affine transformations with a pointwise non-linear function in between: FF(x_t) = U σ(V x_t + b) + c, (5) where σ(x) = max(0, x) is the ReLU activation function; V and U are matrices of dimension d_f × d and d × d_f, respectively; b and c are bias terms. Typically, d_f is set to be 4 times larger than d. Add-norm. Both the multi-head self-attention and the feedforward layer are followed by an add-norm operation. This transformation is simply a residual connection (He et al., 2016) followed by layer normalization (Lei Ba et al., 2016). The layer normalization computes the average and standard deviation of the output activations of a given sublayer and normalizes them accordingly. This guarantees that the input y_t of the following sublayer is well conditioned, i.e., that y_t^⊤ 1 = 0 and ‖y_t‖_2 = √d. More precisely, the AddNorm operation is defined as AddNorm(x_t) = LayerNorm(x_t + Sublayer(x_t)), (6) where Sublayer is either a multi-head self-attention or a feedforward sublayer. Transformer layer.
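To make Equations (1)-(4) concrete, here is a small NumPy sketch of a single causal attention head with learned relative position embeddings. It is an illustration under our own simplifications (one head, a loop over time, position embeddings passed in explicitly), not the authors' implementation.

```python
import numpy as np

def attention_head(X, Wq, Wk, Wv, pos_k, pos_v):
    """X: (T, d) inputs; Wq, Wk, Wv: (dh, d) projections;
    pos_k, pos_v: (T, dh) relative position embeddings, row i = u_i for
    distance i = t - c >= 0 in a causal (language modeling) context."""
    T, _ = X.shape
    dh = Wk.shape[0]
    Q, K, V = X @ Wq.T, X @ Wk.T, X @ Wv.T       # per-timestep q_t, k_t, v_t
    Y = np.zeros((T, dh))
    for t in range(T):
        ctx = np.arange(t + 1)                    # context C_t = {0, ..., t}
        s = (K[ctx] + pos_k[t - ctx]) @ Q[t] / np.sqrt(dh)  # Eq. (3), scaled
        a = np.exp(s - s.max())
        a /= a.sum()                              # Eq. (4): softmax weights
        Y[t] = a @ (V[ctx] + pos_v[t - ctx])      # Eq. (4): weighted values
    return Y

# A full sublayer would run H such heads, concatenate their outputs per
# timestep, and multiply by the output matrix W_o.
```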
Transformer layer . The overall transformer layer has the following set of equations :

$$z_t = \mathrm{AddNorm} ( \mathrm{MultiHead} ( x_t ) ) , \qquad ( 7 )$$
$$y_t = \mathrm{AddNorm} ( \mathrm{FF} ( z_t ) ) , \qquad ( 8 )$$

where MultiHead is the multi-head self-attention sublayer . This is shown on the left panel of Fig . 1 . 4 OUR APPROACH . In this section , we first show that a feedforward sublayer can be viewed as an attention layer . Then , we take advantage of this interpretation of a feedforward model to concatenate it with the self-attention layer , forming a novel layer that relies solely on a multi-head attention layer without the need for a feedforward sublayer .
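As a reference point for the modification just described, here is a minimal sketch of the standard transformer layer of Eqs. (7)-(8), wiring the two sublayers through AddNorm. The sublayers are passed in as callables so the block stays self-contained; everything here is an illustrative assumption rather than the paper's implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each position to zero mean and unit standard deviation."""
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def add_norm(x, sublayer):
    """AddNorm(x) = LayerNorm(x + Sublayer(x)), per Eq. (6)."""
    return layer_norm(x + sublayer(x))

def transformer_layer(x, multi_head, ff):
    """Eqs. (7)-(8): z = AddNorm(MultiHead(x)); y = AddNorm(FF(z))."""
    z = add_norm(x, multi_head)
    return add_norm(z, ff)

# Wiring check with identity sublayers standing in for MultiHead and FF.
x = np.random.randn(8, 16)
assert transformer_layer(x, lambda v: v, lambda v: v).shape == x.shape
```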
This paper proposes a simple modification to the ubiquitous Transformer model. Noticing that the feed-forward layer of a Transformer layer looks a bit like an attention over "persistent" memory vectors, the authors propose to explicitly incorporate this notion directly into the self-attention layer. This involves concatenating the contextual representations with global, learned memory vectors, which are attended over.
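A minimal sketch of the persistent-memory idea summarized above: the learned, input-independent key/value vectors are concatenated with the contextual keys and values and attended over jointly. Names and shapes are assumptions for illustration, not the paper's code.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def all_attention_head(q, K_ctx, V_ctx, K_mem, V_mem):
    """q: (dh,) query for one position; K_ctx, V_ctx: (Tc, dh) contextual
    keys/values; K_mem, V_mem: (N, dh) persistent, input-independent
    key/value vectors learned as parameters."""
    dh = q.shape[0]
    K = np.vstack([K_ctx, K_mem])       # context and memory share one softmax
    V = np.vstack([V_ctx, V_mem])
    a = softmax(K @ q / np.sqrt(dh))    # attend jointly over both
    return a @ V
```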
SP:43fde058475c6f68bfcad69c3bed3982344e5fa2
Augmenting Self-attention with Persistent Memory
This paper considers an architecture change to the transformer in which they swap the feedforward subcomponent of the standard transformer with an "attention only" variant that includes persistent "memory" vectors. The model is evaluated against a suite of baselines on the tasks of character- and word-level language modeling. Combining this "all attention" approach with adaptive span yields results roughly equivalent to the SOTA, in some cases with fewer parameters than existing models. The authors do a nice job of presenting ablation results. A key finding here, for example, is that a model stripped of both persistent vectors and the feedforward sublayer performs poorly.
Gradient Surgery for Multi-Task Learning
1 INTRODUCTION . While deep learning and deep reinforcement learning ( RL ) have shown considerable promise in enabling systems to perform complex tasks , the data requirements of current methods make it difficult to learn a breadth of capabilities , particularly when all tasks are learned individually from scratch . A natural approach to such multi-task learning problems is to train a single network on all tasks jointly , with the aim of discovering shared structure across the tasks in a way that achieves greater efficiency and performance than solving the tasks individually . However , learning multiple tasks all at once results in a difficult optimization problem , sometimes leading to worse overall performance and data efficiency compared to learning tasks individually ( Parisotto et al. , 2015 ; Rusu et al. , 2016a ) . These optimization challenges are so prevalent that multiple multi-task RL algorithms have considered using independent training as a subroutine of the algorithm before distilling the independent models into a multi-tasking model ( Levine et al. , 2016 ; Parisotto et al. , 2015 ; Rusu et al. , 2016a ; Ghosh et al. , 2017 ; Teh et al. , 2017 ) , producing a multi-task model but losing out on the efficiency gains over independent training . If we could tackle the optimization challenges of multi-task learning effectively , we may be able to actually realize the hypothesized benefits of multi-task learning without the cost in final performance . While there has been a significant amount of research in multi-task learning ( Caruana , 1997 ; Ruder , 2017 ) , the optimization challenges are not well understood . Prior work has described varying learning speeds of different tasks ( Chen et al. , 2017 ) and plateaus in the optimization landscape ( Schaul et al. , 2019 ) as potential causes , while a range of other works have focused on the model architecture ( Misra et al. , 2016b ; Liu et al. , 2018 ) . In this work , we instead hypothesize that the central optimization issue in multi-task learning arises from gradients from different tasks conflicting with one another . In particular , we define two gradients to be conflicting if they point away from one another ( i.e. , have a negative cosine similarity ) . As a concrete example , consider the 2D optimization landscapes of two task objectives shown in Figure 1 . The optimization landscape of each task consists of a deep valley , as has been observed for neural network optimization landscapes in the past ( Goodfellow et al. , 2014 ) . When considering the combined optimization landscape for multiple tasks , SGD produces gradients that struggle to efficiently find the optimum . This occurs due to a gradient thrashing phenomenon , where the gradient of one task destabilizes optimization in the valley . We can observe this in Figure 1 ( d ) when the optimization reaches the deep valley of task 1 , but is prevented from traversing the valley to an optimum . In Section 6.2 , we find experimentally that this thrashing phenomenon also occurs in a neural network multi-task learning problem . The core contribution of this work is a method for mitigating gradient interference by altering the gradients directly , i.e . by performing “ gradient surgery ” . If two gradients are conflicting , we alter the gradients by projecting each onto the normal plane of the other , preventing the interfering components of the gradient from being applied to the network .
We refer to this particular form of gradient surgery as projecting conflicting gradients ( PCGrad ) . PCGrad is model-agnostic , requiring only a single modification to the application of gradients . Hence , it is easy to apply to a range of problem settings , including multi-task supervised learning and multi-task reinforcement learning , and can also be readily combined with other multi-task learning approaches , such as those that modify the architecture . We evaluate PCGrad on multi-task CIFAR classification , multi-objective scene understanding , a challenging multi-task RL domain , and goal-conditioned RL . Across the board , we find PCGrad leads to significant improvements in terms of data efficiency , optimization speed , and final performance compared to prior approaches . Further , on multi-task supervised learning tasks , PCGrad can be successfully combined with prior state-of-the-art methods for multi-task learning for even greater performance . 2 PRELIMINARIES . The goal of multi-task learning is to find parameters $\theta$ of a model $f_\theta$ that achieve high average performance across all the training tasks drawn from a distribution of tasks $p ( T )$ . More formally , we aim to solve the problem :

$$\min_\theta \; \mathbb{E}_{T_i \sim p ( T )} \left[ L_i ( f_\theta ) \right] ,$$

where $L_i$ is a loss function for the $i$-th task $T_i$ that we want to minimize . To obtain a model that solves a specific task from the task distribution $p ( T )$ , we define a task-conditioned model $f_\theta ( y \mid x , z_i )$ , with input $x$ , output $y$ , and encoding $z_i$ for task $T_i$ , which could be provided as a one-hot vector or in any other form . 3 MULTI-TASK LEARNING VIA GRADIENT SURGERY . While the multi-task problem can in principle be solved by simply applying a standard single-task algorithm with a suitable task identifier provided to the model or a simple multi-head or multi-output model , a number of prior works ( Parisotto et al. , 2015 ; Rusu et al. , 2016a ; Sener & Koltun , 2018 ) have found this learning problem to be difficult , especially in the reinforcement learning setting . We hypothesize that one of the main challenges of multi-task learning can be characterized by conflicting and thrashing gradients , and find that this can significantly impede learning progress , especially when combined with iterative data collection . We identify possible causes for this problem and propose a simple and general approach to mitigate it . 3.1 THRASHING GRADIENTS IN MULTI-TASK OPTIMIZATION LANDSCAPES . We hypothesize that a key optimization issue in multi-task learning arises when gradients from multiple tasks are in conflict with one another , i.e . when gradients point away from one another . More specifically , we hypothesize that such conflict may lead to gradient thrashing . Concretely , gradient thrashing refers to the phenomenon where a large gradient for one task changes the parameter vectors in a way that substantially decreases performance on another task . Since worse performance typically leads to larger gradients , this results in alternating gradient directions , where , at the next iteration , the second task will have large gradients that dominate and reduce performance on the former task . This issue can be particularly pronounced for neural network optimization , since neural network loss landscapes are known to resemble long narrow valleys ( Goodfellow et al. , 2014 ) , where the gradient perpendicular to the direction of the valley will be small . We aim to study this hypothesis through two toy examples .
First , consider the two-dimensional optimization landscape illustrated in Fig . 1a , where the landscape for each task objective corresponds to a deep and curved valley ( Fig . 1b and 1c ) . The optima of this multi-task objective correspond to where the two valleys meet . More details on the optimization landscape are in Appendix B . The gradient thrashing hypothesis is consistent with what we observe when running Adam ( Kingma & Ba , 2014 ) on this landscape in Fig . 1d , where Adam does not traverse one valley towards the other , preventing it from reaching an optimum . We also aim to detect if a similar phenomenon occurs in multi-task learning with a neural network with thousands of parameters on a toy regression problem . To measure the extent of gradient thrashing , we plot the cosine similarity between the gradients of two tasks throughout the beginning of learning in Fig . 4 ( left ) . We indeed observe a significant level of gradient thrashing at every iteration , where the cosine similarity varies between $-0.75$ and $0.75$ at a very high frequency . Motivated by these observations , we develop an algorithm that aims to alleviate the optimization challenges caused by gradient thrashing by preventing such gradient conflict between tasks . 3.2 PCGRAD : PROJECTING CONFLICTING GRADIENTS . We aim to prevent gradient thrashing by directly altering the gradients themselves , i.e . through “ gradient surgery. ” To be maximally effective and maximally applicable , we must perform surgery in a way that still allows for positive interactions between the task gradients and does not introduce any assumptions on the form of the model . We start by first detecting whether two gradients are in conflict , by measuring whether they point away from one another . More concretely , we characterize two tasks as conflicting for the current parameter setting if they yield a negative cosine similarity between their respective gradients . The goal of PCGrad is to modify the gradients for each task so as to minimize negative conflict with other task gradients , which will in turn mitigate gradient thrashing . To deconflict gradients during optimization , PCGrad adopts a simple procedure : if the gradients between two tasks are in conflict , i.e . their cosine similarity is negative , we project the gradient from one task onto the normal plane of the gradient of the other task . This amounts to removing the conflicting component of the gradient for the task , thereby reducing the amount of destructive gradient interference between tasks . A pictorial description of this idea is shown in Fig . 2 . Suppose the gradient for task $T_i$ is $g_i$ , and the gradient for task $T_j$ is $g_j$ . PCGrad proceeds as follows : ( 1 ) First , it determines whether $g_i$ conflicts with $g_j$ by computing the cosine similarity between vectors $g_i$ and $g_j$ , where negative values indicate conflicting gradients . ( 2 ) If the cosine similarity is negative , we replace $g_i$ by its projection onto the normal plane of $g_j$ :

$$g_i = g_i - \frac{ g_i \cdot g_j }{ \| g_j \|^2 } \, g_j .$$

If the gradients are not in conflict , i.e . cosine similarity is non-negative , the original gradient $g_i$ remains unaltered .
( 3 ) PCGrad repeats this process across all of the other tasks sampled in random order from the current batch , $T_j \; \forall j \neq i$ , resulting in the gradient $g_i^{\text{proj}}$ that is applied for task $T_i$ . We perform the same procedure for all tasks in the batch to obtain their respective gradients . The full update procedure is described in Algorithm 1 and a discussion on using a random task order is included in Appendix D .

Algorithm 1 PCGrad Update Rule
Require : Current model parameters $\theta$
1 : Sample mini-batch of tasks $B = \{ T_k \} \sim p ( T )$
2 : for $T_i \in B$ in sequence do
3 :   Compute gradient of $T_i$ as $g_i = \nabla_\theta L_i ( f_\theta )$
4 :   for $T_j \in B \setminus \{ T_i \}$ in random order do
5 :     Compute gradient of task $T_j$ as $g_j = \nabla_\theta L_j ( f_\theta )$
6 :     Compute cosine similarity between $g_i$ and $g_j$ as $\cos ( \phi_{ij} ) = \frac{ g_i \cdot g_j }{ \| g_i \| \| g_j \| }$
7 :     if $\cos ( \phi_{ij} ) < 0$ then
8 :       Set $g_i = g_i - \frac{ g_i \cdot g_j }{ \| g_j \|^2 } g_j$ // Subtract the projection of $g_i$ onto $g_j$
9 :     end if
10 :   end for
11 :   Store $g_i^{\text{proj}} = g_i$
12 : end for
13 : return update $\Delta\theta = \sum_i g_i^{\text{proj}}$

This procedure , while simple to implement , ensures that the gradients that we apply for each task per batch interfere minimally with the other tasks in the batch , mitigating the thrashing gradient problem and producing a variant on standard first-order gradient descent in the multi-objective setting . In practice , the PCGrad gradient surgery method can be combined with any gradient-based optimizer , including commonly used methods such as SGD with momentum and Adam ( Kingma & Ba , 2014 ) , by simply passing the computed update to the respective optimizer instead of the original gradient . Our experimental results verify the hypothesis that this procedure reduces the problem of thrashing gradients , and find that , as a result , learning progress is substantially improved . Finally , we analyze the convergence of this procedure in Theorem 1 in the two-task setting , to ensure that the procedure is sensible under the standard assumptions in optimization . Theorem 1 . Consider two task loss functions $L_1 : \mathbb{R}^n \to \mathbb{R}$ and $L_2 : \mathbb{R}^n \to \mathbb{R}$ which are convex and differentiable . For all $\theta \in \mathbb{R}^n$ , let $L ( \theta ) = L_1 ( \theta ) + L_2 ( \theta )$ , i.e . $L$ is a multi-task objective . Let $\phi$ be the angle between $\nabla L_1 ( \theta )$ and $\nabla L_2 ( \theta )$ . Suppose $L$ is differentiable and that its gradient is Lipschitz continuous with constant $L > 0$ , i.e . we have $\| \nabla L ( \theta_1 ) - \nabla L ( \theta_2 ) \|_2 \le L \| \theta_1 - \theta_2 \|_2$ for any $\theta_1 , \theta_2$ . Then , the PCGrad update rule with step size $t \le \frac{1}{L}$ will converge to either ( 1 ) a location in the optimization landscape where $\cos ( \phi ) = -1$ or ( 2 ) the optimal value $L ( \theta^* )$ . Proof . See Appendix A . Theorem 1 states that application of the PCGrad update in the two-task setting with a convex and Lipschitz multi-task loss function $L$ leads to convergence to either the minimizer of $L$ or a potentially sub-optimal objective value . A sub-optimal solution occurs when the cosine similarity between the gradients of the two tasks is $-1$ , i.e . the gradients directly conflict , leading to zero gradient after applying PCGrad . However , in practice , since we are using SGD , which is a noisy estimate of the true batch gradients , the cosine similarity between the gradients of two tasks in a minibatch is unlikely to be $-1$ , thus avoiding this scenario .
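The update rule above is compact enough to sketch directly. Below is a minimal NumPy rendering of Algorithm 1 operating on flattened per-task gradients; the function name pcgrad and the seeded generator are illustrative assumptions, not the authors' released implementation. In practice the returned update would be handed to the underlying optimizer (e.g., SGD with momentum or Adam) in place of the raw summed gradient.

```python
import numpy as np

def pcgrad(task_grads, rng=None):
    """Algorithm 1 (PCGrad) on a list of flattened per-task gradients.

    task_grads: list of 1-D float arrays, g_i = grad of L_i w.r.t. theta.
    Returns the update Delta theta = sum_i g_i^proj.
    """
    rng = rng or np.random.default_rng(0)
    projected = []
    for i, g in enumerate(task_grads):
        g_i = g.copy()
        others = [j for j in range(len(task_grads)) if j != i]
        rng.shuffle(others)                     # random task order (Appendix D)
        for j in others:
            g_j = task_grads[j]
            dot = g_i @ g_j                     # sign of dot = sign of cos(phi_ij)
            if dot < 0:                         # conflicting gradients
                g_i -= dot / (g_j @ g_j) * g_j  # project onto normal plane of g_j
        projected.append(g_i)
    return sum(projected)

# Toy check: the conflicting component between the two gradients is removed.
update = pcgrad([np.array([1.0, -1.0]), np.array([0.0, 1.0])])
```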
This paper proposes a solution for managing the case in gradient-based Multi-Task Learning (MTL) where gradients from different tasks conflict, pointing in different directions. The authors propose a simple “gradient surgery” technique that alters the gradients by projecting a conflicting gradient onto the normal plane of the other one, in order to mitigate the effect. The method is generic in the sense that it can be directly applied to various gradient-based architectures.
SP:664b4cc73449713aac9e5e0d40027993d7a17c3a
The paper presents a method to boost multi-task learning performance by editing gradients to remove conflicts between tasks. The main idea is to use cosine similarity to 1) determine whether two task gradients conflict and 2) project one conflicting gradient onto the normal plane of the other, thereby removing the conflict at the expense of disturbing the projected gradient to some extent. Experiments are presented for classification and other computer vision tasks along with reinforcement learning problems.
Reducing Computation in Recurrent Networks by Selectively Updating State Neurons
1 INTRODUCTION . Recurrent Neural Networks ( RNNs ) are the state-of-the-art approach to many sequential learning problems including speech recognition ( Graves et al. , 2013 ) , machine translation ( Bahdanau et al. , 2015 ) , and sequence generation ( Graves , 2013 ; Xu et al. , 2015 ) . However , RNNs typically rely on computationally-taxing updates to their entire hidden state at each timestep , a cost that grows with hidden state size . As demonstrated by the success of gating mechanisms such as the GRU ( Cho et al. , 2014 ) and LSTM ( Hochreiter & Schmidhuber , 1997 ) , all dimensions rarely need to be re-computed from scratch at each timestep . By discretely selecting which dimensions to update at each timestep via a learned update pattern , RNNs with a large hidden state can be trained with lower computational requirements ( Bengio et al. , 2013 ) , inference in long RNNs can be expedited ( Campos et al. , 2018 ) , and hidden representations may be made more robust to misleading inputs such as outliers or noise . Selective neuron activation in RNNs has recently gained attention in the literature ( Koutnik et al. , 2014 ; Neil et al. , 2016 ; Shen et al. , 2019 ; Jernite et al. , 2017 ; Campos et al. , 2018 ) . The most popular methods hand-craft specific update patterns , dictating which dimensions of the hidden state will update at which timesteps according to prior knowledge of a task ( Koutnik et al. , 2014 ; Neil et al. , 2016 ) . This imposes undue challenges in implementation , limits extensibility , and ignores the data-driven curation of information-flow through the RNN , a signature property of recurrent memory cells ( Hochreiter & Schmidhuber , 1997 ; Cho et al. , 2014 ) . More recent methods learn to react to input data but impose strict relationships between the update patterns across both hidden dimensions and time ( Shen et al. , 2019 ; Jernite et al. , 2017 ; Campos et al. , 2018 ) . While applicable to tasks with clear hierarchical components , such as modeling character-level text ( Chung et al. , 2017 ) , these assumptions limit the expressiveness of the learned update patterns . Specifically , we study the problem of generating a binary update-pattern for the hidden states learned by an RNN . The learned update-pattern defines which dimensions of the hidden state to update at each timestep , similar to the motivation for Residual Networks ( He et al. , 2016 ; Wang & Tian , 2016 ) and Highway Networks ( Srivastava et al. , 2015 ; Zilly et al. , 2017 ) . Ideally , only a small subset of the hidden state's dimensions needs to be updated at each timestep , especially with high-dimensional hidden states . In this way , representations can be learned while both solving a sequential learning task and minimizing the number of updates . This results in a reduction of the compute time . A solution to this multi-objective optimization problem should have a comparable accuracy to a traditional constantly-updating RNN but save the majority of computation steps along the way , ultimately accelerating inference and training ( Neil et al. , 2016 ) . Despite the potential for reducing computation required by RNNs , learning said update patterns is a challenging problem . First , binary-output neurons that make discrete decisions ( whether or not to update a hidden state dimension , for example ) in the interior of a neural network pose a classic challenge to gradient-based learning .
This is because such decisions are non-differentiable by nature and therefore backpropagation cannot be directly used to update the weights . Second , the quality of a learned update pattern is unsupervised and thus the only feedback is task-specific . This discourages making a priori assumptions about the update patterns . To address the aforementioned challenges , we propose the selective activation RNN , or SA-RNN , which parameterizes a distribution of update-likelihoods , one per hidden state dimension , from which update-decisions can be made at each timestep . We augment an RNN with an update coordinator that adaptively controls which coordinate directions to update in the hidden state on the fly . The coordinator is modeled as a lightweight neural network that observes incoming data at each timestep and makes a discrete decision about whether enough information is stored in each individual hidden dimension to warrant an update . Subsequently , each hidden dimension is either computed by the RNN or copied from the previous timestep . The coordinator's architecture is kept as simple as possible so the complexity of the RNN can scale without simply outsourcing computation to another network , similar to the controller in Ha & Schmidhuber ( 2018 ) . Most notably , in contrast to other recent approaches ( Koutnik et al. , 2014 ; Jernite et al. , 2017 ; Neil et al. , 2016 ; Campos et al. , 2018 ; Shen et al. , 2019 ; Liu et al. , 2018 ) we impose no assumptions about which individual hidden dimensions should update ( or not update ) together . Instead , we show that using an entirely-learned approach still results in complex task-specific update patterns . On three publicly-available datasets , we show that our low-bias approach achieves higher accuracy with far fewer updates than recent state-of-the-art methods ( Koutnik et al. , 2014 ; Jernite et al. , 2017 ; Neil et al. , 2016 ; Campos et al. , 2018 ) . These results indicate that predicting RNN update-patterns solely with respect to a task is not only feasible and low-bias , but is also favorable in a variety of settings . 2 RELATED WORK . Recurrent neuron update patterns have gained much interest in recent literature ( Koutnik et al. , 2014 ; Jernite et al. , 2017 ; Neil et al. , 2016 ; Chung et al. , 2017 ; Shen et al. , 2019 ; Liu et al. , 2018 ) . All of these methods boast fewer updates to the hidden states than standard RNN architectures . However , there are several limitations of these methods , two of which are summarized as follows . First , the most popular methods rely on extensively-handcrafted update patterns consisting of periodic neuron activations ( Koutnik et al. , 2014 ; Neil et al. , 2016 ; Liu et al. , 2018 ) . This requires either prior knowledge of sampling frequencies or seasonal patterns present in the data , reducing the potential extension to many sequential learning problems . Additionally , these input-agnostic periodic updates are fixed prior to learning . The choice of update periods heavily impacts the performance of the model , and sequences with irregular information flow cannot be modeled without massive state representations . Second , the most recent works allow for data-reactive update patterns ( Jernite et al. , 2017 ; Shen et al. , 2019 ; Campos et al. , 2018 ) but assume temporal hierarchies in the input sequences and study settings where this effect is exceedingly obvious ( for example , character-level sentence modeling ( Chung et al. , 2017 ) ) .
In many real-world settings , temporal hierarchies are often subtle , and forcing this assumption into the architectural design may limit its applications . Additionally , our approach is related to conditional computation , which predicts subsets of neural networks to activate depending on input data ( Bengio et al. , 2015 ; Shazeer et al. , 2017 ; Cheng et al. , 2017 ) . In many cases , when a particular concept can be represented using only a sub-network of a large neural network , computation can be saved by learning the structure of said sub-network ( Schmidhuber , 2012 ) and activating it accordingly . 3 SELECTIVE NEURON ACTIVATION FOR RNNS . We introduce the Selective-Activation RNN , or SA-RNN , a broadly-applicable augmentation to RNNs which minimizes the computation required for RNNs by facilitating unimpeded information flow across timesteps for individual dimensions of the hidden state . At its core , SA-RNN learns a data-driven strategy for discretely reading and writing information to the latent state space through the learned parameterization of an update-likelihood distribution . Despite leaving hidden dimension update patterns independent from one another , complex strategies still arise naturally depending on the sequential learning task at hand . In this section , we describe the training process of SA-RNN with $D$-dimensional hidden states on sequences of length $T$ for input data $x$ with $V$ variables . We omit biases from affine transformation equations and use notation for one training instance for ease of readability . An overview of the forward pass through SA-RNN is shown in Figure 1 . 3.1 COMPUTING HIDDEN STATES . RNNs compute a sequence of hidden states one timestep at a time ( Elman , 1990 ) , each computed by a parametric recurrence function $R ( \cdot )$ : $h_t = R ( h_{t-1} , x_t \mid \theta_r )$ . The result is a sequence of vector representations $H = \{ h_1 , \dots , h_T \}$ where each $h_t \in \mathbb{R}^D$ represents temporal dynamics of the time series up to timestep $t$ with respect to a task , preserving not only temporal dependencies but also the ordering of the inputs . A popular and powerful augmentation to the RNN , as it was originally proposed , is the Gated Recurrent Unit ( GRU ) ( Cho et al. , 2014 ) , which adds a series of gates between $h_{t-1}$ and $h_t$ to alleviate the vanishing gradient problem ( Bengio et al. , 1994 ) :

$$r_t = \sigma ( W_r h_{t-1} + U_r x_t ) \qquad ( 1 )$$
$$z_t = \sigma ( W_z h_{t-1} + U_z x_t ) \qquad ( 2 )$$
$$s_t = \phi ( W_c x_t + U_c ( r_t \odot h_{t-1} ) ) \qquad ( 3 )$$
$$\tilde{h}_t = ( 1 - z_t ) \odot h_{t-1} + z_t \odot s_t \qquad ( 4 )$$

where the $W$ and $U$ matrices are learnable parameters of shape $D \times D$ and $D \times V$ respectively ( note that in Eq . 3 , $W_c$ multiplies the input and $U_c$ the hidden state , so their shapes are swapped ) , $x_t \in \mathbb{R}^V$ is the input data at timestep $t$ , $\odot$ represents element-wise multiplication , $\sigma$ represents the sigmoid function , and $\phi$ represents a non-linearity ( traditionally the hyperbolic tangent function ) . Its design is motivated heavily by the LSTM ( Hochreiter & Schmidhuber , 1997 ) . The GRU performs soft read/write operations , recomputing the entire vector $h_t$ at each timestep since the gate $z_t \in [ 0 , 1 ]^D$ , the space of vectors with values inclusively between 0 and 1 . Instead , we propose that not all dimensions need to be updated at each timestep , as the position of the hidden state in many dimensions may often encode enough of the modeled input . Note that the output of the recurrence function is referred to as $\tilde{h}_t$ . In the next section , we describe how to compute $h_t$ , the final hidden state for timestep $t$ , which is subsequently used for computing $h_{t+1}$ or the task .
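As a concrete reference for Eqs. (1)-(4), here is a plain-NumPy sketch of a single GRU step, with biases omitted as in the text; the function signature is an illustrative assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(h_prev, x_t, Wr, Ur, Wz, Uz, Wc, Uc):
    """One GRU step following Eqs. (1)-(4); biases omitted as in the text.
    Wr, Wz, Uc have shape (D, D); Ur, Uz, Wc have shape (D, V)."""
    r = sigmoid(Wr @ h_prev + Ur @ x_t)            # Eq. (1): reset gate
    z = sigmoid(Wz @ h_prev + Uz @ x_t)            # Eq. (2): update gate
    s = np.tanh(Wc @ x_t + Uc @ (r * h_prev))      # Eq. (3): candidate state
    return (1.0 - z) * h_prev + z * s              # Eq. (4): soft interpolation
```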
3.2 SELECTIVE NEURON ACTIVATION . To reduce the computation required to generate state representations , we assume updating representations to be a sequence of binary decisions : at each timestep a neuron will either be updated or not . Thus , we propose a learned update coordinator , which generates a binary mask for each hidden dimension , forecasting which dimensions need to be updated at the next timestep . First , an update-likelihood $\tilde{u}_t$ is computed for each neuron , informed by both the data observed at the current timestep and the previous update-likelihoods :

$$\tilde{u}_t = \sigma ( W_u h_{t-1} + W_i x_t )$$

where $W_u \in \mathbb{R}^{D \times D}$ is a diagonal matrix of trainable parameters which dictate the linear relationship between $h_{t-1}$ and $\tilde{u}_t$ . $W_u$ is kept diagonal to maintain relationships between update-decisions of the same dimension while avoiding the extensive computation of a fully-connected layer , similar to the hidden state decay in Che et al . ( 2018 ) . $W_i \in \mathbb{R}^{D \times V}$ encodes the influence of the input data on the current update-likelihood , and $\sigma ( \cdot )$ represents the hard sigmoid function , bounding $\tilde{u}$ according to a slope $\alpha$ . Thus $\tilde{u}_t \in [ 0 , 1 ]^D$ , with one update-likelihood per dimension of the hidden state . To discretize $\tilde{u}_t$ , allowing information to flow unimpeded , element-wise binarization is applied :

$$u_t = \mathrm{binarize} ( \tilde{u}_t ) , \qquad ( 5 )$$

where

$$\mathrm{binarize} ( a ) = \begin{cases} 1 & \text{if } a > 0.5 , \\ 0 & \text{otherwise} . \end{cases} \qquad ( 6 )$$

We apply this final discrete update decision as a binary gating mechanism since $u_t \in \{ 0 , 1 \}^D$ :

$$h_t = u_t \odot \tilde{h}_t + ( 1 - u_t ) \odot h_{t-1} \qquad ( 7 )$$

As written , this equation requires the pre-computation of $\tilde{h}_t$ . However , through masking , the computation can be directed at only the needed updates upon calculation of $u_t$ . Thus when $u_t^n$ , the update decision for the $n$-th dimension in $h$ , is 1 , $h_t^n$ is updated according to the new information present in $\tilde{h}_t^n$ . We note that this update-decision strategy does not impose the inter-neuron assumptions of Jernite et al . ( 2017 ) ; Shen et al . ( 2019 ) ; Koutnik et al . ( 2014 ) while still allowing such strategies to be learned if they are found to be optimal by the model , since decisions are made with respect to previous decisions , similar to Campos et al . ( 2018 ) . We hypothesize that updating neurons together may generally be beneficial since complex temporal dependencies often require representations evolving in blocks of multiple neurons , as discussed in Koutnik et al . ( 2014 ) . Since binary-output neurons are inherently non-differentiable , barring the direct use of backpropagation , we approximate the gradient of the binarization function using the straight-through gradient estimator ( Bengio et al. , 2013 ) trained with slope-annealing ( Chung et al. , 2017 ) :

$$\frac{ \partial \, \mathrm{binarize} ( x ) }{ \partial x } = 1 . \qquad ( 8 )$$

By estimating the gradient in this way we avoid additional loss terms and end up with empirically reasonable approximations in comparison to other high-variance methods , such as REINFORCE ( Williams , 1992 ; Chung et al. , 2017 ; Campos et al. , 2018 ) . After computing the sequence of state representations $H$ , they are projected into the output space depending on the task at hand .
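A minimal PyTorch sketch of the coordinator's gating step, Eqs. (5)-(8), using a custom autograd function for the straight-through estimator. The exact hard-sigmoid parameterization and all names (sa_rnn_gate, w_u, W_i) are assumptions for illustration, not the authors' code.

```python
import torch

class Binarize(torch.autograd.Function):
    """Forward implements Eq. (6); backward implements the straight-through
    estimator of Eq. (8), passing gradients through the hard threshold."""
    @staticmethod
    def forward(ctx, u):
        return (u > 0.5).to(u.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

def hard_sigmoid(x, alpha):
    # One common hard-sigmoid parameterization with annealed slope alpha.
    return torch.clamp(0.5 * alpha * x + 0.5, 0.0, 1.0)

def sa_rnn_gate(h_prev, h_tilde, x_t, w_u, W_i, alpha=1.0):
    """Selective activation for one timestep. w_u is the diagonal of W_u,
    stored as a vector of size D; W_i has shape (D, V)."""
    u_soft = hard_sigmoid(w_u * h_prev + W_i @ x_t, alpha)  # update likelihoods
    u = Binarize.apply(u_soft)                              # Eq. (5)
    return u * h_tilde + (1.0 - u) * h_prev                 # Eq. (7)
```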
This paper proposes the selective activation RNN (SA-RNN), which uses an update coordinator to determine which subset of the RNN's hidden state dimensions should be updated at a given timestep. The proposed loss term is then a sum of the original objective (e.g., classification) and a weighted sum of the probabilities that each dimension will be updated at each timestep. The method is evaluated on three time series datasets: Seizures, TwitterBuzz, and Yahoo.
SP:a7eaf12be994bfd19da12b089262215f4f683e58
Reducing Computation in Recurrent Networks by Selectively Updating State Neurons
1 INTRODUCTION . Recurrent Neural Networks ( RNN ) are the state-of-the-art approach to many sequential learning problems including speech recognition ( Graves et al. , 2013 ) , machine translation ( Bahdanau et al. , 2015 ) , and sequence generation ( Graves , 2013 ; Xu et al. , 2015 ) . However , RNNs typically rely on computationally-taxing updates to their entire hidden state at each timestep , a cost that grows with hidden state size . As demonstrated by the success of gating mechanisms such as the GRU ( Cho et al. , 2014 ) and LSTM ( Hochreiter & Schmidhuber , 1997 ) , all dimensions rarely need to be re-computed from scratch at each timestep . By discretely selecting which dimensions to update at each timestep via a learned update pattern , RNNs with a large hidden state can be trained with lower computational requirements ( Bengio et al. , 2013 ) , inference in long RNNs can be expedited ( Campos et al. , 2018 ) , and hidden representations may be made more robust to misleading inputs such as outliers or noise . Selective neuron activation in RNNs has recently gained attention in the literature ( Koutnik et al. , 2014 ; Neil et al. , 2016 ; Shen et al. , 2019 ; Jernite et al. , 2017 ; Campos et al. , 2018 ) . The most popular methods hand-craft specific update patterns , dictating which dimensions of the hidden state will update at which timesteps according to prior knowledge of a task ( Koutnik et al. , 2014 ; Neil et al. , 2016 ) . This imposes undue challenges in implementation , limits extensibility , and ignores the data-driven curation of information-flow through the RNN , a signature property of recurrent memory cells ( Hochreiter & Schmidhuber , 1997 ; Cho et al. , 2014 ) . More recent methods learn to react to input data but impose strict relationships between the update patterns across both hidden dimensions and time ( Shen et al. , 2019 ; Jernite et al. , 2017 ; Campos et al. , 2018 ) . While applicable to tasks with clear hierarchical components , such as modeling character-level text ( Chung et al. , 2017 ) , these assumptions limit the expressiveness of the learned update patterns . Specifically , we study the problem of generating a binary update-pattern for the hidden states learned by an RNN . The learned update-pattern defines which dimensions of the hidden state to update at each timestep , similar to the motivation for Residual Networks ( He et al. , 2016 ; Wang & Tian , 2016 ) and Highway Networks ( Srivastava et al. , 2015 ; Zilly et al. , 2017 ) . Ideally , only a small subset of the hidden state ’ s dimensions needs to be updated at each timestep , especially with high-dimensional hidden states . In this way , representations can be learned while both solving a sequential learning task and minimizing the number of updates . This results in a reduction of the compute time . A solution to this multi-objective optimization problem should have a comparable accuracy to a traditional constantly-updating RNN but save the majority of computation steps along the way , ultimately accelerating inference and training ( Neil et al. , 2016 ) . Despite the potential for reducing computation required by RNNs , learning said update patterns is a challenging problem . First , binary-output neurons making discrete decisions ( whether or not to update a hidden state dimension , for example ) in the interior of a neural network is a classic challenge to gradient-based learning . 
This is because such decisions are non-differentiable by nature and therefore backpropagation can not be directly used to update the weights . Second , the quality of a learned update pattern has no direct supervision , so the only feedback available is task-specific . This discourages making a priori assumptions about the update patterns . To address the aforementioned challenges , we propose the selective activation RNN , or SA-RNN , which parameterizes a distribution of update-likelihoods , one per hidden state dimension , from which update decisions can be made at each timestep . We augment an RNN with an update coordinator that adaptively controls , on the fly , which coordinate directions of the hidden state to update . The coordinator is modeled as a lightweight neural network that observes incoming data at each timestep and makes a discrete decision about whether or not enough information is stored in each individual hidden dimension to warrant an update . Subsequently , each hidden dimension is either computed by the RNN or copied from the previous timestep . The coordinator 's architecture is kept as simple as possible so the complexity of the RNN can scale without simply outsourcing computation to another network , similar to the controller in Ha & Schmidhuber ( 2018 ) . Most notably , in contrast to other recent approaches ( Koutnik et al. , 2014 ; Jernite et al. , 2017 ; Neil et al. , 2016 ; Campos et al. , 2018 ; Shen et al. , 2019 ; Liu et al. , 2018 ) , we impose no assumptions about which individual hidden dimensions should update ( or not update ) together . Instead , we show that an entirely-learned approach still results in complex task-specific update patterns . On three publicly-available datasets , we show that our low-bias approach achieves higher accuracy with far fewer updates than recent state-of-the-art methods ( Koutnik et al. , 2014 ; Jernite et al. , 2017 ; Neil et al. , 2016 ; Campos et al. , 2018 ) . These results indicate that predicting RNN update patterns solely with respect to a task is not only feasible and low-bias , but also favorable in a variety of settings . 2 RELATED WORK . Recurrent neuron update patterns have gained much interest in recent literature ( Koutnik et al. , 2014 ; Jernite et al. , 2017 ; Neil et al. , 2016 ; Chung et al. , 2017 ; Shen et al. , 2019 ; Liu et al. , 2018 ) . All of these methods boast fewer updates to the hidden states than standard RNN architectures . However , these methods have several limitations , two of which are summarized as follows . First , the most popular methods rely on extensively-handcrafted update patterns consisting of periodic neuron activations ( Koutnik et al. , 2014 ; Neil et al. , 2016 ; Liu et al. , 2018 ) . This requires prior knowledge of either sampling frequencies or seasonal patterns present in the data , limiting extension to many sequential learning problems . Additionally , these input-agnostic periodic updates are fixed prior to learning . The choice of update periods heavily impacts the performance of the model , and sequences with irregular information flow can not be modeled without massive state representations . Second , the most recent works allow for data-reactive update patterns ( Jernite et al. , 2017 ; Shen et al. , 2019 ; Campos et al. , 2018 ) but assume temporal hierarchies in the input sequences and study settings where this effect is exceedingly obvious ( for example , character-level sentence modeling ( Chung et al. , 2017 ) ) . 
In many real-world settings , temporal hierarchies are often subtle and forcing this assumption into the architectural design may limit its applications . Additionally , our approach is related to conditional computation , which predicts subsets of neural networks to activate depending on input data ( Bengio et al. , 2015 ; Shazeer et al. , 2017 ; Cheng et al. , 2017 ) . In many cases , when a particular concept can be represented using only a sub-network of a large neural network , computation can be preserved by learning the structure of said sub-network ( Schmidhuber , 2012 ) and activating it accordingly . 3 SELECTIVE NEURON ACTIVATION FOR RNNS . We introduce the Selective-Activation RNN , or SA-RNN , a broadly-applicable augmentation to RNNs which minimizes the computation required by RNNs by facilitating unimpeded information flow across timesteps for individual dimensions of the hidden state . At its core , SA-RNN learns a data-driven strategy for discretely reading and writing information to the latent state space through the learned parameterization of an update-likelihood distribution . Despite leaving hidden dimension update patterns independent from one another , complex strategies still arise naturally depending on the sequential learning task at hand . In this section , we describe the training process of SA-RNN with $D$-dimensional hidden states on sequences of length $T$ for input data $x$ with $V$ variables . We omit biases from affine transformation equations and use notation for one training instance for ease of readability . An overview of the forward pass through SA-RNN is shown in Figure 1 . 3.1 COMPUTING HIDDEN STATES . RNNs compute a sequence of hidden states one timestep at a time ( Elman , 1990 ) , each computed by a parametric recurrence function $R(\cdot)$ : $h_t = R(h_{t-1}, x_t \mid \theta_r)$ . The result is a sequence of vector representations $H = \{h_1, \ldots, h_T\}$ where each $h_t \in \mathbb{R}^D$ represents temporal dynamics of the time series up to timestep $t$ with respect to a task , preserving not only temporal dependencies but also the ordering of the inputs . A popular and powerful augmentation to the RNN , as it was originally proposed , is the Gated Recurrent Unit ( GRU ) ( Cho et al. , 2014 ) , which adds a series of gates between $h_{t-1}$ and $h_t$ to alleviate the vanishing gradient problem ( Bengio et al. , 1994 ) :

$$r_t = \sigma(W_r h_{t-1} + U_r x_t) \quad (1)$$
$$z_t = \sigma(W_z h_{t-1} + U_z x_t) \quad (2)$$
$$s_t = \phi(W_c x_t + U_c (r_t \odot h_{t-1})) \quad (3)$$
$$\tilde{h}_t = (1 - z_t) \odot h_{t-1} + z_t \odot s_t \quad (4)$$

where the $W$ 's and $U$ 's are matrices of learnable parameters of shape $D \times D$ and $D \times V$ respectively , $x_t \in \mathbb{R}^V$ is the input data at timestep $t$ , $\odot$ represents element-wise multiplication , $\sigma$ represents the sigmoid function , and $\phi$ represents a non-linearity ( traditionally the hyperbolic tangent function ) . Its design is motivated heavily by the LSTM ( Hochreiter & Schmidhuber , 1997 ) . The GRU performs soft read/write operations , recomputing the entire vector $h_t$ at each timestep since the gate $z \in [0, 1]^D$ , the space of vectors with values inclusively between 0 and 1 . Instead , we propose that not all dimensions need to be updated at each timestep , as the position of the hidden state in many dimensions may often encode enough of the modeled input . Note that the output of the recurrence function is referred to as $\tilde{h}_t$ . In the next section , we describe how to compute $h_t$ , the final hidden state for timestep $t$ , which is subsequently used for computing $h_{t+1}$ or the task . 
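To make the recurrence concrete , below is a minimal NumPy sketch of one GRU step implementing Eqs. (1)-(4) . The function and parameter names are ours , biases are omitted as in the text , and $\phi$ is taken to be the hyperbolic tangent ; this is an illustrative sketch under those assumptions , not the authors' implementation .

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(h_prev, x_t, Wr, Ur, Wz, Uz, Wc, Uc):
    """One GRU step (Eqs. 1-4); shapes follow the equations, biases omitted."""
    r = sigmoid(Wr @ h_prev + Ur @ x_t)         # reset gate, Eq. (1)
    z = sigmoid(Wz @ h_prev + Uz @ x_t)         # update gate, Eq. (2)
    s = np.tanh(Wc @ x_t + Uc @ (r * h_prev))   # candidate state, Eq. (3)
    return (1.0 - z) * h_prev + z * s           # soft interpolation, Eq. (4)
```

Note that the returned vector is the candidate $\tilde{h}_t$ : every dimension is recomputed at every step , which is exactly the cost the selective-activation mechanism below avoids .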
3.2 SELECTIVE NEURON ACTIVATION . To reduce the computation required to generate state representations , we cast updating representations as a sequence of binary decisions – at each timestep a neuron will either be updated or not . Thus , we propose a learned update coordinator , which generates a binary mask for each hidden dimension , forecasting which dimensions need to be updated at the next timestep . First , an update-likelihood $\tilde{u}_t$ is computed for each neuron , informed by both the data observed at the current timestep and the previous update-likelihoods :

$$\tilde{u}_t = \sigma(W_u h_{t-1} + W_i x_t)$$

where $W_u \in \mathbb{R}^{D \times D}$ is a diagonal matrix of trainable parameters which dictates the linear relationship between $h_{t-1}$ and $\tilde{u}_t$ . $W_u$ is kept diagonal to maintain relationships between update decisions of the same dimension while avoiding the extensive computation of a fully-connected layer , similar to the hidden state decay in Che et al . ( 2018 ) . $W_i \in \mathbb{R}^{D \times V}$ encodes the influence of the input data on the current update-likelihood , and $\sigma(\cdot)$ represents the hard sigmoid function , bounding $\tilde{u}$ according to a slope $\alpha$ . Thus $\tilde{u}_t \in [0, 1]^D$ , with one update-likelihood per dimension of the hidden state . To discretize $\tilde{u}_t$ , allowing information to flow unimpeded , element-wise binarization is applied :

$$u_t = \mathrm{binarize}(\tilde{u}_t) , \quad (5)$$
$$\mathrm{binarize}(a) = \begin{cases} 1 & \text{if } a > 0.5 , \\ 0 & \text{otherwise} . \end{cases} \quad (6)$$

We apply this final discrete update decision as a binary gating mechanism since $u_t \in \{0, 1\}^D$ :

$$h_t = u_t \odot \tilde{h}_t + (1 - u_t) \odot h_{t-1} \quad (7)$$

As written , this equation requires the pre-computation of $\tilde{h}_t$ . However , through masking , the computation can be directed at only the needed updates upon calculation of $u_t$ . Thus when $u_t^n$ , the update decision for the $n$-th dimension of $h$ , is 1 , $h_t^n$ is updated according to the new information present in $\tilde{h}_t^n$ . We note that this update-decision strategy does not impose the inter-neuron assumptions of Jernite et al . ( 2017 ) ; Shen et al . ( 2019 ) ; Koutnik et al . ( 2014 ) while still allowing such strategies to be learned if they are found to be optimal by the model , since decisions are made with respect to previous decisions , similar to Campos et al . ( 2018 ) . We hypothesize that updating neurons together may generally be beneficial since complex temporal dependencies often require representations evolving in blocks of multiple neurons , as discussed in Koutnik et al . ( 2014 ) . Since binary-output neurons are inherently non-differentiable , barring the direct use of backpropagation , we approximate the gradient of the binarization function using the straight-through gradient estimator ( Bengio et al. , 2013 ) trained with slope-annealing ( Chung et al. , 2017 ) :

$$\frac{\partial \, \mathrm{binarize}(x)}{\partial x} = 1 . \quad (8)$$

By estimating the gradient in this way we avoid additional loss terms and end up with empirically-reasonable approximations in comparison to other high-variance methods , such as REINFORCE ( Williams , 1992 ; Chung et al. , 2017 ; Campos et al. , 2018 ) . After computing the sequence of state representations $H$ , they are projected into the output space depending on the task at hand .
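The following PyTorch sketch shows how Eqs. (5)-(8) can be realized with a straight-through estimator : the forward pass uses the binary decisions while the backward pass treats the binarization as the identity . All names are ours , and for clarity the sketch precomputes $\tilde{h}_t$ rather than masking the recurrence computation itself , which a production implementation would do to actually save work .

```python
import torch

def hard_sigmoid(a, alpha=1.0):
    # piecewise-linear sigmoid; the slope alpha is annealed during training
    return torch.clamp(0.5 * alpha * a + 0.5, 0.0, 1.0)

def selective_update(h_prev, h_tilde, x_t, w_u_diag, W_i, alpha=1.0):
    """One selective-activation step: Eqs. (5)-(7) with the estimator of Eq. (8).

    h_prev, h_tilde: (B, D); x_t: (B, V); w_u_diag: (D,); W_i: (D, V)
    """
    # diagonal W_u reduces to an element-wise product with h_prev
    u_soft = hard_sigmoid(w_u_diag * h_prev + x_t @ W_i.T, alpha)  # update-likelihoods
    u_hard = (u_soft > 0.5).float()                                # Eqs. (5)-(6)
    # straight-through: forward value is u_hard, gradient flows as if u = u_soft
    u = u_soft + (u_hard - u_soft).detach()
    return u * h_tilde + (1.0 - u) * h_prev                        # Eq. (7)
```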
A main computational burden of RNNs is that all hidden dimensions are updated at every time step. The authors propose the selective-activation RNN (SA-RNN), which augments the RNN with an update coordinator modeled as a lightweight neural network. The coordinator, based on the incoming data, makes a discrete decision to update or not update each individual hidden dimension. A multi-objective optimization problem is defined to both solve a sequential learning task and minimize the number of updates in each time step. The authors evaluated their network on three public benchmark datasets and achieved good results compared to state-of-the-art methods.
SP:a7eaf12be994bfd19da12b089262215f4f683e58
Off-policy Bandits with Deficient Support
1 INTRODUCTION . Many interactive systems ( e.g. , voice assistants , recommender systems , ad placement ) can be modeled as contextual bandit problems ( Langford & Zhang , 2008 ) . In particular , each user request provides a context ( e.g. , user profile , query ) for which the system selects an action ( e.g. , recommended product , presented ad ) and receives a reward ( e.g. , purchase , click ) . Such contextual-bandit data is logged in large quantities as a by-product of normal system operation ( Li et al. , 2011 ; 2015 ; Joachims et al. , 2017 ) , making it an attractive and low-cost source of training data . With terabytes of such log data readily available in many online systems , a range of algorithms have been proposed for batch learning from such logged contextual-bandit feedback ( Strehl et al. , 2011 ; Dudík et al. , 2011 ; Swaminathan & Joachims , 2015a ; Thomas & Brunskill , 2016 ; Farajtabar et al. , 2018 ; Su et al. , 2019 ; London & Sandler , 2019 ) . However , as we will argue below , these algorithms require an assumption about the log data that makes them unsuitable for many real-world applications . This assumption is typically referred to as the positivity or support assumption , and it is required by the Empirical Risk Minimization ( ERM ) objective that these algorithms optimize . Specifically , unlike in online learning for contextual bandits ( Williams , 1992 ; Agarwal et al. , 2014 ) , batch learning from bandit feedback ( BLBF ) operates in the off-policy setting . During off-policy learning , the algorithm has to address the counterfactual question of how much reward each policy in the policy space would have received if it had been used instead of the logging policy . To this effect , virtually all state-of-the-art off-policy learning methods for contextual-bandit problems rely on counterfactual estimators ( Bottou et al. , 2013 ; Dudík et al. , 2011 ; Swaminathan & Joachims , 2015a ; Thomas & Brunskill , 2016 ; Farajtabar et al. , 2018 ; Su et al. , 2019 ) that employ inverse propensity score ( IPS ) weighting to get an unbiased ERM objective . Unlike regression-based direct-modeling ( DM ) approaches that are often hampered by bias from model misspecification , IPS allows a controllable bias-variance trade-off through clipping and other variance-regularization techniques ( Strehl et al. , 2011 ; Swaminathan & Joachims , 2015a ; London & Sandler , 2019 ) . Unfortunately , IPS and its variance-control mechanisms break down when the logging policy does not have full support – meaning that some actions have zero probability of being selected under the logging policy . In this case IPS can be highly biased . Full support is an unreasonable assumption in many real-world systems , especially when the action space is large and many actions have poor rewards . For example , in a recommender system with a large catalog ( e.g . movies , music ) , it may be that fewer than 10 % of the actions have support under the logging policy . We will show that existing learning algorithms can fail catastrophically on such support-deficient data . In this paper , we develop new off-policy contextual-bandit algorithms that are specifically designed to deal with support-deficient log data . Since support deficiency translates into blind spots where we do not have any knowledge about the rewards , accounting for these blind spots during learning is crucial for robustness . We approach this problem from three perspectives . 
First , we explore restricting the action space to those actions that have support under the logging policy . Second , we explore imputation methods that extrapolate estimated rewards to those blind spots . Third , we restrict the policy space to only those policies that have limited exposure to the blind spots . To make the latter approach computationally tractable , we define a new measure of Support Divergence between policies , show how it can be estimated efficiently without closed-form knowledge of the logging policy , and show how it can be used as a constraint on the policy space . We analyze the statistical and computational properties of all three approaches and perform an extensive empirical evaluation . We find that restricting the policy space is particularly effective , since it is computationally efficient , empirically effective at learning good policies , and convenient to use in practice . 2 RELATED WORK . Most prior works on BLBF can be classified into two different approaches . The first – called the Direct Model ( DM ) – is based on a reduction to supervised learning , where a regression estimator is trained to predict rewards ( Beygelzimer & Langford , 2009 ) . To derive a policy , the action with the highest predicted reward is chosen . A drawback of this simple approach is the bias that results from misspecification of the regression model . Since regression models are often substantially misspecified for real-world data , the DM approach often does not work well empirically . The second approach is based on policy learning via ERM with a counterfactual risk estimator . Inverse propensity score ( IPS ) weighting is one of the most popular estimators to be used as the empirical risk . However , policy learning algorithms based on IPS and related estimators ( Strehl et al. , 2011 ; Swaminathan & Joachims , 2015a ; b ; Thomas & Brunskill , 2016 ; London & Sandler , 2019 ) require the assumption that the logging policy has full support for every policy in the policy space . One exception is the work of Liu et al . ( 2019 ) , which relaxes the assumption to the existence of an optimal policy whose support is covered by the logging policy . However , this is an untestable assumption that does not provide guarantees for real-world applications . Our work proposes three approaches to addressing off-policy learning with support deficiency . First , our conservative extrapolation method is related to the method proposed by Liu et al . ( 2019 ) . They focus on the correction of the state distribution by defining an augmented MDP , and pessimistic imputation is used to get an estimate for policy-gradient learning . Second , our method of restricting the policy space uses a surrogate for the support divergence of two policies that was previously used as a control variate in the SNIPS estimator ( Swaminathan & Joachims , 2015b ) . It also appeared in the Lagrangian formulation of the BanditNet objective ( Joachims et al. , 2018 ) and in the gradient update of the REINFORCE algorithm ( Williams , 1992 ) . This connection gives the interesting new insight that the baselines used in policy-gradient algorithms not only help to reduce variance in gradients ( Greensmith et al. , 2004 ) , but also connect to the problem of support deficiency in the off-policy setting . 3 OFF-POLICY LEARNING WITH DEFICIENT SUPPORT . We start by formally defining the problem of learning a contextual-bandit policy in the BLBF setting . 
Inputs to the policy are contexts $x \in \mathcal{X}$ drawn i.i.d . from a fixed but unknown distribution $P(X)$ . Given context $x$ , the system executes a possibly stochastic policy $\pi(\mathcal{Y} \mid x)$ that selects an action $y \in \mathcal{Y}$ . For this context and action pair , the system observes a reward $r \in [r_{\min}, r_{\max}]$ from $P(r \mid x, y)$ . Given a space of policies $\Pi$ , the reward of any policy $\pi \in \Pi$ is defined as

$$R(\pi) = \mathbb{E}_{x} \, \mathbb{E}_{y \sim \pi(y|x)} \, \mathbb{E}_{r \sim P(r|x,y)} [r] . \quad (1)$$

In the BLBF setting , the learning algorithm is given a dataset $D := \{x_i, y_i, r_i, \pi_0(y_i|x_i)\}_{i=1}^{n}$ of past system interactions which consists of context-action-reward-propensity tuples . The propensity $\pi_0(y_i|x_i)$ is the probability of selecting action $y_i$ for context $x_i$ under the policy $\pi_0$ that was used to log the data . We call $\pi_0$ the logging policy , and we will discuss desired conditions on the stochasticity of $\pi_0$ in the following . The goal of off-policy learning is to exploit the information in the logged data $D$ to find a policy $\hat{\pi} \in \Pi$ that has high reward $R(\hat{\pi})$ . Analogous to the ERM principle in supervised learning , off-policy learning algorithms typically optimize a counterfactual estimate $\hat{R}(\pi)$ of $R(\pi)$ as the training objective ( Li et al. , 2011 ; 2015 ; Bottou et al. , 2013 ; Swaminathan & Joachims , 2015a ) :

$$\hat{\pi} = \arg\max_{\pi \in \Pi} \, \hat{R}(\pi) \quad (2)$$

For conciseness , we ignore additional regularization terms in the objective ( Swaminathan & Joachims , 2015a ) , since they are irrelevant to the main point of this paper . As counterfactual estimator $\hat{R}(\pi)$ , most algorithms rely on some form of IPS weighting ( Strehl et al. , 2011 ; Dudík et al. , 2011 ; Swaminathan & Joachims , 2015a ; b ; Wang et al. , 2017 ; Su et al. , 2019 ) to correct the distribution mismatch between the logging policy $\pi_0$ and each target policy $\pi \in \Pi$ :

$$\hat{R}_{\mathrm{IPS}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \frac{\pi(y_i|x_i)}{\pi_0(y_i|x_i)} \, r_i . \quad (3)$$

A crucial condition for the effectiveness of the IPS estimator ( and similar estimators ) is that the logging policy $\pi_0$ assigns non-zero probability to all actions that have non-zero probability under the target policy $\pi$ we aim to evaluate . This condition is known as positivity or full support , and it is defined as follows . Definition 1 ( Full support ) . The logging policy $\pi_0$ is said to have full support for $\pi$ when $\pi_0(y|x) > 0$ for all actions $y \in \mathcal{Y}$ and contexts $x \in \mathcal{X}$ for which $\pi(y|x) > 0$ . It is known that the IPS estimator is unbiased , $\mathbb{E}_D[\hat{R}_{\mathrm{IPS}}(\pi)] = R(\pi)$ , if the logging policy $\pi_0$ has full support for $\pi$ ( Li et al. , 2011 ) . To ensure unbiased ERM , algorithms that use the IPS estimator require that the logging policy $\pi_0$ has full support for all policies $\pi \in \Pi$ in the policy space . For sufficiently rich policy spaces , like deep networks $f_w(x, y)$ with softmax outputs of the form

$$\pi_w(y|x) = \frac{\exp(f_w(x, y))}{\sum_{y' \in \mathcal{Y}} \exp(f_w(x, y'))} , \quad (4)$$

this means that the logging policy $\pi_0$ needs to assign non-zero probability to every action $y$ in every context $x$ . This is a strong condition that is not feasible in many real-world systems , especially if the action space is large and many actions have poor reward . If the support requirement is violated , ERM learning can fail catastrophically . We will show below that the underlying reason is bias , not excessive variance that could be remedied through clipping or variance regularization ( Strehl et al. , 2011 ; Swaminathan & Joachims , 2015a ) . 
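As an illustration , the IPS estimate of Eq. (3) and the softmax policy of Eq. (4) can each be computed in a few lines . The minimal NumPy sketch below uses function names and interfaces of our own choosing ; it is not the paper's code .

```python
import numpy as np

def ips_estimate(pi, logged):
    """R_hat_IPS of Eq. (3); pi(y, x) returns the target policy's propensity.

    logged: iterable of (x_i, y_i, r_i, p0_i) tuples, p0_i = pi_0(y_i | x_i).
    """
    terms = [pi(y, x) / p0 * r for (x, y, r, p0) in logged]
    return np.mean(terms)

def softmax_policy(f_w, x, actions):
    """pi_w(. | x) of Eq. (4) for a scoring function f_w."""
    scores = np.array([f_w(x, y) for y in actions])
    scores -= scores.max()            # subtract the max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()
```

The division by `p0` makes the failure mode visible : any logged propensity of zero is never divided by , so actions outside the logging policy's support simply contribute nothing , which is exactly the bias characterized next .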
To quantify how support-deficient a logging policy is , we denote the set of unsupported actions for context $x$ under $\pi_0$ as $U(x, \pi_0) := \{y \in \mathcal{Y} \mid \pi_0(y|x) = 0\}$ . The bias of the IPS estimator is then characterized by the expected reward on the unsupported actions . Proposition 1 . Given contexts $x \sim P(X)$ and logging policy $\pi_0(\mathcal{Y}|x)$ , the bias of $\hat{R}_{\mathrm{IPS}}$ for target policy $\pi(\mathcal{Y}|x)$ is equal to the expected reward on the unsupported action sets , i.e. ,

$$\mathrm{bias}(\pi \mid \pi_0) = \mathbb{E}_x \Big[ - \sum_{y \in U(x, \pi_0)} \pi(y|x) \, \delta(x, y) \Big] ,$$

where $\delta(x, y)$ denotes the expected reward for action $y$ in context $x$ . The proof is in Appendix A.1 . From Proposition 1 , it is clear that support-deficient log data can drastically mislead ERM learning . To quantify the effect of support deficiency on ERM , we define the support divergence between a logging policy $\pi_0$ and a target policy $\pi$ as follows . Definition 2 ( Support Divergence ) . For contexts $x \sim P(X)$ and any corresponding pair of target policy $\pi$ and logging policy $\pi_0$ , the Support Divergence is defined as

$$D_X(\pi \mid \pi_0) := \mathbb{E}_{x \sim P(X)} \sum_{y \in U(x, \pi_0)} \pi(y|x) . \quad (5)$$

With this definition in hand , we can quantify the effect of support deficiency on ERM learning for a policy space $\Pi$ under logging policy $\pi_0$ . Theorem 1 . For any given hypothesis space $\Pi$ with logging policy $\pi_0 \in \Pi$ , there exists a reward distribution $P_r$ with support in $[r_{\min}, r_{\max}]$ such that in the limit of infinite training data , ERM using IPS over the logged data $D \sim P(X) \times \pi_0(\cdot|X) \times P_r$ can select a policy $\hat{\pi} \in \arg\max_{\pi \in \Pi} \mathbb{E}_D[\hat{R}_{\mathrm{IPS}}(\pi)]$ that is at least $(r_{\max} - r_{\min}) \max_{\pi \in \Pi} D_X(\pi \mid \pi_0)$ suboptimal . The proof is in Appendix A.2 . To illustrate the theorem , consider a problem with rewards $r \in [-1, 0]$ . Furthermore , consider a policy space $\Pi$ that contains a good policy $\pi_g$ with $R(\pi_g) = -0.1$ and a bad policy $\pi_b$ with $R(\pi_b) = -0.7$ . If policy $\pi_b$ has support divergence $D_X(\pi_b \mid \pi_0) = 0.6$ or larger , then ERM may return the bad $\pi_b$ instead of $\pi_g$ even with infinite amounts of training data . Note that it is sufficient to have merely one policy in $\Pi$ with large support deficiency to produce this suboptimality . It is therefore crucial to control the support divergence $D_X(\pi \mid \pi_0)$ uniformly over all $\pi \in \Pi$ , or to account for the suboptimality it can induce . To this effect , we explore three approaches in the following .
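When the logging propensities are available in closed form , the support divergence of Eq. (5) can be estimated by Monte Carlo over sampled contexts , as in the hypothetical sketch below . The paper's own estimator avoids this closed-form requirement ; the sketch only illustrates the quantity being controlled .

```python
def support_divergence(pi, pi0, contexts, actions):
    """Monte-Carlo estimate of D_X(pi | pi0), Eq. (5), given closed-form pi_0.

    pi(y, x) and pi0(y, x) return the propensity of action y in context x.
    """
    total = 0.0
    for x in contexts:
        # probability mass that pi places on actions unsupported under pi_0
        total += sum(pi(y, x) for y in actions if pi0(y, x) == 0.0)
    return total / len(contexts)
```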
This work addresses the problem of off-policy learning in the presence of positivity violations, i.e. some actions have zero probability of being selected under the logging policy. As the paper points out, positivity violations can lead to unboundedly bad estimates when employing IPS. The authors propose three methods to deal with this problem. The first uses only the actions supported under the logging policy, the second extrapolates estimated rewards to the unsupported actions, and the third restricts the policy space to policies with limited exposure to unsupported actions.
SP:03577eb6fb5a3b9ac15af04673f905565c57d425
Off-policy Bandits with Deficient Support
This paper addresses the problem of off-policy or batch learning in the contextual bandit setting without the full-support assumption. This problem setting is realistic and encountered in many applications, especially in temporally extended settings such as reinforcement learning. They compare three approaches: restricting action selection, learning extrapolated reward models, and restricting the policy class. They derive a SNIPS-style estimator for the support constraint in the final approach. The approach that restricts the policy class demonstrates decent empirical results, although the direct method is very much comparable.
SP:03577eb6fb5a3b9ac15af04673f905565c57d425
Conditional Flow Variational Autoencoders for Structured Sequence Prediction
1 INTRODUCTION . Anticipating future states of the environment is a key competence necessary for the success of autonomous agents . In complex real world environments , the future is highly uncertain . Therefore , structured predictions , one to many mappings of the likely future states of the world , are important . In many scenarios , these tasks can be cast as sequence prediction problems . Particularly , Conditional Variational Autoencoders ( CVAE ) ( Sohn et al. , 2015 ; Bayer & Osendorfer , 2014 ; Chung et al. , 2015 ) have been very successful – from prediction of pedestrian trajectories ( Lee et al. , 2017 ; Bhattacharyya et al. , 2018 ; Pajouheshgar & Lampert , 2018 ) to outcomes of robotic actions ( Babaeizadeh et al. , 2018 ) . The distribution of future sequences is diverse and highly multi-modal . CVAEs model diverse futures by factorizing the distribution of future states using a set of latent variables which are mapped to likely future states . However , CVAEs assume a standard Gaussian prior on the latent variables , which induces a strong model bias ( Hoffman & Johnson , 2016 ; Tomczak & Welling , 2018 ) and makes it challenging to capture multi-modal distributions . This also leads to missing modes due to posterior collapse ( Bowman et al. , 2016 ; Razavi et al. , 2019 ) . Recent work ( Tomczak & Welling , 2018 ; Wang et al. , 2017 ; Gu et al. , 2018 ) has therefore focused on more complex Gaussian mixture based priors . Gaussian mixtures still have limited expressiveness , and optimization suffers from complications , e.g . determining the number of mixture components . Normalizing flows are more expressive and enable the modelling of complex multi-modal priors . Recent work on flow based priors ( Chen et al. , 2017 ; Ziegler & Rush , 2019 ) has focused only on the unconditional ( plain VAE ) case . However , this is not sufficient for CVAEs because in the conditional case the complexity of the distributions is highly dependent on the condition . In this work , 1 . We propose Conditional Flow Variational Autoencoders ( CF-VAE ) based on novel conditional normalizing flow based priors in order to model complex multi-modal conditional distributions over sequences . In Figure 1 , we show example predictions of MNIST handwriting strokes from our CF-VAE . We observe that , given a starting stroke , our CF-VAE model with data dependent normalizing flow based latent prior captures the two main modes of the conditional distribution – i.e . 1 and 8 – while the predictions of CVAEs with a fixed uni-modal Gaussian prior have limited diversity . 2 . We propose a regularization scheme that stabilizes the optimization of the evidence lower bound and leads to a better fit to the target data distribution . 3 . We leverage our conditional flow prior to deal with posterior collapse , which causes standard CVAEs to ignore modes in sequence prediction tasks . 4 . Finally , our method outperforms the state of the art on three structured sequence prediction tasks – handwriting stroke prediction on MNIST , trajectory prediction on Stanford Drone and HighD . 2 RELATED WORK . Normalizing Flows . Normalizing flows are a powerful class of density estimation methods with exact inference . ( Dinh et al. , 2015 ) introduced affine normalizing flows with triangular Jacobians . ( Dinh et al. , 2017 ) extend flows with masked convolutions which allow for complex ( non-autoregressive ) dependence between the dimensions . 
In ( Kingma & Dhariwal , 2018 ) , 1 × 1 convolutions were proposed for improved image generation compared to ( Dinh et al. , 2017 ) . In ( Huang et al. , 2018 ) normalizing flows are made auto-regressive , and ( Behrmann et al. , 2019 ) extends them to ResNets . ( Lu & Huang , 2019 ) extended normalizing flows to model conditional distributions . Here , we propose conditional normalizing flows to learn conditional priors for variational latent models . Variational Autoencoders . The original variational autoencoder ( Kingma & Welling , 2014 ) used uni-modal Gaussian prior and posterior distributions . Thereafter , two lines of work have focused on developing either more expressive prior or posterior distributions . Rezende & Mohamed ( 2015 ) propose normalizing flows to model complex posterior distributions . Kingma et al . ( 2016 ) ; Tomczak & Welling ( 2016 ) ; Berg et al . ( 2018 ) present more complex inverse autoregressive flow , Householder , and Sylvester normalizing flow based posteriors . Here , we focus on the orthogonal direction of more expressive priors , and the above approaches are compatible with ours . Recent works which focus on more expressive priors include ( Nalisnick & Smyth , 2017 ) , which proposes a Dirichlet process prior , and ( Goyal et al. , 2017 ) , which proposes a nested Chinese restaurant process prior . However , these methods require sophisticated learning methods . In contrast , ( Tomczak & Welling , 2018 ) proposes a mixture of Gaussians based prior ( with a fixed number of components ) which is easier to train and shows promising results on some image generation tasks . ( Chen et al. , 2017 ) proposes an inverse autoregressive flow based prior which leads to improvements in complex image generation tasks like CIFAR-10 . ( Ziegler & Rush , 2019 ) proposes a prior for VAE based text generation using complex non-linear flows which allows for complex multi-modal priors . While these works focus on unconditional priors , we aim to develop more expressive conditional priors . Posterior Collapse . Posterior collapse arises when the latent posterior does not encode useful information . Most prior work ( Yang et al. , 2017 ; Dieng et al. , 2019 ; Higgins et al. , 2017 ) concentrates on unconditional VAEs and modifies the training objective – the KL divergence term is annealed to prevent collapse to the prior . Liu et al . ( 2019 ) extends KL annealing to CVAEs . However , KL annealing does not optimize a true lower bound of the ELBO for most of training . Zhao et al . ( 2017 ) also modifies the objective to choose the model with the maximal rate . Razavi et al . ( 2019 ) propose anti-causal sequential priors for text modelling tasks . Bowman et al . ( 2016 ) ; Gulrajani et al . ( 2017 ) propose to weaken the decoder so that the latent variables can not be ignored ; however , only unconditional VAEs are considered . Wang & Wang ( 2019 ) shows the advantage of normalizing flow based posteriors for preventing posterior collapse . In contrast , we study for the first time posterior collapse in conditional models on datasets with minor modes . Structured Sequence Prediction . Helbing & Molnar ( 1995 ) ; Robicquet et al . ( 2016 ) ; Alahi et al . ( 2016 ) ; Gupta et al . ( 2018 ) ; Zhao et al . ( 2019 ) ; Sadeghian et al . ( 2019 ) consider the problem of traffic participant trajectory prediction in a social context . Notably , ( Gupta et al. , 2018 ; Zhao et al. , 2019 ; Sadeghian et al. , 2019 ) use generative adversarial networks to generate socially compliant trajectories . 
However , the predictions are uni-modal . Starting from Bayer & Osendorfer ( 2014 ) ; Chung et al . ( 2015 ) , more recently Lee et al . ( 2017 ) ; Bhattacharyya et al . ( 2018 ) ; Rhinehart et al . ( 2018 ) ; Deo & Trivedi ( 2019 ) ; Pajouheshgar & Lampert ( 2018 ) consider structured ( one-to-many ) predictions using a CVAE , improved CVAE training , pushforward policies for vehicle ego-motion prediction , motion planning , and a spatio-temporal convolutional network , respectively . Kumar et al . ( 2019 ) proposes a normalizing flow based model for video sequence prediction ; however , the sequences considered have very limited diversity compared to the trajectory prediction tasks considered here . Here , we focus on improving structured predictions using conditional normalizing flow based priors . 3 CONDITIONAL FLOW VARIATIONAL AUTOENCODER ( CF-VAE ) . Our Conditional Flow Variational Autoencoder is based on the conditional variational autoencoder ( Sohn et al. , 2015 ) , which is a deep directed graphical model for modeling conditional data distributions $p_\theta(y|x)$ . Here , $x$ is the sequence up to time $t$ , $x = [x_1, \cdots, x_t]$ , and $y$ is the sequence to be predicted up to time $T$ , $y = [y_{t+1}, \cdots, y_T]$ . CVAEs factorize the conditional distribution using latent variables $z$ . In detail , $p_\theta(y|x) = \int p_\theta(y|z, x) \, p(z|x) \, dz$ , where $p(z|x)$ is the prior on the latent variables . During training , amortized variational inference is used and the posterior distribution $q_\phi(z|x, y)$ is learnt using a recognition network . The ELBO is maximized , given by

$$\log p_\theta(y|x) \geq \mathbb{E}_{q_\phi(z|x,y)} \log p_\theta(y|z, x) - D_{\mathrm{KL}}(q_\phi(z|x, y) \,\|\, p(z|x)) . \quad (1)$$

In practice , to simplify learning , simple unconditional standard Gaussian priors are used ( Sohn et al. , 2015 ) . However , the complexity , e.g . the number of modes , of the target distribution $p_\theta(y|x)$ is highly dependent upon the condition $x$ . An unconditional prior demands identical latent distributions irrespective of the complexity of the target conditional distribution – a very strong constraint on the recognition network . Moreover , the latent variables can not encode any conditioning information , and this leaves the burden of learning the dependence on the condition completely on the decoder . Furthermore , on complex conditional multi-modal data , Gaussian priors have been shown to induce a strong model bias ( Tomczak & Welling , 2016 ; Ziegler & Rush , 2019 ) . It becomes increasingly difficult to map complex multi-modal distributions to uni-modal Gaussian distributions , further complicated by the sensitivity of RNN encoders/decoders to subtle variations in the hidden states ( Bowman et al. , 2016 ) . Moreover , the standard closed-form estimate of the KL-divergence pushes the encoded latent distributions to the mean of the Gaussian , leading to latent variable collapse ( Wang et al. , 2017 ; Gu et al. , 2018 ) , while discriminator based approaches ( Tolstikhin et al. , 2017 ) lead to underestimates of the KL-divergence ( Rosca et al. , 2017 ) . Therefore , we propose conditional priors based on conditional normalizing flows to enable the latent variables to encode conditional information and allow for complex multi-modal latent representations . Next , we introduce our new conditional non-linear normalizing flows followed by our regularized Conditional Flow Variational Autoencoder ( CF-VAE ) formulation . 
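For concreteness , the negative ELBO of Eq. (1) can be written as a loss with a single-sample Monte-Carlo estimate of the KL term , which becomes necessary once the prior is a flow without a closed-form KL . The PyTorch sketch below assumes a recognition network returning a torch distribution object and a prior exposing a conditional log-density ; all interfaces are hypothetical .

```python
import torch

def neg_elbo(decoder_log_lik, recognition_net, prior_log_prob, x, y):
    """Negative ELBO of Eq. (1) with a one-sample KL estimate.

    recognition_net(x, y)    -> torch.distributions object for q_phi(z | x, y)
    prior_log_prob(z, x)     -> log p(z | x), e.g. evaluated through a conditional flow
    decoder_log_lik(z, x, y) -> log p_theta(y | z, x)
    """
    q = recognition_net(x, y)
    z = q.rsample()                               # reparameterized sample keeps gradients
    rec = decoder_log_lik(z, x, y)                # reconstruction term
    kl = q.log_prob(z) - prior_log_prob(z, x)     # Monte-Carlo estimate of the KL term
    return -(rec - kl).mean()
```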
3.1 CONDITIONAL NORMALIZING FLOWS . Recently , normalizing flow ( Tabak et al. , 2010 ; Dinh et al. , 2015 ) based priors for VAEs have been proposed ( Chen et al. , 2017 ; Ziegler & Rush , 2019 ) . Normalizing flows allow for complex priors by transforming a simple base density , e.g . a standard Gaussian , to a complex multi-modal density through a series of $n$ layers of invertible transformations $f_i$ :

$$\epsilon \overset{f_1}{\longleftrightarrow} h_1 \overset{f_2}{\longleftrightarrow} h_2 \cdots \overset{f_n}{\longleftrightarrow} z . \quad (2)$$

However , such flows can not model conditional priors . In contrast to prior work , we utilize conditional normalizing flows to model complex conditional priors . Conditional normalizing flows also consist of a series of $n$ layers of invertible transformations $f_i$ ( with parameters $\psi$ ) ; however , we modify the transformations $f_i$ such that they are dependent on the condition $x$ :

$$\epsilon \,|\, x \overset{f_1|x}{\longleftrightarrow} h_1 \,|\, x \overset{f_2|x}{\longleftrightarrow} h_2 \,|\, x \cdots \overset{f_n|x}{\longleftrightarrow} z \,|\, x . \quad (3)$$

Further , in contrast to prior work ( Lu & Huang , 2019 ; Atanov et al. , 2019 ; Ardizzone et al. , 2019 ) which uses affine flows $f_i$ , we build upon ( Ziegler & Rush , 2019 ) and introduce conditional non-linear normalizing flows with split coupling . Split couplings ensure invertibility by applying a flow layer $f_i$ on only half of the dimensions at a time . To compute the transformations in ( 3 ) , we split the $D$ dimensions of the latent variable into halves , $z^L = \{1, \cdots, D/2\}$ and $z^R = \{D/2+1, \cdots, D\}$ , at each invertible layer $f_i$ . Our transformation takes the following form for each dimension $z^j$ , alternately from $z^L$ or $z^R$ :

$$f_i^{-1}(z^j \mid z^R, x) = a(z^R, x) + b(z^R, x) \cdot z^j + \frac{c(z^R, x)}{1 + \big(d(z^R, x) \cdot z^j + g(z^R, x)\big)^2} , \quad (4)$$

where $z^j \in z^L$ . Details of the forward ( generating ) operation $f_i$ are in Appendix A . To ensure that the generated prior distribution is conditioned on $x$ , in ( 4 ) and in the corresponding forward operation $f_i$ , the coefficients $\{a, b, c, d, g\} \in \mathbb{R}$ are functions of both the other half of the dimensions of $z$ and the condition $x$ ( unlike Ziegler & Rush ( 2019 ) ) . Finally , due to the expressive power of our conditional non-linear normalizing flows , simple spherical Gaussian base distributions were sufficient .
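A rough PyTorch sketch of the density-evaluation direction of Eq. (4) is given below : a small network maps $(z^R, x)$ to the coefficients $a , b , c , d , g$ , which are then applied element-wise to $z^L$ . The positivity constraint on $b$ is our own simplification toward monotonicity ( the full invertibility conditions and the generating direction are in the paper's Appendix A ) , and the log-determinant bookkeeping needed for the flow density is omitted ; all class and variable names are ours .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalNonlinearCoupling(nn.Module):
    """Inverse direction of the conditional non-linear split coupling, Eq. (4)."""

    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        half = dim // 2
        self.coef_net = nn.Sequential(
            nn.Linear(half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 5 * half),   # a, b, c, d, g for every dimension of z_L
        )

    def inverse(self, z_L, z_R, x):
        # coefficients depend on the untouched half z_R and the condition x
        a, b, c, d, g = self.coef_net(torch.cat([z_R, x], dim=-1)).chunk(5, dim=-1)
        b = F.softplus(b)   # assumed constraint: positive slope (see caveat above)
        return a + b * z_L + c / (1.0 + (d * z_L + g) ** 2)
```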
The paper proposes a combination of a conditional VAE with normalising flow priors and posterior regularisation strategies to capture the diversity of multi-modal trajectories of complex motion patterns. The paper argues that more flexible priors over the latent space can provide posteriors that more closely resemble the trajectories observed in the training data. To this end, the paper presents a derivation of the evidence lower bound for VAEs with normalising flows and discusses the effect of fixing the variance of the posterior to reduce instability during training. Additionally, it shows that conditioning the regularisation on whether or not the dataset contains a dominating mode leads to more diversity and captures minor modes more effectively. Experiments are reported on sequence datasets of handwritten digits, and two datasets with trajectories of vehicles in traffic.
SP:ef685ddc158f7b74eec0f2139f14d7443b1192a7
Conditional Flow Variational Autoencoders for Structured Sequence Prediction
1 INTRODUCTION . Anticipating future states of the environment is a key competence necessary for the success of autonomous agents . In complex real world environments , the future is highly uncertain . Therefore , structured predictions , one to many mappings of the likely future states of the world , are important . In many scenarios , these tasks can be cast as sequence prediction problems . Particularly , Conditional Variational Autoencoders ( CVAE ) ( Sohn et al. , 2015 ; Bayer & Osendorfer , 2014 ; Chung et al. , 2015 ) have been very successful – from prediction of pedestrians trajectories ( Lee et al. , 2017 ; Bhattacharyya et al. , 2018 ; Pajouheshgar & Lampert , 2018 ) to outcomes of robotic actions ( Babaeizadeh et al. , 2018 ) . The distribution of future sequences is diverse and highly multi-modal . CVAEs model diverse futures by factorizing the distribution of future states using a set of latent variables which are mapped to likely future states . However , CVAEs assume a standard Gaussian prior on the latent variables which induces a strong model bias ( Hoffman & Johnson , 2016 ; Tomczak & Welling , 2018 ) which makes it challenging to capture multi-modal distributions . This also leads to missing modes due to posterior collapse ( Bowman et al. , 2016 ; Razavi et al. , 2019 ) . Recent work ( Tomczak & Welling , 2018 ; Wang et al. , 2017 ; Gu et al. , 2018 ) has therefore focused on more complex Gaussian mixture based priors . Gaussian mixtures still have limited expressiveness and optimization suffers from complications e.g . determining the number of mixture components . Normalizing flows are more expressive and enable the modelling of complex multi-modal priors . Recent work on flow based priors ( Chen et al. , 2017 ; Ziegler & Rush , 2019 ) , have focused only on the unconditional ( plain VAE ) case . However , this not sufficient for CVAEs because in the conditional case the complexity of the distributions are highly dependent on the condition . In this work , 1 . We propose Conditional Flow Variational Autoencoders ( CF-VAE ) based on novel conditional normalizing flow based priors In order to model complex multi-modal conditional distributions over sequences . In Figure 1 , we show example predictions of MNIST handwriting stroke of our CF-VAE . We observe that , given a starting stroke , our CF-VAE model with data dependent normalizing flow based latent prior captures the two main modes of the conditional distribution – i.e . 1 and 8 – while CVAEs with fixed uni-modal Gaussian prior predictions have limited diversity . 2 . We propose a regularization scheme that stabilizes the optimization of the evidence lower bound and leads to better fit to the target data distribution . 3 . We leverage our conditional flow prior to deal with posterior collapse which causes standard CVAEs to ignore modes in sequence prediction tasks . 4 . Finally , our method outperforms the state of the art on three structured sequence prediction tasks – handwriting stroke prediction on MNIST , trajectory prediction on Stanford Drone and HighD . 2 RELATED WORK . Normalizing Flows . Normalizing flows are a powerful class of density estimation methods with exact inference . ( Dinh et al. , 2015 ) introduced affine normalizing flows with triangular Jacobians . ( Dinh et al. , 2017 ) extend flows with masked convolutions which allow for complex ( non-autoregessive ) dependence between the dimensions . 
In ( Kingma & Dhariwal , 2018 ) , 1 × 1 convolutions were proposed for improved image generation compared to ( Dinh et al. , 2017 ) . In ( Huang et al. , 2018 ) normalizing flows are auto-regressive and ( Behrmann et al. , 2019 ) extend it to ResNet . ( Lu & Huang , 2019 ) extended normalizing flows to model conditional distributions . Here , we propose conditional normalizing flows to learn conditional priors for variational latent models . Variational Autoencoders . The original variational autoencoder ( Kingma & Welling , 2014 ) used uni-modal Gaussian prior and posterior distributions . Thereafter , two lines of work have focused on developing either more expressive prior or posterior distributions . Rezende & Mohamed ( 2015 ) propose normalizing flows to model complex posterior distributions . Kingma et al . ( 2016 ) ; Tomczak & Welling ( 2016 ) ; Berg et al . ( 2018 ) present more complex inverse autoregessive flows , householder and Sylvester normalizing flow based posteriors . Here , we focus on the orthogonal direction of more expressive priors and the above approaches are compatible with our approach . Recent work which focus more expressive priors include ( Nalisnick & Smyth , 2017 ) which proposes a Dirichlet process prior and ( Goyal et al. , 2017 ) which proposes a nested Chinese restaurant process prior . However , these methods require sophisticated learning methods . In contrast , ( Tomczak & Welling , 2018 ) proposes a mixture of Gaussians based prior ( with fixed number of components ) which is easier to train and shows promising results on some image generation tasks . ( Chen et al. , 2017 ) , proposes a inverse autoregressive flow based prior which leads to improvements in complex image generation tasks like CIFAR-10 . ( Ziegler & Rush , 2019 ) proposes a prior for VAE based text generation using complex non-linear flows which allows for complex multi-modal priors . While these works focus on unconditional priors , we aim to develop more expressive conditional priors . Posterior Collapse . Posterior collapse arises when the latent posterior does not encode useful information . Most prior work ( Yang et al. , 2017 ; Dieng et al. , 2019 ; Higgins et al. , 2017 ) concentrate on unconditional VAEs and modify the training objective – the KL divergence term is annealed to prevent collapse to the prior . Liu et al . ( 2019 ) extends KL annealing to CVAEs . However , KL annealing does not optimize a true lower bound of the ELBO for most of training . Zhao et al . ( 2017 ) also modifies the objective to choose the model with the maximal rate . Razavi et al . ( 2019 ) propose anti-causal sequential priors for text modelling tasks . Bowman et al . ( 2016 ) ; Gulrajani et al . ( 2017 ) proposes to weaken the decoder so that the latent variables can not be ignored , however only unconditional VAEs are considered . Wang & Wang ( 2019 ) shows the advantage of normalizing flow based posteriors for preventing posterior collapse . In contrast , we study for the first time posterior collapse in conditional models on datasets with minor modes . Structured Sequence Prediction . Helbing & Molnar ( 1995 ) ; Robicquet et al . ( 2016 ) ; Alahi et al . ( 2016 ) ; Gupta et al . ( 2018 ) ; Zhao et al . ( 2019 ) ; Sadeghian et al . ( 2019 ) consider the problem of traffic participant trajectory prediction in a social context . Notably , ( Gupta et al. , 2018 ; Zhao et al. , 2019 ; Sadeghian et al. , 2019 ) use generative adversarial networks to generate socially compliant trajectories . 
However, the predictions are uni-modal. Starting from Bayer & Osendorfer (2014); Chung et al. (2015), more recently Lee et al. (2017); Bhattacharyya et al. (2018); Rhinehart et al. (2018); Deo & Trivedi (2019); Pajouheshgar & Lampert (2018) consider structured (one-to-many) predictions using a CVAE, improved CVAE training, pushforward policies for vehicle ego-motion prediction, motion planning, and a spatio-temporal convolutional network, respectively. Kumar et al. (2019) propose a normalizing flow based model for video sequence prediction; however, the sequences considered have very limited diversity compared to the trajectory prediction tasks considered here. Here, we focus on improving structured predictions using conditional normalizing flow based priors.
3 CONDITIONAL FLOW VARIATIONAL AUTOENCODER (CF-VAE). Our Conditional Flow Variational Autoencoder is based on the conditional variational autoencoder (Sohn et al., 2015), which is a deep directed graphical model for modeling conditional data distributions $p_\theta(y|x)$. Here, $x$ is the sequence up to time $t$, $x = [x_1, \cdots, x_t]$, and $y$ is the sequence to be predicted up to time $T$, $y = [y_{t+1}, \cdots, y_T]$. CVAEs factorize the conditional distribution using latent variables $z$. In detail, $p_\theta(y|x) = \int p_\theta(y|z, x)\, p(z|x)\, dz$, where $p(z|x)$ is the prior on the latent variables. During training, amortized variational inference is used and the posterior distribution $q_\phi(z|x, y)$ is learnt using a recognition network. The ELBO is maximized, given by,
$$\log p_\theta(y|x) \geq \mathbb{E}_{q_\phi(z|x,y)} \log p_\theta(y|z,x) - D_{\mathrm{KL}}\big(q_\phi(z|x,y) \,\|\, p(z|x)\big). \quad (1)$$
In practice, to simplify learning, simple unconditional standard Gaussian priors are used (Sohn et al., 2015). However, the complexity of the target distribution $p_\theta(y|x)$, e.g., its number of modes, is highly dependent upon the condition $x$. An unconditional prior demands identical latent distributions irrespective of the complexity of the target conditional distribution – a very strong constraint on the recognition network. Moreover, the latent variables cannot encode any conditioning information, and this leaves the burden of learning the dependence on the condition completely on the decoder. Furthermore, on complex conditional multi-modal data, Gaussian priors have been shown to induce a strong model bias (Tomczak & Welling, 2016; Ziegler & Rush, 2019). It becomes increasingly difficult to map complex multi-modal distributions to uni-modal Gaussian distributions, further complicated by the sensitivity of the RNN encoders/decoders to subtle variations in the hidden states (Bowman et al., 2016). Moreover, the standard closed-form estimate of the KL-divergence pushes the encoded latent distributions to the mean of the Gaussian, leading to latent variable collapse (Wang et al., 2017; Gu et al., 2018), while discriminator based approaches (Tolstikhin et al., 2017) lead to underestimates of the KL-divergence (Rosca et al., 2017). Therefore, we propose conditional priors based on conditional normalizing flows to enable the latent variables to encode conditional information and to allow for complex multi-modal latent representations. Next, we introduce our new conditional non-linear normalizing flows, followed by our regularized Conditional Flow Variational Autoencoder (CF-VAE) formulation.
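For concreteness, here is a minimal PyTorch-style sketch of the standard CVAE objective in (1), assuming a Gaussian recognition network and the fixed standard Gaussian prior discussed above; `recognition_net` and `decoder` are illustrative placeholders, not the authors' architectures.

```python
import torch
import torch.nn.functional as F

def cvae_loss(recognition_net, decoder, x, y):
    """Negative ELBO of Eq. (1) for a CVAE with Gaussian q(z|x, y)
    and the standard Gaussian prior p(z|x) = N(0, I)."""
    mu, logvar = recognition_net(x, y)                     # parameters of q(z|x, y)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized sample
    y_hat = decoder(z, x)                                  # mean of p(y|z, x)
    recon = F.mse_loss(y_hat, y, reduction="sum")          # -log p(y|z, x) up to constants
    # Closed-form KL(q(z|x, y) || N(0, I))
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```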
3.1 CONDITIONAL NORMALIZING FLOWS. Recently, normalizing flow based priors (Tabak et al., 2010; Dinh et al., 2015) for VAEs have been proposed (Chen et al., 2017; Ziegler & Rush, 2019). Normalizing flows allow for complex priors by transforming a simple base density, e.g., a standard Gaussian, into a complex multi-modal density through a series of $n$ layers of invertible transformations $f_i$,
$$\epsilon \overset{f_1}{\longleftrightarrow} h_1 \overset{f_2}{\longleftrightarrow} h_2 \cdots \overset{f_n}{\longleftrightarrow} z. \quad (2)$$
However, such flows cannot model conditional priors. In contrast to prior work, we utilize conditional normalizing flows to model complex conditional priors. Conditional normalizing flows also consist of a series of $n$ layers of invertible transformations $f_i$ (with parameters $\psi$); however, we modify the transformations $f_i$ such that they are dependent on the condition $x$,
$$\epsilon|x \overset{f_1|x}{\longleftrightarrow} h_1|x \overset{f_2|x}{\longleftrightarrow} h_2|x \cdots \overset{f_n|x}{\longleftrightarrow} z|x. \quad (3)$$
Further, in contrast to prior work (Lu & Huang, 2019; Atanov et al., 2019; Ardizzone et al., 2019), which uses affine flows $f_i$, we build upon (Ziegler & Rush, 2019) and introduce conditional non-linear normalizing flows with split coupling. Split couplings ensure invertibility by applying a flow layer $f_i$ on only half of the dimensions at a time. To compute (5), we split the $D$ dimensions of the latent variable into halves, $z^L = \{1, \cdots, D/2\}$ and $z^R = \{D/2 + 1, \cdots, D\}$, at each invertible layer $f_i$. Our transformation takes the following form for each dimension $z^j$, alternately from $z^L$ or $z^R$,
$$f_i^{-1}(z^j \,|\, z^R, x) = \epsilon^j = a(z^R, x) + b(z^R, x)\, z^j + \frac{c(z^R, x)}{1 + \big(d(z^R, x)\, z^j + g(z^R, x)\big)^2}, \quad (4)$$
where $z^j \in z^L$. Details of the forward (generating) operation $f_i$ are in Appendix A. To ensure that the generated prior distribution is conditioned on $x$, in (4) and in the corresponding forward operation $f_i$, the coefficients $\{a, b, c, d, g\} \in \mathbb{R}$ are functions of both the other half of the dimensions of $z$ and the condition $x$ (unlike Ziegler & Rush (2019)). Finally, due to the expressive power of our conditional non-linear normalizing flows, simple spherical Gaussian base distributions were sufficient.
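A rough sketch of one conditional split-coupling layer implementing the inverse transform in (4) follows; the coefficient network and the positivity constraint via softplus are our illustrative choices, and the additional constraints on $c$ and $d$ needed for strict invertibility (per Ziegler & Rush, 2019) are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalNLSqCoupling(nn.Module):
    """One split-coupling layer of a conditional non-linear flow: a small
    network maps (z_R, x) to per-dimension coefficients a, b, c, d, g, which
    parameterize the inverse transform of Eq. (4) applied to z_L."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.coef_net = nn.Sequential(
            nn.Linear(dim - self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 5 * self.half),   # a, b, c, d, g for each z^j in z_L
        )

    def inverse(self, z, x):
        z_L, z_R = z[:, :self.half], z[:, self.half:]
        a, b, c, d, g = self.coef_net(torch.cat([z_R, x], -1)).chunk(5, -1)
        b = F.softplus(b)                       # keep the affine slope positive
        eps_L = a + b * z_L + c / (1.0 + (d * z_L + g) ** 2)
        return torch.cat([eps_L, z_R], -1)      # z_R passes through unchanged
```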
The work proposes a method to improve conditional VAEs with a learnable prior distribution based on normalizing flows. The authors also design two regularization methods for the CF-VAE to improve training stability and avoid posterior collapse. The paper is clearly motivated and easy to follow. Experimental results on the MNIST, Stanford Drone and HighD datasets show that the proposed model achieves better results than previous state-of-the-art models by significant margins.
SP:ef685ddc158f7b74eec0f2139f14d7443b1192a7
Sequence-level Intrinsic Exploration Model for Partially Observable Domains
1 INTRODUCTION. Under the reinforcement learning formalism, the learning behavior of an agent is driven by the reward that the agent collects from the environment (Sutton and Barto, 1998). However, many real-world problems have sparse rewards, and most existing algorithms struggle with such sparsity. One inherent reason for the inferior performance of conventional approaches in sparse reward domains is that, initially, an agent trained with those approaches can hardly stumble into a reward/goal state by chance due to their simple exploration strategies (Pathak et al., 2017). To tackle sparse reward problems, it is crucial to incentivize the agent's exploration behavior. One prominent line of solutions for encouraging an agent's exploration is reward shaping (Singh, 1992; Dorigo and Colombetti, 1994), where the agent develops internal reward models to assign additional reward signals, apart from the environment reward, to encourage exploration. To model the internal reward signal, the agent's curiosity-driven behaviors are often formalized as intrinsic novelty models (Schmidhuber, 1991; Singh et al., 2004; Oudeyer et al., 2007), which characterize the agent's experience to compute novelty scores. Our work belongs to the broad category of methods that solve sparse reward problems with novelty models and reward shaping. Specifically, we consider the line of sparse reward problems that employ partially observable inputs, with the inputs scaling to high-dimensional state spaces, such as images. Such problems cover a range of important applications in AI research, e.g., navigation, robotics control and video game playing. Even though the recently emerged intrinsic novelty models have demonstrated considerable efficiency in solving sparse reward problems with partial observability, we still face the following two major challenges. First, inferring the novelty of the true state given only partial observations still remains an open problem. Most of today's state-of-the-art novelty models (e.g., Savinov et al., 2019; Pathak et al., 2017) only derive the novelty from local information, e.g., the concatenation of a few recent frames. Second, though prediction error has been widely adopted as an effective metric to infer novelty, most of the existing approaches build the novelty model upon short-term prediction error, such as 1-step look-ahead. Such a short-term prediction task might be an inadequate proxy for representing the novelty over the state space, i.e., it might be too simple and thus result in inferior novelty scores. Our key motivations are as follows. First, sequence-level novelty models are needed to reason over partially observable states with greater efficiency. Second, the novelty model should consider longer-term prediction than self-prediction or 1-step look-ahead, to infer more meaningful novelty scores. Based on the above intuitions, this work proposes a new sequence-level novelty model for partially observable domains with the following two distinct properties. First, we introduce a dual-LSTM architecture that reasons over a sequence of past transitions to construct the novelty model. Second, we infer the novelty of a state from the prediction error of open-loop multi-step forward dynamics prediction, which is crucial for deriving high-quality novelty scores.
2 METHODOLOGY.
A Partially Observable Markov Decision Process (POMDP) generalizes the MDP to learning under partial observability. Formally, a POMDP is defined as a tuple $\langle S, A, O, T, Z, R \rangle$, where $S$, $A$ and $O$ are the spaces for the state, action and observation, respectively. The transition function $T(s, a, s') = p(s'|s, a)$ specifies the probability of transiting to state $s'$ after taking action $a$ at state $s$. The observation function $Z(s, a, o) = p(o|s, a)$ defines the probability of receiving observation $o$ after taking action $a$ at state $s$. The reward function $R(s, a)$ defines the real-valued environment reward issued to the agent after taking action $a$ at state $s$. Under partial observability, the state space $S$ is not accessible to the agent. Thus, the agent performs decision making by forming a belief state $b_t$ from its observation space $O$, which integrates the information from the entire past history, i.e., $(o_0, a_0, o_1, a_1, \ldots, o_t, a_t)$. The goal of reinforcement learning is to optimize a policy $\pi(b_t)$ that outputs an action distribution given each belief state $b_t$, with the objective of maximizing the discounted cumulative reward collected in each episode, i.e., $\sum_{t=0}^{\infty} \gamma^t r_t$, where $\gamma \in (0, 1]$ is a real-valued discount factor.
2.1 INTRINSIC EXPLORATION FRAMEWORK. We now describe our proposed sequence-level intrinsic novelty model for partially observable domains with high-dimensional inputs (i.e., images). Our primary focus is on tasks where the external rewards $r_t$ are sparse, i.e., zero most of the time. This motivates us to engage a novelty function to infer the novelty over the state space and assign a reward bonus to encourage exploration. The novelty function is derived from a forward-inverse dynamics model. Figure 1 depicts a high-level overview of our proposed sequence-level novelty computation. To infer the novelty of a state at time $t$, we perform reasoning over a sequence of transitions of length $L + H$. Intuitively, we use a sequence of $H$ consecutive observation frames, together with a sequence of $L$ actions taken following the observation sequence, to predict the forward dynamics. As such, the novelty model performs open-loop multi-step forward prediction. By setting the length of the action sequence, i.e., $L$, our proposed paradigm can produce forward dynamics prediction tasks of varying difficulty. To process the input sequences, we propose a dual-LSTM architecture, as shown in Figure 2. Overall, each raw observation and action is first projected by its corresponding embedding module. Then LSTM modules are adopted over the sequences of observation/action embeddings to derive sequential observation/action features. The sequential observation/action features are then synthesized into a specific form of $h_t$, which serves as the latent representation of the past transitions at time $t$ and is employed as input to predict the forward dynamics $f(h_t)$. The error of the forward dynamics prediction is used to estimate the novelty $r_t^+$ of the state at time $t$. Furthermore, to make the latent features over the past transitions more informative, we also incorporate an inverse dynamics prediction model $f_{inv}$ to predict the action distributions. Overall, the proposed dual-LSTM architecture enables us to perform sequence-level reasoning and to infer novelty from the multi-step forward prediction.
2.2 SEQUENCE ENCODING WITH DUAL-LSTM ARCHITECTURE.
The sequence encoding module accepts a sequence of observations of length $H$ and a sequence of actions of length $L$ as input. Formally, we denote the observation sequence and action sequence by $O_t = o_{t-L-H-1:t-L-1}$ and $A_t = a_{t-L-1:t-1}$, respectively. Specifically, each observation $o_t$ is represented as a 3D image frame with width $m$, height $n$ and channel $c$, i.e., $o_t \in \mathbb{R}^{m \times n \times c}$. Each action is modeled as a one-hot encoding vector $a_t \in \mathbb{R}^{|A|}$, where $|A|$ denotes the size of the action space. Given the sequences $O_t$ and $A_t$, the sequence encoding module first adopts an embedding module $f_e(\cdot)$ parameterized by $\theta_E = \{\theta_{E_o}, \theta_{E_a}\}$ to process the observation sequence and the action sequence as follows,
$$\phi_t^O = f_e(O_t; \theta_{E_o}) \quad \text{and} \quad \phi_t^A = f_e(A_t; \theta_{E_a}), \quad (1)$$
where $\theta_{E_o}$ and $\theta_{E_a}$ denote the parameters of the observation embedding function and the action embedding function, respectively. Next, LSTM encoders are applied to the outputs of the observation/action embedding modules as follows,
$$[h_t^o, c_t^o] = \mathrm{LSTM}_o(\phi_t^O, h_{t-1}^o, c_{t-1}^o) \quad \text{and} \quad [h_t^a, c_t^a] = \mathrm{LSTM}_a(\phi_t^A, h_{t-1}^a, c_{t-1}^a), \quad (2)$$
where $h_t^o \in \mathbb{R}^l$ and $h_t^a \in \mathbb{R}^l$ represent the latent features encoded from the observation sequence and action sequence. For simplicity, we assume $h_t^o$ and $h_t^a$ have the same dimensionality. $c_t^o$ and $c_t^a$ denote the cell outputs of the two LSTM modules. Next, the sequence features for the observation/action, $h_t^o$ and $h_t^a$, are synthesized to derive latent features $h_t$ which describe the past transitions. Intuitively, the form of $h_t$ is proposed as follows:
$$h_t^{itr} = h_t^o \odot h_t^a \quad \text{and} \quad h_t = [h_t^o, h_t^a, h_t^{itr}]. \quad (3)$$
To compute $h_t$, a multiplicative interaction is first performed over $h_t^o$ and $h_t^a$, which results in $h_t^{itr}$, where $\odot$ denotes element-wise multiplication. Then $h_t$ is derived by concatenating the multiplicative interaction feature $h_t^{itr}$ with the latent representations of the observation and action sequences, i.e., $h_t^o$ and $h_t^a$. The reason for generating $h_t$ in this way is that the prediction task over the partial observation $o_t$ is related both to the local information conveyed in the two sequences themselves (i.e., $h_t^o$ and $h_t^a$) and to the collaborative information derived by interacting the two sequence features. The reason for performing multiplicative interaction is that the effectiveness of such an operation for synthesizing different types of features has been validated in prior works (Oh et al., 2015; Ma et al., 2019). We demonstrate that generating $h_t$ in the proposed form is effective and crucial for deriving desirable policy learning performance in the ablation study (Figure 7c) of the experiment section.
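A minimal sketch of the dual-LSTM encoder of Eqs. (1)-(3) follows, with a linear layer standing in for the observation embedding (the paper does not specify its form here) and flattened frames as input; all module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class DualLSTMEncoder(nn.Module):
    """Embed the observation and action sequences (Eq. 1), encode each with
    its own LSTM (Eq. 2), then fuse via multiplicative interaction (Eq. 3)."""
    def __init__(self, obs_dim, num_actions, emb_dim=128, l=256):
        super().__init__()
        self.obs_emb = nn.Linear(obs_dim, emb_dim)    # stands in for a conv encoder
        self.act_emb = nn.Linear(num_actions, emb_dim)
        self.lstm_o = nn.LSTM(emb_dim, l, batch_first=True)
        self.lstm_a = nn.LSTM(emb_dim, l, batch_first=True)

    def forward(self, obs_seq, act_seq):
        # obs_seq: (B, H, obs_dim) flattened frames; act_seq: (B, L, num_actions) one-hot
        _, (h_o, _) = self.lstm_o(self.obs_emb(obs_seq))   # observation branch
        _, (h_a, _) = self.lstm_a(self.act_emb(act_seq))   # action branch
        h_o, h_a = h_o[-1], h_a[-1]                        # final hidden states
        h_itr = h_o * h_a                                  # element-wise interaction
        return torch.cat([h_o, h_a, h_itr], dim=-1)        # h_t = [h_o, h_a, h_itr]
```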
2.3 COMPUTING NOVELTY. To compute the novelty, the latent features $h_t$ are first employed as input to a feedforward prediction function that predicts the forward dynamics:
$$\hat{\psi}_t = f(h_t; \theta_F) \quad \text{and} \quad \psi_t^* = f^*(o_t), \quad (4)$$
where $f(\cdot)$ is the forward prediction function parameterized by $\theta_F$, and $\hat{\psi}_t$ denotes the prediction output. We use $\psi_t^*$ to denote the prediction target, which is computed from some target function $f^*(\cdot)$. Within the proposed novelty framework, the target function $f^*(\cdot)$ can be derived in various forms; common choices include the representation of $o_t$ in its original feature space, e.g., image pixels, and the learned embedding of $o_t$, i.e., $f_e(\cdot; \theta_{E_o})$. Apart from these conventional choices, in this work we employ a target function computed from a random network distillation model (Burda et al., 2019). Thus, $f^*(\cdot)$ is represented by a fixed and randomly initialized target network. Intuitively, it forms a random mapping from each input observation to a point in a $k$-dimensional space, i.e., $f^*: \mathbb{R}^{m \times n \times c} \to \mathbb{R}^k$. Hence the forward dynamics model is trained to distill the randomly drawn function from the prior. The prediction error inferred from such a model is related to the uncertainty quantification in predicting some constant zero function (Osband et al., 2018). The novelty of a state is inferred from this uncertainty, evaluated as the MSE loss of the forward model. Formally, at step $t$, a novelty score or reward bonus is computed in the following form:
$$r^+(O_t, A_t) = \frac{\beta}{2} \|\psi_t^* - \hat{\psi}_t\|_2^2, \quad (5)$$
where $\beta \geq 0$ is a hyperparameter that scales the reward bonus. The reward bonus is issued to the agent in a step-wise manner. During the policy learning process, the agent maximizes the sum of the external rewards and the intrinsic rewards derived from the novelty model. Therefore, the overall reward term to be maximized, as will be shown in (8), is computed as $r_t = r_t^e + r_t^+$, where $r_t^e$ denotes the external reward from the environment.
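A sketch of the bonus computation in Eq. (5) under these definitions; `encoder`, `f_pred` and `f_target` are assumed callables corresponding to the dual-LSTM encoder, the forward prediction head and the frozen random target network.

```python
import torch

def novelty_bonus(encoder, f_pred, f_target, obs_seq, act_seq, obs_next, beta=1.0):
    """Reward bonus of Eq. (5): squared error between the open-loop forward
    prediction and the frozen random target embedding of the true observation."""
    h_t = encoder(obs_seq, act_seq)        # latent summary of past transitions
    psi_hat = f_pred(h_t)                  # forward prediction f(h_t; theta_F)
    with torch.no_grad():
        psi_star = f_target(obs_next)      # fixed, randomly initialized network
    return 0.5 * beta * (psi_star - psi_hat).pow(2).sum(dim=-1)
```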
The authors tackle the exploration problem by introducing SIM (Sequence-level Intrinsic exploration Module). In most existing literature, intrinsic motivation bonuses are scored based on individual states or transitions, and not over multi-step trajectories. SIM predicts novelty bonuses based on the prediction error of an open-loop forward dynamics model - the model consumes as input a sequence of observations (without paired actions) followed by a sequence of actions (without paired observations) to predict a feature vector associated with the next state. The error between this feature vector and the RND embedding of the true observation is used as a novelty bonus.
This paper extends the prediction-error based model by Pathak et al. by learning a forward (and inverse) dynamics model that predicts a state feature multiple steps into the future (say, K steps) given an open-loop sequence of K actions, as opposed to 1 step into the future, with the caveat that instead of using learnable state features, a random network is used for computing state features, similar to Random Network Distillation (RND) by Burda et al., 2019. Also, their inverse dynamics model predicts the entire sequence of actions up to K steps. Experiments on VizDoom point-navigation tasks show that the proposed model does better than baselines as rewards get sparser. Ablations are provided to justify the choice of K in multi-step prediction, the choice of inverse dynamics, and the choice of RND state features.
SP:1370baa1ac8b5530a26058d36be82ecf0e97cc98
Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators
1 INTRODUCTION. Few-shot learning from the perspective of meta-learning aims to train models which can quickly solve novel tasks or adapt to new environments with a limited number of examples. In the case of few-shot classification, models are usually evaluated on a held-out dataset which does not have any class in common with the training dataset. In the real world, however, we often face harder problems in which novel tasks arise arbitrarily from many different domains, even including previously unseen ones. In this study, we propose a more practical few-shot classification algorithm to generalize across domains, beyond the common assumption of meta-training and meta-testing within a single domain. Our approach to covering a complex multi-domain task distribution is to construct a pool of multiple models and learn to select the best one given a novel task, through meta-training over various domains. This recasts task-specific adaptation across domains as a simple selection problem, which can be much easier than manipulating the high-dimensional parameters or representations of a single model to adapt to a novel task. Furthermore, we enforce all models to share some of the parameters and train per-model modulators with model-specific parameters on top of that. By doing so, each model can keep important domain-invariant features while the model pool as a whole has representational diversity, without a significant increase in model parameters. We train and test our algorithms on various image classification datasets with different characteristics. Experimental results show that the proposed selection scheme outperforms other state-of-the-art algorithms on few-shot classification tasks from many different domains, without being given any knowledge of the domain a task belongs to. We also show that even few-shot classification tasks from previously unseen domains, i.e., domains which have never appeared during meta-training, can be solved successfully by averaging the outputs of all models.
2 METHODS. 2.1 PROBLEM STATEMENT. We follow the common setting of few-shot classification in the meta-learning community (Vinyals et al., 2016). For an $N$-way, $K$-shot classification task, an episode consisting of a support set $S = \{(x_i^s, y_i^s)\}_{i=1}^{NK}$ and a query set $Q = \{(x_i^q, y_i^q)\}_{i=1}^{T}$ is sampled from a given dataset, where $x_i^s$, $x_i^q$, $y_i^s$ and $y_i^q$ represent examples and their correct labels, respectively, and $T$ is the number of query examples. Once a model has been trained on a number of random episodes at meta-training time, it is expected to predict the correct label for an unlabeled query given only a few labeled examples (i.e., the support set), even if all of these come from classes which have never appeared during meta-training. Based on this setting, we try to build a domain-agnostic meta-learner beyond the common meta-learning assumptions, i.e., meta-training within a single domain and meta-testing within the same domain. We perform meta-training over multiple diverse domains, which we call source domains $D_{S_1}, D_{S_2}, \cdots, D_{S_M}$, where $M$ is the number of source domains, expecting to obtain a domain-generalized meta-learner. Since we presume that one particular dataset defines its own domain, we realize this idea by training this meta-learner on various tasks from many different datasets. In our study, the trained meta-learner is meta-tested on a target domain $D_T$ for two types of cross-domain few-shot classification tasks.
One is a task which requires classifying held-out classes of the multiple source domains (i.e., $D_T \in \{D_{S_1}, D_{S_2}, \cdots, D_{S_M}\}$) without knowing from which dataset each task is sampled. This can be used to evaluate whether the meta-learner is capable of adapting to a complex task distribution across multiple domains. We also tackle tasks sampled from datasets unseen during meta-training (i.e., $D_{S_i} \cap D_T = \emptyset$ for all $i$), which requires generalizing over out-of-distribution tasks at the domain level.
2.2 BUILDING A POOL OF EMBEDDING MODELS WITH DISPARATE MODULATORS. Basically, we perform metric-based meta-learning to learn a good metric space in which the support and query examples from the same class are located closely over various domains. While recent meta-learning methods have been proposed to train a single model commonly applicable to various potential tasks and to learn to adjust the model to a particular task for further improvement (Rusu et al., 2019; Oreshkin et al., 2018; Ye et al., 2018; Triantafillou et al., 2019), we train a pool of multiple embedding models, each of which defines a different metric space, together with a meta-model to select the best model from them given a novel task. This makes task-specific adaptation easier and more effective to learn, because our approach only needs to solve a simple classification problem, i.e., choosing one of the pre-trained models, instead of learning to manipulate high-dimensional model components, such as model parameters or activations, directly to adapt to novel tasks from various domains. Rather than training each model separately, we take an approach in which all models share a considerable amount of parameters and are differentiated by adding per-model modulators, as is usually done in multi-task learning (Ruder, 2017). The rationale behind this is to let our model pool capture good domain-invariant features via the shared parameters, as well as have diversity, which is desirable for representing the complex cross-domain task distribution, without a significant increase in the number of parameters. To realize this idea, we first build a base network $f_E(\cdot; \theta)$ shared among all models. One large virtual dataset is constructed by aggregating all source datasets. The base network is trained on this dataset following a typical supervised learning procedure (Figure 1(a)). In the next step, we build one model per source domain by adding a per-model modulator with a parameter set $\alpha_i$ on top of the frozen base network. We then train each modulator on one dataset $D_{S_i}$ by performing metric-based meta-learning in the same way as the Prototypical Networks (ProtoNet) (Snell et al., 2017) (Figure 1(b)). Finally, we have a pool of embedding models which are ready for non-parametric classification in the same way as ProtoNet. As shown in Figure 2, we add the modulator components to the base network on a per-layer basis, following the idea proposed in (Rebuffi et al., 2018). This has turned out to be more effective for domain-specific representation than the conventional approach of adding a few extra layers for each model. Moreover, it allows each modulated model to have the same computational cost at inference time as the base network's, because all modulator components can be fused into the existing 3×3 convolution operations. We try two modulator architectures, 1×1 convolution and channel-wise transform (i.e., per-channel trainable scale and bias); a sketch of the latter is given below. The former shows slightly better performance, whereas the latter uses far fewer parameters, incurring only negligible memory overhead for the pool. More details of the architecture, including the number of parameters, can be found in Appendix B.
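As a concrete illustration, here is a minimal sketch of the channel-wise variant (per-channel trainable scale and bias); at inference time the scale and bias can be folded into the weights of the preceding 3×3 convolution, which is what keeps the modulated models' cost equal to the base network's.

```python
import torch
import torch.nn as nn

class ChannelwiseModulator(nn.Module):
    """Per-channel trainable scale and bias applied to the activations of a
    frozen base-network layer; only 2*C extra parameters per modulated layer."""
    def __init__(self, channels):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, feats):           # feats: (B, C, H, W) base activations
        return feats * self.scale + self.bias
```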
2.3 LEARNING TO SELECT THE BEST MODEL FOR A TARGET TASK. After the construction of the pool, as the final step of our training we build a meta-model to predict the most suitable model among all constituent models in the pool for a given task. By training this model over a number of episodes sampled randomly from all available source datasets, we expect this ability to generalize to novel tasks from various domains, including ones unseen during meta-training. As depicted in Figure 1(c), this meta-model, parameterized by $\phi$ and called the model selection network, is trained to map a task representation $z_{task}$ for a particular task to the index of the best model in the model pool. The task representation is obtained by passing all examples in the support set of the task through the base network and taking the mean of all resulting embedded vectors, to form a fixed-length summary of the given task. During meta-training, the index of the best model, which is the ground-truth label for training the selection network, is generated by measuring the classification accuracy of every model in the pool on the query set and picking the index of the one with the highest accuracy. In our setup, task-specific adaptation is thus reduced to an $(M+1)$-way classification problem when we have $M+1$ embedding models, including the base network, learned from $M$ available source datasets. Learning this classifier can be far simpler than learning to adapt model parameters (Rusu et al., 2019), embedded vectors (Ye et al., 2018) or per-layer activations (Oreshkin et al., 2018) to a particular task, because their dimensions are usually much larger than that of our selection network outputs, i.e., the number of pre-trained models. The overall training procedure for constructing the pool and training the selection network is summarized in Algorithm 1 in Appendix C.
2.4 INFERENCE WITH THE POOL AT META-TESTING TIME. One way of performing inference is to predict a class with the best single model chosen by the selection network $f_S(\cdot; \phi)$ for a given support set (Figure 3(a)). Following the method proposed in ProtoNet, as shown in Equation 1, a class prediction $\hat{y}$ for a query example $x^q$ is made by finding the class whose prototype $c_y$ is closest to the embedded vector of the given query example. Specifically, the prototype for class $y$ is defined as the mean of the embedded vectors of all support examples belonging to class $y$, and the squared Euclidean distances $d_i^y$ to these prototypes are compared between the classes:
$$\hat{y} = \operatorname*{argmin}_y d_i^y(x^q) = \operatorname*{argmin}_y \|c_y - f_E(x^q; \theta, \alpha_i)\|^2. \quad (1)$$
Another way to benefit from the model pool is to combine the outputs of all available embedding models for inference (Figure 3(b)). We take the simplest approach of averaging the outputs of all models at the level of class prediction probabilities. As described in Equation 2, we collect the output probabilities $p(y \mid x^q; i)$ based on the relative distances to the class prototypes $d_i^y$ for a given task. Then, our final prediction $\hat{y}$ is the class that maximizes the mean of these probabilities over all $M+1$ models:
$$\hat{y} = \operatorname*{argmax}_y \frac{1}{M+1} \sum_{i=0}^{M} p(y \mid x^q; i) = \operatorname*{argmax}_y \frac{1}{M+1} \sum_{i=0}^{M} \mathrm{softmax}\big(-d_i^y(x^q)\big). \quad (2)$$
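A sketch of both inference modes follows: Eq. (1) with the model picked by the selection network, and Eq. (2) with probabilities averaged over the pool. The `.base` attribute used for the task summary and the callable `selector` are illustrative assumptions about the interface, not the authors' code.

```python
import torch

def pool_predict(models, selector, support_x, support_y, query_x, ensemble=False):
    """Eq. (1): ProtoNet prediction with the model chosen by the selection
    network; Eq. (2): class probabilities averaged over the whole pool."""
    def class_logits(embed):
        zs, zq = embed(support_x), embed(query_x)
        protos = torch.stack([zs[support_y == c].mean(0)
                              for c in support_y.unique()])
        return -torch.cdist(zq, protos) ** 2   # negative squared distances

    if not ensemble:
        z_task = models[0].base(support_x).mean(0)   # task summary via base net
        i = int(selector(z_task).argmax())           # pick the best model
        return class_logits(models[i]).argmax(dim=-1)        # Eq. (1)
    probs = torch.stack([class_logits(m).softmax(dim=-1) for m in models])
    return probs.mean(0).argmax(dim=-1)                      # Eq. (2)
```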
As the last step, we adopt the test-time 'further adaptation' proposed in (Chen et al., 2019), which turned out to yield additional performance improvements in most cases. Experimental results both with and without the further adaptation are presented in Section 3 and Appendix D, with implementation details in Appendix C.
3 EXPERIMENTS. 3.1 SETUP. 3.1.1 DATASETS. We use the image classification datasets Aircraft, CIFAR100, DTD, GTSRB, ImageNet12, Omniglot, SVHN, UCF101 and Flowers, introduced in the Visual Decathlon Challenge (Rebuffi et al., 2017), for evaluation. These are considered as different domains in our experiments. All datasets are re-split for few-shot classification, i.e., there is no overlap of classes between meta-training and meta-testing. More details about the datasets can be found in Appendix A.
This paper aims to tackle few-shot classification across many different domains. The idea is to build a pool of embedding models, which are based on the same base network. The models are diversified by their own modulators. The high-level intuition is to let the model pool capture good domain-invariant features by the shared parameters and domain-specific features by the modulators and selection network, which is desirable to represent the complex cross-domain task distribution without a significant increase in the number of parameters.
In this paper, the authors propose to address the few-shot learning problem, especially the cross-domain setting where a newly arriving task originates from a different distribution (implemented in this work by sampling from an unseen dataset). Basically, the authors construct a model zoo based on the source datasets at hand and learn an “argmax” meta-selector which takes the embedding of a task as input and outputs a model selection index. The idea is very intuitive, and the implementation is kind of an incremental combination of (Rebuffi et al., 2018), building models for multiple tasks, and (Oreshkin et al., 2018), tailoring based on task embeddings.
SP:c6613fdb5a29233f4fb6b2eb86542ae07fc1d366
Exploration in Reinforcement Learning with Deep Covering Options
1 INTRODUCTION. Temporal abstraction, often formalized via the options framework (Sutton et al., 1999), has the potential to greatly improve the performance of reinforcement learning (RL) agents by representing actions at different time scales. However, the question of which options an agent should construct, and the related question of what objective function that option construction process should be optimizing, remain open. One recent approach is to construct options that aid exploration by providing agents with more decisive behavior than the dithering common to random exploration (e.g., Menache et al., 2002; Stolle and Precup, 2002; Şimşek and Barto, 2004; Şimşek et al., 2005; Şimşek and Barto, 2009; Machado et al., 2017; Eysenbach et al., 2019). The Laplacian (Chung, 1996), the matrix extracted from the graph induced by the agent's policy and the dynamics of the environment, is often used when discovering options for exploration (e.g., Machado and Bowling, 2016; Machado et al., 2017; 2018; Jinnai et al., 2019b). The options discovered with such an approach encourage agents to navigate to parts of the state space that are infrequently visited. However, the existing methods either lack a principled way of constraining the number of discovered options (e.g., Machado and Bowling, 2016; Machado et al., 2017; 2018) or are limited to the tabular setting (e.g., Jinnai et al., 2019b). In this paper we show how recent developments in eigenfunction estimation of the Laplacian (Wu et al., 2019) can be used to extend a principled approach for option discovery (Jinnai et al., 2019b) to the non-linear function approximation case. This new algorithm for option discovery, deep covering options, is computationally tractable and applicable to environments with large (or continuous) state spaces. Although methods that learn representations are generally more flexible, more scalable, and often lead to better performance, before this paper covering options could not easily be combined with modern representation learning techniques. Deep covering options discovers a small set of options that encourage exploration by minimizing the agent's expected cover time, i.e., the expected number of steps required to visit every state in the environment (Broder and Karlin, 1989). Moreover, unlike most previous approaches to discovering options for exploration, it can be applied both to settings where a pretraining (unsupervised) phase is available (e.g., Eysenbach et al., 2019) and to the traditional, fully online setting. We evaluate our method, in both settings, on three different platforms to demonstrate its applicability to a wide range of domains. First, we apply it to the Pinball domain (Konidaris and Barto, 2009), which has a discrete action space and a continuous state space. Second, we apply it to three MuJoCo control tasks (Todorov et al., 2012), which are continuous state- and action-space domains. In all of these domains, our method improves over the baseline. Finally, we perform a qualitative analysis of our method on three Atari 2600 games (Bellemare et al., 2013) to demonstrate its potential in domains with very large state spaces. Deep covering options successfully finds under-explored regions of the state space and builds options to target those regions.
2 BACKGROUND AND RELATED WORK.
We assume the standard reinforcement learning setting ( Sutton and Barto , 1998 ) , where the environment is modeled as a Markov Decision Process ( MDP ) , ( S , A , T , R , γ ) , where S is the set of states , A is the set of actions , T : S × A × S → [ 0 , 1 ] is the state transition function , R : S × A → R is the reward function , and 0 ≤ γ ≤ 1 is the discount factor . We use the options framework ( Sutton et al. , 1999 ) to represent temporally extended actions . It defines an option as a triple ( I , π , β ) , where I ⊆ S is the set of states in which the option can initiate , π : S → Pr ( A ) is the policy the agent follows while the option is being executed , and β : S → [ 0 , 1 ] is the termination condition . We refer to the set of states in which β ( s ) = 1 as the termination set .
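To make the option triple concrete , the following minimal Python sketch renders ( I , π , β ) as code ; the class , the helper , and the gym-style env.step interface are our own assumptions for illustration , not part of the papers discussed here .

```python
from dataclasses import dataclass
from typing import Any, Callable
import random

@dataclass
class Option:
    """An option (I, pi, beta) in the sense of Sutton et al. (1999)."""
    initiation: Callable[[Any], bool]     # I: may the option start in state s?
    policy: Callable[[Any], Any]          # pi: action chosen in state s
    termination: Callable[[Any], float]   # beta: probability of terminating in s

def run_option(env, state, option: Option, max_steps: int = 1000):
    """Execute an option until it terminates; assumes a gym-style env.step."""
    assert option.initiation(state), "option is not available in this state"
    for _ in range(max_steps):
        state, _, done, _ = env.step(option.policy(state))
        if done or random.random() < option.termination(state):
            break
    return state
```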
2.1 RELATED WORK . Many option discovery algorithms are based on the reward signals generated by the environment and are thus task-dependent . These methods often decompose the trajectories reaching the rewarding states into options . Several papers have proposed generating options from trajectories reaching these rewarding states ( e.g. , McGovern and Barto , 2001 ; Menache et al. , 2002 ; Konidaris and Barto , 2009 ) , while other approaches use the observed rewards to generate options with gradient descent ( e.g. , Mankowitz et al. , 2016 ; Bacon et al. , 2017 ; Harb et al. , 2018 ; Tiwari and Thomas , 2019 ) . These approaches are often ineffective in sparse-reward problems , where only a few state-action pairs lead to a positive reward . Fewer papers have tackled the problem of option discovery for exploration without using reward signals . Eysenbach et al . ( 2019 ) proposed to generate options maximizing an information-theoretic objective so that each option generates diverse behavior . While many option discovery methods are limited to discrete state and action space tasks , their method can generate options that solve many continuous control tasks , even when ignoring the environment ’ s reward function . Machado et al . ( 2017 ; 2018 ) proposed eigenoptions , a method to generate options using the Laplacian eigenvectors ( Chung , 1996 ) . Their approach is similar to covering options but requires the set of options to be orthogonal to each other and introduces a prohibitively large number of options at each iteration . Several papers have proposed identifying subgoal states without reward information through graph concepts such as clustering ( Menache et al. , 2002 ; Şimşek et al. , 2005 ) , visitation statistics ( Şimşek and Barto , 2004 ; Stolle and Precup , 2002 ) , and betweenness centrality ( Şimşek and Barto , 2009 ) . As they use graph algorithms to discover subgoals , their scope is often limited to tabular domains . 2.2 COVERING OPTIONS . Covering options ( Jinnai et al. , 2019b ) is an approach that minimizes the expected cover time of a uniformly random policy by augmenting the agent ’ s action set with options obtained from the eigenvector associated with the second smallest eigenvalue of the Laplacian . Covering options can be seen as increasing the likelihood that a random walk leads to a rewarding state , since the expected cover time is the time required for a random walk to visit all the vertices in a graph ( Broder and Karlin , 1989 ) . Covering options achieves this objective by minimizing an upper bound on the expected cover time , E [ C ( G ) ] , which is governed by the second smallest eigenvalue of the normalized Laplacian , λ2 , also known as the algebraic connectivity ( Fiedler , 1973 ) :

$$ \mathbb{E}[C(G)] \le \frac{n^2 \ln n}{\lambda_2} \, \big( 1 + o(1) \big) , \qquad (1) $$

where n is the number of vertices of the graph . Equation 1 shows that the larger the algebraic connectivity , the smaller the upper bound on the expected cover time . Intuitively , the algebraic connectivity represents how densely the graph is connected . The eigenvector f corresponding to λ2 is an embedding of the graph into a one-dimensional interval in which nodes connected by an edge tend to be placed nearby ( see Figure 1 , adapted from Jinnai et al. , 2019b ) . The pair of nodes with the maximum and minimum values in f are the most distant nodes in the embedding space . Connecting these two nodes greedily maximizes the algebraic connectivity to a first-order approximation ( Ghosh and Boyd , 2006 ) . Covering options works as follows :

1. Compute the second smallest eigenvalue and the corresponding eigenvector f of the Laplacian exactly by solving the following constrained optimization problem :

$$ \lambda_2 = \inf_{\substack{f^\top A \mathbf{1} = 0 \\ f^\top A f = 1}} G(f) , \qquad G(f) = \frac{1}{2} \sum_{s , s' \in S} \big( f(s) - f(s') \big)^2 A(s , s') , \qquad (2) $$

where A is the adjacency matrix of the state-space graph , whose entry at ( s , s′ ) is 1 if s and s′ are adjacent and 0 otherwise .

2. Let vi and vj be the states with the largest and smallest values in the eigenvector , respectively . Generate two options : one with I = { vi } and β = { vj } , and the other with I = { vj } and β = { vi } . Each option policy is the optimal path from the initial state to the termination state .

3. Set G ← G ∪ { ( vi , vj ) } and repeat the process until the number of options reaches a threshold .

While this method is an efficient algorithm with performance guarantees , it is limited to small discrete MDPs , as it requires a state-space graph . Moreover , explicitly computing the matrix that encodes the environment ’ s adjacency structure is unrealistic beyond small problems . Finally , the method is constrained to point options , where both the initiation and termination sets consist of a single state ( Jinnai et al. , 2019a ) . Options generated by this method are therefore only executable at a single state . This is not useful for tasks with large ( or continuous ) state-spaces , as the probability of visiting the state in the initiation set of the option tends to zero . Even if the agent visits the state in the option ’ s initiation set and starts following the corresponding option ’ s policy , the probability of reaching the state in the termination set is also small ( see Figure 2 ) . In the next section we introduce an approach that addresses these limitations .
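Before moving on , the tabular procedure above can be captured in a short numpy sketch : build the normalized Laplacian from the adjacency matrix , take the eigenvector of the second smallest eigenvalue ( the Fiedler vector ) , and read off the two endpoint states of one covering option . The function name and the toy chain graph are our own ; this is an illustration , not the authors ’ implementation .

```python
import numpy as np

def covering_option_endpoints(A):
    """Return (argmax, argmin) of the Fiedler vector of the normalized Laplacian,
    i.e., the two endpoint states of one covering option."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
    _, eigvecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
    f = eigvecs[:, 1]                                  # eigenvector for lambda_2
    return int(np.argmax(f)), int(np.argmin(f))

# Example: a 5-state chain graph; the two ends of the chain are returned.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
print(covering_option_endpoints(A))  # (0, 4) or (4, 0), depending on the eigenvector's sign
```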
3 DEEP COVERING OPTIONS . We propose deep covering options , a new algorithm that finds options that speed up exploration in domains with large ( or continuous ) state-spaces . It directly seeks to optimize an objective for exploration . If the objective function is optimized , the options generated by the algorithm greedily maximize the algebraic connectivity of the underlying state-space graph to a first-order approximation ( Ghosh and Boyd , 2006 ) , which in turn minimizes the upper bound on the expected cover time . Deep covering options consists of four steps ( see Algorithm 1 ) :

1. compute an eigenfunction of the Laplacian of the state-space graph approximately ( line 2 in Algorithm 1 ) ,
2. identify an under-explored region of the state-space using the eigenfunctions ( line 3 ) ,
3. set the under-explored region as the termination set and its complement as the initiation set ( lines 4 and 5 ) ,
4. train the policy of the option using the pseudo-reward induced by the eigenfunctions ( line 6 ) .

Algorithm 1 : Deep covering options
1 : Input : a set of state-transitions H , a percentile 0 ≤ k ≤ 100
2 : Compute f by minimizing G̃ ( f ) using H ( Equation 5 )
3 : β′ ← k-th percentile value of f in H
4 : βo ( s ) ← 1 if f ( s ) < β′ , and 0 otherwise
5 : Io ← { s | βo ( s ) = 0 , s ∈ S }
6 : Train πo off-policy by maximizing the total accumulated pseudo-rewards ro = f ( s ) − f ( s′ ) , using H and f
7 : Return ( Io , πo , βo )

There are two problems in Equation 2 that prevent its applicability to non-tabular domains . First , the equation requires the adjacency matrix A as input . Second , a constrained optimization problem is hard to solve using gradient-based methods . We address these issues by approximating the computation of the Laplacian with the following objective ( Wu et al. , 2019 , Equation 6 ) :

$$ \tilde{G}(f_1 , f_2 , \dots , f_d) = \frac{1}{2} \, \mathbb{E}_{(s , s') \sim H} \Bigg[ \sum_{k=1}^{d} \big( f_k(s) - f_k(s') \big)^2 \Bigg] + \eta \, \mathbb{E}_{s \sim \rho , \, s' \sim \rho} \Bigg[ \sum_{j , k} \big( f_j(s) f_k(s) - \delta_{jk} \big) \big( f_j(s') f_k(s') - \delta_{jk} \big) \Bigg] , \qquad (3) $$

where H is the set of sampled state-transitions , ρ is the distribution of states in the dataset ( ρ ( s ) is the number of occurrences of s in H divided by the size of H ) , η is the Lagrange multiplier , and δjk is the Kronecker delta , equal to 1 if j = k and 0 otherwise . Such an expression , inspired by spectral graph drawing theory , uses the repulsive term ( the summation multiplied by η ) to ensure the functions f1 , ... , fd are orthogonal to each other . Unlike G , G̃ is a constraint-free objective for computing the eigenfunction , requiring only trajectories instead of the state-space graph . As we only require the second eigenfunction ( unlike eigenoptions ) , we can simplify the objective function to take only two arguments :

$$ \tilde{G}(f_1 , f_2) = G(f_1 , f_2) + \eta \, \mathbb{E}_{s \sim \rho , \, s' \sim \rho} \Bigg[ \sum_{j , k} \big( f_j(s) f_k(s) - \delta_{jk} \big) \big( f_j(s') f_k(s') - \delta_{jk} \big) \Bigg] . \qquad (4) $$

Assume G ( f1 ) ≤ G ( f2 ) without loss of generality . G ( f1 ) = 0 and f1 is a constant function , because the first eigenvalue of the Laplacian matrix is zero . To simplify the equation , we assume f1 = 1 without loss of generality . Then :

$$ \tilde{G}(f) = \tilde{G}(\mathbf{1} , f) = \frac{1}{2} \, \mathbb{E}_{(s , s') \sim H} \Big[ \big( f(s) - f(s') \big)^2 \Big] + \eta \, \mathbb{E}_{s \sim \rho , \, s' \sim \rho} \Big[ \big( f(s)^2 - 1 \big) \big( f(s')^2 - 1 \big) + f(s)^2 f(s')^2 \Big] . \qquad (5) $$

Deep covering options computes the second eigenfunction f by minimizing G̃ ( f ) instead of G ( f ) ( see Algorithm 1 ) . Our objective function needs only sampled state-transitions H instead of a complete state-space graph , and since it is an unconstrained optimization problem , it can be optimized by simple gradient-based methods . The objective function is essentially the same as that of covering options , which has a theoretical guarantee on the expected cover time , but it is computed approximately so that it scales to large or infinite state-space domains .

Table 1 : The effect of the size of the termination set ( percentile k ) on the performance of deep covering options in Pinball with 3 options . Reward is averaged over 100 episodes and 5 runs .
k | Reward
5 | 2.5311 ± 1.71
10 | 3.1873 ± 3.33
30 | 3.2210 ± 4.54
50 | 3.0748 ± 3.28
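Equation 5 translates directly into a loss that can be minimized with stochastic gradients . The sketch below is our own rendering for a PyTorch network f_net that maps a batch of states to scalars ; the batch construction and the hyperparameter value are assumptions , not the authors ’ released code .

```python
import torch

def g_tilde_loss(f_net, s, s_next, s_rho_a, s_rho_b, eta=1.0):
    """Monte Carlo estimate of Equation 5 for a single eigenfunction f.
    (s, s_next): a batch of transitions sampled from H.
    s_rho_a, s_rho_b: two independent state batches sampled from rho."""
    f_s = f_net(s).squeeze(-1)
    f_sn = f_net(s_next).squeeze(-1)
    attractive = 0.5 * ((f_s - f_sn) ** 2).mean()      # smoothness over transitions
    fa = f_net(s_rho_a).squeeze(-1)
    fb = f_net(s_rho_b).squeeze(-1)
    repulsive = ((fa**2 - 1) * (fb**2 - 1) + fa**2 * fb**2).mean()
    return attractive + eta * repulsive
```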
While covering options is constrained to options whose initiation set consists of a single state , we set the termination set to be the set of states whose f value is smaller than the k-th percentile of f . As proposed by Machado et al . ( 2017 ; 2018 ) , we define the initiation set to be the complement of the termination set . We train the option policy off-policy , maximizing the total pseudo-reward ro = f ( s ) − f ( s′ ) , so that it learns to reach the termination set ( i.e. , the set of states with f ( s ) < β′ ) .
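Given a trained eigenfunction f and the transition buffer H , lines 3–6 of Algorithm 1 reduce to a few array operations . The numpy sketch below uses our own naming and a buffer of index pairs ; it illustrates the construction and is not the authors ’ code .

```python
import numpy as np

def option_components(f_values, transitions, k=10):
    """f_values: f(s) evaluated on every state stored in H.
    transitions: (s_idx, s_next_idx) index pairs into f_values.
    Returns the threshold beta', a termination test, and per-transition pseudo-rewards."""
    beta_prime = np.percentile(f_values, k)            # line 3: k-th percentile of f in H
    def terminates(i):
        return f_values[i] < beta_prime                # line 4: beta_o(s) = 1 iff f(s) < beta'
    # line 6: pseudo-reward r_o = f(s) - f(s'), positive when moving toward low-f states
    rewards = np.array([f_values[s] - f_values[sn] for s, sn in transitions])
    return beta_prime, terminates, rewards
```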
The paper proposes an algorithm to extend the recently proposed method of “covering options” from the tabular setting to continuous state spaces (or large discrete state spaces). The proposed algorithm approximately computes the second eigenfunction of the normalized Laplacian of the state space, uses it to identify an under-explored region, and trains an option to terminate in that region. Each newly learnt option is added to the initial set of primitive actions, and a policy over this growing action set is learnt separately. An online variant is also proposed that performs this option-learning process intermittently while training on an external task. The paper shows empirical evidence of performance better than or equal to base algorithms that do not discover options and to prior work such as DIAYN (Eysenbach et al., 2019), which discovers options via mutual information maximization between visited states and options, as well as ablations of the proposed method with different numbers of options.
SP:461e76527806e28b022e6c4ed7872e6d8d7a3697
The authors introduce deep covering options, an online mechanism that extends covering options to large state spaces. They claim their method discovers options that are task-agnostic. The method is evaluated in sparse-reward domains and is claimed to improve both exploration and task performance. The authors leverage recent developments in eigenfunction estimation of the Laplacian to extend a principled approach for option discovery to the non-linear function approximation setting.
SP:461e76527806e28b022e6c4ed7872e6d8d7a3697
Variational pSOM: Deep Probabilistic Clustering with Self-Organizing Maps
Generating visualizations and interpretations from high-dimensional data is a common problem in many fields . Two key approaches for tackling this problem are clustering and representation learning . There are very performant deep clustering models on the one hand and interpretable representation learning techniques , often relying on latent topological structures such as self-organizing maps , on the other hand . However , current methods do not yet successfully combine these two approaches . We present a new deep architecture for probabilistic clustering , VarPSOM , and its extension to time series data , VarTPSOM . We show that they achieve superior clustering performance compared to current deep clustering methods on static MNIST/Fashion-MNIST data as well as medical time series , while inducing an interpretable representation . Moreover , on the medical time series , VarTPSOM successfully predicts future trajectories in the original data space . 1 INTRODUCTION . Information visualization techniques are essential in areas where humans have to make decisions based on large amounts of complex data . Their goal is to find an interpretable representation of the data that allows the integration of humans into the data exploration process . This encourages visual discoveries of relationships in the data and provides guidance to downstream tasks . In this way , a much higher degree of confidence in the findings of the exploration is attained ( Keim , 2002 ) . An interpretable representation of the data , in which the underlying factors are easily visualized , is particularly important in domains where the reason for obtaining a certain prediction is as valuable as the prediction itself . However , finding a meaningful and interpretable representation of complex data can be challenging . Clustering is one of the most natural ways of retrieving interpretable information from raw data . Long-established methods such as k-means ( MacQueen , 1967 ) and Gaussian Mixture Models ( Bishop , 2006 ) represent the cornerstone of cluster analysis . Their applicability , however , is often constrained to simple data , and their performance is limited on high-dimensional , complex , real-world data sets , which do not exhibit a clustering-friendly structure . Deep generative models have recently achieved tremendous success in representation learning . Some of the most commonly used and efficient approaches are Autoencoders ( AEs ) , Variational Autoencoders ( VAEs ) and Generative Adversarial Networks ( GANs ) ( Kingma & Welling , 2013 ; Goodfellow et al. , 2014 ) . The compressed latent representation generated by these models has been proven to ease the clustering process ( Aljalbout et al. , 2018 ) . As a result , combining deep generative models for feature extraction with clustering yields a dramatic increase in clustering performance ( Xie et al. , 2015 ) . Although very successful , most of these methods do not investigate the relationships among clusters , and the clustered feature points live in a high-dimensional latent space that cannot be easily visualized or interpreted by humans . The Self-Organizing Map ( SOM ) ( Kohonen , 1990 ) is a clustering method that provides such an interpretable representation .
It produces a low-dimensional ( typically 2-dimensional ) , discretized representation of the input space by inducing a flexible neighbourhood structure over the clusters . Alas , its applicability is often constrained to simple data sets , similar to other classical clustering methods . To resolve the above issues , we propose a novel deep architecture , the Variational Probabilistic SOM ( VarPSOM ) , that jointly trains a VAE and a SOM to achieve an interpretable discrete representation while exhibiting state-of-the-art clustering performance . Instead of hard assignment of data points to clusters , our model uses a centroid-based probability distribution . It minimizes its Kullback-Leibler divergence against an auxiliary target distribution , while enforcing a SOM-friendly space . To highlight the importance of an interpretable representation for different purposes , we extended this model to deal with temporal data , yielding VarTPSOM . We discuss related work in Section 2 . Extensive evidence of the superior clustering performance of both models , on MNIST/Fashion-MNIST images as well as real-world medical time series , is presented in Section 4 . Our main contributions are :
• A novel architecture for deep clustering , yielding an interpretable discrete representation through the use of a probabilistic self-organizing map .
• An extension of this architecture to time series , improving clustering performance on this data type and enabling temporal predictions .
• A thorough empirical assessment of our proposed models , showing superior performance on benchmark tasks and challenging medical time series from the intensive care unit .
2 RELATED WORK . Self-Organizing Maps have been widely used as a means to visualize information from large amounts of data ( Tirunagari et al. , 2014 ) and as a form of clustering in which the centroids are connected by a topological neighborhood structure ( Flexer , 1999 ) . Since their early inception , several variants have been proposed to enhance their performance and scope . The adaptive subspace SOM , ASSOM ( Kohonen , 1995 ) , for example , proposed to combine PCA and SOMs to map data into a reduced feature space . Tokunaga & Furukawa ( 2009 ) combine SOMs with multi-layer perceptrons to obtain a modular network . Liu et al . ( 2015 ) proposed Deep SOM ( DSOM ) , an architecture composed of multiple layers , similar to Deep Neural Networks . There exist several methods tailored to representation learning on time series , among them ( Franceschi et al. , 2019 ; Fortuin & Rätsch , 2019 ; Fortuin et al. , 2019 ) , which are however not based on SOMs . Extensions of SOM optimized for temporal data include the Temporal Kohonen map ( Chappell & Taylor , 1993 ) and its improved version Recurrent SOM ( McQueen et al. , 2004 ) , as well as Recursive SOM ( Voegtlin , 2002 ) . While SOM and its variants are particularly effective for data visualization ( Liu et al. , 2015 ) , it was rarely attempted to combine their merits in this respect with modern state-of-the-art clustering methods , which often use deep generative models in combination with probabilistic clustering . In particular , recent works on cluster analysis have shown that combining clustering algorithms with the latent space of AEs greatly increases the clustering performance ( Aljalbout et al. , 2018 ) .
Xie et al . ( 2015 ) proposed DEC , a method that sequentially applies embedding learning using Stacked Autoencoders ( SAE ) and the Clustering Assignment Hardening method on the obtained representation . An improvement of this architecture , IDEC ( Guo et al. , 2017 ) , includes the decoder network of the SAE in the learning process , so that training is affected by both the clustering loss and the reconstruction loss . Similarly , DCN ( Yang et al. , 2016 ) combines a k-means clustering loss with the reconstruction loss of an SAE to obtain an end-to-end architecture that jointly trains representations and clustering . These models achieve state-of-the-art clustering performance , but they do not investigate the relationships among clusters . An exception is the work by Li et al . ( 2018 ) , which presents an unsupervised method that learns latent embeddings and discovers a multi-facet clustering structure . Relationships among clusters are discovered ; however , the method does not provide a latent space that can be easily interpreted and that eases the process of analytical reasoning . While there exist previous efforts to endow VAEs with a hierarchical latent space ( Vikram et al. , 2018 ; Goyal et al. , 2017 ) , to the best of our knowledge , only two models have used deep generative models in combination with a SOM structure in the latent space . The SOM-VAE model ( Fortuin et al. , 2018 ) , inspired by the VQ-VAE architecture ( van den Oord et al. , 2017 ) ( which itself was later extended in Razavi et al. , 2019 ) , uses an AE to embed the input data points into a latent space and then applies a SOM-based clustering loss on top of this latent representation . It features hard assignments of points to centroids , as well as the use of a Markov model for temporal data , both of which yield inferior expressivity compared to our method . The Deep Embedded SOM , DESOM ( Forest et al. , 2019 ) , improved the previous model by using a Gaussian neighborhood window with exponential radius decay and by learning the SOM structure in a continuous setting . Both methods feature a topologically interpretable neighborhood structure and yield promising results in visualizing state spaces . However , those works did not feature empirical comparisons to state-of-the-art deep clustering techniques and did not make use of many of the design principles that have recently proven successful in this space . 3 PROBABILISTIC CLUSTERING WITH VARIATIONAL PSOM . Given a set of data samples { xi } , i = 1 , ... , n , where xi ∈ Rd , the goal is to partition the data into a set of clusters { Si } , i = 1 , ... , K , while retaining a topological structure over the cluster centroids . The proposed architecture for static data is presented in Figure 1a . The input vector xi is embedded into a latent representation zi using a VAE . This latent vector is then clustered using PSOM , a new SOM clustering strategy that extends the Clustering Assignment Hardening method ( Xie et al. , 2015 ) . The VAE and the PSOM are trained jointly to learn a latent representation that boosts the clustering performance . To prevent the network from outputting a trivial solution , the decoder network reconstructs the input from the latent embedding , encouraging it to be as similar as possible to the original input . The resulting loss function is a linear combination of the clustering loss and the reconstruction loss .
To deal with temporal data , we propose another model variant , which is depicted in Figure 1b . 3.1 BACKGROUND . A Self-Organizing Map is comprised of K nodes connected to form a grid M ⊆ N² , where the node mi,j , at position ( i , j ) of the grid , corresponds to a centroid vector µi,j in the input space . The centroids are tied by a neighborhood relation N ( µi,j ) = { µi−1,j , µi+1,j , µi,j−1 , µi,j+1 } . Given a random initialization of the centroids , the SOM algorithm randomly selects an input xi and updates both its closest centroid µi,j and its neighbors N ( µi,j ) to move them closer to xi . For a complete description of the SOM algorithm , we refer to the appendix ( A ) . The Clustering Assignment Hardening method was recently introduced by the DEC model ( Xie et al. , 2015 ) and was shown to perform well in the latent space of AEs ( Aljalbout et al. , 2018 ) . Given an embedding function zi = f ( xi ) , it uses a Student ’ s t-distribution ( S ) as a kernel to measure the similarity between an embedded data point zi and a centroid µj :

$$ s_{ij} = \frac{\big( 1 + \lVert z_i - \mu_j \rVert^2 / \alpha \big)^{-\frac{\alpha+1}{2}}}{\sum_{j'} \big( 1 + \lVert z_i - \mu_{j'} \rVert^2 / \alpha \big)^{-\frac{\alpha+1}{2}}} . $$

It improves the cluster purity by enforcing the distribution S to approach a target distribution T :

$$ t_{ij} = \frac{ s_{ij}^{\gamma} / \sum_{i'} s_{i'j} }{ \sum_{j'} s_{ij'}^{\gamma} / \sum_{i'} s_{i'j'} } . $$

By taking the original distribution to the power of γ and normalizing it , the target distribution puts more emphasis on data points that are assigned with high confidence . We follow ( Xie et al. , 2015 ) in choosing γ = 2 , which leads to larger gradient contributions of points close to cluster centers , as they show empirically . The resulting clustering loss is defined as :

$$ L = \mathrm{KL}(T \,\|\, S) = \sum_i \sum_j t_{ij} \log \frac{t_{ij}}{s_{ij}} . \qquad (1) $$

3.2 PROBABILISTIC SOM ( PSOM ) CLUSTERING . Our proposed clustering method , called PSOM , expands Clustering Assignment Hardening to include a SOM neighborhood structure over the centroids . We add an additional loss to ( 1 ) to achieve an interpretable representation . This loss term maximizes the similarity between each data point and the neighbors of its closest centroids . For each embedded data point zi and each centroid µj , the loss is defined as the negative sum , over all neighbors { e : µe ∈ N ( µj ) } , of the probability sie that zi is assigned to e . This sum is weighted by the similarity sij between zi and the centroid µj :

$$ L_{\mathrm{SOM}} = -\frac{1}{N} \sum_i \sum_j s_{ij} \sum_{e \, : \, \mu_e \in N(\mu_j)} s_{ie} . $$

The complete PSOM clustering loss is then :

$$ L_{\mathrm{PSOM}} = \mathrm{KL}(T \,\|\, S) + \beta \, L_{\mathrm{SOM}} . $$

We note that for β = 0 it becomes equivalent to Clustering Assignment Hardening .
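The soft assignments , the target distribution , and the neighborhood term above combine into a loss that is short to write down . The following PyTorch sketch is ours ( the tensor shapes , the 0/1 neighborhood-matrix encoding , and the per-sample averaging are assumptions ) ; the full VarPSOM objective additionally adds the VAE loss introduced in the next subsection .

```python
import torch

def psom_loss(z, mu, neighbors, alpha=10.0, gamma=2.0, beta=1.0):
    """PSOM loss: Clustering Assignment Hardening KL(T||S) plus the SOM term.
    z: (N, l) embeddings; mu: (K, l) centroids;
    neighbors: (K, K) float 0/1 matrix, neighbors[j, e] = 1 iff mu_e in N(mu_j)."""
    d2 = torch.cdist(z, mu) ** 2                                  # (N, K) squared distances
    s = (1 + d2 / alpha) ** (-(alpha + 1) / 2)
    s = s / s.sum(dim=1, keepdim=True)                            # soft assignments s_ij
    t = s ** gamma / s.sum(dim=0, keepdim=True)
    t = (t / t.sum(dim=1, keepdim=True)).detach()                 # fixed target t_ij
    kl = (t * (t.clamp_min(1e-12).log() - s.clamp_min(1e-12).log())).sum(dim=1).mean()
    # -1/N sum_i sum_j s_ij * sum_{e in N(mu_j)} s_ie
    som = -(s * (s @ neighbors.T)).sum(dim=1).mean()
    return kl + beta * som
```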
3.3 VARPSOM : VAE FOR FEATURE EXTRACTION . In our method , the nonlinear mapping between the input xi and the embedding zi is realized by a VAE . Instead of directly embedding the input xi into a latent embedding zi , the VAE learns a probability distribution qφ ( z | xi ) , parametrized as a multivariate normal distribution whose mean and variance are ( µφ , Σφ ) = fφ ( xi ) . Similarly , it also learns the probability distribution of the reconstructed output given a sampled latent embedding , pθ ( xi | z ) , where ( µθ , Σθ ) = fθ ( zi ) . Both fφ and fθ are neural networks , respectively called encoder and decoder . The ELBO loss is :

$$ L_{\mathrm{ELBO}} = \sum_i \Big[ - \mathbb{E}_{z} \big( \log p_\theta ( x_i \mid z ) \big) + D_{\mathrm{KL}} \big( q_\phi ( z \mid x_i ) \,\|\, p ( z ) \big) \Big] , \qquad (2) $$

where p ( z ) is an isotropic Gaussian prior over the latent embeddings . The second term can be interpreted as a form of regularization , which encourages the latent space to be compact . For each data point xi , the latent embedding zi is sampled from qφ ( z | xi ) . Adding the ELBO loss to the PSOM loss from the previous subsection , we get the overall loss function of VarPSOM :

$$ L_{\mathrm{VarPSOM}} = L_{\mathrm{PSOM}} + L_{\mathrm{ELBO}} . \qquad (3) $$

To the best of our knowledge , no previous SOM method has attempted to use a VAE to embed the inputs into a latent space . There are many advantages of a VAE over an AE for realizing our goals . Its prior on the latent space encourages structured and disentangled factors ( Higgins et al. , 2016 ) , which could help clustering . The suitability of VAEs for anomaly detection ( An & Cho , 2015 ) means that points with a higher variance in the latent space could be treated as less accurate and trustworthy . The regularization term of the VAE can be used to prevent the network from scattering the embedded points discontinuously in the latent space , which naturally facilitates the fitting of the SOM . To test whether the use of CNNs can boost clustering performance on image data , we introduce another model variant , called VarCPSOM , which uses convolutional filters as part of the VAE . 3.4 VARTPSOM : EXTENSION TO TIME SERIES DATA . To extend our proposed model to time series data , we add a temporal component to the architecture . Given a set of N time series of length T , { xt,i } , t = 1 , ... , T ; i = 1 , ... , N , the goal is to learn interpretable trajectories on the SOM grid . VarPSOM could be used directly , but it would treat each time step t of a time series independently , which is undesirable . To exploit temporal information and enforce smoothness in the trajectories , we add an additional loss to ( 3 ) :

$$ L_{\mathrm{smooth}} = -\frac{1}{NT} \sum_i \sum_t u_{t , t+1}^{i} , \qquad (4) $$

where $u_{t , t+1}^{i} = g ( z_{i,t} , z_{i,t+1} )$ is the similarity between zi,t and zi,t+1 , computed using a Student ’ s t-distribution , and zi,t refers to the embedding of time series xi at time index t . It maximizes the similarity between latent embeddings of adjacent time steps , such that large jumps in the latent state between time points are discouraged . One of the main goals in time series modeling is to predict future data points or , alternatively , future embeddings . This can be achieved by adding a long short-term memory network ( LSTM ) across the latent embeddings of the time series , as shown in Figure 1b . Each cell of the LSTM takes as input the latent embedding zt at time step t and predicts a probability distribution over the next latent embedding , pω ( zt+1 | zt ) . We parametrize this distribution as a multivariate normal distribution whose mean and variance are learnt by the LSTM . The prediction loss is the negative log-likelihood of the next embedding zt+1 under the learned distribution :

$$ L_{\mathrm{pred}} = -\sum_i \sum_t \log p_\omega ( z_{t+1} \mid z_t ) . \qquad (5) $$

The final loss of VarTPSOM , which is trainable in a fully end-to-end fashion , is

$$ L_{\mathrm{VarTPSOM}} = L_{\mathrm{VarPSOM}} + L_{\mathrm{smooth}} + \eta \, L_{\mathrm{pred}} . \qquad (6) $$

4 EXPERIMENTS . First , we evaluate VarPSOM and VarCPSOM and compare them with state-of-the-art non-interpretable as well as SOM-based clustering methods on MNIST ( Lecun et al. , 1998 ) and Fashion-MNIST ( Xiao et al. , 2017 ) data . Here , particular focus is laid on the comparison of VarPSOM with the clustering models DEC and IDEC , to investigate the role of the VAE and the SOM loss .
We then present visualizations of the obtained 2D representations , to illustrate how our method could ease visual reasoning about the data . Finally , we present extensive evidence of the performance of VarTPSOM on real-world complex time series from the eICU data set ( Pollard et al. , 2018 ) , and illustrate how it allows visualization of patient health-state trajectories in an easily understandable 2D domain . For details on the data sets , we refer to the appendix ( B.1 ) . Baselines . We use two different types of baselines . The first category contains clustering methods that do not provide any interpretable discrete latent representation . These include k-means , the DEC model , as well as its improved version IDEC , whose clustering methods are related to ours . We also include a modified version of IDEC that we call VarIDEC , in which we substitute the AE with a VAE , to investigate the role of the VAE . For all these methods we use 64 clusters . In the second category , we include state-of-the-art clustering methods based on SOMs . Here , we use a standard SOM ( minisom ) ; AE+SOM , an architecture composed of an AE and a SOM applied on top of the latent representation ( trained sequentially ) ; SOM-VAE ; and DESOM . Finally , we create a modified version of our model , called AEPSOM , in which we substitute the VAE with an AE ( similarly to VarIDEC ) . For all SOM-based methods we set the SOM grid size to ( 8 × 8 ) . For different grid configurations we refer to the appendix ( B.3 ) . Implementation . In implementing our models we focused on retaining a fair comparison with the baselines . Hence we decided to use a standard network structure , with fully connected layers of dimensions d − 500 − 500 − 2000 − l , to implement both the VAE of our models and the AE of the baselines . The latent dimension l is set to 100 for the VAE and to 10 for the AEs . Since the prior in the VAE enforces the latent embeddings to be compact , it also requires more dimensions to learn a meaningful latent space . On the other hand , providing the AE models with the higher-dimensional latent space needed for the VAE resulted in a dramatic decrease of performance ( see appendix B.2 ) . VarCPSOM is composed of 4 convolutional layers with feature maps [ 32 , 64 , 128 , 256 ] and kernel size 3 × 3 for all layers . For all architectures , no greedy layer-wise pretraining was used to tune the VAE . Instead , we simply run the VAE without the clustering loss for a few epochs for initialization . A standard SOM is then used to produce an initial configuration of the centroids and the neighbourhood relation . Finally , the entire architecture is trained for 100,000 iterations . To avoid fine-tuning hyperparameters , given the unsupervised setting , α is set to 10 for all experiments , while the other hyperparameters are chosen to maintain the same order of magnitude of the different loss components .
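The fully connected d − 500 − 500 − 2000 − l structure just described can be written down directly ; in the following minimal PyTorch sketch , the ReLU activations and the two Gaussian-parameter heads are our own choices where the text is silent .

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """d - 500 - 500 - 2000 - l encoder producing the Gaussian posterior parameters."""
    def __init__(self, d, latent_dim=100):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 2000), nn.ReLU(),
        )
        self.mu = nn.Linear(2000, latent_dim)       # mean of q_phi(z | x)
        self.log_var = nn.Linear(2000, latent_dim)  # log-variance of q_phi(z | x)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)
```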
Clustering Evaluation . Table 1 shows the clustering quality of VarPSOM and VarCPSOM on MNIST and Fashion-MNIST data , compared with the baselines . Purity and Normalized Mutual Information ( NMI ) are used as evaluation metrics . We observe that our proposed models outperform the baselines of both categories and achieve state-of-the-art clustering performance . VarPSOM vs. IDEC . VarPSOM is inspired by IDEC , but it has two major differences : it uses a VAE instead of an AE , and it improves interpretability in the latent space by adding a new loss that enforces a SOM structure . Since both VarIDEC and VarPSOM show superior clustering performance compared to IDEC and AEPSOM , respectively ( Table 1 ) , we conclude that the VAE indeed succeeds in capturing a more meaningful latent representation than a standard AE . Regarding the second difference , the SOM structure was expected to slightly decrease the clustering performance , due to a trade-off between interpretability and raw clustering performance . However , we do not observe this in our results . Adding the SOM loss rather leads to an increase in clustering performance . We suspect this is due to the regularization effect of the SOM ’ s topological structure . Overall , VarPSOM outperforms both DEC and IDEC . Improvement over Training . After obtaining the initial configuration of the SOM structure , the clustering and the feature extraction using the VAE are trained jointly . To illustrate that our architecture improves clustering performance over the initial configuration , we plot NMI and purity against the number of training iterations in Figure 2 . We observe that the performance is stable when increasing the number of epochs and no overfitting is visible . Role of the SOM loss . To investigate the influence of the SOM loss component , we plot the clustering performance of VarPSOM against the weight β of LSOM in Figure 3 , using the MNIST dataset . With β = 30 , the KL term ( responsible for improving clustering purity ) and the LSOM term ( responsible for enforcing a SOM structure over the centroids ) are almost equal . It is interesting to observe the different trends in NMI and purity : NMI increases for increasing values of β , while purity slightly decreases . Overall , enforcing a more interpretable latent space results in a more robust clustering model with higher NMI clustering performance . Time Series Evaluation . We evaluate the clustering performance of our proposed models on the eICU dataset , comprised of complex medical time series . We compare them against SOM-VAE , as this is the only method among the baselines that is suited for temporal data . Table 2 shows the cluster-cell enrichment in terms of NMI for three different labels : the current ( APACHE-0 ) and worst future ( APACHE-6/12 hours ) physiology scores . VarTPSOM clearly achieves superior clustering performance compared to SOM-VAE . This , we hypothesize , is due to the better feature extraction using a VAE , as well as the improved treatment of uncertainty in PSOM , which features soft assignments , whereas SOM-VAE contains a deterministic AE and hard assignments . Moreover , both the smoothness loss and the prediction loss seem to increase the clustering performance . More results on ICU time series are reported in the appendix ( B.4 ) . To quantify the performance of VarTPSOM in unrolling future trajectories , we predict the final 6 latent embeddings of each time series . For each predicted embedding we reconstruct the input using the decoder of the VAE . Finally , we measure the MSE between the original inputs and the reconstructed inputs for the last 6 hours of the ICU admission . As baselines , we use an LSTM that takes as input the first 66 hours of the time series and then predicts the next 6 hours .
Since most of the trajectories tend to stay in the same state over long periods of time , another strong baseline is obtained by duplicating the last seen embedding over the final 6 hours . The results ( Table 3 ) indicate that the joint training of clustering and prediction used by VarTPSOM clearly outperforms the two baselines .

Table 3 : Prediction MSE over the final 6 hours of the ICU admission .
Model | MSE
LSTM | 0.0386 ± 0.0049
SameState | 0.0576 ± 0.0012
VarTPSOM | 0.0297 ± 0.0009

Interpretability . To illustrate the topological structure in the latent space , we present reconstructions of the VarPSOM centroids , arranged in an ( 8 × 8 ) grid , on static MNIST/Fashion-MNIST data in Figure 4 . On the ICU time series data , we show example trajectories for one patient dying at the end of the ICU stay , as well as two control patients who are dispatched healthily from the ICU . We observe that the trajectories are located in different parts of the SOM grid and form a smooth and interpretable representation ( Figure 5 ) . For further results , including a more quantitative evaluation using randomly sampled trajectories , enrichment for future mortality , as well as an illustration of how the uncertainty generated by the soft assignments can help in data visualization , we refer to the appendix ( B.5 ) . 5 CONCLUSION . We presented two novel methods for interpretable unsupervised clustering , VarPSOM and VarTPSOM . Both models make use of a VAE and a novel clustering method , PSOM , that extends the classical SOM algorithm to include a centroid-based probability distribution . Our models achieve superior clustering performance compared to state-of-the-art deep clustering baselines on benchmark data sets and real-world medical time series . The use of a VAE for feature extraction instead of the AE used in previous methods , and the use of soft assignments of data points to clusters , result in an interpretable model that can quantify uncertainty in the data . REFERENCES .
Elie Aljalbout , Vladimir Golkov , Yawar Siddiqui , and Daniel Cremers . Clustering with deep learning : Taxonomy and new methods . CoRR , abs/1801.07648 , 2018 .
Jinwon An and Sungzoon Cho . Variational autoencoder based anomaly detection using reconstruction probability . Special Lecture on IE , 2 ( 1 ) , 2015 .
Christopher M. Bishop . Pattern Recognition and Machine Learning ( Information Science and Statistics ) . Springer-Verlag , Berlin , Heidelberg , 2006 .
Geoffrey J. Chappell and John G. Taylor . The temporal Kohonen map . Neural Networks , 6 ( 3 ) : 441–445 , March 1993 .
Arthur Flexer . On the use of self-organizing maps for clustering and visualization . In Jan M. Żytkow and Jan Rauch ( eds. ) , Principles of Data Mining and Knowledge Discovery , pp. 80–88 , Berlin , Heidelberg , 1999 . Springer .
Florent Forest , Mustapha Lebbah , Hanene Azzag , and Jérôme Lacaille . Deep embedded SOM : Joint representation learning and self-organization . April 2019 .
Vincent Fortuin and Gunnar Rätsch . Deep mean functions for meta-learning in Gaussian processes . arXiv preprint arXiv:1901.08098 , 2019 .
Vincent Fortuin , Matthias Hüser , Francesco Locatello , Heiko Strathmann , and Gunnar Rätsch . SOM-VAE : Interpretable discrete representation learning on time series . arXiv preprint arXiv:1806.02199 , 2018 .
Vincent Fortuin , Gunnar Rätsch , and Stephan Mandt . Multivariate time series imputation with variational autoencoders . arXiv preprint arXiv:1907.04155 , 2019 .
Jean-Yves Franceschi , Aymeric Dieuleveut , and Martin Jaggi . Unsupervised scalable representation learning for multivariate time series . arXiv preprint arXiv:1901.10738 , 2019 .
Ian J. Goodfellow , Jean Pouget-Abadie , Mehdi Mirza , Bing Xu , David Warde-Farley , Sherjil Ozair , Aaron Courville , and Yoshua Bengio . Generative adversarial networks . arXiv preprint arXiv:1406.2661 , 2014 .
Prasoon Goyal , Zhiting Hu , Xiaodan Liang , Chenyu Wang , and Eric P. Xing . Nonparametric variational auto-encoders for hierarchical representation learning . In Proceedings of the IEEE International Conference on Computer Vision , pp. 5094–5102 , 2017 .
Xifeng Guo , Long Gao , Xinwang Liu , and Jianping Yin . Improved deep embedded clustering with local structure preservation . In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence , IJCAI-17 , pp. 1753–1759 , 2017 .
Irina Higgins , Loic Matthey , Xavier Glorot , Arka Pal , Benigno Uria , Charles Blundell , Shakir Mohamed , and Alexander Lerchner . Early visual concept learning with unsupervised deep learning . arXiv preprint arXiv:1606.05579 , 2016 .
D. A. Keim . Information visualization and visual data mining . IEEE Transactions on Visualization and Computer Graphics , 8 ( 1 ) : 1–8 , January 2002 .
Diederik P. Kingma and Max Welling . Auto-encoding variational Bayes . arXiv preprint arXiv:1312.6114 , 2013 .
T. Kohonen . The self-organizing map . Proceedings of the IEEE , 78 ( 9 ) : 1464–1480 , September 1990 .
Teuvo Kohonen . The adaptive-subspace SOM ( ASSOM ) and its use for the implementation of invariant feature detection . 1995 .
Y. Lecun , L. Bottou , Y. Bengio , and P. Haffner . Gradient-based learning applied to document recognition . Proceedings of the IEEE , 86 ( 11 ) : 2278–2324 , November 1998 .
Xiaopeng Li , Zhourong Chen , and Nevin L. Zhang . Latent tree variational autoencoder for joint representation learning and multidimensional clustering . CoRR , abs/1803.05206 , 2018 .
N. Liu , J. Wang , and Y. Gong . Deep self-organizing map for visual classification . In 2015 International Joint Conference on Neural Networks ( IJCNN ) , pp. 1–6 , July 2015 .
J. MacQueen . Some methods for classification and analysis of multivariate observations . In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability , Volume 1 : Statistics , pp. 281–297 , Berkeley , Calif. , 1967 . University of California Press .
T. A. McQueen , A. A. Hopgood , J. A. Tepper , and T. J. Allen . A recurrent self-organizing map for temporal sequence processing . In Ahamad Lotfi and Jonathan M. Garibaldi ( eds. ) , Applications and Science in Soft Computing , pp. 3–8 , Berlin , Heidelberg , 2004 . Springer .
Springer Berlin Heidelberg. ISBN 978-3-540-45240-9.

Tom J. Pollard, Alistair E. W. Johnson, Jesse D. Raffa, Leo A. Celi, Roger G. Mark, and Omar Badawi. The eICU collaborative research database, a freely available multi-center database for critical care research. Scientific Data, 5, 2018.

Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. arXiv preprint arXiv:1906.00446, 2019.

S. Tirunagari, N. Poh, K. Aliabadi, D. Windridge, and D. Cooke. Patient level analytics using self-organising maps: A case study on type-1 diabetes self-care survey responses. In 2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM), pp. 304–309, December 2014. doi: 10.1109/CIDM.2014.7008682.

Kazuhiro Tokunaga and Tetsuo Furukawa. Modular network SOM. Neural Networks, 22(1):82–90, 2009. ISSN 0893-6080. doi: 10.1016/j.neunet.2008.10.006. URL http://www.sciencedirect.com/science/article/pii/S0893608008002335.

Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. CoRR, abs/1711.00937, 2017. URL http://arxiv.org/abs/1711.00937.

Sharad Vikram, Matthew D. Hoffman, and Matthew J. Johnson. The LORACs prior for VAEs: Letting the trees speak for the data. arXiv preprint arXiv:1810.06891, 2018.

Thomas Voegtlin. Recursive self-organizing maps. Neural Networks, 15(8):979–991, 2002. ISSN 0893-6080. doi: 10.1016/S0893-6080(02)00072-2. URL http://www.sciencedirect.com/science/article/pii/S0893608002000722.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. CoRR, abs/1708.07747, 2017. URL http://arxiv.org/abs/1708.07747.

Junyuan Xie, Ross B. Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. CoRR, abs/1511.06335, 2015. URL http://arxiv.org/abs/1511.06335.

Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, and Mingyi Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. CoRR, abs/1610.04794, 2016. URL http://arxiv.org/abs/1610.04794.

APPENDIX

A SELF-ORGANIZING MAPS

Among various existing interpretable unsupervised learning algorithms, Kohonen's self-organizing map (SOM) (Kohonen, 1990) is one of the most popular models. It is comprised of K neurons connected to form a discrete topological structure. The data are projected onto this topographic map, which locally approximates the data manifold. Usually it is a finite two-dimensional region where neurons are arranged in a regular hexagonal or rectangular grid. Here we use a grid, M ⊆ N², because of its simplicity and its visualization properties. Each neuron m_{i,j}, at position (i, j) of the grid, for i, j = 1, ..., √K, corresponds to a centroid vector µ_{i,j} in the input space. The centroids are tied by a neighborhood relation, here defined as N(µ_{i,j}) = {µ_{i−1,j}, µ_{i+1,j}, µ_{i,j−1}, µ_{i,j+1}}. Given a random initialization of the centroids, the SOM algorithm randomly selects an input x_i and updates both its closest centroid µ_{i,j} and its neighbors N(µ_{i,j}) to move them closer to x_i. Algorithm 1 then iterates these steps until convergence.

Algorithm 1 Self-Organizing Maps
Require: 0 < α(t) < 1; lim_{t→∞} Σ α(t) → ∞; lim_{t→∞} Σ α²(t) < ∞
repeat
  At each time t, present an input x(t) and select the winner,
    ν(t) = argmin_{k∈Ω} ‖x(t) − w_k(t)‖
  Update the weights of the winner and its neighbours,
    Δw_k(t) = α(t) η(ν, k, t) [x(t) − w_ν(t)]
until the map converges
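To make Algorithm 1 concrete, the following is a minimal NumPy sketch of the update loop. The Gaussian neighborhood window η and the linearly decaying schedules for α(t) and the radius are common choices that the pseudocode above leaves open, so treat them as illustrative assumptions rather than the exact configuration used in this work.

```python
import numpy as np

def train_som(X, grid=(8, 8), n_iter=10_000, a0=0.5, seed=0):
    """Classical SOM (Algorithm 1) on data X of shape (n_samples, d)."""
    rng = np.random.default_rng(seed)
    K, d = grid[0] * grid[1], X.shape[1]
    W = rng.normal(size=(K, d))                         # one centroid per grid node
    rows, cols = np.divmod(np.arange(K), grid[1])       # 2D coordinates of the nodes
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                     # present an input x(t)
        winner = np.argmin(((W - x) ** 2).sum(axis=1))  # nu(t) = argmin_k ||x(t) - w_k(t)||
        # Gaussian neighborhood eta(nu, k, t) with a shrinking radius sigma(t)
        d2 = (rows - rows[winner]) ** 2 + (cols - cols[winner]) ** 2
        sigma = max(grid[0] / 2 * (1 - t / n_iter), 0.5)
        eta = np.exp(-d2 / (2 * sigma ** 2))
        alpha = a0 * (1 - t / n_iter)                   # decaying learning rate alpha(t)
        W += alpha * eta[:, None] * (x - W)             # Delta w_k(t)
    return W.reshape(grid[0], grid[1], d)
```

Note that the update is applied to all nodes at once, with η concentrating it on the winner and its neighbors, which is equivalent to the per-node update in Algorithm 1.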
The range of SOM applications includes high dimensional data visualization, clustering, image and video processing, density or spectrum profile modeling, text/document mining, management systems, and gene expression data analysis.

B EXPERIMENTAL AND IMPLEMENTATION DETAILS

B.1 DATASETS

• MNIST: It consists of 70,000 handwritten digits of 28-by-28 pixel size. Digits range from 0 to 9, yielding 10 patterns in total. The digits have been size-normalized and centered in a fixed-size image (Lecun et al., 1998).
• Fashion-MNIST: A dataset of Zalando's article images consisting of a training set of 60,000 examples and a test set of 10,000 examples (Xiao et al., 2017). Each example is a 28×28 grayscale image, associated with a label from 10 classes.
• eICU: For temporal data we use vital sign/lab measurements of intensive care unit (ICU) patients, resampled to a 1-hour based grid using forward filling, and filling with population statistics from the training set if no measurements were available (a minimal sketch of this preprocessing follows the list below). From all ICU stays, we excluded stays which were shorter than 1 day, longer than 30 days, or which had at least one gap in the continuous vital sign monitoring, which we define as an interval between 2 HR measurements of at least 1 hour. This yielded N = 10,559 ICU stays from the eICU database. d_vitals = 14 vital sign variables and d_lab = 84 lab measurement variables were included, giving an overall data dimension of d = 98. The last 72 hours of these multivariate time series were used for the experiments. As labels, we use a variant of the current dynamic APACHE physiology score (APACHE-0), the worst APACHE score in the next 6 and 12 hours (APACHE-6/12), and the mortality in the next 24 hours. Only those variables from the APACHE score definition which are recorded in the eICU database were taken into account.

Each dataset is divided into training, validation and test sets for both our models and the baselines.
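As a minimal pandas sketch of the gridding and imputation steps described for eICU above (assuming one DataFrame per ICU stay indexed by measurement timestamps; the `train_means` series of per-variable training-set means is an illustrative placeholder, not the actual eICU schema):

```python
import pandas as pd

def to_hourly_grid(stay: pd.DataFrame, train_means: pd.Series) -> pd.DataFrame:
    """stay: one ICU stay with a DatetimeIndex and one column per variable."""
    hourly = stay.resample("1H").mean()   # 1-hour based grid
    hourly = hourly.ffill()               # forward filling within the stay
    hourly = hourly.fillna(train_means)   # population statistics from the training set
    return hourly.iloc[-72:]              # keep the last 72 hours
```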
B.2 LATENT SPACE DIMENSION

We evaluate the DEC model for different latent space dimensions. Table S1 shows that the AE used in the DEC model performs better when a lower-dimensional latent space is used.

Table S1: Mean/standard error of NMI and purity of the DEC model on the MNIST test set, across 10 runs with different random model initializations. We use 64 clusters and different latent space dimensions.

Latent dimension   Purity          NMI
l = 10             0.950 ± 0.001   0.681 ± 0.001
l = 100            0.750 ± 0.001   0.573 ± 0.001

B.3 NUMBER OF CLUSTERS

We evaluate the NMI and purity clustering performance of our model, VarPSOM, with a varying number of clusters on the MNIST dataset. Since IDEC represents the main competitor, we include it in this analysis. Figure S1 shows that VarPSOM outperforms IDEC for all the different configurations. In particular, it is interesting to observe that NMI decreases with an increasing number of clusters in both models. This is because the entropy of the clustering increases with the number of clusters.

Figure S1: NMI (left) and purity (right) clustering performance of VarPSOM and IDEC with varying number of clusters on the MNIST test set.

B.4 LEARNING HEALTH STATE REPRESENTATIONS IN THE ICU

By enforcing a SOM structure, VarPSOM, as well as VarTPSOM, projects the cluster centroids onto a discrete 2D grid. Such a grid is particularly suited for visualization purposes, and relations between centroids become immediately intuitive. In Fig. S2, heat-maps (colored according to enrichment in the current APACHE score, as well as mortality risk in the next 24 hours) show compact enrichment structures. VarTPSOM succeeds in creating a meaningful and smooth neighbourhood structure. It distinguishes risk profiles with practically zero mortality risk from those with high mortality risk, reaching up to ≈15%, in different regions of the map, even though it is learned in a purely unsupervised fashion. Remarkably, the two heat-maps (S2b and S2a) show different enrichment patterns. Clusters which are enriched in health states with higher APACHE scores often do not correspond exactly to clusters with a higher mortality risk. This suggests that traditional representations of physiologic values, such as the APACHE score, fail to fully use all complex multivariate relationships present in the ICU recordings, and are not associated with dynamic mortality in a simple way.

(a) Current APACHE score (b) Mortality risk in the next 24 hours
Figure S2: Heat-maps of enrichment in mortality risk in the next 24 hours, as well as the current dynamic APACHE score, superimposed on the discrete 2D grid learned by VarTPSOM.

B.5 VISUALIZING HEALTH STATE TRAJECTORIES IN THE ICU

To analyze the trend of the patient pathology, VarTPSOM induces trajectories on the 2D SOM grid which can be easily visualized. Fig. S3 shows 20 randomly sampled patient trajectories obtained by our model. Trajectories ending in the death of the patient are shown in red; patients discharged healthy are shown in green.

Figure S3: Randomly sampled VarTPSOM trajectories, from patients who expired at the end of the ICU stay, as well as patients discharged healthy. Superimposed is a heatmap which displays the cluster enrichment in the current APACHE score, from this model run. We observe that trajectories of dying patients often lie in different locations of the map than those of healthy patients, in particular in those regions enriched for high APACHE scores, which corresponds with clinical intuition.

One of the main advantages of VarTPSOM over the traditional SOM algorithm is the use of soft assignments of data points to clusters, which results in a better ability to quantify uncertainty in the data. For visualizing health states in the ICU, this property is very important. In Fig. S4 we plot an example patient trajectory, where 6 different time-steps (in temporal order) of the trajectory were chosen. Our model yields a soft centroid-based probability distribution which evolves with time and which allows estimation of likely discrete health states at a given point in time. For each time-step the distribution of probabilities is plotted using a heat-map, whereas the overall trajectory is plotted using a black line. The circle and cross indicate ICU admission and discharge, respectively.
Figure S4: Probabilities over discrete patient health states for 6 different time-steps of the selected time series.
This paper proposes VarPSOM, a method which utilizes variational autoencoders (VAEs) and clustering techniques based on self-organizing maps (SOMs) to learn clustering of image data (MNIST and Fashion MNIST in particular). An LSTM-based extension termed VarTPSOM is also evaluated on medical time series data. For the most part, the experimental results are promising, and the visualizations are particularly nice.
SP:faefbfe1f151c4b3e0db3ef30e3317f45dd82274
Variational pSOM: Deep Probabilistic Clustering with Self-Organizing Maps
Generating visualizations and interpretations from high-dimensional data is a common problem in many fields. Two key approaches for tackling this problem are clustering and representation learning. There are very performant deep clustering models on the one hand, and interpretable representation learning techniques, often relying on latent topological structures such as self-organizing maps, on the other hand. However, current methods do not yet successfully combine these two approaches. We present a new deep architecture for probabilistic clustering, VarPSOM, and its extension to time series data, VarTPSOM. We show that they achieve superior clustering performance compared to current deep clustering methods on static MNIST/Fashion-MNIST data as well as medical time series, while inducing an interpretable representation. Moreover, on the medical time series, VarTPSOM successfully predicts future trajectories in the original data space.

1 INTRODUCTION

Information visualization techniques are essential in areas where humans have to make decisions based on large amounts of complex data. Their goal is to find an interpretable representation of the data that allows the integration of humans into the data exploration process. This encourages visual discoveries of relationships in the data and provides guidance to downstream tasks. In this way, a much higher degree of confidence in the findings of the exploration is attained (Keim, 2002). An interpretable representation of the data, in which the underlying factors are easily visualized, is particularly important in domains where the reason for obtaining a certain prediction is as valuable as the prediction itself. However, finding a meaningful and interpretable representation of complex data can be challenging.

Clustering is one of the most natural ways of retrieving interpretable information from raw data. Long-established methods such as k-means (MacQueen, 1967) and Gaussian Mixture Models (Bishop, 2006) represent the cornerstone of cluster analysis. Their applicability, however, is often constrained to simple data, and their performance is limited on high-dimensional, complex, real-world data sets, which do not exhibit a clustering-friendly structure.

Deep generative models have recently achieved tremendous success in representation learning. Some of the most commonly used and efficient approaches are Autoencoders (AEs), Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) (Kingma & Welling, 2013; Goodfellow et al., 2014). The compressed latent representation generated by these models has been proven to ease the clustering process (Aljalbout et al., 2018). As a result, the combination of deep generative models for feature extraction with clustering results in a dramatic increase of the clustering performance (Xie et al., 2015). Although very successful, most of these methods do not investigate the relationships among clusters, and the clustered feature points live in a high-dimensional latent space that cannot be easily visualized or interpreted by humans.

The Self-Organizing Map (SOM) (Kohonen, 1990) is a clustering method that provides such an interpretable representation.
It produces a low-dimensional (typically 2-dimensional), discretized representation of the input space by inducing a flexible neighbourhood structure over the clusters. Alas, its applicability is often constrained to simple data sets, similar to other classical clustering methods.

To resolve the above issues, we propose a novel deep architecture, the Variational Probabilistic SOM (VarPSOM), that jointly trains a VAE and a SOM to achieve an interpretable discrete representation while exhibiting state-of-the-art clustering performance. Instead of hard assignment of data points to clusters, our model uses a centroid-based probability distribution. It minimizes its Kullback-Leibler divergence against an auxiliary target distribution, while enforcing a SOM-friendly space. To highlight the importance of an interpretable representation for different purposes, we extended this model to deal with temporal data, yielding VarTPSOM. We discuss related work in Section 2. Extensive evidence of the superior clustering performance of both models, on MNIST/Fashion-MNIST images as well as real-world medical time series, is presented in Section 4.

Our main contributions are:
• A novel architecture for deep clustering, yielding an interpretable discrete representation through the use of a probabilistic self-organizing map.
• An extension of this architecture to time series, improving clustering performance on this data type and enabling temporal predictions.
• A thorough empirical assessment of our proposed models, showing superior performance on benchmark tasks and challenging medical time series from the intensive care unit.

2 RELATED WORK

Self-Organizing Maps have been widely used as a means to visualize information from large amounts of data (Tirunagari et al., 2014) and as a form of clustering in which the centroids are connected by a topological neighborhood structure (Flexer, 1999). Since their early inception, several variants have been proposed to enhance their performance and scope. The adaptive subspace SOM, ASSOM (Kohonen, 1995), for example, proposed to combine PCA and SOMs to map data into a reduced feature space. Tokunaga & Furukawa (2009) combine SOMs with multi-layer perceptrons to obtain a modular network. Liu et al. (2015) proposed Deep SOM (DSOM), an architecture composed of multiple layers similar to Deep Neural Networks. There exist several methods tailored to representation learning on time series, among them (Franceschi et al., 2019; Fortuin & Rätsch, 2019; Fortuin et al., 2019), which are however not based on SOMs. Extensions of SOM optimized for temporal data include the Temporal Kohonen map (Chappell & Taylor, 1993) and its improved version, Recurrent SOM (McQueen et al., 2004), as well as Recursive SOM (Voegtlin, 2002). While SOM and its variants are particularly effective for data visualization (Liu et al., 2015), it was rarely attempted to combine their merits in this respect with modern state-of-the-art clustering methods, which often use deep generative models in combination with probabilistic clustering.

In particular, recent works on clustering analysis have shown that combining clustering algorithms with the latent space of AEs greatly increases the clustering performance (Aljalbout et al., 2018).
Xie et al. (2015) proposed DEC, a method that sequentially applies embedding learning using Stacked Autoencoders (SAEs) and the Clustering Assignment Hardening method on the obtained representation. An improvement of this architecture, IDEC (Guo et al., 2017), includes the decoder network of the SAE in the learning process, so that training is affected by both the clustering loss and the reconstruction loss. Similarly, DCN (Yang et al., 2016) combines a k-means clustering loss with the reconstruction loss of an SAE to obtain an end-to-end architecture that jointly trains representations and clustering. These models achieve state-of-the-art clustering performance, but they do not investigate the relationships among clusters. An exception is the work by Li et al. (2018), which presents an unsupervised method that learns latent embeddings and discovers a multi-facet clustering structure. Relationships among clusters were discovered; however, they do not provide a latent space that can be easily interpreted and which eases the process of analytical reasoning.

While there exist previous efforts to endow VAEs with a hierarchical latent space (Vikram et al., 2018; Goyal et al., 2017), to the best of our knowledge, only two models used deep generative models in combination with a SOM structure in the latent space. The SOM-VAE model (Fortuin et al., 2018), inspired by the VQ-VAE architecture (van den Oord et al., 2017) (which itself was later extended in (Razavi et al., 2019)), uses an AE to embed the input data points into a latent space and then applies a SOM-based clustering loss on top of this latent representation. It features hard assignments of points to centroids, as well as the use of a Markov model for temporal data, both of which yield inferior expressivity compared to our method. The Deep Embedded SOM, DESOM (Forest et al., 2019), improved the previous model by using a Gaussian neighborhood window with exponential radius decay and by learning the SOM structure in a continuous setting. Both methods feature a topologically interpretable neighborhood structure and yield promising results in visualizing state spaces. However, those works did not feature empirical comparisons to state-of-the-art deep clustering techniques and did not make use of many of the design principles that have recently proven to be successful in this space.

3 PROBABILISTIC CLUSTERING WITH VARIATIONAL PSOM

Given a set of data samples {x_i}_{i=1,...,n}, where x_i ∈ R^d, the goal is to partition the data into a set of clusters {S_i}_{i=1,...,K}, while retaining a topological structure over the cluster centroids.

The proposed architecture for static data is presented in Figure 1a. The input vector x_i is embedded into a latent representation z_i using a VAE. This latent vector is then clustered using PSOM, a new SOM clustering strategy that extends the Clustering Assignment Hardening method (Xie et al., 2015). The VAE and PSOM are trained jointly to learn a latent representation with the aim of boosting the clustering performance. To prevent the network from outputting a trivial solution, the decoder network reconstructs the input from the latent embedding, encouraging it to be as similar as possible to the original input. The resulting loss function is a linear combination of the clustering loss and the reconstruction loss. To deal with temporal data, we propose another model variant, which is depicted in Figure 1b.
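As a concrete reference for the embedding step above, here is a minimal PyTorch sketch of a VAE encoder with the reparameterization trick. The fully connected layer sizes follow the d–500–500–2000–l structure given in the implementation details of Section 4; the class name, ReLU activations, and diagonal-Gaussian parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, d: int, l: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 2000), nn.ReLU(),
        )
        self.mu = nn.Linear(2000, l)       # mean of q_phi(z | x)
        self.logvar = nn.Linear(2000, l)   # log-variance of q_phi(z | x)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return z, mu, logvar
```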
3.1 BACKGROUND

A Self-Organizing Map is comprised of K nodes connected to form a grid M ⊆ N², where the node m_{i,j}, at position (i, j) of the grid, corresponds to a centroid vector µ_{i,j} in the input space. The centroids are tied by a neighborhood relation N(µ_{i,j}) = {µ_{i−1,j}, µ_{i+1,j}, µ_{i,j−1}, µ_{i,j+1}}. Given a random initialization of the centroids, the SOM algorithm randomly selects an input x_i and updates both its closest centroid µ_{i,j} and its neighbors N(µ_{i,j}) to move them closer to x_i. For a complete description of the SOM algorithm, we refer to the appendix (A).

The Clustering Assignment Hardening method has been recently introduced by the DEC model (Xie et al., 2015) and was shown to perform well in the latent space of AEs (Aljalbout et al., 2018). Given an embedding function z_i = f(x_i), it uses a Student's t-distribution (S) as a kernel to measure the similarity between an embedded data point z_i and a centroid µ_j:

s_{ij} = (1 + ‖z_i − µ_j‖² / α)^{−(α+1)/2} / Σ_{j'} (1 + ‖z_i − µ_{j'}‖² / α)^{−(α+1)/2}.

It improves the cluster purity by enforcing the distribution S to approach a target distribution T:

t_{ij} = (s_{ij}^γ / Σ_{i'} s_{i'j}) / Σ_{j'} (s_{ij'}^γ / Σ_{i'} s_{i'j'}).

By taking the original distribution to the power of γ and normalizing it, the target distribution puts more emphasis on data points that are assigned a high confidence. We follow (Xie et al., 2015) in choosing γ = 2, which leads to larger gradient contributions of points close to cluster centers, as they show empirically. The resulting clustering loss is defined as

L = KL(T ‖ S) = Σ_i Σ_j t_{ij} log (t_{ij} / s_{ij}).   (1)

3.2 PROBABILISTIC SOM (PSOM) CLUSTERING

Our proposed clustering method, called PSOM, expands Clustering Assignment Hardening to include a SOM neighborhood structure over the centroids. We add an additional loss to (1) to achieve an interpretable representation. This loss term maximizes the similarity between each data point and the neighbors of the closest centroids. For each embedded data point z_i and each centroid µ_j, the loss is defined as the negative sum, over all the neighbors {e : µ_e ∈ N(µ_j(x_i))} of µ_j, of the probability s_{ie} that z_i is assigned to e. This sum is weighted by the similarity s_{ij} between z_i and the centroid µ_j:

L_SOM = −(1/N) Σ_i Σ_j s_{ij} Σ_{e : µ_e ∈ N(µ_j(x_i))} s_{ie}.

The complete PSOM clustering loss is then

L_PSOM = KL(T ‖ S) + β L_SOM.

We note that for β = 0 it becomes equivalent to Clustering Assignment Hardening.
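The clustering terms above can be summarized in a minimal PyTorch sketch. The precomputed `neighbors` lists (grid neighbors of each centroid), the batch averaging of the KL term, and the detached target distribution are implementation choices on our part, not prescribed by the equations.

```python
import torch

def psom_loss(z, mu, neighbors, alpha=10.0, beta=1.0):
    """z: (n, l) embeddings; mu: (K, l) centroids; neighbors[j]: grid neighbors of j."""
    d2 = torch.cdist(z, mu) ** 2                      # squared distances, shape (n, K)
    s = (1 + d2 / alpha) ** (-(alpha + 1) / 2)
    s = s / s.sum(dim=1, keepdim=True)                # soft assignments s_ij
    t = s ** 2 / s.sum(dim=0, keepdim=True)           # gamma = 2
    t = (t / t.sum(dim=1, keepdim=True)).detach()     # target t_ij, held fixed
    kl = (t * (t.clamp_min(1e-12) / s.clamp_min(1e-12)).log()).sum(dim=1).mean()
    som = torch.zeros(())                             # neighborhood term L_SOM
    for j, nbrs in enumerate(neighbors):
        som = som - (s[:, j] * s[:, nbrs].sum(dim=1)).mean()
    return kl + beta * som
```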
3.3 VARPSOM: VAE FOR FEATURE EXTRACTION

In our method, the nonlinear mapping between the input x_i and embedding z_i is realized by a VAE. Instead of directly embedding the input x_i into a latent embedding z_i, the VAE learns a probability distribution q_φ(z | x_i), parametrized as a multivariate normal distribution whose mean and variance are (µ_φ, Σ_φ) = f_φ(x_i). Similarly, it also learns the probability distribution of the reconstructed output given a sampled latent embedding, p_θ(x_i | z), where (µ_θ, Σ_θ) = f_θ(z_i). Both f_φ and f_θ are neural networks, respectively called encoder and decoder. The ELBO loss is

L_ELBO = Σ_i [ −E_z(log p_θ(x_i | z)) + D_KL(q_φ(z | x_i) ‖ p(z)) ],   (2)

where p(z) is an isotropic Gaussian prior over the latent embeddings. The second term can be interpreted as a form of regularization, which encourages the latent space to be compact. For each data point x_i, the latent embedding z_i is sampled from q_φ(z | x_i). Adding the ELBO loss to the PSOM loss from the previous subsection, we get the overall loss function of VarPSOM:

L_VarPSOM = L_PSOM + L_ELBO.   (3)

To the best of our knowledge, no previous SOM methods attempted to use a VAE to embed the inputs into a latent space. There are many advantages of a VAE over an AE for realizing our goals. Its prior on the latent space encourages structured and disentangled factors (Higgins et al., 2016), which could help clustering. The suitability of VAEs for anomaly detection (An & Cho, 2015) means that points with a higher variance in the latent space could be treated as less accurate and trustworthy. The regularization term of the VAE can be used to prevent the network from scattering the embedded points discontinuously in the latent space, which naturally facilitates the fitting of the SOM. To test whether the use of CNNs can boost clustering performance on image data, we introduce another model variant, called VarCPSOM, which uses convolutional filters as part of the VAE.

3.4 VARTPSOM: EXTENSION TO TIME SERIES DATA

To extend our proposed model to time series data, we add a temporal component to the architecture. Given a set of N time series of length T, {x_{t,i}}_{t=1,...,T; i=1,...,N}, the goal is to learn interpretable trajectories on the SOM grid. To do so, VarPSOM could be used directly, but it would treat each time step t of the time series independently, which is undesirable. To exploit temporal information and enforce smoothness in the trajectories, we add an additional loss to (3):

L_smooth = −(1/(NT)) Σ_i Σ_t u_{i_t, i_{t+1}},   (4)

where u_{i_t, i_{t+1}} = g(z_{i,t}, z_{i,t+1}) is the similarity between z_{i,t} and z_{i,t+1} using a Student's t-distribution, and z_{i,t} refers to the embedding of time series x_i at time index t. It maximizes the similarity between latent embeddings of adjacent time steps, such that large jumps in the latent state between time points are discouraged.

One of the main goals in time series modeling is to predict future data points or, alternatively, future embeddings. This can be achieved by adding a long short-term memory network (LSTM) across the latent embeddings of the time series, as shown in Fig. 1b. Each cell of the LSTM takes as input the latent embedding z_t at time step t, and predicts a probability distribution over the next latent embedding, p_ω(z_{t+1} | z_t). We parametrize this distribution as a multivariate normal distribution whose mean and variance are learnt by the LSTM. The prediction loss is the negative log-likelihood of a sample of the next embedding z_{t+1} under the learned distribution:

L_pred = −Σ_i Σ_t log p_ω(z_{t+1} | z_t).   (5)

The final loss of VarTPSOM, which is trainable in a fully end-to-end fashion, is

L_VarTPSOM = L_VarPSOM + L_smooth + η L_pred.   (6)
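A minimal PyTorch sketch of the two temporal terms, i.e., the smoothness loss of Eq. (4) and the prediction loss of Eq. (5). The unnormalized Student's t kernel for g, the single-layer LSTM head, the detached targets, and the dropped additive constant of the Gaussian log-likelihood are illustrative assumptions.

```python
import torch
import torch.nn as nn

def smoothness_loss(z, alpha=10.0):
    """z: (N, T, l) latent trajectories; reward similar adjacent embeddings, Eq. (4)."""
    d2 = ((z[:, 1:] - z[:, :-1]) ** 2).sum(dim=-1)
    u = (1 + d2 / alpha) ** (-(alpha + 1) / 2)   # u_{t, t+1}
    return -u.mean()

class PredictionHead(nn.Module):
    def __init__(self, l: int, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(l, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, l)
        self.logvar = nn.Linear(hidden, l)

    def loss(self, z):
        """Negative Gaussian log-likelihood of z_{t+1} given z_{<=t}, Eq. (5)."""
        h, _ = self.lstm(z[:, :-1])
        mu, logvar = self.mu(h), self.logvar(h)
        target = z[:, 1:].detach()
        nll = 0.5 * (logvar + (target - mu) ** 2 / logvar.exp()).sum(dim=-1)
        return nll.mean()                        # up to an additive constant
```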
4 EXPERIMENTS

First, we evaluate VarPSOM and VarCPSOM and compare them with state-of-the-art non-interpretable as well as SOM-based clustering methods on MNIST (Lecun et al., 1998) and Fashion-MNIST (Xiao et al., 2017) data. Here, particular focus is laid on the comparison of VarPSOM with the clustering models DEC and IDEC, to investigate the role of the VAE and the SOM loss. We then present visualizations of the obtained 2D representations, to illustrate how our method could ease visual reasoning about the data. Finally, we present extensive evidence of the performance of VarTPSOM on real-world complex time series from the eICU data set (Pollard et al., 2018), and illustrate how it allows visualization of patient health state trajectories in an easily understandable 2D domain. For details on the data sets, we refer to the appendix (B.1).

Baselines We used two different types of baselines. The first category contains clustering methods that do not provide any interpretable discrete latent representation. These include k-means, the DEC model, as well as its improved version IDEC, whose clustering methods are related to ours. We also include a modified version of IDEC that we call VarIDEC, in which we substitute the AE with a VAE, to investigate the role of the VAE. For all these methods we use 64 clusters. In the second category, we include state-of-the-art clustering methods based on SOMs. Here, we used a standard SOM (minisom); AE+SOM, an architecture composed of an AE and a SOM applied on top of the latent representation (trained sequentially); SOM-VAE; and DESOM. Finally, we create a modified version of our model, called AEPSOM, in which we substitute the VAE with an AE (similarly to VarIDEC). For all SOM-based methods we set the SOM grid size to (8 × 8). For different grid configurations we refer to the appendix (B.3).

Implementation In implementing our models we focused on retaining a fair comparison with the baselines. Hence we decided to use a standard network structure, with fully connected layers of dimensions d − 500 − 500 − 2000 − l, to implement both the VAE of our models and the AE of the baselines. The latent dimension l is set to 100 for the VAE, and to 10 for the AEs. Since the prior in the VAE enforces the latent embeddings to be compact, it also requires more dimensions to learn a meaningful latent space. On the other hand, providing the AE models with the higher-dimensional latent space needed for the VAE resulted in a dramatic decrease of performance (see appendix B.2). VarCPSOM is composed of 4 convolutional layers with feature maps [32, 64, 128, 256] and kernel size 3 × 3 for all layers. For all architectures, no greedy layer-wise pretraining was used to tune the VAE. Instead we simply run the VAE without the clustering loss for a few epochs for initialization. A standard SOM was then used to produce an initial configuration of the centroids/neighbourhood relation. Finally, the entire architecture is trained for 100,000 iterations. To avoid fine-tuning hyperparameters, given the unsupervised setting, α is set to 10 for all experiments, while the other hyperparameters are chosen to maintain the same order of magnitude of the different loss components.

Clustering Evaluation Table 1 shows the clustering quality results of VarPSOM and VarCPSOM on MNIST and Fashion-MNIST data, compared with the baselines. Purity and Normalized Mutual Information (NMI) are used as evaluation metrics. We observe that our proposed models outperform the baselines of both categories and achieve state-of-the-art clustering performance.
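For reference, the two metrics can be computed as follows (a minimal sketch assuming scikit-learn for NMI; purity assigns each cluster its majority ground-truth label):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def purity(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    correct = 0
    for c in np.unique(y_pred):
        _, counts = np.unique(y_true[y_pred == c], return_counts=True)
        correct += counts.max()        # size of the majority class in cluster c
    return correct / len(y_true)

# nmi = normalized_mutual_info_score(y_true, y_pred)
```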
VarPSOM vs. IDEC VarPSOM is inspired by IDEC, but it has two major differences. It uses a VAE instead of an AE, and it improves interpretability in the latent space by adding a new loss that enforces a SOM structure. Since both VarIDEC and VarPSOM show superior clustering performance compared to IDEC and AEPSOM, respectively (Table 1), we conclude that the VAE indeed succeeds in capturing a more meaningful latent representation compared to a standard AE. Regarding the second difference, the SOM structure was expected to slightly decrease the clustering performance, due to a trade-off between interpretability and raw clustering performance. However, we do not observe this in our results. Adding the SOM loss rather leads to an increase of the clustering performance. We suspect this is due to the regularization effect of the SOM's topological structure. Overall, VarPSOM outperforms both DEC and IDEC.

Improvement over Training After obtaining the initial configuration of the SOM structure, both clustering and feature extraction using the VAE are trained jointly. To illustrate that our architecture improves clustering performance over the initial configuration, we plot NMI and purity against the number of training iterations in Figure 2. We observe that the performance is stable when increasing the number of epochs and no overfitting is visible.

Role of the SOM loss To investigate the influence of the SOM loss component, we plot the clustering performance of VarPSOM against the weight β of L_SOM in Fig. 3, using the MNIST dataset. With β = 30, the KL term (responsible for improving clustering purity) and the L_SOM term (responsible for enforcing a SOM structure over the centroids) are almost equal. It is interesting to observe the different trends in NMI and purity. The NMI performance increases for increasing values of β, while purity slightly decreases. Overall, enforcing a more interpretable latent space results in a more robust clustering model with higher NMI clustering performance.

Time Series Evaluation We evaluate the clustering performance of our proposed models on the eICU dataset, comprised of complex medical time series. We compare them against SOM-VAE, as this is the only method among the baselines that is suited for temporal data. Table 2 shows the cluster cell enrichment in terms of NMI for three different labels: the current (APACHE-0) and worst future (APACHE-6/12 hours) physiology scores. VarTPSOM clearly achieves superior clustering performance compared to SOM-VAE. This, we hypothesize, is due to the better feature extraction using a VAE, as well as the improved treatment of uncertainty using PSOM, which features soft assignments, whereas SOM-VAE contains a deterministic AE and hard assignments. Moreover, both the smoothness loss and the prediction loss seem to increase the clustering performance. More results on ICU time series are reported in the appendix (B.4).

To quantify the performance of VarTPSOM in unrolling future trajectories, we predict the final 6 latent embeddings of each time series. For each predicted embedding we reconstruct the input using the decoder of the VAE. Finally, we measure the MSE between the original inputs and the reconstructed inputs for the last 6 hours of the ICU admission. As baselines, we used an LSTM that takes as input the first 66 hours of the time series and then predicts the next 6 hours.
Since most of the trajectories tend to stay in the same state over long periods of time, another strong baseline is obtained by duplicating the last seen embedding over the final 6 hours. The results (Table 3) indicate that the joint training of clustering and prediction used by VarTPSOM clearly outperforms the two baselines.

Table 3: Prediction MSE over the final 6 hours of the ICU stays.

Model       MSE
LSTM        0.0386 ± 0.0049
SameState   0.0576 ± 0.0012
VarTPSOM    0.0297 ± 0.0009

Interpretability To illustrate the topological structure in the latent space, we present reconstructions of the VarPSOM centroids, arranged in an (8 × 8) grid, on static MNIST/Fashion-MNIST data in Figure 4. On the ICU time series data, we show example trajectories for one patient dying at the end of the ICU stay, as well as two control patients who are discharged healthy from the ICU. We observe that the trajectories are located in different parts of the SOM grid, and form a smooth and interpretable representation (Fig. 5). For further results, including a more quantitative evaluation using randomly sampled trajectories, enrichment for future mortality, as well as an illustration of how the uncertainty captured by the soft assignments can help in data visualization, we refer to the appendix (B.5).

5 CONCLUSION

We presented two novel methods for interpretable unsupervised clustering, VarPSOM and VarTPSOM. Both models make use of a VAE and a novel clustering method, PSOM, that extends the classical SOM algorithm to include a centroid-based probability distribution. Our models achieve superior clustering performance compared to state-of-the-art deep clustering baselines on benchmark data sets and real-world medical time series. The use of a VAE for feature extraction, instead of the AE used in previous methods, and the use of soft assignments of data points to clusters result in an interpretable model that can quantify uncertainty in the data.

REFERENCES

Elie Aljalbout, Vladimir Golkov, Yawar Siddiqui, and Daniel Cremers. Clustering with deep learning: Taxonomy and new methods. CoRR, abs/1801.07648, 2018. URL http://arxiv.org/abs/1801.07648.

Jinwon An and Sungzoon Cho. Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE, 2(1), 2015.

Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg, 2006. ISBN 0387310738.

Geoffrey J. Chappell and John G. Taylor. The temporal Kohonen map. Neural Networks, 6(3):441–445, March 1993. ISSN 0893-6080. doi: 10.1016/0893-6080(93)90011-K. URL http://dx.doi.org/10.1016/0893-6080(93)90011-K.

Arthur Flexer. On the use of self-organizing maps for clustering and visualization. In Jan M. Żytkow and Jan Rauch (eds.), Principles of Data Mining and Knowledge Discovery, pp. 80–88, Berlin, Heidelberg, 1999. Springer Berlin Heidelberg. ISBN 978-3-540-48247-5.

Florent Forest, Mustapha Lebbah, Hanene Azzag, and Jérôme Lacaille. Deep embedded SOM: Joint representation learning and self-organization. April 2019.

Vincent Fortuin and Gunnar Rätsch. Deep mean functions for meta-learning in Gaussian processes. arXiv preprint arXiv:1901.08098, 2019.

Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, and Gunnar Rätsch. SOM-VAE: Interpretable discrete representation learning on time series.
arXiv preprint arXiv:1806.02199, 2018.

Vincent Fortuin, Gunnar Rätsch, and Stephan Mandt. Multivariate time series imputation with variational autoencoders. arXiv preprint arXiv:1907.04155, 2019.

Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. arXiv preprint arXiv:1901.10738, 2019.

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv e-prints, arXiv:1406.2661, June 2014.

Prasoon Goyal, Zhiting Hu, Xiaodan Liang, Chenyu Wang, and Eric P. Xing. Nonparametric variational auto-encoders for hierarchical representation learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5094–5102, 2017.

Xifeng Guo, Long Gao, Xinwang Liu, and Jianping Yin. Improved deep embedded clustering with local structure preservation. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 1753–1759, 2017. doi: 10.24963/ijcai.2017/243. URL https://doi.org/10.24963/ijcai.2017/243.

Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, and Alexander Lerchner. Early visual concept learning with unsupervised deep learning. arXiv preprint arXiv:1606.05579, 2016.

D. A. Keim. Information visualization and visual data mining. IEEE Transactions on Visualization and Computer Graphics, 8(1):1–8, January 2002. ISSN 1077-2626. doi: 10.1109/2945.981847.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv e-prints, arXiv:1312.6114, December 2013.

T. Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464–1480, September 1990. ISSN 0018-9219. doi: 10.1109/5.58325.

Teuvo Kohonen. The adaptive-subspace SOM (ASSOM) and its use for the implementation of invariant feature detection. 1995.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998. ISSN 0018-9219. doi: 10.1109/5.726791.

Xiaopeng Li, Zhourong Chen, and Nevin L. Zhang. Latent tree variational autoencoder for joint representation learning and multidimensional clustering. CoRR, abs/1803.05206, 2018. URL http://arxiv.org/abs/1803.05206.

N. Liu, J. Wang, and Y. Gong. Deep self-organizing map for visual classification. In 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–6, July 2015. doi: 10.1109/IJCNN.2015.7280357.

J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics, pp. 281–297, Berkeley, Calif., 1967. University of California Press. URL https://projecteuclid.org/euclid.bsmsp/1200512992.

T. A. McQueen, A. A. Hopgood, J. A. Tepper, and T. J. Allen. A recurrent self-organizing map for temporal sequence processing. In Ahamad Lotfi and Jonathan M. Garibaldi (eds.), Applications and Science in Soft Computing, pp. 3–8, Berlin, Heidelberg, 2004.
Springer Berlin Heidelberg. ISBN 978-3-540-45240-9.

Tom J. Pollard, Alistair E. W. Johnson, Jesse D. Raffa, Leo A. Celi, Roger G. Mark, and Omar Badawi. The eICU collaborative research database, a freely available multi-center database for critical care research. Scientific Data, 5, 2018.

Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. arXiv preprint arXiv:1906.00446, 2019.

S. Tirunagari, N. Poh, K. Aliabadi, D. Windridge, and D. Cooke. Patient level analytics using self-organising maps: A case study on type-1 diabetes self-care survey responses. In 2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM), pp. 304–309, December 2014. doi: 10.1109/CIDM.2014.7008682.

Kazuhiro Tokunaga and Tetsuo Furukawa. Modular network SOM. Neural Networks, 22(1):82–90, 2009. ISSN 0893-6080. doi: 10.1016/j.neunet.2008.10.006. URL http://www.sciencedirect.com/science/article/pii/S0893608008002335.

Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. CoRR, abs/1711.00937, 2017. URL http://arxiv.org/abs/1711.00937.

Sharad Vikram, Matthew D. Hoffman, and Matthew J. Johnson. The LORACs prior for VAEs: Letting the trees speak for the data. arXiv preprint arXiv:1810.06891, 2018.

Thomas Voegtlin. Recursive self-organizing maps. Neural Networks, 15(8):979–991, 2002. ISSN 0893-6080. doi: 10.1016/S0893-6080(02)00072-2. URL http://www.sciencedirect.com/science/article/pii/S0893608002000722.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. CoRR, abs/1708.07747, 2017. URL http://arxiv.org/abs/1708.07747.

Junyuan Xie, Ross B. Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. CoRR, abs/1511.06335, 2015. URL http://arxiv.org/abs/1511.06335.

Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, and Mingyi Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. CoRR, abs/1610.04794, 2016. URL http://arxiv.org/abs/1610.04794.

APPENDIX

A SELF-ORGANIZING MAPS

Among various existing interpretable unsupervised learning algorithms, Kohonen's self-organizing map (SOM) (Kohonen, 1990) is one of the most popular models. It is comprised of K neurons connected to form a discrete topological structure. The data are projected onto this topographic map, which locally approximates the data manifold. Usually it is a finite two-dimensional region where neurons are arranged in a regular hexagonal or rectangular grid. Here we use a grid, M ⊆ N², because of its simplicity and its visualization properties. Each neuron m_{i,j}, at position (i, j) of the grid, for i, j = 1, ..., √K, corresponds to a centroid vector µ_{i,j} in the input space. The centroids are tied by a neighborhood relation, here defined as N(µ_{i,j}) = {µ_{i−1,j}, µ_{i+1,j}, µ_{i,j−1}, µ_{i,j+1}}. Given a random initialization of the centroids, the SOM algorithm randomly selects an input x_i and updates both its closest centroid µ_{i,j} and its neighbors N(µ_{i,j}) to move them closer to x_i.
Algorithm 1 then iterates these steps until convergence.

Algorithm 1 Self-Organizing Maps
Require: 0 < α(t) < 1; lim_{t→∞} Σ α(t) → ∞; lim_{t→∞} Σ α²(t) < ∞
repeat
  At each time t, present an input x(t) and select the winner,
    ν(t) = argmin_{k∈Ω} ‖x(t) − w_k(t)‖
  Update the weights of the winner and its neighbours,
    Δw_k(t) = α(t) η(ν, k, t) [x(t) − w_ν(t)]
until the map converges

The range of SOM applications includes high dimensional data visualization, clustering, image and video processing, density or spectrum profile modeling, text/document mining, management systems, and gene expression data analysis.

B EXPERIMENTAL AND IMPLEMENTATION DETAILS

B.1 DATASETS

• MNIST: It consists of 70,000 handwritten digits of 28-by-28 pixel size. Digits range from 0 to 9, yielding 10 patterns in total. The digits have been size-normalized and centered in a fixed-size image (Lecun et al., 1998).
• Fashion-MNIST: A dataset of Zalando's article images consisting of a training set of 60,000 examples and a test set of 10,000 examples (Xiao et al., 2017). Each example is a 28×28 grayscale image, associated with a label from 10 classes.
• eICU: For temporal data we use vital sign/lab measurements of intensive care unit (ICU) patients, resampled to a 1-hour based grid using forward filling, and filling with population statistics from the training set if no measurements were available. From all ICU stays, we excluded stays which were shorter than 1 day, longer than 30 days, or which had at least one gap in the continuous vital sign monitoring, which we define as an interval between 2 HR measurements of at least 1 hour. This yielded N = 10,559 ICU stays from the eICU database. d_vitals = 14 vital sign variables and d_lab = 84 lab measurement variables were included, giving an overall data dimension of d = 98. The last 72 hours of these multivariate time series were used for the experiments. As labels, we use a variant of the current dynamic APACHE physiology score (APACHE-0), the worst APACHE score in the next 6 and 12 hours (APACHE-6/12), and the mortality in the next 24 hours. Only those variables from the APACHE score definition which are recorded in the eICU database were taken into account.

Each dataset is divided into training, validation and test sets for both our models and the baselines.

B.2 LATENT SPACE DIMENSION

We evaluate the DEC model for different latent space dimensions. Table S1 shows that the AE used in the DEC model performs better when a lower-dimensional latent space is used.

Table S1: Mean/standard error of NMI and purity of the DEC model on the MNIST test set, across 10 runs with different random model initializations. We use 64 clusters and different latent space dimensions.

Latent dimension   Purity          NMI
l = 10             0.950 ± 0.001   0.681 ± 0.001
l = 100            0.750 ± 0.001   0.573 ± 0.001

B.3 NUMBER OF CLUSTERS

We evaluate the NMI and purity clustering performance of our model, VarPSOM, with a varying number of clusters on the MNIST dataset. Since IDEC represents the main competitor, we include it in this analysis. Figure S1 shows that VarPSOM outperforms IDEC for all the different configurations. In particular, it is interesting to observe that NMI decreases with an increasing number of clusters in both models.
This is because the entropy of the clustering increases with the number of clusters.

Figure S1: NMI (left) and purity (right) clustering performance of VarPSOM and IDEC with varying number of clusters on the MNIST test set.

B.4 LEARNING HEALTH STATE REPRESENTATIONS IN THE ICU

By enforcing a SOM structure, VarPSOM, as well as VarTPSOM, projects the cluster centroids onto a discrete 2D grid. Such a grid is particularly suited for visualization purposes, and relations between centroids become immediately intuitive. In Fig. S2, heat-maps (colored according to enrichment in the current APACHE score, as well as mortality risk in the next 24 hours) show compact enrichment structures. VarTPSOM succeeds in creating a meaningful and smooth neighbourhood structure. It distinguishes risk profiles with practically zero mortality risk from those with high mortality risk, reaching up to ≈15%, in different regions of the map, even though it is learned in a purely unsupervised fashion. Remarkably, the two heat-maps (S2b and S2a) show different enrichment patterns. Clusters which are enriched in health states with higher APACHE scores often do not correspond exactly to clusters with a higher mortality risk. This suggests that traditional representations of physiologic values, such as the APACHE score, fail to fully use all complex multivariate relationships present in the ICU recordings, and are not associated with dynamic mortality in a simple way.

(a) Current APACHE score (b) Mortality risk in the next 24 hours
Figure S2: Heat-maps of enrichment in mortality risk in the next 24 hours, as well as the current dynamic APACHE score, superimposed on the discrete 2D grid learned by VarTPSOM.

B.5 VISUALIZING HEALTH STATE TRAJECTORIES IN THE ICU

To analyze the trend of the patient pathology, VarTPSOM induces trajectories on the 2D SOM grid which can be easily visualized. Fig. S3 shows 20 randomly sampled patient trajectories obtained by our model. Trajectories ending in the death of the patient are shown in red; patients discharged healthy are shown in green.

Figure S3: Randomly sampled VarTPSOM trajectories, from patients who expired at the end of the ICU stay, as well as patients discharged healthy. Superimposed is a heatmap which displays the cluster enrichment in the current APACHE score, from this model run. We observe that trajectories of dying patients often lie in different locations of the map than those of healthy patients, in particular in those regions enriched for high APACHE scores, which corresponds with clinical intuition.

One of the main advantages of VarTPSOM over the traditional SOM algorithm is the use of soft assignments of data points to clusters, which results in a better ability to quantify uncertainty in the data. For visualizing health states in the ICU, this property is very important. In Fig. S4 we plot an example patient trajectory, where 6 different time-steps (in temporal order) of the trajectory were chosen. Our model yields a soft centroid-based probability distribution which evolves with time and which allows estimation of likely discrete health states at a given point in time. For each time-step the distribution of probabilities is plotted using a heat-map, whereas the overall trajectory is plotted using a black line. The circle and cross indicate ICU admission and discharge, respectively.
Figure S4: Probabilities over discrete patient health states for 6 different time-steps of the selected time series.
The paper proposes combining the latent space of a variational autoencoder with two losses that regularize the latent space. The first loss is the cluster hardening loss in Aljalbout et al. [https://arxiv.org/pdf/1801.07648.pdf]. This loss attempts to convert a soft assignment of points (in latent space) to cluster centers (where the assignments are based on similarities computed via a Student's t kernel) into a hard assignment of points in latent space to cluster centers. The transformation is posed as the minimization of a KL divergence.
SP:faefbfe1f151c4b3e0db3ef30e3317f45dd82274
Progressive Upsampling Audio Synthesis via Effective Adversarial Training
1 INTRODUCTION

Synthesis of realistic sound is a long-studied research topic, with various real-world applications such as text-to-speech (TTS) (Wang et al., 2017; Ping et al., 2018), sound effects (Raghuvanshi et al., 2016), and music generation (Briot et al., 2017; Dong et al., 2018; Huang et al., 2019). Various techniques have been developed, ranging from sample-based methods to more computational ones such as additive/subtractive synthesis, frequency modulation, granular synthesis, and even full physics-based simulation (Cook, 2002). The human audible frequency range extends up to 20 kHz, so the standard sampling rate for music and sound is 44.1 kHz. Thus, for interactive applications and live performances, the generation of high temporal-resolution audio (i.e., 44.1 kHz) in real time has to meet the standard of human perceptual sensitivity to sound. However, the aforementioned methods often fail to do so, due to their heavy computational complexity with respect to the data size. Because of this, professional sound synthesizers usually have no choice but to rely on hardware implementations (Wessel & Wright, 2002).

Generative adversarial networks (GANs) (Goodfellow et al., 2014) have emerged as a promising approach to versatile (e.g., conditional generation from a low-dimensional latent vector (Mirza & Osindero, 2014)) and high-quality (e.g., super-resolution GAN (Ledig et al., 2017)) image synthesis. Some of the first GAN models for sound synthesis were designed to first produce a spectrogram (or some other similar intermediate representation) (Donahue et al., 2019; Engel et al., 2019; Marafioti et al., 2019). A spectrogram is a compact 2D representation of audio signals in terms of their frequency spectrum over time. The spectrogram can then be converted into an estimated time-domain waveform using the Griffin & Lim algorithm (Griffin & Lim, 1984). However, such a conversion process not only introduces nontrivial errors but also runs slowly, preventing the approach from being applied at an interactive rate.¹ WaveGAN (Donahue et al., 2019) was the first, and remains the state-of-the-art, GAN model that can generate raw-waveform audio from scratch.

The first generation of sound-generating GANs, like WaveGAN and its followers, has been influenced much by the enormously successful generative models for image synthesis. They can be divided into those that employ a single decoder architecture (e.g., DCGAN and StyleGAN (Radford et al., 2016; Karras et al., 2019)) and those that encode and decode intermediate representations in several progressive stages (e.g., StackGAN and Progressive GAN (Zhang et al., 2017; Karras et al., 2018)). WaveGAN is the direct descendant of DCGAN, with modifications for 1D audio data, while GANSynth applied the concept of progressive generation to audio, but using the 2D spectrogram, treating the audio as a 2D image. No previous work in GAN-based audio generation has attempted the direct and fast synthesis of the 1D raw audio waveform employing a multiple and progressive encoder-decoder architecture.

¹The interactive rate refers to the maximum temporal threshold of around 10 msec (Wessel & Wright, 2002) over which humans would not be able to recognize the sound-making event and the resultant sound as occurring at the same time.
Therefore, in this paper, we propose PUGAN, a modification and extension of the WaveGAN architecture for efficiently synthesizing raw-waveform audio through progressive training. PUGAN generates low sampling rate audio using the first few layers of the original WaveGAN (referred to as the lightweight WaveGAN module). The latter layers of WaveGAN are replaced with bandwidth extension modules, each of which is composed of a neural upsampling layer and an encoder/decoder; these modules are progressively trained and progressively output audio of increasingly higher sampling rates. For effective progressive training and generation, instead of the usual upsampling methods such as the nearest neighbor used in image generation, PUGAN uses an upsampling method often employed in the digital signal processing (DSP) field, in an attempt to preserve the frequency information of the original data (Oppenheim, 1999). This upsampling process consists of zero insertion and a 1D convolution that functions as an interpolation infinite impulse response (IIR) filter (a code sketch of this upsampling step follows the contribution list below). On the discriminator side, we add a Sinc convolution (Ravanelli & Bengio, 2018) before the first layer to replicate the function of a parameterized low-pass sinc filter, also a popular technique in the DSP area. We have also evaluated PUGAN in terms of both quantitative computational performance and qualitative metrics, including a human perceptual evaluation. (demo and code: https://pugan-iclrdemo.herokuapp.com/)

Overall, our contributions include the following:
• propose PUGAN, with novel neural modules (upsampling and bandwidth extension) for the efficient generation of raw-waveform audio,
• apply the concept of resampling (in the generator) and sinc convolution layers (in the discriminator), suitable for handling sound generation, instead of the conventional upsampling or convolution methods, and
• demonstrate the effectiveness of the proposed approach by generating raw-waveform audio with a significantly smaller number of parameters in real time, with output quality equivalent to WaveGAN.
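As promised above, here is a minimal PyTorch sketch of the zero-insertion upsampling step: zeros are interleaved between samples and a learned 1D convolution acts as the interpolation filter. Note that a finite convolution kernel of this kind realizes an FIR approximation of the interpolation filter; the kernel size, amplitude rescaling, and class name are illustrative assumptions, not the exact PUGAN configuration.

```python
import torch
import torch.nn as nn

class ZeroInsertUpsample(nn.Module):
    """Zero insertion followed by a learned 1D interpolation filter."""
    def __init__(self, channels: int, factor: int = 2, kernel_size: int = 25):
        super().__init__()
        self.factor = factor
        self.filt = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                    # x: (batch, channels, time)
        b, c, t = x.shape
        up = x.new_zeros(b, c, t * self.factor)
        up[:, :, ::self.factor] = x          # insert (factor - 1) zeros between samples
        return self.filt(up) * self.factor   # rescale to preserve signal amplitude
```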
Note that in the generative setting, the 2D-based approach (using representations like spectrograms) is considered problematic, as spectrograms are not fully invertible to sound without loss (and are thus inexact), and the inversion by, say, the most popular Griffin & Lim algorithm is time-consuming. A fast version of the Griffin & Lim algorithm (Perraudin et al., 2013) and Deep Griffin & Lim iteration approaches (Masuyama et al., 2019) have been proposed to improve the inversion performance; they have not reached the aforementioned level required for interactive applications. In addition, all previous GAN-based sound synthesizers were configured and experimented with to output audio at sampling rates of only up to 16 kHz. Such a sampling rate is sufficient for general TTS systems but, as noted earlier, not for general sound effects or music. WaveNet (Oord et al., 2016) can generate audio at a 44.1 kHz sampling rate, but aside from the questionable quality of the output, its autoregressive nature makes it difficult to achieve real-time performance through GPU parallelization. In this regard, once trained, the generator part of a GAN can generally operate faster. But for WaveGAN to generate 44.1 kHz audio, the additional transposed convolutional layers (added to the current 16 kHz generator) would make the number of architectural parameters prohibitively large (two times higher), and likewise the generation time. 2.2 AUDIO-TO-AUDIO CONVERSION. Audio-to-audio conversion refers to the task of taking an input audio sample and converting it into another with different characteristics. Most deep learning based audio conversion models have been influenced by similar image-to-image translation research. For instance, CycleGAN-VC (Kaneko & Kameoka, 2018), StarGAN-VC (Kameoka et al., 2018), and WaveCycleGAN (Tanaka et al., 2018) all looked into the problem of voice conversion, and were based on the previous works of CycleGAN (Zhu et al., 2017) and StarGAN (Choi et al., 2018). The tasks of denoising (Pascual et al., 2017) and super-resolution signal generation (Eskimez & Koishida, 2019) can also be regarded as forms of signal (or audio) conversion. Recently, a few attempts have been made to apply GANs to the task of bandwidth extension, such as SSRGAN (Eskimez & Koishida, 2019; Li et al., 2019). Note that our objective is the generation of high-resolution audio and sound effects rather than just conversion. 3 DATA CHARACTERISTICS: AUDIO VERSUS IMAGE. In this section, we discuss potential reasons why conventional GAN architectures have been successful in generating 2D images (Zhu et al., 2017; Karras et al., 2018; 2019) but less so for 1D sound waves, with respect to their data characteristics. This analysis can give us hints on how to newly configure the GAN architecture to generate sound signals faster and with higher quality. Image and sound, both as signals, contain information across the frequency domain. Sound has the added dimension of time. Humans are highly sensitive to the variation over time of the sound content over the entire frequency range, which makes sound quality depend on the reproduction of all frequency components. In other words, in sound, different frequency ranges may represent particular characteristics (e.g., a low bass male voice vs. a high-pitched female voice) (Klevans & Rodman, 1997).
In contrast, in images, high-resolution components often correspond to details or even noise, and as such, static image recognition and understanding may depend less on them (Heittola et al., 2009). Upsampling of the data is an important part of the GAN architecture (especially with respect to the conversion process). In image generation, for the reason mentioned above, upsampling by standard interpolation, such as nearest neighbor or linear interpolation, may suffice. Fig. 1 compares simple nearest neighbor based upsampling and linear interpolation to sinc function based upsampling (better known as resampling in the DSP area), showing the results of the nearest neighbor, linear interpolation, and Kaiser resampling methods. The examples illustrate the occurrence of high-frequency sidebands (noise) when the two simpler upsampling methods are used. This may be particularly problematic when high-resolution output is required. Another possibly effective method for dealing with signals of multiple frequency components is the resolution-wise progressive generation (and training) technique, as demonstrated by the work on Progressive GAN (Karras et al., 2018). While the original Progressive GAN was applied to 2D images, and similarly to spectrogram generation, we have applied the same idea to the 1D audio signal. However, the preliminary pilot result was not satisfactory; the reconstructed results were unnaturally smooth in the high-frequency range. This is attributed to a similar reason: the stride-1 transposed convolution layer effectively acts as a simple moving-average filter. On the other hand, for audio generation as WaveGAN has implemented it, upsampling based on the transposed convolution is more proper than alternatives such as nearest neighbor interpolation, as it was deemed more accurate in "capturing" (filtering out) the frequency-wise characteristics in the generation process. The only problem is that the number of the relevant architectural parameters grows excessively with the output size, which ultimately renders the generation process non-real-time. To summarize, based on these observations, the newly proposed PUGAN architecture first trains to learn the gross structure of the aural information distribution and quickly produces low-resolution audio, then incrementally converts and enriches the output to a higher resolution efficiently, instead of having to deal with the entire scale space simultaneously with a computationally heavy architecture.
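To make the zero-insertion upsampling concrete, below is a minimal sketch of such a bandwidth-extension-style layer in PyTorch. This is our illustration rather than the authors' released code: the module name, the kernel size, and the windowed-sinc initialization are assumptions; the text above specifies only zero insertion followed by a learnable 1D convolution acting as an interpolation low-pass filter.

```python
import torch
import torch.nn as nn

class ZeroInsertUpsample(nn.Module):
    """Upsample a waveform by `factor`: insert factor-1 zeros between
    samples, then apply a learnable 1D convolution that starts out as a
    Hann-windowed sinc interpolation (low-pass) filter."""
    def __init__(self, channels: int, factor: int = 2, kernel_size: int = 33):
        super().__init__()
        self.factor = factor
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2,
                              groups=channels, bias=False)
        # Windowed-sinc low-pass initialization with cutoff pi/factor; the
        # gain `factor` compensates for the energy lost to zero insertion.
        n = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
        arg = torch.pi * n / factor
        sinc = torch.where(n == 0, torch.ones_like(arg), torch.sin(arg) / arg)
        kernel = sinc * torch.hann_window(kernel_size, periodic=False)
        kernel = factor * kernel / kernel.sum()
        with torch.no_grad():
            self.conv.weight.copy_(kernel.repeat(channels, 1, 1))

    def forward(self, x):                    # x: (batch, channels, time)
        b, c, t = x.shape
        up = x.new_zeros(b, c, t * self.factor)
        up[..., ::self.factor] = x           # zero insertion
        return self.conv(up)                 # interpolation filtering

x = torch.randn(1, 1, 4000)                  # e.g. a low-rate audio chunk
y = ZeroInsertUpsample(channels=1, factor=2)(x)
print(y.shape)                               # torch.Size([1, 1, 8000])
```

Starting the convolution from a classical interpolation filter, rather than a random kernel, is one plausible way to preserve the frequency content of the low-rate signal early in training; the filter remains free to adapt during adversarial training.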
The authors detail PUGAN, a set of architectural changes to GAN models for raw waveform generation. They do a good job of motivating the challenge of raw audio generation with GANs and the use of progressive training. PUGAN incorporates U-Net-style modules in the generator ("bandwidth extension"), sinc convolutions for band-limiting inputs, and a StyleGAN-like method of adding noise at each level of the generator. Using listener studies and inception score, they show modest improvements over the state of the art (at time of submission), WaveGAN. Notably, their architecture is also more computation- and parameter-efficient.
SP:a324d745f08a17a7b48caa2246a51f222107f31c
Progressive Upsampling Audio Synthesis via Effective Adversarial Training
The paper presents an approach based on generative adversarial models for the unconditional generation of audio. The authors take inspiration from WaveGAN, to which they add more sophisticated upsampling blocks (called the bandwidth extension module) instead of transposed convolutions. They also propose to add a sinc convolution layer to the discriminators to improve training. Finally, they propose a progressive training scheme similar in spirit to the progressive training of GANs in images. Experiments are performed on generating audio pronunciation of digits, and the authors compare their work in terms of inception score, human evaluation and computation cost to WaveGAN.
The Implicit Bias of Depth: How Incremental Learning Drives Generalization
1 INTRODUCTION. Neural networks have led to a breakthrough in modern machine learning, allowing us to efficiently learn highly expressive models that still generalize to unseen data. The theoretical reasons for this success are still unclear, as the generalization capabilities of neural networks defy the classic statistical learning theory bounds. Since these bounds, which depend solely on the capacity of the learned model, are unable to account for the success of neural networks, we must examine additional properties of the learning process. One such property is the optimization algorithm: while neural networks can express a multitude of possible ERM solutions for a given training set, gradient-based methods with the right initialization may be implicitly biased towards certain solutions which generalize. One possible way such an implicit bias may present itself is if gradient-based methods were to search the hypothesis space for possible solutions of gradually increasing complexity. This would suggest that while the hypothesis space itself is extremely complex, our search strategy favors the simplest solutions and thus generalizes. One of the leading results along these lines is by Saxe et al. (2013), who derive an analytical solution for the gradient flow dynamics of deep linear networks and show that for such models the singular values converge at different rates, with larger values converging first. In the limit of infinitesimal initialization of the deep linear network, Gidel et al. (2019) show these dynamics exhibit a behavior of "incremental learning": the singular values of the model are learned separately, one at a time. Our work generalizes these results to small but finite initialization scales. Incremental learning dynamics have also been explored in gradient descent applied to matrix completion and sensing with a factorized parameterization (Gunasekar et al. (2017), Arora et al. (2018), Woodworth et al. (2019)). When initialized with small Gaussian weights and trained with a small learning rate, such a model is able to successfully recover the low-rank matrix which labeled the data, even if the problem is highly over-determined and no additional regularization is applied. In their proof of low-rank recovery for such models, Li et al. (2017) show that the model remains low-rank throughout the optimization process, leading to the successful generalization. Additionally, Arora et al. (2019) explore the dynamics of such models, showing that the singular values are learned at different rates and that deeper models exhibit stronger incremental learning dynamics. Our work deals with a more simplified setting, allowing us to determine explicitly under which conditions depth leads to this dynamical phenomenon. Finally, the learning dynamics of nonlinear models have been studied as well. Combes et al. (2018) and Williams et al. (2019) study the gradient flow dynamics of shallow ReLU networks under restrictive distributional assumptions, Ronen et al. (2019) show that shallow networks learn functions of gradually increasing frequencies, and Nakkiran et al. (2019) show how deep ReLU networks correlate with linear classifiers in the early stages of training. These findings, along with others, suggest that the generalization ability of deep networks is at least in part due to the incremental learning dynamics of gradient descent.
Following this line of work, we begin by explicitly defining the notion of incremental learning for a toy model which exhibits this sort of behavior. Analyzing the dynamics of the model for gradient flow and gradient descent, we characterize the effect of the model's depth and initialization scale on incremental learning, showing how deeper models allow for incremental learning at larger (realistic) initialization scales. Specifically, we show that a depth-2 model requires exponentially small initialization for incremental learning to occur, while deeper models only require the initialization to be polynomially small. Once incremental learning has been defined and characterized for the toy model, we generalize our results theoretically and empirically for larger linear and quadratic models. Examples of incremental learning in these models can be seen in figure 1, which we discuss further in section 4. 2 DYNAMICAL ANALYSIS OF A TOY MODEL. We begin by analyzing incremental learning for a simple model. This will allow us to gain a clear understanding of the phenomenon and the conditions for it, which we will later be able to apply to a variety of other models in which incremental learning is present. 2.1 PRELIMINARIES. Our simple linear model will be similar to the toy model analyzed by Woodworth et al. (2019). Our input space will be $\mathcal{X} = \mathbb{R}^d$ and the hypothesis space will be linear models with non-negative weights, such that $f_\sigma(x) = \langle \sigma, x \rangle$, $\sigma \in \mathbb{R}^d_{\geq 0}$ (1). We introduce depth into our model by parameterizing $\sigma$ using $w \in \mathbb{R}^d_{\geq 0}$ in the following way: $\forall i: \sigma_i = w_i^N$, where $N$ represents the depth of the model. Since we restrict the model to having non-negative weights, this parameterization doesn't change the expressiveness, but it does radically change its optimization dynamics. Assuming the data is labeled by some $\sigma^* \in \mathbb{R}^d_{\geq 0}$, we will study the dynamics of this model for general $N$ under a depth-normalized squared loss over Gaussian inputs (this normalization is used for mathematical convenience, so that solutions of different depths exhibit similar time scales in their dynamics; equivalently, one can derive the solutions for the regular squared loss and then rescale time in the dynamical analysis), which will allow us to derive our analytical solution: $\ell_N(w) = \frac{1}{2N^2}\mathbb{E}_x\big[(\langle \sigma^*, x \rangle - \langle w^N, x \rangle)^2\big] = \frac{1}{2N^2}\|w^N - \sigma^*\|^2$ (2). We will assume that our model is initialized uniformly with a tunable scaling factor, such that $\forall i: w_i(0) = \sqrt[N]{\sigma_0}$ (3). 2.2 GRADIENT FLOW ANALYTICAL SOLUTIONS. Analyzing our toy model using gradient flow allows us to obtain an analytical solution for the dynamics of $\sigma(t)$, along with the dynamics of the loss function, for a general $N$. For brevity, the following theorem refers only to $N = 1, 2$ and $N \to \infty$; the solutions for $3 \leq N < \infty$ are similar in structure to $N \to \infty$, but more complicated. We also assume $\sigma^*_i > 0$ for brevity; the solutions for $\sigma^*_i = 0$ can be derived as well. Note that this result is a special-case adaptation of the one presented in Saxe et al. (2013) for deep linear networks. Theorem 1. Minimizing the toy linear model described in (1) with gradient flow over the depth-normalized squared loss (2), with Gaussian inputs and weights initialized as in (3), and assuming $\sigma^*_i > 0$, leads to the following analytical solutions for different values of $N$: for $N = 1$, $\sigma_i(t) = \sigma^*_i + (\sigma_0 - \sigma^*_i)e^{-t}$; for $N = 2$, $\sigma_i(t) = \frac{\sigma_0 \sigma^*_i e^{\sigma^*_i t}}{\sigma_0(e^{\sigma^*_i t} - 1) + \sigma^*_i}$; for $N \to \infty$, $t = \frac{1}{(\sigma^*_i)^2}\log\left(\frac{\sigma_i(t)(\sigma_0 - \sigma^*_i)}{\sigma_0(\sigma_i(t) - \sigma^*_i)}\right) - \frac{1}{\sigma^*_i}\left(\frac{1}{\sigma_i(t)} - \frac{1}{\sigma_0}\right)$. Proof.
The gradient flow equations for our model are the following: $\dot{w}_i = -\nabla_{w_i}\ell = \frac{1}{N}w_i^{N-1}(\sigma^*_i - w_i^N)$. Given the dynamics of the $w$ parameters, we may use the chain rule to derive the dynamics of the induced model $\sigma$: $\dot{\sigma}_i = \frac{d\sigma_i}{dw_i}\dot{w}_i = w_i^{2N-2}(\sigma^*_i - w_i^N) = \sigma_i^{2-\frac{2}{N}}(\sigma^*_i - \sigma_i)$ (4). This differential equation is solvable for all $N$, leading to the solutions in the theorem. Taking $N \to \infty$ in (4) leads to $\dot{\sigma}_i = \sigma_i^2(\sigma^*_i - \sigma_i)$, which is also solvable. Analyzing these solutions, we see how, even in such a simple model, depth causes different factors of the model to be learned at different rates. Specifically, values corresponding to larger optimal values converge faster, suggesting a form of incremental learning. This is most clear for $N = 2$, where the solution isn't implicit, but it is also the case for $N \geq 3$, as we will see in the next subsection. These dynamics are depicted in figure 2, where we see the dynamics of the different values of $\sigma(t)$ as learning progresses. When $N = 1$, all values are learned at the same rate regardless of the initialization, while the deeper models are clearly biased towards learning the larger singular values first, especially at small initialization scales. Our model has only one optimal solution due to the population loss, but it is clear how this sort of dynamic can induce sparse solutions: if the model is able to fit the data after a small number of learning phases, then the obtained result will be sparse. Alternatively, if $N = 1$, we know that the dynamics will lead to the minimal $\ell_2$ norm solution, which is dense. We explore the sparsity-inducing bias of our toy model by comparing it empirically (the code for reproducing all of our experiments can be found at https://github.com/dsgissin/Incremental-Learning) to a greedy sparse approximation algorithm in appendix D, and give our theoretical results in the next section. 3 INCREMENTAL LEARNING. Equipped with analytical solutions for the dynamics of our model for every depth, we turn to study how the depth and initialization affect incremental learning. While Gidel et al. (2019) focus on incremental learning in depth-2 models in the limit $\sigma_0 \to 0$, we will study the phenomenon for a general depth and for $\sigma_0 > 0$. First, we define the notion of incremental learning. Since all values of $\sigma$ are learned in parallel, we can't expect one value to converge before the other moves at all (which happens for infinitesimal initialization, as shown by Gidel et al. (2019)). We will need a more relaxed definition for incremental learning at finite initialization scales. Definition 1. Given two values $\sigma_i, \sigma_j$ such that $\sigma^*_i > \sigma^*_j > 0$, both initialized as $\sigma_i(0) = \sigma_j(0) = \sigma_0 < \sigma^*_j$, and given two scalars $s \in (0, \frac{1}{4})$ and $f \in (\frac{3}{4}, 1)$, we call the learning of the values $(s, f)$-incremental if there exists a $t$ for which $\sigma_j(t) \leq s\sigma^*_j < f\sigma^*_i \leq \sigma_i(t)$. In words, two values have distinct learning phases if the first almost converges ($f \approx 1$) before the second changes by much ($s \ll 1$). Note that for any $N$, $\sigma(t)$ is monotonically increasing, and so once $\sigma_j(t) = s\sigma^*_j$, it will not decrease to allow further incremental learning.
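Since equation (4) is a separable one-dimensional ODE per coordinate, the dynamics above are easy to reproduce numerically. The following sketch (our own illustration, not the authors' released code; the step size, initialization scale, and target values are arbitrary choices) integrates (4) with forward Euler and records when each coordinate reaches 90% of its target:

```python
import numpy as np

def simulate(sigma_star, N, sigma0=1e-3, dt=1e-3, steps=300_000):
    """Forward-Euler integration of d(sigma)/dt = sigma^(2-2/N) (sigma* - sigma)."""
    sigma = np.full_like(sigma_star, sigma0, dtype=float)
    t90 = np.full(sigma_star.shape, -1, dtype=int)   # first step reaching 90%
    for step in range(steps):
        sigma += dt * sigma ** (2 - 2.0 / N) * (sigma_star - sigma)
        newly = (t90 < 0) & (sigma >= 0.9 * sigma_star)
        t90[newly] = step
    return t90

sigma_star = np.array([1.0, 0.5, 0.25])
for N in (1, 2, 3):
    print(f"N={N}: steps until each coordinate reaches 90% of its target:",
          simulate(sigma_star, N))
```

For N = 1 the three convergence times come out comparable, while for N = 2 and especially N = 3 the coordinates with larger targets converge much earlier, matching the incremental behavior described around figure 2.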
Given this definition of incremental learning, we turn to study the conditions that facilitate incremental learning in our toy model. Our main result is a dynamical depth separation result, showing that incremental learning depends on the ratio $\frac{\sigma^*_i}{\sigma^*_j}$ in different ways for different values of $N$. The largest difference in dependence occurs between $N = 2$ and $N = 3$, where the dependence changes from exponential to polynomial. Theorem 2. Given two values $\sigma_i, \sigma_j$ of a toy linear model as in (1), where $\frac{\sigma^*_i}{\sigma^*_j} = r > 1$ and the model is initialized as in (3), and given two scalars $s \in (0, \frac{1}{4})$ and $f \in (\frac{3}{4}, 1)$, the largest initialization value for which the learning phases of the values are $(s, f)$-incremental, denoted $\sigma_0^{th}$, is bounded in the following way: for $N = 2$, $s\sigma^*_j\left(\frac{s}{rf}\right)^{\frac{1}{(1-f)(r-1)}} \leq \sigma_0^{th} \leq s\sigma^*_j\left(\frac{s}{rf}\right)^{\frac{1-s}{r-1}}$; for $N \geq 3$, $s\sigma^*_j\left(\frac{(1-f)(r-1)}{1+(1-f)(r-1)}\right)^{\frac{N}{N-2}} \leq \sigma_0^{th} \leq s\sigma^*_j\left(\frac{r-1}{r-s}\right)^{\frac{N}{N-2}}$. Proof sketch (the full proof is given in appendix A). Rewriting the separable differential equation in (4) to calculate the time until $\sigma(t) = \alpha\sigma^*$, we get $t_\alpha(\sigma) = \int_{\sigma_0}^{\alpha\sigma^*} \frac{d\sigma}{\sigma^{2-\frac{2}{N}}(\sigma^* - \sigma)}$. The condition for incremental learning is then the requirement that $t_f(\sigma_i) \leq t_s(\sigma_j)$, resulting in $\int_{\sigma_0}^{f\sigma^*_i} \frac{d\sigma}{\sigma^{2-\frac{2}{N}}(\sigma^*_i - \sigma)} \leq \int_{\sigma_0}^{s\sigma^*_j} \frac{d\sigma}{\sigma^{2-\frac{2}{N}}(\sigma^*_j - \sigma)}$. We then relax/restrict the above condition to get a necessary/sufficient condition on $\sigma_0$, leading to a lower and an upper bound on $\sigma_0^{th}$. Note that the value determining the condition for incremental learning is $\frac{\sigma^*_i}{\sigma^*_j}$: if two values are of the same order of magnitude, their ratio will be close to 1 and we will need a small initialization to obtain incremental learning. The dependence on the ratio changes with depth, and is exponential for $N = 2$. This means that incremental learning, while possible for shallow models, is difficult to see in practice. This result explains why changing the initialization scale in figure 2 changes the dynamics of the $N \geq 3$ models, while not noticeably changing the dynamics for $N = 2$. The next theorem extends part of our analysis to gradient descent, a more realistic setting than the infinitesimal learning rate of gradient flow. Theorem 3. Given two values $\sigma_i, \sigma_j$ of a depth-2 toy linear model as in (1), such that $\frac{\sigma^*_i}{\sigma^*_j} = r > 1$ and the model is initialized as in (3), given two scalars $s \in (0, \frac{1}{4})$ and $f \in (\frac{3}{4}, 1)$, assuming $\sigma^*_j \geq 2\sigma_0$, and assuming we optimize with gradient descent with a learning rate $\eta \leq \frac{c}{\sigma^*_1}$ for $c < 2(\sqrt{2} - 1)$, where $\sigma^*_1$ is the largest value of $\sigma^*$, the largest initialization value for which the learning phases of the values are $(s, f)$-incremental, denoted $\sigma_0^{th}$, is lower and upper bounded in the following way: $\frac{1}{2}\frac{s}{1-s}\sigma^*_j\left(\frac{1-f}{2rf}\cdot\frac{s}{1-s}\right)^{\frac{1}{A-1}} \leq \sigma_0^{th} \leq \frac{s}{1-s}\sigma^*_j\left(\frac{1-f}{f}\cdot\frac{s}{1-s}\right)^{\frac{1}{B-1}}$
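To get a quantitative feel for the depth separation in Theorem 2, the short snippet below (our illustration; s, f, and the ratios r are arbitrary values within the theorem's admissible ranges) evaluates the reconstructed upper bounds on $\sigma_0^{th}$ for N = 2 and N = 3:

```python
# Illustrative values only: s, f, sigma_j and the ratios r are our own
# choices within the ranges allowed by Theorem 2.
s, f, sigma_j = 0.1, 0.9, 1.0
for r in (1.1, 1.5, 2.0):
    ub_n2 = s * sigma_j * (s / (r * f)) ** ((1 - s) / (r - 1))
    ub_n3 = s * sigma_j * ((r - 1) / (r - s)) ** (3 / (3 - 2))
    print(f"r={r}: upper bound on sigma_0^th for N=2: {ub_n2:.2e}, "
          f"for N=3: {ub_n3:.2e}")
```

As r approaches 1, the admissible initialization for N = 2 collapses exponentially fast, while for N = 3 it shrinks only polynomially, which is the practical content of the depth separation.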
This paper deals with the theoretical study of gradient dynamics in deep neural networks. More precisely, it defines a notion of incremental learning for a particular learning dynamics and studies how the depth of the network influences it. The authors then show two cases where it applies, matrix sensing and quadratic neural networks, and provide intuitions on how it could also apply to linear convolutional networks.
SP:a02b08206bf5b7026e1c35f23d2810cffa529d1f
This paper studies the phenomenon of incremental learning in several deep models. It starts by analyzing the optimization dynamics of a toy model and showing that it follows incremental learning, a notion defined clearly in the paper. In particular, it shows that depth affects the strength of incremental learning, in the sense that when the depth of the model is increased (especially when going from N=2 to N=3), the maximal initialization value with which incremental learning can occur increases. In this sense, deeper models experience incremental learning more easily. The paper then moves on to other "deep linear" models, including matrix sensing, one-hidden-layer quadratic neural networks, and diagonal/convolutional linear neural networks, and derives ODEs for the evolution of the singular values in the learned models, which is argued to also lead to incremental learning.
Deep amortized clustering
1 INTRODUCTION. Clustering is a fundamental task in unsupervised machine learning that groups similar data points into multiple clusters. Aside from its usefulness in many downstream tasks, clustering is an important tool for visualising and understanding the underlying structures of datasets, as well as a model for categorisation in cognitive science. Most clustering algorithms have two basic components: how to define a cluster and how to assign data points to those clusters. The former is usually defined using metrics that measure distances between data points, or using generative models describing the shapes of clusters. The latter, how to assign data points to the clusters, is then typically optimized iteratively w.r.t. objective functions derived from the cluster definitions. Note that cluster definitions are user defined, and are reflections of the user's prior knowledge about the clustering process, with different definitions leading to different clusterings. However, cluster definitions used in practice are often quite simple; for example, clusters in k-means are defined in terms of $\ell_2$ distance to centroids, while Gaussians are a commonly used generative model for clusters in mixture models. Recently, advances in deep learning have facilitated the approximation of complex functions in a black-box fashion. One application of particular relevance to the clustering problem in this paper is amortized inference (Gershman & Goodman, 2014; Stuhlmüller et al., 2013), where neural networks are trained to predict the states of latent variables given observations in a generative model or probabilistic programme. In the context of learning set-input neural networks (Zaheer et al., 2017), Lee et al. (2019) showed that it is possible to amortize the iterative clustering process for a Mixture of Gaussians (MOG), while Pakman et al. (2019) demonstrated that it is possible to train a neural network to sequentially assign data points to clusters. Both approaches can be interpreted as using neural networks for amortized inference of cluster assignments and parameters given a dataset. Note that once neural networks are used for amortized clustering, we can take advantage of their flexibility to work with more complex ways of defining clusters. Further, the amortization networks can be trained using generated datasets where the ground-truth clusterings are known. This can be interpreted as implicitly learning the definition of clusters underlying the training datasets, such that amortized inference (approximately) produces the appropriate clusterings. In a sense, this shares a similar philosophy with Neural Processes (Garnelo et al., 2018b; a), which meta-learn from multiple datasets to learn a prior over functions. In this paper, we build on these prior works and propose Deep Amortized Clustering (DAC). As in prior works, the amortization networks in DAC are trained using generated datasets where the ground-truth clusterings are known. Like Lee et al. (2019), DAC uses a Set Transformer, but differs in that it generates clusters sequentially, which enables it to produce a varying number of clusters depending on the complexity of the dataset (Fig. 1). Our approach also extends Lee et al. (2019) from MOG to problems with more complex cluster definitions, which are arguably harder to specify by hand and easier to meta-learn from data. Our work also differs from Pakman et al.
(2019) in that our network processes data points in parallel, while Pakman et al. (2019) process them sequentially, which is arguably less scalable and limits applicability to smaller datasets. This paper is organized as follows. We begin by describing in Section 2 the permutation-invariant Set Transformer modules that we use throughout the paper. In Section 3, we describe how we implement our core idea of identifying one cluster at a time, and describe our framework for clustering, DAC. There are several challenges in solving DAC on complex datasets, and we structured our paper roughly in order of difficulty. We apply DAC to clustering synthetic data (Section 5) and image data (Section 6); some settings required additional components, which we describe when needed. 2 A PRIMER ON SET TRANSFORMER AND AMORTIZED CLUSTERING. In this section, we briefly review the set-input neural network architectures to be used in the paper, and describe how Lee et al. (2019) used them to solve amortized clustering for MOG. 2.1 SET TRANSFORMER. The Set Transformer (ST) is a permutation-invariant set-input neural network that uses self-attention operations as building blocks. It utilizes multi-head attention (Vaswani et al., 2017) both for encoding the elements of a set and for decoding the encoded features into outputs. The fundamental building block of an ST is the Multihead Attention Block (MAB), which takes two sets $X = [x_1, \ldots, x_n]^\top$ and $Y = [y_1, \ldots, y_m]^\top$ and outputs a set of the same size as $X$. Throughout this article, we represent sets as matrices where each row corresponds to an element. An MAB is defined as $\mathrm{MAB}(X, Y) = H + \mathrm{rFF}(H)$, where $H = X + \mathrm{rFF}(\mathrm{MultiheadAtt}(X, Y))$ (1) and $\mathrm{rFF}(\cdot)$ is a feed-forward layer applied row-wise (i.e., to each element). $\mathrm{MAB}(X, Y)$ computes the pairwise interactions between the elements in $X$ and $Y$ with sparse weights obtained from attention. A Self-Attention Block (SAB) is simply an MAB applied to the set itself: $\mathrm{SAB}(X) := \mathrm{MAB}(X, X)$. We can model higher-order interactions among the items in a set by stacking multiple SABs; we denote such a stack of $L$ SABs applied to a set $X$ as $\mathrm{SAB}_L(X)$. To summarize a set into a fixed-length representation, the ST uses an operation called Pooling by Multihead Attention (PMA). A PMA is defined as $\mathrm{PMA}_k(X) = \mathrm{MAB}(S, X)$, where $S = [s_1, \ldots, s_k]^\top$ are trainable parameters. Note that the time complexity of a SAB is $O(n^2)$ because of the pairwise computation. To reduce this, Lee et al. (2019) proposed the Induced Self-Attention Block (ISAB), defined as $\mathrm{ISAB}(X) = \mathrm{MAB}(X, \mathrm{MAB}(I, X))$ (2), where $I = [i_1, \ldots, i_m]^\top$ are trainable inducing points. An ISAB indirectly compares the elements of $X$ through the inducing points, reducing the time complexity to $O(nm)$. Similarly to the SAB, we write $\mathrm{ISAB}_L(X)$ to denote a stack of $L$ ISABs. 2.2 AMORTIZED CLUSTERING WITH SET TRANSFORMER. Lee et al. (2019) presented an example using the ST for amortized inference in a MOG. A dataset $X$ is clustered by maximizing the likelihood of a $k$-component MOG, and an ST is used to output the parameters as $H_X = \mathrm{ISAB}_L(X)$, $H_\theta = \mathrm{PMA}_k(H_X)$, $(\mathrm{logit}\,\pi_j, \theta_j)_{j=1}^k = \mathrm{rFF}(\mathrm{SAB}_{L'}(H_\theta))$ (3), where $\pi_j$ is the mixing coefficient and $\theta_j = (\mu_j, \sigma_j^2)$ are the mean and variance of the $j$-th Gaussian component.
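A compact PyTorch sketch of these blocks may help make the definitions concrete. This is our illustration, not the authors' implementation: it follows equations (1) and (2) as written here, and the hidden size, head count, and number of inducing points are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class MAB(nn.Module):
    """Multihead Attention Block, following equation (1) as written above."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.att = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.rff1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.rff2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, X, Y):                  # X: (B, n, d), Y: (B, m, d)
        H = X + self.rff1(self.att(X, Y, Y)[0])
        return H + self.rff2(H)

class SAB(nn.Module):
    """SAB(X) = MAB(X, X)."""
    def __init__(self, dim: int):
        super().__init__()
        self.mab = MAB(dim)

    def forward(self, X):
        return self.mab(X, X)

class ISAB(nn.Module):
    """ISAB(X) = MAB(X, MAB(I, X)) with trainable inducing points I."""
    def __init__(self, dim: int, num_inducing: int = 16):
        super().__init__()
        self.I = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.mab1 = MAB(dim)
        self.mab2 = MAB(dim)

    def forward(self, X):
        H = self.mab1(self.I.expand(X.size(0), -1, -1), X)
        return self.mab2(X, H)

class PMA(nn.Module):
    """PMA_k(X) = MAB(S, X) with k trainable seed vectors S."""
    def __init__(self, dim: int, k: int = 1):
        super().__init__()
        self.S = nn.Parameter(torch.randn(1, k, dim))
        self.mab = MAB(dim)

    def forward(self, X):
        return self.mab(self.S.expand(X.size(0), -1, -1), X)

X = torch.randn(2, 100, 64)       # a batch of two sets with 100 elements each
print(ISAB(64)(X).shape)          # torch.Size([2, 100, 64])
print(PMA(64, k=3)(X).shape)      # torch.Size([2, 3, 64])
```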
The network is trained to maximize the expected log-likelihood over datasets: $\mathbb{E}_{p(X)}\left[\sum_{i=1}^{n_X} \log \sum_{j=1}^{k} \pi_j\,\mathcal{N}(x_i; \mu_j, \sigma_j^2)\right]$ (4), where $n_X$ is the number of elements in $X$. Clustering is then achieved by picking, for each data point, the component with the highest posterior probability under the MOG with the parameters output by the ST. 3 DEEP AMORTIZED CLUSTERING. An apparent limitation of the model described in Section 2.2 is that it assumes a fixed number of clusters generated from Gaussian distributions. In this section, we describe our method for solving DAC in the more realistic scenario of a variable number of clusters and arbitrarily complex cluster shapes. 3.1 FILTERING: INFERRING ONE CLUSTER AT A TIME. The objective (4) is not applicable when the number of clusters is neither fixed nor bounded. A remedy is to build a set-input neural network $f$ that identifies the clusters iteratively and learns "when to stop", similar to Adaptive Computation Time (ACT) for RNNs (Graves, 2016). One may think of several ways to implement this idea (we present an illustrative example that simply augments an ST with ACT in Section 5.1). Here, we propose to train $f$ to solve a simpler task: instead of clustering the entire dataset, it focuses on finding one cluster at a time. The task, which we call filtering, is defined as a forward pass through $f$ that takes a set $X$ and outputs a parameter $\theta$ describing a cluster, along with a membership probability vector $m \in [0, 1]^{n_X}$, where $n_X$ is the number of elements in $X$. The meaning of the parameter $\theta$ depends on the specific problem. For example, $\theta$ for a MOG is $(\mu, \sigma^2)$, the parameters of a Gaussian distribution. $m_i$ represents the probability of $x_i$ belonging to the cluster described by $\theta$. To filter out the data points that belong to the current cluster, we use 0.5 as the threshold to discretize $m$ into a boolean mask vector. The resulting smaller dataset is then fed back into the neural network to produce the next cluster, and so on. Minimum Loss Filtering. Now we describe how to train the filtering network $f$. Assume $X$ has $k_X$ true clusters, and let $y \in [1, \ldots, k_X]^{n_X}$ be a cluster label vector corresponding to the true clustering of $X$. Then we define the loss function for one filtering iteration producing one $\theta$ and one $m$ as $\mathcal{L}(X, y, m, \theta) = \min_{j \in \{1, \ldots, k_X\}}\left(\frac{1}{n_X}\sum_{i=1}^{n_X}\mathrm{BCE}(m_i, \mathbb{1}_{\{y_i=j\}}) - \frac{1}{n_{X|j}}\sum_{i | y_i=j}\log p(x_i; \theta)\right)$ (5), where $n_{X|j} := \sum_{i=1}^{n_X}\mathbb{1}_{\{y_i=j\}}$, $\mathrm{BCE}(\cdot, \cdot)$ is the binary cross-entropy loss, and $p(x; \theta)$ is the density of $x$ under the cluster parameterised by $\theta$. This loss encourages $\theta$ to describe the data distribution of a cluster, and $m$ to specify which data points belong to this particular cluster. The rationale for taking the minimum across the clusters is as follows. One way to train $f$ to pick a cluster at each iteration is to impose an ordering on the clusters (e.g., in order of appearance in some arbitrary indexing of $X$, or in order of distance to the origin), and to train $f$ to follow this order. However, this may introduce unnecessary inductive biases that deteriorate learning. Instead, we let $f$ find the easiest cluster to identify, thus promoting $f$ to learn its own search strategy. Note that there are $k_X!$ equally valid ways to label the clusters in $X$. This combinatorial explosion makes learning with standard supervised learning objectives for $y$ tricky, but our loss (5) is inherently free from this problem while being invariant to the labelling of the clusters.
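The minimum-loss objective (5) translates almost directly into code. Below is a sketch (ours, not the authors' code); `log_prob` is a placeholder for the cluster density $\log p(x; \theta)$, and the diagonal-Gaussian example omits the additive normalization constant.

```python
import torch
import torch.nn.functional as F

def min_loss_filtering(x, y, m, theta, log_prob):
    """Equation (5): minimum over candidate clusters j of the mean
    BCE(m, 1{y=j}) minus the mean log-likelihood of cluster j's points
    under the predicted parameters theta."""
    losses = []
    for j in torch.unique(y):
        target = (y == j).float()
        bce = F.binary_cross_entropy(m, target)    # (1/n_X) sum_i BCE(m_i, 1{y_i=j})
        nll = -log_prob(x[y == j], theta).mean()   # (1/n_{X|j}) sum over cluster j
        losses.append(bce + nll)
    return torch.stack(losses).min()

# Example cluster density: a diagonal Gaussian with theta = (mu, log_var),
# up to the additive -0.5*d*log(2*pi) constant.
def gaussian_log_prob(x, theta):
    mu, log_var = theta
    return (-0.5 * (log_var + (x - mu) ** 2 / log_var.exp())).sum(-1)
```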
We use the following architecture for the filtering network $f$, built from the modules of Section 2:
$$\text{encode data: } H_X = \mathrm{ISAB}_L(X), \quad \text{decode cluster: } H_\theta = \mathrm{PMA}_1(H_X),\ \theta = \mathrm{rFF}(H_\theta), \quad \text{decode mask: } H_m = \mathrm{ISAB}_{L'}(\mathrm{MAB}(H_X, H_\theta)),\ m = \mathrm{sigmoid}(\mathrm{rFF}(H_m)). \qquad (6)$$
The network first encodes $X$ into $H_X$ and extracts cluster parameters $\theta$. Then $\theta$, together with the encoded data $H_X$, is further processed to produce the membership probabilities $m$. We call the filtering network with architecture (6), trained with objective (5), Minimum Loss Filtering (MLF).

Anchored Filtering. An alternative strategy that we found beneficial for harder datasets is to use anchor points. Given a dataset $X$ and labels $y$ constructed from the true clustering, we sample an anchor point with index $a \in \{1, \ldots, n_X\}$ uniformly from $X$. We parameterize a set-input network $f$ to take both $X$ and $a$ as input, and to output the cluster that contains the anchor point $x_a$. The corresponding loss function is
$$\mathcal{L}(X, y, a, m, \theta) = \frac{1}{n_X} \sum_{i=1}^{n_X} \mathrm{BCE}(m_i, \mathbb{1}\{y_i = j_a\}) - \frac{1}{n_{X|j_a}} \sum_{i \,|\, y_i = j_a} \log p(x_i; \theta), \qquad (7)$$
where $j_a$ denotes the true cluster index containing $a$. The architecture to be trained with this loss can be implemented as
$$\text{encode data: } H_X = \mathrm{ISAB}_L(X),\ H_{X|a} = \mathrm{MAB}(H_X, h_a), \quad \text{decode cluster: } H_\theta = \mathrm{PMA}_1(H_{X|a}),\ \theta = \mathrm{rFF}(H_\theta), \quad \text{decode mask: } H_m = \mathrm{ISAB}_{L'}(\mathrm{MAB}(H_{X|a}, H_\theta)),\ m = \mathrm{sigmoid}(\mathrm{rFF}(H_m)), \qquad (8)$$
where $h_a$ is the row vector of $H_X$ corresponding to the index $a$. We train (8) by randomly sampling $a$ at each step, thus promoting $f$ to find clusters by comparing each data point to the random anchor point. Note that, given anchor points, the loss is also free from the label-order ambiguity. We call this filtering strategy Anchored Filtering (AF).
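At inference time, the filtering network is applied repeatedly, peeling off one cluster per forward pass; a schematic loop is sketched below. The stopping rule used here (halt when no point crosses the 0.5 threshold or a cap on the number of clusters is reached) is an assumption for illustration, not necessarily the authors' exact procedure.

```python
import torch

def cluster_by_filtering(f, X, threshold=0.5, max_clusters=50):
    """Iteratively apply a trained filtering network f, assumed to return
    (theta, m) as in architecture (6), to assign cluster labels."""
    labels = torch.full((X.size(0),), -1, dtype=torch.long)  # -1: unassigned
    idx = torch.arange(X.size(0))            # indices of the points still left
    for c in range(max_clusters):
        if idx.numel() == 0:
            break
        theta, m = f(X[idx])                 # one forward pass finds one cluster
        mask = m > threshold                 # discretize membership probabilities
        if not mask.any():                   # nothing claimed: stop (assumed rule)
            break
        labels[idx[mask]] = c
        idx = idx[~mask]                     # feed the remainder back into f
    return labels
```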
The paper presents an amortized clustering method, called DAC, a neural architecture that allows efficient clustering of a dataset using a few forward passes. The proposed method builds on the idea behind set-input neural networks [1], which model the interactions between instances within a given dataset. Compared with the previous work [1], the main difference is that DAC does not need the number of clusters to be specified, as in the case of Bayesian nonparametrics, making it more flexible for clustering complex datasets. It is empirically shown that DAC can efficiently and accurately cluster new datasets coming from the same distribution, for both synthetic and image data.
Deep Amortized Clustering
1 INTRODUCTION

Clustering is a fundamental task in unsupervised machine learning that groups similar data points into multiple clusters. Aside from its usefulness in many downstream tasks, clustering is an important tool for visualising and understanding the underlying structures of datasets, as well as a model for categorisation in cognitive science. Most clustering algorithms have two basic components: how to define a cluster, and how to assign data points to those clusters. The former is usually specified using metrics that measure distances between data points, or using generative models describing the shapes of clusters. The latter, how to assign data points to the clusters, is then typically optimized iteratively w.r.t. objective functions derived from the cluster definitions. Note that cluster definitions are user-defined and reflect the user's prior knowledge about the clustering process, with different definitions leading to different clusterings. However, cluster definitions used in practice are often quite simple; for example, clusters in k-means are defined in terms of $\ell_2$ distance to centroids, while Gaussians are a commonly used generative model for clusters in mixture models.

Recently, advances in deep learning have facilitated the approximation of complex functions in a black-box fashion. One application of particular relevance to the clustering problem studied in this paper is amortized inference (Gershman & Goodman, 2014; Stuhlmüller et al., 2013), where neural networks are trained to predict the states of latent variables given observations in a generative model or probabilistic programme. In the context of learning set-input neural networks (Zaheer et al., 2017), Lee et al. (2019) showed that it is possible to amortize the iterative clustering process for a Mixture of Gaussians (MOG), while Pakman et al. (2019) demonstrated that it is possible to train a neural network to sequentially assign data points to clusters. Both approaches can be interpreted as using neural networks for amortized inference of cluster assignments and parameters given a dataset. Note that once neural networks are used for amortized clustering, we can take advantage of their flexibility to work with more complex ways of defining clusters. Further, the amortization networks can be trained using generated datasets for which the ground-truth clusterings are known. This can be interpreted as implicitly learning the definition of clusters underlying the training datasets, such that amortized inference (approximately) produces the appropriate clusterings. In a sense, this shares a similar philosophy with Neural Processes (Garnelo et al., 2018b;a), which meta-learn from multiple datasets to learn a prior over functions.

In this paper, we build on these prior works and propose Deep Amortized Clustering (DAC). As in prior works, the amortization networks in DAC are trained using generated datasets for which the ground-truth clusterings are known. Like Lee et al. (2019), DAC uses a Set Transformer, but differs in that it generates clusters sequentially, which enables it to produce a varying number of clusters depending on the complexity of the dataset (Fig. 1). Our approach also extends Lee et al. (2019) from MOG to problems with more complex cluster definitions, which are arguably harder to hand-specify and easier to meta-learn from data. Our work also differs from Pakman et al. (2019) in that our network processes data points in parallel rather than sequentially.
In this paper, the authors propose a new clustering method called deep amortized clustering (DAC). Inspired by Lee et al. (2019), the authors exploit a transformer to gather contextual information across data points and then predict the cluster label for each data point. The main difference from Lee et al. is that the proposed DAC sequentially estimates the cluster labels for the data points and is thus more flexible in estimating the number of clusters in the whole dataset. Based on the proposed DAC method, the authors evaluate the performance on both unsupervised and supervised clustering tasks. It turns out that the proposed method achieves better or comparable performance to previous work on various datasets while incurring less computational cost.
Visual Interpretability Alone Helps Adversarial Robustness
1 INTRODUCTION

It has become widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples, namely, perturbed inputs crafted with the intention of misleading a network's prediction (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2016a; Carlini & Wagner, 2017; Chen et al., 2018; Su et al., 2018). The vulnerability of CNNs has spurred extensive research on adversarial attack and defense. To design adversarial attacks, most work has focused on creating either imperceptible input perturbations (Goodfellow et al., 2015; Papernot et al., 2016a; Carlini & Wagner, 2017; Chen et al., 2018) or adversarial patches robust to the physical environment (Eykholt et al., 2018; Brown et al., 2017; Athalye et al., 2017). Many defense methods have also been developed to prevent CNNs from misclassification when facing adversarial attacks. Examples include defensive distillation (Papernot et al., 2016b), training with adversarial examples (Goodfellow et al., 2015), input gradient or curvature regularization (Ross & Doshi-Velez, 2018; Moosavi-Dezfooli et al., 2019), adversarial training via robust optimization (Madry et al., 2018), and TRADES, which trades adversarial robustness off against accuracy (Zhang et al., 2019). Besides studying adversarial effects on network prediction decisions, this work explores the connection between adversarial robustness and network interpretability, and provides novel insights on when and how interpretability helps robustness.

Having a prediction might not be enough for many real-world machine learning applications. It is also crucial to demystify why CNNs make certain decisions; thus, the problem of network interpretation arises. Various methods have been proposed to understand the mechanism of decision making by CNNs. One category of methods justifies a prediction decision by assigning importance values reflecting the influence of individual pixels or image sub-regions on the final classification. Examples include pixel-space sensitivity map methods (Simonyan et al., 2013; Zeiler & Fergus, 2014; Springenberg et al., 2014; Smilkov et al., 2017; Sundararajan et al., 2017) and class-discriminative localization methods (Zhou et al., 2016; Selvaraju et al., 2017; Chattopadhay et al., 2018; Petsiuk et al., 2018), where the former evaluates the sensitivity of a network's classification decision to pixel variations at the input, and the latter localizes which parts of an input image the network looked at when making a classification decision. Sensitivity map methods include vanilla gradient (Simonyan et al., 2013), deconvolution (Zeiler & Fergus, 2014), guided backpropagation (Springenberg et al., 2014), SmoothGrad (Smilkov et al., 2017), and integrated gradients (IG) (Sundararajan et al., 2017), to name a few. They highlight fine-grained details in the image but are not class-discriminative for visual explanation. By contrast, localization approaches like class activation map (CAM) (Zhou et al., 2016), GradCAM (Selvaraju et al., 2017), GradCAM++ (Chattopadhay et al., 2018) and RISE (Petsiuk et al., 2018) are highly class-discriminative, namely, they localize the image sub-regions responsible for a prediction class. We refer readers to Sec. 2 for some representative interpretation methods. Besides interpreting CNNs via feature importance maps, some methods zoom into the internal response of neural networks.
Examples include network dissection (Bau et al., 2017), which evaluates the alignment between individual hidden units and semantic concepts, and learning perceptually-aligned representations from robust training (Engstrom et al., 2019).

Very recently, some works (Xu et al., 2019b;a; Zhang et al., 2018; Subramanya et al., 2018; Ghorbani et al., 2019; Dombrowski et al., 2019; Chen et al., 2019) have begun to study adversarial robustness by exploring the spectrum between classification accuracy and network interpretability. It was shown in (Xu et al., 2019b;a) that an imperceptible adversarial perturbation that fools classifiers can lead to a significant change in a class-specific network interpretability map, e.g., CAM. Thus, it was argued that such an interpretability discrepancy can be used as a helpful metric to differentiate adversarial examples from benign inputs. However, the work (Zhang et al., 2018; Subramanya et al., 2018) showed that under certain conditions, generating an attack (which we call an interpretability sneaking attack, ISA) that fools the classifier as well as its coupled interpreter (in the sense of keeping the interpretability map highly similar to that of the benign input) is not significantly more difficult than generating adversarial inputs deceiving the classifier only. Besides investigating robustness in classification through the lens of interpretability, the works (Ghorbani et al., 2019; Dombrowski et al., 2019) studied the robustness of network interpretation maps themselves, showing that they can be significantly manipulated via imperceptible input perturbations while keeping the classifier's decision intact. We call this type of threat model an attack against interpretability (AAI). The existing work has reached no agreement on the relationship between robustness in interpretation and robustness in classification. Spurred by that, we attempt to explore this relationship from both attack and defense perspectives. The most relevant work to ours is (Chen et al., 2019), which robustified network interpretation with the aid of integrated gradients (IG), an axiomatic attribution map. It proposed robust attribution training, which was shown to be a principled generalization of previous formulations of robust classification and an effective defense against AAI.

In this paper, we first investigate when ISA is possible, and then relate our insights on ISA to robust classification and robust interpretability. Different from previous work, our paper makes the following contributions. 1. We provide an answer to the question of when adversarial examples can bypass interpretability discrepancy. We show that enforcing stealthiness of adversarial examples under network interpretation can be challenging, and that its difficulty depends on how one measures the interpretability discrepancy caused by input perturbations. 2. We propose an $\ell_1$-norm-based 2-class interpretability discrepancy measure and theoretically show that constraining it helps adversarial robustness. 3. We develop an interpretability-aware robust training method and empirically show that interpretability alone can be used to defend against adversarial attacks targeting both misclassification and misinterpretation. Compared to the IG-based robust attribution training (Chen et al., 2019), our approach is simpler to implement and provides better robustness even when facing a strong adversary.
2 PRELIMINARIES AND MOTIVATION: INTERPRETABILITY OF CNNS FOR JUSTIFYING A CLASSIFICATION DECISION

To explain what and why CNNs predict, we consider two types of network interpretation methods: a) class activation map (CAM) (Zhou et al., 2016; Selvaraju et al., 2017; Chattopadhay et al., 2018) and b) pixel sensitivity map (PSM) (Simonyan et al., 2013; Springenberg et al., 2014; Smilkov et al., 2017; Sundararajan et al., 2017; Yeh et al., 2019). Let $f(x) \in \mathbb{R}^C$ denote a CNN-based predictor that maps an input $x \in \mathbb{R}^d$ to a probability vector of $C$ classes. Here $f_c(x)$, the $c$th element of $f(x)$, denotes the classification score (given by the logit before the softmax) for class $c$. Let $L(x, c)$ denote an interpreter (CAM or PSM) that reflects which parts of $x$ contribute to the classifier's decision on $c$.

CAM-type methods. CAM (Zhou et al., 2016) produces a class-discriminative localization map for CNNs that perform global average pooling over convolutional feature maps prior to the softmax. Let the penultimate layer output $K$ feature maps, each of which is denoted by a vector representation $A_k \in \mathbb{R}^u$ for channel $k \in [K]$. Here $[K]$ represents the integer set $\{1, 2, \ldots, K\}$. The $i$th entry of the CAM $L_{\mathrm{CAM}}(x, c)$ is given by
$$[L_{\mathrm{CAM}}(x, c)]_i = \frac{1}{u} \sum_{k \in [K]} w_k^c A_{k,i}, \quad i \in [u], \qquad (1)$$
where $w_k^c$ is the linear classification weight that associates channel $k$ with class $c$, and $A_{k,i}$ denotes the $i$th element of $A_k$. The rationale behind (1) is that the classification score $f_c(x)$ equals the sum of the CAM entries (Zhou et al., 2016), $f_c(x) = \sum_{i=1}^{u} [L_{\mathrm{CAM}}(x, c)]_i$. For visual explanation, $L_{\mathrm{CAM}}(x, c)$ is often up-sampled to the input dimension $d$ using bi-linear interpolation. GradCAM (Selvaraju et al., 2017) generalizes CAM to CNNs without the 'global average pooling → softmax layer' architecture over the final convolutional maps. Specifically, the weight $w_k^c$ in (1) is given by the gradient of the classification score $f_c(x)$ with respect to (w.r.t.) the feature map $A_k$, i.e., $w_k^c = \frac{1}{u} \sum_{i=1}^{u} \frac{\partial f_c(x)}{\partial A_{k,i}}$. GradCAM++ (Chattopadhay et al., 2018), a generalized formulation of GradCAM, utilizes a more involved weighted average of the (positive) pixel-wise gradients, and provides a better localization map when an image contains multiple occurrences of the same class. In this work, we focus on CAM since it is computationally light and the models used in our experiments follow the 'global average pooling → softmax layer' architecture.

PSM-type methods. PSM uses gradient computations to assign importance scores to individual pixels toward explaining the classification decision for an input. Commonly used approaches include vanilla gradient (Simonyan et al., 2013), guided backpropagation (Springenberg et al., 2014), SmoothGrad (Smilkov et al., 2017), and integrated gradients (IG) (Sundararajan et al., 2017). In particular, IG satisfies the completeness attribution axiom that a PSM ought to obey. Specifically, it averages gradient saliency maps over interpolations between an input $x$ and a baseline image $a$:
$$[L_{\mathrm{IG}}(x, c)]_i = (x_i - a_i) \int_{\alpha=0}^{1} \frac{\partial f_c(a + \alpha(x - a))}{\partial x_i}\, d\alpha \approx (x_i - a_i) \sum_{s=1}^{m} \frac{\partial f_c\!\left(a + \frac{s}{m}(x - a)\right)}{\partial x_i}\, \frac{1}{m}, \quad i \in [d], \qquad (2)$$
where $m$ is the number of steps in the Riemann approximation of the integral.
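For concreteness, the two maps can be computed along the following lines. This NumPy sketch assumes the feature maps and classification weights are already extracted; grad_fn is an assumed helper standing in for a framework's autograd, not part of any stated API.

```python
import numpy as np

def cam(feature_maps, w, c):
    """Eq. (1): class activation map. feature_maps holds A with shape (K, u);
    w has shape (C, K) and holds the linear classification weights w_k^c."""
    u = feature_maps.shape[1]
    return (w[c] @ feature_maps) / u  # [L_CAM(x, c)]_i = (1/u) sum_k w_k^c A_{k,i}

def integrated_gradients(grad_fn, x, a, c, m=50):
    """Eq. (2): m-step Riemann approximation of IG from baseline a to input x.
    grad_fn(z, c) is an assumed helper returning the gradient of f_c at z."""
    total = np.zeros_like(x, dtype=float)
    for s in range(1, m + 1):
        total += grad_fn(a + (s / m) * (x - a), c)  # gradient at one path point
    return (x - a) * total / m
```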
The completeness axiom (Sundararajan et al., 2017, Proposition 1) states that $\sum_{i=1}^{d} [L_{\mathrm{IG}}(x, c)]_i = f_c(x) - f_c(a)$, where the baseline image $a$ is often chosen such that $f_c(a) \approx 0$, e.g., the black image. Note that CAM also satisfies the completeness axiom. PSM is able to highlight fine-grained details in the image, but it is computationally intensive and not very class-discriminative compared to CAM (Selvaraju et al., 2017).

Interpretability discrepancy caused by adversarial perturbation. Let $x' = x + \delta$ represent an adversarial example w.r.t. $x$, where $\delta$ denotes an adversarial perturbation. By replacing the input image $x$ with $x'$, the CNN is fooled from the true label $t$ to the target (incorrect) label $t'$. It was recently shown in (Xu et al., 2019b;a) that the adversary can introduce an evident interpretability discrepancy w.r.t. both the true and the target label, in terms of $L(x, t)$ vs. $L(x', t)$, and $L(x, t')$ vs. $L(x', t')$. An illustrative example is provided in Figure 1. We see that the adversary suppresses the network interpretation w.r.t. the true label but promotes the interpretation w.r.t. the target label. We also observe that, compared to IG, CAM and GradCAM++ better localize class-specific discriminative regions. These results reveal two observations on how measuring the interpretation discrepancy affects classification robustness: a) the interpretability discrepancy may be used to detect adversarial examples, and b) the interpretability discrepancy itself may be vulnerable to adversarial perturbations. In what follows, we explore the spectrum between adversarial robustness and interpretability from a unified perspective, considering both the adversarial vulnerability of the interpretability discrepancy and its value in a defense.
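As a sketch of how such a discrepancy might be quantified, one plausible $\ell_1$-based 2-class measure compares the maps w.r.t. both the true and target labels before and after perturbation. This is our illustrative reading of the idea, not necessarily the exact measure proposed later in the paper.

```python
import numpy as np

def interp_discrepancy(L, x, x_adv, t, t_prime):
    """A plausible l1-based 2-class interpretability discrepancy (a sketch,
    not the paper's definitive measure). L(x, c) is assumed to return an
    interpretation map, e.g., a CAM, as a flat array."""
    d_true = np.abs(L(x, t) - L(x_adv, t)).sum()                 # change w.r.t. true label t
    d_target = np.abs(L(x, t_prime) - L(x_adv, t_prime)).sum()   # change w.r.t. target label t'
    return d_true + d_target
```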
In summary, this paper studies whether interpretation robustness (i.e., similar examples should have similar interpretations) can help enhance the robustness of the model, especially against adversarial attacks. The study direction itself is interesting and very useful for the interpretation and adversarial attack community. Moreover, some promising results can be observed in parts of the empirical study. However, this paper could be improved in several respects.
The present work considers adversarial attacks that also yield similar outputs for "interpretability methods", which are methods that output some vector corresponding to a given classification (usually the vector is, e.g., an image or a similar object). It also shows that by regularizing nearby inputs to have similar interpretations (instead of similar classifications), robustness similar to that of adversarial training can be achieved.
Carpe Diem, Seize the Samples Uncertain "at the Moment" for Adaptive Batch Selection
1 INTRODUCTION

Stochastic gradient descent (SGD) on randomly selected mini-batch samples is commonly used to train deep neural networks (DNNs). However, many recent studies have pointed out that the performance of DNNs is heavily dependent on how well the mini-batch samples are selected (Shrivastava et al., 2016; Chang et al., 2017; Katharopoulos & Fleuret, 2018). Earlier approaches employ a sample's difficulty to identify proper mini-batch samples, and they achieve a more accurate and robust network (Han et al., 2018) or expedite the training convergence of SGD (Loshchilov & Hutter, 2016). However, the two opposing difficulty-based strategies, i.e., preferring easy samples (Kumar et al., 2010; Han et al., 2018) versus hard samples (Loshchilov & Hutter, 2016; Shrivastava et al., 2016), work well in different situations. Thus, for practical reasons, to cover more diverse situations, recent approaches have begun to exploit a sample's uncertainty, which indicates the consistency of its previous predictions (Chang et al., 2017; Song et al., 2019).

An important question here is how to evaluate a sample's uncertainty based on its historical predictions during the training process. Intuitively, because a series of historical predictions can be seen as a series of data indexed in chronological order, the uncertainty can be measured based on two forms of handling time-series observations: (i) a growing window (Figure 1(a)) that consistently increases the size of a window to use all available observations, and (ii) a sliding window (Figure 1(b)) that maintains a window of a fixed size over the most recent observations by deleting outdated ones. While the state-of-the-art algorithm, Active Bias (Chang et al., 2017), adopts the growing window, we propose to use the sliding window in this paper.

In more detail, Active Bias recognizes uncertain samples based on the inconsistency of the predictions in the entire history of past SGD iterations. Then, it emphasizes such uncertain samples by choosing them with high probability for the next mini-batch. However, according to our experiments presented in Section 5.2, such uncertain samples slowed down the convergence speed of training, though they ultimately reduced the generalization error. This weakness is attributed to the inherent limitation of the growing window, where older observations can be too outdated (Torgo, 2011). In other words, the outdated predictions no longer represent a network's current behavior. As illustrated in Figure 2, when the label predictions of two samples were inconsistent for a long time, Active Bias invariably regards them as highly uncertain, although their recent label predictions become consistent as the network's training progresses. This characteristic evidently entails the risk of emphasizing uninformative samples that are too easy or too hard at the current moment, thereby slowing down the convergence speed of training. Therefore, we propose a simple but effective batch selection method, called Recency Bias, that takes advantage of the sliding window to evaluate the uncertainty of fresher observations. As opposed to Active Bias, Recency Bias excludes the outdated predictions by managing a sliding window of a fixed size and picks up the samples predicted inconsistently within the sliding window.
Thus, as shown in Figure 2, the two samples that are uninformative at the moment are no longer selected by Recency Bias, simply because their recent predictions are consistent. Consequently, since informative samples are effectively selected throughout the training process, this strategy not only accelerates the training speed but also leads to a more accurate network. To validate the superiority of Recency Bias, two popular convolutional neural networks (CNNs) were trained for two independent tasks: image classification and fine-tuning. We compared Recency Bias not only with random batch selection (the baseline) but also with two state-of-the-art batch selection strategies. Compared with the three batch selection strategies, Recency Bias provided a relative reduction of the test error by 1.81%–20.5% in a fixed wall-clock training time. At the same time, it significantly reduced the execution time needed to reach the same test error by 24.6%–59.3%.

2 RELATED WORK

Let $D = \{(x_i, y_i) \mid 1 \le i \le N\}$ be the entire training dataset composed of samples $x_i$ with true labels $y_i$, where $N$ is the total number of training samples. Then, a straightforward strategy to construct a mini-batch $\mathcal{M} = \{(x_i, y_i) \mid 1 \le i \le b\}$ is to select $b$ samples uniformly at random (i.e., $P(x_i|D) = 1/N$) from the training dataset $D$. Because not all samples have an equal impact on training, many research efforts have been devoted to developing advanced sampling schemes. Bengio et al. (2009) first took easy samples and then gradually increased the difficulty of samples using heuristic rules. Kumar et al. (2010) determined the easiness of samples using their prediction errors. Recently, Tsvetkov et al. (2016) used Bayesian optimization to learn an optimal curriculum for training dense, distributed word representations. Sachan & Xing (2016) emphasized that the right curriculum must introduce a small number of samples dissimilar to those previously seen. Fan et al. (2017) proposed a neural data filter based on reinforcement learning to select training samples adaptively. However, it is common in deep learning to emphasize hard samples because of the plethora of easy ones (Katharopoulos & Fleuret, 2018). Loshchilov & Hutter (2016) proposed a difficulty-based sampling scheme, called Online Batch, that uses the rank of the loss computed from previous epochs. Online Batch sorts the previously computed losses of samples in descending order and exponentially decays the sampling probability of a sample according to its rank $r$. Then, the $r$th-ranked sample $x_{(r)}$ is selected with a probability dropping by a factor of $\exp(\log(s_e)/N)$, where $s_e$ is the selection pressure parameter that controls the probability gap between the most and the least important samples. When normalized to sum to 1, the probability $P(x_{(r)}|D; s_e)$ is defined as
$$P(x_{(r)}|D; s_e) = \frac{1/\exp(\log(s_e)/N)^{r}}{\sum_{j=1}^{N} 1/\exp(\log(s_e)/N)^{j}}. \qquad (1)$$
It has been reported that Online Batch accelerates the convergence of training but deteriorates the generalization error because of overfitting to hard training samples (Loshchilov & Hutter, 2016).
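Eq. (1) is straightforward to implement; a NumPy sketch follows, with the selection pressure $s_e$ passed in as a free parameter.

```python
import numpy as np

def online_batch_probs(losses, se):
    """Eq. (1): rank-based sampling probabilities of Online Batch.

    losses: previously computed loss per sample (length N)
    se:     selection pressure; larger se concentrates mass on high-loss samples
    """
    N = len(losses)
    ranks = np.empty(N, dtype=int)
    ranks[np.argsort(-losses)] = np.arange(1, N + 1)  # rank 1 = largest loss
    weights = 1.0 / np.exp(np.log(se) / N) ** ranks   # exponential decay with rank
    return weights / weights.sum()                    # normalize to sum to 1
```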
Most closely related to our work, Chang et al. (2017) devised an uncertainty-based sampling scheme, called Active Bias, that chooses uncertain samples with high probability for the next batch. Active Bias maintains a history $H_i^{t-1}$ that stores all values of $h(y_i|x_i)$ before the current iteration $t$ (i.e., a growing window), where $h(y_i|x_i)$ is the softmax probability of a given sample $x_i$ for its true label $y_i$. Then, it measures the uncertainty of the sample $x_i$ by computing the variance over all $h(y_i|x_i)$ in $H_i^{t-1}$, and draws the next mini-batch samples based on the normalized probability $P(x_i|D, H_i^{t-1}; \epsilon)$ in Eq. (2):
$$P(x_i|D, H_i^{t-1}; \epsilon) = \frac{\widehat{\mathrm{std}}(H_i^{t-1}) + \epsilon}{\sum_{j=1}^{N} \left(\widehat{\mathrm{std}}(H_j^{t-1}) + \epsilon\right)}, \quad \widehat{\mathrm{std}}(H_i^{t-1}) = \sqrt{\mathrm{var}\big(h(y_i|x_i)\big) + \frac{\mathrm{var}\big(h(y_i|x_i)\big)^2}{|H_i^{t-1}|}}, \qquad (2)$$
where $\epsilon$ is a smoothness constant that prevents low-variance samples from never being selected again. As mentioned earlier in Section 1, Active Bias slows down the training process because the oldest part of the history $H_i^{t-1}$ no longer represents the current behavior of the network.

For the completeness of the survey, we include the recent studies on submodular batch selection. Joseph et al. (2019) and Wang et al. (2019) designed their own submodular objectives that cover diverse aspects, such as sample redundancy and sample representativeness, for more effective batch selection. Differently from their work, we explore the issue of truly uncertain samples from an orthogonal perspective. Our uncertainty measure can easily be injected into their submodular optimization framework as a measure of sample informativeness. In Section 5, we confirm that Recency Bias outperforms Online Batch and Active Bias, which are regarded as the two state-of-the-art adaptive batch selection methods for deep learning.

3 RECENCY BIAS COMPONENTS

3.1 CRITERION OF AN UNCERTAIN SAMPLE

The main challenge of Recency Bias is to identify the samples whose recent label predictions are highly inconsistent, which are neither too easy nor too hard at the moment. Thus, we adopt the predictive uncertainty (Song et al., 2019) in Definition 3.1, which uses the information entropy (Chandler, 1987) to measure the inconsistency of recent label predictions. Here, a sample with high predictive uncertainty is regarded as uncertain and is selected with high probability for the next mini-batch.

Definition 3.1. (Predictive Uncertainty) Let $\hat{y}_i^t = \Phi(x_i, \theta_t)$ be the predicted label of a sample $x_i$ at time $t$, and let $H_{x_i}(q) = \{\hat{y}^{t_1}, \hat{y}^{t_2}, \ldots, \hat{y}^{t_q}\}$ be the label history of the sample $x_i$ that stores the labels predicted at the previous $q$ times, where $\Phi$ is a neural network. The label history $H_{x_i}(q)$ corresponds to the sliding window of size $q$ used to compute the uncertainty of the sample $x_i$. Next, $p(y|x_i; q)$ is formulated such that it provides the probability of the label $y \in \{1, 2, \ldots, k\}$ being estimated as the label of the sample $x_i$ based on $H_{x_i}(q)$, as in Eq. (3), where $[\cdot]$ is the Iverson bracket (returning 1 if its argument is true, and 0 otherwise):
$$p(y|x_i; q) = \frac{\sum_{\hat{y} \in H_{x_i}(q)} [\hat{y} = y]}{|H_{x_i}(q)|}. \qquad (3)$$
Then, to quantify the uncertainty of the sample $x_i$, the predictive uncertainty $F(x_i; q)$ is defined using the empirical entropy as in Eq. (4). Because the entropy is bounded, we add the standardization term $\delta$ to normalize the value to $[0, 1]$; for $k$ classes, $\delta$ is the maximum entropy, attained when $p(j|x_i; q) = 1/k$ for all $j$:
$$F(x_i; q) = -\frac{1}{\delta} \sum_{j=1}^{k} p(j|x_i; q) \log p(j|x_i; q), \quad \delta = -\log(1/k). \qquad (4)$$
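A minimal sketch of Eqs. (3)–(4), computing the normalized empirical entropy of one sample's label history; the window contents are assumed to be integer class labels in {0, ..., k-1}.

```python
import numpy as np

def predictive_uncertainty(label_history, k):
    """Eqs. (3)-(4): normalized empirical entropy of the labels predicted for
    one sample within its sliding window H_x(q).

    label_history: array of the q most recent predicted labels in {0, ..., k-1}
    k:             number of classes
    """
    q = len(label_history)
    counts = np.bincount(label_history, minlength=k)
    p = counts / q                         # Eq. (3): empirical label distribution
    nz = p[p > 0]                          # 0 * log 0 is taken as 0
    entropy = -np.sum(nz * np.log(nz))
    return entropy / -np.log(1.0 / k)      # Eq. (4): divide by delta to land in [0, 1]
```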
3.2 SAMPLING PROBABILITY FOR MINI-BATCH CONSTRUCTION

To construct the next mini-batch, we assign sampling probabilities according to the predictive uncertainty in Definition 3.1. Motivated by Loshchilov & Hutter (2016), the sampling probability of a given sample $x_i$ is exponentially decayed with its predictive uncertainty $F(x_i; q)$. In detail, we adopt the quantization method (Chen & Wornell, 2001) and use the quantization index to decay the sampling probability. The index is obtained by the simple quantizer $Q$ in Eq. (5), where $\Delta$ is the quantization step size. Compared with the rank-based index (Loshchilov & Hutter, 2016), the quantization index is known to better reflect the differences in actual values (Widrow et al., 1996):
$$Q(F(x_i; q)) = \lceil (1 - F(x_i; q))/\Delta \rceil, \quad 0 \le F(x_i; q) \le 1. \qquad (5)$$
In Eq. (5), we set $\Delta$ to $1/N$ such that the index is bounded by $N$ (the total number of samples). Then, the sampling probability $P(x_i|D; s_e)$ is defined as in Eq. (6). The higher the predictive uncertainty, the smaller the quantization index; therefore, a higher sampling probability is assigned to uncertain samples:
$$P(x_i|D; s_e) = \frac{1/\exp(\log(s_e)/N)^{Q(F(x_i; q))}}{\sum_{j=1}^{N} 1/\exp(\log(s_e)/N)^{Q(F(x_j; q))}}. \qquad (6)$$
Meanwhile, it is known that using only part of the training data exacerbates the overfitting problem at a late stage of training (Loshchilov & Hutter, 2016; Zhou & Bilmes, 2018). Thus, to alleviate this problem, we include more training samples as training progresses by exponentially decaying the selection pressure $s_e$ as in Eq. (7). At each epoch $e$ from $e_0$ to $e_{end}$, the selection pressure $s_e$ exponentially decreases from $s_{e_0}$ to 1. Because this technique gradually reduces the gap in sampling probability between the most and the least uncertain samples, more diverse samples are selected for the next mini-batch at later epochs. When the selection pressure $s_e$ becomes 1, the mini-batch samples are randomly chosen from the entire dataset:
$$s_e = s_{e_0} \left( \exp\!\left( \frac{\log(1/s_{e_0})}{e_{end} - e_0} \right) \right)^{e - e_0}. \qquad (7)$$

4 RECENCY BIAS ALGORITHM

Algorithm 1: Recency Bias
INPUT: D: data, epochs, b: batch size, q: window size, s_{e_0}: initial selection pressure, γ: warm-up
OUTPUT: θ_t: model parameter
 1: t ← 1
 2: θ_t ← Initialize the model parameter
 3: for i = 1 to epochs do
 4:   /* Sampling probability derivation */
 5:   if i > γ then
 6:     s_e ← Decay_Selection_Pressure(s_{e_0}, i)   /* Decay s_e by Eq. (7) */
 7:     for m = 1 to N do   /* Update the index and the sampling probability in a batch */
 8:       q_dict[x_m] ← Q(F(x_m; q))   /* By Eq. (5) */
 9:     p_table ← Compute_Prob(q_dict, s_e)   /* By Eq. (6) */
10:   /* Network training */
11:   for j = 1 to N/b do   /* Mini-batch */
12:     if i ≤ γ then   /* Warm-up */
13:       {(x_1, y_1), ..., (x_b, y_b)} ← Randomly select next mini-batch samples
14:     else   /* Adaptive batch selection */
15:       {(x_1, y_1), ..., (x_b, y_b)} ← Select next mini-batch samples based on p_table
16:     losses, labels ← Inference_Step({(x_1, y_1), ..., (x_b, y_b)}, θ_t)   /* Forward */
17:     θ_{t+1} ← SGD_Step(losses, θ_t)   /* Backward */
18:     Update_Label_History(labels)   /* By Definition 3.1 */
19:     t ← t + 1
20: return θ_t

Algorithm 1 describes the overall procedure of Recency Bias. The algorithm requires a warm-up period of γ epochs because the quantization index for each sample is not yet available.
During the warm-up period, which should last at least $q$ epochs ($\gamma \ge q$) so as to obtain a label history of size $q$, randomly selected mini-batch samples are used for the network update (Lines 12–13). After the warm-up period, the algorithm decays the selection pressure $s_e$ and updates not only the quantization index but also the sampling probability, in a batch, at the beginning of each epoch (Lines 4–9). Subsequently, uncertain samples are selected for the next mini-batch according to the updated sampling probabilities (Lines 14–15), and the label history is updated along with the network update (Lines 16–19). Overall, the key technical novelty of Recency Bias is to incorporate the notion of a sliding window (Line 8), rather than a growing window, into adaptive batch selection, thereby improving both training speed and generalization error.

Time Complexity: The main additional cost of Recency Bias is the derivation of the sampling probability for each sample (Lines 4–9). Because only simple mathematical operations are needed per sample, its time complexity is linear in the number of samples (i.e., $O(N)$), which is negligible compared with that of the forward and backward steps of a complex network (Lines 16–17). Therefore, we contend that Recency Bias does not add to the complexity of the underlying optimization algorithm.
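For reference, a NumPy sketch of the probability computation in Lines 4–9 (Eqs. (5)–(7)) follows; clamping the quantization index at 1 for samples with $F = 1$ is our own guard, not stated in the paper.

```python
import numpy as np

def recency_bias_probs(uncertainties, se):
    """Eqs. (5)-(6): quantize the predictive uncertainties with step size 1/N
    and exponentially decay the sampling probability with the index."""
    N = len(uncertainties)
    index = np.ceil((1.0 - uncertainties) * N)       # Eq. (5) with Delta = 1/N
    index = np.maximum(index, 1)                     # guard for F = 1 (our assumption)
    weights = 1.0 / np.exp(np.log(se) / N) ** index  # smaller index => higher probability
    return weights / weights.sum()                   # Eq. (6): normalize

def decayed_pressure(se0, e, e0, eend):
    """Eq. (7): selection pressure decays from se0 at epoch e0 to 1 at epoch eend."""
    return se0 * np.exp(np.log(1.0 / se0) / (eend - e0)) ** (e - e0)
```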
This paper proposes Recency Bias, an adaptive mini-batch selection method for training deep neural networks. To select informative mini-batches for training, the proposed method maintains a fixed-size sliding window of past model predictions for each data sample. At a given iteration, samples whose predictions are highly inconsistent within the sliding window are added to the mini-batch. The main contribution of this paper is the introduction of a sliding window to remember past model predictions, as an improvement over the state-of-the-art approach, Active Bias, which maintains a growing window of model predictions. Empirical studies are performed to show the superiority of Recency Bias over two state-of-the-art approaches. Results are shown on the tasks of (1) image classification from scratch and (2) image classification by fine-tuning pretrained networks.
SP:1c1b27e49b3df07bb7da0440a4ab0018d9d8440d
Carpe Diem, Seize the Samples Uncertain "at the Moment" for Adaptive Batch Selection
1 INTRODUCTION

Stochastic gradient descent (SGD) on randomly selected mini-batch samples is commonly used to train deep neural networks (DNNs). However, many recent studies have pointed out that the performance of DNNs is heavily dependent on how well the mini-batch samples are selected (Shrivastava et al., 2016; Chang et al., 2017; Katharopoulos & Fleuret, 2018). In earlier approaches, a sample's difficulty is employed to identify proper mini-batch samples, and these approaches achieve a more accurate and robust network (Han et al., 2018) or expedite the training convergence of SGD (Loshchilov & Hutter, 2016). However, the two opposing difficulty-based strategies, i.e., preferring easy samples (Kumar et al., 2010; Han et al., 2018) versus hard samples (Loshchilov & Hutter, 2016; Shrivastava et al., 2016), work well in different situations. Thus, for practical reasons, to cover more diverse situations, recent approaches begin to exploit a sample's uncertainty, which indicates the consistency of previous predictions (Chang et al., 2017; Song et al., 2019).

An important question here is how to evaluate a sample's uncertainty based on its historical predictions during the training process. Intuitively, because a series of historical predictions can be seen as a series of data indexed in chronological order, the uncertainty can be measured based on two forms of handling time-series observations: (i) a growing window (Figure 1(a)) that continually increases the size of the window to use all available observations, and (ii) a sliding window (Figure 1(b)) that maintains a window of a fixed size over the most recent observations by deleting outdated ones. While the state-of-the-art algorithm, Active Bias (Chang et al., 2017), adopts the growing window, we propose to use the sliding window in this paper.

In more detail, Active Bias recognizes uncertain samples based on the inconsistency of the predictions in the entire history of past SGD iterations. Then, it emphasizes such uncertain samples by choosing them with high probability for the next mini-batch. However, according to our experiments presented in Section 5.2, such uncertain samples slowed down the convergence speed of training, though they ultimately reduced the generalization error. This weakness is attributed to the inherent limitation of the growing window, where older observations can be too outdated (Torgo, 2011). In other words, outdated predictions no longer represent a network's current behavior. As illustrated in Figure 2, when the label predictions of two samples were inconsistent for a long time, Active Bias invariably regards them as highly uncertain, even though their recent label predictions become consistent as the network's training progresses. This characteristic evidently entails the risk of emphasizing uninformative samples that are too easy or too hard at the current moment, thereby slowing down the convergence speed of training.

Therefore, we propose a simple but effective batch selection method, called Recency Bias, that takes advantage of the sliding window to evaluate the uncertainty over fresher observations. As opposed to Active Bias, Recency Bias excludes outdated predictions by managing a sliding window of a fixed size and picks up the samples predicted inconsistently within the sliding window.
Thus, as shown in Figure 2, the two samples uninformative at the moment are no longer selected by Recency Bias, simply because their recent predictions are consistent. Consequently, since informative samples are effectively selected throughout the training process, this strategy not only accelerates the training speed but also leads to a more accurate network. To validate the superiority of Recency Bias, two popular convolutional neural networks (CNNs) were trained for two independent tasks: image classification and fine-tuning. We compared Recency Bias with not only random batch selection (the baseline) but also two state-of-the-art batch selection strategies. Compared with the three batch selection strategies, Recency Bias provided a relative reduction of test error by 1.81%–20.5% in a fixed wall-clock training time. At the same time, it significantly reduced the execution time by 24.6%–59.3% to reach the same test error.

2 RELATED WORK

Let $\mathcal{D} = \{(x_i, y_i) \,|\, 1 \le i \le N\}$ be the entire training dataset composed of samples $x_i$ with true labels $y_i$, where $N$ is the total number of training samples. Then, a straightforward strategy to construct a mini-batch $\mathcal{M} = \{(x_i, y_i) \,|\, 1 \le i \le b\}$ is to select $b$ samples uniformly at random (i.e., $P(x_i|\mathcal{D}) = 1/N$) from the training dataset $\mathcal{D}$. Because not all samples have an equal impact on training, many research efforts have been devoted to developing advanced sampling schemes. Bengio et al. (2009) first took easy samples and then gradually increased the difficulty of samples using heuristic rules. Kumar et al. (2010) determined the easiness of samples using their prediction errors. Recently, Tsvetkov et al. (2016) used Bayesian optimization to learn an optimal curriculum for training dense, distributed word representations. Sachan & Xing (2016) emphasized that the right curriculum must introduce a small number of samples dissimilar to those previously seen. Fan et al. (2017) proposed a neural data filter based on reinforcement learning to select training samples adaptively. However, it is common for deep learning to emphasize hard samples because of the plethora of easy ones (Katharopoulos & Fleuret, 2018). Loshchilov & Hutter (2016) proposed a difficulty-based sampling scheme, called Online Batch, that uses the rank of the loss computed from previous epochs. Online Batch sorts the previously computed losses of samples in descending order and exponentially decays the sampling probability of a sample according to its rank $r$. Then, the $r$-th ranked sample $x_{(r)}$ is selected with the probability dropping by a factor of $\exp(\log(s_e)/N)$, where $s_e$ is the selection pressure parameter that affects the probability gap between the most and the least important samples. When normalized to sum to 1.0, the probability $P(x_{(r)}|\mathcal{D}; s_e)$ is defined by Eq. (1).

$$P(x_{(r)}|\mathcal{D}; s_e) = \frac{1/\exp(\log(s_e)/N)^{r}}{\sum_{j=1}^{N} 1/\exp(\log(s_e)/N)^{j}} \qquad (1)$$

It has been reported that Online Batch accelerates the convergence of training but deteriorates the generalization error because of overfitting to hard training samples (Loshchilov & Hutter, 2016).
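For reference, Eq. (1) amounts to the following small sketch (ours, not the original implementation):

```python
import numpy as np

def online_batch_probs(losses, s_e):
    """Eq. (1): Online Batch's rank-based sampling distribution
    (Loshchilov & Hutter, 2016); rank 1 corresponds to the largest loss."""
    n = len(losses)
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(-np.asarray(losses))] = np.arange(1, n + 1)
    weights = 1.0 / np.exp(np.log(s_e) / n) ** ranks
    return weights / weights.sum()
```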
Most close to our work, Chang et al. (2017) devised an uncertainty-based sampling scheme, called Active Bias, that chooses uncertain samples with high probability for the next batch. Active Bias maintains the history $\mathcal{H}_i^{t-1}$ that stores all $h(y_i|x_i)$ before the current iteration $t$ (i.e., a growing window), where $h(y_i|x_i)$ is the softmax probability of a given sample $x_i$ for its true label $y_i$. Then, it measures the uncertainty of the sample $x_i$ by computing the variance over all $h(y_i|x_i)$ in $\mathcal{H}_i^{t-1}$ and draws the next mini-batch samples based on the normalized probability $P(x_i|\mathcal{D}, \mathcal{H}_i^{t-1}; \epsilon)$ in Eq. (2), where $\epsilon$ is the smoothness constant that prevents low-variance samples from never being selected again. As mentioned earlier in Section 1, Active Bias slows down the training process because the oldest part of the history $\mathcal{H}_i^{t-1}$ no longer represents the current behavior of the network.

$$P(x_i|\mathcal{D}, \mathcal{H}_i^{t-1}; \epsilon) = \frac{\widehat{\mathrm{std}}(\mathcal{H}_i^{t-1}) + \epsilon}{\sum_{j=1}^{N} \big(\widehat{\mathrm{std}}(\mathcal{H}_j^{t-1}) + \epsilon\big)}, \quad \widehat{\mathrm{std}}(\mathcal{H}_i^{t-1}) = \sqrt{\mathrm{var}\big(h(y_i|x_i)\big) + \frac{\mathrm{var}\big(h(y_i|x_i)\big)^2}{|\mathcal{H}_i^{t-1}|}} \qquad (2)$$

For the completeness of the survey, we include the recent studies on submodular batch selection. Joseph et al. (2019) and Wang et al. (2019) designed their own submodular objectives that cover diverse aspects, such as sample redundancy and sample representativeness, for more effective batch selection. Differently from their work, we explore the issue of truly uncertain samples from an orthogonal perspective. Our uncertainty measure can be easily injected into their submodular optimization framework as a measure of sample informativeness. In Section 5, we will confirm that Recency Bias outperforms Online Batch and Active Bias, which are regarded as the two state-of-the-art adaptive batch selection methods for deep learning.

3 Recency Bias COMPONENTS

3.1 CRITERION OF AN UNCERTAIN SAMPLE

The main challenge of Recency Bias is to identify the samples whose recent label predictions are highly inconsistent, i.e., those that are neither too easy nor too hard at the moment. Thus, we adopt the predictive uncertainty (Song et al., 2019) in Definition 3.1, which uses the information entropy (Chandler, 1987) to measure the inconsistency of recent label predictions. Here, a sample with high predictive uncertainty is regarded as uncertain and is selected with high probability for the next mini-batch.

Definition 3.1. (Predictive Uncertainty) Let $\hat{y}_i^t = \Phi(x_i, \theta_t)$ be the predicted label of a sample $x_i$ at time $t$, where $\Phi$ is a neural network, and let $\mathcal{H}_{x_i}(q) = \{\hat{y}^{t_1}, \hat{y}^{t_2}, \dots, \hat{y}^{t_q}\}$ be the label history of the sample $x_i$ that stores the predicted labels at the previous $q$ times. The label history $\mathcal{H}_{x_i}(q)$ corresponds to the sliding window of size $q$ used to compute the uncertainty of the sample $x_i$. Next, $p(y_i|x_i; q)$ is formulated such that it provides the probability of the label $y_i \in \{1, 2, \dots, k\}$ being estimated as the label of the sample $x_i$ based on $\mathcal{H}_{x_i}(q)$, as in Eq. (3), where $[\cdot]$ is the Iverson bracket (the Iverson bracket $[p]$ returns 1 if $p$ is true and 0 otherwise).

$$p(y_i|x_i; q) = \frac{\sum_{\hat{y}_i \in \mathcal{H}_{x_i}(q)} [\hat{y}_i = y_i]}{|\mathcal{H}_{x_i}(q)|} \qquad (3)$$

Then, to quantify the uncertainty of the sample $x_i$, the predictive uncertainty $F(x_i; q)$ is defined using the empirical entropy as in Eq. (4). Because the uncertainty is bounded, we add the standardization term $\delta$ to normalize the value to $[0, 1]$; for $k$ classes, $\delta$ is the maximum entropy, attained when $\forall j\, p(j|x_i; q) = 1/k$.

$$F(x_i; q) = -\frac{1}{\delta} \sum_{j=1}^{k} p(j|x_i; q) \log p(j|x_i; q), \quad \delta = -\log(1/k) \qquad (4)$$

3.2 SAMPLING PROBABILITY FOR MINI-BATCH CONSTRUCTION

To construct the next mini-batch samples, we assign the sampling probability according to the predictive uncertainty in Definition 3.1, as sketched below.
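To make the two uncertainty measures concrete, here is a minimal sketch (ours; `active_bias_uncertainty` follows our reconstruction of Eq. (2) above, which may differ in detail from the original Active Bias implementation):

```python
import numpy as np

def recency_bias_uncertainty(label_window, k):
    """Eqs. (3)-(4): normalized entropy over the last q predicted labels."""
    p = np.bincount(np.asarray(label_window, dtype=int), minlength=k) / len(label_window)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(k))   # delta = -log(1/k)

def active_bias_uncertainty(softmax_history):
    """Eq. (2)'s std term: variance of h(y|x) over the whole growing window."""
    var = np.var(softmax_history)
    return float(np.sqrt(var + var ** 2 / len(softmax_history)))
```

For k = 3, a maximally inconsistent window such as [0, 1, 2, 0, 1, 2] yields a Recency Bias uncertainty of 1.0, whereas a perfectly consistent window such as [1, 1, 1, 1, 1, 1] yields 0.0; in contrast, a sample whose predictions were inconsistent only early in training keeps a high Active Bias score, since the growing window never forgets.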
This paper explores a well-motivated but very heuristic idea for selecting the next samples to train on when training deep learning models. The method relies on the uncertainty of the model's predictions over the recent history and prefers instances that have high predictive uncertainty over their recent predictions. This allows the training method to train on instances that are neither too hard nor too easy, and to focus on reducing the uncertainty where it has the greatest potential gain.
SP:1c1b27e49b3df07bb7da0440a4ab0018d9d8440d
LabelFool: A Trick in the Label Space
1 INTRODUCTION

Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in classification tasks (Krizhevsky et al., 2012b; LeCun et al., 2010; He et al., 2016). Nevertheless, it has been found that adding well-designed perturbations to original samples can make deep neural network classifiers fail (Szegedy et al., 2013). Such samples are called adversarial samples, and techniques for generating adversarial samples are called attackers. We think the ideal attacker should satisfy three levels of requirements.

The first requirement is fooling networks, which means making a classifier fail to classify an image correctly. For example, a dog image can be classified as a cat after adding some well-designed perturbations. There are a number of methods for achieving a high attack rate (Goodfellow et al., 2015; Carlini & Wagner, 2017; Dong et al., 2018).

The second requirement for the ideal attacker is imperceptibility in the image space. This means the magnitude of perturbations at the pixel level needs to be as tiny as possible so that they are imperceptible to human eyes. For example, additive perturbations are minimized with an $l_p$ norm to generate imperceptible adversarial samples (Seyed-Mohsen et al., 2016). Extreme cases also exist where changing only one or a few pixels (Su et al., 2019; Modas et al., 2019) can make classifiers fail. Moosavi-Dezfooli et al. (2017) even show the existence of universal (image-agnostic) perturbations.

The third requirement for the ideal attacker, which is newly proposed in this paper, is the imperceptibility, in the label space, of the error made by the classifier. It means making the classifier mis-classify an image as a label which is similar to its ground truth, so that people won't notice the misclassification. For example, in Figure 1, a human user will probably ignore the mis-classification if an attacker caused a "church" to be mis-classified as a "monastery", as the third attacker does. However, a human user will easily notice the mistake if an attacker caused a "church" to be mis-classified as a "dome", as the second attacker does, or caused an apparent perturbation in the image space, as the first attacker does. In real applications, a human user will take defensive measures as soon as he notices the attack. Therefore, making the whole attack process imperceptible is crucial for letting observers' guard down. Tiny perturbations in the image space but large perturbations in the label space can muddle through on the input terminal. But as soon as observers check the output terminal and see an obviously incorrect label for an input, they will realize that the classifier failed due to an attack and take defensive measures immediately, just as Figure 1 shows. This justifies the power of attacks that also confuse people in the label space, so imperceptibility in the label space is quite important. However, to the best of our knowledge, few attackers have addressed this point.

In this paper, we propose an untargeted-attack algorithm called LabelFool, which perturbs an image to be mis-classified as a label similar to its ground truth, so that people won't notice the misclassification. In the meantime, LabelFool also guarantees imperceptibility in the image space as well as maintaining a high attack rate in fooling classifiers. There are two steps by which we accomplish our goal.
The first step is to choose a target label which is similar to the input image's ground truth. The second step is to perturb the input to be classified as this target label. The approach is to find the classification boundary between the current label and the target label, and then move the input towards this boundary until it is classified as the target label. We conduct a subjective experiment on ImageNet (Deng et al., 2009) which shows that adversarial samples generated by our method are indeed much less recognizable in the label space by human observers than those of other attacks. We also perform objective experiments on ImageNet to demonstrate that adversarial samples generated by LabelFool still guarantee imperceptibility in the image space as well as maintaining a high attack rate in fooling classifiers.

2 RELATED WORK

The phenomenon that neural networks are sensitive to adversarial samples was identified by Szegedy et al. (2013). Since then, many researchers have studied how to generate adversarial samples. FGSM (Goodfellow et al., 2015) was proposed to maximize the classification error subject to $l_\infty$-norm based distortion constraints. The CW attack (Carlini & Wagner, 2017) generates adversarial samples by solving an optimization problem based on an $l_0/l_2/l_\infty$ constraint, and the $l_0$ CW attack is the first proposed method that can cause targeted misclassification on the ImageNet dataset, meaning that we can specify the label of adversarial samples. But this designation of the target label is arbitrary. Until this paper, there has been no guidance on how to choose a target label such that it is difficult for a person to notice that the network has failed.

Besides achieving the goal of misclassification, many researchers realize the importance of imperceptibility in the image space (Xu et al., 2019). The one-pixel attack (Su et al., 2019) and SparseFool (Modas et al., 2019) attack networks in a scenario where perturbing only one or a few pixels can make a big difference. Moosavi-Dezfooli et al. (2017) show the existence of universal image-agnostic perturbations for state-of-the-art deep neural networks. DeepFool (Seyed-Mohsen et al., 2016) seeks the minimum image-level distortion, and for generating adversarial samples, it directly moves the input sample towards the nearest class in the feature space. This is the work most closely related to ours, because features extracted from classification models can reflect images' perceptual information, and classes which are close in the feature space are often perceptually similar. However, DeepFool approximates the multi-dimensional classification boundaries in two dimensions, and this might cause big errors in finding the nearest class. All these attacks generate adversarial samples by iteration, and the algorithm stops as soon as an adversarial sample is born, no matter what label it belongs to. This leads to an apparent misclassification, so observers will sound the defensive alarm quickly. In this paper, we compare our method with three attacks, FGSM, DeepFool and SparseFool, to show the advantage of our method in imperceptibility in the label space. We also demonstrate that the performance gain in the label space is not at the expense of a loss in the image space or in attack rate.
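To ground the boundary-crossing idea that this family of attacks (and LabelFool's second step) builds on, here is a minimal DeepFool-style targeted sketch (our illustration under stated assumptions, not LabelFool's exact feature-level procedure: PyTorch, a `model` that returns logits for a batch of one image):

```python
import torch

def step_toward_target(x, model, target, max_iter=50, overshoot=0.02):
    """Repeatedly linearize the boundary between the current class and
    `target`, then take the minimal step (plus overshoot) across it."""
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                      # shape (1, k)
        cur = int(logits.argmax(dim=1))
        if cur == target:
            break
        gap = logits[0, target] - logits[0, cur]   # negative until we cross
        grad = torch.autograd.grad(gap, x_adv)[0]
        r = (gap.abs() / grad.norm() ** 2) * grad  # minimal linearized step
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv
```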
3 LABELFOOL

In this section, we introduce our method for choosing a target label which is undetectable by human observers, and for perturbing the input image so that the classifier assigns this specific label. The whole pipeline is shown in Figure 2. All the symbols and notations used in this paper are summarized in Table 1. We use the same notation $i$ ($i = 1, 2, \dots$) for "class" and "label", because "class" and "label" are interchangeable in this paper. LabelFool contains two steps. The first step is to choose a target label for the input image which is similar to its ground truth. The second step is to perturb the input image to be classified as this label. Inspired by DeepFool (Seyed-Mohsen et al., 2016), we make modifications at the feature level: we keep moving the input towards the chosen class at the feature level until it is classified as the label we want.

3.1 CHOOSE A TARGET LABEL

The first step of our method is choosing the target label $t_x$ for an input image $x$. As we want the target label to be imperceptible in the label space to human observers, we need to find the label most "similar" to the input image's ground truth $l_x$, where most "similar" means nearest in a perceptual distance metric. However, $l_x$ is usually unknown when an input image is given. So it is important to estimate the probability distribution $P$ of an input's ground truth $l_x$, based on which we can compute the distance between each class in the dataset and $l_x$, and then choose the nearest one as the target class. We propose a weighted distance model to achieve this goal.

Before introducing the model, there are some preparations. Given two images $x, y$, we use pre-trained image classification models to extract features $\phi_x, \phi_y$, because these features can reflect perceptual information. As we want to calculate the distance in a perceptual distance metric, and cosine distance has been used to measure perceptual similarity in many works (Lin et al., 2016; Wang et al., 2019), we compute the distance between $x$ and $y$ as $d(x, y) = 1 - \cos(\phi_x, \phi_y)$. Having the distance between two images, we can compute the distance between classes. Each class is a set of images, and to measure the distance between two sets, we choose the Hausdorff distance (Henrikson, 1999). The distance between class $i$ and class $j$ is denoted as $D_{i,j}$. Suppose a dataset has $n$ classes. Then we can construct a matrix $D \in \mathbb{R}^{n \times n}$ by calculating the distance between all pairs of classes in the dataset; it will be used in the following probability model to provide the distances we need.

After these preparations, we can start to decide the target label for an input image. As introduced before, we need to estimate the probability distribution $P$ of the ground truth $l_x$ because we want to find the nearest label to $l_x$, which is unknown at the beginning. When an image $x$ is put into a classifier $f$, state-of-the-art machine learning classifiers usually output a predicted label $\hat{l}_x$ and a probability vector $\hat{p}$ whose elements mean $P(x \in \text{class } i) = \hat{p}_i$. For simplicity, we suppose the elements of $\hat{p}$ are sorted in descending order. Meanwhile, $\hat{p}$ can be regarded as an approximation of $P$. Furthermore, we define a distance function between $l_x$ and class $i$ in an $n$-class dataset as $D_i(l_x)$. In order to choose the nearest label to $l_x$ as the target label $t_x$, we need to estimate the expectation of $D_i(l_x)$, denoted $E_{l_x \sim P}[D_i(l_x)]$ for $i = 1, \dots, n$.
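Before moving on, here is a minimal sketch (ours, not the authors' code) of the distance preparations above: pairwise cosine distances between extracted features, the Hausdorff distance between two classes, and the class-distance matrix $D$.

```python
import numpy as np

def pairwise_cosine_distance(feats_a, feats_b):
    """d(x, y) = 1 - cos(phi_x, phi_y) for all pairs of feature rows."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

def hausdorff_class_distance(feats_i, feats_j):
    """Hausdorff distance between class i and class j, each a set of features."""
    d = pairwise_cosine_distance(feats_i, feats_j)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def class_distance_matrix(class_feats):
    """D in R^{n x n}: distances between all pairs of classes; class_feats is a
    list of per-class feature arrays from a pre-trained classifier."""
    n = len(class_feats)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = hausdorff_class_distance(class_feats[i], class_feats[j])
    return D
```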
In general, our target function is Eq. (1):

$$t_x = \arg\min_{i = 1, \dots, n} E_{l_x \sim P}[D_i(l_x)] \qquad (1)$$

Specifically, when $\hat{p}_1$ is larger than some threshold $\delta_1$, we use Maximum Likelihood Estimation (MLE) (Pfanzagl, 2011), which means we believe the classifier and take the predicted label $\hat{l}_x$ as the ground truth $l_x$, then choose the label (except $\hat{l}_x$) nearest to $\hat{l}_x$ as the target label $t_x$. In this circumstance, we assume $l_x = \hat{l}_x$ and $E_{l_x \sim P}[D_i(l_x)] = D_{i, \hat{l}_x}$. Therefore, $t_x$ is

$$t_x = \arg\min_{i \ne \hat{l}_x,\; i = 1, \dots, n} D_{i, \hat{l}_x} \quad \text{if } \hat{p}_1 > \delta_1. \qquad (2)$$

When $\hat{p}_1$ is smaller than the threshold $\delta_1$, we are not sure whether $l_x$ equals $\hat{l}_x$. Instead, we sample some labels and compute the weighted distance between each label $i$ and these labels. We sample all labels whose probability is larger than a threshold $\delta_2$, and we use $M$ to denote the number of sampled labels. We think the input image might belong to one of these $M$ labels. The labels whose probability is smaller than $\delta_2$ are discarded because we think the input image can hardly fall into these categories. The weights and the distances are provided by the vector $\hat{p}$ and the matrix $D$, respectively. In this circumstance, as we are not sure which label is the ground truth, we want to find a target label which has the minimum expected distance to all these possible labels. Therefore, the value of $E_{l_x \sim P}[D_i(l_x)]$ can be approximated as $\sum_{j=1}^{M} \hat{p}_j \cdot D_{i,j}$, and the target label $t_x$ is given in Eq. (3). This can be explained by importance sampling (Owen & Zhou, 2000): it is hard to sample from the real probability distribution $P$, so we can only use the probability distribution $\hat{p}$, an approximation of $P$, to estimate the value of $E_{l_x \sim P}[D_i(l_x)]$.

$$t_x = \arg\min_{i = 1, \dots, n} \sum_{j=1}^{M} \hat{p}_j \cdot D_{i,j} \quad \text{if } \hat{p}_1 \le \delta_1 \qquad (3)$$

In conclusion, the whole strategy for choosing the target label $t_x$ of an input image $x$ is given by Eq. (4). The target label $t_x$ minimizes $E_{l_x \sim P}[D_i(l_x)]$, just as Figure 2 shows.

$$t_x = \begin{cases} \arg\min_{i \ne \hat{l}_x,\; i = 1, \dots, n} D_{i, \hat{l}_x} & \text{if } \hat{p}_1 > \delta_1 \\ \arg\min_{i = 1, \dots, n} \sum_{j=1}^{M} \hat{p}_j \cdot D_{i,j} & \text{otherwise} \end{cases} \qquad (4)$$
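Putting Eq. (4) into code, a sketch might read as follows (ours; the threshold values $\delta_1, \delta_2$ are illustrative placeholders, not the paper's settings):

```python
import numpy as np

def choose_target_label(probs, D, delta1=0.6, delta2=0.05):
    """Eq. (4): pick the target label t_x given the classifier's probability
    vector `probs` and the class-distance matrix D."""
    order = np.argsort(-probs)                 # labels sorted by descending p-hat
    l_hat = int(order[0])                      # predicted label
    if probs[l_hat] > delta1:                  # Eq. (2): trust the prediction (MLE)
        dists = D[:, l_hat].copy()
        dists[l_hat] = np.inf                  # exclude l_hat itself
        return int(np.argmin(dists))
    # Eq. (3): weighted distance over the M labels with probability > delta2
    sampled = [int(j) for j in order if probs[j] > delta2]
    sampled = sampled or [l_hat]               # fallback if nothing clears delta2
    expected = sum(probs[j] * D[:, j] for j in sampled)
    return int(np.argmin(expected))
```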
This paper proposes a method to create adversarial perturbations whose target labels are similar to the ground truth. The target labels are selected using an existing perceptual similarity measure for images. Perturbations are generated using a DeepFool-like algorithm. Human evaluation supports that the pairs of generated images and target labels appear more natural to humans than those produced by prior attack algorithms.
SP:9774f39a520317f2d547b5ab2690f59473d20f8e
This paper describes a technique for creating adversarial images where the added perturbations are not only imperceptible to machines but also to human observers. The authors describe why this might be beneficial. The method works by finding labels that are not too far from the source image's ground-truth labels and moving the source image in that direction. To find the target label, the authors use a threshold on the confidence of the predicted ground-truth labels. The authors test their algorithm using a newly proposed metric of how imperceptible a method's misclassifications are to a human observer. They show that their method creates images whose perturbations are more imperceptible to humans, compared to other methods, while also being imperceptible to machines.
SP:9774f39a520317f2d547b5ab2690f59473d20f8e
Searching to Exploit Memorization Effect in Learning from Corrupted Labels
1 INTRODUCTION

Learning with deep neural networks has enjoyed huge empirical success in recent years across a wide variety of tasks, from image processing to speech recognition, and from language modeling to recommender systems (Goodfellow et al., 2016). However, their success depends heavily on the availability of large, well-annotated data, which is rarely available for real-world applications. Instead, what we face in practice are large datasets collected from crowdsourcing platforms or crawled from the Internet, which thus contain many noisy labels (Li et al., 2017b; Patrini et al., 2017). Besides, due to the vast learning capacity of deep networks, they will eventually over-fit these noisy labels, leading to poor predictive performance, which can be worse than that obtained from simple models (Zhang et al., 2016; Arpit et al., 2017).

To reduce the negative effects of noisy labels, many methods have been proposed (Sukhbaatar et al., 2015; Reed et al., 2015; Patrini et al., 2017; Ghosh et al., 2017; Malach & Shalev-Shwartz, 2017). Recently, a promising direction is training networks only on selected instances that are more likely to be clean (Jiang et al., 2018; Han et al., 2018b; Ma et al., 2018; Yu et al., 2019; Wang et al., 2019). Intuitively, as the training data becomes less noisy, better performance can be obtained. Among these works, the representative methods are MentorNet (Jiang et al., 2018) and Co-teaching (Han et al., 2018b; Yu et al., 2019), which take small-loss samples in each mini-batch as clean instances. Specifically, MentorNet pre-trains an extra network and then uses it to select clean instances to guide the training. When clean validation data is not available, MentorNet has to use a predefined curriculum (Bengio et al., 2009). Co-teaching is an improvement over MentorNet; it simultaneously maintains two networks with identical architectures during the training process, and in each mini-batch of data, each network is updated using the other network's small-loss instances.

The memorization effect of deep networks (Zhang et al., 2016; Arpit et al., 2017) is the crux of the success of these sample-selection methods. Memorization happens widely across various deep network architectures, e.g., the multilayer perceptron (MLP) and the convolutional neural network (CNN). Specifically, it means that deep networks tend to learn easy and correct patterns first and then over-fit the (possibly noisy) training dataset (see Fig. 1(a)-(b)). Thus, when learning with noisy labels, while the validation accuracy will first increase and then significantly decrease, the training loss will continuously get smaller with more training epochs. Due to this effect, sample-selection methods can learn correct patterns at an early stage and then use the obtained discriminative ability to filter out corrupted instances in subsequent training epochs (Jiang et al., 2018; Han et al., 2018b; Chen et al., 2019).

While the memorization effect is critical to the success of sample-selection methods, how to properly exploit it has not been addressed in the literature, and trivial attempts can easily lead to even worse performance than standard deep networks (Han et al., 2018b). Some recent endeavors seek to evade this problem by integrating other auxiliary information, e.g., a small clean subset is used in (Ren et al., 2018), and knowledge graphs are utilized in (Li et al., 2017b).
In this paper, motivated by the success of automated machine learning (AutoML) in designing data-dependent models (Hutter et al., 2018), and by the fact that memorization depends heavily on many factors (Zhang et al., 2016; Arpit et al., 2017), we propose to exploit memorization effects automatically using AutoML techniques. Our contributions are summarized as follows:

• First, to gain an in-depth understanding of why it is difficult to tune sample-selection methods for good performance, we examine the behavior of the memorization effect from multiple perspectives. We find that, while there exist general patterns in how memorization occurs during training (see Fig. 1(a)-(b)), it is hard to quantify to what extent such an effect happens (see Fig. 1(b)-(f)). In particular, memorization can be affected by many factors, e.g., datasets, network architectures, and the choice of optimizer. It is exactly this complex dependency that makes the design of proper sample-selection rules a hard problem, which motivates us to solve it with AutoML techniques.

• To make good use of AutoML techniques, we then derive an expressive search space for exploiting memorization based on the above observations, i.e., the curve of how many instances should be sampled during training should be similar to the inverse of the learning curve on the validation set. Such a space is not too large since it has only a few variables, which allows subsequent algorithms to converge quickly to promising candidates.

• Then, to design an efficient algorithm, we show the failure of gradient-based methods and the inefficiency of derivative-free methods. These observations motivate us to take a probabilistic view of the search problem and adopt natural gradient descent (Amari, 1998; Pascanu & Bengio, 2013) for optimization. The designed algorithm effectively addresses the above problems and is significantly faster than other popular search algorithms.

• Finally, we conduct extensive experiments on synthetic, benchmark, and real datasets, under various settings and with different network architectures. These experiments demonstrate that the proposed method is not only much more efficient than existing AutoML algorithms but also achieves much better performance than state-of-the-art sample-selection approaches designed by humans. Besides, we visualize and explain the searched functions, which can also help design better rules to control memorization effects in the future.

2 RELATED WORK

2.1 LEARNING FROM NOISY LABELS

Mainstream research focuses on class-conditional noise (CCN) (Angluin & Laird, 1988), where the label corruption is independent of the features. Generally, recent methods for handling the CCN model can be classified into three categories. The first is based on estimating the transition matrix, which tries to capture how correct labels flip into wrong ones (Sukhbaatar et al., 2015; Reed et al., 2015; Patrini et al., 2017; Ghosh et al., 2017). These methods then use the estimated matrix to correct gradients or losses during training. However, they are fragile under heavy noise and unable to handle many classes (Han et al., 2018b). The second type is the regularization approach (Miyato et al., 2016; Laine & Aila, 2017; Tarvainen & Valpola, 2017).
Although the regularization approach can achieve satisfying performance, it is still an incomplete approach, since Jiang et al. (2018) show that it can only delay the overfitting process rather than avoid it; i.e., given enough training time, the network can still fit the noisy data completely. Thus, it requires much domain knowledge to determine the appropriate number of training epochs in order to prevent overfitting. The last category is the sample-selection approach, which attempts to reduce the negative effects of noisy labels by selecting clean instances during training. The recent state-of-the-art methods are also built on the sample-selection approach (Jiang et al., 2018; Han et al., 2018b; Malach & Shalev-Shwartz, 2017; Yu et al., 2019). Active learning (Settles, 1994) is a closely related method, which iteratively adds unlabeled samples with highly confident predictions to the training dataset. Thus, to do active learning, we need a classifier whose performance is already good enough. As a result, active learning is not applicable for directly learning from noisy labels here.

A promising criterion for selecting "clean instances" is to pick up instances that have relatively small losses in each mini-batch (Jiang et al., 2018; Han et al., 2018b). The fundamental property behind these methods is the memorization effect of deep networks (Zhang et al., 2016; Arpit et al., 2017), which means deep networks learn simple patterns first and then start to over-fit. This effect helps classifiers build discriminative ability in the early stage, which makes clean instances more likely to have smaller losses than noisy ones. The general framework of the sample-selection approach is given in Alg. 1. Specifically, some small-loss instances D̄_f are selected from the mini-batch D̄ in step 5. These "clean" instances are then used to update the network parameters in step 6. The R(t) in step 8, which controls how many instances are kept in each epoch, is the most important hyper-parameter, as it explicitly exploits the memorization effect (Han et al., 2018b; Jiang et al., 2018; Yu et al., 2019).

Algorithm 1 Framework of the sample-selection approach (Jiang et al., 2018; Han et al., 2018b).
1: for t = 1, ..., T do
2:   shuffle the training set D;
3:   for n = 1, ..., N do
4:     draw a mini-batch D̄ from D;
5:     select D̄_f, i.e., R(t) small-loss instances from D̄ based on the network's predictions;
6:     update the network's parameters using gradients from D̄_f;
7:   end for
8:   exploit memorization effects using R(t) (an estimate of the percentage of clean instances);
9: end for

However, it is hard to determine exactly what proportion of small-loss samples should be selected in each epoch (Jiang et al., 2018; Ren et al., 2018). As will be discussed in Sec. 3.1, due to various practical issues, the extent to which the memorization effect happens is hard to quantify. Thus, the performance obtained from existing solutions is far from desired, and we are motivated to solve this issue with AutoML.

2.2 AUTOMATED MACHINE LEARNING (AUTOML)

Automated machine learning (AutoML) (Hutter et al., 2018) has recently exhibited its power in easing the use of, and designing better, machine learning models. Basically, AutoML can be regarded as a black-box optimization problem in which we need to efficiently and effectively search for hyper-parameters or designs for the underlying learning models, evaluated on the validation set.
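To make step 5 of Alg. 1 and the role of R(t) concrete, here is a minimal sketch (ours; the keep-rate schedule shown is the hand-designed one of the kind used in Co-teaching (Han et al., 2018b), not the searched R(t) proposed later in this paper):

```python
import numpy as np

def select_small_loss(losses, keep_rate):
    """Step 5 of Alg. 1: indices of the keep_rate fraction of instances
    with the smallest losses in a mini-batch."""
    k = max(1, int(round(keep_rate * len(losses))))
    return np.argsort(losses)[:k]

def coteaching_keep_rate(t, tau, t_k):
    """A hand-designed R(t): ramp down from 1 to 1 - tau over the first
    t_k epochs, where tau is an estimate of the noise rate."""
    return 1.0 - tau * min(t / t_k, 1.0)
```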
Regarding the success of AutoML, there are two important perspectives (Feurer et al., 2015; Zoph & Le, 2017; Xie & Yuille, 2017; Bender et al., 2018):

• Search space: First, it needs to be general enough, meaning that it should cover existing models as special cases. This also helps experts better understand the limitations of existing models and thus facilitates future research. However, the space cannot be too general; otherwise, searching in such a space becomes too expensive.

• Search algorithm: Optimization problems in AutoML are usually black-box. Unlike convex optimization, there are no universal and efficient optimization tools. Once the search space is determined, domain knowledge should also be exploited in the design of the search algorithm so that good candidates in the space can be identified efficiently.

The search space is domain-specific and needs to be specially designed for every AutoML problem. Two types of search algorithms are popularly used. The first is derivative-free optimization, which is usually employed for searching in a general search space, e.g., reinforcement learning (Zoph & Le, 2017; Baker et al., 2017), genetic programming (Escalante et al., 2009; Xie & Yuille, 2017), and Bayesian optimization (Feurer et al., 2015; Snoek et al., 2012). More recently, gradient-based methods, which alternately update parameters and hyper-parameters, have been developed as more efficient replacements for derivative-free optimization on some AutoML problems, e.g., neural network architecture search (Liu et al., 2019; Akimoto et al., 2019; Xie et al., 2018). However, existing AutoML techniques cannot be directly used to exploit memorization here. First, we need to carefully define a domain-specific space. Besides, we also show that existing algorithms are either not applicable or too slow. This motivates us to propose a new algorithm based on the natural gradient.
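For intuition, the following is a generic natural-evolution-strategies sketch of natural-gradient-based black-box search (our illustration of the family, not the algorithm proposed in this paper): maintain a Gaussian search distribution over the hyper-parameters (e.g., the parameters of R(t)) and ascend the natural gradient of the expected validation score.

```python
import numpy as np

def natural_gradient_search(score_fn, dim, iters=50, pop=20, sigma=0.1, lr=0.05):
    """NES-style black-box search; `score_fn` maps a hyper-parameter vector
    to a validation score to be maximized."""
    mu = np.zeros(dim)                          # mean of the search distribution
    for _ in range(iters):
        eps = np.random.randn(pop, dim)         # sampled perturbations
        scores = np.array([score_fn(mu + sigma * e) for e in eps])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)
        # for an isotropic Gaussian, the natural gradient w.r.t. the mean
        # reduces to a score-weighted average of the perturbations
        mu += (lr / (pop * sigma)) * eps.T @ scores
    return mu
```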
This paper studies the problem of learning from corrupted labels by picking out clean instances from the training dataset. The sample selection is mainly based on the function R(t), which controls how many instances are kept. This paper proposes a unique curvature for R(t) based on intuition and presents how R(t) can be learned via a combination of some existing functions. Natural gradient descent is presented to optimize the parameters in the AutoML framework. Experimental results on both synthetic data and real-world data demonstrate the effectiveness of the proposed method.
SP:bd7f50f0b7150fbbf799760afdeeaeed76e93c8e
Searching to Exploit Memorization Effect in Learning from Corrupted Labels
1 INTRODUCTION . Learning with deep neural networks has enjoyed huge empirical success in recent years across a wide variety of tasks, from image processing to speech recognition, and from language modeling to recommender systems (Goodfellow et al., 2016). However, this success relies heavily on the availability of large, well-annotated data, which is rarely available in real-world applications. Instead, what we face in practice are large data sets collected from crowdsourcing platforms or crawled from the Internet, and thus containing many noisy labels (Li et al., 2017b; Patrini et al., 2017). Moreover, due to their vast learning capacity, deep networks eventually over-fit these noisy labels, leading to poor predictive performance that can be worse than that of simple models (Zhang et al., 2016; Arpit et al., 2017). To reduce the negative effects of noisy labels, many methods have been proposed (Sukhbaatar et al., 2015; Reed et al., 2015; Patrini et al., 2017; Ghosh et al., 2017; Malach & Shalev-Shwartz, 2017). Recently, a promising direction is to train networks only on selected instances that are more likely to be clean (Jiang et al., 2018; Han et al., 2018b; Ma et al., 2018; Yu et al., 2019; Wang et al., 2019). Intuitively, as the training data becomes less noisy, better performance can be obtained. Among these works, the representative methods are MentorNet (Jiang et al., 2018) and Co-teaching (Han et al., 2018b; Yu et al., 2019), which take the small-loss samples in each mini-batch as clean instances. Specifically, MentorNet pre-trains an extra network and then uses it to select clean instances that guide the training; when clean validation data is not available, MentorNet has to use a predefined curriculum (Bengio et al., 2009). Co-teaching improves over MentorNet by simultaneously maintaining two networks with identical architectures during training; in each mini-batch, each network is updated using the other network's small-loss instances. The crux of these sample-selection methods is the memorization effect of deep networks (Zhang et al., 2016; Arpit et al., 2017). Memorization occurs widely across deep network architectures, e.g., the multilayer perceptron (MLP) and the convolutional neural network (CNN). Specifically, it means that deep networks tend to learn easy and correct patterns first and only later over-fit the (possibly noisy) training set (see Fig.1(a)-(b)). Thus, when learning with noisy labels, the training loss keeps decreasing with more training epochs, while the validation loss first decreases and then increases significantly. Owing to this effect, sample-selection methods can learn correct patterns at an early stage and then use the resulting discriminative ability to filter out corrupted instances in subsequent training epochs (Jiang et al., 2018; Han et al., 2018b; Chen et al., 2019). While the memorization effect is critical to the success of sample-selection methods, how to properly exploit it has not been addressed in the literature, and naive attempts can easily lead to performance even worse than that of standard deep networks (Han et al., 2018b). Some recent endeavors sidestep this problem by integrating auxiliary information, e.g., a small clean subset is used in (Ren et al., 2018), and knowledge graphs are utilized in (Li et al., 2017b). In this paper, motivated by the success of automated machine learning (AutoML) in designing data-dependent models (Hutter et al., 2018), and by the fact that memorization heavily depends on many factors (Zhang et al., 2016; Arpit et al., 2017), we propose to exploit memorization effects automatically using AutoML techniques. Our contributions are summarized as follows: • First, to gain an in-depth understanding of why it is difficult to tune sample-selection methods for good performance, we examine the behavior of the memorization effect from multiple perspectives. We find that, while there are general patterns in how memorization unfolds over training (see Fig.1(a)-(b)), it is hard to quantify the extent to which the effect occurs (see Fig.1(b)-(f)). In particular, memorization is affected by many factors, e.g., the data set, the network architecture, and the choice of optimizer. It is exactly this complex dependency that makes designing proper sample-selection rules hard, which motivates us to solve the problem with AutoML techniques. • To make good use of AutoML techniques, we then derive an expressive search space for exploiting memorization, based on the above observations: the curve describing how many instances should be selected during training should resemble the inverse of the learning curve on the validation set. Such a space is not too large, since it has only a few variables, which allows subsequent algorithms to converge quickly to promising candidates. • Then, to design an efficient algorithm, we show the failure of gradient-based methods and the inefficiency of derivative-free methods. This motivates us to take a probabilistic view of the search problem and adopt natural gradient descent (Amari, 1998; Pascanu & Bengio, 2013) for optimization. The resulting algorithm effectively addresses the above problems and is significantly faster than other popular search algorithms. • Finally, we conduct extensive experiments on synthetic, benchmark, and real data sets, under various settings and with different network architectures. These experiments demonstrate that the proposed method is not only much more efficient than existing AutoML algorithms, but also achieves much better performance than state-of-the-art sample-selection approaches designed by humans. Besides, we visualize and explain the searched functions, which can also help design better rules to control memorization effects in the future. 2 RELATED WORK . 2.1 LEARNING FROM NOISY LABELS . Mainstream research focuses on class-conditional noise (CCN) (Angluin & Laird, 1988), where the label corruption is independent of the features. Recent methods for handling the CCN model can generally be classified into three categories. The first is based on estimating the transition matrix, which captures how correct labels flip into wrong ones (Sukhbaatar et al., 2015; Reed et al., 2015; Patrini et al., 2017; Ghosh et al., 2017); these methods then use the estimated matrix to correct gradients or losses during training. However, they are fragile under heavy noise and unable to handle many classes (Han et al., 2018b). The second type is the regularization approach (Miyato et al., 2016; Laine & Aila, 2017; Tarvainen & Valpola, 2017).
Although the regularization approach can achieve satisfying performance, it remains incomplete: Jiang et al. (2018) show that it only delays over-fitting rather than avoiding it, i.e., given enough training time, the network still fits the noisy data completely. Thus, much domain knowledge is required to determine the appropriate number of training epochs to prevent over-fitting. The last category is the sample-selection approach, which attempts to reduce the negative effects of noisy labels by selecting clean instances during training; recent state-of-the-art methods are also built on this approach (Jiang et al., 2018; Han et al., 2018b; Malach & Shalev-Shwartz, 2017; Yu et al., 2019). Active learning (Settles, 1994) is a closely related method, which iteratively adds unlabeled samples with high-confidence predictions to the training set. However, active learning requires a classifier whose performance is already good enough, so it is not applicable to directly learning from noisy labels here. A promising criterion for selecting "clean" instances is to pick up instances with relatively small losses in each mini-batch (Jiang et al., 2018; Han et al., 2018b). The fundamental property behind these methods is the memorization effect of deep networks (Zhang et al., 2016; Arpit et al., 2017): deep networks learn simple patterns first and only then start to over-fit. This effect helps classifiers establish discriminative ability in the early stage, making clean instances more likely to have smaller losses than noisy ones. The general framework of the sample-selection approach is given in Alg.1. Specifically, some small-loss instances D̄f are selected from the mini-batch D̄ in step 5; these "clean" instances are then used to update the network parameters in step 6. The schedule R(t) in step 8, which controls how many instances are kept in each epoch, is the most important hyper-parameter, as it explicitly exploits the memorization effect (Han et al., 2018b; Jiang et al., 2018; Yu et al., 2019).

Algorithm 1 Framework of the sample-selection approach (Jiang et al., 2018; Han et al., 2018b).
1: for t = 1, ..., T do
2:   shuffle training set D;
3:   for n = 1, ..., N do
4:     draw a mini-batch D̄ from D;
5:     select D̄f, i.e., R(t) small-loss instances from D̄, based on the network's predictions;
6:     update the network's parameters using gradients from D̄f;
7:   end for
8:   exploit the memorization effect using R(t) (an estimate of the percentage of clean instances);
9: end for

However, it is hard to determine exactly what proportion of small-loss samples should be selected in each epoch (Jiang et al., 2018; Ren et al., 2018). As will be discussed in Sec.3.1, due to various practical issues, the extent to which the memorization effect occurs is hard to quantify. Thus, the performance of existing solutions is far from desired, and we are motivated to solve this issue with AutoML. 2.2 AUTOMATED MACHINE LEARNING (AUTOML) . Automated machine learning (AutoML) (Hutter et al., 2018) has recently exhibited its power in easing the use of, and designing better, machine learning models. Basically, AutoML can be regarded as a black-box optimization problem in which we need to efficiently and effectively search for hyper-parameters or designs of the underlying learning model, evaluated on a validation set.
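In this view, the schedule R(t) from Alg.1 itself becomes the object of search: parameterize it with a few scalars, train under that schedule, and score the result on held-out validation data. A minimal sketch follows; the linear-decay form is an assumption borrowed from Co-teaching rather than the paper's searched space, and `train_fn` / `val_acc_fn` are hypothetical helpers standing in for Alg.1 and a validation pass.

```python
def linear_decay_R(t, tau, t_k):
    """A common hand-designed schedule for R(t) (cf. Co-teaching):
    keep all instances at first, then linearly decay the kept
    fraction down to 1 - tau by epoch t_k."""
    return 1.0 - tau * min(t / t_k, 1.0)

def evaluate_schedule(tau, t_k, train_fn, val_acc_fn, T=100):
    """Black-box objective: train with schedule R(t) via Alg. 1,
    then score the resulting model on validation data."""
    schedule = [linear_decay_R(t, tau, t_k) for t in range(1, T + 1)]
    model = train_fn(schedule)   # runs Alg. 1 with this R(t)
    return val_acc_fn(model)     # validation accuracy to be maximized
```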
Two perspectives are important to the success of AutoML (Feurer et al., 2015; Zoph & Le, 2017; Xie & Yuille, 2017; Bender et al., 2018): • Search space: First, the space needs to be general enough to cover existing models as special cases. This also helps experts better understand the limitations of existing models and thus facilitates future research. However, the space cannot be too general; otherwise searching in it becomes too expensive. • Search algorithm: Optimization problems in AutoML are usually black-box. Unlike in convex optimization, there are no universal and efficient optimization tools. Once the search space is determined, domain knowledge should also be exploited in the design of the search algorithm, so that good candidates in the space can be identified efficiently. The search space is domain-specific and needs to be specially designed for every AutoML problem. Two types of search algorithms are popularly used. The first is derivative-free optimization, usually used for searching a general space, e.g., reinforcement learning (Zoph & Le, 2017; Baker et al., 2017), genetic programming (Escalante et al., 2009; Xie & Yuille, 2017), and Bayesian optimization (Feurer et al., 2015; Snoek et al., 2012). More recently, gradient-based methods, which alternately update parameters and hyper-parameters, have been developed as more efficient replacements for derivative-free optimization on some AutoML problems, e.g., neural architecture search (Liu et al., 2019; Akimoto et al., 2019; Xie et al., 2018). However, existing AutoML techniques cannot be directly used to exploit memorization here. First, we need to carefully define a domain-specific search space. Besides, we will also show that existing algorithms are either not applicable or too slow. This motivates us to propose a new algorithm based on natural gradient.
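As a rough illustration of such a natural-gradient search, the following NES-style sketch optimizes a Gaussian search distribution over the schedule's hyperparameters. This is a reconstruction under standard natural-evolution-strategies assumptions, not necessarily the paper's exact algorithm; `objective` is any black-box score, e.g. the `evaluate_schedule` sketch above.

```python
import numpy as np

def nes_search(objective, dim=2, iters=50, pop=8, sigma=0.1, lr=0.05, seed=0):
    """Natural-gradient search over an isotropic Gaussian N(mu, sigma^2 I)
    on the hyperparameters. The natural gradient of the expected objective
    w.r.t. mu is estimated from perturbed samples (the NES estimator)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    for _ in range(iters):
        eps = rng.standard_normal((pop, dim))
        scores = np.array([objective(mu + sigma * e) for e in eps])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # fitness shaping
        mu += lr / (pop * sigma) * eps.T @ scores  # ascend the estimated gradient
    return mu
```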
This paper focuses on the topic of learning from noisy -- or, as the authors call them, "corrupted" -- labels. Specifically, it focuses on an approach where data selection -- ideally of cleaner/less noisy examples -- can help the learned model overcome label noise, akin to the approaches it builds upon (i.e., the Co-teaching and MentorNet approaches). The specific idea here is to take an AutoML-style approach to the problem, in particular to determine how many examples are selected in each mini-batch. The proposed method is based upon natural-gradient updates to the hyperparameters (arguably the only feasible way to tackle this problem, given the complex dependence of performance on the hyperparameter choice). The experimental results using synthetic noise corruption are indicative of improved performance compared to the baseline techniques.
SP:bd7f50f0b7150fbbf799760afdeeaeed76e93c8e
Consistency Regularization for Generative Adversarial Networks
1 INTRODUCTION . Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have recently demonstrated impressive results on image-synthesis benchmarks (Radford et al., 2016; Zhang et al., 2017; Miyato & Koyama, 2018; Zhang et al., 2018; Brock et al., 2018; Karras et al., 2019). In the original setting, GANs are composed of two neural networks trained with competing goals: the generator is trained to synthesize realistic samples to fool the discriminator, and the discriminator is trained to distinguish real samples from fake ones produced by the generator. One major problem with GANs is the instability of the training procedure and the general sensitivity of the results to various hyperparameters (Salimans et al., 2016). Because GAN training implicitly requires finding the Nash equilibrium of a non-convex game in a continuous and high-dimensional parameter space, it is substantially more complicated than standard neural network training. In fact, formally characterizing the convergence properties of the GAN training procedure is mostly an open problem (Odena, 2019). Previous work (Arjovsky & Bottou, 2017; Miyato et al., 2018a; Odena et al., 2017; Chen et al., 2019; Wei et al., 2018) has shown that interventions focused on the discriminator can mitigate stability issues. Most successful interventions fall into two categories: normalization and regularization. Spectral normalization is the most effective normalization method; it divides the weight matrices in the discriminator by an approximation of their largest singular value. For regularization, Gulrajani et al. (2017) penalize the gradient norm along straight lines between real data and generated data. Roth et al. (2017) propose to directly regularize the squared gradient norm for both the training data and the generated data. DRAGAN (Kodali et al., 2017) introduces another form of gradient penalty, where the gradients at Gaussian perturbations of training data are penalized. One might anticipate that simultaneous regularization and normalization could improve sample quality. However, most of these gradient-based regularization methods either provide marginal gains or fail to introduce any improvement when normalization is used (Kurach et al., 2019), which we also observe in our experiments. Both these regularization methods and spectral normalization are motivated by controlling the Lipschitz constant of the discriminator; we suspect this is why applying both does not yield additive gains. In this paper, we examine a technique called consistency regularization (Bachman et al., 2014; Sajjadi et al., 2016; Laine & Aila, 2016; Zhai et al., 2019; Xie et al., 2019; Hu et al., 2017) in contrast to gradient-based regularizers. Consistency regularization is widely used in semi-supervised learning to ensure that the classifier output for an unlabeled example remains unaffected even when the example is augmented in semantics-preserving ways. In light of this intuition, we hypothesize that a well-trained discriminator should also be regularized to have the consistency property, which forces the discriminator to be unchanged by arbitrary semantics-preserving perturbations and to focus more on semantic and structural differences between real and fake data.
Therefore, we propose a simple regularizer for the GAN discriminator: we augment images with semantics-preserving augmentations before they are fed into the discriminator and penalize the sensitivity of the discriminator to those augmentations. This technique is simple to use and surprisingly effective, and it is less computationally expensive than prior techniques. More importantly, in our experiments, consistency regularization always further improves model performance when spectral normalization is used, whereas the gains of previous regularization methods diminish in that case. In extensive ablation studies, we show that it works across a large range of GAN variants and datasets. We also show that simply applying this technique on top of existing GAN models leads to new state-of-the-art results as measured by Frechet Inception Distance (Heusel et al., 2017). In summary, our contributions are as follows: • We propose consistency regularization for GAN discriminators, yielding a simple, effective regularizer with lower computational cost than gradient-based regularization methods. • We conduct extensive experiments with different GAN variants to demonstrate that our technique interacts effectively with spectral normalization. Our consistency regularized GAN (CR-GAN) achieves the best FID scores for unconditional image generation on both CIFAR-10 and CelebA. • We show that simply applying the proposed technique can further boost the performance of state-of-the-art GAN models. We improve FID scores for conditional image generation from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012. 2 METHOD . 2.1 GANS . A GAN consists of a generator network and a discriminator network. The generator G takes a latent variable z ∼ p(z) sampled from a prior distribution and maps it to the observation space X. The discriminator D takes an observation x ∈ X and produces a decision output over possible observation sources (either from G or from the empirical data distribution). In the standard GAN training procedure, the generator G and the discriminator D are trained by minimizing the following objectives in an alternating fashion:

$$L_D = -\mathbb{E}_{x \sim p_{data}}[\log D(x)] - \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))], \qquad L_G = -\mathbb{E}_{z \sim p(z)}[\log D(G(z))], \quad (1)$$

where p(z) is usually a standard normal distribution. This formulation was originally proposed by Goodfellow et al. (2014) as the non-saturating (NS) GAN. A significant amount of research has been done on modifying this formulation in order to improve the training process. A notable example is the hinge-loss version of the adversarial loss (Lim & Ye, 2017; Tran et al., 2017):

$$L_D = -\mathbb{E}_{x \sim p_{data}}[\min(0, -1 + D(x))] - \mathbb{E}_{z \sim p(z)}[\min(0, -1 - D(G(z)))], \qquad L_G = -\mathbb{E}_{z \sim p(z)}[D(G(z))]. \quad (2)$$

Another commonly adopted formulation is the Wasserstein GAN (WGAN) (Arjovsky et al., 2017), in which the authors propose clipping the weights of the discriminator in an attempt to ensure that the GAN training procedure implicitly optimizes a bound on the Wasserstein distance between the target distribution and the distribution given by the generator. The WGAN loss can be written as

$$L_D = -\mathbb{E}_{x \sim p_{data}}[D(x)] + \mathbb{E}_{z \sim p(z)}[D(G(z))], \qquad L_G = -\mathbb{E}_{z \sim p(z)}[D(G(z))]. \quad (3)$$

Subsequent work has refined this technique in several ways (Gulrajani et al., 2017; Miyato et al., 2018a; Zhang et al., 2019), and the current widely-used practice is to enforce spectral normalization (Miyato et al., 2018a) on both the generator and the discriminator. 2.2 CONSISTENCY REGULARIZATION . Consistency regularization has emerged as a gold-standard technique (Sajjadi et al., 2016; Laine & Aila, 2016; Zhai et al., 2019; Xie et al., 2019; Oliver et al., 2018; Berthelot et al., 2019) for semi-supervised learning on image data. The basic idea is simple: an input image is perturbed in some semantics-preserving way, and the sensitivity of the classifier to that perturbation is penalized. The perturbation can take many forms: image flipping, cropping, or adversarial attacks. The regularization term is either the mean-squared error (Sajjadi et al., 2016; Laine & Aila, 2016) between the model's outputs for a perturbed and non-perturbed input, or the KL divergence (Xie et al., 2019; Miyato et al., 2018b) between the class distributions implied by the output logits. 2.3 CONSISTENCY REGULARIZATION FOR GANS . The goal of the discriminator in GANs is to distinguish real data from the fake data produced by the generator. This decision should be invariant to any valid domain-specific data augmentation: for example, in the image domain, whether an image is real should not change if we flip it horizontally or translate it by a few pixels. However, the GAN discriminator does not guarantee this property explicitly. To resolve this, we propose a consistency regularization on the GAN discriminator during training: we randomly augment training images as they are passed to the discriminator and penalize the sensitivity of the discriminator to those augmentations. We use $D_j(x)$ to denote the pre-activation output vector of the $j$-th layer of the discriminator given input $x$, and $T(x)$ to denote a stochastic data augmentation function. This function can be linear or nonlinear, but it aims to preserve the semantics of the input. Our proposed regularization is given by

$$\min_D L_{cr} = \min_D \sum_{j=m}^{n} \lambda_j \left\| D_j(x) - D_j(T(x)) \right\|^2, \quad (4)$$

where $j$ indexes the layers, $m$ and $n$ are the first and last layers on which consistency is enforced, $\lambda_j$ is the weight coefficient for the $j$-th layer, and $\|\cdot\|$ denotes the L2 norm of a given vector. This consistency regularization encourages the discriminator to produce the same output for a data point under various data augmentations.

Algorithm 1 Consistency Regularized GAN (CR-GAN). We use λ = 10 by default.
Input: generator and discriminator parameters θG, θD, consistency regularization coefficient λ, Adam hyperparameters α, β1, β2, batch size M, number of discriminator iterations per generator iteration ND.
1: for number of training iterations do
2:   for t = 1, ..., ND do
3:     for i = 1, ..., M do
4:       Sample z ∼ p(z), x ∼ pdata(x)
5:       Augment x to get T(x)
6:       $L_{cr}^{(i)} \leftarrow \| D(x) - D(T(x)) \|^2$
7:       $L_D^{(i)} \leftarrow D(G(z)) - D(x)$
8:     end for
9:     $\theta_D \leftarrow \mathrm{Adam}\big(\tfrac{1}{M} \sum_{i=1}^{M} (L_D^{(i)} + \lambda L_{cr}^{(i)}),\ \alpha, \beta_1, \beta_2\big)$
10:   end for
11:   Sample a batch of latent variables $\{z^{(i)}\}_{i=1}^{M} \sim p(z)$
12:   $\theta_G \leftarrow \mathrm{Adam}\big(\tfrac{1}{M} \sum_{i=1}^{M} (-D(G(z^{(i)}))),\ \alpha, \beta_1, \beta_2\big)$
13: end for

In our experiments, we find that applying consistency regularization to the last layer of the discriminator, before the activation function, is sufficient. $L_{cr}$ can then be rewritten as

$$L_{cr} = \left\| D(x) - D(T(x)) \right\|^2, \quad (5)$$

where from now on we drop the layer index for brevity.
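As a concrete reading of Eq. (5), here is a minimal PyTorch-style sketch. It is an illustration rather than the authors' released code; in particular, the (batch, features) shape of the discriminator's pre-activation output is an assumption.

```python
import torch

def consistency_loss(D, x, augment):
    """L_cr of Eq. (5): squared L2 distance between the discriminator's
    last pre-activation outputs for an image and its augmented copy,
    averaged over the batch. `augment` is a stochastic
    semantics-preserving transform (e.g. a random flip or shift)."""
    d_orig, d_aug = D(x), D(augment(x))   # shape: (batch, features)
    return (d_orig - d_aug).pow(2).sum(dim=1).mean()
```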
This cost is added to the discriminator loss (weighted by a hyper-parameter λ) when updating the discriminator parameters; the generator update remains unchanged. Thus, the overall consistency regularized GAN (CR-GAN) objective is

$$L_D^{cr} = L_D + \lambda L_{cr}, \qquad L_G^{cr} = L_G. \quad (6)$$

Our design of $L_{cr}$ is general-purpose and can therefore work with any valid adversarial losses $L_G$ and $L_D$ for GANs (see Section 2.1 for examples). Algorithm 1 illustrates CR-GAN with the Wasserstein loss as an example. In contrast to previous regularizers, our method adds little overhead: the only extra computational cost comes from feeding an additional (third) image through the discriminator, forward and backward, when updating the discriminator parameters.
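A minimal sketch of one discriminator step under objective (6), mirroring Algorithm 1. This is illustrative PyTorch-style code under the assumption that D outputs a (batch, 1) pre-activation critic value per image, as in the Wasserstein formulation used in the listing.

```python
import torch

def cr_gan_d_step(D, G, x_real, z, augment, opt_D, lam=10.0):
    """One discriminator update of CR-GAN (cf. Algorithm 1):
    Wasserstein critic loss plus lambda times the consistency term."""
    opt_D.zero_grad()
    wgan_loss = D(G(z).detach()).mean() - D(x_real).mean()           # L_D
    cr_loss = (D(x_real) - D(augment(x_real))).pow(2).sum(1).mean()  # L_cr
    (wgan_loss + lam * cr_loss).backward()
    opt_D.step()
```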
This paper proposes to use consistency regularization for training GANs, a technique known to work well in semi-supervised learning. The technique consists in applying a transformation to real images and enforcing that the discriminator's features for the transformed inputs and the original inputs are similar. The authors show that this technique enables them to significantly improve the performance of a standard GAN on CIFAR-10. They also carry out an ablation study on the influence of the different parts of the proposed technique.
SP:d3d5b63a44519237d64cc3087c26a5c910a6e17e
The paper presents a new regularization technique, termed consistency regularization, for training GANs. The idea is the following: the authors propose to penalize the sensitivity of the last layer of the discriminator to augmented images. This idea is simple yet efficient: it is easy to implement, the regularization term is gradient-free, and its computation is up to 1.8 times faster than standard gradient-based regularization techniques. The authors tested different augmentation techniques and concluded that simple ones (e.g., shifting and flipping) behave better. The experimental results show an impressive gain in the FID measure, setting a new state-of-the-art score for class-conditional image generation on the CIFAR-10 dataset.
SP:d3d5b63a44519237d64cc3087c26a5c910a6e17e
ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring
1 INTRODUCTION . Semi-supervised learning (SSL) provides a means of leveraging unlabeled data to improve a model's performance when only limited labeled data is available. This can enable the use of large, powerful models when labeling data is expensive or inconvenient. Research on SSL has produced a diverse collection of approaches, including consistency regularization (Sajjadi et al., 2016; Laine & Aila, 2017), which encourages a model to produce the same prediction when the input is perturbed, and entropy minimization (Grandvalet & Bengio, 2005), which encourages the model to output high-confidence predictions. The recently proposed "MixMatch" algorithm (Berthelot et al., 2019) combines these techniques in a unified loss function and achieves strong performance on a variety of image classification benchmarks. In this paper, we propose two improvements which can be readily integrated into MixMatch's framework. First, we introduce "distribution alignment", which encourages the distribution of a model's aggregated class predictions to match the marginal distribution of ground-truth class labels. This concept was introduced as a "fair" objective by Bridle et al. (1992), where a related loss term was shown to arise from maximizing the mutual information between model inputs and outputs. After reviewing this theoretical framework, we show how distribution alignment can be straightforwardly added to MixMatch by modifying the "guessed labels" using a running average of model predictions. Second, we introduce "augmentation anchoring", which replaces the consistency regularization component of MixMatch. For each unlabeled input, augmentation anchoring first generates a weakly augmented version (e.g., using only a flip and a crop) and then generates multiple strongly augmented versions. The model's prediction for the weakly augmented input is treated as the basis of the guessed label for all of the strongly augmented versions. To generate strong augmentations, we introduce a variant of AutoAugment (Cubuk et al., 2018) based on control theory, which we dub "CTAugment". Unlike AutoAugment, CTAugment learns an augmentation policy alongside model training, making it particularly convenient in SSL settings. We call our improved algorithm "ReMixMatch" and validate it experimentally on a suite of standard SSL image benchmarks. ReMixMatch achieves state-of-the-art accuracy across all labeled-data amounts, for example achieving 93.73% accuracy with 250 labels on CIFAR-10, compared to the previous state of the art of 88.92% (and to 96.09% for fully supervised classification with 50,000 labels). We also push the limited-data setting further than ever before, ultimately achieving a median accuracy of 84.92% with only 40 labels (just 4 labels per class) on CIFAR-10. To quantify the impact of our proposed improvements, we carry out an extensive ablation study. Finally, we release all of our models and code to facilitate future work on semi-supervised learning. 2 BACKGROUND . The goal of a semi-supervised learning algorithm is to learn from unlabeled data in a way that improves performance on labeled data. Typical ways of achieving this include training against "guessed" labels for unlabeled data or optimizing a heuristically motivated objective that does not rely on labels.
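To make the "guessed label" idea concrete, here is a minimal pseudo-labeling sketch. It illustrates the general recipe rather than ReMixMatch itself, and the 0.95 confidence threshold is an assumed value for illustration.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, u, threshold=0.95):
    """Minimal pseudo-labeling (cf. Lee, 2013): take the model's
    confident predicted class on an unlabeled batch `u` as a hard
    training target, masking out low-confidence examples."""
    logits = model(u)
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)
        conf, guess = probs.max(dim=1)
        mask = (conf >= threshold).float()
    per_example = F.cross_entropy(logits, guess, reduction="none")
    return (per_example * mask).mean()
```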
This section reviews the semi-supervised learning methods relevant to ReMixMatch, with a particular focus on the components of the MixMatch algorithm on which we base our work. Consistency Regularization Many SSL methods rely on consistency regularization to enforce that the model output remains unchanged when the input is perturbed. First proposed in (Bachman et al., 2014), (Sajjadi et al., 2016) and (Laine & Aila, 2017), this approach was referred to as "Regularization With Stochastic Transformations and Perturbations" and the "Π-Model", respectively. While some work perturbs inputs adversarially (Miyato et al., 2018) or using dropout (Laine & Aila, 2017; Tarvainen & Valpola, 2017), the most common perturbation is domain-specific data augmentation (Laine & Aila, 2017; Sajjadi et al., 2016; Berthelot et al., 2019; Xie et al., 2019). The loss function used to measure consistency is typically either the mean-squared error (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Sajjadi et al., 2016) or the cross-entropy (Miyato et al., 2018; Xie et al., 2019) between the model's outputs for a perturbed and non-perturbed input. Entropy Minimization Grandvalet & Bengio (2005) argue that unlabeled data should be used to ensure that classes are well separated. This can be achieved by encouraging the model's output distribution to have low entropy (i.e., to make "high-confidence" predictions) on unlabeled data. For example, one can explicitly add a loss term to minimize the entropy of the model's predicted class distribution on unlabeled data (Grandvalet & Bengio, 2005; Miyato et al., 2018). Related to this idea are "self-training" methods (McLachlan, 1975; Rosenberg et al., 2005) such as Pseudo-Label (Lee, 2013), which use the predicted class on an unlabeled input as a hard target for the same input, implicitly minimizing the entropy of the prediction. Standard Regularization Outside of the SSL setting, it is often useful to regularize models in the over-parameterized regime, and such regularization can be applied when training on both labeled and unlabeled data. For example, standard "weight decay" (Hinton & van Camp, 1993), where the L2 norm of the parameters is minimized, is often used alongside SSL techniques. Similarly, the powerful MixUp regularization (Zhang et al., 2017), which trains a model on linear interpolants of inputs and labels, has recently been applied to SSL (Berthelot et al., 2019; Verma et al., 2019). Other Approaches The three aforementioned categories of SSL techniques do not cover the full literature on semi-supervised learning. For example, there is a significant body of research on "transductive" or graph-based semi-supervised learning techniques, which leverage the idea that unlabeled datapoints should be assigned the label of a labeled datapoint if they are sufficiently similar (Gammerman et al., 1998; Joachims, 2003; 1999; Bengio et al., 2006; Liu et al., 2018). Since our work does not involve these (or other) approaches to SSL, we do not discuss them further; a more substantial overview of SSL methods is available in (Chapelle et al., 2006). 2.1 MIXMATCH . MixMatch (Berthelot et al., 2019) unifies several of the previously mentioned SSL techniques.
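Two of the ingredients MixMatch draws on, consistency regularization and entropy minimization, can be written as simple unlabeled-data loss terms. The following PyTorch-style sketch gives illustrative formulations of the reviewed ideas, not MixMatch's exact losses.

```python
import torch
import torch.nn.functional as F

def pi_model_consistency(model, u, perturb):
    """Π-Model-style consistency: mean-squared error between the model's
    class probabilities for a perturbed and an unperturbed unlabeled
    input. `perturb` is a stochastic augmentation function."""
    p_clean = F.softmax(model(u), dim=1)
    p_pert = F.softmax(model(perturb(u)), dim=1)
    return (p_clean - p_pert).pow(2).sum(dim=1).mean()

def entropy_minimization(model, u):
    """Entropy-minimization term (Grandvalet & Bengio, 2005): penalize
    high-entropy (low-confidence) predictions on unlabeled data."""
    p = F.softmax(model(u), dim=1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1).mean()
```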
The algorithm works by generating "guessed labels" for each unlabeled example and then using fully supervised techniques to train on the original labeled data along with the guessed labels for the unlabeled data. This section reviews the necessary details of MixMatch; see (Berthelot et al., 2019) for a full definition. Let $\mathcal{X} = \{(x_b, p_b) : b \in (1, \ldots, B)\}$ be a batch of labeled data and their corresponding one-hot labels representing one of $L$ classes, and let $\hat{x}_b$ be augmented versions of these labeled examples. Similarly, let $\mathcal{U} = \{u_b : b \in (1, \ldots, B)\}$ be a batch of unlabeled examples. Finally, let $p_{model}(y \mid x; \theta)$ be the predicted class distribution produced by the model for input $x$. MixMatch first produces $K$ weakly augmented versions $\hat{u}_{b,k}$, $k \in \{1, \ldots, K\}$, of each unlabeled datapoint. Then, it generates a "guessed label" $q_b$ for each $u_b$ by computing the average prediction $\bar{q}_b$ across the $K$ augmented versions:

$$\bar{q}_b = \frac{1}{K} \sum_{k} p_{model}(y \mid \hat{u}_{b,k}; \theta).$$

The guessed label distribution is then sharpened by adjusting its temperature (i.e., raising all probabilities to the power $1/T$ and renormalizing). Finally, pairs of examples $(x_1, p_1), (x_2, p_2)$ from the combined set of labeled examples and unlabeled examples with label guesses are fed into the MixUp (Zhang et al., 2017) algorithm to compute examples $(x', p')$, where $x' = \lambda x_1 + (1 - \lambda) x_2$ for $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$, and similarly for $p'$. Given these mixed-up examples, MixMatch performs standard fully supervised training with minor modifications. A standard cross-entropy loss is used for labeled data, whereas the loss for unlabeled data is computed using a mean squared error (i.e., the Brier score (Brier, 1950)) and is weighted by a hyperparameter $\lambda_U$. The terms $K$ (number of augmentations), $T$ (sharpening temperature), $\alpha$ (MixUp Beta parameter), and $\lambda_U$ (unlabeled loss weight) are MixMatch's hyperparameters. For augmentation, shifting and flipping were used for the CIFAR-10, CIFAR-100, and STL-10 datasets, and shifting alone was used for SVHN. 3 REMIXMATCH . Having introduced MixMatch, we now turn to the two improvements we propose in this paper: distribution alignment and augmentation anchoring. For clarity, we describe how we integrate them into the base MixMatch algorithm; the full ReMixMatch procedure is shown in Algorithm 1. 3.1 DISTRIBUTION ALIGNMENT . Our first contribution is distribution alignment, which enforces that the aggregate of predictions on unlabeled data matches the distribution of the provided labeled data. This general idea was first introduced over 25 years ago (Bridle et al., 1992) but, to the best of our knowledge, is not used in modern SSL techniques. A schematic of distribution alignment can be seen in Fig. 1. After reviewing and extending the theory, we describe how it can be straightforwardly included in ReMixMatch.

Algorithm 1 ReMixMatch algorithm for producing a collection of processed labeled examples and processed unlabeled examples with label guesses (cf. Berthelot et al. (2019), Algorithm 1).
1: Input: Batch of labeled examples and their one-hot labels $\mathcal{X} = \{(x_b, p_b) : b \in (1, \ldots, B)\}$, batch of unlabeled examples $\mathcal{U} = \{u_b : b \in (1, \ldots, B)\}$, sharpening temperature $T$, number of augmentations $K$, Beta distribution parameter $\alpha$ for MixUp.
2: for b = 1 to B do
3:   $\hat{x}_b = \mathrm{StrongAugment}(x_b)$  // Apply strong data augmentation to $x_b$
4:   $\hat{u}_{b,k} = \mathrm{StrongAugment}(u_b),\ k \in \{1, \ldots, K\}$  // Apply strong data augmentation K times to $u_b$
5:   $\tilde{u}_b = \mathrm{WeakAugment}(u_b)$  // Apply weak data augmentation to $u_b$
6:   $q_b = p_{model}(y \mid \tilde{u}_b; \theta)$  // Compute prediction for the weak augmentation of $u_b$
7:   $q_b = \mathrm{Normalize}(q_b \times p(y) / \tilde{p}(y))$  // Apply distribution alignment
8:   $q_b = \mathrm{Normalize}(q_b^{1/T})$  // Apply temperature sharpening to the label guess
9: end for
10: $\hat{\mathcal{X}} = ((\hat{x}_b, p_b);\ b \in (1, \ldots, B))$  // Augmented labeled examples and their labels
11: $\hat{\mathcal{U}}_1 = ((\hat{u}_{b,1}, q_b);\ b \in (1, \ldots, B))$  // First strongly augmented unlabeled example and guessed label
12: $\hat{\mathcal{U}} = ((\hat{u}_{b,k}, q_b);\ b \in (1, \ldots, B),\ k \in (1, \ldots, K))$  // All strongly augmented unlabeled examples
13: $\hat{\mathcal{U}} = \hat{\mathcal{U}} \cup ((\tilde{u}_b, q_b);\ b \in (1, \ldots, B))$  // Add weakly augmented unlabeled examples
14: $\mathcal{W} = \mathrm{Shuffle}(\mathrm{Concat}(\hat{\mathcal{X}}, \hat{\mathcal{U}}))$  // Combine and shuffle labeled and unlabeled data
15: $\mathcal{X}' = (\mathrm{MixUp}(\hat{\mathcal{X}}_i, \mathcal{W}_i);\ i \in (1, \ldots, |\hat{\mathcal{X}}|))$  // Apply MixUp to labeled data and entries from $\mathcal{W}$
16: $\mathcal{U}' = (\mathrm{MixUp}(\hat{\mathcal{U}}_i, \mathcal{W}_{i+|\hat{\mathcal{X}}|});\ i \in (1, \ldots, |\hat{\mathcal{U}}|))$  // Apply MixUp to unlabeled data and the rest of $\mathcal{W}$
17: return $\mathcal{X}'$, $\mathcal{U}'$, $\hat{\mathcal{U}}_1$
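Steps 7-8 of the listing are the heart of the label-guessing procedure. A minimal sketch of the distribution-alignment and sharpening operations follows; tensor shapes and the T = 0.5 temperature are illustrative assumptions.

```python
import torch

def align_and_sharpen(q, p_labeled, p_model_avg, T=0.5):
    """Steps 7-8 of the ReMixMatch listing: scale the guessed label q by
    the ratio of the labeled-data class marginal p(y) to a running
    average of model predictions p~(y), renormalize, then sharpen with
    temperature T. q has shape (batch, L); the marginals have shape (L,)."""
    q = q * (p_labeled / p_model_avg.clamp_min(1e-6))
    q = q / q.sum(dim=1, keepdim=True)        # distribution alignment
    q = q.pow(1.0 / T)
    return q / q.sum(dim=1, keepdim=True)     # temperature sharpening
```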
This paper proposes two modifications to the MixMatch method and achieves improved accuracy on a range of semi-supervised benchmarks. The first modification enforces that the distribution of predicted labels matches the distribution of the labeled data. The second is adding a learned data-augmentation strategy and adapting the method to work with strong data augmentation. The final method, titled ReMixMatch, improves significantly over MixMatch, especially in the low-data regime.
SP:620dded5d2b04f0d178ebd00c303f9fb43afdb30
This paper presents ReMixMatch, an improved version of MixMatch. The main contributions are distribution alignment and augmentation anchoring. Distribution alignment rescales the predictions based on the difference between the model's marginals and a running-average estimate of the ground-truth label distribution. Augmentation anchoring, instead of computing the guessed probabilities on unlabelled data as the average probabilities over transformed samples (as in MixMatch), uses as guessed labels the probabilities obtained from weak transformations (flip+crop) even when training on stronger transformations (AutoAugment-like).
SP:620dded5d2b04f0d178ebd00c303f9fb43afdb30
Linear Symmetric Quantization of Neural Networks for Low-precision Integer Hardware
1 INTRODUCTION. Deep neural networks have shown excellent performance on various computer vision and natural language processing tasks, such as classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), object detection (Girshick, 2015; Redmon et al., 2016; He et al., 2017), segmentation (Long et al., 2015; Noh et al., 2015), machine translation (Zhang et al., 2018b), and speech recognition (Nassif et al., 2019). While the past few years witnessed the success of DNNs on cloud and server-end computers, neural networks have recently been pushed to embedded and mobile areas to enable edge intelligence. In these scenarios, the power provision and computational strength of edge computing devices are limited. As a result, it is essential to have more efficient network architectures and less expensive inference overhead. Therefore, there is increasing attention from the research community on compressing modern deep neural networks, which are typically over-parameterized and computationally costly. Several categories of approaches have been proposed to decrease the computational overhead of neural networks, such as lightweight neural network architectures (Howard et al., 2017), neural architecture search (NAS) (Elsken et al., 2018), and network pruning (Han et al., 2015; 2016; Wen et al., 2016; Molchanov et al., 2017). Besides these techniques, quantizing high-precision floating-point networks to lower-bitwidth representations can also drastically reduce both the static parameters and the intermediate data generated during network inference, resulting in reduced memory footprint and computational intensity. This paper focuses on the quantization of neural networks. Quantization is also closely related to the implementation of specialized hardware that maps the procedure of network inference onto energy-efficient low-precision integer or fixed-point arithmetic circuits. From the hardware perspective, low-precision integer accelerators and processors dominate the solutions targeting neural network inference, especially for mobile and embedded scenarios. Google's Tensor Processing Unit 1.0 (TPU) (Jouppi et al., 2017), the Unified Deep Neural Network Accelerator (UNPU) (Lee et al., 2018), Eyeriss (Chen et al., 2018), Stripes (Judd et al., 2016), Pragmatic (Albericio et al., 2017), and many other newly proposed hardware implementations generally rely on the effectiveness of the underlying quantization techniques, which are especially crucial for low-precision integer hardware designed to process binary, ternary, 4-bit, or 8-bit networks. In other words, quantization is not only a method to reduce the memory footprint as in traditional work, but also a mandatory step to make the network deployable on integer hardware. Though a lot of prior work investigates low-precision quantization, it mainly targets reducing the memory overhead caused by floating-point or high-precision data representations in the networks, and does not focus on specialized integer hardware for network inference. To enable neural network processors to work with low-precision integer operands and minimize accuracy losses, a good network quantizer must satisfy the constraints listed in Table 1.
First, all the parameters, including weights, biases, activations, partial results that eventually accumulate to an activation, and even the scaling factors, which are indispensable for low-precision networks like binary and ternary representations, must be quantized into low-bitwidth integers as required by the underlying specialized hardware. Some prior work (Zhou et al., 2016; Zhu et al., 2017; Zhang et al., 2018a; Mishra et al., 2018; Choi, 2018) either leaves the bias and scaling factors unquantized or keeps the first and last layers in full or high precision. Besides, some designs rely on high-precision internal registers or ALUs to support the high-precision partial results generated during computation before the final output of activations or features. For example, Krishnamoorthi (2018), which quantizes the weights and activations to 8-bit, directly uses 32-bit accumulators to cache the intermediate values or partial results to avoid overflows. However, for 4-bit and lower bitwidths, integer accelerators cannot afford high-bitwidth accumulators, which imply higher silicon area and power cost. For integer-only arithmetic, we quantize the bias to fixed-point numbers using a straightforward method. Because the value range of these numbers is wide, the low-bitwidth accumulators can overflow. To overcome this problem, we quantize the bias to 8-bit and fine-tune the bias of the model. As shown in Figure 1, the bitwidth of the accumulators can then be reduced to 16-bit. Second, the BatchNorm (BN) layer does not necessarily need to be processed during inference, reducing computation and memory cost. In most convolutional neural networks, BN layers directly follow Conv or FC layers. In these situations, BN can be merged into the weights and biases of the corresponding Conv or FC layers. However, Zhou et al. (2016) and Zhang et al. (2018a) use asymmetric or non-linear quantization, which creates barriers to BN fusion. There are two ways to overcome this obstacle. One is “BN folded training” (Krishnamoorthi, 2018), which performs BN fusion before weight quantization in every training step; the other is to use symmetric linear quantization. The first method doubles the training time, while the second has no additional computational overhead, as will be introduced in Section 3.4. Third, linear quantization is necessary for state-of-the-art accelerators. There are many non-linear quantization methods that achieve excellent bitwidth-reduction efficacy and accuracy trade-offs. In these cases, however, additional transformations are required to obtain correct arithmetic results after quantizing values onto a non-linear distribution. For example, as in Han et al. (2016) and Park et al. (2017), table lookups are needed for correct multiplication between quantized values. In contrast, linear quantization can make full use of the low-precision arithmetic components in off-the-shelf accelerators. Further, linear quantization can be divided into a symmetric mode and an asymmetric mode. Asymmetric quantization has one more parameter (e.g., the zero-point (Krishnamoorthi, 2018)) than symmetric quantization, and it requires an additional subtraction or linear operation before multiplication. As a result, the symmetric mode is compatible with mainstream integer accelerator chip designs and does not require redesigning their datapaths.
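As a concrete illustration of the two hardware-friendly ingredients discussed above, the sketch below shows k-bit linear symmetric quantization and the fusion of a BN layer into a preceding Conv layer. This is a minimal sketch under common conventions (per-tensor scale, OIHW weight layout), not the paper's implementation.

```python
import numpy as np

def linear_symmetric_quantize(x, scale, bits):
    """Map real values onto a uniform, zero-centered integer grid (no zero-point)."""
    q_max = 2 ** (bits - 1) - 1                       # e.g., 127 for 8-bit
    q = np.clip(np.round(x / scale), -q_max, q_max)   # integer codes
    return q.astype(np.int32), q * scale              # codes and dequantized values

def fuse_batchnorm(w, b, gamma, beta, mu, var, eps=1e-5):
    """Fold BN (scale gamma, shift beta, running stats mu/var) into conv weights/bias."""
    s = gamma / np.sqrt(var + eps)                    # per-output-channel rescaling
    w_fused = w * s.reshape(-1, 1, 1, 1)              # assumes OIHW weight layout
    b_fused = (b - mu) * s + beta
    return w_fused, b_fused
```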
Fourth, different CNNs or applications usually use a variety of activation functions. For instance, the object detection model of Redmon et al. (2016) typically uses Leaky ReLU, and the bottleneck of a ResNet block does not use any activation function. Quantization methods are expected to adapt to these situations. However, Zhang et al. (2018a) and Park et al. (2017) only focus on the quantization of activations after ReLU. In this paper, we demonstrate that our method is friendly to different activation functions such as Leaky ReLU. Some previous studies change the network structure for better quantization performance; e.g., Mishra et al. (2018) double or even triple the convolutional filters to reduce accuracy degradation. For energy-efficient integer neural network chips, the changed network architecture has to be remapped to hardware, and the increased filters and parameters add computational and memory-access overhead. As a result, keeping the network structure intact is important. Considering all the factors above, in this paper we present a learned linear symmetric quantization (LLSQ) method and also evaluate it on a low-precision neural network accelerator through hardware-software co-design. Specifically, our main contributions are: • Unlike most other quantization methods, we quantize the whole network, including the first and last layers. We also quantize biases and scaling factors, in support of the low-bitwidth integer arithmetic units and accumulators on the accelerator. • We adopt learned linear symmetric quantization schemes which are hardware friendly (e.g., the convenience of BN fusion) while achieving state-of-the-art prediction accuracy. • We design a specialized low-precision CNN inference accelerator to validate the methodology, which supports 2-/4-/8-bit integer operation and works with high efficiency. We then deploy our quantized model on the accelerator to illustrate the efficacy of the workflow. 2 MOTIVATION. Edge or embedded neural network accelerators generally have three primary design goals: small footprint, high throughput/low latency, and low power. For different applications and scenarios, prior research on specialized deep learning processors often falls into different categories: cloud-oriented hardware for warehouse machines, low-power mobile processors, and ultra-low-power accelerators for IoT or cyber-physical devices. For mobile and embedded usage, specialized neural network processors are becoming increasingly popular as an efficient hardware solution for inference. DianNao (Chen et al., 2014) was proposed for fast inference of DNNs and uses 16-bit fixed-point multipliers for small silicon area and low energy. Later, ShiDianNao (Du et al., 2015) was introduced, achieving extremely low energy consumption by putting all weights into SRAM to eliminate considerable DRAM accesses. Besides, DeepBurning (Wang et al., 2016) simplifies the accelerator design flow for different NN models. Eyeriss (Chen et al., 2018) is another representative low-power accelerator; it presents a row-stationary (RS) dataflow to minimize data-movement energy consumption on a spatial architecture. To further reduce computation overhead, EIE (Han et al., 2016) exploits the sparsity and low-bit compression of NNs and achieves better throughput, energy, and area efficiency.
These typical edge neural network processors accept fixed-point data input and use fixed-point processing elements to reduce the power and chip-area overhead caused by floating-point arithmetic components and memory. For cloud scenarios, specialized architectures like the TPU (Jouppi et al., 2017) and FPGA-based accelerator cards are also replacing conventional GPGPUs and CPUs for high-throughput inference tasks. Even for cloud-oriented inference architectures, fixed-point processing architectures like the TPU are favored because they deliver much higher throughput for a given power budget and silicon-area overhead. However, for fixed-point or integer hardware targeting neural network acceleration, quantization is a prerequisite to convert the floating-point network model into the fixed-point format compatible with the specialized hardware, and it is also a critical step to ensure the accuracy of the network after conversion. Many prior quantization methods are intended to reduce the running overhead of networks but ignore the architecture and working mechanism of integer neural network processors, as illustrated in Table 1; they sometimes face considerable accuracy losses or performance penalties, or even fail to be supported on a realistic integer datapath because they are unaware of the underlying hardware. This problem becomes particularly important for hardware designed to run low-bitwidth networks such as binary, ternary, and 2/4-bit models. For instance, Deep Compression and WQ are clustering-based quantization methods, and they still need high-precision values to represent the weights, biases, and activations. As a result, they are not compatible with hardware that only supports low-precision computing. LQ-Nets uses non-linear quantization based on a binary code and basis vector, and it can theoretically calculate the inner products between quantized weights and activations by bitwise operations only. However, it requires intensive modifications to the design of current processors, adding many look-up tables to the datapath. Further, biases and scaling factors are not quantized in PACT and WRPN, resulting in a performance penalty when additional high-precision or floating-point ALUs are employed to deal with them. In contrast, our LLSQ is designed to ease the model quantization flow for specialized integer neural network processors by conforming to the constraints specified in Table 1. To validate the importance of a hardware-aware quantizer and software/hardware co-design, we also design a specialized CNN accelerator for wearable applications. The accelerator supports 2-/4-/8-bit integer operation and adopts a dataflow designed for low latency and low energy.
This paper proposes a linear symmetric quantizer for integer accelerators called LLSQ, which learns the quantization scaling factor using a simulated gradient as the update policy. The main contribution is enabling inference on integer-only hardware by covering all parameters of all operators in convolutional networks, including weights, biases, activations, and scaling factors. To address the quantization noise in bias parameters, the authors adopt the straight-through estimator and fine-tune the parameters after quantization. To improve inference efficiency, they apply BN layer fusion. They conduct experiments on public datasets for image classification and object detection, concluding that LLSQ achieves lower accuracy degradation than previous work. Finally, they test the quantized model on a specialized integer accelerator, showing the feasibility of the quantization on real hardware.
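The straight-through estimator mentioned above is commonly implemented as an identity-gradient bypass around the non-differentiable rounding step. The sketch below shows one standard way to express this in PyTorch; it is illustrative, not the paper's code.

```python
import torch

def ste_round(x: torch.Tensor) -> torch.Tensor:
    # Forward pass computes round(x); backward pass treats the op as identity,
    # since the detached term contributes no gradient.
    return x + (torch.round(x) - x).detach()
```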
SP:8674a490809de44ceedfbfce7a48920a11390355
This paper focuses on the quantization of ConvNets. It proposes a learned linear symmetric quantizer to reduce the precision of weights, biases, and activations. The proposed approach works as follows: for a pre-trained neural network, it computes each new weight and activation as the product of a quantized value and a scaling factor. The quantization is based on a simple linear, symmetric function as in equation (1). The value of the scaling factor is found via a "simulated gradient" or an exponential moving average during re-training. Next, batch normalization is fused into the convolution, and the scaling factors and biases are re-calculated. Last, the scaling factor of the convolution is merged into the bias terms to remove the need for a multiplication in the hardware implementation. Since bias terms usually have a much larger dynamic range, higher precision is used to represent biases. Experiments show that the method achieves competitive results compared with previous quantization methods, and the quantized models can be deployed on hardware more easily.
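As an illustration of the moving-average scale search mentioned above, a minimal sketch follows; the momentum value and the max-abs range estimator are assumptions chosen for illustration, not taken from the paper.

```python
import numpy as np

def update_scale_ema(scale, x, bits, momentum=0.99):
    """Track a per-tensor quantization scale as an EMA of the observed range."""
    q_max = 2 ** (bits - 1) - 1
    observed = np.max(np.abs(x)) / q_max       # scale that would cover the current batch
    return momentum * scale + (1.0 - momentum) * observed
```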
SP:8674a490809de44ceedfbfce7a48920a11390355
Contrastive Learning of Structured World Models
A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations. 1 INTRODUCTION. Compositional reasoning in terms of objects, relations, and actions is a central ability in human cognition (Spelke & Kinzler, 2007). This ability serves as a core motivation behind a range of recent works that aim at enriching machine learning models with the ability to disentangle scenes into objects, their properties, and relations between them (Chang et al., 2016; Battaglia et al., 2016; Watters et al., 2017; van Steenkiste et al., 2018; Kipf et al., 2018; Sun et al., 2018; 2019b; Xu et al., 2019). These structured neural models greatly facilitate predicting physical dynamics and the consequences of actions, and provide a strong inductive bias for generalization to novel environment situations, allowing models to answer counterfactual questions such as “What would happen if I pushed this block instead of pulling it?”. Arriving at a structured description of the world in terms of objects and relations in the first place, however, is a challenging problem. While most methods in this area require some form of human annotation for the extraction of objects or relations, several recent works study the problem of object discovery from visual data in a completely unsupervised or self-supervised manner (Eslami et al., 2016; Greff et al., 2017; Nash et al., 2017; van Steenkiste et al., 2018; Kosiorek et al., 2018; Janner et al., 2019; Xu et al., 2019; Burgess et al., 2019; Greff et al., 2019; Engelcke et al., 2019). These methods follow a generative approach, i.e., they learn to discover object-based representations by performing visual predictions or reconstruction and by optimizing an objective in pixel space. Placing a loss in pixel space requires carefully trading off structural constraints on latent variables vs. accuracy of pixel-based reconstruction. Typical failure modes include ignoring visually small, but relevant features for predicting the future, such as a bullet in an Atari game (Kaiser et al., 2019), or wasting model capacity on visually rich, but otherwise potentially irrelevant features, such as static backgrounds.
To avoid such failure modes, we propose to adopt a discriminative approach using contrastive learning, which scores real against fake experiences in the form of state-action-state triples from an experience buffer (Lin, 1992), in a similar fashion as typical graph embedding approaches score true facts in the form of entity-relation-entity triples against corrupted triples or fake facts. We introduce Contrastively-trained Structured World Models (C-SWMs), a class of models for learning abstract state representations from observations in an environment. C-SWMs learn a set of abstract state variables, one for each object in a particular observation. Environment transitions are modeled using a graph neural network (Scarselli et al., 2009; Li et al., 2015; Kipf & Welling, 2016; Gilmer et al., 2017; Battaglia et al., 2018) that operates on latent abstract representations. This paper further introduces a novel object-level contrastive loss for unsupervised learning of object-based representations. We arrive at this formulation by adapting methods for learning translational graph embeddings (Bordes et al., 2013; Wang et al., 2014) to our use case. By establishing a connection between contrastive learning of state abstractions (François-Lavet et al., 2018; Thomas et al., 2018) and relational graph embeddings (Nickel et al., 2016a), we hope to provide inspiration and guidance for future model improvements in both fields. In a set of experiments, where we use a novel ranking-based evaluation strategy, we demonstrate that C-SWMs learn interpretable object-level state abstractions, accurately learn to predict state transitions many steps into the future, demonstrate combinatorial generalization to novel environment configurations, and learn to identify objects from scenes without supervision. 2 STRUCTURED WORLD MODELS. Our goal is to learn an object-oriented abstraction of a particular observation or environment state. In addition, we would like to learn an action-conditioned transition model of the environment that takes object representations and their relations and interactions into account. We start by introducing the general framework for contrastive learning of state abstractions and transition models without object factorization in Sections 2.1–2.2, and in the following describe a variant that utilizes object-factorized state representations, which we term a Structured World Model. 2.1 STATE ABSTRACTION. We consider an off-policy setting, where we operate solely on a buffer of offline experience, e.g., obtained from an exploration policy. Formally, this experience buffer B = {(s_t, a_t, s_{t+1})}_{t=1}^T contains T tuples of states s_t ∈ S, actions a_t ∈ A, and follow-up states s_{t+1} ∈ S, which are reached after taking action a_t. We do not consider rewards as part of our framework for simplicity. Our goal is to learn abstract or latent representations z_t ∈ Z of environment states s_t ∈ S that discard any information which is not necessary to predict the abstract representation of the follow-up state z_{t+1} ∈ Z after taking action a_t. Formally, we have an encoder E : S → Z which maps observed states to abstract state representations, and a transition model T : Z × A → Z operating solely on abstract state representations. 2.2 CONTRASTIVE LEARNING. Our starting point is the graph embedding method TransE (Bordes et al., 2013):
TransE embeds facts from a knowledge base K = {(e_t, r_t, o_t)}_{t=1}^T, which consists of entity-relation-entity triples (e_t, r_t, o_t), where e_t is the subject entity (analogous to the source state s_t in our case), r_t is the relation (analogous to the action a_t in our experience buffer), and o_t is the object entity (analogous to the target state s_{t+1}). TransE defines the energy of a triple (e_t, r_t, o_t) as H = d(F(e_t) + G(r_t), F(o_t)), where F (and G) are embedding functions that map discrete entities (and relations) to R^D, where D is the dimensionality of the embedding space, and d(·, ·) denotes the squared Euclidean distance. Training is carried out with an energy-based hinge loss (LeCun et al., 2006), with negative samples obtained by replacing the entities in a fact with random entities from the knowledge base. We can port TransE to our setting with only minor modifications. As the effect of an action is in general not independent of the source state, we replace G(r_t) with T(z_t, a_t), i.e., with the transition function, conditioned on both the action and the (embedded) source state via z_t = E(s_t). The overall energy of a state-action-state triple can then be defined as H = d(z_t + T(z_t, a_t), z_{t+1}). This additive form of the transition model provides a strong inductive bias for modeling effects of actions in the environment as translations in the abstract state space. Alternatively, one could model effects as linear transformations or rotations in the abstract state space, which motivates the use of a graph embedding method such as RESCAL (Nickel et al., 2011), ComplEx (Trouillon et al., 2016), or HolE (Nickel et al., 2016b). With the aforementioned modifications, we arrive at the following energy-based hinge loss:

L = d(z_t + T(z_t, a_t), z_{t+1}) + max(0, γ − d(z̃_t, z_{t+1})),   (1)

defined for a single (s_t, a_t, s_{t+1}) with a corrupted abstract state z̃_t = E(s̃_t), where s̃_t is sampled at random from the experience buffer. The margin γ is a hyperparameter for which we found γ = 1 to be a good choice. Unlike Bordes et al. (2013), we place the hinge only on the negative term instead of on the full loss, and we do not constrain the norm of the abstract states z_t, which we found to work better in our context (see Appendix A.3). The overall loss is to be understood as an expectation of the above over samples from the experience buffer B. 2.3 OBJECT-ORIENTED STATE FACTORIZATION. Our goal is to take into account the compositional nature of visual scenes, and hence we would like to learn a relational and object-oriented model of the environment that operates on a factored abstract state space Z = Z^1 × ... × Z^K, where K is the number of available object slots. We further assume an object-factorized action space A = A^1 × ... × A^K. This factorization ensures that each object is independently represented, and it allows for efficient sharing of model parameters across objects in the transition model. This serves as a strong inductive bias for better generalization to novel scenes and facilitates learning and object discovery. The overall C-SWM model architecture using object-factorized representations is shown in Figure 1. Encoder and Object Extractor. We split the encoder into two separate modules: 1) a CNN-based object extractor E_ext, and 2) an MLP-based object encoder E_enc.
The object extractor module is a CNN operating directly on image-based observations from the environment, with K feature maps in its last layer. Each feature map m_t^k = [E_ext(s_t)]_k can be interpreted as an object mask corresponding to one particular object slot, where [...]_k denotes selection of the k-th feature map. For simplicity, we only assign a single feature map per object slot, which sufficed for the experiments considered in this work (see Appendix A.4). To allow for encoding of more complex object features (other than, e.g., position/velocity), the object extractor can be adapted to produce multiple feature maps per object slot. After the object extractor module, we flatten each feature map m_t^k (object mask) and feed it into the object encoder E_enc. The object encoder shares weights across objects and returns an abstract state representation: z_t^k = E_enc(m_t^k) with z_t^k ∈ Z^k. We set Z^k = R^D in the following, where D is a hyperparameter. Relational Transition Model. We implement the transition model as a graph neural network (Scarselli et al., 2009; Li et al., 2015; Kipf & Welling, 2016; Battaglia et al., 2016; Gilmer et al., 2017; Battaglia et al., 2018), which allows us to model pairwise interactions between object states while being invariant to the order in which objects are represented. After the encoder stage, we have an abstract state description z_t^k ∈ Z^k and an action a_t^k ∈ A^k for every object in the scene. We represent actions as one-hot vectors (or a vector of zeros if no action is applied to a particular object), but note that other choices are possible, e.g., for continuous action spaces. The transition function then takes as input the tuple of object representations z_t = (z_t^1, ..., z_t^K) and actions a_t = (a_t^1, ..., a_t^K) at a particular time step:

Δz_t = T(z_t, a_t) = GNN({(z_t^k, a_t^k)}_{k=1}^K).   (2)

T(z_t, a_t) is implemented as a graph neural network (GNN) that takes the z_t^k as input node features. The model predicts updates Δz_t = (Δz_t^1, ..., Δz_t^K). The object representations for the next time step are obtained via z_{t+1} = (z_t^1 + Δz_t^1, ..., z_t^K + Δz_t^K). The GNN consists of node update functions f_node and edge update functions f_edge with shared parameters across all nodes and edges. These functions are implemented as MLPs and we choose the following form of message passing updates:

e_t^{(i,j)} = f_edge([z_t^i, z_t^j]),   (3)
Δz_t^j = f_node([z_t^j, a_t^j, Σ_{i≠j} e_t^{(i,j)}]),   (4)

where e_t^{(i,j)} is an intermediate representation of the edge or interaction between nodes i and j. This corresponds to a single round of node-to-edge and edge-to-node message passing. Alternatively, one could apply multiple rounds of message passing, but we did not find this to be necessary for the experiments considered in this work. Note that this update rule corresponds to message passing on a fully-connected scene graph, which is O(K^2). This can be reduced to linear complexity by reducing connectivity to nearest neighbors in the abstract state space, which we leave for future work. We denote the output of the transition function for the k-th object as Δz_t^k = T^k(z_t, a_t) in the following.
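To make the message-passing updates in eqs. (3)–(4) concrete, here is a minimal NumPy sketch assuming `f_edge` and `f_node` are provided as callables standing in for the MLPs; shapes and names are illustrative, not the authors' code.

```python
import numpy as np

def transition_gnn(z, a, f_edge, f_node):
    """One round of node-to-edge and edge-to-node message passing.

    z: (K, D) object states; a: (K, A) per-object one-hot actions.
    Returns the (K, D) per-object updates delta z_t.
    """
    K = z.shape[0]
    # eq. (3): e_t^(i,j) = f_edge([z_t^i, z_t^j]) for all ordered pairs i != j,
    # aggregated per target node j
    agg = [sum(f_edge(np.concatenate([z[i], z[j]])) for i in range(K) if i != j)
           for j in range(K)]
    # eq. (4): delta z_t^j = f_node([z_t^j, a_t^j, sum_i e_t^(i,j)])
    return np.stack([f_node(np.concatenate([z[j], a[j], agg[j]])) for j in range(K)])
```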
Multi-object Contrastive Loss. We only need to change the energy function to take the factorization of the abstract state space into account, which yields the following energy H for positive triples and H̃ for negative samples:

H = (1/K) Σ_{k=1}^K d(z_t^k + T^k(z_t, a_t), z_{t+1}^k),   H̃ = (1/K) Σ_{k=1}^K d(z̃_t^k, z_{t+1}^k),   (5)

where z̃_t^k is the k-th object representation of the negative state sample z̃_t = E(s̃_t). The overall contrastive loss for a single state-action-state sample from the experience buffer then takes the form:

L = H + max(0, γ − H̃).   (6)
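A compact NumPy sketch of the factored energies in eq. (5) and the hinge loss in eq. (6) follows (for K = 1 it reduces to the single-slot loss of eq. (1)); the array shapes are illustrative assumptions.

```python
import numpy as np

def multi_object_loss(z, delta_z, z_next, z_neg, gamma=1.0):
    """Contrastive loss for one (s_t, a_t, s_{t+1}) sample.

    z, delta_z, z_next, z_neg: (K, D) arrays of per-slot abstract states,
    predicted updates, next states, and a corrupted (negative) state.
    """
    H = np.mean(np.sum((z + delta_z - z_next) ** 2, axis=1))   # positive energy, eq. (5)
    H_neg = np.mean(np.sum((z_neg - z_next) ** 2, axis=1))     # negative energy, eq. (5)
    return H + max(0.0, gamma - H_neg)                         # hinge loss, eq. (6)
```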
This paper aims to learn a structured latent space for images, which is made up of objects and their relations. The method works by (1) extracting object masks via a CNN, (2) turning those masks into feature vectors via an MLP, (3) estimating an action-conditioned delta for each feature via a GNN. Learning happens with contrastive losses, which ask that each feature+delta is close to the true next feature, and far away from other random possibilities. Experiments in simple synthetic environments (e.g., 2D geometric shapes moving on a black background) show encouraging results.
SP:813bacb9aed3dba22dc9c379793d87506d53f362
Contrastive Learning of Structured World Models
A structured understanding of our world in terms of objects , relations , and hierarchies is an important component of human cognition . Learning such a structured world model from raw sensory data remains a challenge . As a step towards this goal , we introduce Contrastively-trained Structured World Models ( C-SWMs ) . CSWMs utilize a contrastive approach for representation learning in environments with compositional structure . We structure each state embedding as a set of object representations and their relations , modeled by a graph neural network . This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process . We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent , simple Atari games , and a multi-object physics simulation . Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments , while learning interpretable object-based representations . 1 INTRODUCTION . Compositional reasoning in terms of objects , relations , and actions is a central ability in human cognition ( Spelke & Kinzler , 2007 ) . This ability serves as a core motivation behind a range of recent works that aim at enriching machine learning models with the ability to disentangle scenes into objects , their properties , and relations between them ( Chang et al. , 2016 ; Battaglia et al. , 2016 ; Watters et al. , 2017 ; van Steenkiste et al. , 2018 ; Kipf et al. , 2018 ; Sun et al. , 2018 ; 2019b ; Xu et al. , 2019 ) . These structured neural models greatly facilitate predicting physical dynamics and the consequences of actions , and provide a strong inductive bias for generalization to novel environment situations , allowing models to answer counterfactual questions such as “ What would happen if I pushed this block instead of pulling it ? ” . Arriving at a structured description of the world in terms of objects and relations in the first place , however , is a challenging problem . While most methods in this area require some form of human annotation for the extraction of objects or relations , several recent works study the problem of object discovery from visual data in a completely unsupervised or self-supervised manner ( Eslami et al. , 2016 ; Greff et al. , 2017 ; Nash et al. , 2017 ; van Steenkiste et al. , 2018 ; Kosiorek et al. , 2018 ; Janner et al. , 2019 ; Xu et al. , 2019 ; Burgess et al. , 2019 ; Greff et al. , 2019 ; Engelcke et al. , 2019 ) . These methods follow a generative approach , i.e. , they learn to discover object-based representations by performing visual predictions or reconstruction and by optimizing an objective in pixel space . Placing a loss in pixel space requires carefully trading off structural constraints on latent variables vs. accuracy of pixel-based reconstruction . Typical failure modes include ignoring visually small , but relevant features for predicting the future , such as a bullet in an Atari game ( Kaiser et al. , 2019 ) , or wasting model capacity on visually rich , but otherwise potentially irrelevant features , such as static backgrounds . 
To avoid such failure modes , we propose to adopt a discriminative approach using contrastive learning , which scores real against fake experiences in the form of state-action-state triples from an experience buffer ( Lin , 1992 ) , in a similar fashion as typical graph embedding approaches score true facts in the form of entity-relation-entity triples against corrupted triples or fake facts . We introduce Contrastively-trained Structured World Models ( C-SWMs ) , a class of models for learning abstract state representations from observations in an environment . C-SWMs learn a set of abstract state variables , one for each object in a particular observation . Environment transitions are modeled using a graph neural network ( Scarselli et al. , 2009 ; Li et al. , 2015 ; Kipf & Welling , 2016 ; Gilmer et al. , 2017 ; Battaglia et al. , 2018 ) that operates on latent abstract representations . This paper further introduces a novel object-level contrastive loss for unsupervised learning of object-based representations . We arrive at this formulation by adapting methods for learning translational graph embeddings ( Bordes et al. , 2013 ; Wang et al. , 2014 ) to our use case . By establishing a connection between contrastive learning of state abstractions ( François-Lavet et al. , 2018 ; Thomas et al. , 2018 ) and relational graph embeddings ( Nickel et al. , 2016a ) , we hope to provide inspiration and guidance for future model improvements in both fields . In a set of experiments , where we use a novel ranking-based evaluation strategy , we demonstrate that C-SWMs learn interpretable object-level state abstractions , accurately learn to predict state transitions many steps into the future , demonstrate combinatorial generalization to novel environment configurations and learn to identify objects from scenes without supervision . 2 STRUCTURED WORLD MODELS . Our goal is to learn an object-oriented abstraction of a particular observation or environment state . In addition , we would like to learn an action-conditioned transition model of the environment that takes object representations and their relations and interactions into account . We start by introducing the general framework for contrastive learning of state abstractions and transition models without object factorization in Sections 2.1–2.2 , and in the following describe a variant that utilizes object-factorized state representations , which we term a Structured World Model . 2.1 STATE ABSTRACTION . We consider an off-policy setting , where we operate solely on a buffer of offline experience , e.g. , obtained from an exploration policy . Formally , this experience buffer B = { ( st , at , st+1 ) } Tt=1 contains T tuples of states st ∈ S , actions at ∈ A , and follow-up states st+1 ∈ S , which are reached after taking action at . We do not consider rewards as part of our framework for simplicity . Our goal is to learn abstract or latent representations zt ∈ Z of environment states st ∈ S that discard any information which is not necessary to predict the abstract representation of the followup state zt+1 ∈ Z after taking action at . Formally , we have an encoder E : S → Z which maps observed states to abstract state representations and a transition model T : Z × A → Z operating solely on abstract state representations . 2.2 CONTRASTIVE LEARNING . Our starting point is the graph embedding method TransE ( Bordes et al. 
, 2013 ) : TransE embeds facts from a knowledge base K = { ( et , rt , ot ) } Tt=1 , which consists of entity-relation-entity triples ( et , rt , ot ) , where et is the subject entity ( analogous to the source state st in our case ) , rt is the relation ( analogous to the action at in our experience buffer ) , and ot is the object entity ( analogous to the target state st+1 ) . TransE defines the energy of a triple ( et , rt , ot ) as H = d ( F ( et ) + G ( rt ) , F ( ot ) ) , where F ( and G ) are embedding functions that map discrete entities ( and relations ) to RD , where D is the dimensionality of the embedding space , and d ( · , · ) denotes the squared Euclidean distance . Training is carried out with an energy-based hinge loss ( LeCun et al. , 2006 ) , with negative samples obtained by replacing the entities in a fact with random entities from the knowledge base . We can port TransE to our setting with only minor modifications . As the effect of an action is in general not independent of the source state , we replace G ( rt ) with T ( zt , at ) , i.e. , with the transition function , conditioned on both the action and the ( embedded ) source state via zt = E ( st ) . The overall energy of a state-action-state triple then can be defined as follows : H = d ( zt + T ( zt , at ) , zt+1 ) . This additive form of the transition model provides a strong inductive bias for modeling effects of actions in the environment as translations in the abstract state space . Alternatively , one could model effects as linear transformations or rotations in the abstract state space , which motivates the use of a graph embedding method such as RESCAL ( Nickel et al. , 2011 ) , CompleX ( Trouillon et al. , 2016 ) , or HolE ( Nickel et al. , 2016b ) . With the aforementioned modifications , we arrive at the following energy-based hinge loss : L = d ( zt + T ( zt , at ) , zt+1 ) + max ( 0 , γ − d ( z̃t , zt+1 ) ) , ( 1 ) defined for a single ( st , at , st+1 ) with a corrupted abstract state z̃t = E ( s̃t ) . s̃t is sampled at random from the experience buffer . The margin γ is a hyperparameter for which we found γ = 1 to be a good choice . Unlike Bordes et al . ( 2013 ) , we place the hinge only on the negative term instead of on the full loss and we do not constrain the norm of the abstract states zt , which we found to work better in our context ( see Appendix A.3 ) . The overall loss is to be understood as an expectation of the above over samples from the experience buffer B . 2.3 OBJECT-ORIENTED STATE FACTORIZATION . Our goal is to take into account the compositional nature of visual scenes , and hence we would like to learn a relational and object-oriented model of the environment that operates on a factored abstract state space Z = Z1× . . .×ZK , whereK is the number of available object slots . We further assume an object-factorized action space A = A1 × . . .×AK . This factorization ensures that each object is independently represented and it allows for efficient sharing of model parameters across objects in the transition model . This serves as a strong inductive bias for better generalization to novel scenes and facilitates learning and object discovery . The overall C-SWM model architecture using object-factorized representations is shown in Figure 1 . Encoder and Object Extractor We split the encoder into two separate modules : 1 ) a CNN-based object extractor Eext , and 2 ) an MLP-based object encoder Eenc . 
The object extractor module is a CNN operating directly on image-based observations from the environment with K feature maps in its last layer . Each feature mapmkt = [ Eext ( st ) ] k can be interpreted as an object mask corresponding to one particular object slot , where [ . . . ] k denotes selection of the k-th feature map . For simplicity , we only assign a single feature map per object slot which sufficed for the experiments considered in this work ( see Appendix A.4 ) . To allow for encoding of more complex object features ( other than , e.g. , position/velocity ) , the object extractor can be adapted to produce multiple feature maps per object slot . After the object extractor module , we flatten each feature map mkt ( object mask ) and feed it into the object encoder Eenc . The object encoder shares weights across objects and returns an abstract state representation : zkt = Eenc ( m k t ) with z k t ∈ Zk . We set Zk = RD in the following , where D is a hyperparameter . Relational Transition Model We implement the transition model as a graph neural network ( Scarselli et al. , 2009 ; Li et al. , 2015 ; Kipf & Welling , 2016 ; Battaglia et al. , 2016 ; Gilmer et al. , 2017 ; Battaglia et al. , 2018 ) , which allows us to model pairwise interactions between object states while being invariant to the order in which objects are represented . After the encoder stage , we have an abstract state description zkt ∈ Zk and an action akt ∈ Ak for every object in the scene . We represent actions as one-hot vectors ( or a vector of zeros if no action is applied to a particular object ) , but note that other choices are possible , e.g. , for continuous action spaces . The transition function then takes as input the tuple of object representations zt = ( z1t , . . . , z K t ) and actions at = ( a 1 t , . . . , a K t ) at a particular time step : ∆zt = T ( zt , at ) = GNN ( { ( zkt , akt ) } Kk=1 ) . ( 2 ) T ( zt , at ) is implemented as a graph neural network ( GNN ) that takes zkt as input node features . The model predicts updates ∆zt = ( ∆z1t , . . . , ∆z K t ) . The object representations for the next time step are obtained via zt+1 = ( z1t + ∆z 1 t , . . . , z K t + ∆z K t ) . The GNN consists of node update functions fnode and edge update functions fedge with shared parameters across all nodes and edges . These functions are implemented as MLPs and we choose the following form of message passing updates : e ( i , j ) t = fedge ( [ z i t , z j t ] ) ( 3 ) ∆zjt = fnode ( [ z j t , a j t , ∑ i 6=j e ( i , j ) t ] ) , ( 4 ) where e ( i , j ) t is an intermediate representation of the edge or interaction between nodes i and j . This corresponds to a single round of node-to-edge and edge-to-node message passing . Alternatively , one could apply multiple rounds of message passing , but we did not find this to be necessary for the experiments considered in this work . Note that this update rule corresponds to message passing on a fully-connected scene graph , which isO ( K2 ) . This can be reduced to linear complexity by reducing connectivity to nearest neighbors in the abstract state space , which we leave for future work . We denote the output of the transition function for the k-th object as ∆zkt = T k ( zt , at ) in the following . 
Multi-object Contrastive Loss. We only need to change the energy function to take the factorization of the abstract state space into account, which yields the following energy $H$ for positive triples and $\tilde{H}$ for negative samples: $H = \frac{1}{K} \sum_{k=1}^{K} d(z_t^k + T^k(z_t, a_t), z_{t+1}^k)$, $\tilde{H} = \frac{1}{K} \sum_{k=1}^{K} d(\tilde{z}_t^k, z_{t+1}^k)$, (5) where $\tilde{z}_t^k$ is the $k$-th object representation of the negative state sample $\tilde{z}_t = E(\tilde{s}_t)$. The overall contrastive loss for a single state-action-state sample from the experience buffer then takes the form: $L = H + \max(0, \gamma - \tilde{H})$. (6)
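A minimal sketch of the multi-object contrastive loss of Equations (5)-(6) is given below, assuming the factored encoder and the transition module sketched above. Drawing negative states by permuting the batch is one simple way to sample $\tilde{s}_t$ at random from the experience buffer; it is an implementation assumption, as is the exact shape convention.

```python
import torch

def contrastive_loss(encoder, transition, s_t, a_t, s_t1, gamma=1.0):
    """Multi-object contrastive loss, Eqs. (5)-(6).

    s_t, s_t1: observation batches; a_t: per-object one-hot actions (B, K, A).
    encoder maps observations to (B, K, D); transition returns dz of shape (B, K, D).
    """
    z_t = encoder(s_t)                         # (B, K, D)
    z_t1 = encoder(s_t1)                       # (B, K, D)
    z_neg = z_t[torch.randperm(z_t.size(0))]   # corrupted states z_tilde

    def d(x, y):  # squared Euclidean distance, averaged over the K object slots
        return ((x - y) ** 2).sum(dim=-1).mean(dim=-1)

    H = d(z_t + transition(z_t, a_t), z_t1)        # positive energy, Eq. (5)
    H_neg = d(z_neg, z_t1)                         # negative energy, Eq. (5)
    loss = H + torch.clamp(gamma - H_neg, min=0)   # hinge on the negative term, Eq. (6)
    return loss.mean()                             # expectation over the batch
```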
This paper tackles the problem of learning an encoder and transition model of an environment such that the learnt representation is object-centric, which could favor compositionality and generalisation. The model is trained using a contrastive max-margin loss, instead of a generative loss as previously explored. The authors do not yet consider RL or follow-up tasks that leverage these representations and transition models.
SP:813bacb9aed3dba22dc9c379793d87506d53f362
Keyword Spotter Model for Crop Pest and Disease Monitoring from Community Radio Data
1 INTRODUCTION. Ensuring a functional and near real-time system of surveillance for crop diseases and pests is of critical importance to sustaining the livelihoods of smallholder farmers in sub-Saharan Africa (Mutembesa et al., 2018). Disease and pest surveillance systems have to be put in place to provide early warning to the farmers and the relevant agricultural research bodies. Usually, when a crop disease or pest is reported in a given area, experts from the respective research institutes take time to reach the reported location to carry out investigations. This usually involves inspection of the crops at specific intervals (of about 10 km) along the more accessible main roads, covering only small proportions of the areas of interest in major districts (Mutembesa et al., 2018). Because the surveillance teams have to work within limited budgets, the surveys and the results from the surveys may be delayed, or fewer regions may be sampled in a particular year. As the health experts provide only an annual snapshot of the health of crops across the country, they are limited in their ability to provide real-time actionable surveillance data. In many cases, the farmers never get to know the disease that has attacked their crops for weeks or even months. In many areas in Uganda, the vast majority of the affected people will use social media to communicate their concerns in their local communities. This social media is not Facebook or Twitter; it is the local community radio stations existing in almost every village in sub-Saharan Africa (Saeb et al., 2017). These rural radio stations give farmers an opportunity to interact with each other and also with the relevant agricultural authorities such as extension workers and agricultural experts. This interaction usually takes a number of formats, such as phone-in programs and live talk shows (Nakabugu, 2001). Some of the radio stations specifically have targeted agricultural talk shows that can host an expert from an agricultural research institute, who can aim at providing specific information, for example about crop disease and pest management. Keyword spotting (KWS) is a classification task that aims at detecting and retrieving a series of words from a database of audio streams. The advantage of using a KWS system is that, unlike full automatic speech recognition systems, it can be developed without large amounts of labelled data. This is especially relevant for low-resource languages (Menon et al., 2019). In this paper, we discuss an implementation of a keyword spotting model that we use to mine local community radio content using specific keywords for a low-resource language in Uganda. We evaluate our approach on Luganda, a low-resource language that is currently spoken and used in many of the agricultural communities in Uganda. 2 RELATED WORK. Previous work investigating crop disease surveillance utilizes different approaches. Some approaches have focused on setting up a crop disease surveillance network that relies on the use of mobile phones (Mutembesa et al., 2018; 2019), while others use satellite imagery (Zhang et al., 2014). The disease detection aspect of the surveillance module uses computer vision and machine learning to detect plant diseases based on leaf imaging (Aduwo et al., 2010; Mwebaze & Owomugisha, 2016). Leaf-based approaches, however, rely on the use of an imaging device, with low-resource approaches utilizing smartphones Quinn et al.
(2011); Quinn (2013); Mutembesa et al. (2018; 2019). This may be limited in areas with no or low smartphone adoption. Keyword spotting for low-resource languages has been implemented before Menon et al. (2017); Saeb et al. (2017). The approaches used include CNNs, siamese CNNs Bromley et al. (1994) and autoencoders. Other models are designed for low computational resources, such as Tang & Lin (2018) and Coucke et al. (2019): given the popularity of using keyword spotting to identify commands on smartphones, battery concerns arising from CPU requirements come into play. Low-resource languages pose a problem because models that consume a lot of data in training fail to converge due to the low volume of text corpora and speech recordings. Luganda is an almost zero-resource Bantu language, spoken in the central region of Uganda. Work on Luganda remains limited compared to larger languages such as Kiswahili, Zulu and Hausa. Research on Luganda exists in machine translation Nandutu (2016) and keyword spotting Menon et al. (2017); Saeb et al. (2017). Prior work on Luganda keyword spotting and radio monitoring of Luganda community radio has been initiated by Menon et al. (2017); Saeb et al. (2017) to generate insights concerning humanitarian aid and development. While radio content is publicly available and accessible with seemingly no data/privacy restrictions, there have been few interventions seeking to mine this data for surveillance purposes, particularly for crop pests and diseases. 3 METHODOLOGY. 3.1 BUILDING THE KEYWORD CORPUS. The primary source of keywords in this study is radio recordings captured from radio stations spread across Uganda. We selected 55 radio stations which are commonly listened to in the central region. For each radio station, a Google search was done to find out whether it had an online stream and whether its radio schedule was available online. From the initial list of radio stations, 19 had online streams, and of these at least 14 broadcast in Luganda. We identified radio schedules for 10 of these radio stations from the station websites and also by manually listening in to the stations. The purpose was to identify when the talk shows, particularly the agricultural ones, were aired. It was observed that for most talk shows, topics of discussion are picked depending on audience demand, the trending topics in society or the country, sponsors/advertisers, or specific campaigns, though there are still weekly talk shows which are focused on agriculture. A Python script was written to stream the online stations at the identified times. These were recorded as 5-minute audio clips which were stored in a shared Dropbox folder. The team also identified 2 radio stations which avail their radio content online for the past 7 days as 1-hour recordings. These websites were scraped and the audio recordings were sorted depending on whether the 1-hour block contained a talk show or not. The identified 1-hour audio clips that had talk shows were then trimmed into 5-minute audio clips, and these were added to the shared Dropbox folder. The 5-minute audio clips were then played back and carefully listened to by a team of five volunteers with the purpose of identifying and extracting the commonly used agricultural terms that would be fed into the keyword spotting model.
To complement the keywords captured from radio talk shows, we also scraped an online local newspaper. For example, we obtained articles from a popular newspaper in Uganda commonly known as Bukedde1. One advantage of using online articles as a source of keywords is that there are different ways in which the same crop disease or pest is mentioned, and the spelling of such words can be captured specifically for Luganda. The keywords were then grouped into crops, diseases, fertilizers, herbicides and general keywords. Translations were also added in 2 languages, that is, Luganda and English, as well as an alternative keyword in the form of a stem, in case the stem alone was a unique keyword. Both the keyword sets from the local radio and online sources were then aggregated into one keyword corpus of 193 keywords. An example of keywords extracted from a radio talk show on the Fall Army Worm pest affecting maize is shown in Table 1. 3.2 SPEECH KEYWORD DATASET. Audio data collection was performed by crowdsourcing speech utterances of the different words from the keyword list. The use of studio-captured samples seemed unrealistic; to mimic real-world settings, the data was collected in a natural setting with noisy environments, poor-quality recording equipment, and people talking in a natural, chatty way, rather than with high-quality microphones in a formal setting. This was ensured through the audio data collection tool derived from Warden (2017), where the person speaking out the words can do so using their phone or laptop wherever they are. In this study, we collected data from over 35 users who recorded the keywords in Luganda and English. An important goal here was to record sufficient data to train the model, but little enough to remain representative of low-resource training. We ensured that we averaged 10 utterances per keyword. Keyword spotting models are much more useful if they are speaker independent, since the process of personalizing a model to an individual requires an intrusive user interface experience. With this in mind, the recording process had to be quick and easy to use, to reduce the number of people who would fail to complete it. The collected keyword audio data was encoded in Ogg Vorbis format. 3.3 DATA PREPROCESSING. 3.3.1 1D CNN. In order to perform speech processing, our first step is to convert the recorded Ogg keyword files to WAV files. As Ogg is a lossy encoding format, we used ffmpeg to decode the Ogg Vorbis files into WAV audio files. The files were then transformed into 1-d vectors using librosa McFee et al. (2019). For an audio signal $w_t^s$ with a sampling rate $s$ and a length $t$, we use the librosa (McFee et al., 2019) resampling feature to normalize the sampling rate of all the samples to 8 kHz, as shown in Equation 1: $f_{re}: w_t^s \rightarrow w_t^{8\,\text{kHz}}$. (1) 1https://www.bukedde.co.ug 3.3.2 SIAMESE CNN. For the Siamese CNN, we transform the Ogg files into WAV using ffmpeg. Then we proceed to generate spectrograms and apply mel-compression. 4 MODEL ARCHITECTURE AND DESIGN. In this section, we briefly discuss the keyword spotting approaches that we use in this paper. 4.1 1D CONVOLUTION MODEL. In this study, we use a 1-dimensional Convolutional Neural Network (CNN) which takes as input the processed raw audio data. The input to the model is an array representing the audio waveform (X).
The network is designed to learn the set of parameters $\Theta$ that map the input to a prediction $T$ according to the hierarchical feature extraction given by Equation 2: $T = F(X|\Theta) = f_L(\ldots f_2(f_1(X|\theta_1)|\theta_2)\ldots|\theta_L)$, (2) where $L$ is the number of hidden layers in the network. The final architecture created is a 14-layer deep neural network with five 1D convolutional layers, each followed by an intermediate pooling layer. A dropout of 0.5 was also applied after the two successive dense layers. The final layer is a softmax activation function mapping to only 10 target keywords, selected randomly from the Luganda/English corpus. The model was trained using batch gradient descent with the Adam optimizer and a learning rate of 0.001.
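The following is a minimal sketch of this preprocessing and classification pipeline in Python with librosa and PyTorch. The exact filter counts, kernel sizes, and pooling configuration of the 14-layer network are not specified in the text, so the values below are illustrative assumptions; only the overall structure (8 kHz input, five 1D convolutions with intermediate pooling, dense layers with dropout 0.5, a 10-way softmax, Adam at learning rate 0.001) follows the description above.

```python
import librosa
import torch
import torch.nn as nn

def load_waveform(path, sr=8000):
    # librosa resamples to the target rate on load (Equation 1).
    y, _ = librosa.load(path, sr=sr)
    return torch.from_numpy(y).float()

class KeywordCNN(nn.Module):
    """Illustrative 1D CNN keyword spotter: five conv layers with
    intermediate pooling, dense layers with dropout, 10-way output."""

    def __init__(self, n_keywords=10):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in (8, 16, 32, 64, 128):      # assumed filter counts
            layers += [nn.Conv1d(in_ch, out_ch, kernel_size=9, padding=4),
                       nn.ReLU(),
                       nn.MaxPool1d(4)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_keywords))            # softmax is applied inside the loss

    def forward(self, x):                         # x: (batch, samples)
        return self.classifier(self.features(x.unsqueeze(1)))

model = KeywordCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()                   # combines log-softmax and NLL
```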
This paper presents a very interesting application of speech keyword spotting techniques; the aim is to listen to continuous streams of community radio in Uganda in order to spot keywords of interest related to agriculture and thereby monitor food security concerns in rural areas. The lack of internet infrastructure results in farmers in rural areas using community radio to share concerns related to agriculture. Therefore, accurate keyword spotting techniques can potentially help researchers flag areas of interest. The main engineering challenge that the study deals with is the fact that there isn't a lot of training data available for languages like Luganda.
SP:836de33d05125c0f6f805a38d340f6ceae4f22d7
Keyword Spotter Model for Crop Pest and Disease Monitoring from Community Radio Data
1 INTRODUCTION. Ensuring a functional and near real-time system of surveillance for crop diseases and pests is of critical importance to sustaining the livelihoods of smallholder farmers in sub-Saharan Africa (Mutembesa et al., 2018). Disease and pest surveillance systems have to be put in place to provide early warning to the farmers and the relevant agricultural research bodies. Usually, when a crop disease or pest is reported in a given area, experts from the respective research institutes take time to reach the reported location to carry out investigations. This usually involves inspection of the crops at specific intervals (of about 10 km) along the more accessible main roads, covering only small proportions of the areas of interest in major districts (Mutembesa et al., 2018). Because the surveillance teams have to work within limited budgets, the surveys and the results from the surveys may be delayed, or fewer regions may be sampled in a particular year. As the health experts provide only an annual snapshot of the health of crops across the country, they are limited in their ability to provide real-time actionable surveillance data. In many cases, the farmers never get to know the disease that has attacked their crops for weeks or even months. In many areas in Uganda, the vast majority of the affected people will use social media to communicate their concerns in their local communities. This social media is not Facebook or Twitter; it is the local community radio stations existing in almost every village in sub-Saharan Africa (Saeb et al., 2017). These rural radio stations give farmers an opportunity to interact with each other and also with the relevant agricultural authorities such as extension workers and agricultural experts. This interaction usually takes a number of formats, such as phone-in programs and live talk shows (Nakabugu, 2001). Some of the radio stations specifically have targeted agricultural talk shows that can host an expert from an agricultural research institute, who can aim at providing specific information, for example about crop disease and pest management. Keyword spotting (KWS) is a classification task that aims at detecting and retrieving a series of words from a database of audio streams. The advantage of using a KWS system is that, unlike full automatic speech recognition systems, it can be developed without large amounts of labelled data. This is especially relevant for low-resource languages (Menon et al., 2019). In this paper, we discuss an implementation of a keyword spotting model that we use to mine local community radio content using specific keywords for a low-resource language in Uganda. We evaluate our approach on Luganda, a low-resource language that is currently spoken and used in many of the agricultural communities in Uganda. 2 RELATED WORK. Previous work investigating crop disease surveillance utilizes different approaches. Some approaches have focused on setting up a crop disease surveillance network that relies on the use of mobile phones (Mutembesa et al., 2018; 2019), while others use satellite imagery (Zhang et al., 2014). The disease detection aspect of the surveillance module uses computer vision and machine learning to detect plant diseases based on leaf imaging (Aduwo et al., 2010; Mwebaze & Owomugisha, 2016). Leaf-based approaches, however, rely on the use of an imaging device, with low-resource approaches utilizing smartphones Quinn et al.
(2011); Quinn (2013); Mutembesa et al. (2018; 2019). This may be limited in areas with no or low smartphone adoption. Keyword spotting for low-resource languages has been implemented before Menon et al. (2017); Saeb et al. (2017). The approaches used include CNNs, siamese CNNs Bromley et al. (1994) and autoencoders. Other models are designed for low computational resources, such as Tang & Lin (2018) and Coucke et al. (2019): given the popularity of using keyword spotting to identify commands on smartphones, battery concerns arising from CPU requirements come into play. Low-resource languages pose a problem because models that consume a lot of data in training fail to converge due to the low volume of text corpora and speech recordings. Luganda is an almost zero-resource Bantu language, spoken in the central region of Uganda. Work on Luganda remains limited compared to larger languages such as Kiswahili, Zulu and Hausa. Research on Luganda exists in machine translation Nandutu (2016) and keyword spotting Menon et al. (2017); Saeb et al. (2017). Prior work on Luganda keyword spotting and radio monitoring of Luganda community radio has been initiated by Menon et al. (2017); Saeb et al. (2017) to generate insights concerning humanitarian aid and development. While radio content is publicly available and accessible with seemingly no data/privacy restrictions, there have been few interventions seeking to mine this data for surveillance purposes, particularly for crop pests and diseases. 3 METHODOLOGY. 3.1 BUILDING THE KEYWORD CORPUS. The primary source of keywords in this study is radio recordings captured from radio stations spread across Uganda. We selected 55 radio stations which are commonly listened to in the central region. For each radio station, a Google search was done to find out whether it had an online stream and whether its radio schedule was available online. From the initial list of radio stations, 19 had online streams, and of these at least 14 broadcast in Luganda. We identified radio schedules for 10 of these radio stations from the station websites and also by manually listening in to the stations. The purpose was to identify when the talk shows, particularly the agricultural ones, were aired. It was observed that for most talk shows, topics of discussion are picked depending on audience demand, the trending topics in society or the country, sponsors/advertisers, or specific campaigns, though there are still weekly talk shows which are focused on agriculture. A Python script was written to stream the online stations at the identified times. These were recorded as 5-minute audio clips which were stored in a shared Dropbox folder. The team also identified 2 radio stations which avail their radio content online for the past 7 days as 1-hour recordings. These websites were scraped and the audio recordings were sorted depending on whether the 1-hour block contained a talk show or not. The identified 1-hour audio clips that had talk shows were then trimmed into 5-minute audio clips, and these were added to the shared Dropbox folder. The 5-minute audio clips were then played back and carefully listened to by a team of five volunteers with the purpose of identifying and extracting the commonly used agricultural terms that would be fed into the keyword spotting model.
To complement the keywords captured from radio talk shows, we also scraped an online local newspaper. For example, we obtained articles from a popular newspaper in Uganda commonly known as Bukedde1. One advantage of using online articles as a source of keywords is that there are different ways in which the same crop disease or pest is mentioned, and the spelling of such words can be captured specifically for Luganda. The keywords were then grouped into crops, diseases, fertilizers, herbicides and general keywords. Translations were also added in 2 languages, that is, Luganda and English, as well as an alternative keyword in the form of a stem, in case the stem alone was a unique keyword. Both the keyword sets from the local radio and online sources were then aggregated into one keyword corpus of 193 keywords. An example of keywords extracted from a radio talk show on the Fall Army Worm pest affecting maize is shown in Table 1. 3.2 SPEECH KEYWORD DATASET. Audio data collection was performed by crowdsourcing speech utterances of the different words from the keyword list. The use of studio-captured samples seemed unrealistic; to mimic real-world settings, the data was collected in a natural setting with noisy environments, poor-quality recording equipment, and people talking in a natural, chatty way, rather than with high-quality microphones in a formal setting. This was ensured through the audio data collection tool derived from Warden (2017), where the person speaking out the words can do so using their phone or laptop wherever they are. In this study, we collected data from over 35 users who recorded the keywords in Luganda and English. An important goal here was to record sufficient data to train the model, but little enough to remain representative of low-resource training. We ensured that we averaged 10 utterances per keyword. Keyword spotting models are much more useful if they are speaker independent, since the process of personalizing a model to an individual requires an intrusive user interface experience. With this in mind, the recording process had to be quick and easy to use, to reduce the number of people who would fail to complete it. The collected keyword audio data was encoded in Ogg Vorbis format. 3.3 DATA PREPROCESSING. 3.3.1 1D CNN. In order to perform speech processing, our first step is to convert the recorded Ogg keyword files to WAV files. As Ogg is a lossy encoding format, we used ffmpeg to decode the Ogg Vorbis files into WAV audio files. The files were then transformed into 1-d vectors using librosa McFee et al. (2019). For an audio signal $w_t^s$ with a sampling rate $s$ and a length $t$, we use the librosa (McFee et al., 2019) resampling feature to normalize the sampling rate of all the samples to 8 kHz, as shown in Equation 1: $f_{re}: w_t^s \rightarrow w_t^{8\,\text{kHz}}$. (1) 1https://www.bukedde.co.ug 3.3.2 SIAMESE CNN. For the Siamese CNN, we transform the Ogg files into WAV using ffmpeg. Then we proceed to generate spectrograms and apply mel-compression. 4 MODEL ARCHITECTURE AND DESIGN. In this section, we briefly discuss the keyword spotting approaches that we use in this paper. 4.1 1D CONVOLUTION MODEL. In this study, we use a 1-dimensional Convolutional Neural Network (CNN) which takes as input the processed raw audio data. The input to the model is an array representing the audio waveform (X).
The network is designed to learn the set of parameters $\Theta$ that map the input to a prediction $T$ according to the hierarchical feature extraction given by Equation 2: $T = F(X|\Theta) = f_L(\ldots f_2(f_1(X|\theta_1)|\theta_2)\ldots|\theta_L)$, (2) where $L$ is the number of hidden layers in the network. The final architecture created is a 14-layer deep neural network with five 1D convolutional layers, each followed by an intermediate pooling layer. A dropout of 0.5 was also applied after the two successive dense layers. The final layer is a softmax activation function mapping to only 10 target keywords, selected randomly from the Luganda/English corpus. The model was trained using batch gradient descent with the Adam optimizer and a learning rate of 0.001.
The paper describes an approach to analyze radio data with ML-based speech keyword spotting techniques. The authors identify keywords related to agriculture and build a model that can automatically detect these keywords of interest. Their contribution is that the proposed model relies on relatively simple neural networks (14-layer 1D CNNs) to achieve keyword detection in a low-resource language (e.g., Luganda). This makes it possible to monitor food security concerns in rural areas.
SP:836de33d05125c0f6f805a38d340f6ceae4f22d7
Hydra: Preserving Ensemble Diversity for Model Distillation
1 INTRODUCTION. Deep neural networks have achieved impressive performance; however, they tend to make overconfident predictions and poorly quantify uncertainty (Lakshminarayanan et al., 2017). It has been demonstrated that ensembles of models improve predictive performance and offer higher quality uncertainty quantification (Dietterich, 2000; Lakshminarayanan et al., 2017; Ovadia et al., 2019). A fundamental limitation of ensembles is the cost of computation and memory at evaluation time. A popular solution is to distill an ensemble of models into a single compact network by attempting to match the average predictions of the original ensemble. This idea goes back to the foundational work of Hinton et al. (2015), itself inspired by earlier ideas developed by Bucilu et al. (2006). While this process has led to simple and well-performing algorithms, it fails to take into account the intrinsic diversity of the predictions of the ensemble, as represented by the individual predictions of each of its members. In particular, this diversity is all the more important in tasks that hinge on the uncertainty output of the ensemble, e.g., in out-of-distribution scenarios (Lakshminarayanan et al., 2017; Ovadia et al., 2019). Similarly, by losing the diversity of the ensemble, this simple form of distillation makes it impossible to estimate measures of uncertainty such as model uncertainty (Depeweg et al., 2017; Malinin et al., 2019). Proper uncertainty quantification is especially crucial for safety-related tasks and applications. To overcome this limitation, Malinin et al. (2019) proposed to model the entire distribution of an ensemble using a Dirichlet distribution parametrized by a neural network, referred to as a prior network (Malinin & Gales, 2018). However, this imposes a strong parametric assumption on the distillation process. Inspired by multi-headed architectures already widely applied in various applications (Szegedy et al., 2015; Sercu et al., 2016; Osband et al., 2016; Song & Chai, 2018), we propose a multi-headed model to distill ensembles. Our multi-headed approach, which we name Hydra, can be seen as an interpolation between the full ensemble of models and the knowledge distillation proposed by Hinton et al. (2015). Our distillation model is comprised of (1) a single body and (2) as many heads as there are members in the original ensemble. Each head is assigned to an ensemble member and tries to mimic the individual predictions of this ensemble member, as illustrated in Figure 1. (Figure 1 contrasts the two approaches: in knowledge distillation, Hinton et al. (2015) train a network to imitate the average ensemble prediction; Hydra instead learns to distill the individual predictions of each ensemble member into separate light-weight head models while amortizing the computation through a shared heavy-weight body network, retaining the diversity of ensemble member predictions which is otherwise lost in knowledge distillation.) The heads share the same body network, whose role is to provide a common feature representation. The design of the body and the heads makes it possible to trade off the computational and memory efficiency against the fidelity with which the diversity of the ensemble is retained. An illustration of common knowledge distillation and ensemble distillation as well as Hydra is shown in Figure 1, and a detailed methodology description is found in Section 2.
While the choices of the body and head architectures may appear to be complex new hyperparameters that we introduce, we will see in the experiments that we get good results by simply taking the first $N - k$, $0 \leq k \ll N$, layers of the original ensemble members for the body and duplicating the remaining layers for the heads. Summary of contributions. Firstly, we present a multi-headed approach for ensemble knowledge distillation. The shared component keeps the model computationally and memory efficient, while diversity is captured through the heads matching the individual ensemble members. Secondly, we show through experimental evaluation that Hydra outperforms existing distillation methods for both classification and regression tasks with respect to predictive test performance. Lastly, we investigate Hydra's behaviour on in-domain and out-of-distribution data and demonstrate that Hydra comes closest to the ensemble behaviour in comparison to existing distillation methods. Novelty and significance. Ensembles of models have successfully improved predictive performance and yielded robust measures of uncertainty. However, existing distillation methods do not retain the diversity of the ensemble (beyond its average predictive behavior), or need to make strong parametric assumptions that are not applicable in regression settings. To the best of our knowledge, our approach is the first to employ a multi-headed architecture in the context of ensemble distillation. It is simple to implement, does not make strong parametric assumptions, requires few modifications to the distilled ensemble model and works well in practice, thereby making it attractive to apply to a wide range of ensemble models and tasks. 2 HYDRA: A MULTI-HEADED APPROACH. With a focus on offline distillation, our goal is to train a student network to match the predictive distribution of the teacher model, which is an ensemble of (deep) neural networks. Formally, given a dataset $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$, we consider an ensemble of $M$ models' parameters $\theta_{ens} = \{\theta_{ens,m}\}_{m=1}^M$ and prediction outputs $\{p(y^{(i)}|x^{(i)}; \theta_{ens,m})\}_{m=1}^M$. For simplicity, a single data instance pair will be referred to as $(x, y)$. In Hinton et al. (2015); Balan et al. (2015), distilling an ensemble of models into a single neural network is achieved by minimizing the Kullback-Leibler (KL) divergence between the student's predictive distribution $p(y|x; \theta_{distill})$ and the expected predictive distribution of the ensemble: $L(\theta_{ens}, \theta_{distill}) = \mathbb{E}_x\big[\mathrm{KL}\big(\frac{1}{M}\sum_{m=1}^M p(y|x; \theta_{ens,m}) \,\|\, p(y|x; \theta_{distill})\big)\big]$. (1) Hydra builds upon the approach of knowledge distillation and extends it to a multi-headed student model. Hydra is defined as a (deep) neural network with a single body and $M$ heads. For distillation, Hydra has as many heads as there are ensemble members. The distillation model is parametrized by $\theta_{hydra} = \{\theta_{body}, \{\theta_{head,m}\}_{m=1}^M\}$, i.e., the body, $\theta_{body}$, is shared among all heads $\{\theta_{head,m}\}_{m=1}^M$. In terms of number of parameters, we assume the heads to be much lighter than the shared part, so that the distillation is still meaningful. In practice, we use the first $N - k$, $0 \leq k \ll N$, layers of the original ensemble member architecture as the body and the original final layer(s) as a head. The objective is to minimize the average KL divergence between each head $m$ and the corresponding ensemble member $m$. We differentiate between two tasks, classification and regression. Classification.
For classification tasks, the ensemble of models has access to $D$ during training, with each $x$ belonging to one of $C$ classes, i.e., $y \in \{1, 2, \ldots, C\}$. Assuming $z_m = f_{\theta_{head,m}}(f_{\theta_{body}}(x))$ corresponds to the logits, also termed unnormalised log probabilities, of head $m$, the categorical distribution over class labels for a sample $x$ and a class $c$ is computed as: $p(c|x; \theta_{hydra,m}) = p(c|z_m) = \frac{\exp(z_{m,c}/T)}{\sum_{j=1}^C \exp(z_{m,j}/T)}$, (2) where $T$ is a temperature re-scaling the logits. As discussed in Hinton et al. (2015); Malinin et al. (2019), the distribution of the teacher network is often "sharp", which can limit the common support between the output distribution of the model and the target empirical distribution. Minimizing KL divergence between distributions with limited non-zero common support is known to be particularly difficult. To alleviate this issue, we follow the common practice (Hinton et al., 2015; Song & Chai, 2018; Lan et al., 2018) of using temperature to "heat up" both distributions and increase common support during training. At evaluation, $T$ is set to 1. The soft probability distributions at a temperature of $T$ are used to match the teacher ensemble of models by minimizing the average KL divergence between each head $m$ and ensemble member $m$: $L(x, y; \theta_{hydra}, \theta_{ens}) = \frac{1}{M}\sum_{m=1}^M \mathrm{KL}\big(p(y|x; \theta_{ens,m}) \,\|\, p(y|x; \theta_{body}, \theta_{head,m})\big)$. (3) Compared to the objective of knowledge distillation (1), we can observe that the average over the ensemble members is pulled out of the KL. Ignoring the constant entropy terms, this objective reduces to the standard cross-entropy loss: $L(x, y; \theta_{hydra}, \theta_{ens}) = -\frac{T^2}{M}\sum_{m=1}^M p(y|x; \theta_{ens,m}) \log p(y|x; \theta_{body}, \theta_{head,m})$. (4) We scale our objective by $T^2$ as the gradient magnitudes produced by the soft targets are scaled by $1/T^2$. By multiplying the loss term by a factor of $T^2$ we ensure that the relative contributions to additional regularization losses remain roughly unchanged (Song & Chai, 2018; Lan et al., 2018). Regression. We focus on heteroscedastic regression tasks where each ensemble member $m$ outputs a mean $\mu_m(x)$ and a variance $\sigma_m^2(x)$ given an input $x$.1 The output is modeled as $p(y|x; \theta_m) = \mathcal{N}(\mu_m(x), \sigma_m^2(x))$ for a given head $m$, and the ensemble of models is trained by minimizing the negative log-likelihood. Traditional knowledge distillation matches a single Gaussian ("student") outputting $\mu_{distill}(x)$ and $\sigma_{distill}^2(x)$ to a mixture of Gaussians (a "teacher" ensemble): $L(x, y; \theta_{ens}, \theta_{distill}) = \mathrm{KL}\big(\frac{1}{M}\sum_{m=1}^M \mathcal{N}(\mu_m(x), \sigma_m^2(x)) \,\|\, \mathcal{N}(\mu_{distill}(x), \sigma_{distill}^2(x))\big)$. (5) With Hydra, each head $m$ outputs a mean $\mu_{hydra,m}(x)$ and variance $\sigma_{hydra,m}^2(x)$ and optimizes the KL divergence between each head output and the corresponding ensemble member output: $L(x, y; \theta_{ens}, \theta_{hydra}) = \frac{1}{M}\sum_{m=1}^M \mathrm{KL}\big(\mathcal{N}(\mu_m(x), \sigma_m^2(x)) \,\|\, \mathcal{N}(\mu_{hydra,m}(x), \sigma_{hydra,m}^2(x))\big)$, which (omitting constants) is proportional to $-\frac{1}{M}\sum_{m=1}^M \int \mathcal{N}(y; \mu_m(x), \sigma_m^2(x)) \log \mathcal{N}(y; \mu_{hydra,m}(x), \sigma_{hydra,m}^2(x))\,dy = \frac{1}{M}\sum_{m=1}^M \frac{\sigma_m^2(x) + (\mu_m(x) - \mu_{hydra,m}(x))^2}{2\sigma_{hydra,m}^2(x)} + \frac{1}{2}\log(2\pi\sigma_{hydra,m}^2(x))$, where the final expression uses the fact that each KL term has an analytical solution. 1In our concrete implementation, our neural network outputs the mean $\mu_m(x)$ and log variance $\log(\sigma_m^2(x))$, which we thereafter exponentiate. Training with multi-head growth.
Hydra is trained in two phases. In the first phase, Hydra mimics knowledge distillation in that it is trained until convergence with a single head, the "Hinton head", to match the average predictions of the ensemble. Hydra is then extended by $M - 1$ heads, all of which are initialized with the parameter values of the "Hinton head". The resulting $M$ heads are finally trained further to match the individual predictions of the $M$ ensemble members (according to objective (3)). In practice, we sometimes experienced difficulties getting Hydra to converge in the absence of this initialization scheme, and for the cases where a different initialization worked, this two-phase training scheme typically led to overall quicker convergence.
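As a concrete illustration, the following is a minimal PyTorch sketch of the classification distillation objective of Equation (4), with the closed-form Gaussian term of the regression objective as a companion function. The tensor shapes and the stacked-logits convention are assumptions for illustration; only the loss algebra follows the equations above.

```python
import math
import torch
import torch.nn.functional as F

def hydra_classification_loss(head_logits, teacher_logits, T=2.0):
    """Temperature-scaled distillation loss of Eq. (4).

    head_logits, teacher_logits: (M, batch, C) tensors, one slice per
    head / ensemble member pair. Returns the T^2-scaled soft cross-entropy
    averaged over heads and batch."""
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    student_logp = F.log_softmax(head_logits / T, dim=-1)
    ce = -(teacher_probs * student_logp).sum(dim=-1)   # (M, batch)
    return (T ** 2) * ce.mean()

def hydra_regression_loss(mu_h, logvar_h, mu_e, var_e):
    """Closed-form Gaussian KL of the regression objective (constants
    dropped), averaged over heads. All arguments: (M, batch) tensors;
    heads output log variance, as in the footnote above."""
    var_h = logvar_h.exp()
    return ((var_e + (mu_e - mu_h) ** 2) / (2 * var_h)
            + 0.5 * torch.log(2 * math.pi * var_h)).mean()
```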
This work introduces a new method for ensemble distillation. The problem of designing better ensemble distillation methods seems relevant, as ensembles are still one of the best ways to estimate uncertainty in practice (although see concerns below). The method itself is a simple extension of earlier "prior networks": the original method suggested fitting a single network to mimic the distribution produced by a given ensemble, and here the authors suggest using multiple heads (one head per individual ensemble member) in order to better capture the ensemble diversity.
SP:7b714eb05f8e86b18444c8f39d89e566313988dc
Hydra: Preserving Ensemble Diversity for Model Distillation
1 INTRODUCTION. Deep neural networks have achieved impressive performance; however, they tend to make overconfident predictions and poorly quantify uncertainty (Lakshminarayanan et al., 2017). It has been demonstrated that ensembles of models improve predictive performance and offer higher quality uncertainty quantification (Dietterich, 2000; Lakshminarayanan et al., 2017; Ovadia et al., 2019). A fundamental limitation of ensembles is the cost of computation and memory at evaluation time. A popular solution is to distill an ensemble of models into a single compact network by attempting to match the average predictions of the original ensemble. This idea goes back to the foundational work of Hinton et al. (2015), itself inspired by earlier ideas developed by Bucilu et al. (2006). While this process has led to simple and well-performing algorithms, it fails to take into account the intrinsic diversity of the predictions of the ensemble, as represented by the individual predictions of each of its members. In particular, this diversity is all the more important in tasks that hinge on the uncertainty output of the ensemble, e.g., in out-of-distribution scenarios (Lakshminarayanan et al., 2017; Ovadia et al., 2019). Similarly, by losing the diversity of the ensemble, this simple form of distillation makes it impossible to estimate measures of uncertainty such as model uncertainty (Depeweg et al., 2017; Malinin et al., 2019). Proper uncertainty quantification is especially crucial for safety-related tasks and applications. To overcome this limitation, Malinin et al. (2019) proposed to model the entire distribution of an ensemble using a Dirichlet distribution parametrized by a neural network, referred to as a prior network (Malinin & Gales, 2018). However, this imposes a strong parametric assumption on the distillation process. Inspired by multi-headed architectures already widely applied in various applications (Szegedy et al., 2015; Sercu et al., 2016; Osband et al., 2016; Song & Chai, 2018), we propose a multi-headed model to distill ensembles. Our multi-headed approach, which we name Hydra, can be seen as an interpolation between the full ensemble of models and the knowledge distillation proposed by Hinton et al. (2015). Our distillation model is comprised of (1) a single body and (2) as many heads as there are members in the original ensemble. Each head is assigned to an ensemble member and tries to mimic the individual predictions of this ensemble member, as illustrated in Figure 1. (Figure 1 contrasts the two approaches: in knowledge distillation, Hinton et al. (2015) train a network to imitate the average ensemble prediction; Hydra instead learns to distill the individual predictions of each ensemble member into separate light-weight head models while amortizing the computation through a shared heavy-weight body network, retaining the diversity of ensemble member predictions which is otherwise lost in knowledge distillation.) The heads share the same body network, whose role is to provide a common feature representation. The design of the body and the heads makes it possible to trade off the computational and memory efficiency against the fidelity with which the diversity of the ensemble is retained. An illustration of common knowledge distillation and ensemble distillation as well as Hydra is shown in Figure 1, and a detailed methodology description is found in Section 2.
While the choices of the body and head architectures may appear to be complex new hyperparameters that we introduce, we will see in the experiments that we get good results by simply taking the first $N - k$, $0 \leq k \ll N$, layers of the original ensemble members for the body and duplicating the remaining layers for the heads. Summary of contributions. Firstly, we present a multi-headed approach for ensemble knowledge distillation. The shared component keeps the model computationally and memory efficient, while diversity is captured through the heads matching the individual ensemble members. Secondly, we show through experimental evaluation that Hydra outperforms existing distillation methods for both classification and regression tasks with respect to predictive test performance. Lastly, we investigate Hydra's behaviour on in-domain and out-of-distribution data and demonstrate that Hydra comes closest to the ensemble behaviour in comparison to existing distillation methods. Novelty and significance. Ensembles of models have successfully improved predictive performance and yielded robust measures of uncertainty. However, existing distillation methods do not retain the diversity of the ensemble (beyond its average predictive behavior), or need to make strong parametric assumptions that are not applicable in regression settings. To the best of our knowledge, our approach is the first to employ a multi-headed architecture in the context of ensemble distillation. It is simple to implement, does not make strong parametric assumptions, requires few modifications to the distilled ensemble model and works well in practice, thereby making it attractive to apply to a wide range of ensemble models and tasks. 2 HYDRA: A MULTI-HEADED APPROACH. With a focus on offline distillation, our goal is to train a student network to match the predictive distribution of the teacher model, which is an ensemble of (deep) neural networks. Formally, given a dataset $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$, we consider an ensemble of $M$ models' parameters $\theta_{ens} = \{\theta_{ens,m}\}_{m=1}^M$ and prediction outputs $\{p(y^{(i)}|x^{(i)}; \theta_{ens,m})\}_{m=1}^M$. For simplicity, a single data instance pair will be referred to as $(x, y)$. In Hinton et al. (2015); Balan et al. (2015), distilling an ensemble of models into a single neural network is achieved by minimizing the Kullback-Leibler (KL) divergence between the student's predictive distribution $p(y|x; \theta_{distill})$ and the expected predictive distribution of the ensemble: $L(\theta_{ens}, \theta_{distill}) = \mathbb{E}_x\big[\mathrm{KL}\big(\frac{1}{M}\sum_{m=1}^M p(y|x; \theta_{ens,m}) \,\|\, p(y|x; \theta_{distill})\big)\big]$. (1) Hydra builds upon the approach of knowledge distillation and extends it to a multi-headed student model. Hydra is defined as a (deep) neural network with a single body and $M$ heads. For distillation, Hydra has as many heads as there are ensemble members. The distillation model is parametrized by $\theta_{hydra} = \{\theta_{body}, \{\theta_{head,m}\}_{m=1}^M\}$, i.e., the body, $\theta_{body}$, is shared among all heads $\{\theta_{head,m}\}_{m=1}^M$. In terms of number of parameters, we assume the heads to be much lighter than the shared part, so that the distillation is still meaningful. In practice, we use the first $N - k$, $0 \leq k \ll N$, layers of the original ensemble member architecture as the body and the original final layer(s) as a head. The objective is to minimize the average KL divergence between each head $m$ and the corresponding ensemble member $m$. We differentiate between two tasks, classification and regression. Classification.
For classification tasks, the ensemble of models has access to $D$ during training, with each $x$ belonging to one of $C$ classes, i.e., $y \in \{1, 2, \ldots, C\}$. Assuming $z_m = f_{\theta_{head,m}}(f_{\theta_{body}}(x))$ corresponds to the logits, also termed unnormalised log probabilities, of head $m$, the categorical distribution over class labels for a sample $x$ and a class $c$ is computed as: $p(c|x; \theta_{hydra,m}) = p(c|z_m) = \frac{\exp(z_{m,c}/T)}{\sum_{j=1}^C \exp(z_{m,j}/T)}$, (2) where $T$ is a temperature re-scaling the logits. As discussed in Hinton et al. (2015); Malinin et al. (2019), the distribution of the teacher network is often "sharp", which can limit the common support between the output distribution of the model and the target empirical distribution. Minimizing KL divergence between distributions with limited non-zero common support is known to be particularly difficult. To alleviate this issue, we follow the common practice (Hinton et al., 2015; Song & Chai, 2018; Lan et al., 2018) of using temperature to "heat up" both distributions and increase common support during training. At evaluation, $T$ is set to 1. The soft probability distributions at a temperature of $T$ are used to match the teacher ensemble of models by minimizing the average KL divergence between each head $m$ and ensemble member $m$: $L(x, y; \theta_{hydra}, \theta_{ens}) = \frac{1}{M}\sum_{m=1}^M \mathrm{KL}\big(p(y|x; \theta_{ens,m}) \,\|\, p(y|x; \theta_{body}, \theta_{head,m})\big)$. (3) Compared to the objective of knowledge distillation (1), we can observe that the average over the ensemble members is pulled out of the KL. Ignoring the constant entropy terms, this objective reduces to the standard cross-entropy loss: $L(x, y; \theta_{hydra}, \theta_{ens}) = -\frac{T^2}{M}\sum_{m=1}^M p(y|x; \theta_{ens,m}) \log p(y|x; \theta_{body}, \theta_{head,m})$. (4) We scale our objective by $T^2$ as the gradient magnitudes produced by the soft targets are scaled by $1/T^2$. By multiplying the loss term by a factor of $T^2$ we ensure that the relative contributions to additional regularization losses remain roughly unchanged (Song & Chai, 2018; Lan et al., 2018). Regression. We focus on heteroscedastic regression tasks where each ensemble member $m$ outputs a mean $\mu_m(x)$ and a variance $\sigma_m^2(x)$ given an input $x$.1 The output is modeled as $p(y|x; \theta_m) = \mathcal{N}(\mu_m(x), \sigma_m^2(x))$ for a given head $m$, and the ensemble of models is trained by minimizing the negative log-likelihood. Traditional knowledge distillation matches a single Gaussian ("student") outputting $\mu_{distill}(x)$ and $\sigma_{distill}^2(x)$ to a mixture of Gaussians (a "teacher" ensemble): $L(x, y; \theta_{ens}, \theta_{distill}) = \mathrm{KL}\big(\frac{1}{M}\sum_{m=1}^M \mathcal{N}(\mu_m(x), \sigma_m^2(x)) \,\|\, \mathcal{N}(\mu_{distill}(x), \sigma_{distill}^2(x))\big)$. (5) With Hydra, each head $m$ outputs a mean $\mu_{hydra,m}(x)$ and variance $\sigma_{hydra,m}^2(x)$ and optimizes the KL divergence between each head output and the corresponding ensemble member output: $L(x, y; \theta_{ens}, \theta_{hydra}) = \frac{1}{M}\sum_{m=1}^M \mathrm{KL}\big(\mathcal{N}(\mu_m(x), \sigma_m^2(x)) \,\|\, \mathcal{N}(\mu_{hydra,m}(x), \sigma_{hydra,m}^2(x))\big)$, which (omitting constants) is proportional to $-\frac{1}{M}\sum_{m=1}^M \int \mathcal{N}(y; \mu_m(x), \sigma_m^2(x)) \log \mathcal{N}(y; \mu_{hydra,m}(x), \sigma_{hydra,m}^2(x))\,dy = \frac{1}{M}\sum_{m=1}^M \frac{\sigma_m^2(x) + (\mu_m(x) - \mu_{hydra,m}(x))^2}{2\sigma_{hydra,m}^2(x)} + \frac{1}{2}\log(2\pi\sigma_{hydra,m}^2(x))$, where the final expression uses the fact that each KL term has an analytical solution. 1In our concrete implementation, our neural network outputs the mean $\mu_m(x)$ and log variance $\log(\sigma_m^2(x))$, which we thereafter exponentiate. Training with multi-head growth.
Hydra is trained in two phases. In the first phase, Hydra mimics knowledge distillation in that it is trained until convergence with a single head, the "Hinton head", to match the average predictions of the ensemble. Hydra is then extended by $M - 1$ heads, all of which are initialized with the parameter values of the "Hinton head". The resulting $M$ heads are finally trained further to match the individual predictions of the $M$ ensemble members (according to objective (3)). In practice, we sometimes experienced difficulties getting Hydra to converge in the absence of this initialization scheme, and for the cases where a different initialization worked, this two-phase training scheme typically led to overall quicker convergence.
The paper proposes to distill the predictions of an ensemble with a multi-headed network, with as many heads as members in the original ensemble. Distillation proceeds by minimizing the KL divergence between the predictions of each ensemble member and those of the corresponding head in the student network. Experiments illustrate that the multi-headed architecture approximates the ensemble marginally better than approaches that use a network with a single head.
SP:7b714eb05f8e86b18444c8f39d89e566313988dc
Option Discovery using Deep Skill Chaining
1 INTRODUCTION. Hierarchical reinforcement learning (Barto & Mahadevan, 2003) is a promising approach for solving long-horizon sequential decision making problems. Hierarchical methods lower the decision making burden on the agent through the use of problem-specific action abstractions (Konidaris, 2019). While the use of temporally extended actions, or options (Sutton et al., 1999), has been shown to accelerate learning (McGovern & Sutton, 1998), there remains the question of skill discovery: how can agents autonomously construct useful skills via interaction with the environment? While a large body of work has sought to answer this question in small discrete domains, skill discovery in high-dimensional continuous spaces remains an open problem. An early approach to skill discovery in continuous-state environments was skill chaining (Konidaris & Barto, 2009b), where an agent constructs a sequence of options that target a salient event in the MDP (for example, the goal state). The skills are constructed so that successful execution of each option in the chain allows the agent to execute another option, which brings it closer still to its eventual goal. While skill chaining was capable of discovering skills in continuous state spaces, it could only be applied to relatively low-dimensional state spaces with discrete actions. We introduce a new algorithm that combines the core insights of skill chaining with recent advances in using non-linear function approximation in reinforcement learning. The new algorithm, deep skill chaining, scales to high-dimensional problems with continuous state and action spaces. Through a series of experiments on five challenging domains in the MuJoCo physics simulator (Todorov et al., 2012), we show that deep skill chaining can solve tasks that otherwise cannot be solved by non-hierarchical agents in a reasonable amount of time. Furthermore, the new algorithm outperforms state-of-the-art deep skill discovery algorithms (Bacon et al., 2017; Levy et al., 2019) in these tasks. 2 BACKGROUND AND RELATED WORK. Sequential decision making problems can be formalized as Markov Decision Processes (MDPs). We consider goal-oriented episodic MDPs, where $S$ denotes the state space, $A$ is the action space, $R$ is the reward function, $T$ is the transition function, $\gamma$ is the discount factor and $g \in S$ is the terminating goal state (Sutton & Barto, 2018). Unlike goal-conditioned algorithms (Sutton et al., 2011; Schaul et al., 2015), we do not require that $g$ be known; instead we assume access to an indicator function $\mathbb{1}_g : s \in S \rightarrow \{0, 1\}$ which the agent can query to determine if it has reached the MDP's goal. 1Video of learned policies: https://youtu.be/MGvvPmm6JQg 2Code: https://github.com/deep-skill-chaining/deep-skill-chaining One way to learn a policy in an MDP is to first learn an action-value function. The action-value function $Q^\pi(s_t, a_t)$ is defined as the expected sum of discounted future rewards if the agent takes action $a_t$ from $s_t$ and then follows policy $\pi$ thereafter: $Q^\pi(s_t, a_t) = \mathbb{E}_\pi[r_t + \gamma \max_{a_{t+1}} Q^\pi(s_{t+1}, a_{t+1})]$. Q-learning (Watkins & Dayan, 1992) is a commonly used off-policy algorithm that uses the action-value function for control through a greedy policy $\pi(s_t) = \arg\max_{a_t} Q(s_t, a_t)$. Inspired by recent success in scaling Q-learning to high-dimensional spaces (Mnih et al., 2015; Van Hasselt et al., 2016; Lillicrap et al.
, 2015; Tesauro, 1994), we learn the action-value function $Q^\pi_\phi(s_t, a_t)$ using non-linear function approximators parameterized by $\phi$, by minimizing the loss $L(\phi) = \mathbb{E}_\pi[(Q_\phi(s_t, a_t) - y_t)^2]$, where the Q-learning target $y_t$ is given by the following equation (Van Hasselt et al., 2016): $y_t = r_t + \gamma Q_{\phi'}(s_{t+1}, \arg\max_{a_{t+1}} Q_\phi(s_{t+1}, a_{t+1}))$. (1) Deep Q-Learning (DQN) (Mnih et al., 2015) casts minimizing $L(\phi)$ as a standard regression problem by using target networks (parameterized by $\phi'$) and experience replay (Lin, 1993). 2.1 THE OPTIONS FRAMEWORK. The options framework (Sutton et al., 1999) models skills as options. An option $o$ consists of three components: (a) its initiation condition, $I_o(s)$, which determines whether $o$ can be executed in state $s$, (b) its termination condition, $\beta_o(s)$, which determines whether option execution must terminate in state $s$, and (c) its closed-loop control policy, $\pi_o(s)$, which maps state $s$ to a low-level action $a \in A$. Augmenting the set of available actions with options results in a Semi-Markov Decision Process (SMDP) (Sutton et al., 1999), where the next state depends on the current state, action and time. 2.2 SKILL DISCOVERY ALGORITHMS. Skill discovery has been studied extensively in small discrete domains (McGovern & Sutton, 1998; Şimşek & Barto, 2004; Şimşek et al., 2005; Bakker & Schmidhuber, 2004; Schmidhuber, 1991; Pickett & Barto, 2002; Dietterich, 2000). Recently, however, there has been a significant body of work aimed at discovering skills in continuous spaces. Option-critic methods: Option-Critic (Bacon et al., 2017) uses an end-to-end gradient-based algorithm to learn options in high-dimensional continuous spaces. Option-Critic was a substantial step forward in skill discovery and led to a family of related methods (Klissarov et al., 2017; Tiwari & Thomas, 2019; Riemer et al., 2018; Liu et al., 2017; Jain et al., 2018). Proximal Policy Option Critic (PPOC) (Klissarov et al., 2017) extends Option-Critic to continuous action spaces and is the version of Option-Critic that we compare against in this paper. Our method bypasses two fundamental shortcomings of the Option-Critic framework: (a) unlike Option-Critic, we explicitly learn initiation sets of options and thus do not assume that all options are executable from everywhere, and (b) we do not treat the number of skills required to solve a task as a fixed and costly hyperparameter. Instead, our algorithm flexibly discovers as many skills as it needs to solve the given problem. Feudal methods: An alternative to the options framework is Feudal RL (Dayan & Hinton, 1993), which creates a hierarchy in which managers learn to assign subgoals to workers; workers take a subgoal state as input and learn to reach it. Feudal Networks (FuN) (Vezhnevets et al., 2017) used neural networks to scale the Feudal-RL framework to high-dimensional continuous spaces; it was extended and outperformed by HIRO (Nachum et al., 2018) in a series of control tasks in the MuJoCo simulator. More recently, Hierarchical Actor-Critic (HAC) (Levy et al., 2019) outperformed HIRO in a similar suite of continuous control problems. While HIRO relies on having a dense "distance-to-goal" based reward function to train both levels of its feudal hierarchy, HAC's use of Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) allows it to work in the more general sparse-reward setting.
Given its strong performance in continuous control problems and its ability to learn effectively in sparse-reward settings, we compare against HAC as a representative feudal method. Learning backward from the goal: The idea of sequencing locally applicable controllers is well established in robotics and control theory in the form of pre-image backchaining (Kaelbling & Lozano-Pérez, 2017) and LQR-Trees (Tedrake, 2009). Such methods either require individually engineered control loops or a model of the system dynamics. Our work fits in the model-free RL setting and thus requires neither. More recently, reverse curriculum learning (Florensa et al., 2017) also learns backward from the goal. However, it defines a curriculum of start states to learn a single policy, rather than learning skills. Relay Networks (Kumar et al., 2018) segment the value function backward from the goal using a thresholding scheme, which makes their method reliant on the accurate estimation of the value function. By contrast, our algorithm is agnostic to errors in value estimation, which are unavoidable when using function approximation in high-dimensional spaces. Planning with learned skills: Options have been shown to empirically speed up planning in several domains (Silver & Ciosek, 2012; Jinnai et al., 2019; James et al., 2018; Francis & Ram, 1993; Konidaris, 2016; Sharma et al., 2019). However, Konidaris et al. (2018) show that for the resulting plans to be provably feasible, skills must be executable sequentially. While they assume that such skills are given, we show that they can be autonomously discovered in high-dimensional spaces. 3 DEEP SKILL CHAINING. Deep skill chaining (DSC) is based on the intuition that it is easier to solve a long-horizon task from states in the local neighborhood of the goal. This intuition informs the first step of the algorithm: create an option that initiates near the goal and reliably takes the agent to the goal. Once such an option is learned, we create another option whose goal is to take the agent to a state from which it can successfully execute the first option. Skills are chained backward in this fashion until the start state of the MDP lies inside the initiation set of some option. The inductive bias of creating sequentially executable skills guarantees that as long as the agent successfully executes each skill in its chain, it will solve the original task. More formally, skill chaining amounts to learning options such that the termination condition $\beta_{o_i}(s_t)$ of an option $o_i$ is the initiation condition $I_{o_{i-1}}(s_t)$ of the option that precedes it in its chain. Our algorithm proceeds as follows: at time $t$, the policy over options $\pi_O : s_t \in S \rightarrow o \in O$ determines which option to execute (Section 3.2). Control is then handed over to the selected option $o_i$'s internal policy $\pi_{o_i} : s \in S \rightarrow a_t \in \mathbb{R}^{|A|}$. $\pi_{o_i}$ outputs joint torques until it either reaches its goal ($\beta_{o_i} := I_{o_{i-1}}$) or times out at its predetermined budget $T$ (Section 3.1). At this point, $\pi_O$ chooses another option to execute. If at any point the agent reaches the goal state of the MDP or the initiation condition of a previously learned option, it creates a new option to target such a salient event. The machinery for learning the initiation condition of this new option is described in Section 3.3. We now detail the components of our architecture and how they are learned.
Readers may also refer to Figures 4 & 7 and the pseudo-code in Appendix A.5 to gain greater intuition about our algorithm .
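To complement the pseudo-code in Appendix A.5, here is a rough, hedged sketch of the episode-level control loop described above. The option interface (`budget`, `act`, `is_terminal`) and the helper names are illustrative stand-ins for T, πo and βo; option creation and all learning updates (Sections 3.1–3.3) are omitted.

```python
def dsc_episode(env, pi_O, goal_reached, max_steps=1000):
    """One episode of the deep skill chaining control loop (sketch).
    pi_O maps the current state to one of the currently initiable options."""
    state, steps = env.reset(), 0
    while not goal_reached(state) and steps < max_steps:
        option = pi_O(state)                       # policy over options selects o_i
        for _ in range(option.budget):             # run pi_{o_i} until beta or timeout T
            action = option.act(state)             # low-level action (e.g. joint torques)
            state, reward, done, _ = env.step(action)
            steps += 1
            if done or option.is_terminal(state):  # beta_{o_i} := I_{o_{i-1}}
                break
```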
This paper studies the problem of learning suitable action abstractions (i.e., options or skills) that can be composed hierarchically to solve control tasks. The starting point for the paper is the (classic) observation that one skill should end where another can start. The paper then proposes a recursive algorithm for learning skills that obey this property. After finding some number of trajectories that reach the goal, the last few states of these trajectories are taken to define the initiation set for the ultimate skill and the termination set for the penultimate skill. The procedure is repeated, yielding a sequence (a "chain") of skills that extends from the initial state distribution to the terminal state. The fact that the number of skills is not defined a priori seems to be a strength, and the extension to trees of skills is neat.
SP:551579ae4e3fe3b943e738e04b923d519bea84e8
Option Discovery using Deep Skill Chaining
The authors tackle the problem of skill discovery by skill chaining. In particular, the authors claim two key contributions over the state of the art in option discovery: 1) they learn initiation sets, and 2) they do not need to specify the number of options, which is instead also learned. Skill discovery is formalized by skill chaining, wherein skills are chained backward from a goal state in such a way that the termination condition of an option is the initiation condition of the option that precedes it in the chain.
SP:551579ae4e3fe3b943e738e04b923d519bea84e8
Neural Outlier Rejection for Self-Supervised Keypoint Learning
Identifying salient points in images is a crucial component for visual odometry , Structure-from-Motion or SLAM algorithms . Recently , several learned keypoint methods have demonstrated compelling performance on challenging benchmarks . However , generating consistent and accurate training data for interest-point detection in natural images still remains challenging , especially for human annotators . We introduce IO-Net ( i.e . InlierOutlierNet ) , a novel proxy task for the self-supervision of keypoint detection , description and matching . By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework , we show that we are able to simultaneously self-supervise keypoint description and improve keypoint matching . Second , we introduce KeyPointNet , a keypoint-network architecture that is especially amenable to robust keypoint detection and description . We design the network to allow local keypoint aggregation to avoid artifacts due to spatial discretizations commonly used for this task , and we improve fine-grained keypoint descriptor performance by taking advantage of efficient sub-pixel convolutions to upsample the descriptor feature-maps to a higher operating resolution . Through extensive experiments and ablative analysis , we show that the proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art.† 1 INTRODUCTION . Detecting interest points in RGB images and matching them across views is a fundamental capability of many robotic systems . Tasks such as Simultaneous Localization and Mapping ( SLAM ) ( Cadena et al. , 2016 ) , Structure-from-Motion ( SfM ) ( Agarwal et al. , 2010 ) and object detection assume that salient keypoints can be detected and re-identified in a wide range of scenarios , which requires invariance properties to lighting effects , viewpoint changes , scale , time of day , etc . However , these tasks still mostly rely on handcrafted image features such as SIFT ( Lowe et al. , 1999 ) or ORB ( Rublee et al. , 2011 ) , which have been shown to be limited in performance when compared to learned alternatives ( Balntas et al. , 2017 ) . Deep learning methods have revolutionized many computer vision applications including 2D/3D object detection ( Lang et al. , 2019 ; Tian et al. , 2019 ) , semantic segmentation ( Li et al. , 2018 ; Kirillov et al. , 2019 ) , human pose estimation ( Sun et al. , 2019 ) , etc . However , most learning algorithms need supervision and rely on labels which are often expensive to acquire . Moreover , supervising interest point detection is unnatural , as a human annotator can not readily identify salient regions in images as well as key signatures or descriptors , which would allow their re-identification . Self-supervised learning methods have gained in popularity recently , being used for tasks such as depth regression ( Guizilini et al. , 2019 ) , tracking ( Vondrick et al. , 2018 ) and representation learning ( Wang et al. , 2019 ; Kolesnikov et al. , 2019 ) . Following DeTone et al . ( 2018b ) and Christiansen et al . ( 2019 ) , we propose a self-supervised methodology for jointly training a keypoint detector as well as its associated descriptor . †Code : https://github.com/TRI-ML/KP2D Our main contributions are : ( i ) We introduce IO-Net ( i.e .
InlierOutlierNet ) , a novel proxy task for the self-supervision of keypoint detection , description and matching . By using a neurally-guided outlier-rejection scheme ( Brachmann & Rother , 2019 ) as an auxiliary task , we show that we are able to simultaneously self-supervise keypoint description and generate optimal inlier sets from possible corresponding point-pairs . While the keypoint network is fully self-supervised , the network is able to effectively learn distinguishable features for two-view matching , via the flow of gradients from consistently matched point-pairs . ( ii ) We introduce KeyPointNet , and propose two modifications to the keypoint-network architecture described in Christiansen et al . ( 2019 ) . First , we allow the keypoint location head to regress keypoint locations outside their corresponding cells , enabling keypoint matching near and across cell-boundaries . Second , by taking advantage of sub-pixel convolutions to interpolate the descriptor feature-maps to a higher resolution , we show that we are able to improve the fine-grained keypoint descriptor fidelity and performance especially as they retain more fine-grained detail for pixel-level metric learning in the self-supervised regime . Through extensive experiments and ablation studies , we show that the proposed architecture allows us to establish state-of-the-art performance for the task of self-supervised keypoint detection , description and matching . 2 RELATED WORK . The recent success of deep learning-based methods in many computer vision applications , especially feature descriptors , has motivated general research in the direction of image feature detection beyond handcrafted methods . Such state-of-the-art learned keypoint detectors and descriptors have recently demonstrated improved performance on challenging benchmarks ( DeTone et al. , 2018b ; Christiansen et al. , 2019 ; Sarlin et al. , 2019 ) . In TILDE ( Verdie et al. , 2015 ) , the authors introduced multiple piece-wise linear regression models to detect features under severe changes in weather and lighting conditions . To train the regressors , they generate pseudo ground truth interest points by using a Difference-of-Gaussian ( DoG ) detector ( Lowe , 2004 ) from an image sequence captured at different times of day and seasons . LIFT ( Yi et al. , 2016 ) is able to estimate features which are robust to significant viewpoint and illumination differences using an end-to-end learning pipeline consisting of three modules : interest point detection , orientation estimation and descriptor computation . In LF-Net ( Ono et al. , 2018 ) , the authors introduced an end-to-end differentiable network which estimates position , scale and orientation of features by jointly optimizing the detector and descriptor in a single module . Quad-networks ( Savinov et al. , 2017 ) introduced an unsupervised learning scheme for training a shallow 2-layer network to predict feature points . SuperPoint ( DeTone et al. , 2018b ) is a self-supervised framework that is trained on whole images and is able to predict both interest points and descriptors . Its architecture shares most of the computation in the detection and description modules , making it fast enough for real-time operation , but it requires multiple stages of training which is not desirable in practice . Most recently , UnsuperPoint ( Christiansen et al. , 2019 ) presented a fast deep-learning-based keypoint detector and descriptor which requires only one round of training in a self-supervised manner .
Inspired by SuperPoint , it also shares most of the computation in the detection and description modules , and uses a siamese network to learn descriptors . They employ simple homography adaptation along with non-spatial image augmentations to create the 2D synthetic views required to train their self-supervised keypoint estimation model , which is advantageous because it trivially solves data association between these views . In their work , Christiansen et al . ( 2019 ) predict keypoints that are evenly distributed within the cells and enforce that the predicted keypoint locations do not cross cell boundaries ( i.e . each cell predicts a keypoint inside it ) . We show that this leads to sub-optimal performance especially when stable keypoints appear near cell borders . Instead , our method explicitly handles the detection and association of keypoints across cell-boundaries , thereby improving the overall matching performance . In Self-Improving Visual Odometry ( DeTone et al. , 2018a ) , the authors first estimate 2D keypoints and descriptors for each image in a monocular sequence using a convolutional network , and then use a bundle adjustment method to classify the stability of those keypoints based on re-projection error , which serves as a supervisory signal to retrain the model . Their method , however , is not fully differentiable , so it can not be trained in an end-to-end manner . Instead , we incorporate an end-to-end differentiable and neurally-guided outlier-rejection mechanism ( IO-Net ) that explicitly generates an additional proxy supervisory signal for the matching input keypoint-pairs identified by our KeyPointNet architecture . This allows the keypoint descriptions to be further refined as a result of the outlier-rejection network predictions occurring during the two-view matching stage . 3 SELF-SUPERVISED KEYPOINT LEARNING . In this work , we aim to regress a function which takes as input an image and outputs keypoints , descriptors , and scores . Specifically , we define K : I → { p , f , s } , with input image I ∈ R^{3×H×W} , and output keypoints p = { [ u , v ] } ∈ R^{2×N} , descriptors f ∈ R^{256×N} and keypoint scores s ∈ R^N ; N represents the total number of keypoints extracted and it varies according to the input image resolution , as defined in the following sections . We note that throughout this paper we use p to refer to the set of keypoints extracted from an image , while p is used to refer to a single keypoint . Following the work of Christiansen et al . ( 2019 ) , we train the proposed learning framework in a self-supervised fashion by receiving as input a source image Is such that K ( Is ) = { ps , fs , ss } and a target image It such that K ( It ) = { pt , ft , st } . Images Is and It are related through a known homography transformation H which warps a pixel from the source image and maps it into the target image . We define p∗t = { [ u∗i , v∗i ] } = H ( ps ) , with i ∈ I , i.e . the corresponding locations of source keypoints ps after being warped into the target frame . Inspired by recent advances in Neural Guided Sample Consensus methods ( Brachmann & Rother , 2019 ) , we define a second function C which takes as input point-pairs along with associated weights according to a distance metric , and outputs the likelihood that each point-pair belongs to an inlier set of matches . Formally , we define C : { ps , p∗t , d ( fs , f∗t ) } ∈ R^{5×N} → R^N as a mapping which computes the probability that a point-pair belongs to an inlier set .
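To illustrate the input to C, the following is a small sketch of assembling the 5 × N point-pair tensor { ps , p∗t , d ( fs , f∗t ) } from source keypoints, their homography-warped locations and a per-pair descriptor distance. The helper is hypothetical, and the L2 distance is an assumption; the paper's exact metric d ( · , · ) may differ.

```python
import numpy as np

def ionet_input(p_s, f_s, f_t_star, H):
    """Builds the 5 x N IO-Net input {p_s, p_t*, d(f_s, f_t*)}.
    p_s: (2, N) source keypoints; f_s, f_t_star: (256, N) descriptors;
    H: (3, 3) known homography relating source and target views."""
    N = p_s.shape[1]
    p_h = H @ np.vstack([p_s, np.ones((1, N))])   # warp in homogeneous coordinates
    p_t_star = p_h[:2] / p_h[2:3]                 # p_t* = H(p_s), back to pixel coordinates
    d = np.linalg.norm(f_s - f_t_star, axis=0)    # one distance per point-pair (assumed L2)
    return np.vstack([p_s, p_t_star, d[None]])    # shape (5, N)
```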
We note that C is only used at training time to choose an optimal set of consistent inliers from possible corresponding point pairs and to encourage the gradient flow through consistent point-pairs . An overview of our method is presented in Figure 1 . We define the model K parametrized by θK as an encoder-decoder style network . The encoder consists of 4 VGG-style blocks stacked to reduce the resolution of the image H × W to Hc × Wc = H/8 × W/8 . This allows efficient prediction of keypoint locations and descriptors . In this low-resolution embedding space , each pixel corresponds to an 8 × 8 cell in the original image . The decoder consists of 3 separate heads for the keypoints , descriptors and scores respectively . Thus for an image of input size H × W , the total number of keypoints regressed is ( H × W ) /64 , each with a corresponding score and descriptor . For every convolutional layer except the final one , batch normalization is applied with leakyReLU activation . A detailed description of our network architecture can be seen in Figure 2 . The IO-Net is a 1D CNN parametrized by θIO , for which we closely follow the structure from Brachmann & Rother ( 2019 ) with 4 residual blocks in their default setting , and with the original activation function of the final layer removed . A more detailed description of these networks can be found in the Appendix ( Tables 6 and 7 ) .
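The cell arithmetic above fully determines the output sizes; as a small worked example (the helper name below is illustrative):

```python
def keypointnet_output_shapes(H, W):
    """Output shapes implied by the 8x8 cell structure: one keypoint,
    score and 256-d descriptor per cell of the H/8 x W/8 feature map."""
    Hc, Wc = H // 8, W // 8
    N = Hc * Wc                    # equals (H x W) / 64 keypoints
    return {"p": (2, N), "f": (256, N), "s": (N,)}

# For example, a 320 x 240 input yields N = 40 * 30 = 1200 keypoints.
```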
The following work proposes several improvements over prior works in unsupervised/self-supervised keypoint-descriptor learning such as Christiansen et al. One improvement is the relaxation of the cell-boundaries for keypoint prediction -- specifically allowing keypoints anchored at the cell's center to be offset into neighboring cells. Another change was the introduction of an inlier-outlier classifier network to be used as a proxy loss for the keypoint position and descriptors. They found the inlier-outlier loss to improve homography accuracy at 1 and 3 pixel thresholds.
SP:3c415c075029fe70504aac9ad8fd3a7a8995458c
Neural Outlier Rejection for Self-Supervised Keypoint Learning
The paper is devoted to self-supervised learning of local features (both detectors and descriptors simultaneously). The problem is old but not fully solved yet, because handcrafted SIFT is still winning the benchmarks. This work mostly follows and improves upon SuperPoint (DeTone et al. 2017) and the follow-up work UnsuperPoint (Christiansen et al. 2019) architecture and training scheme.
SP:3c415c075029fe70504aac9ad8fd3a7a8995458c
Attention Privileged Reinforcement Learning for Domain Transfer
1 INTRODUCTION . Deep Reinforcement Learning ( RL ) has recently provided significant successes in a range of areas , including video games ( Mnih et al. , 2015 ) , board games ( Silver et al. , 2017 ) , simulated continuous control tasks ( Lillicrap et al. , 2015 ) , and robotic manipulation ( Haarnoja et al. , 2018 ; Haarnoja , 2018 ; Riedmiller et al. , 2018 ; OpenAI et al. , 2018 ; Schwab et al. , 2019 ; Andrychowicz et al. , 2017 ) . However , application to physical systems has proven to be challenging in general , due to expensive and slow data generation as well as safety challenges when running untrained policies . A common approach to circumvent these issues is to transfer models trained in simulation to the real world ( Tobin et al. , 2017 ; Rusu et al. , 2016 ; Held et al. , 2017 ) . However , simulators only represent approximations of a physical system . Due to physical , visual , and behavioural discrepancies , naively transferring RL agents trained in simulation onto the real world can be challenging . To bridge the gap between simulation and the real world , we can either aim to align both domains ( Ganin et al. , 2016 ; Bousmalis et al. , 2016 ; Wulfmeier et al. , 2017 ) or ensure that the real system is covered by the distribution of simulated training data ( OpenAI et al. , 2018 ; Tobin et al. , 2017 ; Pinto et al. , 2018 ; Sadeghi & Levine , 2016 ; Viereck et al. , 2017 ) . However , training under a distribution of randomised visual attributes of the simulator , such as textures and lighting ( Sadeghi & Levine , 2016 ; Viereck et al. , 2017 ) , as well as physics ( OpenAI et al. , 2018 ) , can be substantially more difficult and slower due to the increased variability of the learning domain ( OpenAI et al. , 2018 ; Tobin et al. , 2017 ) . The more structured and informative the input representation is with respect to the task , the quicker the agent can be trained . A clear example of this effect can be found when an agent is trained with image inputs , versus training with access to the exact simulator states ( Tassa et al. , 2018 ; Pinto et al. , 2018 ) . However , visual perception is more general and access to more compressed representations can often be limited . When exact states are available during training but not deployment , we can make use of information-asymmetric actor-critic methods ( Pinto et al. , 2018 ; Schwab et al. , 2019 ) to train the critic faster via access to the state while providing only images for the actor . 1Videos comparing the policy behaviours of APRiL to the asymmetric DDPG baseline can be found here . By introducing Attention Privileged Reinforcement Learning ( APRiL ) , we aim to further leverage access to exact states . APRiL leverages states not only to train the critic , but indirectly also for an image-based actor . Extending asymmetric actor-critic methods , APRiL concurrently trains two actor-critic systems ( one symmetric , state-based agent , and another asymmetric agent with an image-dependent actor ) . Both actors utilise an attention mechanism to filter input data , and , by having access to the simulation rendering system , we can optimise image- and state-based attention masks to align . By additionally sharing the replay buffer between both agents , we can accelerate the learning process of the image-based actor by training on better-performing states that are more quickly discovered by the state-based actor due to its lower-dimensional input that is invariant to visual randomisation .
The key benefits of APRiL lie in its application to domain transfer . When training with domain randomisation for transfer , bootstrapping via asymmetric information has displayed crucial benefits ( Pinto et al. , 2018 ) . Visual randomisation substantially increases the complexity of the image-based actor ’ s task . Under this setting , the attention network can support invariance with respect to the irrelevant , but highly varying , parts of the image . Furthermore , the convergence of the state-space actor remains unaffected by visual randomisation . We experimentally demonstrate considerable improvements regarding learning convergence and more robust transfer on a set of continuous action domains , including 2D navigation , 2D locomotion and 3D robotic manipulation . 2 PROBLEM SETUP . Before introducing Attention Privileged Reinforcement Learning ( APRiL ) , this section provides a background for the RL algorithms used . For a more in-depth introduction please refer to Lillicrap et al . ( 2015 ) and Pinto et al . ( 2018 ) . 2.1 REINFORCEMENT LEARNING . We describe an agent ’ s environment as a Partially Observable Markov Decision Process which is represented as the tuple ( S , O , A , P , r , γ , s0 ) , where S denotes a set of continuous states , A denotes a set of either discrete or continuous actions , P : S × A × S → { x ∈ R | 0 ≤ x ≤ 1 } is the transition probability function , r : S × A → R is the reward function , γ is the discount factor , and s0 is the initial state distribution . O is a set of continuous observations corresponding to continuous states in S. At every time-step t , the agent takes action at = π ( ·|st ) according to its policy π : S → A . The policy is optimised so as to maximize the expected return Rt = Es0 [ ∑_{i=t}^{∞} γ^{i−t} ri | s0 ] . The agent ’ s Q-function is defined as Qπ ( st , at ) = E [ Rt | st , at ] . 2.2 ASYMMETRIC DEEP DETERMINISTIC POLICY GRADIENTS . Asymmetric Deep Deterministic Policy Gradients ( asymmetric DDPG ) ( Pinto et al. , 2018 ) represents a type of actor-critic algorithm designed specifically for efficient learning of a deterministic , observation-based policy in simulation for sim-to-real transfer . This is achieved by leveraging access to more compressed , informative environment states , available in simulation , to speed up and stabilise training of the critic . The algorithm maintains two neural networks : an observation-based actor or policy πθ : O → A ( with parameters θ ) used during training and test time , and a state-based Q-function ( also known as critic ) Qπφ : S × A → R ( with parameters φ ) which is only used during training . To enable exploration , the method ( like its symmetric version ( Silver et al. , 2014 ) ) relies on a noisy version of the policy ( called the behavioural policy ) , e.g . πb ( o ) = π ( o ) + z where z ∼ N ( 0 , 1 ) ( see Appendix C for our particular instantiation ) . The transition tuples ( st , ot , at , rt , st+1 , ot+1 ) encountered during training are stored in a replay buffer ( Mnih et al. , 2015 ) . Training examples sampled from the replay buffer are used to optimize the critic and actor . By minimizing the Bellman error loss Lcritic = ( Q ( st , at ) − yt )^2 , where yt = rt + γQ ( st+1 , π ( ot+1 ) ) , the critic is optimized to approximate the true Q values . The actor is optimized by minimizing the loss Lactor = −Es , o∼πb ( o ) [ Q ( s , π ( o ) ) ] .
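The two losses above can be sketched as follows, assuming PyTorch modules in which the critic consumes full states s while the actor only sees observations o; all names are illustrative, and target-network updates and terminal-state masking are omitted.

```python
import torch
import torch.nn.functional as F

def asymmetric_ddpg_losses(batch, actor, critic, target_actor, target_critic, gamma=0.99):
    """Critic: Bellman error on y_t = r_t + gamma * Q(s_{t+1}, pi(o_{t+1})).
    Actor: maximise Q(s, pi(o)), i.e. minimise its negative."""
    s, o, a, r, s_next, o_next = batch
    with torch.no_grad():
        y = r + gamma * target_critic(s_next, target_actor(o_next)).squeeze(-1)
    critic_loss = F.mse_loss(critic(s, a).squeeze(-1), y)   # L_critic
    actor_loss = -critic(s, actor(o)).mean()                # L_actor
    return critic_loss, actor_loss
```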
3 ATTENTION PRIVILEGED REINFORCEMENT LEARNING ( APRIL ) . APRiL proposes to improve the performance and sample efficiency of an observation-based agent by using a quicker-learning actor that has access to exact environment states , sharing replay buffers , and aligning attention mechanisms between both actors . While we focus in the following sections on extending asymmetric DDPG ( Pinto et al. , 2018 ) , these ideas are generally applicable to off-policy actor-critic methods ( Konda & Tsitsiklis , 2000 ) . APRiL is comprised of three modules as displayed in Figure 1 . The first two modules , As and Ao , are actor-critic algorithms with an attention network incorporated over the input to each actor . For the state-based module As we use standard symmetric DDPG , while the observation-based module Ao builds on asymmetric DDPG . Finally , the third part AT represents the alignment process between attention mechanisms of both actor-critic agents to more effectively transfer knowledge between the quicker and slower learners , As and Ao , respectively . As consists of three networks : Qπs , πs , hs ( respectively critic , actor , and attention ) with parameters { φs , θs , ψs } . Given input state st , the attention network outputs a soft gating mask ht of the same dimensionality as the input , with values in [ 0 , 1 ] . The input to the actor is an attention-filtered version of the state , sat = hs ( st ) ⊙ st . To encourage a sparse masking function , we found that training this attention module on both the traditional DDPG loss as well as an entropy loss helped : Lhs = −Es∼πb [ Qs ( s , πs ( sa ) ) − βH ( hs ( s ) ) ] , ( 1 ) where β is a hyperparameter to weight the additional entropy objective , and πb is the behaviour policy used to obtain experience ( in this case from a shared replay buffer ) . The actor and critic networks πs and Qs are trained with the symmetric DDPG actor and Bellman error losses respectively . Within AT , the state-attention obtained in As is converted to corresponding observation-attention T to act as a self-supervised target for the observation-based agent in Ao . This is achieved in a two-step process . First , state-attention hs ( s ) is converted into object-attention c , which specifies how task-relevant each object in the scene is . Second , object-attention is converted to observation-space attention by performing a weighted sum over object-specific segmentation maps : c = M · hs ( s ) ( 2 ) and T = ∑_{i=0}^{N−1} ci · zi ( 3 ) . Here , M ∈ { 0 , 1 }^{N×ns} ( where ns is the dimensionality of s ) is an environment-specific , predefined adjacency matrix that maps the dimensions of s to each corresponding object , and c ∈ [ 0 , 1 ]^N is then an attention vector over the N objects in the environment . ci corresponds to the ith object attention value . zi ∈ { 0 , 1 }^{W×H} is the binary segmentation map of the ith object segmenting the object from the rest of the scene , and has the same dimensions as the image observation . zi assigns values of 1 for pixels in the image occupied by the ith object , and 0 elsewhere . T ∈ [ 0 , 1 ]^{W×H} is the state-attention converted to observation-space attention , which acts as a target to train the observation-attention network ho on . The observation-based module Ao also consists of three networks : Qπo , πo , ho ( respectively critic , actor , and attention ) with parameters { φo , θo , ψo } . The structure of this module is the same as As except the actor and critic now have asymmetric inputs .
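Eqs. ( 2 ) and ( 3 ) reduce to two array operations; a minimal sketch under the stated assumptions (M is the predefined { 0 , 1 } adjacency matrix, and seg_maps stacks the binary per-object segmentation maps zi):

```python
import numpy as np

def observation_attention_target(h_s, M, seg_maps):
    """h_s: (n_s,) state-attention; M: (N, n_s) object-to-state adjacency;
    seg_maps: (N, W, H) binary masks z_i. Returns the (W, H) target T."""
    c = M @ h_s                            # Eq. (2): per-object attention c in [0, 1]^N
    T = np.tensordot(c, seg_maps, axes=1)  # Eq. (3): T = sum_i c_i * z_i
    return T
```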
The input to the actor is the attention-filtered version of the observation , oat = ho ( ot ) ⊙ ot . The actor and critic networks πo and Qo are trained with the standard asymmetric DDPG actor and Bellman error losses respectively , defined in Section 2.2 . The main difference between Ao and As is that the observation attention network ho is trained on both the actor loss and an object-weighted mean squared error loss : Lho = Eo , s∼πb [ 1/2 ∑_{ij} ( 1/wij ) ( ( ho ( o ) − T )_{ij} )^2 − ν Qo ( s , πo ( oa ) ) ] , ( 4 ) where the weights wij correspond to the fraction of the partial observation o that the object present at o_{i , j , 1:3} occupies , and ν represents the relative weighting of both loss components . The weight terms , w , ensure that the attention network becomes invariant to the size of objects during training and does not simply fit to the most predominant object in the scene . Combining the self-supervised attention loss and the RL loss leverages efficient state-space learning unaffected by visual randomisation . During training , experiences are collected evenly from both state- and observation-based agents and stored in a shared replay buffer ( similar to Schwab et al . ( 2019 ) ) . This is to ensure that : 1 . Both the state-based critic Qs and the observation-based critic Qo observe states that would be visited by either of their respective policies . 2 . The attention modules hs and ho are trained on the same data distribution to better facilitate alignment . 3 . Efficient discovery of highly performing states from πs is used to speed up learning of πo . Algorithm 1 shows pseudocode for a single-actor implementation of APRiL . In practice , in order to speed up data collection and gradient computation , we parallelise the agents and environments and ensure data collection from state- and image-based agents is even .
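Finally, Eq. ( 4 ) can be sketched as below; `w_frac` holds the per-pixel object-size fractions wij, `q_val` stands for Qo ( s , πo ( oa ) ), and all names are illustrative rather than taken from the paper's implementation.

```python
import torch

def observation_attention_loss(h_o, T, w_frac, q_val, nu=1.0):
    """Eq. (4): object-size-weighted MSE between the image-attention map h_o
    and the target T, minus nu times the RL objective."""
    # Dividing by w_ij up-weights pixels of small objects, making the
    # alignment term invariant to object size, as described in the text.
    alignment = 0.5 * (((h_o - T) ** 2) / w_frac).sum()
    return alignment - nu * q_val.mean()
```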
Building on top of the domain randomization principle (used to train policies robust to domain variations) to learn policies which transfer well to new domains, the paper proposes an approach to improve and speed up learning / training over randomized environments. The paper operates in a setting where the policy to be transferred only has access to observations -- images, etc. -- and not the complete underlying state of a (simulated) environment. The underlying idea is to -- (1) maintain two sets of actor-critic networks - a symmetric pair where the actor has access to the underlying state and an asymmetric pair where the actor has access only to image observations; (2) evenly gather experiences from behavioral policies of both actors and store them in a shared replay buffer and (3) learn to align the attention placed by the policies over objects in the environment for the state- and observation-based actors. The idea is to leverage privileged information about the state (which is strictly more informative compared to observations) to learn robust observation-based policies. Experimental results indicate the proposed approach improves generalization performance compared to several ablations of the same on both in-distribution and out-of-distribution environments.
SP:ada3d3555f409cc84a060f81d2e4934459fa731f
Attention Privileged Reinforcement Learning for Domain Transfer
1 INTRODUCTION . Deep Reinforcement Learning ( RL ) has recently provided significant successes in a range of areas , including video games ( Mnih et al. , 2015 ) , board games ( Silver et al. , 2017 ) , simulated continuous control tasks ( Lillicrap et al. , 2015 ) , and robotic manipulation ( Haarnoja et al. , 2018 ; Haarnoja , 2018 ; Riedmiller et al. , 2018 ; OpenAI et al. , 2018 ; Schwab et al. , 2019 ; Andrychowicz et al. , 2017 ) . However , application to physical systems has proven to be challenging in general , due to expensive and slow data generation as well as safety challenges when running untrained policies . A common approach to circumvent these issues is to transfer models trained in simulation to the real world ( Tobin et al. , 2017 ; Rusu et al. , 2016 ; Held et al. , 2017 ) . However , simulators only represent approximations of a physical system . Due to physical , visual , and behavioural discrepancies , naively transferring RL agents trained in simulation onto the real world can be challenging . To bridge the gap between simulation and the real world , we can either aim to align both domains ( Ganin et al. , 2016 ; Bousmalis et al. , 2016 ; Wulfmeier et al. , 2017 ) or ensure that the real system is covered by the distribution of simulated training data ( OpenAI et al. , 2018 ; Tobin et al. , 2017 ; Pinto et al. , 2018 ; Sadeghi & Levine , 2016 ; Viereck et al. , 2017 ) . However , training under a distribution of randomised visual attributes of the simulator , such as textures and lighting ( Sadeghi & Levine , 2016 ; Viereck et al. , 2017 ) , as well as physics ( OpenAI et al. , 2018 ) , can be substantially more difficult and slower due to the increased variability of the learning domain ( OpenAI et al. , 2018 ; Tobin et al. , 2017 ) . The more structured and informative the input representation is with respect to the task , the quicker the agent can be trained . A clear example of this effect can be found when an agent is trained with image inputs , versus training with access to the exact simulator states ( Tassa et al. , 2018 ; Pinto et al. , 2018 ) . However , visual perception is more general and access to more compressed representations can often be limited . When exact states are available during training but not deployment , we can make use of information asymmetric actor-critic methods ( Pinto et al. , 2018 ; Schwab et al. , 2019 ) to train the critic faster via access to the state while providing only images for the actor . 1Videos comparing the policy behaviours of APRiL to the asymmetric DDPG baseline can be found here By introducing Attention Privileged Reinforcement Learning ( APRiL ) , we aim to further leverage access to exact states . APRiL leverages states not only to train the critic , but indirectly also for an image-based actor . Extending asymmetric actor-critic methods , APRiL concurrently trains two actor-critic systems ( one symmetric , state-based agent , and another asymmetric agent with imagedependent actor ) . Both actors utilise an attention mechanism to filter input data and by having access to the simulation rendering system , we can optimise image and state based attention masks to align . By additionally sharing the replay buffer between both agents , we can accelerate the learning process of the image-based actor by training on better performing states that are more quickly discovered by the state-based actor due to its lower dimensional input that is invariant to visual randomisation . 
The key benefits of APRiL lie in its application to domain transfer . When training with domain randomisation for transfer , bootstrapping via asymmetric information has displayed crucial benefits ( Pinto et al. , 2018 ) . Visual randomisation substantially increases the complexity of the imagebased actor ’ s task . Under this setting , the attention network can support invariance with respect to the irrelevant , but highly varying , parts of the image . Furthermore , the convergence of the statespace actor remains unaffected by visual randomisation . We experimentally demonstrate considerable improvements regarding learning convergence and more robust transfer on a set of continuous action domains including : 2D navigation , 2D locomotion and 3D robotic manipulation . 2 PROBLEM SETUP . Before introducing Attention Privileged Reinforcement Learning ( APRiL ) , this section provides a background for the RL algorithms used . For a more in-depth introduction please refer to Lillicrap et al . ( 2015 ) and Pinto et al . ( 2018 ) . 2.1 REINFORCEMENT LEARNING . We describe an agent ’ s environment as a Partially Observable Markov Decision Process which is represented as the tuple ( S , O , A , P , r , γ , s0 ) , where S denotes a set of continuous states , A denotes a set of either discrete or continuous actions , P : S×A×S → { x ∈ R|0 ≤ x ≤ 1 } is the transition probability function , r : S × A → R is the reward function , γ is the discount factor , and s0 is the initial state distribution . O is a set of continuous observations corresponding to continuous states in S. At every time-step t , the agent takes action at = π ( ·|st ) according to its policy π : S → A . The policy is optimised as to maximize the expected return Rt = Es0 [ ∑∞ i=t γ i−tri|s0 ] . The agent ’ s Q-function is defined as Qπ ( st , at ) = E [ Rt|st , at ] . 2.2 ASYMMETRIC DEEP DETERMINISTIC POLICY GRADIENTS . Asymmetric Deep Deterministic Policy Gradients ( asymmetric DDPG ) ( Pinto et al. , 2018 ) represents a type of actor-critic algorithm designed specifically for efficient learning of a deterministic , observation-based policy in simulation for sim-to-real transfer . This is achieved by leveraging access to more compressed , informative environment states , available in simulation , to speed up and stabilise training of the critic . The algorithm maintains two neural networks : an observation-based actor or policy πθ : O → A ( with parameters θ ) used during training and test time , and a state-based Q-function ( also known as critic ) Qπφ : S ×A→ R ( with parameters φ ) which is only used during training . To enable exploration , the method ( like its symmetric version ( Silver et al. , 2014 ) ) relies on a noisy version of the policy ( called behavioural policy ) , e.g . πb ( o ) = π ( o ) + z where z ∼ N ( 0 , 1 ) ( see Appendix C for our particular instantiation ) . The transition tuples ( st , ot , at , rt , st+1 , ot+1 ) encountered during training are stored in a replay buffer ( Mnih et al. , 2015 ) . Training examples sampled from the replay buffer are used to optimize the critic and actor . By minimizing the Bellman error loss Lcritic = ( Q ( st , at ) − yt ) 2 , where yt = rt + γQ ( st+1 , π ( ot+1 ) ) , the critic is optimized to approximate the true Q values . The actor is optimized by minimizing the loss Lactor = −Es , o∼πb ( o ) [ Q ( s , π ( o ) ) ] . 3 ATTENTION PRIVILEGED REINFORCEMENT LEARNING ( APRIL ) . 
3 ATTENTION PRIVILEGED REINFORCEMENT LEARNING (APRIL). APRiL aims to improve the performance and sample efficiency of an observation-based agent by using a quicker-learning actor that has access to exact environment states, by sharing replay buffers, and by aligning attention mechanisms between both actors. While the following sections focus on extending asymmetric DDPG (Pinto et al., 2018), these ideas are generally applicable to off-policy actor-critic methods (Konda & Tsitsiklis, 2000). APRiL comprises three modules, as displayed in Figure 1. The first two modules, A_s and A_o, are actor-critic algorithms with an attention network incorporated over the input to each actor. For the state-based module A_s we use standard symmetric DDPG, while the observation-based module A_o builds on asymmetric DDPG. The third part, A_T, represents the alignment process between the attention mechanisms of both actor-critic agents, which transfers knowledge more effectively between the quicker and slower learners, A_s and A_o respectively. A_s consists of three networks: Q^{π_s}, π_s, h_s (critic, actor, and attention respectively) with parameters {φ_s, θ_s, ψ_s}. Given input state s_t, the attention network outputs a soft gating mask h_t of the same dimensionality as the input, with values ranging in [0, 1]. The input to the actor is an attention-filtered version of the state, s^a_t = h_s(s_t) ⊙ s_t. To encourage a sparse masking function, we found it helpful to train this attention module on both the traditional DDPG loss and an entropy loss:

$$\mathcal{L}_{h_s} = -\mathbb{E}_{s \sim \pi_b}\left[ Q_s(s, \pi_s(s^a)) - \beta\, \mathcal{H}(h_s(s)) \right], \qquad (1)$$

where β is a hyperparameter weighting the additional entropy objective, and π_b is the behaviour policy used to obtain experience (in this case from a shared replay buffer). The actor and critic networks π_s and Q_s are trained with the symmetric DDPG actor and Bellman error losses, respectively. Within A_T, the state attention obtained in A_s is converted to a corresponding observation attention T that acts as a self-supervised target for the observation-based agent in A_o. This is achieved in a two-step process. First, the state attention h_s(s) is converted into an object attention c, which specifies how task-relevant each object in the scene is. Second, the object attention is converted to observation-space attention by performing a weighted sum over object-specific segmentation maps:

$$c = M \cdot h_s(s) \qquad (2)$$
$$T = \sum_{i=0}^{N-1} c_i \cdot z_i \qquad (3)$$

Here, M ∈ {0, 1}^{N × n_s} (where n_s is the dimensionality of s) is an environment-specific, predefined adjacency matrix that maps the dimensions of s to each corresponding object, and c ∈ [0, 1]^N is then an attention vector over the N objects in the environment; c_i corresponds to the i-th object's attention value. z_i ∈ {0, 1}^{W × H} is the binary segmentation map [2] of the i-th object, separating the object from the rest of the scene, and has the same dimensions as the image observation; z_i assigns a value of 1 to pixels occupied by the i-th object, and 0 elsewhere. T ∈ [0, 1]^{W × H} is the state attention converted to observation-space attention, used as a target for training the observation-attention network h_o. The observation-based module A_o also consists of three networks: Q^{π_o}, π_o, h_o (critic, actor, and attention respectively) with parameters {φ_o, θ_o, ψ_o}. The structure of this module is the same as A_s except that the actor and critic now have asymmetric inputs.
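As a concrete illustration of the A_T conversion in Equations (2) and (3), the sketch below computes the observation-space target T from the state attention. The numpy layout, the clipping guard, and the function name are our assumptions, not the reference code.

```python
import numpy as np

def observation_attention_target(h_state, M, seg_maps):
    # h_state: (n_s,) state attention h_s(s); M: (N, n_s) object adjacency matrix;
    # seg_maps: (N, W, H) binary per-object segmentation maps z_i.
    c = M @ h_state                        # Eq. (2): object attention c
    T = np.tensordot(c, seg_maps, axes=1)  # Eq. (3): T = sum_i c_i * z_i, shape (W, H)
    return np.clip(T, 0.0, 1.0)            # keep T in [0, 1] if objects overlap
```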
The input to the actor is the attention-filtered version of the observation, o^a_t = h_o(o_t) ⊙ o_t [3]. The actor and critic networks π_o and Q_o are trained with the standard asymmetric DDPG actor and Bellman error losses, respectively, defined in Section 2.2. The main difference between A_o and A_s is that the observation-attention network h_o is trained on both the actor loss and an object-weighted mean squared error loss:

$$\mathcal{L}_{h_o} = \mathbb{E}_{o, s \sim \pi_b}\left[ \frac{1}{2} \sum_{ij} \frac{1}{w_{ij}} \big( h_o(o) - T \big)_{ij}^2 \; - \; \nu\, Q_o(s, \pi_o(o^a)) \right], \qquad (4)$$

where the weights w_{ij} correspond to the fraction of the observation o occupied by the object present at o_{i,j,1:3}, and ν represents the relative weighting of the two loss components. The weight terms w ensure that the attention network is invariant to the size of objects during training and does not simply fit to the most predominant object in the scene. Combining the self-supervised attention loss and the RL loss leverages efficient state-space learning unaffected by visual randomisation. During training, experiences are collected evenly from the state- and observation-based agents and stored in a shared replay buffer (similar to Schwab et al. (2019)). This ensures that: 1. both the state-based critic Q_s and the observation-based critic Q_o observe states that would be visited by either of their respective policies; 2. the attention modules h_s and h_o are trained on the same data distribution, which facilitates alignment; 3. highly performing states discovered efficiently by π_s are used to speed up the learning of π_o. Algorithm 1 shows pseudocode for a single-actor implementation of APRiL. In practice, in order to speed up data collection and gradient computation, we parallelise the agents and environments and ensure that data collection from the state- and image-based agents is even.
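A minimal PyTorch sketch of the observation-attention loss in Equation (4) is given below; `h_o`, `pi_o`, and `Q_o` stand for the three networks of A_o, and the tensor shapes and the clamp guard on the weights are illustrative assumptions.

```python
import torch

def observation_attention_loss(h_o, pi_o, Q_o, o, s, T, w, nu=0.1):
    # o: (B, C, W, H) observations; T, w: (B, W, H) attention targets and object weights.
    att = h_o(o)                                   # (B, W, H) soft mask in [0, 1]
    w_safe = w.clamp(min=1e-6)                     # guard against empty-object pixels
    mse = (0.5 * (att - T) ** 2 / w_safe).sum(dim=(1, 2)).mean()
    o_filtered = att.unsqueeze(1) * o              # attention-filtered observation o^a
    rl_term = Q_o(s, pi_o(o_filtered)).mean()
    return mse - nu * rl_term                      # Eq. (4)
```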
The topic addressed by the paper is domain adaptation and transfer learning in the context of deep reinforcement learning, in particular the "sim2real" problem, where a policy is learned in simulation and should be transferred to a physical agent in a real-world scenario. The work builds on the existing "asymmetric DDPG" formulation (Pinto et al., 2018), which exploits the fact that full states are sometimes available in simulated environments but not during deployment. In Pinto et al., this is addressed by learning an actor that takes observations as input, and a critic which has access to the state.
SP:ada3d3555f409cc84a060f81d2e4934459fa731f
Filling the Soap Bubbles: Efficient Black-Box Adversarial Certification with Non-Gaussian Smoothing
1 INTRODUCTION. Deep neural networks have achieved state-of-the-art performance on many tasks such as image classification (He et al., 2016; Lu et al., 2018) and language modeling (Devlin et al., 2019). Nonetheless, modern deep learning models have been shown to be highly sensitive to small, adversarially crafted perturbations of their inputs (Goodfellow et al., 2015), meaning that human-imperceptible changes to an input can cause the model to make dramatically different predictions. Although many robust training algorithms have been developed to overcome adversarial attacks, most heuristically developed methods are eventually shown to be broken by more powerful adversaries (e.g., Athalye et al., 2018; Madry et al., 2018; Zhang et al., 2019; Wang et al., 2019). This creates an urgent demand for robust classifiers with provable worst-case guarantees. One promising approach to certifiable robustness is the recent randomized smoothing method (e.g., Cohen et al., 2019; Salman et al., 2019; Lee et al., 2019; Li et al., 2019; Lecuyer et al., 2018), which constructs smoothed classifiers with certifiable robustness by injecting noise into the inputs. Compared with the more traditional verification approaches (e.g., Wong & Kolter, 2017; Jordan et al., 2019; Dvijotham et al., 2018) that exploit special structures of the neural networks (such as the properties of ReLU), randomized smoothing methods work more flexibly on general black-box classifiers and have been shown to be more scalable and to provide tighter bounds on challenging datasets such as ImageNet (Deng et al., 2009). However, the existing randomized smoothing methods only work against the ℓ2 attack, in which the perturbations are constrained to an ℓ2 ball of a certain radius. A stronger type of attack, such as the ℓ∞ attack, is much more challenging to defend against and verify due to the larger set of perturbations, but is more relevant in practice. In addition, all existing randomized smoothing methods use Gaussian noise for smoothing. Although this appears to be a natural choice, one of our key observations is that the Gaussian distribution is in fact a rather sub-optimal choice in high-dimensional spaces, even for the ℓ2 attack. This is due to a counter-intuitive phenomenon in high-dimensional spaces (Vershynin, 2018): almost all of the probability mass of the standard Gaussian distribution in d dimensions concentrates around the sphere of radius √d (hence the "soap bubble" in the title), instead of around the center point (which corresponds to the original input). As a result, the variance of the Gaussian noise needs to be sufficiently small to yield a good approximation to the original classifier (by squeezing the "soap bubble" towards the center point), which, however, makes verification difficult due to the small noise. Further, for the more challenging ℓ∞ attack, Gaussian smoothing provably degenerates in high dimensions. Our contribution. We propose a general framework for adversarial certification using non-Gaussian smoothing noise, based on a new perspective from variational optimization. Our framework re-derives the method of Cohen et al. (2019) as a special case, and is applicable to more general families of non-Gaussian smoothing distributions and to more types of attacks beyond the ℓ2 norm. Importantly, our new framework reveals a fundamental trade-off between accuracy and robustness that guides better choices of smoothing distributions.
Leveraging this insight, we develop two new families of distributions that give better certification results for ℓ2 and ℓ∞ attacks, respectively. Efficient computational approaches are developed to make our method practical. Empirical results show that our new framework and smoothing distributions significantly outperform existing approaches for both ℓ2 and ℓ∞ attacks on challenging datasets such as CIFAR-10 and ImageNet.

2 RELATED WORKS. Empirical Defenses. Since Szegedy et al. (2013) and Goodfellow et al. (2015), many works have focused on crafting small perturbations δ under certain constraints, e.g. within an ℓp norm ball, to attack a neural network. Adversarial training (Madry et al., 2018) and its variants (Kannan et al., 2018; Zhang & Wang, 2019; Zhai et al., 2019) are the most successful defense methods to date; the network is forced to solve a mini-max game between the defender and the attacker, with adversarial examples serving as data augmentation. However, these empirical defense methods are still easily broken and cannot provide provable defenses. Certified Defenses. Unlike empirical defense methods, a classifier that can guarantee a constant classification within a local region is called certifiably robust. Exact certification methods provide the minimal perturbation that leads to a different classification result. This line of work focuses on deep neural networks with ReLU-like activations, which make the classifier a piece-wise linear function; this enables the use of satisfiability modulo theories (Carlini et al., 2017; Ehlers, 2017) or mixed integer linear programming (Cheng et al., 2017; Dutta et al., 2018). Sufficient certification methods take a conservative approach and try to bound the Lipschitz constant or other properties of the network (Jordan et al., 2019; Wong & Kolter, 2017; Raghunathan et al., 2018; Zhang et al., 2018). However, these certification strategies share a drawback: they are not feasible at large scale, e.g. on sufficiently large practical networks and datasets. Randomized Smoothing. To mitigate this limitation of previous certifiable defenses, improving network robustness via randomness has recently been explored (Xie et al., 2018; Liu et al., 2018). In the certification community, Lecuyer et al. (2018) first introduced randomization using techniques from differential privacy. Li et al. (2019) improved this work with a bound based on Rényi divergence. Subsequently, Cohen et al. (2019) first provided a tight bound for arbitrary Gaussian-smoothed classifiers based on earlier theorems of Li & Kuelbs (1998). Salman et al. (2019) combined empirical and certified robustness by applying adversarial training to randomized smoothed classifiers, achieving higher certified accuracy. Lee et al. (2019) focused on the ℓ0 norm perturbation setting and proposed a discrete smoothing distribution that beats the Gaussian baseline. Similar to Lee et al. (2019), we also focus on finding suitable distributions to trade off accuracy and robustness for different types of adversarial attacks, such as ℓ2 and ℓ∞.

3 BLACK-BOX CERTIFICATION WITH VARIATIONAL OPTIMIZATION. We start by introducing the background of the adversarial certification problem and the randomized smoothing method.
We then introduce in Section 3.1 our general framework for adversarial certification using non-Gaussian smoothing noise, from a new variational optimization perspective. Our framework includes the method of Cohen et al. (2019) as a special case, and reveals a critical trade-off between accuracy and robustness that provides important guidance for better choices of smoothing distributions in Section 4. Adversarial Certification. For simplicity, we consider binary classification: predicting binary labels y ∈ {0, 1} from feature vectors x ∈ R^d. The extension to the multi-class case is straightforward and is discussed in Appendix D. Assume f♯: R^d → [0, 1] is a pre-trained binary classifier (♯ indicates that the classifier is given), which maps from the feature space R^d either to a class probability in the interval [0, 1] or to a binary label in {0, 1}. In the robustness certification problem, a testing data point x0 ∈ R^d is given, and one is asked to verify whether the classifier outputs the same prediction when x0 is perturbed arbitrarily within B, a given neighborhood of x0. Specifically, let B be a set of possible perturbation vectors, e.g., B = {δ ∈ R^d : ‖δ‖_p ≤ r} for the ℓp norm with radius r. If the classifier predicts y = 1 on x0, i.e. f♯(x0) > 1/2, we want to verify whether f♯(x0 + δ) > 1/2 holds for every δ ∈ B. In this paper, we consider two types of attacks: the ℓ2 attack B_{ℓ2,r} := {δ : ‖δ‖2 ≤ r}, and the ℓ∞ attack B_{ℓ∞,r} := {δ : ‖δ‖∞ ≤ r}. More general ℓp attacks can also be handled by our framework but are left as future work. Black-box Certification with Randomness. Directly verifying f♯ relies heavily on the smoothness of f♯, which has been explored in a series of recent works (Lecuyer et al., 2018; Wong & Kolter, 2017). These methods typically depend on special structural properties (e.g., the use of ReLU units) of f♯, and thus cannot serve as general-purpose algorithms. We are instead interested in black-box verification methods that work for arbitrary classifiers. One approach, explored in recent works (Cohen et al., 2019; Lee et al., 2019), is to replace f♯ with a smoothed classifier obtained by convolving with Gaussian noise, and to verify the smoothed classifier. Specifically, assume π0 is a smoothing distribution with zero mean and bounded variance, e.g., π0 = N(0, σ²). The randomized smoothed classifier is defined by f♯_{π0}(x0) := E_{z∼π0}[f♯(x0 + z)], which returns the averaged probability at x0 + z under the perturbation z ∼ π0. Assume we replace the original classifier with f♯_{π0}; the goal then becomes verifying f♯_{π0} using its inherent smoothness. Specifically, if f♯_{π0}(x0) > 1/2, we want to verify that f♯_{π0}(x0 + δ) > 1/2 for every δ ∈ B, that is,

$$\min_{\delta \in B} f^\sharp_{\pi_0}(x_0 + \delta) = \min_{\delta \in B} \mathbb{E}_{z \sim \pi_0}\left[ f^\sharp(x_0 + z + \delta) \right] > 1/2. \qquad (1)$$

In this case, it is sufficient to obtain a guaranteed lower bound on min_{δ∈B} f♯_{π0}(x0 + δ) and check whether it is larger than 1/2. When π0 is Gaussian N(0, σ²) and the attack is ℓ2, this problem was studied in Cohen et al. (2019), which shows the lower bound

$$\min_{\delta \in B} \mathbb{E}_{z \sim \pi_0}\left[ f^\sharp(x_0 + z + \delta) \right] \ge \Phi\Big( \Phi^{-1}\big( f^\sharp_{\pi_0}(x_0) \big) - r/\sigma \Big), \qquad (2)$$

where Φ(·) is the cumulative distribution function (CDF) of the standard Gaussian distribution, and Φ^{-1}(·) represents its inverse.
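The certified bound in Equation (2) reduces to a one-line computation once an estimate of f♯_{π0}(x0) is available; a sketch (our own, using scipy's Gaussian CDF) follows.

```python
from scipy.stats import norm

def certified_lower_bound(p_smooth, r, sigma):
    # Eq. (2): lower bound on min_{||delta||_2 <= r} f_pi0(x0 + delta),
    # given p_smooth = f_pi0(x0). The radius r is certified if this exceeds 1/2.
    return norm.cdf(norm.ppf(p_smooth) - r / sigma)

print(certified_lower_bound(0.9, r=0.3, sigma=0.5))  # ~0.75 > 0.5: certified
```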
The proof of this result in Cohen et al. (2019) uses the Neyman-Pearson lemma (Li & Kuelbs, 1998), while in the following section we provide an alternative derivation using variational optimization. Note that the bound in Equation (2) is tractable since it only requires evaluating the smoothed classifier f♯_{π0}(x0) at the original image x0, instead of solving the difficult adversarial optimization over the perturbation in Equation (1). In practice, f♯_{π0}(x0) is approximated by Monte Carlo estimation with a non-asymptotic confidence bound.
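The Monte Carlo step mentioned above can be sketched as follows; the text only states that a non-asymptotic confidence bound is used, so the particular Hoeffding bound here is our assumption.

```python
import numpy as np

def smoothed_value_lower_bound(f, x0, sigma, n=10_000, alpha=0.001):
    # Estimate f_pi0(x0) = E_{z ~ N(0, sigma^2 I)}[f(x0 + z)] by sampling,
    # then subtract a one-sided Hoeffding term so the bound holds w.p. 1 - alpha.
    z = np.random.randn(n, x0.size) * sigma
    vals = np.array([f(x0 + zi) for zi in z])        # f maps inputs to [0, 1]
    return vals.mean() - np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
```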
This paper investigates the choice of noise distributions for smoothing an arbitrary classifier to defend against adversarial attacks. The paper focuses on the two major adversaries: \ell_2 adversaries and \ell_\infty adversaries. Theorem 1 quantifies the tradeoff in the choice of smoothing distribution between (1) keeping clean accuracy close to the original classifier and (2) promoting the smoothness of the smoothed classifier (and hence adversarial accuracy). For the \ell_2 adversary, the paper argues that the Gaussian distribution is not the right choice, because the distribution is concentrated on the spherical shell around x. Instead, the authors propose using a new family of distributions, with the norm square (p_{|z|_2^2}) following the scaled \chi^2 distribution with d-k degrees of freedom (Eq. 8). This allows an extra degree of freedom, and setting k=0 recovers the Gaussian distribution. For \ell_\infty perturbations, the paper suggests another family of distributions combining the \ell_2 and \ell_\infty norms (Eq. 9), and argues that it outperforms the natural choice of \ell_\infty norm-based distributions (Eq. 10).
This paper presents a new method for adversarial certification using non-Gaussian noise. A new framework for certification is proposed, which allows the use of distributions other than the Gaussian noise of previous work. From this framework, a trade-off between accuracy and robustness is identified, and new distributions are proposed that achieve a better trade-off than Gaussian noise. Using these new distributions, the authors re-certify models obtained in previous work.
SP:7733bd3495e737a6664928d1d5b01b5485bcce89
Dynamics-Aware Unsupervised Discovery of Skills
1 INTRODUCTION. Deep reinforcement learning (RL) enables autonomous learning of diverse and complex tasks with rich sensory inputs, temporally extended goals, and challenging dynamics, such as discrete game-playing domains (Mnih et al., 2013; Silver et al., 2016), and continuous control domains including locomotion (Schulman et al., 2015; Heess et al., 2017) and manipulation (Rajeswaran et al., 2017; Kalashnikov et al., 2018; Gu et al., 2017). Most deep RL approaches learn a Q-function or a policy that is directly optimized for the training task, which limits generalization to new scenarios. In contrast, model-based RL (MBRL) methods (Li & Todorov, 2004; Deisenroth & Rasmussen, 2011; Watter et al., 2015) can acquire dynamics models that may be utilized to perform unseen tasks at test time. [*Work done as part of the Google AI Residency program.] While this capability has been demonstrated in some recent works (Levine et al., 2016; Nagabandi et al., 2018; Chua et al., 2018b; Kurutach et al., 2018; Ha & Schmidhuber, 2018), learning an accurate global model that works for all state-action pairs can be exceedingly challenging, especially for high-dimensional systems with complex and discontinuous dynamics. The problem is further exacerbated because the learned global model has limited generalization outside of the state distribution it was trained on, and exploring the whole state space is generally infeasible. Can we retain the flexibility of model-based RL, while using model-free RL to acquire proficient low-level behaviors under complex dynamics? While learning a global dynamics model that captures all the different behaviors over the entire state space can be extremely challenging, learning a model for a specific behavior that acts only in a small part of the state space can be much easier. For example, consider learning a model of the dynamics of all gaits of a quadruped versus a model that only works for a specific gait. If we can learn many such behaviors and their corresponding dynamics, we can leverage model-predictive control to plan in the behavior space, as opposed to planning in the action space. The question then becomes: how do we acquire such behaviors, considering that behaviors could be random and unpredictable? To this end, we propose Dynamics-Aware Discovery of Skills (DADS), an unsupervised RL framework for learning low-level skills using model-free RL, with the explicit aim of making model-based control easy. Skills obtained using DADS are directly optimized for predictability, providing a better representation on top of which predictive models can be learned. Crucially, the skills do not require any supervision to learn, and are acquired entirely through autonomous exploration. This means that the repertoire of skills and their predictive models are learned before the agent has been tasked with any goal or reward function. When a task is provided at test time, the agent utilizes the previously learned skills and model to immediately perform the task without any further training. The key contribution of our work is an unsupervised reinforcement learning algorithm, DADS, grounded in mutual-information-based exploration. We demonstrate that our objective can embed learned primitives in continuous spaces, which allows us to learn a large, diverse set of skills.
Crucially, our algorithm also learns to model the dynamics of the skills, which enables the use of model-based planning algorithms for downstream tasks. We adapt conventional model predictive control algorithms to plan in the space of primitives, and demonstrate that we can compose the learned primitives to solve downstream tasks without any additional training.

2 PRELIMINARIES. Mutual information can be used as an objective to encourage exploration in reinforcement learning (Houthooft et al., 2016; Mohamed & Rezende, 2015). By its definition, I(X; Y) = H(X) − H(X | Y), so maximizing the mutual information I with respect to Y amounts to maximizing the entropy H(X) of X while minimizing the conditional entropy H(X | Y). In the context of RL, X is usually a function of the state and Y a function of the actions. Maximizing this objective encourages the state entropy to be high, making the underlying policy exploratory. Recently, multiple works (Eysenbach et al., 2018; Gregor et al., 2016; Achiam et al., 2018) have applied this idea to learn diverse skills which maximally cover the state space. To leverage planning-based control, MBRL estimates the true dynamics of the environment by learning a model p̂(s′ | s, a). This allows it to predict a trajectory of states τ̂_H = (s_t, ŝ_{t+1}, ..., ŝ_{t+H}) resulting from a sequence of actions without any additional interaction with the environment. While model-based RL methods have been demonstrated to be sample-efficient compared to their model-free counterparts, learning an effective model for the whole state space is challenging. An open problem in model-based RL is to incorporate temporal abstraction into model-based control, to enable high-level planning and move away from planning at the granular level of actions. These seemingly unrelated ideas can be combined into a single optimization scheme, where we first discover skills (and their models) without any extrinsic reward and then compose these skills to optimize for the task defined at test time using model-based planning. At train time, we assume a Markov Decision Process (MDP) M1 ≡ (S, A, p). The state space S and action space A are assumed to be continuous, with A bounded. We assume the transition dynamics p to be stochastic, such that p: S × A × S → [0, ∞). We learn a skill-conditioned policy π(a | s, z), where the skill z belongs to the space Z, detailed in Section 3. We assume that skills are sampled from a prior p(z) over Z. We simultaneously learn a skill-conditioned transition function q(s′ | s, z), termed skill-dynamics, which predicts the transition to the next state s′ from the current state s for the skill z under the given dynamics p. At test time, we assume an MDP M2 ≡ (S, A, p, r), where S, A, p match those defined in M1, and the reward function r: S × A → (−∞, ∞). We plan in Z using q(s′ | s, z) to compose the learned skills z so as to optimize r in M2, as detailed in Section 4.
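The identity I(X; Y) = H(X) − H(X | Y) used throughout this section can be verified numerically on any discrete joint distribution; the toy table below is our own example.

```python
import numpy as np

p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])              # an arbitrary joint p(x, y)
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
H = lambda p: -(p * np.log(p)).sum()         # entropy of a (possibly joint) table
H_x_given_y = H(p_xy) - H(p_y)               # chain rule: H(X|Y) = H(X,Y) - H(Y)
I = (p_xy * np.log(p_xy / np.outer(p_x, p_y))).sum()
print(np.isclose(I, H(p_x) - H_x_given_y))   # True: I(X;Y) = H(X) - H(X|Y)
```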
3 DYNAMICS-AWARE DISCOVERY OF SKILLS (DADS). We use the information-theoretic paradigm of mutual information to obtain our unsupervised skill discovery algorithm. In particular, we propose to maximize the mutual information between the next state s′ and the current skill z, conditioned on the current state s:

$$I(s'; z \mid s) = \mathcal{H}(z \mid s) - \mathcal{H}(z \mid s', s) \qquad (1)$$
$$I(s'; z \mid s) = \mathcal{H}(s' \mid s) - \mathcal{H}(s' \mid s, z) \qquad (2)$$

The mutual information in Equation 1 quantifies how much can be known about s′ given z and s, or symmetrically, about z given the transition from s to s′. From Equation 2, maximizing this objective corresponds to maximizing the diversity of transitions produced in the environment, denoted by the entropy H(s′ | s), while making z informative about the next state s′ by minimizing the entropy H(s′ | s, z). Intuitively, skills z can be interpreted as abstracted action sequences which are identifiable by the transitions generated in the environment (and not just by the current state). Thus, optimizing this mutual information can be understood as encoding a diverse set of skills in the latent space Z, while making the transitions for a given z ∈ Z predictable. We use the entropy decomposition in Equation 2 to connect this objective with model-based control. We want to optimize our skill-conditioned controller π(a | s, z) such that the latent space z ∼ p(z) is maximally informative about the transitions s → s′. Using the definition of conditional mutual information, we can rewrite Equation 2 as:

$$I(s'; z \mid s) = \int p(z, s, s') \log \frac{p(s' \mid s, z)}{p(s' \mid s)} \, ds'\, ds\, dz \qquad (3)$$

We assume the following generative model: p(z, s, s′) = p(z) p(s | z) p(s′ | s, z), where p(z) is a user-specified prior over Z, p(s | z) denotes the stationary state distribution induced by π(a | s, z) for a skill z, and p(s′ | s, z) denotes the transition distribution under skill z. Note that p(s′ | s, z) = ∫ p(s′ | s, a) π(a | s, z) da is intractable to compute because the underlying dynamics are unknown. However, we can variationally lower bound the objective as follows:

$$I(s'; z \mid s) = \mathbb{E}_{z,s,s' \sim p}\!\left[\log \frac{p(s' \mid s, z)}{p(s' \mid s)}\right] = \mathbb{E}_{z,s,s' \sim p}\!\left[\log \frac{q_\phi(s' \mid s, z)}{p(s' \mid s)}\right] + \mathbb{E}_{s,z \sim p}\!\left[D_{\mathrm{KL}}\big(p(s' \mid s, z)\,\|\,q_\phi(s' \mid s, z)\big)\right] \ge \mathbb{E}_{z,s,s' \sim p}\!\left[\log \frac{q_\phi(s' \mid s, z)}{p(s' \mid s)}\right] \qquad (4)$$

where we have used the non-negativity of the KL divergence, D_KL ≥ 0. Note that the skill-dynamics q_φ represent the variational approximation of the transition function p(s′ | s, z), which enables the model-based control described in Section 4. Equation 4 suggests an alternating optimization between q_φ and π, summarized in Algorithm 1. In every iteration: (Tighten variational lower bound) We minimize D_KL(p(s′ | s, z) || q_φ(s′ | s, z)) with respect to the parameters φ on z, s ∼ p to tighten the lower bound. For general function approximators like neural networks, the gradient for φ is:

$$\nabla_\phi \mathbb{E}_{s,z}\!\left[D_{\mathrm{KL}}\big(p(s' \mid s, z)\,\|\,q_\phi(s' \mid s, z)\big)\right] = \nabla_\phi \mathbb{E}_{z,s,s'}\!\left[\log \frac{p(s' \mid s, z)}{q_\phi(s' \mid s, z)}\right] = -\mathbb{E}_{z,s,s'}\!\left[\nabla_\phi \log q_\phi(s' \mid s, z)\right] \qquad (5)$$

which corresponds to maximizing the likelihood of the samples from p under q_φ.
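In practice, the minimization in Equation (5) is a standard maximum-likelihood update; a minimal PyTorch sketch is shown below, where `q_phi` is assumed to return a torch.distributions object over next states (our assumption, not the reference code).

```python
import torch

def skill_dynamics_step(q_phi, optimizer, s, z, s_next):
    dist = q_phi(s, z)                    # e.g. a Gaussian over s' given (s, z)
    loss = -dist.log_prob(s_next).mean()  # Eq. (5): maximize log q_phi(s'|s,z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```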
(Maximize approximate lower bound) After fitting q_φ, we can optimize π to maximize E_{z,s,s′}[ log q_φ(s′ | s, z) − log p(s′ | s) ]. Note that this is a reinforcement-learning-style optimization with reward function log q_φ(s′ | s, z) − log p(s′ | s). However, log p(s′ | s) is intractable to compute, so we approximate the reward function for π:

$$r_z(s, a, s') = \log \frac{q_\phi(s' \mid s, z)}{\sum_{i=1}^{L} q_\phi(s' \mid s, z_i)} + \log L, \qquad z_i \sim p(z). \qquad (6)$$

The approximation is motivated as follows: p(s′ | s) = ∫ p(s′ | s, z) p(z | s) dz ≈ ∫ q_φ(s′ | s, z) p(z) dz ≈ (1/L) Σ_{i=1}^{L} q_φ(s′ | s, z_i) for z_i ∼ p(z), where L denotes the number of samples from the prior. We use the marginal of the variational approximation q_φ over the prior p(z) to approximate the marginal distribution of transitions. We discuss this approximation in Appendix C. Note that the final reward function r_z encourages the policy π to produce transitions that are (a) predictable under q_φ (predictability) and (b) different from the transitions produced under z_i ∼ p(z) (diversity). To generate samples from p(z, s, s′), we use rollouts of the current policy π for multiple samples z ∼ p(z) in an episodic setting with a fixed horizon T. We also introduce entropy regularization for π(a | s, z), which encourages the policy to discover action sequences with similar state transitions that cluster under the same skill z, making the policy robust while also encouraging exploration (Haarnoja et al., 2018a). The use of entropy regularization can be justified from an information-bottleneck perspective, as discussed for the Information Maximization algorithm in Mohamed & Rezende (2015). This is discussed even more extensively from the graphical-model perspective in Appendix B, which connects unsupervised skill discovery with the information-bottleneck literature, while also revealing the temporal nature of the skills z. Details of the implementation and hyperparameters are discussed in Appendix A.
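The intrinsic reward of Equation (6) can be computed stably in log space; in this sketch, `log_q` (returning log q_φ(s′ | s, z)) and `sample_prior` are hypothetical helpers we introduce for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def dads_reward(log_q, sample_prior, s, z, s_next, L=100):
    z_alt = sample_prior(L)                                  # z_i ~ p(z)
    alt_logs = np.array([log_q(s, zi, s_next) for zi in z_alt])
    # Eq. (6): log q(s'|s,z) - log sum_i q(s'|s,z_i) + log L
    return log_q(s, z, s_next) - logsumexp(alt_logs) + np.log(L)
```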
This paper proposes a novel approach to learning a continuous set of skills (where a skill is associated with a latent vector that the skill policy network takes as an extra input) by pure unsupervised exploration, using as intrinsic reward a proxy for the mutual information between next states and the skill (given the previous state). These skills can be used in model-based planning (model-predictive control) with zero 'supervised' training data (for which rewards are given), but using calls to the reward function to evaluate candidate sequences of skills and actions. The proposed approach is convincingly compared in several ways to both previous model-based approaches and model-free approaches.
This paper introduces an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), for learning low-level "skills" that can be leveraged for model-predictive control. The skills are learned by maximizing the mutual information between the next state s' and the current skill z, conditioned on the current state s. Maximizing this objective corresponds to maximizing the diversity of transitions produced in the environment, while making the skill z informative about the next state s'. The idea is that this objective leads to learning a diverse set of skills that are predictive of the environment. The skills z correspond to sets of action sequences, represented by a distribution \pi(a|s,z). Because the above objective is intractable to compute (it relies on the true dynamics p(s'|s,a)), it is variationally lower bounded using the approximate dynamics q_{\phi}(s'|s,z), which represent the transition dynamics when using a certain skill; this variational lower bound is optimized to produce the optimal q_{\phi}(s'|s,z) and \pi(a|s,z).
SP:cb8f98d674ac5fdafd3ff738a7d0027f6c4a19ad
A GOODNESS OF FIT MEASURE FOR GENERATIVE NETWORKS
1 INTRODUCTION AND RELATED WORK. Generative adversarial networks (Goodfellow et al., 2014) are a specific type of generative model that has shown impressive performance lately. The main idea is that two networks compete against each other: a generator network that generates images, and a discriminator network that tries to distinguish between real and fake images. These models are useful because they can generate very realistic images that are not in the training set. Throughout the rest of the paper, we use GANs as a specific class of models to study; however, the goodness of fit measure discussed in Section 3.2 and its applications extend to other generative networks such as Variational Autoencoders. Some GANs that appear to be successful in practice cannot actually reproduce the training set, as we will see. Other generative models, such as Generative Latent Optimization (GLO) (Bojanowski et al., 2017) and Implicit Maximum Likelihood Estimation (IMLE) (Li and Malik, 2018), attempt to memorize the training data as part of the learning algorithm. These methods are not as successful as GANs at producing realistic images. We believe that the reason for this difference in performance is a lack of overparameterization in GANs, GLO, and IMLE. Our solution starts with measuring how well a generative model can generate the training data. We explain in Section 3.2 that our goodness of fit measure F(G) is zero if we are able to perfectly generate the training data. If we cannot generate the training data, then F(G) represents how far away we are from generating our training set, in an average total-least-squares sense. We use this goodness of fit measure to evaluate different models and training settings, as well as to study the evolution of the approximation error through training. Ideally, we would like to overparameterize GANs in order to increase their capacity and reduce F(G). Recently, it was shown that overparameterization in classifiers and autoencoders leads to better performance (Radhakrishnan et al., 2019; Belkin et al., 2018). Another reason to overparameterize is that we observed, while calculating F, that our models actually use the full potential of the latent distribution on z to generate different images. In other words, suppose that we train a GAN with z ∼ N(0, I); then we observe that the optimal z's corresponding to the closest generated images from the training set are also distributed as N(0, I). That is, once trained, the latent distribution fixed a priori becomes the optimal one minimizing the approximation error. Despite the above findings, increasing the complexity of the generator is very difficult because the training algorithms for GANs require careful hyper-parameter settings for convergence (Rege and Monteleoni, 2019). As such, we explore two alternatives that do not impact training stability. First, increasing the dimension of the latent space, which is currently set to 100 across models and datasets. We demonstrate that this solution indeed reduces the approximation error. Second, we consider a mixture-of-GANs setting. That is, we train K different GANs on subsets of the data of size approximately N/K, for total data size N. Note that training a mixture of GANs has been done in practice (Hoang et al., 2017); we thus quantify the approximation error reduction that can be obtained with this solution.
Hence , if our original GAN has P parameters , the mixture has KP parameters in total , and each GAN is trained on N/K data points . The " effective " parameter-to-data ratio of the mixture is therefore $P'/N' = KP/(N/K) = K^2 P/N$ , so dividing the data into K = 10 subsets yields an effective overparameterization of 100-fold . We first demonstrate that a single GAN trained on a smaller dataset has a smaller approximation error . In particular , we also find that how the dataset is subsampled , at random or via a clustering-based partitioning , matters for performance . We then build on these findings to train the mixture of GANs on a K-means-partitioned dataset and demonstrate an important reduction in the approximation error . We summarize our contributions , which apply to arbitrary generative models such as GANs and VAEs , as follows :
• We provide a novel goodness of fit measure for generative networks and show how it defines necessary conditions for generative networks to be optimal ( Sec . 3.1 ) . We also relate the metric to mode collapse and provide implementation details on computing it efficiently ( Sec . 3.2 ) . Finally , we compare our metric to standard GAN metrics such as the Fréchet Inception Distance ( Sec . 3.3 ) .
• We demonstrate how our goodness of fit metric yields novel insights into GANs . We show that DCGAN and WGAN do not memorize and have very different behavior with respect to overfitting ( Sec . 4.1 ) ; in particular , DCGAN matches WGAN's performance if early stopping is performed . We then show the impact of the architecture and of residual connections ( Sec . 4.2 ) . Finally , we study the latent space distribution and demonstrate that the optimal latent distribution minimizing the approximation error of a trained GAN matches the distribution used for training , highlighting that current approximation errors are due to underparameterized GANs ( Sec . 4.3 ) .
• We provide two solutions that reduce the approximation error without altering training stability ( Sec . 5 ) . First , we propose increasing the latent space dimension ( Sec . 5.1 ) . Then we study how dataset subsampling also helps reduce the approximation error ( Sec . 5.2 ) , which motivates the use of a mixture of GANs ( Sec . 5.3 ) .
2 BACKGROUND . In this section we briefly review Generative Adversarial Networks ( GANs ) and Generative Latent Optimization ( GLO ) ( Bojanowski et al. , 2017 ) , another generative model . Finally , we describe how one can optimize over the latent space of a generator network to obtain a desired generated sample . We stress that all our developments apply to arbitrary black/white-box generative networks , even though we focus on GANs in this paper .
Generative Adversarial Networks . GANs are generative neural networks trained with an adversarial loss ; the adversarial loss is typically implemented by another neural network . In other words , a GAN consists of two neural networks that compete against each other . The generator network $G : \mathbb{R}^{\ell} \to \mathbb{R}^{p}$ generates p-dimensional images from an $\ell$-dimensional latent space . The discriminator $D : \mathbb{R}^{p} \to (0, 1)$ is a classifier trained to distinguish between the training set and generated images . For a batch of size $N_B$ , the discriminator training loss is
$$\mathcal{L}_D = -\frac{1}{N_B}\sum_{i=1}^{N_B} \log\big(D(x_i)\big) - \frac{1}{N_B}\sum_{j=1}^{N_B} \log\big(1 - D(G(z_j))\big),$$
where $x_i$ is a real image and $G(z_j)$ is a generated image , for $i, j \in \{1, 2, \dots, N_B\}$ .
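As a reference point, here is a minimal PyTorch sketch of this discriminator loss; it assumes that `D` outputs probabilities in (0, 1) and that `G` maps latent codes to images, with the latent dimension of 100 used as an illustrative default.

```python
# Minimal sketch of the discriminator loss L_D above. Assumes D outputs
# probabilities in (0, 1) and G maps latent codes to images.
import torch

def discriminator_loss(D, G, x_real, latent_dim=100):
    n_b = x_real.shape[0]                        # batch size N_B
    z = torch.randn(n_b, latent_dim)             # z_j ~ N(0, I)
    real_term = torch.log(D(x_real)).mean()      # (1/N_B) sum_i log D(x_i)
    # detach() so that this loss only updates the discriminator, not G
    fake_term = torch.log(1.0 - D(G(z).detach())).mean()
    return -(real_term + fake_term)
```

In practice one would use a numerically stable binary cross-entropy implementation; the explicit form above is kept to mirror the equation.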
The generator loss is given by
$$\mathcal{L}_G = \frac{1}{N_B}\sum_{j=1}^{N_B} \log\big(1 - D(G(z_j))\big).$$
Notice that the generator loss does not explicitly use the training data ; the training data enters only indirectly , through the training of the discriminator . In this paper we discuss two popular GANs : DCGAN ( Radford et al. , 2015 ) , which uses the loss above , and WGAN ( Arjovsky et al. , 2017 ) , which uses a slightly different learning algorithm .
Generative latent optimization . In contrast to GANs , GLO is a generative network that does not use an adversarial loss . Instead , GLO attempts to memorize the training data through the loss
$$\mathcal{L}_G = \frac{1}{N_B}\sum_{i=1}^{N_B} \min_{z \in \mathbb{R}^{\ell}} \mathcal{L}\big(G(z), x_i\big),$$
where $\mathcal{L}$ is a loss function . In the original paper , the authors use different loss functions to demonstrate how the resulting models differ .
Latent space optimization . The generator is a mapping from a latent space $\mathbb{R}^{\ell}$ into an image space $\mathbb{R}^{p}$ . In the GLO paper , one aims at finding a specific $z \in \mathbb{R}^{\ell}$ such that the generated sample G ( z ) is close to a target output . In particular , one picks as target a randomly generated GAN image obtained from some target vector $z^*$ ; the target is thus guaranteed to lie in the span of G. One then aims at recovering the z vector that led to the target $G(z^*)$ by
$$\hat{z} = \underset{z \in \mathbb{R}^{\ell}}{\mathrm{argmin}}\; \|G(z) - G(z^*)\|_2^2 . \qquad (1)$$
Since the above optimization problem is non-convex , there is in general no theoretical guarantee of finding a global minimum . Empirically , however , it was shown that this problem is solved essentially 100 % of the time in practice ( Lipton and Tripathi , 2017 ) . We now leverage the above to develop our goodness of fit measure .
3 GOODNESS OF FIT METRIC . In this section we first motivate and define our metric ( Sec . 3.1 ) and provide its approximation ( Sec . 3.2 ) . We demonstrate that our measure reaching zero is a necessary and sufficient condition for avoiding mode collapse with respect to the empirical data distribution . Finally , we contrast it with the current GAN measures , the Inception Score and the Fréchet Inception Distance ( Sec . 3.3 ) .
3.1 METRIC , OPTIMAL GENERATIVE NETWORK , AND MODE COLLAPSE . The generator G is a continuous mapping , since every layer type currently used in deep learning is continuous . The image of the generator is thus
$$\mathrm{Imag}(G) = \{G(z) : z \in \mathbb{R}^{\ell}\}, \qquad (2)$$
with $\ell$ the dimension of the latent space . The quality of the approximation of the true data manifold , denoted X , by G can then be measured by how far X is from the span of G. Since the two quantities to compare are sets , one solution is the standard total least squares metric , defined as
$$d\big(\mathrm{Imag}(G), X\big) = \int \min_{z} \|G(z) - x\| \, dx , \qquad (3)$$
which , in practice with a finite dataset , becomes our proposed measure
$$F(G, X) = \frac{1}{N}\sum_{n=1}^{N} \min_{z} \|G(z) - x_n\|^2 = \frac{1}{N}\sum_{n=1}^{N} m(x_n; G) . \qquad (4)$$
As a result , F is the empirical average least squares distance between the reference points , the observed inputs , and Imag ( G ) . Turning the above argument into a probabilistic setting , we obtain the following motivation . A generative model or network is learned to approximate some target distribution , which is itself observed only through samples , the given observations . It is common to use the likelihood as a measure of fitness for such models , defined as $L(X) = \prod_{n=1}^{N} p(x_n)$ , with p some distribution density .
A necessary condition for maximizing the likelihood is that the samples lie in the support of the distribution , that is , $p(x_n) > 0 \iff x_n \in \mathrm{Support}(p)$ , with $\mathrm{Support}(p) = \{x \in X : p(x) > 0\}$ . In the case of a generative network , the density p ( x ) is not easily accessible ; its support , however , is directly available , as we have
Proposition 1 . $\mathrm{Support}(p) = \mathrm{Imag}(G)$ .
As a result , from a probabilistic fitting point of view , ensuring that all the samples lie in the span of the generator is a necessary condition : if it fails , maximization of the likelihood is prevented . This leads to the following result .
Theorem 1 . The optimal generative model , in terms of distribution approximation , must have F ( G , X ) = 0 .
The above setting also yields the direct and intuitive result that any sample $x \in X$ with m ( x ; G ) = 0 is a sample that can be generated , possibly with very small probability , by the generative network . We conveniently employ the $\ell_2$ distance in our metric and use it throughout the rest of the paper . Notice , however , that if $X \subset \mathrm{Imag}(G)$ , the choice of the distance function is immaterial , because d ( x , y ) = 0 implies x = y for any distance metric d .
Relation to mode collapse . We now relate the value of m ( x_n ; G ) for samples from the training set to mode collapse . We refer to a sample with m ( x_n ; G ) = 0 as a memorized sample and , in general , to F ( G , X ) = 0 as memorization . The lack of memorization in current generative networks is evident when generators experience mode collapse . Mode collapse translates into the generator failing to approximate some parts of the target distribution . Suppose that the data come from a distribution $P_X$ . Then mode collapse happens if there exists x with $P_X(x) > 0$ but $\min_z \|G(z) - x\| > 0$ . In practice we do not have access to $P_X$ , only to the empirical distribution $\hat{P}_X$ , so mode collapse occurs whenever the training data is not memorized . A generative model must therefore memorize the training data in order to avoid mode collapse , and we obtain the following result .
Proposition 2 . A necessary condition for a generative network G to avoid mode collapse is memorization , that is , F ( G , X ) = 0 .
We now demonstrate how one can compute our metric , and in particular m ( x_n ; G ) , efficiently .
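Since the implementation details belong to Sec. 3.2, the following is only a plausible sketch of how m(x_n; G) can be approximated, consistent with the latent space optimization of Eq. (1): gradient descent on z with Adam and random restarts, where the restart count, step count, and learning rate are illustrative assumptions rather than the paper's prescribed values.

```python
# Sketch of computing m(x_n; G) = min_z ||G(z) - x_n||^2 for a trained,
# frozen generator G, via Adam on z with random restarts. Hyper-parameters
# here are illustrative assumptions.
import torch

def m(G, x, latent_dim=100, restarts=8, steps=500, lr=0.05):
    best = float("inf")
    for _ in range(restarts):                 # restarts mitigate local minima
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((G(z) - x) ** 2).sum()    # squared L2 distance to x
            loss.backward()                   # backprop through frozen G into z
            opt.step()
        best = min(best, loss.item())
    return best

def F(G, X, **kwargs):
    # Empirical goodness of fit, Eq. (4): average m(x_n; G) over the data.
    return sum(m(G, x.unsqueeze(0), **kwargs) for x in X) / len(X)
```

With such an estimator, F(G, X) = 0 is reached exactly when every training point is recovered by some latent code, matching the memorization condition of Proposition 2.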
This paper defines a goodness of fit measure F for generative networks that reflects how well a model can generate the training data. F allows mode collapse to be detected: as long as F is strictly positive, mode collapse is present, since parts of the training data have not been memorized. The measure aims to provide an alternative to the Fréchet Inception Distance and the Inception Score, which rely on pretrained neural networks (whereas this new measure does not). It also provides insight into the DCGAN and WGAN networks in that regard, observing for instance that data subsampling helps decrease F, which motivates the use of a mixture of GANs.
This work proposed a new goodness of fit measure for evaluating generative networks, based on how well the network can generate the training data. The measure is zero if the network perfectly recovers the training data and otherwise quantifies how far it is from generating the training set, in an average total least squares sense, where the correspondence between generated data and training samples is constructed through latent space optimization. Using the proposed measure, the authors show an interesting trend in DCGAN training and the impact of residual connections. The authors might want to add some discussion in Section 4.2 regarding why residual connections are detrimental for covering the support. Increasing model complexity, through a larger latent space dimension and through learning mixtures, is proposed as a solution to improve the measure.