paper_name
stringlengths
11
170
text
stringlengths
8.07k
307k
summary
stringlengths
152
6.16k
paper_id
stringlengths
43
43
Spike-based causal inference for weight alignment
1 INTRODUCTION . Any learning system that makes small changes to its parameters will only improve if the changes are correlated to the gradient of the loss function . Given that people and animals can also show clear behavioral improvements on specific tasks ( Shadmehr et al. , 2010 ) , however the brain determines its synaptic updates , on average , the changes in must also correlate with the gradients of some loss function related to the task ( Raman et al. , 2019 ) . As such , the brain may have some way of calculating at least an estimator of gradients . To-date , the bulk of models for how the brain may estimate gradients are framed in terms of setting up a system where there are both bottom-up , feedforward and top-down , feedback connections . The feedback connections are used for propagating activity that can be used to estimate a gradient ( Williams , 1992 ; Lillicrap et al. , 2016 ; Akrout et al. , 2019 ; Roelfsema & Ooyen , 2005 ; Lee et al. , 2015 ; Scellier & Bengio , 2017 ; Sacramento et al. , 2018 ) . In all such models , the gradient estimator is less biased the more the feedback connections mirror the feedforward weights . For example , in the REINFORCE algorithm ( Williams , 1992 ) , and related algorithms like AGREL ( Roelfsema & Ooyen , 2005 ) , learning is optimal when the feedforward and feedback connections are perfectly symmetric , such that for any two neurons i and j the synaptic weight from i to j equals the weight from j to i , e.g . Wji = Wij ( Figure 1 ) . Some algorithms simply assume weight symmetry , such as Equilibrium Propagation ( Scellier & Bengio , 2017 ) . The requirement for synaptic weight symmetry is sometimes referred to as the “ weight transport problem ” , since it seems to mandate that the values of the feedforward synaptic weights are somehow transported into the feedback weights , which is not biologically realistic ( Crick , 1989-01-12 ; Grossberg , 1987 ) . Solving the weight transport problem is crucial to biologically realistic gradient estimation algorithms ( Lillicrap et al. , 2016 ) , and is thus an important topic of study . Several solutions to the weight transport problem have been proposed for biological models , including hard-wired sign symmetry ( Moskovitz et al. , 2018 ) , random fixed feedback weights ( Lillicrap et al. , 2016 ) , and learning to make the feedback weights symmetric ( Lee et al. , 2015 ; Sacramento et al. , 2018 ; Akrout et al. , 2019 ; Kolen & Pollack , 1994 ) . Learning to make the weights symmetric is promising because it is both more biologically feasible than hard-wired sign symmetry ( Moskovitz et al. , 2018 ) and it leads to less bias in the gradient estimator ( and thereby , better training results ) than using fixed random feedback weights ( Bartunov et al. , 2018 ; Akrout et al. , 2019 ) . However , of the current proposals for learning weight symmetry some do not actually work well in practice ( Bartunov et al. , 2018 ) and others still rely on some biologically unrealistic assumptions , including scalar value activation functions ( as opposed to all-or-none spikes ) and separate error feedback pathways with one-to-one matching between processing neurons for the forward pass and error propagation neurons for the backward pass Akrout et al . ( 2019 ) ; Sacramento et al . ( 2018 ) . Interestingly , learning weight symmetry is implicitly a causal inference problem—the feedback weights need to represent the causal influence of the upstream neuron on its downstream partners . As such , we may look to the causal infererence literature to develop better , more biologically realistic algorithms for learning weight symmetry . In econometrics , which focuses on quasi-experiments , researchers have developed various means of estimating causality without the need to actually randomize and control the variables in question Angrist & Pischke ( 2008 ) ; Marinescu et al . ( 2018 ) . Among such quasi-experimental methods , regression discontinuity design ( RDD ) is particularly promising . It uses the discontinuity introduced by a threshold to estimate causal effects . For example , RDD can be used to estimate the causal impact of getting into a particular school ( which is a discontinuous , all-or-none variable ) on later earning power . RDD is also potentially promising for estimating causal impact in biological neural networks , because real neurons communicate with discontinuous , all-or-none spikes . Indeed , it has been shown that the RDD approach can produce unbiased estimators of causal effects in a system of spiking neurons Lansdell & Kording ( 2019 ) . Given that learning weight symmetry is fundamentally a causal estimation problem , we hypothesized that RDD could be used to solve the weight transport problem in biologically realistic , spiking neural networks . Here , we present a learning rule for feedback synaptic weights that is a special case of the RDD algorithm previously developed for spiking neural networks ( Lansdell & Kording , 2019 ) . Our algorithm takes advantage of a neuron ’ s spiking discontinuity to infer the causal effect of its spiking on the activity of downstream neurons . Since this causal effect is proportional to the feedforward synaptic weight between the two neurons , by estimating it , feedback synapses can align their weights to be symmetric with the reciprocal feedforward weights , thereby overcoming the weight transport problem . We demonstrate that this leads to the reduction of a cost function which measures the weight symmetry ( or the lack thereof ) , that it can lead to better weight symmetry in spiking neural networks than other algorithms for weight alignment ( Akrout et al. , 2019 ) and it leads to better learning in deep neural networks in comparison to the use of fixed feedback weights ( Lillicrap et al. , 2016 ) . Altogether , these results demonstrate a novel algorithm for solving the weight transport problem that takes advantage of discontinuous spiking , and which could be used in future models of biologically plausible gradient estimation . 2 RELATED WORK . Previous work has shown that even when feedback weights in a neural network are initialized randomly and remain fixed throughout training , the feedforward weights learn to partially align themselves to the feedback weights , an algorithm known as feedback alignment ( Lillicrap et al. , 2016 ) . While feedback alignment is successful at matching the learning performance of true gradient descent in relatively shallow networks , it does not scale well to deeper networks and performs poorly on difficult computer vision tasks ( Bartunov et al. , 2018 ) . The gap in learning performance between feedback alignment and gradient descent can be overcome if feedback weights are continually updated to match the sign of the reciprocal feedforward weights ( Moskovitz et al. , 2018 ) . Furthermore , learning the feedback weights in order to make them more symmetric to the feedforward weights has been shown to improve learning over feedback alignment ( Akrout et al. , 2019 ) . To understand the underlying dynamics of learning weight symmetry , Kunin et al . ( 2019 ) define the symmetric alignment cost function , RSA , as one possible cost function that , when minimized , leads to weight symmetry : RSA : = ‖W − Y T ‖2F ( 1 ) = ‖W‖2F + ‖Y ‖2F − 2tr ( WY ) where W are feedforward weights and Y are feedback weights . The first two terms are simply weight regularization terms that can be minimized using techniques like weight decay . But , the third term is the critical one for ensuring weight alignment . In this paper we present a biologically plausible method of minimizing the third term . This method is based on the work of Lansdell & Kording ( 2019 ) , who demonstrated that neurons can estimate their causal effect on a global reward signal using the discontinuity introduced by spiking . This is accomplished using RDD , wherein a piecewise linear model is fit around a discontinuity , and the differences in the regression intercepts indicates the causal impact of the discontinuous variable . In Lansdell & Kording ( 2019 ) , neurons learn a piece-wise linear model of a reward signal as a function of their input drive , and estimate the causal effect of spiking by looking at the discontinuity at the spike threshold . Here , we modify this technique to perform causal inference on the effect of spiking on downstream neurons , rather than a reward signal . We leverage this to develop a learning rule for feedback weights that induces weight symmetry and improves training . 3 OUR CONTRIBUTIONS . The primary contributions of this paper are as follows : • We demonstrate that spiking neurons can accurately estimate the causal effect of their spiking on downstream neurons by using a piece-wise linear model of the feedback as a function of the input drive to the neuron . • We present a learning rule for feedback weights that uses the causal effect estimator to encourage weight symmetry . We show that when feedback weights update using this algorithm it minimizes the symmetric alignment cost function , RSA . • We demonstrate that this learning weight symmetry rule improves training and test accuracy over feedback alignment , approaching gradient-descent-level performance on Fashion-MNIST , SVHN , CIFAR-10 and VOC in deeper networks . 4 METHODS . 4.1 GENERAL APPROACH . In this work , we utilize a spiking neural network model for aligning feedforward and feedback weights . However , due to the intense computational demands of spiking neural networks , we only use spikes for the RDD algorithm . We then use the feedback weights learned by the RDD algorithm for training a non-spiking convolutional neural network . We do this because the goal of our work here is to develop an algorithm for aligning feedback weights in spiking networks , not for training feedforward weights in spiking networks on other tasks . Hence , in the interest of computational expediency , we only used spiking neurons when learning to align the weights . Additional details on this procedure are given below . 4.2 RDD FEEDBACK TRAINING PHASE . At the start of every training epoch of a convolutional neural network , we use an RDD feedback weight training phase , during which all fully-connected sets of feedback weights in the network are updated . To perform these updates , we simulate a separate network of leaky integrate-and-fire ( LIF ) neurons . LIF neurons incorporate key elements of real neurons such as voltages , spiking thresholds and refractory periods . Each epoch , we begin by training the feedback weights in the LIF network . These weights are then transferred to the convolutional network , which is used for training the feedforward weights . The new feedforward weights are then transferred to the LIF net , and another feedback training phase with the LIF net starts the next epoch ( Figure 2A ) . During the feedback training phase , the LIF network undergoes a training phase lasting 90 s of simulated time ( 30 s per set of feedback weights ) ( Figure 2B ) . We find that the spiking network used for RDD feedback training and the convolutional neural network are very closely matched in the activity of the units ( Figure S1 ) , which gives us confidence that this approach of using a separate non-spiking network for training the feedforward weights is legitimate . During the feedback training phase , a small subset of neurons in the first layer receive driving input that causes them to spike , while other neurons in this layer receive no input ( see Appendix A.2 ) . The subset of neurons that receive driving input is randomly selected every 100 ms of simulated time . This continues for 30 s in simulated time , after which the same process occurs for the subsequent hidden layers in the network . This protocol enforces sparse , de-correlated firing patterns that improve the causal inference procedure of RDD .
This paper considers the "weight transport problem" which is the problem of ensuring that the feedforward weights $W_{ij}$ is the same as the feedback weights $W_{ji}$ in the spiking NN model of computation. This paper proposes a novel learning method for the feedback weights which depends on accurately estimating the causal effect of any spiking neuron on the other neurons deeper in the network. Additionally, they show that this method also minimizes a natural cost function. They run many experiments on FashionMNIST and CIFAR-10 to validate this and also show that for deeper networks this approaches the accuracy levels of GD-based algorithms.
SP:76a052062e3e4bb707b24a8809c220c8ac1df83a
Spike-based causal inference for weight alignment
1 INTRODUCTION . Any learning system that makes small changes to its parameters will only improve if the changes are correlated to the gradient of the loss function . Given that people and animals can also show clear behavioral improvements on specific tasks ( Shadmehr et al. , 2010 ) , however the brain determines its synaptic updates , on average , the changes in must also correlate with the gradients of some loss function related to the task ( Raman et al. , 2019 ) . As such , the brain may have some way of calculating at least an estimator of gradients . To-date , the bulk of models for how the brain may estimate gradients are framed in terms of setting up a system where there are both bottom-up , feedforward and top-down , feedback connections . The feedback connections are used for propagating activity that can be used to estimate a gradient ( Williams , 1992 ; Lillicrap et al. , 2016 ; Akrout et al. , 2019 ; Roelfsema & Ooyen , 2005 ; Lee et al. , 2015 ; Scellier & Bengio , 2017 ; Sacramento et al. , 2018 ) . In all such models , the gradient estimator is less biased the more the feedback connections mirror the feedforward weights . For example , in the REINFORCE algorithm ( Williams , 1992 ) , and related algorithms like AGREL ( Roelfsema & Ooyen , 2005 ) , learning is optimal when the feedforward and feedback connections are perfectly symmetric , such that for any two neurons i and j the synaptic weight from i to j equals the weight from j to i , e.g . Wji = Wij ( Figure 1 ) . Some algorithms simply assume weight symmetry , such as Equilibrium Propagation ( Scellier & Bengio , 2017 ) . The requirement for synaptic weight symmetry is sometimes referred to as the “ weight transport problem ” , since it seems to mandate that the values of the feedforward synaptic weights are somehow transported into the feedback weights , which is not biologically realistic ( Crick , 1989-01-12 ; Grossberg , 1987 ) . Solving the weight transport problem is crucial to biologically realistic gradient estimation algorithms ( Lillicrap et al. , 2016 ) , and is thus an important topic of study . Several solutions to the weight transport problem have been proposed for biological models , including hard-wired sign symmetry ( Moskovitz et al. , 2018 ) , random fixed feedback weights ( Lillicrap et al. , 2016 ) , and learning to make the feedback weights symmetric ( Lee et al. , 2015 ; Sacramento et al. , 2018 ; Akrout et al. , 2019 ; Kolen & Pollack , 1994 ) . Learning to make the weights symmetric is promising because it is both more biologically feasible than hard-wired sign symmetry ( Moskovitz et al. , 2018 ) and it leads to less bias in the gradient estimator ( and thereby , better training results ) than using fixed random feedback weights ( Bartunov et al. , 2018 ; Akrout et al. , 2019 ) . However , of the current proposals for learning weight symmetry some do not actually work well in practice ( Bartunov et al. , 2018 ) and others still rely on some biologically unrealistic assumptions , including scalar value activation functions ( as opposed to all-or-none spikes ) and separate error feedback pathways with one-to-one matching between processing neurons for the forward pass and error propagation neurons for the backward pass Akrout et al . ( 2019 ) ; Sacramento et al . ( 2018 ) . Interestingly , learning weight symmetry is implicitly a causal inference problem—the feedback weights need to represent the causal influence of the upstream neuron on its downstream partners . As such , we may look to the causal infererence literature to develop better , more biologically realistic algorithms for learning weight symmetry . In econometrics , which focuses on quasi-experiments , researchers have developed various means of estimating causality without the need to actually randomize and control the variables in question Angrist & Pischke ( 2008 ) ; Marinescu et al . ( 2018 ) . Among such quasi-experimental methods , regression discontinuity design ( RDD ) is particularly promising . It uses the discontinuity introduced by a threshold to estimate causal effects . For example , RDD can be used to estimate the causal impact of getting into a particular school ( which is a discontinuous , all-or-none variable ) on later earning power . RDD is also potentially promising for estimating causal impact in biological neural networks , because real neurons communicate with discontinuous , all-or-none spikes . Indeed , it has been shown that the RDD approach can produce unbiased estimators of causal effects in a system of spiking neurons Lansdell & Kording ( 2019 ) . Given that learning weight symmetry is fundamentally a causal estimation problem , we hypothesized that RDD could be used to solve the weight transport problem in biologically realistic , spiking neural networks . Here , we present a learning rule for feedback synaptic weights that is a special case of the RDD algorithm previously developed for spiking neural networks ( Lansdell & Kording , 2019 ) . Our algorithm takes advantage of a neuron ’ s spiking discontinuity to infer the causal effect of its spiking on the activity of downstream neurons . Since this causal effect is proportional to the feedforward synaptic weight between the two neurons , by estimating it , feedback synapses can align their weights to be symmetric with the reciprocal feedforward weights , thereby overcoming the weight transport problem . We demonstrate that this leads to the reduction of a cost function which measures the weight symmetry ( or the lack thereof ) , that it can lead to better weight symmetry in spiking neural networks than other algorithms for weight alignment ( Akrout et al. , 2019 ) and it leads to better learning in deep neural networks in comparison to the use of fixed feedback weights ( Lillicrap et al. , 2016 ) . Altogether , these results demonstrate a novel algorithm for solving the weight transport problem that takes advantage of discontinuous spiking , and which could be used in future models of biologically plausible gradient estimation . 2 RELATED WORK . Previous work has shown that even when feedback weights in a neural network are initialized randomly and remain fixed throughout training , the feedforward weights learn to partially align themselves to the feedback weights , an algorithm known as feedback alignment ( Lillicrap et al. , 2016 ) . While feedback alignment is successful at matching the learning performance of true gradient descent in relatively shallow networks , it does not scale well to deeper networks and performs poorly on difficult computer vision tasks ( Bartunov et al. , 2018 ) . The gap in learning performance between feedback alignment and gradient descent can be overcome if feedback weights are continually updated to match the sign of the reciprocal feedforward weights ( Moskovitz et al. , 2018 ) . Furthermore , learning the feedback weights in order to make them more symmetric to the feedforward weights has been shown to improve learning over feedback alignment ( Akrout et al. , 2019 ) . To understand the underlying dynamics of learning weight symmetry , Kunin et al . ( 2019 ) define the symmetric alignment cost function , RSA , as one possible cost function that , when minimized , leads to weight symmetry : RSA : = ‖W − Y T ‖2F ( 1 ) = ‖W‖2F + ‖Y ‖2F − 2tr ( WY ) where W are feedforward weights and Y are feedback weights . The first two terms are simply weight regularization terms that can be minimized using techniques like weight decay . But , the third term is the critical one for ensuring weight alignment . In this paper we present a biologically plausible method of minimizing the third term . This method is based on the work of Lansdell & Kording ( 2019 ) , who demonstrated that neurons can estimate their causal effect on a global reward signal using the discontinuity introduced by spiking . This is accomplished using RDD , wherein a piecewise linear model is fit around a discontinuity , and the differences in the regression intercepts indicates the causal impact of the discontinuous variable . In Lansdell & Kording ( 2019 ) , neurons learn a piece-wise linear model of a reward signal as a function of their input drive , and estimate the causal effect of spiking by looking at the discontinuity at the spike threshold . Here , we modify this technique to perform causal inference on the effect of spiking on downstream neurons , rather than a reward signal . We leverage this to develop a learning rule for feedback weights that induces weight symmetry and improves training . 3 OUR CONTRIBUTIONS . The primary contributions of this paper are as follows : • We demonstrate that spiking neurons can accurately estimate the causal effect of their spiking on downstream neurons by using a piece-wise linear model of the feedback as a function of the input drive to the neuron . • We present a learning rule for feedback weights that uses the causal effect estimator to encourage weight symmetry . We show that when feedback weights update using this algorithm it minimizes the symmetric alignment cost function , RSA . • We demonstrate that this learning weight symmetry rule improves training and test accuracy over feedback alignment , approaching gradient-descent-level performance on Fashion-MNIST , SVHN , CIFAR-10 and VOC in deeper networks . 4 METHODS . 4.1 GENERAL APPROACH . In this work , we utilize a spiking neural network model for aligning feedforward and feedback weights . However , due to the intense computational demands of spiking neural networks , we only use spikes for the RDD algorithm . We then use the feedback weights learned by the RDD algorithm for training a non-spiking convolutional neural network . We do this because the goal of our work here is to develop an algorithm for aligning feedback weights in spiking networks , not for training feedforward weights in spiking networks on other tasks . Hence , in the interest of computational expediency , we only used spiking neurons when learning to align the weights . Additional details on this procedure are given below . 4.2 RDD FEEDBACK TRAINING PHASE . At the start of every training epoch of a convolutional neural network , we use an RDD feedback weight training phase , during which all fully-connected sets of feedback weights in the network are updated . To perform these updates , we simulate a separate network of leaky integrate-and-fire ( LIF ) neurons . LIF neurons incorporate key elements of real neurons such as voltages , spiking thresholds and refractory periods . Each epoch , we begin by training the feedback weights in the LIF network . These weights are then transferred to the convolutional network , which is used for training the feedforward weights . The new feedforward weights are then transferred to the LIF net , and another feedback training phase with the LIF net starts the next epoch ( Figure 2A ) . During the feedback training phase , the LIF network undergoes a training phase lasting 90 s of simulated time ( 30 s per set of feedback weights ) ( Figure 2B ) . We find that the spiking network used for RDD feedback training and the convolutional neural network are very closely matched in the activity of the units ( Figure S1 ) , which gives us confidence that this approach of using a separate non-spiking network for training the feedforward weights is legitimate . During the feedback training phase , a small subset of neurons in the first layer receive driving input that causes them to spike , while other neurons in this layer receive no input ( see Appendix A.2 ) . The subset of neurons that receive driving input is randomly selected every 100 ms of simulated time . This continues for 30 s in simulated time , after which the same process occurs for the subsequent hidden layers in the network . This protocol enforces sparse , de-correlated firing patterns that improve the causal inference procedure of RDD .
Strong paper in the direction of a more biologically plausible solution for the weight transport problem, where the forward and the backward weights need to be aligned. Earlier work for feedback alignment has included methods such as hard-coding sign symmetry. In this method, the authors show that a piece-wise linear model of the feedback as a function of the input given to a neuron can estimate the causal effect of a spike on downstream neurons. The authors propose a learning rule based on regression discontinuity design (RDD) and show that this leads to stronger alignment of weights (especially in earlier layers) compared to previous methods. The causal effect is measured directly from the discontinuity introduced while spiking - the difference between the outputs of the estimated piece-wise linear model at the point of discontinuity is used as the feedback.
SP:76a052062e3e4bb707b24a8809c220c8ac1df83a
AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion
1 INTRODUCTION . Language is the core of civilization , and speech is the most powerful and natural form of communication . Human voice mimicry has always been considered as one of the most difficult tasks since it involves understanding of the sophisticated human speech production mechanism ( Eriksson & Wretling ( 1997 ) ) and challenging concepts of prosodic transfer ( Gomathi et al . ( 2012 ) ) . In the literature , this is achieved using Voice Conversion ( VC ) technique ( Stylianou ( 2009 ) ) . Recently , VC has gained more attention due to its fascinating real-world applications in privacy and identity protection , military operations , generating new voices for animated and fictional movies , voice repair in medical-domain , voice assistants , etc . Voice Conversion ( VC ) technique converts source speaker ’ s voice in such a way as if it were spoken by the target speaker . This is primarily achieved by modifying spectral and prosodic features while retaining the linguistic information in the given speech signal ( Stylianou et al . ( 1998 ) ) . In addition , Voice cloning is one of the closely related task to VC ( Arik et al . ( 2018 ) ) . However , in this research work we only focus to advance the Voice Conversion . With the emergence of deep learning techniques , VC has become more efficient . Deep learningbased techniques have made remarkable progress in parallel VC . However , it is difficult to get parallel data , and such data needs alignment ( which is a arduous process ) to get better results . Building a VC system from non-parallel data is highly challenging , at the same time valuable for practical application scenarios . Recently , many deep learning-based style transfer algorithms have been applied for non-parallel VC task . Hence , this problem can be formulated as a style transfer problem , where one speaker ’ s style is converted into another while preserving the linguistic content as it is . In particular , Conditional Variational AutoEncoders ( CVAEs ) , Generative Adversarial Networks ( GANs ) ( proposed by Goodfellow et al . ( 2014 ) ) , and its variants have gained significant attention in non-parallel VC . However , it is known that the training task for GAN is hard , and the convergence property of GAN is fragile ( Salimans et al . ( 2016 ) ) . There is no substantial evidence that the gen- erated speech is perceptually good . Moreover , CVAEs alone do not guarantee distribution matching and suffers from the issue of over smoothing of the converted features . Although , there are few GAN-based systems that produced state-of-the-art results for non-parallel VC . Among these algorithms , even fewer can be applied for many-to-many VC tasks . At last , there is the only system available for zero-shot VC proposed by Qian et al . ( 2019 ) . Zero-shot conversion is a technique to convert source speaker ’ s voice into an unseen target speaker ’ s speaker via looking at a few utterances of that speaker . As known , solutions to a challenging problem comes with trade-offs . Despite the results , architectures have become more complex , which is not desirable in real-world scenarios because the quality of algorithms or architectures is also measured by the training time and computational complexity of learning trainable parameters ( Goodfellow et al . ( 2016 ) ) . Motivated by this , we propose computationally less expensive Adaptive GAN ( AdaGAN ) , a new style transfer framework , and a new architectural training procedure that we apply to the GAN-based framework . In AdaGAN , the generator encapsulates Adaptive Instance Normalization ( AdaIN ) for style transfer , and the discriminator is responsible for adversarial training . Recently , StarGAN-VC ( proposed by Kameoka et al . ( 2018 ) ) is a state-of-the-art method among all the GAN-based frameworks for non-parallel many-to-many VC . AdaGAN is also GAN-based framework . Therefore , we compare AdaGAN with StarGAN-VC for non-parallel many-to-many VC in terms of naturalness , speaker similarity , and computational complexity . We observe that AdaGAN yields state-of-the-art results for this with almost 88.6 % less computational complexity . Recently proposed AutoVC ( by Qian et al . ( 2019 ) ) is the only framework for zero-shot VC . Inspired by this , we propose AdaGAN for zero-shot VC as an independent study , which is the first GAN-based framework to perform zeroshot VC . We reported initial results for zero-shot VC using AdaGAN.The main contributions of this work are as follows : • We introduce the concept of latent representation based many-to-many VC using GAN for the first time in literature . • We show that in the latent space content of the speech can be represented as the distribution and the properties of this distribution will represent the speaking style of the speaker . • Although AdaGAN has much lesser computation complexity , AdaGAN shows much better results in terms of naturalness and speaker similarity compared to the baseline . 2 RELATED WORK . Developing a non-parallel VC framework is challenging task because of the problems associated with the training conditions using non-parallel data in deep learning architectures . However , attempts have been made to develop many non-parallel VC frameworks in the past decade . For example , Maximum Likelihood ( ML ) -based approach proposed by Ye & Young ( 2006 ) , speaker adaptation technique by Mouchtaris et al . ( 2006 ) , GMM-based VC method using Maximum a posteriori ( MAP ) adaptation technique by Lee & Wu ( 2006 ) , iterative alignment method by Erro et al . ( 2010 ) , Automatic Speech Recognition ( ASR ) -based method by Xie et al . ( 2016 ) , speaker verification-based method using i-vectors by Kinnunen et al . ( 2017 ) , and many other frameworks ( Chen et al . ( 2014 ) ; Nakashika et al . ( 2014 ) ; Blaauw & Bonada ( 2016 ) ; Hsu et al . ( 2016 ) ; Kaneko & Kameoka ( 2017 ) ; Saito et al . ( 2018a ) ; Sun et al . ( 2015 ) ; Shah et al . ( 2018b ; c ) ; Shah & Patil ( 2018 ) ; Biadsy et al . ( 2019 ) ) . Recently , a method using Conditional Variational Autoencoders ( CVAEs ) ( Kingma & Welling ( 2013 ) ) was proposed for non-parallel VC by ( Hsu et al . ( 2016 ) ; Saito et al . ( 2018a ) ) . Recently , VAE based method for VC was proposed , which also uses AdaIN to transfer the speaking style ( Chou et al . ( 2019 ) ) . One powerful framework that can potentially overcome the weakness of VAEs involves GANs . While GAN-based methods were originally applied for image translation problems , these methods have also been employed with noteworthy success for various speech technology-related applications , we can see via architectures proposed by ( Michelsanti & Tan ( 2017 ) ; Saito et al . ( 2018b ) ; Shah et al . ( 2018a ) ) , and many others . In GANs-based methods , Cycle-consistent Adversarial Network ( CycleGAN ) -VC is one of the state-of-the-art methods in the non-parallel VC task proposed by ( Kaneko & Kameoka ( 2017 ) ) . Among these non-parallel algorithms , a few can produce good results for non-parallel many-tomany VC . Recently , StarGAN-VC ( Kameoka et al . ( 2018 ) ) is a state-of-the-art method for the nonparallel many-to-many VC among all the GAN-based frameworks . Past attempts have been made to achieve conversion using style transfer algorithms ( Atalla et al . ( 2018 ) ; Chou et al . ( 2018 ) ; Qian et al . ( 2019 ) ) . The most recent framework is the AutoVC ( proposed by Qian et al . ( 2019 ) ) using style transfer scheme , the first and the only framework in VC literature which achieved state-of-the-art results in zero-shot VC . 3 APPROACH . 3.1 PROBLEM FORMULATION . The traditional VC problem is being reformulated as a style transfer problem . Here , we assume Z is a set of n speakers denoted by Z = { Z1 , Z2 , ... , Zn } , where Zi is the ith speaker , and U is the set of m speech utterances denoted by U = { U1 , U2 , ... , Um } , where Ui is the ith speech utterance . Now , probability density function ( pdf ) is generated for given Zi , and Ui denoted by pX ( .|Zi , Ui ) via the stochastic process of random sampling from the distributions Zi and Ui . Here , Xi ∼ pX ( .|Zi , Ui ) can be referred as features of given Ui with speaking style of Zi . The key idea is to transfer the speaking style of one speaker into another in order to achieve VC . For this , let us consider a set of random variables ( Z1 , U1 ) corresponding to a source speaker , and ( Z2 , U2 ) corresponding to a target speaker . Here , U1 and U2 are spoken by Z1 and Z2 , respectively . Our goal is to achieve pX̂ ( .|Z2 , U1 ) . Now , we want to learn a mapping function to achieve our goal for VC . Our mapping function is able to generate the distribution denoted by X̂Z1→Z2 with speaking style of Z2 while retaining the linguistic content of U1 . Formally , we want to generate the pdf ( i.e. , pX̂Z1→Z2 ( .|Z1 , U1 , Z2 , U2 ) ) to be close or equal to the pX̂ ( .|Z2 , U1 ) . Accurately , our mapping function will achieve this property , as shown in eq . 1. pX̂Z1→Z2 ( .|Z1 , U1 , Z2 , U2 ) = pX̂ ( .|Z2 , U1 ) . ( 1 ) Intuitively , we want to transfer the speaking style of Z2 to the Z1 while preserving the linguistic content of U1 . Therefore , converted voice is perceptually sound as if utterance U1 were spoken by Z2 . With this , AdaGAN is also designed to achieve zero-shot VC . During zero-shot conversion , U1 and U2 can be seen or unseen utterances , and Z1 and Z2 can be seen or unseen speakers . 3.2 ADAPTIVE INSTANCE NORMALIZATION ( AdaIN ) Our key idea for style transfer in VC revolves around the AdaIN . First , AdaIN was introduced for arbitrary style transfer in image-to-image translation tasks by Huang & Belongie ( 2017 ) . In this paper , AdaIN helps us to capture the speaking style and linguistic content into a single feature representation . AdaIN takes features of a source speaker ’ s speech ( i.e. , X ) and sample features of the target speaker ’ s speech ( i.e. , Y ) . Here , x is a feature from the set X related to the linguistic content of source speech , and Y is features related to the speaking style of the target speaker . AdaIN will map the mean and standard deviation of X ( i.e. , µX and σx ) in such a way that it will match with mean , and standard deviation of Y ( i.e. , µY and σY ) . Mathematical equation of AdaIN is defined as ( Huang & Belongie ( 2017 ) ) : AdaIN ( x , Y ) = σY ( x− µX σX ) + µY . ( 2 ) From eq . ( 2 ) , we can infer that AdaIN first normalizes x , and scales back based on mean and standard deviations of y . Intuitively , let ’ s assume that we have one latent space which represents the linguistic content in the distribution and also contains speaking style in terms of the mean and standard deviation of the same distribution . To transfer the speaking style , we have adopted the distribution properties ( i.e. , its mean and standard deviation ) of the target speaker . As a result , the output produced by AdaIN has the high average activation for the features which are responsible for style ( y ) while preserving linguistic content . AdaIN does not have any learning parameters . Hence , it will not affect the computational complexity of the framework .
This paper presents a voice conversion approach using GANs based on adaptive instance normalization (AdaIN). The authors give the mathematical formulation of the problem and provide the implementation of the so-called AdaGAN. Experiments are carried out on VCTK and the proposed AdaGAN is compared with StarGAN. The idea is ok and the concept of using AdaIN for efficient voice conversion is also good. But the paper has a lot of issues both technically and grammatically, which makes the paper hard to follow.
SP:941824acd2bae699174e6bed954e2938eb4bede1
AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion
1 INTRODUCTION . Language is the core of civilization , and speech is the most powerful and natural form of communication . Human voice mimicry has always been considered as one of the most difficult tasks since it involves understanding of the sophisticated human speech production mechanism ( Eriksson & Wretling ( 1997 ) ) and challenging concepts of prosodic transfer ( Gomathi et al . ( 2012 ) ) . In the literature , this is achieved using Voice Conversion ( VC ) technique ( Stylianou ( 2009 ) ) . Recently , VC has gained more attention due to its fascinating real-world applications in privacy and identity protection , military operations , generating new voices for animated and fictional movies , voice repair in medical-domain , voice assistants , etc . Voice Conversion ( VC ) technique converts source speaker ’ s voice in such a way as if it were spoken by the target speaker . This is primarily achieved by modifying spectral and prosodic features while retaining the linguistic information in the given speech signal ( Stylianou et al . ( 1998 ) ) . In addition , Voice cloning is one of the closely related task to VC ( Arik et al . ( 2018 ) ) . However , in this research work we only focus to advance the Voice Conversion . With the emergence of deep learning techniques , VC has become more efficient . Deep learningbased techniques have made remarkable progress in parallel VC . However , it is difficult to get parallel data , and such data needs alignment ( which is a arduous process ) to get better results . Building a VC system from non-parallel data is highly challenging , at the same time valuable for practical application scenarios . Recently , many deep learning-based style transfer algorithms have been applied for non-parallel VC task . Hence , this problem can be formulated as a style transfer problem , where one speaker ’ s style is converted into another while preserving the linguistic content as it is . In particular , Conditional Variational AutoEncoders ( CVAEs ) , Generative Adversarial Networks ( GANs ) ( proposed by Goodfellow et al . ( 2014 ) ) , and its variants have gained significant attention in non-parallel VC . However , it is known that the training task for GAN is hard , and the convergence property of GAN is fragile ( Salimans et al . ( 2016 ) ) . There is no substantial evidence that the gen- erated speech is perceptually good . Moreover , CVAEs alone do not guarantee distribution matching and suffers from the issue of over smoothing of the converted features . Although , there are few GAN-based systems that produced state-of-the-art results for non-parallel VC . Among these algorithms , even fewer can be applied for many-to-many VC tasks . At last , there is the only system available for zero-shot VC proposed by Qian et al . ( 2019 ) . Zero-shot conversion is a technique to convert source speaker ’ s voice into an unseen target speaker ’ s speaker via looking at a few utterances of that speaker . As known , solutions to a challenging problem comes with trade-offs . Despite the results , architectures have become more complex , which is not desirable in real-world scenarios because the quality of algorithms or architectures is also measured by the training time and computational complexity of learning trainable parameters ( Goodfellow et al . ( 2016 ) ) . Motivated by this , we propose computationally less expensive Adaptive GAN ( AdaGAN ) , a new style transfer framework , and a new architectural training procedure that we apply to the GAN-based framework . In AdaGAN , the generator encapsulates Adaptive Instance Normalization ( AdaIN ) for style transfer , and the discriminator is responsible for adversarial training . Recently , StarGAN-VC ( proposed by Kameoka et al . ( 2018 ) ) is a state-of-the-art method among all the GAN-based frameworks for non-parallel many-to-many VC . AdaGAN is also GAN-based framework . Therefore , we compare AdaGAN with StarGAN-VC for non-parallel many-to-many VC in terms of naturalness , speaker similarity , and computational complexity . We observe that AdaGAN yields state-of-the-art results for this with almost 88.6 % less computational complexity . Recently proposed AutoVC ( by Qian et al . ( 2019 ) ) is the only framework for zero-shot VC . Inspired by this , we propose AdaGAN for zero-shot VC as an independent study , which is the first GAN-based framework to perform zeroshot VC . We reported initial results for zero-shot VC using AdaGAN.The main contributions of this work are as follows : • We introduce the concept of latent representation based many-to-many VC using GAN for the first time in literature . • We show that in the latent space content of the speech can be represented as the distribution and the properties of this distribution will represent the speaking style of the speaker . • Although AdaGAN has much lesser computation complexity , AdaGAN shows much better results in terms of naturalness and speaker similarity compared to the baseline . 2 RELATED WORK . Developing a non-parallel VC framework is challenging task because of the problems associated with the training conditions using non-parallel data in deep learning architectures . However , attempts have been made to develop many non-parallel VC frameworks in the past decade . For example , Maximum Likelihood ( ML ) -based approach proposed by Ye & Young ( 2006 ) , speaker adaptation technique by Mouchtaris et al . ( 2006 ) , GMM-based VC method using Maximum a posteriori ( MAP ) adaptation technique by Lee & Wu ( 2006 ) , iterative alignment method by Erro et al . ( 2010 ) , Automatic Speech Recognition ( ASR ) -based method by Xie et al . ( 2016 ) , speaker verification-based method using i-vectors by Kinnunen et al . ( 2017 ) , and many other frameworks ( Chen et al . ( 2014 ) ; Nakashika et al . ( 2014 ) ; Blaauw & Bonada ( 2016 ) ; Hsu et al . ( 2016 ) ; Kaneko & Kameoka ( 2017 ) ; Saito et al . ( 2018a ) ; Sun et al . ( 2015 ) ; Shah et al . ( 2018b ; c ) ; Shah & Patil ( 2018 ) ; Biadsy et al . ( 2019 ) ) . Recently , a method using Conditional Variational Autoencoders ( CVAEs ) ( Kingma & Welling ( 2013 ) ) was proposed for non-parallel VC by ( Hsu et al . ( 2016 ) ; Saito et al . ( 2018a ) ) . Recently , VAE based method for VC was proposed , which also uses AdaIN to transfer the speaking style ( Chou et al . ( 2019 ) ) . One powerful framework that can potentially overcome the weakness of VAEs involves GANs . While GAN-based methods were originally applied for image translation problems , these methods have also been employed with noteworthy success for various speech technology-related applications , we can see via architectures proposed by ( Michelsanti & Tan ( 2017 ) ; Saito et al . ( 2018b ) ; Shah et al . ( 2018a ) ) , and many others . In GANs-based methods , Cycle-consistent Adversarial Network ( CycleGAN ) -VC is one of the state-of-the-art methods in the non-parallel VC task proposed by ( Kaneko & Kameoka ( 2017 ) ) . Among these non-parallel algorithms , a few can produce good results for non-parallel many-tomany VC . Recently , StarGAN-VC ( Kameoka et al . ( 2018 ) ) is a state-of-the-art method for the nonparallel many-to-many VC among all the GAN-based frameworks . Past attempts have been made to achieve conversion using style transfer algorithms ( Atalla et al . ( 2018 ) ; Chou et al . ( 2018 ) ; Qian et al . ( 2019 ) ) . The most recent framework is the AutoVC ( proposed by Qian et al . ( 2019 ) ) using style transfer scheme , the first and the only framework in VC literature which achieved state-of-the-art results in zero-shot VC . 3 APPROACH . 3.1 PROBLEM FORMULATION . The traditional VC problem is being reformulated as a style transfer problem . Here , we assume Z is a set of n speakers denoted by Z = { Z1 , Z2 , ... , Zn } , where Zi is the ith speaker , and U is the set of m speech utterances denoted by U = { U1 , U2 , ... , Um } , where Ui is the ith speech utterance . Now , probability density function ( pdf ) is generated for given Zi , and Ui denoted by pX ( .|Zi , Ui ) via the stochastic process of random sampling from the distributions Zi and Ui . Here , Xi ∼ pX ( .|Zi , Ui ) can be referred as features of given Ui with speaking style of Zi . The key idea is to transfer the speaking style of one speaker into another in order to achieve VC . For this , let us consider a set of random variables ( Z1 , U1 ) corresponding to a source speaker , and ( Z2 , U2 ) corresponding to a target speaker . Here , U1 and U2 are spoken by Z1 and Z2 , respectively . Our goal is to achieve pX̂ ( .|Z2 , U1 ) . Now , we want to learn a mapping function to achieve our goal for VC . Our mapping function is able to generate the distribution denoted by X̂Z1→Z2 with speaking style of Z2 while retaining the linguistic content of U1 . Formally , we want to generate the pdf ( i.e. , pX̂Z1→Z2 ( .|Z1 , U1 , Z2 , U2 ) ) to be close or equal to the pX̂ ( .|Z2 , U1 ) . Accurately , our mapping function will achieve this property , as shown in eq . 1. pX̂Z1→Z2 ( .|Z1 , U1 , Z2 , U2 ) = pX̂ ( .|Z2 , U1 ) . ( 1 ) Intuitively , we want to transfer the speaking style of Z2 to the Z1 while preserving the linguistic content of U1 . Therefore , converted voice is perceptually sound as if utterance U1 were spoken by Z2 . With this , AdaGAN is also designed to achieve zero-shot VC . During zero-shot conversion , U1 and U2 can be seen or unseen utterances , and Z1 and Z2 can be seen or unseen speakers . 3.2 ADAPTIVE INSTANCE NORMALIZATION ( AdaIN ) Our key idea for style transfer in VC revolves around the AdaIN . First , AdaIN was introduced for arbitrary style transfer in image-to-image translation tasks by Huang & Belongie ( 2017 ) . In this paper , AdaIN helps us to capture the speaking style and linguistic content into a single feature representation . AdaIN takes features of a source speaker ’ s speech ( i.e. , X ) and sample features of the target speaker ’ s speech ( i.e. , Y ) . Here , x is a feature from the set X related to the linguistic content of source speech , and Y is features related to the speaking style of the target speaker . AdaIN will map the mean and standard deviation of X ( i.e. , µX and σx ) in such a way that it will match with mean , and standard deviation of Y ( i.e. , µY and σY ) . Mathematical equation of AdaIN is defined as ( Huang & Belongie ( 2017 ) ) : AdaIN ( x , Y ) = σY ( x− µX σX ) + µY . ( 2 ) From eq . ( 2 ) , we can infer that AdaIN first normalizes x , and scales back based on mean and standard deviations of y . Intuitively , let ’ s assume that we have one latent space which represents the linguistic content in the distribution and also contains speaking style in terms of the mean and standard deviation of the same distribution . To transfer the speaking style , we have adopted the distribution properties ( i.e. , its mean and standard deviation ) of the target speaker . As a result , the output produced by AdaIN has the high average activation for the features which are responsible for style ( y ) while preserving linguistic content . AdaIN does not have any learning parameters . Hence , it will not affect the computational complexity of the framework .
This work describes an efficient voice conversion system that can operate on non-parallel samples and convert from and to multiple voices. The central element of the methodology is the AdaIn modification. This is an efficient speaker adaptive technique where features are re-normalized to a particular speaker's domain. The rest of the machinery is well motivated and well executed, but less novel. This addition enables the voice conversion between speakers.
SP:941824acd2bae699174e6bed954e2938eb4bede1
Improving Evolutionary Strategies with Generative Neural Networks
1 INTRODUCTION . We are interested in the global minimization of a black-box objective function , only accessible through a zeroth-order oracle . In many instances of this problem the objective is expensive to evaluate , which excludes brute force methods as a reasonable mean of optimization . Also , as the objective is potentially non-convex and multi-modal , its global optimization can not be done greedily but requires a careful balance between exploitation and exploration of the optimization landscape ( the surface defined by the objective ) . The family of algorithms used to tackle such a problem is usually dictated by the cost of one evaluation of the objective function ( or equivalently , by the maximum number of function evaluations that are reasonable to make ) and by a precision requirement . For instance , Bayesian Optimization ( Jones et al. , 1998 ; Shahriari et al. , 2016 ) targets problems of very high evaluation cost , where the global minimum must be approximately discovered after a few hundreds of function evaluations . When aiming for a higher precision and hence having a larger budget ( e.g . thousands of function evaluations ) , a popular algorithm class is the one of Evolutionary Strategies ( ES ) ( Rechenberg , 1978 ; Schwefel , 1977 ) , a family of heuristic search procedures . ES algorithms rely on a search distribution , which role is to propose queries of potentially small value of the objective function . This search distribution is almost always chosen to be a multivariate Gaussian . It is namely the case of the Covariance Matrix Adaptation Evolution Strategies ( CMA-ES ) ( Hansen & Ostermeier , 2001 ) , a state-of-the-art ES algorithm made popular in the machine learning community by its good results on hyper-parameter tuning ( Friedrichs & Igel , 2005 ; Loshchilov & Hutter , 2016 ) . It is also the case for Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2008 ) algorithms , which were recently used for direct policy search in Reinforcement Learning ( RL ) and shown to compete with state-of-the-art MDP-based RL techniques ( Salimans et al. , 2017 ) . Occasionally , other distributions have been used ; e.g . fat-tails distributions like the Cauchy were shown to outperform the Gaussian for highly multi-modal objectives ( Schaul et al. , 2011 ) . We argue in this paper that in ES algorithms , the choice of a standard parametric search distribution ( Gaussian , Cauchy , .. ) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum . To overcome the limitations of classical parametric search distributions , we propose using flexible distributions generated by bijective Generative Neural Networks ( GNNs ) , with computable and differentiable log-probabilities . We discuss why common existing optimization methods in ES algorithms can not be directly used to train such models and design a tailored algorithm that efficiently train GNNs for an ES objective . We show how this new algorithm can readily incorporate existing ES algorithms that operates on simple search distributions , Algorithm 1 : Generic ES procedure input : zeroth-order oracle on f , distribution π0 , population size λ repeat ( Sampling ) Sample x1 , . . . , xλ i.i.d∼ πt ( Evaluation ) Evaluate f ( x1 ) , . . . , f ( xn ) . ( Update ) Update πt to produce x of potentially smaller objective values . until convergence ; like the Gaussian . On a variety of objective functions , we show that this extension can significantly accelerate ES algorithms . We formally introduce the problem and provide background on Evolutionary Strategies in Section 2 . We discuss the role of GNNs in generating flexible search distributions in Section 3 . We explain why usual algorithms fail to train GNNs for an ES objective and introduce a new algorithm in Section 4 . Finally we report experimental results in Section 5 . 2 PRELIMINARIES . In what follows , the real-valued objective function f is defined over a compact X and π will generically denote a probability density function overX . We consider the global optimization of f : x∗ ∈ argmin x∈X f ( x ) ( 1 ) 2.1 EVOLUTIONARY STRATEGIES . The generic procedure followed by ES algorithms is presented in Algorithm 1 . To make the update step tractable , the search distribution is tied to a family of distributions and parametrized by a realvalued parameter vector θ ( e.g . the mean and covariance matrix of a Gaussian ) , and is referred to as πθ . This update step constitutes the main difference between ES algorithms . Natural Evolution Strategies One principled way to perform that update is to minimize the expected objective value over samples x drawn from πθ . Indeed , when the search distribution is parametric and tied to a parameter θ , this objective can be differentiated with respect to θ thanks to the log-trick : J ( θ ) , Eπθ [ f ( x ) ] and ∂J ( θ ) ∂θ = Eπθ [ f ( x ) ∂ log πθ ( x ) ∂θ ] ( 2 ) This quantity can be approximated from samples - it is known as the score-function or REINFORCE ( Williams , 1992 ) estimator , and provides a direction of update for θ . Unfortunately , naively following a stochastic version of the gradient ( 2 ) – a procedure called Plain Gradient Evolutionary Strategies ( PGES ) – is known to be highly ineffective . PGES main limitation resides in its instability when the search distribution is concentrating , making it unable to precisely locate any local minimum . To improve over the PGES algorithm the authors of Wierstra et al . ( 2008 ) proposed to descend J ( θ ) along its natural gradient ( Amari , 1998 ) . More precisely , they introduce a trust-region optimization scheme to limit the instability of PGES , and minimize a linear approximation of J ( θ ) under a Kullback-Leibler ( KL ) divergence constraint : argmin δθ J ( θ + δθ ) ' J ( θ ) + δθT∇θJ ( θ ) s.t KL ( πθ+δθ||πθ ) ≤ ( 3 ) To avoid solving analytically the trust region problem ( 3 ) , Wierstra et al . ( 2008 ) shows that its solution can be approximated by : δθ∗ ∝ −F−1θ ∇θJ ( θ ) where Fθ = Eπθ [ ∇θ log πθ ( x ) ∇θ log πθ ( x ) T ] ( 4 ) is the Fischer Information Matrix ( FIM ) of πθ . The parameter θ is therefore not updated along the negative gradient of J but rather along F−1θ ∇θJ ( θ ) , a quantity known as the natural gradient . The FIM Fθ is known analytically when πθ is a multivariate Gaussian and the resulting algorithm , Exponential Natural Evolutionary Strategies ( xNES ) ( Glasmachers et al. , 2010 ) has been shown to reach state-of-the-art performances on a large ES benchmark . CMA-ES Naturally , there exist other strategies to update the search distribution πθ . For instance , CMA-ES relies on a variety of heuristic mechanisms like covariance matrix adaptation and evolution paths , but is only defined when πθ is a multivariate Gaussian . Explaining such mechanisms would be out of the scope of this paper , but the interested reader is referred to the work of Hansen ( 2016 ) for a detailed tutorial on CMA-ES . 2.2 LIMITATIONS OF CLASSICAL SEARCH DISTRIBUTIONS . ES implicitly balance the need for exploration and exploitation of the optimization landscape . The exploitation phase consists in updating the search distribution , and exploration happens when samples are drawn from the search distribution ’ s tails . The key role of the search distribution is therefore to produce a support adapted to the landscape ’ s structure , so that new points are likely to improve over previous samples . We argue here that the choice of a given parametric distribution ( the multivariate Gaussian distribution being overwhelmingly represented in state-of-the-art ES algorithms ) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum . For instance , a Gaussian distribution is not adapted to navigate a curved valley because of its inability to continuously curve its density . This lack of flexibility will lead it to drastically reduce its entropy , until the curved valley looks locally straight . At this point , the ES algorithm resembles a hill-climber and barely takes advantage of the exploration abilities of the search distribution . An illustration of this phenomenon is presented in Figure 2 on the Rosenbrock function . Another limitation of classical search distribution is their inability to follow multiple hypothesis , that is to explore at the same time different local minima . Even if mixture models can show such flexibility , hyper-parameters like the number of mixtures have optimal values that are impossible to guess a priori . We want to introduce flexible search distributions to overcome these limitations . Such distributions should , despite their expressiveness , be easily trainable . We should also be concerned when designing them with their role in the exploration/exploitation trade off : a search distribution with too much capacity could over-fit some seemingly good samples , leading to premature convergence . To sum-up , we want to design search-distributions that are : • more flexible than classical distributions • yet easily trainable • while keeping control over the exploration / exploitation trade-off In the following section , we carefully investigate the class of Generative Neural Networks ( GNNs ) to find a parametric class of distributions satisfying such properties . 3 FLEXIBLE SEARCH DISTRIBUTIONS WITH GNNS . Generative Neural Networks ( MacKay , 1995 ) have been studied in the context of density estimation and shown to be able to model complex and highly multimodal distributions ( Srivastava et al. , 2017 ) . We propose here to leverage their expressiveness for ES , and train them in a principled way thanks to the ES objective : J ( π ) = Eπ [ f ( x ) ] As discussed in Section 2 , optimizing J ( π ) with gradient-based methods is possible through the score-function estimator , which requires to be able to compute and efficiently differentiate the logprobabilities of π . 3.1 GNN BACKGROUND . The core idea behind a GNN is to map a latent variable z ∈ Z drawn from a known distribution νω to an output variable x = gη ( z ) where gη is the forward-pass of a neural network . The parameter η represents the weights of this neural network while ω describe the degrees of freedom of the latent space distribution νω . We denote θ= ( ω , η ) and πθ ( x ) the density of the output variable x . For general neural network architectures , it is impossible to compute πθ ( x ) for samples x drawn from the GNN . This is namely why their are often trained with adversarial methods ( Goodfellow et al. , 2014 ) for sample generation purposes , bypassing the need of computing densities , but at the expense of a good density estimation ( mode-dropping ) . An alternative to adversarial methods was proposed with variational auto-encoders ( Kingma & Welling , 2013 ) however at the cost of learning two neural networks ( an encoder and a decoder ) . A less computationally expensive method consists in restricting the possible architectures to build bijective GNNs , also known as Normalizing Flows ( NF ) ( Rezende & Mohamed , 2015 ; Papamakarios et al. , 2017 ) , which allows the exact computation of the distribution ’ s density . Indeed , if gη is a bijection from Z to X with inverse hη , g−1η , the change of variable formula provides a way to compute πθ ( x ) : πθ ( x ) = νω ( hη ( x ) ) · ∣∣∣∣∂hη ( x ) ∂x ∣∣∣∣ ( 5 ) To have a tractable density one therefore needs to ensure that the determinant of the Jacobian |∂hη ( x ) /∂x| is easily computable . Several models satisfying these two properties ( i.e bijectivity and computable Jacobian ) have been proposed for density estimation ( Rippel & Adams , 2013 ; Dinh et al. , 2014 ; 2016 ) , and proved their expressiveness despite their relatively simple structure . NFs therefore answer two of our needs when building our new search distribution : flexibility and easiness to train . In this work , we will focus on one NF model : the Non-Linear Independent Component Estimation ( Dinh et al. , 2014 ) ( NICE ) model , for its numerical stability and volume preserving properties .
In ES the goal is to find a distribution pi_theta(x) such that the expected value of f(x) under this distribution is high. This can be optimized with REINFORCE or with more sophisticated methods based on the natural gradient. The functional form of pi_theta is almost always a Gaussian, but this isn't sufficiently flexible (e.g. multi-modal) to provide a good optimization algorithm. In response, the authors advocate for using a flexible family of generative neural networks for pi_theta. Using NICE as a generative model is desirable because it maintains volumes. This means that we can adjust volumes in latent space and this directly corresponds to volumes in x space. Doing so is useful to be able to tune how concentrated the search distribution is and to explicitly reason about the mode of the search distribution.
SP:25106cb1a3e5ead20e58b680eeb6aa361c07e1ff
Improving Evolutionary Strategies with Generative Neural Networks
1 INTRODUCTION . We are interested in the global minimization of a black-box objective function , only accessible through a zeroth-order oracle . In many instances of this problem the objective is expensive to evaluate , which excludes brute force methods as a reasonable mean of optimization . Also , as the objective is potentially non-convex and multi-modal , its global optimization can not be done greedily but requires a careful balance between exploitation and exploration of the optimization landscape ( the surface defined by the objective ) . The family of algorithms used to tackle such a problem is usually dictated by the cost of one evaluation of the objective function ( or equivalently , by the maximum number of function evaluations that are reasonable to make ) and by a precision requirement . For instance , Bayesian Optimization ( Jones et al. , 1998 ; Shahriari et al. , 2016 ) targets problems of very high evaluation cost , where the global minimum must be approximately discovered after a few hundreds of function evaluations . When aiming for a higher precision and hence having a larger budget ( e.g . thousands of function evaluations ) , a popular algorithm class is the one of Evolutionary Strategies ( ES ) ( Rechenberg , 1978 ; Schwefel , 1977 ) , a family of heuristic search procedures . ES algorithms rely on a search distribution , which role is to propose queries of potentially small value of the objective function . This search distribution is almost always chosen to be a multivariate Gaussian . It is namely the case of the Covariance Matrix Adaptation Evolution Strategies ( CMA-ES ) ( Hansen & Ostermeier , 2001 ) , a state-of-the-art ES algorithm made popular in the machine learning community by its good results on hyper-parameter tuning ( Friedrichs & Igel , 2005 ; Loshchilov & Hutter , 2016 ) . It is also the case for Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2008 ) algorithms , which were recently used for direct policy search in Reinforcement Learning ( RL ) and shown to compete with state-of-the-art MDP-based RL techniques ( Salimans et al. , 2017 ) . Occasionally , other distributions have been used ; e.g . fat-tails distributions like the Cauchy were shown to outperform the Gaussian for highly multi-modal objectives ( Schaul et al. , 2011 ) . We argue in this paper that in ES algorithms , the choice of a standard parametric search distribution ( Gaussian , Cauchy , .. ) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum . To overcome the limitations of classical parametric search distributions , we propose using flexible distributions generated by bijective Generative Neural Networks ( GNNs ) , with computable and differentiable log-probabilities . We discuss why common existing optimization methods in ES algorithms can not be directly used to train such models and design a tailored algorithm that efficiently train GNNs for an ES objective . We show how this new algorithm can readily incorporate existing ES algorithms that operates on simple search distributions , Algorithm 1 : Generic ES procedure input : zeroth-order oracle on f , distribution π0 , population size λ repeat ( Sampling ) Sample x1 , . . . , xλ i.i.d∼ πt ( Evaluation ) Evaluate f ( x1 ) , . . . , f ( xn ) . ( Update ) Update πt to produce x of potentially smaller objective values . until convergence ; like the Gaussian . On a variety of objective functions , we show that this extension can significantly accelerate ES algorithms . We formally introduce the problem and provide background on Evolutionary Strategies in Section 2 . We discuss the role of GNNs in generating flexible search distributions in Section 3 . We explain why usual algorithms fail to train GNNs for an ES objective and introduce a new algorithm in Section 4 . Finally we report experimental results in Section 5 . 2 PRELIMINARIES . In what follows , the real-valued objective function f is defined over a compact X and π will generically denote a probability density function overX . We consider the global optimization of f : x∗ ∈ argmin x∈X f ( x ) ( 1 ) 2.1 EVOLUTIONARY STRATEGIES . The generic procedure followed by ES algorithms is presented in Algorithm 1 . To make the update step tractable , the search distribution is tied to a family of distributions and parametrized by a realvalued parameter vector θ ( e.g . the mean and covariance matrix of a Gaussian ) , and is referred to as πθ . This update step constitutes the main difference between ES algorithms . Natural Evolution Strategies One principled way to perform that update is to minimize the expected objective value over samples x drawn from πθ . Indeed , when the search distribution is parametric and tied to a parameter θ , this objective can be differentiated with respect to θ thanks to the log-trick : J ( θ ) , Eπθ [ f ( x ) ] and ∂J ( θ ) ∂θ = Eπθ [ f ( x ) ∂ log πθ ( x ) ∂θ ] ( 2 ) This quantity can be approximated from samples - it is known as the score-function or REINFORCE ( Williams , 1992 ) estimator , and provides a direction of update for θ . Unfortunately , naively following a stochastic version of the gradient ( 2 ) – a procedure called Plain Gradient Evolutionary Strategies ( PGES ) – is known to be highly ineffective . PGES main limitation resides in its instability when the search distribution is concentrating , making it unable to precisely locate any local minimum . To improve over the PGES algorithm the authors of Wierstra et al . ( 2008 ) proposed to descend J ( θ ) along its natural gradient ( Amari , 1998 ) . More precisely , they introduce a trust-region optimization scheme to limit the instability of PGES , and minimize a linear approximation of J ( θ ) under a Kullback-Leibler ( KL ) divergence constraint : argmin δθ J ( θ + δθ ) ' J ( θ ) + δθT∇θJ ( θ ) s.t KL ( πθ+δθ||πθ ) ≤ ( 3 ) To avoid solving analytically the trust region problem ( 3 ) , Wierstra et al . ( 2008 ) shows that its solution can be approximated by : δθ∗ ∝ −F−1θ ∇θJ ( θ ) where Fθ = Eπθ [ ∇θ log πθ ( x ) ∇θ log πθ ( x ) T ] ( 4 ) is the Fischer Information Matrix ( FIM ) of πθ . The parameter θ is therefore not updated along the negative gradient of J but rather along F−1θ ∇θJ ( θ ) , a quantity known as the natural gradient . The FIM Fθ is known analytically when πθ is a multivariate Gaussian and the resulting algorithm , Exponential Natural Evolutionary Strategies ( xNES ) ( Glasmachers et al. , 2010 ) has been shown to reach state-of-the-art performances on a large ES benchmark . CMA-ES Naturally , there exist other strategies to update the search distribution πθ . For instance , CMA-ES relies on a variety of heuristic mechanisms like covariance matrix adaptation and evolution paths , but is only defined when πθ is a multivariate Gaussian . Explaining such mechanisms would be out of the scope of this paper , but the interested reader is referred to the work of Hansen ( 2016 ) for a detailed tutorial on CMA-ES . 2.2 LIMITATIONS OF CLASSICAL SEARCH DISTRIBUTIONS . ES implicitly balance the need for exploration and exploitation of the optimization landscape . The exploitation phase consists in updating the search distribution , and exploration happens when samples are drawn from the search distribution ’ s tails . The key role of the search distribution is therefore to produce a support adapted to the landscape ’ s structure , so that new points are likely to improve over previous samples . We argue here that the choice of a given parametric distribution ( the multivariate Gaussian distribution being overwhelmingly represented in state-of-the-art ES algorithms ) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum . For instance , a Gaussian distribution is not adapted to navigate a curved valley because of its inability to continuously curve its density . This lack of flexibility will lead it to drastically reduce its entropy , until the curved valley looks locally straight . At this point , the ES algorithm resembles a hill-climber and barely takes advantage of the exploration abilities of the search distribution . An illustration of this phenomenon is presented in Figure 2 on the Rosenbrock function . Another limitation of classical search distribution is their inability to follow multiple hypothesis , that is to explore at the same time different local minima . Even if mixture models can show such flexibility , hyper-parameters like the number of mixtures have optimal values that are impossible to guess a priori . We want to introduce flexible search distributions to overcome these limitations . Such distributions should , despite their expressiveness , be easily trainable . We should also be concerned when designing them with their role in the exploration/exploitation trade off : a search distribution with too much capacity could over-fit some seemingly good samples , leading to premature convergence . To sum-up , we want to design search-distributions that are : • more flexible than classical distributions • yet easily trainable • while keeping control over the exploration / exploitation trade-off In the following section , we carefully investigate the class of Generative Neural Networks ( GNNs ) to find a parametric class of distributions satisfying such properties . 3 FLEXIBLE SEARCH DISTRIBUTIONS WITH GNNS . Generative Neural Networks ( MacKay , 1995 ) have been studied in the context of density estimation and shown to be able to model complex and highly multimodal distributions ( Srivastava et al. , 2017 ) . We propose here to leverage their expressiveness for ES , and train them in a principled way thanks to the ES objective : J ( π ) = Eπ [ f ( x ) ] As discussed in Section 2 , optimizing J ( π ) with gradient-based methods is possible through the score-function estimator , which requires to be able to compute and efficiently differentiate the logprobabilities of π . 3.1 GNN BACKGROUND . The core idea behind a GNN is to map a latent variable z ∈ Z drawn from a known distribution νω to an output variable x = gη ( z ) where gη is the forward-pass of a neural network . The parameter η represents the weights of this neural network while ω describe the degrees of freedom of the latent space distribution νω . We denote θ= ( ω , η ) and πθ ( x ) the density of the output variable x . For general neural network architectures , it is impossible to compute πθ ( x ) for samples x drawn from the GNN . This is namely why their are often trained with adversarial methods ( Goodfellow et al. , 2014 ) for sample generation purposes , bypassing the need of computing densities , but at the expense of a good density estimation ( mode-dropping ) . An alternative to adversarial methods was proposed with variational auto-encoders ( Kingma & Welling , 2013 ) however at the cost of learning two neural networks ( an encoder and a decoder ) . A less computationally expensive method consists in restricting the possible architectures to build bijective GNNs , also known as Normalizing Flows ( NF ) ( Rezende & Mohamed , 2015 ; Papamakarios et al. , 2017 ) , which allows the exact computation of the distribution ’ s density . Indeed , if gη is a bijection from Z to X with inverse hη , g−1η , the change of variable formula provides a way to compute πθ ( x ) : πθ ( x ) = νω ( hη ( x ) ) · ∣∣∣∣∂hη ( x ) ∂x ∣∣∣∣ ( 5 ) To have a tractable density one therefore needs to ensure that the determinant of the Jacobian |∂hη ( x ) /∂x| is easily computable . Several models satisfying these two properties ( i.e bijectivity and computable Jacobian ) have been proposed for density estimation ( Rippel & Adams , 2013 ; Dinh et al. , 2014 ; 2016 ) , and proved their expressiveness despite their relatively simple structure . NFs therefore answer two of our needs when building our new search distribution : flexibility and easiness to train . In this work , we will focus on one NF model : the Non-Linear Independent Component Estimation ( Dinh et al. , 2014 ) ( NICE ) model , for its numerical stability and volume preserving properties .
As the title of the paper states, this paper tries to improve evolution strategies (ES) using a generative neural network. In the standard ES candidate solution is generated from a multivariate normal distribution, where the parameters of the distribution are adapted during the optimization process. The authors claim that the gaussian distribution, i.e., the ellipsoidal shape of the sampling distribution, is not adequate for the objective functions such as multimodal functions or functions with curved ridge levelsets such as the well-known Rosenbrock functions. The motivation is clearly stated. The technique is interesting and non-trivial. However, the experimental results are not very convincing to conclude that the proposed approach achieves the stated goal. Moreover, this paper may fit more to optimization conferences such as GECCO.
SP:25106cb1a3e5ead20e58b680eeb6aa361c07e1ff
Potential Flow Generator with $L_2$ Optimal Transport Regularity for Generative Models
1 INTRODUCTION . Many of the generative models , for example , generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ; Arjovsky et al. , 2017 ; Salimans et al. , 2018 ) and normalizing flow models ( Rezende & Mohamed , 2015 ; Kingma & Dhariwal , 2018 ; Chen et al. , 2018 ) , aim to find a generator that could map the input distribution to the target distribution . In many cases , especially when the input distributions are purely noises , the specific maps between input and output are of little importance as long as the generated distributions match the target ones . However , in other cases like imageto-image translations , where both input and target distributions are distributions of images , the generators are required to have additional regularity such that the input individuals are mapped to the “ corresponding ” outputs in some sense . If paired input-output samples are provided , Lp penalty could be hybridized into generators loss functions to encourage the output individuals to fit the ground truth ( Isola et al. , 2017 ) . For the cases without paired data , a popular approach is to introduce another generator and encourage the two generators to be the inverse maps of each other , as in CycleGAN ( Zhu et al. , 2017 ) , DualGAN ( Yi et al. , 2017 ) and DiscoGAN ( Kim et al. , 2017 ) , etc . However , such a pair of generators is not unique and lacks clear mathematical interpretation about its effectiveness . In this paper we introduce a special generator , i.e. , the potential flow generator , with L2 optimal transport regularity . By applying such generator , not only are we trying to find a map from the input distribution to the target one , but we also aim to find the optimal transport map that minimizes the squared Euclidean transport distance . In Figure 1 we provide a schematic comparison between generators with and without optimal transport regularity . While both generators provide a scheme to map from the input distribution to the output distribution , the total squared transport distances in the left generator is larger than that in the right generator . Note that the generator with optimal transport regularity has the characteristic of “ proximity ” in that the inputs tend to be mapped to nearby outputs . As we will show later , this “ proximity ” characteristic of L2 optimal transport regularity could be utilized in image translation tasks . Compared with other approaches like CycleGAN , the L2 optimal transport regularity has a much clearer mathematical interpretation . There have been other approaches to learn the optimal transport map in generative models . For example , Seguy et al . ( 2017 ) proposed to first learn the regularized optimal transport plan and then the optimal transport map , based on the dual form of regularized optimal transport problem . Also , Yang & Uhler ( 2018 ) proposed to learn the unbalanced optimal transport plan in an adversarial way derived from a convex conjugate representation of divergences . In the W2GAN model proposed by Leygonie et al . ( 2019 ) , the discriminator ’ s objective is the 2-Wasserstein metric so that the generator is supposed to recover the L2 optimal transport map . All the above approaches need to introduce , and are limited to , specific loss functions to train the generators . Our proposed potential flow generator takes a different approach in that with only a slight augmentation to the original generator loss functions , our generator could be integrated into a wide range of generative models with various generator loss functions , including different versions of GANs and normalizing flow models . This simple modification makes our method easy to adopt on various tasks considering the existing rich literature and the future developments of generative models . In Section 2 we present a formal definition of optimal transport map and the motivation to apply L2 optimal transport regularity to generators . In Section 3 we give a detailed formulation of the potential flow generator and the augmentation to the original loss functions . Results are then provided in Section 4 . We include the discussion and conclusions in Section 5 . 2 GENERATIVE MODELS AND OPTIMAL TRANSPORT MAP . First , we introduce the concept of push forward , which will be used extensively in the paper . Definition 1 Given two Polish space X and Y , B ( X ) and B ( Y ) the Borel σ-algebra on X and Y , and P ( X ) , P ( Y ) the set of probability measures on B ( X ) and B ( Y ) . Let f : X→ Y be a Borel map , and µ ∈ P ( X ) . We define f # µ ∈ P ( Y ) , the push forward of µ through f , by f # µ ( A ) = µ ( f−1 ( A ) ) , ∀A ∈ B ( Y ) . ( 1 ) With the concept of push forward , we can formulate the goal of GANs and normalizing flow models as to train the generator G such that G # µ is equal to or at least close to ν in some sense , where µ and ν are the input and target distribution , respectively . Usually , the loss functions for training the generators are metrics of closeness that vary for different models . For example , in continuous normalizing flows ( Chen et al. , 2018 ) , such metric of closeness is DKL ( G # µ||ν ) or DKL ( ν||G # µ ) . In Wasserstein GANs ( WGANs ) ( Arjovsky et al. , 2017 ) , the metric of closeness is the Wasserstein-1 distance between G # µ and ν , which is estimated in a variational form with the discriminator neural network . As a result , the generator and discriminator neural networks are trained in an adversarial way : min G max D is 1-Lipschitz Ex∼νD ( x ) − Ez∼µD ( G ( z ) ) , ( 2 ) where D is the discriminator neural network and the Lipschitz constraint could be imposed via the gradient penalty ( Gulrajani et al. , 2017 ) , spectral normalization ( Miyato et al. , 2018 ) , etc . Now we introduce the concept of optimal transport map as follows : Definition 2 Given a cost function c : X×Y→ R , and µ ∈ P ( X ) , ν ∈ P ( Y ) , we let T be the set of all transport maps from µ to ν , i.e . T : = { f : f # µ = ν } . Monge ’ s optimal transport problem is to minimize the cost functional C ( f ) among T , where C ( f ) = Ex∼µc ( x , f ( x ) ) ( 3 ) and the minimizer f∗ ∈ T is called the optimal transport map . In this paper , we are concerned mostly with the case where X = Y = Rd with L2 transport cost , i.e. , the transport c ( x , y ) = ‖x− y‖2 . We assume that µ and ν are absolute continuous w.r.t . Lebesgue measure , i.e . they have probability density functions . In general , Monge ’ s problem could be ill-posed in that T could be empty set or there is no minimizer in T. Also , the optimal transport map could be non-unique . However , for the special case we consider , there exists a unique solution to Monge ’ s problem ( Brenier , 1991 ; Gangbo & McCann , 1996 ) . Informally speaking , with L2 transport cost the optimal transport map has the characteristic of “ proximity ” , i.e . the inputs tend to be mapped to nearby outputs . In image translation tasks , such “ proximity ” characteristic would be helpful if we could properly embed the images into Euclidean space such that our preferred input-output pairs are close to each other . A similar idea is also proposed in Yang & Uhler ( 2018 ) for unbalanced optimal transport . Apart from image translations , the L2 optimal transport problem is important in many other aspects . For example , it is closely related to gradient flow ( Ambrosio et al. , 2008 ) , Fokker-Planck equations ( Santambrogio , 2017 ) , flow in porous medium ( Otto , 1997 ) , etc . 3 POTENTIAL FLOW GENERATOR . 3.1 POTENTIAL FLOW FORMULATION OF OPTIMAL TRANSPORT MAP . We assume that µ and ν have probability density ρµ and ρν , respectively , and consider all smooth enough density fields ρ ( t , x ) and velocity fields v ( t , x ) , where t ∈ [ 0 , T ] , subject to the continuity equation as well as initial and final conditions ∂tρ+∇ · ( ρv ) = 0 , ρ ( 0 , · ) = ρµ , ρ ( T , · ) = ρν . ( 4 ) The above equation states that such velocity field will induce a transport map : we can construct an ordinary differential equation ( ODE ) du dt = v ( t , u ) , ( 5 ) and the map from the initial point to the final point gives the transport map from µ to ν . As is proposed by Benamou & Brenier ( 2000 ) , for the transport cost function c ( x , y ) = ‖x− y‖2 , the minimal transport cost is equal to the infimum of T ∫ Rd ∫ T 0 ρ ( t , x ) |v ( t , x ) |2dxdt ( 6 ) among all ( ρ , v ) satisfying equation ( 4 ) . The optimality condition is given by v ( t , x ) = ∇φ ( t , x ) , ∂tφ+ 1 2 |∇φ|2 = 0 . ( 7 ) In other words , the optimal velocity field is actually induced from a flow with time-dependent potential φ ( t , x ) . The use of this formulation is well-known in optimal transport community ( Trigila & Tabak , 2016 ; Peyré et al. , 2019 ) . In this paper we integrate this formulation in the deep generative models . Instead of solving Monge ’ s problem and find the exact L2 optimal transport map , which is unrealistic due to the limited families of neural network functions as well as the errors arising from training the neural networks , our goal is to regularize the generators in a wide range of generative models , so that the generator maps could approximate the L2 optimal transport map at least in low dimensional problems . The maps would also be endowed with the characteristics of “ proximity ” so that we can apply them to engineering problems . 3.2 POTENTIAL FLOW GENERATOR . The potential φ ( t , x ) is the key function to estimate , since the velocity field could be obtained by taking the gradient of the potential and consequently the transport map could be obtained from Equation 5 . There are two strategies to use neural networks to represent φ . One can take advantage of the fact that the time-dependent potential field φ is actually uniquely determined by its initial condition from Equation 7 , and use a neural network to represent the initial condition of φ , i.e . φ ( 0 , x ) , while approximating φ ( t , x ) via time discretization schemes . Alternatively , one can use a neural network to represent φ ( t , x ) directly and later apply the PDE regularity for φ ( t , x ) in Equation 7 . We name the generators defined in the above two approaches as discrete potential flow generator and continuous potential flow generator , respectively , and give a detailed formulation as follows .
The paper proposes a ‘potential flow generator’ that can be seen as a regularizer for traditional GAN losses. It is based on the idea that samples flowing from one distribution to another should follow a minimum travel cost path. This regularization is expressed as an optimal transport problem with a squared Euclidean cost. Authors rely on the dynamic formulation of OT proposed by Benamou and Brenier, 2000. They propose to learn a time-dependent potential field which gradient defines the velocity fields used to drive samples from a source distribution toward a target one. Experiments on a simple 1D case (where the optimal transport map is known), and on images with an MNIST / CelebA qualitative example.
SP:d6218fdd95b48f3e69bf12e96f938cecde8ff7ab
Potential Flow Generator with $L_2$ Optimal Transport Regularity for Generative Models
1 INTRODUCTION . Many of the generative models , for example , generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ; Arjovsky et al. , 2017 ; Salimans et al. , 2018 ) and normalizing flow models ( Rezende & Mohamed , 2015 ; Kingma & Dhariwal , 2018 ; Chen et al. , 2018 ) , aim to find a generator that could map the input distribution to the target distribution . In many cases , especially when the input distributions are purely noises , the specific maps between input and output are of little importance as long as the generated distributions match the target ones . However , in other cases like imageto-image translations , where both input and target distributions are distributions of images , the generators are required to have additional regularity such that the input individuals are mapped to the “ corresponding ” outputs in some sense . If paired input-output samples are provided , Lp penalty could be hybridized into generators loss functions to encourage the output individuals to fit the ground truth ( Isola et al. , 2017 ) . For the cases without paired data , a popular approach is to introduce another generator and encourage the two generators to be the inverse maps of each other , as in CycleGAN ( Zhu et al. , 2017 ) , DualGAN ( Yi et al. , 2017 ) and DiscoGAN ( Kim et al. , 2017 ) , etc . However , such a pair of generators is not unique and lacks clear mathematical interpretation about its effectiveness . In this paper we introduce a special generator , i.e. , the potential flow generator , with L2 optimal transport regularity . By applying such generator , not only are we trying to find a map from the input distribution to the target one , but we also aim to find the optimal transport map that minimizes the squared Euclidean transport distance . In Figure 1 we provide a schematic comparison between generators with and without optimal transport regularity . While both generators provide a scheme to map from the input distribution to the output distribution , the total squared transport distances in the left generator is larger than that in the right generator . Note that the generator with optimal transport regularity has the characteristic of “ proximity ” in that the inputs tend to be mapped to nearby outputs . As we will show later , this “ proximity ” characteristic of L2 optimal transport regularity could be utilized in image translation tasks . Compared with other approaches like CycleGAN , the L2 optimal transport regularity has a much clearer mathematical interpretation . There have been other approaches to learn the optimal transport map in generative models . For example , Seguy et al . ( 2017 ) proposed to first learn the regularized optimal transport plan and then the optimal transport map , based on the dual form of regularized optimal transport problem . Also , Yang & Uhler ( 2018 ) proposed to learn the unbalanced optimal transport plan in an adversarial way derived from a convex conjugate representation of divergences . In the W2GAN model proposed by Leygonie et al . ( 2019 ) , the discriminator ’ s objective is the 2-Wasserstein metric so that the generator is supposed to recover the L2 optimal transport map . All the above approaches need to introduce , and are limited to , specific loss functions to train the generators . Our proposed potential flow generator takes a different approach in that with only a slight augmentation to the original generator loss functions , our generator could be integrated into a wide range of generative models with various generator loss functions , including different versions of GANs and normalizing flow models . This simple modification makes our method easy to adopt on various tasks considering the existing rich literature and the future developments of generative models . In Section 2 we present a formal definition of optimal transport map and the motivation to apply L2 optimal transport regularity to generators . In Section 3 we give a detailed formulation of the potential flow generator and the augmentation to the original loss functions . Results are then provided in Section 4 . We include the discussion and conclusions in Section 5 . 2 GENERATIVE MODELS AND OPTIMAL TRANSPORT MAP . First , we introduce the concept of push forward , which will be used extensively in the paper . Definition 1 Given two Polish space X and Y , B ( X ) and B ( Y ) the Borel σ-algebra on X and Y , and P ( X ) , P ( Y ) the set of probability measures on B ( X ) and B ( Y ) . Let f : X→ Y be a Borel map , and µ ∈ P ( X ) . We define f # µ ∈ P ( Y ) , the push forward of µ through f , by f # µ ( A ) = µ ( f−1 ( A ) ) , ∀A ∈ B ( Y ) . ( 1 ) With the concept of push forward , we can formulate the goal of GANs and normalizing flow models as to train the generator G such that G # µ is equal to or at least close to ν in some sense , where µ and ν are the input and target distribution , respectively . Usually , the loss functions for training the generators are metrics of closeness that vary for different models . For example , in continuous normalizing flows ( Chen et al. , 2018 ) , such metric of closeness is DKL ( G # µ||ν ) or DKL ( ν||G # µ ) . In Wasserstein GANs ( WGANs ) ( Arjovsky et al. , 2017 ) , the metric of closeness is the Wasserstein-1 distance between G # µ and ν , which is estimated in a variational form with the discriminator neural network . As a result , the generator and discriminator neural networks are trained in an adversarial way : min G max D is 1-Lipschitz Ex∼νD ( x ) − Ez∼µD ( G ( z ) ) , ( 2 ) where D is the discriminator neural network and the Lipschitz constraint could be imposed via the gradient penalty ( Gulrajani et al. , 2017 ) , spectral normalization ( Miyato et al. , 2018 ) , etc . Now we introduce the concept of optimal transport map as follows : Definition 2 Given a cost function c : X×Y→ R , and µ ∈ P ( X ) , ν ∈ P ( Y ) , we let T be the set of all transport maps from µ to ν , i.e . T : = { f : f # µ = ν } . Monge ’ s optimal transport problem is to minimize the cost functional C ( f ) among T , where C ( f ) = Ex∼µc ( x , f ( x ) ) ( 3 ) and the minimizer f∗ ∈ T is called the optimal transport map . In this paper , we are concerned mostly with the case where X = Y = Rd with L2 transport cost , i.e. , the transport c ( x , y ) = ‖x− y‖2 . We assume that µ and ν are absolute continuous w.r.t . Lebesgue measure , i.e . they have probability density functions . In general , Monge ’ s problem could be ill-posed in that T could be empty set or there is no minimizer in T. Also , the optimal transport map could be non-unique . However , for the special case we consider , there exists a unique solution to Monge ’ s problem ( Brenier , 1991 ; Gangbo & McCann , 1996 ) . Informally speaking , with L2 transport cost the optimal transport map has the characteristic of “ proximity ” , i.e . the inputs tend to be mapped to nearby outputs . In image translation tasks , such “ proximity ” characteristic would be helpful if we could properly embed the images into Euclidean space such that our preferred input-output pairs are close to each other . A similar idea is also proposed in Yang & Uhler ( 2018 ) for unbalanced optimal transport . Apart from image translations , the L2 optimal transport problem is important in many other aspects . For example , it is closely related to gradient flow ( Ambrosio et al. , 2008 ) , Fokker-Planck equations ( Santambrogio , 2017 ) , flow in porous medium ( Otto , 1997 ) , etc . 3 POTENTIAL FLOW GENERATOR . 3.1 POTENTIAL FLOW FORMULATION OF OPTIMAL TRANSPORT MAP . We assume that µ and ν have probability density ρµ and ρν , respectively , and consider all smooth enough density fields ρ ( t , x ) and velocity fields v ( t , x ) , where t ∈ [ 0 , T ] , subject to the continuity equation as well as initial and final conditions ∂tρ+∇ · ( ρv ) = 0 , ρ ( 0 , · ) = ρµ , ρ ( T , · ) = ρν . ( 4 ) The above equation states that such velocity field will induce a transport map : we can construct an ordinary differential equation ( ODE ) du dt = v ( t , u ) , ( 5 ) and the map from the initial point to the final point gives the transport map from µ to ν . As is proposed by Benamou & Brenier ( 2000 ) , for the transport cost function c ( x , y ) = ‖x− y‖2 , the minimal transport cost is equal to the infimum of T ∫ Rd ∫ T 0 ρ ( t , x ) |v ( t , x ) |2dxdt ( 6 ) among all ( ρ , v ) satisfying equation ( 4 ) . The optimality condition is given by v ( t , x ) = ∇φ ( t , x ) , ∂tφ+ 1 2 |∇φ|2 = 0 . ( 7 ) In other words , the optimal velocity field is actually induced from a flow with time-dependent potential φ ( t , x ) . The use of this formulation is well-known in optimal transport community ( Trigila & Tabak , 2016 ; Peyré et al. , 2019 ) . In this paper we integrate this formulation in the deep generative models . Instead of solving Monge ’ s problem and find the exact L2 optimal transport map , which is unrealistic due to the limited families of neural network functions as well as the errors arising from training the neural networks , our goal is to regularize the generators in a wide range of generative models , so that the generator maps could approximate the L2 optimal transport map at least in low dimensional problems . The maps would also be endowed with the characteristics of “ proximity ” so that we can apply them to engineering problems . 3.2 POTENTIAL FLOW GENERATOR . The potential φ ( t , x ) is the key function to estimate , since the velocity field could be obtained by taking the gradient of the potential and consequently the transport map could be obtained from Equation 5 . There are two strategies to use neural networks to represent φ . One can take advantage of the fact that the time-dependent potential field φ is actually uniquely determined by its initial condition from Equation 7 , and use a neural network to represent the initial condition of φ , i.e . φ ( 0 , x ) , while approximating φ ( t , x ) via time discretization schemes . Alternatively , one can use a neural network to represent φ ( t , x ) directly and later apply the PDE regularity for φ ( t , x ) in Equation 7 . We name the generators defined in the above two approaches as discrete potential flow generator and continuous potential flow generator , respectively , and give a detailed formulation as follows .
This is a great paper using optimal transport theory for generative and implicit models. Instead of using general vector fields, the authors apply the potential vector fields in optimal transport theory to design neural networks. The mathematics is correct with convincing examples. This brings an important mathematical connection between fluid dynamics and GANs or implicit models.
SP:d6218fdd95b48f3e69bf12e96f938cecde8ff7ab
Fast is better than free: Revisiting adversarial training
1 INTRODUCTION . Although deep network architectures continue to be successful in a wide range of applications , the problem of learning robust deep networks remains an active area of research . In particular , safety and security focused applications are concerned about robustness to adversarial examples , data points which have been adversarially perturbed to fool a model ( Szegedy et al. , 2013 ) . The goal here is to learn a model which is not only accurate on the data , but also accurate on adversarially perturbed versions of the data . To this end , a number of defenses have been proposed to mitigate the problem and improve the robustness of deep networks , with some of the most reliable being certified defenses and adversarial training . However , both of these approaches come at a non-trivial , additional computational cost , often increasing training time by an order of magnitude over standard training . This has slowed progress in researching robustness in deep networks , due to the computational difficulty in scaling to much larger networks and the inability to rapidly train models when experimenting with new ideas . In response to this difficulty , there has been a recent surge in work ∗Equal contribution . that tries to to reduce the complexity of generating an adversarial example , which forms the bulk of the additional computation in adversarial training ( Zhang et al. , 2019 ; Shafahi et al. , 2019 ) . While these works present reasonable improvements to the runtime of adversarial training , they are still significantly slower than standard training , which has been greatly accelerated due to competitions for optimizing both the speed and cost of training ( Coleman et al. , 2017 ) . In this work , we argue that adversarial training , in fact , is not as hard as has been suggested by this past line of work . In particular , we revisit one of the the first proposed methods for adversarial training , using the Fast Gradient Sign Method ( FGSM ) to add adversarial examples to the training process ( Goodfellow et al. , 2014 ) . Although this approach has long been dismissed as ineffective , we show that by simply introducing random initialization points , FGSM-based training is as effective as projected gradient descent based training while being an order of magnitude more efficient . Moreover , FGSM adversarial training ( and to a lesser extent , other adversarial training methods ) can be drastically accelerated using standard techniques for efficient training of deep networks , including e.g . cyclic learning rates ( Smith & Topin , 2018 ) , mixed-precision training ( Micikevicius et al. , 2017 ) , and other similar techniques . The method has extremely few free parameters to tune , and can be easily adapted to most training procedures . We further identify a failure mode that we call “ catastrophic overfitting ” , which may have caused previous attempts at FGSM adversarial training to fail against PGD-based attacks . The end result is that , with these approaches , we are able to train ( empirically ) robust classifiers far faster than in previous work . Specifically , we train an ` ∞ robust CIFAR10 model to 45 % accuracy at = 8/255 ( the same level attained in previous work ) in 6 minutes ; previous papers reported times of 80 hours for PGD-based training ( Madry et al. , 2017 ) and 10 hours for the more recent “ free ” adversarial training method ( Shafahi et al. , 2019 ) . Similarly , we train an ` ∞ robust ImageNet classifier to 43 % top-1 accuracy at = 2/255 ( again matching previous results ) in 12 hours of training ( compared to 50 hours in the best reported previous work that we are aware of ( Shafahi et al. , 2019 ) ) . Both of these times roughly match the comparable time for quickly training a standard non-robust model to reasonable accuracy . We extensively evaluate these results against strong PGDbased attacks , and show that they obtain the same empirical performance as the slower , PGD-based training . Thus , we argue that despite the conventional wisdom , adversarially robust training is not actually more challenging than standard training of deep networks , and can be accomplished with the notoriously weak FGSM attack . 2 RELATED WORK . After the discovery of adversarial examples by Szegedy et al . ( 2013 ) , Goodfellow et al . ( 2014 ) proposed the Fast Gradient Sign Method ( FGSM ) to generate adversarial examples with a single gradient step . This method was used to perturb the inputs to the model before performing backpropagation as an early form of adversarial training . This attack was enhanced by adding a randomization step , which was referred to as R+FGSM ( Tramèr et al. , 2017 ) . Later , the Basic Iterative Method improved upon FGSM by taking multiple , smaller FGSM steps , ultimately rendering both FGSM-based adversarial training ineffective ( Kurakin et al. , 2016 ) . This iterative adversarial attack was further strengthened by adding multiple random restarts , and was also incorporated into the adversarial training procedure . These improvements form the basis of what is widely understood today as adversarial training against a projected gradient descent ( PGD ) adversary , and the resulting method is recognized as an effective approach to learning robust networks ( Madry et al. , 2017 ) . Since then , the PGD attack and its corresponding adversarial training defense have been augmented with various techniques , such as optimization tricks like momentum to improve the adversary ( Dong et al. , 2018 ) , combination with other heuristic defenses like matrix estimation ( Yang et al. , 2019 ) or logit pairing ( Mosbach et al. , 2018 ) , and generalization to multiple types of adversarial attacks ( Tramèr & Boneh , 2019 ; Maini et al. , 2019 ) . In addition to adversarial training , a number of other defenses against adversarial attacks have also been proposed . Adversarial defenses span a wide range of methods , such as preprocessing techniques ( Guo et al. , 2017 ; Buckman et al. , 2018 ; Song et al. , 2017 ) , detection algorithms ( Metzen et al. , 2017 ; Feinman et al. , 2017 ; Carlini & Wagner , 2017a ) , verification and provable defenses ( Katz et al. , 2017 ; Sinha et al. , 2017 ; Wong & Kolter , 2017 ; Raghunathan et al. , 2018 ) , and various theoretically motivated heuristics ( Xiao et al. , 2018 ; Croce et al. , 2018 ) . While certified defenses have been scaled to reasonably sized networks ( Wong et al. , 2018 ; Mirman et al. , 2018 ; Gowal et al. , 2018 ; Cohen et al. , 2019 ; Salman et al. , 2019 ) , the guarantees don ’ t match the empirical robustness obtained through adversarial training . With the proposal of many new defense mechanisms , of great concern in the community is the use of strong attacks for evaluating robustness : weak attacks can give a misleading sense of security , and the history of adversarial examples is littered with adversarial defenses ( Papernot et al. , 2016 ; Lu et al. , 2017 ; Kannan et al. , 2018 ; Tao et al. , 2018 ) which were ultimately defeated by stronger attacks ( Carlini & Wagner , 2016 ; 2017b ; Athalye et al. , 2017 ; Engstrom et al. , 2018 ; Carlini , 2019 ) . This highlights the difficulty of evaluating adversarial robustness , as pointed out by other work which began to defeat proposed defenses en masse ( Uesato et al. , 2018 ; Athalye et al. , 2018 ) . Since then , several best practices have been proposed to mitigate this problem ( Carlini et al. , 2019 ) . Despite the eventual defeat of other adversarial defenses , adversarial training with a PGD adversary remains empirically robust to this day . However , running a strong PGD adversary within an inner loop of training is expensive , and some earlier work in this topic found that taking larger but fewer steps did not always significantly change the resulting robustness of a network ( Wang , 2018 ) . To combat the increased computational overhead of the PGD defense , some recent work has looked at regressing the k-step PGD adversary to a variation of its single-step FGSM predecessor called “ free ” adversarial training , which can be computed with little overhead over standard training by using a single backwards pass to simultaneously update both the model weights and also the input perturbation ( Shafahi et al. , 2019 ) . Finally , when performing a multi-step PGD adversary , it is possible to cut out redundant calculations during backpropagation when computing adversarial examples for additional speedup ( Zhang et al. , 2019 ) . Although these improvements are certainly faster than the standard adversarial training procedure , they are not much faster than traditional training methods , and can still take hours to days to compute . On the other hand , top performing training methods from the DAWNBench competition ( Coleman et al. , 2017 ) are able to train CIFAR10 and ImageNet architectures to standard benchmark metrics in mere minutes and hours respectively , using only a modest amount of computational resources . Although some of the techniques can be quite problem specific for achieving bleedingedge performance , more general techniques such as cyclic learning rates ( Smith & Topin , 2018 ) and half-precision computations ( Micikevicius et al. , 2017 ) have been quite successful in the top ranking submissions , and can also be useful for adversarial training . 3 ADVERSARIAL TRAINING OVERVIEW . Adversarial training is a method for learning networks which are robust to adversarial attacks . Given a network fθ parameterized by θ , a dataset ( xi , yi ) , a loss function ` and a threat model ∆ , the learning problem is typically cast as the following robust optimization problem , min θ ∑ i max δ∈∆ ` ( fθ ( xi + δ ) , yi ) . ( 1 ) A typical choice for a threat model is to take ∆ = { δ : ‖δ‖∞ ≤ } for some > 0 . This is the ` ∞ threat model used by Madry et al . ( 2017 ) and is the setting we study in this paper . The procedure for adversarial training is to use some adversarial attack to approximate the inner maximization over ∆ , followed by some variation of gradient descent on the model parameters θ . For example , one of the earliest versions of adversarial training used the Fast Gradient Sign Method to approximate the inner maximization . This could be seen as a relatively inaccurate approximation of the inner maximization for ` ∞ perturbations , and has the following closed form ( Goodfellow et al. , 2014 ) : δ ? = · sign ( ∇x ` ( f ( x ) , y ) ) . ( 2 ) A better approximation of the inner maximization is to take multiple , smaller FGSM steps of size α instead . When the iterate leaves the threat model , it is projected back to the set ∆ ( for ` ∞ perturbations . This is equivalent to clipping δ to the interval [ − , ] ) . Since this is only a local approximation of a non-convex function , multiple random restarts within the threat model ∆ typically improve the approximation of the inner maximization even further . A combination of all these techniques is Algorithm 1 PGD adversarial training for T epochs , given some radius , adversarial step size α and N PGD steps and a dataset of size M for a network fθ for t = 1 . . . T do for i = 1 . . .M do // Perform PGD adversarial attack δ = 0 // or randomly initialized for j = 1 . . . N do δ = δ + α · sign ( ∇δ ` ( fθ ( xi + δ ) , yi ) ) δ = max ( min ( δ , ) , − ) end for θ = θ −∇θ ` ( fθ ( xi + δ ) , yi ) // Update model weights with some optimizer , e.g . SGD end for end for Algorithm 2 “ Free ” adversarial training for T epochs , given some radius , N minibatch replays , and a dataset of size M for a network fθ δ = 0 // Iterate T/N times to account for minibatch replays and run for T total epochs for t = 1 . . . T/N do for i = 1 . . .M do // Perform simultaneous FGSM adversarial attack and model weight updates T times for j = 1 . . . N do // Compute gradients for perturbation and model weights simultaneously ∇δ , ∇θ = ∇ ` ( fθ ( xi + δ ) , yi ) δ = δ + · sign ( ∇δ ) δ = max ( min ( δ , ) , − ) θ = θ −∇θ // Update model weights with some optimizer , e.g . SGD end for end for end for known as the PGD adversary ( Madry et al. , 2017 ) , and its usage in adversarial training is summarized in Algorithm 1 . Note that the number of gradient computations here is proportional to O ( MN ) in a single epoch , where M is the size of the dataset and N is the number of steps taken by the PGD adversary . This is N times greater than standard training ( which has O ( M ) gradient computations per epoch ) , and so adversarial training is typically N times slower than standard training .
The main claim of this paper is that a simple strategy of randomization plus fast gradient sign method (FGSM) adversarial training yields robust neural networks. This is somewhat surprising because previous works indicate that FGSM is not a powerful attack compared to iterative versions of it like projected gradient descent (PGD), and it has not been shown before that models trained on FGSM can defend against PGD attacks. Judging from the results in the paper alone, there are some issues with the experiment results that could be due to bugs or other unexplained experiment settings.
SP:927a1f8069c0347c4d0a8b1b947533f1c508ba42
Fast is better than free: Revisiting adversarial training
1 INTRODUCTION . Although deep network architectures continue to be successful in a wide range of applications , the problem of learning robust deep networks remains an active area of research . In particular , safety and security focused applications are concerned about robustness to adversarial examples , data points which have been adversarially perturbed to fool a model ( Szegedy et al. , 2013 ) . The goal here is to learn a model which is not only accurate on the data , but also accurate on adversarially perturbed versions of the data . To this end , a number of defenses have been proposed to mitigate the problem and improve the robustness of deep networks , with some of the most reliable being certified defenses and adversarial training . However , both of these approaches come at a non-trivial , additional computational cost , often increasing training time by an order of magnitude over standard training . This has slowed progress in researching robustness in deep networks , due to the computational difficulty in scaling to much larger networks and the inability to rapidly train models when experimenting with new ideas . In response to this difficulty , there has been a recent surge in work ∗Equal contribution . that tries to to reduce the complexity of generating an adversarial example , which forms the bulk of the additional computation in adversarial training ( Zhang et al. , 2019 ; Shafahi et al. , 2019 ) . While these works present reasonable improvements to the runtime of adversarial training , they are still significantly slower than standard training , which has been greatly accelerated due to competitions for optimizing both the speed and cost of training ( Coleman et al. , 2017 ) . In this work , we argue that adversarial training , in fact , is not as hard as has been suggested by this past line of work . In particular , we revisit one of the the first proposed methods for adversarial training , using the Fast Gradient Sign Method ( FGSM ) to add adversarial examples to the training process ( Goodfellow et al. , 2014 ) . Although this approach has long been dismissed as ineffective , we show that by simply introducing random initialization points , FGSM-based training is as effective as projected gradient descent based training while being an order of magnitude more efficient . Moreover , FGSM adversarial training ( and to a lesser extent , other adversarial training methods ) can be drastically accelerated using standard techniques for efficient training of deep networks , including e.g . cyclic learning rates ( Smith & Topin , 2018 ) , mixed-precision training ( Micikevicius et al. , 2017 ) , and other similar techniques . The method has extremely few free parameters to tune , and can be easily adapted to most training procedures . We further identify a failure mode that we call “ catastrophic overfitting ” , which may have caused previous attempts at FGSM adversarial training to fail against PGD-based attacks . The end result is that , with these approaches , we are able to train ( empirically ) robust classifiers far faster than in previous work . Specifically , we train an ` ∞ robust CIFAR10 model to 45 % accuracy at = 8/255 ( the same level attained in previous work ) in 6 minutes ; previous papers reported times of 80 hours for PGD-based training ( Madry et al. , 2017 ) and 10 hours for the more recent “ free ” adversarial training method ( Shafahi et al. , 2019 ) . Similarly , we train an ` ∞ robust ImageNet classifier to 43 % top-1 accuracy at = 2/255 ( again matching previous results ) in 12 hours of training ( compared to 50 hours in the best reported previous work that we are aware of ( Shafahi et al. , 2019 ) ) . Both of these times roughly match the comparable time for quickly training a standard non-robust model to reasonable accuracy . We extensively evaluate these results against strong PGDbased attacks , and show that they obtain the same empirical performance as the slower , PGD-based training . Thus , we argue that despite the conventional wisdom , adversarially robust training is not actually more challenging than standard training of deep networks , and can be accomplished with the notoriously weak FGSM attack . 2 RELATED WORK . After the discovery of adversarial examples by Szegedy et al . ( 2013 ) , Goodfellow et al . ( 2014 ) proposed the Fast Gradient Sign Method ( FGSM ) to generate adversarial examples with a single gradient step . This method was used to perturb the inputs to the model before performing backpropagation as an early form of adversarial training . This attack was enhanced by adding a randomization step , which was referred to as R+FGSM ( Tramèr et al. , 2017 ) . Later , the Basic Iterative Method improved upon FGSM by taking multiple , smaller FGSM steps , ultimately rendering both FGSM-based adversarial training ineffective ( Kurakin et al. , 2016 ) . This iterative adversarial attack was further strengthened by adding multiple random restarts , and was also incorporated into the adversarial training procedure . These improvements form the basis of what is widely understood today as adversarial training against a projected gradient descent ( PGD ) adversary , and the resulting method is recognized as an effective approach to learning robust networks ( Madry et al. , 2017 ) . Since then , the PGD attack and its corresponding adversarial training defense have been augmented with various techniques , such as optimization tricks like momentum to improve the adversary ( Dong et al. , 2018 ) , combination with other heuristic defenses like matrix estimation ( Yang et al. , 2019 ) or logit pairing ( Mosbach et al. , 2018 ) , and generalization to multiple types of adversarial attacks ( Tramèr & Boneh , 2019 ; Maini et al. , 2019 ) . In addition to adversarial training , a number of other defenses against adversarial attacks have also been proposed . Adversarial defenses span a wide range of methods , such as preprocessing techniques ( Guo et al. , 2017 ; Buckman et al. , 2018 ; Song et al. , 2017 ) , detection algorithms ( Metzen et al. , 2017 ; Feinman et al. , 2017 ; Carlini & Wagner , 2017a ) , verification and provable defenses ( Katz et al. , 2017 ; Sinha et al. , 2017 ; Wong & Kolter , 2017 ; Raghunathan et al. , 2018 ) , and various theoretically motivated heuristics ( Xiao et al. , 2018 ; Croce et al. , 2018 ) . While certified defenses have been scaled to reasonably sized networks ( Wong et al. , 2018 ; Mirman et al. , 2018 ; Gowal et al. , 2018 ; Cohen et al. , 2019 ; Salman et al. , 2019 ) , the guarantees don ’ t match the empirical robustness obtained through adversarial training . With the proposal of many new defense mechanisms , of great concern in the community is the use of strong attacks for evaluating robustness : weak attacks can give a misleading sense of security , and the history of adversarial examples is littered with adversarial defenses ( Papernot et al. , 2016 ; Lu et al. , 2017 ; Kannan et al. , 2018 ; Tao et al. , 2018 ) which were ultimately defeated by stronger attacks ( Carlini & Wagner , 2016 ; 2017b ; Athalye et al. , 2017 ; Engstrom et al. , 2018 ; Carlini , 2019 ) . This highlights the difficulty of evaluating adversarial robustness , as pointed out by other work which began to defeat proposed defenses en masse ( Uesato et al. , 2018 ; Athalye et al. , 2018 ) . Since then , several best practices have been proposed to mitigate this problem ( Carlini et al. , 2019 ) . Despite the eventual defeat of other adversarial defenses , adversarial training with a PGD adversary remains empirically robust to this day . However , running a strong PGD adversary within an inner loop of training is expensive , and some earlier work in this topic found that taking larger but fewer steps did not always significantly change the resulting robustness of a network ( Wang , 2018 ) . To combat the increased computational overhead of the PGD defense , some recent work has looked at regressing the k-step PGD adversary to a variation of its single-step FGSM predecessor called “ free ” adversarial training , which can be computed with little overhead over standard training by using a single backwards pass to simultaneously update both the model weights and also the input perturbation ( Shafahi et al. , 2019 ) . Finally , when performing a multi-step PGD adversary , it is possible to cut out redundant calculations during backpropagation when computing adversarial examples for additional speedup ( Zhang et al. , 2019 ) . Although these improvements are certainly faster than the standard adversarial training procedure , they are not much faster than traditional training methods , and can still take hours to days to compute . On the other hand , top performing training methods from the DAWNBench competition ( Coleman et al. , 2017 ) are able to train CIFAR10 and ImageNet architectures to standard benchmark metrics in mere minutes and hours respectively , using only a modest amount of computational resources . Although some of the techniques can be quite problem specific for achieving bleedingedge performance , more general techniques such as cyclic learning rates ( Smith & Topin , 2018 ) and half-precision computations ( Micikevicius et al. , 2017 ) have been quite successful in the top ranking submissions , and can also be useful for adversarial training . 3 ADVERSARIAL TRAINING OVERVIEW . Adversarial training is a method for learning networks which are robust to adversarial attacks . Given a network fθ parameterized by θ , a dataset ( xi , yi ) , a loss function ` and a threat model ∆ , the learning problem is typically cast as the following robust optimization problem , min θ ∑ i max δ∈∆ ` ( fθ ( xi + δ ) , yi ) . ( 1 ) A typical choice for a threat model is to take ∆ = { δ : ‖δ‖∞ ≤ } for some > 0 . This is the ` ∞ threat model used by Madry et al . ( 2017 ) and is the setting we study in this paper . The procedure for adversarial training is to use some adversarial attack to approximate the inner maximization over ∆ , followed by some variation of gradient descent on the model parameters θ . For example , one of the earliest versions of adversarial training used the Fast Gradient Sign Method to approximate the inner maximization . This could be seen as a relatively inaccurate approximation of the inner maximization for ` ∞ perturbations , and has the following closed form ( Goodfellow et al. , 2014 ) : δ ? = · sign ( ∇x ` ( f ( x ) , y ) ) . ( 2 ) A better approximation of the inner maximization is to take multiple , smaller FGSM steps of size α instead . When the iterate leaves the threat model , it is projected back to the set ∆ ( for ` ∞ perturbations . This is equivalent to clipping δ to the interval [ − , ] ) . Since this is only a local approximation of a non-convex function , multiple random restarts within the threat model ∆ typically improve the approximation of the inner maximization even further . A combination of all these techniques is Algorithm 1 PGD adversarial training for T epochs , given some radius , adversarial step size α and N PGD steps and a dataset of size M for a network fθ for t = 1 . . . T do for i = 1 . . .M do // Perform PGD adversarial attack δ = 0 // or randomly initialized for j = 1 . . . N do δ = δ + α · sign ( ∇δ ` ( fθ ( xi + δ ) , yi ) ) δ = max ( min ( δ , ) , − ) end for θ = θ −∇θ ` ( fθ ( xi + δ ) , yi ) // Update model weights with some optimizer , e.g . SGD end for end for Algorithm 2 “ Free ” adversarial training for T epochs , given some radius , N minibatch replays , and a dataset of size M for a network fθ δ = 0 // Iterate T/N times to account for minibatch replays and run for T total epochs for t = 1 . . . T/N do for i = 1 . . .M do // Perform simultaneous FGSM adversarial attack and model weight updates T times for j = 1 . . . N do // Compute gradients for perturbation and model weights simultaneously ∇δ , ∇θ = ∇ ` ( fθ ( xi + δ ) , yi ) δ = δ + · sign ( ∇δ ) δ = max ( min ( δ , ) , − ) θ = θ −∇θ // Update model weights with some optimizer , e.g . SGD end for end for end for known as the PGD adversary ( Madry et al. , 2017 ) , and its usage in adversarial training is summarized in Algorithm 1 . Note that the number of gradient computations here is proportional to O ( MN ) in a single epoch , where M is the size of the dataset and N is the number of steps taken by the PGD adversary . This is N times greater than standard training ( which has O ( M ) gradient computations per epoch ) , and so adversarial training is typically N times slower than standard training .
The authors claimed a classic adversarial training method, FGSM with random start, can indeed train a model that is robust to strong PGD attacks. Moreover, when it is combined with some fast training methods, such as cyclic learning rate scheduling and mixed precision, the adversarial training time can be significantly decreased. The experiment verifies the authors' claim convincingly.
SP:927a1f8069c0347c4d0a8b1b947533f1c508ba42
BETANAS: Balanced Training and selective drop for Neural Architecture Search
Automatic neural architecture search techniques are becoming increasingly important in machine learning area . Especially , weight sharing methods have shown remarkable potentials on searching good network architectures with few computational resources . However , existing weight sharing methods mainly suffer limitations on searching strategies : these methods either uniformly train all network paths to convergence which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates , or selectively train branches with different frequency which leads to unfair evaluation and comparison among paths . To address these issues , we propose a novel neural architecture search method with balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths . The experimental results show that our proposed method can achieve a leading performance of 79.0 % on ImageNet under mobile settings , which outperforms other state-of-the-art methods in both accuracy and efficiency . 1 INTRODUCTION . The fast developing of artificial intelligence has raised the demand to design powerful neural networks . Automatic neural architecture search methods ( Zoph & Le , 2016 ; Zhong et al. , 2018 ; Pham et al. , 2018 ) have shown great effectiveness in recent years . Among them , methods based on weight sharing ( Pham et al. , 2018 ; Liu et al. , 2018 ; Cai et al. , 2018 ; Guo et al. , 2019 ) show great potentials on searching architectures with limited computational resources . These methods are divided into 2 categories : alternatively training ones ( Pham et al. , 2018 ; Liu et al. , 2018 ; Cai et al. , 2018 ) and oneshot based ones ( Brock et al. , 2017 ; Bender et al. , 2018 ) . As shown in Fig 2 , both categories construct a super-net to reduce computational complexity . Methods in the first category parameterize the structure of architectures with trainable parameters and alternatively optimize architecture parameters and network parameters . In contrast , one-shot based methods train network parameters to convergence beforehand and then select architectures with fixed parameters . Both categories achieve better performance with significant efficiency improvement than direct search . Despite of these remarkable achievements , methods in both categories are limited in their searching strategies . In alternatively training methods , network parameters in different branches are applied with different training frequency or updating strength according to searching strategies , which makes different sub-network convergent to different extent . Therefore the performance of sub-networks extracted from super-net can not reflect the actual ability of that trained independently without weight sharing . Moreover , some paths might achieve better performance at early steps while perform not well when actually trained to convergence . In alternatively training methods , these operators will get more training opportunity than other candidates at early steps due to their well performance . Sufficient training in turn makes them perform better and further obtain more training opportunities , forming the Matthew Effect . In contrast , other candidates will be always trained insufficiently and can never show their real ability . Differently , One-shot methods train paths with roughly equal frequency or strength to avoid the Matthew Effect between parameters training and architectures selection . However , training all paths to convergence costs multiple time . Besides , the operators are shared by plenty of sub-networks , making the backward gradients from different training steps heavily conflict . To address this issue , we follows the balanced training strategy to avoid the Matthew Effect , and propose a drop paths approach to reduce mutual interference among paths , as shown in Fig 1 . Experiments are conducted on ImageNet classification task . The searching process costs less computational resources than competing methods and our searched architecture achieves an outstanding accuracy of 79.0 % , which outperforms state-of-the-art methods under mobile settings . The proposed method is compared with other competing algorithms with visualized analysis , which demonstrates its the effectiveness . Moreover , we also conduct experiments to analysis the mutual interference in weight sharing and demonstrate the rationality of the gradually drop paths strategy . 2 RELATED WORK . Automatic neural architecture search techniques has attracted much attention in recent years . NASNet ( Zoph & Le , 2016 ; Zoph et al. , 2018 ) proposes a framework to search for architectures with reinforcement learning , and evaluates each of the searched architectures by training it from scratch . BlockQNN ( Zhong et al. , 2018 ; Guo et al. , 2018 ; Zhong et al. , 2018 ) expands the search space to the entire DAG and selects nets with Q-learning . Network pruning methods ( Li et al. , 2019 ; Noy et al. , 2019 ) prune redundant architectures to reduces search spaces . Considering the searching policy , most of these methods depend on reinforcement learning , evolutionary algorithms and gradient based algorithms ( Bello et al. , 2017 ; Liu et al. , 2018 ; Cai et al. , 2018 ) . The most related works to our method are the ones based on weight sharing proposed by ( Pham et al. , 2018 ) , from which two streams are derived : Alternatively training methods ( Cai et al. , 2018 ; Liu et al. , 2018 ) and one-shot methods ( Brock et al. , 2017 ; Bender et al. , 2018 ; Guo et al. , 2019 ) . Methods in the first stream alternatively train architecture parameters and network parameters . During search process , operators in the super-net are selectively trained and evaluated with a certain policy and the policy is updated dynamically according to the evaluations . Among them , ENAS ( Pham et al. , 2018 ) introduces RL to select paths . DARTS ( Liu et al. , 2018 ) improves the accuracy and efficiency of paths selection policy by considering the importance of each path as trainable parameters . ProxyLessNAS ( Cai et al. , 2018 ) proposes to directly search on target datasets with single paths and makes the latency term differentiable . Single-Path NAS ( Stamoulis et al. , 2019 ) directly shares weights via super-kernel . By contrast , one-shot based methods Guo et al . ( 2019 ) ; Brock et al . ( 2017 ) ; Bender et al . ( 2018 ) firstly train each path in the super-net with equal frequency to convergence , then all the architectures are selected from the super-net and evaluated with fixed parameters . Darts+ Liang et al . ( 2019 ) improves Darts with early stop . Progressive-NAS Chen et al . ( 2019 ) gradually increases blocks while searching . HM-NAS Yan et al . ( 2019 ) uses mask to select paths . ( Sciuto et al. , 2019 ) Our work benefits from the advantages of both categories : On one hand , the importance factors are evaluated with a gradient based approach but it has no influence on training shared parameters . On the other hand , the shared parameters are updated uniformly as those in one-shot . 3 APPROACH . ProxyLessNAS ( Cai et al. , 2018 ) and Single Path One-shot ( Guo et al. , 2019 ) proposed to train the super-net with only one path on in each step to make the performance trained with weight sharing more close to that trained alone . Both of them enhance the performance of weight sharing to a higher level . ProxyLessNAS updates architecture parameters and network parameters alternatively . Paths are selectively trained according to their performance , and paths with higher performance get more training opportunities . Single Path One-shot first proposed to balanced train all paths until convergence and then use evolution algorithms to select network structures . The equivalent functions of the choice blocks in two methods are described as mPL and mOS in Eq 1 : mPL ( x ) = o1 ( x ) with probability p1 , . . . , oN ( x ) with probability p2 . , mOS ( x ) = o1 ( x ) with probability 1/N , ... , oN ( x ) with probability 1/N . ( 1 ) Our method follows the alternatively training ones , in which architecture parameters and network parameters are optimized alternatively in each step . To give a better solution to the problems discussed above , we train each candidate path with equal frequency to avoid the `` Matthew effect '' and gradually dropped least promising paths during searching process to reduce conflicts among candidate paths . 3.1 PIPELINE . The pipeline of our method is shown in Algorithm 1 . First of all , a super-net is constructed with L choice blocks O1 , O2 , . . . , OL , as shown in Fig 1 . Each choice block Ol is composed of M candidate paths and corresponding operators ol,1 , ol,2 , . . . , ol , M . The importance factor of ol , m is denoted as αl , m and αl , m are converted to probability factor pl , m using softmax normalization . Secondly , the parameters of ol , m and their importance factors αl , m are trained alternatively in Phase 1 and Phase 2 . When training αl , m , latency term is introduced to balance accuracy and complexity . Paths with αl , m lower than thα will be dropped and no more trained . Algorithm 1 Searching Process Initialization : Denote Ol as the choice block for layer l with M candidate operators { ol,1 , ol,2 , . . . , ol , M } . αl,1 , αl,2 , . . . , αl , M are the corresponding importance factors of candidate operators and initialized with identical value . Smax denotes the max number of optimization steps . 1 : while t < Smax do 2 : Phase1 : Randomly select ol , ml ∈ Ol for block Ol with uniform probability , then fix all αl , m and train the super-net constructed with the selected o1 , m1 , o2 , m2 , . . . , oL , mL for some steps . 3 : Phase2 : Fix all the parameters in ol , m and measure their flops/latency . Then evaluate each operator ol , m with both cross-entropy loss and flops/latency loss . Update αl , m according to the losses feedback . 4 : for ol , m ∈ Ol do 5 : if αl , m < thα then Ol = Ol \ { ol , m } t = t+ 1 6 : for ol , m ∈ Ol do ml = argmaxm ( αl , m ) 7 : return o1 , m1 , o2 , m2 , . . . , oL , mL Finally , after alternatively training ol , m and αl , m for given steps , paths with the highest importance factor in each choice block are selected to compose a neural architecture as the searching result . 3.2 BALANCED TRAINING . Alternatively training methods focus computational resources on most promising candidates to reduce the interference from redundant branches . However , some operators that perform well at early phases might not perform as well when they are trained to convergence . These operators might get much more training opportunities than others due to their better performance at the beginning steps . Higher training frequency in turn maintains their dominant position in the following searching process regardless their actual ability , forming the Matthew Effect . In contrast , the operators with high performance when convergent might never get opportunities to trained sufficiently . Therefore , the accuracy of alternatively training methods might degrade due to inaccurate evaluations and comparison among candidate operators . Our method follows the alternatively optimizing strategy . Differently , we only adopt gradient to architectures optimization while randomly sample paths with uniformly probability when training network parameters to avoid the Matthew Effect . More specifically , when updating network parameters of ol , m in Phase 1 and architecture parameters in Phase 2 , the equivalent output of choice block Ol is given as Opathl in Eq 2 and O arch l in Eq 3 : Opathl ( x ) = ol , m ( x ) { with probability 1M ′ , if αl , m > thα with probability 0 , else . ( 2 ) Oarchl ( x ) = ol , m ( x ) { with probability pl , m , if αl , m > thα with probability 0 , else . ( 3 ) Where M ′ is the number of remaining operators in Ol currently , and pl , m is the softmax form of αl , m . The αl , m of dropped paths are not taken into account when calculating pl , m . The parameters in both phases are optimized with Stochastic Gradient Descent ( SGD ) . In Phase 1 , the outputs in Eq 2 only depends on network parameters , thus gradients can be calculated with the Chain Rule . In Phase 2 , the outputs not only depend on the fixed network parameters but also architecture parameters αl , m . Note that Oarchl ( x ) is not differentiable with respect to αl , m , thus we introduce the manually defined derivatives proposed by Cai et al . ( 2018 ) to deal with this issue : Eq 3 can be expressed as Oarchl ( x ) = ∑ gl , m · ol , m ( x ) , where gl,0 , gl,0 , . . . , gl , M ′ is a one-hot vector with only one element equals to 1 while others equal to 0 . Assuming ∂gl , j/∂pl , j ≈ 1 according to Cai et al . ( 2018 ) , the derivatives of Oarchl ( x ) w.r.t . αl , m are defined as : ∂Oarchl ( x ) ∂αl , m = M ′∑ j=1 ∂Oarchl ( x ) ∂gl , j ∂gl , j ∂pl , j ∂pl , j ∂αl , m ≈ M ′∑ j=1 ∂Oarchl ( x ) ∂gl , j ∂pl , j ∂αl , m = M ′∑ j=1 ∂Oarchl ( x ) ∂gl , j pj ( δmj − pm ) ( 4 ) From now on , Opathl ( x ) and O arch l ( x ) are differentiable w.r.t . network parameters and architecture parameters respectively . Both parameters can be optimized alternatively in Phase 1 and Phase 2 .
This paper introduces a better searching strategy in the context of automatic neural architecture search (NAS). Especially, they focus on improving the search strategy for previously proposed computationally effective weight sharing methods for NAS. Current search strategies for the weight sharing NAS methods either focus on uniformly training all the network paths or selectively train different network paths with different frequency, where both have their own issues like wasting resources for unpromising candidates and unfair comparison among network paths. To this end, this paper proposes a balanced training strategy with “selective drop mechanism”. Further, they validate their approach by showing leading performance on ImageNet under mobile settings.
SP:eb8b8a0bae8d3f488caf70b6103ed3fd9631cb9f
BETANAS: Balanced Training and selective drop for Neural Architecture Search
Automatic neural architecture search techniques are becoming increasingly important in machine learning area . Especially , weight sharing methods have shown remarkable potentials on searching good network architectures with few computational resources . However , existing weight sharing methods mainly suffer limitations on searching strategies : these methods either uniformly train all network paths to convergence which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates , or selectively train branches with different frequency which leads to unfair evaluation and comparison among paths . To address these issues , we propose a novel neural architecture search method with balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths . The experimental results show that our proposed method can achieve a leading performance of 79.0 % on ImageNet under mobile settings , which outperforms other state-of-the-art methods in both accuracy and efficiency . 1 INTRODUCTION . The fast developing of artificial intelligence has raised the demand to design powerful neural networks . Automatic neural architecture search methods ( Zoph & Le , 2016 ; Zhong et al. , 2018 ; Pham et al. , 2018 ) have shown great effectiveness in recent years . Among them , methods based on weight sharing ( Pham et al. , 2018 ; Liu et al. , 2018 ; Cai et al. , 2018 ; Guo et al. , 2019 ) show great potentials on searching architectures with limited computational resources . These methods are divided into 2 categories : alternatively training ones ( Pham et al. , 2018 ; Liu et al. , 2018 ; Cai et al. , 2018 ) and oneshot based ones ( Brock et al. , 2017 ; Bender et al. , 2018 ) . As shown in Fig 2 , both categories construct a super-net to reduce computational complexity . Methods in the first category parameterize the structure of architectures with trainable parameters and alternatively optimize architecture parameters and network parameters . In contrast , one-shot based methods train network parameters to convergence beforehand and then select architectures with fixed parameters . Both categories achieve better performance with significant efficiency improvement than direct search . Despite of these remarkable achievements , methods in both categories are limited in their searching strategies . In alternatively training methods , network parameters in different branches are applied with different training frequency or updating strength according to searching strategies , which makes different sub-network convergent to different extent . Therefore the performance of sub-networks extracted from super-net can not reflect the actual ability of that trained independently without weight sharing . Moreover , some paths might achieve better performance at early steps while perform not well when actually trained to convergence . In alternatively training methods , these operators will get more training opportunity than other candidates at early steps due to their well performance . Sufficient training in turn makes them perform better and further obtain more training opportunities , forming the Matthew Effect . In contrast , other candidates will be always trained insufficiently and can never show their real ability . Differently , One-shot methods train paths with roughly equal frequency or strength to avoid the Matthew Effect between parameters training and architectures selection . However , training all paths to convergence costs multiple time . Besides , the operators are shared by plenty of sub-networks , making the backward gradients from different training steps heavily conflict . To address this issue , we follows the balanced training strategy to avoid the Matthew Effect , and propose a drop paths approach to reduce mutual interference among paths , as shown in Fig 1 . Experiments are conducted on ImageNet classification task . The searching process costs less computational resources than competing methods and our searched architecture achieves an outstanding accuracy of 79.0 % , which outperforms state-of-the-art methods under mobile settings . The proposed method is compared with other competing algorithms with visualized analysis , which demonstrates its the effectiveness . Moreover , we also conduct experiments to analysis the mutual interference in weight sharing and demonstrate the rationality of the gradually drop paths strategy . 2 RELATED WORK . Automatic neural architecture search techniques has attracted much attention in recent years . NASNet ( Zoph & Le , 2016 ; Zoph et al. , 2018 ) proposes a framework to search for architectures with reinforcement learning , and evaluates each of the searched architectures by training it from scratch . BlockQNN ( Zhong et al. , 2018 ; Guo et al. , 2018 ; Zhong et al. , 2018 ) expands the search space to the entire DAG and selects nets with Q-learning . Network pruning methods ( Li et al. , 2019 ; Noy et al. , 2019 ) prune redundant architectures to reduces search spaces . Considering the searching policy , most of these methods depend on reinforcement learning , evolutionary algorithms and gradient based algorithms ( Bello et al. , 2017 ; Liu et al. , 2018 ; Cai et al. , 2018 ) . The most related works to our method are the ones based on weight sharing proposed by ( Pham et al. , 2018 ) , from which two streams are derived : Alternatively training methods ( Cai et al. , 2018 ; Liu et al. , 2018 ) and one-shot methods ( Brock et al. , 2017 ; Bender et al. , 2018 ; Guo et al. , 2019 ) . Methods in the first stream alternatively train architecture parameters and network parameters . During search process , operators in the super-net are selectively trained and evaluated with a certain policy and the policy is updated dynamically according to the evaluations . Among them , ENAS ( Pham et al. , 2018 ) introduces RL to select paths . DARTS ( Liu et al. , 2018 ) improves the accuracy and efficiency of paths selection policy by considering the importance of each path as trainable parameters . ProxyLessNAS ( Cai et al. , 2018 ) proposes to directly search on target datasets with single paths and makes the latency term differentiable . Single-Path NAS ( Stamoulis et al. , 2019 ) directly shares weights via super-kernel . By contrast , one-shot based methods Guo et al . ( 2019 ) ; Brock et al . ( 2017 ) ; Bender et al . ( 2018 ) firstly train each path in the super-net with equal frequency to convergence , then all the architectures are selected from the super-net and evaluated with fixed parameters . Darts+ Liang et al . ( 2019 ) improves Darts with early stop . Progressive-NAS Chen et al . ( 2019 ) gradually increases blocks while searching . HM-NAS Yan et al . ( 2019 ) uses mask to select paths . ( Sciuto et al. , 2019 ) Our work benefits from the advantages of both categories : On one hand , the importance factors are evaluated with a gradient based approach but it has no influence on training shared parameters . On the other hand , the shared parameters are updated uniformly as those in one-shot . 3 APPROACH . ProxyLessNAS ( Cai et al. , 2018 ) and Single Path One-shot ( Guo et al. , 2019 ) proposed to train the super-net with only one path on in each step to make the performance trained with weight sharing more close to that trained alone . Both of them enhance the performance of weight sharing to a higher level . ProxyLessNAS updates architecture parameters and network parameters alternatively . Paths are selectively trained according to their performance , and paths with higher performance get more training opportunities . Single Path One-shot first proposed to balanced train all paths until convergence and then use evolution algorithms to select network structures . The equivalent functions of the choice blocks in two methods are described as mPL and mOS in Eq 1 : mPL ( x ) = o1 ( x ) with probability p1 , . . . , oN ( x ) with probability p2 . , mOS ( x ) = o1 ( x ) with probability 1/N , ... , oN ( x ) with probability 1/N . ( 1 ) Our method follows the alternatively training ones , in which architecture parameters and network parameters are optimized alternatively in each step . To give a better solution to the problems discussed above , we train each candidate path with equal frequency to avoid the `` Matthew effect '' and gradually dropped least promising paths during searching process to reduce conflicts among candidate paths . 3.1 PIPELINE . The pipeline of our method is shown in Algorithm 1 . First of all , a super-net is constructed with L choice blocks O1 , O2 , . . . , OL , as shown in Fig 1 . Each choice block Ol is composed of M candidate paths and corresponding operators ol,1 , ol,2 , . . . , ol , M . The importance factor of ol , m is denoted as αl , m and αl , m are converted to probability factor pl , m using softmax normalization . Secondly , the parameters of ol , m and their importance factors αl , m are trained alternatively in Phase 1 and Phase 2 . When training αl , m , latency term is introduced to balance accuracy and complexity . Paths with αl , m lower than thα will be dropped and no more trained . Algorithm 1 Searching Process Initialization : Denote Ol as the choice block for layer l with M candidate operators { ol,1 , ol,2 , . . . , ol , M } . αl,1 , αl,2 , . . . , αl , M are the corresponding importance factors of candidate operators and initialized with identical value . Smax denotes the max number of optimization steps . 1 : while t < Smax do 2 : Phase1 : Randomly select ol , ml ∈ Ol for block Ol with uniform probability , then fix all αl , m and train the super-net constructed with the selected o1 , m1 , o2 , m2 , . . . , oL , mL for some steps . 3 : Phase2 : Fix all the parameters in ol , m and measure their flops/latency . Then evaluate each operator ol , m with both cross-entropy loss and flops/latency loss . Update αl , m according to the losses feedback . 4 : for ol , m ∈ Ol do 5 : if αl , m < thα then Ol = Ol \ { ol , m } t = t+ 1 6 : for ol , m ∈ Ol do ml = argmaxm ( αl , m ) 7 : return o1 , m1 , o2 , m2 , . . . , oL , mL Finally , after alternatively training ol , m and αl , m for given steps , paths with the highest importance factor in each choice block are selected to compose a neural architecture as the searching result . 3.2 BALANCED TRAINING . Alternatively training methods focus computational resources on most promising candidates to reduce the interference from redundant branches . However , some operators that perform well at early phases might not perform as well when they are trained to convergence . These operators might get much more training opportunities than others due to their better performance at the beginning steps . Higher training frequency in turn maintains their dominant position in the following searching process regardless their actual ability , forming the Matthew Effect . In contrast , the operators with high performance when convergent might never get opportunities to trained sufficiently . Therefore , the accuracy of alternatively training methods might degrade due to inaccurate evaluations and comparison among candidate operators . Our method follows the alternatively optimizing strategy . Differently , we only adopt gradient to architectures optimization while randomly sample paths with uniformly probability when training network parameters to avoid the Matthew Effect . More specifically , when updating network parameters of ol , m in Phase 1 and architecture parameters in Phase 2 , the equivalent output of choice block Ol is given as Opathl in Eq 2 and O arch l in Eq 3 : Opathl ( x ) = ol , m ( x ) { with probability 1M ′ , if αl , m > thα with probability 0 , else . ( 2 ) Oarchl ( x ) = ol , m ( x ) { with probability pl , m , if αl , m > thα with probability 0 , else . ( 3 ) Where M ′ is the number of remaining operators in Ol currently , and pl , m is the softmax form of αl , m . The αl , m of dropped paths are not taken into account when calculating pl , m . The parameters in both phases are optimized with Stochastic Gradient Descent ( SGD ) . In Phase 1 , the outputs in Eq 2 only depends on network parameters , thus gradients can be calculated with the Chain Rule . In Phase 2 , the outputs not only depend on the fixed network parameters but also architecture parameters αl , m . Note that Oarchl ( x ) is not differentiable with respect to αl , m , thus we introduce the manually defined derivatives proposed by Cai et al . ( 2018 ) to deal with this issue : Eq 3 can be expressed as Oarchl ( x ) = ∑ gl , m · ol , m ( x ) , where gl,0 , gl,0 , . . . , gl , M ′ is a one-hot vector with only one element equals to 1 while others equal to 0 . Assuming ∂gl , j/∂pl , j ≈ 1 according to Cai et al . ( 2018 ) , the derivatives of Oarchl ( x ) w.r.t . αl , m are defined as : ∂Oarchl ( x ) ∂αl , m = M ′∑ j=1 ∂Oarchl ( x ) ∂gl , j ∂gl , j ∂pl , j ∂pl , j ∂αl , m ≈ M ′∑ j=1 ∂Oarchl ( x ) ∂gl , j ∂pl , j ∂αl , m = M ′∑ j=1 ∂Oarchl ( x ) ∂gl , j pj ( δmj − pm ) ( 4 ) From now on , Opathl ( x ) and O arch l ( x ) are differentiable w.r.t . network parameters and architecture parameters respectively . Both parameters can be optimized alternatively in Phase 1 and Phase 2 .
In this paper, the authors proposed a new training strategy in achieving better balance between training efficiency and evaluation accuracy with weight sharing-based NAS algorithms. It is consisted of two phrases: in phrase 1, all path are uniformly trained to avoid bias, in phrase 2, less competitive options are pruned to save cost. The proposed method achieved the SOTA on IN mobile setting.
SP:eb8b8a0bae8d3f488caf70b6103ed3fd9631cb9f
Iterative energy-based projection on a normal data manifold for anomaly localization
1 INTRODUCTION . Automating visual inspection on production lines with artificial intelligence has gained popularity and interest in recent years . Indeed , the analysis of images to segment potential manufacturing defects seems well suited to computer vision algorithms . However these solutions remain data hungry and require knowledge transfer from human to machine via image annotations . Furthermore , the classification in a limited number of user-predefined categories such as non-defective , greasy , scratched and so on , will not generalize well if a previously unseen defect appears . This is even more critical on production lines where a defective product is a rare occurrence . For visual inspection , a better-suited task is unsupervised anomaly detection , in which the segmentation of the defect must be done only via prior knowledge of non-defective samples , constraining the issue to a two-class segmentation problem . From a statistical point of view , an anomaly may be seen as a distribution outlier , or an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism ( Hawkins , 1980 ) . In this setting , generative models such as Variational AutoEncoders ( VAE , Kingma & Welling ( 2014 ) ) , are especially interesting because they are capable to infer possible sampling mechanisms for a given dataset . The original autoencoder ( AE ) jointly learns an encoder model , that compresses input samples into a low dimensional space , and a decoder , that decompresses the low dimensional samples into the original input space , by minimizing the distance between the input of the encoder and the output of the decoder . The more recent variant , VAE , replaces the deterministic encoder and decoder by stochastic functions , enabling the modeling of the distribution of the dataset samples as well as the generation of new , unseen samples . In both models , the output decompressed sample given an input is often called the reconstruction , and is used as some sort of projection of the input on the support of the normal data distribution , which we will call the normal manifold . In most unsupervised anomaly detection methods based on VAE , models are trained on flawless data and defect detection and localization is then performed using a ∗Equal contributions . distance metric between the input sample and its reconstruction ( Bergmann et al. , 2018 ; 2019 ; An & Cho , 2015 ; Baur et al. , 2018 ; Matsubara et al. , 2018 ) . One fundamental issue in this approach is that the models learn on the normal manifold , hence there is no guarantee of the generalization of their behavior outside this manifold . This is problematic since it is precisely outside the dataset distribution that such methods intend to use the VAE for anomaly localization . Even in the case of a model that always generates credible samples from the dataset distribution , there is no way to ensure that the reconstruction will be connected to the input sample in any useful way . An example illustrating this limitation is given in figure 1 , where a VAE trained on regular grid images provides a globally poor reconstruction despite a local perturbation , making the anomaly localization challenging . In this paper , instead of using the VAE reconstruction , we propose to find a better projection of an input sample on the normal manifold , by optimizing an energy function defined by an autoencoder architecture . Starting at the input sample , we iterate gradient descent steps on the input to converge to an optimum , simultaneously located on the data manifold and closest to the starting input . This method allows us to add prior knowledge about the expected anomalies via regularization terms , which is not possible with the raw VAE reconstruction . We show that such an optimum is better than previously proposed autoencoder reconstructions to localize anomalies on a variety of unsupervised anomaly localization datasets ( Bergmann et al. , 2019 ) and present its inpainting capabilities on the CelebA dataset ( Liu et al. , 2015 ) . We also propose a variant of the standard gradient descent that uses the pixel-wise reconstruction error to speed up the convergence of the energy . 2 BACKGROUND . 2.1 GENERATIVE MODELS . In unsupervised anomaly detection , the only data available during training are samples x from a non-anomalous dataset X ⊂ Rd . In a generative setting , we suppose the existence of a probability function of density q , having its support on all Rd , from which the dataset was sampled . The generative objective is then to model an estimate of density q , from which we can obtain new samples close to the dataset . Popular generative architectures are Generative Adversarial Networks ( GAN , Goodfellow et al . ( 2014 ) ) , that concurrently train a generator G to generate samples from random , low-dimensional noise z ∼ p , z ∈ Rl , l d , and a discriminator D to classify generated samples and dataset samples . This model converges to the equilibrium of the expectation over both real and generated datasets of the binary cross entropy loss of the classifier minG maxD [ Ex∼q [ log ( D ( x ) ) ] + Ez∼p [ log ( 1 − D ( G ( z ) ) ) ] ] . Disadvantages of GANs are that they are notoriously difficult to train ( Goodfellow , 2017 ) , and they suffer from mode collapse , meaning that they have the tendency to only generate a subset of the original dataset . This can be problematic for anomaly detection , in which we do not want some subset of the normal data to be considered as anomalous ( Bergmann et al. , 2019 ) . Recent works such as Thanh-Tung et al . ( 2019 ) offer simple and attractive explanations for GAN behavior and propose substantial upgrades , however Ravuri & Vinyals ( 2019 ) still support the point that GANs have more trouble than other generative models to cover the whole distribution support . Another generative model is the VAE ( Kingma & Welling ( 2014 ) ) , where , similar to a GAN generator , a decoder model tries to approximate the dataset distribution with a simple latent variables prior p ( z ) , with z ∈ Rl , and conditional distributions output by the decoder p ( x|z ) . This leads to the estimate p ( x ) = ∫ p ( x|z ) p ( z ) dz , that we would like to optimize using maximum likelihood estimation on the dataset . To render the learning tractable with a stochastic gradient descent ( SGD ) estimator with reasonable variance , we use importance sampling , introducing density functions q ( z|x ) output by an encoder network , and Jensen ’ s inequality to get the variational lower bound : log p ( x ) = log Ez∼q ( z|x ) p ( x|z ) p ( z ) q ( z|x ) ≥ Ez∼q ( z|x ) log p ( x|z ) −DKL ( q ( z|x ) ‖p ( z ) ) = −L ( x ) ( 1 ) We will use L ( x ) as our loss function for training . We define the VAE reconstruction , per analogy with an autoencoder reconstruction , as the deterministic sample fV AE ( x ) that we obtain by encoding x , decoding the mean of the encoded distribution q ( z|x ) , and taking again the mean of the decoded distribution p ( x|z ) . VAEs are known to produce blurry reconstructions and generations , but Dai & Wipf ( 2019 ) show that a huge enhancement in image quality can be gained by learning the variance of the decoded distribution p ( x|z ) . This comes at the cost of the distribution of latent variables produced by the encoder q ( z ) being farther away from the prior p ( z ) , so that samples generated by sampling z ∼ p ( z ) , x ∼ p ( x|z ) have poorer quality . The authors show that using a second VAE learned on samples from q ( z ) , and sampling from it with ancestral sampling u ∼ p ( u ) , z ∼ p ( z|u ) , x ∼ p ( x|z ) , allows to recover samples of GAN-like quality . The original autoencoder can be roughly considered as a VAE whose encoded and decoded distributions have infinitely small variances . 2.2 ANOMALY DETECTION AND LOCALIZATION . We will consider that an anomaly is a sample with low probability under our estimation of the dataset distribution . The VAE loss , being a lower bound on the density , is a good proxy to classify samples between the anomalous and non-anomalous categories . To this effect , a threshold T can be defined on the loss function , delimiting anomalous samples with L ( x ) ≥ T and normal samples L ( x ) < T . However , according to Matsubara et al . ( 2018 ) , the regularization term LKL ( x ) = DKL ( q ( z|x ) ‖p ( z ) ) has a negative influence in the computation of anomaly scores . They propose instead an unregularized score Lr ( x ) = −Ez∼q ( z|x ) log p ( x|z ) which is equivalent to the reconstruction term of a standard autoencoder and claim a better anomaly detection . Going from anomaly detection to anomaly localization , this reconstruction term becomes crucial to most of existing solutions . Indeed , the inability of the model to reconstruct a given part of an image is used as a way to segment the anomaly , using a pixel-wise threshold on the reconstruction error . Actually , this segmentation is very often given by a pixel-wise ( An & Cho , 2015 ; Baur et al. , 2018 ; Matsubara et al. , 2018 ) or patch-wise comparison of the input image , and some generated image , as in Bergmann et al . ( 2018 ; 2019 ) , where the structural dissimilarity ( DSSIM , Wang et al . ( 2004 ) ) between the input and its VAE reconstruction is used . Autoencoder-based methods thus provide a straightforward way of generating an image conditioned on the input image . In the GAN original framework , though , images are generated from random noise z ∼ p ( z ) and are not conditioned by an input . Schlegl et al . ( 2017 ) propose with AnoGAN to get the closest generated image to the input using gradient descent on z for an energy defined by : EAnoGAN = ||x−G ( z ) ||1 + λ · ||fD ( x ) − fD ( G ( z ) ) ||1 ( 2 ) The first term ensures that the generation G ( z ) is close to the input x . The second term is based on a distance between features of the input and the generated images , where fD ( x ) is the output of an intermediate layer of the discriminator . This term ensures that the generated image stays in the vicinity of the original dataset distribution . 3 PROPOSED METHOD . 3.1 ADVERSARIAL PROJECTIONS . According to Zimmerer et al . ( 2018 ) , the loss gradient with respect to x gives the direction towards normal data samples , and its magnitude could indicate how abnormal a sample is . In their work on anomaly identification , they use the loss gradient as an anomaly score . Here we propose to use the gradient of the loss to iteratively improve the observed x . We propose to link this method to the methodology of computing adversarial samples in Szegedy et al . ( 2014 ) . After training a VAE on non-anomalous data , we can define a threshold T on the reconstruction loss Lr as in ( Matsubara et al. , 2018 ) , such that a small proportion of the most improbable samples are identified as anomalies . We obtain a binary classifier defined by A ( x ) = { 1 if Lr ( x ) ≥ T 0 otherwise ( 3 ) Our method consists in computing adversarial samples of this classifier ( Szegedy et al. , 2014 ) , that is to say , starting from a sample x0 with A ( x0 ) = 1 , iterate gradient descent steps over the input x , constructing samples x1 , . . .xN , to minimize the energy E ( x ) , defined as E ( xt ) = Lr ( xt ) + λ · ||xt − x0||1 ( 4 ) An iteration is done by calculating xt+1 as xt+1 = xt − α · ∇xE ( xt ) , ( 5 ) where α is a learning rate parameter , and λ is a parameter trading off the inclusion of xt in the normal manifold , given by Lr ( xt ) , and the proximity between xt and the input x0 , assured by the regularization term ||xt − x0||1 .
The paper proposes to use autoencoder for anomaly localization. The approach learns to project anomalous data on an autoencoder-learned manifold by using gradient descent on energy derived from the autoencoder's loss function. The proposed method is evaluated using the anomaly-localization dataset (Bergmann et al. CVPR 2019) and qualitatively for the task of image inpainting task on CelebA dataset.
SP:1f95868a91ef213ebf3be6ca2a0f059e93b4be37
Iterative energy-based projection on a normal data manifold for anomaly localization
1 INTRODUCTION . Automating visual inspection on production lines with artificial intelligence has gained popularity and interest in recent years . Indeed , the analysis of images to segment potential manufacturing defects seems well suited to computer vision algorithms . However these solutions remain data hungry and require knowledge transfer from human to machine via image annotations . Furthermore , the classification in a limited number of user-predefined categories such as non-defective , greasy , scratched and so on , will not generalize well if a previously unseen defect appears . This is even more critical on production lines where a defective product is a rare occurrence . For visual inspection , a better-suited task is unsupervised anomaly detection , in which the segmentation of the defect must be done only via prior knowledge of non-defective samples , constraining the issue to a two-class segmentation problem . From a statistical point of view , an anomaly may be seen as a distribution outlier , or an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism ( Hawkins , 1980 ) . In this setting , generative models such as Variational AutoEncoders ( VAE , Kingma & Welling ( 2014 ) ) , are especially interesting because they are capable to infer possible sampling mechanisms for a given dataset . The original autoencoder ( AE ) jointly learns an encoder model , that compresses input samples into a low dimensional space , and a decoder , that decompresses the low dimensional samples into the original input space , by minimizing the distance between the input of the encoder and the output of the decoder . The more recent variant , VAE , replaces the deterministic encoder and decoder by stochastic functions , enabling the modeling of the distribution of the dataset samples as well as the generation of new , unseen samples . In both models , the output decompressed sample given an input is often called the reconstruction , and is used as some sort of projection of the input on the support of the normal data distribution , which we will call the normal manifold . In most unsupervised anomaly detection methods based on VAE , models are trained on flawless data and defect detection and localization is then performed using a ∗Equal contributions . distance metric between the input sample and its reconstruction ( Bergmann et al. , 2018 ; 2019 ; An & Cho , 2015 ; Baur et al. , 2018 ; Matsubara et al. , 2018 ) . One fundamental issue in this approach is that the models learn on the normal manifold , hence there is no guarantee of the generalization of their behavior outside this manifold . This is problematic since it is precisely outside the dataset distribution that such methods intend to use the VAE for anomaly localization . Even in the case of a model that always generates credible samples from the dataset distribution , there is no way to ensure that the reconstruction will be connected to the input sample in any useful way . An example illustrating this limitation is given in figure 1 , where a VAE trained on regular grid images provides a globally poor reconstruction despite a local perturbation , making the anomaly localization challenging . In this paper , instead of using the VAE reconstruction , we propose to find a better projection of an input sample on the normal manifold , by optimizing an energy function defined by an autoencoder architecture . Starting at the input sample , we iterate gradient descent steps on the input to converge to an optimum , simultaneously located on the data manifold and closest to the starting input . This method allows us to add prior knowledge about the expected anomalies via regularization terms , which is not possible with the raw VAE reconstruction . We show that such an optimum is better than previously proposed autoencoder reconstructions to localize anomalies on a variety of unsupervised anomaly localization datasets ( Bergmann et al. , 2019 ) and present its inpainting capabilities on the CelebA dataset ( Liu et al. , 2015 ) . We also propose a variant of the standard gradient descent that uses the pixel-wise reconstruction error to speed up the convergence of the energy . 2 BACKGROUND . 2.1 GENERATIVE MODELS . In unsupervised anomaly detection , the only data available during training are samples x from a non-anomalous dataset X ⊂ Rd . In a generative setting , we suppose the existence of a probability function of density q , having its support on all Rd , from which the dataset was sampled . The generative objective is then to model an estimate of density q , from which we can obtain new samples close to the dataset . Popular generative architectures are Generative Adversarial Networks ( GAN , Goodfellow et al . ( 2014 ) ) , that concurrently train a generator G to generate samples from random , low-dimensional noise z ∼ p , z ∈ Rl , l d , and a discriminator D to classify generated samples and dataset samples . This model converges to the equilibrium of the expectation over both real and generated datasets of the binary cross entropy loss of the classifier minG maxD [ Ex∼q [ log ( D ( x ) ) ] + Ez∼p [ log ( 1 − D ( G ( z ) ) ) ] ] . Disadvantages of GANs are that they are notoriously difficult to train ( Goodfellow , 2017 ) , and they suffer from mode collapse , meaning that they have the tendency to only generate a subset of the original dataset . This can be problematic for anomaly detection , in which we do not want some subset of the normal data to be considered as anomalous ( Bergmann et al. , 2019 ) . Recent works such as Thanh-Tung et al . ( 2019 ) offer simple and attractive explanations for GAN behavior and propose substantial upgrades , however Ravuri & Vinyals ( 2019 ) still support the point that GANs have more trouble than other generative models to cover the whole distribution support . Another generative model is the VAE ( Kingma & Welling ( 2014 ) ) , where , similar to a GAN generator , a decoder model tries to approximate the dataset distribution with a simple latent variables prior p ( z ) , with z ∈ Rl , and conditional distributions output by the decoder p ( x|z ) . This leads to the estimate p ( x ) = ∫ p ( x|z ) p ( z ) dz , that we would like to optimize using maximum likelihood estimation on the dataset . To render the learning tractable with a stochastic gradient descent ( SGD ) estimator with reasonable variance , we use importance sampling , introducing density functions q ( z|x ) output by an encoder network , and Jensen ’ s inequality to get the variational lower bound : log p ( x ) = log Ez∼q ( z|x ) p ( x|z ) p ( z ) q ( z|x ) ≥ Ez∼q ( z|x ) log p ( x|z ) −DKL ( q ( z|x ) ‖p ( z ) ) = −L ( x ) ( 1 ) We will use L ( x ) as our loss function for training . We define the VAE reconstruction , per analogy with an autoencoder reconstruction , as the deterministic sample fV AE ( x ) that we obtain by encoding x , decoding the mean of the encoded distribution q ( z|x ) , and taking again the mean of the decoded distribution p ( x|z ) . VAEs are known to produce blurry reconstructions and generations , but Dai & Wipf ( 2019 ) show that a huge enhancement in image quality can be gained by learning the variance of the decoded distribution p ( x|z ) . This comes at the cost of the distribution of latent variables produced by the encoder q ( z ) being farther away from the prior p ( z ) , so that samples generated by sampling z ∼ p ( z ) , x ∼ p ( x|z ) have poorer quality . The authors show that using a second VAE learned on samples from q ( z ) , and sampling from it with ancestral sampling u ∼ p ( u ) , z ∼ p ( z|u ) , x ∼ p ( x|z ) , allows to recover samples of GAN-like quality . The original autoencoder can be roughly considered as a VAE whose encoded and decoded distributions have infinitely small variances . 2.2 ANOMALY DETECTION AND LOCALIZATION . We will consider that an anomaly is a sample with low probability under our estimation of the dataset distribution . The VAE loss , being a lower bound on the density , is a good proxy to classify samples between the anomalous and non-anomalous categories . To this effect , a threshold T can be defined on the loss function , delimiting anomalous samples with L ( x ) ≥ T and normal samples L ( x ) < T . However , according to Matsubara et al . ( 2018 ) , the regularization term LKL ( x ) = DKL ( q ( z|x ) ‖p ( z ) ) has a negative influence in the computation of anomaly scores . They propose instead an unregularized score Lr ( x ) = −Ez∼q ( z|x ) log p ( x|z ) which is equivalent to the reconstruction term of a standard autoencoder and claim a better anomaly detection . Going from anomaly detection to anomaly localization , this reconstruction term becomes crucial to most of existing solutions . Indeed , the inability of the model to reconstruct a given part of an image is used as a way to segment the anomaly , using a pixel-wise threshold on the reconstruction error . Actually , this segmentation is very often given by a pixel-wise ( An & Cho , 2015 ; Baur et al. , 2018 ; Matsubara et al. , 2018 ) or patch-wise comparison of the input image , and some generated image , as in Bergmann et al . ( 2018 ; 2019 ) , where the structural dissimilarity ( DSSIM , Wang et al . ( 2004 ) ) between the input and its VAE reconstruction is used . Autoencoder-based methods thus provide a straightforward way of generating an image conditioned on the input image . In the GAN original framework , though , images are generated from random noise z ∼ p ( z ) and are not conditioned by an input . Schlegl et al . ( 2017 ) propose with AnoGAN to get the closest generated image to the input using gradient descent on z for an energy defined by : EAnoGAN = ||x−G ( z ) ||1 + λ · ||fD ( x ) − fD ( G ( z ) ) ||1 ( 2 ) The first term ensures that the generation G ( z ) is close to the input x . The second term is based on a distance between features of the input and the generated images , where fD ( x ) is the output of an intermediate layer of the discriminator . This term ensures that the generated image stays in the vicinity of the original dataset distribution . 3 PROPOSED METHOD . 3.1 ADVERSARIAL PROJECTIONS . According to Zimmerer et al . ( 2018 ) , the loss gradient with respect to x gives the direction towards normal data samples , and its magnitude could indicate how abnormal a sample is . In their work on anomaly identification , they use the loss gradient as an anomaly score . Here we propose to use the gradient of the loss to iteratively improve the observed x . We propose to link this method to the methodology of computing adversarial samples in Szegedy et al . ( 2014 ) . After training a VAE on non-anomalous data , we can define a threshold T on the reconstruction loss Lr as in ( Matsubara et al. , 2018 ) , such that a small proportion of the most improbable samples are identified as anomalies . We obtain a binary classifier defined by A ( x ) = { 1 if Lr ( x ) ≥ T 0 otherwise ( 3 ) Our method consists in computing adversarial samples of this classifier ( Szegedy et al. , 2014 ) , that is to say , starting from a sample x0 with A ( x0 ) = 1 , iterate gradient descent steps over the input x , constructing samples x1 , . . .xN , to minimize the energy E ( x ) , defined as E ( xt ) = Lr ( xt ) + λ · ||xt − x0||1 ( 4 ) An iteration is done by calculating xt+1 as xt+1 = xt − α · ∇xE ( xt ) , ( 5 ) where α is a learning rate parameter , and λ is a parameter trading off the inclusion of xt in the normal manifold , given by Lr ( xt ) , and the proximity between xt and the input x0 , assured by the regularization term ||xt − x0||1 .
This paper discusses an important problem of solving the visual inspection problem limited supervision. It proposes to use VAE to model the anomaly detection. The major concern is how the quality of f_{VAE} is estimated. From the paper it seems f_{VAE} is not updated. Will it be sufficient to rely a fixed f_{VAE} and blindly trust its quality?
SP:1f95868a91ef213ebf3be6ca2a0f059e93b4be37
Long History Short-Term Memory for Long-Term Video Prediction
While video prediction approaches have advanced considerably in recent years , learning to predict long-term future is challenging — ambiguous future or error propagation over time yield blurry predictions . To address this challenge , existing algorithms rely on extra supervision ( e.g. , action or object pose ) , motion flow learning , or adversarial training . In this paper , we propose a new recurrent unit , Long History Short-Term Memory ( LH-STM ) . LH-STM incorporates long history states into a recurrent unit to learn longer range dependencies . To capture spatiotemporal dynamics in videos , we combined LH-STM with the Context-aware Video Prediction model ( ContextVP ) . Our experiments on the KTH human actions and BAIR robot pushing datasets demonstrate that our approach produces not only sharper near-future predictions , but also farther into the future compared to the state-of-the-art methods . 1 INTRODUCTION . Learning the dynamics of an environment and predicting consequences in the future has become an important research problem . A common task is to train a model that accurately predicts pixel-level future frames conditioned on past frames . It can be utilized for intelligent agents to guide them to interact with the world , or for other video analysis tasks such as activity recognition . An important component of designing such models is how to effectively learn good spatio-temporal representations from video frames . The Convolutional Long Short-Term Memory ( ConvLSTM ) network ( Xingjian et al. , 2015 ) has been a popular model architecture choice for video prediction . However , recent stateof-the-art approaches produce high-quality predictions only for one or less then ten frames ( Lotter et al. , 2016 ; Villegas et al. , 2017a ; Byeon et al. , 2018 ) . Learning to predict long-term future video frames remains challenging due to 1 ) the presence of complex dynamics in high-dimensional video data , 2 ) prediction error propagation over time , and 3 ) inherent uncertainty of the future . Many recent works ( Denton & Fergus , 2018 ; Babaeizadeh et al. , 2017 ; Lee et al. , 2018 ) focus on the third issue by introducing stochastic models ; this issue is a crucial challenge for long-term prediction . However , the architectures currently in use are not sufficiently powerful and efficient for long-term prediction , and this is also an important but unsolved problem . The model needs to extract important information from spatio-temporal data and retain this information longer into the future efficiently . Otherwise , its uncertainty about the future will increase even if the future is completely predictable given the past . Therefore , in this paper , we attempt to address the issue of learning complex dynamics of videos and minimizing long-term prediction error by fully observing the history . We propose a novel modification of the ConvLSTM structure , Long History Short-Term Memory ( LH-STM ) . LH-STM learns to interconnect history states and the current input by a History SoftSelection Unit ( HistSSel ) and double memory modules . The weighted history states computed by our HistSSel units are combined with the history memory and then used to update the current states by the update memory . The proposed method brings the power of higher-order RNNs ( Soltani & Jiang , 2016 ) to ConvLSTMs , which have been limited to simple recurrent mechanisms so far . The HistSSel unit acts as a short-cut to the history , so the gradient flow in the LSTM is improved . More powerful RNNs are likely to be necessary to solve the hard and unsolved problem of long-term video prediction , which is extremely challenging for current architectures . Moreover , by disentangling the history and update memories , our model can fully utilize the long history states . This structure can better model long-term dependencies in sequential data . In this paper , the proposed modification is integrated into the ConvLSTM-based architectures to solve the long-term video prediction problem : Context-aware Video Prediction ( ContextVP ) model . The proposed models can fully leverage long-range spatio-temporal contexts in real-world videos . Our experiments on the KTH human actions and the BAIR robot pushing datasets show that our model produces sharp and realistic predictions for more frames into the future compared to recent state-of-the-art long-term video prediction methods . 2 RELATED WORK . Learning Long-Term Dependencies with Recurrent Neural Networks ( RNN ) : While Long ShortTerm Memory ( LSTM ) has been successful for sequence prediction , many recent approaches aim to capture longer-term dependencies in sequential data . Several works have proposed to allow dynamic recurrent state updates or to learn more complex transition functions . Chung et al . ( 2016 ) introduced the hierarchical multiscale RNN that captures a hierarchical representation of a sequence by encoding multiple time scales of temporal dependencies . Koutnik et al . ( 2014 ) modified the standard RNN to a Clockwork RNN that partitions hidden units and processes them at different clock speeds . Neil et al . ( 2016 ) introduced a new time gate that controls update intervals based on periodic patterns . Campos et al . ( 2017 ) proposed an explicit skipping module for state updates . Zilly et al . ( 2017 ) increased the recurrent transition depth with highway layers . Fast-Slow Recurrent Neural Networks ( Mujika et al. , 2017 ) incorporate ideas from both multiscale ( Schmidhuber , 1992 ; El Hihi & Bengio , 1996 ; Chung et al. , 2016 ) and deep transition ( Pascanu et al. , 2013 ; Zilly et al. , 2017 ) RNNs . The advantages of the above approaches are efficient information propagation through time , better long memory traces , and generalization to unseen data . Alternative solutions include the use of history states , an attention model or skip connections . Soltani & Jiang ( 2016 ) investigated a higher order RNN to aggregate more history information and showed that it is beneficial for long range sequence modeling . Cheng et al . ( 2016 ) deployed an attention mechanism in LSTM to induce relations between input and history states . Gui et al . ( 2018 ) incorporated into an LSTM , dynamic skip connections and reinforcement learning to model long-term dependencies . These approaches use the history states in a single LSTM by directly adding more recurrent connections or adding an attention module in the memory cell . These models are used for one dimensional sequence modeling , whereas our proposed approach separates the history and update memories that learn to encode the long-range relevant history states . Furthermore , our approach is more suitable for high-dimensional ( e.g. , video ) prediction tasks . Video Prediction : The main issue in long-term pixel-level video prediction is how to capture longterm dynamics and handle uncertainty of the future while maintaining sharpness and realism . Oh et al . ( 2015 ) introduced action-conditioned video prediction using a Convolutional Neural Network ( CNN ) architecture . Villegas et al . ( 2017b ) and Wichers et al . ( 2018 ) focused on a hierarchical model to predict long-term videos . Their model estimates high-level structure before generating pixel-level predictions . However , the approach by Villegas et al . ( 2017b ) requires object pose information as ground truth during training . Finn et al . ( 2016 ) used ConvLSTM to explicitly model pixel motions . To generate high-quality predictions , many approaches train with an adversarial loss ( Mathieu et al. , 2015 ; Wichers et al. , 2018 ; Vondrick et al. , 2016 ; Vondrick & Torralba , 2017 ; Denton et al. , 2017 ; Lee et al. , 2018 ) . Weissenborn et al . ( 2019 ) introduced local self-attention on videos directly for a large scale video precessing . Another active line of investigation is to train stochastic prediction models using VAEs ( Denton & Fergus , 2018 ; Babaeizadeh et al. , 2017 ; Lee et al. , 2018 ) . These models predict plausible futures by sampling latent variables and produce long-range future predictions . The Spatio-temporal LSTM ( Wang et al. , 2017 ; 2018a ) was introduced to better represent the dynamics of videos . This model is able to learn spatial and temporal representations simultaneously . Byeon et al . ( 2018 ) introduced a Multi-Dimensional LSTM-based approach ( Stollenga et al. , 2015 ) for video prediction . It contains directional ConvLSTM-like units that efficiently aggregate the entire spatio-temporal contextual information . Wang et al . ( 2018b ) recently proposed a memory recall function with 3D ConvLSTM . This work is the most related to our approach . It uses a set of cell states with an attention mechanism to capture the long-term frame interaction similar to the work of Cheng et al . ( 2016 ) . With 3D-convolutional operations , the model is able to capture short-term and long-term information flow . In contrast to this work , the attention mechanism in our model is used for a set of hidden state . We also disentangle the history and update memory cells to better memorize and restore the relevant information . By integrating a double memory LH-STM into the context-aware video prediction ( ContextVP ) model ( Byeon et al. , 2018 ) , our networks can capture the entire spatio-temporal context for a long-range video sequence . 3 METHOD . In this section , we first describe the standard ConvLSTM architecture and then introduce the LH-STM . Finally , we explain the ConvLSTM-based network architectures for multi-frame video prediction using the Context-aware Video Prediction model ( ContextVP ) ( Byeon et al. , 2018 ) . 3.1 CONVOLUTIONAL LSTM . Let Xn1 = { X1 , ... , Xn } be an input sequence of length n. Xk ∈ Rh×w×c is the k-th frame , where k ∈ { 1 , ... , n } , h is the height , w the width , and c the number of channels . For the input frame Xk , a ConvLSTM unit computes the current cell and hidden states ( Ck , Hk ) given the cell and hidden states from the previous frame , ( Ck−1 , Hk−1 ) : Ck , Hk = ConvLSTM ( Xk , Hk-1 , Ck-1 ) , ( 1 ) by computing the input , forget , output gates ik , fk , ok , and the transformed cell state Ĉk : ik = σ ( Wi ∗Xk +Mi ∗Hk-1 + bi ) , fk = σ ( Wf ∗Xk +Mf ∗Hk-1 + bf ) , ok = σ ( Wo ∗Xk +Mo ∗Hk-1 + bo ) , Ĉk = tanh ( Wĉ ∗Xk +Mĉ ∗Hk-1 + bĉ ) , Ck = fk Ck-1 + ik Ĉk , Hk = ok tanh ( Ck ) , ( 2 ) where σ is the sigmoid function , W and M are 2D convolutional kernels for input-to-state and state-to-state transitions , ( ∗ ) is the convolution operation , and ( ) is element-wise multiplication . The size of the weight matrices depends on the size of convolutional kernel and the number of hidden units . 3.2 LONG HISTORY SHORT-TERM MEMORY ( LH-STM ) . LH-STM is an extension of the standard ConvLSTM model by integrating a set of history states into the LSTM unit . The history states include the spatio-temporal context of each frame in addition to the pixel level information in the frame itself . Figure 1 illustrates the differences between a standard RNN , Higher-Order RNN ( Soltani & Jiang , 2016 ) , and our proposed model ( Double and Single LH-STM ) . History Soft-Selection Unit ( HistSSel ) : The HistSSel unit computes the relationship between the recent history and the earlier ones using dot-product similar to ( Vaswani et al. , 2017 ) 1 . This mechanism can be formulated as SoftSel ( Q , K , V ) = softmax ( WQQ ·WQK ) ·WQV . It consists of queries ( Q ) , keys ( K ) and values ( V ) . It computes the dot products of the queries and the keys ; and then applies the a softmax function . Finally , the values ( V ) are weighted by the outputs of softmax function . The queries , keys , and values can be optionally transformed by the WQ , WK , and WV matrices . Using this mechanism , HistSSel computes the relationship between the last hidden state Hk-1 and the earlier hidden states Hk-m : k-2 at time step k ( See Figure 2 ) . Hk-m : k-2 is the set of previous hidden states , ( Hk-m , Hk-m-1 , · · ·Hk-3 , Hk-2 ) . We will show in Section 4.2 the benefit of using history states versus using the past input frames directly . The history soft-selection mechanism can be formulated as follows : HistSSel ( Hk-1 , Hk-m : k-2 ) = n∑ i=2 softmaxi ( H̃ Q k-1 · H̃ K k-i ) · H̃Vk-i , H̃ji =W j i Hi + b j i , j ∈ { Q , K , V } . 2 ( 3 ) Single LH-STM : A simple way to employ HistSSel in ConvLSTM ( Equation 1 ) is to add HistSSel ( Hk-1 , Hk-m : k-2 ) in addition to the input and the previous states ( Xk , Hk-1 , Ck-1 ) in the Equation 6 Figure 1c shows a diagram of the computation . This direct extension is named Single LH-STM : Hk = SingleConvLSTM ( Xk , Hk-1 , Ck-1 , HistSSel ( Hk-1 , Hk-m : k-2 ) ) . ik = σ ( Wi ∗Xk +Mi ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bi ) , fk = σ ( Wf ∗Xk +Mf ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bf ) , ok = σ ( Wo ∗Xk +Mo ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bo ) Ĉk = tanh ( Wĉ ∗Xk +Mĉ ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bĉ ) , Ck = ik Ĉk + fk Ck-1 , Hk = ok tanh ( Ck ) . ( 4 ) Double LH-STM : To effectively learn dynamics from the history states , we propose Double LH-STM . It contains two ConvLSTM blocks , History LSTM ( H-LSTM ) and Update LSTM ( U-LSTM ) . The goal of the Double LH-STM is to explicitly separate the long-term history memory and the update memory . By disentangling these , the model can better encode complex long-range history and keep track of the their dependencies . Figure 1d illustrates a diagram of Double LH-STM . 1The purpose of this unit is to compute the importance of each history state . While we used a selfattention-like unit in this paper , this can be achieved with other common layers as well , e.g. , fully connected or convolutional layers . 2bji can be omitted . The H-LSTM block explicitly learns the complex transition function from the ( possibly entire ) set of past hidden states , Hk-m : k-1 , 1 < m < k. If m = k , H-LSTM incorporates the entire history up to the time step k − 1 . The U-LSTM block updates the states Hk and Ck for the time step k , given the input Xk , previous cell state Ck−1 , and the output of the H-LSTM , H ′k−1 . The History-LSTM ( H-LSTM ) and Update-LSTM ( U-LSTM ) can be formulated as : H-LSTM i′k-1 = σ ( M ′ i ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′i ) , f ′k-1 = σ ( M ′ f ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′f ) , o′k-1 = σ ( M ′ o ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′o ) Ĉ′k-1 = tanh ( M ′ ĉ ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′ĉ ) , C′k-1 = f ′ k−1 C′k-2 + i′k−1 Ĉ′k−1 , H ′k-1 = o ′ k-1 tanh ( c′k-1 ) . ( 5 ) U-LSTM ik = σ ( Wi ∗Xk +Mi ∗H ′k-1 + bi ) , fk = σ ( Wf ∗Xk +Mf ∗H ′′k-1 + bf ) , ok = σ ( Wo ∗Xk +Mo ∗H ′k-1 + bo ) , Ĉk = tanh ( Wĉ ∗Xk +Mĉ ∗H ′′k-1 + bĉ ) , Ck = fk Ck-1 + ik Ĉk , Hk = ok tanh ( Ck ) . ( 6 )
The paper proposes a type of recurrent neural network module called Long History Short-Term Memory (LH-STM) for longer-term video generation. This module can be used to replace ConvLSTMs in previously published video prediction models. It expands ConvLSTMs by adding a "previous history" term to the ConvLSTM equations that compute the IFO gates and the candidate new state. This history term corresponds to a linear combination of previous hidden states selected through a soft-attention mechanism. As such, it is not clear if there are significant differences between LH-STMs and previously proposed LSTMs with attention on previous hidden states. The authors propose recurrent units that include one or two History Selection (soft-attention) steps, called single LH-STM and double LH-STM respectively. The exact formulation of the double LH-STM is not clear from the paper. The authors then propose to use models with LH-STM units for longer term video generation. They claim that LH-STM can better reduce error propagation and better model the complex dynamics of videos. To support the claims, they conduct empirical experiments where they show that the proposed model outperforms previous video prediction models on KTH (up to 80 frames) and the BAIR Push dataset (up to 25 frames).
SP:cf0db5624fc03cd71e331202c16808174b4a9ae7
Long History Short-Term Memory for Long-Term Video Prediction
While video prediction approaches have advanced considerably in recent years , learning to predict long-term future is challenging — ambiguous future or error propagation over time yield blurry predictions . To address this challenge , existing algorithms rely on extra supervision ( e.g. , action or object pose ) , motion flow learning , or adversarial training . In this paper , we propose a new recurrent unit , Long History Short-Term Memory ( LH-STM ) . LH-STM incorporates long history states into a recurrent unit to learn longer range dependencies . To capture spatiotemporal dynamics in videos , we combined LH-STM with the Context-aware Video Prediction model ( ContextVP ) . Our experiments on the KTH human actions and BAIR robot pushing datasets demonstrate that our approach produces not only sharper near-future predictions , but also farther into the future compared to the state-of-the-art methods . 1 INTRODUCTION . Learning the dynamics of an environment and predicting consequences in the future has become an important research problem . A common task is to train a model that accurately predicts pixel-level future frames conditioned on past frames . It can be utilized for intelligent agents to guide them to interact with the world , or for other video analysis tasks such as activity recognition . An important component of designing such models is how to effectively learn good spatio-temporal representations from video frames . The Convolutional Long Short-Term Memory ( ConvLSTM ) network ( Xingjian et al. , 2015 ) has been a popular model architecture choice for video prediction . However , recent stateof-the-art approaches produce high-quality predictions only for one or less then ten frames ( Lotter et al. , 2016 ; Villegas et al. , 2017a ; Byeon et al. , 2018 ) . Learning to predict long-term future video frames remains challenging due to 1 ) the presence of complex dynamics in high-dimensional video data , 2 ) prediction error propagation over time , and 3 ) inherent uncertainty of the future . Many recent works ( Denton & Fergus , 2018 ; Babaeizadeh et al. , 2017 ; Lee et al. , 2018 ) focus on the third issue by introducing stochastic models ; this issue is a crucial challenge for long-term prediction . However , the architectures currently in use are not sufficiently powerful and efficient for long-term prediction , and this is also an important but unsolved problem . The model needs to extract important information from spatio-temporal data and retain this information longer into the future efficiently . Otherwise , its uncertainty about the future will increase even if the future is completely predictable given the past . Therefore , in this paper , we attempt to address the issue of learning complex dynamics of videos and minimizing long-term prediction error by fully observing the history . We propose a novel modification of the ConvLSTM structure , Long History Short-Term Memory ( LH-STM ) . LH-STM learns to interconnect history states and the current input by a History SoftSelection Unit ( HistSSel ) and double memory modules . The weighted history states computed by our HistSSel units are combined with the history memory and then used to update the current states by the update memory . The proposed method brings the power of higher-order RNNs ( Soltani & Jiang , 2016 ) to ConvLSTMs , which have been limited to simple recurrent mechanisms so far . The HistSSel unit acts as a short-cut to the history , so the gradient flow in the LSTM is improved . More powerful RNNs are likely to be necessary to solve the hard and unsolved problem of long-term video prediction , which is extremely challenging for current architectures . Moreover , by disentangling the history and update memories , our model can fully utilize the long history states . This structure can better model long-term dependencies in sequential data . In this paper , the proposed modification is integrated into the ConvLSTM-based architectures to solve the long-term video prediction problem : Context-aware Video Prediction ( ContextVP ) model . The proposed models can fully leverage long-range spatio-temporal contexts in real-world videos . Our experiments on the KTH human actions and the BAIR robot pushing datasets show that our model produces sharp and realistic predictions for more frames into the future compared to recent state-of-the-art long-term video prediction methods . 2 RELATED WORK . Learning Long-Term Dependencies with Recurrent Neural Networks ( RNN ) : While Long ShortTerm Memory ( LSTM ) has been successful for sequence prediction , many recent approaches aim to capture longer-term dependencies in sequential data . Several works have proposed to allow dynamic recurrent state updates or to learn more complex transition functions . Chung et al . ( 2016 ) introduced the hierarchical multiscale RNN that captures a hierarchical representation of a sequence by encoding multiple time scales of temporal dependencies . Koutnik et al . ( 2014 ) modified the standard RNN to a Clockwork RNN that partitions hidden units and processes them at different clock speeds . Neil et al . ( 2016 ) introduced a new time gate that controls update intervals based on periodic patterns . Campos et al . ( 2017 ) proposed an explicit skipping module for state updates . Zilly et al . ( 2017 ) increased the recurrent transition depth with highway layers . Fast-Slow Recurrent Neural Networks ( Mujika et al. , 2017 ) incorporate ideas from both multiscale ( Schmidhuber , 1992 ; El Hihi & Bengio , 1996 ; Chung et al. , 2016 ) and deep transition ( Pascanu et al. , 2013 ; Zilly et al. , 2017 ) RNNs . The advantages of the above approaches are efficient information propagation through time , better long memory traces , and generalization to unseen data . Alternative solutions include the use of history states , an attention model or skip connections . Soltani & Jiang ( 2016 ) investigated a higher order RNN to aggregate more history information and showed that it is beneficial for long range sequence modeling . Cheng et al . ( 2016 ) deployed an attention mechanism in LSTM to induce relations between input and history states . Gui et al . ( 2018 ) incorporated into an LSTM , dynamic skip connections and reinforcement learning to model long-term dependencies . These approaches use the history states in a single LSTM by directly adding more recurrent connections or adding an attention module in the memory cell . These models are used for one dimensional sequence modeling , whereas our proposed approach separates the history and update memories that learn to encode the long-range relevant history states . Furthermore , our approach is more suitable for high-dimensional ( e.g. , video ) prediction tasks . Video Prediction : The main issue in long-term pixel-level video prediction is how to capture longterm dynamics and handle uncertainty of the future while maintaining sharpness and realism . Oh et al . ( 2015 ) introduced action-conditioned video prediction using a Convolutional Neural Network ( CNN ) architecture . Villegas et al . ( 2017b ) and Wichers et al . ( 2018 ) focused on a hierarchical model to predict long-term videos . Their model estimates high-level structure before generating pixel-level predictions . However , the approach by Villegas et al . ( 2017b ) requires object pose information as ground truth during training . Finn et al . ( 2016 ) used ConvLSTM to explicitly model pixel motions . To generate high-quality predictions , many approaches train with an adversarial loss ( Mathieu et al. , 2015 ; Wichers et al. , 2018 ; Vondrick et al. , 2016 ; Vondrick & Torralba , 2017 ; Denton et al. , 2017 ; Lee et al. , 2018 ) . Weissenborn et al . ( 2019 ) introduced local self-attention on videos directly for a large scale video precessing . Another active line of investigation is to train stochastic prediction models using VAEs ( Denton & Fergus , 2018 ; Babaeizadeh et al. , 2017 ; Lee et al. , 2018 ) . These models predict plausible futures by sampling latent variables and produce long-range future predictions . The Spatio-temporal LSTM ( Wang et al. , 2017 ; 2018a ) was introduced to better represent the dynamics of videos . This model is able to learn spatial and temporal representations simultaneously . Byeon et al . ( 2018 ) introduced a Multi-Dimensional LSTM-based approach ( Stollenga et al. , 2015 ) for video prediction . It contains directional ConvLSTM-like units that efficiently aggregate the entire spatio-temporal contextual information . Wang et al . ( 2018b ) recently proposed a memory recall function with 3D ConvLSTM . This work is the most related to our approach . It uses a set of cell states with an attention mechanism to capture the long-term frame interaction similar to the work of Cheng et al . ( 2016 ) . With 3D-convolutional operations , the model is able to capture short-term and long-term information flow . In contrast to this work , the attention mechanism in our model is used for a set of hidden state . We also disentangle the history and update memory cells to better memorize and restore the relevant information . By integrating a double memory LH-STM into the context-aware video prediction ( ContextVP ) model ( Byeon et al. , 2018 ) , our networks can capture the entire spatio-temporal context for a long-range video sequence . 3 METHOD . In this section , we first describe the standard ConvLSTM architecture and then introduce the LH-STM . Finally , we explain the ConvLSTM-based network architectures for multi-frame video prediction using the Context-aware Video Prediction model ( ContextVP ) ( Byeon et al. , 2018 ) . 3.1 CONVOLUTIONAL LSTM . Let Xn1 = { X1 , ... , Xn } be an input sequence of length n. Xk ∈ Rh×w×c is the k-th frame , where k ∈ { 1 , ... , n } , h is the height , w the width , and c the number of channels . For the input frame Xk , a ConvLSTM unit computes the current cell and hidden states ( Ck , Hk ) given the cell and hidden states from the previous frame , ( Ck−1 , Hk−1 ) : Ck , Hk = ConvLSTM ( Xk , Hk-1 , Ck-1 ) , ( 1 ) by computing the input , forget , output gates ik , fk , ok , and the transformed cell state Ĉk : ik = σ ( Wi ∗Xk +Mi ∗Hk-1 + bi ) , fk = σ ( Wf ∗Xk +Mf ∗Hk-1 + bf ) , ok = σ ( Wo ∗Xk +Mo ∗Hk-1 + bo ) , Ĉk = tanh ( Wĉ ∗Xk +Mĉ ∗Hk-1 + bĉ ) , Ck = fk Ck-1 + ik Ĉk , Hk = ok tanh ( Ck ) , ( 2 ) where σ is the sigmoid function , W and M are 2D convolutional kernels for input-to-state and state-to-state transitions , ( ∗ ) is the convolution operation , and ( ) is element-wise multiplication . The size of the weight matrices depends on the size of convolutional kernel and the number of hidden units . 3.2 LONG HISTORY SHORT-TERM MEMORY ( LH-STM ) . LH-STM is an extension of the standard ConvLSTM model by integrating a set of history states into the LSTM unit . The history states include the spatio-temporal context of each frame in addition to the pixel level information in the frame itself . Figure 1 illustrates the differences between a standard RNN , Higher-Order RNN ( Soltani & Jiang , 2016 ) , and our proposed model ( Double and Single LH-STM ) . History Soft-Selection Unit ( HistSSel ) : The HistSSel unit computes the relationship between the recent history and the earlier ones using dot-product similar to ( Vaswani et al. , 2017 ) 1 . This mechanism can be formulated as SoftSel ( Q , K , V ) = softmax ( WQQ ·WQK ) ·WQV . It consists of queries ( Q ) , keys ( K ) and values ( V ) . It computes the dot products of the queries and the keys ; and then applies the a softmax function . Finally , the values ( V ) are weighted by the outputs of softmax function . The queries , keys , and values can be optionally transformed by the WQ , WK , and WV matrices . Using this mechanism , HistSSel computes the relationship between the last hidden state Hk-1 and the earlier hidden states Hk-m : k-2 at time step k ( See Figure 2 ) . Hk-m : k-2 is the set of previous hidden states , ( Hk-m , Hk-m-1 , · · ·Hk-3 , Hk-2 ) . We will show in Section 4.2 the benefit of using history states versus using the past input frames directly . The history soft-selection mechanism can be formulated as follows : HistSSel ( Hk-1 , Hk-m : k-2 ) = n∑ i=2 softmaxi ( H̃ Q k-1 · H̃ K k-i ) · H̃Vk-i , H̃ji =W j i Hi + b j i , j ∈ { Q , K , V } . 2 ( 3 ) Single LH-STM : A simple way to employ HistSSel in ConvLSTM ( Equation 1 ) is to add HistSSel ( Hk-1 , Hk-m : k-2 ) in addition to the input and the previous states ( Xk , Hk-1 , Ck-1 ) in the Equation 6 Figure 1c shows a diagram of the computation . This direct extension is named Single LH-STM : Hk = SingleConvLSTM ( Xk , Hk-1 , Ck-1 , HistSSel ( Hk-1 , Hk-m : k-2 ) ) . ik = σ ( Wi ∗Xk +Mi ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bi ) , fk = σ ( Wf ∗Xk +Mf ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bf ) , ok = σ ( Wo ∗Xk +Mo ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bo ) Ĉk = tanh ( Wĉ ∗Xk +Mĉ ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + bĉ ) , Ck = ik Ĉk + fk Ck-1 , Hk = ok tanh ( Ck ) . ( 4 ) Double LH-STM : To effectively learn dynamics from the history states , we propose Double LH-STM . It contains two ConvLSTM blocks , History LSTM ( H-LSTM ) and Update LSTM ( U-LSTM ) . The goal of the Double LH-STM is to explicitly separate the long-term history memory and the update memory . By disentangling these , the model can better encode complex long-range history and keep track of the their dependencies . Figure 1d illustrates a diagram of Double LH-STM . 1The purpose of this unit is to compute the importance of each history state . While we used a selfattention-like unit in this paper , this can be achieved with other common layers as well , e.g. , fully connected or convolutional layers . 2bji can be omitted . The H-LSTM block explicitly learns the complex transition function from the ( possibly entire ) set of past hidden states , Hk-m : k-1 , 1 < m < k. If m = k , H-LSTM incorporates the entire history up to the time step k − 1 . The U-LSTM block updates the states Hk and Ck for the time step k , given the input Xk , previous cell state Ck−1 , and the output of the H-LSTM , H ′k−1 . The History-LSTM ( H-LSTM ) and Update-LSTM ( U-LSTM ) can be formulated as : H-LSTM i′k-1 = σ ( M ′ i ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′i ) , f ′k-1 = σ ( M ′ f ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′f ) , o′k-1 = σ ( M ′ o ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′o ) Ĉ′k-1 = tanh ( M ′ ĉ ∗Hk-1 + HistSSel ( Hk-1 , Hk-m : k-2 ) + b′ĉ ) , C′k-1 = f ′ k−1 C′k-2 + i′k−1 Ĉ′k−1 , H ′k-1 = o ′ k-1 tanh ( c′k-1 ) . ( 5 ) U-LSTM ik = σ ( Wi ∗Xk +Mi ∗H ′k-1 + bi ) , fk = σ ( Wf ∗Xk +Mf ∗H ′′k-1 + bf ) , ok = σ ( Wo ∗Xk +Mo ∗H ′k-1 + bo ) , Ĉk = tanh ( Wĉ ∗Xk +Mĉ ∗H ′′k-1 + bĉ ) , Ck = fk Ck-1 + ik Ĉk , Hk = ok tanh ( Ck ) . ( 6 )
This paper proposes a new LSTM architecture called LH-STM (and Double LH-STM). The main idea deals with having a history selection mechanism to directly extract what information from the past. The authors also propose to decompose the history and update in LH-STM into two networks called Double LH-STM. In experiments, the authors evaluate and compare their two architectures with previously proposed models. They show that their architecture outperforms previous in the PSNR, SSIM and VIF metrics.
SP:cf0db5624fc03cd71e331202c16808174b4a9ae7
Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View
1 INTRODUCTION . The Transformer is one of the most commonly used neural network architectures in natural language processing . Variants of the Transformer have achieved state-of-the-art performance in many tasks including language modeling ( Dai et al. , 2019 ; Al-Rfou et al. , 2018 ) and machine translation ( Vaswani et al. , 2017 ; Dehghani et al. , 2018 ; Edunov et al. , 2018 ) . Unsupervised pre-trained models based on the Transformer architecture also show impressive performance in many downstream tasks ( Radford et al. , 2019 ; Devlin et al. , 2018 ) . The Transformer architecture is mainly built by stacking layers , each of which consists of two sub-layers with residual connections : the self-attention sub-layer and the position-wise feed-forward network ( FFN ) sub-layer . For a given sentence , the self-attention sub-layer considers the semantics and dependencies of words at different positions and uses that information to capture the internal structure and representations of the sentence . The position-wise FFN sub-layer is applied to each position separately and identically to encode context at each position into higher-level representations . Although the Transformer architecture has demonstrated promising results in many tasks , its design principle is not fully understood , and thus the strength of the architecture is not fully exploited . As far as we know , there is little work studying the foundation of the Transformer or different design choices . In this paper , we provide a novel perspective towards understanding the architecture . In particular , we are the first to show that the Transformer architecture is inherently related to the Multi-Particle Dynamic System ( MPDS ) in physics . MPDS is a well-established research field which aims at modeling how a collection of particles move in the space using differential equations ( Moulton , 2012 ) . In MPDS , the behavior of each particle is usually modeled by two factors separately . The first factor is the convection which concerns the mechanism of each particle regardless of other particles in the system , and the second factor is the diffusion which models the movement of the particle resulting from other particles in the system . Inspired by the relationship between the ODE and neural networks ( Lu et al. , 2017 ; Chen et al. , 2018a ) , we first show that the Transformer layers can be naturally interpreted as a numerical ODE solver for a first-order convection-diffusion equation in MPDS . To be more specific , the selfattention sub-layer , which transforms the semantics at one position by attending over all other positions , corresponds to the diffusion term ; The position-wise FFN sub-layer , which is applied to each position separately and identically , corresponds to the convection term . The number of stacked layers in the Transformer corresponds to the time dimension in ODE . In this way , the stack of self-attention sub-layers and position-wise FFN sub-layers with residual connections can be viewed as solving the ODE problem numerically using the Lie-Trotter splitting scheme ( Geiser , 2009 ) and the Euler ’ s method ( Ascher & Petzold , 1998 ) . By this interpretation , we have a novel understanding of learning contextual representations of a sentence using the Transformer : the feature ( a.k.a , embedding ) of words in a sequence can be considered as the initial positions of a collection of particles , and the latent representations abstracted in stacked Transformer layers can be viewed as the location of particles moving in a high-dimensional space at different time points . Such an interpretation not only provides a new perspective on the Transformer but also inspires us to design new structures by leveraging the rich literature of numerical analysis . The Lie-Trotter splitting scheme is simple but not accurate and often leads to high approximation error ( Geiser , 2009 ) . The Strang-Marchuk splitting scheme ( Strang , 1968 ) is developed to reduce the approximation error by a simple modification to the Lie-Trotter splitting scheme and is theoretically more accurate . Mapped to neural network design , the Strang-Marchuk splitting scheme suggests that there should be three sub-layers : two position-wise feed-forward sub-layers with half-step residual connections and one self-attention sub-layer placed in between with a full-step residual connection . By doing so , the stacked layers will be more accurate from the ODE ’ s perspective and will lead to better performance in deep learning . As the FFN-attention-FFN layer is “ Macaron-like ” , we call it Macaron layer and call the network composed of Macaron layers the Macaron Net . We conduct extensive experiments on both supervised and unsupervised learning tasks . For each task , we replace Transformer layers by Macaron layers and keep the number of parameters to be the same . Experiments show that the Macaron Net can achieve higher accuracy than the Transformer on all tasks which , in a way , is consistent with the ODE theory . 2 BACKGROUND . 2.1 RELATIONSHIP BETWEEN NEURAL NETWORKS AND ODE . Recently , there are extensive studies to bridge deep neural networks with ordinary differential equations ( Weinan , 2017 ; Lu et al. , 2017 ; Haber & Ruthotto , 2017 ; Chen et al. , 2018a ; Zhang et al. , 2019b ; Sonoda & Murata , 2019 ; Thorpe & van Gennip , 2018 ) . We here present a brief introduction to such a relationship and discuss how previous works borrow powerful tools from numerical analysis to help deep neural network design . A first-order ODE problem is usually defined as to solve the equation ( i.e. , calculate x ( t ) for any t ) which satisfies the following first-order derivative and the initial condition : dx ( t ) dt = f ( x , t ) , x ( t0 ) = w , ( 1 ) in which x ( t ) ∈ Rd for all t ≥ t0 . ODEs usually have physical interpretations . For example , x ( t ) can be considered as the location of a particle moving in the d-dimensional space and the first order time derivative can be considered as the velocity of the particle . Usually there is no analytic solution to Eqn ( 1 ) and the problem has to be solved numerically . The simplest numerical ODE solver is the Euler ’ s method ( Ascher & Petzold , 1998 ) . The Euler ’ s method discretizes the time derivative dx ( t ) dt by its first-order approximation x ( t2 ) −x ( t1 ) t2−t1 ≈ f ( x ( t1 ) , t1 ) . By doing so , for the fixed time horizon T = t0 + γL , we can estimate x ( T ) from x0 . = x ( t0 ) by sequentially estimating xl+1 . = x ( tl+1 ) using xl+1 = xl + γf ( xl , tl ) ( 2 ) where l = 0 , · · · , L − 1 , tl = t0 + γl is the time point corresponds to xl , and γ = ( T − t0 ) /L is the step size . As we can see , this is mathematically equivalent to the ResNet architecture ( Lu et al. , 2017 ; Chen et al. , 2018a ) : The function γf ( xl , tl ) can be considered as a neural-network block , and the second argument tl in the function indicates the set of parameters in the l-th layer . The simple temporal discretization by Euler ’ s method naturally leads to the residual connection . Observing such a strong relationship , researchers use ODE theory to explain and improve the neural network architectures mainly designed for computer vision tasks . Lu et al . ( 2017 ) ; Chen et al . ( 2018a ) show any parametric ODE solver can be viewed as a deep residual network ( probably with infinite layers ) , and the parameters in the ODE can be optimized through backpropagation . Recent works discover that new neural networks inspired by sophisticated numerical ODE solvers can lead to better performance . For example , Zhu et al . ( 2018 ) uses a high-precision Runge-Kutta method to design a neural network , and the new architecture achieves higher accuracy . Haber & Ruthotto ( 2017 ) uses a leap-frog method to construct a reversible neural network . Liao & Poggio ( 2016 ) ; Chang et al . ( 2019 ) try to understand recurrent neural networks from the ODE ’ s perspective , and Tao et al . ( 2018 ) uses non-local differential equations to model non-local neural networks . 2.2 TRANSFORMER . The Transformer architecture is usually developed by stacking Transformer layers ( Vaswani et al. , 2017 ; Devlin et al. , 2018 ) . A Transformer layer operates on a sequence of vectors and outputs a new sequence of the same shape . The computation inside a layer is decomposed into two steps : the vectors first pass through a ( multi-head ) self-attention sub-layer and the output will be further put into a position-wise feed-forward network sub-layer . Residual connection ( He et al. , 2016 ) and layer normalization ( Lei Ba et al. , 2016 ) are employed for both sub-layers . The visualization of a Transformer layer is shown in Figure 2 ( a ) and the two sub-layers are defined as below . Self-attention sub-layer The attention mechanism can be formulated as querying a dictionary with key-value pairs ( Vaswani et al. , 2017 ) , e.g. , Attention ( Q , K , V ) = softmax ( QKT / √ dmodel ) · V , where dmodel is the dimensionality of the hidden representations and Q ( Query ) , K ( Key ) , V ( Value ) are specified as the hidden representations of the previous layer in the so-called self-attention sublayers in the Transformer architecture . The multi-head variant of attention allows the model to jointly attend to information from different representation subspaces , and is defined as Multi-head ( Q , K , V ) = Concat ( head1 , · · · , headH ) WO , ( 3 ) headk = Attention ( QW Q k , KW K k , V W V k ) , ( 4 ) where WQk ∈ Rdmodel×dK , WKk ∈ Rdmodel×dK , WVk ∈ Rdmodel×dV , and WO ∈ RHdV ×dmodel are project parameter matrices , H is the number of heads , and dK and dV are the dimensionalities of Key and Value . Position-wise FFN sub-layer In addition to the self-attention sub-layer , each Transformer layer also contains a fully connected feed-forward network , which is applied to each position separately and identically . This feed-forward network consists of two linear transformations with an activation function σ in between . Specially , given vectors h1 , . . . , hn , a position-wise FFN sub-layer transforms each hi as FFN ( hi ) = σ ( hiW1 + b1 ) W2 + b2 , where W1 , W2 , b1 and b2 are parameters . In this paper , we take the first attempt to provide an understanding of the feature extraction process in natural language processing from the ODE ’ s viewpoint . As discussed in Section 2.1 , several works interpret the standard ResNet using the ODE theory . However , we found this interpretation can not be directly applied to the Transformer architecture . First , different from vision applications whose size of the input ( e.g. , an image ) is usually predefined and fixed , the input ( e.g. , a sentence ) in natural language processing is always of variable length , which makes the single-particle ODE formulation used in previous works not applicable . Second , the Transformer layer contains very distinct sub-layers . The self-attention sub-layer takes the information from all positions as input while the position-wise feed-forward layer is applied to each position separately . How to interpret these heterogeneous components by ODE is also not covered by previous works ( Tao et al. , 2018 ; Chen et al. , 2018a ) .
In this work, the authors show that the sequence of self-attention and feed-forward layers within a Transformer can be interpreted as an approximate numerical solution to a set of coupled ODEs. Based on this insight, the authors propose to replace the first-order Lie-Trotter splitting scheme by the more accurate, second-order Strang splitting scheme. They then present experimental results that indicate an improved performance of their Macaron Net compared to the Transformer and argue that this is due to the former being a more accurate numerical solution to the underlying set of ODEs.
SP:69da1cecdf9fc25a9e6263943a5396b606cdcfef
Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View
1 INTRODUCTION . The Transformer is one of the most commonly used neural network architectures in natural language processing . Variants of the Transformer have achieved state-of-the-art performance in many tasks including language modeling ( Dai et al. , 2019 ; Al-Rfou et al. , 2018 ) and machine translation ( Vaswani et al. , 2017 ; Dehghani et al. , 2018 ; Edunov et al. , 2018 ) . Unsupervised pre-trained models based on the Transformer architecture also show impressive performance in many downstream tasks ( Radford et al. , 2019 ; Devlin et al. , 2018 ) . The Transformer architecture is mainly built by stacking layers , each of which consists of two sub-layers with residual connections : the self-attention sub-layer and the position-wise feed-forward network ( FFN ) sub-layer . For a given sentence , the self-attention sub-layer considers the semantics and dependencies of words at different positions and uses that information to capture the internal structure and representations of the sentence . The position-wise FFN sub-layer is applied to each position separately and identically to encode context at each position into higher-level representations . Although the Transformer architecture has demonstrated promising results in many tasks , its design principle is not fully understood , and thus the strength of the architecture is not fully exploited . As far as we know , there is little work studying the foundation of the Transformer or different design choices . In this paper , we provide a novel perspective towards understanding the architecture . In particular , we are the first to show that the Transformer architecture is inherently related to the Multi-Particle Dynamic System ( MPDS ) in physics . MPDS is a well-established research field which aims at modeling how a collection of particles move in the space using differential equations ( Moulton , 2012 ) . In MPDS , the behavior of each particle is usually modeled by two factors separately . The first factor is the convection which concerns the mechanism of each particle regardless of other particles in the system , and the second factor is the diffusion which models the movement of the particle resulting from other particles in the system . Inspired by the relationship between the ODE and neural networks ( Lu et al. , 2017 ; Chen et al. , 2018a ) , we first show that the Transformer layers can be naturally interpreted as a numerical ODE solver for a first-order convection-diffusion equation in MPDS . To be more specific , the selfattention sub-layer , which transforms the semantics at one position by attending over all other positions , corresponds to the diffusion term ; The position-wise FFN sub-layer , which is applied to each position separately and identically , corresponds to the convection term . The number of stacked layers in the Transformer corresponds to the time dimension in ODE . In this way , the stack of self-attention sub-layers and position-wise FFN sub-layers with residual connections can be viewed as solving the ODE problem numerically using the Lie-Trotter splitting scheme ( Geiser , 2009 ) and the Euler ’ s method ( Ascher & Petzold , 1998 ) . By this interpretation , we have a novel understanding of learning contextual representations of a sentence using the Transformer : the feature ( a.k.a , embedding ) of words in a sequence can be considered as the initial positions of a collection of particles , and the latent representations abstracted in stacked Transformer layers can be viewed as the location of particles moving in a high-dimensional space at different time points . Such an interpretation not only provides a new perspective on the Transformer but also inspires us to design new structures by leveraging the rich literature of numerical analysis . The Lie-Trotter splitting scheme is simple but not accurate and often leads to high approximation error ( Geiser , 2009 ) . The Strang-Marchuk splitting scheme ( Strang , 1968 ) is developed to reduce the approximation error by a simple modification to the Lie-Trotter splitting scheme and is theoretically more accurate . Mapped to neural network design , the Strang-Marchuk splitting scheme suggests that there should be three sub-layers : two position-wise feed-forward sub-layers with half-step residual connections and one self-attention sub-layer placed in between with a full-step residual connection . By doing so , the stacked layers will be more accurate from the ODE ’ s perspective and will lead to better performance in deep learning . As the FFN-attention-FFN layer is “ Macaron-like ” , we call it Macaron layer and call the network composed of Macaron layers the Macaron Net . We conduct extensive experiments on both supervised and unsupervised learning tasks . For each task , we replace Transformer layers by Macaron layers and keep the number of parameters to be the same . Experiments show that the Macaron Net can achieve higher accuracy than the Transformer on all tasks which , in a way , is consistent with the ODE theory . 2 BACKGROUND . 2.1 RELATIONSHIP BETWEEN NEURAL NETWORKS AND ODE . Recently , there are extensive studies to bridge deep neural networks with ordinary differential equations ( Weinan , 2017 ; Lu et al. , 2017 ; Haber & Ruthotto , 2017 ; Chen et al. , 2018a ; Zhang et al. , 2019b ; Sonoda & Murata , 2019 ; Thorpe & van Gennip , 2018 ) . We here present a brief introduction to such a relationship and discuss how previous works borrow powerful tools from numerical analysis to help deep neural network design . A first-order ODE problem is usually defined as to solve the equation ( i.e. , calculate x ( t ) for any t ) which satisfies the following first-order derivative and the initial condition : dx ( t ) dt = f ( x , t ) , x ( t0 ) = w , ( 1 ) in which x ( t ) ∈ Rd for all t ≥ t0 . ODEs usually have physical interpretations . For example , x ( t ) can be considered as the location of a particle moving in the d-dimensional space and the first order time derivative can be considered as the velocity of the particle . Usually there is no analytic solution to Eqn ( 1 ) and the problem has to be solved numerically . The simplest numerical ODE solver is the Euler ’ s method ( Ascher & Petzold , 1998 ) . The Euler ’ s method discretizes the time derivative dx ( t ) dt by its first-order approximation x ( t2 ) −x ( t1 ) t2−t1 ≈ f ( x ( t1 ) , t1 ) . By doing so , for the fixed time horizon T = t0 + γL , we can estimate x ( T ) from x0 . = x ( t0 ) by sequentially estimating xl+1 . = x ( tl+1 ) using xl+1 = xl + γf ( xl , tl ) ( 2 ) where l = 0 , · · · , L − 1 , tl = t0 + γl is the time point corresponds to xl , and γ = ( T − t0 ) /L is the step size . As we can see , this is mathematically equivalent to the ResNet architecture ( Lu et al. , 2017 ; Chen et al. , 2018a ) : The function γf ( xl , tl ) can be considered as a neural-network block , and the second argument tl in the function indicates the set of parameters in the l-th layer . The simple temporal discretization by Euler ’ s method naturally leads to the residual connection . Observing such a strong relationship , researchers use ODE theory to explain and improve the neural network architectures mainly designed for computer vision tasks . Lu et al . ( 2017 ) ; Chen et al . ( 2018a ) show any parametric ODE solver can be viewed as a deep residual network ( probably with infinite layers ) , and the parameters in the ODE can be optimized through backpropagation . Recent works discover that new neural networks inspired by sophisticated numerical ODE solvers can lead to better performance . For example , Zhu et al . ( 2018 ) uses a high-precision Runge-Kutta method to design a neural network , and the new architecture achieves higher accuracy . Haber & Ruthotto ( 2017 ) uses a leap-frog method to construct a reversible neural network . Liao & Poggio ( 2016 ) ; Chang et al . ( 2019 ) try to understand recurrent neural networks from the ODE ’ s perspective , and Tao et al . ( 2018 ) uses non-local differential equations to model non-local neural networks . 2.2 TRANSFORMER . The Transformer architecture is usually developed by stacking Transformer layers ( Vaswani et al. , 2017 ; Devlin et al. , 2018 ) . A Transformer layer operates on a sequence of vectors and outputs a new sequence of the same shape . The computation inside a layer is decomposed into two steps : the vectors first pass through a ( multi-head ) self-attention sub-layer and the output will be further put into a position-wise feed-forward network sub-layer . Residual connection ( He et al. , 2016 ) and layer normalization ( Lei Ba et al. , 2016 ) are employed for both sub-layers . The visualization of a Transformer layer is shown in Figure 2 ( a ) and the two sub-layers are defined as below . Self-attention sub-layer The attention mechanism can be formulated as querying a dictionary with key-value pairs ( Vaswani et al. , 2017 ) , e.g. , Attention ( Q , K , V ) = softmax ( QKT / √ dmodel ) · V , where dmodel is the dimensionality of the hidden representations and Q ( Query ) , K ( Key ) , V ( Value ) are specified as the hidden representations of the previous layer in the so-called self-attention sublayers in the Transformer architecture . The multi-head variant of attention allows the model to jointly attend to information from different representation subspaces , and is defined as Multi-head ( Q , K , V ) = Concat ( head1 , · · · , headH ) WO , ( 3 ) headk = Attention ( QW Q k , KW K k , V W V k ) , ( 4 ) where WQk ∈ Rdmodel×dK , WKk ∈ Rdmodel×dK , WVk ∈ Rdmodel×dV , and WO ∈ RHdV ×dmodel are project parameter matrices , H is the number of heads , and dK and dV are the dimensionalities of Key and Value . Position-wise FFN sub-layer In addition to the self-attention sub-layer , each Transformer layer also contains a fully connected feed-forward network , which is applied to each position separately and identically . This feed-forward network consists of two linear transformations with an activation function σ in between . Specially , given vectors h1 , . . . , hn , a position-wise FFN sub-layer transforms each hi as FFN ( hi ) = σ ( hiW1 + b1 ) W2 + b2 , where W1 , W2 , b1 and b2 are parameters . In this paper , we take the first attempt to provide an understanding of the feature extraction process in natural language processing from the ODE ’ s viewpoint . As discussed in Section 2.1 , several works interpret the standard ResNet using the ODE theory . However , we found this interpretation can not be directly applied to the Transformer architecture . First , different from vision applications whose size of the input ( e.g. , an image ) is usually predefined and fixed , the input ( e.g. , a sentence ) in natural language processing is always of variable length , which makes the single-particle ODE formulation used in previous works not applicable . Second , the Transformer layer contains very distinct sub-layers . The self-attention sub-layer takes the information from all positions as input while the position-wise feed-forward layer is applied to each position separately . How to interpret these heterogeneous components by ODE is also not covered by previous works ( Tao et al. , 2018 ; Chen et al. , 2018a ) .
The paper points out a formal analogy between transformers and an ODE modelling multi-particle convection (the feed-forward network) and diffusion (the self-attention head). The paper then adapts the Strang-Marchuk splitting scheme for solving ODEs to construct a slightly different transformer architecture: “FFN of Attention of FFN”, instead of “FFN of Attention”. The new architecture, refered to as a Macaron-Net, yields better performance in a variety of experiments.
SP:69da1cecdf9fc25a9e6263943a5396b606cdcfef
Risk Averse Value Expansion for Sample Efficient and Robust Policy Learning
1 INTRODUCTION . In contrast to the tremendous progress made by model-free reinforcement learning algorithms in the domain of games ( Mnih et al. , 2015 ; Silver et al. , 2017 ; Vinyals et al. , 2019 ) , poor sample efficiency has risen up as a great challenge to RL , especially when interacting with the real world . Toward this challenge , a promising direction is to integrate the dynamics model to enhance the sample efficiency of the learning process ( Sutton , 1991 ; Calandra et al. , 2016 ; Kalweit & Boedecker , 2017 ; Oh et al. , 2017 ; Racanière et al. , 2017 ) . However , classic model-based reinforcement learning ( MBRL ) methods tend to lag behind the model-free methods ( MFRL ) asymptotically , especially in cases of noisy environments and long trajectories . The hybrid combination of MFRL and MBRL ( Hybrid-RL for short ) has attracted much attention due to this reason . A lot of efforts has been devoted to this field , including the dyna algorithm ( Kurutach et al. , 2018 ) , model-based value expansion ( Feinberg et al. , 2018 ) , I2A ( Weber et al. , 2017 ) , etc . The robustness of the learned policy is another concern in RL . For stochastic environments , policy can be vulnerable to tiny disturbance and occasionally drop into catastrophic consequences . In MFRL , off-policy RL ( such as DQN , DDPG ) typically suffers from such problems , which in the end leads to instability in the performance including sudden drop in the rewards . To solve such problem , risk sensitive MFRL not only maximize the expected return , but also try to reduce those catastrophic outcomes ( Garcıa & Fernández , 2015 ; Dabney et al. , 2018a ; Pan et al. , 2019 ) . For MBRL and Hybrid-RL , without modeling the uncertainty in the environment ( especially for continuous states and actions ) , it often leads to higher function approximation errors and poorer performances . It is proposed that complete modeling of uncertainty in transition can obviously improve the performance ( Chua et al. , 2018 ) , however , reducing risks in MBRL and Hybrid-RL has not been sufficiently studied yet . In order to achieve both sample efficiency and robustness at the same time , we propose a new Hybrid-RL method more capable of solving stochastic and risky environments . The proposed method , namely Risk Averse Value Expansion ( RAVE ) , is an extension of the model-based value expansion ( MVE ) ( Feinberg et al. , 2018 ) and stochastic ensemble value expansion ( STEVE ) ( Buckman et al. , 2018 ) . We systematically analyse the approximation errors of different methods in stochastic environments . We borrow ideas from the uncertainty modeling ( Chua et al . ( 2018 ) ) and risk averse reinforcement learning . The probabilistic ensemble environment model is used , which captures not only the variance in estimation ( also called epistemic uncertainty ) , but also stochastic transition nature of the environment ( also called aleatoric uncertainty ) . Utilizing the ensemble of estimations , we further adopt a dynamic confidence lower bound of the target value function to make the policy more risk-sensitive . We compare RAVE with prior MFRL and Hybrid-RL baselines , showing that RAVE not only yields SOTA expected performance , but also facilitates the robustness of the policy . 2 RELATED WORKS . The model-based value expansion ( MVE ) ( Feinberg et al. , 2018 ) is a Hybrid-RL algorithm . Unlike typical MFRL such as DQN that uses only 1 step bootstrapping , MVE uses the imagination rollouts of length H to predict the target value . The assistance of environment model can greatly improve the sample efficiency at the start , but the precision of long term inference becomes limited asymptotically . In order to properly balance the contribution of the value expansion of different horizons , stochastic ensemble value expansion ( STEVE ) ( Buckman et al. , 2018 ) adopts an interpolation of value expansion of different horizon . The accuracy of the expansion is estimated through the ensemble of environment models as well as value functions . Ensemble of environment models also models the uncertainty to some extent , however , ensemble of deterministic model captures mainly epistemic uncertainty instead of stochastic transitions ( Chua et al. , 2018 ) . The uncertainty or the function approximation error is typically divided into three classes ( Geman et al. , 1992 ) : the noise exists in the objective environment , e.g. , the stochastic transitions , which is also called aleatoric uncertainty ( Chua et al. , 2018 ) . The model bias is the error produced by the limited expressive power of the approximating function , which is measured by the expectation of ground truth and the prediction of the model , in case that infinite training data is provided . The variance is the uncertainty brought by insufficient training data , which is also called epistemic uncertainty . Dabney et al . ( 2018b ) discuss the epistemic and aleatoric uncertainty in their work and focus on the latter one to improve the distributional RL . Recent work suggests that ensemble of probabilistic model ( PE ) is considered as more thorough modeling of uncertainty ( Chua et al. , 2018 ) , while simply aggregate deterministic model captures only variance or epistemic uncertainty . The stochastic transition is more related to the noise ( or aleatoric uncertainty ) , and the epistemic uncertainty is usually of interest to many works in terms of exploitation & exploration ( Pathak et al. , 2017 ; Schmidhuber , 2010 ; Oudeyer & Kaplan , 2009 ) . Other works adopt ensemble of deterministic value function for exploration ( Osband et al. , 2016 ; Buckman et al. , 2018 ) . Risks in RL typically refer to the inherent uncertainty of the environment and the fact that policy may perform poorly in some cases ( Garcıa & Fernández , 2015 ) . Risk sensitive learning requires not only maximization of expected rewards , but also lower variances and risks in performance . Toward this object , some works adopt the variance of the return ( Sato et al. , 2001 ; Pan et al. , 2019 ; Reddy et al. , 2019 ) , or the worst-case outcome ( Heger , 1994 ; Gaskett , 2003 ) in either policy learning ( Pan et al. , 2019 ; Reddy et al. , 2019 ) , exploration ( Smirnova et al. , 2019 ) , or distributional value estimates ( Dabney et al. , 2018a ) . An interesting issue in risk reduction is that reduction of risks is typically found to be conflicting with exploration and exploitation that try to maximize the reward in the long run . Authors in ( Pan et al. , 2019 ) introduce two adversarial agents ( risk aversion and longterm reward seeking ) that act in combination to solve this problem . Still , it remains quite tricky and empiristic to trade-off between risk-sensitive and risk-seeking ( exploration ) in RL . In this paper , we propose a dynamic confidence bound for this purpose . A number of prior works have studied function approximation error that leads to overestimation and sub-optimal solution in MFRL . Double DQN ( DDQN ) ( Van Hasselt et al. , 2016 ) improves over DQN through disentangling the target value function and the target policy that pursues maximum value . In TD3 ( Fujimoto et al. , 2018 ) the authors suggest that systematic overestimation of value function also exists in actor-critic MFRL . They use an ensemble of two value functions , with the minimum estimate being used as the target value . Selecting the lower value estimation is similar to using uncertainty or lower confidence bound which is adopted by the other risk sensitive methods ( Pan et al. , 2019 ) , though they claimed different motivations . 3 PRELIMINARIES . 3.1 ACTOR-CRITIC MODEL-FREE REINFORCEMENT LEARNING . The Markov Decision Processes ( MDP ) is used to describe the process of an agent interacting with the environment . The agent selects the action at ∈ A at each time step t. After executing the action , it receives a new observation st+1 ∈ S and a feedback rt ∈ R from the environment . As we focus mainly on the environments of continuous action , we denote the parametric deterministic policy that the agent uses to decide its action as at = µθ ( st ) . Typically we add Gaussian exploration noise on top of the deterministic policy , such that we have a stochastic behavioral policy πθ , σ : S×A → R. It is calculated as πθ , σ ( st , at ) = pN ( at|µθ ( st ) , σ2 ) , where pN ( x|m , σ2 ) represents the probability density at x in a Gaussian distribution N ( m , σ2 ) . As the interaction process continues , the agent generates a trajectory τ = ( s0 , a0 , r0 , s1 , a1 , r1 , ... ) following the policy πθ , σ . For finite horizon MDP , we use the indicator d : S → { 0 , 1 } to mark whether the episode is terminated . The objective of RL is to find the optimal policy π∗ to maximize the expected discounted sum of rewards along the trajectory . The value performing the action a with policy π at the state s is defined by Qπ ( s , a ) = Es0=s , a0=a , τ∼π ∑∞ t=0 γ trt , where 0 < γ < 1 is the discount factor . The value iteration in model-free RL tries to approximate the optimal valueQπ ∗ with a parametric value function Q̂φ by minimizing the Temporal Difference ( TD ) error , where φ is the parameter to be optimized . The TD error between the estimates of Q-value and the corresponding target values is shown in equation . 1 , where φ′ is a delayed copy of the parameter φ , and a′ ∼ πθ′ , with θ′ being a delayed copy of θ ( Lillicrap et al. , 2015 ) . L φ = Eτ [ ∑ t ( Q̂target ( rt , st+1 ) − Q̂φ ( st , at ) ) 2 ] with Q̂target ( rt , st+1 ) = rt + γ · ( 1− d ( st+1 ) ) · Q̂φ′ ( st+1 , a′ ) ) ( 1 ) To optimize the deterministic policy function in a continuous action space , deep deterministic policy gradient ( DDPG ) ( Lillicrap et al. , 2015 ) maximizes the value function ( or minimizes the negative value function ) under the policy µθ with respect to parameter θ , shown in equation . 2 . L θ = −Eτ [ ∑ t Q̂φ′ ( st , µθ ( st ) ) ] ( 2 ) 3.2 ENVIRONMENT MODELING . To model the environment in continuous space , an environment model is typically composed of three individual mapping functions : f̂r , ζr : S × A× S → R , f̂s , ζs : S × A → S , and f̂d , ζd : S → [ 0 , 1 ] , which are used to approximate the feedback , next state and probability of the terminal indicator respectively ( Gu et al. , 2016 ; Feinberg et al. , 2018 ) . Here ζr , ζs and ζd are used to represent the parameters of the corresponding mapping functions . With the environment model , starting from st , at , we can predict the next state and reward by ŝt+1 = f̂s , ζs ( st , at ) , r̂t = f̂r , ζr ( st , at , ŝt+1 ) , d̂t+1 = f̂d , ζd ( ŝt+1 ) , ( 3 ) and this process might go on to generate a complete imagined trajectory of [ st , at , r̂t , ŝt+1 , ... ] . The neural network is commonly used as an environment model due to its powerful express ability . To optimize the parameter ζ we need to minimize the mean square error ( or the cross entropy ) of the prediction and the ground truth , given the trajectories τ under the behavioral policy .
This paper proposes a novel deep reinforcement learning algorithm at the intersection of model-based and model-free reinforcement learning: Risk Averse Value Expansion (RAVE). Overall, this work represents a significant but incremental step forwards for this "hybrid"-RL class of algorithms. However, the paper itself has significant weaknesses in its writing, analysis, and presentation of ideas.
SP:fc98effb95b87ad325f609c31b336c7dafd9ac30
Risk Averse Value Expansion for Sample Efficient and Robust Policy Learning
1 INTRODUCTION . In contrast to the tremendous progress made by model-free reinforcement learning algorithms in the domain of games ( Mnih et al. , 2015 ; Silver et al. , 2017 ; Vinyals et al. , 2019 ) , poor sample efficiency has risen up as a great challenge to RL , especially when interacting with the real world . Toward this challenge , a promising direction is to integrate the dynamics model to enhance the sample efficiency of the learning process ( Sutton , 1991 ; Calandra et al. , 2016 ; Kalweit & Boedecker , 2017 ; Oh et al. , 2017 ; Racanière et al. , 2017 ) . However , classic model-based reinforcement learning ( MBRL ) methods tend to lag behind the model-free methods ( MFRL ) asymptotically , especially in cases of noisy environments and long trajectories . The hybrid combination of MFRL and MBRL ( Hybrid-RL for short ) has attracted much attention due to this reason . A lot of efforts has been devoted to this field , including the dyna algorithm ( Kurutach et al. , 2018 ) , model-based value expansion ( Feinberg et al. , 2018 ) , I2A ( Weber et al. , 2017 ) , etc . The robustness of the learned policy is another concern in RL . For stochastic environments , policy can be vulnerable to tiny disturbance and occasionally drop into catastrophic consequences . In MFRL , off-policy RL ( such as DQN , DDPG ) typically suffers from such problems , which in the end leads to instability in the performance including sudden drop in the rewards . To solve such problem , risk sensitive MFRL not only maximize the expected return , but also try to reduce those catastrophic outcomes ( Garcıa & Fernández , 2015 ; Dabney et al. , 2018a ; Pan et al. , 2019 ) . For MBRL and Hybrid-RL , without modeling the uncertainty in the environment ( especially for continuous states and actions ) , it often leads to higher function approximation errors and poorer performances . It is proposed that complete modeling of uncertainty in transition can obviously improve the performance ( Chua et al. , 2018 ) , however , reducing risks in MBRL and Hybrid-RL has not been sufficiently studied yet . In order to achieve both sample efficiency and robustness at the same time , we propose a new Hybrid-RL method more capable of solving stochastic and risky environments . The proposed method , namely Risk Averse Value Expansion ( RAVE ) , is an extension of the model-based value expansion ( MVE ) ( Feinberg et al. , 2018 ) and stochastic ensemble value expansion ( STEVE ) ( Buckman et al. , 2018 ) . We systematically analyse the approximation errors of different methods in stochastic environments . We borrow ideas from the uncertainty modeling ( Chua et al . ( 2018 ) ) and risk averse reinforcement learning . The probabilistic ensemble environment model is used , which captures not only the variance in estimation ( also called epistemic uncertainty ) , but also stochastic transition nature of the environment ( also called aleatoric uncertainty ) . Utilizing the ensemble of estimations , we further adopt a dynamic confidence lower bound of the target value function to make the policy more risk-sensitive . We compare RAVE with prior MFRL and Hybrid-RL baselines , showing that RAVE not only yields SOTA expected performance , but also facilitates the robustness of the policy . 2 RELATED WORKS . The model-based value expansion ( MVE ) ( Feinberg et al. , 2018 ) is a Hybrid-RL algorithm . Unlike typical MFRL such as DQN that uses only 1 step bootstrapping , MVE uses the imagination rollouts of length H to predict the target value . The assistance of environment model can greatly improve the sample efficiency at the start , but the precision of long term inference becomes limited asymptotically . In order to properly balance the contribution of the value expansion of different horizons , stochastic ensemble value expansion ( STEVE ) ( Buckman et al. , 2018 ) adopts an interpolation of value expansion of different horizon . The accuracy of the expansion is estimated through the ensemble of environment models as well as value functions . Ensemble of environment models also models the uncertainty to some extent , however , ensemble of deterministic model captures mainly epistemic uncertainty instead of stochastic transitions ( Chua et al. , 2018 ) . The uncertainty or the function approximation error is typically divided into three classes ( Geman et al. , 1992 ) : the noise exists in the objective environment , e.g. , the stochastic transitions , which is also called aleatoric uncertainty ( Chua et al. , 2018 ) . The model bias is the error produced by the limited expressive power of the approximating function , which is measured by the expectation of ground truth and the prediction of the model , in case that infinite training data is provided . The variance is the uncertainty brought by insufficient training data , which is also called epistemic uncertainty . Dabney et al . ( 2018b ) discuss the epistemic and aleatoric uncertainty in their work and focus on the latter one to improve the distributional RL . Recent work suggests that ensemble of probabilistic model ( PE ) is considered as more thorough modeling of uncertainty ( Chua et al. , 2018 ) , while simply aggregate deterministic model captures only variance or epistemic uncertainty . The stochastic transition is more related to the noise ( or aleatoric uncertainty ) , and the epistemic uncertainty is usually of interest to many works in terms of exploitation & exploration ( Pathak et al. , 2017 ; Schmidhuber , 2010 ; Oudeyer & Kaplan , 2009 ) . Other works adopt ensemble of deterministic value function for exploration ( Osband et al. , 2016 ; Buckman et al. , 2018 ) . Risks in RL typically refer to the inherent uncertainty of the environment and the fact that policy may perform poorly in some cases ( Garcıa & Fernández , 2015 ) . Risk sensitive learning requires not only maximization of expected rewards , but also lower variances and risks in performance . Toward this object , some works adopt the variance of the return ( Sato et al. , 2001 ; Pan et al. , 2019 ; Reddy et al. , 2019 ) , or the worst-case outcome ( Heger , 1994 ; Gaskett , 2003 ) in either policy learning ( Pan et al. , 2019 ; Reddy et al. , 2019 ) , exploration ( Smirnova et al. , 2019 ) , or distributional value estimates ( Dabney et al. , 2018a ) . An interesting issue in risk reduction is that reduction of risks is typically found to be conflicting with exploration and exploitation that try to maximize the reward in the long run . Authors in ( Pan et al. , 2019 ) introduce two adversarial agents ( risk aversion and longterm reward seeking ) that act in combination to solve this problem . Still , it remains quite tricky and empiristic to trade-off between risk-sensitive and risk-seeking ( exploration ) in RL . In this paper , we propose a dynamic confidence bound for this purpose . A number of prior works have studied function approximation error that leads to overestimation and sub-optimal solution in MFRL . Double DQN ( DDQN ) ( Van Hasselt et al. , 2016 ) improves over DQN through disentangling the target value function and the target policy that pursues maximum value . In TD3 ( Fujimoto et al. , 2018 ) the authors suggest that systematic overestimation of value function also exists in actor-critic MFRL . They use an ensemble of two value functions , with the minimum estimate being used as the target value . Selecting the lower value estimation is similar to using uncertainty or lower confidence bound which is adopted by the other risk sensitive methods ( Pan et al. , 2019 ) , though they claimed different motivations . 3 PRELIMINARIES . 3.1 ACTOR-CRITIC MODEL-FREE REINFORCEMENT LEARNING . The Markov Decision Processes ( MDP ) is used to describe the process of an agent interacting with the environment . The agent selects the action at ∈ A at each time step t. After executing the action , it receives a new observation st+1 ∈ S and a feedback rt ∈ R from the environment . As we focus mainly on the environments of continuous action , we denote the parametric deterministic policy that the agent uses to decide its action as at = µθ ( st ) . Typically we add Gaussian exploration noise on top of the deterministic policy , such that we have a stochastic behavioral policy πθ , σ : S×A → R. It is calculated as πθ , σ ( st , at ) = pN ( at|µθ ( st ) , σ2 ) , where pN ( x|m , σ2 ) represents the probability density at x in a Gaussian distribution N ( m , σ2 ) . As the interaction process continues , the agent generates a trajectory τ = ( s0 , a0 , r0 , s1 , a1 , r1 , ... ) following the policy πθ , σ . For finite horizon MDP , we use the indicator d : S → { 0 , 1 } to mark whether the episode is terminated . The objective of RL is to find the optimal policy π∗ to maximize the expected discounted sum of rewards along the trajectory . The value performing the action a with policy π at the state s is defined by Qπ ( s , a ) = Es0=s , a0=a , τ∼π ∑∞ t=0 γ trt , where 0 < γ < 1 is the discount factor . The value iteration in model-free RL tries to approximate the optimal valueQπ ∗ with a parametric value function Q̂φ by minimizing the Temporal Difference ( TD ) error , where φ is the parameter to be optimized . The TD error between the estimates of Q-value and the corresponding target values is shown in equation . 1 , where φ′ is a delayed copy of the parameter φ , and a′ ∼ πθ′ , with θ′ being a delayed copy of θ ( Lillicrap et al. , 2015 ) . L φ = Eτ [ ∑ t ( Q̂target ( rt , st+1 ) − Q̂φ ( st , at ) ) 2 ] with Q̂target ( rt , st+1 ) = rt + γ · ( 1− d ( st+1 ) ) · Q̂φ′ ( st+1 , a′ ) ) ( 1 ) To optimize the deterministic policy function in a continuous action space , deep deterministic policy gradient ( DDPG ) ( Lillicrap et al. , 2015 ) maximizes the value function ( or minimizes the negative value function ) under the policy µθ with respect to parameter θ , shown in equation . 2 . L θ = −Eτ [ ∑ t Q̂φ′ ( st , µθ ( st ) ) ] ( 2 ) 3.2 ENVIRONMENT MODELING . To model the environment in continuous space , an environment model is typically composed of three individual mapping functions : f̂r , ζr : S × A× S → R , f̂s , ζs : S × A → S , and f̂d , ζd : S → [ 0 , 1 ] , which are used to approximate the feedback , next state and probability of the terminal indicator respectively ( Gu et al. , 2016 ; Feinberg et al. , 2018 ) . Here ζr , ζs and ζd are used to represent the parameters of the corresponding mapping functions . With the environment model , starting from st , at , we can predict the next state and reward by ŝt+1 = f̂s , ζs ( st , at ) , r̂t = f̂r , ζr ( st , at , ŝt+1 ) , d̂t+1 = f̂d , ζd ( ŝt+1 ) , ( 3 ) and this process might go on to generate a complete imagined trajectory of [ st , at , r̂t , ŝt+1 , ... ] . The neural network is commonly used as an environment model due to its powerful express ability . To optimize the parameter ζ we need to minimize the mean square error ( or the cross entropy ) of the prediction and the ground truth , given the trajectories τ under the behavioral policy .
This paper expands on previous work on hybrid model-based and model-free reinforcement learning. Specifically, it expands on the ideas in Model-based Value Expansion (MVE) and Stochastic Ensemble Value Expansion (STEVE) with a dynamically-scaled variance bias term to increase risk aversion over the course of learning, which the authors call Risk Averse Value Expansion (RAVE). Experimental results indicate notable improvements over their selected model-free and hybrid RL baselines on continuous control tasks in terms of initial learning efficiency (how many environment steps are needed to achieve a particular level of performance), asymptotic performance (how high the performance is given the same large number of environment steps), and avoidance of negative outcomes (how infrequently major negative outcomes are encountered over the course of training).
SP:fc98effb95b87ad325f609c31b336c7dafd9ac30
Encoding Musical Style with Transformer Autoencoders
1 INTRODUCTION . There has been significant progress in generative modeling , particularly with respect to creative applications such as art and music ( Oord et al. , 2016 ; Engel et al. , 2017b ; Ha & Eck , 2017 ; Huang et al. , 2019a ; Payne , 2019 ) . As the number of generative applications increase , it becomes increasingly important to consider how users can interact with such systems , particularly when the generative model functions as a tool in their creative process ( Engel et al. , 2017a ; Gillick et al. , 2019 ) To this end , we consider how one can learn high-level controls over the global structure of a generated sample . We focus on symbolic music generation , where Music Transformer ( Huang et al. , 2019b ) is the current state-of-the-art in generating high-quality samples that span over a minute in length . The challenge in controllable sequence generation is the fact that Transformers ( Vaswani et al. , 2017 ) and their variants excel as language models or in sequence-to-sequence tasks such as translation , but it is less clear as to how they can : ( 1 ) learn and ( 2 ) incorporate global conditioning information at inference time . This contrasts with traditional generative models for images such as the variational autoencoder ( VAE ) ( Kingma & Welling , 2013 ) or generative adversarial network ( GAN ) ( Goodfellow et al. , 2014 ) which typically incorprate global conditioning as part of their training procedure ( Sohn et al. , 2015 ; Sønderby et al. , 2016 ; Isola et al. , 2017 ; Van den Oord et al. , 2016 ) . In this work , we introduce the Transformer autoencoder , where we aggregate encodings across time to obtain a holistic representation of the performance style . We show that this learned global representation can be incorporated with other forms of structural conditioning in two ways . First , we show that given a performance , our model can generate performances that are similar in style to the provided input . Then , we explore different methods to combine melody and performance representations to harmonize a melody in the style of the given performance . In both cases , we show that combining both global and fine-scale encodings of the musical performance allows us to gain better control of generation , separately manipulating both the style and melody of the resulting sample . Empirically , we evaluate our model on two datasets : the publicly-available MAESTRO ( Hawthorne et al. , 2019 ) dataset , and an internal dataset of piano performances transcribed from 10,000+ hours of audio ( Anonymous for review ) . We find that the Transformer autoencoder is able to generate not only performances that sound similar to the input , but also accompaniments of melodies that follow a given style , as shown through both quantitative and qualitative experiments as well as a user listening study . In particular , we demonstrate that our model is capable of adapting to a particular musical style even in the case where we have one single input performance . 2 PRELIMINARIES . 2.1 DATA REPRESENTATION FOR MUSIC GENERATION . The MAESTRO ( Hawthorne et al. , 2019 ) dataset consists of over 1,100 classical piano performances , where each piece is represented as a MIDI file . The internal performance dataset consists of over 10,000 hours of piano performances transcribed from audio ( Anonymous for review ) . In both cases , we represent music as a sequence of discrete tokens , effectively formulating the generation task as a language modeling problem . The performances are encoded using the vocabulary as described in ( Oore et al. , 2018 ) , which captures expressive dynamics and timing . This performance encoding vocabulary consists of 128 note on events , 128 note off events , 100 time shift events representing time shifts in 10ms increments from 10ms to 1s , and 32 quantized velocity bins representing the velocity at which the 128 note on events were played . 2.2 MUSIC TRANSFORMER . We build our Transformer autoencoder from Music Transformer , a state-of-the-art generative model that is capable of generating music with long-term coherence ( Huang et al. , 2019b ) . While the original Transformer uses a self-attention mechanism that operates over absolute positional encodings of each token in a given sequence ( Vaswani et al. , 2017 ) , Music Transformer replaces this with relative attention ( Shaw et al. , 2018 ) , which allows the model to keep better track of regularity based on event orderings and periodicity in the performance . Huang et al . ( 2019b ) propose a novel algorithm for implementing relative self-attention that is significantly more memory-efficient , enabling the model to generate musical sequences over a minute in length . For more details regarding the self-attention mechanism and Transformers , we refer the reader to ( Vaswani et al. , 2017 ; Parmar et al. , 2018 ) . 3 CONDITIONAL GENERATION WITH THE TRANSFORMER AUTOENCODER . 3.1 MODEL ARCHITECTURE . We leverage the standard encoder and decoder stacks of the Transformer as a foundation for our model , with minor modifications that we outline below . Transformer Encoder : For both the performance and melody encoder networks , we use the Transformer ’ s stack of 6 layers which are each comprised of a : ( 1 ) multi-head relative attention mechanism ; and a ( 2 ) position-wise fully-connected feed-forward network . The performance encoder takes as input the event-based performance encoding of an input performance , while the melody encoder learns an encoding of the melody which has been extracted from the input performance . Depending on the music generation task , which we elaborate upon in Section 3.2 , the encoder output ( s ) are fed into the Transformer decoder . Figure 1 describes the way in which the encoder and decoder networks are composed together . Transformer Decoder : The decoder shares the same structure as the encoder network , but with an additional multi-head attention layer over the encoder outputs . At each step of generation , the decoder takes in the output of the encoder , as well as each new token that was previously generated . The model is trained end-to-end with maximum likelihood . That is , for a given sequence x of length n , we maximize log pθ ( x ) = ∑n i=1 log pθ ( xi|x < i ) with respect to the model parameters θ . 3.2 CONDITIONING MECHANISM . Performance Conditioning and Bottleneck For this task , we aim to generate samples that sound “ similar ” to a conditioning input performance . We incorporate a bottleneck in the output of the Transformer encoder in order to prevent the model from simply memorizing the input ( Baldi , 2012 ) . Thus , as shown in Figure 1 , we mean-aggregate the performance embedding across the time dimension in order to learn a global representation of style . This mean-performance embedding is then fed into the autoregressive decoder , where the decoder attends to this global representation in order to predict the appropriate target . Although this bottleneck may be undesirable in sequence transduction tasks where the input and output sequences differ ( e.g . translation ) , we find that it works well in our setting where we require the generated samples to be similar in style to the input sequence . Melody & Performance Conditioning : Next , we synthesize any given melody in the style of a different performance . Although the setup shares similarities to that of the melody conditioning problem in ( Huang et al. , 2019b ) , we note that we also provide a conditioning performance signal , which makes the generation task more challenging . During training , we follow an internal procedure to extract melodies from performances in the training set , quantize the melody to a 100ms grid , and encode it as a sequence of tokens that uses a different vocabulary than the performance representation . We then use two distinct Transformer encoders ( each with the same architecture ) as in Section 3.1 to separately encode the melody and performance inputs . The melody and performance embeddings are combined to use as input to the decoder . We explore various ways of combining the intermediate representations : ( 1 ) sum , where we add the performance and melody embeddings together ; ( 2 ) concatenate , where we concatenate the two embeddings separated with a stop token ; and ( 3 ) tile , where we tile the performance embedding across every dimension of time in the melody encoding . In all three cases , we work with the meanaggregated representation of the input performance . We find that different approaches work better than others on some dataets , a point which we elaborate upon in Section 5 . Input Perturbation In order to encourage the encoded performance representations to generalize across various melodies , keys , and tempos , we draw inspiration from the denoising autoencoder ( Vincent et al. , 2008 ) as a means to regularize the model . For every target performance from which we extract the input melody , we provide the model with a perturbed version of the input performance as the conditioning signal . We allow this “ noisy ” performance to vary across two axes of variation : ( 1 ) pitch , where we artificially shift the overall pitch either down or up by 6 semitones ; and ( 2 ) time , where we stretch the timing of the performance by at most 5 % . In our experiments , we find that this augmentation procedure leads to samples that sound more pleasing ( Oore et al. , 2018 ) . We provide further details on the augmentation procedure in Appendix A . 4 SIMILARITY EVALUATION ON PERFORMANCE FEATURES . Although a variety of different metrics have been proposed to quantify both the quality ( Engel et al. , 2019 ) and similarity of musical performances relative to one another ( Yang & Lerch , 2018 ; Hung et al. , 2019 ) , the development of a proper metric to measure such characteristics in music generation remains an open question . Therefore , we draw inspiration from ( Yang & Lerch , 2018 ) to capture the style of a given performance based its the pitch- and rhythm-related features using 8 features : 1 . Note Density ( ND ) : The note density refers to the average number of notes per second in a performance : a higher note density often indicates a fast-moving piece , while a lower note density correlates with softer , slower pieces . This feature is a good indicator for rhythm . 2 . Pitch Range ( PR ) : The pitch range denotes the difference between the highest and lowest semitones ( MIDI pitches ) in a given phrase . 3 . Mean Pitch ( MP ) / Variation of Pitch ( VP ) : Similar in vein to the pitch range ( PR ) , the average and overall variation of pitch in a musical performance captures whether the piece is played in a higher or lower octave . 4 . Mean Velocity ( MV ) / Variation of Velocity ( VV ) : The velocity of each note indicates how hard a key is pressed in a musical performance , and serves as a heuristic for overall volume . 5 . Mean Duration ( MD ) / Variation of Duration ( VD ) : The duration describes for how long each note is pressed in a performance , representing articulation , dynamics , and phrasing .
This paper presents a technique for encoding the high level “style” of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global “style embedding”. Additionally, the Music Transformer model is also conditioned on a combination of both “style” and “melody” embeddings to try and generate music “similar” to the conditioning melody but in the style of the performance embedding.
SP:bddd3d499426725b02d3d67ca0a7f8ef0c30e639
Encoding Musical Style with Transformer Autoencoders
1 INTRODUCTION . There has been significant progress in generative modeling , particularly with respect to creative applications such as art and music ( Oord et al. , 2016 ; Engel et al. , 2017b ; Ha & Eck , 2017 ; Huang et al. , 2019a ; Payne , 2019 ) . As the number of generative applications increase , it becomes increasingly important to consider how users can interact with such systems , particularly when the generative model functions as a tool in their creative process ( Engel et al. , 2017a ; Gillick et al. , 2019 ) To this end , we consider how one can learn high-level controls over the global structure of a generated sample . We focus on symbolic music generation , where Music Transformer ( Huang et al. , 2019b ) is the current state-of-the-art in generating high-quality samples that span over a minute in length . The challenge in controllable sequence generation is the fact that Transformers ( Vaswani et al. , 2017 ) and their variants excel as language models or in sequence-to-sequence tasks such as translation , but it is less clear as to how they can : ( 1 ) learn and ( 2 ) incorporate global conditioning information at inference time . This contrasts with traditional generative models for images such as the variational autoencoder ( VAE ) ( Kingma & Welling , 2013 ) or generative adversarial network ( GAN ) ( Goodfellow et al. , 2014 ) which typically incorprate global conditioning as part of their training procedure ( Sohn et al. , 2015 ; Sønderby et al. , 2016 ; Isola et al. , 2017 ; Van den Oord et al. , 2016 ) . In this work , we introduce the Transformer autoencoder , where we aggregate encodings across time to obtain a holistic representation of the performance style . We show that this learned global representation can be incorporated with other forms of structural conditioning in two ways . First , we show that given a performance , our model can generate performances that are similar in style to the provided input . Then , we explore different methods to combine melody and performance representations to harmonize a melody in the style of the given performance . In both cases , we show that combining both global and fine-scale encodings of the musical performance allows us to gain better control of generation , separately manipulating both the style and melody of the resulting sample . Empirically , we evaluate our model on two datasets : the publicly-available MAESTRO ( Hawthorne et al. , 2019 ) dataset , and an internal dataset of piano performances transcribed from 10,000+ hours of audio ( Anonymous for review ) . We find that the Transformer autoencoder is able to generate not only performances that sound similar to the input , but also accompaniments of melodies that follow a given style , as shown through both quantitative and qualitative experiments as well as a user listening study . In particular , we demonstrate that our model is capable of adapting to a particular musical style even in the case where we have one single input performance . 2 PRELIMINARIES . 2.1 DATA REPRESENTATION FOR MUSIC GENERATION . The MAESTRO ( Hawthorne et al. , 2019 ) dataset consists of over 1,100 classical piano performances , where each piece is represented as a MIDI file . The internal performance dataset consists of over 10,000 hours of piano performances transcribed from audio ( Anonymous for review ) . In both cases , we represent music as a sequence of discrete tokens , effectively formulating the generation task as a language modeling problem . The performances are encoded using the vocabulary as described in ( Oore et al. , 2018 ) , which captures expressive dynamics and timing . This performance encoding vocabulary consists of 128 note on events , 128 note off events , 100 time shift events representing time shifts in 10ms increments from 10ms to 1s , and 32 quantized velocity bins representing the velocity at which the 128 note on events were played . 2.2 MUSIC TRANSFORMER . We build our Transformer autoencoder from Music Transformer , a state-of-the-art generative model that is capable of generating music with long-term coherence ( Huang et al. , 2019b ) . While the original Transformer uses a self-attention mechanism that operates over absolute positional encodings of each token in a given sequence ( Vaswani et al. , 2017 ) , Music Transformer replaces this with relative attention ( Shaw et al. , 2018 ) , which allows the model to keep better track of regularity based on event orderings and periodicity in the performance . Huang et al . ( 2019b ) propose a novel algorithm for implementing relative self-attention that is significantly more memory-efficient , enabling the model to generate musical sequences over a minute in length . For more details regarding the self-attention mechanism and Transformers , we refer the reader to ( Vaswani et al. , 2017 ; Parmar et al. , 2018 ) . 3 CONDITIONAL GENERATION WITH THE TRANSFORMER AUTOENCODER . 3.1 MODEL ARCHITECTURE . We leverage the standard encoder and decoder stacks of the Transformer as a foundation for our model , with minor modifications that we outline below . Transformer Encoder : For both the performance and melody encoder networks , we use the Transformer ’ s stack of 6 layers which are each comprised of a : ( 1 ) multi-head relative attention mechanism ; and a ( 2 ) position-wise fully-connected feed-forward network . The performance encoder takes as input the event-based performance encoding of an input performance , while the melody encoder learns an encoding of the melody which has been extracted from the input performance . Depending on the music generation task , which we elaborate upon in Section 3.2 , the encoder output ( s ) are fed into the Transformer decoder . Figure 1 describes the way in which the encoder and decoder networks are composed together . Transformer Decoder : The decoder shares the same structure as the encoder network , but with an additional multi-head attention layer over the encoder outputs . At each step of generation , the decoder takes in the output of the encoder , as well as each new token that was previously generated . The model is trained end-to-end with maximum likelihood . That is , for a given sequence x of length n , we maximize log pθ ( x ) = ∑n i=1 log pθ ( xi|x < i ) with respect to the model parameters θ . 3.2 CONDITIONING MECHANISM . Performance Conditioning and Bottleneck For this task , we aim to generate samples that sound “ similar ” to a conditioning input performance . We incorporate a bottleneck in the output of the Transformer encoder in order to prevent the model from simply memorizing the input ( Baldi , 2012 ) . Thus , as shown in Figure 1 , we mean-aggregate the performance embedding across the time dimension in order to learn a global representation of style . This mean-performance embedding is then fed into the autoregressive decoder , where the decoder attends to this global representation in order to predict the appropriate target . Although this bottleneck may be undesirable in sequence transduction tasks where the input and output sequences differ ( e.g . translation ) , we find that it works well in our setting where we require the generated samples to be similar in style to the input sequence . Melody & Performance Conditioning : Next , we synthesize any given melody in the style of a different performance . Although the setup shares similarities to that of the melody conditioning problem in ( Huang et al. , 2019b ) , we note that we also provide a conditioning performance signal , which makes the generation task more challenging . During training , we follow an internal procedure to extract melodies from performances in the training set , quantize the melody to a 100ms grid , and encode it as a sequence of tokens that uses a different vocabulary than the performance representation . We then use two distinct Transformer encoders ( each with the same architecture ) as in Section 3.1 to separately encode the melody and performance inputs . The melody and performance embeddings are combined to use as input to the decoder . We explore various ways of combining the intermediate representations : ( 1 ) sum , where we add the performance and melody embeddings together ; ( 2 ) concatenate , where we concatenate the two embeddings separated with a stop token ; and ( 3 ) tile , where we tile the performance embedding across every dimension of time in the melody encoding . In all three cases , we work with the meanaggregated representation of the input performance . We find that different approaches work better than others on some dataets , a point which we elaborate upon in Section 5 . Input Perturbation In order to encourage the encoded performance representations to generalize across various melodies , keys , and tempos , we draw inspiration from the denoising autoencoder ( Vincent et al. , 2008 ) as a means to regularize the model . For every target performance from which we extract the input melody , we provide the model with a perturbed version of the input performance as the conditioning signal . We allow this “ noisy ” performance to vary across two axes of variation : ( 1 ) pitch , where we artificially shift the overall pitch either down or up by 6 semitones ; and ( 2 ) time , where we stretch the timing of the performance by at most 5 % . In our experiments , we find that this augmentation procedure leads to samples that sound more pleasing ( Oore et al. , 2018 ) . We provide further details on the augmentation procedure in Appendix A . 4 SIMILARITY EVALUATION ON PERFORMANCE FEATURES . Although a variety of different metrics have been proposed to quantify both the quality ( Engel et al. , 2019 ) and similarity of musical performances relative to one another ( Yang & Lerch , 2018 ; Hung et al. , 2019 ) , the development of a proper metric to measure such characteristics in music generation remains an open question . Therefore , we draw inspiration from ( Yang & Lerch , 2018 ) to capture the style of a given performance based its the pitch- and rhythm-related features using 8 features : 1 . Note Density ( ND ) : The note density refers to the average number of notes per second in a performance : a higher note density often indicates a fast-moving piece , while a lower note density correlates with softer , slower pieces . This feature is a good indicator for rhythm . 2 . Pitch Range ( PR ) : The pitch range denotes the difference between the highest and lowest semitones ( MIDI pitches ) in a given phrase . 3 . Mean Pitch ( MP ) / Variation of Pitch ( VP ) : Similar in vein to the pitch range ( PR ) , the average and overall variation of pitch in a musical performance captures whether the piece is played in a higher or lower octave . 4 . Mean Velocity ( MV ) / Variation of Velocity ( VV ) : The velocity of each note indicates how hard a key is pressed in a musical performance , and serves as a heuristic for overall volume . 5 . Mean Duration ( MD ) / Variation of Duration ( VD ) : The duration describes for how long each note is pressed in a performance , representing articulation , dynamics , and phrasing .
In this paper, the author extends the standard music Transformer into a conditional version: two encoders are evolved, one for encoding the performance and the other is used for encoding the melody. The output representation has to be similar to the input. The authors conduct experiments on the MAESTRO dataset and an internal, 10,000+ hour dataset of piano performances to verify the proposed algorithm.
SP:bddd3d499426725b02d3d67ca0a7f8ef0c30e639
Corpus Based Amharic Sentiment Lexicon Generation
keywords : Amharic Sentiment lexicon , Amharic Sentiment Classification , Seed words 1 INTRODUCTION . Most of sentiment mining research papers are associated to English languages . Linguistic computational resources in languages other than English are limited . Amharic is one of resource limited languages . Due to the advancement of World Wide Web , Amharic opinionated texts is increasing in size . To manage prediction of sentiment orientation towards a particular object or service is crucial for business intelligence , government intelligence , market intelligence , or support decision making . For carrying out Amharic sentiment classification , the availability of sentiment lexicons is crucial . To-date , there are two generated Amharic sentiment lexicons . These are manually generated lexicon ( 1000 ) ( Gebremeskel , 2010 ) and dictionary based Amharic SWN and SOCAL lexicons ( Neshir Alemneh et al. , 2019 ) . However , dictionary based generated lexicons has short-comings in that it has difficulty in capturing cultural connotation and language specific features of the language . For example , Amharic words which are spoken culturally and used to express opinions will not be obtained from dictionary based sentiment lexicons . The word ጉርሻ/ '' feed in other people with hands which expresses love and live in harmony with others '' / in the Amharic text : `` እንደ ጉርሻ ግን የሚያግባባን የለም . . . ጉርሻ እኮ አንዱ ለሌላው የማጉረስ ተግባር ብቻ አይደለም፤ በተጠቀለለው እንጀራ ውስጥ ፍቅር አለ፣ መተሳሰብ አለ፣ አክብሮት አለ። '' has positive connotation or positive sentiment . But the dictionary meaning of the word ጉርሻ is `` bonus '' . This is far away from the cultural connotation that it is intended to represent and express . We assumed that such kind of culture ( or language specific ) words are found in a collection of Amharic texts . However , dictionary based lexicons has short comings to capture sentiment terms which has strong ties to language and culture specific connotations of Amharic . Thus , this work builds corpus based algorithm to handle language and culture specific words in the lexicons . However , it could probably be impossible to handle all the words in the language as the corpus is a limited resource in almost all less resourced languages like Amharic . But still it is possible to build sentiment lexicons in particular domain where large amount of Amharic corpus is available . Due to this reason , the lexicon built using this approach is usually used for lexicon based sentiment analysis in the same domain from which it is built . The research questions to be addressed utilizing this approach are : ( 1 ) How can we build an approach to generate Amharic Sentiment Lexicon from corpus ? ( 2 ) How do we evaluate the validity and quality of the generated lexicon ? In this work , we set this approach to build Amharic polarity lexicons in automatic way relying on Amharic corpora which is mentioned shortly . The corpora are collected from different local news media organizations and also from facebook news ' comments and you tube video comments to extend and enhance corpus size to capture sentiment terms into the generated PPMI based lexicon . 2 RELATED WORKS . In this part , we will present the key papers addressing corpus- based Sentiment Lexicon generation . In ( Velikovich et al. , 2010 ) , large polarity lexicon is developed semiautomatically from the web by applying graph propagation method . A set of positive and negative sentences are prepared from the web for providing clue to expansion of lexicon . The method assigns a higher positive value if a given seed phrase contains multiple positive seed words , otherwise it is assigned negative value . The polarity p of seed phrase i is given by : pi = p + i − βp − i , where β is the factor that is responsible for preserving the overall semantic orientations between positive and negative flow over the graph . Both quantitatively and qualitatively , the performance of the web generated lexicon is outperforming the other lexicons generated from other manually annotate lexical resources like WordNet . The authors in ( Hamilton et al. , 2016 ) developed two domain specific sentiment lexicons ( historical and online community specific ) from historical corpus of 150 years and online community data using word embedding with label propagation algorithm to expand small list of seed terms . It achieves competitive performance with approaches relying on hand curated lexicons . This revealed that there is sentiment change of words either positively to negatively or vice-versa through time . Lexical graph is constructed using PPMI matrix computed from word embedding . To fill the edges of two nodes ( wi , wj ) , cosine similarity is computed . To propagate sentiment from seeds in lexical graph , random walk algorithm is adapted . That says , the polarity score of a seed set is proportional to probability of random walk from the seed set hitting that word . The generated lexicon from domain specific embedding outperforms very well when compared with the baseline and other variants . Our work is closely associated to the work of Passaro et al . ( 2015 ) . Passaro et al . ( 2015 ) generated emotion based lexicon by bootstrapping corpus using word distributional semantics ( i.e . using PPMI ) . Our approach is different from their work in that we generated sentiment lexicon rather than emotion lexicon . The other thing is that the approach of propagating sentiment to expand the seeds is also different . We used cosine similarity of the mean vector of seed words to the corresponding word vectors in the vocabulary of the PPMI matrix . Besides , the threshold selection , the seed words part of speech are different from language to language . For example , Amharic has few adverb classes unlike Italian . Thus , our seed words do not contain adverbs . 3 PROPOSED CORPUS BASED APPROACHES . There are variety of corpus based strategies that include count based ( e.g.PPMI ) and predictive based ( e.g . word embedding ) approaches . In this part , we present the proposed count based approach to generate Amharic Sentiment lexicon from a corpus . In Figure 1 , we present the proposed framework of corpus based approach to generate Amharic Sentiment lexicon . The framework has four components : ( Amharic News ) Corpus Collections , Preprocessing Module , PPMI Matix of Word-Context , Algorithm to generate ( Amharic ) Sentiment Lexicon resulting in the Generated ( Amharic ) Sentiment Lexicon . The algorithm and the seeds in figure 1 are briefly described as follows . To generate Amharic Sentiment lexicon , we follow four major steps : 1 . Prepare a set of seed lists which are strongly negatively and positively polarized Adjectives , Nouns and Verbs ( Note : Amharic language contains few adverbs ( Passaro et al. , 2015 ) , adverbs are not taken as seed word ) . We will select at least seven most polarized seed words for each of aforementioned part-of-speech classes ( Yimam , 2000 ) . Selection of seed words is the most critical that affects the performance of bootstrapping algorithm ( Waegel , 2003 ) . Most authors choose the most frequently occurring words in the corpus as seed list . This is assumed to ensure the greatest amount of contextual information to learn from , however , we are not sure about the quality of the contexts . We adapt and follow seed selection guidelines of Turney & Littman ( 2003 ) . After we tried seed selection based on Turney & Littman ( 2003 ) , we update the original seed words . Sample summary of seeds are presented in Table 1 . 2 . Build semantic space word-context matrix ( Potts , 2013 ; Turney & Pantel , 2010 ) using the number of occurrences ( frequency ) of target word with its context words of window size±2 . Word-Context matrix is selected as it is dense and good for building word rich representations ( i.e . similarity of words ) unlike word-document design which is sparse and computationally expensive ( Potts , 2013 ; Turney & Pantel , 2010 ) . Initially , let F be word-context raw frequency matrix with nr rows and nc columns formed from Amharic text corpora . Next , we apply weighting functions to select word semantic similarity descriminant features . There are variety of weighting functions to get meaningful semantic similarity between a word and its context . The most popular one is Point-wise Mutual Information ( PMI ) ( Turney & Littman , 2003 ) . In our case we use positive PMI by assigning 0 if it is less than 0 ( Bullinaria & Levy , 2007 ) . Then , let X be new PMI based matrix that will be obtained by applying Positive PMI ( PPMI ) to matrix F. Matrix X will have the same number of rows and columns matrix F. The value of an element fij is the number of times that word wi occurs in the context cj in matrix F. Then , the corresponding element xij in the new matrix X would be defined as follows : xij = { PMI ( wi , cj ) if PMI ( wi , cj ) > 0 0 if PMI ( wi , cj ) ≤ 0 ( 1 ) Where , PMI ( wi , cj ) is the Point-wise Mutual Information that measures the estimated co-occurrence of word wi and its context cj and is given as : PMI ( wi , cj ) = log P ( wi , cj ) P ( wi ) P ( cj ) ( 2 ) Where , P ( wi , cj ) is the estimated probability that the word wi occurs in the context of cj , P ( wi ) is the estimated probability of wi and P ( cj ) is the estimated probability of ci are defined in terms of frequency fij . 3 . Compute the cosine distance between target term and centroid of seed lists ( e.g . centroid for positive adjective seeds , −−→ µ+adj ) . To find the cosine distance of a new word from seed list , first we compute the centroids of seed lists of respective POS classes ; for example , centroids for positive seeds S+ and negative seeds S- , for adjective class is given by : −−→ µ+adj ( S + ) = Σw∈S+ ~w |S+| & −−→ µ−adj ( S − ) = Σw∈S− ~w |S−| ( 3 ) Similarly , centroids of the other seed classes will be found . Then , the cosine distances of target word from positive and negative adjective seeds of centroids , −−→ µ+adj and −−→ µ−adj is given by : cosine ( ~wi , −−→ µ+adj ) = ~wi . −−→ µ+adj || ~wi|||| −−→ µ+adj || & cosine ( ~wi , −−→ µ−adj ) = ~wi . −−→ µ−adj || ~wi|||| −−→ µ−adj || ( 4 ) As word-context matrix x is vector space model , the cosine of the angle between two words vectors is the same as the inner product of the normalized unit word vectors . After we have cosine distances between word wi and seed with centroid ~wi , ~µ+adj , the similarity measure can be found using either : Sim ( ~wi , −−→ µ+adj ) = 1 cosine ( ~wi , −−→ µ+adj ) or Sim ( ~wi , −−→ µ+adj ) = 1− cosine ( ~wi , −−→ µ+adj ) ( 5 ) Similarly , the similarity score , Sim ( ~wi , −−→ µ−adj ) can also be computed . This similarity score for each target word is mapped and scaled to appropriate real number . A target word whose sentiment score is below or above a particular threshold can be added to that sentiment dictionary in ranked order based on PMI based cosine distances . We choose positive PMI with cosine measure as it is performed consistently better than the other combination features with similarity metrics : Hellinger , Kullback-Leibler , City Block , Bhattacharya and Euclidean ( Bullinaria & Levy , 2007 ) . 4 . Repeat from step 3 for the next target term in the matrix to expand lexicon dictionary . Stop after a number of iterations defined by a threshold acquired experimental testing . The detail algorithm for generating Amharic sentiment lexicon from PPMI is presented in algorithm 1 Algorithm description : The algorithm 1 reads the seed words and generates the merge of expanded seed words using PPMI . Line 1 loads the seed words and assigns to their corresponding category of seed words . Similarly , from line 2 to 6 loads the necessary lexical resources such as PPMI matrix , vocabulary list , Amharic-English , AmharicAmharic , Amharic-Sentiment SWN and in line 7 , the output Amharic Sentiment Lex . by PPMI is initialized to Null . From line 8 to 22 , it iterates for each seed words polarity and categories . That is , line 9 to 11 checks that each seed term is found in the corpus vocabulary . Line 12 initializes the threshold by a selected trial number ( in our case 100,200,1000 , etc. ) . From line 13 to 22 , iterates from i=0 to threshold in order to perform a set of operations . That is , line 16 computes the mean of the seed lexicon based on equation 3 specified in the previous section . Line 17 computes the similarity between the mean vector and the PPMI word-word co occurrence matrix and returns the top i most closest terms to the mean vector based on equation 5 . Lines 18-19 , it removes top closest items which has different part-of-speech to the seed words . Line 20-21 check the top i closest terms are has different polarity to the seed lexicon . Line 22 updates the PPMI lexicon by inserting the newly obtained sentiment terms . Line 23 returns the generated Amharic Sentiment lexicon by PPMI . Using corpus based approach , Amharic sentiment lexicon are built where it allows Input : PPMI : Word-Context PPMI matrix Output : AM_Lexicon_by_PPMI : Generated Amharic Sentiment Lexicon 1 seed_noun+ , seed_noun− , seed_adj+ , seed_adj− , seed_verb+ , seed_verb− ← LoadCorespondingSeedCatagoryfile PPMI ← LoadPPMIMatrixfile V ocab ← LoadV ocabularyfile AmharicEnglishDic ← LoadAmharicEnglishDictionaryfile AmharicAmharicDic ← LoadAmharicAmharicDictionaryfile AmharicSWN ← LoadAmharicSentimentSWNfile Amharic_Sentiment_Lex_by_PPMI ← Null foreach seed_lexicon ∈ seed_noun+ , seed_noun− , seed_adj+ , seed_adj− , seed_verb+ , seed_verb− : do 2 foreach seed ∈ seed_lexicon : do 3 if seed ∈ Vocab then 4 Remove Seed from Seed_Lexicon 5 Threshold←Number of iterations foreach i← 0 to Threshold : do 6 mean_vector← compute_mean ( seed_lexicon ) by equation 3 top_ten_closest_terms← compute_similarity ( mean_vector , PPMI ) by equation 4 if term ∈ top_ten_closest_terms and term ∈ seed_lexicon : then 7 Remove the term from top_ten_closest_terms list as it is duplicate 8 if Any term in top_ten_closest_terms has different part_speech from seed_lexicon then 9 Remove the term from top_ten_closest_terms list 10 if Any term in top_ten_closest_terms has different polarity from Amharic SWN lexicon then 11 Remove the term from top_ten_closest_terms list 12 Update seed_lexicon by inserting top_ten_closest_terms list 13 AM_Lexicon_by_PPMI←AM_Lexicon_by_PPMI + seed_lexicon ; algorithm 1 : Amharic Sentiment Lexicon Generation Algorithm Using PPMI finding domain dependent opinions which might not be possible by sentiment lexicon generated using dictionary based approach . The quality of this lexicon will be evaluated using similar techniques used in dictionary based approaches ( Neshir Alemneh et al. , 2019 ) . However , this approach may not probably produce sentiment lexicon with large coverage as the corpus size may be insufficient to include all polarity words in Amharic language . To reduce the level of this issue , we combine the lexicons generated in both dictionary based and corpus based approaches for Amharic Sentiment classification .
This paper introduces a corpus-based approach to build sentiment lexicon for Amharic. In order to save time and costs for the resource-limited language, the lexicon is generated from an Amharic news corpus by the following steps: manually preparing polarized seed words lists (strongly positive and strongly negative), calculating the co-occurrence of target word in its context via Positive Point-wise Mutual Information (PPMI) method, measuring the similarity between target words and seed words by cosine distance, iterating with the threshold 100 and 200. The PPMI lexicon is stemmed and evaluated from aspects of subjectivity detection, coverage, agreement and sentiment classification. Three other lexicons: Manual developed by manual, SOCAL and SWN developed by bilingual dictionary, are used as benchmark to compare with the PPMI lexicon. In sentiment classification experiment the PPMI lexicon did not show a superior performance. All the four lexicons have similar accuracy, between 42.16% ~ 48.87%. Only when the four are combined together the result is improved to 83.51%.
SP:e472738b53eec7967504021365ac5b4808028ec1
Corpus Based Amharic Sentiment Lexicon Generation
keywords : Amharic Sentiment lexicon , Amharic Sentiment Classification , Seed words 1 INTRODUCTION . Most of sentiment mining research papers are associated to English languages . Linguistic computational resources in languages other than English are limited . Amharic is one of resource limited languages . Due to the advancement of World Wide Web , Amharic opinionated texts is increasing in size . To manage prediction of sentiment orientation towards a particular object or service is crucial for business intelligence , government intelligence , market intelligence , or support decision making . For carrying out Amharic sentiment classification , the availability of sentiment lexicons is crucial . To-date , there are two generated Amharic sentiment lexicons . These are manually generated lexicon ( 1000 ) ( Gebremeskel , 2010 ) and dictionary based Amharic SWN and SOCAL lexicons ( Neshir Alemneh et al. , 2019 ) . However , dictionary based generated lexicons has short-comings in that it has difficulty in capturing cultural connotation and language specific features of the language . For example , Amharic words which are spoken culturally and used to express opinions will not be obtained from dictionary based sentiment lexicons . The word ጉርሻ/ '' feed in other people with hands which expresses love and live in harmony with others '' / in the Amharic text : `` እንደ ጉርሻ ግን የሚያግባባን የለም . . . ጉርሻ እኮ አንዱ ለሌላው የማጉረስ ተግባር ብቻ አይደለም፤ በተጠቀለለው እንጀራ ውስጥ ፍቅር አለ፣ መተሳሰብ አለ፣ አክብሮት አለ። '' has positive connotation or positive sentiment . But the dictionary meaning of the word ጉርሻ is `` bonus '' . This is far away from the cultural connotation that it is intended to represent and express . We assumed that such kind of culture ( or language specific ) words are found in a collection of Amharic texts . However , dictionary based lexicons has short comings to capture sentiment terms which has strong ties to language and culture specific connotations of Amharic . Thus , this work builds corpus based algorithm to handle language and culture specific words in the lexicons . However , it could probably be impossible to handle all the words in the language as the corpus is a limited resource in almost all less resourced languages like Amharic . But still it is possible to build sentiment lexicons in particular domain where large amount of Amharic corpus is available . Due to this reason , the lexicon built using this approach is usually used for lexicon based sentiment analysis in the same domain from which it is built . The research questions to be addressed utilizing this approach are : ( 1 ) How can we build an approach to generate Amharic Sentiment Lexicon from corpus ? ( 2 ) How do we evaluate the validity and quality of the generated lexicon ? In this work , we set this approach to build Amharic polarity lexicons in automatic way relying on Amharic corpora which is mentioned shortly . The corpora are collected from different local news media organizations and also from facebook news ' comments and you tube video comments to extend and enhance corpus size to capture sentiment terms into the generated PPMI based lexicon . 2 RELATED WORKS . In this part , we will present the key papers addressing corpus- based Sentiment Lexicon generation . In ( Velikovich et al. , 2010 ) , large polarity lexicon is developed semiautomatically from the web by applying graph propagation method . A set of positive and negative sentences are prepared from the web for providing clue to expansion of lexicon . The method assigns a higher positive value if a given seed phrase contains multiple positive seed words , otherwise it is assigned negative value . The polarity p of seed phrase i is given by : pi = p + i − βp − i , where β is the factor that is responsible for preserving the overall semantic orientations between positive and negative flow over the graph . Both quantitatively and qualitatively , the performance of the web generated lexicon is outperforming the other lexicons generated from other manually annotate lexical resources like WordNet . The authors in ( Hamilton et al. , 2016 ) developed two domain specific sentiment lexicons ( historical and online community specific ) from historical corpus of 150 years and online community data using word embedding with label propagation algorithm to expand small list of seed terms . It achieves competitive performance with approaches relying on hand curated lexicons . This revealed that there is sentiment change of words either positively to negatively or vice-versa through time . Lexical graph is constructed using PPMI matrix computed from word embedding . To fill the edges of two nodes ( wi , wj ) , cosine similarity is computed . To propagate sentiment from seeds in lexical graph , random walk algorithm is adapted . That says , the polarity score of a seed set is proportional to probability of random walk from the seed set hitting that word . The generated lexicon from domain specific embedding outperforms very well when compared with the baseline and other variants . Our work is closely associated to the work of Passaro et al . ( 2015 ) . Passaro et al . ( 2015 ) generated emotion based lexicon by bootstrapping corpus using word distributional semantics ( i.e . using PPMI ) . Our approach is different from their work in that we generated sentiment lexicon rather than emotion lexicon . The other thing is that the approach of propagating sentiment to expand the seeds is also different . We used cosine similarity of the mean vector of seed words to the corresponding word vectors in the vocabulary of the PPMI matrix . Besides , the threshold selection , the seed words part of speech are different from language to language . For example , Amharic has few adverb classes unlike Italian . Thus , our seed words do not contain adverbs . 3 PROPOSED CORPUS BASED APPROACHES . There are variety of corpus based strategies that include count based ( e.g.PPMI ) and predictive based ( e.g . word embedding ) approaches . In this part , we present the proposed count based approach to generate Amharic Sentiment lexicon from a corpus . In Figure 1 , we present the proposed framework of corpus based approach to generate Amharic Sentiment lexicon . The framework has four components : ( Amharic News ) Corpus Collections , Preprocessing Module , PPMI Matix of Word-Context , Algorithm to generate ( Amharic ) Sentiment Lexicon resulting in the Generated ( Amharic ) Sentiment Lexicon . The algorithm and the seeds in figure 1 are briefly described as follows . To generate Amharic Sentiment lexicon , we follow four major steps : 1 . Prepare a set of seed lists which are strongly negatively and positively polarized Adjectives , Nouns and Verbs ( Note : Amharic language contains few adverbs ( Passaro et al. , 2015 ) , adverbs are not taken as seed word ) . We will select at least seven most polarized seed words for each of aforementioned part-of-speech classes ( Yimam , 2000 ) . Selection of seed words is the most critical that affects the performance of bootstrapping algorithm ( Waegel , 2003 ) . Most authors choose the most frequently occurring words in the corpus as seed list . This is assumed to ensure the greatest amount of contextual information to learn from , however , we are not sure about the quality of the contexts . We adapt and follow seed selection guidelines of Turney & Littman ( 2003 ) . After we tried seed selection based on Turney & Littman ( 2003 ) , we update the original seed words . Sample summary of seeds are presented in Table 1 . 2 . Build semantic space word-context matrix ( Potts , 2013 ; Turney & Pantel , 2010 ) using the number of occurrences ( frequency ) of target word with its context words of window size±2 . Word-Context matrix is selected as it is dense and good for building word rich representations ( i.e . similarity of words ) unlike word-document design which is sparse and computationally expensive ( Potts , 2013 ; Turney & Pantel , 2010 ) . Initially , let F be word-context raw frequency matrix with nr rows and nc columns formed from Amharic text corpora . Next , we apply weighting functions to select word semantic similarity descriminant features . There are variety of weighting functions to get meaningful semantic similarity between a word and its context . The most popular one is Point-wise Mutual Information ( PMI ) ( Turney & Littman , 2003 ) . In our case we use positive PMI by assigning 0 if it is less than 0 ( Bullinaria & Levy , 2007 ) . Then , let X be new PMI based matrix that will be obtained by applying Positive PMI ( PPMI ) to matrix F. Matrix X will have the same number of rows and columns matrix F. The value of an element fij is the number of times that word wi occurs in the context cj in matrix F. Then , the corresponding element xij in the new matrix X would be defined as follows : xij = { PMI ( wi , cj ) if PMI ( wi , cj ) > 0 0 if PMI ( wi , cj ) ≤ 0 ( 1 ) Where , PMI ( wi , cj ) is the Point-wise Mutual Information that measures the estimated co-occurrence of word wi and its context cj and is given as : PMI ( wi , cj ) = log P ( wi , cj ) P ( wi ) P ( cj ) ( 2 ) Where , P ( wi , cj ) is the estimated probability that the word wi occurs in the context of cj , P ( wi ) is the estimated probability of wi and P ( cj ) is the estimated probability of ci are defined in terms of frequency fij . 3 . Compute the cosine distance between target term and centroid of seed lists ( e.g . centroid for positive adjective seeds , −−→ µ+adj ) . To find the cosine distance of a new word from seed list , first we compute the centroids of seed lists of respective POS classes ; for example , centroids for positive seeds S+ and negative seeds S- , for adjective class is given by : −−→ µ+adj ( S + ) = Σw∈S+ ~w |S+| & −−→ µ−adj ( S − ) = Σw∈S− ~w |S−| ( 3 ) Similarly , centroids of the other seed classes will be found . Then , the cosine distances of target word from positive and negative adjective seeds of centroids , −−→ µ+adj and −−→ µ−adj is given by : cosine ( ~wi , −−→ µ+adj ) = ~wi . −−→ µ+adj || ~wi|||| −−→ µ+adj || & cosine ( ~wi , −−→ µ−adj ) = ~wi . −−→ µ−adj || ~wi|||| −−→ µ−adj || ( 4 ) As word-context matrix x is vector space model , the cosine of the angle between two words vectors is the same as the inner product of the normalized unit word vectors . After we have cosine distances between word wi and seed with centroid ~wi , ~µ+adj , the similarity measure can be found using either : Sim ( ~wi , −−→ µ+adj ) = 1 cosine ( ~wi , −−→ µ+adj ) or Sim ( ~wi , −−→ µ+adj ) = 1− cosine ( ~wi , −−→ µ+adj ) ( 5 ) Similarly , the similarity score , Sim ( ~wi , −−→ µ−adj ) can also be computed . This similarity score for each target word is mapped and scaled to appropriate real number . A target word whose sentiment score is below or above a particular threshold can be added to that sentiment dictionary in ranked order based on PMI based cosine distances . We choose positive PMI with cosine measure as it is performed consistently better than the other combination features with similarity metrics : Hellinger , Kullback-Leibler , City Block , Bhattacharya and Euclidean ( Bullinaria & Levy , 2007 ) . 4 . Repeat from step 3 for the next target term in the matrix to expand lexicon dictionary . Stop after a number of iterations defined by a threshold acquired experimental testing . The detail algorithm for generating Amharic sentiment lexicon from PPMI is presented in algorithm 1 Algorithm description : The algorithm 1 reads the seed words and generates the merge of expanded seed words using PPMI . Line 1 loads the seed words and assigns to their corresponding category of seed words . Similarly , from line 2 to 6 loads the necessary lexical resources such as PPMI matrix , vocabulary list , Amharic-English , AmharicAmharic , Amharic-Sentiment SWN and in line 7 , the output Amharic Sentiment Lex . by PPMI is initialized to Null . From line 8 to 22 , it iterates for each seed words polarity and categories . That is , line 9 to 11 checks that each seed term is found in the corpus vocabulary . Line 12 initializes the threshold by a selected trial number ( in our case 100,200,1000 , etc. ) . From line 13 to 22 , iterates from i=0 to threshold in order to perform a set of operations . That is , line 16 computes the mean of the seed lexicon based on equation 3 specified in the previous section . Line 17 computes the similarity between the mean vector and the PPMI word-word co occurrence matrix and returns the top i most closest terms to the mean vector based on equation 5 . Lines 18-19 , it removes top closest items which has different part-of-speech to the seed words . Line 20-21 check the top i closest terms are has different polarity to the seed lexicon . Line 22 updates the PPMI lexicon by inserting the newly obtained sentiment terms . Line 23 returns the generated Amharic Sentiment lexicon by PPMI . Using corpus based approach , Amharic sentiment lexicon are built where it allows Input : PPMI : Word-Context PPMI matrix Output : AM_Lexicon_by_PPMI : Generated Amharic Sentiment Lexicon 1 seed_noun+ , seed_noun− , seed_adj+ , seed_adj− , seed_verb+ , seed_verb− ← LoadCorespondingSeedCatagoryfile PPMI ← LoadPPMIMatrixfile V ocab ← LoadV ocabularyfile AmharicEnglishDic ← LoadAmharicEnglishDictionaryfile AmharicAmharicDic ← LoadAmharicAmharicDictionaryfile AmharicSWN ← LoadAmharicSentimentSWNfile Amharic_Sentiment_Lex_by_PPMI ← Null foreach seed_lexicon ∈ seed_noun+ , seed_noun− , seed_adj+ , seed_adj− , seed_verb+ , seed_verb− : do 2 foreach seed ∈ seed_lexicon : do 3 if seed ∈ Vocab then 4 Remove Seed from Seed_Lexicon 5 Threshold←Number of iterations foreach i← 0 to Threshold : do 6 mean_vector← compute_mean ( seed_lexicon ) by equation 3 top_ten_closest_terms← compute_similarity ( mean_vector , PPMI ) by equation 4 if term ∈ top_ten_closest_terms and term ∈ seed_lexicon : then 7 Remove the term from top_ten_closest_terms list as it is duplicate 8 if Any term in top_ten_closest_terms has different part_speech from seed_lexicon then 9 Remove the term from top_ten_closest_terms list 10 if Any term in top_ten_closest_terms has different polarity from Amharic SWN lexicon then 11 Remove the term from top_ten_closest_terms list 12 Update seed_lexicon by inserting top_ten_closest_terms list 13 AM_Lexicon_by_PPMI←AM_Lexicon_by_PPMI + seed_lexicon ; algorithm 1 : Amharic Sentiment Lexicon Generation Algorithm Using PPMI finding domain dependent opinions which might not be possible by sentiment lexicon generated using dictionary based approach . The quality of this lexicon will be evaluated using similar techniques used in dictionary based approaches ( Neshir Alemneh et al. , 2019 ) . However , this approach may not probably produce sentiment lexicon with large coverage as the corpus size may be insufficient to include all polarity words in Amharic language . To reduce the level of this issue , we combine the lexicons generated in both dictionary based and corpus based approaches for Amharic Sentiment classification .
This paper proposes a domain-specific corpus-based approach for generating semantic lexicons for the low-resource Amharic language. Manual construction of lexicons is especially hard and expensive for low-resource languages. More importantly, the paper points out that existing dictionaries and lexicons do not capture cultural connotations and language specific features, which is rather important for tasks like sentiment classification. Instead, this work proposes to automatically generate a semantic lexicon using distributional semantics from a corpus.
SP:e472738b53eec7967504021365ac5b4808028ec1
Quantum Semi-Supervised Kernel Learning
Quantum machine learning methods have the potential to facilitate learning using extremely large datasets . While the availability of data for training machine learning models is steadily increasing , oftentimes it is much easier to collect feature vectors that to obtain the corresponding labels . One of the approaches for addressing this issue is to use semi-supervised learning , which leverages not only the labeled samples , but also unlabeled feature vectors . Here , we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines . The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss , while maintaining the same quantum speedup as the Quantum LS-SVM . 1 INTRODUCTION . Data sets used for training machine learning models are becoming increasingly large , leading to continued interest in fast methods for solving large-scale classification problems . One of the approaches being explored is training the predictive model using a quantum algorithm that has access to the training set stored in quantum-accessible memory . In parallel to research on efficient architectures for quantum memory ( Blencowe , 2010 ) , work on quantum machine learning algorithms and on quantum learning theory is under way ( see for example Refs . ( Biamonte et al. , 2017 ; Dunjko & Briegel , 2018 ; Schuld & Petruccione , 2018 ) and ( Arunachalam & de Wolf , 2017 ) for review ) . An early example of this approach is Quantum LS-SVM ( Rebentrost et al. , 2014a ) , which achieves exponential speedup compared to classical LS-SVM algorithm . Quantum LS-SVM uses quadratic least-squares loss and squared-L2 regularizer , and the optimization problem can be solved using the seminal HHL ( Harrow et al. , 2009 ) algorithm for solving quantum linear systems of equations . While progress has been made in quantum algorithms for supervised learning , it has been recently advocated that the focus should shift to unsupervised and semi-supervised setting ( Perdomo-Ortiz et al. , 2018 ) . In many domains , the most laborious part of assembling a training set is the collection of sample labels . Thus , in many scenarios , in addition to the labeled training set of size m we have access to many more feature vectors with missing labels . One way of utilizing these additional data points to improve the classification model is through semi-supervised learning . In semi-supervised learning , we are given m observations x1 , ... , xm drawn from the marginal distribution p ( x ) , where the l ( l m ) first data points come with labels y1 , ... , yl drawn from conditional distribution p ( y|x ) . Semi-supervised learning algorithms exploit the underlying distribution of the data to improve classification accuracy on unseen samples . In the approach considered here , the training samples are connected by a graph that captures their similarity . Here , we introduce a quantum algorithm for semi-supervised training of a kernel support vector machine classification model . We start with the existing Quantum LS-SVM ( Rebentrost et al. , 2014a ) , and use techniques from sample-based Hamiltonian simulation ( Kimmel et al. , 2017 ) to add a semisupervised term based on Laplacian SVM ( Melacci & Belkin , 2011 ) . As is standard in quantum machine learning ( Li et al. , 2019 ) , the algorithm accesses training points and the adjacency matrix of the graph connecting samples via a quantum oracle . We show that , with respect to the oracle , the proposed algorithm achieves the same quantum speedup as LS-SVM , that is , adding the semisupervised term does not lead to increased computational complexity . 2 PRELIMINARIES . 2.1 SEMI-SUPERVISED LEAST-SQUARES KERNEL SUPPORT VECTOR MACHINES .. Consider a problem where we are aiming to find predictors h ( x ) : X → R that are functions from a RKHS defined by a kernel K. In Semi-Supervised LS-SVMs in RKHS , we are looking for a function h ∈ H that minimizes min h∈H , b∈R γ 2 l∑ i=1 ( yi − ( h ( xi ) + b ) ) 2 + 1 2 ||h||2H + 1 2 ||∇h||2E , where γ is a user define constant allowing for adjusting the regularization strength . The last term captures the squared norm of the graph gradient on the graph G that contains all training samples as vertices , and expresses similarity between samples through , possibly edges Gu , v 1 2 ||∇h||2E = 1 2 ∑ u∼v Gu , v ( h̄u − h̄v ) 2 = h̄TLh̄ , where h̄u is the function value h ( xi ) for the vertex u corresponding to training point xi , and L is the combinatorial graph Laplacian matrix such that L [ i , j ] = Dj −Gi , j . The Representer Theorem states that if H is RKHS defined by kernel K : X × X → R , then the solution minimizing the problem above is achieved for a function h that uses only the representers of the training points , that is , a function of the form h ( x ) = ∑m j=1 cjKxj ( x ) = ∑m j=1 cjK ( xj , x ) . Thus , we can translate the problem from RKHS into a constrained quadratic optimization problem over finite , real vectors min c , ξ , b γ 2 m∑ i=1 ξ2i + 1 2 cTKc+ 1 2 cTKLKc s.t . 1− yi b+ m∑ j=1 cjK [ i , j ] = ξi where l ≤ m is the number of training points with labels ( these are grouped at the beginning of the training set ) , and h̄ = Kc , since function h is defined using representers Kxi . The semi-supervised term , the squared norm of the graph gradient of h , 1/2||∇h||2E , penalizes large changes of function h over edges of graph G. In defining the kernel K and the Laplacian L and in the two regularization terms we use all m samples . On the other hand , in calculating the empirical quadratic loss we only use the first l samples . The solution to the Semi-Supervised LS-SVMs is given by solving the following ( m+1 ) × ( m+1 ) system of linear equations [ 0 1T 1 K +KLK + γ−11 ] [ b α ] = [ 0 y ] , ( 1 ) where y = ( y1 , ... , ym ) T , 1 = ( 1 , ... , 1 ) T , 1 is identity matrix , K is kernel matrix , L is the graph Laplacian matrix , γ is a hyperparameter and α = ( α1 , ... , αm ) T is the vector of Lagrangian multipliers . 2.2 QUANTUM COMPUTING AND QUANTUM LS-SVM . Quantum computers are devices which perform computing according to the laws of quantum mechanics , a mathematical framework for describing physical theories , in language of linear algebra . Quantum Systems . Any isolated , closed quantum physical system can be fully described by a unit-norm vector in a complex Hilbert space appropriate for that system ; in quantum computing , the space is always finite-dimensional , Cd . In quantum mechanics and quantum computing , Dirac notation for linear algebra is commonly used . In Dirac notation , a vector x ∈ Cd and its complex conjugate xT , which represents a functional Cd → R , are denoted by |x〉 ( called ket ) and 〈x| ( called bra ) , respectively . We call { |ei〉 } di=1 the computational basis , where |ei〉 = ( 0 , ... , 1 , ... 0 ) T with exactly one 1 entry in the i-th position . Any |v〉 = ( v1 , ... , vd ) T can be written as |v〉 = ∑d i=1 vi|ei〉 ; coefficient vi ∈ C are called probability amplitudes . Any unit vector |x〉 ∈ Cd describes a d-level quantum state . Such a system is called a pure state . An inner product of |x1〉 , |x2〉 ∈ Cd is written as 〈x1|x2〉 . A two-level quantum state |ψ〉 = α|0〉 + β|1〉 , where |0〉 = ( 1 , 0 ) T , |1〉 = ( 0 , 1 ) T and α , β ∈ C , |α|2 + |β|2 , is called a quantum bit , or qubit for short . When both α and β are nonzero , we say |ψ〉 is in a superposition of the computational basis |0〉 and |1〉 ; the two superposition states |+〉 = 1√ 2 ( |0〉+ |1〉 ) and |−〉 = 1√ 2 ( |0〉 − |1〉 ) are very common in quantum computing . A composite quantum state of two distinct quantum systems |x1〉 ∈ Cd1 and |x2〉 ∈ Cd2 is described as tensor product of quantum states |x1〉 ⊗ |x2〉 ∈ Cd1 ⊗ Cd2 . Thus , a state of an n-qubit system is a vector in the tensor product space ( C2 ) ⊗n = C2 ⊗ C2 ⊗ ... ⊗ C2 , and is written as ∑2n−1 i=0 αi|i〉 , where i is expressed using its binary representation ; for example for n = 4 , we have |2〉 = |0010〉 = |0〉 ⊗ |0〉 ⊗ |1〉 ⊗ |0〉 . Transforming and Measuring Quantum States . Quantum operations manipulate quantum states in order to obtain some desired final state . Two types of manipulation of a quantum system are allowed by laws of physics : unitary operators and measurements . Quantum measurement , if done in the computational basis , stochastically transforms the state of the system into one of the computational basis states , based on squared magnitudes of probability amplitudes ; that is , 1√ 2 ( |0〉+ |1〉 ) will result in |0〉 and |1〉 with equal chance . Unitary operators are deterministic , invertible , normpreserving linear transforms . A unitary operator U models a transformation of a quantum state |u〉 to |v〉 = U|u〉 . Note that U|u1〉 + U|u2〉 = U ( |u1〉+ |u2〉 ) , applying a unitary to a superposition of states has the same effect as applying it separately to element of the superposition . In quantum circuit model unitary transformations are referred to as quantum gates – for example , one of the most common gates , the single-qubit Hadamard gate , is a unitary operator represented in the computational basis by the matrix H : = 1√ 2 ( 1 1 1 −1 ) . ( 2 ) Note that H|0〉 = |+〉 and H|1〉 = |−〉 . Quantum Input Model . Quantum computation typically starts from all qubits in |0〉 state . To perform computation , access to input data is needed . In quantum computing , input is typically given by a unitary operator that transforms the initial state into the desired input state for the computation – such unitaries are commonly referred to as oracles , and the computational complexity of quantum algorithms is typically measured with access to an oracle as the unit . For problems involving large amounts of input data , such as for quantum machine learning algorithms , an oracle that abstracts random access memory is often assumed . Quantum random access memory ( qRAM ) uses logN qubits to address any quantum superposition of N memory cell which may contains either quantum or classical information . For example , qRAM allows accessing classical data entries xji in quantum superposition by a transformation 1 √ mp m∑ i=1 p∑ j=1 |i , j〉|0 ... 0〉 qRAM−−−→ 1√ mp m∑ i=1 p∑ j=1 |i , j〉|xji 〉 , where |xji 〉 is a binary representation up to a given precision . Several approaches for creating quantum RAM are being considered ( Giovannetti et al. , 2008 ; Arunachalam et al. , 2015 ; Biamonte et al. , 2017 ) , but it is still an open challenge , and subtle differences in qRAM architecture may erase any gains in computational complexity of a quantum algorithm Aaronson ( 2015 ) . Quantum Linear Systems of Equations . Given an input matrix A ∈ Cn×n and a vector b ∈ Cn , the goal of linear system of equations problem is finding x ∈ Cn such that Ax = b . When A is Hermitian and full rank , the unique solution is x = A−1b . If A is not a full rank matrix then A−1 is replaced by the Moore-Penrose pseudo-inverse . HHL algorithm introduced an analogous problem in quantum setting : assuming an efficient algorithm for preparing b as a quantum state b = ∑n i=1 bi|i〉 using dlog ne+ 1 qubits , the algorithm applies quantum subroutines of phase estimation , controlled rotation , and inverse of phase estimation to obtain the state |x〉 = A −1|b〉 ‖A−1|b〉 ‖ . ( 3 ) Intuitively and at the risk of over-simplifying , HHL algorithm works as follows : if A has spectral decomposition A = ∑n i=1 λiviv T i ( where λi and vi are corresponding eigenvalues and eigenstates of A ) , then A−1 maps λivi 7→ 1 λi vi . The vector b also can be written as the linear combination of the A ’ s eigenvectors vi as b = ∑n i=1 βivi ( we are not required to compute βi ) . Then A−1b = ∑n i=1 βi 1 λi vi . In general A and A−1 are not unitary ( unless all A ’ s eigenvalues have unit magnitude ) , therefore we are not able to apply A−1 directly on |b〉 . However , since U = eiA = ∑n i=1 e iλiviv T i is unitary and has the same eigenvectors as A and A −1 , one can implement U and powers of U on a quantum computer by Hamiltonian simulation techniques ; clearly for any expected speed-up , one need to enact eiA efficiently . The HHL algorithm uses the phase estimation subroutine to estimate an approximation of λi up to a small error . The Next step computes a conditional rotation on the approximated value of λi and an auxiliary qubit |0〉 and outputs 1 λi |0〉+ √ 1− 1 λ2i |1〉 . The last step involves the inverse of phase estimation and quantum measure- ment for getting rid of garbage qubits and outputs our desired state |x〉 = A−1|b〉 = ∑n i=1 βi 1 λi vi . Density Operators . Density operator formalism is an alternative formulation for quantum mechanics that allows probabilistic mixtures of pure states , more generally referred to as mixed states . A mixed state that describes an ensemble { pi , |ψ〉i } is written as ρ = k∑ i=1 pi|ψi〉〈ψi| , ( 4 ) where ∑k i=1 pi = 1 forms a probability distribution and ρ is called density operator , which in a finite-dimensional system , in computational basis , is a semi-definite positive matrix with Tr ( ρ ) = 1 . A unitary operator U maps a quantum state expressed as a density operator ρ to UρU† , where U† is the complex conjugate of the operator U . Partial Trace of Composite Quantum System . Consider a two-part quantum system in a state described by tensor product of two density operators ρ ⊗ σ . A partial trace , tracing out the second part of the quantum system , is defined as the linear operator that leaves the first part of the system in a state Tr2 ( ρ⊗ σ ) = ρ tr ( σ ) , where Tr ( σ ) is the trace of the matrix σ . To obtain Kernel matrix K as a density matrix , quantum LS-SVM ( Rebentrost et al. , 2014b ) relies on partial trace , and on a quantum oracle that can convert , in superposition , each data point { xi } mi=1 , xi ∈ Rp to a quantum state |xi〉 = 1‖ xi ‖ ∑p t=1 ( xi ) t|t〉 , where ( xi ) t refers to the tth feature value in data point xi and assuming the oracle is given ‖xi ‖ and yi . Vector of the labels is given in the same fashion as |y〉 = 1‖ y ‖ ∑m i=1 yi|i〉 . For preparation the normalized Kernel matrix K ′ = 1 tr ( K ) K where K = XTX , we need to prepare a quantum state combining all data points in quantum superposition |X〉 = 1√∑m i=1‖ xi ‖ 2 ∑m i=1 |i〉⊗‖xi ‖ |xi〉 . The normalized Kernel matrix is obtained by discarding the training set state , K ′ = Tr2 ( |X〉〈X| ) = 1∑m i=1 ‖xi ‖ 2 m∑ i , j=1 ‖xi ‖ ‖xj ‖ 〈xi|xj〉|i〉〈j| . ( 5 ) The approach used above to construct density matrix corresponding to linear kernel matrix can be extended to polynomial kernels ( Rebentrost et al. , 2014b ) . LMR Technique for Density Operator Exponentiation . In HHL-based quantum machine learning algorithms , including in the method proposed here , matrix A for the Hamiltonian simulation within the HHL algorithm is based on data . For example , A can contain the kernel matrix K captured in the quantum system as a density matrix . Then , one need to be able to efficiently compute e−iK∆t , where K is scaled by the trace of kernel matrix . Since K is not sparse , a strategy similar to ( Lloyd et al. , 2014 ) is adapted for the exponentiation of a non-sparse density matrix : Tr1 { e−iS∆t ( K ⊗ σ ) eiS∆t } = σ − i∆t [ K , σ ] +O ( ∆t2 ) ≈ e−iK∆tσeiK∆t , ( 6 ) where S = ∑ i , j |i〉〈j| ⊗ |j〉〈i| is the swap operator and the facts Tr1 { S ( K ⊗ σ ) } = Kσ and Tr1 { ( K ⊗ σ ) S } = σK are used . The equation ( 6 ) summarizes the LMR technique : approximating e−iK∆tσeiK∆t up to error O ( ∆t2 ) is equivalent to simulating a swap operator S , applying it to the state K ⊗ σ and discarding the first system by taking partial trace operation . Since the swap operator is sparse , its simulation is efficient . Therefore the LMR trick provides an efficient way to approximate exponentiation of a non-sparse density matrix . Quantum LS-SVM . Quantum LS-SVM ( Rebentrost et al. , 2014b ) uses partial trace to construct density operator corresponding to the kernel matrixK . Once the kernel matrixK becomes available as a density operator , the quantum LS-SVM proceeds by applying the HHL algorithm for solving the system of linear equations associated with LS-LSVM , using the LMR technique for performing the density operator exponentiation e−iK∆t where the density matrix K encodes the kernel matrix .
The paper proposes a quantum computer-based algorithm for semi-supervised least squared kernel SVM. This work builds upon LS-SVM of Rebentrost et al (2014b) which developed a quantum algorithm for the supervised version of the problem. While the main selling point of quantum LS-SVM is that it scales logarithmically with data size, supervised algorithms shall not fully enjoy logarithmic scaling unless the cost for collecting labeled data is also logarithmic, which is unlikely. Therefore, semi-supervised setting is certainly appealing. Technically, there are two main contributions. The first is the method of providing Laplacian as an input to the quantum computer. The second contribution, which is about the computation of matrix inverse (K + KLK)^{-1}, is a bit more technical, and could be considered as the main contribution of the paper.
SP:77d59e1e726172184249bdfdd81011617dc9c208
Quantum Semi-Supervised Kernel Learning
Quantum machine learning methods have the potential to facilitate learning using extremely large datasets . While the availability of data for training machine learning models is steadily increasing , oftentimes it is much easier to collect feature vectors that to obtain the corresponding labels . One of the approaches for addressing this issue is to use semi-supervised learning , which leverages not only the labeled samples , but also unlabeled feature vectors . Here , we present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines . The algorithm uses recent advances in quantum sample-based Hamiltonian simulation to extend the existing Quantum LS-SVM algorithm to handle the semi-supervised term in the loss , while maintaining the same quantum speedup as the Quantum LS-SVM . 1 INTRODUCTION . Data sets used for training machine learning models are becoming increasingly large , leading to continued interest in fast methods for solving large-scale classification problems . One of the approaches being explored is training the predictive model using a quantum algorithm that has access to the training set stored in quantum-accessible memory . In parallel to research on efficient architectures for quantum memory ( Blencowe , 2010 ) , work on quantum machine learning algorithms and on quantum learning theory is under way ( see for example Refs . ( Biamonte et al. , 2017 ; Dunjko & Briegel , 2018 ; Schuld & Petruccione , 2018 ) and ( Arunachalam & de Wolf , 2017 ) for review ) . An early example of this approach is Quantum LS-SVM ( Rebentrost et al. , 2014a ) , which achieves exponential speedup compared to classical LS-SVM algorithm . Quantum LS-SVM uses quadratic least-squares loss and squared-L2 regularizer , and the optimization problem can be solved using the seminal HHL ( Harrow et al. , 2009 ) algorithm for solving quantum linear systems of equations . While progress has been made in quantum algorithms for supervised learning , it has been recently advocated that the focus should shift to unsupervised and semi-supervised setting ( Perdomo-Ortiz et al. , 2018 ) . In many domains , the most laborious part of assembling a training set is the collection of sample labels . Thus , in many scenarios , in addition to the labeled training set of size m we have access to many more feature vectors with missing labels . One way of utilizing these additional data points to improve the classification model is through semi-supervised learning . In semi-supervised learning , we are given m observations x1 , ... , xm drawn from the marginal distribution p ( x ) , where the l ( l m ) first data points come with labels y1 , ... , yl drawn from conditional distribution p ( y|x ) . Semi-supervised learning algorithms exploit the underlying distribution of the data to improve classification accuracy on unseen samples . In the approach considered here , the training samples are connected by a graph that captures their similarity . Here , we introduce a quantum algorithm for semi-supervised training of a kernel support vector machine classification model . We start with the existing Quantum LS-SVM ( Rebentrost et al. , 2014a ) , and use techniques from sample-based Hamiltonian simulation ( Kimmel et al. , 2017 ) to add a semisupervised term based on Laplacian SVM ( Melacci & Belkin , 2011 ) . As is standard in quantum machine learning ( Li et al. , 2019 ) , the algorithm accesses training points and the adjacency matrix of the graph connecting samples via a quantum oracle . We show that , with respect to the oracle , the proposed algorithm achieves the same quantum speedup as LS-SVM , that is , adding the semisupervised term does not lead to increased computational complexity . 2 PRELIMINARIES . 2.1 SEMI-SUPERVISED LEAST-SQUARES KERNEL SUPPORT VECTOR MACHINES .. Consider a problem where we are aiming to find predictors h ( x ) : X → R that are functions from a RKHS defined by a kernel K. In Semi-Supervised LS-SVMs in RKHS , we are looking for a function h ∈ H that minimizes min h∈H , b∈R γ 2 l∑ i=1 ( yi − ( h ( xi ) + b ) ) 2 + 1 2 ||h||2H + 1 2 ||∇h||2E , where γ is a user define constant allowing for adjusting the regularization strength . The last term captures the squared norm of the graph gradient on the graph G that contains all training samples as vertices , and expresses similarity between samples through , possibly edges Gu , v 1 2 ||∇h||2E = 1 2 ∑ u∼v Gu , v ( h̄u − h̄v ) 2 = h̄TLh̄ , where h̄u is the function value h ( xi ) for the vertex u corresponding to training point xi , and L is the combinatorial graph Laplacian matrix such that L [ i , j ] = Dj −Gi , j . The Representer Theorem states that if H is RKHS defined by kernel K : X × X → R , then the solution minimizing the problem above is achieved for a function h that uses only the representers of the training points , that is , a function of the form h ( x ) = ∑m j=1 cjKxj ( x ) = ∑m j=1 cjK ( xj , x ) . Thus , we can translate the problem from RKHS into a constrained quadratic optimization problem over finite , real vectors min c , ξ , b γ 2 m∑ i=1 ξ2i + 1 2 cTKc+ 1 2 cTKLKc s.t . 1− yi b+ m∑ j=1 cjK [ i , j ] = ξi where l ≤ m is the number of training points with labels ( these are grouped at the beginning of the training set ) , and h̄ = Kc , since function h is defined using representers Kxi . The semi-supervised term , the squared norm of the graph gradient of h , 1/2||∇h||2E , penalizes large changes of function h over edges of graph G. In defining the kernel K and the Laplacian L and in the two regularization terms we use all m samples . On the other hand , in calculating the empirical quadratic loss we only use the first l samples . The solution to the Semi-Supervised LS-SVMs is given by solving the following ( m+1 ) × ( m+1 ) system of linear equations [ 0 1T 1 K +KLK + γ−11 ] [ b α ] = [ 0 y ] , ( 1 ) where y = ( y1 , ... , ym ) T , 1 = ( 1 , ... , 1 ) T , 1 is identity matrix , K is kernel matrix , L is the graph Laplacian matrix , γ is a hyperparameter and α = ( α1 , ... , αm ) T is the vector of Lagrangian multipliers . 2.2 QUANTUM COMPUTING AND QUANTUM LS-SVM . Quantum computers are devices which perform computing according to the laws of quantum mechanics , a mathematical framework for describing physical theories , in language of linear algebra . Quantum Systems . Any isolated , closed quantum physical system can be fully described by a unit-norm vector in a complex Hilbert space appropriate for that system ; in quantum computing , the space is always finite-dimensional , Cd . In quantum mechanics and quantum computing , Dirac notation for linear algebra is commonly used . In Dirac notation , a vector x ∈ Cd and its complex conjugate xT , which represents a functional Cd → R , are denoted by |x〉 ( called ket ) and 〈x| ( called bra ) , respectively . We call { |ei〉 } di=1 the computational basis , where |ei〉 = ( 0 , ... , 1 , ... 0 ) T with exactly one 1 entry in the i-th position . Any |v〉 = ( v1 , ... , vd ) T can be written as |v〉 = ∑d i=1 vi|ei〉 ; coefficient vi ∈ C are called probability amplitudes . Any unit vector |x〉 ∈ Cd describes a d-level quantum state . Such a system is called a pure state . An inner product of |x1〉 , |x2〉 ∈ Cd is written as 〈x1|x2〉 . A two-level quantum state |ψ〉 = α|0〉 + β|1〉 , where |0〉 = ( 1 , 0 ) T , |1〉 = ( 0 , 1 ) T and α , β ∈ C , |α|2 + |β|2 , is called a quantum bit , or qubit for short . When both α and β are nonzero , we say |ψ〉 is in a superposition of the computational basis |0〉 and |1〉 ; the two superposition states |+〉 = 1√ 2 ( |0〉+ |1〉 ) and |−〉 = 1√ 2 ( |0〉 − |1〉 ) are very common in quantum computing . A composite quantum state of two distinct quantum systems |x1〉 ∈ Cd1 and |x2〉 ∈ Cd2 is described as tensor product of quantum states |x1〉 ⊗ |x2〉 ∈ Cd1 ⊗ Cd2 . Thus , a state of an n-qubit system is a vector in the tensor product space ( C2 ) ⊗n = C2 ⊗ C2 ⊗ ... ⊗ C2 , and is written as ∑2n−1 i=0 αi|i〉 , where i is expressed using its binary representation ; for example for n = 4 , we have |2〉 = |0010〉 = |0〉 ⊗ |0〉 ⊗ |1〉 ⊗ |0〉 . Transforming and Measuring Quantum States . Quantum operations manipulate quantum states in order to obtain some desired final state . Two types of manipulation of a quantum system are allowed by laws of physics : unitary operators and measurements . Quantum measurement , if done in the computational basis , stochastically transforms the state of the system into one of the computational basis states , based on squared magnitudes of probability amplitudes ; that is , 1√ 2 ( |0〉+ |1〉 ) will result in |0〉 and |1〉 with equal chance . Unitary operators are deterministic , invertible , normpreserving linear transforms . A unitary operator U models a transformation of a quantum state |u〉 to |v〉 = U|u〉 . Note that U|u1〉 + U|u2〉 = U ( |u1〉+ |u2〉 ) , applying a unitary to a superposition of states has the same effect as applying it separately to element of the superposition . In quantum circuit model unitary transformations are referred to as quantum gates – for example , one of the most common gates , the single-qubit Hadamard gate , is a unitary operator represented in the computational basis by the matrix H : = 1√ 2 ( 1 1 1 −1 ) . ( 2 ) Note that H|0〉 = |+〉 and H|1〉 = |−〉 . Quantum Input Model . Quantum computation typically starts from all qubits in |0〉 state . To perform computation , access to input data is needed . In quantum computing , input is typically given by a unitary operator that transforms the initial state into the desired input state for the computation – such unitaries are commonly referred to as oracles , and the computational complexity of quantum algorithms is typically measured with access to an oracle as the unit . For problems involving large amounts of input data , such as for quantum machine learning algorithms , an oracle that abstracts random access memory is often assumed . Quantum random access memory ( qRAM ) uses logN qubits to address any quantum superposition of N memory cell which may contains either quantum or classical information . For example , qRAM allows accessing classical data entries xji in quantum superposition by a transformation 1 √ mp m∑ i=1 p∑ j=1 |i , j〉|0 ... 0〉 qRAM−−−→ 1√ mp m∑ i=1 p∑ j=1 |i , j〉|xji 〉 , where |xji 〉 is a binary representation up to a given precision . Several approaches for creating quantum RAM are being considered ( Giovannetti et al. , 2008 ; Arunachalam et al. , 2015 ; Biamonte et al. , 2017 ) , but it is still an open challenge , and subtle differences in qRAM architecture may erase any gains in computational complexity of a quantum algorithm Aaronson ( 2015 ) . Quantum Linear Systems of Equations . Given an input matrix A ∈ Cn×n and a vector b ∈ Cn , the goal of linear system of equations problem is finding x ∈ Cn such that Ax = b . When A is Hermitian and full rank , the unique solution is x = A−1b . If A is not a full rank matrix then A−1 is replaced by the Moore-Penrose pseudo-inverse . HHL algorithm introduced an analogous problem in quantum setting : assuming an efficient algorithm for preparing b as a quantum state b = ∑n i=1 bi|i〉 using dlog ne+ 1 qubits , the algorithm applies quantum subroutines of phase estimation , controlled rotation , and inverse of phase estimation to obtain the state |x〉 = A −1|b〉 ‖A−1|b〉 ‖ . ( 3 ) Intuitively and at the risk of over-simplifying , HHL algorithm works as follows : if A has spectral decomposition A = ∑n i=1 λiviv T i ( where λi and vi are corresponding eigenvalues and eigenstates of A ) , then A−1 maps λivi 7→ 1 λi vi . The vector b also can be written as the linear combination of the A ’ s eigenvectors vi as b = ∑n i=1 βivi ( we are not required to compute βi ) . Then A−1b = ∑n i=1 βi 1 λi vi . In general A and A−1 are not unitary ( unless all A ’ s eigenvalues have unit magnitude ) , therefore we are not able to apply A−1 directly on |b〉 . However , since U = eiA = ∑n i=1 e iλiviv T i is unitary and has the same eigenvectors as A and A −1 , one can implement U and powers of U on a quantum computer by Hamiltonian simulation techniques ; clearly for any expected speed-up , one need to enact eiA efficiently . The HHL algorithm uses the phase estimation subroutine to estimate an approximation of λi up to a small error . The Next step computes a conditional rotation on the approximated value of λi and an auxiliary qubit |0〉 and outputs 1 λi |0〉+ √ 1− 1 λ2i |1〉 . The last step involves the inverse of phase estimation and quantum measure- ment for getting rid of garbage qubits and outputs our desired state |x〉 = A−1|b〉 = ∑n i=1 βi 1 λi vi . Density Operators . Density operator formalism is an alternative formulation for quantum mechanics that allows probabilistic mixtures of pure states , more generally referred to as mixed states . A mixed state that describes an ensemble { pi , |ψ〉i } is written as ρ = k∑ i=1 pi|ψi〉〈ψi| , ( 4 ) where ∑k i=1 pi = 1 forms a probability distribution and ρ is called density operator , which in a finite-dimensional system , in computational basis , is a semi-definite positive matrix with Tr ( ρ ) = 1 . A unitary operator U maps a quantum state expressed as a density operator ρ to UρU† , where U† is the complex conjugate of the operator U . Partial Trace of Composite Quantum System . Consider a two-part quantum system in a state described by tensor product of two density operators ρ ⊗ σ . A partial trace , tracing out the second part of the quantum system , is defined as the linear operator that leaves the first part of the system in a state Tr2 ( ρ⊗ σ ) = ρ tr ( σ ) , where Tr ( σ ) is the trace of the matrix σ . To obtain Kernel matrix K as a density matrix , quantum LS-SVM ( Rebentrost et al. , 2014b ) relies on partial trace , and on a quantum oracle that can convert , in superposition , each data point { xi } mi=1 , xi ∈ Rp to a quantum state |xi〉 = 1‖ xi ‖ ∑p t=1 ( xi ) t|t〉 , where ( xi ) t refers to the tth feature value in data point xi and assuming the oracle is given ‖xi ‖ and yi . Vector of the labels is given in the same fashion as |y〉 = 1‖ y ‖ ∑m i=1 yi|i〉 . For preparation the normalized Kernel matrix K ′ = 1 tr ( K ) K where K = XTX , we need to prepare a quantum state combining all data points in quantum superposition |X〉 = 1√∑m i=1‖ xi ‖ 2 ∑m i=1 |i〉⊗‖xi ‖ |xi〉 . The normalized Kernel matrix is obtained by discarding the training set state , K ′ = Tr2 ( |X〉〈X| ) = 1∑m i=1 ‖xi ‖ 2 m∑ i , j=1 ‖xi ‖ ‖xj ‖ 〈xi|xj〉|i〉〈j| . ( 5 ) The approach used above to construct density matrix corresponding to linear kernel matrix can be extended to polynomial kernels ( Rebentrost et al. , 2014b ) . LMR Technique for Density Operator Exponentiation . In HHL-based quantum machine learning algorithms , including in the method proposed here , matrix A for the Hamiltonian simulation within the HHL algorithm is based on data . For example , A can contain the kernel matrix K captured in the quantum system as a density matrix . Then , one need to be able to efficiently compute e−iK∆t , where K is scaled by the trace of kernel matrix . Since K is not sparse , a strategy similar to ( Lloyd et al. , 2014 ) is adapted for the exponentiation of a non-sparse density matrix : Tr1 { e−iS∆t ( K ⊗ σ ) eiS∆t } = σ − i∆t [ K , σ ] +O ( ∆t2 ) ≈ e−iK∆tσeiK∆t , ( 6 ) where S = ∑ i , j |i〉〈j| ⊗ |j〉〈i| is the swap operator and the facts Tr1 { S ( K ⊗ σ ) } = Kσ and Tr1 { ( K ⊗ σ ) S } = σK are used . The equation ( 6 ) summarizes the LMR technique : approximating e−iK∆tσeiK∆t up to error O ( ∆t2 ) is equivalent to simulating a swap operator S , applying it to the state K ⊗ σ and discarding the first system by taking partial trace operation . Since the swap operator is sparse , its simulation is efficient . Therefore the LMR trick provides an efficient way to approximate exponentiation of a non-sparse density matrix . Quantum LS-SVM . Quantum LS-SVM ( Rebentrost et al. , 2014b ) uses partial trace to construct density operator corresponding to the kernel matrixK . Once the kernel matrixK becomes available as a density operator , the quantum LS-SVM proceeds by applying the HHL algorithm for solving the system of linear equations associated with LS-LSVM , using the LMR technique for performing the density operator exponentiation e−iK∆t where the density matrix K encodes the kernel matrix .
This paper developes a quantum algorithm for kernel-based support vector machine working in a semi-supervised learning setting. The motivation is to utilise the significant advantage of quantum computation to train machine learning models on large-scale datasets efficiently. This paper reviews the existing work on using quantum computing for least-squares svm (via solving quantum linear systems of equations) and then extends it to deal with kernel svm in a semi-supervised setting.
SP:77d59e1e726172184249bdfdd81011617dc9c208
Invertible generative models for inverse problems: mitigating representation error and dataset bias
1 INTRODUCTION . Generative deep neural networks have shown remarkable performance as natural signal priors in imaging inverse problems , such as denoising , inpainting , compressed sensing , blind deconvolution , and phase retrieval . These generative models can be trained from datasets consisting of images of particular natural signal classes , such as faces , fingerprints , MRIs , and more ( Karras et al. , 2017 ; Minaee and Abdolrashidi , 2018 ; Shin et al. , 2018 ; Chen et al. , 2018 ) . Some such models , including variational autoencoders ( VAEs ) and generative adversarial networks ( GANs ) , learn an explicit low-dimensional manifold that approximates a natural signal class ( Goodfellow et al. , 2014 ; Kingma and Welling , 2013 ; Rezende et al. , 2014 ) . We will refer to such models as GAN priors . With an explicit parameterization of the natural signal manifold by a low dimensional latent representation , these generative models allow for direct optimization over a natural signal class . Consequently , they can obtain significant performance improvements over non-learning based methods . For example , GAN priors have been shown to outperform sparsity priors at compressed sensing with 5-10x fewer measurements . Additionally , GAN priors have led to theory for signal recovery in the linear compressive sensing and nonlinear phase retrieval problems ( Bora et al. , 2017 ; Hand and Voroninski , 2017 ; Hand et al. , 2018 ) , and they have also shown promising results for the nonlinear blind image deblurring problem ( Asim et al. , 2018 ) . A significant drawback of GAN priors for solving inverse problems is that they can have representation error or bias due to architecture and training . This can happen for many reasons , including because the generator only approximates the natural signal manifold , because the natural signal manifold is of higher dimensionality than modeled , because of mode collapse , or because of bias in the training dataset itself . As many aspects of generator architecture and training lack clear principles , representation error of GANs may continue to be a challenge even after substantial hand crafting and engineering . Additionally , learning-based methods are particularly vulnerable to the biases of their training data , and training data , no matter how carefully collected , will always contain degrees of bias . As an example , the CelebA dataset ( Liu et al. , 2015 ) is biased toward people who are young , who do not have facial hair or glasses , and who have a light skin tone . As we will see , a GAN prior trained on this dataset learns these biases and exhibits image recovery failures because of them . In contrast , invertible neural networks can be trained as generators with zero representation error . These networks are invertible ( one-to-one and onto ) by architectural design ( Dinh et al. , 2016 ; Gomez et al. , 2017 ; Jacobsen et al. , 2018 ; Kingma and Dhariwal , 2018 ) . Consequently , they are capable of recovering any image , including those significantly out-of-distribution relative to a biased training set ; see Figure 1 . We call the domain of an invertible generator the latent space , and we call the range of the generator the signal space . These must have equal dimensionality . Flow-based invertible generative models are composed of a sequence of learned invertible transformations . Their strengths include : their architecture allows exact and efficient latent-variable inference , direct loglikelihood evaluation , and efficient image synthesis ; they have the potential for significant memory savings in gradient computations ; and they can be trained by directly optimizing the likelihood of training images . This paper emphasizes an additional strength : because they lack representation error , invertible models can mitigate dataset bias and improve performance on inverse problems with out-of-distribution data . In this paper , we study generative invertible neural network priors for imaging inverse problems . We will specifically use the Glow architecture , though our framework could be used with other architectures . A Glow-based model is composed of a sequence of invertible affine coupling layers , 1x1 convolutional layers , and normalization layers . Glow models have been successfully trained to generate high resolution photorealistic images of human faces ( Kingma and Dhariwal , 2018 ) . We present a method for using pretrained generative invertible neural networks as priors for imaging inverse problems . The invertible generator , once trained , can be used for a wide variety of inverse problems , with no specific knowledge of those problems used during the training process . Our method is an empirical risk formulation based on the following proxy : we penalize the likelihood of an image ’ s latent representation instead of the image ’ s likelihood itself . While this may be couterintuitive , it admits optimization problems that are easier to solve empirically . In the case of compressive sensing , our formulation succeeds even without direct penalization of this proxy likelihood , with regularization occuring through initialization of a gradient descent in latent space . We train a generative invertible model using the CelebA dataset . With this fixed model as a signal prior , we study its performance at denoising , compressive sensing , and inpainting . For denoising , it can outperform BM3D ( Dabov et al. , 2007 ) . For compressive sensing on test images , it can obtain higher quality reconstructions than Lasso across almost all subsampling ratios , and at similar reconstruction errors can succeed with 10-20x fewer measurements than Lasso . It provides an improvement of about 2x fewer linear measurements when compared to Bora et al . ( 2017 ) . Despite being trained on the CelebA dataset , our generative invertible prior can give higher quality reconstructions than Lasso on out-of-distribution images of faces , and , to a lesser extent , unrelated natural images . Our invertible prior outperforms a pretrained DCGAN ( Radford et al. , 2015 ) at face inpainting and exhibits qualitatively reasonable results on out-of-distribution human faces . We provide additional experiments in the appendix , including for training on other datasets . 2 METHOD AND MOTIVATION . We assume that we have access to a pretrained generative invertible neural network G : Rn → Rn . We write x = G ( z ) and z = G−1 ( x ) , where x ∈ Rn is an image that corresponds to the latent representation z ∈ Rn . We will consider a G that has the Glow architecture introduced in Kingma and Dhariwal ( 2018 ) . It can be trained by direct optimization of the likelihood of a collection of training images of a natural signal class , under a standard Gaussian distribution over the latent space . We consider recovering an image x from possibly-noisy linear measurements given by A ∈ Rm×n , y = Ax+ η , where η ∈ Rm models noise . Given a pretrained invertible generator G , we have access to likelihood estimates for all images x ∈ Rn . Hence , it is natural to attempt to solve the above inverse problem by a maximum likelihood formulation given by min x∈Rn ‖Ax− y‖2 − γ log pG ( x ) , ( 1 ) where pG is the likelihood function over x induced by G , and γ is a hyperparameter . We have found this formulation to be empirically challenging to optimize ; hence we study the following proxy : min z∈Rn ‖AG ( z ) − y‖2 + γ‖z‖ . ( 2 ) Unless otherwise stated , we initialize ( 2 ) at z0 = 0 . The motivation for formulation ( 2 ) is as follows . As a proxy for the likelihood of an image x ∈ Rn , we will use the likelihood of its latent representation z = G−1 ( x ) . Because the invertible network G was trained to map a standard normal in Rn to a distribution over images , the log-likelihood of a point z is proportional to ‖z‖2 . Instead of penalizing ‖z‖2 , we alternatively penalize the unsquared ‖z‖ . In Appendix B , we show comparable performance for both the squared and unsquared formulations . In principle , our formulation has an inherent flaw : some high-likelihood latent representations z correspond to low-likelihood images x . Mathematically , this comes from the Jacobian term that relates the likelihood in z to the likelihood in x upon application of the map G. For multimodel distributions , such images must exist , which we will illustrate in the discussion . This proxy formulation relies on the fact that the set of such images has low probability and that they are inconsistent with enough provided measurements . Surprisingly , despite this potential weakness , we will observe image reconstructions that are superior to BM3D and GAN-based methods at denoising , and superior to GAN-based and Lasso-based methods at compressive sensing . In the case of compressive sensing and inpainting , we take γ = 0 in formulation ( 2 ) . The motivation for such a formulation initialized at z0 = 0 is as follows . There is a manifold of images that are consistent with the provided measurements . We want to find the image x of highest likelihood on this manifold . Our proxy turns the likelihood maximization task over an affine space in x into the geometric task of finding the point on a manifold in z-space that is closest to the origin with respect to the Euclidean norm . In order to approximate that point , we run a gradient descent in z down the data misfit term starting at z0 = 0 . In the case of GAN priors for G : Rk → Rn , we will use the formulation from Bora et al . ( 2017 ) , which is the formulation above in the case where the optimization is performed over Rk , γ = 0 , and initialization is selected randomly . All the experiments that follow will be for an invertible model we trained on the CelebA dataset of celebrity faces , as in Kingma and Dhariwal ( 2018 ) . Similar results for models trained on birds and flowers ( Wah et al. , 2011 ; Nilsback and Zisserman , 2008 ) can be found in the appendix . Due to computational considerations , we run experiments on 64 × 64 color images with the pixel values scaled between [ 0 , 1 ] . The train and test sets contain a total of 27,000 and 3,000 images , respectively . We trained a Glow architecture ( Kingma and Dhariwal , 2018 ) ; see Appendix A for details . Once trained , the Glow prior is fixed for use in each of the inverse problems below . We also trained a DCGAN for the same dataset . We solve ( 2 ) using LBFGS , which was found to outperform Adam ( Kingma and Ba , 2014 ) . DCGAN results are reported for an average of 3 runs because we observed some variance due to random initialization . 3 APPLICATIONS . 3.1 DENOISING . We consider the denoising problem with A = I and η ∼ N ( 0 , σ2I ) , for images x in the CelebA test dataset . We evaluate the performance of a Glow prior , a DCGAN prior , and BM3D for two different noise levels . Figure 2 shows the recovered PSNR values as a function of γ for denoising by the Glow and DCGAN priors , along with the PSNR by BM3D . The figure shows that the performance of the regularized Glow prior increases with γ , and then decreases . If γ is too low , then the network fits to the noise in the image . If γ is too high , then data fit is not enforced strongly enough . The left panel reveals that an appropriately regularized Glow prior can outperform BM3D by almost 2 dB . The experiments also reveal that appropriately regularized Glow priors outperform the DCGAN prior , which suffers from representation error and is not aided by the regularization . The right panel confirms that with smaller noise levels , less regularization is needed for optimal performance . A visual comparison of the recoveries at the noise level σ = 0.1 using Glow , DCGAN priors , and BM3D can be seen in Figure 3 . Note that the recoveries with Glow are sharper than BM3D . See Appendix B for more quantitative and qualitative results .
This paper proposes to employ the likelihood of the latent representation of images as the optimization target in the Glow (Kingma and Dhariwal, 2018) framework. The authors argue that to optimize the ''proxy for image likelihood'' has two advantages: First, the landscapes of the surface are more smooth; Second, a latent sample point in the regions that have a low likelihood is able to generate desired outcomes. In the experimental analysis, the authors compare their proposed method with several baselines and show prior performance.
SP:e58dc2d21175a62499405b7f4c3a03b135530838
Invertible generative models for inverse problems: mitigating representation error and dataset bias
1 INTRODUCTION . Generative deep neural networks have shown remarkable performance as natural signal priors in imaging inverse problems , such as denoising , inpainting , compressed sensing , blind deconvolution , and phase retrieval . These generative models can be trained from datasets consisting of images of particular natural signal classes , such as faces , fingerprints , MRIs , and more ( Karras et al. , 2017 ; Minaee and Abdolrashidi , 2018 ; Shin et al. , 2018 ; Chen et al. , 2018 ) . Some such models , including variational autoencoders ( VAEs ) and generative adversarial networks ( GANs ) , learn an explicit low-dimensional manifold that approximates a natural signal class ( Goodfellow et al. , 2014 ; Kingma and Welling , 2013 ; Rezende et al. , 2014 ) . We will refer to such models as GAN priors . With an explicit parameterization of the natural signal manifold by a low dimensional latent representation , these generative models allow for direct optimization over a natural signal class . Consequently , they can obtain significant performance improvements over non-learning based methods . For example , GAN priors have been shown to outperform sparsity priors at compressed sensing with 5-10x fewer measurements . Additionally , GAN priors have led to theory for signal recovery in the linear compressive sensing and nonlinear phase retrieval problems ( Bora et al. , 2017 ; Hand and Voroninski , 2017 ; Hand et al. , 2018 ) , and they have also shown promising results for the nonlinear blind image deblurring problem ( Asim et al. , 2018 ) . A significant drawback of GAN priors for solving inverse problems is that they can have representation error or bias due to architecture and training . This can happen for many reasons , including because the generator only approximates the natural signal manifold , because the natural signal manifold is of higher dimensionality than modeled , because of mode collapse , or because of bias in the training dataset itself . As many aspects of generator architecture and training lack clear principles , representation error of GANs may continue to be a challenge even after substantial hand crafting and engineering . Additionally , learning-based methods are particularly vulnerable to the biases of their training data , and training data , no matter how carefully collected , will always contain degrees of bias . As an example , the CelebA dataset ( Liu et al. , 2015 ) is biased toward people who are young , who do not have facial hair or glasses , and who have a light skin tone . As we will see , a GAN prior trained on this dataset learns these biases and exhibits image recovery failures because of them . In contrast , invertible neural networks can be trained as generators with zero representation error . These networks are invertible ( one-to-one and onto ) by architectural design ( Dinh et al. , 2016 ; Gomez et al. , 2017 ; Jacobsen et al. , 2018 ; Kingma and Dhariwal , 2018 ) . Consequently , they are capable of recovering any image , including those significantly out-of-distribution relative to a biased training set ; see Figure 1 . We call the domain of an invertible generator the latent space , and we call the range of the generator the signal space . These must have equal dimensionality . Flow-based invertible generative models are composed of a sequence of learned invertible transformations . Their strengths include : their architecture allows exact and efficient latent-variable inference , direct loglikelihood evaluation , and efficient image synthesis ; they have the potential for significant memory savings in gradient computations ; and they can be trained by directly optimizing the likelihood of training images . This paper emphasizes an additional strength : because they lack representation error , invertible models can mitigate dataset bias and improve performance on inverse problems with out-of-distribution data . In this paper , we study generative invertible neural network priors for imaging inverse problems . We will specifically use the Glow architecture , though our framework could be used with other architectures . A Glow-based model is composed of a sequence of invertible affine coupling layers , 1x1 convolutional layers , and normalization layers . Glow models have been successfully trained to generate high resolution photorealistic images of human faces ( Kingma and Dhariwal , 2018 ) . We present a method for using pretrained generative invertible neural networks as priors for imaging inverse problems . The invertible generator , once trained , can be used for a wide variety of inverse problems , with no specific knowledge of those problems used during the training process . Our method is an empirical risk formulation based on the following proxy : we penalize the likelihood of an image ’ s latent representation instead of the image ’ s likelihood itself . While this may be couterintuitive , it admits optimization problems that are easier to solve empirically . In the case of compressive sensing , our formulation succeeds even without direct penalization of this proxy likelihood , with regularization occuring through initialization of a gradient descent in latent space . We train a generative invertible model using the CelebA dataset . With this fixed model as a signal prior , we study its performance at denoising , compressive sensing , and inpainting . For denoising , it can outperform BM3D ( Dabov et al. , 2007 ) . For compressive sensing on test images , it can obtain higher quality reconstructions than Lasso across almost all subsampling ratios , and at similar reconstruction errors can succeed with 10-20x fewer measurements than Lasso . It provides an improvement of about 2x fewer linear measurements when compared to Bora et al . ( 2017 ) . Despite being trained on the CelebA dataset , our generative invertible prior can give higher quality reconstructions than Lasso on out-of-distribution images of faces , and , to a lesser extent , unrelated natural images . Our invertible prior outperforms a pretrained DCGAN ( Radford et al. , 2015 ) at face inpainting and exhibits qualitatively reasonable results on out-of-distribution human faces . We provide additional experiments in the appendix , including for training on other datasets . 2 METHOD AND MOTIVATION . We assume that we have access to a pretrained generative invertible neural network G : Rn → Rn . We write x = G ( z ) and z = G−1 ( x ) , where x ∈ Rn is an image that corresponds to the latent representation z ∈ Rn . We will consider a G that has the Glow architecture introduced in Kingma and Dhariwal ( 2018 ) . It can be trained by direct optimization of the likelihood of a collection of training images of a natural signal class , under a standard Gaussian distribution over the latent space . We consider recovering an image x from possibly-noisy linear measurements given by A ∈ Rm×n , y = Ax+ η , where η ∈ Rm models noise . Given a pretrained invertible generator G , we have access to likelihood estimates for all images x ∈ Rn . Hence , it is natural to attempt to solve the above inverse problem by a maximum likelihood formulation given by min x∈Rn ‖Ax− y‖2 − γ log pG ( x ) , ( 1 ) where pG is the likelihood function over x induced by G , and γ is a hyperparameter . We have found this formulation to be empirically challenging to optimize ; hence we study the following proxy : min z∈Rn ‖AG ( z ) − y‖2 + γ‖z‖ . ( 2 ) Unless otherwise stated , we initialize ( 2 ) at z0 = 0 . The motivation for formulation ( 2 ) is as follows . As a proxy for the likelihood of an image x ∈ Rn , we will use the likelihood of its latent representation z = G−1 ( x ) . Because the invertible network G was trained to map a standard normal in Rn to a distribution over images , the log-likelihood of a point z is proportional to ‖z‖2 . Instead of penalizing ‖z‖2 , we alternatively penalize the unsquared ‖z‖ . In Appendix B , we show comparable performance for both the squared and unsquared formulations . In principle , our formulation has an inherent flaw : some high-likelihood latent representations z correspond to low-likelihood images x . Mathematically , this comes from the Jacobian term that relates the likelihood in z to the likelihood in x upon application of the map G. For multimodel distributions , such images must exist , which we will illustrate in the discussion . This proxy formulation relies on the fact that the set of such images has low probability and that they are inconsistent with enough provided measurements . Surprisingly , despite this potential weakness , we will observe image reconstructions that are superior to BM3D and GAN-based methods at denoising , and superior to GAN-based and Lasso-based methods at compressive sensing . In the case of compressive sensing and inpainting , we take γ = 0 in formulation ( 2 ) . The motivation for such a formulation initialized at z0 = 0 is as follows . There is a manifold of images that are consistent with the provided measurements . We want to find the image x of highest likelihood on this manifold . Our proxy turns the likelihood maximization task over an affine space in x into the geometric task of finding the point on a manifold in z-space that is closest to the origin with respect to the Euclidean norm . In order to approximate that point , we run a gradient descent in z down the data misfit term starting at z0 = 0 . In the case of GAN priors for G : Rk → Rn , we will use the formulation from Bora et al . ( 2017 ) , which is the formulation above in the case where the optimization is performed over Rk , γ = 0 , and initialization is selected randomly . All the experiments that follow will be for an invertible model we trained on the CelebA dataset of celebrity faces , as in Kingma and Dhariwal ( 2018 ) . Similar results for models trained on birds and flowers ( Wah et al. , 2011 ; Nilsback and Zisserman , 2008 ) can be found in the appendix . Due to computational considerations , we run experiments on 64 × 64 color images with the pixel values scaled between [ 0 , 1 ] . The train and test sets contain a total of 27,000 and 3,000 images , respectively . We trained a Glow architecture ( Kingma and Dhariwal , 2018 ) ; see Appendix A for details . Once trained , the Glow prior is fixed for use in each of the inverse problems below . We also trained a DCGAN for the same dataset . We solve ( 2 ) using LBFGS , which was found to outperform Adam ( Kingma and Ba , 2014 ) . DCGAN results are reported for an average of 3 runs because we observed some variance due to random initialization . 3 APPLICATIONS . 3.1 DENOISING . We consider the denoising problem with A = I and η ∼ N ( 0 , σ2I ) , for images x in the CelebA test dataset . We evaluate the performance of a Glow prior , a DCGAN prior , and BM3D for two different noise levels . Figure 2 shows the recovered PSNR values as a function of γ for denoising by the Glow and DCGAN priors , along with the PSNR by BM3D . The figure shows that the performance of the regularized Glow prior increases with γ , and then decreases . If γ is too low , then the network fits to the noise in the image . If γ is too high , then data fit is not enforced strongly enough . The left panel reveals that an appropriately regularized Glow prior can outperform BM3D by almost 2 dB . The experiments also reveal that appropriately regularized Glow priors outperform the DCGAN prior , which suffers from representation error and is not aided by the regularization . The right panel confirms that with smaller noise levels , less regularization is needed for optimal performance . A visual comparison of the recoveries at the noise level σ = 0.1 using Glow , DCGAN priors , and BM3D can be seen in Figure 3 . Note that the recoveries with Glow are sharper than BM3D . See Appendix B for more quantitative and qualitative results .
This paper investigates the performance of invertible generative models for solving inverse problems. They argue that their most significant benefit over GAN priors is the lack of representation error that (1) enables invertible models to perform well on out-of-distribution data and (2) results in a model that does not saturate with increased number of measurements (as observed with GANs). They use a pre-trained Glow invertible network for the generator and solve a proxy for the maximum likelihood formulation of the problem, where the likelihood of an image is replaced by the likelihood of its latent representation. They demonstrate results on problems such as denoising, inpainting and compressed sensing. In all these applications, the invertible network consistently outperforms DCGAN across all noise levels/number of measurements. Furthermore, they demonstrate visually reasonable results on natural images significantly different from those in the training dataset.
SP:e58dc2d21175a62499405b7f4c3a03b135530838
Deep symbolic regression
1 INTRODUCTION . Understanding the mathematical relationships among variables in a physical system is an integral component of the scientific process . Symbolic regression aims to identify these relationships by searching over the space of tractable mathematical expressions to best fit a dataset . Specifically , given a dataset of ( X , y ) pairs , where X ∈ Rn and y ∈ R , symbolic regression aims to identify a function f ( X ) : Rn → R that minimizes a distance metric D ( y , f ( X ) ) between real and predicted values . That is , symbolic regression seeks to find the optimal f ? = argminf D ( y , f ( X ) ) , where the functional form of f is a tractable expression . The resulting expression f ? may be readily interpretable and/or provide useful scientific insights simply by inspection . In contrast , conventional regression imposes a single model structure that is fixed during training , often chosen to be expressive ( e.g . a neural network ) at the expense of being easily interpretable . However , the space of mathematical expressions is discrete ( in model structure ) and continuous ( in model parameters ) , growing exponentially with the length of the expression , rendering symbolic regression an extremely challenging machine learning problem . Given the large and combinatorial search space , traditional approaches to symbolic regression typically utilize evolutionary algorithms , especially genetic programming ( GP ) ( Koza , 1992 ; Bäck et al. , 2018 ) . In GP-based symbolic regression , a population of mathematical expressions is “ evolved ” using evolutionary operations like selection , crossover , and mutation to improve a fitness function . While GP can be effective , it is also known to scale poorly to larger problems and to exhibit high sensitivity to hyperparameters . Deep learning has permeated almost all areas of artificial intelligence , from computer vision ( Krizhevsky et al. , 2012 ) to optimal control ( Mnih et al. , 2015 ) . However , deep learning may seem incongruous with or even antithetical toward symbolic regression , given that neural networks are typically highly complex , difficult to interpret , and rely on gradient information . We propose a framework that resolves this incongruity by tying deep learning and symbolic regression together with a simple idea : use a large model ( i.e . neural network ) to search the space of small models ( i.e . symbolic expressions ) . This framework leverages the representational capacity of neural networks while entirely bypassing the need to interpret a network . We present deep symbolic regression ( DSR ) , a gradient-based approach for symbolic regression based on reinforcement learning . In DSR , a recurrent neural network ( RNN ) emits a distribution over mathematical expressions . Expressions are sampled from the distribution , instantiated , and evaluated based on their fitness to the dataset . This fitness is used as the reward signal to train the RNN parameters using a policy gradient algorithm . As training proceeds , the RNN adjusts the likelihood of an expression relative to its reward , assigning higher probabilities to better fitting expressions . We demonstrate that DSR outperforms a standard GP implementation in its ability to recover exact symbolic expressions from data , both with and without added noise . We summarize our contributions as follows : 1 ) a novel method for solving symbolic regression that outperforms standard GP , 2 ) an autoregressive generative modeling framework for optimizing hierarchical , variable-length objects , 3 ) a framework that accommodates in situ constraints , and 4 ) a novel risk-seeking strategy that optimizes for best-case performance . 2 RELATED WORK . Symbolic regression . Symbolic regression has a long history of evolutionary strategies , especially GP ( Koza , 1992 ; Bäck et al. , 2018 ; Uy et al. , 2011 ) . Among non-evolutionary approaches , the recent AI Feynman algorithm ( Udrescu & Tegmark , 2019 ) is a multi-staged approach to symbolic regression leveraging the observation that physical equations often exhibit simplifying properties like multiplicative separability and translational symmetry . The algorithm identifies and exploits such properties to recursively define simplified sub-problems that can eventually be solved using simple techniques like a polynomial fit or small brute force search . Brunton et al . ( 2016 ) develop a sparse regression approach to recover nonlinear dynamics equations from data ; however , their search space is limited to linear combinations of a library of basis functions . AutoML . Our framework has many parallels to a body of works within automated machine learning ( AutoML ) that use an autoregressive RNN to define a distribution over discrete objects and use reinforcement learning to optimize this distribution under a black-box performance metric ( Zoph & Le , 2017 ; Ramachandran et al. , 2017 ; Bello et al. , 2017 ) . The key methodological difference to our framework is that these works optimize objects that are both sequential and fixed length . For example , in neural architecture search ( Zoph & Le , 2017 ) , an RNN searches the space of neural network architectures , which are encoded by a sequence of discrete “ tokens ” specifying architectural properties ( e.g . number of neurons ) of each layer . The length of the sequence is fixed or scheduled during training . In contrast , a major contribution of our framework is defining a search space that is both inherently hierarchical and variable length . The most similar AutoML work searches for neural network activation functions ( Ramachandran et al. , 2017 ) . While the space of activation functions is hierarchical in nature , the authors ( rightfully ) constrain this space substantially by positing a functional unit that is repeated sequentially , thus restricting their search space back to a fixed-length sequence . This constraint is well-justified for learning activation functions , which tend to exhibit similar hierarchical structures . However , a repeating-unit constraint is not practical for symbolic regression because the ground truth expression may have arbitrary structure . Autoregressive models . The RNN-based distribution over expressions used in DSR is autoregressive , meaning each token is conditioned on the previously sampled tokens . Autoregressive models have proven to be useful for audio and image data ( Oord et al. , 2016a ; b ) in addition to the AutoML works discussed above ; we further demonstrate their efficacy for hierarchical expressions . GraphRNN defines a distribution over graphs that generates an adjacency matrix one column at a time in autoregressive fashion ( You et al. , 2018 ) . In principle , we could have constrained GraphRNN to define the distribution over expressions , since trees are a special case of graphs . However , GraphRNN constructs graphs breadth-first , whereas expressions are more naturally represented using depth-first traversals ( Li et al. , 2005 ) . Further , DSR exploits the hierarchical nature of trees by providing the parent and sibling as inputs to the RNN , and leverages the additional structure of expression trees that a node ’ s value determines its number of children ( e.g . cosine is a unary node ) . 3 METHODS . Our overall approach involves representing mathematical expressions by the pre-order traversals of their corresponding symbolic expression trees , developing an autoregressive model to generate expression trees under a pre-specified set of constraints , and using reinforcement learning to train the model to generate better-fitting expressions . 3.1 GENERATING EXPRESSIONS WITH A RECURRENT NEURAL NETWORK . We leverage the fact that algebraic expressions can be represented using symbolic expression trees , a type of binary tree in which nodes map to mathematical operators , input variables , or constants . Operators are internal nodes and may be unary ( e.g . sine ) or binary ( e.g . multiply ) . Input variables and constants are terminal nodes . We encode an expression τ by the pre-order traversal ( i.e . depth-first , then left-to-right ) of its corresponding expression tree.1 We denote the ith node in the traversal as τi and the length of the traversal as |τ | = T . Each node has a value within a given library L of possible node values or “ tokens , ” e.g . { + , − , × , ÷ , sin , cos , x } . Expressions are generated one node at a time along the pre-order traversal ( from τ1 to τT ) . For each node , a categorical distribution with parameters ψ defines the probabilities of selecting each node value from L. To capture the “ context ” of the expression as it is being generated , we condition this probability upon the selections of all previous nodes in that traversal . This conditional dependence can be achieved very generally using an RNN with parameters θ that outputs a probability vector ψ in autoregressive manner . Specifically , the ith output vector ψ ( i ) of the RNN defines the probability distribution for selecting the ith node value τi , conditioned on the previously selected node values τ1 : ( i−1 ) : p ( τi|τ1 : ( i−1 ) ; θ ) = ψ ( i ) L ( τi ) , where L ( τi ) is the index in L corresponding to node value τi . The likelihood of the sampled expression is computed using the chain rule of conditional probability : p ( τ |θ ) = |τ |∏ i=1 p ( τi|τ1 : ( i−1 ) ; θ ) = |τ |∏ i=1 ψ ( i ) L ( τi ) The sampling process is illustrated in Figure 1 and described in Algorithm 1 . Additional algorithmic details of the sampling process are described in Subroutines 1 and 2 in Appendix A . Starting at the root node , a node value is sampled according to ψ ( 1 ) . Subsequent node values are sampled autoregressively in a depth-first , left-to-right manner until the tree is complete ( i.e . all tree branches reach terminal nodes ) . The resulting sequence of node values is the tree ’ s pre-order traversal , which can be used to reconstruct the tree2 and its 1Given an expression tree ( or equivalently , its pre-order traversal ) , the corresponding mathematical expression is unique ; however , given an expression , its expression tree ( or its corresponding traversal ) is not unique . For example , x2 and x · x are equivalent expressions but yield different trees . For simplicity , we use τ somewhat abusively to refer to an expression where it technically refers to an expression tree ( or equivalently , its corresponding traversal ) . 2In general , a pre-order traversal is insufficient to uniquely reconstruct the tree . However , in this context , we know how many child nodes each node has based on its value , e.g . “ multiply ” is a binary operator and thus has two children . corresponding expression . Note that different samples of the distribution have different tree structures of different size . Thus , the search space is inherently both hierarchical and variable length . Providing hierarchical inputs to the RNN . Naively , the input to the RNN when sampling τi would be a representation ( i.e . embedding or one-hot encoding ) of the previously sampled token , τi−1 . Indeed , this is typical in related autoregressive models , e.g . when generating sentences ( Vaswani et al. , 2017 ) or for neural architecture search ( Zoph & Le , 2017 ) . However , the search space for symbolic regression is inherently hierarchical , and the previously sampled token may actually be very distant from the next token to be sampled in the expression tree . For example , the fifth and sixth tokens sampled in Figure 1 are adjacent nodes in the traversal but are four edges apart in the expression tree . To better capture hierarchical information , we provide as inputs to the RNN a representation of the parent and sibling node of the token being sampled . We introduce an empty token for cases in which a node does not have a parent or sibling . Pseudocode for identifying the parent and sibling nodes given a partial traversal is provided in Subroutine 2 in Appendix A. Constraining the search space . Under our framework , it is straightforward to apply a priori constraints to reduce the search space . To demonstrate , we impose several simple , domain-agnostic constraints : ( 1 ) Expressions are limited to a pre-specified minimum and maximum length . We selected minimum length of 2 to prevent trivial expressions and a maximum length of 30 to ensure expressions are tractable . ( 2 ) The children of an operator should not all be constants , as the result would simply be a different constant . ( 3 ) The child of a unary operator should not be the inverse of that operator , e.g . log ( exp ( x ) ) is not allowed . ( 4 ) Direct descendants of trigonometric operators should not be trigonometric operators , e.g . sin ( x + cos ( x ) ) is not allowed because cosine is a descendant of sine . While still semantically meaningful , such composed trigonometric operators do not appear in virtually any scientific domain . We apply these constraints in situ ( concurrently with autoregressive sampling ) by zeroing out the probabilities of selecting tokens that would violate a constraint . Pseudocode for this process is provided in Subroutine For domains without this property , the number of children can be sampled from an additional RNN output . A pre-order traversal plus the corresponding number of children for each node is sufficient to uniquely reconstruct the tree . Algorithm 1 : Sampling an expression from the RNN 1 function SampleExpression ( θ , L ) input : RNN with parameters θ ; library of tokens L output : Pre-order traversal τ of an expression sampled from the RNN 2 τ = [ ] // Empty list 3 x = empty‖empty // Initial RNN input is empty parent and sibling 4 h0 = ~0 // Initialize RNN cell state to zero 5 for i = 1 , . . . , T do 6 ( ψ ( i ) , hi ) = RNN ( x , hi−1 ; θ ) 7 ψ ( i ) ← ApplyConstraints ( ψ ( i ) , L , τ ) // Adjust probabilities 8 τi = Categorical ( ψ ( i ) ) // Sample next token 9 τ ← τ‖τi // Append token to traversal 10 if ExpressionComplete ( τ ) then 11 return τ 12 x← ParentSibling ( τ ) // Compute next parent and sibling 13 end 14 return τ Algorithm 2 : Deep symbolic regression 1 function DSR ( α , N , L , X , y ) input : learning rate α ; batch size N ; library of tokens L ; input dataset ( X , y ) output : Best fitting expression τ ? 2 Initialize RNN with parameters θ , defining distribution over expressions p ( ·|θ ) 3 τ ? = null 4 b = 0 5 repeat 6 T = { τ ( i ) ∼ p ( ·|θ ) } i=1 : N // Sample expressions ( Algorithm 1 ) 7 T ← { OptimizeConstants ( τ ( i ) , X , y ) } i=1 : N // Optimize constants 8 R = { R ( τ ( i ) ) − λCC ( τ ( i ) ) } i=1 : N // Compute rewards 9 ĝ = 1N ∑N i=1R ( τ ( i ) ) ∇θ log p ( τ ( i ) |θ ) // Compute policy gradient 10 θ ← θ + α ( ĝ1 + ĝ2 ) // Apply gradients 11 if maxR > R ( τ ? ) then τ ? ← τ ( argmaxR ) // Update best expression 12 return τ ? 1 in Appendix A . This process ensures that all samples adhere to all constraints , without rejecting samples post hoc . In contrast , imposing constraints in GP-based symbolic regression can be problematic ( Craenen et al. , 2001 ) . In practice , evolutionary operations that violate constraints are typically rejected post hoc ( Fortin et al. , 2012 ) .
This paper presents deep symbolic regression (DSR), which uses a recurrent neural network to learn a distribution over mathematical expressions and uses policy gradient to train the RNN for generating desired expressions given a set of points. The RNN model is used to sample expressions from the learned distribution, which are then instantiated into corresponding trees and evaluated on a dataset. The fitness on the dataset is used as the reward to train the RNN using policy gradient. In comparison to GP, the presented DSR approach recovers exact symbolic expressions in majority of the benchmarks.
SP:0d872fb4321f3a4a3fc61cf4d33b0c7e33f2d695
Deep symbolic regression
1 INTRODUCTION . Understanding the mathematical relationships among variables in a physical system is an integral component of the scientific process . Symbolic regression aims to identify these relationships by searching over the space of tractable mathematical expressions to best fit a dataset . Specifically , given a dataset of ( X , y ) pairs , where X ∈ Rn and y ∈ R , symbolic regression aims to identify a function f ( X ) : Rn → R that minimizes a distance metric D ( y , f ( X ) ) between real and predicted values . That is , symbolic regression seeks to find the optimal f ? = argminf D ( y , f ( X ) ) , where the functional form of f is a tractable expression . The resulting expression f ? may be readily interpretable and/or provide useful scientific insights simply by inspection . In contrast , conventional regression imposes a single model structure that is fixed during training , often chosen to be expressive ( e.g . a neural network ) at the expense of being easily interpretable . However , the space of mathematical expressions is discrete ( in model structure ) and continuous ( in model parameters ) , growing exponentially with the length of the expression , rendering symbolic regression an extremely challenging machine learning problem . Given the large and combinatorial search space , traditional approaches to symbolic regression typically utilize evolutionary algorithms , especially genetic programming ( GP ) ( Koza , 1992 ; Bäck et al. , 2018 ) . In GP-based symbolic regression , a population of mathematical expressions is “ evolved ” using evolutionary operations like selection , crossover , and mutation to improve a fitness function . While GP can be effective , it is also known to scale poorly to larger problems and to exhibit high sensitivity to hyperparameters . Deep learning has permeated almost all areas of artificial intelligence , from computer vision ( Krizhevsky et al. , 2012 ) to optimal control ( Mnih et al. , 2015 ) . However , deep learning may seem incongruous with or even antithetical toward symbolic regression , given that neural networks are typically highly complex , difficult to interpret , and rely on gradient information . We propose a framework that resolves this incongruity by tying deep learning and symbolic regression together with a simple idea : use a large model ( i.e . neural network ) to search the space of small models ( i.e . symbolic expressions ) . This framework leverages the representational capacity of neural networks while entirely bypassing the need to interpret a network . We present deep symbolic regression ( DSR ) , a gradient-based approach for symbolic regression based on reinforcement learning . In DSR , a recurrent neural network ( RNN ) emits a distribution over mathematical expressions . Expressions are sampled from the distribution , instantiated , and evaluated based on their fitness to the dataset . This fitness is used as the reward signal to train the RNN parameters using a policy gradient algorithm . As training proceeds , the RNN adjusts the likelihood of an expression relative to its reward , assigning higher probabilities to better fitting expressions . We demonstrate that DSR outperforms a standard GP implementation in its ability to recover exact symbolic expressions from data , both with and without added noise . We summarize our contributions as follows : 1 ) a novel method for solving symbolic regression that outperforms standard GP , 2 ) an autoregressive generative modeling framework for optimizing hierarchical , variable-length objects , 3 ) a framework that accommodates in situ constraints , and 4 ) a novel risk-seeking strategy that optimizes for best-case performance . 2 RELATED WORK . Symbolic regression . Symbolic regression has a long history of evolutionary strategies , especially GP ( Koza , 1992 ; Bäck et al. , 2018 ; Uy et al. , 2011 ) . Among non-evolutionary approaches , the recent AI Feynman algorithm ( Udrescu & Tegmark , 2019 ) is a multi-staged approach to symbolic regression leveraging the observation that physical equations often exhibit simplifying properties like multiplicative separability and translational symmetry . The algorithm identifies and exploits such properties to recursively define simplified sub-problems that can eventually be solved using simple techniques like a polynomial fit or small brute force search . Brunton et al . ( 2016 ) develop a sparse regression approach to recover nonlinear dynamics equations from data ; however , their search space is limited to linear combinations of a library of basis functions . AutoML . Our framework has many parallels to a body of works within automated machine learning ( AutoML ) that use an autoregressive RNN to define a distribution over discrete objects and use reinforcement learning to optimize this distribution under a black-box performance metric ( Zoph & Le , 2017 ; Ramachandran et al. , 2017 ; Bello et al. , 2017 ) . The key methodological difference to our framework is that these works optimize objects that are both sequential and fixed length . For example , in neural architecture search ( Zoph & Le , 2017 ) , an RNN searches the space of neural network architectures , which are encoded by a sequence of discrete “ tokens ” specifying architectural properties ( e.g . number of neurons ) of each layer . The length of the sequence is fixed or scheduled during training . In contrast , a major contribution of our framework is defining a search space that is both inherently hierarchical and variable length . The most similar AutoML work searches for neural network activation functions ( Ramachandran et al. , 2017 ) . While the space of activation functions is hierarchical in nature , the authors ( rightfully ) constrain this space substantially by positing a functional unit that is repeated sequentially , thus restricting their search space back to a fixed-length sequence . This constraint is well-justified for learning activation functions , which tend to exhibit similar hierarchical structures . However , a repeating-unit constraint is not practical for symbolic regression because the ground truth expression may have arbitrary structure . Autoregressive models . The RNN-based distribution over expressions used in DSR is autoregressive , meaning each token is conditioned on the previously sampled tokens . Autoregressive models have proven to be useful for audio and image data ( Oord et al. , 2016a ; b ) in addition to the AutoML works discussed above ; we further demonstrate their efficacy for hierarchical expressions . GraphRNN defines a distribution over graphs that generates an adjacency matrix one column at a time in autoregressive fashion ( You et al. , 2018 ) . In principle , we could have constrained GraphRNN to define the distribution over expressions , since trees are a special case of graphs . However , GraphRNN constructs graphs breadth-first , whereas expressions are more naturally represented using depth-first traversals ( Li et al. , 2005 ) . Further , DSR exploits the hierarchical nature of trees by providing the parent and sibling as inputs to the RNN , and leverages the additional structure of expression trees that a node ’ s value determines its number of children ( e.g . cosine is a unary node ) . 3 METHODS . Our overall approach involves representing mathematical expressions by the pre-order traversals of their corresponding symbolic expression trees , developing an autoregressive model to generate expression trees under a pre-specified set of constraints , and using reinforcement learning to train the model to generate better-fitting expressions . 3.1 GENERATING EXPRESSIONS WITH A RECURRENT NEURAL NETWORK . We leverage the fact that algebraic expressions can be represented using symbolic expression trees , a type of binary tree in which nodes map to mathematical operators , input variables , or constants . Operators are internal nodes and may be unary ( e.g . sine ) or binary ( e.g . multiply ) . Input variables and constants are terminal nodes . We encode an expression τ by the pre-order traversal ( i.e . depth-first , then left-to-right ) of its corresponding expression tree.1 We denote the ith node in the traversal as τi and the length of the traversal as |τ | = T . Each node has a value within a given library L of possible node values or “ tokens , ” e.g . { + , − , × , ÷ , sin , cos , x } . Expressions are generated one node at a time along the pre-order traversal ( from τ1 to τT ) . For each node , a categorical distribution with parameters ψ defines the probabilities of selecting each node value from L. To capture the “ context ” of the expression as it is being generated , we condition this probability upon the selections of all previous nodes in that traversal . This conditional dependence can be achieved very generally using an RNN with parameters θ that outputs a probability vector ψ in autoregressive manner . Specifically , the ith output vector ψ ( i ) of the RNN defines the probability distribution for selecting the ith node value τi , conditioned on the previously selected node values τ1 : ( i−1 ) : p ( τi|τ1 : ( i−1 ) ; θ ) = ψ ( i ) L ( τi ) , where L ( τi ) is the index in L corresponding to node value τi . The likelihood of the sampled expression is computed using the chain rule of conditional probability : p ( τ |θ ) = |τ |∏ i=1 p ( τi|τ1 : ( i−1 ) ; θ ) = |τ |∏ i=1 ψ ( i ) L ( τi ) The sampling process is illustrated in Figure 1 and described in Algorithm 1 . Additional algorithmic details of the sampling process are described in Subroutines 1 and 2 in Appendix A . Starting at the root node , a node value is sampled according to ψ ( 1 ) . Subsequent node values are sampled autoregressively in a depth-first , left-to-right manner until the tree is complete ( i.e . all tree branches reach terminal nodes ) . The resulting sequence of node values is the tree ’ s pre-order traversal , which can be used to reconstruct the tree2 and its 1Given an expression tree ( or equivalently , its pre-order traversal ) , the corresponding mathematical expression is unique ; however , given an expression , its expression tree ( or its corresponding traversal ) is not unique . For example , x2 and x · x are equivalent expressions but yield different trees . For simplicity , we use τ somewhat abusively to refer to an expression where it technically refers to an expression tree ( or equivalently , its corresponding traversal ) . 2In general , a pre-order traversal is insufficient to uniquely reconstruct the tree . However , in this context , we know how many child nodes each node has based on its value , e.g . “ multiply ” is a binary operator and thus has two children . corresponding expression . Note that different samples of the distribution have different tree structures of different size . Thus , the search space is inherently both hierarchical and variable length . Providing hierarchical inputs to the RNN . Naively , the input to the RNN when sampling τi would be a representation ( i.e . embedding or one-hot encoding ) of the previously sampled token , τi−1 . Indeed , this is typical in related autoregressive models , e.g . when generating sentences ( Vaswani et al. , 2017 ) or for neural architecture search ( Zoph & Le , 2017 ) . However , the search space for symbolic regression is inherently hierarchical , and the previously sampled token may actually be very distant from the next token to be sampled in the expression tree . For example , the fifth and sixth tokens sampled in Figure 1 are adjacent nodes in the traversal but are four edges apart in the expression tree . To better capture hierarchical information , we provide as inputs to the RNN a representation of the parent and sibling node of the token being sampled . We introduce an empty token for cases in which a node does not have a parent or sibling . Pseudocode for identifying the parent and sibling nodes given a partial traversal is provided in Subroutine 2 in Appendix A. Constraining the search space . Under our framework , it is straightforward to apply a priori constraints to reduce the search space . To demonstrate , we impose several simple , domain-agnostic constraints : ( 1 ) Expressions are limited to a pre-specified minimum and maximum length . We selected minimum length of 2 to prevent trivial expressions and a maximum length of 30 to ensure expressions are tractable . ( 2 ) The children of an operator should not all be constants , as the result would simply be a different constant . ( 3 ) The child of a unary operator should not be the inverse of that operator , e.g . log ( exp ( x ) ) is not allowed . ( 4 ) Direct descendants of trigonometric operators should not be trigonometric operators , e.g . sin ( x + cos ( x ) ) is not allowed because cosine is a descendant of sine . While still semantically meaningful , such composed trigonometric operators do not appear in virtually any scientific domain . We apply these constraints in situ ( concurrently with autoregressive sampling ) by zeroing out the probabilities of selecting tokens that would violate a constraint . Pseudocode for this process is provided in Subroutine For domains without this property , the number of children can be sampled from an additional RNN output . A pre-order traversal plus the corresponding number of children for each node is sufficient to uniquely reconstruct the tree . Algorithm 1 : Sampling an expression from the RNN 1 function SampleExpression ( θ , L ) input : RNN with parameters θ ; library of tokens L output : Pre-order traversal τ of an expression sampled from the RNN 2 τ = [ ] // Empty list 3 x = empty‖empty // Initial RNN input is empty parent and sibling 4 h0 = ~0 // Initialize RNN cell state to zero 5 for i = 1 , . . . , T do 6 ( ψ ( i ) , hi ) = RNN ( x , hi−1 ; θ ) 7 ψ ( i ) ← ApplyConstraints ( ψ ( i ) , L , τ ) // Adjust probabilities 8 τi = Categorical ( ψ ( i ) ) // Sample next token 9 τ ← τ‖τi // Append token to traversal 10 if ExpressionComplete ( τ ) then 11 return τ 12 x← ParentSibling ( τ ) // Compute next parent and sibling 13 end 14 return τ Algorithm 2 : Deep symbolic regression 1 function DSR ( α , N , L , X , y ) input : learning rate α ; batch size N ; library of tokens L ; input dataset ( X , y ) output : Best fitting expression τ ? 2 Initialize RNN with parameters θ , defining distribution over expressions p ( ·|θ ) 3 τ ? = null 4 b = 0 5 repeat 6 T = { τ ( i ) ∼ p ( ·|θ ) } i=1 : N // Sample expressions ( Algorithm 1 ) 7 T ← { OptimizeConstants ( τ ( i ) , X , y ) } i=1 : N // Optimize constants 8 R = { R ( τ ( i ) ) − λCC ( τ ( i ) ) } i=1 : N // Compute rewards 9 ĝ = 1N ∑N i=1R ( τ ( i ) ) ∇θ log p ( τ ( i ) |θ ) // Compute policy gradient 10 θ ← θ + α ( ĝ1 + ĝ2 ) // Apply gradients 11 if maxR > R ( τ ? ) then τ ? ← τ ( argmaxR ) // Update best expression 12 return τ ? 1 in Appendix A . This process ensures that all samples adhere to all constraints , without rejecting samples post hoc . In contrast , imposing constraints in GP-based symbolic regression can be problematic ( Craenen et al. , 2001 ) . In practice , evolutionary operations that violate constraints are typically rejected post hoc ( Fortin et al. , 2012 ) .
This paper presents a RNN-RL based method for the symbolic regression problem. The problem is new (to Deep RL) and interesting. My main concern is about the proposed method, where the three RL related equations (not numbered) at page 5 are also direct copy-from-textbook policy gradient equations without specific adaptation to the new application considered in this paper, which is very strange. The two conditional probability definitions considered at page 3 are not mentioned in later text. These are only fractions of the underlying method and by reading the paper back and forth several times, it is not clear of the basic algorithmic flowchart, let alone more detailed description of the related parameters. Without these information, it is impossible to have a fair judge of the novelty and feasibility of the proposed method. The empirical results are also limited in small dataset, which makes it hard to verify the generality of the superior claim.
SP:0d872fb4321f3a4a3fc61cf4d33b0c7e33f2d695
A NEW POINTWISE CONVOLUTION IN DEEP NEURAL NETWORKS THROUGH EXTREMELY FAST AND NON PARAMETRIC TRANSFORMS
1 INTRODUCTION . Large Convolutional Neural Networks ( CNNs ) ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2014 ; He et al. , 2016 ; Szegedy et al. , 2016b ; a ) and automatic Neural Architecture Search ( NAS ) based networks ( Zoph et al. , 2018 ; Liu et al. , 2018 ; Real et al. , 2018 ) have evolved to show remarkable accuracy on various tasks such as image classification ( Deng et al. , 2009 ; Krizhevsky & Hinton , 2009 ) , object detection ( Lin et al. , 2014 ) , benefited from huge amount of learnable parameters and computations . However , these large number of weights and high computational cost enabled only limited applications for mobile devices that require the constraint on memory space being low as well as for devices that require real-time computations ( Canziani et al. , 2016 ) . With regard to solving these problems , Howard et al . ( 2017 ) ; Sandler et al . ( 2018 ) ; Zhang et al . ( 2017b ) ; Ma et al . ( 2018 ) proposed parameter and computation efficient blocks while maintaining almost same accuracy compared to other heavy CNN models . All of these blocks utilized depthwise separable convolution , which deconstructed the standard convolution with the ( 3 × 3 × C ) size for each kernel into spatial information specific depthwise convolution ( 3 × 3 × 1 ) and channel information specific pointwise ( 1 × 1 × C ) convolution . The depthwise separable convolution achieved comparable accuracy compared to standard spatial convolution with hugely reduced parameters and FLOPs . These reduced resource requirements made the depthwise separable convolution as well as pointwise convolution ( PC ) more widely used in modern CNN architectures . Nevertheless , we point out that the existing PC layer is still computationally expensive and occupies a lot of proportion in the number of weight parameters ( Howard et al. , 2017 ) . Although the demand toward PC layer has been and will be growing exponentially in modern neural network architectures , there has been a little research on improving the naive structure of itself . Therefore , this paper proposes a new PC layer formulated by non-parametric and extremely fast conventional transforms . Conventional transforms that we applied on CNN models are Discrete Walsh-Hadamard Transform ( DWHT ) and Discrete Cosine Transform ( DCT ) , which have widely been used in image processing but rarely been applied in CNNs ( Ghosh & Chellappa , 2016 ) . We empirically found that although both of these transforms do not require any learnable parameters at all , they show the sufficient ability to capture the cross-channel correlations . This non-parametric property enables our proposed CNN models to be significantly compressed in terms of the number of parameters , leading to get the advantages ( i.e . efficient distributed training , less communication between server and clients ) referred by Iandola et al . ( 2016 ) . We note that especially DWHT is considered to be a good replacement of the conventional PC layer , as it requires no floating point multiplications but only additions and subtractions by which the computation overheads of PC layers can significantly be reduced . Furthermore , DWHT can take a strong advantage of its fast version where the computation complexity of the floating point operations is reduced from O ( n2 ) to O ( n log n ) . These non-parametric and low computational properties construct extremely efficient neural network from the perspective of parameter and computation as well as enjoying accuracy gain . Our contributions are summarized as follows : • We propose a new PC layer formulated with conventional transforms which do not require any learnable parameters as well as significantly reducing the number of floating point operations compared to the existing PC layer . • The great benefits of using the bases of existing transforms come from their fast versions , which drastically decrease computation complexity in neural networks without degrading accuracy . • We found that applying ReLU after conventional transforms discards important information extracted , leading to significant drop in accuracy . Based on this finding , we propose the optimal computation block for conventional transforms . • We also found that the conventional transforms can effectively be used especially for extracting high-level features in neural networks . Based on this , we propose a new transformbased neural network architecture . Specifically , using DWHT , our proposed method yields 1.49 % accuracy gain as well as 79.4 % and 49.4 % reduced parameters and FLOPs , respectively , compared with its baseline model ( MobileNet-V1 ) on CIFAR 100 dataset . 2 RELATED WORK . 2.1 DECONSTRUCTION AND DECOMPOSITION OF CONVOLUTIONS . For reducing computation complexity of the existing convolution methods , several approaches of rethinking and deconstructing the naive convolution structures have been proposed . Simonyan & Zisserman ( 2014 ) factorized a large sized kernel ( e.g . 5 × 5 ) in a convolution layer into several small size kernels ( e.g . 3 × 3 ) with several convolution layers . Jeon & Kim ( 2017 ) pointed out the limitation of existing convolution that it has the fixed receptive field . Consequently , they introduced learnable spatial displacement parameters , showing flexibility of convolution . Based on Jeon & Kim ( 2017 ) , Jeon & Kim ( 2018 ) proved that the standard convolution can be deconstructed as a single PC layer with the spatially shifted channels . Based on that , they proposed a very efficient convolution layer , namely active shift layer , by replacing spatial convolutions with shift operations . It is worth noting that the existing PC layer takes the huge proportion of computation and the number of weight parameters in modern lightweight CNN models ( Howard et al. , 2017 ; Sandler et al. , 2018 ; Ma et al. , 2018 ) . Specifically , MobileNet-V1 ( Howard et al. , 2017 ) requires 94 % , 74 % of the overall computational cost and the overall number of weight parameters for the existing PC layer , respectively . Therefore , there were attempts to reduce computation complexity of PC layer . Zhang et al . ( 2017b ) proposed ShuffleNet-V1 where the features are decomposed into several groups over channels and PC operation was conducted for each group , thus reducing the number of weight parameters and FLOPs by the number of groups G. However , it was proved in Ma et al . ( 2018 ) that the memory access cost increases as G increases , leading to slower inference speed . Similarly to the aforementioned methods , our work is to reduce computation complexity and the number of weight parameters in a convolution layer . However , our objective is more oriented on finding out mathe- matically efficient algorithms that make the weights in convolution kernels more effective in feature representation as well as efficient in terms of computation . 2.2 QUANTIZATION . Quantization in neural networks reduced the number of bits utilized to represent the weights and/or activations . Vanhoucke et al . ( 2011 ) applied 8-bit quantization on weight parameters , which enabled considerable speed-up with small drop of accuracy . Gupta et al . ( 2015 ) applied 16-bit fixed point representation with stochastic rounding . Based on Han et al . ( 2015b ) which pruned the unimportant weight connections through thresholding the values of weight , Han et al . ( 2015a ) successfully combined the pruning with 8 bits or less quantization and huffman encoding . The extreme case of quantized networks was evolved from Courbariaux et al . ( 2015 ) , which approximated weights with the binary ( +1 , −1 ) values . From the milestone of Courbariaux et al . ( 2015 ) , Courbariaux & Bengio ( 2016 ) ; Hubara et al . ( 2016 ) constructed Binarized Neural Networks which either stochastically or deterministically binarize the real value weights and activations . These binarized weights and activations lead to significantly reduced run-time by replacing floating point multiplications with 1-bit XNOR operations . Based on Binarized Neural Networks ( Courbariaux & Bengio , 2016 ; Hubara et al. , 2016 ) , Local Binary CNN ( Juefei-Xu et al. , 2016 ) proposed a convolution module that utilizes binarized nonlearnable weights in spatial convolution based on Local Binary Patterns , thus replacing multiplications with addition/subtraction operations in spatial convolution . However , they did not consider reducing computation complexity in PC layer and remained the weights of PC layer learnable floating point variables . Our work shares the similarity to Local Binary CNN ( Juefei-Xu et al. , 2016 ) in using binary fixed weight values . However , Local Binary Patterns have some limitations for being applied in CNN since they can only be used in spatial convolution as well as there are no approaches that enable fast computation of them . 2.3 CONVENTIONAL TRANSFORMS . In general , several transform techniques have been applied for image processing . Discrete Cosine Transform ( DCT ) has been used as a powerful feature extractor ( Dabbaghchian et al. , 2010 ) . For N -point input sequence , the basis kernel of DCT is defined as a list of cosine values as below : Cm = [ cos ( ( 2x+ 1 ) mπ 2N ) ] , 0 ≤ x ≤ N − 1 ( 1 ) where m is the index of a basis and captures higher frequency information in the input signal as m increases . This property led DCT to be widely applied in image/video compression techniques that emphasize the powers of image signals in low frequency regions ( Rao & Yip , 2014 ) . Discrete Walsh Hadamard Transform ( DWHT ) is a very fast and efficient transform by using only +1 and −1 elements in kernels . These binary elements in kernels allow DWHT to perform without any multiplication operations but addition/subtraction operations . Therefore , DWHT has been widely used for fast feature extraction in many practical applications , such as texture image segmentation ( Vard et al. , 2011 ) , face recognition ( Hassan et al. , 2007 ) , and video shot boundary detection ( G. & S. , 2014 ) . Further , DWHT can take advantage of a structured-wiring-based fast algorithm ( Algorithm 1 ) as well as allowing very high efficiency in encoding the spatial information ( Pratt et al. , 1969 ) . The basis kernel matrix of DWHT is defined using the previous kernel matrix as below : HD = ( HD−1 HD−1 HD−1 −HD−1 ) , ( 2 ) where H0 = 1 and D ≥ 1 . In this paper we denote HDm as the m-th row vector of HD in Eq . 2 . Additionally , we adopt fast DWHT algorithm to reduce computation complexity of PC layer in neural networks , resulting in an extremely fast and efficient neural network . 3 METHOD . We propose a new PC layer which is computed with conventional transforms . The conventional PC layer can be formulated as follows : Zijm =W > m ·Xij , 1 ≤ m ≤M ( 3 ) where ( i , j ) is a spatial index , and m is output channel index . In Eq . 3 , N and M are the number of input and output channels , respectively . Xij ∈ RN is a vector of input X at the spatial index ( i , j ) , Wm ∈ RN is a vector of m-th weight W in Eq . 3 . For simplicity , the stride is set as 1 and the bias is omitted in Eq . 3 . Our proposed method is to replace the learnable parameters Wm with the bases in the conventional transforms . For example , replacing Wm with HDm in Eq . 3 , we now can formulate the new multiplication-free PC layer using DWHT . Similarly , the DCT basis kernels Cm in Eq . 1 can substitute for Wm in Eq . 3 , formulating another new PC layer using DCT . Note that the normalization factors in the conventional transforms are not applied in the proposed PC layer , because Batch Normalization ( Ioffe & Szegedy , 2015 ) performs a normalization and a linear transform which can be viewed as a normalization in the existing transforms . The most important benefit of the proposed method comes from the fact that the fast algorithms of the existing transforms can be applied in the proposed PC layers for further reduction of computation . Directly applying above new PC layer gives computational complexity of O ( N2 ) . Adopting the fast algorithms , we can significantly reduce the computation complexity of PC layer fromO ( N2 ) to O ( NlogN ) without any change of the computation results . We demonstrate the pseudo-code of our proposed fast PC layer using DWHT in Algorithm 1 based on the Fast DWHT structure shown in Figure 1a . In Algorithm 1 , for logN iterations , the evenindexed channels and odd-indexed channels are added and subtracted in element-wise manner , respectively . The resulting elements which were added and subtracted are placed in the first N/2 elements and the last N/2 elements of the input of next iteration , respectively . In this computation process , each iteration requires only N operations of additions or subtractions . Consequently , Algorithm 1 gives us complexity of O ( NlogN ) in addition or subtraction . Compared to the existing PC layer that requires complexity of O ( N2 ) in multiplication , our method is extremely cheaper than the conventional PC layer in terms of computation costs as shown in Figure 1b and in power consumption of computing devices ( Horowitz , 2014 ) . Note that , similarly to fast DWHT , DCT can also be computed in a fast manner that recursively decomposes the N -point input sequence into two subproblems of N/2-point DCT ( Kok , 1997 ) . Compared to DWHT , DCT takes advantage of using more natural shapes of cosine basis kernels , which tend to provide better feature extraction performance through capturing the frequency information . However , DCT inevitably needs multiplications for inner product betweenC andX vectors , and a look up table ( LUT ) for computing cosine kernel bases which can increase the processing time and memory access . On the other hand , as mentioned , the kernels of DWHT consist only of +1 , −1 which allows for building a multiplication-free module . Furthermore , any memory access towards kernel bases is not needed if our structured-wiring-based fast DWHT algorithm ( Algorithm 1 ) is applied . Our comprehensive experiments in Section 3.1 and 3.2 show that DWHT is more efficient than DCT in being applied in PC layer in terms of trade-off between the complexity of computation cost and accuracy . Note that , for securing more general formulation of our newly defined PC layer , we padded zeros along the channel axis if the number of input channels is less than that of output channels while truncating the output channels when the number of output channels shrink compared to that of input channels as shown in Algorithm 1 . Figure 1a shows the architecture of fast DWHT algorithm described in Algorithm 1 . This structuredwiring-based architecture ensures that the receptive field of each output channels isN , which means each output channel is fully reflected against all input channels through log2N iterations . This fullyreflected property helps to capture the input channel correlations in spite of the computation process of what channel elements will be added and subtracted being structured in a deterministic manner . For successfully fusing our new PC layer into neural networks , we explored two themes : i ) an optimal block search for the proposed PC ; ii ) an optimal insertion strategy of the proposed block found by i ) , in a hierarchical manner on the blocks of networks . We assumed that there are an optimal block unit structure and an optimal hierarchy level ( high- , middle- , low-level ) blocks in the neural networks favored by these non-learnable transforms . Therefore , we conducted the experiments for the two aforementioned themes accordingly . We evaluated the effectiveness for each of our networks by accuracy fluctuation as the number of learnable weight parameters or FLOPs changes . For comparison , we counted total FLOPs with summation of the number of multiplications , additions and subtractions performed during the inference . Unless mentioned , we followed the default experimental setting as 128 batch size , 200 training epochs , 0.1 initial learning rate where 0.94 is multiplied per 2 epochs , and 0.9 momentum with 5e-4 weight decay value . In all the experiments , the model accuracy was obtained by taking an average of Top-1 accuracy values from three independent training results . Algorithm 1 A new pointwise convolution using fast DWHT algorithm Input : 4D input features X ( B ×N ×H ×W ) , output channel M 1 : n← log2N 2 : if N < M then 3 : ZeroPad1D ( X , axis=1 ) . pad zeros along the channel axis 4 : end if 5 : for i← 1 to n do 6 : e← X [ : , : : 2 , : , : ] 7 : o← X [ : , 1 : : 2 , : , : ] 8 : X [ : , : N/2 , : , : ] ← e+ o 9 : X [ : , N/2 : , : , : ] ← e− o 10 : end for 11 : if N > M then 12 : X ← X [ : , : M , : , : ] 13 : end if
This paper proposes a new pointwise convolution layer, which is non-parametric and can be efficient thanks to the fast conventional transforms. Specifically, it could use either DCT or DHWT to do the transforming job and explores the optimal block structure to use this new kind of PC layer. Extensive experimental studies are provided to verify the new PC layer and experimental results show that the new layer could reduce the parameters and FLOPs while not loosing accuracy.
SP:4706017e6f8b958c7d0825fed98b285ea2994b59
A NEW POINTWISE CONVOLUTION IN DEEP NEURAL NETWORKS THROUGH EXTREMELY FAST AND NON PARAMETRIC TRANSFORMS
1 INTRODUCTION . Large Convolutional Neural Networks ( CNNs ) ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2014 ; He et al. , 2016 ; Szegedy et al. , 2016b ; a ) and automatic Neural Architecture Search ( NAS ) based networks ( Zoph et al. , 2018 ; Liu et al. , 2018 ; Real et al. , 2018 ) have evolved to show remarkable accuracy on various tasks such as image classification ( Deng et al. , 2009 ; Krizhevsky & Hinton , 2009 ) , object detection ( Lin et al. , 2014 ) , benefited from huge amount of learnable parameters and computations . However , these large number of weights and high computational cost enabled only limited applications for mobile devices that require the constraint on memory space being low as well as for devices that require real-time computations ( Canziani et al. , 2016 ) . With regard to solving these problems , Howard et al . ( 2017 ) ; Sandler et al . ( 2018 ) ; Zhang et al . ( 2017b ) ; Ma et al . ( 2018 ) proposed parameter and computation efficient blocks while maintaining almost same accuracy compared to other heavy CNN models . All of these blocks utilized depthwise separable convolution , which deconstructed the standard convolution with the ( 3 × 3 × C ) size for each kernel into spatial information specific depthwise convolution ( 3 × 3 × 1 ) and channel information specific pointwise ( 1 × 1 × C ) convolution . The depthwise separable convolution achieved comparable accuracy compared to standard spatial convolution with hugely reduced parameters and FLOPs . These reduced resource requirements made the depthwise separable convolution as well as pointwise convolution ( PC ) more widely used in modern CNN architectures . Nevertheless , we point out that the existing PC layer is still computationally expensive and occupies a lot of proportion in the number of weight parameters ( Howard et al. , 2017 ) . Although the demand toward PC layer has been and will be growing exponentially in modern neural network architectures , there has been a little research on improving the naive structure of itself . Therefore , this paper proposes a new PC layer formulated by non-parametric and extremely fast conventional transforms . Conventional transforms that we applied on CNN models are Discrete Walsh-Hadamard Transform ( DWHT ) and Discrete Cosine Transform ( DCT ) , which have widely been used in image processing but rarely been applied in CNNs ( Ghosh & Chellappa , 2016 ) . We empirically found that although both of these transforms do not require any learnable parameters at all , they show the sufficient ability to capture the cross-channel correlations . This non-parametric property enables our proposed CNN models to be significantly compressed in terms of the number of parameters , leading to get the advantages ( i.e . efficient distributed training , less communication between server and clients ) referred by Iandola et al . ( 2016 ) . We note that especially DWHT is considered to be a good replacement of the conventional PC layer , as it requires no floating point multiplications but only additions and subtractions by which the computation overheads of PC layers can significantly be reduced . Furthermore , DWHT can take a strong advantage of its fast version where the computation complexity of the floating point operations is reduced from O ( n2 ) to O ( n log n ) . These non-parametric and low computational properties construct extremely efficient neural network from the perspective of parameter and computation as well as enjoying accuracy gain . Our contributions are summarized as follows : • We propose a new PC layer formulated with conventional transforms which do not require any learnable parameters as well as significantly reducing the number of floating point operations compared to the existing PC layer . • The great benefits of using the bases of existing transforms come from their fast versions , which drastically decrease computation complexity in neural networks without degrading accuracy . • We found that applying ReLU after conventional transforms discards important information extracted , leading to significant drop in accuracy . Based on this finding , we propose the optimal computation block for conventional transforms . • We also found that the conventional transforms can effectively be used especially for extracting high-level features in neural networks . Based on this , we propose a new transformbased neural network architecture . Specifically , using DWHT , our proposed method yields 1.49 % accuracy gain as well as 79.4 % and 49.4 % reduced parameters and FLOPs , respectively , compared with its baseline model ( MobileNet-V1 ) on CIFAR 100 dataset . 2 RELATED WORK . 2.1 DECONSTRUCTION AND DECOMPOSITION OF CONVOLUTIONS . For reducing computation complexity of the existing convolution methods , several approaches of rethinking and deconstructing the naive convolution structures have been proposed . Simonyan & Zisserman ( 2014 ) factorized a large sized kernel ( e.g . 5 × 5 ) in a convolution layer into several small size kernels ( e.g . 3 × 3 ) with several convolution layers . Jeon & Kim ( 2017 ) pointed out the limitation of existing convolution that it has the fixed receptive field . Consequently , they introduced learnable spatial displacement parameters , showing flexibility of convolution . Based on Jeon & Kim ( 2017 ) , Jeon & Kim ( 2018 ) proved that the standard convolution can be deconstructed as a single PC layer with the spatially shifted channels . Based on that , they proposed a very efficient convolution layer , namely active shift layer , by replacing spatial convolutions with shift operations . It is worth noting that the existing PC layer takes the huge proportion of computation and the number of weight parameters in modern lightweight CNN models ( Howard et al. , 2017 ; Sandler et al. , 2018 ; Ma et al. , 2018 ) . Specifically , MobileNet-V1 ( Howard et al. , 2017 ) requires 94 % , 74 % of the overall computational cost and the overall number of weight parameters for the existing PC layer , respectively . Therefore , there were attempts to reduce computation complexity of PC layer . Zhang et al . ( 2017b ) proposed ShuffleNet-V1 where the features are decomposed into several groups over channels and PC operation was conducted for each group , thus reducing the number of weight parameters and FLOPs by the number of groups G. However , it was proved in Ma et al . ( 2018 ) that the memory access cost increases as G increases , leading to slower inference speed . Similarly to the aforementioned methods , our work is to reduce computation complexity and the number of weight parameters in a convolution layer . However , our objective is more oriented on finding out mathe- matically efficient algorithms that make the weights in convolution kernels more effective in feature representation as well as efficient in terms of computation . 2.2 QUANTIZATION . Quantization in neural networks reduced the number of bits utilized to represent the weights and/or activations . Vanhoucke et al . ( 2011 ) applied 8-bit quantization on weight parameters , which enabled considerable speed-up with small drop of accuracy . Gupta et al . ( 2015 ) applied 16-bit fixed point representation with stochastic rounding . Based on Han et al . ( 2015b ) which pruned the unimportant weight connections through thresholding the values of weight , Han et al . ( 2015a ) successfully combined the pruning with 8 bits or less quantization and huffman encoding . The extreme case of quantized networks was evolved from Courbariaux et al . ( 2015 ) , which approximated weights with the binary ( +1 , −1 ) values . From the milestone of Courbariaux et al . ( 2015 ) , Courbariaux & Bengio ( 2016 ) ; Hubara et al . ( 2016 ) constructed Binarized Neural Networks which either stochastically or deterministically binarize the real value weights and activations . These binarized weights and activations lead to significantly reduced run-time by replacing floating point multiplications with 1-bit XNOR operations . Based on Binarized Neural Networks ( Courbariaux & Bengio , 2016 ; Hubara et al. , 2016 ) , Local Binary CNN ( Juefei-Xu et al. , 2016 ) proposed a convolution module that utilizes binarized nonlearnable weights in spatial convolution based on Local Binary Patterns , thus replacing multiplications with addition/subtraction operations in spatial convolution . However , they did not consider reducing computation complexity in PC layer and remained the weights of PC layer learnable floating point variables . Our work shares the similarity to Local Binary CNN ( Juefei-Xu et al. , 2016 ) in using binary fixed weight values . However , Local Binary Patterns have some limitations for being applied in CNN since they can only be used in spatial convolution as well as there are no approaches that enable fast computation of them . 2.3 CONVENTIONAL TRANSFORMS . In general , several transform techniques have been applied for image processing . Discrete Cosine Transform ( DCT ) has been used as a powerful feature extractor ( Dabbaghchian et al. , 2010 ) . For N -point input sequence , the basis kernel of DCT is defined as a list of cosine values as below : Cm = [ cos ( ( 2x+ 1 ) mπ 2N ) ] , 0 ≤ x ≤ N − 1 ( 1 ) where m is the index of a basis and captures higher frequency information in the input signal as m increases . This property led DCT to be widely applied in image/video compression techniques that emphasize the powers of image signals in low frequency regions ( Rao & Yip , 2014 ) . Discrete Walsh Hadamard Transform ( DWHT ) is a very fast and efficient transform by using only +1 and −1 elements in kernels . These binary elements in kernels allow DWHT to perform without any multiplication operations but addition/subtraction operations . Therefore , DWHT has been widely used for fast feature extraction in many practical applications , such as texture image segmentation ( Vard et al. , 2011 ) , face recognition ( Hassan et al. , 2007 ) , and video shot boundary detection ( G. & S. , 2014 ) . Further , DWHT can take advantage of a structured-wiring-based fast algorithm ( Algorithm 1 ) as well as allowing very high efficiency in encoding the spatial information ( Pratt et al. , 1969 ) . The basis kernel matrix of DWHT is defined using the previous kernel matrix as below : HD = ( HD−1 HD−1 HD−1 −HD−1 ) , ( 2 ) where H0 = 1 and D ≥ 1 . In this paper we denote HDm as the m-th row vector of HD in Eq . 2 . Additionally , we adopt fast DWHT algorithm to reduce computation complexity of PC layer in neural networks , resulting in an extremely fast and efficient neural network . 3 METHOD . We propose a new PC layer which is computed with conventional transforms . The conventional PC layer can be formulated as follows : Zijm =W > m ·Xij , 1 ≤ m ≤M ( 3 ) where ( i , j ) is a spatial index , and m is output channel index . In Eq . 3 , N and M are the number of input and output channels , respectively . Xij ∈ RN is a vector of input X at the spatial index ( i , j ) , Wm ∈ RN is a vector of m-th weight W in Eq . 3 . For simplicity , the stride is set as 1 and the bias is omitted in Eq . 3 . Our proposed method is to replace the learnable parameters Wm with the bases in the conventional transforms . For example , replacing Wm with HDm in Eq . 3 , we now can formulate the new multiplication-free PC layer using DWHT . Similarly , the DCT basis kernels Cm in Eq . 1 can substitute for Wm in Eq . 3 , formulating another new PC layer using DCT . Note that the normalization factors in the conventional transforms are not applied in the proposed PC layer , because Batch Normalization ( Ioffe & Szegedy , 2015 ) performs a normalization and a linear transform which can be viewed as a normalization in the existing transforms . The most important benefit of the proposed method comes from the fact that the fast algorithms of the existing transforms can be applied in the proposed PC layers for further reduction of computation . Directly applying above new PC layer gives computational complexity of O ( N2 ) . Adopting the fast algorithms , we can significantly reduce the computation complexity of PC layer fromO ( N2 ) to O ( NlogN ) without any change of the computation results . We demonstrate the pseudo-code of our proposed fast PC layer using DWHT in Algorithm 1 based on the Fast DWHT structure shown in Figure 1a . In Algorithm 1 , for logN iterations , the evenindexed channels and odd-indexed channels are added and subtracted in element-wise manner , respectively . The resulting elements which were added and subtracted are placed in the first N/2 elements and the last N/2 elements of the input of next iteration , respectively . In this computation process , each iteration requires only N operations of additions or subtractions . Consequently , Algorithm 1 gives us complexity of O ( NlogN ) in addition or subtraction . Compared to the existing PC layer that requires complexity of O ( N2 ) in multiplication , our method is extremely cheaper than the conventional PC layer in terms of computation costs as shown in Figure 1b and in power consumption of computing devices ( Horowitz , 2014 ) . Note that , similarly to fast DWHT , DCT can also be computed in a fast manner that recursively decomposes the N -point input sequence into two subproblems of N/2-point DCT ( Kok , 1997 ) . Compared to DWHT , DCT takes advantage of using more natural shapes of cosine basis kernels , which tend to provide better feature extraction performance through capturing the frequency information . However , DCT inevitably needs multiplications for inner product betweenC andX vectors , and a look up table ( LUT ) for computing cosine kernel bases which can increase the processing time and memory access . On the other hand , as mentioned , the kernels of DWHT consist only of +1 , −1 which allows for building a multiplication-free module . Furthermore , any memory access towards kernel bases is not needed if our structured-wiring-based fast DWHT algorithm ( Algorithm 1 ) is applied . Our comprehensive experiments in Section 3.1 and 3.2 show that DWHT is more efficient than DCT in being applied in PC layer in terms of trade-off between the complexity of computation cost and accuracy . Note that , for securing more general formulation of our newly defined PC layer , we padded zeros along the channel axis if the number of input channels is less than that of output channels while truncating the output channels when the number of output channels shrink compared to that of input channels as shown in Algorithm 1 . Figure 1a shows the architecture of fast DWHT algorithm described in Algorithm 1 . This structuredwiring-based architecture ensures that the receptive field of each output channels isN , which means each output channel is fully reflected against all input channels through log2N iterations . This fullyreflected property helps to capture the input channel correlations in spite of the computation process of what channel elements will be added and subtracted being structured in a deterministic manner . For successfully fusing our new PC layer into neural networks , we explored two themes : i ) an optimal block search for the proposed PC ; ii ) an optimal insertion strategy of the proposed block found by i ) , in a hierarchical manner on the blocks of networks . We assumed that there are an optimal block unit structure and an optimal hierarchy level ( high- , middle- , low-level ) blocks in the neural networks favored by these non-learnable transforms . Therefore , we conducted the experiments for the two aforementioned themes accordingly . We evaluated the effectiveness for each of our networks by accuracy fluctuation as the number of learnable weight parameters or FLOPs changes . For comparison , we counted total FLOPs with summation of the number of multiplications , additions and subtractions performed during the inference . Unless mentioned , we followed the default experimental setting as 128 batch size , 200 training epochs , 0.1 initial learning rate where 0.94 is multiplied per 2 epochs , and 0.9 momentum with 5e-4 weight decay value . In all the experiments , the model accuracy was obtained by taking an average of Top-1 accuracy values from three independent training results . Algorithm 1 A new pointwise convolution using fast DWHT algorithm Input : 4D input features X ( B ×N ×H ×W ) , output channel M 1 : n← log2N 2 : if N < M then 3 : ZeroPad1D ( X , axis=1 ) . pad zeros along the channel axis 4 : end if 5 : for i← 1 to n do 6 : e← X [ : , : : 2 , : , : ] 7 : o← X [ : , 1 : : 2 , : , : ] 8 : X [ : , : N/2 , : , : ] ← e+ o 9 : X [ : , N/2 : , : , : ] ← e− o 10 : end for 11 : if N > M then 12 : X ← X [ : , : M , : , : ] 13 : end if
This paper presents a new pointwise convolution (PC) method which applies conventional transforms such as DWHT and DCT. The proposed method aims to reduce the computational complexity of CNNs without degrading the performance. Compared with the original PC layer, the DWHT/DCT-based methods do not require any learnable parameters and reduce the floating-point operations. The paper also empirically optimizes the networks by removing ReLU after the proposed PC layers and using conventional transforms for high-level features extraction. Experiments on CIFAR100 show that the DWHT-based model improves the accuracy and reduces parameters and FLOPs compared with MobileNet-V1.
SP:4706017e6f8b958c7d0825fed98b285ea2994b59
Uncertainty-Aware Prediction for Graph Neural Networks
1 INTRODUCTION . Inherent uncertainties introduced by different root causes have emerged as serious hurdles to find effective solutions for real world problems . Critical safety concerns have been brought due to lack of considering diverse causes of uncertainties , resulting in high risk due to misinterpretation of uncertainties ( e.g. , misdetection or misclassification of an object by an autonomous vehicle ) . Graph neural networks ( GNNs ) ( Kipf & Welling , 2016 ; Veličković et al. , 2018 ) have gained tremendous attention in the data science community . Despite their superior performance in semi-supervised node classification and/or regression , they didn ’ t allow to deal with various types of uncertainties . Predictive uncertainty estimation ( Malinin & Gales , 2018 ) using Bayesian NNs ( BNNs ) has been explored for classification prediction or regression in the computer vision applications , with wellknown uncertainties , aleatoric and epistemic uncertainties . Aleatoric uncertainty only considers data uncertainty derived from statistical randomness ( e.g. , inherent noises in observations ) while epistemic uncertainty indicates model uncertainty due to limited knowledge or ignorance in collected data . On the other hand , in the belief or evidence theory , Subjective Logic ( SL ) ( Josang et al. , 2018 ) considered vacuity ( or lack of evidence ) as uncertainty in an subjective opinion . Recently other uncertainties such as dissonance , consonance , vagueness , and monosonance ( Josang et al. , 2018 ) are also introduced . This work is the first that considers multidimensional uncertainty types in both DL and belief theory domains to predict node classification and out-of-distribution ( OOD ) detection . To this end , we incorporate the multidimensional uncertainty , including vacuity , dissonance , aleatoric uncertainty , and epistemic uncertainty in selecting test nodes for Bayesian DL in GNNs . We perform semi-supervised node classification and OOD detection based on GNNs . By leveraging the modeling and learning capability of GNNs and considering multidimensional uncertainties in SL , we propose a Bayesian DL framework that allows simultaneous estimation of different uncertainty types associated with the predicted class probabilities of the test nodes generated by GNNs . We treat the predictions of a Subjective Bayesian GNN ( S-BGNN ) as nodes ’ subjective opinions in a graph modeled as Dirichlet distributions on the class probabilities , and learn the S-BGNN model by collecting the evidence from the given labels of the training nodes ( see Figure 1 ) . This work has the following key contributions : • A Subjective Bayesian framework to predictive uncertainty estimation for GNNs . Our pro- posed framework directly predicts subjective multinomial opinions of the test nodes in a graph , with the opinions following Dirichlet distributions with each belief probability as a class probability . Our proposed framework is a generative model , so it cal be highly applicable across all GNNs and allows simultaneously estimating different types of associated uncertainties with the class probabilities . • Efficient approximate inference algorithms : We propose a Graph-based Kernel Dirichlet distribution Estimation ( GKDE ) method to reduce error in predicting Dirichlet distribution . We designed an iterative knowledge distillation algorithm that treats a deterministic GNN as a teacher network while considering our proposed Subjective Bayesian GNN model ( a realization of our proposed framework for a specific GNN ) as a distilled network . This allows the expected class probabilities based on the predicted Dirichlet distributions ( i.e. , outputs of our trained Bayesian model ) to match the predicted class probabilities of the deterministic GNN model , along with uncertainty estimated in the predictions . • Comprehensive experiments for the validation of the performance of our proposed framework . Based on six real graph datasets , we compared the performance of our propose framework with that of other competitive DL algorithms . For a fair comparison , we tweaked the DL algorithms to consider various uncertainty types in predicted decisions . 2 RELATED WORK . Epistemic Uncertainty in Bayesian Deep Learning ( BDL ) : Machine/deep learning ( M/DL ) research mainly considered aleatoric uncertainty ( AU ) and epistemic uncertainty ( EU ) using BNNs for computer vision applications . AU consists of homoscedastic uncertainty ( i.e. , constant errors for different inputs ) and heteroscedastic uncertainty ( i.e. , different errors for different inputs ) ( Gal , 2016 ) . A BDL framework was presented to estimate both AU and DU simultaneously in regression settings ( e.g. , depth regression ) and classification settings ( e.g. , semantic segmentation ) ( Kendall & Gal , 2017 ) . Later , a new type of uncertainty , called distributional uncertainty ( DU ) , is defined based on distributional mismatch between the test and training data distributions ( Malinin & Gales , 2018 ) . Dropout variational inference ( Gal & Ghahramani , 2016 ) is used as one of key approximate inference techniques in BNNs . Other methods ( Eswaran et al. , 2017 ; Zhang et al. , 2018 ) measure overall uncertainty in node classification but didn ’ t consider uncertainty decomposition and GNNs . Uncertainty Quantification in Belief/Evidence Theory : In the belief/evidence theory domain , uncertainty reasoning has been substantially explored , such as Fuzzy Logic ( De Silva , 2018 ) , DempsterShafer Theory ( DST ) ( Sentz et al. , 2002 ) , or Subjective Logic ( SL ) ( Jøsang , 2016 ) . Belief theory focuses on reasoning of inherent uncertainty in information resulting from unreliable , incomplete , deceptive , and/or conflicting evidence . SL considered uncertainty in subjective opinions in terms of vacuity ( i.e. , lack of evidence ) and vagueness ( i.e. , failing in discriminating a belief state ) ( Jøsang , 2016 ) . Recently , other uncertainty types have been studied , such as dissonance ( due to conflicting evidence ) and consonance ( due to evidence supporting composite states ) ( Josang et al. , 2018 ) . In deep NNs , SL is considered to train a deterministic NN for supervised classification in computer vision applications ( Sensoy et al. , 2018 ) . However , they didn ’ t consider a generic way of estimating multidimensional uncertainty using Bayesian DL for GNNs used for the applications in graph data . 3 PROPOSED APPROACH . Now we define the problem of uncertainty-aware semi-supervised node classification and then present a Bayesian GNN framework to address the problem . 3.1 PROBLEM DEFINITION . Given an input graph G = ( V , E , r , yL ) , where V = { 1 , · · · , N } is a ground set of nodes , E ⊆ V×V is a ground set of edges , r = [ r1 , · · · , rN ] T ∈ RN×d is a node-level feature matrix , ri ∈ Rd is the feature vector of node i , yL = { yi | i ∈ L } are the labels of the training nodes L ⊂ V , and yi ∈ { 1 , . . . , K } is the class label of node i . We aim to predict : ( 1 ) the class probabilities of the testing nodes : pV\L = { pi ∈ [ 0 , 1 ] K | i ∈ V \ L } ; and ( 2 ) the associated multidimensional uncertainty estimates introduced by different root causes : uV\L = { ui ∈ [ 0 , 1 ] m | i ∈ V \ L } , where pi , k is the probability that the class label yi = k and m is the total number of uncertainty types . 3.2 MULTIDIMENSIONAL UNCERTAINTY QUANTIFICATION . Multiple uncertainty types may be estimated , such as aleatoric uncertainty , epistemic uncertainty , vacuity , dissonance , among others . The estimation of the first two types of uncertainty relies on the design of an appropriate Bayesian DL model with parameters , θ . Following ( Gal , 2016 ) , node i ’ s aleatoric uncertainty is : Aleatoric [ pi ] = EProb ( θ|G ) [ H ( yi|r ; θ ) ] , where H ( · ) is Shannon ’ s entropy of Prob ( pi|r ; θ ) . The epistemic uncertainty of node i is estimated by : Epistemic [ pi ] = H [ EProb ( θ|G ) [ ( yi|r ; θ ) ] ] − EProb ( θ|G ) [ H ( yi|r ; θ ) ] ( 1 ) where the first term indicates entropy ( or total uncertainty ) . Vacuity and dissonance can be estimated based on the subjective opinion for each testing node i ( Josang et al. , 2018 ) . Denote i ’ s subjective opinion as [ bi1 , · · · , biK , vi ] , where bik ( ≥ 0 ) is the belief mass of the k-th category , vi ( ≥ 0 ) is the uncertainty mass ( i.e. , vacuity ) , and K is the total number of categories , where ∑K k=1 bik + vi = 1. i ’ s dissonance is obtained by : ω ( bi ) = K∑ k=1 ( bik∑Kj=1 , j 6=k bijBal ( bij , bik ) ∑K j=1 , j 6=k bij ) , ( 2 ) where the relative mass balance between a pair of belief masses bij and bik is expressed by Bal ( bij , bik ) = 1 − |bij − bik|/ ( bij + bik ) . To develop a Bayesian GNNs framework that predicts multiple types of uncertainty , we estimate vacuity and dissonance using a Bayesian model . In SL , a multinomial opinion follows a Dirichlet distribution , Dir ( pi|αi ) , where αi ∈ [ 1 , ∞ ] K represents the distribution parameters . Given Si = ∑K k=1 αik , belief mass bi and uncertainty mass vi can be obtained by bik = ( αik − 1 ) /Si and vi = K/Si . 3.3 PROPOSED BAYESIAN DEEP LEARNING FRAMEWORK . Let p = [ p1 , . . . , pN ] > ∈ RN×K denote the class probabilities of the node in V , where pi = [ pi1 , . . . , piK ] > refers to the class probabilities of a specific node i . As shown in Figure 1 , our proposed Bayesian GNN framework can be described by the generative process : • Sample θ from a predefined prior distribution , i.e. , N ( 0 , I ) . • For each node i ∈ V : ( 1 ) Sample the class probabilities pi from a Dirichlet distribution : Dir ( pi|αi ) , where αi = fi ( r ; θ ) is parameterized by a GNN network α = f ( r ; θ ) : RN×d → [ 1 , ∞ ] N×K that takes the attribute matrix r as input and directly outputs all the node-level Dirichlet parameters α = [ α1 , · · · , αN ] , and θ refer to the hyper-parameters of the GNN network ; and ( 2 ) Sample yi ∼ Cat ( yi|pi ) , a categorical distribution on pi . In this design , the graph dependencies among the class labels in yL and yV\L are modeled via the GNN network f ( r ; θ ) . Our proposed framework is different from the traditional Bayesian GNN network ( Zhang et al. , 2018 ) in that the output of the former are the parameters of node-level Dirichlet distributions ( α ) , but the output of the latter are directly node-level class probabilities ( p ) . The conditional probability of p , Prob ( p|r ; θ ) , can be obtained by : Prob ( p|r ; θ ) = ∏N i=1 Dir ( pi|αi ) , αi = fi ( r ; θ ) ( 3 ) where the Dirichlet probability function Dir ( pi|αi ) is defined by : Dir ( pi|αi ) = Γ ( Si ) ∏K k=1 Γ ( αik ) ∏K k=1 pαik−1ik , Si = ∑K k=1 αik ( 4 ) Based on the proposed Bayesian GNN framework , the joint probability of y conditioned on the input graph G and the node-level feature matrix r can be estimated by : Prob ( y|r ; G ) = ∫ ∫ Prob ( y|p ) Prob ( p|r ; θ ) Prob ( θ|G ) dpdθ , ( 5 ) where Prob ( θ|G ) is the posterior probability of the parameters θ conditioned on the input graph G , which are estimated in Sections 3.4 and 3.6 . The aleatoric uncertainty and the epistemic uncertainty can be estimated using the equations described in Section 3.2 . The vacuity associated with the class probabilities ( pi ) of node i can be estimated by : Vacuity ( pi ) = EProb ( θ|G ) [ vi ] = EProb ( θ|G ) [ K/ ∑K k=1 αik ] . The dissonance of node i is estimated as : Disso . [ pi ] = EProb ( θ|G ) [ ω ( bi ) ] , where ω ( bi ) is defined in Eq . ( 2 ) .
This paper proposes to model various uncertainty measures in Graph Convolutional Networks (GCN) by Bayesian MC Dropout. Compared to existing Bayesian GCN methods, this work stands out in two aspects: 1) in terms of prediction, it considers multiple uncertainty measures including aleatoric, epistemic, vacuity and dissonance (see paper for definitions); 2) in terms of generative modeling, the GCN first predicts the parameters of a Dirichlet distribution, and then the class probabilities are sampled from the Dirichlet. Training/inference roughly follows MC Dropout, with two additional priors/teachers: 1) the prediction task is guided by a deterministic teacher network (via KL(model || teacher)), and 2) the Dirichlet parameters are guided by a kernel-based prior (via KL(model || prior)). Experiments on six datasets showed superior performance in terms of the end prediction task, as well as better uncertainty modeling in terms of out-of-distribution detection.
SP:63ad3be1dae7ede5c02a847304072c1cbc91b1cb
Uncertainty-Aware Prediction for Graph Neural Networks
1 INTRODUCTION . Inherent uncertainties introduced by different root causes have emerged as serious hurdles to find effective solutions for real world problems . Critical safety concerns have been brought due to lack of considering diverse causes of uncertainties , resulting in high risk due to misinterpretation of uncertainties ( e.g. , misdetection or misclassification of an object by an autonomous vehicle ) . Graph neural networks ( GNNs ) ( Kipf & Welling , 2016 ; Veličković et al. , 2018 ) have gained tremendous attention in the data science community . Despite their superior performance in semi-supervised node classification and/or regression , they didn ’ t allow to deal with various types of uncertainties . Predictive uncertainty estimation ( Malinin & Gales , 2018 ) using Bayesian NNs ( BNNs ) has been explored for classification prediction or regression in the computer vision applications , with wellknown uncertainties , aleatoric and epistemic uncertainties . Aleatoric uncertainty only considers data uncertainty derived from statistical randomness ( e.g. , inherent noises in observations ) while epistemic uncertainty indicates model uncertainty due to limited knowledge or ignorance in collected data . On the other hand , in the belief or evidence theory , Subjective Logic ( SL ) ( Josang et al. , 2018 ) considered vacuity ( or lack of evidence ) as uncertainty in an subjective opinion . Recently other uncertainties such as dissonance , consonance , vagueness , and monosonance ( Josang et al. , 2018 ) are also introduced . This work is the first that considers multidimensional uncertainty types in both DL and belief theory domains to predict node classification and out-of-distribution ( OOD ) detection . To this end , we incorporate the multidimensional uncertainty , including vacuity , dissonance , aleatoric uncertainty , and epistemic uncertainty in selecting test nodes for Bayesian DL in GNNs . We perform semi-supervised node classification and OOD detection based on GNNs . By leveraging the modeling and learning capability of GNNs and considering multidimensional uncertainties in SL , we propose a Bayesian DL framework that allows simultaneous estimation of different uncertainty types associated with the predicted class probabilities of the test nodes generated by GNNs . We treat the predictions of a Subjective Bayesian GNN ( S-BGNN ) as nodes ’ subjective opinions in a graph modeled as Dirichlet distributions on the class probabilities , and learn the S-BGNN model by collecting the evidence from the given labels of the training nodes ( see Figure 1 ) . This work has the following key contributions : • A Subjective Bayesian framework to predictive uncertainty estimation for GNNs . Our pro- posed framework directly predicts subjective multinomial opinions of the test nodes in a graph , with the opinions following Dirichlet distributions with each belief probability as a class probability . Our proposed framework is a generative model , so it cal be highly applicable across all GNNs and allows simultaneously estimating different types of associated uncertainties with the class probabilities . • Efficient approximate inference algorithms : We propose a Graph-based Kernel Dirichlet distribution Estimation ( GKDE ) method to reduce error in predicting Dirichlet distribution . We designed an iterative knowledge distillation algorithm that treats a deterministic GNN as a teacher network while considering our proposed Subjective Bayesian GNN model ( a realization of our proposed framework for a specific GNN ) as a distilled network . This allows the expected class probabilities based on the predicted Dirichlet distributions ( i.e. , outputs of our trained Bayesian model ) to match the predicted class probabilities of the deterministic GNN model , along with uncertainty estimated in the predictions . • Comprehensive experiments for the validation of the performance of our proposed framework . Based on six real graph datasets , we compared the performance of our propose framework with that of other competitive DL algorithms . For a fair comparison , we tweaked the DL algorithms to consider various uncertainty types in predicted decisions . 2 RELATED WORK . Epistemic Uncertainty in Bayesian Deep Learning ( BDL ) : Machine/deep learning ( M/DL ) research mainly considered aleatoric uncertainty ( AU ) and epistemic uncertainty ( EU ) using BNNs for computer vision applications . AU consists of homoscedastic uncertainty ( i.e. , constant errors for different inputs ) and heteroscedastic uncertainty ( i.e. , different errors for different inputs ) ( Gal , 2016 ) . A BDL framework was presented to estimate both AU and DU simultaneously in regression settings ( e.g. , depth regression ) and classification settings ( e.g. , semantic segmentation ) ( Kendall & Gal , 2017 ) . Later , a new type of uncertainty , called distributional uncertainty ( DU ) , is defined based on distributional mismatch between the test and training data distributions ( Malinin & Gales , 2018 ) . Dropout variational inference ( Gal & Ghahramani , 2016 ) is used as one of key approximate inference techniques in BNNs . Other methods ( Eswaran et al. , 2017 ; Zhang et al. , 2018 ) measure overall uncertainty in node classification but didn ’ t consider uncertainty decomposition and GNNs . Uncertainty Quantification in Belief/Evidence Theory : In the belief/evidence theory domain , uncertainty reasoning has been substantially explored , such as Fuzzy Logic ( De Silva , 2018 ) , DempsterShafer Theory ( DST ) ( Sentz et al. , 2002 ) , or Subjective Logic ( SL ) ( Jøsang , 2016 ) . Belief theory focuses on reasoning of inherent uncertainty in information resulting from unreliable , incomplete , deceptive , and/or conflicting evidence . SL considered uncertainty in subjective opinions in terms of vacuity ( i.e. , lack of evidence ) and vagueness ( i.e. , failing in discriminating a belief state ) ( Jøsang , 2016 ) . Recently , other uncertainty types have been studied , such as dissonance ( due to conflicting evidence ) and consonance ( due to evidence supporting composite states ) ( Josang et al. , 2018 ) . In deep NNs , SL is considered to train a deterministic NN for supervised classification in computer vision applications ( Sensoy et al. , 2018 ) . However , they didn ’ t consider a generic way of estimating multidimensional uncertainty using Bayesian DL for GNNs used for the applications in graph data . 3 PROPOSED APPROACH . Now we define the problem of uncertainty-aware semi-supervised node classification and then present a Bayesian GNN framework to address the problem . 3.1 PROBLEM DEFINITION . Given an input graph G = ( V , E , r , yL ) , where V = { 1 , · · · , N } is a ground set of nodes , E ⊆ V×V is a ground set of edges , r = [ r1 , · · · , rN ] T ∈ RN×d is a node-level feature matrix , ri ∈ Rd is the feature vector of node i , yL = { yi | i ∈ L } are the labels of the training nodes L ⊂ V , and yi ∈ { 1 , . . . , K } is the class label of node i . We aim to predict : ( 1 ) the class probabilities of the testing nodes : pV\L = { pi ∈ [ 0 , 1 ] K | i ∈ V \ L } ; and ( 2 ) the associated multidimensional uncertainty estimates introduced by different root causes : uV\L = { ui ∈ [ 0 , 1 ] m | i ∈ V \ L } , where pi , k is the probability that the class label yi = k and m is the total number of uncertainty types . 3.2 MULTIDIMENSIONAL UNCERTAINTY QUANTIFICATION . Multiple uncertainty types may be estimated , such as aleatoric uncertainty , epistemic uncertainty , vacuity , dissonance , among others . The estimation of the first two types of uncertainty relies on the design of an appropriate Bayesian DL model with parameters , θ . Following ( Gal , 2016 ) , node i ’ s aleatoric uncertainty is : Aleatoric [ pi ] = EProb ( θ|G ) [ H ( yi|r ; θ ) ] , where H ( · ) is Shannon ’ s entropy of Prob ( pi|r ; θ ) . The epistemic uncertainty of node i is estimated by : Epistemic [ pi ] = H [ EProb ( θ|G ) [ ( yi|r ; θ ) ] ] − EProb ( θ|G ) [ H ( yi|r ; θ ) ] ( 1 ) where the first term indicates entropy ( or total uncertainty ) . Vacuity and dissonance can be estimated based on the subjective opinion for each testing node i ( Josang et al. , 2018 ) . Denote i ’ s subjective opinion as [ bi1 , · · · , biK , vi ] , where bik ( ≥ 0 ) is the belief mass of the k-th category , vi ( ≥ 0 ) is the uncertainty mass ( i.e. , vacuity ) , and K is the total number of categories , where ∑K k=1 bik + vi = 1. i ’ s dissonance is obtained by : ω ( bi ) = K∑ k=1 ( bik∑Kj=1 , j 6=k bijBal ( bij , bik ) ∑K j=1 , j 6=k bij ) , ( 2 ) where the relative mass balance between a pair of belief masses bij and bik is expressed by Bal ( bij , bik ) = 1 − |bij − bik|/ ( bij + bik ) . To develop a Bayesian GNNs framework that predicts multiple types of uncertainty , we estimate vacuity and dissonance using a Bayesian model . In SL , a multinomial opinion follows a Dirichlet distribution , Dir ( pi|αi ) , where αi ∈ [ 1 , ∞ ] K represents the distribution parameters . Given Si = ∑K k=1 αik , belief mass bi and uncertainty mass vi can be obtained by bik = ( αik − 1 ) /Si and vi = K/Si . 3.3 PROPOSED BAYESIAN DEEP LEARNING FRAMEWORK . Let p = [ p1 , . . . , pN ] > ∈ RN×K denote the class probabilities of the node in V , where pi = [ pi1 , . . . , piK ] > refers to the class probabilities of a specific node i . As shown in Figure 1 , our proposed Bayesian GNN framework can be described by the generative process : • Sample θ from a predefined prior distribution , i.e. , N ( 0 , I ) . • For each node i ∈ V : ( 1 ) Sample the class probabilities pi from a Dirichlet distribution : Dir ( pi|αi ) , where αi = fi ( r ; θ ) is parameterized by a GNN network α = f ( r ; θ ) : RN×d → [ 1 , ∞ ] N×K that takes the attribute matrix r as input and directly outputs all the node-level Dirichlet parameters α = [ α1 , · · · , αN ] , and θ refer to the hyper-parameters of the GNN network ; and ( 2 ) Sample yi ∼ Cat ( yi|pi ) , a categorical distribution on pi . In this design , the graph dependencies among the class labels in yL and yV\L are modeled via the GNN network f ( r ; θ ) . Our proposed framework is different from the traditional Bayesian GNN network ( Zhang et al. , 2018 ) in that the output of the former are the parameters of node-level Dirichlet distributions ( α ) , but the output of the latter are directly node-level class probabilities ( p ) . The conditional probability of p , Prob ( p|r ; θ ) , can be obtained by : Prob ( p|r ; θ ) = ∏N i=1 Dir ( pi|αi ) , αi = fi ( r ; θ ) ( 3 ) where the Dirichlet probability function Dir ( pi|αi ) is defined by : Dir ( pi|αi ) = Γ ( Si ) ∏K k=1 Γ ( αik ) ∏K k=1 pαik−1ik , Si = ∑K k=1 αik ( 4 ) Based on the proposed Bayesian GNN framework , the joint probability of y conditioned on the input graph G and the node-level feature matrix r can be estimated by : Prob ( y|r ; G ) = ∫ ∫ Prob ( y|p ) Prob ( p|r ; θ ) Prob ( θ|G ) dpdθ , ( 5 ) where Prob ( θ|G ) is the posterior probability of the parameters θ conditioned on the input graph G , which are estimated in Sections 3.4 and 3.6 . The aleatoric uncertainty and the epistemic uncertainty can be estimated using the equations described in Section 3.2 . The vacuity associated with the class probabilities ( pi ) of node i can be estimated by : Vacuity ( pi ) = EProb ( θ|G ) [ vi ] = EProb ( θ|G ) [ K/ ∑K k=1 αik ] . The dissonance of node i is estimated as : Disso . [ pi ] = EProb ( θ|G ) [ ω ( bi ) ] , where ω ( bi ) is defined in Eq . ( 2 ) .
The authors proposed a Bayesian graph neural network framework for node classification. The proposed models outperformed the baselines in six node classification tasks. The main contribution is to evaluate various uncertainty measures for the uncertainty analysis of Bayesian graph neural networks. The authors show that vacuity and aleatoric measure are important to detect out-of-distribution and the dissonance uncertainty plays a key role for improving performance.
SP:63ad3be1dae7ede5c02a847304072c1cbc91b1cb